id | url | title | text | topic | section | sublist
|---|---|---|---|---|---|---|
11036 | https://en.wikipedia.org/wiki/Fin | Fin | A fin is a thin component or appendage attached to a larger body or structure. Fins typically function as foils that produce lift or thrust, or provide the ability to steer or stabilize motion while traveling in water, air, or other fluids. Fins are also used to increase surface areas for heat transfer purposes, or simply as ornamentation.
Fins first evolved on fish as a means of locomotion. Fish fins are used to generate thrust and control the subsequent motion. Fish and other aquatic animals, such as cetaceans, actively propel and steer themselves with pectoral and tail fins. As they swim, they use other fins, such as dorsal and anal fins, to achieve stability and refine their maneuvering.
The fins on the tails of cetaceans, ichthyosaurs, metriorhynchids, mosasaurs and plesiosaurs are called flukes.
Thrust generation
Foil-shaped fins generate thrust when moved; the lift of the fin sets water or air in motion and pushes the fin in the opposite direction. Aquatic animals get significant thrust by moving fins back and forth in water. Often the tail fin is used, but some aquatic animals generate thrust from pectoral fins. Fins can also generate thrust if they are rotated in air or water. Turbines and propellers (and sometimes fans and pumps) use a number of rotating fins, also called foils, wings, arms or blades. Propellers use the fins to translate torque into thrust, thus propelling an aircraft or ship. Turbines work in reverse, using the lift of the blades to generate torque and power from moving gases or water.
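A rough way to see where propulsive thrust comes from is simple momentum theory: the foil accelerates a stream of fluid, and the reaction to that momentum change pushes the foil the other way. The sketch below applies this idea; the density, disk area, and speeds are illustrative assumptions, not values from the article.

```python
def momentum_thrust(rho, area, v_in, v_out):
    """Rough actuator-disk estimate of thrust from a rotating foil (propeller).

    rho   : fluid density in kg/m^3 (illustrative value below)
    area  : swept disk area in m^2
    v_in  : fluid speed far upstream, m/s
    v_out : fluid speed far downstream, after acceleration by the foil, m/s
    """
    v_disk = 0.5 * (v_in + v_out)          # average speed through the disk
    mass_flow = rho * area * v_disk        # kg/s of fluid passing the disk
    return mass_flow * (v_out - v_in)      # thrust = mass flow * velocity change (N)

# Illustrative numbers only: a 0.5 m^2 disk in water (1000 kg/m^3)
print(momentum_thrust(rho=1000.0, area=0.5, v_in=2.0, v_out=3.0))  # ~1250 N
```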
Cavitation can be a problem with high power applications, resulting in damage to propellers or turbines, as well as noise and loss of power. Cavitation occurs when negative pressure causes bubbles (cavities) to form in a liquid, which then promptly and violently collapse. It can cause significant damage and wear. Cavitation damage can also occur to the tail fins of powerful swimming marine animals, such as dolphins and tuna. Cavitation is more likely to occur near the surface of the ocean, where the ambient water pressure is relatively low. Even if they have the power to swim faster, dolphins may have to restrict their speed because collapsing cavitation bubbles on their tail are too painful. Cavitation also slows tuna, but for a different reason. Unlike dolphins, these fish do not feel the bubbles, because they have bony fins without nerve endings. Nevertheless, they cannot swim faster because the cavitation bubbles create a vapor film around their fins that limits their speed. Lesions have been found on tuna that are consistent with cavitation damage.
Scombrid fishes (tuna, mackerel and bonito) are particularly high-performance swimmers. Along the margin at the rear of their bodies is a line of small rayless, non-retractable fins, known as finlets. There has been much speculation about the function of these finlets. Research done in 2000 and 2001 by Nauen and Lauder indicated that "the finlets have a hydrodynamic effect on local flow during steady swimming" and that "the most posterior finlet is oriented to redirect flow into the developing tail vortex, which may increase thrust produced by the tail of swimming mackerel".
Fish use multiple fins, so it is possible that a given fin can have a hydrodynamic interaction with another fin. In particular, the fins immediately upstream of the caudal (tail) fin may be proximate fins that can directly affect the flow dynamics at the caudal fin. In 2011, researchers using volumetric imaging techniques were able to generate "the first instantaneous three-dimensional views of wake structures as they are produced by freely swimming fishes". They found that "continuous tail beats resulted in the formation of a linked chain of vortex rings" and that "the dorsal and anal fin wakes are rapidly entrained by the caudal fin wake, approximately within the timeframe of a subsequent tail beat".
Motion control
Once motion has been established, the motion itself can be controlled with the use of other fins. Boats control direction (yaw) with fin-like rudders, and roll with stabilizer and keel fins. Airplanes achieve similar results with small specialised fins that change the shape of their wings and tail fins.
Stabilising fins are used as fletching on arrows and some darts, and at the rear of some bombs, missiles, rockets and self-propelled torpedoes. These are typically planar and shaped like small wings, although grid fins are sometimes used. Static fins have also been used for one satellite, GOCE.
Temperature regulation
Engineering fins are also used as heat transfer fins to regulate temperature in heat sinks or fin radiators.
Ornamentation and other uses
In biology, fins can have an adaptive significance as sexual ornaments. During courtship, the female cichlid, Pelvicachromis taeniatus, displays a large and visually arresting purple pelvic fin. "The researchers found that males clearly preferred females with a larger pelvic fin and that pelvic fins grew in a more disproportionate way than other fins on female fish."
Reshaping human feet with swim fins, rather like the tail fin of a fish, adds thrust and efficiency to the kicks of a swimmer or underwater diver. Surfboard fins provide surfers with a means to maneuver and control their boards. Contemporary surfboards often have a centre fin and two cambered side fins.
The bodies of reef fishes are often shaped differently from open water fishes. Open water fishes are usually built for speed, streamlined like torpedoes to minimise friction as they move through the water. Reef fish operate in the relatively confined spaces and complex underwater landscapes of coral reefs. Here manoeuvrability is more important than straight-line speed, so coral reef fish have developed bodies which optimize their ability to dart and change direction. They outwit predators by dodging into fissures in the reef or playing hide and seek around coral heads.
The pectoral and pelvic fins of many reef fish, such as butterflyfish, damselfish and angelfish, have evolved so they can act as brakes and allow complex maneuvers. Many reef fish, such as butterflyfish, damselfish and angelfish, have evolved bodies which are deep and laterally compressed like a pancake, and will fit into fissures in rocks. Their pelvic and pectoral fins are designed differently, so they act together with the flattened body to optimise maneuverability. Some fishes, such as puffer fish, filefish and trunkfish, rely on pectoral fins for swimming and hardly use tail fins at all.
Evolution
There is an old theory, proposed by anatomist Carl Gegenbaur, which has been often disregarded in science textbooks, "that fins and (later) limbs evolved from the gills of an extinct vertebrate". Gaps in the fossil record had not allowed a definitive conclusion. In 2009, researchers from the University of Chicago found evidence that the "genetic architecture of gills, fins and limbs is the same", and that "the skeleton of any appendage off the body of an animal is probably patterned by the developmental genetic program that we have traced back to formation of gills in sharks". Recent studies support the idea that gill arches and paired fins are serially homologous and thus that fins may have evolved from gill tissues.
Fish are the ancestors of all mammals, reptiles, birds and amphibians. In particular, terrestrial tetrapods (four-legged animals) evolved from fish and made their first forays onto land 400 million years ago. They used paired pectoral and pelvic fins for locomotion. The pectoral fins developed into forelegs (arms in the case of humans) and the pelvic fins developed into hind legs. Much of the genetic machinery that builds a walking limb in a tetrapod is already present in the swimming fin of a fish.
In 2011, researchers at Monash University in Australia used primitive but still living lungfish "to trace the evolution of pelvic fin muscles to find out how the load-bearing hind limbs of the tetrapods evolved." Further research at the University of Chicago found bottom-walking lungfishes had already evolved characteristics of the walking gaits of terrestrial tetrapods.
In a classic example of convergent evolution, the pectoral limbs of pterosaurs, birds and bats further evolved along independent paths into flying wings. Even with flying wings there are many similarities with walking legs, and core aspects of the genetic blueprint of the pectoral fin have been retained.
About 200 million years ago the first mammals appeared. A group of these mammals started returning to the sea about 52 million years ago, thus completing a circle. These are the cetaceans (whales, dolphins and porpoises). Recent DNA analysis suggests that cetaceans evolved from within the even-toed ungulates, and that they share a common ancestor with the hippopotamus. About 23 million years ago another group of bearlike land mammals started returning to the sea. These were the pinnipeds (seals). What had become walking limbs in cetaceans and seals evolved further, independently and in a reverse form of convergent evolution, back into new forms of swimming fins. The forelimbs became flippers, while the hind limbs were either lost (cetaceans) or also modified into flippers (pinnipeds). In cetaceans, the tail gained two fins, called a fluke, an entirely new organ with no terrestrial precursor. Fish tails are usually vertical and move from side to side. Cetacean flukes are horizontal and move up and down, because cetacean spines bend the same way as in other mammals.
Ichthyosaurs are ancient reptiles that resembled dolphins. They first appeared about 245 million years ago and disappeared about 90 million years ago.
"This sea-going reptile with terrestrial ancestors converged so strongly on fishes that it actually evolved a dorsal fin and tail in just the right place and with just the right hydrological design. These structures are all the more remarkable because they evolved from nothing — the ancestral terrestrial reptile had no hump on its back or blade on its tail to serve as a precursor."
The biologist Stephen Jay Gould said the ichthyosaur was his favorite example of convergent evolution.
Robotics
The use of fins for the propulsion of aquatic animals can be remarkably effective. It has been calculated that some fish can achieve a propulsive efficiency greater than 90%. Fish can accelerate and maneuver much more effectively than boats or submarines, and produce less water disturbance and noise. This has led to biomimetic studies of underwater robots which attempt to emulate the locomotion of aquatic animals. An example is the Robot Tuna built by the Institute of Field Robotics to analyze and mathematically model thunniform motion. In 2005, the Sea Life London Aquarium displayed three robotic fish created by the computer science department at the University of Essex. The fish were designed to be autonomous, swimming around and avoiding obstacles like real fish. Their creator claimed that he was trying to combine "the speed of tuna, acceleration of a pike, and the navigating skills of an eel".
The AquaPenguin, developed by Festo of Germany, copies the streamlined shape and propulsion by front flippers of penguins. Festo also developed AquaRay, AquaJelly and AiraCuda, respectively emulating the locomotion of manta rays, jellyfish and barracuda.
In 2004, Hugh Herr at MIT prototyped a biomechatronic robotic fish with a living actuator by surgically transplanting muscles from frog legs to the robot and then making the robot swim by pulsing the muscle fibers with electricity.
Robotic fish offer some research advantages, such as the ability to examine part of a fish design in isolation from the rest, and variance of a single parameter, such as flexibility or direction. Researchers can directly measure forces more easily than in live fish. "Robotic devices also facilitate three-dimensional kinematic studies and correlated hydrodynamic analyses, as the location of the locomotor surface can be known accurately. And, individual components of a natural motion (such as outstroke vs. instroke of a flapping appendage) can be programmed separately, which is certainly difficult to achieve when working with a live animal."
| Biology and health sciences | External anatomy and regions of the body | Biology |
11042 | https://en.wikipedia.org/wiki/Fat | Fat | In nutrition, biology, and chemistry, fat usually means any ester of fatty acids, or a mixture of such compounds, most commonly those that occur in living beings or in food.
The term often refers specifically to triglycerides (triple esters of glycerol), that are the main components of vegetable oils and of fatty tissue in animals; or, even more narrowly, to triglycerides that are solid or semisolid at room temperature, thus excluding oils. The term may also be used more broadly as a synonym of lipid—any substance of biological relevance, composed of carbon, hydrogen, or oxygen, that is insoluble in water but soluble in non-polar solvents. In this sense, besides the triglycerides, the term would include several other types of compounds like mono- and diglycerides, phospholipids (such as lecithin), sterols (such as cholesterol), waxes (such as beeswax), and free fatty acids, which are usually present in human diet in smaller amounts.
Fats are one of the three main macronutrient groups in human diet, along with carbohydrates and proteins, and the main components of common food products like milk, butter, tallow, lard, salt pork, and cooking oils. They are a major and dense source of food energy for many animals and play important structural and metabolic functions in most living beings, including energy storage, waterproofing, and thermal insulation. The human body can produce the fat it requires from other food ingredients, except for a few essential fatty acids that must be included in the diet. Dietary fats are also the carriers of some flavor and aroma ingredients and vitamins that are not water-soluble.
Biological importance
In humans and many animals, fats serve both as energy sources and as stores for energy in excess of what the body needs immediately. Each gram of fat when burned or metabolized releases about nine food calories (37 kJ = 8.8 kcal).
Fats are also sources of essential fatty acids, an important dietary requirement. Vitamins A, D, E, and K are fat-soluble, meaning they can only be digested, absorbed, and transported in conjunction with fats.
Fats play a vital role in maintaining healthy skin and hair, insulating body organs against shock, maintaining body temperature, and promoting healthy cell function. Fat also serves as a useful buffer against a host of diseases. When a particular substance, whether chemical or biotic, reaches unsafe levels in the bloodstream, the body can effectively dilute—or at least maintain equilibrium of—the offending substance by storing it in new fat tissue. This helps to protect vital organs, until such time as the offending substance can be metabolized or removed from the body by such means as excretion, urination, accidental or intentional bloodletting, sebum excretion, and hair growth.
Adipose tissue
In animals, adipose tissue, or fatty tissue is the body's means of storing metabolic energy over extended periods of time. Adipocytes (fat cells) store fat derived from the diet and from liver metabolism. Under energy stress these cells may degrade their stored fat to supply fatty acids and also glycerol to the circulation. These metabolic activities are regulated by several hormones (e.g., insulin, glucagon and epinephrine). Adipose tissue also secretes the hormone leptin.
Production and processing
A variety of chemical and physical techniques are used for the production and processing of fats, both industrially and in cottage or home settings. They include:
Pressing to extract liquid fats from fruits, seeds, or algae, e.g. olive oil from olives
Solvent extraction using solvents like hexane or supercritical carbon dioxide
Rendering, the melting of fat in adipose tissue, e.g. to produce tallow, lard, fish oil, and whale oil
Churning of milk to produce butter
Hydrogenation to increase the degree of saturation of the fatty acids
Interesterification, the rearrangement of fatty acids across different triglycerides
Winterization to remove oil components with higher melting points
Clarification of butter
Metabolism
The pancreatic lipase acts at the ester bond, hydrolyzing the bond and "releasing" the fatty acid. In triglyceride form, lipids cannot be absorbed by the duodenum. Fatty acids, monoglycerides (one glycerol, one fatty acid), and some diglycerides are absorbed by the duodenum, once the triglycerides have been broken down.
In the intestine, following the secretion of lipases and bile, triglycerides are split into monoacylglycerol and free fatty acids in a process called lipolysis. They are subsequently moved to absorptive enterocyte cells lining the intestines. The triglycerides are rebuilt in the enterocytes from their fragments and packaged together with cholesterol and proteins to form chylomicrons. These are excreted from the cells and collected by the lymph system and transported to the large vessels near the heart before being mixed into the blood. Various tissues can capture the chylomicrons, releasing the triglycerides to be used as a source of energy. Liver cells can synthesize and store triglycerides. When the body requires fatty acids as an energy source, the hormone glucagon signals the breakdown of the triglycerides by hormone-sensitive lipase to release free fatty acids. As the brain cannot utilize fatty acids as an energy source (unless converted to a ketone), the glycerol component of triglycerides can be converted into glucose, via gluconeogenesis by conversion into dihydroxyacetone phosphate and then into glyceraldehyde 3-phosphate, for brain fuel when it is broken down. Fat cells may also be broken down for that reason if the brain's needs ever outweigh the body's.
Triglycerides cannot pass through cell membranes freely. Special enzymes on the walls of blood vessels called lipoprotein lipases must break down triglycerides into free fatty acids and glycerol. Fatty acids can then be taken up by cells via fatty acid transport proteins (FATPs).
Triglycerides, as major components of very-low-density lipoprotein (VLDL) and chylomicrons, play an important role in metabolism as energy sources and transporters of dietary fat. They contain more than twice as much energy (approximately 9 kcal/g or 38 kJ/g) as carbohydrates (approximately 4 kcal/g or 17 kJ/g).
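As a simple numeric illustration of those energy densities (the per-gram values come from the figures above; the food masses are made-up examples), the snippet below converts grams of fat and carbohydrate into kilocalories:

```python
KCAL_PER_GRAM = {"fat": 9.0, "carbohydrate": 4.0}  # energy densities quoted above

def energy_kcal(grams, nutrient):
    """Energy released by metabolizing `grams` of a macronutrient, in kcal."""
    return grams * KCAL_PER_GRAM[nutrient]

# Illustrative comparison: 20 g of fat carries more energy than 40 g of carbohydrate.
print(energy_kcal(20, "fat"))           # 180.0 kcal
print(energy_kcal(40, "carbohydrate"))  # 160.0 kcal
```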
Nutritional and health aspects
The most common type of fat, in the human diet and most living beings, is a triglyceride, an ester of the triple alcohol glycerol and three fatty acids. The molecule of a triglyceride can be described as resulting from a condensation reaction (specifically, esterification) between each of glycerol's –OH groups and the HO– part of the carboxyl group of each fatty acid, forming an ester bridge with elimination of a water molecule.
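In schematic form, the reaction described above can be written as follows, with R1, R2 and R3 standing for the hydrocarbon chains of the three fatty acids (the grouping of symbols is a conventional shorthand, not taken from the article):

```latex
\mathrm{C_3H_5(OH)_3} \;+\; \mathrm{R_1COOH} + \mathrm{R_2COOH} + \mathrm{R_3COOH}
\;\longrightarrow\;
\mathrm{C_3H_5(OCOR_1)(OCOR_2)(OCOR_3)} \;+\; 3\,\mathrm{H_2O}
```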
Other less common types of fats include diglycerides and monoglycerides, where the esterification is limited to two or just one of glycerol's –OH groups. Other alcohols, such as cetyl alcohol (predominant in spermaceti), may replace glycerol. In the phospholipids, one of the fatty acids is replaced by phosphoric acid or a monoester thereof.
The benefits and risks of various amounts and types of dietary fats have been the object of much study, and are still highly controversial topics.
Essential fatty acids
There are two essential fatty acids (EFAs) in human nutrition: alpha-Linolenic acid (an omega-3 fatty acid) and linoleic acid (an omega-6 fatty acid). The adult body can synthesize other lipids that it needs from these two.
Dietary sources
Saturated vs. unsaturated fats
Different foods contain different amounts of fat with different proportions of saturated and unsaturated fatty acids. Some animal products, like beef and dairy products made with whole or reduced fat milk like yogurt, ice cream, cheese and butter have mostly saturated fatty acids (and some have significant contents of dietary cholesterol). Other animal products, like pork, poultry, eggs, and seafood have mostly unsaturated fats. Industrialized baked goods may use fats with high unsaturated fat contents as well, especially those containing partially hydrogenated oils, and processed foods that are deep-fried in hydrogenated oil are high in saturated fat content.
Plants and fish oil generally contain a higher proportion of unsaturated acids, although there are exceptions such as coconut oil and palm kernel oil. Foods containing unsaturated fats include avocado, nuts, olive oils, and vegetable oils such as canola.
Many scientific studies have found that replacing saturated fats with cis unsaturated fats in the diet reduces risk of cardiovascular diseases (CVDs), diabetes, or death. These studies prompted many medical organizations and public health departments, including the World Health Organization (WHO), to officially issue that advice. Some countries with such recommendations include:
United Kingdom
United States
India
Canada
Australia
Singapore
New Zealand
Hong Kong
A 2004 review concluded that "no lower safe limit of specific saturated fatty acid intakes has been identified" and recommended that the influence of varying saturated fatty acid intakes against a background of different individual lifestyles and genetic backgrounds should be the focus in future studies.
This advice is often oversimplified by labeling the two kinds of fats as bad fats and good fats, respectively. However, since the fats and oils in most natural and traditionally processed foods contain both unsaturated and saturated fatty acids, the complete exclusion of saturated fat is unrealistic and possibly unwise. For instance, some foods rich in saturated fat, such as coconut and palm oil, are an important source of cheap dietary calories for a large fraction of the population in developing countries.
Concerns were also expressed at a 2010 conference of the American Dietetic Association that a blanket recommendation to avoid saturated fats could drive people to also reduce the amount of polyunsaturated fats, which may have health benefits, and/or replace fats by refined carbohydrates — which carry a high risk of obesity and heart disease.
For these reasons, the U.S. Food and Drug Administration, for example, recommends consuming less than 10% (7% for high-risk groups) of calories from saturated fat, with 15-30% of total calories from all fat. A general 7% limit was also recommended by the American Heart Association (AHA) in 2006.
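To make those percentage guidelines concrete, the short sketch below converts them into grams of fat per day for a given calorie intake. The 2,000 kcal daily figure is an illustrative assumption; the 9 kcal/g energy density is the value for fat cited earlier in the article.

```python
KCAL_PER_GRAM_FAT = 9.0  # energy density of fat cited earlier in the article

def fat_limits(daily_kcal, sat_fraction=0.10, total_fat_range=(0.15, 0.30)):
    """Convert percent-of-calories guidelines into grams of fat per day."""
    sat_g = daily_kcal * sat_fraction / KCAL_PER_GRAM_FAT
    total_g = tuple(daily_kcal * f / KCAL_PER_GRAM_FAT for f in total_fat_range)
    return sat_g, total_g

# Illustrative 2,000 kcal/day diet: <10% of calories from saturated fat, 15-30% from all fat.
sat_g, (total_lo, total_hi) = fat_limits(2000)
print(round(sat_g, 1))                          # 22.2 g saturated fat
print(round(total_lo, 1), round(total_hi, 1))   # 33.3 to 66.7 g total fat
```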
The WHO/FAO report also recommended replacing fats so as to reduce the content of myristic and palmitic acids, specifically.
The so-called Mediterranean diet, prevalent in many countries in the Mediterranean Sea area, includes more total fat than the diet of Northern European countries, but most of it is in the form of unsaturated fatty acids (specifically, monounsaturated and omega-3) from olive oil and fish, vegetables, and certain meats like lamb, while consumption of saturated fat is minimal in comparison.
A 2017 review found evidence that a Mediterranean-style diet could reduce the risk of cardiovascular diseases, overall cancer incidence, neurodegenerative diseases, diabetes, and mortality rate. A 2018 review showed that a Mediterranean-like diet may improve overall health status, such as reduced risk of non-communicable diseases. It also may reduce the social and economic costs of diet-related illnesses.
A small number of contemporary reviews have challenged this negative view of saturated fats. For example, an evaluation of evidence from 1966 to 1973 of the observed health impact of replacing dietary saturated fat with linoleic acid found that it increased rates of death from all causes, coronary heart disease, and cardiovascular disease. These studies have been disputed by many scientists, and the consensus in the medical community is that saturated fat and cardiovascular disease are closely related. Still, these discordant studies fueled debate over the merits of substituting polyunsaturated fats for saturated fats.
Cardiovascular disease
The effect of saturated fat on cardiovascular disease has been extensively studied. The general consensus is that there is moderate-quality evidence of a strong, consistent, and graded relationship between saturated fat intake, blood cholesterol levels, and the incidence of cardiovascular disease. The relationships are accepted as causal, including by many government and medical organizations.
A 2017 review by the AHA estimated that replacement of saturated fat with polyunsaturated fat in the American diet could reduce the risk of cardiovascular diseases by 30%.
The consumption of saturated fat is generally considered a risk factor for dyslipidemia—abnormal blood lipid levels, including high total cholesterol, high levels of triglycerides, high levels of low-density lipoprotein (LDL, "bad" cholesterol) or low levels of high-density lipoprotein (HDL, "good" cholesterol). These parameters in turn are believed to be risk indicators for some types of cardiovascular disease. These effects were observed in children too.
Several meta-analyses (reviews and consolidations of multiple previously published experimental studies) have confirmed a significant relationship between saturated fat and high serum cholesterol levels, which in turn have been claimed to have a causal relation with increased risk of cardiovascular disease (the so-called lipid hypothesis). However, high cholesterol may be caused by many factors. Other indicators, such as high LDL/HDL ratio, have proved to be more predictive. In a study of myocardial infarction in 52 countries, the ApoB/ApoA1 (related to LDL and HDL, respectively) ratio was the strongest predictor of CVD among all risk factors. There are other pathways involving obesity, triglyceride levels, insulin sensitivity, endothelial function, and thrombogenicity, among others, that play a role in CVD, although it seems, in the absence of an adverse blood lipid profile, the other known risk factors have only a weak atherogenic effect. Different saturated fatty acids have differing effects on various lipid levels.
Cancer
The evidence for a relation between saturated fat intake and cancer is significantly weaker, and there does not seem to be a clear medical consensus about it.
Several reviews of case–control studies have found that saturated fat intake is associated with increased breast cancer risk.
Another review found limited evidence for a positive relationship between consuming animal fat and incidence of colorectal cancer.
Other meta-analyses found evidence for increased risk of ovarian cancer by high consumption of saturated fat.
Some studies have indicated that serum myristic acid and palmitic acid and dietary myristic and palmitic saturated fatty acids and serum palmitic combined with alpha-tocopherol supplementation are associated with increased risk of prostate cancer in a dose-dependent manner. These associations may, however, reflect differences in intake or metabolism of these fatty acids between the precancer cases and controls, rather than being an actual cause.
Bones
Various animal studies have indicated that the intake of saturated fat has a negative effect on the mineral density of bones. One study suggested that men may be particularly vulnerable.
Disposition and overall health
Studies have shown that substituting monounsaturated fatty acids for saturated ones is associated with increased daily physical activity and resting energy expenditure. More physical activity, less anger, and less irritability were associated with a high-oleic-acid diet than with a high-palmitic-acid diet.
Monounsaturated vs. polyunsaturated fat
The most common fatty acids in human diet are unsaturated or mono-unsaturated. Monounsaturated fats are found in animal flesh such as red meat, whole milk products, nuts, and high fat fruits such as olives and avocados. Olive oil is about 75% monounsaturated fat. The high oleic variety sunflower oil contains at least 70% monounsaturated fat. Canola oil and cashews are both about 58% monounsaturated fat. Tallow (beef fat) is about 50% monounsaturated fat, and lard is about 40% monounsaturated fat. Other sources include hazelnut, avocado oil, macadamia nut oil, grapeseed oil, groundnut oil (peanut oil), sesame oil, corn oil, popcorn, whole grain wheat, cereal, oatmeal, almond oil, hemp oil, and tea-oil camellia.
Polyunsaturated fatty acids can be found mostly in nuts, seeds, fish, seed oils, and oysters.
Food sources of polyunsaturated fats include:
Insulin resistance and sensitivity
MUFAs (especially oleic acid) have been found to lower the incidence of insulin resistance; PUFAs (especially large amounts of arachidonic acid) and SFAs (such as arachidic acid) increased it. These ratios can be indexed in the phospholipids of human skeletal muscle and in other tissues as well. This relationship between dietary fats and insulin resistance is presumed secondary to the relationship between insulin resistance and inflammation, which is partially modulated by dietary fat ratios (omega−3/6/9) with both omega−3 and −9 thought to be anti-inflammatory, and omega−6 pro-inflammatory (as well as by numerous other dietary components, particularly polyphenols and exercise, with both of these anti-inflammatory). Although both pro- and anti-inflammatory types of fat are biologically necessary, fat dietary ratios in most US diets are skewed towards omega−6, with subsequent disinhibition of inflammation and potentiation of insulin resistance. This is contrary to the suggestion that polyunsaturated fats are shown to be protective against insulin resistance.
The large scale KANWU study found that increasing MUFA and decreasing SFA intake could improve insulin sensitivity, but only when the overall fat intake of the diet was low. However, some MUFAs may promote insulin resistance (like the SFAs), whereas PUFAs may protect against it.
Cancer
Levels of oleic acid along with other MUFAs in red blood cell membranes were positively associated with breast cancer risk. The saturation index (SI) of the same membranes was inversely associated with breast cancer risk. MUFAs and low SI in erythrocyte membranes are predictors of postmenopausal breast cancer. Both of these variables depend on the activity of the enzyme delta-9 desaturase (Δ9-d).
Results from observational clinical trials on PUFA intake and cancer have been inconsistent and vary by numerous factors of cancer incidence, including gender and genetic risk. Some studies have shown associations between higher intakes and/or blood levels of omega-3 PUFAs and a decreased risk of certain cancers, including breast and colorectal cancer, while other studies found no associations with cancer risk.
Pregnancy disorders
Polyunsaturated fat supplementation was found to have no effect on the incidence of pregnancy-related disorders, such as hypertension or preeclampsia, but may increase the length of gestation slightly and decrease the incidence of early premature births.
Expert panels in the United States and Europe recommend that pregnant and lactating women consume higher amounts of polyunsaturated fats than the general population to enhance the DHA status of the fetus and newborn.
"Cis fat" vs. "trans fat"
In nature, unsaturated fatty acids generally have double bonds in cis configuration (with the adjacent C–C bonds on the same side) as opposed to trans. Nevertheless, trans fatty acids (TFAs) occur in small amounts in meat and milk of ruminants (such as cattle and sheep), typically 2–5% of total fat. Natural TFAs, which include conjugated linoleic acid (CLA) and vaccenic acid, originate in the rumen of these animals. CLA has two double bonds, one in the cis configuration and one in trans, which makes it simultaneously a cis- and a trans-fatty acid.
The processing of fats by hydrogenation can convert some unsaturated fats into trans fats. The presence of trans fats in various processed foods has received much attention.
Omega-three and omega-six fatty acids
The ω−3 fatty acids have received substantial attention. Among omega-3 fatty acids, neither long-chain nor short-chain forms were consistently associated with breast cancer risk. High levels of docosahexaenoic acid (DHA), however, the most abundant omega-3 polyunsaturated fatty acid in erythrocyte (red blood cell) membranes, were associated with a reduced risk of breast cancer. The DHA obtained through the consumption of polyunsaturated fatty acids is positively associated with cognitive and behavioral performance. In addition, DHA is vital for the grey matter structure of the human brain, as well as retinal stimulation and neurotransmission.
Interesterification
Some studies have investigated the health effects of interesterified (IE) fats, by comparing diets with IE and non-IE fats with the same overall fatty acid composition.
Several experimental studies in humans found no statistical difference on fasting blood lipids between a diet with large amounts of IE fat, having 25-40% C16:0 or C18:0 on the 2-position, and a similar diet with non-IE fat, having only 3-9% C16:0 or C18:0 on the 2-position. A negative result was also obtained in a study that compared the effects on blood cholesterol levels of an IE fat product mimicking cocoa butter and the real non-IE product.
A 2007 study funded by the Malaysian Palm Oil Board claimed that replacing natural palm oil by other interesterified or partially hydrogenated fats caused adverse health effects, such as higher LDL/HDL ratio and plasma glucose levels. However, these effects could be attributed to the higher percentage of saturated acids in the IE and partially hydrogenated fats, rather than to the IE process itself.
Rancification
Unsaturated fats undergo auto-oxidation, which involves replacement of a C-H bond with a C-OH unit. The process requires oxygen (air) and is accelerated by the presence of traces of metals, which serve as catalysts. Doubly unsaturated fatty acids are particularly prone to this reaction. Vegetable oils resist this process to a small degree because they contain antioxidants, such as tocopherol. Fats and oils often are treated with chelating agents such as citric acid to remove the metal catalysts.
Role in disease
In the human body, high levels of triglycerides in the bloodstream have been linked to atherosclerosis, heart disease, and stroke. However, the relative negative impact of raised levels of triglycerides compared to that of LDL:HDL ratios is as yet unknown. The risk can be partly accounted for by a strong inverse relationship between triglyceride level and HDL-cholesterol level. But the risk is also due to high triglyceride levels increasing the quantity of small, dense LDL particles.
Guidelines
The National Cholesterol Education Program has set guidelines for triglyceride levels:
These levels are tested after fasting 8 to 12 hours. Triglyceride levels remain temporarily higher for a period after eating.
The AHA recommends an optimal triglyceride level of 100 mg/dL (1.1 mmol/L) or lower to improve heart health.
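Triglyceride concentrations are commonly reported in mg/dL in the United States and in mmol/L elsewhere. A minimal conversion sketch is below; the ~885 g/mol molar mass is an assumed average-triglyceride value used by convention, not a figure from the article, and it reproduces the 100 mg/dL ≈ 1.1 mmol/L equivalence quoted above.

```python
TRIGLYCERIDE_MOLAR_MASS = 885.0  # g/mol, assumed average triglyceride (typical lab convention)

def mgdl_to_mmoll(mg_per_dl):
    """Convert a triglyceride concentration from mg/dL to mmol/L."""
    grams_per_litre = mg_per_dl * 10 / 1000                    # mg/dL -> g/L
    return grams_per_litre / TRIGLYCERIDE_MOLAR_MASS * 1000    # mol/L -> mmol/L

print(round(mgdl_to_mmoll(100), 2))  # ~1.13 mmol/L, i.e. the ~1.1 mmol/L quoted above
```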
Reducing triglyceride levels
Fat digestion and metabolism
Fats are broken down in the healthy body to release their constituents, glycerol and fatty acids. Glycerol itself can be converted to glucose by the liver and so become a source of energy. Fats and other lipids are broken down in the body by enzymes called lipases produced in the pancreas.
Many cell types can use either glucose or fatty acids as a source of energy for metabolism. In particular, heart and skeletal muscle prefer fatty acids. Despite long-standing assertions to the contrary, fatty acids can also be used as a source of fuel for brain cells through mitochondrial oxidation.
| Biology and health sciences | Biochemistry and molecular biology | null |
11057 | https://en.wikipedia.org/wiki/Forge | Forge | A forge is a type of hearth used for heating metals, or the workplace (smithy) where such a hearth is located. The forge is used by the smith to heat a piece of metal to a temperature at which it becomes easier to shape by forging, or to the point at which work hardening no longer occurs. The metal (known as the "workpiece") is transported to and from the forge using tongs, which are also used to hold the workpiece on the smithy's anvil while the smith works it with a hammer. Sometimes, such as when hardening steel or cooling the work so that it may be handled with bare hands, the workpiece is transported to the slack tub, which rapidly cools the workpiece in a large body of water. However, depending on the metal type, it may require an oil quench or a salt brine instead; many metals require more than plain water hardening. The slack tub also provides water to control the fire in the forge.
Types
Coal/coke/charcoal forge
A forge typically uses bituminous coal, industrial coke or charcoal as the fuel to heat metal. The designs of these forges have varied over time, but whether the fuel is coal, coke or charcoal the basic design has remained the same.
A forge of this type is essentially a hearth or fireplace designed to allow a fire to be controlled such that metal introduced to the fire may be brought to a malleable state or to bring about other metallurgical effects (hardening, annealing, and tempering as examples). The forge fire in this type of forge is controlled in three ways: amount of air, the volume of fuel, and shape of the fuel/fire.
Over thousands of years of forging, these devices have evolved in one form or another as the essential features of this type of forge:
Tuyere—a pipe through which air can be forced into the fire
Bellows or blower—a means for forcing air into the tuyere
Hearth—a place where the burning fuel can be contained over or against the tuyere opening. Traditionally hearths have been constructed of mud-brick (adobe), fired brick, stone, or later, constructed of iron.
During operation, fuel is placed in or on the hearth and ignited. A source of moving air, such as a fan or bellows, introduces additional air into the fire through the tuyere. With additional air, the fire consumes fuel faster and burns hotter (and cleaner - smoke can be thought of as escaped potential fuel).
A blacksmith balances the fuel and air in the fire to suit particular kinds of work. Often this involves adjusting and maintaining the shape of the fire.
In a typical coal forge, a firepot will be centred in a flat hearth. The tuyere will enter the firepot at the bottom. In operation, the hot core of the fire will be a ball of burning coke in and above the firepot. The heart of the fire will be surrounded by a layer of hot but not burning coke. Around the unburnt coke will be a transitional layer of coal being transformed into coke by the heat of the fire. Surrounding all is a ring or horseshoe-shaped layer of raw coal, usually kept damp and tightly packed to maintain the shape of the fire's heart and to keep the coal from burning directly so that it "cooks" into coke first.
If a larger fire is necessary, the smith increases the air flowing into the fire as well as feeding and deepening the coke heart. The smith can also adjust the length and width of the fire in such a forge to accommodate different shapes of work.
The major variation from the forge and fire just described is a 'backdraft' where there is no fire pot, and the tuyere enters the hearth horizontally from the back wall.
Coke and charcoal may be burned in the same forges that use coal, but since there is no need to convert the raw fuel at the heart of the fire (as with coal), the fire is handled differently.
Individual smiths and specialized applications have fostered the development of a variety of forges of this type, from the coal forge described above to simpler constructions amounting to a hole in the ground with a pipe leading into it.
Gas forge
A gas forge typically uses propane or natural gas as the fuel. One common, efficient design uses a cylindrical forge chamber and a burner tube mounted at a right angle to the body. The chamber is typically lined with refractory materials such as a hard castable refractory ceramic or a soft ceramic thermal blanket (e.g., Kaowool). The burner mixes fuel and air, which are ignited at the tip, which protrudes a short way into the chamber lining. The air pressure, and therefore heat, can be increased with a mechanical blower or by taking advantage of the Venturi effect.
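As a rough idea of how much air the burner must entrain for each volume of fuel, the sketch below computes the stoichiometric air requirement for propane from its combustion equation (C3H8 + 5 O2 -> 3 CO2 + 4 H2O). The assumption that air is about 21% oxygen by volume is mine, not the article's, and real burners typically run with some excess air.

```python
O2_PER_PROPANE = 5.0       # moles of O2 per mole of propane (C3H8 + 5 O2 -> 3 CO2 + 4 H2O)
O2_FRACTION_IN_AIR = 0.21  # approximate volume fraction of oxygen in air (assumed)

def stoich_air_volume(propane_volume):
    """Volume of air needed to completely burn a given volume of propane gas.

    Works in any consistent volume unit, since ideal-gas volumes are
    proportional to mole counts at the same temperature and pressure.
    """
    return propane_volume * O2_PER_PROPANE / O2_FRACTION_IN_AIR

# Roughly 24 volumes of air per volume of propane at the same conditions.
print(round(stoich_air_volume(1.0), 1))  # ~23.8
```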
Gas forges vary in size and construction, from large forges using a big burner with a blower or several atmospheric burners to forges built out of a coffee can utilizing a cheap, simple propane torch. A small forge can even be carved out of a single soft firebrick.
The primary advantage of a gas forge is the ease of use, particularly for a novice. A gas forge is simple to operate compared to coal forges, and the fire produced is clean and consistent. They are less versatile, as the fire cannot be reshaped to accommodate large or unusually shaped pieces. It is also difficult to heat a small section of a piece. A common misconception is that gas forges cannot produce enough heat to enable forge-welding, but a well-designed gas forge is hot enough for any task.
Finery forge
A finery forge is a water-powered mill where pig iron is refined into wrought iron.
Forging equipment
Anvil
The anvil serves as a workbench to the blacksmith, where the metal to be forged is worked. Anvils may seem clunky and heavy, but they are a highly refined tool carefully shaped to suit a blacksmith's needs. Anvils are made of cast or wrought iron with a tool steel face welded on, or of a single piece of cast or forged tool steel. Some anvils are made only of cast iron and have no tool steel face; these will not serve a blacksmith as true anvils because they are too soft, and a common term for a cast iron anvil is "ASO", or "Anvil Shaped Object". The purpose of a tool steel face on an anvil is to provide what some call "rebound", as well as being hard enough not to dent easily from misplaced hammer blows. Rebound means the face returns some of the force of the blacksmith's hammer blows into the metal, thus moving more metal at once than if there were no rebound. A good anvil can return anywhere from 50 to 99% of the energy back into the workpiece.

The flat top, called the "face", is highly polished and usually has two holes (but can have more or fewer depending on the design). The square hole is called the hardy hole, where the square shank of the hardy tool fits; there are many different kinds of hardy tools. The smaller hole is called the pritchel hole, used as a bolster when punching holes in hot metal, or to hold tools much as the hardy hole does, but for tools that need to rotate through a full 360 degrees, such as a hold-down tool for when the blacksmith's tongs cannot hold a workpiece as securely as needed. On the front of the anvil there is sometimes a "horn" that is used for bending, drawing out steel, and many other tasks. Between the horn and the anvil face there is often a small area called a "step" or "cutting table", used for cutting hot or cold steel with chisels and hot-cut tools without harming the anvil's face. Marks on the face transfer into imperfections in the blacksmith's work.
Hammer
There are many types of hammer used in a blacksmith's workshop; only a few common ones are named here. Hammers range in shape and weight, from half an ounce to nearly 30 pounds, depending on the type of work being done.
Hand hammer - used by the smith.
Ball-peen hammer
Cross-peen hammer
Straight-peen hammer
Rounding hammer
Sledge hammer - used by the striker.
Chisel
Chisels are made of high carbon steel. They are hardened and tempered at the cutting edge, while the head is left soft so it will not crack when hammered. Chisels are of two types: hot and cold chisels. The cold chisel is used for cutting cold metals, while the hot chisel is for hot metals. Usually, hot chisels are thinner and therefore cannot be substituted for cold chisels. Many smiths also shape chisels with a simple twisted handle, rather like a hammer's, so that they can be used at a greater distance from the hot metal. They are very useful and found throughout the world.
Tongs
Tongs are used by the blacksmith for holding hot metals securely. The mouths are custom made by the smith in various shapes to suit the gripping of various shapes of metal. It is not uncommon for a blacksmith to own twenty or more pairs of tongs; traditionally, a smith would start building their collection during the apprenticeship.
There are various types of tongs available in the market.
(1) flat tong
(2) rivet or ring tong
(3) straight lip fluted tong
(4) gad tong
Fuller
Fullers are forming tools of different shapes used in making grooves or hollows. They are often used in pairs, the bottom fuller has a square shank which fits into the hardy hole in the anvil while the top fuller has a handle. The work is placed on the bottom fuller and the top is placed on the work and struck with a hammer. The top fuller is also used for finishing round corners and for stretching or spreading metal.
Hardy
The hardy tool is a tool with a square shank that fits in a hardy hole. There are many different kinds of hardy tools such as the hot cut hardy, used for cutting hot metal on the anvil; the fuller tool, used for drawing out metal and making grooves; bending jigs - and too many others to list.
Slack tub
A slack tub is usually a large container full of water used by a blacksmith to quench hot metal. The slack tub is principally used to cool parts of the work during forging (to protect them, or keep the metal in one area from "spreading" for example, nearby hammer blows); to harden the steel; to tend a coal or charcoal forge; and simply to cool the work quickly for easy inspection. In bladesmithing and tool-making the term will usually be changed to a "quench tank" because oil or brine is used to cool the metal. The term slack is believed to derive from the word "slake", as in slaking the heat.
Types of forging
Drop forging
Drop forging is a process used to shape metal into complex shapes by dropping a heavy hammer with a die on its face onto the workpiece.
Process
The workpiece is placed into the forge. Then the impact of a hammer causes the heated material, which is very malleable, to conform to the shape of the die and die cavities. Typically only one die is needed to completely form the part. Extra space between the die faces causes some of the material to be pressed out of the sides, forming flash. This acts as a relief valve for the extreme pressure produced by the closing of the die halves and is later trimmed off of the finished part.
Equipment
The equipment used in the drop forming process is commonly known as a power hammer or drop hammer. These may be powered pneumatically, hydraulically, or mechanically. Depending on how the machine is powered, the mass of the ram, and the drop height, the striking force can be anywhere from 11,000 to 425,000 pounds.
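The striking force depends on how much energy the falling ram carries and over how short a distance it is stopped by the workpiece. A minimal sketch, assuming a simple gravity drop hammer and illustrative ram mass, drop height, and stopping distance (none of these numbers come from the article):

```python
G = 9.81  # m/s^2, standard gravity

def drop_hammer(ram_mass_kg, drop_height_m, stopping_distance_m):
    """Estimate impact energy and average striking force of a gravity drop hammer.

    Energy at impact equals the ram's potential energy (m*g*h); the average force
    follows from spreading that energy over the distance the workpiece deforms.
    """
    energy_j = ram_mass_kg * G * drop_height_m
    avg_force_n = energy_j / stopping_distance_m
    return energy_j, avg_force_n

# Illustrative numbers: a 500 kg ram dropped 1.5 m, stopped over 10 mm of deformation.
energy, force = drop_hammer(500, 1.5, 0.010)
print(round(energy))   # ~7358 J
print(round(force))    # ~735,750 N average, roughly 165,000 lbf, within the range quoted above
```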
The tools that are used, dies and punches, come in many different shapes and sizes, as well as materials. Examples of these shapes are flat and v-shaped which are used for open-die forging, and single or multiple-impression dies used for closed die forging. The designs for the dies have many aspects to them that must be considered. They all must be properly aligned, they must be designed so the metal and the flash will flow properly and fill all the grooves, and special considerations must be made for supporting webs and ribs and the parting line location. The materials must also be selected carefully. Some factors that go into the material selection are cost, their ability to harden, their ability to withstand high pressures, hot abrasion, heat cracking, and other such things. The most common materials used for the tools are carbon steel and, in some cases, nickel-based alloys.
Workpiece materials
The materials that are used most commonly in drop forging are aluminium, copper, nickel, mild steel, stainless steel, and magnesium. Mild steel is the best choice, and magnesium generally performs poorly as a drop forging material.
Mythology
Various gods and goddesses are associated with the forge in a number of mythologies, such as the Irish Brigid, West African Ogun, Greek Hephaestus and Roman Vulcan.
| Technology | Metallurgy | null |
11062 | https://en.wikipedia.org/wiki/Friction | Friction | Friction is the force resisting the relative motion of solid surfaces, fluid layers, and material elements sliding against each other. Types of friction include dry, fluid, lubricated, skin, and internal -- an incomplete list. The study of the processes involved is called tribology, and has a history of more than 2000 years.
Friction can have dramatic consequences, as illustrated by the use of friction created by rubbing pieces of wood together to start a fire. Another important consequence of many types of friction can be wear, which may lead to performance degradation or damage to components. It is known that frictional energy losses account for about 20% of the total energy expenditure of the world.
As briefly discussed later, there are many different contributors to the retarding force in friction, ranging from asperity deformation to the generation of charges and changes in local structure. Friction is not itself a fundamental force; it is a non-conservative force – work done against friction is path dependent. In the presence of friction, some mechanical energy is transformed to heat as well as the free energy of the structural changes and other types of dissipation, so mechanical energy is not conserved. The complexity of the interactions involved makes the calculation of friction from first principles difficult and it is often easier to use empirical methods for analysis and the development of theory.
Types
There are several types of friction:
Dry friction is a force that opposes the relative lateral motion of two solid surfaces in contact. Dry friction is subdivided into static friction ("stiction") between non-moving surfaces, and kinetic friction between moving surfaces. With the exception of atomic or molecular friction, dry friction generally arises from the interaction of surface features, known as asperities.
Fluid friction describes the friction between layers of a viscous fluid that are moving relative to each other.
Lubricated friction is a case of fluid friction where a lubricant fluid separates two solid surfaces.
Skin friction is a component of drag, the force resisting the motion of a fluid across the surface of a body.
Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation.
History
Many ancient authors including Aristotle, Vitruvius, and Pliny the Elder, were interested in the cause and mitigation of friction. They were aware of differences between static and kinetic friction with Themistius stating in 350 that "it is easier to further the motion of a moving body than to move a body at rest".
The classic laws of sliding friction were discovered by Leonardo da Vinci in 1493, a pioneer in tribology, but the laws documented in his notebooks were not published and remained unknown. These laws were rediscovered by Guillaume Amontons in 1699 and became known as Amontons' three laws of dry friction. Amontons presented the nature of friction in terms of surface irregularities and the force required to raise the weight pressing the surfaces together. This view was further elaborated by Bernard Forest de Bélidor and Leonhard Euler (1750), who derived the angle of repose of a weight on an inclined plane and first distinguished between static and kinetic friction.
John Theophilus Desaguliers (1734) first recognized the role of adhesion in friction. Microscopic forces cause surfaces to stick together; he proposed that friction was the force necessary to tear the adhering surfaces apart.
The understanding of friction was further developed by Charles-Augustin de Coulomb (1785). Coulomb investigated the influence of four main factors on friction: the nature of the materials in contact and their surface coatings; the extent of the surface area; the normal pressure (or load); and the length of time that the surfaces remained in contact (time of repose). Coulomb further considered the influence of sliding velocity, temperature and humidity, in order to decide between the different explanations on the nature of friction that had been proposed. The distinction between static and dynamic friction is made in Coulomb's friction law (see below), although this distinction was already drawn by Johann Andreas von Segner in 1758.
The effect of the time of repose was explained by Pieter van Musschenbroek (1762) by considering the surfaces of fibrous materials, with fibers meshing together, which takes a finite time in which the friction increases.
John Leslie (1766–1832) noted a weakness in the views of Amontons and Coulomb: If friction arises from a weight being drawn up the inclined plane of successive asperities, then why is it not balanced through descending the opposite slope? Leslie was equally skeptical about the role of adhesion proposed by Desaguliers, which should on the whole have the same tendency to accelerate as to retard the motion. In Leslie's view, friction should be seen as a time-dependent process of flattening, pressing down asperities, which creates new obstacles in what were cavities before.
In the long course of the development of the law of conservation of energy and of the first law of thermodynamics, friction was recognised as a mode of conversion of mechanical work into heat. In 1798, Benjamin Thompson reported on cannon boring experiments.
Arthur Jules Morin (1833) developed the concept of sliding versus rolling friction.
In 1842, Julius Robert Mayer frictionally generated heat in paper pulp and measured the temperature rise. In 1845, Joule published a paper entitled The Mechanical Equivalent of Heat, in which he specified a numerical value for the amount of mechanical work required to "produce a unit of heat", based on the friction of an electric current passing through a resistor, and on the friction of a paddle wheel rotating in a vat of water.
Osborne Reynolds (1866) derived the equation of viscous flow. This completed the classic empirical model of friction (static, kinetic, and fluid) commonly used today in engineering. In 1877, Fleeming Jenkin and J. A. Ewing investigated the continuity between static and kinetic friction.
In 1907, G.H. Bryan published an investigation of the foundations of thermodynamics, Thermodynamics: an Introductory Treatise dealing mainly with First Principles and their Direct Applications. He noted that for a rough body driven over a rough surface, the mechanical work done by the driver exceeds the mechanical work received by the surface. The lost work is accounted for by heat generated by friction.
Over the years, for example in his 1879 thesis, but particularly in 1926, Planck advocated regarding the generation of heat by rubbing as the most specific way to define heat, and the prime example of an irreversible thermodynamic process.
The focus of research during the 20th century has been to understand the physical mechanisms behind friction. Frank Philip Bowden and David Tabor (1950) showed that, at a microscopic level, the actual area of contact between surfaces is a very small fraction of the apparent area. This actual area of contact, caused by asperities, increases with pressure. The development of the atomic force microscope (ca. 1986) enabled scientists to study friction at the atomic scale, showing that, on that scale, dry friction is the product of the inter-surface shear stress and the contact area. These two discoveries explain Amontons' first law (below): the macroscopic proportionality between normal force and static frictional force between dry surfaces.
Laws of dry friction
The elementary properties of sliding (kinetic) friction were discovered by experiment in the 15th to 18th centuries and were expressed as three empirical laws:
Amontons' First Law: The force of friction is directly proportional to the applied load.
Amontons' Second Law: The force of friction is independent of the apparent area of contact.
Coulomb's Law of Friction: Kinetic friction is independent of the sliding velocity.
Dry friction
Dry friction resists relative lateral motion of two solid surfaces in contact. The two regimes of dry friction are 'static friction' ("stiction") between non-moving surfaces, and kinetic friction (sometimes called sliding friction or dynamic friction) between moving surfaces.
Coulomb friction, named after Charles-Augustin de Coulomb, is an approximate model used to calculate the force of dry friction. It is governed by the model:
$F_f \leq \mu F_n$
where
$F_f$ is the force of friction exerted by each surface on the other. It is parallel to the surface, in a direction opposite to the net applied force.
$\mu$ is the coefficient of friction, which is an empirical property of the contacting materials,
$F_n$ is the normal force exerted by each surface on the other, directed perpendicular (normal) to the surface.
The Coulomb friction $F_f$ may take any value from zero up to $\mu F_n$, and the direction of the frictional force against a surface is opposite to the motion that surface would experience in the absence of friction. Thus, in the static case, the frictional force is exactly what it must be in order to prevent motion between the surfaces; it balances the net force tending to cause such motion. In this case, rather than providing an estimate of the actual frictional force, the Coulomb approximation provides a threshold value for this force, above which motion would commence. This maximum force is known as traction.
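As an illustration of this threshold behaviour, the following short Python sketch returns the Coulomb friction force for a given applied tangential force; the function name and numbers are invented for the example, and a single coefficient stands in for both the static and kinetic values.

```python
import math

def coulomb_friction(applied_tangential, mu, normal_force):
    """Friction force under the Coulomb approximation (single coefficient mu)."""
    threshold = mu * normal_force  # maximum available friction, mu * N
    if abs(applied_tangential) <= threshold:
        # Static case: friction exactly balances the applied force, no sliding.
        return -applied_tangential
    # Kinetic case: friction saturates at mu * N and opposes the motion.
    return -math.copysign(threshold, applied_tangential)

# A 10 kg block on a level floor: N = 10 * 9.81 = 98.1 N, mu = 0.4, threshold ~39.2 N.
print(coulomb_friction(20.0, 0.4, 98.1))  # -20.0  (below threshold, block stays put)
print(coulomb_friction(60.0, 0.4, 98.1))  # -39.24 (threshold exceeded, sliding begins)
```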
The force of friction is always exerted in a direction that opposes movement (for kinetic friction) or potential movement (for static friction) between the two surfaces. For example, a curling stone sliding along the ice experiences a kinetic force slowing it down. For an example of potential movement, the drive wheels of an accelerating car experience a frictional force pointing forward; if they did not, the wheels would spin, and the rubber would slide backwards along the pavement. Note that it is not the direction of movement of the vehicle they oppose, it is the direction of (potential) sliding between tire and road.
Normal force
The normal force is defined as the net force compressing two parallel surfaces together, and its direction is perpendicular to the surfaces. In the simple case of a mass resting on a horizontal surface, the only component of the normal force is the force due to gravity, where $N = mg$. In this case, conditions of equilibrium tell us that the magnitude of the friction force is zero, $F_f = 0$. In fact, the friction force always satisfies $F_f \leq \mu N$, with equality reached only at a critical ramp angle (given by $\arctan\mu$) that is steep enough to initiate sliding.
The friction coefficient is an empirical (experimentally measured) structural property that depends only on various aspects of the contacting materials, such as surface roughness. The coefficient of friction is not a function of mass or volume. For instance, a large aluminum block has the same coefficient of friction as a small aluminum block. However, the magnitude of the friction force itself depends on the normal force, and hence on the mass of the block.
Depending on the situation, the calculation of the normal force might include forces other than gravity. If an object is on a level surface and subjected to an external force $P$ tending to cause it to slide, then the normal force between the object and the surface is just $N = mg + P_y$, where $mg$ is the block's weight and $P_y$ is the downward component of the external force. Prior to sliding, this friction force is $F_f = -P_x$, where $P_x$ is the horizontal component of the external force. Thus, $F_f \leq \mu N$ in general. Sliding commences only after this frictional force reaches the value $F_f = \mu N$. Until then, friction is whatever it needs to be to provide equilibrium, so it can be treated as simply a reaction.
If the object is on a tilted surface such as an inclined plane, the normal force from gravity is smaller than $mg$, because less of the force of gravity is perpendicular to the face of the plane. The normal force and the frictional force are ultimately determined using vector analysis, usually via a free body diagram.
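A minimal numerical sketch of the inclined-plane case (the mass, angles, and coefficient below are illustrative only): the normal force is the component of gravity perpendicular to the plane, and sliding starts once the component along the plane exceeds the maximum static friction.

```python
import math

def incline_analysis(mass_kg, angle_deg, mu_s, g=9.81):
    """Normal force, driving force, and slip check for a block on an inclined plane."""
    theta = math.radians(angle_deg)
    normal = mass_kg * g * math.cos(theta)   # N = m g cos(theta), smaller than m g
    driving = mass_kg * g * math.sin(theta)  # gravity component along the plane
    slips = driving > mu_s * normal          # does gravity overcome static friction?
    return normal, driving, slips

# 5 kg block on a 20 degree slope with mu_s = 0.5: stays put.
print(incline_analysis(5.0, 20.0, 0.5))
# The same block on a 30 degree slope (steeper than arctan(0.5), about 26.6 degrees): slides.
print(incline_analysis(5.0, 30.0, 0.5))
```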
In general, the process for solving any statics problem with friction is to treat contacting surfaces tentatively as immovable so that the corresponding tangential reaction force between them can be calculated. If this frictional reaction force satisfies $F_f \leq \mu N$, then the tentative assumption was correct, and it is the actual frictional force. Otherwise, the friction force must be set equal to $F_f = \mu N$, and the resulting force imbalance then determines the acceleration associated with slipping.
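The same tentative-assumption procedure can be written out for the simple case of a block pushed horizontally on a level floor; this is a sketch only, with invented numbers, and a single coefficient is used for both static and kinetic friction for brevity.

```python
def solve_block_on_floor(mass_kg, push_force, mu, g=9.81):
    """Assume no slipping, then correct the assumption if friction would exceed mu * N."""
    normal = mass_kg * g            # level surface, horizontal push only
    required_friction = push_force  # reaction needed to keep the block in equilibrium
    if required_friction <= mu * normal:
        return {"slips": False, "friction": required_friction, "acceleration": 0.0}
    friction = mu * normal          # friction saturates at mu * N once slipping starts
    acceleration = (push_force - friction) / mass_kg
    return {"slips": True, "friction": friction, "acceleration": acceleration}

print(solve_block_on_floor(10.0, 30.0, 0.4))  # holds still: friction = 30 N
print(solve_block_on_floor(10.0, 60.0, 0.4))  # slips: a ~ 2.08 m/s^2
```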
Coefficient of friction
The coefficient of friction (COF), often symbolized by the Greek letter μ, is a dimensionless scalar value which equals the ratio of the force of friction between two bodies and the force pressing them together, either during or at the onset of slipping. The coefficient of friction depends on the materials used; for example, ice on steel has a low coefficient of friction, while rubber on pavement has a high coefficient of friction. Coefficients of friction range from near zero to greater than one. The coefficient of friction between two surfaces of similar metals is greater than that between two surfaces of different metals; for example, brass has a higher coefficient of friction when moved against brass, but less if moved against steel or aluminum.
For surfaces at rest relative to each other, $\mu = \mu_\mathrm{s}$, where $\mu_\mathrm{s}$ is the coefficient of static friction. This is usually larger than its kinetic counterpart. The coefficient of static friction exhibited by a pair of contacting surfaces depends upon the combined effects of material deformation characteristics and surface roughness, both of which have their origins in the chemical bonding between atoms in each of the bulk materials and between the material surfaces and any adsorbed material. The fractality of surfaces, a parameter describing the scaling behavior of surface asperities, is known to play an important role in determining the magnitude of the static friction.
For surfaces in relative motion, $\mu = \mu_\mathrm{k}$, where $\mu_\mathrm{k}$ is the coefficient of kinetic friction. The Coulomb friction is equal to $\mu_\mathrm{k} F_n$, and the frictional force on each surface is exerted in the direction opposite to its motion relative to the other surface.
Arthur Morin introduced the term and demonstrated the utility of the coefficient of friction. The coefficient of friction is an empirical measurement: it has to be measured experimentally, and cannot be found through calculations. Rougher surfaces tend to have higher effective values. Both static and kinetic coefficients of friction depend on the pair of surfaces in contact; for a given pair of surfaces, the coefficient of static friction is usually larger than that of kinetic friction, although for some pairs, such as PTFE (Teflon) on PTFE, the two coefficients are equal.
Most dry materials in combination have friction coefficient values between 0.3 and 0.6. Values outside this range are rarer, but teflon, for example, can have a coefficient as low as 0.04. A value of zero would mean no friction at all, an elusive property. Rubber in contact with other surfaces can yield friction coefficients from 1 to 2. Occasionally it is maintained that μ is always < 1, but this is not true. While in most relevant applications μ < 1, a value above 1 merely implies that the force required to slide an object along the surface is greater than the normal force of the surface on the object. For example, silicone rubber or acrylic rubber-coated surfaces have a coefficient of friction that can be substantially larger than 1.
While it is often stated that the COF is a "material property", it is better categorized as a "system property". Unlike true material properties (such as conductivity, dielectric constant, yield strength), the COF for any two materials depends on system variables like temperature, velocity, atmosphere and also what are now popularly described as aging and deaging times; as well as on geometric properties of the interface between the materials, namely surface structure. For example, a copper pin sliding against a thick copper plate can have a COF that varies from 0.6 at low speeds (metal sliding against metal) to below 0.2 at high speeds when the copper surface begins to melt due to frictional heating. The latter speed, of course, does not determine the COF uniquely; if the pin diameter is increased so that the frictional heating is removed rapidly, the temperature drops, the pin remains solid and the COF rises to that of a 'low speed' test.
In systems with significant non-uniform stress fields, because local slip occurs before the system slides, the macroscopic coefficient of static friction depends on the applied load, system size, or shape; Amontons' law is not satisfied macroscopically.
Approximate coefficients of friction
Under certain conditions some materials have very low friction coefficients. An example is (highly ordered pyrolytic) graphite which can have a friction coefficient below 0.01.
This ultralow-friction regime is called superlubricity.
Static friction
Static friction is friction between two or more solid objects that are not moving relative to each other. For example, static friction can prevent an object from sliding down a sloped surface. The coefficient of static friction, typically denoted as μs, is usually higher than the coefficient of kinetic friction. Static friction is considered to arise as the result of surface roughness features across multiple length scales at solid surfaces. These features, known as asperities, are present down to nano-scale dimensions and result in true solid-to-solid contact existing only at a limited number of points, accounting for only a fraction of the apparent or nominal contact area. The linearity between applied load and true contact area, arising from asperity deformation, gives rise to the linearity between static frictional force and normal force, found for typical Amontons–Coulomb type friction.
The static friction force must be overcome by an applied force before an object can move. The maximum possible friction force between two surfaces before sliding begins is the product of the coefficient of static friction and the normal force: $F_\mathrm{max} = \mu_\mathrm{s} F_n$. When there is no sliding occurring, the friction force can have any value from zero up to $F_\mathrm{max}$. Any force smaller than $F_\mathrm{max}$ attempting to slide one surface over the other is opposed by a frictional force of equal magnitude and opposite direction. Any force larger than $F_\mathrm{max}$ overcomes the force of static friction and causes sliding to occur. The instant sliding occurs, static friction is no longer applicable; the friction between the two surfaces is then called kinetic friction. However, an apparent static friction can be observed even in the case when the true static friction is zero.
An example of static friction is the force that prevents a car wheel from slipping as it rolls on the ground. Even though the wheel is in motion, the patch of the tire in contact with the ground is stationary relative to the ground, so it is static rather than kinetic friction. Upon slipping, the wheel friction changes to kinetic friction. An anti-lock braking system operates on the principle of allowing a locked wheel to resume rotating so that the car maintains static friction.
The maximum value of static friction, when motion is impending, is sometimes referred to as limiting friction, although this term is not used universally.
Kinetic friction
Kinetic friction, also known as dynamic friction or sliding friction, occurs when two objects are moving relative to each other and rub together (like a sled on the ground). The coefficient of kinetic friction is typically denoted as μk, and is usually less than the coefficient of static friction for the same materials. However, Richard Feynman comments that "with dry metals it is very hard to show any difference."
The friction force between two surfaces after sliding begins is the product of the coefficient of kinetic friction and the normal force: $F_\mathrm{k} = \mu_\mathrm{k} F_n$. This is responsible for the Coulomb damping of an oscillating or vibrating system.
New models are beginning to show how kinetic friction can be greater than static friction. In many other cases roughness effects are dominant, for example in rubber to road friction. Surface roughness and contact area affect kinetic friction for micro- and nano-scale objects where surface area forces dominate inertial forces.
The origin of kinetic friction at nanoscale can be rationalized by an energy model. During sliding, a new surface forms at the back of a sliding true contact, and existing surface disappears at the front of it. Since all surfaces involve the thermodynamic surface energy, work must be spent in creating the new surface, and energy is released as heat in removing the surface. Thus, a force is required to move the back of the contact, and frictional heat is released at the front.
Angle of friction
For certain applications, it is more useful to define static friction in terms of the maximum angle before which one of the items will begin sliding. This is called the angle of friction or friction angle. It is defined as:
$\tan\theta = \mu_\mathrm{s}$
and thus:
$\theta = \arctan\mu_\mathrm{s}$
where $\theta$ is the angle from horizontal and $\mu_\mathrm{s}$ is the static coefficient of friction between the objects. This formula can also be used to calculate $\mu_\mathrm{s}$ from empirical measurements of the friction angle.
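For example (values invented for illustration), a coefficient of static friction of 0.5 corresponds to a friction angle of about 26.6 degrees, and a measured friction angle can be converted back into a coefficient:

```python
import math

mu_s = 0.5
friction_angle = math.degrees(math.atan(mu_s))  # theta = arctan(mu_s)
print(round(friction_angle, 1))                 # 26.6 degrees

# Inverse use: tilt a surface until the object just starts to slide, then infer mu_s.
measured_angle_deg = 31.0
print(round(math.tan(math.radians(measured_angle_deg)), 2))  # mu_s ~ 0.60
```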
Friction at the atomic level
Determining the forces required to move atoms past each other is a challenge in designing nanomachines. In 2008 scientists for the first time were able to move a single atom across a surface, and measure the forces required. Using ultrahigh vacuum and nearly zero temperature (5 K), a modified atomic force microscope was used to drag a cobalt atom, and a carbon monoxide molecule, across surfaces of copper and platinum.
Limitations of the Coulomb model
The Coulomb approximation follows from the assumptions that: surfaces are in atomically close contact only over a small fraction of their overall area; that this contact area is proportional to the normal force (until saturation, which takes place when all area is in atomic contact); and that the frictional force is proportional to the applied normal force, independently of the contact area. The Coulomb approximation is fundamentally an empirical construct. It is a rule-of-thumb describing the approximate outcome of an extremely complicated physical interaction. The strength of the approximation is its simplicity and versatility. Though the relationship between normal force and frictional force is not exactly linear (and so the frictional force is not entirely independent of the contact area of the surfaces), the Coulomb approximation is an adequate representation of friction for the analysis of many physical systems.
When the surfaces are conjoined, Coulomb friction becomes a very poor approximation (for example, adhesive tape resists sliding even when there is no normal force, or a negative normal force). In this case, the frictional force may depend strongly on the area of contact. Some drag racing tires are adhesive for this reason. However, despite the complexity of the fundamental physics behind friction, the relationships are accurate enough to be useful in many applications.
"Negative" coefficient of friction
A single study has demonstrated the potential for an effectively negative coefficient of friction in the low-load regime, meaning that a decrease in normal force leads to an increase in friction. This contradicts everyday experience, in which an increase in normal force leads to an increase in friction. The finding was reported in the journal Nature in October 2012 and involved the friction encountered by an atomic force microscope stylus when dragged across a graphene sheet in the presence of graphene-adsorbed oxygen.
Numerical simulation of the Coulomb model
Despite being a simplified model of friction, the Coulomb model is useful in many numerical simulation applications such as multibody systems and granular material. Even its most simple expression encapsulates the fundamental effects of sticking and sliding which are required in many applied cases, although specific algorithms have to be designed in order to efficiently numerically integrate mechanical systems with Coulomb friction and bilateral or unilateral contact. Some quite nonlinear effects, such as the so-called Painlevé paradoxes, may be encountered with Coulomb friction.
Dry friction and instabilities
Dry friction can induce several types of instabilities in mechanical systems which display a stable behaviour in the absence of friction.
These instabilities may be caused by the decrease of the friction force with an increasing velocity of sliding, by material expansion due to heat generation during friction (the thermo-elastic instabilities), or by pure dynamic effects of sliding of two elastic materials (the Adams–Martins instabilities). The latter were originally discovered in 1995 by George G. Adams and João Arménio Correia Martins for smooth surfaces and were later found in periodic rough surfaces. In particular, friction-related dynamical instabilities are thought to be responsible for brake squeal and the 'song' of a glass harp, phenomena which involve stick and slip, modelled as a drop of friction coefficient with velocity.
A practically important case is the self-oscillation of the strings of bowed instruments such as the violin, cello, hurdy-gurdy, erhu, etc.
A connection has also been discovered between dry friction and flutter instability in a simple mechanical system.
Frictional instabilities can lead to the formation of new self-organized patterns (or "secondary structures") at the sliding interface, such as in-situ formed tribofilms which are utilized for the reduction of friction and wear in so-called self-lubricating materials.
Fluid friction
Fluid friction occurs between fluid layers that are moving relative to each other. This internal resistance to flow is named viscosity. In everyday terms, the viscosity of a fluid is described as its "thickness". Thus, water is "thin", having a lower viscosity, while honey is "thick", having a higher viscosity. The less viscous the fluid, the greater its ease of deformation or movement.
All real fluids (except superfluids) offer some resistance to shearing and therefore are viscous. For teaching and explanatory purposes it is helpful to use the concept of an inviscid fluid or an ideal fluid which offers no resistance to shearing and so is not viscous.
Lubricated friction
Lubricated friction is a case of fluid friction where a fluid separates two solid surfaces. Lubrication is a technique employed to reduce wear of one or both surfaces in close proximity moving relative to one another by interposing a substance called a lubricant between the surfaces.
In most cases the applied load is carried by pressure generated within the fluid due to the frictional viscous resistance to motion of the lubricating fluid between the surfaces. Adequate lubrication allows smooth continuous operation of equipment, with only mild wear, and without excessive stresses or seizures at bearings. When lubrication breaks down, metal or other components can rub destructively over each other, causing heat and possibly damage or failure.
Skin friction
Skin friction arises from the interaction between the fluid and the skin of the body, and is directly related to the area of the surface of the body that is in contact with the fluid. Skin friction follows the drag equation and rises with the square of the velocity.
Skin friction is caused by viscous drag in the boundary layer around the object. There are two ways to decrease skin friction: the first is to shape the moving body so that smooth flow is possible, like an airfoil. The second method is to decrease the length and cross-section of the moving object as much as is practicable.
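A hedged sketch of the drag-equation form of skin friction mentioned above; the density, speed, wetted area, and skin friction coefficient below are placeholder values chosen only to illustrate the quadratic dependence on velocity, not data from the article.

```python
def skin_friction_drag(rho, velocity, c_f, wetted_area):
    """Skin friction in drag-equation form: F = 0.5 * rho * v^2 * C_f * A."""
    return 0.5 * rho * velocity**2 * c_f * wetted_area

# Placeholder values: air at 1.2 kg/m^3, skin friction coefficient 0.004, 2 m^2 wetted area.
print(skin_friction_drag(1.2, 30.0, 0.004, 2.0))  # ~4.3 N at 30 m/s
print(skin_friction_drag(1.2, 60.0, 0.004, 2.0))  # ~17.3 N at 60 m/s (quadruples as speed doubles)
```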
Internal friction
Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation.
Plastic deformation in solids is an irreversible change in the internal molecular structure of an object. This change may be due to either (or both) an applied force or a change in temperature. The change of an object's shape is called strain. The force causing it is called stress.
Elastic deformation in solids is reversible change in the internal molecular structure of an object. Stress does not necessarily cause permanent change. As deformation occurs, internal forces oppose the applied force. If the applied stress is not too large these opposing forces may completely resist the applied force, allowing the object to assume a new equilibrium state and to return to its original shape when the force is removed. This is known as elastic deformation or elasticity.
Radiation friction
As a consequence of light pressure, Einstein in 1909 predicted the existence of "radiation friction" which would oppose the movement of matter. He wrote, "radiation will exert pressure on both sides of the plate. The forces of pressure exerted on the two sides are equal if the plate is at rest. However, if it is in motion, more radiation will be reflected on the surface that is ahead during the motion (front surface) than on the back surface. The backward-acting force of pressure exerted on the front surface is thus larger than the force of pressure acting on the back. Hence, as the resultant of the two forces, there remains a force that counteracts the motion of the plate and that increases with the velocity of the plate. We will call this resultant 'radiation friction' in brief."
Other types of friction
Rolling resistance
Rolling resistance is the force that resists the rolling of a wheel or other circular object along a surface, caused by deformations in the object or surface. Generally the force of rolling resistance is less than that associated with kinetic friction. Typical values for the coefficient of rolling resistance are around 0.001.
One of the most common examples of rolling resistance is the movement of motor vehicle tires on a road, a process which generates heat and sound as by-products.
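As a rough worked example of the relation, the rolling resistance force is simply the coefficient times the normal load. The vehicle mass below is assumed for illustration, and real tyre/road coefficients vary considerably with tyre type and surface.

```python
def rolling_resistance_force(c_rr, mass_kg, g=9.81):
    """Rolling resistance on level ground: F = C_rr * N, with N = m * g."""
    return c_rr * mass_kg * g

# Using the coefficient quoted above (0.001) and an assumed 1500 kg vehicle.
print(round(rolling_resistance_force(0.001, 1500.0), 1))  # ~14.7 N
```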
Braking friction
Any wheel equipped with a brake is capable of generating a large retarding force, usually for the purpose of slowing and stopping a vehicle or piece of rotating machinery. Braking friction differs from rolling friction because the coefficient of friction for rolling friction is small whereas the coefficient of friction for braking friction is designed to be large by choice of materials for brake pads.
Triboelectric effect
Rubbing two materials against each other can lead to charge transfer, either electrons or ions. The energy required for this contributes to the friction. In addition, sliding can cause a build-up of electrostatic charge, which can be hazardous if flammable gases or vapours are present. When the static build-up discharges, explosions can be caused by ignition of the flammable mixture.
Belt friction
Belt friction is a physical property observed from the forces acting on a belt wrapped around a pulley, when one end is being pulled. The resulting tension, which acts on both ends of the belt, can be modeled by the belt friction equation.
In practice, the theoretical tension acting on the belt or rope calculated by the belt friction equation can be compared to the maximum tension the belt can support. This helps a designer of such a rig to know how many times the belt or rope must be wrapped around the pulley to prevent it from slipping. Mountain climbers and sailing crews demonstrate a standard knowledge of belt friction when accomplishing basic tasks.
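A minimal sketch of the belt (capstan) friction relation in its usual form $T_\mathrm{load} = T_\mathrm{hold}\, e^{\mu \phi}$, used to answer exactly that wrapping question; the load, coefficient, and target holding force below are invented for the example.

```python
import math

def holding_force(load_tension, mu, wrap_angle_rad):
    """Belt/capstan friction: T_hold = T_load * exp(-mu * phi)."""
    return load_tension * math.exp(-mu * wrap_angle_rad)

# Illustrative: a 1000 N load, mu = 0.3 between rope and post.
print(round(holding_force(1000.0, 0.3, 3 * 2 * math.pi), 1))  # ~3.5 N with three full wraps

# Smallest whole number of wraps so the hand only needs to supply 50 N:
wraps = 0
while holding_force(1000.0, 0.3, wraps * 2 * math.pi) > 50.0:
    wraps += 1
print(wraps)  # 2
```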
Reduction
Devices
Devices such as wheels, ball bearings, roller bearings, and air cushion or other types of fluid bearings can change sliding friction into a much smaller type of rolling friction.
Many thermoplastic materials such as nylon, HDPE and PTFE are commonly used in low friction bearings. They are especially useful because the coefficient of friction falls with increasing imposed load. For improved wear resistance, very high molecular weight grades are usually specified for heavy duty or critical bearings.
Lubricants
A common way to reduce friction is by using a lubricant, such as oil, water, or grease, which is placed between the two surfaces, often dramatically lessening the coefficient of friction. The science of friction and lubrication is called tribology. Lubricant technology is the application of this science to the formulation and use of lubricants, especially for industrial or commercial objectives.
Superlubricity, a recently discovered effect, has been observed in graphite: it is the substantial decrease of friction between two sliding objects, approaching zero levels. A very small amount of frictional energy would still be dissipated.
Lubricants to overcome friction need not always be thin, turbulent fluids or powdery solids such as graphite and talc; acoustic lubrication actually uses sound as a lubricant.
Another way to reduce friction between two parts is to superimpose micro-scale vibration onto one of the parts. This can be sinusoidal vibration, as used in ultrasound-assisted cutting, or vibration noise, known as dither.
Energy of friction
According to the law of conservation of energy, no energy is destroyed due to friction, though it may be lost to the system of concern. Mechanical energy is transformed into heat. A sliding hockey puck comes to rest because friction converts its kinetic energy into heat which raises the internal energy of the puck and the ice surface. Since heat quickly dissipates, many early philosophers, including Aristotle, wrongly concluded that moving objects come to rest spontaneously.
When an object is pushed along a surface along a path C, the energy converted to heat is given by a line integral, in accordance with the definition of work
$E_{th} = \int_C \mathbf{F}_\mathrm{fric}(\mathbf{x}) \cdot d\mathbf{x} = \int_C \mu_\mathrm{k}\, \mathbf{F}_n(\mathbf{x}) \cdot d\mathbf{x},$
where
$\mathbf{F}_\mathrm{fric}$ is the friction force,
$\mathbf{F}_n$ is the vector obtained by multiplying the magnitude of the normal force by a unit vector pointing against the object's motion,
$\mu_\mathrm{k}$ is the coefficient of kinetic friction, which is inside the integral because it may vary from location to location (e.g. if the material changes along the path),
$\mathbf{x}$ is the position of the object.
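A minimal numerical sketch of the integral above for the simple case of a constant normal force and a straight path on which the material (and hence $\mu_\mathrm{k}$) changes partway along; all numbers are invented for illustration.

```python
def friction_heat(normal_force, segments):
    """Approximate E_th = integral of mu_k * N ds along a path.

    segments: list of (mu_k, length_m) pairs describing successive parts of the path.
    """
    return sum(mu_k * normal_force * length for mu_k, length in segments)

normal = 98.1                        # a 10 kg block on a level floor, N = m * g
path = [(0.3, 2.0), (0.6, 2.0)]      # 2 m on one surface, then 2 m on a rougher one
print(friction_heat(normal, path))   # ~176.6 J converted to heat
```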
Dissipation of energy by friction in a process is a classic example of thermodynamic irreversibility.
Work of friction
The work done by friction can translate into deformation, wear, and heat that can affect the contact surface properties (even the coefficient of friction between the surfaces). This can be beneficial, as in polishing. The work of friction is used to mix and join materials, such as in the process of friction welding. Excessive erosion or wear of mating sliding surfaces occurs when work due to frictional forces rises to unacceptable levels. Harder corrosion particles caught between mating surfaces in relative motion (fretting) exacerbate the wear caused by frictional forces. As surfaces are worn by work due to friction, fit and surface finish of an object may degrade until it no longer functions properly. For example, bearing seizure or failure may result from excessive wear due to work of friction.
In the reference frame of the interface between two surfaces, static friction does no work, because there is never displacement between the surfaces. In the same reference frame, kinetic friction is always in the direction opposite the motion, and does negative work. However, friction can do positive work in certain frames of reference. One can see this by placing a heavy box on a rug, then pulling on the rug quickly. In this case, the box slides backwards relative to the rug, but moves forward relative to the frame of reference in which the floor is stationary. Thus, the kinetic friction between the box and rug accelerates the box in the same direction that the box moves, doing positive work.
When sliding takes place between two rough bodies in contact, the algebraic sum of the works done is different from zero, and the algebraic sum of the quantities of heat gained by the two bodies is equal to the quantity of work lost by friction, and the total quantity of heat gained is positive. In a natural thermodynamic process, the work done by an agency in the surroundings of a thermodynamic system or working body is greater than the work received by the body, because of friction. Thermodynamic work is measured by changes in a body's state variables, sometimes called work-like variables, other than temperature and entropy. Examples of work-like variables, which are ordinary macroscopic physical variables and which occur in conjugate pairs, are pressure – volume, and electric field – electric polarization. Temperature and entropy are a specifically thermodynamic conjugate pair of state variables. They can be affected microscopically at an atomic level, by mechanisms such as friction, thermal conduction, and radiation. The part of the work done by an agency in the surroundings that does not change the volume of the working body but is dissipated in friction, is called isochoric work. It is received as heat, by the working body and sometimes partly by a body in the surroundings. It is not counted as thermodynamic work received by the working body.
Applications
Friction is an important factor in many engineering disciplines.
Transportation
Automobile brakes inherently rely on friction, slowing a vehicle by converting its kinetic energy into heat. Incidentally, dispersing this large amount of heat safely is one technical challenge in designing brake systems. Disk brakes rely on friction between a disc and brake pads that are squeezed transversely against the rotating disc. In drum brakes, brake shoes or pads are pressed outwards against a rotating cylinder (brake drum) to create friction. Since braking discs can be more efficiently cooled than drums, disc brakes have better stopping performance.
Rail adhesion refers to the grip wheels of a train have on the rails, see Frictional contact mechanics.
Road slipperiness is an important design and safety factor for automobiles
Split friction is a particularly dangerous condition arising due to varying friction on either side of a car.
Road texture affects the interaction of tires and the driving surface.
Measurement
A tribometer is an instrument that measures friction on a surface.
A profilograph is a device used to measure pavement surface roughness.
Household usage
Friction is used to heat and ignite matchsticks (friction between the head of a matchstick and the rubbing surface of the match box).
Sticky pads are used to prevent objects from slipping off smooth surfaces by effectively increasing the friction coefficient between the surface and the object.
| Physical sciences | Classical mechanics | null |
11090 | https://en.wikipedia.org/wiki/Forest | Forest | A forest is an ecosystem characterized by a dense community of trees. Hundreds of definitions of forest are used throughout the world, incorporating factors such as tree density, tree height, land use, legal standing, and ecological function. The United Nations' Food and Agriculture Organization (FAO) defines a forest as, "Land spanning more than 0.5 hectares with trees higher than 5 meters and a canopy cover of more than 10 percent, or trees able to reach these thresholds in situ. It does not include land that is predominantly under agricultural or urban use." Using this definition, Global Forest Resources Assessment 2020 found that forests covered 4.06 billion hectares, or approximately 31 percent of the world's land area, in 2020.
Forests are the largest terrestrial ecosystems of Earth by area, and are found around the globe. 45 percent of forest land is in the tropical latitudes. The next largest share of forests are found in subarctic climates, followed by temperate, and subtropical zones.
Forests account for 75% of the gross primary production of the Earth's biosphere, and contain 80% of the Earth's plant biomass. Net primary production is estimated at 21.9 gigatonnes of biomass per year for tropical forests, 8.1 for temperate forests, and 2.6 for boreal forests.
Forests form distinctly different biomes at different latitudes and elevations, and with different precipitation and evapotranspiration rates. These biomes include boreal forests in subarctic climates, tropical moist forests and tropical dry forests around the Equator, and temperate forests at the middle latitudes. Forests form in areas of the Earth with high rainfall, while drier conditions produce a transition to savanna. However, in areas with intermediate rainfall levels, forest transitions to savanna rapidly when the percentage of land that is covered by trees drops below 40 to 45 percent. Research conducted in the Amazon rainforest shows that trees can alter rainfall rates across a region, releasing water from their leaves in anticipation of seasonal rains to trigger the wet season early. Because of this, seasonal rainfall in the Amazon begins two to three months earlier than the climate would otherwise allow. Deforestation in the Amazon and anthropogenic climate change hold the potential to interfere with this process, causing the forest to pass a threshold where it transitions into savanna.
Deforestation threatens many forest ecosystems. Deforestation occurs when humans remove trees from a forested area by cutting or burning, either to harvest timber or to make way for farming. Most deforestation today occurs in tropical forests. The vast majority of this deforestation is because of the production of four commodities: wood, beef, soy, and palm oil. Over the past 2,000 years, the area of land covered by forest in Europe has been reduced from 80% to 34%. Large areas of forest have also been cleared in China and in the eastern United States, in which only 0.1% of land was left undisturbed. Almost half of Earth's forest area (49 percent) is relatively intact, while 9 percent is found in fragments with little or no connectivity. Tropical rainforests and boreal coniferous forests are the least fragmented, whereas subtropical dry forests and temperate oceanic forests are among the most fragmented. Roughly 80 percent of the world's forest area is found in patches larger than . The remaining 20 percent is located in more than 34 million patches around the world – the vast majority less than in size.
Human society and forests can affect one another positively or negatively. Forests provide ecosystem services to humans and serve as tourist attractions. Forests can also affect people's health. Human activities, including unsustainable use of forest resources, can negatively affect forest ecosystems.
Definitions
Although the word forest is commonly used, there is no universally recognised precise definition, with more than 800 definitions of forest used around the world. Although a forest is usually defined by the presence of trees, under many definitions an area completely lacking trees may still be considered a forest if it grew trees in the past, will grow trees in the future, or was legally designated as a forest regardless of vegetation type.
There are three broad categories of definitions of forest in use: administrative, land use, and land cover. Administrative definitions are legal designations, and may not reflect the type of vegetation that grows upon the land; an area can be legally designated "forest" even if no trees grow on it. Land-use definitions are based on the primary purpose the land is used for. Under a land-use definition, any area used primarily for harvesting timber, including areas that have been cleared by harvesting, disease, fire, or for the construction of roads and infrastructure, are still defined as forests, even if they contain no trees. Land-cover definitions define forests based upon the density of trees, area of tree canopy cover, or area of the land occupied by the cross-section of tree trunks (basal area) meeting a particular threshold. This type of definition depends upon the presence of trees sufficient to meet the threshold, or at least of immature trees that are expected to meet the threshold once they mature.
Under land-cover definitions, there is considerable variation on where the cutoff points are between a forest, woodland, and savanna. Under some definitions, to be considered a forest requires very high levels of tree canopy cover, from 60% to 100%, which excludes woodlands and savannas, which have a lower canopy cover. Other definitions consider savannas to be a type of forest, and include all areas with tree canopies over 10%.
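Purely as an illustration of how such canopy-cover cutoffs work, the following toy sketch uses the two thresholds quoted above; the function and labels are invented for the example, and real classification systems also weigh tree height, land use, and other criteria.

```python
def classify_by_canopy(canopy_cover_pct, strict=False):
    """Toy land-cover classification using the canopy-cover thresholds quoted above."""
    if strict:
        # Strict definition: forest requires at least 60% canopy cover.
        if canopy_cover_pct >= 60:
            return "forest"
        return "woodland/savanna" if canopy_cover_pct > 10 else "non-forest"
    # Inclusive definition: anything above 10% canopy cover counts as forest.
    return "forest" if canopy_cover_pct > 10 else "non-forest"

print(classify_by_canopy(35))               # 'forest' under the inclusive definition
print(classify_by_canopy(35, strict=True))  # 'woodland/savanna' under the strict one
```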
Some areas covered with trees are legally defined as agricultural areas, for example Norway spruce plantations, under Austrian forest law, when the trees are being grown as Christmas trees and are below a certain height.
Etymology
The word forest derives from the Old French forest (also forès), denoting "forest, vast expanse covered by trees"; forest was first introduced into English as the word denoting wild land set aside for hunting without necessarily having trees on the land. It is possibly a borrowing, probably via Frankish or Old High German, of the Medieval Latin foresta, denoting "open wood"; Carolingian scribes first used foresta in the capitularies of Charlemagne, specifically to denote the royal hunting grounds of the king. The word was not endemic to the Romance languages, e.g., native words for forest in the Romance languages derived from the Latin silva, which denoted "forest" and "wood(land)" (cf. the English sylva and sylvan; the Italian, Spanish, and Portuguese selva; the Romanian silvă; the Old French selve). Cognates of forest in Romance languages—e.g., the Italian foresta, Spanish and Portuguese floresta, etc.—are all ultimately derivations of the French word.
The precise origin of Medieval Latin foresta is obscure. Some authorities claim the word derives from the Late Latin phrase forestam silvam, denoting "the outer wood"; others claim the word is a Latinisation of the Frankish *forhist, denoting "forest, wooded country", and was assimilated to forestam silvam, pursuant to the common practice of Frankish scribes. The Old High German forst denoting "forest", Middle Low German vorst denoting "forest", Old English fyrhþ denoting "forest, woodland, game preserve, hunting ground" (English frith), and Old Norse fýri denoting "coniferous forest", all of which derive from the Proto-Germanic *furhísa-, *furhíþija- (denoting "a fir-wood, coniferous forest", from the Proto-Indo-European *perkwu-, denoting "a coniferous or mountain forest, wooded height"), all attest to the Frankish *forhist.
Uses of forest in English to denote any uninhabited and unenclosed area are presently considered archaic. The Norman rulers of England introduced the word as a legal term, as seen in Latin texts such as Magna Carta, to denote uncultivated land that was legally designated for hunting by feudal nobility (see royal forest).
These hunting forests did not necessarily contain any trees. Because that often included significant areas of woodland, "forest" eventually came to connote woodland in general, regardless of tree density. By the beginning of the fourteenth century, English texts used the word in all three of its senses: common, legal, and archaic. Other English words used to denote "an area with a high density of trees" are firth, frith, holt, weald, wold, wood, and woodland. Unlike forest, these are all derived from Old English and were not borrowed from another language. Some present classifications reserve woodland for denoting a locale with more open space between trees, and distinguish kinds of woodlands as open forests and closed forests, premised on their crown covers. Finally, sylva (plural sylvae or, less classically, sylvas) is a peculiar English spelling of the Latin silva, denoting a "woodland", and has precedent in English, including its plural forms. While its use as a synonym of forest, and as a Latinate word denoting a woodland, may be admitted; in a specific technical sense it is restricted to denoting the species of trees that comprise the woodlands of a region, as in its sense in the subject of silviculture. The resorting to sylva in English indicates more precisely the denotation that the use of forest intends.
Evolutionary history
The first known forests on Earth arose in the Middle Devonian (approximately 390 million years ago), with the evolution of cladoxylopsid plants like Calamophyton. Archaeopteris, which appeared in the Late Devonian, was a plant that was both tree-like and fern-like, growing to considerable heights. It quickly spread throughout the world, from the equator to subpolar latitudes. It is the first species known to cast shade due to its fronds and to form soil from its roots. Archaeopteris was deciduous, dropping its fronds onto the forest floor, with the shade, soil, and forest duff from the dropped fronds creating the early forest. The shed organic matter altered the freshwater environment, slowing its flow and providing food. This promoted the development of freshwater fish.
Ecology
Forests account for 75% of the gross primary productivity of the Earth's biosphere, and contain 80% of the Earth's plant biomass. Biomass per unit area is high compared to other vegetation communities. Much of this biomass occurs below ground in the root systems and as partially decomposed plant detritus. The woody component of a forest contains lignin, which is relatively slow to decompose compared with other organic materials such as cellulose or carbohydrate. The world's forests contain about 606 gigatonnes of living biomass (above- and below-ground) and 59 gigatonnes of dead wood. The total biomass has decreased slightly since 1990, but biomass per unit area has increased.
Forest ecosystems broadly differ based on climate; latitudes 10° north and south of the equator are mostly covered in tropical rainforest, and the latitudes between 53°N and 67°N have boreal forest. As a general rule, forests dominated by angiosperms (broadleaf forests) are more species-rich than those dominated by gymnosperms (conifer, montane, or needleleaf forests), although exceptions exist. The trees that form the principal structural and defining component of a forest may be of a great variety of species (as in tropical rainforests and temperate deciduous forests), or relatively few species over large areas (e.g., taiga and arid montane coniferous forests). The biodiversity of forests also encompasses shrubs, herbaceous plants, mosses, ferns, lichens, fungi, and a variety of animals.
Trees rising to great heights add a vertical dimension to the area of land that can support plant and animal species, opening up numerous ecological niches for arboreal animal species, epiphytes, and various species that thrive under the regulated microclimate created under the canopy. Forests have intricate three-dimensional structures that increase in complexity with lower levels of disturbance and greater variety of tree species.
The biodiversity of forests varies considerably according to factors such as forest type, geography, climate, and soils – in addition to human use. Most forest habitats in temperate regions support relatively few animal and plant species, and species that tend to have large geographical distributions, while the montane forests of Africa, South America, Southeast Asia, and lowland forests of Australia, coastal Brazil, the Caribbean islands, Central America, and insular Southeast Asia have many species with small geographical distributions. Areas with dense human populations and intense agricultural land use, such as Europe, parts of Bangladesh, China, India, and North America, are less intact in terms of their biodiversity. Northern Africa, southern Australia, coastal Brazil, Madagascar, and South Africa are also identified as areas with striking losses in biodiversity intactness.
Components
A forest consists of many components that can be broadly divided into two categories: biotic (living) and abiotic (non-living). The living parts include trees, shrubs, vines, grasses and other herbaceous (non-woody) plants, mosses, algae, fungi, insects, mammals, birds, reptiles, amphibians, and microorganisms living on the plants and animals and in the soil, connected by mycorrhizal networks.
Layers
The main layers of all forest types are the forest floor, the understory, and the canopy. The emergent layer, above the canopy, exists in tropical rainforests. Each layer has a different set of plants and animals, depending upon the availability of sunlight, moisture, and food.
The forest floor is covered in dead plant material such as fallen leaves and decomposing logs, which detritivores break down into new soil. The layer of decaying leaves that covers the soil is necessary for many insects to overwinter and for amphibians, birds, and other animals to shelter and forage for food. Leaf litter also keeps the soil moist, stops erosion, and protects roots against extreme heat and cold. The fungal mycelium that helps form the mycorrhizal network transmits nutrients from decaying material to trees and other plants. The forest floor supports a variety of plants, ferns, grasses, and tree seedlings, as well as animals such as ants, amphibians, spiders, and millipedes.
Understory is made up of bushes, shrubs, and young trees that are adapted to living in the shade of the canopy.
Canopy is formed by the mass of intertwined branches, twigs, and leaves of mature trees. The crowns of the dominant trees receive most of the sunlight. This is the most productive part of the trees, where maximum food is produced. The canopy forms a shady, protective "umbrella" over the rest of the forest.
Emergent layer exists in a tropical rain forest and is composed of a few scattered trees that tower over the canopy.
In botany and countries like Germany and Poland, a different classification of forest vegetation is often used: tree, shrub, herb, and moss layers (see stratification (vegetation)).
Types
Forests are classified differently and to different degrees of specificity. One such classification is in terms of the biomes in which they exist, combined with leaf longevity of the dominant species (whether they are evergreen or deciduous). Another distinction is whether the forests are composed predominantly of broadleaf trees, coniferous (needle-leaved) trees, or mixed.
Boreal forests occupy the subarctic zone and are generally evergreen and coniferous.
Temperate zones support both broadleaf deciduous forests (e.g., temperate deciduous forest) and evergreen coniferous forests (e.g., temperate coniferous forests and temperate rainforests). Warm temperate zones support broadleaf evergreen forests, including laurel forests.
Tropical and subtropical forests include tropical and subtropical moist forests, tropical and subtropical dry forests, and tropical and subtropical coniferous forests.
Forests are classified according to physiognomy based on their overall physical structure or developmental stage (e.g. old growth vs. second growth).
Forests can also be classified more specifically based on the climate and the dominant tree species present, resulting in numerous different forest types (e.g., Ponderosa pine/Douglas fir forest).
The number of trees in the world, according to a 2015 estimate, is 3 trillion, of which 1.4 trillion are in the tropics or sub-tropics, 0.6 trillion in the temperate zones, and 0.7 trillion in the coniferous boreal forests. The 2015 estimate is about eight times higher than previous estimates, and is based on tree densities measured on over 400,000 plots. It remains subject to a wide margin of error, not least because the samples are mainly from Europe and North America.
Forests can also be classified according to the amount of human alteration. Old-growth forest contains mainly natural patterns of biodiversity in established seral patterns, and they contain mainly species native to the region and habitat. In contrast, secondary forest is forest regrowing following timber harvest and may contain species originally from other regions or habitats.
Different global forest classification systems have been proposed, but none has gained universal acceptance. UNEP-WCMC's forest category classification system is a simplification of other, more complex systems (e.g. UNESCO's forest and woodland 'subformations'). This system divides the world's forests into 26 major types, which reflect climatic zones as well as the principal types of trees. These 26 major types can be reclassified into 6 broader categories: temperate needleleaf, temperate broadleaf and mixed, tropical moist, tropical dry, sparse trees and parkland, and forest plantations. Each category is described in a separate section below.
Temperate needleleaf
Temperate needleleaf forests mostly occupy the higher latitudes of the Northern Hemisphere, as well as some warm temperate areas, especially on nutrient-poor or otherwise unfavourable soils. These forests are composed entirely, or nearly so, of coniferous species (Coniferophyta). In the Northern Hemisphere, pines Pinus, spruces Picea, larches Larix, firs Abies, Douglas firs Pseudotsuga, and hemlocks Tsuga make up the canopy; but other taxa are also important. In the Southern Hemisphere, most coniferous trees (members of Araucariaceae and Podocarpaceae) occur mixed with broadleaf species, and are classed as broadleaf-and-mixed forests.
Temperate broadleaf and mixed
Temperate broadleaf and mixed forests include a substantial component of trees of the Anthophyta group. They are generally characteristic of the warmer temperate latitudes, but extend to cool temperate ones, particularly in the southern hemisphere. They include such forest types as the mixed deciduous forests of the United States and their counterparts in China and Japan; the broadleaf evergreen rainforests of Japan, Chile, and Tasmania; the sclerophyllous forests of Australia, central Chile, the Mediterranean, and California; and the southern beech Nothofagus forests of Chile and New Zealand.
Tropical moist
There are many different types of tropical moist forests, with lowland evergreen broad-leaf tropical rainforests: for example várzea and igapó forests and the terra firme forests of the Amazon Basin; the peat swamp forests; dipterocarp forests of Southeast Asia; and the high forests of the Congo Basin. Seasonal tropical forests, perhaps the best description for the colloquial term "jungle", typically range from the rainforest zone 10 degrees north or south of the equator, to the Tropic of Cancer and Tropic of Capricorn. Forests located on mountains are also included in this category, divided largely into upper and lower montane formations, on the basis of the variation of physiognomy corresponding to changes in altitude.
Tropical dry
Tropical dry forests are characteristic of areas in the tropics affected by seasonal drought. The seasonality of rainfall is usually reflected in the deciduousness of the forest canopy, with most trees being leafless for several months of the year. Under some conditions, such as less fertile soils or less predictable drought regimes, the proportion of evergreen species increases and the forests are characterised as "sclerophyllous". Thorn forest, a dense forest of low stature with a high frequency of thorny or spiny species, is found where drought is prolonged, and especially where grazing animals are plentiful. On very poor soils, and especially where fire or herbivory are recurrent phenomena, savannas develop.
Sparse trees and savanna
Sparse trees and savanna are forests with sparse tree-canopy cover. They occur principally in areas of transition from forested to non-forested landscapes. The two major zones in which these ecosystems occur are in the boreal region and in the seasonally dry tropics. At high latitudes, north of the main zone of boreal forestland, growing conditions are not adequate to maintain a continuously closed forest cover, so tree cover is both sparse and discontinuous. This vegetation is variously called open taiga, open lichen woodland, and forest tundra. A savanna is a mixed woodland–grassland ecosystem characterized by the trees being sufficiently widely spaced so that the canopy does not close. The open canopy allows sufficient light to reach the ground to support an unbroken herbaceous layer that consists primarily of grasses. Savannas maintain an open canopy despite a high tree density.
Plantations
Forest plantations are generally intended for the production of timber and pulpwood. Commonly mono-specific, planted with even spacing between the trees, and intensively managed, these forests are not generally important as habitat for native biodiversity. However, some are managed in ways that enhance their biodiversity protection functions, and they can provide ecosystem services such as nutrient capital maintenance, watershed and soil structure protection, and carbon storage.
Area
The annual net loss of forest area has decreased since 1990, but the world is not on track to meet the target of the United Nations Strategic Plan for Forests to increase forest area by 3 percent by 2030.
While deforestation is taking place in some areas, new forests are being established through natural expansion or deliberate efforts in other areas. As a result, the net loss of forest area is less than the rate of deforestation; and it, too, is decreasing: from per year in the 1990s to per year during 2010–2020. In absolute terms, the global forest area decreased by between 1990 and 2020, which is an area about the size of Libya.
Societal significance
Ecosystem services
Forests provide a diversity of ecosystem services including:
Converting carbon dioxide into oxygen and biomass. A full-grown tree produces about of net oxygen per year.
Acting as a carbon sink, and therefore helping to mitigate climate change.
Aiding in regulating climate. For example, research from 2017 shows that forests induce rainfall. If the forest is cut, it can lead to drought, and in the tropics to occupational heat stress of outdoor workers.
Purifying water.
Mitigating natural hazards such as floods.
Serving as a genetic reserve.
Serving as a source of lumber and as recreational areas.
Serving as a source of woodlands and trees for millions of people dependent almost entirely on forests for subsistence for their essential fuelwood, food, and fodder needs.
Some researchers state that forests do not only provide benefits, but can in certain cases also incur costs to humans. Forests may impose an economic burden, diminish the enjoyment of natural areas, reduce the food-producing capacity of grazing land and cultivated land, reduce biodiversity, reduce available water for humans and wildlife, harbour dangerous or destructive wildlife, and act as reservoirs of human and livestock disease.
An important consideration regarding carbon sequestration is that forests can turn from a carbon sink to a carbon source if plant diversity, density, or forest area decreases, as has been observed in different tropical forests. The typical tropical forest may become a carbon source by the 2060s. An assessment of European forests found early signs of carbon sink saturation, after decades of increasing strength. The Intergovernmental Panel on Climate Change (IPCC) concluded that a combination of measures aimed at increasing forest carbon stocks and sustainable timber offtake will generate the largest carbon sequestration benefit.
Forest-dependent people
The term forest-dependent people is used to describe any of a wide variety of livelihoods that are dependent on access to forests, products harvested from forests, or ecosystem services provided by forests, including those of Indigenous peoples dependent on forests. In India, approximately 22 percent of the population belongs to forest-dependent communities, which live in close proximity to forests and practice agroforestry as a principal part of their livelihood. People of Ghana who rely on timber and bushmeat harvested from forests and Indigenous peoples of the Amazon rainforest are also examples of forest-dependent people. Though forest-dependence by more common definitions is statistically associated with poverty and rural livelihoods, elements of forest-dependence exist in communities with a wide range of characteristics. Generally, richer households derive more cash value from forest resources, whereas among poorer households, forest resources are more important for home consumption and increase community resilience.
Indigenous peoples
Forests are fundamental to the culture and livelihood of indigenous people groups that live in and depend on forests, many of which have been removed from and denied access to the lands on which they lived as part of global colonialism. Indigenous lands contain 36% or more of intact forest worldwide, host more biodiversity, and experience less deforestation. Indigenous activists have argued that degradation of forests and indigenous peoples' marginalization and land dispossession are interconnected. Other concerns among indigenous peoples include lack of Indigenous involvement in forest management and loss of knowledge related for the forest ecosystem. Since 2002, the amount of land that is legally owned by or designated for indigenous peoples has broadly increased, but land acquisition in lower-income countries by multinational corporations, often with little or no consultation of indigenous peoples, has also increased. Research in the Amazon rainforest suggests that indigenous methods of agroforestry form reservoirs of biodiversity. In the U.S. state of Wisconsin, forests managed by indigenous people have more plant diversity, fewer invasive species, higher tree regeneration rates, and higher volume of trees.
Management
Forest management has changed considerably over the last few centuries, with rapid changes from the 1980s onward, culminating in a practice now referred to as sustainable forest management. Forest ecologists concentrate on forest patterns and processes, usually with the aim of elucidating cause-and-effect relationships. Foresters who practice sustainable forest management focus on the integration of ecological, social, and economic values, often in consultation with local communities and other stakeholders.
Humans have generally decreased the amount of forest worldwide. Anthropogenic factors that can affect forests include logging, urban sprawl, human-caused forest fires, acid rain, invasive species, and the slash and burn practices of swidden agriculture or shifting cultivation. The loss and re-growth of forests lead to a distinction between two broad types of forest: primary or old-growth forest and secondary forest. There are also many natural factors that can cause changes in forests over time, including forest fires, insects, diseases, weather, competition between species, etc. In 1997, the World Resources Institute recorded that only 20% of the world's original forests remained in large intact tracts of undisturbed forest. More than 75% of these intact forests lie in three countries: the boreal forests of Russia and Canada, and the rainforest of Brazil.
According to Food and Agriculture Organization's (FAO) Global Forest Resources Assessment 2020, an estimated of forest have been lost worldwide through deforestation since 1990, but the rate of forest loss has declined substantially. In the most recent five-year period (2015–2020), the annual rate of deforestation was estimated at , down from annually in 2010–2015.
The forest transition
The transition of a region from forest loss to net gain in forested land is referred to as the forest transition. This change occurs through a few main pathways, including increase in commercial tree plantations, adoption of agroforestry techniques by small farmers, or spontaneous regeneration when former agricultural land is abandoned. It can be motivated by the economic benefits of forests, the ecosystem services forests provide, or cultural changes where people increasingly appreciate forests for their spiritual, aesthetic, or otherwise intrinsic value. According to the Special Report on Global Warming of 1.5 °C of the Intergovernmental Panel on Climate Change, to avoid a temperature rise of more than 1.5 degrees above pre-industrial levels, there will need to be an increase in global forest cover equal to the land area of Canada () by 2050.
China instituted a ban on logging, beginning in 1998, due to the erosion and flooding that it caused. In addition, ambitious tree-planting programmes in countries such as China, India, the United States, and Vietnam – combined with natural expansion of forests in some regions – have added more than of new forests annually. As a result, the net loss of forest area was reduced to per year between 2000 and 2010, down from annually in the 1990s. In 2015, a study in Nature Climate Change showed that the trend has recently been reversed, leading to an "overall gain" in global biomass and forests. This gain is due especially to reforestation in China and Russia. New forests are not equivalent to old growth forests in terms of species diversity, resilience, and carbon capture. On 7 September 2015, the FAO released a new study stating that over the last 25 years the global deforestation rate has decreased by 50% due to improved management of forests and greater government protection.
There is an estimated of forest in protected areas worldwide. Of the six major world regions, South America has the highest share of forests in protected areas, at 31 percent. The global area of forest in protected areas has increased by since 1990, but the rate of annual increase slowed in 2010–2020.
Smaller areas of woodland in cities may be managed as urban forestry, sometimes within public parks. These are often created for human benefits; Attention Restoration Theory argues that spending time in nature reduces stress and improves health, while forest schools and kindergartens help young people to develop social as well as scientific skills in forests. These typically need to be close to where the children live.
Canada
Canada has about of forest land. More than 90% of forest land is publicly owned and about 50% of the total forest area is allocated for harvesting. These allocated areas are managed using the principles of sustainable forest management, which include extensive consultation with local stakeholders. About eight percent of Canada's forest is legally protected from resource development. Much more forest land—about 40 percent of the total forest land base—is subject to varying degrees of protection through processes such as integrated land use planning or defined management areas, such as certified forests.
By December 2006, over of forest land in Canada (about half the global total) had been certified as being sustainably managed. Clearcutting, first used in the latter half of the 20th century, is less expensive, but devastating to the environment; and companies are required by law to ensure that harvested areas are adequately regenerated. Most Canadian provinces have regulations limiting the size of new clear-cuts, although some older ones grew to over several years.
The Canadian Forest Service is the government department responsible for forests in Canada.
Latvia
Latvia has about of forest land, which equates to about 50.5% of Latvia's total area. About 46% of the forest land is publicly owned and 54% is in private hands. Latvia's forests have been steadily increasing over the years, in contrast to many other nations, mostly due to the forestation of land not used for agriculture. In 1935, there were only of forest; today this has increased by more than 150%. Birch is the most common tree at 28.2%, followed by pine (26.9%), spruce (18.3%), grey alder (9.7%), aspen (8.0%), black alder (5.7%), oak/ash (1.2%), with other hardwood trees making up the rest (2.0%).
United States
In the United States, most forests have historically been affected by humans to some degree, though in recent years improved forestry practices have helped regulate or moderate large-scale impacts. The United States Forest Service estimated a net loss of about between 1997 and 2020; this estimate includes conversion of forest land to other uses, including urban and suburban development, as well as afforestation and natural reversion of abandoned crop and pasture land to forest. In many areas of the United States, the area of forest is stable or increasing, particularly in many northern states. The opposite problem from flooding has plagued national forests, with loggers complaining that a lack of thinning and proper forest management has resulted in large forest fires.
| Physical sciences | Terrestrial features | null |
11123 | https://en.wikipedia.org/wiki/Fornax | Fornax | Fornax () is a constellation in the southern celestial hemisphere, partly ringed by the celestial river Eridanus. Its name is Latin for furnace. It was named by French astronomer Nicolas Louis de Lacaille in 1756. Fornax is one of the 88 modern constellations.
The three brightest stars—Alpha, Beta and Nu Fornacis—form a flattened triangle facing south. With an apparent magnitude of 3.91, Alpha Fornacis is the brightest star in Fornax. Six star systems have been found to have exoplanets. The Fornax Dwarf galaxy is a small faint satellite galaxy of the Milky Way. NGC 1316 is a relatively close radio galaxy.
The Hubble's Ultra-Deep Field is located within the Fornax constellation.
It is the 41st largest constellation in the night sky, occupying an area of 398 square degrees. It is located in the first quadrant of the southern hemisphere (SQ1) and can be seen at latitudes between +50° and -90° during the month of December.
History
The French astronomer Nicolas Louis de Lacaille first described the constellation in French as le Fourneau Chymique (the Chemical Furnace) with an alembic and receiver in his early catalogue, before abbreviating it to le Fourneau on his planisphere in 1752, after he had observed and catalogued almost 10,000 southern stars during a two-year stay at the Cape of Good Hope. He devised fourteen new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe. All but one honoured instruments that symbolised the Age of Enlightenment. Lacaille Latinised the name to Fornax Chimiae on his 1763 chart.
Characteristics
The constellation Eridanus borders Fornax to the east, north and south, while Cetus, Sculptor and Phoenix gird it to the north, west and south respectively. Covering 397.5 square degrees and 0.964% of the night sky, it ranks 41st of the 88 constellations in size. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "For". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 8 segments (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −23.76° and −39.58°. The whole constellation is visible to observers south of latitude 50°N.
Features
Stars
Lacaille gave Bayer designations to 27 stars now named Alpha to Omega Fornacis, labelling two stars 3.5 degrees apart as Gamma, three stars Eta, two stars Iota, two Lambda and three Chi. Phi Fornacis was added by Gould, and Theta and Omicron were dropped by Gould and Baily respectively. Upsilon, too, was later found to be two stars and designated as such. Overall, there are 59 stars within the constellation's borders brighter than or equal to apparent magnitude 6.5. However, there are no stars brighter than the fourth magnitude.
The three brightest stars form a flattish triangle, with Alpha (also called Dalim) and Nu Fornacis marking its eastern and western points and Beta Fornacis marking the shallow southern apex. Originally designated 12 Eridani by John Flamsteed, Alpha Fornacis was named by Lacaille as the brightest star in the new constellation. It is a binary star that can be resolved by small amateur telescopes. With an apparent magnitude of 3.91, the primary is a yellow-white subgiant 1.21 times as massive as the Sun that has begun to cool and expand after exhausting its core hydrogen, having swollen to 1.9 times the Sun's radius. Of magnitude 6.5, the secondary star is 0.78 times as massive as the Sun. It has been identified as a blue straggler, and has either accumulated material from, or merged with, a third star in the past. It is a strong source of X-rays. The pair is 46.4 ± 0.3 light-years distant from Earth.
Beta Fornacis is a yellow-hued giant star of spectral type G8IIIb of magnitude 4.5 that has cooled and swelled to 11 times the Sun's diameter, 178 ± 2 light-years from Earth. It is a red clump giant, which means it has undergone helium flash and is currently generating energy through the fusion of helium at its core.
Nu Fornacis is 370 ± 10 light-years distant from Earth. It is a blue giant star of spectral type B9.5IIIspSi that is 3.65 ± 0.18 times as massive and around 245 times as luminous as the Sun, with 3.2 ± 0.4 times its diameter. It varies in luminosity over a period of 1.89 days—the same as its rotational period. This is because of differences in abundances of metals in its atmosphere; it belongs to a class of star known as an Alpha2 Canum Venaticorum variable.
Shining with an apparent magnitude of 5.89, Epsilon Fornacis is a binary star system located 104.4 ± 0.3 light-years distant from Earth. Its component stars orbit each other every 37 years. The primary star is around 12 billion years old and has cooled and expanded to 2.53 times the diameter of the Sun, while having only 91% of its mass.
Omega Fornacis is a binary star system composed of a blue main-sequence star of spectral type B9.5V and magnitude 4.96, and a white main sequence star of spectral type A7V and magnitude 7.88. The system is 470 ± 10 light-years distant from Earth.
Kappa Fornacis is a triple star system composed of a yellow giant and a pair of red dwarfs.
R Fornacis is a long-period variable and carbon star.
LP 944-20 is a brown dwarf of spectral type M9 that has around 7% the mass of the Sun. Approximately 21 light-years distant from Earth, it is a faint object with an apparent magnitude of 18.69. Observations published in 2007 showed that the atmosphere of LP 944-20 contains much lithium and that it has dusty clouds. Smaller and less luminous still is 2MASS 0243-2453, a T-type brown dwarf of spectral type T6. With a surface temperature of 1040–1100 K, it has 2.4–4.1% the mass of the Sun, a diameter 9.2 to 10.6% of that of the Sun, and an age of 0.4–1.7 billion years.
Six star systems in Fornax have been found to have planets:
Lambda2 Fornacis is a star about 1.2 times as massive as the Sun with a planet about as massive as Neptune, discovered by doppler spectroscopy in 2009. The planet has an orbit of around 17.24 days.
HD 20868 is an orange dwarf with a mass around 78% that of the Sun, 151 ± 10 light-years away from Earth. It was found to have an orbiting planet approximately double the mass of Jupiter with a period of 380 days.
WASP-72 is a star around 1.4 times as massive that has begun to cool and expand off the main sequence, reaching double the Sun's diameter. It has a planet around as massive as Jupiter orbiting it every 2.2 days.
HD 20781 and HD 20782 are a pair of sunlike yellow main sequence stars that orbit each other. Each has been found to have planets.
HR 858 is a star in Fornax near the limit of naked-eye visibility, 31.3 parsecs away. In May 2019, it was announced to have at least three exoplanets, detected via the transit method by the Transiting Exoplanet Survey Satellite.
Deep-sky objects
Local Group
NGC 1049 is a globular cluster 500,000 light-years from Earth. It is in the Fornax Dwarf Galaxy. NGC 1360 is a planetary nebula in Fornax with a magnitude of approximately 9.0, 1,280 light-years from Earth. Its central star is of magnitude 11.4, an unusually bright specimen. It is five times the size of the famed Ring Nebula in Lyra at 6.5 arcminutes. Unlike the Ring Nebula, NGC 1360 is clearly elliptical.
The Fornax Dwarf galaxy is a dwarf galaxy that is part of the Local Group of galaxies. It is not visible in amateur telescopes, despite its relatively small distance of 500,000 light-years.
The Helmi stream is a small stellar stream that passes through Fornax, the remnant of a small galaxy that was disrupted by the Milky Way about 6 billion years ago. It contained a candidate for an extragalactic planet, HIP 13044 b.
Outside
NGC 1097 is a barred spiral galaxy in Fornax, about 45 million light-years from Earth. At magnitude 9, it is visible in medium amateur telescopes. It is notable as a Seyfert galaxy with strong spectral emissions indicating ionized gases and a central supermassive black hole.
Fornax Cluster
The Fornax Cluster is a cluster of galaxies lying at a distance of 19 megaparsecs (62 million light-years). It is the second richest galaxy cluster within 100 million light-years, after the considerably larger Virgo Cluster, and may be associated with the nearby Eridanus Group. It lies primarily in the constellation Fornax, with its southern boundaries partially crossing into the constellation of Eridanus, and covers an area of sky about 6° across, or about 28 square degrees. The Fornax Cluster is part of the larger Fornax Wall. Some notable objects in this cluster are described below:
NGC 1365 is another barred spiral galaxy located at a distance of 56 million light-years from Earth. Like NGC 1097, it is also a Seyfert galaxy. Its bar is a center of star formation and shows extensions of the spiral arms' dust lanes. The bright nucleus indicates the presence of an active galactic nucleus – a galaxy with a supermassive black hole at the center, accreting matter from the bar. It is a 10th magnitude galaxy associated with the Fornax Cluster.
Fornax A is a radio galaxy with extensive radio lobes that corresponds to the optical galaxy NGC 1316, a 9th-magnitude galaxy. One of the closer active galaxies to Earth at a distance of 62 million light-years, Fornax A appears in the optical spectrum as a large elliptical galaxy with dust lanes near its core. These dust lanes have caused astronomers to discern that it recently merged with a small spiral galaxy. Because it has a high rate of type Ia supernovae, NGC 1316 has been used to determine the size of the universe. The jets producing the radio lobes are not particularly powerful, giving the lobes a more diffuse, knotted structure due to interactions with the intergalactic medium. Associated with this peculiar galaxy is an entire cluster of galaxies.
NGC 1399 is a large elliptical galaxy in the Southern constellation Fornax, the central galaxy in the Fornax cluster.
The galaxy is 66 million light-years away from Earth. With a diameter of 130,000 light-years, it is one of the largest galaxies in the Fornax Cluster and slightly larger than the Milky Way. John Herschel discovered this galaxy on October 22, 1835.
NGC 1386 is a spiral galaxy located in the constellation Eridanus. It is located at a distance of circa 53 million light-years from Earth and has apparent dimensions of 3.89' x 1.349'. It is a Seyfert galaxy, the only one in the Fornax Cluster.
NGC 1427A is an irregular galaxy in the constellation Eridanus. Its distance modulus has been estimated using the globular cluster luminosity function to be 31.01 ± 0.21 which is about 52 Mly. It is the brightest dwarf irregular member of the Fornax cluster and is in the foreground of the cluster's central galaxy NGC 1399.
NGC 1460 is a barred lenticular galaxy in the constellation Eridanus. It was discovered by John Herschel on November 28, 1837. It is moving away from the Milky Way at 1,341 km/s. NGC 1460 has a Hubble classification of SB0, which indicates it is a barred lenticular galaxy, but this one contains an unusually large bar at its core. The bar extends from the center to the edge of the galaxy, as seen in the Hubble image in the box, and is one of the largest seen in barred lenticular galaxies.
The first ultracompact dwarf galaxies were also discovered in the Fornax Cluster.
Distant universe
Fornax has been the target of investigations into the furthest reaches of the universe. The Hubble Ultra Deep Field is located within Fornax, and the Fornax Cluster, a small cluster of galaxies, lies primarily within Fornax. At a meeting of the Royal Astronomical Society in Britain, a team from the University of Queensland described 40 previously unknown "dwarf" galaxies in this constellation; follow-up observations with the Hubble Space Telescope and the European Southern Observatory's Very Large Telescope revealed that ultra-compact dwarfs are much smaller than previously known dwarf galaxies, about across.
UDFj-39546284 is a candidate protogalaxy located in Fornax, although recent analyses have suggested it is likely to be a lower redshift source.
GRB 190114C was a notable gamma-ray burst from a galaxy 4.5 billion light-years away near the Fornax constellation, initially detected in January 2019. Astronomers described it as "the brightest light ever seen from Earth [to date] ... [the] biggest explosion in the Universe since the Big Bang".
Equivalents
In Chinese astronomy, the stars that correspond to Fornax are within the White Tiger of the West (西方白虎, Xī Fāng Bái Hǔ).
| Physical sciences | Other | Astronomy |
11145 | https://en.wikipedia.org/wiki/Fire | Fire | Fire is the rapid oxidation of a material (the fuel) in the exothermic chemical process of combustion, releasing heat, light, and various reaction products.
At a certain point in the combustion reaction, called the ignition point, flames are produced. The flame is the visible portion of the fire. Flames consist primarily of carbon dioxide, water vapor, oxygen and nitrogen. If hot enough, the gases may become ionized to produce plasma. Depending on the substances alight, and any impurities outside, the color of the flame and the fire's intensity will be different.
Fire, in its most common form, has the potential to result in conflagration, which can cause permanent physical damage through burning. Fire is a significant process that influences ecological systems worldwide. The positive effects of fire include stimulating growth and maintaining various ecological systems.
Its negative effects include hazard to life and property, atmospheric pollution, and water contamination. When fire removes protective vegetation, heavy rainfall can contribute to increased soil erosion by water. Additionally, the burning of vegetation releases nitrogen into the atmosphere, unlike elements such as potassium and phosphorus which remain in the ash and are quickly recycled into the soil. This loss of nitrogen caused by a fire produces a long-term reduction in the fertility of the soil, which can be recovered as atmospheric nitrogen is fixed and converted to ammonia by natural phenomena such as lightning or by leguminous plants such as clover, peas, and green beans.
Fire is one of the four classical elements and has been used by humans in rituals, in agriculture for clearing land, for cooking, generating heat and light, for signaling, propulsion purposes, smelting, forging, incineration of waste, cremation, and as a weapon or mode of destruction.
Etymology
The word "fire" originated , which can be traced back to the Germanic root , which itself comes from the Proto-Indo-European from the root . The current spelling of "fire" has been in use since as early as 1200, but it was not until around 1600 that it completely replaced the Middle English term (which is still preserved in the word "fiery").
History
Fossil record
The fossil record of fire first appears with the establishment of a land-based flora in the Middle Ordovician period, , permitting the accumulation of oxygen in the atmosphere as never before, as the new hordes of land plants pumped it out as a waste product. When this concentration rose above 13%, it permitted the possibility of wildfire. Wildfire is first recorded in the Late Silurian fossil record, , by fossils of charcoalified plants. Apart from a controversial gap in the Late Devonian, charcoal is present ever since. The level of atmospheric oxygen is closely related to the prevalence of charcoal: clearly oxygen is the key factor in the abundance of wildfire. Fire also became more abundant when grasses radiated and became the dominant component of many ecosystems, around ; this kindling provided tinder which allowed for the more rapid spread of fire. These widespread fires may have initiated a positive feedback process, whereby they produced a warmer, drier climate more conducive to fire.
Human control of fire
Early human control
The ability to control fire was a dramatic change in the habits of early humans. Making fire to generate heat and light made it possible for people to cook food, simultaneously increasing the variety and availability of nutrients and reducing disease by killing pathogenic microorganisms in the food. The heat produced would also help people stay warm in cold weather, enabling them to live in cooler climates. Fire also kept nocturnal predators at bay. Evidence of occasional cooked food is found from . Although this evidence shows that fire may have been used in a controlled fashion about 1 million years ago, other sources put the date of regular use at 400,000 years ago. Evidence becomes widespread around 50 to 100 thousand years ago, suggesting regular use from this time; resistance to air pollution started to evolve in human populations at a similar point in time. The use of fire became progressively more sophisticated, as it was used to create charcoal and to control wildlife from tens of thousands of years ago.
Fire has also been used for centuries as a method of torture and execution, as evidenced by death by burning as well as torture devices such as the iron boot, which could be filled with water, oil, or even lead and then heated over an open fire to the agony of the wearer.
By the Neolithic Revolution, during the introduction of grain-based agriculture, people all over the world used fire as a tool in landscape management. These fires were typically controlled burns or "cool fires", as opposed to uncontrolled "hot fires", which damage the soil. Hot fires destroy plants and animals, and endanger communities. This is especially a problem in the forests of today where traditional burning is prevented in order to encourage the growth of timber crops. Cool fires are generally conducted in the spring and autumn. They clear undergrowth, burning up biomass that could trigger a hot fire should it get too dense. They provide a greater variety of environments, which encourages game and plant diversity. For humans, they make dense, impassable forests traversable. Another human use for fire in regards to landscape management is its use to clear land for agriculture. Slash-and-burn agriculture is still common across much of tropical Africa, Asia and South America. For small farmers, controlled fires are a convenient way to clear overgrown areas and release nutrients from standing vegetation back into the soil. However, this useful strategy is also problematic. Growing population, fragmentation of forests and warming climate are making the earth's surface more prone to ever-larger escaped fires. These harm ecosystems and human infrastructure, cause health problems, and send up spirals of carbon and soot that may encourage even more warming of the atmosphere – and thus feed back into more fires. Globally today, as much as 5 million square kilometres – an area more than half the size of the United States – burns in a given year.
Later human control
There are numerous modern applications of fire. In its broadest sense, fire is used by nearly every human being on Earth in a controlled setting every day. Users of internal combustion vehicles employ fire every time they drive. Thermal power stations provide electricity for a large percentage of humanity by igniting fuels such as coal, oil or natural gas, then using the resultant heat to boil water into steam, which then drives turbines.
Use of fire in war
The use of fire in warfare has a long history. Fire was the basis of all early thermal weapons. The Byzantine fleet used Greek fire to attack ships and men.
The invention of gunpowder in China led to the fire lance, a flame-thrower weapon dating to around 1000 CE which was a precursor to projectile weapons driven by burning gunpowder.
The earliest modern flamethrowers were used by infantry in the First World War, first used by German troops against entrenched French troops near Verdun in February 1915. They were later successfully mounted on armoured vehicles in the Second World War.
Hand-thrown incendiary bombs improvised from glass bottles, later known as Molotov cocktails, were deployed during the Spanish Civil War in the 1930s. Also during that war, incendiary bombs were deployed against Guernica by Fascist Italian and Nazi German air forces that had been created specifically to support Franco's Nationalists.
Incendiary bombs were dropped by Axis and Allies during the Second World War, notably on Coventry, Tokyo, Rotterdam, London, Hamburg and Dresden; in the latter two cases firestorms were deliberately caused in which a ring of fire surrounding each city was drawn inward by an updraft caused by a central cluster of fires. The United States Army Air Force also extensively used incendiaries against Japanese targets in the latter months of the war, devastating entire cities constructed primarily of wood and paper houses. The incendiary fluid napalm was used in July 1944, towards the end of the Second World War, although its use did not gain public attention until the Vietnam War.
Fire management
Controlling a fire to optimize its size, shape, and intensity is generally called fire management, and the more advanced forms of it, as traditionally (and sometimes still) practiced by skilled cooks, blacksmiths, ironmasters, and others, are highly skilled activities. They include knowledge of which fuel to burn; how to arrange the fuel; how to stoke the fire both in early phases and in maintenance phases; how to modulate the heat, flame, and smoke as suited to the desired application; how best to bank a fire to be revived later; how to choose, design, or modify stoves, fireplaces, bakery ovens, or industrial furnaces; and so on. Detailed expositions of fire management are available in various books about blacksmithing, about skilled camping or military scouting, and about domestic arts.
Productive use for energy
Burning fuel converts chemical energy into heat energy; wood has been used as fuel since prehistory. The International Energy Agency states that nearly 80% of the world's power has consistently come from fossil fuels such as petroleum, natural gas, and coal in the past decades. The fire in a power station is used to heat water, creating steam that drives turbines. The turbines then spin an electric generator to produce electricity. Fire is also used to provide mechanical work directly by thermal expansion, in both external and internal combustion engines.
The unburnable solid remains of a combustible material left after a fire is called clinker if its melting point is below the flame temperature, so that it fuses and then solidifies as it cools, and ash if its melting point is above the flame temperature.
Physical properties
Chemistry
Fire is a chemical process in which a fuel and an oxidizing agent react, yielding carbon dioxide and water. This process, known as a combustion reaction, does not proceed directly and involves intermediates. Although the oxidizing agent is typically oxygen, other compounds are able to fulfill the role. For instance, chlorine trifluoride is able to ignite sand.
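As an illustrative example not drawn from the text above, the idealized complete combustion of methane in oxygen proceeds (through many intermediates) to the overall balanced reaction

\[ \mathrm{CH_4 + 2\,O_2 \rightarrow CO_2 + 2\,H_2O}, \]

so one mole of fuel consumes two moles of oxidizer; fuel-rich or oxygen-poor mixtures instead yield products of incomplete combustion such as carbon monoxide and soot.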
Fires start when a flammable or a combustible material, in combination with a sufficient quantity of an oxidizer such as oxygen gas or another oxygen-rich compound (though non-oxygen oxidizers exist), is exposed to a source of heat or ambient temperature above the flash point for the fuel/oxidizer mix, and is able to sustain a rate of rapid oxidation that produces a chain reaction. This is commonly called the fire tetrahedron. Fire cannot exist without all of these elements in place and in the right proportions. For example, a flammable liquid will start burning only if the fuel and oxygen are in the right proportions. Some fuel-oxygen mixes may require a catalyst, a substance that is not consumed, when added, in any chemical reaction during combustion, but which enables the reactants to combust more readily.
Once ignited, a chain reaction must take place whereby fires can sustain their own heat by the further release of heat energy in the process of combustion and may propagate, provided there is a continuous supply of an oxidizer and fuel.
If the oxidizer is oxygen from the surrounding air, the presence of a force of gravity, or of some similar force caused by acceleration, is necessary to produce convection, which removes combustion products and brings a supply of oxygen to the fire. Without gravity, a fire rapidly surrounds itself with its own combustion products and non-oxidizing gases from the air, which exclude oxygen and extinguish the fire. Because of this, the risk of fire in a spacecraft is small when it is coasting in inertial flight. This does not apply if oxygen is supplied to the fire by some process other than thermal convection.
Fire can be extinguished by removing any one of the elements of the fire tetrahedron. Consider a natural gas flame, such as from a stove-top burner. The fire can be extinguished by any of the following:
turning off the gas supply, which removes the fuel source;
covering the flame completely, which smothers the flame as the combustion both uses the available oxidizer (the oxygen in the air) and displaces it from the area around the flame with CO2;
application of an inert gas such as carbon dioxide, smothering the flame by displacing the available oxidizer;
application of water, which removes heat from the fire faster than the fire can produce it (similarly, blowing hard on a flame will displace the heat of the currently burning gas from its fuel source, to the same end); or
application of a retardant chemical such as Halon (largely banned in some countries) to the flame, which retards the chemical reaction itself until the rate of combustion is too slow to maintain the chain reaction.
In contrast, fire is intensified by increasing the overall rate of combustion. Methods to do this include balancing the input of fuel and oxidizer to stoichiometric proportions, increasing fuel and oxidizer input in this balanced mix, increasing the ambient temperature so the fire's own heat is better able to sustain combustion, or providing a catalyst, a non-reactant medium in which the fuel and oxidizer can more readily react.
Flame
A flame is a mixture of reacting gases and solids emitting visible, infrared, and sometimes ultraviolet light, the frequency spectrum of which depends on the chemical composition of the burning material and intermediate reaction products. In many cases, such as the burning of organic matter, for example wood, or the incomplete combustion of gas, incandescent solid particles called soot produce the familiar red-orange glow of "fire". This light has a continuous spectrum. Complete combustion of gas has a dim blue color due to the emission of single-wavelength radiation from various electron transitions in the excited molecules formed in the flame. Usually oxygen is involved, but hydrogen burning in chlorine also produces a flame, producing hydrogen chloride (HCl). Other possible combinations producing flames, amongst many, are fluorine and hydrogen, and hydrazine and nitrogen tetroxide. Hydrogen and hydrazine/UDMH flames are similarly pale blue, while burning boron and its compounds, evaluated in mid-20th century as a high energy fuel for jet and rocket engines, emits intense green flame, leading to its informal nickname of "Green Dragon".
The glow of a flame is complex. Black-body radiation is emitted from soot, gas, and fuel particles, though the soot particles are too small to behave like perfect blackbodies. There is also photon emission by de-excited atoms and molecules in the gases. Much of the radiation is emitted in the visible and infrared bands. The color depends on temperature for the black-body radiation, and on chemical makeup for the emission spectra.
The common distribution of a flame under normal gravity conditions depends on convection, as soot tends to rise to the top of the flame, as in a candle, making it yellow. In microgravity or zero gravity, such as an environment in outer space, convection no longer occurs, and the flame becomes spherical, with a tendency to become more blue and more efficient (although it may go out if not moved steadily, as the CO2 from combustion does not disperse as readily in microgravity, and tends to smother the flame). There are several possible explanations for this difference, of which the most likely is that the temperature is sufficiently evenly distributed that soot is not formed and complete combustion occurs. Experiments by NASA reveal that diffusion flames in microgravity allow more soot to be completely oxidized after it is produced than diffusion flames on Earth, because of a series of mechanisms that behave differently in microgravity compared to normal gravity conditions. These discoveries have potential applications in applied science and industry, especially concerning fuel efficiency.
Typical adiabatic temperatures
The adiabatic flame temperature of a given fuel and oxidizer pair is the temperature the combustion products would reach if no heat were lost to the surroundings.
Oxy–dicyanoacetylene
Oxy–acetylene
Oxyhydrogen
Air–acetylene
Blowtorch (air–MAPP gas)
Bunsen burner (air–natural gas)
Candle (air–paraffin)
Fire science
Fire science is a branch of physical science which includes fire behavior, dynamics, and combustion. Applications of fire science include fire protection, fire investigation, and wildfire management.
Fire ecology
Every natural ecosystem on land has its own fire regime, and the organisms in those ecosystems are adapted to or dependent upon that fire regime. Fire creates a mosaic of different habitat patches, each at a different stage of succession. Different species of plants, animals, and microbes specialize in exploiting a particular stage, and by creating these different types of patches, fire allows a greater number of species to exist within a landscape.
Prevention and protection systems
Wildfire prevention programs around the world may employ techniques such as wildland fire use and prescribed or controlled burns. Wildland fire use refers to any fire of natural causes that is monitored but allowed to burn. Controlled burns are fires ignited by government agencies under less dangerous weather conditions.
Fire fighting services are provided in most developed areas to extinguish or contain uncontrolled fires. Trained firefighters use fire apparatus, water supply resources such as water mains and fire hydrants or they might use A and B class foam depending on what is feeding the fire.
Fire prevention is intended to reduce sources of ignition. Fire prevention also includes education to teach people how to avoid causing fires. Buildings, especially schools and tall buildings, often conduct fire drills to inform and prepare citizens on how to react to a building fire. Purposely starting destructive fires constitutes arson and is a crime in most jurisdictions.
Model building codes require passive fire protection and active fire protection systems to minimize damage resulting from a fire. The most common form of active fire protection is fire sprinklers. To maximize passive fire protection of buildings, building materials and furnishings in most developed countries are tested for fire-resistance, combustibility and flammability. Upholstery, carpeting and plastics used in vehicles and vessels are also tested.
Where fire prevention and fire protection have failed to prevent damage, fire insurance can mitigate the financial impact.
| Physical sciences | Science and medicine | null |
11149 | https://en.wikipedia.org/wiki/Fresnel%20equations | Fresnel equations | The Fresnel equations (or Fresnel coefficients) describe the reflection and transmission of light (or electromagnetic radiation in general) when incident on an interface between different optical media. They were deduced by French engineer and physicist Augustin-Jean Fresnel (), who was the first to understand that light is a transverse wave, even though no one at the time realized that the waves were electric and magnetic fields. For the first time, polarization could be understood quantitatively, as Fresnel's equations correctly predicted the differing behaviour of waves of the s and p polarizations incident upon a material interface.
Overview
When light strikes the interface between a medium with refractive index and a second medium with refractive index , both reflection and refraction of the light may occur. The Fresnel equations give the ratio of the reflected wave's electric field to the incident wave's electric field, and the ratio of the transmitted wave's electric field to the incident wave's electric field, for each of two components of polarization. (The magnetic fields can also be related using similar coefficients.) These ratios are generally complex, describing not only the relative amplitudes but also the phase shifts at the interface.
The equations assume the interface between the media is flat and that the media are homogeneous and isotropic. The incident light is assumed to be a plane wave, which is sufficient to solve any problem since any incident light field can be decomposed into plane waves and polarizations.
S and P polarizations
There are two sets of Fresnel coefficients for two different linear polarization components of the incident wave. Since any polarization state can be resolved into a combination of two orthogonal linear polarizations, this is sufficient for any problem. Likewise, unpolarized (or "randomly polarized") light has an equal amount of power in each of two linear polarizations.
The s polarization refers to polarization of a wave's electric field normal to the plane of incidence (the direction in the derivation below); then the magnetic field is in the plane of incidence. The p polarization refers to polarization of the electric field in the plane of incidence (the plane in the derivation below); then the magnetic field is normal to the plane of incidence. The names "s" and "p" for the polarization components refer to German "senkrecht" (perpendicular or normal) and "parallel" (parallel to the plane of incidence).
Although the reflection and transmission are dependent on polarization, at normal incidence () there is no distinction between them so all polarization states are governed by a single set of Fresnel coefficients (and another special case is mentioned below in which that is true).
Configuration
In the diagram on the right, an incident plane wave in the direction of the ray strikes the interface between two media of refractive indices and at point . Part of the wave is reflected in the direction , and part refracted in the direction . The angles that the incident, reflected and refracted rays make to the normal of the interface are given as , and , respectively.
The relationship between these angles is given by the law of reflection:
and Snell's law:
The behavior of light striking the interface is explained by considering the electric and magnetic fields that constitute an electromagnetic wave, and the laws of electromagnetism, as shown below. The ratio of waves' electric field (or magnetic field) amplitudes are obtained, but in practice one is more often interested in formulae which determine power coefficients, since power (or irradiance) is what can be directly measured at optical frequencies. The power of a wave is generally proportional to the square of the electric (or magnetic) field amplitude.
Power (intensity) reflection and transmission coefficients
We call the fraction of the incident power that is reflected from the interface the reflectance (or reflectivity, or power reflection coefficient) , and the fraction that is refracted into the second medium is called the transmittance (or transmissivity, or power transmission coefficient) . Note that these are what would be measured right at each side of an interface and do not account for attenuation of a wave in an absorbing medium following transmission or reflection.
The reflectance for s-polarized light is
while the reflectance for p-polarized light is
where and are the wave impedances of media 1 and 2, respectively.
We assume that the media are non-magnetic (i.e., ), which is typically a good approximation at optical frequencies (and for transparent media at other frequencies). Then the wave impedances are determined solely by the refractive indices and :
where is the impedance of free space and . Making this substitution, we obtain equations using the refractive indices:
The second form of each equation is derived from the first by eliminating using Snell's law and trigonometric identities.
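The formulas themselves do not appear in this copy of the text; for reference, a standard statement of the two reflectances in terms of the refractive indices (valid under the non-magnetic assumption made above) is

\[
R_\mathrm{s} = \left|\frac{n_1\cos\theta_\mathrm{i} - n_2\cos\theta_\mathrm{t}}{n_1\cos\theta_\mathrm{i} + n_2\cos\theta_\mathrm{t}}\right|^2,
\qquad
R_\mathrm{p} = \left|\frac{n_1\cos\theta_\mathrm{t} - n_2\cos\theta_\mathrm{i}}{n_1\cos\theta_\mathrm{t} + n_2\cos\theta_\mathrm{i}}\right|^2,
\]

where the "second form" mentioned above is obtained by substituting \(\cos\theta_\mathrm{t} = \sqrt{1 - \bigl(\tfrac{n_1}{n_2}\sin\theta_\mathrm{i}\bigr)^2}\) from Snell's law.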
As a consequence of conservation of energy, one can find the transmitted power (or more correctly, irradiance: power per unit area) simply as the portion of the incident power that isn't reflected:
and
Note that all such intensities are measured in terms of a wave's irradiance in the direction normal to the interface; this is also what is measured in typical experiments. That number could be obtained from irradiances in the direction of an incident or reflected wave (given by the magnitude of a wave's Poynting vector) multiplied by for a wave at an angle to the normal direction (or equivalently, taking the dot product of the Poynting vector with the unit vector normal to the interface). This complication can be ignored in the case of the reflection coefficient, since , so that the ratio of reflected to incident irradiance in the wave's direction is the same as in the direction normal to the interface.
Although these relationships describe the basic physics, in many practical applications one is concerned with "natural light" that can be described as unpolarized. That means that there is an equal amount of power in the s and p polarizations, so that the effective reflectivity of the material is just the average of the two reflectivities:
For low-precision applications involving unpolarized light, such as computer graphics, rather than rigorously computing the effective reflection coefficient for each angle, Schlick's approximation is often used.
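A minimal sketch of Schlick's approximation in Python (the function name and the example indices are illustrative choices, not taken from the text):

```python
import math

def schlick_reflectance(n1: float, n2: float, cos_theta_i: float) -> float:
    """Schlick's approximation to the unpolarized Fresnel reflectance.

    R(theta) ~ R0 + (1 - R0) * (1 - cos(theta_i))**5,
    where R0 is the reflectance at normal incidence.
    """
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta_i) ** 5

# Example: air-to-glass interface at 60 degrees incidence.
print(schlick_reflectance(1.0, 1.5, math.cos(math.radians(60.0))))
```

Schlick's formula reproduces the normal-incidence value exactly and the grazing-incidence limit of total reflection, but it can deviate noticeably from the exact polarization-averaged reflectance at intermediate angles.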
Special cases
Normal incidence
For the case of normal incidence, , and there is no distinction between s and p polarization. Thus, the reflectance simplifies to
For common glass () surrounded by air (), the power reflectance at normal incidence can be seen to be about 4%, or 8% accounting for both sides of a glass pane.
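As a quick check of the figure quoted above (an illustrative calculation, not part of the original text), inserting \(n_1 = 1\) and \(n_2 = 1.5\) into the standard normal-incidence formula gives

\[ R = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^2 = \left(\frac{1 - 1.5}{1 + 1.5}\right)^2 = 0.04, \]

that is, about 4% per surface, and roughly 8% for the two surfaces of a pane (neglecting multiple internal reflections).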
Brewster's angle
At a dielectric interface from to , there is a particular angle of incidence at which goes to zero and a p-polarised incident wave is purely refracted, thus all reflected light is s-polarised. This angle is known as Brewster's angle, and is around 56° for and (typical glass).
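For a simple interface between non-magnetic media, Brewster's angle satisfies \(\tan\theta_\mathrm{B} = n_2/n_1\) (a standard result stated here for reference); with \(n_1 = 1\) and \(n_2 = 1.5\),

\[ \theta_\mathrm{B} = \arctan\frac{n_2}{n_1} = \arctan 1.5 \approx 56.3^\circ, \]

consistent with the value quoted above.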
Total internal reflection
When light travelling in a denser medium strikes the surface of a less dense medium (i.e., ), beyond a particular incidence angle known as the critical angle, all light is reflected and . This phenomenon, known as total internal reflection, occurs at incidence angles for which Snell's law predicts that the sine of the angle of refraction would exceed unity (whereas in fact for all real ). For glass with surrounded by air, the critical angle is approximately 42°.
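The critical angle follows from Snell's law by setting the refraction angle to 90° (again a standard relation, added here for reference):

\[ \theta_\mathrm{c} = \arcsin\frac{n_2}{n_1} = \arcsin\frac{1}{1.5} \approx 41.8^\circ, \]

which matches the approximately 42° quoted above for a glass–air interface.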
45° incidence
Reflection at 45° incidence is very commonly used for making 90° turns. For the case of light traversing from a less dense medium into a denser one at 45° incidence (), it follows algebraically from the above equations that equals the square of :
This can be used to either verify the consistency of the measurements of and , or to derive one of them when the other is known. This relationship is only valid for the simple case of a single plane interface between two homogeneous materials, not for films on substrates, where a more complex analysis is required.
Measurements of and at 45° can be used to estimate the reflectivity at normal incidence. The "average of averages" obtained by calculating first the arithmetic as well as the geometric average of and , and then averaging these two averages again arithmetically, gives a value for with an error of less than about 3% for most common optical materials. This is useful because measurements at normal incidence can be difficult to achieve in an experimental setup since the incoming beam and the detector will obstruct each other. However, since the dependence of and on the angle of incidence for angles below 10° is very small, a measurement at about 5° will usually be a good approximation for normal incidence, while allowing for a separation of the incoming and reflected beam.
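To make the 45° relation concrete with an illustrative pair of indices (air to glass, \(n_1 = 1\), \(n_2 = 1.5\), not a case discussed in the original text), the reflectance formulas give

\[ R_\mathrm{s} \approx 0.0920, \qquad R_\mathrm{p} \approx 0.00847 \approx R_\mathrm{s}^2, \]

confirming numerically that the p reflectance at 45° incidence is the square of the s reflectance.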
Complex amplitude reflection and transmission coefficients
The above equations relating powers (which could be measured with a photometer for instance) are derived from the Fresnel equations which solve the physical problem in terms of electromagnetic field complex amplitudes, i.e., considering phase shifts in addition to their amplitudes. Those underlying equations supply generally complex-valued ratios of those EM fields and may take several different forms, depending on the formalism used. The complex amplitude coefficients for reflection and transmission are usually represented by lower case and (whereas the power coefficients are capitalized). As before, we are assuming the magnetic permeability, of both media to be equal to the permeability of free space as is essentially true of all dielectrics at optical frequencies.
In the following equations and graphs, we adopt the following conventions. For s polarization, the reflection coefficient is defined as the ratio of the reflected wave's complex electric field amplitude to that of the incident wave, whereas for p polarization is the ratio of the waves' complex magnetic field amplitudes (or equivalently, the negative of the ratio of their electric field amplitudes). The transmission coefficient is the ratio of the transmitted wave's complex electric field amplitude to that of the incident wave, for either polarization. The coefficients and are generally different between the s and p polarizations, and even at normal incidence (where the designations s and p do not even apply!) the sign of is reversed depending on whether the wave is considered to be s or p polarized, an artifact of the adopted sign convention (see graph for an air-glass interface at 0° incidence).
The equations consider a plane wave incident on a plane interface at angle of incidence , a wave reflected at angle , and a wave transmitted at angle . In the case of an interface into an absorbing material (where is complex) or total internal reflection, the angle of transmission does not generally evaluate to a real number. In that case, however, meaningful results can be obtained using formulations of these relationships in which trigonometric functions and geometric angles are avoided; the inhomogeneous waves launched into the second medium cannot be described using a single propagation angle.
Using this convention,
One can see that and . One can write very similar equations applying to the ratio of the waves' magnetic fields, but comparison of the electric fields is more conventional.
Because the reflected and incident waves propagate in the same medium and make the same angle with the normal to the surface, the power reflection coefficient is just the squared magnitude of :
On the other hand, calculation of the power transmission coefficient is less straightforward, since the light travels in different directions in the two media. What's more, the wave impedances in the two media differ; power (irradiance) is given by the square of the electric field amplitude divided by the characteristic impedance of the medium (or by the square of the magnetic field multiplied by the characteristic impedance). This results in:
using the above definition of . The introduced factor of is the reciprocal of the ratio of the media's wave impedances. The factors adjust the waves' powers so they are reckoned in the direction normal to the interface, for both the incident and transmitted waves, so that full power transmission corresponds to .
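The following Python sketch illustrates how the amplitude coefficients and the obliquity factor combine to give power coefficients that sum to one. It is a minimal example, assuming non-magnetic, lossless media below any critical angle, and it uses the sign convention for the p reflection coefficient described above; it is not taken from the original text.

```python
import math

def fresnel_amplitudes(n1, n2, theta_i):
    """Amplitude coefficients (r_s, r_p, t_s, t_p) for a plane interface.

    Assumes non-magnetic, lossless media and an angle below any critical angle.
    r_p follows the convention in which it is the ratio of magnetic-field
    amplitudes (so r_s and r_p differ in sign at normal incidence).
    """
    cos_i = math.cos(theta_i)
    cos_t = math.sqrt(1.0 - (n1 / n2 * math.sin(theta_i)) ** 2)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    r_p = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    t_s = 2 * n1 * cos_i / (n1 * cos_i + n2 * cos_t)
    t_p = 2 * n1 * cos_i / (n2 * cos_i + n1 * cos_t)
    # Power coefficients: T carries the impedance and obliquity factors.
    obliquity = (n2 * cos_t) / (n1 * cos_i)
    return (r_s, r_p, t_s, t_p), (r_s**2, obliquity * t_s**2,
                                  r_p**2, obliquity * t_p**2)

_, (R_s, T_s, R_p, T_p) = fresnel_amplitudes(1.0, 1.5, math.radians(30.0))
print(R_s + T_s, R_p + T_p)   # both should print 1.0 (energy conservation)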
In the case of total internal reflection where the power transmission is zero, nevertheless describes the electric field (including its phase) just beyond the interface. This is an evanescent field which does not propagate as a wave (thus ) but has nonzero values very close to the interface. The phase shift of the reflected wave on total internal reflection can similarly be obtained from the phase angles of and (whose magnitudes are unity in this case). These phase shifts are different for s and p waves, which is the well-known principle by which total internal reflection is used to effect polarization transformations.
Alternative forms
In the above formula for , if we put (Snell's law) and multiply the numerator and denominator by , we obtain
If we do likewise with the formula for , the result is easily shown to be equivalent to
These formulas are known respectively as Fresnel's sine law and Fresnel's tangent law. Although at normal incidence these expressions reduce to 0/0, one can see that they yield the correct results in the limit as .
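Since the expressions themselves are not reproduced in this copy, the usual statements of the two laws are, up to an overall sign that depends on the convention chosen for the reflected field,

\[ r_\mathrm{s} = -\frac{\sin(\theta_\mathrm{i} - \theta_\mathrm{t})}{\sin(\theta_\mathrm{i} + \theta_\mathrm{t})}, \qquad r_\mathrm{p} = \frac{\tan(\theta_\mathrm{i} - \theta_\mathrm{t})}{\tan(\theta_\mathrm{i} + \theta_\mathrm{t})}, \]

and both indeed tend to the 0/0 form as the incidence and refraction angles go to zero together.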
Multiple surfaces
When light makes multiple reflections between two or more parallel surfaces, the multiple beams of light generally interfere with one another, resulting in net transmission and reflection amplitudes that depend on the light's wavelength. The interference, however, is seen only when the surfaces are at distances comparable to or smaller than the light's coherence length, which for ordinary white light is a few micrometers; it can be much larger for light from a laser.
An example of interference between reflections is the iridescent colours seen in a soap bubble or in thin oil films on water. Applications include Fabry–Pérot interferometers, antireflection coatings, and optical filters. A quantitative analysis of these effects is based on the Fresnel equations, but with additional calculations to account for interference.
The transfer-matrix method, or the recursive Rouard method, can be used to solve multiple-surface problems.
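As an illustration of the transfer-matrix approach in its simplest multilayer case, a single thin film at normal incidence, the following hedged Python sketch uses the standard characteristic-matrix formulation. The layer thickness, the indices, and the quarter-wave MgF2-on-glass example are illustrative choices, not taken from the text.

```python
import cmath

def single_film_reflectance(n0, n1, n2, d, wavelength):
    """Reflectance of one homogeneous film (index n1, thickness d) on a
    substrate (index n2), in an ambient medium (n0), at normal incidence.

    Uses the 2x2 characteristic matrix of the layer; admittances are in
    units of the free-space admittance, so they equal the refractive indices.
    """
    delta = 2 * cmath.pi * n1 * d / wavelength        # phase thickness of the layer
    m11, m12 = cmath.cos(delta), 1j * cmath.sin(delta) / n1
    m21, m22 = 1j * n1 * cmath.sin(delta), cmath.cos(delta)
    # Apply the matrix to the substrate "load" vector [1, n2].
    B = m11 * 1 + m12 * n2
    C = m21 * 1 + m22 * n2
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2

# Quarter-wave MgF2 (n ~ 1.38) on glass (n ~ 1.5) in air, at 550 nm:
print(single_film_reflectance(1.0, 1.38, 1.5, 550e-9 / (4 * 1.38), 550e-9))
```

For these illustrative numbers the reflectance drops to about 1.4%, compared with roughly 4% for bare glass; a stack of several layers is handled by multiplying the corresponding characteristic matrices before forming B and C.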
History
In 1808, Étienne-Louis Malus discovered that when a ray of light was reflected off a non-metallic surface at the appropriate angle, it behaved like one of the two rays emerging from a doubly-refractive calcite crystal. He later coined the term polarization to describe this behavior. In 1815, the dependence of the polarizing angle on the refractive index was determined experimentally by David Brewster. But the reason for that dependence was such a deep mystery that in late 1817, Thomas Young was moved to write:
In 1821, however, Augustin-Jean Fresnel derived results equivalent to his sine and tangent laws (above), by modeling light waves as transverse elastic waves with vibrations perpendicular to what had previously been called the plane of polarization. Fresnel promptly confirmed by experiment that the equations correctly predicted the direction of polarization of the reflected beam when the incident beam was polarized at 45° to the plane of incidence, for light incident from air onto glass or water; in particular, the equations gave the correct polarization at Brewster's angle. The experimental confirmation was reported in a "postscript" to the work in which Fresnel first revealed his theory that light waves, including "unpolarized" waves, were purely transverse.
Details of Fresnel's derivation, including the modern forms of the sine law and tangent law, were given later, in a memoir read to the French Academy of Sciences in January 1823. That derivation combined conservation of energy with continuity of the tangential vibration at the interface, but failed to allow for any condition on the normal component of vibration. The first derivation from electromagnetic principles was given by Hendrik Lorentz in 1875.
In the same memoir of January 1823, Fresnel found that for angles of incidence greater than the critical angle, his formulas for the reflection coefficients ( and ) gave complex values with unit magnitudes. Noting that the magnitude, as usual, represented the ratio of peak amplitudes, he guessed that the argument represented the phase shift, and verified the hypothesis experimentally. The verification involved
calculating the angle of incidence that would introduce a total phase difference of 90° between the s and p components, for various numbers of total internal reflections at that angle (generally there were two solutions),
subjecting light to that number of total internal reflections at that angle of incidence, with an initial linear polarization at 45° to the plane of incidence, and
checking that the final polarization was circular.
Thus he finally had a quantitative theory for what we now call the Fresnel rhomb — a device that he had been using in experiments, in one form or another, since 1817 (see Fresnel rhomb §History).
The success of the complex reflection coefficient inspired James MacCullagh and Augustin-Louis Cauchy, beginning in 1836, to analyze reflection from metals by using the Fresnel equations with a complex refractive index.
Four weeks before he presented his completed theory of total internal reflection and the rhomb, Fresnel submitted a memoir in which he introduced the needed terms linear polarization, circular polarization, and elliptical polarization, and in which he explained optical rotation as a species of birefringence: linearly-polarized light can be resolved into two circularly-polarized components rotating in opposite directions, and if these propagate at different speeds, the phase difference between them — hence the orientation of their linearly-polarized resultant — will vary continuously with distance.
Thus Fresnel's interpretation of the complex values of his reflection coefficients marked the confluence of several streams of his research and, arguably, the essential completion of his reconstruction of physical optics on the transverse-wave hypothesis (see Augustin-Jean Fresnel).
Derivation
Here we systematically derive the above relations from electromagnetic premises.
Material parameters
In order to compute meaningful Fresnel coefficients, we must assume that the medium is (approximately) linear and homogeneous. If the medium is also isotropic, the four field vectors E, D, B, H are related by D = ϵE and B = μH,
where ϵ and μ are scalars, known respectively as the (electric) permittivity and the (magnetic) permeability of the medium. For vacuum, these have the values ϵ0 and μ0, respectively. Hence we define the relative permittivity (or dielectric constant) ϵrel = ϵ/ϵ0, and the relative permeability μrel = μ/μ0.
In optics it is common to assume that the medium is non-magnetic, so that μrel = 1. For ferromagnetic materials at radio/microwave frequencies, larger values of μrel must be taken into account. But, for optically transparent media, and for all other materials at optical frequencies (except possibly metamaterials), μrel is indeed very close to 1; that is, μ ≈ μ0.
In optics, one usually knows the refractive index n of the medium, which is the ratio of the speed of light in vacuum (c) to the speed of light in the medium. In the analysis of partial reflection and transmission, one is also interested in the electromagnetic wave impedance Z, which is the ratio of the amplitude of E to the amplitude of H. It is therefore desirable to express n and Z in terms of ϵ and μ, and thence to relate Z to n. The last-mentioned relation, however, will make it convenient to derive the reflection coefficients in terms of the wave admittance Y, which is the reciprocal of the wave impedance Z.
In the case of uniform plane sinusoidal waves, the wave impedance or admittance is known as the intrinsic impedance or admittance of the medium. This case is the one for which the Fresnel coefficients are to be derived.
Electromagnetic plane waves
In a uniform plane sinusoidal electromagnetic wave, the electric field has the form Ek e^{i(k·r - ωt)},
where Ek is the (constant) complex amplitude vector, i is the imaginary unit, k is the wave vector (whose magnitude k is the angular wavenumber), r is the position vector, ω is the angular frequency, t is time, and it is understood that the real part of the expression is the physical field. The value of the expression is unchanged if the position r varies in a direction normal to k; hence k is normal to the wavefronts.
To advance the phase by the angle ϕ, we replace ωt by ωt + ϕ (that is, we replace -ωt by -ωt - ϕ), with the result that the (complex) field is multiplied by e^{-iϕ}. So a phase advance is equivalent to multiplication by a complex constant with a negative argument. This becomes more obvious when the field Ek e^{i(k·r - ωt)} is factored as Ek e^{ik·r} e^{-iωt}, where the last factor contains the time-dependence. That factor also implies that differentiation w.r.t. time corresponds to multiplication by -iω.
If ℓ is the component of r in the direction of k, the field can be written Ek e^{i(kℓ - ωt)}. If the argument of the exponential is to be constant, ℓ must increase at the velocity ω/k, known as the phase velocity vp. This in turn is equal to c/n. Solving for k gives k = nω/c.
As usual, we drop the time-dependent factor e^{-iωt}, which is understood to multiply every complex field quantity. The electric field for a uniform plane sine wave will then be represented by the location-dependent phasor Ek e^{ik·r}.
For fields of that form, Faraday's law and the Maxwell-Ampère law respectively reduce to
Putting and , as above, we can eliminate and to obtain equations in only and :
If the material parameters and are real (as in a lossless dielectric), these equations show that form a right-handed orthogonal triad, so that the same equations apply to the magnitudes of the respective vectors. Taking the magnitude equations and substituting from (), we obtain
where E and H are the magnitudes of E and H. Multiplying the last two equations gives k² = ω²μϵ.
Dividing (or cross-multiplying) the same two equations gives H = YE, where Y = √(ϵ/μ).
This is the intrinsic admittance.
From () we obtain the phase velocity For vacuum this reduces to Dividing the second result by the first gives
For a non-magnetic medium (the usual case), this becomes .
Taking the reciprocal of (), we find that the intrinsic impedance is In vacuum this takes the value known as the impedance of free space. By division, For a non-magnetic medium, this becomes
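For reference, these standard relations can be collected as follows (a summary in the symbols defined above; the non-magnetic case assumes μ = μ0):
\[
Y=\sqrt{\epsilon/\mu},\qquad Z=\frac{1}{Y}=\sqrt{\mu/\epsilon},\qquad v_p=\frac{1}{\sqrt{\mu\epsilon}},\qquad n=\frac{c}{v_p}=c\sqrt{\mu\epsilon}=\sqrt{\mu_{\mathrm{rel}}\,\epsilon_{\mathrm{rel}}},
\]
\[
Z_0=\sqrt{\mu_0/\epsilon_0}\approx 377\ \Omega,\qquad Z=\frac{\mu_{\mathrm{rel}}}{n}\,Z_0,\qquad Y=\frac{n}{\mu_{\mathrm{rel}}\,Z_0},
\]
so that for a non-magnetic medium \(Z = Z_0/n\) and \(Y = n/Z_0\).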
Wave vectors
In Cartesian coordinates , let the region have refractive index , intrinsic admittance , etc., and let the region have refractive index , intrinsic admittance , etc. Then the plane is the interface, and the axis is normal to the interface (see diagram). Let and (in bold roman type) be the unit vectors in the and directions, respectively. Let the plane of incidence be the plane (the plane of the page), with the angle of incidence measured from towards . Let the angle of refraction, measured in the same sense, be , where the subscript stands for transmitted (reserving for reflected).
In the absence of Doppler shifts, ω does not change on reflection or refraction. Hence, by (), the magnitude of the wave vector is proportional to the refractive index.
So, for a given , if we redefine as the magnitude of the wave vector in the reference medium (for which ), then the wave vector has magnitude in the first medium (region in the diagram) and magnitude in the second medium. From the magnitudes and the geometry, we find that the wave vectors are
where the last step uses Snell's law. The corresponding dot products in the phasor form () are
Hence:
s components
For the s polarization, the field is parallel to the axis and may therefore be described by its component in the direction. Let the reflection and transmission coefficients be and , respectively. Then, if the incident field is taken to have unit amplitude, the phasor form () of its -component is
and the reflected and transmitted fields, in the same form, are
Under the sign convention used in this article, a positive reflection or transmission coefficient is one that preserves the direction of the transverse field, meaning (in this context) the field normal to the plane of incidence. For the s polarization, that means the field. If the incident, reflected, and transmitted fields (in the above equations) are in the -direction ("out of the page"), then the respective fields are in the directions of the red arrows, since form a right-handed orthogonal triad. The fields may therefore be described by their components in the directions of those arrows, denoted by . Then, since ,
At the interface, by the usual interface conditions for electromagnetic fields, the tangential components of the and fields must be continuous; that is,
When we substitute from equations () to () and then from (), the exponential factors cancel out, so that the interface conditions reduce to the simultaneous equations
which are easily solved for and , yielding
and
At normal incidence , indicated by an additional subscript 0, these results become
and
At grazing incidence (θi → 90°), we have cos θi → 0, hence rs → -1 and ts → 0.
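For reference, the resulting s-polarization coefficients take the standard form below, written with the intrinsic admittances Y1 and Y2 of the two media and the angles θi and θt; for non-magnetic media the admittances are proportional to the refractive indices, which gives the familiar index form:
\[
r_s=\frac{Y_1\cos\theta_i-Y_2\cos\theta_t}{Y_1\cos\theta_i+Y_2\cos\theta_t},\qquad
t_s=\frac{2Y_1\cos\theta_i}{Y_1\cos\theta_i+Y_2\cos\theta_t},
\]
with \(r_{s0}=(Y_1-Y_2)/(Y_1+Y_2)\) and \(t_{s0}=2Y_1/(Y_1+Y_2)\) at normal incidence.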
p components
For the p polarization, the incident, reflected, and transmitted fields are parallel to the red arrows and may therefore be described by their components in the directions of those arrows. Let those components be (redefining the symbols for the new context). Let the reflection and transmission coefficients be and . Then, if the incident field is taken to have unit amplitude, we have
If the fields are in the directions of the red arrows, then, in order for to form a right-handed orthogonal triad, the respective fields must be in the -direction ("into the page") and may therefore be described by their components in that direction. This is consistent with the adopted sign convention, namely that a positive reflection or transmission coefficient is one that preserves the direction of the transverse field the field in the case of the p polarization. The agreement of the other field with the red arrows reveals an alternative definition of the sign convention: that a positive reflection or transmission coefficient is one for which the field vector in the plane of incidence points towards the same medium before and after reflection or transmission.
So, for the incident, reflected, and transmitted fields, let the respective components in the -direction be . Then, since ,
At the interface, the tangential components of the and fields must be continuous; that is,
When we substitute from equations () and () and then from (), the exponential factors again cancel out, so that the interface conditions reduce to
Solving for and , we find
and
At normal incidence indicated by an additional subscript 0, these results become
and
At θi → 90°, we again have cos θi → 0, hence rp → -1 and tp → 0.
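Similarly, under the same sign convention, the p-polarization coefficients take the standard form:
\[
r_p=\frac{Y_2\cos\theta_i-Y_1\cos\theta_t}{Y_2\cos\theta_i+Y_1\cos\theta_t},\qquad
t_p=\frac{2Y_1\cos\theta_i}{Y_2\cos\theta_i+Y_1\cos\theta_t},
\]
with \(r_{p0}=(Y_2-Y_1)/(Y_2+Y_1)=-r_{s0}\) and \(t_{p0}=t_{s0}\) at normal incidence.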
Comparing () and () with () and (), we see that at normal incidence, under the adopted sign convention, the transmission coefficients for the two polarizations are equal, whereas the reflection coefficients have equal magnitudes but opposite signs. While this clash of signs is a disadvantage of the convention, the attendant advantage is that the signs agree at grazing incidence.
Power ratios (reflectivity and transmissivity)
The Poynting vector for a wave is a vector whose component in any direction is the irradiance (power per unit area) of that wave on a surface perpendicular to that direction. For a plane sinusoidal wave the Poynting vector is , where and are due only to the wave in question, and the asterisk denotes complex conjugation. Inside a lossless dielectric (the usual case), and are in phase, and at right angles to each other and to the wave vector ; so, for s polarization, using the and components of and respectively (or for p polarization, using the and components of and ), the irradiance in the direction of is given simply by , which is in a medium of intrinsic impedance . To compute the irradiance in the direction normal to the interface, as we shall require in the definition of the power transmission coefficient, we could use only the component (rather than the full component) of or or, equivalently, simply multiply by the proper geometric factor, obtaining .
From equations () and (), taking squared magnitudes, we find that the reflectivity (ratio of reflected power to incident power) is
for the s polarization, and
for the p polarization. Note that when comparing the powers of two such waves in the same medium and with the same cosθ, the impedance and geometric factors mentioned above are identical and cancel out. But in computing the power transmission (below), these factors must be taken into account.
The simplest way to obtain the power transmission coefficient (transmissivity, the ratio of transmitted power to incident power in the direction normal to the interface, i.e. the direction) is to use (conservation of energy). In this way we find
for the s polarization, and
for the p polarization.
In the case of an interface between two lossless media (for which ϵ and μ are real and positive), one can obtain these results directly using the squared magnitudes of the amplitude transmission coefficients that we found earlier in equations () and (). But, for given amplitude (as noted above), the component of the Poynting vector in the direction is proportional to the geometric factor and inversely proportional to the wave impedance . Applying these corrections to each wave, we obtain two ratios multiplying the square of the amplitude transmission coefficient:
for the s polarization, and
for the p polarization. The last two equations apply only to lossless dielectrics, and only at incidence angles smaller than the critical angle (beyond which, of course, ).
For unpolarized light, the reflectivity is the average of Rs and Rp and the transmissivity is the average of Ts and Tp, so that again R + T = 1.
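In the lossless case these relations can be summarized as follows (with the admittances replaced by refractive indices for non-magnetic media):
\[
R_s=|r_s|^2,\qquad T_s=\frac{Y_2\cos\theta_t}{Y_1\cos\theta_i}\,|t_s|^2=1-R_s,\qquad
R_p=|r_p|^2,\qquad T_p=\frac{Y_2\cos\theta_t}{Y_1\cos\theta_i}\,|t_p|^2=1-R_p,
\]
and for unpolarized light \(R=\tfrac{1}{2}(R_s+R_p)\) and \(T=\tfrac{1}{2}(T_s+T_p)=1-R\).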
Equal refractive indices
From equations () and (), we see that two dissimilar media will have the same refractive index, but different admittances, if the ratio of their permeabilities is the inverse of the ratio of their permittivities. In that unusual situation we have (that is, the transmitted ray is undeviated), so that the cosines in equations (), (), (), (), and () to () cancel out, and all the reflection and transmission ratios become independent of the angle of incidence; in other words, the ratios for normal incidence become applicable to all angles of incidence. When extended to spherical reflection or scattering, this results in the Kerker effect for Mie scattering.
Non-magnetic media
Since the Fresnel equations were developed for optics, they are usually given for non-magnetic materials. Dividing () by () yields
For non-magnetic media we can substitute the vacuum permeability for , so that
that is, the admittances are simply proportional to the corresponding refractive indices. When we make these substitutions in equations () to () and equations () to (), the factor cμ0 cancels out. For the amplitude coefficients we obtain:
For the case of normal incidence these reduce to:
The power reflection coefficients become:
The power transmissions can then be found from .
Brewster's angle
For equal permeabilities (e.g., non-magnetic media), if θi and θt are complementary, we can substitute sin θt for cos θi, and sin θi for cos θt, so that the numerator in the expression for rp becomes n2 sin θt - n1 sin θi, which is zero (by Snell's law). Hence rp = 0 and only the s-polarized component is reflected. This is what happens at the Brewster angle. Substituting cos θi for sin θt in Snell's law, we readily obtain
tan θB = n2/n1
for Brewster's angle θB.
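As a quick worked example, taking illustrative values n1 ≈ 1.0 for air and n2 ≈ 1.5 for crown glass:
\[
\theta_B=\arctan\frac{n_2}{n_1}=\arctan 1.5\approx 56.3^\circ,
\]
at which angle \(r_p = 0\) and the reflected beam is entirely s-polarized.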
Equal permittivities
Although it is not encountered in practice, the equations can also apply to the case of two media with a common permittivity but different refractive indices due to different permeabilities. From equations () and (), if is fixed instead of , then becomes inversely proportional to , with the result that the subscripts 1 and 2 in equations () to () are interchanged (due to the additional step of multiplying the numerator and denominator by ). Hence, in () and (), the expressions for and in terms of refractive indices will be interchanged, so that Brewster's angle () will give instead of , and any beam reflected at that angle will be p-polarized instead of s-polarized. Similarly, Fresnel's sine law will apply to the p polarization instead of the s polarization, and his tangent law to the s polarization instead of the p polarization.
This switch of polarizations has an analog in the old mechanical theory of light waves (see §History, above). One could predict reflection coefficients that agreed with observation by supposing (like Fresnel) that different refractive indices were due to different densities and that the vibrations were normal to what was then called the plane of polarization, or by supposing (like MacCullagh and Neumann) that different refractive indices were due to different elasticities and that the vibrations were parallel to that plane. Thus the condition of equal permittivities and unequal permeabilities, although not realistic, is of some historical interest.
| Physical sciences | Optics | Physics |
11168 | https://en.wikipedia.org/wiki/Fortran | Fortran | Fortran (; formerly FORTRAN) is a third generation, compiled, imperative programming language that is especially suited to numeric computation and scientific computing.
Fortran was originally developed by IBM, with the first compiler delivered in 1957. Fortran computer programs have been written to support scientific and engineering applications, such as numerical weather prediction, finite element analysis, computational fluid dynamics, plasma physics, geophysics, computational physics, crystallography and computational chemistry. It is a popular language for high-performance computing and is used for programs that benchmark and rank the world's fastest supercomputers.
Fortran has evolved through numerous versions and dialects. In 1966, the American National Standards Institute (ANSI) developed a standard for Fortran to limit proliferation of compilers using slightly different syntax. Successive versions have added support for a character data type (Fortran 77), structured programming, array programming, modular programming, generic programming (Fortran 90), parallel computing (Fortran 95), object-oriented programming (Fortran 2003), and concurrent programming (Fortran 2008).
Since April 2024, Fortran has ranked among the top ten languages in the TIOBE index, a measure of the popularity of programming languages.
Naming
The first manual for FORTRAN describes it as a Formula Translating System, and printed the name in small caps. Other sources suggest the name stands for Formula Translator, or Formula Translation.
Early IBM computers did not support lowercase letters, and the names of versions of the language through FORTRAN 77 were usually spelled in all-uppercase. FORTRAN 77 was the last version in which the Fortran character set included only uppercase letters.
The official language standards for Fortran have referred to the language as "Fortran" with initial caps since Fortran 90.
Origins
In late 1953, John W. Backus submitted a proposal to his superiors at IBM to develop a more practical alternative to assembly language for programming their IBM 704 mainframe computer. Backus' historic FORTRAN team consisted of programmers Richard Goldberg, Sheldon F. Best, Harlan Herrick, Peter Sheridan, Roy Nutt, Robert Nelson, Irving Ziller, Harold Stern, Lois Haibt, and David Sayre. Its concepts included easier entry of equations into a computer, an idea developed by J. Halcombe Laning and demonstrated in the Laning and Zierler system of 1952.
A draft specification for The IBM Mathematical Formula Translating System was completed by November 1954. The first manual for FORTRAN appeared in October 1956, with the first FORTRAN compiler delivered in April 1957. Fortran produced efficient enough code for assembly language programmers to accept a high-level programming language replacement.
John Backus said during a 1979 interview with Think, the IBM employee magazine, "Much of my work has come from being lazy. I didn't like writing programs, and so, when I was working on the IBM 701, writing programs for computing missile trajectories, I started work on a programming system to make it easier to write programs."
The language was widely adopted by scientists for writing numerically intensive programs, which encouraged compiler writers to produce compilers that could generate faster and more efficient code. The inclusion of a complex number data type in the language made Fortran especially suited to technical applications such as electrical engineering.
By 1960, versions of FORTRAN were available for the IBM 709, 650, 1620, and 7090 computers. Significantly, the increasing popularity of FORTRAN spurred competing computer manufacturers to provide FORTRAN compilers for their machines, so that by 1963 over 40 FORTRAN compilers existed.
FORTRAN was provided for the IBM 1401 computer by an innovative 63-phase compiler that ran entirely in its core memory of only 8000 (six-bit) characters. The compiler could be run from tape, or from a 2200-card deck; it used no further tape or disk storage. It kept the program in memory and loaded overlays that gradually transformed it, in place, into executable form, as described by Haines.
This article was reprinted, edited, in both editions of Anatomy of a Compiler and in the IBM manual "Fortran Specifications and Operating Procedures, IBM 1401". The executable form was not entirely machine language; rather, floating-point arithmetic, subscripting, input/output, and function references were interpreted, preceding UCSD Pascal P-code by two decades. GOTRAN, a simplified, interpreted version of FORTRAN I (with only 12 statements rather than 32) for "load and go" operation was available, at least for the early IBM 1620 computer. Almost all later versions of FORTRAN, including modern Fortran, are fully compiled, as is usual for high-performance languages.
The development of Fortran paralleled the early evolution of compiler technology, and many advances in the theory and design of compilers were specifically motivated by the need to generate efficient code for Fortran programs.
FORTRAN
The initial release of FORTRAN for the IBM 704 contained 32 statements, including:
DIMENSION and EQUIVALENCE statements
Assignment statements
Three-way arithmetic IF statement, which passed control to one of three locations in the program depending on whether the result of the arithmetic expression was negative, zero, or positive
Control statements for checking exceptions (IF ACCUMULATOR OVERFLOW, IF QUOTIENT OVERFLOW, and IF DIVIDE CHECK); and control statements for manipulating sense switches and sense lights (SENSE LIGHT, IF (SENSE LIGHT), and IF (SENSE SWITCH))
GO TO, computed GO TO, ASSIGN, and assigned GO TO
DO loops
Formatted I/O: FORMAT, READ, READ INPUT TAPE, WRITE, WRITE OUTPUT TAPE, PRINT, and PUNCH
Unformatted I/O: READ TAPE, READ DRUM, WRITE TAPE, and WRITE DRUM
Other I/O: END FILE, REWIND, and BACKSPACE
PAUSE, STOP, and CONTINUE
FREQUENCY statement (for providing optimization hints to the compiler).
The arithmetic IF statement was reminiscent of (but not readily implementable by) a three-way comparison instruction (CAS—Compare Accumulator with Storage) available on the 704. The statement provided the only way to compare numbers—by testing their difference, with an attendant risk of overflow. This deficiency was later overcome by "logical" IF facilities introduced in FORTRAN IV.
The FREQUENCY statement was used originally (and optionally) to give branch probabilities for the three branch cases of the arithmetic IF statement. It could also be used to suggest how many iterations a loop might run. The first FORTRAN compiler used this weighting to perform at compile time a Monte Carlo simulation of the generated code, the results of which were used to optimize the placement of basic blocks in memory—a very sophisticated optimization for its time. The Monte Carlo technique is documented in Backus et al.'s paper on this original implementation, The FORTRAN Automatic Coding System:
The fundamental unit of program is the basic block; a basic block is a stretch of program which has one entry point and one exit point. The purpose of section 4 is to prepare for section 5 a table of predecessors (PRED table) which enumerates the basic blocks and lists for every basic block each of the basic blocks which can be its immediate predecessor in flow, together with the absolute frequency of each such basic block link. This table is obtained by running the program once in Monte-Carlo fashion, in which the outcome of conditional transfers arising out of IF-type statements and computed GO TO's is determined by a random number generator suitably weighted according to whatever FREQUENCY statements have been provided.
The first FORTRAN compiler reported diagnostic information by halting the program when an error was found and outputting an error code on its console. That code could be looked up by the programmer in an error messages table in the operator's manual, providing them with a brief description of the problem. Later, an error-handling subroutine to handle user errors such as division by zero, developed by NASA, was incorporated, informing users of which line of code contained the error.
Fixed layout and punched cards
Before the development of disk files, text editors and terminals, programs were most often entered on a keypunch keyboard onto 80-column punched cards, one line to a card. The resulting deck of cards would be fed into a card reader to be compiled. Punched card codes included no lower-case letters or many special characters, and special versions of the IBM 026 keypunch were offered that would correctly print the re-purposed special characters used in FORTRAN.
Reflecting punched card input practice, Fortran programs were originally written in a fixed-column format, with the first 72 columns read into twelve 36-bit words.
A letter "C" in column 1 caused the entire card to be treated as a comment and ignored by the compiler. Otherwise, the columns of the card were divided into four fields:
1 to 5 were the label field: a sequence of digits here was taken as a label for use in DO or control statements such as GO TO and IF, or to identify a FORMAT statement referred to in a WRITE or READ statement. Leading zeros were ignored, and 0 was not a valid label number.
6 was a continuation field: a character other than a blank or a zero here caused the card to be taken as a continuation of the statement on the prior card. The continuation cards were usually numbered 1, 2, etc. and the starting card might therefore have zero in its continuation column—which is not a continuation of its preceding card.
7 to 72 served as the statement field.
73 to 80 were ignored (the IBM 704's card reader only used 72 columns).
Columns 73 to 80 could therefore be used for identification information, such as punching a sequence number or text, which could be used to re-order cards if a stack of cards was dropped; though in practice this was reserved for stable, production programs. An IBM 519 could be used to copy a program deck and add sequence numbers. Some early compilers, e.g., the IBM 650's, had additional restrictions due to limitations on their card readers. Keypunches could be programmed to tab to column 7 and skip out after column 72. Later compilers relaxed most fixed-format restrictions, and the requirement was eliminated in the Fortran 90 standard.
Within the statement field, whitespace characters (blanks) were ignored outside a text literal. This allowed omitting spaces between tokens for brevity or including spaces within identifiers for clarity. For example, AVG OF X was a valid identifier, equivalent to AVGOFX, and 101010DO101I=1,101 was a valid statement, equivalent to 10101 DO 101 I = 1, 101 because the zero in column 6 is treated as if it were a space (!), while 101010DO101I=1.101 was instead 10101 DO101I = 1.101, the assignment of 1.101 to a variable called DO101I. Note the slight visual difference between a comma and a period.
Hollerith strings, originally allowed only in FORMAT and DATA statements, were prefixed by a character count and the letter H (e.g., 12HHELLO THERE!), allowing blanks to be retained within the character string. Miscounts were a problem.
Evolution
FORTRAN II
IBM's FORTRAN II appeared in 1958. The main enhancement was to support procedural programming by allowing user-written subroutines and functions which returned values with parameters passed by reference. The COMMON statement provided a way for subroutines to access common (or global) variables. Six new statements were introduced:
SUBROUTINE, FUNCTION, and END
CALL and RETURN
COMMON
Over the next few years, FORTRAN II added support for the DOUBLE PRECISION and COMPLEX data types.
Early FORTRAN compilers supported no recursion in subroutines. Early computer architectures supported no concept of a stack, and when they did directly support subroutine calls, the return location was often stored in one fixed location adjacent to the subroutine code (e.g. the IBM 1130) or in a specific machine register (IBM 360 et seq.), which allows recursion only if a stack is maintained by software, with the return address stored on the stack before the call is made and restored after the call returns. Although not specified in FORTRAN 77, many F77 compilers supported recursion as an option, and the Burroughs mainframes, designed with recursion built in, did so by default. Recursion became standard in Fortran 90 via the new RECURSIVE keyword.
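As a brief free-form sketch (the program and function names are invented for illustration), a Fortran 90 recursive function must be declared with the RECURSIVE keyword:

program factorial_demo
  implicit none
  print *, fact(5)   ! prints 120
contains
  recursive function fact(n) result(f)
    ! The RECURSIVE keyword is required for self-calling procedures
    ! in Fortran 90 through Fortran 2008.
    integer, intent(in) :: n
    integer :: f
    if (n <= 1) then
      f = 1
    else
      f = n * fact(n - 1)
    end if
  end function fact
end program factorial_demo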
Simple FORTRAN II program
This program, for Heron's formula, reads data on a tape reel containing three 5-digit integers A, B, and C as input. There are no "type" declarations available: variables whose name starts with I, J, K, L, M, or N are "fixed-point" (i.e. integers), otherwise floating-point. Since integers are to be processed in this example, the names of the variables start with the letter "I". The name of a variable must start with a letter and can continue with both letters and digits, up to a limit of six characters in FORTRAN II. If A, B, and C cannot represent the sides of a triangle in plane geometry, then the program's execution will end with an error code of "STOP 1". Otherwise, an output line will be printed showing the input values for A, B, and C, followed by the computed AREA of the triangle as a floating-point number occupying ten spaces along the line of output and showing 2 digits after the decimal point, the .2 in F10.2 of the FORMAT statement with label 601.
C AREA OF A TRIANGLE WITH A STANDARD SQUARE ROOT FUNCTION
C INPUT - TAPE READER UNIT 5, INTEGER INPUT
C OUTPUT - LINE PRINTER UNIT 6, REAL OUTPUT
C INPUT ERROR DISPLAY ERROR OUTPUT CODE 1 IN JOB CONTROL LISTING
READ INPUT TAPE 5, 501, IA, IB, IC
501 FORMAT (3I5)
C IA, IB, AND IC MAY NOT BE NEGATIVE OR ZERO
C FURTHERMORE, THE SUM OF TWO SIDES OF A TRIANGLE
C MUST BE GREATER THAN THE THIRD SIDE, SO WE CHECK FOR THAT, TOO
IF (IA) 777, 777, 701
701 IF (IB) 777, 777, 702
702 IF (IC) 777, 777, 703
703 IF (IA+IB-IC) 777, 777, 704
704 IF (IA+IC-IB) 777, 777, 705
705 IF (IB+IC-IA) 777, 777, 799
777 STOP 1
C USING HERON'S FORMULA WE CALCULATE THE
C AREA OF THE TRIANGLE
799 S = FLOATF (IA + IB + IC) / 2.0
AREA = SQRTF( S * (S - FLOATF(IA)) * (S - FLOATF(IB)) *
+ (S - FLOATF(IC)))
WRITE OUTPUT TAPE 6, 601, IA, IB, IC, AREA
601 FORMAT (4H A= ,I5,5H B= ,I5,5H C= ,I5,8H AREA= ,F10.2,
+ 13H SQUARE UNITS)
STOP
END
FORTRAN III
IBM also developed a FORTRAN III in 1958 that allowed for inline assembly code among other features; however, this version was never released as a product. Like the 704 FORTRAN and FORTRAN II, FORTRAN III included machine-dependent features that made code written in it unportable from machine to machine. Early versions of FORTRAN provided by other vendors suffered from the same disadvantage.
FORTRAN IV
IBM began development of FORTRAN IV in 1961 as a result of customer demands. FORTRAN IV removed the machine-dependent features of FORTRAN II (such as READ INPUT TAPE), while adding new features such as a LOGICAL data type, logical Boolean expressions, and the logical IF statement as an alternative to the arithmetic IF statement. FORTRAN IV was eventually released in 1962, first for the IBM 7030 ("Stretch") computer, followed by versions for the IBM 7090, IBM 7094, and later for the IBM 1401 in 1966.
By 1965, FORTRAN IV was supposed to be compliant with the standard being developed by the American Standards Association X3.4.3 FORTRAN Working Group.
Between 1966 and 1968, IBM offered several FORTRAN IV compilers for its System/360, each named by letters that indicated the minimum amount of memory the compiler needed to run.
The letters (F, G, H) matched the codes used with System/360 model numbers to indicate memory size, each letter increment being a factor of two larger:
1966 : FORTRAN IV F for DOS/360 (64K bytes)
1966 : FORTRAN IV G for OS/360 (128K bytes)
1968 : FORTRAN IV H for OS/360 (256K bytes)
Digital Equipment Corporation maintained DECSYSTEM-10 Fortran IV (F40) for PDP-10 from 1967 to 1975. Compilers were also available for the UNIVAC 1100 series and the Control Data 6000 series and 7000 series systems.
At about this time FORTRAN IV had started to become an important educational tool and implementations such as the University of Waterloo's WATFOR and WATFIV were created to simplify the complex compile and link processes of earlier compilers.
In the FORTRAN IV programming environment of the era, except for that used on Control Data Corporation (CDC) systems, only one instruction was placed per line. The CDC version allowed for multiple instructions per line if separated by a (dollar) character. The FORTRAN sheet was divided into four fields, as described above.
Two compilers of the time, IBM "G" and UNIVAC, allowed comments to be written on the same line as instructions, separated by a special character: "master space": V (perforations 7 and 8) for UNIVAC and perforations 12/11/0/7/8/9 (hexadecimal FF) for IBM. These comments were not to be inserted in the middle of continuation cards.
FORTRAN 66
Perhaps the most significant development in the early history of FORTRAN was the decision by the American Standards Association (now American National Standards Institute (ANSI)) to form a committee sponsored by the Business Equipment Manufacturers Association (BEMA) to develop an American Standard Fortran. The resulting two standards, approved in March 1966, defined two languages, FORTRAN (based on FORTRAN IV, which had served as a de facto standard), and Basic FORTRAN (based on FORTRAN II, but stripped of its machine-dependent features). The FORTRAN defined by the first standard, officially denoted X3.9-1966, became known as FORTRAN 66 (although many continued to term it FORTRAN IV, the language on which the standard was largely based). FORTRAN 66 effectively became the first industry-standard version of FORTRAN. FORTRAN 66 included:
Main program, SUBROUTINE, FUNCTION, and BLOCK DATA program units
INTEGER, REAL, DOUBLE PRECISION, COMPLEX, and LOGICAL data types
COMMON, DIMENSION, and EQUIVALENCE statements
DATA statement for specifying initial values
Intrinsic and EXTERNAL (e.g., library) functions
Assignment statement
GO TO, computed GO TO, assigned GO TO, and ASSIGN statements
Logical IF and arithmetic (three-way) IF statements
DO loop statement
READ, WRITE, BACKSPACE, REWIND, and ENDFILE statements for sequential I/O
FORMAT statement and assigned format
CALL, RETURN, PAUSE, and STOP statements
Hollerith constants in DATA and FORMAT statements, and as arguments to procedures
Identifiers of up to six characters in length
Comment lines
END line
The above Fortran II version of the Heron program needs several modifications to compile as a Fortran 66 program. Modifications include using the more machine-independent READ and WRITE statements, and removal of the unneeded type conversion functions. Though not required, the arithmetic IF statements can be re-written to use logical IF statements and expressions in a more structured fashion.
C AREA OF A TRIANGLE WITH A STANDARD SQUARE ROOT FUNCTION
C INPUT - TAPE READER UNIT 5, INTEGER INPUT
C OUTPUT - LINE PRINTER UNIT 6, REAL OUTPUT
C INPUT ERROR DISPLAY ERROR OUTPUT CODE 1 IN JOB CONTROL LISTING
READ (5, 501) IA, IB, IC
501 FORMAT (3I5)
C
C IA, IB, AND IC MAY NOT BE NEGATIVE OR ZERO
C FURTHERMORE, THE SUM OF TWO SIDES OF A TRIANGLE
C MUST BE GREATER THAN THE THIRD SIDE, SO WE CHECK FOR THAT, TOO
IF (IA .GT. 0 .AND. IB .GT. 0 .AND. IC .GT. 0) GOTO 10
WRITE (6, 602)
602 FORMAT (42H IA, IB, AND IC MUST BE GREATER THAN ZERO.)
STOP 1
10 CONTINUE
C
IF (IA+IB-IC .GT. 0
+ .AND. IA+IC-IB .GT. 0
+ .AND. IB+IC-IA .GT. 0) GOTO 20
WRITE (6, 603)
603 FORMAT (50H SUM OF TWO SIDES MUST BE GREATER THAN THIRD SIDE.)
STOP 1
20 CONTINUE
C
C USING HERON'S FORMULA WE CALCULATE THE
C AREA OF THE TRIANGLE
S = (IA + IB + IC) / 2.0
AREA = SQRT ( S * (S - IA) * (S - IB) * (S - IC))
WRITE (6, 601) IA, IB, IC, AREA
601 FORMAT (4H A= ,I5,5H B= ,I5,5H C= ,I5,8H AREA= ,F10.2,
+ 13H SQUARE UNITS)
STOP
END
FORTRAN 77
After the release of the FORTRAN 66 standard, compiler vendors introduced several extensions to Standard Fortran, prompting ANSI committee X3J3 in 1969 to begin work on revising the 1966 standard, under sponsorship of CBEMA, the Computer Business Equipment Manufacturers Association (formerly BEMA). Final drafts of this revised standard circulated in 1977, leading to formal approval of the new FORTRAN standard in April 1978. The new standard, called FORTRAN 77 and officially denoted X3.9-1978, added a number of significant features to address many of the shortcomings of FORTRAN 66:
Block IF and END IF statements, with optional ELSE and ELSE IF clauses, to provide improved language support for structured programming
DO loop extensions, including parameter expressions, negative increments, and zero trip counts
OPEN, CLOSE, and INQUIRE statements for improved I/O capability
Direct-access file I/O
IMPLICIT statement, to override implicit conventions that undeclared variables are INTEGER if their name begins with I, J, K, L, M, or N (and REAL otherwise)
CHARACTER data type, replacing Hollerith strings with vastly expanded facilities for character input and output and processing of character-based data
PARAMETER statement for specifying constants
SAVE statement for persistent local variables
Generic names for intrinsic functions (e.g. SQRT also accepts arguments of other types, such as DOUBLE PRECISION or COMPLEX).
A set of intrinsics (LGE, LGT, LLE, LLT) for lexical comparison of strings, based upon the ASCII collating sequence. (These ASCII functions were demanded by the U.S. Department of Defense, in their conditional approval vote.)
A maximum of seven dimensions in arrays, rather than three. Allowed subscript expressions were also generalized.
In this revision of the standard, a number of features were removed or altered in a manner that might invalidate formerly standard-conforming programs.
(Removal was the only allowable alternative to X3J3 at that time, since the concept of "deprecation" was not yet available for ANSI standards.)
While most of the 24 items in the conflict list (see Appendix A2 of X3.9-1978) addressed loopholes or pathological cases permitted by the prior standard but rarely used, a small number of specific capabilities were deliberately removed, such as:
Hollerith constants and Hollerith data, such as GREET = 12HHELLO THERE!
Reading into an H edit (Hollerith field) descriptor in a FORMAT specification
Overindexing of array bounds by subscripts, e.g., DIMENSION A(10,5) followed by Y = A(11,1)
Transfer of control out of and back into the range of a DO loop (also known as "Extended Range")
A Fortran 77 version of the Heron program requires no modifications to the Fortran 66 version. However, this example demonstrates additional cleanup of the I/O statements, including using list-directed I/O, and replacing the Hollerith edit descriptors in the FORMAT statements with quoted strings. It also uses structured IF and END IF statements, rather than GOTO/CONTINUE.
PROGRAM HERON
C AREA OF A TRIANGLE WITH A STANDARD SQUARE ROOT FUNCTION
C INPUT - DEFAULT STANDARD INPUT UNIT, INTEGER INPUT
C OUTPUT - DEFAULT STANDARD OUTPUT UNIT, REAL OUTPUT
C INPUT ERROR DISPLAY ERROR OUTPUT CODE 1 IN JOB CONTROL LISTING
READ (*, *) IA, IB, IC
C
C IA, IB, AND IC MAY NOT BE NEGATIVE OR ZERO
C FURTHERMORE, THE SUM OF TWO SIDES OF A TRIANGLE
C MUST BE GREATER THAN THE THIRD SIDE, SO WE CHECK FOR THAT, TOO
IF (IA .LE. 0 .OR. IB .LE. 0 .OR. IC .LE. 0) THEN
WRITE (*, *) 'IA, IB, and IC must be greater than zero.'
STOP 1
END IF
C
IF (IA+IB-IC .LE. 0
+ .OR. IA+IC-IB .LE. 0
+ .OR. IB+IC-IA .LE. 0) THEN
WRITE (*, *) 'Sum of two sides must be greater than third side.'
STOP 1
END IF
C
C USING HERON'S FORMULA WE CALCULATE THE
C AREA OF THE TRIANGLE
S = (IA + IB + IC) / 2.0
AREA = SQRT ( S * (S - IA) * (S - IB) * (S - IC))
WRITE (*, 601) IA, IB, IC, AREA
601 FORMAT ('A= ', I5, ' B= ', I5, ' C= ', I5, ' AREA= ', F10.2,
+ ' square units')
STOP
END
Transition to ANSI Standard Fortran
The development of a revised standard to succeed FORTRAN 77 would be repeatedly delayed as the standardization process struggled to keep up with rapid changes in computing and programming practice. In the meantime, as the "Standard FORTRAN" for nearly fifteen years, FORTRAN 77 would become the historically most important dialect.
An important practical extension to FORTRAN 77 was the release of MIL-STD-1753 in 1978. This specification, developed by the U.S. Department of Defense, standardized a number of features implemented by most FORTRAN 77 compilers but not included in the ANSI FORTRAN 77 standard. These features would eventually be incorporated into the Fortran 90 standard.
DO WHILE and END DO statements
INCLUDE statement
IMPLICIT NONE variant of the IMPLICIT statement (this and DO WHILE are shown in the sketch after this list)
Bit manipulation intrinsic functions, based on similar functions included in Industrial Real-Time Fortran (ANSI/ISA S61.1 (1976))
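A minimal sketch of how two of these extensions read (the program name and values are arbitrary); both constructs were later absorbed into Fortran 90:

program milstd_demo
  implicit none          ! IMPLICIT NONE, a MIL-STD-1753 extension
  integer :: n
  n = 1
  do while (n < 100)     ! DO WHILE ... END DO, likewise a 1753 extension
     n = n * 2
  end do
  print *, 'First power of two not less than 100 is', n
end program milstd_demo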
The IEEE 1003.9 POSIX Standard, released in 1991, provided a simple means for FORTRAN 77 programmers to issue POSIX system calls. Over 100 calls were defined in the document allowing access to POSIX-compatible process control, signal handling, file system control, device control, procedure pointing, and stream I/O in a portable manner.
Fortran 90
The much-delayed successor to FORTRAN 77, informally known as Fortran 90 (and prior to that, Fortran 8X), was finally released as ISO/IEC standard 1539:1991 in 1991 and an ANSI Standard in 1992. In addition to changing the official spelling from FORTRAN to Fortran, this major revision added many new features to reflect the significant changes in programming practice that had evolved since the 1978 standard:
Free-form source input removed the need to skip the first six character positions before entering statements.
Lowercase Fortran keywords
Identifiers up to 31 characters in length (In the previous standard, it was only six characters).
Inline comments
Ability to operate on arrays (or array sections) as a whole, thus greatly simplifying math and engineering computations.
whole, partial and masked array assignment statements and array expressions, such as X(1:N)=R(1:N)*COS(A(1:N))
WHERE statement for selective array assignment
array-valued constants and expressions,
user-defined array-valued functions and array constructors.
RECURSIVE procedures
Modules, to group related procedures and data together, and make them available to other program units, including the capability to limit the accessibility to only specific parts of the module (a minimal module sketch follows this list).
A vastly improved argument-passing mechanism, allowing interfaces to be checked at compile time
User-written interfaces for generic procedures
Operator overloading
Derived (structured) data types
New data type declaration syntax, to specify the data type and other attributes of variables
Dynamic memory allocation by means of the ALLOCATABLE attribute and the ALLOCATE and DEALLOCATE statements
POINTER attribute, pointer assignment, and NULLIFY statement to facilitate the creation and manipulation of dynamic data structures
Structured looping constructs, with an END DO statement for loop termination, and EXIT and CYCLE statements for terminating normal loop iterations in an orderly way
SELECT CASE . . . END SELECT construct for multi-way selection
Portable specification of numerical precision under the user's control
New and enhanced intrinsic procedures.
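A minimal free-form sketch showing several of these features together (a module, an assumed-shape and array-valued function, a whole-array expression, and a WHERE assignment); the names and data are invented for illustration:

module scale_mod
  implicit none
contains
  function rescaled(x) result(y)
    real, intent(in) :: x(:)          ! assumed-shape array argument
    real :: y(size(x))                ! array-valued function result
    y = x / maxval(abs(x))            ! whole-array expression
    where (abs(y) < 1.0e-6) y = 0.0   ! masked (WHERE) assignment
  end function rescaled
end module scale_mod

program use_scale
  use scale_mod
  implicit none
  real :: a(5) = (/ 10.0, -2.5, 1.0e-7, 4.0, -10.0 /)
  print *, rescaled(a)
end program use_scale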
Obsolescence and deletions
Unlike the prior revision, Fortran 90 removed no features. Any standard-conforming FORTRAN 77 program was also standard-conforming under Fortran 90, and either standard should have been usable to define its behavior.
A small set of features were identified as "obsolescent" and were expected to be removed in a future standard. All of the functionalities of these early-version features can be performed by newer Fortran features. Some are kept to simplify porting of old programs but many were deleted in Fortran 95.
"Hello, World!" example
program helloworld
print *, "Hello, World!"
end program helloworld
Fortran 95
Fortran 95, published officially as ISO/IEC 1539-1:1997, was a minor revision, mostly to resolve some outstanding issues from the Fortran 90 standard. Nevertheless, Fortran 95 also added a number of extensions, notably from the High Performance Fortran specification:
FORALL and nested WHERE constructs to aid vectorization
User-defined PURE and ELEMENTAL procedures
Default initialization of derived type components, including pointer initialization
Expanded the ability to use initialization expressions for data objects
Initialization of pointers to NULL()
Clearly defined that ALLOCATABLE arrays are automatically deallocated when they go out of scope.
A number of intrinsic functions were extended (for example a DIM argument was added to the MAXLOC intrinsic).
Several features noted in Fortran 90 to be "obsolescent" were removed from Fortran 95:
DO statements using REAL and DOUBLE PRECISION index variables
Branching to an END IF statement from outside its block
PAUSE statement
ASSIGN and assigned GO TO statement, and assigned format specifiers
Hollerith edit descriptor.
An important supplement to Fortran 95 was the ISO technical report TR-15581: Enhanced Data Type Facilities, informally known as the Allocatable TR. This specification defined enhanced use of ALLOCATABLE arrays, prior to the availability of fully Fortran 2003-compliant Fortran compilers. Such uses include ALLOCATABLE arrays as derived type components, in procedure dummy argument lists, and as function return values. (ALLOCATABLE arrays are preferable to POINTER-based arrays because ALLOCATABLE arrays are guaranteed by Fortran 95 to be deallocated automatically when they go out of scope, eliminating the possibility of memory leakage. In addition, elements of allocatable arrays are contiguous, and aliasing is not an issue for optimization of array references, allowing compilers to generate faster code than in the case of pointers.)
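A small sketch of the difference in practice (names invented): the ALLOCATABLE array below is released automatically when the subroutine returns, whereas a POINTER-based array would have to be deallocated explicitly to avoid a leak.

program alloc_demo
  implicit none
  call work(1000)
contains
  subroutine work(n)
    integer, intent(in) :: n
    real, allocatable :: buf(:)
    allocate (buf(n))
    buf = 1.0
    print *, 'sum =', sum(buf)
    ! buf is deallocated automatically when the subroutine returns
  end subroutine work
end program alloc_demo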
Another important supplement to Fortran 95 was the ISO technical report TR-15580: Floating-point exception handling, informally known as the IEEE TR. This specification defined support for IEEE floating-point arithmetic and floating-point exception handling.
Conditional compilation and varying length strings
In addition to the mandatory "Base language" (defined in ISO/IEC 1539-1 : 1997), the Fortran 95 language also included two optional modules:
Varying length character strings (ISO/IEC 1539-2 : 2000)
Conditional compilation (ISO/IEC 1539-3 : 1998)
which, together, compose the multi-part International Standard (ISO/IEC 1539).
According to the standards developers, "the optional parts describe self-contained features which have been requested by a substantial body of users and/or implementors, but which are not deemed to be of sufficient generality for them to be required in all standard-conforming Fortran compilers." Nevertheless, if a standard-conforming Fortran does provide such options, then they "must be provided in accordance with the description of those facilities in the appropriate Part of the Standard".
Modern Fortran
The language defined by the twenty-first century standards, in particular because of its incorporation of object-oriented programming support and subsequently Coarray Fortran, is often referred to as 'Modern Fortran', and the term is increasingly used in the literature.
Fortran 2003
Fortran 2003, officially published as ISO/IEC 1539-1:2004, was a major revision introducing many new features. A comprehensive summary of the new features of Fortran 2003 is available at the Fortran Working Group (ISO/IEC JTC1/SC22/WG5) official Web site.
From that article, the major enhancements for this revision include:
Derived type enhancements: parameterized derived types, improved control of accessibility, improved structure constructors, and finalizers
Object-oriented programming support: type extension and inheritance, polymorphism, dynamic type allocation, and type-bound procedures, providing complete support for abstract data types (see the sketch after this list)
Data manipulation enhancements: allocatable components (incorporating TR 15581), deferred type parameters, VOLATILE attribute, explicit type specification in array constructors and allocate statements, pointer enhancements, extended initialization expressions, and enhanced intrinsic procedures
Input/output enhancements: asynchronous transfer, stream access, user specified transfer operations for derived types, user specified control of rounding during format conversions, named constants for preconnected units, the FLUSH statement, regularization of keywords, and access to error messages
Procedure pointers
Support for IEEE floating-point arithmetic and floating-point exception handling (incorporating TR 15580)
Interoperability with the C programming language
Support for international usage: access to ISO 10646 4-byte characters and choice of decimal or comma in numeric formatted input/output
Enhanced integration with the host operating system: access to command-line arguments, environment variables, and processor error messages
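As a rough illustration of the object-oriented additions (type extension, type-bound procedures, dynamic type allocation, and polymorphic dispatch), with type and procedure names invented for the example:

module shapes_mod
  implicit none
  type :: shape
  contains
     procedure :: area => shape_area      ! type-bound procedure
  end type shape
  type, extends(shape) :: circle          ! type extension (inheritance)
     real :: r = 1.0
  contains
     procedure :: area => circle_area     ! overriding binding
  end type circle
contains
  real function shape_area(this)
    class(shape), intent(in) :: this
    shape_area = 0.0
  end function shape_area
  real function circle_area(this)
    class(circle), intent(in) :: this
    circle_area = 3.14159265 * this%r**2
  end function circle_area
end module shapes_mod

program oop_demo
  use shapes_mod
  implicit none
  class(shape), allocatable :: s
  allocate (circle :: s)                  ! dynamic type allocation
  print *, 'area =', s%area()             ! dispatches to circle_area
end program oop_demo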
An important supplement to Fortran 2003 was the ISO technical report TR-19767: Enhanced module facilities in Fortran. This report provided sub-modules, which make Fortran modules more similar to Modula-2 modules. They are similar to Ada private child sub-units. This allows the specification and implementation of a module to be expressed in separate program units, which improves packaging of large libraries, allows preservation of trade secrets while publishing definitive interfaces, and prevents compilation cascades.
Fortran 2008
ISO/IEC 1539-1:2010, informally known as Fortran 2008, was approved in September 2010. As with Fortran 95, this is a minor upgrade, incorporating clarifications and corrections to Fortran 2003, as well as introducing some new capabilities. The new capabilities include:
Sub-modules – additional structuring facilities for modules; supersedes ISO/IEC TR 19767:2005
Coarray Fortran – a parallel execution model
The DO CONCURRENT construct – for loop iterations with no interdependencies (see the sketch after this list)
The CONTIGUOUS attribute – to specify storage layout restrictions
The BLOCK construct – can contain declarations of objects with construct scope
Recursive allocatable components – as an alternative to recursive pointers in derived types
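A minimal sketch of DO CONCURRENT (array sizes and values are arbitrary); the construct asserts that the iterations are independent, so the compiler is free to reorder or parallelize them:

program saxpy_demo
  implicit none
  integer :: i
  real :: x(1000), y(1000), a
  a = 2.0
  x = 1.0
  y = 3.0
  do concurrent (i = 1:1000)   ! iterations declared free of interdependencies
     y(i) = a*x(i) + y(i)
  end do
  print *, y(1), y(1000)
end program saxpy_demo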
The Final Draft international Standard (FDIS) is available as document N1830.
A supplement to Fortran 2008 is the International Organization for Standardization (ISO) Technical Specification (TS) 29113 on Further Interoperability of Fortran with C, which has been submitted to ISO in May 2012 for approval. The specification adds support for accessing the array descriptor from C and allows ignoring the type and rank of arguments.
Fortran 2018
The Fortran 2018 revision of the language was earlier referred to as Fortran 2015. It was a significant revision and was released on November 28, 2018.
Fortran 2018 incorporates two previously published Technical Specifications:
ISO/IEC TS 29113:2012 Further Interoperability with C
ISO/IEC TS 18508:2015 Additional Parallel Features in Fortran
Additional changes and new features include support for ISO/IEC/IEEE 60559:2011 (the version of the IEEE floating-point standard preceding its latest minor revision), hexadecimal input/output, IMPLICIT NONE enhancements, and other changes.
Fortran 2018 deleted the arithmetic IF statement. It also deleted non-block DO constructs - loops which do not end with an END DO or CONTINUE statement. These had been an obsolescent part of the language since Fortran 90.
New obsolescences are: COMMON and EQUIVALENCE statements and the BLOCK DATA program unit, labelled DO loops, specific names for intrinsic functions, and the FORALL statement and construct.
Fortran 2023
Fortran 2023 (ISO/IEC 1539-1:2023) was published in November 2023, and can be purchased from the ISO.
Fortran 2023 is a minor extension of Fortran 2018 that focuses on correcting errors and omissions
in Fortran 2018. It also adds some small features, including an enumerated type capability.
Language features
A full description of the Fortran language features brought by Fortran 95 is covered in the related article, Fortran 95 language features. The language versions defined by later standards are often referred to collectively as 'Modern Fortran' and are described in the literature.
Science and engineering
Although a 1968 journal article by the authors of BASIC already described FORTRAN as "old-fashioned", programs have been written in Fortran for many decades and there is a vast body of Fortran software in daily use throughout the scientific and engineering communities. Jay Pasachoff wrote in 1984 that "physics and astronomy students simply have to learn FORTRAN. So much exists in FORTRAN that it seems unlikely that scientists will change to Pascal, Modula-2, or whatever." In 1993, Cecil E. Leith called FORTRAN the "mother tongue of scientific computing", adding that its replacement by any other possible language "may remain a forlorn hope".
It is the primary language for some of the most intensive super-computing tasks, such as in astronomy, climate modeling, computational chemistry, computational economics, computational fluid dynamics, computational physics, data analysis, hydrological modeling, numerical linear algebra and numerical libraries (LAPACK, IMSL and NAG), optimization, satellite simulation, structural engineering, and weather prediction. Many of the floating-point benchmarks to gauge the performance of new computer processors, such as the floating-point components of the SPEC benchmarks (e.g., CFP2006, CFP2017) are written in Fortran. Math algorithms are well documented in Numerical Recipes.
Apart from this, more modern codes in computational science generally use large program libraries, such as METIS for graph partitioning, PETSc or Trilinos for linear algebra capabilities, deal.II or FEniCS for mesh and finite element support, and other generic libraries. Since the early 2000s, many of the widely used support libraries have also been implemented in C and more recently, in C++. On the other hand, high-level languages such as the Wolfram Language, MATLAB, Python, and R have become popular in particular areas of computational science. Consequently, a growing fraction of scientific programs are also written in such higher-level scripting languages. For this reason, facilities for inter-operation with C were added to Fortran 2003 and enhanced by the ISO/IEC technical specification 29113, which was incorporated into Fortran 2018 to allow more flexible interoperation with other programming languages.
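For instance, the C interoperability features introduced in Fortran 2003 let a Fortran procedure be given a C-compatible binding; the routine below and its name are purely illustrative:

module c_interop_demo
  use, intrinsic :: iso_c_binding
  implicit none
contains
  subroutine scale_array(n, x, factor) bind(c, name="scale_array")
    ! Callable from C as: void scale_array(int n, double *x, double factor);
    integer(c_int), value :: n
    real(c_double), intent(inout) :: x(n)
    real(c_double), value :: factor
    x = factor * x
  end subroutine scale_array
end module c_interop_demo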
Portability
Portability was a problem in the early days because there was no agreed upon standard—not even IBM's reference manual—and computer companies vied to differentiate their offerings from others by providing incompatible features. Standards have improved portability. The 1966 standard provided a reference syntax and semantics, but vendors continued to provide incompatible extensions. Although careful programmers were coming to realize that use of incompatible extensions caused expensive portability problems, and were therefore using programs such as The PFORT Verifier, it was not until after the 1977 standard, when the National Bureau of Standards (now NIST) published FIPS PUB 69, that processors purchased by the U.S. Government were required to diagnose extensions of the standard. Rather than offer two processors, essentially every compiler eventually had at least an option to diagnose extensions.
Incompatible extensions were not the only portability problem. For numerical calculations, it is important to take account of the characteristics of the arithmetic. This was addressed by Fox et al. in the context of the 1966 standard by the PORT library. The ideas therein became widely used, and were eventually incorporated into the 1990 standard by way of intrinsic inquiry functions. The widespread (now almost universal) adoption of the IEEE 754 standard for binary floating-point arithmetic has essentially removed this problem.
Access to the computing environment (e.g., the program's command line, environment variables, textual explanation of error conditions) remained a problem until it was addressed by the 2003 standard.
Large collections of library software that could be described as being loosely related to engineering and scientific calculations, such as graphics libraries, have been written in C, and therefore access to them presented a portability problem. This has been addressed by incorporation of C interoperability into the 2003 standard.
It is now possible (and relatively easy) to write an entirely portable program in Fortran, even without recourse to a preprocessor.
Obsolete variants
Until the Fortran 66 standard was developed, each compiler supported its own variant of Fortran. Some were more divergent from the mainstream than others.
The first Fortran compiler set a high standard of efficiency for compiled code. This goal made it difficult to create a compiler so it was usually done by the computer manufacturers to support hardware sales. This left an important niche: compilers that were fast and provided good diagnostics for the programmer (often a student). Examples include Watfor, Watfiv, PUFFT, and on a smaller scale, FORGO, Wits Fortran, and Kingston Fortran 2.
Fortran 5 was marketed by Data General Corp from the early 1970s to the early 1980s, for the Nova, Eclipse, and MV line of computers. It had an optimizing compiler that was quite good for minicomputers of its time. The language most closely resembles FORTRAN 66.
FORTRAN V was distributed by Control Data Corporation in 1968 for the CDC 6600 series. The language was based upon FORTRAN IV.
Univac also offered a compiler for the 1100 series known as FORTRAN V. A spinoff of Univac Fortran V was Athena FORTRAN.
Specific variants produced by the vendors of high-performance scientific computers (e.g., Burroughs, Control Data Corporation (CDC), Cray, Honeywell, IBM, Texas Instruments, and UNIVAC) added extensions to Fortran to take advantage of special hardware features such as instruction cache, CPU pipelines, and vector arrays. For example, one of IBM's FORTRAN compilers (H Extended IUP) had a level of optimization which reordered the machine code instructions to keep multiple internal arithmetic units busy simultaneously. Another example is CFD, a special variant of FORTRAN designed specifically for the ILLIAC IV supercomputer, running at NASA's Ames Research Center.
IBM Research Labs also developed an extended FORTRAN-based language called VECTRAN for processing vectors and matrices.
Object-Oriented Fortran was an object-oriented extension of Fortran, in which data items can be grouped into objects, which can be instantiated and executed in parallel. It was available for Sun, Iris, iPSC, and nCUBE, but is no longer supported.
Such machine-specific extensions have either disappeared over time or have had elements incorporated into the main standards. The major remaining extension is OpenMP, which is a cross-platform extension for shared memory programming. One new extension, Coarray Fortran, is intended to support parallel programming.
FOR TRANSIT was the name of a reduced version of the IBM 704 FORTRAN language, which was implemented for the IBM 650, using a translator program developed at Carnegie in the late 1950s. The following comment appears in the IBM Reference Manual (FOR TRANSIT Automatic Coding System C28-4038, Copyright 1957, 1959 by IBM):
The FORTRAN system was designed for a more complex machine than the 650, and consequently some of the 32 statements found in the FORTRAN Programmer's Reference Manual are not acceptable to the FOR TRANSIT system. In addition, certain restrictions to the FORTRAN language have been added. However, none of these restrictions make a source program written for FOR TRANSIT incompatible with the FORTRAN system for the 704.
The permissible statements were:
Arithmetic assignment statements, e.g., a = b
GO TO (n1, n2, ..., nm), i
IF (a) n1, n2, n3
DO n i = m1, m2
Up to ten subroutines could be used in one program.
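A small illustrative fragment (not taken from the FOR TRANSIT manual) suggests how the statement forms listed above were combined, using the fixed column layout of the period; whether this exact fragment would have been accepted by FOR TRANSIT itself is an assumption.
C     ILLUSTRATIVE FRAGMENT (NOT FROM THE FOR TRANSIT MANUAL)
      SUM = 0.0
      DO 10 I = 1, 5
   10 SUM = SUM + 1.0
C     ARITHMETIC IF BRANCHES ON NEGATIVE, ZERO, OR POSITIVE VALUE
      IF (SUM - 5.0) 20, 30, 40
   20 K = 1
      GO TO 50
   30 K = 2
      GO TO 50
   40 K = 3
C     COMPUTED GO TO SELECTS A LABEL ACCORDING TO THE VALUE OF K
   50 GO TO (60, 70, 80), K
   60 STOP
   70 STOP
   80 STOP
      END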
FOR TRANSIT statements were limited to columns 7 through 56, only. Punched cards were used for input and output on the IBM 650. Three passes were required to translate source code to the "IT" language, then to compile the IT statements into SOAP assembly language, and finally to produce the object program, which could then be loaded into the machine to run the program (using punched cards for data input, and outputting results onto punched cards).
Two versions existed for the 650s with a 2000 word memory drum: FOR TRANSIT I (S) and FOR TRANSIT II, the latter for machines equipped with indexing registers and automatic floating-point decimal (bi-quinary) arithmetic. Appendix A of the manual included wiring diagrams for the IBM 533 card reader/punch control panel.
Fortran-based languages
Prior to FORTRAN 77, many preprocessors were commonly used to provide a friendlier language, with the advantage that the preprocessed code could be compiled on any machine with a standard FORTRAN compiler. These preprocessors would typically support structured programming, variable names longer than six characters, additional data types, conditional compilation, and even macro capabilities. Popular preprocessors included EFL, FLECS, iftran, MORTRAN, SFtran, S-Fortran, Ratfor, and Ratfiv. EFL, Ratfor and Ratfiv, for example, implemented C-like languages, outputting preprocessed code in standard FORTRAN 66. The PFORT preprocessor was often used to verify that code conformed to a portable subset of the language. Despite advances in the Fortran language, preprocessors continue to be used for conditional compilation and macro substitution.
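To give a sense of what such preprocessors offered, the fragment below sketches Ratfor-style structured syntax and one plausible FORTRAN 66 rendering of it; both the dialect details and the generated code shown are approximations, not the verified output of any particular preprocessor.
# Ratfor-style input: if/else and relational operators instead of labels
if (x > 0.0)
    y = sqrt(x)
else
    y = 0.0
C     ONE PLAUSIBLE FORTRAN 66 RENDERING PRODUCED BY SUCH A PREPROCESSOR
      IF (.NOT. (X .GT. 0.0)) GO TO 10
      Y = SQRT(X)
      GO TO 20
   10 Y = 0.0
   20 CONTINUE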
WATFOR, one of the earliest and most widely used student-oriented implementations of FORTRAN, was introduced in the 1960s and became popular in colleges and universities. Developed, supported, and distributed by the University of Waterloo, WATFOR was based largely on FORTRAN IV. A student using WATFOR could submit their batch FORTRAN job and, if there were no syntax errors, the program would move straight to execution. This simplification allowed students to concentrate on their program's syntax and semantics, or execution logic flow, rather than dealing with submission Job Control Language (JCL), the compile/link-edit/execution successive process(es), or other complexities of the mainframe/minicomputer environment. A downside to this simplified environment was that WATFOR was not a good choice for programmers needing the expanded abilities of their host processor(s); e.g., WATFOR typically had very limited access to I/O devices. WATFOR was succeeded by WATFIV and its later versions.
LRLTRAN was developed at the Lawrence Radiation Laboratory to provide support for vector arithmetic and dynamic storage, among other extensions to support systems programming. The distribution included the Livermore Time Sharing System (LTSS) operating system.
The Fortran 95 standard includes an optional Part 3 that defines a conditional compilation capability, often referred to as "CoCo".
Many Fortran compilers have integrated subsets of the C preprocessor into their systems.
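For instance, with compilers that apply C-style preprocessing to Fortran sources (conventionally triggered by an upper-case .F or .F90 suffix, or by an explicit option), conditional compilation can look like the sketch below; the macro name and the -DDEBUG flag mentioned in the comment are illustrative assumptions.
! debug_demo.F90 -- the upper-case suffix conventionally requests preprocessing
program debug_demo
   implicit none
   real :: x
   x = 2.0 * 3.0
#ifdef DEBUG
   ! This line is compiled only when the preprocessor symbol DEBUG is
   ! defined, e.g. by passing -DDEBUG to the compiler.
   write (*,*) 'debug: x =', x
#endif
   write (*,*) 'result:', x
end program debug_demo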
SIMSCRIPT is an application-specific Fortran preprocessor for modeling and simulating large discrete systems.
The F programming language was designed to be a clean subset of Fortran 95 that attempted to remove the redundant, unstructured, and deprecated features of Fortran, such as the EQUIVALENCE statement. F retains the array features added in Fortran 90, and removes control statements that were made obsolete by structured programming constructs added to both FORTRAN 77 and Fortran 90. F is described by its creators as "a compiled, structured, array programming language especially well suited to education and scientific computing". Essential Lahey Fortran 90 (ELF90) was a similar subset.
Lahey and Fujitsu teamed up to create Fortran for the Microsoft .NET Framework. Silverfrost FTN95 is also capable of creating .NET code.
Code examples
The following program illustrates dynamic memory allocation and array-based operations, two features introduced with Fortran 90. Particularly noteworthy is the absence of DO loops and IF/THEN statements in manipulating the array; mathematical operations are applied to the array as a whole. Also apparent is the use of descriptive variable names and general code formatting that conform with contemporary programming style. This example computes an average over data entered interactively.
program average
! Read in some numbers and take the average
! As written, if there are no data points, an average of zero is returned
! While this may not be desired behavior, it keeps this example simple
implicit none
real, allocatable :: points(:)
integer :: number_of_points
real :: average_points, positive_average, negative_average
average_points = 0.
positive_average = 0.
negative_average = 0.
write (*,*) "Input number of points to average:"
read (*,*) number_of_points
allocate (points(number_of_points))
write (*,*) "Enter the points to average:"
read (*,*) points
! Take the average by summing points and dividing by number_of_points
if (number_of_points > 0) average_points = sum(points) / number_of_points
! Now form average over positive and negative points only
if (count(points > 0.) > 0) positive_average = sum(points, points > 0.) / count(points > 0.)
if (count(points < 0.) > 0) negative_average = sum(points, points < 0.) / count(points < 0.)
! Print results to the default output unit (typically stdout, unit 6)
write (*,'(a,g12.4)') 'Average = ', average_points
write (*,'(a,g12.4)') 'Average of positive points = ', positive_average
write (*,'(a,g12.4)') 'Average of negative points = ', negative_average
deallocate (points) ! free memory
end program average
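Assuming a modern Fortran compiler such as gfortran is available (an assumption, not part of the original example), the program could be compiled and run along these lines; the source file name is arbitrary.
gfortran -o average average.f90
./average
The program then prompts for the number of points and the points themselves, and prints the three averages computed above.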
Humor
During the same FORTRAN standards committee meeting at which the name "FORTRAN 77" was chosen, a satirical technical proposal was incorporated into the official distribution bearing the title "Letter O Considered Harmful". This proposal purported to address the confusion that sometimes arises between the letter "O" and the numeral zero, by eliminating the letter from allowable variable names. However, the method proposed was to eliminate the letter from the character set entirely (thereby retaining 48 as the number of lexical characters, which the colon had increased to 49). This was considered beneficial in that it would promote structured programming, by making it impossible to use the notorious GO TO statement as before. (Troublesome FORMAT statements would also be eliminated.) It was noted that this "might invalidate some existing programs" but that most of these "probably were non-conforming, anyway".
When X3J3 debated whether the minimum trip count for a DO loop should be zero or one in Fortran 77, Loren Meissner suggested a minimum trip count of two—reasoning (tongue-in-cheek) that if it were less than two, then there would be no reason for a loop.
When assumed-length arrays were being added, there was a dispute as to the appropriate character to separate upper and lower bounds. In a comment examining these arguments, Walt Brainerd penned an article entitled "Astronomy vs. Gastroenterology" because some proponents had suggested using the star or asterisk ("*"), while others favored the colon (":").
Variable names beginning with the letters I–N had a default type of integer, while variables starting with any other letter defaulted to real, although programmers could override the defaults with an explicit declaration. This led to the joke: "In FORTRAN, GOD is REAL (unless declared INTEGER)."
| Technology | "Historical" languages | null |
11180 | https://en.wikipedia.org/wiki/Functional%20analysis | Functional analysis | Functional analysis is a branch of mathematical analysis, the core of which is formed by the study of vector spaces endowed with some kind of limit-related structure (for example, inner product, norm, or topology) and the linear functions defined on these spaces and suitably respecting these structures. The historical roots of functional analysis lie in the study of spaces of functions and the formulation of properties of transformations of functions such as the Fourier transform as transformations defining, for example, continuous or unitary operators between function spaces. This point of view turned out to be particularly useful for the study of differential and integral equations.
The usage of the word functional as a noun goes back to the calculus of variations, implying a function whose argument is a function. The term was first used in Hadamard's 1910 book on that subject. However, the general concept of a functional had previously been introduced in 1887 by the Italian mathematician and physicist Vito Volterra. The theory of nonlinear functionals was continued by students of Hadamard, in particular Fréchet and Lévy. Hadamard also founded the modern school of linear functional analysis further developed by Riesz and the group of Polish mathematicians around Stefan Banach.
In modern introductory texts on functional analysis, the subject is seen as the study of vector spaces endowed with a topology, in particular infinite-dimensional spaces. In contrast, linear algebra deals mostly with finite-dimensional spaces, and does not use topology. An important part of functional analysis is the extension of the theories of measure, integration, and probability to infinite-dimensional spaces, also known as infinite dimensional analysis.
Normed vector spaces
The basic and historically first class of spaces studied in functional analysis are complete normed vector spaces over the real or complex numbers. Such spaces are called Banach spaces. An important example is a Hilbert space, where the norm arises from an inner product. These spaces are of fundamental importance in many areas, including the mathematical formulation of quantum mechanics, machine learning, partial differential equations, and Fourier analysis.
More generally, functional analysis includes the study of Fréchet spaces and other topological vector spaces not endowed with a norm.
An important object of study in functional analysis are the continuous linear operators defined on Banach and Hilbert spaces. These lead naturally to the definition of C*-algebras and other operator algebras.
Hilbert spaces
Hilbert spaces can be completely classified: there is a unique Hilbert space up to isomorphism for every cardinality of the orthonormal basis. Finite-dimensional Hilbert spaces are fully understood in linear algebra, and infinite-dimensional separable Hilbert spaces are isomorphic to the sequence space ℓ²(ℵ₀). Separability being important for applications, functional analysis of Hilbert spaces consequently mostly deals with this space. One of the open problems in functional analysis is to prove that every bounded linear operator on a Hilbert space has a proper invariant subspace. Many special cases of this invariant subspace problem have already been proven.
Banach spaces
General Banach spaces are more complicated than Hilbert spaces, and cannot be classified in such a simple manner as those. In particular, many Banach spaces lack a notion analogous to an orthonormal basis.
Examples of Banach spaces are the L^p-spaces, for any real number p ≥ 1. Given also a measure μ on a set X, the space L^p(X), sometimes also denoted L^p(X, μ) or L^p(μ), has as its vectors equivalence classes of measurable functions whose absolute value's p-th power has finite integral; that is, functions f for which one has
\[ \int_X |f(x)|^p \, d\mu(x) < \infty . \]
If μ is the counting measure, then the integral may be replaced by a sum. That is, we require
\[ \sum_{x \in X} |f(x)|^p < \infty . \]
Then it is not necessary to deal with equivalence classes, and the space is denoted ℓ^p(X), written more simply as ℓ^p in the case when X is the set of non-negative integers.
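As a concrete illustration (a standard textbook example, not taken from this article), take X to be the positive integers with counting measure: the sequence with terms 1/n then belongs to ℓ² but not to ℓ¹, since
\[ \sum_{n=1}^{\infty} \frac{1}{n^{2}} = \frac{\pi^{2}}{6} < \infty, \qquad\text{while}\qquad \sum_{n=1}^{\infty} \frac{1}{n} = \infty . \]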
In Banach spaces, a large part of the study involves the dual space: the space of all continuous linear maps from the space into its underlying field, so-called functionals. A Banach space can be canonically identified with a subspace of its bidual, which is the dual of its dual space. The corresponding map is an isometry but in general not onto. A general Banach space and its bidual need not even be isometrically isomorphic in any way, contrary to the finite-dimensional situation. This is explained in the dual space article.
Also, the notion of derivative can be extended to arbitrary functions between Banach spaces. See, for instance, the Fréchet derivative article.
Linear functional analysis
Major and foundational results
There are four major theorems which are sometimes called the four pillars of functional analysis:
the Hahn–Banach theorem
the open mapping theorem
the closed graph theorem
the uniform boundedness principle, also known as the Banach–Steinhaus theorem.
Important results of functional analysis include:
Uniform boundedness principle
The uniform boundedness principle or Banach–Steinhaus theorem is one of the fundamental results in functional analysis. Together with the Hahn–Banach theorem and the open mapping theorem, it is considered one of the cornerstones of the field. In its basic form, it asserts that for a family of continuous linear operators (and thus bounded operators) whose domain is a Banach space, pointwise boundedness is equivalent to uniform boundedness in operator norm.
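For concreteness, the standard formulation can be stated as follows: if X is a Banach space, Y a normed vector space, and F a family of continuous linear operators from X to Y, then
\[ \sup_{T \in F} \|T x\|_{Y} < \infty \ \text{ for every } x \in X \quad\Longrightarrow\quad \sup_{T \in F} \|T\|_{B(X,Y)} < \infty . \]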
The theorem was first published in 1927 by Stefan Banach and Hugo Steinhaus but it was also proven independently by Hans Hahn.
Spectral theorem
There are many theorems known as the spectral theorem, but one in particular has many applications in functional analysis. In a common formulation it states that every bounded self-adjoint operator A on a Hilbert space H is unitarily equivalent to a multiplication operator: there exist a measure space (X, Σ, μ), a real-valued essentially bounded measurable function f on X, and a unitary operator U : H → L²(X, μ) such that U A U⁻¹ is the operator of multiplication by f.
This is the beginning of the vast research area of functional analysis called operator theory; see also the spectral measure.
There is also an analogous spectral theorem for bounded normal operators on Hilbert spaces. The only difference in the conclusion is that now f may be complex-valued.
Hahn–Banach theorem
The Hahn–Banach theorem is a central tool in functional analysis. It allows the extension of bounded linear functionals defined on a subspace of some vector space to the whole space, and it also shows that there are "enough" continuous linear functionals defined on every normed vector space to make the study of the dual space "interesting".
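In its normed-space form the theorem can be stated as follows (a standard formulation, given here for concreteness): if U is a linear subspace of a normed vector space V and f is a bounded linear functional on U, then f admits an extension F to all of V with
\[ F\big|_{U} = f \quad\text{and}\quad \|F\| = \|f\| . \]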
Open mapping theorem
The open mapping theorem, also known as the Banach–Schauder theorem (named after Stefan Banach and Juliusz Schauder), is a fundamental result which states that if a continuous linear operator between Banach spaces is surjective then it is an open map. More precisely: if X and Y are Banach spaces and A : X → Y is a surjective continuous linear operator, then A is an open map (that is, A maps open subsets of X to open subsets of Y).
The proof uses the Baire category theorem, and completeness of both X and Y is essential to the theorem. The statement of the theorem is no longer true if either space is merely assumed to be a normed space, but it is true if X and Y are taken to be Fréchet spaces.
Closed graph theorem
The closed graph theorem states that a linear operator between Banach spaces is continuous if and only if its graph is closed in the product of the two spaces.
Other topics
Foundations of mathematics considerations
Most spaces considered in functional analysis have infinite dimension. To show the existence of a vector space basis for such spaces may require Zorn's lemma. However, a somewhat different concept, the Schauder basis, is usually more relevant in functional analysis. Many theorems require the Hahn–Banach theorem, usually proved using the axiom of choice, although the strictly weaker Boolean prime ideal theorem suffices. The Baire category theorem, needed to prove many important theorems, also requires a form of axiom of choice.
Points of view
Functional analysis includes the following tendencies:
Abstract analysis. An approach to analysis based on topological groups, topological rings, and topological vector spaces.
Geometry of Banach spaces contains many topics. One is a combinatorial approach connected with Jean Bourgain; another is a characterization of Banach spaces in which various forms of the law of large numbers hold.
Noncommutative geometry. Developed by Alain Connes, partly building on earlier notions, such as George Mackey's approach to ergodic theory.
Connection with quantum mechanics. Either narrowly defined as in mathematical physics, or broadly interpreted by, for example, Israel Gelfand, to include most types of representation theory.
| Mathematics | Calculus and analysis | null |
11240 | https://en.wikipedia.org/wiki/Flatulence | Flatulence | Flatulence is the expulsion of gas from the intestines via the anus, commonly referred to as farting. "Flatus" is the medical word for gas generated in the stomach or bowels. A proportion of intestinal gas may be swallowed environmental air, and hence flatus is not entirely generated in the stomach or bowels. The scientific study of this area of medicine is termed flatology.
Passing gas is a normal bodily process.
Flatus is brought to the rectum and pressurized by muscles in the intestines. It is normal to pass flatus ("to fart"), though volume and frequency vary greatly among individuals. It is also normal for intestinal gas to have a feculent or unpleasant odor, which may be intense. The noise commonly associated with flatulence is produced by the anus and buttocks, which act together in a manner similar to that of an embouchure. Both the sound and odor are sources of embarrassment, annoyance or amusement (flatulence humor). In many societies, flatus is a taboo. Thus, many people either let their flatus out quietly or even hold it completely. However, holding the gases inside is not healthy.
There are several general symptoms related to intestinal gas: pain, bloating and abdominal distension, excessive flatus volume, excessive flatus odor, and gas incontinence. Furthermore, eructation (colloquially known as "burping") is sometimes included under the topic of flatulence. When excessive or malodorous, flatus can be a sign of a health disorder, such as irritable bowel syndrome, celiac disease or lactose intolerance.
Terminology
Non-medical definitions of the term include "the uncomfortable condition of having gas in the stomach and bowels", or "a state of excessive gas in the alimentary canal". These definitions highlight that many people consider "bloating", abdominal distension or increased volume of intestinal gas, to be synonymous with the term flatulence (although this is technically inaccurate).
Colloquially, flatulence may be referred to as "farting", "pumping", "trumping", "blowing off", "pooting", "passing gas", "breaking wind", "backfiring", "tooting", "beefing", or simply (in American English) "gas" or (British English) "wind". Derived terms include vaginal flatulence, otherwise known as a queef. In rhyming slang, blowing a raspberry (at someone) means imitating with the mouth the sound of a fart, in real or feigned derision.
Signs and symptoms
Generally speaking, there are four different types of complaints that relate to intestinal gas, which may present individually or in combination.
Bloating and pain
Patients may complain of bloating as abdominal distension, discomfort and pain from "trapped wind". In the past, functional bowel disorders such as irritable bowel syndrome that produced symptoms of bloating were attributed to increased production of intestinal gas.
However, three significant pieces of evidence refute this theory. First, in normal subjects, even very high rates of gas infusion into the small intestine (30 mL/min) are tolerated without complaints of pain or bloating, and the gas is harmlessly passed as flatus per rectum. Secondly, studies aiming to quantify the total volume of gas produced by patients with irritable bowel syndrome (some including gas emitted from the mouth by eructation) have consistently failed to demonstrate increased volumes compared to healthy subjects. The proportion of hydrogen produced may be increased in some patients with irritable bowel syndrome, but this does not affect the total volume. Thirdly, the volume of flatus produced by patients with irritable bowel syndrome who have pain and abdominal distension would be tolerated in normal subjects without any complaints of pain.
Patients who complain of bloating frequently can be shown to have objective increases in abdominal girth, often increased throughout the day and then resolving during sleep. The increase in girth combined with the fact that the total volume of flatus is not increased led to studies aiming to image the distribution of intestinal gas in patients with bloating. They found that gas was not distributed normally in these patients: there was segmental gas pooling and focal distension. In conclusion, abdominal distension, pain and bloating symptoms are the result of abnormal intestinal gas dynamics rather than increased flatus production.
Excessive volume
The range of volumes of flatus in normal individuals varies hugely (476–1,491 mL/24 h). All intestinal gas is either swallowed environmental air, present intrinsically in foods and beverages, or the result of gut fermentation.
Swallowing small amounts of air occurs while eating and drinking. This is emitted from the mouth by eructation (burping) and is normal. Excessive swallowing of environmental air is called aerophagia, and has been shown in a few case reports to be responsible for increased flatus volume. This is, however, considered a rare cause of increased flatus volume. Gases contained in food and beverages are likewise emitted largely through eructation, e.g., carbonated beverages.
Endogenously produced intestinal gases make up 74 percent of flatus in normal subjects. The volume of gas produced is partially dependent upon the composition of the intestinal microbiota, which is normally very resistant to change, but is also very different in different individuals. Some patients are predisposed to increased endogenous gas production by virtue of their gut microbiota composition. The greatest concentration of gut bacteria is in the colon, while the small intestine is normally nearly sterile. Fermentation occurs when unabsorbed food residues arrive in the colon.
Therefore, even more than the composition of the microbiota, diet is the primary factor that dictates the volume of flatus produced. Diets that aim to reduce the amount of undigested fermentable food residues arriving in the colon have been shown to significantly reduce the volume of flatus produced. Again, increased volume of intestinal gas will not cause bloating and pain in normal subjects. Abnormal intestinal gas dynamics will create pain, distension, and bloating, regardless of whether there is high or low total flatus volume.
Odor
Although flatus possesses an odor, this may be abnormally increased in some patients and cause social distress to the patient. Increased odor of flatus presents a distinct clinical issue from other complaints related to intestinal gas. Some patients may exhibit over-sensitivity to bad flatus odor, and in extreme forms, olfactory reference syndrome may be diagnosed. Recent informal research found a correlation between flatus odor and both loudness and humidity content.
Incontinence of flatus
"Gas incontinence" could be defined as loss of voluntary control over the passage of flatus. It is a recognised subtype of faecal incontinence, and is usually related to minor disruptions of the continence mechanisms. Some consider gas incontinence to be the first, sometimes only, symptom of faecal incontinence.
Cause
Intestinal gas is composed of varying quantities of exogenous sources and endogenous sources. The exogenous gases are swallowed (aerophagia) when eating or drinking or increased swallowing during times of excessive salivation (as might occur when nauseated or as the result of gastroesophageal reflux disease). The endogenous gases are produced either as a by-product of digesting certain types of food, or of incomplete digestion, as is the case during steatorrhea. Anything that causes food to be incompletely digested by the stomach or small intestine may cause flatulence when the material arrives in the large intestine, due to fermentation by yeast or prokaryotes normally or abnormally present in the gastrointestinal tract.
Flatulence-producing foods are typically high in certain polysaccharides, especially oligosaccharides such as inulin. Those foods include beans, lentils, dairy products, onions, garlic, spring onions, leeks, turnips, swedes, radishes, sweet potatoes, potatoes, cashews, Jerusalem artichokes, oats, wheat, and yeast in breads. Cauliflower, broccoli, cabbage, Brussels sprouts and other cruciferous vegetables that belong to the genus Brassica are commonly reputed to not only increase flatulence, but to increase the pungency of the flatus.
In beans, endogenous gases seem to arise from complex oligosaccharides (carbohydrates) that are particularly resistant to digestion by mammals, but are readily digestible by microorganisms (methane-producing archaea; Methanobrevibacter smithii) that inhabit the digestive tract. These oligosaccharides pass through the small intestine largely unchanged, and when they reach the large intestine, bacteria ferment them, producing copious amounts of flatus.
When excessive or malodorous, flatus can be a sign of a health disorder, such as irritable bowel syndrome, celiac disease, non-celiac gluten sensitivity or lactose intolerance. It can also be caused by certain medicines, such as ibuprofen, laxatives, antifungal medicines or statins. Some infections, such as giardiasis, are also associated with flatulence.
Interest in the causes of flatulence was spurred by high-altitude flight and human spaceflight; the low atmospheric pressure, confined conditions, and stresses peculiar to those endeavours were cause for concern. In the field of mountaineering, the phenomenon of high altitude flatus expulsion was first recorded over two hundred years ago.
Mechanism
Production, composition, and odor
Flatus (intestinal gas) is mostly produced as a byproduct of bacterial fermentation in the gastrointestinal (GI) tract, especially the colon. There are reports of aerophagia (excessive air swallowing) causing excessive intestinal gas, but this is considered rare.
Over 99% of the volume of flatus is composed of odorless gases. These include oxygen, nitrogen, carbon dioxide, hydrogen and methane. Nitrogen is not produced in the gut, but is a component of swallowed environmental air. Patients who have excessive intestinal gas that is mostly composed of nitrogen have aerophagia. Hydrogen, carbon dioxide and methane are all produced in the gut and contribute 74% of the volume of flatus in normal subjects. Methane and hydrogen are flammable, and so flatus can be ignited if it contains adequate amounts of these components.
Not all humans produce flatus that contains methane. For example, in one study of the faeces of nine adults, only five of the samples contained archaea capable of producing methane. The prevalence of methane over hydrogen in human flatus may correlate with obesity, constipation and irritable bowel syndrome, as archaea that oxidise hydrogen into methane promote the metabolism's ability to absorb fatty acids from food.
The remaining trace (<1% volume) compounds contribute to the odor of flatus. Historically, compounds such as indole, skatole, ammonia and short chain fatty acids were thought to cause the odor of flatus. More recent evidence proves that the major contribution to the odor of flatus comes from a combination of volatile sulfur compounds. Hydrogen sulfide, methyl mercaptan (also known as methanethiol), dimethyl sulfide, dimethyl disulfide and dimethyl trisulfide are present in flatus. The benzopyrrole volatiles indole and skatole have an odor of mothballs, and therefore probably do not contribute greatly to the characteristic odor of flatus.
In one study, hydrogen sulfide concentration was shown to correlate convincingly with perceived bad odor of flatus, followed by methyl mercaptan and dimethyl sulfide. This is supported by the fact that hydrogen sulfide may be the most abundant volatile sulfur compound present. These results were generated from subjects who were eating a diet high in pinto beans to stimulate flatus production.
Others report that methyl mercaptan was the greatest contributor to the odor of flatus in patients not under any specific dietary alterations. It has now been demonstrated that methyl mercaptan, dimethyl sulfide, and hydrogen sulfide (described as decomposing vegetables, unpleasantly sweet/wild radish and rotten eggs respectively) are all present in human flatus in concentrations above their smell perception thresholds.
It is recognized that increased dietary sulfur-containing amino acids significantly increases the odor of flatus. It is therefore likely that the odor of flatus is created by a combination of volatile sulfur compounds, with minimal contribution from non-sulfur volatiles. This odor can also be caused by the presence of large numbers of microflora bacteria or the presence of faeces in the rectum. Diets high in protein, especially sulfur-containing amino acids, have been demonstrated to significantly increase the odor of flatus.
Volume and intestinal gas dynamics
Normal flatus volume is 476 to 1491 mL per 24 hours. This variability between individuals is greatly dependent upon diet. Similarly, the number of flatus episodes per day is variable; the normal range is given as 8–20 per day. The volume of flatus associated with each flatulence event again varies (5–375 mL). The volume of the first flatulence upon waking in the morning is significantly larger than those during the day. This may be due to buildup of intestinal gas in the colon during sleep, the peak in peristaltic activity in the first few hours after waking or the strong prokinetic effect of rectal distension on the rate of transit of intestinal gas. It is now known that gas is moved along the gut independently of solids and liquids, and this transit is more efficient in the erect position compared to when supine. It is thought that large volumes of intestinal gas present low resistance, and can be propelled by subtle changes in gut tone, capacitance and proximal contraction and distal relaxation. This process is thought not to affect solid and liquid intra-lumenal contents.
Researchers investigating the role of sensory nerve endings in the anal canal did not find them to be essential for retaining fluids in the anus, and instead speculate that their role may be to distinguish between flatus and faeces, thereby helping detect a need to defecate or to signal the end of defecation.
The sound varies depending on the volume of gas, the size of the opening that the air is being pushed through, which is affected by the state of tension in the sphincter muscle, and the force or velocity of the gas being propelled, as well as other factors, such as whether the gas was caused by swallowed air. Among humans, flatulence occasionally happens accidentally, such as incidentally to coughing or sneezing or during orgasm; on other occasions, flatulence can be voluntarily elicited by tensing the rectum or "bearing down" on stomach or bowel muscles and subsequently relaxing the anal sphincter, resulting in the expulsion of flatus.
Management
Since problems involving intestinal gas present as different (but sometimes combined) complaints, the management is cause-related.
Pain and bloating
While not affecting the production of the gases themselves, surfactants (agents that lower surface tension) can reduce the disagreeable sensations associated with flatulence, by aiding the dissolution of the gases into liquid and solid faecal matter. Preparations containing simethicone reportedly operate by promoting the coalescence of smaller bubbles into larger ones more easily passed from the body, either by burping or flatulence. Such preparations do not decrease the total amount of gas generated in or passed from the colon, but make the bubbles larger, thereby allowing them to be passed more easily.
Other drugs including prokinetics, lubiprostone, antibiotics and probiotics are also used to treat bloating in patients with functional bowel disorders such as irritable bowel syndrome, and there is some evidence that these measures may reduce symptoms.
A flexible tube, inserted into the rectum, can be used to collect intestinal gas in a flatus bag. This method is occasionally needed in a hospital setting, when the patient is unable to pass gas normally.
Volume
One method of reducing the volume of flatus produced is dietary modification, reducing the amount of fermentable carbohydrates. This is the theory behind diets such as the low-FODMAP diet (a diet low in fermentable oligosaccharides, disaccharides, monosaccharides, alcohols, and polyols).
Most starches, including potatoes, corn, noodles, and wheat, produce gas as they are broken down in the large intestine. In the case of beans, intestinal gas can be reduced by fermenting them, which makes them less gas-inducing, or by cooking them in the liquor from a previous batch. For example, the fermented bean product miso is less likely to produce as much intestinal gas. Some legumes also stand up to prolonged cooking, which can help break down the oligosaccharides into simple sugars. Fermentative lactic acid bacteria such as Lactobacillus casei and Lactobacillus plantarum reduce flatulence in the human intestinal tract.
Probiotics (live yogurt, kefir, etc.) are reputed to reduce flatulence when used to restore balance to the normal intestinal flora. Live (bioactive) yogurt contains, among other lactic bacteria, Lactobacillus acidophilus, which may be useful in reducing flatulence. L. acidophilus may make the intestinal environment more acidic, supporting a natural balance of the fermentative processes. L. acidophilus is available in supplements. Prebiotics, which generally are non-digestible oligosaccharides, such as fructooligosaccharide, generally increase flatulence in a similar way as described for lactose intolerance.
Digestive enzyme supplements may significantly reduce the amount of flatulence caused by some components of foods not being digested by the body and thereby promoting the action of microbes in the small and large intestines. It has been suggested that alpha-galactosidase enzymes, which can digest certain complex sugars, are effective in reducing the volume and frequency of flatus. The enzymes alpha-galactosidase, lactase, amylase, lipase, protease, cellulase, glucoamylase, invertase, malt diastase, pectinase, and bromelain are available, either individually or in combination blends, in commercial products.
The antibiotic rifaximin, often used to treat diarrhea caused by the microorganism E. coli, may reduce both the production of intestinal gas and the frequency of flatus events.
Odor
Bismuth
The odor created by flatulence is commonly treated with bismuth subgallate, available under the name Devrom. Bismuth subgallate is commonly used by individuals who have had ostomy surgery, bariatric surgery, faecal incontinence and irritable bowel syndrome. Bismuth subsalicylate is a compound that binds hydrogen sulfide, and one study reported that a dose of 524 mg of bismuth subsalicylate four times a day for 3–7 days yielded a >95% reduction in faecal hydrogen sulfide release in both humans and rats.
Another bismuth compound, bismuth subnitrate was also shown to bind to hydrogen sulfide. Another study showed that bismuth acted synergistically with various antibiotics to inhibit sulfate-reducing gut bacteria and sulfide production. Some authors proposed a theory that hydrogen sulfide was involved in the development of ulcerative colitis and that bismuth might be helpful in the management of this condition. However, bismuth administration in rats did not prevent them from developing ulcerative colitis despite reduced hydrogen sulfide production. Also, evidence suggests that colonic hydrogen sulfide is largely present in bound forms, probably sulfides of iron and other metals. Rarely, serious bismuth toxicity may occur with higher doses.
Activated charcoal
Despite being an ancient treatment for various digestive complaints, activated charcoal did not produce a reduction in either the total flatus volume or the release of sulfur-containing gases, and there was no reduction in abdominal symptoms (after 0.52 g of activated charcoal four times a day for one week). The authors suggested that saturation of charcoal binding sites during its passage through the gut was the reason for this. A further study concluded that activated charcoal (4 g) does not influence gas formation in vitro or in vivo. Other authors reported that activated charcoal was effective. A study in 8 dogs concluded activated charcoal (unknown oral dose) reduced hydrogen sulfide levels by 71%. In combination with Yucca schidigera and zinc acetate, this was increased to an 86% reduction in hydrogen sulfide, although flatus volume and number were unchanged. An early study reported activated charcoal (unknown oral dose) prevented the large increase in the number of flatus events and in breath hydrogen concentrations that normally occurs following a gas-producing meal.
Garments and external devices
In 1998, Chester "Buck" Weimer of Pueblo, Colorado, received a patent for the first undergarment that contained a replaceable charcoal filter. The undergarments are air-tight and provide a pocketed escape hole in which a charcoal filter can be inserted. In 2001 Weimer received the Ig Nobel Prize for Biology for his invention.
A similar product was released in 2002, but rather than an entire undergarment, consumers are able to purchase an insert similar to a pantiliner that contains activated charcoal. The inventors, Myra and Brian Conant of Mililani, Hawaii, still claim on their website to have discovered the undergarment product in 2002 (four years after Chester Weimer filed for a patent for his product), but state that their tests "concluded" that they should release an insert instead.
Incontinence
Flatus incontinence where there is involuntary passage of gas, is a type of faecal incontinence, and is managed similarly.
Society and culture
In many cultures, flatulence in public is regarded as embarrassing, but, depending on context, may also be considered humorous. People will often strain to hold in the passing of gas when in polite company, or position themselves to silence or conceal the passing of gas. In other cultures, it may be no more embarrassing than coughing.
While the act of passing flatus in some cultures is generally considered to be an unfortunate occurrence in public settings, flatulence may, in casual circumstances and especially among children, be used as either a humorous supplement to a joke ("pull my finger"), or as a comic activity in and of itself. The social acceptability of flatulence-based humour in entertainment and the mass media varies over the course of time and between cultures. A sufficient number of entertainers have performed using their flatus to lead to the coining of the term flatulist. The whoopee cushion is a joking device invented in the early 20th century for simulating a fart. In 2008, a farting application for the iPhone earned nearly $10,000 in one day.
A farting game named Touch Wood was documented by John Gregory Bourke in the 1890s. It was known as Safety in the 20th century in the U.S., and is still played by children as of 2011.
In January 2011, the Malawi Minister of Justice, George Chaponda, said that Air Fouling Legislation would make public "farting" illegal in his country. When reporting the story, the media satirised Chaponda's statement with punning headlines. Later, the minister withdrew his statement.
Environmental impact
Flatulence is often blamed as a significant source of greenhouse gases, owing to the erroneous belief that the methane released by livestock is in the flatus. While livestock account for around 20% of global methane emissions, 90–95% of that is released by exhaling or burping. In cows, gas and burps are produced by methane-generating microbes called methanogens, which live inside the cow's digestive system. Proposals for reducing methane production in cows include the feeding of supplements such as oregano and seaweed, and the genetic engineering of gut biome microbes to produce less methane.
Since New Zealand produces large amounts of agricultural products, it has the unique position of having higher methane emissions from livestock compared to other greenhouse gas sources. The New Zealand government is a signatory to the Kyoto Protocol and therefore attempts to reduce greenhouse emissions. To achieve this, an agricultural emissions research levy was proposed, which promptly became known as a "fart tax" or "flatulence tax". It encountered opposition from farmers, farming lobby groups and opposition politicians.
Entertainment
Historical comment on the ability to fart at will is observed as early as Saint Augustine's City of God (5th century A.D.). Augustine mentions "people who produce at will without any stench such rhythmical sounds from their fundament that they appear to be making music even from that quarter." Intentional passing of gas and its use as entertainment for others appear to have been somewhat well known in pre-modern Europe, according to mentions of it in medieval and later literature, including Rabelais.
Le Pétomane ("the Fartomaniac") was a famous French performer in the 19th century who, as well as many professional farters before him, did flatulence impressions and held shows. The performer Mr. Methane carries on le Pétomane's tradition today. Also, a 2002 fiction film Thunderpants revolves around a boy named Patrick Smash who has an ongoing flatulence problem from the time of his birth.
Since the 1970s, farting has increasingly been featured in film, especially comedies such as Blazing Saddles and Scooby-Doo.
In the popular vulgar cartoon series "South Park," characters sometimes watch a show-within-a-show called "The Terrance and Phillip Show" whose humor primarily revolves around flatulence.
Personal experiences
People find other people's flatus unpleasant, but are unfazed by, and may even enjoy, the scent of their own. While there has been little research carried out upon the subject, some speculative guesses have been made as to why this might be so. For example, one explanation for this phenomenon is that people are very familiar with the scent of their own flatus, and that survival in nature may depend on the detection of and reaction to foreign scents.
Some people have eproctophilia, a fetish involving flatulence, in which sexual gratification and pleasure are derived from the sound of the gas, its smell, the feeling of passing it, some combination of the three, or all three.
| Biology and health sciences | Basics | Biology |
11274 | https://en.wikipedia.org/wiki/Elementary%20particle | Elementary particle | In particle physics, an elementary particle or fundamental particle is a subatomic particle that is not composed of other particles. The Standard Model presently recognizes seventeen distinct particles—twelve fermions and five bosons. As a consequence of flavor and color combinations and antimatter, the fermions and bosons are known to have 48 and 13 variations, respectively. Among the 61 elementary particles embraced by the Standard Model number: electrons and other leptons, quarks, and the fundamental bosons. Subatomic particles such as protons or neutrons, which contain two or more elementary particles, are known as composite particles.
Ordinary matter is composed of atoms, themselves once thought to be indivisible elementary particles. The name atom comes from the Ancient Greek word ἄτομος (atomos) which means indivisible or uncuttable. Despite the theories about atoms that had existed for thousands of years, the factual existence of atoms remained controversial until 1905. In that year, Albert Einstein published his paper on Brownian motion, putting to rest theories that had regarded molecules as mathematical illusions. Einstein subsequently identified matter as ultimately composed of various concentrations of energy.
Subatomic constituents of the atom were first identified toward the end of the 19th century, beginning with the electron, followed by the proton in 1919, the photon in the 1920s, and the neutron in 1932. By that time, the advent of quantum mechanics had radically altered the definition of a "particle" by putting forward an understanding in which they carried out a simultaneous existence as matter waves.
Many theoretical elaborations upon, and beyond, the Standard Model have been made since its codification in the 1970s. These include notions of supersymmetry, which double the number of elementary particles by hypothesizing that each known particle associates with a "shadow" partner far more massive. However, like an additional elementary boson mediating gravitation, such superpartners remain undiscovered as of 2013.
Overview
All elementary particles are either bosons or fermions. These classes are distinguished by their quantum statistics: fermions obey Fermi–Dirac statistics and bosons obey Bose–Einstein statistics. Their spin is differentiated via the spin–statistics theorem: it is half-integer for fermions, and integer for bosons.
In the Standard Model, elementary particles are represented for predictive utility as point particles. Though extremely successful, the Standard Model is limited by its omission of gravitation and has some parameters arbitrarily added but unexplained.
Cosmic abundance of elementary particles
According to the current models of Big Bang nucleosynthesis, the primordial composition of visible matter of the universe should be about 75% hydrogen and 25% helium-4 (in mass). Neutrons are made up of one up and two down quarks, while protons are made of two up and one down quark. Since the other common elementary particles (such as electrons, neutrinos, or weak bosons) are so light or so rare when compared to atomic nuclei, we can neglect their mass contribution to the observable universe's total mass. Therefore, one can conclude that most of the visible mass of the universe consists of protons and neutrons, which, like all baryons, in turn consist of up quarks and down quarks.
Some estimates imply that there are roughly 10^80 baryons (almost entirely protons and neutrons) in the observable universe.
The number of protons in the observable universe is called the Eddington number.
In terms of number of particles, some estimates imply that nearly all the matter, excluding dark matter, occurs in neutrinos, which constitute the vast majority of the elementary particles of matter that exist in the visible universe. Other estimates imply that the total number of elementary particles in the visible universe (not including dark matter) is larger still, consisting mostly of photons and other massless force carriers.
Standard Model
The Standard Model of particle physics contains 12 flavors of elementary fermions, plus their corresponding antiparticles, as well as elementary bosons that mediate the forces and the Higgs boson, which was reported on July 4, 2012, as having been likely detected by the two main experiments at the Large Hadron Collider (ATLAS and CMS). The Standard Model is widely considered to be a provisional theory rather than a truly fundamental one, however, since it is not known if it is compatible with Einstein's general relativity. There may be hypothetical elementary particles not described by the Standard Model, such as the graviton, the particle that would carry the gravitational force, and sparticles, supersymmetric partners of the ordinary particles.
Fundamental fermions
The 12 fundamental fermions are divided into 3 generations of 4 particles each. Half of the fermions are leptons, three of which have an electric charge of −1 e, called the electron (e−), the muon (μ−), and the tau (τ−); the other three leptons are neutrinos (νe, νμ, ντ), which are the only elementary fermions with neither electric nor color charge. The remaining six particles are quarks (discussed below).
Generations
Mass
The following table lists current measured masses and mass estimates for all the fermions, using the same scale of measure: millions of electron-volts relative to square of light speed (MeV/c2). For example, the most accurately known quark mass is that of the top quark (t), at approximately 173,000 MeV/c2 (about 173 GeV/c2), estimated using the on-shell scheme.
Estimates of the values of quark masses depend on the version of quantum chromodynamics used to describe quark interactions. Quarks are always confined in an envelope of gluons that confer vastly greater mass to the mesons and baryons where quarks occur, so values for quark masses cannot be measured directly. Since their masses are so small compared to the effective mass of the surrounding gluons, slight differences in the calculation make large differences in the masses.
Antiparticles
There are also 12 fundamental fermionic antiparticles that correspond to these 12 particles. For example, the antielectron (positron) is the electron's antiparticle and has an electric charge of +1 e.
Quarks
Isolated quarks and antiquarks have never been detected, a fact explained by confinement. Every quark carries one of three color charges of the strong interaction; antiquarks similarly carry anticolor. Color-charged particles interact via gluon exchange in the same way that charged particles interact via photon exchange. Gluons are themselves color-charged, however, resulting in an amplification of the strong force as color-charged particles are separated. Unlike the electromagnetic force, which diminishes as charged particles separate, color-charged particles feel increasing force.
Nonetheless, color-charged particles may combine to form color neutral composite particles called hadrons. A quark may pair up with an antiquark: the quark has a color and the antiquark has the corresponding anticolor. The color and anticolor cancel out, forming a color neutral meson. Alternatively, three quarks can exist together, one quark being "red", another "blue", another "green". These three colored quarks together form a color-neutral baryon. Symmetrically, three antiquarks with the colors "antired", "antiblue" and "antigreen" can form a color-neutral antibaryon.
Quarks also carry fractional electric charges, but, since they are confined within hadrons whose charges are all integral, fractional charges have never been isolated. Note that quarks have electric charges of either +2/3 e or −1/3 e, whereas antiquarks have corresponding electric charges of either −2/3 e or +1/3 e.
Evidence for the existence of quarks comes from deep inelastic scattering: firing electrons at nuclei to determine the distribution of charge within nucleons (which are baryons). If the charge is uniform, the electric field around the proton should be uniform and the electron should scatter elastically. Low-energy electrons do scatter in this way, but, above a particular energy, the protons deflect some electrons through large angles. The recoiling electron has much less energy and a jet of particles is emitted. This inelastic scattering suggests that the charge in the proton is not uniform but split among smaller charged particles: quarks.
Fundamental bosons
In the Standard Model, vector (spin-1) bosons (gluons, photons, and the W and Z bosons) mediate forces, whereas the Higgs boson (spin-0) is responsible for the intrinsic mass of particles. Bosons differ from fermions in that multiple bosons can occupy the same quantum state, whereas fermions cannot (the Pauli exclusion principle). Also, bosons can be either elementary, like photons, or a combination, like mesons. The spins of bosons are integers instead of half-integers.
Gluons
Gluons mediate the strong interaction, which joins quarks and thereby forms hadrons, which are either baryons (three quarks) or mesons (one quark and one antiquark). Protons and neutrons are baryons, joined by gluons to form the atomic nucleus. Like quarks, gluons carry color and anticolor – unrelated to the concept of visual color, and rather labels for the particles' strong interactions – in combinations yielding altogether eight variations of gluons.
Electroweak bosons
There are three weak gauge bosons: W+, W−, and Z0; these mediate the weak interaction. The W bosons are known for their mediation in nuclear decay: in beta decay, a neutron is converted into a proton by emitting a W−, which then decays into an electron and an electron antineutrino.
The Z0 does not convert particle flavor or charges, but rather changes momentum; it is the only mechanism for elastically scattering neutrinos. The weak gauge bosons were discovered due to momentum change in electrons from neutrino-Z exchange. The massless photon mediates the electromagnetic interaction. These four gauge bosons form the electroweak interaction among elementary particles.
Higgs boson
Although the weak and electromagnetic forces appear quite different to us at everyday energies, the two forces are theorized to unify as a single electroweak force at high energies. This prediction was clearly confirmed by measurements of cross-sections for high-energy electron-proton scattering at the HERA collider at DESY. The differences at low energies are a consequence of the high masses of the W and Z bosons, which in turn are a consequence of the Higgs mechanism. Through the process of spontaneous symmetry breaking, the Higgs selects a special direction in electroweak space that causes three electroweak particles to become very heavy (the weak bosons) and one to remain massless (the photon). On 4 July 2012, after many years of experimentally searching for evidence of its existence, the Higgs boson was announced to have been observed at CERN's Large Hadron Collider. Peter Higgs, who was among the first to posit the existence of the Higgs boson, was present at the announcement. The Higgs boson is believed to have a mass of approximately 125 GeV/c2. The statistical significance of this discovery was reported as 5 sigma, which implies a certainty of roughly 99.99994%. In particle physics, this is the level of significance required to officially label experimental observations as a discovery. Research into the properties of the newly discovered particle continues.
Graviton
The graviton is a hypothetical elementary spin-2 particle proposed to mediate gravitation. While it remains undiscovered due to the difficulty inherent in its detection, it is sometimes included in tables of elementary particles. The conventional graviton is massless, although some models containing massive Kaluza–Klein gravitons exist.
Beyond the Standard Model
Although experimental evidence overwhelmingly confirms the predictions derived from the Standard Model, some of its parameters were added arbitrarily, not determined by a particular explanation, which remain mysterious, for instance the hierarchy problem. Theories beyond the Standard Model attempt to resolve these shortcomings.
Grand unification
One extension of the Standard Model attempts to combine the electroweak interaction with the strong interaction into a single 'grand unified theory' (GUT). Such a force would be spontaneously broken into the three forces by a Higgs-like mechanism. This breakdown is theorized to occur at high energies, making it difficult to observe unification in a laboratory. The most dramatic prediction of grand unification is the existence of X and Y bosons, which cause proton decay. However, the non-observation of proton decay at the Super-Kamiokande neutrino observatory rules out the simplest GUTs, including SU(5) and SO(10).
Supersymmetry
Supersymmetry extends the Standard Model by adding another class of symmetries to the Lagrangian. These symmetries exchange fermionic particles with bosonic ones. Such a symmetry predicts the existence of supersymmetric particles, abbreviated as sparticles, which include the sleptons, squarks, neutralinos, and charginos. Each particle in the Standard Model would have a superpartner whose spin differs by 1/2 from the ordinary particle. Due to the breaking of supersymmetry, the sparticles are much heavier than their ordinary counterparts; they are so heavy that existing particle colliders would not be powerful enough to produce them. Some physicists believe that sparticles will be detected by the Large Hadron Collider at CERN.
String theory
String theory is a model of physics whereby all "particles" that make up matter are composed of strings (measuring at the Planck length) that exist in an 11-dimensional (according to M-theory, the leading version) or 12-dimensional (according to F-theory) universe. These strings vibrate at different frequencies that determine mass, electric charge, color charge, and spin. A "string" can be open (a line) or closed in a loop (a one-dimensional sphere, that is, a circle). As a string moves through space it sweeps out something called a world sheet. String theory predicts 1- to 10-branes (a 1-brane being a string and a 10-brane being a 10-dimensional object) that prevent tears in the "fabric" of space using the uncertainty principle (e.g., the electron orbiting a hydrogen atom has the probability, albeit small, that it could be anywhere else in the universe at any given moment).
String theory proposes that our universe is merely a 4-brane, inside which exist the three space dimensions and the one time dimension that we observe. The remaining 7 theoretical dimensions either are very tiny and curled up (and too small to be macroscopically accessible) or simply do not/cannot exist in our universe (because they exist in a grander scheme called the "multiverse" outside our known universe).
Some predictions of string theory include the existence of extremely massive counterparts of ordinary particles, due to vibrational excitations of the fundamental string, and the existence of a massless spin-2 particle behaving like the graviton.
Technicolor
Technicolor theories try to modify the Standard Model in a minimal way by introducing a new QCD-like interaction. This means one adds a new theory of so-called Techniquarks, interacting via so-called Technigluons. The main idea is that the Higgs boson is not an elementary particle but a bound state of these objects.
Preon theory
According to preon theory there are one or more orders of particles more fundamental than those (or most of those) found in the Standard Model. The most fundamental of these are normally called preons, which is derived from "pre-quarks". In essence, preon theory tries to do for the Standard Model what the Standard Model did for the particle zoo that came before it. Most models assume that almost everything in the Standard Model can be explained in terms of three to six more fundamental particles and the rules that govern their interactions. Interest in preons has waned since the simplest models were experimentally ruled out in the 1980s.
Acceleron theory
Accelerons are the hypothetical subatomic particles that integrally link the newfound mass of the neutrino to the dark energy conjectured to be accelerating the expansion of the universe.
In this theory, neutrinos are influenced by a new force resulting from their interactions with accelerons, leading to dark energy. Dark energy results as the universe tries to pull neutrinos apart. Accelerons are thought to interact with matter more infrequently than they do with neutrinos.
| Physical sciences | Subatomic particles: General | Physics |
11296 | https://en.wikipedia.org/wiki/First%20aid | First aid | First aid is the first and immediate assistance given to any person with a medical emergency, with care provided to preserve life, prevent the condition from worsening, or to promote recovery until medical services arrive. First aid is generally performed by someone with basic medical or first response training. Mental health first aid is an extension of the concept of first aid to cover mental health, while psychological first aid is used as early treatment of people who are at risk for developing PTSD. Conflict first aid, focused on preservation and recovery of an individual's social or relationship well-being, is being piloted in Canada.
There are many situations that may require first aid, and many countries have legislation, regulation, or guidance, which specifies a minimum level of first aid provision in certain circumstances. This can include specific training or equipment to be available in the workplace (such as an automated external defibrillator), the provision of specialist first aid cover at public gatherings, or mandatory first aid training within schools. Generally, five steps are associated with first aid:
Assess the surrounding areas.
Move to safe surroundings (if not there already; for example, road accidents should not be dealt with on the road itself).
Call for help: both professional medical help and people nearby who might help in first aid such as the compressions of cardiopulmonary resuscitation (CPR).
Perform suitable first aid depending on the injury suffered by the casualty.
Evaluate the casualty for any fatal signs of danger, or possibility of performing the first aid again.
Early history and warfare
Skills of what is now known as first aid have been recorded throughout history, especially in relation to warfare, where the care of both traumatic and medical cases is required in particularly large numbers. The bandaging of battle wounds is shown on Classical Greek pottery from , whilst the parable of the Good Samaritan includes references to binding or dressing wounds. There are numerous references to first aid performed within the Roman army, with a system of first aid supported by surgeons, field ambulances, and hospitals. Roman legions had the specific role of capsarii, who were responsible for first aid such as bandaging, and are the forerunners of the modern combat medic.
Further examples occur through history, still mostly related to battle, with examples such as the Knights Hospitaller in the 11th century AD, providing care to pilgrims and knights in the Holy Land.
Formalization of life saving treatments
During the late 18th century, drowning as a cause of death was a major concern amongst the population. In 1767, a society for the preservation of life from accidents in water was started in Amsterdam, and in 1773, physician William Hawes began publicizing the power of artificial respiration as means of resuscitation of those who appeared drowned. This led to the formation, in 1774, of the Society for the Recovery of Persons Apparently Drowned, later the Royal Humane Society, who did much to promote resuscitation.
Napoleon's surgeon, Baron Dominique-Jean Larrey, is credited with creating an ambulance corps, the ambulance volantes, which included medical assistants, tasked to administer first aid in battle.
In 1859, Swiss businessman Jean-Henri Dunant witnessed the aftermath of the Battle of Solferino, and his work led to the formation of the Red Cross, with a key stated aim of "aid to sick and wounded soldiers in the field". The Red Cross and Red Crescent are still the largest provider of first aid worldwide.
In 1870, Prussian military surgeon Friedrich von Esmarch introduced formalized first aid to the military, and first coined the term "erste hilfe" (translating to 'first aid'), including training for soldiers in the Franco-Prussian War on care for wounded comrades using pre-learnt bandaging and splinting skills, and making use of the Esmarch bandage which he designed. The bandage was issued as standard to the Prussian combatants, and also included aide-memoire pictures showing common uses.
In 1872, the Order of Saint John of Jerusalem in England changed its focus from hospice care, and set out to start a system of practical medical help, starting with making a grant towards the establishment of the UK's first ambulance service. This was followed by creating its own wheeled transport litter in 1875 (the St John Ambulance), and in 1877 established the St John Ambulance Association (the forerunner of modern-day St John Ambulance) "to train men and women for the benefit of the sick and wounded".
Also in the UK, Surgeon-Major Peter Shepherd had seen the advantages of von Esmarch's new teaching of first aid, and introduced an equivalent programme for the British Army, and so being the first user of "first aid for the injured" in English, disseminating information through a series of lectures. Following this, in 1878, Shepherd and Colonel Francis Duncan took advantage of the newly charitable focus of St John, and established the concept of teaching first aid skills to civilians. The first classes were conducted in the hall of the Presbyterian school in Woolwich (near Woolwich barracks where Shepherd was based) using a comprehensive first aid curriculum.
First aid training began to spread through the British Empire through organisations such as St John, often starting, as in the UK, with high risk activities such as ports and railways.
The first recorded first aid training in the United States took place in Jermyn, Pennsylvania in 1899.
Main emergencies that require first aid and their corresponding care
List of some situations that require specific first aid, and information about them (in alphabetical order):
Bleeding
Bleeding, or hemorrhage, is the uncontrolled escape of blood from a vein or artery.
In wounds that are caused by an external agent, there can be an additional risk of infection.
Cardiac arrest (total stop of heartbeat)
A cardiac arrest is the complete cessation of heart function.
Choking
Choking is an obstruction of the airway.
Diabetes, hyperglycemia
Hyperglycemia (or hyperglycaemia) is an excessively high level of blood sugar.
Diabetes, hypoglycemia
Hypoglycemia (or hypoglycaemia) is an excessive fall in blood sugar in a diabetic patient.
It almost always occurs because of a problem with a medication taken to reduce the level of sugar in the blood.
Drowning
Drowning is suffocation caused by submersion in a liquid.
First aid for drowning is very similar to that for cardiorespiratory arrest, but it starts with two initial rescue breaths (ventilations).
Infarction of the heart
A cardiac infarction is the sudden lack of blood supply to the heart, normally because of a problem in one of its arteries.
Stroke
A stroke is a sudden lack of blood supply to the brain.
Aims of first aid
The primary goal of first aid is to prevent death or serious injury from worsening. The key aims of first aid can be summarized with the acronym of 'the three Ps':
Preserve life: The overriding aim of all medical care, which includes first aid, is to save lives and minimize the threat of death. First aid done correctly should help reduce the patient's level of pain and calm them down during the evaluation and treatment process.
Prevent further harm: Prevention of further harm includes addressing both external factors, such as moving a patient away from any cause of harm, and applying first aid techniques to prevent worsening of the condition, such as applying pressure to stop a bleed from becoming dangerous.
Promote recovery: First aid also involves trying to start the recovery process from the illness or injury, and in some cases might involve completing a treatment, such as in the case of applying a plaster to a small wound.
First aid is not medical treatment, and cannot be compared with what a trained medical professional provides. First aid involves making common sense decisions in the best interest of an injured person.
Setting the priorities
A first aid intervention follows an order of priorities, attending first, and as well as possible, to the main threats to the victim's life and mobility.
There are some first aid protocols (such as ATLS, BATLS and SAFE-POINT) that define the priorities and the correct execution of the steps for saving a human life. A major benefit of using official protocols is that they require minimal resources, time and skills, and have a high degree of success.
ABCDE and csABCDE general protocol
The ABCDE method is the general first aid protocol and takes a quite general view of the patient.
It was initially developed by Dr Peter Safar in the 1950s, but it has since received modifications, improvements and variations intended for more specific contexts. For example, it has been extended with improvements from the ATLS (Advanced Trauma Life Support) version of the American College of Surgeons and the BATLS (Battlefield Advanced Trauma Life Support) version of the British Army.
As a result, the mnemonic of the steps of this protocol is ABCDE, or its improved version (cs)ABCDE (sometimes called xABCDE; the words in the mnemonic may vary), whose letters represent:
—A prefixed first part (named "cs", "x", or in some other way) that always covers stopping critical blood loss and giving special, careful treatment to patients with serious spinal injuries that threaten their future mobility:
catastrophic-bleeding (urgently stopping massive external bleeding, as specified in the BATLS version).
spine-protection (prior examination of the spine, and careful preventive treatment of its injuries, as specified in the ATLS version).
—The ABCDE protocol itself:
Airway (clearing airways).
Breathing (ensuring respiration).
Circulation (ensuring effective cardiac output). Any defibrillation process for a cardiac arrest (complete stop of the heartbeat) would be included here, or under 'Disability' (as a double mnemonic 'D').
Disability (neurological condition, level of glucose can also be examined).
Exposure (or 'Evaluate': other questions in an overall examination of the patient and their environment).
ABC and CABD cardiopulmonary resuscitation protocol
This protocol (originally named ABC) is a simplified version, or concrete application, of the previous csABCDE (or ABCDE) protocol that focuses on the use of cardiopulmonary resuscitation. The American Heart Association and the International Liaison Committee on Resuscitation teach it as a reference.
Its current mnemonic is CABD (a reordering of the sequence that works better in most cases):
Circulation or Chest Compressions.
Airway: attempt to open the airway (using a head-tilt and chin-lift technique; not in the case of babies, whose heads should not be tilted).
Breathing or Rescue Breaths.
Defibrillation: use of an automated external defibrillator to recover heart function.
Wider protocols
These protocols do not deal only with direct care of the victim; they also cover other complementary tasks, before and after that care.
European protocol
This method has long been studied and employed in many European countries, such as France. It is a well-regarded reference that can be applied on its own or, more usually, combined to some degree with the common csABCDE (ABCDE) method or its simplified CABD (ABC) variant for cardiopulmonary resuscitation. The European method has a wider scope than these, and its steps include tasks that precede the first aid techniques themselves.
These are its steps (there is no official mnemonic to help remember them):
Protection for patients and rescuers. If dangers are present, the patient should be moved to a safer place, with careful management of any detected spinal injury.
Evaluation of the patient (looking for priorities such as critical bleeding and cardiac arrest).
Alerting medical services and bystanders.
Performing the first aid practices. The CABD (or ABC) method for cardio-pulmonary resuscitation and many details of the wider csABCDE (or ABCDE) method would be included in this step.
Other mentionable protocols
Some other known protocols that could be mentioned in many contexts (in alphabetical order):
AMEGA protocol
It is similar to the European protocol in that it also has a wider scope than the common csABCDE (or ABCDE) protocol and includes other tasks that precede the first aid techniques themselves. The order of the steps is different and experience with it is more limited, but it adds the idea of a later 'aftermath' phase.
The mnemonic AMEGA refers to:
Assess the situation, looking for risks.
Make safe the situation, after having identified the risks.
Emergency aid. Performing the first aid practices.
Get help. Asking medical services and bystanders for emergency help.
Aftermath. The aftermath tasks include recording and reporting, continued care of patients, the welfare of responders, and the replacement of used first aid kit items.
ATLS and BATLS protocols
They are basically the common ABCDE and csABCDE protocol, but focus on particular aspects.
The ATLS (Advanced Trauma Life Support) version was developed by the American College of Surgeons, focusing on the particular needs of trauma and specifically on spinal injuries. The BATLS (Battlefield Advanced Trauma Life Support) version is an adaptation for the British Army that added the concept of 'catastrophic bleeding'.
The preference for one or another among all these protocols can depend on the context and the audience.
Check, Call and Care protocol
It comes from the Red Cross and, like the European protocol, has a wider scope than the common csABCDE (ABCDE) method. It can therefore be seen as a simplification of the European protocol and, above all, is easier to remember as a guide for most cases.
It mentions the following steps:
Check the scene for safety of the rescuer and others, and check the patient's condition.
Call to emergency medical services.
Care for the patient.
SAFE-POINT protocol
Another European protocol, which appeared in the construction industry of the Czech Republic as a way of reacting to emergencies.
Its steps (which have no mnemonic) are:
Safety of the rescuers.
Calling the emergency telephone number.
Bleeding: treating massive bleeding.
Freeing the airways.
Resuscitation: applying cardiopulmonary resuscitation.
Keeping the patient warm.
Key basic skills
Certain skills are considered essential to the provision of first aid and are taught ubiquitously.
Displacement skills
If there are dangers nearby (such as fire, electrical hazards or others), the patient has to be moved to a safe place (if it is safe for the first aid provider to do so), where the required first aid procedures can be performed.
—In case of a possible severe spinal injury: when a patient seems to have a possible serious injury to the spinal cord (in the backbone, either at the neck or the back), that patient must not be moved unless it is necessary, and, when necessary, must be moved as little as possible and very carefully. These precautions avoid many risks of causing further damage to the patient's mobility in the future.
Usually, the patient needs to end up lying face-up on a sufficiently firm surface (for example the floor), which allows the chest compressions of cardiopulmonary resuscitation to be performed.
Checking skills
Checking skills evaluate the condition of the victim, attending first to the main threats to life.
The preferred initial check consists of asking for a response, commonly by tapping the patient on one shoulder and shouting something such as: "Can you hear me?"
In some cases, the victim has a wound that bleeds abundantly, which requires its own additional treatment to stop the blood loss (usually, it would begin by keeping the wound pressed).
If the patient does not react, the heartbeat can be checked at the carotid pulse: placing two fingers on either side of the neck (left or right), near the head. In cases where checking the carotid pulse is impossible, the heartbeat can be felt at the radial pulse: placing two fingers on a wrist, below the base of the thumb, and applying moderate pressure. Breathing can also be checked by placing an ear over the mouth while watching the chest rise with the movement of air. It is recommended not to spend too much first aid time on checking (professional rescuers are taught to take 10 seconds for it).
Cardiopulmonary resuscitation (CPR)
Cardiopulmonary resuscitation (CPR) is the method of first aid for treating victims of cardiac arrest (complete stop of heartbeat).
Airway, Breathing, and Circulation skills
The ABC method stands for Airway, Breathing, and Circulation. The same mnemonic is used by emergency health professionals.
It is focused on critical life-saving intervention, and it must be rendered before treatment of less serious injuries.
Attention must first be brought to the airway to ensure it is clear. An obstruction (choking) is a life-threatening emergency. If an object blocks the airway, it requires anti-choking procedures. Following any evaluation of the airway, a first aid attendant would determine adequacy of breathing and provide rescue breathing if safe to do so.
Assessment of circulation is now not usually carried out for patients who are not breathing; first aiders are instead trained to go straight to chest compressions (thus providing artificial circulation), but pulse checks may be done on less serious patients.
Some organizations add a fourth step of "D" for Deadly bleeding or Defibrillation, while others consider this part of the Circulation step or label the fourth step Disability. Variations on techniques to evaluate and maintain the ABCs depend on the skill level of the first aider. Once the ABCs are secured, first aiders can begin additional treatments or examination, as required, if they possess the proper training (such as measuring pupil dilation).
Some organizations teach the same order of priority using the "3Bs": Breathing, Bleeding, and Bones (or "4Bs": Breathing, Bleeding, Burns, and Bones). While the ABCs and 3Bs are taught to be performed sequentially, certain conditions may require the consideration of two steps simultaneously. This includes the provision of both artificial respiration and chest compressions to someone who is not breathing and has no pulse, and the consideration of cervical spine injuries when ensuring an open airway.
Preserving life
The patient must have an open airway—that is, an unobstructed passage that allows air to travel from the open mouth or uncongested nose, down through the pharynx and into the lungs. Conscious people maintain their own airway automatically, but those who are unconscious (with a GCS of less than 8) may be unable to do so, as the part of the brain that manages spontaneous breathing may not be functioning.
Whether conscious or not, the patient may be placed in the recovery position, laying on their side. In addition to relaxing the patient, this can have the effect of clearing the tongue from the pharynx. It also avoids a common cause of death in unconscious patients, which is choking on regurgitated stomach contents.
The airway can also become blocked by a foreign object. To dislodge the object and resolve the choking, the first aider may use anti-choking methods (such as 'back slaps', 'chest thrusts' or 'abdominal thrusts').
Once the airway has been opened, the first aider would reassess the patient's breathing. If there is no breathing, or the patient is not breathing normally (e.g., agonal breathing), the first aider would initiate CPR, which attempts to restart the patient's breathing by forcing air into the lungs. They may also manually massage the heart to promote blood flow around the body.
If the choking person is an infant, the first aider may use anti-choking methods for babies. During that procedure, a series of five firm blows is delivered to the infant's upper back after placing the infant face-down along the aider's forearm. If the infant is able to cough or cry, no breathing assistance should be given. Chest thrusts can also be applied with two fingers on the lower half of the middle of the chest. Coughing and crying indicate the airway is open, and the foreign object will likely come out from the force that the coughing or crying produces.
A first responder should know how to use an Automatic External Defibrillator (AED) in the case of a person having a sudden cardiac arrest. The survival rate of those who have a cardiac arrest outside of the hospital is low. Permanent brain damage sets in after five minutes of no oxygen delivery, so rapid action on the part of the rescuer is necessary. An AED is a device that can examine a heartbeat and produce electric shocks to restart the heart.
A first aider should be prepared to quickly deal with less severe problems such as cuts, grazes or bone fractures. They may be able to completely resolve a situation if they have the proper training and equipment. For situations that are more severe, complex or dangerous, a first aider might need to do the best they can with the equipment they have, and wait for an ambulance to arrive at the scene.
First aid kits
A first aid kit consists of a strong, durable bag or transparent plastic box. They are commonly identified with a white cross on a green background. A first aid kit does not have to be bought ready-made. The advantage of ready-made first aid kits is that they have well-organized compartments and familiar layouts.
Contents
There is no universal agreement upon the list for the contents of a first aid kit. The UK Health and Safety Executive stress that the contents of workplace first aid kits will vary according to the nature of the work activities. As an example of possible contents of a kit, British Standard BS 8599 First Aid Kits for the Workplace lists the following items:
Information leaflet
Medium sterile dressings
Large sterile dressings
Bandages
Triangular dressings
Safety pins
Adhesive dressings
Sterile wet wipes
Microporous tape
Nitrile gloves
Face shield
Foil blanket
Burn dressings
Clothing shears
Conforming bandages
Finger dressing
Antiseptic cream
Scissors
Tweezers
Cotton
Training principles
Basic principles, such as knowing how to use an adhesive bandage or apply direct pressure on a bleed, are often acquired passively through life experiences. However, providing effective, life-saving first aid interventions requires instruction and practical training. This is especially true where it relates to potentially fatal illnesses and injuries, such as those that require CPR; these procedures may be invasive, and carry a risk of further injury to the patient and the provider. As with any training, it is more useful if it occurs before an actual emergency. And, in many countries, callers to emergency medical services can be given basic first aid instructions over the phone while the ambulance is on the way.
Training is generally provided by attending a course, typically leading to certification. Due to regular changes in procedures and protocols, based on updated clinical knowledge, and to maintain skill, attendance at regular refresher courses or re-certification is often necessary. First aid training is often available through community organizations such as the Red Cross and St. John Ambulance, or through commercial providers, who will train people for a fee. This commercial training is most common for training of employees to perform first aid in their workplace. Many community organizations also provide a commercial service, which complements their community programmes.
1. Junior level certificate: Basic Life Support
2. Senior level certificate
3. Special certificate
Types of first aid which require training
There are several types of first aid (and first aider) that require specific additional training. These are usually undertaken to fulfill the demands of the work or activity undertaken.
Aquatic/Marine first aid is usually practiced by professionals such as lifeguards, professional mariners or in diver rescue, and covers the specific problems which may be faced after water-based rescue or delayed MedEvac.
Battlefield first aid takes into account the specific needs of treating wounded combatants and non-combatants during armed conflict.
Conflict First Aid focuses on support for stability and recovery of personal, social, group or system well-being and to address circumstantial safety needs.
Hyperbaric first aid may be practiced by underwater diving professionals, who need to treat conditions such as decompression sickness.
Oxygen first aid is the providing of oxygen to casualties with conditions resulting in hypoxia. It is also a standard first aid procedure for underwater diving incidents where gas bubble formation in the tissues is possible.
Wilderness first aid is the provision of first aid under conditions where the arrival of emergency responders or the evacuation of an injured person may be delayed due to constraints of terrain, weather, and available persons or equipment. It may be necessary to care for an injured person for several hours or days.
Mental health first aid is taught independently of physical first aid. It covers how to support someone experiencing a mental health problem or crisis situation, how to identify the first signs of someone developing mental ill health, and how to guide people towards appropriate help.
First aid services
Some people undertake specific training in order to provide first aid at public or private events, during filming, or other places where people gather. They may be designated as a first aider, or use some other title. This role may be undertaken on a voluntary basis, with organisations such as the Red Cross society and St. John Ambulance, or as paid employment with a medical contractor.
People performing a first aid role, whether in a professional or voluntary capacity, are often expected to have a high level of first aid training and are often uniformed.
Symbols
Although commonly associated with first aid, the symbol of a red cross is an official protective symbol of the Red Cross. According to the Geneva Conventions and other international laws, the use of this and similar symbols is reserved for official agencies of the International Red Cross and Red Crescent, and as a protective emblem for medical personnel and facilities in combat situations. Use by any other person or organization is illegal, and may lead to prosecution.
The internationally accepted symbol for first aid is the white cross on a green background shown below.
Some organizations may make use of the Star of Life, although this is usually reserved for use by ambulance services, or may use symbols such as the Maltese Cross, like the Order of Malta Ambulance Corps and St John Ambulance. Other symbols may also be used.
| Biology and health sciences | General concepts | null |
11299 | https://en.wikipedia.org/wiki/Fox | Fox | Foxes are small-to-medium-sized omnivorous mammals belonging to several genera of the family Canidae. They have a flattened skull; upright, triangular ears; a pointed, slightly upturned snout; and a long, bushy tail ("brush").
Twelve species belong to the monophyletic "true fox" group of genus Vulpes. Another 25 current or extinct species are sometimes called foxes – they are part of the paraphyletic group of the South American foxes or an outlying group, which consists of the bat-eared fox, gray fox, and island fox.
Foxes live on every continent except Antarctica. The most common and widespread species of fox is the red fox (Vulpes vulpes) with about 47 recognized subspecies. The global distribution of foxes, together with their widespread reputation for cunning, has contributed to their prominence in popular culture and folklore in many societies around the world. The hunting of foxes with packs of hounds, long an established pursuit in Europe, especially in the British Isles, was exported by European settlers to various parts of the New World.
Etymology
The word fox comes from Old English and derives from Proto-Germanic *fuhsaz. This in turn derives from Proto-Indo-European *puḱ- "thick-haired, tail." Male foxes are known as dogs, tods, or reynards; females as vixens; and young as cubs, pups, or kits, though the last term is not to be confused with the kit fox, a distinct species. "Vixen" is one of very few modern English words that retain the Middle English southern dialectal "v" pronunciation instead of "f"; i.e., northern English "fox" versus southern English "vox". A group of foxes is referred to as a skulk, leash, or earth.
Phylogenetic relationships
Within the Canidae, the results of DNA analysis show several phylogenetic divisions:
The fox-like canids, which include the kit fox (Vulpes velox), red fox (Vulpes vulpes), Cape fox (Vulpes chama), Arctic fox (Vulpes lagopus), and fennec fox (Vulpes zerda).
The wolf-like canids, (genus Canis, Cuon and Lycaon) including the dog (Canis lupus familiaris), gray wolf (Canis lupus), red wolf (Canis rufus), eastern wolf (Canis lycaon), coyote (Canis latrans), golden jackal (Canis aureus), Ethiopian wolf (Canis simensis), black-backed jackal (Canis mesomelas), side-striped jackal (Canis adustus), dhole (Cuon alpinus), and African wild dog (Lycaon pictus).
The South American canids, including the bush dog (Speothos venaticus), hoary fox (Lycalopex vetulus), crab-eating fox (Cerdocyon thous) and maned wolf (Chrysocyon brachyurus).
Various monotypic taxa, including the bat-eared fox (Otocyon megalotis), gray fox (Urocyon cinereoargenteus), and raccoon dog (Nyctereutes procyonoides).
Biology
General morphology
Foxes are generally smaller than some other members of the family Canidae such as wolves and jackals, while they may be larger than some within the family, such as raccoon dogs. In the largest species, the red fox, males weigh between , while the smallest species, the fennec fox, weighs just .
Fox features typically include a triangular face, pointed ears, an elongated rostrum, and a bushy tail. They are digitigrade (meaning they walk on their toes). Unlike most members of the family Canidae, foxes have partially retractable claws. Fox vibrissae, or whiskers, are black. The whiskers on the muzzle, known as mystacial vibrissae, average long, while the whiskers everywhere else on the head average to be shorter in length. Whiskers (carpal vibrissae) are also on the forelimbs and average long, pointing downward and backward. Other physical characteristics vary according to habitat and adaptive significance.
Pelage
Fox species differ in fur color, length, and density. Coat colors range from pearly white to black-and-white to black flecked with white or grey on the underside. Fennec foxes (and other species of fox adapted to life in the desert, such as kit foxes), for example, have large ears and short fur to aid in keeping the body cool. Arctic foxes, on the other hand, have tiny ears and short limbs as well as thick, insulating fur, which aid in keeping the body warm. Red foxes, by contrast, have a typical auburn pelt, the tail normally ending with a white marking.
A fox's coat color and texture may vary due to the change in seasons; fox pelts are richer and denser in the colder months and lighter in the warmer months. To get rid of the dense winter coat, foxes moult once a year around April; the process begins from the feet, up the legs, and then along the back. Coat color may also change as the individual ages.
Dentition
A fox's dentition, like that of all other canids, is I 3/3, C 1/1, PM 4/4, M 2/3 = 42. (Bat-eared foxes have six extra molars, totalling 48 teeth.) Foxes have pronounced carnassial pairs, which is characteristic of a carnivore. These pairs consist of the upper premolar and the lower first molar, and work together to shear tough material like flesh. Foxes' canines are pronounced, also characteristic of a carnivore, and are excellent for gripping prey.
Behaviour
In the wild, the typical lifespan of a fox is one to three years, although individuals may live up to ten years. Unlike many canids, foxes are not always pack animals. Typically, they live in small family groups, but some (such as Arctic foxes) are known to be solitary.
Foxes are omnivores. Their diet is made up primarily of invertebrates such as insects and small vertebrates such as reptiles and birds. They may also eat eggs and vegetation. Many species are generalist predators, but some (such as the crab-eating fox) have more specialized diets. Most species of fox consume around of food every day. Foxes cache excess food, burying it for later consumption, usually under leaves, snow, or soil. While hunting, foxes tend to use a particular pouncing technique, such that they crouch down to camouflage themselves in the terrain and then use their hind legs to leap up with great force and land on top of their chosen prey. Using their pronounced canine teeth, they can then grip the prey's neck and shake it until it is dead or can be readily disemboweled.
The gray fox is one of only two canine species known to regularly climb trees; the other is the raccoon dog.
Sexual characteristics
The male fox's scrotum is held up close to the body with the testes inside even after they descend. Like other canines, the male fox has a baculum, or penile bone. The testes of red foxes are smaller than those of Arctic foxes. Sperm formation in red foxes begins in August–September, with the testicles attaining their greatest weight in December–February.
Vixens are in heat for one to six days, making their reproductive cycle twelve months long. As with other canines, the ova are shed during estrus without the need for the stimulation of copulating. Once the egg is fertilized, the vixen enters a period of gestation that can last from 52 to 53 days. Foxes tend to have an average litter size of four to five with an 80 percent success rate in becoming pregnant. Litter sizes can vary greatly according to species and environment; the Arctic fox, for example, can have up to eleven kits.
The vixen usually has six or eight mammae. Each teat has 8 to 20 lactiferous ducts, which connect the mammary gland to the nipple, allowing for milk to be carried to the nipple.
Vocalization
The fox's vocal repertoire is vast, and includes:
Whine Made shortly after birth. Occurs at a high rate when kits are hungry and when their body temperatures are low. Whining stimulates the mother to care for her young; it also has been known to stimulate the male fox into caring for his mate and kits.
Yelp Made about 19 days later. The kits' whining turns into infantile barks, yelps, which occur heavily during play.
Explosive call At the age of about one month, the kits can emit an explosive call which is intended to be threatening to intruders or other cubs; a high-pitched howl.
Combative call In adults, the explosive call becomes an open-mouthed combative call during any conflict; a sharper bark.
Growl An adult fox's indication to their kits to feed or head to the adult's location.
Bark Adult foxes warn against intruders and in defense by barking.
In the case of domesticated foxes, the whining seems to remain in adult individuals as a sign of excitement and submission in the presence of their owners.
Classification
Canids commonly known as foxes include the following genera and species:
Conservation
Several fox species are endangered in their native environments. Pressures placed on foxes include habitat loss and being hunted for pelts, other trade, or control. Due in part to their opportunistic hunting style and industriousness, foxes are commonly resented as nuisance animals. Contrastingly, foxes, while often considered pests themselves, have been successfully employed to control pests on fruit farms while leaving the fruit intact.
Urocyon littoralis
The island fox, though considered a near-threatened species throughout the world, is becoming increasingly endangered in its endemic environment of the California Channel Islands. A population on an island is smaller than those on the mainland because of limited resources like space, food and shelter. Island populations are therefore highly susceptible to external threats ranging from introduced predatory species and humans to extreme weather.
On the California Channel Islands, it was found that the population of the island fox was so low due to an outbreak of canine distemper virus from 1999 to 2000 as well as predation by non-native golden eagles. Since 1993, the eagles have caused the population to decline by as much as 95%. Because of the low number of foxes, the population went through an Allee effect (an effect in which, at low enough densities, an individual's fitness decreases). Conservationists had to take healthy breeding pairs out of the wild population to breed them in captivity until they had enough foxes to release back into the wild. Nonnative grazers were also removed so that native plants would be able to grow back to their natural height, thereby providing adequate cover and protection for the foxes against golden eagles.
Pseudalopex fulvipes
Darwin's fox was considered critically endangered because of its small known population of 250 mature individuals and its restricted distribution. However, the IUCN has since downgraded its conservation status from critically endangered in the 2004 and 2008 assessments to endangered in the 2016 assessment, following findings of a wider distribution than previously reported. On the Chilean mainland, the population is limited to Nahuelbuta National Park and the surrounding Valdivian rainforest. Similarly on Chiloé Island, their population is limited to the forests that extend from the southernmost to the northwesternmost part of the island. Though the Nahuelbuta National Park is protected, 90% of the species live on Chiloé Island.
A major issue the species faces is its dwindling, limited habitat due to the cutting and burning of the unprotected forests. Because of deforestation, the Darwin's fox habitat is shrinking, allowing the open habitat preferred by its competitor, the chilla fox, to increase; the Darwin's fox is subsequently being outcompeted. Another problem it faces is its inability to fight off diseases transmitted by the increasing number of pet dogs. To conserve these animals, researchers suggest the need to protect the forests that link the Nahuelbuta National Park to the coast of Chile and, in turn, Chiloé Island and its forests. They also suggest that other forests around Chile be examined to determine whether Darwin's foxes have previously existed there or could live there in the future, should the need to reintroduce the species to those areas arise. Finally, the researchers advise the creation of a captive breeding program in Chile because of the limited number of mature individuals in the wild.
Relationships with humans
Foxes are often considered pests or nuisance creatures for their opportunistic attacks on poultry and other small livestock. Fox attacks on humans are not common.
Many foxes adapt well to human environments, with several species classified as "resident urban carnivores" for their ability to sustain populations entirely within urban boundaries. Foxes in urban areas can live longer and can have smaller litter sizes than foxes in non-urban areas. Urban foxes are ubiquitous in Europe, where they show altered behaviors compared to non-urban foxes, including increased population density, smaller territory, and pack foraging. Foxes have been introduced in numerous locations, with varying effects on indigenous flora and fauna.
In some countries, foxes are major predators of rabbits and hens. Population oscillations of these two species were the first nonlinear oscillation studied and led to the derivation of the Lotka–Volterra equation.
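For reference, the Lotka–Volterra predator–prey equations mentioned here are usually written as follows, with x the prey (rabbit) population, y the predator (fox) population, and alpha, beta, gamma, delta positive rate constants (the symbols are the conventional ones, not taken from this article):

$$\frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y$$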
As food
Fox meat is edible, though it is not considered a common cuisine in any country.
Hunting
Fox hunting originated in the United Kingdom in the 16th century. Hunting with dogs is now banned in the United Kingdom, though hunting without dogs is still permitted. Red foxes were introduced into Australia in the early 19th century for sport, and have since become widespread through much of the country. They have caused population decline among many native species and prey on livestock, especially new lambs. Fox hunting is practiced as recreation in several other countries including Canada, France, Ireland, Italy, Russia, United States and Australia.
Domestication
There are many records of domesticated red foxes and others, but rarely of sustained domestication. A recent and notable exception is the Russian silver fox, which resulted in visible and behavioral changes, and is a case study of an animal population being modified according to human domestication needs. The current group of domesticated silver foxes are the result of nearly fifty years of experiments in the Soviet Union and Russia to de novo domesticate the silver morph of the red fox. This selective breeding resulted in physical and behavioral traits appearing that are frequently seen in domestic cats, dogs, and other animals, such as pigmentation changes, floppy ears, and curly tails. Notably, the new foxes became more tame, allowing themselves to be petted, whimpering to get attention and sniffing and licking their caretakers.
Urban settings
Foxes are among the comparatively few mammals which have been able to adapt themselves to a certain degree to living in urban (mostly suburban) human environments. Their omnivorous diet allows them to survive on discarded food waste, and their skittish and often nocturnal nature means that they are often able to avoid detection, despite their larger size.
Urban foxes have been identified as threats to cats and small dogs, and for this reason there is often pressure to exclude them from these environments.
The San Joaquin kit fox is a highly endangered species that has, ironically, become adapted to urban living in the San Joaquin Valley and Salinas Valley of southern California. Its diet includes mice, ground squirrels, rabbits, hares, bird eggs, and insects, and it has claimed habitats in open areas, golf courses, drainage basins, and school grounds.
Though rare, bites by foxes have been reported; in 2018, a woman in Clapham, London was bitten on the arm by a fox after she had left the door to her flat open.
In popular culture
The fox appears in many cultures, usually in folklore. There are slight variations in their depictions. In European, Persian, East Asian, and Native American folklore, foxes are symbols of cunning and trickery—a reputation derived especially from their reputed ability to evade hunters. This is usually represented as a character possessing these traits. These traits are used on a wide variety of characters, either making them a nuisance to the story, a misunderstood hero, or a devious villain.
In East Asian folklore, foxes are depicted as familiar spirits possessing magic powers. Similar to the folklore of other regions, foxes are portrayed as mischievous, usually tricking other people, with the ability to disguise themselves as an attractive female human. Others depict them as mystical, sacred creatures who can bring wonder or ruin. Nine-tailed foxes appear in Chinese folklore, literature, and mythology, in which, depending on the tale, they can be a good or a bad omen. The motif was eventually introduced from Chinese to Japanese and Korean cultures.
The constellation Vulpecula represents a fox.
| Biology and health sciences | Carnivora | null |
11302 | https://en.wikipedia.org/wiki/Felidae | Felidae | Felidae () is the family of mammals in the order Carnivora colloquially referred to as cats. A member of this family is also called a felid ().
The 41 extant Felidae species exhibit the greatest diversity in fur patterns of all terrestrial carnivores. Cats have retractile claws, slender muscular bodies and strong flexible forelimbs. Their teeth and facial muscles allow for a powerful bite. They are all obligate carnivores, and most are solitary predators ambushing or stalking their prey. Wild cats occur in Africa, Europe, Asia and the Americas. Some wild cat species are adapted to forest and savanna habitats, some to arid environments, and a few also to wetlands and mountainous terrain. Their activity patterns range from nocturnal and crepuscular to diurnal, depending on their preferred prey species.
Reginald Innes Pocock divided the extant Felidae into three subfamilies: the Pantherinae, the Felinae and the Acinonychinae, differing from each other by the ossification of the hyoid apparatus and by the cutaneous sheaths which protect their claws.
This concept has been revised following developments in molecular biology and techniques for the analysis of morphological data. Today, the living Felidae are divided into two subfamilies: the Pantherinae and Felinae, with the Acinonychinae subsumed into the latter. Pantherinae includes five Panthera and two Neofelis species, while Felinae includes the other 34 species in 12 genera.
The first cats emerged during the Oligocene about , with the appearance of Proailurus and Pseudaelurus. The latter species complex was ancestral to two main lines of felids: the cats in the extant subfamilies and a group of extinct "saber-tooth" felids of the subfamily Machairodontinae, which range from the type genus Machairodus of the late Miocene to Smilodon of the Pleistocene. The "false saber-toothed cats", the Barbourofelidae and Nimravidae, are not true cats but are closely related. Together with the Felidae, Viverridae, Nandiniidae, Eupleridae, hyenas and mongooses, they constitute the Feliformia.
Characteristics
All members of the cat family have the following characteristics in common:
They are digitigrade and have five toes on their forefeet and four on their hind feet. Their curved claws are protractile and attached to the terminal bones of the toe with ligaments and tendons. The claws are guarded by cutaneous sheaths, except in the Acinonyx.
The plantar pads of both fore and hind feet form compact three-lobed cushions.
They actively protract the claws by contracting muscles in the toe, and they passively retract them. The dewclaws are expanded but do not protract.
They have lithe and flexible bodies with muscular limbs.
Their skulls are foreshortened with a rounded profile and large orbits.
They have 30 teeth with a dental formula of I 3/3, C 1/1, P 3/2, M 1/1. The upper third premolar and lower molar are adapted as carnassial teeth, suited to tearing and cutting flesh. The canine teeth are large, reaching exceptional size in the extinct Machairodontinae. The lower carnassial is smaller than the upper carnassial and has a crown with two compressed blade-like pointed cusps.
Their tongues are covered with horn-like papillae, which rasp meat from prey and aid in grooming.
Their noses project slightly beyond the lower jaw.
Their eyes are relatively large, situated to provide binocular vision. Their night vision is especially good due to the presence of a tapetum lucidum, which reflects light inside the eyeball, and gives felid eyes their distinctive shine. As a result, the eyes of felids are about six times more light-sensitive than those of humans, and many species are at least partially nocturnal. The retina of felids also contains a relatively high proportion of rod cells, adapted for distinguishing moving objects in conditions of dim light, which are complemented by the presence of cone cells for sensing colour during the day.
They have well-developed and highly sensitive whiskers above the eyes, on the cheeks, and the muzzle, but not below the chin. Whiskers help to navigate in the dark and to capture and hold prey.
Their external ears are large and especially sensitive to high-frequency sounds in the smaller cat species. This sensitivity allows them to locate small rodent prey.
The penis is subconical, facing downward when not erect and backward during urination. The baculum is small or vestigial, and shorter than in the Canidae. Most felids have penile spines that induce ovulation during copulation.
They have a vomeronasal organ in the roof of the mouth, allowing them to "taste" the air. The use of this organ is associated with the flehmen response.
They cannot detect the sweetness of sugar, as they lack the sweet taste receptor.
They share a broadly similar set of vocalizations but with some variation between species. In particular, the pitch of calls varies, with larger species producing deeper sounds; overall, the frequency of felid calls ranges between 50 and 10,000 hertz. The standard sounds made by felids include mewing, chuffing, spitting, hissing, snarling and growling. Mewing and chuffing are the main contact sound, whereas the others signify an aggressive motivation.
They can purr during both phases of respiration, though pantherine cats seem to purr only during oestrus and copulation, and as cubs when suckling. Purring is generally a low-pitch sound of 16.8–27.5 Hz and is mixed with other vocalization types during the expiratory phase. The ability to roar comes from an elongated and specially adapted larynx and hyoid apparatus. When air passes through the larynx on the way from the lungs, the cartilage walls of the larynx vibrate, producing sound. Only lions, leopards, tigers, and jaguars are truly able to roar, although the loudest mews of snow leopards have a similar, if less structured, sound. Clouded leopards can neither purr nor roar, and so Neofelis is said to be a sister group to Panthera. Sabretoothed cats may have had the ability to both roar and purr.
The colour, length and density of their fur are very diverse. Fur colour covers the gamut from white to black, and fur patterns from distinctive small spots and stripes to small blotches and rosettes. Most cat species are born with spotted fur, except the jaguarundi (Herpailurus yagouaroundi), Asian golden cat (Catopuma temminckii) and caracal (Caracal caracal). The spotted fur of lion (Panthera leo), cheetah (Acinonyx jubatus) and cougar (Puma concolor) cubs changes to uniform fur during their ontogeny. Those living in cold environments have thick fur with long hair, like the snow leopard (Panthera uncia) and the Pallas's cat (Otocolobus manul). Those living in tropical and hot climate zones have short fur. Several species exhibit melanism with all-black individuals; cougars are notable for lacking melanism, but leucism and albinism are present in cougars along with many other felids.
In the great majority of cat species, the tail is between a third and a half of the body length, although with some exceptions, like the Lynx species and margay (Leopardus wiedii). Cat species vary greatly in body and skull sizes, and weights:
The largest cat species is the tiger (Panthera tigris), with a head-to-body length of up to , a weight range of at least , and a skull length ranging from . Although the maximum skull length of a lion is slightly greater at , it is generally smaller in head-to-body length than the tiger.
The smallest cat species are the rusty-spotted cat (Prionailurus rubiginosus) and the black-footed cat (Felis nigripes). The former is in length and weighs . The latter has a head-to-body length of and a maximum recorded weight of .
Most cat species have a haploid number of 18 or 19. Central and South American cats have a haploid number of 18, possibly due to the combination of two smaller chromosomes into a larger one.
Felidae have type IIx muscle fibers three times more powerful than the muscle fibers of human athletes.
Evolution
The family Felidae is part of the Feliformia, a suborder that diverged probably about into several families. The Felidae and the Asiatic linsangs are considered a sister group, which split about .
The earliest cats probably appeared about . Proailurus is the oldest known cat that occurred after the Eocene–Oligocene extinction event about ; fossil remains were excavated in France and Mongolia's Hsanda Gol Formation. Fossil occurrences indicate that the Felidae arrived in North America around . This is about 20 million years later than the Ursidae and the Nimravidae, and about 10 million years later than the Canidae.
In the Early Miocene about , Pseudaelurus lived in Africa. Its fossil jaws were also excavated in geological formations of Europe's Vallesian, Asia's Middle Miocene and North America's late Hemingfordian to late Barstovian epochs.
In the Early or Middle Miocene, the saber-toothed Machairodontinae evolved in Africa and migrated northwards in the Late Miocene. With their large upper canines, they were adapted to prey on large-bodied megaherbivores. Miomachairodus is the oldest known member of this subfamily. Metailurus lived in Africa and Eurasia about . Several Paramachaerodus skeletons were found in Spain. Homotherium appeared in Africa, Eurasia and North America around , and Megantereon about . Smilodon lived in North and South America from about . This subfamily became extinct in the Late Pleistocene.
Results of mitochondrial analysis indicate that the living Felidae species descended from a common ancestor, which originated in Asia in the Late Miocene epoch. They migrated to Africa, Europe and the Americas in the course of at least ten migration waves during the past ~11 million years. Low sea levels and interglacial and glacial periods facilitated these migrations. Panthera blytheae is the oldest known pantherine cat dated to the late Messinian to early Zanclean ages about . A fossil skull was excavated in 2010 in Zanda County on the Tibetan Plateau. Panthera palaeosinensis from North China probably dates to the Late Miocene or Early Pliocene. The skull of the holotype is similar to that of a lion or leopard. Panthera zdanskyi dates to the Gelasian about . Several fossil skulls and jawbones were excavated in northwestern China. Panthera gombaszoegensis is the earliest known pantherine cat that lived in Europe about .
Living felids fall into eight evolutionary lineages or species clades. Genotyping of the nuclear DNA of all 41 felid species revealed that hybridization between species occurred in the course of evolution within the majority of the eight lineages.
Modelling of felid coat pattern transformations revealed that nearly all patterns evolved from small spots.
Classification
Traditionally, five subfamilies had been distinguished within the Felidae based on phenotypical features: the Pantherinae, the Felinae, the Acinonychinae, and the extinct Machairodontinae and Proailurinae. Acinonychinae used to only contain the genus Acinonyx but this genus is now within the Felinae subfamily.
Phylogeny
The following cladogram based on Piras et al. (2013) depicts the phylogeny of basal living and extinct groups.
The phylogenetic relationships of living felids are shown in the following cladogram:
| Biology and health sciences | Carnivora | null |
11376 | https://en.wikipedia.org/wiki/Floating-point%20arithmetic | Floating-point arithmetic | In computing, floating-point arithmetic (FP) is arithmetic on subsets of real numbers formed by a signed sequence of a fixed number of digits in some base, called a significand, scaled by an integer exponent of that base.
Numbers of this form are called floating-point numbers.
For example, the number 2469/200 is a floating-point number in base ten with five digits:
12.345 = 12345 × 10^−3
However, 7716/625 = 12.3456 is not a floating-point number in base ten with five digits—it needs six digits.
The nearest floating-point number with only five digits is 12.346.
And 1/3 = 0.3333… is not a floating-point number in base ten with any finite number of digits.
In practice, most floating-point systems use base two, though base ten (decimal floating point) is also common.
Floating-point arithmetic operations, such as addition and division, approximate the corresponding real number arithmetic operations by rounding any result that is not a floating-point number itself to a nearby floating-point number.
For example, in a floating-point arithmetic with five base-ten digits, the sum 12.345 + 1.0001 = 13.3451 might be rounded to 13.345.
The term floating point refers to the fact that the number's radix point can "float" anywhere to the left, right, or between the significant digits of the number. This position is indicated by the exponent, so floating point can be considered a form of scientific notation.
A floating-point system can be used to represent, with a fixed number of digits, numbers of very different orders of magnitude — such as the number of meters between galaxies or between protons in an atom. For this reason, floating-point arithmetic is often used to allow very small and very large real numbers that require fast processing times. The result of this dynamic range is that the numbers that can be represented are not uniformly spaced; the difference between two consecutive representable numbers varies with their exponent.
Over the years, a variety of floating-point representations have been used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic was established, and since the 1990s, the most commonly encountered representations are those defined by the IEEE.
The speed of floating-point operations, commonly measured in terms of FLOPS, is an important characteristic of a computer system, especially for applications that involve intensive mathematical calculations.
A floating-point unit (FPU, colloquially a math coprocessor) is a part of a computer system specially designed to carry out operations on floating-point numbers.
Overview
Floating-point numbers
A number representation specifies some way of encoding a number, usually as a string of digits.
There are several mechanisms by which strings of digits can represent numbers. In standard mathematical notation, the digit string can be of any length, and the location of the radix point is indicated by placing an explicit "point" character (dot or comma) there. If the radix point is not specified, then the string implicitly represents an integer and the unstated radix point would be off the right-hand end of the string, next to the least significant digit. In fixed-point systems, a position in the string is specified for the radix point. So a fixed-point scheme might use a string of 8 decimal digits with the decimal point in the middle, whereby "00012345" would represent 0001.2345.
In scientific notation, the given number is scaled by a power of 10, so that it lies within a specific range—typically between 1 and 10, with the radix point appearing immediately after the first digit. As a power of ten, the scaling factor is then indicated separately at the end of the number. For example, the orbital period of Jupiter's moon Io is seconds, a value that would be represented in standard-form scientific notation as seconds.
Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of:
A signed (meaning positive or negative) digit string of a given length in a given base (or radix). This digit string is referred to as the significand, mantissa, or coefficient. The length of the significand determines the precision to which numbers can be represented. The radix point position is assumed always to be somewhere within the significand—often just after or just before the most significant digit, or to the right of the rightmost (least significant) digit. This article generally follows the convention that the radix point is set just after the most significant (leftmost) digit.
A signed integer exponent (also referred to as the characteristic, or scale), which modifies the magnitude of the number.
To derive the value of the floating-point number, the significand is multiplied by the base raised to the power of the exponent, equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent—to the right if the exponent is positive or to the left if the exponent is negative.
Using base-10 (the familiar decimal notation) as an example, the number , which has ten decimal digits of precision, is represented as the significand together with 5 as the exponent. To determine the actual value, a decimal point is placed after the first digit of the significand and the result is multiplied by to give , or . In storing such a number, the base (10) need not be stored, since it will be the same for the entire range of supported numbers, and can thus be inferred.
Symbolically, this final value is
s / b^(p−1) × b^e,
where s is the significand (ignoring any implied decimal point), p is the precision (the number of digits in the significand), b is the base (in our example, this is the number ten), and e is the exponent.
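As a rough illustration only (not part of the original text), this relationship can be evaluated directly in C; the numbers chosen here reuse the seven-digit significand 1234567 with exponent 5 from the addition example later in this article, and the variable names simply mirror the symbols s, b, p and e defined above.

#include <math.h>
#include <stdio.h>

int main(void) {
    double s = 1234567.0; /* significand digits, implied point ignored */
    int b = 10;           /* base */
    int p = 7;            /* precision: number of significand digits */
    int e = 5;            /* exponent */
    /* value = s / b^(p-1) * b^e, i.e. 1.234567 * 10^5 */
    double value = s / pow(b, p - 1) * pow(b, e);
    printf("%.1f\n", value); /* prints 123456.7 */
    return 0;
}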
Historically, several number bases have been used for representing floating-point numbers, with base two (binary) being the most common, followed by base ten (decimal floating point), and other less common varieties, such as base sixteen (hexadecimal floating point), base eight (octal floating point), base four (quaternary floating point), base three (balanced ternary floating point) and even base 256 and base .
A floating-point number is a rational number, because it can be represented as one integer divided by another; for example 1.45 × 10^3 is (145/100) × 1000 or 145000/100. The base determines the fractions that can be represented; for instance, 1/5 cannot be represented exactly as a floating-point number using a binary base, but 1/5 can be represented exactly using a decimal base (0.2, or 2 × 10^−1). However, 1/3 cannot be represented exactly by either binary (0.010101...) or decimal (0.333...), but in base 3, it is trivial (0.1 or 1 × 3^−1). The occasions on which infinite expansions occur depend on the base and its prime factors.
The way in which the significand (including its sign) and exponent are stored in a computer is implementation-dependent. The common IEEE formats are described in detail later and elsewhere, but as an example, in the binary single-precision (32-bit) floating-point representation, p = 24, and so the significand is a string of 24 bits. For instance, the number π's first 33 bits are:
11001001 00001111 11011010 10100010 0
In this binary expansion, let us denote the positions from 0 (leftmost bit, or most significant bit) to 32 (rightmost bit). The 24-bit significand will stop at position 23, the final 0 of the third group above. The next bit, at position 24, is called the round bit or rounding bit. It is used to round the 33-bit approximation to the nearest 24-bit number (there are specific rules for halfway values, which is not the case here). This bit, which is 1 in this example, is added to the integer formed by the leftmost 24 bits, yielding:
11001001 00001111 11011011
When this is stored in memory using the IEEE 754 encoding, this becomes the significand s = 1.10010010000111111011011 (the significand is assumed to have a binary point to the right of the leftmost bit). So, the binary representation of π is calculated from left-to-right as follows:
(sum for n = 0 to p − 1 of bit_n × 2^−n) × 2^e = 1.5707963705062866… × 2 = 3.1415927410125732…,
where p is the precision (24 in this example), n is the position of each bit of the significand counted from the left (starting at 0 and finishing at 23 here), bit_n is the value of that bit, and e is the exponent (1 in this example).
It can be required that the most significant digit of the significand of a non-zero number be non-zero (except when the corresponding exponent would be smaller than the minimum one). This process is called normalization. For binary formats (which use only the digits 0 and 1), this non-zero digit is necessarily 1. Therefore, it does not need to be represented in memory, allowing the format to have one more bit of precision. This rule is variously called the leading bit convention, the implicit bit convention, the hidden bit convention, or the assumed bit convention.
Alternatives to floating-point numbers
The floating-point representation is by far the most common way of representing in computers an approximation to real numbers. However, there are alternatives:
Fixed-point representation uses integer hardware operations controlled by a software implementation of a specific convention about the location of the binary or decimal point, for example, 6 bits or digits from the right. The hardware to manipulate these representations is less costly than floating point, and it can be used to perform normal integer operations, too. Binary fixed point is usually used in special-purpose applications on embedded processors that can only do integer arithmetic, but decimal fixed point is common in commercial applications.
Logarithmic number systems (LNSs) represent a real number by the logarithm of its absolute value and a sign bit. The value distribution is similar to floating point, but the value-to-representation curve (i.e., the graph of the logarithm function) is smooth (except at 0). Conversely to floating-point arithmetic, in a logarithmic number system multiplication, division and exponentiation are simple to implement, but addition and subtraction are complex. The (symmetric) level-index arithmetic (LI and SLI) of Charles Clenshaw, Frank Olver and Peter Turner is a scheme based on a generalized logarithm representation.
Tapered floating-point representation, used in Unum.
Some simple rational numbers (e.g., 1/3 and 1/10) cannot be represented exactly in binary floating point, no matter what the precision is. Using a different radix allows one to represent some of them (e.g., 1/10 in decimal floating point), but the possibilities remain limited. Software packages that perform rational arithmetic represent numbers as fractions with integral numerator and denominator, and can therefore represent any rational number exactly. Such packages generally need to use "bignum" arithmetic for the individual integers.
Interval arithmetic allows one to represent numbers as intervals and obtain guaranteed bounds on results. It is generally based on other arithmetics, in particular floating point.
Computer algebra systems such as Mathematica, Maxima, and Maple can often handle irrational numbers like or in a completely "formal" way (symbolic computation), without dealing with a specific encoding of the significand. Such a program can evaluate expressions like "" exactly, because it is programmed to process the underlying mathematics directly, instead of using approximate values for each intermediate calculation.
History
In 1914, the Spanish engineer Leonardo Torres Quevedo published Essays on Automatics, where he designed a special-purpose electromechanical calculator based on Charles Babbage's analytical engine and described a way to store floating-point numbers in a consistent manner. He stated that numbers will be stored in exponential format as n × 10^m, and offered three rules by which consistent manipulation of floating-point numbers by machines could be implemented. For Torres, "n will always be the same number of digits (e.g. six), the first digit of n will be of order of tenths, the second of hundredths, etc, and one will write each quantity in the form: n; m." The format he proposed shows the need for a fixed-sized significand as is presently used for floating-point data, fixing the location of the decimal point in the significand so that each representation was unique, and how to format such numbers by specifying a syntax to be used that could be entered through a typewriter, as was the case of his Electromechanical Arithmometer in 1920.
In 1938, Konrad Zuse of Berlin completed the Z1, the first binary, programmable mechanical computer; it uses a 24-bit binary floating-point number representation with a 7-bit signed exponent, a 17-bit significand (including one implicit bit), and a sign bit. The more reliable relay-based Z3, completed in 1941, has representations for both positive and negative infinities; in particular, it implements defined operations with infinity, such as , and it stops on undefined operations, such as .
Zuse also proposed, but did not complete, carefully rounded floating-point arithmetic that includes infinity and NaN representations, anticipating features of the IEEE Standard by four decades. In contrast, von Neumann recommended against floating-point numbers for the 1951 IAS machine, arguing that fixed-point arithmetic is preferable.
The first commercial computer with floating-point hardware was Zuse's Z4 computer, designed in 1942–1945. In 1946, Bell Laboratories introduced the Model V, which implemented decimal floating-point numbers.
The Pilot ACE has binary floating-point arithmetic, and it became operational in 1950 at the National Physical Laboratory, UK. Thirty-three were later sold commercially as the English Electric DEUCE. The arithmetic is actually implemented in software, but with a one megahertz clock rate, the speed of floating-point and fixed-point operations in this machine was initially faster than that of many competing computers.
The mass-produced IBM 704 followed in 1954; it introduced the use of a biased exponent. For many decades after that, floating-point hardware was typically an optional feature, and computers that had it were said to be "scientific computers", or to have "scientific computation" (SC) capability (see also Extensions for Scientific Computation (XSC)). It was not until the launch of the Intel i486 in 1989 that general-purpose personal computers had floating-point capability in hardware as a standard feature.
The UNIVAC 1100/2200 series, introduced in 1962, supported two floating-point representations:
Single precision: 36 bits, organized as a 1-bit sign, an 8-bit exponent, and a 27-bit significand.
Double precision: 72 bits, organized as a 1-bit sign, an 11-bit exponent, and a 60-bit significand.
The IBM 7094, also introduced in 1962, supported single-precision and double-precision representations, but with no relation to the UNIVAC's representations. Indeed, in 1964, IBM introduced hexadecimal floating-point representations in its System/360 mainframes; these same representations are still available for use in modern z/Architecture systems. In 1998, IBM implemented IEEE-compatible binary floating-point arithmetic in its mainframes; in 2005, IBM also added IEEE-compatible decimal floating-point arithmetic.
Initially, computers used many different representations for floating-point numbers. The lack of standardization at the mainframe level was an ongoing problem by the early 1970s for those writing and maintaining higher-level source code; these manufacturer floating-point standards differed in the word sizes, the representations, and the rounding behavior and general accuracy of operations. Floating-point compatibility across multiple computing systems was in desperate need of standardization by the early 1980s, leading to the creation of the IEEE 754 standard once the 32-bit (or 64-bit) word had become commonplace. This standard was significantly based on a proposal from Intel, which was designing the i8087 numerical coprocessor; Motorola, which was designing the 68000 around the same time, gave significant input as well.
In 1989, mathematician and computer scientist William Kahan was honored with the Turing Award for being the primary architect behind this proposal; he was aided by his student Jerome Coonen and a visiting professor, Harold Stone.
Among the x86 innovations are these:
A precisely specified floating-point representation at the bit-string level, so that all compliant computers interpret bit patterns the same way. This makes it possible to accurately and efficiently transfer floating-point numbers from one computer to another (after accounting for endianness).
A precisely specified behavior for the arithmetic operations: A result is required to be produced as if infinitely precise arithmetic were used to yield a value that is then rounded according to specific rules. This means that a compliant computer program would always produce the same result when given a particular input, thus mitigating the almost mystical reputation that floating-point computation had developed for its hitherto seemingly non-deterministic behavior.
The ability of exceptional conditions (overflow, divide by zero, etc.) to propagate through a computation in a benign manner and then be handled by the software in a controlled fashion.
Range of floating-point numbers
A floating-point number consists of two fixed-point components, whose range depends exclusively on the number of bits or digits in their representation. Whereas each component depends linearly on its range, the floating-point range depends linearly on the significand range and exponentially on the range of the exponent component, which attaches an outstandingly wider range to the number.
On a typical computer system, a double-precision (64-bit) binary floating-point number has a coefficient of 53 bits (including 1 implied bit), an exponent of 11 bits, and 1 sign bit. Since 2^10 = 1024, the complete range of the positive normal floating-point numbers in this format is from 2^−1022 ≈ 2 × 10^−308 to approximately 2^1024 ≈ 2 × 10^308.
The number of normal floating-point numbers in a system (B, P, L, U) where
B is the base of the system,
P is the precision of the significand (in base B),
L is the smallest exponent of the system,
U is the largest exponent of the system,
is 2(B − 1)B^(P−1)(U − L + 1).
There is a smallest positive normal floating-point number,
Underflow level = UFL = B^L,
which has a 1 as the leading digit and 0 for the remaining digits of the significand, and the smallest possible value for the exponent.
There is a largest floating-point number,
Overflow level = OFL = (B − B^(1−P)) × B^U,
which has B − 1 as the value for each digit of the significand and the largest possible value for the exponent.
In addition, there are representable values strictly between −UFL and UFL. Namely, positive and negative zeros, as well as subnormal numbers.
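As a small illustrative sketch (assuming an IEEE 754 binary64 "double" and a C99 toolchain; this example is not taken from the article), the underflow and overflow levels of the double format can be read directly from <float.h>:

#include <float.h>
#include <stdio.h>

int main(void) {
    /* DBL_MIN is the smallest positive normal number (the UFL) and
       DBL_MAX is the largest finite number (the OFL) of binary64. */
    printf("UFL = %g\n", DBL_MIN);  /* about 2.22507e-308 */
    printf("OFL = %g\n", DBL_MAX);  /* about 1.79769e+308 */
    return 0;
}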
IEEE 754: floating point in modern computers
The IEEE standardized the computer representation for binary floating-point numbers in IEEE 754 (a.k.a. IEC 60559) in 1985. This first standard is followed by almost all modern machines. It was revised in 2008. IBM mainframes support IBM's own hexadecimal floating point format and IEEE 754-2008 decimal floating point in addition to the IEEE 754 binary format. The Cray T90 series had an IEEE version, but the SV1 still uses Cray floating-point format.
The standard provides for many closely related formats, differing in only a few details. Five of these formats are called basic formats, and others are termed extended precision formats and extendable precision format. Three formats are especially widely used in computer hardware and languages:
Single precision (binary32), usually used to represent the "float" type in the C language family. This is a binary format that occupies 32 bits (4 bytes) and its significand has a precision of 24 bits (about 7 decimal digits).
Double precision (binary64), usually used to represent the "double" type in the C language family. This is a binary format that occupies 64 bits (8 bytes) and its significand has a precision of 53 bits (about 16 decimal digits).
Double extended, also ambiguously called "extended precision" format. This is a binary format that occupies at least 79 bits (80 if the hidden/implicit bit rule is not used) and its significand has a precision of at least 64 bits (about 19 decimal digits). The C99 and C11 standards of the C language family, in their annex F ("IEC 60559 floating-point arithmetic"), recommend such an extended format to be provided as "long double". A format satisfying the minimal requirements (64-bit significand precision, 15-bit exponent, thus fitting on 80 bits) is provided by the x86 architecture. Often on such processors, this format can be used with "long double", though extended precision is not available with MSVC. For alignment purposes, many tools store this 80-bit value in a 96-bit or 128-bit space. On other processors, "long double" may stand for a larger format, such as quadruple precision, or just double precision, if any form of extended precision is not available.
Increasing the precision of the floating-point representation generally reduces the amount of accumulated round-off error caused by intermediate calculations.
Other IEEE formats include:
Decimal64 and decimal128 floating-point formats. These formats (especially decimal128) are pervasive in financial transactions because, along with the decimal32 format, they allow correct decimal rounding.
Quadruple precision (binary128). This is a binary format that occupies 128 bits (16 bytes) and its significand has a precision of 113 bits (about 34 decimal digits).
Half precision, also called binary16, a 16-bit floating-point value. It is being used in the NVIDIA Cg graphics language, and in the openEXR standard (where it actually predates the introduction in the IEEE 754 standard).
Any integer with absolute value less than 2^24 can be exactly represented in the single-precision format, and any integer with absolute value less than 2^53 can be exactly represented in the double-precision format. Furthermore, a wide range of powers of 2 times such a number can be represented. These properties are sometimes used for purely integer data, to get 53-bit integers on platforms that have double-precision floats but only 32-bit integers.
The standard specifies some special values, and their representation: positive infinity (+∞), negative infinity (−∞), a negative zero (−0) distinct from ordinary ("positive") zero, and "not a number" values (NaNs).
Comparison of floating-point numbers, as defined by the IEEE standard, is a bit different from usual integer comparison. Negative and positive zero compare equal, and every NaN compares unequal to every value, including itself. All finite floating-point numbers are strictly smaller than +∞ and strictly greater than −∞, and they are ordered in the same way as their values (in the set of real numbers).
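A minimal C sketch of these comparison rules (assuming an IEEE 754 environment with quiet NaNs; this example is not from the article):

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = NAN;               /* a quiet NaN (C99 macro from <math.h>) */
    printf("%d\n", x == x);       /* 0: every NaN compares unequal, even to itself */
    printf("%d\n", -0.0 == 0.0);  /* 1: negative and positive zero compare equal */
    printf("%d\n", isnan(x));     /* 1: the portable way to test for a NaN */
    return 0;
}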
Internal representation
Floating-point numbers are typically packed into a computer datum as the sign bit, the exponent field, and the significand or mantissa, from left to right. For the IEEE 754 binary formats (basic and extended) which have extant hardware implementations, they are apportioned as follows: binary32 has 1 sign bit, 8 exponent bits and 23 explicitly stored significand bits; binary64 has 1, 11 and 52; binary128 has 1, 15 and 112; and the x86 extended-precision format has 1 sign bit, 15 exponent bits and a 64-bit significand with no hidden bit.
While the exponent can be positive or negative, in binary formats it is stored as an unsigned number that has a fixed "bias" added to it. Values of all 0s in this field are reserved for the zeros and subnormal numbers; values of all 1s are reserved for the infinities and NaNs. The exponent range for normal numbers is [−126, 127] for single precision, [−1022, 1023] for double, or [−16382, 16383] for quad. Normal numbers exclude subnormal values, zeros, infinities, and NaNs.
In the IEEE binary interchange formats the leading 1 bit of a normalized significand is not actually stored in the computer datum. It is called the "hidden" or "implicit" bit. Because of this, the single-precision format actually has a significand with 24 bits of precision, the double-precision format has 53, and quad has 113.
For example, it was shown above that π, rounded to 24 bits of precision, has:
sign = 0 ; e = 1 ; s = 110010010000111111011011 (including the hidden bit)
The sum of the exponent bias (127) and the exponent (1) is 128, so this is represented in the single-precision format as
0 10000000 10010010000111111011011 (excluding the hidden bit) = 40490FDB as a hexadecimal number.
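On a platform whose "float" type is IEEE 754 binary32, this encoding can be checked by reinterpreting the bits of the value; the following sketch (not from the article) assumes such a platform:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float pi = 3.14159265358979323846f;  /* rounds to the nearest binary32 value */
    uint32_t bits;
    memcpy(&bits, &pi, sizeof bits);     /* reinterpret the float's bit pattern */
    printf("0x%08X\n", bits);            /* 0x40490FDB */
    printf("sign=%u exponent=%u fraction=0x%06X\n",
           (unsigned)(bits >> 31),
           (unsigned)((bits >> 23) & 0xFF),
           (unsigned)(bits & 0x7FFFFF));
    return 0;
}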
A 32-bit floating-point value is thus laid out, from left to right, as a 1-bit sign, an 8-bit exponent and a 23-bit fraction field; the 64-bit ("double") layout is similar.
Other notable floating-point formats
In addition to the widely used IEEE 754 standard formats, other floating-point formats are used, or have been used, in certain domain-specific areas.
The Microsoft Binary Format (MBF) was developed for the Microsoft BASIC language products, including Microsoft's first-ever product, Altair BASIC (1975), TRS-80 LEVEL II, CP/M's MBASIC, the IBM PC 5150's BASICA, MS-DOS's GW-BASIC, and QuickBASIC prior to version 4.00. QuickBASIC versions 4.00 and 4.50 switched to the IEEE 754-1985 format but can revert to the MBF format using the /MBF command option. MBF was designed and developed on a simulated Intel 8080 by Monte Davidoff, a dormmate of Bill Gates, during spring of 1975 for the MITS Altair 8800. The initial release of July 1975 supported only a single-precision (32-bit) format because of the cost of the MITS Altair 8800's 4 kilobytes of memory. In December 1975, the 8-kilobyte version added a double-precision (64-bit) format. A 40-bit variant of the single-precision format was adopted for other CPUs, notably the MOS 6502 (Apple //, Commodore PET, Atari), Motorola 6800 (MITS Altair 680) and Motorola 6809 (TRS-80 Color Computer). All Microsoft language products from 1975 through 1987 used the Microsoft Binary Format, until Microsoft adopted the IEEE 754 standard format in all its products starting in 1988. MBF consists of the MBF single-precision format (32 bits, "6-digit BASIC"), the MBF extended-precision format (40 bits, "9-digit BASIC"), and the MBF double-precision format (64 bits); each of them is represented with an 8-bit exponent, followed by a sign bit, followed by a significand of respectively 23, 31, and 55 bits.
The Bfloat16 format requires the same amount of memory (16 bits) as the IEEE 754 half-precision format, but allocates 8 bits to the exponent instead of 5, thus providing the same range as an IEEE 754 single-precision number. The tradeoff is reduced precision, as the trailing significand field is reduced from 10 to 7 bits. This format is mainly used in the training of machine learning models, where range is more valuable than precision. Many machine learning accelerators provide hardware support for this format.
The TensorFloat-32 format combines the 8 bits of exponent of the Bfloat16 with the 10 bits of trailing significand field of half-precision formats, resulting in a size of 19 bits. This format was introduced by Nvidia, which provides hardware support for it in the Tensor Cores of its GPUs based on the Nvidia Ampere architecture. The drawback of this format is its size, which is not a power of 2. However, according to Nvidia, this format should only be used internally by hardware to speed up computations, while inputs and outputs should be stored in the 32-bit single-precision IEEE 754 format.
The Hopper architecture GPUs provide two FP8 formats: one with the same numerical range as half-precision (E5M2) and one with higher precision, but less range (E4M3).
Representable numbers, conversion and rounding
By their nature, all numbers expressed in floating-point format are rational numbers with a terminating expansion in the relevant base (for example, a terminating decimal expansion in base-10, or a terminating binary expansion in base-2). Irrational numbers, such as π or √2, or non-terminating rational numbers, must be approximated. The number of digits (or bits) of precision also limits the set of rational numbers that can be represented exactly. For example, the decimal number 123456789 cannot be exactly represented if only eight decimal digits of precision are available (it would be rounded to one of the two straddling representable values, 12345678 × 10^1 or 12345679 × 10^1); the same applies to non-terminating digits (0.5555… would have to be rounded to either .55555555 or .55555556).
When a number is represented in some format (such as a character string) which is not a native floating-point representation supported in a computer implementation, then it will require a conversion before it can be used in that implementation. If the number can be represented exactly in the floating-point format then the conversion is exact. If there is not an exact representation then the conversion requires a choice of which floating-point number to use to represent the original value. The representation chosen will have a different value from the original, and the value thus adjusted is called the rounded value.
Whether or not a rational number has a terminating expansion depends on the base. For example, in base-10 the number 1/2 has a terminating expansion (0.5) while the number 1/3 does not (0.333...). In base-2 only rationals with denominators that are powers of 2 (such as 1/2 or 3/16) are terminating. Any rational with a denominator that has a prime factor other than 2 will have an infinite binary expansion. This means that numbers that appear to be short and exact when written in decimal format may need to be approximated when converted to binary floating-point. For example, the decimal number 0.1 is not representable in binary floating-point of any finite precision; the exact binary representation would have a "1100" sequence continuing endlessly:
e = −4; s = 1100110011001100110011001100110011...,
where, as previously, s is the significand and e is the exponent.
When rounded to 24 bits this becomes
e = −4; s = 110011001100110011001101,
which is actually 0.100000001490116119384765625 in decimal.
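This can be observed directly in C on an implementation whose "float" is binary32 and whose printf performs exact binary-to-decimal conversion (as glibc does); this small sketch is not from the article:

#include <stdio.h>

int main(void) {
    float x = 0.1f;      /* 0.1 rounded to 24 significant bits */
    /* Prints 0.100000001490116119384765625 when printf converts exactly. */
    printf("%.27f\n", x);
    return 0;
}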
As a further example, the real number π, represented in binary as an infinite sequence of bits is
11.0010010000111111011010101000100010000101101000110000100011010011...
but is
11.0010010000111111011011
when approximated by rounding to a precision of 24 bits.
In binary single-precision floating-point, this is represented as s = 1.10010010000111111011011 with e = 1.
This has a decimal value of
3.1415927410125732421875,
whereas a more accurate approximation of the true value of π is
3.14159265358979323846264338327950...
The result of rounding differs from the true value by about 0.03 parts per million, and matches the decimal representation of π in the first 7 digits. The difference is the discretization error and is limited by the machine epsilon.
The arithmetical difference between two consecutive representable floating-point numbers which have the same exponent is called a unit in the last place (ULP). For example, if there is no representable number lying between the representable numbers 1.45a70c22hex and 1.45a70c24hex, the ULP is 2 × 16^−8, or 2^−31. For numbers with a base-2 exponent part of 0, i.e. numbers with an absolute value greater than or equal to 1 but less than 2, an ULP is exactly 2^−23 or about 10^−7 in single precision, and exactly 2^−52 or about 10^−16 in double precision. The mandated behavior of IEEE-compliant hardware is that the result be within one-half of a ULP.
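One ULP can also be measured at run time with the standard C99 nextafter function; the following sketch (not from the article) assumes IEEE 754 binary64 doubles:

#include <math.h>
#include <stdio.h>

int main(void) {
    /* Distance from 1.0 to the next representable double: one ULP for
       values in [1, 2), i.e. exactly 2^-52. */
    double ulp = nextafter(1.0, INFINITY) - 1.0;
    printf("%.17g\n", ulp);   /* 2.2204460492503131e-16 */
    return 0;
}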
Rounding modes
Rounding is used when the exact result of a floating-point operation (or a conversion to floating-point format) would need more digits than there are digits in the significand. IEEE 754 requires correct rounding: that is, the rounded result is as if infinitely precise arithmetic was used to compute the value and then rounded (although in implementation only three extra bits are needed to ensure this). There are several different rounding schemes (or rounding modes). Historically, truncation was the typical approach. Since the introduction of IEEE 754, the default method (round to nearest, ties to even, sometimes called Banker's Rounding) is more commonly used. This method rounds the ideal (infinitely precise) result of an arithmetic operation to the nearest representable value, and gives that representation as the result. In the case of a tie, the value that would make the significand end in an even digit is chosen. The IEEE 754 standard requires the same rounding to be applied to all fundamental algebraic operations, including square root and conversions, when there is a numeric (non-NaN) result. It means that the results of IEEE 754 operations are completely determined in all bits of the result, except for the representation of NaNs. ("Library" functions such as cosine and log are not mandated.)
Alternative rounding options are also available. IEEE 754 specifies the following rounding modes:
round to nearest, where ties round to the nearest even digit in the required position (the default and by far the most common mode)
round to nearest, where ties round away from zero (optional for binary floating-point and commonly used in decimal)
round up (toward +∞; negative results thus round toward zero)
round down (toward −∞; negative results thus round away from zero)
round toward zero (truncation; it is similar to the common behavior of float-to-integer conversions, which convert −3.9 to −3 and 3.9 to 3)
Alternative modes are useful when the amount of error being introduced must be bounded. Applications that require a bounded error include multi-precision floating-point and interval arithmetic.
The alternative rounding modes are also useful in diagnosing numerical instability: if the results of a subroutine vary substantially between rounding to + and − infinity then it is likely numerically unstable and affected by round-off error.
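In C99 the dynamic rounding mode can be changed through <fenv.h>, which is one way to perform the diagnostic just described; this is a minimal sketch rather than a robust test harness (the FENV_ACCESS pragma is not honored by every compiler):

#include <fenv.h>
#include <stdio.h>

#pragma STDC FENV_ACCESS ON   /* request that the compiler respect mode changes */

int main(void) {
    volatile double x = 1.0, y = 3.0;   /* volatile discourages constant folding */

    fesetround(FE_DOWNWARD);            /* round toward -infinity */
    printf("%.17g\n", x / y);           /* 0.33333333333333331 */

    fesetround(FE_UPWARD);              /* round toward +infinity */
    printf("%.17g\n", x / y);           /* 0.33333333333333337, one ULP larger */

    fesetround(FE_TONEAREST);           /* restore the default mode */
    return 0;
}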
Binary-to-decimal conversion with minimal number of digits
Converting a double-precision binary floating-point number to a decimal string is a common operation, but an algorithm producing results that are both accurate and minimal did not appear in print until 1990, with Steele and White's Dragon4. Some of the improvements since then include:
David M. Gay's dtoa.c, a practical open-source implementation of many ideas in Dragon4.
Grisu3, with a 4× speedup as it removes the use of bignums. Must be used with a fallback, as it fails for ~0.5% of cases.
Errol3, an always-succeeding algorithm similar to, but slower than, Grisu3. Apparently not as good as an early-terminating Grisu with fallback.
Ryū, an always-succeeding algorithm that is faster and simpler than Grisu3.
Schubfach, an always-succeeding algorithm that is based on a similar idea to Ryū, developed almost simultaneously and independently. Performs better than Ryū and Grisu3 in certain benchmarks.
Many modern language runtimes use Grisu3 with a Dragon4 fallback.
Decimal-to-binary conversion
The problem of parsing a decimal string into a binary FP representation is complex, with an accurate parser not appearing until Clinger's 1990 work (implemented in dtoa.c). Further work has likewise progressed in the direction of faster parsing.
Floating-point operations
For ease of presentation and understanding, decimal radix with 7 digit precision will be used in the examples, as in the IEEE 754 decimal32 format. The fundamental principles are the same in any radix or precision, except that normalization is optional (it does not affect the numerical value of the result). Here, s denotes the significand and e denotes the exponent.
Addition and subtraction
A simple method to add floating-point numbers is to first represent them with the same exponent. In the example below, the second number (with the smaller exponent) is shifted right by three digits, and one then proceeds with the usual addition method:
123456.7 = 1.234567 × 10^5
101.7654 = 1.017654 × 10^2 = 0.001017654 × 10^5
Hence:
123456.7 + 101.7654 = (1.234567 × 10^5) + (1.017654 × 10^2)
= (1.234567 × 10^5) + (0.001017654 × 10^5)
= (1.234567 + 0.001017654) × 10^5
= 1.235584654 × 10^5
In detail:
e=5; s=1.234567 (123456.7)
+ e=2; s=1.017654 (101.7654)
e=5; s=1.234567
+ e=5; s=0.001017654 (after shifting)
--------------------
e=5; s=1.235584654 (true sum: 123558.4654)
This is the true result, the exact sum of the operands. It will be rounded to seven digits and then normalized if necessary. The final result is
e=5; s=1.235585 (final sum: 123558.5)
The lowest three digits of the second operand (654) are essentially lost. This is round-off error. In extreme cases, the sum of two non-zero numbers may be equal to one of them:
e=5; s=1.234567
+ e=−3; s=9.876543
e=5; s=1.234567
+ e=5; s=0.00000009876543 (after shifting)
----------------------
e=5; s=1.23456709876543 (true sum)
e=5; s=1.234567 (after rounding and normalization)
In the above conceptual examples it would appear that a large number of extra digits would need to be provided by the adder to ensure correct rounding; however, for binary addition or subtraction using careful implementation techniques only a guard bit, a rounding bit and one extra sticky bit need to be carried beyond the precision of the operands.
Another problem of loss of significance occurs when approximations to two nearly equal numbers are subtracted. In the following example e = 5; s = 1.234571 and e = 5; s = 1.234567 are approximations to the rationals 123457.1467 and 123456.659.
e=5; s=1.234571
− e=5; s=1.234567
----------------
e=5; s=0.000004
e=−1; s=4.000000 (after rounding and normalization)
The floating-point difference is computed exactly because the numbers are close—the Sterbenz lemma guarantees this, even in case of underflow when gradual underflow is supported. Despite this, the difference of the original numbers is e = −1; s = 4.877000, which differs more than 20% from the difference e = −1; s = 4.000000 of the approximations. In extreme cases, all significant digits of precision can be lost. This cancellation illustrates the danger in assuming that all of the digits of a computed result are meaningful. Dealing with the consequences of these errors is a topic in numerical analysis; see also Accuracy problems.
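Both effects, absorption of a small addend and cancellation between nearly equal values, are easy to reproduce with IEEE 754 doubles; this short C sketch is illustrative only and not from the article:

#include <stdio.h>

int main(void) {
    /* Absorption: the small addend is completely lost. */
    double big = 1.0e20, tiny = 1.0;
    printf("%d\n", big + tiny == big);     /* 1 */

    /* Cancellation: subtracting nearly equal numbers leaves mostly
       rounding error; the quotient below is about 1.11 instead of 1. */
    double a = 1.0 + 1e-15, b = 1.0;
    printf("%.17g\n", (a - b) / 1e-15);
    return 0;
}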
Multiplication and division
To multiply, the significands are multiplied while the exponents are added, and the result is rounded and normalized.
e=3; s=4.734612
× e=5; s=5.417242
-----------------------
e=8; s=25.648538980104 (true product)
e=8; s=25.64854 (after rounding)
e=9; s=2.564854 (after normalization)
Similarly, division is accomplished by subtracting the divisor's exponent from the dividend's exponent, and dividing the dividend's significand by the divisor's significand.
There are no cancellation or absorption problems with multiplication or division, though small errors may accumulate as operations are performed in succession. In practice, the way these operations are carried out in digital logic can be quite complex (see Booth's multiplication algorithm and Division algorithm).
Literal syntax
Literals for floating-point numbers depend on languages. They typically use e or E to denote scientific notation. The C programming language and the IEEE 754 standard also define a hexadecimal literal syntax with a base-2 exponent instead of 10. In languages like C, when the decimal exponent is omitted, a decimal point is needed to differentiate them from integers. Other languages do not have an integer type (such as JavaScript), or allow overloading of numeric types (such as Haskell). In these cases, digit strings such as 123 may also be floating-point literals.
Examples of floating-point literals are:
99.9
-5000.12
6.02e23
-3e-45
0x1.fffffep+127 in C and IEEE 754
Dealing with exceptional cases
Floating-point computation in a computer can run into three kinds of problems:
An operation can be mathematically undefined, such as ∞/∞, or division by zero.
An operation can be legal in principle, but not supported by the specific format, for example, calculating the square root of −1 or the inverse sine of 2 (both of which result in complex numbers).
An operation can be legal in principle, but the result can be impossible to represent in the specified format, because the exponent is too large or too small to encode in the exponent field. Such an event is called an overflow (exponent too large), underflow (exponent too small) or denormalization (precision loss).
Prior to the IEEE standard, such conditions usually caused the program to terminate, or triggered some kind of trap that the programmer might be able to catch. How this worked was system-dependent, meaning that floating-point programs were not portable. (The term "exception" as used in IEEE 754 is a general term meaning an exceptional condition, which is not necessarily an error, and is a different usage to that typically defined in programming languages such as C++ or Java, in which an "exception" is an alternative flow of control, closer to what is termed a "trap" in IEEE 754 terminology.)
Here, the required default method of handling exceptions according to IEEE 754 is discussed (the IEEE 754 optional trapping and other "alternate exception handling" modes are not discussed). Arithmetic exceptions are (by default) required to be recorded in "sticky" status flag bits. That they are "sticky" means that they are not reset by the next (arithmetic) operation, but stay set until explicitly reset. The use of "sticky" flags thus allows for testing of exceptional conditions to be delayed until after a full floating-point expression or subroutine: without them exceptional conditions that could not be otherwise ignored would require explicit testing immediately after every floating-point operation. By default, an operation always returns a result according to specification without interrupting computation. For instance, 1/0 returns +∞, while also setting the divide-by-zero flag bit (this default of ∞ is designed to often return a finite result when used in subsequent operations and so be safely ignored).
The original IEEE 754 standard, however, failed to recommend operations to handle such sets of arithmetic exception flag bits. So while these were implemented in hardware, initially programming language implementations typically did not provide a means to access them (apart from assembler). Over time some programming language standards (e.g., C99/C11 and Fortran) have been updated to specify methods to access and change status flag bits. The 2008 version of the IEEE 754 standard now specifies a few operations for accessing and handling the arithmetic flag bits. The programming model is based on a single thread of execution and use of them by multiple threads has to be handled by a means outside of the standard (e.g. C11 specifies that the flags have thread-local storage).
IEEE 754 specifies five arithmetic exceptions that are to be recorded in the status flags ("sticky bits"):
inexact, set if the rounded (and returned) value is different from the mathematically exact result of the operation.
underflow, set if the rounded value is tiny (as specified in IEEE 754) and inexact (or perhaps limited to cases with denormalization loss, as per the 1985 version of IEEE 754), returning a subnormal value including the zeros.
overflow, set if the absolute value of the rounded value is too large to be represented. An infinity or maximal finite value is returned, depending on which rounding is used.
divide-by-zero, set if the result is infinite given finite operands, returning an infinity, either +∞ or −∞.
invalid, set if a finite or infinite result cannot be returned e.g. sqrt(−1) or 0/0, returning a quiet NaN.
The default return value for each of the exceptions is designed to give the correct result in the majority of cases, such that the exceptions can be ignored in the majority of codes. inexact returns a correctly rounded result, and underflow returns a value less than or equal to the smallest positive normal number in magnitude and can almost always be ignored. divide-by-zero returns infinity exactly, which will typically then divide a finite number and so give zero, or else will give an invalid exception subsequently if not, and so can also typically be ignored. For example, the effective resistance of n resistors in parallel is given by R_tot = 1/(1/R_1 + 1/R_2 + ... + 1/R_n). If a short circuit develops with R_1 set to 0, 1/R_1 will return +infinity, which will give a final R_tot of 0, as expected (see the continued fraction example of IEEE 754 design rationale for another example).
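The resistor example can be written out in a few lines of C; under IEEE 754 default exception handling the division by the zero resistance yields +infinity, and the final result is 0 as described (a sketch, not from the article):

#include <stdio.h>

int main(void) {
    /* Three resistors in parallel; r2 = 0 models a short circuit.
       1.0/r2 evaluates to +infinity (setting the divide-by-zero flag),
       so the outer division returns 0. */
    double r1 = 10.0, r2 = 0.0, r3 = 20.0;
    double rtot = 1.0 / (1.0 / r1 + 1.0 / r2 + 1.0 / r3);
    printf("%g\n", rtot);   /* 0 */
    return 0;
}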
Overflow and invalid exceptions can typically not be ignored, but do not necessarily represent errors: for example, a root-finding routine, as part of its normal operation, may evaluate a passed-in function at values outside of its domain, returning NaN and an invalid exception flag to be ignored until finding a useful start point.
Accuracy problems
The fact that floating-point numbers cannot accurately represent all real numbers, and that floating-point operations cannot accurately represent true arithmetic operations, leads to many surprising situations. This is related to the finite precision with which computers generally represent numbers.
For example, the decimal numbers 0.1 and 0.01 cannot be represented exactly as binary floating-point numbers. In the IEEE 754 binary32 format with its 24-bit significand, the result of attempting to square the approximation to 0.1 is neither 0.01 nor the representable number closest to it. The decimal number 0.1 is represented in binary as e = −4; s = 110011001100110011001101, which is 0.100000001490116119384765625 exactly.
Squaring this number gives a result slightly greater than 0.01; squaring it with rounding to the 24-bit precision gives yet another, slightly different value; and neither of these results is the representable number closest to 0.01.
Also, the non-representability of π (and π/2) means that an attempted computation of tan(π/2) will not yield a result of infinity, nor will it even overflow in the usual floating-point formats (assuming an accurate implementation of tan). It is simply not possible for standard floating-point hardware to attempt to compute tan(π/2), because π/2 cannot be represented exactly. This computation in C:
#include <math.h>

/* Enough digits to be sure we get the correct approximation. */
double pi = 3.1415926535897932384626433832795;
double z = tan(pi/2.0);
will give a result of 16331239353195370.0. In single precision (using the tanf function), the result will be −22877332.0.
By the same token, an attempted computation of sin(π) will not yield zero. The result will be (approximately) 0.1225 in double precision, or −0.8742 in single precision.
While floating-point addition and multiplication are both commutative (a + b = b + a and a × b = b × a), they are not necessarily associative. That is, (a + b) + c is not necessarily equal to a + (b + c). Using 7-digit significand decimal arithmetic:
a = 1234.567, b = 45.67834, c = 0.0004
(a + b) + c:
1234.567 (a)
+ 45.67834 (b)
1280.24534 rounds to 1280.245
1280.245 (a + b)
+ 0.0004 (c)
1280.2454 rounds to 1280.245 ← (a + b) + c
a + (b + c):
45.67834 (b)
+ 0.0004 (c)
45.67874
1234.567 (a)
+ 45.67874 (b + c)
1280.24574 rounds to 1280.246 ← a + (b + c)
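The same effect appears with binary doubles; at 1e16 the spacing between consecutive doubles is 2, so the order in which the two 1s are added changes the result. This C sketch is illustrative and not from the article:

#include <stdio.h>

int main(void) {
    double a = 1.0e16, b = 1.0, c = 1.0;
    printf("%.17g\n", (a + b) + c);   /* 10000000000000000 */
    printf("%.17g\n", a + (b + c));   /* 10000000000000002 */
    return 0;
}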
They are also not necessarily distributive. That is, (a + b) × c may not be the same as a × c + b × c:
1234.567 × 3.333333 = 4115.223
1.234567 × 3.333333 = 4.115223
4115.223 + 4.115223 = 4119.338
but
1234.567 + 1.234567 = 1235.802
1235.802 × 3.333333 = 4119.340
In addition to loss of significance, the inability to represent numbers such as π and 0.1 exactly, and other slight inaccuracies, further surprising phenomena can occur.
Incidents
On 25 February 1991, a loss of significance in a MIM-104 Patriot missile battery prevented it from intercepting an incoming Scud missile in Dhahran, Saudi Arabia, contributing to the death of 28 soldiers from the U.S. Army's 14th Quartermaster Detachment. The error was actually introduced by a fixed-point computation, but the underlying issue would have been the same with floating-point arithmetic.
Machine precision and backward error analysis
Machine precision is a quantity that characterizes the accuracy of a floating-point system, and is used in backward error analysis of floating-point algorithms. It is also known as unit roundoff or machine epsilon. Usually denoted ε_mach, its value depends on the particular rounding being used.
With rounding to zero,
ε_mach = B^(1−P),
whereas with rounding to nearest,
ε_mach = ½ × B^(1−P),
where B is the base of the system and P is the precision of the significand (in base B).
This is important since it bounds the relative error in representing any non-zero real number x within the normalized range of a floating-point system:
|fl(x) − x| / |x| ≤ ε_mach,
where fl(x) denotes x rounded to a floating-point number.
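For the IEEE 754 binary formats these quantities are available in C as FLT_EPSILON and DBL_EPSILON, which equal B^(1−P) (twice the round-to-nearest unit roundoff); a minimal sketch, not from the article:

#include <float.h>
#include <stdio.h>

int main(void) {
    printf("float  epsilon = %g\n", FLT_EPSILON);  /* 2^-23, about 1.19209e-07 */
    printf("double epsilon = %g\n", DBL_EPSILON);  /* 2^-52, about 2.22045e-16 */
    return 0;
}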
Backward error analysis, the theory of which was developed and popularized by James H. Wilkinson, can be used to establish that an algorithm implementing a numerical function is numerically stable. The basic approach is to show that although the calculated result, due to roundoff errors, will not be exactly correct, it is the exact solution to a nearby problem with slightly perturbed input data. If the perturbation required is small, on the order of the uncertainty in the input data, then the results are in some sense as accurate as the data "deserves". The algorithm is then defined as backward stable. Stability is a measure of the sensitivity to rounding errors of a given numerical procedure; by contrast, the condition number of a function for a given problem indicates the inherent sensitivity of the function to small perturbations in its input and is independent of the implementation used to solve the problem.
As a trivial example, consider a simple expression giving the inner product of (length two) vectors x and y. Writing fl() for correctly rounded floating-point arithmetic, then
fl(x · y) = fl(fl(x1 × y1) + fl(x2 × y2)) = (x1 y1 (1 + δ1) + x2 y2 (1 + δ2)) (1 + δ3),
and so
fl(x · y) = x̂ · ŷ,
where
x̂1 = x1 (1 + δ1); x̂2 = x2 (1 + δ2); ŷ1 = y1 (1 + δ3); ŷ2 = y2 (1 + δ3),
with each |δi| ≤ ε_mach by definition. The computed result is thus the exact inner product of two slightly perturbed (on the order of ε_mach) input data, and so the computation is backward stable. For more realistic examples in numerical linear algebra, see Higham 2002 and other references below.
Minimizing the effect of accuracy problems
Although individual arithmetic operations of IEEE 754 are guaranteed accurate to within half a ULP, more complicated formulae can suffer from larger errors for a variety of reasons. The loss of accuracy can be substantial if a problem or its data are ill-conditioned, meaning that the correct result is hypersensitive to tiny perturbations in its data. However, even functions that are well-conditioned can suffer from large loss of accuracy if an algorithm numerically unstable for that data is used: apparently equivalent formulations of expressions in a programming language can differ markedly in their numerical stability. One approach to remove the risk of such loss of accuracy is the design and analysis of numerically stable algorithms, which is an aim of the branch of mathematics known as numerical analysis. Another approach that can protect against the risk of numerical instabilities is the computation of intermediate (scratch) values in an algorithm at a higher precision than the final result requires, which can remove, or reduce by orders of magnitude, such risk: IEEE 754 quadruple precision and extended precision are designed for this purpose when computing at double precision.
For example, the following algorithm is a direct implementation to compute the function A(x) = (x − 1) / (exp(x − 1) − 1), which is well-conditioned at 1.0; however, it can be shown to be numerically unstable and to lose up to half the significant digits carried by the arithmetic when computed near 1.0.
#include <math.h> /* for exp() */

double A(double X)
{
        double Y, Z; // [1]
        Y = X - 1.0;
        Z = exp(Y);
        if (Z != 1.0)
                Z = Y / (Z - 1.0); // [2]
        return Z;
}
If, however, intermediate computations are all performed in extended precision (e.g., by declaring Y and Z in line [1] with the C99 long double type), then up to full precision in the final double result can be maintained. Alternatively, a numerical analysis of the algorithm reveals that if the following non-obvious change to line [2] is made:
Z = log(Z) / (Z - 1.0);
then the algorithm becomes numerically stable and can compute to full double precision.
To maintain the properties of such carefully constructed numerically stable programs, careful handling by the compiler is required. Certain "optimizations" that compilers might make (for example, reordering operations) can work against the goals of well-behaved software. There is some controversy about the failings of compilers and language designs in this area: C99 is an example of a language where such optimizations are carefully specified to maintain numerical precision. See the external references at the bottom of this article.
A detailed treatment of the techniques for writing high-quality floating-point software is beyond the scope of this article, and the reader is referred to, and the other references at the bottom of this article. Kahan suggests several rules of thumb that can substantially decrease by orders of magnitude the risk of numerical anomalies, in addition to, or in lieu of, a more careful numerical analysis. These include: as noted above, computing all expressions and intermediate results in the highest precision supported in hardware (a common rule of thumb is to carry twice the precision of the desired result, i.e. compute in double precision for a final single-precision result, or in double extended or quad precision for up to double-precision results); and rounding input data and results to only the precision required and supported by the input data (carrying excess precision in the final result beyond that required and supported by the input data can be misleading, increases storage cost and decreases speed, and the excess bits can affect convergence of numerical procedures: notably, the first form of the iterative example given below converges correctly when using this rule of thumb). Brief descriptions of several additional issues and techniques follow.
As decimal fractions can often not be exactly represented in binary floating-point, such arithmetic is at its best when it is simply being used to measure real-world quantities over a wide range of scales (such as the orbital period of a moon around Saturn or the mass of a proton), and at its worst when it is expected to model the interactions of quantities expressed as decimal strings that are expected to be exact. An example of the latter case is financial calculations. For this reason, financial software tends not to use a binary floating-point number representation. The "decimal" data type of the C# and Python programming languages, and the decimal formats of the IEEE 754-2008 standard, are designed to avoid the problems of binary floating-point representations when applied to human-entered exact decimal values, and make the arithmetic always behave as expected when numbers are printed in decimal.
Expectations from mathematics may not be realized in the field of floating-point computation. For example, identities that hold exactly for real numbers, such as the equivalence of algebraically rearranged expressions, cannot be relied on when the quantities involved are the result of floating-point computation.
The use of the equality test (if (x==y) ...) requires care when dealing with floating-point numbers. Even simple expressions like 0.6/0.2-3==0 will, on most computers, fail to be true (in IEEE 754 double precision, for example, 0.6/0.2 - 3 is approximately equal to -4.44089209850063e-16). Consequently, such tests are sometimes replaced with "fuzzy" comparisons (if (abs(x-y) < epsilon) ..., where epsilon is sufficiently small and tailored to the application, such as 1.0E−13). The wisdom of doing this varies greatly, and can require numerical analysis to bound epsilon. Values derived from the primary data representation and their comparisons should be performed in a wider, extended, precision to minimize the risk of such inconsistencies due to round-off errors. It is often better to organize the code in such a way that such tests are unnecessary. For example, in computational geometry, exact tests of whether a point lies off or on a line or plane defined by other points can be performed using adaptive precision or exact arithmetic methods.
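A typical shape for such a comparison helper is sketched below; the function name nearly_equal and the particular mix of absolute and relative tolerances are illustrative assumptions rather than a prescription, and the tolerances must still be chosen for the application as discussed above:

#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative tolerance-based comparison (hypothetical helper). */
static bool nearly_equal(double x, double y, double abs_tol, double rel_tol) {
    double diff = fabs(x - y);
    if (diff <= abs_tol)
        return true;
    return diff <= rel_tol * fmax(fabs(x), fabs(y));
}

int main(void) {
    printf("%d\n", 0.6 / 0.2 - 3.0 == 0.0);                    /* 0 on most machines */
    printf("%d\n", nearly_equal(0.6 / 0.2, 3.0, 0.0, 1e-12));  /* 1 */
    return 0;
}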
Small errors in floating-point arithmetic can grow when mathematical algorithms perform operations an enormous number of times. A few examples are matrix inversion, eigenvector computation, and differential equation solving. These algorithms must be very carefully designed, using numerical approaches such as iterative refinement, if they are to work well.
Summation of a vector of floating-point values is a basic algorithm in scientific computing, and so an awareness of when loss of significance can occur is essential. For example, if one is adding a very large number of numbers, the individual addends are very small compared with the sum. This can lead to loss of significance. A typical addition would then be something like
3253.671
+ 3.141276
-----------
3256.812
The low 3 digits of the addends are effectively lost. Suppose, for example, that one needs to add many numbers, all approximately equal to 3. After 1000 of them have been added, the running sum is about 3000; the lost digits are not regained. The Kahan summation algorithm may be used to reduce the errors.
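A compensated summation loop keeps a running correction term for the low-order digits that plain summation discards; the sketch below (the function name kahan_sum is illustrative) assumes IEEE 754 doubles and a compiler that does not reassociate floating-point operations, since "fast math" style optimizations can remove the compensation entirely:

#include <stdio.h>

/* Kahan (compensated) summation: c accumulates the rounding error of each step. */
static double kahan_sum(const double *x, int n) {
    double sum = 0.0, c = 0.0;
    for (int i = 0; i < n; i++) {
        double y = x[i] - c;   /* apply the previous correction to the addend */
        double t = sum + y;    /* low-order digits of y are lost here... */
        c = (t - sum) - y;     /* ...and recovered here as the new correction */
        sum = t;
    }
    return sum;
}

int main(void) {
    double xs[1000];
    double naive = 0.0;
    for (int i = 0; i < 1000; i++) xs[i] = 3.141276;
    for (int i = 0; i < 1000; i++) naive += xs[i];
    printf("naive: %.12f\n", naive);               /* accumulates round-off */
    printf("kahan: %.12f\n", kahan_sum(xs, 1000)); /* typically closer to 1000 * 3.141276 */
    return 0;
}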
Round-off error can affect the convergence and accuracy of iterative numerical procedures. As an example, Archimedes approximated π by calculating the perimeters of polygons inscribing and circumscribing a circle, starting with hexagons, and successively doubling the number of sides. As noted above, computations may be rearranged in a way that is mathematically equivalent but less prone to error (numerical analysis). Two forms of the recurrence formula for the circumscribed polygon are:
First form: t_{i+1} = (√(t_i² + 1) − 1) / t_i
Second form: t_{i+1} = t_i / (√(t_i² + 1) + 1)
starting from t_0 = 1/√3 (a circumscribed hexagon), with 6 × 2^i × t_i converging to π as i increases.
Here is a computation using IEEE "double" (a significand with 53 bits of precision) arithmetic:
 i    6 × 2^i × t_i, first form     6 × 2^i × t_i, second form
 ---------------------------------------------------------------
  0   3.4641016151377543863         3.4641016151377543863
  1   3.2153903091734710173         3.2153903091734723496
  2   3.1596599420974940120         3.1596599420975006733
  3   3.1460862151314012979         3.1460862151314352708
  4   3.1427145996453136334         3.1427145996453689225
  5   3.1418730499801259536         3.1418730499798241950
  6   3.1416627470548084133         3.1416627470568494473
  7   3.1416101765997805905         3.1416101766046906629
  8   3.1415970343230776862         3.1415970343215275928
  9   3.1415937488171150615         3.1415937487713536668
 10   3.1415929278733740748         3.1415929273850979885
 11   3.1415927256228504127         3.1415927220386148377
 12   3.1415926717412858693         3.1415926707019992125
 13   3.1415926189011456060         3.1415926578678454728
 14   3.1415926717412858693         3.1415926546593073709
 15   3.1415919358822321783         3.1415926538571730119
 16   3.1415926717412858693         3.1415926536566394222
 17   3.1415810075796233302         3.1415926536065061913
 18   3.1415926717412858693         3.1415926535939728836
 19   3.1414061547378810956         3.1415926535908393901
 20   3.1405434924008406305         3.1415926535900560168
 21   3.1400068646912273617         3.1415926535898608396
 22   3.1349453756585929919         3.1415926535898122118
 23   3.1400068646912273617         3.1415926535897995552
 24   3.2245152435345525443         3.1415926535897968907
 25                                 3.1415926535897962246
 26                                 3.1415926535897962246
 27                                 3.1415926535897962246
 28                                 3.1415926535897962246
The true value is π = 3.14159265358979323846...
While the two forms of the recurrence formula are clearly mathematically equivalent, the first subtracts 1 from a number extremely close to 1, leading to an increasingly problematic loss of significant digits. As the recurrence is applied repeatedly, the accuracy improves at first, but then it deteriorates. It never gets better than about 8 digits, even though 53-bit arithmetic should be capable of about 16 digits of precision. When the second form of the recurrence is used, the value converges to 15 digits of precision.
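A minimal Python sketch of the two recurrences (Python floats are IEEE 754 doubles; the function and variable names are illustrative):
import math

def circumscribed_pi(steps, stable):
    t = 1.0 / math.sqrt(3.0)                          # tan(pi/6): circumscribed hexagon
    sides = 6
    for _ in range(steps):
        if stable:
            t = t / (math.sqrt(t * t + 1.0) + 1.0)    # second form: avoids cancellation
        else:
            t = (math.sqrt(t * t + 1.0) - 1.0) / t    # first form: subtracts 1 from a value near 1
        sides *= 2
    return sides * t

for i in (5, 10, 15, 20):
    print(i, circumscribed_pi(i, stable=False), circumscribed_pi(i, stable=True))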
"Fast math" optimization
The aforementioned lack of associativity of floating-point operations in general means that compilers cannot as effectively reorder arithmetic expressions as they could with integer and fixed-point arithmetic, presenting a roadblock in optimizations such as common subexpression elimination and auto-vectorization. The "fast math" option on many compilers (ICC, GCC, Clang, MSVC...) turns on reassociation along with unsafe assumptions such as a lack of NaN and infinite numbers in IEEE 754. Some compilers also offer more granular options to only turn on reassociation. In either case, the programmer is exposed to many of the precision pitfalls mentioned above for the portion of the program using "fast" math.
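The root cause is the non-associativity itself, which can be seen in a few lines of Python; reassociating the additions, as a "fast math" compiler is allowed to do, changes the rounded result (the same effect applies to any IEEE 754 binary floating-point arithmetic):
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)                   # 0.6000000000000001
print(a + (b + c))                   # 0.6
print((a + b) + c == a + (b + c))    # False: floating-point addition is not associative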
In some compilers (GCC and Clang), turning on "fast" math may cause the program to disable subnormal floats at startup, affecting the floating-point behavior of not only the generated code, but also any program using such code as a library.
In most Fortran compilers, as allowed by the ISO/IEC 1539-1:2004 Fortran standard, reassociation is the default, with breakage largely prevented by the "protect parens" setting (also on by default). This setting stops the compiler from reassociating beyond the boundaries of parentheses. Intel Fortran Compiler is a notable outlier.
A common problem in "fast" math is that subexpressions may not be optimized identically from place to place, leading to unexpected differences. One interpretation of the issue is that "fast" math as implemented currently has a poorly defined semantics. One attempt at formalizing "fast" math optimizations is seen in Icing, a verified compiler.
| Technology | Computer science | null |
11432 | https://en.wikipedia.org/wiki/Full%20moon | Full moon | The full moon is the lunar phase when the Moon appears fully illuminated from Earth's perspective. This occurs when Earth is located between the Sun and the Moon (when the ecliptic longitudes of the Sun and Moon differ by 180°). This means that the lunar hemisphere facing Earth—the near side—is completely sunlit and appears as an approximately circular disk. The full moon occurs roughly once a month.
The time interval between a full moon and the next repetition of the same phase, a synodic month, averages about 29.53 days. Because of irregularities in the moon's orbit, the new and full moons may fall up to thirteen hours either side of their mean. If the calendar date is not locally determined through observation of the new moon at the beginning of the month there is the potential for a further twelve hours difference depending on the time zone. Potential discrepancies also arise from whether the calendar day is considered to begin in the evening or at midnight. It is normal for the full moon to fall on the fourteenth or the fifteenth of the month according to whether the start of the month is reckoned from the appearance of the new moon or from the conjunction. A tabular lunar calendar will also exhibit variations depending on the intercalation system used. Because a calendar month consists of a whole number of days, a month in a lunar calendar may be either 29 or 30 days long.
Characteristics
A full moon is often thought of as an event of a full night's duration, although its phase seen from Earth continuously waxes or wanes, and is full only at the instant when waxing ends and waning begins. For any given location, about half of these maximum full moons may be visible, while the other half occurs during the day, when the full moon is below the horizon. As the Moon's orbit is inclined by 5.145° from the ecliptic, it is not generally perfectly opposite from the Sun during full phase, therefore a full moon is in general not perfectly full except on nights with a lunar eclipse as the Moon crosses the ecliptic at opposition from the Sun.
Many almanacs list full moons not only by date, but also by their exact time, usually in Coordinated Universal Time (UTC). Typical monthly calendars that include lunar phases may be offset by one day when prepared for a different time zone.
The full moon is generally a suboptimal time for astronomical observation of the Moon because shadows vanish. It is a poor time for other observations because the bright sunlight reflected by the Moon, amplified by the opposition surge, then outshines many stars.
Moon phases
There are eight phases of the moon, ranging from no illumination to full illumination; they are also called lunar phases. These stages have different names that come from the Moon's apparent shape and size at each phase. For example, the crescent moon is 'banana' shaped, and the half-moon is D-shaped. When the moon is nearly full, it is called a gibbous moon. The crescent and gibbous moons each last approximately a week.
Each phase is also described according to its position in the full 29.5-day cycle. The eight phases of the moon, in order:
new moon
waxing crescent moon
first quarter moon
waxing gibbous moon
full moon
waning gibbous moon
last quarter moon
waning crescent moon
Formula
The date and approximate time of a specific full moon (assuming a circular orbit) can be calculated from a mean-motion formula, where d is the number of days since 1 January 2000 00:00:00 in the Terrestrial Time scale used in astronomical ephemerides; for Universal Time (UT), a small approximate correction, amounting to a fraction of a day, must be added to d,
where N is the number of full moons since the first full moon of 2000. The true time of a full moon may differ from this approximation by up to about 14.5 hours as a result of the non-circularity of the Moon's orbit. See New moon for an explanation of the formula and its parameters.
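A rough Python sketch of the same idea, using only the mean synodic month; the reference epoch below (the full moon of 21 January 2000, taken here as about 04:44 UTC) is an assumed value for illustration, and actual full moons can deviate from such a mean-motion estimate by many hours:
from datetime import datetime, timedelta, timezone

SYNODIC_MONTH_DAYS = 29.530588                                             # mean synodic month in days
REFERENCE_FULL_MOON = datetime(2000, 1, 21, 4, 44, tzinfo=timezone.utc)    # assumed reference epoch

def approximate_full_moon(n):
    # Mean-motion estimate of the Nth full moon after the reference; ignores orbital eccentricity.
    return REFERENCE_FULL_MOON + timedelta(days=n * SYNODIC_MONTH_DAYS)

print(approximate_full_moon(12))                                           # roughly one year after the reference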
The age and apparent size of the full moon vary in a cycle of just under 14 synodic months, which has been referred to as a full moon cycle.
Lunar eclipses
When the Moon moves into Earth's shadow, a lunar eclipse occurs, during which all or part of the Moon's face may appear reddish due to the Rayleigh scattering of blue wavelengths and the refraction of sunlight through Earth's atmosphere. Lunar eclipses happen only during a full moon and around points on its orbit where the satellite may pass through the planet's shadow. A lunar eclipse does not occur every month because the Moon's orbit is inclined 5.145° with respect to the ecliptic plane of Earth; thus, the Moon usually passes north or south of Earth's shadow, which is mostly restricted to this plane of reference. Lunar eclipses happen only when the full moon occurs around either node of its orbit (ascending or descending). Therefore, a lunar eclipse occurs about every six months, and often two weeks before or after a solar eclipse, which occurs during a new moon around the opposite node.
In folklore and tradition
In Buddhism, Vesak is celebrated on the full moon day of the Vaisakha month, marking the birth, enlightenment, and the death of the Buddha.
In Arabic, badr (بدر ) means 'full moon', but it is often translated as 'white moon', referring to The White Days, the three days when the full moon is celebrated.
Full moons are traditionally associated with insomnia (inability to sleep), insanity (hence the terms lunacy and lunatic) and various "magical phenomena" such as lycanthropy. Psychologists, however, have found that there is no strong evidence for effects on human behavior around the time of a full moon. They find that studies are generally not consistent, with some showing a positive effect and others showing a negative effect. In one instance, the 23 December 2000 issue of the British Medical Journal published two studies on dog bite admission to hospitals in England and Australia. The study of the Bradford Royal Infirmary found that dog bites were twice as common during a full moon, whereas the study conducted by the public hospitals in Australia found that they were less likely.
The symbol of the Triple Goddess is drawn with the circular image of the full moon in the center flanked by a left facing crescent and right facing crescent, on either side, representing a maiden, mother and crone archetype.
Full moon names
Historically, month names are names of moons (lunations, not necessarily full moons) in lunisolar calendars. Since the introduction of the solar Julian calendar in the Roman Empire, and later the Gregorian calendar worldwide, people no longer perceive month names as "moon" names. The traditional Old English month names were equated with the names of the Julian calendar from an early time, soon after Christianization, according to the testimony of Bede around AD 700.
Some full moons have developed new names in modern times, such as "blue moon", as well as "harvest moon" and "hunter's moon" for the full moons of autumn.
The golden or reddish hue of the Harvest Moon and other full moons near the horizon is caused by atmospheric scattering. When the Moon is low in the sky, its light passes through a thicker layer of Earth's atmosphere, scattering shorter wavelengths like blue and violet and allowing longer wavelengths, such as red and yellow, to dominate. This effect, combined with environmental factors such as dust, pollutants, or haze, can intensify or dull the Moon's color. Clear skies often enhance the yellow or golden appearance, particularly during the autumn months when these full moons are observed.
Lunar eclipses occur only at a full moon and often cause a reddish hue on the near side of the Moon. This full moon has been called a blood moon in popular culture.
Harvest and hunter's moons
The "harvest moon" and the "hunter's moon" are traditional names for the full moons in late summer and in the autumn in the Northern Hemisphere, usually in September and October, respectively. People may celebrate these occurrences in festivities such as the Chinese Mid-Autumn Festival.
The "harvest moon" (also known as the "barley moon" or "full corn moon") is the full moon nearest to the autumnal equinox (22 or 23 September), occurring anytime within two weeks before or after that date. The "hunter's moon" is the full moon following it. The names are recorded from the early 18th century. The Oxford English Dictionary entry for "harvest moon" cites a 1706 reference, and for "hunter's moon" a 1710 edition of The British Apollo, which attributes the term to "the country people" ("The Country People call this the Hunters-Moon.") The names became traditional in American folklore, where they are now often popularly attributed to Native Americans. The Feast of the Hunters' Moon is a yearly festival in West Lafayette, Indiana, held in late September or early October each year since 1968. In 2010 the harvest moon occurred on the night of the equinox itself (some 5 hours after the moment of equinox) for the first time since 1991, after a period known as the Metonic cycle.
All full moons rise around the time of sunset. Since the Moon moves eastward among the stars faster than the Sun, lunar culmination is delayed by about 50.47 minutes (on average) each day, thus causing moonrise to occur later each day.
Due to the high lunar standstill, the harvest and hunter's moons of 2007 were special because the time difference between moonrises on successive evenings was much shorter than average. The moon rose about 30 minutes later from one night to the next, as seen from about 40° N or S latitude (because the full moon of September 2007 rose in the northeast rather than in the east). Hence, no long period of darkness occurred between sunset and moonrise for several days after the full moon, thus lengthening the time in the evening when there is enough twilight and moonlight to work to get the harvest in.
Native American
Various 18th and 19th century writers gave what were claimed to be Native American or First Nations moon names. These were not the names of the full moons as such, but were the names of lunar months beginning with each new moon. According to Jonathan Carver in 1778, "Some nations among them reckon their years by moons, and make them consist of twelve synodical or lunar months, observing, when thirty moons have waned, to add a supernumerary one, which they term the lost moon; and then begin to count as before." Carver gave the names of the lunar months (starting from the first after the March equinox) as Worm, Plants, Flowers, Hot, Buck, Sturgeon, Corn, Travelling, Beaver, Hunting, Cold, Snow. Carver's account was reproduced verbatim in Events in Indian History (1841), but completely different lists were given by Eugene Vetromile (1856) and Peter Jones (1861).
In a book on Native American culture published in 1882, Richard Irving Dodge stated:
There is a difference among authorities as to whether or not the moons themselves are named. Brown gives names for nine moons corresponding to months. Maximillian gives the names of twelve moons; and Belden, who lived many years among the Sioux, asserts that "the Indians compute their time very much as white men do, only they use moons instead of months to designate the seasons, each answering to some month in our calendar." Then follows a list of twelve moons with Indian and English names. While I cannot contradict so positive and minute a statement of one so thoroughly in a position to know, I must assert with equal positiveness that I have never met any wild Indians, of the Sioux or other Plains tribes, who had a permanent, common, conventional name for any moon. The looseness of Belden's general statement, that "Indians compute time like white people," when his only particularization of similarity is between the months and moons, is in itself sufficient to render the whole statement questionable.
My experience is that the Indian, in attempting to fix on a particular moon, will designate it by some natural and well-known phenomenon which culminates during that moon. But two Indians of the same tribe may fix on different designations; and even the same Indian, on different occasions, may give different names to the same moon. Thus, an Indian of the middle Plains will to-day designate a spring moon as "the moon when corn is planted;" to-morrow, speaking of the same moon, he may call it "the moon when the buffalo comes." Moreover, though there are thirteen moons in our year, no observer has ever given an Indian name to the thirteenth. My opinion is, that if any of the wild tribes have given conventional names to twelve moons, it is not an indigenous idea, but borrowed from the whites.
Jonathan Carver's list of purportedly Native American month names was adopted in the 19th century by the Improved Order of Red Men, an all-white U.S. fraternal organization. They called the month of January "Cold moon", the rest being Snow, Worm, Plant, Flower, Hot, Buck, Sturgeon, Corn, Travelling, Beaver and Hunting moon. They numbered years from the time of Columbus's arrival in America.
In The American Boy's Book of Signs, Signals and Symbols (1918), Daniel Carter Beard wrote: "The Indians' Moons naturally vary in the different parts of the country, but by comparing them all and striking an average as near as may be, the moons are reduced to the following." He then gave a list that had two names for each lunar month, again quite different from earlier lists that had been published.
The 1937 Maine Farmers' Almanac published a list of full moon names that it said "were named by our early English ancestors as follows":
It also mentioned blue moon. These were considered in some quarters to be Native American full moon names, and some were adopted by colonial Americans. The Farmers' Almanac (since 1955 published in Maine, but not the same publication as the Maine Farmers' Almanac) continues to print such names.
Such names have gained currency in American folklore. They appeared in print more widely outside of the almanac tradition from the 1990s in popular publications about the Moon.
Mysteries of the Moon by Patricia Haddock ("Great Mysteries Series", Greenhaven Press, 1992) gave an extensive list of such names along with the individual tribal groups they were supposedly associated with. Haddock supposes that certain "Colonial American" moon names were adopted from Algonquian languages (which were formerly spoken in the territory of New England), while others are based in European tradition (e.g. the Colonial American names for the May moon, "Milk Moon", "Mother's Moon", "Hare Moon" have no parallels in the supposed native names, while the name of November, "Beaver Moon" is supposedly based in an Algonquian language). Many other names have been reported.
These have passed into modern mythology, either as full-moon names, or as names for lunar months. Deanna J. Conway's Moon Magick: Myth & Magick, Crafts & Recipes, Rituals & Spells (1995) gave as headline names for the lunar months (from January): Wolf, Ice, Storm, Growing, Hare, Mead, Hay, Corn, Harvest, Blood, Snow, Cold. Conway also gave multiple alternative names for each month, e.g. the first lunar month after the winter solstice could be called the Wolf, Quiet, Snow, Cold, Chaste or Disting Moon, or the Moon of Little Winter. For the last lunar month Conway offered the names Cold, Oak or Wolf Moon, or Moon of Long Nights, Long Night's Moon, Aerra Geola (Month Before Yule), Wintermonat (Winter Month), Heilagmanoth (Holy Month), Big Winter Moon, Moon of Popping Trees. Conway did not cite specific sources for most of the names she listed, but some have gained wider currency as full-moon names, such as Pink Moon for a full moon in April, Long Night's Moon for the last in December and Ice Moon for the first full moon of January or February.
Hindu full moon festivals
In Hinduism, most festivals are celebrated on auspicious days. Many Hindu festivals are celebrated on days with a full moon night, called the purnima. Different parts of India celebrate the same festival with different names, as listed below:
Chaitra Purnima – Gudi Padua, Ugadi, Hanuman Jayanti (15 April 2014)
Vaishakha Purnima – Narasimha Jayanti, Buddha Jayanti (14 May 2014)
Jyeshtha Purnima – Savitri Vrata, Vat Purnima (8 June 2014)
Ashadha Purnima – Guru Purnima, Vyasa Purnima
Shravana Purnima – Upanayana ceremony, Avani Avittam, Raksha Bandhan, Onam
Bhadrapada Purnima – Start of Pitru Paksha, Madhu Purnima
Ashvin Purnima – Sharad Purnima
Kartika Purnima – Karthikai Deepam, Thrukkarthika
Margashirsha Purnima – Thiruvathira, Dattatreya Jayanti
Pushya Purnima – Thaipusam, Shakambhari Purnima
Magha Purnima
Phalguna Purnima – Holi
Lunar and lunisolar calendars
Most pre-modern calendars the world over were lunisolar, combining the solar year with the lunation by means of intercalary months. The Julian calendar abandoned this method in favour of a purely solar reckoning while conversely the 7th-century Islamic calendar opted for a purely lunar one.
A continuing lunisolar calendar is the Hebrew calendar. Evidence of this is noted in the dates of Passover and Easter in Judaism and Christianity, respectively. Passover falls on the full moon on 15 Nisan of the Hebrew calendar. The date of the Jewish Rosh Hashana and Sukkot festivals along with all other Jewish holidays are dependent on the dates of the new moons.
Intercalary months
In lunisolar calendars, an intercalary month occurs seven times in the 19 years of the Metonic cycle, or on average every 2.7 years (19/7). In the Hebrew calendar this is noted with a periodic extra month of Adar in the early spring.
Meetings arranged to coincide with full moon
Before the days of good street lighting and car headlights, several organisations arranged their meetings for full moon, so that it would be easier for their members to walk, or ride home. Examples include the Lunar Society of Birmingham, several Masonic societies, including Warren Lodge No. 32, USA and Masonic Hall, York, Western Australia, and several New Zealand local authorities, including Awakino, Ohura and Whangarei County Councils and Maori Hill and Wanganui East Borough Councils.
| Physical sciences | Celestial mechanics | Astronomy |
11439 | https://en.wikipedia.org/wiki/Faster-than-light | Faster-than-light | Faster-than-light (superluminal or supercausal) travel and communication are the conjectural propagation of matter or information faster than the speed of light in vacuum (c). The special theory of relativity implies that only particles with zero rest mass (i.e., photons) may travel at the speed of light, and that nothing may travel faster.
Particles whose speed exceeds that of light (tachyons) have been hypothesized, but their existence would violate causality and would imply time travel. The scientific consensus is that they do not exist.
According to all observations and current scientific theories, matter travels at slower-than-light (subluminal) speed with respect to the locally distorted spacetime region. Speculative faster-than-light concepts include the Alcubierre drive, Krasnikov tubes, traversable wormholes, and quantum tunneling. Some of these proposals find loopholes around general relativity, such as by expanding or contracting space to make the object appear to be travelling greater than c. Such proposals are still widely believed to be impossible as they still violate current understandings of causality, and they all require fanciful mechanisms to work (such as requiring exotic matter).
Superluminal travel of non-information
In the context of this article, "faster-than-light" means the transmission of information or matter faster than c, a constant equal to the speed of light in vacuum, which is 299,792,458 m/s (by definition of the metre) or about 186,282.397 miles per second. This is not quite the same as traveling faster than light, since:
Some processes propagate faster than c, but cannot carry information (see examples in the sections immediately following).
In some materials where light travels at speed c/n (where n is the refractive index) other particles can travel faster than c/n (but still slower than c), leading to Cherenkov radiation (see phase velocity below).
Neither of these phenomena violates special relativity or creates problems with causality, and thus neither qualifies as faster-than-light as described here.
In the following examples, certain influences may appear to travel faster than light, but they do not convey energy or information faster than light, so they do not violate special relativity.
Daily sky motion
For an Earth-bound observer, objects in the sky complete one revolution around the Earth in one day. Proxima Centauri, the nearest star outside the Solar System, is about 4.25 light-years away. In this frame of reference, in which Proxima Centauri is perceived to be moving in a circular trajectory with a radius of over four light-years, it could be described as having a speed many times greater than c, as the rim speed of an object moving in a circle is a product of the radius and angular speed. It is also possible on a geostatic view, for objects such as comets to vary their speed from subluminal to superluminal and vice versa simply because the distance from the Earth varies. Comets may have orbits which take them out to more than 1000 AU. The circumference of a circle with a radius of 1000 AU is greater than one light day. In other words, a comet at such a distance is superluminal in a geostatic, and therefore non-inertial, frame.
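A back-of-the-envelope Python sketch of that rim speed, taking approximately 4.25 light-years for Proxima Centauri and one revolution per day (both figures approximate):
import math

radius_ly = 4.25                                   # approximate distance to Proxima Centauri
circumference_ly = 2 * math.pi * radius_ly         # light-years covered in one revolution (one day)
speed_in_units_of_c = circumference_ly * 365.25    # light-years per year, i.e. multiples of c
print(round(speed_in_units_of_c))                  # on the order of 10,000 times c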
Light spots and shadows
If a laser beam is swept across a distant object, the spot of laser light can seem to move across the object at a speed greater than c. Similarly, a shadow projected onto a distant object seems to move across the object faster than c. In neither case does the light travel from the source to the object faster than c, nor does any information travel faster than light. No object is moving in these examples. For comparison, consider water squirting out of a garden hose as it is swung side to side: water does not instantly follow the direction of the hose.
Closing speeds
The rate at which two objects in motion in a single frame of reference get closer together is called the mutual or closing speed. This may approach twice the speed of light, as in the case of two particles travelling at close to the speed of light in opposite directions with respect to the reference frame.
Imagine two fast-moving particles approaching each other from opposite sides of a particle accelerator of the collider type. The closing speed would be the rate at which the distance between the two particles is decreasing. From the point of view of an observer standing at rest relative to the accelerator, this rate will be slightly less than twice the speed of light.
Special relativity does not prohibit this. It tells us that it is wrong to use Galilean relativity to compute the velocity of one of the particles, as would be measured by an observer traveling alongside the other particle. That is, special relativity gives the correct velocity-addition formula for computing such relative velocity.
It is instructive to compute the relative velocity of particles moving at v and −v in the accelerator frame, which corresponds to the closing speed of 2v > c. Expressing the speeds in units of c, β = v/c, the relativistic velocity-addition formula gives a relative velocity of 2β/(1 + β²), which is always less than 1; that is, the speed of one particle as measured in the rest frame of the other never exceeds c.
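A short Python sketch of the velocity-addition formula (speeds given as fractions of c; the value 0.9 is just an example):
def relativistic_relative_speed(beta1, beta2):
    # Speed of one particle as seen from the other, for two particles moving
    # in opposite directions at beta1 and beta2 in the laboratory frame.
    return (beta1 + beta2) / (1.0 + beta1 * beta2)

beta = 0.9
print(2 * beta)                                 # closing speed in the lab frame: 1.8 c
print(relativistic_relative_speed(beta, beta))  # about 0.9945 c, still below the speed of light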
Proper speeds
If a spaceship travels to a planet one light-year (as measured in the Earth's rest frame) away from Earth at high speed, the time taken to reach that planet could be less than one year as measured by the traveller's clock (although it will always be more than one year as measured by a clock on Earth). The value obtained by dividing the distance traveled, as determined in the Earth's frame, by the time taken, measured by the traveller's clock, is known as a proper speed or a proper velocity. There is no limit on the value of a proper speed as a proper speed does not represent a speed measured in a single inertial frame. A light signal that left the Earth at the same time as the traveller would always get to the destination before the traveller would.
Phase velocities above c
The phase velocity of an electromagnetic wave, when traveling through a medium, can routinely exceed c, the vacuum velocity of light. For example, this occurs in most glasses at X-ray frequencies. However, the phase velocity of a wave corresponds to the propagation speed of a theoretical single-frequency (purely monochromatic) component of the wave at that frequency. Such a wave component must be infinite in extent and of constant amplitude (otherwise it is not truly monochromatic), and so cannot convey any information.
Thus a phase velocity above c does not imply the propagation of signals with a velocity above c.
Group velocities above c
The group velocity of a wave may also exceed c in some circumstances. In such cases, which typically at the same time involve rapid attenuation of the intensity, the maximum of the envelope of a pulse may travel with a velocity above c. However, even this situation does not imply the propagation of signals with a velocity above c, even though one may be tempted to associate pulse maxima with signals. The latter association has been shown to be misleading, because the information on the arrival of a pulse can be obtained before the pulse maximum arrives. For example, if some mechanism allows the full transmission of the leading part of a pulse while strongly attenuating the pulse maximum and everything behind (distortion), the pulse maximum is effectively shifted forward in time, while the information on the pulse does not come faster than c without this effect. However, group velocity can exceed c in some parts of a Gaussian beam in vacuum (without attenuation). The diffraction causes the peak of the pulse to propagate faster, while overall power does not.
Cosmic expansion
According to Hubble's law, the expansion of the universe causes distant galaxies to appear to recede from us faster than the speed of light. However, the recession speed associated with Hubble's law, defined as the rate of increase in proper distance per interval of cosmological time, is not a velocity in a relativistic sense. Moreover, in general relativity, velocity is a local notion, and there is not even a unique definition for the relative velocity of a cosmologically distant object. Faster-than-light cosmological recession speeds are entirely a coordinate effect.
There are many galaxies visible in telescopes with redshift numbers of 1.4 or higher. All of these have cosmological recession speeds greater than the speed of light. Because the Hubble parameter is decreasing with time, there can actually be cases where a galaxy that is receding from us faster than light does manage to emit a signal which reaches us eventually.
However, because the expansion of the universe is accelerating, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future, because the light never reaches a point where its "peculiar velocity" towards us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in ). The current distance to this cosmological event horizon is about 16 billion light-years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event was less than 16 billion light-years away, but the signal would never reach us if the event was more than 16 billion light-years away.
Astronomical observations
Apparent superluminal motion is observed in many radio galaxies, blazars, quasars, and recently also in microquasars. The effect was predicted before it was observed by Martin Rees and can be explained as an optical illusion caused by the object partly moving in the direction of the observer, when the speed calculations assume it does not. The phenomenon does not contradict the theory of special relativity. Corrected calculations show these objects have velocities close to the speed of light (relative to our reference frame). They are the first examples of large amounts of mass moving at close to the speed of light. Earth-bound laboratories have only been able to accelerate small numbers of elementary particles to such speeds.
Quantum mechanics
Certain phenomena in quantum mechanics, such as quantum entanglement, might give the superficial impression of allowing communication of information faster than light. According to the no-communication theorem these phenomena do not allow true communication; they only let two observers in different locations see the same system simultaneously, without any way of controlling what either sees. Wavefunction collapse can be viewed as an epiphenomenon of quantum decoherence, which in turn is nothing more than an effect of the underlying local time evolution of the wavefunction of a system and all of its environment. Since the underlying behavior does not violate local causality or allow FTL communication, it follows that neither does the additional effect of wavefunction collapse, whether real or apparent.
The uncertainty principle implies that individual photons may travel for short distances at speeds somewhat faster (or slower) than c, even in vacuum; this possibility must be taken into account when enumerating Feynman diagrams for a particle interaction. However, it was shown in 2011 that a single photon may not travel faster than c.
There have been various reports in the popular press of experiments on faster-than-light transmission in optics — most often in the context of a kind of quantum tunnelling phenomenon. Usually, such reports deal with a phase velocity or group velocity faster than the vacuum velocity of light. However, as stated above, a superluminal phase velocity cannot be used for faster-than-light transmission of information.
Hartman effect
The Hartman effect is the tunneling effect through a barrier where the tunneling time tends to a constant for large barriers. This could, for instance, be the gap between two prisms. When the prisms are in contact, the light passes straight through, but when there is a gap, the light is refracted. There is a non-zero probability that the photon will tunnel across the gap rather than follow the refracted path.
However, it has been claimed that the Hartman effect cannot actually be used to violate relativity by transmitting signals faster than c, also because the tunnelling time "should not be linked to a velocity since evanescent waves do not propagate". The evanescent waves in the Hartman effect are due to virtual particles and a non-propagating static field, as mentioned in the sections above for gravity and electromagnetism.
Casimir effect
In physics, the Casimir–Polder force is a physical force exerted between separate objects due to resonance of vacuum energy in the intervening space between the objects. This is sometimes described in terms of virtual particles interacting with the objects, owing to the mathematical form of one possible way of calculating the strength of the effect. Because the strength of the force falls off rapidly with distance, it is only measurable when the distance between the objects is extremely small. Because the effect is due to virtual particles mediating a static field effect, it is subject to the comments about static fields discussed above.
EPR paradox
The EPR paradox refers to a famous thought experiment of Albert Einstein, Boris Podolsky and Nathan Rosen that was realized experimentally for the first time by Alain Aspect in 1981 and 1982 in the Aspect experiment. In this experiment, the two measurements of an entangled state are correlated even when the measurements are distant from the source and each other. However, no information can be transmitted this way; the answer to whether or not the measurement actually affects the other quantum system comes down to which interpretation of quantum mechanics one subscribes to.
An experiment performed in 1997 by Nicolas Gisin has demonstrated quantum correlations between particles separated by over 10 kilometers. But as noted earlier, the non-local correlations seen in entanglement cannot actually be used to transmit classical information faster than light, so that relativistic causality is preserved. The situation is akin to sharing a synchronized coin flip, where the second person to flip their coin will always see the opposite of what the first person sees, but neither has any way of knowing whether they were the first or second flipper, without communicating classically. See No-communication theorem for further information. A 2008 quantum physics experiment also performed by Nicolas Gisin and his colleagues has determined that in any hypothetical non-local hidden-variable theory, the speed of the quantum non-local connection (what Einstein called "spooky action at a distance") is at least 10,000 times the speed of light.
Delayed choice quantum eraser
The delayed-choice quantum eraser is a version of the EPR paradox in which the observation (or not) of interference after the passage of a photon through a double slit experiment depends on the conditions of observation of a second photon entangled with the first. The characteristic of this experiment is that the observation of the second photon can take place at a later time than the observation of the first photon, which may give the impression that the measurement of the later photons "retroactively" determines whether the earlier photons show interference or not, although the interference pattern can only be seen by correlating the measurements of both members of every pair and so it cannot be observed until both photons have been measured, ensuring that an experimenter watching only the photons going through the slit does not obtain information about the other photons in a faster-than-light or backwards-in-time manner.
Superluminal communication
Faster-than-light communication is, according to relativity, equivalent to time travel. What we measure as the speed of light in vacuum (or near vacuum) is actually the fundamental physical constant c. This means that all inertial and, for the coordinate speed of light, non-inertial observers, regardless of their relative velocity, will always measure zero-mass particles such as photons traveling at c in vacuum. This result means that measurements of time and velocity in different frames are no longer related simply by constant shifts, but are instead related by Poincaré transformations. These transformations have important implications:
The relativistic momentum of a massive particle would increase with speed in such a way that at the speed of light an object would have infinite momentum (see the sketch following this list).
To accelerate an object of non-zero rest mass to c would require infinite time with any finite acceleration, or infinite acceleration for a finite amount of time.
Either way, such acceleration requires infinite energy.
Some observers with sub-light relative motion will disagree about which occurs first of any two events that are separated by a space-like interval. In other words, any travel that is faster-than-light will be seen as traveling backwards in time in some other, equally valid, frames of reference, or need to assume the speculative hypothesis of possible Lorentz violations at a presently unobserved scale (for instance the Planck scale). Therefore, any theory which permits "true" FTL also has to cope with time travel and all its associated paradoxes, or else to assume the Lorentz invariance to be a symmetry of thermodynamical statistical nature (hence a symmetry broken at some presently unobserved scale).
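As a minimal Python illustration of the first point, the Lorentz factor, which multiplies a massive particle's momentum and total energy, grows without bound as the speed approaches c (the sample speeds are purely illustrative):
def lorentz_gamma(beta):
    # Lorentz factor for a speed expressed as a fraction of c.
    return 1.0 / (1.0 - beta * beta) ** 0.5

for beta in (0.5, 0.9, 0.99, 0.999, 0.999999):
    # Momentum is gamma * m * v and total energy is gamma * m * c^2,
    # so both diverge as beta approaches 1.
    print(beta, lorentz_gamma(beta))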
In special relativity the coordinate speed of light is only guaranteed to be c in an inertial frame; in a non-inertial frame the coordinate speed may be different from c. In general relativity no coordinate system on a large region of curved spacetime is "inertial", so it is permissible to use a global coordinate system where objects travel faster than c, but in the local neighborhood of any point in curved spacetime we can define a "local inertial frame" and the local speed of light will be c in this frame, with massive objects moving through this local neighborhood always having a speed less than c in the local inertial frame.
Justifications
Casimir vacuum and quantum tunnelling
Special relativity postulates that the speed of light in vacuum is invariant in inertial frames. That is, it will be the same from any frame of reference moving at a constant speed. The equations do not specify any particular value for the speed of light, which is an experimentally determined quantity for a fixed unit of length. Since 1983, the SI unit of length (the meter) has been defined using the speed of light.
The experimental determination has been made in vacuum. However, the vacuum we know is not the only possible vacuum which can exist. The vacuum has energy associated with it, called simply the vacuum energy, which could perhaps be altered in certain cases. When vacuum energy is lowered, light itself has been predicted to go faster than the standard value c. This is known as the Scharnhorst effect. Such a vacuum can be produced by bringing two perfectly smooth metal plates together at near atomic diameter spacing. It is called a Casimir vacuum. Calculations imply that light will go faster in such a vacuum by a minuscule amount: a photon traveling between two plates that are 1 micrometer apart would increase the photon's speed by only about one part in 10^36. Accordingly, there has as yet been no experimental verification of the prediction. A recent analysis argued that the Scharnhorst effect cannot be used to send information backwards in time with a single set of plates since the plates' rest frame would define a "preferred frame" for FTL signaling. However, with multiple pairs of plates in motion relative to one another the authors noted that they had no arguments that could "guarantee the total absence of causality violations", and invoked Hawking's speculative chronology protection conjecture which suggests that feedback loops of virtual particles would create "uncontrollable singularities in the renormalized quantum stress-energy" on the boundary of any potential time machine, and thus would require a theory of quantum gravity to fully analyze. Other authors argue that Scharnhorst's original analysis, which seemed to show the possibility of faster-than-c signals, involved approximations which may be incorrect, so that it is not clear whether this effect could actually increase signal speed at all.
It was later claimed by Eckle et al. that particle tunneling does indeed occur in zero real time. Their tests involved tunneling electrons, where the group argued a relativistic prediction for tunneling time should be 500–600 attoseconds (an attosecond is one quintillionth (10^−18) of a second). All that could be measured was 24 attoseconds, which is the limit of the test accuracy. Again, though, other physicists believe that tunneling experiments in which particles appear to spend anomalously short times inside the barrier are in fact fully compatible with relativity, although there is disagreement about whether the explanation involves reshaping of the wave packet or other effects.
Give up (absolute) relativity
Because of the strong empirical support for special relativity, any modifications to it must necessarily be quite subtle and difficult to measure. The best-known attempt is doubly special relativity, which posits that the Planck length is also the same in all reference frames, and is associated with the work of Giovanni Amelino-Camelia and João Magueijo.
There are speculative theories that claim inertia is produced by the combined mass of the universe (e.g., Mach's principle), which implies that the rest frame of the universe might be preferred by conventional measurements of natural law. If confirmed, this would imply special relativity is an approximation to a more general theory, but since the relevant comparison would (by definition) be outside the observable universe, it is difficult to imagine (much less construct) experiments to test this hypothesis. Despite this difficulty, such experiments have been proposed.
Spacetime distortion
Although the theory of special relativity forbids objects to have a relative velocity greater than light speed, and general relativity reduces to special relativity in a local sense (in small regions of spacetime where curvature is negligible), general relativity does allow the space between distant objects to expand in such a way that they have a "recession velocity" which exceeds the speed of light, and it is thought that galaxies which are at a distance of more than about 14 billion light-years from us today have a recession velocity which is faster than light. Miguel Alcubierre theorized that it would be possible to create a warp drive, in which a ship would be enclosed in a "warp bubble" where the space at the front of the bubble is rapidly contracting and the space at the back is rapidly expanding, with the result that the bubble can reach a distant destination much faster than a light beam moving outside the bubble, but without objects inside the bubble locally traveling faster than light. However, several objections raised against the Alcubierre drive appear to rule out the possibility of actually using it in any practical fashion. Another possibility predicted by general relativity is the traversable wormhole, which could create a shortcut between arbitrarily distant points in space. As with the Alcubierre drive, travelers moving through the wormhole would not locally move faster than light travelling through the wormhole alongside them, but they would be able to reach their destination (and return to their starting location) faster than light traveling outside the wormhole.
Gerald Cleaver and Richard Obousy, a professor and student of Baylor University, theorized that manipulating the extra spatial dimensions of string theory around a spaceship with an extremely large amount of energy would create a "bubble" that could cause the ship to travel faster than the speed of light. To create this bubble, the physicists believe manipulating the 10th spatial dimension would alter the dark energy in three large spatial dimensions: height, width and length. Cleaver said positive dark energy is currently responsible for speeding up the expansion rate of our universe as time moves on.
Lorentz symmetry violation
The possibility that Lorentz symmetry may be violated has been seriously considered in the last two decades, particularly after the development of a realistic effective field theory that describes this possible violation, the so-called Standard-Model Extension. This general framework has allowed experimental searches by ultra-high energy cosmic-ray experiments and a wide variety of experiments in gravity, electrons, protons, neutrons, neutrinos, mesons, and photons.
The breaking of rotation and boost invariance causes direction dependence in the theory as well as unconventional energy dependence that introduces novel effects, including Lorentz-violating neutrino oscillations and modifications to the dispersion relations of different particle species, which naturally could make particles move faster than light.
In some models of broken Lorentz symmetry, it is postulated that the symmetry is still built into the most fundamental laws of physics, but that spontaneous symmetry breaking of Lorentz invariance shortly after the Big Bang could have left a "relic field" throughout the universe which causes particles to behave differently depending on their velocity relative to the field; however, there are also some models where Lorentz symmetry is broken in a more fundamental way. If Lorentz symmetry can cease to be a fundamental symmetry at the Planck scale or at some other fundamental scale, it is conceivable that particles with a critical speed different from the speed of light be the ultimate constituents of matter.
In current models of Lorentz symmetry violation, the phenomenological parameters are expected to be energy-dependent. Therefore, as widely recognized, existing low-energy bounds cannot be applied to high-energy phenomena; however, many searches for Lorentz violation at high energies have been carried out using the Standard-Model Extension.
Lorentz symmetry violation is expected to become stronger as one gets closer to the fundamental scale.
Superfluid theories of physical vacuum
In this approach, the physical vacuum is viewed as a quantum superfluid which is essentially non-relativistic, whereas Lorentz symmetry is not an exact symmetry of nature but rather the approximate description valid only for the small fluctuations of the superfluid background. Within the framework of the approach, a theory was proposed in which the physical vacuum is conjectured to be a quantum Bose liquid whose ground-state wavefunction is described by the logarithmic Schrödinger equation. It was shown that the relativistic gravitational interaction arises as the small-amplitude collective excitation mode whereas relativistic elementary particles can be described by the particle-like modes in the limit of low momenta. The important fact is that at very high velocities the behavior of the particle-like modes becomes distinct from the relativistic one – they can reach the speed of light limit at finite energy; also, faster-than-light propagation is possible without requiring moving objects to have imaginary mass.
FTL neutrino flight results
MINOS experiment
In 2007 the MINOS collaboration reported results measuring the flight-time of 3 GeV neutrinos yielding a speed exceeding that of light by 1.8-sigma significance. However, those measurements were considered to be statistically consistent with neutrinos traveling at the speed of light. After the detectors for the project were upgraded in 2012, MINOS corrected their initial result and found agreement with the speed of light. Further measurements are going to be conducted.
OPERA neutrino anomaly
On September 22, 2011, a preprint from the OPERA Collaboration indicated detection of 17 and 28 GeV muon neutrinos, sent 730 kilometers (454 miles) from CERN near Geneva, Switzerland to the Gran Sasso National Laboratory in Italy, traveling faster than light by a relative amount of about 2.5 × 10^−5 (approximately 1 in 40,000), a statistic with 6.0-sigma significance. On 17 November 2011, a second follow-up experiment by OPERA scientists confirmed their initial results. However, scientists were skeptical about the results of these experiments, the significance of which was disputed. In March 2012, the ICARUS collaboration failed to reproduce the OPERA results with their equipment, detecting neutrino travel time from CERN to the Gran Sasso National Laboratory indistinguishable from the speed of light. Later the OPERA team reported two flaws in their equipment set-up that had caused errors far outside their original confidence interval: a fiber-optic cable attached improperly, which caused the apparently faster-than-light measurements, and a clock oscillator ticking too fast.
Tachyons
In special relativity, it is impossible to accelerate an object to the speed of light, or for a massive object to move at the speed of light. However, it might be possible for an object to exist which moves faster than light. The hypothetical elementary particles with this property are called tachyons or tachyonic particles. Attempts to quantize them failed to produce faster-than-light particles, and instead illustrated that their presence leads to an instability.
Various theorists have suggested that the neutrino might have a tachyonic nature, while others have disputed the possibility.
General relativity
General relativity was developed after special relativity to include concepts like gravity. It maintains the principle that no object can accelerate to the speed of light in the reference frame of any coincident observer. However, it permits distortions in spacetime that allow an object to move faster than light from the point of view of a distant observer. One such distortion is the Alcubierre drive, which can be thought of as producing a ripple in spacetime that carries an object along with it. Another possible system is the wormhole, which connects two distant locations as though by a shortcut. Both distortions would need to create a very strong curvature in a highly localized region of space-time and their gravity fields would be immense. To counteract the unstable nature, and prevent the distortions from collapsing under their own 'weight', one would need to introduce hypothetical exotic matter or negative energy.
General relativity also recognizes that any means of faster-than-light travel could also be used for time travel. This raises problems with causality. Many physicists believe that the above phenomena are impossible and that future theories of gravity will prohibit them. One theory states that stable wormholes are possible, but that any attempt to use a network of wormholes to violate causality would result in their decay. In string theory, Eric G. Gimon and Petr Hořava have argued that in a supersymmetric five-dimensional Gödel universe, quantum corrections to general relativity effectively cut off regions of spacetime with causality-violating closed timelike curves. In particular, in the quantum theory a smeared supertube is present that cuts the spacetime in such a way that, although in the full spacetime a closed timelike curve passed through every point, no complete curves exist on the interior region bounded by the tube.
In fiction and popular culture
FTL travel is a common plot device in science fiction.
| Physical sciences | Theory of relativity | Physics |
11464 | https://en.wikipedia.org/wiki/Frigate | Frigate | A frigate () is a type of warship. In different eras, the roles and capabilities of ships classified as frigates have varied.
The name frigate in the 17th to early 18th centuries was given to any full-rigged ship built for speed and maneuverability, intended to be used in scouting, escort and patrol roles. The term was applied loosely to ships varying greatly in design. In the second quarter of the 18th century, what is now generally regarded as the 'true frigate' was developed in France. This type of vessel was characterised by possessing only one armed deck, with an unarmed deck below it used for berthing the crew.
Late in the 19th century (British and French prototypes were constructed in 1858), a type of powerful ironclad warships was developed, and because they had a single gun deck, the term 'frigate' was used to describe them. Later developments in ironclad ships rendered the 'frigate' designation obsolete and the term fell out of favour. During the Second World War, the name 'frigate' was reintroduced to describe a seagoing escort ship that was intermediate in size between a corvette and a destroyer. After World War II, a wide variety of ships have been classified as frigates, and the reasons for such classification have not been consistent. While some navies have used the word 'frigate' principally for large ocean-going anti-submarine warfare (ASW) combatants, others have used the term to describe ships that are otherwise recognizable as corvettes, destroyers, and even nuclear-powered guided-missile cruisers. Some European navies use the term for ships that would formerly have been called destroyers, as well as for frigates. The rank "frigate captain" derives from the name of this type of ship.
Age of sail
Origins
The term "frigate" (Italian: fregata; Dutch: fregat; Spanish/Catalan/Portuguese/Sicilian: fragata; French: frégate) originated in the Mediterranean in the late 15th century, referring to a lighter galley-type warship with oars, sails and a light armament, built for speed and maneuverability.
The etymology of the word remains uncertain, although it may have originated as a corruption of aphractus, a Latin word for an open vessel with no lower deck. Aphractus, in turn, derived from the Ancient Greek phrase ἄφρακτος ναῦς (aphraktos naus) – "undefended ship". In 1583, during the Eighty Years' War of 1568–1648, Habsburg Spain recovered the southern Netherlands from the Protestant rebels. This soon resulted in the use of the occupied ports as bases for privateers, the "Dunkirkers", to attack the shipping of the Dutch and their allies. To achieve this the Dunkirkers developed small, maneuverable, sailing vessels that came to be referred to as frigates. The success of these Dunkirker vessels influenced the ship design of other navies contending with them, but because most regular navies required ships of greater endurance than the Dunkirker frigates could provide, the term soon came to apply less exclusively to any relatively fast and elegant sail-only warship. In French, the term "frigate" gave rise to a verb – frégater, meaning 'to build long and low', and to an adjective, adding more confusion. Even the huge English could be described as "a delicate frigate" by a contemporary after her upper decks were reduced in 1651.
The navy of the Dutch Republic became the first navy to build the larger ocean-going frigates. The Dutch navy had three principal tasks in the struggle against Spain: to protect Dutch merchant ships at sea, to blockade the ports of Spanish-held Flanders to damage trade and halt enemy privateering, and to fight the Spanish fleet and prevent troop landings. The first two tasks required speed, shallowness of draft for the shallow waters around the Netherlands, and the ability to carry sufficient supplies to maintain a blockade. The third task required heavy armament, sufficient to stand up to the Spanish fleet. The first of the larger battle-capable frigates were built around 1600 at Hoorn in Holland. By the later stages of the Eighty Years' War the Dutch had switched entirely from the heavier ships still used by the English and Spanish to the lighter frigates, carrying around 40 guns and weighing around 300 tons.
The effectiveness of the Dutch frigates became most evident in the Battle of the Downs in 1639, encouraging most other navies, especially the English, to adopt similar designs. The fleets built by the Commonwealth of England in the 1650s generally consisted of ships described as "frigates", the largest of which were two-decker "great frigates" of the third rate. Carrying 60 guns, these vessels were as big and capable as "great ships" of the time; however, most other frigates at the time were used as "cruisers": independent fast ships. The term "frigate" implied a long hull-design, which relates directly to speed (see hull speed) and which also, in turn, helped the development of the broadside tactic in naval warfare.
At this time, a further design evolved, reintroducing oars and resulting in galley frigates such as of 1676, which was rated as a 32-gun fifth-rate but also had a bank of 40 oars set below the upper deck that could propel the ship in the absence of a favorable wind. In Danish, the word "fregat" often applies to warships carrying as few as 16 guns, such as , which the British classified as a sloop. Under the rating system of the Royal Navy, by the middle of the 18th century, the term "frigate" was technically restricted to single-decked ships of the fifth rate, though small 28-gun frigates were classed as sixth rate.
Classic design
The classic sailing frigate, or 'true frigate', well-known today for its role in the Napoleonic Wars, can be traced back to French developments in the second quarter of the 18th century. The French-built Médée of 1740 is often regarded as the first example of this type. These ships were square-rigged and carried all their main guns on a single continuous upper deck. The lower deck, known as the "gun deck", now carried no armament, and functioned as a "berth deck" where the crew lived, and was in fact placed below the waterline of the new frigates. The typical earlier cruiser had a partially armed lower deck, from which it was known as a 'half-battery' or demi-batterie ship. Removing the guns from this deck allowed the height of the hull upperworks to be lowered, giving the resulting 'true-frigate' much improved sailing qualities. The unarmed deck meant that the frigate's guns were carried comparatively high above the waterline; as a result, when seas were too rough for two-deckers to open their lower deck gunports, frigates were still able to fight with all their guns (see the action of 13 January 1797 for an example when this was decisive).
The Royal Navy captured a number of the new French frigates, including Médée, during the War of the Austrian Succession (1740–1748) and were impressed by them, particularly for their inshore handling capabilities. They soon built copies (ordered in 1747), based on a French privateer named Tygre, and started to adapt the type to their own needs, setting the standard for other frigates as the leading naval power. The first British frigates carried 28 guns including an upper deck battery of twenty-four 9-pounder guns (the remaining four smaller guns were carried on the quarterdeck) but soon developed into fifth-rate ships of 32 or 36 guns including an upper deck battery of twenty-six 12-pounder guns, with the remaining six or ten smaller guns carried on the quarterdeck and forecastle. Technically, 'rated ships' with fewer than 28 guns could not be classed as frigates but as "post ships"; however, in common parlance most post ships were often described as "frigates", the same casual misuse of the term being extended to smaller two-decked ships that were too small to stand in the line of battle.
A total of fifty-nine French sailing frigates were built between 1777 and 1790, with a standard design averaging a hull length of and an average draught of . The new frigates recorded sailing speeds of up to , significantly faster than their predecessor vessels.
Heavy frigate
In 1778, the British Admiralty introduced a larger "heavy" frigate, with a main battery of twenty-six or twenty-eight 18-pounder guns (with smaller guns carried on the quarterdeck and forecastle). This move may reflect the naval conditions at the time: with both France and Spain as enemies, the usual British preponderance in ship numbers no longer applied, and there was pressure on the British to produce cruisers of individually greater force. In reply, the first French 18-pounder frigates were laid down in 1781. The 18-pounder frigate eventually became the standard frigate of the French Revolutionary and Napoleonic Wars. The British produced larger, 38-gun, and slightly smaller, 36-gun, versions and also a 32-gun design that can be considered an 'economy version'. The 32-gun frigates also had the advantage that they could be built by the many smaller, less-specialised shipbuilders.
Frigates could (and usually did) additionally carry smaller carriage-mounted guns on their quarterdecks and forecastles (the superstructures above the upper deck). In 1778 the Carron Iron Company of Scotland produced a naval gun which would revolutionise the armament of smaller naval vessels, including the frigate. The carronade was a large calibre, short-barrelled naval cannon which was light, quick to reload and needed a smaller crew than a conventional long gun. Due to its lightness it could be mounted on the forecastle and quarterdeck of frigates. It greatly increased the firepower, measured in weight of metal (the combined weight of all projectiles fired in one broadside), of these vessels. The disadvantages of the carronade were that it had a much shorter range and was less accurate than a long gun. The British quickly saw the advantages of the new weapon and soon employed it on a wide scale. The US Navy also copied the design soon after its appearance. The French and other nations eventually adopted variations of the weapon in succeeding decades. The typical heavy frigate had a main armament of 18-pounder long guns, plus 32-pounder carronades mounted on its upper decks.
Super-heavy frigates
The first 'super-heavy frigates', armed with 24-pounder long guns, were built by the naval architect F H Chapman for the Swedish navy in 1782. Because of a shortage of ships-of-the-line, the Swedes wanted these frigates, the Bellona class, to be able to stand in the battle line in an emergency. In the 1790s the French built a small number of large 24-pounder frigates, such as and Egyptienne; they also cut down (reduced the height of the hull to give only one continuous gun deck) a number of older ships-of-the-line (including ) to produce super-heavy frigates; the resulting ship was known as a rasée. It is not known whether the French were seeking to produce very potent cruisers or merely to address stability problems in old ships. The British, alarmed by the prospect of these powerful heavy frigates, responded by cutting down three of their smaller 64-gun battleships into rasées, including , which went on to have a very successful career as a frigate. At this time the British also built a few 24-pounder-armed large frigates, the most successful of which was Endymion (1,277 tons).
In 1797, three of the United States Navy's first six major ships were rated as 44-gun frigates, which operationally carried fifty-six to sixty 24-pounder long guns and 32-pounder or 42-pounder carronades on two decks; they were exceptionally powerful. These ships were so large, at around 1,500 tons, and well-armed that they were often regarded as equal to ships of the line, and after a series of losses at the outbreak of the War of 1812, Royal Navy fighting instructions ordered British frigates (usually rated at 38 guns or fewer) never to engage the large American frigates at less than a 2:1 advantage. Constitution, preserved as a museum ship by the US Navy, is the oldest commissioned warship afloat, and is a surviving example of a frigate from the Age of Sail. Constitution and her sister ships and were created in response to the Barbary Coast pirates, in conjunction with the Naval Act of 1794. Joshua Humphreys proposed that only live oak, a tree that grew only in America, should be used to build these ships.
The British, wounded by repeated defeats in single-ship actions, responded to the success of the American 44s in three ways. They built a class of conventional 40-gun, 24-pounder armed frigates on the lines of Endymion. They cut down three old 74-gun Ships-of-the-Line into rasées, producing frigates with a 32-pounder main armament, supplemented by 42-pounder carronades. These had an armament that far exceeded the power of the American ships. Finally, and , 1,500-ton spar-decked frigates (with an enclosed waist, giving a continuous line of guns from bow to stern at the level of the quarterdeck/forecastle), were built, which were an almost exact match in size and firepower to the American 44-gun frigates.
Role
Frigates were perhaps the hardest-worked of warship types during the Age of Sail. While smaller than a ship-of-the-line, they were formidable opponents for the large numbers of sloops and gunboats, not to mention privateers or merchantmen. Able to carry six months' stores, they had very long range; and vessels larger than frigates were considered too valuable to operate independently.
Frigates scouted for the fleet, went on commerce-raiding missions and patrols, and conveyed messages and dignitaries. Usually, frigates would fight in small numbers or singly against other frigates. They would avoid contact with ships-of-the-line; even in the midst of a fleet engagement it was bad etiquette for a ship of the line to fire on an enemy frigate which had not fired first. Frigates were involved in fleet battles, often as "repeating frigates". In the smoke and confusion of battle, signals made by the fleet commander, whose flagship might be in the thick of the fighting, might be missed by the other ships of the fleet. Frigates were therefore stationed to windward or leeward of the main line of battle, and had to maintain a clear line of sight to the commander's flagship. Signals from the flagship were then repeated by the frigates, which, standing out of the line and clear of the smoke and disorder of battle, could be more easily seen by the other ships of the fleet. If damage or loss of masts prevented the flagship from making clear conventional signals, the repeating frigates could interpret them and hoist their own in the correct manner, passing on the commander's instructions clearly. For officers in the Royal Navy, a frigate was a desirable posting. Frigates often saw action, which meant a greater chance of glory, promotion, and prize money.
Unlike larger ships that were placed in ordinary, frigates were kept in service in peacetime as a cost-saving measure and to provide experience to frigate captains and officers which would be useful in wartime. Frigates could also carry marines for boarding enemy ships or for operations on shore; in 1832, the frigate landed a party of 282 sailors and Marines ashore in the US Navy's first Sumatran expedition. Frigates remained a crucial element of navies until the mid-19th century. The first ironclads were classified as "frigates" because of the number of guns they carried. However, terminology changed as iron and steam became the norm, and the role of the frigate was assumed first by the protected cruiser and then by the light cruiser.
Frigates are often the vessel of choice in historical naval novels due to their relative freedom compared to ships-of-the-line (kept for fleet actions) and smaller vessels (generally assigned to a home port and less widely ranging). Examples include Patrick O'Brian's Aubrey–Maturin series, C. S. Forester's Horatio Hornblower series and Alexander Kent's Richard Bolitho series. The motion picture Master and Commander: The Far Side of the World features a reconstructed historic frigate, HMS Rose, to depict Aubrey's frigate HMS Surprise.
Age of steam
Vessels classed as frigates continued to play a great role in navies with the adoption of steam power in the 19th century. In the 1830s, navies experimented with large paddle steamers equipped with large guns mounted on one deck, which were termed "paddle frigates".
From the mid-1840s on, frigates which more closely resembled the traditional sailing frigate were built with steam engines and screw propellers. These "screw frigates", built first of wood and later of iron, continued to perform the traditional role of the frigate until late in the 19th century.
Armoured frigate
From 1859, armour was added to ships based on existing frigate and ship of the line designs. The additional weight of the armour on these first ironclad warships meant that they could have only one gun deck, and they were technically frigates, even though they were more powerful than existing ships-of-the-line and occupied the same strategic role. The phrase "armoured frigate" remained in use for some time to denote a sail-equipped, broadside-firing type of ironclad. The first such ship was the French Navy's revolutionary wooden-hulled Gloire, protected by 12 cm-thick (4.7 in) armour plates. The British response was Warrior, the first of the Warrior-class ironclads, launched in 1860. With her iron hull, steam engines propelling the 9,137-ton vessel to speeds of up to 14 knots and rifled breechloading 110-pdr guns, Warrior is the ancestor of all modern warships.
During the 1880s, as warship design shifted from iron to steel and cruising warships without sails started to appear, the term "frigate" fell out of use. Vessels with armoured sides were designated as "battleships" or "armoured cruisers", while "protected cruisers" only possessed an armoured deck, and unarmoured vessels, including frigates and sloops, were classified as "unprotected cruisers".
Modern era
World War II
Modern frigates are related to earlier frigates only by name. The term "frigate" was readopted during the Second World War by the British Royal Navy to describe an anti-submarine escort vessel that was larger than a corvette (based on a mercantile design), while smaller than a destroyer. The vessels were originally to be termed "twin screw corvettes" until the Royal Canadian Navy suggested that the British re-introduce the term "frigate" for the significantly enlarged vessels. Equal in size and capability to the American destroyer escort, frigates were usually less expensive to build and maintain. Small anti-submarine escorts designed for naval use from scratch had previously been classified as sloops by the Royal Navy, and the s of 1939–1945 (propelled by steam turbines as opposed to cheaper triple-expansion steam engines) were as large as the new types of frigate, and more heavily armed. Twenty-two of these were reclassified as frigates after the war, as were the remaining 24 smaller s.
The frigate was introduced to remedy some of the shortcomings inherent in the corvette design: limited armament, a hull form not suited to open-ocean work, a single shaft which limited speed and maneuverability, and a lack of range. The frigate was designed and built to the same mercantile construction standards (scantlings) as the corvette, allowing manufacture by yards unused to warship construction. The first frigates of the (1941) were essentially two sets of corvette machinery in one larger hull, armed with the latest Hedgehog anti-submarine weapon.
The frigate possessed less offensive firepower and speed than a destroyer, including an escort destroyer, but such qualities were not required for anti-submarine warfare. Submarines were slow while submerged, and ASDIC sets did not operate effectively at speeds of over . Rather, the frigate was an austere and weatherly vessel suitable for mass-construction and fitted with the latest innovations in anti-submarine warfare. As the frigate was intended purely for convoy duties, and not to deploy with the fleet, it had limited range and speed.
It was not until the Royal Navy's of 1944 that a British design classified as a "frigate" was produced for fleet use, although it still suffered from limited speed. These anti-aircraft frigates, built on incomplete hulls, were similar to the United States Navy's destroyer escorts (DE), although the latter had greater speed and offensive armament to better suit them to fleet deployments. The destroyer escort concept came from design studies by the General Board of the United States Navy in 1940, as modified by requirements established by a British commission in 1941 prior to the American entry into the war, for deep-water escorts. The American-built destroyer escorts serving in the British Royal Navy were rated as Captain-class frigates. The U.S. Navy's two Canadian-built and 96 British-influenced, American-built frigates that followed originally were classified as "patrol gunboats" (PG) in the U.S. Navy but on 15 April 1943 were all reclassified as patrol frigates (PF).
Modern frigate
Guided-missile role
The introduction of the surface-to-air missile after World War II made relatively small ships effective for anti-aircraft warfare: the "guided-missile frigate". In the USN, these vessels were called "ocean escorts" and designated "DE" or "DEG" until 1975 – a holdover from the World War II destroyer escort or "DE". While the Royal Canadian Navy used similar designations for their warships built in the 1950s, the British Royal Navy maintained the use of the term "frigate"; in the 1990s the RCN re-introduced the frigate designation. Likewise, the French Navy refers to missile-equipped ships, up to cruiser-sized ships (, , and es), by the name of "frégate", while smaller units are named aviso. The Soviet Navy used the term "guard-ship" (сторожевой корабль).
From the 1950s to the 1970s, the United States Navy commissioned ships classed as guided-missile frigates (hull classification symbol DLG or DLGN, literally meaning guided-missile destroyer leaders), which were actually anti-aircraft warfare cruisers built on destroyer-style hulls. These had one or two twin launchers per ship for the RIM-2 Terrier missile, upgraded to the RIM-67 Standard ER missile in the 1980s. This type of ship was intended primarily to defend aircraft carriers against anti-ship cruise missiles, augmenting and eventually replacing converted World War II cruisers (CAG/CLG/CG) in this role. The guided-missile frigates also had an anti-submarine capability that most of the World War II cruiser conversions lacked. Some of these ships – and along with the and es – were nuclear-powered (DLGN). These "frigates" were roughly mid-way in size between cruisers and destroyers. This was similar to the use of the term "frigate" during the age of sail during which it referred to a medium-sized warship, but it was inconsistent with conventions used by other contemporary navies which regarded frigates as being smaller than destroyers. During the 1975 ship reclassification, the large American frigates were redesignated as guided-missile cruisers or destroyers (CG/CGN/DDG), while ocean escorts (the American classification for ships smaller than destroyers, with hull symbol DE/DEG (destroyer escort)) such as the Knox-class were reclassified as frigates (FF/FFG), sometimes called "fast frigates". In the late 1970s, as a gradual successor to the Knox frigates, the US Navy introduced the 51-ship guided-missile frigates (FFG), the last of which was decommissioned in 2015, although some serve in other navies. By 1995 the older guided-missile cruisers and destroyers were replaced by the s and s.
One of the most successful post-1945 designs was the British , which was used by several navies. Laid down in 1959, the Leander class was based on the previous Type 12 anti-submarine frigate but equipped for anti-aircraft use as well. They were used by the UK into the 1990s, at which point some were sold on to other navies. The Leander design, or improved versions of it, was licence-built for other navies as well. Nearly all modern frigates are equipped with some form of offensive or defensive missiles, and as such are rated as guided-missile frigates (FFG). Improvements in surface-to-air missiles (e.g., the Eurosam Aster 15) allow modern guided-missile frigates to form the core of many modern navies and to be used as a fleet defence platform, without the need for specialised anti-air warfare frigates.
Modern destroyers and frigates have sufficient endurance and seaworthiness for long voyages and so are considered blue water vessels, while corvettes (even the largest ones capable of carrying an anti-submarine warfare helicopter) are typically deployed in coastal or littoral zones and so are regarded as brown-water or green-water vessels. According to Dr. Sidharth Kaushal of the Royal United Services Institute for Defence and Security Studies, describing the difference between 21st century destroyers and frigates, the larger "destroyers can more easily carry and generate the power for more powerful high-resolution radar and a larger number of vertical launch cells. They can thus provide theatre wide air and missile defence for forces such as a carrier battle group and typically serve this function". By contrast, the smaller "frigates are thus usually used as escort vessels to protect sea lines of communication or as an auxiliary component of a strike group". The largest and most powerful destroyers are often classified as cruisers, such as the s, due to their extra armament and their facilities to serve as fleet flagships.
Other uses
The Royal Navy Type 61 (Salisbury class) were "air direction" frigates equipped to track aircraft. To this end they had reduced armament compared to the Type 41 (Leopard-class) air-defence frigates built on the same hull. Multi-role frigates like the MEKO 200, and es are designed for navies needing warships deployed in a variety of situations that a general frigate class would not be able to fulfill, without the need to deploy destroyers.
Anti-submarine role
At the opposite end of the spectrum, some frigates are specialised for anti-submarine warfare. Increasing submarine speeds towards the end of World War II (see German Type XXI submarine) greatly reduced the margin of speed superiority of the frigate over the submarine. The frigate could no longer be slow and powered by mercantile machinery, and consequently postwar frigates, such as the , were faster.
Such ships carry improved sonar equipment, such as the variable depth sonar or towed array, and specialised weapons such as torpedoes, forward-throwing weapons such as Limbo and missile-carried anti-submarine torpedoes such as ASROC or Ikara. The Royal Navy's original Type 22 frigate is an example of a specialised anti-submarine warfare frigate, though it also has Sea Wolf surface-to-air missiles for point defense plus Exocet surface-to-surface missiles for limited offensive capability.
Especially for anti-submarine warfare, most modern frigates have a landing deck and hangar aft to operate helicopters, eliminating the need for the frigate to close with unknown sub-surface threats, and using fast helicopters to attack nuclear submarines which may be faster than surface warships. For this task the helicopter is equipped with sensors such as sonobuoys, wire-mounted dipping sonar and magnetic anomaly detectors to identify possible threats, and torpedoes or depth-charges to attack them.
With their onboard radar, helicopters can also be used to reconnoitre over-the-horizon targets and, if equipped with anti-ship missiles such as Penguin or Sea Skua, to attack them. The helicopter is also invaluable for search and rescue operations and has largely replaced the use of small boats or the jackstay rig for such duties as transferring personnel, mail and cargo between ships or to shore. With helicopters these tasks can be accomplished faster and less dangerously, and without the need for the frigate to slow down or change course.
Air defence role
Frigates designed in the 1960s and 1970s, such as the US Navy's , West Germany's , and Royal Navy's Type 22 frigate were equipped with a small number of short-ranged surface-to-air missiles (Sea Sparrow or Sea Wolf) for point defense only.
By contrast, newer frigates starting with the are specialised for "zone-defense" air defence, because of major developments in fighter jets and ballistic missiles. Recent examples include the air defence and command frigates of the Royal Netherlands Navy. These ships are armed with VL Standard Missile 2 Block IIIA, one or two Goalkeeper CIWS systems ( has two Goalkeepers; the rest of the ships have the capacity for a second), VL Evolved Sea Sparrow missiles, a special SMART-L radar and a Thales Active Phased Array Radar (APAR), all of which are for air defence. Another example is the of the Royal Danish Navy.
Further developments
Stealth technology has been introduced in modern frigate design by the French design. Frigate shapes are designed to offer a minimal radar cross section, which also lends them good air penetration; the maneuverability of these frigates has been compared to that of sailing ships. Examples are the Italian and French with the Aster 15 and Aster 30 missile for anti-missile capabilities, the German and s, the Turkish type frigates with the MK-41 VLS, the Indian , and classes with the Brahmos missile system and the Malaysian with the Naval Strike Missile.
The modern French Navy applies the term first-class frigate and second-class frigate to both destroyers and frigates in service. Pennant numbers remain divided between F-series numbers for those ships internationally recognised as frigates and D-series pennant numbers for those more traditionally recognised as destroyers. This can result in some confusion as certain classes are referred to as frigates in French service while similar ships in other navies are referred to as destroyers. This also results in some recent classes of French ships such as the being among the largest in the world to carry the rating of frigate. The Frégates de Taille Intermédiaire (FTI), which means frigates of intermediate size, is a French military program to design and create a planned class of frigates to be used by the French Navy. At the moment, the program consists of five ships, with commissioning planned from 2023 onwards.
In the German Navy, frigates were used to replace aging destroyers; however, in size and role the new German frigates exceed the former class of destroyers. The future German s are the largest class of frigates worldwide with a displacement of more than 7,200 tons. The same was done in the Spanish Navy, which went ahead with the deployment of the first Aegis frigates, the s. The Myanmar Navy is producing modern frigates with a reduced radar cross section known as the . Before the Kyan Sittha class, the Myanmar Navy also produced an . Although the size of the Myanmar Navy is quite small, it is producing modern guided-missile frigates with the help of Russia, China, and India. The fleet of the Myanmar Navy is still expanding with several ongoing shipbuilding programmes, including a 4,000-tonne frigate with vertical missile launch systems. The four planned Tamandaré-class frigates of the Brazilian Navy will be responsible for introducing ships with stealth technology in the national navy and the Latin American region, with the first ship expected to be launched in 2024.
Littoral combat ship (LCS)
Some new classes of ships similar to corvettes are optimized for high-speed deployment and combat with small craft rather than combat between equal opponents; an example is the U.S. littoral combat ship (LCS). As of 2015, all s in the United States Navy have been decommissioned, with their role partially assumed by the new LCS. While the LCS class ships are smaller than the frigate class they will replace, they offer a similar degree of weaponry while requiring less than half the crew complement and offering a top speed of over . A major advantage for the LCS ships is that they are designed around specific mission modules, allowing them to fulfill a variety of roles. The modular system also allows for most upgrades to be performed ashore and installed later into the ship, keeping the ships available for deployment for the maximum time.
The latest U.S. deactivation plans mean that this is the first time that the U.S. Navy has been without a frigate class of ships since 1943 (technically Constitution is rated as a frigate and is still in commission, but does not count towards Navy force levels). The remaining 20 LCSs, to be acquired from 2019 onwards in an enhanced configuration, will be designated as frigates, and existing ships given modifications may also have their classification changed to FF.
Frigates in preservation
A few frigates have survived as museum ships. They are:
Original sailing frigates
USS Constitution, in Boston, United States. Second oldest commissioned warship in the world and the oldest commissioned warship afloat. Active as the flagship of the United States Navy.
NRP Dom Fernando II e Glória in Almada, Portugal.
in Hartlepool, England.
in Dundee, Scotland.
Replica sailing frigates
, sailing replica of the 1779 Hermione which carried Lafayette to the United States.
, originally named Grand Turk was built for the TV series Hornblower in 1997. She was sold to France in 2010 and renamed Étoile du Roy.
, a sailing replica of Russia's first warship, homeported in Saint Petersburg, Russia.
in San Diego, United States, replica of HMS Rose, used in the film, Master and Commander: The Far Side of the World.
Steam frigates
in Den Helder, Netherlands.
in Ebeltoft, Denmark.
, replica in Esashi, Japan.
in Portsmouth, England.
in Buenos Aires, Argentina.
Modern era frigates
in Copenhagen, Denmark.
in Brisbane, Australia.
TCG Ege (F256), formerly in Izmit, Turkey.
ROKS Taedong (PF-63), formerly in South Korea.
ROKS Ulsan (FF-951), in Ulsan, South Korea.
ROKS Seoul (FF-952), in Seoul, South Korea.
HTMS Tachin (PF-1), formerly in Nakhon Nayok, Thailand.
HTMS Prasase (PF-2), formerly in Rayong Province, Thailand.
HTMS Phutthaloetla Naphalai in Sattahip, Thailand.
HTMS Phutthayotfa Chulalok in Sattahip, Thailand.
CNS Yingtan (FFG-531) in Qingdao, China.
CNS Xiamen (FFG-515) in Taizhou, China.
CNS Ji'an (FFG-518) in Wuxue, China.
CNS Siping (FFG-544) in Xingguo County, China.
CNS Jinhua (FFG-534) in Hengdian, China.
CNS Dandong (FFG-543) in Dandong, China.
in Lucknow, India (Planned)
in London, England.
in London, England.
in Glasgow, Scotland (planned)
in Horten, Norway.
in Lumut, Malaysia.
in Yangon, Myanmar
Former museums
Dominican frigate Mella was on display in the Dominican Republic from 1998 to 2003, when she was scrapped due to her deteriorating condition.
KD Rahmat was on display in Lumut, Malaysia from 2011 to 2017. She sank at her moorings due to poor condition, and was later scrapped.
RFS Druzhnyy was on display in Moscow, Russia from 2002 to 2016, when the museum plans fell through and she was sold for scrap.
was on display in Birkenhead, England from 1990 to 2006, when the museum that operated her was forced to close. She was later scrapped in 2012.
CNS Nanchong (FF-502) was on display in Qingdao, China from 1988 to 2012, when her deteriorating material condition made preservation difficult; she was later scrapped.
Operators
By country
operates three Adhafer-class frigates and two MEKO A-200AN frigates
operates six Espora-class frigates/corvettes
operates a single modified and two Hamilton-class patrol frigates from the United States
operates six s
operates three s purchased from Belgium
operates twelve s
operates 31 Jiangkai II-class frigates and two Jiangkai I-class frigates.
operates three Jiangwei I-class frigates transferred from the navy
operates four s
operates four Thetis-class frigates and three s, two s.
operates two s purchased from Chile
operates one MEKO A-200EN frigate and a Black Swan-class frigate used as training ship
operates a single -class frigate
operates six s.
operates four s, four s and three s with the latter sometimes classed as destroyers.
operates nine s purchased from the Netherlands, and four s
operates 14 frigates, comprising one Nilgiri-class, three s, seven s, and three s.
operates two s, five s, purchased from the Netherlands, and three Bung Tomo-class light frigates, purchased from the UK.
operates five s and three s.
operates two Thaon di Revel-class patrol frigates, four s.
operates four s with more under construction and six Abukuma-class frigates.
operates two s
operates six s, two s, and eight s.
operates two s
operates a single Reformador-class frigate
operates three Tarik Ben Ziyad-class frigates.
operates two s and one
operates two Hamilton-class patrol frigates from the United States, a single Aradu-class frigate, though its operational status is doubtful, and a single Obuma-class frigate used as training hulk.
operates four s (a variant of the Chinese Type 054 frigate)
operates seven s, with four being transferred from Italy
operates a single , transferred from the Navy
operates two s and two s
operates three s
operates one Gremyashchiy-class frigate/corvette, seven Steregushchiy-class frigates/corvettes, three s, three s, two s, two s and two s.
operates two s
operates four s
operates four s, made in Germany based on the MEKO A200 design
operates a single and two old frigates used as training ships, the and .
operates four s, four s, and one .
operates one
By class
current operator
operates three ships
current operator
operates three ships
operates one ship, is building a second ship, and is building two further ships to a modified design.
current operator
operates eight ships
operates two ships
FREMM multipurpose frigate current operator
operates two Bergamini-class frigates from Italy, one Aquitaine-class frigate from France
operates eight Aquitaine-class frigates
operates ten Bergamini-class frigates
operates one Aquitaine-class frigate ordered from France
current operator
operates six ships
operates two ships ordered from France
current operator
operates two ships
operates four ships
current operator
operates two ships purchased from The Netherlands
operates two ships purchased from The Netherlands
operates two ships
operates two ships purchased from the Netherlands
current operator
operates six ships purchased from the US
operates two ships purchased from the US
operates four ships purchased from the US
current operator
operates three ships
operates a single ship
operates a single ship
current operator
operates six s, which are the Taiwanese variant of the French La Fayette class
operates five ships
operates three s, which are the Saudi variant of the French La Fayette class
operates six s, which are the Singapore variant of the French La Fayette class
current operator
operates a single ship donated from the US
operates two s purchased from Australia, which are the Australian variant of the US Oliver Hazard Perry class
operates 10 s, which are the Taiwanese variant of the US Oliver Hazard Perry class
operates four ships
operates a single ship purchased from the US
operates two ships purchased from the US
operates five s, which are the Spanish variant of the US Oliver Hazard Perry class
operates eight s purchased from the US
current operator
operates a single ship
operates a single ship, though its operational status is doubtful
operates five ships
Type 053 frigate current operator
operates two Jianghu II-class frigates and two Jianghu III-class frigates purchased from China
operates six Jianghu-class frigates and seven Jiangwei II-class frigates
operates two Jianghu II-class frigates purchased from China
operates four s and two s purchased from China
operates four s (a variant of the Chinese Type 053H3 frigate)
operates a single Jiangwei I-class frigate purchased from China
Type 22 frigate current operator
operates a single ship purchased from the UK
operates a single ship purchased from the UK
operates two ships purchased from the UK
Type 23 frigate current operator
operates three ships purchased from the UK
operates 11 ships
Disputed classes
These ships are classified by their respective nations as frigates, but are considered destroyers internationally due to size, armament, and role.
operates three s and four s.
operates four s.
operates four s.
operates the , classified as a destroyer until 2001.
operates five s.
Former operators
decommissioned its last true frigates, the in 1998.
decommissioned its last in 1998.
lost its entire fleet, including two s and the training frigate Ethiopia, following the independence of Eritrea in 1991.
decommissioned EML Admiral Pitka in 2013.
decommissioned its last in 1985.
decommissioned all three s upon German Reunification in 1990.
lost its only operational frigate, Ibn Khaldoum, which was sunk in 2003.
decommissioned its last in 1959.
decommissioned both its Kotor-class frigates in 2019.
transferred its two s to Montenegro upon their independence in 2006.
decommissioned its last two Visby-class frigates in 1982, following defense reviews.
operated a single frigate, Hetman Sahaidachny, which was scuttled in 2022.
decommissioned its last in 2015.
decommissioned its last in 2022.
transferred its six remaining Trần Quang Khải-class frigates to the Philippines following the Fall of Saigon in 1975. The seventh ship was captured by North Vietnam and recommissioned into the Vietnam People's Navy.
Future development
has ordered three Steregushchiy-class frigates from Russia.
has ordered nine s. These ships are the Australian variant of the Type 26 frigates, and will carry the AEGIS combat system.
is planning to build two Anti-Submarine Warfare frigates to replace the current s. It is a joint project with the Netherlands.
has ordered four s. These ships will replace Brazil's aging s.
plans to order 15 Type 26 frigates as the design for the Canadian Surface Combatant. These ships will replace the decommissioned s and s.
is continuing to build Jiangkai II-class frigates.
is planning to build 10–15 new frigates to replace the aging Knox class and Cheng Kung class.
is planning to build four s. These vessels, despite their classification, have been described as frigates by the Finnish defense ministry, leading to a debate over the classification in the Finnish Parliament.
is building five Amiral Ronarc'h-class frigates.
has ordered six F126 frigates to replace the s. Construction of the first vessel started in December 2023.
is planning to build three Belharra-class frigates as a part of plans for replacing its aging s. There is an option for a fourth ship.
is building a total of 11 frigates: seven s and four Talwar-class frigates. Another eight ships of Project 17B are planned.
is currently building one Type 31 frigate with another one planned. Indonesia will also order six Bergamini-class frigates and two .
is building 16 Thaon di Revel-class frigates. These vessels will replace the decommissioned s and s. Italy is also planning to commission two more Bergamini-class frigates.
is currently building four more s.
is currently building four s. These ships will replace the s.
is currently building six s.
is currently building six s and planning a total of 12 ships for the class.
will commission one more Reformador-class frigate.
is constructing a new frigate which is long and displaces 4,000 tonnes.
is planning to build four Anti-Submarine Warfare frigates to replace the current s. It is a joint project with Belgium.
is currently building three Projekt 106 frigates to replace its aging Oliver Hazard Perry-class frigates.
is currently building ten s.
ordered four upgraded versions of the from the United States. These ships are to replace the aging s.
is currently planning to build five s. These ships will replace Spain's s.
is currently building an additional .
is currently building the s as a part of the MILGEM project.
was building one Volodymyr Velykyi-class frigate. Construction began in 2011, then suffered delays and was completely stopped in 2014. The Black Sea Shipyard responsible for the program went bankrupt in 2021, by which point the ship was only 17% complete. It was hoped that this class would help rebuild the Ukrainian Navy, which has been depleted since the capture of most of its fleet following the 2014 Russian annexation of Crimea. The United States has offered to transfer two Oliver Hazard Perry-class frigates to Ukraine.
is currently building eight Type 26 frigates. These ships, along with five planned Type 31 frigates will replace the Type 23 frigates currently in service. Additionally, five Type 32 frigates are also planned to supplement the Royal Navy's strength.
is currently building 20 s. These ships are a variant of the FREMM multipurpose frigate and will replace the decommissioned Oliver Hazard Perry-class frigates. As of 2024, six frigates have been funded.
| Technology | Naval warfare | null |
11488 | https://en.wikipedia.org/wiki/Furlong | Furlong | A furlong is a measure of distance in imperial units and United States customary units equal to one-eighth of a mile, equivalent to any of 660 feet, 220 yards, 40 rods, 10 chains, or approximately 201 metres. It is now mostly confined to use in horse racing, where in many countries it is the standard measurement of race lengths, and agriculture, where it is used to measure rural field lengths and distances.
In the United States, some states use older definitions for surveying purposes, leading to variations in the length of the furlong of about two parts per million, or roughly 0.4 millimetres. This variation is too small to have practical consequences in most applications.
Using the international definition of the yard as exactly 0.9144 metres, one furlong is 201.168 metres, and five furlongs are about 1 kilometre (1.00584 km exactly).
History
The name furlong derives from the Old English words (furrow) and (long). Dating back at least to early Anglo-Saxon times, it originally referred to the length of the furrow in one acre of a ploughed open field (a medieval communal field which was divided into strips). The furlong (meaning furrow length) was the distance a team of oxen could plough without resting. This was standardised to be exactly 40 rods or 10 chains. The system of long furrows arose because turning a team of oxen pulling a heavy plough was difficult. This offset the drainage advantages of short furrows and meant furrows were made as long as possible. An acre is an area that is one furlong long and one chain (66 feet or 22 yards) wide. For this reason, the furlong was once also called an acre's length, though in modern usage an area of one acre can be of any shape. The term furlong, or shot, was also used to describe a grouping of adjacent strips within an open field.
Among the early Anglo-Saxons, the rod was the fundamental unit of land measurement. A furlong was 40 rods; an acre 4 by 40 rods, or 4 rods by 1 furlong, and thus 160 square rods; there are 10 acres in a square furlong. At the time, the Saxons used the North German foot, which was about 10 percent longer than the foot of the international 1959 agreement. When England changed to a shorter foot in the late 13th century, rods and furlongs remained unchanged, since property boundaries were already defined in rods and furlongs. The only thing that changed was the number of feet and yards in a rod or a furlong, and the number of square feet and square yards in an acre. The definition of the rod went from 15 old feet to 16½ new feet, or from 5 old yards to 5½ new yards. The furlong went from 600 old feet to 660 new feet, or from 200 old yards to 220 new yards. The acre went from 36,000 old square feet to 43,560 new square feet, or from 4,000 old square yards to 4,840 new square yards.
The furlong was historically viewed as being equivalent to the Roman stade (stadium), which in turn derived from the Greek system. For example, the King James Bible uses the term "furlong" in place of the Greek stadion, although more recent translations often use miles or kilometres in the main text and give the original numbers in footnotes.
In the Roman system, there were 625 feet to the stadium, eight stadia to the mile, and 1½ miles to the league. A league was considered to be the distance a man could walk in one hour, and the mile (from mille, meaning "thousand") consisted of 1,000 passus (paces, or double steps, of five feet).
After the fall of the Western Roman Empire, medieval Europe continued with the Roman system, which the people proceeded to diversify, leading to serious complications in trade, taxation, etc. Around the year 1300, by royal decree England standardized a long list of measures. Among the important units of distance and length at the time were the foot, yard, rod (or pole), furlong, and the mile. The rod was defined as 5½ yards or 16½ feet, and the mile was eight furlongs, so the definition of the furlong became 40 rods and that of the mile became 5,280 feet (eight furlongs/mile times 40 rods/furlong times 16½ feet/rod). The invention of the measuring chain in the 1620s led to the introduction of an intermediate unit of length, the chain of 22 yards, equal to four rods and to one-tenth of a furlong.
A description from 1675 states, "Dimensurator or Measuring Instrument whereof the most usual has been the Chain, and the common length for English Measures four Poles, as answering indifferently to the Englishs Mile and Acre, 10 such Chains in length making a Furlong, and 10 single square Chains an Acre, so that a square Mile contains 640 square Acres." —John Ogilby, Britannia, 1675
The official use of the furlong was abolished in the United Kingdom under the Weights and Measures Act 1985, an act that also abolished the official use of many other traditional units of measurement.
Use
In Myanmar furlongs are currently used in conjunction with miles to indicate distances on highway signs. Mileposts on the Yangon–Mandalay Expressway use miles and furlongs.
In the rest of the world the furlong has very limited use, with the notable exception of horse racing in most English-speaking countries, including Canada and the United States. The distances for horse racing in Australia were converted to metric in 1972 and the term survives only in slang. In the United Kingdom, Ireland, Canada, and the United States, races are still given in miles and furlongs. Also distances along English canals navigated by narrowboats are commonly expressed in miles and furlongs.
The city of Chicago's street numbering system allots a measure of 800 address units to each mile, in keeping with the city's system of eight blocks per mile. This means that every block in a typical Chicago neighborhood (in either north–south or east–west direction but rarely both) is approximately one furlong in length. City blocks in the Hoddle Grid of Melbourne are also one furlong in length. Salt Lake City's blocks are each a square furlong in the downtown area. The blocks become less regular in shape farther from the center, but the numbering system (800 units to each mile) remains the same everywhere in Salt Lake County. Blocks in central Logan, Utah, and in large sections of Phoenix, Arizona, are similarly a square furlong in extent (eight to a mile, which explains the series of freeway exits: 19th Ave, 27th, 35th, 43rd, 51st, 59th ...).
Much of Ontario, Canada, was originally surveyed on a ten-furlong grid, with major roads being laid out along the grid lines. Now that distances are shown on road signs in kilometres, these major roads are almost exactly two kilometres apart. The exits on highways running through Toronto, for example, are generally at intervals of two kilometres.
The Bangor City Forest in Bangor, Maine has its trail system marked in miles and furlongs.
The furlong is also a base unit of the humorous FFF system of units.
Definition of length
The exact length of the furlong varies slightly among English-speaking countries. In Canada and the United Kingdom, which define the furlong in terms of the international yard of exactly 0.9144 metres, a furlong is 201.168 m. Australia does not formally define the furlong, but it does define the chain and link in terms of the international yard.
The United States previously defined the furlong, chain, rod, and link in terms of the U.S. survey foot of exactly 1200/3937 metre, resulting in a furlong approximately 201.1684 m long. The difference of approximately two parts per million between the old U.S. value and the "international" value was insignificant for most practical measurements.
In October 2019, the U.S. National Geodetic Survey and the National Institute of Standards and Technology announced their joint intent to retire the U.S. survey foot, with effect from the end of 2022. The furlong in U.S. customary units is thereafter defined based on the international foot of 1959, giving the length of the furlong as exactly 201.168 metres in the United States as well.
| Physical sciences | English | Basics and measurement |
11492 | https://en.wikipedia.org/wiki/Foot | Foot | The foot (plural: feet) is an anatomical structure found in many vertebrates. It is the terminal portion of a limb which bears weight and allows locomotion. In many animals with feet, the foot is a separate organ at the terminal part of the leg made up of one or more segments or bones, generally including claws and/or nails.
Etymology
The word "foot", in the sense of meaning the "terminal part of the leg of a vertebrate animal" comes from Old English fot, from Proto-Germanic *fot (source also of Old Frisian fot, Old Saxon fot, Old Norse fotr, Danish fod, Swedish fot, Dutch voet, Old High German fuoz, German Fuß, Gothic fotus, all meaning "foot"), from PIE root *ped- "foot".
The plural form feet is an instance of i-mutation.
Structure
The human foot is a strong and complex mechanical structure containing 26 bones, 33 joints (20 of which are actively articulated), and more than a hundred muscles, tendons, and ligaments. The joints of the foot are the ankle and subtalar joint and the interphalangeal joints of the foot. An anthropometric study of 1197 North American adult Caucasian males (mean age 35.5 years) found that a man's foot length was 26.3 cm with a standard deviation of 1.2 cm.
The foot can be subdivided into the hindfoot, the midfoot, and the forefoot:
The hindfoot is composed of the talus (or ankle bone) and the calcaneus (or heel bone). The two long bones of the lower leg, the tibia and fibula, are connected to the top of the talus to form the ankle. Connected to the talus at the subtalar joint, the calcaneus, the largest bone of the foot, is cushioned underneath by a layer of fat.
The five irregular bones of the midfoot, the cuboid, navicular, and three cuneiform bones, form the arches of the foot which serve as a shock absorber. The midfoot is connected to the hind- and fore-foot by muscles and the plantar fascia.
The forefoot is composed of five toes and the corresponding five proximal long bones forming the metatarsus. Similar to the fingers of the hand, the bones of the toes are called phalanges and the big toe has two phalanges while the other four toes have three phalanges each. The joints between the phalanges are called interphalangeal and those between the metatarsus and phalanges are called metatarsophalangeal (MTP).
Both the midfoot and forefoot constitute the dorsum (the area facing upward while standing) and the planum (the area facing downward while standing).
The instep is the arched part of the top of the foot between the toes and the ankle.
Bones
tibia, fibula
tarsus (7): talus, calcaneus, cuneiformes (3), cuboid, and navicular
metatarsus (5): first, second, third, fourth, and fifth metatarsal bone
phalanges (14)
There can be many sesamoid bones near the metatarsophalangeal joints, although they are only regularly present in the distal portion of the first metatarsal bone.
Arches
The human foot has two longitudinal arches and a transverse arch maintained by the interlocking shapes of the foot bones, strong ligaments, and pulling muscles during activity. The slight mobility of these arches when weight is applied to and removed from the foot makes walking and running more economical in terms of energy. As can be examined in a footprint, the medial longitudinal arch curves above the ground. This arch stretches from the heel bone over the "keystone" ankle bone to the three medial metatarsals. In contrast, the lateral longitudinal arch is very low. With the cuboid serving as its keystone, it redistributes part of the weight to the calcaneus and the distal end of the fifth metatarsal. The two longitudinal arches serve as pillars for the transverse arch which run obliquely across the tarsometatarsal joints. Excessive strain on the tendons and ligaments of the feet can result in fallen arches or flat feet.
Muscles
The muscles acting on the foot can be classified into extrinsic muscles, those originating on the anterior or posterior aspect of the lower leg, and intrinsic muscles, originating on the dorsal (top) or plantar (base) aspects of the foot.
Extrinsic
All muscles originating on the lower leg except the popliteus muscle are attached to the bones of the foot. The tibia and fibula and the interosseous membrane separate these muscles into anterior and posterior groups, in their turn subdivided into subgroups and layers.
Anterior group
Extensor group: the tibialis anterior originates on the proximal half of the tibia and the interosseous membrane and is inserted near the tarsometatarsal joint of the first digit. In the non-weight-bearing leg, the tibialis anterior dorsiflexes the foot and lifts its medial edge (supination). In the weight-bearing leg, it brings the leg toward the back of the foot, like in rapid walking. The extensor digitorum longus arises on the lateral tibial condyle and along the fibula, and is inserted on the second to fifth digits and proximally on the fifth metatarsal. The extensor digitorum longus acts similarly to the tibialis anterior except that it also dorsiflexes the digits. The extensor hallucis longus originates medially on the fibula and is inserted on the first digit. It dorsiflexes the big toe and also acts on the ankle in the unstressed leg. In the weight-bearing leg, it acts similarly to the tibialis anterior.
Peroneal group: the peroneus longus arises on the proximal aspect of the fibula and peroneus brevis below it. Together, their tendons pass behind the lateral malleolus. Distally, the peroneus longus crosses the plantar side of the foot to reach its insertion on the first tarsometatarsal joint, while the peroneus brevis reaches the proximal part of the fifth metatarsal. These two muscles are the strongest pronators and aid in plantar flexion. The peroneus longus also acts like a bowstring that braces the transverse arch of the foot.
Posterior group
The superficial layer of posterior leg muscles is formed by the triceps surae and the plantaris. The triceps surae consists of the soleus and the two heads of the gastrocnemius. The heads of gastrocnemius arise on the femur, proximal to the condyles, and the soleus arises on the proximal dorsal parts of the tibia and fibula. The tendons of these muscles merge to be inserted onto the calcaneus as the Achilles tendon. The plantaris originates on the femur proximal to the lateral head of the gastrocnemius and its long tendon is embedded medially into the Achilles tendon. The triceps surae is the primary plantar flexor. Its strength becomes most obvious during ballet dancing. It is fully activated only with the knee extended, because the gastrocnemius is shortened during flexion of the knee. During walking it not only lifts the heel, but also flexes the knee, assisted by the plantaris.
In the deep layer of posterior muscles, the tibialis posterior arises proximally on the back of the interosseous membrane and adjoining bones, and divides into two parts in the sole of the foot to attach to the tarsus. In the non-weight-bearing leg, it produces plantar flexion and supination, and, in the weight-bearing leg, it proximates the heel to the calf. The flexor hallucis longus arises on the back of the fibula on the lateral side, and its relatively thick muscle belly extends distally down to the flexor retinaculum where it passes over to the medial side to stretch across the sole to the distal phalanx of the first digit. The popliteus is also part of this group, but, with its oblique course across the back of the knee, does not act on the foot.
Intrinsic
On the top of the foot, the tendons of extensor digitorum brevis and extensor hallucis brevis lie deep in the system of long extrinsic extensor tendons. They both arise on the calcaneus and extend into the dorsal aponeurosis of digits one to four, just beyond the penultimate joints. They act to dorsiflex the digits. Similar to the intrinsic muscles of the hand, there are three groups of muscles in the sole of foot, those of the first and last digits, and a central group:
Muscles of the big toe: the abductor hallucis stretches medially along the border of the sole, from the calcaneus to the first digit. Below its tendon, the tendons of the long flexors pass through the tarsal canal. The abductor hallucis is an abductor and a weak flexor, and also helps maintain the arch of the foot. The flexor hallucis brevis arises on the medial cuneiform bone and related ligaments and tendons. An important plantar flexor, it is crucial to ballet dancing. Both these muscles are inserted with two heads proximally and distally to the first metatarsophalangeal joint. The adductor hallucis is part of this group, though it originally formed a separate system (see contrahens). It has two heads, the oblique head originating obliquely across the central part of the midfoot, and the transverse head originating near the metatarsophalangeal joints of digits five to three. Both heads are inserted into the lateral sesamoid bone of the first digit. The adductor hallucis acts as a tensor of the plantar arches and also adducts the big toe and might plantar flex the proximal phalanx.
Muscles of the little toe: Stretching laterally from the calcaneus to the proximal phalanx of the fifth digit, the abductor digiti minimi forms the lateral margin of the foot and is the largest of the muscles of the fifth digit. Arising from the base of the fifth metatarsal, the flexor digiti minimi is inserted together with the abductor on the first phalanx. Often absent, the opponens digiti minimi originates near the cuboid bone and is inserted on the fifth metatarsal bone. These three muscles act to support the arch of the foot and to plantar flex the fifth digit.
Central muscle group: The four lumbricals arise on the medial side of the tendons of flexor digitorum longus and are inserted on the medial margins of the proximal phalanges. The quadratus plantae originates with two slips from the lateral and medial margins of the calcaneus and inserts into the lateral margin of the flexor digitorum tendon. It is also known as the flexor accessorius. The flexor digitorum brevis arises inferiorly on the calcaneus and its three tendons are inserted into the middle phalanges of digits two to four (sometimes also the fifth digit). These tendons divide before their insertions and the tendons of flexor digitorum longus pass through these divisions. Flexor digitorum brevis flexes the middle phalanges. It is occasionally absent. Between the toes, the dorsal and plantar interossei stretch from the metatarsals to the proximal phalanges of digits two to five. The plantar interossei adduct and the dorsal interossei abduct these digits, and are also plantar flexors at the metatarsophalangeal joints.
Clinical significance
Due to their position and function, feet are exposed to a variety of potential infections and injuries, including athlete's foot, bunions, ingrown toenails, Morton's neuroma, plantar fasciitis, plantar warts, and stress fractures. In addition, there are several genetic disorders that can affect the shape and function of the feet, including clubfoot or flat feet.
This leaves humans more vulnerable to medical problems that are caused by poor leg and foot alignments. Also, the wearing of shoes, sneakers and boots can impede proper alignment and movement within the ankle and foot. For example, high-heeled shoes are known to throw off the natural weight balance (this can also affect the lower back). For the sake of posture, flat soles with no heels are advised.
A doctor who specializes in the treatment of the feet practices podiatry and is called a podiatrist. A pedorthist specializes in the use and modification of footwear to treat problems related to the lower limbs.
Fractures of the foot include:
Lisfranc fracture – in which one or all of the metatarsals are displaced from the tarsus
Jones fracture – a fracture of the fifth metatarsal
March fracture – a fracture of the distal third of one of the metatarsals occurring because of recurrent stress
Calcaneal fracture
Broken toe – a fracture of a phalanx
Cuneiform fracture – Due to the ligamentous support of the midfoot, isolated cuneiform fractures are rare.
Pronation
In anatomy, pronation is a rotational movement of the forearm (at the radioulnar joint) or foot (at the subtalar and talocalcaneonavicular joints). Pronation of the foot refers to how the body distributes weight as it cycles through the gait. During the gait cycle the foot can pronate in many different ways based on rearfoot and forefoot function. Types of pronation include neutral pronation, underpronation (supination), and overpronation.
Neutral pronation
An individual who neutrally pronates initially strikes the ground on the lateral side of the heel. As the individual transfers weight from the heel to the metatarsus, the foot will roll in a medial direction, such that the weight is distributed evenly across the metatarsus. In this stage of the gait, the knee will generally, but not always, track directly over the hallux.
This rolling inward motion as the foot progresses from heel to toe is the way that the body naturally absorbs shock. Neutral pronation is the most ideal, efficient type of gait when using a heel strike gait; in a forefoot strike, the body absorbs shock instead via flexion of the foot.
Overpronation
As with a neutral pronator, an individual who overpronates initially strikes the ground on the lateral side of the heel. As the individual transfers weight from the heel to the metatarsus, however, the foot will roll too far in a medial direction, such that the weight is distributed unevenly across the metatarsus, with excessive weight borne on the hallux. In this stage of the gait, the knee will generally, but not always, track inward.
An overpronator does not absorb shock efficiently. Imagine someone jumping onto a diving board, but the board is so flimsy that when it is struck, it bends and allows the person to plunge straight down into the water instead of back into the air. Similarly, an overpronator's arches will collapse, or the ankles will roll inward (or a combination of the two) as they cycle through the gait. An individual whose bone structure involves external rotation at the hip, knee, or ankle will be more likely to overpronate than one whose bone structure has internal rotation or central alignment. An individual who overpronates tends to wear down their running shoes on the medial (inside) side of the shoe toward the toe area.
When choosing a running or walking shoe, a person with overpronation can choose shoes that have good inside support, usually provided by stronger material at the inside sole and arch of the shoe. This support is usually visible: the inside support area is marked by strong greyish material that supports the weight as the person lands on the outside of the foot and then rolls onto the inside of the foot.
Underpronation (supination)
An individual who underpronates also initially strikes the ground on the lateral side of the heel. As the individual transfers weight from the heel to the metatarsus, the foot will not roll far enough in a medial direction. The weight is distributed unevenly across the metatarsus, with excessive weight borne on the fifth metatarsal, toward the lateral side of the foot. In this stage of the gait, the knee will generally, but not always, track laterally of the hallux.
Like an overpronator, an underpronator does not absorb shock efficiently, but for the opposite reason. The underpronated foot is like a diving board that, instead of failing to spring someone in the air because it is too flimsy, fails to do so because it is too rigid. There is virtually no give. An underpronator's arches or ankles do not experience much motion as they cycle through the gait. An individual whose bone structure involves internal rotation at the hip, knee, or ankle will be more likely to underpronate than one whose bone structure has external rotation or central alignment. Usually – but not always – those who are bow-legged tend to underpronate. An individual who underpronates tends to wear down their running shoes on the lateral (outside) side of the shoe toward the rear of the shoe in the heel area.
Society and culture
Humans usually wear shoes or similar footwear for protection from hazards when walking outside. There are a number of contexts where it is considered inappropriate to wear shoes. Some people consider it rude to wear shoes into a house, and sacred places in multiple cultures, such as a Māori marae, should only be entered with bare feet.
Foot fetishism is the most common sexual fetish.
Other animals
A paw is the soft foot of a mammal, generally a quadruped, that has claws or nails (e.g., a cat or dog's paw). A hard foot is called a hoof. Depending on style of locomotion, animals can be classified as plantigrade (sole walking), digitigrade (toe walking), or unguligrade (nail walking).
The metatarsals are the bones that make up the main part of the foot in humans, and part of the leg in large animals or of the paw in smaller animals. The number of metatarsals is directly related to the mode of locomotion, with many larger animals having their digits reduced to two (elk, cow, sheep) or one (horse). The metatarsal bones of feet and paws are tightly grouped compared to, most notably, the human hand, where the thumb metacarpal diverges from the rest of the metacarpus.
Metaphorical and cultural usage
The word "foot" is used to refer to a "...linear measure was in Old English (the exact length has varied over time), this being considered the length of a man's foot; a unit of measure used widely and anciently. In this sense the plural is often foot. The current inch and foot are implied from measurements in 12c."
The word "foot" also has a musical meaning; a "...metrical foot (late Old English, translating Latin pes, Greek pous in the same sense) is commonly taken to represent one rise and one fall of a foot: keeping time according to some, dancing according to others."
The word "foot" was used in Middle English to mean "a person" (c. 1200).
The expression "to put one's best foot foremost" was first recorded in 1849 (Shakespeare has "the better foot before", 1596). The expression to "put one's foot in (one's) mouth", meaning to "say something stupid", was first used in 1942. The expression "put (one's) foot in something", meaning to "make a mess of it", was used in 1823.
The word "footloose" was first used in the 1690s, meaning "free to move the feet, unshackled"; the figurative sense of "free to act as one pleases" was first used in 1873. Like "footloose", "flat-footed" at first had its obvious literal meaning (in 1600, it meant "with flat feet") but by 1912 it meant "unprepared" (U.S. baseball slang).
| Biology and health sciences | Human anatomy | null |
11512 | https://en.wikipedia.org/wiki/Fast%20Fourier%20transform | Fast Fourier transform | A fast Fourier transform (FFT) is an algorithm that computes the Discrete Fourier Transform (DFT) of a sequence, or its inverse (IDFT). A Fourier transform converts a signal from its original domain (often time or space) to a representation in the frequency domain and vice versa. The DFT is obtained by decomposing a sequence of values into components of different frequencies. This operation is useful in many fields, but computing it directly from the definition is often too slow to be practical. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors. As a result, it manages to reduce the complexity of computing the DFT from O(n^2), which arises if one simply applies the definition of DFT, to O(n log n), where n is the data size. The difference in speed can be enormous, especially for long data sets where n may be in the thousands or millions. In the presence of round-off error, many FFT algorithms are much more accurate than evaluating the DFT definition directly or indirectly. There are many different FFT algorithms based on a wide range of published theories, from simple complex-number arithmetic to group theory and number theory.
Fast Fourier transforms are widely used for applications in engineering, music, science, and mathematics. The basic ideas were popularized in 1965, but some algorithms had been derived as early as 1805. In 1994, Gilbert Strang described the FFT as "the most important numerical algorithm of our lifetime", and it was included in Top 10 Algorithms of 20th Century by the IEEE magazine Computing in Science & Engineering.
The best-known FFT algorithms depend upon the factorization of n, but there are FFTs with O(n log n) complexity for all, even prime, n. Many FFT algorithms depend only on the fact that e^(-2πi/n) is a primitive n'th root of unity, and thus can be applied to analogous transforms over any finite field, such as number-theoretic transforms. Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/n factor, any FFT algorithm can easily be adapted for it.
History
The development of fast algorithms for DFT can be traced to Carl Friedrich Gauss's unpublished 1805 work on the orbits of asteroids Pallas and Juno. Gauss wanted to interpolate the orbits from sample observations; his method was very similar to the one that would be published in 1965 by James Cooley and John Tukey, who are generally credited for the invention of the modern generic FFT algorithm. While Gauss's work predated even Joseph Fourier's 1822 results, he did not analyze the method's complexity, and eventually used other methods to achieve the same end.
Between 1805 and 1965, some versions of FFT were published by other authors. Frank Yates in 1932 published his version called interaction algorithm, which provided efficient computation of Hadamard and Walsh transforms. Yates' algorithm is still used in the field of statistical design and analysis of experiments. In 1942, G. C. Danielson and Cornelius Lanczos published their version to compute DFT for x-ray crystallography, a field where calculation of Fourier transforms presented a formidable bottleneck. While many methods in the past had focused on reducing the constant factor for O(n^2) computation by taking advantage of symmetries, Danielson and Lanczos realized that one could use the periodicity and apply a doubling trick to "double [n] with only slightly more than double the labor", though like Gauss they did not do the analysis to discover that this led to O(n log n) scaling.
James Cooley and John Tukey independently rediscovered these earlier algorithms and published a more general FFT in 1965 that is applicable when n is composite and not necessarily a power of 2, as well as analyzing the O(n log n) scaling. Tukey came up with the idea during a meeting of President Kennedy's Science Advisory Committee where a discussion topic involved detecting nuclear tests by the Soviet Union by setting up sensors to surround the country from outside. To analyze the output of these sensors, an FFT algorithm would be needed. In discussion with Tukey, Richard Garwin recognized the general applicability of the algorithm not just to national security problems, but also to a wide range of problems including one of immediate interest to him, determining the periodicities of the spin orientations in a 3-D crystal of Helium-3. Garwin gave Tukey's idea to Cooley (both worked at IBM's Watson labs) for implementation. Cooley and Tukey published the paper in a relatively short time of six months. As Tukey did not work at IBM, the patentability of the idea was doubted and the algorithm went into the public domain, which, through the computing revolution of the next decade, made FFT one of the indispensable algorithms in digital signal processing.
Definition
Let x_0, ..., x_{n-1} be complex numbers. The DFT is defined by the formula
X_k = \sum_{m=0}^{n-1} x_m e^{-2\pi i k m / n}, for k = 0, ..., n-1,
where e^{2\pi i / n} is a primitive n'th root of 1.
Evaluating this definition directly requires O(n^2) operations: there are n outputs X_k, and each output requires a sum of n terms. An FFT is any method to compute the same results in O(n log n) operations. All known FFT algorithms require Θ(n log n) operations, although there is no known proof that lower complexity is impossible.
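As an illustration, here is a minimal sketch of the direct O(n^2) evaluation described above (Python with NumPy is an assumption of this rewrite, not part of the article); it is checked against a library FFT on a small input.

```python
import numpy as np

def dft_direct(x):
    # Direct evaluation of the DFT definition: n outputs, each a sum of
    # n terms, hence O(n^2) operations overall.
    x = np.asarray(x, dtype=complex)
    n = len(x)
    k = np.arange(n).reshape(-1, 1)   # output (frequency) index
    m = np.arange(n)                  # input (time) index
    return (x * np.exp(-2j * np.pi * k * m / n)).sum(axis=1)

x = np.random.default_rng(0).standard_normal(8)
assert np.allclose(dft_direct(x), np.fft.fft(x))
```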
To illustrate the savings of an FFT, consider the count of complex multiplications and additions for n = 4096 data points. Evaluating the DFT's sums directly involves n^2 complex multiplications and n(n - 1) complex additions, of which O(n) operations can be saved by eliminating trivial operations such as multiplications by 1, leaving about 30 million operations. In contrast, the radix-2 Cooley–Tukey algorithm, for n a power of 2, can compute the same result with only (n/2) log2(n) complex multiplications (again, ignoring simplifications of multiplications by 1 and similar) and n log2(n) complex additions, in total about 30,000 operations — a thousand times less than with direct evaluation. In practice, actual performance on modern computers is usually dominated by factors other than the speed of arithmetic operations and the analysis is a complicated subject (for example, see Frigo & Johnson, 2005), but the overall improvement from O(n^2) to O(n log n) remains.
Algorithms
Cooley–Tukey algorithm
By far the most commonly used FFT is the Cooley–Tukey algorithm. This is a divide-and-conquer algorithm that recursively breaks down a DFT of any composite size n = n1·n2 into n1 smaller DFTs of size n2, along with O(n) multiplications by complex roots of unity traditionally called twiddle factors (after Gentleman and Sande, 1966).
This method (and the general idea of an FFT) was popularized by a publication of Cooley and Tukey in 1965, but it was later discovered that those two authors had independently re-invented an algorithm known to Carl Friedrich Gauss around 1805 (and subsequently rediscovered several times in limited forms).
The best known use of the Cooley–Tukey algorithm is to divide the transform into two pieces of size n/2 at each step, and is therefore limited to power-of-two sizes, but any factorization can be used in general (as was known to both Gauss and Cooley/Tukey). These are called the radix-2 and mixed-radix cases, respectively (and other variants such as the split-radix FFT have their own names as well). Although the basic idea is recursive, most traditional implementations rearrange the algorithm to avoid explicit recursion. Also, because the Cooley–Tukey algorithm breaks the DFT into smaller DFTs, it can be combined arbitrarily with any other algorithm for the DFT, such as those described below.
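The recursive radix-2 decomposition can be sketched in a few lines. The following Python/NumPy code is an illustrative assumption of this rewrite (the language and names are not from the Cooley–Tukey paper); practical libraries use iterative, cache-aware variants instead.

```python
import numpy as np

def fft_radix2(x):
    # Recursive radix-2 Cooley-Tukey sketch; assumes len(x) is a power of 2.
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even = fft_radix2(x[0::2])                  # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])                   # DFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd,    # first half of outputs
                           even - twiddle * odd])   # second half of outputs

x = np.random.default_rng(1).standard_normal(16)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```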
Other FFT algorithms
There are FFT algorithms other than Cooley–Tukey.
For n = n1·n2 with coprime n1 and n2, one can use the prime-factor (Good–Thomas) algorithm (PFA), based on the Chinese remainder theorem, to factorize the DFT similarly to Cooley–Tukey but without the twiddle factors. The Rader–Brenner algorithm (1976) is a Cooley–Tukey-like factorization but with purely imaginary twiddle factors, reducing multiplications at the cost of increased additions and reduced numerical stability; it was later superseded by the split-radix variant of Cooley–Tukey (which achieves the same multiplication count but with fewer additions and without sacrificing accuracy). Algorithms that recursively factorize the DFT into smaller operations other than DFTs include the Bruun and QFT algorithms. (The Rader–Brenner and QFT algorithms were proposed for power-of-two sizes, but it is possible that they could be adapted to general composite n. Bruun's algorithm applies to arbitrary even composite sizes.) Bruun's algorithm, in particular, is based on interpreting the FFT as a recursive factorization of the polynomial z^n - 1, here into real-coefficient polynomials of the form z^m - 1 and z^(2m) + a·z^m + 1.
Another polynomial viewpoint is exploited by the Winograd FFT algorithm, which factorizes z^n - 1 into cyclotomic polynomials—these often have coefficients of 1, 0, or −1, and therefore require few (if any) multiplications, so Winograd can be used to obtain minimal-multiplication FFTs and is often used to find efficient algorithms for small factors. Indeed, Winograd showed that the DFT can be computed with only O(n) irrational multiplications, leading to a proven achievable lower bound on the number of multiplications for power-of-two sizes; this comes at the cost of many more additions, a tradeoff no longer favorable on modern processors with hardware multipliers. In particular, Winograd also makes use of the PFA as well as an algorithm by Rader for FFTs of prime sizes.
Rader's algorithm, exploiting the existence of a generator for the multiplicative group modulo prime n, expresses a DFT of prime size n as a cyclic convolution of (composite) size n - 1, which can then be computed by a pair of ordinary FFTs via the convolution theorem (although Winograd uses other convolution methods). Another prime-size FFT is due to L. I. Bluestein, and is sometimes called the chirp-z algorithm; it also re-expresses a DFT as a convolution, but this time of the same size (which can be zero-padded to a power of two and evaluated by radix-2 Cooley–Tukey FFTs, for example), via the identity km = -(k - m)^2/2 + k^2/2 + m^2/2.
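The re-expression above can be sketched directly. The following Python/NumPy code is an illustrative assumption of this rewrite, not Bluestein's original formulation; it computes a DFT of arbitrary length as a zero-padded circular convolution evaluated with power-of-two FFTs.

```python
import numpy as np

def bluestein_dft(x):
    # Chirp-z (Bluestein) sketch: use km = -(k-m)^2/2 + k^2/2 + m^2/2 to
    # turn a length-n DFT into a convolution, evaluated with padded FFTs.
    x = np.asarray(x, dtype=complex)
    n = len(x)
    m = np.arange(n)
    chirp = np.exp(-1j * np.pi * m**2 / n)        # exp(-i*pi*m^2/n)
    size = 1 << (2 * n - 1).bit_length()          # power of two >= 2n-1
    a = np.zeros(size, dtype=complex)
    a[:n] = x * chirp
    b = np.zeros(size, dtype=complex)
    b[:n] = np.conj(chirp)                        # exp(+i*pi*m^2/n)
    b[size - n + 1:] = np.conj(chirp)[:0:-1]      # wrap the negative lags
    conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))
    return chirp * conv[:n]

x = np.random.default_rng(2).standard_normal(7)   # prime length
assert np.allclose(bluestein_dft(x), np.fft.fft(x))
```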
Hexagonal fast Fourier transform (HFFT) aims at computing an efficient FFT for the hexagonally-sampled data by using a new addressing scheme for hexagonal grids, called Array Set Addressing (ASA).
FFT algorithms specialized for real or symmetric data
In many applications, the input data for the DFT are purely real, in which case the outputs satisfy the symmetry X_{n-k} = X_k* (where * denotes complex conjugation),
and efficient FFT algorithms have been designed for this situation (see e.g. Sorensen, 1987). One approach consists of taking an ordinary algorithm (e.g. Cooley–Tukey) and removing the redundant parts of the computation, saving roughly a factor of two in time and memory. Alternatively, it is possible to express an even-length real-input DFT as a complex DFT of half the length (whose real and imaginary parts are the even/odd elements of the original real data), followed by post-processing operations.
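A quick numerical check of this symmetry, and of the half-length output produced by a specialized real-input routine, can be sketched as follows (Python/NumPy is an assumption of this rewrite; numpy's rfft is used only as an example of such a routine):

```python
import numpy as np

x = np.random.default_rng(3).standard_normal(16)   # purely real input
X = np.fft.fft(x)

# Hermitian symmetry: X[n-k] == conj(X[k]), so only about half the
# outputs are independent.
assert np.allclose(X[1:], np.conj(X[:0:-1]))

# A real-input routine returns just the non-redundant half (bins 0..n/2).
assert np.allclose(np.fft.rfft(x), X[:len(x) // 2 + 1])
```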
It was once believed that real-input DFTs could be more efficiently computed by means of the discrete Hartley transform (DHT), but it was subsequently argued that a specialized real-input DFT algorithm (FFT) can typically be found that requires fewer operations than the corresponding DHT algorithm (FHT) for the same number of inputs. Bruun's algorithm (above) is another method that was initially proposed to take advantage of real inputs, but it has not proved popular.
There are further FFT specializations for the cases of real data that have even/odd symmetry, in which case one can gain another factor of roughly two in time and memory and the DFT becomes the discrete cosine/sine transform(s) (DCT/DST). Instead of directly modifying an FFT algorithm for these cases, DCTs/DSTs can also be computed via FFTs of real data combined with pre- and post-processing.
Computational issues
Bounds on complexity and operation counts
A fundamental question of longstanding theoretical interest is to prove lower bounds on the complexity and exact operation counts of fast Fourier transforms, and many open problems remain. It is not rigorously proved whether DFTs truly require Ω(n log n) (i.e., order n log n or greater) operations, even for the simple case of power-of-two sizes, although no algorithms with lower complexity are known. In particular, the count of arithmetic operations is usually the focus of such questions, although actual performance on modern-day computers is determined by many other factors such as cache or CPU pipeline optimization.
Following work by Shmuel Winograd (1978), a tight lower bound is known for the number of real multiplications required by an FFT. It can be shown that only 4n - 2(log2 n)^2 - 2 log2(n) - 4 irrational real multiplications are required to compute a DFT of power-of-two length n. Moreover, explicit algorithms that achieve this count are known (Heideman & Burrus, 1986; Duhamel, 1990). However, these algorithms require too many additions to be practical, at least on modern computers with hardware multipliers (Duhamel, 1990; Frigo & Johnson, 2005).
A tight lower bound is not known on the number of required additions, although lower bounds have been proved under some restrictive assumptions on the algorithms. In 1973, Morgenstern proved an Ω(n log n) lower bound on the addition count for algorithms where the multiplicative constants have bounded magnitudes (which is true for most but not all FFT algorithms). Pan (1986) proved an Ω(n log n) lower bound assuming a bound on a measure of the FFT algorithm's asynchronicity, but the generality of this assumption is unclear. For the case of power-of-two n, Papadimitriou (1979) argued that the number n log2(n) of complex-number additions achieved by Cooley–Tukey algorithms is optimal under certain assumptions on the graph of the algorithm (his assumptions imply, among other things, that no additive identities in the roots of unity are exploited). (This argument would imply that at least 2n log2(n) real additions are required, although this is not a tight bound because extra additions are required as part of complex-number multiplications.) Thus far, no published FFT algorithm has achieved fewer than n log2(n) complex-number additions (or their equivalent) for power-of-two n.
A third problem is to minimize the total number of real multiplications and additions, sometimes called the arithmetic complexity (although in this context it is the exact count and not the asymptotic complexity that is being considered). Again, no tight lower bound has been proven. Since 1968, however, the lowest published count for power-of-two n was long achieved by the split-radix FFT algorithm, which requires 4n log2(n) - 6n + 8 real multiplications and additions for n > 1. This was recently reduced to roughly (34/9) n log2(n) (Johnson and Frigo, 2007; Lundy and Van Buskirk, 2007). A slightly larger count (but still better than split radix for n ≥ 256) was shown to be provably optimal for n ≤ 512 under additional restrictions on the possible algorithms (split-radix-like flowgraphs with unit-modulus multiplicative factors), by reduction to a satisfiability modulo theories problem solvable by brute force (Haynal & Haynal, 2011).
Most of the attempts to lower or prove the complexity of FFT algorithms have focused on the ordinary complex-data case, because it is the simplest. However, complex-data FFTs are so closely related to algorithms for related problems such as real-data FFTs, discrete cosine transforms, discrete Hartley transforms, and so on, that any improvement in one of these would immediately lead to improvements in the others (Duhamel & Vetterli, 1990).
Approximations
All of the FFT algorithms discussed above compute the DFT exactly (i.e. neglecting floating-point errors). A few FFT algorithms have been proposed, however, that compute the DFT approximately, with an error that can be made arbitrarily small at the expense of increased computations. Such algorithms trade the approximation error for increased speed or other properties. For example, an approximate FFT algorithm by Edelman et al. (1999) achieves lower communication requirements for parallel computing with the help of a fast multipole method. A wavelet-based approximate FFT by Guo and Burrus (1996) takes sparse inputs/outputs (time/frequency localization) into account more efficiently than is possible with an exact FFT. Another algorithm for approximate computation of a subset of the DFT outputs is due to Shentov et al. (1995). The Edelman algorithm works equally well for sparse and non-sparse data, since it is based on the compressibility (rank deficiency) of the Fourier matrix itself rather than the compressibility (sparsity) of the data. Conversely, if the data are sparse—that is, if only k out of n Fourier coefficients are nonzero—then the complexity can be reduced to O(k log(n) log(n/k)), and this has been demonstrated to lead to practical speedups compared to an ordinary FFT for n/k > 32 in a large-n example (n = 2^22) using a probabilistic approximate algorithm (which estimates the largest k coefficients to several decimal places).
Accuracy
FFT algorithms have errors when finite-precision floating-point arithmetic is used, but these errors are typically quite small; most FFT algorithms, e.g. Cooley–Tukey, have excellent numerical properties as a consequence of the pairwise summation structure of the algorithms. The upper bound on the relative error for the Cooley–Tukey algorithm is O(ε log n), compared to O(ε n^(3/2)) for the naïve DFT formula, where ε is the machine floating-point relative precision. In fact, the root mean square (rms) errors are much better than these upper bounds, being only O(ε √(log n)) for Cooley–Tukey and O(ε √n) for the naïve DFT (Schatzman, 1996). These results, however, are very sensitive to the accuracy of the twiddle factors used in the FFT (i.e. the trigonometric function values), and it is not unusual for incautious FFT implementations to have much worse accuracy, e.g. if they use inaccurate trigonometric recurrence formulas. Some FFTs other than Cooley–Tukey, such as the Rader–Brenner algorithm, are intrinsically less stable.
In fixed-point arithmetic, the finite-precision errors accumulated by FFT algorithms are worse, with rms errors growing as O(√n) for the Cooley–Tukey algorithm (Welch, 1969). Achieving this accuracy requires careful attention to scaling to minimize loss of precision, and fixed-point FFT algorithms involve rescaling at each intermediate stage of decompositions like Cooley–Tukey.
To verify the correctness of an FFT implementation, rigorous guarantees can be obtained in O(n log n) time by a simple procedure checking the linearity, impulse-response, and time-shift properties of the transform on random inputs (Ergün, 1995).
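The flavor of such a check can be sketched as follows (a simplified illustration in Python/NumPy, not Ergün's full procedure; the function name, tolerances and trial count are assumptions of this rewrite):

```python
import numpy as np

def spot_check_fft(fft, n=64, trials=8):
    # Randomized checks of linearity, the impulse response, and the
    # circular time-shift property of a candidate FFT routine.
    rng = np.random.default_rng(4)
    for _ in range(trials):
        x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        a, b = rng.standard_normal(2)
        # Linearity: FFT(a*x + b*y) == a*FFT(x) + b*FFT(y)
        assert np.allclose(fft(a * x + b * y), a * fft(x) + b * fft(y))
        # Shifting the input by one sample multiplies bin k by exp(-2*pi*i*k/n)
        phase = np.exp(-2j * np.pi * np.arange(n) / n)
        assert np.allclose(fft(np.roll(x, 1)), phase * fft(x))
    # A unit impulse at index 0 transforms to the all-ones vector.
    impulse = np.zeros(n, dtype=complex)
    impulse[0] = 1.0
    assert np.allclose(fft(impulse), np.ones(n))

spot_check_fft(np.fft.fft)
```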
The values for intermediate frequencies may be obtained by various averaging methods.
Multidimensional FFTs
As defined in the multidimensional DFT article, the multidimensional DFT X_k = \sum_{n=0}^{N-1} e^{-2\pi i \, k \cdot (n/N)} x_n
transforms an array x_n with a d-dimensional vector of indices n = (n_1, ..., n_d) by a set of d nested summations (over n_j = 0, ..., N_j - 1 for each j), where the division n/N = (n_1/N_1, ..., n_d/N_d) is performed element-wise. Equivalently, it is the composition of a sequence of d sets of one-dimensional DFTs, performed along one dimension at a time (in any order).
This compositional viewpoint immediately provides the simplest and most common multidimensional DFT algorithm, known as the row-column algorithm (after the two-dimensional case, below). That is, one simply performs a sequence of one-dimensional FFTs (by any of the above algorithms): first you transform along the n_1 dimension, then along the n_2 dimension, and so on (actually, any ordering works). This method is easily shown to have the usual O(n log n) complexity, where n is the total number of data points transformed. In particular, there are n/N_1 transforms of size N_1, etc., so the complexity of the sequence of FFTs is (n/N_1) O(N_1 log N_1) + ... + (n/N_d) O(N_d log N_d) = O(n log n).
In two dimensions, the x_k can be viewed as an N_1 × N_2 matrix, and this algorithm corresponds to first performing the FFT of all the rows (resp. columns), grouping the resulting transformed rows (resp. columns) together as another N_1 × N_2 matrix, and then performing the FFT on each of the columns (resp. rows) of this second matrix, and similarly grouping the results into the final result matrix.
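A minimal sketch of the two-dimensional row-column algorithm (Python/NumPy assumed; numpy's fft2 serves only as the reference for the check):

```python
import numpy as np

def fft2_row_column(x):
    # Row-column 2-D FFT: 1-D FFTs along every row, then along every
    # column of the intermediate result (any 1-D FFT could be plugged in).
    x = np.asarray(x, dtype=complex)
    after_rows = np.fft.fft(x, axis=1)
    return np.fft.fft(after_rows, axis=0)

a = np.random.default_rng(5).standard_normal((4, 8))
assert np.allclose(fft2_row_column(a), np.fft.fft2(a))
```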
In more than two dimensions, it is often advantageous for cache locality to group the dimensions recursively. For example, a three-dimensional FFT might first perform two-dimensional FFTs of each planar slice for each fixed n_1, and then perform the one-dimensional FFTs along the n_1 direction. More generally, an asymptotically optimal cache-oblivious algorithm consists of recursively dividing the dimensions into two groups (n_1, ..., n_{d/2}) and (n_{d/2+1}, ..., n_d) that are transformed recursively (rounding if d is not even) (see Frigo and Johnson, 2005). Still, this remains a straightforward variation of the row-column algorithm that ultimately requires only a one-dimensional FFT algorithm as the base case, and still has O(n log n) complexity. Yet another variation is to perform matrix transpositions in between transforming subsequent dimensions, so that the transforms operate on contiguous data; this is especially important for out-of-core and distributed memory situations where accessing non-contiguous data is extremely time-consuming.
There are other multidimensional FFT algorithms that are distinct from the row-column algorithm, although all of them have O(n log n) complexity. Perhaps the simplest non-row-column FFT is the vector-radix FFT algorithm, which is a generalization of the ordinary Cooley–Tukey algorithm where one divides the transform dimensions by a vector r = (r_1, r_2, ..., r_d) of radices at each step. (This may also have cache benefits.) The simplest case of vector-radix is where all of the radices are equal (e.g. vector-radix-2 divides all of the dimensions by two), but this is not necessary. Vector radix with only a single non-unit radix at a time, i.e. r = (1, ..., 1, r, 1, ..., 1), is essentially a row-column algorithm. Other, more complicated, methods include polynomial transform algorithms due to Nussbaumer (1977), which view the transform in terms of convolutions and polynomial products. See Duhamel and Vetterli (1990) for more information and references.
Other generalizations
An O(n^(5/2) log n) generalization to spherical harmonics on the sphere S^2 with n^2 nodes was described by Mohlenkamp, along with an algorithm conjectured (but not proven) to have O(n^2 log^2(n)) complexity; Mohlenkamp also provides an implementation in the libftsh library. A spherical-harmonic algorithm with O(n^2 log n) complexity is described by Rokhlin and Tygert.
The fast folding algorithm is analogous to the FFT, except that it operates on a series of binned waveforms rather than a series of real or complex scalar values. Rotation (which in the FFT is multiplication by a complex phasor) is a circular shift of the component waveform.
Various groups have also published FFT algorithms for non-equispaced data, as reviewed in Potts et al. (2001). Such algorithms do not strictly compute the DFT (which is only defined for equispaced data), but rather some approximation thereof (a non-uniform discrete Fourier transform, or NDFT, which itself is often computed only approximately). More generally there are various other methods of spectral estimation.
Applications
The FFT is used in digital recording, sampling, additive synthesis and pitch correction software.
The FFT's importance derives from the fact that it has made working in the frequency domain equally computationally feasible as working in the temporal or spatial domain. Some of the important applications of the FFT include:
fast large-integer multiplication algorithms and polynomial multiplication (see the sketch after this list),
efficient matrix–vector multiplication for Toeplitz, circulant and other structured matrices,
filtering algorithms (see overlap–add and overlap–save methods),
fast algorithms for discrete cosine or sine transforms (e.g. fast DCT used for JPEG and MPEG/MP3 encoding and decoding),
fast Chebyshev approximation,
solving difference equations,
computation of isotopic distributions.
modulation and demodulation of complex data symbols using orthogonal frequency division multiplexing (OFDM) for 5G, LTE, Wi-Fi, DSL, and other modern communication systems.
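As a small illustration of the first application above, polynomial (or convolution-based) multiplication can be performed by pointwise multiplication of zero-padded transforms, via the convolution theorem. The sketch below uses Python/NumPy real-input FFTs; the function name and the use of floating point (with rounding) are assumptions of this rewrite.

```python
import numpy as np

def poly_multiply(p, q):
    # Multiply coefficient vectors p and q via the convolution theorem:
    # zero-pad to the full product length, transform, multiply pointwise,
    # and transform back.
    m = len(p) + len(q) - 1
    return np.fft.irfft(np.fft.rfft(p, n=m) * np.fft.rfft(q, n=m), n=m)

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
print(np.round(poly_multiply([1, 2], [3, 4]), 6))   # [ 3. 10.  8.]
```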
An original application of the FFT in finance, particularly in the valuation of options, was developed by Marcello Minenna.
Limitation
Despite its strengths, the Fast Fourier Transform (FFT) has limitations, particularly when analyzing signals with non-stationary frequency content—where the frequency characteristics change over time. The FFT provides a global frequency representation, meaning it analyzes frequency information across the entire signal duration. This global perspective makes it challenging to detect short-lived or transient features within signals, as the FFT assumes that all frequency components are present throughout the entire signal.
For cases where frequency information varies over time, alternative transforms like the wavelet transform can be more suitable. The wavelet transform allows for a localized frequency analysis, capturing both frequency and time-based information. This makes it better suited for applications where critical information appears briefly in the signal. These differences highlight that while the FFT is a powerful tool for many applications, it may not be ideal for all types of signal analysis.
Research areas
Big FFTs: With the explosion of big data in fields such as astronomy, the need for 512K FFTs has arisen for certain interferometry calculations. The data collected by projects such as WMAP and LIGO require FFTs of tens of billions of points. As this size does not fit into main memory, so-called out-of-core FFTs are an active area of research.
Approximate FFTs: For applications such as MRI, it is necessary to compute DFTs for nonuniformly spaced grid points and/or frequencies. Multipole-based approaches can compute approximate quantities with some increase in runtime.
Group FFTs: The FFT may also be explained and interpreted using group representation theory, allowing for further generalization. A function on any compact group, including non-cyclic ones, has an expansion in terms of a basis of irreducible matrix elements. It remains an active area of research to find efficient algorithms for performing this change of basis. Applications include efficient spherical harmonic expansion, analyzing certain Markov processes, robotics, etc.
Quantum FFTs: Shor's fast algorithm for integer factorization on a quantum computer has a subroutine to compute the DFT of a binary vector. This is implemented as a sequence of 1- or 2-bit quantum gates now known as the quantum FFT, which is effectively the Cooley–Tukey FFT realized as a particular factorization of the Fourier matrix. Extensions of these ideas are currently being explored.
Language reference
| Mathematics | Harmonic analysis | null |
11522 | https://en.wikipedia.org/wiki/Fly-by-wire | Fly-by-wire | Fly-by-wire (FBW) is a system that replaces the conventional manual flight controls of an aircraft with an electronic interface. The movements of flight controls are converted to electronic signals, and flight control computers determine how to move the actuators at each control surface to provide the ordered response. Implementations either use mechanical flight control backup systems or else are fully electronic.
Improved fully fly-by-wire systems interpret the pilot's control inputs as a desired outcome and calculate the control surface positions required to achieve that outcome; this results in various combinations of rudder, elevator, aileron, flaps and engine controls in different situations using a closed feedback loop. The pilot may not be fully aware of all the control outputs acting to affect the outcome, only that the aircraft is reacting as expected. The fly-by-wire computers act to stabilize the aircraft and adjust the flying characteristics without the pilot's involvement, and to prevent the pilot from operating outside of the aircraft's safe performance envelope.
Rationale
Mechanical and hydro-mechanical flight control systems are relatively heavy and require careful routing of flight control cables through the aircraft by systems of pulleys, cranks, tension cables and hydraulic pipes. Both systems often require redundant backup to deal with failures, which increases weight. Both have limited ability to compensate for changing aerodynamic conditions. Dangerous characteristics such as stalling, spinning and pilot-induced oscillation (PIO), which depend mainly on the stability and structure of the aircraft rather than the control system itself, are dependent on the pilot's actions.
The term "fly-by-wire" implies a purely electrically signaled control system. It is used in the general sense of computer-configured controls, where a computer system is interposed between the operator and the final control actuators or surfaces. This modifies the manual inputs of the pilot in accordance with control parameters.
Side-sticks or conventional flight control yokes can be used to fly fly-by-wire aircraft.
Weight saving
A fly-by-wire aircraft can be lighter than a similar design with conventional controls. This is partly due to the lower overall weight of the system components and partly because the natural stability of the aircraft can be relaxed (slightly for a transport aircraft; more for a maneuverable fighter), which means that the stability surfaces that are part of the aircraft structure can therefore be made smaller. These include the vertical and horizontal stabilizers (fin and tailplane) that are (normally) at the rear of the fuselage. If these structures can be reduced in size, airframe weight is reduced. The advantages of fly-by-wire controls were first exploited by the military and then in the commercial airline market. The Airbus series of airliners used full-authority fly-by-wire controls beginning with their A320 series, see A320 flight control (though some limited fly-by-wire functions existed on A310 aircraft). Boeing followed with their 777 and later designs.
Basic operation
Closed-loop feedback control
A pilot commands the flight control computer to make the aircraft perform a certain action, such as pitch the aircraft up, or roll to one side, by moving the control column or sidestick. The flight control computer then calculates what control surface movements will cause the plane to perform that action and issues those commands to the electronic controllers for each surface. The controllers at each surface receive these commands and then move actuators attached to the control surface until it has moved to where the flight control computer commanded it to. The controllers measure the position of the flight control surface with sensors such as LVDTs.
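The loop just described can be caricatured in a few lines. This is a toy sketch only (Python, with made-up gain, rate limit and time step), not any real flight control law:

```python
def actuator_step(commanded_deg, measured_deg, dt=0.01, gain=8.0, rate_limit=40.0):
    # One update of a toy surface-position loop: drive the actuator toward
    # the commanded deflection at a rate proportional to the error (using a
    # position sensor such as an LVDT as feedback), capped by a slew-rate limit.
    error = commanded_deg - measured_deg
    rate = max(-rate_limit, min(rate_limit, gain * error))   # deg/s
    return measured_deg + rate * dt

# Drive the surface from 0 deg toward a 5 deg command.
position = 0.0
for _ in range(200):
    position = actuator_step(5.0, position)
print(round(position, 2))   # approaches 5.0
```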
Automatic stability systems
Fly-by-wire control systems allow aircraft computers to perform tasks without pilot input. Automatic stability systems operate in this way. Gyroscopes and sensors such as accelerometers are mounted in an aircraft to sense rotation on the pitch, roll and yaw axes. Any movement (from straight and level flight for example) results in signals to the computer, which can automatically move control actuators to stabilize the aircraft.
Safety and redundancy
While traditional mechanical or hydraulic control systems usually fail gradually, the loss of all flight control computers immediately renders the aircraft uncontrollable. For this reason, most fly-by-wire systems incorporate either redundant computers (triplex, quadruplex etc.), some kind of mechanical or hydraulic backup, or a combination of both. A "mixed" control system with mechanical backup feeds back any rudder deflection directly to the pilot and therefore makes a closed-loop (feedback) system pointless for that surface.
Aircraft systems may be quadruplexed (four independent channels) to prevent loss of signals in the case of failure of one or even two channels. High performance aircraft that have fly-by-wire controls (also called CCVs or Control-Configured Vehicles) may be deliberately designed to have low or even negative stability in some flight regimes; rapid-reacting CCV controls can electronically stabilize the lack of natural stability.
Pre-flight safety checks of a fly-by-wire system are often performed using built-in test equipment (BITE). A number of control movement steps can be automatically performed, reducing workload of the pilot or groundcrew and speeding up flight-checks.
Some aircraft, the Panavia Tornado for example, retain a very basic hydro-mechanical backup system for limited flight control capability on losing electrical power; in the case of the Tornado this allows rudimentary control of the stabilators only for pitch and roll axis movements.
History
Servo-electrically operated control surfaces were first tested in the 1930s on the Soviet Tupolev ANT-20. Long runs of mechanical and hydraulic connections were replaced with wires and electric servos.
In 1934, a patent was filed for an automatic-electronic system that flared the aircraft when it was close to the ground.
In 1941, Karl Otto Altvater, who was an engineer at Siemens, developed and tested the first fly-by-wire system for the Heinkel He 111, in which the aircraft was fully controlled by electronic impulses.
The first non-experimental aircraft that was designed and flown (in 1958) with a fly-by-wire flight control system was the Avro Canada CF-105 Arrow, a feat not repeated with a production aircraft (though the Arrow was cancelled with five built) until Concorde in 1969, which became the first fly-by-wire airliner. This system also included solid-state components and system redundancy, was designed to be integrated with a computerised navigation and automatic search and track radar, was flyable from ground control with data uplink and downlink, and provided artificial feel (feedback) to the pilot.
The first electronic fly-by-wire testbed operated by the U.S. Air Force was a Boeing B-47E Stratojet (Ser. No. 53-2280).
The first pure electronic fly-by-wire aircraft with no mechanical or hydraulic backup was the Apollo Lunar Landing Training Vehicle (LLTV), first flown in 1968. This was preceded in 1964 by the Lunar Landing Research Vehicle (LLRV) which pioneered fly-by-wire flight with no mechanical backup. Control was through a digital computer with three analog redundant channels. In the USSR, the Sukhoi T-4 also flew. At about the same time in the United Kingdom a trainer variant of the British Hawker Hunter fighter was modified at the British Royal Aircraft Establishment with fly-by-wire flight controls for the right-seat pilot.
In the UK the two-seater Avro 707C was flown with a Fairey system with mechanical backup in the early to mid-1960s. The program was curtailed when the airframe ran out of flight time.
In 1972, the first digital fly-by-wire fixed-wing aircraft without a mechanical backup to take to the air was an F-8 Crusader, which had been modified electronically by NASA of the United States as a test aircraft; the F-8 used the Apollo guidance, navigation and control hardware.
The Airbus A320 began service in 1988 as the first mass-produced airliner with digital fly-by-wire controls. As of June 2024, over 11,000 A320 family aircraft, variants included, are operational around the world, making it one of the best-selling commercial jets.
Boeing chose fly-by-wire flight controls for the 777 in 1994, departing from traditional cable and pulley systems. In addition to overseeing the aircraft's flight control, the FBW offered "envelope protection", which guaranteed that the system would step in to avoid accidental mishandling, stalls, or excessive structural stress on the aircraft. The 777 used ARINC 629 buses to connect primary flight computers (PFCs) with actuator-control electronics units (ACEs). Every PFC housed three 32-bit microprocessors, including a Motorola 68040, an Intel 80486, and an AMD 29050, all programmed in the Ada programming language.
Analog systems
All fly-by-wire flight control systems eliminate the complexity, fragility and weight of the mechanical circuit of the hydromechanical or electromechanical flight control systems – each being replaced with electronic circuits. The control mechanisms in the cockpit now operate signal transducers, which in turn generate the appropriate commands. These are next processed by an electronic controller—either an analog one, or (more modernly) a digital one. Aircraft and spacecraft autopilots are now part of the electronic controller.
The hydraulic circuits are similar except that mechanical servo valves are replaced with electrically controlled servo valves, operated by the electronic controller. This is the simplest and earliest configuration of an analog fly-by-wire flight control system. In this configuration, the flight control systems must simulate "feel". The electronic controller controls electrical devices that provide the appropriate "feel" forces on the manual controls. This was used in Concorde, the first production fly-by-wire airliner.
Digital systems
A digital fly-by-wire flight control system can be extended from its analog counterpart. Digital signal processing can receive and interpret input from multiple sensors simultaneously (such as the altimeters and the pitot tubes) and adjust the controls in real time. The computers sense position and force inputs from pilot controls and aircraft sensors. They then solve differential equations related to the aircraft's equations of motion to determine the appropriate command signals for the flight controls to execute the intentions of the pilot.
The programming of the digital computers enables flight envelope protection. These protections are tailored to an aircraft's handling characteristics to stay within aerodynamic and structural limitations of the aircraft. For example, the computer in flight envelope protection mode can try to prevent the aircraft from being handled dangerously by preventing pilots from exceeding preset limits on the aircraft's flight-control envelope, such as those that prevent stalls and spins, and which limit airspeeds and g forces on the airplane. Software can also be included that stabilizes the flight-control inputs to avoid pilot-induced oscillations.
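A deliberately simplified sketch of what such a protection might look like (Python; the limits and logic are invented for illustration and do not correspond to any certified control law):

```python
def protect_pitch_command(pilot_g_command, angle_of_attack_deg,
                          g_min=-1.0, g_max=2.5, alpha_protect_deg=15.0):
    # Clamp the pilot's load-factor demand to structural limits, and stop
    # any further nose-up demand once the angle of attack reaches the
    # protection threshold, so a stall cannot be commanded.
    g = max(g_min, min(g_max, pilot_g_command))
    if angle_of_attack_deg >= alpha_protect_deg and g > 1.0:
        g = 1.0
    return g

print(protect_pitch_command(4.0, angle_of_attack_deg=10.0))   # 2.5 (g limit)
print(protect_pitch_command(2.0, angle_of_attack_deg=16.0))   # 1.0 (alpha protection)
```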
Since the flight-control computers continuously provide feedback on the aircraft's state, pilots' workloads can be reduced. This also enables military aircraft with relaxed stability. The primary benefit for such aircraft is more maneuverability during combat and training flights, and so-called "carefree handling" because stalling, spinning and other undesirable behavior are prevented automatically by the computers. Digital flight control systems (DFCS) enable inherently unstable combat aircraft, such as the Lockheed F-117 Nighthawk and the Northrop Grumman B-2 Spirit flying wing, to fly in usable and safe manners.
Legislation
The United States Federal Aviation Administration (FAA) has adopted RTCA/DO-178C, titled "Software Considerations in Airborne Systems and Equipment Certification", as the certification standard for aviation software. Any safety-critical component in a digital fly-by-wire system, including flight control laws and computer operating systems, will need to be certified to DO-178C Level A or B, depending on the class of aircraft; Level A is applicable to software whose failure could contribute to a catastrophic failure.
Nevertheless, the top concern for computerized, digital, fly-by-wire systems is reliability, even more so than for analog electronic control systems. This is because the digital computers that are running software are often the only control path between the pilot and aircraft's flight control surfaces. If the computer software crashes for any reason, the pilot may be unable to control an aircraft. Hence virtually all fly-by-wire flight control systems are either triply or quadruply redundant in their computers and electronics. These have three or four flight-control computers operating in parallel and three or four separate data buses connecting them with each control surface.
Redundancy
The multiple redundant flight control computers continuously monitor each other's output. If one computer begins to give aberrant results for any reason, potentially including software or hardware failures or flawed input data, then the combined system is designed to exclude the results from that computer in deciding the appropriate actions for the flight controls. Depending on specific system details there may be the potential to reboot an aberrant flight control computer, or to reincorporate its inputs if they return to agreement. Complex logic exists to deal with multiple failures, which may prompt the system to revert to simpler back-up modes.
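One common consolidation scheme is mid-value (median) voting. The sketch below is an illustrative toy in Python, not any particular aircraft's monitor logic; the tolerance value is arbitrary.

```python
import statistics

def vote(surface_commands_deg, tolerance_deg=1.0):
    # Median-vote the commands from redundant flight control computers and
    # flag any channel that deviates from the consolidated value by more
    # than a tolerance, so it can be excluded from further voting.
    consolidated = statistics.median(surface_commands_deg)
    suspect = [i for i, c in enumerate(surface_commands_deg)
               if abs(c - consolidated) > tolerance_deg]
    return consolidated, suspect

print(vote([2.1, 2.0, 9.7]))   # (2.1, [2]) -- channel 2 disagrees
```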
In addition, most of the early digital fly-by-wire aircraft also had an analog electrical, mechanical, or hydraulic back-up flight control system. The Space Shuttle had, in addition to its redundant set of four digital computers running its primary flight-control software, a fifth backup computer running a separately developed, reduced-function, software flight-control system – one that could be commanded to take over in the event that a fault ever affected all of the other four computers. This backup system served to reduce the risk of total flight control system failure ever happening because of a general-purpose flight software fault that had escaped notice in the other four computers.
Efficiency of flight
For airliners, flight-control redundancy improves their safety, but fly-by-wire control systems, which are physically lighter and have lower maintenance demands than conventional controls, also improve economy, both in terms of cost of ownership and in-flight economy. In certain designs with limited relaxed stability in the pitch axis, for example the Boeing 777, the flight control system may allow the aircraft to fly at a more aerodynamically efficient angle of attack than a conventionally stable design. Modern airliners also commonly feature computerized Full-Authority Digital Engine Control systems (FADECs) that control their engines, air inlets, fuel storage and distribution system, in a similar fashion to the way that FBW controls the flight control surfaces. This allows the engine output to be continually varied for the most efficient usage possible.
The second generation Embraer E-Jet family gained a 1.5% efficiency improvement over the first generation from the fly-by-wire system, which enabled a reduction from 280 ft.² to 250 ft.² for the horizontal stabilizer on the E190/195 variants.
Airbus/Boeing
Airbus and Boeing differ in their approaches to implementing fly-by-wire systems in commercial aircraft. Since the Airbus A320, Airbus flight-envelope control systems always retain ultimate flight control when flying under normal law and will not permit pilots to violate aircraft performance limits unless they choose to fly under alternate law. This strategy has been continued on subsequent Airbus airliners. However, in the event of multiple failures of redundant computers, the A320 does have a mechanical back-up system for its pitch trim and its rudder, the Airbus A340 has a purely electrical (not electronic) back-up rudder control system and beginning with the A380, all flight-control systems have back-up systems that are purely electrical through the use of a "three-axis Backup Control Module" (BCM).
Boeing airliners, such as the Boeing 777, allow the pilots to completely override the computerized flight control system, permitting the aircraft to be flown outside of its usual flight control envelope.
Applications
Concorde was the first production fly-by-wire aircraft with analog control.
The General Dynamics F-16 was the first production aircraft to use digital fly-by-wire controls.
The Space Shuttle orbiter had an all-digital fly-by-wire control system. This system was first exercised (as the only flight control system) during the glider unpowered-flight "Approach and Landing Tests" that began with the Space Shuttle Enterprise during 1977.
Launched into production during 1984, the Airbus Industries Airbus A320 became the first airliner to fly with an all-digital fly-by-wire control system.
With its launch in 1993 the Boeing C-17 Globemaster III became the first fly-by-wire military transport aircraft.
In 2005, the Dassault Falcon 7X became the first business jet with fly-by-wire controls.
A fully digital fly-by-wire without a closed feedback loop was integrated in 2002 in the first generation Embraer E-Jet family. By closing the loop (feedback), the second generation Embraer E-Jet family gained a 1.5% efficiency improvement in 2016.
Engine digital control
The advent of FADEC (Full Authority Digital Engine Control) engines permits operation of the flight control systems and autothrottles for the engines to be fully integrated. On modern military aircraft, other systems such as autostabilization, navigation, radar and weapons systems are all integrated with the flight control systems. FADEC allows maximum performance to be extracted from the aircraft without fear of engine misoperation, aircraft damage or high pilot workloads.
In the civil field, the integration increases flight safety and economy. Airbus fly-by-wire aircraft are protected from dangerous situations such as low-speed stall or overstressing by flight envelope protection. As a result, in such conditions, the flight control system commands the engines to increase thrust without pilot intervention. In economy cruise modes, the flight control system adjusts the throttles and fuel tank selections precisely. FADEC reduces rudder drag needed to compensate for sideways flight from unbalanced engine thrust. On the A330/A340 family, fuel is transferred between the main (wing and center fuselage) tanks and a fuel tank in the horizontal stabilizer, to optimize the aircraft's center of gravity during cruise flight. The fuel management controls keep the aircraft's center of gravity accurately trimmed with fuel weight, rather than using drag-inducing aerodynamic trims in the elevators.
Further developments
Fly-by-optics
Fly-by-optics is sometimes used instead of fly-by-wire because it offers a higher data transfer rate, immunity to electromagnetic interference and lighter weight. In most cases, the cables are just changed from electrical to optical fiber cables. Sometimes it is referred to as "fly-by-light" due to its use of fiber optics. The data generated by the software and interpreted by the controller remain the same. Fly-by-light has the effect of decreasing electro-magnetic disturbances to sensors in comparison to more common fly-by-wire control systems. The Kawasaki P-1 is the first production aircraft in the world to be equipped with such a flight control system.
Power-by-wire
Having eliminated the mechanical transmission circuits in fly-by-wire flight control systems, the next step is to eliminate the bulky and heavy hydraulic circuits. The hydraulic circuit is replaced by an electrical power circuit. The power circuits power electrical or self-contained electrohydraulic actuators that are controlled by the digital flight control computers. All benefits of digital fly-by-wire are retained since the power-by-wire components are strictly complementary to the fly-by-wire components.
The biggest benefits are weight savings, the possibility of redundant power circuits and tighter integration between the aircraft flight control systems and its avionics systems. The absence of hydraulics greatly reduces maintenance costs. This system is used in the Lockheed Martin F-35 Lightning II and in Airbus A380 backup flight controls. The Boeing 787 and Airbus A350 also incorporate electrically powered backup flight controls which remain operational even in the event of a total loss of hydraulic power.
Fly-by-wireless
Wiring adds a considerable amount of weight to an aircraft; therefore, researchers are exploring fly-by-wireless solutions. Fly-by-wireless systems are very similar to fly-by-wire systems; however, instead of a wired protocol for the physical layer, a wireless protocol is employed.
In addition to reducing weight, implementing a wireless solution has the potential to reduce costs throughout an aircraft's life cycle. For example, many key failure points associated with wires and connectors would be eliminated, so hours spent troubleshooting wires and connectors would be reduced. Furthermore, engineering costs could decrease because less time would be spent on designing wiring installations, and late changes in an aircraft's design would be easier to manage.
Intelligent flight control system
A newer flight control system, called intelligent flight control system (IFCS), is an extension of modern digital fly-by-wire flight control systems. The aim is to intelligently compensate for aircraft damage and failure during flight, such as automatically using engine thrust and other avionics to compensate for severe failures such as loss of hydraulics, loss of rudder, loss of ailerons, loss of an engine, etc. Several demonstrations were made on a flight simulator where a Cessna-trained small-aircraft pilot successfully landed a heavily damaged full-size concept jet, without prior experience with large-body jet aircraft. This development is being spearheaded by NASA Dryden Flight Research Center. It is reported that enhancements are mostly software upgrades to existing fully computerized digital fly-by-wire flight control systems. The Dassault Falcon 7X and Embraer Legacy 500 business jets have flight computers that can partially compensate for engine-out scenarios by adjusting thrust levels and control inputs, but still require pilots to respond appropriately.
| Technology | Aircraft components | null |
11524 | https://en.wikipedia.org/wiki/Fahrenheit | Fahrenheit | The Fahrenheit scale () is a temperature scale based on one proposed in 1724 by the European physicist Daniel Gabriel Fahrenheit (1686–1736). It uses the degree Fahrenheit (symbol: °F) as the unit. Several accounts of how he originally defined his scale exist, but the original paper suggests the lower defining point, 0 °F, was established as the freezing temperature of a solution of brine made from a mixture of water, ice, and ammonium chloride (a salt). The other limit established was his best estimate of the average human body temperature, originally set at 90 °F, then 96 °F (about 2.6 °F less than the modern value due to a later redefinition of the scale).
For much of the 20th century, the Fahrenheit scale was defined by two fixed points with a 180 °F separation: the temperature at which pure water freezes was defined as 32 °F and the boiling point of water was defined to be 212 °F, both at sea level and under standard atmospheric pressure. It is now formally defined using the Kelvin scale.
It continues to be used in the United States (including its unincorporated territories), its freely associated states in the Western Pacific (Palau, the Federated States of Micronesia and the Marshall Islands), the Cayman Islands, and Liberia.
Fahrenheit is commonly still used alongside the Celsius scale in other countries that use the U.S. metrological service, such as Antigua and Barbuda, Saint Kitts and Nevis, the Bahamas, and Belize. A handful of British Overseas Territories, including the Virgin Islands, Montserrat, Anguilla, and Bermuda, also still use both scales. All other countries now use Celsius ("centigrade" until 1948), which was invented 18 years after the Fahrenheit scale.
Definition and conversion
Historically, on the Fahrenheit scale the freezing point of water was 32 °F, and the boiling point was 212 °F (at standard atmospheric pressure). This put the boiling and freezing points of water 180 degrees apart. Therefore, a degree on the Fahrenheit scale was 1/180 of the interval between the freezing point and the boiling point. On the Celsius scale, the freezing and boiling points of water were originally defined to be 100 degrees apart. A temperature interval of 1 °F was equal to an interval of 5/9 degrees Celsius. With the Fahrenheit and Celsius scales now both defined by the kelvin, this relationship was preserved, a temperature interval of 1 °F being equal to an interval of 5/9 K and of 5/9 °C. The Fahrenheit and Celsius scales intersect numerically at −40 in the respective unit (i.e., −40 °F ≘ −40 °C).
Absolute zero is 0 K, −273.15 °C, or −459.67 °F. The Rankine temperature scale uses degree intervals of the same size as those of the Fahrenheit scale, but with absolute zero at 0 °R, in the same way that the Kelvin temperature scale matches the Celsius scale but places absolute zero at 0 K.
The combination of degree symbol (°) followed by an uppercase letter F is the conventional symbol for the Fahrenheit temperature scale. A number followed by this symbol (and separated from it with a space) denotes a specific temperature point (e.g., "Gallium melts at 85.5763 °F"). A difference between temperatures or an uncertainty in temperature is conventionally written the same way, e.g., "The output of the heat exchanger experiences an increase of 72 °F" or "Our standard uncertainty is ±5 °F". However, some authors instead use the notation "An increase of 72 F°" (reversing the symbol order) to indicate temperature differences. Similar conventions exist for the Celsius scale.
Conversion (specific temperature point)
For an exact conversion between degrees Fahrenheit, degrees Celsius, and kelvins of a specific temperature point, the following formulas can be applied. Here, f is the value in degrees Fahrenheit, c the value in degrees Celsius, and k the value in kelvins:
°F to °C: c = (f − 32) / 1.8
°C to °F: f = c × 1.8 + 32
°F to K: k = (f + 459.67) / 1.8
K to °F: f = k × 1.8 − 459.67
There is also an exact conversion between the Celsius and Fahrenheit scales making use of the correspondence −40 °F ≘ −40 °C. Again, f is the numeric value in degrees Fahrenheit, and c the numeric value in degrees Celsius:
°F to °C: c = (f + 40) / 1.8 − 40
°C to °F: f = (c + 40) × 1.8 − 40
Conversion (temperature difference or interval)
When converting a temperature interval between the Fahrenheit and Celsius scales, only the ratio 1.8 (that is, 9/5) is used, without any constant offset (in this case, the interval has the same numeric value in kelvins as in degrees Celsius):
°F to °C or K: Δc = Δk = Δf / 1.8
°C or K to °F: Δf = Δc × 1.8 = Δk × 1.8
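As a concrete illustration of the conversions above, the following short Python sketch implements the point and interval formulas (the function names are chosen for illustration only):

def fahrenheit_to_celsius(f):
    # Point conversion: remove the 32-degree offset, then divide by 1.8.
    return (f - 32) / 1.8

def celsius_to_fahrenheit(c):
    # Point conversion: multiply by 1.8, then add the 32-degree offset.
    return c * 1.8 + 32

def fahrenheit_to_kelvin(f):
    # Shift by the absolute-zero offset (-459.67 °F), then divide by 1.8.
    return (f + 459.67) / 1.8

def fahrenheit_interval_to_celsius(delta_f):
    # Interval conversion: only the ratio 1.8 is used, with no offset.
    return delta_f / 1.8

# Checks: water boils at 212 °F = 100 °C, and the scales meet at -40.
assert abs(fahrenheit_to_celsius(212) - 100) < 1e-9
assert abs(fahrenheit_to_celsius(-40) + 40) < 1e-9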
History
Fahrenheit proposed his temperature scale in 1724, basing it on two reference points of temperature. In his initial scale (which is not the final Fahrenheit scale), the zero point was determined by placing the thermometer in "a mixture of ice, water, and salis Armoniaci [transl. ammonium chloride] or even sea salt". This combination forms a eutectic system, which stabilizes its temperature automatically: 0 °F was defined to be that stable temperature. A second point, 96 degrees, was approximately the human body's temperature. A third point, 32 degrees, was marked as being the temperature of ice and water "without the aforementioned salts".
According to a German story, Fahrenheit actually chose the lowest air temperature measured in his hometown Danzig (Gdańsk, Poland) in winter 1708–09 as 0 °F, and only later had the need to be able to make this value reproducible using brine.
According to a letter Fahrenheit wrote to his friend Herman Boerhaave, his scale was built on the work of Ole Rømer, whom he had met earlier. On the Rømer scale, brine freezes at zero, water freezes and melts at 7.5 degrees, body temperature is 22.5, and water boils at 60 degrees. Fahrenheit multiplied each value by 4 in order to eliminate fractions and make the scale more fine-grained. He then re-calibrated his scale using the melting point of ice and normal human body temperature (which were at 30 and 90 degrees); he adjusted the scale so that the melting point of ice would be 32 degrees and body temperature 96 degrees, so that 64 intervals would separate the two, allowing him to mark degree lines on his instruments by simply bisecting the interval six times (since 64 = 2^6).
Fahrenheit soon after observed that water boils at about 212 degrees using this scale. The use of the freezing and boiling points of water as thermometer fixed reference points became popular following the work of Anders Celsius, and these fixed points were adopted by a committee of the Royal Society led by Henry Cavendish in 1776–77. Under this system, the Fahrenheit scale was redefined slightly so that the freezing point of water was exactly 32 °F, and the boiling point exactly 212 °F, or 180 degrees higher. It is for this reason that normal human body temperature is approximately 98.6 °F (oral temperature) on the revised scale (whereas it was 90° on Fahrenheit's multiplication of Rømer, and 96° on his original scale).
In the present-day Fahrenheit scale, 0 °F no longer corresponds to the eutectic temperature of ammonium chloride brine as described above. Instead, that eutectic is at approximately 4 °F on the final Fahrenheit scale.
The Rankine temperature scale was based upon the Fahrenheit temperature scale, with its zero representing absolute zero instead.
Usage
General
The Fahrenheit scale was the primary temperature standard for climatic, industrial and medical purposes in Anglophone countries until the 1960s. In the late 1960s and 1970s, the Celsius scale replaced Fahrenheit in almost all of those countries—with the notable exception of the United States.
Fahrenheit is used in the United States, its territories and associated states (all serviced by the U.S. National Weather Service), as well as in the (British) Cayman Islands and Liberia for everyday applications. The Fahrenheit scale is used in the U.S. for all everyday temperature measurements, including weather forecasts, cooking, and food freezing temperatures; scientific research, however, uses the Celsius and Kelvin scales.
United States
Early in the 20th century, Halsey and Dale suggested that reasons for resistance to use of the centigrade (now Celsius) system in the U.S. included the larger size of each degree Celsius and the lower zero point in the Fahrenheit system; it is also argued that the Fahrenheit scale is more intuitive than Celsius for describing outdoor temperatures in temperate latitudes, with 100 °F being a hot summer day and 0 °F a cold winter day.
Canada
Canada has passed legislation favoring the International System of Units, while also maintaining legal definitions for traditional Canadian imperial units. Canadian weather reports are conveyed using degrees Celsius with occasional reference to Fahrenheit especially for cross-border broadcasts. Fahrenheit is still used on virtually all Canadian ovens. Thermometers, both digital and analog, sold in Canada usually employ both the Celsius and Fahrenheit scales.
European Union
In the European Union, it is mandatory to use Kelvins or degrees Celsius when quoting temperature for "economic, public health, public safety and administrative" purposes, though degrees Fahrenheit may be used alongside degrees Celsius as a supplementary unit.
United Kingdom
Most British people use Celsius. However, Fahrenheit still appears at times alongside degrees Celsius in the print media, with no standard convention for when the measurement is included.
For example, The Times has an all-metric daily weather page but includes a Celsius-to-Fahrenheit conversion table. Some UK tabloids tend to use Fahrenheit for mid-to-high temperatures. It has been suggested that the rationale for keeping Fahrenheit in such cases is emphasis: "−6 °C" sounds colder than "21 °F", and "94 °F" sounds more sensational than "34 °C".
Unicode representation of symbol
Unicode provides the Fahrenheit symbol at code point U+2109 ℉ DEGREE FAHRENHEIT. However, this is a compatibility character encoded for roundtrip compatibility with legacy encodings. The Unicode standard explicitly discourages the use of this character: "The sequence U+00B0 DEGREE SIGN + U+0046 LATIN CAPITAL LETTER F is preferred over U+2109 DEGREE FAHRENHEIT, and those two sequences should be treated as identical for searching."
| Physical sciences | Temperature | null |
11529 | https://en.wikipedia.org/wiki/Fermion | Fermion | In particle physics, a fermion is a subatomic particle that follows Fermi–Dirac statistics. Fermions have a half-odd-integer spin (spin , spin , etc.) and obey the Pauli exclusion principle. These particles include all quarks and leptons and all composite particles made of an odd number of these, such as all baryons and many atoms and nuclei. Fermions differ from bosons, which obey Bose–Einstein statistics.
Some fermions are elementary particles (such as electrons), and some are composite particles (such as protons). According to the spin–statistics theorem in relativistic quantum field theory, particles with integer spin are bosons, while particles with half-integer spin are fermions.
In addition to the spin characteristic, fermions have another specific property: they possess conserved baryon or lepton quantum numbers. Therefore, what is usually referred to as the spin–statistics relation is, in fact, a spin–statistics–quantum-number relation.
As a consequence of the Pauli exclusion principle, only one fermion can occupy a particular quantum state at a given time. If multiple fermions have the same spatial probability distribution, then at least one property of each fermion, such as its spin, must differ. Fermions are usually associated with matter, whereas bosons are generally force-carrier particles. However, in the current state of particle physics, the distinction between the two concepts is unclear. Weakly interacting fermions can also display bosonic behavior under extreme conditions. For example, at low temperatures, fermions show superfluidity for uncharged particles and superconductivity for charged particles.
Composite fermions, such as protons and neutrons, are the key building blocks of everyday matter.
English theoretical physicist Paul Dirac coined the name fermion from the surname of Italian physicist Enrico Fermi.
Elementary fermions
The Standard Model recognizes two types of elementary fermions: quarks and leptons. In all, the model distinguishes 24 different fermions. There are six quarks (up, down, strange, charm, bottom and top), and six leptons (electron, electron neutrino, muon, muon neutrino, tauon and tauon neutrino), along with the corresponding antiparticle of each of these.
Mathematically, there are many varieties of fermions, with the three most common types being:
Weyl fermions (massless),
Dirac fermions (massive), and
Majorana fermions (each its own antiparticle).
Most Standard Model fermions are believed to be Dirac fermions, although it is unknown at this time whether the neutrinos are Dirac or Majorana fermions (or both). Dirac fermions can be treated as a combination of two Weyl fermions. In July 2015, Weyl fermions were first experimentally realized in Weyl semimetals.
Composite fermions
Composite particles (such as hadrons, nuclei, and atoms) can be bosons or fermions depending on their constituents. More precisely, because of the relation between spin and statistics, a particle containing an odd number of fermions is itself a fermion. It will have half-integer spin.
Examples include the following:
A baryon, such as the proton or neutron, contains three fermionic quarks.
The nucleus of a carbon-13 atom contains six protons and seven neutrons.
The atom helium-3 (3He) consists of two protons, one neutron, and two electrons. The deuterium atom consists of one proton, one neutron, and one electron.
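The odd-count rule behind these examples can be illustrated with a minimal Python sketch; the particle inventories are simplified to proton, neutron, and electron counts purely for this example:

def is_composite_fermion(protons, neutrons, electrons):
    # Protons, neutrons, and electrons are all fermions, so only the parity
    # of their total number matters: odd -> fermion, even -> boson.
    return (protons + neutrons + electrons) % 2 == 1

# Helium-3 atom: 2 protons + 1 neutron + 2 electrons = 5 fermions -> fermion.
assert is_composite_fermion(2, 1, 2)
# Helium-4 atom: 2 protons + 2 neutrons + 2 electrons = 6 fermions -> boson.
assert not is_composite_fermion(2, 2, 2)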
The number of bosons within a composite particle made up of simple particles bound with a potential has no effect on whether it is a boson or a fermion.
Fermionic or bosonic behavior of a composite particle (or system) is only seen at distances that are large compared to the size of the system. At shorter distances, where the spatial structure begins to matter, a composite particle (or system) behaves according to its constituent makeup.
Fermions can exhibit bosonic behavior when they become loosely bound in pairs. This is the origin of superconductivity and the superfluidity of helium-3: in superconducting materials, electrons interact through the exchange of phonons, forming Cooper pairs, while in helium-3, Cooper pairs are formed via spin fluctuations.
The quasiparticles of the fractional quantum Hall effect are also known as composite fermions; they consist of electrons with an even number of quantized vortices attached to them.
| Physical sciences | Fermions | null |
11545 | https://en.wikipedia.org/wiki/Feedback | Feedback | Feedback occurs when outputs of a system are routed back as inputs as part of a chain of cause and effect that forms a circuit or loop. The system can then be said to feed back into itself. The notion of cause-and-effect has to be handled carefully when applied to feedback systems:
History
Self-regulating mechanisms have existed since antiquity, and the idea of feedback started to enter economic theory in Britain by the 18th century, but it was not at that time recognized as a universal abstraction and so did not have a name.
The first known artificial feedback device was a float valve, for maintaining water at a constant level, invented in 270 BC in Alexandria, Egypt. This device illustrated the principle of feedback: a low water level opens the valve, and the rising water then provides feedback into the system, closing the valve when the required level is reached. This then recurs in a circular fashion as the water level fluctuates.
Centrifugal governors were used to regulate the distance and pressure between millstones in windmills since the 17th century. In 1788, James Watt designed his first centrifugal governor following a suggestion from his business partner Matthew Boulton, for use in the steam engines of their production. Early steam engines employed a purely reciprocating motion, and were used for pumping water – an application that could tolerate variations in the working speed, but the use of steam engines for other applications called for more precise control of the speed.
In 1868, James Clerk Maxwell wrote a famous paper, "On governors", that is widely considered a classic in feedback control theory. This was a landmark paper on control theory and the mathematics of feedback.
The verb phrase to feed back, in the sense of returning to an earlier position in a mechanical process, was in use in the US by the 1860s, and in 1909, Nobel laureate Karl Ferdinand Braun used the term "feed-back" as a noun to refer to (undesired) coupling between components of an electronic circuit.
By the end of 1912, researchers using early electronic amplifiers (audions) had discovered that deliberately coupling part of the output signal back to the input circuit would boost the amplification (through regeneration), but would also cause the audion to howl or sing. This action of feeding back of the signal from output to input gave rise to the use of the term "feedback" as a distinct word by 1920.
The development of cybernetics from the 1940s onwards was centred around the study of circular causal feedback mechanisms.
Over the years there has been some dispute as to the best definition of feedback. According to cybernetician Ashby (1956), mathematicians and theorists interested in the principles of feedback mechanisms prefer the definition of "circularity of action", which keeps the theory simple and consistent. For those with more practical aims, feedback should be a deliberate effect via some more tangible connection.
Focusing on uses in management theory, Ramaprasad (1983) defines feedback generally as "...information about the gap between the actual level and the reference level of a system parameter" that is used to "alter the gap in some way". He emphasizes that the information by itself is not feedback unless translated into action.
Types
Positive and negative feedback
Positive feedback: if the signal fed back from the output is in phase with the input signal, the feedback is called positive feedback.
Negative feedback: if the signal fed back is 180° out of phase with respect to the input signal, the feedback is called negative feedback.
As an example of negative feedback, the diagram might represent a cruise control system in a car that matches a target speed such as the speed limit. The controlled system is the car; its input includes the combined torque from the engine and from the changing slope of the road (the disturbance). The car's speed (status) is measured by a speedometer. The error signal is the difference between the speed as measured by the speedometer and the target speed (set point). The controller interprets the speed to adjust the accelerator, commanding the fuel flow to the engine (the effector). The resulting change in engine torque, the feedback, combines with the torque exerted by the change of road grade to reduce the error in speed, compensating for the changing slope.
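The cruise-control example can be sketched as a simple proportional (negative feedback) loop in Python; the gain, time step, and vehicle response below are illustrative assumptions, not values from any real system:

target_speed = 100.0   # set point, km/h
speed = 90.0           # measured speed (status), km/h
gain = 0.5             # proportional gain of the controller (illustrative)
dt = 0.1               # time step, s

for _ in range(200):
    error = target_speed - speed            # error signal from the speedometer
    throttle = gain * error                 # controller output to the effector
    disturbance = -0.2                      # e.g. torque lost to a rising road grade
    speed += (throttle + disturbance) * dt  # highly simplified vehicle response

# The negative feedback drives the speed close to the set point despite the disturbance.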
The terms "positive" and "negative" were first applied to feedback prior to WWII. The idea of positive feedback already existed in the 1920s when the regenerative circuit was made. Friis and Jensen (1924) described this circuit in a set of electronic amplifiers as a case where the "feed-back" action is positive in contrast to negative feed-back action, which they mentioned only in passing. Harold Stephen Black's classic 1934 paper first details the use of negative feedback in electronic amplifiers. According to Black:
According to Mindell (2002), confusion in the terms arose shortly after this.
Even before these terms were being used, James Clerk Maxwell had described their concept through several kinds of "component motions" associated with the centrifugal governors used in steam engines. He distinguished those that lead to a continued increase in a disturbance or the amplitude of a wave or oscillation, from those that lead to a decrease of the same quality.
Terminology
The terms positive and negative feedback are defined in different ways within different disciplines.
1. the change of the gap between reference and actual values of a parameter or trait, based on whether the gap is widening (positive) or narrowing (negative);
2. the valence of the action or effect that alters the gap, based on whether it makes the recipient or observer happy (positive) or unhappy (negative).
The two definitions may be confusing, like when an incentive (reward) is used to boost poor performance (narrow a gap). Referring to definition 1, some authors use alternative terms, replacing positive and negative with self-reinforcing and self-correcting, reinforcing and balancing, discrepancy-enhancing and discrepancy-reducing or regenerative and degenerative respectively. And for definition 2, some authors promote describing the action or effect as positive and negative reinforcement or punishment rather than feedback.
Yet even within a single discipline an example of feedback can be called either positive or negative, depending on how values are measured or referenced.
This confusion may arise because feedback can be used to provide information or to motivate, and often has both a qualitative and a quantitative component, as Connellan and Zemke (1993) noted.
Limitations of negative and positive feedback
While simple systems can sometimes be described as one or the other type, many systems with feedback loops cannot be shoehorned into either type, and this is especially true when multiple loops are present.
Other types of feedback
In general, feedback systems can have many signals fed back, and a feedback loop frequently contains mixtures of positive and negative feedback, with one or the other dominating at different frequencies or at different points in the state space of the system.
The term bipolar feedback has been coined to refer to biological systems where positive and negative feedback systems can interact, the output of one affecting the input of another, and vice versa.
Some systems with feedback can have very complex behaviors such as chaotic behaviors in non-linear systems, while others have much more predictable behaviors, such as those that are used to make and design digital systems.
Feedback is used extensively in digital systems. For example, binary counters and similar devices employ feedback where the current state and inputs are used to calculate a new state which is then fed back and clocked back into the device to update it.
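A behavioural sketch of that state feedback in Python (a toy model, not a hardware description language):

class Counter4Bit:
    def __init__(self):
        self.state = 0                       # stored state fed back on every clock

    def clock(self, enable=True):
        # The next state is computed from the fed-back current state and the input.
        if enable:
            self.state = (self.state + 1) % 16   # wrap around at 4 bits
        return self.state

counter = Counter4Bit()
for _ in range(18):
    counter.clock()
assert counter.state == 2                    # 18 pulses on a 4-bit counter: 18 mod 16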
Applications
Mathematics and dynamical systems
By using feedback properties, the behavior of a system can be altered to meet the needs of an application; systems can be made stable, responsive or held constant. It has been shown that dynamical systems with feedback experience an adaptation to the edge of chaos.
Physics
Physical systems exhibit feedback through the mutual interactions of their parts. Feedback is also relevant for the regulation of experimental conditions, noise reduction, and signal control. The thermodynamics of feedback-controlled systems has intrigued physicists since Maxwell's demon, with recent advances on the consequences for entropy reduction and performance increase.
Biology
In biological systems such as organisms, ecosystems, or the biosphere, most parameters must stay under control within a narrow range around a certain optimal level under certain environmental conditions. Deviation from the optimal value of the controlled parameter can result from changes in the internal and external environments. A change in some environmental conditions may also require that range to change for the system to keep functioning. The value of the parameter to maintain is recorded by a reception system and conveyed to a regulation module via an information channel. An example of this is insulin oscillations.
Biological systems contain many types of regulatory circuits, both positive and negative. As in other contexts, positive and negative do not imply that the feedback causes good or bad effects. A negative feedback loop is one that tends to slow down a process, whereas a positive feedback loop tends to accelerate it. Mirror neurons are part of a social feedback system: an observed action is "mirrored" by the brain as if it were a self-performed action.
Normal tissue integrity is preserved by feedback interactions between diverse cell types mediated by adhesion molecules and secreted molecules that act as mediators; failure of key feedback mechanisms in cancer disrupts tissue function.
In an injured or infected tissue, inflammatory mediators elicit feedback responses in cells, which alter gene expression, and change the groups of molecules expressed and secreted, including molecules that induce diverse cells to cooperate and restore tissue structure and function. This type of feedback is important because it enables coordination of immune responses and recovery from infections and injuries. During cancer, key elements of this feedback fail. This disrupts tissue function and immunity.
Mechanisms of feedback were first elucidated in bacteria, where a nutrient elicits changes in some of their metabolic functions.
Feedback is also central to the operations of genes and gene regulatory networks. Repressor (see Lac repressor) and activator proteins are used to create genetic operons, which were identified by François Jacob and Jacques Monod in 1961 as feedback loops. These feedback loops may be positive (as in the case of the coupling between a sugar molecule and the proteins that import sugar into a bacterial cell), or negative (as is often the case in metabolic consumption).
On a larger scale, feedback can have a stabilizing effect on animal populations even when profoundly affected by external changes, although time lags in feedback response can give rise to predator-prey cycles.
In zymology, feedback is the regulation of the activity of an enzyme by its direct product or by a metabolite downstream in the metabolic pathway (see Allosteric regulation).
The hypothalamic–pituitary–adrenal axis is largely controlled by positive and negative feedback, much of which is still unknown.
In psychology, the body receives a stimulus from the environment or internally that causes the release of hormones. Release of hormones then may cause more of those hormones to be released, causing a positive feedback loop. This cycle is also found in certain behaviour. For example, "shame loops" occur in people who blush easily. When they realize that they are blushing, they become even more embarrassed, which leads to further blushing, and so on.
Climate science
The climate system is characterized by strong positive and negative feedback loops between processes that affect the state of the atmosphere, ocean, and land. A simple example is the ice–albedo positive feedback loop whereby melting snow exposes more dark ground (of lower albedo), which in turn absorbs heat and causes more snow to melt.
Control theory
Feedback is extensively used in control theory, using a variety of methods including state space (controls), full state feedback, and so forth. In the context of control theory, "feedback" is traditionally assumed to specify "negative feedback".
The most common general-purpose controller using a control-loop feedback mechanism is a proportional-integral-derivative (PID) controller. Heuristically, the terms of a PID controller can be interpreted as corresponding to time: the proportional term depends on the present error, the integral term on the accumulation of past errors, and the derivative term is a prediction of future error, based on current rate of change.
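A minimal PID sketch in Python following that interpretation (the gains and the usage values are placeholders for illustration):

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0          # accumulation of past errors
        self.prev_error = None       # previous error, for the rate of change

    def update(self, set_point, measurement, dt):
        error = set_point - measurement                  # present error
        self.integral += error * dt                      # past errors accumulated
        if self.prev_error is None:
            derivative = 0.0
        else:
            derivative = (error - self.prev_error) / dt  # predicted future trend
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=1.0, ki=0.1, kd=0.05)
output = controller.update(set_point=10.0, measurement=7.5, dt=0.1)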
Education
For feedback in the educational context, see corrective feedback.
Mechanical engineering
In ancient times, the float valve was used to regulate the flow of water in Greek and Roman water clocks; similar float valves are used to regulate fuel in a carburettor and also used to regulate tank water level in the flush toilet.
The Dutch inventor Cornelius Drebbel (1572–1633) built thermostats (c1620) to control the temperature of chicken incubators and chemical furnaces. In 1745, the windmill was improved by blacksmith Edmund Lee, who added a fantail to keep the face of the windmill pointing into the wind. In 1787, Tom Mead regulated the rotation speed of a windmill by using a centrifugal pendulum to adjust the distance between the bedstone and the runner stone (i.e., to adjust the load).
The use of the centrifugal governor by James Watt in 1788 to regulate the speed of his steam engine was one factor leading to the Industrial Revolution. Steam engines also use float valves and pressure release valves as mechanical regulation devices. A mathematical analysis of Watt's governor was done by James Clerk Maxwell in 1868.
The Great Eastern was one of the largest steamships of its time and employed a steam powered rudder with feedback mechanism designed in 1866 by John McFarlane Gray. Joseph Farcot coined the word servo in 1873 to describe steam-powered steering systems. Hydraulic servos were later used to position guns. Elmer Ambrose Sperry of the Sperry Corporation designed the first autopilot in 1912. Nicolas Minorsky published a theoretical analysis of automatic ship steering in 1922 and described the PID controller.
Internal combustion engines of the late 20th century employed mechanical feedback mechanisms such as the vacuum timing advance but mechanical feedback was replaced by electronic engine management systems once small, robust and powerful single-chip microcontrollers became affordable.
Electronic engineering
The use of feedback is widespread in the design of electronic components such as amplifiers, oscillators, and stateful logic circuit elements such as flip-flops and counters. Electronic feedback systems are also very commonly used to control mechanical, thermal and other physical processes.
If the signal is inverted on its way round the control loop, the system is said to have negative feedback; otherwise, the feedback is said to be positive. Negative feedback is often deliberately introduced to increase the stability and accuracy of a system by correcting or reducing the influence of unwanted changes. This scheme can fail if the input changes faster than the system can respond to it. When this happens, the lag in arrival of the correcting signal can result in over-correction, causing the output to oscillate or "hunt". While often an unwanted consequence of system behaviour, this effect is used deliberately in electronic oscillators.
Harry Nyquist at Bell Labs derived the Nyquist stability criterion for determining the stability of feedback systems. An easier method, but less general, is to use Bode plots developed by Hendrik Bode to determine the gain margin and phase margin. Design to ensure stability often involves frequency compensation to control the location of the poles of the amplifier.
Electronic feedback loops are used to control the output of electronic devices, such as amplifiers. A feedback loop is created when all or some portion of the output is fed back to the input. A device is said to be operating open loop if no output feedback is being employed and closed loop if feedback is being used.
When two or more amplifiers are cross-coupled using positive feedback, complex behaviors can be created. These multivibrators are widely used and include:
astable circuits, which act as oscillators
monostable circuits, which can be pushed into a state, and will return to the stable state after some time
bistable circuits, which have two stable states that the circuit can be switched between
Negative feedback
Negative feedback occurs when the fed-back output signal has a relative phase of 180° with respect to the input signal (upside down). This situation is sometimes referred to as being out of phase, but that term also is used to indicate other phase separations, as in "90° out of phase". Negative feedback can be used to correct output errors or to desensitize a system to unwanted fluctuations. In feedback amplifiers, this correction is generally for waveform distortion reduction or to establish a specified gain level. A general expression for the gain of a negative feedback amplifier is the asymptotic gain model.
Positive feedback
Positive feedback occurs when the fed-back signal is in phase with the input signal. Under certain gain conditions, positive feedback reinforces the input signal to the point where the output of the device oscillates between its maximum and minimum possible states. Positive feedback may also introduce hysteresis into a circuit. This can cause the circuit to ignore small signals and respond only to large ones. It is sometimes used to eliminate noise from a digital signal. Under some circumstances, positive feedback may cause a device to latch, i.e., to reach a condition in which the output is locked to its maximum or minimum state. This fact is very widely used in digital electronics to make bistable circuits for volatile storage of information.
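The hysteresis behaviour described above can be sketched as a Schmitt-trigger-like comparator in Python; the two thresholds are arbitrary example values:

def schmitt_trigger(samples, low=0.3, high=0.7):
    # Positive feedback gives the comparator two switching thresholds, so the
    # output only changes when the input crosses the far threshold; small
    # noise around one level cannot make the output chatter.
    output, state = [], 0
    for x in samples:
        if state == 0 and x > high:
            state = 1
        elif state == 1 and x < low:
            state = 0
        output.append(state)
    return output

# A noisy input hovering near 0.5 never toggles the output.
print(schmitt_trigger([0.45, 0.55, 0.5, 0.8, 0.65, 0.4, 0.2]))   # [0, 0, 0, 1, 1, 1, 0]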
The loud squeals that sometimes occur in audio systems, PA systems, and rock music are known as audio feedback. If a microphone is in front of a loudspeaker that it is connected to, sound that the microphone picks up comes out of the speaker, is picked up by the microphone, and is re-amplified. If the loop gain is sufficient, howling or squealing at the maximum power of the amplifier is possible.
Oscillator
An electronic oscillator is an electronic circuit that produces a periodic, oscillating electronic signal, often a sine wave or a square wave. Oscillators convert direct current (DC) from a power supply to an alternating current signal. They are widely used in many electronic devices. Common examples of signals generated by oscillators include signals broadcast by radio and television transmitters, clock signals that regulate computers and quartz clocks, and the sounds produced by electronic beepers and video games.
Oscillators are often characterized by the frequency of their output signal:
A low-frequency oscillator (LFO) is an electronic oscillator that generates a frequency below ≈20 Hz. This term is typically used in the field of audio synthesizers, to distinguish it from an audio frequency oscillator.
An audio oscillator produces frequencies in the audio range, about 16 Hz to 20 kHz.
An RF oscillator produces signals in the radio frequency (RF) range of about 100 kHz to 100 GHz.
Oscillators designed to produce a high-power AC output from a DC supply are usually called inverters.
There are two main types of electronic oscillator: the linear or harmonic oscillator and the nonlinear or relaxation oscillator.
Latches and flip-flops
A latch or a flip-flop is a circuit that has two stable states and can be used to store state information. They are typically constructed using feedback that crosses over between two arms of the circuit, to provide the circuit with a state. The circuit can be made to change state by signals applied to one or more control inputs and will have one or two outputs. It is the basic storage element in sequential logic. Latches and flip-flops are fundamental building blocks of digital electronics systems used in computers, communications, and many other types of systems.
Latches and flip-flops are used as data storage elements. Such data storage can be used for storage of state, and such a circuit is described as sequential logic. When used in a finite-state machine, the output and next state depend not only on its current input, but also on its current state (and hence, previous inputs). It can also be used for counting of pulses, and for synchronizing variably-timed input signals to some reference timing signal.
Flip-flops can be either simple (transparent or opaque) or clocked (synchronous or edge-triggered). Although the term flip-flop has historically referred generically to both simple and clocked circuits, in modern usage it is common to reserve the term flip-flop exclusively for discussing clocked circuits; the simple ones are commonly called latches.
Using this terminology, a latch is level-sensitive, whereas a flip-flop is edge-sensitive. That is, when a latch is enabled it becomes transparent, while a flip-flop's output only changes on a single type (positive-going or negative-going) of clock edge.
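The level-sensitive versus edge-sensitive distinction can be sketched in a few lines of Python (a behavioural toy model, not a hardware description):

class DLatch:
    # Level-sensitive: while the enable input is high, the output follows D.
    def __init__(self):
        self.q = 0
    def update(self, d, enable):
        if enable:
            self.q = d
        return self.q

class DFlipFlop:
    # Edge-sensitive: the output changes only on a rising clock edge.
    def __init__(self):
        self.q = 0
        self.prev_clk = 0
    def update(self, d, clk):
        if clk == 1 and self.prev_clk == 0:   # rising edge detected
            self.q = d
        self.prev_clk = clk
        return self.q

latch, flop = DLatch(), DFlipFlop()
latch.update(d=1, enable=1)   # latch output follows D immediately while enabled
flop.update(d=1, clk=0)       # no edge yet: flip-flop output unchanged
flop.update(d=1, clk=1)       # rising edge: flip-flop captures D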
Software
Feedback loops provide generic mechanisms for controlling the running, maintenance, and evolution of software and computing systems. Feedback loops are important models in the engineering of adaptive software, as they define the behaviour of the interactions among the control elements over the adaptation process, to guarantee system properties at run-time. Feedback loops and the foundations of control theory have been successfully applied to computing systems. In particular, they have been applied to the development of products such as IBM Db2 and IBM Tivoli. From a software perspective, the autonomic MAPE (monitor, analyze, plan, execute) loop proposed by researchers at IBM is another valuable contribution to the application of feedback loops to the control of dynamic properties and the design and evolution of autonomic software systems.
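A schematic sketch of such a monitor–analyze–plan–execute loop, applied to a hypothetical self-scaling service (every name, metric, and threshold here is invented for illustration):

import random

def monitor():
    # Stand-in for reading a metric from the managed system, e.g. CPU utilisation.
    return random.uniform(0.0, 1.0)

def analyze(utilisation, high=0.8, low=0.2):
    if utilisation > high:
        return "overloaded"
    if utilisation < low:
        return "underused"
    return "ok"

def plan(diagnosis):
    return {"overloaded": +1, "underused": -1, "ok": 0}[diagnosis]

def execute(replica_delta, replicas):
    return max(1, replicas + replica_delta)   # never scale below one replica

replicas = 3
for _ in range(10):                           # in practice the loop runs continuously
    replicas = execute(plan(analyze(monitor())), replicas)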
Software development
User interface design
Feedback is also a useful design principle for designing user interfaces.
Video feedback
Video feedback is the video equivalent of acoustic feedback. It involves a loop between a video camera input and a video output, e.g., a television screen or monitor. Aiming the camera at the display produces a complex video image based on the feedback.
Human resource management
| Physical sciences | Science basics | Basics and measurement |
11555 | https://en.wikipedia.org/wiki/Fluorescence | Fluorescence | Fluorescence is one of two kinds of photoluminescence, the emission of light by a substance that has absorbed light or other electromagnetic radiation. When exposed to ultraviolet radiation, many substances will glow (fluoresce) with colored visible light. The color of the light emitted depends on the chemical composition of the substance. Fluorescent materials generally cease to glow nearly immediately when the radiation source stops. This distinguishes them from the other type of light emission, phosphorescence. Phosphorescent materials continue to emit light for some time after the radiation stops.
This difference in timing is a result of quantum spin effects.
Fluorescence occurs when a photon of the incoming radiation is absorbed by a molecule, exciting it to a higher energy level, followed by the emission of light as the molecule returns to a lower energy state. The emitted light may have a longer wavelength, and therefore a lower photon energy, than the absorbed radiation. For example, the absorbed radiation may be in the ultraviolet region of the electromagnetic spectrum (invisible to the human eye), while the emitted light is in the visible region. This gives the fluorescent substance a distinct color that is best seen when it has been exposed to UV light, making it appear to glow in the dark. However, any light of a shorter wavelength may cause a material to fluoresce at a longer wavelength. Fluorescent materials may also be excited by certain wavelengths of visible light, which masks the glow, yet their colors may appear bright and intensified. Other fluorescent materials emit their light in the infrared or even the ultraviolet regions of the spectrum.
Fluorescence has many practical applications, including mineralogy, gemology, medicine, chemical sensors (fluorescence spectroscopy), fluorescent labelling, dyes, biological detectors, cosmic-ray detection, vacuum fluorescent displays, and cathode-ray tubes. Its most common everyday application is in (gas-discharge) fluorescent lamps and LED lamps, in which fluorescent coatings convert UV or blue light into longer-wavelengths resulting in white light which can even appear indistinguishable from that of the traditional but energy-inefficient incandescent lamp. Fluorescence also occurs frequently in nature in some minerals and in many biological forms across all kingdoms of life. The latter may be referred to as biofluorescence, indicating that the fluorophore is part of or is extracted from a living organism (rather than an inorganic dye or stain). But since fluorescence is due to a specific chemical, which can also be synthesized artificially in most cases, it is sufficient to describe the substance itself as fluorescent.
History
Fluorescence was observed long before it was named and understood.
An early observation of fluorescence was known to the Aztecs and described in 1560 by Bernardino de Sahagún and in 1565 by Nicolás Monardes in the infusion known as lignum nephriticum (Latin for "kidney wood"). It was derived from the wood of two tree species, Pterocarpus indicus and Eysenhardtia polystachya.
The chemical compound responsible for this fluorescence is matlaline, which is the oxidation product of one of the flavonoids found in this wood.
In 1819, E.D. Clarke and in 1822 René Just Haüy described some varieties of fluorites that had a different color depending on whether the light was reflected or (apparently) transmitted. Haüy incorrectly viewed the effect as light scattering similar to opalescence. In 1833 Sir David Brewster described a similar effect in chlorophyll, which he also considered a form of opalescence.
Sir John Herschel studied quinine in 1845 and came to a different, also incorrect, conclusion.
In 1842, A.E. Becquerel observed that calcium sulfide emits light after being exposed to solar ultraviolet, making him the first to state that the emitted light is of longer wavelength than the incident light. While his observation of photoluminescence was similar to that described 10 years later by Stokes, who observed a fluorescence of a solution of quinine, the phenomenon that Becquerel described with calcium sulfide is now called phosphorescence.
In his 1852 paper on the "Refrangibility" (wavelength change) of light, George Gabriel Stokes described the ability of fluorspar, uranium glass and many other substances to change invisible light beyond the violet end of the visible spectrum into visible light. He named this phenomenon fluorescence:
"I am almost inclined to coin a word, and call the appearance fluorescence, from fluor-spar [i.e., fluorite], as the analogous term opalescence is derived from the name of a mineral."
Neither Becquerel nor Stokes understood one key aspect of photoluminescence: the critical difference from incandescence, the emission of light by heated material. To distinguish it from incandescence, in the late 1800s, Gustav Wiedemann proposed the term luminescence to designate any emission of light more intense than expected from the source's temperature.
Advances in spectroscopy and quantum electronics between the 1950s and 1970s provided a way to distinguish between the three different mechanisms that produce the light, as well as narrowing down the typical timescales those mechanisms take to decay after absorption. In modern science, this distinction became important because some items, such as lasers, required the fastest decay times, which typically occur in the nanosecond (billionth of a second) range. In physics, this first mechanism was termed "fluorescence" or "singlet emission", and is common in many laser mediums such as ruby. Other fluorescent materials were discovered to have much longer decay times, because some of the atoms would change their spin to a triplet state, thus would glow brightly with fluorescence under excitation but produce a dimmer afterglow for a short time after the excitation was removed, which became labeled "phosphorescence" or "triplet phosphorescence". The typical decay times ranged from a few microseconds to one second, which are still fast enough by human-eye standards to be colloquially referred to as fluorescent. Common examples include fluorescent lamps, organic dyes, and even fluorspar. Longer emitters, commonly referred to as glow-in-the-dark substances, ranged from one second to many hours, and this mechanism was called persistent phosphorescence or persistent luminescence, to distinguish it from the other two mechanisms.
Physical principles
Mechanism
Fluorescence occurs when an excited molecule, atom, or nanostructure, relaxes to a lower energy state (usually the ground state) through emission of a photon without a change in electron spin. When the initial and final states have different multiplicity (spin), the phenomenon is termed phosphorescence.
When a molecule in its ground state (called S0) is photoexcited, it may end up in any one of a number of excited states (S1, S2, S3, ...). These higher excited states have different vibrational levels, populated in proportion to their overlap with the ground state according to the Franck–Condon principle. These vibrational excited states typically decay rapidly by internal conversion and vibrational relaxation to S1, followed by a radiative transition to the ground state or to vibrational states close to the ground state. This transition is called fluorescence. All of these states are singlet states.
A different pathway for deexcitation is intersystem crossing from the S1 to a triplet state T1. Decay from T1 to S0 is typically slower and less intense and is called phosphorescence.
Absorption of a photon of energy results in an excited state of the same multiplicity (spin) of the ground state, usually a singlet (Sn with n > 0). In solution, states with n > 1 relax rapidly to the lowest vibrational level of the first excited state (S1) by transferring energy to the solvent molecules through non-radiative processes, including internal conversion followed by vibrational relaxation, in which the energy is dissipated as heat. Thus the fluorescence energy is typically less than the photoexcitation energy.
The excited state S1 can relax by other mechanisms that do not involve the emission of light. These processes, called non-radiative processes, compete with fluorescence emission and decrease its efficiency. Examples include internal conversion, intersystem crossing to the triplet state, and energy transfer to another molecule. An example of energy transfer is Förster resonance energy transfer. Relaxation from an excited state can also occur through collisional quenching, a process where a molecule (the quencher) collides with the fluorescent molecule during its excited state lifetime. Molecular oxygen (O2) is an extremely efficient quencher of fluorescence because of its unusual triplet ground state.
Quantum yield
The fluorescence quantum yield gives the efficiency of the fluorescence process. It is defined as the ratio of the number of photons emitted to the number of photons absorbed.
The maximum possible fluorescence quantum yield is 1.0 (100%); each photon absorbed results in a photon emitted. Compounds with quantum yields of 0.10 are still considered quite fluorescent. Another way to define the quantum yield of fluorescence is by the rates of excited state decay:

Φ_f = k_f / Σ_i k_i

where k_f is the rate constant of spontaneous emission of radiation and Σ_i k_i is the sum of all rates of excited state decay. Other rates of excited state decay are caused by mechanisms other than photon emission and are, therefore, often called "non-radiative rates", which can include:
dynamic collisional quenching
near-field dipole–dipole interaction (or resonance energy transfer)
internal conversion
intersystem crossing
Thus, if the rate of any pathway changes, both the excited state lifetime and the fluorescence quantum yield will be affected.
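As a numerical illustration of the rate picture above (the rate constants are arbitrary example values, not data for any real fluorophore):

k_radiative = 1.0e8                 # spontaneous emission rate constant, s^-1 (example)
k_nonradiative = {                  # example non-radiative pathways, s^-1
    "internal conversion": 2.0e7,
    "intersystem crossing": 1.0e7,
    "collisional quenching": 2.0e7,
}

# Quantum yield is the radiative rate divided by the sum of all decay rates.
k_total = k_radiative + sum(k_nonradiative.values())
quantum_yield = k_radiative / k_total
print(round(quantum_yield, 2))      # 0.67 for these example rates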
Fluorescence quantum yields are measured by comparison to a standard. The quinine salt quinine sulfate in a sulfuric acid solution was long regarded as the most common fluorescence standard; however, a recent study revealed that the fluorescence quantum yield of this solution is strongly affected by temperature, and it should no longer be used as a standard solution. Quinine in 0.1 M perchloric acid (HClO4) shows no temperature dependence up to 45 °C, so it can be considered a reliable standard solution.
Lifetime
The fluorescence lifetime refers to the average time the molecule stays in its excited state before emitting a photon. Fluorescence typically follows first-order kinetics:
[S1] = [S1]_0 · e^(−Γt)

where [S1] is the concentration of excited state molecules at time t, [S1]_0 is the initial concentration, and Γ is the decay rate, i.e. the inverse of the fluorescence lifetime. This is an instance of exponential decay. Various radiative and non-radiative processes can de-populate the excited state. In such cases the total decay rate is the sum over all rates:

Γ_tot = Γ_rad + Γ_nrad

where Γ_tot is the total decay rate, Γ_rad the radiative decay rate and Γ_nrad the non-radiative decay rate. It is similar to a first-order chemical reaction in which the first-order rate constant is the sum of all of the rates (a parallel kinetic model). If the rate of spontaneous emission, or any of the other rates, is fast, the lifetime is short. For commonly used fluorescent compounds, typical excited state decay times for photon emissions with energies from the UV to near infrared are within the range of 0.5 to 20 nanoseconds. The fluorescence lifetime is an important parameter for practical applications of fluorescence such as fluorescence resonance energy transfer and fluorescence-lifetime imaging microscopy.
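The first-order decay above can be illustrated numerically in Python; the two decay rates are arbitrary example values:

import math

gamma_rad = 5.0e7                    # radiative decay rate, s^-1 (example)
gamma_nonrad = 5.0e7                 # non-radiative decay rate, s^-1 (example)
gamma_total = gamma_rad + gamma_nonrad

lifetime = 1.0 / gamma_total         # fluorescence lifetime tau = 1/Gamma (10 ns here)

def excited_fraction(t):
    # Fraction of molecules still in the excited state after time t.
    return math.exp(-gamma_total * t)

print(lifetime)                      # 1e-08 s, i.e. 10 ns
print(excited_fraction(lifetime))    # ~0.37: about 37% remain after one lifetime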
Jablonski diagram
The Jablonski diagram describes most of the relaxation mechanisms for excited state molecules. The diagram alongside shows how fluorescence occurs due to the relaxation of certain excited electrons of a molecule.
Fluorescence anisotropy
Fluorophores are more likely to be excited by photons if the transition moment of the fluorophore is parallel to the electric vector of the photon. The polarization of the emitted light will also depend on the transition moment. The transition moment is dependent on the physical orientation of the fluorophore molecule. For fluorophores in solution, the intensity and polarization of the emitted light is dependent on rotational diffusion. Therefore, anisotropy measurements can be used to investigate how freely a fluorescent molecule moves in a particular environment.
Fluorescence anisotropy can be defined quantitatively as

r = (I_∥ − I_⊥) / (I_∥ + 2I_⊥)

where I_∥ is the emitted intensity parallel to the polarization of the excitation light and I_⊥ is the emitted intensity perpendicular to the polarization of the excitation light.
Anisotropy is independent of the intensity of the absorbed or emitted light; it is a property of the emitted light itself, so photobleaching of the dye will not affect the anisotropy value as long as the signal is detectable.
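A short sketch of the anisotropy calculation defined above (the intensities are arbitrary example numbers; their units cancel):

def fluorescence_anisotropy(i_parallel, i_perpendicular):
    # r = (I_par - I_perp) / (I_par + 2 * I_perp); because the intensities
    # appear as a ratio, uniform signal loss (e.g. photobleaching) cancels out.
    return (i_parallel - i_perpendicular) / (i_parallel + 2 * i_perpendicular)

print(fluorescence_anisotropy(100.0, 60.0))   # example values -> about 0.18
print(fluorescence_anisotropy(50.0, 30.0))    # same ratio after bleaching -> same r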
Fluorescence
Strongly fluorescent pigments often have an unusual appearance which is often described colloquially as a "neon color" (originally "day-glo" in the late 1960s, early 1970s). This phenomenon was termed "Farbenglut" by Hermann von Helmholtz and "fluorence" by Ralph M. Evans. It is generally thought to be related to the high brightness of the color relative to what it would be as a component of white. Fluorescence shifts energy in the incident illumination from shorter wavelengths to longer (such as blue to yellow) and thus can make the fluorescent color appear brighter (more saturated) than it could possibly be by reflection alone.
Rules
There are several general rules that deal with fluorescence. Each of the following rules have exceptions but they are useful guidelines for understanding fluorescence (these rules do not necessarily apply to two-photon absorption).
Kasha's rule
Kasha's rule states that the luminescence (fluorescence or phosphorescence) of a molecule will be emitted only from the lowest excited state of its given multiplicity. Vavilov's rule (a logical extension of Kasha's rule, hence called the Kasha–Vavilov rule) dictates that the quantum yield of luminescence is independent of the wavelength of the exciting radiation and is proportional to the absorbance at the exciting wavelength. Kasha's rule does not always apply and is violated by simple molecules; one example is azulene. A somewhat more reliable statement, although still with exceptions, would be that the fluorescence spectrum shows very little dependence on the wavelength of the exciting radiation.
Mirror image rule
For many fluorophores the absorption spectrum is a mirror image of the emission spectrum.
This is known as the mirror image rule and is related to the Franck–Condon principle, which states that electronic transitions are vertical, that is, the energy changes without the nuclear positions changing, as can be represented by a vertical line in a Jablonski diagram. This means the nucleus does not move, and the vibrational levels of the excited state resemble the vibrational levels of the ground state.
Stokes shift
In general, emitted fluorescence light has a longer wavelength and lower energy than the absorbed light. This phenomenon, known as Stokes shift, is due to energy loss between the time a photon is absorbed and when a new one is emitted. The causes and magnitude of Stokes shift can be complex and are dependent on the fluorophore and its environment. However, there are some common causes. It is frequently due to non-radiative decay to the lowest vibrational energy level of the excited state. Another factor is that the emission of fluorescence frequently leaves a fluorophore in a higher vibrational level of the ground state.
In nature
There are many natural compounds that exhibit fluorescence, and they have a number of applications. Some deep-sea animals, such as the greeneye, have fluorescent structures.
Compared to bioluminescence and biophosphorescence
Fluorescence
Fluorescence is the phenomenon of absorption of electromagnetic radiation, typically from ultraviolet or visible light, by a molecule and the subsequent emission of a photon of a lower energy (smaller frequency, longer wavelength). This causes the light that is emitted to be a different color than the light that is absorbed. Stimulating light excites an electron to an excited state. When the molecule returns to the ground state, it releases a photon, which is the fluorescent emission. The excited state lifetime is short, so emission of light is typically only observable when the absorbing light is on. Fluorescence can be of any wavelength but is often more significant when emitted photons are in the visible spectrum. When it occurs in a living organism, it is sometimes called biofluorescence. Fluorescence should not be confused with bioluminescence and biophosphorescence. Pumpkin toadlets that live in the Brazilian Atlantic forest are fluorescent.
Bioluminescence
Bioluminescence differs from fluorescence in that it is the natural production of light by chemical reactions within an organism, whereas fluorescence is the absorption and reemission of light from the environment. Fireflies and anglerfish are two examples of bioluminescent organisms. To add to the potential confusion, some organisms are both bioluminescent and fluorescent, like the sea pansy Renilla reniformis, where bioluminescence serves as the light source for fluorescence.
Phosphorescence
Phosphorescence is similar to fluorescence in its requirement of light wavelengths as a provider of excitation energy. The difference here lies in the relative stability of the energized electron. Unlike with fluorescence, in phosphorescence the electron retains stability, emitting light that continues to "glow in the dark" even after the stimulating light source has been removed. For example, glow-in-the-dark stickers are phosphorescent, but there are no truly biophosphorescent animals known.
Mechanisms
Epidermal chromatophores
Pigment cells that exhibit fluorescence are called fluorescent chromatophores, and function somatically similar to regular chromatophores. These cells are dendritic, and contain pigments called fluorosomes. These pigments contain fluorescent proteins which are activated by K+ (potassium) ions, and it is their movement, aggregation, and dispersion within the fluorescent chromatophore that cause directed fluorescence patterning. Fluorescent cells are innervated the same as other chromatophores, like melanophores, pigment cells that contain melanin. Short term fluorescent patterning and signaling is controlled by the nervous system. Fluorescent chromatophores can be found in the skin (e.g. in fish) just below the epidermis, amongst other chromatophores.
Epidermal fluorescent cells in fish also respond to hormonal stimuli by the α–MSH and MCH hormones much the same as melanophores. This suggests that fluorescent cells may have color changes throughout the day that coincide with their circadian rhythm. Fish may also be sensitive to cortisol induced stress responses to environmental stimuli, such as interaction with a predator or engaging in a mating ritual.
Phylogenetics
Evolutionary origins
The incidence of fluorescence across the tree of life is widespread, and has been studied most extensively in cnidarians and fish. The phenomenon appears to have evolved multiple times in multiple taxa such as in the anguilliformes (eels), gobioidei (gobies and cardinalfishes), and tetradontiformes (triggerfishes), along with the other taxa discussed later in the article. Fluorescence is highly genotypically and phenotypically variable even within ecosystems, in regards to the wavelengths emitted, the patterns displayed, and the intensity of the fluorescence. Generally, the species relying upon camouflage exhibit the greatest diversity in fluorescence, likely because camouflage may be one of the uses of fluorescence.
It is suspected by some scientists that GFPs and GFP-like proteins began as electron donors activated by light. These electrons were then used for reactions requiring light energy. Functions of fluorescent proteins, such as protection from the sun, conversion of light into different wavelengths, or for signaling are thought to have evolved secondarily.
Adaptive functions
Currently, relatively little is known about the functional significance of fluorescence and fluorescent proteins. However, it is suspected that fluorescence may serve important functions in signaling and communication, mating, lures, camouflage, UV protection and antioxidation, photoacclimation, dinoflagellate regulation, and in coral health.
Aquatic
Water absorbs light of long wavelengths, so less light from these wavelengths reflects back to reach the eye. Therefore, warm colors from the visual light spectrum appear less vibrant at increasing depths. Water scatters light of shorter wavelengths above violet, meaning cooler colors dominate the visual field in the photic zone. Light intensity decreases 10-fold with every 75 m of depth, so at depths of 75 m, light is 10% as intense as it is on the surface, and is only 1% as intense at 150 m as it is on the surface. Because water filters out certain wavelengths and reduces the intensity of light reaching greater depths, different proteins, because of the wavelengths and intensities of light they are capable of absorbing, are better suited to different depths. Theoretically, some fish eyes can detect light as deep as 1000 m. At these depths of the aphotic zone, the only sources of light are organisms themselves, giving off light through chemical reactions in a process called bioluminescence.
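The attenuation figures above (10% of surface intensity at 75 m, 1% at 150 m) correspond to an exponential decay of the form I(d) = I0 · 10^(−d / 75 m); the functional form is inferred from those figures rather than stated explicitly in the text. A minimal Python sketch:

# Sketch of the depth/intensity relation implied by "10-fold decrease per 75 m".
def relative_intensity(depth_m, tenfold_depth_m=75.0):
    """Fraction of surface light intensity remaining at a given depth."""
    return 10.0 ** (-depth_m / tenfold_depth_m)

for depth in (0, 75, 150, 300):
    print(f"{depth:4d} m: {relative_intensity(depth):.4%} of surface intensity")
# 0 m -> 100%, 75 m -> 10%, 150 m -> 1%, 300 m -> 0.01%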
Fluorescence is simply defined as the absorption of electromagnetic radiation at one wavelength and its reemission at another, lower energy wavelength. Thus any type of fluorescence depends on the presence of external sources of light. Biologically functional fluorescence is found in the photic zone, where there is not only enough light to cause fluorescence, but enough light for other organisms to detect it.
The visual field in the photic zone is naturally blue, so colors of fluorescence can be detected as bright reds, oranges, yellows, and greens. Green is the most commonly found color in the marine spectrum, yellow the second most, orange the third, and red is the rarest. Fluorescence can occur in organisms in the aphotic zone as a byproduct of that same organism's bioluminescence. Some fluorescence in the aphotic zone is merely a byproduct of the organism's tissue biochemistry and does not have a functional purpose. However, the functional and adaptive significance of fluorescence in the aphotic zone of the deep ocean remains an active area of research.
Photic zone
Fish
Bony fishes living in shallow water generally have good color vision due to their living in a colorful environment. Thus, in shallow-water fishes, red, orange, and green fluorescence most likely serves as a means of communication with conspecifics, especially given the great phenotypic variance of the phenomenon.
Many fish that exhibit fluorescence, such as sharks, lizardfish, scorpionfish, wrasses, and flatfishes, also possess yellow intraocular filters. Yellow intraocular filters in the lenses and cornea of certain fishes function as long-pass filters. These filters enable the species to visualize and potentially exploit fluorescence, in order to enhance visual contrast and patterns that are invisible to other fishes and predators that lack this visual specialization. Fish that possess the necessary yellow intraocular filters for visualizing fluorescence potentially exploit a light signal from members of their own species. Fluorescent patterning was especially prominent in cryptically patterned fishes possessing complex camouflage. Many of these lineages also possess yellow long-pass intraocular filters that could enable visualization of such patterns.
Another adaptive use of fluorescence is to generate orange and red light from the ambient blue light of the photic zone to aid vision. Red light can only be seen across short distances due to attenuation of red light wavelengths by water. Many fish species that fluoresce are small, group-living, or benthic/aphotic, and have conspicuous patterning. This patterning is caused by fluorescent tissue and is visible to other members of the species; however, the patterning is invisible at other visual spectra. These intraspecific fluorescent patterns also coincide with intra-species signaling. The patterns appear in ocular rings, indicating the direction of an individual's gaze, and along fins, indicating the direction of an individual's movement. Current research suspects that this red fluorescence is used for private communication between members of the same species. Due to the prominence of blue light at ocean depths, red light and light of longer wavelengths are muddled, and many predatory reef fish have little to no sensitivity for light at these wavelengths. Fish such as the fairy wrasse that have developed visual sensitivity to longer wavelengths are able to display red fluorescent signals that give a high contrast to the blue environment and are conspicuous to conspecifics in short ranges, yet are relatively invisible to other common fish that have reduced sensitivities to long wavelengths. Thus, fluorescence can be used as adaptive signaling and intra-species communication in reef fish.
Additionally, it is suggested that fluorescent tissues that surround an organism's eyes are used to convert blue light from the photic zone or green bioluminescence in the aphotic zone into red light to aid vision.
Sharks
A new fluorophore was described in two species of sharks, attributed to a previously undescribed group of brominated tryptophan-kynurenine small-molecule metabolites.
Coral
Fluorescence serves a wide variety of functions in coral. Fluorescent proteins in corals may contribute to photosynthesis by converting otherwise unusable wavelengths of light into ones for which the coral's symbiotic algae are able to conduct photosynthesis. Also, the proteins may fluctuate in number as more or less light becomes available as a means of photoacclimation. Similarly, these fluorescent proteins may possess antioxidant capacities to eliminate oxygen radicals produced by photosynthesis. Finally, through modulating photosynthesis, the fluorescent proteins may also serve as a means of regulating the activity of the coral's photosynthetic algal symbionts.
Cephalopods
Alloteuthis subulata and Loligo vulgaris, two types of nearly transparent squid, have fluorescent spots above their eyes. These spots reflect incident light, which may serve as a means of camouflage, but also for signaling to other squids for schooling purposes.
Jellyfish
Another, well-studied example of fluorescence in the ocean is the hydrozoan Aequorea victoria. This jellyfish lives in the photic zone off the west coast of North America and was identified as a carrier of green fluorescent protein (GFP) by Osamu Shimomura. The gene for these green fluorescent proteins has been isolated and is scientifically significant because it is widely used in genetic studies to indicate the expression of other genes.
Mantis shrimp
Several species of mantis shrimp, which are stomatopod crustaceans, including Lysiosquillina glabriuscula, have yellow fluorescent markings along their antennal scales and carapace (shell) that males present during threat displays to predators and other males. The display involves raising the head and thorax, spreading the striking appendages and other maxillipeds, and extending the prominent, oval antennal scales laterally, which makes the animal appear larger and accentuates its yellow fluorescent markings. Furthermore, as depth increases, mantis shrimp fluorescence accounts for a greater part of the visible light available. During mating rituals, mantis shrimp actively fluoresce, and the wavelength of this fluorescence matches the wavelengths detected by their eye pigments.
Aphotic zone
Siphonophores
Siphonophorae is an order of marine animals from the phylum Hydrozoa that consist of a specialized medusoid and polyp zooid. Some siphonophores, including the genus Erenna that live in the aphotic zone between depths of 1600 m and 2300 m, exhibit yellow to red fluorescence in the photophores of their tentacle-like tentilla. This fluorescence occurs as a by-product of bioluminescence from these same photophores. The siphonophores exhibit the fluorescence in a flicking pattern that is used as a lure to attract prey.
Dragonfish
The predatory deep-sea dragonfish Malacosteus niger, the closely related genus Aristostomias and the species Pachystomias microdon use fluorescent red accessory pigments to convert the blue light emitted from their own bioluminescence to red light from suborbital photophores. This red luminescence is invisible to other animals, which allows these dragonfish extra light at dark ocean depths without attracting or signaling predators.
Terrestrial
Amphibians
Fluorescence is widespread among amphibians and has been documented in several families of frogs, salamanders and caecilians, but the extent of it varies greatly.
The polka-dot tree frog (Hypsiboas punctatus), widely found in South America, was unintentionally discovered to be the first fluorescent amphibian in 2017. The fluorescence was traced to a new compound found in the lymph and skin glands. The main fluorescent compound is Hyloin-L1 and it gives a blue-green glow when exposed to violet or ultraviolet light. The scientists behind the discovery suggested that the fluorescence can be used for communication. They speculated that fluorescence possibly is relatively widespread among frogs. Only a few months later, fluorescence was discovered in the closely related Hypsiboas atlanticus. Because it is linked to secretions from skin glands, they can also leave fluorescent markings on surfaces where they have been.
In 2019, two other frogs, the tiny pumpkin toadlet (Brachycephalus ephippium) and red pumpkin toadlet (B. pitanga) of southeastern Brazil, were found to have naturally fluorescent skeletons, which are visible through their skin when exposed to ultraviolet light. It was initially speculated that the fluorescence supplemented their already aposematic colours (they are toxic) or that it was related to mate choice (species recognition or determining fitness of a potential partner), but later studies indicate that the former explanation is unlikely, as predation attempts on the toadlets appear to be unaffected by the presence/absence of fluorescence.
In 2020 it was confirmed that green or yellow fluorescence is widespread not only in adult frogs that are exposed to blue or ultraviolet light, but also among tadpoles, salamanders and caecilians. The extent varies greatly depending on species; in some it is highly distinct and in others it is barely noticeable. It can be based on their skin pigmentation, their mucus or their bones.
Butterflies
Swallowtail (Papilio) butterflies have complex systems for emitting fluorescent light. Their wings contain pigment-infused crystals that provide directed fluorescent light. These crystals function to produce fluorescent light best when they absorb radiance from sky-blue light (wavelength about 420 nm). The wavelengths of light that the butterflies see the best correspond to the absorbance of the crystals in the butterfly's wings. This likely functions to enhance the capacity for signaling.
Parrots
Parrots have fluorescent plumage that may be used in mate signaling. A study using mate-choice experiments on budgerigars (Melopsittacus undulatus) found compelling support for fluorescent sexual signaling, with both males and females significantly preferring birds with the fluorescent experimental stimulus. This study suggests that the fluorescent plumage of parrots is not simply a by-product of pigmentation, but instead an adapted sexual signal. Considering the intricacies of the pathways that produce fluorescent pigments, there may be significant costs involved. Therefore, individuals exhibiting strong fluorescence may be honest indicators of high individual quality, since they can deal with the associated costs.
Arachnids
Spiders fluoresce under UV light and possess a huge diversity of fluorophores. Andrews, Reed, & Masta noted that spiders are the only known group in which fluorescence is "taxonomically widespread, variably expressed, evolutionarily labile, and probably under selection and potentially of ecological importance for intraspecific and interspecific signaling". They showed that fluorescence evolved multiple times across spider taxa, with novel fluorophores evolving during spider diversification.
In some spiders, ultraviolet cues are important for predator–prey interactions, intraspecific communication, and camouflage-matching with fluorescent flowers. Differing ecological contexts could favor inhibition or enhancement of fluorescence expression, depending upon whether fluorescence helps spiders be cryptic or makes them more conspicuous to predators. Therefore, natural selection could be acting on expression of fluorescence across spider species.
Scorpions are also fluorescent, in their case due to the presence of beta carboline in their cuticles.
Platypus
In 2020 fluorescence was reported for several platypus specimens.
Plants
Many plants are fluorescent due to the presence of chlorophyll, which is probably the most widely distributed fluorescent molecule, producing red emission under a range of excitation wavelengths. This attribute of chlorophyll is commonly used by ecologists to measure photosynthetic efficiency.
The Mirabilis jalapa flower contains violet, fluorescent betacyanins and yellow, fluorescent betaxanthins. Under white light, parts of the flower containing only betaxanthins appear yellow, but in areas where both betaxanthins and betacyanins are present, the visible fluorescence of the flower is faded due to internal light-filtering mechanisms. Fluorescence was previously suggested to play a role in pollinator attraction; however, it was later found that the visual signal from fluorescence is negligible compared to the visual signal of light reflected by the flower.
Abiotic
Gemology, mineralogy and geology
In addition to the eponymous fluorspar, many gemstones and minerals may have a distinctive fluorescence or may fluoresce differently under short-wave ultraviolet, long-wave ultraviolet, visible light, or X-rays.
Many types of calcite and amber will fluoresce under shortwave UV, longwave UV and visible light. Rubies, emeralds, and diamonds exhibit red fluorescence under long-wave UV, blue and sometimes green light; diamonds also emit light under X-ray radiation.
Fluorescence in minerals is caused by a wide range of activators. In some cases, the concentration of the activator must be restricted to below a certain level, to prevent quenching of the fluorescent emission. Furthermore, the mineral must be free of impurities such as iron or copper, to prevent quenching of possible fluorescence. Divalent manganese, in concentrations of up to several percent, is responsible for the red or orange fluorescence of calcite, the green fluorescence of willemite, the yellow fluorescence of esperite, and the orange fluorescence of wollastonite and clinohedrite. Hexavalent uranium, in the form of the uranyl cation (UO2^2+), fluoresces at all concentrations in a yellow green, and is the cause of fluorescence of minerals such as autunite or andersonite, and, at low concentration, is the cause of the fluorescence of such materials as some samples of hyalite opal. Trivalent chromium at low concentration is the source of the red fluorescence of ruby. Divalent europium is the source of the blue fluorescence, when seen in the mineral fluorite. Trivalent lanthanides such as terbium and dysprosium are the principal activators of the creamy yellow fluorescence exhibited by the yttrofluorite variety of the mineral fluorite, and contribute to the orange fluorescence of zircon. Powellite (calcium molybdate) and scheelite (calcium tungstate) fluoresce intrinsically in yellow and blue, respectively. When present together in solid solution, energy is transferred from the higher-energy tungsten to the lower-energy molybdenum, such that fairly low levels of molybdenum are sufficient to cause a yellow emission for scheelite, instead of blue. Low-iron sphalerite (zinc sulfide) fluoresces and phosphoresces in a range of colors, influenced by the presence of various trace impurities.
Crude oil (petroleum) fluoresces in a range of colors, from dull-brown for heavy oils and tars through to bright-yellowish and bluish-white for very light oils and condensates. This phenomenon is used in oil exploration drilling to identify very small amounts of oil in drill cuttings and core samples.
Humic acids and fulvic acids produced by the degradation of organic matter in soils (humus) may also fluoresce because of the presence of aromatic cycles in their complex molecular structures. Humic substances dissolved in groundwater can be detected and characterized by spectrofluorimetry.
Organic liquids
Organic (carbon based) solutions, such as anthracene or stilbene dissolved in benzene or toluene, fluoresce with ultraviolet or gamma ray irradiation. The decay times of this fluorescence are on the order of nanoseconds, since the duration of the light depends on the lifetime of the excited states of the fluorescent material, in this case anthracene or stilbene.
Scintillation is defined as a flash of light produced in a transparent material by the passage of a particle (an electron, an alpha particle, an ion, or a high-energy photon). Stilbene and derivatives are used in scintillation counters to detect such particles. Stilbene is also one of the gain mediums used in dye lasers.
Atmosphere
Fluorescence is observed in the atmosphere when the air is under energetic electron bombardment. In cases such as the natural aurora, high-altitude nuclear explosions, and rocket-borne electron gun experiments, the molecules and ions formed have a fluorescent response to light.
Common materials that fluoresce
Vitamin B2 fluoresces yellow.
Tonic water fluoresces blue due to the presence of quinine.
Highlighter ink is often fluorescent due to the presence of pyranine.
Banknotes, postage stamps and credit cards often have fluorescent security features.
In novel technology
In August 2020, researchers reported creating the brightest fluorescent solid optical materials so far. They achieved this by preserving the properties of highly fluorescent dyes through spatial and electronic isolation, mixing cationic dyes with anion-binding cyanostar macrocycles. According to a co-author, these materials may have applications in areas such as solar energy harvesting, bioimaging, and lasers.
Applications
Lighting
The common fluorescent lamp relies on fluorescence. Inside the glass tube is a partial vacuum and a small amount of mercury. An electric discharge in the tube causes the mercury atoms to emit mostly ultraviolet light. The tube is lined with a coating of a fluorescent material, called the phosphor, which absorbs ultraviolet light and re-emits visible light. Fluorescent lighting is more energy-efficient than incandescent lighting elements. However, the uneven spectrum of traditional fluorescent lamps may cause certain colors to appear different from when illuminated by incandescent light or daylight. The mercury vapor emission spectrum is dominated by a short-wave UV line at 254 nm (which provides most of the energy to the phosphors), accompanied by visible light emission at 436 nm (blue), 546 nm (green) and 579 nm (yellow-orange). These three lines can be observed superimposed on the white continuum using a hand spectroscope, for light emitted by the usual white fluorescent tubes. These same visible lines, accompanied by the emission lines of trivalent europium and trivalent terbium, and further accompanied by the emission continuum of divalent europium in the blue region, comprise the more discontinuous light emission of the modern trichromatic phosphor systems used in many compact fluorescent lamps and traditional lamps where better color rendition is a goal.
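For illustration, the emission lines quoted above can be converted to photon energies with the standard relation E ≈ 1239.84 eV·nm / λ; this conversion is general physics rather than something specific to the article. A minimal Python sketch:

# Convert the quoted mercury emission lines to photon energies (illustrative).
def ev_from_nm(wavelength_nm):
    return 1239.84 / wavelength_nm   # eV, from E = h*c/lambda

for line_nm in (254, 436, 546, 579):
    print(f"{line_nm} nm -> {ev_from_nm(line_nm):.2f} eV")
# 254 nm (UV) -> ~4.88 eV; the visible lines lie between roughly 2.1 and 2.9 eV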
Fluorescent lights were first available to the public at the 1939 New York World's Fair. Improvements since then have largely been better phosphors, longer life, more consistent internal discharge, and easier-to-use shapes (such as compact fluorescent lamps). Some high-intensity discharge (HID) lamps couple their even-greater electrical efficiency with phosphor enhancement for better color rendition.
White light-emitting diodes (LEDs) became available in the mid-1990s as LED lamps, in which blue light emitted from the semiconductor strikes phosphors deposited on the tiny chip. The combination of the blue light that continues through the phosphor and the green to red fluorescence from the phosphors produces a net emission of white light.
Glow sticks sometimes utilize fluorescent materials to absorb light from the chemiluminescent reaction and emit light of a different color.
Analytical chemistry
Many analytical procedures involve the use of a fluorometer, usually with a single exciting wavelength and single detection wavelength. Because of the sensitivity that the method affords, fluorescent molecule concentrations as low as 1 part per trillion can be measured.
Fluorescence in several wavelengths can be detected by an array detector, to detect compounds from HPLC flow. Also, TLC plates can be visualized if the compounds or a coloring reagent is fluorescent. Fluorescence is most effective when there is a larger ratio of atoms at lower energy levels in a Boltzmann distribution. There is, then, a higher probability of excitation and photon emission by the lower-energy atoms, making analysis more efficient.
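A rough sketch of the Boltzmann ratio mentioned above, assuming an illustrative electronic energy gap of about 2.5 eV (roughly a visible-light transition; the specific number is an assumption) at room temperature:

import math

# Boltzmann population ratio N_upper/N_lower = exp(-dE / kT), ignoring degeneracy.
k_B = 8.617e-5          # Boltzmann constant in eV/K

def boltzmann_ratio(delta_e_ev, temperature_k=298.0):
    return math.exp(-delta_e_ev / (k_B * temperature_k))

# For an assumed ~2.5 eV electronic gap at room temperature the excited-state
# population is essentially zero, so nearly all molecules sit in the ground
# state and are available to absorb the excitation light.
print(boltzmann_ratio(2.5))   # on the order of 1e-43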
Spectroscopy
Usually the setup of a fluorescence assay involves a light source, which may emit many different wavelengths of light. In general, a single wavelength is required for proper analysis, so, in order to selectively filter the light, it is passed through an excitation monochromator, and then that chosen wavelength is passed through the sample cell. After absorption and re-emission of the energy, many wavelengths may emerge due to Stokes shift and various electron transitions. To separate and analyze them, the fluorescent radiation is passed through an emission monochromator, and observed selectively by a detector.
Lasers
Lasers most often use the fluorescence of certain materials as their active media, such as the red glow produced by a ruby (chromium sapphire), the infrared of titanium sapphire, or the unlimited range of colors produced by organic dyes. These materials normally fluoresce through a process called spontaneous emission, in which the light is emitted in all directions and often at many discrete spectral lines all at once. In many lasers, the fluorescent medium is "pumped" by exposing it to an intense light source, creating a population inversion, meaning that more of its atoms become in an excited state (high energy) rather than at ground state (low energy). When this occurs, the spontaneous fluorescence can then induce the other atoms to emit their photons in the same direction and at the same wavelength, creating stimulated emission. When a portion of the spontaneous fluorescence is trapped between two mirrors, nearly all of the medium's fluorescence can be stimulated to emit along the same line, producing a laser beam.
Biochemistry and medicine
Fluorescence in the life sciences is used generally as a non-destructive way of tracking or analyzing biological molecules by means of the fluorescent emission at a specific frequency where there is no background from the excitation light, as relatively few cellular components are naturally fluorescent (called intrinsic or autofluorescence).
In fact, a protein or other component can be "labelled" with an extrinsic fluorophore, a fluorescent dye that can be a small molecule, protein, or quantum dot, and this labelling finds wide use in many biological applications.
The quantification of a dye is done with a spectrofluorometer and finds additional applications in:
Microscopy
Scanning the fluorescence intensity across a plane gives fluorescence microscopy of tissues, cells, or subcellular structures, which is accomplished by labeling an antibody with a fluorophore and allowing the antibody to find its target antigen within the sample. Labelling multiple antibodies with different fluorophores allows visualization of multiple targets within a single image (multiple channels). DNA microarrays are a variant of this.
Immunology: An antibody is first prepared by having a fluorescent chemical group attached, and the sites (e.g., on a microscopic specimen) where the antibody has bound can be seen, and even quantified, by the fluorescence.
FLIM (Fluorescence Lifetime Imaging Microscopy) can be used to detect certain bio-molecular interactions that manifest themselves by influencing fluorescence lifetimes.
Cell and molecular biology: detection of colocalization using fluorescence-labelled antibodies for selective detection of the antigens of interest using specialized software such as ImageJ.
Other techniques
FRET (Förster resonance energy transfer, also known as fluorescence resonance energy transfer) is used to study protein interactions, detect specific nucleic acid sequences, and serve as the basis of biosensors, while fluorescence lifetime imaging (FLIM) can give an additional layer of information.
Biotechnology: biosensors using fluorescence are being studied as possible Fluorescent glucose biosensors.
Automated sequencing of DNA by the chain termination method; each of four different chain terminating bases has its own specific fluorescent tag. As the labelled DNA molecules are separated, the fluorescent label is excited by a UV source, and the identity of the base terminating the molecule is identified by the wavelength of the emitted light.
FACS (fluorescence-activated cell sorting). One of several important cell sorting techniques used in the separation of different cell lines (especially those isolated from animal tissues).
DNA detection: the compound ethidium bromide, in aqueous solution, has very little fluorescence, as it is quenched by water. Ethidium bromide's fluorescence is greatly enhanced after it binds to DNA, so this compound is very useful in visualising the location of DNA fragments in agarose gel electrophoresis. Intercalated ethidium is in a hydrophobic environment when it is between the base pairs of the DNA, protected from quenching by water which is excluded from the local environment of the intercalated ethidium. Ethidium bromide may be carcinogenic – an arguably safer alternative is the dye SYBR Green.
FIGS (Fluorescence image-guided surgery) is a medical imaging technique that uses fluorescence to detect properly labeled structures during surgery.
Intravascular fluorescence is a catheter-based medical imaging technique that uses fluorescence to detect high-risk features of atherosclerosis and unhealed vascular stent devices. Plaque autofluorescence has been used in a first-in-man study in coronary arteries in combination with optical coherence tomography. Molecular agents have also been used to detect specific features, such as stent fibrin accumulation and enzymatic activity related to artery inflammation.
SAFI (species altered fluorescence imaging) is an imaging technique used in electrokinetics and microfluidics. It uses non-electromigrating dyes whose fluorescence is easily quenched by migrating chemical species of interest. The dye(s) are usually seeded everywhere in the flow and differential quenching of their fluorescence by analytes is directly observed.
Fluorescence-based assays for screening toxic chemicals. The optical assays consist of a mixture of environment-sensitive fluorescent dyes and human skin cells that generate fluorescence spectra patterns. This approach can reduce the need for laboratory animals in biomedical research and the pharmaceutical industry.
Bone-margin detection: Alizarin-stained specimens and certain fossils can be lit by fluorescent lights to view anatomical structures, including bone margins.
Forensics
Fingerprints can be visualized with fluorescent compounds such as ninhydrin or DFO (1,8-Diazafluoren-9-one). Blood and other substances are sometimes detected by fluorescent reagents, like fluorescein. Fibers, and other materials that may be encountered in forensics or with a relationship to various collectibles, are sometimes fluorescent.
Non-destructive testing
Fluorescent penetrant inspection is used to find cracks and other defects on the surface of a part. Dye tracing, using fluorescent dyes, is used to find leaks in liquid and gas plumbing systems.
Signage
Fluorescent colors are frequently used in signage, particularly road signs. Fluorescent colors are generally recognizable at longer ranges than their non-fluorescent counterparts, with fluorescent orange being particularly noticeable. This property has led to its frequent use in safety signs and labels.
Optical brighteners
Fluorescent compounds are often used to enhance the appearance of fabric and paper, causing a "whitening" effect. A white surface treated with an optical brightener can emit more visible light than that which shines on it, making it appear brighter. The blue light emitted by the brightener compensates for the diminishing blue of the treated material and changes the hue away from yellow or brown and toward white. Optical brighteners are used in laundry detergents, high brightness paper, cosmetics, high-visibility clothing and more.
| Physical sciences | Electrodynamics | null |
11556 | https://en.wikipedia.org/wiki/Fundamental%20theorem%20of%20arithmetic | Fundamental theorem of arithmetic | In mathematics, the fundamental theorem of arithmetic, also called the unique factorization theorem and prime factorization theorem, states that every integer greater than 1 can be represented uniquely as a product of prime numbers, up to the order of the factors. For example,
1200 = 2^4 × 3 × 5^2 = (2 × 2 × 2 × 2) × 3 × (5 × 5) = 5 × 2 × 5 × 2 × 3 × 2 × 2 = ...
The theorem says two things about this example: first, that 1200 can be represented as a product of primes, and second, that no matter how this is done, there will always be exactly four 2s, one 3, two 5s, and no other primes in the product.
The requirement that the factors be prime is necessary: factorizations containing composite numbers may not be unique
(for example, 12 = 2 × 6 = 3 × 4).
This theorem is one of the main reasons why 1 is not considered a prime number: if 1 were prime, then factorization into primes would not be unique; for example, 2 = 2 × 1 = 2 × 1 × 1 = ...
The theorem generalizes to other algebraic structures that are called unique factorization domains and include principal ideal domains, Euclidean domains, and polynomial rings over a field. However, the theorem does not hold for algebraic integers. This failure of unique factorization is one of the reasons for the difficulty of the proof of Fermat's Last Theorem. The implicit use of unique factorization in rings of algebraic integers is behind the error of many of the numerous false proofs that have been written during the 358 years between Fermat's statement and Wiles's proof.
History
The fundamental theorem can be derived from Book VII, propositions 30, 31 and 32, and Book IX, proposition 14 of Euclid's Elements.
(In modern terminology: if a prime p divides the product ab, then p divides either a or b or both.) Proposition 30 is referred to as Euclid's lemma, and it is the key in the proof of the fundamental theorem of arithmetic.
(In modern terminology: every integer greater than one is divided evenly by some prime number.) Proposition 31 is proved directly by infinite descent.
Proposition 32 is derived from proposition 31, and proves that the decomposition is possible.
(In modern terminology: a least common multiple of several prime numbers is not a multiple of any other prime number.) Book IX, proposition 14 is derived from Book VII, proposition 30, and proves partially that the decomposition is unique – a point critically noted by André Weil. Indeed, in this proposition the exponents are all equal to one, so nothing is said for the general case.
While Euclid took the first step on the way to the existence of prime factorization, Kamāl al-Dīn al-Fārisī took the final step and stated for the first time the fundamental theorem of arithmetic.
Article 16 of Gauss's Disquisitiones Arithmeticae is an early modern statement and proof employing modular arithmetic.
Applications
Canonical representation of a positive integer
Every positive integer n > 1 can be represented in exactly one way as a product of prime powers
n = p1^n1 × p2^n2 × ... × pk^nk,
where p1 < p2 < ... < pk are primes and the ni are positive integers. This representation is commonly extended to all positive integers, including 1, by the convention that the empty product is equal to 1 (the empty product corresponds to k = 0).
This representation is called the canonical representation of n, or the standard form of n. For example,
999 = 3^3 × 37,
1000 = 2^3 × 5^3,
1001 = 7 × 11 × 13.
Factors p^0 = 1 may be inserted without changing the value of n (for example, 1000 = 2^3 × 3^0 × 5^3). In fact, any positive integer can be uniquely represented as an infinite product taken over all the positive prime numbers, as
n = 2^n1 × 3^n2 × 5^n3 × 7^n4 × ...,
where a finite number of the ni are positive integers, and the others are zero.
Allowing negative exponents provides a canonical form for positive rational numbers.
Arithmetic operations
The canonical representations of the product, greatest common divisor (GCD), and least common multiple (LCM) of two numbers a and b can be expressed simply in terms of the canonical representations of a and b themselves. Writing a = p1^a1 p2^a2 ... pk^ak and b = p1^b1 p2^b2 ... pk^bk over a common set of primes (allowing zero exponents), one has
a·b = p1^(a1+b1) p2^(a2+b2) ... pk^(ak+bk),
gcd(a, b) = p1^min(a1,b1) p2^min(a2,b2) ... pk^min(ak,bk),
lcm(a, b) = p1^max(a1,b1) p2^max(a2,b2) ... pk^max(ak,bk).
However, integer factorization, especially of large numbers, is much more difficult than computing products, GCDs, or LCMs. So these formulas have limited use in practice.
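A minimal Python sketch of the relations above: each number is reduced to its prime-exponent representation by naive trial division (which also reflects the practical point that factoring, not the exponent arithmetic, is the expensive step), and the product, GCD, and LCM are then read off by adding, taking the minimum of, or taking the maximum of the exponents. The helper names are arbitrary.

from collections import Counter
from math import gcd as _gcd  # only used to check the result

def factorize(n):
    """Naive trial-division prime factorization: n -> {prime: exponent}."""
    factors, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def combine(a, b, pick):
    """Build a number from two factorizations, combining exponents with `pick`."""
    fa, fb = factorize(a), factorize(b)
    result = 1
    for p in set(fa) | set(fb):
        result *= p ** pick(fa[p], fb[p])
    return result

a, b = 36, 120
print(combine(a, b, min))                  # gcd(36, 120) = 12
print(combine(a, b, max))                  # lcm(36, 120) = 360
print(combine(a, b, lambda x, y: x + y))   # 36 * 120 = 4320
assert combine(a, b, min) == _gcd(a, b)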
Arithmetic functions
Many arithmetic functions are defined using the canonical representation. In particular, the values of additive and multiplicative functions are determined by their values on the powers of prime numbers.
Proof
The proof uses Euclid's lemma (Elements VII, 30): If a prime divides the product of two integers, then it must divide at least one of these integers.
Existence
It must be shown that every integer greater than 1 is either prime or a product of primes. First, 2 is prime. Then, by strong induction, assume this is true for all numbers greater than 1 and less than n. If n is prime, there is nothing more to prove. Otherwise, there are integers a and b, where n = ab, and 1 < a ≤ b < n. By the induction hypothesis, a and b are products of primes. But then n = ab is a product of primes.
Uniqueness
Suppose, to the contrary, there is an integer that has two distinct prime factorizations. Let n be the least such integer and write n = p1 p2 ... pj = q1 q2 ... qk, where each pi and qi is prime. We see that p1 divides q1 q2 ... qk, so p1 divides some qi by Euclid's lemma. Without loss of generality, say p1 divides q1. Since p1 and q1 are both prime, it follows that p1 = q1. Returning to our factorizations of n, we may cancel these two factors to conclude that p2 ... pj = q2 ... qk. We now have two distinct prime factorizations of some integer strictly smaller than n, which contradicts the minimality of n.
Uniqueness without Euclid's lemma
The fundamental theorem of arithmetic can also be proved without using Euclid's lemma. The proof that follows is inspired by Euclid's original version of the Euclidean algorithm.
Assume that s is the smallest positive integer which is the product of prime numbers in two different ways. Incidentally, this implies that s, if it exists, must be a composite number greater than 1. Now, say
s = p1 p2 ... pm = q1 q2 ... qn.
Every pi must be distinct from every qj. Otherwise, if say pi = qj, then there would exist some positive integer that is smaller than s and has two distinct prime factorizations. One may also suppose that p1 < q1, by exchanging the two factorizations, if needed.
Setting P = p2 ... pm and Q = q2 ... qn, one has s = p1·P = q1·Q.
Also, since p1 < q1, one has Q < P.
It then follows that
s − p1·Q = (q1 − p1)·Q = p1·(P − Q) < s.
As the positive integers less than s have been supposed to have a unique prime factorization, p1 must occur in the factorization of either q1 − p1 or Q. The latter case is impossible, as Q, being smaller than s, must have a unique prime factorization, and p1 differs from every qj. The former case is also impossible, as, if p1 is a divisor of q1 − p1, it must be also a divisor of q1, which is impossible as p1 and q1 are distinct primes.
Therefore, there cannot exist a smallest integer with more than a single distinct prime factorization. Every positive integer must either be a prime number itself, which would factor uniquely, or a composite that also factors uniquely into primes, or, in the case of the integer 1, not factor into any prime.
Generalizations
The first generalization of the theorem is found in Gauss's second monograph (1832) on biquadratic reciprocity. This paper introduced what is now called the ring of Gaussian integers, the set of all complex numbers a + bi where a and b are integers. It is now denoted by Z[i]. He showed that this ring has the four units ±1 and ±i, that the non-zero, non-unit numbers fall into two classes, primes and composites, and that (except for order), the composites have unique factorization as a product of primes (up to the order and multiplication by units).
Similarly, in 1844 while working on cubic reciprocity, Eisenstein introduced the ring Z[ω], where ω is a cube root of unity. This is the ring of Eisenstein integers, and he proved it has the six units ±1, ±ω, ±ω^2 and that it has unique factorization.
However, it was also discovered that unique factorization does not always hold. An example is given by Z[√−5]. In this ring one has
6 = 2 × 3 = (1 + √−5) × (1 − √−5).
Examples like this caused the notion of "prime" to be modified. In Z[√−5] it can be proven that if any of the factors above can be represented as a product, for example, 2 = ab, then one of a or b must be a unit. This is the traditional definition of "prime". It can also be proven that none of these factors obeys Euclid's lemma; for example, 2 divides neither (1 + √−5) nor (1 − √−5) even though it divides their product 6. In algebraic number theory 2 is called irreducible in Z[√−5] (only divisible by itself or a unit) but not prime in Z[√−5] (if it divides a product it must divide one of the factors). The mention of Z[√−5] is required because 2 is prime and irreducible in Z. Using these definitions it can be proven that in any integral domain a prime must be irreducible. Euclid's classical lemma can be rephrased as "in the ring of integers every irreducible is prime". This is also true in Z[i] and Z[ω] but not in Z[√−5].
The rings in which factorization into irreducibles is essentially unique are called unique factorization domains. Important examples are polynomial rings over the integers or over a field, Euclidean domains and principal ideal domains.
In 1843 Kummer introduced the concept of ideal number, which was developed further by Dedekind (1876) into the modern theory of ideals, special subsets of rings. Multiplication is defined for ideals, and the rings in which they have unique factorization are called Dedekind domains.
There is a version of unique factorization for ordinals, though it requires some additional conditions to ensure uniqueness.
Any commutative Möbius monoid satisfies a unique factorization theorem and thus possesses arithmetical properties similar to those of the multiplicative semigroup of positive integers. The fundamental theorem of arithmetic is, in fact, a special case of the unique factorization theorem in commutative Möbius monoids.
| Mathematics | Other | null |
11579 | https://en.wikipedia.org/wiki/Fermi%20paradox | Fermi paradox | The Fermi paradox is the discrepancy between the lack of conclusive evidence of advanced extraterrestrial life and the apparently high likelihood of its existence. Those affirming the paradox generally conclude that if the conditions required for life to arise from non-living matter are as permissive as the available evidence on Earth indicates, then extraterrestrial life would be sufficiently common such that it would be implausible for it not to have been detected yet.
The quandary takes its name from the Italian-American physicist Enrico Fermi: in the summer of 1950, Fermi was engaged in casual conversation about contemporary UFO reports and the possibility of faster-than-light travel with fellow physicists Edward Teller, Herbert York, and Emil Konopinski while the group was walking to lunch. The conversation moved on to other topics, until Fermi later blurted out during lunch, "But where is everybody?" (the exact quote is uncertain.)
There have been many attempts to resolve the Fermi paradox, such as suggesting that intelligent extraterrestrial beings are extremely rare, that the lifetime of such civilizations is short, or that they exist but (for various reasons) humans see no evidence.
Chain of reasoning
The following are some of the facts and hypotheses that together serve to highlight the apparent contradiction:
There are billions of stars in the Milky Way similar to the Sun.
With high probability, some of these stars have Earth-like planets in a circumstellar habitable zone.
Many of these stars, and hence their planets, are much older than the Sun. If Earth-like planets are typical, some may have developed intelligent life long ago.
Some of these civilizations may have developed interstellar travel, a step humans are investigating now.
Even at the slow pace of currently envisioned interstellar travel, the Milky Way galaxy could be completely traversed in a few million years.
Since many of the Sun-like stars are billions of years older than the Sun, the Earth should have already been visited by extraterrestrial civilizations, or at least their probes.
However, there is no convincing evidence that this has happened.
History
Fermi was not the first to ask the question. An earlier implicit mention was by Konstantin Tsiolkovsky in an unpublished manuscript from 1933. He noted "people deny the presence of intelligent beings on the planets of the universe" because "(i) if such beings exist they would have visited Earth, and (ii) if such civilizations existed then they would have given us some sign of their existence". This was not a paradox for others, who took this to imply the absence of extraterrestrial life. But it was one for him, since he believed in extraterrestrial life and the possibility of space travel. Therefore, he proposed what is now known as the zoo hypothesis and speculated that mankind is not yet ready for higher beings to contact us. In turn, Tsiolkovsky himself was not the first to discover the paradox, as shown by his reference to other people's reasons for not accepting the premise that extraterrestrial civilizations exist.
In 1975, Michael H. Hart published a detailed examination of the paradox, one of the first to do so. He argued that if intelligent extraterrestrials exist, and are capable of space travel, then the galaxy could have been colonized in a time much less than that of the age of the Earth. However, there is no observable evidence they have been here, which Hart called "Fact A".
Other names closely related to Fermi's question ("Where are they?") include the Great Silence, and silentium universi (Latin for "silence of the universe"), though these only refer to one portion of the Fermi paradox, that humans see no evidence of other civilizations.
Original conversations
In the summer of 1950 at Los Alamos National Laboratory in New Mexico, Enrico Fermi and co-workers Emil Konopinski, Edward Teller, and Herbert York had one or several lunchtime conversations. In one, Fermi suddenly blurted out, "Where is everybody?" (Teller's letter), or "Don't you ever wonder where everybody is?" (York's letter), or "But where is everybody?" (Konopinski's letter). Teller wrote, "The result of his question was general laughter because of the strange fact that, in spite of Fermi's question coming out of the blue, everybody around the table seemed to understand at once that he was talking about extraterrestrial life."
In 1984, York wrote that Fermi "followed up with a series of calculations on the probability of earthlike planets, the probability of life given an earth, the probability of humans given life, the likely rise and duration of high technology, and so on. He concluded on the basis of such calculations that we ought to have been visited long ago and many times over." Teller remembers that not much came of this conversation "except perhaps a statement that the distances to the next location of living beings may be very great and that, indeed, as far as our galaxy is concerned, we are living somewhere in the sticks, far removed from the metropolitan area of the galactic center."
Fermi died of cancer in 1954. However, in letters to the three surviving men decades later in 1984, Dr. Eric Jones of Los Alamos was able to partially put the original conversation back together. He informed each of the men that he wished to include a reasonably accurate version or composite in the written proceedings he was putting together for a previously held conference entitled "Interstellar Migration and the Human Experience". Jones first sent a letter to Edward Teller which included a secondhand account from Hans Mark. Teller responded, and then Jones sent Teller's letter to Herbert York. York responded, and finally, Jones sent both Teller's and York's letters to Emil Konopinski who also responded. Furthermore, Konopinski was able to later identify a cartoon which Jones found as the one involved in the conversation and thereby help to settle the time period as being the summer of 1950.
Basis
The Fermi paradox is a conflict between the argument that scale and probability seem to favor intelligent life being common in the universe, and the total lack of evidence of intelligent life having ever arisen anywhere other than on Earth.
The first aspect of the Fermi paradox is a function of the scale or the large numbers involved: there are an estimated 200–400 billion stars in the Milky Way (2–4 × 10^11) and 70 sextillion (7 × 10^22) in the observable universe. Even if intelligent life occurs on only a minuscule percentage of planets around these stars, there might still be a great number of extant civilizations, and if the percentage were high enough it would produce a significant number of extant civilizations in the Milky Way. This assumes the mediocrity principle, by which Earth is a typical planet.
The second aspect of the Fermi paradox is the argument of probability: given intelligent life's ability to overcome scarcity, and its tendency to colonize new habitats, it seems possible that at least some civilizations would be technologically advanced, seek out new resources in space, and colonize their star system and, subsequently, surrounding star systems. Since there is no significant evidence on Earth, or elsewhere in the known universe, of other intelligent life after 13.8 billion years of the universe's history, there is a conflict requiring a resolution. Some examples of possible resolutions are that intelligent life is rarer than is thought, that assumptions about the general development or behavior of intelligent species are flawed, or, more radically, that current scientific understanding of the nature of the universe itself is quite incomplete.
The Fermi paradox can be asked in two ways. The first is, "Why are no aliens or their artifacts found on Earth, or in the Solar System?". If interstellar travel is possible, even the "slow" kind nearly within the reach of Earth technology, then it would only take from 5 million to 50 million years to colonize the galaxy. This is relatively brief on a geological scale, let alone a cosmological one. Since there are many stars older than the Sun, and since intelligent life might have evolved earlier elsewhere, the question then becomes why the galaxy has not been colonized already. Even if colonization is impractical or undesirable to all alien civilizations, large-scale exploration of the galaxy could be possible by probes. These might leave detectable artifacts in the Solar System, such as old probes or evidence of mining activity, but none of these have been observed.
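As a rough order-of-magnitude check on the travel-time figures above, dividing the diameter of the Milky Way (about 100,000 light-years) by an assumed average expansion speed gives a crossing time; the speeds in the Python sketch below are assumptions chosen only for illustration, and real colonization models add settlement delays on top of travel time.

# Order-of-magnitude galaxy-crossing time for assumed expansion speeds.
GALAXY_DIAMETER_LY = 100_000   # approximate diameter of the Milky Way

def crossing_time_years(speed_fraction_of_c):
    """Years to cross the galaxy at a constant speed given as a fraction of c."""
    # Light crosses one light-year per year, so time = distance / (v/c).
    return GALAXY_DIAMETER_LY / speed_fraction_of_c

for v in (0.02, 0.002):   # assumed 2% and 0.2% of the speed of light
    print(f"v = {v:.1%} of c -> {crossing_time_years(v):,.0f} years")
# roughly 5 million years at 2% of c, 50 million years at 0.2% of c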
The second form of the question is "Why are there no signs of intelligence elsewhere in the universe?". This version does not assume interstellar travel, but includes other galaxies as well. For distant galaxies, travel times may well explain the lack of alien visits to Earth, but a sufficiently advanced civilization could potentially be observable over a significant fraction of the size of the observable universe. Even if such civilizations are rare, the scale argument indicates they should exist somewhere at some point during the history of the universe, and since they could be detected from far away over a considerable period of time, many more potential sites for their origin are within range of human observation. It is unknown whether the paradox is stronger for the Milky Way galaxy or for the universe as a whole.
Drake equation
The theories and principles in the Drake equation are closely related to the Fermi paradox. The equation was formulated by Frank Drake in 1961 in an attempt to find a systematic means to evaluate the numerous probabilities involved in the existence of alien life. The equation is presented as follows:
N = R* × fp × ne × fl × fi × fc × L,
where N is the number of technologically advanced civilizations in the Milky Way galaxy, and N is asserted to be the product of:
R*, the rate of formation of stars in the galaxy;
fp, the fraction of those stars with planetary systems;
ne, the number of planets, per solar system, with an environment suitable for organic life;
fl, the fraction of those suitable planets whereon organic life appears;
fi, the fraction of life-bearing planets whereon intelligent life appears;
fc, the fraction of civilizations that reach the technological level whereby detectable signals may be dispatched; and
L, the length of time that those civilizations dispatch their signals.
The fundamental problem is that the last four terms (fl, fi, fc, and L) are entirely unknown, rendering statistical estimates impossible.
The Drake equation has been used by both optimists and pessimists, with wildly differing results. The first scientific meeting on the search for extraterrestrial intelligence (SETI), which had 10 attendees including Frank Drake and Carl Sagan, speculated that the number of civilizations was roughly between 1,000 and 100,000,000 civilizations in the Milky Way galaxy. Conversely, Frank Tipler and John D. Barrow used pessimistic numbers and speculated that the average number of civilizations in a galaxy is much less than one. Almost all arguments involving the Drake equation suffer from the overconfidence effect, a common error of probabilistic reasoning about low-probability events, by guessing specific numbers for likelihoods of events whose mechanism is not yet understood, such as the likelihood of abiogenesis on an Earth-like planet, with current likelihood estimates varying over many hundreds of orders of magnitude. An analysis that takes into account some of the uncertainty associated with this lack of understanding has been carried out by Anders Sandberg, Eric Drexler and Toby Ord, and suggests "a substantial ex ante probability of there being no other intelligent life in our observable universe".
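To make concrete how strongly the result depends on the guessed factors, the following minimal Python sketch evaluates the Drake product for two deliberately invented sets of parameter values; neither set comes from the article or from any published estimate.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* * f_p * n_e * f_l * f_i * f_c * L (number of detectable civilizations)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative "optimistic" vs "pessimistic" guesses -- assumptions, not estimates.
optimistic = drake(R_star=10, f_p=0.5, n_e=2, f_l=1.0, f_i=0.1, f_c=0.1, L=1e4)
pessimistic = drake(R_star=1, f_p=0.2, n_e=1, f_l=1e-3, f_i=1e-3, f_c=1e-2, L=1e2)
print(optimistic)    # 1000.0  -- thousands of civilizations
print(pessimistic)   # 2e-07   -- effectively none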
Great Filter
The Great Filter, a concept introduced by Robin Hanson in 1996, represents whatever natural phenomena that would make it unlikely for life to evolve from inanimate matter to an advanced civilization. The most commonly agreed-upon low probability event is abiogenesis: a gradual process of increasing complexity of the first self-replicating molecules by a randomly occurring chemical process. Other proposed great filters are the emergence of eukaryotic cells or of meiosis or some of the steps involved in the evolution of a brain capable of complex logical deductions.
Astrobiologists Dirk Schulze-Makuch and William Bains, reviewing the history of life on Earth, including convergent evolution, concluded that transitions such as oxygenic photosynthesis, the eukaryotic cell, multicellularity, and tool-using intelligence are likely to occur on any Earth-like planet given enough time. They argue that the Great Filter may be abiogenesis, the rise of technological human-level intelligence, or an inability to settle other worlds because of self-destruction or a lack of resources. Paleobiologist Olev Vinn has suggested that the great filter may have universal biological roots related to evolutionary animal behavior.
Grabby Aliens
In 2021, the concepts of quiet, loud, and grabby aliens were introduced by Hanson et al. The possible "loud" aliens expand rapidly in a highly detectable way throughout the universe and endure, while "quiet" aliens are hard or impossible to detect and eventually disappear. "Grabby" aliens prevent the emergence of other civilizations in their sphere of influence, which expands at a rate near the speed of light. The authors argue that if loud civilizations are rare, as they appear to be, then quiet civilizations are also rare. The paper suggests that humanity's current stage of technological development is relatively early in the potential timeline of intelligent life in the universe, as loud aliens would otherwise be observable by astronomers.
Earlier in 2013, Anders Sandberg and Stuart Armstrong examined the potential for intelligent life to spread intergalactically throughout the universe and the implications for the Fermi Paradox. Their study suggests that with sufficient energy, intelligent civilizations could potentially colonize the entire Milky Way galaxy within a few million years, and spread to nearby galaxies in a timespan that is cosmologically brief. They conclude that intergalactic colonization appears possible with the resources of a single solar system and that intergalactic colonization is of comparable difficulty to interstellar colonization, and therefore the Fermi paradox is much sharper than commonly thought.
Empirical evidence
There are two parts of the Fermi paradox that rely on empirical evidence—that there are many potentially habitable planets, and that humans see no evidence of life. The first point, that many suitable planets exist, was an assumption in Fermi's time but is now supported by the discovery that exoplanets are common. Current models predict billions of habitable worlds in the Milky Way.
The second part of the paradox, that humans see no evidence of extraterrestrial life, is also an active field of scientific research. This includes both efforts to find any indication of life, and efforts specifically directed to finding intelligent life. These searches have been made since 1960, and several are ongoing.
Although astronomers do not usually search for extraterrestrials, they have observed phenomena that they could not immediately explain without positing an intelligent civilization as the source. For example, pulsars, when first discovered in 1967, were called little green men (LGM) because of the precise repetition of their pulses. In all cases, explanations with no need for intelligent life have been found for such observations, but the possibility of discovery remains. Proposed examples include asteroid mining that would change the appearance of debris disks around stars, or spectral lines from nuclear waste disposal in stars.
Searches for such evidence have focused on technosignatures, such as radio communications.
Electromagnetic emissions
Radio technology and the ability to construct a radio telescope are presumed to be a natural advance for technological species, theoretically creating effects that might be detected over interstellar distances. Careful searching for non-natural radio emissions from space may lead to the detection of alien civilizations. Sensitive alien observers of the Solar System, for example, would note unusually intense radio waves for a G2 star due to Earth's television and telecommunication broadcasts. In the absence of an apparent natural cause, alien observers might infer the existence of a terrestrial civilization. Such signals could be either "accidental" by-products of a civilization, or deliberate attempts to communicate, such as the Arecibo message. It is unclear whether "leakage", as opposed to a deliberate beacon, could be detected by an extraterrestrial civilization. The most sensitive radio telescopes currently on Earth would not be able to detect non-directional radio signals (such as broadband) even at a fraction of a light-year away, but other civilizations could hypothetically have much better equipment.
A number of astronomers and observatories have attempted and are attempting to detect such evidence, mostly through SETI organizations such as the SETI Institute and Breakthrough Listen. Several decades of SETI analysis have not revealed any unusually bright or meaningfully repetitive radio emissions.
Direct planetary observation
Exoplanet detection and classification is a very active sub-discipline in astronomy; the first candidate terrestrial planet discovered within a star's habitable zone was found in 2007. New refinements in exoplanet detection methods, and use of existing methods from space (such as the Kepler and TESS missions) are starting to detect and characterize Earth-size planets, to determine whether they are within the habitable zones of their stars. Such observational refinements may allow for a better estimation of how common these potentially habitable worlds are.
Conjectures about interstellar probes
The Hart–Tipler conjecture argues by contraposition that because no interstellar probes have been detected, there likely is no other intelligent life in the universe, as such life should be expected to eventually create and launch such probes. Self-replicating probes could exhaustively explore a galaxy the size of the Milky Way in as little as a million years, so if even a single civilization in the Milky Way attempted this, such probes could spread throughout the entire galaxy. Another speculation for contact with an alien probe (one that would be trying to find human beings) is an alien Bracewell probe. Such a hypothetical device would be an autonomous space probe whose purpose is to seek out and communicate with alien civilizations (as opposed to von Neumann probes, which are usually described as purely exploratory). Bracewell probes were proposed as an alternative to carrying on a slow speed-of-light dialogue between vastly distant neighbors: rather than contending with the long delays a radio dialogue would suffer, a probe housing an artificial intelligence would seek out an alien civilization and carry on close-range communication with it. The findings of such a probe would still have to be transmitted to the home civilization at light speed, but an information-gathering dialogue could be conducted in real time.
Direct exploration of the Solar System has yielded no evidence indicating a visit by aliens or their probes. Detailed exploration of areas of the Solar System where resources would be plentiful may yet produce evidence of alien exploration, though the entirety of the Solar System is vast and difficult to investigate. Attempts to signal, attract, or activate hypothetical Bracewell probes in Earth's vicinity have not succeeded.
Searches for stellar-scale artifacts
In 1959, Freeman Dyson observed that every developing human civilization constantly increases its energy consumption, and he conjectured that a civilization might try to harness a large part of the energy produced by a star. He proposed a hypothetical "Dyson sphere" as a possible means: a shell or cloud of objects enclosing a star to absorb and utilize as much radiant energy as possible. Such a feat of astroengineering would drastically alter the observed spectrum of the star involved, changing it at least partly from the normal emission lines of a natural stellar atmosphere to those of black-body radiation, probably with a peak in the infrared. Dyson speculated that advanced alien civilizations might be detected by examining the spectra of stars and searching for such an altered spectrum.
There have been some attempts to find evidence of the existence of Dyson spheres that would alter the spectra of their core stars. Direct observation of thousands of galaxies has shown no explicit evidence of artificial construction or modifications. In October 2015, there was some speculation that a dimming of light from star KIC 8462852, observed by the Kepler space telescope, could have been a result of Dyson sphere construction. However, in 2018, observations determined that the amount of dimming varied with the frequency of the light, pointing to dust, rather than an opaque object such as a Dyson sphere, as the cause of the dimming.
Hypothetical explanations for the paradox
Rarity of intelligent life
Extraterrestrial life is rare or non-existent
Those who think that intelligent extraterrestrial life is (nearly) impossible argue that the conditions needed for the evolution of life—or at least the evolution of biological complexity—are rare or even unique to Earth. Under this assumption, called the rare Earth hypothesis, a rejection of the mediocrity principle, complex multicellular life is regarded as exceedingly unusual.
The rare Earth hypothesis argues that the evolution of biological complexity requires a host of fortuitous circumstances: a location within the galactic habitable zone; a star and planet(s) having the requisite conditions, such as a long-lived continuous habitable zone; the advantage of a giant guardian planet like Jupiter and of a large moon; the conditions needed to ensure the planet has a magnetosphere and plate tectonics; a suitable chemistry of the lithosphere, atmosphere, and oceans; and the role of "evolutionary pumps" such as massive glaciation and rare bolide impacts. Perhaps most importantly, advanced life needs whatever it was that led to the transition of (some) prokaryotic cells to eukaryotic cells, sexual reproduction and the Cambrian explosion.
In his book Wonderful Life (1989), Stephen Jay Gould suggested that if the "tape of life" were rewound to the time of the Cambrian explosion, and one or two tweaks made, human beings most probably never would have evolved. Other thinkers such as Fontana, Buss, and Kauffman have written about the self-organizing properties of life.
Extraterrestrial intelligence is rare or non-existent
It is possible that even if complex life is common, intelligence (and consequently civilizations) is not. While there are remote sensing techniques that could perhaps detect life-bearing planets without relying on the signs of technology, none of them have any ability to tell if any detected life is intelligent. This is sometimes referred to as the "algae vs. alumnae" problem.
Charles Lineweaver states that when considering any extreme trait in an animal, intermediate stages do not necessarily produce "inevitable" outcomes. For example, large brains are no more "inevitable", or convergent, than are the long noses of animals such as aardvarks and elephants. As he points out, "dolphins have had ~20 million years to build a radio telescope and have not done so". In addition, Rebecca Boyle points out that of all the species that have ever evolved in the history of life on the planet Earth, only one (human beings, and only in the beginning stages) has ever become space-faring.
Periodic extinction by natural events
Newly emerged life might commonly die out due to runaway heating or cooling on its fledgling planet. On Earth, there have been numerous major extinction events that destroyed the majority of complex species alive at the time; the extinction of the non-avian dinosaurs is the best known example. These are thought to have been caused by events such as the impact of a large meteorite, massive volcanic eruptions, or astronomical events such as gamma-ray bursts. It may be the case that such extinction events are common throughout the universe and periodically destroy intelligent life, or at least its civilizations, before the species is able to develop the technology to communicate with other intelligent species.
However, the chances of extinction by natural events may be very low on the scale of a civilization's lifetime. Based on an analysis of impact craters on Earth and the Moon, the average interval between impacts large enough to cause global consequences (like the Chicxulub impact) is estimated to be around 100 million years.
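As a rough illustration (assuming, purely for the sake of argument, a technological civilization lasting 10,000 years and treating such impacts as a random process with a mean interval of 100 million years), the chance of a globally catastrophic impact falling within a single civilization's lifetime is about

$$P \approx 1 - e^{-T/\tau} \approx \frac{T}{\tau} = \frac{10^{4}\ \text{yr}}{10^{8}\ \text{yr}} = 10^{-4},$$

i.e. roughly one chance in ten thousand, which is why natural events of this kind alone are generally considered an unconvincing universal filter.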
Evolutionary explanations
Intelligent alien species have not developed advanced technologies
It may be that while alien species with intelligence exist, they are primitive or have not reached the level of technological advancement necessary to communicate. Like non-intelligent life, such civilizations would also be very difficult to detect; a trip to even the nearest stars using conventional rockets would take hundreds of thousands of years.
To skeptics, the fact that only one species in the history of life on Earth has developed a civilization capable of spaceflight and radio technology lends more credence to the idea that technologically advanced civilizations are rare in the universe.
Amedeo Balbi and Adam Frank propose the concept of an "oxygen bottleneck" for the emergence of technospheres. The "oxygen bottleneck" refers to the critical level of atmospheric oxygen necessary for fire and combustion. Earth's current atmospheric oxygen concentration is about 21%, but it has been much lower in the past and may also be much lower on many exoplanets. The authors argue that while the threshold of oxygen required for the existence of complex life and ecosystems is much lower, technological advancement, particularly that reliant on combustion, such as metal smelting and energy production, requires higher oxygen concentrations of around 18% or more. Thus, the presence of high levels of oxygen in a planet's atmosphere is not only a potential biosignature but also a critical factor in the emergence of detectable technological civilizations.
Another hypothesis in this category is the "Water World hypothesis". According to author and scientist David Brin: "it turns out that our Earth skates the very inner edge of our sun's continuously habitable—or 'Goldilocks'—zone. And Earth may be anomalous. It may be that because we are so close to our sun, we have an anomalously oxygen-rich atmosphere, and we have anomalously little ocean for a water world. In other words, 32 percent continental mass may be high among water worlds..." Brin continues, "In which case, the evolution of creatures like us, with hands and fire and all that sort of thing, may be rare in the galaxy. In which case, when we do build starships and head out there, perhaps we'll find lots and lots of life worlds, but they're all like Polynesia. We'll find lots and lots of intelligent lifeforms out there, but they're all dolphins, whales, squids, who could never build their own starships. What a perfect universe for us to be in, because nobody would be able to boss us around, and we'd get to be the voyagers, the Star Trek people, the starship builders, the policemen, and so on."
The rapid scientific and technological progress of the 19th and 20th centuries, compared to earlier eras, led to the common assumption that such progress will keep growing at an exponential rate, eventually reaching the level required for space exploration. The "universal limit to technological development" (ULTD) hypothesis proposes that there is a limit to the potential growth of a civilization, and that this limit may lie well below the point required for space exploration. Such limits may stem from economic constraints, natural constraints (such as the impossibility of faster-than-light travel), or even the species' own biology.
It is the nature of intelligent life to destroy itself
This is the argument that technological civilizations may usually or invariably destroy themselves before or shortly after developing radio or spaceflight technology. The astrophysicist Sebastian von Hoerner stated that the progress of science and technology on Earth was driven by two factors: the struggle for domination and the desire for an easy life. The former potentially leads to complete destruction, while the latter may lead to biological or mental degeneration. The possible means of annihilation are many; global interconnectedness can make humanity more vulnerable than resilient to major global issues, including war, accidental environmental contamination or damage, the development of biotechnology, synthetic life such as mirror life, resource depletion, climate change, and poorly designed artificial intelligence. This general theme is explored both in fiction and in scientific hypothesizing.
In 1966, Sagan and Shklovskii speculated that technological civilizations will either tend to destroy themselves within a century of developing interstellar communicative capability or master their self-destructive tendencies and survive for billion-year timescales. Self-annihilation may also be viewed in terms of thermodynamics: insofar as life is an ordered system that can sustain itself against the tendency to disorder, Stephen Hawking's "external transmission" or interstellar communicative phase, in which knowledge production and knowledge management are more important than the transmission of information via evolution, may be the point at which the system becomes unstable and self-destructs. Here, Hawking emphasizes self-design of the human genome (transhumanism) or enhancement via machines (e.g., brain–computer interfaces) to enhance human intelligence and reduce aggression, without which he implies human civilization may be too stupid collectively to survive an increasingly unstable system. For instance, the development of technologies during the "external transmission" phase, such as the weaponization of artificial general intelligence or antimatter, may not be matched by concomitant increases in humanity's ability to manage its own inventions. Consequently, disorder increases in the system: global governance may become increasingly destabilized, worsening humanity's ability to manage the possible means of annihilation listed above and resulting in global societal collapse.
A less theoretical example might be the resource-depletion issue on Polynesian islands, of which Easter Island is only the best known. David Brin points out that during the expansion phase from 1500 BC to 800 AD there were cycles of overpopulation followed by what might be called periodic cullings of adult males through war or ritual. He writes, "There are many stories of islands whose men were almost wiped out—sometimes by internal strife, and sometimes by invading males from other islands."
Using extinct civilizations such as Easter Island (Rapa Nui) as models, a study conducted in 2018 by Adam Frank et al. posited that climate change induced by "energy intensive" civilizations may prevent sustainability within such civilizations, thus explaining the paradoxical lack of evidence for intelligent extraterrestrial life. Based on dynamical systems theory, the study examined how technological civilizations (exo-civilizations) consume resources and the feedback effects this consumption has on their planets and those planets' carrying capacity. According to Adam Frank, "[t]he point is to recognize that driving climate change may be something generic. The laws of physics demand that any young population, building an energy-intensive civilization like ours, is going to have feedback on its planet. Seeing climate change in this cosmic context may give us better insight into what's happening to us now and how to deal with it." Generalizing the Anthropocene, their model produces four different outcomes (a simplified numerical sketch follows the list):
Die-off: A scenario where the population grows quickly, surpassing the planet's carrying capacity, which leads to a peak followed by a rapid decline. The population eventually stabilizes at a much lower equilibrium level, allowing the planet to partially recover.
Sustainability: A scenario where civilizations successfully transition from high-impact resources (like fossil fuels) to sustainable ones (like solar energy) before significant environmental degradation occurs. This allows the civilization and planet to reach a stable equilibrium, avoiding catastrophic effects.
Collapse Without Resource Change: In this trajectory, the population and environmental degradation increase rapidly. The civilization does not switch to sustainable resources in time, leading to a total collapse where a tipping point is crossed and the population drops.
Collapse With Resource Change: Similar to the previous scenario, but in this case, the civilization attempts to transition to sustainable resources. However, the change comes too late, and the environmental damage is irreversible, still leading to the civilization's collapse.
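A minimal numerical sketch of this kind of population-resource feedback (a generic logistic model in which consumption erodes the planet's carrying capacity; it is not the actual system of equations used by Frank et al.) reproduces the contrast between sustainability, die-off, and a too-late switch qualitatively:

```python
# Generic population/carrying-capacity feedback model. Offered purely as an
# illustration of the dynamics described above; this is NOT the system of
# equations used by Frank et al. (2018), and all constants are arbitrary.

def simulate(switch_step=None, steps=3000, dt=0.1):
    """Logistic growth whose carrying capacity is eroded by resource use.

    switch_step: step at which the civilization switches to low-impact
                 (sustainable) resources; None means it never switches.
    Returns the population trajectory.
    """
    pop, capacity = 1.0, 100.0
    growth, impact = 0.1, 0.02        # arbitrary illustrative rate constants
    history = []
    for step in range(steps):
        if switch_step is not None and step >= switch_step:
            impact = 0.0              # sustainable resources: erosion stops
        pop += dt * growth * pop * (1.0 - pop / capacity)
        capacity = max(capacity - dt * impact * pop, 1.0)  # consumption degrades the planet
        history.append(pop)
    return history

# The three runs illustrate sustainability, die-off, and a switch that comes too late.
print("switches early:    final population ~", round(simulate(switch_step=400)[-1], 1))
print("never switches:    final population ~", round(simulate()[-1], 1))
print("switches too late: final population ~", round(simulate(switch_step=2500)[-1], 1))
```

In this toy model, an early switch leaves the carrying capacity, and therefore the population, largely intact, while switching late or never leads to the die-off and collapse trajectories described above.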
It is the nature of intelligent life to destroy others
Another hypothesis is that an intelligent species beyond a certain point of technological capability will destroy other intelligent species as they appear, perhaps by using self-replicating probes. Science fiction writer Fred Saberhagen has explored this idea in his Berserker series, as have physicist Gregory Benford, science fiction writer Greg Bear in his novel The Forge of God, and later Liu Cixin in his The Three-Body Problem series.
A species might undertake such extermination out of expansionist motives, greed, paranoia, or aggression. In 1981, cosmologist Edward Harrison argued that such behavior would be an act of prudence: an intelligent species that has overcome its own self-destructive tendencies might view any other species bent on galactic expansion as a threat. It has also been suggested that a successful alien species would be a superpredator, as are humans. Another possibility invokes the "tragedy of the commons" and the anthropic principle: the first lifeform to achieve interstellar travel will necessarily (even if unintentionally) prevent competitors from arising, and humans simply happen to be first.
Civilizations only broadcast detectable signals for a brief period of time
It may be that alien civilizations are detectable through their radio emissions for only a short time, reducing the likelihood of spotting them. The usual assumption is that civilizations outgrow radio through technological advancement. However, there could be other leakage such as that from microwaves used to transmit power from solar satellites to ground receivers. Regarding the first point, in a 2006 Sky & Telescope article, Seth Shostak wrote, "Moreover, radio leakage from a planet is only likely to get weaker as a civilization advances and its communications technology gets better. Earth itself is increasingly switching from broadcasts to leakage-free cables and fiber optics, and from primitive but obvious carrier-wave broadcasts to subtler, hard-to-recognize spread-spectrum transmissions."
More hypothetically, advanced alien civilizations may evolve beyond broadcasting at all in the electromagnetic spectrum and communicate by technologies not developed or used by mankind. Some scientists have hypothesized that advanced civilizations may send neutrino signals. If such signals exist, they could be detectable by neutrino detectors now under construction for other purposes.
Alien life may be too incomprehensible
Another possibility is that human theoreticians have underestimated how much alien life might differ from that on Earth. Aliens may be psychologically unwilling to attempt to communicate with human beings. Perhaps human mathematics is parochial to Earth and not shared by other life, though others argue this can only apply to abstract math since the math associated with physics must be similar (in results, if not in methods).
In his 2009 book, SETI scientist Seth Shostak wrote, "Our experiments [such as plans to use drilling rigs on Mars] are still looking for the type of extraterrestrial that would have appealed to Percival Lowell [astronomer who believed he had observed canals on Mars]."
Physiology might also cause a communication barrier. Carl Sagan speculated that an alien species might have a thought process orders of magnitude slower (or faster) than that of humans. A message broadcast by that species might well seem like random background noise to humans, and therefore go undetected.
Paul Davies states that 500 years ago the very idea of a computer doing work merely by manipulating internal data may not have been viewed as a technology at all. He writes, "Might there be a still level[...] If so, this 'third level' would never be manifest through observations made at the informational level, still less the matter level. There is no vocabulary to describe the third level, but that doesn't mean it is non-existent, and we need to be open to the possibility that alien technology may operate at the third level, or maybe the fourth, fifth[...] levels."
Arthur C. Clarke hypothesized that "our technology must still be laughably primitive; we may well be like jungle savages listening for the throbbing of tom-toms, while the ether around them carries more words per second than they could utter in a lifetime". Another thought is that technological civilizations invariably experience a technological singularity and attain a post-biological character.
Sociological explanations
Colonization is not the cosmic norm
In response to Tipler's idea of self-replicating probes, Stephen Jay Gould wrote, "I must confess that I simply don't know how to react to such arguments. I have enough trouble predicting the plans and reactions of the people closest to me. I am usually baffled by the thoughts and accomplishments of humans in different cultures. I'll be damned if I can state with certainty what some extraterrestrial source of intelligence might do."
Alien species may have only settled part of the galaxy
According to a study by Frank et al., advanced civilizations may not colonize everything in the galaxy due to their potential adoption of steady states of expansion. This hypothesis suggests that civilizations might reach a stable pattern of expansion where they neither collapse nor aggressively spread throughout the galaxy. A February 2019 article in Popular Science states, "Sweeping across the Milky Way and establishing a unified galactic empire might be inevitable for a monolithic super-civilization, but most cultures are neither monolithic nor super—at least if our experience is any guide." Astrophysicist Adam Frank, along with co-authors such as astronomer Jason Wright, ran a variety of simulations in which they varied such factors as settlement lifespans, fractions of suitable planets, and recharge times between launches. They found many of their simulations seemingly resulted in a "third category" in which the Milky Way remains partially settled indefinitely. The abstract to their 2019 paper states, "These results break the link between Hart's famous 'Fact A' (no interstellar visitors on Earth now) and the conclusion that humans must, therefore, be the only technological civilization in the galaxy. Explicitly, our solutions admit situations where our current circumstances are consistent with an otherwise settled, steady-state galaxy."
An alternative scenario is that long-lived civilizations may only choose to colonize stars during closest approach. As low mass K- and M-type dwarfs are by far the most common types of main sequence stars in the Milky Way, they are more likely to pass close to existing civilizations. These stars have longer life spans, which may be preferred by such a civilization. Interstellar travel capability of 0.3 light years is theoretically sufficient to colonize all M-dwarfs in the galaxy within 2 billion years. If the travel capability is increased to 2 light years, then all K-dwarfs can be colonized in the same time frame.
Alien species may isolate themselves in virtual worlds
Avi Loeb suggests that one possible explanation for the Fermi paradox is virtual reality technology. Individuals of extraterrestrial civilizations may prefer to spend time in virtual worlds or metaverses that have different physical law constraints as opposed to focusing on colonizing planets. Nick Bostrom suggests that some advanced beings may divest themselves entirely of physical form, create massive artificial virtual environments, transfer themselves into these environments through mind uploading, and exist totally within virtual worlds, ignoring the external physical universe.
It may be that intelligent alien life develops an "increasing disinterest" in their outside world. Possibly any sufficiently advanced society will develop highly engaging media and entertainment well before the capacity for advanced space travel, with the rate of appeal of these social contrivances being destined, because of their inherent reduced complexity, to overtake any desire for complex, expensive endeavors such as space exploration and communication. Once any sufficiently advanced civilization becomes able to master its environment, and most of its physical needs are met through technology, various "social and entertainment technologies", including virtual reality, are postulated to become the primary drivers and motivations of that civilization.
Artificial intelligence may not expand
While artificial intelligence supplanting its creators could only deepen the Fermi paradox, for example by enabling the colonization of the galaxy through self-replicating probes, it is also possible that, after replacing its creators, artificial intelligence does not expand or endure, for a variety of reasons. Michael A. Garrett has suggested that biological civilizations may universally underestimate the speed at which AI systems progress and fail to react in time, making AI a possible great filter. He also argues that this could limit the longevity of advanced technological civilizations to less than 200 years, thus explaining the great silence observed by SETI.
Economic explanations
Lack of resources needed to physically spread throughout the galaxy
The ability of an alien culture to colonize other star systems is based on the idea that interstellar travel is technologically feasible. While the current understanding of physics rules out the possibility of faster-than-light travel, it appears that there are no major theoretical barriers to the construction of "slow" interstellar ships, even though the engineering required is considerably beyond present human capabilities. This idea underlies the concept of the von Neumann probe and the Bracewell probe as potential evidence of extraterrestrial intelligence.
It is possible, however, that present scientific knowledge cannot properly gauge the feasibility and costs of such interstellar colonization. Theoretical barriers may not yet be understood, and the resources needed may be so great as to make it unlikely that any civilization could afford to attempt it. Even if interstellar travel and colonization are possible, they may be difficult, leading to a colonization model based on percolation theory.
Colonization efforts may not occur as an unstoppable rush, but rather as an uneven tendency to "percolate" outwards, with an eventual slowing and termination of the effort given the enormous costs involved and the expectation that colonies will inevitably develop a culture and civilization of their own. Colonization may thus occur in "clusters", with large areas remaining uncolonized at any one time.
Information is cheaper to transmit than matter is to transfer
If a human-capability machine intelligence is possible, and if it is possible to transfer such constructs over vast distances and rebuild them on a remote machine, then it might not make strong economic sense to travel the galaxy by spaceflight. Louis K. Scheffer calculates the cost of radio transmission of information across space to be cheaper than spaceflight by a factor of 10^8 to 10^17. For a machine civilization, the costs of interstellar travel are therefore enormous compared to the more efficient option of sending computational signals across space to already established sites. After the first civilization has physically explored or colonized the galaxy, and has sent such machines for easy exploration, any subsequent civilizations, after having contacted the first, may find it cheaper, faster, and easier to explore the galaxy through intelligent mind transfers to the machines built by the first civilization. However, since a star system needs only one such remote machine, and the communication is most likely highly directed, transmitted at high frequencies, and at a minimal power to be economical, such signals would be hard to detect from Earth.
By contrast, in economics the counter-intuitive Jevons paradox implies that higher productivity results in higher demand: increased economic efficiency results in increased economic growth. For example, an increase in renewable energy may not directly result in declining fossil fuel use, but rather in additional economic growth, as fossil fuels are instead directed to alternative uses. Thus, technological innovation makes human civilization capable of higher levels of consumption, as opposed to achieving its existing consumption more efficiently at a stable level.
Discovery of extraterrestrial life is too difficult
Humans have not listened properly
There are some assumptions that underlie the SETI programs that may cause searchers to miss signals that are present. Extraterrestrials might, for example, transmit signals that have a very high or low data rate, or employ unconventional (in human terms) frequencies, which would make them hard to distinguish from background noise. Signals might be sent from non-main sequence star systems that humans search with lower priority; current programs assume that most alien life will be orbiting Sun-like stars.
The greatest challenge is the sheer size of the radio search needed to look for signals (effectively spanning the entire observable universe), the limited amount of resources committed to SETI, and the sensitivity of modern instruments. SETI estimates, for instance, that with a radio telescope as sensitive as the Arecibo Observatory, Earth's television and radio broadcasts would only be detectable at distances up to 0.3 light-years, less than 1/10 the distance to the nearest star. A signal is much easier to detect if it consists of a deliberate, powerful transmission directed at Earth. Such signals could be detected at ranges of hundreds to tens of thousands of light-years. However, this means that detectors must be listening to an appropriate range of frequencies, and be in that region of space to which the beam is being sent. Many SETI searches assume that extraterrestrial civilizations will be broadcasting a deliberate signal, like the Arecibo message, in order to be found.
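The scaling behind these figures is essentially the inverse-square law. The sketch below uses round, purely illustrative numbers for transmitter power, antenna gain, and receiver sensitivity (none taken from Arecibo's or SETI's actual link budgets) to show why omnidirectional leakage fades below detectability within a fraction of a light-year while a narrowly beamed transmission of the same power can remain detectable across interstellar distances:

```python
import math

LIGHT_YEAR_M = 9.46e15  # metres in one light-year

def flux(power_w, distance_ly, gain=1.0):
    """Received power per square metre, assuming free-space spreading.
    gain = 1 for an omnidirectional (leakage-like) source; a directed beam
    concentrates the same power by its antenna gain."""
    d = distance_ly * LIGHT_YEAR_M
    return power_w * gain / (4 * math.pi * d ** 2)

P = 1e6            # 1 MW transmitter, an illustrative figure
THRESHOLD = 5e-27  # illustrative sensitivity in W/m^2 (not Arecibo's actual figure),
                   # chosen so leakage is only marginally detectable at sub-light-year range

for d_ly, g, label in [(0.3, 1.0, "leakage at 0.3 ly"),
                       (100.0, 1.0, "leakage at 100 ly"),
                       (100.0, 1e7, "directed beam at 100 ly")]:
    received = flux(P, d_ly, g)
    print(f"{label:24s} {received:.2e} W/m^2  detectable={received > THRESHOLD}")
```

With these assumed numbers, the leakage case fails at interstellar range while the beamed case succeeds, which mirrors the qualitative point made above.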
Thus, to detect alien civilizations through their radio emissions, Earth observers either need more sensitive instruments or must hope for fortunate circumstances: that the broadband radio emissions of alien radio technology are much stronger than humanity's own; that one of SETI's programs is listening to the correct frequencies from the right regions of space; or that aliens are deliberately sending focused transmissions in Earth's general direction.
Humans have not listened for long enough
Humanity's ability to detect intelligent extraterrestrial life has existed for only a very brief period—from 1937 onwards, if the invention of the radio telescope is taken as the dividing line—and Homo sapiens is a geologically recent species. The whole period of modern human existence to date is a very brief period on a cosmological scale, and radio transmissions have only been propagated since 1895. Thus, it remains possible that human beings have neither existed long enough nor made themselves sufficiently detectable to be found by extraterrestrial intelligence.
Intelligent life may be too far away
It may be that non-colonizing technologically capable alien civilizations exist, but that they are simply too far apart for meaningful two-way communication. Sebastian von Hoerner estimated the average duration of a civilization at 6,500 years and the average distance between civilizations in the Milky Way at 1,000 light years. If two civilizations are separated by several thousand light-years, it is possible that one or both cultures may become extinct before meaningful dialogue can be established. Human searches may be able to detect their existence, but communication will remain impossible because of distance. It has been suggested that this problem might be ameliorated somewhat if contact and communication is made through a Bracewell probe, in which case at least one partner in the exchange may obtain meaningful information. Alternatively, a civilization may simply broadcast its knowledge and leave it to the receiver to make what they may of it. This is similar to the transmission of information from ancient civilizations to the present, and humanity has undertaken similar activities, like the Arecibo message, which could transfer information about Earth's intelligent species even if it never yields a response, or yields one too late for humanity to receive it. It is possible that observational signatures of self-destroyed civilizations could be detected, depending on the destruction scenario and the timing of human observation relative to it.
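The arithmetic behind this concern is straightforward. Taking von Hoerner's illustrative figures at face value, a single round trip for a signal already consumes a large fraction of a civilization's communicative lifetime:

$$t_{\text{round trip}} = \frac{2d}{c} = \frac{2 \times 1000\ \text{ly}}{c} = 2000\ \text{yr}, \qquad N_{\text{exchanges}} \lesssim \frac{L}{t_{\text{round trip}}} = \frac{6500\ \text{yr}}{2000\ \text{yr}} \approx 3,$$

so even under these assumptions only a handful of question-and-answer exchanges would be possible before one of the parties is likely to have disappeared.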
A related speculation by Sagan and Newman suggests that if other civilizations exist, and are transmitting and exploring, their signals and probes simply have not arrived yet. However, critics have noted that this is unlikely, since it requires that humanity's advancement has occurred at a very special point in time, while the Milky Way is in transition from empty to full. This is a tiny fraction of the lifespan of a galaxy under ordinary assumptions, so the likelihood that humanity is in the midst of this transition is considered low in the paradox.
Some SETI skeptics may also believe that humanity is at a very special point of time—specifically, a transitional period from no space-faring societies to one space-faring society, namely that of human beings.
Intelligent life may exist hidden from view
Planetary scientist Alan Stern put forward the idea that there could be a number of worlds with subsurface oceans (such as Jupiter's Europa or Saturn's Enceladus). The surface would provide a large degree of protection from such things as cometary impacts and nearby supernovae, as well as creating a situation in which a much broader range of orbits are acceptable. Life, and potentially intelligence and civilization, could evolve. Stern states, "If they have technology, and let's say they're broadcasting, or they have city lights or whatever—we can't see it in any part of the spectrum, except maybe very-low-frequency [radio]."
Advanced civilizations may limit their search for life to technological signatures
If life is abundant in the universe but the cost of space travel is high, an advanced civilization may choose to focus its search not on signs of life in general, but on those of other advanced civilizations, and specifically on radio signals. Since humanity has only recently begun to use radio communication, its signals may have yet to arrive at other inhabited planets, and if they have, probes from those planets may have yet to arrive on Earth.
Willingness to communicate
Everyone is listening but no one is transmitting
Alien civilizations might be technically capable of contacting Earth, but could be only listening instead of transmitting. If all or most civilizations act in the same way, the galaxy could be full of civilizations eager for contact, but everyone is listening and no one is transmitting. This is the so-called SETI Paradox.
The only civilization known, humanity, does not explicitly transmit, except for a few small efforts. Even these efforts, and certainly any attempt to expand them, are controversial. It is not even clear humanity would respond to a detected signal—the official policy within the SETI community is that "[no] response to a signal or other evidence of extraterrestrial intelligence should be sent until appropriate international consultations have taken place". However, given the possible impact of any reply, it may be very difficult to obtain any consensus on who would speak and what they would say.
Communication is dangerous
An alien civilization might feel it is too dangerous to communicate, either for humanity or for them. It is argued that when very different civilizations have met on Earth, the results have often been disastrous for one side or the other, and the same may well apply to interstellar contact. Even contact at a safe distance could lead to infection by computer code or even ideas themselves. Perhaps prudent civilizations actively hide not only from Earth but from everyone, out of fear of other civilizations.
Perhaps the Fermi paradox itself, or the alien equivalent of it, is the reason for any civilization to avoid contact with other civilizations, even if no other obstacles existed. From any one civilization's point of view, it would be unlikely for it to be the first to attempt contact. Therefore, according to this reasoning, it is likely that previous civilizations faced fatal problems with first contact, and that making contact should therefore be avoided. So perhaps every civilization keeps quiet because of the possibility that there is a real reason for others to do so.
In 1987, science fiction author Greg Bear explored this concept in his novel The Forge of God. In The Forge of God, humanity is likened to a baby crying in a hostile forest: "There once was an infant lost in the woods, crying its heart out, wondering why no one answered, drawing down the wolves." One of the characters explains, "We've been sitting in our tree chirping like foolish birds for over a century now, wondering why no other birds answered. The galactic skies are full of hawks, that's why. Planetisms that don't know enough to keep quiet, get eaten."
In Liu Cixin's 2008 novel The Dark Forest, the author proposes a literary explanation for the Fermi paradox in which many alien civilizations exist, but are both silent and paranoid, destroying any nascent lifeforms loud enough to make themselves known, because any other intelligent life may represent a future threat. As a result, Liu's fictional universe contains a plethora of quiet civilizations which do not reveal themselves, as in a "dark forest" ... filled with "armed hunter(s) stalking through the trees like a ghost". This idea has come to be known as the dark forest hypothesis.
Earth is deliberately being avoided
The zoo hypothesis states that intelligent extraterrestrial life exists and does not contact life on Earth to allow for its natural evolution and development. A variation on the zoo hypothesis is the laboratory hypothesis, where humanity has been or is being subject to experiments, with Earth or the Solar System effectively serving as a laboratory. The zoo hypothesis may break down under the uniformity of motive flaw: all it takes is a single culture or civilization to decide to act contrary to the imperative within humanity's range of detection for it to be abrogated, and the probability of such a violation of hegemony increases with the number of civilizations, tending not towards a "Galactic Club" with a unified foreign policy with regard to life on Earth but multiple "Galactic Cliques". However, if artificial superintelligences dominate galactic life, and if it is true that such intelligences tend towards merged hegemonic behavior, then this would address the uniformity of motive flaw by dissuading rogue behavior.
Analysis of the inter-arrival times between civilizations in the galaxy, based on common astrobiological assumptions, suggests that the initial civilization would have a commanding lead over later arrivals. As such, it may have established what has been termed the zoo hypothesis through force, or as a galactic or universal norm, with the resultant "paradox" sustained by a cultural founder effect, with or without the continued activity of the founder. Some colonization scenarios predict spherical expansion across star systems, with continued expansion coming from the systems just previously settled. It has been suggested that this would cause a strong selection process along the colonization front favoring cultural or biological adaptations to living in starships or space habitats. As a result, such colonizers may forgo living on planets. This may result in the destruction of terrestrial planets in these systems for use as building materials, thus preventing the development of life on those worlds. Or, they may have an ethic of protection for "nursery worlds", and protect them.
It is possible that a civilization advanced enough to travel between solar systems could be actively visiting or observing Earth while remaining undetected or unrecognized. Following this logic, and building on arguments that other proposed solutions to the Fermi paradox may be implausible, Ian Crawford and Dirk Schulze-Makuch have argued that technological civilisations are either very rare in the Galaxy or are deliberately hiding from us.
Earth is deliberately being isolated
A related idea to the zoo hypothesis is that, beyond a certain distance, the perceived universe is a simulated reality. The planetarium hypothesis speculates that beings may have created this simulation so that the universe appears to be empty of other life.
Alien life is already here, unacknowledged
A significant fraction of the population believes that at least some UFOs (Unidentified Flying Objects) are spacecraft piloted by aliens. While most of these are unrecognized or mistaken interpretations of mundane phenomena, some occurrences remain puzzling even after investigation. The consensus scientific view is that although they may be unexplained, they do not rise to the level of convincing evidence.
Similarly, it is theoretically possible that SETI groups are not reporting positive detections, or governments have been blocking signals or suppressing publication. This response might be attributed to security or economic interests from the potential use of advanced extraterrestrial technology. It has been suggested that the detection of an extraterrestrial radio signal or technology could well be the most highly secret information that exists. Claims that this has already happened are common in the popular press, but the scientists involved report the opposite experience—the press becomes informed and interested in a potential detection even before a signal can be confirmed.
Regarding the idea that aliens are in secret contact with governments, David Brin writes, "Aversion to an idea, simply because of its long association with crackpots, gives crackpots altogether too much influence."
| Physical sciences | Astronomy basics | Astronomy |
11593 | https://en.wikipedia.org/wiki/Flat%20Earth | Flat Earth | Flat Earth is an archaic and scientifically disproven conception of the Earth's shape as a plane or disk. Many ancient cultures subscribed to a flat-Earth cosmography, notably including the cosmology in the ancient Near East. The model has undergone a recent resurgence as a conspiracy theory.
The idea of a spherical Earth appeared in ancient Greek philosophy with Pythagoras (6th century BC). However, the early Greek cosmological view of a flat Earth persisted among most pre-Socratics (6th–5th century BC). In the early 4th century BC, Plato wrote about a spherical Earth. By about 330 BC, his former student Aristotle had provided strong empirical evidence for a spherical Earth. Knowledge of the Earth's global shape gradually began to spread beyond the Hellenistic world. By the early period of the Christian Church, the spherical view was widely held, with some notable exceptions. In contrast, ancient Chinese scholars consistently described the Earth as flat, and this perception remained unchanged until their encounters with Jesuit missionaries in the 17th century. It is a historical myth that medieval Europeans generally thought the Earth was flat. This myth was created in the 17th century by Protestants to argue against Catholic teachings. Traditionalist Muslim scholars have maintained that the Earth is flat, although Muslim scholars since the 9th century have tended to believe in a spherical Earth.
Despite the scientific facts and obvious effects of Earth's sphericity, pseudoscientific flat-Earth conspiracy theories persist, and since the 2010s at the latest, belief in a flat Earth has increased, both through membership in modern flat Earth societies and among unaffiliated individuals using social media. In a 2018 study reported on by Scientific American, only 82% of American respondents aged 18 to 24 agreed with the statement "I have always believed the world is round". However, a firm belief in a flat Earth is rare, with less than 2% acceptance in all age groups.
History
Belief in flat Earth
Near East
In early Egyptian and Mesopotamian thought, the world was portrayed as a disk floating in the ocean. A similar model is found in the Homeric account from the 8th century BC in which "Okeanos, the personified body of water surrounding the circular surface of the Earth, is the begetter of all life and possibly of all gods."
The Pyramid Texts and Coffin Texts of ancient Egypt show a similar cosmography; Nun (the Ocean) encircled nbwt ("dry lands" or "Islands").
The Israelites also imagined the Earth to be a disc floating on water with an arched firmament above it that separated the Earth from the heavens. The sky was a solid dome with the Sun, Moon, planets, and stars embedded in it.
Greece
Poets
Both Homer and Hesiod described a disc cosmography on the Shield of Achilles. This poetic tradition of an Earth-encircling (gaiaokhos) sea (Oceanus) and a disc also appears in Stasinus of Cyprus, Mimnermus, Aeschylus, and Apollonius Rhodius.
Homer's description of the disc cosmography on the shield of Achilles with the encircling ocean is repeated far later in Quintus Smyrnaeus' Posthomerica (4th century AD), which continues the narration of the Trojan War.
Philosophers
Several pre-Socratic philosophers believed that the world was flat: Thales (c. 550 BC) according to several sources, and Leucippus (c. 440 BC) and Democritus (c. 460–370 BC) according to Aristotle.
Thales thought that the Earth floated in water like a log. It has been argued, however, that Thales actually believed in a spherical Earth. Anaximander (c. 550 BC) believed that the Earth was a short cylinder with a flat, circular top that remained stable because it was the same distance from all things. Anaximenes of Miletus believed that "the Earth is flat and rides on air; in the same way the Sun and the Moon and the other heavenly bodies, which are all fiery, ride the air because of their flatness". Xenophanes (c. 500 BC) thought that the Earth was flat, with its upper side touching the air, and the lower side extending without limit.
Belief in a flat Earth continued into the 5th century BC. Anaxagoras (c. 450 BC) agreed that the Earth was flat, and his pupil Archelaus believed that the flat Earth was depressed in the middle like a saucer, to allow for the fact that the Sun does not rise and set at the same time for everyone.
Historians
Hecataeus of Miletus believed that the Earth was flat and surrounded by water. Herodotus in his Histories ridiculed the belief that water encircled the world, yet most classicists agree that he still believed Earth was flat because of his descriptions of literal "ends" or "edges" of the Earth.
Northern Europe
The ancient Norse and Germanic peoples believed in a flat-Earth cosmography with the Earth surrounded by an ocean, with the axis mundi, a world tree (Yggdrasil), or pillar (Irminsul) in the centre. In the world-encircling ocean sat a snake called Jormungandr. The Norse creation account preserved in Gylfaginning (VIII) states that during the creation of the Earth, an impassable sea was placed around it:
The late Norse Konungs skuggsjá, on the other hand, explains Earth's shape as a sphere:
East Asia
In ancient China, the prevailing belief was that the Earth was flat and square, while the heavens were round, an assumption virtually unquestioned until the introduction of European astronomy in the 17th century. The English sinologist Cullen emphasizes the point that there was no concept of a round Earth in ancient Chinese astronomy:
The model of an egg was often used by Chinese astronomers such as Zhang Heng (78–139 AD) to describe the heavens as spherical:
This analogy with a curved egg led some modern historians, notably Joseph Needham, to conjecture that Chinese astronomers were, after all, aware of the Earth's sphericity. The egg reference, however, was rather meant to clarify the relative position of the flat Earth to the heavens:
Further examples cited by Needham supposed to demonstrate dissenting voices from the ancient Chinese consensus actually refer without exception to the Earth being square, not to it being flat. Accordingly, the 13th-century scholar Li Ye, who argued that the movements of the round heaven would be hindered by a square Earth, did not advocate a spherical Earth, but rather that its edge should be rounded off so as to be circular. However, Needham disagrees, affirming that Li Ye believed the Earth to be spherical, similar in shape to the heavens but much smaller. This idea was anticipated by the 4th-century scholar Yu Xi, who argued for the infinity of outer space surrounding the Earth and held that the latter could be either square or round, in accordance with the shape of the heavens. When Chinese geographers of the 17th century, influenced by European cartography and astronomy, showed the Earth as a sphere that could be circumnavigated by sailing around the globe, they did so with formulaic terminology previously used by Zhang Heng to describe the spherical shape of the Sun and Moon (i.e. that they were as round as a crossbow bullet).
As noted in the book Huainanzi, in the 2nd century BC, Chinese astronomers effectively inverted Eratosthenes' calculation of the curvature of the Earth to calculate the height of the Sun above the Earth. By assuming the Earth was flat, they arrived at a finite figure for that height. The Zhoubi Suanjing also discusses how to determine the distance of the Sun by measuring the length of noontime shadows at different latitudes, a method similar to Eratosthenes' measurement of the circumference of the Earth, except that the Zhoubi Suanjing assumes that the Earth is flat.
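The flat-Earth geometry behind such a calculation is simple similar-triangle reasoning (a standard reconstruction, not a quotation from the Zhoubi Suanjing itself). If two vertical gnomons of height $g$, separated by a north-south distance $D$, cast noon shadows of lengths $s_1$ (southern) and $s_2$ (northern), and the Sun stands at height $H$ above the flat plane at horizontal distance $x$ from the southern gnomon, then

$$\frac{g}{s_1} = \frac{H}{x}, \qquad \frac{g}{s_2} = \frac{H}{x + D} \quad\Longrightarrow\quad H = \frac{g\,D}{s_2 - s_1}.$$

On a spherical Earth the same shadow difference instead reflects a difference in latitude, which is why the flat-Earth assumption turns Eratosthenes-style data into a height for the Sun rather than a circumference for the Earth.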
Alternate or mixed theories
Greece: spherical Earth
Pythagoras in the 6th century BC and Parmenides in the 5th century BC stated that the Earth is spherical, and this view spread rapidly in the Greek world. Around 330 BC, Aristotle maintained on the basis of physical theory and observational evidence that the Earth was spherical, and reported an estimate of its circumference. The Earth's circumference was first determined around 240 BC by Eratosthenes. By the 2nd century AD, Ptolemy had derived his maps from a globe and developed the system of latitude, longitude, and climes. His Almagest was written in Greek and only translated into Latin in the 11th century from Arabic translations.
Lucretius (1st century BC) opposed the concept of a spherical Earth, because he considered that an infinite universe had no center towards which heavy bodies would tend. Thus, he thought the idea of animals walking around topsy-turvy under the Earth was absurd. By the 1st century AD, Pliny the Elder was in a position to say that everyone agreed on the spherical shape of Earth, though disputes continued regarding the nature of the antipodes, and how it is possible to keep the ocean in a curved shape.
South Asia
The Vedic texts depict the cosmos in many ways. One of the earliest Indian cosmological texts pictures the Earth as one of a stack of flat disks.
In the Vedic texts, Dyaus (heaven) and Prithvi (Earth) are compared to wheels on an axle, yielding a flat model. They are also described as bowls or leather bags, yielding a concave model. According to Macdonell: "the conception of the Earth being a disc surrounded by an ocean does not appear in the Samhitas. But it was naturally regarded as circular, being compared with a wheel (10.89) and expressly called circular (parimandala) in the Shatapatha Brahmana."
By about the 5th century AD, the siddhanta astronomy texts of South Asia, particularly of Aryabhata, assume a spherical Earth as they develop mathematical methods for quantitative astronomy for calendar and time keeping.
The medieval Indian texts called the Puranas describe the Earth as a flat-bottomed, circular disk with concentric oceans and continents. This general scheme is present not only in the Hindu cosmologies, but also in Buddhist and Jain cosmologies of South Asia. However, some Puranas include other models. The fifth canto of the Bhagavata Purana, for example, includes sections that describe the Earth both as flat and spherical.
Early Christian Church
During the early period of the Christian Church, the spherical view continued to be widely held, with some notable exceptions.
Athenagoras, an eastern Christian writing around the year 175 AD, said that the Earth was spherical. Methodius (c. 290 AD), an eastern Christian writing against "the theory of the Chaldeans and the Egyptians" said: "Let us first lay bare ... the theory of the Chaldeans and the Egyptians. They say that the circumference of the universe is likened to the turnings of a well-rounded globe, the Earth being a central point. They say that since its outline is spherical, ... the Earth should be the center of the universe, around which the heaven is whirling." Lactantius, a western Christian writer and advisor to the first Christian Roman Emperor, Constantine, writing sometime between 304 and 313 AD, ridiculed the notion of antipodes and the philosophers who fancied that "the universe is round like a ball. They also thought that heaven revolves in accordance with the motion of the heavenly bodies. ... For that reason, they constructed brass globes, as though after the figure of the universe." Arnobius, another eastern Christian writing sometime around 305 AD, described the round Earth: "In the first place, indeed, the world itself is neither right nor left. It has neither upper nor lower regions, nor front nor back. For whatever is round and bounded on every side by the circumference of a solid sphere, has no beginning or end ..."
The influential theologian and philosopher Saint Augustine, one of the four Great Church Fathers of the Western Church, similarly objected to the "fable" of antipodes:
Some historians do not view Augustine's scriptural commentaries as endorsing any particular cosmological model, endorsing instead the view that Augustine shared the common view of his contemporaries that the Earth is spherical, in line with his endorsement of science in De Genesi ad litteram. C. P. E. Nothaft, responding to writers like Leo Ferrari who described Augustine as endorsing a flat Earth, says that "...other recent writers on the subject treat Augustine’s acceptance of the earth’s spherical shape as a well-established fact".
Diodorus of Tarsus, a leading figure in the School of Antioch and mentor of John Chrysostom, may have argued for a flat Earth; however, Diodorus' opinion on the matter is known only from a later criticism. Chrysostom, one of the four Great Church Fathers of the Eastern Church and Archbishop of Constantinople, explicitly espoused the idea, based on scripture, that the Earth floats miraculously on the water beneath the firmament.
Christian Topography (547) by the Alexandrian monk Cosmas Indicopleustes, who had traveled as far as Sri Lanka and the source of the Blue Nile, is now widely considered the most valuable geographical document of the early medieval age, although it received relatively little attention from contemporaries. In it, the author repeatedly expounds the doctrine that the universe consists of only two places, the Earth below the firmament and heaven above it. Carefully drawing on arguments from scripture, he describes the Earth as a rectangle, 400 days' journey long by 200 wide, surrounded by four oceans and enclosed by four massive walls which support the firmament. The spherical Earth theory is contemptuously dismissed as "pagan".
Severian, Bishop of Gabala (d. 408), wrote that the Earth is flat and the Sun does not pass under it in the night, but "travels through the northern parts as if hidden by a wall". Basil of Caesarea (329–379) argued that the matter was theologically irrelevant.
Europe: Early Middle Ages
Early medieval Christian writers felt little urge to assume flatness of the Earth, though they had fuzzy impressions of the writings of Ptolemy and Aristotle, relying more on Pliny.
With the end of the Western Roman Empire, Western Europe entered the Middle Ages with great difficulties that affected the continent's intellectual production. Most scientific treatises of classical antiquity (in Greek) were unavailable, leaving only simplified summaries and compilations. In contrast, the Eastern Roman Empire did not fall, and it preserved the learning. Still, many textbooks of the Early Middle Ages supported the sphericity of the Earth in the western part of Europe.
Europe's view of the shape of the Earth in Late Antiquity and the Early Middle Ages may be best expressed by the writings of early Christian scholars:
Bishop Isidore of Seville (560–636) taught in his widely read encyclopedia, the Etymologies, diverse views such as that the Earth "resembles a wheel" resembling Anaximander in language and the map that he provided. This was widely interpreted as referring to a disc-shaped Earth. An illustration from Isidore's De Natura Rerum shows the five zones of the Earth as adjacent circles. Some have concluded that he thought the Arctic and Antarctic zones were adjacent to each other. He did not admit the possibility of antipodes, which he took to mean people dwelling on the opposite side of the Earth, considering them legendary and noting that there was no evidence for their existence. Isidore's T and O map, which was seen as representing a small part of a spherical Earth, continued to be used by authors through the Middle Ages, e.g. the 9th-century bishop Rabanus Maurus, who compared the habitable part of the northern hemisphere (Aristotle's northern temperate clime) with a wheel. At the same time, Isidore's works also gave the views of sphericity, for example, in chapter 28 of De Natura Rerum, Isidore claims that the Sun orbits the Earth and illuminates the other side when it is night on this side. See French translation of De Natura Rerum. In his other work Etymologies, there are also affirmations that the sphere of the sky has Earth in its center and the sky being equally distant on all sides. Other researchers have argued these points as well. "The work remained unsurpassed until the thirteenth century and was regarded as the summit of all knowledge. It became an essential part of European medieval culture. Soon after the invention of typography it appeared many times in print." However, "The Scholastics – later medieval philosophers, theologians, and scientists – were helped by the Arabic translators and commentaries, but they hardly needed to struggle against a flat-Earth legacy from the early middle ages (500–1050). Early medieval writers often had fuzzy and imprecise impressions of both Ptolemy and Aristotle and relied more on Pliny, but they felt (with one exception), little urge to assume flatness."
St Vergilius of Salzburg (c. 700–784), in the middle of the 8th century, discussed or taught some geographical or cosmographical ideas that St Boniface found sufficiently objectionable that he complained about them to Pope Zachary. The only surviving record of the incident is contained in Zachary's reply, dated 748.
Some authorities have suggested that the sphericity of the Earth was among the aspects of Vergilius's teachings that Boniface and Zachary considered objectionable. Others have considered this unlikely, and take the wording of Zachary's response to indicate at most an objection to belief in the existence of humans living in the antipodes. In any case, there is no record of any further action having been taken against Vergilius. He was later appointed bishop of Salzburg and was canonised in the 13th century.
A possible non-literary but graphic indication that people in the Middle Ages believed that the Earth (or perhaps the world) was a sphere is the use of the orb (globus cruciger) in the regalia of many kingdoms and of the Holy Roman Empire. It is attested from the time of the Christian late-Roman emperor Theodosius II (423) throughout the Middle Ages; the Reichsapfel was used in 1191 at the coronation of emperor Henry VI. However the word means "circle", and there is no record of a globe as a representation of the Earth since ancient times in the west until that of Martin Behaim in 1492. Additionally it could well be a representation of the entire "world" or cosmos.
A recent study of medieval concepts of the sphericity of the Earth noted that "since the eighth century, no cosmographer worthy of note has called into question the sphericity of the Earth". However, the work of these intellectuals may not have had significant influence on public opinion, and it is difficult to tell what the wider population may have thought of the shape of the Earth if they considered the question at all.
Europe: High and Late Middle Ages
Hermann of Reichenau (1013–1054) was among the earliest Christian scholars to estimate the circumference of Earth with Eratosthenes' method. Thomas Aquinas (1225–1274), the most widely taught theologian of the Middle Ages, believed in a spherical Earth and took for granted that his readers also knew the Earth is round. Lectures in the medieval universities commonly advanced evidence in favor of the idea that the Earth was a sphere.
Jill Tattersall shows that in many vernacular French works of the 12th and 13th centuries the Earth was considered "round like a table" rather than "round like an apple". She writes, "[I]n virtually all the examples quoted ... from epics and from non-'historical' romances (that is, works of a less learned character) the actual form of words used suggests strongly a circle rather than a sphere", though she notes that even in these works the language is ambiguous.
Portuguese navigation down and around the coast of Africa in the latter half of the 1400s gave wide-scale observational evidence for Earth's sphericity. In these explorations, the Sun's position moved more northward the further south the explorers travelled. Its position directly overhead at noon gave evidence for crossing the equator. These apparent solar motions in detail were more consistent with north–south curvature and a distant Sun, than with any flat-Earth explanation. The ultimate demonstration came when Ferdinand Magellan's expedition completed the first global circumnavigation in 1521. Antonio Pigafetta, one of the few survivors of the voyage, recorded the loss of a day in the course of the voyage, giving evidence for east–west curvature.
Middle East: Islamic scholars
Prior to the introduction of Greek cosmology into the Islamic world, Muslims tended to view the Earth as flat, and Muslim traditionalists who rejected Greek philosophy continued to hold to this view later on while various theologians held opposing opinions. Beginning in the 10th century onwards, some Muslim traditionalists began to adopt the notion of a spherical Earth with the influence of Greek and Ptolemaic cosmology.
In Quranic cosmology, the Earth (al-arḍ) was "spread out." Whether or not this implies a flat Earth was debated by Muslims. Some modern historians believe the Quran saw the world as flat. On the other hand, the 12th-century commentary Tafsir al-Kabir by Fakhr al-Din al-Razi argues that though this verse does describe a flat surface, it is limited in its application to local regions of the Earth, which are roughly flat, as opposed to the Earth as a whole. Others who supported a spherical Earth included Ibn Hazm.
Ming Dynasty in China
A spherical terrestrial globe was introduced to Yuan-era Khanbaliq (i.e. Beijing) in 1267 by the Persian astronomer Jamal ad-Din, but it is not known to have made an impact on the traditional Chinese conception of the shape of the Earth. As late as 1595, an early Jesuit missionary to China, Matteo Ricci, recorded that the Ming-dynasty Chinese say: "The Earth is flat and square, and the sky is a round canopy; they did not succeed in conceiving the possibility of the antipodes."
In the 17th century, the idea of a spherical Earth spread in China due to the influence of the Jesuits, who held high positions as astronomers at the imperial court. Matteo Ricci, in collaboration with Chinese cartographers and translator Li Zhizao, published the Kunyu Wanguo Quantu in 1602, the first Chinese world map based on European discoveries. The astronomical and geographical treatise Gezhicao, written in 1648 by Xiong Mingyu, explained that the Earth was spherical, not flat or square, and could be circumnavigated.
Myth of flat-Earth prevalence
In the 19th century, a historical myth arose which held that the predominant cosmological doctrine during the Middle Ages was that the Earth was flat. An early proponent of this myth was the American writer Washington Irving, who maintained that Christopher Columbus had to overcome the opposition of churchmen to gain sponsorship for his voyage of exploration. Later significant advocates of this view were John William Draper and Andrew Dickson White, who used it as a major element in their advocacy of the thesis that there was a long-lasting and essential conflict between science and religion. Some studies of the historical connections between science and religion have demonstrated that theories of their mutual antagonism ignore examples of their mutual support.
Subsequent studies of medieval science have shown that most scholars in the Middle Ages, including those read by Christopher Columbus, maintained that the Earth was spherical.
Modern flat Earth beliefs
In the modern era, the pseudoscientific belief in a flat Earth originated with the English writer Samuel Rowbotham and his 1849 pamphlet Zetetic Astronomy. Lady Elizabeth Blount established the Universal Zetetic Society in 1893, which published journals. In 1956, Samuel Shenton set up the International Flat Earth Research Society, better known as the Flat Earth Society, in Dover, England, as a direct descendant of the Universal Zetetic Society.
In the Internet era, the availability of communications technology and social media like YouTube, Facebook and Twitter have made it easy for individuals, famous or not, to spread disinformation and attract others to erroneous ideas, including that of the flat Earth.
Modern believers in a flat Earth face overwhelming publicly accessible evidence of Earth's sphericity. They also need to explain why governments, media outlets, schools, scientists, surveyors, airlines and other organizations accept that the world is spherical. To satisfy these tensions and maintain their beliefs, they generally embrace some form of conspiracy theory. In addition, believers tend to not trust observations they have not made themselves, and often distrust, disagree with or accuse each other of being in league with conspiracies.
Education
Before learning from their social environment, a child's perception of their physical environment often leads to a false concept about the shape of Earth and what happens beyond the horizon. Many children think that Earth ends there and that one can fall off the edge. Education helps them gradually change their belief into a realist one of a spherical Earth.
| Physical sciences | Earth science basics: General | Earth science |
11615 | https://en.wikipedia.org/wiki/Finite%20field | Finite field | In mathematics, a finite field or Galois field (so-named in honor of Évariste Galois) is a field that contains a finite number of elements. As with any field, a finite field is a set on which the operations of multiplication, addition, subtraction and division are defined and satisfy certain basic rules. The most common examples of finite fields are the integers mod p when p is a prime number.
The order of a finite field is its number of elements, which is either a prime number or a prime power. For every prime number p and every positive integer k there are fields of order p^k. All finite fields of a given order are isomorphic.
Finite fields are fundamental in a number of areas of mathematics and computer science, including number theory, algebraic geometry, Galois theory, finite geometry, cryptography and coding theory.
Properties
A finite field is a finite set that is a field; this means that multiplication, addition, subtraction and division (excluding division by zero) are defined and satisfy the rules of arithmetic known as the field axioms.
The number of elements of a finite field is called its order or, sometimes, its size. A finite field of order q exists if and only if q is a prime power p^k (where p is a prime number and k is a positive integer). In a field of order p^k, adding p copies of any element always results in zero; that is, the characteristic of the field is p.
For every prime power q, all fields of order q are isomorphic (see below). Moreover, a field cannot contain two different finite subfields with the same order. One may therefore identify all finite fields with the same order, and they are unambiguously denoted F_q or GF(q), where the letters GF stand for "Galois field".
In a finite field of order q, the polynomial X^q - X has all q elements of the finite field as roots. The non-zero elements of a finite field form a multiplicative group. This group is cyclic, so all non-zero elements can be expressed as powers of a single element called a primitive element of the field. (In general there will be several primitive elements for a given field.)
The simplest examples of finite fields are the fields of prime order: for each prime number p, the prime field of order p may be constructed as the integers modulo p, that is, Z/pZ.
The elements of the prime field of order p may be represented by integers in the range 0, ..., p - 1. The sum, the difference and the product are the remainder of the division by p of the result of the corresponding integer operation. The multiplicative inverse of an element may be computed by using the extended Euclidean algorithm.
Let F be a finite field. For any element x in F and any integer n, denote by n · x the sum of n copies of x. The least positive n such that n · 1 = 0 is the characteristic p of the field. This allows defining a multiplication k · x of an element k of GF(p) by an element x of F by choosing an integer representative for k. This multiplication makes F into a GF(p)-vector space. It follows that the number of elements of F is p^n for some integer n.
The identity
(x + y)^p = x^p + y^p
(sometimes called the freshman's dream) is true in a field of characteristic p. This follows from the binomial theorem, as each binomial coefficient of the expansion of (x + y)^p, except the first and the last, is a multiple of p.
By Fermat's little theorem, if p is a prime number and x is in the field GF(p) then x^p = x. This implies the equality
X^p - X = (X - 0)(X - 1) ··· (X - (p - 1))
for polynomials over GF(p). More generally, every element in GF(p^n) satisfies the polynomial equation x^(p^n) - x = 0.
Any finite field extension of a finite field is separable and simple. That is, if F is a finite field and E is a subfield of F, then F is obtained from E by adjoining a single element whose minimal polynomial is separable. To use a piece of jargon, finite fields are perfect.
A more general algebraic structure that satisfies all the other axioms of a field, but whose multiplication is not required to be commutative, is called a division ring (or sometimes skew field). By Wedderburn's little theorem, any finite division ring is commutative, and hence is a finite field.
Existence and uniqueness
Let q = p^n be a prime power, and F be the splitting field of the polynomial
P = X^q - X
over the prime field GF(p). This means that F is a finite field of lowest order, in which P has q distinct roots (the formal derivative of P is P' = -1, implying that gcd(P, P') = 1, which in general implies that the splitting field is a separable extension of the original). The above identity shows that the sum and the product of two roots of P are roots of P, as well as the multiplicative inverse of a root of P. In other words, the roots of P form a field of order q, which is equal to F by the minimality of the splitting field.
The uniqueness up to isomorphism of splitting fields implies thus that all fields of order q are isomorphic. Also, if a field F has a field of order q as a subfield, its elements are the q roots of X^q - X, and F cannot contain another subfield of order q.
In summary, we have the following classification theorem first proved in 1893 by E. H. Moore:
The order of a finite field is a prime power. For every prime power q there are fields of order q, and they are all isomorphic. In these fields, every element satisfies
x^q = x,
and the polynomial X^q - X factors as
the product of the q distinct linear factors X - a, where a runs over all elements of the field.
It follows that GF(p^n) contains a subfield isomorphic to GF(p^m) if and only if m is a divisor of n; in that case, this subfield is unique. In fact, the polynomial X^(p^m) - X divides X^(p^n) - X if and only if m is a divisor of n.
Explicit construction
Non-prime fields
Given a prime power q = p^n with p prime and n > 1, the field GF(q) may be explicitly constructed in the following way. One first chooses an irreducible polynomial P in GF(p)[X] of degree n (such an irreducible polynomial always exists). Then the quotient ring
GF(q) = GF(p)[X] / (P)
of the polynomial ring GF(p)[X] by the ideal generated by P is a field of order q.
More explicitly, the elements of GF(q) are the polynomials over GF(p) whose degree is strictly less than n. The addition and the subtraction are those of polynomials over GF(p). The product of two elements is the remainder of the Euclidean division by P of the product in GF(p)[X].
The multiplicative inverse of a non-zero element may be computed with the extended Euclidean algorithm.
However, with this representation, elements of GF(q) may be difficult to distinguish from the corresponding polynomials. Therefore, it is common to give a name, commonly α, to the element of GF(q) that corresponds to the polynomial X. So, the elements of GF(q) become polynomials in α, where P(α) = 0, and, when one encounters a polynomial in α of degree greater or equal to n (for example after a multiplication), one knows that one has to use the relation P(α) = 0 to reduce its degree (it is what Euclidean division is doing).
Except in the construction of GF(4), there are several possible choices for P, which produce isomorphic results. To simplify the Euclidean division, one commonly chooses for P a polynomial of the form
X^n + aX + b,
which make the needed Euclidean divisions very efficient. However, for some fields, typically in characteristic 2, irreducible polynomials of the form X^n + aX + b may not exist. In characteristic 2, if the polynomial X^n + X + 1 is reducible, it is recommended to choose X^n + X^k + 1 with the lowest possible k that makes the polynomial irreducible. If all these trinomials are reducible, one chooses "pentanomials" X^n + X^a + X^b + X^c + 1, as polynomials of degree greater than 1, with an even number of terms, are never irreducible in characteristic 2, having 1 as a root.
A possible choice for such a polynomial is given by Conway polynomials. They ensure a certain compatibility between the representation of a field and the representations of its subfields.
In the next sections, we will show how the general construction method outlined above works for small finite fields.
Field with four elements
The smallest non-prime field is the field with four elements, which is commonly denoted GF(4) or F_4. It consists of the four elements 0, 1, α, 1 + α such that α^2 = 1 + α, 1 · α = α · 1 = α, x + x = 0, and x · 0 = 0 · x = 0, for every x in GF(4), the other operation results being easily deduced from the distributive law. See below for the complete operation tables.
This may be deduced as follows from the results of the preceding section.
Over GF(2), there is only one irreducible polynomial of degree 2:
X^2 + X + 1.
Therefore, for GF(4) the construction of the preceding section must involve this polynomial, and
GF(4) = GF(2)[X] / (X^2 + X + 1).
Let α denote a root of this polynomial in GF(4). This implies that
α^2 = 1 + α,
and that α and 1 + α are the elements of GF(4) that are not in GF(2). The tables of the operations in GF(4) result from this, and are as follows:
A table for subtraction is not given, because subtraction is identical to addition, as is the case for every field of characteristic 2.
In the third table, for the division of x by y, the values of x must be read in the left column, and the values of y in the top row. (Because 0 · z = 0 for every z in every ring, the division by 0 has to remain undefined.) From the tables, it can be seen that the additive structure of GF(4) is isomorphic to the Klein four-group, while the non-zero multiplicative structure is isomorphic to the cyclic group Z/3Z.
The map
φ : GF(4) → GF(4), x ↦ x^2,
is the non-trivial field automorphism, called the Frobenius automorphism, which sends α into the second root 1 + α of the above-mentioned irreducible polynomial X^2 + X + 1.
GF(p2) for an odd prime p
For applying the above general construction of finite fields in the case of GF(p^2), one has to find an irreducible polynomial of degree 2. For p = 2, this has been done in the preceding section. If p is an odd prime, there are always irreducible polynomials of the form X^2 - r, with r in GF(p).
More precisely, the polynomial X^2 - r is irreducible over GF(p) if and only if r is a quadratic non-residue modulo p (this is almost the definition of a quadratic non-residue). There are (p - 1)/2 quadratic non-residues modulo p. For example, 2 is a quadratic non-residue modulo 5, and 3 is a quadratic non-residue modulo 7. If p ≡ 3 (mod 4), one may choose -1 ≡ p - 1 as a quadratic non-residue, which allows us to have a very simple irreducible polynomial X^2 + 1.
Having chosen a quadratic non-residue r, let α be a symbolic square root of r, that is, a symbol that has the property α^2 = r, in the same way that the complex number i is a symbolic square root of -1. Then, the elements of GF(p^2) are all the linear expressions
a + bα,
with a and b in GF(p). The operations on GF(p^2) are defined as follows (the operations between elements of GF(p) represented by Latin letters are the operations in GF(p)):
GF(8) and GF(27)
The polynomial
X^3 - X - 1
is irreducible over GF(2) and GF(3), that is, it is irreducible modulo 2 and 3 (to show this, it suffices to show that it has no root in GF(2) nor in GF(3)). It follows that the elements of GF(8) and GF(27) may be represented by expressions
a + bα + cα^2,
where a, b, c are elements of GF(2) or GF(3) (respectively), and α is a symbol such that
α^3 = α + 1.
The addition, additive inverse and multiplication on GF(8) and GF(27) may thus be defined as follows; in following formulas, the operations between elements of GF(2) or GF(3), represented by Latin letters, are the operations in GF(2) or GF(3), respectively:
GF(16)
The polynomial
X^4 + X + 1
is irreducible over GF(2), that is, it is irreducible modulo 2. It follows that the elements of GF(16) may be represented by expressions
a + bα + cα^2 + dα^3,
where a, b, c, d are either 0 or 1 (elements of GF(2)), and α is a symbol such that
α^4 = α + 1
(that is, α is defined as a root of the given irreducible polynomial). As the characteristic of GF(2) is 2, each element is its additive inverse in GF(16). The addition and multiplication on GF(16) may be defined as follows; in following formulas, the operations between elements of GF(2), represented by Latin letters, are the operations in GF(2).
The field GF(16) has eight primitive elements (the elements that have all nonzero elements of GF(16) as integer powers). These elements are the four roots of X^4 + X + 1 and their multiplicative inverses. In particular, α is a primitive element, and the primitive elements are α^m with m less than and coprime with 15 (that is, 1, 2, 4, 7, 8, 11, 13, 14).
Multiplicative structure
The set of non-zero elements in GF(q) is an abelian group under the multiplication, of order q - 1. By Lagrange's theorem, there exists a divisor k of q - 1 such that x^k = 1 for every non-zero x in GF(q). As the equation x^k = 1 has at most k solutions in any field, q - 1 is the lowest possible value for k.
The structure theorem of finite abelian groups implies that this multiplicative group is cyclic, that is, all non-zero elements are powers of a single element. In summary: there is an element a such that the q - 1 non-zero elements of GF(q) are a, a^2, ..., a^(q-2), a^(q-1) = 1.
Such an element a is called a primitive element of GF(q). Unless q = 2, 3, the primitive element is not unique. The number of primitive elements is φ(q - 1), where φ is Euler's totient function.
The result above implies that x^(q-1) = 1 for every x in GF(q). The particular case where q is prime is Fermat's little theorem.
Discrete logarithm
If a is a primitive element in GF(q), then for any non-zero element x in GF(q), there is a unique integer n with 0 ≤ n ≤ q - 2 such that x = a^n.
This integer n is called the discrete logarithm of x to the base a.
While a^n can be computed very quickly, for example using exponentiation by squaring, there is no known efficient algorithm for computing the inverse operation, the discrete logarithm. This has been used in various cryptographic protocols; see Discrete logarithm for details.
When the nonzero elements of GF(q) are represented by their discrete logarithms, multiplication and division are easy, as they reduce to addition and subtraction modulo q - 1. However, addition amounts to computing the discrete logarithm of a^m + a^n. The identity
a^m + a^n = a^n (a^(m-n) + 1)
allows one to solve this problem by constructing the table of the discrete logarithms of a^n + 1, called Zech's logarithms, for n = 0, ..., q - 2 (it is convenient to define the discrete logarithm of zero as being -∞).
Zech's logarithms are useful for large computations, such as linear algebra over medium-sized fields, that is, fields that are sufficiently large for making natural algorithms inefficient, but not too large, as one has to pre-compute a table of the same size as the order of the field.
Roots of unity
Every nonzero element of a finite field is a root of unity, as x^(q-1) = 1 for every nonzero element x of GF(q).
If n is a positive integer, an nth primitive root of unity is a solution of the equation x^n = 1 that is not a solution of the equation x^m = 1 for any positive integer m < n. If a is an nth primitive root of unity in a field F, then F contains all the n roots of unity, which are 1, a, a^2, ..., a^(n-1).
The field GF(q) contains an nth primitive root of unity if and only if n is a divisor of q - 1; if n is a divisor of q - 1, then the number of primitive nth roots of unity in GF(q) is φ(n) (Euler's totient function). The number of nth roots of unity in GF(q) is gcd(n, q - 1).
In a field of characteristic p, every (np)th root of unity is also an nth root of unity. It follows that primitive (np)th roots of unity never exist in a field of characteristic p.
On the other hand, if n is coprime to p, the roots of the nth cyclotomic polynomial are distinct in every field of characteristic p, as this polynomial is a divisor of X^n - 1, whose discriminant is nonzero modulo p. It follows that the nth cyclotomic polynomial factors over GF(p) into distinct irreducible polynomials that have all the same degree, say d, and that GF(p^d) is the smallest field of characteristic p that contains the nth primitive roots of unity.
When computing Brauer characters, one uses the map to map eigenvalues of a representation matrix to the complex numbers. Under this mapping, the base subfield consists of evenly spaced points around the unit circle (omitting zero).
Example: GF(64)
The field GF(64) has several interesting properties that smaller fields do not share: it has two subfields such that neither is contained in the other; not all generators (elements with minimal polynomial of degree 6 over GF(2)) are primitive elements; and the primitive elements are not all conjugate under the Galois group.
The order of this field being 2^6, and the divisors of 6 being 1, 2, 3, 6, the subfields of GF(64) are GF(2), GF(4), GF(8), and GF(64) itself. As 2 and 3 are coprime, the intersection of GF(4) and GF(8) in GF(64) is the prime field GF(2).
The union of GF(4) and GF(8) has thus 10 elements. The remaining 54 elements of GF(64) generate GF(64) in the sense that no other subfield contains any of them. It follows that they are roots of irreducible polynomials of degree 6 over GF(2). This implies that, over GF(2), there are exactly 9 = 54/6 irreducible monic polynomials of degree 6. This may be verified by factoring X^64 - X over GF(2).
The elements of GF(64) are primitive nth roots of unity for some n dividing 63. As the 3rd and the 7th roots of unity belong to GF(4) and GF(8), respectively, the 54 generators are primitive nth roots of unity for some n in {9, 21, 63}. Euler's totient function shows that there are 6 primitive 9th roots of unity, 12 primitive 21st roots of unity, and 36 primitive 63rd roots of unity. Summing these numbers, one finds again 54 elements.
By factoring the cyclotomic polynomials over , one finds that:
The six primitive 9th roots of unity are roots of X^6 + X^3 + 1 and are all conjugate under the action of the Galois group.
The twelve primitive 21st roots of unity are roots of the 21st cyclotomic polynomial, which factors over GF(2) into two irreducible polynomials of degree 6. They form two orbits under the action of the Galois group. As the two factors are reciprocal to each other, a root and its (multiplicative) inverse do not belong to the same orbit.
The 36 primitive elements of GF(64) are the roots of the 63rd cyclotomic polynomial. They split into six orbits of six elements each under the action of the Galois group.
This shows that the best choice to construct GF(64) is to define it as GF(2)[X] / (X^6 + X + 1). In fact, this generator is a primitive element, and this polynomial is the irreducible polynomial that produces the easiest Euclidean division.
Frobenius automorphism and Galois theory
In this section, p is a prime number, and q = p^n is a power of p.
In GF(q), the identity (x + y)^p = x^p + y^p implies that the map
φ : GF(q) → GF(q), x ↦ x^p
is a GF(p)-linear endomorphism and a field automorphism of GF(q), which fixes every element of the subfield GF(p). It is called the Frobenius automorphism, after Ferdinand Georg Frobenius.
Denoting by φ^k the composition of φ with itself k times, we have
φ^k(x) = x^(p^k).
It has been shown in the preceding section that φ^n is the identity. For 0 < k < n, the automorphism φ^k is not the identity, as, otherwise, the polynomial
X^(p^k) - X
would have more than p^k roots.
There are no other GF(p)-automorphisms of GF(q). In other words, GF(p^n) has exactly n GF(p)-automorphisms, which are
the identity, φ, φ^2, ..., φ^(n-1).
In terms of Galois theory, this means that GF(p^n) is a Galois extension of GF(p), which has a cyclic Galois group.
The fact that the Frobenius map is surjective implies that every finite field is perfect.
Polynomial factorization
If F is a finite field, a non-constant monic polynomial with coefficients in F is irreducible over F if it is not the product of two non-constant monic polynomials with coefficients in F.
As every polynomial ring over a field is a unique factorization domain, every monic polynomial over a finite field may be factored in a unique way (up to the order of the factors) into a product of irreducible monic polynomials.
There are efficient algorithms for testing polynomial irreducibility and factoring polynomials over finite fields. They are a key step for factoring polynomials over the integers or the rational numbers. At least for this reason, every computer algebra system has functions for factoring polynomials over finite fields, or, at least, over finite prime fields.
Irreducible polynomials of a given degree
The polynomial
X^q - X
factors into linear factors over a field of order q. More precisely, this polynomial is the product of all monic polynomials of degree one over a field of order q.
This implies that, if q = p^n, then X^q - X is the product of all monic irreducible polynomials over GF(p) whose degree divides n. In fact, if P is an irreducible factor over GF(p) of X^q - X, its degree divides n, as its splitting field is contained in GF(p^n). Conversely, if P is an irreducible monic polynomial over GF(p) of degree d dividing n, it defines a field extension of degree d, which is contained in GF(p^n), and all roots of P belong to GF(p^n) and are roots of X^q - X; thus P divides X^q - X. As X^q - X does not have any multiple factor, it is thus the product of all the irreducible monic polynomials that divide it.
This property is used to compute the product of the irreducible factors of each degree of polynomials over ; see Distinct degree factorization.
Number of monic irreducible polynomials of a given degree over a finite field
The number N(q, n) of monic irreducible polynomials of degree n over GF(q)
is given by
N(q, n) = (1/n) Σ_{d | n} μ(d) q^(n/d),
where μ is the Möbius function. This formula is an immediate consequence of the property of X^q - X above and the Möbius inversion formula.
By the above formula, the number of irreducible (not necessarily monic) polynomials of degree n over GF(q) is (q - 1) N(q, n).
The exact formula implies the inequality
N(q, n) ≥ (1/n) (q^n - Σ_{ℓ prime, ℓ | n} q^(n/ℓ));
this is sharp if and only if n is a power of some prime.
For every q and every n, the right hand side is positive, so there is at least one irreducible polynomial of degree n over GF(q).
Applications
In cryptography, the difficulty of the discrete logarithm problem in finite fields or in elliptic curves is the basis of several widely used protocols, such as the Diffie–Hellman protocol. For example, in 2014, a secure internet connection to Wikipedia involved the elliptic curve Diffie–Hellman protocol (ECDHE) over a large finite field. In coding theory, many codes are constructed as subspaces of vector spaces over finite fields.
Finite fields are used by many error correction codes, such as Reed–Solomon error correction code or BCH code. The finite field almost always has characteristic 2, since computer data is stored in binary. For example, a byte of data can be interpreted as an element of GF(2^8). One exception is the PDF417 bar code, which uses GF(929). Some CPUs have special instructions that can be useful for finite fields of characteristic 2, generally variations of carry-less product.
Finite fields are widely used in number theory, as many problems over the integers may be solved by reducing them modulo one or several prime numbers. For example, the fastest known algorithms for polynomial factorization and linear algebra over the field of rational numbers proceed by reduction modulo one or several primes, and then reconstruction of the solution by using Chinese remainder theorem, Hensel lifting or the LLL algorithm.
Similarly many theoretical problems in number theory can be solved by considering their reductions modulo some or all prime numbers. See, for example, Hasse principle. Many recent developments of algebraic geometry were motivated by the need to enlarge the power of these modular methods. Wiles' proof of Fermat's Last Theorem is an example of a deep result involving many mathematical tools, including finite fields.
The Weil conjectures concern the number of points on algebraic varieties over finite fields and the theory has many applications including exponential and character sum estimates.
Finite fields have widespread application in combinatorics, two well known examples being the definition of Paley Graphs and the related construction for Hadamard Matrices. In arithmetic combinatorics finite fields and finite field models are used extensively, such as in Szemerédi's theorem on arithmetic progressions.
Extensions
Wedderburn's little theorem
A division ring is a generalization of field. Division rings are not assumed to be commutative. There are no non-commutative finite division rings: Wedderburn's little theorem states that all finite division rings are commutative, and hence are finite fields. This result holds even if we relax the associativity axiom to alternativity, that is, all finite alternative division rings are finite fields, by the Artin–Zorn theorem.
Algebraic closure
A finite field F is not algebraically closed: the polynomial
P(X) = 1 + ∏_{a in F} (X - a)
has no roots in F, since P(a) = 1 for all a in F.
Given a prime number p, an algebraic closure of GF(p) is not only unique up to an isomorphism, as are all algebraic closures, but, contrarily to the general case, all its subfields are fixed by all its automorphisms, and it is also the algebraic closure of all finite fields of the same characteristic p.
This property results mainly from the fact that the elements of GF(p^n) are exactly the roots of X^(p^n) - X, and this defines an inclusion of GF(p^m) in GF(p^n) whenever m divides n. These inclusions allow writing informally GF(p^∞) = ⋃_n GF(p^n). The formal validation of this notation results from the fact that the above field inclusions form a directed set of fields; its direct limit is GF(p^∞), which may thus be considered as a "directed union".
Primitive elements in the algebraic closure
Given a primitive element g of GF(q^n), then g^((q^n - 1)/(q - 1)) is a primitive element of GF(q).
For explicit computations, it may be useful to have a coherent choice of the primitive elements for all finite fields; that is, to choose the primitive element of GF(p^n) in order that, whenever GF(p^m) is a subfield of GF(p^n), the primitive element already chosen for GF(p^m) is obtained from the one chosen for GF(p^n) by the formula above.
Such a construction may be obtained by Conway polynomials.
Quasi-algebraic closure
Although finite fields are not algebraically closed, they are quasi-algebraically closed, which means that every homogeneous polynomial over a finite field has a non-trivial zero whose components are in the field if the number of its variables is more than its degree. This was a conjecture of Artin and Dickson proved by Chevalley (see Chevalley–Warning theorem).
| Mathematics | Abstract algebra | null |
11617 | https://en.wikipedia.org/wiki/Feynman%20diagram | Feynman diagram | In theoretical physics, a Feynman diagram is a pictorial representation of the mathematical expressions describing the behavior and interaction of subatomic particles. The scheme is named after American physicist Richard Feynman, who introduced the diagrams in 1948.
The calculation of probability amplitudes in theoretical particle physics requires the use of large, complicated integrals over a large number of variables. Feynman diagrams instead represent these integrals graphically.
Feynman diagrams give a simple visualization of what would otherwise be an arcane and abstract formula. According to David Kaiser, "Since the middle of the 20th century, theoretical physicists have increasingly turned to this tool to help them undertake critical calculations. Feynman diagrams have revolutionized nearly every aspect of theoretical physics."
While the diagrams apply primarily to quantum field theory, they can be used in other areas of physics, such as solid-state theory. Frank Wilczek wrote that the calculations that won him the 2004 Nobel Prize in Physics "would have been literally unthinkable without Feynman diagrams, as would [Wilczek's] calculations that established a route to production and observation of the Higgs particle."
A Feynman diagram is a graphical representation of a perturbative contribution to the transition amplitude or correlation function of a quantum mechanical or statistical field theory. Within the canonical formulation of quantum field theory, a Feynman diagram represents a term in the Wick's expansion of the perturbative S-matrix. Alternatively, the path integral formulation of quantum field theory represents the transition amplitude as a weighted sum of all possible histories of the system from the initial to the final state, in terms of either particles or fields. The transition amplitude is then given as the matrix element of the S-matrix between the initial and final states of the quantum system.
Feynman used Ernst Stueckelberg's interpretation of the positron as if it were an electron moving backward in time. Thus, antiparticles are represented as moving backward along the time axis in Feynman diagrams.
Motivation and history
When calculating scattering cross-sections in particle physics, the interaction between particles can be described by starting from a free field that describes the incoming and outgoing particles, and including an interaction Hamiltonian to describe how the particles deflect one another. The amplitude for scattering is the sum of each possible interaction history over all possible intermediate particle states. The number of times the interaction Hamiltonian acts is the order of the perturbation expansion, and the time-dependent perturbation theory for fields is known as the Dyson series. When the intermediate states at intermediate times are energy eigenstates (collections of particles with a definite momentum) the series is called old-fashioned perturbation theory (or time-dependent/time-ordered perturbation theory).
The Dyson series can be alternatively rewritten as a sum over Feynman diagrams, where at each vertex both the energy and momentum are conserved, but where the length of the energy-momentum four-vector is not necessarily equal to the mass, i.e. the intermediate particles are so-called off-shell. The Feynman diagrams are much easier to keep track of than "old-fashioned" terms, because the old-fashioned way treats the particle and antiparticle contributions as separate. Each Feynman diagram is the sum of exponentially many old-fashioned terms, because each internal line can separately represent either a particle or an antiparticle. In a non-relativistic theory, there are no antiparticles and there is no doubling, so each Feynman diagram includes only one term.
Feynman gave a prescription for calculating the amplitude (the Feynman rules, below) for any given diagram from a field theory Lagrangian. Each internal line corresponds to a factor of the virtual particle's propagator; each vertex where lines meet gives a factor derived from an interaction term in the Lagrangian, and incoming and outgoing lines carry an energy, momentum, and spin.
In addition to their value as a mathematical tool, Feynman diagrams provide deep physical insight into the nature of particle interactions. Particles interact in every way available; in fact, intermediate virtual particles are allowed to propagate faster than light. The probability of each final state is then obtained by summing over all such possibilities. This is closely tied to the functional integral formulation of quantum mechanics, also invented by Feynman—see path integral formulation.
The naïve application of such calculations often produces diagrams whose amplitudes are infinite, because the short-distance particle interactions require a careful limiting procedure, to include particle self-interactions. The technique of renormalization, suggested by Ernst Stueckelberg and Hans Bethe and implemented by Dyson, Feynman, Schwinger, and Tomonaga compensates for this effect and eliminates the troublesome infinities. After renormalization, calculations using Feynman diagrams match experimental results with very high accuracy.
Feynman diagram and path integral methods are also used in statistical mechanics and can even be applied to classical mechanics.
Alternate names
Murray Gell-Mann always referred to Feynman diagrams as Stueckelberg diagrams, after Swiss physicist Ernst Stueckelberg, who devised a similar notation many years earlier. Stueckelberg was motivated by the need for a manifestly covariant formalism for quantum field theory, but did not provide as automated a way to handle symmetry factors and loops, although he was first to find the correct physical interpretation in terms of forward and backward in time particle paths, all without the path-integral.
Historically, as a book-keeping device of covariant perturbation theory, the graphs were called Feynman–Dyson diagrams or Dyson graphs, because the path integral was unfamiliar when they were introduced, and Freeman Dyson's derivation from old-fashioned perturbation theory borrowed from the perturbative expansions in statistical mechanics was easier to follow for physicists trained in earlier methods. Feynman had to lobby hard for the diagrams, which confused physicists trained in equations and graphs.
Representation of physical reality
In their presentations of fundamental interactions, written from the particle physics perspective, Gerard 't Hooft and Martinus Veltman gave good arguments for taking the original, non-regularized Feynman diagrams as the most succinct representation of the physics of quantum scattering of fundamental particles. Their motivations are consistent with the convictions of James Daniel Bjorken and Sidney Drell:
The Feynman graphs and rules of calculation summarize quantum field theory in a form in close contact with the experimental numbers one wants to understand. Although the statement of the theory in terms of graphs may imply perturbation theory, use of graphical methods in the many-body problem shows that this formalism is flexible enough to deal with phenomena of nonperturbative characters ... Some modification of the Feynman rules of calculation may well outlive the elaborate mathematical structure of local canonical quantum field theory ...
In quantum field theories, Feynman diagrams are obtained from a Lagrangian by Feynman rules.
Dimensional regularization is a method for regularizing integrals in the evaluation of Feynman diagrams; it assigns values to them that are meromorphic functions of an auxiliary complex parameter , called the dimension. Dimensional regularization writes a Feynman integral as an integral depending on the spacetime dimension and spacetime points.
Particle-path interpretation
A Feynman diagram is a representation of quantum field theory processes in terms of particle interactions. The particles are represented by the diagram lines. The lines can be squiggly or straight, with an arrow or without, depending on the type of particle. A point where lines connect to other lines is a vertex, and this is where the particles meet and interact. The interactions are: emit/absorb particles, deflect particles, or change particle type.
The three different types of lines are: internal lines, connecting vertices, incoming lines, extending from "the past" to a vertex, representing an initial state, and outgoing lines, extending from a vertex to "the future", representing the end state (the latter two are also known as external lines). Traditionally, the bottom of the diagram is the past and the top the future; alternatively, the past is to the left and the future to the right. When calculating correlation functions instead of scattering amplitudes, past and future are not relevant and all lines are internal. The particles then begin and end on small x's, which represent the positions of the operators whose correlation is calculated.
Feynman diagrams are a pictorial representation of a contribution to the total amplitude for a process that can happen in different ways. When a group of incoming particles scatter off each other, the process can be thought of as one where the particles travel over all possible paths, including paths that go backward in time.
Feynman diagrams are graphs that represent the interaction of particles rather than the physical position of the particle during a scattering process. They are not the same as spacetime diagrams and bubble chamber images even though they all describe particle scattering. Unlike a bubble chamber picture, only the sum of all relevant Feynman diagrams represent any given particle interaction; particles do not choose a particular diagram each time they interact. The law of summation is in accord with the principle of superposition—every diagram contributes to the total process's amplitude.
Description
A Feynman diagram represents a perturbative contribution to the amplitude of a quantum transition from some initial quantum state to some final quantum state.
For example, in the process of electron-positron annihilation the initial state is one electron and one positron, while the final state is two photons.
Conventionally, the initial state is at the left of the diagram and the final state at the right (although other layouts are also used).
The particles in the initial state are depicted by lines pointing in the direction of the initial state (e.g., to the left). The particles in the final state are represented by lines pointing in the direction of the final state (e.g., to the right).
QED involves two types of particles: matter particles such as electrons or positrons (called fermions) and exchange particles (called gauge bosons). They are represented in Feynman diagrams as follows:
Electron in the initial state is represented by a solid line, with an arrow indicating the spin of the particle e.g. pointing toward the vertex (→•).
Electron in the final state is represented by a line, with an arrow indicating the spin of the particle e.g. pointing away from the vertex: (•→).
Positron in the initial state is represented by a solid line, with an arrow indicating the spin of the particle e.g. pointing away from the vertex: (←•).
Positron in the final state is represented by a line, with an arrow indicating the spin of the particle e.g. pointing toward the vertex: (•←).
Virtual Photon in the initial and the final states is represented by a wavy line (~• and •~).
In QED each vertex has three lines attached to it: one bosonic line, one fermionic line with arrow toward the vertex, and one fermionic line with arrow away from the vertex.
Vertices can be connected by a bosonic or fermionic propagator. A bosonic propagator is represented by a wavy line connecting two vertices (•~•). A fermionic propagator is represented by a solid line with an arrow connecting two vertices, (•←•).
The number of vertices gives the order of the term in the perturbation series expansion of the transition amplitude.
Electron–positron annihilation example
The electron–positron annihilation interaction:
e+ + e− → 2γ
has a contribution from the second order Feynman diagram:
In the initial state (at the bottom; early time) there is one electron (e−) and one positron (e+) and in the final state (at the top; late time) there are two photons (γ).
Canonical quantization formulation
The probability amplitude for a transition of a quantum system (between asymptotically free states) from the initial state to the final state is given by the matrix element
where S is the S-matrix. In terms of the time-evolution operator U, it is simply
In the interaction picture, this expands to
where is the interaction Hamiltonian and signifies the time-ordered product of operators. Dyson's formula expands the time-ordered matrix exponential into a perturbation series in the powers of the interaction Hamiltonian density,
Equivalently, with the interaction Lagrangian , it is
A Feynman diagram is a graphical representation of a single summand in the Wick's expansion of the time-ordered product in the nth-order term of the Dyson series of the S-matrix,
where signifies the normal-ordered product of the operators and (±) takes care of the possible sign change when commuting the fermionic operators to bring them together for a contraction (a propagator) and represents all possible contractions.
Feynman rules
The diagrams are drawn according to the Feynman rules, which depend upon the interaction Lagrangian. For the QED interaction Lagrangian
describing the interaction of a fermionic field with a bosonic gauge field , the Feynman rules can be formulated in coordinate space as follows:
Each integration coordinate is represented by a point (sometimes called a vertex);
A bosonic propagator is represented by a wiggly line connecting two points;
A fermionic propagator is represented by a solid line connecting two points;
A bosonic field is represented by a wiggly line attached to the point ;
A fermionic field is represented by a solid line attached to the point with an arrow toward the point;
An anti-fermionic field is represented by a solid line attached to the point with an arrow away from the point;
Example: second order processes in QED
The second order perturbation term in the S-matrix is
Scattering of fermions
The Wick's expansion of the integrand gives (among others) the following term
where
is the electromagnetic contraction (propagator) in the Feynman gauge. This term is represented by the Feynman diagram at the right. This diagram gives contributions to the following processes:
e− e− scattering (initial state at the right, final state at the left of the diagram);
e+ e+ scattering (initial state at the left, final state at the right of the diagram);
e− e+ scattering (initial state at the bottom/top, final state at the top/bottom of the diagram).
Compton scattering and annihilation/generation of e− e+ pairs
Another interesting term in the expansion is
where
is the fermionic contraction (propagator).
Path integral formulation
In a path integral, the field Lagrangian, integrated over all possible field histories, defines the probability amplitude to go from one field configuration to another. In order to make sense, the field theory must have a well-defined ground state, and the integral must be performed a little bit rotated into imaginary time, i.e. a Wick rotation. The path integral formalism is completely equivalent to the canonical operator formalism above.
Scalar field Lagrangian
A simple example is the free relativistic scalar field in dimensions, whose action integral is:
The probability amplitude for a process is:
where and are space-like hypersurfaces that define the boundary conditions. The collection of all the on the starting hypersurface give the field's initial value, analogous to the starting position for a point particle, and the field values at each point of the final hypersurface defines the final field value, which is allowed to vary, giving a different amplitude to end up at different values. This is the field-to-field transition amplitude.
The path integral gives the expectation value of operators between the initial and final state:
and in the limit that A and B recede to the infinite past and the infinite future, the only contribution that matters is from the ground state (this is only rigorously true if the path-integral is defined slightly rotated into imaginary time). The path integral can be thought of as analogous to a probability distribution, and it is convenient to define it so that multiplying by a constant does not change anything:
The field's partition function is the normalization factor on the bottom, which coincides with the statistical mechanical partition function at zero temperature when rotated into imaginary time.
The initial-to-final amplitudes are ill-defined if one thinks of the continuum limit right from the beginning, because the fluctuations in the field can become unbounded. So the path-integral can be thought of as on a discrete square lattice, with lattice spacing and the limit should be taken carefully. If the final results do not depend on the shape of the lattice or the value of , then the continuum limit exists.
On a lattice
On a lattice, (i), the field can be expanded in Fourier modes:
Here the integration domain is over k restricted to a cube of side length 2π/a, so that large values of k are not allowed. It is important to note that the k-measure contains the factors of 2π from Fourier transforms; this is the best standard convention for k-integrals in QFT. The lattice means that fluctuations at large k are not allowed to contribute right away; they only start to contribute in the limit a → 0. Sometimes, instead of a lattice, the field modes are just cut off at high values of k instead.
It is also convenient from time to time to consider the space-time volume to be finite, so that the modes are also a lattice. This is not strictly as necessary as the space-lattice limit, because interactions in are not localized, but it is convenient for keeping track of the factors in front of the -integrals and the momentum-conserving delta functions that will arise.
On a lattice, (ii), the action needs to be discretized:
where is a pair of nearest lattice neighbors and . The discretization should be thought of as defining what the derivative means.
In terms of the lattice Fourier modes, the action can be written:
For near zero this is:
Now we have the continuum Fourier transform of the original action. In finite volume, the quantity is not infinitesimal, but becomes the volume of a box made by neighboring Fourier modes, or .
The field is real-valued, so the Fourier transform obeys:
In terms of real and imaginary parts, the real part of is an even function of , while the imaginary part is odd. The Fourier transform avoids double-counting, so that it can be written:
over an integration domain that integrates over each pair exactly once.
For a complex scalar field with action
the Fourier transform is unconstrained:
and the integral is over all .
Integrating over all different values of is equivalent to integrating over all Fourier modes, because taking a Fourier transform is a unitary linear transformation of field coordinates. When you change coordinates in a multidimensional integral by a linear transformation, the value of the new integral is given by the determinant of the transformation matrix. If
then
If is a rotation, then
so that , and the sign depends on whether the rotation includes a reflection or not.
The matrix that changes coordinates from to can be read off from the definition of a Fourier transform.
and the Fourier inversion theorem tells you the inverse:
which is the complex conjugate-transpose, up to factors of 2π. On a finite volume lattice, the determinant is nonzero and independent of the field values.
and the path integral is a separate factor at each value of .
The factor is the infinitesimal volume of a discrete cell in -space, in a square lattice box
where is the side-length of the box. Each separate factor is an oscillatory Gaussian, and the width of the Gaussian diverges as the volume goes to infinity.
In imaginary time, the Euclidean action becomes positive definite, and can be interpreted as a probability distribution. The probability of a field having values is
The expectation value of the field is the statistical expectation value of the field when chosen according to the probability distribution:
Since the probability of is a product, the value of at each separate value of is independently Gaussian distributed. The variance of the Gaussian is , which is formally infinite, but that just means that the fluctuations are unbounded in infinite volume. In any finite volume, the integral is replaced by a discrete sum, and the variance of the integral is .
Monte Carlo
The path integral defines a probabilistic algorithm to generate a Euclidean scalar field configuration. Randomly pick the real and imaginary parts of each Fourier mode at wavenumber to be a Gaussian random variable with variance . This generates a configuration at random, and the Fourier transform gives . For real scalar fields, the algorithm must generate only one of each pair , and make the second the complex conjugate of the first.
To find any correlation function, generate a field again and again by this procedure, and find the statistical average:
where is the number of configurations, and the sum is of the product of the field values on each configuration. The Euclidean correlation function is just the same as the correlation function in statistics or statistical mechanics. The quantum mechanical correlation functions are an analytic continuation of the Euclidean correlation functions.
For free fields with a quadratic action, the probability distribution is a high-dimensional Gaussian, and the statistical average is given by an explicit formula. But the Monte Carlo method also works well for bosonic interacting field theories where there is no closed form for the correlation functions.
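The following is a minimal sketch (in Python with NumPy, not part of the original presentation) of the sampling algorithm just described, assuming a real scalar field on a small periodic two-dimensional lattice with unit spacing; the lattice size, mass, and number of configurations are illustrative choices.

```python
import numpy as np

def sample_free_field(L, m, rng):
    """Draw one configuration of a free Euclidean scalar field on an L x L
    periodic lattice by giving every Fourier mode a Gaussian weight with
    variance proportional to 1/(k^2 + m^2)."""
    k = 2 * np.pi * np.fft.fftfreq(L)                 # allowed lattice momenta
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = (2 - 2 * np.cos(kx)) + (2 - 2 * np.cos(ky))  # lattice form of k^2
    eta_k = np.fft.fft2(rng.standard_normal((L, L)))  # white Gaussian noise in k-space
    phi_k = eta_k / np.sqrt(k2 + m**2)                # weight each mode
    return np.real(np.fft.ifft2(phi_k))               # reality condition is automatic

# Estimate the two-point function <phi(x) phi(0)> along one axis by averaging.
L, m, n_cfg = 32, 0.5, 200
corr = np.zeros(L)
for i in range(n_cfg):
    phi = sample_free_field(L, m, np.random.default_rng(i))
    for d in range(L):
        corr[d] += np.mean(phi * np.roll(phi, d, axis=0))
corr /= n_cfg
print(corr[:6])   # falls off roughly exponentially with separation, as expected
```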
Scalar propagator
Each mode is independently Gaussian distributed. The expectation of field modes is easy to calculate:
for , since then the two Gaussian random variables are independent and both have zero mean.
in finite volume , when the two -values coincide, since this is the variance of the Gaussian. In the infinite volume limit,
Strictly speaking, this is an approximation: the lattice propagator is:
But near , for field fluctuations long compared to the lattice spacing, the two forms coincide.
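As a small numerical illustration of this statement (assuming unit lattice spacing and an illustrative mass), the lattice form of the propagator, with 2(1 − cos k) in place of k^2 in each direction, agrees with the continuum form at small momentum and differs near the edge of the Brillouin zone:

```python
import numpy as np

m = 0.1
for k in (0.01, 0.1, 1.0, 2.0):
    lattice   = 1.0 / (2 * (1 - np.cos(k)) + m**2)   # lattice dispersion
    continuum = 1.0 / (k**2 + m**2)                  # continuum propagator
    print(f"k={k:5}: lattice={lattice:12.4f}  continuum={continuum:12.4f}")
# For k much smaller than 1 the two agree; at large k they differ.
```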
The delta functions contain factors of 2π, so that they cancel out the 2π factors in the measure for integrals.
where is the ordinary one-dimensional Dirac delta function. This convention for delta-functions is not universal; some authors keep the factors of 2π in the delta functions (and in the -integration) explicit.
Equation of motion
The form of the propagator can be more easily found by using the equation of motion for the field. From the Lagrangian, the equation of motion is:
and in an expectation value, this says:
Where the derivatives act on , and the identity is true everywhere except when and coincide, and the operator order matters. The form of the singularity can be understood from the canonical commutation relations to be a delta-function. Defining the (Euclidean) Feynman propagator as the Fourier transform of the time-ordered two-point function (the one that comes from the path-integral):
So that:
If the equations of motion are linear, the propagator will always be the reciprocal of the quadratic-form matrix that defines the free Lagrangian, since this gives the equations of motion. This is also easy to see directly from the path integral. The factor of disappears in the Euclidean theory.
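A hedged sketch of this statement, for a free field on a finite periodic one-dimensional lattice: the position-space propagator is literally the matrix inverse of the discretized quadratic form, and its Fourier transform reproduces the momentum-space formula. The lattice size and mass below are illustrative.

```python
import numpy as np

N, m = 16, 0.7
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 2 + m**2            # diagonal of (-laplacian + m^2)
    A[i, (i + 1) % N] = -1        # nearest-neighbour hopping, periodic boundary
    A[i, (i - 1) % N] = -1
G = np.linalg.inv(A)              # position-space propagator G(x, y)

# Compare the Fourier transform of G(x, 0) with 1/(k_hat^2 + m^2).
k = 2 * np.pi * np.fft.fftfreq(N)
G_k_direct  = np.fft.fft(G[:, 0]).real
G_k_formula = 1.0 / (2 - 2 * np.cos(k) + m**2)
print(np.max(np.abs(G_k_direct - G_k_formula)))   # agreement at machine precision
```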
Wick theorem
Because each field mode is an independent Gaussian, the expectation values for the product of many field modes obeys Wick's theorem:
is zero unless the field modes coincide in pairs. This means that it is zero for an odd number of , and for an even number of , it is equal to a contribution from each pair separately, with a delta function.
where the sum is over each partition of the field modes into pairs, and the product is over the pairs. For example,
An interpretation of Wick's theorem is that each field insertion can be thought of as a dangling line, and the expectation value is calculated by linking up the lines in pairs, putting a delta function factor that ensures that the momentum of each partner in the pair is equal, and dividing by the propagator.
Higher Gaussian moments — completing Wick's theorem
There is a subtle point left before Wick's theorem is proved—what if more than two of the s have the same momentum? If it's an odd number, the integral is zero; negative values cancel with the positive values. But if the number is even, the integral is positive. The previous demonstration assumed that the s would only match up in pairs.
But the theorem is correct even when arbitrarily many of the are equal, and this is a notable property of Gaussian integration:
Dividing by ,
If Wick's theorem were correct, the higher moments would be given by all possible pairings of a list of different :
where the are all the same variable, the index is just to keep track of the number of ways to pair them. The first can be paired with others, leaving . The next unpaired can be paired with different leaving , and so on. This means that Wick's theorem, uncorrected, says that the expectation value of should be:
and this is in fact the correct answer. So Wick's theorem holds no matter how many of the momenta of the internal variables coincide.
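The double-factorial counting can be checked numerically for a single Gaussian variable; the sketch below (with an illustrative sample size) compares Monte Carlo moments against the (2n − 1)!! pairing count.

```python
import numpy as np

def double_factorial(n):
    """(2n-1)!! style double factorial: n * (n-2) * (n-4) * ... * 1."""
    return 1 if n <= 0 else n * double_factorial(n - 2)

rng = np.random.default_rng(1)
x = rng.standard_normal(2_000_000)          # one Gaussian variable, unit variance
for n in (1, 2, 3, 4):
    mc = np.mean(x ** (2 * n))              # Monte Carlo estimate of the 2n-th moment
    exact = double_factorial(2 * n - 1)     # Wick counting of pairings
    print(f"2n={2*n}: Monte Carlo {mc:9.3f}   Wick {exact}")
```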
Interaction
Interactions are represented by higher order contributions, since quadratic contributions are always Gaussian. The simplest interaction is the quartic self-interaction, with an action:
The reason for the combinatorial factor 4! will be clear soon. Writing the action in terms of the lattice (or continuum) Fourier modes:
Where is the free action, whose correlation functions are given by Wick's theorem. The exponential of in the path integral can be expanded in powers of , giving a series of corrections to the free action.
The path integral for the interacting action is then a power series of corrections to the free action. The term represented by should be thought of as four half-lines, one for each factor of . The half-lines meet at a vertex, which contributes a delta-function that ensures that the sum of the momenta are all equal.
To compute a correlation function in the interacting theory, there is a contribution from the terms now. For example, the path-integral for the four-field correlator:
which in the free field was only nonzero when the momenta were equal in pairs, is now nonzero for all values of . The momenta of the insertions can now match up with the momenta of the s in the expansion. The insertions should also be thought of as half-lines, four in this case, which carry a momentum , but one that is not integrated.
The lowest-order contribution comes from the first nontrivial term in the Taylor expansion of the action. Wick's theorem requires that the momenta in the half-lines, the factors in , should match up with the momenta of the external half-lines in pairs. The new contribution is equal to:
The 4! inside is canceled because there are exactly 4! ways to match the half-lines in to the external half-lines. Each of these different ways of matching the half-lines together in pairs contributes exactly once, regardless of the values of , by Wick's theorem.
Feynman diagrams
The expansion of the action in powers of gives a series of terms with progressively higher number of s. The contribution from the term with exactly s is called th order.
The th-order term has:
internal half-lines, which are the factors of from the s. These all end on a vertex, and are integrated over all possible .
external half-lines, which come from the insertions in the integral.
By Wick's theorem, each pair of half-lines must be paired together to make a line, and this line gives a factor of
which multiplies the contribution. This means that the two half-lines that make a line are forced to have equal and opposite momentum. The line itself should be labelled by an arrow, drawn parallel to the line, and labeled by the momentum in the line . The half-line at the tail end of the arrow carries momentum , while the half-line at the head-end carries momentum . If one of the two half-lines is external, this kills the integral over the internal , since it forces the internal to be equal to the external . If both are internal, the integral over remains.
The diagrams that are formed by linking the half-lines in the s with the external half-lines, representing insertions, are the Feynman diagrams of this theory. Each line carries a factor of , the propagator, and either goes from vertex to vertex, or ends at an insertion. If it is internal, it is integrated over. At each vertex, the total incoming is equal to the total outgoing .
The number of ways of making a diagram by joining half-lines into lines almost completely cancels the factorial factors coming from the Taylor series of the exponential and the 4! at each vertex.
Loop order
A forest diagram is one where all the internal lines have momentum that is completely determined by the external lines and the condition that the incoming and outgoing momentum are equal at each vertex. The contribution of these diagrams is a product of propagators, without any integration. A tree diagram is a connected forest diagram.
An example of a tree diagram is the one where each of four external lines end on an . Another is when three external lines end on an , and the remaining half-line joins up with another , and the remaining half-lines of this run off to external lines. These are all also forest diagrams (as every tree is a forest); an example of a forest that is not a tree is when eight external lines end on two s.
It is easy to verify that in all these cases, the momenta on all the internal lines is determined by the external momenta and the condition of momentum conservation in each vertex.
A diagram that is not a forest diagram is called a loop diagram, and an example is one where two lines of an are joined to external lines, while the remaining two lines are joined to each other. The two lines joined to each other can have any momentum at all, since they both enter and leave the same vertex. A more complicated example is one where two s are joined to each other by matching the legs one to the other. This diagram has no external lines at all.
The reason loop diagrams are called loop diagrams is that the number of -integrals that are left undetermined by momentum conservation is equal to the number of independent closed loops in the diagram, where independent loops are counted as in homology theory. The homology is real-valued (actually valued); the value associated with each line is the momentum. The boundary operator takes each line to the sum of the end-vertices with a positive sign at the head and a negative sign at the tail. The condition that the momentum is conserved is exactly the condition that the boundary of the -valued weighted graph is zero.
A set of valid -values can be arbitrarily redefined whenever there is a closed loop. A closed loop is a cyclical path of adjacent vertices that never revisits the same vertex. Such a cycle can be thought of as the boundary of a hypothetical 2-cell. The -labellings of a graph that conserve momentum (i.e. which have zero boundary) up to redefinitions of (i.e. up to boundaries of 2-cells) define the first homology of a graph. The number of independent momenta that are not determined is then equal to the number of independent homology loops. For many graphs, this is equal to the number of loops as counted in the most intuitive way.
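For a connected diagram this count is the first Betti number of the graph: the number of internal lines minus the number of vertices plus the number of connected components. The following sketch (an added illustration, not part of the original text) computes it for a few small diagrams.

```python
def loop_number(n_vertices, internal_lines):
    """Number of undetermined momentum integrals: lines - vertices + components.
    internal_lines is a list of (vertex_a, vertex_b) pairs."""
    parent = list(range(n_vertices))
    def find(v):                       # simple union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in internal_lines:
        parent[find(a)] = find(b)
    components = len({find(v) for v in range(n_vertices)})
    return len(internal_lines) - n_vertices + components

# One quartic vertex with two legs joined to each other: one loop.
print(loop_number(1, [(0, 0)]))          # 1
# Two vertices joined by two internal lines: still one loop.
print(loop_number(2, [(0, 1), (0, 1)]))  # 1
# Two vertices joined by all four legs (the vacuum bubble above): three loops.
print(loop_number(2, [(0, 1)] * 4))      # 3
```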
Symmetry factors
The number of ways to form a given Feynman diagram by joining half-lines is large, and by Wick's theorem, each way of pairing up the half-lines contributes equally. Often, this completely cancels the factorials in the denominator of each term, but the cancellation is sometimes incomplete.
The uncancelled denominator is called the symmetry factor of the diagram. The contribution of each diagram to the correlation function must be divided by its symmetry factor.
For example, consider the Feynman diagram formed from two external lines joined to one , and the remaining two half-lines in the joined to each other. There are 4 × 3 ways to join the external half-lines to the , and then there is only one way to join the two remaining lines to each other. The comes divided by , but the number of ways to link up the half lines to make the diagram is only 4 × 3, so the contribution of this diagram is divided by two.
For another example, consider the diagram formed by joining all the half-lines of one to all the half-lines of another . This diagram is called a vacuum bubble, because it does not link up to any external lines. There are 4! ways to form this diagram, but the denominator includes a 2! (from the expansion of the exponential, there are two s) and two factors of 4!. The contribution is multiplied by = .
Another example is the Feynman diagram formed from two s where each links up to two external lines, and the remaining two half-lines of each are joined to each other. The number of ways to link an to two external lines is 4 × 3, and either could link up to either pair, giving an additional factor of 2. The remaining two half-lines in the two s can be linked to each other in two ways, so that the total number of ways to form the diagram is , while the denominator is . The total symmetry factor is 2, and the contribution of this diagram is divided by 2.
The symmetry factor theorem gives the symmetry factor for a general diagram: the contribution of each Feynman diagram must be divided by the order of its group of automorphisms, the number of symmetries that it has.
An automorphism of a Feynman graph is a permutation of the lines and a permutation of the vertices with the following properties:
If a line goes from vertex to vertex , then goes from to . If the line is undirected, as it is for a real scalar field, then can go from to too.
If a line ends on an external line, ends on the same external line.
If there are different types of lines, should preserve the type.
This theorem has an interpretation in terms of particle-paths: when identical particles are present, the integral over all intermediate particles must not double-count states that differ only by interchanging identical particles.
Proof: To prove this theorem, label all the internal and external lines of a diagram with a unique name. Then form the diagram by linking a half-line to a name and then to the other half line.
Now count the number of ways to form the named diagram. Each permutation of the s gives a different pattern of linking names to half-lines, and this is a factor of . Each permutation of the half-lines in a single gives a factor of 4!. So a named diagram can be formed in exactly as many ways as the denominator of the Feynman expansion.
But the number of unnamed diagrams is smaller than the number of named diagrams by the order of the automorphism group of the graph.
Connected diagrams: linked-cluster theorem
Roughly speaking, a Feynman diagram is called connected if all vertices and propagator lines are linked by a sequence of vertices and propagators of the diagram itself. If one views it as an undirected graph it is connected. The remarkable relevance of such diagrams in QFTs is due to the fact that they are sufficient to determine the quantum partition function . More precisely, connected Feynman diagrams determine
To see this, one should recall that
with constructed from some (arbitrary) Feynman diagram that can be thought to consist of several connected components . If one encounters (identical) copies of a component within the Feynman diagram one has to include a symmetry factor . However, in the end each contribution of a Feynman diagram to the partition function has the generic form
where labels the (infinitely) many connected Feynman diagrams possible.
A scheme to successively create such contributions from the to is obtained by
and therefore yields
To establish the normalization one simply calculates all connected vacuum diagrams, i.e., the diagrams without any sources (sometimes referred to as external legs of a Feynman diagram).
The linked-cluster theorem was first proved to order four by Keith Brueckner in 1955, and for infinite orders by Jeffrey Goldstone in 1957.
Vacuum bubbles
An immediate consequence of the linked-cluster theorem is that all vacuum bubbles, diagrams without external lines, cancel when calculating correlation functions. A correlation function is given by a ratio of path-integrals:
The top is the sum over all Feynman diagrams, including disconnected diagrams that do not link up to external lines at all. In terms of the connected diagrams, the numerator includes the same contributions of vacuum bubbles as the denominator:
Where the sum over diagrams includes only those diagrams each of whose connected components end on at least one external line. The vacuum bubbles are the same whatever the external lines, and give an overall multiplicative factor. The denominator is the sum over all vacuum bubbles, and dividing gets rid of the second factor.
The vacuum bubbles then are only useful for determining itself, which from the definition of the path integral is equal to:
where is the energy density in the vacuum. Each vacuum bubble contains a factor of zeroing the total at each vertex, and when there are no external lines, this contains a factor of , because the momentum conservation is over-enforced. In finite volume, this factor can be identified as the total volume of space time. Dividing by the volume, the remaining integral for the vacuum bubble has an interpretation: it is a contribution to the energy density of the vacuum.
Sources
Correlation functions are the sum of the connected Feynman diagrams, but the formalism treats the connected and disconnected diagrams differently. Internal lines end on vertices, while external lines go off to insertions. Introducing sources unifies the formalism, by making new vertices where one line can end.
Sources are external fields, fields that contribute to the action, but are not dynamical variables. A scalar field source is another scalar field that contributes a term to the (Lorentz) Lagrangian:
In the Feynman expansion, this contributes H terms with one half-line ending on a vertex. Lines in a Feynman diagram can now end either on an vertex, or on an vertex, and only one line enters an vertex. The Feynman rule for an vertex is that a line from an with momentum gets a factor of .
The sum of the connected diagrams in the presence of sources includes a term for each connected diagram in the absence of sources, except now the diagrams can end on the source. Traditionally, a source is represented by a little "×" with one line extending out, exactly as an insertion.
where is the connected diagram with external lines carrying momentum as indicated. The sum is over all connected diagrams, as before.
The field is not dynamical, which means that there is no path integral over : is just a parameter in the Lagrangian, which varies from point to point. The path integral for the field is:
and it is a function of the values of at every point. One way to interpret this expression is that it is taking the Fourier transform in field space. If there is a probability density on , the Fourier transform of the probability density is:
The Fourier transform is the expectation of an oscillatory exponential. The path integral in the presence of a source is:
which, on a lattice, is the product of an oscillatory exponential for each field value:
The Fourier transform of a delta-function is a constant, which gives a formal expression for a delta function:
This tells you what a field delta function looks like in a path-integral. For two scalar fields and ,
which integrates over the Fourier transform coordinate, over . This expression is useful for formally changing field coordinates in the path integral, much as a delta function is used to change coordinates in an ordinary multi-dimensional integral.
The partition function is now a function of the field , and the physical partition function is the value when is the zero function:
The correlation functions are derivatives of the path integral with respect to the source:
In Euclidean space, source contributions to the action can still appear with a factor of , so that they still do a Fourier transform.
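As a one-mode illustration of the statement that correlation functions are source derivatives of the partition function (a sketch assuming Euclidean conventions, not the full functional derivative), the Gaussian partition function for a single mode with source J is proportional to exp(J^2 / (2(k^2 + m^2))), and two derivatives at J = 0 return the propagator:

```python
import sympy as sp

J, k, m = sp.symbols('J k m', positive=True)
A = k**2 + m**2
Z = sp.exp(J**2 / (2 * A))                  # Gaussian result, up to a J-independent factor
two_point = sp.diff(Z, J, 2).subs(J, 0)     # second derivative with respect to the source
print(sp.simplify(two_point))               # 1/(k**2 + m**2), the propagator
```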
Spin 1/2; "photons" and "ghosts"
Spin 1/2: Grassmann integrals
The field path integral can be extended to the Fermi case, but only if the notion of integration is expanded. A Grassmann integral of a free Fermi field is a high-dimensional determinant or Pfaffian, which defines the new type of Gaussian integration appropriate for Fermi fields.
The two fundamental formulas of Grassmann integration are:
where is an arbitrary matrix and are independent Grassmann variables for each index , and
where is an antisymmetric matrix, is a collection of Grassmann variables, and the is to prevent double-counting (since ).
In matrix notation, where and are Grassmann-valued row vectors, and are Grassmann-valued column vectors, and is a real-valued matrix:
where the last equality is a consequence of the translation invariance of the Grassmann integral. The Grassmann variables are external sources for , and differentiating with respect to pulls down factors of .
again, in a schematic matrix notation. The meaning of the formula above is that the derivative with respect to the appropriate component of and gives the matrix element of . This is exactly analogous to the bosonic path integration formula for a Gaussian integral of a complex bosonic field:
So that the propagator is the inverse of the matrix in the quadratic part of the action in both the Bose and Fermi case.
For real Grassmann fields, for Majorana fermions, the path integral is a Pfaffian times a source quadratic form, and the formulas give the square root of the determinant, just as they do for real Bosonic fields. The propagator is still the inverse of the quadratic part.
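The "square root of the determinant" statement can be illustrated numerically: for a real antisymmetric matrix the Pfaffian squares to the determinant. The sketch below uses the explicit 4 × 4 Pfaffian formula; the matrix is randomly generated and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
A = B - B.T                      # a random real antisymmetric matrix

# Explicit Pfaffian of a 4x4 antisymmetric matrix.
pf = A[0, 1] * A[2, 3] - A[0, 2] * A[1, 3] + A[0, 3] * A[1, 2]
print(pf**2, np.linalg.det(A))   # the two numbers agree up to rounding
```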
The free Dirac Lagrangian:
formally gives the equations of motion and the anticommutation relations of the Dirac field, just as the Klein Gordon Lagrangian in an ordinary path integral gives the equations of motion and commutation relations of the scalar field. By using the spatial Fourier transform of the Dirac field as a new basis for the Grassmann algebra, the quadratic part of the Dirac action becomes simple to invert:
The propagator is the inverse of the matrix linking and , since different values of do not mix together.
The analog of Wick's theorem matches and in pairs:
where S is the sign of the permutation that reorders the sequence of and to put the ones that are paired up to make the delta-functions next to each other, with the coming right before the . Since a pair is a commuting element of the Grassmann algebra, it does not matter what order the pairs are in. If more than one pair have the same , the integral is zero, and it is easy to check that the sum over pairings gives zero in this case (there are always an even number of them). This is the Grassmann analog of the higher Gaussian moments that completed the Bosonic Wick's theorem earlier.
The rules for spin-1/2 Dirac particles are as follows: The propagator is the inverse of the Dirac operator, the lines have arrows just as for a complex scalar field, and the diagram acquires an overall factor of −1 for each closed Fermi loop. If there are an odd number of Fermi loops, the diagram changes sign. Historically, the −1 rule was very difficult for Feynman to discover. He discovered it after a long process of trial and error, since he lacked a proper theory of Grassmann integration.
The rule follows from the observation that the number of Fermi lines at a vertex is always even. Each term in the Lagrangian must always be Bosonic. A Fermi loop is counted by following Fermionic lines until one comes back to the starting point, then removing those lines from the diagram. Repeating this process eventually erases all the Fermionic lines: this is the Euler algorithm to 2-color a graph, which works whenever each vertex has even degree. The number of steps in the Euler algorithm is only equal to the number of independent Fermionic homology cycles in the common special case that all terms in the Lagrangian are exactly quadratic in the Fermi fields, so that each vertex has exactly two Fermionic lines. When there are four-Fermi interactions (like in the Fermi effective theory of the weak nuclear interactions) there are more -integrals than Fermi loops. In this case, the counting rule should apply the Euler algorithm by pairing up the Fermi lines at each vertex into pairs that together form a bosonic factor of the term in the Lagrangian, and when entering a vertex by one line, the algorithm should always leave with the partner line.
To clarify and prove the rule, consider a Feynman diagram formed from vertices, terms in the Lagrangian, with Fermion fields. The full term is Bosonic, it is a commuting element of the Grassmann algebra, so the order in which the vertices appear is not important. The Fermi lines are linked into loops, and when traversing the loop, one can reorder the vertex terms one after the other as one goes around without any sign cost. The exception is when you return to the starting point, and the final half-line must be joined with the unlinked first half-line. This requires one permutation to move the last to go in front of the first , and this gives the sign.
This rule is the only visible effect of the exclusion principle in internal lines. When there are external lines, the amplitudes are antisymmetric when two Fermi insertions for identical particles are interchanged. This is automatic in the source formalism, because the sources for Fermi fields are themselves Grassmann valued.
Spin 1: photons
The naive propagator for photons is infinite, since the Lagrangian for the A-field is:
The quadratic form defining the propagator is non-invertible. The reason is the gauge invariance of the field; adding a gradient to does not change the physics.
To fix this problem, one needs to fix a gauge. The most convenient way is to demand that the divergence of is some function , whose value is random from point to point. It does no harm to integrate over the values of , since it only determines the choice of gauge. This procedure inserts the following factor into the path integral for :
The first factor, the delta function, fixes the gauge. The second factor sums over different values of that are inequivalent gauge fixings. This is simply
The additional contribution from gauge-fixing cancels the second half of the free Lagrangian, giving the Feynman Lagrangian:
which is just like four independent free scalar fields, one for each component of . The Feynman propagator is:
The one difference is that the sign of one propagator is wrong in the Lorentz case: the timelike component has an opposite sign propagator. This means that these particle states have negative norm—they are not physical states. In the case of photons, it is easy to show by diagram methods that these states are not physical—their contribution cancels with longitudinal photons to only leave two physical photon polarization contributions for any value of .
If the averaging over is done with a coefficient different from , the two terms do not cancel completely. This gives a covariant Lagrangian with a coefficient , which does not affect anything:
and the covariant propagator for QED is:
Spin 1: non-Abelian ghosts
To find the Feynman rules for non-Abelian gauge fields, the procedure that performs the gauge fixing must be carefully corrected to account for a change of variables in the path-integral.
The gauge fixing factor has an extra determinant from popping the delta function:
To find the form of the determinant, consider first a simple two-dimensional integral of a function that depends only on , not on the angle . Inserting an integral over :
The derivative-factor ensures that popping the delta function in removes the integral. Exchanging the order of integration,
but now the delta-function can be popped in ,
The integral over just gives an overall factor of 2, while the rate of change of with a change in is just , so this exercise reproduces the standard formula for polar integration of a radial function:
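The two-dimensional identity that this toy manipulation reproduces can be checked numerically; the sketch below uses a Gaussian as an arbitrary test function (an illustrative choice, not from the text).

```python
import numpy as np
from scipy import integrate

f = lambda r: np.exp(-r**2)      # any rapidly decaying radial function will do

# Integral of f(sqrt(x^2 + y^2)) over the plane (box large enough for the tails).
plane, _ = integrate.dblquad(lambda y, x: f(np.hypot(x, y)),
                             -10, 10, lambda x: -10, lambda x: 10)
# 2*pi times the radial integral of f(r) r dr.
radial, _ = integrate.quad(lambda r: f(r) * r, 0, 10)
print(plane, 2 * np.pi * radial)   # both are approximately pi
```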
In the path-integral for a nonabelian gauge field, the analogous manipulation is:
The factor in front is the volume of the gauge group, and it contributes a constant, which can be discarded. The remaining integral is over the gauge fixed action.
To get a covariant gauge, the gauge fixing condition is the same as in the Abelian case:
Whose variation under an infinitesimal gauge transformation is given by:
where is the adjoint valued element of the Lie algebra at every point that performs the infinitesimal gauge transformation. This adds the Faddeev Popov determinant to the action:
which can be rewritten as a Grassmann integral by introducing ghost fields:
The determinant is independent of , so the path-integral over can give the Feynman propagator (or a covariant propagator) by choosing the measure for as in the abelian case. The full gauge fixed action is then the Yang Mills action in Feynman gauge with an additional ghost action:
The diagrams are derived from this action. The propagator for the spin-1 fields has the usual Feynman form. There are vertices of degree 3 with momentum factors whose couplings are the structure constants, and vertices of degree 4 whose couplings are products of structure constants. There are additional ghost loops, which cancel out timelike and longitudinal states in loops.
In the Abelian case, the determinant for covariant gauges does not depend on , so the ghosts do not contribute to the connected diagrams.
Particle-path representation
Feynman diagrams were originally discovered by Feynman, by trial and error, as a way to represent the contribution to the S-matrix from different classes of particle trajectories.
Schwinger representation
The Euclidean scalar propagator has a suggestive representation:
The meaning of this identity (which is an elementary integration) is made clearer by Fourier transforming to real space.
The contribution at any one value of to the propagator is a Gaussian of width . The total propagation function from 0 to is a weighted sum over all proper times of a normalized Gaussian, the probability of ending up at after a random walk of time .
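The elementary identity behind this representation, 1/(k^2 + m^2) as an integral over proper time of exp(-tau (k^2 + m^2)), is easy to verify numerically; the values of k^2 and m below are illustrative.

```python
import numpy as np
from scipy import integrate

k2, m = 1.7, 0.4
val, _ = integrate.quad(lambda tau: np.exp(-tau * (k2 + m**2)), 0, np.inf)
print(val, 1.0 / (k2 + m**2))   # the proper-time integral reproduces the propagator
```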
The path-integral representation for the propagator is then:
which is a path-integral rewrite of the Schwinger representation.
The Schwinger representation is both useful for making manifest the particle aspect of the propagator, and for symmetrizing denominators of loop diagrams.
Combining denominators
The Schwinger representation has an immediate practical application to loop diagrams. For example, for the diagram in the theory formed by joining two s together in two half-lines, and making the remaining lines external, the integral over the internal propagators in the loop is:
Here one line carries momentum and the other . The asymmetry can be fixed by putting everything in the Schwinger representation.
Now the exponent mostly depends on ,
except for the asymmetrical little bit. Defining the variable and , the variable goes from 0 to , while goes from 0 to 1. The variable is the total proper time for the loop, while parametrizes the fraction of the proper time on the top of the loop versus the bottom.
The Jacobian for this transformation of variables is easy to work out from the identities:
and "wedging" gives
.
This allows the integral to be evaluated explicitly:
leaving only the -integral. This method, invented by Schwinger but usually attributed to Feynman, is called combining denominators. Abstractly, it is the elementary identity:
But this form does not provide the physical motivation for introducing ; is the proportion of proper time on one of the legs of the loop.
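The elementary two-denominator identity quoted above, 1/(AB) equal to the integral over x from 0 to 1 of 1/(xA + (1 − x)B)^2, can be checked numerically for any positive A and B; the values below are illustrative.

```python
from scipy import integrate

A, B = 2.3, 0.7
val, _ = integrate.quad(lambda x: 1.0 / (x * A + (1 - x) * B)**2, 0, 1)
print(val, 1.0 / (A * B))   # both are approximately 0.6211
```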
Once the denominators are combined, a shift in to symmetrizes everything:
This form shows that the moment that is more negative than four times the squared mass of the particle in the loop, which happens in a physical region of Lorentz space, the integral has a cut. This is exactly when the external momentum can create physical particles.
When the loop has more vertices, there are more denominators to combine:
The general rule follows from the Schwinger prescription for denominators:
The integral over the Schwinger parameters can be split up as before into an integral over the total proper time and an integral over the fraction of the proper time in all but the first segment of the loop for . The are positive and add up to less than 1, so that the integral is over an -dimensional simplex.
The Jacobian for the coordinate transformation can be worked out as before:
Wedging all these equations together, one obtains
This gives the integral:
where the simplex is the region defined by the conditions
as well as
Performing the integral gives the general prescription for combining denominators:
Since the numerator of the integrand is not involved, the same prescription works for any loop, no matter what the spins are carried by the legs. The interpretation of the parameters is that they are the fraction of the total proper time spent on each leg.
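The general prescription can likewise be checked numerically. The sketch below does the three-denominator case: 1/(A1 A2 A3) equals 2! times the integral, over the simplex x1, x2 ≥ 0 with x1 + x2 ≤ 1, of the combined denominator raised to the third power; the values are illustrative.

```python
from math import factorial
from scipy import integrate

A1, A2, A3 = 1.3, 2.1, 0.8
integrand = lambda x2, x1: 1.0 / (x1 * A1 + x2 * A2 + (1 - x1 - x2) * A3)**3
simplex, _ = integrate.dblquad(integrand, 0, 1, lambda x1: 0, lambda x1: 1 - x1)
print(factorial(2) * simplex, 1.0 / (A1 * A2 * A3))   # both are approximately 0.458
```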
Scattering
The correlation functions of a quantum field theory describe the scattering of particles. The definition of "particle" in relativistic field theory is not self-evident, because if you try to determine the position so that the uncertainty is less than the Compton wavelength, the uncertainty in energy is large enough to produce more particles and antiparticles of the same type from the vacuum. This means that the notion of a single-particle state is to some extent incompatible with the notion of an object localized in space.
In the 1930s, Wigner gave a mathematical definition for single-particle states: they are a collection of states that form an irreducible representation of the Poincaré group. Single particle states describe an object with a finite mass, a well defined momentum, and a spin. This definition is fine for protons and neutrons, electrons and photons, but it excludes quarks, which are permanently confined, so the modern point of view is more accommodating: a particle is anything whose interaction can be described in terms of Feynman diagrams, which have an interpretation as a sum over particle trajectories.
A field operator can act to produce a one-particle state from the vacuum, which means that the field operator produces a superposition of Wigner particle states. In the free field theory, the field produces one particle states only. But when there are interactions, the field operator can also produce 3-particle, 5-particle (if there is no +/− symmetry also 2, 4, 6 particle) states too. To compute the scattering amplitude for single particle states only requires a careful limit, sending the fields to infinity and integrating over space to get rid of the higher-order corrections.
The relation between scattering and correlation functions is the LSZ-theorem: The scattering amplitude for particles to go to particles in a scattering event is given by the sum of the Feynman diagrams that go into the correlation function for field insertions, leaving out the propagators for the external legs.
For example, for the interaction of the previous section, the order contribution to the (Lorentz) correlation function is:
Stripping off the external propagators, that is, removing the factors of , gives the invariant scattering amplitude :
which is a constant, independent of the incoming and outgoing momentum. The interpretation of the scattering amplitude is that the sum of over all possible final states is the probability for the scattering event. The normalization of the single-particle states must be chosen carefully, however, to ensure that is a relativistic invariant.
Non-relativistic single particle states are labeled by the momentum , and they are chosen to have the same norm at every value of . This is because the nonrelativistic unit operator on single particle states is:
In relativity, the integral over the -states for a particle of mass m integrates over a hyperbola in space defined by the energy–momentum relation:
If the integral weighs each point equally, the measure is not Lorentz-invariant. The invariant measure integrates over all values of and , restricting to the hyperbola with a Lorentz-invariant delta function:
So the normalized -states are different from the relativistically normalized -states by a factor of
The invariant amplitude is then the probability amplitude for relativistically normalized incoming states to become relativistically normalized outgoing states.
For nonrelativistic values of , the relativistic normalization is the same as the nonrelativistic normalization (up to a constant factor ). In this limit, the invariant scattering amplitude is still constant. The particles created by the field scatter in all directions with equal amplitude.
The nonrelativistic potential, which scatters in all directions with an equal amplitude (in the Born approximation), is one whose Fourier transform is constant—a delta-function potential. The lowest order scattering of the theory reveals the non-relativistic interpretation of this theory—it describes a collection of particles with a delta-function repulsion. Two such particles have an aversion to occupying the same point at the same time.
Nonperturbative effects
Thinking of Feynman diagrams as a perturbation series, nonperturbative effects like tunneling do not show up, because any effect that goes to zero faster than any polynomial does not affect the Taylor series. Even bound states are absent, since at any finite order particles are only exchanged a finite number of times, and to make a bound state, the binding force must last forever.
But this point of view is misleading, because the diagrams not only describe scattering, but they also are a representation of the short-distance field theory correlations. They encode not only asymptotic processes like particle scattering, they also describe the multiplication rules for fields, the operator product expansion. Nonperturbative tunneling processes involve field configurations that on average get big when the coupling constant gets small, but each configuration is a coherent superposition of particles whose local interactions are described by Feynman diagrams. When the coupling is small, these become collective processes that involve large numbers of particles, but where the interactions between each of the particles is simple. (The perturbation series of any interacting quantum field theory has zero radius of convergence, complicating the limit of the infinite series of diagrams needed (in the limit of vanishing coupling) to describe such field configurations.)
This means that nonperturbative effects show up asymptotically in resummations of infinite classes of diagrams, and these diagrams can be locally simple. The graphs determine the local equations of motion, while the allowed large-scale configurations describe non-perturbative physics. But because Feynman propagators are nonlocal in time, translating a field process to a coherent particle language is not completely intuitive, and has only been explicitly worked out in certain special cases. In the case of nonrelativistic bound states, the Bethe–Salpeter equation describes the class of diagrams to include to describe a relativistic atom. For quantum chromodynamics, the Shifman–Vainshtein–Zakharov sum rules describe non-perturbatively excited long-wavelength field modes in particle language, but only in a phenomenological way.
The number of Feynman diagrams at high orders of perturbation theory is very large, because there are as many diagrams as there are graphs with a given number of nodes. Nonperturbative effects leave a signature on the way in which the number of diagrams and resummations diverge at high order. It is only because non-perturbative effects appear in hidden form in diagrams that it was possible to analyze nonperturbative effects in string theory, where in many cases a Feynman description is the only one available.
In popular culture
The use of the above diagram of the virtual particle producing a quark–antiquark pair was featured in the television sit-com The Big Bang Theory, in the episode "The Bat Jar Conjecture".
PhD Comics of January 11, 2012, shows Feynman diagrams that visualize and describe quantum academic interactions, i.e. the paths followed by Ph.D. students when interacting with their advisors.
Vacuum Diagrams, a science fiction story by Stephen Baxter, features the titular vacuum diagram, a specific type of Feynman diagram.
Feynman and his wife, Gweneth Howarth, bought a Dodge Tradesman Maxivan in 1975, and had it painted with Feynman diagrams. The van is currently owned by video game designer and physicist Seamus Blackley. Qantum was the license plate ID.
| Physical sciences | Particle physics: General | Physics |
11634 | https://en.wikipedia.org/wiki/Field%20extension | Field extension | In mathematics, particularly in algebra, a field extension is a pair of fields , such that the operations of K are those of L restricted to K. In this case, L is an extension field of K and K is a subfield of L. For example, under the usual notions of addition and multiplication, the complex numbers are an extension field of the real numbers; the real numbers are a subfield of the complex numbers.
Field extensions are fundamental in algebraic number theory, and in the study of polynomial roots through Galois theory, and are widely used in algebraic geometry.
Subfield
A subfield of a field is a subset that is a field with respect to the field operations inherited from . Equivalently, a subfield is a subset that contains the multiplicative identity , and is closed under the operations of addition, subtraction, multiplication, and taking the inverse of a nonzero element of .
As , the latter definition implies and have the same zero element.
For example, the field of rational numbers is a subfield of the real numbers, which is itself a subfield of the complex numbers. More generally, the field of rational numbers is (or is isomorphic to) a subfield of any field of characteristic .
The characteristic of a subfield is the same as the characteristic of the larger field.
Extension field
If is a subfield of , then is an extension field or simply extension of , and this pair of fields is a field extension. Such a field extension is denoted (read as " over ").
If is an extension of , which is in turn an extension of , then is said to be an intermediate field (or intermediate extension or subextension) of .
Given a field extension , the larger field is a -vector space. The dimension of this vector space is called the degree of the extension and is denoted by .
The degree of an extension is 1 if and only if the two fields are equal. In this case, the extension is a trivial extension. Extensions of degree 2 and 3 are called quadratic extensions and cubic extensions, respectively. A finite extension is an extension that has a finite degree.
Given two extensions and , the extension is finite if and only if both and are finite. In this case, one has
Given a field extension and a subset of , there is a smallest subfield of that contains and . It is the intersection of all subfields of that contain and , and is denoted by (read as " "). One says that is the field generated by over , and that is a generating set of over . When is finite, one writes instead of and one says that is finitely generated over . If consists of a single element , the extension is called a simple extension and is called a primitive element of the extension.
An extension field of the form is often said to result from the adjunction of to .
In characteristic 0, every finite extension is a simple extension. This is the primitive element theorem, which does not hold true for fields of non-zero characteristic.
If a simple extension is not finite, the field is isomorphic to the field of rational fractions in over .
Caveats
The notation L / K is purely formal and does not imply the formation of a quotient ring or quotient group or any other kind of division. Instead the slash expresses the word "over". In some literature the notation L:K is used.
It is often desirable to talk about field extensions in situations where the small field is not actually contained in the larger one, but is naturally embedded. For this purpose, one abstractly defines a field extension as an injective ring homomorphism between two fields.
Every non-zero ring homomorphism between fields is injective because fields do not possess nontrivial proper ideals, so field extensions are precisely the morphisms in the category of fields.
Henceforth, we will suppress the injective homomorphism and assume that we are dealing with actual subfields.
Examples
The field of complex numbers is an extension field of the field of real numbers , and in turn is an extension field of the field of rational numbers . Clearly then, is also a field extension. The degree of the complex numbers over the reals is 2, because {1, i} is a basis, so that extension is finite; it is also a simple extension, since the complex numbers are generated by adjoining i to the reals. The degree of the reals over the rationals is the cardinality of the continuum, so that extension is infinite.
The field
is an extension field of also clearly a simple extension. The degree is 2 because can serve as a basis.
The field
is an extension field of both and of degree 2 and 4 respectively. It is also a simple extension, as one can show that
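A quick illustration with SymPy (an added sketch, not part of the original text) of the degree-4 and simple-extension claims: the minimal polynomial of sqrt(2) + sqrt(3) over the rationals already has degree 4, so the simple extension generated by this single element has degree 4.

```python
from sympy import sqrt, minimal_polynomial, degree, Symbol

x = Symbol('x')
p = minimal_polynomial(sqrt(2) + sqrt(3), x)
print(p)              # x**4 - 10*x**2 + 1
print(degree(p, x))   # 4, the degree of the extension generated by sqrt(2) + sqrt(3)
```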
Finite extensions of are also called algebraic number fields and are important in number theory. Another extension field of the rationals, which is also important in number theory, although not a finite extension, is the field of p-adic numbers for a prime number p.
It is common to construct an extension field of a given field K as a quotient ring of the polynomial ring K[X] in order to "create" a root for a given polynomial f(X). Suppose for instance that K does not contain any element x with x^2 = −1. Then the polynomial X^2 + 1 is irreducible in K[X]; consequently the ideal generated by this polynomial is maximal, and the quotient K[X]/(X^2 + 1) is an extension field of K which does contain an element whose square is −1 (namely the residue class of X).
By iterating the above construction, one can construct a splitting field of any polynomial from K[X]. This is an extension field L of K in which the given polynomial splits into a product of linear factors.
If p is any prime number and n is a positive integer, there is a unique (up to isomorphism) finite field with p^n elements; this is an extension field of the prime field with p elements.
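A hand-rolled sketch of the quotient construction described above (illustrative, not a library API): arithmetic in GF(2)[X] modulo the irreducible polynomial X^2 + X + 1 yields a field with 2^2 = 4 elements, and every nonzero element has a multiplicative inverse.

```python
# Elements of GF(2)[X]/(X^2 + X + 1) are represented as pairs (a, b) = a + b*X.
def add(p, q):
    return ((p[0] + q[0]) % 2, (p[1] + q[1]) % 2)

def mul(p, q):
    # (a + bX)(c + dX) = ac + (ad + bc)X + bd X^2, and X^2 = X + 1 modulo X^2 + X + 1
    a, b = p
    c, d = q
    const, lin, quad = a * c, a * d + b * c, b * d
    return ((const + quad) % 2, (lin + quad) % 2)

elements = [(0, 0), (1, 0), (0, 1), (1, 1)]
# Every nonzero element has an inverse, so the quotient ring is a field with 4 elements.
for p in elements[1:]:
    inverse = next(q for q in elements[1:] if mul(p, q) == (1, 0))
    print(p, "has inverse", inverse)
```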
Given a field K, we can consider the field K(X) of all rational functions in the variable X with coefficients in K; the elements of K(X) are fractions of two polynomials over K, and indeed K(X) is the field of fractions of the polynomial ring K[X]. This field of rational functions is an extension field of K. This extension is infinite.
Given a Riemann surface M, the set of all meromorphic functions defined on M is a field, denoted by It is a transcendental extension field of if we identify every complex number with the corresponding constant function defined on M. More generally, given an algebraic variety V over some field K, the function field K(V), consisting of the rational functions defined on V, is an extension field of K.
Algebraic extension
An element x of a field extension is algebraic over K if it is a root of a nonzero polynomial with coefficients in K. For example, is algebraic over the rational numbers, because it is a root of If an element x of L is algebraic over K, the monic polynomial of lowest degree that has x as a root is called the minimal polynomial of x. This minimal polynomial is irreducible over K.
An element s of L is algebraic over K if and only if the simple extension is a finite extension. In this case the degree of the extension equals the degree of the minimal polynomial, and a basis of the K-vector space K(s) consists of where d is the degree of the minimal polynomial.
The set of the elements of L that are algebraic over K forms a subextension, which is called the algebraic closure of K in L. This results from the preceding characterization: if s and t are algebraic, the extensions and are finite. Thus is also finite, as well as the subextensions , and (if ). It follows that , st and 1/s are all algebraic.
An algebraic extension is an extension such that every element of L is algebraic over K. Equivalently, an algebraic extension is an extension that is generated by algebraic elements. For example, is an algebraic extension of , because and are algebraic over
A simple extension is algebraic if and only if it is finite. This implies that an extension is algebraic if and only if it is the union of its finite subextensions, and that every finite extension is algebraic.
Every field K has an algebraic closure, which is up to an isomorphism the largest extension field of K which is algebraic over K, and also the smallest extension field such that every polynomial with coefficients in K has a root in it. For example, is an algebraic closure of , but not an algebraic closure of , as it is not algebraic over (for example is not algebraic over ).
Transcendental extension
Given a field extension , a subset S of L is called algebraically independent over K if no non-trivial polynomial relation with coefficients in K exists among the elements of S. The largest cardinality of an algebraically independent set is called the transcendence degree of L/K. It is always possible to find a set S, algebraically independent over K, such that L/K(S) is algebraic. Such a set S is called a transcendence basis of L/K. All transcendence bases have the same cardinality, equal to the transcendence degree of the extension. An extension is said to be purely transcendental if and only if there exists a transcendence basis S of such that L = K(S). Such an extension has the property that all elements of L except those of K are transcendental over K; however, there are extensions with this property which are not purely transcendental; a class of such extensions takes the form L/K where both L and K are algebraically closed.
If L/K is purely transcendental and S is a transcendence basis of the extension, it does not necessarily follow that L = K(S). Conversely, even when one knows a transcendence basis, it may be difficult to decide whether the extension is purely transcendental, and if it is so, it may be difficult to find a transcendence basis S such that L = K(S).
For example, consider the extension where is transcendental over and is a root of the equation Such an extension can be defined as in which and are the equivalence classes of and Obviously, the singleton set is transcendental over and the extension is algebraic; hence is a transcendence basis that does not generate the extension . Similarly, is a transcendence basis that does not generate the whole extension. However, the extension is purely transcendental since, if one sets one has and and thus generates the whole extension.
Purely transcendental extensions of an algebraically closed field occur as function fields of rational varieties. The problem of finding a rational parametrization of a rational variety is equivalent to the problem of finding a transcendence basis that generates the whole extension.
Normal, separable and Galois extensions
An algebraic extension is called normal if every irreducible polynomial in K[X] that has a root in L completely factors into linear factors over L. Every algebraic extension F/K admits a normal closure L, which is an extension field of F such that is normal and which is minimal with this property.
An algebraic extension is called separable if the minimal polynomial of every element of L over K is separable, i.e., has no repeated roots in an algebraic closure over K. A Galois extension is a field extension that is both normal and separable.
A consequence of the primitive element theorem states that every finite separable extension has a primitive element (i.e. is simple).
Given any field extension , we can consider its automorphism group , consisting of all field automorphisms α: L → L with α(x) = x for all x in K. When the extension is Galois this automorphism group is called the Galois group of the extension. Extensions whose Galois group is abelian are called abelian extensions.
For a given field extension , one is often interested in the intermediate fields F (subfields of L that contain K). The significance of Galois extensions and Galois groups is that they allow a complete description of the intermediate fields: there is a bijection between the intermediate fields and the subgroups of the Galois group, described by the fundamental theorem of Galois theory.
Generalizations
Field extensions can be generalized to ring extensions which consist of a ring and one of its subrings. A closer non-commutative analog is central simple algebras (CSAs) – ring extensions over a field, which are simple algebras (no non-trivial two-sided ideals, just as for a field) and where the center of the ring is exactly the field. For example, the only finite field extension of the real numbers is the complex numbers, while the quaternions are a central simple algebra over the reals, and all CSAs over the reals are Brauer equivalent to the reals or the quaternions. CSAs can be further generalized to Azumaya algebras, where the base field is replaced by a commutative local ring.
Extension of scalars
Given a field extension, one can "extend scalars" on associated algebraic objects. For example, given a real vector space, one can produce a complex vector space via complexification. In addition to vector spaces, one can perform extension of scalars for associative algebras defined over the field, such as polynomials or group algebras and the associated group representations. Extension of scalars of polynomials is often used implicitly, by just considering the coefficients as being elements of a larger field, but may also be considered more formally. Extension of scalars has numerous applications, as discussed in extension of scalars: applications.
| Mathematics | Abstract algebra | null |
11642 | https://en.wikipedia.org/wiki/General%20Dynamics%20F-16%20Fighting%20Falcon | General Dynamics F-16 Fighting Falcon | The General Dynamics F-16 Fighting Falcon is an American single-engine supersonic multirole fighter aircraft originally developed by General Dynamics for the United States Air Force (USAF). Designed as an air superiority day fighter, it evolved into a successful all-weather multirole aircraft with over 4,600 built since 1976. Although no longer purchased by the U.S. Air Force, improved versions are being built for export. In 1993, General Dynamics sold its aircraft manufacturing business to the Lockheed Corporation, which became part of Lockheed Martin after a 1995 merger with Martin Marietta.
The F-16's key features include a frameless bubble canopy for enhanced cockpit visibility, a side-mounted control stick to ease control while maneuvering, an ejection seat reclined 30 degrees from vertical to reduce the effect of g-forces on the pilot, and the first use of a relaxed static stability/fly-by-wire flight control system that helps to make it an agile aircraft. The fighter has a single turbofan engine, an internal M61 Vulcan cannon and 11 hardpoints. Although officially named "Fighting Falcon", the aircraft is commonly known by the nickname "Viper" among its crews and pilots.
In addition to active duty in the U.S. Air Force, Air Force Reserve Command, and Air National Guard units, the aircraft is also used by the U.S. Air Force Thunderbirds aerial demonstration team, the US Air Combat Command F-16 Viper Demonstration Team, and as an adversary/aggressor aircraft by the United States Navy. The F-16 has also been procured by the air forces of 25 other nations. As of 2024, it is the world's most common fixed-wing aircraft in military service, with 2,145 F-16s operational.
Development
Lightweight Fighter program
US Vietnam War experience showed the need for air superiority fighters and better air-to-air training for fighter pilots. Based on his experience in the Korean War and as a fighter tactics instructor in the early 1960s, Colonel John Boyd with mathematician Thomas Christie developed the energy–maneuverability theory to model a fighter aircraft's performance in combat. Boyd's work called for a small, lightweight aircraft that could maneuver with the minimum possible energy loss and which also incorporated an increased thrust-to-weight ratio. In the late 1960s, Boyd gathered a group of like-minded innovators who became known as the Fighter Mafia, and in 1969, they secured Department of Defense funding for General Dynamics and Northrop to study design concepts based on the theory.
Air Force F-X proponents were opposed to the concept because they perceived it as a threat to the F-15 program, but the USAF's leadership understood that its budget would not allow it to purchase enough F-15 aircraft to satisfy all of its missions. The Advanced Day Fighter concept, renamed F-XX, gained civilian political support under the reform-minded Deputy Secretary of Defense David Packard, who favored the idea of competitive prototyping. As a result, in May 1971, the Air Force Prototype Study Group was established, with Boyd a key member, and two of its six proposals would be funded, one being the Lightweight Fighter (LWF). The request for proposals issued on 6 January 1972 called for a class air-to-air day fighter with a good turn rate, acceleration, and range, and optimized for combat at speeds of and altitudes of . This was the region where USAF studies predicted most future air combat would occur. The anticipated average flyaway cost of a production version was . This production plan was hypothetical as the USAF had no firm plans to procure the winner.
Selection of finalists and flyoff
Five companies responded, and in 1972, the Air Staff selected General Dynamics' Model 401 and Northrop's P-600 for the follow-on prototype development and testing phase. GD and Northrop were awarded contracts to produce the YF-16 and YF-17, respectively, with the first flights of both prototypes planned for early 1974. To overcome resistance in the Air Force hierarchy, the Fighter Mafia and other LWF proponents successfully advocated the idea of complementary fighters in a high-cost/low-cost force mix. The "high/low mix" would allow the USAF to afford sufficient fighters for its overall fighter force structure requirements. The mix gained broad acceptance by the time of the prototypes' flyoff, defining the relationship between the LWF and the F-15.
The YF-16 was developed by a team of General Dynamics engineers led by Robert H. Widmer. The first YF-16 was rolled out on 13 December 1973. Its 90-minute maiden flight was made at the Air Force Flight Test Center at Edwards AFB, California, on 2 February 1974. Its actual first flight occurred accidentally during a high-speed taxi test on 20 January 1974. While gathering speed, a roll-control oscillation caused a fin of the port-side wingtip-mounted missile and then the starboard stabilator to scrape the ground, and the aircraft then began to veer off the runway. The test pilot, Phil Oestricher, decided to lift off to avoid a potential crash, safely landing six minutes later. The slight damage was quickly repaired and the official first flight occurred on time. The YF-16's first supersonic flight was accomplished on 5 February 1974, and the second YF-16 prototype first flew on 9 May 1974. This was followed by the first flights of Northrop's YF-17 prototypes on 9 June and 21 August 1974, respectively. During the flyoff, the YF-16s completed 330 sorties for a total of 417 flight hours; the YF-17s flew 288 sorties, covering 345 hours.
Air Combat Fighter competition
Increased interest turned the LWF into a serious acquisition program. NATO allies Belgium, Denmark, the Netherlands, and Norway were seeking to replace their F-104G Starfighter fighter-bombers. In early 1974, they reached an agreement with the U.S. that if the USAF ordered the LWF winner, they would consider ordering it as well. The USAF also needed to replace its F-105 Thunderchief and F-4 Phantom II fighter-bombers. The U.S. Congress sought greater commonality in fighter procurements by the Air Force and Navy, and in August 1974 redirected Navy funds to a new Navy Air Combat Fighter program that would be a naval fighter-bomber variant of the LWF. The four NATO allies had formed the Multinational Fighter Program Group (MFPG) and pressed for a U.S. decision by December 1974; thus, the USAF accelerated testing.
To reflect this serious intent to procure a new fighter-bomber, the LWF program was rolled into a new Air Combat Fighter (ACF) competition in an announcement by U.S. Secretary of Defense James R. Schlesinger in April 1974. The ACF would not be a pure fighter, but multirole, and Schlesinger made it clear that any ACF order would be in addition to the F-15, which extinguished opposition to the LWF. ACF also raised the stakes for GD and Northrop because it brought in competitors intent on securing what was touted at the time as "the arms deal of the century". These were Dassault-Breguet's proposed Mirage F1M-53, the Anglo-French SEPECAT Jaguar, and the proposed Saab 37E "Eurofighter". Northrop offered the P-530 Cobra, which was similar to the YF-17. The Jaguar and Cobra were dropped by the MFPG early on, leaving two European and two U.S. candidates. On 11 September 1974, the U.S. Air Force confirmed plans to order the winning ACF design to equip five tactical fighter wings. Though computer modeling predicted a close contest, the YF-16 proved significantly quicker going from one maneuver to the next and was the unanimous choice of those pilots that flew both aircraft.
On 13 January 1975, Secretary of the Air Force John L. McLucas announced the YF-16 as the winner of the ACF competition. The chief reasons given by the secretary were the YF-16's lower operating costs, greater range, and maneuver performance that was "significantly better" than that of the YF-17, especially at supersonic speeds. Another advantage of the YF-16 – unlike the YF-17 – was its use of the Pratt & Whitney F100 turbofan engine, the same powerplant used by the F-15; such commonality would lower the cost of engines for both programs. Secretary McLucas announced that the USAF planned to order at least 650, possibly up to 1,400 production F-16s. In the Navy Air Combat Fighter competition, on 2 May 1975, the Navy selected the YF-17 as the basis for what would become the McDonnell Douglas F/A-18 Hornet.
Production
The U.S. Air Force initially ordered 15 full-scale development (FSD) aircraft (11 single-seat and four two-seat models) for its flight test program, an order later reduced to eight (six F-16A single-seaters and two F-16B two-seaters). The YF-16 design was altered for the production F-16. The fuselage was lengthened by 10.6 in (0.27 m), a larger nose radome was fitted for the AN/APG-66 radar, wing area was increased from 280 to 300 sq ft (26 to 28 m2), the tailfin height was decreased, the ventral fins were enlarged, two more stores stations were added, and a single door replaced the original nosewheel double doors. These modifications increased the F-16's weight by 25% over that of the YF-16.
The FSD F-16s were manufactured by General Dynamics in Fort Worth, Texas, at United States Air Force Plant 4 in late 1975; the first F-16A rolled out on 20 October 1976 and first flew on 8 December. The initial two-seat model achieved its first flight on 8 August 1977. The initial production-standard F-16A flew for the first time on 7 August 1978 and its delivery was accepted by the USAF on 6 January 1979. The aircraft entered USAF operational service with the 34th Tactical Fighter Squadron, 388th Tactical Fighter Wing, at Hill AFB in Utah, on 1 October 1980.
The F-16 was given its name of "Fighting Falcon" on 21 July 1980. Its pilots and crews often use the name "Viper" instead, because of a perceived resemblance to a viper snake as well as to the fictional Colonial Viper starfighter from the television program Battlestar Galactica, which aired at the time the F-16 entered service.
On 7 June 1975, the four European partners, now known as the European Participation Group, signed up for 348 aircraft at the Paris Air Show. This was split among the European Participation Air Forces (EPAF) as 116 for Belgium, 58 for Denmark, 102 for the Netherlands, and 72 for Norway. Two European production lines, one at Fokker's Schiphol-Oost facility in the Netherlands and the other at SABCA's Gosselies plant in Belgium, would produce 184 and 164 units respectively. Norway's Kongsberg Vaapenfabrikk and Denmark's Terma A/S also manufactured parts and subassemblies for EPAF aircraft. European co-production was officially launched on 1 July 1977 at the Fokker factory. Beginning in November 1977, Fokker-produced components were sent to Fort Worth for fuselage assembly, then shipped back to Europe for final assembly of EPAF aircraft, which began at the Belgian plant on 15 February 1978; deliveries to the Belgian Air Force began in January 1979. The first Royal Netherlands Air Force aircraft was delivered in June 1979, and in 1980 the first aircraft were delivered to the Royal Norwegian Air Force by Fokker and to the Royal Danish Air Force by SABCA.
During the late 1980s and 1990s, Turkish Aerospace Industries (TAI) produced 232 Block 30/40/50 F-16s on a production line in Ankara under license for the Turkish Air Force. TAI also produced 46 Block 40s for Egypt in the mid-1990s and 30 Block 50s from 2010 onwards. Korean Aerospace Industries opened a production line for the KF-16 program, producing 140 Block 52s from the mid-1990s to the mid-2000s. If India had selected the F-16IN for its Medium Multi-Role Combat Aircraft procurement, a sixth F-16 production line would have been built in India. In May 2013, Lockheed Martin stated that it had enough orders to keep producing the F-16 until 2017.
Improvements and upgrades
One change made during production was augmented pitch control to avoid deep stall conditions at high angles of attack. The stall issue had been raised during development but had originally been discounted. Model tests of the YF-16 conducted by the Langley Research Center revealed a potential problem, but no other laboratory was able to duplicate it. YF-16 flight tests were not sufficient to expose the issue; later flight testing on the FSD aircraft demonstrated a real concern. In response, the area of each horizontal stabilizer was increased by 25% on the Block 15 aircraft in 1981 and later retrofitted to earlier aircraft. In addition, a manual override switch to disable the horizontal stabilizer flight limiter was prominently placed on the control console, allowing the pilot to regain control of the horizontal stabilizers (which the flight limiters otherwise lock in place) and recover. Besides reducing the risk of deep stalls, the larger horizontal tail also improved stability and permitted faster takeoff rotation.
In the 1980s, the Multinational Staged Improvement Program (MSIP) was conducted to evolve the F-16's capabilities, mitigate risks during technology development, and ensure the aircraft's continued value. The program upgraded the F-16 in three stages. The MSIP process permitted the quick introduction of new capabilities, at lower costs and with reduced risks compared with traditional independent upgrade programs. In 2012, the USAF had allocated $2.8 billion to upgrade 350 F-16s while waiting for the F-35 to enter service. One key upgrade has been Auto-GCAS (Automatic Ground Collision Avoidance System), which reduces instances of controlled flight into terrain. Onboard power and cooling capacities limit the scope of upgrades, which often involve the addition of more power-hungry avionics.
Lockheed won many contracts to upgrade foreign operators' F-16s. BAE Systems also offers various F-16 upgrades, receiving orders from South Korea, Oman, Turkey, and the US Air National Guard; BAE lost the South Korean contract because of a price breach in November 2014. In 2012, the USAF assigned the total upgrade contract to Lockheed Martin. Upgrades include Raytheon's Center Display Unit, which replaces several analog flight instruments with a single digital display.
In 2013, sequestration budget cuts cast doubt on the USAF's ability to complete the Combat Avionics Programmed Extension Suite (CAPES), to which foreign programs such as Taiwan's F-16 upgrade were tied. Air Combat Command's General Mike Hostage stated that if he only had money for a service life extension program (SLEP) or CAPES, he would fund SLEP to keep the aircraft flying. Lockheed Martin responded to talk of CAPES cancellation with a fixed-price upgrade package for foreign users. CAPES was not included in the Pentagon's 2015 budget request. The USAF said that the upgrade package would still be offered to Taiwan's Republic of China Air Force, and Lockheed said that some common elements with the F-35 would keep the radar's unit costs down. In 2014, the USAF issued an RFI to SLEP 300 F-16C/Ds.
Production relocation
To make more room for assembly of its newer F-35 Lightning II fighter aircraft, Lockheed Martin moved the F-16 production from Fort Worth, Texas to its plant in Greenville, South Carolina. Lockheed delivered the last F-16 from Fort Worth to the Iraqi Air Force on 14 November 2017, ending 40 years of F-16 production there. The company resumed production in 2019, though engineering and modernization work will remain in Fort Worth. A gap in orders made it possible to stop production during the move; after completing orders for the last Iraqi purchase, the company was negotiating an F-16 sale to Bahrain that would be produced in Greenville. This contract was signed in June 2018, and the first planes rolled off the Greenville line in 2023.
Design
Overview
The F-16 is a single-engine, highly maneuverable, supersonic, multirole tactical fighter aircraft. It is much smaller and lighter than its predecessors but uses advanced aerodynamics and avionics, including the first use of a relaxed static stability/fly-by-wire (RSS/FBW) flight control system, to achieve enhanced maneuver performance. Highly agile, the F-16 was the first fighter aircraft purpose-built to pull 9-g maneuvers and can reach a maximum speed of over Mach 2. Innovations include a frameless bubble canopy for better visibility, a side-mounted control stick, and a reclined seat to reduce g-force effects on the pilot. It is armed with an internal 20 mm M61 Vulcan cannon in the left wing root and has multiple locations for mounting various missiles, bombs and pods. It has a thrust-to-weight ratio greater than one, providing power to climb and accelerate vertically.
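As a quick illustration of the thrust-to-weight claim, the ratio is a one-line calculation; the thrust and weight figures below are rough assumptions for illustration, not official F-16 data:

```python
# Back-of-envelope check of a thrust-to-weight ratio greater than one.
# Both figures are illustrative assumptions, not official data.

thrust_lbf = 28_000   # assumed afterburning thrust
weight_lb = 26_000    # assumed weight at a light combat loading

t_w = thrust_lbf / weight_lb
print(f"T/W = {t_w:.2f}")  # > 1.0: thrust exceeds weight, so the
                           # aircraft can accelerate while climbing vertically
```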
The F-16 was designed to be relatively inexpensive to build and simpler to maintain than earlier-generation fighters. The airframe is built with about 80% aviation-grade aluminum alloys, 8% steel, 3% composites, and 1.5% titanium. The leading-edge flaps, stabilators, and ventral fins make use of bonded aluminum honeycomb structures and graphite epoxy lamination coatings. The number of lubrication points, fuel line connections, and replaceable modules is significantly less than in preceding fighters; 80% of the access panels can be accessed without stands. The air intake was placed so it was rearward of the nose but forward enough to minimize air flow losses and reduce aerodynamic drag.
Although the LWF program called for a structural life of 4,000 flight hours and the ability to achieve 7.33 g with 80% internal fuel, GD's engineers decided to design the F-16's airframe for a life of 8,000 hours and for 9-g maneuvers on full internal fuel. This proved advantageous when the aircraft's mission changed from solely air-to-air combat to multirole operations. Changes in operational use and additional systems have increased weight, necessitating multiple structural strengthening programs.
General configuration
The F-16 has a cropped-delta wing incorporating wing-fuselage blending and forebody vortex-control strakes; a fixed-geometry, underslung air intake (with splitter plate) to the single turbofan jet engine; a conventional tri-plane empennage arrangement with all-moving horizontal "stabilator" tailplanes; a pair of ventral fins beneath the fuselage aft of the wing's trailing edge; and a tricycle landing gear configuration with the aft-retracting, steerable nose gear deploying a short distance behind the inlet lip. There is a boom-style aerial refueling receptacle located behind the single-piece "bubble" canopy of the cockpit. Split-flap speedbrakes are located at the aft end of the wing-body fairing, and a tailhook is mounted underneath the fuselage. A fairing beneath the rudder often houses ECM equipment or a drag chute. Later F-16 models feature a long dorsal fairing along the fuselage's "spine", housing additional equipment or fuel.
Aerodynamic studies in the 1960s demonstrated that the "vortex lift" phenomenon could be harnessed by highly swept wing configurations to reach higher angles of attack, using leading edge vortex flow off a slender lifting surface. As the F-16 was being optimized for high combat agility, GD's designers chose a slender cropped-delta wing with a leading-edge sweep of 40° and a straight trailing edge. To improve maneuverability, a variable-camber wing with a NACA 64A-204 airfoil was selected; the camber is adjusted by leading-edge and trailing edge flaperons linked to a digital flight control system regulating the flight envelope. The F-16 has a moderate wing loading, reduced by fuselage lift. The vortex lift effect is increased by leading-edge extensions, known as strakes. Strakes act as additional short-span, triangular wings running from the wing root (the junction with the fuselage) to a point further forward on the fuselage. Blended into the fuselage and along the wing root, the strake generates a high-speed vortex that remains attached to the top of the wing as the angle of attack increases, generating additional lift and allowing greater angles of attack without stalling. Strakes allow a smaller, lower-aspect-ratio wing, which increases roll rates and directional stability while decreasing weight. Deeper wing roots also increase structural strength and internal fuel volume.
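Wing loading itself is simply weight divided by lifting area; the sketch below, using invented numbers, illustrates how lift from the body and strakes effectively enlarges the lifting area and lowers the loading the wing alone must carry:

```python
# Wing loading W/S, and the effect of extra lifting area from the blended
# body and strakes. All numbers are invented for illustration.

def wing_loading(weight_lb: float, area_sqft: float) -> float:
    """Weight carried per unit of lifting area, lb/sq ft."""
    return weight_lb / area_sqft

weight = 25_000  # assumed combat weight, lb
print(wing_loading(weight, 300))       # wing reference area alone
print(wing_loading(weight, 300 + 60))  # assumed extra effective area
                                       # from body and strake lift
```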
Armament
Early F-16s could be armed with up to six AIM-9 Sidewinder heat-seeking short-range air-to-air missiles (AAM) by employing rail launchers on each wingtip, as well as radar-guided AIM-7 Sparrow medium-range AAMs in a weapons mix. More recent versions support the AIM-120 AMRAAM, and US aircraft often mount that missile on their wingtips to reduce wing flutter. The aircraft can carry various other AAMs, a wide variety of air-to-ground missiles, rockets or bombs; electronic countermeasures (ECM), navigation, targeting or weapons pods; and fuel tanks on 9 hardpoints – six under the wings, two on wingtips, and one under the fuselage. Two other locations under the fuselage are available for sensor or radar pods. The F-16 carries an M61A1 Vulcan cannon, which is mounted inside the fuselage to the left of the cockpit.
Relaxed stability and fly-by-wire
The F-16 is the first production fighter aircraft intentionally designed to be slightly aerodynamically unstable, a characteristic known as relaxed static stability (RSS), to reduce drag and improve maneuverability. Most aircraft are designed with positive static stability, which induces the aircraft to return to a straight and level flight attitude if the pilot releases the controls; this reduces maneuverability, as the inherent stability has to be overcome, and increases a form of drag known as trim drag. Aircraft with relaxed stability trade this inherent self-righting tendency for increased lift and reduced drag, greatly increasing maneuverability. At supersonic speeds, the F-16 gains positive stability because of aerodynamic changes.
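This stability behavior is conventionally summarized by the static margin, the distance from the center of gravity (CG) to the neutral point expressed in mean aerodynamic chords. The following sketch uses invented positions, not actual F-16 geometry, to show the sign change that distinguishes a stable design, a relaxed-stability design, and the supersonic case:

```python
# Static margin = (x_np - x_cg) / mean_chord. Positive values give a
# restoring (stable) pitching moment; negative values are divergent until
# the flight control system intervenes. Positions are invented examples.

def static_margin(x_cg: float, x_np: float, mean_chord: float) -> float:
    """Distance from CG to neutral point, in mean aerodynamic chords."""
    return (x_np - x_cg) / mean_chord

# Conventional design: CG ahead of the neutral point -> stable.
print(static_margin(x_cg=6.0, x_np=6.4, mean_chord=3.5))  # positive

# Relaxed static stability, subsonic: CG behind the neutral point ->
# unstable; a fly-by-wire system must stabilize the aircraft continuously.
print(static_margin(x_cg=6.5, x_np=6.4, mean_chord=3.5))  # negative

# At supersonic speed the neutral point moves aft, so the same CG
# location becomes stable, matching the behavior described above.
print(static_margin(x_cg=6.5, x_np=7.1, mean_chord=3.5))  # positive again
```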
To counter the tendency to depart from controlled flight and avoid the need for constant trim inputs by the pilot, the F-16 has a quadruplex (four-channel) fly-by-wire (FBW) flight control system (FLCS). The flight control computer (FLCC) accepts pilot input from the stick and rudder controls and manipulates the control surfaces in such a way as to produce the desired result without inducing control loss. The FLCC conducts thousands of measurements per second on the aircraft's flight attitude to automatically counter deviations from the pilot-set flight path. The FLCC further incorporates limiters governing movement in the three main axes based on attitude, airspeed, and angle of attack (AOA)/g; these prevent control surfaces from inducing instability such as slips or skids, or a high AOA inducing a stall. The limiters also prevent maneuvers that would exert more than a 9-g load.
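As a toy illustration of how such limiters sit between pilot and control surfaces, the function below clamps a commanded g load and washes it out near the AOA limit. The 9-g and 25° figures are those cited in this article; the structure, and the assumed −3-g negative limit, are simplifications, not the real FLCS control law:

```python
# Toy command limiter in the spirit of the F-16's FLCS: pilot demands are
# clamped before reaching the control surfaces. The 9-g and 25-degree AOA
# limits are the figures cited in the text; the -3-g floor and the overall
# structure are simplifying assumptions, not the real control law.

MAX_G = 9.0         # positive g limit (from the text)
MIN_G = -3.0        # assumed negative g limit
MAX_AOA_DEG = 25.0  # AOA limit (from the text)

def limit_pitch_command(commanded_g: float, current_aoa_deg: float) -> float:
    """Clamp the pilot's g command; refuse nose-up demand at the AOA limit."""
    g = max(min(commanded_g, MAX_G), MIN_G)
    if current_aoa_deg >= MAX_AOA_DEG:
        g = min(g, 1.0)  # hold roughly level flight rather than deepen the AOA
    return g

print(limit_pitch_command(12.0, 10.0))  # -> 9.0 (g limiter engages)
print(limit_pitch_command(6.0, 26.0))   # -> 1.0 (AOA limiter engages)
```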
Flight testing revealed that "assaulting" multiple limiters at high AOA and low speed can result in an AOA far exceeding the 25° limit, colloquially referred to as "departing"; this causes a deep stall, a near-freefall at 50° to 60° AOA, either upright or inverted. While the aircraft's attitude is stable at such a high AOA, the control surfaces are ineffective, and the pitch limiter locks the stabilators at an extreme pitch-up or pitch-down setting while attempting to recover. The limiter can be overridden so the pilot can "rock" the nose via pitch control to recover.
Unlike the YF-17, which had hydromechanical controls serving as a backup to the FBW, General Dynamics took the innovative step of eliminating mechanical linkages from the control stick and rudder pedals to the flight control surfaces. The F-16 is entirely reliant on its electrical systems to relay flight commands, instead of traditional mechanically linked controls, leading to the early moniker of "the electric jet" and aphorisms among pilots such as "You don't fly an F-16; it flies you." The quadruplex design permits "graceful degradation" in flight control response in that the loss of one channel renders the FLCS a "triplex" system. The FLCC began as an analog system on the A/B variants but has been supplanted by a digital computer system beginning with the F-16C/D Block 40. The F-16's controls suffered from a sensitivity to static electricity or electrostatic discharge (ESD) and lightning. Up to 70–80% of the C/D models' electronics were vulnerable to ESD.
Cockpit and ergonomics
A key feature of the F-16's cockpit is the exceptional field of view. The single-piece, bird-proof polycarbonate bubble canopy provides 360° all-round visibility, with a 40° look-down angle over the side of the aircraft and 15° down over the nose (compared with the 12–13° common on preceding aircraft); the pilot's seat is elevated for this purpose. Additionally, the F-16's canopy omits the forward bow frame found on many fighters, which obstructs a pilot's forward vision. The F-16's ACES II zero/zero ejection seat is reclined at an unusual tilt-back angle of 30°; most fighters have seats reclined at 13–15°. The tilted seat can accommodate taller pilots and increases g-force tolerance; however, it has been associated with reports of neck aches, possibly caused by incorrect headrest usage. Subsequent U.S. fighters have adopted more modest tilt-back angles of 20°. Because of the seat angle and the canopy's thickness, the ejection seat lacks canopy-breakers for emergency egress; instead, the entire canopy is jettisoned before the seat's rocket fires.
The pilot flies primarily by means of an armrest-mounted side-stick controller (instead of a traditional center-mounted stick) and an engine throttle; conventional rudder pedals are also employed. To enhance the pilot's degree of control of the aircraft during combat maneuvers, various switches and function controls were moved to centralized hands-on-throttle-and-stick (HOTAS) controls on both the stick and the throttle. Hand pressure on the side-stick controller is transmitted by electrical signals via the FBW system to adjust various flight control surfaces to maneuver the F-16. Originally, the side-stick controller was non-moving, but this proved uncomfortable and difficult for pilots to adjust to, sometimes resulting in a tendency to "over-rotate" during takeoffs, so the control stick was given a small amount of "play". Since the introduction of the F-16, HOTAS controls have become a standard feature on modern fighters.
The F-16 has a head-up display (HUD), which projects visual flight and combat information in front of the pilot without obstructing the view; being able to keep their head "out of the cockpit" improves the pilot's situation awareness. Further flight and systems information is displayed on multi-function displays (MFD). The left-hand MFD is the primary flight display (PFD), typically showing radar and moving maps; the right-hand MFD is the system display (SD), presenting information about the engine, landing gear, slat and flap settings, and fuel and weapons status. Initially, the F-16A/B had monochrome cathode-ray tube (CRT) displays; these were replaced by color liquid-crystal displays on the Block 50/52. The Mid-Life Update (MLU) introduced compatibility with night-vision goggles (NVG). The Boeing Joint Helmet Mounted Cueing System (JHMCS) is available from Block 40 onwards for targeting based on where the pilot's head faces, unrestricted by the HUD, using high-off-boresight missiles like the AIM-9X.
In November 2024 it was announced that the US Air Force had awarded a $9 million contract to Danish defense company Terma A/S, to supply its 3-D audio system for the aircraft, with a program of upgrades over the following two years. The system will provide high-fidelity digital audio by spatially separating radio signals, aligning audio with threat directions, and integrating active noise reduction.
Fire-control radar
The F-16A/B was originally equipped with the Westinghouse AN/APG-66 fire-control radar. Its slotted planar array antenna was designed to be compact to fit into the F-16's relatively small nose. In uplook mode, the APG-66 uses a low pulse-repetition frequency (PRF) for medium- and high-altitude target detection in a low-clutter environment, while in look-down/shoot-down mode it employs a medium PRF for heavy-clutter environments. It has four operating frequencies within the X band, and provides four air-to-air and seven air-to-ground operating modes for combat, even at night or in bad weather. The Block 15's APG-66(V)2 model added more powerful signal processing, higher output power, improved reliability, and increased range in cluttered or jamming environments. The Mid-Life Update (MLU) program introduced a new model, APG-66(V)2A, which features higher speed and more memory.
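The PRF trade-off follows from pulse-delay ranging: an echo is only unambiguous in range if it returns before the next pulse is transmitted, so a lower PRF sees farther without ambiguity. A minimal sketch, with illustrative PRF values rather than actual APG-66 parameters:

```python
# Maximum unambiguous range of a pulse radar: the echo must return before
# the next pulse is sent. PRF values here are illustrative, not APG-66 data.

C = 3.0e8  # speed of light, m/s

def max_unambiguous_range_km(prf_hz: float) -> float:
    """R_max = c / (2 * PRF), converted to kilometers."""
    return C / (2 * prf_hz) / 1000

print(f"{max_unambiguous_range_km(1_000):.0f} km")   # low PRF: long reach
print(f"{max_unambiguous_range_km(10_000):.1f} km")  # higher PRF: range folds sooner
```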
The AN/APG-68, an evolution of the APG-66, was introduced with the F-16C/D Block 25. The APG-68 has greater range and resolution, as well as 25 operating modes, including ground-mapping, Doppler beam-sharpening, ground moving target indication, sea target, and track while scan (TWS) for up to 10 targets. The Block 40/42's APG-68(V)1 model added full compatibility with Lockheed Martin Low Altitude Navigation and Targeting Infrared for Night (LANTIRN) pods, and a high-PRF pulse-Doppler track mode to provide Interrupted Continuous Wave guidance for semi-active radar homing (SARH) missiles like the AIM-7 Sparrow. Block 50/52 F-16s initially used the more reliable APG-68(V)5 which has a programmable signal processor employing Very High Speed Integrated Circuit (VHSIC) technology. The Advanced Block 50/52 (or 50+/52+) is equipped with the APG-68(V)9 radar, with a 30% greater air-to-air detection range and a synthetic aperture radar (SAR) mode for high-resolution mapping and target detection-recognition. In August 2004, Northrop Grumman was contracted to upgrade the APG-68 radars of Block 40/42/50/52 aircraft to the (V)10 standard, providing all-weather autonomous detection and targeting for Global Positioning System (GPS)-aided precision weapons, SAR mapping, and terrain-following radar (TF) modes, as well as interleaving of all modes.
The F-16E/F is outfitted with Northrop Grumman's AN/APG-80 active electronically scanned array (AESA) radar. Northrop Grumman developed the latest AESA radar upgrade for the F-16 (selected for USAF and Taiwan's Republic of China Air Force F-16 upgrades), named the AN/APG-83 Scalable Agile Beam Radar (SABR). In July 2007, Raytheon announced that it was developing a Next Generation Radar (RANGR) based on its earlier AN/APG-79 AESA radar as a competitor to Northrop Grumman's AN/APG-68 and AN/APG-80 for the F-16. On 28 February 2020, Northrop Grumman received an order from the USAF to extend the service lives of its F-16s to at least 2048, with the AN/APG-83 fitted as part of the service-life extension program (SLEP).
Propulsion
The initial powerplant selected for the single-engined F-16 was the Pratt & Whitney F100-PW-200 afterburning turbofan, a modified version of the F-15's F100-PW-100. During testing, the engine was found to be prone to compressor stalls and "rollbacks", wherein the engine's thrust would spontaneously reduce to idle. Until the problem was resolved, the Air Force required F-16s to be operated within "dead-stick landing" distance of their bases. It was the standard F-16 engine through the Block 25, except for the newly built Block 15s with the Operational Capability Upgrade (OCU). The OCU introduced the F100-PW-220, later installed on Block 32 and 42 aircraft, its main advance being a Digital Electronic Engine Control (DEEC) unit, which improved reliability and reduced stall occurrence. Beginning production in 1988, the "-220" also supplanted the F-15's "-100", for commonality. Many of the "-220" engines on Block 25 and later aircraft were upgraded from 1997 onwards to the "-220E" standard, which enhanced reliability and maintainability; unscheduled engine removals were reduced by 35%.
The F100-PW-220/220E was the result of the USAF's Alternate Fighter Engine (AFE) program (colloquially known as "the Great Engine War"), which also saw the entry of General Electric as an F-16 engine provider. Its F110-GE-100 turbofan was initially limited in thrust by the original inlet; the larger Modular Common Inlet Duct allowed the F110 to achieve its full rated thrust. (To distinguish between aircraft equipped with these two engines and inlets, from the Block 30 series on, blocks ending in "0" (e.g., Block 30) are powered by GE, and blocks ending in "2" (e.g., Block 32) are fitted with Pratt & Whitney engines.)
The Increased Performance Engine (IPE) program led to the F110-GE-129 on the Block 50 and the F100-PW-229 on the Block 52. F-16s began flying with these IPE engines in the early 1990s. Altogether, of the 1,446 F-16C/Ds ordered by the USAF, 556 were fitted with F100-series engines and 890 with F110s. The United Arab Emirates' Block 60 is powered by the General Electric F110-GE-132 turbofan, rated at 32,500 lbf (145 kN), the highest-thrust engine developed for the F-16.
Operational history
United States
The F-16 is used by the active-duty USAF, Air Force Reserve, and Air National Guard units; by the USAF's aerial demonstration team, the U.S. Air Force Thunderbirds; and as an adversary-aggressor aircraft by the United States Navy at the Naval Strike and Air Warfare Center.
The U.S. Air Force, including the Air Force Reserve and the Air National Guard, flew the F-16 in combat during Operation Desert Storm in 1991 and in the Balkans later in the 1990s. F-16s also patrolled the no-fly zones in Iraq during Operations Northern Watch and Southern Watch and served during the War in Afghanistan and the War in Iraq from 2001 and 2003 respectively. In 2011, Air Force F-16s took part in the intervention in Libya.
On 11 September 2001, two unarmed F-16s were launched in an attempt to ram and down United Airlines Flight 93 before it reached Washington, D.C., during the September 11 terrorist attacks. Flight 93 was brought down by its hijackers after passengers stormed the cockpit, so the F-16s were retasked to patrol the local airspace and later escorted Air Force One back to Washington.
The F-16 had been scheduled to remain in service with the U.S. Air Force until 2025. Its replacement is planned to be the F-35A variant of the Lockheed Martin F-35 Lightning II, which is expected to gradually begin replacing several multirole aircraft among the program's member nations. However, owing to delays in the F-35 program, all USAF F-16s will receive service life extension upgrades. In 2022, it was announced the USAF would continue to operate the F-16 for another two decades.
Israel
The F-16's first air-to-air combat success was achieved by the Israeli Air Force (IAF) over the Bekaa Valley on 28 April 1981, against a Syrian Mi-8 helicopter, which was downed with cannon fire. On 7 June 1981, eight Israeli F-16s, escorted by six F-15s, executed Operation Opera, their first employment in a significant air-to-ground operation. This raid severely damaged Osirak, an Iraqi nuclear reactor under construction near Baghdad, to prevent the regime of Saddam Hussein from using the reactor for the creation of nuclear weapons.
The following year, during the 1982 Lebanon War Israeli F-16s engaged Syrian aircraft in one of the largest air battles involving jet aircraft, which began on 9 June and continued for two more days. Israeli Air Force F-16s were credited with 44 air-to-air kills during the conflict.
In January 2000, Israel completed a purchase of 102 new F-16I aircraft. F-16s were also used in their ground-attack role for strikes against targets in Lebanon. IAF F-16s participated in the 2006 Lebanon War and the 2008–09 Gaza War. During and after the 2006 Lebanon War, IAF F-16s shot down Iranian-made UAVs launched by Hezbollah, using Rafael Python 5 air-to-air missiles.
On 10 February 2018, an Israeli Air Force F-16I was shot down in northern Israel when it was hit by a relatively old model S-200 (NATO name SA-5 Gammon) surface-to-air missile of the Syrian Air Defense Force. The pilot and navigator ejected safely in Israeli territory. The F-16I was part of a bombing mission against Syrian and Iranian targets around Damascus after an Iranian drone entered Israeli airspace and was shot down. An Israeli Air Force investigation concluded on 27 February 2018 that the loss was due to pilot error, as the air crew had not adequately defended the aircraft.
On 16 July 2024, the last single-seat F-16C Barak-1 ("Lightning" in Hebrew) aircraft were retired; the IAF continues to use the F-16D Brakeet and F-16I Sufa two-seat variants.
Pakistan
During the Soviet–Afghan War, PAF F-16As shot down between 20 and 30 Soviet and Afghan warplanes, although for political reasons the PAF officially recognized only nine kills, those made inside Pakistani airspace. From May 1986 to January 1989, PAF F-16s from the Tail Choppers and Griffin squadrons, using mostly AIM-9 Sidewinder missiles, shot down four Afghan Su-22s, two MiG-23s, one Su-25, and one An-26. Most of these kills were by missiles, but at least one, a Su-22, was destroyed by cannon fire. One F-16 was lost in these engagements, likely hit accidentally by its wingman.
On 7 June 2002, a Pakistan Air Force F-16B Block 15 (S. No. 82-605) shot down an Indian Air Force unmanned aerial vehicle, an Israeli-made Searcher II, using an AIM-9L Sidewinder missile, during a night interception near Lahore.
The Pakistan Air Force has used its F-16s in various foreign and domestic military exercises, such as the "Indus Vipers" exercise conducted jointly with Turkey in 2008.
From May 2009, the PAF F-16 fleet flew more than 5,500 sorties in support of the Pakistan Army's operations against the Taliban insurgency in the FATA region of north-west Pakistan. More than 80% of the munitions dropped were laser-guided bombs.
On 27 February 2019, following six Pakistan Air Force airstrikes in Jammu and Kashmir, India, Pakistani officials said that their fighter jets had shot down an Indian Air Force MiG-21 and a Su-30MKI. Indian officials confirmed the loss of the MiG-21 but denied losing a Su-30MKI and dismissed the Pakistani claim as dubious. Indian officials also claimed to have shot down a Pakistan Air Force F-16; Pakistan denied this, neutral sources considered the Indian claim dubious, and it was later contradicted by a report in Foreign Policy magazine that the US had completed a physical count of Pakistan's F-16s and found none missing. A report by The Washington Post noted that the Pentagon and State Department refused public comment on the matter but did not deny the earlier report.
Turkey
The Turkish Air Force acquired its first F-16s in 1987, and F-16s were later produced in Turkey under the four phases of the Peace Onyx programs. In 2015, they were upgraded to the Block 50/52+ standard with CCIP by Turkish Aerospace Industries. Turkish F-16s are being fitted with an indigenous AESA radar and an electronic warfare suite called SPEWS-II.
On 18 June 1992, a Greek Mirage F1 crashed during a dogfight with a Turkish F-16. On 8 February 1995, a Turkish F-16 crashed into the Aegean Sea after being intercepted by Greek Mirage F1 fighters.
Turkish F-16s participated in operations over Bosnia-Herzegovina and Kosovo from 1993 in support of United Nations resolutions.
On 8 October 1996, seven months after the escalation, a Greek Mirage 2000 reportedly fired an R.550 Magic II missile and shot down a Turkish F-16D over the Aegean Sea. The Turkish pilot died, while the co-pilot ejected and was rescued by Greek forces. In August 2012, after the downing of an RF-4E on the Syrian coast, Turkish Defence Minister İsmet Yılmaz confirmed that the Turkish F-16D had been shot down by a Greek Mirage 2000 with an R.550 Magic II in 1996 near Chios island. Greece denies that the F-16 was shot down; both Mirage 2000 pilots reported that the F-16 caught fire and that they saw one parachute.
On 23 May 2006, two Greek F-16s intercepted a Turkish RF-4 reconnaissance aircraft and two F-16 escorts off the coast of the Greek island of Karpathos, within the Athens FIR. A mock dogfight ensued between the two sides, resulting in a midair collision between a Turkish F-16 and a Greek F-16. The Turkish pilot ejected safely, but the Greek pilot died owing to damage caused by the collision.
Turkey has used its F-16s extensively in its conflict with Kurdish insurgents in southeastern Turkey and in Iraq. Turkey launched its first cross-border raid, involving 50 fighters, on 16 December 2007, as a prelude to Operation Sun, the 2008 Turkish incursion into northern Iraq. This was the first time Turkey had mounted a night-bombing operation on such a scale, and it was also the largest operation conducted by the Turkish Air Force.
During the Syrian Civil War, Turkish F-16s were tasked with airspace protection on the Syrian border. After the RF-4 downing in June 2012, Turkey changed its rules of engagement against Syrian aircraft, resulting in the scrambling against, and downing of, Syrian combat aircraft. On 16 September 2013, a Turkish Air Force F-16 shot down a Syrian Arab Air Force Mil Mi-17 helicopter near the Turkish border. On 23 March 2014, a Turkish Air Force F-16 shot down a Syrian Arab Air Force MiG-23 that had allegedly entered Turkish airspace during a ground-attack mission against Al Qaeda-linked insurgents. On 16 May 2015, two Turkish Air Force F-16s shot down a Syrian Mohajer 4 UAV, firing two AIM-9 missiles, after it had intruded into Turkish airspace for five minutes. A Turkish Air Force F-16 shot down a Russian Air Force Sukhoi Su-24 on the Turkey–Syria border on 24 November 2015.
On 1 March 2020, two Syrian Sukhoi Su-24s were shot down by Turkish Air Force F-16s using air-to-air missiles over Syria's Idlib Governorate. All four pilots safely ejected. On 3 March 2020, a Syrian Arab Army Air Force L-39 combat trainer was shot down by a Turkish F-16 over Syria's Idlib province. The pilot died.
As part of the Turkish F-16 modernization program, new air-to-air missiles are being developed and tested for the aircraft. The GÖKTUĞ program, led by TÜBİTAK SAGE, has produced two air-to-air missiles: Bozdoğan (Merlin), a within-visual-range air-to-air missile (WVRAAM), and Gökdoğan (Peregrine), a beyond-visual-range air-to-air missile (BVRAAM). On 14 April 2021, the first live-fire test of Bozdoğan was successfully completed, with the first batch of missiles expected to be delivered to the Turkish Air Force later the same year.
Egypt
On 16 February 2015, Egyptian F-16s struck weapons caches and training camps of the Islamic State (ISIS) in Libya in retaliation for the murder of 21 Egyptian Coptic Christian construction workers by masked militants affiliated with ISIS. The airstrikes killed 64 ISIS fighters, including three leaders in Derna and Sirte on the coast.
Europe
The Royal Netherlands Air Force, Belgian Air Component, Royal Danish Air Force, and Royal Norwegian Air Force all fly the F-16. F-16s in most European air forces are equipped with drag chutes, fitted specifically to allow them to operate from automobile highways.
A Yugoslavian MiG-29 was shot down by a Dutch F-16AM during the Kosovo War in 1999, and Belgian and Danish F-16s also participated in joint operations over Kosovo during the war. Dutch, Belgian, Danish, and Norwegian F-16s were deployed during the 2011 intervention in Libya and in Afghanistan. In Libya, Norwegian F-16s flew 596 missions and dropped almost 550 bombs, some 17% of the total strike missions, including the bombing of Muammar Gaddafi's headquarters.
In late March 2018, Croatia announced its intention to purchase 12 used Israeli F-16C/D "Barak"/"Brakeet" jets, pending U.S. approval. Acquiring these F-16s would have allowed Croatia to retire its aging MiG-21s. In January 2019, the deal was canceled because the U.S. would only allow the resale if Israel stripped the aircraft of their modernized electronics, while Croatia insisted on the original deal with all the upgrades installed. At the end of November 2021, Croatia instead signed a contract with France for 12 Rafales.
On 11 July 2018, Slovakia's government approved the purchase of 14 F-16 Block 70/72 to replace its aging fleet of Soviet-made MiG-29s. A contract was signed on 12 December 2018 in Bratislava.
Ukraine
In May 2023, an international coalition consisting of the United Kingdom, the Netherlands, Belgium, and Denmark announced its intention to train Ukrainian Air Force pilots on the F-16 ahead of possible future deliveries to increase the Ukrainian Air Force's capabilities in the Russo-Ukrainian War. The U.S. confirmed that it would approve the re-export of F-16s from these countries to Ukraine. Denmark's acting Defence Minister Troels Lund Poulsen said that Denmark would "now be able to move forward for a collective contribution to train Ukrainian pilots to fly F-16s". On 6 July 2023, following a meeting of its Supreme Council of National Defense, Romania announced that it would host the future training center. During the 2023 Vilnius summit, a coalition was formed consisting of Denmark, the Netherlands, Belgium, Canada, Luxembourg, Norway, Poland, Portugal, Romania, Sweden, the United Kingdom, and Ukraine, and a number of Ukrainian pilots began training in Denmark and the U.S. The European F-16 Training Center, organized by Romania, the Netherlands, and Lockheed Martin through several subcontractors, officially opened on 13 November 2023. It is located at the Romanian Air Force's 86th Air Base, with Ukrainian pilots expected to start training there in early 2024. On 17 August 2023, the U.S. approved the transfer of F-16s from the Netherlands and Denmark to Ukraine once Ukrainian pilots had completed their training. The Netherlands and Denmark announced that together they would donate up to 61 F-16AM/BM Block 15 MLU fighters to Ukraine once pilot training was completed.
On 13 May 2024, Danish Prime Minister Mette Frederiksen said that "F-16 from Denmark will be in the air over Ukraine within months." Denmark is sending 19 F-16s in total. By the end of July 2024, the first F-16s were delivered to Ukraine.
On 4 August 2024, President Zelenskyy announced that the F-16 was in operational service with Ukraine, stating at an opening ceremony: "F-16s are in Ukraine. We did it. I am proud of our guys who are mastering these jets and have already started using them for our country."
On 26 August 2024, F-16s were reportedly used to intercept Russian cruise missiles for the first time. Also on 26 August, a Ukrainian F-16 crashed and the pilot, Oleksiy Mes, was killed while intercepting Russian aerial targets during the cruise missile strikes. The cause is under investigation.
On 30 August 2024, the Commander of the Ukrainian Air Force, Mykola Oleshchuk, was dismissed by President Zelenskyy and replaced by Lieutenant General Anatolii Kryvonozhko, which was partially attributed to "indications" that the F-16 that crashed on 26 August was shot down in "a friendly fire incident". Ukrainian parliamentarian Maryana Bezuhla and Oleshchuk had previously argued over the cause of the F-16 loss.
On 13 December 2024, the Ukrainian Air Force claimed an F-16 shot down six Russian cruise missiles: two with "medium-range missiles", two with "short-range missiles", and two with the 20 mm cannon.
Others
The Venezuelan Air Force has flown the F-16 on combat missions. During the November 1992 Venezuelan coup attempt, two F-16As flown by government loyalists shot down two rebel OV-10 Broncos and an AT-27 Tucano, establishing air superiority for government forces.
Two Indonesian Air Force F-16Bs intercepted and engaged several US Navy F/A-18 Hornets over the Java Sea during the 2003 Bawean incident.
The Royal Moroccan Air Force and the Royal Bahraini Air Force each lost a single F-16C, both shot down by Houthi anti-aircraft fire during the Saudi Arabian-led intervention in Yemen, respectively on 11 May 2015 and on 30 December 2015.
Argentina
On 11 October 2023, Deputy Assistant Secretary for Regional Security Mira Resnick confirmed to Jorge Argüello, the Argentine ambassador to the US, that the State Department had approved the transfer of 38 F-16s from Denmark. On 16 April 2024, Defense Minister Luis Petri announced that the country had completed the purchase of 24 Danish F-16s plus one additional airframe, all to be brought up to date before delivery to Argentina. The 25th aircraft, an F-16B MLU Block 10 intended for mechanics' training, arrived disassembled aboard an Argentine C-130 Hercules in late December 2024.
Six aircraft a year will be delivered from Denmark to Argentina until all 24 fighters have arrived, with the first batch expected around November 2025.
Potential operators
Bulgaria
The Bulgarian Air Force expects delivery of the first eight new F-16 Block 70s by 2025, with a second batch of eight F-16 Block 70s expected to arrive in 2027.
Philippines
On 24 June 2021, the Defense Security Cooperation Agency approved the Philippines' purchase of 12 F-16s worth an estimated US$2.43 billion. However, the Philippines has yet to complete the deal because of financial constraints, and negotiations are ongoing.
Civilian operators
Top Aces
In January 2021, Canadian defence contractor Top Aces announced that it had taken delivery of the first civilian-owned F-16s at its US headquarters in Mesa, Arizona. Following an approval process that had taken years, the company purchased a batch of 29 F-16A/B Netz aircraft from the Israeli Air Force, including several that had taken part in Operation Opera. A year later, the first of these aircraft had completed extensive AAMS mission-system upgrades, including an AESA radar, HMCS, ECM, and a tactical datalink. In late 2022, Top Aces began regular operations flying as contracted aggressors for USAF F-22 and F-35 squadrons at Luke AFB and Eglin AFB, as well as supporting exercises at other USAF and USMC bases.
Variants
F-16 models are denoted by increasing block numbers, which indicate upgrades. The blocks cover both single- and two-seat versions. A variety of software, hardware, systems, weapons compatibility, and structural enhancements have been instituted over the years to gradually upgrade production models and retrofit delivered aircraft.
While many F-16s were produced according to these block designs, there have been many other variants with significant changes, usually because of modification programs. Other changes have resulted in role-specialization, such as the close air support and reconnaissance variants. Several models were also developed to test new technology. The F-16 design also inspired the design of other aircraft, which are considered derivatives. Older F-16s are being converted into QF-16 drone targets.
The F-16A (single seat) and F-16B (two seat) were the initial production variants. These variants include the Block 1, 5, 10, 15, and 20 versions. Block 15, with its larger horizontal stabilizers, was the first major change to the F-16 and is the most numerous of all F-16 variants, with 983 produced. Around 300 earlier USAF F-16A and B aircraft were upgraded to the Block 15 Mid-Life Update (MLU) standard, giving them capability analogous to F-16C/D Block 50/52 aircraft. From 1987, a total of 214 Block 15 aircraft were upgraded to the OCU (Operational Capability Upgrade) standard, with engine, structural, and electronic improvements, and from 1988 all Block 15 aircraft were built directly to OCU specifications. Between 1989 and 1992, a total of 271 Block 15 OCU airframes (246 F-16A and 25 F-16B) were converted at the Ogden Air Logistics Center to the ADF (Air Defense Fighter) variant, with an improved IFF system, radio, and radar, the ability to carry advanced beyond-visual-range missiles, and the addition of a side-mounted 150,000-candlepower spotlight for visual night identification of intruders. Originally intended for Cold War air defense of continental U.S. airspace, the ADF lost a clear mission with the fall of the Berlin Wall, and most were mothballed from 1994. Some mothballed ADFs were later exported to Jordan (12 A and 4 B models) and Thailand (15 A and 1 B), while 30 A and 4 B models were leased to Italy from 2003 to 2012.
F-16C/D The F-16C (single seat) and F-16D (two seat) variants entered production in 1984. The first C/D version was the Block 25, with improved cockpit avionics and radar, which added all-weather capability with beyond-visual-range (BVR) AIM-7 and AIM-120 air-to-air missiles. Block 30/32, 40/42, and 50/52 were later C/D versions. The F-16C/D had a unit cost of US$18.8 million (1998). Operational cost per flight hour has been estimated at $7,000 to $22,470 or $24,000, depending on the calculation method.
F-16E/F The F-16E (single seat) and F-16F (two seat) are newer F-16 Block 60 variants based on the F-16C/D Block 50/52. The United Arab Emirates invested heavily in their development. They feature improved AN/APG-80 active electronically scanned array (AESA) radar, infrared search and track (IRST), avionics, conformal fuel tanks (CFTs), and the more powerful General Electric F110-GE-132 engine.
F-16IN For the Indian MRCA competition for the Indian Air Force, Lockheed Martin offered the F-16IN Super Viper. The F-16IN was based on the F-16E/F Block 60 and featured conformal fuel tanks; the AN/APG-80 AESA radar; the GE F110-GE-132A engine with FADEC controls; an electronic warfare suite and infrared search and track (IRST) unit; an updated glass cockpit; and a helmet-mounted cueing system. By 2011, the F-16IN was no longer in the competition. In 2016, Lockheed Martin offered the new F-16 Block 70/72 version to India under the Make in India program, and the Indian government proposed purchasing 200 (potentially up to 300) fighters in a deal worth $13–15 billion. In 2017, Lockheed Martin agreed to manufacture F-16 Block 70 fighters in India with the Indian defense firm Tata Advanced Systems Limited; the new production line could be used to build F-16s for India and for export.
F-16IQ In September 2010, the Defense Security Cooperation Agency informed the United States Congress of a possible Foreign Military Sale of 18 F-16IQ aircraft, along with associated equipment and services, to the newly reformed Iraqi Air Force. The Iraqi Air Force purchased those 18 jets in the second half of 2011, then later exercised an option to purchase 18 more, for a total of 36 F-16IQs; two have since been lost in accidents. By 2023, the US government reported that these jets were Iraq's most capable airborne platforms, with a 66 percent mission-capable rate, and that their maintenance was being supported by private contractors. At the same time, Iraq's Russian-made systems were suffering from sanctions imposed in the wake of Russia's invasion of Ukraine.
F-16N The F-16N was an adversary aircraft operated by the United States Navy. It is based on the standard F-16C/D Block 30, is powered by the General Electric F110-GE-100 engine, and is capable of supercruise. The F-16N has a strengthened wing and is capable of carrying an Air Combat Maneuvering Instrumentation (ACMI) pod on the starboard wingtip. Although the single-seat F-16Ns and twin-seat (T)F-16Ns are based on the early-production small-inlet Block 30 F-16C/D airframe, they retain the APG-66 radar of the F-16A/B. In addition, the aircraft's cannon has been removed, as has the airborne self-protection jammer (ASPJ), and they carry no missiles. Their EW fit consists of an ALR-69 radar warning receiver (RWR) and an ALE-40 chaff/flare dispenser. The F-16Ns and (T)F-16Ns have the standard Air Force tailhook and undercarriage and are not aircraft carrier–capable. Production totaled 26 airframes, of which 22 are single-seat F-16Ns and 4 are twin-seat TF-16Ns. The initial batch of aircraft was in service between 1988 and 1998. At that time, hairline cracks were discovered in several bulkheads, and the Navy did not have the resources to replace them, so the aircraft were eventually retired, with one aircraft sent to the collection of the National Naval Aviation Museum at NAS Pensacola, Florida, and the remainder placed in storage at Davis-Monthan AFB. These aircraft were later replaced by embargoed ex-Pakistani F-16s in 2003. The original inventory of F-16Ns was previously operated by adversary squadrons at NAS Oceana, Virginia; NAS Key West, Florida; and the former NAS Miramar, California. The current F-16A/B aircraft are operated by the Naval Strike and Air Warfare Center at NAS Fallon, Nevada.
F-16V At the 2012 Singapore Air Show, Lockheed Martin unveiled plans for the new F-16V variant, with the V suffix referring to its Viper nickname. It features an AN/APG-83 active electronically scanned array (AESA) radar, a new mission computer and electronic warfare suite, an automated ground collision avoidance system, and various cockpit improvements; this package is an option on current production F-16s and can be retrofitted to most in-service F-16s. The first flight took place on 21 October 2015. Taiwanese media reported that Taiwan and the U.S. both initially invested in the development of the F-16V. Upgrades to Taiwan's F-16 fleet began in January 2017. Bahrain was the first country to confirm the purchase of 16 new F-16 Block 70/72 aircraft. Greece announced the upgrade of 84 F-16C/D Block 52+ and Block 52+ Advanced (Block 52M) aircraft to the latest V (Block 70/72) standard in October 2017. Slovakia announced on 11 July 2018 that it intended to purchase 14 F-16 Block 70/72 aircraft. Lockheed Martin has redesignated the F-16V Block 70 as the "F-21" in its offering for India's fighter requirement. Taiwan's Republic of China Air Force announced on 19 March 2019 that it had formally requested the purchase of an additional 66 F-16V fighters; the Trump administration approved the sale on 20 August 2019. On 14 August 2020, Lockheed Martin was awarded a US$62 billion contract by the US DoD that includes 66 new F-16s for Taiwan at US$8 billion.
QF-16 In September 2013, Boeing and the U.S. Air Force tested an unmanned F-16, with two US Air Force pilots controlling the airplane from the ground as it flew from Tyndall AFB over the Gulf of Mexico.
Related developments
Vought Model 1600 – proposed naval variant
General Dynamics F-16XL – 1980s technology demonstrator
General Dynamics NF-16D VISTA – 1990s experimental fighter
Mitsubishi F-2 – 1990s Japanese multirole fighter based on the F-16
Operators
As of 2024, 2,145 F-16s were in active service around the world.
Argentina – On 18 December 2024, Argentina bought 24 F-16AM/BM aircraft from Denmark, beating a bid to acquire JF-17s from China/Pakistan.
Former operators
Denmark – The Royal Danish Air Force sold 24 F-16s to the Argentine Air Force in 2024 and is donating the rest of its fleet of 19 F-16s to the Ukrainian Air Force.
Italy – The Italian Air Force used up to 30 F-16As and 4 F-16Bs of the Block 15 ADF variant, leased from the United States Air Force, from 2003 to 2012.
Netherlands – The Royal Netherlands Air Force sold 6 F-16s to the Royal Jordanian Air Force and 36 F-16s to the Chilean Air Force in 2005, and is donating the rest of its fleet of 42 aircraft to Ukraine.
Norway – On 6 January 2022, Norway announced that all Royal Norwegian Air Force (RNoAF) F-16s had been retired and replaced with the F-35. The RNoAF sold 32 of its F-16s to the Romanian Air Force, with the remaining operational aircraft donated to Ukraine.
Future operators
Bulgaria – On 3 June 2019, the US State Department approved the possible sale of 8 F-16 Block 70s to Bulgaria. On 26 July, the deal was approved by the Bulgarian parliament and President Rumen Radev. In November 2022, the purchase of a further 8 F-16 Block 70 fighters, along with spares, weapons, and other systems, was approved for delivery in 2027.
Notable accidents and incidents
The F-16 has been involved in over 670 hull-loss accidents as of January 2020.
On 8 May 1975, during practice for a 9-g aerial display maneuver at Fort Worth, Texas, prior to the aircraft being sent to the Paris Air Show, the second YF-16 (tail number 72-1568) suffered a jammed main landing gear. The test pilot, Neil Anderson, had to perform an emergency gear-up landing and chose to do so on grass, hoping to minimize damage and avoid injuring any observers. The aircraft was only slightly damaged, but because of the mishap, the first prototype was sent to the Paris Air Show in its place.
On 15 November 1982, while on a training flight out of Kunsan Air Base in South Korea, USAF Captain Ted Harduvel died when he crashed inverted into a mountain ridge. In 1985, Harduvel's widow filed a lawsuit against General Dynamics claiming an electrical malfunction, not pilot error, as the cause, and a jury awarded the plaintiff damages. However, in 1989, the U.S. Court of Appeals ruled that the contractor had immunity from lawsuits, overturning the previous judgment, and remanded the case to the trial court "for entry of judgment in favor of General Dynamics". The accident and subsequent trial were the subject of the 1992 film Afterburn.
On 23 March 1994, during a joint Army-Air Force exercise at Pope AFB, North Carolina, F-16D (AF Serial No. 88-0171) of the 23d Fighter Wing / 74th Fighter Squadron was simulating an engine-out approach when it collided with a USAF C-130E. Both F-16 crew members ejected, but their aircraft, on full afterburner, continued on an arc towards Green Ramp and struck a USAF C-141 that was being boarded by US Army paratroopers. This accident resulted in 24 fatalities and at least 100 others injured. It has since been known as the "Green Ramp disaster".
On 15 September 2003, a USAF Thunderbirds F-16C crashed during an air show at Mountain Home AFB, Idaho. Captain Christopher Stricklin attempted a "split S" maneuver based on an incorrect mean-sea-level altitude of the airfield. Having begun the maneuver at a lower height above the ground than required, Stricklin had insufficient altitude to complete it, but was able to guide the aircraft away from spectators and ejected less than one second before impact. Stricklin survived with only minor injuries; the aircraft was destroyed. USAF procedure for demonstration "split S" maneuvers was changed, requiring both pilots and controllers to use above-ground-level (AGL) altitudes.
On 26 January 2015, a Greek F-16D crashed while performing a NATO training exercise in Albacete, Spain. Both crew members and nine French soldiers on the ground died when it crashed in the flight line, destroying or damaging two Italian AMXs, two French Alpha jets, and one French Mirage 2000. Investigations suggested that the accident was due to an erroneous rudder setting that was caused by loose papers in the cockpit.
On 7 July 2015, an F-16CJ collided with a Cessna 150M over Moncks Corner, South Carolina, U.S. The pilot of the F-16 ejected safely, but both people in the Cessna were killed.
On 11 October 2018, an F-16 MLU from the 2nd Tactical Wing of the Belgian Air Component, parked on the apron at Florennes Air Station, was hit by a gun burst from a nearby F-16 whose cannon was fired inadvertently during maintenance. The aircraft caught fire and was destroyed, while two other F-16s were damaged and two maintenance personnel were treated for aural trauma.
On 11 March 2020, a Pakistani F-16AM (Serial No. 92730) of No. 9 Squadron (Pakistan Air Force) crashed in the Shakarparian area of Islamabad during rehearsals for the Pakistan Day Parade. The aircraft crashed while executing an aerobatic loop, killing the pilot, Wing Commander Noman Akram, who was also the Commanding Officer of No. 9 Squadron "Griffins". A board of inquiry ordered by the Pakistan Air Force later found that the pilot had ample opportunity to eject but chose instead to try to save the aircraft and avoid civilian casualties on the ground. Videos taken by locals show his F-16AM crashing into woods. He was hailed as a hero in Pakistan and the incident drew some international attention.
On 6 May 2023, a U.S. Air Force F-16C of the 8th Fighter Wing crashed in a field near Osan Air Base in South Korea during a daytime training sortie. The pilot safely ejected from the aircraft.
On 8 May 2024, an F-16C of the Republic of Singapore Air Force crashed during takeoff within Tengah Air Base. The pilot ejected successfully without major injuries. The cause was later identified as the malfunction of two of the aircraft's three primary pitch-rate gyroscopes. Lockheed Martin described this as a "rare occurrence": because the two failed gyroscopes gave similar erroneous inputs, the digital flight control computer rejected the inputs from the correctly functioning primary gyroscope, and also from the backup gyroscope when it was activated in response.
Aircraft on display
As newer variants have entered service, many examples of older F-16 models have been preserved for display worldwide, particularly in Europe and the United States.
Specifications (F-16C Block 50 and 52)
| Technology | Specific aircraft | null |
11659 | https://en.wikipedia.org/wiki/Fourier%20analysis | Fourier analysis | In mathematics, Fourier analysis () is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series, and is named after Joseph Fourier, who showed that representing a function as a sum of trigonometric functions greatly simplifies the study of heat transfer.
The subject of Fourier analysis encompasses a vast spectrum of mathematics. In the sciences and engineering, the process of decomposing a function into oscillatory components is often called Fourier analysis, while the operation of rebuilding the function from these pieces is known as Fourier synthesis. For example, determining what component frequencies are present in a musical note would involve computing the Fourier transform of a sampled musical note. One could then re-synthesize the same sound by including the frequency components as revealed in the Fourier analysis. In mathematics, the term Fourier analysis often refers to the study of both operations.
The decomposition process itself is called a Fourier transformation. Its output, the Fourier transform, is often given a more specific name, which depends on the domain and other properties of the function being transformed. Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known as harmonic analysis. Each transform used for analysis (see list of Fourier-related transforms) has a corresponding inverse transform that can be used for synthesis.
To use Fourier analysis, data must be equally spaced. Different approaches have been developed for analyzing unequally spaced data, notably the least-squares spectral analysis (LSSA) methods that use a least squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most widely used spectral method in science, generally boosts long-periodic noise in long gapped records; LSSA mitigates such problems.
Applications
Fourier analysis has many scientific applications – in physics, partial differential equations, number theory, combinatorics, signal processing, digital image processing, probability theory, statistics, forensics, option pricing, cryptography, numerical analysis, acoustics, oceanography, sonar, optics, diffraction, geometry, protein structure analysis, and other areas.
This wide applicability stems from many useful properties of the transforms:
The transforms are linear operators and, with proper normalization, are unitary as well (a property known as Parseval's theorem or, more generally, as the Plancherel theorem, and most generally via Pontryagin duality).
The transforms are usually invertible.
The exponential functions are eigenfunctions of differentiation, which means that this representation transforms linear differential equations with constant coefficients into ordinary algebraic ones. Therefore, the behavior of a linear time-invariant system can be analyzed at each frequency independently.
By the convolution theorem, Fourier transforms turn the complicated convolution operation into simple multiplication, which means that they provide an efficient way to compute convolution-based operations such as signal filtering, polynomial multiplication, and multiplying large numbers; a short numerical sketch of this follows the list.
The discrete version of the Fourier transform (see below) can be evaluated quickly on computers using fast Fourier transform (FFT) algorithms.
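As a concrete illustration of the convolution and FFT properties listed above, the following sketch (the arrays x and h and their values are arbitrary examples, not taken from any source) shows that pointwise multiplication of FFTs reproduces a direct linear convolution:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])   # arbitrary example signal
    h = np.array([0.5, -1.0, 0.25])      # arbitrary example filter kernel

    n = len(x) + len(h) - 1              # length of the full linear convolution
    # Convolution theorem: convolution in time equals pointwise multiplication
    # in frequency, provided both sequences are zero-padded to length n.
    fast = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)
    direct = np.convolve(x, h)

    print(np.allclose(fast, direct))     # prints: True

For long sequences, this FFT route reduces the cost of a convolution from O(N^2) to O(N log N) operations.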
In forensics, laboratory infrared spectrophotometers use Fourier transform analysis to measure the wavelengths of light at which a material absorbs in the infrared spectrum. The FT method is used to decode the measured signals and record the wavelength data. By using a computer, these Fourier calculations are rapidly carried out, so that a computer-operated FT-IR instrument can produce an infrared absorption pattern comparable to that of a prism instrument in a matter of seconds.
Fourier transformation is also useful as a compact representation of a signal. For example, JPEG compression uses a variant of the Fourier transformation (discrete cosine transform) of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated, so that the remaining components can be stored very compactly. In image reconstruction, each image square is reassembled from the preserved approximate Fourier-transformed components, which are then inverse-transformed to produce an approximation of the original image.
In signal processing, the Fourier transform often takes a time series or a function of continuous time, and maps it into a frequency spectrum. That is, it takes a function from the time domain into the frequency domain; it is a decomposition of a function into sinusoids of different frequencies; in the case of a Fourier series or discrete Fourier transform, the sinusoids are harmonics of the fundamental frequency of the function being analyzed.
When a function is a function of time and represents a physical signal, the transform has a standard interpretation as the frequency spectrum of the signal. The magnitude of the resulting complex-valued function at frequency represents the amplitude of a frequency component whose initial phase is given by the angle of (polar coordinates).
Fourier transforms are not limited to functions of time, and temporal frequencies. They can equally be applied to analyze spatial frequencies, and indeed for nearly any function domain. This justifies their use in such diverse branches as image processing, heat conduction, and automatic control.
When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate narrowband components of a compound waveform, concentrating them for easier detection or removal. A large family of signal processing techniques consist of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.
Some examples include:
Equalization of audio recordings with a series of bandpass filters;
Digital radio reception without a superheterodyne circuit, as in a modern cell phone or radio scanner;
Image processing to remove periodic or anisotropic artifacts such as jaggies from interlaced video, strip artifacts from strip aerial photography, or wave patterns from radio frequency interference in a digital camera;
Cross correlation of similar images for co-alignment;
X-ray crystallography to reconstruct a crystal structure from its diffraction pattern;
Fourier-transform ion cyclotron resonance mass spectrometry to determine the mass of ions from the frequency of cyclotron motion in a magnetic field;
Many other forms of spectroscopy, including infrared and nuclear magnetic resonance spectroscopies;
Generation of sound spectrograms used to analyze sounds;
Passive sonar used to classify targets based on machinery noise.
Variants of Fourier analysis
(Continuous) Fourier transform
Most often, the unqualified term Fourier transform refers to the transform of functions of a continuous real argument, and it produces a continuous function of frequency, known as a frequency distribution. One function is transformed into another, and the operation is reversible. When the domain of the input (initial) function is time (), and the domain of the output (final) function is ordinary frequency, the transform of function at frequency is given by the complex number:
Evaluating this quantity for all values of produces the frequency-domain function. Then can be represented as a recombination of complex exponentials of all possible frequencies:
which is the inverse transform formula. The complex number, conveys both amplitude and phase of frequency
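For concreteness, in the ordinary-frequency convention, and writing s for the time-domain function and S for its transform (symbol names chosen here purely for illustration), the analysis and synthesis formulas read:

    S(f) = \int_{-\infty}^{\infty} s(t)\, e^{-i 2\pi f t}\, dt
    s(t) = \int_{-\infty}^{\infty} S(f)\, e^{i 2\pi f t}\, df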
See Fourier transform for much more information, including:
conventions for amplitude normalization and frequency scaling/units
transform properties
tabulated transforms of specific functions
an extension/generalization for functions of multiple dimensions, such as images.
Fourier series
The Fourier transform of a periodic function, with period becomes a Dirac comb function, modulated by a sequence of complex coefficients:
(where is the integral over any interval of length ).
The inverse transform, known as Fourier series, is a representation of in terms of a summation of a potentially infinite number of harmonically related sinusoids or complex exponential functions, each with an amplitude and phase specified by one of the coefficients:
Any can be expressed as a periodic summation of another function, :
and the coefficients are proportional to samples of at discrete intervals of :
Note that any whose transform has the same discrete sample values can be used in the periodic summation. A sufficient condition for recovering (and therefore ) from just these samples (i.e. from the Fourier series) is that the non-zero portion of be confined to a known interval of duration which is the frequency domain dual of the Nyquist–Shannon sampling theorem.
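With the same illustrative notation (a function s of period P and coefficients S[k]), the coefficient and series formulas in the ordinary-frequency convention read:

    S[k] = \frac{1}{P} \int_{P} s(t)\, e^{-i 2\pi (k/P) t}\, dt
    s(t) = \sum_{k=-\infty}^{\infty} S[k]\, e^{i 2\pi (k/P) t}

where \int_{P} denotes integration over any interval of length P.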
See Fourier series for more information, including the historical development.
Discrete-time Fourier transform (DTFT)
The DTFT is the mathematical dual of the time-domain Fourier series. Thus, a convergent periodic summation in the frequency domain can be represented by a Fourier series, whose coefficients are samples of a related continuous time function:
which is known as the DTFT. Thus the DTFT of the sequence is also the Fourier transform of the modulated Dirac comb function.
The Fourier series coefficients (and inverse transform), are defined by:
Parameter corresponds to the sampling interval, and this Fourier series can now be recognized as a form of the Poisson summation formula. Thus we have the important result that when a discrete data sequence, is proportional to samples of an underlying continuous function, one can observe a periodic summation of the continuous Fourier transform, Note that any with the same discrete sample values produces the same DTFT. But under certain idealized conditions one can theoretically recover and exactly. A sufficient condition for perfect recovery is that the non-zero portion of be confined to a known frequency interval of width When that interval is the applicable reconstruction formula is the Whittaker–Shannon interpolation formula. This is a cornerstone in the foundation of digital signal processing.
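Again with illustrative symbols (samples s[n] taken at interval T), a common way to write the DTFT and its inverse is:

    S_{1/T}(f) = \sum_{n=-\infty}^{\infty} s[n]\, e^{-i 2\pi f n T}
    s[n] = T \int_{1/T} S_{1/T}(f)\, e^{i 2\pi f n T}\, df

where S_{1/T} is periodic with period 1/T and the integral is taken over any interval of that length.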
Another reason to be interested in is that it often provides insight into the amount of aliasing caused by the sampling process.
Applications of the DTFT are not limited to sampled functions. See Discrete-time Fourier transform for more information on this and other topics, including:
normalized frequency units
windowing (finite-length sequences)
transform properties
tabulated transforms of specific functions
Discrete Fourier transform (DFT)
Similar to a Fourier series, the DTFT of a periodic sequence, with period , becomes a Dirac comb function, modulated by a sequence of complex coefficients (see ):
(where is the sum over any sequence of length )
The sequence is customarily known as the DFT of one cycle of It is also -periodic, so it is never necessary to compute more than coefficients. The inverse transform, also known as a discrete Fourier series, is given by:
where is the sum over any sequence of length
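Written out with illustrative symbols (an N-periodic sequence s_N and coefficients S[k]), the DFT and its inverse in one common convention are:

    S[k] = \sum_{n} s_N[n]\, e^{-i 2\pi (k/N) n}
    s_N[n] = \frac{1}{N} \sum_{k} S[k]\, e^{i 2\pi (k/N) n}

with each sum taken over any N consecutive indices.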
When is expressed as a periodic summation of another function:
and
the coefficients are samples of at discrete intervals of :
Conversely, when one wants to compute an arbitrary number of discrete samples of one cycle of a continuous DTFT, it can be done by computing the relatively simple DFT of as defined above. In most cases, is chosen equal to the length of the non-zero portion of Increasing known as zero-padding or interpolation, results in more closely spaced samples of one cycle of Decreasing causes overlap (adding) in the time-domain (analogous to aliasing), which corresponds to decimation in the frequency domain. (see ) In most cases of practical interest, the sequence represents a longer sequence that was truncated by the application of a finite-length window function or FIR filter array.
The DFT can be computed using a fast Fourier transform (FFT) algorithm, which makes it a practical and important transformation on computers.
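A minimal numerical sketch of this equivalence (all names and values below are illustrative): evaluating the DFT directly from its defining sum gives the same result as the FFT, only with O(N^2) rather than O(N log N) operations.

    import numpy as np

    N = 8
    s = np.random.default_rng(0).standard_normal(N)  # arbitrary real sequence

    k = np.arange(N).reshape(-1, 1)                  # frequency indices (column)
    n = np.arange(N)                                 # time indices (row)
    # DFT evaluated directly from its defining sum, O(N^2) operations.
    direct = (s * np.exp(-2j * np.pi * k * n / N)).sum(axis=1)

    print(np.allclose(direct, np.fft.fft(s)))        # prints: True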
See Discrete Fourier transform for much more information, including:
transform properties
applications
tabulated transforms of specific functions
Summary
For periodic functions, both the Fourier transform and the DTFT comprise only a discrete set of frequency components (Fourier series), and the transforms diverge at those frequencies. One common practice (not discussed above) is to handle that divergence via Dirac delta and Dirac comb functions. But the same spectral information can be discerned from just one cycle of the periodic function, since all the other cycles are identical. Similarly, finite-duration functions can be represented as a Fourier series, with no actual loss of information except that the periodicity of the inverse transform is a mere artifact.
It is common in practice for the duration of s(•) to be limited to the period, or . But these formulas do not require that condition.
Symmetry properties
When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:
From this, various relationships are apparent, for example:
The transform of a real-valued function is the conjugate symmetric function Conversely, a conjugate symmetric transform implies a real-valued time-domain.
The transform of an imaginary-valued function is the conjugate antisymmetric function and the converse is true.
The transform of a conjugate symmetric function is the real-valued function and the converse is true.
The transform of a conjugate antisymmetric function is the imaginary-valued function and the converse is true.
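For instance, the first of these relationships can be written explicitly (using the same illustrative s and S as above): if s(t) is real-valued, then

    S(-f) = \overline{S(f)}

so the magnitude |S(f)| is an even function of frequency and the phase of S(f) is an odd function.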
History
An early form of harmonic series dates back to ancient Babylonian mathematics, where they were used to compute ephemerides (tables of astronomical positions).
The Classical Greek concepts of deferent and epicycle in the Ptolemaic system of astronomy were related to Fourier series (see ).
In modern times, variants of the discrete Fourier transform were used by Alexis Clairaut in 1754 to compute an orbit, which has been described as the first formula for the DFT, and in 1759 by Joseph Louis Lagrange, in computing the coefficients of a trigonometric series for a vibrating string. Technically, Clairaut's work was a cosine-only series (a form of discrete cosine transform), while Lagrange's work was a sine-only series (a form of discrete sine transform); a true cosine+sine DFT was used by Gauss in 1805 for trigonometric interpolation of asteroid orbits.
Euler and Lagrange both discretized the vibrating string problem, using what would today be called samples.
An early modern development toward Fourier analysis was the 1770 paper Réflexions sur la résolution algébrique des équations by Lagrange, which in the method of Lagrange resolvents used a complex Fourier decomposition to study the solution of a cubic:
Lagrange transformed the roots into the resolvents:
where is a cubic root of unity, which is the DFT of order 3.
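Written out explicitly (with r_1, r_2, r_3 denoting the three roots and \zeta = e^{2\pi i/3} a primitive cube root of unity), the resolvents are:

    (r_1, r_2, r_3) \mapsto \left( r_1 + r_2 + r_3,\; r_1 + \zeta r_2 + \zeta^2 r_3,\; r_1 + \zeta^2 r_2 + \zeta r_3 \right)

which is precisely a DFT of order 3 applied to the vector of roots.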
A number of authors, notably Jean le Rond d'Alembert, and Carl Friedrich Gauss used trigonometric series to study the heat equation, but the breakthrough development was the 1807 paper Mémoire sur la propagation de la chaleur dans les corps solides by Joseph Fourier, whose crucial insight was to model all functions by trigonometric series, introducing the Fourier series.
Historians are divided as to how much to credit Lagrange and others for the development of Fourier theory: Daniel Bernoulli and Leonhard Euler had introduced trigonometric representations of functions, and Lagrange had given the Fourier series solution to the wave equation, so Fourier's contribution was mainly the bold claim that an arbitrary function could be represented by a Fourier series.
The subsequent development of the field is known as harmonic analysis, and is also an early instance of representation theory.
The first fast Fourier transform (FFT) algorithm for the DFT was discovered around 1805 by Carl Friedrich Gauss when interpolating measurements of the orbit of the asteroids Juno and Pallas, although that particular FFT algorithm is more often attributed to its modern rediscoverers Cooley and Tukey.
Time–frequency transforms
In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information.
As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, the Gabor transform or fractional Fourier transform (FRFT), or can use different functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform.
Fourier transforms on arbitrary locally compact abelian topological groups
The Fourier variants can also be generalized to Fourier transforms on arbitrary locally compact Abelian topological groups, which are studied in harmonic analysis; there, the Fourier transform takes functions on a group to functions on the dual group. This treatment also allows a general formulation of the convolution theorem, which relates Fourier transforms and convolutions. | Mathematics | Calculus and analysis | null |
11665 | https://en.wikipedia.org/wiki/Filtration | Filtration | Filtration is a physical separation process that separates solid matter and fluid from a mixture using a filter medium that has a complex structure through which only the fluid can pass. Solid particles that cannot pass through the filter medium are described as oversize and the fluid that passes through is called the filtrate. Oversize particles may form a filter cake on top of the filter and may also block the filter lattice, preventing the fluid phase from crossing the filter, known as blinding. The size of the largest particles that can successfully pass through a filter is called the effective pore size of that filter. The separation of solid and fluid is imperfect; solids will be contaminated with some fluid and filtrate will contain fine particles (depending on the pore size, filter thickness and biological activity). Filtration occurs both in nature and in engineered systems; there are biological, geological, and industrial forms. In everyday usage the verb "strain" is more often used; for example, using a colander to drain cooking water from cooked pasta.
Oil filtration refers to the method of purifying oil by removing impurities that can degrade its quality. Contaminants can enter the oil through various means, including wear and tear of machinery components, environmental factors, and improper handling during oil changes. The primary goal of oil filtration is to enhance the oil’s performance, thereby protecting the machinery and extending its service life.
Filtration is also used to describe biological and physical systems that not only separate solids from a fluid stream but also remove chemical species and biological organisms by entrainment, phagocytosis, adsorption and absorption. Examples include slow sand filters and trickling filters. It is also used as a general term for microphagy, in which organisms use a variety of means to filter small food particles from their environment. Examples range from the microscopic Vorticella up to the basking shark, one of the largest fishes, and the baleen whales, all of which are described as filter feeders.
Physical processes
Filtration is used to separate particles and fluid in a suspension, where the fluid can be a liquid, a gas or a supercritical fluid. Depending on the application, either one or both of the components may be isolated.
Filtration, as a physical operation, enables materials of different chemical compositions to be separated. A solvent is chosen which dissolves one component while not dissolving the other. By dissolving the mixture in the chosen solvent, one component will go into the solution and pass through the filter, while the other will be retained.
Filtration is widely used in chemical engineering. It may be combined with other unit operations to process the feed stream, as in the biofilter, which is a combined filter and biological digestion device.
Filtration differs from sieving, where separation occurs at a single perforated layer (a sieve). In sieving, particles that are too big to pass through the holes of the sieve are retained (see particle size distribution). In filtration, a multilayer lattice retains those particles that are unable to follow the tortuous channels of the filter. Oversize particles may form a cake layer on top of the filter and may also block the filter lattice, preventing the fluid phase from crossing the filter (blinding). Commercially, the term filter is applied to membranes where the separation lattice is so thin that the surface becomes the main zone of particle separation, even though these products might be described as sieves.
Filtration differs from adsorption, where separation relies on surface charge. Some adsorption devices containing activated charcoal and ion-exchange resin are commercially called filters, although filtration is not their principal mechanical function.
Filtration differs from removal of magnetic contaminants from fluids with magnets (typically lubrication oil, coolants and fuel oils) because there is no filter medium. Commercial devices called "magnetic filters" are sold, but the name reflects their use, not their mode of operation.
In biological filters, oversize particulates are trapped and ingested and the resulting metabolites may be released. For example, in animals (including humans), renal filtration removes waste from the blood, and in water treatment and sewage treatment, undesirable constituents are removed by adsorption into a biological film grown on or in the filter medium, as in slow sand filtration.
Methods
Filters may be used for the purpose of removing unwanted liquid from a solid residue, cleaning unwanted solids from a liquid, or simply to separate the solid from the liquid.
There are many different methods of filtration; all aim to attain the separation of substances. Separation is achieved by some form of interaction between the substance or objects to be removed and the filter. The substance that is to pass through the filter must be a fluid, i.e. a liquid or gas. Methods of filtration vary depending on the location of the targeted material, i.e. whether it is dissolved in the fluid phase or suspended as a solid.
There are several laboratory filtration techniques, chosen according to the desired outcome: hot, cold and vacuum filtration. The usual aims are the removal of impurities from a mixture or the isolation of solids from a mixture.
Hot filtration is mainly used to separate solids from a hot solution, in order to prevent crystal formation in the filter funnel and other apparatus that come into contact with the solution. To this end, the apparatus and the solution are kept heated, preventing the rapid decrease in temperature that would lead to crystallisation of the solids in the funnel and hinder the filtration process.
One of the most important measures for effective hot filtration is the use of a stemless filter funnel. The absence of a stem reduces the area of contact between the solution and the funnel, and so prevents recrystallization of the solid in the funnel from interfering with filtration.
Cold filtration method is the use of an ice bath to rapidly cool the solution to be crystallized rather than leaving it to cool slowly in the room atmosphere. This technique results in the formation of very small crystals as opposed to getting large crystals by cooling the solution at room temperature.
Vacuum filtration technique is mostly preferred for small batches of solution to dry small crystals quickly. This method requires a Büchner funnel, filter paper of a smaller diameter than the funnel, Büchner flask, and rubber tubing to connect to a vacuum source.
Centrifugal filtration is carried out by rapidly rotating the substance to be filtered. The more dense material is separated from the less dense matter by the horizontal rotation.
Gravity filtration is the process of pouring the mixture from a higher location to a lower one. It is frequently accomplished via simple filtration, which involves placing filter paper in a glass funnel with the liquid passing through by gravity while the insoluble solid particles are caught by the filter paper. Filter cones, fluted filters, or filtering pipets can all be employed, depending on the amount of the substance at hand. Gravity filtration is in widespread everyday use, for example for straining cooking water from food, or removing contaminants from a liquid.
Filtering force
Only when a driving force is supplied will the fluid to be filtered be able to flow through the filter media. Gravity, centrifugation, applying pressure to the fluid above the filter, applying a vacuum below the filter, or a combination of these factors may all contribute to this force. In both straightforward laboratory filtrations and massive sand-bed filters, gravitational force alone may be utilized. Centrifuges with a bowl holding a porous filter media can be thought of as filters in which a centrifugal force several times stronger than gravity replaces gravitational force. A partial vacuum is typically provided to the container below the filter media when laboratory filtration is challenging to speed up the filtering process. Depending on the type of filter being used, the majority of industrial filtration operations employ pressure or vacuum to speed up filtering and reduce the amount of equipment needed.
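As a rough worked comparison with gravity (the numbers here are illustrative only), the relative centrifugal force in a bowl of radius r spinning at N revolutions per second is

    \mathrm{RCF} = \frac{\omega^2 r}{g} = \frac{4\pi^2 N^2 r}{g}

so a bowl of radius 0.1 m spinning at 3000 rpm (50 revolutions per second) experiences roughly 4\pi^2 \times 50^2 \times 0.1 / 9.81 \approx 1000 times the force of gravity.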
Filter media
Filter media are the materials that perform the separation.
Two main types of filter media are employed in laboratories:
Surface filters are solid sieves with a mesh to trap solid particles, sometimes with the aid of filter paper (e.g. Büchner funnel, belt filter, rotary vacuum-drum filter, cross-flow filters, screen filter).
Depth filters, beds of granular material which retain the solid particles as they pass (e.g. sand filter).
Surface filters allow the solid particles, i.e. the residue, to be collected intact; depth filters do not. However, the depth filter is less prone to clogging due to the greater surface area where the particles can be trapped. Also, when the solid particles are very fine, it is often cheaper and easier to discard the contaminated granules than to clean the solid sieve.
Filter media can be cleaned by rinsing with solvents or detergents. Alternatively, in engineering applications such as swimming pool water treatment plants, they may be cleaned by backwashing. Self-cleaning screen filters utilize point-of-suction backwashing to clean the screen without interrupting system flow.
Achieving flow through the filter
Fluids flow through a filter due to a pressure difference—fluid flows from the high-pressure side to the low-pressure side of the filter. The simplest method to achieve this is by gravity which can be seen in the coffeemaker example. In the laboratory, pressure in the form of compressed air on the feed side (or vacuum on the filtrate side) may be applied to make the filtration process faster, though this may lead to clogging or the passage of fine particles. Alternatively, the liquid may flow through the filter by the force exerted by a pump, a method commonly used in industry when a reduced filtration time is important. In this case, the filter need not be mounted vertically.
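The pressure-driven flow described above is often approximated, for laminar flow through a porous medium, by Darcy's law (an idealization; the symbols are the conventional ones rather than anything defined in this article):

    Q = \frac{k\, A\, \Delta P}{\mu\, L}

where Q is the volumetric flow rate, k the permeability of the medium, A its cross-sectional area, \Delta P the pressure difference across it, \mu the fluid viscosity, and L the thickness of the medium. Doubling the pressure difference therefore roughly doubles the flow, until the growing filter cake changes the effective k and L.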
Filter aid
Certain filter aids may be used to aid filtration. These are often incompressible diatomaceous earth, or kieselguhr, which is composed primarily of silica. Also used are wood cellulose and other inert porous solids such as the cheaper and safer perlite. Activated carbon is often used in industrial applications that require changes in the filtrate's properties, such as altering colour or odour.
These filter aids can be used in two different ways. They can be used as a precoat before the slurry is filtered. This will prevent gelatinous-type solids from plugging the filter medium and also give a clearer filtrate. They can also be added to the slurry before filtration. This increases the porosity of the cake and reduces the resistance of the cake during filtration. In a rotary filter, the filter aid may be applied as a precoat; subsequently, thin slices of this layer are sliced off with the cake.
The use of filter aids is usually limited to cases where the cake is discarded or where the precipitate can be chemically separated from the filter.
Alternatives
Filtration is a more efficient method for the separation of mixtures than decantation but is much more time-consuming. If very small amounts of solution are involved, most of the solution may be soaked up by the filter medium.
An alternative to filtration is centrifugation. Instead of filtering the mixture of solid and liquid particles, the mixture is centrifuged to force the (usually) denser solid to the bottom, where it often forms a firm cake. The liquid above can then be decanted. This method is especially useful for separating solids that do not filter well, such as gelatinous or fine particles. These solids can clog or pass through the filter, respectively.
Biological filtration
Biological filtration may take place inside an organism, or the biological component may be grown on a medium in the material being filtered. Removal of solids, emulsified components, organic chemicals and ions may be achieved by ingestion and digestion, adsorption or absorption. Because of the complexity of biological interactions, especially in multi-organism communities, it is often not possible to determine which processes are achieving the filtration result. At the molecular level, it may often be by individual catalytic enzyme actions within an individual organism. The waste products of some organisms may subsequently be broken down by other organisms to extract as much energy as possible, and in so doing reduce complex organic molecules to very simple inorganic species such as water, carbon dioxide and nitrogen.
Excretion
In mammals, reptiles, and birds, the kidneys function by renal filtration whereby the glomerulus selectively removes undesirable constituents such as urea, followed by selective reabsorption of many substances essential for the body to maintain homeostasis. The complete process is termed excretion by urination. Similar but often less complex solutions are deployed in all animals, even the protozoa, where the contractile vacuole provides a similar function.
Biofilms
Biofilms are often complex communities of bacteria, phages, yeasts and often more complex organisms including protozoa, rotifers and annelids which form dynamic and complex, frequently gelatinous films on wet substrates. Such biofilms coat the rocks of most rivers and the sea and they provide the key filtration capability of the Schmutzdecke on the surface of slow sand filters and the film on the filter media of trickling filters which are used to create potable water and treat sewage respectively.
An example of a biofilm is a biological slime, which may be found in lakes, rivers, on rocks, and elsewhere. The use of deliberately cultivated single- or dual-species biofilms is a relatively recent technology, since natural biofilms develop slowly. Using biofilms in the biofiltration process allows desirable biomass and critical nutrients to attach to an immobilized support. Advances in biofiltration methods help remove significant volumes of contaminants from wastewater so that the water may be reused for various processes.
Systems for biologically treating wastewater are crucial for improving both human health and water quality. The growth, structure and function of these biofilms are affected by the filter media on which they form and by other factors, and thorough investigation of the composition, diversity and dynamics of biofilms draws on a variety of traditional and contemporary molecular approaches.
Filter feeders
Filter feeders are organisms that obtain their food by filtering their, generally aquatic, environment. Many of the protozoa are filter feeders, using a range of adaptations, from rigid spikes of protoplasm held in the water flow, as in the suctoria, to various arrangements of beating cilia that direct particles to the mouth; organisms such as Vorticella have a complex ring of cilia which creates a vortex in the flow, drawing particles into the oral cavity. Similar feeding techniques are used by the Rotifera and the Ectoprocta. Many aquatic arthropods are filter feeders. Some use rhythmical beating of abdominal limbs to create a water current to the mouth while the hairs on the legs trap any particles. Others, such as some caddis flies, spin fine webs in the water flow to trap particles.
Examples
Many filtration processes include more than one filtration mechanism, and particulates are often removed from the fluid first to prevent clogging of downstream elements.
Particulate filtration includes:
The coffee filter to separate the coffee infusion from the grounds.
HEPA filters in air conditioning to remove particles from air.
Belt filters to extract precious metals in mining.
Vertical plate filter such as those used in Merrill–Crowe process.
The Nutsche filter is typically used in pharmaceutical applications or batch processes that need to capture solids.
Furnaces use filtration to prevent the furnace elements from fouling with particulates.
A pneumatic conveying system such as an industrial exhaust duct system often employs filtration to stop or slow the flow of unwanted material that is transported, through the use of a baghouse.
In the laboratory, a Büchner funnel is often used, with a filter paper serving as the porous barrier.
Air filters are commonly used to remove airborne particulate matter in building ventilation systems, combustion engines, and industrial processes.
Oil filter in automobiles, often as a canister or cartridge.
Aquarium filter
Straining water from food with a colander
Adsorption filtration removes contaminants by adsorption of the contaminant by the filter medium. This requires intimate contact between the filter medium and the filtrate, and takes time for diffusion to bring the contaminant into direct contact with the medium while passing through it, referred to as . Slower flow also reduces pressure drop across the filter. Applications include:
Carbon dioxide removal from breathing gas in rebreathers and life-support systems using scrubber filters,
Activated carbon filters to remove volatile hydrocarbons, odours, and other contaminants from recirculated breathing gas in closed habitats.
Combined applications include:
Compressed breathing air production, where the air passes through a particulate filter before entering the compressor, removing particles likely to damage the compressor, followed by droplet separation after post-compression cooling and final adsorption filtration of the product to remove gaseous hydrocarbon contaminants and excess water vapour. In some cases prefilters using adsorption media are used to control carbon dioxide levels, pressure swing adsorption may be used to increase the oxygen fraction, and where the risk of carbon monoxide contamination exists, hopcalite catalytic converters may be included in the filtration media of the product. All these processes are broadly referred to as aspects of the filtration of the product.
Potable water treatment using biofilm filtration in slow sand filters.
Wastewater treatment using biofilm filtration using trickling filters.
| Physical sciences | Separation processes | null |
11711 | https://en.wikipedia.org/wiki/Finch | Finch | The true finches are small to medium-sized passerine birds in the family Fringillidae. Finches generally have stout conical bills adapted for eating seeds and nuts and often have colourful plumage. They occupy a great range of habitats where they are usually resident and do not migrate. They have a worldwide native distribution except for Australia and the polar regions. The family Fringillidae contains more than two hundred species divided into fifty genera. It includes the canaries, siskins, redpolls, serins, grosbeaks and euphonias, as well as the morphologically divergent Hawaiian honeycreepers.
Many birds in other families are also commonly called "finches". These groups include the estrildid finches (Estrildidae) of the Old World tropics and Australia; some members of the Old World bunting family (Emberizidae) and the New World sparrow family (Passerellidae); and the Darwin's finches of the Galapagos islands, now considered members of the tanager family (Thraupidae).
Finches and canaries were used in the UK, US and Canada in the coal mining industry to detect carbon monoxide from the eighteenth to twentieth century. This practice ceased in the UK in 1986.
Systematics and taxonomy
The name Fringillidae for the finch family was introduced in 1819 by the English zoologist William Elford Leach in a guide to the contents of the British Museum. The taxonomy of the family, in particular the cardueline finches, has a long and complicated history. The study of the relationship between the taxa has been confounded by the recurrence of similar morphologies due to the convergence of species occupying similar niches. In 1968 the American ornithologist Raymond Andrew Paynter, Jr. wrote:
Limits of the genera and relationships among the species are less understood – and subject to more controversy – in the carduelines than in any other species of passerines, with the possible exception of the estrildines [waxbills].
Beginning around 1990 a series of phylogenetic studies based on mitochondrial and nuclear DNA sequences resulted in substantial revisions in the taxonomy. Several groups of birds that had previously been assigned to other families were found to be related to the finches. The Neotropical Euphonia and the Chlorophonia were formerly placed in the tanager family Thraupidae due to their similar appearance but analysis of mitochondrial DNA sequences revealed that both genera were more closely related to the finches. They are now placed in a separate subfamily Euphoniinae within the Fringillidae. The Hawaiian honeycreepers were at one time placed in their own family, Drepanididae but were found to be closely related to the Carpodacus rosefinches and are now placed within the Carduelinae subfamily. The three largest genera, Carpodacus, Carduelis and Serinus were found to be polyphyletic. Each was split into monophyletic genera. The American rosefinches were moved from Carpodacus to Haemorhous. Carduelis was split by moving the greenfinches to Chloris and a large clade into Spinus leaving just three species in the original genus. Thirty seven species were moved from Serinus to Crithagra leaving eight species in the original genus. Today the family Fringillidae is divided into three subfamilies, the Fringillinae containing a single genus with the chaffinches, the Carduelinae containing 183 species divided into 49 genera, and the Euphoniinae containing the Euphonia and the Chlorophonia.
Although Przewalski's "rosefinch" (Urocynchramus pylzowi) has ten primary flight feathers rather than the nine primaries of other finches, it was sometimes classified in the Carduelinae. It is now assigned to a distinct family, Urocynchramidae, monotypic as to genus and species, and with no particularly close relatives among the Passeroidea.
Fossil record
Fossil remains of true finches are rare, and those that are known can mostly be assigned to extant genera at least. Like the other Passeroidea families, the true finches seem to be of roughly Middle Miocene origin, around 20 to 10 million years ago (Ma). An unidentifiable finch fossil from the Messinian age, around 12 to 7.3 million years ago during the Late Miocene subepoch, has been found at Polgárdi in Hungary.
Description
The smallest "classical" true finches are the Andean siskin (Spinus spinescens) at as little as 9.5 cm (3.8 in) and the lesser goldfinch (Spinus psaltria) at as little as . The largest species is probably the collared grosbeak (Mycerobas affinis) at up to and , although larger lengths, to in the pine grosbeak (Pinicola enucleator), and weights, to in the evening grosbeak (Hesperiphona vespertina), have been recorded in species which are slightly smaller on average. They typically have strong, stubby beaks, which in some species can be quite large; however, Hawaiian honeycreepers are famous for the wide range of bill shapes and sizes brought about by adaptive radiation. All true finches have 9 primary remiges and 12 rectrices. The basic plumage colour is brownish, sometimes greenish; many have considerable amounts of black, while white plumage is generally absent except as wing-bars or other signalling marks. Bright yellow and red carotenoid pigments are commonplace in this family, and thus blue structural colours are rather rare, as the yellow pigments turn the blue color into green. Many, but by no means all true finches have strong sexual dichromatism, the females typically lacking the bright carotenoid markings of males.
Distribution and habitat
The finches have a near-global distribution, being found across the Americas, Eurasia and Africa, as well as some island groups such as the Hawaiian islands. They are absent from Australasia, Antarctica, the Southern Pacific and the islands of the Indian Ocean, although some European species have been widely introduced in Australia and New Zealand.
Finches are typically inhabitants of well-wooded areas, but some can be found on mountains or even in deserts.
Behaviour
The finches are primarily granivorous, but euphoniines include considerable amounts of arthropods and berries in their diet, and Hawaiian honeycreepers evolved to utilize a wide range of food sources, including nectar. The diet of Fringillidae nestlings includes a varying amount of small arthropods. True finches have a bouncing flight like most small passerines, alternating bouts of flapping with gliding on closed wings. Most sing well and several are commonly seen cagebirds; foremost among these is the domesticated canary (Serinus canaria domestica). The nests are basket-shaped and usually built in trees, more rarely in bushes, between rocks or on similar substrate.
List of genera
The family Fringillidae contains 235 species divided into 50 genera and three subfamilies. The subfamily Carduelinae includes 18 extinct Hawaiian honeycreepers and the extinct Bonin grosbeak. See List of Fringillidae species for further details.
Subfamily Fringillinae
Fringilla – 5 species of chaffinch, 2 species of blue chaffinch, and the brambling
Subfamily Carduelinae
Mycerobas – 4 Palearctic grosbeaks
Coccothraustes – 3 species
Eophona – 2 oriental grosbeaks, the Chinese and the Japanese grosbeak
Pinicola – pine grosbeak
Pyrrhula – 8 bullfinch species
Rhodopechys – 2 species, the Asian crimson-winged finch and the African crimson-winged finch
Bucanetes – trumpeter and the Mongolian finch
Agraphospiza – Blanford's rosefinch
Callacanthis – spectacled finch
Pyrrhoplectes – golden-naped finch
Procarduelis – dark-breasted rosefinch
Leucosticte – 6 species of mountain and rosy finches
Carpodacus – 28 Palearctic rosefinch species
Hawaiian honeycreeper group (tribe Drepanidini)
Melamprosops – contains a single extinct species, the po'ouli
Paroreomyza – 3 species, the Oahu alauahio, the Maui alauahio and the extinct kakawahie
Oreomystis – akikiki
Telespiza – 4 species, the Laysan finch, the Nihoa finch, and 2 prehistoric species
Loxioides – 2 species, the palila and a prehistoric species
Rhodacanthis – 2 recently extinct species, the lesser and the greater koa finch, and 2 prehistoric species
Chloridops – extinct species, the Kona grosbeak
Psittirostra – ou
Dysmorodrepanis – extinct species, the Lanai hookbill
Drepanis – 2 extinct species, the Hawaii mamo and the black mamo, and the extant iiwi
Ciridops – single recently extinct species, the Ula-ai-hawane, and 3 prehistoric species
Palmeria – contains a single species, the akohekohe
Himatione – 2 species, the apapane and the extinct Laysan honeycreeper
Viridonia – single extinct species, the greater amakihi
Akialoa – 4 recently extinct species, and 2 prehistoric species
Hemignathus – 4 species, only one of which is extant
Pseudonestor – Maui parrotbill
Magumma – anianiau
Loxops – 5 species, of which one is extinct
Chlorodrepanis – 3 species, the Hawaii, Oahu and Kauai amakihi
Haemorhous – 3 North America rosefinches
Chloris – 6 greenfinches
Rhodospiza – desert finch
Rhynchostruthus – 3 golden-winged grosbeaks
Linurgus – oriole finch
Crithagra – 37 species of canaries, serins and siskins from Africa and the Arabian Peninsula
Linaria – 4 species including the twite and three linnets
Acanthis – 3 redpolls
Loxia – 6 crossbills
Chrysocorythus – 2 species
Carduelis – 3 species including the European goldfinch
Serinus – 8 species including the European serin
Spinus – 20 species including the North American goldfinches and the Eurasian siskin
Subfamily Euphoniinae
Euphonia – 27 species all with euphonia in their English name
Chlorophonia – 5 species all with chlorophonia in their English name
Gallery
| Biology and health sciences | Passerida | null |
11712 | https://en.wikipedia.org/wiki/Facilitated%20diffusion | Facilitated diffusion | Facilitated diffusion (also known as facilitated transport or passive-mediated transport) is the process of spontaneous passive transport (as opposed to active transport) of molecules or ions across a biological membrane via specific transmembrane integral proteins. Being passive, facilitated transport does not directly require chemical energy from ATP hydrolysis in the transport step itself; rather, molecules and ions move down their concentration gradient according to the principles of diffusion.
Facilitated diffusion differs from simple diffusion in several ways:
The transport relies on molecular binding between the cargo and the membrane-embedded channel or carrier protein.
The rate of facilitated diffusion is saturable with respect to the concentration difference between the two phases, unlike free diffusion, which is linear in the concentration difference (a rate law illustrating this is sketched after this list).
The temperature dependence of facilitated transport is substantially different due to the presence of an activated binding event, as compared to free diffusion where the dependence on temperature is mild.
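The saturation behaviour noted in the second point is commonly modelled with a Michaelis–Menten-type rate law (an illustrative simplification; J_max and K_m are generic parameter names, not values from this article):

    J = \frac{J_{\max}\,[S]}{K_m + [S]}

so that at low substrate concentration [S] the flux J grows almost linearly, while at high concentration it levels off at J_max because the finite number of carrier proteins is fully occupied.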
Polar molecules and large ions dissolved in water cannot diffuse freely across the plasma membrane due to the hydrophobic nature of the fatty acid tails of the phospholipids that comprise the lipid bilayer. Only small, non-polar molecules, such as oxygen and carbon dioxide, can diffuse easily across the membrane. Hence, small polar molecules are transported by proteins in the form of transmembrane channels. These channels are gated, meaning that they open and close, and thus regulate the flow of ions or small polar molecules across membranes, sometimes against the osmotic gradient. Larger molecules are transported by transmembrane carrier proteins, such as permeases, that change their conformation as the molecules are carried across (e.g. glucose or amino acids).
Non-polar molecules, such as retinol or lipids, are poorly soluble in water. They are transported through aqueous compartments of cells or through extracellular space by water-soluble carriers (e.g. retinol binding protein). The metabolites are not altered, because no energy is required for facilitated diffusion; only the permease changes its shape in order to transport metabolites. The form of transport through a cell membrane in which a metabolite is modified is called group translocation.
Glucose, sodium ions, and chloride ions are just a few examples of molecules and ions that must efficiently cross the plasma membrane but to which the lipid bilayer of the membrane is virtually impermeable. Their transport must therefore be "facilitated" by proteins that span the membrane and provide an alternative route or bypass mechanism. Some examples of proteins that mediate this process are glucose transporters, organic cation transport proteins, urea transporter, monocarboxylate transporter 8 and monocarboxylate transporter 10.
In vivo model of facilitated diffusion
Many physical and biochemical processes are regulated by diffusion. Facilitated diffusion is one form of diffusion and it is important in several metabolic processes. Facilitated diffusion is the main mechanism behind the binding of transcription factors (TFs) to designated target sites on the DNA molecule. The in vitro model of facilitated diffusion, which takes place outside a living cell, explains the 3-dimensional pattern of diffusion in the cytosol and the 1-dimensional diffusion along the DNA contour. After extensive research on processes occurring outside the cell, this mechanism was generally accepted, but there was a need to verify that it could also take place in vivo, inside living cells. Bauer & Metzler (2013) therefore carried out an experiment using a bacterial genome in which they investigated the average time for TF–DNA binding to occur. After analyzing the time it takes for TFs to diffuse across the contour and cytoplasm of the bacterium's DNA, it was concluded that in vitro and in vivo behaviour are similar, in that the association and dissociation rates of TFs to and from the DNA are similar in both. Also, on the DNA contour, the motion is slower and target sites are easy to localize, while in the cytoplasm, the motion is faster but the TFs are not sensitive to their targets, and so binding is restricted.
Intracellular facilitated diffusion
Single-molecule imaging is an imaging technique which provides the resolution necessary to study transcription factor binding mechanisms in living cells. In prokaryotic bacterial cells such as E. coli, facilitated diffusion is required in order for regulatory proteins to locate and bind to target sites on DNA base pairs. There are 2 main steps involved: the protein binds to a non-specific site on the DNA and then it diffuses along the DNA chain until it locates a target site, a process referred to as sliding. According to Brackley et al. (2013), during the process of protein sliding, the protein searches the entire length of the DNA chain using 3-D and 1-D diffusion patterns. During 3-D diffusion, the high incidence of crowder proteins creates an osmotic pressure which brings searcher proteins (e.g. the Lac repressor) closer to the DNA to increase their attraction and enable them to bind, as well as a steric effect which excludes the crowder proteins from this region (the Lac operator region). Blocker proteins participate in 1-D diffusion only, i.e. they bind to and diffuse along the DNA contour and not in the cytosol.
Facilitated diffusion of proteins on chromatin
The in vivo model mentioned above clearly explains 3-D and 1-D diffusion along the DNA strand and the binding of proteins to target sites on the chain. Just like prokaryotic cells, in eukaryotes, facilitated diffusion occurs in the nucleoplasm on chromatin filaments, accounted for by the switching dynamics of a protein when it is either bound to a chromatin thread or freely diffusing in the nucleoplasm. In addition, given that the chromatin molecule is fragmented, its fractal properties need to be considered. After calculating the search time for a target protein, alternating between the 3-D and 1-D diffusion phases on the chromatin fractal structure, it was deduced that facilitated diffusion in eukaryotes accelerates the searching process and minimizes the searching time by increasing the DNA–protein affinity.
For oxygen
Oxygen's affinity for hemoglobin on red blood cell surfaces enhances this bonding ability. In a system of facilitated diffusion of oxygen, there is a tight relationship between the ligand, which is oxygen, and the carrier, which is either hemoglobin or myoglobin. This mechanism of facilitated diffusion of oxygen by hemoglobin or myoglobin was discovered and pioneered by Wittenberg and Scholander, who carried out experiments to test the steady-state diffusion of oxygen at various pressures. Oxygen-facilitated diffusion occurs in a homogeneous environment where the oxygen pressure can be relatively controlled.
For oxygen diffusion to occur, the oxygen partial pressure must be higher on one side of the membrane than on the other; that is, one side of the membrane must have a higher oxygen concentration. During facilitated diffusion, hemoglobin increases the rate of oxygen diffusion beyond that of free diffusion alone, and this facilitated component arises as oxyhemoglobin molecules are randomly displaced down their own concentration gradient.
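The contribution of the carrier can be sketched with a simple flux calculation. The Python snippet below is illustrative only: it treats the total flux across a layer as the sum of free (Fickian) diffusion of dissolved oxygen, proportional to the partial-pressure difference, and a carrier term proportional to the difference in oxyhemoglobin saturation across the layer. The saturation curve, diffusivities, solubility, and heme concentration are rough placeholder values chosen for order-of-magnitude plausibility; they are not measurements from this article.

def hill_saturation(p_o2, p50=26.0, n=2.7):
    # Approximate hemoglobin O2 saturation (Hill equation, pressure in mmHg).
    return p_o2 ** n / (p_o2 ** n + p50 ** n)

def oxygen_fluxes(p_high, p_low, thickness=1.0,
                  d_o2=1.0, solubility=0.0013,   # dissolved O2, ~mM per mmHg
                  d_hb=0.05, heme_total=20.0):   # heme roughly 20 mM in red cells
    # Free flux: Fick's law for dissolved oxygen.
    free = d_o2 * solubility * (p_high - p_low) / thickness
    # Carrier flux: driven by the oxyhemoglobin gradient across the layer.
    carried = (d_hb * heme_total *
               (hill_saturation(p_high) - hill_saturation(p_low)) / thickness)
    return free, carried

free, carried = oxygen_fluxes(p_high=100.0, p_low=5.0)
print(f"free: {free:.3f}   carrier-mediated: {carried:.3f}   total: {free + carried:.3f}")

With these placeholder numbers the carrier term dominates when the downstream oxygen pressure is low, which is where hemoglobin- and myoglobin-facilitated transport matters most.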
For carbon monoxide
Facilitated diffusion of carbon monoxide is similar to that of oxygen. Carbon monoxide also combines with hemoglobin and myoglobin, but carbon monoxide has a dissociation velocity that is 100 times less than that of oxygen. Its affinity for myoglobin is 40 times higher, and for hemoglobin 250 times higher, compared to oxygen.
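Taking the roughly 250-fold hemoglobin affinity ratio quoted above at face value, the equilibrium competition between the two gases can be estimated with the Haldane relationship, in which the ratio of carboxyhemoglobin to oxyhemoglobin equals that affinity ratio times the ratio of the partial pressures. The short Python snippet below is only an illustration of that relationship; it ignores kinetics and unliganded hemoglobin.

def carboxyhemoglobin_fraction(p_co, p_o2, affinity_ratio=250.0):
    # Haldane relationship: [HbCO]/[HbO2] = M * (pCO / pO2).
    # Returns the fraction of liganded hemoglobin carrying CO.
    r = affinity_ratio * p_co / p_o2
    return r / (1.0 + r)

# Example: a CO partial pressure of 0.1 units against 21 units of O2.
print(f"{carboxyhemoglobin_fraction(0.1, 21.0):.0%} of liganded hemoglobin carries CO")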
For glucose
Since glucose is a large molecule, its diffusion across a membrane is difficult. Hence, it diffuses across membranes through facilitated diffusion, down the concentration gradient. The carrier protein at the membrane binds to the glucose and alters its shape such that the glucose can easily be transported. Movement of glucose into the cell can be rapid or slow depending on the number of membrane-spanning carrier proteins. Glucose can also be transported against its concentration gradient by a sodium-dependent glucose symporter, which uses the sodium gradient as the driving force to accumulate glucose inside cells; facilitated diffusion then helps release the accumulated glucose into the extracellular space adjacent to the blood capillary.
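A defining feature of carrier-mediated transport is that the flux saturates once the carriers are occupied, whereas simple diffusion keeps increasing linearly with the gradient. The Python sketch below illustrates this with a symmetric Michaelis–Menten-style carrier, a common textbook simplification; the Km, maximal flux, and permeability values are arbitrary placeholders and do not describe any particular glucose transporter.

def carrier_flux(c_out, c_in, j_max=1.0, km=5.0):
    # Saturable, carrier-mediated (facilitated) flux: each membrane face
    # contributes a Michaelis-Menten term; net flux is inward minus outward.
    return j_max * c_out / (km + c_out) - j_max * c_in / (km + c_in)

def simple_diffusion(c_out, c_in, permeability=0.02):
    # Non-saturable flux, linear in the concentration difference.
    return permeability * (c_out - c_in)

for c_out in (1.0, 5.0, 20.0, 100.0):
    print(f"[glucose]out = {c_out:>5}: carrier {carrier_flux(c_out, 0.5):.3f}, "
          f"simple {simple_diffusion(c_out, 0.5):.3f}")

At high external glucose the carrier flux levels off near its maximum while the linear term keeps growing, which is the hallmark used to distinguish facilitated diffusion experimentally.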
| Biology and health sciences | Cell processes | Biology |
11715 | https://en.wikipedia.org/wiki/McDonnell%20Douglas%20F-15%20Eagle | McDonnell Douglas F-15 Eagle | The McDonnell Douglas F-15 Eagle is an American twin-engine, all-weather fighter aircraft designed by McDonnell Douglas (now part of Boeing). Following reviews of proposals, the United States Air Force (USAF) selected McDonnell Douglas's design in 1969 to meet the service's need for a dedicated air superiority fighter. The Eagle took its maiden flight in July 1972, and entered service in 1976. It is among the most successful modern fighters, with over 100 victories and no losses in aerial combat, with the majority of the kills by the Israeli Air Force.
The Eagle has been exported to many countries, including Israel, Japan, and Saudi Arabia. Although the F-15 was originally envisioned as a pure air superiority fighter, its design included a secondary ground-attack capability that was largely unused. It proved flexible enough that an improved all-weather strike derivative, the F-15E Strike Eagle, was later developed, entered service in 1989 and has been exported to several nations. Several additional Eagle and Strike Eagle subvariants have been produced for foreign customers, with production of enhanced variants ongoing.
The F-15 was the principal air superiority fighter of the USAF and U.S. allies during the late Cold War and the 1990s, replacing the F-4 Phantom II. The Eagle was first used in combat by the Israeli Air Force in 1979 and saw extensive action in the 1982 Lebanon War. In USAF service, the aircraft saw combat action in the 1991 Gulf War and the conflict over Yugoslavia. The USAF had planned to replace all of its air superiority F-15A/B/C/D with the F-22 Raptor by the 2010s, but the severely reduced procurement pushed the F-15C/D retirement to 2026 and forced the service to supplement the F-22 with an advanced Eagle variant, the F-15EX, in order to retain an adequate number of air superiority fighters. The F-15 remains in service with numerous countries, and the Strike Eagle variant is expected to continue operating in the USAF into the 2030s.
Development
Early studies
The F-15 can trace its origins to the early Vietnam War, when the U.S. Air Force and U.S. Navy fought each other over future tactical aircraft. Defense Secretary Robert McNamara was pressing for both services to use as many common aircraft as possible, even if performance compromises were involved. As part of this policy, the USAF and Navy had embarked on the TFX (F-111) program, aiming to deliver a medium-range interdiction aircraft for the Air Force that would also serve as a long-range interceptor aircraft for the Navy.
In January 1965, Secretary McNamara asked the Air Force to consider a new low-cost tactical fighter design for short-range roles and close air support to replace several types like the F-100 Super Sabre and various light bombers then in service. Several existing designs could fill this role; the Navy favored the Douglas A-4 Skyhawk and LTV A-7 Corsair II, which were pure attack aircraft, while the Air Force was more interested in the Northrop F-5 fighter with a secondary attack capability. The A-4 and A-7 were more capable in the attack role, while the F-5 less so, but could defend itself. If the Air Force chose a pure attack design, maintaining air superiority would be a priority for a new airframe. The next month, a report on light tactical aircraft suggested the Air Force purchase the F-5 or A-7, and consider a new higher-performance aircraft to ensure its air superiority. This point was reinforced after the loss of two Republic F-105 Thunderchief aircraft to obsolete MiG-17s attacking the Thanh Hóa Bridge on 4 April 1965.
In April 1965, Harold Brown, at that time director of the Department of Defense Research and Engineering, stated the favored position was to consider the F-5 and begin studies of an "F-X". These early studies envisioned a production run of 800 to 1,000 aircraft and stressed maneuverability over speed; it also stated that the aircraft would not be considered without some level of ground-attack capability. On 1 August, General Gabriel Disosway took command of Tactical Air Command and reiterated calls for the F-X, but lowered the required performance from Mach 3.0 to 2.5 to lower costs.
An official requirements document for an air superiority fighter was finalized in October 1965, and sent out as a request for proposals to 13 companies on 8 December. Meanwhile, the Air Force chose the A-7 over the F-5 for the support role on 5 November 1965, giving further impetus for an air superiority design as the A-7 lacked any credible air-to-air capability.
Eight companies responded with proposals. Following a downselect, four companies were asked to provide further developments. In total, they developed some 500 design concepts. Typical designs featured variable-sweep wings, weighed over , and had a top speed of Mach 2.7 and a thrust-to-weight ratio of 0.75. When the proposals were studied in July 1966, the aircraft were roughly the size and weight of the TFX F-111, and like that aircraft, were designs that could not be considered an air-superiority fighter.
Smaller, lighter
Through this period, studies of combat over Vietnam were producing worrying results. Theory had stressed long-range combat using missiles and optimized aircraft for this role. The result was highly loaded aircraft with large radar and excellent speed, but limited maneuverability and often lacking a gun. The canonical example was the McDonnell Douglas F-4 Phantom II, used by the USAF, USN, and U.S. Marine Corps to provide air superiority over Vietnam, the only fighter with enough power, range, and maneuverability to be given the primary task of dealing with the threat of Soviet fighters while flying with visual engagement rules.
In practice, due to policy and practical reasons, aircraft were closing to visual range and maneuvering, placing the larger US aircraft at a disadvantage to the much less expensive day fighters such as the MiG-21. Missiles proved to be much less reliable than predicted, especially at close range. Although improved training and the introduction of the M61 Vulcan cannon on the F-4 did much to address the disparity, these early outcomes led to considerable re-evaluation of the 1963 Project Forecast doctrine. This led to John Boyd's energy–maneuverability theory, which stressed that extra power and maneuverability were key aspects of a successful fighter design and these were more important than outright speed. Through tireless championing of the concepts and good timing with the "failure" of the initial F-X project, the "fighter mafia" pressed for a lightweight day fighter that could be built and operated in large numbers to ensure air superiority. In early 1967, they proposed that the ideal design had a thrust-to-weight ratio near 1:1, a maximum speed further reduced to Mach 2.3, a weight of , and a wing loading of .
By this time, the Navy had decided the F-111 would not meet their requirements and began the development of a new dedicated fighter design, the VFAX program. In May 1966, McNamara again asked the forces to study the designs and see whether the VFAX would meet the Air Force's F-X needs. The resulting studies took 18 months and concluded that the desired features were too different; the Navy stressed loiter time and mission flexibility, while the Air Force was now looking primarily for maneuverability.
Focus on air superiority
In 1967, the Soviet Union revealed the Mikoyan-Gurevich MiG-25 at the Domodedovo airfield near Moscow. The MiG-25 was designed as a high-speed, high-altitude interceptor aircraft, and made many performance tradeoffs to excel in this role. Among these was the requirement for very high speed, over Mach 2.8, which demanded the use of stainless steel instead of aluminum for many parts of the aircraft. The added weight demanded a much larger wing to allow the aircraft to operate at the required high altitudes. However, to observers, it appeared outwardly similar to the very large F-X studies, an aircraft with high speed and a large wing offering high maneuverability, leading to serious concerns throughout the Department of Defense and the various arms that the US was being outclassed. The MiG-23 was likewise a subject of concern, and it was generally believed to be a better aircraft than the F-4. The F-X would outclass the MiG-23, but now the MiG-25 appeared to be superior in speed, ceiling, and endurance to all existing US fighters, even the F-X. Thus, an effort to improve the F-X followed.
Both Headquarters USAF and TAC continued to call for a multipurpose aircraft, while both Disosway and Air Chief of Staff Bruce K. Holloway pressed for a pure air-superiority design that would be able to meet the expected performance of the MiG-25. During the same period, the Navy had ended its VFAX program and instead accepted a proposal from Grumman for a smaller and more maneuverable design known as VFX, later becoming the Grumman F-14 Tomcat. VFX was considerably closer to the evolving F-X requirements. The Air Force in-fighting was eventually ended by the worry that the Navy's VFAX would be forced on them; in May 1968, it was stated that "We finally decided – and I hope there is no one who still disagrees – that this aircraft is going to be an air superiority fighter".
In September 1968, a request for proposals was released to major aerospace companies. These requirements called for a single-seat fighter with a maximum take-off weight of for the air-to-air role, a maximum speed of Mach 2.5, and a thrust-to-weight ratio of nearly 1:1 at mission weight. It also called for a twin-engined arrangement, as this was believed to respond to throttle changes more rapidly and might offer commonality with the Navy's VFX program. However, details of the avionics were left largely undefined, as it was not clear whether to build a larger aircraft with a powerful radar that could detect the enemy at longer ranges, or a smaller aircraft that would be more difficult for the enemy to detect.
Four companies submitted proposals, with the Air Force eliminating General Dynamics and awarding contracts to Fairchild Republic, North American Rockwell, and McDonnell Douglas for the definition phase in December 1968. The companies submitted technical proposals by June 1969. The Air Force announced the selection of McDonnell Douglas on 23 December 1969; like the Navy's VFX, the F-X skipped much of the prototype phase and jumped straight into full-scale development to save time and avoid potential program cancellation. The winning design resembled the twin-tailed F-14, but with fixed wings; both designs were based on configurations studied in wind-tunnel testing by NASA.
The Eagle's initial versions were the F-15 single-seat variant and TF-15 twin-seat variant. (After the F-15C was first flown, the designations were changed to "F-15A" and "F-15B"). These versions would be powered by new Pratt & Whitney F100 engines to achieve a combat thrust-to-weight ratio in excess of 1:1. A proposed 25-mm Ford-Philco GAU-7 cannon with caseless ammunition suffered development problems. It was dropped in favor of the standard M61 Vulcan gun. The F-15 used conformal carriage of four Sparrow missiles like the Phantom. The fixed wing was put onto a flat, wide fuselage that also provided an effective lifting body surface. The airframe was designed with a 4,000 hour service life, although this was later increased through testing and life extension modifications to 8,000 hours and some would fly beyond that. The first F-15A flight was made on 27 July 1972, with the first flight of the two-seat F-15B following in July 1973.
The F-15 has a "look-down/shoot-down" radar that can distinguish low-flying moving targets from ground clutter. It would use computer technology with new controls and displays to lower pilot workload and require only one pilot to save weight. Unlike the F-14 or F-4, the F-15 has only a single canopy frame with clear vision forward. The USAF introduced the F-15 as "the first dedicated USAF air-superiority fighter since the North American F-86 Sabre".
The F-15 was favored by customers such as the Israeli and Japanese air arms. Criticism from the fighter mafia that the F-15 was too large to be a dedicated dogfighter and too expensive to procure in large numbers led to the Lightweight Fighter (LWF) program, which produced the USAF General Dynamics F-16 Fighting Falcon and the middle-weight Navy McDonnell Douglas F/A-18 Hornet.
Upgrades and further development
The single-seat F-15C and two-seat F-15D models entered production in 1978 and conducted their first flights in February and June of that year. These models were fitted with the Production Eagle Package (PEP 2000), which included of additional internal fuel, provisions for exterior conformal fuel tanks (CFT), and an increased maximum takeoff weight up to . The increased takeoff weight allows internal fuel, a full weapons load, conformal fuel tanks, and three external fuel tanks to be carried. The APG-63 radar uses a programmable signal processor (PSP), enabling the radar to be reprogrammable for additional purposes such as the addition of new armaments and equipment. The PSP was the first of its kind in the world, and the upgraded APG-63 radar was the first radar to use it. Other improvements included strengthened landing gear, a new digital central computer, and an overload warning system (OWS), which allows the pilot to fly up to 9 g at all weights.
The F-15 Multistage Improvement Program (MSIP) was initiated in February 1983 with the first production MSIP F-15C produced in 1985. Improvements included an upgraded central computer; a Programmable Armament Control Set, allowing for advanced versions of the AIM-7, AIM-9, and AIM-120A missiles; and an expanded Tactical Electronic Warfare System that provides improvements to the ALR-56C radar warning receiver and ALQ-135 countermeasure set. The final 43 F-15Cs included the Hughes APG-70 radar developed for the F-15E (see below); these are sometimes referred to as Enhanced Eagles. Earlier MSIP F-15Cs with the APG-63 were upgraded to the APG-63(V)1 to improve maintainability and to perform similarly to the APG-70. Existing F-15s were retrofitted with these improvements. Also beginning in 1985, F-15C and D models were equipped with the improved P&W F100-PW-220 engine and digital engine controls, providing quicker throttle response, reduced wear, and lower fuel consumption. Starting in 1997, original F100-PW-100 engines were upgraded to a similar configuration under the designation F100-PW-220E. In 2000, the APG-63(V)2 active electronically scanned array (AESA) radar was retrofitted to 18 U.S. Air Force F-15C aircraft. The ZAP (Zone Acquisition Program) missile launch envelope has been integrated into the operational flight program system of all U.S. F-15 aircraft, providing dynamic launch zone and launch acceptability region information for missiles to the pilot by display cues in real time.
Although the Air Force's FX requirements were focused on air superiority, McDonnell Douglas had quietly included a basic secondary ground attack capability in the F-15's design since the beginning and also performed early internal studies for enhancing that capability. In 1979, McDonnell Douglas and F-15 radar manufacturer, Hughes, teamed to privately develop a strike fighter version of the F-15. This version competed in the Air Force's Dual-Role Fighter competition starting in 1982. The F-15E strike variant was selected for production over General Dynamics' competing F-16XL in 1984; it is a two-seat, dual-role, totally integrated fighter for all-weather, air-to-air, and deep interdiction missions. The rear cockpit is upgraded to include four multipurpose cathode-ray tube displays for aircraft systems and weapons management. The digital, triple-redundant Lear Siegler aircraft flight control system permits coupled automatic terrain following, enhanced by a ring-laser gyro inertial navigation system. For low-altitude, high-speed penetration and precision attack on tactical targets at night or in adverse weather, the F-15E carries a high-resolution APG-70 radar and LANTIRN pods to provide thermography. The F-15E would be developed into the F-15 Advanced Eagle family, which features fly-by-wire controls; the Advanced Eagle is currently the basis of all current F-15 production.
Beginning in 2006, with the threat of curtailed procurement of the F-22 that was to replace all air superiority F-15s, the USAF planned to modernize 179 F-15Cs in the best material condition in order to maintain fighter fleet size by retrofitting the AN/APG-63(V)3 AESA radar and updated cockpit displays; the first upgraded aircraft was delivered in October 2010. A significant number of F-15s were equipped with the Joint Helmet Mounted Cueing System. Lockheed Martin developed an infrared search and track (IRST) sensor system for tactical fighters such as the F-15C, eventually resulting in the AN/ASG-34(V)1 IRST21 sensor mounted in the Legion Pod; the AN/AAQ-33 Sniper XR pod was also integrated as an interim IRST solution. A follow-on upgrade called the Eagle Passive/Active Warning Survivability System (EPAWSS) was planned. Boeing was selected in October 2015 to serve as prime contractor for the EPAWSS, with BAE Systems selected as a subcontractor. The EPAWSS is an all-digital system with advanced electronic countermeasures, radar warning, and increased chaff and flare capabilities in a smaller footprint than the 1980s-era Tactical Electronic Warfare System. More than 400 F-15Cs and F-15Es were planned to have the system installed.
In September 2015, Boeing unveiled its 2040C Eagle upgrade (also called "Golden Eagle"), designed to keep the F-15 relevant through 2040. Seen as a necessity because of the low numbers of F-22s procured, the upgrade builds upon the company's F-15SE Silent Eagle concept with low-observable features. Most improvements focus on lethality including quad-pack munitions racks to double its missile load to 16, conformal fuel tanks for extended range, "Talon HATE" communications pod to communicate with fifth-generation fighters, the APG-63(V)3 AESA radar, long-range Legion IRST pod, and EPAWSS electronic warfare suite. The 2040C upgrade for the F-15C/D was not pursued, owing to the airframes' age that made it not economically sustainable, but many of the components such as EPAWSS and AESA radar were continued for F-15E upgrades as well as new-build F-15EX Eagle II ordered by USAF in 2020; the F-15EX took advantage of existing Advanced Eagle production line for export customers to minimize lead times and start-up costs to replace the remaining F-15C/Ds, whereas F-22 production restart was considered cost-prohibitive.
Design
Overview
The F-15 has an all-metal semi-monocoque fuselage with a large-cantilever, shoulder-mounted wing. The wing planform of the F-15 suggests a modified cropped delta shape with a leading-edge sweepback angle of 45°. Ailerons and a simple high-lift flap are located on the trailing edge. No leading-edge maneuvering flaps are used. This complication was avoided by the combination of low wing loading and fixed leading-edge conical camber that varies with spanwise position along the wing. Airfoil thickness ratios vary from 5.9% at the root to 3% at the tip.
The empennage is of metal and composite construction, with twin aluminum alloy/composite material honeycomb structure vertical stabilizers with boron-composite skin, resulting in an exceptionally thin tailplane and rudders. Composite horizontal all-moving tails outboard of the vertical stabilizers move independently to provide roll control in some flight maneuvers; the horizontal tails have a dogtooth notch to mitigate flutter. The F-15 has a spine-mounted air brake and retractable tricycle landing gear. It is powered by two Pratt & Whitney F100 axial flow turbofan engines with afterburners, mounted side by side in the fuselage and fed by rectangular inlets with variable intake ramps. The cockpit is mounted high in the forward fuselage with a one-piece windscreen and large canopy for increased visibility and a 360° field of view for the pilot. The airframe consists of 37.3% aluminum, 29.2% honeycomb, 25.8% titanium, 5.5% steel, and 2% composites and fiberglass; the structure began to incorporate advanced superplastically formed titanium components in the 1980s.
The F-15's maneuverability is derived from low wing loading (weight to wing area ratio) with a high thrust-to-weight ratio, enabling the aircraft to turn tightly at up to 9 g without losing airspeed. The F-15 can climb to in around 60 seconds. At certain speeds, the dynamic thrust output of the dual engines is greater than the aircraft's combat weight and drag, so it has the ability to accelerate vertically. The weapons and flight-control systems are designed so that one person can safely and effectively perform air-to-air combat. The A and C models are single-seat variants; these were the main air-superiority versions produced. B and D models add a second seat behind the pilot for training, although they are also fully combat capable. E models use the second seat for a weapon systems officer. Visibly, the F-15 has a unique feature vis-à-vis other modern fighter aircraft; it does not have the distinctive "turkey feather" aerodynamic exhaust petals covering its engine nozzles. Following problems during development of its exhaust petal design, including dislodgment during flight, the decision was made to remove them, resulting in a 3% aerodynamic drag increase.
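The two ratios mentioned above are simple quotients, and a short worked example makes the arithmetic concrete. The figures in the following Python snippet are illustrative placeholders, not official F-15 specifications.

def wing_loading(weight_lb, wing_area_sqft):
    # Wing loading = weight divided by wing area (here in lb per square foot).
    return weight_lb / wing_area_sqft

def thrust_to_weight(total_thrust_lbf, weight_lb):
    # A ratio above 1 means thrust exceeds weight, allowing vertical acceleration.
    return total_thrust_lbf / weight_lb

# Placeholder values for illustration only.
combat_weight_lb = 40_000
wing_area_sqft = 600
afterburning_thrust_lbf = 2 * 23_000      # two engines

print(f"wing loading: {wing_loading(combat_weight_lb, wing_area_sqft):.1f} lb/sq ft")
print(f"thrust-to-weight: {thrust_to_weight(afterburning_thrust_lbf, combat_weight_lb):.2f}")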
The F-15 was shown to be capable of controlled flight with only one wing after an Israeli F-15D suffered a mid-air collision with an A-4 Skyhawk that removed most of the right wing, in the 1983 Negev mid-air collision. While the A-4 disintegrated instantly and its pilot was automatically ejected, the F-15 was sent into an uncontrollable roll. Through the application of full afterburner as well as a landing at twice the normal speed, pilot Zivi Nedivi managed to land successfully at Ramon Airbase. Subsequent wind-tunnel tests on a one-wing model confirmed that controllable flight was only possible within a very limited speed range of ±20 knots and an angle-of-attack variation of ±20 degrees. The event prompted research into damage-adaptive technology and a system called "Intelligent Flight Control System".
Avionics
A multimission avionics system includes a head-up display (HUD), advanced radar, AN/ASN-109 inertial guidance system, flight instruments, ultra high frequency communications, and tactical air navigation system and instrument landing system receivers. It also has an internally mounted, tactical electronic warfare system, Identification friend or foe system, an electronic countermeasures suite, and a central digital computer.
The HUD projects all essential flight information gathered by the integrated avionics system. This display, visible in any light condition, provides the pilot information necessary to track and destroy an enemy aircraft without having to look down at cockpit instruments.
The F-15's versatile APG-63 and 70 pulse-Doppler radar systems can look up at high-flying targets and look-down/shoot-down at low-flying targets without being confused by ground clutter. These radars can detect and track aircraft and small high-speed targets at distances beyond visual range down to close range, and at altitudes down to treetop level. The APG-63 has a basic range of . The radar feeds target information into the central computer for effective weapons delivery. For close-in dogfights, the radar automatically acquires enemy aircraft, and this information is projected on the head-up display. The F-15's electronic warfare system provides both threat warning (radar warning receiver) and automatic countermeasures against selected threats.
The improved APG-63(V)2 and (V)3 active electronically scanned array (AESA) radar upgrade included most of the hardware from the APG-63(V)1, but added an AESA to provide increased pilot situation awareness. The AESA radar has an exceptionally agile beam, providing nearly instantaneous track updates and enhanced multitarget tracking capability. The APG-63(V)2 and (V)3 are compatible with current F-15C weapon loads and enable pilots to take full advantage of AIM-120 AMRAAM capabilities, simultaneously guiding multiple missiles to several targets widely spaced in azimuth, elevation, or range.
Weaponry and external stores
A variety of air-to-air weaponry can be carried by the F-15. An automated weapon system enables the pilot to release weapons effectively and safely, using the head-up display and the avionics and weapons controls located on the engine throttles or control stick. When the pilot changes from one weapon system to another, visual guidance for the selected weapon automatically appears on the head-up display.
The Eagle can be armed with combinations of four different air-to-air weapons: AIM-7F/M Sparrow missiles or AIM-120 AMRAAM advanced medium-range air-to-air missiles on its lower fuselage corners, AIM-9L/M Sidewinder or AIM-120 AMRAAM missiles on two pylons under the wings, and an internal M61 Vulcan Gatling gun in the right wing root.
Low-drag conformal fuel tanks (CFTs), initially called Fuel And Sensor Tactical (FAST) packs, were developed for the F-15C and D models. They can be attached to the sides of the engine air intakes under each wing and are designed to the same load factors and airspeed limits as the basic aircraft. These tanks slightly degrade performance by increasing aerodynamic drag and cannot be jettisoned in-flight. However, they cause less drag than conventional external tanks. Each conformal tank can hold 750 U.S. gallons (2,840 L) of fuel. These CFTs increase range and reduce the need for in-flight refueling. All external stations for munitions remain available with the tanks in use. Moreover, Sparrow or AMRAAM missiles can be attached to the corners of the CFTs. The USAF 57th Fighter-Interceptor Squadron based at NAS Keflavik, Iceland, was the only C-model squadron to use CFTs on a regular basis due to its extended operations over the North Atlantic. With the closure of the 57 FIS, the F-15E is the only variant to carry them on a routine basis. CFTs have also been sold to Israel and Saudi Arabia.
Operational history
Introduction and early service
The largest operator of the F-15 is the United States Air Force. The first Eagle, an F-15B, was delivered on 13 November 1974. In January 1976, the first Eagle destined for a combat squadron, the 555th TFS, was delivered. These initial aircraft carried the Hughes Aircraft (now Raytheon) APG-63 radar. The F-15 in early service was plagued by reliability and durability problems with its F100-PW-100 engines, whose ambitious specifications were critical for the aircraft's high performance. Furthermore, the issues were exacerbated by pilots making many more abrupt throttle changes than in previous fighters and engines due to the thrust available. The issues were addressed by the development of the improved F100-PW-220 engines.
The first kill by an F-15 was scored by Israeli Air Force (IAF) ace Moshe Melnik in 1979. During IAF raids against Palestinian factions in Lebanon in 1979–1981, F-15As reportedly downed 13 Syrian MiG-21s and two Syrian MiG-25s. Israeli F-15As and Bs participated as escorts in Operation Opera, an air strike on an Iraqi nuclear reactor. In the 1982 Lebanon War, Israeli F-15s were credited with 41 Syrian aircraft destroyed (23 MiG-21s and 17 MiG-23s, and one Aérospatiale SA.342L Gazelle helicopter). During Operation Mole Cricket 19, Israeli F-15s and F-16s together shot down 82 Syrian fighters (MiG-21s, MiG-23s, and MiG-23Ms) without losses.
Israel was the only operator to use and develop the air-to-ground abilities of the air-superiority F-15 variants, doing so because the fighter's range was well beyond that of other combat aircraft in the Israeli inventory in the 1980s. The first known use of F-15s for a strike mission was during Operation Wooden Leg on 1 October 1985, with six F-15Ds attacking PLO Headquarters in Tunis with one GBU-15 guided bomb per aircraft and two F-15Cs restriking the ruins with six Mk-82 unguided bombs each. This was one of the few times air-superiority F-15s (A/B/C/D models) were used in tactical strike missions. Israeli air-superiority F-15 variants have since been extensively upgraded to carry a wider range of air-to-ground armaments, including JDAM GPS-guided bombs and Popeye missiles.
The first American combat use of the F-15 was during Operation Urgent Fury. F-15s of the 33rd Tactical Fighter Wing provided air cover alongside U.S. Navy F-14 Tomcats for Marines and the 82nd Airborne Division for contingency operations in Grenada.
Royal Saudi Air Force F-15C pilots reportedly shot down two Iranian Air Force F-4E Phantom IIs in a skirmish on 5 June 1984.
Anti-satellite trials
The ASM-135 missile was designed to be a standoff antisatellite (ASAT) weapon, with the F-15 acting as a first stage. The Soviet Union could correlate a U.S. rocket launch with a spy satellite loss, but an F-15 carrying an ASAT would blend in among hundreds of F-15 flights. From January 1984 to September 1986, two F-15As were used as launch platforms for the ASAT missile. The F-15As were modified to carry one ASM-135 on the centerline station with extra equipment within a special centerline pylon. The launch aircraft executed a Mach 1.22, 3.8 g climb at 65° to release the ASAT missile at an altitude of . The flight computer was updated to control the zoom-climb and missile release.
The third test flight involved a functional P78-1 solar observatory satellite in a orbit, which was destroyed by kinetic energy. The pilot, USAF Major Wilbert D. "Doug" Pearson, became the only pilot to destroy a satellite. The ASAT program involved five test launches. The program was officially terminated in 1988.
Gulf War and aftermath
The USAF began deploying F-15C, D, and E model aircraft to the Persian Gulf region in August 1990 for Operations Desert Shield and Desert Storm. During the Gulf War, the F-15 accounted for 36 of the 39 air-to-air victories by the U.S. Air Force against Iraqi forces. Iraq has confirmed the loss of 23 of its aircraft in air-to-air combat. The F-15C and D fighters were used in the air-superiority role, while F-15E Strike Eagles were used in air-to-ground attacks mainly at night, hunting modified Scud missile launchers and artillery sites using the LANTIRN system. According to the USAF, its F-15Cs had 34 confirmed kills of Iraqi aircraft during the 1991 Gulf War, most of them by missile fire: five Mikoyan MiG-29s, two MiG-25s, eight MiG-23s, two MiG-21s, two Sukhoi Su-25s, four Sukhoi Su-22s, one Sukhoi Su-7, six Dassault Mirage F1s, one Ilyushin Il-76 cargo aircraft, one Pilatus PC-9 trainer, and two Mil Mi-8 helicopters. According to NHHC, F-15s may have also shot down a friendly F-14 Tomcat. In addition, the F-15E achieved its first-ever air-to-air kill on 14 February 1991, destroying an Iraqi Mi-24 "Hind" helicopter with a GBU-10 laser-guided bomb. Air superiority was achieved in the first three days of the conflict; many of the later kills were reportedly of Iraqi aircraft fleeing to Iran, rather than engaging American aircraft. Two F-15Es were lost to ground fire, and another was damaged on the ground by a Scud strike on King Abdulaziz Air Base.
On 11 November 1990, a Royal Saudi Air Force (RSAF) pilot defected to Sudan with an F-15C fighter during Operation Desert Shield. Saudi Arabia paid US$40 million (~$ in ) for return of the aircraft three months later. RSAF F-15s shot down two Iraqi Mirage F1s during Operation Desert Storm. One Saudi Arabian F-15C was lost to a crash during the Persian Gulf War in 1991. The IQAF claimed this fighter was part of two USAF F-15Cs that engaged two Iraqi MiG-25PDs, and was hit by an R-40 missile before crashing.
USAF F-15s have since been deployed to support Operation Southern Watch, the patrolling of the Iraqi no-fly zones in Southern Iraq; Operation Provide Comfort in Turkey; NATO operations in Bosnia; and recent air expeditionary force deployments. In 1994, two U.S. Army Sikorsky UH-60 Black Hawks were mistakenly downed by USAF F-15Cs in northern Iraq in a friendly-fire incident. USAF F-15Cs shot down four Yugoslav MiG-29s using AIM-120 and AIM-7 radar-guided missiles during NATO's 1999 intervention in Kosovo, Operation Allied Force.
Structural defects
All F-15s were grounded by the USAF after a Missouri Air National Guard F-15C came apart in flight and crashed on 2 November 2007. The newer F-15E fleet was later cleared for continued operations. The USAF reported on 28 November 2007 that a critical location in the upper longerons on the F-15C was the failure's suspected cause, causing the fuselage forward of the air intakes, including the cockpit and radome, to separate from the airframe.
F-15A through D-model aircraft were grounded until the location received detailed inspections and repairs as needed. The grounding of F-15s received media attention as it began to place strains on the nation's air-defense efforts. The grounding forced some states to rely on their neighboring states' fighters for air-defense protection, and Alaska to depend on Canadian Forces' fighter support.
On 8 January 2008, the USAF Air Combat Command (ACC) cleared a portion of its older F-15 fleet for return to flying status. It also recommended a limited return to flight for units worldwide using the affected models. The accident review board report, which was released on 10 January 2008, stated that analysis of the F-15C wreckage determined that the longeron did not meet drawing specifications, which led to fatigue cracks and finally a catastrophic failure of the remaining support structures and breakup of the aircraft in flight. In a report released on 10 January 2008, nine other F-15s were identified to have similar problems in the longeron. As a result, General John D. W. Corley stated, "the long-term future of the F-15 is in question". On 15 February 2008, ACC cleared all its grounded F-15A/B/C/D fighters for flight pending inspections, engineering reviews, and any needed repairs. ACC also recommended release of other U.S. F-15A/B/C/Ds.
Later service
The F-15 has a combined air-to-air combat record of 104 kills to no losses. The F-15's air superiority versions, the A/B/C/D models, have not suffered any losses to enemy action. Over half of F-15 kills have been achieved by Israeli Air Force pilots.
On 16 September 2009, the last F-15A, an Oregon Air National Guard aircraft, was retired, marking the end of service for the F-15A and F-15B models in the United States. With the retirement of those early models, the F-15C and D models continued operational service to supplement the new F-22 Raptor in frontline US service. Because the DOD was primarily focused on asymmetric counterinsurgency warfare in the Middle East in the 2000s, the F-22 procurement was curtailed to just 187 operational aircraft and the USAF had to extend F-15C/D operations well beyond its planned retirement date in order to maintain adequate numbers of air superiority fighters; in 2007, the USAF planned to keep 179 F-15C/Ds along with 224 F-15Es in service beyond 2025. During the 2010s, USAF F-15C/Ds were regularly based overseas with the Pacific Air Forces at Kadena AB in Japan and with the U.S. Air Forces in Europe at RAF Lakenheath in the United Kingdom. Other regular USAF F-15s are operated by ACC as adversary/aggressor platforms at Nellis AFB, Nevada, and by Air Force Materiel Command in test and evaluation roles at Edwards AFB, California, and Eglin AFB, Florida. All remaining combat-coded F-15C/Ds are operated by the Air National Guard.
To keep the F-15C/D viable, the fleet saw a series of upgrades, with 179 aircraft receiving the AN/APG-63(V)3 AESA radar starting in 2010 along with eventual addition of IRST pods and cockpit enhancements. However, problems with the aging fleet meant the F-15C faced cuts or retirement in the USAF's FY 2015 budget in response to sequestration. By the mid-2010s, the aging F-15C/D fleet was no longer economically sustainable to the 2030s as hoped, and the USAF chose to forgo the more comprehensive F-15 2040C upgrade proposed by Boeing; in April 2017, USAF officials announced plans to retire the F-15C/D in the mid-2020s and press other aircraft such as F-16s into roles occupied by the F-15 while exploring options to recapitalize its fighter fleet.
In late 2018 and early 2019, following a series of DoD Cost Analysis and Program Evaluation (CAPE) Office studies on affordably recapitalizing the fighter fleet, the Pentagon in its FY 2020 budget requested new-build F-15EXs — an advanced variant based on the export F-15QA then in production — to replace the F-15Cs and supplement the F-22s to maintain fighter fleet size, with planned total procurement of 144 aircraft. This allowed the USAF to use the existing export production line to quickly and affordably bring fighters into operational service, as restarting the F-22 line was considered cost-prohibitive. In 2022, it was announced that the USAF planned to retire its fleet of F-15C/Ds by 2026, while the F-15Es would retire in the 2030s.
Yemen Civil War
During the Yemeni Civil War (2015–present), Houthis have used R-27T missiles modified to serve as surface-to-air missiles. A video released on 7 January 2018 shows a modified R-27T hitting a Saudi F-15 on a forward-looking infrared camera. Houthi sources claim to have downed the F-15, although this has been disputed, as the missile apparently proximity-detonated and the F-15 continued on its trajectory seemingly unaffected. Rebels later released footage showing an aircraft wreck, but serial numbers on the wreckage suggested the aircraft was a Panavia Tornado, also operated by Saudi forces. On 8 January, Saudi Arabia admitted the loss of an aircraft, attributing it to technical reasons.
On 21 March 2018, Houthi rebels released a video in which they hit and possibly shot down a Saudi F-15 in Saada province. In the video an R-27T air-to-air missile adapted for surface-to-air use was launched and appeared to hit a jet. As in the video of the previous similar hit recorded on 8 January, the target, while clearly hit, did not appear to be downed. Saudi forces confirmed the hit, while saying the jet landed at a Saudi base. Saudi official sources confirmed the incident, reporting that it happened at 3:48 pm local time after a surface-to-air defense missile was launched at the fighter jet from inside Saada airport.
After the Houthi attack on Saudi oil infrastructure on 14 September 2019, Saudi Arabia tasked F-15 fighters armed with air-to-air missiles with intercepting low-flying drones, which are difficult to engage with ground-based high-altitude missile systems like the MIM-104 Patriot; several drones have been downed since then. On 2 July 2020, a Saudi F-15 shot down two Houthi Shahed 129 drones above Yemen. On 7 March 2021, during a Houthi attack on several Saudi oil installations, Saudi F-15s shot down several attacking drones using heat-seeking AIM-9 Sidewinder missiles, with video evidence showing at least two Samad-3 UAVs and one Qasef-2K downed. On 30 March 2021, a video made by Saudi border guards showed a Saudi F-15 shooting down a Houthi Quasef-2K drone with an AIM-120 AMRAAM fired at short range.
Variants
Basic models
F-15A
Single-seat all-weather air-superiority fighter version, 384 built in 1972–1979
F-15B
Two-seat training version, formerly designated TF-15A, 61 built in 1972–1979
F-15C
Improved single-seat all-weather air-superiority fighter version, 483 built in 1979–1985. The last 43 F-15Cs were upgraded with AN/APG-70 radar and later the AN/APG-63(V)1 radar.
F-15D
Two-seat training version, 92 built in 1979–1985. The Israeli Air Force uses this version as a strike fighter.
F-15J
Single-seat all-weather air-superiority fighter version for the Japan Air Self-Defense Force; 139 built under license in Japan by Mitsubishi Heavy Industries in 1981–1997, and two built in St. Louis.
F-15DJ
Two-seat training version for the Japan Air Self-Defense Force. 12 built in St. Louis, and 25 built under license in Japan by Mitsubishi in the period 1981–1997.
F-15N Sea Eagle
The F-15N was a carrier-capable variant proposed in the early 1970s to the U.S. Navy as an alternative to the heavier and, at the time, considered to be "riskier" technology program, the Grumman F-14 Tomcat. It did not have a long-range radar or the long-range missiles used by the F-14. The F-15N-PHX was another proposed naval version capable of carrying the AIM-54 Phoenix missile, but with an enhanced version of the AN/APG-63 radar on the F-15A. These featured folding wingtips, reinforced landing gear and a stronger tailhook for shipboard operation.
F-15 2040C
Proposed upgrade to the F-15C, allowing it to supplement the F-22 in the air superiority role. The 2040C concept is an evolution of the Silent Eagle proposed to South Korea and Israel, with some low-observable improvements but mostly a focus on the latest air capabilities and lethality. Proposal includes infra-red search and track, doubling the number of weapon stations, with quad racks for a maximum of 16 air-to-air missiles, Eagle Passive/Active Warning Survivability System (EPAWSS), conformal fuel tanks, upgraded APG-63(V)3 AESA radar and a "Talon HATE" communications pod allowing data transfer with the F-22. This upgrade program was not pursued due to the age of the existing airframes, but some of the upgrades were applied to the new-build F-15EX.
Strike Eagle derivatives
F-15E Strike Eagle
Two-seat all-weather multirole strike version, fitted with conformal fuel tanks. It was developed into the F-15I, F-15S, F-15K, F-15SG, and is the basis of the F-15 Advanced Eagle family. Over 400 F-15E and derivative variants produced since 1985.
F-15F Strike Eagle
Originally proposed as single-seat F-15E for Saudi Arabia; later reserved for Singaporean F-15Es, delivered as F-15SG.
F-15 Advanced Eagle
Further development of the F-15E with revised wing structure and digital fly-by-wire and is the basis for the F-15SA, F-15QA, F-15EX, and other variants. Current production baseline.
F-15SE Silent Eagle
A proposed F-15E variant from March 2009 with a reduced radar cross-section via changes such as replacing conformal fuel tanks with conformal weapons bays and canting the twin vertical tails 15 degrees outward, which would reduce their radar signature while providing a slight boost to lift to help offset the loss of conformal fuel tanks.
F-15EX Eagle II
Two-seat Advanced Eagle version for the USAF, can be fitted with conformal fuel tanks.
Prototypes
Twelve prototypes were built and used for trials by the F-15 Joint Test Force at Edwards Air Force Base using McDonnell Douglas and United States Air Force personnel. Most prototypes were later used by NASA for trials and experiments.
F-15A-1, AF Serial No. 71-0280
The first F-15 to fly, on 11 July 1972 from Edwards Air Force Base; it was used as a trial aircraft for exploring the flight envelope, general handling, and testing the carriage of external stores.
F-15A-1, AF Ser. No. 71-0281
The second prototype first flew on 26 September 1972 and was used to test the F100 engine.
F-15A-2, AF Ser. No. 71-0282
First flew on 4 November 1972 and was used to test the APG-63 radar and avionics.
F-15A-2, AF Ser. No. 71-0283
First flew on 13 January 1973 and was used as a structural test aircraft; it was the first aircraft to have the smaller wingtips, introduced to resolve a severe buffet problem found on earlier aircraft.
F-15A-2, AF Ser. No. 71-0284
First flew on 7 March 1973; it was used for armament development and was the first aircraft fitted with an internal cannon.
F-15A-3, AF Ser. No. 71-0285
First flew on 23 May 1973 and was used to test the missile fire control system and other avionics.
F-15A-3, AF Ser. No. 71-0286
First flew on 14 June 1973 and was used for armament trials and testing external fuel stores.
F-15A-4, AF Ser. No. 71-0287
First flew on 25 August 1973 and was used for spin recovery, angle of attack, and fuel system testing; it was fitted with an anti-spin recovery parachute. The aircraft was loaned to NASA from 1976 for engine development trials.
F-15A-4, AF Ser. No. 71-0288
First flew on 20 October 1973 and was used to test integrated aircraft and engine performance; it was later used by McDonnell Douglas as a test aircraft in the 1990s.
F-15A-4, AF Ser. No. 71-0289
First flew on 30 January 1974 and was used for trials on the radar, avionics and electronic warfare systems.
F-15B-1, AF Ser. No. 71-0290
The first two-seat prototype, originally designated the TF-15A; it first flew on 7 July 1973.
F-15B-2, AF Ser. No. 71-0291
First flew on 18 October 1973 as a TF-15A and was used as a test and demonstration aircraft. In 1976 it made an overseas sales tour painted in markings celebrating the bicentenary of the United States. It also served as the development aircraft for the F-15E and was the first F-15 to carry conformal fuel tanks.
Research and test
F-15 Streak Eagle (AF Ser. No.72-0119)
An unpainted F-15A stripped of most avionics demonstrated the fighter's acceleration capabilities. The aircraft broke eight time-to-climb world records between 16 January and 1 February 1975 at Grand Forks AFB, ND. It was delivered to the National Museum of the United States Air Force in December 1980. The aircraft is currently on display in the museum's Research and Development Hangar.
F-15 STOL/MTD (AF Ser. No. 71-0290)
The first F-15B was converted into a short takeoff and landing, maneuver technology demonstrator aircraft. In the late 1980s it received canard flight surfaces in addition to its usual horizontal tail, along with square thrust-vectoring nozzles. It was used as a short-takeoff/maneuver-technology demonstrator (S/MTD).
F-15 ACTIVE (AF Ser. No. 71-0290)
The F-15 S/MTD was later converted into an advanced flight control technology research aircraft with thrust vectoring nozzles.
F-15 IFCS (AF Ser. No. 71-0290)
The F-15 ACTIVE was then converted into an intelligent flight control systems research aircraft. F-15B 71-0290 was the oldest F-15 still flying when retired in January 2009.
F-15 MANX
Concept name for a tailless variant of the F-15 ACTIVE, but the NASA ACTIVE experimental aircraft was never modified to be tailless.
F-15 Flight Research Facility (AF Ser. No. 71-0281 and AF Ser. No. 71-0287)
Two F-15A aircraft were acquired in 1976 for use by NASA's Dryden Flight Research Center for numerous experiments. Notable experiments include Highly Integrated Digital Electronic Control (HiDEC), Adaptive Engine Control System (ADECS), Self-Repairing and Self-Diagnostic Flight Control System (SRFCS) and Propulsion Controlled Aircraft System (PCA). 71-0281, the second flight-test F-15A, was returned to the Air Force and became a static display at Langley AFB in 1983.
F-15B Research Testbed (AF Ser. No. 74-0141)
Acquired in 1993, it was an F-15B modified and used by NASA's Dryden Flight Research Center for flight tests.
Operators
This article only covers the F-15A, B, C, D, and related variants. For the operators of other F-15E-based variants, like the F-15E, F-15I, F-15S, F-15K, F-15SG, or F-15EX, see McDonnell Douglas F-15E Strike Eagle.
Israeli Air Force has operated F-15s since 1977. The IAF operates 38 F-15As, 6 F-15Bs, 16 F-15Cs and 11 F-15Ds in service as of 2022.
Japan Air Self-Defense Force operates 155 Mitsubishi F-15J and 44 F-15DJ fighters produced under license by Mitsubishi Heavy Industries.
Royal Saudi Air Force has 46 F-15C and 16 F-15D fighters in operation as of 2024.
United States Air Force operates 168 F-15C and 18 F-15D total aircraft as of mid-2022.
NASA currently operates one F-15B #836 as a test bed for a variety of flight research experiments and two F-15D, #884 and #897, for research support and pilot proficiency. NASA in the past used an F-15B #835 to test Highly Integrated Digital Engine Control system (HIDEC) at Edwards AFB in 1988.
Notable accidents
A total of 175 F-15s have been lost to non-combat causes as of June 2016. However, the F-15 aircraft is very reliable with only 1 loss per 50,000 flight hours.
On 1 May 1983, an Israeli Air Force F-15D collided mid-air with an A-4 Skyhawk during a training flight, causing the F-15's right wing to shear off almost completely. Despite the damage, the pilot was able to reach a nearby airbase and land safely – albeit at twice the normal landing speed. The aircraft was subsequently repaired and saw further combat action.
On 26 March 2001, two US Air Force F-15Cs crashed near the summit of Ben Macdui in the Cairngorms during a low flying training exercise over the Scottish Highlands. Both Lieutenant Colonel Kenneth John Hyvonen and Captain Kirk Jones died in the accident, which resulted in a court martial for an RAF air traffic controller, who was later found not guilty.
On 2 November 2007, a 27-year-old F-15C (AF Ser. No. 80-0034) of the 131st Fighter Wing, Missouri Air National Guard, crashed following an in-flight breakup due to structural failure during combat training near St. Louis, Missouri. The pilot, Major Stephen W. Stilwell, ejected but suffered serious injuries. On 3 November 2007, all non-mission critical F-15s were grounded pending the crash investigation's outcome. By 13 November 2007, over 1,100 F-15s were grounded worldwide after Israel, Japan and Saudi Arabia grounded their aircraft as well. F-15Es were cleared on 15 November 2007 pending individual inspections. On 8 January 2008, the USAF cleared 60 percent of the F-15A/B/C/D fleet to fly. On 10 January 2008, the accident review board released its report, which attributed the crash to the longeron not meeting specifications. On 15 February 2008, the Air Force cleared all F-15s for flight, pending inspections and any needed repairs. In March 2008, Stilwell filed a lawsuit against Boeing which was later dismissed in April 2009.
Specifications (F-15C)
Aircraft on display
Although the F-15 continues to be in use, a number of older USAF and IAF models have been retired, with several placed on outdoor display or in museums.
Germany
F-15A
74-0085 – Spangdahlem AB
74-0109 – Auto Technik Museum, Speyer
Netherlands
F-15A
74-0083 (marked as 77–0132) – Nationaal Militair Museum, Kamp Zeist, former Camp New Amsterdam AB. Aircraft was based at Camp New Amsterdam and left as a gift when the base was closed in 1995.
Japan
F-15A
74-0088 – Kadena AB
Israel
F-15A
73-0098 – Israeli Air Museum, Hatzerim
73-0107 – gate guard at Tel Nof AB
Saudi Arabia
F-15B
71-0291 - painted in false Saudi markings as '1315' at Royal Saudi Air Force Museum
United Kingdom
F-15A
74-0131 – Wings of Liberty Memorial Park, RAF Lakenheath
76-0020 – American Air Museum, Duxford
United States
F-15A
71-0280 – 37th Training Wing HQ Parade Ground, Kelly Field (formerly Kelly AFB), San Antonio, Texas.
71-0281 – Tactical Air Command Memorial Park, Joint Base Langley-Eustis, Hampton, Virginia.
71-0283 – Defense Supply Center Richmond, Richmond, Virginia.
71-0285 – Boeing Avionic Antenna Laboratory, St. Charles, Missouri.
71-0286 – A GF-15A; Saint Louis Science Center, St. Louis, Missouri, in storage. Previously on display at Octave Chanute Aerospace Museum, Rantoul, Illinois
72-0119 "Streak Eagle" – at the National Museum of the United States Air Force, Wright-Patterson AFB, Dayton, Ohio
73-0085 – Museum of Aviation, Robins AFB, Warner Robins, Georgia
73-0086 – Louisiana Military Museum, Jackson Barracks, New Orleans, Louisiana
73-0099 – Robins AFB, Warner Robins, Georgia
74-0081 – Elmendorf AFB, Alaska
74-0084 – Alaska Aviation Heritage Museum, Anchorage, Alaska
74-0095 – Tyndall AFB, Panama City, Florida. This aircraft was flipped and severely damaged by Hurricane Michael in October 2018.
74-0114 – Mountain Home AFB, Idaho
74-0117 – Langley AFB, Virginia
74-0118 – Pima Air & Space Museum, Tucson, Arizona
74-0119 – Castle Air Museum, Atwater, California
74-0124 – Air Force Armament Museum, Eglin AFB, Florida
75-0026 – National Warplane Museum, Elmira Corning Regional Airport, New York
75-0033 – Eglin Parkway entrance to 33d Fighter Wing complex, Eglin AFB, Florida
75-0045 – USS Alabama Battleship Memorial Park, Mobile, Alabama
75-0084 – Russell Military Museum, Russell, Illinois
76-0008 – March Field Air Museum at March ARB, Riverside, California
76-0009 – Kingsley Field Air National Guard Base, Klamath Falls, Oregon
76-0012 – Air Heritage Aviation Museum, Beaver County Airport, Beaver Falls, Pennsylvania
76-0014 – Evergreen Aviation Museum, McMinnville, Oregon
76-0018 – Hickam Field, Joint Base Pearl Harbor–Hickam, Oahu, Hawaii
76-0024 – Peterson Air and Space Museum, Peterson AFB, Colorado
76-0027 – National Museum of the United States Air Force, Wright-Patterson AFB, Dayton, Ohio
76-0037 – Holloman AFB, New Mexico
76-0040 – Otis ANGB, Cape Cod, Massachusetts
76-0042 - United States Air Force Academy, Colorado Springs, Colorado
76-0048 – McChord Air Museum, McChord AFB, Washington
76-0057 - Nellis Air Force Base, Las Vegas, Nevada. Aircraft previously bore "Vegas Strong" paint scheme to honor victims of Oct 1, 2017 shooting.
76-0063 – Pacific Aviation Museum, Ford Island, Joint Base Pearl Harbor–Hickam, Hawaii
76-0066 – Portland Air National Guard Base, Oregon
76-0067 – Dyess Air Force Base, Linear Air Park display area on base
76-0076 (marked as 33rd Fighter Wing F-15C 85–0125) – roadside park, DeBary, Florida
76-0080 – Jacksonville Air National Guard Base, Florida
76-0088 – 131st Bomb Wing Heritage Park, Whiteman AFB, Missouri
76-0108 – Lackland AFB/Kelly Field Annex, Texas
76-0110 – gate guard, Mountain Home AFB, Idaho
77-0068 – Arnold AFB, Manchester, Tennessee
77-0084 – 412th Test Wing at Edwards Air Force Base, California and Nellis Air Force Base, Nevada.
77-0090 – Hill Aerospace Museum, Hill AFB, Utah
77-0102 – Pacific Coast Air Museum, Charles M. Schulz-Sonoma County Airport, Santa Rosa, California. One of two Massachusetts Air National Guard 102d Fighter Wing aircraft scrambled in first response to terrorist air attacks on 11 September 2001.
77-0146 – Veterans Park, Callaway, Florida
77-0150 – Yanks Air Museum, Chino, California
F-15B
73-0108 – Luke AFB, Arizona.
73-0114 – Air Force Flight Test Center Museum, Edwards AFB, California
77-0154 - Sheppard Air Force Base, Wichita Falls, Texas.
77-0159 - Volk Field Air National Guard Base, Camp Douglas, Wisconsin.
77-0161 – Seymour Johnson AFB, Goldsboro, North Carolina.
F-15C
79-0022 – Pueblo Weisbrod Aircraft Museum, Pueblo, Colorado. Credited with a MiG-23 kill during Operation Desert Storm while flown by Donald Watros. It is painted in the colors of the 22nd Fighter Squadron deployed from Bitburg AB, Germany, to Incirlik AB, Turkey.
79-0078 – Museum of Aviation, Robins AFB, Warner Robins, Georgia. Currently stored at the museum, it is awaiting restoration and display. Credited with two MiG-21 kills during Operation Desert Storm while flown by Thomas Dietz, on deployment with the 53rd Fighter Squadron to Al Kharj AB, Saudi Arabia, from Bitburg AB, Germany.
80-0014 – Chico Air Museum, Chico, California; transported from Langley AFB, Virginia
85-0101 – New England Air Museum, Connecticut. This aircraft, flown by Capt Rick "Kluso" Tollini, scored one MiG-25 kill during Operation Desert Storm on January 19, 1991.
86-0156 – National Museum of the United States Air Force, on display in the Cold War Gallery. This aircraft scored two MiG-29 kills against the Yugoslav Air Force during Operation Allied Force while flown by Captain Jeff "Claw" Hwang of the 493rd Fighter Squadron, 48th Fighter Wing, based at RAF Lakenheath, UK.
Notable appearances in media
The F-15 was the subject of the IMAX movie Fighter Pilot: Operation Red Flag, about the RED FLAG exercises. Tom Clancy's nonfiction book Fighter Wing: A Guided Tour of an Air Force Combat Wing (1995) provides a detailed analysis of the F-15 Eagle and its capabilities as the Air Force's premier fighter aircraft.
The F-15 has also been a popular subject as a toy, and a fictional likeness of an aircraft similar to the F-15 has been used in cartoons, books, video games, animated television series, and animated films.
| Technology | Specific aircraft | null |
11729 | https://en.wikipedia.org/wiki/Fuel%20cell | Fuel cell | A fuel cell is an electrochemical cell that converts the chemical energy of a fuel (often hydrogen) and an oxidizing agent (often oxygen) into electricity through a pair of redox reactions. Fuel cells are different from most batteries in requiring a continuous source of fuel and oxygen (usually from air) to sustain the chemical reaction, whereas in a battery the chemical energy usually comes from substances that are already present in the battery. Fuel cells can produce electricity continuously for as long as fuel and oxygen are supplied.
The first fuel cells were invented by Sir William Grove in 1838. The first commercial use of fuel cells came almost a century later following the invention of the hydrogen–oxygen fuel cell by Francis Thomas Bacon in 1932. The alkaline fuel cell, also known as the Bacon fuel cell after its inventor, has been used in NASA space programs since the mid-1960s to generate power for satellites and space capsules. Since then, fuel cells have been used in many other applications. Fuel cells are used for primary and backup power for commercial, industrial and residential buildings and in remote or inaccessible areas. They are also used to power fuel cell vehicles, including forklifts, automobiles, buses, trains, boats, motorcycles, and submarines.
There are many types of fuel cells, but they all consist of an anode, a cathode, and an electrolyte that allows ions, often positively charged hydrogen ions (protons), to move between the two sides of the fuel cell. At the anode, a catalyst causes the fuel to undergo oxidation reactions that generate ions (often positively charged hydrogen ions) and electrons. The ions move from the anode to the cathode through the electrolyte. At the same time, electrons flow from the anode to the cathode through an external circuit, producing direct current electricity. At the cathode, another catalyst causes ions, electrons, and oxygen to react, forming water and possibly other products. Fuel cells are classified by the type of electrolyte they use and by their start-up time, which ranges from 1 second for proton-exchange membrane fuel cells (PEM fuel cells, or PEMFC) to 10 minutes for solid oxide fuel cells (SOFC). A related technology is flow batteries, in which the fuel can be regenerated by recharging. Individual fuel cells produce relatively small electrical potentials, about 0.7 volts, so cells are "stacked", or placed in series, to create sufficient voltage to meet an application's requirements. In addition to electricity, fuel cells produce water vapor, heat and, depending on the fuel source, very small amounts of nitrogen dioxide and other emissions. PEMFC cells generally produce fewer nitrogen oxides than SOFC cells: they operate at lower temperatures, use hydrogen as fuel, and their proton exchange membrane limits the diffusion of nitrogen, which would otherwise form NOx, into the anode. The energy efficiency of a fuel cell is generally between 40 and 60%; however, if waste heat is captured in a cogeneration scheme, efficiencies of up to 85% can be obtained.
History
The first references to hydrogen fuel cells appeared in 1838. In a letter dated October 1838 but published in the December 1838 edition of The London and Edinburgh Philosophical Magazine and Journal of Science, Welsh physicist and barrister Sir William Grove wrote about the development of his first crude fuel cells. He used a combination of sheet iron, copper, and porcelain plates, and a solution of sulphate of copper and dilute acid. In a letter to the same publication written in December 1838 but published in June 1839, German physicist Christian Friedrich Schönbein discussed the first crude fuel cell that he had invented. His letter discussed the current generated from hydrogen and oxygen dissolved in water. Grove later sketched his design, in 1842, in the same journal. The fuel cell he made used similar materials to today's phosphoric acid fuel cell.
In 1932, English engineer Francis Thomas Bacon successfully developed a 5 kW stationary fuel cell. NASA used the alkaline fuel cell (AFC), also known as the Bacon fuel cell after its inventor, from the mid-1960s.
In 1955, W. Thomas Grubb, a chemist working for the General Electric Company (GE), further modified the original fuel cell design by using a sulphonated polystyrene ion-exchange membrane as the electrolyte. Three years later another GE chemist, Leonard Niedrach, devised a way of depositing platinum onto the membrane, which served as a catalyst for the necessary hydrogen oxidation and oxygen reduction reactions. This became known as the "Grubb-Niedrach fuel cell". GE went on to develop this technology with NASA and McDonnell Aircraft, leading to its use during Project Gemini. This was the first commercial use of a fuel cell. In 1959, a team led by Harry Ihrig built a 15 kW fuel cell tractor for Allis-Chalmers, which was demonstrated across the U.S. at state fairs. This system used potassium hydroxide as the electrolyte and compressed hydrogen and oxygen as the reactants. Later in 1959, Bacon and his colleagues demonstrated a practical five-kilowatt unit capable of powering a welding machine. In the 1960s, Pratt & Whitney licensed Bacon's U.S. patents for use in the U.S. space program to supply electricity and drinking water (hydrogen and oxygen being readily available from the spacecraft tanks).
UTC Power was the first company to manufacture and commercialize a large, stationary fuel cell system for use as a cogeneration power plant in hospitals, universities and large office buildings.
In recognition of the fuel cell industry and America's role in fuel cell development, the United States Senate recognized October 8, 2015 as National Hydrogen and Fuel Cell Day, passing S. RES 217. The date was chosen in recognition of the atomic weight of hydrogen (1.008).
Types of fuel cells; design
Fuel cells come in many varieties; however, they all work in the same general manner. They are made up of three adjacent segments: the anode, the electrolyte, and the cathode. Two chemical reactions occur at the interfaces of the three different segments. The net result of the two reactions is that fuel is consumed, water or carbon dioxide is created, and an electric current is created, which can be used to power electrical devices, normally referred to as the load.
At the anode a catalyst ionizes the fuel, turning the fuel into a positively charged ion and a negatively charged electron. The electrolyte is a substance specifically designed so ions can pass through it, but the electrons cannot. The freed electrons travel through a wire creating an electric current. The ions travel through the electrolyte to the cathode. Once reaching the cathode, the ions are reunited with the electrons and the two react with a third chemical, usually oxygen, to create water or carbon dioxide.
Design features in a fuel cell include:
The electrolyte substance, which usually defines the type of fuel cell, and can be made from a number of substances like potassium hydroxide, salt carbonates, and phosphoric acid.
The most common fuel that is used is hydrogen.
The anode catalyst, usually fine platinum powder, breaks down the fuel into electrons and ions.
The cathode catalyst, often nickel, converts ions into waste chemicals, with water being the most common type of waste.
Gas diffusion layers that are designed to resist oxidization.
A typical fuel cell produces a voltage from 0.6 to 0.7 V at a full-rated load. Voltage decreases as current increases, due to several factors:
Activation loss
Ohmic loss (voltage drop due to resistance of the cell components and interconnections)
Mass transport loss (depletion of reactants at catalyst sites under high loads, causing rapid loss of voltage).
To deliver the desired amount of energy, the fuel cells can be combined in series to yield higher voltage, and in parallel to allow a higher current to be supplied. Such a design is called a fuel cell stack. The cell surface area can also be increased, to allow higher current from each cell.
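A minimal numerical sketch of these ideas follows. The loss coefficients are assumed, order-of-magnitude values for a small PEM cell (they are not taken from this article), and the 300 V bus and 0.6 A/cm2 design point are likewise illustrative assumptions.

```python
import math

# Illustrative polarization-curve model: cell voltage falls with current density
# due to activation, ohmic, and mass-transport losses. All coefficients below
# are assumed, order-of-magnitude values, not measured data.
E_OC = 1.0        # assumed practical open-circuit voltage, V
A_TAFEL = 0.06    # assumed activation (Tafel) slope, V per decade of current density
I0 = 1e-4         # assumed exchange current density, A/cm^2
R_OHMIC = 0.15    # assumed area-specific resistance, ohm*cm^2
I_LIMIT = 1.4     # assumed limiting current density, A/cm^2

def cell_voltage(i):
    """Cell voltage (V) at current density i (A/cm^2)."""
    activation = A_TAFEL * math.log10(max(i, I0) / I0)
    ohmic = R_OHMIC * i
    mass_transport = -0.05 * math.log(1.0 - min(i, 0.999 * I_LIMIT) / I_LIMIT)
    return E_OC - activation - ohmic - mass_transport

# Size a series stack for an assumed 300 V bus at a design point of 0.6 A/cm^2.
i_design = 0.6
v_cell = cell_voltage(i_design)              # about 0.65 V, consistent with the 0.6-0.7 V above
cells_in_series = math.ceil(300.0 / v_cell)
print(f"cell voltage at {i_design} A/cm^2: {v_cell:.2f} V")
print(f"cells in series for a 300 V stack: {cells_in_series}")
```

Connecting such stacks in parallel, or enlarging the active cell area, then raises the available current, as described above.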
Proton-exchange membrane fuel cells
In the archetypical hydrogen–oxygen proton-exchange membrane fuel cell (PEMFC) design, a proton-conducting polymer membrane (typically Nafion) contains the electrolyte solution that separates the anode and cathode sides. This was called a solid polymer electrolyte fuel cell (SPEFC) in the early 1970s, before the proton-exchange mechanism was well understood. (Notice that the synonyms polymer electrolyte membrane and proton-exchange membrane result in the same acronym.)
On the anode side, hydrogen diffuses to the anode catalyst, where it dissociates into protons and electrons. The protons are conducted through the membrane to the cathode, but the electrons are forced to travel through an external circuit (supplying power) because the membrane is electrically insulating. On the cathode catalyst, oxygen molecules react with the electrons (which have traveled through the external circuit) and protons to form water.
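For reference, the process just described corresponds to the standard half-reactions of a hydrogen PEM cell:

```latex
\begin{aligned}
\text{Anode:}   &\quad \mathrm{H_2 \rightarrow 2H^{+} + 2e^{-}} \\
\text{Cathode:} &\quad \mathrm{\tfrac{1}{2}O_2 + 2H^{+} + 2e^{-} \rightarrow H_2O} \\
\text{Overall:} &\quad \mathrm{H_2 + \tfrac{1}{2}O_2 \rightarrow H_2O}
\end{aligned}
```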
In addition to this pure hydrogen type, there are hydrocarbon fuels for fuel cells, including diesel, methanol (see: direct-methanol fuel cells and indirect methanol fuel cells) and chemical hydrides. The waste products with these types of fuel are carbon dioxide and water. When hydrogen is used, the CO2 is released when methane from natural gas is combined with steam, in a process called steam methane reforming, to produce the hydrogen. This can take place at a different location from the fuel cell, potentially allowing the hydrogen fuel cell to be used indoors—for example, in forklifts.
The different components of a PEMFC are
bipolar plates,
electrodes,
catalyst,
membrane, and
the necessary hardware such as current collectors and gaskets.
The materials used for different parts of the fuel cells differ by type. The bipolar plates may be made of different types of materials, such as, metal, coated metal, graphite, flexible graphite, C–C composite, carbon–polymer composites etc. The membrane electrode assembly (MEA) is referred to as the heart of the PEMFC and is usually made of a proton-exchange membrane sandwiched between two catalyst-coated carbon papers. Platinum and/or similar types of noble metals are usually used as the catalyst for PEMFC, and these can be contaminated by carbon monoxide, necessitating a relatively pure hydrogen fuel. The electrolyte could be a polymer membrane.
Proton-exchange membrane fuel cell design issues
Cost In 2013, the Department of Energy estimated that lower 80 kW automotive fuel cell system costs per kilowatt could be achieved assuming volume production of 100,000 automotive units per year, with a further reduction at volumes of 500,000 units per year. Many companies are working on techniques to reduce cost in a variety of ways, including reducing the amount of platinum needed in each individual cell. Ballard Power Systems has experimented with a catalyst enhanced with carbon silk, which allows a 30% reduction (1.0–0.7 mg/cm2) in platinum usage without a reduction in performance. Monash University, Melbourne uses PEDOT as a cathode. A study published in 2011 documented the first metal-free electrocatalyst using relatively inexpensive doped carbon nanotubes, which are less than 1% the cost of platinum and are of equal or superior performance. A more recently published article demonstrated how the environmental burdens change when using carbon nanotubes as a carbon substrate for platinum.
Water and air management (in PEMFCs) In this type of fuel cell, the membrane must be hydrated, requiring water to be evaporated at precisely the same rate that it is produced. If water is evaporated too quickly, the membrane dries, the resistance across it increases, and eventually, it will crack, creating a gas "short circuit" where hydrogen and oxygen combine directly, generating heat that will damage the fuel cell. If the water is evaporated too slowly, the electrodes will flood, preventing the reactants from reaching the catalyst and stopping the reaction. Methods to manage water in cells are being developed like electroosmotic pumps focusing on flow control. Just as in a combustion engine, a steady ratio between the reactant and oxygen is necessary to keep the fuel cell operating efficiently.
Temperature management The same temperature must be maintained throughout the cell in order to prevent destruction of the cell through thermal loading. This is particularly challenging as the 2H2 + O2 → 2H2O reaction is highly exothermic, so a large quantity of heat is generated within the fuel cell.
Durability, service life, and special requirements for some type of cells Stationary fuel cell applications typically require more than 40,000 hours of reliable operation at a temperature of , while automotive fuel cells require a 5,000-hour lifespan (the equivalent of ) under extreme temperatures. Current service life is 2,500 hours (about ). Automotive engines must also be able to start reliably at and have a high power-to-volume ratio (typically 2.5 kW/L).
Limited carbon monoxide tolerance of some (non-PEDOT) cathodes.
Phosphoric acid fuel cell
Phosphoric acid fuel cells (PAFCs) were first designed and introduced in 1961 by G. V. Elmore and H. A. Tanner. In these cells, phosphoric acid is used as an electronically non-conductive electrolyte that passes protons from the anode to the cathode and forces electrons to travel from anode to cathode through an external electrical circuit. These cells commonly operate at temperatures of 150 to 200 °C. This high temperature will cause heat and energy loss if the heat is not removed and used properly. This heat can be used to produce steam for air conditioning systems or any other thermal energy-consuming system. Using this heat in cogeneration can enhance the efficiency of phosphoric acid fuel cells from 40–50% to about 80%. Since the proton production rate on the anode is small, platinum is used as a catalyst to increase this ionization rate. A key disadvantage of these cells is the use of an acidic electrolyte, which increases the corrosion or oxidation of components exposed to phosphoric acid.
Solid acid fuel cell
Solid acid fuel cells (SAFCs) are characterized by the use of a solid acid material as the electrolyte. At low temperatures, solid acids have an ordered molecular structure like most salts. At warmer temperatures (between 140 and 150°C for CsHSO4), some solid acids undergo a phase transition to become highly disordered "superprotonic" structures, which increases conductivity by several orders of magnitude. The first proof-of-concept SAFCs were developed in 2000 using cesium hydrogen sulfate (CsHSO4). Current SAFC systems use cesium dihydrogen phosphate (CsH2PO4) and have demonstrated lifetimes in the thousands of hours.
Alkaline fuel cell
The alkaline fuel cell (AFC) or hydrogen-oxygen fuel cell was designed and first demonstrated publicly by Francis Thomas Bacon in 1959. It was used as a primary source of electrical energy in the Apollo space program. The cell consists of two porous carbon electrodes impregnated with a suitable catalyst such as Pt, Ag, CoO, etc. The space between the two electrodes is filled with a concentrated solution of KOH or NaOH, which serves as an electrolyte. H2 gas and O2 gas are bubbled into the electrolyte through the porous carbon electrodes. Thus the overall reaction involves the combination of hydrogen gas and oxygen gas to form water. The cell runs continuously until the supply of reactants is exhausted. This type of cell operates efficiently in the temperature range 343–413 K (70–140 °C) and provides a potential of about 0.9 V. The alkaline anion exchange membrane fuel cell (AAEMFC) is a type of AFC that employs a solid polymer electrolyte instead of aqueous potassium hydroxide (KOH), and it is superior to the aqueous AFC.
High-temperature fuel cells
Solid oxide fuel cell
Solid oxide fuel cells (SOFCs) use a solid material, most commonly a ceramic material called yttria-stabilized zirconia (YSZ), as the electrolyte. Because SOFCs are made entirely of solid materials, they are not limited to the flat plane configuration of other types of fuel cells and are often designed as rolled tubes. They require high operating temperatures (800–1000 °C) and can be run on a variety of fuels including natural gas.
SOFCs are unique because negatively charged oxygen ions travel from the cathode (positive side of the fuel cell) to the anode (negative side of the fuel cell) instead of protons travelling vice versa (i.e., from the anode to the cathode), as is the case in all other types of fuel cells. Oxygen gas is fed through the cathode, where it absorbs electrons to create oxygen ions. The oxygen ions then travel through the electrolyte to react with hydrogen gas at the anode. The reaction at the anode produces electricity and water as by-products. Carbon dioxide may also be a by-product depending on the fuel, but the carbon emissions from a SOFC system are less than those from a fossil fuel combustion plant. The chemical reactions for the SOFC system can be expressed as follows:
Anode reaction: 2H2 + 2O2− → 2H2O + 4e−
Cathode reaction: O2 + 4e− → 2O2−
Overall cell reaction: 2H2 + O2 → 2H2O
SOFC systems can run on fuels other than pure hydrogen gas. However, since hydrogen is necessary for the reactions listed above, the fuel selected must contain hydrogen atoms. For the fuel cell to operate, the fuel must be converted into pure hydrogen gas. SOFCs are capable of internally reforming light hydrocarbons such as methane (natural gas), propane, and butane. These fuel cells are at an early stage of development.
Challenges exist in SOFC systems due to their high operating temperatures. One such challenge is the potential for carbon dust to build up on the anode, which slows down the internal reforming process. Research to address this "carbon coking" issue at the University of Pennsylvania has shown that the use of copper-based cermet (heat-resistant materials made of ceramic and metal) can reduce coking and the loss of performance. Another disadvantage of SOFC systems is the long start-up, making SOFCs less useful for mobile applications. Despite these disadvantages, a high operating temperature provides an advantage by removing the need for a precious metal catalyst like platinum, thereby reducing cost. Additionally, waste heat from SOFC systems may be captured and reused, increasing the theoretical overall efficiency to as high as 80–85%.
The high operating temperature is largely due to the physical properties of the YSZ electrolyte. As temperature decreases, so does the ionic conductivity of YSZ. Therefore, to obtain the optimum performance of the fuel cell, a high operating temperature is required. According to their website, Ceres Power, a UK SOFC fuel cell manufacturer, has developed a method of reducing the operating temperature of their SOFC system to 500–600 degrees Celsius. They replaced the commonly used YSZ electrolyte with a CGO (cerium gadolinium oxide) electrolyte. The lower operating temperature allows them to use stainless steel instead of ceramic as the cell substrate, which reduces cost and start-up time of the system.
Molten-carbonate fuel cell
Molten carbonate fuel cells (MCFCs) require a high operating temperature, around 650 °C, similar to SOFCs. MCFCs use lithium potassium carbonate salt as an electrolyte, and this salt liquefies at high temperatures, allowing for the movement of charge within the cell – in this case, negative carbonate ions.
Like SOFCs, MCFCs are capable of converting fossil fuel to a hydrogen-rich gas in the anode, eliminating the need to produce hydrogen externally. The reforming process creates emissions. MCFC-compatible fuels include natural gas, biogas and gas produced from coal. The hydrogen in the gas reacts with carbonate ions from the electrolyte to produce water, carbon dioxide, electrons and small amounts of other chemicals. The electrons travel through an external circuit, creating electricity, and return to the cathode. There, oxygen from the air and carbon dioxide recycled from the anode react with the electrons to form carbonate ions that replenish the electrolyte, completing the circuit. The chemical reactions for an MCFC system can be expressed as follows:
Anode reaction: CO32− + H2 → H2O + CO2 + 2e−
Cathode reaction: CO2 + ½O2 + 2e− → CO32−
Overall cell reaction: H2 + ½O2 → H2O
As with SOFCs, MCFC disadvantages include slow start-up times because of their high operating temperature. This makes MCFC systems not suitable for mobile applications, and this technology will most likely be used for stationary fuel cell purposes. The main challenge of MCFC technology is the cells' short life span. The high-temperature and carbonate electrolyte lead to corrosion of the anode and cathode. These factors accelerate the degradation of MCFC components, decreasing the durability and cell life. Researchers are addressing this problem by exploring corrosion-resistant materials for components as well as fuel cell designs that may increase cell life without decreasing performance.
MCFCs hold several advantages over other fuel cell technologies, including their resistance to impurities. They are not prone to "carbon coking", which refers to carbon build-up on the anode that results in reduced performance by slowing down the internal fuel reforming process. Therefore, carbon-rich fuels like gases made from coal are compatible with the system. The United States Department of Energy claims that coal, itself, might even be a fuel option in the future, assuming the system can be made resistant to impurities such as sulfur and particulates that result from converting coal into hydrogen. MCFCs also have relatively high efficiencies. They can reach a fuel-to-electricity efficiency of 50%, considerably higher than the 37–42% efficiency of a phosphoric acid fuel cell plant. Efficiencies can be as high as 65% when the fuel cell is paired with a turbine, and 85% if heat is captured and used in a combined heat and power (CHP) system.
FuelCell Energy, a Connecticut-based fuel cell manufacturer, develops and sells MCFC fuel cells. The company says that their MCFC products range from 300 kW to 2.8 MW systems that achieve 47% electrical efficiency and can utilize CHP technology to obtain higher overall efficiencies. One product, the DFC-ERG, is combined with a gas turbine and, according to the company, it achieves an electrical efficiency of 65%.
Electric storage fuel cell
The electric storage fuel cell is a conventional battery chargeable by electric power input, using the conventional electro-chemical effect. However, the battery further includes hydrogen (and oxygen) inputs for alternatively charging the battery chemically.
Comparison of fuel cell types
Glossary of terms in table:
Anode The electrode at which oxidation (a loss of electrons) takes place. For fuel cells and other galvanic cells, the anode is the negative terminal; for electrolytic cells (where electrolysis occurs), the anode is the positive terminal.
Aqueous solution
Catalyst A chemical substance that increases the rate of a reaction without being consumed; after the reaction, it can potentially be recovered from the reaction mixture and is chemically unchanged. The catalyst lowers the activation energy required, allowing the reaction to proceed more quickly or at a lower temperature. In a fuel cell, the catalyst facilitates the reaction of oxygen and hydrogen. It is usually made of platinum powder very thinly coated onto carbon paper or cloth. The catalyst is rough and porous so the maximum surface area of the platinum can be exposed to the hydrogen or oxygen. The platinum-coated side of the catalyst faces the membrane in the fuel cell.
Cathode The electrode at which reduction (a gain of electrons) occurs. For fuel cells and other galvanic cells, the cathode is the positive terminal; for electrolytic cells (where electrolysis occurs), the cathode is the negative terminal.
Electrolyte A substance that conducts charged ions from one electrode to the other in a fuel cell, battery, or electrolyzer.
Fuel cell stack Individual fuel cells connected in a series. Fuel cells are stacked to increase voltage.
Matrix something within or from which something else originates, develops, or takes form.
Membrane The separating layer in a fuel cell that acts as electrolyte (an ion-exchanger) as well as a barrier film separating the gases in the anode and cathode compartments of the fuel cell.
Molten carbonate fuel cell (MCFC) A type of fuel cell that contains a molten carbonate electrolyte. Carbonate ions (CO32−) are transported from the cathode to the anode. Operating temperatures are typically near 650 °C.
Phosphoric acid fuel cell (PAFC) A type of fuel cell in which the electrolyte consists of concentrated phosphoric acid (H3PO4). Protons (H+) are transported from the anode to the cathode. The operating temperature range is generally 160–220 °C.
Proton-exchange membrane fuel cell (PEM) A fuel cell incorporating a solid polymer membrane used as its electrolyte. Protons (H+) are transported from the anode to the cathode. The operating temperature range is generally 60–100 °C for Low Temperature Proton-exchange membrane fuel cell (LT-PEMFC). PEM fuel cell with operating temperature of 120-200 °C is called High Temperature Proton-exchange membrane fuel cell (HT-PEMFC).
Solid oxide fuel cell (SOFC) A type of fuel cell in which the electrolyte is a solid, nonporous metal oxide, typically zirconium oxide (ZrO2) treated with Y2O3, and O2− is transported from the cathode to the anode. Any CO in the reformate gas is oxidized to CO2 at the anode. Temperatures of operation are typically 800–1,000 °C.
Solution
Efficiency of leading fuel cell types
Theoretical maximum efficiency
The energy efficiency of a system or device that converts energy is measured by the ratio of the amount of useful energy put out by the system ("output energy") to the total amount of energy that is put in ("input energy") or by useful output energy as a percentage of the total input energy. In the case of fuel cells, useful output energy is measured in electrical energy produced by the system. Input energy is the energy stored in the fuel. According to the U.S. Department of Energy, fuel cells are generally between 40 and 60% energy efficient. This is higher than some other systems for energy generation. For example, the internal combustion engine of a car can be about 43% energy efficient. Steam power plants usually achieve efficiencies of 30-40% while combined cycle gas turbine and steam plants can achieve efficiencies above 60%. In combined heat and power (CHP) systems, the waste heat produced by the primary power cycle - whether fuel cell, nuclear fission or combustion - is captured and put to use, increasing the efficiency of the system to up to 85–90%.
The theoretical maximum efficiency of any type of power generation system is never reached in practice, and it does not consider other steps in power generation, such as production, transportation and storage of fuel and conversion of the electricity into mechanical power. However, this calculation allows the comparison of different types of power generation. The theoretical maximum efficiency of a fuel cell approaches 100%, while the theoretical maximum efficiency of internal combustion engines is approximately 58%.
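As a worked illustration of this limit, the usual figure for a hydrogen fuel cell is the ratio of the Gibbs free energy change to the enthalpy change of the overall reaction; the values below are the standard 25 °C figures for the hydrogen–oxygen reaction with liquid water as the product (higher-heating-value basis).

```latex
\eta_{\text{max}} = \frac{\Delta G}{\Delta H}
                  = \frac{237.1\ \text{kJ/mol}}{285.8\ \text{kJ/mol}}
                  \approx 0.83
```

Because ΔG = ΔH − TΔS, this ratio rises as the operating temperature falls, which is why the theoretical limit is often described as approaching 100% even though the value at 25 °C is lower.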
In practice
Values are given from 40% for acidic, 50% for molten carbonate, to 60% for alkaline, solid oxide and PEM fuel cells.
Fuel cells cannot store energy like a battery, except as hydrogen, but in some applications, such as stand-alone power plants based on discontinuous sources such as solar or wind power, they are combined with electrolyzers and storage systems to form an energy storage system. As of 2019, 90% of hydrogen was used for oil refining, chemicals and fertilizer production (where hydrogen is required for the Haber–Bosch process), and 98% of hydrogen is produced by steam methane reforming, which emits carbon dioxide. The overall efficiency (electricity to hydrogen and back to electricity) of such plants (known as round-trip efficiency), using pure hydrogen and pure oxygen can be "from 35 up to 50 percent", depending on gas density and other conditions. The electrolyzer/fuel cell system can store indefinite quantities of hydrogen, and is therefore suited for long-term storage.
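The 35–50% round-trip figure quoted above is essentially the product of the component efficiencies. The sketch below uses assumed electrolyzer, storage and fuel cell efficiencies (chosen only to fall in commonly cited ranges, not taken from this article) to show the arithmetic.

```python
# Illustrative round-trip (electricity -> hydrogen -> electricity) calculation.
# All component efficiencies are assumptions, not sourced values.
eta_electrolyzer = 0.70   # assumed electrolyzer efficiency
eta_storage      = 0.95   # assumed compression/storage efficiency
eta_fuel_cell    = 0.60   # assumed fuel cell electrical efficiency

round_trip = eta_electrolyzer * eta_storage * eta_fuel_cell
print(f"round-trip efficiency: {round_trip:.0%}")   # about 40%, inside the 35-50% range
```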
Solid-oxide fuel cells produce heat from the recombination of the oxygen and hydrogen. The ceramic can run as hot as 1,000 °C. This heat can be captured and used to heat water in a micro combined heat and power (m-CHP) application. When the heat is captured, total efficiency can reach 80–90% at the unit, but this does not consider production and distribution losses. CHP units are being developed today for the European home market.
Professor Jeremy P. Meyers, in the Electrochemical Society journal Interface in 2008, wrote, "While fuel cells are efficient relative to combustion engines, they are not as efficient as batteries, primarily due to the inefficiency of the oxygen reduction reaction (and ... the oxygen evolution reaction, should the hydrogen be formed by electrolysis of water). ... [T]hey make the most sense for operation disconnected from the grid, or when fuel can be provided continuously. For applications that require frequent and relatively rapid start-ups ... where zero emissions are a requirement, as in enclosed spaces such as warehouses, and where hydrogen is considered an acceptable reactant, a [PEM fuel cell] is becoming an increasingly attractive choice [if exchanging batteries is inconvenient]". In 2013 military organizations were evaluating fuel cells to determine if they could significantly reduce the battery weight carried by soldiers.
In vehicles
In a fuel cell vehicle, the tank-to-wheel efficiency is greater than 45% at low loads, with average values of about 36% when a driving cycle like the NEDC (New European Driving Cycle) is used as the test procedure. The comparable NEDC value for a diesel vehicle is 22%. In 2008, Honda released a demonstration fuel cell electric vehicle (the Honda FCX Clarity) with a fuel cell stack claiming 60% tank-to-wheel efficiency.
It is also important to take losses due to fuel production, transportation, and storage into account. Fuel cell vehicles running on compressed hydrogen may have a power-plant-to-wheel efficiency of 22% if the hydrogen is stored as high-pressure gas, and 17% if it is stored as liquid hydrogen.
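For illustration, the power-plant-to-wheel figure can be read as the product of a hydrogen production, compression and storage chain efficiency and the tank-to-wheel efficiency discussed above; the chain value below is an assumption chosen only so that the product matches the quoted 22%.

```python
# Illustrative decomposition of the 22% power-plant-to-wheel figure for
# compressed hydrogen. The chain efficiency is an assumption, not a sourced value.
eta_tank_to_wheel = 0.36   # average tank-to-wheel efficiency on an NEDC-like cycle (from the text)
eta_h2_chain      = 0.61   # assumed production + compression + storage efficiency

eta_plant_to_wheel = eta_h2_chain * eta_tank_to_wheel
print(f"power-plant-to-wheel efficiency: {eta_plant_to_wheel:.0%}")   # about 22%
```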
Applications
Power
Stationary fuel cells are used for commercial, industrial and residential primary and backup power generation. Fuel cells are very useful as power sources in remote locations, such as spacecraft, remote weather stations, large parks, communications centers, rural locations including research stations, and in certain military applications. A fuel cell system running on hydrogen can be compact and lightweight, and have no major moving parts. Because fuel cells have no moving parts and do not involve combustion, in ideal conditions they can achieve up to 99.9999% reliability. This equates to less than one minute of downtime in a six-year period.
Since fuel cell electrolyzer systems do not store fuel in themselves, but rather rely on external storage units, they can be successfully applied in large-scale energy storage, rural areas being one example. There are many different types of stationary fuel cells, so efficiencies vary, but most are between 40% and 60% energy efficient. However, when the fuel cell's waste heat is used to heat a building in a cogeneration system, this efficiency can increase to 85%. This is significantly more efficient than traditional coal power plants, which are only about one third energy efficient. Assuming production at scale, fuel cells could save 20–40% on energy costs when used in cogeneration systems. Fuel cells are also much cleaner than traditional power generation; a fuel cell power plant using natural gas as a hydrogen source would create less than one ounce of pollution (other than carbon dioxide) for every 1,000 kW·h produced, compared to 25 pounds of pollutants generated by conventional combustion systems. Fuel cells also produce 97% less nitrogen oxide emissions than conventional coal-fired power plants.
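To make the pollution comparison concrete, the short sketch below scales the two per-1,000 kWh figures quoted above to an assumed annual output of 1 GWh.

```python
# Illustrative scaling of the pollutant figures quoted above (excluding CO2).
# The 1 GWh annual output is an assumed example.
annual_kwh = 1_000_000.0

fuel_cell_oz_per_1000_kwh  = 1.0    # "less than one ounce ... for every 1,000 kW·h"
combustion_lb_per_1000_kwh = 25.0   # "25 pounds ... by conventional combustion systems"

fuel_cell_lb  = annual_kwh / 1000.0 * fuel_cell_oz_per_1000_kwh / 16.0  # ounces -> pounds
combustion_lb = annual_kwh / 1000.0 * combustion_lb_per_1000_kwh
print(f"fuel cell plant:  about {fuel_cell_lb:,.0f} lb of pollutants per year")
print(f"combustion plant: about {combustion_lb:,.0f} lb of pollutants per year")
```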
One such pilot program is operating on Stuart Island in Washington State. There the Stuart Island Energy Initiative has built a complete, closed-loop system: Solar panels power an electrolyzer, which makes hydrogen. The hydrogen is stored in a tank at , and runs a ReliOn fuel cell to provide full electric back-up to the off-the-grid residence. Another closed system loop was unveiled in late 2011 in Hempstead, NY.
Fuel cells can be used with low-quality gas from landfills or waste-water treatment plants to generate power and lower methane emissions. A 2.8 MW fuel cell plant in California is said to be the largest of the type. Small-scale (sub-5kWhr) fuel cells are being developed for use in residential off-grid deployment.
Cogeneration
Combined heat and power (CHP) fuel cell systems, including micro combined heat and power (MicroCHP) systems, are used to generate both electricity and heat for homes (see home fuel cell), office buildings and factories. The system generates constant electric power (selling excess power back to the grid when it is not consumed), and at the same time produces hot air and water from the waste heat. As a result, CHP systems have the potential to save primary energy, as they can make use of waste heat which is generally rejected by thermal energy conversion systems. A typical home fuel cell has a capacity of 1–3 kWel and 4–8 kWth. CHP systems linked to absorption chillers use their waste heat for refrigeration.
The waste heat from fuel cells can be diverted during the summer directly into the ground providing further cooling while the waste heat during winter can be pumped directly into the building. The University of Minnesota owns the patent rights to this type of system.
Co-generation systems can reach 85% efficiency (40–60% electric and the remainder as thermal). Phosphoric-acid fuel cells (PAFC) comprise the largest segment of existing CHP products worldwide and can provide combined efficiencies close to 90%. Molten carbonate (MCFC) and solid-oxide fuel cells (SOFC) are also used for combined heat and power generation and have electrical energy efficiencies around 60%. Disadvantages of co-generation systems include slow ramping up and down rates, high cost and short lifetime. The need for a hot-water storage tank to smooth out thermal output is also a serious disadvantage in the domestic marketplace, where space in domestic properties is at a premium.
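A simple energy balance shows how the electrical and thermal shares combine. The unit rating below is a hypothetical example; the efficiencies follow the 40–60% electrical and 85% combined figures quoted above.

```python
# Illustrative energy balance for a co-generation fuel cell unit.
# The electrical rating is a hypothetical example; efficiencies follow the text.
electrical_output_kw = 1.5    # assumed electrical rating
electrical_eff       = 0.45   # assumed electrical efficiency (within the 40-60% range)
total_chp_eff        = 0.85   # combined electrical + thermal efficiency

fuel_input_kw  = electrical_output_kw / electrical_eff
useful_heat_kw = fuel_input_kw * (total_chp_eff - electrical_eff)
rejected_kw    = fuel_input_kw * (1.0 - total_chp_eff)
print(f"fuel input:    {fuel_input_kw:.2f} kW")
print(f"useful heat:   {useful_heat_kw:.2f} kW")
print(f"heat rejected: {rejected_kw:.2f} kW")
```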
Delta-ee consultants stated in 2013 that, with 64% of global sales, fuel cell micro-combined heat and power surpassed conventional systems in sales in 2012. The Japanese ENE FARM project reported that 34,213 PEMFC and 2,224 SOFC units were installed in the period 2012–2014: 30,000 units on LNG and 6,000 on LPG.
Fuel cell electric vehicles (FCEVs)
Automobiles
Four fuel cell electric vehicles have been introduced for commercial lease and sale: the Honda Clarity, Toyota Mirai, Hyundai ix35 FCEV, and the Hyundai Nexo. By year-end 2019, about 18,000 FCEVs had been leased or sold worldwide. Fuel cell electric vehicles feature an average range of between refuelings and can be refueled in about 5 minutes. The U.S. Department of Energy's Fuel Cell Technology Program states that, as of 2011, fuel cells achieved 53–59% efficiency at one-quarter power and 42–53% vehicle efficiency at full power, and a durability of over with less than 10% degradation. In a 2017 Well-to-Wheels simulation analysis that "did not address the economics and market constraints", General Motors and its partners estimated that, for an equivalent journey, a fuel cell electric vehicle running on compressed gaseous hydrogen produced from natural gas could use about 40% less energy and emit 45% less greenhouse gasses than an internal combustion vehicle.
In 2015, Toyota introduced its first fuel cell vehicle, the Mirai, at a price of $57,000. Hyundai introduced the limited production Hyundai ix35 FCEV under a lease agreement. In 2016, Honda started leasing the Honda Clarity Fuel Cell. In 2018, Hyundai introduced the Hyundai Nexo, replacing the Hyundai ix35 FCEV. In 2020, Toyota introduced the second generation of its Mirai brand, improving fuel efficiency and expanding range compared to the original 2014 sedan model.
In 2024, Mirai owners filed a class action lawsuit against Toyota in California over the lack of availability of hydrogen for fuel cell electric cars, alleging, among other things, fraudulent concealment and misrepresentation as well as violations of California's false advertising law and breaches of implied warranty. The same year, Hyundai recalled all 1,600 Nexo vehicles sold in the US to that time due to a risk of fuel leaks and fire from a faulty "pressure relief device".
Criticism
Some commentators believe that hydrogen fuel cell cars will never become economically competitive with other technologies or that it will take decades for them to become profitable. Elon Musk, CEO of battery-electric vehicle maker Tesla Motors, stated in 2015 that fuel cells for use in cars will never be commercially viable because of the inefficiency of producing, transporting and storing hydrogen and the flammability of the gas, among other reasons. In 2012, Lux Research, Inc. issued a report that stated: "The dream of a hydrogen economy ... is no nearer". It concluded that "Capital cost ... will limit adoption to a mere 5.9 GW" by 2030, providing "a nearly insurmountable barrier to adoption, except in niche applications". The analysis concluded that, by 2030, PEM stationary market will reach $1 billion, while the vehicle market, including forklifts, will reach a total of $2 billion. Other analyses cite the lack of an extensive hydrogen infrastructure in the U.S. as an ongoing challenge to Fuel Cell Electric Vehicle commercialization.
In 2014, Joseph Romm, the author of The Hype About Hydrogen (2005), said that FCVs still had not overcome the high fueling cost, lack of fuel-delivery infrastructure, and pollution caused by producing hydrogen. "It would take several miracles to overcome all of those problems simultaneously in the coming decades." He concluded that renewable energy cannot economically be used to make hydrogen for an FCV fleet "either now or in the future." Greentech Media's analyst reached similar conclusions in 2014. In 2015, CleanTechnica listed some of the disadvantages of hydrogen fuel cell vehicles. So did Car Throttle. A 2019 video by Real Engineering noted that, notwithstanding the introduction of vehicles that run on hydrogen, using hydrogen as a fuel for cars does not help to reduce carbon emissions from transportation. The 95% of hydrogen still produced from fossil fuels releases carbon dioxide, and producing hydrogen from water is an energy-consuming process. Storing hydrogen requires more energy either to cool it down to the liquid state or to put it into tanks under high pressure, and delivering the hydrogen to fueling stations requires more energy and may release more carbon. The hydrogen needed to move a FCV a kilometer costs approximately 8 times as much as the electricity needed to move a BEV the same distance.
A 2020 assessment concluded that hydrogen vehicles are still only 38% efficient, while battery EVs are 80% efficient. In 2021 CleanTechnica concluded that (a) hydrogen cars remain far less efficient than electric cars; (b) grey hydrogen – hydrogen produced with polluting processes – makes up the vast majority of available hydrogen; (c) delivering hydrogen would require building a vast and expensive new delivery and refueling infrastructure; and (d) the remaining two "advantages of fuel cell vehicles – longer range and fast fueling times – are rapidly being eroded by improving battery and charging technology." A 2022 study in Nature Electronics agreed. A 2023 study by the Centre for International Climate and Environmental Research (CICERO) estimated that leaked hydrogen has a global warming effect 11.6 times stronger than CO₂.
Buses
, there were about 100 fuel cell buses in service around the world. Most of these were manufactured by UTC Power, Toyota, Ballard, Hydrogenics, and Proton Motor. UTC buses had driven more than by 2011. Fuel cell buses have from 39% to 141% higher fuel economy than diesel buses and natural gas buses.
, the NREL was evaluating several current and planned fuel cell bus projects in the U.S.
Trains
Train operators may use hydrogen fuel cells in trains in an effort to save the costs of installing overhead electrification and to maintain the range offered by diesel trains. They have encountered expenses, however, due to fuel cells in trains lasting only three years, maintenance of the hydrogen tank and the additional need for batteries as a power buffer. In 2018, the first fuel cell-powered trains, the Alstom Coradia iLint multiple units, began running on the Buxtehude–Bremervörde–Bremerhaven–Cuxhaven line in Germany. Hydrogen trains have also been introduced in Sweden and the UK.
Trucks
In December 2020, Toyota and Hino Motors, together with Seven-Eleven (Japan), FamilyMart and Lawson announced that they have agreed to jointly consider introducing light-duty fuel cell electric trucks (light-duty FCETs). Lawson started testing for low temperature delivery at the end of July 2021 in Tokyo, using a Hino Dutro in which the Toyota Mirai fuel cell is implemented. FamilyMart started testing in Okazaki city.
In August 2021, Toyota announced their plan to make fuel cell modules at its Kentucky auto-assembly plant for use in zero-emission big rigs and heavy-duty commercial vehicles. They plan to begin assembling the electrochemical devices in 2023.
In October 2021, Daimler Truck's fuel cell based truck received approval from German authorities for use on public roads.
Forklifts
A fuel cell forklift (also called a fuel cell lift truck) is a fuel cell-powered industrial forklift truck used to lift and transport materials. In 2013 there were over 4,000 fuel cell forklifts used in material handling in the US, of which 500 received funding from DOE (2012). As of 2024, approximately 50,000 hydrogen forklifts are in operation worldwide (the bulk of which are in the U.S.), as compared with 1.2 million battery electric forklifts that were purchased in 2021.
Most companies in Europe and the US do not use petroleum-powered forklifts, as these vehicles work indoors where emissions must be controlled and instead use electric forklifts. Fuel cell-powered forklifts can be refueled in 3 minutes and they can be used in refrigerated warehouses, where their performance is not degraded by lower temperatures. The FC units are often designed as drop-in replacements.
Motorcycles and bicycles
In 2005, a British manufacturer of hydrogen-powered fuel cells, Intelligent Energy (IE), produced the first working hydrogen-run motorcycle called the ENV (Emission Neutral Vehicle). The motorcycle holds enough fuel to run for four hours, and to travel in an urban area, at a top speed of . In 2004 Honda developed a fuel cell motorcycle that utilized the Honda FC Stack.
Other examples of motorbikes and bicycles that use hydrogen fuel cells include the Taiwanese company APFCT's scooter using the fueling system from Italy's Acta SpA and the Suzuki Burgman scooter with an IE fuel cell that received EU Whole Vehicle Type Approval in 2011. Suzuki Motor Corp. and IE have announced a joint venture to accelerate the commercialization of zero-emission vehicles.
Airplanes
In 2003, the world's first propeller-driven airplane to be powered entirely by a fuel cell was flown. The fuel cell was a stack design that allowed the fuel cell to be integrated with the plane's aerodynamic surfaces. Fuel cell-powered unmanned aerial vehicles (UAV) include a Horizon fuel cell UAV that set the record distance flown for a small UAV in 2007. Boeing researchers and industry partners throughout Europe conducted experimental flight tests in February 2008 of a manned airplane powered only by a fuel cell and lightweight batteries. The fuel cell demonstrator airplane, as it was called, used a proton-exchange membrane (PEM) fuel cell/lithium-ion battery hybrid system to power an electric motor, which was coupled to a conventional propeller.
In 2009, the Naval Research Laboratory's (NRL's) Ion Tiger utilized a hydrogen-powered fuel cell and flew for 23 hours and 17 minutes. Fuel cells are also being tested and considered to provide auxiliary power in aircraft, replacing fossil fuel generators that were previously used to start the engines and power on board electrical needs, while reducing carbon emissions. In 2016 a Raptor E1 drone made a successful test flight using a fuel cell that was lighter than the lithium-ion battery it replaced. The flight lasted 10 minutes at an altitude of , although the fuel cell reportedly had enough fuel to fly for two hours. The fuel was contained in approximately 100 solid pellets composed of a proprietary chemical within an unpressurized cartridge. The pellets are physically robust and operate at temperatures as warm as . The cell was from Arcola Energy.
Lockheed Martin Skunk Works Stalker is an electric UAV powered by solid oxide fuel cell.
Boats
The Hydra, a 22-person fuel cell boat, operated from 1999 to 2001 on the Rhine river near Bonn, Germany, and was used as a ferry boat in Ghent, Belgium, during an electric boat conference in 2000. It was fully certified by Germanischer Lloyd for passenger transport. The Zemship, a small passenger ship, was produced from 2003 to 2013. It used a 100 kW polymer electrolyte membrane fuel cell (PEMFC) with 7 lead gel batteries. With these systems, alongside 12 storage tanks, the fuel cells provided 560 V and an energy capacity of 234 kWh. Made in Hamburg, Germany, the FCS Alsterwasser, revealed in 2008, was one of the first passenger ships powered by fuel cells and could carry 100 passengers. The hybrid fuel cell technology that powered this ship was produced by Proton Motor Fuel Cell GmbH.
In 2010, the MF Vågen was first produced, utilizing 12 kW fuel cells and 2- to 3-kilogram metal hydride hydrogen storage. It also utilizes 25 kWh lithium batteries and a 10 kW DC motor.
The Hornblower Hybrid debuted in 2012. It utilizes a diesel generator, batteries, photovoltaics, wind power, and fuel cells for energy. Made in Bristol, a 12-passenger hybrid ferry, Hydrogenesis, has been in operation since 2012. The SF-BREEZE is a two-motor boat that utilizes 41 × 120 kW fuel cells. With a type C storage tank, the pressurized vessel can maintain 1200 kg of LH2. These ships are still in operation today. In Norway, the first ferry powered by fuel cells running on liquid hydrogen was scheduled for its first test drives in December 2022.
The Type 212 submarines of the German and Italian navies use fuel cells to remain submerged for weeks without the need to surface. The U212A is a non-nuclear submarine developed by German naval shipyard Howaldtswerke Deutsche Werft. The system consists of nine PEM fuel cells, providing between 30 kW and 50 kW each. The ship is silent, giving it an advantage in the detection of other submarines.
Portable power systems
Portable fuel cell systems are generally classified as weighing under 10 kg and providing power of less than 5 kW. The potential market size for smaller fuel cells is quite large, with a potential growth rate of up to 40% per annum and a market size of around $10 billion, leading a great deal of research to be devoted to the development of portable power cells. Within this market, two groups have been identified. The first is the microfuel cell market, in the 1–50 W range, for powering smaller electronic devices. The second is the 1–5 kW range of generators for larger-scale power generation (e.g. military outposts, remote oil fields).
Microfuel cells are primarily aimed at penetrating the market for phones and laptops. This can be primarily attributed to the advantageous system-level energy density of fuel cells over lithium-ion batteries. For a battery, this system includes the charger as well as the battery itself. For the fuel cell, this system would include the cell, the necessary fuel and peripheral attachments. Taking the full system into consideration, fuel cells have been shown to provide 530 Wh/kg, compared to 44 Wh/kg for lithium-ion batteries. However, while the weight of fuel cell systems offers a distinct advantage, current costs are not in their favor. While a battery system will generally cost around $1.20 per Wh, fuel cell systems cost around $5 per Wh, putting them at a significant disadvantage.
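The trade-off just described can be made concrete with the figures quoted above; the 200 Wh mission requirement below is an arbitrary assumed example.

```python
# System-level comparison of a portable fuel cell vs. a lithium-ion battery pack,
# using the energy-density and cost figures quoted above. The 200 Wh requirement
# is an assumed example.
energy_needed_wh = 200.0

systems = {
    "fuel cell":   {"wh_per_kg": 530.0, "usd_per_wh": 5.00},
    "lithium-ion": {"wh_per_kg": 44.0,  "usd_per_wh": 1.20},
}

for name, spec in systems.items():
    mass_kg = energy_needed_wh / spec["wh_per_kg"]
    cost    = energy_needed_wh * spec["usd_per_wh"]
    print(f"{name:12s} mass: {mass_kg:5.2f} kg   cost: ${cost:7.2f}")
```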
As power demands for cell phones increase, fuel cells could become much more attractive options for larger power generation. Consumers frequently demand longer run times for phones and computers, so fuel cells could start to make strides into laptop and cell phone markets. The price will continue to go down as development of fuel cells continues to accelerate. Current strategies for improving microfuel cells include the use of carbon nanotubes. Girishkumar et al. showed that depositing nanotubes on electrode surfaces allows for a substantially greater surface area, increasing the oxygen reduction rate.
Fuel cells for use in larger scale operations also show much promise. Portable power systems that use fuel cells can be used in the leisure sector (e.g. RVs, cabins, marine), the industrial sector (e.g. power for remote locations including gas/oil wellsites, communication towers, security, weather stations), and in the military sector. SFC Energy is a German manufacturer of direct methanol fuel cells for a variety of portable power systems. Ensol Systems Inc. is an integrator of portable power systems, using the SFC Energy DMFC. The key advantage of fuel cells in this market is the high power generation per unit weight. While fuel cells can be expensive, they hold great value for remote locations that require dependable energy. For a 72-hour excursion, the comparison in weight is substantial: a fuel cell weighs only 15 pounds, compared with the 29 pounds of batteries needed for the same energy.
Other applications
Providing power for base stations or cell sites
Emergency power systems are a type of fuel cell system, which may include lighting, generators and other apparatus, to provide backup resources in a crisis or when regular systems fail. They find uses in a wide variety of settings, from residential homes to hospitals, scientific laboratories, data centers, telecommunication equipment and modern naval ships.
An uninterruptible power supply (UPS) provides emergency power and, depending on the topology, may also provide line regulation to connected equipment by supplying power from a separate source when utility power is not available. Unlike a standby generator, it can provide instant protection from a momentary power interruption.
Smartphones, laptops and tablets for use in locations where AC charging may not be readily available.
Portable charging docks for small electronics (e.g. a belt clip that charges a cell phone or PDA).
Small heating appliances
Food preservation, achieved by exhausting the oxygen and automatically maintaining oxygen exhaustion in a shipping container, containing, for example, fresh fish.
Sensors, including in Breathalyzers, where the amount of voltage generated by a fuel cell is used to determine the concentration of fuel (alcohol) in the sample.
Fueling stations
According to FuelCellsWorks, an industry group, at the end of 2019, 330 hydrogen refueling stations were open to the public worldwide. As of June 2020, there were 178 publicly available hydrogen stations in operation in Asia. 114 of these were in Japan. There were at least 177 stations in Europe, and about half of these were in Germany. There were 44 publicly accessible stations in the US, 42 of which were located in California.
A hydrogen fueling station costs between $1 million and $4 million to build.
Social implications
As of 2023, technological barriers to fuel cell adoption remain. Fuel cells are used primarily for material handling in warehouses, distribution centers, and manufacturing facilities. They are projected to be useful and sustainable in a wider range of applications, but current applications do not often reach lower-income communities, though some attempts at inclusivity are being made, for example in accessibility.
Markets and economics
In 2012, fuel cell industry revenues exceeded $1 billion market value worldwide, with Asian pacific countries shipping more than 3/4 of the fuel cell systems worldwide. However, as of January 2014, no public company in the industry had yet become profitable. There were 140,000 fuel cell stacks shipped globally in 2010, up from 11,000 shipments in 2007, and from 2011 to 2012 worldwide fuel cell shipments had an annual growth rate of 85%. Tanaka Kikinzoku expanded its manufacturing facilities in 2011. Approximately 50% of fuel cell shipments in 2010 were stationary fuel cells, up from about a third in 2009, and the four dominant producers in the Fuel Cell Industry were the United States, Germany, Japan and South Korea. The Department of Energy Solid State Energy Conversion Alliance found that, as of January 2011, stationary fuel cells generated power at approximately $724 to $775 per kilowatt installed. In 2011, Bloom Energy, a major fuel cell supplier, said that its fuel cells generated power at 9–11 cents per kilowatt-hour, including the price of fuel, maintenance, and hardware.
Industry groups predict that there are sufficient platinum resources for future demand, and in 2007, research at Brookhaven National Laboratory suggested that platinum could be replaced by a gold-palladium coating, which may be less susceptible to poisoning and thereby improve fuel cell lifetime. Another method would use iron and sulphur instead of platinum. This would lower the cost of a fuel cell (as the platinum in a regular fuel cell costs around , and the same amount of iron costs only around ). The concept was being developed by a coalition of the John Innes Centre and the University of Milan-Bicocca. PEDOT cathodes are immune to carbon monoxide poisoning.
In 2016, Samsung "decided to drop fuel cell-related business projects, as the outlook of the market isn't good".
Research and development
2005: Georgia Institute of Technology researchers used triazole to raise the operating temperature of PEM fuel cells from below 100 °C to over 125 °C, claiming this will require less carbon-monoxide purification of the hydrogen fuel.
2008: Monash University, Melbourne used PEDOT as a cathode.
2009: Researchers at the University of Dayton, in Ohio, showed that arrays of vertically grown carbon nanotubes could be used as the catalyst in fuel cells. The same year, a nickel bisdiphosphine-based catalyst for fuel cells was demonstrated.
2013: British firm ACAL Energy developed a fuel cell that it said could run for 10,000 hours in simulated driving conditions. It asserted that the cost of fuel cell construction can be reduced to $40/kW (roughly $9,000 for 300 HP).
2014: Researchers at Imperial College London developed a new method for regenerating hydrogen sulfide-contaminated PEFCs. They recovered 95–100% of the original performance of a hydrogen sulfide-contaminated PEFC and were also successful in rejuvenating an SO2-contaminated PEFC. This regeneration method is applicable to multiple cell stacks.
2019: U.S. Army Research Laboratory researchers developed a two-part in-situ hydrogen generation fuel cell: one part generates hydrogen, and the other generates electric power through an internal hydrogen/air power plant.
2022: Researchers from the University of Delaware developed a hydrogen-powered fuel cell projected to operate at lower cost, roughly $1.4/kW. The design removes carbon dioxide from the air feed of hydroxide exchange membrane fuel cells.
| Technology | Energy storage | null |
11763 | https://en.wikipedia.org/wiki/Frost | Frost | Frost is a thin layer of ice on a solid surface, which forms from water vapor that deposits onto a freezing surface. Frost forms when the air contains more water vapor than it can normally hold at a specific temperature. The process is similar to the formation of dew, except it occurs below the freezing point of water typically without crossing through a liquid state.
Air always contains a certain amount of water vapor, depending on temperature. Warmer air can hold more than colder air. When the atmosphere contains more water than it can hold at a specific temperature, its relative humidity rises above 100% and it becomes supersaturated, and the excess water vapor is forced to deposit onto any nearby surface, forming seed crystals. The temperature at which frost will form is called the dew point, and depends on the humidity of the air. When the temperature of the air drops below its dew point, excess water vapor is forced out of the air, resulting in a phase change directly from water vapor (a gas) to ice (a solid). As more water molecules are added to the seeds, crystal growth occurs, forming ice crystals. Crystals may vary in size and shape, from an even layer of numerous microscopic seeds to fewer but much larger crystals: long dendritic (tree-like) crystals growing across a surface, acicular (needle-like) crystals growing outward from the surface, snowflake-shaped crystals, or even large, knife-like blades of ice covering an object. Which form develops depends on many factors, such as temperature, air pressure, air motion and turbulence, surface roughness and wettability, and the level of supersaturation. For example, water vapor adsorbs to glass very well, so automobile windows will often frost before the paint, and large hoar-frost crystals can grow very rapidly when the air is very cold, calm, and heavily saturated, such as during an ice fog.
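For readers who want a rough estimate of when such deposition becomes possible, a common engineering approximation is the Magnus formula, which relates air temperature and relative humidity to the dew point (over water) or frost point (over ice). The sketch below is a minimal illustration; the coefficients are widely published approximate values and are an assumption of this example, not part of the article.

```python
import math

def dew_or_frost_point(temp_c: float, rel_humidity_pct: float, over_ice: bool = False) -> float:
    """Approximate dew point (over water) or frost point (over ice), in degrees Celsius,
    using the Magnus formula. The coefficients are commonly cited approximate values."""
    a, b = (22.46, 272.62) if over_ice else (17.62, 243.12)
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Example: air at -5 degrees C and 80% relative humidity; frost can deposit on a surface
# only once the surface cools below roughly this frost point (about -7.6 C here).
print(dew_or_frost_point(-5.0, 80.0, over_ice=True))
```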
Frost may occur when warm, moist air comes into contact with a cold surface, cooling it below its dew point, such as warm breath on a freezing window. In the atmosphere, it more often occurs when both the air and the surface are below freezing, when the air experiences a drop in temperature bringing it below its dew point, for example, when the temperature falls after the sun sets. In temperate climates, it most commonly appears on surfaces near the ground as fragile white crystals; in cold climates, it occurs in a greater variety of forms. The propagation of crystal formation occurs by the process of nucleation, specifically water nucleation, which is the same phenomenon responsible for the formation of clouds, fog, snow, rain, and other meteorological phenomena.
The ice crystals of frost form as the result of fractal process development. The depth of frost crystals varies depending on the amount of time they have been accumulating, and the concentration of the water vapor (humidity). Frost crystals may be invisible (black), clear (translucent), or, if a mass of frost crystals scatters light in all directions, the coating of frost appears white.
Types of frost include crystalline frost (hoar frost or radiation frost) from deposition of water vapor from air of low humidity, white frost in humid conditions, window frost on glass surfaces, advection frost from cold wind over cold surfaces, black frost without visible ice at low temperatures and very low humidity, and rime under supercooled wet conditions.
Plants that have evolved in warmer climates suffer damage when the temperature falls low enough to freeze the water in the cells that make up the plant tissue. The tissue damage resulting from this process is known as "frost damage". Farmers in those regions where frost damage has been known to affect their crops often invest in substantial means to protect their crops from such damage.
Formation
If a solid surface is chilled below the dew point of the surrounding humid air, and the surface itself is colder than freezing, ice will form on it. If the water deposits as a liquid that then freezes, it forms a coating that may look glassy, opaque, or crystalline, depending on its type. Depending on context, that process may also be called atmospheric icing. The ice it produces differs in some ways from crystalline frost, which consists of spicules of ice that typically project from the solid surface on which they grow.
The main difference between the ice coatings and frost spicules arises because the crystalline spicules grow directly from desublimation of water vapour from air, and desublimation is not a factor in icing of freezing surfaces. For desublimation to proceed, the surface must be below the frost point of the air, meaning that it is sufficiently cold for ice to form without passing through the liquid phase. The air must be humid, but not sufficiently humid to permit the condensation of liquid water, or icing will result instead of desublimation. The size of the crystals depends largely on the temperature, the amount of water vapor available, and how long they have been growing undisturbed.
As a rule, except in conditions where supercooled droplets are present in the air, frost will form only if the deposition surface is colder than the surrounding air. For instance, frost may be observed around cracks in cold wooden sidewalks when humid air escapes from the warmer ground beneath. Other objects on which frost commonly forms are those with low specific heat or high thermal emissivity, such as blackened metals, hence the accumulation of frost on the heads of rusty nails.
The apparently erratic occurrence of frost in adjacent localities is due partly to differences of elevation, the lower areas becoming colder on calm nights. Where static air settles above an area of ground in the absence of wind, the absorptivity and specific heat of the ground strongly influence the temperature that the trapped air attains.
Types
Hoar frost
Hoar frost, also hoarfrost, radiation frost, or pruina, refers to white ice crystals deposited on the ground or loosely attached to exposed objects, such as wires or leaves. They form on cold, clear nights when conditions are such that heat radiates into outer space faster than it can be replaced from nearby warm objects or brought in by the wind. Under suitable circumstances, objects cool to below the frost point of the surrounding air, well below the freezing point of water. Such freezing may be promoted by effects such as flood frost or frost pocket. These occur when ground-level radiation cools air until it flows downhill and accumulates in pockets of very cold air in valleys and hollows. Hoar frost may freeze in such low-lying cold air even when the air temperature a few feet above ground is well above freezing.
The word "hoar" comes from an Old English adjective that means "showing signs of old age". In this context, it refers to the frost that makes trees and bushes look like white hair.
Hoar frost may have different names depending on where it forms:
Air hoar is a deposit of hoar frost on objects above the surface, such as tree branches, plant stems, and wires.
Surface hoar refers to fern-like ice crystals directly deposited on snow, ice, or already frozen surfaces.
Crevasse hoar consists of crystals that form in glacial crevasses where water vapour can accumulate under calm weather conditions.
Depth hoar refers to faceted crystals that have slowly grown large within cavities beneath the surface of banks of dry snow. Depth hoar crystals grow continuously at the expense of neighbouring smaller crystals, so typically are visibly stepped and have faceted hollows.
When surface hoar covers sloping snowbanks, the layer of frost crystals may create an avalanche risk; when heavy layers of new snow cover the frosty surface, furry crystals standing out from the old snow hold off the falling flakes, forming a layer of voids that prevents the new snow layers from bonding strongly to the old snow beneath. Ideal conditions for hoarfrost to form on snow are cold, clear nights, with very light, cold air currents conveying humidity at the right rate for growth of frost crystals. Wind that is too strong or warm destroys the furry crystals, and thereby may permit a stronger bond between the old and new snow layers. However, if the winds are strong enough and cold enough to lay the crystals flat and dry, carpeting the snow with cold, loose crystals without removing or destroying them or letting them warm up and become sticky, then the frost interface between the snow layers may still present an avalanche danger, because the texture of the frost crystals differs from the snow texture, and the dry crystals will not stick to fresh snow. Such conditions still prevent a strong bond between the snow layers.
In very low temperatures where fluffy surface hoar crystals form without subsequently being covered with snow, strong winds may break them off, forming a dust of ice particles and blowing them over the surface. The ice dust then may form yukimarimo, as has been observed in parts of Antarctica, in a process similar to the formation of dust bunnies and similar structures.
Hoar frost and white frost also occur in man-made environments such as in freezers or industrial cold-storage facilities. If such cold spaces or the pipes serving them are not well insulated and are exposed to ambient humidity, the moisture will freeze instantly depending on the freezer temperature. The frost may coat pipes thickly, partly insulating them, but such inefficient insulation still is a source of heat loss.
Advection frost
Advection frost (also called wind frost) refers to tiny ice spikes that form when very cold wind blows over tree branches, poles, and other surfaces. It looks like rime on the edges of flowers and leaves, and usually forms against the direction of the wind. It can occur at any hour, day or night.
Window frost
Window frost (also called fern frost or ice flowers) forms when a glass pane is exposed to very cold air on the outside and warmer, moderately moist air on the inside. If the pane is a bad insulator (for example, if it is a single-pane window), water vapour condenses on the glass, forming frost patterns. With very low temperatures outside, frost can appear on the bottom of the window even with double-pane energy-efficient windows because the air convection between two panes of glass ensures that the bottom part of the glazing unit is colder than the top part. On unheated motor vehicles, the frost usually forms on the outside surface of the glass first. The glass surface influences the shape of crystals, so imperfections, scratches, or dust can modify the way ice nucleates. The patterns in window frost form a fractal with a fractal dimension greater than one, but less than two. This is a consequence of the nucleation process being constrained to unfold in two dimensions, unlike a snowflake, which is shaped by a similar process, but forms in three dimensions and has a fractal dimension greater than two.
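The fractal dimension mentioned above can be estimated numerically from a photograph of a frost pattern thresholded to a binary image, using a box-counting procedure. The sketch below is only an illustration; the box sizes and the toy input are arbitrary assumptions, and a real frost image would typically yield a dimension between 1 and 2, as described in the text.

```python
import numpy as np

def box_counting_dimension(mask: np.ndarray) -> float:
    """Estimate the box-counting (fractal) dimension of a 2-D boolean mask by counting
    occupied boxes at several box sizes and fitting a line in log-log space."""
    sizes = [2, 4, 8, 16, 32, 64]
    counts = []
    for s in sizes:
        # Trim so the image divides evenly into s-by-s boxes, then count non-empty boxes.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    # The slope of log N(s) versus log(1/s) approximates the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Toy check: a completely filled square has dimension close to 2.
print(box_counting_dimension(np.ones((256, 256), dtype=bool)))
```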
If the indoor air is very humid, rather than moderately so, water first condenses in small droplets, and then freezes into clear ice.
Similar patterns of freezing may occur on other smooth vertical surfaces, but they seldom are as obvious or spectacular as on clear glass.
White frost
White frost is a solid deposition of ice that forms directly from water vapour contained in air.
White frost forms when relative humidity is above 90% and the temperature below −8 °C (18 °F), and it grows against the wind direction, since air arriving from windward has a higher humidity than leeward air, but the wind must not be strong, else it damages the delicate icy structures as they begin to form. White frost resembles a heavy coating of hoar frost with big, interlocking crystals, usually needle-shaped.
Rime
Rime is a type of ice deposition that occurs quickly, often under heavily humid and windy conditions. Technically speaking, it is not a type of frost, since usually supercooled water drops are involved, in contrast to the formation of hoar frost, in which water vapour desublimates slowly and directly. Ships travelling through Arctic seas may accumulate large quantities of rime on the rigging. Unlike hoar frost, which has a feathery appearance, rime generally has an icy, solid appearance.
Black frost
Black frost (or "killing frost") is not strictly speaking frost at all, because it is the condition seen in crops when the humidity is too low for frost to form, but the temperature falls so low that plant tissues freeze and die, becoming blackened, hence the term "black frost". Black frost often is called "killing frost" because white frost tends to be less cold, partly because the latent heat of freezing of the water reduces the temperature drop.
Effect on plants
Damage
Many plants can be damaged or killed by freezing temperatures or frost. This varies with the type of plant, the tissue exposed, and how low temperatures get; a "light frost" damages fewer types of plants than a "hard frost".
Plants likely to be damaged even by a light frost include vines—such as beans, grapes, squashes, melons—along with nightshades such as tomatoes, eggplants, and peppers. Plants that may tolerate (or even benefit from) frosts include:
root vegetables (e.g. beets, carrots, parsnips, onions)
leafy greens (e.g. lettuces, spinach, chard, cucumber)
cruciferous vegetables (e.g. cabbages, cauliflower, bok choy, broccoli, Brussels sprouts, radishes, kale, collard, mustard, turnips, rutabagas)
Even those plants that tolerate frost may be damaged once temperatures drop even lower. Hardy perennials, such as Hosta, become dormant after the first frosts and regrow when spring arrives. The entire visible plant may turn completely brown until the spring warmth, or may drop all of its leaves and flowers, leaving only the stem and stalk. Evergreen plants, such as pine trees, withstand frost, although all or most growth stops. Frost crack is a bark defect caused by a combination of low temperatures and heat from the winter sun.
Vegetation is not necessarily damaged when leaf temperatures drop below the freezing point of their cell contents. In the absence of a site nucleating the formation of ice crystals, the leaves remain in a supercooled liquid state, safely reaching temperatures below freezing. However, once frost forms, the leaf cells may be damaged by sharp ice crystals. Hardening is the process by which a plant becomes tolerant to low temperatures. | Physical sciences | Precipitation | null
11780 | https://en.wikipedia.org/wiki/Fur%20seal | Fur seal | Fur seals are any of nine species of pinnipeds belonging to the subfamily Arctocephalinae in the family Otariidae. They are much more closely related to sea lions than true seals, and share with them external ears (pinnae), relatively long and muscular foreflippers, and the ability to walk on all fours. They are marked by their dense underfur, which made them a long-time object of commercial hunting. Eight species belong to the genus Arctocephalus and are found primarily in the Southern Hemisphere, while a ninth species also sometimes called fur seal, the Northern fur seal (Callorhinus ursinus), belongs to a different genus and inhabits the North Pacific. The fur seals in Arctocephalus are more closely related to sea lions than they are to the Northern fur seal, but all three groups are more closely related to one another than they are to true seals.
Taxonomy
Fur seals and sea lions make up the family Otariidae. Along with the Phocidae and Odobenidae, otariids are pinnipeds descending from a common ancestor most closely related to modern bears (as hinted by the subfamily Arctocephalinae, meaning "bear-headed"). The name pinniped refers to mammals with front and rear flippers. Otariids arose about 15–17 million years ago in the Miocene, and were originally land mammals that rapidly diversified and adapted to a marine environment, giving rise to the semiaquatic marine mammals that thrive today. Fur seals and sea lions are closely related and commonly known together as the "eared seals".
Until recently, fur seals were all grouped under a single subfamily of Pinnipedia, called the Arctocephalinae, to contrast them with Otariinae – the sea lions – based on the most prominent common feature, namely the coat of dense underfur intermixed with guard hairs. Recent genetic evidence, however, suggests Callorhinus is more closely related to some sea lion species, and the fur seal/sea lion subfamily distinction has been eliminated from many taxonomies. Nonetheless, all fur seals have certain features in common: the fur, generally smaller sizes, farther and longer foraging trips, smaller and more abundant prey items, and greater sexual dimorphism. For these reasons, the distinction remains useful. Fur seals comprise two genera: Callorhinus and Arctocephalus. Callorhinus is represented by just one species in the Northern Hemisphere, the northern fur seal (Callorhinus ursinus), and Arctocephalus is represented by eight species in the Southern Hemisphere. The southern fur seals comprising the genus Arctocephalus include Antarctic fur seals, Galapagos fur seals, Guadalupe fur seals, Juan Fernandez fur seals, New Zealand fur seals, brown fur seals, South American fur seals, and subantarctic fur seals.
Physical appearance
Along with the previously mentioned thick underfur, fur seals are distinguished from sea lions by their smaller body structure, greater sexual dimorphism, smaller prey, and longer foraging trips during the feeding cycle. The physical appearance of fur seals varies with individual species, but the main characteristics remain constant.
Fur seals are characterized by their external pinnae, dense underfur, vibrissae, and long, muscular limbs. They share with other otariids the ability to rotate their rear limbs forward, supporting their bodies and allowing them to ambulate on land. In water, their front limbs, typically measuring about a fourth of their body length, act as oars and can propel them forward for optimal mobility. The surfaces of these long, paddle-like fore limbs are leathery with small claws. Otariids have a dog-like head, sharp, well-developed canines, sharp eyesight, and keen hearing.
They are extremely sexually dimorphic mammals, with the males often two to five times the size of the females, with proportionally larger heads, necks, and chests. Size ranges from about 1.5 m, 64 kg in the male Galapagos fur seal (also the smallest pinniped) to 2.5 m, 180 kg in the adult male New Zealand fur seal. Most fur seal pups are born with a black-brown coat that molts at 2–3 months, revealing a brown coat that typically gets darker with age. Some males and females within the same species have significant differences in appearance, further contributing to the sexual dimorphism. Females and juveniles often have a lighter colored coat overall or only on the chest, as seen in South American fur seals. In a northern fur seal population, the females are typically silvery-gray on the dorsal side and reddish-brown on their ventral side with a light gray patch on their chest. This makes them easily distinguished from the males with their brownish-gray to reddish-brown or black coats.
Habitat
Of the fur seal family, eight species are considered southern fur seals, and only one is found in the Northern Hemisphere. The southern group includes Antarctic, Galapagos, Guadalupe, Juan Fernandez, New Zealand, brown, South American, and subantarctic fur seals. They typically spend about 70% of their lives in subpolar, temperate, and equatorial waters. Colonies of fur seals can be seen throughout the Pacific and Southern Oceans from south Australia, Africa, and New Zealand, to the coast of Peru and north to California. They are typically nonmigrating mammals, with the exception of the northern fur seal, which has been known to travel distances up to 10,000 km. Fur seals are often found near isolated islands or peninsulas, and can be seen hauling out onto the mainland during winter. Although they are not migratory, they have been observed wandering hundreds of miles from their breeding grounds in times of scarce resources. For example, the subantarctic fur seal typically resides near temperate islands in the South Atlantic and Indian Oceans north of the Antarctic Polar Front, but juvenile males have been seen wandering as far north as Brazil and South Africa.
Behavior and ecology
Typically, fur seals gather during the summer in large rookeries at specific beaches or rocky outcrops to give birth and breed. All species are polygynous, meaning dominant males reproduce with more than one female. For most species, total gestation lasts about 11.5 months, including a several-month period of delayed implantation of the embryo. Northern fur seal males aggressively select and defend the specific females in their harems. Females typically reach sexual maturity around 3–4 years. The males reach sexual maturity around the same time, but do not become territorial or mate until 6–10 years.
The breeding season typically begins in November and lasts 2–3 months. The northern fur seals begin their breeding season as early as June due to their region, climate, and resources. In all cases, the males arrive a few weeks early to fight for their territory and groups of females with which to mate. They congregate at rocky, isolated breeding grounds and defend their territory through fighting and vocalization. Males typically do not leave their territory for the entirety of the breeding season, fasting and competing until all energy sources are depleted.
The Juan Fernandez fur seals deviate from this typical behavior, using aquatic breeding territories not seen in other fur seals. They use rocky sites for breeding, but males fight for territory on land, on the shoreline, and in the water. Upon arriving at the breeding grounds, females give birth to their pups from the previous season. About a week later, the females mate again and shortly after begin their feeding cycle, which typically consists of foraging and feeding at sea for about 5 days, then returning to the breeding grounds to nurse the pups for about 2 days. Mothers and pups locate each other using call recognition during the nursing period. The Juan Fernandez fur seal has a particularly long feeding cycle, with about 12 days of foraging and feeding and 5 days of nursing. Most fur seals continue this cycle for about 9 months until they wean their pup. The exception to this is the Antarctic fur seal, which has a feeding cycle that lasts only 4 months. During foraging trips, most female fur seals travel around 200 km from the breeding site, and can dive around 200 m depending on food availability.
The remainder of the year, fur seals lead a largely pelagic existence in the open sea, pursuing their prey wherever it is abundant. They feed on moderately sized fish, squid, and krill. Several species of the southern fur seal also have sea birds, especially penguins, as part of their diets. Fur seals, in turn, are preyed upon by sharks, orcas, and occasionally by larger sea lions. These opportunistic mammals tend to feed and dive in shallow waters at night, when their prey are swimming near the surface. Fur seals occasionally gang up and evict sharks. South American fur seals exhibit a different diet; adults feed almost exclusively on anchovies, while juveniles feed on demersal fish, most likely due to availability.
When fur seals were hunted in the late 18th and early 19th centuries, they hauled out on remote islands where no predators were present. The hunters reported being able to club the unwary animals to death one after another, making the hunt profitable, though the price per seal skin was low.
Population and survival
The average lifespan of fur seals varies with different species from 13 to 25 years, with females typically living longer. Most populations continue to expand as they recover from previous commercial hunting and environmental threats.
Many species were heavily exploited by commercial sealers, especially during the 19th century, when their fur was highly valued. Beginning in the 1790s, the ports of Stonington and New Haven, Connecticut, were leaders of the American fur seal trade, which primarily entailed clubbing fur seals to death on uninhabited South Pacific islands, skinning them, and selling the hides in China. Many populations, notably the Guadalupe fur seal, northern fur seal, and Cape fur seal, suffered dramatic declines and are still recovering. Currently, most species are protected, and hunting is mostly limited to subsistence harvest. Globally, most populations can be considered healthy, mostly because they often prefer remote habitats that are relatively inaccessible to humans. Nonetheless, environmental degradation, competition with fisheries, and climate change potentially pose threats to some populations.
| Biology and health sciences | Pinnipeds | Animals |
11807 | https://en.wikipedia.org/wiki/Ferromagnetism | Ferromagnetism | Ferromagnetism is a property of certain materials (such as iron) that results in a significant, observable magnetic permeability, and in many cases, a significant magnetic coercivity, allowing the material to form a permanent magnet. Ferromagnetic materials are noticeably attracted to a magnet, which is a consequence of their substantial magnetic permeability.
Magnetic permeability describes the induced magnetization of a material due to the presence of an external magnetic field. For example, this temporary magnetization inside a steel plate accounts for the plate's attraction to a magnet. Whether or not that steel plate then acquires permanent magnetization depends on both the strength of the applied field and on the coercivity of that particular piece of steel (which varies with the steel's chemical composition and any heat treatment it may have undergone).
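For reference, the induced magnetization described above is conventionally expressed through the magnetic susceptibility and permeability. The relations below are the standard SI definitions for a linear, isotropic material, included only as a reminder of how these quantities are connected; they are textbook identities rather than statements specific to this article.

```latex
\mathbf{M} = \chi \mathbf{H}, \qquad
\mathbf{B} = \mu_0 (\mathbf{H} + \mathbf{M}) = \mu_0 (1 + \chi)\,\mathbf{H} = \mu \mathbf{H}, \qquad
\mu_r = \frac{\mu}{\mu_0} = 1 + \chi .
```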
In physics, multiple types of material magnetism have been distinguished. Ferromagnetism (along with the similar effect ferrimagnetism) is the strongest type and is responsible for the common phenomenon of everyday magnetism. An example of a permanent magnet formed from a ferromagnetic material is a refrigerator magnet.
Substances respond weakly to three other types of magnetism—paramagnetism, diamagnetism, and antiferromagnetism—but the forces are usually so weak that they can be detected only by lab instruments.
Permanent magnets (materials that can be magnetized by an external magnetic field and remain magnetized after the external field is removed) are either ferromagnetic or ferrimagnetic, as are the materials that are attracted to them. Relatively few materials are ferromagnetic. They are typically pure forms, alloys, or compounds of iron, cobalt, nickel, and certain rare-earth metals.
Ferromagnetism is vital in industrial applications and modern technologies, forming the basis for electrical and electromechanical devices such as electromagnets, electric motors, generators, transformers, magnetic storage (including tape recorders and hard disks), and nondestructive testing of ferrous materials.
Ferromagnetic materials can be divided into magnetically soft materials (like annealed iron), which do not tend to stay magnetized, and magnetically hard materials, which do. Permanent magnets are made from hard ferromagnetic materials (such as alnico) and ferrimagnetic materials (such as ferrite) that are subjected to special processing in a strong magnetic field during manufacturing to align their internal microcrystalline structure, making them difficult to demagnetize. To demagnetize a saturated magnet, a magnetic field must be applied. The threshold at which demagnetization occurs depends on the coercivity of the material. Magnetically hard materials have high coercivity, whereas magnetically soft materials have low coercivity.
The overall strength of a magnet is measured by its magnetic moment or, alternatively, its total magnetic flux. The local strength of magnetism in a material is measured by its magnetization.
Terms
Historically, the term ferromagnetism was used for any material that could exhibit spontaneous magnetization: a net magnetic moment in the absence of an external magnetic field; that is, any material that could become a magnet. This definition is still in common use.
In a landmark paper in 1948, Louis Néel showed that two levels of magnetic alignment result in this behavior. One is ferromagnetism in the strict sense, where all the magnetic moments are aligned. The other is ferrimagnetism, where some magnetic moments point in the opposite direction but have a smaller contribution, so spontaneous magnetization is present.
In the special case where the opposing moments balance completely, the alignment is known as antiferromagnetism; antiferromagnets do not have a spontaneous magnetization.
Materials
Ferromagnetism is an unusual property that occurs in only a few substances. The common ones are the transition metals iron, nickel, and cobalt, as well as their alloys and alloys of rare-earth metals. It is a property not just of the chemical make-up of a material, but of its crystalline structure and microstructure. Ferromagnetism results from these materials having many unpaired electrons in their d-block (in the case of iron and its relatives) or f-block (in the case of the rare-earth metals), a result of Hund's rule of maximum multiplicity. There are ferromagnetic metal alloys whose constituents are not themselves ferromagnetic, called Heusler alloys, named after Fritz Heusler. Conversely, there are non-magnetic alloys, such as types of stainless steel, composed almost exclusively of ferromagnetic metals.
Amorphous (non-crystalline) ferromagnetic metallic alloys can be made by very rapid quenching (cooling) of an alloy. These have the advantage that their properties are nearly isotropic (not aligned along a crystal axis); this results in low coercivity, low hysteresis loss, high permeability, and high electrical resistivity. One such typical material is a transition metal-metalloid alloy, made from about 80% transition metal (usually Fe, Co, or Ni) and a metalloid component (B, C, Si, P, or Al) that lowers the melting point.
A relatively new class of exceptionally strong ferromagnetic materials are the rare-earth magnets. They contain lanthanide elements that are known for their ability to carry large magnetic moments in well-localized f-orbitals.
Ferromagnetic and ferrimagnetic compounds are commonly characterized by their Curie temperature (TC), the temperature above which they cease to exhibit spontaneous magnetization.
Unusual materials
Most ferromagnetic materials are metals, since the conducting electrons are often responsible for mediating the ferromagnetic interactions. It is therefore a challenge to develop ferromagnetic insulators, especially multiferroic materials, which are both ferromagnetic and ferroelectric.
A number of actinide compounds are ferromagnets at room temperature or exhibit ferromagnetism upon cooling. PuP is a paramagnet with cubic symmetry at room temperature, but undergoes a structural transition into a tetragonal state with ferromagnetic order when cooled below its Curie temperature. In its ferromagnetic state, PuP's easy axis is in the ⟨100⟩ direction.
In NpFe2 the easy axis is ⟨111⟩. Above its Curie temperature, NpFe2 is also paramagnetic and cubic. Cooling below the Curie temperature produces a rhombohedral distortion wherein the rhombohedral angle changes from 60° (cubic phase) to 60.53°. An alternative description of this distortion is to consider the length c along the unique trigonal axis (after the distortion has begun) and a as the distance in the plane perpendicular to c. In the cubic phase this reduces to c/a = 1. Below the Curie temperature, the lattice acquires a strain in c/a that is the largest in any actinide compound. NpNi2 undergoes a similar lattice distortion below its Curie temperature, with a strain of (43 ± 5) × 10⁻⁴. NpCo2 is a ferrimagnet below 15 K.
In 2009, a team of MIT physicists demonstrated that a lithium gas cooled to less than one kelvin can exhibit ferromagnetism. The team cooled fermionic lithium-6 to less than 150 nanokelvin (150 billionths of one kelvin) using infrared laser cooling. This demonstration is the first time that ferromagnetism has been demonstrated in a gas.
In rare circumstances, ferromagnetism can be observed in compounds consisting of only s-block and p-block elements, such as rubidium sesquioxide.
In 2018, a team of University of Minnesota physicists demonstrated that body-centered tetragonal ruthenium exhibits ferromagnetism at room temperature.
Electrically induced ferromagnetism
Recent research has shown evidence that ferromagnetism can be induced in some materials by an electric current or voltage. Antiferromagnetic LaMnO3 and SrCoO have been switched to be ferromagnetic by a current. In July 2020, scientists reported inducing ferromagnetism in the abundant diamagnetic material iron pyrite ("fool's gold") by an applied voltage. In these experiments, the ferromagnetism was limited to a thin surface layer.
Explanation
The Bohr–Van Leeuwen theorem, discovered in the 1910s, showed that classical physics theories are unable to account for any form of material magnetism, including ferromagnetism; the explanation rather depends on the quantum mechanical description of atoms. Each of an atom's electrons has a magnetic moment according to its spin state, as described by quantum mechanics. The Pauli exclusion principle, also a consequence of quantum mechanics, restricts the occupancy of electrons' spin states in atomic orbitals, generally causing the magnetic moments from an atom's electrons to largely or completely cancel. An atom will have a net magnetic moment when that cancellation is incomplete.
Origin of atomic magnetism
One of the fundamental properties of an electron (besides that it carries charge) is that it has a magnetic dipole moment, i.e., it behaves like a tiny magnet, producing a magnetic field. This dipole moment comes from a more fundamental property of the electron: its quantum mechanical spin. Due to its quantum nature, the spin of the electron can be in one of only two states, with the magnetic field either pointing "up" or "down" (for any choice of up and down). Electron spin in atoms is the main source of ferromagnetism, although there is also a contribution from the orbital angular momentum of the electron about the nucleus. When these magnetic dipoles in a piece of matter are aligned (point in the same direction), their individually tiny magnetic fields add together to create a much larger macroscopic field.
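The intrinsic spin moment referred to here is of the order of one Bohr magneton. The standard expressions below summarize this; they are general textbook relations rather than results specific to this article.

```latex
\mu_B = \frac{e \hbar}{2 m_e} \approx 9.274 \times 10^{-24}\ \mathrm{J/T}, \qquad
\boldsymbol{\mu}_s = -g_s\, \mu_B\, \frac{\mathbf{S}}{\hbar}, \quad g_s \approx 2 .
```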
However, materials made of atoms with filled electron shells have a total dipole moment of zero: because the electrons all exist in pairs with opposite spin, every electron's magnetic moment is cancelled by the opposite moment of the second electron in the pair. Only atoms with partially filled shells (i.e., unpaired spins) can have a net magnetic moment, so ferromagnetism occurs only in materials with partially filled shells. Because of Hund's rules, the first few electrons in an otherwise unoccupied shell tend to have the same spin, thereby increasing the total dipole moment.
These unpaired dipoles (often called simply "spins", even though they also generally include orbital angular momentum) tend to align in parallel to an external magnetic field leading to a macroscopic effect called paramagnetism. In ferromagnetism, however, the magnetic interaction between neighboring atoms' magnetic dipoles is strong enough that they align with each other regardless of any applied field, resulting in the spontaneous magnetization of so-called domains. This results in the large observed magnetic permeability of ferromagnetics, and the ability of magnetically hard materials to form permanent magnets.
Exchange interaction
When two nearby atoms have unpaired electrons, whether the electron spins are parallel or antiparallel affects whether the electrons can share the same orbit as a result of the quantum mechanical effect called the exchange interaction. This in turn affects the electron location and the Coulomb (electrostatic) interaction and thus the energy difference between these states.
The exchange interaction is related to the Pauli exclusion principle, which says that two electrons with the same spin cannot also be in the same spatial state (orbital). This is a consequence of the spin–statistics theorem and that electrons are fermions. Therefore, under certain conditions, when the orbitals of the unpaired outer valence electrons from adjacent atoms overlap, the distributions of their electric charge in space are farther apart when the electrons have parallel spins than when they have opposite spins. This reduces the electrostatic energy of the electrons when their spins are parallel compared to their energy when the spins are antiparallel, so the parallel-spin state is more stable. This difference in energy is called the exchange energy. In simple terms, the outer electrons of adjacent atoms, which repel each other, can move further apart by aligning their spins in parallel, so the spins of these electrons tend to line up.
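The exchange energy described in this paragraph is commonly modeled by the Heisenberg exchange Hamiltonian. In the conventional textbook form below (a simplified model rather than the article's own notation), a positive exchange constant J makes parallel spins lower in energy and therefore favors ferromagnetic alignment.

```latex
\hat{H}_{\mathrm{exch}} = -2 \sum_{\langle i,j \rangle} J_{ij}\, \hat{\mathbf{S}}_i \cdot \hat{\mathbf{S}}_j ,
\qquad J_{ij} > 0 \ \Rightarrow\ \text{parallel (ferromagnetic) alignment is energetically favored.}
```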
This energy difference can be orders of magnitude larger than the energy differences associated with the magnetic dipole–dipole interaction due to dipole orientation, which tends to align the dipoles antiparallel. In certain doped semiconductor oxides, RKKY interactions have been shown to bring about periodic longer-range magnetic interactions, a phenomenon of significance in the study of spintronic materials.
The materials in which the exchange interaction is much stronger than the competing dipole–dipole interaction are frequently called magnetic materials. For instance, in iron (Fe) the exchange force is about 1,000 times stronger than the dipole interaction. Therefore, below the Curie temperature, virtually all of the dipoles in a ferromagnetic material will be aligned. In addition to ferromagnetism, the exchange interaction is also responsible for the other types of spontaneous ordering of atomic magnetic moments occurring in magnetic solids: antiferromagnetism and ferrimagnetism. There are different exchange interaction mechanisms which create the magnetism in different ferromagnetic, ferrimagnetic, and antiferromagnetic substances—these mechanisms include direct exchange, RKKY exchange, double exchange, and superexchange.
Magnetic anisotropy
Although the exchange interaction keeps spins aligned, it does not align them in a particular direction. Without magnetic anisotropy, the spins in a magnet randomly change direction in response to thermal fluctuations, and the magnet is superparamagnetic. There are several kinds of magnetic anisotropy, the most common of which is magnetocrystalline anisotropy. This is a dependence of the energy on the direction of magnetization relative to the crystallographic lattice. Another common source of anisotropy, inverse magnetostriction, is induced by internal strains. Single-domain magnets also can have a shape anisotropy due to the magnetostatic effects of the particle shape. As the temperature of a magnet increases, the anisotropy tends to decrease, and there is often a blocking temperature at which a transition to superparamagnetism occurs.
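The competition between anisotropy energy and thermal fluctuations sketched above is often summarized, for a single-domain particle, by the Néel–Arrhenius relation. The expressions below are the standard form (K is the anisotropy energy density, V the particle volume, τ₀ an attempt time typically of order 10⁻⁹–10⁻¹⁰ s, and τ_m the measurement time); they are given only as an illustrative aid.

```latex
\tau = \tau_0 \exp\!\left(\frac{K V}{k_B T}\right), \qquad
T_B \approx \frac{K V}{k_B \ln(\tau_m / \tau_0)} .
```

Above the blocking temperature T_B, the particle's moment flips faster than it can be measured and the particle behaves superparamagnetically, as described above.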
Magnetic domains
The spontaneous alignment of magnetic dipoles in ferromagnetic materials would seem to suggest that every piece of ferromagnetic material should have a strong magnetic field, since all the spins are aligned; yet iron and other ferromagnets are often found in an "unmagnetized" state. This is because a bulk piece of ferromagnetic material is divided into tiny regions called magnetic domains (also known as Weiss domains). Within each domain, the spins are aligned, but if the bulk material is in its lowest energy configuration (i.e. "unmagnetized"), the spins of separate domains point in different directions and their magnetic fields cancel out, so the bulk material has no net large-scale magnetic field.
Ferromagnetic materials spontaneously divide into magnetic domains because the exchange interaction is a short-range force, so over long distances of many atoms, the tendency of the magnetic dipoles to reduce their energy by orienting in opposite directions wins out. If all the dipoles in a piece of ferromagnetic material are aligned parallel, it creates a large magnetic field extending into the space around it. This contains a lot of magnetostatic energy. The material can reduce this energy by splitting into many domains pointing in different directions, so the magnetic field is confined to small local fields in the material, reducing the volume of the field. The domains are separated by thin domain walls a number of molecules thick, in which the direction of magnetization of the dipoles rotates smoothly from one domain's direction to the other.
Magnetized materials
Thus, a piece of iron in its lowest energy state ("unmagnetized") generally has little or no net magnetic field. However, the magnetic domains in a material are not fixed in place; they are simply regions where the spins of the electrons have aligned spontaneously due to their magnetic fields, and thus can be altered by an external magnetic field. If a strong-enough external magnetic field is applied to the material, the domain walls will move via a process in which the spins of the electrons in atoms near the wall in one domain turn under the influence of the external field to face in the same direction as the electrons in the other domain, thus reorienting the domains so more of the dipoles are aligned with the external field. The domains will remain aligned when the external field is removed, and sum to create a magnetic field of their own extending into the space around the material, thus creating a "permanent" magnet. The domains do not go back to their original minimum energy configuration when the field is removed because the domain walls tend to become 'pinned' or 'snagged' on defects in the crystal lattice, preserving their parallel orientation. This is shown by the Barkhausen effect: as the magnetizing field is changed, the material's magnetization changes in thousands of tiny discontinuous jumps as domain walls suddenly "snap" past defects.
This magnetization as a function of an external field is described by a hysteresis curve. Although this state of aligned domains found in a piece of magnetized ferromagnetic material is not a minimal-energy configuration, it is metastable, and can persist for long periods, as shown by samples of magnetite from the sea floor which have maintained their magnetization for millions of years.
Heating and then cooling (annealing) a magnetized material, subjecting it to vibration by hammering it, or applying a rapidly oscillating magnetic field from a degaussing coil tends to release the domain walls from their pinned state, and the domain boundaries tend to move back to a lower energy configuration with less external magnetic field, thus demagnetizing the material.
Commercial magnets are made of "hard" ferromagnetic or ferrimagnetic materials with very large magnetic anisotropy such as alnico and ferrites, which have a very strong tendency for the magnetization to be pointed along one axis of the crystal, the "easy axis". During manufacture the materials are subjected to various metallurgical processes in a powerful magnetic field, which aligns the crystal grains so their "easy" axes of magnetization all point in the same direction. Thus, the magnetization, and the resulting magnetic field, is "built in" to the crystal structure of the material, making it very difficult to demagnetize.
Curie temperature
As the temperature of a material increases, thermal motion, or entropy, competes with the ferromagnetic tendency for dipoles to align. When the temperature rises beyond a certain point, called the Curie temperature, there is a second-order phase transition and the system can no longer maintain a spontaneous magnetization, so its ability to be magnetized or attracted to a magnet disappears, although it still responds paramagnetically to an external field. Below that temperature, there is a spontaneous symmetry breaking and magnetic moments become aligned with their neighbors. The Curie temperature itself is a critical point, where the magnetic susceptibility is theoretically infinite and, although there is no net magnetization, domain-like spin correlations fluctuate at all length scales.
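Above the Curie temperature, the paramagnetic response mentioned here is well approximated by the Curie–Weiss law, a standard result quoted below for reference (C is the material-specific Curie constant):

```latex
\chi(T) = \frac{C}{T - T_C}, \qquad T > T_C ,
```

so the susceptibility grows without bound as T approaches T_C from above, consistent with the theoretically infinite susceptibility at the critical point noted above.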
The study of ferromagnetic phase transitions, especially via the simplified Ising spin model, had an important impact on the development of statistical physics. There, it was first clearly shown that mean field theory approaches failed to predict the correct behavior at the critical point (which was found to fall under a universality class that includes many other systems, such as liquid-gas transitions), and had to be replaced by renormalization group theory.
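As an illustration of the simplified Ising spin model mentioned above, the following minimal Metropolis Monte Carlo sketch simulates a small two-dimensional Ising lattice; the lattice size, temperatures, and sweep count are arbitrary choices for demonstration, and longer runs give sharper results. Below the model's critical temperature (about 2.269 in units of J/k_B), a net spontaneous magnetization develops, while well above it the magnetization averages near zero.

```python
import numpy as np

def ising_metropolis(n=32, temperature=1.5, sweeps=500, seed=0):
    """Minimal Metropolis simulation of the 2-D Ising model (J = 1, periodic
    boundaries); returns the absolute magnetization per spin after the sweeps."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(n, n))
    for _ in range(sweeps):
        for _ in range(n * n):
            i, j = rng.integers(n), rng.integers(n)
            # Sum of the four nearest neighbours with periodic boundary conditions.
            nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                  + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
            dE = 2 * spins[i, j] * nb  # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / temperature):
                spins[i, j] *= -1
    return abs(spins.mean())

# Below Tc (~2.269) the magnetization is typically large; well above Tc it stays near 0.
print(ising_metropolis(temperature=1.5), ising_metropolis(temperature=3.5))
```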
| Physical sciences | Basics_9 | null |
11812 | https://en.wikipedia.org/wiki/Lockheed%20Martin%20F-35%20Lightning%20II | Lockheed Martin F-35 Lightning II | The Lockheed Martin F-35 Lightning II is an American family of single-seat, single-engine, supersonic stealth strike fighters. A multirole combat aircraft designed for both air superiority and strike missions, it also has electronic warfare and intelligence, surveillance, and reconnaissance capabilities. Lockheed Martin is the prime F-35 contractor with principal partners Northrop Grumman and BAE Systems. The aircraft has three main variants: the conventional takeoff and landing (CTOL) F-35A, the short take-off and vertical-landing (STOVL) F-35B, and the carrier variant (CV) catapult-assisted take-off but arrested recovery (CATOBAR) F-35C.
The aircraft descends from the Lockheed Martin X-35, which in 2001 beat the Boeing X-32 to win the Joint Strike Fighter (JSF) program intended to replace the F-16 Fighting Falcon, F/A-18 Hornet, and the McDonnell Douglas AV-8B Harrier II "jump jet", among others. Its development is principally funded by the United States, with additional funding from program partner countries from the North Atlantic Treaty Organization (NATO) and close U.S. allies, including Australia, Canada, Denmark, Italy, the Netherlands, Norway, the United Kingdom, and formerly Turkey. Several other countries have also ordered, or are considering ordering, the aircraft. The program has drawn criticism for its unprecedented size, complexity, ballooning costs, and delayed deliveries. The acquisition strategy of concurrent production of the aircraft while it was still in development and testing led to expensive design changes and retrofits. The average flyaway costs per plane are US$82.5 million for the F-35A, $109 million for the F-35B, and $102.1 million for the F-35C.
The F-35 first flew in 2006 and entered service with the U.S. Marine Corps F-35B in July 2015, followed by the U.S. Air Force F-35A in August 2016 and the U.S. Navy F-35C in February 2019. The aircraft was first used in combat in 2018 by the Israeli Air Force. The U.S. plans to buy 2,456 F-35s through 2044, which will represent the bulk of the crewed tactical aviation of the U.S. Air Force, Navy, and Marine Corps for several decades; the aircraft is planned to be a cornerstone of NATO and U.S.-allied air power and to operate to 2070.
Development
Program origins
The F-35 was the product of the Joint Strike Fighter (JSF) program, which was the merger of various combat aircraft programs from the 1980s and 1990s. One progenitor program was the Defense Advanced Research Projects Agency (DARPA) Advanced Short Take-Off/Vertical Landing (ASTOVL) which ran from 1983 to 1994; ASTOVL aimed to develop a Harrier jump jet replacement for the U.S. Marine Corps (USMC) and the UK Royal Navy. Under one of ASTOVL's classified programs, the Supersonic STOVL Fighter (SSF), Lockheed's Skunk Works conducted research for a stealthy supersonic STOVL fighter intended for both U.S. Air Force (USAF) and USMC; among key STOVL technologies explored was the shaft-driven lift fan (SDLF) system. Lockheed's concept was a single-engine canard delta aircraft weighing about empty. ASTOVL was rechristened as the Common Affordable Lightweight Fighter (CALF) in 1993 and involved Lockheed, McDonnell Douglas, and Boeing.
The end of the Cold War and the collapse of the Soviet Union in 1991 caused considerable reductions in Department of Defense (DoD) spending and subsequent restructuring. In 1993, the Joint Advanced Strike Technology (JAST) program emerged following the cancellation of the USAF's Multi-Role Fighter (MRF) and U.S. Navy's (USN) Advanced Attack/Fighter (A/F-X) programs. MRF, a program for a relatively affordable F-16 Fighting Falcon replacement, was scaled back and delayed due to post–Cold War defense posture easing F-16 fleet usage and thus extending its service life as well as increasing budget pressure from the Lockheed Martin F-22 Advanced Tactical Fighter (ATF) program. The A/F-X, initially known as the Advanced-Attack (A-X), began in 1991 as the USN's follow-on to the Advanced Tactical Aircraft (ATA) program for a Grumman A-6 Intruder replacement; the ATA's resulting McDonnell Douglas A-12 Avenger II had been canceled due to technical problems and cost overruns in 1991. In the same year, the termination of the Naval Advanced Tactical Fighter (NATF), a naval development of USAF's ATF program to replace the Grumman F-14 Tomcat, resulted in additional fighter capability being added to A-X, which was then renamed A/F-X. Amid increased budget pressure, the DoD's Bottom-Up Review (BUR) in September 1993 announced MRF's and A/F-X's cancellations, with applicable experience brought to the emerging JAST program. JAST was not meant to develop a new aircraft, but rather to develop requirements, mature technologies, and demonstrate concepts for advanced strike warfare.
As JAST progressed, the need for concept demonstrator aircraft by 1996 emerged, which would coincide with the full-scale flight demonstrator phase of ASTOVL/CALF. Because the ASTOVL/CALF concept appeared to align with the JAST charter, the two programs were eventually merged in 1994 under the JAST name, with the program now serving the USAF, USMC, and USN. JAST was subsequently renamed to Joint Strike Fighter (JSF) in 1995, with STOVL submissions by McDonnell Douglas, Northrop Grumman, Lockheed Martin, and Boeing. The JSF was expected to eventually replace large numbers of multi-role and strike fighters in the inventories of the US and its allies, including the Harrier, F-16, F/A-18, Fairchild A-10 Thunderbolt II, and Lockheed F-117 Nighthawk.
International participation is a key aspect of the JSF program, starting with United Kingdom participation in the ASTOVL program. Many international partners requiring modernization of their air forces were interested in the JSF. The United Kingdom joined JAST/JSF as a founding member in 1995 and thus became the only Tier 1 partner of the JSF program; Italy, the Netherlands, Denmark, Norway, Canada, Australia, and Turkey joined the program during the Concept Demonstration Phase (CDP), with Italy and the Netherlands being Tier 2 partners and the rest Tier 3. Consequently, the aircraft was developed in cooperation with international partners and available for export.
JSF competition
Boeing and Lockheed Martin were selected in early 1997 for CDP, with their concept demonstrator aircraft designated X-32 and X-35 respectively; the McDonnell Douglas team was eliminated and Northrop Grumman and British Aerospace joined the Lockheed Martin team. Each firm would produce two prototype air vehicles to demonstrate conventional takeoff and landing (CTOL), carrier takeoff and landing (CV), and STOVL. Lockheed Martin's design would make use of the work on the SDLF system conducted under the ASTOVL/CALF program. The SDLF system, the key aspect of the X-35 that enabled STOVL operation, consists of a lift fan in the forward center fuselage that is activated by engaging a clutch connecting the driveshaft to the turbines, thus augmenting the thrust from the engine's swivel nozzle. Research from prior aircraft incorporating similar systems, such as the Convair Model 200, Rockwell XFV-12, and Yakovlev Yak-141, was also taken into consideration. By contrast, Boeing's X-32 employed a direct-lift system, in which the augmented turbofan itself was reconfigured for STOVL operation.
Lockheed Martin's commonality strategy was to replace the STOVL variant's SDLF with a fuel tank and the aft swivel nozzle with a two-dimensional thrust vectoring nozzle for the CTOL variant. STOVL operation is made possible through a patented shaft-driven LiftFan propulsion system. This would enable identical aerodynamic configuration for the STOVL and CTOL variants, while the CV variant would have an enlarged wing to reduce landing speed for carrier recovery. Due to aerodynamic characteristics and carrier recovery requirements from the JAST merger, the design configuration settled on a conventional tail compared to the canard delta design from the ASTOVL/CALF; notably, the conventional tail configuration offers much lower risk for carrier recovery compared to the ASTOVL/CALF canard configuration, which was designed without carrier compatibility in mind. This enabled greater commonality between all three variants, as the commonality goal was important at this design stage. Lockheed Martin's prototypes would consist of the X-35A for demonstrating CTOL before converting it to the X-35B for STOVL demonstration and the larger-winged X-35C for CV compatibility demonstration.
The X-35A first flew on 24 October 2000 and conducted flight tests for subsonic and supersonic flying qualities, handling, range, and maneuver performance. After 28 flights, the aircraft was then converted into the X-35B for STOVL testing, with key changes including the addition of the SDLF, the three-bearing swivel module (3BSM), and roll-control ducts. The X-35B would successfully demonstrate the SDLF system by performing stable hover, vertical landing, and short takeoff in less than . The X-35C first flew on 16 December 2000 and conducted field landing carrier practice tests.
On 26 October 2001, Lockheed Martin was declared the winner and was awarded the System Development and Demonstration (SDD) contract; Pratt & Whitney was separately awarded a development contract for the F135 engine for the JSF. The F-35 designation, which was out of sequence with standard DoD numbering, was allegedly determined on the spot by program manager Major General Mike Hough; this came as a surprise even to Lockheed Martin, which had expected the F-24 designation for the JSF.
Design and production
As the JSF program moved into the System Development and Demonstration phase, the X-35 demonstrator design was modified to create the F-35 combat aircraft. The forward fuselage was lengthened by to make room for mission avionics, while the horizontal stabilizers were moved aft to retain balance and control. The diverterless supersonic inlet changed from a four-sided to a three-sided cowl shape and was moved aft. The fuselage section was fuller, the top surface raised by along the centerline and the lower surface bulged to accommodate weapons bays. Following the designation of the X-35 prototypes, the three variants were designated F-35A (CTOL), F-35B (STOVL), and F-35C (CV), all with a design service life of 8,000 hours. Prime contractor Lockheed Martin performs overall systems integration and final assembly and checkout (FACO) at Air Force Plant 4 in Fort Worth, Texas, while Northrop Grumman and BAE Systems supply components for mission systems and airframe.
Adding the systems of a fighter aircraft added weight. The F-35B gained the most, largely due to a 2003 decision to enlarge the weapons bays for commonality between variants; the total weight growth was reportedly up to , over 8%, causing all STOVL key performance parameter (KPP) thresholds to be missed. In December 2003, the STOVL Weight Attack Team (SWAT) was formed to reduce the weight increase; changes included thinned airframe members, smaller weapons bays and vertical stabilizers, less thrust fed to the roll-post outlets, and redesigning the wing-mate joint, electrical elements, and the airframe immediately aft of the cockpit. The inlet was also revised to accommodate more powerful, greater mass flow engines. Many changes from the SWAT effort were applied to all three variants for commonality. By September 2004, these efforts had reduced the F-35B's weight by over , while the F-35A and F-35C were reduced in weight by and respectively. The weight reduction work cost $6.2 billion and caused an 18-month delay.
The first F-35A, designated AA-1, was rolled out at Fort Worth on 19 February 2006 and first flew on 15 December 2006 with chief test pilot Jon S. Beesley at the controls. In 2006, the F-35 was given the name "Lightning II" after the Lockheed P-38 Lightning of World War II. Some USAF pilots have nicknamed the aircraft "Panther" instead, and other nicknames include "Fat Amy" and "Battle Penguin".
The aircraft's software was developed as six releases, or Blocks, for SDD. The first two Blocks, 1A and 1B, readied the F-35 for initial pilot training and multi-level security. Block 2A improved the training capabilities, while 2B was the first combat-ready release planned for the USMC's Initial Operating Capability (IOC). Block 3i retained the capabilities of 2B while adding new Technology Refresh 2 (TR-2) hardware and was planned for the USAF's IOC. The final release for SDD, Block 3F, would have the full flight envelope and all baseline combat capabilities. Alongside software releases, each Block also incorporated avionics hardware updates and air vehicle improvements from flight and structural testing. In what is known as "concurrency", some low rate initial production (LRIP) aircraft lots were delivered in early Block configurations and eventually upgraded to Block 3F once development was complete. After 17,000 flight test hours, the final flight for the SDD phase was completed in April 2018. Like the F-22, the F-35 has been the target of cyberattacks and technology theft efforts, and faces potential vulnerabilities in the integrity of its supply chain.
Testing found several major problems: early F-35B airframes were vulnerable to premature cracking, the F-35C arrestor hook design was unreliable, fuel tanks were too vulnerable to lightning strikes, the helmet-mounted display had problems, and other issues emerged. Software was repeatedly delayed because of its unprecedented scope and complexity. In 2009, the DoD Joint Estimate Team (JET) estimated that the program was 30 months behind the public schedule. In 2011, the program was "re-baselined"; that is, its cost and schedule goals were changed, pushing the IOC from the planned 2010 to July 2015. The decision to simultaneously test, fix defects, and begin production was criticized as inefficient; in 2014, Under Secretary of Defense for Acquisition Frank Kendall called it "acquisition malpractice". The three variants shared just 25% of their parts, far below the anticipated commonality of 70%.
The program has received considerable criticism for cost overruns, for the total projected lifetime cost, and for quality management shortcomings by contractors. The program has been reported as running 80% over budget and 10 years late.
The JSF program was expected to cost about $200 billion for acquisition in base-year 2002 dollars when SDD was awarded in 2001. As early as 2005, the Government Accountability Office (GAO) had identified major program risks in cost and schedule. The costly delays strained the relationship between the Pentagon and contractors. By 2017, delays and cost overruns had pushed the F-35 program's expected acquisition cost to $406.5 billion, with the total lifetime cost (i.e., through 2070, including operations and maintenance) rising to $1.5 trillion in then-year dollars. The F-35A's unit cost (not including engine) for LRIP Lot 13 was $79.2 million in base-year 2012 dollars. Delays in developmental and operational test and evaluation, including integration into the Joint Simulation Environment, pushed the full-rate production decision from the end of 2019 to March 2024, although the actual production rate had already approached the full rate by 2020; the combined full rate at the Fort Worth, Italy, and Japan FACO plants is 156 aircraft annually.
Upgrades and further development
The F-35 is expected to be continually upgraded over its lifetime. The first combat-capable Block 2B configuration, which had basic air-to-air and strike capabilities, was declared ready by the USMC in July 2015. The Block 3F configuration began operational test and evaluation (OT&E) in December 2018; the completion of OT&E in late 2023 paved the way for the conclusion of SDD with the full-rate production decision in March 2024. The F-35 program is also conducting sustainment and upgrade development, with early aircraft from LRIP Lot 2 onwards gradually upgraded to the baseline Block 3F standard by 2021.
With Block 3F as the final build for SDD, the first major upgrade program is Block 4, which began development in 2019 and was initially captured under the Continuous Capability Development and Delivery (C2D2) program. Block 4 is expected to enter service in incremental steps from the late 2020s to the early 2030s; it integrates additional weapons, including those unique to international customers, improves sensor capabilities with the new AN/APG-85 AESA radar and additional ESM bandwidth, and adds Remotely Operated Video Enhanced Receiver (ROVER) support. C2D2 also places greater emphasis on agile software development to enable quicker releases.
The key enabler of Block 4 is the Technology Refresh 3 (TR-3) avionics hardware, which consists of new display, core processor, and memory modules to support increased processing requirements, as well as an engine upgrade that increases the amount of cooling available to support the additional mission systems. The engine upgrade effort explored both improvements to the F135 and significantly more powerful and efficient adaptive cycle engines. In 2018, General Electric and Pratt & Whitney were awarded contracts to develop adaptive cycle engines for potential application in the F-35, and in 2022 the F-35 Adaptive Engine Replacement program was launched to integrate them. However, in 2023 the USAF chose an improved F135 under the Engine Core Upgrade (ECU) program over an adaptive cycle engine because of cost and concerns over the risk of integrating a new engine, initially designed for the F-35A, on the B and C variants. Difficulties with the new TR-3 hardware, including regression testing, have caused delays to Block 4 as well as a halt in aircraft deliveries from July 2023 to July 2024.
Defense contractors have offered upgrades to the F-35 outside of official program contracts. In 2013, Northrop Grumman disclosed its development of a directional infrared countermeasures suite, named Threat Nullification Defensive Resource (ThNDR). The countermeasure system would share the same space as the Distributed Aperture System (DAS) sensors and act as a laser jammer to protect against infrared-homing missiles.
Israel operates a unique subvariant of the F-35A, designated the F-35I, that is designed to better interface with and incorporate Israeli equipment and weapons. The Israeli Air Force also has its own F-35I test aircraft, which provides greater access to the core avionics for integrating Israeli equipment.
Procurement and international participation
The United States is the primary customer and financial backer, with planned procurement of 1,763 F-35As for the USAF, 353 F-35Bs and 67 F-35Cs for the USMC, and 273 F-35Cs for the USN. Additionally, the United Kingdom, Italy, the Netherlands, Turkey, Australia, Norway, Denmark and Canada have agreed to contribute US$4.375 billion towards development costs, with the United Kingdom contributing about 10% of the planned development costs as the sole Tier 1 partner. The initial plan was that the U.S. and eight major partner countries would acquire over 3,100 F-35s through 2035. The three tiers of international participation generally reflect financial stake in the program, the amount of technology transfer and subcontracts open for bid by national companies, and the order in which countries can obtain production aircraft. Alongside program partner countries, Israel and Singapore have joined as Security Cooperative Participants (SCP). Sales to SCP and non-partner states, including Belgium, Japan, and South Korea, are made through the Pentagon's Foreign Military Sales program. Turkey was removed from the F-35 program in July 2019 over security concerns following its purchase of a Russian S-400 surface-to-air missile system.
The average flyaway costs per aircraft are $82.5 million for the F-35A, $109 million for the F-35B, and $102.1 million for the F-35C.
Design
Overview
The F-35 is a family of single-engine, supersonic, stealth multirole strike fighters. The second fifth-generation fighter to enter US service and the first operational supersonic STOVL stealth fighter, the F-35 emphasizes low observables, advanced avionics, and sensor fusion that enable a high level of situational awareness and long-range lethality; the USAF considers the aircraft its primary strike fighter for conducting suppression of enemy air defenses (SEAD) and air interdiction missions, owing to its advanced sensors and mission systems.
The F-35 has a wing-tail configuration with two vertical stabilizers canted for stealth. Flight control surfaces include leading-edge flaps, flaperons, rudders, and all-moving horizontal tails (stabilators); leading-edge root extensions, or chines, also run forward to the inlets. The relatively short 35-foot wingspan of the F-35A and F-35B is set by the requirement to fit inside USN amphibious assault ship parking areas and elevators; the F-35C's larger wing is more fuel efficient. The fixed diverterless supersonic inlets (DSI) use a bumped compression surface and forward-swept cowl to shed the boundary layer of the forebody away from the inlets, which form a Y-duct for the engine. Structurally, the F-35 drew upon lessons from the F-22; composites comprise 35% of airframe weight, the majority being bismaleimide and composite epoxy materials, with some carbon nanotube-reinforced epoxy in later production lots. The F-35 is considerably heavier than the lightweight fighters it replaces; much of the weight can be attributed to the internal weapons bays and the extensive avionics carried.
While lacking the kinematic performance of the larger twin-engine F-22, the F-35 is competitive with fourth-generation fighters such as the F-16 and F/A-18, especially when they carry weapons, because the F-35's internal weapons bays eliminate the drag from external stores. All variants have a top speed of Mach 1.6, attainable with full internal payload. The Pratt & Whitney F135 engine gives good subsonic acceleration and energy, with supersonic dash in afterburner. The F-35, while not a "supercruising" aircraft, can sustain a Mach 1.2 dash with afterburners, an ability that can be useful in battlefield situations. The large stabilators, leading-edge extensions and flaps, and canted rudders provide excellent high-alpha (angle-of-attack) characteristics, with a trimmed alpha of 50°. Relaxed stability and triplex-redundant fly-by-wire controls provide excellent handling qualities and departure resistance. Having over double the F-16's internal fuel, the F-35 has a considerably greater combat radius, while stealth also enables a more efficient mission flight profile.
Sensors and avionics
The F-35's mission systems are among the most complex aspects of the aircraft. The avionics and sensor fusion are designed to improve the pilot's situational awareness and command-and-control capabilities and facilitate network-centric warfare. Key sensors include the Northrop Grumman AN/APG-81 active electronically scanned array (AESA) radar, BAE Systems AN/ASQ-239 Barracuda electronic warfare system, Northrop Grumman/Raytheon AN/AAQ-37 Electro-optical Distributed Aperture System (DAS), Lockheed Martin AN/AAQ-40 Electro-Optical Targeting System (EOTS) and Northrop Grumman AN/ASQ-242 Communications, Navigation, and Identification (CNI) suite. The F-35 was designed for its sensors to work together to provide a cohesive image of the local battlespace; for example, the APG-81 radar also acts as a part of the electronic warfare system.
Much of the F-35's software was developed in C and C++ programming languages, while Ada83 code from the F-22 was also used; the Block 3F software has 8.6 million lines of code. The Green Hills Software Integrity DO-178B real-time operating system (RTOS) runs on integrated core processors (ICPs); data networking includes the IEEE 1394b and Fibre Channel buses. The avionics use commercial off-the-shelf (COTS) components when practical to make upgrades cheaper and more flexible; for example, to enable fleet software upgrades for the software-defined radio (SDR) systems. The mission systems software, particularly for sensor fusion, was one of the program's most difficult parts and responsible for substantial program delays.
The APG-81 radar uses electronic scanning for rapid beam agility and incorporates passive and active air-to-air modes, strike modes, and synthetic aperture radar (SAR) capability, with multi-target track-while-scan at long ranges. The antenna is tilted backwards for stealth. Complementing the radar is the AAQ-37 DAS, which consists of six infrared sensors that provide all-aspect missile launch warning and target tracking; the DAS acts as a situational awareness infrared search-and-track (SAIRST) system and gives the pilot spherical infrared and night-vision imagery on the helmet visor. The ASQ-239 Barracuda electronic warfare system has ten radio-frequency antennas embedded in the edges of the wing and tail for all-aspect radar warning. It also provides sensor fusion of radio-frequency and infrared tracking functions, geolocation threat targeting, and multispectral image countermeasures for self-defense against missiles. The electronic warfare system can detect and jam hostile radars. The AAQ-40 EOTS is mounted behind a faceted low-observable window under the nose and performs laser targeting, forward-looking infrared (FLIR), and long-range IRST functions. The ASQ-242 CNI suite uses half a dozen physical links, including the directional Multifunction Advanced Data Link (MADL), for covert CNI functions. Through sensor fusion, information from radio-frequency receivers and infrared sensors is combined to form a single tactical picture for the pilot. The all-aspect target direction and identification can be shared via MADL with other platforms without compromising low observability, while Link 16 enables communication with older systems.
The F-35 was designed to accept upgrades to its processors, sensors, and software over its lifespan. Technology Refresh 3, which includes a new core processor and a new cockpit display, is planned for Lot 15 aircraft. Lockheed Martin has offered the Advanced EOTS for the Block 4 configuration; the improved sensor fits into the same area as the baseline EOTS with minimal changes. In June 2018, Lockheed Martin selected Raytheon to supply an improved DAS. The USAF has studied the potential for the F-35 to orchestrate attacks by unmanned combat aerial vehicles (UCAVs) via its sensors and communications equipment.
A new radar called the AN/APG-85 is planned for Block 4 F-35s. According to the JPO, the new radar will be compatible with all three major F-35 variants. However, it is unclear if older aircraft will be retrofitted with the new radar.
Stealth and signatures
Stealth is a key aspect of the F-35's design, and radar cross-section (RCS) is minimized through careful shaping of the airframe and the use of radar-absorbent materials (RAM); visible measures to reduce RCS include alignment of edges and continuous curvature of surfaces, serration of skin panels, and the masking of the engine face and turbine. Additionally, the F-35's diverterless supersonic inlet (DSI) uses a compression bump and forward-swept cowl rather than a splitter gap or bleed system to divert the boundary layer away from the inlet duct, eliminating the diverter cavity and further reducing radar signature. The RCS of the F-35 has been characterized as lower than a metal golf ball at certain frequencies and angles; in some conditions, the F-35 compares favorably to the F-22 in stealth. For maintainability, the F-35's stealth design took lessons from earlier stealth aircraft such as the F-22; the F-35's radar-absorbent fibermat skin is more durable and requires less maintenance than older topcoats. The aircraft also has reduced infrared and visual signatures as well as strict controls of radio frequency emitters to prevent their detection. The F-35's stealth design is primarily focused on high-frequency X-band wavelengths; low-frequency radars can spot stealthy aircraft due to Rayleigh scattering, but such radars are also conspicuous, susceptible to clutter, and lack precision. To disguise its RCS, the aircraft can mount four Luneburg lens reflectors.
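As an illustrative physics note rather than program data, the Rayleigh regime applies when a scattering feature of characteristic size \(d\) is much smaller than the radar wavelength \(\lambda\); in that regime the scattering cross-section of a small body scales roughly as
\[ \sigma \propto \frac{d^{6}}{\lambda^{4}}, \]
so the radar return is governed by the target's overall size rather than its detailed shaping, which is why shape-based RCS reduction loses effectiveness against very-low-frequency radars.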
Noise from the F-35 caused concerns in residential areas near potential bases for the aircraft, and residents near two such bases—Luke Air Force Base, Arizona, and Eglin Air Force Base (AFB), Florida—requested environmental impact studies in 2008 and 2009 respectively. Although the noise levels, in decibels, were comparable to those of prior fighters such as the F-16, the F-35's sound power is stronger—particularly at lower frequencies. Subsequent surveys and studies have indicated that the noise of the F-35 was not perceptibly different from the F-16 and F/A-18E/F, though the greater low-frequency noise was noticeable for some observers.
Cockpit
The glass cockpit was designed to give the pilot good situational awareness. The main display is a 20-by-8-inch (50 by 20 cm) panoramic touchscreen, which shows flight instruments, stores management, CNI information, and integrated caution and warnings; the pilot can customize the arrangement of the information. Below the main display is a smaller stand-by display. The cockpit has a speech-recognition system developed by Adacel. The F-35 does not have a head-up display; instead, flight and combat information is displayed on the visor of the pilot's helmet in a helmet-mounted display system (HMDS). The one-piece tinted canopy is hinged at the front and has an internal frame for structural strength. The Martin-Baker US16E ejection seat is launched by a twin-catapult system housed on side rails. Flight controls consist of a right-hand side stick and throttle in a hands-on throttle-and-stick (HOTAS) arrangement. For life support, an onboard oxygen-generation system (OBOGS) is fitted and powered by the Integrated Power Package (IPP), with an auxiliary oxygen bottle and backup oxygen system for emergencies.
The Vision Systems International helmet display is a key piece of the F-35's human-machine interface. Instead of the head-up display mounted atop the dashboard of earlier fighters, the HMDS puts flight and combat information on the helmet visor, allowing the pilot to see it no matter which way they are facing. Infrared and night vision imagery from the Distributed Aperture System can be displayed directly on the HMDS and enables the pilot to "see through" the aircraft. The HMDS allows an F-35 pilot to fire missiles at targets even when the nose of the aircraft is pointing elsewhere by cuing missile seekers at high angles off-boresight. Each helmet costs $400,000. The HMDS weighs more than traditional helmets, and there is concern that it can endanger lightweight pilots during ejection.
Due to the HMDS's vibration, jitter, night-vision and sensor display problems during development, Lockheed Martin and Elbit issued a draft specification in 2011 for an alternative HMDS based on the AN/AVS-9 night vision goggles as backup, with BAE Systems chosen later that year. A cockpit redesign would be needed to adopt an alternative HMDS. Following progress on the baseline helmet, development on the alternative HMDS was halted in October 2013. In 2016, the Gen 3 helmet with improved night vision camera, new liquid crystal displays, automated alignment and software enhancements was introduced with LRIP lot 7.
Armament
To preserve its stealth shaping, the F-35 has two internal weapons bays, each with two weapons stations. The two outboard stations can each carry heavy ordnance (with a lower limit for the F-35B), while the two inboard stations carry air-to-air missiles. Air-to-surface weapons for the outboard station include the Joint Direct Attack Munition (JDAM), the Paveway series of bombs, the Joint Standoff Weapon (JSOW), and cluster munitions (Wind Corrected Munitions Dispenser). The station can also carry multiple smaller munitions such as the GBU-39 Small Diameter Bomb (SDB), GBU-53/B SDB II, and SPEAR 3; up to four SDBs can be carried per station for the F-35A and F-35C, and three for the F-35B. The F-35A achieved certification to carry the B61 Mod 12 nuclear bomb in October 2023. The inboard station can carry the AIM-120 AMRAAM and eventually the AIM-260 JATM. Two compartments behind the weapons bays contain flares, chaff, and towed decoys.
The aircraft can use six external weapons stations for missions that do not require stealth. The wingtip pylons can each carry an AIM-9X or AIM-132 ASRAAM and are canted outwards to reduce their radar cross-section. Additionally, each wing has an inboard station and a middle station, with lower weight ratings on the F-35B. The external wing stations can carry large air-to-surface weapons that would not fit inside the weapons bays, such as the AGM-158 Joint Air to Surface Standoff Missile (JASSM) or AGM-158C LRASM cruise missiles. An air-to-air missile load of eight AIM-120s and two AIM-9s is possible using internal and external weapons stations; a configuration of six bombs, two AIM-120s, and two AIM-9s can also be arranged. The F-35 is armed with a 25 mm GAU-22/A rotary cannon, a lighter four-barrel variant of the GAU-12/U Equalizer. On the F-35A this is mounted internally near the left wing root with 182 rounds carried; the gun is more effective against ground targets than the 20 mm gun carried by other USAF fighters. In 2020, a USAF report noted "unacceptable" accuracy problems with the GAU-22/A on the F-35A. These were due to "misalignments" in the gun's mount, which was also susceptible to cracking; the problems were resolved by 2024. The F-35B and F-35C have no internal gun and instead can use a Terma A/S multi-mission pod (MMP) carrying the GAU-22/A and 220 rounds; the pod is mounted on the centerline of the aircraft and shaped to reduce its radar cross-section. In lieu of the gun, the pod can also be used for other equipment and purposes, such as electronic warfare, aerial reconnaissance, or a rear-facing tactical radar. The pod was not susceptible to the accuracy issues that once plagued the gun on the F-35A variant, though it was apparently not problem-free.
Lockheed Martin is developing a weapon rack called Sidekick that would enable the internal outboard station to carry two AIM-120s, increasing the internal air-to-air payload to six missiles; it is currently offered for Block 4. Block 4 will also have a rearranged hydraulic line and bracket to allow the F-35B to carry four SDBs per internal outboard station; integration of the MBDA Meteor is also planned. The USAF and USN are planning to integrate the AGM-88G AARGM-ER internally in the F-35A and F-35C. Norway and Australia are funding an adaptation of the Naval Strike Missile (NSM) for the F-35; designated Joint Strike Missile (JSM), two missiles can be carried internally with an additional four externally. Both hypersonic missiles and directed-energy weapons such as solid-state lasers are being considered as future upgrades; in 2024, Lockheed Martin disclosed its proposed Mako hypersonic missile, which can be carried internally on the F-35A and C and externally on the B. Additionally, Lockheed Martin is studying the integration of a fiber laser that uses spectral beam combining to merge multiple individual laser modules into a single high-power beam, which can be scaled to various power levels.
The USAF plans for the F-35A to take up the close air support (CAS) mission in contested environments; amid criticism that it is not as well suited as a dedicated attack platform, USAF chief of staff Mark Welsh placed a focus on weapons for CAS sorties, including guided rockets, fragmentation rockets that shatter into individual projectiles before impact, and more compact ammunition for higher capacity gun pods. Fragmentary rocket warheads create greater effects than cannon shells as each rocket creates a "thousand-round burst", delivering more projectiles than a strafing run.
Engine
The aircraft is powered by a single Pratt & Whitney F135 low-bypass augmented turbofan. Derived from the Pratt & Whitney F119 used by the F-22, the F135 has a larger fan and higher bypass ratio to increase subsonic thrust and fuel efficiency, and unlike the F119, it is not optimized for supercruise. The engine contributes to the F-35's stealth by having a low-observable augmenter, or afterburner, that incorporates fuel injectors into thick curved vanes; these vanes are covered by ceramic radar-absorbent materials and mask the turbine. The stealthy augmenter had problems with pressure pulsations, or "screech", at low altitude and high speed early in its development. The low-observable axisymmetric nozzle consists of 15 partially overlapping flaps that create a sawtooth pattern at the trailing edge, which reduces radar signature and creates shed vortices that reduce the infrared signature of the exhaust plume. Due to the engine's large dimensions, the U.S. Navy had to modify its underway replenishment system to facilitate at-sea logistics support. The F-35's Integrated Power Package (IPP) performs power and thermal management and integrates environmental control, auxiliary power, engine starting, and other functions into a single system.
The F135-PW-600 variant for the F-35B incorporates the Shaft-Driven Lift Fan (SDLF) to allow STOVL operations. Designed by Lockheed Martin and developed by Rolls-Royce, the SDLF, also known as the Rolls-Royce LiftSystem, consists of the lift fan, a drive shaft, two roll posts, and a "three-bearing swivel module" (3BSM). The 3BSM is a thrust-vectoring nozzle made up of three segments resembling short cylinder sections with nonparallel bases; as their toothed edges are rotated by motors, the nozzle swivels from being in line with the engine to pointing downward. This allows the main engine exhaust to be deflected downward at the tail of the aircraft, with the nozzle moved by a "fueldraulic" actuator that uses pressurized fuel as the working fluid. Unlike the Harrier's Pegasus engine, which relies entirely on direct engine thrust for lift, the F-35B's system augments the swivel nozzle's thrust with the lift fan; the fan is powered by the low-pressure turbine through a drive shaft when engaged with a clutch, and is placed near the front of the aircraft so that its lift counters the pitching moment of the 3BSM nozzle. Roll control during slow flight is achieved by diverting unheated engine bypass air through wing-mounted thrust nozzles called roll posts.
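As a simplified sketch of the jet-borne balance described above (the symbols are illustrative assumptions rather than program figures), a stable hover requires the vertical thrust components to support the aircraft weight while their pitching moments about the center of gravity cancel:
\[ T_{\text{fan}} + T_{\text{nozzle}} + T_{\text{posts}} \approx W, \qquad T_{\text{fan}}\,\ell_{\text{fan}} \approx T_{\text{nozzle}}\,\ell_{\text{nozzle}}, \]
where \(T_{\text{fan}}\), \(T_{\text{nozzle}}\), and \(T_{\text{posts}}\) are the lift fan, 3BSM nozzle, and roll-post thrusts, \(W\) is the aircraft weight, and \(\ell_{\text{fan}}\) and \(\ell_{\text{nozzle}}\) are the moment arms of the fan and nozzle forward and aft of the center of gravity; placing the lift fan well forward is what allows its thrust to balance the moment produced by the aft nozzle.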
An alternative engine, the General Electric/Allison/Rolls-Royce F136, was under development in the 1990s and 2000s; originally, F-35 engines from Lot 6 onward were to be competitively tendered. Using technology from the General Electric YF120, the F136 was claimed to have a greater temperature margin than the F135 because its higher mass flow design made fuller use of the inlet. The F136 was canceled in December 2011 due to lack of funding.
The F-35 is expected to receive propulsion upgrades over its life cycle to adapt to emerging threats and enable additional capabilities. In 2016, the Adaptive Engine Transition Program (AETP) was launched to develop and test adaptive cycle engines, with one major potential application being the re-engining of the F-35; in 2018, GE and P&W were awarded contracts to develop demonstrator engines, designated XA100 and XA101 respectively. In addition to potential re-engining, P&W is also developing improvements to the baseline F135; the Engine Core Upgrade (ECU), an update to the power module originally called Growth Option 1.0 and then the Engine Enhancement Package, improves engine thrust and fuel burn by 5% and bleed air cooling capacity by 50% to support Block 4. The F135 ECU was selected over the AETP engines in 2023 to provide additional power and cooling for the F-35. Although GE had expected that the more revolutionary XA100 could enter service on the F-35A and C by 2027 and could be adapted for the F-35B, the increased cost and risk caused the USAF to choose the F135 ECU instead.
Maintenance and logistics
The F-35 is designed to require less maintenance than prior stealth aircraft. Some 95% of all field-replaceable parts are "one deep"—that is, nothing else needs to be removed to reach the desired part; for instance, the ejection seat can be replaced without removing the canopy. The F-35 has a fibermat radar-absorbent material (RAM) baked into the skin, which is more durable, easier to work with, and faster to cure than older RAM coatings; similar coatings are being considered for application on older stealth aircraft such as the F-22. Skin corrosion on the F-22 led to the F-35 using a less galvanic corrosion-inducing skin gap filler, fewer gaps in the airframe skin needing filler, and better drainage. The flight control system uses electro-hydrostatic actuators rather than traditional hydraulic systems; these controls can be powered by lithium-ion batteries in case of emergency. Commonality between variants led to the USMC's first aircraft maintenance Field Training Detachment, which applied USAF lessons to their F-35 operations.
The F-35 was initially supported by a computerized maintenance management system named the Autonomic Logistics Information System (ALIS). In concept, any F-35 can be serviced at any maintenance facility and all parts can be globally tracked and shared as needed. Due to numerous problems, such as unreliable diagnoses, excessive connectivity requirements, and security vulnerabilities, ALIS is being replaced by the cloud-based Operational Data Integrated Network (ODIN). From September 2020, ODIN base kits (OBKs) were running ALIS as well as ODIN software, first at Marine Corps Air Station (MCAS) Yuma, Arizona, then at Naval Air Station Lemoore, California, in support of Strike Fighter Squadron (VFA) 125 on 16 July 2021, and then at Nellis Air Force Base, Nevada, in support of the 422nd Test and Evaluation Squadron (TES) on 6 August 2021. In 2022, over a dozen more OBKs were planned to replace ALIS's Standard Operating Unit unclassified (SOU-U) servers. OBK performance is double that of ALIS.
Operational history
Testing
The first F-35A, AA-1, conducted its engine run in September 2006 and first flew on 15 December 2006. Unlike all subsequent aircraft, AA-1 did not have the weight optimization from SWAT; consequently, it mainly tested subsystems common to subsequent aircraft, such as the propulsion, electrical system, and cockpit displays. This aircraft was retired from flight testing in December 2009 and was used for live-fire testing at NAS China Lake.
The first F-35B, BF-1, flew on 11 June 2008, while the first weight-optimized F-35A and F-35C, AF-1 and CF-1, flew on 14 November 2009 and 6 June 2010 respectively. The F-35B's first hover was on 17 March 2010, followed by its first vertical landing the next day. The F-35 Integrated Test Force (ITF) consisted of 18 aircraft at Edwards Air Force Base and Naval Air Station Patuxent River. Nine aircraft at Edwards (five F-35As, three F-35Bs, and one F-35C) performed flight sciences testing, such as F-35A envelope expansion, flight loads, and stores separation, as well as mission systems testing. The other nine aircraft at Patuxent River (five F-35Bs and four F-35Cs) were responsible for F-35B and C envelope expansion and STOVL and CV suitability testing. Additional carrier suitability testing was conducted at the Naval Air Warfare Center Aircraft Division at Lakehurst, New Jersey. Two non-flying aircraft of each variant were used to test static loads and fatigue. For testing avionics and mission systems, a modified Boeing 737-300 with a duplicate of the cockpit, the Lockheed Martin CATBird, has been used. Field testing of the F-35's sensors was conducted during Exercise Northern Edge 2009 and 2011, serving as a significant risk-reduction step.
Flight tests revealed several serious deficiencies that required costly redesigns, caused delays, and resulted in several fleet-wide groundings. In 2011, the F-35C failed to catch the arresting wire in all eight landing tests; a redesigned tail hook was delivered two years later. By June 2009, many of the initial flight test targets had been accomplished, but the program was behind schedule. Software and mission systems were among the biggest sources of delays for the program, with sensor fusion proving especially challenging. In fatigue testing, the F-35B suffered several premature cracks, requiring a redesign of the structure; a third non-flying F-35B was planned to test the redesigned structure. The F-35B and C also had problems with the horizontal tails suffering heat damage from prolonged afterburner use. Early flight control laws had problems with "wing drop" and also made the airplane sluggish, with high angle-of-attack tests in 2015 against an F-16 showing a lack of energy.
At-sea testing of the F-35B was first conducted aboard the amphibious assault ship USS Wasp. In October 2011, two F-35Bs conducted three weeks of initial sea trials, called Development Test I. The second set of F-35B sea trials, Development Test II, began in August 2013, with tests including nighttime operations; two aircraft completed 19 nighttime vertical landings using DAS imagery. The first operational testing involving six F-35Bs was done on the Wasp in May 2015. The final Development Test III, involving operations in high sea states, was completed aboard USS America in late 2016. A Royal Navy F-35 conducted the first "rolling" landing on board HMS Queen Elizabeth in October 2018.
After the redesigned tail hook arrived, the F-35C's carrier-based Development Test I began in November 2014 aboard USS Nimitz and focused on basic day carrier operations and establishing launch and recovery handling procedures. Development Test II, which focused on night operations, weapons loading, and full-power launches, took place in October 2015. The final Development Test III was completed in August 2016 and included tests of asymmetric loads and certifying systems for landing qualifications and interoperability. Operational testing of the F-35C was conducted in 2018, and the first operational squadron achieved the safe-for-flight milestone that December, paving the way for the variant's introduction in 2019.
The F-35's reliability and availability have fallen short of requirements, especially in the early years of testing. The ALIS maintenance and logistics system was plagued by excessive connectivity requirements and faulty diagnoses. In late 2017, the GAO reported that the time needed to repair an F-35 part averaged 172 days, "twice the program's objective", and that the shortage of spare parts was degrading readiness. In 2019, while individual F-35 units achieved mission-capable rates above the 80% target for short periods during deployed operations, fleet-wide rates remained below target. The fleet availability goal of 65% was also not met, although the trend showed improvement. Internal gun accuracy of the F-35A was unacceptable until misalignment issues were addressed by 2024. As of 2020, the number of the program's most serious issues had been cut by half.
Operational test and evaluation (OT&E) with Block 3F, the final configuration for SDD, began in December 2018, but its completion was delayed particularly by technical problems in integration with the DOD's Joint Simulation Environment (JSE); the F-35 finally completed all JSE trials in September 2023.
United States
Training
The F-35A and F-35B were cleared for basic flight training in early 2012, although there were concerns over safety and performance due to lack of system maturity at the time. During the Low Rate Initial Production (LRIP) phase, the three U.S. military services jointly developed tactics and procedures using flight simulators, testing effectiveness, discovering problems and refining design. On 10 September 2012, the USAF began an operational utility evaluation (OUE) of the F-35A, including logistical support, maintenance, personnel training, and pilot execution.
The USMC F-35B Fleet Replacement Squadron (FRS) was initially based at Eglin AFB in 2012 alongside USAF F-35A training units, before moving to MCAS Beaufort in 2014 while another FRS was stood up at MCAS Miramar in 2020. The USAF F-35A basic course is held at Eglin AFB and Luke AFB; in January 2013, training began at Eglin with capacity for 100 pilots and 2,100 maintainers at once. Additionally, the 6th Weapons Squadron of the USAF Weapons School was activated at Nellis AFB in June 2017 for F-35A weapons instructor curriculum while the 65th Aggressor Squadron was reactivated with the F-35A in June 2022 to expand training against adversary stealth aircraft tactics. The USN stood up its F-35C FRS in 2012 with VFA-101 at Eglin AFB, but operations would later be transferred and consolidated under VFA-125 at NAS Lemoore in 2019. The F-35C was introduced to the Strike Fighter Tactics Instructor course, or TOPGUN, in 2020 and the additional capabilities of the aircraft greatly revamped the course syllabus.
U.S. Marine Corps
On 16 November 2012, the USMC received the first F-35B of VMFA-121 at MCAS Yuma. The USMC declared Initial Operational Capability (IOC) for the F-35B in the Block 2B configuration on 31 July 2015 after operational trials, with some limitations in night operations, mission systems, and weapons carriage. USMC F-35Bs participated in their first Red Flag exercise in July 2016, with 67 sorties conducted. The first F-35B deployment occurred in 2017 at MCAS Iwakuni, Japan; combat employment began in July 2018 from the amphibious assault ship USS Essex, with the first combat strike on 27 September 2018 against a Taliban target in Afghanistan.
In addition to deploying F-35Bs on amphibious assault ships, the USMC plans to disperse the aircraft among austere forward-deployed bases with shelter and concealment to enhance survivability while remaining close to a battlespace. Under a concept known as distributed STOVL operations (DSO), F-35Bs would operate from temporary bases in allied territory within hostile missile engagement zones and displace inside the enemy's 24- to 48-hour targeting cycle; this strategy allows F-35Bs to respond rapidly to operational needs, with mobile forward arming and refueling points (M-FARPs) accommodating KC-130 and MV-22 Osprey aircraft to rearm and refuel the jets, as well as littoral areas for sea links of mobile distribution sites. For higher echelons of maintenance, F-35Bs would return from M-FARPs to rear-area friendly bases or ships. Helicopter-portable metal planking is needed to protect unprepared roads from the F-35B's exhaust; the USMC is studying lighter heat-resistant options. These operations have become part of the larger USMC Expeditionary Advanced Base Operations (EABO) concept.
The first USMC F-35C squadron, VMFA-314, achieved Full Operational Capability in July 2021 and was first deployed on board USS Abraham Lincoln as a part of Carrier Air Wing 9 in January 2022.
In 2024, Lt. Gen. Sami Sadat of Afghanistan described an operation in which F-35Bs bombed a Taliban position through cloud cover. "The impact [the F-35] left on my soldiers was amazing. Like, whoa, you know, we have this technology," Sadat said. "But also the impact on the Taliban was quite crippling, because they have never seen Afghan forces move in the winter, and they have never seen planes that could bomb through the clouds."
On 9 November 2024 Marine F-35Cs carried out strikes on the Houthi movement in Yemen in the context of the Red Sea crisis.
U.S. Air Force
The USAF's F-35A achieved IOC in the Block 3i configuration with the 34th Fighter Squadron at Hill Air Force Base, Utah, on 2 August 2016. F-35As conducted their first Red Flag exercise in 2017; system maturity had improved, and the aircraft scored a kill ratio of 15:1 against an F-16 aggressor squadron in a high-threat environment. The first USAF F-35A deployment occurred on 15 April 2019 to Al Dhafra Air Base, UAE. On 27 April 2019, USAF F-35As were first used in combat in an airstrike on an Islamic State tunnel network in northern Iraq.
For European basing, RAF Lakenheath in the UK was chosen as the first installation to station two F-35A squadrons, with 48 aircraft adding to the 48th Fighter Wing's existing F-15C and F-15E squadrons. The first aircraft of the 495th Fighter Squadron arrived on 15 December 2021.
The F-35's operating cost is higher than that of some older USAF tactical aircraft. In fiscal year 2018, the F-35A's cost per flight hour (CPFH) was $44,000, a figure reduced to $35,000 in 2019. For comparison, in 2015 the CPFH of the A-10 was $17,716; the F-15C, $41,921; and the F-16C, $22,514. Lockheed Martin hopes to reduce the F-35A's CPFH to $25,000 by 2025 through performance-based logistics and other measures.
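For scale, using an assumed utilization of 300 flight hours per aircraft per year (an illustrative figure, not a program number), the reduction from the FY2018 to the FY2019 rate works out to roughly
\[ (\$44{,}000 - \$35{,}000) \times 300 \approx \$2.7\ \text{million per aircraft per year}. \]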
U.S. Navy
The USN achieved operational status with the F-35C in the Block 3F configuration on 28 February 2019. On 2 August 2021, the F-35Cs of VFA-147, along with the CMV-22 Osprey, embarked on their maiden deployment as part of Carrier Air Wing 2 on board USS Carl Vinson.
United Kingdom
The United Kingdom's Royal Air Force and Royal Navy operate the F-35B. Called the Lightning in British service, it has replaced the Harrier GR9, retired in 2010, and the Tornado GR4, retired in 2019. The F-35 is to be Britain's primary strike aircraft for the next three decades. One of the Royal Navy's requirements was a Shipborne Rolling Vertical Landing (SRVL) mode to increase maximum landing weight by using wing lift during landing. Like the Italian Navy's aircraft, British F-35Bs use ski-jumps to fly from their aircraft carriers, HMS Queen Elizabeth and HMS Prince of Wales. British F-35Bs are not intended to use the Brimstone 2 missile. In July 2013, Chief of the Air Staff Air Chief Marshal Sir Stephen Dalton announced that No. 617 (The Dambusters) Squadron would be the RAF's first operational F-35 squadron.
The first British F-35 squadron was No. 17 (Reserve) Test and Evaluation Squadron (TES), which stood up on 12 April 2013 as the type's Operational Evaluation Unit. By June 2013, the RAF had received three of the 48 F-35s on order, initially based at Eglin Air Force Base. In June 2015, the F-35B undertook its first launch from a ski-jump at NAS Patuxent River. On 5 July 2017, it was announced that the second UK-based RAF squadron would be No. 207 Squadron, which reformed on 1 August 2019 as the Lightning Operational Conversion Unit. No. 617 Squadron reformed on 18 April 2018 during a ceremony in Washington, D.C., becoming the first RAF front-line squadron to operate the type; it received its first four F-35Bs on 6 June, flying them from MCAS Beaufort to RAF Marham. On 10 January 2019, No. 617 Squadron and its F-35s were declared combat-ready.
April 2019 saw the first overseas deployment of a UK F-35 squadron when No. 617 Squadron went to RAF Akrotiri, Cyprus. This reportedly led on 25 June 2019 to the first combat use of an RAF F-35B: an armed reconnaissance flight searching for Islamic State targets in Iraq and Syria. In October 2019, the Dambusters and No. 17 TES F-35s were embarked on HMS Queen Elizabeth for the first time. No. 617 Squadron departed RAF Marham on 22 January 2020 for their first Exercise Red Flag with the Lightning. As of November 2022, 26 F-35Bs were based in the United Kingdom (with 617 and 207 Squadrons) and a further three were permanently based in the United States (with 17 Squadron) for testing and evaluation purposes.
The UK's second operational squadron is the Fleet Air Arm's 809 Naval Air Squadron, which stood up in December 2023.
Australia
Australia's first F-35, designated A35-001, was manufactured in 2014, with flight training provided through the international Pilot Training Centre (PTC) at Luke Air Force Base in Arizona. The first two F-35s were unveiled to the Australian public on 3 March 2017 at the Avalon Airshow. By 2021, the Royal Australian Air Force had accepted 26 F-35As, with nine in the US and 17 operating with No. 3 Squadron and No. 2 Operational Conversion Unit at RAAF Base Williamtown. With 41 trained RAAF pilots and 225 trained maintenance technicians, the fleet was declared ready to deploy on operations. Australia was originally expected to receive all 72 F-35s by 2023; the final nine aircraft, built to the TR-3 standard, arrived in Australia in December 2024.
Israel
The Israeli Air Force (IAF) declared the F-35 operationally capable on 6 December 2017. According to Kuwaiti newspaper Al Jarida, in July 2018, a test mission of at least three IAF F-35s flew to Iran's capital Tehran and back to Tel Aviv. While publicly unconfirmed, regional leaders acted on the report; Iran's supreme leader Ali Khamenei reportedly fired the air force chief and commander of Iran's Revolutionary Guard Corps over the mission.
On 22 May 2018, IAF chief Amikam Norkin said that the service had employed their F-35Is in two attacks on two battle fronts, marking the first combat operation of an F-35 by any country. Norkin said it had been flown "all over the Middle East", and showed photos of an F-35I flying over Beirut in daylight. In July 2019, Israel expanded its strikes against Iranian missile shipments; IAF F-35Is allegedly struck Iranian targets in Iraq twice.
In November 2020, the IAF announced that a unique F-35I testbed aircraft was among four aircraft received in August; it is to be used to test and integrate Israeli-produced weapons and electronic systems on F-35s received later. This is the only example of a testbed F-35 delivered to a non-US air force.
On 11 May 2021, eight IAF F-35Is took part in an attack on 150 targets in Hamas' rocket array, including 50–70 launch pits in the northern Gaza Strip, as part of Operation Guardian of the Walls.
On 6 March 2022, the IDF stated that on 15 March 2021, F-35Is shot down two Iranian drones carrying weapons to the Gaza Strip. This was the first operational shoot down and interception carried out by the F-35. They were also used in the Israel–Hamas war.
On 2 November 2023, the IDF posted on social media that they used an F-35I to shoot down a Houthi cruise missile over the Red Sea that was fired from Yemen during the Israel-Hamas War.
The F-35I Adir was used in the 29 September 2024 Israeli attacks on Yemen. F-35Is were also reportedly involved in the October 2024 Israeli strikes on Iran.
Italy
Italy's F-35As were declared to have reached initial operational capability (IOC) on 30 November 2018. At the time, Italy had taken delivery of 10 F-35As and one F-35B; two of the F-35As and the single F-35B were stationed in the U.S. for training, while the remaining eight F-35As were stationed at Amendola Air Base.
Japan
Japan's F-35As were declared to have reached initial operational capability (IOC) on 29 March 2019. At the time, Japan had taken delivery of 10 F-35As, stationed at Misawa Air Base. Japan plans to eventually acquire a total of 147 F-35s, including 42 F-35Bs, and intends to operate the latter variant from its Izumo-class destroyers.
Norway
On 6 November 2019, Norway declared initial operational capability (IOC) for its fleet of 15 F-35As out of a planned 52. On 6 January 2022, Norway's F-35As replaced its F-16s for the NATO quick reaction alert mission in the High North.
On 22 September 2023, two F-35As from the Royal Norwegian Air Force landed on a motorway near Tervo, Finland, demonstrating for the first time that the F-35A can operate from a highway; unlike the F-35B, it cannot land vertically. The fighters were also refueled with their engines running. The commander of the Royal Norwegian Air Force, Major General Rolf Folland, said: "Fighter jets are vulnerable on the ground, so by being able to use small airfields – and now motorways – (this) increases our survivability in war."
Netherlands
On 27 December 2021, the Netherlands declared initial operational capability (IOC) for the 24 F-35As it had received to date from its order of 46. In 2022, the Netherlands announced it would order an additional six F-35s, bringing the total ordered to 52 aircraft.
Variants
The F-35 was designed with three initial variants – the F-35A, a CTOL land-based version; the F-35B, a STOVL version capable of use either on land or on aircraft carriers; and the F-35C, a CATOBAR carrier-based version. Since then, there has been work on the design of nationally specific versions for Israel and Canada.
F-35A
The F-35A is the conventional take-off and landing (CTOL) variant intended for the USAF and other air forces. It is the smallest and lightest version and is capable of 9 g maneuvers, the highest of all the variants.
Although the F-35A currently conducts aerial refueling via boom and receptacle method, the aircraft can be modified for probe-and-drogue refueling if needed by the customer. A drag chute pod can be installed on the F-35A, with the Royal Norwegian Air Force being the first operator to adopt it.
F-35B
The F-35B is the short take-off and vertical landing (STOVL) variant of the aircraft. Similar in size to the A variant, the B sacrifices about a third of the A variant's fuel volume to accommodate the shaft-driven lift fan (SDLF). This variant is limited to 7 g. Unlike other variants, the F-35B has no landing hook. The "STOVL/HOOK" control instead engages conversion between normal and vertical flight. The F-35B is capable of Mach 1.6 (1,976 km/h) and can perform vertical and/or short take-off and landing (V/STOL).
F-35C
The F-35C is a carrier-based variant designed for catapult-assisted take-off and barrier-arrested recovery (CATOBAR) operations from aircraft carriers. Compared to the F-35A, the F-35C features larger wings with foldable wingtip sections, larger control surfaces for improved low-speed control, stronger landing gear for the stresses of carrier arrested landings, a twin-wheel nose gear, and a stronger tailhook for use with carrier arrestor cables. The larger wing area allows for a decreased landing speed while increasing both range and payload. The F-35C is limited to 7.5 g.
F-35I "Adir"
The F-35I Adir (meaning "Awesome" or "Mighty") is an F-35A with unique Israeli modifications. The US initially refused to allow such changes before permitting Israel to integrate its own electronic warfare systems, including sensors and countermeasures. The main computer has a plug-and-play function for add-on systems; proposals include an external jamming pod, and new Israeli air-to-air missiles and guided bombs in the internal weapon bays. A senior IAF official said that the F-35's stealth may be partly overcome within 10 years despite a 30-to-40-year service life, hence Israel's insistence on using its own electronic warfare systems. In 2010, Israel Aerospace Industries (IAI) considered a two-seat F-35 concept; an IAI executive noted that there was a "known demand for two seats not only from Israel but from other air forces". In 2008, IAI planned to produce conformal fuel tanks.
Israel had ordered a total of 75 F-35Is by 2023, with 36 already delivered as of November 2022.
Proposed variants
CF-35
The Canadian CF-35 was a proposed variant that would differ from the F-35A through the addition of a drogue parachute and the possible inclusion of an F-35B/C-style refueling probe. In 2012, it was revealed that the CF-35 would employ the same boom refueling system as the F-35A. One alternative proposal was the adoption of the F-35C for its probe refueling and lower landing speed; however, the Parliamentary Budget Officer's report cited the F-35C's limited performance and payload as too high a price to pay. Following the 2015 federal election, the Liberal Party, whose campaign had included a pledge to cancel the F-35 procurement, formed a new government and commenced an open competition to replace the existing CF-18 Hornet. The CF-35 variant was ultimately deemed too expensive to develop and was not pursued; the Canadian government decided not to seek any other modifications in the Future Fighter Capability Project and instead focused on the potential procurement of the existing F-35A variant.
On 28 March 2022, the Canadian government began negotiations with Lockheed Martin for 88 F-35As to replace the aging fleet of CF-18 fighters starting in 2025. The aircraft are reported to cost up to CA$19 billion in total, with a life-cycle cost estimated at CA$77 billion over the course of the F-35 program. On 9 January 2023, Canada formally confirmed the purchase of 88 aircraft. The initial delivery to the Royal Canadian Air Force in 2026 will be 4 aircraft, followed by 6 aircraft in each of 2027 and 2028, with the rest to be delivered by 2032. The additional characteristics confirmed for the Canadian aircraft include the drag chute pod for landings on short or icy Arctic runways, as well as the Sidekick system, which allows up to six AIM-120D missiles to be carried internally (instead of the typical internal capacity of four AIM-120s on other variants).
New export variant
In December 2021, it was reported that Lockheed Martin was developing a new variant for an unspecified foreign customer. The Department of Defense released US$49 million in funding for this work.
Operators
Royal Australian Air Force – 72 F-35As delivered of 72 ordered.
Belgian Air Component – 1 officially delivered (but none have left the US), 34 F-35As planned.
Royal Danish Air Force – 10 F-35As delivered (including 6 stationed at Luke AFB for training) of the 27 planned for the RDAF.
Israeli Air Force – 39 delivered (F-35I "Adir"), including one F-35 testbed aircraft for indigenous Israeli weapons, electronics, and structural upgrades, designated AS-15. A total of 75 ordered.
Italian Air Force – 17 F-35As and 3 F-35Bs delivered of 75 F-35As and 20 F-35Bs ordered for the Italian Air Force.
Italian Navy – 3 delivered out of 20 F-35Bs ordered for the Italian Navy.
Japan Air Self-Defense Force – 27 F-35As operational with a total order of 147, including 105 F-35As and 42 F-35Bs.
Royal Netherlands Air Force – 39 F-35As delivered and operational, of which 8 are trainer aircraft based at Luke Air Force Base in the USA; 52 F-35As ordered in total. The RNLAF is the second air force with a fifth-generation-only fighter fleet, following the retirement of its F-16s.
Royal Norwegian Air Force – 40 F-35As delivered and operational, of which 21 are in Norway and 10 are based in the US for training, out of 52 F-35As planned in total. They differ from other F-35As through the addition of a drogue parachute.
Republic of Korea Air Force – 40 F-35As ordered and delivered, with 25 more ordered in September 2023.
Republic of Korea Navy – about 20 F-35Bs planned; the purchase has not yet been approved by the South Korean parliament.
Royal Air Force and Royal Navy (owned by the RAF but jointly operated) – 34 F-35Bs received, with 30 in the UK after the loss of one aircraft in November 2021; the other three are in the US, where they are used for testing and training. 42 aircraft (24 front-line fighters and 18 training aircraft) were originally intended to be fast-tracked by 2023, and a total of 48 have been ordered; 138 were originally planned, though the expectation in 2021 was to eventually reach around 60 to 80. In 2022, it was announced that the UK would acquire 74 F-35Bs, with a decision on whether to go beyond that number, including the possibility of reviving the original plan of 138 aircraft, to be made in the mid-2020s. In February 2024, the United Kingdom appeared to signal a reaffirmation of its commitment to procure 138 F-35B aircraft, as per the original plan.
United States Air Force – 302 delivered with 1,763 F-35As planned
United States Marine Corps – 112 F-35B/C delivered with 353 F-35Bs and 67 F-35Cs planned
United States Navy – 30 delivered with 273 F-35Cs planned
Future operators
Royal Canadian Air Force - 88 F-35As (Block 4) ordered on 9 January 2023. The first 4 are expected to be delivered in 2026, 6 in 2027, another 6 in 2028, and the rest delivered by 2032. This will phase out the CF-18s that were delivered in the 1980s.
Czech Air Force – The U.S. State Department approved a possible sale to the Czech Republic of F-35 aircraft, munitions and related equipment worth up to $5.62 billion, according to a 29 June 2023 announcement. On 29 January 2024, the Czech government signed a memorandum of understanding with the U.S. for the purchase of 24 F-35A fighters. In September 2024, the Czech Republic signed a contract for the logistic support of the F-35A.
Finnish Air Force – 64 F-35As on order. The F-35A Block 4 was selected via the HX Fighter Program to replace the current F/A-18 Hornets.
German Air Force – 35 F-35As ordered, with an order for 10 more being considered.
Hellenic Air Force – 20 F-35As on order, with expected delivery in late 2027 to early 2028. An option for another additional 20 aircraft is also included.
Polish Air Force – 32 F-35A "Husarz" Block 4 jets with the Technology Refresh 3 (TR-3) update and drogue parachutes were ordered on 31 January 2020. Deliveries are expected to begin in 2024 and conclude in 2030. There are plans for two more squadrons of 16 jets each, for a total of 32 additional F-35s.
Romanian Air Force – Romania signed the contract for 32 F-35A worth $6.5 billion on 21 November 2024. The plan is to buy 48 F-35A aircraft in two phases – first phase of 32 and second phase of 16. The first F-35s will arrive after 2030 and will replace the current Romanian F-16 fleet between 2034 and 2040.
Republic of Singapore Air Force – 12 F-35Bs on order, with the first 4 to be delivered in 2026 and the other 8 in 2028. 8 F-35As have also been ordered and are expected to arrive by 2030.
Swiss Air Force – 36 F-35A ordered to replace the current F-5E/F Tiger II and F/A-18C/D Hornet. Deliveries will begin in 2027 and conclude in 2030.
Order and approval cancellations
Republic of China Air Force – Taiwan has requested to buy the F-35 from the US. However, this has been rejected by the US for fear of a critical response from China. In March 2009, Taiwan was again looking to buy U.S. fifth-generation fighter jets. However, in September 2011, during a visit to the US, the Deputy Minister of National Defense of Taiwan confirmed that while the country was busy upgrading its current F-16s, it was still also looking to procure a next-generation aircraft such as the F-35. This received the usual critical response from China. Taiwan renewed its push for an F-35 purchase during Donald Trump's presidency in early 2017, again causing criticism from China. In March 2018, Taiwan once again reiterated its interest in the F-35 in light of an anticipated round of arms procurement from the United States. The F-35B STOVL variant is reportedly the political favorite, as it would allow the Republic of China Air Force to continue operations even if its limited number of runways were bombed in an escalation with the People's Republic of China. In April 2018, however, it became clear that the U.S. government was reluctant to sell the F-35 to Taiwan over worries that Chinese spies within the Taiwanese Armed Forces could compromise classified data concerning the aircraft and grant Chinese military officials access. In November 2018, it was reported that Taiwanese military leadership had abandoned the procurement of the F-35 in favor of a larger number of F-16V Viper aircraft. The decision was reportedly motivated by concerns about industry independence, as well as cost and previously raised espionage concerns.
Royal Thai Air Force – 8 or 12 planned to replace the F-16A/B Block 15 ADF in service. On 12 January 2022, Thailand's cabinet approved a budget for the first four F-35As, estimated at 13.8 billion baht in FY2023. On 22 May 2023, the United States Department of Defense implied it would turn down Thailand's bid to buy F-35 fighters and would instead offer F-16 Block 70/72 Viper and F-15EX Eagle II fighters, a Royal Thai Air Force source said.
Turkish Air Force – 30 were ordered, of up to 100 total planned. Future purchases have been banned by the U.S., with contracts canceled by early 2020, following Turkey's decision to buy the S-400 missile system from Russia. Six of Turkey's 30 ordered F-35As were completed (they are still kept in a hangar in the United States and so far have not been transferred to the USAF, despite a modification in the 2020 Fiscal Year defense budget by the U.S. Congress which gives authority to do so if necessary), and two more were on the assembly line in 2020. The first four F-35As were delivered to Luke Air Force Base in 2018 and 2019 for the training of Turkish pilots. On 20 July 2020, the U.S. government formally approved the seizure of eight F-35As originally bound for Turkey and their transfer to the USAF, together with a contract to modify them to USAF specifications. The U.S. has not refunded the $1.4 billion payment made by Turkey for the F-35A fighters. On 1 February 2024, the United States expressed willingness to readmit Turkey into the F-35 program if Turkey agrees to give up its S-400 system.
United Arab Emirates Air Force – Up to 50 F-35As were planned, but on 27 January 2021 the Biden administration temporarily suspended F-35 sales to the UAE. After pausing the deal for review, the Biden administration confirmed on 13 April 2021 that it would move forward with the sale. In December 2021, the UAE withdrew from the purchase as it did not agree to the additional terms of the transaction set by the US. On 14 September 2024, a senior UAE official said that the United Arab Emirates does not expect to resume talks with the U.S. about the F-35.
Accidents and notable incidents
Various models of the F-35 have been involved in incidents since 2014. These have often involved operator error or mechanical issues, which have set back the program. In comparison to most military aircraft, however, the F-35 is described as being safe.
Specifications (F-35A)
Differences between variants
Appearances in media
| Technology | Specific aircraft | null |
11815 | https://en.wikipedia.org/wiki/Food%20additive | Food additive | Food additives are substances added to food to preserve flavor or enhance taste, appearance, or other sensory qualities. Some additives, such as vinegar (pickling), salt (salting), smoke (smoking) and sugar (crystallization), have been used for centuries to preserve food. This allows for longer-lasting foods, such as bacon, sweets or wines.
With the advent of ultra-processed foods in the late 20th century, many additives having both natural and artificial origin were introduced. Food additives also include substances that may be introduced to food indirectly (called "indirect additives") in the manufacturing process through packaging, storage or transport.
In Europe and internationally, many additives are designated with E numbers, while in the United States, additives in amounts deemed safe for human consumption are designated as GRAS.
Identification
To regulate these additives and inform consumers, each additive is assigned a unique number called an "E number", which is used in Europe for all approved additives. This numbering scheme has been adopted and extended by the Codex Alimentarius Commission as the International Numbering System for Food Additives (INS) to internationally identify all additives (INS numbers).
E numbers are all prefixed by "E", but countries outside Europe use only the number, whether the additive is approved in Europe or not.
For example, acetic acid is written as E260 on products sold in Europe, but is simply known as additive 260 in some countries. Additive 103, alkannin, is not approved for use in Europe, so does not have an E number, although it is approved for use in Australia and New Zealand. Since 1987, Australia has had an approved system of labelling for additives in packaged foods. Each food additive has to be named or numbered. The numbers are the same as in Europe, but without the prefix "E".
The United States Food and Drug Administration (FDA) lists these items as GRAS; they are listed under both their Chemical Abstracts Service number and FDA regulation under the United States Code of Federal Regulations.
The FDA publishes a list of food additives for all approved ingredients.
Categories
Food additives can be divided into several groups, although there is some overlap because some additives exert more than one effect. For example, salt is both a preservative and a flavoring.
Acidulants confer sour or acid taste. Common acidulants include vinegar, citric acid, tartaric acid, malic acid, fumaric acid, and lactic acid.
Acidity regulators are used for controlling the pH of foods for stability or to affect activity of enzymes.
Anticaking agents keep powders such as milk powder from caking or sticking.
Antifoaming agents reduce or prevent foaming in foods. Foaming agents do the reverse.
Antioxidants such as vitamin C act as preservatives by inhibiting the degradation of food by oxygen.
Bulking agents such as starch are additives that increase the bulk of a food without affecting its taste.
Colorings are added to food to replace colors lost during preparation or to make food look more attractive.
Fortifying agents: Vitamins and minerals may be added to increase the nutritional value.
In contrast to colorings, color retention agents are used to preserve a food's existing color.
Emulsifiers allow water and oils to remain mixed together in an emulsion, as in mayonnaise, ice cream, and homogenized milk.
Flavorings are additives that give food a particular taste or smell, and may be derived from natural ingredients or created artificially.
In Europe, flavorings do not have an E-code and are not considered food additives.
Flavor enhancers enhance a food's existing flavors. A popular example is monosodium glutamate. Some flavor enhancers have their own flavors that are independent of the food.
Flour treatment agents are added to flour to improve its color or its use in baking.
Glazing agents provide a shiny appearance or protective coating to foods.
Humectants prevent foods from drying out.
Tracer gas allows for package integrity testing to prevent foods from being exposed to the atmosphere, thus guaranteeing shelf life.
Preservatives prevent or inhibit spoilage of food due to fungi, bacteria and other microorganisms.
Stabilizers, thickening and gelling agents, like agar or pectin (used in jam for example) give foods a firmer texture. While they are not true emulsifiers, they help to stabilize emulsions.
Sweeteners are added to foods for flavoring. Sweeteners other than sugar are added to keep the food energy (calories) low.
Thickening agents are substances which, when added to the mixture, increase its viscosity without substantially modifying its other properties.
Bisphenols, phthalates, and perfluoroalkyl chemicals (PFCs) are indirect additives used in manufacturing or packaging. In July 2018 the American Academy of Pediatrics called for more careful study of those three substances, along with nitrates and food coloring, as they might harm children during development.
Safety and regulation
With the increasing use of processed foods since the 19th century, food additives are more widely used. Many countries regulate their use. For example, boric acid was widely used as a food preservative from the 1870s to the 1920s, but was banned after World War I due to its toxicity, as demonstrated in animal and human studies. During World War II, the urgent need for cheap, available food preservatives led to boric acid being used again, but it was finally banned in the 1950s. Such cases led to a general mistrust of food additives, and an application of the precautionary principle led to the conclusion that only additives that are known to be safe should be used in foods. In the United States, this induced adoption of the Delaney clause, an amendment to the Federal Food, Drug, and Cosmetic Act of 1938, stating that no carcinogenic substances may be used as food additives. However, after the banning of cyclamates in the United States and Britain in 1969, saccharin, the only remaining legal artificial sweetener at the time, was found to cause cancer in rats. Widespread public outcry in the United States, partly communicated to Congress by postage-paid postcards supplied in the packaging of sweetened soft drinks, led to the retention of saccharin, despite its violation of the Delaney clause. However, in 2000, saccharin was found to be carcinogenic in rats due only to their unique urine chemistry.
In 2007, Food Standards Australia New Zealand published an official shoppers' guidance with which the concerns of food additives and their labeling are mediated. In the EU, it can take 10 years or more to obtain approval for a new food additive. This includes five years of safety testing, followed by two years for evaluation by the European Food Safety Authority (EFSA) and another three years before the additive receives an EU-wide approval for use in every country in the European Union. Apart from testing and analyzing food products during the whole production process to ensure safety and compliance with regulatory standards, Trading Standards officers (in the UK) protect the public from any illegal use or potentially dangerous mis-use of food additives by performing random testing of food products.
There has been controversy associated with the risks and benefits of food additives. Natural additives may be similarly harmful or be the cause of allergic reactions in certain individuals. For example, safrole was used to flavor root beer until it was shown to be carcinogenic. Due to the application of the Delaney clause, it may not be added to foods, even though it occurs naturally in sassafras and sweet basil.
Hyperactivity
Although concerns have been expressed about a linkage between additives and hyperactivity, there is no clear evidence of a cause-and-effect relationship.
Toxicity assessment
In 2012, the EFSA proposed the tier approach to evaluate the potential toxicity of food additives. It is based on four dimensions: toxicokinetics (absorption, distribution, metabolism and excretion); genotoxicity; subchronic (at least 90 days) and chronic toxicity and carcinogenicity; and reproductive and developmental toxicity.
Micronutrients
A subset of food additives, micronutrients added in food fortification processes preserve nutrient value by providing vitamins and minerals to foods such as flour, cereal, margarine and milk which normally would not retain such high levels. Added ingredients, such as air, bacteria, fungi, and yeast, also contribute manufacturing and flavor qualities, and reduce spoilage.
Regulation in the United States
The United States Food and Drug Administration defines a food additive as "any substance the intended use of which results or may reasonably be expected to result directly or indirectly in its becoming a component or otherwise affecting the characteristics of any food". In order for a novel food additive to be approved, a food additive approval petition must be submitted to the FDA. The identity of the ingredient, the proposed use in the food system, the technical effect of the ingredient, a method of analysis for the ingredient in foods, information on the manufacturing process, and full safety reports must be defined in a food additive petition. The FDA evaluates the chemical composition of the ingredient, the quantities that would be typically consumed, acute and chronic health impacts, and other safety factors. The FDA reviews the petition prior to market approval of the additive.
Standardization
ISO has published a series of standards regarding the topic and these standards are covered by ICS 67.220.
| Technology | Food, water and health | null |
11835 | https://en.wikipedia.org/wiki/Glycine | Glycine | Glycine (symbol Gly or G; ) is an amino acid that has a single hydrogen atom as its side chain. It is the simplest stable amino acid (carbamic acid is unstable). Glycine is one of the proteinogenic amino acids. It is encoded by all the codons starting with GG (GGU, GGC, GGA, GGG). Glycine is integral to the formation of alpha-helices in secondary protein structure due to the "flexibility" caused by such a small R group. Glycine is also an inhibitory neurotransmitter – interference with its release within the spinal cord (such as during a Clostridium tetani infection) can cause spastic paralysis due to uninhibited muscle contraction.
It is the only achiral proteinogenic amino acid. It can fit into hydrophilic or hydrophobic environments, due to its minimal side chain of only one hydrogen atom.
History and etymology
Glycine was discovered in 1820 by French chemist Henri Braconnot when he hydrolyzed gelatin by boiling it with sulfuric acid. He originally called it "sugar of gelatin", but French chemist Jean-Baptiste Boussingault showed in 1838 that it contained nitrogen. In 1847 American scientist Eben Norton Horsford, then a student of the German chemist Justus von Liebig, proposed the name "glycocoll"; however, the Swedish chemist Berzelius suggested the simpler current name a year later. The name comes from the Greek word γλυκύς "sweet tasting" (which is also related to the prefixes glyco- and gluco-, as in glycoprotein and glucose). In 1858, the French chemist Auguste Cahours determined that glycine was an amine of acetic acid.
Production
Although glycine can be isolated from hydrolyzed proteins, this route is not used for industrial production, as it can be manufactured more conveniently by chemical synthesis. The two main processes are amination of chloroacetic acid with ammonia, giving glycine and hydrochloric acid, and the Strecker amino acid synthesis, which is the main synthetic method in the United States and Japan. About 15 thousand tonnes are produced annually in this way.
Glycine is also co-generated as an impurity in the synthesis of EDTA, arising from reactions of the ammonia co-product.
Chemical reactions
Glycine's acid–base properties are its most important. In aqueous solution, glycine is amphoteric: below about pH 2.4, it converts to the ammonium cation called glycinium; above about pH 9.6, it converts to glycinate.
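As a rough illustration of these equilibria, the following Python sketch (added for illustration and not part of the original article; the pKa values of about 2.4 and 9.6 are simply those quoted above, and all names are illustrative) computes the fraction of glycinium, zwitterion and glycinate present at a given pH from the standard diprotic-acid equilibrium expressions:

    # Approximate acid dissociation constants taken from the text above:
    # carboxyl group pKa ~ 2.4 (glycinium <-> zwitterion),
    # ammonium group pKa ~ 9.6 (zwitterion <-> glycinate).
    PKA1 = 2.4
    PKA2 = 9.6

    def species_fractions(ph):
        """Return (glycinium, zwitterion, glycinate) fractions at a given pH.

        Treats glycine as a diprotic acid H2A+/HA/A- and uses the standard
        equilibrium expressions; activity effects are ignored.
        """
        h = 10.0 ** (-ph)
        k1 = 10.0 ** (-PKA1)
        k2 = 10.0 ** (-PKA2)
        denom = h * h + h * k1 + k1 * k2
        cation = h * h / denom        # glycinium, dominant below pH ~2.4
        zwitterion = h * k1 / denom   # neutral zwitterion, dominant near pH 6
        anion = k1 * k2 / denom       # glycinate, dominant above pH ~9.6
        return cation, zwitterion, anion

    if __name__ == "__main__":
        for ph in (1.0, 6.0, 11.0):
            c, z, a = species_fractions(ph)
            print(f"pH {ph:4.1f}: glycinium {c:.3f}, zwitterion {z:.3f}, glycinate {a:.3f}")
        # The isoelectric point is approximately the mean of the two pKa values.
        print("approx. isoelectric point:", (PKA1 + PKA2) / 2)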
Glycine functions as a bidentate ligand for many metal ions, forming amino acid complexes. A typical complex is Cu(glycinate)2, i.e. Cu(H2NCH2CO2)2, which exists both in cis and trans isomers.
With acid chlorides, glycine converts to amidocarboxylic acids, such as hippuric acid and acetylglycine. With nitrous acid, one obtains glycolic acid (van Slyke determination). With methyl iodide, the amine becomes quaternized to give trimethylglycine, a natural product:
H3N+CH2CO2− + 3 CH3I → (CH3)3N+CH2CO2− + 3 HI
Glycine condenses with itself to give peptides, beginning with the formation of glycylglycine:
2 H3N+CH2CO2− → H3N+CH2C(O)NHCH2CO2− + H2O
Pyrolysis of glycine or glycylglycine gives 2,5-diketopiperazine, the cyclic diamide.
Glycine forms esters with alcohols. They are often isolated as their hydrochloride, such as glycine methyl ester hydrochloride. Otherwise, the free ester tends to convert to diketopiperazine.
As a bifunctional molecule, glycine reacts with many reagents. These can be classified into N-centered and carboxylate-centered reactions.
Metabolism
Biosynthesis
Glycine is not essential to the human diet, as it is biosynthesized in the body from the amino acid serine, which is in turn derived from 3-phosphoglycerate. In most organisms, the enzyme serine hydroxymethyltransferase catalyses this transformation via the cofactor pyridoxal phosphate:
serine + tetrahydrofolate → glycine + N5,N10-methylene tetrahydrofolate + H2O
In E. coli, glycine is sensitive to antibiotics that target folate.
In the liver of vertebrates, glycine synthesis is catalyzed by glycine synthase (also called glycine cleavage enzyme). This conversion is readily reversible:
CO2 + NH4+ + N5,N10-methylene tetrahydrofolate + NADH + H+ ⇌ Glycine + tetrahydrofolate + NAD+
In addition to being synthesized from serine, glycine can also be derived from threonine, choline or hydroxyproline via inter-organ metabolism of the liver and kidneys.
Degradation
Glycine is degraded via three pathways. The predominant pathway in animals and plants is the reverse of the glycine synthase pathway mentioned above. In this context, the enzyme system involved is usually called the glycine cleavage system:
Glycine + tetrahydrofolate + NAD+ ⇌ CO2 + NH4+ + N5,N10-methylene tetrahydrofolate + NADH + H+
In the second pathway, glycine is degraded in two steps. The first step is the reverse of glycine biosynthesis from serine with serine hydroxymethyl transferase. Serine is then converted to pyruvate by serine dehydratase.
In the third pathway of its degradation, glycine is converted to glyoxylate by D-amino acid oxidase. Glyoxylate is then oxidized by hepatic lactate dehydrogenase to oxalate in an NAD+-dependent reaction.
The half-life of glycine and its elimination from the body varies significantly based on dose. In one study, the half-life varied between 0.5 and 4.0 hours.
Physiological function
The principal function of glycine is to act as a precursor to proteins. Most proteins incorporate only small quantities of glycine, a notable exception being collagen, which contains about 35% glycine due to its periodically repeated role in the formation of collagen's helix structure in conjunction with hydroxyproline. In the genetic code, glycine is coded by all codons starting with GG, namely GGU, GGC, GGA and GGG.
As a biosynthetic intermediate
In higher eukaryotes, δ-aminolevulinic acid, the key precursor to porphyrins, is biosynthesized from glycine and succinyl-CoA by the enzyme ALA synthase. Glycine provides the central C2N subunit of all purines.
As a neurotransmitter
Glycine is an inhibitory neurotransmitter in the central nervous system, especially in the spinal cord, brainstem, and retina. When glycine receptors are activated, chloride enters the neuron via ionotropic receptors, causing an inhibitory postsynaptic potential (IPSP). Strychnine is a strong antagonist at ionotropic glycine receptors, whereas bicuculline is a weak one. Glycine is a required co-agonist along with glutamate for NMDA receptors. In contrast to the inhibitory role of glycine in the spinal cord, this behaviour is facilitated at the (NMDA) glutamatergic receptors, which are excitatory. The median lethal dose (LD50) of glycine is 7930 mg/kg in rats (oral), and it usually causes death by hyperexcitability.
As a toxin conjugation agent
The glycine conjugation pathway has not been fully investigated. Glycine is thought to be a hepatic detoxifier of a number of endogenous and xenobiotic organic acids. Bile acids are normally conjugated to glycine in order to increase their solubility in water.
The human body rapidly clears sodium benzoate by combining it with glycine to form hippuric acid which is then excreted. The metabolic pathway for this begins with the conversion of benzoate by butyrate-CoA ligase into an intermediate product, benzoyl-CoA, which is then metabolized by glycine N-acyltransferase into hippuric acid.
Uses
In the US, glycine is typically sold in two grades: United States Pharmacopeia ("USP"), and technical grade. USP grade sales account for approximately 80 to 85 percent of the U.S. market for glycine. If purity greater than the USP standard is needed, for example for intravenous injections, a more expensive pharmaceutical grade glycine can be used. Technical grade glycine, which may or may not meet USP grade standards, is sold at a lower price for use in industrial applications, e.g., as an agent in metal complexing and finishing.
Animal and human foods
Glycine is not widely used in foods for its nutritional value, except in infusions. Instead, glycine's role in food chemistry is as a flavorant. It is mildly sweet, and it counters the aftertaste of saccharin. It also has preservative properties, perhaps owing to its complexation to metal ions. Metal glycinate complexes, e.g. copper(II) glycinate, are used as supplements for animal feeds.
The U.S. Food and Drug Administration "no longer regards glycine and its salts as generally recognized as safe for use in human food", and only permits food uses of glycine in certain conditions.
Glycine has been researched for its potential to extend life. The proposed mechanisms of this effect are its ability to clear methionine from the body and to activate autophagy.
Chemical feedstock
Glycine is an intermediate in the synthesis of a variety of chemical products. It is used in the manufacture of the herbicides glyphosate, iprodione, glyphosine, imiprothrin, and eglinazine. It is used as an intermediate of antibiotics such as thiamphenicol.
Laboratory research
Glycine is a significant component of some solutions used in the SDS-PAGE method of protein analysis. It serves as a buffering agent, maintaining pH and preventing sample damage during electrophoresis. Glycine is also used to remove protein-labeling antibodies from Western blot membranes to enable the probing of numerous proteins of interest from SDS-PAGE gel. This allows more data to be drawn from the same specimen, increasing the reliability of the data, reducing the amount of sample processing, and number of samples required. This process is known as stripping.
Presence in space
The presence of glycine outside the Earth was confirmed in 2009, based on the analysis of samples that had been taken in 2004 by the NASA spacecraft Stardust from comet Wild 2 and subsequently returned to Earth. Glycine had previously been identified in the Murchison meteorite in 1970. The discovery of glycine in outer space bolstered the hypothesis of so-called soft-panspermia, which claims that the "building blocks" of life are widespread throughout the universe. In 2016, detection of glycine within Comet 67P/Churyumov–Gerasimenko by the Rosetta spacecraft was announced.
The detection of glycine outside the Solar System in the interstellar medium has been debated.
Evolution
Glycine is proposed to have been encoded by early genetic codes. For example, low-complexity regions in proteins, which may resemble the proto-peptides of the early genetic code, are highly enriched in glycine.
Presence in foods
| Biology and health sciences | Amino acids | Biology |
11866 | https://en.wikipedia.org/wiki/Global%20Positioning%20System | Global Positioning System | The Global Positioning System (GPS), originally Navstar GPS, is a satellite-based radio navigation system owned by the United States Space Force and operated by Mission Delta 31. It is one of the global navigation satellite systems (GNSS) that provide geolocation and time information to a GPS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites. It does not require the user to transmit any data, and operates independently of any telephone or Internet reception, though these technologies can enhance the usefulness of the GPS positioning information. It provides critical positioning capabilities to military, civil, and commercial users around the world. Although the United States government created, controls, and maintains the GPS system, it is freely accessible to anyone with a GPS receiver.
Overview
The GPS project was started by the U.S. Department of Defense in 1973. The first prototype spacecraft was launched in 1978 and the full constellation of 24 satellites became operational in 1993. After Korean Air Lines Flight 007 was shot down when it mistakenly entered Soviet airspace, President Ronald Reagan announced that the GPS system would be made available for civilian use as of September 16, 1983; however, initially this civilian use was limited to an average accuracy of by use of Selective Availability (SA), a deliberate error introduced into the GPS data that military receivers could correct for.
As civilian GPS usage grew, there was increasing pressure to remove this error. The SA system was temporarily disabled during the Gulf War, as a shortage of military GPS units meant that many US soldiers were using civilian GPS units sent from home. In the 1990s, Differential GPS systems from the US Coast Guard, Federal Aviation Administration, and similar agencies in other countries began to broadcast local GPS corrections, reducing the effect of both SA degradation and atmospheric effects (that military receivers also corrected for). The U.S. military had also developed methods to perform local GPS jamming, meaning that the ability to globally degrade the system was no longer necessary. As a result, United States President Bill Clinton signed a bill ordering that Selective Availability be disabled on May 1, 2000; and, in 2007, the US government announced that the next generation of GPS satellites would not include the feature at all.
Advances in technology and new demands on the existing system have now led to efforts to modernize the GPS and implement the next generation of GPS Block III satellites and Next Generation Operational Control System (OCX) which was authorized by the U.S. Congress in 2000. When Selective Availability was discontinued, GPS was accurate to about . GPS receivers that use the L5 band have much higher accuracy of , while those for high-end applications such as engineering and land surveying are accurate to within and can even provide sub-millimeter accuracy with long-term measurements. Consumer devices such as smartphones can be accurate to or better when used with assistive services like Wi-Fi positioning.
18 GPS satellites broadcast L5 signals, which are considered pre-operational prior to being broadcast by a full complement of 24 satellites in 2027.
History
The GPS project was launched in the United States in 1973 to overcome the limitations of previous navigation systems, combining ideas from several predecessors, including classified engineering design studies from the 1960s. The U.S. Department of Defense developed the system, which originally used 24 satellites, for use by the United States military, and became fully operational in 1993. Civilian use was allowed from the 1980s. Roger L. Easton of the Naval Research Laboratory, Ivan A. Getting of The Aerospace Corporation, and Bradford Parkinson of the Applied Physics Laboratory are credited with inventing it. The work of Gladys West on the creation of the mathematical geodetic Earth model is credited as instrumental in the development of computational techniques for detecting satellite positions with the precision needed for GPS.
The design of GPS is based partly on similar ground-based radio-navigation systems, such as LORAN and the Decca Navigator System, developed in the early 1940s. In 1955, Friedwardt Winterberg proposed a test of general relativity—detecting time slowing in a strong gravitational field using accurate atomic clocks placed in orbit inside artificial satellites. Special and general relativity predicted that the clocks on GPS satellites, as observed by those on Earth, run 38 microseconds faster per day than those on the Earth. The design of GPS corrects for this difference; without doing so, GPS-calculated positions would accumulate errors of up to .
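As a back-of-the-envelope check of this figure (a sketch added here for illustration, not text from the article), multiplying the quoted 38-microsecond daily clock offset by the speed of light gives the ranging error that would accumulate each day if the relativistic effect were left uncorrected:

    C = 299_792_458.0          # speed of light, m/s
    drift_per_day = 38e-6      # net relativistic clock offset quoted above, seconds per day

    range_error_per_day = C * drift_per_day   # uncorrected pseudorange error after one day
    print(f"{range_error_per_day / 1000:.1f} km of accumulated ranging error per day")
    # ~11.4 km/day, i.e. roughly 10 km of position error per day if left uncorrected.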
Predecessors
When the Soviet Union launched its first artificial satellite (Sputnik 1) in 1957, two American physicists, William Guier and George Weiffenbach, at Johns Hopkins University's Applied Physics Laboratory (APL) monitored its radio transmissions. Within hours they realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit. The Director of the APL gave them access to their UNIVAC I computer to perform the heavy calculations required.
Early the next year, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem: pinpointing the user's location, given the satellite's. (At the time, the Navy was developing the submarine-launched Polaris missile, which required them to know the submarine's location.) This led them and APL to develop the TRANSIT system. In 1959, ARPA (renamed DARPA in 1972) also played a role in TRANSIT.
TRANSIT was first successfully tested in 1960. It used a constellation of five satellites and could provide a navigational fix approximately once per hour. In 1967, the U.S. Navy developed the Timation satellite, which proved the feasibility of placing accurate clocks in space, a technology required for GPS.
In the 1970s, the ground-based OMEGA navigation system, based on phase comparison of signal transmission from pairs of stations, became the first worldwide radio navigation system. Limitations of these systems drove the need for a more universal navigation solution with greater accuracy.
Although there were wide needs for accurate navigation in military and civilian sectors, almost none of those was seen as justification for the billions of dollars it would cost in research, development, deployment, and operation of a constellation of navigation satellites. During the Cold War arms race, the nuclear threat to the existence of the United States was the one need that did justify this cost in the view of the United States Congress. This deterrent effect is why GPS was funded. It is also the reason for the ultra-secrecy at that time. The nuclear triad consisted of the United States Navy's submarine-launched ballistic missiles (SLBMs) along with United States Air Force (USAF) strategic bombers and intercontinental ballistic missiles (ICBMs). Considered vital to the nuclear deterrence posture, accurate determination of the SLBM launch position was a force multiplier.
Precise navigation would enable United States ballistic missile submarines to get an accurate fix of their positions before they launched their SLBMs. The USAF, with two-thirds of the nuclear triad, also had requirements for a more accurate and reliable navigation system. The U.S. Navy and U.S. Air Force were developing their own technologies in parallel to solve what was essentially the same problem. To increase the survivability of ICBMs, there was a proposal to use mobile launch platforms (comparable to the Soviet SS-24 and SS-25) and so the need to fix the launch position had similarity to the SLBM situation.
In 1960, the Air Force proposed a radio-navigation system called MOSAIC (MObile System for Accurate ICBM Control) that was essentially a 3-D LORAN System. A follow-on study, Project 57, was performed in 1963 and it was "in this study that the GPS concept was born". That same year, the concept was pursued as Project 621B, which had "many of the attributes that you now see in GPS" and promised increased accuracy for U.S. Air Force bombers as well as ICBMs.
Updates from the Navy TRANSIT system were too slow for the high speeds of Air Force operation. The Naval Research Laboratory (NRL) continued making advances with their Timation (Time Navigation) satellites, first launched in 1967, second launched in 1969, with the third in 1974 carrying the first atomic clock into orbit and the fourth launched in 1977.
Another important predecessor to GPS came from a different branch of the United States military. In 1964, the United States Army orbited its first Sequential Collation of Range (SECOR) satellite used for geodetic surveying. The SECOR system included three ground-based transmitters at known locations that would send signals to the satellite transponder in orbit. A fourth ground-based station, at an undetermined position, could then use those signals to fix its location precisely. The last SECOR satellite was launched in 1969.
Development
With these parallel developments in the 1960s, it was realized that a superior system could be developed by synthesizing the best technologies from 621B, Transit, Timation, and SECOR in a multi-service program. Satellite orbital position errors, induced by variations in the gravity field and radar refraction among others, had to be resolved. A team led by Harold L. Jury of Pan Am Aerospace Division in Florida from 1970 to 1973, used real-time data assimilation and recursive estimation to do so, reducing systematic and residual errors to a manageable level to permit accurate navigation.
During Labor Day weekend in 1973, a meeting of about twelve military officers at the Pentagon discussed the creation of a Defense Navigation Satellite System (DNSS). It was at this meeting that the real synthesis that became GPS was created. Later that year, the DNSS program was named Navstar. Navstar is often erroneously considered an acronym for "NAVigation System using Timing And Ranging" but was never considered as such by the GPS Joint Program Office (TRW may have once advocated for a different navigational system that used that acronym). With the individual satellites being associated with the name Navstar (as with the predecessors Transit and Timation), a more fully encompassing name was used to identify the constellation of Navstar satellites, Navstar-GPS. Ten "Block I" prototype satellites were launched between 1978 and 1985 (an additional unit was destroyed in a launch failure).
The effect of the ionosphere on radio transmission was investigated in a geophysics laboratory of Air Force Cambridge Research Laboratory, renamed to Air Force Geophysical Research Lab (AFGRL) in 1974. AFGRL developed the Klobuchar model for computing ionospheric corrections to GPS location. Of note is work done by Australian space scientist Elizabeth Essex-Cohen at AFGRL in 1974. She was concerned with the curving of the paths of radio waves (atmospheric refraction) traversing the ionosphere from NavSTAR satellites.
After Korean Air Lines Flight 007, a Boeing 747 carrying 269 people, was shot down by a Soviet interceptor aircraft after straying into prohibited airspace because of navigational errors, in the vicinity of Sakhalin and Moneron Islands, President Ronald Reagan issued a directive making GPS freely available for civilian use, once it was sufficiently developed, as a common good. The first Block II satellite was launched on February 14, 1989, and the 24th satellite was launched in 1994. The GPS program cost at this point, not including the cost of the user equipment but including the costs of the satellite launches, has been estimated at US$5 billion.
Initially, the highest-quality signal was reserved for military use, and the signal available for civilian use was intentionally degraded, in a policy known as Selective Availability. This changed on May 1, 2000, with U.S. President Bill Clinton signing a policy directive to turn off Selective Availability to provide the same accuracy to civilians that was afforded to the military. The directive was proposed by the U.S. Secretary of Defense, William Perry, in view of the widespread growth of differential GPS services by private industry to improve civilian accuracy. Moreover, the U.S. military was developing technologies to deny GPS service to potential adversaries on a regional basis. Selective Availability was removed from the GPS architecture beginning with GPS-III.
Since its deployment, the U.S. has implemented several improvements to the GPS service, including new signals for civil use and increased accuracy and integrity for all users, all the while maintaining compatibility with existing GPS equipment. Modernization of the satellite system has been an ongoing initiative by the U.S. Department of Defense through a series of satellite acquisitions to meet the growing needs of the military, civilians, and the commercial market. As of early 2015, high-quality Standard Positioning Service (SPS) GPS receivers provided horizontal accuracy of better than , although many factors such as receiver and antenna quality and atmospheric issues can affect this accuracy.
GPS is owned and operated by the United States government as a national resource. The Department of Defense is the steward of GPS. The Interagency GPS Executive Board (IGEB) oversaw GPS policy matters from 1996 to 2004. After that, the National Space-Based Positioning, Navigation and Timing Executive Committee was established by presidential directive in 2004 to advise and coordinate federal departments and agencies on matters concerning the GPS and related systems. The executive committee is chaired jointly by the Deputy Secretaries of Defense and Transportation. Its membership includes equivalent-level officials from the Departments of State, Commerce, and Homeland Security, the Joint Chiefs of Staff and NASA. Components of the executive office of the president participate as observers to the executive committee, and the FCC chairman participates as a liaison.
The U.S. Department of Defense is required by law to "maintain a Standard Positioning Service (as defined in the federal radio navigation plan and the standard positioning service signal specification) that will be available on a continuous, worldwide basis" and "develop measures to prevent hostile use of GPS and its augmentations without unduly disrupting or degrading civilian uses".
Timeline and modernization
In 1972, the U.S. Air Force Central Inertial Guidance Test Facility (Holloman Air Force Base) conducted developmental flight tests of four prototype GPS receivers in a Y configuration over White Sands Missile Range, using ground-based pseudo-satellites.
In 1978, the first experimental Block-I GPS satellite was launched.
In 1983, after Soviet Union interceptor aircraft shot down the civilian airliner KAL 007 that strayed into prohibited airspace because of navigational errors, killing all 269 people on board, U.S. President Ronald Reagan announced that GPS would be made available for civilian uses once it was completed, although it had been publicly known as early as 1979 that the C/A code (Coarse/Acquisition code) would be available to civilian users.
By 1985, ten more experimental Block-I satellites had been launched to validate the concept.
Beginning in 1988, command and control of these satellites was moved from Onizuka AFS, California to the 2nd Satellite Control Squadron (2SCS) located at Schriever Space Force Base in Colorado Springs, Colorado.
On February 14, 1989, the first modern Block-II satellite was launched.
The Gulf War from 1990 to 1991 was the first conflict in which the military widely used GPS.
In 1991, DARPA's project to create a miniature GPS receiver successfully ended, replacing the previous military receivers with an all-digital handheld GPS receiver.
In 1991, TomTom, a Dutch sat-nav manufacturer was founded.
In 1992, the 2nd Space Wing, which originally managed the system, was inactivated and replaced by the 50th Space Wing.
By December 1993, GPS achieved initial operational capability (IOC), with a full constellation (24 satellites) available and providing the Standard Positioning Service (SPS).
Full Operational Capability (FOC) was declared by Air Force Space Command (AFSPC) in April 1995, signifying full availability of the military's secure Precise Positioning Service (PPS).
In 1996, recognizing the importance of GPS to civilian users as well as military users, U.S. President Bill Clinton issued a policy directive declaring GPS a dual-use system and establishing an Interagency GPS Executive Board to manage it as a national asset.
In 1998, United States Vice President Al Gore announced plans to upgrade GPS with two new civilian signals for enhanced user accuracy and reliability, particularly with respect to aviation safety, and in 2000 the United States Congress authorized the effort, referring to it as GPS III.
On May 2, 2000, "Selective Availability" was discontinued as a result of the 1996 executive order, allowing civilian users to receive a non-degraded signal globally.
In 2004, the United States government signed an agreement with the European Community establishing cooperation related to GPS and Europe's Galileo system.
In 2004, United States President George W. Bush updated the national policy and replaced the executive board with the National Executive Committee for Space-Based Positioning, Navigation, and Timing.
In November 2004, Qualcomm announced successful tests of assisted GPS for mobile phones.
In 2005, the first modernized GPS satellite was launched and began transmitting a second civilian signal (L2C) for enhanced user performance.
On September 14, 2007, the aging mainframe-based Ground segment Control System was transferred to the new Architecture Evolution Plan.
On May 19, 2009, the United States Government Accountability Office issued a report warning that some GPS satellites could fail as soon as 2010.
On May 21, 2009, the Air Force Space Command allayed fears of GPS failure, saying: "There's only a small risk we will not continue to exceed our performance standard."
On January 11, 2010, an update of ground control systems caused a software incompatibility with 8,000 to 10,000 military receivers manufactured by a division of Trimble Navigation Limited of Sunnyvale, California.
On February 25, 2010, the U.S. Air Force awarded the contract to Raytheon Company to develop the GPS Next Generation Operational Control System (OCX) to improve accuracy and availability of GPS navigation signals, and serve as a critical part of GPS modernization.
On July 24, 2020, operation of the GPS constellation was transferred to the newly established U.S. Space Force.
On October 13, 2023, the Space Force activated PNT Delta (Provisional) to manage US navigation warfare assets. 2SOPS and GPS operations were realigned under this new Delta.
Awards
On February 10, 1993, the National Aeronautic Association selected the GPS Team as winners of the 1992 Robert J. Collier Trophy, the US's most prestigious aviation award. This team combines researchers from the Naval Research Laboratory, the U.S. Air Force, the Aerospace Corporation, Rockwell International Corporation, and IBM Federal Systems Company. The citation honors them "for the most significant development for safe and efficient navigation and surveillance of air and spacecraft since the introduction of radio navigation 50 years ago".
Two GPS developers received the National Academy of Engineering Charles Stark Draper Prize for 2003:
Ivan Getting, emeritus president of The Aerospace Corporation and an engineer at Massachusetts Institute of Technology, established the basis for GPS, improving on the World War II land-based radio system called LORAN (Long-range Radio Aid to Navigation).
Bradford Parkinson, professor of aeronautics and astronautics at Stanford University, conceived the present satellite-based system in the early 1960s and developed it in conjunction with the U.S. Air Force. Parkinson served twenty-one years in the Air Force, from 1957 to 1978, and retired with the rank of colonel.
GPS developer Roger L. Easton received the National Medal of Technology on February 13, 2006. Francis X. Kane (Col. USAF, ret.) was inducted into the U.S. Air Force Space and Missile Pioneers Hall of Fame at Lackland A.F.B., San Antonio, Texas, March 2, 2010, for his role in space technology development and the engineering design concept of GPS conducted as part of Project 621B. In 1998, GPS technology was inducted into the Space Foundation Space Technology Hall of Fame.
On October 4, 2011, the International Astronautical Federation (IAF) awarded the Global Positioning System (GPS) its 60th Anniversary Award, nominated by IAF member, the American Institute for Aeronautics and Astronautics (AIAA). The IAF Honors and Awards Committee recognized the uniqueness of the GPS program and the exemplary role it has played in building international collaboration for the benefit of humanity. On December 6, 2018, Gladys West was inducted into the Air Force Space and Missile Pioneers Hall of Fame in recognition of her work on an extremely accurate geodetic Earth model, which was ultimately used to determine the orbit of the GPS constellation. On February 12, 2019, four founding members of the project were awarded the Queen Elizabeth Prize for Engineering with the chair of the awarding board stating: "Engineering is the foundation of civilisation; ...They've re-written, in a major way, the infrastructure of our world."
Principles
The GPS satellites carry very stable atomic clocks that are synchronized with one another and with the reference atomic clocks at the ground control stations; any drift of the clocks aboard the satellites from the reference time maintained on the ground stations is corrected regularly. Since the speed of radio waves (speed of light) is constant and independent of the satellite speed, the time delay between when the satellite transmits a signal and the ground station receives it is proportional to the distance from the satellite to the ground station. With the distance information collected from multiple ground stations, the location coordinates of any satellite at any time can be calculated with great precision.
Each GPS satellite carries an accurate record of its own position and time, and broadcasts that data continuously. Based on data received from multiple GPS satellites, an end user's GPS receiver can calculate its own four-dimensional position in spacetime; however, at a minimum, four satellites must be in view of the receiver for it to compute four unknown quantities (three position coordinates and the deviation of its own clock from satellite time).
More detailed description
Each GPS satellite continually broadcasts a signal (carrier wave with modulation) that includes:
A pseudorandom code (sequence of ones and zeros) that is known to the receiver. By time-aligning a receiver-generated version and the receiver-measured version of the code, the time of arrival (TOA) of a defined point in the code sequence, called an epoch, can be found in the receiver clock time scale
A message that includes the time of transmission (TOT) of the code epoch (in GPS time scale) and the satellite position at that time
Conceptually, the receiver measures the TOAs (according to its own clock) of four satellite signals. From the TOAs and the TOTs, the receiver forms four time of flight (TOF) values, which (given the speed of light) are approximately equivalent to the receiver–satellite ranges plus the time difference between the receiver and the GPS satellites multiplied by the speed of light; these values are called pseudo-ranges. The receiver then computes its three-dimensional position and clock deviation from the four TOFs.
In practice the receiver position (in three dimensional Cartesian coordinates with origin at the Earth's center) and the offset of the receiver clock relative to the GPS time are computed simultaneously, using the navigation equations to process the TOFs.
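A minimal sketch of such a computation is given below, assuming a plain Gauss–Newton least-squares iteration (real receivers use more elaborate algorithms); the satellite coordinates, receiver position and clock bias are fabricated purely for illustration:

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def solve_position(sat_pos, pseudoranges, iters=10):
        """Estimate receiver ECEF position (m) and clock bias (s) from pseudoranges.

        sat_pos      : (n, 3) array of satellite ECEF positions at transmit time
        pseudoranges : (n,) array of measured pseudoranges in metres
        Solves rho_i = |sat_i - x| + c*b by linearized least squares (Gauss-Newton).
        """
        sat_pos = np.asarray(sat_pos, dtype=float)
        rho = np.asarray(pseudoranges, dtype=float)
        x = np.zeros(4)                            # [x, y, z, c*b], start at Earth's centre
        for _ in range(iters):
            diff = sat_pos - x[:3]                 # vectors from receiver to satellites
            ranges = np.linalg.norm(diff, axis=1)  # geometric ranges
            residual = rho - (ranges + x[3])       # measured minus predicted pseudoranges
            # Jacobian rows: [-unit vector toward satellite, 1]
            J = np.hstack([-diff / ranges[:, None], np.ones((len(rho), 1))])
            dx, *_ = np.linalg.lstsq(J, residual, rcond=None)
            x += dx
            if np.linalg.norm(dx) < 1e-4:
                break
        return x[:3], x[3] / C                     # position in metres, clock bias in seconds

    # Illustrative (fabricated) geometry: four satellites roughly 26,600 km from Earth's centre.
    sats = np.array([
        [15600e3,  7540e3, 20140e3],
        [18760e3,  2750e3, 18610e3],
        [17610e3, 14630e3, 13480e3],
        [19170e3,   610e3, 18390e3],
    ])
    true_pos = np.array([-40e3, 10e3, 6370e3])     # a point near the Earth's surface
    true_bias = 3.2e-7                             # a 320 ns receiver clock error
    measured = np.linalg.norm(sats - true_pos, axis=1) + C * true_bias

    pos, bias = solve_position(sats, measured)
    print("estimated position (m):", np.round(pos, 1))
    print("estimated clock bias (ns):", bias * 1e9)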
The receiver's Earth-centered solution location is usually converted to latitude, longitude and height relative to an ellipsoidal Earth model. The height may then be further converted to height relative to the geoid, which is essentially mean sea level. These coordinates may be displayed, such as on a moving map display, or recorded or used by some other system, such as a vehicle guidance system.
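The Earth-centered-to-geodetic conversion can likewise be sketched with a short fixed-point iteration; the snippet below assumes WGS-84 ellipsoid constants and is illustrative rather than a method mandated by any GPS specification:

    import math

    # WGS-84 ellipsoid constants
    A = 6378137.0                 # semi-major axis, m
    F = 1 / 298.257223563         # flattening
    E2 = F * (2 - F)              # first eccentricity squared

    def ecef_to_geodetic(x, y, z, iters=10):
        """Convert ECEF coordinates (m) to geodetic latitude, longitude (deg) and height (m).

        Simple fixed-point iteration; not robust exactly at the poles.
        """
        lon = math.atan2(y, x)
        p = math.hypot(x, y)
        lat = math.atan2(z, p * (1 - E2))          # initial latitude guess
        h = 0.0
        for _ in range(iters):
            n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius
            h = p / math.cos(lat) - n
            lat = math.atan2(z, p * (1 - E2 * n / (n + h)))
        return math.degrees(lat), math.degrees(lon), h

    # Example: a point 100 m above the ellipsoid on the equator at the prime meridian.
    print(ecef_to_geodetic(6378137.0 + 100.0, 0.0, 0.0))     # ~ (0.0, 0.0, 100.0)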
User-satellite geometry
Although usually not formed explicitly in the receiver processing, the conceptual time differences of arrival (TDOAs) define the measurement geometry. Each TDOA corresponds to a hyperboloid of revolution (see Multilateration). The line connecting the two satellites involved (and its extensions) forms the axis of the hyperboloid. The receiver is located at the point where three hyperboloids intersect.
It is sometimes incorrectly said that the user location is at the intersection of three spheres. While simpler to visualize, this is the case only if the receiver has a clock synchronized with the satellite clocks (i.e., the receiver measures true ranges to the satellites rather than range differences). There are marked performance benefits to the user carrying a clock synchronized with the satellites. Foremost is that only three satellites are needed to compute a position solution. If it were an essential part of the GPS concept that all users needed to carry a synchronized clock, a smaller number of satellites could be deployed, but the cost and complexity of the user equipment would increase.
Receiver in continuous operation
The description above is representative of a receiver start-up situation. Most receivers have a track algorithm, sometimes called a tracker, that combines sets of satellite measurements collected at different times—in effect, taking advantage of the fact that successive receiver positions are usually close to each other. After a set of measurements are processed, the tracker predicts the receiver location corresponding to the next set of satellite measurements. When the new measurements are collected, the receiver uses a weighting scheme to combine the new measurements with the tracker prediction. In general, a tracker can (a) improve receiver position and time accuracy, (b) reject bad measurements, and (c) estimate receiver speed and direction.
The disadvantage of a tracker is that changes in speed or direction can be computed only with a delay, and that derived direction becomes inaccurate when the distance traveled between two position measurements drops below or near the random error of position measurement. GPS units can use measurements of the Doppler shift of the signals received to compute velocity accurately. More advanced navigation systems use additional sensors like a compass or an inertial navigation system to complement GPS.
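A toy version of such a weighting scheme (a sketch for illustration, not what any production receiver actually runs) is an alpha–beta tracker that blends each new position fix with a prediction made from the previous position and velocity estimates:

    import numpy as np

    def alpha_beta_track(fixes, dt=1.0, alpha=0.5, beta=0.1):
        """Smooth a sequence of noisy position fixes with an alpha-beta tracker.

        fixes : (n, 3) array of raw position solutions, one per epoch, dt seconds apart.
        Returns smoothed positions and estimated velocities. Larger alpha/beta trust
        the new measurement more; smaller values trust the prediction more.
        """
        fixes = np.asarray(fixes, dtype=float)
        pos = fixes[0].copy()
        vel = np.zeros(3)
        out_pos, out_vel = [pos.copy()], [vel.copy()]
        for z in fixes[1:]:
            pred = pos + vel * dt          # predict where the receiver should be now
            resid = z - pred               # innovation: new fix minus prediction
            pos = pred + alpha * resid     # blend prediction with the new measurement
            vel = vel + (beta / dt) * resid
            out_pos.append(pos.copy())
            out_vel.append(vel.copy())
        return np.array(out_pos), np.array(out_vel)

    # Illustrative use: a receiver moving east at 10 m/s with 5 m of measurement noise.
    rng = np.random.default_rng(0)
    truth = np.array([[10.0 * t, 0.0, 0.0] for t in range(60)])
    raw = truth + rng.normal(0.0, 5.0, truth.shape)
    smooth, vel = alpha_beta_track(raw)
    print("final speed estimate (m/s):", np.round(np.linalg.norm(vel[-1]), 2))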
Non-navigation applications
GPS requires four or more satellites to be visible for accurate navigation. The solution of the navigation equations gives the position of the receiver along with the difference between the time kept by the receiver's on-board clock and the true time-of-day, thereby eliminating the need for a more precise and possibly impractical receiver-based clock. Applications for GPS, such as time transfer, traffic signal timing, and synchronization of cell phone base stations, make use of this cheap and highly accurate timing. Some GPS applications use this time only for display, or do not use it at all beyond the basic position calculations.
Although four satellites are required for normal operation, fewer may suffice in special cases. If one variable is already known, a receiver can determine its position using only three satellites. For example, a ship on the open ocean usually has a known elevation close to 0 m, and the elevation of an aircraft may be known. Some GPS receivers may use additional clues or assumptions such as reusing the last known altitude, dead reckoning, inertial navigation, or including information from the vehicle computer, to give a (possibly degraded) position when fewer than four satellites are visible.
Structure
The current GPS consists of three major segments. These are the space segment, a control segment, and a user segment. The U.S. Space Force develops, maintains, and operates the space and control segments. GPS satellites broadcast signals from space, and each GPS receiver uses these signals to calculate its three-dimensional location (latitude, longitude, and altitude) and the current time.
Space segment
The space segment (SS) is composed of 24 to 32 satellites, or Space Vehicles (SV), in medium Earth orbit, and also includes the payload adapters to the boosters required to launch them into orbit. The GPS design originally called for 24 SVs, eight each in three approximately circular orbits, but this was modified to six orbital planes with four satellites each. The six orbit planes have approximately 55° inclination (tilt relative to the Earth's equator) and are separated by 60° right ascension of the ascending node (angle along the equator from a reference point to the orbit's intersection). The orbital period is one-half of a sidereal day, i.e., 11 hours and 58 minutes, so that the satellites pass over the same locations or almost the same locations every day. The orbits are arranged so that at least six satellites are always within line of sight from everywhere on the Earth's surface (see animation at right). The result of this objective is that the four satellites are not evenly spaced (90°) apart within each orbit. In general terms, the angular difference between satellites in each orbit is 30°, 105°, 120°, and 105° apart, which sum to 360°.
Orbiting at an altitude of approximately ; orbital radius of approximately , each SV makes two complete orbits each sidereal day, repeating the same ground track each day. This was very helpful during development because even with only four satellites, correct alignment means all four are visible from one spot for a few hours each day. For military operations, the ground track repeat can be used to ensure good coverage in combat zones.
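The half-sidereal-day orbital period quoted above fixes the size of the orbit through Kepler's third law. The following rough check (added for illustration; it uses a standard value for Earth's gravitational parameter rather than figures from this article) recovers the orbital radius and altitude implied by that period:

    import math

    MU_EARTH = 3.986004418e14        # standard gravitational parameter of Earth, m^3/s^2
    SIDEREAL_DAY = 86164.0905        # seconds
    EARTH_RADIUS = 6378137.0         # equatorial radius, m (WGS-84)

    period = SIDEREAL_DAY / 2        # "11 hours and 58 minutes"
    # Kepler's third law: T^2 = 4*pi^2 * a^3 / mu  =>  a = (mu * T^2 / (4*pi^2))^(1/3)
    semi_major_axis = (MU_EARTH * period ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)
    altitude = semi_major_axis - EARTH_RADIUS

    print(f"orbital radius ~ {semi_major_axis / 1000:.0f} km")   # about 26,560 km
    print(f"altitude       ~ {altitude / 1000:.0f} km")          # about 20,180 km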
There are 31 satellites in the GPS constellation, 27 of which are in use at a given time, with the rest allocated as stand-bys. A 32nd was launched in 2018, but as of July 2019 is still in evaluation. More decommissioned satellites are in orbit and available as spares. The additional satellites improve the precision of GPS receiver calculations by providing redundant measurements. With the increased number of satellites, the constellation was changed to a nonuniform arrangement. Such an arrangement was shown to improve accuracy but also improves reliability and availability of the system, relative to a uniform system, when multiple satellites fail. With the expanded constellation, nine satellites are usually visible at any time from any point on the Earth with a clear horizon, ensuring considerable redundancy over the minimum four satellites needed for a position.
Control segment
The control segment (CS) is composed of:
a master control station (MCS),
an alternative master control station,
four dedicated ground antennas, and
six dedicated monitor stations.
The MCS can also access Satellite Control Network (SCN) ground antennas (for additional command and control capability) and NGA (National Geospatial-Intelligence Agency) monitor stations. The flight paths of the satellites are tracked by dedicated U.S. Space Force monitoring stations in Hawaii, Kwajalein Atoll, Ascension Island, Diego Garcia, Colorado Springs, Colorado and Cape Canaveral, Florida, along with shared NGA monitor stations operated in England, Argentina, Ecuador, Bahrain, Australia and Washington, DC. The tracking information is sent to the MCS at Schriever Space Force Base ESE of Colorado Springs, which is operated by the 2nd Space Operations Squadron (2 SOPS) of the U.S. Space Force. Then 2 SOPS contacts each GPS satellite regularly with a navigational update using dedicated or shared (AFSCN) ground antennas (GPS dedicated ground antennas are located at Kwajalein, Ascension Island, Diego Garcia, and Cape Canaveral). These updates synchronize the atomic clocks on board the satellites to within a few nanoseconds of each other, and adjust the ephemeris of each satellite's internal orbital model. The updates are created by a Kalman filter that uses inputs from the ground monitoring stations, space weather information, and various other inputs.
When a satellite's orbit is being adjusted, the satellite is marked unhealthy, so receivers do not use it. After the maneuver, engineers track the new orbit from the ground, upload the new ephemeris, and mark the satellite healthy again. The operation control segment (OCS) currently serves as the control segment of record. It provides the operational capability that supports GPS users and keeps the GPS operational and performing within specification.
OCS replaced the 1970s-era mainframe computer at Schriever Air Force Base in September 2007. After installation, the system helped enable upgrades and provide a foundation for a new security architecture that supported U.S. armed forces.
OCS will continue to be the ground control system of record until the new segment, Next Generation GPS Operation Control System (OCX), is fully developed and functional. The U.S. Department of Defense has claimed that the new capabilities provided by OCX will be the cornerstone for enhancing GPS's mission capabilities, enabling U.S. Space Force to enhance GPS operational services to U.S. combat forces, civil partners and domestic and international users. The GPS OCX program also will reduce cost, schedule and technical risk. It is designed to provide 50% sustainment cost savings through efficient software architecture and Performance-Based Logistics. In addition, GPS OCX is expected to cost millions of dollars less than the cost to upgrade OCS while providing four times the capability.
The GPS OCX program represents a critical part of GPS modernization and provides information assurance improvements over the current GPS OCS program.
OCX will have the ability to control and manage GPS legacy satellites as well as the next generation of GPS III satellites, while enabling the full array of military signals.
Is built on a flexible architecture that can rapidly adapt to the changing needs of GPS users, allowing immediate access to GPS data and constellation status through secure, accurate and reliable information.
Provides the warfighter with more secure, actionable and predictive information to enhance situational awareness.
Enables new modernized signals (L1C, L2C, and L5) and has M-code capability, which the legacy system is unable to do.
Provides significant information assurance improvements over the current program including detecting and preventing cyber attacks, while isolating, containing and operating during such attacks.
Supports higher volume near real-time command and control capabilities and abilities.
On September 14, 2011, the U.S. Air Force announced the completion of GPS OCX Preliminary Design Review and confirmed that the OCX program was ready for the next phase of development. The GPS OCX program subsequently missed major milestones, pushing its launch into 2021, five years past the original deadline. According to the Government Accountability Office in 2019, even the 2021 deadline appeared unlikely to be met.
The project remained delayed in 2023, and was (as of June 2023) 73% over its original estimated budget. In late 2023, Frank Calvelli, the assistant secretary of the Air Force for space acquisitions and integration, stated that the project was estimated to go live some time during the summer of 2024.
User segment
The user segment (US) is composed of hundreds of thousands of U.S. and allied military users of the secure GPS Precise Positioning Service, and tens of millions of civil, commercial and scientific users of the Standard Positioning Service. In general, GPS receivers are composed of an antenna, tuned to the frequencies transmitted by the satellites, receiver-processors, and a highly stable clock (often a crystal oscillator). They may also include a display for providing location and speed information to the user.
GPS receivers may include an input for differential corrections, using the RTCM SC-104 format. This is typically in the form of an RS-232 port at 4,800 bit/s speed. Data is actually sent at a much lower rate, which limits the accuracy of the signal sent using RTCM. Receivers with internal DGPS receivers can outperform those using external RTCM data. Even low-cost units now commonly include Wide Area Augmentation System (WAAS) receivers.
Many GPS receivers can relay position data to a PC or other device using the NMEA 0183 protocol. Although this protocol is officially defined by the National Marine Electronics Association (NMEA), references to this protocol have been compiled from public records, allowing open source tools like gpsd to read the protocol without violating intellectual property laws. Other proprietary protocols exist as well, such as the SiRF and MTK protocols. Receivers can interface with other devices using methods including a serial connection, USB, or Bluetooth.
Applications
While originally a military project, GPS is considered a dual-use technology, meaning it has significant civilian applications as well.
GPS has become a widely deployed and useful tool for commerce, scientific uses, tracking, and surveillance. GPS's accurate time facilitates everyday activities such as banking, mobile phone operations, and even the control of power grids by allowing well synchronized hand-off switching.
Civilian
Many civilian applications use one or more of GPS's three basic components: absolute location, relative movement, and time transfer.
Amateur radio: clock synchronization required for several digital modes such as FT8, FT4 and JS8; also used with APRS for position reporting; is often critical during emergency and disaster communications support.
Atmosphere: studying the troposphere delays (recovery of the water vapor content) and ionosphere delays (recovery of the number of free electrons). Recovery of Earth surface displacements due to the atmospheric pressure loading.
Astronomy: both positional and clock synchronization data is used in astrometry and celestial mechanics and precise orbit determination. GPS is also used in both amateur astronomy with small telescopes as well as by professional observatories for finding extrasolar planets.
Automated vehicle: applying location and routes for cars and trucks to function without a human driver.
Cartography: both civilian and military cartographers use GPS extensively.
Cellular telephony: clock synchronization enables time transfer, which is critical for synchronizing its spreading codes with other base stations to facilitate inter-cell handoff and support hybrid GPS/cellular position detection for mobile emergency calls and other applications. The first handsets with integrated GPS launched in the late 1990s. The U.S. Federal Communications Commission (FCC) mandated the feature in either the handset or in the towers (for use in triangulation) in 2002 so emergency services could locate 911 callers. Third-party software developers later gained access to GPS APIs from Nextel upon launch, followed by Sprint in 2006, and Verizon soon thereafter.
Clock synchronization: the accuracy of GPS time signals (±10 ns) is second only to the atomic clocks they are based on, and is used in applications such as GPS disciplined oscillators.
Disaster relief/emergency services: many emergency services depend upon GPS for location and timing capabilities.
GPS-equipped radiosondes and dropsondes: measure and calculate the atmospheric pressure, wind speed, and direction from the Earth's surface up to stratospheric altitudes.
Radio occultation for weather and atmospheric science applications.
Fleet tracking: used to identify, locate and maintain contact reports with one or more fleet vehicles in real-time.
Geodesy: determination of Earth orientation parameters including the daily and sub-daily polar motion, and length-of-day variabilities, Earth's center-of-mass – geocenter motion, and low-degree gravity field parameters.
Geofencing: vehicle tracking systems, person tracking systems, and pet tracking systems use GPS to locate devices that are attached to or carried by a person, vehicle, or pet. The application can provide continuous tracking and send notifications if the target leaves a designated (or "fenced-in") area.
Geotagging: applies location coordinates to digital objects such as photographs (in Exif data) and other documents for purposes such as creating map overlays with devices like Nikon GP-1.
GPS aircraft tracking
GPS for mining: the use of RTK GPS has significantly improved several mining operations such as drilling, shoveling, vehicle tracking, and surveying. RTK GPS provides centimeter-level positioning accuracy.
GPS data mining: It is possible to aggregate GPS data from multiple users to understand movement patterns, common trajectories and interesting locations. GPS data is today used in transportation and disaster engineering to forecast mobility in normal and evacuation situations (e.g., hurricanes, wildfires, earthquakes).
GPS tours: location determines what content to display; for instance, information about an approaching point of interest.
Mental health: tracking mental health functioning and sociability.
Navigation: navigators value digitally precise velocity and orientation measurements, as well as precise positions in real time with the support of orbit and clock corrections.
Orbit determination of low-orbiting satellites with GPS receiver installed on board, such as GOCE, GRACE, Jason-1, Jason-2, TerraSAR-X, TanDEM-X, CHAMP, Sentinel-3, and some cubesats, e.g., CubETH.
Phasor measurements: GPS enables highly accurate timestamping of power system measurements, making it possible to compute phasors.
Recreation: for example, Geocaching, Geodashing, GPS drawing, waymarking, and other kinds of location based mobile games such as Pokémon Go.
Reference frames: realization and densification of the terrestrial reference frames in the framework of Global Geodetic Observing System. Co-location in space between Satellite laser ranging and microwave observations for deriving global geodetic parameters.
Robotics: self-navigating, autonomous robots using GPS sensors, which calculate latitude, longitude, time, speed, and heading.
Sport: used in football and rugby for the control and analysis of the training load.
Surveying: surveyors use absolute locations to make maps and determine property boundaries.
Tectonics: GPS enables direct fault motion measurement of earthquakes. Between earthquakes GPS can be used to measure crustal motion and deformation to estimate seismic strain buildup for creating seismic hazard maps.
Telematics: GPS technology integrated with computers and mobile communications technology in automotive navigation systems.
Restrictions on civilian use
The U.S. government controls the export of some civilian receivers. All GPS receivers capable of functioning above 18 km (60,000 ft) above sea level and faster than 515 m/s (1,000 knots), or designed or modified for use with unmanned missiles and aircraft, are classified as munitions (weapons), which means they require State Department export licenses. This rule applies even to otherwise purely civilian units that only receive the L1 frequency and the C/A (Coarse/Acquisition) code.
Disabling operation above these limits exempts the receiver from classification as a munition. Vendor interpretations differ: the rule refers to operation at both the target altitude and speed, but some receivers stop operating even when stationary. This has caused problems with some amateur radio balloon launches, which regularly exceed the altitude limit. These limits apply only to units or components exported from the United States. A growing trade in various components exists, including GPS units from other countries; these are expressly sold as ITAR-free.
Military
As of 2009, military GPS applications include:
Navigation: Soldiers use GPS to find objectives, even in the dark or in unfamiliar territory, and to coordinate troop and supply movement. In the United States armed forces, commanders use the Commander's Digital Assistant and lower ranks use the Soldier Digital Assistant.
Frequency-Hopping Radio Clock Coordination: Military radio systems using frequency hopping modes, such as SINCGARS and HAVEQUICK, require all radios within a network to have the same time input to their internal clocks (+/-4 seconds in the case of SINCGARS) to be on the correct frequency at a given time. Military GPS receivers, such as the Precision Lightweight GPS Receiver (PLGR) and Defense Advanced GPS Receiver (DAGR), are used by radio operators within a radio network to provide an accurate time input to those radios' internal clocks. More modern military radios have internal GPS receivers that synchronize the internal clock automatically.
Target tracking: Various military weapons systems use GPS to track potential ground and air targets before flagging them as hostile. These weapon systems pass target coordinates to precision-guided munitions to allow them to engage targets accurately. Military aircraft, particularly in air-to-ground roles, use GPS to find targets.
Missile and projectile guidance: GPS allows accurate targeting of various military weapons including ICBMs, cruise missiles, precision-guided munitions and artillery shells. Embedded GPS receivers able to withstand accelerations of 12,000 g (about 118 km/s²) have been developed for use in howitzer shells.
Search and rescue.
Reconnaissance: Patrol movement can be managed more closely.
GPS satellites carry a set of nuclear detonation detectors consisting of an optical sensor called a bhangmeter, an X-ray sensor, a dosimeter, and an electromagnetic pulse (EMP) sensor (W-sensor), that form a major portion of the United States Nuclear Detonation Detection System. General William Shelton has stated that future satellites may drop this feature to save money.
GPS-type navigation was first used in war in the 1991 Persian Gulf War, before GPS was fully developed in 1995, to help Coalition Forces navigate and perform maneuvers in the war. The war also demonstrated the vulnerability of GPS to jamming, when Iraqi forces installed jamming devices on likely targets that emitted radio noise, disrupting reception of the weak GPS signal.
GPS's vulnerability to jamming is a threat that continues to grow as jamming equipment and experience grow. GPS signals have been reported to have been jammed many times over the years for military purposes. Russia seems to have several objectives for this approach, such as intimidating neighbors while undermining confidence in their reliance on American systems, promoting its GLONASS alternative, disrupting Western military exercises, and protecting assets from drones. China uses jamming to discourage US surveillance aircraft near the contested Spratly Islands. North Korea has mounted several major jamming operations near its border with South Korea and offshore, disrupting flights, shipping and fishing operations. The Iranian Armed Forces disrupted the GPS of civilian airliner Flight PS752 when they shot down the aircraft.
In the Russo-Ukrainian War, GPS-guided munitions provided to Ukraine by NATO countries experienced significant failure rates as a result of Russian electronic warfare. The rate at which Excalibur artillery shells hit their targets dropped from 70% to 6% as Russia adapted its electronic warfare activities.
Timekeeping
Leap seconds
While most clocks derive their time from Coordinated Universal Time (UTC), the atomic clocks on the satellites are set to GPS time. The difference is that GPS time is not corrected to match the rotation of the Earth, so it does not contain new leap seconds or other corrections that are periodically added to UTC. GPS time was set to match UTC in 1980, but has since diverged. The lack of corrections means that GPS time remains at a constant offset with International Atomic Time (TAI) (TAI – GPS = 19 seconds). Periodic corrections are performed to the on-board clocks to keep them synchronized with ground clocks.
The GPS navigation message includes the difference between GPS time and UTC. GPS time is 18 seconds ahead of UTC because of the leap second added to UTC on December 31, 2016. Receivers subtract this offset from GPS time to calculate UTC and specific time zone values. New GPS units may not show the correct UTC time until after receiving the UTC offset message. The GPS-UTC offset field can accommodate 255 leap seconds (eight bits).
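As a minimal sketch of how a receiver applies this broadcast offset (the function and the 18-second default below are illustrative; a real receiver reads the leap-second count from the navigation message), GPS time can be converted to UTC by subtracting the current offset:
<syntaxhighlight lang="python">
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)   # start of GPS time

def gps_to_utc(week: int, seconds_of_week: float, leap_seconds: int = 18) -> datetime:
    """Convert a GPS week / seconds-of-week pair to UTC.

    leap_seconds is the GPS-UTC offset broadcast in the navigation message
    (18 s since the leap second of December 31, 2016)."""
    gps_time = GPS_EPOCH + timedelta(weeks=week, seconds=seconds_of_week)
    return gps_time - timedelta(seconds=leap_seconds)

# Example: an arbitrary epoch 4 days into GPS week 2200
print(gps_to_utc(2200, 4 * 86400))
</syntaxhighlight>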
Accuracy
GPS time is theoretically accurate to about 14 nanoseconds, due to the clock drift relative to International Atomic Time that the atomic clocks in GPS transmitters experience. Most receivers lose some accuracy in their interpretation of the signals and are only accurate to about 100 nanoseconds.
Relativistic corrections
The GPS implements two major corrections to its time signals for relativistic effects: one for relative velocity of satellite and receiver, using the special theory of relativity, and one for the difference in gravitational potential between satellite and receiver, using general relativity. The acceleration of the satellite could also be computed independently as a correction, depending on purpose, but normally the effect is already dealt with in the first two corrections.
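A back-of-the-envelope estimate of the combined effect (a sketch using round constants; it ignores orbital eccentricity, the Earth's oblateness, and the receiver's own motion) reproduces the well-known net rate of roughly +38 microseconds per day, consistent with the deliberately lowered satellite clock frequency quoted later in the Communication section:
<syntaxhighlight lang="python">
import math

MU = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
C = 299_792_458.0       # speed of light, m/s
R_EARTH = 6.371e6       # mean Earth radius, m (receiver assumed near the surface)
R_SAT = 2.656e7         # GPS orbital radius, m

# General relativity: the satellite clock, higher in the potential, runs faster.
grav = MU / C**2 * (1 / R_EARTH - 1 / R_SAT)

# Special relativity: the moving satellite clock runs slower.
v = math.sqrt(MU / R_SAT)           # circular orbital speed, ~3.9 km/s
vel = -v**2 / (2 * C**2)

net = grav + vel                    # fractional frequency offset, ~4.5e-10
print(f"net fractional offset : {net:.3e}")
print(f"per day               : {net * 86400 * 1e6:.1f} microseconds")
print(f"10.23 MHz clock offset: {net * 10.23e6 * 1e3:.2f} mHz")   # ~4.6 mHz below nominal
</syntaxhighlight>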
Format
As opposed to the year, month, and day format of the Gregorian calendar, the GPS date is expressed as a week number and a seconds-into-week number. The week number is transmitted as a ten-bit field in the C/A and P(Y) navigation messages, and so it becomes zero again every 1,024 weeks (19.6 years). GPS week zero started at 00:00:00 UTC (00:00:19 TAI) on January 6, 1980, and the week number became zero again for the first time at 23:59:47 UTC on August 21, 1999 (00:00:19 TAI on August 22, 1999). It happened the second time at 23:59:42 UTC on April 6, 2019. To determine the current Gregorian date, a GPS receiver must be provided with the approximate date (to within 3,584 days) to correctly translate the GPS date signal. To address this concern in the future the modernized GPS civil navigation (CNAV) message will use a 13-bit field that only repeats every 8,192 weeks (157 years), thus lasting until 2137 (157 years after GPS week zero).
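The ten-bit week count can be disambiguated whenever the receiver already knows the date to within 3,584 days (512 weeks). A minimal sketch of that logic follows; the function and variable names are illustrative and not taken from any particular receiver firmware.
<syntaxhighlight lang="python">
from datetime import datetime, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
ROLLOVER_WEEKS = 1024      # 10-bit week field in the legacy navigation message

def resolve_week(week_mod_1024: int, approx_date: datetime) -> int:
    """Recover the full GPS week number from the 10-bit broadcast value,
    given an approximate date known to within +/- 512 weeks."""
    approx_week = (approx_date - GPS_EPOCH).days // 7
    # Pick the candidate full week number closest to the approximate week.
    rollovers = round((approx_week - week_mod_1024) / ROLLOVER_WEEKS)
    return week_mod_1024 + rollovers * ROLLOVER_WEEKS

# Example: broadcast value 139 with a receiver that roughly knows it is mid-2022
print(resolve_week(139, datetime(2022, 6, 1, tzinfo=timezone.utc)))   # 2187
</syntaxhighlight>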
Communication
The navigational signals transmitted by GPS satellites encode a variety of information including satellite positions, the state of the internal clocks, and the health of the network. These signals are transmitted on two separate carrier frequencies that are common to all satellites in the network. Two different encodings are used: a public encoding that enables lower resolution navigation, and an encrypted encoding used by the U.S. military.
Message format
{|class="wikitable" style="float:right; margin:0 0 0.5em 1em;" border="1"
|+
! Subframes !! Description
|-
| 1 || Satellite clock, GPS time relationship
|-
| 2–3 || Ephemeris (precise satellite orbit)
|-
| 4–5 || Almanac component (satellite network synopsis, error correction)
|}
Each GPS satellite continuously broadcasts a navigation message on L1 (C/A and P/Y) and L2 (P/Y) frequencies at a rate of 50 bits per second (see bitrate). Each complete message takes 750 seconds (12.5 minutes) to complete. The message structure has a basic format of a 1500-bit-long frame made up of five subframes, each subframe being 300 bits (6 seconds) long. Subframes 4 and 5 are subcommutated 25 times each, so that a complete data message requires the transmission of 25 full frames. Each subframe consists of ten words, each 30 bits long. Thus, with 300 bits in a subframe times 5 subframes in a frame times 25 frames in a message, each message is 37,500 bits long. At a transmission rate of 50 bit/s, this gives 750 seconds to transmit an entire almanac message (GPS). Each 30-second frame begins precisely on the minute or half-minute as indicated by the atomic clock on each satellite.
The first subframe of each frame encodes the week number and the time within the week, as well as the data about the health of the satellite. The second and the third subframes contain the ephemeris – the precise orbit for the satellite. The fourth and fifth subframes contain the almanac, which contains coarse orbit and status information for up to 32 satellites in the constellation as well as data related to error correction. Thus, to obtain an accurate satellite location from this transmitted message, the receiver must demodulate the message from each satellite it includes in its solution for 18 to 30 seconds. To collect all transmitted almanacs, the receiver must demodulate the message for 732 to 750 seconds, or about 12.5 minutes.
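The durations quoted above follow directly from the bit counts; the short arithmetic check below is a sketch that only reuses the numbers stated in the text.
<syntaxhighlight lang="python">
BIT_RATE = 50                 # bits per second
WORD_BITS = 30
WORDS_PER_SUBFRAME = 10
SUBFRAMES_PER_FRAME = 5
FRAMES_PER_MESSAGE = 25       # subframes 4 and 5 are subcommutated 25 times

subframe_bits = WORD_BITS * WORDS_PER_SUBFRAME       # 300 bits
frame_bits = subframe_bits * SUBFRAMES_PER_FRAME     # 1,500 bits
message_bits = frame_bits * FRAMES_PER_MESSAGE       # 37,500 bits

print(subframe_bits / BIT_RATE)   # 6.0 seconds per subframe
print(frame_bits / BIT_RATE)      # 30.0 seconds per frame
print(message_bits / BIT_RATE)    # 750.0 seconds (12.5 minutes) per complete message
</syntaxhighlight>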
All satellites broadcast at the same frequencies, encoding signals using unique code-division multiple access (CDMA) so receivers can distinguish individual satellites from each other. The system uses two distinct CDMA encoding types: the coarse/acquisition (C/A) code, which is accessible by the general public, and the precise (P(Y)) code, which is encrypted so that only the U.S. military and other NATO nations who have been given access to the encryption code can access it.
The ephemeris is updated every 2 hours and is sufficiently stable for 4 hours, with provisions for updates every 6 hours or longer in non-nominal conditions. The almanac is updated typically every 24 hours. Additionally, data for a few weeks following is uploaded in case of transmission updates that delay data upload.
Satellite frequencies
{|class="wikitable" style="float:right; width:30em; margin:0 0 0.5em 1em;" border="1"
|+
! Band !! Frequency !! Description
|-
| L1 || 1575.42 MHz || Coarse-acquisition (C/A) and encrypted precision (P(Y)) codes, plus the L1 civilian (L1C) and military (M) codes on Block III and newer satellites.
|-
| L2 || 1227.60 MHz || P(Y) code, plus the L2C and military codes on the Block IIR-M and newer satellites.
|-
| L3 || 1381.05 MHz || Used for nuclear detonation (NUDET) detection.
|-
| L4 || 1379.913 MHz || Being studied for additional ionospheric correction.
|-
| L5 || 1176.45 MHz || Used as a civilian safety-of-life (SoL) signal on Block IIF and newer satellites.
|}
All satellites broadcast at the same two frequencies, 1.57542 GHz (L1 signal) and 1.2276 GHz (L2 signal). The satellite network uses a CDMA spread-spectrum technique where the low-bitrate message data is encoded with a high-rate pseudo-random (PRN) sequence that is different for each satellite. The receiver must be aware of the PRN codes for each satellite to reconstruct the actual message data. The C/A code, for civilian use, transmits data at 1.023 million chips per second, whereas the P code, for U.S. military use, transmits at 10.23 million chips per second. The actual internal reference of the satellites is 10.22999999543 MHz to compensate for relativistic effects that make observers on the Earth perceive a different time reference with respect to the transmitters in orbit. The L1 carrier is modulated by both the C/A and P codes, while the L2 carrier is only modulated by the P code. The P code can be encrypted as a so-called P(Y) code that is only available to military equipment with a proper decryption key. Both the C/A and P(Y) codes impart the precise time-of-day to the user.
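One way to see why the higher P-code chipping rate matters for precision is to convert chip duration into an equivalent distance. The figures below are only order-of-magnitude indications, since receivers track the code phase to a small fraction of a chip; the short sketch simply applies the chipping rates stated above.
<syntaxhighlight lang="python">
C = 299_792_458.0    # speed of light, m/s

for name, chip_rate in [("C/A code", 1.023e6), ("P code", 10.23e6)]:
    chip_duration = 1.0 / chip_rate        # seconds per chip
    chip_length = C * chip_duration        # metres of range per chip
    print(f"{name}: {chip_duration * 1e9:.1f} ns per chip, ~{chip_length:.0f} m per chip")
# Prints roughly 977.5 ns (~293 m) for the C/A code and 97.8 ns (~29 m) for the P code.
</syntaxhighlight>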
The L3 signal at a frequency of 1.38105 GHz is used to transmit data from the satellites to ground stations. This data is used by the United States Nuclear Detonation (NUDET) Detection System (USNDS) to detect, locate, and report nuclear detonations (NUDETs) in the Earth's atmosphere and near space. One usage is the enforcement of nuclear test ban treaties.
The L4 band at 1.379913 GHz is being studied for additional ionospheric correction.
The L5 frequency band at 1.17645 GHz was added in the process of GPS modernization. This frequency falls into an internationally protected range for aeronautical navigation, promising little or no interference under all circumstances. The first Block IIF satellite that provides this signal was launched in May 2010. On February 5, 2016, the 12th and final Block IIF satellite was launched. The L5 consists of two carrier components that are in phase quadrature with each other. Each carrier component is bi-phase shift key (BPSK) modulated by a separate bit train. "L5, the third civil GPS signal, will eventually support safety-of-life applications for aviation and provide improved availability and accuracy."
In 2011, a conditional waiver was granted to LightSquared to operate a terrestrial broadband service near the L1 band. Although LightSquared had applied for a license to operate in the 1525 to 1559 MHz band as early as 2003 and it was put out for public comment, the FCC asked LightSquared to form a study group with the GPS community to test GPS receivers and identify issues that might arise due to the larger signal power from the LightSquared terrestrial network. The GPS community had not objected to the LightSquared (formerly MSV and SkyTerra) applications until November 2010, when LightSquared applied for a modification to its Ancillary Terrestrial Component (ATC) authorization. This filing (SAT-MOD-20101118-00239) amounted to a request to run several orders of magnitude more power in the same frequency band for terrestrial base stations, essentially repurposing what was supposed to be a "quiet neighborhood" for signals from space as the equivalent of a cellular network. Testing in the first half of 2011 demonstrated that the effects of the lower 10 MHz of spectrum on GPS devices are minimal (less than 1% of the total GPS devices are affected). The upper 10 MHz intended for use by LightSquared may have some effect on GPS devices. There is some concern that this may seriously degrade the GPS signal for many consumer uses. Aviation Week magazine reported that testing in June 2011 confirmed "significant jamming" of GPS by LightSquared's system.
Demodulation and decoding
Because all of the satellite signals are modulated onto the same L1 carrier frequency, the signals must be separated after demodulation. This is done by assigning each satellite a unique binary sequence known as a Gold code. The signals are decoded after demodulation using addition of the Gold codes corresponding to the satellites monitored by the receiver.
If the almanac information has previously been acquired, the receiver picks the satellites to listen for by their PRNs, unique numbers in the range 1 through 32. If the almanac information is not in memory, the receiver enters a search mode until a lock is obtained on one of the satellites. To obtain a lock, it is necessary that there be an unobstructed line of sight from the receiver to the satellite. The receiver can then acquire the almanac and determine the satellites it should listen for. As it detects each satellite's signal, it identifies it by its distinct C/A code pattern. There can be a delay of up to 30 seconds before the first estimate of position because of the need to read the ephemeris data.
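The sketch below illustrates the code-division idea with short, made-up ±1 spreading sequences rather than real 1,023-chip Gold codes (whose shift-register generators are not reproduced here): each simulated satellite spreads its data bits with its own code, the receiver sees the sum of all signals, and correlating against one satellite's code recovers that satellite's bits.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
CODE_LEN = 64    # toy code length; real C/A Gold codes are 1,023 chips long

# One made-up +/-1 spreading code per "satellite" (stand-ins for Gold codes).
codes = {sv: rng.choice([-1, 1], size=CODE_LEN) for sv in (1, 2, 3)}
data_bits = {1: [+1, -1, +1], 2: [-1, -1, +1], 3: [+1, +1, -1]}

def spread(bits, code):
    """Spread each data bit over one full repetition of the code."""
    return np.concatenate([b * code for b in bits])

# The receiver sees all satellites' spread signals added together.
received = sum(spread(data_bits[sv], codes[sv]) for sv in codes)

def despread(signal, code):
    """Correlate each bit-length chunk of the composite signal with one code."""
    chunks = signal.reshape(-1, CODE_LEN)
    return np.sign(chunks @ code)

for sv in codes:
    print(sv, despread(received, codes[sv]))   # recovers that satellite's data bits
</syntaxhighlight>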
Processing of the navigation message enables the determination of the time of transmission and the satellite position at this time. For more information see Demodulation and Decoding, Advanced.
Navigation equations
Problem statement
The receiver uses messages received from satellites to determine the satellite positions and time sent. The x, y, and z components of satellite position and the time sent (s) are designated as <math>[x_i, y_i, z_i, s_i]</math> where the subscript i denotes the satellite and has the value 1, 2, ..., n, where <math>n \ge 4</math>. When the time of message reception indicated by the on-board receiver clock is <math>\tilde{t}_i</math>, the true reception time is <math>t_i = \tilde{t}_i - b</math>, where b is the receiver's clock bias from the much more accurate GPS clocks employed by the satellites. The receiver clock bias is the same for all received satellite signals (assuming the satellite clocks are all perfectly synchronized). The message's transit time is <math>\tilde{t}_i - b - s_i</math>, where <math>s_i</math> is the satellite time. Assuming the message traveled at the speed of light, c, the distance traveled is <math>(\tilde{t}_i - b - s_i) c</math>.
For n satellites, the equations to satisfy are:
<math>d_i = (\tilde{t}_i - b - s_i) c, \quad i = 1, 2, \dots, n</math>
where <math>d_i</math> is the geometric distance or range between receiver and satellite i (the values without subscripts are the x, y, and z components of receiver position):
<math>d_i = \sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2}.</math>
Defining pseudoranges as <math>p_i = (\tilde{t}_i - s_i) c</math>, we see they are biased versions of the true range:
<math>p_i = d_i + bc, \quad i = 1, 2, \dots, n.</math>
Since the equations have four unknowns [x, y, z, b] (the three components of GPS receiver position and the clock bias), signals from at least four satellites are necessary to attempt solving these equations. They can be solved by algebraic or numerical methods. Existence and uniqueness of GPS solutions are discussed by Abel and Chaffee. When n is greater than four, this system is overdetermined and a fitting method must be used.
The amount of error in the results varies with the received satellites' locations in the sky, since certain configurations (when the received satellites are close together in the sky) cause larger errors. Receivers usually calculate a running estimate of the error in the calculated position. This is done by multiplying the basic resolution of the receiver by quantities called the geometric dilution of precision (GDOP) factors, calculated from the relative sky directions of the satellites used. The receiver location is expressed in a specific coordinate system, such as latitude and longitude using the WGS 84 geodetic datum or a country-specific system.
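A rough sketch of the GDOP calculation follows: the geometry matrix is built from unit line-of-sight vectors to the satellites used in the fix, plus a column of ones for the clock term, and GDOP is the square root of the trace of the inverse normal matrix. The line-of-sight directions below are invented purely for illustration; a real receiver derives them from the broadcast ephemeris and its own estimated position.
<syntaxhighlight lang="python">
import numpy as np

def gdop(unit_vectors):
    """Geometric dilution of precision from receiver-to-satellite unit vectors."""
    # Each row: direction cosines to one satellite plus a 1 for the clock-bias term.
    G = np.hstack([np.asarray(unit_vectors, float), np.ones((len(unit_vectors), 1))])
    Q = np.linalg.inv(G.T @ G)        # covariance shape factor
    return np.sqrt(np.trace(Q))

# Invented example: one satellite near the zenith, three spread around the sky.
directions = [
    [0.0, 0.0, 1.0],
    [0.9, 0.0, 0.44],
    [-0.45, 0.78, 0.44],
    [-0.45, -0.78, 0.44],
]
los = [np.array(v) / np.linalg.norm(v) for v in directions]
print(f"GDOP = {gdop(los):.2f}")   # smaller is better; clustered satellites give larger values
</syntaxhighlight>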
Geometric interpretation
The GPS equations can be solved by numerical and analytical methods. Geometrical interpretations can enhance the understanding of these solution methods.
Spheres
The measured ranges, called pseudoranges, contain clock errors. In a simplified idealization in which the ranges are synchronized, these true ranges represent the radii of spheres, each centered on one of the transmitting satellites. The solution for the position of the receiver is then at the intersection of the surfaces of these spheres; see trilateration (more generally, true-range multilateration). Signals from at minimum three satellites are required, and their three spheres would typically intersect at two points. One of the points is the location of the receiver, and the other moves rapidly in successive measurements and would not usually be on Earth's surface.
In practice, there are many sources of inaccuracy besides clock bias, including random errors as well as the potential for precision loss from subtracting numbers close to each other if the centers of the spheres are relatively close together. This means that the position calculated from three satellites alone is unlikely to be accurate enough. Data from more satellites can help because of the tendency for random errors to cancel out and also by giving a larger spread between the sphere centers. But at the same time, more spheres will not generally intersect at one point. Therefore, a near intersection gets computed, typically via least squares. The more signals available, the better the approximation is likely to be.
Hyperboloids
If the pseudorange between the receiver and satellite i and the pseudorange between the receiver and satellite j are subtracted, , the common receiver clock bias (b) cancels out, resulting in a difference of distances . The locus of points having a constant difference in distance to two points (here, two satellites) is a hyperbola on a plane and a hyperboloid of revolution (more specifically, a two-sheeted hyperboloid) in 3D space (see Multilateration). Thus, from four pseudorange measurements, the receiver can be placed at the intersection of the surfaces of three hyperboloids each with foci at a pair of satellites. With additional satellites, the multiple intersections are not necessarily unique, and a best-fitting solution is sought instead.
Inscribed sphere
The receiver position can be interpreted as the center of an inscribed sphere (insphere) of radius bc, given by the receiver clock bias b (scaled by the speed of light c). The insphere is located so that it touches the other spheres. The circumscribing spheres are centered at the GPS satellites, and their radii equal the measured pseudoranges pi. This configuration is distinct from the one described above, in which the spheres' radii were the unbiased or geometric ranges di.
Hypercones
The clock in the receiver is usually not of the same quality as the ones in the satellites and will not be accurately synchronized to them. This produces pseudoranges with large differences compared to the true distances to the satellites. Therefore, in practice, the time difference between the receiver clock and the satellite time is defined as an unknown clock bias b. The equations are then solved simultaneously for the receiver position and the clock bias. The solution space [x, y, z, b] can be seen as a four-dimensional spacetime, and signals from at minimum four satellites are needed. In that case each of the equations describes a hypercone (or spherical cone), with the cusp located at the satellite, and the base a sphere around the satellite. The receiver is at the intersection of four or more of such hypercones.
Solution methods
Least squares
When more than four satellites are available, the calculation can use the four best, or more than four simultaneously (up to all visible satellites), depending on the number of receiver channels, processing capability, and geometric dilution of precision (GDOP).
Using more than four involves an over-determined system of equations with no unique solution; such a system can be solved by a least-squares or weighted least squares method.
Iterative
Both the equations for four satellites and the least-squares equations for more than four are non-linear and need special solution methods. A common approach is iteration on a linearized form of the equations, such as the Gauss–Newton algorithm.
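A compact sketch of that iterative approach follows: linearize the pseudorange equations around the current guess for (x, y, z, b) and repeatedly solve the resulting linear least-squares problem. The satellite positions and pseudoranges are synthetic, generated from a chosen "true" receiver state purely to exercise the solver; nothing here reproduces a particular receiver's implementation.
<syntaxhighlight lang="python">
import numpy as np

C = 299_792_458.0    # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iterations=10):
    """Gauss-Newton solution of the GPS navigation equations.

    sat_pos      : (n, 3) satellite positions in metres (ECEF)
    pseudoranges : (n,)   measured pseudoranges in metres
    Returns the estimated receiver position (m) and clock bias (s)."""
    state = np.zeros(4)                               # [x, y, z, c*b], start at Earth's centre
    for _ in range(iterations):
        diff = state[:3] - sat_pos                    # satellite-to-receiver vectors
        ranges = np.linalg.norm(diff, axis=1)
        predicted = ranges + state[3]                 # geometric range plus clock term
        residuals = pseudoranges - predicted
        # Jacobian: unit line-of-sight vectors plus a column of ones for the c*b term.
        H = np.hstack([diff / ranges[:, None], np.ones((len(ranges), 1))])
        delta, *_ = np.linalg.lstsq(H, residuals, rcond=None)
        state += delta
    return state[:3], state[3] / C

# Synthetic test: five satellites at GPS-like orbital radius, a "true" receiver on the ground.
directions = np.array([[1, 0.2, 0.5], [-0.3, 1, 0.4], [0.5, -1, 0.6],
                       [-1, -0.5, 0.7], [0.2, 0.8, -0.3]], float)
sats = 2.656e7 * directions / np.linalg.norm(directions, axis=1, keepdims=True)
true_pos = np.array([6.371e6, 0.0, 0.0])
true_bias = 1e-3                                      # 1 ms receiver clock error
pr = np.linalg.norm(sats - true_pos, axis=1) + C * true_bias

pos, bias = solve_position(sats, pr)
print(np.round(pos), f"{bias * 1e3:.6f} ms")          # should recover true_pos and ~1.0 ms
</syntaxhighlight>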
The GPS was initially developed assuming use of a numerical least-squares solution method—i.e., before closed-form solutions were found.
Closed-form
One closed-form solution to the above set of equations was developed by S. Bancroft. Its properties are well known; in particular, proponents claim it is superior in low-GDOP situations, compared to iterative least squares methods.
Bancroft's method is algebraic, as opposed to numerical, and can be used for four or more satellites. When four satellites are used, the key steps are inversion of a 4x4 matrix and solution of a single-variable quadratic equation. Bancroft's method provides one or two solutions for the unknown quantities. When there are two (usually the case), only one is a near-Earth sensible solution.
When a receiver uses more than four satellites for a solution, Bancroft uses the generalized inverse (i.e., the pseudoinverse) to find a solution. A case has been made that iterative methods, such as the Gauss–Newton algorithm approach for solving over-determined non-linear least squares problems, generally provide more accurate solutions.
Leick et al. (2015) states that "Bancroft's (1985) solution is a very early, if not the first, closed-form solution."
Other closed-form solutions were published afterwards, although their adoption in practice is unclear.
Error sources and analysis
GPS error analysis examines error sources in GPS results and the expected size of those errors. GPS makes corrections for receiver clock errors and other effects, but some residual errors remain uncorrected. Error sources include signal arrival time measurements, numerical calculations, atmospheric effects (ionospheric/tropospheric delays), ephemeris and clock data, multipath signals, and natural and artificial interference. The magnitude of the residual errors from these sources depends on the geometric dilution of precision. Artificial errors may result from jamming devices, which threaten ships and aircraft, or from intentional signal degradation through selective availability, which deliberately limited civilian accuracy but has been switched off since May 1, 2000.
Accuracy enhancement and surveying
Regulatory spectrum issues concerning GPS receivers
In the United States, GPS receivers are regulated under the Federal Communications Commission's (FCC) Part 15 rules. As indicated in the manuals of GPS-enabled devices sold in the United States, as a Part 15 device, it "must accept any interference received, including interference that may cause undesired operation". With respect to GPS devices in particular, the FCC states that GPS receiver manufacturers "must use receivers that reasonably discriminate against reception of signals outside their allocated spectrum". For the last 30 years, GPS receivers have operated next to the Mobile Satellite Service band, and have discriminated against reception of mobile satellite services, such as Inmarsat, without any issue.
The spectrum allocated for GPS L1 use by the FCC is 1559 to 1610 MHz, while the spectrum allocated for satellite-to-ground use owned by Lightsquared is the Mobile Satellite Service band. Since 1996, the FCC has authorized licensed use of the spectrum neighboring the GPS band of 1525 to 1559 MHz to the Virginia company LightSquared. On March 1, 2001, the FCC received an application from LightSquared's predecessor, Motient Services, to use their allocated frequencies for an integrated satellite-terrestrial service. In 2002, the U.S. GPS Industry Council came to an out-of-band-emissions (OOBE) agreement with LightSquared to prevent transmissions from LightSquared's ground-based stations from emitting transmissions into the neighboring GPS band of 1559 to 1610 MHz. In 2004, the FCC adopted the OOBE agreement in its authorization for LightSquared to deploy a ground-based network ancillary to their satellite system – known as the Ancillary Tower Components (ATCs) – "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service." This authorization was reviewed and approved by the U.S. Interdepartment Radio Advisory Committee, which includes the U.S. Department of Agriculture, U.S. Space Force, U.S. Army, U.S. Coast Guard, Federal Aviation Administration, National Aeronautics and Space Administration (NASA), U.S. Department of the Interior, and U.S. Department of Transportation.
In January 2011, the FCC conditionally authorized LightSquared's wholesale customers—such as Best Buy, Sharp, and C Spire—to only purchase an integrated satellite-ground-based service from LightSquared and re-sell that integrated service on devices that are equipped to only use the ground-based signal using LightSquared's allocated frequencies of 1525 to 1559 MHz. In December 2010, GPS receiver manufacturers expressed concerns to the FCC that LightSquared's signal would interfere with GPS receiver devices although the FCC's policy considerations leading up to the January 2011 order did not pertain to any proposed changes to the maximum number of ground-based LightSquared stations or the maximum power at which these stations could operate. The January 2011 order makes final authorization contingent upon studies of GPS interference issues carried out by a LightSquared led working group along with GPS industry and Federal agency participation. On February 14, 2012, the FCC initiated proceedings to vacate LightSquared's Conditional Waiver Order based on the NTIA's conclusion that there was currently no practical way to mitigate potential GPS interference.
GPS receiver manufacturers design GPS receivers to use spectrum beyond the GPS-allocated band. In some cases, GPS receivers are designed to use up to 400 MHz of spectrum in either direction of the L1 frequency of 1575.42 MHz, because mobile satellite services in those regions are broadcasting from space to ground, and at power levels commensurate with mobile satellite services. As regulated under the FCC's Part 15 rules, GPS receivers are not warranted protection from signals outside GPS-allocated spectrum. This is why GPS operates next to the Mobile Satellite Service band, and also why the Mobile Satellite Service band operates next to GPS. The symbiotic relationship of spectrum allocation ensures that users of both bands are able to operate cooperatively and freely.
The FCC adopted rules in February 2003 that allowed Mobile Satellite Service (MSS) licensees such as LightSquared to construct a small number of ancillary ground-based towers in their licensed spectrum to "promote more efficient use of terrestrial wireless spectrum". In those 2003 rules, the FCC stated: "As a preliminary matter, terrestrial [Commercial Mobile Radio Service ('CMRS')] and MSS ATC are expected to have different prices, coverage, product acceptance and distribution; therefore, the two services appear, at best, to be imperfect substitutes for one another that would be operating in predominantly different market segments ... MSS ATC is unlikely to compete directly with terrestrial CMRS for the same customer base...". In 2004, the FCC clarified that the ground-based towers would be ancillary, noting: "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service." In July 2010, the FCC stated that it expected LightSquared to use its authority to offer an integrated satellite-terrestrial service to "provide mobile broadband services similar to those provided by terrestrial mobile providers and enhance competition in the mobile broadband sector". GPS receiver manufacturers have argued that LightSquared's licensed spectrum of 1525 to 1559 MHz was never envisioned as being used for high-speed wireless broadband based on the 2003 and 2004 FCC ATC rulings making clear that the Ancillary Tower Component (ATC) would be, in fact, ancillary to the primary satellite component. To build public support of efforts to continue the 2004 FCC authorization of LightSquared's ancillary terrestrial component vs. a simple ground-based LTE service in the Mobile Satellite Service band, GPS receiver manufacturer Trimble Navigation Ltd. formed the "Coalition To Save Our GPS".
The FCC and LightSquared have each made public commitments to solve the GPS interference issue before the network is allowed to operate. According to Chris Dancy of the Aircraft Owners and Pilots Association, airline pilots with the type of systems that would be affected "may go off course and not even realize it". The problems could also affect the Federal Aviation Administration upgrade to the air traffic control system, United States Defense Department guidance, and local emergency services including 911.
On February 14, 2012, the FCC moved to bar LightSquared's planned national broadband network after being informed by the National Telecommunications and Information Administration (NTIA), the federal agency that coordinates spectrum uses for the military and other federal government entities, that "there is no practical way to mitigate potential interference at this time". LightSquared is challenging the FCC's action.
Similar systems
Following the United States' deployment of GPS, other countries have also developed their own satellite navigation systems. These systems include:
The Russian Global Navigation Satellite System (GLONASS) was developed at the same time as GPS, but suffered from incomplete coverage of the globe until the mid-2000s. GLONASS reception can be combined with GPS in a single receiver, making additional satellites available to enable faster position fixes and improved accuracy.
China's BeiDou Navigation Satellite System began global services in 2018 and finished its full deployment in 2020.
The Galileo navigation satellite system, a global system being developed by the European Union and other partner countries, began operation in 2016, and is expected to be fully deployed by 2020.
Japan's Quasi-Zenith Satellite System (QZSS) is a GPS satellite-based augmentation system to enhance GPS's accuracy in Asia-Oceania, with satellite navigation independent of GPS scheduled for 2023.
The Indian Regional Navigation Satellite System (Operational name 'NavIC', Navigation with Indian Constellation), deployed by India.
Backup system
In the event of adverse space weather or the deployment of an anti-satellite weapon against GPS, the United States has no terrestrial backup system. The potential cost of such an event to the U.S. economy is estimated at $1 billion per day. The LORAN-C system was turned off in North America in 2010 and Europe in 2015. eLoran is proposed as an American terrestrial backup system, but as of 2024 has not received approval or funding.
China continues to operate LORAN-C transmitters, and Russia has a similar system called CHAYKA ("Seagull").
| Technology | Navigation | null |
11924 | https://en.wikipedia.org/wiki/Game%20theory | Game theory | Game theory is the study of mathematical models of strategic interactions. It has applications in many fields of social science, and is used extensively in economics, logic, systems science and computer science. Initially, game theory addressed two-person zero-sum games, in which a participant's gains or losses are exactly balanced by the losses and gains of the other participant. In the 1950s, it was extended to the study of non zero-sum games, and was eventually applied to a wide range of behavioral relations. It is now an umbrella term for the science of rational decision making in humans, animals, and computers.
Modern game theory began with the idea of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by Theory of Games and Economic Behavior (1944), co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty.
Game theory was developed extensively in the 1950s, and was explicitly applied to evolution in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. John Maynard Smith was awarded the Crafoord Prize for his application of evolutionary game theory in 1999, and fifteen game theorists have won the Nobel Prize in economics as of 2020, including most recently Paul Milgrom and Robert B. Wilson.
History
Earliest results
In 1713, a letter attributed to Charles Waldegrave, an active Jacobite and uncle to British diplomat James Waldegrave, analyzed a game called "le her". Waldegrave provided a minimax mixed strategy solution to a two-person version of the card game, and the problem is now known as the Waldegrave problem.
In 1838, Antoine Augustin Cournot provided a model of competition in oligopolies. Though he did not refer to it as such, he presented a solution that is the Nash equilibrium of the game in his Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth). In 1883, Joseph Bertrand critiqued Cournot's model as unrealistic, providing an alternative model of price competition which would later be formalized by Francis Ysidro Edgeworth.
In 1913, Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels (On an Application of Set Theory to the Theory of the Game of Chess), which proved that the optimal chess strategy is strictly determined.
Foundation
The work of John von Neumann established game theory as its own independent field in the early-to-mid 20th century, with von Neumann publishing his paper On the Theory of Games of Strategy in 1928. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. Von Neumann's work in game theory culminated in his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern. The second edition of this book provided an axiomatic theory of utility, which reincarnated Daniel Bernoulli's old theory of utility (of money) as an independent discipline. This foundational work contains the method for finding mutually consistent solutions for two-person zero-sum games. Subsequent work focused primarily on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies.
In his 1938 book and earlier notes, Émile Borel proved a minimax theorem for two-person zero-sum matrix games only when the pay-off matrix is symmetric and provided a solution to a non-trivial infinite game (known in English as Blotto game). Borel conjectured the non-existence of mixed-strategy equilibria in finite two-person zero-sum games, a conjecture that was proved false by von Neumann.
In 1950, John Nash developed a criterion for mutual consistency of players' strategies known as the Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern. Nash proved that every finite n-player, non-zero-sum (not just two-player zero-sum) non-cooperative game has what is now known as a Nash equilibrium in mixed strategies.
Game theory experienced a flurry of activity in the 1950s, during which the concepts of the core, the extensive form game, fictitious play, repeated games, and the Shapley value were developed. The 1950s also saw the first applications of game theory to philosophy and political science. The first mathematical discussion of the prisoner's dilemma appeared, and an experiment was undertaken by mathematicians Merrill M. Flood and Melvin Dresher, as part of the RAND Corporation's investigations into game theory. RAND pursued the studies because of possible applications to global nuclear strategy.
Prize-winning achievements
In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium. Later he would introduce trembling hand perfection as well. In 1994 Nash, Selten and Harsanyi became Economics Nobel Laureates for their contributions to economic game theory.
In the 1970s, game theory was extensively applied in biology, largely as a result of the work of John Maynard Smith and his evolutionarily stable strategy. In addition, the concepts of correlated equilibrium, trembling hand perfection and common knowledge were introduced and analyzed.
In 1994, John Nash was awarded the Nobel Memorial Prize in the Economic Sciences for his contribution to game theory. Nash's most famous contribution to game theory is the concept of the Nash equilibrium, which is a solution concept for non-cooperative games, published in 1951. A Nash equilibrium is a set of strategies, one for each player, such that no player can improve their payoff by unilaterally changing their strategy.
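That definition translates directly into a brute-force check over all strategy profiles. The sketch below finds the pure-strategy Nash equilibria of a two-player game given as a pair of payoff matrices; the prisoner's dilemma numbers used in the example are the conventional textbook values, included only for illustration.
<syntaxhighlight lang="python">
from itertools import product

def pure_nash_equilibria(payoff1, payoff2):
    """All strategy profiles (i, j) from which neither player can gain by
    unilaterally deviating.  payoff1[i][j] and payoff2[i][j] are the payoffs
    when player 1 plays row i and player 2 plays column j."""
    rows, cols = len(payoff1), len(payoff1[0])
    equilibria = []
    for i, j in product(range(rows), range(cols)):
        best_row = all(payoff1[i][j] >= payoff1[k][j] for k in range(rows))
        best_col = all(payoff2[i][j] >= payoff2[i][k] for k in range(cols))
        if best_row and best_col:
            equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma with textbook payoffs: strategy 0 = cooperate, 1 = defect.
p1 = [[3, 0],
      [5, 1]]
p2 = [[3, 5],
      [0, 1]]
print(pure_nash_equilibria(p1, p2))   # [(1, 1)] -- mutual defection
</syntaxhighlight>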
In 2005, game theorists Thomas Schelling and Robert Aumann followed Nash, Selten, and Harsanyi as Nobel Laureates. Schelling worked on dynamic models, early examples of evolutionary game theory. Aumann contributed more to the equilibrium school, introducing equilibrium coarsening and correlated equilibria, and developing an extensive formal analysis of the assumption of common knowledge and of its consequences.
In 2007, Leonid Hurwicz, Eric Maskin, and Roger Myerson were awarded the Nobel Prize in Economics "for having laid the foundations of mechanism design theory". Myerson's contributions include the notion of proper equilibrium, and an important graduate text: Game Theory, Analysis of Conflict. Hurwicz introduced and formalized the concept of incentive compatibility.
In 2012, Alvin E. Roth and Lloyd S. Shapley were awarded the Nobel Prize in Economics "for the theory of stable allocations and the practice of market design". In 2014, the Nobel went to game theorist Jean Tirole.
Different types of games
Cooperative / non-cooperative
A game is cooperative if the players are able to form binding commitments externally enforced (e.g. through contract law). A game is non-cooperative if players cannot form alliances or if all agreements need to be self-enforcing (e.g. through credible threats).
Cooperative games are often analyzed through the framework of cooperative game theory, which focuses on predicting which coalitions will form, the joint actions that groups take, and the resulting collective payoffs. It is different from non-cooperative game theory which focuses on predicting individual players' actions and payoffs by analyzing Nash equilibria.
Cooperative game theory provides a high-level approach as it describes only the structure and payoffs of coalitions, whereas non-cooperative game theory also looks at how strategic interaction will affect the distribution of payoffs. As non-cooperative game theory is more general, cooperative games can be analyzed through the approach of non-cooperative game theory (the converse does not hold) provided that sufficient assumptions are made to encompass all the possible strategies available to players due to the possibility of external enforcement of cooperation.
Symmetric / asymmetric
A symmetric game is a game where each player earns the same payoff when making the same choice. In other words, the identity of the player does not change the resulting game facing the other player. Many of the commonly studied 2×2 games are symmetric. The standard representations of chicken, the prisoner's dilemma, and the stag hunt are all symmetric games.
The most commonly studied asymmetric games are games where there are not identical strategy sets for both players. For instance, the ultimatum game and similarly the dictator game have different strategies for each player. It is possible, however, for a game to have identical strategies for both players, yet be asymmetric, if the payoffs the two players receive from those strategies differ.
Zero-sum / non-zero-sum
Zero-sum games (more generally, constant-sum games) are games in which choices by players can neither increase nor decrease the available resources. In zero-sum games, the total benefit to all players in the game, for every combination of strategies, always adds to zero (more informally, a player benefits only at the equal expense of others). Poker exemplifies a zero-sum game (ignoring the possibility of the house's cut), because one wins exactly the amount one's opponents lose. Other zero-sum games include matching pennies and most classical board games including Go and chess.
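Von Neumann's minimax result makes the optimal mixed strategy of a finite zero-sum game computable by linear programming. The sketch below applies the standard minimax formulation to matching pennies using SciPy's linear-programming routine; the formulation is the textbook one and is not taken from this article.
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Optimal mixed strategy and game value for the row player of a zero-sum
    game with payoff matrix A (the row player's winnings)."""
    m, n = A.shape
    # Variables: m strategy probabilities followed by the game value v.
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # maximize v  <=>  minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - x . A[:, j] <= 0 for every column j
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])   # probabilities sum to 1
    b_eq = np.ones(1)
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: the row player wins 1 when the coins match, loses 1 otherwise.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
strategy, value = solve_zero_sum(A)
print(strategy, value)   # ~[0.5, 0.5] with value ~0: play each side half the time
</syntaxhighlight>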
Many games studied by game theorists (including the famed prisoner's dilemma) are non-zero-sum games, because the outcome has net results greater or less than zero. Informally, in non-zero-sum games, a gain by one player does not necessarily correspond with a loss by another.
Furthermore, constant-sum games correspond to activities like theft and gambling, but not to the fundamental economic situation in which there are potential gains from trade. It is possible to transform any constant-sum game into a (possibly asymmetric) zero-sum game by adding a dummy player (often called "the board") whose losses compensate the players' net winnings.
Simultaneous / sequential
Simultaneous games are games where both players move simultaneously, or instead the later players are unaware of the earlier players' actions (making them effectively simultaneous). Sequential games (or dynamic games) are games where players do not make decisions simultaneously, and players' earlier actions affect the outcome and decisions of other players. This need not be perfect information about every action of earlier players; it might be very little knowledge. For instance, a player may know that an earlier player did not perform one particular action, while they do not know which of the other available actions the first player actually performed.
The difference between simultaneous and sequential games is captured in the different representations discussed above. Often, normal form is used to represent simultaneous games, while extensive form is used to represent sequential ones. The transformation of extensive to normal form is one way, meaning that multiple extensive form games correspond to the same normal form. Consequently, notions of equilibrium for simultaneous games are insufficient for reasoning about sequential games; see subgame perfection.
In short, sequential games are typically represented by decision trees (extensive form) and presume some knowledge of earlier moves, whereas simultaneous games are typically represented by payoff matrices (normal form) and do not.
Perfect information and imperfect information
An important subset of sequential games consists of games of perfect information. A game with perfect information means that all players, at every move in the game, know the previous history of the game and the moves previously made by all other players. A game has imperfect information when players do not know all of the moves already made by their opponents, as in a simultaneous-move game. Examples of perfect-information games include tic-tac-toe, checkers, chess, and Go.
Many card games are games of imperfect information, such as poker and bridge. Perfect information is often confused with complete information, which is a similar concept pertaining to the common knowledge of each player's sequence, strategies, and payoffs throughout gameplay. Complete information requires that every player know the strategies and payoffs available to the other players but not necessarily the actions taken, whereas perfect information is knowledge of all aspects of the game and players. Games of incomplete information can be reduced, however, to games of imperfect information by introducing "moves by nature".
Bayesian game
One of the assumptions of the Nash equilibrium is that every player has correct beliefs about the actions of the other players. However, there are many situations in game theory where participants do not fully understand the characteristics of their opponents. Negotiators may be unaware of their opponent's valuation of the object of negotiation, companies may be unaware of their opponent's cost functions, combatants may be unaware of their opponent's strengths, and jurors may be unaware of their colleagues' interpretation of the evidence at trial. In some cases, participants may know the character of their opponent well, but may not know how well their opponent knows his or her own character.
Bayesian game means a strategic game with incomplete information. For a strategic game, decision makers are players, and every player has a group of actions. A core part of the imperfect information specification is the set of states. Every state completely describes a collection of characteristics relevant to the player such as their preferences and details about them. There must be a state for every set of features that some player believes may exist.
For example, suppose Player 1 is unsure whether Player 2 would rather date her or get away from her, while Player 2 understands Player 1's preferences as before. To be specific, suppose Player 1 believes that Player 2 wants to date her with probability 1/2 and wants to get away from her with probability 1/2 (this assessment probably comes from Player 1's experience: in such a case she faces players who want to date her half of the time and players who want to avoid her half of the time). Because probability is involved, analyzing this situation requires understanding the players' preferences over lotteries, even if one is only interested in pure-strategy equilibria.
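A minimal sketch of the expected-payoff reasoning implied here, with hypothetical payoff numbers (the actual payoffs of the dating game are not given in the text): Player 1 weighs each possible type of Player 2 by her belief of 1/2 and chooses the action with the higher expected payoff.

# Hypothetical Bayesian game: Player 2 is one of two types, each with probability 1/2.
belief = {"wants_to_date": 0.5, "wants_to_avoid": 0.5}

# payoff_p1[type][action of Player 1] -> Player 1's payoff, assuming each type of
# Player 2 plays its own best response (values are invented for illustration).
payoff_p1 = {
    "wants_to_date":  {"go_out": 2, "stay_home": 0},
    "wants_to_avoid": {"go_out": 0, "stay_home": 1},
}

# Player 1 maximizes expected payoff over her belief about Player 2's type.
def expected_payoff(action):
    return sum(p * payoff_p1[t][action] for t, p in belief.items())

best = max(["go_out", "stay_home"], key=expected_payoff)
print(best, expected_payoff(best))   # go_out, expected payoff 1.0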
Combinatorial games
Games in which the difficulty of finding an optimal strategy stems from the multiplicity of possible moves are called combinatorial games. Examples include chess and Go. Games that involve imperfect information may also have a strong combinatorial character, for instance backgammon. There is no unified theory addressing combinatorial elements in games. There are, however, mathematical tools that can solve some particular problems and answer some general questions.
Games of perfect information have been studied in combinatorial game theory, which has developed novel representations, e.g. surreal numbers, as well as combinatorial and algebraic (and sometimes non-constructive) proof methods to solve games of certain types, including "loopy" games that may result in infinitely long sequences of moves. These methods address games with higher combinatorial complexity than those usually considered in traditional (or "economic") game theory. A typical game that has been solved this way is Hex. A related field of study, drawing from computational complexity theory, is game complexity, which is concerned with estimating the computational difficulty of finding optimal strategies.
Research in artificial intelligence has addressed both perfect and imperfect information games that have very complex combinatorial structures (like chess, go, or backgammon) for which no provable optimal strategies have been found. The practical solutions involve computational heuristics, like alpha–beta pruning or use of artificial neural networks trained by reinforcement learning, which make games more tractable in computing practice.
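As a sketch of one of the heuristics mentioned above, the following Python function implements minimax search with alpha–beta pruning over a small hand-made game tree; the tree structure and its leaf values are invented purely for illustration.

# Minimax with alpha-beta pruning over a toy game tree.
# Internal nodes are lists of children; leaves are numeric payoffs for the maximizer.
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):           # leaf node: return its value
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                # beta cutoff: opponent avoids this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:                # alpha cutoff
                break
        return value

tree = [[3, 5], [2, [9, 1]], [6, 4]]         # hypothetical two-ply game tree
print(alphabeta(tree, maximizing=True))      # prints 4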
Discrete and continuous games
Much of game theory is concerned with finite, discrete games that have a finite number of players, moves, events, outcomes, etc. Many concepts can be extended, however. Continuous games allow players to choose a strategy from a continuous strategy set. For instance, Cournot competition is typically modeled with players' strategies being any non-negative quantities, including fractional quantities.
Differential games
Differential games such as the continuous pursuit and evasion game are continuous games where the evolution of the players' state variables is governed by differential equations. The problem of finding an optimal strategy in a differential game is closely related to the optimal control theory. In particular, there are two types of strategies: the open-loop strategies are found using the Pontryagin maximum principle while the closed-loop strategies are found using Bellman's Dynamic Programming method.
A particular case of differential games are the games with a random time horizon. In such games, the terminal time is a random variable with a given probability distribution function. Therefore, the players maximize the mathematical expectation of the cost function. It was shown that the modified optimization problem can be reformulated as a discounted differential game over an infinite time interval.
Evolutionary game theory
Evolutionary game theory studies players who adjust their strategies over time according to rules that are not necessarily rational or farsighted. In general, the evolution of strategies over time according to such rules is modeled as a Markov chain with a state variable such as the current strategy profile or how the game has been played in the recent past. Such rules may feature imitation, optimization, or survival of the fittest.
In biology, such models can represent evolution, in which offspring adopt their parents' strategies and parents who play more successful strategies (i.e. corresponding to higher payoffs) have a greater number of offspring. In the social sciences, such models typically represent strategic adjustment by players who play a game many times within their lifetime and, consciously or unconsciously, occasionally adjust their strategies.
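One common way to model the strategy adjustment described above is the replicator dynamic. The sketch below iterates a discrete-time replicator update for a hawk–dove game; the payoff parameters (resource value 4, fighting cost 6) and the initial population share are assumptions made only for illustration.

# Discrete-time replicator dynamics for a symmetric hawk-dove game.
V, C = 4.0, 6.0                      # hypothetical resource value and fighting cost
# Payoff matrix A[i][j]: payoff to strategy i against strategy j (0 = hawk, 1 = dove).
A = [[(V - C) / 2, V],
     [0.0,         V / 2]]

x = 0.1                              # initial share of hawks in the population
for _ in range(200):
    f_hawk = A[0][0] * x + A[0][1] * (1 - x)      # fitness of hawks
    f_dove = A[1][0] * x + A[1][1] * (1 - x)      # fitness of doves
    mean_f = x * f_hawk + (1 - x) * f_dove
    x = x * f_hawk / mean_f                       # replicator update
print(round(x, 3))                   # converges towards V / C = 2/3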
Stochastic outcomes (and relation to other fields)
Individual decision problems with stochastic outcomes are sometimes considered "one-player games". They may be modeled using similar tools within the related disciplines of decision theory, operations research, and areas of artificial intelligence, particularly AI planning (with uncertainty) and multi-agent system. Although these fields may have different motivators, the mathematics involved are substantially the same, e.g. using Markov decision processes (MDP).
Stochastic outcomes can also be modeled in terms of game theory by adding a randomly acting player who makes "chance moves" ("moves by nature"). This player is not typically considered a third player in what is otherwise a two-player game, but merely serves to provide a roll of the dice where required by the game.
For some problems, different approaches to modeling stochastic outcomes may lead to different solutions. For example, the difference in approach between MDPs and the minimax solution is that the latter considers the worst-case over a set of adversarial moves, rather than reasoning in expectation about these moves given a fixed probability distribution. The minimax approach may be advantageous where stochastic models of uncertainty are not available, but may also be overestimating extremely unlikely (but costly) events, dramatically swaying the strategy in such scenarios if it is assumed that an adversary can force such an event to happen. (See Black swan theory for more discussion on this kind of modeling issue, particularly as it relates to predicting and limiting losses in investment banking.)
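A minimal numerical sketch of the contrast drawn here, using made-up payoffs and probabilities: the expected-value (MDP-style) criterion and the worst-case (minimax-style) criterion can recommend different actions against the same chance move.

# Two candidate actions; "nature" then picks one of two outcomes.
# Payoffs and the probability of the bad outcome are hypothetical.
outcomes = {
    "risky": {"good": 10.0, "bad": -100.0},
    "safe":  {"good": 1.0,  "bad": 0.0},
}
p_bad = 0.01                                  # assumed probability of the bad outcome

def expected(action):                         # MDP-style criterion
    o = outcomes[action]
    return (1 - p_bad) * o["good"] + p_bad * o["bad"]

def worst_case(action):                       # minimax-style criterion
    return min(outcomes[action].values())

print(max(outcomes, key=expected))            # "risky": expectation 8.9 beats 0.99
print(max(outcomes, key=worst_case))          # "safe": worst case 0.0 beats -100.0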
General models that include all elements of stochastic outcomes, adversaries, and partial or noisy observability (of moves by other players) have also been studied. The "gold standard" is considered to be partially observable stochastic game (POSG), but few realistic problems are computationally feasible in POSG representation.
Metagames
These are games the play of which is the development of the rules for another game, the target or subject game. Metagames seek to maximize the utility value of the rule set developed. The theory of metagames is related to mechanism design theory.
The term metagame analysis is also used to refer to a practical approach developed by Nigel Howard, whereby a situation is framed as a strategic game in which stakeholders try to realize their objectives by means of the options available to them. Subsequent developments have led to the formulation of confrontation analysis.
Mean field game theory
Mean field game theory is the study of strategic decision making in very large populations of small interacting agents. This class of problems was considered in the economics literature by Boyan Jovanovic and Robert W. Rosenthal, in the engineering literature by Peter E. Caines, and by mathematicians Pierre-Louis Lions and Jean-Michel Lasry.
Representation of games
The games studied in game theory are well-defined mathematical objects. To be fully defined, a game must specify the following elements: the players of the game, the information and actions available to each player at each decision point, and the payoffs for each outcome. (Eric Rasmusen refers to these four "essential elements" by the acronym "PAPI".) A game theorist typically uses these elements, along with a solution concept of their choosing, to deduce a set of equilibrium strategies for each player such that, when these strategies are employed, no player can profit by unilaterally deviating from their strategy. These equilibrium strategies determine an equilibrium to the game—a stable state in which either one outcome occurs or a set of outcomes occur with known probability.
Most cooperative games are presented in the characteristic function form, while the extensive and the normal forms are used to define noncooperative games.
Extensive form
The extensive form can be used to formalize games with a time sequencing of moves. Extensive form games can be visualized using game trees (as pictured here). Here each vertex (or node) represents a point of choice for a player. The player is specified by a number listed by the vertex. The lines out of the vertex represent the possible actions for that player. The payoffs are specified at the bottom of the tree. The extensive form can be viewed as a multi-player generalization of a decision tree. Extensive form games of perfect information can be solved using backward induction. This involves working backward up the game tree to determine what a rational player would do at the last vertex of the tree, what the player with the previous move would do given that the player with the last move is rational, and so on until the first vertex of the tree is reached.
The game pictured consists of two players. The way this particular game is structured (i.e., with sequential decision making and perfect information), Player 1 "moves" first by choosing to be either fair or unfair. Next in the sequence, Player 2, who has now observed Player 1's move, can choose either to accept or to reject. Once Player 2 has made their choice, the game is considered finished and each player gets their respective payoff, represented in the image as two numbers, where the first number represents Player 1's payoff, and the second number represents Player 2's payoff. Suppose, for instance, that play ends at an outcome where Player 1 gets a payoff of "eight" (which in real-world terms can be interpreted in many ways, the simplest of which is in terms of money, but could mean things such as eight days of vacation or eight countries conquered or even eight more opportunities to play the same game against other players) and Player 2 gets a payoff of "two".
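The backward-induction procedure described above can be sketched in a few lines of Python. The game tree below is a hypothetical two-stage game in the same spirit as the one discussed (its payoff numbers are invented and are not necessarily those of the pictured game).

# Backward induction on a small extensive-form game of perfect information.
# A node is either a payoff tuple (leaf) or a pair (player, {action: child}).
game = ("P1", {
    "fair":   ("P2", {"accept": (5, 5), "reject": (0, 0)}),
    "unfair": ("P2", {"accept": (8, 2), "reject": (0, 0)}),
})

def solve(node):
    if isinstance(node, tuple) and not isinstance(node[1], dict):
        return node, []                       # leaf: payoffs, empty action list
    player, children = node
    index = 0 if player == "P1" else 1        # which payoff the mover maximizes
    best_action, (best_payoffs, best_path) = max(
        ((a, solve(child)) for a, child in children.items()),
        key=lambda item: item[1][0][index],
    )
    return best_payoffs, [best_action] + best_path

payoffs, path = solve(game)
print(path, payoffs)                          # ['unfair', 'accept'] (8, 2)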
The extensive form can also capture simultaneous-move games and games with imperfect information. To represent it, either a dotted line connects different vertices to represent them as being part of the same information set (i.e. the players do not know at which point they are), or a closed line is drawn around them. (See example in the imperfect information section.)
Normal form
The normal (or strategic form) game is usually represented by a matrix which shows the players, strategies, and payoffs (see the example to the right). More generally it can be represented by any function that associates a payoff for each player with every possible combination of actions. In the accompanying example there are two players; one chooses the row and the other chooses the column. Each player has two strategies, which are specified by the number of rows and the number of columns. The payoffs are provided in the interior. The first number is the payoff received by the row player (Player 1 in our example); the second is the payoff for the column player (Player 2 in our example). Suppose that Player 1 plays Up and that Player 2 plays Left. Then Player 1 gets a payoff of 4, and Player 2 gets 3.
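The following sketch stores a two-player normal-form game as a pair of payoff matrices and enumerates its pure-strategy Nash equilibria by checking best responses. The numbers form a hypothetical game; only the Up/Left cell reproduces the (4, 3) payoff mentioned above, and the remaining entries are assumptions for illustration.

# A normal-form game as two payoff matrices (row player, column player).
# Rows: Up/Down; columns: Left/Right. Values other than (Up, Left) are hypothetical.
row_payoff = [[4, 1],
              [2, 3]]
col_payoff = [[3, 2],
              [1, 4]]

def pure_nash_equilibria(A, B):
    eqs = []
    for i in range(len(A)):                   # row player's strategy
        for j in range(len(A[0])):            # column player's strategy
            row_best = all(A[i][j] >= A[k][j] for k in range(len(A)))
            col_best = all(B[i][j] >= B[i][m] for m in range(len(B[0])))
            if row_best and col_best:         # neither player gains by deviating alone
                eqs.append((i, j))
    return eqs

print(pure_nash_equilibria(row_payoff, col_payoff))   # [(0, 0), (1, 1)]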
When a game is presented in normal form, it is presumed that each player acts simultaneously or, at least, without knowing the actions of the other. If players have some information about the choices of other players, the game is usually presented in extensive form.
Every extensive-form game has an equivalent normal-form game; however, the transformation to normal form may result in an exponential blowup in the size of the representation, making it computationally impractical.
Characteristic function form
In cooperative game theory the characteristic function lists the payoff of each coalition. The origin of this formulation is in John von Neumann and Oskar Morgenstern's book.
Formally, a characteristic function v maps every possible coalition of players (every subset of the player set N) to a payment, v : 2^N → R, and satisfies v(∅) = 0. The function describes how much collective payoff a set of players can gain by forming a coalition.
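A minimal sketch of a characteristic function for a three-player cooperative game, with invented coalition values; it satisfies the convention that the empty coalition is worth zero, and the extra superadditivity check is a common (but not required) property.

# Characteristic function of a hypothetical 3-player cooperative game.
# Coalitions are represented as frozensets of player names; v(empty set) = 0.
v = {
    frozenset():            0,
    frozenset({"A"}):       1, frozenset({"B"}): 1, frozenset({"C"}): 1,
    frozenset({"A", "B"}):  3, frozenset({"A", "C"}): 3, frozenset({"B", "C"}): 3,
    frozenset({"A", "B", "C"}): 6,
}

assert v[frozenset()] == 0                    # the empty coalition gains nothing

# Superadditivity: for disjoint coalitions S and T, forming the union is worth
# at least as much as acting apart.
coalitions = list(v)
for S in coalitions:
    for T in coalitions:
        if not (S & T):
            assert v[S | T] >= v[S] + v[T]
print("v is superadditive with v(empty) = 0")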
Alternative game representations
Alternative game representation forms are used for some subclasses of games or adjusted to the needs of interdisciplinary research. In addition to classical game representations, some of the alternative representations also encode time related aspects.
General and applied uses
As a method of applied mathematics, game theory has been used to study a wide variety of human and animal behaviors. It was initially developed in economics to understand a large collection of economic behaviors, including behaviors of firms, markets, and consumers. The first use of game-theoretic analysis was by Antoine Augustin Cournot in 1838 with his solution of the Cournot duopoly. The use of game theory in the social sciences has expanded, and game theory has been applied to political, sociological, and psychological behaviors as well.
Although pre-twentieth-century naturalists such as Charles Darwin made game-theoretic kinds of statements, the use of game-theoretic analysis in biology began with Ronald Fisher's studies of animal behavior during the 1930s. This work predates the name "game theory", but it shares many important features with this field. The developments in economics were later applied to biology largely by John Maynard Smith in his 1982 book Evolution and the Theory of Games.
In addition to being used to describe, predict, and explain behavior, game theory has also been used to develop theories of ethical or normative behavior and to prescribe such behavior. In economics and philosophy, scholars have applied game theory to help in the understanding of good or proper behavior. Game-theoretic approaches have also been suggested in the philosophy of language and philosophy of science. Game-theoretic arguments of this type can be found as far back as Plato. An alternative version of game theory, called chemical game theory, represents the player's choices as metaphorical chemical reactant molecules called "knowlecules". Chemical game theory then calculates the outcomes as equilibrium solutions to a system of chemical reactions.
Description and modeling
The primary use of game theory is to describe and model how human populations behave. Some scholars believe that by finding the equilibria of games they can predict how actual human populations will behave when confronted with situations analogous to the game being studied. This particular view of game theory has been criticized. It is argued that the assumptions made by game theorists are often violated when applied to real-world situations. Game theorists usually assume players act rationally, but in practice, human rationality and/or behavior often deviates from the model of rationality as used in game theory. Game theorists respond by comparing their assumptions to those used in physics. Thus while their assumptions do not always hold, they can treat game theory as a reasonable scientific ideal akin to the models used by physicists. However, empirical work has shown that in some classic games, such as the centipede game, guess 2/3 of the average game, and the dictator game, people regularly do not play Nash equilibria. There is an ongoing debate regarding the importance of these experiments and whether the analysis of the experiments fully captures all aspects of the relevant situation.
Some game theorists, following the work of John Maynard Smith and George R. Price, have turned to evolutionary game theory in order to resolve these issues. These models presume either no rationality or bounded rationality on the part of players. Despite the name, evolutionary game theory does not necessarily presume natural selection in the biological sense. Evolutionary game theory includes both biological as well as cultural evolution and also models of individual learning (for example, fictitious play dynamics).
Prescriptive or normative analysis
Some scholars see game theory not as a predictive tool for the behavior of human beings, but as a suggestion for how people ought to behave. Since a strategy corresponding to a Nash equilibrium of a game constitutes one's best response to the actions of the other players – provided they are in (the same) Nash equilibrium – playing a strategy that is part of a Nash equilibrium seems appropriate. This normative use of game theory has also come under criticism.
Economics
Game theory is a major method used in mathematical economics and business for modeling competing behaviors of interacting agents. Applications include a wide array of economic phenomena and approaches, such as auctions, bargaining, mergers and acquisitions pricing, fair division, duopolies, oligopolies, social network formation, agent-based computational economics, general equilibrium, mechanism design, and voting systems; and across such broad areas as experimental economics, behavioral economics, information economics, industrial organization, and political economy.
This research usually focuses on particular sets of strategies known as "solution concepts" or "equilibria". A common assumption is that players act rationally. In non-cooperative games, the most famous of these is the Nash equilibrium. A set of strategies is a Nash equilibrium if each represents a best response to the other strategies. If all the players are playing the strategies in a Nash equilibrium, they have no unilateral incentive to deviate, since their strategy is the best they can do given what others are doing.
The payoffs of the game are generally taken to represent the utility of individual players.
A prototypical paper on game theory in economics begins by presenting a game that is an abstraction of a particular economic situation. One or more solution concepts are chosen, and the author demonstrates which strategy sets in the presented game are equilibria of the appropriate type. Economists and business professors suggest two primary uses (noted above): descriptive and prescriptive.
Managerial economics
Game theory also has an extensive use in a specific branch or stream of economics – Managerial Economics. One important usage of it in the field of managerial economics is in analyzing strategic interactions between firms. For example, firms may be competing in a market with limited resources, and game theory can help managers understand how their decisions impact their competitors and the overall market outcomes. Game theory can also be used to analyze cooperation between firms, such as in forming strategic alliances or joint ventures. Another use of game theory in managerial economics is in analyzing pricing strategies. For example, firms may use game theory to determine the optimal pricing strategy based on how they expect their competitors to respond to their pricing decisions. Overall, game theory serves as a useful tool for analyzing strategic interactions and decision making in the context of managerial economics.
Business
The Chartered Institute of Procurement & Supply (CIPS) promotes knowledge and use of game theory within the context of business procurement. CIPS and TWS Partners have conducted a series of surveys designed to explore the understanding, awareness and application of game theory among procurement professionals. Some of the main findings in their third annual survey (2019) include:
application of game theory to procurement activity has increased – at the time it was at 19% across all survey respondents
65% of participants predict that use of game theory applications will grow
70% of respondents say that they have "only a basic or a below basic understanding" of game theory
20% of participants had undertaken on-the-job training in game theory
50% of respondents said that new or improved software solutions were desirable
90% of respondents said that they do not have the software they need for their work.
Project management
Sensible decision-making is critical for the success of projects. In project management, game theory is used to model the decision-making process of players, such as investors, project managers, contractors, sub-contractors, governments and customers. Quite often, these players have competing interests, and sometimes their interests are directly detrimental to other players, making project management scenarios well-suited to be modeled by game theory.
Piraveenan (2019) in his review provides several examples where game theory is used to model project management scenarios. For instance, an investor typically has several investment options, and each option will likely result in a different project, and thus one of the investment options has to be chosen before the project charter can be produced. Similarly, any large project involving subcontractors, for instance, a construction project, has a complex interplay between the main contractor (the project manager) and subcontractors, or among the subcontractors themselves, which typically has several decision points. For example, if there is an ambiguity in the contract between the contractor and subcontractor, each must decide how hard to push their case without jeopardizing the whole project, and thus their own stake in it. Similarly, when projects from competing organizations are launched, the marketing personnel have to decide what is the best timing and strategy to market the project, or its resultant product or service, so that it can gain maximum traction in the face of competition. In each of these scenarios, the required decisions depend on the decisions of other players who, in some way, have competing interests to the interests of the decision-maker, and thus can ideally be modeled using game theory.
Piraveenan summarizes that two-player games are predominantly used to model project management scenarios, and based on the identity of these players, five distinct types of games are used in project management.
Government-sector–private-sector games (games that model public–private partnerships)
Contractor–contractor games
Contractor–subcontractor games
Subcontractor–subcontractor games
Games involving other players
In terms of types of games, both cooperative as well as non-cooperative, normal-form as well as extensive-form, and zero-sum as well as non-zero-sum are used to model various project management scenarios.
Political science
The application of game theory to political science is focused in the overlapping areas of fair division, political economy, public choice, war bargaining, positive political theory, and social choice theory. In each of these areas, researchers have developed game-theoretic models in which the players are often voters, states, special interest groups, and politicians.
Early examples of game theory applied to political science are provided by Anthony Downs. In his 1957 book An Economic Theory of Democracy, he applies the Hotelling firm location model to the political process. In the Downsian model, political candidates commit to ideologies on a one-dimensional policy space. Downs first shows how the political candidates will converge to the ideology preferred by the median voter if voters are fully informed, but then argues that voters choose to remain rationally ignorant which allows for candidate divergence. Game theory was applied in 1962 to the Cuban Missile Crisis during the presidency of John F. Kennedy.
It has also been proposed that game theory explains the stability of any form of political government. Taking the simplest case of a monarchy, for example, the king, being only one person, does not and cannot maintain his authority by personally exercising physical control over all or even any significant number of his subjects. Sovereign control is instead explained by the recognition by each citizen that all other citizens expect each other to view the king (or other established government) as the person whose orders will be followed. Coordinating communication among citizens to replace the sovereign is effectively barred, since conspiracy to replace the sovereign is generally punishable as a crime. Thus, in a process that can be modeled by variants of the prisoner's dilemma, during periods of stability no citizen will find it rational to move to replace the sovereign, even if all the citizens know they would be better off if they were all to act collectively.
A game-theoretic explanation for democratic peace is that public and open debate in democracies sends clear and reliable information regarding their intentions to other states. In contrast, it is difficult to know the intentions of nondemocratic leaders, what effect concessions will have, and if promises will be kept. Thus there will be mistrust and unwillingness to make concessions if at least one of the parties in a dispute is a non-democracy.
However, game theory predicts that two countries may still go to war even if their leaders are cognizant of the costs of fighting. War may result from asymmetric information; two countries may have incentives to misrepresent the amount of military resources they have on hand, rendering them unable to settle disputes agreeably without resorting to fighting. Moreover, war may arise because of commitment problems: if two countries wish to settle a dispute via peaceful means, but each wishes to go back on the terms of that settlement, they may have no choice but to resort to warfare. Finally, war may result from issue indivisibilities.
Game theory could also help predict a nation's responses when there is a new rule or law to be applied to that nation. One example is Peter John Wood's (2013) research looking into what nations could do to help reduce climate change. Wood thought this could be accomplished by making treaties with other nations to reduce greenhouse gas emissions. However, he concluded that this idea could not work because it would create a prisoner's dilemma for the nations.
Defence science and technology
Game theory has been used extensively to model decision-making scenarios relevant to defence applications. Most studies that have applied game theory in defence settings are concerned with Command and Control Warfare, and can be further classified into studies dealing with (i) Resource Allocation Warfare, (ii) Information Warfare, (iii) Weapons Control Warfare, and (iv) Adversary Monitoring Warfare. Many of the problems studied are concerned with sensing and tracking, for example a surface ship trying to track a hostile submarine and the submarine trying to evade being tracked, and the interdependent decision making that takes place with regard to bearing, speed, and the sensor technology activated by both vessels. Ho et al. provide a concise summary of the state of the art with regard to the use of game theory in defence applications and highlight the benefits and limitations of game theory in the considered scenarios.
Biology
Unlike those in economics, the payoffs for games in biology are often interpreted as corresponding to fitness. In addition, the focus has been less on equilibria that correspond to a notion of rationality and more on ones that would be maintained by evolutionary forces. The best-known equilibrium in biology is known as the evolutionarily stable strategy (ESS), first introduced by John Maynard Smith and George R. Price in 1973. Although its initial motivation did not involve any of the mental requirements of the Nash equilibrium, every ESS is a Nash equilibrium.
In biology, game theory has been used as a model to understand many different phenomena. It was first used to explain the evolution (and stability) of the approximate 1:1 sex ratios. Ronald Fisher (1930) suggested that the 1:1 sex ratios are a result of evolutionary forces acting on individuals who could be seen as trying to maximize their number of grandchildren.
Additionally, biologists have used evolutionary game theory and the ESS to explain the emergence of animal communication. The analysis of signaling games and other communication games has provided insight into the evolution of communication among animals. For example, the mobbing behavior of many species, in which a large number of prey animals attack a larger predator, seems to be an example of spontaneous emergent organization. Ants have also been shown to exhibit feed-forward behavior akin to fashion (see Paul Ormerod's Butterfly Economics).
Biologists have used the game of chicken to analyze fighting behavior and territoriality.
According to Maynard Smith, in the preface to Evolution and the Theory of Games, "paradoxically, it has turned out that game theory is more readily applied to biology than to the field of economic behaviour for which it was originally designed". Evolutionary game theory has been used to explain many seemingly incongruous phenomena in nature.
One such phenomenon is known as biological altruism. This is a situation in which an organism appears to act in a way that benefits other organisms and is detrimental to itself. This is distinct from traditional notions of altruism because such actions are not conscious, but appear to be evolutionary adaptations to increase overall fitness. Examples can be found in species ranging from vampire bats that regurgitate blood they have obtained from a night's hunting and give it to group members who have failed to feed, to worker bees that care for the queen bee for their entire lives and never mate, to vervet monkeys that warn group members of a predator's approach, even when it endangers that individual's chance of survival. All of these actions increase the overall fitness of a group, but occur at a cost to the individual.
Evolutionary game theory explains this altruism with the idea of kin selection. Altruists discriminate between the individuals they help and favor relatives. Hamilton's rule explains the evolutionary rationale behind this selection with the inequality c < rb, where the cost c to the altruist must be less than the benefit b to the recipient multiplied by the coefficient of relatedness r. The more closely related two organisms are, the more often altruism occurs between them, because they share many of the same alleles. This means that the altruistic individual, by ensuring that the alleles of its close relative are passed on through the survival of its offspring, can forgo having offspring itself, because roughly the same number of its alleles are passed on. For example, helping a sibling (in diploid animals) has a relatedness coefficient of 1/2, because (on average) an individual shares half of its alleles with its sibling. Ensuring that enough of a sibling's offspring survive to adulthood precludes the necessity of the altruistic individual producing offspring. The coefficient values depend heavily on the scope of the playing field: for example, if the choice of whom to favor includes all genetic living things, not just all relatives, and the differences among all humans account for only approximately 1% of the diversity in the playing field, a coefficient that was 1/2 in the smaller field becomes 0.995. Similarly, if information other than that of a genetic nature (e.g. epigenetics, religion, science, etc.) is taken to persist through time, the playing field becomes larger still, and the discrepancies smaller.
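As a purely illustrative numeric check of Hamilton's rule with assumed values: with relatedness r = 1/2, a benefit of b = 5 to a full sibling justifies an altruistic cost of up to c < 2.5.

# Hamilton's rule: altruism is favored when c < r * b.
r, b, c = 0.5, 5.0, 2.0       # assumed relatedness, benefit to recipient, cost to altruist
print(c < r * b)              # True: kin selection favors the altruistic act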
Computer science and logic
Game theory has come to play an increasingly important role in logic and in computer science. Several logical theories have a basis in game semantics. In addition, computer scientists have used games to model interactive computations. Also, game theory provides a theoretical basis to the field of multi-agent systems.
Separately, game theory has played a role in online algorithms; in particular, the k-server problem, which has in the past been referred to as games with moving costs and request-answer games. Yao's principle is a game-theoretic technique for proving lower bounds on the computational complexity of randomized algorithms, especially online algorithms.
The emergence of the Internet has motivated the development of algorithms for finding equilibria in games, markets, computational auctions, peer-to-peer systems, and security and information markets. Algorithmic game theory and within it algorithmic mechanism design combine computational algorithm design and analysis of complex systems with economic theory.
Game theory has multiple applications in the field of artificial intelligence and machine learning. It is often used in developing autonomous systems that can make complex decisions in uncertain environments. Other areas of application of game theory in the AI/ML context include multi-agent system formation, reinforcement learning, and mechanism design. By using game theory to model the behavior of other agents and anticipate their actions, AI/ML systems can make better decisions and operate more effectively.
Philosophy
Game theory has been put to several uses in philosophy. Responding to two papers by W.V.O. Quine (1960, 1967), David Lewis (1969) used game theory to develop a philosophical account of convention. In so doing, he provided the first analysis of common knowledge and employed it in analyzing play in coordination games. In addition, he first suggested that one can understand meaning in terms of signaling games. This later suggestion has been pursued by several philosophers since Lewis. Following Lewis's game-theoretic account of conventions, Edna Ullmann-Margalit (1977) and Bicchieri (2006) have developed theories of social norms that define them as Nash equilibria that result from transforming a mixed-motive game into a coordination game.
Game theory has also challenged philosophers to think in terms of interactive epistemology: what it means for a collective to have common beliefs or knowledge, and what are the consequences of this knowledge for the social outcomes resulting from the interactions of agents. Philosophers who have worked in this area include Bicchieri (1989, 1993), Skyrms (1990), and Stalnaker (1999).
The synthesis of game theory with ethics was championed by R. B. Braithwaite. The hope was that rigorous mathematical analysis of game theory might help formalize the more imprecise philosophical discussions. However, this expectation has been realized only to a limited extent.
In ethics, some authors (most notably David Gauthier, Gregory Kavka, and Jean Hampton) have attempted to pursue Thomas Hobbes' project of deriving morality from self-interest. Since games like the prisoner's dilemma present an apparent conflict between morality and self-interest, explaining why cooperation is required by self-interest is an important component of this project. This general strategy is a component of the general social contract view in political philosophy.
Other authors have attempted to use evolutionary game theory in order to explain the emergence of human attitudes about morality and corresponding animal behaviors. These authors look at several games, including the prisoner's dilemma, the stag hunt, and the Nash bargaining game, as providing an explanation for the emergence of attitudes about morality.
Epidemiology
Since the decision to take a vaccine for a particular disease is often made by individuals, who may consider a range of factors and parameters in making this decision (such as the incidence and prevalence of the disease, perceived and real risks associated with contracting the disease, mortality rate, perceived and real risks associated with vaccination, and financial cost of vaccination), game theory has been used to model and predict vaccination uptake in a society.
Well known examples of games
Prisoner's dilemma
William Poundstone described the game in his 1993 book Prisoner's Dilemma:
Two members of a criminal gang, A and B, are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communication with their partner. The principal charge would lead to a sentence of ten years in prison; however, the police do not have the evidence for a conviction. They plan to sentence both to two years in prison on a lesser charge but offer each prisoner a Faustian bargain: If one of them confesses to the crime of the principal charge, betraying the other, they will be pardoned and free to leave while the other must serve the entirety of the sentence instead of just two years for the lesser charge.
The dominant strategy (and therefore the best response to any possible opponent strategy) is to betray the other, which aligns with the sure-thing principle. However, both prisoners staying silent would yield a greater reward for both of them than mutual betrayal.
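A small sketch using the sentence lengths from the story above (2 years on the lesser charge, 10 years for the prisoner who is betrayed, 0 for the lone betrayer); the mutual-betrayal sentence of 5 years each is an assumption, since the story does not state it. The checks confirm that betrayal is a dominant strategy even though mutual silence is better for both.

# Years in prison (lower is better), indexed by (A's action, B's action).
years = {
    ("silent", "silent"): (2, 2),
    ("silent", "betray"): (10, 0),
    ("betray", "silent"): (0, 10),
    ("betray", "betray"): (5, 5),      # assumed mutual-betrayal sentence
}

# Betraying is a dominant strategy for A: it yields fewer years whatever B does.
for b_action in ("silent", "betray"):
    assert years[("betray", b_action)][0] < years[("silent", b_action)][0]

# Yet mutual silence leaves both prisoners better off than mutual betrayal.
assert years[("silent", "silent")][0] < years[("betray", "betray")][0]
print("Betrayal dominates, but mutual silence Pareto-dominates mutual betrayal.")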
Battle of the sexes
The "battle of the sexes" is a term used to describe the perceived conflict between men and women in various areas of life, such as relationships, careers, and social roles. This conflict is often portrayed in popular culture, such as movies and television shows, as a humorous or dramatic competition between the genders. This conflict can be depicted in a game theory framework. This is an example of non-cooperative games.
An example of the "battle of the sexes" can be seen in the portrayal of relationships in popular media, where men and women are often depicted as being fundamentally different and in conflict with each other. For instance, in some romantic comedies, the male and female protagonists are shown as having opposing views on love and relationships, and they have to overcome these differences in order to be together.
In this game, there are two pure-strategy Nash equilibria, corresponding to the two outcomes in which the players coordinate on the same activity. If mixed strategies are allowed, where each player randomizes over their options, there is also a third, mixed-strategy Nash equilibrium. However, in the context of the "battle of the sexes" game, the assumption is usually made that the game is played in pure strategies.
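Under one conventional payoff assignment, assumed here purely for illustration (3 for coordinating on one's preferred activity, 2 for coordinating on the other's, 0 for failing to coordinate), the mixed-strategy equilibrium can be computed by making each player indifferent between their two options.

# Battle of the sexes with assumed payoffs: 3 for one's preferred joint activity,
# 2 for the other's, 0 if the players fail to coordinate.
# Player 1 prefers activity X, Player 2 prefers activity Y.

# In the mixed equilibrium, Player 1 plays X with probability p chosen so that
# Player 2 is indifferent between X and Y:  2*p = 3*(1 - p)  ->  p = 3/5.
p = 3 / 5
# Symmetrically, Player 2 plays X with probability q chosen so that Player 1 is
# indifferent:  3*q = 2*(1 - q)  ->  q = 2/5.
q = 2 / 5

# Verify the indifference conditions numerically.
assert abs(2 * p - 3 * (1 - p)) < 1e-9       # Player 2 indifferent
assert abs(3 * q - 2 * (1 - q)) < 1e-9       # Player 1 indifferent
print(p, q)                                  # 0.6 0.4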
Ultimatum game
The ultimatum game is a game that has become a popular instrument of economic experiments. An early description is by Nobel laureate John Harsanyi in 1961.
One player, the proposer, is endowed with a sum of money. The proposer is tasked with splitting it with another player, the responder (who knows what the total sum is). Once the proposer communicates his decision, the responder may accept it or reject it. If the responder accepts, the money is split per the proposal; if the responder rejects, both players receive nothing. Both players know in advance the consequences of the responder accepting or rejecting the offer. The game demonstrates how social acceptance, fairness, and generosity influence the players' decisions.
The ultimatum game has a variant, the dictator game. The two are mostly identical, except that in the dictator game the responder has no power to reject the proposer's offer.
Trust game
The Trust Game is an experiment designed to measure trust in economic decisions. It is also called "the investment game" and is designed to investigate trust and demonstrate its importance, rather than the "rationality" of self-interest. The game was designed by Joyce Berg, John Dickhaut, and Kevin McCabe in 1995.
In the game, one player (the investor) is given a sum of money and must decide how much of it to give to another player (the trustee). The amount given is then tripled by the experimenter. The trustee then decides how much of the tripled amount to return to the investor. If the trustee were completely self-interested, they would return nothing; however, this is not what is observed when the experiment is conducted. The outcomes suggest that people are willing to place trust, by risking some amount of money, in the belief that the trustee will reciprocate.
Cournot Competition
The Cournot competition model involves players choosing the quantity of a homogeneous product to produce independently and simultaneously, where the marginal cost can differ between firms and each firm's payoff is its profit. The production costs are public information, and each firm aims to find its profit-maximizing quantity based on what it believes the other firm will produce. In this game, the firms would jointly prefer to produce the monopoly quantity, but each has a strong incentive to deviate and produce more, which lowers the market-clearing price. Deviating in this way does not yield the highest joint payoff, as a firm's ability to maximize profit depends on its market share and the elasticity of market demand. The Cournot equilibrium is reached when each firm operates on its reaction function with no incentive to deviate, since each is playing a best response to the other firm's output; the Cournot equilibrium is thus a Nash equilibrium of this game.
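A brief numerical sketch of the reaction-function logic described above, under the standard textbook assumptions of linear inverse demand P = a - b(q1 + q2) and constant marginal costs (all parameter values are assumed): iterating each firm's best response converges to the Cournot equilibrium quantities.

# Cournot duopoly with linear inverse demand P = a - b*(q1 + q2) and constant
# marginal costs c1, c2 (all parameter values are assumed for illustration).
a, b = 100.0, 1.0
c1, c2 = 10.0, 10.0

def best_response(q_other, own_cost):
    # Maximizes (a - b*(q + q_other) - own_cost) * q, giving the reaction function.
    return max(0.0, (a - own_cost - b * q_other) / (2 * b))

q1 = q2 = 0.0
for _ in range(100):                 # iterate the reaction functions to a fixed point
    q1, q2 = best_response(q2, c1), best_response(q1, c2)

print(round(q1, 2), round(q2, 2))    # both approach (a - c) / (3b) = 30.0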
Bertrand Competition
The Bertrand competition model assumes homogeneous products and a constant marginal cost, and players choose prices. The equilibrium of price competition is where price equals marginal cost, assuming complete information about competitors' costs. At any higher price, firms have an incentive to undercut one another, because the firm offering the homogeneous product at a lower price gains all of the market share, known as a cost advantage.
In popular culture
Based on the 1998 book by Sylvia Nasar, the life story of game theorist and mathematician John Nash was turned into the 2001 biopic A Beautiful Mind, starring Russell Crowe as Nash.
The 1959 military science fiction novel Starship Troopers by Robert A. Heinlein mentioned "games theory" and "theory of games". In the 1997 film of the same name, the character Carl Jenkins referred to his military intelligence assignment as being assigned to "games and theory".
The 1964 film Dr. Strangelove satirizes game theoretic ideas about deterrence theory. For example, nuclear deterrence depends on the threat to retaliate catastrophically if a nuclear attack is detected. A game theorist might argue that such threats can fail to be credible, in the sense that they can lead to subgame imperfect equilibria. The movie takes this idea one step further, with the Soviet Union irrevocably committing to a catastrophic nuclear response without making the threat public.
The 1980s power pop band Game Theory was founded by singer/songwriter Scott Miller, who described the band's name as alluding to "the study of calculating the most appropriate action given an adversary... to give yourself the minimum amount of failure".
Liar Game, a 2005 Japanese manga and 2007 television series, presents the main characters in each episode with a game or problem that is typically drawn from game theory, as demonstrated by the strategies applied by the characters.
The 1974 novel Spy Story by Len Deighton explores elements of game theory in regard to cold war army exercises.
The 2008 novel The Dark Forest by Liu Cixin explores the relationship between extraterrestrial life, humanity, and game theory.
Joker, the prime antagonist in the 2008 film The Dark Knight, presents game theory concepts, notably the prisoner's dilemma, in a scene where he asks passengers on two different ferries to bomb the other one to save their own.
In the 2018 film Crazy Rich Asians, the female lead Rachel Chu is a professor of economics and game theory at New York University. At the beginning of the film she is seen in her NYU classroom playing a game of poker with her teaching assistant and wins the game by bluffing; then in the climax of the film, she plays a game of mahjong with her boyfriend's disapproving mother Eleanor, losing the game to Eleanor on purpose but winning her approval as a result.
In the 2017 film Molly's Game, Brad, an inexperienced poker player, makes an irrational betting decision without realizing it and causes his opponent Harlan to deviate from his Nash equilibrium strategy, resulting in a significant loss when Harlan loses the hand.
| Mathematics | Other | null |
11966 | https://en.wikipedia.org/wiki/Firearm | Firearm | A firearm is any type of gun that uses an explosive charge and is designed to be readily carried and operated by an individual. The term is legally defined further in different countries (see legal definitions).
The first firearms originated in 10th-century China, when bamboo tubes containing gunpowder and pellet projectiles were mounted on spears to make the portable fire lance, operable by a single person, which was later used effectively as a shock weapon in the siege of De'an in 1132. In the 13th century, fire lance barrels were replaced with metal tubes and transformed into the metal-barreled hand cannon. The technology gradually spread throughout Eurasia during the 14th century. Older firearms typically used black powder as a propellant, but modern firearms use smokeless powder or other explosive propellants. Most modern firearms (with the notable exception of smoothbore shotguns) have rifled barrels to impart spin to the projectile for improved flight stability.
Modern firearms can be described by their caliber (i.e. bore diameter). For pistols and rifles this is given in millimeters or inches (e.g. 7.62mm or .308 in.); in the case of shotguns, gauge or bore (e.g. 12 ga. or .410 bore). They are also described by the type of action employed (e.g. muzzleloader, breechloader, lever, bolt, pump, revolver, semi-automatic, fully automatic, etc.), together with the usual means of deportment (i.e. hand-held or mechanical mounting). Further classification may make reference to the type of barrel used (i.e. rifled) and to the barrel length (e.g. 24 inches), to the firing mechanism (e.g. matchlock, wheellock, flintlock, or percussion lock), to the design's primary intended use (e.g. hunting rifle), or to the commonly accepted name for a particular variation (e.g. Gatling gun).
Shooters aim firearms at their targets with hand-eye coordination, using either iron sights or optical sights. The accurate range of pistols generally does not exceed , while most rifles are accurate to using iron sights, or to longer ranges whilst using optical sights. Purpose-built sniper rifles and anti-materiel rifles are accurate to ranges of more than . (Firearm rounds may be dangerous or lethal well beyond their accurate range; the minimum distance for safety is much greater than the specified range for accuracy.)
Types
A firearm is a barreled weapon that inflicts damage on targets by launching one or more projectiles driven by rapidly expanding high-pressure gas produced by exothermic combustion (deflagration) of a chemical propellant, historically black powder, now smokeless powder.
In the military, firearms are categorized into heavy and light weapons regarding their portability by infantry. Light firearms are those that can be readily carried by an individual foot soldier, though they might still require more than one individual (crew-served) to achieve optimal operational capacity. Heavy firearms are those that are too large and heavy to be transported on foot, or too unstable against recoil, and thus require the support of a weapons platform (e.g. a fixed mount, wheeled carriage, vehicle, aircraft or water vessel) to be tactically mobile or useful.
The subset of light firearms that only use kinetic projectiles and are compact enough to be operated to full capacity by a single infantryman (individual-served) are also referred to as small arms. Such firearms include handguns such as pistols, revolvers, and derringers; and long guns such as rifles (and their subtypes), shotguns, submachine guns, and machine guns.
Among the world's arms manufacturers, the top firearms manufacturers are Browning, Remington, Colt, Ruger, Smith & Wesson, Savage, Mossberg (United States), Heckler & Koch, SIG Sauer, Walther (Germany), ČZUB (Czech Republic), Glock, Steyr Arms (Austria), FN Herstal (Belgium), Beretta (Italy), Norinco (China), Rostec, and Kalashnikov (Russia). Former top producers included the Springfield Armory (United States), the Royal Small Arms Factory (United Kingdom), Mauser (Germany), Steyr-Daimler-Puch (Austria), and Rock Island Armory under Armscor (Philippines).
The Small Arms Survey reported that there were over one billion firearms distributed globally, of which 857 million (about 85 percent) were in civilian hands. U.S. civilians alone account for 393 million (about 46 percent) of the worldwide total of civilian-held firearms. This amounts to "120.5 firearms for every 100 residents". The world's armed forces control about 133 million (about 13 percent) of the global total of small arms, of which over 43 percent belong to two countries: the Russian Federation (30.3 million) and China (27.5 million). Law enforcement agencies control about 23 million (about 2 percent) of the global total of small arms.
Handguns
A handgun is, as defined generally and in many gun laws, a firearm that can be used with a single hand. They are the smallest of all firearms, and are common as sidearms, concealed carry weapons, or as backup weapons for self-defense.
Handguns can be categorized into three broad types: pistols, which have a single fixed firing chamber machined into the rear of the barrel, and are often loaded using magazines of varying capacities; revolvers, which have a number of firing chambers or "charge holes" in a revolving cylinder, each one loaded with a single cartridge or charge; and derringers, broadly defined as any handgun that is neither a traditional pistol nor a revolver.
There are various types of the aforementioned handguns designed for different mechanisms or purposes, such as single-shot, manual repeating, semi-automatic, or automatic pistols; single-action, double-action, or double-action/single-action revolvers; and small, compact handguns for concealed carry such as pocket pistols and "Saturday night specials".
Examples of pistols include the Glock, Browning Hi-Power, M1911 pistol, Makarov pistol, Walther PP, Luger pistol, Mauser C96, and Beretta 92. Examples of revolvers include the Colt Single Action Army, Smith & Wesson Model 10, Colt Official Police, Colt Python, New Nambu M60, and Mateba Autorevolver. Examples of derringers include the Remington Model 95, FP-45 Liberator, and COP .357 Derringer.
Long guns
A long gun is any firearm with a notably long barrel (there are restrictions on minimum barrel length in many jurisdictions; maximum barrel length is usually a matter of practicality). Unlike a handgun, long guns are designed to be held and fired with both hands, while braced against either the hip or the shoulder for better stability. The receiver and trigger group is mounted into a stock made of wood, plastic, metal, or composite material, which has sections that form a foregrip, rear grip, and optionally (but typically) a shoulder mount called the butt. Early long arms, from the Renaissance up to the mid-19th century, were generally smoothbore firearms that fired one or more ball shot, called muskets or arquebuses depending on caliber and firing mechanism. Since the 19th and 20th centuries, various types of long guns have been created for different purposes.
Rifles
A rifle is a long gun that has riflings (spiral grooves) machined into the bore (inner) surface of its barrel, imparting a gyroscopically stabilizing spin to the bullets that it fires. A descendant of the musket, rifles produce a single point of impact with each firing with a long range and high accuracy. For this reason, as well as for their ubiquity, rifles are very popular among militaries as service rifles, police as accurate long-range alternatives to their traditional shotgun long guns, and civilians for hunting, shooting sports, and self-defense.
Many types of rifles exist owing to their wide adoption and versatility, ranging from mere barrel length differences as in short-barreled rifles and carbines, to classifications per the rifle's function and purpose as in semi-automatic rifles, automatic rifles and sniper rifles, to differences in the rifle's action as in single-shot, break-action, bolt-action, and lever-action rifles.
Examples of rifles of various types include the Henry rifle, Winchester rifle, Lee–Enfield, Gewehr 98, M1 Garand, MAS-36 rifle, AKM, Ruger 10/22, Heckler & Koch G3, Remington Model 700, and Heckler & Koch HK417.
Shotguns
A shotgun is a long gun that has a predominantly smoothbore barrel—meaning it lacks rifling—designed to fire a number of shot pellets in each discharge. These shot pellet sizes commonly range between 2 mm #9 birdshot and 8.4 mm #00 (double-aught) buckshot, and produce a cluster of impact points with considerably less range and accuracy, since shot spreads during flight. Shotguns are also capable of firing single solid projectiles called slugs, or specialty (often "less lethal") munitions such as bean bags or tear gas to function as a riot gun or breaching rounds to function as a door breaching shotgun. Shotgun munitions, regardless of type, are packed into shotgun shells (cartridges designed specifically for shotguns) that are loaded into the shotgun for use; these shells are commonly loose and manually loaded one-by-one, though some shotguns accept magazines.
Shotguns share many qualities with rifles, such as both being descendants of early long guns such as the musket; both having single-shot, break-action, bolt-action, lever-action, pump-action, semi-automatic, and automatic variants; and both being popular with militaries, police, and civilians for largely the same reasons. However, unlike rifles, shotguns are less favored in combat roles due to their low accuracy and limited effectiveness in modern warfare, with combat shotguns often only used for breaching or close-quarters combat and sometimes limited to underbarrel attachments such as the M26 Modular Accessory Shotgun System. Shotguns are still popular with civilians for the suitability of their shot spread in hunting, clay pigeon shooting, and home defense.
Double-barreled shotguns are break-action shotguns with two parallel barrels (horizontal side-by-side or vertical over-and-under), allowing two single shots that can be loaded and fired in quick succession.
Examples of shotguns include the Winchester Model 1897, Browning Auto-5, Ithaca 37, Remington Model 870, Mossberg 500, Benelli M4, Franchi SPAS-12, Atchisson AA-12, and Knight's Armament Company Masterkey.
Carbines
A carbine is a long gun, usually a rifle, that has had its barrel shortened from its original length or is of a certain size smaller than standard rifles, but is still large enough to be considered a long gun. How considerable the difference is between a rifle and a carbine varies; for example, the standard Heckler & Koch G36's barrel has a length of 480 mm (18.9 in), the G36K carbine variant's barrel is 318 mm (12.5 in), and the G36C compact variant's barrel is 228 mm (9.0 in). Some carbines are also redesigned compared to their rifle counterparts, such as the aforementioned G36/G36K and G36C, or the AK-74 and AKS-74U. However, some carbines, such as the M1 carbine, are not a variant of any existing design and are their own firearm model. Carbines are regardless very similar to rifles and often have the same actions (single-shot, lever-action, bolt-action, semi-automatic, automatic, etc.). This similarity has given carbines the alternate name of short barreled rifle (SBR), though this more accurately describes a full-size rifle with a shortened carbine-style barrel for close-quarters use.
The small size of a carbine provides lighter weight and better maneuverability, making them ideal for close-quarters combat and storage in compact areas. This makes them popular firearms among special forces and police tactical units alongside submachine guns, considerably so since the late 1990s due to the familiarity and better stopping power of carbines compared to submachine guns. They are also popular with (and were originally mostly intended for) military personnel in roles that are expected to engage in combat, but where a full-size rifle would be an impediment to the primary duties of that soldier (logistical personnel, airborne forces, military engineers, officers, etc.), though since the turn of the millennium these have been superseded to a degree in some roles by personal defense weapons. Carbines are also common among civilian firearm owners who have size, space, and power concerns similar to military and police users.
Examples of carbines include the Winchester Model 1892, Rifle No. 5 Mk I, SKS, M1 carbine, Ruger Mini-14, M4 carbine, and Kel-Tec SUB-2000.
Assault rifles
An assault rifle is commonly defined as a selective-fire rifle chambered in an intermediate cartridge (such as 5.56×45mm NATO, 7.62×39mm, 5.45×39mm, and .300 AAC Blackout) and fed with a detachable magazine. Assault rifles are also usually smaller than full-sized rifles such as battle rifles.
Originating with the StG 44 produced by Nazi Germany during World War II, assault rifles have since become extremely popular among militaries and other armed groups due to their universal versatility, and they have made up the vast majority of standard-issue military service rifles since the mid-20th century. Various configurations of assault rifle exist, such as the bullpup, in which the firing grip is located in front of the breech instead of behind it.
Examples of assault rifles include the Kalashnikov rifles of Soviet and Russian origin (such as the AK-47, AKM, and AK-74), as well as the American M4 carbine and M16 rifle.
Battle rifles
A battle rifle is commonly defined as a semi-automatic or selective-fire rifle that is larger or longer than an assault rifle and is chambered in a "full-power" cartridge (e.g. 7.62×51mm NATO, 7.92×57mm Mauser, 7.62×54mmR). The term originated as a retronym to differentiate older full-powered rifles of these configurations, such as the M1 Garand, from newer assault rifles using intermediate cartridges, such as the Heckler & Koch HK33, but it is sometimes used to describe similar modern rifles such as the FN SCAR.
Battle rifles serve similar purposes as assault rifles, as they both are usually employed by ground infantry for essentially the same purposes. However, some prefer battle rifles for their more powerful cartridge, despite the added recoil. Some designated marksman rifles are configured from battle rifles, such as the Mk 14 Enhanced Battle Rifle and United States Marine Corps Designated Marksman Rifle, both essentially heavily modified and modernized variants of the M14 rifle.
Examples of rifles considered to be battle rifles include the FG 42, Gewehr 43, FN FAL, Howa Type 64, and Desert Tech MDR.
Sniper rifles
A sniper rifle is, per widespread definition, a high-powered precision rifle, often bolt-action or semi-automatic, with an effective range farther than that of a standard rifle. Though any rifle in a sniper configuration (usually with a telescopic sight and bipod) can be considered a sniper rifle, most sniper rifles are purpose-built for their applications, or are variants of existing rifles that have been modified to function as sniper rifles, such as the Type 97 sniper rifle, which was essentially a standard Type 38 rifle that was modified to be lighter and come with a telescopic sight.
Related developments are anti-materiel rifles, large-caliber rifles designed to destroy enemy materiel such as vehicles, supplies, or hardware; anti-tank rifles, anti-materiel rifles that were designed specifically to combat early armoured fighting vehicles, but are now largely obsolete due to advances in vehicle armour; scout rifles, a broad class of rifles generally summed up as short, lightweight, portable sniper rifles; and designated marksman rifles, semi-automatic high-precision rifles, usually chambered in intermediate or full-power cartridges, that fill the range gap between sniper rifles and regular rifles and are designed for designated marksmen in squads.
Examples of sniper and scout rifles include the M40 rifle, Heckler & Koch PSG1, Walther WA 2000, Accuracy International AWM, M24 Sniper Weapon System, Steyr Scout, Sako TRG, and CheyTac Intervention. Examples of anti-materiel and anti-tank rifles include the Mauser Tankgewehr M1918, Boys anti-tank rifle, PTRS-41, Barrett M82, Gepárd anti-materiel rifle, and McMillan TAC-50. Examples of designated marksman rifles include the SVD, SR-25, Dragunov SVU, Marine Scout Sniper Rifle, Mk 14 Enhanced Battle Rifle, and M110 Semi-Automatic Sniper System.
Automatic rifles
An automatic rifle is a magazine-fed rifle that is capable of automatic fire. They include most assault rifles and battle rifles, but originated as their own category of rifles capable of automatic fire, as opposed to the bolt-action and semi-automatic rifles commonly issued to infantry at the time of their invention. They usually have smaller magazine capacities than machine guns; the French Chauchat had a 20-round box magazine, while the Hotchkiss Mle 1914 machine gun, the French Army's standard machine gun at the time, was fed by a 250-round ammunition belt.
Though automatic rifles are sometimes considered to be their own category, they are also occasionally considered to be other types of firearms that postdated their invention, usually as light machine guns. Automatic rifles are sometimes confused with machine guns or vice versa, or are defined as such by law; the National Firearms Act and Firearm Owners Protection Act define a "machine gun" in United States Code Title 26, Subtitle E, Chapter 53, Subchapter B, Part 1, § 5845 as "... any firearm which shoots ... automatically more than one shot, without manual reloading, by a single function of the trigger". "Machine gun" is therefore largely synonymous with "automatic weapon" in American civilian parlance, covering all automatic firearms. In most jurisdictions, automatic rifles, as well as automatic firearms in general, are prohibited from civilian purchase or are at least heavily restricted; in the U.S. for instance, most automatic rifles are Title II weapons that require certain licenses and are greatly regulated.
Examples of automatic rifles include the Cei-Rigotti, Lewis gun, Fedorov Avtomat, and M1918 Browning automatic rifle.
Machine guns
A machine gun is a fully-automatic firearm, chambered in intermediate or full-power rifle cartridges, designed to provide sustained automatic direct fire as opposed to the semi-automatic or burst fire of standard rifles. They are commonly associated with being belt-fed, though many machine guns are also fed by box, drum, pan, or hopper magazines. They generally have a high rate of fire and a large ammunition capacity, and are often used for suppressive fire to support infantry advances or defend positions from enemy assaults. Owing to their versatility and firepower, they are also commonly installed on military vehicles and military aircraft, either as main or ancillary weapons. Many machine guns are individual-served and can be operated by a single soldier, though some are crew-served weapons that require a dedicated crew of soldiers to operate, usually between two and six soldiers depending on the machine gun's operation and the crew members' roles (ammunition bearers, spotters, etc.).
Machine guns can be divided into three categories: light machine guns, individual-served machine guns of an intermediate cartridge that are usually magazine-fed; medium machine guns, belt-fed machine guns of a full-power caliber and a certain weight that can be operated by an individual but tend to work best with a crew; and heavy machine guns, machine guns that are too large and heavy to be carried and are thus mounted to something (like a tripod or military vehicle), and require a crew to operate. A general-purpose machine gun combines these categories under a single flexible machine gun platform, often one that is most suitable as a light or medium machine gun but fares well as a heavy machine gun. A closely related concept is the squad automatic weapon, a portable light machine gun or even a modified rifle that is designed and fielded to provide a squad with rapid direct fire.
Examples of machine guns include the Maxim gun, M2 Browning, Bren light machine gun, MG 42, PK machine gun, FN MAG, M249 light machine gun, RPK, IWI Negev, and M134 Minigun.
Submachine guns
A submachine gun is a magazine-fed carbine chambered in a small-caliber handgun cartridge (such as 9×19mm Parabellum, .45 ACP, .22 Long Rifle, and .40 S&W). They cannot be considered machine guns due to their small caliber, hence the prefix "sub-" to differentiate them from proper machine guns. Submachine guns are commonly associated with high rates of fire, automatic fire capabilities, and low recoil, though many submachine guns deviate from this in various ways, such as having fairly low rates of fire or offering burst and semi-automatic modes through selective fire. Most submachine guns are the size of carbines and short-barreled rifles, and use similar configurations. Many are designed to take up as little space as possible for use in close quarters or for easy storage in vehicles and cases. Some submachine guns are designed and configured similarly to pistols, even down to size, and are thus occasionally classed as machine pistols, even if they are not actually handguns (i.e. they are designed to require two hands to use).
Submachine guns are considered ideal for close-quarters combat and are cheap to mass-produce. They were very common in military service through much of the 20th century, but have since been superseded in most combat roles by rifles, carbines, and personal defense weapons due to their low effective range and poor penetration against most body armor developed since the late 20th century. However, they remain popular among special forces and police for their effectiveness in close-quarters and low likelihood to overpenetrate targets.
Examples of submachine guns include the MP 18, MP 40, Thompson submachine gun, M3 submachine gun, Uzi, Heckler & Koch MP5, Spectre M4, Steyr TMP, Heckler & Koch UMP, PP-2000, KRISS Vector, and SIG MPX.
Personal defense weapons
A personal defense weapon is, in simplest terms, a submachine gun that is designed to fire ammunition with ballistic performance that is similar to (but not actually a type of) rifle cartridges, often called "sub-intermediate" cartridges. In this way, it combines the high automatic rate of fire, reliable low recoil, and lightweight compact maneuverability of submachine guns with the versatility, penetration, and effective range of rifles, effectively making them an "in-between" of submachine guns and carbines.
Personal defense weapons were developed to provide rear and "second-line" personnel not otherwise armed with high-caliber firearms (vehicle and weapon crews, engineers, logistical personnel, etc.) with a method of effective self-defense against skirmishers and infiltrators who cannot effectively be defeated by low-powered submachine guns and handguns, often the only firearms suitable for those personnel (while they could be issued rifles or carbines, those would become unnecessary burdens in their normal duties, during which the likelihood of hostility is fairly rare regardless, making their issuance questionable). Thus, per their name, personal defense weapons allow these personnel to effectively defend themselves from enemies and repel attacks themselves or at least until support can arrive. They are not intended for civilian self-defense due to their nature as automatic firearms (which are usually prohibited from civilian purchase), though some semi-automatic PDWs exist for the civilian market, albeit often with longer barrels.
Examples of personal defense weapons include the FN P90, Heckler & Koch MP7, AAC Honey Badger, and ST Kinetics CPW.
Action
Types aside, firearms are also categorized by their "action", which describes their loading, firing, and unloading cycle.
Manual
Manual action or manual operation is essentially any type of firearm action that is loaded, and usually also fired, one cartridge at a time by the user, rather than automatically. Manual action firearms can be divided into two basic categories: single-shot firearms that can only be fired once per barrel before they must be reloaded or charged via an external mechanism or series of steps; and repeating firearms that can be fired multiple times per barrel, but only once with each pull of the trigger or ignition, with the firearm's action reloaded or charged via an internal mechanism between trigger pulls.
Types of manual actions include lever action, bolt action, and pump action.
Lever action
Lever action is a repeating action that is operated by using a cocking handle (the "lever") located around the trigger guard area (often incorporating it) that is pulled down then back up to move the bolt via internal linkages and cock the firing pin mechanism, expelling the old cartridge and loading a new one.
Bolt action
Bolt action is a repeating (and rarely single-shot) action that is operated by directly manipulating the bolt via a bolt handle. The bolt is unlocked from the receiver, then pulled back to open the breech, ejecting a cartridge, and cocking the striker and engaging it against the sear; when the bolt is returned to the forward position, a new cartridge, if loaded, is pushed out of the magazine and into the barrel chamber, and the breech is re-locked.
Two designs of bolt action exist: rotating bolt, where the bolt must be axially rotated to unlock and lock the receiver; and straight pull, which does not require the bolt to be rotated, simplifying the bolt action mechanism and allowing for a greater rate of fire.
Pump action
Pump action or slide action is a repeating action that is operated by moving a sliding handguard (the "pump") on the gun's forestock rearward (frontward on some models), ejecting any spent cartridges and cocking the hammer or striker, then moving the handguard forward to load a new cartridge into the chamber. It is most common on shotguns, though pump action rifles and grenade launchers also exist.
Semi-automatic
Semi-automatic, self-loading, or autoloading is a firearm action that, after a single discharge, automatically performs the feeding and ignition procedures necessary to prepare the firearm for a subsequent discharge. Semi-automatic firearms only discharge once with each trigger actuation, and the trigger must be actuated again to fire another cartridge.
Types of semi-automatic actions and modes include automatic, burst, and selective.
Automatic
Automatic is a firearm action that uses the same automated action cycling as semi-automatic, but continues to do so for as long as the trigger is actuated, until the trigger is released or the firearm is depleted of available ammunition. The excess energy released from a discharged cartridge is used to load a new cartridge into the chamber; the new cartridge is then discharged when a hammer or striker impact on the primer ignites the propellant. Automatic firearms are further defined by the type of cycling principles used, such as recoil operation (uses energy from the recoil to cycle the action), blowback (uses energy from the cartridge case as it is pushed back by expanding gas), blow forward (uses propellant gas pressure to open the breech), or gas operation (uses high-pressure gas from a fired cartridge to dispose of the spent case and load a new cartridge).
Burst
Burst is a fire mode of some semi-automatic and automatic firearms that fires a predetermined number of rounds—usually two or three—in the same manner as automatic fire. Depending on the firearm, a single trigger actuation may fire the full burst of rounds, or the trigger must be held for the entire discharge, with a short pull firing a single round or an incomplete burst. Most firearms with burst capabilities have it as a fire mode secondary to semi-automatic and automatic.
Selective fire
Selective fire or select fire is the capability of a firearm to have its fire mode adjusted between semi-automatic, burst, or automatic. The modes are chosen by means of a fire mode selector, which varies depending on the weapon's design. The presence of selective-fire modes on firearms allows more efficient use of ammunition for specific tactical needs, either precision-aimed or suppressive fire. Selective fire is most commonly found on assault rifles and submachine guns.
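The functional difference between these modes comes down to how many rounds a single trigger actuation releases. A minimal, purely illustrative sketch of that logic follows; the class, parameter names, and numbers are hypothetical and do not correspond to any real firearm's specification or fire control group.

    # Illustrative abstraction only: models rounds released per trigger actuation.
    class FireModeModel:
        def __init__(self, rounds_loaded, burst_length=3, cyclic_rate_rpm=700):
            self.rounds = rounds_loaded
            self.burst_length = burst_length      # bursts usually fire two or three rounds
            self.cyclic_rate_rpm = cyclic_rate_rpm
            self.mode = "semi"                    # "semi", "burst", or "auto"

        def pull_trigger(self, hold_time_s=0.1):
            """Return the number of rounds one trigger actuation discharges."""
            if self.mode == "semi":
                wanted = 1                                             # one round per pull
            elif self.mode == "burst":
                wanted = self.burst_length                             # fixed, predetermined burst
            else:
                wanted = int(hold_time_s * self.cyclic_rate_rpm / 60)  # fires while held
            fired = min(wanted, self.rounds)                           # limited by remaining ammunition
            self.rounds -= fired
            return fired

In this toy model the selector changes nothing except the round count per actuation, which mirrors the mechanical role of the fire mode selector described above.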
Use as a blunt weapon
Firearms can be used as blunt weapons, for instance to conserve limited ammunition or when ammunition has run out entirely.
New recruits of the Israel Defense Forces undergo training on the safe practice of using the M16 rifle as a blunt weapon, mainly so that in close-quarter fighting, the weapon cannot be pulled away from them. Other training includes the recruit learning how to jab parts of the body with the muzzle and using the butt stock as a weapon.
Forensic medicine recognizes evidence for various types of blunt-force injuries produced by firearms. For example, "pistol-whipping" typically leaves semicircular or triangular lacerations of skin produced by the butt of a pistol.
In armed robberies, beating the victims with firearms is a more common way to complete the robbery than shooting or stabbing them.
Examples include:
Buttstroking, striking with the butt stock of a firearm.
Pistol-whipping, striking someone with a handgun.
Striking with the muzzle end of a firearm without a bayonet attached.
History
The first firearms were invented in 10th century China when the man-portable fire lance (a bamboo or metal tube that could shoot ignited gunpowder) was combined with projectiles such as scrap metal, broken porcelain, or darts/arrows.
An early depiction of a firearm is a sculpture from a cave in Sichuan, China. The sculpture dates to the 12th century and represents a figure carrying a vase-shaped bombard, with flames and a cannonball coming out of it. The oldest surviving gun, a hand cannon made of bronze, has been dated to 1288 because it was discovered at a site in modern-day Acheng District, Heilongjiang, China, where the Yuan Shi records that battles were fought at that time. The firearm had a barrel, a chamber for the gunpowder, and a socket for the firearm's handle, which would have been made of wood.
The Arabs and Mamluks had firearms in the late-13th century. Europeans obtained firearms in the 14th century. The Koreans adopted firearms from the Chinese in the 14th century. The Iranians (first Aq Qoyunlu and Safavids) and Indians (first Mughals) all got them no later than the 15th century, from the Ottoman Turks. The people of the Nusantara archipelago of Southeast Asia used the long arquebus at least by the last quarter of the 15th century.
Even though knowledge of making gunpowder-based weapons had existed in the Nusantara archipelago since the failed Mongol invasion of Java (1293), and the predecessor of firearms, the pole gun (bedil tombak), was recorded as being used by Java in 1413, the knowledge of making "true" firearms came much later, after the middle of the 15th century. It was brought by the Islamic nations of West Asia, most probably the Arabs. The precise year of introduction is unknown, but it may be safely concluded to be no earlier than 1460. Before the arrival of the Portuguese in Southeast Asia, the natives already possessed firearms, the Java arquebus.
The technology of firearms in Southeast Asia further improved after the Portuguese capture of Malacca (1511). Starting in 1513, the traditions of German-Bohemian gun-making merged with Turkish gun-making traditions. This resulted in the Indo-Portuguese tradition of matchlocks. Indian craftsmen modified the design by introducing a very short, almost pistol-like buttstock held against the cheek, not the shoulder, when aiming. They also reduced the caliber and made the gun lighter and more balanced. This was a hit with the Portuguese, who did a lot of fighting aboard ships and river craft and valued a more compact gun. The Malaccan gunfounders, regarded as being on a level with those of Germany, quickly adapted these new firearms, and thus a new type of arquebus, the istinggar, appeared. The Japanese did not acquire firearms until the 16th century, and then from the Portuguese rather than from the Chinese.
Developments in firearms accelerated during the 19th and 20th centuries. Breech-loading became more or less a universal standard for the reloading of most hand-held firearms and continues to be so with some notable exceptions (such as mortars). Instead of loading individual rounds into weapons, magazines holding multiple munitions were adopted—these aided rapid reloading. Automatic and semi-automatic firing mechanisms meant that a single soldier could fire many more rounds in a minute than a vintage weapon could fire over the course of a battle. Polymers and alloys in firearm construction made weaponry progressively lighter and thus easier to deploy. Ammunition changed over the centuries from simple metallic ball-shaped projectiles that rattled down the barrel to bullets and cartridges manufactured to high precision. Especially in the past century particular attention has focused on accuracy and sighting to make firearms altogether far more accurate than ever before. More than any single factor though, firearms have proliferated due to the advent of mass production—enabling arms-manufacturers to produce large quantities of weaponry to a consistent standard.
Velocities of bullets increased with the use of a "jacket" of metals such as copper or copper alloys that covered a lead core and allowed the bullet to glide down the barrel more easily than exposed lead. Such bullets are known as "full metal jacket" (FMJ). Such FMJ bullets are less likely to fragment on impact and are more likely to traverse through a target while imparting less energy. Hence, FMJ bullets impart less tissue damage than non-jacketed bullets that expand. This led to their adoption for military use by countries adhering to the Hague Convention of 1899.
That said, the basic principle behind firearm operation remains unchanged to this day. A musket of several centuries ago is still similar in principle to a modern-day rifle—using the expansion of gases to propel projectiles over long distances—albeit less accurately and rapidly.
Early firearm models
Fire lances
The Chinese fire lance from the 10th century was the direct predecessor to the modern concept of the firearm. It was not a gun itself, but an addition to soldiers' spears. Originally it consisted of paper or bamboo barrels that would contain incendiary gunpowder that could be lit one time and which would project flames at the enemy. Sometimes Chinese troops would place small projectiles within the barrel that would also be projected when the gunpowder was lit, but most of the explosive force would create flames. Later, the barrel was changed to be made of metal, so that more explosive gunpowder could be used and put more force into the propulsion of projectiles.
Hand cannons
The original predecessor of all firearms, the Chinese hand cannon from the 13th century, was loaded with gunpowder and the projectile (initially lead shot, later replaced by cast iron) through the muzzle, while a fuse was placed at the rear. This fuse was lit, causing the gunpowder to ignite and propel the projectiles. In military use, the standard hand cannon was tremendously powerful, while also being somewhat erratic due to the relative inability of the gunner to aim the weapon, or to control the ballistic properties of the projectile. Recoil could be absorbed by bracing the barrel against the ground using a wooden support, the forerunner of the stock. Neither the quality nor amount of gunpowder, nor the consistency in projectile dimensions was controlled, with resulting inaccuracy in firing due to windage, variance in gunpowder composition, and the difference in diameter between the bore and the shot. Hand cannons were replaced around the 15th century by lighter carriage-mounted artillery pieces, and ultimately by the arquebus.
In the 1420s, gunpowder was used to propel missiles from hand-held tubes during the Hussite revolt in Bohemia.
Arquebuses
The arquebus is a long gun that appeared in Europe and the Ottoman Empire during the 15th century. The term arquebus is derived from the Dutch word haakbus (literally meaning "hook gun"). The term was applied to many different types of guns. In their earliest form they were defensive weapons mounted on German city walls in the 15th century. The addition of a shoulder stock, priming pan, and matchlock mechanism in the late 15th century turned the arquebus into a handheld firearm, and also the first firearm equipped with a trigger. Heavy arquebuses mounted on war wagons were called arquebus à croc. These heavy arquebuses fired a lead ball of about 3.5 ounces (100 g).
Muskets
Muzzle-loading muskets (smooth-bored long guns) were among the first firearms developed. The firearm was loaded through the muzzle with gunpowder, optionally with some wadding, and then with a bullet (usually a solid lead ball, but musketeers could shoot stones when they ran out of bullets). Greatly improved muzzleloaders (usually rifled instead of smooth-bored) are manufactured today and have many enthusiasts, many of whom hunt large and small game with their guns. Muzzleloaders have to be manually reloaded after each shot; a skilled archer could fire multiple arrows faster than most early muskets could be reloaded and fired, although by the mid-18th century when muzzleloaders became the standard small-armament of the military, a well-drilled soldier could fire six rounds in a minute using prepared cartridges in his musket. Before then, the effectiveness of muzzleloaders was hindered both by the low reloading speed and, before the firing mechanism was perfected, by the very high risk posed by the firearm to the person attempting to fire it.
One interesting solution to the reloading problem was the "Roman Candle Gun" with superposed loads. This was a muzzleloader in which multiple charges and balls were loaded one on top of the other, with a small hole in each ball to allow the subsequent charge to be ignited after the one ahead of it was ignited. It was neither a very reliable nor popular firearm, but it enabled a form of "automatic" fire long before the advent of the machine gun.
Firing mechanisms
Matchlock
Matchlocks were the first and simplest firearm firing mechanisms developed. In the matchlock mechanism, the powder in the gun barrel was ignited by a piece of burning cord called a "match". The match was wedged into one end of an S-shaped piece of steel. When the trigger (often actually a lever) was pulled, the match was brought into the open end of a "touch hole" at the base of the gun barrel, which contained a very small quantity of gunpowder, igniting the main charge of gunpowder in the gun barrel. The match usually had to be relit after each firing. The main parts of the matchlock firing mechanism are the pan, match, arm, and trigger. A benefit of the pan and arm swivel being moved to the side of the gun was that it gave the shooter a clear line of fire. An advantage of the matchlock firing mechanism was that it did not misfire. However, it also came with some disadvantages. One disadvantage involved weather: in rain, the match could not be kept lit to fire the weapon. Another issue was that the match could give away the position of soldiers because of its glow, sound, and smell. While European pistols were equipped with wheellock and flintlock mechanisms, Asian pistols used matchlock mechanisms.
Wheellock
The wheellock action, a successor to the matchlock, predated the flintlock. Despite its many faults, the wheellock was a significant improvement over the matchlock in terms of both convenience and safety, since it eliminated the need to keep a smoldering match in proximity to loose gunpowder. It operated using a small wheel (much like that on a cigarette lighter) which was wound up with a key before use and which, when the trigger was pulled, spun against a flint, creating the shower of sparks that ignited the powder in the touch hole. Supposedly invented by Leonardo da Vinci (1452–1519), the Italian Renaissance man, the wheellock action was an innovation that was not widely adopted due to the high cost of the clockwork mechanism.
Flintlock
The flintlock action represented a major innovation in firearm design. The spark used to ignite the gunpowder in the touch hole came from a sharpened piece of flint clamped in the jaws of a "cock" which, when released by the trigger, struck a piece of steel called the "frizzen" to generate the necessary sparks. (The spring-loaded arm that holds a piece of flint or pyrite is referred to as a cock because of its resemblance to a rooster.) The cock had to be manually reset after each firing, and the flint had to be replaced periodically due to wear from striking the frizzen.
Galaxy formation and evolution
In cosmology, the study of galaxy formation and evolution is concerned with the processes that formed a heterogeneous universe from a homogeneous beginning, the formation of the first galaxies, the way galaxies change over time, and the processes that have generated the variety of structures observed in nearby galaxies. Galaxy formation is hypothesized to occur from structure formation theories, as a result of tiny quantum fluctuations in the aftermath of the Big Bang. The simplest model in general agreement with observed phenomena is the Lambda-CDM model—that is, clustering and merging allows galaxies to accumulate mass, determining both their shape and structure. Hydrodynamics simulation, which simulates both baryons and dark matter, is widely used to study galaxy formation and evolution.
Commonly observed properties of galaxies
Because of the inability to conduct experiments in outer space, the only way to “test” theories and models of galaxy evolution is to compare them with observations. Explanations for how galaxies formed and evolved must be able to predict the observed properties and types of galaxies.
Edwin Hubble created an early galaxy classification scheme, now known as the Hubble tuning-fork diagram. It partitioned galaxies into ellipticals, normal spirals, barred spirals (such as the Milky Way), and irregulars. These galaxy types exhibit the following properties which can be explained by current galaxy evolution theories:
Many of the properties of galaxies (including the galaxy color–magnitude diagram) indicate that there are fundamentally two types of galaxies. These groups divide into blue star-forming galaxies that are more like spiral types, and red non-star forming galaxies that are more like elliptical galaxies.
Spiral galaxies are quite thin, dense, and rotate relatively fast, while the stars in elliptical galaxies have randomly oriented orbits.
The majority of giant galaxies contain a supermassive black hole in their centers, ranging in mass from millions to billions of times the mass of the Sun. The black hole mass is tied to the host galaxy bulge or spheroid mass.
Metallicity has a positive correlation with the absolute magnitude (luminosity) of a galaxy.
Astronomers now believe that disk galaxies likely formed first, then evolved into elliptical galaxies through galaxy mergers.
Current models also predict that the majority of mass in galaxies is made up of dark matter, a substance which is not directly observable, and might not interact through any means except gravity. This observation arises because galaxies could not have formed as they have, or rotate as they are seen to, unless they contain far more mass than can be directly observed.
Formation of disk galaxies
The earliest stage in the evolution of galaxies is their formation. When a galaxy forms, it has a disk shape and is called a spiral galaxy due to spiral-like "arm" structures located on the disk. There are different theories on how these disk-like distributions of stars develop from a cloud of matter; however, at present, none of them exactly predicts the results of observation.
Top-down theories
Olin J. Eggen, Donald Lynden-Bell, and Allan Sandage in 1962, proposed a theory that disk galaxies form through a monolithic collapse of a large gas cloud. The distribution of matter in the early universe was in clumps that consisted mostly of dark matter. These clumps interacted gravitationally, putting tidal torques on each other that acted to give them some angular momentum. As the baryonic matter cooled, it dissipated some energy and contracted toward the center. With angular momentum conserved, the matter near the center speeds up its rotation. Then, like a spinning ball of pizza dough, the matter forms into a tight disk. Once the disk cools, the gas is not gravitationally stable, so it cannot remain a singular homogeneous cloud. It breaks, and these smaller clouds of gas form stars. Since the dark matter does not dissipate as it only interacts gravitationally, it remains distributed outside the disk in what is known as the dark halo. Observations show that there are stars located outside the disk, which does not quite fit the "pizza dough" model. It was first proposed by Leonard Searle and Robert Zinn that galaxies form by the coalescence of smaller progenitors. Known as a top-down formation scenario, this theory is quite simple yet no longer widely accepted.
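The spin-up of the contracting material follows from conservation of angular momentum. As a rough single-parcel sketch (a deliberate simplification of the tidal-torque picture above), for a gas parcel of mass m orbiting at radius r with speed v,

    L = m v r = \text{const} \quad \Rightarrow \quad v \propto \frac{1}{r},

so material that ends up at half its original radius rotates roughly twice as fast, which flattens the collapsing cloud into a disk.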
Bottom-up theory
More recent theories include the clustering of dark matter halos in the bottom-up process. Instead of large gas clouds collapsing to form a galaxy in which the gas breaks up into smaller clouds, it is proposed that matter started out in these “smaller” clumps (mass on the order of globular clusters), and then many of these clumps merged to form galaxies, which then were drawn by gravitation to form galaxy clusters. This still results in disk-like distributions of baryonic matter with dark matter forming the halo for all the same reasons as in the top-down theory. Models using this sort of process predict more small galaxies than large ones, which matches observations.
Astronomers do not currently know what process stops the contraction. In fact, theories of disk galaxy formation are not successful at producing the rotation speed and size of disk galaxies. It has been suggested that the radiation from bright newly formed stars, or from an active galactic nucleus can slow the contraction of a forming disk. It has also been suggested that the dark matter halo can pull the galaxy, thus stopping disk contraction.
The Lambda-CDM model is a cosmological model that explains the formation of the universe after the Big Bang. It is a relatively simple model that predicts many properties observed in the universe, including the relative frequency of different galaxy types; however, it underestimates the number of thin disk galaxies in the universe. The reason is that these galaxy formation models predict a large number of mergers. If disk galaxies merge with another galaxy of comparable mass (at least 15 percent of its mass) the merger will likely destroy, or at a minimum greatly disrupt the disk, and the resulting galaxy is not expected to be a disk galaxy (see next section). While this remains an unsolved problem for astronomers, it does not necessarily mean that the Lambda-CDM model is completely wrong, but rather that it requires further refinement to accurately reproduce the population of galaxies in the universe.
Galaxy mergers and the formation of elliptical galaxies
Elliptical galaxies (most notably supergiant ellipticals, such as ESO 306-17) are among some of the largest known thus far. Their stars are on orbits that are randomly oriented within the galaxy (i.e. they are not rotating like disk galaxies). A distinguishing feature of elliptical galaxies is that the velocity of the stars does not necessarily contribute to flattening of the galaxy, such as in spiral galaxies. Elliptical galaxies have central supermassive black holes, and the masses of these black holes correlate with the galaxy's mass.
Elliptical galaxies have two main stages of evolution. The first is due to the supermassive black hole growing by accreting cooling gas. The second stage is marked by the black hole stabilizing by suppressing gas cooling, thus leaving the elliptical galaxy in a stable state. The mass of the black hole is also correlated to a property called sigma which is the dispersion of the velocities of stars in their orbits. This relationship, known as the M-sigma relation, was discovered in 2000. Elliptical galaxies mostly lack disks, although some bulges of disk galaxies resemble elliptical galaxies. Elliptical galaxies are more likely found in crowded regions of the universe (such as galaxy clusters).
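The M-sigma relation is usually quoted as a power law. A commonly cited approximate form (the exact exponent and normalization vary between studies) is

    M_{\mathrm{BH}} \propto \sigma^{\alpha}, \qquad \alpha \approx 4\text{--}5,

where sigma is the velocity dispersion of the stars in the galaxy's bulge.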
Astronomers now see elliptical galaxies as some of the most evolved systems in the universe. It is widely accepted that the main driving force for the evolution of elliptical galaxies is mergers of smaller galaxies. Many galaxies in the universe are gravitationally bound to other galaxies, which means that they will never escape their mutual pull. If those colliding galaxies are of similar size, the resultant galaxy will appear similar to neither of the progenitors, but will instead be elliptical. There are many types of galaxy mergers, which do not necessarily result in elliptical galaxies, but result in a structural change. For example, a minor merger event is thought to be occurring between the Milky Way and the Magellanic Clouds.
Mergers between such large galaxies are regarded as violent, and the frictional interaction of the gas between the two galaxies can cause gravitational shock waves, which are capable of forming new stars in the new elliptical galaxy. By sequencing several images of different galactic collisions, one can observe the timeline of two spiral galaxies merging into a single elliptical galaxy.
In the Local Group, the Milky Way and the Andromeda Galaxy are gravitationally bound, and currently approaching each other at high speed. Simulations show that the Milky Way and Andromeda are on a collision course, and are expected to collide in less than five billion years. During this collision, it is expected that the Sun and the rest of the Solar System will be ejected from its current path around the Milky Way. The remnant could be a giant elliptical galaxy.
Galaxy quenching
One observation that must be explained by a successful theory of galaxy evolution is the existence of two different populations of galaxies on the galaxy color-magnitude diagram. Most galaxies tend to fall into two separate locations on this diagram: a "red sequence" and a "blue cloud". Red sequence galaxies are generally non-star-forming elliptical galaxies with little gas and dust, while blue cloud galaxies tend to be dusty star-forming spiral galaxies.
As described in previous sections, galaxies tend to evolve from spiral to elliptical structure via mergers. However, the current rate of galaxy mergers does not explain how all galaxies move from the "blue cloud" to the "red sequence". It also does not explain how star formation ceases in galaxies. Theories of galaxy evolution must therefore be able to explain how star formation turns off in galaxies. This phenomenon is called galaxy "quenching".
Stars form out of cold gas (see also the Kennicutt–Schmidt law), so a galaxy is quenched when it has no more cold gas. However, it is thought that quenching occurs relatively quickly (within 1 billion years), which is much shorter than the time it would take for a galaxy to simply use up its reservoir of cold gas. Galaxy evolution models explain this by hypothesizing other physical mechanisms that remove or shut off the supply of cold gas in a galaxy. These mechanisms can be broadly classified into two categories: (1) preventive feedback mechanisms that stop cold gas from entering a galaxy or stop it from producing stars, and (2) ejective feedback mechanisms that remove gas so that it cannot form stars.
One theorized preventive mechanism called “strangulation” keeps cold gas from entering the galaxy. Strangulation is likely the main mechanism for quenching star formation in nearby low-mass galaxies. The exact physical explanation for strangulation is still unknown, but it may have to do with a galaxy's interactions with other galaxies. As a galaxy falls into a galaxy cluster, gravitational interactions with other galaxies can strangle it by preventing it from accreting more gas. For galaxies with massive dark matter halos, another preventive mechanism called “virial shock heating” may also prevent gas from becoming cool enough to form stars.
Ejective processes, which expel cold gas from galaxies, may explain how more massive galaxies are quenched. One ejective mechanism is caused by supermassive black holes found in the centers of galaxies. Simulations have shown that gas accreting onto supermassive black holes in galactic centers produces high-energy jets; the released energy can expel enough cold gas to quench star formation.
Our own Milky Way and the nearby Andromeda Galaxy currently appear to be undergoing the quenching transition from star-forming blue galaxies to passive red galaxies.
Hydrodynamics Simulation
Dark energy and dark matter account for most of the Universe's energy, so it is valid to ignore baryons when simulating large-scale structure formation (using methods such as N-body simulation). However, since the visible components of galaxies consist of baryons, it is crucial to include baryons in the simulation to study the detailed structures of galaxies. At first, the baryon component consists of mostly hydrogen and helium gas, which later transforms into stars during the formation of structures. From observations, models used in simulations can be tested and the understanding of different stages of galaxy formation can be improved.
Euler equations
In cosmological simulations, astrophysical gases are typically modeled as inviscid ideal gases that follow the Euler equations, which can be expressed mainly in three different ways: Lagrangian, Eulerian, or arbitrary Lagrange-Eulerian methods. Different methods give specific forms of hydrodynamical equations. When using the Lagrangian approach to specify the field, it is assumed that the observer tracks a specific fluid parcel with its unique characteristics during its movement through space and time. In contrast, the Eulerian approach emphasizes particular locations in space that the fluid passes through as time progresses.
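For reference, the Eulerian (fixed-grid) form of the inviscid equations referred to here can be written, for gas density rho, velocity v, pressure P, and total energy density E, with gravity and other source terms omitted for brevity, as

    \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0,
    \frac{\partial (\rho \mathbf{v})}{\partial t} + \nabla \cdot (\rho \mathbf{v} \otimes \mathbf{v} + P\,\mathbb{I}) = 0,
    \frac{\partial E}{\partial t} + \nabla \cdot \big[ (E + P)\,\mathbf{v} \big] = 0.

In the Lagrangian view the same physics is expressed following individual fluid elements, with the partial time derivative replaced by the comoving derivative D/Dt = \partial/\partial t + \mathbf{v} \cdot \nabla.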
Baryonic Physics
To shape the population of galaxies, the hydrodynamical equations must be supplemented by a variety of astrophysical processes mainly governed by baryonic physics.
Gas cooling
Processes such as collisional excitation, ionization, and inverse Compton scattering can cause the internal energy of the gas to be dissipated. In simulations, cooling processes are realized by coupling cooling functions to the energy equations. Besides primordial cooling, cooling by heavy elements (metals) dominates at high temperatures, while at lower temperatures fine-structure and molecular cooling also need to be considered to simulate the cold phase of the interstellar medium.
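One standard way of writing this coupling (a common convention rather than the implementation of any particular code) is to add a radiative loss term to the internal energy equation,

    \rho \frac{\mathrm{d}u}{\mathrm{d}t} = -\, n_{\mathrm{H}}^{2}\, \Lambda(T, Z) + \mathcal{H},

where u is the specific internal energy, n_H the hydrogen number density, \Lambda(T, Z) a tabulated cooling function of temperature and metallicity, and \mathcal{H} collects any heating terms such as photoheating from the ultraviolet background.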
Interstellar medium
Complex multi-phase structure, including relativistic particles and magnetic fields, makes simulation of the interstellar medium difficult. In particular, modeling the cold phase of the interstellar medium poses technical difficulties due to the short timescales associated with the dense gas. In early simulations, the dense gas phase was frequently not modeled directly but rather characterized by an effective polytropic equation of state. More recent simulations use a multimodal distribution of the gas density and temperature to model the multi-phase structure directly. However, more detailed physical processes will need to be considered in future simulations, since the structure of the interstellar medium directly affects star formation.
Star formation
As cold and dense gas accumulates, it undergoes gravitational collapse and eventually forms stars. To simulate this process, a portion of the gas is transformed into collisionless star particles, which represent coeval, single-metallicity stellar populations and are described by an underlying initial mass function. Observations suggest that star formation efficiency in molecular gas is almost universal, with around 1% of the gas being converted into stars per free-fall time. In simulations, the gas is typically converted into star particles using a probabilistic sampling scheme based on the calculated star formation rate. Some simulations seek an alternative to the probabilistic sampling scheme and aim to better capture the clustered nature of star formation by treating star clusters as the fundamental unit of star formation. This approach permits the growth of star particles by accreting material from the surrounding medium. In addition to this, modern models of galaxy formation track the evolution of these stars and the mass they return to the gas component, leading to an enrichment of the gas with metals.
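A minimal sketch of such a probabilistic sampling scheme, assuming the roughly one per cent efficiency per free-fall time quoted above, is given below; the function names and the single-cell interface are hypothetical stand-ins for whatever data structures a given simulation code actually uses.

    import math
    import random

    G = 6.674e-8  # gravitational constant in cgs units (cm^3 g^-1 s^-2)

    def free_fall_time(density):
        """Free-fall time of gas with the given mass density (cgs units)."""
        return math.sqrt(3.0 * math.pi / (32.0 * G * density))

    def maybe_form_star(gas_mass, gas_density, dt, eps_ff=0.01, star_mass=None):
        """Stochastically convert part of a gas cell into a star particle.

        The expected mass converted during the timestep dt is
        eps_ff * gas_mass * dt / t_ff (about 1% of the gas per free-fall time);
        a star particle of fixed mass is spawned with the probability that
        reproduces this expectation on average.
        """
        if star_mass is None:
            star_mass = gas_mass                      # convert the whole cell if it fires
        t_ff = free_fall_time(gas_density)
        expected = eps_ff * gas_mass * dt / t_ff      # expected stellar mass formed this step
        p = min(1.0, expected / star_mass)            # spawning probability this step
        return star_mass if random.random() < p else 0.0

Drawing one random number per cell per timestep in this way reproduces the desired star formation rate on average without forming implausibly small star particles.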
Stellar feedback
Stars have an influence on their surrounding gas by injecting energy and momentum. This creates a feedback loop that regulates the process of star formation. To effectively control star formation, stellar feedback must generate galactic-scale outflows that expel gas from galaxies. Various methods are utilized to couple energy and momentum, particularly through supernova explosions, to the surrounding gas. These methods differ in how the energy is deposited, either thermally or kinetically. However, excessive radiative gas cooling must be avoided in the former case. Cooling is expected in dense and cold gas, but it cannot be reliably modeled in cosmological simulations due to low resolution. This leads to artificial and excessive cooling of the gas, causing the supernova feedback energy to be lost via radiation and significantly reducing its effectiveness. In the latter case, kinetic energy cannot be radiated away until it thermalizes. However, using hydrodynamically decoupled wind particles to inject momentum non-locally into the gas surrounding active star-forming regions may still be necessary to achieve large-scale galactic outflows. Recent models explicitly model stellar feedback. These models not only incorporate supernova feedback but also consider other feedback channels such as energy and momentum injection from stellar winds, photoionization, and radiation pressure resulting from radiation emitted by young, massive stars. During the Cosmic Dawn, galaxy formation occurred in short bursts of 5 to 30 Myr due to stellar feedback.
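As a concrete reference point, a single core-collapse supernova releases on the order of 10^51 erg. In a simple thermal scheme this energy is added to the internal energy of the neighbouring gas, while a simple kinetic scheme imparts the equivalent momentum instead; the coupling fraction f below is a model parameter, not a universal value:

    \Delta u \simeq \frac{f\,E_{\mathrm{SN}}}{m_{\mathrm{gas}}} \ \ \text{(thermal)}, \qquad
    \Delta p \simeq \sqrt{2\, f\, E_{\mathrm{SN}}\, m_{\mathrm{gas}}} \ \ \text{(kinetic)}, \qquad
    E_{\mathrm{SN}} \approx 10^{51}\ \mathrm{erg}.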
Supermassive black holes
Supermassive black holes are also included in simulations, numerically seeded in dark matter haloes, because they are observed in many galaxies and their mass affects the mass density distribution. Their mass accretion rate is frequently modeled by the Bondi-Hoyle model.
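The Bondi-Hoyle (more fully, Bondi-Hoyle-Lyttleton) accretion rate referred to here is commonly written as

    \dot{M}_{\mathrm{BH}} = \frac{4\pi\, G^{2} M_{\mathrm{BH}}^{2}\, \rho}{\left(c_{s}^{2} + v^{2}\right)^{3/2}},

where rho and c_s are the density and sound speed of the gas around the black hole and v is the black hole's velocity relative to that gas; in practice the accretion rate is usually capped at the Eddington limit.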
Active galactic nuclei
Active galactic nuclei (AGN) affect the observational phenomena of supermassive black holes and further regulate black hole growth and star formation. In simulations, AGN feedback is usually classified into two modes, namely quasar and radio mode. Quasar mode feedback is linked to the radiatively efficient mode of black hole growth and is frequently incorporated through energy or momentum injection. The regulation of star formation in massive galaxies is believed to be significantly influenced by radio mode feedback, which occurs due to the presence of highly collimated jets of relativistic particles. These jets are typically linked to X-ray bubbles that possess enough energy to counterbalance cooling losses.
Magnetic fields
The ideal magnetohydrodynamics approach is commonly utilized in cosmological simulations since it provides a good approximation for cosmological magnetic fields. The effect of magnetic fields on the dynamics of gas is generally negligible on large cosmological scales. Nevertheless, magnetic fields are a critical component of the interstellar medium since they provide pressure support against gravity and affect the propagation of cosmic rays.
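In this ideal, perfectly conducting limit, the magnetic field evolves according to the induction equation

    \frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{v} \times \mathbf{B}), \qquad \nabla \cdot \mathbf{B} = 0,

and it acts back on the gas through the magnetic pressure B^2 / 8\pi and magnetic tension terms in the momentum equation.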
Cosmic rays
Cosmic rays play a significant role in the interstellar medium by contributing to its pressure, serving as a crucial heating channel, and potentially driving galactic gas outflows. The propagation of cosmic rays is highly affected by magnetic fields, so in simulations, equations describing the cosmic ray energy and flux are coupled to the magnetohydrodynamics equations.
Radiation Hydrodynamics
Radiation hydrodynamics simulations are computational methods used to study the interaction of radiation with matter. In astrophysical contexts, radiation hydrodynamics is used to study the epoch of reionization, when the Universe was at high redshift. There are several numerical methods used for radiation hydrodynamics simulations, including ray-tracing, Monte Carlo, and moment-based methods. Ray-tracing involves tracing the paths of individual photons through the simulation and computing their interactions with matter at each step. This method is computationally expensive but can produce very accurate results.
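A minimal sketch of the ray-tracing idea, in which a ray accumulates optical depth cell by cell and deposits the corresponding absorbed photons, is shown below; the one-dimensional grid and the names used are hypothetical simplifications rather than the scheme of any particular code.

    import math

    def trace_ray(opacities, densities, cell_size, photons_in):
        """March a photon packet through a row of cells, attenuating it as it goes.

        opacities  : absorption coefficient per unit mass in each cell (cm^2 g^-1)
        densities  : gas mass density in each cell (g cm^-3)
        cell_size  : path length through each cell (cm)
        photons_in : photons entering the first cell
        Returns the photons absorbed in each cell and the photons that escape.
        """
        photons = photons_in
        absorbed = []
        for kappa, rho in zip(opacities, densities):
            tau = kappa * rho * cell_size         # optical depth of this cell segment
            survivors = photons * math.exp(-tau)  # photons passing through the segment
            absorbed.append(photons - survivors)  # photons deposited in this cell
            photons = survivors
        return absorbed, photons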
Gardening
Gardening is the process of growing plants for their vegetables, fruits, flowers, herbs, and appearances within a designated space. Gardens fulfill a wide assortment of purposes, notably the production of aesthetically pleasing areas, medicines, cosmetics, dyes, foods, poisons, wildlife habitats, and saleable goods (see market gardening). People often partake in gardening for its therapeutic, health, educational, cultural, philosophical, environmental, and religious benefits.
Gardening varies in scale from the 800-hectare Versailles gardens down to container gardens grown indoors. Gardens take many forms; some contain only one type of plant, while others involve a complex assortment of plants with no particular order.
Gardening can be difficult to differentiate from farming. They are most easily differentiated based on their primary objectives. Farming prioritizes saleable goods and may include livestock production whereas gardening often prioritizes aesthetics and leisure. As it pertains to food production, gardening generally happens on a much smaller scale with the intent of personal or community consumption. There are cultures which do not differentiate between farming and gardening. This is primarily because subsistence agriculture has been the main method of farming throughout its 12,000 year history and is virtually indistinguishable from gardening.
Prehistory
Plant domestication is seen as the birth of agriculture. However, it is arguably preceded by a very long history of gardening wild plants. While the 12,000-year-old date is the commonly accepted timeline describing plant domestication, there is now evidence from the Ohalo II hunter-gatherer site showing earlier signs of disturbing the soil and cultivation of pre-domesticated crop species. This evidence pushes early-stage plant domestication to 23,000 years ago, which aligns with research done by Allaby (2022) showing slight selection pressure for desirable traits in Southwest Asian cereals (einkorn, emmer, barley). Despite not qualifying as plant domestication, there are many archaeological studies pushing the potential date of hominin selective ecosystem disturbance back up to 125,000 years ago. Many of these early recorded ecosystem disturbances were made through hominin use of fire, which dates back to 1.5 Mya (although at this time fire was not likely being wielded as a landscape-changing tool by hominids). This anthropogenic ecosystem disturbance may be the origin of gardening.
Every hunter-gatherer society has developed a niche of some sort, allowing them to thrive or even just survive amongst their environments. Many of these prehistoric hunter-gatherers had constructed a niche allowing for easier access to, or a greater abundance of, edible plant species. This shift from hunting and gathering to increasingly modifying the environment in a way which produces an abundance of edible plant species marks the beginning of gardening. One of the most documented hominin niches is the use of off-site fire. When done intentionally, this is often called forest gardening or, in Australia, fire-stick farming. The modern study of fire ecology describes the many benefits off-site fires may have granted these early humans. Some of these agroecological practices have been well documented and studied during colonial contact. However, they are vastly underrepresented in research done on early hominin fire use. Based on current research, it is evident that these niches developed separately in different societies across different times and locations. Many Indigenous gardening methods were and still are often overlooked by colonizers due to their lack of resemblance to western gardens with well-defined borders and non-naturalized plant species.
The Americas
There are long traditions of gardening within Indigenous societies spanning from the northernmost parts of Canada down to the southernmost tip of Chile and Argentina. The Arctic and Subarctic societies relied primarily on hunting and fishing due to the harsh climate although they have been known to collectively use at least 311 different plants as foods or medicines. The substantial knowledge and use of these plants along with the communal harvesting sites and emphasis on reciprocity between humans and plants indicates a basic level of gardening. Similarly, the Fuegian Indigenous groups in South America had developed seemingly comparable niches due to a similar tundra ecosystem. While there are very few studies on the Fuegians, Darwin mentioned wild edible plants such as fungi, kelp, and wild celery growing next to the various Fuegian shelters.
Horticulture plays a relatively small role among these northern and southern tundra inhabitants compared with Indigenous societies in grassland and forest ecosystems. From the boreal forests of Canada to the temperate forests and grasslands of Chile and Argentina, different communities have developed food production niches. These include the use of fire for ecosystem maintenance and resetting successional sequences, the sowing of wild annuals, the sowing of domesticated annuals (e.g. three sisters, New World crops), creating berry patches and orchards, manipulation of plants to encourage desired traits (e.g. increased nut, fruit, or root production), and landscape modification to encourage plant and animal growth (e.g. complex irrigation, sea gardens, or terraces). These modified landscapes, as recorded by early American philosophers such as Thoreau and Emerson, were described as exhibiting pristine beauty. Indigenous gardens such as forest gardens therefore serve not only as producers of foods, medicines, or materials, but also as sources of pleasant aesthetics.
Many popular crops originate from pre-colonial Indigenous agricultural societies. Some of these include maize, quinoa, common bean, peanut, pumpkin, squash, pepper, tomato, cassava, potato, blueberry, cactus pear, cashew, papaya, pineapple, strawberry, cacao, sunflower, cotton, Pará rubber, and tobacco.
History
Ancient times
Forest gardening, a forest-based food production system, is the world's oldest form of gardening.
After the emergence of the first civilizations, wealthy individuals began to create gardens for aesthetic purposes. Ancient Egyptian tomb paintings from the New Kingdom (around 1500 BC) provide some of the earliest physical evidence of ornamental horticulture and landscape design; they depict lotus ponds surrounded by symmetrical rows of acacias and palms. A notable example of ancient ornamental gardens were the Hanging Gardens of Babylon—one of the Seven Wonders of the Ancient World—while ancient Rome had dozens of gardens.
Wealthy ancient Egyptians used gardens for providing shade. Egyptians associated trees and gardens with gods, believing that their deities were pleased by gardens. Gardens in ancient Egypt were often surrounded by walls with trees planted in rows. Among the most popular species planted were date palms, sycamores, fig trees, nut trees, and willows. These gardens were a sign of higher socioeconomic status. In addition, wealthy ancient Egyptians grew vineyards, as wine was a sign of the higher social classes. Roses, poppies, daisies and irises could all also be found in the gardens of the Egyptians.
Assyria was renowned for its beautiful gardens. These tended to be wide and large, some of them used for hunting game—rather like a game reserve today—and others as leisure gardens. Cypresses and palms were some of the most frequently planted types of trees.
Gardens were also available in Kush. In Musawwarat es-Sufra, the Great Enclosure dated to the 3rd century BC included splendid gardens.
Ancient Roman gardens were laid out with hedges and vines and contained a wide variety of flowers—acanthus, cornflowers, crocus, cyclamen, hyacinth, iris, ivy, lavender, lilies, myrtle, narcissus, poppy, rosemary and violets—as well as statues and sculptures. Flower beds were popular in the courtyards of rich Romans.
The Middle Ages
The Middle Ages represent a period of decline in gardens for aesthetic purposes. After the fall of Rome, gardening was done for the purpose of growing medicinal herbs and/or decorating church altars. Monasteries carried on a tradition of garden design and intense horticultural techniques during the medieval period in Europe.
Generally, monastic garden types consisted of kitchen gardens, infirmary gardens, cemetery orchards, cloister garths and vineyards. Individual monasteries might also have had a "green court", a plot of grass and trees where horses could graze, as well as a cellarer's garden or private gardens for obedientiaries, monks who held specific posts within the monastery.
Islamic gardens were built after the model of Persian gardens and they were usually enclosed by walls and divided in four by watercourses. Commonly, the centre of the garden would have a reflecting pool or pavilion. Specific to the Islamic gardens are the mosaics and glazed tiles used to decorate the rills and fountains that were built in these gardens.
By the late 13th century, rich Europeans began to grow gardens for leisure and for medicinal herbs and vegetables. They surrounded the gardens with walls to protect them from animals and to provide seclusion. During the next two centuries, Europeans started planting lawns and raising flowerbeds and trellises of roses. Fruit trees were common in these gardens, and some also contained turf seats. At the same time, the gardens in the monasteries were a place to grow flowers and medicinal herbs, but they were also a space where the monks could enjoy nature and relax.
The gardens in the 16th and 17th century were symmetric, proportioned and balanced with a more classical appearance. Most of these gardens were built around a central axis and they were divided into different parts by hedges. Commonly, gardens had flowerbeds laid out in squares and separated by gravel paths.
Gardens in the Renaissance were adorned with sculptures, topiary and fountains. In the 17th century, knot gardens became popular, along with hedge mazes. By this time, Europeans had started planting new flowers such as tulips, marigolds and sunflowers.
Cottage gardens
Cottage gardens, which emerged in Elizabethan times, appear to have originated as a local source for herbs and fruits. One theory is that they arose out of the Black Death of the 1340s, when the death of so many laborers made land available for small cottages with personal gardens. According to the late 19th-century legend of origin, these gardens were originally created by the workers who lived in the cottages of the villages, to provide them with food and herbs, with flowers planted among them for decoration. Farm workers were provided with cottages of architectural quality, set in a small garden where they could grow food and keep pigs and chickens.
Authentic gardens of the yeoman cottager would have included a beehive and livestock, and frequently a pig and sty, along with a well. The peasant cottager of medieval times was more interested in meat than flowers, with herbs grown for medicinal use rather than for their beauty. By Elizabethan times there was more prosperity, and thus more room to grow flowers. Even the early cottage garden flowers typically had their practical uses: violets were spread on the floor (for their pleasant scent and to keep out vermin); calendulas and primroses were both attractive and used in cooking. Others, such as sweet William and hollyhocks, were grown entirely for their beauty.
18th century
In the 18th century, gardens were laid out more naturally, without walls. This new English landscape style consisted of smooth undulating grass running straight to the house, clumps, belts and scatterings of trees, and serpentine lakes formed by invisibly damming small rivers. It was a "gardenless" form of landscape gardening, which swept away almost all the remnants of previous formally patterned styles. The English landscape garden usually included a lake and lawns set against groves of trees, and often contained shrubberies, grottoes, pavilions, bridges and follies such as mock temples, Gothic ruins, and other picturesque architecture, designed to recreate an idyllic pastoral landscape. The style emerged in England in the early 18th century and spread across Europe, replacing the more formal, symmetrical garden à la française of the 17th century as the principal gardening style of Europe. The English garden presented an idealized view of nature. Such gardens were often inspired by paintings of landscapes by Claude Lorraine and Nicolas Poussin, and some were influenced by the classic Chinese gardens of the East, which had recently been described by European travelers. The work of Lancelot 'Capability' Brown was particularly influential. The Horticultural Society was formed in 1804.
Gardens of the 19th century contained plants such as the monkey puzzle or Chile pine. This is also the time when the so-called "gardenesque" style of gardens evolved. These gardens displayed a wide variety of flowers in a rather small space. Rock gardens increased in popularity in the 19th century.
In ancient India, patterns from sacred geometry and mandalas were used to design gardens. Distinct mandala patterns denoted specific deities, planets, or even constellations. Such a garden was also referred to as a 'Mandala Vaatika'. The word 'Vaatika' can mean garden, plantation or parterre.
Types
Residential gardening takes place near the home, in a space referred to as the garden. Although a garden typically is located on the land near a residence, it may also be located on a roof, in an atrium, on a balcony, in a window box, on a patio, or in a vivarium.
Gardening also takes place in non-residential green areas, such as parks, public or semi-public gardens (botanical gardens or zoological gardens), amusement parks, along transportation corridors, and around tourist attractions and garden hotels. In these situations, a staff of gardeners or groundskeepers maintains the gardens.
Indoor gardening is concerned with the growing of houseplants within a residence or building, in a conservatory, or in a greenhouse. Indoor gardens are sometimes incorporated as part of air conditioning or heating systems. Indoor gardening extends the growing season in the fall and spring and can be used for winter gardening.
Native plant gardening is concerned with the use of native plants with or without the intent of creating wildlife habitat. The goal is to create a garden in harmony with, and adapted to a given area. This type of gardening typically reduces water usage, maintenance, and fertilization costs, while increasing native faunal interest.
Water gardening is concerned with growing plants adapted to pools and ponds. Bog gardens are also considered a type of water garden. These all require special conditions and considerations. A simple water garden may consist solely of a tub containing the water and plant(s). In aquascaping, a garden is created within an aquarium tank.
Container gardening is concerned with growing plants in any type of container either indoors or outdoors. Common containers are pots, hanging baskets, and planters. Container gardening is usually used in atriums and on balconies, patios, and roof tops.
Hügelkultur is concerned with growing plants on piles of rotting wood, as a form of raised bed gardening and composting in situ. A loanword from German, the term means roughly "mound culture" or "mound gardening". Toby Hemenway, noted permaculture author and teacher, considers wood buried in trenches to also be a form of hugelkultur, referred to as a dead wood swale. Hugelkultur is practiced by Sepp Holzer as a method of forest gardening and agroforestry, and by Geoff Lawton as a method of dryland farming and desert greening. When used as a method of disposing of large volumes of waste wood and woody debris, hugelkultur accomplishes carbon sequestration. It is also a form of xeriscaping.
Community gardening is a social activity in which an area of land is gardened by a group of people, providing access to fresh produce, herbs, flowers and plants as well as access to satisfying labor, neighborhood improvement, sense of community and connection to the environment. Community gardens are typically owned in trust by local governments or nonprofits.
Garden sharing partners landowners with gardeners in need of land. These shared gardens, typically front or back yards, are usually used to produce food that is divided between the two parties.
Organic gardening uses natural, sustainable methods, fertilizers and pesticides to grow non-genetically modified crops.
Biodynamic gardening or biodynamic agriculture is similar to organic gardening, but includes various esoteric concepts drawn from the ideas of Rudolf Steiner, such as an astrological sowing and planting calendar and particular field and compost preparations.
Commercial gardening is a more intensive type of gardening that involves the production of vegetables, non-tropical fruits, and flowers by local farmers. Commercial gardening began because farmers could sell produce locally, avoiding the spoilage that came with transporting goods over long distances. Mediterranean agriculture is also a common practice among commercial gardeners: animals such as sheep are kept to help weed and to provide manure for vine crops, grains, or citrus, and gardeners can train these animals not to eat the crop itself.
No-dig gardening (or no-till gardening) is a method of gardening that avoids tillage as much as possible. This method of gardening is gaining popularity in part due to celebrated figures such as Charles Dowding, Masanobu Fukuoka, Jean-Martin Fortier, Connor Crickmore, Jesse Frost, Elaine Ingham, and many other market gardeners. Minimal tillage has been documented to promote diverse soil biology, improve water retention and drainage, produce healthier and more vigorous plants, and reduce weed pressure, labor, costs, soil erosion, and pollution, while increasing fertility, nutrient availability, and carbon sequestration.
Tools
Regardless of historical time period, location, scale, or type of garden, all gardening requires some basic tools. For most of human history, people managed with far fewer resources than today: agriculture was built on hands, stones, sticks, human ingenuity, and fire. The essential tools of pre-Bronze Age gardening were made mostly of stone, bone, wood, or copper, and included knives, axes, adzes, foot ploughs, sickles, hoes, baskets, pottery, digging sticks, and animal-drawn ploughs, supplemented by draft animals and by fire for clearing land. Up until the green revolution, these simple tools, although continually improved upon, remained the backbone of agricultural societies.
The industrial revolution created a large increase in the availability and impact of agricultural tools. These tools include tractors with modern implements, manure spreaders, cultivators, mowers, earth-moving machines, hedge trimmers, strimmers, wood chippers, two-wheel tractors, complex irrigation systems, plastic mulch, plastic shelters, seeding trays, indoor grow lights, packaging, chemical fertilizers, pesticides, genetically modified seeds, and many more.
Plant propagation
Plants may be propagated through many different methods. These methods are classified as either sexual or asexual propagation.
Asexual reproduction
Asexual reproduction occurs when plants produce clonal offspring. It is often simpler than sexual reproduction and allows rapid population growth, but cloning may leave plant populations highly vulnerable if they do not also reproduce sexually to create the genetic diversity that permits natural selection and hybrid vigor. Gardeners take advantage of various methods of asexual plant propagation. These include vegetative propagation, the growth of new plants from vegetative parts of the parent plant such as roots, stems, and leaves. Certain plants, such as strawberries and raspberries, produce stolons or rhizomes, stems that grow horizontally above or below ground and develop new plants at nodes. Another common method of asexual reproduction in garden plants is fragmentation, the separation of a piece from the parent plant. This is common in shrubs and in trees such as willows, which may shed their branches (a process termed cladoptosis); placing a shed limb in water or soil induces budding and root formation.
Perhaps the best-known method of asexual reproduction in gardening and farming is grafting. A gardener may graft a cultivar selected for excellent fruit onto a rootstock cultivar of the same species. This involves cutting each plant and joining the cuttings by mechanical means until they inosculate, or fuse together. Grafting is done for several purposes. First, the scion (the portion of the plant above the graft site) can be selected for desirable traits such as flavor, while the rootstock is selected for traits such as disease resistance or cold tolerance. This makes artificial selection much more efficient, since traits such as fruit taste can be ignored altogether in the rootstock, allowing focused selection with less backcrossing to plants with good-tasting fruit. Second, grafting allows plants that require cross-pollination to set fruit, such as apples, to grow together as a single tree. Third, it allows rapid reproduction, with one mother plant producing many semi-developed clones each year.
Sexual reproduction
Sexual reproduction occurs through the pollination of an ovule, either between the female and male parts of a single flower or between flowers. A plant may undergo self-pollination as a sexual means of reproduction, in which case the genes of the mother plant do not perfectly match those of the progeny; however, progeny from self-pollination have less genetic diversity than those from cross-pollination, which may result in inbreeding depression. Pollen is typically carried by wind, insects, or animals. Some greenhouses lack these vectors and must pollinate their plants by hand to produce fruit and seeds. Sexual reproduction can only occur between members of the same species, and it produces varying levels of genetic diversity in the plant's offspring. This genetic diversity underlies the survival of every plant as we know it today: it allows for disease resistance and for adaptation to changing climate, soils, pollination methods, animal grazing pressure, weed pressure, and any other variation in growing conditions. Crossing plants, or hybridizing, results in hybrid vigor and increases genetic diversity.
Many commercially grown plants are F1 hybrids, which ensures certain desirable traits. A common alternative to growing hybrid plants is to grow heirloom or open-pollinated plants, which, unlike F1 hybrids, produce viable seed whose progeny resemble the parent. Many modern gardeners save seeds from heirloom varieties but not from hybrids, because of the certainty of desirable traits that heirloom seeds provide. Historically, more limited plant-breeding knowledge led to more hybridization and the creation of new, genetically diverse landraces. Each plant varies in its likelihood of outcrossing; highly outcrossing plants such as spinach are more likely to create landraces. Many landraces and heirloom varieties, along with their genetics, are being lost as modern farmers save seed less often, which leads plant geneticists to search for desirable genetics in wild ancestral varieties of commonly grown plants. Plants have been artificially selected and bred since at least 7800 BCE. Despite the decrease in farmer seed saving, many landraces are still being created through artificial selection and genetic modification. Gardeners remain vital to the preservation of diverse genetics, whether they maintain a family heirloom variety bred for conditions of the distant past or breed new landraces with traits matching their modern climate and growing conditions.
Certain seeds will not sprout without particular environmental conditions, requiring either scarification or stratification. Gardeners may grow frustrated if they lack this knowledge before attempting to propagate plants such as hardneck garlic (asexual reproduction), which requires a cold dormant period to sprout, or saskatoon berries, whose germination improves after the seeds have been digested by bears, a process called endozoochory.
Transplanting
Many gardeners, especially those in colder climates, start seeds indoors before transplanting the young plants outside. This provides many benefits: it lengthens the growing season, ensures adequate quantity and quality of light, ensures seedlings have adequate nutrients in the seed-starting mix, keeps seeds at the correct humidity, heat, and moisture level for germination, and saves space in the garden. Some crops cannot be brought to harvest unless they are started inside, so a gardener who wants to grow them without starting the plants themselves must purchase transplants, which are commonly available at garden centers, plant nurseries, and big-box stores. It is crucial that transplanting is done correctly. This generally means providing the plants with enough soil so they do not become root-bound (roots wrapping in circles around the transplant container), providing a hardening-off period (gradual exposure to sun, wind, and cold), providing sufficient light, water, and nutrients, and choosing the right plants to start indoors, since some plants do not tolerate transplanting well.
There are several methods of starting seeds. The most prevalent is to start seeds in transplant (plug) trays or in planters or pots. Another is starting seeds in soil blocks (small cubes of compressed potting soil, compost, and/or other seed-starting media), which may reduce transplant shock and prevent root binding because they allow air pruning of the roots. Some plants, such as onions and various herbs, can be started efficiently by scattering their seeds on top of soil in a large tray; the seedlings are later teased apart and replanted in the garden or in pots.
Pests
Garden pests are generally plants, fungi, or animals (frequently insects) that engage in activity that the gardener considers undesirable. A pest may crowd out desirable plants, disturb soil, stunt the growth of young seedlings, steal or damage fruit, or otherwise kill plants, hamper their growth, damage their appearance, or reduce the quality of the edible or ornamental portions of the plant. Aphids, spider mites, slugs, snails, ants, birds, and even cats are commonly considered to be garden pests.
Throughout history, ecosystems that have undergone rapid change have typically been those that harbor the most pests. For example, a highly and rapidly altered landscape such as modern canola fields in the Americas can be a breeding ground for pests of the family Brassicaceae. A natural ecosystem typically regulates pest levels through biological means, whether the natural introduction of a disease or an increase in the population of a predator species.
Because gardeners may have different goals, organisms considered "garden pests" vary from gardener to gardener. Tropaeolum speciosum, for example, may be considered a desirable and ornamental garden plant, or it may be considered a pest if it seeds and starts to grow where it is not wanted. As another example, in lawns, moss can become dominant and be impossible to eradicate. In some lawns, lichens, especially very damp lawn lichens such as Peltigera lactucifolia and P. membranacea, can become difficult to control and are considered pests.
Pest control
There are many ways by which unwanted pests are removed from a garden. The techniques vary depending on the pest, the gardener's goals, and the gardener's philosophy. For example, snails may be dealt with through the use of a chemical pesticide, an organic pesticide, hand-picking, barriers, or simply growing snail-resistant plants.
On a large scale, pest control is often done through the use of pesticides and herbicides, which may be either organic or artificially synthesized. Pesticides may affect the ecology of a garden through their effects on the populations of both target and non-target species. For example, unintended exposure to some neonicotinoid pesticides has been proposed as a factor in the recent decline in honey bee populations. Pesticides and herbicides are also known to cause medical issues, typically in those nearby during their application. Farm workers are by far the most affected by the use of pesticides and herbicides, yet they are often under-informed or accept the consequences out of financial necessity. Fungicides may be applied to the seed coat to reduce mortality of germinating seedlings. The improper use of pesticides often leads to pesticide resistance, which poses a risk to global food security. With climate change affecting the distribution of pests, a global increase in pesticide usage has been observed, which in turn has increased human health risks from exposure. Creating new pesticides to manage resistant organisms is immensely expensive and is often criticized as an ineffective method of pest control.
Other means of control include the removal of infected plants, using fertilizers and biostimulants to improve the health and vigor of plants so they better resist attack, practicing crop rotation to prevent pest build-up, using foliar sprays, companion planting, and practicing good garden hygiene, such as disinfecting tools and clearing debris and weeds which may harbor pests. Another common method of pest control, used frequently in market gardening, is insect netting or plastic greenhouse covers. Gardeners may also rely on one organism to control another: cats hunt mice and rats; wild birds, bats, chickens, and ducks hunt insects and slugs; and thorny hedges deter deer and other creatures. Using organisms to help control pests is called biological pest control. There are also targeted measures of animal pest control, such as a mole vibrator to deter mole activity in a garden or automated gunshots to scare off birds.
Garden guns
Garden guns are smooth-bore shotguns specifically made to fire .22 caliber snake shot, and are commonly used by gardeners and farmers for pest control. Garden guns are short-range weapons that can do little harm beyond close range, and they are relatively quiet when fired with snake shot compared with standard ammunition. These guns are especially effective inside barns and sheds, as the snake shot will not shoot holes in the roof or walls or, more importantly, injure livestock with a ricochet. They are also used for pest control at airports, warehouses, and stockyards.
Social aspects
People can express their political or social views in gardens, intentionally or not. The lawn versus garden issue plays out in urban planning as a debate over the "land ethic" that should determine urban land use: whether hyper-hygienist bylaws (e.g. weed control) should apply, or whether land should generally be allowed to exist in its natural wild state. In a famous Canadian Charter of Rights case, "Sandra Bell vs. City of Toronto" (1997), the right to cultivate all native species, even most varieties deemed noxious or allergenic, was upheld as part of the right of free expression.
Community gardening comprises a wide variety of approaches to sharing land and gardens.
People often surround their house and garden with a hedge. Common hedge plants are privet, hawthorn, beech, yew, leyland cypress, hemlock, arborvitae, barberry, box, holly, oleander, forsythia and lavender. The idea of open gardens without hedges may be distasteful to those who enjoy privacy.
The Slow Food movement has sought in some countries to add an edible school yard and garden classrooms to schools, e.g. in Fergus, Ontario, where these were added to a public school to augment the kitchen classroom. Garden sharing, where urban landowners allow gardeners to grow on their property in exchange for a share of the harvest, is associated with the desire to control the quality of one's food, and reconnect with soil and community.
In US and British usage, the production of ornamental plantings around buildings is called landscaping, landscape maintenance or grounds keeping, while international usage uses the term gardening for these same activities.
Also gaining popularity is the concept of "Green Gardening", which involves growing plants using organic fertilizers and pesticides so that the gardening process, and the flowers and fruits produced, do not adversely affect the environment or people's health.
Gardening can be a very pleasant and relaxing activity with rewarding results. It allows a connection with nature and the creation of a green space that not only presents a vision of beauty but also contributes to the ecosystem. A thriving and flourishing garden can be created by understanding and adapting to the climate and to environmental changes.
Plants and flowers grow in varying temperatures and weather conditions. Most plants thrive at temperatures between 18 and 24 °C during the day, and slightly cooler at night; this range allows optimal photosynthesis and growth for many common plant species. Because a garden usually contains a variety of plants, it is best to learn the conditions each plant prefers before planting.
Laws and restrictions
In some parts of the world, particularly the United States, gardening can be restricted by law or by rules and regulations imposed by a home-owner's association. In the United States, such rules may prohibit homeowners from growing vegetable gardens, prohibit xeriscaping or meadow gardens, or require garden plants to be chosen from a pre-approved list, to preserve the aesthetics of the neighborhood. Numerous challenges to these laws, ordinances and regulations have emerged in recent years, with some resulting in legislation protecting a homeowner's right to cultivate native plants or grow vegetables. Laws protecting a homeowner's right to grow food plants have been termed "right to garden" laws.
Benefits
Gardening is considered by many people to be a relaxing activity. There are also many studies about the positive effects on mental and physical health in relation to gardening. Specifically, gardening is thought to increase self-esteem and reduce stress. As writer and former teacher Sarah Biddle notes, one's garden may become a "tiny oasis to relax and recharge [one's] batteries." Involvement in gardening activities aids creativity, observational skills, learning, planning, and physical movement.
Others consider gardening to be a good hedge against supply chain disruptions, amid increased worries that grocery store shelves will not always be fully stocked. In April 2022, about 31% of grocery products were out of stock, an 11% increase from November 2021.
Gardening can also support a large number and wide range of pollinators, which matters because bees and other pollinators are in decline. Gardeners can help reverse this trend: what matters most is that pollinators get their share of nectar to fuel their busy lifestyles, and gardens can provide it.
One way to benefit both humans and pollinators is to plant pollinator gardens. Including native flowers has been shown to increase pollinator numbers, and it protects bee populations against urbanization and flowerless landscapes. Small patches of flower-diverse plantings in urban landscapes have been noted to match or even exceed wild landscapes in bee pollination. Golf courses, cemeteries, community gardens, and residential gardens are all urban areas that could support pollinator diversity if native flowers were added to the landscape.
Ornaments and accessories
There is a wide range of garden ornaments and accessories available on the market for both the professional gardener and the amateur to exercise their creativity, for example sculptures, lights, or fountains. These are used to add decoration or functionality, and may be made from a wide range of materials such as copper, stone, wood, bamboo, stainless steel, clay, stained glass, concrete, or iron. Examples include trellises, garden furniture, gnomes, statues, outdoor fireplaces, fountains, rain chains, urns, bird baths and feeders, wind chimes, and garden lighting such as candle lanterns and oil lamps. The use of these items can be part of the expression of a gardener's personality.
As art
Garden design is considered to be an art in most cultures, distinguished from gardening, which generally means garden maintenance. Garden design can include different themes such as perennial, butterfly, wildlife, Japanese, water, tropical, or shade gardens.
In Japan, Samurai and Zen monks were often required to build decorative gardens or practice related skills like flower arrangement known as ikebana. In 18th-century Europe, country estates were refashioned by landscape gardeners into formal gardens or landscaped park lands, such as at Versailles, France, or Stowe, England. Today, landscape architects and garden designers continue to produce artistically creative designs for private garden spaces. In the US, professional landscape designers are certified by the Association of Professional Landscape Designers.
| Technology | Forms | null |
12013 | https://en.wikipedia.org/wiki/Girth%20%28graph%20theory%29 | Girth (graph theory) | In graph theory, the girth of an undirected graph is the length of a shortest cycle contained in the graph. If the graph does not contain any cycles (that is, it is a forest), its girth is defined to be infinity.
For example, a 4-cycle (square) has girth 4. A grid has girth 4 as well, and a triangular mesh has girth 3. A graph with girth four or more is triangle-free.
Cages
A cubic graph (all vertices have degree three) of girth g that is as small as possible is known as a g-cage (or as a (3,g)-cage). The Petersen graph is the unique 5-cage (it is the smallest cubic graph of girth 5), the Heawood graph is the unique 6-cage, the McGee graph is the unique 7-cage and the Tutte eight-cage is the unique 8-cage. There may exist multiple cages for a given girth. For instance there are three nonisomorphic 10-cages, each with 70 vertices: the Balaban 10-cage, the Harries graph and the Harries–Wong graph.
Girth and graph coloring
For any positive integers g and k, there exists a graph with girth at least g and chromatic number at least k; for instance, the Grötzsch graph is triangle-free and has chromatic number 4, and repeating the Mycielskian construction used to form the Grötzsch graph produces triangle-free graphs of arbitrarily large chromatic number. Paul Erdős was the first to prove the general result, using the probabilistic method. More precisely, he showed that a random graph on n vertices, formed by choosing independently whether to include each edge with probability n^((1−g)/g), has, with probability tending to 1 as n goes to infinity, at most n/2 cycles of length g or less, but has no independent set of size n/(2k). Therefore, removing one vertex from each short cycle leaves a smaller graph with girth greater than g, in which each color class of a coloring must be small and which therefore requires at least k colors in any coloring.
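To make the Mycielskian step concrete, the following is a minimal sketch in plain Python (the function name, the edge-list representation, and the driver loop are choices made only for this illustration, not part of any cited construction or library). Applied to a triangle-free graph with chromatic number k, the Mycielskian is again triangle-free and has chromatic number k + 1; starting from a single edge, two applications reproduce the Grötzsch graph.

```python
def mycielskian(edges, n):
    """Mycielskian of a simple graph given as a list of (u, v) edges over vertices 0..n-1.
    Returns (new_edges, new_vertex_count)."""
    new_edges = list(edges)
    # For every vertex u, add a "shadow" vertex u' = u + n joined to all of u's neighbours.
    for u, v in edges:
        new_edges.append((u + n, v))
        new_edges.append((u, v + n))
    # Add one apex vertex w = 2n joined to every shadow vertex.
    w = 2 * n
    for u in range(n):
        new_edges.append((u + n, w))
    return new_edges, 2 * n + 1

# Start from K2 (a single edge, chromatic number 2, no triangles).
edges, n = [(0, 1)], 2
edges, n = mycielskian(edges, n)   # the 5-cycle C5: 5 vertices, 5 edges
edges, n = mycielskian(edges, n)   # the Grötzsch graph: 11 vertices, 20 edges
print(n, len(edges))               # 11 20
```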
Explicit, though large, graphs with high girth and chromatic number can be constructed as certain Cayley graphs of linear groups over finite fields. These remarkable Ramanujan graphs also have large expansion coefficient.
Related concepts
The odd girth and even girth of a graph are the lengths of a shortest odd cycle and shortest even cycle respectively.
The circumference of a graph is the length of the longest (simple) cycle, rather than the shortest.
Thought of as the least length of a non-trivial cycle, the girth admits natural generalisations as the 1-systole or higher systoles in systolic geometry.
Girth is the dual concept to edge connectivity, in the sense that the girth of a planar graph is the edge connectivity of its dual graph, and vice versa. These concepts are unified in matroid theory by the girth of a matroid, the size of the smallest dependent set in the matroid. For a graphic matroid, the matroid girth equals the girth of the underlying graph, while for a co-graphic matroid it equals the edge connectivity.
Computation
The girth of an undirected graph can be computed by running a breadth-first search from each node, with complexity O(nm), where n is the number of vertices of the graph and m is the number of edges. A practical optimization is to limit the depth of each BFS to a depth that depends on the length of the smallest cycle discovered so far. Better algorithms are known in the case where the girth is even and when the graph is planar. In terms of lower bounds, computing the girth of a graph is at least as hard as solving the triangle finding problem on the graph.
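As an illustration of the search just described, here is a short Python sketch (the function and variable names are chosen for this example; it assumes a simple, unweighted, undirected graph given as adjacency lists). It runs a breadth-first search from every vertex and records, for every non-tree edge encountered, the length of the closed walk it closes through the BFS root; the minimum over all roots is the girth, in O(nm) time. The depth-limiting optimization mentioned above is omitted for brevity.

```python
from collections import deque

def girth(adj):
    """Girth of a simple undirected graph given as {vertex: list of neighbours}.
    Returns float('inf') if the graph is a forest."""
    best = float('inf')
    for src in adj:
        dist, parent = {src: 0}, {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    q.append(v)
                elif parent[u] != v:
                    # Non-tree edge (u, v): it closes a cycle through src of length
                    # at most dist[u] + dist[v] + 1.
                    best = min(best, dist[u] + dist[v] + 1)
    return best

# A 4-cycle (square) has girth 4.
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(girth(square))  # 4
```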
| Mathematics | Graph theory | null |
12024 | https://en.wikipedia.org/wiki/General%20relativity | General relativity | General relativity, also known as the general theory of relativity, and as Einstein's theory of gravity, is the geometric theory of gravitation published by Albert Einstein in 1915 and is the current description of gravitation in modern physics. General relativity generalizes special relativity and refines Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or four-dimensional spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of second-order partial differential equations.
Newton's law of universal gravitation, which describes classical gravity, can be seen as a prediction of general relativity for the almost flat spacetime geometry around stationary mass distributions. Some predictions of general relativity, however, are beyond Newton's law of universal gravitation in classical physics. These predictions concern the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light, and include gravitational time dilation, gravitational lensing, the gravitational redshift of light, the Shapiro time delay and singularities/black holes. So far, all tests of general relativity have been shown to be in agreement with the theory. The time-dependent solutions of general relativity enable us to talk about the history of the universe and have provided the modern framework for cosmology, thus leading to the discovery of the Big Bang and cosmic microwave background radiation. Despite the introduction of a number of alternative theories, general relativity continues to be the simplest theory consistent with experimental data.
Reconciliation of general relativity with the laws of quantum physics remains a problem, however, as there is a lack of a self-consistent theory of quantum gravity. It is not yet known how gravity can be unified with the three non-gravitational forces: strong, weak and electromagnetic.
Einstein's theory has astrophysical implications, including the prediction of black holes—regions of space in which space and time are distorted in such a way that nothing, not even light, can escape from them. Black holes are the end-state for massive stars. Microquasars and active galactic nuclei are believed to be stellar black holes and supermassive black holes. It also predicts gravitational lensing, where the bending of light results in multiple images of the same distant astronomical phenomenon. Other predictions include the existence of gravitational waves, which have been observed directly by the physics collaboration LIGO and other observatories. In addition, general relativity has provided the base of cosmological models of an expanding universe.
Widely acknowledged as a theory of extraordinary beauty, general relativity has often been described as the most beautiful of all existing physical theories.
History
Henri Poincaré's 1905 theory of the dynamics of the electron was a relativistic theory which he applied to all forces, including gravity. While others thought that gravity was instantaneous or of electromagnetic origin, he suggested that relativity was "something due to our methods of measurement". In his theory, he showed that gravitational waves propagate at the speed of light. Soon afterwards, Einstein started thinking about how to incorporate gravity into his relativistic framework. In 1907, beginning with a simple thought experiment involving an observer in free fall (FFO), he embarked on what would be an eight-year search for a relativistic theory of gravity. After numerous detours and false starts, his work culminated in the presentation to the Prussian Academy of Science in November 1915 of what are now known as the Einstein field equations, which form the core of Einstein's general theory of relativity. These equations specify how the geometry of space and time is influenced by whatever matter and radiation are present. A version of non-Euclidean geometry, called Riemannian geometry, enabled Einstein to develop general relativity by providing the key mathematical framework on which he fit his physical ideas of gravity. This idea was pointed out by mathematician Marcel Grossmann and published by Grossmann and Einstein in 1913.
The Einstein field equations are nonlinear and considered difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. But in 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse, and the objects known today as black holes. In the same year, the first steps towards generalizing Schwarzschild's solution to electrically charged objects were taken, eventually resulting in the Reissner–Nordström solution, which is now associated with electrically charged black holes. In 1917, Einstein applied his theory to the universe as a whole, initiating the field of relativistic cosmology. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to match that observational presumption. By 1929, however, the work of Hubble and others had shown that the universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which the universe has evolved from an extremely hot and dense earlier state. Einstein later declared the cosmological constant the biggest blunder of his life.
During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein showed in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors"), and in 1919 an expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of 29 May 1919, instantly making Einstein famous. Yet the theory remained outside the mainstream of theoretical physics and astrophysics until developments between approximately 1960 and 1975, now known as the golden age of general relativity. Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations. Ever more precise solar system tests confirmed the theory's predictive power, and relativistic cosmology also became amenable to direct observational tests.
General relativity has acquired a reputation as a theory of extraordinary beauty. Subrahmanyan Chandrasekhar has noted that at multiple levels, general relativity exhibits what Francis Bacon has termed a "strangeness in the proportion" (i.e. elements that excite wonderment and surprise). It juxtaposes fundamental concepts (space and time versus matter and motion) which had previously been considered as entirely independent. Chandrasekhar also noted that Einstein's only guides in his search for an exact theory were the principle of equivalence and his sense that a proper description of gravity should be geometrical at its basis, so that there was an "element of revelation" in the manner in which Einstein arrived at his theory. Other elements of beauty associated with the general theory of relativity are its simplicity and symmetry, the manner in which it incorporates invariance and unification, and its perfect logical consistency.
In the preface to Relativity: The Special and the General Theory, Einstein said "The present book is intended, as far as possible, to give an exact insight into the theory of Relativity to those readers who, from a general scientific and philosophical point of view, are interested in the theory, but who are not conversant with the mathematical apparatus of theoretical physics. The work presumes a standard of education corresponding to that of a university matriculation examination, and, despite the shortness of the book, a fair amount of patience and force of will on the part of the reader. The author has spared himself no pains in his endeavour to present the main ideas in the simplest and most intelligible form, and on the whole, in the sequence and connection in which they actually originated."
From classical mechanics to general relativity
General relativity can be understood by examining its similarities with and departures from classical physics. The first step is the realization that classical mechanics and Newton's law of gravity admit a geometric description. The combination of this description with the laws of special relativity results in a heuristic derivation of general relativity.
Geometry of Newtonian gravity
At the base of classical mechanics is the notion that a body's motion can be described as a combination of free (or inertial) motion, and deviations from this free motion. Such deviations are caused by external forces acting on a body in accordance with Newton's second law of motion, which states that the net force acting on a body is equal to that body's (inertial) mass multiplied by its acceleration. The preferred inertial motions are related to the geometry of space and time: in the standard reference frames of classical mechanics, objects in free motion move along straight lines at constant speed. In modern parlance, their paths are geodesics, straight world lines in curved spacetime.
Conversely, one might expect that inertial motions, once identified by observing the actual motions of bodies and making allowances for the external forces (such as electromagnetism or friction), can be used to define the geometry of space, as well as a time coordinate. However, there is an ambiguity once gravity comes into play. According to Newton's law of gravity, and independently verified by experiments such as that of Eötvös and its successors (see Eötvös experiment), there is a universality of free fall (also known as the weak equivalence principle, or the universal equality of inertial and passive-gravitational mass): the trajectory of a test body in free fall depends only on its position and initial speed, but not on any of its material properties. A simplified version of this is embodied in Einstein's elevator experiment, illustrated in the figure on the right: for an observer in an enclosed room, it is impossible to decide, by mapping the trajectory of bodies such as a dropped ball, whether the room is at rest in a gravitational field with the ball accelerating toward the floor, or whether the room is in free space aboard a rocket accelerating at a rate equal to that of the gravitational field, so that the ball, once released, has no acceleration of its own.
Given the universality of free fall, there is no observable distinction between inertial motion and motion under the influence of the gravitational force. This suggests the definition of a new class of inertial motion, namely that of objects in free fall under the influence of gravity. This new class of preferred motions, too, defines a geometry of space and time—in mathematical terms, it is the geodesic motion associated with a specific connection which depends on the gradient of the gravitational potential. Space, in this construction, still has the ordinary Euclidean geometry. However, spacetime as a whole is more complicated. As can be shown using simple thought experiments following the free-fall trajectories of different test particles, the result of transporting spacetime vectors that can denote a particle's velocity (time-like vectors) will vary with the particle's trajectory; mathematically speaking, the Newtonian connection is not integrable. From this, one can deduce that spacetime is curved. The resulting Newton–Cartan theory is a geometric formulation of Newtonian gravity using only covariant concepts, i.e. a description which is valid in any desired coordinate system. In this geometric description, tidal effects—the relative acceleration of bodies in free fall—are related to the derivative of the connection, showing how the modified geometry is caused by the presence of mass.
Relativistic generalization
As intriguing as geometric Newtonian gravity may be, its basis, classical mechanics, is merely a limiting case of (special) relativistic mechanics. In the language of symmetry: where gravity can be neglected, physics is Lorentz invariant as in special relativity rather than Galilei invariant as in classical mechanics. (The defining symmetry of special relativity is the Poincaré group, which includes translations, rotations, boosts and reflections.) The differences between the two become significant when dealing with speeds approaching the speed of light, and with high-energy phenomena.
With Lorentz symmetry, additional structures come into play. They are defined by the set of light cones (see image). The light-cones define a causal structure: for each event A, there is a set of events that can, in principle, either influence or be influenced by A via signals or interactions that do not need to travel faster than light (such as event B in the image), and a set of events for which such an influence is impossible (such as event C in the image). These sets are observer-independent. In conjunction with the world-lines of freely falling particles, the light-cones can be used to reconstruct the spacetime's semi-Riemannian metric, at least up to a positive scalar factor. In mathematical terms, this defines a conformal structure or conformal geometry.
Special relativity is defined in the absence of gravity. For practical applications, it is a suitable model whenever gravity can be neglected. Bringing gravity into play, and assuming the universality of free fall motion, an analogous reasoning as in the previous section applies: there are no global inertial frames. Instead there are approximate inertial frames moving alongside freely falling particles. Translated into the language of spacetime: the straight time-like lines that define a gravity-free inertial frame are deformed to lines that are curved relative to each other, suggesting that the inclusion of gravity necessitates a change in spacetime geometry.
A priori, it is not clear whether the new local frames in free fall coincide with the reference frames in which the laws of special relativity hold—that theory is based on the propagation of light, and thus on electromagnetism, which could have a different set of preferred frames. But using different assumptions about the special-relativistic frames (such as their being earth-fixed, or in free fall), one can derive different predictions for the gravitational redshift, that is, the way in which the frequency of light shifts as the light propagates through a gravitational field (cf. below). The actual measurements show that free-falling frames are the ones in which light propagates as it does in special relativity. The generalization of this statement, namely that the laws of special relativity hold to good approximation in freely falling (and non-rotating) reference frames, is known as the Einstein equivalence principle, a crucial guiding principle for generalizing special-relativistic physics to include gravity.
The same experimental data shows that time as measured by clocks in a gravitational field—proper time, to give the technical term—does not follow the rules of special relativity. In the language of spacetime geometry, it is not measured by the Minkowski metric. As in the Newtonian case, this is suggestive of a more general geometry. At small scales, all reference frames that are in free fall are equivalent, and approximately Minkowskian. Consequently, we are now dealing with a curved generalization of Minkowski space. The metric tensor that defines the geometry—in particular, how lengths and angles are measured—is not the Minkowski metric of special relativity, it is a generalization known as a semi- or pseudo-Riemannian metric. Furthermore, each Riemannian metric is naturally associated with one particular kind of connection, the Levi-Civita connection, and this is, in fact, the connection that satisfies the equivalence principle and makes space locally Minkowskian (that is, in suitable locally inertial coordinates, the metric is Minkowskian, and its first partial derivatives and the connection coefficients vanish).
Einstein's equations
Having formulated the relativistic, geometric version of the effects of gravity, the question of gravity's source remains. In Newtonian gravity, the source is mass. In special relativity, mass turns out to be part of a more general quantity called the energy–momentum tensor, which includes both energy and momentum densities as well as stress: pressure and shear. Using the equivalence principle, this tensor is readily generalized to curved spacetime. Drawing further upon the analogy with geometric Newtonian gravity, it is natural to assume that the field equation for gravity relates this tensor and the Ricci tensor, which describes a particular class of tidal effects: the change in volume for a small cloud of test particles that are initially at rest, and then fall freely. In special relativity, conservation of energy–momentum corresponds to the statement that the energy–momentum tensor is divergence-free. This formula, too, is readily generalized to curved spacetime by replacing partial derivatives with their curved-manifold counterparts, covariant derivatives studied in differential geometry. With this additional condition—the covariant divergence of the energy–momentum tensor, and hence of whatever is on the other side of the equation, is zero—the simplest nontrivial set of equations are what are called Einstein's (field) equations:
G_{\mu\nu} \equiv R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \kappa T_{\mu\nu}

On the left-hand side is the Einstein tensor, G_{\mu\nu}, which is symmetric and a specific divergence-free combination of the Ricci tensor R_{\mu\nu} and the metric g_{\mu\nu}. In particular,

R = g^{\mu\nu} R_{\mu\nu}

is the curvature scalar. The Ricci tensor itself is related to the more general Riemann curvature tensor as

R_{\mu\nu} = R^{\alpha}{}_{\mu\alpha\nu}.

On the right-hand side, \kappa is a constant and T_{\mu\nu} is the energy–momentum tensor. All tensors are written in abstract index notation. Matching the theory's prediction to observational results for planetary orbits or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics, the proportionality constant is found to be \kappa = 8\pi G / c^{4}, where G is the Newtonian constant of gravitation and c the speed of light in vacuum. When there is no matter present, so that the energy–momentum tensor vanishes, the results are the vacuum Einstein equations,

R_{\mu\nu} = 0.
In general relativity, the world line of a particle free from all external, non-gravitational force is a particular type of geodesic in curved spacetime. In other words, a freely moving or falling particle always moves along a geodesic.
The geodesic equation is:

\frac{d^{2} x^{\mu}}{ds^{2}} + \Gamma^{\mu}{}_{\alpha\beta}\, \frac{dx^{\alpha}}{ds} \frac{dx^{\beta}}{ds} = 0,

where s is a scalar parameter of motion (e.g. the proper time) and \Gamma^{\mu}{}_{\alpha\beta} are the Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients), which are symmetric in the two lower indices. Greek indices may take the values 0, 1, 2, 3, and the summation convention is used for the repeated indices \alpha and \beta. The quantity on the left-hand side of this equation is the acceleration of a particle, so this equation is analogous to Newton's laws of motion, which likewise provide formulae for the acceleration of a particle. This equation of motion employs the Einstein notation, meaning that repeated indices are summed (i.e. from zero to three). The Christoffel symbols are functions of the four spacetime coordinates and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation.
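Because the Christoffel symbols are built entirely from the metric and its first derivatives, they can be computed mechanically from any given metric. The following SymPy sketch does this for a deliberately simple toy metric, the unit 2-sphere, rather than a physical spacetime; the symbol and variable names are chosen only for this illustration.

```python
import sympy as sp

# Christoffel symbols from a metric:
#   Gamma^mu_{alpha beta} = (1/2) g^{mu nu} ( d_alpha g_{nu beta}
#                                           + d_beta  g_{nu alpha}
#                                           - d_nu    g_{alpha beta} )
# Toy metric: the unit 2-sphere, ds^2 = dtheta^2 + sin^2(theta) dphi^2.
theta, phi = sp.symbols('theta phi')
coords = [theta, phi]
g = sp.Matrix([[1, 0],
               [0, sp.sin(theta)**2]])
g_inv = g.inv()
n = len(coords)

Gamma = [[[sp.simplify(sp.Rational(1, 2) * sum(
              g_inv[mu, nu] * (sp.diff(g[nu, beta], coords[alpha])
                               + sp.diff(g[nu, alpha], coords[beta])
                               - sp.diff(g[alpha, beta], coords[nu]))
              for nu in range(n)))
           for beta in range(n)]
          for alpha in range(n)]
         for mu in range(n)]

# Non-zero symbols for the 2-sphere:
#   Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
#   Gamma^phi_{theta phi} = Gamma^phi_{phi theta} = cos(theta)/sin(theta)
print(Gamma[0][1][1], Gamma[1][0][1])
```

The geodesics of this toy metric, obtained by inserting these symbols into the geodesic equation, are the great circles of the sphere; the same recipe applies unchanged to a four-dimensional spacetime metric.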
Total force in general relativity
In general relativity, the effective gravitational potential energy of an object of mass m revolving around a massive central body M is given by

U_f(r) = -\frac{G M m}{r} + \frac{L^{2}}{2 m r^{2}} - \frac{G M L^{2}}{m c^{2} r^{3}}

A conservative total force can then be obtained as its negative gradient

F_f(r) = -\frac{dU_f(r)}{dr} = -\frac{G M m}{r^{2}} + \frac{L^{2}}{m r^{3}} - \frac{3 G M L^{2}}{m c^{2} r^{4}}

where L is the angular momentum. The first term represents the force of Newtonian gravity, which is described by the inverse-square law. The second term represents the centrifugal force in the circular motion. The third term represents the relativistic effect.
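As a rough numerical illustration of the relative size of the three terms, the sketch below evaluates them for approximate, Mercury-like orbital values (the figures are assumptions made only for this example, not data from the text); the relativistic third term is smaller than the Newtonian term by a factor of order v^2/c^2, roughly 10^-7 here.

```python
# Evaluate the three terms of
#   F = -G*M*m/r**2 + L**2/(m*r**3) - 3*G*M*L**2/(m*c**2*r**4)
# for approximate Mercury-like values (illustrative only).
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M = 1.989e30         # kg, Sun
m = 3.301e23         # kg, Mercury
r = 5.79e10          # m, mean orbital radius
v = 4.79e4           # m/s, mean orbital speed
L = m * v * r        # angular momentum, circular-orbit estimate

newtonian    = -G * M * m / r**2
centrifugal  =  L**2 / (m * r**3)
relativistic = -3 * G * M * L**2 / (m * c**2 * r**4)

print(newtonian, centrifugal, relativistic)
print(abs(relativistic / newtonian))   # ~3 v^2/c^2, about 1e-7
```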
Alternatives to general relativity
There are alternatives to general relativity built upon the same premises, which include additional rules and/or constraints, leading to different field equations. Examples are Whitehead's theory, Brans–Dicke theory, teleparallelism, f(R) gravity and Einstein–Cartan theory.
Definition and basic applications
The derivation outlined in the previous section contains all the information needed to define general relativity, describe its key properties, and address a question of crucial importance in physics, namely how the theory can be used for model-building.
Definition and basic properties
General relativity is a metric theory of gravitation. At its core are Einstein's equations, which describe the relation between the geometry of a four-dimensional pseudo-Riemannian manifold representing spacetime, and the energy–momentum contained in that spacetime. Phenomena that in classical mechanics are ascribed to the action of the force of gravity (such as free-fall, orbital motion, and spacecraft trajectories), correspond to inertial motion within a curved geometry of spacetime in general relativity; there is no gravitational force deflecting objects from their natural, straight paths. Instead, gravity corresponds to changes in the properties of space and time, which in turn changes the straightest-possible paths that objects will naturally follow. The curvature is, in turn, caused by the energy–momentum of matter. Paraphrasing the relativist John Archibald Wheeler, spacetime tells matter how to move; matter tells spacetime how to curve.
While general relativity replaces the scalar gravitational potential of classical physics by a symmetric rank-two tensor, the latter reduces to the former in certain limiting cases. For weak gravitational fields and slow speed relative to the speed of light, the theory's predictions converge on those of Newton's law of universal gravitation.
As it is constructed using tensors, general relativity exhibits general covariance: its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems. Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background independent. It thus satisfies a more stringent general principle of relativity, namely that the laws of physics are the same for all observers. Locally, as expressed in the equivalence principle, spacetime is Minkowskian, and the laws of physics exhibit local Lorentz invariance.
Model-building
The core concept of general-relativistic model-building is that of a solution of Einstein's equations. Given both Einstein's equations and suitable equations for the properties of matter, such a solution consists of a specific semi-Riemannian manifold (usually defined by giving the metric in specific coordinates), and specific matter fields defined on that manifold. Matter and geometry must satisfy Einstein's equations, so in particular, the matter's energy–momentum tensor must be divergence-free. The matter must, of course, also satisfy whatever additional equations were imposed on its properties. In short, such a solution is a model universe that satisfies the laws of general relativity, and possibly additional laws governing whatever matter might be present.
Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly. Nevertheless, a number of exact solutions are known, although only a few have direct physical applications. The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution, the Reissner–Nordström solution and the Kerr metric, each corresponding to a certain type of black hole in an otherwise empty universe, and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes, each describing an expanding cosmos. Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub–NUT solution (a model universe that is homogeneous, but anisotropic), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture).
Given the difficulty of finding exact solutions, Einstein's field equations are also solved frequently by numerical integration on a computer, or by considering small perturbations of exact solutions. In the field of numerical relativity, powerful computers are employed to simulate the geometry of spacetime and to solve Einstein's equations for interesting situations such as two colliding black holes. In principle, such methods may be applied to any system, given sufficient computer resources, and may address fundamental questions such as naked singularities. Approximate solutions may also be found by perturbation theories such as linearized gravity and its generalization, the post-Newtonian expansion, both of which were developed by Einstein. The latter provides a systematic approach to solving for the geometry of a spacetime that contains a distribution of matter that moves slowly compared with the speed of light. The expansion involves a series of terms; the first terms represent Newtonian gravity, whereas the later terms represent ever smaller corrections to Newton's theory due to general relativity. An extension of this expansion is the parametrized post-Newtonian (PPN) formalism, which allows quantitative comparisons between the predictions of general relativity and alternative theories.
Consequences of Einstein's theory
General relativity has a number of physical consequences. Some follow directly from the theory's axioms, whereas others have become clear only in the course of many years of research that followed Einstein's initial publication.
Gravitational time dilation and frequency shift
Assuming that the equivalence principle holds, gravity influences the passage of time. Light sent down into a gravity well is blueshifted, whereas light sent in the opposite direction (i.e., climbing out of the gravity well) is redshifted; collectively, these two effects are known as the gravitational frequency shift. More generally, processes close to a massive body run more slowly when compared with processes taking place farther away; this effect is known as gravitational time dilation.
Gravitational redshift has been measured in the laboratory and using astronomical observations. Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks, while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS). Tests in stronger gravitational fields are provided by the observation of binary pulsars. All results are in agreement with general relativity. However, at the current level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid.
Light deflection and gravitational time delay
General relativity predicts that the path of light will follow the curvature of spacetime as it passes near a massive object. This effect was initially confirmed by observing the light of stars or distant quasars being deflected as it passes the Sun.
This and related predictions follow from the fact that light follows what is called a light-like or null geodesic—a generalization of the straight lines along which light travels in classical physics. Such geodesics are the generalization of the invariance of lightspeed in special relativity. As one examines suitable model spacetimes (either the exterior Schwarzschild solution or, for more than a single mass, the post-Newtonian expansion), several effects of gravity on light propagation emerge. Although the bending of light can also be derived by extending the universality of free fall to light, the angle of deflection resulting from such calculations is only half the value given by general relativity.
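The factor-of-two difference mentioned above can be checked numerically. The sketch below (not from the article) evaluates the weak-field deflection angle 4GM/(c²b) for a light ray grazing the Sun and compares it with the half-sized value obtained from the naive free-fall argument; the constants are approximate, illustrative values.

```python
import math

# Weak-field deflection of light grazing the Sun.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
R_sun = 6.96e8       # m, impact parameter at the solar limb

deflection_gr = 4 * G * M_sun / (c**2 * R_sun)    # radians, general relativity
deflection_half = deflection_gr / 2               # naive free-fall estimate

print(math.degrees(deflection_gr) * 3600)         # about 1.75 arcseconds
print(math.degrees(deflection_half) * 3600)       # about 0.88 arcseconds
```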
Closely related to light deflection is the Shapiro Time Delay, the phenomenon that light signals take longer to move through a gravitational field than they would in the absence of that field. There have been numerous successful tests of this prediction. In the parameterized post-Newtonian formalism (PPN), measurements of both the deflection of light and the gravitational time delay determine a parameter called γ, which encodes the influence of gravity on the geometry of space.
Gravitational waves
In 1916, Albert Einstein predicted the existence of gravitational waves: ripples in the metric of spacetime that propagate at the speed of light. These are one of several analogies between weak-field gravity and electromagnetism, in that they are analogous to electromagnetic waves. On 11 February 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a pair of merging black holes.
The simplest type of such a wave can be visualized by its action on a ring of freely floating particles. A sine wave propagating through such a ring towards the reader distorts the ring in a characteristic, rhythmic fashion. Since Einstein's equations are non-linear, arbitrarily strong gravitational waves do not obey linear superposition, making their description difficult. However, linear approximations of gravitational waves are sufficiently accurate to describe the exceedingly weak waves that are expected to arrive here on Earth from far-off cosmic events, which typically result in relative distances increasing and decreasing by about 10−21 or less. Data analysis methods routinely make use of the fact that these linearized waves can be Fourier decomposed.
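The distortion of a ring of test particles by a linearized, plus-polarized wave is easy to sketch numerically. The following minimal example (not from the article) uses an enormously exaggerated strain amplitude for visibility; the value h = 0.2 is an illustrative assumption, not a physical one.

```python
import numpy as np

# A plus-polarized linearized wave stretches proper separations along one axis
# while squeezing them along the other, oscillating over a wave period.
h = 0.2                                      # illustrative, exaggerated strain
phi = np.linspace(0, 2 * np.pi, 12, endpoint=False)
x, y = np.cos(phi), np.sin(phi)              # unit ring of free test particles

for t in np.linspace(0, 1, 5):               # a few snapshots over one period
    s = h * np.sin(2 * np.pi * t)            # instantaneous strain
    x_t = x * (1 + s / 2)                    # stretch along x ...
    y_t = y * (1 - s / 2)                    # ... while squeezing along y
    print(np.round(np.c_[x_t, y_t], 3))
```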
Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space or Gowdy universes, varieties of an expanding cosmos filled with gravitational waves. But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are presently the only way to construct appropriate models.
Orbital effects and the relativity of direction
General relativity differs from classical mechanics in a number of predictions concerning orbiting bodies. It predicts an overall rotation (precession) of planetary orbits, as well as orbital decay caused by the emission of gravitational waves and effects related to the relativity of direction.
Precession of apsides
In general relativity, the apsides of any orbit (the point of the orbiting body's closest approach to the system's center of mass) will precess; the orbit is not an ellipse, but akin to an ellipse that rotates on its focus, resulting in a rose curve-like shape (see image). Einstein first derived this result by using an approximate metric representing the Newtonian limit and treating the orbiting body as a test particle. For him, the fact that his theory gave a straightforward explanation of Mercury's anomalous perihelion shift, discovered earlier by Urbain Le Verrier in 1859, was important evidence that he had at last identified the correct form of the gravitational field equations.
The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass) or the much more general post-Newtonian formalism. It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations). Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth), as well as in binary pulsar systems, where it is larger by five orders of magnitude.
In general relativity the perihelion shift $\sigma$, expressed in radians per revolution, is approximately given by
$$\sigma = \frac{24 \pi^3 a^2}{T^2 c^2 \left(1 - e^2\right)},$$
where:
$a$ is the semi-major axis
$T$ is the orbital period
$c$ is the speed of light in vacuum
$e$ is the orbital eccentricity
A numerical check of this formula for Mercury is sketched below.
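The following minimal sketch (not from the article) evaluates the formula for Mercury using approximate, illustrative orbital parameters and converts the result to the familiar arcseconds-per-century figure.

```python
import math

# Perihelion shift of Mercury from the approximate GR formula.
a = 5.791e10            # m, semi-major axis
T = 87.969 * 86400.0    # s, orbital period
e = 0.2056              # orbital eccentricity
c = 2.998e8             # m/s

sigma = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))   # rad per revolution

revs_per_century = 100 * 365.25 / 87.969
arcsec_per_century = math.degrees(sigma) * 3600 * revs_per_century
print(arcsec_per_century)   # about 43 arcseconds per century
```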
Orbital decay
According to general relativity, a binary system will emit gravitational waves, thereby losing energy. Due to this loss, the distance between the two orbiting bodies decreases, and so does their orbital period. Within the Solar System or for ordinary double stars, the effect is too small to be observable. This is not the case for a close binary pulsar, a system of two orbiting neutron stars, one of which is a pulsar: from the pulsar, observers on Earth receive a regular series of radio pulses that can serve as a highly accurate clock, which allows precise measurements of the orbital period. Because neutron stars are immensely compact, significant amounts of energy are emitted in the form of gravitational radiation.
The first observation of a decrease in orbital period due to the emission of gravitational waves was made by Hulse and Taylor, using the binary pulsar PSR B1913+16 they had discovered in 1974. This was the first detection of gravitational waves, albeit indirect, for which they were awarded the 1993 Nobel Prize in Physics. Since then, several other binary pulsars have been found, in particular the double pulsar PSR J0737−3039, in which both stars are pulsars and which was last reported to be in agreement with general relativity in 2021, after 16 years of observations.
Geodetic precession and frame-dragging
Several relativistic effects are directly related to the relativity of direction. One is geodetic precession: the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible ("parallel transport"). For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging. More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%.
Near a rotating mass, there are gravitomagnetic or frame-dragging effects. A distant observer will determine that objects close to the mass get "dragged around". This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere, rotation is inevitable. Such effects can again be tested through their influence on the orientation of gyroscopes in free fall. Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction. Also the Mars Global Surveyor probe around Mars has been used.
Astrophysical applications
Gravitational lensing
The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing. Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring, or partial rings called arcs.
The earliest example was discovered in 1979; since then, more than a hundred gravitational lenses have been observed. Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such "microlensing events" have been observed.
Gravitational lensing has developed into a tool of observational astronomy. It is used to detect the presence and distribution of dark matter, provide a "natural telescope" for observing distant galaxies, and to obtain an independent estimate of the Hubble constant. Statistical evaluations of lensing data provide valuable insight into the structural evolution of galaxies.
Gravitational-wave astronomy
Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). Detection of these waves is a major goal of current relativity-related research. Several land-based gravitational wave detectors are currently in operation, most notably the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO. Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the 10−9 to 10−6 hertz frequency range, which originate from binary supermassive black holes. A European space-based detector, eLISA / NGO, is currently under development, with a precursor mission (LISA Pathfinder) having launched in December 2015.
Observations of gravitational waves promise to complement observations in the electromagnetic spectrum. They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic string. In February 2016, the Advanced LIGO team announced that they had detected gravitational waves from a black hole merger.
Black holes and other compact objects
Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape. In the currently accepted models of stellar evolution, neutron stars of around 1.4 solar masses, and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars. Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center, and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures.
Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation. Accretion, the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, notably diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars. In particular, accretion can lead to relativistic jets, focused beams of highly energetic particles that are being flung into space at almost light speed.
General relativity plays a central role in modelling all these phenomena, and observations provide strong evidence for the existence of black holes with the properties predicted by the theory.
Black holes are also sought-after targets in the search for gravitational waves (cf. Gravitational waves, above). Merging black hole binaries should lead to some of the strongest gravitational wave signals reaching detectors here on Earth, and the phase directly before the merger ("chirp") could be used as a "standard candle" to deduce the distance to the merger events, and hence serve as a probe of cosmic expansion at large distances. The gravitational waves produced as a stellar black hole plunges into a supermassive one should provide direct information about the supermassive black hole's geometry.
Cosmology
The current models of cosmology are based on Einstein's field equations, which include the cosmological constant $\Lambda$ since it has important influence on the large-scale dynamics of the cosmos,
$$R_{\mu\nu} - \frac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu},$$
where $g_{\mu\nu}$ is the spacetime metric. Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions, allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase. Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation, further observational data can be used to put the models to the test. Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis, the large-scale structure of the universe, and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation.
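For a flat FLRW model with matter and a cosmological constant, the age of the universe follows from a single integral over the scale factor. The sketch below (not from the article) is a minimal numerical illustration; the parameter values (H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7) are assumptions chosen only for illustration.

```python
from scipy.integrate import quad

# Age of a flat FLRW universe: t0 = (1/H0) * Int_0^1 da / (a * E(a)),
# with E(a) = sqrt(Om / a^3 + OL).
H0 = 70.0                         # km/s/Mpc, illustrative value
Om, OL = 0.3, 0.7                 # matter and dark-energy density parameters

Mpc_km = 3.0857e19                # kilometres per megaparsec
H0_si = H0 / Mpc_km               # 1/s
Gyr = 3.1557e16                   # seconds per gigayear

integrand = lambda a: 1.0 / (a * (Om / a**3 + OL) ** 0.5)
integral, _ = quad(integrand, 1e-8, 1.0)

print(integral / H0_si / Gyr)     # about 13.5 billion years
```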
Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part. About 90% of all matter appears to be dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly. There is no generally accepted description of this new kind of matter, within the framework of known particle physics or otherwise. Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of our universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state, known as dark energy, the nature of which remains unclear.
An inflationary phase, an additional phase of strongly accelerated expansion at cosmic times of around 10−33 seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation. Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario. However, there is a bewildering variety of possible inflationary scenarios, which cannot be restricted by current observations. An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the big bang singularity. An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed (cf. the section on quantum gravity, below).
Exotic solutions: time travel, warp drives
Kurt Gödel showed that solutions to Einstein's equations exist that contain closed timelike curves (CTCs), which allow for loops in time. The solutions require extreme physical conditions unlikely ever to occur in practice, and it remains an open question whether further laws of physics will eliminate them completely. Since then, other—similarly impractical—GR solutions containing CTCs have been found, such as the Tipler cylinder and traversable wormholes. Stephen Hawking introduced the chronology protection conjecture, an assumption beyond those of standard general relativity that would prevent time travel.
Some exact solutions in general relativity, such as the Alcubierre drive, provide examples of a warp drive, but these solutions require an exotic matter distribution and generally suffer from semiclassical instability.
Advanced concepts
Asymptotic symmetries
The spacetime symmetry group for special relativity is the Poincaré group, which is a ten-dimensional group of three Lorentz boosts, three rotations, and four spacetime translations. It is logical to ask what symmetries, if any, might apply in General Relativity. A tractable case might be to consider the symmetries of spacetime as seen by observers located far away from all sources of the gravitational field. The naive expectation for asymptotically flat spacetime symmetries might be simply to extend and reproduce the symmetries of flat spacetime of special relativity, viz., the Poincaré group.
In 1962 Hermann Bondi, M. G. van der Burg, A. W. Metzner and Rainer K. Sachs addressed this asymptotic symmetry problem in order to investigate the flow of energy at infinity due to propagating gravitational waves. Their first step was to decide on some physically sensible boundary conditions to place on the gravitational field at light-like infinity to characterize what it means to say a metric is asymptotically flat, making no a priori assumptions about the nature of the asymptotic symmetry group—not even the assumption that such a group exists. Then after designing what they considered to be the most sensible boundary conditions, they investigated the nature of the resulting asymptotic symmetry transformations that leave invariant the form of the boundary conditions appropriate for asymptotically flat gravitational fields. What they found was that the asymptotic symmetry transformations actually do form a group and the structure of this group does not depend on the particular gravitational field that happens to be present. This means that, as expected, one can separate the kinematics of spacetime from the dynamics of the gravitational field at least at spatial infinity. The puzzling surprise in 1962 was their discovery of a rich infinite-dimensional group (the so-called BMS group) as the asymptotic symmetry group, instead of the finite-dimensional Poincaré group, which is a subgroup of the BMS group. Not only are the Lorentz transformations asymptotic symmetry transformations, there are also additional transformations that are not Lorentz transformations but are asymptotic symmetry transformations. In fact, they found an additional infinity of transformation generators known as supertranslations. This implies the conclusion that General Relativity (GR) does not reduce to special relativity in the case of weak fields at long distances. It turns out that the BMS symmetry, suitably modified, could be seen as a restatement of the universal soft graviton theorem in quantum field theory (QFT), which relates universal infrared (soft) QFT with GR asymptotic spacetime symmetries.
Causal structure and global geometry
In general relativity, no material body can catch up with or overtake a light pulse. No influence from an event A can reach any other location X before light sent out at A to X. In consequence, an exploration of all light worldlines (null geodesics) yields key information about the spacetime's causal structure. This structure can be displayed using Penrose–Carter diagrams in which infinitely large regions of space and infinite time intervals are shrunk ("compactified") so as to fit onto a finite map, while light still travels along diagonals as in standard spacetime diagrams.
Aware of the importance of causal structure, Roger Penrose and others developed what is known as global geometry. In global geometry, the object of study is not one particular solution (or family of solutions) to Einstein's equations. Rather, relations that hold true for all geodesics, such as the Raychaudhuri equation, and additional non-specific assumptions about the nature of matter (usually in the form of energy conditions) are used to derive general results.
Horizons
Using global geometry, some spacetimes can be shown to contain boundaries called horizons, which demarcate one region from the rest of spacetime. The best-known examples are black holes: if mass is compressed into a sufficiently compact region of space (as specified in the hoop conjecture, the relevant length scale is the Schwarzschild radius), no light from inside can escape to the outside. Since no object can overtake a light pulse, all interior matter is imprisoned as well. Passage from the exterior to the interior is still possible, showing that the boundary, the black hole's horizon, is not a physical barrier.
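The length scale named above, the Schwarzschild radius, follows from a one-line formula. The sketch below (not from the article) evaluates it for the Sun and the Earth; the constants are approximate, illustrative values.

```python
# Schwarzschild radius r_s = 2 G M / c^2, the compactness scale of the hoop conjecture.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(1.989e30))   # Sun: about 2.95e3 m
print(schwarzschild_radius(5.972e24))   # Earth: about 8.9e-3 m
```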
Early studies of black holes relied on explicit solutions of Einstein's equations, notably the spherically symmetric Schwarzschild solution (used to describe a static black hole) and the axisymmetric Kerr solution (used to describe a rotating, stationary black hole, and introducing interesting features such as the ergosphere). Using global geometry, later studies have revealed more general properties of black holes. With time they become rather simple objects characterized by eleven parameters specifying: electric charge, mass–energy, linear momentum, angular momentum, and location at a specified time. This is stated by the black hole uniqueness theorem: "black holes have no hair", that is, no distinguishing marks like the hairstyles of humans. Irrespective of the complexity of a gravitating object collapsing to form a black hole, the object that results (having emitted gravitational waves) is very simple.
Even more remarkably, there is a general set of laws known as black hole mechanics, which is analogous to the laws of thermodynamics. For instance, by the second law of black hole mechanics, the area of the event horizon of a general black hole will never decrease with time, analogous to the entropy of a thermodynamic system. This limits the energy that can be extracted by classical means from a rotating black hole (e.g. by the Penrose process). There is strong evidence that the laws of black hole mechanics are, in fact, a subset of the laws of thermodynamics, and that the black hole area is proportional to its entropy. This leads to a modification of the original laws of black hole mechanics: for instance, as the second law of black hole mechanics becomes part of the second law of thermodynamics, it is possible for the black hole area to decrease as long as other processes ensure that entropy increases overall. As thermodynamical objects with nonzero temperature, black holes should emit thermal radiation. Semiclassical calculations indicate that indeed they do, with the surface gravity playing the role of temperature in Planck's law. This radiation is known as Hawking radiation (cf. the quantum theory section, below).
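The role of surface gravity as a temperature can be illustrated with the standard Hawking temperature formula. The following minimal numerical sketch (not from the article) evaluates it for a black hole of one solar mass, using approximate constants.

```python
import math

# Hawking temperature T = hbar c^3 / (8 pi G M k_B) for a solar-mass black hole.
hbar = 1.0546e-34     # J s
c = 2.998e8           # m/s
G = 6.674e-11         # m^3 kg^-1 s^-2
kB = 1.3807e-23       # J/K
M_sun = 1.989e30      # kg

T = hbar * c**3 / (8 * math.pi * G * M_sun * kB)
print(T)   # about 6e-8 K -- far colder than the cosmic microwave background
```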
There are many other types of horizons. In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon). Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with a semiclassical radiation known as Unruh radiation.
Singularities
Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel. But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values. Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole, or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole. The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well.
Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization. The famous singularity theorems, proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage and also at the beginning of a wide class of expanding universes. However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the BKL conjecture). The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity.
Evolution equations
Each solution of Einstein's equation encompasses the whole history of a universe—it is not just some snapshot of how things are, but a whole, possibly matter-filled, spacetime. It describes the state of matter and geometry everywhere and at every moment in that particular universe. Due to its general covariance, Einstein's theory is not sufficient by itself to determine the time evolution of the metric tensor. It must be combined with a coordinate condition, which is analogous to gauge fixing in other field theories.
To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. This is done in "3+1" formulations, where spacetime is split into three space dimensions and one time dimension. The best-known example is the ADM formalism. These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist, and are uniquely defined, once suitable initial conditions have been specified. Such formulations of Einstein's field equations are the basis of numerical relativity.
Global and quasi-local quantities
The notion of evolution equations is intimately tied in with another aspect of general relativistic physics. In Einstein's theory, it turns out to be impossible to find a general definition for a seemingly simple property such as a system's total mass (or energy). The main reason is that the gravitational field—like any physical field—must be ascribed a certain energy, but that it proves to be fundamentally impossible to localize that energy.
Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" (ADM mass) or suitable symmetries (Komar mass). If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the Bondi mass at null infinity. Just as in classical physics, it can be shown that these masses are positive. Corresponding global definitions exist for momentum and angular momentum. There have also been a number of attempts to define quasi-local quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems, such as a more precise formulation of the hoop conjecture.
Relationship with quantum theory
If general relativity were considered to be one of the two pillars of modern physics, then quantum theory, the basis of understanding matter from elementary particles to solid-state physics, would be the other. However, how to reconcile quantum theory with general relativity is still an open question.
Quantum field theory in curved spacetime
Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth. In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime. Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation leading to the possibility that they evaporate over time. As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes.
Quantum gravity
The demand for consistency between a quantum description of matter and a geometric description of spacetime, as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics. Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of promising candidates exist.
Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems. Some have argued that at low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity. At very high energies, however, the perturbative results are badly divergent and lead to models devoid of predictive power ("perturbative non-renormalizability").
One attempt to overcome these limitations is string theory, a quantum theory not of point particles, but of minute one-dimensional extended objects. The theory promises to be a unified description of all particles and interactions, including gravity; the price to pay is unusual features such as six extra dimensions of space in addition to the usual three. In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity.
Another approach starts with the canonical quantization procedures of quantum theory. Using the initial-value-formulation of general relativity (cf. evolution equations above), the result is the Wheeler–DeWitt equation (an analogue of the Schrödinger equation) which, regrettably, turns out to be ill-defined without a proper ultraviolet (lattice) cutoff. However, with the introduction of what are now known as Ashtekar variables, this leads to a promising model known as loop quantum gravity. Space is represented by a web-like structure called a spin network, evolving over time in discrete steps.
Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced, there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being the lattice theory of gravity based on the Feynman Path Integral approach and Regge calculus, dynamical triangulations, causal sets, twistor models or the path integral based models of quantum cosmology.
All candidate theories still have major formal and conceptual problems to overcome. They also face the common problem that, as yet, there is no way to put quantum gravity predictions to experimental tests (and thus to decide between the candidates where their predictions vary), although there is hope for this to change as future data from cosmological observations and particle physics experiments becomes available.
Current status
General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far passed many unambiguous observational and experimental tests. However, there are strong indications that the theory is incomplete. The problem of quantum gravity and the question of the reality of spacetime singularities remain open. Observational data that is taken as evidence for dark energy and dark matter could indicate the need for new physics.
Even taken as is, general relativity is rich with possibilities for further exploration. Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations, while numerical relativists run increasingly powerful computer simulations (such as those describing merging black holes). In February 2016, it was announced that the existence of gravitational waves was directly detected by the Advanced LIGO team on 14 September 2015. A century after its introduction, general relativity remains a highly active area of research.
| Physical sciences | Theory of relativity | null |
12100 | https://en.wikipedia.org/wiki/Graviton | Graviton | In theories of quantum gravity, the graviton is the hypothetical elementary particle that mediates the force of gravitational interaction. There is no complete quantum field theory of gravitons due to an outstanding mathematical problem with renormalization in general relativity. In string theory, believed by some to be a consistent theory of quantum gravity, the graviton is a massless state of a fundamental string.
If it exists, the graviton is expected to be massless because the gravitational force has a very long range, and appears to propagate at the speed of light. The graviton must be a spin-2 boson because the source of gravitation is the stress–energy tensor, a second-order tensor (compared with electromagnetism's spin-1 photon, the source of which is the four-current, a first-order tensor). Additionally, it can be shown that any massless spin-2 field would give rise to a force indistinguishable from gravitation, because a massless spin-2 field would couple to the stress–energy tensor in the same way gravitational interactions do. This result suggests that, if a massless spin-2 particle is discovered, it must be the graviton.
Theory
It is hypothesized that gravitational interactions are mediated by an as yet undiscovered elementary particle, dubbed the graviton. The three other known forces of nature are mediated by elementary particles: electromagnetism by the photon, the strong interaction by gluons, and the weak interaction by the W and Z bosons. All three of these forces appear to be accurately described by the Standard Model of particle physics. In the classical limit, a successful theory of gravitons would reduce to general relativity, which itself reduces to Newton's law of gravitation in the weak-field limit.
History
Albert Einstein discussed quantized gravitational radiation in 1916, the year following his publication of general relativity.
The term graviton was coined in 1934 by Soviet physicists Dmitry Blokhintsev and F. M. Gal'perin. Paul Dirac reintroduced the term in a number of lectures in 1959, noting that the energy of the gravitational field should come in quanta. A mediation of the gravitational interaction by particles was anticipated by Pierre-Simon Laplace. Just like Newton's anticipation of photons, Laplace's anticipated "gravitons" had a speed greater than c, the speed of light in vacuum and the speed of gravitons expected in modern theories, and were not connected to quantum mechanics or special relativity, since these theories did not yet exist during Laplace's lifetime.
Gravitons and renormalization
When describing graviton interactions, the classical theory of Feynman diagrams and semiclassical corrections such as one-loop diagrams behave normally. However, Feynman diagrams with at least two loops lead to ultraviolet divergences. These infinite results cannot be removed because quantized general relativity is not perturbatively renormalizable, unlike quantum electrodynamics and models such as the Yang–Mills theory. Therefore, incalculable answers are found from the perturbation method by which physicists calculate the probability of a particle to emit or absorb gravitons, and the theory loses predictive veracity. Those problems and the complementary approximation framework are grounds to show that a theory more unified than quantized general relativity is required to describe the behavior near the Planck scale.
Comparison with other forces
Like the force carriers of the other forces (see photon, gluon, W and Z bosons), the graviton plays a role in general relativity, in defining the spacetime in which events take place. In some descriptions energy modifies the "shape" of spacetime itself, and gravity is a result of this shape, an idea which at first glance may appear hard to match with the idea of a force acting between particles. Because the diffeomorphism invariance of the theory does not allow any particular space-time background to be singled out as the "true" space-time background, general relativity is said to be background-independent. In contrast, the Standard Model is not background-independent, with Minkowski space enjoying a special status as the fixed background space-time. A theory of quantum gravity is needed in order to reconcile these differences. Whether this theory should be background-independent is an open question. The answer to this question will determine the understanding of what specific role gravitation plays in the fate of the universe.
Energy and wavelength
While gravitons are presumed to be massless, they would still carry energy, as does any other quantum particle. Photon energy and gluon energy are also carried by massless particles. It is unclear which variables might determine graviton energy, the amount of energy carried by a single graviton.
Alternatively, if gravitons are massive at all, the analysis of gravitational waves yielded a new upper bound on the mass of gravitons. The graviton's Compton wavelength is at least about 1.6 light-years, corresponding to an extremely small upper bound on the graviton mass. This relation between wavelength and mass-energy is calculated with the Planck–Einstein relation, the same formula that relates electromagnetic wavelength to photon energy.
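The wavelength–mass relation can be illustrated with a short calculation. The sketch below (not from the article) converts a hypothetical graviton mass bound to a Compton wavelength via λ = hc/E; the mass value of 10⁻²² eV/c² is an illustrative assumption only, not a measured bound.

```python
# Compton wavelength from an assumed graviton mass bound, via lambda = h c / E.
h_eV = 4.1357e-15          # Planck constant, eV s
c = 2.998e8                # m/s
light_year = 9.461e15      # m

m_graviton_eV = 1e-22      # assumed illustrative upper bound, eV/c^2

compton_wavelength = h_eV * c / m_graviton_eV      # metres
print(compton_wavelength)                # ~1.2e16 m
print(compton_wavelength / light_year)   # ~1.3 light-years
```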
Experimental observation
Unambiguous detection of individual gravitons, though not prohibited by any fundamental law, has been thought to be impossible with any physically reasonable detector. The reason is the extremely low cross section for the interaction of gravitons with matter. For example, a detector with the mass of Jupiter and 100% efficiency, placed in close orbit around a neutron star, would only be expected to observe one graviton every 10 years, even under the most favorable conditions. It would be impossible to discriminate these events from the background of neutrinos, since the dimensions of the required neutrino shield would ensure collapse into a black hole. It has been proposed that detecting single gravitons would be possible by quantum sensing. Even quantum events may not indicate quantization of gravitational radiation.
The LIGO and Virgo collaborations' observations have directly detected gravitational waves. Others have postulated that graviton scattering yields gravitational waves as particle interactions yield coherent states. Although these experiments cannot detect individual gravitons, they might provide information about certain properties of the graviton. For example, if gravitational waves were observed to propagate slower than c (the speed of light in vacuum), that would imply that the graviton has mass (however, gravitational waves must propagate slower than c in a region with non-zero mass density if they are to be detectable). Observations of gravitational waves put an upper bound on the graviton's mass. Solar system planetary trajectory measurements by space missions such as Cassini and MESSENGER give a comparable upper bound. The gravitational wave and planetary ephemeris bounds need not agree: they test different aspects of a potential graviton-based theory.
Astronomical observations of the kinematics of galaxies, especially the galaxy rotation problem and modified Newtonian dynamics, might point toward gravitons having non-zero mass.
Difficulties and outstanding issues
Most theories containing gravitons suffer from severe problems. Attempts to extend the Standard Model or other quantum field theories by adding gravitons run into serious theoretical difficulties at energies close to or above the Planck scale. This is because of infinities arising due to quantum effects; technically, gravitation is not renormalizable. Since classical general relativity and quantum mechanics seem to be incompatible at such energies, from a theoretical point of view, this situation is not tenable. One possible solution is to replace particles with strings. String theories are quantum theories of gravity in the sense that they reduce to classical general relativity plus field theory at low energies, but are fully quantum mechanical, contain a graviton, and are thought to be mathematically consistent.
| Physical sciences | Bosons | Physics |
12103 | https://en.wikipedia.org/wiki/Golden%20Gate%20Bridge | Golden Gate Bridge | The Golden Gate Bridge is a suspension bridge spanning the Golden Gate, the strait connecting San Francisco Bay and the Pacific Ocean in California, United States. The structure links San Francisco—the northern tip of the San Francisco Peninsula—to Marin County, carrying both U.S. Route 101 and California State Route 1 across the strait. It also carries pedestrian and bicycle traffic, and is designated as part of U.S. Bicycle Route 95. Recognized by the American Society of Civil Engineers as one of the Wonders of the Modern World, the bridge is one of the most internationally recognized symbols of San Francisco and California.
The idea of a fixed link between San Francisco and Marin had gained increasing popularity during the late 19th century, but it was not until the early 20th century that such a link became feasible. Joseph Strauss served as chief engineer for the project, with Leon Moisseiff, Irving Morrow and Charles Ellis making significant contributions to its design. The bridge opened to the public on May 27, 1937, and has undergone various retrofits and other improvement projects in the decades since.
The Golden Gate Bridge is described in Frommer's travel guide as "possibly the most beautiful, certainly the most photographed, bridge in the world." At the time of its opening in 1937, it was both the longest and the tallest suspension bridge in the world, titles it held until 1964 and 1998 respectively. Its main span is 4,200 feet (1,280 m) and its total height is 746 feet (227 m).
History
Ferry service
Before the bridge was built, the only practical short route between San Francisco and what is now Marin County was by boat across a section of San Francisco Bay. A ferry service began as early as 1820, with a regularly scheduled service beginning in the 1840s for the purpose of transporting water to San Francisco.
In 1867, the Sausalito Land and Ferry Company opened. In 1920, the service was taken over by the Golden Gate Ferry Company, which merged in 1929 with the ferry system of the Southern Pacific Railroad, becoming the Southern Pacific-Golden Gate Ferries, Ltd., the largest ferry operation in the world. Once for railroad passengers and customers only, Southern Pacific's automobile ferries became very profitable and important to the regional economy. The ferry crossing between the Hyde Street Pier in San Francisco and Sausalito Ferry Terminal in Marin County took approximately 20 minutes and cost $1.00 per vehicle prior to 1937, when the price was reduced to compete with the new bridge. The trip from the San Francisco Ferry Building took 27 minutes.
Many wanted to build a bridge to connect San Francisco to Marin County. San Francisco was the largest American city still served primarily by ferry boats. Because it did not have a permanent link with communities around the bay, the city's growth rate was below the national average. Many experts said that a bridge could not be built across the strait, which had strong, swirling tides and currents, with deep water at the center of the channel, and frequent strong winds. Experts said that ferocious winds and blinding fogs would prevent construction and operation.
Conception
Although the idea of a bridge spanning the Golden Gate was not new, the proposal that eventually took hold was made in a 1916 San Francisco Bulletin article by former engineering student James Wilkins. San Francisco's City Engineer estimated the cost at $100 million, impractical for the time. He asked bridge engineers whether it could be built for less. One who responded, Joseph Strauss, was an ambitious engineer and poet who had, for his graduate thesis, designed a railroad bridge across the Bering Strait. At the time, Strauss had completed some 400 drawbridges—most of which were inland—and nothing on the scale of the new project. Strauss's initial drawings were for a massive cantilever on each side of the strait, connected by a central suspension segment, which Strauss promised could be built for $17 million.
A suspension-bridge design was chosen, using recent advances in bridge design and metallurgy.
Strauss spent more than a decade drumming up support in Northern California. The bridge faced opposition, including litigation, from many sources. The Department of War was concerned that the bridge would interfere with ship traffic. The US Navy feared that a ship collision or sabotage to the bridge could block the entrance to one of its main harbors. Unions demanded guarantees that local workers would be favored for construction jobs. Southern Pacific Railroad, one of the most powerful business interests in California, opposed the bridge as competition to its ferry fleet and filed a lawsuit against the project, leading to a mass boycott of the ferry service.
In May 1924, Colonel Herbert Deakyne held the second hearing on the Bridge on behalf of the Secretary of War in a request to use federal land for construction. Deakyne, on behalf of the Secretary of War, approved the transfer of land needed for the bridge structure and leading roads to the "Bridging the Golden Gate Association" and both San Francisco County and Marin County, pending further bridge plans by Strauss. Another ally was the fledgling automobile industry, which supported the development of roads and bridges to increase demand for automobiles.
The bridge's name was first used when the project was initially discussed in 1917 by M.M. O'Shaughnessy, city engineer of San Francisco, and Strauss. The name became official with the passage of the Golden Gate Bridge and Highway District Act by the state legislature in 1923, creating a special district to design, build and finance the bridge. San Francisco and most of the counties along the North Coast of California joined the Golden Gate Bridge District, with the exception being Humboldt County, whose residents opposed the bridge's construction and the traffic it would generate.
Design
Strauss was the chief engineer in charge of the overall design and construction of the bridge project. However, because he had little understanding or experience with cable-suspension designs, responsibility for much of the engineering and architecture fell on other experts. Strauss's initial design proposal (two double cantilever spans linked by a central suspension segment) was unacceptable from a visual standpoint. The final suspension design was conceived and championed by Leon Moisseiff, the engineer of the Manhattan Bridge in New York City.
Irving Morrow, a relatively unknown residential architect, designed the overall shape of the bridge towers, the lighting scheme, and Art Deco elements, such as the tower decorations, streetlights, railing, and walkways. The famous International Orange color was Morrow's personal selection, winning out over other possibilities, including the US Navy's suggestion that it be painted with black and yellow stripes to ensure visibility by passing ships.
Senior engineer Charles Alton Ellis, collaborating remotely with Moisseiff, was the principal engineer of the project. Moisseiff produced the basic structural design, introducing his "deflection theory" by which a thin, flexible roadway would flex in the wind, greatly reducing stress by transmitting forces via suspension cables to the bridge towers. Although the Golden Gate Bridge design has proved sound, a later Moisseiff design, the original Tacoma Narrows Bridge, collapsed in a strong windstorm soon after it was completed, because of an unexpected aeroelastic flutter. Ellis was also tasked with designing a "bridge within a bridge" in the southern abutment, to avoid the need to demolish Fort Point, a pre–Civil War masonry fortification viewed, even then, as worthy of historic preservation. He penned a graceful steel arch spanning the fort and carrying the roadway to the bridge's southern anchorage.
Ellis was a Greek scholar and mathematician who at one time was a University of Illinois professor of engineering despite having no engineering degree. He eventually earned a degree in civil engineering from the University of Illinois prior to designing the Golden Gate Bridge and spent the last twelve years of his career as a professor at Purdue University. He became an expert in structural design, writing the standard textbook of the time. Ellis did much of the technical and theoretical work that built the bridge, but he received none of the credit in his lifetime. In November 1931, Strauss fired Ellis and replaced him with a former subordinate, Clifford Paine, ostensibly for wasting too much money sending telegrams back and forth to Moisseiff. Ellis, obsessed with the project and unable to find work elsewhere during the Depression, continued working 70 hours per week on an unpaid basis, eventually turning in ten volumes of hand calculations.
With an eye toward self-promotion and posterity, Strauss downplayed the contributions of his collaborators who, despite receiving little recognition or compensation, are largely responsible for the final form of the bridge. He succeeded in having himself credited as the person most responsible for the design and vision of the bridge. Only much later were the contributions of the others on the design team properly appreciated. In May 2007, the Golden Gate Bridge District issued a formal report on 70 years of stewardship of the famous bridge and decided to give Ellis major credit for the design of the bridge.
Finance
The Golden Gate Bridge and Highway District, authorized by an act of the California Legislature, was incorporated in 1928 as the official entity to design, construct, and finance the Golden Gate Bridge. However, after the Wall Street Crash of 1929, the District was unable to raise the construction funds, so it lobbied for a $30 million bond measure. The bonds were approved in November 1930, by votes in the counties affected by the bridge. The construction budget at the time of approval was $27 million. However, the District was unable to sell the bonds until 1932, when Amadeo Giannini, the founder of San Francisco–based Bank of America, agreed on behalf of his bank to buy the entire issue in order to help the local economy.
Construction
Construction began on January 5, 1933. The project cost more than $35 million and was completed ahead of schedule and $1.3 million under budget.
The Golden Gate Bridge construction project was carried out by the McClintic-Marshall Construction Co., a subsidiary of Bethlehem Steel Corporation founded by Howard H. McClintic and Charles D. Marshall, both of Lehigh University.
Strauss remained head of the project, overseeing day-to-day construction and making some groundbreaking contributions. A graduate of the University of Cincinnati, he placed a brick from his alma mater's demolished McMicken Hall in the south anchorage before the concrete was poured.
Strauss also innovated the use of movable safety netting beneath the men working, which saved many lives. Nineteen men saved by the nets over the course of the project formed the Half Way to Hell Club. Nonetheless, eleven men were killed in falls, ten on February 17, 1937, when a scaffold (secured by undersized bolts) with twelve men on it fell into and broke through the safety net; two of the twelve survived the fall into the water.
The Round House Café, an Art Deco diner designed by Alfred Finnila and completed in 1938, stands at the southeastern end of the Golden Gate Bridge, adjacent to the tourist plaza. Over the years it has been popular as a starting point for various commercial tours of the bridge and served as an unofficial gift shop. When the adjacent plaza was renovated in 2012, the diner was also renovated and its gift shop was removed, as a new, official gift shop was included in the plaza.
During construction, Alfred Finnila, an assistant civil engineer for the State of California, oversaw all of the bridge's ironwork as well as half of its roadwork.
Contributors
A plaque honoring the major contributors to the Golden Gate Bridge lists the contractors, engineering staff, directors, and officers:
Contractors
Foundations - Pacific Bridge Company
Anchorages - Barrett & Hilp
Structural steel - Main span - Bethlehem Steel Company Incorporated
Approach steel - J.H. Pomeroy & Company Incorporated - Raymond Concrete Pile Company
Cables - John A. Roebling's Sons Company
Electrical work - Alta Electric and Mechanical Company Incorporated
Bridge deck - Pacific Bridge Company
Presidio Approach Roads and Viaducts - Easton & Smith
Toll Plaza - Barrett & Hilp
Engineering staff
Chief engineer - Joseph B. Strauss
Principal assistant engineer - Clifford E. Paine
Resident engineer - Russell Cone
Assistant engineer - Charles Clarahan Jr., Dwight N. Wetherell
Consulting engineer - O.H. Ammann, Charles Derleth Jr., Leon S. Moisseiff
Consulting traffic engineer - Sydney W. Taylor Jr.
Consulting architect - Irving F. Morrow
Consulting geologist - Andrew C. Lawson, Allan E. Sedgwick
Directors
San Francisco - William P. Filmer, Richard J. Welch, Warren Shannon, Hugo D. Newhouse, Arthur M. Brown Jr., John P. McLaughlin, William D. Hadeler, C.A. Henry, Francis V. Keesling, William P. Stanton, George T. Cameron
Marin County - Robert H. Trumbull, Harry Lutgens
Napa County - Thomas Maxwell
Sonoma County - Frank P. Doyle, Joseph A. McMinn
Mendocino County - A. R. O'Brien
Del Norte County - Henry Westbrook Jr., Milton M. McVay
Officers
President - William P. Filmer
Vice President - Robert H. Trumbull
General manager - James Reed, Alan McDonald
Chief engineer - Joseph B. Strauss
Secretary - W. W. Felt Jr.
Auditor - Roy S. West, John R. Ruckstell
Attorney - George H. Harlan
Torsional bracing retrofit
On December 1, 1951, a windstorm revealed swaying and rolling instabilities of the bridge, resulting in its closure. In 1953 and 1954, the bridge was retrofitted with lateral and diagonal bracing that connected the lower chords of the two side trusses. This bracing stiffened the bridge deck in torsion so that it would better resist the types of twisting that had destroyed the Tacoma Narrows Bridge in 1940.
Bridge deck replacement (1982–1986)
The original bridge used a concrete deck. Salt carried by fog or mist reached the rebar, causing corrosion and concrete spalling. From 1982 to 1986, the original bridge deck was systematically replaced, in 747 sections over 401 nights, with steel orthotropic deck panels that are about 40% lighter and stronger, without ever closing the roadway completely to traffic. The roadway was also widened by two feet, giving the outside curb lanes a width of 11 feet versus 10 feet for the inside lanes. This deck replacement was the bridge's greatest engineering project since it was built and cost over $68 million.
Opening festivities, and 50th and 75th anniversaries
The bridge-opening celebration in 1937 began on May 27 and lasted for one week. The day before vehicle traffic was allowed, 200,000 people crossed either on foot or on roller skates. On opening day, Mayor Angelo Rossi and other officials rode the ferry to Marin, then crossed the bridge in a motorcade past three ceremonial "barriers," the last a blockade of beauty queens who required Joseph Strauss to present the bridge to the Highway District before allowing him to pass. An official song, "There's a Silver Moon on the Golden Gate," was chosen to commemorate the event. Strauss wrote a poem, "The Mighty Task is Done," that is now displayed on the Golden Gate Bridge. The next day, President Franklin D. Roosevelt pushed a button in Washington, D.C., signaling the official start of vehicle traffic over the bridge at noon. Weeks of civil and cultural activities called "the Fiesta" followed. A statue of Strauss was moved in 1955 to a site near the bridge.
As part of the fiftieth anniversary celebration in 1987, the Golden Gate Bridge district again closed the bridge to automobile traffic and allowed pedestrians to cross it on May 24. This Sunday morning celebration attracted 750,000 to 1,000,000 people, and ineffective crowd control meant the bridge became congested with roughly 300,000 people, causing the center span of the bridge to flatten out under the weight. Although the bridge is designed to flex in that way under heavy loads, and was estimated not to have exceeded 40% of the yielding stress of the suspension cables, bridge officials stated that uncontrolled pedestrian access was not being considered as part of the 75th anniversary on Sunday, May 27, 2012, because of the additional law enforcement costs required "since 9/11."
Structural specifications
Until 1964, the Golden Gate Bridge had the longest suspension bridge main span in the world. Since then, its main span length has been surpassed by eighteen bridges; it now has the second-longest main span in the Americas, after the Verrazzano-Narrows Bridge in New York City.
The Golden Gate Bridge provides generous clearance above high water for shipping, and its towers were the tallest on a suspension bridge in the world until 1993, when they were surpassed by those of the Mezcala Bridge in Mexico.
The weight of the roadway is hung from 250 pairs of vertical suspender ropes, which are attached to two main cables. The main cables pass over the two main towers and are fixed in concrete at each end. Each cable is made of 27,572 strands of galvanized steel wire. Each of the bridge's two towers has approximately 600,000 rivets.
In the 1960s, when the Bay Area Rapid Transit system (BART) was being planned, the engineering community had conflicting opinions about the feasibility of running train tracks north to Marin County over the bridge. In June 1961, consultants hired by BART completed a study that determined the bridge's suspension section was capable of supporting service on a new lower deck. In July 1961, one of the bridge's consulting engineers, Clifford Paine, disagreed with their conclusion. In January 1962, due to more conflicting reports on feasibility, the bridge's board of directors appointed an engineering review board to analyze all the reports. The review board's report, released in April 1962, concluded that running BART on the bridge was not advisable.
Aesthetics
Aesthetics was the foremost reason Joseph Strauss's first design was rejected. When he re-submitted his bridge construction plan, he added details, such as lighting, to outline the bridge's cables and towers. In 1999, it was ranked fifth on the List of America's Favorite Architecture by the American Institute of Architects.
The color of the bridge is officially an orange vermilion called international orange. The color was selected by consulting architect Irving Morrow because it complements the natural surroundings and enhances the bridge's visibility in fog.
The bridge was originally painted with red lead primer and a lead-based topcoat, which was touched up as required. In the mid-1960s, a program was started to improve corrosion protection by stripping the original paint and repainting the bridge with zinc silicate primer and vinyl topcoats. Since 1990, acrylic topcoats have been used instead for air-quality reasons. The program was completed in 1995 and it is now maintained by 38 painters who touch up the paintwork where it becomes seriously corroded.
Painting the bridge is an ongoing maintenance task.
Traffic
Most maps and signage mark the bridge as part of the concurrency between U.S. Route 101 and California State Route 1. Although part of the National Highway System, the bridge is not officially part of California's Highway System. For example, under the California Streets and Highways Code § 401, Route 101 ends at "the approach to the Golden Gate Bridge" and then resumes at "a point in Marin County opposite San Francisco". The Golden Gate Bridge, Highway and Transportation District has jurisdiction over the segment of highway that crosses the bridge instead of the California Department of Transportation (Caltrans).
The movable median barrier between the lanes is moved several times daily to conform to traffic patterns. On weekday mornings, traffic flows mostly southbound into the city, so four of the six lanes run southbound. Conversely, on weekday afternoons, four lanes run northbound. During off-peak periods and weekends, traffic is split with three lanes in each direction.
From 1968 to 2015, opposing traffic was separated by small, plastic pylons; during that time, there were 16 fatalities resulting from 128 head-on collisions. To improve safety, the speed limit on the Golden Gate Bridge was reduced on October 1, 1983. Although there had been discussion concerning the installation of a movable barrier since the 1980s, only in March 2005 did the Bridge Board of Directors commit to finding funding to complete the $2 million study required prior to the installation of a movable median barrier. Installation of the resulting barrier was completed on January 11, 2015, following a closure of 45.5 hours to private vehicle traffic, the longest in the bridge's history. The new barrier system, including the zipper trucks, cost approximately $30.3 million to purchase and install.
The bridge carries about 112,000 vehicles per day according to the Golden Gate Bridge Highway and Transportation District.
Usage and tourism
The bridge is popular with pedestrians and bicyclists, and was built with walkways on either side of the six vehicle traffic lanes. Initially, they were separated from the traffic lanes by only a metal curb, but railings between the walkways and the traffic lanes were added in 2003, primarily as a measure to prevent bicyclists from falling into the roadway. The bridge was designated as part of U.S. Bicycle Route 95 in 2021.
The main walkway is on the eastern side, and is open for use by both pedestrians and bicycles in the morning to mid-afternoon during weekdays (5:00 a.m. to 3:30 p.m.), and to pedestrians only for the remaining daylight hours (until 6:00 p.m., or 9:00 p.m. during DST). The eastern walkway is reserved for pedestrians on weekends (5:00 a.m. to 6:00 p.m., or 9:00 p.m. during DST), and is open exclusively to bicyclists in the evening and overnight, when it is closed to pedestrians. The western walkway is open only for bicyclists and only during the hours when they are not allowed on the eastern walkway.
Bus service across the bridge is provided by one public transportation agency, Golden Gate Transit, which runs numerous bus lines throughout the week. The southern end of the bridge, near the toll plaza and parking lot, is also accessible daily from 5:30 a.m. to midnight by San Francisco Muni line 28. Muni formerly offered Saturday and Sunday service across the bridge on the Marin Headlands Express bus line, but this was indefinitely suspended due to the COVID-19 pandemic. The Marin Airporter, a private company, also offers service across the bridge between Marin County and San Francisco International Airport.
A visitor center and gift shop, originally called the "Bridge Pavilion" (since renamed the "Golden Gate Bridge Welcome Center"), is located on the San Francisco side of the bridge, adjacent to the southeast parking lot. It opened in 2012, in time for the bridge's 75th-anniversary celebration. A cafe, outdoor exhibits, and restroom facilities are located nearby. On the Marin side of the bridge, only accessible from the northbound lanes, is the H. Dana Bower Rest Area and Vista Point, named after the first landscape architect for the California Division of Highways.
Lands and waters under and around the bridge are home to a variety of wildlife such as bobcats, harbor seals, and sea lions. Three species of cetaceans (whales) that had been absent from the area for many years have shown recent recoveries or recolonizations in the vicinity of the bridge; researchers studying them have encouraged stronger protections and recommended that the public watch them from the bridge or from land, or use a local whale watching operator.
Tolls
Current toll rates
Tolls are collected only from southbound traffic at the toll plaza on the San Francisco side of the bridge, after vehicles cross from Marin County. All-electronic tolling has been in effect since 2013; drivers may pay either with a FasTrak electronic toll collection device or through the license plate tolling program. It is not yet a true open-road tolling system, however, because the remaining unused toll booths have not been removed and drivers must still slow substantially from freeway speeds while passing through. The toll rate for passenger cars with license plate accounts is $9.50, while FasTrak users pay a discounted toll of $9.25. During peak traffic hours on weekdays, between 5:00 am and 9:00 am and between 4:00 pm and 6:00 pm, carpool vehicles carrying three or more people and motorcycles may pay a discounted toll of $7.25 if they have FasTrak and use the designated carpool lane. Drivers without FasTrak or a license plate account must open a "short-term" account within 48 hours of crossing the bridge or they will be sent a toll invoice of $10.25 (the FasTrak toll plus an additional $1 fee). No additional toll violation penalty is assessed if the invoice is paid within 21 days.
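As a rough illustration of the fee schedule described above, the sketch below encodes the quoted rates in a small lookup function. The function name and structure are invented for illustration only; this is not the District's actual tolling software, and the rates change over time.

```python
def golden_gate_toll(payment: str, carpool_peak: bool = False) -> float:
    """Return the southbound toll in dollars for a two-axle passenger vehicle.

    Rates are the ones quoted above; 'invoice' covers drivers with no FasTrak
    tag or license-plate account (the FasTrak rate plus a $1 fee).
    """
    if carpool_peak and payment == "fastrak":
        return 7.25          # carpool discount, peak hours, designated lane
    rates = {
        "fastrak": 9.25,     # FasTrak electronic toll tag
        "plate": 9.50,       # license plate tolling account
        "invoice": 10.25,    # toll invoice mailed to the registered owner
    }
    return rates[payment]

# Example: a FasTrak carpool crossing at 8 a.m. pays $7.25
print(golden_gate_toll("fastrak", carpool_peak=True))
```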
Historical toll rates
When the Golden Gate Bridge opened in 1937, the toll was 50 cents per car, collected in each direction. In 1950 it was reduced to 40 cents each way, then lowered to 25 cents in 1955. In 1968, the bridge was converted to collect tolls only from southbound traffic, and the toll was reset to 50 cents.
From May 1937 until December 1970, pedestrians were charged a toll of 10 cents for bridge access via turnstiles on the sidewalks.
The last of the construction bonds were retired in 1971, with $35 million in principal and nearly $39 million in interest raised entirely from bridge tolls. Tolls continued to be collected and were subsequently raised incrementally; in 1991, the toll was raised by a dollar to $3.00.
The bridge began accepting tolls via the FasTrak electronic toll collection system in 2002, with $4 tolls for FasTrak users and $5 for those paying cash. In November 2006, the Golden Gate Bridge, Highway and Transportation District recommended a corporate sponsorship program for the bridge to address its operating deficit, projected at $80 million over five years. The District promised that the proposal, which it called a "partnership program", would not include changing the name of the bridge or placing advertising on the bridge itself. In October 2007, the Board unanimously voted to discontinue the proposal and seek additional revenue through other means, most likely a toll increase. The District later increased the tolls in 2008 to $5 for FasTrak users and $6 for those paying cash.
In an effort to save $19.2 million over the following 10 years, the Golden Gate District voted in January 2011 to eliminate all toll takers by 2012 and use only open road tolling. This was subsequently delayed, and the toll takers were eliminated in March 2013. The projected cost savings were later revised to $19 million over an eight-year period. In addition to FasTrak, the Golden Gate Transportation District implemented license plate tolling (branded as "Pay-by-Plate") and a one-time payment system allowing drivers to pay before or after their trip on the bridge. Twenty-eight positions were eliminated as part of this plan.
On April 7, 2014, the toll for FasTrak users was increased from $5 to $6, while the toll for drivers using either license plate tolling or the one-time payment system was raised from $6 to $7. Bicycle, pedestrian, and northbound motor vehicle traffic remain toll free. For vehicles with more than two axles, the toll rate was $7 per axle for license plate tolling or one-time payments, and $6 per axle for FasTrak users. During peak traffic hours, carpool vehicles carrying two or more people and motorcycles paid a discounted toll of $4; drivers had to have FasTrak to receive the carpool rate. The Golden Gate Transportation District then increased tolls by 25 cents in July 2015, and by another 25 cents in each of the next three years.
In March 2019, the Golden Gate Transportation District approved a plan to implement 35-cent annual toll increases through 2023, except for the toll-by-plate program, which would increase by 20 cents per year. The district then approved another plan in March 2024 to implement 50-cent annual toll increases through 2028.
Congestion pricing
In March 2008, the Golden Gate Bridge District board approved a resolution to start congestion pricing at the Golden Gate Bridge, with tolls rising and falling with traffic levels and therefore higher during peak hours. This decision allowed the Bay Area to meet the federal requirement to receive $158 million in federal transportation funds from a USDOT Urban Partnership grant. As a condition of the grant, the congestion toll was to be in place by September 2009.
In August 2008, transportation officials ended the congestion pricing program in favor of varying rates for metered parking along the route to the bridge including on Lombard Street and Van Ness Avenue.
Issues
Protests and stunts
In August 1977, three California Polytechnic State University students climbed the cables of the Golden Gate Bridge.
In May 1981, Dave Aguilar climbed the South Tower of the Golden Gate Bridge to protest offshore oil drilling.
On November 24, 1996, environmentalists, including Woody Harrelson, were arrested after scaling the Golden Gate Bridge.
In 1997, Quentin Kopp authored a bill, signed into law by Governor Pete Wilson, that increased the maximum fine for trespassing on the bridge from $1,000 to $10,000 and doubled the maximum jail time from six months to a year.
In July 2001, approximately 100 protesters gathered to demand an end to the U.S. Navy's bombing activities on the Puerto Rican island of Vieques.
During the 2008 Tibetan unrest, three pro-Tibet activists scaled the bridge's vertical cables in April 2008 to protest the arrival of the Olympic torch in the city. The activists hung banners to denounce China's crackdown on Tibet. The incident resulted in the closure of a northbound lane of the bridge and was part of a wave of protests across multiple cities against China's policies in Tibet.
On January 20, 2017, thousands of people held hands as a human chain on the sidewalk across the Golden Gate Bridge as Donald Trump took the oath of office.
On June 6, 2020, protesters shut down traffic on the Golden Gate Bridge in a demonstration against police brutality following the murder of George Floyd. The protest, originally confined to the pedestrian path, spilled into traffic lanes as activists knelt for eight minutes and 46 seconds, symbolizing the time a police officer knelt on Floyd's neck. Law enforcement was unable to redirect protesters, causing a complete closure of the bridge to traffic during the demonstration. This event was part of nationwide protests, with San Francisco lifting its curfew to allow continued gatherings in support of the movement.
Approximately 5,000 Armenian-Americans marched across the Golden Gate Bridge in October 2020 to raise awareness about an illegal blockade during the Nagorno-Karabakh conflict and to urge the US government to halt arms shipments to Turkey and Azerbaijan. Organized by the Armenian Youth Federation (AYF) San Francisco "Rosdom" Chapter, the demonstration aimed to inform Bay Area citizens about the violence against Armenians.
In June 2021, activists from the Sunrise Movement marched over 250 miles to advocate for climate action, culminating in a demonstration on the Golden Gate Bridge. Activists called for urgent measures to combat climate change, including the passage of President Joe Biden's American Jobs Plan, which includes funding for green energy jobs.
On September 30, 2021, protesters blocked traffic, urging Senate Democrats to address immigration reform and advocate for citizenship for undocumented immigrants and Haitian refugees. Five organizers, including an undocumented individual, were arrested during the demonstration.
In November 2021, a protest against government-mandated COVID-19 vaccinations led to a chain-reaction crash at the bridge. The collision involved two California Highway Patrol officers and three Golden Gate Bridge employees, who were hospitalized with non-life-threatening injuries.
Protests over the death of Mahsa Amini occurred on September 26, 2022. Over 1,000 protesters gathered at the Golden Gate Bridge Welcome Center to demonstrate against the Islamic Republic of Iran and its morality police following the death of Amini, who had fallen into a coma and died after being detained following an encounter with Tehran police. Attendees voiced demands for women's rights and freedom, displaying signs and carrying flags of Iran's former imperial state. The event drew attention globally, sparking solidarity protests in Iran, Greece, England, and France.
On February 14, 2024, a pro-Palestinian protest temporarily halted traffic on the Golden Gate Bridge. Around 20 protesters gathered on the bridge, displaying banners condemning the Israeli invasion of the Gaza Strip, and calling for an end to U.S. military support to Israel. The demonstration caused a standstill in both northbound and southbound traffic.
Pro-Palestinian protesters staged demonstrations across the bridge in April 2024 in response to the ongoing Israel-Hamas War. The protests aimed to raise awareness and show solidarity with Gaza during a period of conflict, with some protesters chaining themselves to vehicles to impede traffic flow. Major highways and bridges were temporarily blocked, resulting in arrests by law enforcement.
Suicides
The Golden Gate Bridge is the most-used suicide site in the world. The deck sits high above the water; after a fall of about four seconds, jumpers hit the water at high speed. Most die from impact trauma. About 5% survive the initial impact but generally drown or die of hypothermia in the cold water.
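The quoted four-second fall is roughly consistent with elementary free-fall kinematics. As an order-of-magnitude check that neglects air resistance (and therefore somewhat overstates both the height and the impact speed):

```latex
% Free fall from rest, neglecting drag, with g \approx 9.8\,\mathrm{m/s^2}:
h = \tfrac{1}{2} g t^{2} \approx \tfrac{1}{2}(9.8)(4)^{2} \approx 78\ \mathrm{m} \approx 257\ \mathrm{ft},
\qquad
v = g t \approx (9.8)(4) \approx 39\ \mathrm{m/s} \approx 88\ \mathrm{mph}.
```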
After years of debate and an estimated more than 1,500 deaths, suicide barriers, consisting of a stainless steel net extending from the bridge and supported by structural steel 20 feet under the walkway, began to be installed in April 2017. Construction was first estimated to take approximately four years at a cost of over $200 million. Installation of the nets was completed in January 2024. The metal nets are visible from the pedestrian walkways and are expected to be painful to land on.
Wind
The Golden Gate Bridge was designed to safely withstand high winds. Until 2008, the bridge had been closed because of weather conditions only three times: on December 1, 1951; on December 23, 1982; and on December 3, 1983, in each case because of exceptionally strong gusts. An anemometer placed midway between the two towers on the west side of the bridge has been used to measure wind speeds; another anemometer was placed on one of the towers.
As part of the retrofitting of the bridge and installation of the suicide barrier, starting in 2019 the railings on the west side of the pedestrian walkway were replaced with thinner, more flexible slats in order to improve the bridge's aerodynamic tolerance of high winds. Starting in June 2020, reports were received of a loud hum, heard across San Francisco and Marin County, produced by the new railing slats when a strong west wind was blowing. The sound had been predicted from wind tunnel tests but was not included in the environmental impact report; ways of ameliorating it are being considered. An independent engineering analysis of a 2020 sound recording concluded that the singing noise comprises a variety of Aeolian tones (the sound produced by air flowing past a sharp edge), arising in this case from the ambient wind blowing across the metal slats of the newly installed sidewalk railings. The tones observed were at frequencies of 354, 398, 439, and 481 Hz, corresponding to the musical notes F4, G4, A4, and B4; these notes form an F Lydian tetrachord.
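The correspondence between the measured frequencies and note names follows from twelve-tone equal temperament, in which A4 is 440 Hz and each semitone multiplies frequency by 2^(1/12). A minimal sketch of that conversion (the helper below is illustrative and not taken from the cited analysis):

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz: float) -> str:
    """Name the equal-tempered note closest to a frequency (A4 = 440 Hz)."""
    # MIDI note number: 69 is A4; each semitone is a factor of 2**(1/12).
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))
    octave = midi // 12 - 1            # MIDI 60 maps to C4
    return f"{NOTE_NAMES[midi % 12]}{octave}"

# The measured bridge tones and their nearest note names:
for f in (354, 398, 439, 481):
    print(f, nearest_note(f))   # prints F4, G4, A4, B4
```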
Seismic vulnerability and improvements
Modern knowledge of the effect of earthquakes on structures led to a program to retrofit the Golden Gate Bridge to better resist seismic events. The proximity of the bridge to the San Andreas Fault places it at risk for a significant earthquake. Once thought to have been able to withstand any magnitude of foreseeable earthquake, the bridge was actually vulnerable to complete structural failure (i.e., collapse) triggered by the failure of supports on the arch over Fort Point. A $392 million program was initiated to improve the structure's ability to withstand such an event with only minimal (repairable) damage. A custom-built electro-hydraulic synchronous lift system for construction of temporary support towers and a series of intricate lifts, transferring the loads from the existing bridge onto the temporary supports, were completed with engineers from Balfour Beatty and Enerpac, without disrupting day-to-day commuter traffic. Although the retrofit was initially planned to be completed in 2012, it was expected to take several more years.
The former elevated approach to the Golden Gate Bridge through the San Francisco Presidio, known as Doyle Drive, dated to 1933 and was named after Frank P. Doyle. Doyle, the president of the Exchange Bank in Santa Rosa and son of the bank's founder, was the man who, more than any other person, made it possible to build the Golden Gate Bridge. The highway carried about 91,000 vehicles each weekday between downtown San Francisco and the North Bay and points north. The road was deemed "vulnerable to earthquake damage", had a problematic 4-lane design, and lacked shoulders; a San Francisco County Transportation Authority study recommended that it be replaced. Construction on the $1 billion replacement, temporarily known as the Presidio Parkway, began in December 2009.
The elevated Doyle Drive was demolished on the weekend of April 27–30, 2012, and traffic used a part of the partially completed Presidio Parkway, until it was switched onto the finished Presidio Parkway on the weekend of July 9–12, 2015. An official at Caltrans said there is no plan to permanently rename the portion known as Doyle Drive.
| Technology | Transport infrastructure | null |
12207 | https://en.wikipedia.org/wiki/Geology | Geology | Geology is a branch of natural science concerned with the Earth and other astronomical objects, the rocks of which they are composed, and the processes by which they change over time. Modern geology significantly overlaps all other Earth sciences, including hydrology. It is integrated with Earth system science and planetary science.
Geology describes the structure of the Earth on and beneath its surface and the processes that have shaped that structure. Geologists study the mineralogical composition of rocks in order to get insight into their history of formation. Geology determines the relative ages of rocks found at a given location; geochemistry (a branch of geology) determines their absolute ages. By combining various petrological, crystallographic, and paleontological tools, geologists are able to chronicle the geological history of the Earth as a whole. One aspect is to demonstrate the age of the Earth. Geology provides evidence for plate tectonics, the evolutionary history of life, and the Earth's past climates.
Geologists broadly study the properties and processes of Earth and other terrestrial planets. Geologists use a wide variety of methods to understand the Earth's structure and evolution, including fieldwork, rock description, geophysical techniques, chemical analysis, physical experiments, and numerical modelling. In practical terms, geology is important for mineral and hydrocarbon exploration and exploitation, evaluating water resources, understanding natural hazards, remediating environmental problems, and providing insights into past climate change. Geology is a major academic discipline, and it is central to geological engineering and plays an important role in geotechnical engineering.
Geological material
The majority of geological data comes from research on solid Earth materials. Meteorites and other extraterrestrial natural materials are also studied by geological methods.
Minerals
Minerals are naturally occurring elements and compounds with a definite homogeneous chemical composition and an ordered atomic arrangement.
Each mineral has distinct physical properties, and there are many tests to determine each of them. Minerals are often identified through these tests; a toy matching example follows the list below. The specimens can be tested for:
Color: Minerals are grouped by their color. Color is mostly diagnostic, but impurities can change a mineral's color.
Streak: Performed by scratching the sample on a porcelain plate. The color of the streak can help identify the mineral.
Hardness: The resistance of a mineral to scratching or indentation.
Breakage pattern: A mineral can either show fracture or cleavage, the former being breakage of uneven surfaces, and the latter a breakage along closely spaced parallel planes.
Luster: Quality of light reflected from the surface of a mineral. Examples are metallic, pearly, waxy, dull.
Specific gravity: the ratio of a mineral's density to the density of water.
Effervescence: Involves dripping hydrochloric acid on the mineral to test for fizzing.
Magnetism: Involves using a magnet to test for magnetism.
Taste: Minerals can have a distinctive taste such as halite (which tastes like table salt).
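As a toy illustration of how a few of these tests can narrow an identification, the sketch below matches a specimen against a handful of simplified reference minerals. The property values are rough textbook figures and the function is purely illustrative; real identification relies on the full suite of tests and, where needed, optical or chemical analysis.

```python
# Simplified reference table: (streak color, Mohs hardness, reacts with HCl)
REFERENCE = {
    "calcite":  ("white",      3.0, True),
    "quartz":   ("white",      7.0, False),
    "hematite": ("red-brown",  5.5, False),
    "halite":   ("white",      2.5, False),
}

def candidate_minerals(streak: str, hardness: float, fizzes: bool) -> list[str]:
    """Return reference minerals consistent with three quick field tests."""
    matches = []
    for name, (ref_streak, ref_hardness, ref_fizz) in REFERENCE.items():
        if (streak == ref_streak
                and abs(hardness - ref_hardness) <= 0.5
                and fizzes == ref_fizz):
            matches.append(name)
    return matches

# A white-streaked specimen around hardness 3 that fizzes in dilute HCl:
print(candidate_minerals("white", 3.0, True))   # ['calcite']
```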
Rock
A rock is any naturally occurring solid mass or aggregate of minerals or mineraloids. Most research in geology is associated with the study of rocks, as they provide the primary record of the majority of the geological history of the Earth. There are three major types of rock: igneous, sedimentary, and metamorphic. The rock cycle illustrates the relationships among them.
When a rock solidifies or crystallizes from melt (magma or lava), it is an igneous rock. This rock can be weathered and eroded, then redeposited and lithified into a sedimentary rock. Sedimentary rocks are mainly divided into four categories: sandstone, shale, carbonate, and evaporite. This group of classifications focuses partly on the size of sedimentary particles (sandstone and shale), and partly on mineralogy and formation processes (carbonation and evaporation). Igneous and sedimentary rocks can then be turned into metamorphic rocks by heat and pressure that change their mineral content, resulting in a characteristic fabric. All three types may melt again, and when this happens, new magma is formed, from which an igneous rock may once again solidify.
Organic matter, such as coal, bitumen, oil, and natural gas, is linked mainly to organic-rich sedimentary rocks.
To study all three types of rock, geologists evaluate the minerals of which they are composed and their other physical properties, such as texture and fabric.
Unlithified material
Geologists also study unlithified materials (referred to as superficial deposits) that lie above the bedrock. This study is often known as Quaternary geology, after the Quaternary period of geologic history, which is the most recent period of geologic time.
Magma
Magma is the original unlithified source of all igneous rocks. The active flow of molten rock is closely studied in volcanology, and igneous petrology aims to determine the history of igneous rocks from their original molten source to their final crystallization.
Whole-Earth structure
Plate tectonics
In the 1960s, it was discovered that the Earth's lithosphere, which includes the crust and rigid uppermost portion of the upper mantle, is separated into tectonic plates that move across the plastically deforming, solid, upper mantle, which is called the asthenosphere. This theory is supported by several types of observations, including seafloor spreading and the global distribution of mountain terrain and seismicity.
There is an intimate coupling between the movement of the plates on the surface and the convection of the mantle (that is, the heat transfer caused by the slow movement of ductile mantle rock). Thus, oceanic parts of plates and the adjoining mantle convection currents always move in the same direction – because the oceanic lithosphere is actually the rigid upper thermal boundary layer of the convecting mantle. This coupling between rigid plates moving on the surface of the Earth and the convecting mantle is called plate tectonics.
The development of plate tectonics has provided a physical basis for many observations of the solid Earth. Long linear regions of geological features are explained as plate boundaries:
Mid-ocean ridges, high regions on the seafloor where hydrothermal vents and volcanoes exist, are seen as divergent boundaries, where two plates move apart.
Arcs of volcanoes and earthquakes are theorized as convergent boundaries, where one plate subducts, or moves, under another.
Transform boundaries, such as the San Andreas Fault system, are where plates slide horizontally past each other.
Plate tectonics has provided a mechanism for Alfred Wegener's theory of continental drift, in which the continents move across the surface of the Earth over geological time. It also provided a driving force for crustal deformation, and a new setting for the observations of structural geology. The power of the theory of plate tectonics lies in its ability to combine all of these observations into a single theory of how the lithosphere moves over the convecting mantle.
Earth structure
Advances in seismology, computer modeling, and mineralogy and crystallography at high temperatures and pressures give insights into the internal composition and structure of the Earth.
Seismologists can use the arrival times of seismic waves to image the interior of the Earth. Early advances in this field showed the existence of a liquid outer core (where shear waves were not able to propagate) and a dense solid inner core. These advances led to the development of a layered model of the Earth, with a lithosphere (including crust) on top, the mantle below (separated within itself by seismic discontinuities at 410 and 660 kilometers), and the outer core and inner core below that. More recently, seismologists have been able to create detailed images of wave speeds inside the earth in the same way a doctor images a body in a CT scan. These images have led to a much more detailed view of the interior of the Earth, and have replaced the simplified layered model with a much more dynamic model.
Mineralogists have been able to use the pressure and temperature data from the seismic and modeling studies alongside knowledge of the elemental composition of the Earth to reproduce these conditions in experimental settings and measure changes within the crystal structure. These studies explain the chemical changes associated with the major seismic discontinuities in the mantle and show the crystallographic structures expected in the inner core of the Earth.
Geological time
The geological time scale encompasses the history of the Earth. It is bracketed at the earliest by the date of the first Solar System material at 4.567 Ga (4.567 billion years ago) and the formation of the Earth at 4.54 Ga (4.54 billion years ago), which marks the beginning of the Hadean eon, a division of geological time. At the later end of the scale, it is marked by the present day (in the Holocene epoch).
Timescale of the Earth
Important milestones on Earth
4.567 Ga (gigaannum: billion years ago): Solar system formation
4.54 Ga: Accretion, or formation, of Earth
c. 4 Ga: End of Late Heavy Bombardment, the first life
c. 3.5 Ga: Start of photosynthesis
c. 2.3 Ga: Oxygenated atmosphere, first snowball Earth
730–635 Ma (megaannum: million years ago): second snowball Earth
541 ± 0.3 Ma: Cambrian explosion – vast multiplication of hard-bodied life; first abundant fossils; start of the Paleozoic
c. 380 Ma: First vertebrate land animals
250 Ma: Permian-Triassic extinction – 90% of all land animals die; end of Paleozoic and beginning of Mesozoic
66 Ma: Cretaceous–Paleogene extinction – Dinosaurs die; end of Mesozoic and beginning of Cenozoic
c. 7 Ma: First hominins appear
3.9 Ma: First Australopithecus, direct ancestor to modern Homo sapiens, appears
200 ka (kiloannum: thousand years ago): First modern Homo sapiens appear in East Africa
Dating methods
Relative dating
Methods for relative dating were developed when geology first emerged as a natural science. Geologists still use the following principles today as a means to provide information about geological history and the timing of geological events.
The principle of uniformitarianism states that the geological processes observed in operation that modify the Earth's crust at present have worked in much the same way over geological time. A fundamental principle of geology advanced by the 18th-century Scottish physician and geologist James Hutton is that "the present is the key to the past." In Hutton's words: "the past history of our globe must be explained by what can be seen to be happening now."
The principle of intrusive relationships concerns crosscutting intrusions. In geology, when an igneous intrusion cuts across a formation of sedimentary rock, it can be determined that the igneous intrusion is younger than the sedimentary rock. Different types of intrusions include stocks, laccoliths, batholiths, sills and dikes.
The principle of cross-cutting relationships pertains to the formation of faults and the age of the sequences through which they cut. Faults are younger than the rocks they cut; accordingly, if a fault is found that penetrates some formations but not those on top of it, then the formations that were cut are older than the fault, and the ones that are not cut must be younger than the fault. Finding the key bed in these situations may help determine whether the fault is a normal fault or a thrust fault.
The principle of inclusions and components states that, with sedimentary rocks, if inclusions (or clasts) are found in a formation, then the inclusions must be older than the formation that contains them. For example, in sedimentary rocks, it is common for gravel from an older formation to be ripped up and included in a newer layer. A similar situation with igneous rocks occurs when xenoliths are found. These foreign bodies are picked up as magma or lava flows, and are incorporated, later to cool in the matrix. As a result, xenoliths are older than the rock that contains them.
The principle of original horizontality states that the deposition of sediments occurs as essentially horizontal beds. Observation of modern marine and non-marine sediments in a wide variety of environments supports this generalization (although cross-bedding is inclined, the overall orientation of cross-bedded units is horizontal).
The principle of superposition states that a sedimentary rock layer in a tectonically undisturbed sequence is younger than the one beneath it and older than the one above it. Logically a younger layer cannot slip beneath a layer previously deposited. This principle allows sedimentary layers to be viewed as a form of the vertical timeline, a partial or complete record of the time elapsed from deposition of the lowest layer to deposition of the highest bed.
The principle of faunal succession is based on the appearance of fossils in sedimentary rocks. As organisms exist during the same period throughout the world, their presence or (sometimes) absence provides a relative age of the formations where they appear. Based on principles that William Smith laid out almost a hundred years before the publication of Charles Darwin's theory of evolution, the principles of succession developed independently of evolutionary thought. The principle becomes quite complex, however, given the uncertainties of fossilization, localization of fossil types due to lateral changes in habitat (facies change in sedimentary strata), and that not all fossils formed globally at the same time.
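Taken together, these principles amount to a set of "older than" constraints that can be combined into a single relative ordering. A minimal sketch of that logic, using a hypothetical stack of units cut by a dike (an illustration only, not a tool geologists actually use in this form):

```python
from graphlib import TopologicalSorter

# Each unit maps to the units known to be OLDER than it (its predecessors).
# Superposition: shale lies above sandstone, limestone above shale.
# Cross-cutting: a dike cuts all three layers, so the dike is youngest.
constraints = {
    "sandstone": set(),
    "shale":     {"sandstone"},
    "limestone": {"shale"},
    "dike":      {"sandstone", "shale", "limestone"},
}

# static_order() lists every unit after all of its predecessors,
# i.e. from oldest to youngest.
print(list(TopologicalSorter(constraints).static_order()))
# ['sandstone', 'shale', 'limestone', 'dike']
```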
Absolute dating
Geologists also use methods to determine the absolute age of rock samples and geological events. These dates are useful on their own and may also be used in conjunction with relative dating methods or to calibrate relative methods.
At the beginning of the 20th century, advancement in geological science was facilitated by the ability to obtain accurate absolute dates to geological events using radioactive isotopes and other methods. This changed the understanding of geological time. Previously, geologists could only use fossils and stratigraphic correlation to date sections of rock relative to one another. With isotopic dates, it became possible to assign absolute ages to rock units, and these absolute dates could be applied to fossil sequences in which there was datable material, converting the old relative ages into new absolute ages.
For many geological applications, isotope ratios of radioactive elements are measured in minerals that give the amount of time that has passed since a rock passed through its particular closure temperature, the point at which different radiometric isotopes stop diffusing into and out of the crystal lattice. These are used in geochronologic and thermochronologic studies. Common methods include uranium–lead dating, potassium–argon dating, argon–argon dating and uranium–thorium dating. These methods are used for a variety of applications. Dating of lava and volcanic ash layers found within a stratigraphic sequence can provide absolute age data for sedimentary rock units that do not contain radioactive isotopes and calibrate relative dating techniques. These methods can also be used to determine ages of pluton emplacement.
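The common thread in these methods is the radioactive decay law. Assuming a closed system with no initial daughter isotope, measuring the surviving parent N and the radiogenic daughter D gives the time elapsed since the mineral passed through its closure temperature:

```latex
% Decay law and age equation (closed system, no initial daughter):
N = P_{0} e^{-\lambda t}, \qquad D = P_{0} - N
\;\;\Rightarrow\;\; t = \frac{1}{\lambda}\ln\!\left(1 + \frac{D}{N}\right),
\qquad \lambda = \frac{\ln 2}{t_{1/2}}.

% Example: if D/N = 1 (half the parent has decayed), then
% t = (1/\lambda)\ln 2 = t_{1/2}, i.e. exactly one half-life has elapsed.
```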
Thermochemical techniques can be used to determine temperature profiles within the crust, the uplift of mountain ranges, and paleo-topography.
Fractionation of the lanthanide series elements is used to compute ages since rocks were removed from the mantle.
Other methods are used for more recent events. Optically stimulated luminescence and cosmogenic radionuclide dating are used to date surfaces and/or erosion rates. Dendrochronology can also be used for the dating of landscapes. Radiocarbon dating is used for geologically young materials containing organic carbon.
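For the radiocarbon case, the same decay law yields an age from the fraction of carbon-14 remaining relative to a modern standard. A simplified sketch (it ignores the calibration against tree-ring and other records that real radiocarbon laboratories apply):

```python
import math

HALF_LIFE_C14 = 5730.0  # years (modern half-life value)

def radiocarbon_age(fraction_modern: float) -> float:
    """Uncalibrated age in years from the fraction of 14C remaining."""
    decay_constant = math.log(2) / HALF_LIFE_C14
    return -math.log(fraction_modern) / decay_constant

# A sample retaining 25% of its original 14C is about two half-lives old:
print(round(radiocarbon_age(0.25)))   # ~11460 years
```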
Geological development of an area
The geology of an area changes through time as rock units are deposited and inserted, and deformational processes alter their shapes and locations.
Rock units are first emplaced either by deposition onto the surface or by intrusion into the overlying rock. Deposition can occur when sediments settle onto the surface of the Earth and later lithify into sedimentary rock, or when volcanic material such as volcanic ash or lava flows blankets the surface. Igneous intrusions such as batholiths, laccoliths, dikes, and sills push upwards into the overlying rock and crystallize as they intrude.
After the initial sequence of rocks has been deposited, the rock units can be deformed and/or metamorphosed. Deformation typically occurs as a result of horizontal shortening, horizontal extension, or side-to-side (strike-slip) motion. These structural regimes broadly relate to convergent boundaries, divergent boundaries, and transform boundaries, respectively, between tectonic plates.
When rock units are placed under horizontal compression, they shorten and become thicker. Because rock units, other than muds, do not significantly change in volume, this is accomplished in two primary ways: through faulting and folding. In the shallow crust, where brittle deformation can occur, thrust faults form, which causes the deeper rock to move on top of the shallower rock. Because deeper rock is often older, as noted by the principle of superposition, this can result in older rocks moving on top of younger ones. Movement along faults can result in folding, either because the faults are not planar or because rock layers are dragged along, forming drag folds as slip occurs along the fault. Deeper in the Earth, rocks behave plastically and fold instead of faulting. These folds can either be those where the material in the center of the fold buckles upwards, creating "antiforms", or where it buckles downwards, creating "synforms". If the tops of the rock units within the folds remain pointing upwards, they are called anticlines and synclines, respectively. If some of the units in the fold are facing downward, the structure is called an overturned anticline or syncline, and if all of the rock units are overturned or the correct up-direction is unknown, they are simply called by the most general terms, antiforms, and synforms.
Even higher pressures and temperatures during horizontal shortening can cause both folding and metamorphism of the rocks. This metamorphism causes changes in the mineral composition of the rocks and creates a foliation, or planar surface, that is related to mineral growth under stress. This can remove signs of the original textures of the rocks, such as bedding in sedimentary rocks, flow features of lavas, and crystal patterns in crystalline rocks.
Extension causes the rock units as a whole to become longer and thinner. This is primarily accomplished through normal faulting and through ductile stretching and thinning. Normal faults drop rock units that are higher below those that are lower. This typically results in younger units ending up below older units. Stretching of units can result in their thinning. In fact, at one location within the Maria Fold and Thrust Belt, the entire sedimentary sequence of the Grand Canyon appears over a length of less than a meter. Rocks at depths where they can be ductilely stretched are often also metamorphosed. These stretched rocks can also pinch into lenses, known as boudins, after the French word for "sausage" because of their visual similarity.
Where rock units slide past one another, strike-slip faults develop in shallow regions, and become shear zones at deeper depths where the rocks deform ductilely.
The addition of new rock units, both depositionally and intrusively, often occurs during deformation. Faulting and other deformational processes result in the creation of topographic gradients, causing material on the rock unit that is increasing in elevation to be eroded by hillslopes and channels. These sediments are deposited on the rock unit that is going down. Continual motion along the fault maintains the topographic gradient in spite of the movement of sediment and continues to create accommodation space for the material to deposit. Deformational events are often also associated with volcanism and igneous activity. Volcanic ashes and lavas accumulate on the surface, and igneous intrusions enter from below. Dikes, long, planar igneous intrusions, enter along cracks, and therefore often form in large numbers in areas that are being actively deformed. This can result in the emplacement of dike swarms, such as those that are observable across the Canadian shield, or rings of dikes around the lava tube of a volcano.
All of these processes do not necessarily occur in a single environment and do not necessarily occur in a single order. The Hawaiian Islands, for example, consist almost entirely of layered basaltic lava flows. The sedimentary sequences of the mid-continental United States and the Grand Canyon in the southwestern United States contain almost-undeformed stacks of sedimentary rocks that have remained in place since Cambrian time. Other areas are much more geologically complex. In the southwestern United States, sedimentary, volcanic, and intrusive rocks have been metamorphosed, faulted, foliated, and folded. Even older rocks, such as the Acasta gneiss of the Slave craton in northwestern Canada, the oldest known rock in the world, have been metamorphosed to the point where their origin is indiscernible without laboratory analysis. In addition, these processes can occur in stages. In many places, the Grand Canyon in the southwestern United States being a very visible example, the lower rock units were metamorphosed and deformed, and then deformation ended and the upper, undeformed units were deposited. Although any amount of rock emplacement and rock deformation can occur, and they can occur any number of times, these concepts provide a guide to understanding the geological history of an area.
Investigative methods
Geologists use a number of field, laboratory, and numerical modeling methods to decipher Earth history and to understand the processes that occur on and inside the Earth. In typical geological investigations, geologists use primary information related to petrology (the study of rocks), stratigraphy (the study of sedimentary layers), and structural geology (the study of positions of rock units and their deformation). In many cases, geologists also study modern soils, rivers, landscapes, and glaciers; investigate past and current life and biogeochemical pathways; and use geophysical methods to investigate the subsurface. Sub-specialities of geology may distinguish endogenous and exogenous geology.
Field methods
Geological field work varies depending on the task at hand. Typical fieldwork could consist of:
Geological mapping
Structural mapping: identifying the locations of major rock units and the faults and folds that led to their placement there.
Stratigraphic mapping: pinpointing the locations of sedimentary facies (lithofacies and biofacies) or the mapping of isopachs of equal thickness of sedimentary rock
Surficial mapping: recording the locations of soils and surficial deposits
Surveying of topographic features
compilation of topographic maps
Work to understand change across landscapes, including:
Patterns of erosion and deposition
River-channel change through migration and avulsion
Hillslope processes
Subsurface mapping through geophysical methods
These methods include:
Shallow seismic surveys
Ground-penetrating radar
Aeromagnetic surveys
Electrical resistivity tomography
They aid in:
Hydrocarbon exploration
Finding groundwater
Locating buried archaeological artifacts
High-resolution stratigraphy
Measuring and describing stratigraphic sections on the surface
Well drilling and logging
Biogeochemistry and geomicrobiology
Collecting samples to:
determine biochemical pathways
identify new species of organisms
identify new chemical compounds
and to use these discoveries to:
understand early life on Earth and how it functioned and metabolized
find important compounds for use in pharmaceuticals
Paleontology: excavation of fossil material
For research into past life and evolution
For museums and education
Collection of samples for geochronology and thermochronology
Glaciology: measurement of characteristics of glaciers and their motion
Petrology
In addition to identifying rocks in the field (lithology), petrologists identify rock samples in the laboratory. Two of the primary methods for identifying rocks in the laboratory are through optical microscopy and by using an electron microprobe. In an optical mineralogy analysis, petrologists analyze thin sections of rock samples using a petrographic microscope, where the minerals can be identified through their different properties in plane-polarized and cross-polarized light, including their birefringence, pleochroism, twinning, and interference properties with a conoscopic lens. In the electron microprobe, individual locations are analyzed for their exact chemical compositions and variation in composition within individual crystals. Stable and radioactive isotope studies provide insight into the geochemical evolution of rock units.
Petrologists can also use fluid inclusion data and perform high temperature and pressure physical experiments to understand the temperatures and pressures at which different mineral phases appear, and how they change through igneous and metamorphic processes. This research can be extrapolated to the field to understand metamorphic processes and the conditions of crystallization of igneous rocks. This work can also help to explain processes that occur within the Earth, such as subduction and magma chamber evolution.
Structural geology
Structural geologists use microscopic analysis of oriented thin sections of geological samples to observe the fabric within the rocks, which gives information about strain within the crystalline structure of the rocks. They also plot and combine measurements of geological structures to better understand the orientations of faults and folds to reconstruct the history of rock deformation in the area. In addition, they perform analog and numerical experiments of rock deformation in large and small settings.
The analysis of structures is often accomplished by plotting the orientations of various features onto stereonets. A stereonet is a stereographic projection of a sphere onto a plane, in which planes are projected as lines and lines are projected as points. These can be used to find the locations of fold axes, relationships between faults, and relationships between other geological structures.
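The projection itself is straightforward trigonometry. As a minimal sketch, the function below plots a linear feature, given by its trend and plunge, on a unit-radius lower-hemisphere equal-area (Schmidt) net; the function name is illustrative and the formula is the standard equal-area construction:

```python
import math

def schmidt_xy(trend_deg: float, plunge_deg: float) -> tuple[float, float]:
    """Plot a line (trend, plunge) on a unit-radius lower-hemisphere
    equal-area (Schmidt) net. North is +y, east is +x."""
    trend = math.radians(trend_deg)
    plunge = math.radians(plunge_deg)
    # Equal-area radial distance: 1 on the primitive circle (plunge 0),
    # 0 at the center of the net (plunge 90).
    r = math.sqrt(2.0) * math.sin((math.pi / 2 - plunge) / 2)
    return r * math.sin(trend), r * math.cos(trend)

# A fold axis plunging 30 degrees toward 110 degrees (ESE):
print(schmidt_xy(110, 30))   # roughly (0.66, -0.24)
```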
Among the most well-known experiments in structural geology are those involving orogenic wedges, which are zones in which mountains are built along convergent tectonic plate boundaries. In the analog versions of these experiments, horizontal layers of sand are pulled along a lower surface into a back stop, which results in realistic-looking patterns of faulting and the growth of a critically tapered (all angles remain the same) orogenic wedge. Numerical models work in the same way as these analog models, though they are often more sophisticated and can include patterns of erosion and uplift in the mountain belt. This helps to show the relationship between erosion and the shape of a mountain range. These studies can also give useful information about pathways for metamorphism through pressure, temperature, space, and time.
Stratigraphy
In the laboratory, stratigraphers analyze samples of stratigraphic sections that can be returned from the field, such as those from drill cores. Stratigraphers also analyze data from geophysical surveys that show the locations of stratigraphic units in the subsurface. Geophysical data and well logs can be combined to produce a better view of the subsurface, and stratigraphers often use computer programs to do this in three dimensions. Stratigraphers can then use these data to reconstruct ancient processes occurring on the surface of the Earth, interpret past environments, and locate areas for water, coal, and hydrocarbon extraction.
In the laboratory, biostratigraphers analyze rock samples from outcrop and drill cores for the fossils found in them. These fossils help scientists to date the core and to understand the depositional environment in which the rock units formed. Geochronologists precisely date rocks within the stratigraphic section to provide better absolute bounds on the timing and rates of deposition.
Magnetic stratigraphers look for signs of magnetic reversals in igneous rock units within the drill cores. Other scientists perform stable-isotope studies on the rocks to gain information about past climate.
Planetary geology
With the advent of space exploration in the twentieth century, geologists have begun to look at other planetary bodies in the same ways that have been developed to study the Earth. This new field of study is called planetary geology (sometimes known as astrogeology) and relies on known geological principles to study other bodies of the solar system. This is a major aspect of planetary science, and largely focuses on the terrestrial planets, icy moons, asteroids, comets, and meteorites. However, some planetary geophysicists study the giant planets and exoplanets.
Although the Greek-language-origin prefix geo refers to Earth, "geology" is often used in conjunction with the names of other planetary bodies when describing their composition and internal processes: examples are "the geology of Mars" and "Lunar geology". Specialized terms such as selenology (studies of the Moon), areology (of Mars), etc., are also in use.
Although planetary geologists are interested in studying all aspects of other planets, a significant focus is to search for evidence of past or present life on other worlds. This has led to many missions whose primary or ancillary purpose is to examine planetary bodies for evidence of life. One of these is the Phoenix lander, which analyzed Martian polar soil for water, chemical, and mineralogical constituents related to biological processes.
Applied geology
Economic geology
Economic geology is a branch of geology that deals with aspects of economic minerals that humankind uses to fulfill various needs. Economic minerals are those extracted profitably for various practical uses. Economic geologists help locate and manage the Earth's natural resources, such as petroleum and coal, as well as mineral resources, which include metals such as iron, copper, and uranium.
Mining geology
Mining geology consists of the extraction of mineral and ore resources from the Earth. Resources of economic interest include gemstones, metals such as gold and copper, and many minerals such as asbestos, magnesite, perlite, mica, phosphates, zeolites, clay, pumice, quartz, and silica, as well as elements such as sulfur, chlorine, and helium.
Petroleum geology
Petroleum geologists study the locations in the Earth's subsurface that can contain extractable hydrocarbons, especially petroleum and natural gas. Because many of these reservoirs are found in sedimentary basins, they study the formation of these basins, as well as their sedimentary and tectonic evolution and the present-day positions of the rock units.
Engineering geology
Engineering geology is the application of geological principles to engineering practice for the purpose of assuring that the geological factors affecting the location, design, construction, operation, and maintenance of engineering works are properly addressed. Engineering geology is distinct from geological engineering, particularly in North America.
In the field of civil engineering, geological principles and analyses are used in order to ascertain the mechanical principles of the material on which structures are built. This allows tunnels to be built without collapsing, bridges and skyscrapers to be built with sturdy foundations, and buildings to be built that will not settle in clay and mud.
Hydrology
Geology and geological principles can be applied to various environmental problems such as stream restoration, the restoration of brownfields, and the understanding of the interaction between natural habitat and the geological environment. Groundwater hydrology, or hydrogeology, is used to locate groundwater, which can often provide a ready supply of uncontaminated water and is especially important in arid regions, and to monitor the spread of contaminants in groundwater wells.
Paleoclimatology
Geologists also obtain data through stratigraphy, boreholes, core samples, and ice cores. Ice cores and sediment cores are used for paleoclimate reconstructions, which tell geologists about past and present temperature, precipitation, and sea level across the globe. These datasets are our primary source of information on global climate change outside of instrumental data.
Natural hazards
Geologists and geophysicists study natural hazards in order to enact safe building codes and warning systems that are used to prevent loss of property and life. Examples of important natural hazards that are pertinent to geology, as opposed to those that are mainly or only pertinent to meteorology, include earthquakes, volcanic eruptions, landslides, and tsunamis.
History
The study of the physical material of the Earth dates back at least to ancient Greece when Theophrastus (372–287 BCE) wrote the work Peri Lithon (On Stones). During the Roman period, Pliny the Elder wrote in detail of the many minerals and metals, then in practical use – even correctly noting the origin of amber. Additionally, in the 4th century BCE Aristotle made critical observations of the slow rate of geological change. He observed the composition of the land and formulated a theory where the Earth changes at a slow rate and that these changes cannot be observed during one person's lifetime. Aristotle developed one of the first evidence-based concepts connected to the geological realm regarding the rate at which the Earth physically changes.
Abu al-Rayhan al-Biruni (973–1048 CE) was one of the earliest Persian geologists, whose works included the earliest writings on the geology of India, hypothesizing that the Indian subcontinent was once a sea. Drawing from Greek and Indian scientific literature that had not been destroyed by the Muslim conquests, the Persian scholar Ibn Sina (Avicenna, 981–1037) proposed detailed explanations for the formation of mountains, the origin of earthquakes, and other topics central to modern geology, which provided an essential foundation for the later development of the science. In China, the polymath Shen Kuo (1031–1095) formulated a hypothesis for the process of land formation: based on his observation of fossil animal shells in a geological stratum in a mountain hundreds of miles from the ocean, he inferred that the land was formed by the erosion of the mountains and by deposition of silt.
Georgius Agricola (1494–1555) published his groundbreaking work De Natura Fossilium in 1546 and is seen as the founder of geology as a scientific discipline.
Nicolas Steno (1638–1686) is credited with the law of superposition, the principle of original horizontality, and the principle of lateral continuity: three defining principles of stratigraphy.
The word geology was first used by Ulisse Aldrovandi in 1603, then by Jean-André Deluc in 1778, and introduced as a fixed term by Horace-Bénédict de Saussure in 1779. The word is derived from the Greek γῆ, gê, meaning "earth", and λόγος, logos, meaning "speech". According to another source, however, the word "geology" comes from a Norwegian priest and scholar, Mikkel Pedersøn Escholt (1600–1669), who first used the term in his book Geologia Norvegica (1657).
William Smith (1769–1839) drew some of the first geological maps and began the process of ordering rock strata (layers) by examining the fossils contained in them.
In 1763, Mikhail Lomonosov published his treatise On the Strata of Earth. His work was the first narrative of modern geology, based on the unity of processes in time and explanation of the Earth's past from the present.
James Hutton (1726–1797) is often viewed as the first modern geologist. In 1785 he presented a paper entitled Theory of the Earth to the Royal Society of Edinburgh. In his paper, he explained his theory that the Earth must be much older than had previously been supposed to allow enough time for mountains to be eroded and for sediments to form new rocks at the bottom of the sea, which in turn were raised up to become dry land. Hutton published a two-volume version of his ideas in 1795.
Followers of Hutton were known as Plutonists because they believed that some rocks were formed by vulcanism, which is the deposition of lava from volcanoes, as opposed to the Neptunists, led by Abraham Werner, who believed that all rocks had settled out of a large ocean whose level gradually dropped over time.
The first geological map of the U.S. was produced in 1809 by William Maclure. In 1807, Maclure commenced the self-imposed task of making a geological survey of the United States. Almost every state in the Union was traversed and mapped by him, the Allegheny Mountains being crossed and recrossed some 50 times. The results of his unaided labours were submitted to the American Philosophical Society in a memoir entitled Observations on the Geology of the United States explanatory of a Geological Map, and published in the Society's Transactions, together with the nation's first geological map. This antedates William Smith's geological map of England by six years, although it was constructed using a different classification of rocks.
Sir Charles Lyell (1797–1875) first published his famous book, Principles of Geology, in 1830. This book, which influenced the thought of Charles Darwin, successfully promoted the doctrine of uniformitarianism. This theory states that slow geological processes have occurred throughout the Earth's history and are still occurring today. In contrast, catastrophism is the theory that Earth's features formed in single, catastrophic events and remained unchanged thereafter. Though Hutton believed in uniformitarianism, the idea was not widely accepted at the time.
Much of 19th-century geology revolved around the question of the Earth's exact age. Estimates varied from a few hundred thousand to billions of years. By the early 20th century, radiometric dating allowed the Earth's age to be estimated at two billion years. The awareness of this vast amount of time opened the door to new theories about the processes that shaped the planet.
Some of the most significant advances in 20th-century geology have been the development of the theory of plate tectonics in the 1960s and the refinement of estimates of the planet's age. Plate tectonics theory arose from two separate geological observations: seafloor spreading and continental drift. The theory revolutionized the Earth sciences. Today the Earth is known to be approximately 4.5 billion years old.
Fields or related disciplines
Earth system science
Economic geology
Mining geology
Petroleum geology
Engineering geology
Environmental geology
Environmental science
Geoarchaeology
Geochemistry
Biogeochemistry
Isotope geochemistry
Geochronology
Geodetics
Geography
Physical geography
Technical geography
Geological engineering
Geological modelling
Geometallurgy
Geomicrobiology
Geomorphology
Geomythology
Geophysics
Glaciology
Historical geology
Hydrogeology
Meteorology
Mineralogy
Oceanography
Marine geology
Paleoclimatology
Paleontology
Micropaleontology
Palynology
Petrology
Petrophysics
Planetary geology
Plate tectonics
Regional geology
Sedimentology
Seismology
Soil science
Pedology (soil study)
Speleology
Stratigraphy
Biostratigraphy
Chronostratigraphy
Lithostratigraphy
Structural geology
Systems geology
Tectonics
Volcanology
| Physical sciences | Science and medicine | null |
12240 | https://en.wikipedia.org/wiki/Gold | Gold | Gold is a chemical element with the chemical symbol Au (from Latin aurum) and atomic number 79. In its pure form, it is a bright, slightly orange-yellow, dense, soft, malleable, and ductile metal. Chemically, gold is a transition metal, a group 11 element, and one of the noble metals. It is one of the least reactive chemical elements, being the second-lowest in the reactivity series. It is solid under standard conditions.
Gold often occurs in free elemental (native state), as nuggets or grains, in rocks, veins, and alluvial deposits. It occurs in a solid solution series with the native element silver (as in electrum), naturally alloyed with other metals like copper and palladium, and mineral inclusions such as within pyrite. Less commonly, it occurs in minerals as gold compounds, often with tellurium (gold tellurides).
Gold is resistant to most acids, though it does dissolve in aqua regia (a mixture of nitric acid and hydrochloric acid), forming a soluble tetrachloroaurate anion. Gold is insoluble in nitric acid alone, which dissolves silver and base metals, a property long used to refine gold and confirm the presence of gold in metallic substances, giving rise to the term 'acid test'. Gold dissolves in alkaline solutions of cyanide, which are used in mining and electroplating. Gold also dissolves in mercury, forming amalgam alloys, and as the gold acts simply as a solute, this is not a chemical reaction.
A relatively rare element, gold is a precious metal that has been used for coinage, jewelry, and other works of art throughout recorded history. In the past, a gold standard was often implemented as a monetary policy. Gold coins ceased to be minted as a circulating currency in the 1930s, and the world gold standard was abandoned for a fiat currency system after the Nixon shock measures of 1971.
In 2023, the world's largest gold producer was China, followed by Russia and Australia. A total of around 201,296 tonnes of gold exist above ground, equal to a cube roughly 22 metres on each side. The world's consumption of new gold produced is about 50% in jewelry, 40% in investments, and 10% in industry. Gold's high malleability, ductility, resistance to corrosion and most other chemical reactions, and electrical conductivity have led to its continued use in corrosion-resistant electrical connectors in all types of computerized devices (its chief industrial use). Gold is also used in infrared shielding, the production of colored glass, gold leafing, and tooth restoration. Certain gold salts are still used as anti-inflammatory agents in medicine.
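As a rough arithmetic check, the cube comparison follows directly from the above-ground tonnage and gold's density of 19.3 g/cm³ (given later in this article); the short Python sketch below is illustrative only.

```python
# Estimate the side of a cube containing all above-ground gold.
# Figures: ~201,296 tonnes above ground (from the text), density 19.3 t/m^3.
mass_tonnes = 201_296
density_t_per_m3 = 19.3          # 19.3 g/cm^3 is equivalent to 19.3 t/m^3

volume_m3 = mass_tonnes / density_t_per_m3
side_m = volume_m3 ** (1 / 3)
print(f"volume ≈ {volume_m3:,.0f} m^3, cube side ≈ {side_m:.1f} m")
# -> roughly 10,400 m^3, a cube about 22 m on a side
```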
Characteristics
Gold is the most malleable of all metals. It can be drawn into a wire of single-atom width, and then stretched considerably before it breaks. Such nanowires distort via the formation, reorientation, and migration of dislocations and crystal twins without noticeable hardening. A single gram of gold can be beaten into a sheet of , and an avoirdupois ounce into . Gold leaf can be beaten thin enough to become semi-transparent. The transmitted light appears greenish-blue because gold strongly reflects yellow and red. Such semi-transparent sheets also strongly reflect infrared light, making them useful as infrared (radiant heat) shields in the visors of heat-resistant suits and in sun visors for spacesuits. Gold is a good conductor of heat and electricity.
Gold has a density of 19.3 g/cm3, almost identical to that of tungsten at 19.25 g/cm3; as such, tungsten has been used in the counterfeiting of gold bars, such as by plating a tungsten bar with gold. By comparison, the density of lead is 11.34 g/cm3, and that of the densest element, osmium, is 22.59 g/cm3.
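Because the two densities differ by only about 0.3%, weighing alone barely distinguishes a solid gold bar from a tungsten-filled one; a small illustrative sketch follows (the 400-troy-ounce bar size is an assumption, chosen because it is a common large-bar format).

```python
# Compare gold and tungsten masses for the same bar volume.
rho_au = 19.30   # g/cm^3
rho_w  = 19.25   # g/cm^3
troy_oz_g = 31.1034768

bar_mass_g = 400 * troy_oz_g            # an assumed large-bar size, ~12.44 kg
bar_volume_cm3 = bar_mass_g / rho_au    # volume of a genuine gold bar

fake_mass_g = bar_volume_cm3 * rho_w    # same volume filled with tungsten
print(f"genuine: {bar_mass_g:.0f} g, tungsten fill: {fake_mass_g:.0f} g, "
      f"shortfall: {bar_mass_g - fake_mass_g:.0f} g ({(1 - rho_w/rho_au)*100:.2f}%)")
```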
Color
Whereas most metals are gray or silvery white, gold is slightly reddish-yellow. This color is determined by the frequency of plasma oscillations among the metal's valence electrons, in the ultraviolet range for most metals but in the visible range for gold due to relativistic effects affecting the orbitals around gold atoms. Similar effects impart a golden hue to metallic caesium.
Common colored gold alloys include the distinctive eighteen-karat rose gold created by the addition of copper. Alloys containing palladium or nickel are also important in commercial jewelry as these produce white gold alloys. Fourteen-karat gold-copper alloy is nearly identical in color to certain bronze alloys, and both may be used to produce police and other badges. Fourteen- and eighteen-karat gold alloys with silver alone appear greenish-yellow and are referred to as green gold. Blue gold can be made by alloying with iron, and purple gold can be made by alloying with aluminium. Less commonly, addition of manganese, indium, and other elements can produce more unusual colors of gold for various applications.
Colloidal gold, used by electron-microscopists, is red if the particles are small; larger particles of colloidal gold are blue.
Isotopes
Gold has only one stable isotope, 197Au, which is also its only naturally occurring isotope, so gold is both a mononuclidic and monoisotopic element. Thirty-six radioisotopes have been synthesized, ranging in atomic mass from 169 to 205. The most stable of these is 195Au, with a half-life of 186.1 days. The least stable is , which decays by proton emission with a half-life of 30 μs. Most of gold's radioisotopes with atomic masses below 197 decay by some combination of proton emission, α decay, and β+ decay. The exceptions are , which decays by electron capture, and , which decays most often by electron capture (93%) with a minor β− decay path (7%). All of gold's radioisotopes with atomic masses above 197 decay by β− decay.
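For a sense of what a 186.1-day half-life means in practice, the standard exponential-decay relation N(t) = N0 · 2^(−t/T½) can be evaluated directly; the time points below are arbitrary examples, not values from the article.

```python
# Fraction of a radioisotope remaining after time t, given half-life T (same units).
def remaining_fraction(t_days, half_life_days):
    return 0.5 ** (t_days / half_life_days)

half_life = 186.1  # days, the most stable gold radioisotope per the text
for t in (30, 186.1, 365):
    print(f"after {t:>6} days: {remaining_fraction(t, half_life)*100:5.1f} % remains")
# after a year only about a quarter of the original nuclei are left
```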
At least 32 nuclear isomers have also been characterized, ranging in atomic mass from 170 to 200. Within that range, only , , , , and do not have isomers. Gold's most stable isomer is with a half-life of 2.27 days. Gold's least stable isomer is with a half-life of only 7 ns. has three decay paths: β+ decay, isomeric transition, and alpha decay. No other isomer or isotope of gold has three decay paths.
Synthesis
The possible production of gold from a more common element, such as lead, has long been a subject of human inquiry, and the ancient and medieval discipline of alchemy often focused on it; however, the transmutation of the chemical elements did not become possible until the understanding of nuclear physics in the 20th century. The first synthesis of gold was conducted by Japanese physicist Hantaro Nagaoka, who synthesized gold from mercury in 1924 by neutron bombardment. An American team, working without knowledge of Nagaoka's prior study, conducted the same experiment in 1941, achieving the same result and showing that the isotopes of gold produced by it were all radioactive. In 1980, Glenn Seaborg transmuted several thousand atoms of bismuth into gold at the Lawrence Berkeley Laboratory. Gold can be manufactured in a nuclear reactor, but doing so is highly impractical and would cost far more than the value of the gold that is produced.
Chemistry
Although gold is the most noble of the noble metals, it still forms many diverse compounds. The oxidation state of gold in its compounds ranges from −1 to +5, but Au(I) and Au(III) dominate its chemistry. Au(I), referred to as the aurous ion, is the most common oxidation state with soft ligands such as thioethers, thiolates, and organophosphines. Au(I) compounds are typically linear. A good example is the dicyanoaurate(I) complex, [Au(CN)2]−, which is the soluble form of gold encountered in mining. The binary gold halides, such as AuCl, form zigzag polymeric chains, again featuring linear coordination at Au. Most drugs based on gold are Au(I) derivatives.
Au(III) (referred to as auric) is a common oxidation state, and is illustrated by gold(III) chloride, AuCl3. The gold atom centers in Au(III) complexes, like other d8 compounds, are typically square planar, with chemical bonds that have both covalent and ionic character. Gold(I,III) chloride is also known, an example of a mixed-valence complex.
Gold does not react with oxygen at any temperature and, up to 100 °C, is resistant to attack from ozone.
Free halogens react with gold to form the corresponding gold halides. Gold is strongly attacked by fluorine at dull-red heat to form gold(III) fluoride (AuF3). Powdered gold reacts with chlorine at 180 °C to form gold(III) chloride (AuCl3). Gold reacts with bromine at 140 °C to form a combination of gold(III) bromide (AuBr3) and gold(I) bromide (AuBr), but reacts very slowly with iodine to form gold(I) iodide (AuI):
2 Au + 3 F2 → 2 AuF3 (on heating)
2 Au + 3 Cl2 → 2 AuCl3 (on heating)
2 Au + 2 Br2 → AuBr3 + AuBr (on heating)
2 Au + I2 → 2 AuI (on heating)
Gold does not react with sulfur directly, but gold(III) sulfide can be made by passing hydrogen sulfide through a dilute solution of gold(III) chloride or chlorauric acid.
Unlike sulfur, phosphorus reacts directly with gold at elevated temperatures to produce gold phosphide (Au2P3).
Gold readily dissolves in mercury at room temperature to form an amalgam, and forms alloys with many other metals at higher temperatures. These alloys can be produced to modify the hardness and other metallurgical properties, to control melting point or to create exotic colors.
Gold is unaffected by most acids. It does not react with hydrofluoric, hydrochloric, hydrobromic, hydriodic, sulfuric, or nitric acid. It does react with selenic acid, and is dissolved by aqua regia, a 1:3 mixture of nitric acid and hydrochloric acid. Nitric acid oxidizes the metal to +3 ions, but only in minute amounts, typically undetectable in the pure acid because of the chemical equilibrium of the reaction. However, the ions are removed from the equilibrium by hydrochloric acid, forming tetrachloroaurate (AuCl4−) ions, or chloroauric acid, thereby enabling further oxidation:
2 Au + 6 H2SeO4 → Au2(SeO4)3 + 3 H2SeO3 + 3 H2O (at 200 °C)
Au + 4 HCl + HNO3 → HAuCl4 + NO↑ + 2 H2O
Gold is similarly unaffected by most bases. It does not react with aqueous, solid, or molten sodium or potassium hydroxide. It does, however, react with sodium or potassium cyanide under alkaline conditions when oxygen is present to form soluble complexes.
Common oxidation states of gold include +1 (gold(I) or aurous compounds) and +3 (gold(III) or auric compounds). Gold ions in solution are readily reduced and precipitated as metal by adding any other metal as the reducing agent. The added metal is oxidized and dissolves, allowing the gold to be displaced from solution and be recovered as a solid precipitate.
Rare oxidation states
Less common oxidation states of gold include −1, +2, and +5.
The −1 oxidation state occurs in aurides, compounds containing the Au− anion. Caesium auride (CsAu), for example, crystallizes in the caesium chloride motif; rubidium, potassium, and tetramethylammonium aurides are also known. Gold has the highest electron affinity of any metal, at 222.8 kJ/mol, making Au− a stable species, analogous to the halides.
Gold also has a –1 oxidation state in covalent complexes with the group 4 transition metals, such as in titanium tetraauride and the analogous zirconium and hafnium compounds. These chemicals are expected to form gold-bridged dimers in a manner similar to titanium(IV) hydride.
Gold(II) compounds are usually diamagnetic with Au–Au bonds. The evaporation of a solution of in concentrated produces red crystals of gold(II) sulfate, . Originally thought to be a mixed-valence compound, it has been shown to contain cations, analogous to the better-known mercury(I) ion, . A gold(II) complex, the tetraxenonogold(II) cation, which contains xenon as a ligand, occurs in . In September 2023, a novel type of metal-halide perovskite material consisting of Au3+ and Au2+ cations in its crystal structure was found; it has been shown to be unexpectedly stable under normal conditions.
Gold pentafluoride, along with its derivative anion, AuF6−, and its difluorine complex, gold heptafluoride, is the sole example of gold(V), the highest verified oxidation state.
Some gold compounds exhibit aurophilic bonding, which describes the tendency of gold ions to interact at distances that are too long to be a conventional Au–Au bond but shorter than van der Waals bonding. The interaction is estimated to be comparable in strength to that of a hydrogen bond.
Well-defined cluster compounds are numerous. In some cases, gold has a fractional oxidation state. A representative example is the octahedral species .
Origin
Gold production in the universe
Gold is thought to have been produced in supernova nucleosynthesis, and from the collision of neutron stars, and to have been present in the dust from which the Solar System formed.
Traditionally, gold in the universe is thought to have formed by the r-process (rapid neutron capture) in supernova nucleosynthesis, but more recently it has been suggested that gold and other elements heavier than iron may also be produced in quantity by the r-process in the collision of neutron stars. In both cases, satellite spectrometers at first only indirectly detected the resulting gold. However, in August 2017, the spectroscopic signatures of heavy elements, including gold, were observed by electromagnetic observatories in the GW170817 neutron star merger event, after gravitational wave detectors confirmed the event as a neutron star merger. Current astrophysical models suggest that this single neutron star merger event generated between 3 and 13 Earth masses of gold. This amount, along with estimations of the rate of occurrence of these neutron star merger events, suggests that such mergers may produce enough gold to account for most of the abundance of this element in the universe.
Asteroid origin theories
Because the Earth was molten when it was formed, almost all of the gold present in the early Earth probably sank into the planetary core. Therefore, as hypothesized in one model, most of the gold in the Earth's crust and mantle is thought to have been delivered to Earth by asteroid impacts during the Late Heavy Bombardment, about 4 billion years ago.
Gold which is reachable by humans has, in one case, been associated with a particular asteroid impact. The asteroid that formed the Vredefort impact structure 2.020 billion years ago is often credited with seeding the Witwatersrand basin in South Africa with the richest gold deposits on Earth. However, this scenario is now questioned. The gold-bearing Witwatersrand rocks were laid down between 700 and 950 million years before the Vredefort impact. These gold-bearing rocks had furthermore been covered by a thick layer of Ventersdorp lavas and the Transvaal Supergroup of rocks before the meteor struck, and thus the gold did not actually arrive in the asteroid or meteorite. What the Vredefort impact achieved, however, was to distort the Witwatersrand basin in such a way that the gold-bearing rocks were brought to the present erosion surface in Johannesburg, on the Witwatersrand, just inside the rim of the original crater caused by the meteor strike. The discovery of the deposit in 1886 launched the Witwatersrand Gold Rush. Some 22% of all the gold that is ascertained to exist today on Earth has been extracted from these Witwatersrand rocks.
Mantle return theories
Much of the rest of the gold on Earth is thought to have been incorporated into the planet since its very beginning, as planetesimals formed the mantle. In 2017, an international group of scientists established that gold "came to the Earth's surface from the deepest regions of our planet", the mantle, as evidenced by their findings at Deseado Massif in the Argentinian Patagonia.
Occurrence
On Earth, gold is found in ores in rock formed from the Precambrian time onward. It most often occurs as a native metal, typically in a metal solid solution with silver (i.e. as a gold/silver alloy). Such alloys usually have a silver content of 8–10%. Electrum is elemental gold with more than 20% silver, and is commonly known as white gold. Electrum's color runs from golden-silvery to silvery, dependent upon the silver content. The more silver, the lower the specific gravity.
Native gold occurs as very small to microscopic particles embedded in rock, often together with quartz or sulfide minerals such as pyrite ("fool's gold"). These are called lode deposits. The metal in a native state is also found in the form of free flakes, grains or larger nuggets that have been eroded from rocks and end up in alluvial deposits called placer deposits. Such free gold is always richer at the exposed surface of gold-bearing veins, owing to the oxidation of accompanying minerals followed by weathering; and by washing of the dust into streams and rivers, where it collects and can be welded by water action to form nuggets.
Gold sometimes occurs combined with tellurium as the minerals calaverite, krennerite, nagyagite, petzite and sylvanite (see telluride minerals), and as the rare bismuthide maldonite () and antimonide aurostibite (). Gold also occurs in rare alloys with copper, lead, and mercury: the minerals auricupride (), novodneprite () and weishanite ().
A 2004 research paper suggests that microbes can sometimes play an important role in forming gold deposits, transporting and precipitating gold to form grains and nuggets that collect in alluvial deposits.
A 2013 study has claimed that water in faults vaporizes during an earthquake, depositing gold. When an earthquake strikes, it moves along a fault. Water often lubricates faults, filling in fractures and jogs. Deep below the surface, under very high temperatures and pressures, the water carries high concentrations of carbon dioxide, silica, and gold. During an earthquake, the fault jog suddenly opens wider. The water inside the void instantly vaporizes, flashing to steam and forcing silica, which forms the mineral quartz, and gold out of the fluids and onto nearby surfaces.
Seawater
The world's oceans contain gold. Measured concentrations of gold in the Atlantic and Northeast Pacific are 50–150 femtomol/L or 10–30 parts per quadrillion (about 10–30 g/km3). In general, gold concentrations for south Atlantic and central Pacific samples are the same (~50 femtomol/L) but less certain. Mediterranean deep waters contain slightly higher concentrations of gold (100–150 femtomol/L), which is attributed to wind-blown dust or rivers. At 10 parts per quadrillion, the Earth's oceans would hold 15,000 tonnes of gold. These figures are three orders of magnitude less than reported in the literature prior to 1988, indicating contamination problems with the earlier data.
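The quoted 15,000-tonne figure can be roughly reproduced with a back-of-envelope calculation; the total ocean mass of about 1.4 × 10²¹ kg used below is an assumed value that is not given in the text.

```python
# Total gold dissolved in the oceans at ~10 parts per quadrillion by mass.
ocean_mass_kg = 1.4e21          # assumed total mass of the oceans
concentration = 10e-15          # 10 parts per quadrillion (mass fraction)

gold_kg = ocean_mass_kg * concentration
print(f"dissolved gold ≈ {gold_kg/1000:,.0f} tonnes")   # ≈ 14,000 tonnes, the same order as the quoted figure
```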
A number of people have claimed to be able to economically recover gold from sea water, but they were either mistaken or acted in an intentional deception. Prescott Jernegan ran a gold-from-seawater swindle in the United States in the 1890s, as did an English fraudster in the early 1900s. Fritz Haber did research on the extraction of gold from sea water in an effort to help pay Germany's reparations following World War I. Based on the published values of 2 to 64 ppb of gold in seawater, a commercially successful extraction seemed possible. After analysis of 4,000 water samples yielding an average of 0.004 ppb, it became clear that extraction would not be possible, and he ended the project.
History
The earliest recorded metal employed by humans appears to be gold, which can be found free or "native". Small amounts of natural gold have been found in Spanish caves used during the late Paleolithic period.
The oldest gold artifacts in the world are from Bulgaria, dating back to the 5th millennium BC (4,600 BC to 4,200 BC), such as those found in the Varna Necropolis near Lake Varna and the Black Sea coast, thought to be the earliest "well-dated" finds of gold artifacts in history.
Gold artifacts probably made their first appearance in Ancient Egypt at the very beginning of the pre-dynastic period, at the end of the fifth millennium BC and the start of the fourth, and smelting was developed during the course of the 4th millennium; gold artifacts appear in the archeology of Lower Mesopotamia during the early 4th millennium. As of 1990, gold artifacts found at the Wadi Qana cave cemetery of the 4th millennium BC in West Bank were the earliest from the Levant. Gold artifacts such as the golden hats and the Nebra disk appeared in Central Europe from the 2nd millennium BC Bronze Age.
The oldest known map of a gold mine was drawn in the 19th Dynasty of Ancient Egypt (1320–1200 BC), whereas the first written reference to gold was recorded in the 12th Dynasty around 1900 BC. Egyptian hieroglyphs from as early as 2600 BC describe gold, which King Tushratta of the Mitanni claimed was "more plentiful than dirt" in Egypt. Egypt and especially Nubia had the resources to make them major gold-producing areas for much of history. One of the earliest known maps, known as the Turin Papyrus Map, shows the plan of a gold mine in Nubia together with indications of the local geology. The primitive working methods are described by both Strabo and Diodorus Siculus, and included fire-setting. Large mines were also present across the Red Sea in what is now Saudi Arabia.
Gold is mentioned in the Amarna letters numbered 19 and 26 from around the 14th century BC.
Gold is mentioned frequently in the Old Testament, starting with Genesis 2:11 (at Havilah), the story of the golden calf, and many parts of the temple including the Menorah and the golden altar. In the New Testament, it is included with the gifts of the magi in the first chapters of Matthew. The Book of Revelation 21:21 describes the city of New Jerusalem as having streets "made of pure gold, clear as crystal". Exploitation of gold in the south-east corner of the Black Sea is said to date from the time of Midas, and this gold was important in the establishment of what is probably the world's earliest coinage in Lydia around 610 BC. The legend of the golden fleece, dating from the eighth century BCE, may refer to the use of fleeces to trap gold dust from placer deposits in the ancient world. From the 6th or 5th century BC, the Chu state circulated the Ying Yuan, a kind of square gold coin.
In Roman metallurgy, new methods for extracting gold on a large scale were developed by introducing hydraulic mining methods, especially in Hispania from 25 BC onwards and in Dacia from 106 AD onwards. One of their largest mines was at Las Medulas in León, where seven long aqueducts enabled them to sluice most of a large alluvial deposit. The mines at Roşia Montană in Transylvania were also very large, and until very recently, still mined by opencast methods. They also exploited smaller deposits in Britain, such as placer and hard-rock deposits at Dolaucothi. The various methods they used are well described by Pliny the Elder in his encyclopedia Naturalis Historia written towards the end of the first century AD.
During his hajj to Mecca in 1324, Mansa Musa (ruler of the Mali Empire from 1312 to 1337) passed through Cairo in July of that year, reportedly accompanied by a camel train that included thousands of people and nearly a hundred camels. He gave away so much gold there that it depressed the price in Egypt for over a decade, causing high inflation. A contemporary Arab historian remarked on the episode.
The European exploration of the Americas was fueled in no small part by reports of the gold ornaments displayed in great profusion by Native American peoples, especially in Mesoamerica, Peru, Ecuador and Colombia. The Aztecs regarded gold as the product of the gods, calling it literally "god excrement" (teocuitlatl in Nahuatl), and after Moctezuma II was killed, most of this gold was shipped to Spain. However, the indigenous peoples of North America considered gold useless, placing much greater value on minerals directly related to their utility, such as obsidian, flint, and slate.
The name El Dorado is applied to a legendary story in which precious stones were said to be found in fabulous abundance along with gold coins. The concept of El Dorado underwent several transformations, and eventually accounts of the earlier myth were combined with those of a legendary lost city. El Dorado was the term used by the Spanish Empire to describe a mythical tribal chief (zipa) of the Muisca native people in Colombia, who, as an initiation rite, covered himself with gold dust and submerged himself in Lake Guatavita. The legends surrounding El Dorado changed over time, as it went from being a man, to a city, to a kingdom, and finally to an empire.
Beginning in the early modern period, European exploration and colonization of West Africa was driven in large part by reports of gold deposits in the region, which was eventually referred to by Europeans as the "Gold Coast". From the late 15th to early 19th centuries, European trade in the region was primarily focused on gold, along with ivory and slaves. The gold trade in West Africa was dominated by the Ashanti Empire, who initially traded with the Portuguese before branching out and trading with British, French, Spanish and Danish merchants. British desires to secure control of West African gold deposits played a role in the Anglo-Ashanti wars of the late 19th century, which saw the Ashanti Empire annexed by Britain.
Gold has played a role in Western culture as a cause of desire and of corruption, as told in children's fables such as Rumpelstiltskin, in which straw is spun into gold for the peasant's daughter in return for her child when she becomes a princess, and in the stealing of the hen that lays golden eggs in Jack and the Beanstalk.
The top prize at the Olympic Games and many other sports competitions is the gold medal.
75% of the presently accounted for gold has been extracted since 1910, two-thirds since 1950.
One main goal of the alchemists was to produce gold from other substances, such as lead — presumably by the interaction with a mythical substance called the philosopher's stone. Trying to produce gold led the alchemists to systematically find out what can be done with substances, and this laid the foundation for today's chemistry, which can produce gold (albeit uneconomically) by using nuclear transmutation. Their symbol for gold was the circle with a point at its center (☉), which was also the astrological symbol and the ancient Chinese character for the Sun.
The Dome of the Rock is covered with an ultra-thin layer of gold. The Sikh Golden Temple, the Harmandir Sahib, is a building covered with gold. Similarly, the Wat Phra Kaew emerald Buddhist temple (wat) in Thailand has ornamental gold-leafed statues and roofs. Some European kings' and queens' crowns were made of gold, and gold has been used for bridal crowns since antiquity. An ancient Talmudic text from circa 100 AD describes Rachel, the wife of Rabbi Akiva, receiving a "Jerusalem of Gold" (diadem). A Greek burial crown made of gold was found in a grave dating to circa 370 BC.
Etymology
Gold is cognate with similar words in many Germanic languages, deriving via Proto-Germanic *gulþą from Proto-Indo-European *ǵʰelh₃- .
The symbol Au is from the Latin aurum. The Proto-Indo-European ancestor of aurum was *h₂é-h₂us-o-, derived from the same root (Proto-Indo-European *h₂u̯es-) as *h₂éu̯sōs, the ancestor of the Latin word aurora ("dawn"). This etymological relationship is presumably behind the frequent claim in scientific publications that aurum meant "shining dawn".
Culture
In popular culture gold is a high standard of excellence, often used in awards. Great achievements are frequently rewarded with gold, in the form of gold medals, gold trophies and other decorations. Winners of athletic events and other graded competitions are usually awarded a gold medal. Many awards such as the Nobel Prize are made from gold as well. Other award statues and prizes are depicted in gold or are gold plated (such as the Academy Awards, the Golden Globe Awards, the Emmy Awards, the Palme d'Or, and the British Academy Film Awards).
Aristotle in his ethics used gold symbolism when referring to what is now known as the golden mean. Similarly, gold is associated with perfect or divine principles, such as in the case of the golden ratio and the Golden Rule. Gold is further associated with the wisdom of aging and fruition. The fiftieth wedding anniversary is golden. A person's most valued or most successful latter years are sometimes considered "golden years" or "golden jubilee". The height of a civilization is referred to as a golden age.
Religion
The first known prehistoric human usages of gold were religious in nature.
In some forms of Christianity and Judaism, gold has been associated both with the sacred and evil. In the Book of Exodus, the Golden Calf is a symbol of idolatry, while in the Book of Genesis, Abraham was said to be rich in gold and silver, and Moses was instructed to cover the Mercy Seat of the Ark of the Covenant with pure gold. In Byzantine iconography the halos of Christ, Virgin Mary and the saints are often golden.
In Islam, gold (along with silk) is often cited as being forbidden for men to wear. Abu Bakr al-Jazaeri, quoting a hadith, said that "[t]he wearing of silk and gold are forbidden on the males of my nation, and they are lawful to their women". This, however, has not been enforced consistently throughout history, e.g. in the Ottoman Empire. Further, small gold accents on clothing, such as in embroidery, may be permitted.
In ancient Greek religion and mythology, Theia was seen as the goddess of gold, silver and other gemstones.
According to Christopher Columbus, those who had something of gold were in possession of something of great value on Earth and even a substance that could help souls reach paradise.
Wedding rings are typically made of gold. Gold is long-lasting and unaffected by the passage of time, which may aid the ring's symbolism of eternal vows before God and of the perfection the marriage signifies. In Orthodox Christian wedding ceremonies, the wedded couple is adorned with a golden crown (though some opt for wreaths instead) during the ceremony, an amalgamation of symbolic rites.
On 24 August 2020, Israeli archaeologists discovered a trove of early Islamic gold coins near the central city of Yavne. Analysis of the extremely rare collection of 425 gold coins indicated that they were from the late 9th century, around 1,100 years ago, during the Abbasid Caliphate.
Production
According to the United States Geological Survey in 2016, about of gold has been accounted for, of which 85% remains in active use.
Mining and prospecting
Since the 1880s, South Africa has been the source of a large proportion of the world's gold supply, and about 22% of the gold presently accounted for is from South Africa. Production in 1970 accounted for 79% of the world supply, about 1,480 tonnes. In 2007 China (with 276 tonnes) overtook South Africa as the world's largest gold producer, the first time since 1905 that South Africa had not been the largest.
In 2023, China was the world's leading gold-mining country, followed in order by Russia, Australia, Canada, the United States and Ghana.
In South America, the controversial project Pascua Lama aims at exploitation of rich fields in the high mountains of Atacama Desert, at the border between Chile and Argentina.
It has been estimated that up to one-quarter of the yearly global gold production originates from artisanal or small scale mining.
The city of Johannesburg located in South Africa was founded as a result of the Witwatersrand Gold Rush, which resulted in the discovery of some of the largest natural gold deposits in recorded history. The gold fields are confined to the northern and north-western edges of the Witwatersrand basin, which is a thick layer of Archean rocks located, in most places, deep under the Free State, Gauteng and surrounding provinces. These Witwatersrand rocks are exposed at the surface on the Witwatersrand, in and around Johannesburg, but also in isolated patches to the south-east and south-west of Johannesburg, as well as in an arc around the Vredefort Dome which lies close to the center of the Witwatersrand basin. From these surface exposures the basin dips extensively, requiring some of the mining to occur at depths of nearly 4,000 m, making them, especially the Savuka and TauTona mines to the south-west of Johannesburg, the deepest mines on Earth. The gold is found only in six areas where Archean rivers from the north and north-west formed extensive pebbly braided river deltas before draining into the "Witwatersrand sea" where the rest of the Witwatersrand sediments were deposited.
The Second Boer War of 1899–1901 between the British Empire and the Afrikaner Boers was at least partly over the rights of miners and possession of the gold wealth in South Africa.
During the 19th century, gold rushes occurred whenever large gold deposits were discovered. The first documented discovery of gold in the United States was at the Reed Gold Mine near Georgeville, North Carolina in 1803. The first major gold strike in the United States occurred in a small north Georgia town called Dahlonega. Further gold rushes occurred in California, Colorado, the Black Hills, Otago in New Zealand, a number of locations across Australia, Witwatersrand in South Africa, and the Klondike in Canada.
Grasberg mine located in Papua, Indonesia is the largest gold mine in the world.
Extraction and refining
Gold extraction is most economical in large, easily mined deposits. Ore grades as little as 0.5 parts per million (ppm) can be economical. Typical ore grades in open-pit mines are 1–5 ppm; ore grades in underground or hard rock mines are usually at least 3 ppm. Because ore grades of 30 ppm are usually needed before gold is visible to the naked eye, in most gold mines the gold is invisible.
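Since a grade of 1 ppm by mass corresponds to 1 gram of gold per tonne of ore, the grades quoted above translate directly into recoverable grams per tonne, as in this illustrative sketch; the example grades simply echo the figures in the paragraph.

```python
# Grams of gold contained per tonne of ore at a given grade (1 ppm = 1 g/t).
def gold_per_tonne(grade_ppm, tonnes=1.0):
    return grade_ppm * tonnes          # grams of gold

for grade in (0.5, 3, 30):             # marginal, underground-typical, visible-to-the-eye
    troy_oz = gold_per_tonne(grade) / 31.1034768
    print(f"{grade:>4} ppm -> {gold_per_tonne(grade):>4.1f} g/t ({troy_oz:.3f} troy oz/t)")
```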
The average gold mining and extraction costs were about $317 per troy ounce in 2007, but these can vary widely depending on mining type and ore quality; global mine production amounted to 2,471.1 tonnes.
After initial production, gold is often subsequently refined industrially by the Wohlwill process which is based on electrolysis or by the Miller process, that is chlorination in the melt. The Wohlwill process results in higher purity, but is more complex and is only applied in small-scale installations. Other methods of assaying and purifying smaller amounts of gold include parting and inquartation as well as cupellation, or refining methods based on the dissolution of gold in aqua regia.
Recycling
In 1997, recycled gold accounted for approximately 20% of the 2700 tons of gold supplied to the market. Jewelry companies such as Generation Collection and computer companies including Dell conduct recycling.
As of 2020, the amount of carbon dioxide produced in mining a kilogram of gold is 16 tonnes, while recycling a kilogram of gold produces 53 kilograms of CO2 equivalent. Approximately 30 percent of the global gold supply is recycled and not mined as of 2020.
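Taking the two figures above at face value, the implied ratio between mined and recycled emissions is easy to compute; this sketch simply restates the quoted numbers.

```python
# CO2-equivalent emissions per kilogram of gold, mined vs. recycled (2020 figures from the text).
mined_kg_co2 = 16_000     # 16 tonnes CO2e per kg of mined gold
recycled_kg_co2 = 53      # 53 kg CO2e per kg of recycled gold

print(f"mining emits roughly {mined_kg_co2 / recycled_kg_co2:.0f}x more CO2e than recycling")
# -> roughly 300x
```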
Consumption
The consumption of gold produced in the world is about 50% in jewelry, 40% in investments, and 10% in industry.
According to the World Gold Council, China was the world's largest single consumer of gold in 2013, overtaking India.
Pollution
Gold production contributes to hazardous pollution.
Low-grade gold ore may contain less than one ppm gold metal; such ore is ground and mixed with sodium cyanide to dissolve the gold. Cyanide is a highly poisonous chemical, which can kill living creatures exposed to even minute quantities. Many cyanide spills from gold mines have occurred in both developed and developing countries, killing aquatic life in long stretches of affected rivers. Environmentalists consider these events major environmental disasters. Up to thirty tons of used ore can be dumped as waste for producing one troy ounce of gold. Gold ore dumps are the source of many heavy elements such as cadmium, lead, zinc, copper, arsenic, selenium and mercury. When sulfide-bearing minerals in these ore dumps are exposed to air and water, the sulfide transforms into sulfuric acid, which in turn dissolves these heavy metals, facilitating their passage into surface water and ground water. This process is called acid mine drainage. These gold ore dumps contain long-term, highly hazardous waste.
It was once common to use mercury to recover gold from ore, but today the use of mercury is largely limited to small-scale individual miners. Minute quantities of mercury compounds can reach water bodies, causing heavy metal contamination. Mercury can then enter into the human food chain in the form of methylmercury. Mercury poisoning in humans can cause severe brain damage.
Gold extraction is also a highly energy-intensive industry: extracting ore from deep mines and grinding the large quantity of ore for further chemical extraction require nearly 25 kWh of electricity per gram of gold produced.
Monetary use
Gold has been widely used throughout the world as money, for efficient indirect exchange (versus barter), and to store wealth in hoards. For exchange purposes, mints produce standardized gold bullion coins, bars and other units of fixed weight and purity.
The first known coins containing gold were struck in Lydia, Asia Minor, around 600 BC. The gold talent coin in use during the periods of Grecian history both before and during the time of Homer weighed between 8.42 and 8.75 grams. From an earlier preference for using silver, European economies re-established the minting of gold as coinage during the thirteenth and fourteenth centuries.
Bills (that mature into gold coin) and gold certificates (convertible into gold coin at the issuing bank) added to the circulating stock of gold standard money in most 19th century industrial economies. In preparation for World War I the warring nations moved to fractional gold standards, inflating their currencies to finance the war effort. Post-war, the victorious countries, most notably Britain, gradually restored gold-convertibility, but international flows of gold via bills of exchange remained embargoed; international shipments were made exclusively for bilateral trades or to pay war reparations.
After World War II gold was replaced by a system of nominally convertible currencies related by fixed exchange rates following the Bretton Woods system. Gold standards and the direct convertibility of currencies to gold have been abandoned by world governments, led in 1971 by the United States' refusal to redeem its dollars in gold. Fiat currency now fills most monetary roles. Switzerland was the last country to tie its currency to gold; this was ended by a referendum in 1999.
Central banks continue to keep a portion of their liquid reserves as gold in some form, and metals exchanges such as the London Bullion Market Association still clear transactions denominated in gold, including future delivery contracts. Today, gold mining output is declining. With the sharp growth of economies in the 20th century and increasing foreign exchange, the world's gold reserves and their trading market have become a small fraction of all markets, and fixed exchange rates of currencies to gold have been replaced by floating prices for gold and gold futures contracts. Though the gold stock grows by only 1% or 2% per year, very little metal is irretrievably consumed. Inventory above ground would satisfy many decades of industrial and even artisan uses at current prices.
The gold proportion (fineness) of alloys is measured by karat (k). Pure gold (commercially termed fine gold) is designated as 24 karat, abbreviated 24k. English gold coins intended for circulation from 1526 into the 1930s were typically a standard 22k alloy called crown gold, for hardness (American gold coins for circulation after 1837 contain an alloy of 0.900 fine gold, or 21.6k).
Although the prices of some platinum group metals can be much higher, gold has long been considered the most desirable of precious metals, and its value has been used as the standard for many currencies. Gold has been used as a symbol for purity, value, royalty, and particularly roles that combine these properties. Gold as a sign of wealth and prestige was ridiculed by Thomas More in his treatise Utopia. On that imaginary island, gold is so abundant that it is used to make chains for slaves, tableware, and lavatory seats. When ambassadors from other countries arrive, dressed in ostentatious gold jewels and badges, the Utopians mistake them for menial servants, paying homage instead to the most modestly dressed of their party.
The ISO 4217 currency code of gold is XAU. Many holders of gold store it in form of bullion coins or bars as a hedge against inflation or other economic disruptions, though its efficacy as such has been questioned; historically, it has not proven itself reliable as a hedging instrument. Modern bullion coins for investment or collector purposes do not require good mechanical wear properties; they are typically fine gold at 24k, although the American Gold Eagle and the British gold sovereign continue to be minted in 22k (0.92) metal in historical tradition, and the South African Krugerrand, first released in 1967, is also 22k (0.92).
The special issue Canadian Gold Maple Leaf coin contains the highest purity gold of any bullion coin, at 99.999% or 0.99999, while the popular issue Canadian Gold Maple Leaf coin has a purity of 99.99%. In 2006, the United States Mint began producing the American Buffalo gold bullion coin with a purity of 99.99%. The Australian Gold Kangaroos were first coined in 1986 as the Australian Gold Nugget but changed the reverse design in 1989. Other modern coins include the Austrian Vienna Philharmonic bullion coin and the Chinese Gold Panda.
Price
Like other precious metals, gold is measured by troy weight and by grams. The proportion of gold in the alloy is measured by karat (k), with 24 karat (24k) being pure gold (100%), and lower karat numbers proportionally less (18k = 75%). The purity of a gold bar or coin can also be expressed as a decimal figure ranging from 0 to 1, known as the millesimal fineness, such as 0.995 being nearly pure.
The price of gold is determined through trading in the gold and derivatives markets, but a procedure known as the Gold Fixing in London, originating in September 1919, provides a daily benchmark price to the industry. The afternoon fixing was introduced in 1968 to provide a price when US markets are open. At one point, gold was valued at around $42 per gram ($1,300 per troy ounce).
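Both the karat-to-fineness relationship and the per-gram versus per-troy-ounce pricing are simple proportional conversions; the sketch below uses the article's own example figures and is illustrative only, not a live price source.

```python
TROY_OUNCE_GRAMS = 31.1034768

def karat_to_fineness(karat):
    """Convert karat to millesimal fineness (e.g. 18k -> 750)."""
    return round(karat / 24 * 1000)

def price_per_gram(usd_per_troy_oz):
    """Convert a per-troy-ounce price to a per-gram price."""
    return usd_per_troy_oz / TROY_OUNCE_GRAMS

print(karat_to_fineness(24), karat_to_fineness(18), karat_to_fineness(14))   # 1000 750 583
print(f"$1,300/troy oz ≈ ${price_per_gram(1300):.0f}/g")                     # ≈ $42/g
```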
History
Historically gold coinage was widely used as currency; when paper money was introduced, it typically was a receipt redeemable for gold coin or bullion. In a monetary system known as the gold standard, a certain weight of gold was given the name of a unit of currency. For a long period, the United States government set the value of the US dollar so that one troy ounce was equal to $20.67 ($0.665 per gram), but in 1934 the dollar was devalued to $35.00 per troy ounce ($0.889/g). By 1961, it was becoming hard to maintain this price, and a pool of US and European banks agreed to manipulate the market to prevent further currency devaluation against increased gold demand.
The largest gold depository in the world is that of the U.S. Federal Reserve Bank in New York, which holds about 3% of the gold known to exist and accounted for today, as does the similarly laden U.S. Bullion Depository at Fort Knox. In 2005 the World Gold Council estimated total global gold supply to be 3,859 tonnes and demand to be 3,754 tonnes, giving a surplus of 105 tonnes.
After the Nixon shock of 15 August 1971, the price began to greatly increase, and between 1968 and 2000 the price of gold ranged widely, from a high of $850 per troy ounce ($27.33/g) on 21 January 1980, to a low of $252.90 per troy ounce ($8.13/g) on 21 June 1999 (London Gold Fixing). Prices increased rapidly from 2001, but the 1980 high was not exceeded until 3 January 2008, when a new maximum of $865.35 per troy ounce was set. Another record price was set on 17 March 2008, at $1023.50 per troy ounce ($32.91/g).
On 2 December 2009, gold reached a new high closing at $1,217.23. Gold further rallied hitting new highs in May 2010 after the European Union debt crisis prompted further purchase of gold as a safe asset. On 1 March 2011, gold hit a new all-time high of $1432.57, based on investor concerns regarding ongoing unrest in North Africa as well as in the Middle East.
From April 2001 to August 2011, spot gold prices more than quintupled in value against the US dollar, hitting a new all-time high of $1,913.50 on 23 August 2011, prompting speculation that the long secular bear market had ended and a bull market had returned. However, the price then began a slow decline towards $1200 per troy ounce in late 2014 and 2015.
In August 2020, the gold price rose to US$2,060 per ounce after a total growth of 59% from August 2018 to October 2020, a period during which it outpaced the Nasdaq total return of 54%.
Gold futures are traded on the COMEX exchange. These contracts are priced in USD per troy ounce (1 troy ounce = 31.1034768 grams).
Other applications
Jewelry
Because of the softness of pure (24k) gold, it is usually alloyed with other metals for use in jewelry, altering its hardness and ductility, melting point, color and other properties. Alloys with lower karat rating, typically 22k, 18k, 14k or 10k, contain higher percentages of copper, silver, palladium or other base metals in the alloy. Nickel is toxic, and its release from nickel white gold is controlled by legislation in Europe. Palladium-gold alloys are more expensive than those using nickel. High-karat white gold alloys are more resistant to corrosion than are either pure silver or sterling silver. The Japanese craft of Mokume-gane exploits the color contrasts between laminated colored gold alloys to produce decorative wood-grain effects.
By 2014, the gold jewelry industry was growing despite a dip in gold prices. Demand in the first quarter of 2014 pushed turnover to $23.7 billion, according to a World Gold Council report.
Gold solder is used for joining the components of gold jewelry by high-temperature hard soldering or brazing. If the work is to be of hallmarking quality, the gold solder alloy must match the fineness of the work, and alloy formulas are manufactured to color-match yellow and white gold. Gold solder is usually made in at least three melting-point ranges referred to as Easy, Medium and Hard. By using the hard, high-melting point solder first, followed by solders with progressively lower melting points, goldsmiths can assemble complex items with several separate soldered joints. Gold can also be made into thread and used in embroidery.
Electronics
Only 10% of the world consumption of new gold produced goes to industry, but by far the most important industrial use for new gold is in the fabrication of corrosion-free electrical connectors in computers and other electrical devices. For example, according to the World Gold Council, a typical cell phone may contain 50 mg of gold, worth about three dollars. But since nearly one billion cell phones are produced each year, a gold value of US$2.82 in each phone adds up to US$2.82 billion in gold from this application alone. (Prices updated to November 2022.)
Though gold is attacked by free chlorine, its good conductivity and general resistance to oxidation and corrosion in other environments (including resistance to non-chlorinated acids) has led to its widespread industrial use in the electronic era as a thin-layer coating on electrical connectors, thereby ensuring good connection. For example, gold is used in the connectors of the more expensive electronics cables, such as audio, video and USB cables. The benefit of using gold over other connector metals such as tin in these applications has been debated; gold connectors are often criticized by audio-visual experts as unnecessary for most consumers and seen as simply a marketing ploy. However, the use of gold in other applications in electronic sliding contacts in highly humid or corrosive atmospheres, and in use for contacts with a very high failure cost (certain computers, communications equipment, spacecraft, jet aircraft engines) remains very common.
Besides sliding electrical contacts, gold is also used in electrical contacts because of its resistance to corrosion, electrical conductivity, ductility and lack of toxicity. Switch contacts are generally subjected to more intense corrosion stress than are sliding contacts. Fine gold wires are used to connect semiconductor devices to their packages through a process known as wire bonding.
The concentration of free electrons in gold metal is 5.91×10²² cm⁻³. Gold is highly conductive to electricity and has been used for electrical wiring in some high-energy applications (only silver and copper are more conductive per volume, but gold has the advantage of corrosion resistance). For example, gold electrical wires were used during some of the Manhattan Project's atomic experiments, but large high-current silver wires were used in the calutron isotope separator magnets in the project.
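The free-electron concentration quoted above is consistent with one conduction electron per gold atom; a quick check using gold's standard density (19.3 g/cm3) and molar mass (196.97 g/mol) — values taken from standard references, not from this article:
<syntaxhighlight lang="python">
AVOGADRO = 6.02214076e23  # atoms per mole

density_g_per_cm3 = 19.3        # standard value for gold (assumed here)
molar_mass_g_per_mol = 196.97   # standard value for gold (assumed here)

# One free 6s electron per atom gives the electron concentration:
electrons_per_cm3 = density_g_per_cm3 * AVOGADRO / molar_mass_g_per_mol
print(f"{electrons_per_cm3:.2e}")  # ~5.90e+22, close to the quoted 5.91e22 per cm^3
</syntaxhighlight>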
It is estimated that 16% of the world's presently-accounted-for gold and 22% of the world's silver is contained in electronic technology in Japan.
Medicine
There are only two gold compounds currently employed as pharmaceuticals in modern medicine (sodium aurothiomalate and auranofin), used in the treatment of arthritis and other similar conditions in the US due to their anti-inflammatory properties. These drugs have been explored as a means to help to reduce the pain and swelling of rheumatoid arthritis, and also (historically) against tuberculosis and some parasites.
Some esotericists and forms of alternative medicine assign metallic gold a healing power, against the scientific consensus.
Historically, metallic and gold compounds have long been used for medicinal purposes. Gold, usually as the metal, is perhaps the most anciently administered medicine (apparently by shamanic practitioners) and known to Dioscorides. In medieval times, gold was often seen as beneficial for the health, in the belief that something so rare and beautiful could not be anything but healthy.
In the 19th century, gold had a reputation as an anxiolytic, a therapy for nervous disorders. It was used to treat depression, epilepsy, migraine, and glandular problems such as amenorrhea and impotence, and, most notably, alcoholism (Keeley, 1897).
The apparent paradox of the actual toxicology of the substance suggests the possibility of serious gaps in the understanding of the action of gold in physiology. Only salts and radioisotopes of gold are of pharmacological value, since elemental (metallic) gold is inert to all chemicals it encounters inside the body (e.g., ingested gold cannot be attacked by stomach acid).
Gold alloys are used in restorative dentistry, especially in tooth restorations, such as crowns and permanent bridges. The gold alloys' slight malleability facilitates the creation of a superior molar mating surface with other teeth and produces results that are generally more satisfactory than those produced by the creation of porcelain crowns. The use of gold crowns in more prominent teeth such as incisors is favored in some cultures and discouraged in others.
Colloidal gold preparations (suspensions of gold nanoparticles) in water are intensely red-colored, and can be made with tightly controlled particle sizes up to a few tens of nanometers across by reduction of gold chloride with citrate or ascorbate ions. Colloidal gold is used in research applications in medicine, biology and materials science. The technique of immunogold labeling exploits the ability of the gold particles to adsorb protein molecules onto their surfaces. Colloidal gold particles coated with specific antibodies can be used as probes for the presence and position of antigens on the surfaces of cells. In ultrathin sections of tissues viewed by electron microscopy, the immunogold labels appear as extremely dense round spots at the position of the antigen.
Gold, or alloys of gold and palladium, are applied as conductive coating to biological specimens and other non-conducting materials such as plastics and glass to be viewed in a scanning electron microscope. The coating, which is usually applied by sputtering with an argon plasma, has a triple role in this application. Gold's very high electrical conductivity drains electrical charge to earth, and its very high density provides stopping power for electrons in the electron beam, helping to limit the depth to which the electron beam penetrates the specimen. This improves definition of the position and topography of the specimen surface and increases the spatial resolution of the image. Gold also produces a high output of secondary electrons when irradiated by an electron beam, and these low-energy electrons are the most commonly used signal source used in the scanning electron microscope.
The isotope gold-198 (half-life 2.7 days) is used in nuclear medicine, in some cancer treatments and for treating other diseases.
Cuisine
Gold can be used in food and has the E number 175. In 2016, the European Food Safety Authority published an opinion on the re-evaluation of gold as a food additive. Concerns included the possible presence of minute amounts of gold nanoparticles in the food additive, and that gold nanoparticles have been shown to be genotoxic in mammalian cells in vitro.
Gold leaf, flake or dust is used on and in some gourmet foods, notably sweets and drinks, as a decorative ingredient. Gold flake was used by the nobility in medieval Europe as a decoration in food and drinks.
Danziger Goldwasser (German: "Gold water of Danzig"), or Goldwasser, is a traditional German herbal liqueur produced in what is today Gdańsk, Poland, and Schwabach, Germany, and contains flakes of gold leaf. There are also some expensive (c. $1000) cocktails which contain flakes of gold leaf. However, since metallic gold is inert to all body chemistry, it has no taste, provides no nutrition, and leaves the body unaltered.
Vark is a foil composed of a pure metal that is sometimes gold, and is used for garnishing sweets in South Asian cuisine.
Miscellanea
Gold produces a deep, intense red color when used as a coloring agent in cranberry glass.
In photography, gold toners are used to shift the color of silver bromide black-and-white prints towards brown or blue tones, or to increase their stability. Used on sepia-toned prints, gold toners produce red tones. Kodak published formulas for several types of gold toners, which use gold as the chloride.
Gold is a good reflector of electromagnetic radiation such as infrared and visible light, as well as radio waves. It is used for the protective coatings on many artificial satellites, in infrared protective faceplates in thermal-protection suits and astronauts' helmets, and in electronic warfare planes such as the EA-6B Prowler.
Gold is used as the reflective layer on some high-end CDs.
Automobiles may use gold for heat shielding. McLaren uses gold foil in the engine compartment of its F1 model.
Gold can be manufactured so thin that it appears semi-transparent. It is used in some aircraft cockpit windows for de-icing or anti-icing by passing electricity through it. The heat produced by the resistance of the gold is enough to prevent ice from forming.
Gold is attacked by and dissolves in alkaline solutions of potassium or sodium cyanide, to form the salt gold cyanide—a technique that has been used in extracting metallic gold from ores in the cyanide process. Gold cyanide is the electrolyte used in commercial electroplating of gold onto base metals and electroforming.
Gold chloride (chloroauric acid) solutions are used to make colloidal gold by reduction with citrate or ascorbate ions. Gold chloride and gold oxide are used to make cranberry or red-colored glass, which, like colloidal gold suspensions, contains evenly sized spherical gold nanoparticles.
Gold, when dispersed in nanoparticles, can act as a heterogeneous catalyst of chemical reactions.
In recent years, gold has been used as a symbol of pride by the autism rights movement, as its symbol Au could be seen as similar to the word "autism".
Toxicity
Pure metallic (elemental) gold is non-toxic and non-irritating when ingested and is sometimes used as a food decoration in the form of gold leaf. Metallic gold is also a component of the alcoholic drinks Goldschläger, Gold Strike, and Goldwasser. Metallic gold is approved as a food additive in the EU (E175 in the Codex Alimentarius). Although the gold ion is toxic, the acceptance of metallic gold as a food additive is due to its relative chemical inertness, and resistance to being corroded or transformed into soluble salts (gold compounds) by any known chemical process which would be encountered in the human body.
Soluble compounds (gold salts) such as gold chloride are toxic to the liver and kidneys. Common cyanide salts of gold such as potassium gold cyanide, used in gold electroplating, are toxic by virtue of both their cyanide and gold content. There are rare cases of lethal gold poisoning from potassium gold cyanide. Gold toxicity can be ameliorated with chelation therapy with an agent such as dimercaprol.
Gold metal was voted Allergen of the Year in 2001 by the American Contact Dermatitis Society; gold contact allergies affect mostly women. Despite this, gold is a relatively non-potent contact allergen, in comparison with metals like nickel.
A sample of the fungus Aspergillus niger was found growing from gold mining solution and was found to contain cyanometal complexes of gold, silver, copper, iron, and zinc. The fungus also plays a role in the solubilization of heavy metal sulfides.
| Physical sciences | Chemistry | null |
12241 | https://en.wikipedia.org/wiki/Gallium | Gallium | Gallium is a chemical element; it has the symbol Ga and atomic number 31. Discovered by the French chemist Paul-Émile Lecoq de Boisbaudran in 1875, gallium is in group 13 of the periodic table and is similar to the other metals of the group (aluminium, indium, and thallium).
Elemental gallium is a relatively soft, silvery metal at standard temperature and pressure. In its liquid state, it becomes silvery white. If enough force is applied, solid gallium may fracture conchoidally. Since its discovery in 1875, gallium has been used widely to make alloys with low melting points. It is also used in semiconductors, notably as a dopant in semiconductor substrates.
The melting point of gallium (29.7646 °C, 85.5763 °F, 302.9146 K) is used as a temperature reference point. Gallium alloys are used in thermometers as a non-toxic and environmentally friendly alternative to mercury, and can withstand higher temperatures than mercury. A melting point of −19 °C (−2.2 °F), well below the freezing point of water, is claimed for the alloy galinstan (62–95% gallium, 5–22% indium, and 0–16% tin by weight), but that may be the freezing point with the effect of supercooling.
Gallium does not occur as a free element in nature, but rather as gallium(III) compounds in trace amounts in zinc ores (such as sphalerite) and in bauxite. Elemental gallium is a liquid at temperatures greater than 29.76 °C (85.57 °F), and will melt in a person's hands at the normal human body temperature of 37.0 °C (98.6 °F).
Gallium is predominantly used in electronics. Gallium arsenide, the primary chemical compound of gallium in electronics, is used in microwave circuits, high-speed switching circuits, and infrared circuits. Semiconducting gallium nitride and indium gallium nitride produce blue and violet light-emitting diodes and diode lasers. Gallium is also used in the production of artificial gadolinium gallium garnet for jewelry. Gallium is considered a technology-critical element by the United States National Library of Medicine and Frontiers Media.
Gallium has no known natural role in biology. Gallium(III) behaves in a similar manner to ferric salts in biological systems and has been used in some medical applications, including pharmaceuticals and radiopharmaceuticals.
Physical properties
Elemental gallium is not found in nature, but it is easily obtained by smelting. Very pure gallium is a silvery blue metal that fractures conchoidally like glass. Gallium's volume expands by 3.10% when it changes from a liquid to a solid, so care must be taken when storing it in containers that may rupture when it changes state. Gallium shares the higher-density liquid state with a short list of other materials that includes water, silicon, germanium, bismuth, and plutonium.
Gallium forms alloys with most metals. It readily diffuses into cracks or grain boundaries of some metals such as aluminium, aluminium–zinc alloys and steel, causing extreme loss of strength and ductility called liquid metal embrittlement.
The melting point of gallium, at 302.9146 K (29.7646 °C, 85.5763 °F), is just above room temperature, and is approximately the same as the average summer daytime temperatures in Earth's mid-latitudes. This melting point (mp) is one of the formal temperature reference points in the International Temperature Scale of 1990 (ITS-90) established by the International Bureau of Weights and Measures (BIPM). The triple point of gallium, 302.9166 K (29.7666 °C, 85.5799 °F), is used by the US National Institute of Standards and Technology (NIST) in preference to the melting point.
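The three melting-point figures quoted above (kelvin, Celsius, Fahrenheit) can be checked against one another with the standard scale conversions; a minimal sketch:
<syntaxhighlight lang="python">
def kelvin_to_celsius(k: float) -> float:
    return k - 273.15

def celsius_to_fahrenheit(c: float) -> float:
    return c * 9.0 / 5.0 + 32.0

mp_k = 302.9146                      # gallium melting point, ITS-90 reference value
mp_c = kelvin_to_celsius(mp_k)       # 29.7646
mp_f = celsius_to_fahrenheit(mp_c)   # 85.5763 (to four decimal places)
print(f"{mp_c:.4f} C, {mp_f:.4f} F")
</syntaxhighlight>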
The melting point of gallium allows it to melt in the human hand, and then solidify if removed. The liquid metal has a strong tendency to supercool below its melting point/freezing point: Ga nanoparticles can be kept in the liquid state below 90 K. Seeding with a crystal helps to initiate freezing. Gallium is one of the four non-radioactive metals (with caesium, rubidium, and mercury) that are known to be liquid at, or near, normal room temperature. Of the four, gallium is the only one that is neither highly reactive (as are rubidium and caesium) nor highly toxic (as is mercury) and can, therefore, be used in metal-in-glass high-temperature thermometers. It is also notable for having one of the largest liquid ranges for a metal, and for having (unlike mercury) a low vapor pressure at high temperatures. Gallium's boiling point, 2676 K, is nearly nine times higher than its melting point on the absolute scale, the greatest ratio between melting point and boiling point of any element. Unlike mercury, liquid gallium metal wets glass and skin, along with most other materials (with the exceptions of quartz, graphite, gallium(III) oxide and PTFE), making it mechanically more difficult to handle even though it is substantially less toxic and requires far fewer precautions than mercury. Gallium painted onto glass is a brilliant mirror. For this reason as well as the metal contamination and freezing-expansion problems, samples of gallium metal are usually supplied in polyethylene packets within other containers.
Gallium does not crystallize in any of the simple crystal structures. The stable phase under normal conditions is orthorhombic with 8 atoms in the conventional unit cell. Within a unit cell, each atom has only one nearest neighbor (at a distance of 244 pm). The remaining six unit cell neighbors are spaced 27, 30 and 39 pm farther away, and they are grouped in pairs with the same distance. Many stable and metastable phases are found as a function of temperature and pressure.
The bonding between the two nearest neighbors is covalent; hence Ga2 dimers are seen as the fundamental building blocks of the crystal. This explains the low melting point relative to the neighbor elements, aluminium and indium. This structure is strikingly similar to that of iodine and may form because of interactions between the single 4p electrons of gallium atoms, further away from the nucleus than the 4s electrons and the [Ar]3d¹⁰ core. This phenomenon recurs with mercury with its "pseudo-noble-gas" [Xe]4f¹⁴5d¹⁰6s² electron configuration, which is liquid at room temperature. The 3d¹⁰ electrons do not shield the outer electrons very well from the nucleus and hence the first ionisation energy of gallium is greater than that of aluminium. Ga2 dimers do not persist in the liquid state and liquid gallium exhibits a complex low-coordinated structure in which each gallium atom is surrounded by 10 others, rather than the 11–12 neighbors typical of most liquid metals.
The physical properties of gallium are highly anisotropic, i.e. have different values along the three major crystallographic axes a, b, and c (see table), producing a significant difference between the linear (α) and volume thermal expansion coefficients. The properties of gallium are strongly temperature-dependent, particularly near the melting point. For example, the coefficient of thermal expansion increases by several hundred percent upon melting.
Isotopes
Gallium has 30 known isotopes, ranging in mass number from 60 to 89. Only two isotopes are stable and occur naturally, gallium-69 and gallium-71. Gallium-69 is more abundant: it makes up about 60.1% of natural gallium, while gallium-71 makes up the remaining 39.9%. All the other isotopes are radioactive, with gallium-67 being the longest-lived (half-life 3.261 days). Isotopes lighter than gallium-69 usually decay through beta plus decay (positron emission) or electron capture to isotopes of zinc, while isotopes heavier than gallium-71 decay through beta minus decay (electron emission), possibly with delayed neutron emission, to isotopes of germanium. Gallium-70 can decay through both beta minus decay and electron capture. Gallium-67 is unique among the light isotopes in having only electron capture as a decay mode, as its decay energy is not sufficient to allow positron emission. Gallium-67 and gallium-68 (half-life 67.7 min) are both used in nuclear medicine.
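The two natural abundances above determine gallium's standard atomic weight; a quick check using the isotopic masses from standard tables (68.925574 u and 70.924703 u — reference values, not given in this article):
<syntaxhighlight lang="python">
# Abundances are the 60.1% / 39.9% figures quoted above; the isotopic
# masses are standard reference values, assumed here for illustration.
mass_ga69, abundance_ga69 = 68.925574, 0.601
mass_ga71, abundance_ga71 = 70.924703, 0.399

atomic_weight = mass_ga69 * abundance_ga69 + mass_ga71 * abundance_ga71
print(round(atomic_weight, 3))  # ~69.723, matching the tabulated atomic weight
</syntaxhighlight>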
Chemical properties
Gallium is found primarily in the +3 oxidation state. The +1 oxidation state is also found in some compounds, although it is less common than it is for gallium's heavier congeners indium and thallium. For example, the very stable GaCl2 contains both gallium(I) and gallium(III) and can be formulated as Ga(I)[Ga(III)Cl4]; in contrast, the monochloride is unstable above 0 °C, disproportionating into elemental gallium and gallium(III) chloride. Compounds containing Ga–Ga bonds are true gallium(II) compounds, such as GaS (which can be formulated as Ga2⁴⁺(S²⁻)2) and the dioxane complex Ga2Cl4(C4H8O2)2.
Aqueous chemistry
Strong acids dissolve gallium, forming gallium(III) salts such as Ga(NO3)3 (gallium nitrate). Aqueous solutions of gallium(III) salts contain the hydrated gallium ion, [Ga(H2O)6]³⁺. Gallium(III) hydroxide, Ga(OH)3, may be precipitated from gallium(III) solutions by adding ammonia. Dehydrating Ga(OH)3 at 100 °C produces gallium oxide hydroxide, GaO(OH).
Alkaline hydroxide solutions dissolve gallium, forming gallate salts (not to be confused with identically named gallic acid salts) containing the Ga(OH)4⁻ anion. Gallium hydroxide, which is amphoteric, also dissolves in alkali to form gallate salts. Although earlier work suggested Ga(OH)6³⁻ as another possible gallate anion, it was not found in later work.
Oxides and chalcogenides
Gallium reacts with the chalcogens only at relatively high temperatures. At room temperature, gallium metal is not reactive with air and water because it forms a passive, protective oxide layer. At higher temperatures, however, it reacts with atmospheric oxygen to form gallium(III) oxide, Ga2O3. Reducing Ga2O3 with elemental gallium in vacuum at 500 °C to 700 °C yields the dark brown gallium(I) oxide, Ga2O. Ga2O is a very strong reducing agent, capable of reducing H2SO4 to H2S. It disproportionates at 800 °C back to gallium and Ga2O3.
Gallium(III) sulfide, Ga2S3, has 3 possible crystal modifications. It can be made by the reaction of gallium with hydrogen sulfide (H2S) at 950 °C. Alternatively, Ga(OH)3 can be used at 747 °C:
2 Ga(OH)3 + 3 H2S → Ga2S3 + 6 H2O
Reacting a mixture of alkali metal carbonates and Ga2O3 with H2S leads to the formation of thiogallates containing the [Ga2S4]²⁻ anion. Strong acids decompose these salts, releasing H2S in the process. The mercury salt, HgGa2S4, can be used as a phosphor.
Gallium also forms sulfides in lower oxidation states, such as gallium(II) sulfide and the green gallium(I) sulfide, the latter of which is produced from the former by heating to 1000 °C under a stream of nitrogen.
The other binary chalcogenides, Ga2Se3 and Ga2Te3, have the zincblende structure. They are all semiconductors but are easily hydrolysed and have limited utility.
Nitrides and pnictides
Gallium reacts with ammonia at 1050 °C to form gallium nitride, GaN. Gallium also forms binary compounds with phosphorus, arsenic, and antimony: gallium phosphide (GaP), gallium arsenide (GaAs), and gallium antimonide (GaSb). These compounds have the same structure as ZnS, and have important semiconducting properties. GaP, GaAs, and GaSb can be synthesized by the direct reaction of gallium with elemental phosphorus, arsenic, or antimony. They exhibit higher electrical conductivity than GaN. GaP can also be synthesized by reacting Ga2O with phosphorus at low temperatures.
Gallium forms ternary nitrides; for example:
Li3Ga + N2 → Li3GaN2
Similar compounds with phosphorus and arsenic are possible: Li3GaP2 and Li3GaAs2. These compounds are easily hydrolyzed by dilute acids and water.
Halides
Gallium(III) oxide reacts with fluorinating agents such as HF or F2 to form gallium(III) fluoride, GaF3. It is an ionic compound strongly insoluble in water. However, it dissolves in hydrofluoric acid, in which it forms an adduct with water, GaF3·3H2O. Attempting to dehydrate this adduct forms GaF2OH·nH2O. The adduct reacts with ammonia to form GaF3·3NH3, which can then be heated to form anhydrous GaF3.
Gallium trichloride is formed by the reaction of gallium metal with chlorine gas. Unlike the trifluoride, gallium(III) chloride exists as dimeric molecules, Ga2Cl6, with a melting point of 78 °C. Equivalent compounds are formed with bromine and iodine, Ga2Br6 and Ga2I6.
Like the other group 13 trihalides, gallium(III) halides are Lewis acids, reacting as halide acceptors with alkali metal halides to form salts containing GaX4⁻ anions, where X is a halogen. They also react with alkyl halides to form carbocations and GaX4⁻.
When heated to a high temperature, gallium(III) halides react with elemental gallium to form the respective gallium(I) halides. For example, GaCl3 reacts with Ga to form GaCl:
2 Ga + GaCl3 ⇌ 3 GaCl (g)
At lower temperatures, the equilibrium shifts toward the left and GaCl disproportionates back to elemental gallium and GaCl3. GaCl can also be produced by reacting Ga with HCl at 950 °C; the product can be condensed as a red solid.
Gallium(I) compounds can be stabilized by forming adducts with Lewis acids. For example:
GaCl + AlCl3 → Ga⁺[AlCl4]⁻
The so-called "gallium(II) halides", GaX2, are actually adducts of gallium(I) halides with the respective gallium(III) halides, having the structure Ga⁺[GaX4]⁻. For example:
GaCl + GaCl3 → Ga⁺[GaCl4]⁻
Hydrides
Like aluminium, gallium also forms a hydride, GaH3, known as gallane, which may be produced by reacting lithium gallanate (LiGaH4) with gallium(III) chloride at −30 °C:
3 LiGaH4 + GaCl3 → 3 LiCl + 4 GaH3
In the presence of dimethyl ether as solvent, GaH3 polymerizes to (GaH3)n. If no solvent is used, the dimer Ga2H6 (digallane) is formed as a gas. Its structure is similar to diborane, having two hydrogen atoms bridging the two gallium centers, unlike α-AlH3, in which aluminium has a coordination number of 6.
Gallane is unstable above −10 °C, decomposing to elemental gallium and hydrogen.
Organogallium compounds
Organogallium compounds are of similar reactivity to organoindium compounds: less reactive than organoaluminium compounds, but more reactive than organothallium compounds. Alkylgalliums are monomeric. Lewis acidity decreases in the order Al > Ga > In, and as a result organogallium compounds do not form bridged dimers as organoaluminium compounds do. They do, however, form stable peroxides. These alkylgalliums are liquids at room temperature, having low melting points, and are quite mobile and flammable. Triphenylgallium is monomeric in solution, but its crystals form chain structures due to weak intermolecular Ga···C interactions.
Gallium trichloride is a common starting reagent for the formation of organogallium compounds, such as in carbogallation reactions. Gallium trichloride reacts with lithium cyclopentadienide in diethyl ether to form the trigonal planar gallium cyclopentadienyl complex GaCp3. Gallium(I) forms complexes with arene ligands such as hexamethylbenzene. Because this ligand is quite bulky, the structure of [Ga(η⁶-C6Me6)]⁺ is that of a half-sandwich. Less bulky ligands such as mesitylene allow two ligands to be attached to the central gallium atom in a bent sandwich structure. Benzene is even less bulky and allows the formation of dimers: an example is [Ga(η⁶-C6H6)2][GaCl4]·3C6H6.
History
In 1871, the existence of gallium was first predicted by Russian chemist Dmitri Mendeleev, who named it "eka-aluminium" from its position in his periodic table. He also predicted several properties of eka-aluminium that correspond closely to the real properties of gallium, such as its density, melting point, oxide character, and bonding in chloride.
{| class="wikitable"
|+ Comparison between Mendeleev's 1871 predictions and the known properties of gallium
|-
! Property
! Mendeleev's predictions
! Actual properties
|-
! Atomic weight
| ~68
| 69.723
|-
! Density
| 5.9 g/cm3
| 5.904 g/cm3
|-
! Melting point
| Low
| 29.767 °C
|-
! Formula of oxide
| M2O3
| Ga2O3
|-
! Density of oxide
| 5.5 g/cm3
| 5.88 g/cm3
|-
! Nature of hydroxide
| amphoteric
| amphoteric
|}
Mendeleev further predicted that eka-aluminium would be discovered by means of the spectroscope, and that metallic eka-aluminium would dissolve slowly in both acids and alkalis and would not react with air. He also predicted that M2O3 would dissolve in acids to give MX3 salts, that eka-aluminium salts would form basic salts, that eka-aluminium sulfate should form alums, and that anhydrous MCl3 should have a greater volatility than ZnCl2: all of these predictions turned out to be true.
Gallium was discovered using spectroscopy by French chemist Paul-Émile Lecoq de Boisbaudran in 1875 from its characteristic spectrum (two violet lines) in a sample of sphalerite. Later that year, Lecoq obtained the free metal by electrolysis of the hydroxide in potassium hydroxide solution.
He named the element "gallia", from the Latin Gallia meaning 'Gaul', after his native land of France. It was later claimed that, in a multilingual pun of a kind favoured by men of science in the 19th century, he had also named gallium after himself: "le coq" is French for 'the rooster', and the Latin word for 'rooster' is "gallus". In an 1877 article, Lecoq denied this conjecture.
Originally, de Boisbaudran determined the density of gallium as 4.7 g/cm3, the only property that failed to match Mendeleev's predictions; Mendeleev then wrote to him and suggested that he remeasure the density, and de Boisbaudran subsequently obtained the correct value of 5.9 g/cm3, which Mendeleev had predicted exactly.
From its discovery in 1875 until the era of semiconductors, the primary uses of gallium were high-temperature thermometrics and metal alloys with unusual properties of stability or ease of melting (some such being liquid at room temperature).
The development of gallium arsenide as a direct bandgap semiconductor in the 1960s ushered in the most important stage in the applications of gallium. In the late 1960s, the electronics industry started using gallium on a commercial scale to fabricate light emitting diodes, photovoltaics and semiconductors, while the metals industry used it to reduce the melting point of alloys.
The first blue gallium nitride LEDs were developed in 1971–1973, but they were feeble. Only in the early 1990s did Shuji Nakamura manage to combine GaN with indium gallium nitride and develop the modern blue LED, which Nichia commercialized in 1993 and which now forms the basis of ubiquitous white LEDs. He and two other Japanese scientists received the Nobel Prize in Physics in 2014 for this work.
Global gallium production grew slowly from several tens of tonnes per year in the 1970s until about 2010, when it passed 100 t/yr; growth then accelerated rapidly, reaching about 450 t/yr by 2024.
Occurrence
Gallium does not exist as a free element in the Earth's crust, and the few high-content minerals, such as gallite (CuGaS2), are too rare to serve as a primary source. The abundance in the Earth's crust is approximately 16.9 ppm. It is the 34th most abundant element in the crust. This is comparable to the crustal abundances of lead, cobalt, and niobium. Yet unlike these elements, gallium does not form its own ore deposits with concentrations of > 0.1 wt.% in ore. Rather it occurs at trace concentrations similar to the crustal value in zinc ores, and at somewhat higher values (~ 50 ppm) in aluminium ores, from both of which it is extracted as a by-product. This lack of independent deposits is due to gallium's geochemical behaviour, showing no strong enrichment in the processes relevant to the formation of most ore deposits.
The United States Geological Survey (USGS) estimates that more than 1 million tons of gallium is contained in known reserves of bauxite and zinc ores. Some coal flue dusts contain small quantities of gallium, typically less than 1% by weight. However, these amounts are not extractable without mining of the host materials (see below). Thus, the availability of gallium is fundamentally determined by the rate at which bauxite, zinc ores, and coal are extracted.
Production and availability
Gallium is produced exclusively as a by-product during the processing of the ores of other metals. Its main source material is bauxite, the chief ore of aluminium, but minor amounts are also extracted from sulfidic zinc ores (sphalerite being the main host mineral). In the past, certain coals were an important source.
During the processing of bauxite to alumina in the Bayer process, gallium accumulates in the sodium hydroxide liquor. From this it can be extracted by a variety of methods. The most recent is the use of ion-exchange resin. Achievable extraction efficiencies critically depend on the original concentration in the feed bauxite. At a typical feed concentration of 50 ppm, about 15% of the contained gallium is extractable. The remainder reports to the red mud and aluminium hydroxide streams. Gallium is removed from the ion-exchange resin in solution. Electrolysis then gives gallium metal. For semiconductor use, it is further purified with zone melting or single-crystal extraction from a melt (Czochralski process). Purities of 99.9999% are routinely achieved and commercially available.
Its by-product status means that gallium production is constrained by the amount of bauxite, sulfidic zinc ores (and coal) extracted per year. Therefore, its availability needs to be discussed in terms of supply potential. The supply potential of a by-product is defined as that amount which is economically extractable from its host materials per year under current market conditions (i.e. technology and price). Reserves and resources are not relevant for by-products, since they cannot be extracted independently from the main-products. Recent estimates put the supply potential of gallium at a minimum of 2,100 t/yr from bauxite, 85 t/yr from sulfidic zinc ores, and potentially 590 t/yr from coal. These figures are significantly greater than current production (375 t in 2016). Thus, major future increases in the by-product production of gallium will be possible without significant increases in production costs or price. The average price for low-grade gallium was $120 per kilogram in 2016 and $135–140 per kilogram in 2017.
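Summing the three supply-potential estimates above and comparing them with the 2016 production figure makes the headroom explicit; a back-of-the-envelope sketch:
<syntaxhighlight lang="python">
# Supply-potential estimates quoted above, in tonnes per year:
supply_potential = {"bauxite": 2100, "sulfidic zinc ores": 85, "coal": 590}
production_2016 = 375  # tonnes, as quoted above

total = sum(supply_potential.values())
print(total)                              # 2775 t/yr
print(round(total / production_2016, 1))  # ~7.4 times 2016 production
</syntaxhighlight>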
In 2017, the world's production of low-grade gallium increased by about 15% from 2016. China, Japan, South Korea, Russia, and Ukraine were the leading producers, while Germany ceased primary production of gallium in 2016. The yield of high-purity gallium was ca. 180 tons, mostly originating from China, Japan, Slovakia, the UK, and the U.S. The 2017 world annual production capacity was estimated at 730 tons for low-grade and 320 tons for refined gallium.
China produced the majority of the world's low-grade gallium in 2016 and 2017. It also accounted for more than half of global LED production. As of July 2023, China accounted for between 80% and 95% of global gallium production.
Applications
Semiconductor applications dominate the commercial demand for gallium, accounting for 98% of the total. The next major application is gadolinium gallium garnets. As of 2022, 44% of world use went to light fixtures and 36% to integrated circuits, with smaller shares of about 7% each going to photovoltaics and magnets.
Semiconductors
Extremely high-purity (>99.9999%) gallium is commercially available to serve the semiconductor industry. Gallium arsenide (GaAs) and gallium nitride (GaN) used in electronic components represented about 98% of the gallium consumption in the United States in 2007. About 66% of semiconductor gallium is used in the U.S. in integrated circuits (mostly gallium arsenide), such as the manufacture of ultra-high-speed logic chips and MESFETs for low-noise microwave preamplifiers in cell phones. About 20% of this gallium is used in optoelectronics.
Worldwide, gallium arsenide makes up 95% of the annual global gallium consumption. It amounted to $7.5 billion in 2016, with 53% originating from cell phones, 27% from wireless communications, and the rest from automotive, consumer, fiber-optic, and military applications. The recent increase in GaAs consumption is mostly related to the emergence of 3G and 4G smartphones, which employ up to 10 times the amount of GaAs in older models.
Gallium arsenide and gallium nitride can also be found in a variety of optoelectronic devices, which had a market share of $15.3 billion in 2015 and $18.5 billion in 2016. Aluminium gallium arsenide (AlGaAs) is used in high-power infrared laser diodes. The semiconductors gallium nitride and indium gallium nitride are used in blue and violet optoelectronic devices, mostly laser diodes and light-emitting diodes. For example, gallium nitride 405 nm diode lasers are used as the violet light source for higher-density Blu-ray Disc data storage.
Other major applications of gallium nitride are cable television transmission, commercial wireless infrastructure, power electronics, and satellites. The GaN radio frequency device market alone was estimated at $370 million in 2016 and was forecast to reach $420 million.
Multijunction photovoltaic cells, developed for satellite power applications, are made by molecular-beam epitaxy or metalorganic vapour-phase epitaxy of thin films of gallium arsenide, indium gallium phosphide, or indium gallium arsenide. The Mars Exploration Rovers and several satellites use triple-junction gallium arsenide on germanium cells. Gallium is also a component in photovoltaic compounds (such as copper indium gallium selenium sulfide, Cu(In,Ga)(Se,S)2) used in solar panels as a cost-efficient alternative to crystalline silicon.
Galinstan and other alloys
Gallium readily alloys with most metals, and is used as an ingredient in low-melting alloys. The nearly eutectic alloy of gallium, indium, and tin is a room-temperature liquid used in medical thermometers. This alloy, with the trade name Galinstan (the "-stan" referring to stannum, Latin for tin), has a low melting point of −19 °C (−2.2 °F). It has been suggested that this family of alloys could also be used to cool computer chips in place of water, and they are often used as a replacement for thermal paste in high-performance computing. Gallium alloys have been evaluated as substitutes for mercury dental amalgams, but these materials have yet to see wide acceptance. Liquid alloys containing mostly gallium and indium have been found to precipitate gaseous CO2 into solid carbon and are being researched as potential methods for carbon capture and possibly carbon removal.
Because gallium wets glass or porcelain, it can be used to create brilliant mirrors. When the wetting action of gallium alloys is not desired (as in Galinstan glass thermometers), the glass must be protected with a transparent layer of gallium(III) oxide.
Due to their high surface tension and deformability, gallium-based liquid metals can be used to create actuators whose motion is driven by controlling that surface tension. Researchers have demonstrated the potential of liquid metal actuators as artificial muscles in robotic actuation.
The plutonium used in nuclear weapon pits is stabilized in the δ phase and made machinable by alloying with gallium.
Biomedical applications
Although gallium has no natural function in biology, gallium ions interact with processes in the body in a manner similar to iron(III). Because these processes include inflammation, a marker for many disease states, several gallium salts are used (or are in development) as pharmaceuticals and radiopharmaceuticals in medicine. Interest in the anticancer properties of gallium emerged when it was discovered that 67Ga(III) citrate injected in tumor-bearing animals localized to sites of tumor. Clinical trials have shown gallium nitrate to have antineoplastic activity against non-Hodgkin's lymphoma and urothelial cancers. A new generation of gallium-ligand complexes such as tris(8-quinolinolato)gallium(III) (KP46) and gallium maltolate has emerged. Gallium nitrate (brand name Ganite) has been used as an intravenous pharmaceutical to treat hypercalcemia associated with tumor metastasis to bones. Gallium is thought to interfere with osteoclast function, and the therapy may be effective when other treatments have failed. Gallium maltolate, an oral, highly absorbable form of gallium(III) ion, is an anti-proliferative to pathologically proliferating cells, particularly cancer cells and some bacteria that accept it in place of ferric iron (Fe3+). Researchers are conducting clinical and preclinical trials on this compound as a potential treatment for a number of cancers, infectious diseases, and inflammatory diseases.
When gallium ions are mistakenly taken up in place of iron(III) by bacteria such as Pseudomonas, the ions interfere with respiration, and the bacteria die. This happens because iron is redox-active, allowing the transfer of electrons during respiration, while gallium is redox-inactive.
A complex amine-phenol Ga(III) compound MR045 is selectively toxic to parasites resistant to chloroquine, a common drug against malaria. Both the Ga(III) complex and chloroquine act by inhibiting crystallization of hemozoin, a disposal product formed from the digestion of blood by the parasites.
Radiogallium salts
Gallium-67 salts such as gallium citrate and gallium nitrate are used as radiopharmaceutical agents in the nuclear medicine imaging known as gallium scan. The radioactive isotope 67Ga is used, and the compound or salt of gallium is unimportant. The body handles Ga3+ in many ways as though it were Fe3+, and the ion is bound (and concentrates) in areas of inflammation, such as infection, and in areas of rapid cell division. This allows such sites to be imaged by nuclear scan techniques.
Gallium-68, a positron emitter with a half-life of 68 min, is now used as a diagnostic radionuclide in PET-CT when linked to pharmaceutical preparations such as DOTATOC, a somatostatin analogue used for the investigation of neuroendocrine tumors, and DOTA-TATE, a newer analogue used for neuroendocrine metastasis and lung neuroendocrine cancer, such as certain types of microcytoma. Gallium-68's preparation as a pharmaceutical is chemical, and the radionuclide is extracted by elution from germanium-68, a synthetic radioisotope of germanium, in gallium-68 generators.
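The 68-minute half-life quoted above means that eluted gallium-68 activity decays appreciably between preparation and imaging; a sketch of the standard exponential-decay relation N(t)/N0 = 2^(−t/T½):
<syntaxhighlight lang="python">
def fraction_remaining(t_min: float, half_life_min: float = 68.0) -> float:
    """Fraction of gallium-68 activity remaining after t_min minutes.

    Standard radioactive decay law; the 68 min default is the half-life
    quoted in the text above.
    """
    return 2.0 ** (-t_min / half_life_min)

# Two hours after elution, only about 29% of the activity remains:
print(round(fraction_remaining(120), 3))  # 0.294
</syntaxhighlight>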
Other uses
Neutrino detection: Gallium is used for neutrino detection. Possibly the largest amount of pure gallium ever collected in a single location is the Gallium-Germanium Neutrino Telescope used by the SAGE experiment at the Baksan Neutrino Observatory in Russia, whose detector contains 55–57 tonnes (~9 cubic metres) of liquid gallium. Another experiment was the GALLEX neutrino detector operated in the early 1990s in an Italian mountain tunnel. The detector contained 12.2 tons of gallium-71 in aqueous solution. Solar neutrinos caused a few atoms of 71Ga to become radioactive 71Ge, which were detected. This experiment showed that the solar neutrino flux is 40% less than theory predicted. This deficit (the solar neutrino problem) was not explained until better solar neutrino detectors and theories were constructed (see SNO).
Ion source: Gallium is also used as a liquid metal ion source for a focused ion beam. For example, a focused gallium-ion beam was used to create the world's smallest book, Teeny Ted from Turnip Town.
Lubricants: Gallium serves as an additive in glide wax for skis and other low-friction surface materials.
Flexible electronics: Materials scientists speculate that the properties of gallium could make it suitable for the development of flexible and wearable devices.
Hydrogen generation: Gallium disrupts the protective oxide layer on aluminium, allowing water to react with the aluminium in AlGa to produce hydrogen gas.
Humor: A well-known practical joke among chemists is to fashion gallium spoons and use them to serve tea to unsuspecting guests, since gallium has a similar appearance to its lighter homolog aluminium. The spoons then melt in the hot tea.
Gallium in the ocean
Advances in trace element testing have allowed scientists to discover traces of dissolved gallium in the Atlantic and Pacific Oceans. In recent years, dissolved gallium concentrations have also been reported in the Beaufort Sea. These reports reflect the possible profiles of Pacific and Atlantic Ocean waters. In Pacific waters, typical dissolved gallium concentrations are between 4 and 6 pmol/kg at depths shallower than roughly 150 m; in Atlantic waters, they are 25–28 pmol/kg at depths greater than roughly 350 m.
Gallium enters the oceans mainly through aeolian (wind-borne) input, and its presence can be used to resolve the distribution of aluminium in the oceans, because gallium is geochemically similar to aluminium but less reactive. Gallium also has a slightly longer surface-water residence time than aluminium. Because its dissolved profile is similar to that of aluminium, gallium can be used as a tracer for aluminium. Gallium can also serve as a tracer of aeolian inputs of iron; it has been used this way in the northwest Pacific and in the south and central Atlantic Oceans. For example, in the northwest Pacific, low-gallium surface waters in the subpolar region suggest low dust input, which can in turn explain the region's high-nutrient, low-chlorophyll behavior.
Precautions
Metallic gallium is not toxic. However, several gallium compounds are toxic.
Gallium halide complexes can be toxic. The Ga3+ ion of soluble gallium salts tends to form the insoluble hydroxide when injected in large doses; precipitation of this hydroxide resulted in nephrotoxicity in animals. In lower doses, soluble gallium is tolerated well and does not accumulate as a poison, instead being excreted mostly through urine. Excretion of gallium occurs in two phases: the first phase has a biological half-life of 1 hour, while the second has a biological half-life of 25 hours.
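The biphasic excretion described above can be modeled as a sum of two exponentials; a minimal sketch, assuming — purely for illustration, since the article gives no split — that the dose divides evenly between the fast and slow phases:
<syntaxhighlight lang="python">
import math

def gallium_fraction_remaining(t_hours: float, fast_fraction: float = 0.5) -> float:
    """Fraction of a soluble-gallium dose remaining after t_hours.

    Two-compartment model with the biological half-lives quoted above
    (1 h fast phase, 25 h slow phase). The 50/50 split between phases is
    an illustrative assumption, not a measured value.
    """
    fast = fast_fraction * math.exp(-math.log(2.0) * t_hours / 1.0)
    slow = (1.0 - fast_fraction) * math.exp(-math.log(2.0) * t_hours / 25.0)
    return fast + slow

print(round(gallium_fraction_remaining(24), 3))  # ~0.257: the slow phase dominates after a day
</syntaxhighlight>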
Inhaled Ga2O3 particles are probably toxic.
| Physical sciences | Chemical elements_2 | null |
12242 | https://en.wikipedia.org/wiki/Germanium | Germanium | Germanium is a chemical element; it has symbol Ge and atomic number 32. It is lustrous, hard-brittle, grayish-white and similar in appearance to silicon. It is a metalloid (more rarely considered a metal) in the carbon group that is chemically similar to its group neighbors silicon and tin. Like silicon, germanium naturally reacts and forms complexes with oxygen in nature.
Because it seldom appears in high concentration, germanium was found comparatively late in the discovery of the elements. Germanium ranks 50th in abundance of the elements in the Earth's crust. In 1869, Dmitri Mendeleev predicted its existence and some of its properties from its position on his periodic table, and called the element ekasilicon. On February 6, 1886, Clemens Winkler at Freiberg University found the new element, along with silver and sulfur, in the mineral argyrodite. Winkler named the element after Germany, his country of birth. Germanium is mined primarily from sphalerite (the primary ore of zinc), though germanium is also recovered commercially from silver, lead, and copper ores.
Elemental germanium is used as a semiconductor in transistors and various other electronic devices. Historically, the first decade of semiconductor electronics was based entirely on germanium. Presently, the major end uses are fibre-optic systems, infrared optics, solar cell applications, and light-emitting diodes (LEDs). Germanium compounds are also used for polymerization catalysts and have most recently found use in the production of nanowires. This element forms a large number of organogermanium compounds, such as tetraethylgermanium, useful in organometallic chemistry. Germanium is considered a technology-critical element.
Germanium is not thought to be an essential element for any living organism. Similar to silicon and aluminium, naturally-occurring germanium compounds tend to be insoluble in water and thus have little oral toxicity. However, synthetic soluble germanium salts are nephrotoxic, and synthetic chemically reactive germanium compounds with halogens and hydrogen are irritants and toxins.
History
In his report on The Periodic Law of the Chemical Elements in 1869, the Russian chemist Dmitri Mendeleev predicted the existence of several unknown chemical elements, including one that would fill a gap in the carbon family, located between silicon and tin. Because of its position in his periodic table, Mendeleev called it ekasilicon (Es), and he estimated its atomic weight to be 70 (later 72).
In mid-1885, at a mine near Freiberg, Saxony, a new mineral was discovered and named argyrodite because of its high silver content. The chemist Clemens Winkler analyzed this new mineral, which proved to be a combination of silver, sulfur, and a new element. Winkler was able to isolate the new element in 1886 and found it similar to antimony. He initially considered the new element to be eka-antimony, but was soon convinced that it was instead eka-silicon. Before Winkler published his results on the new element, he decided that he would name his element neptunium, since the recent discovery of planet Neptune in 1846 had similarly been preceded by mathematical predictions of its existence. However, the name "neptunium" had already been given to another proposed chemical element (though not the element that today bears the name neptunium, which was discovered in 1940). So instead, Winkler named the new element germanium (from the Latin word, Germania, for Germany) in honor of his homeland. Argyrodite proved empirically to be Ag8GeS6.
Because this new element showed some similarities with the elements arsenic and antimony, its proper place in the periodic table was under consideration, but its similarities with Dmitri Mendeleev's predicted element "ekasilicon" confirmed that place on the periodic table. With further material from 500 kg of ore from the mines in Saxony, Winkler confirmed the chemical properties of the new element in 1887. He also determined an atomic weight of 72.32 by analyzing pure germanium tetrachloride (GeCl4), while Lecoq de Boisbaudran deduced 72.3 by a comparison of the lines in the spark spectrum of the element.
Winkler was able to prepare several new compounds of germanium, including fluorides, chlorides, sulfides, dioxide, and tetraethylgermane (Ge(C2H5)4), the first organogermane. The physical data from those compounds—which corresponded well with Mendeleev's predictions—made the discovery an important confirmation of Mendeleev's idea of element periodicity.
Until the late 1930s, germanium was thought to be a poorly conducting metal. Germanium did not become economically significant until after 1945 when its properties as an electronic semiconductor were recognized. During World War II, small amounts of germanium were used in some special electronic devices, mostly diodes. The first major use was the point-contact Schottky diodes for radar pulse detection during the War. The first silicon–germanium alloys were obtained in 1955. Before 1945, only a few hundred kilograms of germanium were produced in smelters each year, but by the end of the 1950s, the annual worldwide production had reached .
The development of the germanium transistor in 1948 opened the door to countless applications of solid state electronics. From 1950 through the early 1970s, this area provided an increasing market for germanium, but then high-purity silicon began replacing germanium in transistors, diodes, and rectifiers. For example, the company that became Fairchild Semiconductor was founded in 1957 with the express purpose of producing silicon transistors. Silicon has superior electrical properties, but it requires much greater purity, which could not be achieved commercially in the early years of semiconductor electronics.
Meanwhile, the demand for germanium for fiber optic communication networks, infrared night vision systems, and polymerization catalysts increased dramatically. These end uses represented 85% of worldwide germanium consumption in 2000. The US government even designated germanium as a strategic and critical material, calling for a 146 ton (132 tonne) supply in the national defense stockpile in 1987.
Germanium differs from silicon in that the supply is limited by the availability of exploitable sources, while the supply of silicon is limited only by production capacity since silicon comes from ordinary sand and quartz. While silicon could be bought in 1998 for less than $10 per kg, the price of germanium was almost $800 per kg.
Characteristics
Under standard conditions, germanium is a brittle, silvery-white semiconductor. This form constitutes an allotrope known as α-germanium, which has a metallic luster and a diamond cubic crystal structure, the same structure as silicon and diamond. In this form, germanium has a threshold displacement energy of . At pressures above 120 kbar, germanium becomes the metallic allotrope β-germanium with the same structure as β-tin. Like silicon, gallium, bismuth, antimony, and water, germanium is one of the few substances that expands as it solidifies (i.e. freezes) from the molten state.
Germanium is a semiconductor having an indirect bandgap, as is crystalline silicon. Zone refining techniques have led to the production of crystalline germanium for semiconductors that has an impurity of only one part in 10¹⁰, making it one of the purest materials ever obtained.
The first semi-metallic material discovered (in 2005) to become a superconductor in the presence of an extremely strong electromagnetic field was an alloy of germanium, uranium, and rhodium.
Pure germanium is known to spontaneously extrude very long screw dislocations, referred to as germanium whiskers. The growth of these whiskers is one of the primary reasons for the failure of older diodes and transistors made from germanium, as, depending on what they eventually touch, they may lead to an electrical short.
Chemistry
Elemental germanium starts to oxidize slowly in air at around 250 °C, forming GeO2. Germanium is insoluble in dilute acids and alkalis but dissolves slowly in hot concentrated sulfuric and nitric acids and reacts violently with molten alkalis to produce germanates ([GeO3]²⁻). Germanium occurs mostly in the oxidation state +4, although many +2 compounds are known. Other oxidation states are rare: +3 is found in compounds such as Ge2Cl6, and +3 and +1 are found on the surface of oxides, or negative oxidation states in germanides, such as −4 in Mg2Ge. Germanium cluster anions (Zintl ions) such as Ge4²⁻, Ge9⁴⁻, Ge9²⁻, and [(Ge9)2]⁶⁻ have been prepared by extraction from alloys containing alkali metals and germanium in liquid ammonia in the presence of ethylenediamine or a cryptand. The oxidation states of the element in these ions are not integers—similar to the ozonides O3⁻.
Two oxides of germanium are known: germanium dioxide (GeO2, germania) and germanium monoxide (GeO). The dioxide, GeO2, can be obtained by roasting germanium disulfide (GeS2), and is a white powder that is only slightly soluble in water but reacts with alkalis to form germanates. The monoxide, germanous oxide, can be obtained by the high-temperature reaction of GeO2 with elemental Ge. The dioxide (and the related oxides and germanates) exhibits the unusual property of having a high refractive index for visible light, but transparency to infrared light. Bismuth germanate, Bi4Ge3O12 (BGO), is used as a scintillator.
Binary compounds with other chalcogens are also known, such as the disulfide () and diselenide (), and the monosulfide (GeS), monoselenide (GeSe), and monotelluride (GeTe). GeS2 forms as a white precipitate when hydrogen sulfide is passed through strongly acid solutions containing Ge(IV). The disulfide is appreciably soluble in water and in solutions of caustic alkalis or alkaline sulfides. Nevertheless, it is not soluble in acidic water, which allowed Winkler to discover the element. By heating the disulfide in a current of hydrogen, the monosulfide (GeS) is formed, which sublimes in thin plates of a dark color and metallic luster, and is soluble in solutions of the caustic alkalis. Upon melting with alkaline carbonates and sulfur, germanium compounds form salts known as thiogermanates.
Four tetrahalides are known. Under normal conditions germanium tetraiodide (GeI4) is a solid, germanium tetrafluoride (GeF4) a gas and the others volatile liquids. For example, germanium tetrachloride, GeCl4, is obtained as a colorless fuming liquid boiling at 83.1 °C by heating the metal with chlorine. All the tetrahalides are readily hydrolyzed to hydrated germanium dioxide. GeCl4 is used in the production of organogermanium compounds. All four dihalides are known and in contrast to the tetrahalides are polymeric solids. Additionally Ge2Cl6 and some higher compounds of formula GenCl2n+2 are known. The unusual compound Ge6Cl16 has been prepared that contains the Ge5Cl12 unit with a neopentane structure.
Germane (GeH4) is a compound similar in structure to methane. Polygermanes—compounds that are similar to alkanes—with formula GenH2n+2 containing up to five germanium atoms are known. The germanes are less volatile and less reactive than their corresponding silicon analogues. GeH4 reacts with alkali metals in liquid ammonia to form white crystalline MGeH3 which contain the GeH3− anion. The germanium hydrohalides with one, two and three halogen atoms are colorless reactive liquids.
The first organogermanium compound was synthesized by Winkler in 1887; the reaction of germanium tetrachloride with diethylzinc yielded tetraethylgermane. Organogermanes of the type R4Ge (where R is an alkyl group), such as tetramethylgermane and tetraethylgermane, are prepared from the cheapest available germanium precursor, germanium tetrachloride, and alkyl nucleophiles. Organic germanium hydrides such as isobutylgermane were found to be less hazardous and may be used as a liquid substitute for toxic germane gas in semiconductor applications. Many germanium reactive intermediates are known: germyl free radicals, germylenes (similar to carbenes), and germynes (similar to carbynes). The organogermanium compound 2-carboxyethylgermasesquioxane was first reported in the 1970s and was for a time used as a dietary supplement, thought possibly to have anti-tumor qualities.
Using a ligand called Eind (1,1,3,3,5,5,7,7-octaethyl-s-hydrindacen-4-yl), germanium is able to form a double bond with oxygen (germanone). Germanium hydrides such as germane (GeH4) are very flammable and even explosive when mixed with air.
Isotopes
Germanium occurs in five natural isotopes: 70Ge, 72Ge, 73Ge, 74Ge, and 76Ge. Of these, 76Ge is very slightly radioactive, decaying by double beta decay with an extremely long half-life. 74Ge is the most common isotope, having a natural abundance of approximately 36%, and 76Ge is the least common, with a natural abundance of approximately 7%. When bombarded with alpha particles, germanium generates a stable product while releasing high-energy electrons in the process; because of this, it is used in combination with radon for nuclear batteries.
At least 27 radioisotopes have also been synthesized, ranging in atomic mass from 58 to 89. The most stable of these is 68Ge, which decays by electron capture with a half-life of about 271 days. The least stable have half-lives of well under a second. While most of germanium's radioisotopes decay by beta decay, some of the lightest also decay by delayed proton emission, and several of the heaviest isotopes exhibit minor delayed neutron emission decay paths.
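For orientation, the exponential decay law links these half-lives to the fraction of a sample remaining after a given time. The short sketch below is illustrative only: the roughly 271-day half-life of 68Ge is the value quoted above, and the one-year storage interval is an arbitrary choice.

```python
# Fraction of a radioisotope remaining after time t, given its half-life:
# N(t)/N0 = 2 ** (-t / t_half)

def fraction_remaining(t_days: float, half_life_days: float) -> float:
    """Return the surviving fraction of nuclei after t_days."""
    return 2 ** (-t_days / half_life_days)

# Illustrative values: 68Ge half-life of ~271 days (quoted above),
# evaluated after one year of storage.
ge68_half_life = 271.0
print(f"68Ge left after 1 year: {fraction_remaining(365.0, ge68_half_life):.1%}")
# roughly 39% of the original 68Ge remains
```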
Occurrence
Germanium is created by stellar nucleosynthesis, mostly by the s-process in asymptotic giant branch stars. The s-process is a slow neutron capture of lighter elements inside pulsating red giant stars. Germanium has been detected in some of the most distant stars and in the atmosphere of Jupiter.
Germanium's abundance in the Earth's crust is approximately 1.6 ppm. Only a few minerals, such as argyrodite, briartite, germanite, renierite, and sphalerite, contain appreciable amounts of germanium, and only a few of them (especially germanite) are, very rarely, found in mineable amounts. Some zinc–copper–lead ore bodies contain enough germanium to justify extraction from the final ore concentrate. An unusual natural enrichment process causes a high content of germanium in some coal seams, discovered by Victor Moritz Goldschmidt during a broad survey for germanium deposits. The highest concentration ever found was in Hartley coal ash, with as much as 1.6% germanium. The coal deposits near Xilinhaote, Inner Mongolia, contain an estimated 1600 tonnes of germanium.
Production
About 118 tonnes of germanium were produced worldwide in 2011, mostly in China (80 t), Russia (5 t), and the United States (3 t). Germanium is recovered as a by-product from sphalerite zinc ores, where it is concentrated in amounts as great as 0.3%, especially from low-temperature, sediment-hosted, massive Zn–Pb–Cu(–Ba) deposits and carbonate-hosted Zn–Pb deposits. A recent study found that at least 10,000 t of extractable germanium is contained in known zinc reserves, particularly those hosted by Mississippi Valley-type deposits, and at least 112,000 t in coal reserves. In 2007, 35% of the demand was met by recycled germanium.
While germanium is produced mainly from sphalerite, it is also found in silver, lead, and copper ores. Another source is the fly ash of power plants fueled by coal deposits that contain germanium; Russia and China have used this as a germanium source. Russia's deposits are located in the far east of Sakhalin Island and northeast of Vladivostok. The deposits in China are located mainly in the lignite mines near Lincang, Yunnan; coal is also mined near Xilinhaote, Inner Mongolia.
The ore concentrates are mostly sulfidic; they are converted to the oxides by heating under air in a process known as roasting:
GeS2 + 3 O2 → GeO2 + 2 SO2
Some of the germanium is left in the dust produced, while the rest is converted to germanates, which are then leached (together with zinc) from the cinder by sulfuric acid. After neutralization, only the zinc stays in solution while germanium and other metals precipitate. After some of the zinc in the precipitate is removed by the Waelz process, the resulting Waelz oxide is leached a second time. The dioxide is obtained as a precipitate and converted with chlorine gas or hydrochloric acid to germanium tetrachloride, which has a low boiling point and can be isolated by distillation:
GeO2 + 4 HCl → GeCl4 + 2 H2O
GeO2 + 2 Cl2 → GeCl4 + O2
Germanium tetrachloride is either hydrolyzed to the oxide (GeO2) or purified by fractional distillation and then hydrolyzed. The highly pure GeO2 is now suitable for the production of germanium glass. It is reduced to the element by reacting it with hydrogen, producing germanium suitable for infrared optics and semiconductor production:
GeO2 + 2 H2 → Ge + 2 H2O
The germanium for steel production and other industrial processes is normally reduced using carbon:
GeO2 + C → Ge + CO2
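As a rough illustration of the mass balance in these reductions, the theoretical germanium yield per kilogram of dioxide follows directly from the molar masses. The sketch below uses approximate atomic masses and ignores process losses, so the number is indicative only and is not a figure from the text.

```python
# Theoretical Ge yield from the hydrogen or carbon reduction of GeO2,
# using approximate molar masses in g/mol.
M_GE = 72.63          # germanium (approximate)
M_O = 16.00           # oxygen (approximate)
M_GEO2 = M_GE + 2 * M_O

ge_per_kg_geo2 = 1000.0 * M_GE / M_GEO2   # grams of Ge per kg of GeO2
print(f"Theoretical yield: {ge_per_kg_geo2:.0f} g Ge per kg GeO2")
# roughly 694 g of germanium per kilogram of dioxide, before any losses
```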
Applications
The major end uses for germanium in 2007, worldwide, were estimated to be: 35% for fiber-optics, 30% infrared optics, 15% polymerization catalysts, and 15% electronics and solar electric applications. The remaining 5% went into such uses as phosphors, metallurgy, and chemotherapy.
Optics
The notable properties of germania (GeO2) are its high index of refraction and its low optical dispersion. These make it especially useful for wide-angle camera lenses, microscopy, and the core part of optical fibers. It has replaced titania as the dopant for silica fiber, eliminating the subsequent heat treatment that made the fibers brittle. At the end of 2002, the fiber-optics industry consumed 60% of the annual germanium use in the United States, but this is less than 10% of worldwide consumption. GeSbTe is a phase-change material exploited for its optical properties, for example in rewritable DVDs.
Because germanium is transparent at infrared wavelengths, it is an important infrared optical material that can be readily cut and polished into lenses and windows. It is especially used as the front optic in thermal imaging cameras working in the 8 to 14 micron range for passive thermal imaging and for hot-spot detection in military, mobile night vision, and firefighting applications. It is used in infrared spectroscopes and other optical equipment that require extremely sensitive infrared detectors. Germanium has a very high refractive index (4.0) and must be coated with anti-reflection agents. In particular, a very hard antireflection coating of diamond-like carbon (DLC), with a refractive index of 2.0, is a good optical match and produces a diamond-hard surface that can withstand much environmental abuse.
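The remark that DLC (n ≈ 2.0) is a good match for germanium (n ≈ 4.0) follows from standard single-layer antireflection theory, which the text does not spell out: a quarter-wave coating cancels reflection when its index equals the geometric mean of the indices of the two surrounding media, √(1.0 × 4.0) = 2.0. The sketch below applies the textbook normal-incidence formulas; it is illustrative and does not describe any particular optic.

```python
import math

n_air, n_ge = 1.0, 4.0    # approximate refractive indices of air and germanium

# Reflectance of a bare germanium surface at normal incidence (Fresnel formula).
r_bare = ((n_ge - n_air) / (n_ge + n_air)) ** 2

# Ideal index for a single quarter-wave antireflection layer.
n_ideal = math.sqrt(n_air * n_ge)

# Reflectance with a quarter-wave coating of index n_c at the design wavelength.
def coated_reflectance(n_c: float) -> float:
    return ((n_air * n_ge - n_c ** 2) / (n_air * n_ge + n_c ** 2)) ** 2

print(f"bare Ge reflectance: {r_bare:.0%}")                    # ~36% per surface
print(f"ideal coating index: {n_ideal:.1f}")                   # 2.0, matching DLC
print(f"coated reflectance:  {coated_reflectance(2.0):.0%}")   # ~0% at the design wavelength
```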
Electronics
Germanium can be alloyed with silicon, and silicon–germanium alloys are rapidly becoming an important semiconductor material for high-speed integrated circuits. Circuits using the properties of Si-SiGe heterojunctions can be much faster than those using silicon alone. The SiGe chips, with high-speed properties, can be made with low-cost, well-established production techniques of the silicon chip industry.
High-efficiency solar panels are a major use of germanium. Because germanium and gallium arsenide have nearly identical lattice constants, germanium substrates can be used to make gallium arsenide solar cells. Germanium is the substrate of the wafers for high-efficiency multijunction photovoltaic cells for space applications, such as the Mars Exploration Rovers, which use triple-junction gallium arsenide on germanium cells. High-brightness LEDs, used for automobile headlights and to backlight LCD screens, are also an important application.
Germanium-on-insulator (GeOI) substrates are seen as a potential replacement for silicon on miniaturized chips, and CMOS circuits based on GeOI substrates have recently been reported. Other uses in electronics include phosphors in fluorescent lamps and solid-state light-emitting diodes (LEDs). Germanium transistors are still used in some effects pedals by musicians who wish to reproduce the distinctive tonal character of the "fuzz" tone from the early rock and roll era, most notably the Dallas Arbiter Fuzz Face.
Germanium has been studied as a potential material for implantable bioelectronic sensors that are resorbed in the body without generating harmful hydrogen gas, replacing zinc oxide- and indium gallium zinc oxide-based implementations.
Other uses
Germanium dioxide is also used in catalysts for polymerization in the production of polyethylene terephthalate (PET). The high brilliance of this polyester is especially favored for PET bottles marketed in Japan. In the United States, germanium is not used for polymerization catalysts.
Due to the similarity between silica (SiO2) and germanium dioxide (GeO2), the silica stationary phase in some gas chromatography columns can be replaced by GeO2.
In recent years germanium has seen increasing use in precious metal alloys. In sterling silver alloys, for instance, it reduces firescale, increases tarnish resistance, and improves precipitation hardening. A tarnish-proof silver alloy trademarked Argentium contains 1.2% germanium.
Semiconductor detectors made of single crystal high-purity germanium can precisely identify radiation sources—for example in airport security. Germanium is useful for monochromators for beamlines used in single crystal neutron scattering and synchrotron X-ray diffraction. The reflectivity has advantages over silicon in neutron and high energy X-ray applications. Crystals of high purity germanium are used in detectors for gamma spectroscopy and the search for dark matter. Germanium crystals are also used in X-ray spectrometers for the determination of phosphorus, chlorine and sulfur.
Germanium is emerging as an important material for spintronics and spin-based quantum computing applications. In 2010, researchers demonstrated room-temperature spin transport, and more recently donor electron spins in germanium have been shown to have very long coherence times.
Strategic importance
Due to its use in advanced electronics and optics, germanium is considered a technology-critical element (by, for example, the European Union), essential to the green and digital transitions. Because China controls about 60% of global germanium production, it holds a dominant position over the world's supply chains.
On 3 July 2023, China suddenly imposed restrictions on the export of germanium (and gallium), ratcheting up trade tensions with Western allies. Invoking "national security interests", the Chinese Ministry of Commerce announced that companies intending to sell products containing germanium would need an export licence. The products and compounds targeted are germanium dioxide, germanium epitaxial growth substrates, germanium ingots, germanium metal, germanium tetrachloride, and zinc germanium phosphide. China regards such products as "dual-use" items that may have military purposes and therefore warrant an extra layer of oversight.
The dispute opened a new chapter in the increasingly fierce technology race that has pitted the United States, and to a lesser extent Europe, against China. The US wants its allies to heavily curb, or outright prohibit, sales of advanced electronic components to the Chinese market to prevent Beijing from securing global technology supremacy. China denied any tit-for-tat intention behind the germanium export restrictions.
Following China's export restrictions, Russian state-owned company Rostec announced an increase in germanium production to meet domestic demand.
Germanium and health
Germanium is not considered essential to the health of plants or animals. Germanium in the environment has little or no health impact. This is primarily because it usually occurs only as a trace element in ores and carbonaceous materials, and the various industrial and electronic applications involve very small quantities that are not likely to be ingested. For similar reasons, end-use germanium has little impact on the environment as a biohazard. Some reactive intermediate compounds of germanium are poisonous (see precautions, below).
Germanium supplements, made from both organic and inorganic germanium, have been marketed as an alternative medicine capable of treating leukemia and lung cancer. There is, however, no medical evidence of benefit; some evidence suggests that such supplements are actively harmful. U.S. Food and Drug Administration (FDA) research has concluded that inorganic germanium, when used as a nutritional supplement, "presents potential human health hazard".
Some germanium compounds have been administered by alternative medical practitioners as non-FDA-allowed injectable solutions. Soluble inorganic forms of germanium used at first, notably the citrate-lactate salt, resulted in some cases of renal dysfunction, hepatic steatosis, and peripheral neuropathy in individuals using them over a long term. Plasma and urine germanium concentrations in these individuals, several of whom died, were several orders of magnitude greater than endogenous levels. A more recent organic form, beta-carboxyethylgermanium sesquioxide (propagermanium), has not exhibited the same spectrum of toxic effects.
Certain compounds of germanium have low toxicity to mammals, but have toxic effects against certain bacteria.
Precautions for chemically reactive germanium compounds
While use of germanium itself does not require precautions, some of germanium's artificially produced compounds are quite reactive and present an immediate hazard to human health on exposure. For example, germanium tetrachloride and germane (GeH4) are a liquid and a gas, respectively, that can be very irritating to the eyes, skin, lungs, and throat.
| Physical sciences | Chemical elements_2 | null |
12243 | https://en.wikipedia.org/wiki/Gadolinium | Gadolinium | Gadolinium is a chemical element; it has symbol Gd and atomic number 64. Gadolinium is a silvery-white metal when oxidation is removed. It is a malleable and ductile rare-earth element. Gadolinium reacts slowly with atmospheric oxygen or moisture to form a black coating. Below its Curie point of about 20 °C (293 K), gadolinium is ferromagnetic, with an attraction to a magnetic field higher than that of nickel; above this temperature it is the most paramagnetic element. It is found in nature only in an oxidized form. When separated, it usually has impurities of the other rare earths because of their similar chemical properties.
Gadolinium was discovered in 1880 by Jean Charles de Marignac, who detected its oxide by using spectroscopy. It is named after the mineral gadolinite, one of the minerals in which gadolinium is found, itself named for the Finnish chemist Johan Gadolin. Pure gadolinium was first isolated by the chemist Paul-Émile Lecoq de Boisbaudran around 1886.
Gadolinium possesses unusual metallurgical properties, to the extent that as little as 1% of gadolinium can significantly improve the workability and resistance to oxidation at high temperatures of iron, chromium, and related metals. Gadolinium as a metal or a salt absorbs neutrons and is, therefore, used sometimes for shielding in neutron radiography and in nuclear reactors.
Like most of the rare earths, gadolinium forms trivalent ions with fluorescent properties, and salts of gadolinium(III) are used as phosphors in various applications.
Gadolinium(III) ions in water-soluble salts are highly toxic to mammals. However, chelated gadolinium(III) compounds prevent the gadolinium(III) from being exposed to the organism, and the majority is excreted by healthy kidneys before it can deposit in tissues. Because of its paramagnetic properties, solutions of chelated organic gadolinium complexes are used as intravenously administered gadolinium-based MRI contrast agents in medical magnetic resonance imaging.
The main uses of gadolinium, in addition to use as a contrast agent for MRI scans, are in nuclear reactors, in alloys, as a phosphor in medical imaging, as a gamma ray emitter, in electronic devices, in optical devices, and in superconductors.
Characteristics
Physical properties
Gadolinium is the eighth member of the lanthanide series. In the periodic table, it appears between the elements europium to its left and terbium to its right, and above the actinide curium. It is a silvery-white, malleable, ductile rare-earth element. Its 64 electrons are arranged in the configuration of [Xe]4f75d16s2, of which the ten 4f, 5d, and 6s electrons are valence.
Like most other metals in the lanthanide series, gadolinium usually has three electrons available as valence electrons. The remaining 4f electrons are too strongly bound: this is because the 4f orbitals penetrate the most through the inert xenon core of electrons to the nucleus, followed by 5d and 6s, and this increases with higher ionic charge. Gadolinium crystallizes in the hexagonal close-packed α-form at room temperature. At high temperatures it transforms into its β-form, which has a body-centered cubic structure.
The isotope gadolinium-157 has the highest thermal-neutron capture cross-section of any stable nuclide: about 259,000 barns. Only xenon-135 has a higher capture cross-section, about 2.0 million barns, but that isotope is radioactive.
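To give a sense of scale for a cross-section of 259,000 barns, one can estimate how far a thermal neutron travels in gadolinium metal before being captured by 157Gd alone. The sketch below uses assumed handbook values for the density, molar mass, and 157Gd abundance, which are not given in the text, and it ignores capture by the other isotopes, so the result is only an order-of-magnitude estimate.

```python
# Rough mean free path of a thermal neutron against capture by 157Gd
# in natural gadolinium metal (all material constants are assumed values).
AVOGADRO = 6.022e23          # atoms per mol
rho = 7.9                    # g/cm^3, assumed density of Gd metal
molar_mass = 157.25          # g/mol, assumed molar mass of natural Gd
abundance_157 = 0.157        # assumed natural abundance of 157Gd
sigma = 259_000 * 1e-24      # capture cross-section in cm^2 (1 barn = 1e-24 cm^2)

n_gd157 = rho * AVOGADRO / molar_mass * abundance_157   # 157Gd atoms per cm^3
macroscopic_sigma = n_gd157 * sigma                     # cm^-1
mean_free_path_um = 1.0 / macroscopic_sigma * 1e4       # convert cm to micrometres

print(f"capture mean free path ≈ {mean_free_path_um:.0f} µm")
# on the order of ten micrometres: even a thin foil absorbs thermal neutrons
```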
Gadolinium is believed to be ferromagnetic at temperatures below its Curie point of about 20 °C (293 K) and is strongly paramagnetic above this temperature. In fact, at body temperature, gadolinium exhibits the greatest paramagnetic effect of any element. There is evidence that gadolinium is a helical antiferromagnet, rather than a ferromagnet, below this temperature. Gadolinium demonstrates a magnetocaloric effect whereby its temperature increases when it enters a magnetic field and decreases when it leaves the magnetic field. A significant magnetocaloric effect is observed at higher temperatures, up to about 300 kelvins, in the compounds Gd5(Si1−xGex)4.
Individual gadolinium atoms can be isolated by encapsulating them into fullerene molecules, where they can be visualized with a transmission electron microscope. Individual Gd atoms and small Gd clusters can be incorporated into carbon nanotubes.
Chemical properties
Gadolinium combines with most elements to form Gd(III) derivatives. It also combines with nitrogen, carbon, sulfur, phosphorus, boron, selenium, silicon, and arsenic at elevated temperatures, forming binary compounds.
Unlike the other rare-earth elements, metallic gadolinium is relatively stable in dry air. However, it tarnishes quickly in moist air, forming a loosely-adhering gadolinium(III) oxide (Gd2O3):
4 Gd + 3 O2 → 2 Gd2O3,
which spalls off, exposing more surface to oxidation.
Gadolinium is a strong reducing agent, which reduces oxides of several metals into their elements. Gadolinium is quite electropositive and reacts slowly with cold water and quite quickly with hot water to form gadolinium(III) hydroxide (Gd(OH)3):
2 Gd + 6 H2O → 2 Gd(OH)3 + 3 H2.
Gadolinium metal is attacked readily by dilute sulfuric acid to form solutions containing the colorless Gd(III) ions, which exist as [Gd(H2O)9]3+ complexes:
2 Gd + 3 H2SO4 + 18 H2O → 2 [Gd(H2O)9]3+ + 3 + 3 H2.
Chemical compounds
In the great majority of its compounds, like many rare-earth metals, gadolinium adopts the oxidation state +3. However, gadolinium can be found on rare occasions in the 0, +1 and +2 oxidation states. All four trihalides are known. All are white, except for the iodide, which is yellow. Most commonly encountered of the halides is gadolinium(III) chloride (GdCl3). The oxide dissolves in acids to give the salts, such as gadolinium(III) nitrate.
Gadolinium(III), like most lanthanide ions, forms complexes with high coordination numbers. This tendency is illustrated by the use of the chelating agent DOTA, an octadentate ligand. Salts of [Gd(DOTA)]− are useful in magnetic resonance imaging. A variety of related chelate complexes have been developed, including gadodiamide.
Reduced gadolinium compounds are known, especially in the solid state. Gadolinium(II) halides are obtained by heating Gd(III) halides in the presence of metallic Gd in tantalum containers. Gadolinium also forms the sesquichloride Gd2Cl3, which can be further reduced to GdCl by annealing at high temperature. This gadolinium(I) chloride forms platelets with a layered, graphite-like structure.
Isotopes
Naturally occurring gadolinium is composed of six stable isotopes, 154Gd, 155Gd, 156Gd, 157Gd, 158Gd and 160Gd, and one radioisotope, 152Gd, with the isotope 158Gd being the most abundant (24.8% natural abundance). The predicted double beta decay of 160Gd has never been observed (an experimental lower limit on its half-life of more than 1.3×1021 years has been measured).
Thirty-three radioisotopes of gadolinium have been observed, with the most stable being 152Gd (naturally occurring), with a half-life of about 1.08×1014 years, and 150Gd, with a half-life of 1.79×106 years. All of the remaining radioactive isotopes have half-lives of less than 75 years. The majority of these have half-lives of less than 25 seconds. Gadolinium isotopes have four metastable isomers, with the most stable being 143mGd (t1/2= 110 seconds), 145mGd (t1/2= 85 seconds) and 141mGd (t1/2= 24.5 seconds).
The isotopes with atomic masses lower than the most abundant stable isotope, 158Gd, primarily decay by electron capture to isotopes of europium. At higher atomic masses, the primary decay mode is beta decay, and the primary products are isotopes of terbium.
History
Gadolinium is named after the mineral gadolinite. Gadolinite was first chemically analyzed by the Finnish chemist Johan Gadolin in 1794. In 1802 German chemist Martin Klaproth gave gadolinite its name. In 1880, the Swiss chemist Jean Charles Galissard de Marignac observed the spectroscopic lines from gadolinium in samples of gadolinite (which actually contains relatively little gadolinium, but enough to show a spectrum) and in the separate mineral cerite. The latter mineral proved to contain far more of the element with the new spectral line. De Marignac eventually separated a mineral oxide from cerite, which he realized was the oxide of this new element. He named the oxide "gadolinia". Because he realized that "gadolinia" was the oxide of a new element, he is credited with the discovery of gadolinium. The French chemist Paul-Émile Lecoq de Boisbaudran carried out the separation of gadolinium metal from gadolinia in 1886.
Occurrence
Gadolinium is a constituent in many minerals, such as monazite and bastnäsite. The metal is too reactive to exist naturally. Paradoxically, as noted above, the mineral gadolinite actually contains only traces of this element. The abundance in the Earth's crust is about 6.2 mg/kg. The main mining areas are in China, the US, Brazil, Sri Lanka, India, and Australia with reserves expected to exceed one million tonnes. World production of pure gadolinium is about 400 tonnes per year. The only known mineral with essential gadolinium, lepersonnite-(Gd), is very rare.
Production
Gadolinium is produced both from monazite and bastnäsite.
Crushed minerals are extracted with hydrochloric acid or sulfuric acid, which converts the insoluble oxides into soluble chlorides or sulfates.
The acidic filtrates are partially neutralized with caustic soda to pH 3–4. Thorium precipitates as its hydroxide, and is then removed.
The remaining solution is treated with ammonium oxalate to convert rare earths into their insoluble oxalates. The oxalates are converted to oxides by heating.
The oxides are dissolved in nitric acid that excludes one of the main components, cerium, whose oxide is insoluble in HNO3.
The solution is treated with magnesium nitrate to produce a crystallized mixture of double salts of gadolinium, samarium and europium.
The salts are separated by ion exchange chromatography.
The rare-earth ions are then selectively washed out by a suitable complexing agent.
Gadolinium metal is obtained from its oxide or salts by heating with calcium in an argon atmosphere. Sponge gadolinium can be produced by reducing molten GdCl3 with an appropriate metal at temperatures below the melting point of Gd and at reduced pressure.
Applications
Gadolinium has no large-scale applications, but it has a variety of specialized uses.
Neutron absorber
Because gadolinium has a high neutron cross-section, it is effective for use with neutron radiography and in shielding of nuclear reactors. It is used as a secondary, emergency shut-down measure in some nuclear reactors, particularly of the CANDU reactor type. Gadolinium is used in nuclear marine propulsion systems as a burnable poison. The use of gadolinium in neutron capture therapy to target tumors has been investigated, and gadolinium-containing compounds have proven promising.
Alloys
Gadolinium possesses unusual metallurgic properties, with as little as 1% of gadolinium improving the workability of iron, chromium, and related alloys, and their resistance to high temperatures and oxidation.
Magnetic contrast agent
Gadolinium is paramagnetic at room temperature, with a ferromagnetic Curie point of about 20 °C (293 K). Paramagnetic ions, such as gadolinium, increase nuclear spin relaxation rates, making gadolinium useful as a contrast agent for magnetic resonance imaging (MRI). Solutions of organic gadolinium complexes and gadolinium compounds are used as intravenous contrast agents to enhance images in medical magnetic resonance imaging and magnetic resonance angiography (MRA) procedures. Magnevist is the most widespread example. Nanotubes packed with gadolinium, called "gadonanotubes", are 40 times more effective than the usual gadolinium contrast agent. Traditional gadolinium-based contrast agents are un-targeted, generally distributing throughout the body after injection, but will not readily cross the intact blood–brain barrier. Brain tumors, and other disorders that degrade the blood–brain barrier, allow these agents to penetrate into the brain and facilitate their detection by contrast-enhanced MRI. Similarly, delayed gadolinium-enhanced magnetic resonance imaging of cartilage uses an ionic contrast agent, originally Magnevist, that is excluded from healthy cartilage by electrostatic repulsion but will enter proteoglycan-depleted cartilage in diseases such as osteoarthritis.
Phosphors
Gadolinium is used as a phosphor in medical imaging. It is contained in the phosphor layer of X-ray detectors, suspended in a polymer matrix. Terbium-doped gadolinium oxysulfide (Gd2O2S:Tb) at the phosphor layer converts the X-rays released from the source into light. This material emits green light at 540 nm because of the presence of Tb3+, which is very useful for enhancing the imaging quality. The energy conversion of Gd is up to 20%, which means that one fifth of the X-ray energy striking the phosphor layer can be converted into visible photons. Gadolinium oxyorthosilicate (Gd2SiO5, GSO; usually doped by 0.1–1.0% of Ce) is a single crystal that is used as a scintillator in medical imaging such as positron emission tomography, and for detecting neutrons.
Gadolinium compounds were also used for making green phosphors for color TV tubes.
Gamma ray emitter
Gadolinium-153 is produced in a nuclear reactor from elemental europium or enriched gadolinium targets. It has a half-life of about 240 days and emits gamma radiation with strong peaks at 41 keV and 102 keV. It is used in many quality-assurance applications, such as line sources and calibration phantoms, to ensure that nuclear-medicine imaging systems operate correctly and produce useful images of radioisotope distribution inside the patient. It is also used as a gamma-ray source in X-ray absorption measurements and in bone density gauges for osteoporosis screening.
Electronic and optical devices
Gadolinium is used for making gadolinium yttrium garnet (Gd:Y3Al5O12), which has microwave applications and is used in fabrication of various optical components and as substrate material for magneto-optical films.
Electrolyte in fuel cells
Gadolinium can also serve as an electrolyte in solid oxide fuel cells (SOFCs). Using gadolinium as a dopant for materials like cerium oxide (in the form of gadolinium-doped ceria) gives an electrolyte having both high ionic conductivity and low operating temperatures.
Magnetic refrigeration
Research is being conducted on magnetic refrigeration near room temperature, which could provide significant efficiency and environmental advantages over conventional refrigeration methods. Gadolinium-based materials, such as Gd5(SixGe1−x)4, are currently the most promising materials, owing to their high Curie temperature and giant magnetocaloric effect. Pure Gd itself exhibits a large magnetocaloric effect near its Curie temperature of about 20 °C (293 K), and this has sparked interest in producing Gd alloys having a larger effect and a tunable Curie temperature. In Gd5(SixGe1−x)4, the Si and Ge compositions can be varied to adjust the Curie temperature.
Superconductors
Gadolinium barium copper oxide (GdBCO) is a superconductor with applications in superconducting motors or generators, such as in wind turbines. It can be manufactured in the same way as the most widely researched cuprate high-temperature superconductor, yttrium barium copper oxide (YBCO), and uses an analogous chemical composition (GdBa2Cu3O7−δ). It was used in 2014 to set a new world record for the highest trapped magnetic field in a bulk high-temperature superconductor, with a field of 17.6 T trapped within two GdBCO bulks.
Asthma treatment
Gadolinium is being investigated as a possible treatment for preventing lung tissue scarring in asthma. A positive effect has been observed in mice.
Niche and former applications
Gadolinium is used for antineutrino detection in the Japanese Super-Kamiokande detector in order to sense supernova explosions. Low-energy neutrons that arise from antineutrino absorption by protons in the detector's ultrapure water are captured by gadolinium nuclei, which subsequently emit gamma rays that are detected as part of the antineutrino signature.
Gadolinium gallium garnet (GGG, Gd3Ga5O12) was used for imitation diamonds and for computer bubble memory.
Safety
As a free ion, gadolinium is often reported to be highly toxic, but MRI contrast agents are chelated compounds and are considered safe enough to be used in most persons. The toxicity of free gadolinium ions in animals is due to interference with a number of calcium-ion-channel-dependent processes. The 50% lethal dose is about 0.34 mmol/kg (IV, mouse) or 100–200 mg/kg. Toxicity studies in rodents show that chelation of gadolinium (which also improves its solubility) decreases its toxicity with regard to the free ion by a factor of 31 (i.e., the lethal dose for the Gd-chelate increases 31-fold). It is therefore believed that the clinical toxicity of gadolinium-based contrast agents (GBCAs) in humans will depend on the strength of the chelating agent; however, this research is still not complete. About a dozen different Gd-chelated agents have been approved as MRI contrast agents around the world.
Use of gadolinium-based contrast agents results in deposition of gadolinium in tissues of the brain, bone, skin, and other tissues in amounts that depend on kidney function, structure of the chelates (linear or macrocyclic) and the dose administered. In patients with kidney failure, there is a risk of a rare but serious illness called nephrogenic systemic fibrosis (NSF) that is caused by the use of gadolinium-based contrast agents. The disease resembles scleromyxedema and to some extent scleroderma. It may occur months after a contrast agent has been injected. Its association with gadolinium and not the carrier molecule is confirmed by its occurrence with various contrast materials in which gadolinium is carried by very different carrier molecules. Because of the risk of NSF, use of these agents is not recommended for any individual with end-stage kidney failure as they may require emergent dialysis.
Included in the current guidelines from the Canadian Association of Radiologists are that dialysis patients should receive gadolinium agents only where essential and that they should receive dialysis after the exam. If a contrast-enhanced MRI must be performed on a dialysis patient, it is recommended that certain high-risk contrast agents be avoided but not that a lower dose be considered. The American College of Radiology recommends that contrast-enhanced MRI examinations be performed as closely before dialysis as possible as a precautionary measure, although this has not been proven to reduce the likelihood of developing NSF. The FDA recommends that potential for gadolinium retention be considered when choosing the type of GBCA used in patients requiring multiple lifetime doses, pregnant women, children, and patients with inflammatory conditions.
Anaphylactoid reactions are rare, occurring in approximately 0.03–0.1%.
Long-term environmental impacts of gadolinium contamination due to human usage are a topic of ongoing research.
Biological use
Gadolinium has no known native biological role, but its compounds are used as research tools in biomedicine. Gd3+ compounds are components of MRI contrast agents. It is used in various ion channel electrophysiology experiments to block sodium leak channels and stretch activated ion channels. Gadolinium has recently been used to measure the distance between two points in a protein via electron paramagnetic resonance, something that gadolinium is especially amenable to thanks to EPR sensitivity at w-band (95 GHz) frequencies.
| Physical sciences | Chemical elements_2 | null |
12266 | https://en.wikipedia.org/wiki/Genetics | Genetics | Genetics is the study of genes, genetic variation, and heredity in organisms. It is an important branch in biology because heredity is vital to organisms' evolution. Gregor Mendel, a Moravian Augustinian friar working in the 19th century in Brno, was the first to study genetics scientifically. Mendel studied "trait inheritance", patterns in the way traits are handed down from parents to offspring over time. He observed that organisms (pea plants) inherit traits by way of discrete "units of inheritance". This term, still used today, is a somewhat ambiguous definition of what is referred to as a gene.
Trait inheritance and molecular inheritance mechanisms of genes are still primary principles of genetics in the 21st century, but modern genetics has expanded to study the function and behavior of genes. Gene structure and function, variation, and distribution are studied within the context of the cell, the organism (e.g. dominance), and within the context of a population. Genetics has given rise to a number of subfields, including molecular genetics, epigenetics, and population genetics. Organisms studied within the broad field span the domains of life (archaea, bacteria, and eukarya).
Genetic processes work in combination with an organism's environment and experiences to influence development and behavior, often referred to as nature versus nurture. The intracellular or extracellular environment of a living cell or organism may increase or decrease gene transcription. A classic example is two seeds of genetically identical corn, one placed in a temperate climate and one in an arid climate (lacking sufficient rainfall). While the average height the two corn stalks could grow to is genetically determined, the one in the arid climate only grows to half the height of the one in the temperate climate due to lack of water and nutrients in its environment.
Etymology
The word genetics stems from the ancient Greek γενετικός (genetikos), meaning "genitive"/"generative", which in turn derives from γένεσις (genesis), meaning "origin".
History
The observation that living things inherit traits from their parents has been used since prehistoric times to improve crop plants and animals through selective breeding. The modern science of genetics, seeking to understand this process, began with the work of the Augustinian friar Gregor Mendel in the mid-19th century.
Prior to Mendel, Imre Festetics, a Hungarian noble who lived in Kőszeg, was the first to use the word "genetic" in a hereditarian context, and he is considered the first geneticist. He described several rules of biological inheritance in his work The genetic laws of nature (Die genetischen Gesetze der Natur, 1819). His second law is the same as the one Mendel published. In his third law, he developed the basic principles of mutation (he can be considered a forerunner of Hugo de Vries). Festetics argued that changes observed in the generation of farm animals, plants, and humans are the result of scientific laws. Festetics empirically deduced that organisms inherit their characteristics, not acquire them. He recognized recessive traits and inherent variation by postulating that traits of past generations could reappear later, and that organisms could produce progeny with different attributes. These observations represent an important prelude to Mendel's theory of particulate inheritance insofar as they feature a transition of heredity from its status as myth to that of a scientific discipline, by providing a fundamental theoretical basis for genetics in the twentieth century.
Other theories of inheritance preceded Mendel's work. A popular theory during the 19th century, and implied by Charles Darwin's 1859 On the Origin of Species, was blending inheritance: the idea that individuals inherit a smooth blend of traits from their parents. Mendel's work provided examples where traits were definitely not blended after hybridization, showing that traits are produced by combinations of distinct genes rather than a continuous blend. Blending of traits in the progeny is now explained by the action of multiple genes with quantitative effects. Another theory that had some support at that time was the inheritance of acquired characteristics: the belief that individuals inherit traits strengthened by their parents. This theory (commonly associated with Jean-Baptiste Lamarck) is now known to be wrong—the experiences of individuals do not affect the genes they pass to their children. Other theories included Darwin's pangenesis (which had both acquired and inherited aspects) and Francis Galton's reformulation of pangenesis as both particulate and inherited.
Mendelian genetics
Modern genetics started with Mendel's studies of the nature of inheritance in plants. In his paper "Versuche über Pflanzenhybriden" ("Experiments on Plant Hybridization"), presented in 1865 to the Naturforschender Verein (Society for Research in Nature) in Brno, Mendel traced the inheritance patterns of certain traits in pea plants and described them mathematically. Although this pattern of inheritance could only be observed for a few traits, Mendel's work suggested that heredity was particulate, not acquired, and that the inheritance patterns of many traits could be explained through simple rules and ratios.
The importance of Mendel's work did not gain wide understanding until 1900, after his death, when Hugo de Vries and other scientists rediscovered his research. William Bateson, a proponent of Mendel's work, coined the word genetics in 1905. The adjective genetic, derived from the Greek word genesis—γένεσις, "origin", predates the noun and was first used in a biological sense in 1860. Bateson both acted as a mentor and was aided significantly by the work of other scientists from Newnham College at Cambridge, specifically the work of Becky Saunders, Nora Darwin Barlow, and Muriel Wheldale Onslow. Bateson popularized the usage of the word genetics to describe the study of inheritance in his inaugural address to the Third International Conference on Plant Hybridization in London in 1906.
After the rediscovery of Mendel's work, scientists tried to determine which molecules in the cell were responsible for inheritance. In 1900, Nettie Stevens began studying the mealworm. Over the next 11 years, she discovered that females had only X chromosomes while males had both X and Y chromosomes. She was able to conclude that sex is a chromosomal factor and is determined by the male. In 1911, Thomas Hunt Morgan argued that genes are on chromosomes, based on observations of a sex-linked white-eye mutation in fruit flies. In 1913, his student Alfred Sturtevant used the phenomenon of genetic linkage to show that genes are arranged linearly on the chromosome.
Molecular genetics
Although genes were known to exist on chromosomes, chromosomes are composed of both protein and DNA, and scientists did not know which of the two is responsible for inheritance. In 1928, Frederick Griffith discovered the phenomenon of transformation: dead bacteria could transfer genetic material to "transform" other still-living bacteria. Sixteen years later, in 1944, the Avery–MacLeod–McCarty experiment identified DNA as the molecule responsible for transformation. The role of the nucleus as the repository of genetic information in eukaryotes had been established by Hämmerling in 1943 in his work on the single celled alga Acetabularia. The Hershey–Chase experiment in 1952 confirmed that DNA (rather than protein) is the genetic material of the viruses that infect bacteria, providing further evidence that DNA is the molecule responsible for inheritance.
James Watson and Francis Crick determined the structure of DNA in 1953, using the X-ray crystallography work of Rosalind Franklin and Maurice Wilkins that indicated DNA has a helical structure (i.e., shaped like a corkscrew). Their double-helix model had two strands of DNA with the nucleotides pointing inward, each matching a complementary nucleotide on the other strand to form what look like rungs on a twisted ladder. This structure showed that genetic information exists in the sequence of nucleotides on each strand of DNA. The structure also suggested a simple method for replication: if the strands are separated, new partner strands can be reconstructed for each based on the sequence of the old strand. This property is what gives DNA its semi-conservative nature where one strand of new DNA is from an original parent strand.
Although the structure of DNA showed how inheritance works, it was still not known how DNA influences the behavior of cells. In the following years, scientists tried to understand how DNA controls the process of protein production. It was discovered that the cell uses DNA as a template to create matching messenger RNA, molecules with nucleotides very similar to DNA. The nucleotide sequence of a messenger RNA is used to create an amino acid sequence in protein; this translation between nucleotide sequences and amino acid sequences is known as the genetic code.
With the newfound molecular understanding of inheritance came an explosion of research. A notable theory arose from Tomoko Ohta in 1973 with her amendment to the neutral theory of molecular evolution through publishing the nearly neutral theory of molecular evolution. In this theory, Ohta stressed the importance of natural selection and the environment to the rate at which genetic evolution occurs. One important development was chain-termination DNA sequencing in 1977 by Frederick Sanger. This technology allows scientists to read the nucleotide sequence of a DNA molecule. In 1983, Kary Banks Mullis developed the polymerase chain reaction, providing a quick way to isolate and amplify a specific section of DNA from a mixture. The efforts of the Human Genome Project, Department of Energy, NIH, and parallel private efforts by Celera Genomics led to the sequencing of the human genome in 2003.
Features of inheritance
Discrete inheritance and Mendel's laws
At its most fundamental level, inheritance in organisms occurs by passing discrete heritable units, called genes, from parents to offspring. This property was first observed by Gregor Mendel, who studied the segregation of heritable traits in pea plants, showing for example that flowers on a single plant were either purple or white—but never an intermediate between the two colors. The discrete versions of the same gene controlling the inherited appearance (phenotypes) are called alleles.
In the case of the pea, which is a diploid species, each individual plant has two copies of each gene, one copy inherited from each parent. Many species, including humans, have this pattern of inheritance. Diploid organisms with two copies of the same allele of a given gene are called homozygous at that gene locus, while organisms with two different alleles of a given gene are called heterozygous. The set of alleles for a given organism is called its genotype, while the observable traits of the organism are called its phenotype. When organisms are heterozygous at a gene, often one allele is called dominant as its qualities dominate the phenotype of the organism, while the other allele is called recessive as its qualities recede and are not observed. Some alleles do not have complete dominance and instead have incomplete dominance by expressing an intermediate phenotype, or codominance by expressing both alleles at once.
When a pair of organisms reproduce sexually, their offspring randomly inherit one of the two alleles from each parent. These observations of discrete inheritance and the segregation of alleles are collectively known as Mendel's first law or the Law of Segregation. However, the probability of a given genotype or phenotype appearing in the offspring depends on whether the parental alleles are dominant or recessive and whether the parents are homozygous or heterozygous. For example, Mendel found that crossing two heterozygous organisms yields dominant and recessive phenotypes in a 3:1 ratio. Geneticists study and calculate such probabilities using theoretical probabilities, empirical probabilities, the product rule, the sum rule, and more.
Notation and diagrams
Geneticists use diagrams and symbols to describe inheritance. A gene is represented by one or a few letters. Often a "+" symbol is used to mark the usual, non-mutant allele for a gene.
In fertilization and breeding experiments (and especially when discussing Mendel's laws) the parents are referred to as the "P" generation and the offspring as the "F1" (first filial) generation. When the F1 offspring mate with each other, the offspring are called the "F2" (second filial) generation. One of the common diagrams used to predict the result of cross-breeding is the Punnett square.
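A Punnett square is easy to reproduce programmatically. The short sketch below enumerates the offspring of a cross between two heterozygous (Aa) parents and recovers both the 1:2:1 genotype ratio and the 3:1 dominant-to-recessive phenotype ratio mentioned above; the allele names are arbitrary placeholders, not taken from any particular experiment.

```python
from collections import Counter
from itertools import product

# Monohybrid cross between two heterozygous parents (Aa x Aa).
# "A" is the dominant allele, "a" the recessive one (names are arbitrary).
parent1 = ["A", "a"]
parent2 = ["A", "a"]

# Each cell of the Punnett square is one combination of parental gametes.
offspring = ["".join(sorted(pair)) for pair in product(parent1, parent2)]

genotypes = Counter(offspring)
phenotypes = Counter("dominant" if "A" in g else "recessive" for g in offspring)

print(genotypes)    # Counter({'Aa': 2, 'AA': 1, 'aa': 1})  -> 1:2:1 genotype ratio
print(phenotypes)   # Counter({'dominant': 3, 'recessive': 1}) -> 3:1 phenotype ratio
```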
When studying human genetic diseases, geneticists often use pedigree charts to represent the inheritance of traits. These charts map the inheritance of a trait in a family tree.
Multiple gene interactions
Organisms have thousands of genes, and in sexually reproducing organisms these genes generally assort independently of each other. This means that the inheritance of an allele for yellow or green pea color is unrelated to the inheritance of alleles for white or purple flowers. This phenomenon, known as "Mendel's second law" or the "law of independent assortment," means that the alleles of different genes get shuffled between parents to form offspring with many different combinations. Different genes often interact to influence the same trait. In the Blue-eyed Mary (Omphalodes verna), for example, there exists a gene with alleles that determine the color of flowers: blue or magenta. Another gene, however, controls whether the flowers have color at all or are white. When a plant has two copies of this white allele, its flowers are white—regardless of whether the first gene has blue or magenta alleles. This interaction between genes is called epistasis, with the second gene epistatic to the first.
Many traits are not discrete features (e.g. purple or white flowers) but are instead continuous features (e.g. human height and skin color). These complex traits are products of many genes. The influence of these genes is mediated, to varying degrees, by the environment an organism has experienced. The degree to which an organism's genes contribute to a complex trait is called heritability. Measurement of the heritability of a trait is relative—in a more variable environment, the environment has a bigger influence on the total variation of the trait. For example, human height is a trait with complex causes. It has a heritability of 89% in the United States. In Nigeria, however, where people experience a more variable access to good nutrition and health care, height has a heritability of only 62%.
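The heritability figures quoted here can be read as ratios of variances: in the broad-sense definition, heritability is the share of the total phenotypic variance attributable to genetic variance. The variance values in the sketch below are purely illustrative, chosen only so that the two environments reproduce the 89% and 62% figures; they are not measurements from the cited populations.

```python
# Broad-sense heritability: H^2 = V_G / V_P = V_G / (V_G + V_E).
# The variance numbers are illustrative placeholders, not real data.
def heritability(v_genetic: float, v_environment: float) -> float:
    return v_genetic / (v_genetic + v_environment)

print(f"{heritability(40, 5):.0%}")    # ~89%: low environmental variance
print(f"{heritability(40, 25):.0%}")   # ~62%: high environmental variance
```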
Molecular basis for inheritance
DNA and chromosomes
The molecular basis for genes is deoxyribonucleic acid (DNA). DNA is composed of deoxyribose (a sugar molecule), a phosphate group, and a base (amine group). There are four types of bases: adenine (A), cytosine (C), guanine (G), and thymine (T). The phosphates form phosphodiester bonds with the sugars to make long phosphate–sugar backbones. Bases specifically pair together (T with A, C with G) between the two backbones, forming what look like the rungs of a ladder. A base, a phosphate, and a sugar together make a nucleotide, and nucleotides connect to form long chains of DNA. Genetic information exists in the sequence of these nucleotides, and genes exist as stretches of sequence along the DNA chain. These chains coil into a double-helix structure and wrap around proteins called histones, which provide structural support. DNA wrapped around these histones is called a chromosome. Viruses sometimes use the similar molecule RNA instead of DNA as their genetic material.
DNA normally exists as a double-stranded molecule, coiled into the shape of a double helix. Each nucleotide in DNA preferentially pairs with its partner nucleotide on the opposite strand: A pairs with T, and C pairs with G. Thus, in its two-stranded form, each strand effectively contains all necessary information, redundant with its partner strand. This structure of DNA is the physical basis for inheritance: DNA replication duplicates the genetic information by splitting the strands and using each strand as a template for synthesis of a new partner strand.
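Because each base pairs with a fixed partner, either strand fully determines the other. A minimal sketch of this complementarity, using the standard A–T and G–C pairing (the example sequence is made up):

```python
# Build the complementary strand implied by Watson-Crick base pairing.
PAIRING = str.maketrans("ATGC", "TACG")   # A<->T, G<->C

def complement(strand: str) -> str:
    """Complementary strand, read in the same direction as the input."""
    return strand.translate(PAIRING)

template = "ATGCCGTA"                     # made-up example sequence
print(complement(template))               # TACGGCAT
```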
Genes are arranged linearly along long chains of DNA base-pair sequences. In bacteria, each cell usually contains a single circular genophore, while eukaryotic organisms (such as plants and animals) have their DNA arranged in multiple linear chromosomes. These DNA strands are often extremely long; the largest human chromosome, for example, is about 247 million base pairs in length. The DNA of a chromosome is associated with structural proteins that organize, compact, and control access to the DNA, forming a material called chromatin; in eukaryotes, chromatin is usually composed of nucleosomes, segments of DNA wound around cores of histone proteins. The full set of hereditary material in an organism (usually the combined DNA sequences of all chromosomes) is called the genome.
DNA is most often found in the nucleus of cells, but Ruth Sager helped in the discovery of nonchromosomal genes found outside of the nucleus. In plants, these are often found in the chloroplasts and in other organisms, in the mitochondria. These nonchromosomal genes can still be passed on by either partner in sexual reproduction and they control a variety of hereditary characteristics that replicate and remain active throughout generations.
While haploid organisms have only one copy of each chromosome, most animals and many plants are diploid, containing two of each chromosome and thus two copies of every gene. The two alleles for a gene are located on identical loci of the two homologous chromosomes, each allele inherited from a different parent.
Many species have so-called sex chromosomes that determine the sex of each organism. In humans and many other animals, the Y chromosome contains the gene that triggers the development of the specifically male characteristics. In evolution, this chromosome has lost most of its content and also most of its genes, while the X chromosome is similar to the other chromosomes and contains many genes. Mary Frances Lyon discovered that one of the two X chromosomes in female mammals is inactivated (X-chromosome inactivation), which compensates for the difference in gene dosage between females, with two X chromosomes, and males, with one. Lyon's discovery led to the discovery of X-linked diseases.
Reproduction
When cells divide, their full genome is copied and each daughter cell inherits one copy. This process, called mitosis, is the simplest form of reproduction and is the basis for asexual reproduction. Asexual reproduction can also occur in multicellular organisms, producing offspring that inherit their genome from a single parent. Offspring that are genetically identical to their parents are called clones.
Eukaryotic organisms often use sexual reproduction to generate offspring that contain a mixture of genetic material inherited from two different parents. The process of sexual reproduction alternates between forms that contain single copies of the genome (haploid) and double copies (diploid). Haploid cells fuse and combine genetic material to create a diploid cell with paired chromosomes. Diploid organisms form haploids by dividing, without replicating their DNA, to create daughter cells that randomly inherit one of each pair of chromosomes. Most animals and many plants are diploid for most of their lifespan, with the haploid form reduced to single cell gametes such as sperm or eggs.
Although they do not use the haploid/diploid method of sexual reproduction, bacteria have many methods of acquiring new genetic information. Some bacteria can undergo conjugation, transferring a small circular piece of DNA to another bacterium. Bacteria can also take up raw DNA fragments found in the environment and integrate them into their genomes, a phenomenon known as transformation. These processes result in horizontal gene transfer, transmitting fragments of genetic information between organisms that would be otherwise unrelated. Natural bacterial transformation occurs in many bacterial species, and can be regarded as a sexual process for transferring DNA from one cell to another cell (usually of the same species). Transformation requires the action of numerous bacterial gene products, and its primary adaptive function appears to be repair of DNA damages in the recipient cell.
Recombination and genetic linkage
The diploid nature of chromosomes allows for genes on different chromosomes to assort independently or be separated from their homologous pair during sexual reproduction wherein haploid gametes are formed. In this way new combinations of genes can occur in the offspring of a mating pair. Genes on the same chromosome would theoretically never recombine. However, they do, via the cellular process of chromosomal crossover. During crossover, chromosomes exchange stretches of DNA, effectively shuffling the gene alleles between the chromosomes. This process of chromosomal crossover generally occurs during meiosis, a series of cell divisions that creates haploid cells. Meiotic recombination, particularly in microbial eukaryotes, appears to serve the adaptive function of repair of DNA damages.
The first cytological demonstration of crossing over was performed by Harriet Creighton and Barbara McClintock in 1931. Their research and experiments on corn provided cytological evidence for the genetic theory that linked genes on paired chromosomes do in fact exchange places from one homolog to the other.
The probability of chromosomal crossover occurring between two given points on the chromosome is related to the distance between the points. For an arbitrarily long distance, the probability of crossover is high enough that the inheritance of the genes is effectively uncorrelated. For genes that are closer together, however, the lower probability of crossover means that the genes demonstrate genetic linkage; alleles for the two genes tend to be inherited together. The amounts of linkage between a series of genes can be combined to form a linear linkage map that roughly describes the arrangement of the genes along the chromosome.
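Linkage maps of this kind are built from measured recombination frequencies: the lower the frequency between two genes, the closer together they sit on the chromosome, with 1% recombination conventionally counted as one map unit (centimorgan). The sketch below orders three hypothetical genes from made-up recombination frequencies; neither the gene names nor the numbers come from the article.

```python
# Pairwise recombination frequencies, in percent (hypothetical values).
# 1% recombination is conventionally one map unit (centimorgan, cM).
recomb_percent = {("A", "B"): 8, ("B", "C"): 5, ("A", "C"): 13}

# Because the A-C distance is roughly the sum of the A-B and B-C distances,
# gene B must lie between genes A and C on the linkage map.
assert recomb_percent[("A", "B")] + recomb_percent[("B", "C")] == recomb_percent[("A", "C")]
print("inferred gene order: A - B - C  (A-B: 8 cM, B-C: 5 cM)")
```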
Gene expression
Genetic code
Genes express their functional effect through the production of proteins, which are molecules responsible for most functions in the cell. Proteins are made up of one or more polypeptide chains, each composed of a sequence of amino acids. The DNA sequence of a gene is used to produce a specific amino acid sequence. This process begins with the production of an RNA molecule with a sequence matching the gene's DNA sequence, a process called transcription.
This messenger RNA molecule then serves to produce a corresponding amino acid sequence through a process called translation. Each group of three nucleotides in the sequence, called a codon, corresponds either to one of the twenty possible amino acids in a protein or an instruction to end the amino acid sequence; this correspondence is called the genetic code. The flow of information is unidirectional: information is transferred from nucleotide sequences into the amino acid sequence of proteins, but it never transfers from protein back into the sequence of DNA—a phenomenon Francis Crick called the central dogma of molecular biology.
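As a toy illustration of the genetic code, translation can be modeled as reading an mRNA sequence three nucleotides at a time and looking each codon up in a table. The sketch below includes only a handful of codons from the standard code (a full table has 64 entries) and is meant purely to show the triplet-to-amino-acid mapping.

```python
# Toy model of translation: read codons (nucleotide triplets) from an mRNA
# string and map them to amino acids via a partial standard genetic code table.

CODON_TABLE = {
    "AUG": "Met",  # also the usual start codon
    "GCU": "Ala", "UUU": "Phe", "GGA": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list[str]:
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":      # instruction to end the amino acid chain
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGGCUUUUGGAUAA"))   # ['Met', 'Ala', 'Phe', 'Gly']
```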
The specific sequence of amino acids results in a unique three-dimensional structure for that protein, and the three-dimensional structures of proteins are related to their functions. Some are simple structural molecules, like the fibers formed by the protein collagen. Proteins can bind to other proteins and simple molecules, sometimes acting as enzymes by facilitating chemical reactions within the bound molecules (without changing the structure of the protein itself). Protein structure is dynamic; the protein hemoglobin bends into slightly different forms as it facilitates the capture, transport, and release of oxygen molecules within mammalian blood.
A single nucleotide difference within DNA can cause a change in the amino acid sequence of a protein. Because protein structures are the result of their amino acid sequences, some changes can dramatically change the properties of a protein by destabilizing the structure or changing the surface of the protein in a way that changes its interaction with other proteins and molecules. For example, sickle-cell anemia is a human genetic disease that results from a single base difference within the coding region for the β-globin section of hemoglobin, causing a single amino acid change that changes hemoglobin's physical properties.
Sickle-cell versions of hemoglobin stick to themselves, stacking to form fibers that distort the shape of red blood cells carrying the protein. These sickle-shaped cells no longer flow smoothly through blood vessels, having a tendency to clog or degrade, causing the medical problems associated with this disease.
Some DNA sequences are transcribed into RNA but are not translated into protein products—such RNA molecules are called non-coding RNA. In some cases, these products fold into structures which are involved in critical cell functions (e.g. ribosomal RNA and transfer RNA). RNA can also have regulatory effects through hybridization interactions with other RNA molecules (such as microRNA).
Nature and nurture
Although genes contain all the information an organism uses to function, the environment plays an important role in determining the ultimate phenotypes an organism displays. The phrase "nature and nurture" refers to this complementary relationship. The phenotype of an organism depends on the interaction of genes and the environment. An interesting example is the coat coloration of the Siamese cat. In this case, the body temperature of the cat plays the role of the environment. The cat's genes code for dark hair, thus the hair-producing cells in the cat make cellular proteins resulting in dark hair. But these dark hair-producing proteins are sensitive to temperature (i.e. have a mutation causing temperature-sensitivity) and denature in higher-temperature environments, failing to produce dark-hair pigment in areas where the cat has a higher body temperature. In a low-temperature environment, however, the protein's structure is stable and produces dark-hair pigment normally. The protein remains functional in areas of skin that are colder—such as its legs, ears, tail, and face—so the cat has dark hair at its extremities.
Environment plays a major role in effects of the human genetic disease phenylketonuria. The mutation that causes phenylketonuria disrupts the ability of the body to break down the amino acid phenylalanine, causing a toxic build-up of an intermediate molecule that, in turn, causes severe symptoms of progressive intellectual disability and seizures. However, if someone with the phenylketonuria mutation follows a strict diet that avoids this amino acid, they remain normal and healthy.
A common method for determining how genes and environment ("nature and nurture") contribute to a phenotype involves studying identical and fraternal twins, or other siblings of multiple births. Identical siblings are genetically the same since they come from the same zygote. Meanwhile, fraternal twins are as genetically different from one another as normal siblings. By comparing how often a certain disorder occurs in a pair of identical twins to how often it occurs in a pair of fraternal twins, scientists can determine whether that disorder is caused by genetic or postnatal environmental factors. One famous example involved the study of the Genain quadruplets, who were identical quadruplets all diagnosed with schizophrenia.
Gene regulation
The genome of a given organism contains thousands of genes, but not all these genes need to be active at any given moment. A gene is expressed when it is being transcribed into mRNA and there exist many cellular methods of controlling the expression of genes such that proteins are produced only when needed by the cell. Transcription factors are regulatory proteins that bind to DNA, either promoting or inhibiting the transcription of a gene. Within the genome of Escherichia coli bacteria, for example, there exists a series of genes necessary for the synthesis of the amino acid tryptophan. However, when tryptophan is already available to the cell, these genes for tryptophan synthesis are no longer needed. The presence of tryptophan directly affects the activity of the genes—tryptophan molecules bind to the tryptophan repressor (a transcription factor), changing the repressor's structure such that the repressor binds to the genes. The tryptophan repressor blocks the transcription and expression of the genes, thereby creating negative feedback regulation of the tryptophan synthesis process.
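The negative-feedback logic of the tryptophan system can be caricatured in a few lines of code: synthesis slows as the product accumulates, so the level settles toward a steady state. This is a deliberately simplified toy model, not a quantitative description of the E. coli trp operon; all rate constants are invented.

```python
# Toy negative-feedback model: the product (tryptophan) represses its own synthesis.
# All parameters are arbitrary illustration values, not measured rates.

def simulate(steps=50, trp=0.0, k_syn=10.0, K=5.0, k_loss=0.5):
    for _ in range(steps):
        synthesis = k_syn / (1.0 + (trp / K) ** 2)  # repressed when trp is high
        trp += synthesis - k_loss * trp             # production minus dilution/use
    return trp

print(f"approximate steady-state tryptophan level: {simulate():.2f}")
```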
Differences in gene expression are especially clear within multicellular organisms, where cells all contain the same genome but have very different structures and behaviors due to the expression of different sets of genes. All the cells in a multicellular organism derive from a single cell, differentiating into variant cell types in response to external and intercellular signals and gradually establishing different patterns of gene expression to create different behaviors. As no single gene is responsible for the development of structures within multicellular organisms, these patterns arise from the complex interactions between many cells.
Within eukaryotes, there exist structural features of chromatin that influence the transcription of genes, often in the form of modifications to DNA and chromatin that are stably inherited by daughter cells. These features are called "epigenetic" because they exist "on top" of the DNA sequence and retain inheritance from one cell generation to the next. Because of epigenetic features, different cell types grown within the same medium can retain very different properties. Although epigenetic features are generally dynamic over the course of development, some, like the phenomenon of paramutation, have multigenerational inheritance and exist as rare exceptions to the general rule of DNA as the basis for inheritance.
Genetic change
Mutations
During the process of DNA replication, errors occasionally occur in the polymerization of the second strand. These errors, called mutations, can affect the phenotype of an organism, especially if they occur within the protein coding sequence of a gene. Error rates are usually very low—1 error in every 10–100 million bases—due to the "proofreading" ability of DNA polymerases. Processes that increase the rate of changes in DNA are called mutagenic: mutagenic chemicals promote errors in DNA replication, often by interfering with the structure of base-pairing, while UV radiation induces mutations by causing damage to the DNA structure. Chemical damage to DNA occurs naturally as well and cells use DNA repair mechanisms to repair mismatches and breaks. The repair does not, however, always restore the original sequence. A particularly important source of DNA damages appears to be reactive oxygen species produced by cellular aerobic respiration, and these can lead to mutations.
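A back-of-the-envelope calculation shows why even a very low per-base error rate still yields a few mutations per genome replication once genomes are large. The numbers below are round illustrative figures, not measured rates for any particular organism.

```python
# Expected replication errors per genome copy = genome size x per-base error rate.
# Illustrative round numbers only.

genome_size = 3.2e9   # base pairs (roughly human-genome scale)
error_rate = 1e-8     # errors per base after polymerase proofreading and repair

expected_mutations = genome_size * error_rate
print(f"expected new mutations per replication: ~{expected_mutations:.0f}")
```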
In organisms that use chromosomal crossover to exchange DNA and recombine genes, errors in alignment during meiosis can also cause mutations. Errors in crossover are especially likely when similar sequences cause partner chromosomes to adopt a mistaken alignment; this makes some regions in genomes more prone to mutating in this way. These errors create large structural changes in DNA sequence—duplications, inversions, deletions of entire regions—or the accidental exchange of whole parts of sequences between different chromosomes, chromosomal translocation.
Natural selection and evolution
Mutations alter an organism's genotype and occasionally this causes different phenotypes to appear. Most mutations have little effect on an organism's phenotype, health, or reproductive fitness. Mutations that do have an effect are usually detrimental, but occasionally some can be beneficial. Studies in the fly Drosophila melanogaster suggest that if a mutation changes a protein produced by a gene, about 70 percent of these mutations are harmful with the remainder being either neutral or weakly beneficial.
Population genetics studies the distribution of genetic differences within populations and how these distributions change over time. Changes in the frequency of an allele in a population are mainly influenced by natural selection, where a given allele provides a selective or reproductive advantage to the organism, as well as other factors such as mutation, genetic drift, genetic hitchhiking, artificial selection and migration.
Over many generations, the genomes of organisms can change significantly, resulting in evolution. In the process called adaptation, selection for beneficial mutations can cause a species to evolve into forms better able to survive in their environment. New species are formed through the process of speciation, often caused by geographical separations that prevent populations from exchanging genes with each other.
By comparing the homology between different species' genomes, it is possible to calculate the evolutionary distance between them and when they may have diverged. Genetic comparisons are generally considered a more accurate method of characterizing the relatedness between species than the comparison of phenotypic characteristics. The evolutionary distances between species can be used to form evolutionary trees; these trees represent the common descent and divergence of species over time, although they do not show the transfer of genetic material between unrelated species (known as horizontal gene transfer and most common in bacteria).
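As a concrete example of turning sequence comparison into an evolutionary distance, the Jukes–Cantor model corrects the raw fraction of differing sites for multiple substitutions at the same position. The aligned sequences below are made up for illustration; real analyses use much longer alignments and richer substitution models.

```python
# Jukes-Cantor evolutionary distance between two aligned DNA sequences:
#   d = -(3/4) * ln(1 - 4p/3), where p is the fraction of differing sites.
import math

def jukes_cantor(seq_a: str, seq_b: str) -> float:
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    p = diffs / len(seq_a)
    return -0.75 * math.log(1 - 4 * p / 3)   # substitutions per site

# hypothetical aligned fragments
print(round(jukes_cantor("ACGTACGTACGTACGT",
                         "ACGTACGAACGTACTT"), 3))
```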
Research and technology
Model organisms
Although geneticists originally studied inheritance in a wide variety of organisms, the range of species studied has narrowed. One reason is that when significant research already exists for a given organism, new researchers are more likely to choose it for further study, and so eventually a few model organisms became the basis for most genetics research. Common research topics in model organism genetics include the study of gene regulation and the involvement of genes in development and cancer. Organisms were chosen, in part, for convenience—short generation times and easy genetic manipulation made some organisms popular genetics research tools. Widely used model organisms include the gut bacterium Escherichia coli, the plant Arabidopsis thaliana, baker's yeast (Saccharomyces cerevisiae), the nematode Caenorhabditis elegans, the common fruit fly (Drosophila melanogaster), the zebrafish (Danio rerio), and the common house mouse (Mus musculus).
Medicine
Medical genetics seeks to understand how genetic variation relates to human health and disease. When searching for an unknown gene that may be involved in a disease, researchers commonly use genetic linkage and genetic pedigree charts to find the location on the genome associated with the disease. At the population level, researchers take advantage of Mendelian randomization to look for locations in the genome that are associated with diseases, a method especially useful for multigenic traits not clearly defined by a single gene. Once a candidate gene is found, further research is often done on the corresponding (or homologous) genes of model organisms. In addition to studying genetic diseases, the increased availability of genotyping methods has led to the field of pharmacogenetics: the study of how genotype can affect drug responses.
Individuals differ in their inherited tendency to develop cancer, and cancer is a genetic disease. The process of cancer development in the body is a combination of events. Mutations occasionally occur within cells in the body as they divide. Although these mutations will not be inherited by any offspring, they can affect the behavior of cells, sometimes causing them to grow and divide more frequently. There are biological mechanisms that attempt to stop this process; signals are given to inappropriately dividing cells that should trigger cell death, but sometimes additional mutations occur that cause cells to ignore these messages. An internal process of natural selection occurs within the body and eventually mutations accumulate within cells to promote their own growth, creating a cancerous tumor that grows and invades various tissues of the body. Normally, a cell divides only in response to signals called growth factors and stops growing once in contact with surrounding cells and in response to growth-inhibitory signals. It usually then divides a limited number of times and dies, staying within the epithelium where it is unable to migrate to other organs. To become a cancer cell, a cell has to accumulate mutations in a number of genes (three to seven). A cancer cell can divide without growth factor and ignores inhibitory signals. Also, it is immortal and can grow indefinitely, even after it makes contact with neighboring cells. It may escape from the epithelium and ultimately from the primary tumor. Then, the escaped cell can cross the endothelium of a blood vessel and get transported by the bloodstream to colonize a new organ, forming deadly metastasis. Although there are some genetic predispositions in a small fraction of cancers, the major fraction is due to a set of new genetic mutations that originally appear and accumulate in one or a small number of cells that will divide to form the tumor and are not transmitted to the progeny (somatic mutations). The most frequent mutations are a loss of function of p53 protein, a tumor suppressor, or in the p53 pathway, and gain of function mutations in the Ras proteins, or in other oncogenes.
Research methods
DNA can be manipulated in the laboratory. Restriction enzymes are commonly used enzymes that cut DNA at specific sequences, producing predictable fragments of DNA. DNA fragments can be visualized through use of gel electrophoresis, which separates fragments according to their length.
The use of ligation enzymes allows DNA fragments to be connected. By binding ("ligating") fragments of DNA together from different sources, researchers can create recombinant DNA, the DNA often associated with genetically modified organisms. Recombinant DNA is commonly used in the context of plasmids: short circular DNA molecules with a few genes on them. In the process known as molecular cloning, researchers can amplify the DNA fragments by inserting plasmids into bacteria and then culturing them on plates of agar (to isolate clones of bacteria cells). "Cloning" can also refer to the various means of creating cloned ("clonal") organisms.
DNA can also be amplified using a procedure called the polymerase chain reaction (PCR). By using specific short sequences of DNA, PCR can isolate and exponentially amplify a targeted region of DNA. Because it can amplify from extremely small amounts of DNA, PCR is also often used to detect the presence of specific DNA sequences.
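The exponential character of PCR is easy to see numerically: with perfect doubling, n cycles multiply the starting copy number by 2^n, and real reactions fall somewhat short because per-cycle efficiency is below 100%. The efficiency figure in the sketch below is an assumed example value.

```python
# Idealized vs. imperfect PCR amplification.
# copies_after_n_cycles = initial_copies * (1 + efficiency) ** cycles

initial_copies = 100
cycles = 30
efficiency = 0.9          # assumed per-cycle efficiency (1.0 = perfect doubling)

ideal = initial_copies * 2 ** cycles
realistic = initial_copies * (1 + efficiency) ** cycles
print(f"ideal doubling:  {ideal:.3e} copies")
print(f"90% efficiency:  {realistic:.3e} copies")
```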
DNA sequencing and genomics
DNA sequencing, one of the most fundamental technologies developed to study genetics, allows researchers to determine the sequence of nucleotides in DNA fragments. The technique of chain-termination sequencing, developed in 1977 by a team led by Frederick Sanger, is still routinely used to sequence DNA fragments. Using this technology, researchers have been able to study the molecular sequences associated with many human diseases.
As sequencing has become less expensive, researchers have sequenced the genomes of many organisms using a process called genome assembly, which uses computational tools to stitch together sequences from many different fragments. These technologies were used to sequence the human genome in the Human Genome Project completed in 2003. New high-throughput sequencing technologies are dramatically lowering the cost of DNA sequencing, with many researchers hoping to bring the cost of resequencing a human genome down to a thousand dollars.
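At its core, assembly looks for overlaps between the ends of reads and merges them; the toy function below does this for two reads by finding their longest exact suffix–prefix overlap. Real assemblers use far more sophisticated graph-based algorithms and must cope with sequencing errors and repeats; this sketch assumes short, error-free reads.

```python
# Toy illustration of genome assembly: merge two reads by their longest
# exact suffix-prefix overlap (real assemblers are graph-based and error-aware).

def merge_reads(left: str, right: str, min_overlap: int = 3) -> str:
    best = 0
    for k in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left.endswith(right[:k]):   # suffix of `left` matches prefix of `right`
            best = k
            break
    return left + right[best:]

# hypothetical error-free reads from the same genomic region
print(merge_reads("GATTACAGGC", "CAGGCTTTAA"))   # -> GATTACAGGCTTTAA
```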
Next-generation sequencing (or high-throughput sequencing) came about due to the ever-increasing demand for low-cost sequencing. These sequencing technologies allow the production of potentially millions of sequences concurrently. The large amount of sequence data available has created the subfield of genomics, research that uses computational tools to search for and analyze patterns in the full genomes of organisms. Genomics can also be considered a subfield of bioinformatics, which uses computational approaches to analyze large sets of biological data. A common problem to these fields of research is how to manage and share data that deals with human subject and personally identifiable information.
Society and culture
On 19 March 2015, a group of leading biologists urged a worldwide ban on clinical use of methods, particularly the use of CRISPR and zinc finger, to edit the human genome in a way that can be inherited. In April 2015, Chinese researchers reported results of basic research to edit the DNA of non-viable human embryos using CRISPR.
| Biology and health sciences | Biology | null |
12278 | https://en.wikipedia.org/wiki/Ganglion | Ganglion | A ganglion (plural: ganglia) is a group of neuron cell bodies in the peripheral nervous system. In the somatic nervous system, this includes dorsal root ganglia and trigeminal ganglia among a few others. In the autonomic nervous system, there are both sympathetic and parasympathetic ganglia which contain the cell bodies of postganglionic sympathetic and parasympathetic neurons respectively.
A pseudoganglion looks like a ganglion, but only has nerve fibers and has no nerve cell bodies.
Structure
Ganglia are primarily made up of somata and dendritic structures, which are bundled or connected. Ganglia often interconnect with other ganglia to form a complex system of ganglia known as a plexus. Ganglia provide relay points and intermediary connections between different neurological structures in the body, such as the peripheral and central nervous systems.
Among vertebrates there are three major groups of ganglia:
Dorsal root ganglia (also known as the spinal ganglia) contain the cell bodies of sensory (afferent) neurons.
Cranial nerve ganglia contain the cell bodies of cranial nerve neurons.
Autonomic ganglia contain the cell bodies of autonomic nerves.
In the autonomic nervous system, fibers from the central nervous system to the ganglia are known as preganglionic fibers, while those from the ganglia to the effector organ are called postganglionic fibers.
Basal ganglia
The term "ganglion" refers to the peripheral nervous system.
However, in the brain (part of the central nervous system), the basal ganglia are a group of nuclei interconnected with the cerebral cortex, thalamus, and brainstem, associated with a variety of functions: motor control, cognition, emotions, and learning.
Partly due to this ambiguity, the Terminologia Anatomica recommends using the term 'basal nuclei' instead of 'basal ganglia'; however, this usage has not been generally adopted.
Pseudoganglion
A pseudoganglion is a localized thickening of the main part or trunk of a nerve that has the appearance of a ganglion but has only nerve fibers and no nerve cell bodies.
Pseudoganglia are found in the teres minor muscle and radial nerve.
| Biology and health sciences | Nervous system | Biology |
12293 | https://en.wikipedia.org/wiki/Graphical%20user%20interface | Graphical user interface | A graphical user interface, or GUI, is a form of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation. In many applications, GUIs are used instead of text-based UIs, which are based on typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces (CLIs), which require commands to be typed on a computer keyboard.
The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, smartphones and smaller household, office and industrial controls. The term GUI tends not to be applied to lower-resolution types of interfaces, such as video games (where head-up displays (HUDs) are preferred), or to displays that are not flat screens, such as volumetric displays, because the term is restricted to the scope of 2D display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.
GUI and interaction design
Designing the visual composition and temporal behavior of a GUI is an important part of software application programming in the area of human–computer interaction. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline named usability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well-tailored to the tasks.
The visible graphical interface features of an application are sometimes referred to as chrome or GUI. Typically, users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. A model–view–controller allows flexible structures in which the interface is independent of and indirectly linked to application functions, so the GUI can be customized easily. This allows users to select or design a different skin or theme at will, and eases the designer's work to change the interface as user needs evolve. Good GUI design relates to users more, and to system architecture less.
Large widgets, such as windows, usually provide a frame or container for the main presentation content such as a web page, email message, or drawing. Smaller ones usually act as a user-input tool.
A GUI may be designed for the requirements of a vertical market as application-specific GUIs. Examples include automated teller machines (ATM), point of sale (POS) touchscreens at restaurants, self-service checkouts used in a retail store, airline self-ticket and check-in, information kiosks in a public space, like a train station or a museum, and monitors or control screens in an embedded industrial application which employ a real-time operating system (RTOS).
Cell phones and handheld game systems also employ application specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation multimedia center combinations.
Examples
Components
A GUI uses a combination of technologies and devices to provide a platform that users can interact with, for the tasks of gathering and producing information.
A series of elements conforming a visual language have evolved to represent information stored in computers. This makes it easier for people with few computer skills to work with and use computer software. The most common combination of such elements in GUIs is the windows, icons, text fields, canvases, menus, pointer (WIMP) paradigm, especially in personal computers.
The WIMP style of interaction uses a virtual input device to represent the position of a pointing device's interface, most often a mouse, and presents information organized in windows and represented with icons. Available commands are compiled together in menus, and actions are performed making gestures with the pointing device. A window manager facilitates the interactions between windows, applications, and the windowing system. The windowing system handles hardware devices such as pointing devices, graphics hardware, and positioning of the pointer.
In personal computers, all these elements are modeled through a desktop metaphor to produce a simulation called a desktop environment in which the display represents a desktop, on which documents and folders of documents can be placed. Window managers and other software combine to simulate the desktop environment with varying degrees of realism.
Entries may appear in a list to make space for text and details, or in a grid for compactness and larger icons with little space underneath for text. Variations in between exist, such as a list with multiple columns of items and a grid of items with rows of text extending sideways from the icon.
Multi-row and multi-column layouts commonly found on the web are "shelf" and "waterfall". The former is found on image search engines, where images appear with a fixed height but variable length, and is typically implemented with the CSS property and parameter display: inline-block;. A waterfall layout found on Imgur and TweetDeck with fixed width but variable height per item is usually implemented by specifying column-width:.
Post-WIMP interface
Smaller app mobile devices such as personal digital assistants (PDAs) and smartphones typically use the WIMP elements with different unifying metaphors, due to constraints in space and available input devices. Applications for which WIMP is not well suited may use newer interaction techniques, collectively termed post-WIMP UIs.
As of 2011, some touchscreen-based operating systems such as Apple's iOS (iPhone) and Android use the class of GUIs named post-WIMP. These support styles of interaction using more than one finger in contact with a display, which allows actions such as pinching and rotating, which are unsupported by one pointer and mouse.
Interaction
Human interface devices, for the efficient interaction with a GUI include a computer keyboard, especially used together with keyboard shortcuts, pointing devices for the cursor (or rather pointer) control: mouse, pointing stick, touchpad, trackball, joystick, virtual keyboards, and head-up displays (translucent information devices at the eye level).
There are also actions performed by programs that affect the GUI. For example, there are components like inotify or D-Bus to facilitate communication between computer programs.
History
Early efforts
Ivan Sutherland developed Sketchpad in 1963, widely held as the first graphical computer-aided design program. It used a light pen to create and manipulate objects in engineering drawings in realtime with coordinated graphics. In the late 1960s, researchers at the Stanford Research Institute, led by Douglas Engelbart, developed the On-Line System (NLS), which used text-based hyperlinks manipulated with a then-new device: the mouse. (A 1968 demonstration of NLS became known as "The Mother of All Demos".) In the 1970s, Engelbart's ideas were further refined and extended to graphics by researchers at Xerox PARC and specifically Alan Kay, who went beyond text-based hyperlinks and used a GUI as the main interface for the Smalltalk programming language, which ran on the Xerox Alto computer, released in 1973. Most modern general-purpose GUIs are derived from this system.
The Xerox PARC GUI consisted of graphical elements such as windows, menus, radio buttons, and check boxes. The concept of icons was later introduced by David Canfield Smith, who had written a thesis on the subject under the guidance of Kay. The PARC GUI employs a pointing device along with a keyboard. These aspects can be emphasized by using the alternative term and acronym for windows, icons, menus, pointing device (WIMP). This effort culminated in the 1973 Xerox Alto, the first computer with a GUI, though the system never reached commercial production.
The first commercially available computer with a GUI was the 1979 PERQ workstation, manufactured by Three Rivers Computer Corporation. Its design was heavily influenced by the work at Xerox PARC. In 1981, Xerox eventually commercialized the ideas from the Alto in the form of a new and enhanced system – the Xerox 8010 Information System – more commonly known as the Xerox Star. These early systems spurred many other GUI efforts, including Lisp machines by Symbolics and other manufacturers, the Apple Lisa (which presented the concept of menu bar and window controls) in 1983, the Apple Macintosh 128K in 1984, and the Atari ST with Digital Research's GEM, and Commodore Amiga in 1985. Visi On was released in 1983 for the IBM PC compatible computers, but was never popular due to its high hardware demands. Nevertheless, it was a crucial influence on the contemporary development of Microsoft Windows.
Apple, Digital Research, IBM and Microsoft used many of Xerox's ideas to develop products, and IBM's Common User Access specifications formed the basis of the GUIs used in Microsoft Windows, IBM OS/2 Presentation Manager, and the Unix Motif toolkit and window manager. These ideas evolved to create the interface found in current versions of Microsoft Windows, and in various desktop environments for Unix-like operating systems, such as macOS and Linux. Thus most current GUIs have largely common idioms.
Popularization
GUIs were a hot topic in the early 1980s. The Apple Lisa was released in 1983, and various windowing systems existed for DOS operating systems (including PC GEM and PC/GEOS). Individual applications for many platforms presented their own GUI variants. Despite the GUIs advantages, many reviewers questioned the value of the entire concept, citing hardware limits, and problems in finding compatible software.
In 1984, Apple released a television commercial which introduced the Apple Macintosh during the telecast of Super Bowl XVIII by CBS, with allusions to George Orwell's noted novel Nineteen Eighty-Four. The goal of the commercial was to make people think about computers, identifying the user-friendly interface as a personal computer which departed from prior business-oriented systems, and becoming a signature representation of Apple products.
In 1985, Commodore released the Amiga 1000, along with Workbench and Kickstart 1.0 (which contained Intuition). This interface ran as a separate task, meaning it was very responsive and, unlike other GUIs of the time, it didn't freeze up when a program was busy. Additionally, it was the first GUI to introduce something resembling Virtual Desktops.
Windows 95, accompanied by an extensive marketing campaign, was a major success in the marketplace at launch and shortly became the most popular desktop operating system.
In 2007, with the iPhone and later in 2010 with the introduction of the iPad, Apple popularized the post-WIMP style of interaction for multi-touch screens, and those devices were considered to be milestones in the development of mobile devices.
The GUIs familiar to most people as of the mid-late 2010s are Microsoft Windows, macOS, and the X Window System interfaces for desktop and laptop computers, and Android, Apple's iOS, Symbian, BlackBerry OS, Windows Phone/Windows 10 Mobile, Tizen, WebOS, and Firefox OS for handheld (smartphone) devices.
Comparison to other interfaces
Command-line interfaces
Since the commands available in command line interfaces can be many, complex operations can be performed using a short sequence of words and symbols. Custom functions may be used to facilitate access to frequent actions.
Command-line interfaces are more lightweight, as they only recall information necessary for a task; for example, no preview thumbnails or graphical rendering of web pages. This allows greater efficiency and productivity once many commands are learned. But reaching this level takes some time because the command words may not be easily discoverable or mnemonic. Also, using the command line can become slow and error-prone when users must enter long commands comprising many parameters or several different filenames at once. However, windows, icons, menus, pointer (WIMP) interfaces present users with many widgets that represent and can trigger some of the system's available commands.
GUIs can be made quite hard when dialogs are buried deep in a system or moved about to different places during redesigns. Also, icons and dialog boxes are usually harder for users to script.
WIMPs extensively use modes, as the meaning of all keys and clicks on specific positions on the screen are redefined all the time. Command-line interfaces use modes only in limited forms, such as for current directory and environment variables.
Most modern operating systems provide both a GUI and some level of a CLI, although the GUIs usually receive more attention.
GUI wrappers
GUI wrappers find a way around the command-line interface versions (CLI) of (typically) Linux and Unix-like software applications and their text-based UIs or typed command labels. While command-line or text-based applications allow users to run a program non-interactively, GUI wrappers atop them avoid the steep learning curve of the command-line, which requires commands to be typed on the keyboard. By starting a GUI wrapper, users can intuitively interact with, start, stop, and change its working parameters, through graphical icons and visual indicators of a desktop environment, for example. Applications may also provide both interfaces, and when they do the GUI is usually a WIMP wrapper around the command-line version. This is especially common with applications designed for Unix-like operating systems. The latter used to be implemented first because it allowed the developers to focus exclusively on their product's functionality without bothering about interface details such as designing icons and placing buttons. Designing programs this way also allows users to run the program in a shell script.
Three-dimensional graphical user interface
Many environments and games use the methods of 3D graphics to project 3D GUI objects onto the screen. The use of 3D graphics has become increasingly common in mainstream operating systems (ex. Windows Aero, and Aqua (MacOS)) to create attractive interfaces, termed eye candy (which includes, for example, the use of drop shadows underneath windows and the cursor), or for functional purposes only possible using three dimensions. For example, user switching is represented by rotating a cube with faces representing each user's workspace, and window management is represented via a Rolodex-style flipping mechanism in Windows Vista (see Windows Flip 3D). In both cases, the operating system transforms windows on-the-fly while continuing to update the content of those windows.
The GUI is usually WIMP-based, although occasionally other metaphors surface, such as those used in Microsoft Bob, 3dwm, File System Navigator, File System Visualizer, 3D Mailbox, and GopherVR. Zooming (ZUI) is a related technology that promises to deliver the representation benefits of 3D environments without their usability drawbacks of orientation problems and hidden objects. In 2006, Hillcrest Labs introduced the first ZUI for television. Other innovations include the menus on the PlayStation 2, the menus on the Xbox, Sun's Project Looking Glass, Metisse, which was similar to Project Looking Glass, BumpTop, where users can manipulate documents and windows with realistic movement and physics as if they were physical documents, Croquet OS, which is built for collaboration, and compositing window managers such as Enlightenment and Compiz. Augmented reality and virtual reality also make use of 3D GUI elements.
In science fiction
3D GUIs have appeared in science fiction literature and films, even before certain technologies were feasible or in common use.
In prose fiction, 3D GUIs have been portrayed as immersible environments, coined as William Gibson's "cyberspace" and Neal Stephenson's "metaverse" and "avatars".
The 1993 American film Jurassic Park features Silicon Graphics' 3D file manager File System Navigator, a real-life file manager for Unix operating systems.
The film Minority Report has scenes of police officers using specialized 3D data systems.
| Technology | User interface | null |
12295 | https://en.wikipedia.org/wiki/Gamete | Gamete | A gamete is a haploid cell that fuses with another haploid cell during fertilization in organisms that reproduce sexually. Gametes are an organism's reproductive cells, also referred to as sex cells. The name gamete was introduced by the German cytologist Eduard Strasburger in 1878.
Gametes of both mating individuals can be the same size and shape, a condition known as isogamy. By contrast, in the majority of species, the gametes are of different sizes, a condition known as anisogamy or heterogamy that applies to humans and other mammals. The human ovum has approximately 100,000 times the volume of a single human sperm cell. The type of gamete an organism produces determines its sex and sets the basis for the sexual roles and sexual selection. In humans and other species that produce two morphologically distinct types of gametes, and in which each individual produces only one type, a female is any individual that produces the larger type of gamete called an ovum, and a male produces the smaller type, called a sperm cell or spermatozoon. Sperm cells are small and motile due to the presence of a tail-shaped structure, the flagellum, that provides propulsion. In contrast, each egg cell or ovum is relatively large and non-motile.
Oogenesis, the process of female gamete formation in animals, involves meiosis (including meiotic recombination) of a diploid primary oocyte to produce a haploid ovum. Spermatogenesis, the process of male gamete formation in animals, involves meiosis in a diploid primary spermatocyte to produce haploid spermatozoa. In animals, ova are produced in the ovaries of females and sperm develop in the testes of males. During fertilization, a spermatozoon and an ovum, each carrying half of the genetic information of an individual, unite to form a zygote that develops into a new diploid organism.
Evolution
It is generally accepted that isogamy is the ancestral state from which anisogamy and oogamy evolved, although its evolution has left no fossil records. There are almost invariably only two gamete types, all analyses showing that intermediate gamete sizes are eliminated due to selection. Since intermediate sized gametes do not have the same advantages as small or large ones, they do worse than small ones in mobility and numbers, and worse than large ones in supply.
Differences between gametes and somatic cells
In contrast to a gamete, which has only one set of chromosomes, a diploid somatic cell has two sets of homologous chromosomes, one of which is a copy of the chromosome set from the sperm and one a copy of the chromosome set from the egg cell. Recombination of the genes during meiosis ensures that the chromosomes of gametes are not exact duplicates of either of the sets of chromosomes carried in the parental diploid chromosomes but a mixture of the two.
Artificial gametes
Artificial gametes, also known as in vitro derived gametes (IVD), stem cell-derived gametes (SCDGs), and in vitro generated gametes (IVG), are gametes derived from stem cells. The use of such artificial gametes would "necessarily require IVF techniques". Research shows that artificial gametes may be a reproductive technique for same-sex male couples, although a surrogate mother would still be required for the gestation period. Women who have passed menopause may be able to produce eggs and bear genetically related children with artificial gametes. Robert Sparrow wrote, in the Journal of Medical Ethics, that embryos derived from artificial gametes could be used to derive new gametes and this process could be repeated to create multiple human generations in the laboratory. This technique could be used to create cell lines for medical applications and for studying the heredity of genetic disorders. Additionally, this technique could be used for human enhancement by selectively breeding for a desired genome or by using recombinant DNA technology to create enhancements that have not arisen in nature.
Plants
Plants that reproduce sexually also produce gametes. However, since plants have a life cycle involving alternation of diploid and haploid generations some differences from animal life cycles exist. Plants use meiosis to produce spores that develop into multicellular haploid gametophytes which produce gametes by mitosis. In animals there is no corresponding multicellular haploid phase. The sperm of plants that reproduce using spores are formed by mitosis in an organ of the gametophyte known as the antheridium and the egg cells by mitosis in a flask-shaped organ called the archegonium. Plant sperm cells are their only motile cells, often described as flagellate, but more correctly as ciliate. Bryophytes have 2 flagella, horsetails have up to 200 and the mature spermatozoa of the cycad Zamia pumila has up to 50,000 flagella. Cycads and Ginkgo biloba are the only gymnosperms with motile sperm. In the flowering plants, the female gametophyte is produced inside the ovule within the ovary of the flower. When mature, the haploid gametophyte produces female gametes which are ready for fertilization. The male gametophyte is produced inside a pollen grain within the anther and is non-motile, but can be distributed by wind, water or animal vectors. When a pollen grain lands on a mature stigma of a flower it germinates to form a pollen tube that grows down the style into the ovary of the flower and then into the ovule. The pollen then produces non-motile sperm nuclei by mitosis that are transported down the pollen tube to the ovule where they are released for fertilization of the egg cell.
| Biology and health sciences | Biological reproduction | Biology |
12306 | https://en.wikipedia.org/wiki/Geotechnical%20engineering | Geotechnical engineering | Geotechnical engineering, also known as geotechnics, is the branch of civil engineering concerned with the engineering behavior of earth materials. It uses the principles of soil mechanics and rock mechanics to solve its engineering problems. It also relies on knowledge of geology, hydrology, geophysics, and other related sciences.
Geotechnical engineering has applications in military engineering, mining engineering, petroleum engineering, coastal engineering, and offshore construction. The fields of geotechnical engineering and engineering geology have overlapping knowledge areas. However, while geotechnical engineering is a specialty of civil engineering, engineering geology is a specialty of geology.
History
Humans have historically used soil as a material for flood control, irrigation purposes, burial sites, building foundations, and construction materials for buildings. Dykes, dams, and canals dating back to at least 2000 BCE—found in parts of ancient Egypt, ancient Mesopotamia, the Fertile Crescent, and the early settlements of Mohenjo Daro and Harappa in the Indus valley—provide evidence for early activities linked to irrigation and flood control. As cities expanded, structures were erected and supported by formalized foundations. The ancient Greeks notably constructed pad footings and strip-and-raft foundations. Until the 18th century, however, no theoretical basis for soil design had been developed, and the discipline was more of an art than a science, relying on experience.
Several foundation-related engineering problems, such as the Leaning Tower of Pisa, prompted scientists to begin taking a more scientific-based approach to examining the subsurface. The earliest advances occurred in the development of earth pressure theories for the construction of retaining walls. Henri Gautier, a French royal engineer, recognized the "natural slope" of different soils in 1717, an idea later known as the soil's angle of repose. Around the same time, a rudimentary soil classification system was also developed based on a material's unit weight, which is no longer considered a good indication of soil type.
The application of the principles of mechanics to soils was documented as early as 1773 when Charles Coulomb, a physicist and engineer, developed improved methods to determine the earth pressures against military ramparts. Coulomb observed that, at failure, a distinct slip plane would form behind a sliding retaining wall and suggested that the maximum shear stress on the slip plane, for design purposes, was the sum of the soil cohesion, c, and the friction term σ tan(φ), where σ is the normal stress on the slip plane and φ is the friction angle of the soil. By combining Coulomb's theory with Christian Otto Mohr's 2D stress state, the theory became known as Mohr-Coulomb theory. Although it is now recognized that precise determination of cohesion is impossible because c is not a fundamental soil property, the Mohr-Coulomb theory is still used in practice today.
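Expressed as code, the Mohr–Coulomb criterion is a one-line relationship between shear strength, cohesion, normal stress, and friction angle. The sketch below uses arbitrary illustrative numbers, not design values.

```python
# Mohr-Coulomb shear strength on a slip plane:
#   tau_f = c + sigma_n * tan(phi)
import math

def mohr_coulomb_strength(cohesion_kpa: float,
                          normal_stress_kpa: float,
                          friction_angle_deg: float) -> float:
    return cohesion_kpa + normal_stress_kpa * math.tan(math.radians(friction_angle_deg))

# illustrative values only (not design parameters)
print(f"{mohr_coulomb_strength(5.0, 100.0, 30.0):.1f} kPa")  # ~62.7 kPa
```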
In the 19th century, Henry Darcy developed what is now known as Darcy's Law, describing the flow of fluids in a porous media. Joseph Boussinesq, a mathematician and physicist, developed theories of stress distribution in elastic solids that proved useful for estimating stresses at depth in the ground. William Rankine, an engineer and physicist, developed an alternative to Coulomb's earth pressure theory. Albert Atterberg developed the clay consistency indices that are still used today for soil classification. In 1885, Osborne Reynolds recognized that shearing causes volumetric dilation of dense materials and contraction of loose granular materials.
Modern geotechnical engineering is said to have begun in 1925 with the publication of Erdbaumechanik by Karl von Terzaghi, a mechanical engineer and geologist. Considered by many to be the father of modern soil mechanics and geotechnical engineering, Terzaghi developed the principle of effective stress, and demonstrated that the shear strength of soil is controlled by effective stress. Terzaghi also developed the framework for theories of bearing capacity of foundations, and the theory for prediction of the rate of settlement of clay layers due to consolidation. Afterwards, Maurice Biot fully developed the three-dimensional soil consolidation theory, extending the one-dimensional model previously developed by Terzaghi to more general hypotheses and introducing the set of basic equations of Poroelasticity.
In his 1948 book, Donald Taylor recognized that the interlocking and dilation of densely packed particles contributed to the peak strength of the soil. Roscoe, Schofield, and Wroth, with the publication of On the Yielding of Soils in 1958, established the interrelationships between the volume change behavior (dilation, contraction, and consolidation) and shearing behavior with the theory of plasticity using critical state soil mechanics. Critical state soil mechanics is the basis for many contemporary advanced constitutive models describing the behavior of soil.
In 1960, Alec Skempton carried out an extensive review of the available formulations and experimental data in the literature about the effective stress validity in soil, concrete, and rock in order to reject some of these expressions, as well as clarify what expressions were appropriate according to several working hypotheses, such as stress-strain or strength behavior, saturated or non-saturated media, and rock, concrete or soil behavior.
Roles
Geotechnical investigation
Geotechnical engineers investigate and determine the properties of subsurface conditions and materials. They also design corresponding earthworks and retaining structures, tunnels, and structure foundations, and may supervise and evaluate sites, which may further involve site monitoring as well as the risk assessment and mitigation of natural hazards.
Geotechnical engineers and engineering geologists perform geotechnical investigations to obtain information on the physical properties of soil and rock underlying and adjacent to a site to design earthworks and foundations for proposed structures and for the repair of distress to earthworks and structures caused by subsurface conditions. Geotechnical investigations involve surface and subsurface exploration of a site, often including subsurface sampling and laboratory testing of retrieved soil samples. Sometimes, geophysical methods are also used to obtain data, which include measurement of seismic waves (pressure, shear, and Rayleigh waves), surface-wave methods and downhole methods, and electromagnetic surveys (magnetometer, resistivity, and ground-penetrating radar). Electrical tomography can be used to survey soil and rock properties and existing underground infrastructure in construction projects.
Surface exploration can include on-foot surveys, geologic mapping, geophysical methods, and photogrammetry. Geologic mapping and interpretation of geomorphology are typically completed in consultation with a geologist or engineering geologist. Subsurface exploration usually involves in-situ testing (for example, the standard penetration test and cone penetration test). The digging of test pits and trenching (particularly for locating faults and slide planes) may also be used to learn about soil conditions at depth. Large-diameter borings are rarely used due to safety concerns and expense. Still, they are sometimes used to allow a geologist or engineer to be lowered into the borehole for direct visual and manual examination of the soil and rock stratigraphy.
Various soil samplers exist to meet the needs of different engineering projects. The standard penetration test, which uses a thick-walled split spoon sampler, is the most common way to collect disturbed samples. Piston samplers, employing a thin-walled tube, are most commonly used to collect less disturbed samples. More advanced methods, such as the Sherbrooke block sampler, are superior but expensive. Coring frozen ground provides high-quality undisturbed samples from ground conditions, such as fill, sand, moraine, and rock fracture zones.
Geotechnical centrifuge modeling is another method of testing physical-scale models of geotechnical problems. The use of a centrifuge enhances the similarity of the scale model tests involving soil because soil's strength and stiffness are susceptible to the confining pressure. The centrifugal acceleration allows a researcher to obtain large (prototype-scale) stresses in small physical models.
Foundation design
The foundation of a structure's infrastructure transmits loads from the structure to the earth. Geotechnical engineers design foundations based on the load characteristics of the structure and the properties of the soils and bedrock at the site. Generally, geotechnical engineers first estimate the magnitude and location of loads to be supported before developing an investigation plan to explore the subsurface and determine the necessary soil parameters through field and lab testing. Following this, they may begin the design of an engineering foundation. The primary considerations for a geotechnical engineer in foundation design are bearing capacity, settlement, and ground movement beneath the foundations.
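As a rough sketch of how bearing capacity is estimated, a classical Terzaghi-style expression for a strip footing combines cohesion, surcharge, and soil-weight terms through friction-angle-dependent bearing-capacity factors. The factors below use the Reissner/Prandtl expressions for Nq and Nc and Vesić's expression for Nγ; other authors give different Nγ expressions, and the input numbers are purely illustrative, not design values.

```python
# Sketch of an ultimate bearing capacity estimate for a strip footing:
#   q_ult = c*Nc + q*Nq + 0.5*gamma*B*Ngamma
# Nq and Nc follow the Reissner/Prandtl expressions; Ngamma follows Vesic.
import math

def bearing_capacity(c, q, gamma, B, phi_deg):
    phi = math.radians(phi_deg)
    Nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.radians(45) + phi / 2) ** 2
    Nc = (Nq - 1) / math.tan(phi)
    Ng = 2 * (Nq + 1) * math.tan(phi)          # Vesic's N_gamma
    return c * Nc + q * Nq + 0.5 * gamma * B * Ng

# illustrative inputs: c in kPa, surcharge q in kPa, unit weight in kN/m3, width B in m
print(f"q_ult ~ {bearing_capacity(c=10, q=18, gamma=18, B=1.5, phi_deg=30):.0f} kPa")
```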
Earthworks
Geotechnical engineers are also involved in the planning and execution of earthworks, which include ground improvement, slope stabilization, and slope stability analysis.
Ground improvement
Various geotechnical engineering methods can be used for ground improvement, including reinforcement geosynthetics such as geocells and geogrids, which disperse loads over a larger area, increasing the soil's load-bearing capacity. Through these methods, geotechnical engineers can reduce direct and long-term costs.
Slope stabilization
Geotechnical engineers can analyze and improve slope stability using engineering methods. Slope stability is determined by the balance of shear stress and shear strength. A previously stable slope may be initially affected by various factors, making it unstable. Nonetheless, geotechnical engineers can design and implement engineered slopes to increase stability.
Slope stability analysis
Stability analysis is needed to design engineered slopes and estimate the risk of slope failure in natural or designed slopes by determining the conditions under which the topmost mass of soil will slip relative to the base of soil and lead to slope failure. If the interface between the mass and the base of a slope has a complex geometry, slope stability analysis is difficult and numerical solution methods are required. Typically, the interface's exact geometry is unknown, and a simplified interface geometry is assumed. Finite slopes require three-dimensional models to be analyzed, so most slopes are analyzed assuming that they are infinitely wide and can be represented by two-dimensional models.
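For the idealized case of an infinitely wide, dry slope with a slip plane parallel to the surface, the balance of shear strength against shear stress reduces to a closed-form factor of safety. The textbook-style sketch below uses that simplification, ignores pore-water pressure, and takes made-up input values.

```python
# Factor of safety for an infinite, dry slope (planar slip surface parallel to
# the ground):  FS = (c + gamma*z*cos^2(beta)*tan(phi)) / (gamma*z*sin(beta)*cos(beta))
import math

def infinite_slope_fs(c, gamma, z, beta_deg, phi_deg):
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c + gamma * z * math.cos(beta) ** 2 * math.tan(phi)   # shear strength
    driving = gamma * z * math.sin(beta) * math.cos(beta)             # shear stress
    return resisting / driving

# illustrative inputs: cohesion in kPa, unit weight in kN/m3, depth in m, angles in degrees
print(f"FS ~ {infinite_slope_fs(c=5, gamma=19, z=3, beta_deg=25, phi_deg=32):.2f}")
```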
Sub-disciplines
Geosynthetics
Geosynthetics are a type of plastic polymer products used in geotechnical engineering that improve engineering performance while reducing costs. This includes geotextiles, geogrids, geomembranes, geocells, and geocomposites. The synthetic nature of the products make them suitable for use in the ground where high levels of durability are required. Their main functions include drainage, filtration, reinforcement, separation, and containment.
Geosynthetics are available in a wide range of forms and materials, each to suit a slightly different end-use, although they are frequently used together. Some reinforcement geosynthetics, such as geogrids and more recently, cellular confinement systems, have shown to improve bearing capacity, modulus factors and soil stiffness and strength. These products have a wide range of applications and are currently used in many civil and geotechnical engineering applications including roads, airfields, railroads, embankments, piled embankments, retaining structures, reservoirs, canals, dams, landfills, bank protection and coastal engineering.
Offshore
Offshore (or marine) geotechnical engineering is concerned with foundation design for human-made structures in the sea, away from the coastline (in opposition to onshore or nearshore engineering). Oil platforms, artificial islands and submarine pipelines are examples of such structures.
There are a number of significant differences between onshore and offshore geotechnical engineering. Notably, site investigation and ground improvement on the seabed are more expensive; the offshore structures are exposed to a wider range of geohazards; and the environmental and financial consequences are higher in case of failure. Offshore structures are exposed to various environmental loads, notably wind, waves and currents. These phenomena may affect the integrity or the serviceability of the structure and its foundation during its operational lifespan and need to be taken into account in offshore design.
In subsea geotechnical engineering, seabed materials are considered a two-phase material composed of rock or mineral particles and water. Structures may be fixed in place in the seabed—as is the case for piers, jetties and fixed-bottom wind turbines—or may comprise a floating structure that remains roughly fixed relative to its geotechnical anchor point. Undersea mooring of human-engineered floating structures include a large number of offshore oil and gas platforms and, since 2008, a few floating wind turbines. Two common types of engineered design for anchoring floating structures include tension-leg and catenary loose mooring systems.
Observational method
First proposed by Karl Terzaghi and later discussed in a paper by Ralph B. Peck, the observational method is a managed process of construction control, monitoring, and review, which enables modifications to be incorporated during and after construction. The method aims to achieve a greater overall economy without compromising safety by creating designs based on the most probable conditions rather than the most unfavorable. Using the observational method, gaps in available information are filled by measurements and investigation, which aid in assessing the behavior of the structure during construction, which in turn can be modified per the findings. The method was described by Peck as "learn-as-you-go".
The observational method may be described as follows:
General exploration sufficient to establish the rough nature, pattern, and properties of deposits.
Assessment of the most probable conditions and the most unfavorable conceivable deviations.
Creating the design based on a working hypothesis of behavior anticipated under the most probable conditions.
Selection of quantities to be observed as construction proceeds and calculating their anticipated values based on the working hypothesis under the most unfavorable conditions.
Selection, in advance, of a course of action or design modification for every foreseeable significant deviation of the observational findings from those predicted.
Measurement of quantities and evaluation of actual conditions.
Design modification per actual conditions
The observational method is suitable for construction that has already begun when an unexpected development occurs or when a failure or accident looms or has already happened. It is unsuitable for projects whose design cannot be altered during construction.
| Technology | Disciplines | null |
12316 | https://en.wikipedia.org/wiki/Gamma%20function | Gamma function | In mathematics, the gamma function (represented by Γ, capital Greek letter gamma) is the most common extension of the factorial function to complex numbers. Derived by Daniel Bernoulli, the gamma function is defined for all complex numbers except non-positive integers, and for every positive integer $n$, $\Gamma(n) = (n-1)!$. The gamma function can be defined via a convergent improper integral for complex numbers with positive real part: $\Gamma(z) = \int_0^\infty t^{z-1} e^{-t}\,dt, \quad \operatorname{Re}(z) > 0.$
The gamma function then is defined in the complex plane as the analytic continuation of this integral function: it is a meromorphic function which is holomorphic except at zero and the negative integers, where it has simple poles.
The gamma function has no zeros, so the reciprocal gamma function $1/\Gamma(z)$ is an entire function. In fact, the gamma function corresponds to the Mellin transform of the negative exponential function: $\Gamma(z) = \mathcal{M}\{e^{-x}\}(z) = \int_0^\infty x^{z-1} e^{-x}\,dx.$
Other extensions of the factorial function do exist, but the gamma function is the most popular and useful. It appears as a factor in various probability-distribution functions and other formulas in the fields of probability, statistics, analytic number theory, and combinatorics.
Motivation
The gamma function can be seen as a solution to the interpolation problem of finding a smooth curve that connects the points $(x, y) = (n, n!)$ of the factorial sequence for all positive integer values of $n$. The simple formula for the factorial, $x! = 1 \cdot 2 \cdots x$, is only valid when $x$ is a positive integer, and no elementary function has this property, but a good solution is the gamma function $f(x) = \Gamma(x+1)$.
The gamma function is not only smooth but analytic (except at the non-positive integers), and it can be defined in several explicit ways. However, it is not the only analytic function that extends the factorial, as one may add any analytic function that is zero on the positive integers, such as $c \sin(m\pi x)$ for an integer $m$. Such a function is known as a pseudogamma function, the most famous being the Hadamard function.
A more restrictive requirement is the functional equation which interpolates the shifted factorial $f(n) = (n-1)!$: $f(x+1) = x\,f(x)$ for any $x > 0$, together with $f(1) = 1$.
But this still does not give a unique solution, since it allows for multiplication by any periodic function $g(x)$ with $g(x) = g(x+1)$ and $g(1) = 1$, such as $g(x) = e^{k\sin(2\pi x)}$.
One way to resolve the ambiguity is the Bohr–Mollerup theorem, which shows that $f(x) = \Gamma(x)$ is the unique interpolating function for the factorial, defined over the positive reals, which is logarithmically convex, meaning that $y = \log f(x)$ is convex.
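A quick way to see the interpolation property numerically is to compare $\Gamma(x+1)$ with the factorial at integer points and at a value in between; the sketch below uses only Python's standard library, and the sample points are arbitrary.

```python
import math

# Gamma(n + 1) reproduces n! at the positive integers ...
for n in range(1, 7):
    print(n, math.factorial(n), math.gamma(n + 1))

# ... while also giving a smooth value between them, e.g. "3.5!"
print(math.gamma(4.5))  # roughly 11.63, between 3! = 6 and 4! = 24
```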
Definition
Main definition
The notation $\Gamma(z)$ is due to Legendre. If the real part of the complex number $z$ is strictly positive ($\operatorname{Re}(z) > 0$), then the integral $\Gamma(z) = \int_0^\infty t^{z-1} e^{-t}\,dt$
converges absolutely, and is known as the Euler integral of the second kind. (Euler's integral of the first kind is the beta function.) Using integration by parts, one sees that: $\Gamma(z+1) = \int_0^\infty t^{z} e^{-t}\,dt = \left[-t^{z}e^{-t}\right]_0^\infty + z\int_0^\infty t^{z-1} e^{-t}\,dt.$
Recognizing that $-t^{z}e^{-t} \to 0$ as $t \to \infty$, this yields the fundamental recurrence $\Gamma(z+1) = z\,\Gamma(z)$.
Then $\Gamma(1)$ can be calculated as: $\Gamma(1) = \int_0^\infty e^{-t}\,dt = 1.$
Thus we can show that $\Gamma(n) = (n-1)!$ for any positive integer $n$ by induction. Specifically, the base case is that $\Gamma(1) = 1 = 0!$, and the induction step is that $\Gamma(n+1) = n\,\Gamma(n) = n\,(n-1)! = n!$.
The identity $\Gamma(z) = \Gamma(z+1)/z$ can be used (or, yielding the same result, analytic continuation can be used) to uniquely extend the integral formulation for $\Gamma(z)$ to a meromorphic function defined for all complex numbers $z$, except integers less than or equal to zero. It is this extended version that is commonly referred to as the gamma function.
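The Euler integral can be checked numerically with a crude quadrature; the truncation point, step count and test arguments in the sketch below are arbitrary illustrative choices, and only the standard library is used.

```python
import math

def gamma_by_integral(z, upper=50.0, steps=100_000):
    """Approximate Gamma(z) = integral_0^inf t**(z-1) * exp(-t) dt with a midpoint rule.
    Valid for real z > 0; `upper` truncates the rapidly decaying tail."""
    h = upper / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        total += t ** (z - 1) * math.exp(-t)
    return total * h

for z in (0.5, 1.0, 2.5, 5.0):
    print(z, gamma_by_integral(z), math.gamma(z))
```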
Alternative definitions
There are many equivalent definitions.
Euler's definition as an infinite product
For a fixed integer $m$, as the integer $n$ increases, we have that $\lim_{n\to\infty} \dfrac{n!\,(n+1)^{m}}{(n+m)!} = 1.$
If $m$ is not an integer, then this equation is meaningless, since in this section the factorial of a non-integer has not been defined yet. However, let us assume that this equation continues to hold when $m$ is replaced by an arbitrary complex number $z$, in order to define the gamma function for non-integers: $\lim_{n\to\infty} \dfrac{n!\,(n+1)^{z}}{(n+z)!} = 1.$
Multiplying both sides by $z!$ and rearranging gives $\Gamma(z) = \dfrac{z!}{z} = \dfrac{1}{z}\prod_{n=1}^{\infty}\dfrac{\left(1+\frac{1}{n}\right)^{z}}{1+\frac{z}{n}}.$
This infinite product, which is due to Euler, converges for all complex numbers $z$ except the non-positive integers, which fail because of a division by zero. Hence the above assumption produces a unique definition of $\Gamma(z)$.
Intuitively, this formula indicates that $z!$ is approximately the result of computing $n!$ for some large integer $n$, multiplying by $(n+1)^{z}$ to approximate $(n+z)!$, and using the relationship $x! = (x+1)!/(x+1)$ backwards $n$ times to get an approximation for $z!$; and furthermore that this approximation becomes exact as $n$ increases to infinity.
The infinite product for the reciprocal, $\dfrac{1}{\Gamma(z)} = z\prod_{n=1}^{\infty}\dfrac{1+\frac{z}{n}}{\left(1+\frac{1}{n}\right)^{z}},$
defines an entire function, converging for every complex number $z$.
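The convergence of this product can be observed by truncating it; the sketch below compares partial products against $1/\Gamma(z)$ from the standard library, with an arbitrary truncation level and arbitrary test points.

```python
import math

def reciprocal_gamma_product(z, terms=100_000):
    """Partial product of 1/Gamma(z) = z * prod_{n>=1} (1 + z/n) / (1 + 1/n)**z."""
    acc = z
    for n in range(1, terms + 1):
        acc *= (1.0 + z / n) / (1.0 + 1.0 / n) ** z
    return acc

for z in (0.5, 2.5, -1.5):  # the product also converges at negative non-integers
    print(z, reciprocal_gamma_product(z), 1.0 / math.gamma(z))
```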
Weierstrass's definition
The definition for the gamma function due to Weierstrass is also valid for all complex numbers $z$ except the non-positive integers: $\Gamma(z) = \dfrac{e^{-\gamma z}}{z}\prod_{n=1}^{\infty}\left(1+\dfrac{z}{n}\right)^{-1}e^{z/n},$
where $\gamma$ is the Euler–Mascheroni constant. This is the Hadamard product of $1/\Gamma(z)$ in a rewritten form. This definition appears in an important identity involving pi.
Equivalence of the integral definition and Weierstrass definition
By the integral definition, the relation and Hadamard factorization theorem,
for some constants since is an entire function of order . Since as , (or an integer multiple of ) and since ,
where for some integer . Since for , we have and
Equivalence of the Weierstrass definition and Euler definition
Let
and
Then
and
therefore
Then
and taking gives the desired result.
Properties
General
Besides the fundamental property discussed above, $\Gamma(z+1) = z\,\Gamma(z),$
other important functional equations for the gamma function are Euler's reflection formula $\Gamma(1-z)\,\Gamma(z) = \dfrac{\pi}{\sin(\pi z)}, \quad z \notin \mathbb{Z},$
which implies $\Gamma(z-n) = (-1)^{n-1}\,\dfrac{\Gamma(-z)\,\Gamma(1+z)}{\Gamma(n+1-z)},$
and the Legendre duplication formula $\Gamma(z)\,\Gamma\!\left(z+\tfrac{1}{2}\right) = 2^{1-2z}\,\sqrt{\pi}\;\Gamma(2z).$
Proof 1
With Euler's infinite product
compute
where the last equality is a known result. A similar derivation begins with Weierstrass's definition.
Proof 2
First prove that
Consider the positively oriented rectangular contour with vertices at , , and where . Then by the residue theorem,
Let
and let be the analogous integral over the top side of the rectangle. Then as and . If denotes the right vertical side of the rectangle, then
for some constant and since , the integral tends to as . Analogously, the integral over the left vertical side of the rectangle tends to as . Therefore
from which
Then
and
Proving the reflection formula for all proves it for all by analytic continuation.
The beta function can be represented as
Setting yields
After the substitution :
The function is even, hence
Now assume
Then
This implies
Since
the Legendre duplication formula follows:
The duplication formula is a special case of the multiplication theorem (see Eq. 5.5.6): $\prod_{k=0}^{m-1}\Gamma\!\left(z+\frac{k}{m}\right) = (2\pi)^{\frac{m-1}{2}}\, m^{\frac{1}{2}-mz}\,\Gamma(mz).$
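These identities are easy to spot-check numerically; the sketch below verifies the reflection and duplication formulas and the $m = 3$ case of the multiplication theorem at an arbitrary test point, using only the standard library.

```python
import math

z = 0.3  # arbitrary non-integer test point

# Euler's reflection formula: Gamma(1 - z) * Gamma(z) = pi / sin(pi z)
print(math.gamma(1 - z) * math.gamma(z), math.pi / math.sin(math.pi * z))

# Legendre duplication formula: Gamma(z) * Gamma(z + 1/2) = 2**(1 - 2z) * sqrt(pi) * Gamma(2z)
print(math.gamma(z) * math.gamma(z + 0.5),
      2 ** (1 - 2 * z) * math.sqrt(math.pi) * math.gamma(2 * z))

# Gauss multiplication theorem with m = 3:
# prod_{k=0}^{2} Gamma(z + k/3) = (2*pi) * 3**(1/2 - 3z) * Gamma(3z)
m = 3
lhs = math.prod(math.gamma(z + k / m) for k in range(m))
rhs = (2 * math.pi) ** ((m - 1) / 2) * m ** (0.5 - m * z) * math.gamma(m * z)
print(lhs, rhs)
```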
A simple but useful property, which can be seen from the limit definition, is the conjugation symmetry $\overline{\Gamma(z)} = \Gamma(\overline{z})$. In particular, with $z = a + bi$, this product is $\Gamma(a+bi)\,\Gamma(a-bi) = \left|\Gamma(a+bi)\right|^{2}.$
If the real part is an integer or a half-integer, this can be finitely expressed in closed form; for example, $\left|\Gamma(bi)\right|^{2} = \dfrac{\pi}{b\sinh(\pi b)}$ and $\left|\Gamma\!\left(\tfrac{1}{2}+bi\right)\right|^{2} = \dfrac{\pi}{\cosh(\pi b)}.$
First, consider the reflection formula applied to $z = bi$.
Applying the recurrence relation to the second term:
which with simple rearrangement gives
Second, consider the reflection formula applied to $z = \tfrac{1}{2} + bi$.
Formulas for other values of $z$ for which the real part is an integer or a half-integer quickly follow by induction using the recurrence relation in the positive and negative directions.
Perhaps the best-known value of the gamma function at a non-integer argument is $\Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi},$
which can be found by setting $z = \tfrac{1}{2}$ in the reflection or duplication formulas, by using the relation to the beta function given below with $x = y = \tfrac{1}{2}$, or simply by making the substitution $t = u^{2}$ in the integral definition of the gamma function, resulting in a Gaussian integral. In general, for non-negative integer values of $n$ we have: $\Gamma\!\left(\tfrac{1}{2}+n\right) = \dfrac{(2n-1)!!}{2^{n}}\sqrt{\pi},$
where the double factorial $(2n-1)!! = 1\cdot 3\cdot 5\cdots(2n-1)$, with $(-1)!! = 1$ by convention. See Particular values of the gamma function for calculated values.
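The half-integer formula can be checked directly; in the sketch below the double factorial is computed by a small helper written only for this illustration and compared against the library gamma function.

```python
import math

def double_factorial(k):
    """(2n-1)!!-style double factorial for odd k >= -1, with (-1)!! = 1."""
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

for n in range(6):
    via_formula = double_factorial(2 * n - 1) / 2 ** n * math.sqrt(math.pi)
    print(n, via_formula, math.gamma(n + 0.5))
```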
It might be tempting to generalize the result that $\Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi}$ by looking for a formula for other individual values $\Gamma(r)$ where $r$ is rational, especially because according to Gauss's digamma theorem, it is possible to do so for the closely related digamma function at every rational value. However, these numbers are not known to be expressible by themselves in terms of elementary functions. It has been proved that $\Gamma(n+r)$ is a transcendental number and algebraically independent of $\pi$ for any integer $n$ and each of the fractions $r = \tfrac{1}{6}, \tfrac{1}{4}, \tfrac{1}{3}, \tfrac{2}{3}, \tfrac{3}{4}, \tfrac{5}{6}$. In general, when computing values of the gamma function, we must settle for numerical approximations.
The derivatives of the gamma function are described in terms of the polygamma function, $\psi^{(0)}(z)$: $\Gamma'(z) = \Gamma(z)\,\psi^{(0)}(z).$
For a positive integer $m$ the derivative of the gamma function can be calculated as follows: $\Gamma'(m+1) = m!\left(-\gamma + \sum_{k=1}^{m}\frac{1}{k}\right) = m!\left(H(m) - \gamma\right),$
where $H(m)$ is the $m$th harmonic number and $\gamma$ is the Euler–Mascheroni constant.
For $\operatorname{Re}(z) > 0$ the $n$th derivative of the gamma function is: $\dfrac{d^{n}}{dz^{n}}\Gamma(z) = \int_0^\infty t^{z-1} e^{-t} (\ln t)^{n}\,dt.$
(This can be derived by differentiating the integral form of the gamma function with respect to , and using the technique of differentiation under the integral sign.)
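The closed form for the derivative at positive integers can be compared against a central finite difference of the gamma function itself; the step size and the value of the Euler–Mascheroni constant used below are illustrative assumptions.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gamma_derivative_exact(m):
    """Gamma'(m + 1) = m! * (H_m - gamma), with H_m the m-th harmonic number."""
    harmonic = sum(1.0 / k for k in range(1, m + 1))
    return math.factorial(m) * (harmonic - EULER_GAMMA)

def gamma_derivative_numeric(x, h=1e-6):
    """Central finite difference of Gamma at x."""
    return (math.gamma(x + h) - math.gamma(x - h)) / (2 * h)

for m in range(1, 6):
    print(m, gamma_derivative_exact(m), gamma_derivative_numeric(m + 1))
```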
Using the identity
where is the Riemann zeta function, and is the -th Bell polynomial, we have in particular the Laurent series expansion of the gamma function
Inequalities
When restricted to the positive real numbers, the gamma function is a strictly logarithmically convex function. This property may be stated in any of the following three equivalent ways:
For any two positive real numbers $x_1$ and $x_2$, and for any $t \in [0,1]$, $\Gamma(t x_1 + (1-t)x_2) \le \Gamma(x_1)^{t}\,\Gamma(x_2)^{1-t}.$
For any two positive real numbers $x$ and $y$ with $y > x$, $\left(\dfrac{\Gamma(y)}{\Gamma(x)}\right)^{\frac{1}{y-x}} > \exp\!\left(\dfrac{\Gamma'(x)}{\Gamma(x)}\right).$
For any positive real number $x$, $\Gamma''(x)\,\Gamma(x) > \Gamma'(x)^{2}.$
The last of these statements is, essentially by definition, the same as the statement that $\psi^{(1)}(x) > 0$, where $\psi^{(1)}$ is the polygamma function of order 1. To prove the logarithmic convexity of the gamma function, it therefore suffices to observe that $\psi^{(1)}$ has a series representation which, for positive real $x$, consists of only positive terms.
Logarithmic convexity and Jensen's inequality together imply, for any positive real numbers $x_1,\dots,x_n$ and $a_1,\dots,a_n$, $\Gamma\!\left(\frac{a_1x_1+\cdots+a_nx_n}{a_1+\cdots+a_n}\right) \le \left(\Gamma(x_1)^{a_1}\cdots\Gamma(x_n)^{a_n}\right)^{\frac{1}{a_1+\cdots+a_n}}.$
There are also bounds on ratios of gamma functions. The best-known is Gautschi's inequality, which says that for any positive real number $x$ and any $s \in (0,1)$, $x^{1-s} < \dfrac{\Gamma(x+1)}{\Gamma(x+s)} < (x+1)^{1-s}.$
Stirling's formula
The behavior of $\Gamma(x)$ for an increasing positive real variable is given by Stirling's formula $\Gamma(x+1) \sim \sqrt{2\pi x}\left(\dfrac{x}{e}\right)^{x},$
where the symbol $\sim$ means asymptotic convergence: the ratio of the two sides converges to 1 in the limit $x \to +\infty$. This growth is faster than exponential, $e^{kx}$, for any fixed value of $k$.
Another useful limit for asymptotic approximations for $x \to +\infty$ is: $\lim_{x\to\infty}\dfrac{\Gamma(x+\alpha)}{\Gamma(x)\,x^{\alpha}} = 1, \quad \alpha \in \mathbb{C}.$
When the error term is written as an infinite product, Stirling's formula can even be used to define the gamma function itself.
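The quality of Stirling's formula is easy to visualise numerically; the sketch below tabulates the ratio of the approximation to the exact value at arbitrary sample points, and the ratio approaches 1 as the argument grows.

```python
import math

def stirling(x):
    """Stirling's approximation to Gamma(x + 1) = x!."""
    return math.sqrt(2 * math.pi * x) * (x / math.e) ** x

for x in (1, 2, 5, 10, 50, 100):
    exact = math.gamma(x + 1)
    approx = stirling(x)
    print(x, approx / exact)  # ratio tends to 1, roughly like 1 - 1/(12x)
```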
Extension to negative, non-integer values
Although the main definition of the gamma function—the Euler integral of the second kind—is only valid (on the real axis) for positive arguments, its domain can be extended with analytic continuation to negative arguments by shifting the negative argument to positive values by using either Euler's reflection formula, $\Gamma(z) = \dfrac{\pi}{\Gamma(1-z)\sin(\pi z)},$
or the fundamental property, $\Gamma(z) = \dfrac{\Gamma(z+1)}{z},$
valid whenever $z$ is not zero or a negative integer.
For example, $\Gamma\!\left(-\tfrac{1}{2}\right) = \dfrac{\Gamma\!\left(\tfrac{1}{2}\right)}{-\tfrac{1}{2}} = -2\sqrt{\pi}.$
Residues
The behavior for non-positive $z$ is more intricate. Euler's integral does not converge for $\operatorname{Re}(z) \le 0$, but the function it defines in the positive complex half-plane has a unique analytic continuation to the negative half-plane. One way to find that analytic continuation is to use Euler's integral for positive arguments and extend the domain to negative numbers by repeated application of the recurrence formula, $\Gamma(z) = \dfrac{\Gamma(z+n+1)}{z(z+1)\cdots(z+n)},$
choosing $n$ such that $z + n$ is positive. The product in the denominator is zero when $z$ equals any of the integers $0, -1, -2, \ldots$. Thus, the gamma function must be undefined at those points to avoid division by zero; it is a meromorphic function with simple poles at the non-positive integers.
For a function $f$ of a complex variable $z$, at a simple pole $c$, the residue of $f$ is given by: $\operatorname{Res}(f, c) = \lim_{z\to c}(z-c)\,f(z).$
For the simple pole $z = -n$, the recurrence formula can be rewritten as: $(z+n)\,\Gamma(z) = \dfrac{\Gamma(z+n+1)}{z(z+1)\cdots(z+n-1)}.$
The numerator at $z = -n$ is $\Gamma(z+n+1) = \Gamma(1) = 1$
and the denominator is $z(z+1)\cdots(z+n-1) = (-n)(1-n)\cdots(-1) = (-1)^{n}\,n!.$
So the residues of the gamma function at those points are: $\operatorname{Res}(\Gamma, -n) = \dfrac{(-1)^{n}}{n!}.$
The gamma function is non-zero everywhere along the real line, although it comes arbitrarily close to zero as $z \to -\infty$. There is in fact no complex number $z$ for which $\Gamma(z) = 0$, and hence the reciprocal gamma function $1/\Gamma(z)$ is an entire function, with zeros at $z = 0, -1, -2, \ldots$.
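The residues can be observed numerically by evaluating $(z+n)\,\Gamma(z)$ just off the pole; the small offset used below is an arbitrary choice made for illustration.

```python
import math

def residue_at(n, eps=1e-7):
    """Numerically estimate Res(Gamma, -n) as (z + n) * Gamma(z) for z close to -n."""
    z = -n + eps
    return (z + n) * math.gamma(z)

for n in range(6):
    print(n, residue_at(n), (-1) ** n / math.factorial(n))
```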
Minima and maxima
On the real line, the gamma function has a local minimum at $x_{\min} \approx 1.46163$ where it attains the value $\Gamma(x_{\min}) \approx 0.885603$. The gamma function rises to either side of this minimum. The solution to $\Gamma(x) = \Gamma(x+1)$ is $x = 1$, and the common value is $\Gamma(1) = \Gamma(2) = 1$. The positive solution to $\Gamma(x-1) = \Gamma(x+1)$ is $x = \varphi$, the golden ratio, and the common value is $\Gamma(\varphi-1) = \Gamma(\varphi+1)$.
The gamma function must alternate sign between its poles at the non-positive integers because the product in the forward recurrence contains an odd number of negative factors if the number of poles between $z$ and $z+n$ is odd, and an even number if the number of poles is even. The values at the local extrema of the gamma function along the real axis between the non-positive integers are:
tabulated under Particular values of the gamma function.
Integral representations
There are many formulas, besides the Euler integral of the second kind, that express the gamma function as an integral. For instance, when the real part of $z$ is positive, $\Gamma(z) = \int_0^1\left(\log\tfrac{1}{s}\right)^{z-1}ds$
and $\Gamma(z) = 2\int_0^\infty u^{2z-1} e^{-u^{2}}\,du,$
where these integrals follow from the substitutions $t = -\log s$ and $t = u^{2}$ in Euler's second integral. The last integral in particular makes clear the connection between the gamma function at half-integer arguments and the Gaussian integral: if $z = \tfrac{1}{2}$ we get $\Gamma\!\left(\tfrac{1}{2}\right) = 2\int_0^\infty e^{-u^{2}}\,du = \sqrt{\pi}.$
Binet's first integral formula for the gamma function states that, when the real part of is positive, then:
The integral on the right-hand side may be interpreted as a Laplace transform. That is,
Binet's second integral formula states that, again when the real part of is positive, then:
Let be a Hankel contour, meaning a path that begins and ends at the point on the Riemann sphere, whose unit tangent vector converges to at the start of the path and to at the end, which has winding number 1 around , and which does not cross . Fix a branch of by taking a branch cut along and by taking to be real when is on the negative real axis. Assume is not an integer. Then Hankel's formula for the gamma function is:
where is interpreted as . The reflection formula leads to the closely related expression
again valid whenever is not an integer.
Continued fraction representation
The gamma function can also be represented by a sum of two continued fractions:
where .
Fourier series expansion
The logarithm of the gamma function has the following Fourier series expansion for $0 < z < 1$,
which was for a long time attributed to Ernst Kummer, who derived it in 1847. However, Iaroslav Blagouchine discovered that Carl Johan Malmsten first derived this series in 1842.
Raabe's formula
In 1840 Joseph Ludwig Raabe proved that $\int_{a}^{a+1}\ln\Gamma(x)\,dx = \tfrac{1}{2}\ln 2\pi + a\ln a - a, \quad a > 0.$
In particular, if $a = 0$ then $\int_{0}^{1}\ln\Gamma(x)\,dx = \tfrac{1}{2}\ln 2\pi.$
The latter can be derived by taking the logarithm in the above multiplication formula, which gives an expression for the Riemann sum of the integrand. Taking the limit for $m \to \infty$ gives the formula.
Pi function
An alternative notation introduced by Gauss is the $\Pi$-function, a shifted version of the gamma function: $\Pi(z) = \Gamma(z+1) = z\,\Gamma(z),$
so that $\Pi(n) = n!$ for every non-negative integer $n$.
Using the pi function, the reflection formula is: $\Pi(z)\,\Pi(-z) = \dfrac{\pi z}{\sin(\pi z)} = \dfrac{1}{\operatorname{sinc}(z)},$
using the normalized sinc function; while the multiplication theorem becomes: $\Pi\!\left(\frac{z}{m}\right)\Pi\!\left(\frac{z-1}{m}\right)\cdots\Pi\!\left(\frac{z-m+1}{m}\right) = (2\pi)^{\frac{m-1}{2}}\,m^{-z-\frac{1}{2}}\,\Pi(z).$
The shifted reciprocal gamma function is sometimes denoted $\pi(z) = \dfrac{1}{\Pi(z)} = \dfrac{1}{\Gamma(z+1)},$
an entire function.
The volume of an $n$-ellipsoid with radii $r_1,\dots,r_n$ can be expressed as $V_n(r_1,\dots,r_n) = \dfrac{\pi^{n/2}}{\Pi\!\left(\frac{n}{2}\right)}\prod_{k=1}^{n} r_k.$
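As a small worked example of the $\Pi$-function notation, the sketch below computes the volume of an $n$-ball (all radii equal) as $\pi^{n/2}r^{n}/\Pi(n/2)$ and checks the familiar low-dimensional cases; it assumes nothing beyond the formula just stated.

```python
import math

def ball_volume(n, r=1.0):
    """Volume of an n-dimensional ball of radius r: pi**(n/2) * r**n / Pi(n/2),
    where Pi(x) = Gamma(x + 1)."""
    return math.pi ** (n / 2) * r ** n / math.gamma(n / 2 + 1)

print(ball_volume(1), 2.0)              # length of the interval [-1, 1]
print(ball_volume(2), math.pi)          # area of the unit disk
print(ball_volume(3), 4 * math.pi / 3)  # volume of the unit ball
```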
Relation to other functions
In the first integral defining the gamma function, the limits of integration are fixed. The upper incomplete gamma function is obtained by allowing the lower limit of integration to vary: $\Gamma(z, x) = \int_x^\infty t^{z-1} e^{-t}\,dt.$ There is a similar lower incomplete gamma function.
The gamma function is related to Euler's beta function by the formula $B(x, y) = \int_0^1 t^{x-1}(1-t)^{y-1}\,dt = \dfrac{\Gamma(x)\,\Gamma(y)}{\Gamma(x+y)}.$
The logarithmic derivative of the gamma function is called the digamma function; higher derivatives are the polygamma functions.
The analog of the gamma function over a finite field or a finite ring is the Gaussian sum, a type of exponential sum.
The reciprocal gamma function is an entire function and has been studied as a specific topic.
The gamma function also shows up in an important relation with the Riemann zeta function, $\zeta(z)$. It also appears in the following formula: $\zeta(z)\,\Gamma(z) = \int_0^\infty \dfrac{u^{z-1}}{e^{u}-1}\,du,$ which is valid only for $\operatorname{Re}(z) > 1$. The logarithm of the gamma function satisfies the following formula due to Lerch: $\log\Gamma(x) = \zeta_H'(0, x) - \zeta'(0),$ where $\zeta_H$ is the Hurwitz zeta function, $\zeta$ is the Riemann zeta function and the prime ($'$) denotes differentiation in the first variable.
The gamma function is related to the stretched exponential function. For instance, the moments of that function are $\langle\tau^{n}\rangle = \int_0^\infty t^{n-1} e^{-(t/\tau)^{\beta}}\,dt = \dfrac{\tau^{n}}{\beta}\,\Gamma\!\left(\dfrac{n}{\beta}\right).$
Particular values
Including up to the first 20 digits after the decimal point, some particular values of the gamma function are:
(These numbers can be found in the OEIS. The values presented here are truncated rather than rounded.)
The complex-valued gamma function is undefined for non-positive integers, but in these cases the value can be defined in the Riemann sphere as $\infty$. The reciprocal gamma function is well defined and analytic at these values (and in the entire complex plane): $\dfrac{1}{\Gamma(-n)} = 0$ for $n = 0, 1, 2, \ldots$.
Log-gamma function
Because the gamma and factorial functions grow so rapidly for moderately large arguments, many computing environments include a function that returns the natural logarithm of the gamma function, often given the name lgamma or lngamma in programming environments or gammaln in spreadsheets. This grows much more slowly, and for combinatorial calculations allows adding and subtracting logarithmic values instead of multiplying and dividing very large values. It is often defined as $\ln\Gamma(z) = -\gamma z - \ln z + \sum_{k=1}^{\infty}\left[\frac{z}{k} - \ln\!\left(1+\frac{z}{k}\right)\right].$
The digamma function, which is the derivative of this function, is also commonly seen.
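A typical use of the log-gamma function is computing quantities such as binomial coefficients whose intermediate factorials would overflow; the sketch below uses Python's math.lgamma for this, and the specific numbers are chosen arbitrarily.

```python
import math

def log_binomial(n, k):
    """log of C(n, k) computed through log-gamma, avoiding huge intermediate factorials."""
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

# C(10000, 5000) has thousands of digits; its logarithm is perfectly manageable.
print(log_binomial(10000, 5000))

# For small cases, exponentiating the result recovers the exact value.
print(round(math.exp(log_binomial(10, 3))), math.comb(10, 3))
```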
In the context of technical and physical applications, e.g. with wave propagation, the functional equation $\ln\Gamma(z+1) = \ln z + \ln\Gamma(z)$
is often used since it allows one to determine function values in one strip of width 1 in $\operatorname{Re}(z)$ from the neighbouring strip. In particular, starting with a good approximation for a $z$ with large real part one may go step by step down to the desired $z$. Following an indication of Carl Friedrich Gauss, Rocktaeschel (1922) proposed for large $\operatorname{Re}(z)$ the approximation $\ln\Gamma(z) \approx \left(z-\tfrac{1}{2}\right)\ln z - z + \tfrac{1}{2}\ln(2\pi).$
This can be used to accurately approximate $\ln\Gamma(z)$ for $z$ with a smaller $\operatorname{Re}(z)$ via (P. E. Böhmer, 1939) $\ln\Gamma(z-m) = \ln\Gamma(z) - \sum_{k=1}^{m}\ln(z-k).$
A more accurate approximation can be obtained by using more terms from the asymptotic expansions of and , which are based on Stirling's approximation.
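The shift-and-approximate strategy just described can be sketched as follows: approximate $\ln\Gamma$ at an argument shifted up to a large real part using the leading terms of the asymptotic expansion, then step back down with $\ln\Gamma(z) = \ln\Gamma(z+1) - \ln z$. The number of shift steps, the inclusion of the first correction term, and the comparison against math.lgamma are illustrative choices, not prescriptions from the text.

```python
import math

def ln_gamma_large(x):
    """Leading terms of the asymptotic (Stirling-type) expansion of ln Gamma,
    accurate when x is large: (x - 1/2) ln x - x + (1/2) ln(2 pi) + 1/(12 x)."""
    return (x - 0.5) * math.log(x) - x + 0.5 * math.log(2 * math.pi) + 1.0 / (12.0 * x)

def ln_gamma(x, shift=20):
    """Approximate ln Gamma(x) for moderate x > 0 by shifting the argument up by
    `shift` integers, applying the large-argument formula there, and stepping
    back down with ln Gamma(z) = ln Gamma(z + 1) - ln z."""
    value = ln_gamma_large(x + shift)
    for k in range(shift):
        value -= math.log(x + k)
    return value

for x in (0.5, 1.5, 3.7):
    print(x, ln_gamma(x), math.lgamma(x))
```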
as at constant . (See sequences and in the OEIS.)
In a more "natural" presentation:
as at constant . (See sequences and in the OEIS.)
The coefficients of the terms with $k > 1$ of $z^{1-k}$ in the last expansion are simply $\dfrac{B_k}{k(k-1)},$
where the $B_k$ are the Bernoulli numbers.
The gamma function also has Stirling Series (derived by Charles Hermite in 1900) equal to
Properties
The Bohr–Mollerup theorem states that among all functions extending the factorial function to the positive real numbers, only the gamma function is log-convex, that is, its natural logarithm is convex on the positive real axis. Another characterisation is given by the Wielandt theorem.
The gamma function is the unique function that simultaneously satisfies
$\Gamma(1) = 1$,
$\Gamma(z+1) = z\,\Gamma(z)$ for all complex numbers $z$ except the non-positive integers, and,
$\lim_{n\to\infty}\dfrac{\Gamma(n+z)}{\Gamma(n)\,n^{z}} = 1$ for all complex numbers $z$, where $n$ runs through the positive integers.
In a certain sense, the log-gamma function is the more natural form; it makes some intrinsic attributes of the function clearer. A striking example is the Taylor series of $\ln\Gamma$ around 1: $\ln\Gamma(1+z) = -\gamma z + \sum_{k=2}^{\infty}\frac{(-1)^{k}\,\zeta(k)}{k}\,z^{k}, \quad |z| < 1,$
with $\zeta(k)$ denoting the Riemann zeta function at $k$.
So, using the following property:
an integral representation for the log-gamma function is:
or, setting to obtain an integral for , we can replace the term with its integral and incorporate that into the above formula, to get:
There also exist special formulas for the logarithm of the gamma function for rational .
For instance, if and are integers with and then
This formula is sometimes used for numerical computation, since the integrand decreases very quickly.
Integration over log-gamma
The integral $\int_0^z \ln\Gamma(x)\,dx$
can be expressed in terms of the Barnes $G$-function (see Barnes $G$-function for a proof): $\int_0^z \ln\Gamma(x)\,dx = \frac{z(1-z)}{2} + \frac{z}{2}\ln(2\pi) + z\ln\Gamma(z) - \ln G(1+z),$
where $\operatorname{Re}(z) > -1$.
It can also be written in terms of the Hurwitz zeta function:
When $z = 1$ it follows that $\int_0^1 \ln\Gamma(x)\,dx = \tfrac{1}{2}\ln(2\pi),$
and this is a consequence of Raabe's formula as well. O. Espinosa and V. Moll derived a similar formula for the integral of the square of :
where $A$ is the Glaisher–Kinkelin constant.
D. H. Bailey and his co-authors gave an evaluation for
when in terms of the Tornheim–Witten zeta function and its derivatives.
In addition, it is also known that
Approximations
Complex values of the gamma function can be approximated using Stirling's approximation or the Lanczos approximation.
This is precise in the sense that the ratio of the approximation to the true value approaches 1 in the limit as goes to infinity.
The gamma function can be computed to fixed precision for $\operatorname{Re}(z) > 0$ by applying integration by parts to Euler's integral. For any positive number $x$ the gamma function can be written $\Gamma(z) = x^{z} e^{-x}\sum_{n=0}^{\infty}\frac{x^{n}}{z(z+1)\cdots(z+n)} + \int_x^\infty t^{z-1} e^{-t}\,dt.$
When $1 \le \operatorname{Re}(z) \le 2$ and $x \ge 1$, the absolute value of the last integral is smaller than $(x+1)e^{-x}$. By choosing a large enough $x$, this last expression can be made smaller than $2^{-N}$ for any desired value $N$. Thus, the gamma function can be evaluated to $N$ bits of precision with the above series.
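The integration-by-parts series can be implemented directly; in the sketch below the split parameter x and the truncation level are arbitrary illustrative values, and the discarded upper tail is simply assumed to be negligible for that choice of x.

```python
import math

def gamma_series(z, x=40.0, terms=200):
    """Approximate Gamma(z) for real z > 0 via
    Gamma(z) ~ x**z * exp(-x) * sum_{n>=0} x**n / (z (z+1) ... (z+n)),
    neglecting the remaining integral from x to infinity (tiny for large x)."""
    term = 1.0 / z          # n = 0 term: x**0 / z
    total = term
    for n in range(1, terms):
        term *= x / (z + n)  # multiply by x/(z + n) to get the next term
        total += term
    return x ** z * math.exp(-x) * total

for z in (0.5, 1.0, 3.2):
    print(z, gamma_series(z), math.gamma(z))
```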
A fast algorithm for calculation of the Euler gamma function for any algebraic argument (including rational) was constructed by E.A. Karatsuba.
For arguments that are integer multiples of , the gamma function can also be evaluated quickly using arithmetic–geometric mean iterations (see particular values of the gamma function).
Practical implementations
Unlike many other special functions, such as the normal distribution, there is no obvious fast, accurate, and easy-to-implement approximation for the gamma function. It is therefore worth investigating potential solutions. When speed matters more than accuracy, published tables for $\Gamma$ are easily found in an Internet search, such as the Online Wiley Library. Such tables may be used with linear interpolation. Greater accuracy is obtainable with the use of cubic interpolation at the cost of more computational overhead. Since tables are usually published for argument values between 1 and 2, the recurrence $\Gamma(x+1) = x\,\Gamma(x)$ may be used to quickly and easily translate any real value $x > 0$ into the range $1 \le x \le 2$, such that only tabulated values of $\Gamma$ between 1 and 2 need be used.
If interpolation tables are not desirable, then the Lanczos approximation mentioned above works well for 1 to 2 digits of accuracy for small, commonly used values of $z$. If the Lanczos approximation is not sufficiently accurate, Stirling's formula for the gamma function may be used.
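The range-reduction trick just described amounts to a few lines of code; in this sketch math.gamma on [1, 2] stands in for a lookup table purely to keep the example self-contained, whereas a real implementation would interpolate tabulated values instead.

```python
import math

def gamma_via_reduction(x):
    """Evaluate Gamma(x) for real x > 0 by reducing the argument into [1, 2]
    with Gamma(x + 1) = x * Gamma(x), then evaluating on the reduced range
    (math.gamma stands in for an interpolation table here)."""
    factor = 1.0
    # Step large arguments down: Gamma(x) = (x - 1) * Gamma(x - 1)
    while x > 2.0:
        x -= 1.0
        factor *= x
    # Step small arguments up: Gamma(x) = Gamma(x + 1) / x
    while x < 1.0:
        factor /= x
        x += 1.0
    return factor * math.gamma(x)

for value in (0.3, 1.5, 6.25):
    print(value, gamma_via_reduction(value), math.gamma(value))
```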
Applications
One author describes the gamma function as "Arguably, the most common special function, or the least 'special' of them. The other transcendental functions […] are called 'special' because you could conceivably avoid some of them by staying away from many specialized mathematical topics. On the other hand, the gamma function is most difficult to avoid."
Integration problems
The gamma function finds application in such diverse areas as quantum physics, astrophysics and fluid dynamics. The gamma distribution, which is formulated in terms of the gamma function, is used in statistics to model a wide range of processes; for example, the time between occurrences of earthquakes.
The primary reason for the gamma function's usefulness in such contexts is the prevalence of expressions of the type $f(t)\,e^{-g(t)}$ which describe processes that decay exponentially in time or space. Integrals of such expressions can occasionally be solved in terms of the gamma function when no elementary solution exists. For example, if $f$ is a power function and $g$ is a linear function, a simple change of variables gives the evaluation $\int_0^\infty t^{b} e^{-at}\,dt = \dfrac{\Gamma(b+1)}{a^{b+1}}.$
The fact that the integration is performed along the entire positive real line might signify that the gamma function describes the cumulation of a time-dependent process that continues indefinitely, or the value might be the total of a distribution in an infinite space.
It is of course frequently useful to take limits of integration other than 0 and to describe the cumulation of a finite process, in which case the ordinary gamma function is no longer a solution; the solution is then called an incomplete gamma function. (The ordinary gamma function, obtained by integrating across the entire positive real line, is sometimes called the complete gamma function for contrast.)
An important category of exponentially decaying functions is that of Gaussian functions
and integrals thereof, such as the error function. There are many interrelations between these functions and the gamma function; notably, the factor obtained by evaluating is the "same" as that found in the normalizing factor of the error function and the normal distribution.
The integrals discussed so far involve transcendental functions, but the gamma function also arises from integrals of purely algebraic functions. In particular, the arc lengths of ellipses and of the lemniscate, which are curves defined by algebraic equations, are given by elliptic integrals that in special cases can be evaluated in terms of the gamma function. The gamma function can also be used to calculate "volume" and "area" of -dimensional hyperspheres.
Calculating products
The gamma function's ability to generalize factorial products immediately leads to applications in many areas of mathematics; in combinatorics, and by extension in areas such as probability theory and the calculation of power series. Many expressions involving products of successive integers can be written as some combination of factorials, the most important example perhaps being that of the binomial coefficient. For example, for any complex numbers $a$ and $b$ for which the gamma functions involved are defined, we can write $\binom{a}{b} = \dfrac{\Gamma(a+1)}{\Gamma(b+1)\,\Gamma(a-b+1)},$
which closely resembles the binomial coefficient when $a$ is a non-negative integer, $\binom{a}{b} = \dfrac{a!}{b!\,(a-b)!}.$
The example of binomial coefficients motivates why the properties of the gamma function when extended to negative numbers are natural. A binomial coefficient gives the number of ways to choose $b$ elements from a set of $a$ elements; if $b > a$, there are of course no ways. If $b > a$, $(a-b)!$ is the factorial of a negative integer and hence infinite if we use the gamma function definition of factorials; dividing by infinity gives the expected value of 0.
We can replace the factorial by a gamma function to extend any such formula to the complex numbers. Generally, this works for any product wherein each factor is a rational function of the index variable, by factoring the rational function into linear expressions. If $P$ and $Q$ are monic polynomials of degree $m$ and $n$ with respective roots $p_1,\dots,p_m$ and $q_1,\dots,q_n$, we have $\prod_{k=a}^{b}\frac{P(k)}{Q(k)} = \left(\prod_{j=1}^{m}\frac{\Gamma(b+1-p_j)}{\Gamma(a-p_j)}\right)\left(\prod_{j=1}^{n}\frac{\Gamma(a-q_j)}{\Gamma(b+1-q_j)}\right).$
If we have a way to calculate the gamma function numerically, it is very simple to calculate numerical values of such products. The number of gamma functions in the right-hand side depends only on the degree of the polynomials, so it does not matter whether equals 5 or 105. By taking the appropriate limits, the equation can also be made to hold even when the left-hand product contains zeros or poles.
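As a worked instance of that reduction, the sketch below evaluates a finite product of a simple rational function both directly and via ratios of gamma values; the particular linear factors and the range of the product are arbitrary.

```python
import math

def product_direct(a, b):
    """Direct evaluation of prod_{k=a}^{b} (k + 2) / (k + 5)."""
    result = 1.0
    for k in range(a, b + 1):
        result *= (k + 2) / (k + 5)
    return result

def product_via_gamma(a, b):
    """Same product expressed with gamma ratios, using
    prod_{k=a}^{b} (k - r) = Gamma(b + 1 - r) / Gamma(a - r) with roots r = -2 and r = -5."""
    numerator = math.gamma(b + 3) / math.gamma(a + 2)    # root r = -2
    denominator = math.gamma(b + 6) / math.gamma(a + 5)  # root r = -5
    return numerator / denominator

print(product_direct(1, 20), product_via_gamma(1, 20))
```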
By taking limits, certain rational products with infinitely many factors can be evaluated in terms of the gamma function as well. Due to the Weierstrass factorization theorem, analytic functions can be written as infinite products, and these can sometimes be represented as finite products or quotients of the gamma function. We have already seen one striking example: the reflection formula essentially represents the sine function as the product of two gamma functions. Starting from this formula, the exponential function as well as all the trigonometric and hyperbolic functions can be expressed in terms of the gamma function.
More functions yet, including the hypergeometric function and special cases thereof, can be represented by means of complex contour integrals of products and quotients of the gamma function, called Mellin–Barnes integrals.
Analytic number theory
An application of the gamma function is the study of the Riemann zeta function. A fundamental property of the Riemann zeta function is its functional equation: $\pi^{-\frac{s}{2}}\,\Gamma\!\left(\tfrac{s}{2}\right)\zeta(s) = \pi^{-\frac{1-s}{2}}\,\Gamma\!\left(\tfrac{1-s}{2}\right)\zeta(1-s).$
Among other things, this provides an explicit form for the analytic continuation of the zeta function to a meromorphic function in the complex plane and leads to an immediate proof that the zeta function has infinitely many so-called "trivial" zeros on the real line. Borwein et al. call this formula "one of the most beautiful findings in mathematics". Another contender for that title might be $\zeta(z)\,\Gamma(z) = \int_0^\infty \dfrac{u^{z-1}}{e^{u}-1}\,du.$
Both formulas were derived by Bernhard Riemann in his seminal 1859 paper "Ueber die Anzahl der Primzahlen unter einer gegebenen Größe" ("On the Number of Primes Less Than a Given Magnitude"), one of the milestones in the development of analytic number theory—the branch of mathematics that studies prime numbers using the tools of mathematical analysis.
History
The gamma function has caught the interest of some of the most prominent mathematicians of all time. Its history, notably documented by Philip J. Davis in an article that won him the 1963 Chauvenet Prize, reflects many of the major developments within mathematics since the 18th century. In the words of Davis, "each generation has found something of interest to say about the gamma function. Perhaps the next generation will also."
18th century: Euler and Stirling
The problem of extending the factorial to non-integer arguments was apparently first considered by Daniel Bernoulli and Christian Goldbach in the 1720s. In particular, in a letter from Bernoulli to Goldbach dated 6 October 1729 Bernoulli introduced the product representation
which is well defined for real values of other than the negative integers.
Leonhard Euler later gave two different definitions: the first was not his integral but an infinite product that is well defined for all complex numbers other than the negative integers,
of which he informed Goldbach in a letter dated 13 October 1729. He wrote to Goldbach again on 8 January 1730, to announce his discovery of the integral representation
which is valid when the real part of the complex number $n$ is strictly greater than $-1$. By the change of variables $t = -\ln s$, this becomes the familiar Euler integral. Euler published his results in the paper "De progressionibus transcendentibus seu quarum termini generales algebraice dari nequeunt" ("On transcendental progressions, that is, those whose general terms cannot be given algebraically"), submitted to the St. Petersburg Academy on 28 November 1729. Euler further discovered some of the gamma function's important functional properties, including the reflection formula.
James Stirling, a contemporary of Euler, also attempted to find a continuous expression for the factorial and came up with what is now known as Stirling's formula. Although Stirling's formula gives a good estimate of , also for non-integers, it does not provide the exact value. Extensions of his formula that correct the error were given by Stirling himself and by Jacques Philippe Marie Binet.
19th century: Gauss, Weierstrass and Legendre
Carl Friedrich Gauss rewrote Euler's product as $\Gamma(z) = \lim_{m\to\infty}\dfrac{m^{z}\,m!}{z(z+1)(z+2)\cdots(z+m)}$
and used this formula to discover new properties of the gamma function. Although Euler was a pioneer in the theory of complex variables, he does not appear to have considered the factorial of a complex number, as instead Gauss first did. Gauss also proved the multiplication theorem of the gamma function and investigated the connection between the gamma function and elliptic integrals.
Karl Weierstrass further established the role of the gamma function in complex analysis, starting from yet another product representation, $\Gamma(z) = \dfrac{e^{-\gamma z}}{z}\prod_{k=1}^{\infty}\left(1+\dfrac{z}{k}\right)^{-1}e^{z/k},$
where $\gamma$ is the Euler–Mascheroni constant. Weierstrass originally wrote his product as one for $1/\Gamma$, in which case it is taken over the function's zeros rather than its poles. Inspired by this result, he proved what is known as the Weierstrass factorization theorem—that any entire function can be written as a product over its zeros in the complex plane; a generalization of the fundamental theorem of algebra.
The name gamma function and the symbol were introduced by Adrien-Marie Legendre around 1811; Legendre also rewrote Euler's integral definition in its modern form. Although the symbol is an upper-case Greek gamma, there is no accepted standard for whether the function name should be written "gamma function" or "Gamma function" (some authors simply write "-function"). The alternative "pi function" notation due to Gauss is sometimes encountered in older literature, but Legendre's notation is dominant in modern works.
It is justified to ask why we distinguish between the "ordinary factorial" and the gamma function by using distinct symbols, and particularly why the gamma function should be normalized to $\Gamma(n) = (n-1)!$ instead of simply using "$\Gamma(n) = n!$". Consider that the notation for exponents, $x^{n}$, has been generalized from integers to complex numbers $x^{z}$ without any change. Legendre's motivation for the normalization is not known, and has been criticized as cumbersome by some (the 20th-century mathematician Cornelius Lanczos, for example, called it "void of any rationality" and would instead use $z!$). Legendre's normalization does simplify some formulae, but complicates others. From a modern point of view, the Legendre normalization of the gamma function is the integral of the additive character $e^{-x}$ against the multiplicative character $x^{z}$ with respect to the Haar measure $\tfrac{dx}{x}$ on the Lie group $\mathbb{R}^{+}$. Thus this normalization makes it clearer that the gamma function is a continuous analogue of a Gauss sum.
19th–20th centuries: characterizing the gamma function
It is somewhat problematic that a large number of definitions have been given for the gamma function. Although they describe the same function, it is not entirely straightforward to prove the equivalence. Stirling never proved that his extended formula corresponds exactly to Euler's gamma function; a proof was first given by Charles Hermite in 1900. Instead of finding a specialized proof for each formula, it would be desirable to have a general method of identifying the gamma function.
One way to prove equivalence would be to find a differential equation that characterizes the gamma function. Most special functions in applied mathematics arise as solutions to differential equations, whose solutions are unique. However, the gamma function does not appear to satisfy any simple differential equation. Otto Hölder proved in 1887 that the gamma function at least does not satisfy any algebraic differential equation by showing that a solution to such an equation could not satisfy the gamma function's recurrence formula, making it a transcendentally transcendental function. This result is known as Hölder's theorem.
A definite and generally applicable characterization of the gamma function was not given until 1922. Harald Bohr and Johannes Mollerup then proved what is known as the Bohr–Mollerup theorem: that the gamma function is the unique solution to the factorial recurrence relation that is positive and logarithmically convex for positive and whose value at 1 is 1 (a function is logarithmically convex if its logarithm is convex). Another characterisation is given by the Wielandt theorem.
The Bohr–Mollerup theorem is useful because it is relatively easy to prove logarithmic convexity for any of the different formulas used to define the gamma function. Taking things further, instead of defining the gamma function by any particular formula, we can choose the conditions of the Bohr–Mollerup theorem as the definition, and then pick any formula we like that satisfies the conditions as a starting point for studying the gamma function. This approach was used by the Bourbaki group.
Borwein & Corless review three centuries of work on the gamma function.
Reference tables and software
Although the gamma function can be calculated virtually as easily as any mathematically simpler function with a modern computer—even with a programmable pocket calculator—this was of course not always the case. Until the mid-20th century, mathematicians relied on hand-made tables; in the case of the gamma function, notably a table computed by Gauss in 1813 and one computed by Legendre in 1825.
Tables of complex values of the gamma function, as well as hand-drawn graphs, were given in Tables of Functions With Formulas and Curves by Jahnke and Emde, first published in Germany in 1909. According to Michael Berry, "the publication in J&E of a three-dimensional graph showing the poles of the gamma function in the complex plane acquired an almost iconic status."
There was in fact little practical need for anything but real values of the gamma function until the 1930s, when applications for the complex gamma function were discovered in theoretical physics. As electronic computers became available for the production of tables in the 1950s, several extensive tables for the complex gamma function were published to meet the demand, including a table accurate to 12 decimal places from the U.S. National Bureau of Standards.
Double-precision floating-point implementations of the gamma function and its logarithm are now available in most scientific computing software and special functions libraries, for example TK Solver, Matlab, GNU Octave, and the GNU Scientific Library. The gamma function was also added to the C standard library (math.h). Arbitrary-precision implementations are available in most computer algebra systems, such as Mathematica and Maple. PARI/GP, MPFR and MPFUN contain free arbitrary-precision implementations. In some software calculators, e.g. Windows Calculator and GNOME Calculator, the factorial function returns Γ(x + 1) when the input x is a non-integer value.
| Mathematics | Basics | null |
12339 | https://en.wikipedia.org/wiki/Genetically%20modified%20organism | Genetically modified organism | A genetically modified organism (GMO) is any organism whose genetic material has been altered using genetic engineering techniques. The exact definition of a genetically modified organism and what constitutes genetic engineering varies, with the most common being an organism altered in a way that "does not occur naturally by mating and/or natural recombination". A wide variety of organisms have been genetically modified (GM), including animals, plants, and microorganisms.
Genetic modification can include the introduction of new genes or enhancing, altering, or knocking out endogenous genes. In some genetic modifications, genes are transferred within the same species, across species (creating transgenic organisms), and even across kingdoms. Creating a genetically modified organism is a multi-step process. Genetic engineers must isolate the gene they wish to insert into the host organism and combine it with other genetic elements, including a promoter and terminator region and often a selectable marker. A number of techniques are available for inserting the isolated gene into the host genome. Recent advancements using genome editing techniques, notably CRISPR, have made the production of GMOs much simpler. Herbert Boyer and Stanley Cohen made the first genetically modified organism in 1973, a bacterium resistant to the antibiotic kanamycin. The first genetically modified animal, a mouse, was created in 1974 by Rudolf Jaenisch, and the first plant was produced in 1983. In 1994, the Flavr Savr tomato was released, the first commercialized genetically modified food. The first genetically modified animal to be commercialized was the GloFish (2003) and the first genetically modified animal to be approved for food use was the AquAdvantage salmon in 2015.
Bacteria are the easiest organisms to engineer and have been used for research, food production, industrial protein purification (including drugs), agriculture, and art. There is potential to use them for environmental purposes or as medicine. Fungi have been engineered with much the same goals. Viruses play an important role as vectors for inserting genetic information into other organisms. This use is especially relevant to human gene therapy. There are proposals to remove the virulent genes from viruses to create vaccines. Plants have been engineered for scientific research, to create new colors in plants, deliver vaccines, and to create enhanced crops. Genetically modified crops are publicly the most controversial GMOs, in spite of having the most human health and environmental benefits. Animals are generally much harder to transform and the vast majority are still at the research stage. Mammals are the best model organisms for humans. Livestock is modified with the intention of improving economically important traits such as growth rate, quality of meat, milk composition, disease resistance, and survival. Genetically modified fish are used for scientific research, as pets, and as a food source. Genetic engineering has been proposed as a way to control mosquitos, a vector for many deadly diseases. Although human gene therapy is still relatively new, it has been used to treat genetic disorders such as severe combined immunodeficiency and Leber's congenital amaurosis.
Many objections have been raised over the development of GMOs, particularly their commercialization. Many of these involve GM crops and whether food produced from them is safe and what impact growing them will have on the environment. Other concerns are the objectivity and rigor of regulatory authorities, contamination of non-genetically modified food, control of the food supply, patenting of life, and the use of intellectual property rights. Although there is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, GM food safety is a leading issue with critics. Gene flow, impact on non-target organisms, and escape are the major environmental concerns. Countries have adopted regulatory measures to deal with these concerns. There are differences in the regulation for the release of GMOs between countries, with some of the most marked differences occurring between the US and Europe. Key issues concerning regulators include whether GM food should be labeled and the status of gene-edited organisms.
Definition
The definition of a genetically modified organism (GMO) is not clear and varies widely between countries, international bodies, and other communities. At its broadest, the definition of a GMO can include anything that has had its genes altered, including by nature. Taking a less broad view, it can encompass every organism that has had its genes altered by humans, which would include all crops and livestock. In 1993, the Encyclopedia Britannica defined genetic engineering as "any of a wide range of techniques ... among them artificial insemination, in vitro fertilization (e.g., 'test-tube' babies), sperm banks, cloning, and gene manipulation." The European Union (EU) included a similarly broad definition in early reviews, specifically mentioning GMOs being produced by "selective breeding and other means of artificial selection" These definitions were promptly adjusted with a number of exceptions added as the result of pressure from scientific and farming communities, as well as developments in science. The EU definition later excluded traditional breeding, in vitro fertilization, induction of polyploidy, mutation breeding, and cell fusion techniques that do not use recombinant nucleic acids or a genetically modified organism in the process.
Another approach was the definition provided by the Food and Agriculture Organization, the World Health Organization, and the European Commission, stating that the organisms must be altered in a way that does "not occur naturally by mating and/or natural recombination". Progress in science, such as the discovery of horizontal gene transfer being a relatively common natural phenomenon, further added to the confusion on what "occurs naturally", which led to further adjustments and exceptions. There are examples of crops that fit this definition, but are not normally considered GMOs. For example, the grain crop triticale was fully developed in a laboratory in 1930 using various techniques to alter its genome.
Genetically engineered organism (GEO) can be considered a more precise term compared to GMO when describing organisms' genomes that have been directly manipulated with biotechnology. The Cartagena Protocol on Biosafety used the synonym living modified organism (LMO) in 2000 and defined it as "any living organism that possesses a novel combination of genetic material obtained through the use of modern biotechnology." Modern biotechnology is further defined as "In vitro nucleic acid techniques, including recombinant deoxyribonucleic acid (DNA) and direct injection of nucleic acid into cells or organelles, or fusion of cells beyond the taxonomic family."
Originally, the term GMO was not commonly used by scientists to describe genetically engineered organisms until after usage of GMO became common in popular media. The United States Department of Agriculture (USDA) considers GMOs to be plants or animals with heritable changes introduced by genetic engineering or traditional methods, while GEO specifically refers to organisms with genes introduced, eliminated, or rearranged using molecular biology, particularly recombinant DNA techniques, such as transgenesis.
The definitions focus on the process more than the product, which means there could be GMOs and non-GMOs with very similar genotypes and phenotypes. This has led scientists to label it as a scientifically meaningless category, saying that it is impossible to group all the different types of GMOs under one common definition. It has also caused issues for organic institutions and groups looking to ban GMOs. It also poses problems as new processes are developed. The current definitions came in before genome editing became popular and there is some confusion as to whether gene-edited organisms are GMOs. The EU has adjusted its GMO definition to include "organisms obtained by mutagenesis", but has excluded them from regulation based on their "long safety record" and because they have "conventionally been used in a number of applications". In contrast the USDA has ruled that gene-edited organisms are not considered GMOs.
Even greater inconsistency and confusion is associated with various "Non-GMO" or "GMO-free" labeling schemes in food marketing, where even products such as water or salt, which do not contain any organic substances and genetic material (and thus cannot be genetically modified by definition), are being labeled to create an impression of being "more healthy".
Production
Creating a genetically modified organism (GMO) is a multi-step process. Genetic engineers must isolate the gene they wish to insert into the host organism. This gene can be taken from a cell or artificially synthesized. If the chosen gene or the donor organism's genome has been well studied it may already be accessible from a genetic library. The gene is then combined with other genetic elements, including a promoter and terminator region and a selectable marker.
A number of techniques are available for inserting the isolated gene into the host genome. Bacteria can be induced to take up foreign DNA, usually by exposure to heat shock or electroporation. DNA is generally inserted into animal cells using microinjection, where it can be injected through the cell's nuclear envelope directly into the nucleus, or through the use of viral vectors. In plants the DNA is often inserted using Agrobacterium-mediated recombination, biolistics or electroporation.
As only a single cell is transformed with genetic material, the organism must be regenerated from that single cell. In plants this is accomplished through tissue culture. In animals it is necessary to ensure that the inserted DNA is present in the embryonic stem cells. Further testing using PCR, Southern hybridization, and DNA sequencing is conducted to confirm that an organism contains the new gene.
Traditionally the new genetic material was inserted randomly within the host genome. Gene targeting techniques, which create double-stranded breaks and take advantage of the cell's natural homologous recombination repair systems, have been developed to target insertion to exact locations. Genome editing uses artificially engineered nucleases that create breaks at specific points. There are four families of engineered nucleases: meganucleases, zinc finger nucleases, transcription activator-like effector nucleases (TALENs), and the Cas9-guideRNA system (adapted from CRISPR). TALEN and CRISPR are the two most commonly used and each has its own advantages. TALENs have greater target specificity, while CRISPR is easier to design and more efficient.
History
Humans have domesticated plants and animals since around 12,000 BCE, using selective breeding or artificial selection (as contrasted with natural selection). The process of selective breeding, in which organisms with desired traits (and thus with the desired genes) are used to breed the next generation and organisms lacking the trait are not bred, is a precursor to the modern concept of genetic modification. Various advancements in genetics allowed humans to directly alter the DNA and therefore genes of organisms. In 1972, Paul Berg created the first recombinant DNA molecule when he combined DNA from a monkey virus with that of the lambda virus.
Herbert Boyer and Stanley Cohen made the first genetically modified organism in 1973. They took a gene from a bacterium that provided resistance to the antibiotic kanamycin, inserted it into a plasmid and then induced other bacteria to incorporate the plasmid. The bacteria that had successfully incorporated the plasmid was then able to survive in the presence of kanamycin. Boyer and Cohen expressed other genes in bacteria. This included genes from the toad Xenopus laevis in 1974, creating the first GMO expressing a gene from an organism of a different kingdom.
In 1974, Rudolf Jaenisch created a transgenic mouse by introducing foreign DNA into its embryo, making it the world's first transgenic animal. However it took another eight years before transgenic mice were developed that passed the transgene to their offspring. Genetically modified mice were created in 1984 that carried cloned oncogenes, predisposing them to developing cancer. Mice with genes removed (termed a knockout mouse) were created in 1989. The first transgenic livestock were produced in 1985 and the first animal to synthesize transgenic proteins in their milk were mice in 1987. The mice were engineered to produce human tissue plasminogen activator, a protein involved in breaking down blood clots.
In 1983, the first genetically engineered plant was developed by Michael W. Bevan, Richard B. Flavell and Mary-Dell Chilton. They infected tobacco with Agrobacterium transformed with an antibiotic resistance gene and through tissue culture techniques were able to grow a new plant containing the resistance gene. The gene gun was invented in 1987, allowing transformation of plants not susceptible to Agrobacterium infection. In 2000, Vitamin A-enriched golden rice was the first plant developed with increased nutrient value.
In 1976, Genentech, the first genetic engineering company was founded by Herbert Boyer and Robert Swanson; a year later, the company produced a human protein (somatostatin) in E. coli. Genentech announced the production of genetically engineered human insulin in 1978. The insulin produced by bacteria, branded Humulin, was approved for release by the Food and Drug Administration in 1982. In 1988, the first human antibodies were produced in plants. In 1987, a strain of Pseudomonas syringae became the first genetically modified organism to be released into the environment when a strawberry and potato field in California were sprayed with it.
The first genetically modified crop, an antibiotic-resistant tobacco plant, was produced in 1982. China was the first country to commercialize transgenic plants, introducing a virus-resistant tobacco in 1992. In 1994, Calgene attained approval to commercially release the Flavr Savr tomato, the first genetically modified food. Also in 1994, the European Union approved tobacco engineered to be resistant to the herbicide bromoxynil, making it the first genetically engineered crop commercialized in Europe. An insect-resistant potato was approved for release in the US in 1995, and by 1996 approval had been granted to commercially grow 8 transgenic crops and one flower crop (carnation) in 6 countries plus the EU.
In 2010, scientists at the J. Craig Venter Institute announced that they had created the first synthetic bacterial genome. They named it Synthia and it was the world's first synthetic life form.
The first genetically modified animal to be commercialized was the GloFish, a Zebra fish with a fluorescent gene added that allows it to glow in the dark under ultraviolet light. It was released to the US market in 2003. In 2015, AquAdvantage salmon became the first genetically modified animal to be approved for food use. Approval is for fish raised in Panama and sold in the US. The salmon were transformed with a growth hormone-regulating gene from a Pacific Chinook salmon and a promoter from an ocean pout enabling it to grow year-round instead of only during spring and summer.
Bacteria
Bacteria were the first organisms to be genetically modified in the laboratory, due to the relative ease of modifying their chromosomes. This ease made them important tools for the creation of other GMOs. Genes and other genetic information from a wide range of organisms can be added to a plasmid and inserted into bacteria for storage and modification. Bacteria are cheap, easy to grow, clonal, multiply quickly and can be stored at −80 °C almost indefinitely. Once a gene is isolated it can be stored inside the bacteria, providing an unlimited supply for research. A large number of custom plasmids make manipulating DNA extracted from bacteria relatively easy.
Their ease of use has made them great tools for scientists looking to study gene function and evolution. The simplest model organisms come from bacteria, with most of our early understanding of molecular biology coming from studying Escherichia coli. Scientists can easily manipulate and combine genes within the bacteria to create novel or disrupted proteins and observe the effect this has on various molecular systems. Researchers have combined the genes from bacteria and archaea, leading to insights on how these two diverged in the past. In the field of synthetic biology, they have been used to test various synthetic approaches, from synthesizing genomes to creating novel nucleotides.
Bacteria have been used in the production of food for a long time, and specific strains have been developed and selected for that work on an industrial scale. They can be used to produce enzymes, amino acids, flavorings, and other compounds used in food production. With the advent of genetic engineering, new genetic changes can easily be introduced into these bacteria. Most food-producing bacteria are lactic acid bacteria, and this is where the majority of research into genetically engineering food-producing bacteria has gone. The bacteria can be modified to operate more efficiently, reduce toxic byproduct production, increase output, create improved compounds, and remove unnecessary pathways. Food products from genetically modified bacteria include alpha-amylase, which converts starch to simple sugars, chymosin, which clots milk protein for cheese making, and pectinesterase, which improves fruit juice clarity. The majority are produced in the US and even though regulations are in place to allow production in Europe, as of 2015 no food products derived from bacteria are currently available there.
Genetically modified bacteria are used to produce large amounts of proteins for industrial use. The bacteria are generally grown to a large volume before the gene encoding the protein is activated. The bacteria are then harvested and the desired protein purified from them. The high cost of extraction and purification has meant that only high value products have been produced at an industrial scale. The majority of these products are human proteins for use in medicine. Many of these proteins are impossible or difficult to obtain via natural methods and they are less likely to be contaminated with pathogens, making them safer. The first medicinal use of GM bacteria was to produce the protein insulin to treat diabetes. Other medicines produced include clotting factors to treat hemophilia, human growth hormone to treat various forms of dwarfism, interferon to treat some cancers, erythropoietin for anemic patients, and tissue plasminogen activator which dissolves blood clots. Outside of medicine they have been used to produce biofuels. There is interest in developing an extracellular expression system within the bacteria to reduce costs and make the production of more products economical.
With a greater understanding of the role that the microbiome plays in human health, there is a potential to treat diseases by genetically altering the bacteria to, themselves, be therapeutic agents. Ideas include altering gut bacteria so they destroy harmful bacteria, or using bacteria to replace or increase deficient enzymes or proteins. One research focus is to modify Lactobacillus, bacteria that naturally provide some protection against HIV, with genes that will further enhance this protection. If the bacteria do not form colonies inside the patient, the person must repeatedly ingest the modified bacteria in order to get the required doses. Enabling the bacteria to form a colony could provide a more long-term solution, but could also raise safety concerns as interactions between bacteria and the human body are less well understood than with traditional drugs. There are concerns that horizontal gene transfer to other bacteria could have unknown effects. As of 2018 there are clinical trials underway testing the efficacy and safety of these treatments.
For over a century, bacteria have been used in agriculture. Crops have been inoculated with Rhizobia (and more recently Azospirillum) to increase their production or to allow them to be grown outside their original habitat. Application of Bacillus thuringiensis (Bt) and other bacteria can help protect crops from insect infestation and plant diseases. With advances in genetic engineering, these bacteria have been manipulated for increased efficiency and expanded host range. Markers have also been added to aid in tracing the spread of the bacteria. The bacteria that naturally colonize certain crops have also been modified, in some cases to express the Bt genes responsible for pest resistance. Pseudomonas strains of bacteria cause frost damage by nucleating water into ice crystals around themselves. This led to the development of ice-minus bacteria, which have the ice-forming genes removed. When applied to crops they can compete with the non-modified bacteria and confer some frost resistance.
Other uses for genetically modified bacteria include bioremediation, where the bacteria are used to convert pollutants into a less toxic form. Genetic engineering can increase the levels of the enzymes used to degrade a toxin or to make the bacteria more stable under environmental conditions. Bioart has also been created using genetically modified bacteria. In the 1980s artist Joe Davis and geneticist Dana Boyd converted the Germanic symbol for femininity (ᛉ) into binary code and then into a DNA sequence, which was then inserted into Escherichia coli. This was taken a step further in 2012, when a whole book was encoded onto DNA. Paintings have also been produced using bacteria transformed with fluorescent proteins.
Viruses
Viruses are often modified so they can be used as vectors for inserting genetic information into other organisms. This process is called transduction, and if successful the recipient of the introduced DNA becomes a GMO. Different viruses have different efficiencies and capabilities. Researchers can use this to control for various factors, including the target location, insert size, and duration of gene expression. Any dangerous sequences inherent in the virus must be removed, while those that allow the gene to be delivered effectively are retained.
While viral vectors can be used to insert DNA into almost any organism, they are especially relevant for their potential in treating human disease. Although primarily still at trial stages, there have been some successes using gene therapy to replace defective genes. This is most evident in curing patients with severe combined immunodeficiency arising from adenosine deaminase deficiency (ADA-SCID), although the development of leukemia in some ADA-SCID patients, along with the death of Jesse Gelsinger in a 1999 trial, set back the development of this approach for many years. In 2009, another breakthrough was achieved when an eight-year-old boy with Leber's congenital amaurosis regained normal eyesight, and in 2016 GlaxoSmithKline gained approval to commercialize a gene therapy treatment for ADA-SCID. As of 2018, there are a substantial number of clinical trials underway, including treatments for hemophilia, glioblastoma, chronic granulomatous disease, cystic fibrosis and various cancers.
The most common virus used for gene delivery comes from adenoviruses as they can carry up to 7.5 kb of foreign DNA and infect a relatively broad range of host cells, although they have been known to elicit immune responses in the host and only provide short term expression. Other common vectors are adeno-associated viruses, which have lower toxicity and longer-term expression, but can only carry about 4kb of DNA. Herpes simplex viruses make promising vectors, having a carrying capacity of over 30kb and providing long term expression, although they are less efficient at gene delivery than other vectors. The best vectors for long term integration of the gene into the host genome are retroviruses, but their propensity for random integration is problematic. Lentiviruses are a part of the same family as retroviruses with the advantage of infecting both dividing and non-dividing cells, whereas retroviruses only target dividing cells. Other viruses that have been used as vectors include alphaviruses, flaviviruses, measles viruses, rhabdoviruses, Newcastle disease virus, poxviruses, and picornaviruses.
Most vaccines consist of viruses that have been attenuated, disabled, weakened or killed in some way so that their virulent properties are no longer effective. Genetic engineering could theoretically be used to create viruses with the virulent genes removed. This does not affect the virus's infectivity, invokes a natural immune response, and there is no chance that the viruses will regain their virulence function, which can occur with some other vaccines. As such they are generally considered safer and more efficient than conventional vaccines, although concerns remain over non-target infection, potential side effects and horizontal gene transfer to other viruses. Another potential approach is to use vectors to create novel vaccines for diseases that have no vaccine available or whose existing vaccines do not work effectively, such as AIDS, malaria, and tuberculosis. The most effective vaccine against tuberculosis, the Bacillus Calmette–Guérin (BCG) vaccine, only provides partial protection. A modified vaccine expressing an M. tuberculosis antigen is able to enhance BCG protection. It has been shown to be safe to use in phase II trials, although not as effective as initially hoped. Other vector-based vaccines have already been approved and many more are being developed.
Another potential use of genetically modified viruses is to alter them so they can directly treat diseases. This can be through expression of protective proteins or by directly targeting infected cells. In 2004, researchers reported that a genetically modified virus that exploits the selfish behavior of cancer cells might offer an alternative way of killing tumours. Since then, several researchers have developed genetically modified oncolytic viruses that show promise as treatments for various types of cancer. In 2017, researchers genetically modified a virus to express spinach defensin proteins. The virus was injected into orange trees to combat citrus greening disease that had reduced orange production by 70% since 2005.
Natural viral diseases, such as myxomatosis and rabbit hemorrhagic disease, have been used to help control pest populations. Over time the surviving pests become resistant, leading researchers to look at alternative methods. Genetically modified viruses that make the target animals infertile through immunocontraception have been created in the laboratory as well as others that target the developmental stage of the animal. There are concerns with using this approach regarding virus containment and cross species infection. Sometimes the same virus can be modified for contrasting purposes. Genetic modification of the myxoma virus has been proposed to conserve European wild rabbits in the Iberian peninsula and to help regulate them in Australia. To protect the Iberian species from viral diseases, the myxoma virus was genetically modified to immunize the rabbits, while in Australia the same myxoma virus was genetically modified to lower fertility in the Australian rabbit population.
Outside of biology, scientists have used a genetically modified virus to construct a lithium-ion battery and other nanostructured materials. It is possible to engineer bacteriophages to express modified proteins on their surface and join them up in specific patterns (a technique called phage display). These structures have potential uses for energy storage and generation, biosensing and tissue regeneration, with some new materials currently produced including quantum dots, liquid crystals, nanorings and nanofibres. The battery was made by engineering M13 bacteriophages so they would coat themselves in iron phosphate and then assemble themselves along a carbon nanotube. This created a highly conductive medium for use in a cathode, allowing energy to be transferred quickly. Such batteries could be constructed at lower temperatures with non-toxic chemicals, making them more environmentally friendly.
Fungi
Fungi can be used for many of the same processes as bacteria. For industrial applications, yeasts combine the bacterial advantages of being a single-celled organism that is easy to manipulate and grow with the advanced protein modifications found in eukaryotes. They can be used to produce large complex molecules for use in food, pharmaceuticals, hormones, and steroids. Yeast is important for wine production and as of 2016 two genetically modified yeasts involved in the fermentation of wine have been commercialized in the United States and Canada. One has increased malolactic fermentation efficiency, while the other prevents the production of dangerous ethyl carbamate compounds during fermentation. There have also been advances in the production of biofuel from genetically modified fungi.
Fungi, being the most common pathogens of insects, make attractive biopesticides. Unlike bacteria and viruses, they have the advantage of infecting the insects by contact alone, although they are outcompeted in efficiency by chemical pesticides. Genetic engineering can improve virulence, usually by adding more virulent proteins, increasing infection rate or enhancing spore persistence. Many of the disease-carrying vectors are susceptible to entomopathogenic fungi. An attractive target for biological control is mosquitoes, vectors for a range of deadly diseases, including malaria, yellow fever and dengue fever. Mosquitoes can evolve quickly, so it becomes a balancing act of killing them before the Plasmodium they carry becomes infectious, but not so fast that they become resistant to the fungi. By genetically engineering fungi like Metarhizium anisopliae and Beauveria bassiana to delay the development of mosquito infectiousness, the selection pressure to evolve resistance is reduced. Another strategy is to add proteins to the fungi that block transmission of malaria or remove the Plasmodium altogether.
Agaricus bisporus, the common white button mushroom, has been gene edited to resist browning, giving it a longer shelf life. The process used CRISPR to knock out a gene that encodes polyphenol oxidase. As it did not introduce any foreign DNA into the organism, the mushroom was not deemed to be regulated under existing GMO frameworks and as such is the first CRISPR-edited organism to be approved for release. This has intensified debates as to whether gene-edited organisms should be considered genetically modified organisms and how they should be regulated.
Plants
Plants have been engineered for scientific research, to display new flower colors, deliver vaccines, and to create enhanced crops. Many plants are pluripotent, meaning that a single cell from a mature plant can be harvested and, under the right conditions, can develop into a new plant. Genetic engineers can take advantage of this ability: by selecting for cells that have been successfully transformed in an adult plant, a new plant can then be grown that contains the transgene in every cell, through a process known as tissue culture.
Many of the advances in the field of genetic engineering have come from experimentation with tobacco. Major advances in tissue culture and plant cellular mechanisms for a wide range of plants have originated from systems developed in tobacco. It was the first plant to be altered using genetic engineering and is considered a model organism for not only genetic engineering but also a range of other fields. As such, the transgenic tools and procedures are well established, making tobacco one of the easiest plants to transform. Another major model organism relevant to genetic engineering is Arabidopsis thaliana. Its small genome and short life cycle make it easy to manipulate, and it contains many homologs to important crop species. It was the first plant sequenced, has a host of online resources available and can be transformed by simply dipping a flower in a transformed Agrobacterium solution.
In research, plants are engineered to help discover the functions of certain genes. The simplest way to do this is to remove the gene and see what phenotype develops compared to the wild type form. Any differences are possibly the result of the missing gene. Unlike mutagenesis, genetic engineering allows targeted removal without disrupting other genes in the organism. Some genes are only expressed in certain tissues, so reporter genes, like GUS, can be attached to the gene of interest, allowing visualization of the location. Another way to test a gene is to alter it slightly and then return it to the plant to see whether it still has the same effect on the phenotype. Other strategies include attaching the gene to a strong promoter to see what happens when it is overexpressed, or forcing a gene to be expressed in a different location or at different developmental stages.
Some genetically modified plants are purely ornamental. They are modified for flower color, fragrance, flower shape and plant architecture. The first genetically modified ornamentals commercialized altered color. Carnations were released in 1997, with the most popular genetically modified organism, a blue rose (actually lavender or mauve), created in 2004. The roses are sold in Japan, the United States, and Canada. Other genetically modified ornamentals include Chrysanthemum and Petunia. As well as increasing aesthetic value, there are plans to develop ornamentals that use less water or are resistant to the cold, which would allow them to be grown outside their natural environments.
It has been proposed to genetically modify some plant species threatened by extinction to be resistant to invasive pests and diseases, such as the emerald ash borer in North America and the fungal disease Ceratocystis platani in European plane trees. The papaya ringspot virus devastated papaya trees in Hawaii in the twentieth century until transgenic papaya plants were given pathogen-derived resistance. However, genetic modification for conservation in plants remains mainly speculative. A unique concern is that a transgenic species may no longer bear enough resemblance to the original species to truly claim that the original species is being conserved. Instead, the transgenic species may be genetically different enough to be considered a new species, thus diminishing the conservation worth of genetic modification.
Crops
Genetically modified crops are genetically modified plants that are used in agriculture. The first crops developed were used for animal or human food and provide resistance to certain pests, diseases, environmental conditions, spoilage or chemical treatments (e.g. resistance to a herbicide). The second generation of crops aimed to improve the quality, often by altering the nutrient profile. Third generation genetically modified crops could be used for non-food purposes, including the production of pharmaceutical agents, biofuels, and other industrially useful goods, as well as for bioremediation.
There are three main aims to agricultural advancement: increased production, improved conditions for agricultural workers and sustainability. GM crops contribute by improving harvests through reducing insect pressure, increasing nutrient value and tolerating different abiotic stresses. Despite this potential, as of 2018, the commercialized crops are limited mostly to cash crops like cotton, soybean, maize and canola, and the vast majority of the introduced traits provide either herbicide tolerance or insect resistance. Soybeans accounted for half of all genetically modified crops planted in 2014. Adoption by farmers has been rapid: between 1996 and 2013, the total surface area of land cultivated with GM crops increased by a factor of 100. Geographically, though, the spread has been uneven, with strong growth in the Americas and parts of Asia and little in Europe and Africa. Its socioeconomic spread has been more even, with approximately 54% of worldwide GM crops grown in developing countries in 2013. Although doubts have been raised, most studies have found growing GM crops to be beneficial to farmers through decreased pesticide use as well as increased crop yield and farm profit.
The majority of GM crops have been modified to be resistant to selected herbicides, usually a glyphosate- or glufosinate-based one. Genetically modified crops engineered to resist herbicides are now more available than conventionally bred resistant varieties; in the USA 93% of soybeans and most of the GM maize grown is glyphosate tolerant. Most currently available genes used to engineer insect resistance come from the Bacillus thuringiensis bacterium and code for delta endotoxins. A few use the genes that encode for vegetative insecticidal proteins. The only gene commercially used to provide insect protection that does not originate from B. thuringiensis is the cowpea trypsin inhibitor (CpTI). CpTI was first approved for use in cotton in 1999 and is currently undergoing trials in rice. Less than one percent of GM crops contained other traits, which include providing virus resistance, delaying senescence and altering the plant's composition.
Golden rice is the most well known GM crop that is aimed at increasing nutrient value. It has been engineered with three genes that biosynthesise beta-carotene, a precursor of vitamin A, in the edible parts of rice. It is intended to produce a fortified food to be grown and consumed in areas with a shortage of dietary vitamin A, a deficiency which each year is estimated to kill 670,000 children under the age of 5 and cause an additional 500,000 cases of irreversible childhood blindness. The original golden rice produced 1.6μg/g of the carotenoids, with further development increasing this 23 times. It gained its first approvals for use as food in 2018.
Plants and plant cells have been genetically engineered for production of biopharmaceuticals in bioreactors, a process known as pharming. Work has been done with the duckweed Lemna minor, the alga Chlamydomonas reinhardtii and the moss Physcomitrella patens. Biopharmaceuticals produced include cytokines, hormones, antibodies, enzymes and vaccines, most of which are accumulated in the plant seeds. Many drugs also contain natural plant ingredients, and the pathways that lead to their production have been genetically altered or transferred to other plant species to produce greater volume. Other options for bioreactors are biopolymers and biofuels. Unlike bacteria, plants can modify the proteins post-translationally, allowing them to make more complex molecules. They also pose less risk of being contaminated. Therapeutics have been cultured in transgenic carrot and tobacco cells, including a drug treatment for Gaucher's disease.
Vaccine production and storage has great potential in transgenic plants. Vaccines are expensive to produce, transport, and administer, so having a system that could produce them locally would allow greater access to poorer and developing areas. As well as purifying vaccines expressed in plants it is also possible to produce edible vaccines in plants. Edible vaccines stimulate the immune system when ingested to protect against certain diseases. Being stored in plants reduces the long-term cost as they can be disseminated without the need for cold storage, don't need to be purified, and have long term stability. Also being housed within plant cells provides some protection from the gut acids upon digestion. However the cost of developing, regulating, and containing transgenic plants is high, leading to most current plant-based vaccine development being applied to veterinary medicine, where the controls are not as strict.
Genetically modified crops have been proposed as one of the ways to reduce farming-related emissions due to higher yield, reduced use of pesticides, reduced use of tractor fuel and no tillage. According to a 2021 study, in the EU alone widespread adoption of GE crops would reduce greenhouse gas emissions by 33 million tons of CO2 equivalent, or 7.5% of total farming-related emissions.
Animals
The vast majority of genetically modified animals are at the research stage, with the number close to entering the market remaining small. As of 2018 only three genetically modified animals have been approved, all in the USA. A goat and a chicken have been engineered to produce medicines, and a salmon has been engineered for faster growth. Despite the differences and difficulties in modifying them, the end aims are much the same as for plants. GM animals are created for research purposes, production of industrial or therapeutic products, agricultural uses, or improving their health. There is also a market for creating genetically modified pets.
Mammals
The process of genetically engineering mammals is slow, tedious, and expensive. However, new technologies are making genetic modifications easier and more precise. The first transgenic mammals were produced by injecting viral DNA into embryos and then implanting the embryos in females. The embryo would develop, and it would be hoped that some of the genetic material would be incorporated into the reproductive cells. Then researchers would have to wait until the animal reached breeding age, and then offspring would be screened for the presence of the gene in every cell. The development of the CRISPR-Cas9 gene editing system as a cheap and fast way of directly modifying germ cells has effectively halved the amount of time needed to develop genetically modified mammals.
Mammals are the best models for human disease, making genetically engineered ones vital to the discovery and development of cures and treatments for many serious diseases. Knocking out genes responsible for human genetic disorders allows researchers to study the mechanism of the disease and to test possible cures. Genetically modified mice have been the most common mammals used in biomedical research, as they are cheap and easy to manipulate. Pigs are also a good target as they have a similar body size and anatomical features, physiology, pathophysiological response and diet. Nonhuman primates are the most similar model organisms to humans, but there is less public acceptance towards using them as research animals. In 2009, scientists announced that they had successfully transferred a gene into a primate species (marmosets) for the first time. Their first research target for these marmosets was Parkinson's disease, but they were also considering amyotrophic lateral sclerosis and Huntington's disease.
Human proteins expressed in mammals are more likely to be similar to their natural counterparts than those expressed in plants or microorganisms. Stable expression has been accomplished in sheep, pigs, rats and other animals. In 2009, the first human biological drug produced from such an animal, a goat, was approved. The drug, ATryn, is an anticoagulant which reduces the probability of blood clots during surgery or childbirth and is extracted from the goat's milk. Human alpha-1-antitrypsin is another protein that has been produced from goats and is used in treating humans with this deficiency. Another medicinal area is in creating pigs with greater capacity for human organ transplants (xenotransplantation). Pigs have been genetically modified so that their organs can no longer carry retroviruses or have modifications to reduce the chance of rejection. Chimeric pigs could carry fully human organs. The first human transplant of a genetically modified pig heart took place in 2022, and of a genetically modified pig kidney in 2024.
Livestock are modified with the intention of improving economically important traits such as growth rate, quality of meat, milk composition, disease resistance and survival. Animals have been engineered to grow faster, be healthier and resist diseases. Modifications have also improved the wool production of sheep and udder health of cows. Goats have been genetically engineered to produce milk containing strong spiderweb-like silk proteins. A GM pig called Enviropig was created with the capability of digesting plant phosphorus more efficiently than conventional pigs. They could reduce water pollution since they excrete 30 to 70% less phosphorus in manure. Dairy cows have been genetically engineered to produce milk that would be the same as human breast milk. This could potentially benefit mothers who cannot produce breast milk but want their children to have breast milk rather than formula. Researchers have also developed a genetically engineered cow that produces allergy-free milk.
Scientists have genetically engineered several organisms, including some mammals, to include green fluorescent protein (GFP), for research purposes. GFP and other similar reporter genes allow easy visualization and localization of the products of the genetic modification. Fluorescent pigs have been bred to study human organ transplants, regenerating ocular photoreceptor cells, and other topics. In 2011, green-fluorescent cats were created to help find therapies for HIV/AIDS and other diseases, as feline immunodeficiency virus is related to HIV.
There have been suggestions that genetic engineering could be used to bring animals back from extinction. It involves changing the genome of a close living relative to resemble the extinct one, and is currently being attempted with the passenger pigeon. Genes associated with the woolly mammoth have been added to the genome of an African elephant, although the lead researcher says he has no intention of creating live elephants, and transferring all the genes and reversing years of genetic evolution is a long way from being feasible. It is more likely that scientists could use this technology to conserve endangered animals by bringing back lost diversity or transferring evolved genetic advantages from adapted organisms to those that are struggling.
Humans
Gene therapy uses genetically modified viruses to deliver genes which can cure disease in humans. Although gene therapy is still relatively new, it has had some successes. It has been used to treat genetic disorders such as severe combined immunodeficiency and Leber's congenital amaurosis. Treatments are also being developed for a range of other currently incurable diseases, such as cystic fibrosis, sickle cell anemia, Parkinson's disease, cancer, diabetes, heart disease and muscular dystrophy. These treatments only affect somatic cells, meaning any changes would not be inheritable. Germline gene therapy results in any change being inheritable, which has raised concerns within the scientific community.
In 2015, CRISPR was used to edit the DNA of non-viable human embryos. In November 2018, He Jiankui announced that he had edited the genomes of two human embryos, in an attempt to disable the CCR5 gene, which codes for a receptor that HIV uses to enter cells. He said that twin girls, Lulu and Nana, had been born a few weeks earlier and that they carried functional copies of CCR5 along with disabled CCR5 (mosaicism) and were still vulnerable to HIV. The work was widely condemned as unethical, dangerous, and premature.
Fish
Genetically modified fish are used for scientific research, as pets and as a food source. Aquaculture is a growing industry, currently providing over half the consumed fish worldwide. Through genetic engineering it is possible to increase growth rates, reduce food intake, remove allergenic properties, increase cold tolerance and provide disease resistance. Fish can also be used to detect aquatic pollution or function as bioreactors.
Several groups have been developing zebrafish to detect pollution by attaching fluorescent proteins to genes activated by the presence of pollutants. The fish will then glow and can be used as environmental sensors. The GloFish is a brand of genetically modified fluorescent zebrafish with bright red, green, and orange fluorescent color. It was originally developed by one of the groups to detect pollution, but is now part of the ornamental fish trade, becoming the first genetically modified animal to become publicly available as a pet when in 2003 it was introduced for sale in the USA.
GM fish are widely used in basic research in genetics and development. Two species of fish, zebrafish and medaka, are most commonly modified because they have optically clear chorions (membranes in the egg), rapidly develop, and the one-cell embryo is easy to see and microinject with transgenic DNA. Zebrafish are model organisms for developmental processes, regeneration, genetics, behavior, disease mechanisms and toxicity testing. Their transparency allows researchers to observe developmental stages, intestinal functions and tumour growth. The generation of transgenic protocols (whole organism, cell or tissue specific, tagged with reporter genes) has increased the level of information gained by studying these fish.
GM fish have been developed with promoters driving an over-production of growth hormone for use in the aquaculture industry to increase the speed of development and potentially reduce fishing pressure on wild stocks. This has resulted in dramatic growth enhancement in several species, including salmon, trout and tilapia. AquaBounty Technologies, a biotechnology company, has produced a salmon (called AquAdvantage salmon) that can mature in half the time of wild salmon. It obtained regulatory approval in 2015, the first non-plant GMO food to be commercialized. As of August 2017, GMO salmon was being sold in Canada. Sales in the US started in May 2021.
Insects
In biological research, transgenic fruit flies (Drosophila melanogaster) are model organisms used to study the effects of genetic changes on development. Fruit flies are often preferred over other animals due to their short life cycle and low maintenance requirements. They also have a relatively simple genome compared to many vertebrates, with typically only one copy of each gene, making phenotypic analysis easy. Drosophila have been used to study genetics and inheritance, embryonic development, learning, behavior, and aging. The discovery of transposons, in particular the p-element, in Drosophila provided an early method to add transgenes to their genome, although this has been taken over by more modern gene-editing techniques.
Due to their significance to human health, scientists are looking at ways to control mosquitoes through genetic engineering. Malaria-resistant mosquitoes have been developed in the laboratory by inserting a gene that reduces the development of the malaria parasite and then using homing endonucleases to rapidly spread that gene throughout the male population (known as a gene drive). This approach has been taken further by using the gene drive to spread a lethal gene. In trials, the populations of Aedes aegypti mosquitoes, the single most important carrier of dengue fever and Zika virus, were reduced by between 80% and 90%. Another approach is to use a sterile insect technique, whereby males genetically engineered to be sterile outcompete viable males, to reduce population numbers.
Other insect pests that make attractive targets are moths. Diamondback moths cause US$4 to $5 billion of damage each year worldwide. The approach is similar to the sterile insect technique tested on mosquitoes, where males are transformed with a gene that prevents any females born from reaching maturity. They underwent field trials in 2017. Genetically modified moths have previously been released in field trials. In that case a strain of pink bollworm that had been sterilized with radiation was genetically engineered to express a red fluorescent protein, making it easier for researchers to monitor them.
The silkworm, the larval stage of Bombyx mori, is an economically important insect in sericulture. Scientists are developing strategies to enhance silk quality and quantity. There is also potential to use the silk-producing machinery to make other valuable proteins. Proteins currently developed to be expressed by silkworms include human serum albumin, human collagen α-chain, mouse monoclonal antibody and N-glycanase. Silkworms have been created that produce spider silk, a stronger but extremely difficult to harvest silk, and even novel silks.
Other
Systems have been developed to create transgenic organisms in a wide variety of other animals. Chickens have been genetically modified for a variety of purposes. This includes studying embryo development, preventing the transmission of bird flu and providing evolutionary insights using reverse engineering to recreate dinosaur-like phenotypes. A GM chicken that produces the drug Kanuma, an enzyme that treats a rare condition, in its egg passed US regulatory approval in 2015. Genetically modified frogs, in particular Xenopus laevis and Xenopus tropicalis, are used in developmental biology research. GM frogs can also be used as pollution sensors, especially for endocrine disrupting chemicals. There are proposals to use genetic engineering to control cane toads in Australia.
The nematode Caenorhabditis elegans is one of the major model organisms for researching molecular biology. RNA interference (RNAi) was discovered in C. elegans and could be induced by simply feeding them bacteria modified to express double stranded RNA. It is also relatively easy to produce stable transgenic nematodes and this along with RNAi are the major tools used in studying their genes. The most common use of transgenic nematodes has been studying gene expression and localization by attaching reporter genes. Transgenes can also be combined with RNAi techniques to rescue phenotypes, study gene function, image cell development in real time or control expression for different tissues or developmental stages. Transgenic nematodes have been used to study viruses, toxicology, diseases, and to detect environmental pollutants.
The gene responsible for albinism in sea cucumbers has been found and used to engineer white sea cucumbers, a rare delicacy. The technology also opens the way to investigate the genes responsible for some of the sea cucumber's more unusual traits, including hibernating in summer, eviscerating their intestines, and dissolving their bodies upon death. Flatworms have the ability to regenerate themselves from a single cell. Until 2017 there was no effective way to transform them, which hampered research. By using microinjection and radiation, scientists have now created the first genetically modified flatworms. The bristle worm, a marine annelid, has been modified. It is of interest due to its reproductive cycle being synchronized with lunar phases, its regeneration capacity and its slow evolution rate. Cnidaria such as Hydra and the sea anemone Nematostella vectensis are attractive model organisms to study the evolution of immunity and certain developmental processes. Other animals that have been genetically modified include snails, geckos, turtles, crayfish, oysters, shrimp, clams, abalone and sponges.
Regulation
Genetically modified organisms are regulated by government agencies. This applies to research as well as the release of genetically modified organisms, including crops and food. The development of a regulatory framework concerning genetic engineering began in 1975, at Asilomar, California. The Asilomar meeting recommended a set of guidelines regarding the cautious use of recombinant technology and any products resulting from that technology. The Cartagena Protocol on Biosafety was adopted on 29 January 2000 and entered into force on 11 September 2003. It is an international treaty that governs the transfer, handling, and use of genetically modified organisms. One hundred and fifty-seven countries are members of the Protocol and many use it as a reference point for their own regulations.
Universities and research institutes generally have a special committee that is responsible for approving any experiments that involve genetic engineering. Many experiments also need permission from a national regulatory group or legislation. All staff must be trained in the use of GMOs and all laboratories must gain approval from their regulatory agency to work with GMOs. The legislation covering GMOs is often derived from regulations and guidelines in place for the non-GMO version of the organism, although it is more severe. There is a near-universal system for assessing the relative risks associated with GMOs and other agents to laboratory staff and the community. They are assigned to one of four risk categories based on their virulence, the severity of the disease, the mode of transmission, and the availability of preventive measures or treatments. There are four biosafety levels that a laboratory can fall into, ranging from level 1 (which is suitable for working with agents not associated with disease) to level 4 (working with life-threatening agents). Different countries use different nomenclature to describe the levels and can have different requirements for what can be done at each level.
There are differences in the regulation for the release of GMOs between countries, with some of the most marked differences occurring between the US and Europe. Regulation varies in a given country depending on the intended use of the products of the genetic engineering. For example, a crop not intended for food use is generally not reviewed by authorities responsible for food safety. Some nations have banned the release of GMOs or restricted their use, and others permit them with widely differing degrees of regulation. In 2016, thirty-eight countries officially banned or prohibited the cultivation of GMOs and nine (Algeria, Bhutan, Kenya, Kyrgyzstan, Madagascar, Peru, Russia, Venezuela and Zimbabwe) banned their importation. Most countries that do not allow GMO cultivation do permit research using GMOs. Despite regulation, illegal releases have sometimes occurred, due to weakness of enforcement.
The European Union (EU) differentiates between approval for cultivation within the EU and approval for import and processing. While only a few GMOs have been approved for cultivation in the EU, a number of GMOs have been approved for import and processing. The cultivation of GMOs has triggered a debate about the market for GMOs in Europe. Depending on the coexistence regulations, incentives for cultivation of GM crops differ. US policy does not focus on the process as much as other countries do; it looks at verifiable scientific risks and uses the concept of substantial equivalence. Whether gene-edited organisms should be regulated in the same way as genetically modified organisms is debated. US regulations treat them as separate and do not regulate them under the same conditions, while in Europe a GMO is any organism created using genetic engineering techniques.
One of the key issues concerning regulators is whether GM products should be labeled. The European Commission says that mandatory labeling and traceability are needed to allow for informed choice, avoid potential false advertising and facilitate the withdrawal of products if adverse effects on health or the environment are discovered. The American Medical Association and the American Association for the Advancement of Science say that absent scientific evidence of harm even voluntary labeling is misleading and will falsely alarm consumers. Labeling of GMO products in the marketplace is required in 64 countries. Labeling can be mandatory up to a threshold GM content level (which varies between countries) or voluntary. In the U.S., the National Bioengineered Food Disclosure Standard (Mandatory Compliance Date: January 1, 2022) requires labeling GM foods. In Canada, labeling of GM food is voluntary, while in Europe all food (including processed food) or feed which contains greater than 0.9% of approved GMOs must be labeled. In 2014, sales of products that had been labeled as non-GMO grew 30 percent to $1.1 billion.
Controversy
There is controversy over GMOs, especially with regard to their release outside laboratory environments. The dispute involves consumers, producers, biotechnology companies, governmental regulators, non-governmental organizations, and scientists. Many of these concerns involve GM crops and whether food produced from them is safe and what impact growing them will have on the environment. These controversies have led to litigation, international trade disputes, and protests, and to restrictive regulation of commercial products in some countries. Most concerns are around the health and environmental effects of GMOs. These include whether they may provoke an allergic reaction, whether the transgenes could transfer to human cells, and whether genes not approved for human consumption could outcross into the food supply.
There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are much less likely than scientists to perceive GM foods as safe. The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation.
As late as the 1990s gene flow into wild populations was thought to be unlikely and rare, and if it were to occur, easily eradicated. It was thought that this would add no additional environmental costs or risks – no effects were expected other than those already caused by pesticide applications. However, in the decades since, several such examples have been observed. Gene flow between GM crops and compatible plants, along with increased use of broad-spectrum herbicides, can increase the risk of herbicide resistant weed populations. Debate over the extent and consequences of gene flow intensified in 2001 when a paper was published showing transgenes had been found in landrace maize in Mexico, the crop's center of diversity. Gene flow from GM crops to other organisms has been found to generally be lower than what would occur naturally. In order to address some of these concerns some GMOs have been developed with traits to help control their spread. To prevent the genetically modified salmon inadvertently breeding with wild salmon, all the fish raised for food are females, triploid, 99% are reproductively sterile, and raised in areas where escaped salmon could not survive. Bacteria have also been modified to depend on nutrients that cannot be found in nature, and genetic use restriction technology has been developed, though not yet marketed, that causes the second generation of GM plants to be sterile.
Other environmental and agronomic concerns include a decrease in biodiversity, an increase in secondary pests (non-targeted pests) and evolution of resistant insect pests. In the areas of China and the US with Bt crops the overall biodiversity of insects has increased and the impact of secondary pests has been minimal. Resistance was found to be slow to evolve when best practice strategies were followed. The impact of Bt crops on beneficial non-target organisms became a public issue after a 1999 paper suggested they could be toxic to monarch butterflies. Follow up studies have since shown that the toxicity levels encountered in the field were not high enough to harm the larvae.
Accusations that scientists are "playing God" and other religious issues have been ascribed to the technology from the beginning. With the ability to genetically engineer humans now possible there are ethical concerns over how far this technology should go, or if it should be used at all. Much debate revolves around where the line between treatment and enhancement is and whether the modifications should be inheritable. Other concerns include contamination of the non-genetically modified food supply, the rigor of the regulatory process, consolidation of control of the food supply in companies that make and sell GMOs, exaggeration of the benefits of genetic modification, or concerns over the use of herbicides with glyphosate. Other issues raised include the patenting of life and the use of intellectual property rights.
There are large differences in consumer acceptance of GMOs, with Europeans more likely to view GM food negatively than North Americans. GMOs arrived on the scene when public confidence in food safety in Europe was low, owing to recent food scares such as bovine spongiform encephalopathy and other scandals involving government regulation of products. This, along with campaigns run by various non-governmental organizations (NGOs), has been very successful in blocking or limiting the use of GM crops. NGOs like the Organic Consumers Association, the Union of Concerned Scientists, Greenpeace and other groups have said that risks have not been adequately identified and managed and that there are unanswered questions regarding the potential long-term impact on human health from food derived from GMOs. They propose mandatory labeling or a moratorium on such products.
Greatest common divisor
In mathematics, the greatest common divisor (GCD), also known as greatest common factor (GCF), of two or more integers, which are not all zero, is the largest positive integer that divides each of the integers. For two integers a and b, the greatest common divisor of a and b is denoted gcd(a, b). For example, the GCD of 8 and 12 is 4, that is, gcd(8, 12) = 4.
In the name "greatest common divisor", the adjective "greatest" may be replaced by "highest", and the word "divisor" may be replaced by "factor", so that other names include highest common factor, etc. Historically, other names for the same concept have included greatest common measure.
This notion can be extended to polynomials (see Polynomial greatest common divisor) and other commutative rings (see below).
Overview
Definition
The greatest common divisor (GCD) of integers a and b, at least one of which is nonzero, is the greatest positive integer d such that d is a divisor of both a and b; that is, there are integers e and f such that a = de and b = df, and d is the largest such integer. The GCD of a and b is generally denoted gcd(a, b).
When one of a and b is zero, the GCD is the absolute value of the nonzero integer: gcd(a, 0) = gcd(0, a) = |a|. This case is important as the terminating step of the Euclidean algorithm.
The above definition is unsuitable for defining gcd(0, 0), since there is no greatest integer n such that n divides 0. However, zero is its own greatest divisor if greatest is understood in the context of the divisibility relation, so gcd(0, 0) is commonly defined as 0. This preserves the usual identities for GCD, and in particular Bézout's identity, namely that gcd(a, b) generates the same ideal as {a, b}. This convention is followed by many computer algebra systems. Nonetheless, some authors leave gcd(0, 0) undefined.
The GCD of a and b is their greatest positive common divisor in the preorder relation of divisibility. This means that the common divisors of a and b are exactly the divisors of their GCD. This is commonly proved by using either Euclid's lemma, the fundamental theorem of arithmetic, or the Euclidean algorithm. This is the meaning of "greatest" that is used for the generalizations of the concept of GCD.
Example
The number 54 can be expressed as a product of two integers in several different ways: 54 × 1 = 27 × 2 = 18 × 3 = 9 × 6.
Thus the complete list of divisors of 54 is 1, 2, 3, 6, 9, 18, 27, 54.
Similarly, the divisors of 24 are 1, 2, 3, 4, 6, 8, 12, 24.
The numbers that these two lists have in common are the common divisors of 54 and 24, that is, 1, 2, 3, 6.
Of these, the greatest is 6, so it is the greatest common divisor: gcd(54, 24) = 6.
Computing all divisors of the two numbers in this way is usually not efficient, especially for large numbers that have many divisors. Much more efficient methods are described below in the section on calculation.
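To make the divisor-listing approach concrete, here is a minimal Python sketch (an added illustration, assuming positive integer inputs; the function names are illustrative, not standard library functions):

    def divisors(n):
        # Collect every positive integer that divides n exactly.
        return {d for d in range(1, n + 1) if n % d == 0}

    def gcd_by_listing(a, b):
        # Intersect the two divisor sets and keep the largest element.
        return max(divisors(a) & divisors(b))

    print(sorted(divisors(54)))    # [1, 2, 3, 6, 9, 18, 27, 54]
    print(sorted(divisors(24)))    # [1, 2, 3, 4, 6, 8, 12, 24]
    print(gcd_by_listing(54, 24))  # 6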
Coprime numbers
Two numbers are called relatively prime, or coprime, if their greatest common divisor equals 1. For example, 9 and 28 are coprime.
A geometric view
For example, a 24-by-60 rectangular area can be divided into a grid of: 1-by-1 squares, 2-by-2 squares, 3-by-3 squares, 4-by-4 squares, 6-by-6 squares or 12-by-12 squares. Therefore, 12 is the greatest common divisor of 24 and 60. A 24-by-60 rectangular area can thus be divided into a grid of 12-by-12 squares, with two squares along one edge (24/12 = 2) and five squares along the other (60/12 = 5).
Applications
Reducing fractions
The greatest common divisor is useful for reducing fractions to the lowest terms: dividing both the numerator and the denominator of a fraction by their greatest common divisor gives the equivalent fraction in lowest terms.
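As a small added illustration of this use of the GCD (the numbers 48 and 180 are borrowed from the factorization example later in the article), a Python sketch using the standard library's math.gcd:

    from math import gcd

    def reduce_fraction(num, den):
        # Divide numerator and denominator by their greatest common divisor.
        g = gcd(num, den)
        return num // g, den // g

    print(reduce_fraction(48, 180))  # (4, 15), since gcd(48, 180) = 12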
Least common multiple
The least common multiple of two integers that are not both zero can be computed from their greatest common divisor, by using the relation
lcm(a, b) = |a · b| / gcd(a, b).
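A short Python sketch of this relation (an added illustration; math.gcd is the standard library GCD and the integer division is exact because gcd(a, b) divides a · b):

    from math import gcd

    def lcm(a, b):
        # lcm(a, b) = |a * b| / gcd(a, b)
        return abs(a * b) // gcd(a, b)

    print(lcm(48, 180))  # 720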
Calculation
Using prime factorizations
Greatest common divisors can be computed by determining the prime factorizations of the two numbers and comparing factors. For example, to compute gcd(48, 180), we find the prime factorizations 48 = 2^4 · 3^1 and 180 = 2^2 · 3^2 · 5^1; the GCD is then 2^min(4,2) · 3^min(1,2) · 5^min(0,1) = 2^2 · 3^1 · 5^0 = 12. The corresponding LCM is then
2^max(4,2) · 3^max(1,2) · 5^max(0,1) = 2^4 · 3^2 · 5^1 = 720.
In practice, this method is only feasible for small numbers, as computing prime factorizations takes too long.
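The following Python sketch (an added illustration, assuming positive integer inputs small enough for trial division) implements the factorization-based method:

    from collections import Counter

    def prime_factors(n):
        # Trial division; returns a Counter mapping each prime to its exponent.
        factors = Counter()
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors[d] += 1
                n //= d
            d += 1
        if n > 1:
            factors[n] += 1
        return factors

    def gcd_by_factorization(a, b):
        fa, fb = prime_factors(a), prime_factors(b)
        # Take each common prime to the smaller of its two exponents.
        result = 1
        for p in fa.keys() & fb.keys():
            result *= p ** min(fa[p], fb[p])
        return result

    print(gcd_by_factorization(48, 180))  # 12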
Euclid's algorithm
The method introduced by Euclid for computing greatest common divisors is based on the fact that, given two positive integers a and b such that a > b, the common divisors of a and b are the same as the common divisors of a − b and b.
So, Euclid's method for computing the greatest common divisor of two positive integers consists of replacing the larger number with the difference of the numbers, and repeating this until the two numbers are equal: that is their greatest common divisor.
For example, to compute gcd(48, 18), one proceeds as follows:
(48, 18) → (30, 18) → (12, 18) → (12, 6) → (6, 6).
So gcd(48, 18) = 6.
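A direct Python transcription of this subtraction method (an added sketch, assuming positive integer inputs):

    def gcd_by_subtraction(a, b):
        # Repeatedly replace the larger number by the difference
        # until both numbers are equal.
        while a != b:
            if a > b:
                a -= b
            else:
                b -= a
        return a

    print(gcd_by_subtraction(48, 18))  # 6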
This method can be very slow if one number is much larger than the other. So, the variant that follows is generally preferred.
Euclidean algorithm
A more efficient method is the Euclidean algorithm, a variant in which the difference of the two numbers a and b is replaced by the remainder of the Euclidean division (also called division with remainder) of a by b.
Denoting this remainder as a mod b, the algorithm replaces (a, b) with (b, a mod b) repeatedly until the pair is (d, 0), where d is the greatest common divisor.
For example, to compute gcd(48, 18), the computation is as follows:
gcd(48, 18) → gcd(18, 48 mod 18) = gcd(18, 12) → gcd(12, 18 mod 12) = gcd(12, 6) → gcd(6, 12 mod 6) = gcd(6, 0) = 6.
This again gives gcd(48, 18) = 6.
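The remainder-based version translates to a short Python sketch (an added illustration; it assumes non-negative inputs that are not both zero):

    def gcd_euclid(a, b):
        # Replace (a, b) with (b, a mod b) until the remainder is zero.
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd_euclid(48, 18))  # 6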
Binary GCD algorithm
The binary GCD algorithm is a variant of Euclid's algorithm that is specially adapted to the binary representation of the numbers, which is used in most computers.
The binary GCD algorithm differs from Euclid's algorithm essentially by dividing by two every even number that is encountered during the computation. Its efficiency results from the fact that, in binary representation, testing parity consists of testing the right-most digit, and dividing by two consists of removing the right-most digit.
The method is as follows, starting with a and b that are the two positive integers whose GCD is sought.
If a and b are both even, then divide both by two until at least one of them becomes odd; let d be the number of these paired divisions.
If a is even, then divide it by two until it becomes odd.
If b is even, then divide it by two until it becomes odd.
Now, a and b are both odd and will remain odd until the end of the computation.
While a ≠ b, do:
If a > b, then replace a with a − b and divide the result by two until a becomes odd (as a and b are both odd, there is at least one division by 2).
If a < b, then replace b with b − a and divide the result by two until b becomes odd.
Now, a = b, and the greatest common divisor is 2^d · a.
Step 1 determines 2^d as the highest power of 2 that divides both a and b, and thus the power of 2 appearing in their greatest common divisor. None of the steps changes the set of the odd common divisors of a and b. This shows that when the algorithm stops, the result is correct. The algorithm stops eventually, since each step divides at least one of the operands by at least 2. Moreover, the number of divisions by 2, and thus the number of subtractions, is at most the total number of digits.
Example: (a, b, d) = (48, 18, 0) → (24, 9, 1) → (12, 9, 1) → (6, 9, 1) → (3, 9, 1) → (3, 3, 1); the original GCD is thus the product 6 of 2^d = 2 and a = b = 3.
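A Python sketch of the binary algorithm as described above (an added illustration, assuming positive integer inputs; a production implementation would use bit operations rather than division):

    def gcd_binary(a, b):
        d = 0
        # Step 1: remove common factors of two, counting them in d.
        while a % 2 == 0 and b % 2 == 0:
            a //= 2
            b //= 2
            d += 1
        # Steps 2 and 3: make each operand odd.
        while a % 2 == 0:
            a //= 2
        while b % 2 == 0:
            b //= 2
        # Step 4: subtract and strip factors of two until the operands agree.
        while a != b:
            if a > b:
                a -= b
                while a % 2 == 0:
                    a //= 2
            else:
                b -= a
                while b % 2 == 0:
                    b //= 2
        return (2 ** d) * a

    print(gcd_binary(48, 18))  # 6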
The binary GCD algorithm is particularly easy to implement and particularly efficient on binary computers. Its computational complexity is
O((log a + log b)^2).
The square in this complexity comes from the fact that division by 2 and subtraction take a time that is proportional to the number of bits of the input.
The computational complexity is usually given in terms of the length of the input. Here, this length is n = log a + log b, and the complexity is thus
O(n^2).
Lehmer's GCD algorithm
Lehmer's algorithm is based on the observation that the initial quotients produced by Euclid's algorithm can be determined based on only the first few digits; this is useful for numbers that are larger than a computer word. In essence, one extracts initial digits, typically forming one or two computer words, and runs Euclid's algorithm on these smaller numbers, as long as it is guaranteed that the quotients are the same as those that would be obtained with the original numbers. The quotients are collected into a small 2-by-2 transformation matrix (a matrix of single-word integers) to reduce the original numbers. This process is repeated until the numbers are small enough that the binary algorithm (see above) is more efficient.
This algorithm improves speed, because it reduces the number of operations on very large numbers, and can use hardware arithmetic for most operations. In fact, most of the quotients are very small, so a fair number of steps of the Euclidean algorithm can be collected in a 2-by-2 matrix of single-word integers. When Lehmer's algorithm encounters a quotient that is too large, it must fall back to one iteration of Euclidean algorithm, with a Euclidean division of large numbers.
Other methods
If a and b are both nonzero, the greatest common divisor of a and b can be computed by using the least common multiple (LCM) of a and b:
gcd(a, b) = |a · b| / lcm(a, b),
but more commonly the LCM is computed from the GCD.
Using Thomae's function f,
gcd(a, b) = a f(b/a),
which generalizes to a and b rational numbers or commensurable real numbers.
Keith Slavin has shown that for odd a, gcd(a, b) can be expressed as the base-2 logarithm of a product of complex exponentials, which gives a function that can be evaluated for complex b. Wolfgang Schramm has shown that gcd(a, b) can be written as a sum, over the divisors d of a, of terms involving Ramanujan's sum c_d(b); this expression is an entire function in the variable b for all positive integers a.
Complexity
The computational complexity of the computation of greatest common divisors has been widely studied. If one uses the Euclidean algorithm and the elementary algorithms for multiplication and division, the computation of the greatest common divisor of two integers of at most n bits is O(n^2). This means that the computation of greatest common divisor has, up to a constant factor, the same complexity as the multiplication.
However, if a fast multiplication algorithm is used, one may modify the Euclidean algorithm for improving the complexity, but the computation of a greatest common divisor becomes slower than the multiplication. More precisely, if the multiplication of two integers of n bits takes a time of T(n), then the fastest known algorithm for greatest common divisor has a complexity O(T(n) log n). This implies that the fastest known algorithm has a complexity of O(n (log n)^2).
Previous complexities are valid for the usual models of computation, specifically multitape Turing machines and random-access machines.
The computation of the greatest common divisors belongs thus to the class of problems solvable in quasilinear time. A fortiori, the corresponding decision problem belongs to the class P of problems solvable in polynomial time. The GCD problem is not known to be in NC, and so there is no known way to parallelize it efficiently; nor is it known to be P-complete, which would imply that it is unlikely to be possible to efficiently parallelize GCD computation. Shallcross et al. showed that a related problem (EUGCD, determining the remainder sequence arising during the Euclidean algorithm) is NC-equivalent to the problem of integer linear programming with two variables; if either problem is in NC or is P-complete, the other is as well. Since NC contains NL, it is also unknown whether a space-efficient algorithm for computing the GCD exists, even for nondeterministic Turing machines.
Although the problem is not known to be in NC, parallel algorithms asymptotically faster than the Euclidean algorithm exist; the fastest known deterministic algorithm is by Chor and Goldreich, which (in the CRCW-PRAM model) can solve the problem in O(n/log n) time with n^(1+ε) processors. Randomized algorithms can solve the problem using a superpolynomial number of processors.
Properties
For positive integers a, gcd(a, a) = a.
Every common divisor of a and b is a divisor of gcd(a, b).
gcd(a, b), where a and b are not both zero, may be defined alternatively and equivalently as the smallest positive integer d which can be written in the form d = a⋅p + b⋅q, where p and q are integers. This expression is called Bézout's identity. Numbers p and q like this can be computed with the extended Euclidean algorithm (a short sketch is given after this list of properties).
gcd(a, 0) = |a|, for a ≠ 0, since any number is a divisor of 0, and the greatest divisor of a is |a|. This is usually used as the base case in the Euclidean algorithm.
If a divides the product b⋅c, and gcd(a, b) = d, then a/d divides c.
If m is a positive integer, then gcd(m⋅a, m⋅b) = m⋅gcd(a, b).
If m is any integer, then gcd(a + m⋅b, b) = gcd(a, b). Equivalently, gcd(a mod b, b) = gcd(a, b).
If m is a positive common divisor of a and b, then gcd(a/m, b/m) = gcd(a, b)/m.
The GCD is a commutative function: gcd(a, b) = gcd(b, a).
The GCD is an associative function: gcd(a, gcd(b, c)) = gcd(gcd(a, b), c). Thus gcd(a, b, c, ...) can be used to denote the GCD of multiple arguments.
The GCD is a multiplicative function in the following sense: if a1 and a2 are relatively prime, then gcd(a1⋅a2, b) = gcd(a1, b)⋅gcd(a2, b).
gcd(a, b) is closely related to the least common multiple lcm(a, b): we have
gcd(a, b)⋅lcm(a, b) = |a⋅b|.
This formula is often used to compute least common multiples: one first computes the GCD with Euclid's algorithm and then divides the product of the given numbers by their GCD.
The following versions of distributivity hold true:
gcd(a, lcm(b, c)) = lcm(gcd(a, b), gcd(a, c)) and lcm(a, gcd(b, c)) = gcd(lcm(a, b), lcm(a, c)).
If we have the unique prime factorizations of a = p1^e1 p2^e2 ⋯ pm^em and b = p1^f1 p2^f2 ⋯ pm^fm, where ei ≥ 0 and fi ≥ 0, then the GCD of a and b is
gcd(a, b) = p1^min(e1, f1) p2^min(e2, f2) ⋯ pm^min(em, fm).
It is sometimes useful to define gcd(0, 0) = 0 and lcm(0, 0) = 0 because then the natural numbers become a complete distributive lattice with GCD as meet and LCM as join operation. This extension of the definition is also compatible with the generalization for commutative rings given below.
In a Cartesian coordinate system, gcd(a, b) can be interpreted as the number of segments between points with integral coordinates on the straight line segment joining the points (0, 0) and (a, b).
For non-negative integers a and b, where a and b are not both zero, the identity below holds, provable by considering the Euclidean algorithm in base n:
gcd(n^a − 1, n^b − 1) = n^gcd(a, b) − 1.
An identity involving Euler's totient function: gcd(a, b) = ∑_{k | a and k | b} φ(k).
GCD summatory function (Pillai's arithmetical function): ∑_{k=1}^{n} gcd(k, n) = ∑_{d | n} d⋅φ(n/d) = n⋅∏_{p | n} (1 + ν_p(n)⋅(1 − 1/p)),
where ν_p is the p-adic valuation.
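As an illustration of Bézout's identity from the list above, the following is a minimal Python sketch of the extended Euclidean algorithm; the function name and the (g, p, q) return convention are choices made for this example only, and non-negative inputs are assumed.

```python
def extended_gcd(a, b):
    # Iterative extended Euclidean algorithm for non-negative integers.
    # Returns (g, p, q) with g = gcd(a, b) and g = a*p + b*q (Bézout's identity).
    old_r, r = a, b
    old_p, p = 1, 0
    old_q, q = 0, 1
    while r != 0:
        quotient = old_r // r
        old_r, r = r, old_r - quotient * r
        old_p, p = p, old_p - quotient * p
        old_q, q = q, old_q - quotient * q
    return old_r, old_p, old_q

g, p, q = extended_gcd(240, 46)
print(g, p, q)   # 2 -9 47, and indeed 240*(-9) + 46*47 = 2
```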
Probabilities and expected value
In 1972, James E. Nymann showed that k integers, chosen independently and uniformly from {1, ..., n}, are coprime with probability 1/ζ(k) as n goes to infinity, where ζ refers to the Riemann zeta function. (See coprime for a derivation.) This result was extended in 1987 to show that the probability that k random integers have greatest common divisor d is d^(−k)/ζ(k).
Using this information, the expected value of the greatest common divisor function can be seen (informally) to not exist when k = 2. In this case the probability that the GCD equals d is d^(−2)/ζ(2), and since ζ(2) = π²/6 we have E(2) = ∑_{d=1}^{∞} d⋅d^(−2)/ζ(2) = (6/π²)⋅∑_{d=1}^{∞} 1/d.
This last summation is the harmonic series, which diverges. However, when k ≥ 3, the expected value is well-defined, and by the above argument, it is E(k) = ∑_{d=1}^{∞} d⋅d^(−k)/ζ(k) = ζ(k − 1)/ζ(k).
For k = 3, this is approximately equal to 1.3684. For k = 4, it is approximately 1.1106.
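As an informal numerical check of the k = 3 value, one can sample k integers from a large finite range and average their GCDs; the range, trial count, and function name below are illustrative assumptions, since the exact result only holds in the limit of an infinite range.

```python
import random
from functools import reduce
from math import gcd

def mean_gcd(k, n, trials=200_000):
    # Estimate the expected GCD of k integers drawn uniformly from 1..n.
    total = 0
    for _ in range(trials):
        total += reduce(gcd, (random.randint(1, n) for _ in range(k)))
    return total / trials

print(mean_gcd(3, 10**6))   # typically close to zeta(2)/zeta(3) ≈ 1.3684
```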
In commutative rings
The notion of greatest common divisor can more generally be defined for elements of an arbitrary commutative ring, although in general there need not exist one for every pair of elements.
If R is a commutative ring, and a and b are in R, then an element d of R is called a common divisor of a and b if it divides both a and b (that is, if there are elements x and y in R such that d·x = a and d·y = b).
If d is a common divisor of a and b, and every common divisor of a and b divides d, then d is called a greatest common divisor of a and b.
With this definition, two elements a and b may very well have several greatest common divisors, or none at all. If R is an integral domain, then any two GCDs of a and b must be associate elements, since by definition either one must divide the other. Indeed, if a GCD exists, any one of its associates is a GCD as well.
Existence of a GCD is not assured in arbitrary integral domains. However, if R is a unique factorization domain or any other GCD domain, then any two elements have a GCD. If R is a Euclidean domain in which Euclidean division is given algorithmically (as is the case for instance when R = F[X] where F is a field, or when R is the ring of Gaussian integers), then greatest common divisors can be computed using a form of the Euclidean algorithm based on the division procedure.
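As an illustration of the last point, the following is a minimal Python sketch of the Euclidean algorithm in the polynomial ring Q[X] (a Euclidean domain); the leading-coefficient-first list representation, the helper names, and the choice to return a monic result are assumptions made for this example.

```python
from fractions import Fraction

def normalize(p):
    # Drop leading zero coefficients; polynomials are lists with the leading
    # coefficient first, e.g. x^2 - 1 is [1, 0, -1].
    p = list(p)
    while len(p) > 1 and p[0] == 0:
        p.pop(0)
    return p

def poly_mod(a, b):
    # Remainder of the division of a by b over the rational numbers (b nonzero).
    a = normalize([Fraction(c) for c in a])
    b = normalize([Fraction(c) for c in b])
    while len(a) >= len(b) and a != [Fraction(0)]:
        factor = a[0] / b[0]
        shifted = b + [Fraction(0)] * (len(a) - len(b))
        a = normalize([x - factor * y for x, y in zip(a, shifted)])
    return a

def poly_gcd(a, b):
    # Euclidean algorithm in Q[X]; the result is scaled to be monic so that a
    # single canonical representative of the GCD (defined up to units) is returned.
    a = normalize([Fraction(c) for c in a])
    b = normalize([Fraction(c) for c in b])
    while b != [Fraction(0)]:
        a, b = b, poly_mod(a, b)
    return [c / a[0] for c in a]

# gcd(x^2 - 1, x^2 - 3x + 2) = x - 1
print(poly_gcd([1, 0, -1], [1, -3, 2]))   # [Fraction(1, 1), Fraction(-1, 1)]
```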
The following is an example of an integral domain with two elements that do not have a GCD: R = Z[√−3], with a = 4 = 2⋅2 = (1 + √−3)(1 − √−3) and b = (1 + √−3)⋅2.
The elements 2 and 1 + √−3 are two maximal common divisors (that is, any common divisor which is a multiple of 2 is associated to 2, and the same holds for 1 + √−3), but they are not associated, so there is no greatest common divisor of a and b.
Corresponding to the Bézout property we may, in any commutative ring, consider the collection of elements of the form p⋅a + q⋅b, where p and q range over the ring. This is the ideal generated by a and b, and is denoted simply (a, b). In a ring all of whose ideals are principal (a principal ideal domain or PID), this ideal will be identical with the set of multiples of some ring element d; then this d is a greatest common divisor of a and b. But the ideal (a, b) can be useful even when there is no greatest common divisor of a and b. (Indeed, Ernst Kummer used this ideal as a replacement for a GCD in his treatment of Fermat's Last Theorem, although he envisioned it as the set of multiples of some hypothetical, or ideal, ring element d, whence the ring-theoretic term.)
| Mathematics | Basics | null |
12366 | https://en.wikipedia.org/wiki/Graphite | Graphite | Graphite is a crystalline allotrope (form) of the element carbon. It consists of many stacked layers of graphene, typically in excess of hundreds of layers. Graphite occurs naturally and is the most stable form of carbon under standard conditions. Synthetic and natural graphite are consumed on a large scale (1.3 million metric tons per year in 2022) for uses in many critical industries including refractories (50%), lithium-ion batteries (18%), foundries (10%), and lubricants (5%), among others (17%). Under extremely high pressures and extremely high temperatures it converts to diamond. Graphite's low cost, thermal and chemical inertness, and characteristic conductivity of heat and electricity lend it to numerous applications in high-energy and high-temperature processes.
Types and varieties
Natural graphite
Graphite occurs naturally in ores that can be classified into one of two categories, either amorphous (microcrystalline) or crystalline (flake or lump/chip), as determined by the ore morphology, crystallinity, and grain size. All naturally occurring graphite deposits are formed from the metamorphism of carbonaceous sedimentary rocks, and the ore type is due to its geologic setting. Coal that has been thermally metamorphosed is the typical source of amorphous graphite. Crystalline flake graphite is mined from carbonaceous metamorphic rocks, while lump or chip graphite is mined from veins, which occur in high-grade metamorphic regions. Graphite mining has serious negative environmental impacts.
Synthetic graphite
Synthetic graphite is graphite of high purity produced by thermal graphitization at temperatures in excess of 2,100 °C from hydrocarbon materials, most commonly through the Acheson process. The high temperatures are maintained for weeks, and are required not only to form the graphite from the precursor carbons but also to vaporize any impurities that may be present, including hydrogen, nitrogen, sulfur, organics, and metals. This is why synthetic graphite is highly pure, in excess of 99.9% carbon, but typically has lower density, lower conductivity and higher porosity than its natural equivalent. Synthetic graphite can also be formed into very large (centimetre-scale) flakes while maintaining its high purity, unlike almost all sources of natural graphite. Synthetic graphite can also be formed by other methods, including chemical vapor deposition from hydrocarbons at temperatures above , decomposition of thermally unstable carbides, or crystallization from metal melts supersaturated with carbon.
Biographite
Biographite is a proposed commercial product for reducing the carbon footprint of lithium iron phosphate (LFP) batteries. It is produced from forestry waste and similar byproducts by a company in New Zealand using a novel process called thermo-catalytic graphitisation; the project is supported by grants from interested parties, including a forestry company in Finland and a battery maker in Hong Kong.
Natural graphite
Occurrence
Graphite occurs in metamorphic rocks as a result of the reduction of sedimentary carbon compounds during metamorphism. It also occurs in igneous rocks and in meteorites. Minerals associated with graphite include quartz, calcite, micas and tourmaline. The principal export sources of mined graphite are in order of tonnage: China, Mexico, Canada, Brazil, and Madagascar. Significant unexploited graphite resources also exist in Colombia's Cordillera Central in the form of graphite-bearing schists.
In meteorites, graphite occurs with troilite and silicate minerals. Small graphitic crystals in meteoritic iron are called cliftonite. Some microscopic grains have distinctive isotopic compositions, indicating that they were formed before the Solar System. They are one of about 12 known types of minerals that predate the Solar System and have also been detected in molecular clouds. These minerals were formed in the ejecta when supernovae exploded or low to intermediate-sized stars expelled their outer envelopes late in their lives. Graphite may be the second or third oldest mineral in the Universe.
Structure
Graphite consists of sheets of trigonal planar carbon. The individual layers are called graphene. In each layer, each carbon atom is bonded to three other atoms, forming a continuous layer of sp2-bonded carbon hexagons, like a honeycomb lattice, with a bond length of 0.142 nm; the distance between planes is 0.335 nm. Bonding between layers consists of relatively weak van der Waals bonds, which allows the graphene-like layers to be easily separated and to glide past each other. Electrical conductivity perpendicular to the layers is consequently about 1000 times lower than within them.
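As a rough consistency check on these dimensions, the theoretical density implied by the quoted bond length and interlayer spacing can be computed from the standard four-atom hexagonal unit cell; the numerical constants below (carbon's atomic mass and the atomic mass unit in grams) are assumptions added for this back-of-the-envelope example.

```python
import math

# Structural parameters quoted above.
bond_length_nm = 0.142        # C-C bond length within a layer
layer_spacing_nm = 0.335      # distance between graphene layers

# Hexagonal cell: a = sqrt(3) * bond length, c = 2 * layer spacing, 4 atoms per cell.
a_nm = math.sqrt(3) * bond_length_nm
c_nm = 2 * layer_spacing_nm
cell_volume_cm3 = (a_nm**2 * math.sqrt(3) / 2 * c_nm) * 1e-21   # 1 nm^3 = 1e-21 cm^3
cell_mass_g = 4 * 12.011 * 1.6605e-24                           # four carbon atoms

print(cell_mass_g / cell_volume_cm3)   # about 2.27 g/cm^3, close to graphite's measured ~2.26
```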
There are two allotropic forms called alpha (hexagonal) and beta (rhombohedral), differing in terms of the stacking of the graphene layers: stacking in alpha graphite is ABA, as opposed to ABC stacking in the energetically less stable beta graphite. Rhombohedral graphite cannot occur in pure form. Natural graphite, or commercial natural graphite, contains 5 to 15% rhombohedral graphite and this may be due to intensive milling. The alpha form can be converted to the beta form through shear forces, and the beta form reverts to the alpha form when it is heated to 1300 °C for four hours.
Thermodynamics
The equilibrium pressure and temperature conditions for a transition between graphite and diamond are well established theoretically and experimentally. The pressure changes linearly between at and at (the diamond/graphite/liquid triple point).
However, the phases have a wide region about this line where they can coexist. At normal temperature and pressure, and , the stable phase of carbon is graphite, but diamond is metastable and its rate of conversion to graphite is negligible. However, at temperatures above about , diamond rapidly converts to graphite. Rapid conversion of graphite to diamond requires pressures well above the equilibrium line: at , a pressure of is needed.
Other properties
The acoustic and thermal properties of graphite are highly anisotropic, since phonons propagate quickly along the tightly bound planes, but are slower to travel from one plane to another. Graphite's high thermal stability and electrical and thermal conductivity facilitate its widespread use as electrodes and refractories in high temperature material processing applications. However, in oxygen-containing atmospheres graphite readily oxidizes to form carbon dioxide at temperatures of 700 °C and above.
Graphite is an electrical conductor, hence useful in such applications as arc lamp electrodes. It can conduct electricity due to the vast electron delocalization within the carbon layers (a phenomenon called aromaticity). These valence electrons are free to move, so are able to conduct electricity. However, the electricity is primarily conducted within the plane of the layers. The conductive properties of powdered graphite allow its use as pressure sensor in carbon microphones.
Graphite and graphite powder are valued in industrial applications for their self-lubricating and dry lubricating properties. However, the use of graphite is limited by its tendency to facilitate pitting corrosion in some stainless steel, and to promote galvanic corrosion between dissimilar metals (due to its electrical conductivity). It is also corrosive to aluminium in the presence of moisture. For this reason, the US Air Force banned its use as a lubricant in aluminium aircraft, and discouraged its use in aluminium-containing automatic weapons. Even graphite pencil marks on aluminium parts may facilitate corrosion. Another high-temperature lubricant, hexagonal boron nitride, has the same molecular structure as graphite. It is sometimes called white graphite, due to its similar properties.
When a large number of crystallographic defects bind these planes together, graphite loses its lubrication properties and becomes what is known as pyrolytic graphite. It is also highly anisotropic, and diamagnetic, thus it will float in mid-air above a strong magnet. (If it is made in a fluidized bed at 1000–1300 °C then it is isotropic turbostratic, and is used in blood-contacting devices like mechanical heart valves and is called pyrolytic carbon, and is not diamagnetic. Pyrolytic graphite and pyrolytic carbon are often confused but are very different materials.)
For a long time, graphite has been considered to be hydrophobic. However, recent studies using highly ordered pyrolytic graphite have shown that freshly cleaved graphite is hydrophilic (contact angle of approximately 70°), and that it becomes hydrophobic (contact angle of approximately 95°) due to airborne pollutants (hydrocarbons) present in the atmosphere. Those contaminants also alter the electric equipotential surface of graphite by creating domains with potential differences of up to 200 mV, as measured with Kelvin probe force microscopy. Such contaminants can be desorbed by increasing the temperature of graphite to approximately 50 °C or higher.
Natural and crystalline graphites are not often used in pure form as structural materials, due to their shear-planes, brittleness, and inconsistent mechanical properties.
History of natural graphite use
In the 4th millennium BCE, during the Neolithic Age in southeastern Europe, the Marița culture used graphite in a ceramic paint for decorating pottery.
Sometime before 1565 (some sources say as early as 1500), an enormous deposit of graphite was discovered on the approach to Grey Knotts from the hamlet of Seathwaite in Borrowdale parish, Cumbria, England, which the locals found useful for marking sheep. During the reign of Elizabeth I (1558–1603), Borrowdale graphite was used as a refractory material to line molds for cannonballs, resulting in rounder, smoother balls that could be fired farther, contributing to the strength of the English navy. This particular deposit of graphite was extremely pure and soft, and could easily be cut into sticks. Because of its military importance, this unique mine and its production were strictly controlled by the Crown.
During the 19th century, graphite's uses greatly expanded to include stove polish, lubricants, paints, crucibles, foundry facings, and pencils, a major factor in the expansion of educational tools during the first great rise of education for the masses. The British Empire controlled most of the world's production (especially from Ceylon), but production from Austrian, German, and American deposits expanded by mid-century. For example, the Dixon Crucible Company of Jersey City, New Jersey, founded by Joseph Dixon and partner Orestes Cleveland in 1845, opened mines in the Lake Ticonderoga district of New York, built a processing plant there, and a factory to manufacture pencils, crucibles and other products in New Jersey, described in the Engineering & Mining Journal 21 December 1878. The Dixon pencil is still in production.
The beginnings of the revolutionary froth flotation process are associated with graphite mining. Included in the E&MJ article on the Dixon Crucible Company is a sketch of the "floating tanks" used in the age-old process of extracting graphite. Because graphite is so light, the mix of graphite and waste was sent through a final series of water tanks where a cleaner graphite "floated" off, which left waste to drop out. In an 1877 patent, the two brothers Bessel (Adolph and August) of Dresden, Germany, took this "floating" process a step further and added a small amount of oil to the tanks and boiled the mix – an agitation or frothing step – to collect the graphite, the first steps toward the future flotation process. Adolph Bessel received the Wohler Medal for the patented process that upgraded the recovery of graphite to 90% from the German deposit. In 1977, the German Society of Mining Engineers and Metallurgists organized a special symposium dedicated to their discovery and, thus, the 100th anniversary of flotation.
In the United States, in 1885, Hezekiah Bradford of Philadelphia patented a similar process, but it is uncertain if his process was used successfully in the nearby graphite deposits of Chester County, Pennsylvania, a major producer by the 1890s. The Bessel process was limited in use, primarily because of the abundant cleaner deposits found around the globe, which needed not much more than hand-sorting to gather the pure graphite. The state of the art is described in the Canadian Department of Mines report on graphite mines and mining from the period when Canadian deposits began to become important producers of graphite.
Other names
Historically, graphite was called black lead or plumbago. Plumbago was commonly used in its massive mineral form. Both of these names arise from confusion with the similar-appearing lead ores, particularly galena. The Latin word for lead, plumbum, gave its name to the English term for this grey metallic-sheened mineral and even to the leadworts or plumbagos, plants with flowers that resemble this colour.
The term black lead usually refers to a powdered or processed graphite, matte black in color.
Abraham Gottlob Werner coined the name graphite ("writing stone") in 1789. He attempted to clear up the confusion between molybdena, plumbago and black lead after Carl Wilhelm Scheele in 1778 proved that these were at least three different minerals. Scheele's analysis showed that the chemical compounds molybdenum sulfide (molybdenite), lead(II) sulfide (galena) and graphite were three different soft black minerals.
Uses of natural graphite
Natural graphite is mostly used for refractories, batteries, steelmaking, expanded graphite, brake linings, foundry facings, and lubricants.
Refractories
The use of graphite as a refractory (heat-resistant) material began before 1900 with graphite crucibles used to hold molten metal; this is now a minor part of refractories. In the mid-1980s, the carbon-magnesite brick became important, and a bit later the alumina-graphite shape. The order of importance is: alumina-graphite shapes, carbon-magnesite brick, monolithics (gunning and ramming mixes), and then crucibles.
Crucibles began by using very large flake graphite, and carbon-magnesite bricks required not quite so large a flake; for these and others there is now much more flexibility in the size of flake required, and amorphous graphite is no longer restricted to low-end refractories. Alumina-graphite shapes are used as continuous casting ware, such as nozzles and troughs, to convey the molten steel from ladle to mold, and carbon-magnesite bricks line steel converters and electric-arc furnaces to withstand extreme temperatures. Graphite blocks are also used in parts of blast furnace linings where the high thermal conductivity of the graphite is critical to ensuring adequate cooling of the bottom and hearth of the furnace. High-purity monolithics are often used as a continuous furnace lining instead of carbon-magnesite bricks.
The US and European refractories industry had a crisis in 2000–2003, with an indifferent market for steel and a declining refractory consumption per tonne of steel underlying firm buyouts and many plant closures. Many of the plant closures resulted from the acquisition of Harbison-Walker Refractories by RHI AG and some plants had their equipment auctioned off. Since much of the lost capacity was for carbon-magnesite brick, graphite consumption within the refractories area moved towards alumina-graphite shapes and Monolithics, and away from the brick. The major source of carbon-magnesite brick is now China. Almost all of the above refractories are used to make steel and account for 75% of refractory consumption; the rest is used by a variety of industries, such as cement.
According to the USGS, US natural graphite consumption in refractories comprised 12,500 tonnes in 2010.
Batteries
The use of graphite in batteries has increased since the 1970s. Natural and synthetic graphite are used as an anode material to construct electrodes in major battery technologies.
The demand for batteries, primarily nickel–metal hydride and lithium-ion batteries, caused a growth in demand for graphite in the late 1980s and early 1990s – a growth driven by portable electronics, such as portable CD players and power tools. Laptops, mobile phones, tablets, and smartphone products have increased the demand for batteries. Electric-vehicle batteries are anticipated to increase graphite demand. As an example, a lithium-ion battery in a fully electric Nissan Leaf contains nearly 40 kg of graphite.
Radioactive graphite removed from nuclear reactors has been investigated as a source of electricity for low-power applications. This waste is rich in carbon-14, which emits electrons through beta decay, so it could potentially be used as the basis for a betavoltaic device. This concept is known as the diamond battery.
Graphite anode materials
Graphite is "predominant anode material used today in lithium-ion batteries". Electric-vehicle (EV) batteries contain four basic components: anode, cathode, electrolyte, and separator. While there is much focus on the cathode materials lithium, nickel, cobalt, manganese, etc. the predominant anode material used in virtually all EV batteries is graphite.
Steelmaking
Natural graphite in steelmaking mostly goes into raising the carbon content in molten steel; it can also serve to lubricate the dies used to extrude hot steel. Carbon additives face competitive pricing from alternatives such as synthetic graphite powder, petroleum coke, and other forms of carbon. A carbon raiser is added to increase the carbon content of the steel to a specified level. An estimate based on USGS's graphite consumption statistics indicates that steelmakers in the US used 10,500 tonnes in this fashion in 2005.
Brake linings
Natural amorphous and fine flake graphite are used in brake linings or brake shoes for heavier (nonautomotive) vehicles, and became important with the need to substitute for asbestos. This use has been important for quite some time, but nonasbestos organic (NAO) compositions are beginning to reduce graphite's market share. A brake-lining industry shake-out with some plant closures has not been beneficial, nor has an indifferent automotive market. According to the USGS, US natural graphite consumption in brake linings was 6,510 tonnes in 2005.
Foundry facings and lubricants
A foundry-facing mold wash is a water-based paint of amorphous or fine flake graphite. Painting the inside of a mold with it and letting it dry leaves a fine graphite coat that will ease the separation of the object cast after the hot metal has cooled. Graphite lubricants are specialty items for use at very high or very low temperatures, as forging die lubricant, an antiseize agent, a gear lubricant for mining machinery, and to lubricate locks. Having low-grit graphite, or even better, no-grit graphite (ultra high purity), is highly desirable. It can be used as a dry powder, in water or oil, or as colloidal graphite (a permanent suspension in a liquid). An estimate based on USGS graphite consumption statistics indicates that 2,200 tonnes were used in this fashion in 2005. Metal can also be impregnated into graphite to create a self-lubricating alloy for application in extreme conditions, such as bearings for machines exposed to high or low temperatures.
Everyday use
Pencils
The ability to leave marks on paper and other objects gave graphite its name, given in 1789 by German mineralogist Abraham Gottlob Werner. It stems from γράφειν ("graphein"), meaning to write or draw in Ancient Greek.
From the 16th century, all pencils were made with leads of English natural graphite, but modern pencil lead is most commonly a mix of powdered graphite and clay; it was invented by Nicolas-Jacques Conté in 1795. It is chemically unrelated to the metal lead, whose ores had a similar appearance, hence the continuation of the name. Plumbago is another older term for natural graphite used for drawing, typically as a lump of the mineral without a wood casing. The term plumbago drawing is normally restricted to 17th and 18th-century works, mostly portraits.
Today, pencils are still a small but significant market for natural graphite. Around 7% of the 1.1 million tonnes produced in 2011 was used to make pencils. Low-quality amorphous graphite is used and sourced mainly from China.
In art, graphite is typically used to create detailed and precise drawings, as it allows for a wide range of values (light to dark) to be achieved. It can also be used to create softer, more subtle lines and shading. Graphite is popular among artists because it is easy to control, easy to erase, and produces a clean, professional look. It is also relatively inexpensive and widely available. Many artists use graphite in conjunction with other media, such as charcoal or ink, to create a range of effects and textures in their work. Graphite of various hardness or softness results in different qualities and tones when used as an artistic medium.
Pinewood derby
Graphite is probably the most-used lubricant in pinewood derbies.
Other uses
Natural graphite has found uses in zinc-carbon batteries, electric motor brushes, and various specialized applications. Railroads would often mix powdered graphite with waste oil or linseed oil to create a heat-resistant protective coating for the exposed portions of a steam locomotive's boiler, such as the smokebox or lower part of the firebox. The Scope soldering iron uses a graphite tip as its heating element.
Expanded graphite
Expanded graphite is made by immersing natural flake graphite in a bath of chromic acid, then concentrated sulfuric acid, which forces the crystal lattice planes apart, thus expanding the graphite. The expanded graphite can be used to make graphite foil or used directly as a "hot top" compound to insulate molten metal in a ladle or red-hot steel ingots and decrease heat loss, or as firestops fitted around a fire door or in sheet metal collars surrounding plastic pipe (during a fire, the graphite expands and chars to resist fire penetration and spread), or to make high-performance gasket material for high-temperature use. After being made into graphite foil, the foil is machined and assembled into the bipolar plates in fuel cells.
The foil is made into heat sinks for laptop computers which keeps them cool while saving weight, and is made into a foil laminate that can be used in valve packings or made into gaskets. Old-style packings are now a minor member of this grouping: fine flake graphite in oils or greases for uses requiring heat resistance. A GAN estimate of current US natural graphite consumption in this end-use is 7,500 tonnes.
Intercalated graphite
Graphite forms intercalation compounds with some metals and small molecules. In these compounds, the host molecule or atom gets "sandwiched" between the graphite layers, resulting in a type of compound with variable stoichiometry. A prominent example of an intercalation compound is potassium graphite, denoted by the formula KC8. Some graphite intercalation compounds are superconductors. The highest transition temperature (by June 2009) Tc = 11.5 K is achieved in CaC6, and it further increases under applied pressure (15.1 K at 8 GPa). Graphite's ability to intercalate lithium ions without significant damage from swelling is what makes it the dominant anode material in lithium-ion batteries.
History of synthetic graphite
Invention of a process to produce synthetic graphite
In 1893, Charles Street of Le Carbone discovered a process for making artificial graphite. In the mid-1890s, Edward Goodrich Acheson (1856–1931) accidentally invented another way to produce synthetic graphite after synthesizing carborundum (also called silicon carbide). He discovered that overheating carborundum, as opposed to pure carbon, produced almost pure graphite. While studying the effects of high temperature on carborundum, he had found that silicon vaporizes at about , leaving behind graphitic carbon. This graphite became valuable as a lubricant.
Acheson's technique for producing silicon carbide and graphite is named the Acheson process. In 1896, Acheson received a patent for his method of synthesizing graphite, and in 1897 started commercial production. The Acheson Graphite Co. was formed in 1899.
Synthetic graphite can also be prepared from polyimide and then commercialized.
Scientific research
Highly oriented pyrolytic graphite (HOPG) is the highest-quality synthetic form of graphite. It is used in scientific research, in particular, as a length standard for the calibration of scanning probe microscopes.
Electrodes
Graphite electrodes carry the electricity that melts scrap iron and steel, and sometimes direct-reduced iron (DRI), in electric arc furnaces, which are the vast majority of steel furnaces. They are made from petroleum coke after it is mixed with coal tar pitch. They are extruded and shaped, then baked to carbonize the binder (pitch). This is finally graphitized by heating it to temperatures approaching , at which the carbon atoms arrange into graphite. They can vary in size up to long and in diameter. An increasing proportion of global steel is made using electric arc furnaces, and the electric arc furnace itself is becoming more efficient, making more steel per tonne of electrode. An estimate based on USGS data indicates that graphite electrode consumption was in 2005.
Electrolytic aluminium smelting also uses graphitic carbon electrodes. On a much smaller scale, synthetic graphite electrodes are used in electrical discharge machining (EDM), commonly to make injection molds for plastics.
Powder and scrap
The powder is made by heating powdered petroleum coke above the temperature of graphitization, sometimes with minor modifications. The graphite scrap comes from pieces of unusable electrode material (in the manufacturing stage or after use) and lathe turnings, usually after crushing and sizing. Most synthetic graphite powder goes to carbon raising in steel (competing with natural graphite), with some used in batteries and brake linings. According to the United States Geological Survey, US synthetic graphite powder and scrap production were in 2001 (latest data).
Neutron moderator
Special grades of synthetic graphite, such as Gilsocarbon, also find use as a matrix and neutron moderator within nuclear reactors. Its low neutron cross-section also recommends it for use in proposed fusion reactors. Care must be taken that reactor-grade graphite is free of neutron absorbing materials such as boron, widely used as the seed electrode in commercial graphite deposition systems – this caused the failure of the Germans' World War II graphite-based nuclear reactors. Since they could not isolate the difficulty they were forced to use far more expensive heavy water moderators. Graphite used for nuclear reactors is often referred to as nuclear graphite. Herbert G. McPherson, a Berkeley trained physicist at National Carbon, a division of Union Carbide, was key in confirming a conjecture of Leo Szilard that boron impurities even in "pure" graphite were responsible for a neutron absorption cross-section in graphite that compromised U-235 chain reactions. McPherson was aware of the presence of impurities in graphite because, with the use of Technicolor in cinematography, the spectra of graphite electrode arcs used in movie projectors required impurities to enhance emission of light in the red region to display warmer skin tones on the screen. Thus, had it not been for color movies, chances are that the first sustained natural U chain reaction would have required a heavy water moderated reactor.
Other uses
Graphite (carbon) fiber and carbon nanotubes are also used in carbon fiber reinforced plastics, and in heat-resistant composites such as reinforced carbon-carbon (RCC). Commercial structures made from carbon fiber graphite composites include fishing rods, golf club shafts, bicycle frames, sports car body panels, the fuselage of the Boeing 787 Dreamliner and pool cue sticks and have been successfully employed in reinforced concrete. The mechanical properties of carbon fiber graphite-reinforced plastic composites and grey cast iron are strongly influenced by the role of graphite in these materials. In this context, the term "(100%) graphite" is often loosely used to refer to a pure mixture of carbon reinforcement and resin, while the term "composite" is used for composite materials with additional ingredients.
Modern smokeless powder is coated in graphite to prevent the buildup of static charge.
Graphite has been used in at least three radar absorbent materials. It was mixed with rubber in Sumpf and Schornsteinfeger, which were used on U-boat snorkels to reduce their radar cross section. It was also used in tiles on early F-117 Nighthawk stealth strike fighters.
Graphite composites are used as absorber for high-energy particles, for example in the Large Hadron Collider beam dump.
Graphite rods when filed into shape are used as a tool in glassworking to manipulate hot molten glass.
Graphite mining, beneficiation, and milling
Graphite is mined by both open pit and underground methods. Graphite usually needs beneficiation. This may be carried out by hand-picking the pieces of gangue (rock) and hand-screening the product or by crushing the rock and floating out the graphite. Beneficiation by flotation encounters the difficulty that graphite is very soft and "marks" (coats) the particles of gangue. This makes the "marked" gangue particles float off with the graphite, yielding impure concentrate. There are two ways of obtaining a commercial concentrate or product: repeated regrinding and floating (up to seven times) to purify the concentrate, or by acid leaching (dissolving) the gangue with hydrofluoric acid (for a silicate gangue) or hydrochloric acid (for a carbonate gangue).
In milling, the incoming graphite products and concentrates can be ground before being classified (sized or screened), with the coarser flake size fractions (below 8 mesh, 8–20 mesh, 20–50 mesh) carefully preserved, and then the carbon contents are determined. Some standard blends can be prepared from the different fractions, each with a certain flake size distribution and carbon content. Custom blends can also be made for individual customers who want a certain flake size distribution and carbon content. If flake size is unimportant, the concentrate can be ground more freely. Typical end products include a fine powder for use as a slurry in oil drilling and coatings for foundry molds, carbon raiser in the steel industry (Synthetic graphite powder and powdered petroleum coke can also be used as carbon raiser). Environmental impacts from graphite mills consist of air pollution including fine particulate exposure of workers and also soil contamination from powder spillages leading to heavy metal contamination of soil.
According to the United States Geological Survey (USGS), world production of natural graphite in 2016 was 1,200,000 tonnes, of which the following major exporters are: China (780,000 t), India (170,000 t), Brazil (80,000 t), Turkey (32,000 t) and North Korea (6,000 t). Graphite is not currently mined in the United States, but there are many historical mine sites including ones in Alabama, Montana, and in the Adirondacks of NY. Westwater Resources is in the development stages of creating a pilot plant for their Coosa Graphite Mine near Sylacauga, Alabama. U.S. production of synthetic graphite in 2010 was 134,000 t valued at $1.07 billion.
Occupational safety
Potential health effects include:
Inhalation: No inhalation hazard in manufactured and shipped state. Dust and fumes generated from the material can enter the body by inhalation. High concentrations of dust and fumes may irritate the throat and respiratory system and cause coughing. Frequent inhalation of fume/dust over a long period of time increases the risk of developing lung diseases. Prolonged and repeated overexposure to dust can lead to pneumoconiosis. Pre-existing pulmonary disorders, such as emphysema, may possibly be aggravated by prolonged exposure to high concentrations of graphite dusts.
Eye contact: Dust in the eyes will cause irritation. Exposed persons may experience eye tearing, redness, and discomfort.
Skin contact: Under normal conditions of intended use, this material does not pose a risk to health. Dust may irritate skin.
Ingestion: Not relevant, due to the form of the product in its manufactured and shipped state. However, ingestion of dusts generated during working operations may cause nausea and vomiting.
Potential physical / chemical effects: Bulk material is non-combustible. The material may form dust and can accumulate electrostatic charges, which may cause an electrical spark (ignition source). High dust levels may create potential for explosion.
United States
The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for graphite exposure in the workplace as a time-weighted average (TWA) of 15 million particles per cubic foot (1.5 mg/m3) over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of TWA 2.5 mg/m3 respirable dust over an 8-hour workday. At levels of 1250 mg/m3, graphite is immediately dangerous to life and health.
Graphite recycling
The most common way of recycling graphite occurs when synthetic graphite electrodes are either manufactured and pieces are cut off or lathe turnings are discarded for reuse, or the electrode (or other materials) are used all the way down to the electrode holder. A new electrode replaces the old one, but a sizeable piece of the old electrode remains. This is crushed and sized, and the resulting graphite powder is mostly used to raise the carbon content of molten steel.
Graphite-containing refractories are sometimes also recycled, but often are not due to their low graphite content: the largest-volume items, such as carbon-magnesite bricks that contain only 15–25% graphite, usually contain too little graphite to be worthwhile to recycle. However, some recycled carbon–magnesite brick is used as the basis for furnace-repair materials, and also crushed carbon–magnesite brick is used in slag conditioners.
While crucibles have a high graphite content, the volume of crucibles used and then recycled is very small.
A high-quality flake graphite product that closely resembles natural flake graphite can be made from steelmaking kish. Kish is a large-volume near-molten waste skimmed from the molten iron feed to a basic oxygen furnace and consists of a mix of graphite (precipitated out of the supersaturated iron), lime-rich slag, and some iron. The iron is recycled on-site, leaving a mixture of graphite and slag. The best recovery process uses hydraulic classification (which utilizes a flow of water to separate minerals by specific gravity: graphite is light and settles nearly last) to get a 70% graphite rough concentrate. Leaching this concentrate with hydrochloric acid gives a 95% graphite product with a flake size ranging from 10 mesh (2 mm) down.
Research and innovation in graphite technologies
Globally, over 60,000 patent families in graphite technologies were filed from 2012 to 2021. Patents were filed by applicants from over 60 countries and regions. However, graphite-related patent families originated predominantly from just a few countries. China was the top contributor with more than 47,000 patent families, accounting for four in every five graphite patent families filed worldwide in the last decade. Among other leading countries were Japan, the Republic of Korea, the United States and the Russian Federation. Together, these top five countries of applicant origin accounted for 95 percent of global patenting output related to graphite.
Among the different graphite sources, flake graphite has the highest number of patent families, with more than 5,600 filed worldwide from 2012 to 2021. Supported by active research from its commercial entities and research institutions, China is the country most actively exploiting flake graphite and has contributed to 85 percent of global patent filings in this area.
At the same time, innovations exploring new synthesis methods and uses for artificial graphite are gaining interest worldwide, as countries seek to exploit the superior material qualities associated with this man-made substance and reduce reliance on the natural material. Patenting activity is strongly led by commercial entities, particularly world-renowned battery manufacturers and anode material suppliers, with patenting interest focused on battery anode applications.
The exfoliation process for bulk graphite, which involves separating the carbon layers within graphite, has been extensively studied between 2012 and 2021. Specifically, ultrasonic and thermal exfoliation have been the two most popular approaches worldwide, with 4,267 and 2,579 patent families, respectively, significantly more than for either the chemical or electrochemical alternatives.
Global patenting activity relating to ultrasonic exfoliation has decreased over the years, indicating that this low-cost technique has become well established. Thermal exfoliation is a more recent process. Compared to ultrasonic exfoliation, this fast and solvent-free thermal approach has attracted greater commercial interest.
As the most widespread anode material for lithium-ion batteries, graphite has drawn significant attention worldwide for use in battery applications. With over 8,000 patent families filed from 2012 to 2021, battery applications were a key driver of global graphite-related inventions. Innovations in this area are led by battery manufacturers or anode suppliers who have amassed sizable patent portfolios focused strongly on battery performance improvements based on graphite anode innovation. Besides industry players, academia and research institutions have been an essential source of innovation in graphite anode technologies.
Graphite for polymer applications was an innovation hot topic from 2012 to 2021, with over 8,000 patent families recorded worldwide. However, in recent years, in the top countries of applicant origin in this area, including China, Japan and the United States of America (US), patent filings have decreased.
Graphite for manufacturing ceramics represents another area of intensive research, with over 6,000 patent families registered in the last decade alone. Specifically, graphite for refractory accounted for over one-third of ceramics-related graphite patent families in China and about one-fifth in the rest of the world. Other important graphite applications include high-value ceramic materials such as carbides for specific industries, ranging from electrical and electronics, aerospace and precision engineering to military and nuclear applications.
Carbon brushes represent a long-explored graphite application area. There have been few inventions in this area over the last decade, with fewer than 300 patent families filed from 2012 to 2021, significantly fewer than between 1992 and 2011.
Biomedical, sensor, and conductive ink are emerging application areas for graphite that have attracted interest from both academia and commercial entities, including renowned universities and multinational corporations. Typically for an emerging technology area, related patent families were filed by various organizations without any players dominating. As a result, the top applicants have a small number of inventions, unlike in well-explored areas, where they will have strong technology accumulation and large patent portfolios. The innovation focus of these three emerging areas is highly scattered and can be diverse, even for a single applicant. However, recent inventions are seen to leverage the development of graphite nanomaterials, particularly graphite nanocomposites and graphene.
| Physical sciences | Chemical elements_2 | null |
12383 | https://en.wikipedia.org/wiki/Genetic%20engineering | Genetic engineering | Genetic engineering, also called genetic modification or genetic manipulation, is the modification and manipulation of an organism's genes using technology. It is a set of technologies used to change the genetic makeup of cells, including the transfer of genes within and across species boundaries to produce improved or novel organisms.
New DNA is obtained by either isolating and copying the genetic material of interest using recombinant DNA methods or by artificially synthesising the DNA. A construct is usually created and used to insert this DNA into the host organism. The first recombinant DNA molecule was made by Paul Berg in 1972 by combining DNA from the monkey virus SV40 with the lambda virus.
As well as inserting genes, the process can be used to remove, or "knock out", genes. The new DNA can be inserted randomly, or targeted to a specific part of the genome.
An organism that is generated through genetic engineering is considered to be genetically modified (GM) and the resulting entity is a genetically modified organism (GMO). The first GMO was a bacterium generated by Herbert Boyer and Stanley Cohen in 1973. Rudolf Jaenisch created the first GM animal when he inserted foreign DNA into a mouse in 1974. The first company to focus on genetic engineering, Genentech, was founded in 1976 and started the production of human proteins. Genetically engineered human insulin was produced in 1978 and insulin-producing bacteria were commercialised in 1982. Genetically modified food has been sold since 1994, with the release of the Flavr Savr tomato. The Flavr Savr was engineered to have a longer shelf life, but most current GM crops are modified to increase resistance to insects and herbicides. GloFish, the first GMO designed as a pet, was sold in the United States in December 2003. In 2016 salmon modified with a growth hormone were sold.
Genetic engineering has been applied in numerous fields including research, medicine, industrial biotechnology and agriculture. In research, GMOs are used to study gene function and expression through loss of function, gain of function, tracking and expression experiments. By knocking out genes responsible for certain conditions it is possible to create animal model organisms of human diseases. As well as producing hormones, vaccines and other drugs, genetic engineering has the potential to cure genetic diseases through gene therapy. Chinese hamster ovary (CHO) cells are used in industrial genetic engineering. Additionally mRNA vaccines are made through genetic engineering to prevent infections by viruses such as COVID-19. The same techniques that are used to produce drugs can also have industrial applications such as producing enzymes for laundry detergent, cheeses and other products.
The rise of commercialised genetically modified crops has provided economic benefit to farmers in many different countries, but has also been the source of most of the controversy surrounding the technology. This has been present since its early use; the first field trials were destroyed by anti-GM activists. Although there is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, critics consider GM food safety a leading concern. Gene flow, impact on non-target organisms, control of the food supply and intellectual property rights have also been raised as potential issues. These concerns have led to the development of a regulatory framework, which started in 1975. It has led to an international treaty, the Cartagena Protocol on Biosafety, that was adopted in 2000. Individual countries have developed their own regulatory systems regarding GMOs, with the most marked differences occurring between the United States and Europe.
Overview
Genetic engineering is a process that alters the genetic structure of an organism by either removing or introducing DNA, or modifying existing genetic material in situ. Unlike traditional animal and plant breeding, which involves doing multiple crosses and then selecting for the organism with the desired phenotype, genetic engineering takes the gene directly from one organism and delivers it to the other. This is much faster, can be used to insert any genes from any organism (even ones from different domains) and prevents other undesirable genes from also being added.
Genetic engineering could potentially fix severe genetic disorders in humans by replacing the defective gene with a functioning one. It is an important tool in research that allows the function of specific genes to be studied. Drugs, vaccines and other products have been harvested from organisms engineered to produce them. Crops have been developed that aid food security by increasing yield, nutritional value and tolerance to environmental stresses.
The DNA can be introduced directly into the host organism or into a cell that is then fused or hybridised with the host. This relies on recombinant nucleic acid techniques to form new combinations of heritable genetic material followed by the incorporation of that material either indirectly through a vector system or directly through micro-injection, macro-injection or micro-encapsulation.
Genetic engineering does not normally include traditional breeding, in vitro fertilisation, induction of polyploidy, mutagenesis and cell fusion techniques that do not use recombinant nucleic acids or a genetically modified organism in the process. However, some broad definitions of genetic engineering include selective breeding. Cloning and stem cell research, although not considered genetic engineering, are closely related and genetic engineering can be used within them. Synthetic biology is an emerging discipline that takes genetic engineering a step further by introducing artificially synthesised material into an organism.
Plants, animals or microorganisms that have been changed through genetic engineering are termed genetically modified organisms or GMOs. If genetic material from another species is added to the host, the resulting organism is called transgenic. If genetic material from the same species or a species that can naturally breed with the host is used the resulting organism is called cisgenic. If genetic engineering is used to remove genetic material from the target organism the resulting organism is termed a knockout organism. In Europe genetic modification is synonymous with genetic engineering while within the United States of America and Canada genetic modification can also be used to refer to more conventional breeding methods.
History
Humans have altered the genomes of species for thousands of years through selective breeding, or artificial selection as contrasted with natural selection. More recently, mutation breeding has used exposure to chemicals or radiation to produce a high frequency of random mutations, for selective breeding purposes. Genetic engineering as the direct manipulation of DNA by humans outside breeding and mutations has only existed since the 1970s. The term "genetic engineering" was coined by the Russian-born geneticist Nikolay Timofeev-Ressovsky in his 1934 paper "The Experimental Production of Mutations", published in the British journal Biological Reviews. Jack Williamson used the term in his science fiction novel Dragon's Island, published in 1951 – one year before DNA's role in heredity was confirmed by Alfred Hershey and Martha Chase, and two years before James Watson and Francis Crick showed that the DNA molecule has a double-helix structure – though the general concept of direct genetic manipulation was explored in rudimentary form in Stanley G. Weinbaum's 1936 science fiction story Proteus Island.
In 1972, Paul Berg created the first recombinant DNA molecules by combining DNA from the monkey virus SV40 with that of the lambda virus. In 1973 Herbert Boyer and Stanley Cohen created the first transgenic organism by inserting antibiotic resistance genes into the plasmid of an Escherichia coli bacterium. A year later Rudolf Jaenisch created a transgenic mouse by introducing foreign DNA into its embryo, making it the world's first transgenic animal. These achievements led to concerns in the scientific community about potential risks from genetic engineering, which were first discussed in depth at the Asilomar Conference in 1975. One of the main recommendations from this meeting was that government oversight of recombinant DNA research should be established until the technology was deemed safe.
In 1976 Genentech, the first genetic engineering company, was founded by Herbert Boyer and Robert Swanson and a year later the company produced a human protein (somatostatin) in E. coli. Genentech announced the production of genetically engineered human insulin in 1978. In 1980, the U.S. Supreme Court in the Diamond v. Chakrabarty case ruled that genetically altered life could be patented. The insulin produced by bacteria was approved for release by the Food and Drug Administration (FDA) in 1982.
In 1983, a biotech company, Advanced Genetic Sciences (AGS) applied for U.S. government authorisation to perform field tests with the ice-minus strain of Pseudomonas syringae to protect crops from frost, but environmental groups and protestors delayed the field tests for four years with legal challenges. In 1987, the ice-minus strain of P. syringae became the first genetically modified organism (GMO) to be released into the environment when a strawberry field and a potato field in California were sprayed with it. Both test fields were attacked by activist groups the night before the tests occurred: "The world's first trial site attracted the world's first field trasher".
The first field trials of genetically engineered plants occurred in France and the US in 1986, when tobacco plants were engineered to be resistant to herbicides. The People's Republic of China was the first country to commercialise transgenic plants, introducing a virus-resistant tobacco in 1992. In 1994 Calgene attained approval to commercially release the first genetically modified food, the Flavr Savr, a tomato engineered to have a longer shelf life. In 1994, the European Union approved tobacco engineered to be resistant to the herbicide bromoxynil, making it the first genetically engineered crop commercialised in Europe. In 1995, Bt potato was approved safe by the Environmental Protection Agency, after having been approved by the FDA, making it the first pesticide-producing crop to be approved in the US. In 2009, 11 transgenic crops were grown commercially in 25 countries, the largest of which by area grown were the US, Brazil, Argentina, India, Canada, China, Paraguay and South Africa.
In 2010, scientists at the J. Craig Venter Institute created the first synthetic genome and inserted it into an empty bacterial cell. The resulting bacterium, named Mycoplasma laboratorium, could replicate and produce proteins. Four years later this was taken a step further when a bacterium was developed that replicated a plasmid containing a unique base pair, creating the first organism engineered to use an expanded genetic alphabet. In 2012, Jennifer Doudna and Emmanuelle Charpentier collaborated to develop the CRISPR/Cas9 system, a technique which can be used to easily and specifically alter the genome of almost any organism.
Process
Creating a GMO is a multi-step process. Genetic engineers must first choose what gene they wish to insert into the organism. This is driven by what the aim is for the resultant organism and is built on earlier research. Genetic screens can be carried out to determine potential genes and further tests then used to identify the best candidates. The development of microarrays, transcriptomics and genome sequencing has made it much easier to find suitable genes. Luck also plays its part; the Roundup Ready gene was discovered after scientists noticed a bacterium thriving in the presence of the herbicide.
Gene isolation and cloning
The next step is to isolate the candidate gene. The cell containing the gene is opened and the DNA is purified. The gene is separated by using restriction enzymes to cut the DNA into fragments or polymerase chain reaction (PCR) to amplify the gene segment. These segments can then be extracted through gel electrophoresis. If the chosen gene or the donor organism's genome has been well studied it may already be accessible from a genetic library. If the DNA sequence is known, but no copies of the gene are available, it can also be artificially synthesised. Once isolated the gene is ligated into a plasmid that is then inserted into a bacterium. The plasmid is replicated when the bacteria divide, ensuring unlimited copies of the gene are available. The RK2 plasmid is notable for its ability to replicate in a wide variety of single-celled organisms, which makes it suitable as a genetic engineering tool.
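As a toy illustration of the digestion step, the sketch below (plain Python, with a made-up sequence) locates the EcoRI recognition site GAATTC in a DNA string and lists the fragments a complete digest would yield; this is only a conceptual aid, not a substitute for the laboratory workflow or for dedicated sequence-analysis libraries.

```python
# Toy sketch: find EcoRI recognition sites (GAATTC) in a DNA string and
# report the fragments a complete digest would produce.
# EcoRI cuts between the G and the first A of its site (G^AATTC).
SITE, CUT_OFFSET = "GAATTC", 1

def digest(dna: str) -> list[str]:
    cut_points = [i + CUT_OFFSET for i in range(len(dna)) if dna.startswith(SITE, i)]
    bounds = [0] + cut_points + [len(dna)]
    return [dna[a:b] for a, b in zip(bounds, bounds[1:])]

# Hypothetical sequence used purely for illustration.
dna = "TTGAATTCGGCATGAATTCAAC"
print(digest(dna))   # ['TTG', 'AATTCGGCATG', 'AATTCAAC']
```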
Before the gene is inserted into the target organism it must be combined with other genetic elements. These include a promoter and terminator region, which initiate and end transcription. A selectable marker gene is added, which in most cases confers antibiotic resistance, so researchers can easily determine which cells have been successfully transformed. The gene can also be modified at this stage for better expression or effectiveness. These manipulations are carried out using recombinant DNA techniques, such as restriction digests, ligations and molecular cloning.
Inserting DNA into the host genome
There are a number of techniques used to insert genetic material into the host genome. Some bacteria can naturally take up foreign DNA. This ability can be induced in other bacteria via stress (e.g. thermal or electric shock), which increases the cell membrane's permeability to DNA; up-taken DNA can either integrate with the genome or exist as extrachromosomal DNA. DNA is generally inserted into animal cells using microinjection, where it can be injected through the cell's nuclear envelope directly into the nucleus, or through the use of viral vectors.
Plant genomes can be engineered by physical methods or by use of Agrobacterium for the delivery of sequences hosted in T-DNA binary vectors. In plants the DNA is often inserted using Agrobacterium-mediated transformation, taking advantage of the Agrobacterium's T-DNA sequence that allows natural insertion of genetic material into plant cells. Other methods include biolistics, where particles of gold or tungsten are coated with DNA and then shot into young plant cells, and electroporation, which involves using an electric shock to make the cell membrane permeable to plasmid DNA.
As only a single cell is transformed with genetic material, the organism must be regenerated from that single cell. In plants this is accomplished through the use of tissue culture. In animals it is necessary to ensure that the inserted DNA is present in the embryonic stem cells. Bacteria consist of a single cell and reproduce clonally so regeneration is not necessary. Selectable markers are used to easily differentiate transformed from untransformed cells. These markers are usually present in the transgenic organism, although a number of strategies have been developed that can remove the selectable marker from the mature transgenic plant.
Further testing using PCR, Southern hybridization, and DNA sequencing is conducted to confirm that an organism contains the new gene. These tests can also confirm the chromosomal location and copy number of the inserted gene. The presence of the gene does not guarantee it will be expressed at appropriate levels in the target tissue so methods that look for and measure the gene products (RNA and protein) are also used. These include northern hybridisation, quantitative RT-PCR, Western blot, immunofluorescence, ELISA and phenotypic analysis.
The new genetic material can be inserted randomly within the host genome or targeted to a specific location. The technique of gene targeting uses homologous recombination to make desired changes to a specific endogenous gene. This tends to occur at a relatively low frequency in plants and animals and generally requires the use of selectable markers. The frequency of gene targeting can be greatly enhanced through genome editing. Genome editing uses artificially engineered nucleases that create specific double-stranded breaks at desired locations in the genome, and use the cell's endogenous mechanisms to repair the induced break by the natural processes of homologous recombination and nonhomologous end-joining. There are four families of engineered nucleases: meganucleases, zinc finger nucleases, transcription activator-like effector nucleases (TALENs), and the Cas9-guideRNA system (adapted from CRISPR). TALEN and CRISPR are the two most commonly used and each has its own advantages. TALENs have greater target specificity, while CRISPR is easier to design and more efficient. In addition to enhancing gene targeting, engineered nucleases can be used to introduce mutations at endogenous genes that generate a gene knockout.
Applications
Genetic engineering has applications in medicine, research, industry and agriculture and can be used on a wide range of plants, animals and microorganisms. Bacteria, the first organisms to be genetically modified, can have plasmid DNA inserted containing new genes that code for medicines or enzymes that process food and other substrates. Plants have been modified for insect protection, herbicide resistance, virus resistance, enhanced nutrition, tolerance to environmental pressures and the production of edible vaccines. Most commercialised GMOs are insect resistant or herbicide tolerant crop plants. Genetically modified animals have been used for research, model animals and the production of agricultural or pharmaceutical products. The genetically modified animals include animals with genes knocked out, increased susceptibility to disease, hormones for extra growth and the ability to express proteins in their milk.
Medicine
Genetic engineering has many applications to medicine that include the manufacturing of drugs, creation of model animals that mimic human conditions and gene therapy. One of the earliest uses of genetic engineering was to mass-produce human insulin in bacteria. This application has now been applied to human growth hormones, follicle stimulating hormones (for treating infertility), human albumin, monoclonal antibodies, antihemophilic factors, vaccines and many other drugs. Mouse hybridomas, cells fused together to create monoclonal antibodies, have been adapted through genetic engineering to create human monoclonal antibodies. Genetically engineered viruses are being developed that can still confer immunity, but lack the infectious sequences.
Genetic engineering is also used to create animal models of human diseases. Genetically modified mice are the most common genetically engineered animal model. They have been used to study and model cancer (the oncomouse), obesity, heart disease, diabetes, arthritis, substance abuse, anxiety, aging and Parkinson disease. Potential cures can be tested against these mouse models.
Gene therapy is the genetic engineering of humans, generally by replacing defective genes with effective ones. Clinical research using somatic gene therapy has been conducted with several diseases, including X-linked SCID, chronic lymphocytic leukemia (CLL), and Parkinson's disease. In 2012, Alipogene tiparvovec became the first gene therapy treatment to be approved for clinical use. In 2015 a virus was used to insert a healthy gene into the skin cells of a boy suffering from a rare skin disease, epidermolysis bullosa, in order to grow, and then graft healthy skin onto 80 percent of the boy's body which was affected by the illness.
Germline gene therapy would result in any change being inheritable, which has raised concerns within the scientific community. In 2015, CRISPR was used to edit the DNA of non-viable human embryos, leading scientists of major world academies to call for a moratorium on inheritable human genome edits. There are also concerns that the technology could be used not just for treatment, but for enhancement, modification or alteration of a human being's appearance, adaptability, intelligence, character or behavior. The distinction between cure and enhancement can also be difficult to establish. In November 2018, He Jiankui announced that he had edited the genomes of two human embryos, to attempt to disable the CCR5 gene, which codes for a receptor that HIV uses to enter cells. The work was widely condemned as unethical, dangerous, and premature. Currently, germline modification is banned in 40 countries. Scientists that do this type of research will often let embryos grow for a few days without allowing them to develop into babies.
Researchers are altering the genome of pigs to induce the growth of human organs, with the aim of increasing the success of pig to human organ transplantation. Scientists are creating "gene drives", changing the genomes of mosquitoes to make them immune to malaria, and then looking to spread the genetically altered mosquitoes throughout the mosquito population in the hopes of eliminating the disease.
Research
Genetic engineering is an important tool for natural scientists, with the creation of transgenic organisms one of the most important tools for analysis of gene function. Genes and other genetic information from a wide range of organisms can be inserted into bacteria for storage and modification, creating genetically modified bacteria in the process. Bacteria are cheap, easy to grow, clonal, multiply quickly, relatively easy to transform and can be stored at -80 °C almost indefinitely. Once a gene is isolated it can be stored inside the bacteria providing an unlimited supply for research.
Organisms are genetically engineered to discover the functions of certain genes. This could be the effect on the phenotype of the organism, where the gene is expressed or what other genes it interacts with. These experiments generally involve loss of function, gain of function, tracking and expression.
Loss of function experiments, such as in a gene knockout experiment, in which an organism is engineered to lack the activity of one or more genes. In a simple knockout a copy of the desired gene has been altered to make it non-functional. Embryonic stem cells incorporate the altered gene, which replaces the already present functional copy. These stem cells are injected into blastocysts, which are implanted into surrogate mothers. This allows the experimenter to analyse the defects caused by this mutation and thereby determine the role of particular genes. It is used especially frequently in developmental biology. When this is done by creating a library of genes with point mutations at every position in the area of interest, or even every position in the whole gene, this is called "scanning mutagenesis". The simplest method, and the first to be used, is "alanine scanning", where every position in turn is mutated to the unreactive amino acid alanine.
Gain of function experiments, the logical counterpart of knockouts. These are sometimes performed in conjunction with knockout experiments to more finely establish the function of the desired gene. The process is much the same as that in knockout engineering, except that the construct is designed to increase the function of the gene, usually by providing extra copies of the gene or inducing synthesis of the protein more frequently. Gain of function is used to tell whether or not a protein is sufficient for a function, but does not always mean it is required, especially when dealing with genetic or functional redundancy.
Tracking experiments, which seek to gain information about the localisation and interaction of the desired protein. One way to do this is to replace the wild-type gene with a 'fusion' gene, which is a juxtaposition of the wild-type gene with a reporting element such as green fluorescent protein (GFP) that will allow easy visualisation of the products of the genetic modification. While this is a useful technique, the manipulation can destroy the function of the gene, creating secondary effects and possibly calling into question the results of the experiment. More sophisticated techniques are now in development that can track protein products without mitigating their function, such as the addition of small sequences that will serve as binding motifs to monoclonal antibodies.
Expression studies aim to discover where and when specific proteins are produced. In these experiments, the DNA sequence before the DNA that codes for a protein, known as a gene's promoter, is reintroduced into an organism with the protein coding region replaced by a reporter gene such as GFP or an enzyme that catalyses the production of a dye. Thus the time and place where a particular protein is produced can be observed. Expression studies can be taken a step further by altering the promoter to find which pieces are crucial for the proper expression of the gene and are actually bound by transcription factor proteins; this process is known as promoter bashing.
Industrial
Organisms can have their cells transformed with a gene coding for a useful protein, such as an enzyme, so that they will overexpress the desired protein. Mass quantities of the protein can then be manufactured by growing the transformed organism in bioreactor equipment using industrial fermentation, and then purifying the protein. Some genes do not work well in bacteria, so yeast, insect cells or mammalian cells can also be used. These techniques are used to produce medicines such as insulin, human growth hormone, and vaccines, supplements such as tryptophan, aid in the production of food (chymosin in cheese making) and fuels. Other applications with genetically engineered bacteria could involve making them perform tasks outside their natural cycle, such as making biofuels, cleaning up oil spills, carbon and other toxic waste and detecting arsenic in drinking water. Certain genetically modified microbes can also be used in biomining and bioremediation, due to their ability to extract heavy metals from their environment and incorporate them into compounds that are more easily recoverable.
In materials science, a genetically modified virus has been used in a research laboratory as a scaffold for assembling a more environmentally friendly lithium-ion battery. Bacteria have also been engineered to function as sensors by expressing a fluorescent protein under certain environmental conditions.
Agriculture
One of the best-known and controversial applications of genetic engineering is the creation and use of genetically modified crops or genetically modified livestock to produce genetically modified food. Crops have been developed to increase production, increase tolerance to abiotic stresses, alter the composition of the food, or to produce novel products.
The first crops to be released commercially on a large scale provided protection from insect pests or tolerance to herbicides. Fungal and virus resistant crops have also been developed or are in development. This makes the insect and weed management of crops easier and can indirectly increase crop yield. GM crops that directly improve yield by accelerating growth or making the plant more hardy (by improving salt, cold or drought tolerance) are also under development. As of 2016, salmon had been genetically modified with growth hormones to reach normal adult size much faster.
GMOs have been developed that modify the quality of produce by increasing the nutritional value or providing more industrially useful qualities or quantities. The Amflora potato produces a more industrially useful blend of starches. Soybeans and canola have been genetically modified to produce more healthy oils. The first commercialised GM food was a tomato that had delayed ripening, increasing its shelf life.
Plants and animals have been engineered to produce materials they do not normally make. Pharming uses crops and animals as bioreactors to produce vaccines, drug intermediates, or the drugs themselves; the useful product is purified from the harvest and then used in the standard pharmaceutical production process. Cows and goats have been engineered to express drugs and other proteins in their milk, and in 2009 the FDA approved a drug produced in goat milk.
Other applications
Genetic engineering has potential applications in conservation and natural area management. Gene transfer through viral vectors has been proposed as a means of controlling invasive species as well as vaccinating threatened fauna from disease. Transgenic trees have been suggested as a way to confer resistance to pathogens in wild populations. With the increasing risks of maladaptation in organisms as a result of climate change and other perturbations, facilitated adaptation through gene tweaking could be one solution to reducing extinction risks. Applications of genetic engineering in conservation are thus far mostly theoretical and have yet to be put into practice.
Genetic engineering is also being used to create microbial art. Some bacteria have been genetically engineered to create black and white photographs. Novelty items such as lavender-colored carnations, blue roses, and glowing fish, have also been produced through genetic engineering.
Regulation
The regulation of genetic engineering concerns the approaches taken by governments to assess and manage the risks associated with the development and release of GMOs. The development of a regulatory framework began in 1975, at Asilomar, California. The Asilomar meeting recommended a set of voluntary guidelines regarding the use of recombinant technology. As the technology improved, the US established a committee at the Office of Science and Technology Policy, which assigned regulatory approval of GM food to the USDA, FDA and EPA. The Cartagena Protocol on Biosafety, an international treaty that governs the transfer, handling, and use of GMOs, was adopted on 29 January 2000. One hundred and fifty-seven countries are members of the Protocol, and many use it as a reference point for their own regulations.
The legal and regulatory status of GM foods varies by country, with some nations banning or restricting them, and others permitting them with widely differing degrees of regulation. Some countries allow the import of GM food with authorisation, but either do not allow its cultivation (Russia, Norway, Israel) or have provisions for cultivation even though no GM products are yet produced (Japan, South Korea). Most countries that do not allow GMO cultivation do permit research. Some of the most marked differences occur between the US and Europe. The US policy focuses on the product (not the process), only looks at verifiable scientific risks and uses the concept of substantial equivalence. The European Union by contrast has possibly the most stringent GMO regulations in the world. All GMOs, along with irradiated food, are considered "new food" and subject to extensive, case-by-case, science-based food evaluation by the European Food Safety Authority. The criteria for authorisation fall in four broad categories: "safety", "freedom of choice", "labelling", and "traceability". The level of regulation in other countries that cultivate GMOs lies between Europe and the United States.
One of the key issues concerning regulators is whether GM products should be labeled. The European Commission says that mandatory labeling and traceability are needed to allow for informed choice, avoid potential false advertising and facilitate the withdrawal of products if adverse effects on health or the environment are discovered. The American Medical Association and the American Association for the Advancement of Science say that absent scientific evidence of harm even voluntary labeling is misleading and will falsely alarm consumers. Labeling of GMO products in the marketplace is required in 64 countries. Labeling can be mandatory up to a threshold GM content level (which varies between countries) or voluntary. In Canada and the US labeling of GM food is voluntary, while in Europe all food (including processed food) or feed which contains greater than 0.9% of approved GMOs must be labelled.
Controversy
Critics have objected to the use of genetic engineering on several grounds, including ethical, ecological and economic concerns. Many of these concerns involve GM crops and whether food produced from them is safe and what impact growing them will have on the environment. These controversies have led to litigation, international trade disputes, and protests, and to restrictive regulation of commercial products in some countries.
Accusations that scientists are "playing God" and other religious issues have been ascribed to the technology from the beginning. Other ethical issues raised include the patenting of life, the use of intellectual property rights, the level of labeling on products, control of the food supply and the objectivity of the regulatory process. Although doubts have been raised, economically most studies have found growing GM crops to be beneficial to farmers.
Gene flow between GM crops and compatible plants, along with increased use of selective herbicides, can increase the risk of "superweeds" developing. Other environmental concerns involve potential impacts on non-target organisms, including soil microbes, and an increase in secondary and resistant insect pests. Many of the environmental impacts regarding GM crops may take many years to be understood and are also evident in conventional agriculture practices. With the commercialisation of genetically modified fish there are concerns over what the environmental consequences will be if they escape.
There are three main concerns over the safety of genetically modified food: whether they may provoke an allergic reaction; whether the genes could transfer from the food into human cells; and whether the genes not approved for human consumption could outcross to other crops. There is a scientific consensus that currently available food derived from GM crops poses no greater risk to human health than conventional food, but that each GM food needs to be tested on a case-by-case basis before introduction. Nonetheless, members of the public are less likely than scientists to perceive GM foods as safe.
In popular culture
Genetic engineering features in many science fiction stories. Frank Herbert's novel The White Plague describes the deliberate use of genetic engineering to create a pathogen which specifically kills women. Another of Herbert's creations, the Dune series of novels, uses genetic engineering to create the powerful Tleilaxu. Few films have informed audiences about genetic engineering, with the exception of the 1978 The Boys from Brazil and the 1993 Jurassic Park, both of which make use of a lesson, a demonstration, and a clip of scientific film. Genetic engineering methods are weakly represented in film; Michael Clark, writing for the Wellcome Trust, calls the portrayal of genetic engineering and biotechnology "seriously distorted" in films such as The 6th Day. In Clark's view, the biotechnology is typically "given fantastic but visually arresting forms" while the science is either relegated to the background or fictionalised to suit a young audience.
| Technology | Food and health | null |
12385 | https://en.wikipedia.org/wiki/Genetic%20code | Genetic code | The genetic code is the set of rules used by living cells to translate information encoded within genetic material (DNA or RNA sequences of nucleotide triplets or codons) into proteins. Translation is accomplished by the ribosome, which links proteinogenic amino acids in an order specified by messenger RNA (mRNA), using transfer RNA (tRNA) molecules to carry amino acids and to read the mRNA three nucleotides at a time. The genetic code is highly similar among all organisms and can be expressed in a simple table with 64 entries.
The codons specify which amino acid will be added next during protein biosynthesis. With some exceptions, a three-nucleotide codon in a nucleic acid sequence specifies a single amino acid. The vast majority of genes are encoded with a single scheme (see the RNA codon table). That scheme is often called the canonical or standard genetic code, or simply the genetic code, though variant codes (such as in mitochondria) exist.
History
Efforts to understand how proteins are encoded began after DNA's structure was discovered in 1953. The key discoverers, English biophysicist Francis Crick and American biologist James Watson, working together at the Cavendish Laboratory of the University of Cambridge, hypothesized that information flows from DNA and that there is a link between DNA and proteins. Soviet-American physicist George Gamow was the first to give a workable scheme for protein synthesis from DNA. He postulated that sets of three bases (triplets) must be employed to encode the 20 standard amino acids used by living cells to build proteins, which would allow a maximum of 4^3 = 64 amino acids. He named this DNA–protein interaction (the original genetic code) the "diamond code".
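To make the arithmetic behind Gamow's reasoning concrete, a short sketch (plain Python, no external libraries) counts how many distinct code words each word length allows:

```python
from itertools import product

BASES = "ACGT"
AMINO_ACIDS = 20  # standard amino acids that need to be encoded

# Count the distinct "words" of each length that four bases can spell.
for length in (1, 2, 3):
    n_words = len(list(product(BASES, repeat=length)))  # equals 4 ** length
    verdict = "enough" if n_words >= AMINO_ACIDS else "not enough"
    print(f"word length {length}: {n_words} combinations ({verdict} for {AMINO_ACIDS} amino acids)")
# word length 1: 4, word length 2: 16, word length 3: 64 -> only triplets suffice.
```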
In 1954, Gamow created an informal scientific organisation, the RNA Tie Club, as suggested by Watson, for scientists of different persuasions who were interested in how proteins were synthesised from genes. However, the club could have only 20 permanent members to represent each of the 20 amino acids, and four additional honorary members to represent the four nucleotides of DNA.
The first scientific contribution of the club, later recorded as "one of the most important unpublished articles in the history of science" and "the most famous unpublished paper in the annals of molecular biology", was made by Crick. Crick presented a type-written paper titled "On Degenerate Templates and the Adaptor Hypothesis: A Note for the RNA Tie Club" to the members of the club in January 1955, which "totally changed the way we thought about protein synthesis", as Watson recalled. The hypothesis states that the triplet code was not passed on to amino acids as Gamow thought, but carried by a different molecule, an adaptor, that interacts with amino acids. The adaptor was later identified as tRNA.
Codons
The Crick, Brenner, Barnett and Watts-Tobin experiment first demonstrated that codons consist of three DNA bases.
Marshall Nirenberg and J. Heinrich Matthaei were the first to reveal the nature of a codon in 1961. They used a cell-free system to translate a poly-uracil RNA sequence (i.e., UUUUU...) and discovered that the polypeptide that they had synthesized consisted of only the amino acid phenylalanine. They thereby deduced that the codon UUU specified the amino acid phenylalanine.
This was followed by experiments in Severo Ochoa's laboratory that demonstrated that the poly-adenine RNA sequence (AAAAA...) coded for the polypeptide poly-lysine and that the poly-cytosine RNA sequence (CCCCC...) coded for the polypeptide poly-proline. Therefore, the codon AAA specified the amino acid lysine, and the codon CCC specified the amino acid proline. Using various copolymers most of the remaining codons were then determined.
Subsequent work by Har Gobind Khorana identified the rest of the genetic code. Shortly thereafter, Robert W. Holley determined the structure of transfer RNA (tRNA), the adapter molecule that facilitates the process of translating RNA into protein. This work was based upon Ochoa's earlier studies on the enzymology of RNA synthesis, for which Ochoa received the Nobel Prize in Physiology or Medicine in 1959.
Extending this work, Nirenberg and Philip Leder revealed the code's triplet nature and deciphered its codons. In these experiments, various combinations of mRNA were passed through a filter that contained ribosomes, the components of cells that translate RNA into protein. Unique triplets promoted the binding of specific tRNAs to the ribosome. Leder and Nirenberg were able to determine the sequences of 54 out of 64 codons in their experiments. Khorana, Holley and Nirenberg received the Nobel Prize (1968) for their work.
The three stop codons were named by discoverers Richard Epstein and Charles Steinberg. "Amber" was named after their friend Harris Bernstein, whose last name means "amber" in German. The other two stop codons were named "ochre" and "opal" in order to keep the "color names" theme.
Expanded genetic codes (synthetic biology)
Among a broad academic audience, the concept of the evolution of the genetic code from the original and ambiguous genetic code to a well-defined ("frozen") code with the repertoire of 20 (+2) canonical amino acids is widely accepted.
However, there are different opinions, concepts, approaches and ideas about the best way to change it experimentally. Models have even been proposed that predict "entry points" for synthetic amino acid invasion of the genetic code.
Since 2001, 40 non-natural amino acids have been added into proteins by creating a unique codon (recoding) and a corresponding transfer RNA:aminoacyl-tRNA synthetase pair to encode it with diverse physicochemical and biological properties, in order to be used as a tool for exploring protein structure and function or to create novel or enhanced proteins.
H. Murakami and M. Sisido extended some codons to have four and five bases. Steven A. Benner constructed a functional 65th (in vivo) codon.
In 2015 N. Budisa, D. Söll and co-workers reported the full substitution of all 20,899 tryptophan residues (UGG codons) with unnatural thienopyrrole-alanine in the genetic code of the bacterium Escherichia coli.
In 2016 the first stable semisynthetic organism was created. It was a (single cell) bacterium with two synthetic bases (called X and Y). The bases survived cell division.
In 2017, researchers in South Korea reported that they had engineered a mouse with an extended genetic code that can produce proteins with unnatural amino acids.
In May 2019, researchers reported the creation of a new "Syn61" strain of the bacterium Escherichia coli. This strain has a fully synthetic genome that is refactored (all overlaps expanded), recoded (removing the use of three out of 64 codons completely), and further modified to remove the now unnecessary tRNAs and release factors. It is fully viable and grows 1.6× slower than its wild-type counterpart "MDS42".
Features
Reading frame
A reading frame is defined by the initial triplet of nucleotides from which translation starts. It sets the frame for a run of successive, non-overlapping codons, which is known as an "open reading frame" (ORF). For example, the string 5'-AAATGAACG-3', if read from the first position, contains the codons AAA, TGA, and ACG; if read from the second position, it contains the codons AAT and GAA; and if read from the third position, it contains the codons ATG and AAC. Every sequence can, thus, be read in its 5' → 3' direction in three reading frames, each producing a possibly distinct amino acid sequence: in the given example, Lys (K)-Trp (W)-Thr (T), Asn (N)-Glu (E), or Met (M)-Asn (N), respectively (when translating with the vertebrate mitochondrial code). When DNA is double-stranded, six possible reading frames are defined, three in the forward orientation on one strand and three reverse on the opposite strand. Protein-coding frames are defined by a start codon, usually the first AUG (ATG) codon in the RNA (DNA) sequence.
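Since a reading frame is just a choice of where to start splitting the sequence into non-overlapping triplets, it can be illustrated with a few lines of plain Python; the sequence below is the example from the text:

```python
def codons(seq: str, frame: int) -> list[str]:
    """Split seq into non-overlapping triplets starting at the given frame (0, 1 or 2)."""
    return [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]

seq = "AAATGAACG"
for frame in range(3):
    print(f"frame {frame + 1}: {codons(seq, frame)}")
# frame 1: ['AAA', 'TGA', 'ACG']
# frame 2: ['AAT', 'GAA']
# frame 3: ['ATG', 'AAC']
```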
In eukaryotes, ORFs in exons are often interrupted by introns.
Start and stop codons
Translation starts with a chain-initiation codon or start codon. The start codon alone is not sufficient to begin the process. Nearby sequences such as the Shine-Dalgarno sequence in E. coli and initiation factors are also required to start translation. The most common start codon is AUG, which is read as methionine or as formylmethionine (in bacteria, mitochondria, and plastids). Alternative start codons depending on the organism include "GUG" or "UUG"; these codons normally represent valine and leucine, respectively, but as start codons they are translated as methionine or formylmethionine.
The three stop codons have names: UAG is amber, UGA is opal (sometimes also called umber), and UAA is ochre. Stop codons are also called "termination" or "nonsense" codons. They signal release of the nascent polypeptide from the ribosome because no cognate tRNA has anticodons complementary to these stop signals, allowing a release factor to bind to the ribosome instead.
Effect of mutations
During the process of DNA replication, errors occasionally occur in the polymerization of the second strand. These errors, mutations, can affect an organism's phenotype, especially if they occur within the protein coding sequence of a gene. Error rates are typically 1 error in every 10–100 million bases—due to the "proofreading" ability of DNA polymerases.
Missense mutations and nonsense mutations are examples of point mutations that can cause genetic diseases such as sickle-cell disease and thalassemia respectively. Clinically important missense mutations generally change the properties of the coded amino acid residue among basic, acidic, polar or non-polar states, whereas nonsense mutations result in a stop codon.
Mutations that disrupt the reading frame sequence by indels (insertions or deletions) of a non-multiple of 3 nucleotide bases are known as frameshift mutations. These mutations usually result in a completely different translation from the original, and likely cause a stop codon to be read, which truncates the protein. These mutations may impair the protein's function and are thus rare in in vivo protein-coding sequences. One reason inheritance of frameshift mutations is rare is that, if the protein being translated is essential for growth under the selective pressures the organism faces, absence of a functional protein may cause death before the organism becomes viable. Frameshift mutations may result in severe genetic diseases such as Tay–Sachs disease.
Although most mutations that change protein sequences are harmful or neutral, some mutations have benefits. These mutations may enable the mutant organism to withstand particular environmental stresses better than wild type organisms, or reproduce more quickly. In these cases a mutation will tend to become more common in a population through natural selection. Viruses that use RNA as their genetic material have rapid mutation rates, which can be an advantage, since these viruses thereby evolve rapidly, and thus evade the immune system defensive responses. In large populations of asexually reproducing organisms, for example, E. coli, multiple beneficial mutations may co-occur. This phenomenon is called clonal interference and causes competition among the mutations.
Degeneracy
Degeneracy is the redundancy of the genetic code. This term was given by Bernfield and Nirenberg. The genetic code has redundancy but no ambiguity (see the codon tables below for the full correlation). For example, although codons GAA and GAG both specify glutamic acid (redundancy), neither specifies another amino acid (no ambiguity). The codons encoding one amino acid may differ in any of their three positions. For example, the amino acid leucine is specified by YUR or CUN (UUA, UUG, CUU, CUC, CUA, or CUG) codons (difference in the first or third position indicated using IUPAC notation), while the amino acid serine is specified by UCN or AGY (UCA, UCG, UCC, UCU, AGU, or AGC) codons (difference in the first, second, or third position). A practical consequence of redundancy is that errors in the third position of the triplet codon cause only a silent mutation or an error that would not affect the protein because the hydrophilicity or hydrophobicity is maintained by equivalent substitution of amino acids; for example, a codon of NUN (where N = any nucleotide) tends to code for hydrophobic amino acids. NCN yields amino acid residues that are small in size and moderate in hydropathicity; NAN encodes average size hydrophilic residues. The genetic code is so well-structured for hydropathicity that a mathematical analysis (Singular Value Decomposition) of 12 variables (4 nucleotides x 3 positions) yields a remarkable correlation (C = 0.95) for predicting the hydropathicity of the encoded amino acid directly from the triplet nucleotide sequence, without translation. Note in the table, below, eight amino acids are not affected at all by mutations at the third position of the codon, whereas in the figure above, a mutation at the second position is likely to cause a radical change in the physicochemical properties of the encoded amino acid.
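The redundancy is easy to see by tabulating the standard code and counting how many codons map to each amino acid. The sketch below (plain Python) builds the standard table from its conventional 64-letter amino-acid string (NCBI translation table 1, first codon position varying slowest over T, C, A, G) and reports the synonymous-codon counts; '*' marks the stop codons:

```python
from itertools import product
from collections import defaultdict

BASES = "TCAG"  # conventional ordering of the bases
# Amino acids for the 64 codons of the standard code, listed in the order
# TTT, TTC, TTA, TTG, TCT, ... (first position varying slowest).
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"

codon_table = {"".join(c): aa for c, aa in zip(product(BASES, repeat=3), AAS)}

synonyms = defaultdict(list)
for codon, aa in codon_table.items():
    synonyms[aa].append(codon)

for aa, codon_list in sorted(synonyms.items(), key=lambda kv: -len(kv[1])):
    print(f"{aa}: {len(codon_list)} codons {codon_list}")
# Leucine (L), serine (S) and arginine (R) each have 6 codons;
# methionine (M) and tryptophan (W) have only 1.
```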
Nevertheless, changes in the first position of the codons are more important than changes in the second position on a global scale. The reason may be that charge reversal (from a positive to a negative charge or vice versa) can only occur upon mutations in the first position of certain codons, but not upon changes in the second position of any codon. Such charge reversal may have dramatic consequences for the structure or function of a protein. This aspect may have been largely underestimated by previous studies.
Codon usage bias
The frequency of codons, also known as codon usage bias, can vary from species to species with functional implications for the control of translation. The preferred codon varies by organism; for example, the most common proline codon in E. coli is CCG, whereas in humans this is the least used proline codon.
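Codon usage bias can be quantified simply by counting how often each synonymous codon occurs in a coding sequence. A minimal sketch (plain Python; the short in-frame sequence is invented for illustration, not real E. coli or human data):

```python
from collections import Counter

PROLINE_CODONS = ("CCA", "CCC", "CCG", "CCT")

def proline_codon_usage(cds: str) -> dict[str, float]:
    """Relative frequency of each proline codon in an in-frame coding sequence."""
    counts = Counter(cds[i:i + 3] for i in range(0, len(cds) - 2, 3))
    total = sum(counts[c] for c in PROLINE_CODONS)
    return {c: (counts[c] / total if total else 0.0) for c in PROLINE_CODONS}

# Hypothetical in-frame sequence used purely for illustration.
cds = "ATGCCGCCGCCACCGCCTCCGTAA"
print(proline_codon_usage(cds))
# {'CCA': 0.1666..., 'CCC': 0.0, 'CCG': 0.6666..., 'CCT': 0.1666...}
```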
Alternative genetic codes
Non-standard amino acids
In some proteins, non-standard amino acids are substituted for standard stop codons, depending on associated signal sequences in the messenger RNA. For example, UGA can code for selenocysteine and UAG can code for pyrrolysine. Selenocysteine came to be seen as the 21st amino acid, and pyrrolysine as the 22nd. Both selenocysteine and pyrrolysine may be present in the same organism. Although the genetic code is normally fixed in an organism, the prokaryote Acetohalobium arabaticum can expand its genetic code from 20 to 21 amino acids (by including pyrrolysine) under different conditions of growth.
Variations
There was originally a simple and widely accepted argument that the genetic code should be universal: namely, that any variation in the genetic code would be lethal to the organism (although Crick had stated that viruses were an exception). This is known as the "frozen accident" argument for the universality of the genetic code. However, in his seminal paper on the origins of the genetic code in 1968, Francis Crick still stated that the universality of the genetic code in all organisms was an unproven assumption, and was probably not true in some instances. He predicted that "The code is universal (the same in all organisms) or nearly so". The first variation was discovered in 1979, by researchers studying human mitochondrial genes. Many slight variants were discovered thereafter, including various alternative mitochondrial codes. These minor variants for example involve translation of the codon UGA as tryptophan in Mycoplasma species, and translation of CUG as a serine rather than leucine in yeasts of the "CTG clade" (such as Candida albicans). Because viruses must use the same genetic code as their hosts, modifications to the standard genetic code could interfere with viral protein synthesis or functioning. However, viruses such as totiviruses have adapted to the host's genetic code modification. In bacteria and archaea, GUG and UUG are common start codons. In rare cases, certain proteins may use alternative start codons.
Surprisingly, variations in the interpretation of the genetic code exist also in human nuclear-encoded genes: In 2016, researchers studying the translation of malate dehydrogenase found that in about 4% of the mRNAs encoding this enzyme the stop codon is naturally used to encode the amino acids tryptophan and arginine. This type of recoding is induced by a high-readthrough stop codon context and it is referred to as functional translational readthrough.
Despite these differences, all known naturally occurring codes are very similar. The coding mechanism is the same for all organisms: three-base codons, tRNA, ribosomes, single direction reading and translating single codons into single amino acids. The most extreme variations occur in certain ciliates where the meaning of stop codons depends on their position within mRNA. When close to the 3' end they act as terminators while in internal positions they either code for amino acids as in Condylostoma magnum or trigger ribosomal frameshifting as in Euplotes.
The origins and variation of the genetic code, including the mechanisms behind the evolvability of the genetic code, have been widely studied, and some studies have been done experimentally evolving the genetic code of some organisms.
Inference
Variant genetic codes used by an organism can be inferred by identifying highly conserved genes encoded in that genome, and comparing its codon usage to the amino acids in homologous proteins of other organisms. For example, the program FACIL infers a genetic code by searching which amino acids in homologous protein domains are most often aligned to every codon. The resulting amino acid (or stop codon) probabilities for each codon are displayed in a genetic code logo.
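The counting idea behind such tools can be sketched in a few lines: for each codon, tally which amino acid most often appears at the aligned position in homologous proteins and take the majority vote. The toy example below (plain Python, with invented alignment observations) only illustrates that step; it is not the actual FACIL or Codetta implementation:

```python
from collections import Counter, defaultdict

# Hypothetical (codon, aligned amino acid) observations collected from
# alignments of conserved genes against homologs with known protein sequences.
observations = [
    ("TGA", "W"), ("TGA", "W"), ("TGA", "*"),   # hints that TGA encodes Trp here
    ("CTG", "S"), ("CTG", "S"), ("CTG", "L"),   # hints at a CUG-serine clade
    ("AAA", "K"), ("AAA", "K"), ("AAA", "K"),
]

votes = defaultdict(Counter)
for codon, aa in observations:
    votes[codon][aa] += 1

inferred = {codon: counts.most_common(1)[0][0] for codon, counts in votes.items()}
print(inferred)   # {'TGA': 'W', 'CTG': 'S', 'AAA': 'K'}
```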
As of January 2022, the most complete survey of genetic codes is by Shulgina and Eddy, who screened 250,000 prokaryotic genomes using their Codetta tool. This tool uses a similar approach to FACIL with a larger Pfam database. Despite the NCBI already providing 27 translation tables, the authors were able to find 5 new genetic code variations (corroborated by tRNA mutations) and correct several misattributions. Codetta was later used to analyze genetic code change in ciliates.
Origin
The genetic code is a key part of the history of life, according to one version of which self-replicating RNA molecules preceded life as we know it. This is the RNA world hypothesis. Under this hypothesis, any model for the emergence of the genetic code is intimately related to a model of the transfer from ribozymes (RNA enzymes) to proteins as the principal enzymes in cells. In line with the RNA world hypothesis, transfer RNA molecules appear to have evolved before modern aminoacyl-tRNA synthetases, so the latter cannot be part of the explanation of its patterns.
A hypothetical randomly evolved genetic code further motivates a biochemical or evolutionary model for its origin. If amino acids were randomly assigned to triplet codons, there would be 1.5 × 10^84 possible genetic codes. This number is found by calculating the number of ways that 21 items (20 amino acids plus one stop) can be placed in 64 bins, wherein each item is used at least once. However, the distribution of codon assignments in the genetic code is nonrandom. In particular, the genetic code clusters certain amino acid assignments.
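That count is the number of surjections from 64 codons onto 21 meanings (20 amino acids plus stop), given by inclusion-exclusion as the sum over k of (-1)^k C(21, k) (21 - k)^64; a few lines of Python reproduce the order of magnitude:

```python
from math import comb

codons, meanings = 64, 21  # 20 amino acids plus one stop signal

# Surjections: assign a meaning to every codon so each meaning is used at least once.
surjections = sum((-1) ** k * comb(meanings, k) * (meanings - k) ** codons
                  for k in range(meanings + 1))

print(f"{float(surjections):.2e}")   # about 1.5e+84 possible genetic codes
```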
Amino acids that share the same biosynthetic pathway tend to have the same first base in their codons. This could be an evolutionary relic of an early, simpler genetic code with fewer amino acids that later evolved to code a larger set of amino acids. It could also reflect steric and chemical properties that had another effect on the codon during its evolution. Amino acids with similar physical properties also tend to have similar codons, reducing the problems caused by point mutations and mistranslations.
Given the non-random genetic triplet coding scheme, a tenable hypothesis for the origin of genetic code could address multiple aspects of the codon table, such as absence of codons for D-amino acids, secondary codon patterns for some amino acids, confinement of synonymous positions to third position, the small set of only 20 amino acids (instead of a number approaching 64), and the relation of stop codon patterns to amino acid coding patterns.
Three main hypotheses address the origin of the genetic code. Many models belong to one of them or to a hybrid:
Random freeze: the genetic code was randomly created. For example, early tRNA-like ribozymes may have had different affinities for amino acids, with codons emerging from another part of the ribozyme that exhibited random variability. Once enough peptides were coded for, any major random change in the genetic code would have been lethal; hence it became "frozen".
Stereochemical affinity: the genetic code is a result of a high affinity between each amino acid and its codon or anti-codon; the latter option implies that pre-tRNA molecules matched their corresponding amino acids by this affinity. Later during evolution, this matching was gradually replaced with matching by aminoacyl-tRNA synthetases.
Optimality: the genetic code continued to evolve after its initial creation, so that the current code maximizes some fitness function, usually some kind of error minimization.
Hypotheses have addressed a variety of scenarios:
Chemical principles govern specific RNA interaction with amino acids. Experiments with aptamers showed that some amino acids have a selective chemical affinity for their codons. Experiments showed that of 8 amino acids tested, 6 show some RNA triplet-amino acid association.
Biosynthetic expansion. The genetic code grew from a simpler earlier code through a process of "biosynthetic expansion". Primordial life "discovered" new amino acids (for example, as by-products of metabolism) and later incorporated some of these into the machinery of genetic coding. Although much circumstantial evidence has been found to suggest that fewer amino acid types were used in the past, precise and detailed hypotheses about which amino acids entered the code in what order are controversial. However, several studies have suggested that Gly, Ala, Asp, Val, Ser, Pro, Glu, Leu, Thr may belong to a group of early-addition amino acids, whereas Cys, Met, Tyr, Trp, His, Phe may belong to a group of later-addition amino acids.
Natural selection has led to codon assignments of the genetic code that minimize the effects of mutations. A recent hypothesis suggests that the triplet code was derived from codes that used longer than triplet codons (such as quadruplet codons). Longer than triplet decoding would increase codon redundancy and would be more error resistant. This feature could allow accurate decoding absent complex translational machinery such as the ribosome, such as before cells began making ribosomes.
Information channels: Information-theoretic approaches model the process of translating the genetic code into corresponding amino acids as an error-prone information channel. The inherent noise (that is, the error) in the channel poses the organism with a fundamental question: how can a genetic code be constructed to withstand noise while accurately and efficiently translating information? These "rate-distortion" models suggest that the genetic code originated as a result of the interplay of the three conflicting evolutionary forces: the needs for diverse amino acids, for error-tolerance and for minimal resource cost. The code emerges at a transition when the mapping of codons to amino acids becomes nonrandom. The code's emergence is governed by the topology defined by the probable errors and is related to the map coloring problem.
Game theory: Models based on signaling games combine elements of game theory, natural selection and information channels. Such models have been used to suggest that the first polypeptides were likely short and had non-enzymatic function. Game theoretic models suggested that the organization of RNA strings into cells may have been necessary to prevent "deceptive" use of the genetic code, i.e. preventing the ancient equivalent of viruses from overwhelming the RNA world.
Stop codons: Codons for translational stops are also an interesting aspect to the problem of the origin of the genetic code. As an example for addressing stop codon evolution, it has been suggested that the stop codons are such that they are most likely to terminate translation early in the case of a frame shift error. In contrast, some stereochemical molecular models explain the origin of stop codons as "unassignable".
| Biology and health sciences | Genetics and taxonomy | null |
12386 | https://en.wikipedia.org/wiki/Golden%20ratio | Golden ratio | In mathematics, two quantities are in the golden ratio if their ratio is the same as the ratio of their sum to the larger of the two quantities. Expressed algebraically, for quantities $a$ and $b$ with $a > b > 0$, $a$ is in a golden ratio to $b$ if
$$\frac{a+b}{a} = \frac{a}{b} = \varphi,$$
where the Greek letter phi ($\varphi$ or $\phi$) denotes the golden ratio. The constant $\varphi$ satisfies the quadratic equation $\varphi^2 = \varphi + 1$ and is an irrational number with a value of
$$\varphi = \frac{1+\sqrt{5}}{2} = 1.618033988749\ldots$$
The golden ratio was called the extreme and mean ratio by Euclid, and the divine proportion by Luca Pacioli; and also goes by other names.
Mathematicians have studied the golden ratio's properties since antiquity. It is the ratio of a regular pentagon's diagonal to its side and thus appears in the construction of the dodecahedron and icosahedron. A golden rectangle—that is, a rectangle with an aspect ratio of $\varphi$—may be cut into a square and a smaller rectangle with the same aspect ratio. The golden ratio has been used to analyze the proportions of natural objects and artificial systems such as financial markets, in some cases based on dubious fits to data. The golden ratio appears in some patterns in nature, including the spiral arrangement of leaves and other parts of vegetation.
Some 20th-century artists and architects, including Le Corbusier and Salvador Dalí, have proportioned their works to approximate the golden ratio, believing it to be aesthetically pleasing. These uses often appear in the form of a golden rectangle.
Calculation
Two quantities $a$ and $b$ are in the golden ratio if
$$\frac{a+b}{a} = \frac{a}{b} = \varphi.$$
Thus, if we want to find $\varphi$, we may use that the definition above holds for arbitrary $b$; thus, we just set $b = 1$, in which case $\varphi = a$ and we get the equation
$$\frac{\varphi + 1}{\varphi} = \varphi,$$
which becomes a quadratic equation after multiplying by $\varphi$:
$$\varphi + 1 = \varphi^2,$$
which can be rearranged to
$$\varphi^2 - \varphi - 1 = 0.$$
The quadratic formula yields two solutions:
$$\varphi = \frac{1 + \sqrt{5}}{2} = 1.618033\ldots \quad\text{and}\quad \frac{1 - \sqrt{5}}{2} = -0.618033\ldots$$
Because $\varphi$ is a ratio between positive quantities, $\varphi$ is necessarily the positive root. The negative root is in fact the negative inverse $-\frac{1}{\varphi}$, which shares many properties with the golden ratio.
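As a quick numeric sanity check of the algebra above, the positive root can be computed and the defining identities verified to floating-point precision (plain Python):

```python
import math

phi = (1 + math.sqrt(5)) / 2
print(phi)                                   # 1.618033988749895

assert math.isclose(phi * phi, phi + 1)      # phi^2 = phi + 1
assert math.isclose(1 / phi, phi - 1)        # 1/phi = phi - 1
a, b = phi, 1.0
assert math.isclose((a + b) / a, a / b)      # (a + b)/a = a/b
```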
History
According to Mario Livio,
Ancient Greek mathematicians first studied the golden ratio because of its frequent appearance in geometry; the division of a line into "extreme and mean ratio" (the golden section) is important in the geometry of regular pentagrams and pentagons. According to one story, 5th-century BC mathematician Hippasus discovered that the golden ratio was neither a whole number nor a fraction (it is irrational), surprising Pythagoreans. Euclid's Elements (c. 300 BC) provides several propositions and their proofs employing the golden ratio, and contains its first known definition, which proceeds as follows: a straight line is said to have been cut in extreme and mean ratio when, as the whole line is to the greater segment, so is the greater to the lesser.
The golden ratio was studied peripherally over the next millennium. Abu Kamil (c. 850–930) employed it in his geometric calculations of pentagons and decagons; his writings influenced those of Fibonacci (Leonardo of Pisa) (c. 1170–1250), who used the ratio in related geometry problems but did not observe that it was connected to the Fibonacci numbers.
Luca Pacioli named his book Divina proportione (1509) after the ratio; the book, largely plagiarized from Piero della Francesca, explored its properties including its appearance in some of the Platonic solids. Leonardo da Vinci, who illustrated Pacioli's book, called the ratio the sectio aurea ('golden section'). Though it is often said that Pacioli advocated the golden ratio's application to yield pleasing, harmonious proportions, Livio points out that the interpretation has been traced to an error in 1799, and that Pacioli actually advocated the Vitruvian system of rational proportions. Pacioli also saw Catholic religious significance in the ratio, which led to his work's title. 16th-century mathematicians such as Rafael Bombelli solved geometric problems using the ratio.
German mathematician Simon Jacob (d. 1564) noted that consecutive Fibonacci numbers converge to the golden ratio; this was rediscovered by Johannes Kepler in 1608. The first known decimal approximation of the (inverse) golden ratio was stated as "about 0.6180340" in 1597 by Michael Maestlin of the University of Tübingen in a letter to Kepler, his former student. The same year, Kepler wrote to Maestlin of the Kepler triangle, which combines the golden ratio with the Pythagorean theorem. Kepler said of these:
Eighteenth-century mathematicians Abraham de Moivre, Nicolaus I Bernoulli, and Leonhard Euler used a golden ratio-based formula which finds the value of a Fibonacci number based on its placement in the sequence; in 1843, this was rediscovered by Jacques Philippe Marie Binet, for whom it was named "Binet's formula". Martin Ohm first used the German term goldener Schnitt ('golden section') to describe the ratio in 1835. James Sully used the equivalent English term in 1875.
By 1910, inventor Mark Barr began using the Greek letter phi () as a symbol for the golden ratio. It has also been represented by tau (), the first letter of the ancient Greek τομή ('cut' or 'section').
The zome construction system, developed by Steve Baer in the late 1960s, is based on the symmetry system of the icosahedron/dodecahedron, and uses the golden ratio ubiquitously. Between 1973 and 1974, Roger Penrose developed Penrose tiling, a pattern related to the golden ratio both in the ratio of areas of its two rhombic tiles and in their relative frequency within the pattern. This gained in interest after Dan Shechtman's Nobel-winning 1982 discovery of quasicrystals with icosahedral symmetry, which were soon afterwards explained through analogies to the Penrose tiling.
Mathematics
Irrationality
The golden ratio is an irrational number. Below are two short proofs of irrationality:
Contradiction from an expression in lowest terms
This is a proof by infinite descent. Recall that: the whole is the longer part plus the shorter part, and the whole is to the longer part as the longer part is to the shorter part.
If we call the whole $n$ and the longer part $m$, then the second statement above becomes: $n$ is to $m$ as $m$ is to $n - m$.
To say that the golden ratio $\varphi$ is rational means that $\varphi$ is a fraction $n/m$ where $n$ and $m$ are integers. We may take $n/m$ to be in lowest terms and $n$ and $m$ to be positive. But if $n/m$ is in lowest terms, then the equally valued $m/(n - m)$ is in still lower terms. That is a contradiction that follows from the assumption that $\varphi$ is rational.
By irrationality of the square root of 5
Another short proof – perhaps more commonly known – of the irrationality of the golden ratio makes use of the closure of rational numbers under addition and multiplication. If $\varphi$ is assumed to be rational, then $2\varphi - 1 = \sqrt{5}$, the square root of $5$, must also be rational. This is a contradiction as the square roots of all non-square natural numbers are irrational.
Minimal polynomial
The golden ratio is also an algebraic number and even an algebraic integer. It has minimal polynomial
$$x^2 - x - 1.$$
This quadratic polynomial has two roots, $\varphi$ and $-\varphi^{-1}$.
The golden ratio is also closely related to the polynomial $x^2 + x - 1$, which has roots $-\varphi$ and $\varphi^{-1}$. As the root of a quadratic polynomial, the golden ratio is a constructible number.
Golden ratio conjugate and powers
The conjugate root to the minimal polynomial $x^2 - x - 1$ is
$$-\frac{1}{\varphi} = 1 - \varphi = \frac{1 - \sqrt{5}}{2} = -0.618033\ldots$$
The absolute value of this quantity ($0.618\ldots$) corresponds to the length ratio taken in reverse order (shorter segment length over longer segment length, $b/a$).
This illustrates the unique property of the golden ratio among positive numbers, that
$$\frac{1}{\varphi} = \varphi - 1,$$
or its inverse,
$$\frac{1}{1/\varphi} = \frac{1}{\varphi} + 1.$$
The conjugate and the defining quadratic polynomial relationship lead to decimal values that have their fractional part in common with $\varphi$:
$$\varphi = 1.618033\ldots, \qquad \frac{1}{\varphi} = \varphi - 1 = 0.618033\ldots, \qquad \varphi^2 = \varphi + 1 = 2.618033\ldots$$
The sequence of powers of $\varphi$ contains these values $0.618033\ldots$, $1.0$, $1.618033\ldots$, $2.618033\ldots$; more generally,
any power of $\varphi$ is equal to the sum of the two immediately preceding powers:
$$\varphi^n = \varphi^{n-1} + \varphi^{n-2} = \varphi \cdot F_n + F_{n-1}.$$
As a result, one can easily decompose any power of $\varphi$ into a multiple of $\varphi$ and a constant. The multiple and the constant are always adjacent Fibonacci numbers. This leads to another property of the positive powers of $\varphi$:
If $\lfloor n/2 - 1 \rfloor = m$, then:
$$\varphi^n = \varphi^{n-1} + \varphi^{n-3} + \cdots + \varphi^{n-1-2m} + \varphi^{n-2-2m}.$$
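A short numeric check (plain Python) illustrates the decomposition of powers of $\varphi$ into a Fibonacci multiple of $\varphi$ plus a Fibonacci constant, $\varphi^n = F_n\varphi + F_{n-1}$:

```python
import math

phi = (1 + math.sqrt(5)) / 2

def fib(n: int) -> int:
    """Return the n-th Fibonacci number with F(0) = 0 and F(1) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(2, 8):
    assert math.isclose(phi ** n, fib(n) * phi + fib(n - 1))
    print(f"phi^{n} = {fib(n)}*phi + {fib(n - 1)} = {fib(n) * phi + fib(n - 1):.6f}")
# phi^2 = 1*phi + 1, phi^3 = 2*phi + 1, phi^4 = 3*phi + 2, phi^5 = 5*phi + 3, ...
```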
Continued fraction and square root
The formula φ = 1 + 1/φ can be expanded recursively to obtain a simple continued fraction for the golden ratio: φ = [1; 1, 1, 1, ...] = 1 + 1/(1 + 1/(1 + ⋯)).
It is in fact the simplest form of a continued fraction, alongside its reciprocal form: φ⁻¹ = [0; 1, 1, 1, ...].
The convergents of these continued fractions, 1/1, 2/1, 3/2, 5/3, 8/5, 13/8, ..., or 1/1, 1/2, 2/3, 3/5, 5/8, 8/13, ..., are ratios of successive Fibonacci numbers. The consistently small terms in its continued fraction explain why the approximants converge so slowly. This makes the golden ratio an extreme case of the Hurwitz inequality for Diophantine approximations, which states that for every irrational ξ, there are infinitely many distinct fractions p/q such that |ξ − p/q| < 1/(√5·q²).
This means that the constant √5 cannot be improved without excluding the golden ratio. It is, in fact, the smallest number that must be excluded to generate closer approximations of such Lagrange numbers.
A continued square root form for φ can be obtained from φ² = 1 + φ, yielding φ = √(1 + √(1 + √(1 + ⋯))).
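Both expansions can be checked by simple truncation; the sketch below (an illustration, not taken from the source) evaluates the continued fraction and the nested radical to a fixed depth and compares them with φ:

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def phi_from_continued_fraction(depth: int) -> float:
    """Evaluate the truncated continued fraction 1 + 1/(1 + 1/(1 + ...))."""
    x = 1.0
    for _ in range(depth):
        x = 1.0 + 1.0 / x
    return x

def phi_from_nested_radical(depth: int) -> float:
    """Evaluate the truncated nested radical sqrt(1 + sqrt(1 + ...))."""
    x = 1.0
    for _ in range(depth):
        x = math.sqrt(1.0 + x)
    return x

for depth in (5, 10, 20, 40):
    print(depth,
          phi_from_continued_fraction(depth) - PHI,  # error shrinks by about 1/phi**2 per step
          phi_from_nested_radical(depth) - PHI)      # error shrinks by about 1/(2*phi) per step
```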
Relationship to Fibonacci and Lucas numbers
Fibonacci numbers and Lucas numbers have an intricate relationship with the golden ratio. In the Fibonacci sequence, each term Fₙ is equal to the sum of the preceding two terms Fₙ₋₁ and Fₙ₋₂, starting with the base sequence 0, 1 as the 0th and 1st terms F₀ = 0 and F₁ = 1: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
The sequence of Lucas numbers (not to be confused with the generalized Lucas sequences, of which this is part) is like the Fibonacci sequence, in that each term Lₙ is the sum of the previous two terms Lₙ₋₁ and Lₙ₋₂; however, it instead starts with 2 and 1 as the 0th and 1st terms L₀ = 2 and L₁ = 1: 2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123, ...
Exceptionally, the golden ratio is equal to the limit of the ratios of successive terms in the Fibonacci sequence and sequence of Lucas numbers: lim Fₙ₊₁/Fₙ = lim Lₙ₊₁/Lₙ = φ.
In other words, if a Fibonacci or Lucas number is divided by its immediate predecessor in the sequence, the quotient approximates φ. For example, F₁₆/F₁₅ = 987/610 ≈ 1.61803, and L₁₆/L₁₅ = 2207/1364 ≈ 1.61804.
These approximations are alternately lower and higher than φ, and converge to φ as the Fibonacci and Lucas numbers increase.
Closed-form expressions for the Fibonacci and Lucas sequences that involve the golden ratio are Fₙ = (φⁿ − (−φ)⁻ⁿ)/√5 and Lₙ = φⁿ + (−φ)⁻ⁿ.
Combining both formulas above, one obtains a formula for φⁿ that involves both Fibonacci and Lucas numbers: φⁿ = (Lₙ + Fₙ√5)/2.
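A minimal sketch of these closed forms in floating point, rounding to the nearest integer; PSI below denotes the conjugate root −1/φ and the function names are choices of the example, not notation from the source:

```python
import math

PHI = (1 + math.sqrt(5)) / 2
PSI = 1 - PHI  # conjugate root, equal to -1/PHI

def fibonacci(n: int) -> int:
    """Closed form F(n) = (phi**n - psi**n) / sqrt(5), rounded to the nearest integer."""
    return round((PHI ** n - PSI ** n) / math.sqrt(5))

def lucas(n: int) -> int:
    """Closed form L(n) = phi**n + psi**n, rounded to the nearest integer."""
    return round(PHI ** n + PSI ** n)

print([fibonacci(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print([lucas(n) for n in range(10)])      # [2, 1, 3, 4, 7, 11, 18, 29, 47, 76]

# The combined identity phi**n = (L(n) + F(n)*sqrt(5)) / 2, checked for n = 7:
assert abs(PHI ** 7 - (lucas(7) + fibonacci(7) * math.sqrt(5)) / 2) < 1e-9
```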
Between Fibonacci and Lucas numbers one can deduce L₂ₙ = 5Fₙ² + 2(−1)ⁿ = Lₙ² − 2(−1)ⁿ, which simplifies to express the limit of the quotient of Lucas numbers by Fibonacci numbers as equal to the square root of five: lim Lₙ/Fₙ = √5.
Indeed, much stronger statements are true:
These values describe φ as a fundamental unit of the algebraic number field Q(√5).
Successive powers of the golden ratio obey the Fibonacci recurrence, φⁿ⁺¹ = φⁿ + φⁿ⁻¹.
The reduction to a linear expression can be accomplished in one step by using φⁿ = Fₙφ + Fₙ₋₁.
This identity allows any polynomial in φ to be reduced to a linear expression, as in, for example, 3φ³ − 5φ² + 4 = 3(φ + 1)φ − 5(φ + 1) + 4 = φ + 2 ≈ 3.618.
Consecutive Fibonacci numbers can also be used to obtain a similar formula for the golden ratio, here by infinite summation: ∑ |Fₙφ − Fₙ₊₁| = φ, where the sum runs over n ≥ 1.
In particular, the powers of φ themselves round to Lucas numbers (in order, except for the first two powers, φ⁰ and φ¹, which are in reverse order): φ⁰ = 1, φ¹ ≈ 1.618, φ² ≈ 2.618, φ³ ≈ 4.236, φ⁴ ≈ 6.854, φ⁵ ≈ 11.090,
and so forth. The Lucas numbers also directly generate powers of the golden ratio; for n ≥ 2, φⁿ = Lₙ − (−φ)⁻ⁿ.
Rooted in their interconnecting relationship with the golden ratio is the notion that the sum of the Fibonacci numbers immediately before and after Fₙ equals the Lucas number Lₙ, that is Lₙ = Fₙ₋₁ + Fₙ₊₁; and, importantly, that Fₙ₊₁ = (Fₙ + Lₙ)/2.
Both the Fibonacci sequence and the sequence of Lucas numbers can be used to generate approximate forms of the golden spiral (which is a special form of a logarithmic spiral) using quarter-circles with radii from these sequences, differing only slightly from the true golden logarithmic spiral. Fibonacci spiral is generally the term used for spirals that approximate golden spirals using Fibonacci number-sequenced squares and quarter-circles.
Geometry
The golden ratio features prominently in geometry. For example, it is intrinsically involved in the internal symmetry of the pentagon, and extends to form part of the coordinates of the vertices of a regular dodecahedron, as well as those of a regular icosahedron. It features in the Kepler triangle and Penrose tilings too, as well as in various other polytopes.
Construction
Dividing by interior division
Having a line segment AB, construct a perpendicular BC at point B, with BC half the length of AB. Draw the hypotenuse AC.
Draw an arc with center C and radius CB. This arc intersects the hypotenuse AC at point D.
Draw an arc with center A and radius AD. This arc intersects the original line segment AB at point S. Point S divides the original line segment AB into line segments AS and SB with lengths in the golden ratio.
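The interior-division construction can be checked numerically with coordinates; the point labels below follow the description above and are otherwise arbitrary:

```python
import math

# Place A at the origin and B at (1, 0); C is the perpendicular at B with |BC| = |AB|/2.
A, B, C = (0.0, 0.0), (1.0, 0.0), (1.0, 0.5)

AC = math.dist(A, C)       # hypotenuse, length sqrt(5)/2
AD = AC - math.dist(B, C)  # the arc about C with radius CB cuts AC at D
AS = AD                    # the arc about A with radius AD cuts AB at S
SB = math.dist(A, B) - AS

print(AS / SB)                 # ~1.618..., the ratio of the two segments
print((1 + math.sqrt(5)) / 2)  # phi, for comparison
```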
Dividing by exterior division
Draw a line segment AS and construct off point S a segment SC perpendicular to AS and with the same length as AS.
Bisect the line segment AS with point M.
A circular arc around M with radius MC intersects the straight line through points A and S (the extension of AS) at point B. The ratio of AS to the constructed segment SB is the golden ratio.
Application examples can be seen in the articles Pentagon with a given side length, Decagon with given circumcircle and Decagon with a given side length.
Both of the above algorithms produce geometric constructions that determine two aligned line segments where the ratio of the longer one to the shorter one is the golden ratio.
Golden angle
When two angles that make a full circle have measures in the golden ratio, the smaller is called the golden angle, with measure g = 360°/φ² ≈ 137.508°.
This angle occurs in patterns of plant growth as the optimal spacing of leaf shoots around plant stems so that successive leaves do not block sunlight from the leaves below them.
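The golden angle itself is straightforward to compute; the values below are derived from the definition rather than quoted from a source:

```python
import math

phi = (1 + math.sqrt(5)) / 2
golden_angle = 360 / phi ** 2  # the smaller of the two arcs, in degrees

print(golden_angle)        # ~137.5077640500378 degrees
print(360 - golden_angle)  # the larger arc, ~222.49 degrees
```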
Pentagonal symmetry system
Pentagon and pentagram
In a regular pentagon the ratio of a diagonal to a side is the golden ratio, while intersecting diagonals section each other in the golden ratio. The golden ratio properties of a regular pentagon can be confirmed by applying Ptolemy's theorem to the quadrilateral formed by removing one of its vertices. If the quadrilateral's long edge and diagonals are a, and its short edges are b, then Ptolemy's theorem gives a² = b² + ab. Dividing both sides by ab yields (see above) a/b = (a + b)/a = φ.
The diagonal segments of a pentagon form a pentagram, or five-pointed star polygon, whose geometry is quintessentially described by . Primarily, each intersection of edges sections other edges in the golden ratio. The ratio of the length of the shorter segment to the segment bounded by the two intersecting edges (that is, a side of the inverted pentagon in the pentagram's center) is , as the four-color illustration shows.
Pentagonal and pentagrammic geometry permits us to calculate the following values for φ:
Golden triangle and golden gnomon
The triangle formed by two diagonals and a side of a regular pentagon is called a golden triangle or sublime triangle. It is an acute isosceles triangle with apex angle 36° and base angles 72°. Its two equal sides are in the golden ratio to its base. The triangle formed by two sides and a diagonal of a regular pentagon is called a golden gnomon. It is an obtuse isosceles triangle with apex angle 108° and base angle 36°. Its base is in the golden ratio to its two equal sides. The pentagon can thus be subdivided into two golden gnomons and a central golden triangle. The five points of a regular pentagram are golden triangles, as are the ten triangles formed by connecting the vertices of a regular decagon to its center point.
Bisecting one of the base angles of the golden triangle subdivides it into a smaller golden triangle and a golden gnomon. Analogously, any acute isosceles triangle can be subdivided into a similar triangle and an obtuse isosceles triangle, but the golden triangle is the only one for which this subdivision is made by the angle bisector, because it is the only isosceles triangle whose base angle is twice its apex angle. The angle bisector of the golden triangle subdivides the side that it meets in the golden ratio, and the areas of the two subdivided pieces are also in the golden ratio.
If the apex angle of the golden gnomon is trisected, the trisector again subdivides it into a smaller golden gnomon and a golden triangle. The trisector subdivides the base in the golden ratio, and the two pieces have areas in the golden ratio. Analogously, any obtuse triangle can be subdivided into a similar triangle and an acute isosceles triangle, but the golden gnomon is the only one for which this subdivision is made by the angle trisector, because it is the only isosceles triangle whose apex angle is three times its base angle.
Penrose tilings
The golden ratio appears prominently in the Penrose tiling, a family of aperiodic tilings of the plane developed by Roger Penrose, inspired by Johannes Kepler's remark that pentagrams, decagons, and other shapes could fill gaps that pentagonal shapes alone leave when tiled together. Several variations of this tiling have been studied, all of whose prototiles exhibit the golden ratio:
Penrose's original version of this tiling used four shapes: regular pentagons and pentagrams, "boat" figures with three points of a pentagram, and "diamond" shaped rhombi.
The kite and dart Penrose tiling uses kites with three interior angles of 72° and one interior angle of 144°, and darts, concave quadrilaterals with two interior angles of 36°, one of 72°, and one non-convex angle of 216°. Special matching rules restrict how the tiles can meet at any edge, resulting in seven combinations of tiles at any vertex. Both the kites and darts have sides of two lengths, in the golden ratio to each other. The areas of these two tile shapes are also in the golden ratio to each other.
The kite and dart can each be cut on their symmetry axes into a pair of golden triangles and golden gnomons, respectively. With suitable matching rules, these triangles, called in this context Robinson triangles, can be used as the prototiles for a form of the Penrose tiling.
The rhombic Penrose tiling contains two types of rhombus, a thin rhombus with angles of 36° and 144°, and a thick rhombus with angles of 72° and 108°. All side lengths are equal, but the side and the short diagonal of the thin rhombus are in the golden ratio, as are the long diagonal and the side of the thick rhombus. As with the kite and dart tiling, the areas of the two rhombi are in the golden ratio to each other. Again, these rhombi can be decomposed into pairs of Robinson triangles.
In triangles and quadrilaterals
Odom's construction
George Odom found a construction for involving an equilateral triangle: if the line segment joining the midpoints of two sides is extended to intersect the circumcircle, then the two midpoints and the point of intersection with the circle are in golden proportion.
Kepler triangle
The Kepler triangle, named after Johannes Kepler, is the unique right triangle with sides in geometric progression 1 : √φ : φ.
These side lengths are the three Pythagorean means of the two numbers φ ± 1. The three squares on its sides have areas in the golden geometric progression 1 : φ : φ².
Among isosceles triangles, the ratio of inradius to side length is maximized for the triangle formed by two reflected copies of the Kepler triangle, sharing the longer of their two legs. The same isosceles triangle maximizes the ratio of the radius of a semicircle on its base to its perimeter.
For a Kepler triangle with smallest side length s, the area is A = (s²/2)√φ, and its acute internal angles are arctan(√φ) ≈ 51.83° and arctan(1/√φ) ≈ 38.17°.
Golden rectangle
The golden ratio proportions the adjacent side lengths of a golden rectangle in 1 : φ ratio. Stacking golden rectangles produces golden rectangles anew, and removing or adding squares from golden rectangles leaves rectangles still proportioned in φ ratio. They can be generated by golden spirals, through successive Fibonacci and Lucas number-sized squares and quarter circles. They feature prominently in the icosahedron as well as in the dodecahedron (see section below for more detail).
Golden rhombus
A golden rhombus is a rhombus whose diagonals are in proportion to the golden ratio, most commonly 1 : φ. For a rhombus of such proportions, its acute and obtuse angles are α = 2 arctan(1/φ) ≈ 63.43° and β = 2 arctan(φ) ≈ 116.57°.
The lengths of its short and long diagonals d and D, in terms of side length a, are d = 2a/√(2 + φ) ≈ 1.05146a and D = 2aφ/√(2 + φ) ≈ 1.70130a.
Its area, in terms of a and d, is A = (φ/2)d² = (2/√5)a² ≈ 0.89443a².
Its inradius, in terms of side a, is r = a/√5.
Golden rhombi form the faces of the rhombic triacontahedron, the two golden rhombohedra, the Bilinski dodecahedron, and the rhombic hexecontahedron.
Golden spiral
Logarithmic spirals are self-similar spirals where distances covered per turn are in geometric progression. A logarithmic spiral whose radius increases by a factor of the golden ratio for each quarter-turn is called the golden spiral. These spirals can be approximated by quarter-circles that grow by the golden ratio, or their approximations generated from Fibonacci numbers, often depicted inscribed within a spiraling pattern of squares growing in the same ratio. The exact logarithmic spiral form of the golden spiral can be described by the polar equation r(θ) = φ^(2θ/π).
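A short sketch of this polar form, confirming that the radius grows by a factor of φ for each quarter-turn; the unit scale factor is an assumption of the example:

```python
import math

phi = (1 + math.sqrt(5)) / 2

def golden_spiral_radius(theta: float) -> float:
    """Radius of the golden spiral r = phi**(2*theta/pi) at angle theta in radians."""
    return phi ** (2 * theta / math.pi)

for k in range(5):
    theta = k * math.pi / 2                # successive quarter-turns
    print(k, golden_spiral_radius(theta))  # 1, phi, phi**2, phi**3, phi**4
```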
Not all logarithmic spirals are connected to the golden ratio, and not all spirals that are connected to the golden ratio are the same shape as the golden spiral. For instance, a different logarithmic spiral, encasing a nested sequence of golden isosceles triangles, grows by the golden ratio for each 108° that it turns, instead of the 90° turning angle of the golden spiral. Another variation, called the "better golden spiral", grows by the golden ratio for each half-turn, rather than each quarter-turn.
Dodecahedron and icosahedron
The regular dodecahedron and its dual polyhedron the icosahedron are Platonic solids whose dimensions are related to the golden ratio. A dodecahedron has 12 regular pentagonal faces, whereas an icosahedron has 20 equilateral triangles; both have 30 edges.
For a dodecahedron of side a, the radii of the circumscribed and inscribed spheres, and the midradius, are:
While for an icosahedron of side a, the radii of the circumscribed and inscribed spheres, and the midradius, are:
The volume and surface area of the dodecahedron can be expressed in terms of :
As well as for the icosahedron:
These geometric values can be calculated from their Cartesian coordinates, which also can be given using formulas involving . The coordinates of the dodecahedron are displayed on the figure to the right, while those of the icosahedron are:
Sets of three golden rectangles intersect perpendicularly inside dodecahedra and icosahedra, forming Borromean rings. In dodecahedra, pairs of opposing vertices in golden rectangles meet the centers of pentagonal faces, and in icosahedra, they meet at its vertices. The three golden rectangles together contain all vertices of the icosahedron, or equivalently, intersect the centers of all of the dodecahedron's faces.
A cube can be inscribed in a regular dodecahedron, with some of the diagonals of the pentagonal faces of the dodecahedron serving as the cube's edges; therefore, the edge lengths are in the golden ratio. The cube's volume is 2/(2 + φ) times that of the dodecahedron's. In fact, golden rectangles inside a dodecahedron are in golden proportions to an inscribed cube, such that edges of a cube and the long edges of a golden rectangle are themselves in golden ratio. On the other hand, the octahedron, which is the dual polyhedron of the cube, can inscribe an icosahedron, such that an icosahedron's vertices touch the edges of an octahedron at points that divide its edges in golden ratio.
Other properties
The golden ratio's decimal expansion can be calculated via root-finding methods, such as Newton's method or Halley's method, on the equation x² − x − 1 = 0 or on x² − 5 = 0 (to compute √5 first). The time needed to compute n digits of the golden ratio using Newton's method is essentially O(M(n)), where M(n) is the time complexity of multiplying two n-digit numbers. This is considerably faster than known algorithms for π and e. An easily programmed alternative using only integer arithmetic is to calculate two large consecutive Fibonacci numbers and divide them. The ratio of the Fibonacci numbers F₂₅₀₀₁ and F₂₅₀₀₀, each over 5000 digits, yields over 10,000 significant digits of the golden ratio. The decimal expansion of the golden ratio has been calculated to an accuracy of ten trillion (10¹³) digits.
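Both approaches mentioned here can be sketched with Python's arbitrary-precision integers and the decimal module; the precision and iteration counts below are arbitrary choices made for illustration:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # carry 60 significant digits

# Newton's method on f(x) = x**2 - x - 1, starting from x = 2.
x = Decimal(2)
for _ in range(10):
    x = x - (x * x - x - 1) / (2 * x - 1)

# Integer-only alternative: divide two large consecutive Fibonacci numbers.
a, b = 0, 1
for _ in range(300):
    a, b = b, a + b
fib_ratio = Decimal(b) / Decimal(a)

print(x)          # 1.6180339887498948482045868343656381177...
print(fib_ratio)  # agrees with the Newton iterate to the displayed precision
```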
In the complex plane, the fifth roots of unity z = e^(2πik/5) (for an integer k) satisfying z⁵ = 1 are the vertices of a pentagon. They do not form a ring of quadratic integers, however the sum of any fifth root of unity and its complex conjugate, z + z̄, is a quadratic integer, an element of Z[φ]. Specifically, e⁰ + e⁰ = 2, e^(2πi/5) + e^(−2πi/5) = φ − 1 = 1/φ, and e^(4πi/5) + e^(−4πi/5) = −φ.
This also holds for the remaining tenth roots of unity satisfying z¹⁰ = 1: e^(πi) + e^(−πi) = −2, e^(πi/5) + e^(−πi/5) = φ, and e^(3πi/5) + e^(−3πi/5) = 1 − φ = −1/φ.
For the gamma function Γ, the only solutions to the equation Γ(z − 1) = Γ(z + 1) are z = φ and z = −1/φ.
When the golden ratio is used as the base of a numeral system (see golden ratio base, sometimes dubbed phinary or φ-nary), quadratic integers in the ring Z[φ] – that is, numbers of the form a + bφ for integers a and b – have terminating representations, but rational fractions have non-terminating representations.
The golden ratio also appears in hyperbolic geometry, as the maximum distance from a point on one side of an ideal triangle to the closer of the other two sides: this distance, the side length of the equilateral triangle formed by the points of tangency of a circle inscribed within the ideal triangle, is .
The golden ratio appears in the theory of modular functions as well. For let
Then
and
where and in the continued fraction should be evaluated as . The function is invariant under , a congruence subgroup of the modular group. Also for positive real numbers and such that
is a Pisot–Vijayaraghavan number.
Applications and observations
Architecture
The Swiss architect Le Corbusier, famous for his contributions to the modern international style, centered his design philosophy on systems of harmony and proportion. Le Corbusier's faith in the mathematical order of the universe was closely bound to the golden ratio and the Fibonacci series, which he described as "rhythms apparent to the eye and clear in their relations with one another. And these rhythms are at the very root of human activities. They resound in man by an organic inevitability, the same fine inevitability which causes the tracing out of the Golden Section by children, old men, savages and the learned."
Le Corbusier explicitly used the golden ratio in his Modulor system for the scale of architectural proportion. He saw this system as a continuation of the long tradition of Vitruvius, Leonardo da Vinci's "Vitruvian Man", the work of Leon Battista Alberti, and others who used the proportions of the human body to improve the appearance and function of architecture.
In addition to the golden ratio, Le Corbusier based the system on human measurements, Fibonacci numbers, and the double unit. He took suggestion of the golden ratio in human proportions to an extreme: he sectioned his model human body's height at the navel with the two sections in golden ratio, then subdivided those sections in golden ratio at the knees and throat; he used these golden ratio proportions in the Modulor system. Le Corbusier's 1927 Villa Stein in Garches exemplified the Modulor system's application. The villa's rectangular ground plan, elevation, and inner structure closely approximate golden rectangles.
Another Swiss architect, Mario Botta, bases many of his designs on geometric figures. Several private houses he designed in Switzerland are composed of squares and circles, cubes and cylinders. In a house he designed in Origlio, the golden ratio is the proportion between the central section and the side sections of the house.
Art
Leonardo da Vinci's illustrations of polyhedra in Pacioli's Divina proportione have led some to speculate that he incorporated the golden ratio in his paintings. But the suggestion that his Mona Lisa, for example, employs golden ratio proportions, is not supported by Leonardo's own writings. Similarly, although Leonardo's Vitruvian Man is often shown in connection with the golden ratio, the proportions of the figure do not actually match it, and the text only mentions whole number ratios.
Salvador Dalí, influenced by the works of Matila Ghyka, explicitly used the golden ratio in his masterpiece, The Sacrament of the Last Supper. The dimensions of the canvas are a golden rectangle. A huge dodecahedron, in perspective so that edges appear in golden ratio to one another, is suspended above and behind Jesus and dominates the composition.
A statistical study on 565 works of art of different great painters, performed in 1999, found that these artists had not used the golden ratio in the size of their canvases. The study concluded that the average ratio of the two sides of the paintings studied is , with averages for individual artists ranging from (Goya) to (Bellini). On the other hand, Pablo Tosto listed over 350 works by well-known artists, including more than 100 which have canvasses with golden rectangle and proportions, and others with proportions like , , , and .
Books and design
According to Jan Tschichold,
There was a time when deviations from the truly beautiful page proportions , , and the Golden Section were rare. Many books produced between 1550 and 1770 show these proportions exactly, to within half a millimeter.
According to some sources, the golden ratio is used in everyday design, for example in the proportions of playing cards, postcards, posters, light switch plates, and widescreen televisions.
Flags
The aspect ratio (width to height ratio) of the flag of Togo was intended to be the golden ratio, according to its designer.
Music
Ernő Lendvai analyzes Béla Bartók's works as being based on two opposing systems, that of the golden ratio and the acoustic scale, though other music scholars reject that analysis. French composer Erik Satie used the golden ratio in several of his pieces, including Sonneries de la Rose+Croix. The golden ratio is also apparent in the organization of the sections in the music of Debussy's Reflets dans l'eau (Reflections in water), from Images (1st series, 1905), in which "the sequence of keys is marked out by the intervals and and the main climax sits at the phi position".
The musicologist Roy Howat has observed that the formal boundaries of Debussy's La Mer correspond exactly to the golden section. Trezise finds the intrinsic evidence "remarkable", but cautions that no written or reported evidence suggests that Debussy consciously sought such proportions.
Music theorists including Hans Zender and Heinz Bohlen have experimented with the 833 cents scale, a musical scale based on using the golden ratio as its fundamental musical interval. When measured in cents, a logarithmic scale for musical intervals, the golden ratio is approximately 833.09 cents.
Nature
Johannes Kepler wrote that "the image of man and woman stems from the divine proportion. In my opinion, the propagation of plants and the progenitive acts of animals are in the same ratio".
The psychologist Adolf Zeising noted that the golden ratio appeared in phyllotaxis and argued from these patterns in nature that the golden ratio was a universal law. Zeising wrote in 1854 of a universal orthogenetic law of "striving for beauty and completeness in the realms of both nature and art".
However, some have argued that many apparent manifestations of the golden ratio in nature, especially in regard to animal dimensions, are fictitious.
Physics
The quasi-one-dimensional Ising ferromagnet CoNb2O6 (cobalt niobate) has predicted excitation states (with E₈ symmetry) whose two lowest modes, when probed with neutron scattering, were found to be in the golden ratio. Specifically, these quantum phase transitions during spin excitation, which occur at near absolute zero temperature, showed pairs of kinks in the ordered phase relative to spin-flips in the paramagnetic phase, revealing, just below the critical field, spin dynamics with sharp modes at low energies approaching the golden mean.
Optimization
There is no known general algorithm to arrange a given number of nodes evenly on a sphere, for any of several definitions of even distribution (see, for example, Thomson problem or Tammes problem). However, a useful approximation results from dividing the sphere into parallel bands of equal surface area and placing one node in each band at longitudes spaced by a golden section of the circle, i.e. 360°/φ ≈ 222.5°. This method was used to arrange the mirrors of the student-participatory satellite Starshine-3.
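A common way to implement this idea is the so-called Fibonacci (or golden-spiral) lattice: points are spaced evenly in height, and successive points are rotated by the golden angle in longitude. The sketch below is one standard formulation of that lattice, not necessarily the exact scheme used for Starshine-3:

```python
import math

def fibonacci_sphere(n: int) -> list:
    """Place n points roughly evenly on a unit sphere using the golden angle."""
    golden_angle = math.pi * (3 - math.sqrt(5))  # ~2.39996 rad, ~137.5 degrees
    points = []
    for i in range(n):
        z = 1 - (2 * i + 1) / n     # evenly spaced heights, one per band
        r = math.sqrt(1 - z * z)    # radius of the horizontal circle at height z
        theta = golden_angle * i    # longitude advances by the golden angle each step
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

for point in fibonacci_sphere(8):
    print(point)
```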
The golden ratio is a critical element to golden-section search as well.
Disputed observations
Examples of disputed observations of the golden ratio include the following:
Specific proportions in the bodies of vertebrates (including humans) are often claimed to be in the golden ratio; for example the ratio of successive phalangeal and metacarpal bones (finger bones) has been said to approximate the golden ratio. There is a large variation in the real measures of these elements in specific individuals, however, and the proportion in question is often significantly different from the golden ratio.
The shells of mollusks such as the nautilus are often claimed to be in the golden ratio. The growth of nautilus shells follows a logarithmic spiral, and it is sometimes erroneously claimed that any logarithmic spiral is related to the golden ratio, or sometimes claimed that each new chamber is golden-proportioned relative to the previous one. However, measurements of nautilus shells do not support this claim.
Historian John Man states that both the pages and text area of the Gutenberg Bible were "based on the golden section shape". However, according to his own measurements, the ratio of height to width of the pages is .
Studies by psychologists, starting with Gustav Fechner, have been devised to test the idea that the golden ratio plays a role in human perception of beauty. While Fechner found a preference for rectangle ratios centered on the golden ratio, later attempts to carefully test such a hypothesis have been, at best, inconclusive.
In investing, some practitioners of technical analysis use the golden ratio to indicate support of a price level, or resistance to price increases, of a stock or commodity; after significant price changes up or down, new support and resistance levels are supposedly found at or near prices related to the starting price via the golden ratio. The use of the golden ratio in investing is also related to more complicated patterns described by Fibonacci numbers (e.g. Elliott wave principle and Fibonacci retracement). However, other market analysts have published analyses suggesting that these percentages and patterns are not supported by the data.
Egyptian pyramids
The Great Pyramid of Giza (also known as the Pyramid of Cheops or Khufu) has been analyzed by pyramidologists as having a doubled Kepler triangle as its cross-section. If this theory were true, the golden ratio would describe the ratio of distances from the midpoint of one of the sides of the pyramid to its apex, and from the same midpoint to the center of the pyramid's base. However, imprecision in measurement caused in part by the removal of the outer surface of the pyramid makes it impossible to distinguish this theory from other numerical theories of the proportions of the pyramid, based on pi or on whole-number ratios. The consensus of modern scholars is that this pyramid's proportions are not based on the golden ratio, because such a basis would be inconsistent both with what is known about Egyptian mathematics from the time of construction of the pyramid, and with Egyptian theories of architecture and proportion used in their other works.
The Parthenon
The Parthenon's façade (c. 432 BC) as well as elements of its façade and elsewhere are said by some to be circumscribed by golden rectangles. Other scholars deny that the Greeks had any aesthetic association with golden ratio. For example, Keith Devlin says, "Certainly, the oft repeated assertion that the Parthenon in Athens is based on the golden ratio is not supported by actual measurements. In fact, the entire story about the Greeks and golden ratio seems to be without foundation." Midhat J. Gazalé affirms that "It was not until Euclid ... that the golden ratio's mathematical properties were studied."
From measurements of 15 temples, 18 monumental tombs, 8 sarcophagi, and 58 grave stelae from the fifth century BC to the second century AD, one researcher concluded that the golden ratio was totally absent from Greek architecture of the classical fifth century BC, and almost absent during the following six centuries.
Later sources like Vitruvius (first century BC) exclusively discuss proportions that can be expressed in whole numbers, i.e. commensurate as opposed to irrational proportions.
Modern art
The Section d'Or ('Golden Section') was a collective of painters, sculptors, poets and critics associated with Cubism and Orphism. Active from 1911 to around 1914, they adopted the name both to highlight that Cubism represented the continuation of a grand tradition, rather than being an isolated movement, and in homage to the mathematical harmony associated with Georges Seurat. (Several authors have claimed that Seurat employed the golden ratio in his paintings, but Seurat's writings and paintings suggest that he employed simple whole-number ratios and any approximation of the golden ratio was coincidental.) The Cubists observed in its harmonies, geometric structuring of motion and form, "the primacy of idea over nature", "an absolute scientific clarity of conception". However, despite this general interest in mathematical harmony, whether the paintings featured in the celebrated 1912 Salon de la Section d'Or exhibition used the golden ratio in any compositions is more difficult to determine. Livio, for example, claims that they did not, and Marcel Duchamp said as much in an interview. On the other hand, an analysis suggests that Juan Gris made use of the golden ratio in composing works that were likely, but not definitively, shown at the exhibition. Art historian Daniel Robbins has argued that in addition to referencing the mathematical term, the exhibition's name also refers to the earlier Bandeaux d'Or group, with which Albert Gleizes and other former members of the Abbaye de Créteil had been involved.
Piet Mondrian has been said to have used the golden section extensively in his geometrical paintings, though other experts (including critic Yve-Alain Bois) have discredited these claims.
| Mathematics | Basics | null |
12388 | https://en.wikipedia.org/wiki/Genome | Genome | A genome is all the genetic information of an organism. It consists of nucleotide sequences of DNA (or RNA in RNA viruses). The nuclear genome includes protein-coding genes and non-coding genes, other functional regions of the genome such as regulatory sequences (see non-coding DNA), and often a substantial fraction of junk DNA with no evident function. Almost all eukaryotes have mitochondria and a small mitochondrial genome. Algae and plants also contain chloroplasts with a chloroplast genome.
The study of the genome is called genomics. The genomes of many organisms have been sequenced and various regions have been annotated. The first genome to be sequenced was that of the virus φX174 in 1977; the first genome sequence of a prokaryote (Haemophilus influenzae) was published in 1995; the yeast (Saccharomyces cerevisiae) genome was the first eukaryotic genome to be sequenced in 1996. The Human Genome Project was started in October 1990, and the first draft sequences of the human genome were reported in February 2001.
Origin of the term
The term genome was created in 1920 by Hans Winkler, professor of botany at the University of Hamburg, Germany. The website Oxford Dictionaries and the Online Etymology Dictionary suggest the name is a blend of the words gene and chromosome. However, see omics for a more thorough discussion. A few related -ome words already existed, such as biome and rhizome, forming a vocabulary into which genome fits systematically.
Definition
The term "genome" usually refers to the DNA (or sometimes RNA) molecules that carry the genetic information in an organism, but sometimes it is uncertain which molecules to include; for example, bacteria usually have one or two large DNA molecules (chromosomes) that contain all of the essential genetic material but they also contain smaller extrachromosomal plasmid molecules that carry important genetic information. In the scientific literature, the term 'genome' usually refers to the large chromosomal DNA molecules in bacteria.
Nuclear genome
Eukaryotic genomes are even more difficult to define because almost all eukaryotic species contain nuclear chromosomes plus extra DNA molecules in the mitochondria. In addition, algae and plants have chloroplast DNA. Most textbooks make a distinction between the nuclear genome and the organelle (mitochondria and chloroplast) genomes so when they speak of, say, the human genome, they are only referring to the genetic material in the nucleus. This is the most common use of 'genome' in the scientific literature.
Ploidy
Most eukaryotes are diploid, meaning that there are two of each chromosome in the nucleus but the 'genome' refers to only one copy of each chromosome. Some eukaryotes have distinctive sex chromosomes, such as the X and Y chromosomes of mammals, so the technical definition of the genome must include both copies of the sex chromosomes. For example, the standard reference genome of humans consists of one copy of each of the 22 autosomes plus one X chromosome and one Y chromosome.
Sequencing and mapping
A genome sequence is the complete list of the nucleotides (A, C, G, and T for DNA genomes) that make up all the chromosomes of an individual or a species. Within a species, the vast majority of nucleotides are identical between individuals, but sequencing multiple individuals is necessary to understand the genetic diversity.
In 1976, Walter Fiers at the University of Ghent (Belgium) was the first to establish the complete nucleotide sequence of a viral RNA-genome (Bacteriophage MS2). The next year, Fred Sanger completed the first DNA-genome sequence: phage φX174, of 5386 base pairs. The first bacterial genome to be sequenced was that of Haemophilus influenzae, completed by a team at The Institute for Genomic Research in 1995. A few months later, the first eukaryotic genome was completed, with sequences of the 16 chromosomes of budding yeast Saccharomyces cerevisiae published as the result of a European-led effort begun in the mid-1980s. The first genome sequence for an archaeon, Methanococcus jannaschii, was completed in 1996, again by The Institute for Genomic Research.
The development of new technologies has made genome sequencing dramatically cheaper and easier, and the number of complete genome sequences is growing rapidly. The US National Institutes of Health maintains one of several comprehensive databases of genomic information. Among the thousands of completed genome sequencing projects include those for rice, a mouse, the plant Arabidopsis thaliana, the puffer fish, and the bacteria E. coli. In December 2013, scientists first sequenced the entire genome of a Neanderthal, an extinct species of humans. The genome was extracted from the toe bone of a 130,000-year-old Neanderthal found in a Siberian cave.
Viral genomes
Viral genomes can be composed of either RNA or DNA. The genomes of RNA viruses can be either single-stranded RNA or double-stranded RNA, and may contain one or more separate RNA molecules (segments: monopartite or multipartite genome). DNA viruses can have either single-stranded or double-stranded genomes. Most DNA virus genomes are composed of a single, linear molecule of DNA, but some are made up of a circular DNA molecule.
Prokaryotic genomes
Prokaryotes and eukaryotes have DNA genomes. Archaea and most bacteria have a single circular chromosome, however, some bacterial species have linear or multiple chromosomes. If the DNA is replicated faster than the bacterial cells divide, multiple copies of the chromosome can be present in a single cell, and if the cells divide faster than the DNA can be replicated, multiple replication of the chromosome is initiated before the division occurs, allowing daughter cells to inherit complete genomes and already partially replicated chromosomes. Most prokaryotes have very little repetitive DNA in their genomes. However, some symbiotic bacteria (e.g. Serratia symbiotica) have reduced genomes and a high fraction of pseudogenes: only ~40% of their DNA encodes proteins.
Some bacteria have auxiliary genetic material, also part of their genome, which is carried in plasmids. For this, the word genome should not be used as a synonym of chromosome.
Eukaryotic genomes
Eukaryotic genomes are composed of one or more linear DNA chromosomes. The number of chromosomes varies widely, from Jack jumper ants and an asexual nematode, which each have only one pair, to a fern species that has 720 pairs. Eukaryotic genomes contain a surprisingly large amount of DNA compared to other genomes, far more than is necessary for protein-coding and noncoding genes, and they show as much as 64,000-fold variation in size. This variation is largely caused by the presence of repetitive DNA and transposable elements (TEs).
A typical human cell has two copies of each of 22 autosomes, one inherited from each parent, plus two sex chromosomes, making it diploid. Gametes, such as ova, sperm, spores, and pollen, are haploid, meaning they carry only one copy of each chromosome. In addition to the chromosomes in the nucleus, organelles such as the chloroplasts and mitochondria have their own DNA. Mitochondria are sometimes said to have their own genome often referred to as the "mitochondrial genome". The DNA found within the chloroplast may be referred to as the "plastome". Like the bacteria they originated from, mitochondria and chloroplasts have a circular chromosome.
Unlike prokaryotes where exon-intron organization of protein coding genes exists but is rather exceptional, eukaryotes generally have these features in their genes and their genomes contain variable amounts of repetitive DNA. In mammals and plants, the majority of the genome is composed of repetitive DNA.
DNA sequencing
High-throughput technology makes sequencing to assemble new genomes accessible to everyone. Sequence polymorphisms are typically discovered by comparing resequenced isolates to a reference, whereas analyses of coverage depth and mapping topology can provide details regarding structural variations such as chromosomal translocations and segmental duplications.
Coding sequences
DNA sequences that carry the instructions to make proteins are referred to as coding sequences. The proportion of the genome occupied by coding sequences varies widely. A larger genome does not necessarily contain more genes, and the proportion of non-repetitive DNA decreases along with increasing genome size in complex eukaryotes.
Noncoding sequences
Noncoding sequences include introns, sequences for non-coding RNAs, regulatory regions, and repetitive DNA. Noncoding sequences make up 98% of the human genome. There are two categories of repetitive DNA in the genome: tandem repeats and interspersed repeats.
Tandem repeats
Short, non-coding sequences that are repeated head-to-tail are called tandem repeats. Microsatellites consist of 2–5 basepair repeats, while minisatellite repeats are 30–35 bp. Tandem repeats make up about 4% of the human genome and 9% of the fruit fly genome. Tandem repeats can be functional. For example, telomeres are composed of the tandem repeat TTAGGG in mammals, and they play an important role in protecting the ends of the chromosome.
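As a toy illustration of what head-to-tail repetition means, the snippet below counts the longest consecutive run of a motif such as the telomeric TTAGGG in a sequence string; it is a didactic sketch, not a tool from any genomics pipeline:

```python
def longest_tandem_run(sequence: str, motif: str) -> int:
    """Return the largest number of consecutive (head-to-tail) copies of motif."""
    best = 0
    for start in range(len(sequence)):
        count, pos = 0, start
        while sequence.startswith(motif, pos):
            count += 1
            pos += len(motif)
        best = max(best, count)
    return best

sequence = "ACGT" + "TTAGGG" * 4 + "ACGTACGT"  # synthetic example sequence
print(longest_tandem_run(sequence, "TTAGGG"))  # 4
```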
In other cases, expansions in the number of tandem repeats in exons or introns can cause disease. For example, the human gene huntingtin (Htt) typically contains 6–29 tandem repeats of the nucleotides CAG (encoding a polyglutamine tract). An expansion to over 36 repeats results in Huntington's disease, a neurodegenerative disease. Twenty human disorders are known to result from similar tandem repeat expansions in various genes. The mechanism by which proteins with expanded polyglutamine tracts cause death of neurons is not fully understood. One possibility is that the proteins fail to fold properly and avoid degradation, instead accumulating in aggregates that also sequester important transcription factors, thereby altering gene expression.
Tandem repeats are usually caused by slippage during replication, unequal crossing-over and gene conversion.
Transposable elements
Transposable elements (TEs) are sequences of DNA with a defined structure that are able to change their location in the genome. TEs are categorized either as elements that replicate by a copy-and-paste mechanism or as elements that are excised from the genome and inserted at a new location. In the human genome, there are three important classes of TEs that make up more than 45% of the human DNA; these classes are the long interspersed nuclear elements (LINEs), the short interspersed nuclear elements (SINEs), and endogenous retroviruses. These elements have a large potential to modify the genetic control in a host organism.
The movement of TEs is a driving force of genome evolution in eukaryotes because their insertion can disrupt gene functions, homologous recombination between TEs can produce duplications, and TE can shuffle exons and regulatory sequences to new locations.
Retrotransposons
Retrotransposons are found mostly in eukaryotes and are not found in prokaryotes; they form a large portion of the genomes of many eukaryotes. A retrotransposon is a transposable element that transposes through an RNA intermediate. Retrotransposons are composed of DNA, but are transcribed into RNA for transposition, and the RNA transcript is then copied back into DNA with the help of a specific enzyme called reverse transcriptase. A retrotransposon that carries reverse transcriptase in its sequence can trigger its own transposition, but retrotransposons that lack reverse transcriptase must use reverse transcriptase synthesized by another retrotransposon. The RNA copies can then be inserted at another site in the genome. Retrotransposons can be divided into long terminal repeat (LTR) and non-long terminal repeat (non-LTR) elements.
Long terminal repeats (LTRs) are derived from ancient retroviral infections, so they encode proteins related to retroviral proteins including gag (structural proteins of the virus), pol (reverse transcriptase and integrase), pro (protease), and in some cases env (envelope) genes. These genes are flanked by long repeats at both 5' and 3' ends. It has been reported that LTRs consist of the largest fraction in most plant genome and might account for the huge variation in genome size.
Non-long terminal repeats (non-LTRs) are classified as long interspersed nuclear elements (LINEs), short interspersed nuclear elements (SINEs), and Penelope-like elements (PLEs). In Dictyostelium discoideum, there are also DIRS-like elements that belong to the non-LTRs. Non-LTRs are widely spread in eukaryotic genomes.
Long interspersed elements (LINEs) encode genes for reverse transcriptase and endonuclease, making them autonomous transposable elements. The human genome has around 500,000 LINEs, taking around 17% of the genome.
Short interspersed elements (SINEs) are usually less than 500 base pairs and are non-autonomous, so they rely on the proteins encoded by LINEs for transposition. The Alu element is the most common SINE found in primates. It is about 350 base pairs and occupies about 11% of the human genome with around 1,500,000 copies.
DNA transposons
DNA transposons encode a transposase enzyme between inverted terminal repeats. When expressed, the transposase recognizes the terminal inverted repeats that flank the transposon and catalyzes its excision and reinsertion in a new site. This cut-and-paste mechanism typically reinserts transposons near their original location (within 100 kb). DNA transposons are found in bacteria and make up 3% of the human genome and 12% of the genome of the roundworm C. elegans.
Genome size
Genome size is the total number of DNA base pairs in one copy of a haploid genome. Genome size varies widely across species. Invertebrates have small genomes, which is also correlated with a small number of transposable elements. Fish and amphibians have intermediate-size genomes, and birds have relatively small genomes; it has been suggested that birds lost a substantial portion of their genomes during the transition to flight. Before this loss, DNA methylation allowed the adequate expansion of the genome.
In humans, the nuclear genome comprises approximately 3.1 billion nucleotides of DNA, divided into 24 linear molecules, the shortest 45 000 000 nucleotides in length and the longest 248 000 000 nucleotides, each contained in a different chromosome. There is no clear and consistent correlation between morphological complexity and genome size in either prokaryotes or lower eukaryotes. Genome size is largely a function of the expansion and contraction of repetitive DNA elements.
Since genomes are very complex, one research strategy is to reduce the number of genes in a genome to the bare minimum and still have the organism in question survive. There is experimental work being done on minimal genomes for single cell organisms as well as minimal genomes for multi-cellular organisms (see developmental biology). The work is both in vivo and in silico.
Genome size differences due to transposable elements
Genome sizes differ enormously, especially among multicellular eukaryotes, as noted above. Much of this is due to the differing abundances of transposable elements, which evolve by creating new copies of themselves in the chromosomes. Eukaryote genomes often contain many thousands of copies of these elements, most of which have acquired mutations that make them defective.
Genomic alterations
All the cells of an organism originate from a single cell, so they are expected to have identical genomes; however, in some cases, differences arise. Both the process of copying DNA during cell division and exposure to environmental mutagens can result in mutations in somatic cells. In some cases, such mutations lead to cancer because they cause cells to divide more quickly and invade surrounding tissues. In certain lymphocytes in the human immune system, V(D)J recombination generates different genomic sequences such that each cell produces a unique antibody or T cell receptor.
During meiosis, diploid cells divide twice to produce haploid germ cells. During this process, recombination results in a reshuffling of the genetic material from homologous chromosomes so each gamete has a unique genome.
Genome-wide reprogramming
Genome-wide reprogramming in mouse primordial germ cells involves epigenetic imprint erasure leading to totipotency. Reprogramming is facilitated by active DNA demethylation, a process that entails the DNA base excision repair pathway. This pathway is employed in the erasure of CpG methylation (5mC) in primordial germ cells. The erasure of 5mC occurs via its conversion to 5-hydroxymethylcytosine (5hmC) driven by high levels of the ten-eleven dioxygenase enzymes TET1 and TET2.
Genome evolution
Genomes are more than the sum of an organism's genes and have traits that may be measured and studied without reference to the details of any particular genes and their products. Researchers compare traits such as karyotype (chromosome number), genome size, gene order, codon usage bias, and GC-content to determine what mechanisms could have produced the great variety of genomes that exist today (for recent overviews, see Brown 2002; Saccone and Pesole 2003; Benfey and Protopapas 2004; Gibson and Muse 2004; Reese 2004; Gregory 2005).
Duplications play a major role in shaping the genome. Duplication may range from extension of short tandem repeats, to duplication of a cluster of genes, and all the way to duplication of entire chromosomes or even entire genomes. Such duplications are probably fundamental to the creation of genetic novelty.
Horizontal gene transfer is invoked to explain how there is often an extreme similarity between small portions of the genomes of two organisms that are otherwise very distantly related. Horizontal gene transfer seems to be common among many microbes. Also, eukaryotic cells seem to have experienced a transfer of some genetic material from their chloroplast and mitochondrial genomes to their nuclear chromosomes. Recent empirical data suggest an important role of viruses and sub-viral RNA-networks to represent a main driving role to generate genetic novelty and natural genome editing.
In fiction
Works of science fiction illustrate concerns about the availability of genome sequences.
Michael Crichton's 1990 novel Jurassic Park and the subsequent film tell the story of a billionaire who creates a theme park of cloned dinosaurs on a remote island, with disastrous outcomes. A geneticist extracts dinosaur DNA from the blood of ancient mosquitoes and fills in the gaps with DNA from modern species to create several species of dinosaurs. A chaos theorist is asked to give his expert opinion on the safety of engineering an ecosystem with the dinosaurs, and he repeatedly warns that the outcomes of the project will be unpredictable and ultimately uncontrollable. These warnings about the perils of using genomic information are a major theme of the book.
The 1997 film Gattaca is set in a futurist society where genomes of children are engineered to contain the most ideal combination of their parents' traits, and metrics such as risk of heart disease and predicted life expectancy are documented for each person based on their genome. People conceived outside of the eugenics program, known as "In-Valids" suffer discrimination and are relegated to menial occupations. The protagonist of the film is an In-Valid who works to defy the supposed genetic odds and achieve his dream of working as a space navigator. The film warns against a future where genomic information fuels prejudice and extreme class differences between those who can and cannot afford genetically engineered children.
| Biology and health sciences | Genetics | Biology |
12395 | https://en.wikipedia.org/wiki/Greenhouse%20effect | Greenhouse effect | The greenhouse effect occurs when greenhouse gases in a planet's atmosphere insulate the planet from losing heat to space, raising its surface temperature. Surface heating can happen from an internal heat source (as in the case of Jupiter) or come from an external source, such as its host star. In the case of Earth, the Sun emits shortwave radiation (sunlight) that passes through greenhouse gases to heat the Earth's surface. In response, the Earth's surface emits longwave radiation that is mostly absorbed by greenhouse gases. The absorption of longwave radiation prevents it from reaching space, reducing the rate at which the Earth can cool off.
Without the greenhouse effect, the Earth's average surface temperature would be as cold as −18 °C (0 °F). This is of course much less than the 20th century average of about 14 °C (57 °F). In addition to naturally present greenhouse gases, burning of fossil fuels has increased amounts of carbon dioxide and methane in the atmosphere. As a result, global warming of about 1.2 °C (2.2 °F) has occurred since the Industrial Revolution, with the global average surface temperature increasing at a rate of 0.18 °C (0.32 °F) per decade since 1981.
All objects with a temperature above absolute zero emit thermal radiation. The wavelengths of thermal radiation emitted by the Sun and Earth differ because their surface temperatures are different. The Sun has a surface temperature of about 5,500 °C (9,900 °F), so it emits most of its energy as shortwave radiation in near-infrared and visible wavelengths (as sunlight). In contrast, Earth's surface has a much lower temperature, so it emits longwave radiation at mid- and far-infrared wavelengths. A gas is a greenhouse gas if it absorbs longwave radiation. Earth's atmosphere absorbs only 23% of incoming shortwave radiation, but absorbs 90% of the longwave radiation emitted by the surface, thus accumulating energy and warming the Earth's surface.
The existence of the greenhouse effect (while not named as such) was proposed as early as 1824 by Joseph Fourier. The argument and the evidence were further strengthened by Claude Pouillet in 1827 and 1838. In 1856 Eunice Newton Foote demonstrated that the warming effect of the sun is greater for air with water vapour than for dry air, and the effect is even greater with carbon dioxide. The term greenhouse was first applied to this phenomenon by Nils Gustaf Ekholm in 1901.
Definition
The greenhouse effect on Earth is defined as: "The infrared radiative effect of all infrared absorbing constituents in the atmosphere. Greenhouse gases (GHGs), clouds, and some aerosols absorb terrestrial radiation emitted by the Earth’s surface and elsewhere in the atmosphere."
The enhanced greenhouse effect describes the fact that by increasing the concentration of GHGs in the atmosphere (due to human action), the natural greenhouse effect is increased.
Terminology
The term greenhouse effect comes from an analogy to greenhouses. Both greenhouses and the greenhouse effect work by retaining heat from sunlight, but the way they retain heat differs. Greenhouses retain heat mainly by blocking convection (the movement of air). In contrast, the greenhouse effect retains heat by restricting radiative transfer through the air and reducing the rate at which thermal radiation is emitted into space.
History of discovery and investigation
The existence of the greenhouse effect, while not named as such, was proposed as early as 1824 by Joseph Fourier. The argument and the evidence were further strengthened by Claude Pouillet in 1827 and 1838. In 1856 Eunice Newton Foote demonstrated that the warming effect of the sun is greater for air with water vapour than for dry air, and the effect is even greater with carbon dioxide. She concluded that "An atmosphere of that gas would give to our earth a high temperature..."
John Tyndall was the first to measure the infrared absorption and emission of various gases and vapors. From 1859 onwards, he showed that the effect was due to a very small proportion of the atmosphere, with the main gases having no effect, and was largely due to water vapor, though small percentages of hydrocarbons and carbon dioxide had a significant effect. The effect was more fully quantified by Svante Arrhenius in 1896, who made the first quantitative prediction of global warming due to a hypothetical doubling of atmospheric carbon dioxide. The term greenhouse was first applied to this phenomenon by Nils Gustaf Ekholm in 1901.
Measurement
Matter emits thermal radiation at a rate that is directly proportional to the fourth power of its temperature. Some of the radiation emitted by the Earth's surface is absorbed by greenhouse gases and clouds. Without this absorption, Earth's surface would have an average temperature of −18 °C (0 °F). However, because some of the radiation is absorbed, Earth's average surface temperature is around 15 °C (59 °F). Thus, the Earth's greenhouse effect may be measured as a temperature change of about 33 °C (59 °F).
Thermal radiation is characterized by how much energy it carries, typically in watts per square meter (W/m²). Scientists also measure the greenhouse effect based on how much more longwave thermal radiation leaves the Earth's surface than reaches space. Currently, longwave radiation leaves the surface at an average rate of 398 W/m², but only 239 W/m² reaches space. Thus, the Earth's greenhouse effect can also be measured as an energy flow change of 159 W/m². The greenhouse effect can be expressed as a fraction (0.40) or percentage (40%) of the longwave thermal radiation that leaves Earth's surface but does not reach space.
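The flux-based and fraction-based descriptions are consistent, as a quick calculation with the figures quoted above shows:

```python
surface_emission = 398.0   # longwave radiation leaving the surface, W/m^2
escaping_to_space = 239.0  # outgoing longwave radiation reaching space, W/m^2

trapped_flux = surface_emission - escaping_to_space
trapped_fraction = trapped_flux / surface_emission

print(trapped_flux)      # 159.0 W/m^2
print(trapped_fraction)  # ~0.40, i.e. about 40% does not reach space
```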
Whether the greenhouse effect is expressed as a change in temperature or as a change in longwave thermal radiation, the same effect is being measured.
Role in climate change
Strengthening of the greenhouse effect through additional greenhouse gases from human activities is known as the enhanced greenhouse effect. As well as being inferred from measurements by ARGO, CERES and other instruments throughout the 21st century, this increase in radiative forcing from human activity has been observed directly, and is attributable mainly to increased atmospheric carbon dioxide levels.
Carbon dioxide (CO2) is produced by fossil fuel burning and other activities such as cement production and tropical deforestation. Measurements of CO2 from the Mauna Loa Observatory show that concentrations have increased from about 313 parts per million (ppm) in 1960, passing the 400 ppm milestone in 2013. The current observed amount of CO2 exceeds the geological record maxima (≈300 ppm) from ice core data.
Over the past 800,000 years, ice core data shows that carbon dioxide has varied from values as low as 180 ppm to the pre-industrial level of 270 ppm. Paleoclimatologists consider variations in carbon dioxide concentration to be a fundamental factor influencing climate variations over this time scale.
Energy balance and temperature
Incoming shortwave radiation
Hotter matter emits shorter wavelengths of radiation. As a result, the Sun emits shortwave radiation as sunlight while the Earth and its atmosphere emit longwave radiation. Sunlight includes ultraviolet, visible light, and near-infrared radiation.
Sunlight is reflected and absorbed by the Earth and its atmosphere. The atmosphere and clouds reflect about 23% and absorb 23%. The surface reflects 7% and absorbs 48%. Overall, Earth reflects about 30% of the incoming sunlight, and absorbs the rest (240 W/m).
Outgoing longwave radiation
The Earth and its atmosphere emit longwave radiation, also known as thermal infrared or terrestrial radiation. Informally, longwave radiation is sometimes called thermal radiation. Outgoing longwave radiation (OLR) is the radiation from Earth and its atmosphere that passes through the atmosphere and into space.
The greenhouse effect can be directly seen in graphs of Earth's outgoing longwave radiation as a function of frequency (or wavelength). The area between the curve for longwave radiation emitted by Earth's surface and the curve for outgoing longwave radiation indicates the size of the greenhouse effect.
Different substances are responsible for reducing the radiation energy reaching space at different frequencies; for some frequencies, multiple substances play a role. Carbon dioxide is understood to be responsible for the dip in outgoing radiation (and associated rise in the greenhouse effect) at around 667 cm−1 (equivalent to a wavelength of 15 microns).
Each layer of the atmosphere with greenhouse gases absorbs some of the longwave radiation being radiated upwards from lower layers. It also emits longwave radiation in all directions, both upwards and downwards, in equilibrium with the amount it has absorbed. This results in less radiative heat loss and more warmth below. Increasing the concentration of the gases increases the amount of absorption and emission, thereby causing more heat to be retained at the surface and in the layers below.
Effective temperature
The power of outgoing longwave radiation emitted by a planet corresponds to the effective temperature of the planet. The effective temperature is the temperature that a planet radiating with a uniform temperature (a blackbody) would need to have in order to radiate the same amount of energy.
This concept may be used to compare the amount of longwave radiation emitted to space and the amount of longwave radiation emitted by the surface:
Emissions to space: Based on its emissions of longwave radiation to space, Earth's overall effective temperature is about −18 °C (255 K).
Emissions from surface: Based on thermal emissions from the surface, Earth's effective surface temperature is about 16 °C (289 K), which is about 34 °C warmer than Earth's overall effective temperature.
Earth's surface temperature is often reported in terms of the average near-surface air temperature. This is about 15 °C, a bit lower than the effective surface temperature. This value is about 33 °C warmer than Earth's overall effective temperature.
Energy flux
Energy flux is the rate of energy flow per unit area. Energy flux is expressed in units of W/m², which is the number of joules of energy that pass through a square meter each second. Most fluxes quoted in high-level discussions of climate are global values, which means they are the total flow of energy over the entire globe, divided by the surface area of the Earth, about 5.1 × 10^14 m².
The fluxes of radiation arriving at and leaving the Earth are important because radiative transfer is the only process capable of exchanging energy between Earth and the rest of the universe.
Radiative balance
The temperature of a planet depends on the balance between incoming radiation and outgoing radiation. If incoming radiation exceeds outgoing radiation, a planet will warm. If outgoing radiation exceeds incoming radiation, a planet will cool. A planet will tend towards a state of radiative equilibrium, in which the power of outgoing radiation equals the power of absorbed incoming radiation.
Earth's energy imbalance is the amount by which the power of incoming sunlight absorbed by Earth's surface or atmosphere exceeds the power of outgoing longwave radiation emitted to space. Energy imbalance is the fundamental measurement that drives surface temperature. A UN presentation says "The EEI is the most critical number defining the prospects for continued global warming and climate change." One study argues, "The absolute value of EEI represents the most fundamental metric defining the status of global climate change."
Earth's energy imbalance (EEI) was about 0.7 W/m² as of around 2015, indicating that Earth as a whole is accumulating thermal energy and is in a process of becoming warmer.
Over 90% of the retained energy goes into warming the oceans, with much smaller amounts going into heating the land, atmosphere, and ice.
Day and night cycle
A simple picture assumes a steady state, but in the real world, the day/night (diurnal) cycle, as well as the seasonal cycle and weather disturbances, complicate matters. Solar heating applies only during daytime. At night the atmosphere cools somewhat, but not greatly because the thermal inertia of the climate system resists changes both day and night, as well as for longer periods. Diurnal temperature changes decrease with height in the atmosphere.
Effect of lapse rate
Lapse rate
In the lower portion of the atmosphere, the troposphere, the air temperature decreases (or "lapses") with increasing altitude. The rate at which temperature changes with altitude is called the lapse rate.
On Earth, the air temperature decreases by about 6.5 °C/km (3.6 °F per 1000 ft), on average, although this varies.
The temperature lapse is caused by convection. Air warmed by the surface rises. As it rises, air expands and cools. Simultaneously, other air descends, compresses, and warms. This process creates a vertical temperature gradient within the atmosphere.
This vertical temperature gradient is essential to the greenhouse effect. If the lapse rate were zero (so that the atmospheric temperature did not vary with altitude and was the same as the surface temperature) then there would be no greenhouse effect (i.e., its value would be zero).
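As a rough numerical illustration (not from the source; it assumes a constant average lapse rate of 6.5 °C/km and an illustrative surface temperature of 15 °C), the air temperature at a given altitude in the troposphere can be estimated as follows:

```python
# Rough sketch: estimate tropospheric air temperature from the average lapse rate.
# The lapse rate (6.5 C/km) and surface temperature (15 C) are illustrative assumptions.
LAPSE_RATE_C_PER_KM = 6.5
SURFACE_TEMPERATURE_C = 15.0

def air_temperature_c(altitude_km: float) -> float:
    """Approximate air temperature (deg C) at the given altitude in the troposphere."""
    return SURFACE_TEMPERATURE_C - LAPSE_RATE_C_PER_KM * altitude_km

for altitude in (0, 2, 5, 10):
    print(f"{altitude:>2} km: {air_temperature_c(altitude):6.1f} C")
# e.g. at 5 km the estimate is 15 - 6.5 * 5 = -17.5 C
```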
Emission temperature and altitude
Greenhouse gases make the atmosphere near Earth's surface mostly opaque to longwave radiation. The atmosphere only becomes transparent to longwave radiation at higher altitudes, where the air is less dense, there is less water vapor, and reduced pressure broadening of absorption lines limits the wavelengths that gas molecules can absorb.
For any given wavelength, the longwave radiation that reaches space is emitted by a particular radiating layer of the atmosphere. The intensity of the emitted radiation is determined by the weighted average air temperature within that layer. So, for any given wavelength of radiation emitted to space, there is an associated effective emission temperature (or brightness temperature).
A given wavelength of radiation may also be said to have an effective emission altitude, which is a weighted average of the altitudes within the radiating layer.
The effective emission temperature and altitude vary by wavelength (or frequency). This phenomenon may be seen by examining plots of radiation emitted to space.
Greenhouse gases and the lapse rate
Earth's surface radiates longwave radiation with wavelengths in the range of 4–100 microns. Greenhouse gases that were largely transparent to incoming solar radiation are more absorbent for some wavelengths in this range.
The atmosphere near the Earth's surface is largely opaque to longwave radiation and most heat loss from the surface is by evaporation and convection. However radiative energy losses become increasingly important higher in the atmosphere, largely because of the decreasing concentration of water vapor, an important greenhouse gas.
Rather than thinking of longwave radiation headed to space as coming from the surface itself, it is more realistic to think of this outgoing radiation as being emitted by a layer in the mid-troposphere, which is effectively coupled to the surface by a lapse rate. The difference in temperature between these two locations explains the difference between surface emissions and emissions to space, i.e., it explains the greenhouse effect.
Infrared absorbing constituents in the atmosphere
Greenhouse gases
A greenhouse gas (GHG) is a gas which contributes to the trapping of heat by impeding the flow of longwave radiation out of a planet's atmosphere. Greenhouse gases contribute most of the greenhouse effect in Earth's energy budget.
Infrared active gases
Gases which can absorb and emit longwave radiation are said to be infrared active and act as greenhouse gases.
Most gases whose molecules have two different atoms (such as carbon monoxide, CO), and all gases with three or more atoms (including CO2 and H2O), are infrared active and act as greenhouse gases. (Technically, this is because when these molecules vibrate, those vibrations modify the molecular dipole moment, or asymmetry in the distribution of electrical charge. See Infrared spectroscopy.)
Gases with only one atom (such as argon, Ar) or with two identical atoms (such as nitrogen, N2, and oxygen, O2) are not infrared active. They are transparent to longwave radiation, and, for practical purposes, do not absorb or emit longwave radiation. (This is because their molecules are symmetrical and so do not have a dipole moment.) Such gases make up more than 99% of the dry atmosphere.
Absorption and emission
Greenhouse gases absorb and emit longwave radiation within specific ranges of wavelengths (organized as spectral lines or bands).
When greenhouse gases absorb radiation, they distribute the acquired energy to the surrounding air as thermal energy (i.e., kinetic energy of gas molecules). Energy is transferred from greenhouse gas molecules to other molecules via molecular collisions.
Contrary to what is sometimes said, greenhouse gases do not "re-emit" photons after they are absorbed. Because each molecule experiences billions of collisions per second, any energy a greenhouse gas molecule receives by absorbing a photon will be redistributed to other molecules before there is a chance for a new photon to be emitted.
In a separate process, greenhouse gases emit longwave radiation, at a rate determined by the air temperature. This thermal energy is either absorbed by other greenhouse gas molecules or leaves the atmosphere, cooling it.
Radiative effects
Effect on air: Air is warmed by latent heat (buoyant water vapor condensing into water droplets and releasing heat), thermals (warm air rising from below), and by sunlight being absorbed in the atmosphere. Air is cooled radiatively, by greenhouse gases and clouds emitting longwave thermal radiation. Within the troposphere, greenhouse gases typically have a net cooling effect on air, emitting more thermal radiation than they absorb. Warming and cooling of air are well balanced, on average, so that the atmosphere maintains a roughly stable average temperature.
Effect on surface cooling: Longwave radiation flows both upward and downward due to absorption and emission in the atmosphere. These canceling energy flows reduce radiative surface cooling (net upward radiative energy flow). Latent heat transport and thermals provide non-radiative surface cooling which partially compensates for this reduction, but there is still a net reduction in surface cooling, for a given surface temperature.
Effect on TOA energy balance: Greenhouse gases impact the top-of-atmosphere (TOA) energy budget by reducing the flux of longwave radiation emitted to space, for a given surface temperature. Thus, greenhouse gases alter the energy balance at TOA. This means that the surface temperature needs to be higher (than the planet's effective temperature, i.e., the temperature associated with emissions to space), in order for the outgoing energy emitted to space to balance the incoming energy from sunlight. It is important to focus on the top-of-atmosphere (TOA) energy budget (rather than the surface energy budget) when reasoning about the warming effect of greenhouse gases.
Clouds and aerosols
Clouds and aerosols have both cooling effects, associated with reflecting sunlight back to space, and warming effects, associated with trapping thermal radiation.
On average, clouds have a strong net cooling effect. However, the mix of cooling and warming effects varies, depending on detailed characteristics of particular clouds (including their type, height, and optical properties). Thin cirrus clouds can have a net warming effect. Clouds can absorb and emit infrared radiation and thus affect the radiative properties of the atmosphere.
Basic formulas
Effective temperature
A given flux of thermal radiation has an associated effective radiating temperature or effective temperature. Effective temperature is the temperature that a black body (a perfect absorber/emitter) would need to be to emit that much thermal radiation. Thus, the overall effective temperature of a planet is given by

T_eff,planet = (OLR / σ)^(1/4)

where OLR is the average flux (power per unit area) of outgoing longwave radiation emitted to space and σ is the Stefan-Boltzmann constant. Similarly, the effective temperature of the surface is given by

T_eff,surface = (SLR / σ)^(1/4)

where SLR is the average flux of longwave radiation emitted by the surface. (OLR is a conventional abbreviation. SLR is used here to denote the flux of surface-emitted longwave radiation, although there is no standard abbreviation for this.)
Metrics for the greenhouse effect
The IPCC reports the greenhouse effect, G, as being 159 W/m², where G is the flux of longwave thermal radiation that leaves the surface minus the flux of outgoing longwave radiation that reaches space:

G = SLR − OLR = 398 W/m² − 239 W/m² = 159 W/m²

Alternatively, the greenhouse effect can be described using the normalized greenhouse effect, g̃, defined as

g̃ = G / SLR
The normalized greenhouse effect is the fraction of the amount of thermal radiation emitted by the surface that does not reach space.
Based on the IPCC numbers, g̃ = 159/398 ≈ 0.40. In other words, 40 percent less thermal radiation reaches space than what leaves the surface.
Sometimes the greenhouse effect is quantified as a temperature difference. This temperature difference is closely related to the quantities above.
When the greenhouse effect is expressed as a temperature difference, ΔT_GHE, this refers to the effective temperature associated with thermal radiation emissions from the surface minus the effective temperature associated with emissions to space:

ΔT_GHE = T_eff,surface − T_eff,planet
Informal discussions of the greenhouse effect often compare the actual surface temperature to the temperature that the planet would have if there were no greenhouse gases. However, in formal technical discussions, when the size of the greenhouse effect is quantified as a temperature, this is generally done using the above formula. The formula refers to the effective surface temperature rather than the actual surface temperature, and compares the surface with the top of the atmosphere, rather than comparing reality to a hypothetical situation.
The temperature difference, ΔT_GHE, indicates how much warmer a planet's surface is than the planet's overall effective temperature.
Radiative balance
Earth's top-of-atmosphere (TOA) energy imbalance (EEI) is the amount by which the power of incoming radiation exceeds the power of outgoing radiation:

EEI = ASR − OLR

where ASR is the mean flux of absorbed solar radiation. ASR may be expanded as

ASR = (1 − A) · MSI

where A is the albedo (reflectivity) of the planet and MSI is the mean solar irradiance incoming at the top of the atmosphere.
The radiative equilibrium temperature of a planet can be expressed as

T_eq = ((1 − A) · MSI / σ)^(1/4)

A planet's temperature will tend to shift towards a state of radiative equilibrium, in which the TOA energy imbalance is zero, i.e., EEI = 0. When the planet is in radiative equilibrium, the overall effective temperature of the planet is given by

T_eff,planet = T_eq
Thus, the concept of radiative equilibrium is important because it indicates what effective temperature a planet will tend towards having.
If, in addition to knowing the effective temperature, T_eff,planet, we know the value of the greenhouse effect, then we know the mean (average) surface temperature of the planet.
This is why the quantity known as the greenhouse effect is important: it is one of the few quantities that go into determining the planet's mean surface temperature.
Greenhouse effect and temperature
Typically, a planet will be close to radiative equilibrium, with the rates of incoming and outgoing energy being well-balanced. Under such conditions, the planet's equilibrium temperature is determined by the mean solar irradiance and the planetary albedo (how much sunlight is reflected back to space instead of being absorbed).
The greenhouse effect measures how much warmer the surface is than the overall effective temperature of the planet. So, the effective surface temperature, T_eff,surface, is, using the definition of ΔT_GHE,

T_eff,surface = T_eq + ΔT_GHE

One could also express the relationship between T_eff,surface and T_eq using the greenhouse effect G or the normalized greenhouse effect g̃.
So, the principle that a larger greenhouse effect corresponds to a higher surface temperature, if everything else (i.e., the factors that determine T_eq) is held fixed, is true as a matter of definition.
Note that the greenhouse effect influences the temperature of the planet as a whole, in tandem with the planet's tendency to move toward radiative equilibrium.
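A minimal numerical sketch of this chain of reasoning follows. It is not from the source; the mean solar irradiance (about 340 W/m²), albedo (0.30), and greenhouse temperature difference (about 34 K) used below are assumed illustrative values consistent with the figures discussed earlier:

```python
# Sketch: from solar irradiance and albedo to a radiative equilibrium temperature,
# then to an effective surface temperature via the greenhouse temperature difference.
# MSI, ALBEDO and DELTA_T_GHE are assumed illustrative values, not source data.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
MSI = 340.0           # mean solar irradiance at top of atmosphere, W/m^2 (assumed)
ALBEDO = 0.30         # planetary albedo (assumed)
DELTA_T_GHE = 34.0    # greenhouse effect as a temperature difference, K (assumed)

asr = (1.0 - ALBEDO) * MSI          # absorbed solar radiation, ~238 W/m^2
t_eq = (asr / SIGMA) ** 0.25        # radiative equilibrium temperature, ~255 K
t_surface_eff = t_eq + DELTA_T_GHE  # effective surface temperature, ~289 K

print(f"ASR: {asr:.0f} W/m^2, T_eq: {t_eq:.0f} K, T_eff,surface: {t_surface_eff:.0f} K")
```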
Misconceptions
There are sometimes misunderstandings about how the greenhouse effect functions and raises temperatures.
The surface budget fallacy is a common error in thinking. It involves thinking that an increased CO2 concentration could only cause warming by increasing the downward thermal radiation to the surface, as a result of making the atmosphere a better emitter. If the atmosphere near the surface is already nearly opaque to thermal radiation, this would mean that increasing CO2 could not lead to higher temperatures. However, it is a mistake to focus on the surface energy budget rather than the top-of-atmosphere energy budget. Regardless of what happens at the surface, increasing the concentration of CO2 tends to reduce the thermal radiation reaching space (OLR), leading to a TOA energy imbalance that leads to warming. Earlier researchers like Callendar (1938) and Plass (1959) focused on the surface budget, but the work of Manabe in the 1960s clarified the importance of the top-of-atmosphere energy budget.
Among those who do not believe in the greenhouse effect, there is a fallacy that the greenhouse effect involves greenhouse gases sending heat from the cool atmosphere to the planet's warm surface, in violation of the second law of thermodynamics. However, this idea reflects a misunderstanding. Radiation heat flow is the net energy flow after the flows of radiation in both directions have been taken into account. Radiation heat flow occurs in the direction from the surface to the atmosphere and space, as is to be expected given that the surface is warmer than the atmosphere and space. While greenhouse gases emit thermal radiation downward to the surface, this is part of the normal process of radiation heat transfer. The downward thermal radiation simply reduces the upward thermal radiation net energy flow (radiation heat flow), i.e., it reduces cooling.
Simplified models
Simplified models are sometimes used to support understanding of how the greenhouse effect comes about and how this affects surface temperature.
Atmospheric layer models
The greenhouse effect can be seen to occur in a simplified model in which the air is treated as if it were a single uniform layer exchanging radiation with the ground and space. Slightly more complex models add additional layers, or introduce convection.
Equivalent emission altitude
One simplification is to treat all outgoing longwave radiation as being emitted from an altitude where the air temperature equals the overall effective temperature for planetary emissions, T_eff,planet. Some authors have referred to this altitude as the effective radiating level (ERL), and suggest that as the CO2 concentration increases, the ERL must rise to maintain the same mass of CO2 above that level.
This approach is less accurate than accounting for variation in radiation wavelength by emission altitude. However, it can be useful in supporting a simplified understanding of the greenhouse effect. For instance, it can be used to explain how the greenhouse effect increases as the concentration of greenhouse gases increases.
Earth's overall equivalent emission altitude has been increasing with a trend of /decade, which is said to be consistent with a global mean surface warming of /decade over the period 1979–2011.
Related effects on Earth
Negative greenhouse effect
Scientists have observed that, at times, there is a negative greenhouse effect over parts of Antarctica. In a location where there is a strong temperature inversion, so that the air is warmer than the surface, it is possible for the greenhouse effect to be reversed, so that the presence of greenhouse gases increases the rate of radiative cooling to space. In this case, the rate of thermal radiation emission to space is greater than the rate at which thermal radiation is emitted by the surface. Thus, the local value of the greenhouse effect is negative.
Runaway greenhouse effect
Bodies other than Earth
In the solar system, apart from the Earth, at least two other planets and a moon also have a greenhouse effect.
Venus
The greenhouse effect on Venus is particularly large, and it brings the surface temperature to as high as about 735 K (462 °C). This is due to its very dense atmosphere which consists of about 97% carbon dioxide.
Although Venus is about 30% closer to the Sun, it absorbs (and is warmed by) less sunlight than Earth, because Venus reflects 77% of incident sunlight while Earth reflects around 30%. In the absence of a greenhouse effect, the surface of Venus would be expected to have a temperature of . Thus, contrary to what one might think, being nearer to the Sun is not a reason why Venus is warmer than Earth.
Due to its high pressure, the CO2 in the atmosphere of Venus exhibits continuum absorption (absorption over a broad range of wavelengths) and is not limited to absorption within the bands relevant to its absorption on Earth.
A runaway greenhouse effect involving carbon dioxide and water vapor has for many years been hypothesized to have occurred on Venus; this idea is still largely accepted. The planet Venus experienced a runaway greenhouse effect, resulting in an atmosphere which is 96% carbon dioxide, and a surface atmospheric pressure roughly the same as is found about 900 m (3,000 ft) underwater on Earth. Venus may have had water oceans, but they would have boiled off as the mean surface temperature rose to its current value.
Mars
Mars has about 70 times as much carbon dioxide as Earth, but experiences only a small greenhouse effect, about . The greenhouse effect is small due to the lack of water vapor and the overall thinness of the atmosphere.
The same radiative transfer calculations that predict warming on Earth accurately explain the temperature on Mars, given its atmospheric composition.
Titan
Saturn's moon Titan has both a greenhouse effect and an anti-greenhouse effect. The presence of nitrogen (N2), methane (CH4), and hydrogen (H2) in the atmosphere contributes to a greenhouse effect, increasing the surface temperature by about 21 K over the expected temperature of the body without these gases.
While the gases N2 and H2 ordinarily do not absorb infrared radiation, these gases absorb thermal radiation on Titan due to pressure-induced collisions, the large mass and thickness of the atmosphere, and the long wavelengths of the thermal radiation from the cold surface.
The existence of a high-altitude haze, which absorbs wavelengths of solar radiation but is transparent to infrared, contributes to an anti-greenhouse effect of approximately 9 K.
The net result of these two effects is a warming of 21 K − 9 K = 12 K, so Titan's surface temperature of about 94 K is 12 K warmer than it would be if there were no atmosphere.
Effect of pressure
One cannot predict the relative sizes of the greenhouse effects on different bodies simply by comparing the amount of greenhouse gases in their atmospheres. This is because factors other than the quantity of these gases also play a role in determining the size of the greenhouse effect.
Overall atmospheric pressure affects how much thermal radiation each molecule of a greenhouse gas can absorb. High pressure leads to more absorption and low pressure leads to less.
This is due to "pressure broadening" of spectral lines. When the total atmospheric pressure is higher, collisions between molecules occur at a higher rate. Collisions broaden the width of absorption lines, allowing a greenhouse gas to absorb thermal radiation over a broader range of wavelengths.
Each molecule in the air near Earth's surface experiences about 7 billion collisions per second. This rate is lower at higher altitudes, where the pressure and temperature are both lower. This means that greenhouse gases are able to absorb more wavelengths in the lower atmosphere than they can in the upper atmosphere.
On other planets, pressure broadening means that each molecule of a greenhouse gas is more effective at trapping thermal radiation if the total atmospheric pressure is high (as on Venus), and less effective at trapping thermal radiation if the atmospheric pressure is low (as on Mars).
| Physical sciences | Climatology | null |
12401 | https://en.wikipedia.org/wiki/Graph%20theory | Graph theory | In mathematics and computer science, graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. A graph in this context is made up of vertices (also called nodes or points) which are connected by edges (also called arcs, links or lines). A distinction is made between undirected graphs, where edges link two vertices symmetrically, and directed graphs, where edges link two vertices asymmetrically. Graphs are one of the principal objects of study in discrete mathematics.
Definitions
Definitions in graph theory vary. The following are some of the more basic ways of defining graphs and related mathematical structures.
Graph
In one restricted but very common sense of the term, a graph is an ordered pair G = (V, E) comprising:
V, a set of vertices (also called nodes or points);
E ⊆ {{x, y} | x, y ∈ V and x ≠ y}, a set of edges (also called links or lines), which are unordered pairs of vertices (that is, an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called precisely an undirected simple graph.
In the edge {x, y}, the vertices x and y are called the endpoints of the edge. The edge is said to join x and y and to be incident on x and on y. A vertex may exist in a graph and not belong to an edge. Under this definition, multiple edges, in which two or more edges connect the same vertices, are not allowed.
In one more general sense of the term allowing multiple edges, a graph is an ordered triple G = (V, E, φ) comprising:
V, a set of vertices (also called nodes or points);
E, a set of edges (also called links or lines);
φ : E → {{x, y} | x, y ∈ V and x ≠ y}, an incidence function mapping every edge to an unordered pair of vertices (that is, an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called precisely an undirected multigraph.
A loop is an edge that joins a vertex to itself. Graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex x to itself is the edge {x, x} (for an undirected simple graph) or is incident on {x, x} (for an undirected multigraph), which is not in {{x, y} | x, y ∈ V and x ≠ y}. To allow loops, the definitions must be expanded. For undirected simple graphs, the definition of E should be modified to E ⊆ {{x, y} | x, y ∈ V}. For undirected multigraphs, the definition of φ should be modified to φ : E → {{x, y} | x, y ∈ V}. To avoid ambiguity, these types of objects may be called undirected simple graph permitting loops and undirected multigraph permitting loops (sometimes also undirected pseudograph), respectively.
V and E are usually taken to be finite, and many of the well-known results are not true (or are rather different) for infinite graphs because many of the arguments fail in the infinite case. Moreover, V is often assumed to be non-empty, but E is allowed to be the empty set. The order of a graph is |V|, its number of vertices. The size of a graph is |E|, its number of edges. The degree or valency of a vertex is the number of edges that are incident to it, where a loop is counted twice. The degree of a graph is the maximum of the degrees of its vertices.
In an undirected simple graph of order n, the maximum degree of each vertex is n − 1 and the maximum size of the graph is n(n − 1)/2.
The edges of an undirected simple graph permitting loops G induce a symmetric homogeneous relation ~ on the vertices of G that is called the adjacency relation of G. Specifically, for each edge {x, y}, its endpoints x and y are said to be adjacent to one another, which is denoted x ~ y.
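For illustration, here is a minimal sketch (not part of the source) of an undirected simple graph in code, with the order, size, vertex degrees, and adjacency relation defined as above:

```python
# Minimal undirected simple graph: vertices, edges as unordered pairs, no loops or multi-edges.
class Graph:
    def __init__(self, vertices, edges):
        self.vertices = set(vertices)
        # store each edge as a frozenset {x, y} of two distinct vertices
        self.edges = {frozenset(e) for e in edges if len(set(e)) == 2}

    def order(self):
        """Number of vertices |V|."""
        return len(self.vertices)

    def size(self):
        """Number of edges |E|."""
        return len(self.edges)

    def degree(self, v):
        """Number of edges incident to v."""
        return sum(1 for e in self.edges if v in e)

    def adjacent(self, x, y):
        """True if {x, y} is an edge (the adjacency relation x ~ y)."""
        return frozenset((x, y)) in self.edges

g = Graph(vertices="abcd", edges=[("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")])
print(g.order(), g.size())   # 4 4
print(g.degree("c"))         # 3
print(g.adjacent("a", "d"))  # False
```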
Directed graph
A directed graph or digraph is a graph in which edges have orientations.
In one restricted but very common sense of the term, a directed graph is an ordered pair G = (V, E) comprising:
V, a set of vertices (also called nodes or points);
E ⊆ {(x, y) | (x, y) ∈ V² and x ≠ y}, a set of edges (also called directed edges, directed links, directed lines, arrows or arcs) which are ordered pairs of vertices (that is, an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called precisely a directed simple graph. In set theory and graph theory, V^n denotes the set of n-tuples of elements of V, that is, ordered sequences of n elements that are not necessarily distinct.
In the edge (x, y) directed from x to y, the vertices x and y are called the endpoints of the edge, x the tail of the edge and y the head of the edge. The edge is said to join x and y and to be incident on x and on y. A vertex may exist in a graph and not belong to an edge. The edge (y, x) is called the inverted edge of (x, y). Multiple edges, not allowed under the definition above, are two or more edges with both the same tail and the same head.
In one more general sense of the term allowing multiple edges, a directed graph is an ordered triple G = (V, E, φ) comprising:
V, a set of vertices (also called nodes or points);
E, a set of edges (also called directed edges, directed links, directed lines, arrows or arcs);
φ : E → {(x, y) | (x, y) ∈ V² and x ≠ y}, an incidence function mapping every edge to an ordered pair of vertices (that is, an edge is associated with two distinct vertices).
To avoid ambiguity, this type of object may be called precisely a directed multigraph.
A loop is an edge that joins a vertex to itself. Directed graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex x to itself is the edge (x, x) (for a directed simple graph) or is incident on (x, x) (for a directed multigraph), which is not in {(x, y) | (x, y) ∈ V² and x ≠ y}. So to allow loops the definitions must be expanded. For directed simple graphs, the definition of E should be modified to E ⊆ V². For directed multigraphs, the definition of φ should be modified to φ : E → V². To avoid ambiguity, these types of objects may be called precisely a directed simple graph permitting loops and a directed multigraph permitting loops (or a quiver) respectively.
The edges of a directed simple graph permitting loops G form a homogeneous relation ~ on the vertices of G that is called the adjacency relation of G. Specifically, for each edge (x, y), its endpoints x and y are said to be adjacent to one another, which is denoted x ~ y.
Applications
Graphs can be used to model many types of relations and processes in physical, biological, social and information systems. Many practical problems can be represented by graphs. Emphasizing their application to real-world systems, the term network is sometimes defined to mean a graph in which attributes (e.g. names) are associated with the vertices and edges, and the subject that expresses and understands real-world systems as a network is called network science.
Computer science
Within computer science, 'causal' and 'non-causal' linked structures are graphs that are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. For instance, the link structure of a website can be represented by a directed graph, in which the vertices represent web pages and directed edges represent links from one page to another. A similar approach can be taken to problems in social media, travel, biology, computer chip design, mapping the progression of neuro-degenerative diseases, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science. The transformation of graphs is often formalized and represented by graph rewrite systems. Complementary to graph transformation systems focusing on rule-based in-memory manipulation of graphs are graph databases geared towards transaction-safe, persistent storing and querying of graph-structured data.
Linguistics
Graph-theoretic methods, in various forms, have proven particularly useful in linguistics, since natural language often lends itself well to discrete structure. Traditionally, syntax and compositional semantics follow tree-based structures, whose expressive power lies in the principle of compositionality, modeled in a hierarchical graph. More contemporary approaches such as head-driven phrase structure grammar model the syntax of natural language using typed feature structures, which are directed acyclic graphs.
Within lexical semantics, especially as applied to computers, modeling word meaning is easier when a given word is understood in terms of related words; semantic networks are therefore important in computational linguistics. Still, other methods in phonology (e.g. optimality theory, which uses lattice graphs) and morphology (e.g. finite-state morphology, using finite-state transducers) are common in the analysis of language as a graph. Indeed, the usefulness of this area of mathematics to linguistics has borne organizations such as TextGraphs, as well as various 'Net' projects, such as WordNet, VerbNet, and others.
Physics and chemistry
Graph theory is also used to study molecules in chemistry and physics. In condensed matter physics, the three-dimensional structure of complicated simulated atomic structures can be studied quantitatively by gathering statistics on graph-theoretic properties related to the topology of the atoms. Also, "the Feynman graphs and rules of calculation summarize quantum field theory in a form in close contact with the experimental numbers one wants to understand." In chemistry a graph makes a natural model for a molecule, where vertices represent atoms and edges bonds. This approach is especially used in computer processing of molecular structures, ranging from chemical editors to database searching. In statistical physics, graphs can represent local connections between interacting parts of a system, as well as the dynamics of a physical process on such
systems. Similarly, in computational neuroscience graphs can be used to represent functional connections between brain areas that interact to give rise to various cognitive processes, where the vertices represent different areas of the brain and the edges represent the connections between those areas. Graph theory plays an important role in electrical modeling of electrical networks, here, weights are associated with resistance of the wire segments to obtain electrical properties of network structures. Graphs are also used to represent the micro-scale channels of porous media, in which the vertices represent the pores and the edges represent the smaller channels connecting the pores. Chemical graph theory uses the molecular graph as a means to model molecules.
Graphs and networks are excellent models to study and understand phase transitions and critical phenomena.
Removal of nodes or edges leads to a critical transition where the network breaks into small clusters which is studied as a phase transition. This breakdown is studied via percolation theory.
Social sciences
Graph theory is also widely used in sociology as a way, for example, to measure actors' prestige or to explore rumor spreading, notably through the use of social network analysis software. Under the umbrella of social networks are many different types of graphs. Acquaintanceship and friendship graphs describe whether people know each other. Influence graphs model whether certain people can influence the behavior of others. Finally, collaboration graphs model whether two people work together in a particular way, such as acting in a movie together.
Biology
Likewise, graph theory is useful in biology and conservation efforts where a vertex can represent regions where certain species exist (or inhabit) and the edges represent migration paths or movement between the regions. This information is important when looking at breeding patterns or tracking the spread of disease, parasites or how changes to the movement can affect other species.
Graphs are also commonly used in molecular biology and genomics to model and analyse datasets with complex relationships. For example, graph-based methods are often used to 'cluster' cells together into cell-types in single-cell transcriptome analysis. Another use is to model genes or proteins in a pathway and study the relationships between them, such as metabolic pathways and gene regulatory networks. Evolutionary trees, ecological networks, and hierarchical clustering of gene expression patterns are also represented as graph structures.
Graph theory is also used in connectomics; nervous systems can be seen as a graph, where the nodes are neurons and the edges are the connections between them.
Mathematics
In mathematics, graphs are useful in geometry and certain parts of topology such as knot theory. Algebraic graph theory has close links with group theory. Algebraic graph theory has been applied to many areas including dynamic systems and complexity.
Other topics
A graph structure can be extended by assigning a weight to each edge of the graph. Graphs with weights, or weighted graphs, are used to represent structures in which pairwise connections have some numerical values. For example, if a graph represents a road network, the weights could represent the length of each road. There may be several weights associated with each edge, including distance (as in the previous example), travel time, or monetary cost. Such weighted graphs are commonly used to program GPS devices and travel-planning search engines that compare flight times and costs.
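As an illustration of how such weighted graphs support route planning, the following sketch (not from the source; the road network and distances are made up) computes shortest travel distances with Dijkstra's algorithm:

```python
import heapq

# Hypothetical weighted road network: weights are distances between places.
roads = {
    "A": {"B": 5, "C": 2},
    "B": {"A": 5, "C": 1, "D": 3},
    "C": {"A": 2, "B": 1, "D": 7},
    "D": {"B": 3, "C": 7},
}

def shortest_distances(graph, start):
    """Dijkstra's algorithm: shortest distance from start to every reachable vertex."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            new_d = d + w
            if new_d < dist.get(v, float("inf")):
                dist[v] = new_d
                heapq.heappush(queue, (new_d, v))
    return dist

print(shortest_distances(roads, "A"))  # {'A': 0, 'C': 2, 'B': 3, 'D': 6}
```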
History
The paper written by Leonhard Euler on the Seven Bridges of Königsberg and published in 1736 is regarded as the first paper in the history of graph theory. This paper, as well as the one written by Vandermonde on the knight problem, carried on with the analysis situs initiated by Leibniz. Euler's formula relating the number of edges, vertices, and faces of a convex polyhedron was studied and generalized by Cauchy and L'Huilier, and represents the beginning of the branch of mathematics known as topology.
More than one century after Euler's paper on the bridges of Königsberg and while Listing was introducing the concept of topology, Cayley was led by an interest in particular analytical forms arising from differential calculus to study a particular class of graphs, the trees. This study had many implications for theoretical chemistry. The techniques he used mainly concern the enumeration of graphs with particular properties. Enumerative graph theory then arose from the results of Cayley and the fundamental results published by Pólya between 1935 and 1937. These were generalized by De Bruijn in 1959. Cayley linked his results on trees with contemporary studies of chemical composition. The fusion of ideas from mathematics with those from chemistry began what has become part of the standard terminology of graph theory.
In particular, the term "graph" was introduced by Sylvester in a paper published in 1878 in Nature, where he draws an analogy between "quantic invariants" and "co-variants" of algebra and molecular diagrams:
"[…] Every invariant and co-variant thus becomes expressible by a graph precisely identical with a Kekuléan diagram or chemicograph. […] I give a rule for the geometrical multiplication of graphs, i.e. for constructing a graph to the product of in- or co-variants whose separate graphs are given. […]" (italics as in the original).
The first textbook on graph theory was written by Dénes Kőnig, and published in 1936. Another book by Frank Harary, published in 1969, was "considered the world over to be the definitive textbook on the subject", and enabled mathematicians, chemists, electrical engineers and social scientists to talk to each other. Harary donated all of the royalties to fund the Pólya Prize.
One of the most famous and stimulating problems in graph theory is the four color problem: "Is it true that any map drawn in the plane may have its regions colored with four colors, in such a way that any two regions having a common border have different colors?" This problem was first posed by Francis Guthrie in 1852 and its first written record is in a letter of De Morgan addressed to Hamilton the same year. Many incorrect proofs have been proposed, including those by Cayley, Kempe, and others. The study and the generalization of this problem by Tait, Heawood, Ramsey and Hadwiger led to the study of the colorings of the graphs embedded on surfaces with arbitrary genus. Tait's reformulation generated a new class of problems, the factorization problems, particularly studied by Petersen and Kőnig. The works of Ramsey on colorations and more especially the results obtained by Turán in 1941 were at the origin of another branch of graph theory, extremal graph theory.
The four color problem remained unsolved for more than a century. In 1969 Heinrich Heesch published a method for solving the problem using computers. A computer-aided proof produced in 1976 by Kenneth Appel and Wolfgang Haken makes fundamental use of the notion of "discharging" developed by Heesch. The proof involved checking the properties of 1,936 configurations by computer, and was not fully accepted at the time due to its complexity. A simpler proof considering only 633 configurations was given twenty years later by Robertson, Seymour, Sanders and Thomas.
The autonomous development of topology from 1860 to 1930 fertilized graph theory back through the works of Jordan, Kuratowski and Whitney. Another important factor of common development of graph theory and topology came from the use of the techniques of modern algebra. The first example of such a use comes from the work of the physicist Gustav Kirchhoff, who published in 1845 his Kirchhoff's circuit laws for calculating the voltage and current in electric circuits.
The introduction of probabilistic methods in graph theory, especially in the study of Erdős and Rényi of the asymptotic probability of graph connectivity, gave rise to yet another branch, known as random graph theory, which has been a fruitful source of graph-theoretic results.
Representation
A graph is an abstraction of relationships that emerge in nature; hence, it cannot be coupled to a certain representation. The way it is represented depends on the degree of convenience such representation provides for a certain application. The most common representations are the visual, in which, usually, vertices are drawn and connected by edges, and the tabular, in which rows of a table provide information about the relationships between the vertices within the graph.
Visual: Graph drawing
Graphs are usually represented visually by drawing a point or circle for every vertex, and drawing a line between two vertices if they are connected by an edge. If the graph is directed, the direction is indicated by drawing an arrow. If the graph is weighted, the weight is added on the arrow.
A graph drawing should not be confused with the graph itself (the abstract, non-visual structure) as there are several ways to structure the graph drawing. All that matters is which vertices are connected to which others by how many edges and not the exact layout. In practice, it is often difficult to decide if two drawings represent the same graph. Depending on the problem domain some layouts may be better suited and easier to understand than others.
The pioneering work of W. T. Tutte was very influential on the subject of graph drawing. Among other achievements, he introduced the use of linear algebraic methods to obtain graph drawings.
Graph drawing also can be said to encompass problems that deal with the crossing number and its various generalizations. The crossing number of a graph is the minimum number of intersections between edges that a drawing of the graph in the plane must contain. For a planar graph, the crossing number is zero by definition. Drawings on surfaces other than the plane are also studied.
There are other techniques to visualize a graph away from vertices and edges, including circle packings, intersection graph, and other visualizations of the adjacency matrix.
Tabular: Graph data structures
The tabular representation lends itself well to computational applications. There are different ways to store graphs in a computer system. The data structure used depends on both the graph structure and the algorithm used for manipulating the graph. Theoretically one can distinguish between list and matrix structures but in concrete applications the best structure is often a combination of both. List structures are often preferred for sparse graphs as they have smaller memory requirements. Matrix structures on the other hand provide faster access for some applications but can consume huge amounts of memory. Implementations of sparse matrix structures that are efficient on modern parallel computer architectures are an object of current investigation.
List structures include the edge list, an array of pairs of vertices, and the adjacency list, which separately lists the neighbors of each vertex: Much like the edge list, each vertex has a list of which vertices it is adjacent to.
Matrix structures include the incidence matrix, a matrix of 0's and 1's whose rows represent vertices and whose columns represent edges, and the adjacency matrix, in which both the rows and columns are indexed by vertices. In both cases a 1 indicates two adjacent objects and a 0 indicates two non-adjacent objects. The degree matrix indicates the degree of vertices. The Laplacian matrix is a modified form of the adjacency matrix that incorporates information about the degrees of the vertices, and is useful in some calculations such as Kirchhoff's theorem on the number of spanning trees of a graph.
The distance matrix, like the adjacency matrix, has both its rows and columns indexed by vertices, but rather than containing a 0 or a 1 in each cell it contains the length of a shortest path between two vertices.
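As a small illustration (not from the source), here is one graph expressed in two of the tabular forms described above, built from an edge list:

```python
# One undirected graph on vertices 0..3, in common tabular representations.
vertices = [0, 1, 2, 3]

# Edge list: an array of pairs of vertices.
edge_list = [(0, 1), (0, 2), (1, 2), (2, 3)]

# Adjacency list: for each vertex, the list of its neighbors.
adjacency_list = {v: [] for v in vertices}
for x, y in edge_list:
    adjacency_list[x].append(y)
    adjacency_list[y].append(x)

# Adjacency matrix: rows and columns indexed by vertices; 1 means adjacent.
n = len(vertices)
adjacency_matrix = [[0] * n for _ in range(n)]
for x, y in edge_list:
    adjacency_matrix[x][y] = 1
    adjacency_matrix[y][x] = 1

print(adjacency_list)    # {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(adjacency_matrix)  # [[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]]
```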
Problems
Enumeration
There is a large literature on graphical enumeration: the problem of counting graphs meeting specified conditions. Some of this work is found in Harary and Palmer (1973).
Subgraphs, induced subgraphs, and minors
A common problem, called the subgraph isomorphism problem, is finding a fixed graph as a subgraph in a given graph. One reason to be interested in such a question is that many graph properties are hereditary for subgraphs, which means that a graph has the property if and only if all subgraphs have it too.
Finding maximal subgraphs of a certain kind is often an NP-complete problem. For example:
Finding the largest complete subgraph is called the clique problem (NP-complete).
One special case of subgraph isomorphism is the graph isomorphism problem. It asks whether two graphs are isomorphic. It is not known whether this problem is NP-complete, nor whether it can be solved in polynomial time.
A similar problem is finding induced subgraphs in a given graph. Again, some important graph properties are hereditary with respect to induced subgraphs, which means that a graph has a property if and only if all induced subgraphs also have it. Finding maximal induced subgraphs of a certain kind is also often NP-complete. For example:
Finding the largest edgeless induced subgraph or independent set is called the independent set problem (NP-complete).
Still another such problem, the minor containment problem, is to find a fixed graph as a minor of a given graph. A minor or subcontraction of a graph is any graph obtained by taking a subgraph and contracting some (or no) edges. Many graph properties are hereditary for minors, which means that a graph has a property if and only if all minors have it too. For example, Wagner's Theorem states:
A graph is planar if it contains as a minor neither the complete bipartite graph K3,3 (see the Three-cottage problem) nor the complete graph K5.
A similar problem, the subdivision containment problem, is to find a fixed graph as a subdivision of a given graph. A subdivision or homeomorphism of a graph is any graph obtained by subdividing some (or no) edges. Subdivision containment is related to graph properties such as planarity. For example, Kuratowski's Theorem states:
A graph is planar if it contains as a subdivision neither the complete bipartite graph K3,3 nor the complete graph K5.
Another problem in subdivision containment is the Kelmans–Seymour conjecture:
Every 5-vertex-connected graph that is not planar contains a subdivision of the 5-vertex complete graph K5.
Another class of problems has to do with the extent to which various species and generalizations of graphs are determined by their point-deleted subgraphs. For example:
The reconstruction conjecture
Graph coloring
Many problems and theorems in graph theory have to do with various ways of coloring graphs. Typically, one is interested in coloring a graph so that no two adjacent vertices have the same color, or with other similar restrictions. One may also consider coloring edges (possibly so that no two coincident edges are the same color), or other variations. Among the famous results and conjectures concerning graph coloring are the following:
Four-color theorem
Strong perfect graph theorem
Erdős–Faber–Lovász conjecture
Total coloring conjecture, also called Behzad's conjecture (unsolved)
List coloring conjecture (unsolved)
Hadwiger conjecture (graph theory) (unsolved)
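As a small illustration of vertex coloring under the constraint that no two adjacent vertices share a color, the following sketch (not from the source) applies a simple greedy strategy; it produces a valid coloring, though not necessarily one with the minimum number of colors:

```python
# Greedy vertex coloring: assign each vertex the smallest color not used by its neighbors.
# Always valid, but not guaranteed to use the minimum number of colors.
def greedy_coloring(adjacency):
    colors = {}
    for v in adjacency:
        taken = {colors[u] for u in adjacency[v] if u in colors}
        color = 0
        while color in taken:
            color += 1
        colors[v] = color
    return colors

# A 5-cycle needs 3 colors; the greedy order below finds a valid 3-coloring.
cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(greedy_coloring(cycle5))  # {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}
```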
Subsumption and unification
Constraint modeling theories concern families of directed graphs related by a partial order. In these applications, graphs are ordered by specificity, meaning that more constrained graphs—which are more specific and thus contain a greater amount of information—are subsumed by those that are more general. Operations between graphs include evaluating the direction of a subsumption relationship between two graphs, if any, and computing graph unification. The unification of two argument graphs is defined as the most general graph (or the computation thereof) that is consistent with (i.e. contains all of the information in) the inputs, if such a graph exists; efficient unification algorithms are known.
For constraint frameworks which are strictly compositional, graph unification is the sufficient satisfiability and combination function. Well-known applications include automatic theorem proving and modeling the elaboration of linguistic structure.
Route problems
Hamiltonian path problem
Minimum spanning tree
Route inspection problem (also called the "Chinese postman problem")
Seven bridges of Königsberg
Shortest path problem
Steiner tree
Three-cottage problem
Traveling salesman problem (NP-hard)
Network flow
There are numerous problems arising especially from applications that have to do with various notions of flows in networks, for example:
Max flow min cut theorem
Visibility problems
Museum guard problem
Covering problems
Covering problems in graphs may refer to various set cover problems on subsets of vertices/subgraphs.
Dominating set problem is the special case of the set cover problem where the sets are the closed neighborhoods of vertices.
Vertex cover problem is the special case of the set cover problem in which the elements to be covered are the edges of the graph.
The original set cover problem, also called hitting set, can be described as a vertex cover in a hypergraph.
Decomposition problems
Decomposition, defined as partitioning the edge set of a graph (with as many vertices as necessary accompanying the edges of each part of the partition), has a wide variety of questions. Often, the problem is to decompose a graph into subgraphs isomorphic to a fixed graph; for instance, decomposing a complete graph into Hamiltonian cycles. Other problems specify a family of graphs into which a given graph should be decomposed, for instance, a family of cycles, or decomposing a complete graph Kn into specified trees having, respectively, 1, 2, 3, ..., n − 1 edges.
Some specific decomposition problems that have been studied include:
Arboricity, a decomposition into as few forests as possible
Cycle double cover, a decomposition into a collection of cycles covering each edge exactly twice
Edge coloring, a decomposition into as few matchings as possible
Graph factorization, a decomposition of a regular graph into regular subgraphs of given degrees
Graph classes
Many problems involve characterizing the members of various classes of graphs. Some examples of such questions are below:
Enumerating the members of a class
Characterizing a class in terms of forbidden substructures
Ascertaining relationships among classes (e.g. does one property of graphs imply another)
Finding efficient algorithms to decide membership in a class
Finding representations for members of a class
| Mathematics | Discrete mathematics | null |
12431 | https://en.wikipedia.org/wiki/Google%20Search | Google Search | Google Search (also known simply as Google or Google.com) is a search engine operated by Google. It allows users to search for information on the Web by entering keywords or phrases. Google Search uses algorithms to analyze and rank websites based on their relevance to the search query. It is the most popular search engine worldwide.
Google Search is the most-visited website in the world. As of 2020, Google Search has a 92% share of the global search engine market. Approximately 26.75% of Google's monthly global traffic comes from the United States, 4.44% from India, 4.4% from Brazil, 3.92% from the United Kingdom and 3.84% from Japan according to data provided by Similarweb.
The order of search results returned by Google is based, in part, on a priority rank system called "PageRank". Google Search also provides many different options for customized searches, using symbols to include, exclude, specify or require certain search behavior, and offers specialized interactive experiences, such as flight status and package tracking, weather forecasts, currency, unit, and time conversions, word definitions, and more.
The main purpose of Google Search is to search for text in publicly accessible documents offered by web servers, as opposed to other data, such as images or data contained in databases. It was originally developed in 1996 by Larry Page, Sergey Brin, and Scott Hassan. The search engine was later set up in the garage of Susan Wojcicki's home in Menlo Park. In 2011, Google introduced "Google Voice Search" to search for spoken, rather than typed, words. In 2012, Google introduced a semantic search feature named Knowledge Graph.
Analysis of the frequency of search terms may indicate economic, social and health trends. Data about the frequency of use of search terms on Google can be openly inquired via Google Trends and have been shown to correlate with flu outbreaks and unemployment levels, and provide the information faster than traditional reporting methods and surveys. As of mid-2016, Google's search engine has begun to rely on deep neural networks.
In August 2024, a US judge in Virginia ruled that Google's search engine held an illegal monopoly over Internet search. The court found that Google maintained its market dominance by paying large amounts to phone-makers and browser-developers to make Google its default search engine.
Search indexing
Google indexes hundreds of terabytes of information from web pages. For websites that are currently down or otherwise not available, Google provides links to cached versions of the site, formed by the search engine's latest indexing of that page. Additionally, Google indexes some file types, being able to show users PDFs, Word documents, Excel spreadsheets, PowerPoint presentations, certain Flash multimedia content, and plain text files. Users can also activate "SafeSearch", a filtering technology aimed at preventing explicit and pornographic content from appearing in search results.
Despite Google search's immense index, sources generally assume that Google is only indexing less than 5% of the total Internet, with the rest belonging to the deep web, inaccessible through its search tools.
In 2012, Google changed its search indexing tools to demote sites that had been accused of piracy. In October 2016, Gary Illyes, a webmaster trends analyst with Google, announced that the search engine would be making a separate, primary web index dedicated for mobile devices, with a secondary, less up-to-date index for desktop use. The change was a response to the continued growth in mobile usage, and a push for web developers to adopt a mobile-friendly version of their websites. In December 2017, Google began rolling out the change, having already done so for multiple websites.
"Caffeine" search architecture upgrade
In August 2009, Google invited web developers to test a new search architecture, codenamed "Caffeine", and give their feedback. The new architecture provided no visual differences in the user interface, but added significant speed improvements and a new "under-the-hood" indexing infrastructure. The move was interpreted in some quarters as a response to Microsoft's recent release of an upgraded version of its own search service, renamed Bing, as well as the launch of Wolfram Alpha, a new search engine based on "computational knowledge". Google announced completion of "Caffeine" on June 8, 2010, claiming 50% fresher results due to continuous updating of its index.
With "Caffeine", Google moved its back-end indexing system away from MapReduce and onto Bigtable, the company's distributed database platform.
"Medic" search algorithm update
In August 2018, Danny Sullivan from Google announced a broad core algorithm update. According to analysis by the industry publications Search Engine Watch and Search Engine Land, the update appeared to demote medical and health-related websites that were not user friendly and did not provide a good user experience, which is why industry experts named it "Medic".
Google holds YMYL (Your Money or Your Life) pages to very high standards because misinformation on such pages can affect users financially, physically, or emotionally. The update therefore particularly targeted YMYL pages with low-quality content and misinformation, which resulted in the algorithm affecting health and medical-related websites more than others. However, many websites from other industries were also negatively affected.
Search results
Ranking of results
By 2012, Google Search handled more than 3.5 billion searches per day. In 2013 the European Commission found that Google Search favored Google's own products, instead of the best result for consumers' needs. In February 2015 Google announced a major change to its mobile search algorithm that would favor mobile-friendly websites over others. Nearly 60% of Google searches come from mobile phones. Google says it wants users to have access to premium quality websites. Websites that lack a mobile-friendly interface would be ranked lower, and the update was expected to cause a shake-up of rankings. Businesses that failed to update their websites accordingly could see a dip in their regular website traffic.
PageRank
Google's rise was largely due to a patented algorithm called PageRank which helps rank web pages that match a given search string. When Google was a Stanford research project, it was nicknamed BackRub because the technology checks backlinks to determine a site's importance. Other keyword-based methods to rank search results, used by many search engines that were once more popular than Google, would check how often the search terms occurred in a page, or how strongly associated the search terms were within each resulting page. The PageRank algorithm instead analyzes human-generated links assuming that web pages linked from many important pages are also important. The algorithm computes a recursive score for pages, based on the weighted sum of other pages linking to them. PageRank is thought to correlate well with human concepts of importance. In addition to PageRank, Google, over the years, has added many other secret criteria for determining the ranking of resulting pages. This is reported to comprise over 250 different indicators, the specifics of which are kept secret to avoid difficulties created by scammers and help Google maintain an edge over its competitors globally.
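The recursion described above can be illustrated with a small sketch. The following Python code is a minimal, simplified rendering of the published PageRank idea (uniform teleportation with the commonly cited damping factor of 0.85); it is not Google's production ranking, which layers hundreds of additional signals on top, and the four-page link graph is purely hypothetical.

    # Minimal sketch of the PageRank recursion, not Google's production system.
    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}  # start from a uniform score
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outgoing in links.items():
                if outgoing:
                    share = damping * rank[page] / len(outgoing)
                    for target in outgoing:
                        new_rank[target] += share
            rank = new_rank
        return rank

    # Hypothetical four-page web: the page with the most incoming links ranks highest.
    web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    print(sorted(pagerank(web).items(), key=lambda kv: -kv[1]))

In this toy graph, page C collects links from three other pages and therefore receives the highest score, mirroring the intuition that pages linked from many important pages are themselves important.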
PageRank was influenced by a similar page-ranking and site-scoring algorithm earlier used for RankDex, developed by Robin Li in 1996. Larry Page's patent for PageRank filed in 1998 includes a citation to Li's earlier patent. Li later went on to create the Chinese search engine Baidu in 2000.
In a potential hint of Google's future direction for their Search algorithm, Google's then chief executive Eric Schmidt said in a 2007 interview with the Financial Times: "The goal is to enable Google users to be able to ask the question such as 'What shall I do tomorrow?' and 'What job shall I take?'" Schmidt reaffirmed this during a 2010 interview with The Wall Street Journal: "I actually think most people don't want Google to answer their questions, they want Google to tell them what they should be doing next."
Google optimization
Because Google is the most popular search engine, many webmasters attempt to influence their website's Google rankings. An industry of consultants has arisen to help websites increase their rankings on Google and other search engines. This field, called search engine optimization, attempts to discern patterns in search engine listings, and then develop a methodology for improving rankings to draw more searchers to their clients' sites. Search engine optimization encompasses both "on-page" factors (like body copy, title elements, H1 heading elements and image alt attribute values) and "off-page" factors (like anchor text and PageRank). The general idea is to affect Google's relevance algorithm by incorporating the targeted keywords in various places "on page", in particular the title element and the body copy (the higher up in the page, presumably the better its keyword prominence and thus the ranking). Too many occurrences of the keyword, however, cause the page to look suspect to Google's spam-checking algorithms. Google has published guidelines for website owners who would like to raise their rankings when using legitimate optimization consultants.
It has been hypothesized, and allegedly is the opinion of the owner of one business about which there have been numerous complaints, that negative publicity, such as numerous consumer complaints, may serve to elevate a page's ranking on Google Search as much as favorable comments do. The particular problem addressed in The New York Times article, which involved DecorMyEyes, was addressed shortly thereafter by an undisclosed fix in the Google algorithm. According to Google, it was not the frequently published consumer complaints about DecorMyEyes which resulted in the high ranking but mentions on news websites of events which affected the firm, such as legal actions against it. Google Search Console helps to check for websites that use duplicate or copyrighted content.
"Hummingbird" search algorithm upgrade
In 2013, Google significantly upgraded its search algorithm with "Hummingbird". Its name was derived from the speed and accuracy of the hummingbird. The change was announced on September 26, 2013, having already been in use for a month. "Hummingbird" places greater emphasis on natural language queries, considering context and meaning over individual keywords. It also looks deeper at content on individual pages of a website, with improved ability to lead users directly to the most appropriate page rather than just a website's homepage. The upgrade marked the most significant change to Google search in years, with more "human" search interactions and a much heavier focus on conversation and meaning. Thus, web developers and writers were encouraged to optimize their sites with natural writing rather than forced keywords, and make effective use of technical web development for on-site navigation.
Search results quality
In 2023, drawing on internal Google documents disclosed as part of the United States v. Google LLC (2020) antitrust case, technology reporters claimed that Google Search was "bloated and overmonetized" and that the "semantic matching" of search queries put advertising profits before quality. Wired withdrew Megan Gray's piece after Google complained about alleged inaccuracies, while the author reiterated that "As stated in court, 'A goal of Project Mercury was to increase commercial queries'".
In March 2024, Google announced a significant update to its core search algorithm and spam targeting, which it expected to wipe out 40 percent of all spam results. On March 20, it was confirmed that the rollout of the spam update was complete.
Shopping search
On September 10, 2024, the European Court of Justice found that Google had illegally abused its market dominance by favoring its own shopping search, and could not avoid paying a €2.4 billion fine. The court referred to Google's treatment of rival shopping services as "discriminatory" and in violation of EU competition rules.
Interface
Page layout
At the top of the results page, Google shows the approximate result count and the response time, rounded to two decimal places. Each search result displays a page title, URL, date, and a preview text snippet. Along with web search results, sections with images, news, and videos may appear. The length of the previewed text snippet was experimented with in 2015 and 2017.
Universal search
"Universal search" was launched by Google on May 16, 2007, as an idea that merged the results from different kinds of search types into one. Prior to Universal search, a standard Google search would consist of links only to websites. Universal search, however, incorporates a wide variety of sources, including websites, news, pictures, maps, blogs, videos, and more, all shown on the same search results page. Marissa Mayer, then-vice president of search products and user experience, described the goal of Universal search as "we're attempting to break down the walls that traditionally separated our various search properties and integrate the vast amounts of information available into one simple set of search results.
In June 2017, Google expanded its search results to cover available job listings. The data is aggregated from various major job boards and collected by analyzing company homepages. Initially only available in English, the feature aims to simplify finding jobs suitable for each user.
Rich snippets
In May 2009, Google announced that they would be parsing website microformats to populate search result pages with "Rich snippets". Such snippets include additional details about results, such as displaying reviews for restaurants and social media accounts for individuals.
In May 2016, Google expanded on the "Rich snippets" format to offer "Rich cards", which, similarly to snippets, display more information about results, but show them at the top of the mobile website in a swipeable carousel-like format. Originally limited to movie and recipe websites in the United States only, the feature expanded to all countries globally in 2017.
Knowledge Graph
The Knowledge Graph is a knowledge base used by Google to enhance its search engine's results with information gathered from a variety of sources. This information is presented to users in a box to the right of search results. Knowledge Graph boxes were added to Google's search engine in May 2012, starting in the United States, with international expansion by the end of the year. The information covered by the Knowledge Graph grew significantly after launch, tripling its original size within seven months, and being able to answer "roughly one-third" of the 100 billion monthly searches Google processed in May 2016. The information is often used as a spoken answer in Google Assistant and Google Home searches. The Knowledge Graph has been criticized for providing answers without source attribution.
Google Knowledge Panel
A Google Knowledge Panel is a feature integrated into Google search engine result pages, designed to present a structured overview of entities such as individuals, organizations, locations, or objects directly within the search interface. This feature leverages data from Google's Knowledge Graph, a database that organizes and interconnects information about entities, enhancing the retrieval and presentation of relevant content to users.
The content within a Knowledge Panel is derived from various sources, including Wikipedia and other structured databases, ensuring that the information displayed is both accurate and contextually relevant. For instance, querying a well-known public figure may trigger a Knowledge Panel displaying essential details such as biographical information, birthdate, and links to social media profiles or official websites.
The primary objective of the Google Knowledge Panel is to provide users with immediate, factual answers, reducing the need for extensive navigation across multiple web pages.
Personal tab
In May 2017, Google enabled a new "Personal" tab in Google Search, letting users search for content in their Google accounts' various services, including email messages from Gmail and photos from Google Photos.
Google Discover
Google Discover, previously known as Google Feed, is a personalized stream of articles, videos, and other news-related content. The feed contains a "mix of cards" which show topics of interest based on users' interactions with Google, or topics they choose to follow directly. Cards include, "links to news stories, YouTube videos, sports scores, recipes, and other content based on what [Google] determined you're most likely to be interested in at that particular moment." Users can also tell Google they're not interested in certain topics to avoid seeing future updates.
Google Discover launched in December 2016 and received a major update in July 2017. Another major update was released in September 2018, which renamed the app from Google Feed to Google Discover, updated the design, and added more features.
Discover can be found on a tab in the Google app and by swiping left on the home screen of certain Android devices. As of 2019, Google does not allow political campaigns worldwide to target their advertising at people in order to influence their vote.
AI Overviews
At the 2023 Google I/O event in May, Google unveiled Search Generative Experience (SGE), an experimental feature in Google Search available through Google Labs which produces AI-generated summaries in response to search prompts. This was part of Google's wider efforts to counter the unprecedented rise of generative AI technology, ushered in by OpenAI's launch of ChatGPT, which sent Google executives into a panic due to its potential threat to Google Search. Google added the ability to generate images in October. At I/O in 2024, the feature was upgraded and renamed AI Overviews.
AI Overviews was rolled out to users in the United States in May 2024. The feature faced public criticism in the first weeks of its rollout after errors from the tool went viral online. These included results suggesting users add glue to pizza or eat rocks, or incorrectly claiming Barack Obama is Muslim. Google described these viral errors as "isolated examples", maintaining that most AI Overviews provide accurate information. Two weeks after the rollout of AI Overviews, Google made technical changes and scaled back the feature, pausing its use for some health-related queries and limiting its reliance on social media posts. Scientific American has criticized the system on environmental grounds, as such a search uses 30 times more energy than a conventional one. It has also been criticized for condensing information from various sources, making it less likely for people to view full articles and websites. When it was announced in May 2024, Danielle Coffey, CEO of the News/Media Alliance, was quoted as saying "This will be catastrophic to our traffic, as marketed by Google to further satisfy user queries, leaving even less incentive to click through so that we can monetize our content."
In August 2024, AI Overviews were rolled out in the UK, India, Japan, Indonesia, Mexico and Brazil, with local language support. On October 28, 2024, AI Overviews was rolled out to 100 more countries, including Australia and New Zealand.
Redesigns
In late June 2011, Google introduced a new look to the Google homepage in order to boost the use of the Google+ social tools.
One of the major changes was replacing the classic navigation bar with a black one. Google's digital creative director Chris Wiggins explains: "We're working on a project to bring you a new and improved Google experience, and over the next few months, you'll continue to see more updates to our look and feel." The new navigation bar has been negatively received by a vocal minority.
In November 2013, Google started testing yellow labels for advertisements displayed in search results, to improve user experience. The new labels, highlighted in yellow and aligned to the left of each sponsored link, help users differentiate between organic and sponsored results.
On December 15, 2016, Google rolled out a new desktop search interface that mimics their modular mobile user interface. The mobile design consists of a tabular layout that highlights search features in boxes and imitates the desktop Knowledge Graph real estate, which appears in the right-hand rail of the search engine results page. These featured elements frequently include Twitter carousels, "People Also Search For", and "Top Stories" (vertical and horizontal design) modules. The Local Pack and Answer Box were two of the original features of the Google SERP that were primarily showcased in this manner, but this new layout creates a previously unseen level of design consistency for Google results.
Smartphone apps
Google offers a "Google Search" mobile app for Android and iOS devices. The mobile apps exclusively feature Google Discover and a "Collections" feature, in which the user can save for later perusal any type of search result like images, bookmarks or map locations into groups. Android devices were introduced to a preview of the feed, perceived as related to Google Now, in December 2016, while it was made official on both Android and iOS in July 2017.
In April 2016, Google updated its Search app on Android to feature "Trends"; search queries gaining popularity appeared in the autocomplete box along with normal query autocompletion. The update received significant backlash, due to encouraging search queries unrelated to users' interests or intentions, prompting the company to issue an update with an opt-out option. In September 2017, the Google Search app on iOS was updated to feature the same functionality.
In December 2017, Google released "Google Go", an app designed to enable use of Google Search on physically smaller and lower-spec devices in multiple languages. A Google blog post about designing "India-first" products and features explains that it is "tailor-made for the millions of people in [India and Indonesia] coming online for the first time".
Performing a search
Google Search consists of a series of localized websites. The largest of those, the google.com site, is the top most-visited website in the world. Some of its features include a definition link for most searches including dictionary words, the number of results returned for the search, links to other searches (e.g. for words that Google believes to be misspelled, it provides a link to the search results using its proposed spelling), the ability to filter results to a date range, and many more.
Search syntax
Google search accepts queries as normal text, as well as individual keywords. It automatically corrects apparent misspellings by default (while offering to use the original spelling as a selectable alternative), and provides the same results regardless of capitalization. For more customized results, one can use a wide variety of operators, including, but not limited to:
OR or | – Search for webpages containing one of two similar queries, such as marathon OR race
AND – Search for webpages containing two similar queries, such as marathon AND runner
- (minus sign) – Exclude a word or a phrase, so that "apple -tree" returns results in which the word "tree" is not used
"" – Force inclusion of a word or a phrase, such as "tallest building"
* – Placeholder symbol allowing for any substitute words in the context of the query, such as "largest * in the world"
.. – Search within a range of numbers, such as "camera $50..$100"
site: – Search within a specific website, such as "site:youtube.com"
define: – Search for definitions for a word or phrase, such as "define:phrase"
stocks: – See the stock price of investments, such as "stocks:googl"
related: – Find web pages related to specific URL addresses, such as "related:www.wikipedia.org"
cache: – Highlights the search-words within the cached pages, so that "cache:www.google.com xxx" shows cached content with word "xxx" highlighted.
( ) – Group operators and searches, such as (marathon OR race) AND shoes
filetype: or ext: – Search for specific file types, such as filetype:gif
before: – Search for before a specific date, such as spacex before:2020-08-11
after: – Search for after a specific date, such as iphone after:2007-06-29
@ – Search for a specific word on social media networks, such as "@twitter"
Google also offers a Google Advanced Search page with a web interface to access the advanced features without needing to remember the special operators.
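As an illustration of how these operators can be combined, the short Python snippet below assembles one query string and URL-encodes it for the familiar google.com/search?q= form; the query itself is a made-up example, and the snippet only demonstrates the operator syntax, not any official API.

    # Hypothetical example combining several of the operators listed above.
    from urllib.parse import quote_plus

    query = '"climate report" filetype:pdf site:un.org -draft after:2020-01-01'
    url = "https://www.google.com/search?q=" + quote_plus(query)
    print(url)
    # https://www.google.com/search?q=%22climate+report%22+filetype%3Apdf+site%3Aun.org+-draft+after%3A2020-01-01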
Query expansion
Google applies query expansion to submitted search queries, using techniques to deliver results that it considers "smarter" than the query users actually submitted. This technique involves several steps, including:
Word stemming – Certain words can be reduced to their stems so that similar terms are also found in results; for example, a search for "translator" can also surface results for "translation" (see the illustrative sketch after this list)
Acronyms – Searching for abbreviations can also return results about the name in its full length, so that "NATO" can show results for "North Atlantic Treaty Organization"
Misspellings – Google will often suggest correct spellings for misspelled words
Synonyms – In most cases where a word is incorrectly used in a phrase or sentence, Google search will show results based on the correct synonym
Translations – The search engine can, in some instances, suggest results for specific words in a different language
Ignoring words – In some search queries containing extraneous or insignificant words, Google search will simply drop those specific words from the query
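To make the stemming step above concrete, the deliberately naive Python sketch below strips a few common suffixes so that related word forms collapse to the same stem. Google's real query-expansion pipeline is far more sophisticated and is not public, so this is an illustration of the idea only.

    # Naive suffix-stripping stemmer, for illustration only.
    SUFFIXES = ("ations", "ation", "ators", "ator", "ions", "ion", "ing", "ed", "s")

    def naive_stem(word):
        for suffix in SUFFIXES:
            if word.endswith(suffix) and len(word) - len(suffix) >= 4:
                return word[: -len(suffix)]
        return word

    # Both forms reduce to the same stem, so a search for one can match the other.
    print(naive_stem("translator"), naive_stem("translation"))  # transl transl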
In 2008, Google started to give users autocompleted search suggestions in a list below the search bar while typing, originally with the approximate result count previewed for each listed search suggestion.
"I'm Feeling Lucky"
Google's homepage includes a button labeled "I'm Feeling Lucky". This feature originally allowed users to type in their search query, click the button and be taken directly to the first result, bypassing the search results page. Clicking it while leaving the search box empty opens Google's archive of Doodles. With the 2010 announcement of Google Instant, an automatic feature that immediately displays relevant results as users are typing in their query, the "I'm Feeling Lucky" button disappeared, requiring users to opt out of Instant results through search settings in order to keep using the "I'm Feeling Lucky" functionality. In 2012, "I'm Feeling Lucky" was changed to serve as an advertisement for Google services; when users hover the mouse over the button, it spins and shows an emotion ("I'm Feeling Puzzled" or "I'm Feeling Trendy", for instance), and, when clicked, takes users to a Google service related to that emotion.
Tom Chavez of "Rapt", a firm helping to determine a website's advertising worth, estimated in 2007 that Google lost $110 million in revenue per year due to use of the button, which bypasses the advertisements found on the search results page.
Special interactive features
Besides the main text-based search-engine function of Google search, it also offers multiple quick, interactive features. These include, but are not limited to:
Calculator
Time zone, currency, and unit conversions
Word translations
Flight status
Local film showings
Weather forecasts
Population and unemployment rates
Package tracking
Word definitions
Metronome
Roll a die
"Do a barrel roll" (search page spins)
"Askew" (results show up sideways)
"OK Google" conversational search
During Google's developer conference, Google I/O, in May 2013, the company announced that users on Google Chrome and ChromeOS would be able to have the browser initiate an audio-based search by saying "OK Google", with no button presses required. After having the answer presented, users can follow up with additional, contextual questions; for example, a user might initially ask "OK Google, will it be sunny in Santa Cruz this weekend?", hear a spoken answer, and then reply with "how far is it from here?" An update to the Chrome browser with voice-search functionality rolled out a week later, though it required a button press on a microphone icon rather than "OK Google" voice activation. Shortly thereafter, Google released a browser extension for the Chrome browser, carrying a "beta" tag to mark unfinished development. In May 2014, the company officially added "OK Google" into the browser itself; it removed the feature in October 2015, citing low usage, though the microphone icon for activation remained available. In May 2016, 20% of search queries on mobile devices were done through voice.
Operations
Search products
In addition to its tool for searching web pages, Google also provides services for searching images, Usenet newsgroups, news websites, videos (Google Videos), searching by locality, maps, and items for sale online. Google Videos allows searching the World Wide Web for video clips. The service evolved from Google Video, Google's discontinued video hosting service that also allowed users to search the web for video clips.
By 2012, Google had indexed over 30 trillion web pages and was receiving 100 billion queries per month. It also caches much of the content that it indexes. Google operates other tools and services including Google News, Google Shopping, Google Maps, Google Custom Search, Google Earth, Google Docs, Picasa (discontinued), Panoramio (discontinued), YouTube, Google Translate, Google Blog Search and Google Desktop Search (discontinued).
There are also products available from Google that are not directly search-related. Gmail, for example, is a webmail application, but still includes search features; Google Browser Sync does not offer any search facilities, although it aims to organize the user's browsing.
Energy consumption
In 2009, Google claimed that a search query requires altogether about 1 kJ or 0.0003 kW·h, which is enough to raise the temperature of one liter of water by 0.24 °C. According to green search engine Ecosia, the industry standard for search engines is estimated to be about 0.2 grams of CO2 emission per search. Google's 40,000 searches per second translate to 8 kg CO2 per second or over 252 million kilos of CO2 per year.
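The figures quoted above can be checked with simple arithmetic. The short Python calculation below uses the stated assumptions (1 kJ per search, 0.2 g of CO2 per search, 40,000 searches per second, and water's specific heat of roughly 4.19 kJ per kg per °C) and reproduces the 0.24 °C, 8 kg per second, and roughly 252 million kg per year numbers.

    # Back-of-the-envelope check of the energy and CO2 figures quoted above.
    energy_per_search_j = 1_000          # 1 kJ per search (Google's 2009 figure)
    water_heat_capacity = 4_186          # J per kg per deg C; 1 liter of water is ~1 kg
    print(energy_per_search_j / water_heat_capacity)        # ~0.24 deg C rise per search

    co2_per_search_g = 0.2               # industry estimate cited by Ecosia
    searches_per_second = 40_000
    co2_per_second_kg = co2_per_search_g * searches_per_second / 1_000
    print(co2_per_second_kg)                                 # 8.0 kg CO2 per second
    print(co2_per_second_kg * 60 * 60 * 24 * 365 / 1e6)      # ~252 million kg CO2 per year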
Google Doodles
On certain occasions, the logo on Google's webpage will change to a special version, known as a "Google Doodle". This is a picture, drawing, animation, or interactive game that includes the logo. It is usually done for a special event or day although not all of them are well known. Clicking on the Doodle links to a string of Google search results about the topic. The first was a reference to the Burning Man Festival in 1998, and others have been produced for the birthdays of notable people like Albert Einstein, historical events like the interlocking Lego block's 50th anniversary and holidays like Valentine's Day. Some Google Doodles have interactivity beyond a simple search, such as the famous "Google Pac-Man" version that appeared on May 21, 2010.
Criticism
Privacy
Google has been criticized for placing long-term cookies on users' machines to store preferences, a tactic which also enables them to track a user's search terms and retain the data for more than a year.
Since 2012, Google Inc. has globally introduced encrypted connections for most of its clients, in order to bypass government blocking of its commercial and IT services.
Complaints about indexing
In 2003, The New York Times complained about Google's indexing, claiming that Google's caching of content on its site infringed its copyright for the content. In both Field v. Google and Parker v. Google, the United States District Court of Nevada ruled in favor of Google.
Child sexual abuse
A 2019 New York Times article on Google Search showed that images of child sexual abuse had been found on Google and that the company had been reluctant at times to remove them.
January 2009 malware bug
Google flags search results with the message "This site may harm your computer" if the site is known to install malicious software in the background or otherwise surreptitiously. For approximately 40 minutes on January 31, 2009, all search results were mistakenly classified as malware and could therefore not be clicked; instead a warning message was displayed and the user was required to enter the requested URL manually. The bug was caused by human error. The URL of "/" (which expands to all URLs) was mistakenly added to the malware patterns file.
Possible misuse of search results
In 2007, a group of researchers observed a tendency for users to rely exclusively on Google Search for finding information, writing that "With the Google interface the user gets the impression that the search results imply a kind of totality. ... In fact, one only sees a small part of what one could see if one also integrates other research tools."
In 2011, Internet activist Eli Pariser showed that Google Search query results are tailored to individual users, effectively isolating them in what he defined as a filter bubble. Pariser holds algorithms used in search engines such as Google Search responsible for catering "a personal ecosystem of information". Although contrasting views have mitigated the potential threat of "informational dystopia" and questioned the scientific nature of Pariser's claims, filter bubbles have been mentioned to account for the surprising results of the U.S. presidential election in 2016 alongside fake news and echo chambers, suggesting that Facebook and Google have designed personalized online realities in which "we only see and hear what we like".
FTC fines
In 2012, the US Federal Trade Commission fined Google US$22.5 million for breaching its agreement not to violate the privacy of users of Apple's Safari web browser. The FTC was also continuing to investigate whether Google's favoring of its own services in its search results violated antitrust regulations.
Payments to Apple
In a November 2023 disclosure, during the ongoing antitrust trial against Google, an economics professor at the University of Chicago revealed that Google pays Apple 36% of all search advertising revenue generated when users access Google through the Safari browser. This revelation reportedly caused Google's lead attorney to cringe visibly. The revenue generated from Safari users has been kept confidential, but the 36% figure suggests that it is likely in the tens of billions of dollars.
Both Apple and Google have argued that disclosing the specific terms of their search default agreement would harm their competitive positions. However, the court ruled that the information was relevant to the antitrust case and ordered its disclosure. This revelation has raised concerns about the dominance of Google in the search engine market and the potential anticompetitive effects of its agreements with Apple.
Big data and human bias
Google search engine robots are programmed to use algorithms that understand and predict human behavior. The book Race After Technology: Abolitionist Tools for the New Jim Code by Ruha Benjamin discusses human bias as a behavior that the Google search engine can recognize. In 2016, some users searched Google for "three Black teenagers" and were shown images of criminal mugshots of young African American teenagers. When the same users searched for "three White teenagers", they were presented with photos of smiling, happy teenagers, and a search for "three Asian teenagers" returned very revealing photos of Asian girls and women. Benjamin concluded that these results reflect human prejudice and views on different ethnic groups. A group of analysts explained the concept of a racist computer program: "The idea here is that computers, unlike people, can't be racist but we're increasingly learning that they do in fact take after their makers ... Some experts believe that this problem might stem from the hidden biases in the massive piles of data that the algorithms process as they learn to recognize patterns ... reproducing our worst values".
Monopoly ruling
On August 5, 2024, Google lost a lawsuit, filed in 2020 in the U.S. District Court for the District of Columbia, in which Judge Amit Mehta found that the company had an illegal monopoly over Internet search. This monopoly was held to be in violation of Section 2 of the Sherman Act. Google has said it will appeal the ruling, though it did propose loosening its search deals with Apple and others that require them to set Google as the default search engine.
Trademark
As people talk about "googling" rather than searching, the company has taken some steps to defend its trademark, in an effort to prevent it from becoming a generic trademark. This has led to lawsuits, threats of lawsuits, and the use of euphemisms, such as calling Google Search a famous web search engine.
Discontinued features
Translate foreign pages
Until May 2013, Google Search had offered a feature to translate search queries into other languages. A Google spokesperson told Search Engine Land that "Removing features is always tough, but we do think very hard about each decision and its implications for our users. Unfortunately, this feature never saw much pick up".
Instant search
Instant search was announced in September 2010 as a feature that displayed suggested results while the user typed in their search query, initially only in select countries or to registered users. The primary advantage of the new system was its ability to save time, with Marissa Mayer, then-vice president of search products and user experience, proclaiming that the feature would save 2–5 seconds per search, elaborating that "That may not seem like a lot at first, but it adds up. With Google Instant, we estimate that we'll save our users 11 hours with each passing second!" Matt Van Wagner of Search Engine Land wrote that "Personally, I kind of like Google Instant and I think it represents a natural evolution in the way search works", and also praised Google's efforts in public relations, writing that "With just a press conference and a few well-placed interviews, Google has parlayed this relatively minor speed improvement into an attention-grabbing front-page news story". The upgrade also became notable for the company switching Google Search's underlying technology from HTML to AJAX.
Instant Search could be disabled via Google's "preferences" menu for those who didn't want its functionality.
The publication 2600: The Hacker Quarterly compiled a list of words for which Google Instant did not show suggested results, and a Google spokesperson gave a statement about the list to Mashable.
PC Magazine discussed the inconsistency in how some forms of the same topic are allowed; for instance, "lesbian" was blocked, while "gay" was not, and "cocaine" was blocked, while "crack" and "heroin" were not. The report further stated that seemingly normal words were also blocked due to pornographic innuendos, most notably "scat", likely due to having two completely separate contextual meanings, one for music and one for a sexual practice.
On July 26, 2017, Google removed Instant results, due to a growing number of searches on mobile devices, where interaction with search, as well as screen sizes, differ significantly from a computer.
"Instant previews" allowed previewing screenshots of search results' web pages without having to open them. The feature was introduced in November 2010 to the desktop website and removed in April 2013 citing low usage.
Dedicated encrypted search page
Various search engines provide encrypted Web search facilities. In May 2010, Google rolled out SSL-encrypted web search. The encrypted search was accessed at encrypted.google.com. However, the web search is encrypted via Transport Layer Security (TLS) by default today, so every search request should be automatically encrypted if TLS is supported by the web browser. On its support website, Google announced that the address encrypted.google.com would be turned off April 30, 2018, stating that all Google products and most new browsers use HTTPS connections as the reason for the discontinuation.
Real-Time Search
Google Real-Time Search was a feature of Google Search in which search results also sometimes included real-time information from sources such as Twitter, Facebook, blogs, and news websites. The feature was introduced on December 7, 2009, and went offline on July 2, 2011, after the deal with Twitter expired. Real-Time Search included Facebook status updates beginning on February 24, 2010. A feature similar to Real-Time Search was already available on Microsoft's Bing search engine, which showed results from Twitter and Facebook. The interface for the engine showed a live, descending "river" of posts in the main region (which could be paused or resumed), while a bar chart metric of the frequency of posts containing a certain search term or hashtag was located on the right hand corner of the page above a list of most frequently reposted posts and outgoing links. Hashtag search links were also supported, as were "promoted" tweets hosted by Twitter (located persistently on top of the river) and thumbnails of retweeted image or video links.
In January 2011, geolocation links of posts were made available alongside results in Real-Time Search. In addition, posts containing syndicated or attached shortened links were made searchable by the link: query option. In July 2011, Real-Time Search became inaccessible, with the Real-Time link in the Google sidebar disappearing and a custom 404 error page generated by Google returned at its former URL. Google originally suggested that the interruption was temporary and related to the launch of Google+; they subsequently announced that it was due to the expiry of a commercial arrangement with Twitter to provide access to tweets.
| Technology | Search engines | null |
12436 | https://en.wikipedia.org/wiki/Grape | Grape | A grape is a fruit, botanically a berry, of the deciduous woody vines of the flowering plant genus Vitis. Grapes are a non-climacteric type of fruit, generally occurring in clusters.
The cultivation of grapes began approximately 8,000 years ago, and the fruit has been used as human food throughout its history. Eaten fresh or in dried form (as raisins, currants and sultanas), grapes also hold cultural significance in many parts of the world, particularly for their role in winemaking. Other grape-derived products include various types of jam, juice, vinegar and oil.
History
The Middle East is generally described as the homeland of grapes and the cultivation of this plant began there 6,000–8,000 years ago. Yeast, one of the earliest domesticated microorganisms, occurs naturally on the skins of grapes, leading to the discovery of alcoholic drinks such as wine. The earliest archeological evidence for a dominant position of wine-making in human culture dates from 8,000 years ago in Georgia.
The oldest known winery, the Areni-1 winery, was found in Armenia and dated back to around 4000 BC. By the 9th century AD, the city of Shiraz was known to produce some of the finest wines in the Middle East. Thus it has been proposed that Syrah red wine is named after Shiraz, a city in Persia where the grape was used to make Shirazi wine.
Ancient Egyptian hieroglyphics record the cultivation of purple grapes, and history attests to the ancient Greeks, Cypriots, Phoenicians, and Romans growing purple grapes both for eating and wine production. The growing of grapes would later spread to other regions in Europe, as well as North Africa, and eventually North America.
In 2005, a team of archaeologists concluded that Chalcolithic wine jars discovered in Cyprus in the 1930s dated back to 3500 BC, making them the oldest of their kind in the world. Commandaria, a sweet dessert wine from Cyprus, is the oldest manufactured wine in the world with origins as far back as 2000 BC.
In North America, native grapes belonging to various species of the genus Vitis proliferate in the wild across the continent and were a part of the diet of many Native Americans, but they were considered by early European colonists to be unsuitable for wine. In the 19th century, Ephraim Bull of Concord, Massachusetts, cultivated seeds from wild Vitis labrusca vines to create the Concord grape, which would become an important agricultural crop in the United States.
Description
Grapes are a type of fruit that grow in clusters of 15 to 300 and can be crimson, black, dark blue, yellow, green, orange, and pink. "White" grapes are actually green in color and are evolutionarily derived from the purple grape. Mutations in two regulatory genes of white grapes turn off production of anthocyanins, which are responsible for the color of purple grapes. Anthocyanins and other pigment chemicals of the larger family of polyphenols in purple grapes are responsible for the varying shades of purple in red wines. Grapes are typically an ellipsoid shape resembling a prolate spheroid.
Nutrition
Raw grapes are 81% water, 18% carbohydrates, 1% protein, and have negligible fat (table). A reference amount of raw grapes supplies food energy and a moderate amount of vitamin K (14% of the Daily Value), with no other micronutrients in significant amounts.
Grapevines
Most domesticated grapes come from cultivars of Vitis vinifera, a grapevine native to the Mediterranean and Central Asia. Minor amounts of fruit and wine come from American and Asian species such as:
Vitis amurensis, the most important Asian species
Vitis labrusca, the North American table and grape juice grapevines (including the Concord cultivar), sometimes used for wine, are native to the Eastern United States and Canada.
Vitis mustangensis (the mustang grape), found in Mississippi, Alabama, Louisiana, Texas, and Oklahoma
Vitis riparia, a wild vine of North America, is sometimes used for winemaking and for jam. It is native to the entire Eastern United States and north to Quebec.
Vitis rotundifolia (the muscadine), used for jams and wine, is native to the Southeastern United States from Delaware to the Gulf of Mexico.
Trade
Distribution and production
According to the Food and Agriculture Organization (FAO), 75,866 square kilometers of the world are dedicated to grapes. Approximately 71% of world grape production is used for wine, 27% as fresh fruit, and 2% as dried fruit. A portion of grape production goes to producing grape juice to be reconstituted for fruits canned "with no added sugar" and "100% natural". The area dedicated to vineyards is increasing by about 2% per year.
There are no reliable statistics that break down grape production by variety. It is believed that the most widely planted variety is Sultana, also known as Thompson Seedless, with at least 3,600 km2 (880,000 acres) dedicated to it. The second most common variety is Airén. Other popular varieties include Cabernet Sauvignon, Sauvignon blanc, Cabernet Franc, Merlot, Grenache, Tempranillo, Riesling, and Chardonnay.
Table and wine grapes
Commercially cultivated grapes can usually be classified as either table or wine grapes, based on their intended method of consumption: eaten raw (table grapes) or used to make wine (wine grapes). The sweetness of grapes depends on when they are harvested, as they do not continue to ripen once picked. While almost all of them belong to the same species, Vitis vinifera, table and wine grapes have significant differences, brought about through selective breeding. Table grape cultivars tend to have large, seedless fruit (see below) with relatively thin skin. Wine grapes are smaller, usually seeded, and have relatively thick skins (a desirable characteristic in winemaking, since much of the aroma in wine comes from the skin). Wine grapes also tend to be very sweet: they are harvested at the time when their juice is approximately 24% sugar by weight. By comparison, commercially produced "100% grape juice", made from table grapes, is usually around 15% sugar by weight.
Seedless grapes
Seedless cultivars now make up the overwhelming majority of table grape plantings. Because grapevines are vegetatively propagated by cuttings, the lack of seeds does not present a problem for reproduction. It is an issue for breeders, who must either use a seeded variety as the female parent or rescue embryos early in development using tissue culture techniques.
There are several sources of the seedlessness trait, and essentially all commercial cultivars get it from one of three sources: Thompson Seedless, Russian Seedless, and Black Monukka, all being cultivars of Vitis vinifera. There are currently more than a dozen varieties of seedless grapes. Several, such as Einset Seedless, Benjamin Gunnels's Prime seedless grapes, Reliance, and Venus, have been specifically cultivated for hardiness and quality in the relatively cold climates of the northeastern United States and southern Ontario.
An offset to the improved eating quality of seedlessness is the loss of potential health benefits provided by the enriched phytochemical content of grape seeds (see Health claims, below).
Uses
Culinary
Grapes are eaten raw, dried (as raisins, currants and sultanas), or cooked. Also, depending on grape cultivar, grapes are used in winemaking. Grapes can be processed into a multitude of products such as jams, juices, vinegars and oils.
Commercially cultivated grapes are classified as either table or wine grapes. These categories are based on their intended method of consumption: grapes that are eaten raw (table grapes), or grapes that are used to make wine (wine grapes).
Table grape cultivars normally have large, seedless fruit and thin skins. Wine grapes are smaller (in comparison to table grapes), usually contain seeds, and have thicker skins (a desirable characteristic in making wine). Most of the aroma in wine comes from the skin. Wine grapes tend to have a high sugar content: they are harvested at peak sugar levels (approximately 24% sugar by weight). In comparison, commercially produced "100% grape juice" made from table grapes is normally around 15% sugar by weight.
Raisins, currants and sultanas
In most of Europe and North America, dried grapes are referred to as "raisins" or the local equivalent. In the UK, three different varieties are recognized, forcing the EU to use the term "dried vine fruit" in official documents.
A raisin is any dried grape. While raisin is a French loanword, the word in French refers to the fresh fruit; grappe (from which the English grape is derived) refers to the bunch (as in une grappe de raisins). A raisin in French is called raisin sec ("dry grape").
A currant is a dried Zante Black Corinth grape, the name being a corruption of the French raisin de Corinthe (Corinth grape). The names of the black and red currant, now more usually blackcurrant and redcurrant, two berries unrelated to grapes, are derived from this use. Some other fruits of similar appearance are also so named, for example, Australian currant, native currant, Indian currant.
A sultana was originally a raisin made from Sultana grapes of Turkish origin (known as Thompson Seedless in the United States), but the word is now applied to raisins made from either white grapes or red grapes that are bleached to resemble the traditional sultana.
Juice
Grape juice is obtained from crushing and blending grapes into a liquid. The juice is often sold in stores or fermented and made into wine, brandy, or vinegar. Grape juice that has been pasteurized, removing any naturally occurring yeast, will not ferment if kept sterile, and thus contains no alcohol. In the wine industry, grape juice that contains 7–23% of pulp, skins, stems and seeds is often referred to as "must". In North America, the most common grape juice is purple and made from Concord grapes, while white grape juice is commonly made from Niagara grapes; both are varieties of Vitis labrusca, a different species from European wine grapes. In California, Sultana (known there as Thompson Seedless) grapes are sometimes diverted from the raisin or table market to produce white juice.
Vinegars
Husrum, also known as verjuice, is a type of vinegar made from sour grapes in the Middle East. It is produced by crushing unripened grapes, collecting and salting the juice, simmering it to remove foam, and then storing it with a layer of olive oil to prevent contamination and oxidation. It is then used as an acidic ingredient in salads and stuffed vegetables. Unripened husrum grapes sent from Ashkelon to Egypt are mentioned in a 12th-century document found in the Cairo Geniza. In Iran, a sour grape vinegar is used for making Shirazi salad.
Pomace and phytochemicals
Winemaking from red and white grape flesh and skins produces substantial quantities of organic residues, collectively called pomace (also "marc"), which includes crushed skins, seeds, stems, and leaves generally used as compost. Grape pomace – some 10–30% of the total mass of grapes crushed – contains various phytochemicals, such as unfermented sugars, alcohol, polyphenols, tannins, anthocyanins, and numerous other compounds, some of which are harvested and extracted for commercial applications (a process sometimes called "valorization" of the pomace).
Skin
Anthocyanins tend to be the main polyphenolics in purple grapes, whereas flavan-3-ols (i.e. catechins) are the more abundant class of polyphenols in white varieties. Total phenolic content is higher in purple varieties due almost entirely to anthocyanin density in purple grape skin compared to absence of anthocyanins in white grape skin. Phenolic content of grape skin varies with cultivar, soil composition, climate, geographic origin, and cultivation practices or exposure to diseases, such as fungal infections.
Muscadine grapes contain a relatively high phenolic content among dark grapes. In muscadine skins, ellagic acid, myricetin, quercetin, kaempferol, and trans-resveratrol are major phenolics.
The flavonols syringetin, syringetin 3-O-galactoside, laricitrin and laricitrin 3-O-galactoside are also found in purple grape but absent in white grape.
Seeds
Muscadine grape seeds contain about twice the total polyphenol content of skins. Grape seed oil from crushed seeds is used in cosmeceuticals and skincare products. Grape seed oil contains tocopherols (vitamin E) and high contents of phytosterols and polyunsaturated fatty acids such as linoleic acid, oleic acid, and alpha-linolenic acid.
Resveratrol
Resveratrol, a stilbene compound, is found in widely varying amounts among grape varieties, primarily in their skins and seeds. The skins of muscadine grapes contain about one hundred times the concentration of stilbenes found in the pulp. Fresh grape skin contains about 50 to 100 micrograms of resveratrol per gram.
Health claims
French paradox
Comparing diets among Western countries, researchers have discovered that, although French people tend to eat higher levels of animal fat, the incidence of heart disease remains low in France. This phenomenon has been termed the French paradox and is thought to occur due to the protective benefits of regularly consuming red wine, among other dietary practices. Alcohol consumption in moderation may be cardioprotective by its minor anticoagulant effect and vasodilation.
Although adoption of wine consumption is generally not recommended by health authorities, some research indicates moderate consumption, such as one glass of red wine a day for women and two for men, may confer health benefits. Alcohol itself may have protective effects on the cardiovascular system.
Grape and raisin toxicity in dogs
The consumption of grapes and raisins presents a potential health threat to dogs. Their toxicity to dogs can cause the animal to develop acute kidney failure (the sudden development of kidney failure) with anuria (a lack of urine production) and may be fatal.
In religion
Christians have traditionally used wine during worship services as a means of remembering the blood of Jesus Christ which was shed for the remission of sins. Christians who oppose the partaking of alcoholic beverages sometimes use grape juice as the "cup" or "wine" in the Lord's Supper.
The Catholic Church continues to use wine in the celebration of the Eucharist because it is part of the tradition passed down through the ages starting with Jesus Christ at the Last Supper, where Catholics believe the consecrated bread and wine become the body and blood of Jesus Christ, a dogma known as transubstantiation. Wine is used (not grape juice) both due to its strong Scriptural roots, and also to follow the tradition set by the early Christian Church. The Code of Canon Law of the Catholic Church (1983), Canon 924 says that the wine used must be natural, made from grapes of the vine, and not corrupt.
| Biology and health sciences | Others | null |
12437 | https://en.wikipedia.org/wiki/Genetic%20disorder | Genetic disorder | A genetic disorder is a health problem caused by one or more abnormalities in the genome. It can be caused by a mutation in a single gene (monogenic) or multiple genes (polygenic) or by a chromosome abnormality. Although polygenic disorders are the most common, the term is mostly used when discussing disorders with a single genetic cause, either in a gene or chromosome. The mutation responsible can occur spontaneously before embryonic development (a de novo mutation), or it can be inherited from two parents who are carriers of a faulty gene (autosomal recessive inheritance) or from a parent with the disorder (autosomal dominant inheritance). When the genetic disorder is inherited from one or both parents, it is also classified as a hereditary disease. Some disorders are caused by a mutation on the X chromosome and have X-linked inheritance. Very few disorders are inherited on the Y chromosome or mitochondrial DNA (due to their size).
There are well over 6,000 known genetic disorders, and new genetic disorders are constantly being described in medical literature. More than 600 genetic disorders are treatable. Around 1 in 50 people are affected by a known single-gene disorder, while around 1 in 263 are affected by a chromosomal disorder. Around 65% of people have some kind of health problem as a result of congenital genetic mutations. Due to the significantly large number of genetic disorders, approximately 1 in 21 people are affected by a genetic disorder classified as "rare" (usually defined as affecting less than 1 in 2,000 people). Most genetic disorders are rare in themselves.
Genetic disorders are present before birth, and some genetic disorders produce birth defects, but birth defects can also be developmental rather than hereditary. The opposite of a hereditary disease is an acquired disease. Most cancers, although they involve genetic mutations to a small proportion of cells in the body, are acquired diseases. Some cancer syndromes, however, such as BRCA mutations, are hereditary genetic disorders.
Single-gene
A single-gene disorder (or monogenic disorder) is the result of a single mutated gene. Single-gene disorders can be passed on to subsequent generations in several ways. Genomic imprinting and uniparental disomy, however, may affect inheritance patterns. The divisions between recessive and dominant types are not "hard and fast", although the divisions between autosomal and X-linked types are (since the latter types are distinguished purely based on the chromosomal location of the gene). For example, the common form of dwarfism, achondroplasia, is typically considered a dominant disorder, but children with two genes for achondroplasia have a severe and usually lethal skeletal disorder, one for which achondroplasics (people affected by achondroplasia) could be considered carriers. Sickle cell anemia is also considered a recessive condition, but heterozygous carriers have increased resistance to malaria in early childhood, which could be described as a related dominant condition. When a couple where one partner or both are affected or carriers of a single-gene disorder wish to have a child, they can do so through in vitro fertilization, which enables preimplantation genetic diagnosis to occur to check whether the embryo has the genetic disorder.
Most congenital metabolic disorders known as inborn errors of metabolism result from single-gene defects. Many such single-gene defects can decrease the fitness of affected people and are therefore present in the population in lower frequencies compared to what would be expected based on simple probabilistic calculations.
Autosomal dominant
Only one mutated copy of the gene will be necessary for a person to be affected by an autosomal dominant disorder. Each affected person usually has one affected parent. The chance a child will inherit the mutated gene is 50%. Autosomal dominant conditions sometimes have reduced penetrance, which means although only one mutated copy is needed, not all individuals who inherit that mutation go on to develop the disease. Examples of this type of disorder are Huntington's disease, neurofibromatosis type 1, neurofibromatosis type 2, Marfan syndrome, hereditary nonpolyposis colorectal cancer, hereditary multiple exostoses (a highly penetrant autosomal dominant disorder), tuberous sclerosis, Von Willebrand disease, and acute intermittent porphyria.
Autosomal recessive
Two copies of the gene must be mutated for a person to be affected by an autosomal recessive disorder. An affected person usually has unaffected parents who each carry a single copy of the mutated gene and are referred to as genetic carriers. A parent who carries a single defective copy of the gene normally does not have symptoms. Two unaffected people who each carry one copy of the mutated gene have a 25% risk with each pregnancy of having a child affected by the disorder. Examples of this type of disorder are albinism, medium-chain acyl-CoA dehydrogenase deficiency, cystic fibrosis, sickle cell disease, Tay–Sachs disease, Niemann–Pick disease, spinal muscular atrophy, and Roberts syndrome. Certain other phenotypes, such as wet versus dry earwax, are also determined in an autosomal recessive fashion. Some autosomal recessive disorders are common because, in the past, carrying one of the faulty genes led to a slight protection against an infectious disease or toxin such as tuberculosis or malaria. Such disorders include cystic fibrosis, sickle cell disease, phenylketonuria and thalassaemia.
X-linked dominant
X-linked dominant disorders are caused by mutations in genes on the X chromosome. Only a few disorders have this inheritance pattern, with a prime example being X-linked hypophosphatemic rickets. Males and females are both affected in these disorders, with males typically being more severely affected than females. Some X-linked dominant conditions, such as Rett syndrome, incontinentia pigmenti type 2, and Aicardi syndrome, are usually fatal in males either in utero or shortly after birth, and are therefore predominantly seen in females. Exceptions to this finding are extremely rare cases in which boys with Klinefelter syndrome (47,XXY) also inherit an X-linked dominant condition and exhibit symptoms more similar to those of a female in terms of disease severity. The chance of passing on an X-linked dominant disorder differs between men and women. The sons of a man with an X-linked dominant disorder will all be unaffected (since they receive their father's Y chromosome), but his daughters will all inherit the condition. A woman with an X-linked dominant disorder has a 50% chance of having an affected foetus with each pregnancy, although in cases such as incontinentia pigmenti, only female offspring are generally viable.
X-linked recessive
X-linked recessive conditions are also caused by mutations in genes on the X chromosome. Males are much more frequently affected than females, because they only have the one X chromosome necessary for the condition to present. The chance of passing on the disorder differs between men and women. The sons of a man with an X-linked recessive disorder will not be affected (since they receive their father's Y chromosome), but his daughters will be carriers of one copy of the mutated gene. A woman who is a carrier of an X-linked recessive disorder (XRXr) has a 50% chance of having sons who are affected and a 50% chance of having daughters who are carriers of one copy of the mutated gene. X-linked recessive conditions include the serious diseases hemophilia A, Duchenne muscular dystrophy, and Lesch–Nyhan syndrome, as well as common and less serious conditions such as male pattern baldness and red–green color blindness. X-linked recessive conditions can sometimes manifest in females due to skewed X-inactivation or monosomy X (Turner syndrome).
Y-linked
Y-linked disorders are caused by mutations on the Y chromosome. These conditions may only be transmitted from the heterogametic sex (e.g. male humans) to offspring of the same sex. More simply, this means that Y-linked disorders in humans can only be passed from men to their sons; females can never be affected because they do not possess Y-allosomes.
Y-linked disorders are exceedingly rare but the most well-known examples typically cause infertility. Reproduction in such conditions is only possible through the circumvention of infertility by medical intervention.
Mitochondrial
This type of inheritance, also known as maternal inheritance, is the rarest and applies to the 13 protein-coding genes encoded by mitochondrial DNA. Because only egg cells contribute mitochondria to the developing embryo, only mothers (if they are affected) can pass on mitochondrial DNA conditions to their children. An example of this type of disorder is Leber's hereditary optic neuropathy.
It is important to stress that the vast majority of mitochondrial diseases (particularly when symptoms develop in early life) are actually caused by a nuclear gene defect, as most mitochondrial proteins are encoded by nuclear rather than mitochondrial DNA. These diseases most often follow autosomal recessive inheritance.
Multifactorial disorder
Genetic disorders may also be complex, multifactorial, or polygenic, meaning they are likely associated with the effects of multiple genes in combination with lifestyles and environmental factors. Multifactorial disorders include heart disease and diabetes. Although complex disorders often cluster in families, they do not have a clear-cut pattern of inheritance. This makes it difficult to determine a person's risk of inheriting or passing on these disorders. Complex disorders are also difficult to study and treat because the specific factors that cause most of these disorders have not yet been identified. Studies that aim to identify the cause of complex disorders can use several methodological approaches to determine genotype–phenotype associations. One method, the genotype-first approach, starts by identifying genetic variants within patients and then determining the associated clinical manifestations. This is opposed to the more traditional phenotype-first approach, and may identify causal factors that have previously been obscured by clinical heterogeneity, penetrance, and expressivity.
On a pedigree, polygenic diseases do tend to "run in families", but the inheritance does not fit simple patterns as with Mendelian diseases. This does not mean that the genes cannot eventually be located and studied. There is also a strong environmental component to many of them (e.g., blood pressure).
Other such cases include:
asthma
autoimmune diseases such as multiple sclerosis
cancers
ciliopathies
cleft palate
diabetes
heart disease
hypertension
inflammatory bowel disease
intellectual disability
mood disorder
obesity
refractive error
infertility
Chromosomal disorder
A chromosomal disorder is a missing, extra, or irregular portion of chromosomal DNA. It can be from an atypical number of chromosomes or a structural abnormality in one or more chromosomes. An example of these disorders is Trisomy 21 (the most common form of Down syndrome), in which there is an extra copy of chromosome 21 in all cells.
Diagnosis
Due to the wide range of genetic disorders that are known, diagnosis varies widely and depends on the disorder. Most genetic disorders are diagnosed before birth, at birth, or during early childhood; however, some, such as Huntington's disease, can escape detection until the patient begins exhibiting symptoms well into adulthood.
The basic aspects of a genetic disorder rest on the inheritance of genetic material. With an in-depth family history, it is possible to anticipate possible disorders in children, which directs medical professionals to specific tests depending on the disorder and allows parents the chance to prepare for potential lifestyle changes, anticipate the possibility of stillbirth, or contemplate termination. Prenatal diagnosis can detect the presence of characteristic abnormalities in fetal development through ultrasound, or detect the presence of characteristic substances via invasive procedures which involve inserting probes or needles into the uterus, such as amniocentesis.
Prognosis
Not all genetic disorders directly result in death; however, there are no known cures for genetic disorders. Many genetic disorders affect stages of development, such as Down syndrome, while others result in purely physical symptoms, such as muscular dystrophy. Other disorders, such as Huntington's disease, show no signs until adulthood. During the active period of a genetic disorder, care focuses mostly on maintaining quality of life, slowing its decline, and preserving patient autonomy. This includes physical therapy and pain management.
Treatment
The treatment of genetic disorders is an ongoing battle, with over 1,800 gene therapy clinical trials completed, ongoing, or approved worldwide. Despite this, most treatment options revolve around treating the symptoms of the disorders in an attempt to improve patient quality of life.
Gene therapy refers to a form of treatment where a healthy gene is introduced to a patient. This should alleviate the defect caused by a faulty gene or slow the progression of the disease. A major obstacle has been the delivery of genes to the appropriate cells, tissues, and organs affected by the disorder. Researchers have investigated how they can introduce a gene into the potentially trillions of cells that carry the defective copy. Finding an answer to this has been a roadblock between understanding the genetic disorder and correcting the genetic disorder.
Epidemiology
Around 1 in 50 people are affected by a known single-gene disorder, while around 1 in 263 are affected by a chromosomal disorder. Around 65% of people have some kind of health problem as a result of congenital genetic mutations. Due to the significantly large number of genetic disorders, approximately 1 in 21 people are affected by a genetic disorder classified as "rare" (usually defined as affecting less than 1 in 2,000 people). Most genetic disorders are rare in themselves. There are well over 6,000 known genetic disorders, and new genetic disorders are constantly being described in medical literature.
History
The earliest known genetic condition in a hominid was in the fossil species Paranthropus robustus, with over a third of individuals displaying amelogenesis imperfecta.
| Biology and health sciences | Specific diseases | Health |
12439 | https://en.wikipedia.org/wiki/Guanine | Guanine | Guanine () (symbol G or Gua) is one of the four main nucleotide bases found in the nucleic acids DNA and RNA, the others being adenine, cytosine, and thymine (uracil in RNA). In DNA, guanine is paired with cytosine. The guanine nucleoside is called guanosine.
With the formula C5H5N5O, guanine is a derivative of purine, consisting of a fused pyrimidine-imidazole ring system with conjugated double bonds. This unsaturated arrangement means the bicyclic molecule is planar.
Properties
Guanine, along with adenine and cytosine, is present in both DNA and RNA, whereas thymine is usually seen only in DNA, and uracil only in RNA. Guanine has two tautomeric forms, the major keto form (see figures) and rare enol form.
It binds to cytosine through three hydrogen bonds. In cytosine, the amino group acts as the hydrogen bond donor and the C-2 carbonyl and the N-3 amine as the hydrogen-bond acceptors. Guanine has the C-6 carbonyl group that acts as the hydrogen bond acceptor, while a group at N-1 and the amino group at C-2 act as the hydrogen bond donors.
Guanine can be hydrolyzed with strong acid to glycine, ammonia, carbon dioxide, and carbon monoxide. First, guanine gets deaminated to become xanthine. Guanine oxidizes more readily than adenine, the other purine-derivative base in DNA. Its high melting point of 350 °C reflects the intermolecular hydrogen bonding between the oxo and amino groups in the molecules in the crystal. Because of this intermolecular bonding, guanine is relatively insoluble in water, but it is soluble in dilute acids and bases.
History
The first isolation of guanine was reported in 1844 by the German chemist (1819–1885), who obtained it as a mineral formed from the excreta of sea birds, which is known as guano and which was used as a source of fertilizer; guanine was named in 1846. Between 1882 and 1906, Emil Fischer determined the structure and also showed that uric acid can be converted to guanine.
Synthesis
Trace amounts of guanine form by the polymerization of ammonium cyanide (NH4CN). Two experiments conducted by Levy et al. showed that heating 10 mol·L−1 ammonium cyanide at 80 °C for 24 hours gave a yield of 0.0007%, while using a 0.1 mol·L−1 solution frozen at −20 °C for 25 years gave a 0.0035% yield. These results indicate guanine could arise in frozen regions of the primitive earth. In 1984, Yuasa reported a 0.00017% yield of guanine after the electrical discharge of NH3, CH4, C2H6, and 50 mL of water, followed by a subsequent acid hydrolysis. However, it is unknown whether the presence of guanine was not simply a resultant contaminant of the reaction.
10NH3 + 2CH4 + 4C2H6 + 2H2O → 2C5H8N5O (guanine) + 25H2
A Fischer–Tropsch synthesis can also be used to form guanine, along with adenine, uracil, and thymine. Heating an equimolar gas mixture of CO, H2, and NH3 to 700 °C for 15 to 24 minutes, followed by quick cooling and then sustained reheating to 100 to 200 °C for 16 to 44 hours with an alumina catalyst, yielded guanine and uracil:
10CO + H2 + 10NH3 → 2C5H8N5O (guanine) + 8H2O
Another possible abiotic route was explored by quenching a high-temperature plasma of a 90% N2–10% CO–H2O gas mixture.
Traube's synthesis involves heating 2,4,5-triamino-1,6-dihydro-6-oxypyrimidine (as the sulfate) with formic acid for several hours.
Biosynthesis
Guanine is not synthesized de novo. Instead, it is split from the more complex molecule guanosine by the enzyme guanosine phosphorylase:
guanosine + phosphate ⇌ guanine + alpha-D-ribose 1-phosphate
Guanine nucleotides, however, can be synthesized de novo, with inosine monophosphate dehydrogenase as the rate-limiting enzyme.
Other occurrences and biological uses
The word guanine derives from the Spanish loanword guano ('bird/bat droppings'), which itself is from a Quechua word meaning 'dung'. As the Oxford English Dictionary notes, guanine is "A white amorphous substance obtained abundantly from guano, forming a constituent of the excrement of birds".
In 1656 in Paris, a Mr. Jaquin extracted from the scales of the fish Alburnus alburnus so-called "pearl essence", which is crystalline guanine. In the cosmetics industry, crystalline guanine is used as an additive to various products (e.g., shampoos), where it provides a pearly iridescent effect. It is also used in metallic paints and simulated pearls and plastics. It provides shimmering luster to eye shadow and nail polish. Facial treatments using the droppings, or guano, from Japanese nightingales have been used in Japan and elsewhere, because the guanine in the droppings makes the skin look paler. Guanine crystals are rhombic platelets composed of multiple transparent layers, but they have a high index of refraction that partially reflects and transmits light from layer to layer, thus producing a pearly luster. It can be applied by spray, painting, or dipping. It may irritate the eyes. Its alternatives are mica, faux pearl (from ground shells), and aluminium and bronze particles.
Guanine has a wide variety of biological uses, ranging in both complexity and versatility; these include camouflage, display, and vision, among other purposes.
Spiders, scorpions, and some amphibians convert ammonia, as a product of protein metabolism in the cells, to guanine, as it can be excreted with minimal water loss.
Guanine is also found in specialized skin cells of fish called iridocytes (e.g., the sturgeon), as well as being present in the reflective deposits of the eyes of deep-sea fish and some reptiles, such as crocodiles and chameleons.
On 8 August 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting building blocks of DNA and RNA (guanine, adenine and related organic molecules) may have been formed extra-terrestrially in outer space.
| Biology and health sciences | Nucleic acids | Biology |
12450 | https://en.wikipedia.org/wiki/G%C3%B6del%27s%20completeness%20theorem | Gödel's completeness theorem | Gödel's completeness theorem is a fundamental theorem in mathematical logic that establishes a correspondence between semantic truth and syntactic provability in first-order logic.
The completeness theorem applies to any first-order theory: If T is such a theory, and φ is a sentence (in the same language) and every model of T is a model of φ, then there is a (first-order) proof of φ using the statements of T as axioms. One sometimes says this as "anything true in all models is provable". (This does not contradict Gödel's incompleteness theorem, which is about a formula φu that is unprovable in a certain theory T but true in the "standard" model of the natural numbers: φu is false in some other, "non-standard" models of T.)
The completeness theorem makes a close link between model theory, which deals with what is true in different models, and proof theory, which studies what can be formally proven in particular formal systems.
It was first proved by Kurt Gödel in 1929. It was then simplified when Leon Henkin observed in his Ph.D. thesis that the hard part of the proof can be presented as the Model Existence Theorem (published in 1949). Henkin's proof was simplified by Gisbert Hasenjaeger in 1953.
Preliminaries
There are numerous deductive systems for first-order logic, including systems of natural deduction and Hilbert-style systems. Common to all deductive systems is the notion of a formal deduction. This is a sequence (or, in some cases, a finite tree) of formulae with a specially designated conclusion. The definition of a deduction is such that it is finite and that it is possible to verify algorithmically (by a computer, for example, or by hand) that a given sequence (or tree) of formulae is indeed a deduction.
A first-order formula is called logically valid if it is true in every structure for the language of the formula (i.e. for any assignment of values to the variables of the formula). To formally state, and then prove, the completeness theorem, it is necessary to also define a deductive system. A deductive system is called complete if every logically valid formula is the conclusion of some formal deduction, and the completeness theorem for a particular deductive system is the theorem that it is complete in this sense. Thus, in a sense, there is a different completeness theorem for each deductive system. A converse to completeness is soundness, the fact that only logically valid formulas are provable in the deductive system.
If some specific deductive system of first-order logic is sound and complete, then it is "perfect" (a formula is provable if and only if it is logically valid), thus equivalent to any other deductive system with the same quality (any proof in one system can be converted into the other).
Statement
We first fix a deductive system of first-order predicate calculus, choosing any of the well-known equivalent systems. Gödel's original proof assumed the Hilbert-Ackermann proof system.
Gödel's original formulation
The completeness theorem says that if a formula is logically valid then there is a finite deduction (a formal proof) of the formula.
Thus, the deductive system is "complete" in the sense that no additional inference rules are required to prove all the logically valid formulae. A converse to completeness is soundness, the fact that only logically valid formulae are provable in the deductive system. Together with soundness (whose verification is easy), this theorem implies that a formula is logically valid if and only if it is the conclusion of a formal deduction.
More general form
The theorem can be expressed more generally in terms of logical consequence. We say that a sentence s is a syntactic consequence of a theory T, denoted T ⊢ s, if s is provable from T in our deductive system. We say that s is a semantic consequence of T, denoted T ⊨ s, if s holds in every model of T. The completeness theorem then says that for any first-order theory T with a well-orderable language, and any sentence s in the language of T, if T ⊨ s, then T ⊢ s.
Since the converse (soundness) also holds, it follows that T ⊨ s if and only if T ⊢ s, and thus that syntactic and semantic consequence are equivalent for first-order logic.
This more general theorem is used implicitly, for example, when a sentence is shown to be provable from the axioms of group theory by considering an arbitrary group and showing that the sentence is satisfied by that group.
Gödel's original formulation is deduced by taking the particular case of a theory without any axiom.
Model existence theorem
The completeness theorem can also be understood in terms of consistency, as a consequence of Henkin's model existence theorem. We say that a theory T is syntactically consistent if there is no sentence s such that both s and its negation ¬s are provable from T in our deductive system. The model existence theorem says that for any first-order theory T with a well-orderable language, if T is syntactically consistent, then T has a model.
Another version, with connections to the Löwenheim–Skolem theorem, says: every syntactically consistent, countable first-order theory has a finite or countable model.
Given Henkin's theorem, the completeness theorem can be proved as follows: If T ⊨ s, then T ∪ {¬s} does not have models. By the contrapositive of Henkin's theorem, T ∪ {¬s} is then syntactically inconsistent. So a contradiction (⊥) is provable from T ∪ {¬s} in the deductive system. Hence T ∪ {¬s} ⊢ ⊥, and then by the properties of the deductive system, T ⊢ s.
As a theorem of arithmetic
The model existence theorem and its proof can be formalized in the framework of Peano arithmetic. Precisely, we can systematically define a model of any consistent effective first-order theory T in Peano arithmetic by interpreting each symbol of T by an arithmetical formula whose free variables are the arguments of the symbol. (In many cases, we will need to assume, as a hypothesis of the construction, that T is consistent, since Peano arithmetic may not prove that fact.) However, the definition expressed by this formula is not recursive (but is, in general, Δ2).
Consequences
An important consequence of the completeness theorem is that it is possible to recursively enumerate the semantic consequences of any effective first-order theory, by enumerating all the possible formal deductions from the axioms of the theory, and use this to produce an enumeration of their conclusions.
This comes in contrast with the direct meaning of the notion of semantic consequence, that quantifies over all structures in a particular language, which is clearly not a recursive definition.
Also, it makes the concept of "provability", and thus of "theorem", a clear concept that only depends on the chosen system of axioms of the theory, and not on the choice of a proof system.
Relationship to the incompleteness theorems
Gödel's incompleteness theorems show that there are inherent limitations to what can be proven within any given first-order theory in mathematics. The "incompleteness" in their name refers to another meaning of complete (see model theory – Using the compactness and completeness theorems): A theory T is complete (or decidable) if every sentence φ in the language of T is either provable (T ⊢ φ) or disprovable (T ⊢ ¬φ).
The first incompleteness theorem states that any theory T which is consistent, effective and contains Robinson arithmetic ("Q") must be incomplete in this sense, by explicitly constructing a sentence φT which is demonstrably neither provable nor disprovable within T. The second incompleteness theorem extends this result by showing that φT can be chosen so that it expresses the consistency of T itself.
Since φT cannot be proven in T, the completeness theorem implies the existence of a model of T in which φT is false. In fact, φT is a Π1 sentence, i.e. it states that some finitistic property is true of all natural numbers; so if it is false, then some natural number is a counterexample. If this counterexample existed within the standard natural numbers, its existence would disprove φT within T; but the incompleteness theorem showed this to be impossible, so the counterexample must not be a standard number, and thus any model of T in which φT is false must include non-standard numbers.
In fact, the model of any theory containing Q obtained by the systematic construction of the arithmetical model existence theorem is always non-standard, with a non-equivalent provability predicate and a non-equivalent way to interpret its own construction, so that this construction is non-recursive (as recursive definitions would be unambiguous).
Also, if is at least slightly stronger than Q (e.g. if it includes induction for bounded existential formulas), then Tennenbaum's theorem shows that it has no recursive non-standard models.
Relationship to the compactness theorem
The completeness theorem and the compactness theorem are two cornerstones of first-order logic. While neither of these theorems can be proven in a completely effective manner, each one can be effectively obtained from the other.
The compactness theorem says that if a formula φ is a logical consequence of a (possibly infinite) set of formulas Γ then it is a logical consequence of a finite subset of Γ. This is an immediate consequence of the completeness theorem, because only a finite number of axioms from Γ can be mentioned in a formal deduction of φ, and the soundness of the deductive system then implies φ is a logical consequence of this finite set. This proof of the compactness theorem is originally due to Gödel.
Conversely, for many deductive systems, it is possible to prove the completeness theorem as an effective consequence of the compactness theorem.
The ineffectiveness of the completeness theorem can be measured along the lines of reverse mathematics. When considered over a countable language, the completeness and compactness theorems are equivalent to each other and equivalent to a weak form of choice known as weak Kőnig's lemma, with the equivalence provable in RCA0 (a second-order variant of Peano arithmetic restricted to induction over Σ01 formulas). Weak Kőnig's lemma is provable in ZF, the system of Zermelo–Fraenkel set theory without axiom of choice, and thus the completeness and compactness theorems for countable languages are provable in ZF. However, the situation is different when the language is of arbitrarily large cardinality since then, though the completeness and compactness theorems remain provably equivalent to each other in ZF, they are also provably equivalent to a weak form of the axiom of choice known as the ultrafilter lemma. In particular, no theory extending ZF can prove either the completeness or compactness theorems over arbitrary (possibly uncountable) languages without also proving the ultrafilter lemma on a set of the same cardinality.
Completeness in other logics
The completeness theorem is a central property of first-order logic that does not hold for all logics. Second-order logic, for example, does not have a completeness theorem for its standard semantics (though it does have the completeness property for Henkin semantics), and the set of logically valid formulas in second-order logic is not recursively enumerable. The same is true of all higher-order logics. It is possible to produce sound deductive systems for higher-order logics, but no such system can be complete.
Lindström's theorem states that first-order logic is the strongest (subject to certain constraints) logic satisfying both compactness and completeness.
A completeness theorem can be proved for modal logic or intuitionistic logic with respect to Kripke semantics.
Proofs
Gödel's original proof of the theorem proceeded by reducing the problem to a special case for formulas in a certain syntactic form, and then handling this form with an ad hoc argument.
In modern logic texts, Gödel's completeness theorem is usually proved with Henkin's proof, rather than with Gödel's original proof. Henkin's proof directly constructs a term model for any consistent first-order theory. James Margetson (2004) developed a computerized formal proof using the Isabelle theorem prover. Other proofs are also known.
| Mathematics | Model theory | null |
12460 | https://en.wikipedia.org/wiki/Green | Green | Green is the color between cyan and yellow on the visible spectrum. It is evoked by light which has a dominant wavelength of roughly 495–570 nm. In subtractive color systems, used in painting and color printing, it is created by a combination of yellow and cyan; in the RGB color model, used on television and computer screens, it is one of the additive primary colors, along with red and blue, which are mixed in different combinations to create all other colors. By far the largest contributor to green in nature is chlorophyll, the chemical by which plants photosynthesize and convert sunlight into chemical energy. Many creatures have adapted to their green environments by taking on a green hue themselves as camouflage. Several minerals have a green color, including the emerald, which is colored green by its chromium content.
During post-classical and early modern Europe, green was the color commonly associated with wealth, merchants, bankers, and the gentry, while red was reserved for the nobility. For this reason, the costume of the Mona Lisa by Leonardo da Vinci and the benches in the British House of Commons are green while those in the House of Lords are red. It also has a long historical tradition as the color of Ireland and of Gaelic culture. It is the historic color of Islam, representing the lush vegetation of Paradise. It was the color of the banner of Muhammad, and is found in the flags of nearly all Islamic countries.
In surveys made in American, European, and Islamic countries, green is the color most commonly associated with nature, life, health, youth, spring, hope, and envy. In the European Union and the United States, green is also sometimes associated with toxicity and poor health, but in China and most of Asia, its associations are very positive, as the symbol of fertility and happiness. Because of its association with nature, it is the color of the environmental movement. Political groups advocating environmental protection and social justice describe themselves as part of the Green movement, some naming themselves Green parties. This has led to similar campaigns in advertising, as companies have sold green, or environmentally friendly, products. Green is also the traditional color of safety and permission; a green light means go ahead, a green card permits permanent residence in the United States.
Etymology and linguistic definitions
The word green comes from the Middle English and Old English word grene, which, like the German word grün, has the same root as the words grass and grow. It is from a Common Germanic *gronja-, which is also reflected in Old Norse grænn, Old High German gruoni (but unattested in East Germanic), ultimately from a PIE root meaning "to grow", and root-cognate with grass and to grow.
The first recorded use of the word as a color term in Old English dates to ca. AD 700.
Latin also has a genuine and widely used term for "green", viridis. Related to virere "to grow" and ver "spring", it gave rise to words in several Romance languages, French vert, Italian verde (and English vert, verdure etc.). Likewise the Slavic languages have zelenъ. Ancient Greek also had a term for yellowish, pale green – χλωρός, chloros (cf. the color of chlorine), cognate with χλοερός "verdant" and χλόη "chloe, the green of new growth".
Thus, the languages mentioned above (Germanic, Romance, Slavic, Greek) have old terms for "green" which are derived from words for fresh, sprouting vegetation.
However, comparative linguistics makes clear that these terms were coined independently, over the past few millennia, and there is no identifiable single Proto-Indo-European word for "green". For example, the Slavic zelenъ is cognate with a Sanskrit word meaning "yellow, ochre, golden".
The Turkic languages also have jašɨl "green" or "yellowish green", compared to a Mongolian word for "meadow".
Languages where green and blue are one color
In some languages, including old Chinese, Thai, old Japanese, and Vietnamese, the same word can mean either blue or green. The Chinese character 青 (pronounced qīng in Mandarin, ao in Japanese, and thanh in Sino-Vietnamese) has a meaning that covers both blue and green; blue and green are traditionally considered shades of "青". In more contemporary terms, they are 藍 (lán, in Mandarin) and 綠 (lǜ, in Mandarin) respectively. Japanese also has two terms that refer specifically to the color green, 緑 (midori, which is derived from the classical Japanese descriptive verb midoru "to be in leaf, to flourish" in reference to trees) and グリーン (guriin, which is derived from the English word "green"). However, in Japan, although the traffic lights have the same colors as other countries have, the green light is described using the same word as for blue, aoi, because green is considered a shade of aoi; similarly, green variants of certain fruits and vegetables such as green apples, green shiso (as opposed to red apples and red shiso) will be described with the word aoi. Vietnamese uses a single word for both blue and green, xanh, with variants such as xanh da trời (azure, lit. "sky blue"), lam (blue), and lục (green; also xanh lá cây, lit. "leaf green").
"Green" in modern European languages corresponds to about 520–570 nm, but many historical and non-European languages make other choices, e.g. using a term for the range of ca. 450–530 nm ("blue/green") and another for ca. 530–590 nm ("green/yellow"). In the comparative study of color terms in the world's languages, green is only found as a separate category in languages with the fully developed range of six colors (white, black, red, green, yellow, and blue), or more rarely in systems with five colors (white, red, yellow, green, and black/blue). These languages have introduced supplementary vocabulary to denote "green", but these terms are recognizable as recent adoptions that are not in origin color terms (much like the English adjective orange being in origin not a color term but the name of a fruit). Thus, the Thai word เขียว kheīyw, besides meaning "green", also means "rank" and "smelly" and holds other unpleasant associations.
The Celtic languages had a term for "blue/green/grey", Proto-Celtic *glasto-, which gave rise to Old Irish glas "green, grey" and to Welsh glas "blue". This word is cognate with the Ancient Greek γλαυκός "bluish green", contrasting with χλωρός "yellowish green" discussed above.
In modern Japanese, the term for green is 緑, while the old term for "blue/green", 青, now means "blue". But in certain contexts, green is still conventionally referred to as 青, reflecting the absence of a blue-green distinction in old Japanese (more accurately, the traditional Japanese color terminology grouped some shades of green with blue, and others with yellow tones).
In science
Color vision and colorimetry
In optics, the perception of green is evoked by light having a spectrum dominated by energy with a wavelength of roughly 495–570 nm. The sensitivity of the dark-adapted human eye is greatest at about 507 nm, a blue-green color, while the light-adapted eye is most sensitive about 555 nm, a yellow-green; these are the peak locations of the rod and cone (scotopic and photopic, respectively) luminosity functions.
The perception of greenness (in opposition to redness forming one of the opponent mechanisms in human color vision) is evoked by light which triggers the medium-wavelength M cone cells in the eye more than the long-wavelength L cones. Light which triggers this greenness response more than the yellowness or blueness of the other color opponent mechanism is called green. A green light source typically has a spectral power distribution dominated by energy with a wavelength of roughly 487–570 nm.
Human eyes have color receptors known as cone cells, of which there are three types. In some cases, one is missing or faulty, which can cause color blindness, including the common inability to distinguish red and yellow from green, known as deuteranopia or red-green color blindness.
Green is restful to the eye. Studies show that a green environment can reduce fatigue.
In the subtractive color system, used in painting and color printing, green is created by a combination of yellow and blue, or yellow and cyan; in the RGB color model, used on television and computer screens, it is one of the additive primary colors, along with red and blue, which are mixed in different combinations to create all other colors. On the HSV color wheel, also known as the RGB color wheel, the complement of green is magenta; that is, a color corresponding to an equal mixture of red and blue light (one of the purples). On a traditional color wheel, based on subtractive color, the complementary color to green is considered to be red.
In additive color devices such as computer displays and televisions, one of the primary light sources is typically a narrow-spectrum yellowish-green of dominant wavelength ≈550 nm; this "green" primary is combined with an orangish-red "red" primary and a purplish-blue "blue" primary to produce any color in between – the RGB color model. A unique green (green appearing neither yellowish nor bluish) is produced on such a device by mixing light from the green primary with some light from the blue primary.
Lasers
Lasers emitting in the green part of the spectrum are widely available to the general public in a wide range of output powers. Green laser pointers outputting at 532 nm (563.5 THz) are relatively inexpensive compared to other wavelengths of the same power, and are very popular due to their good beam quality and very high apparent brightness. The most common green lasers use diode pumped solid state (DPSS) technology to create the green light.
An infrared laser diode at 808 nm is used to pump a crystal of neodymium-doped yttrium vanadium oxide (Nd:YVO4) or neodymium-doped yttrium aluminium garnet (Nd:YAG) and induces it to emit 281.76 THz (1064 nm). This deeper infrared light is then passed through another crystal containing potassium, titanium and phosphorus (KTP), whose non-linear properties generate light at a frequency that is twice that of the incident beam (563.5 THz); in this case corresponding to the wavelength of 532 nm ("green").
Other green wavelengths are also available using DPSS technology ranging from 501 nm to 543 nm.
Green wavelengths are also available from gas lasers, including the helium–neon laser (543 nm), the Argon-ion laser (514 nm) and the Krypton-ion laser (521 nm and 531 nm), as well as liquid dye lasers. Green lasers have a wide variety of applications, including pointing, illumination, surgery, laser light shows, spectroscopy, interferometry, fluorescence, holography, machine vision, non-lethal weapons, and bird control.
As of mid-2011, direct green laser diodes at 510 nm and 500 nm have become generally available, although the price remains relatively prohibitive for widespread public use. The efficiency of these lasers (peak 3%) compared to that of DPSS green lasers (peak 35%) may also be limiting adoption of the diodes to niche uses.
Pigments, food coloring and fireworks
Many minerals provide pigments which have been used in green paints and dyes over the centuries. Pigments, in this case, are minerals which reflect the color green, rather than emitting it through luminescent or phosphorescent qualities. The large number of green pigments makes it impossible to mention them all. Among the more notable green minerals, however, is the emerald, which is colored green by trace amounts of chromium and sometimes vanadium.
Chromium(III) oxide (Cr2O3) is called chrome green, also called viridian or institutional green when used as a pigment. For many years, the source of amazonite's color was a mystery. It was widely thought to be due to copper, because copper compounds often have blue and green colors, but the blue-green color is likely derived from small quantities of lead and water in the feldspar.
Copper is the source of the green color in malachite pigments, chemically known as basic copper(II) carbonate.
Verdigris is made by placing a plate or blade of copper, brass or bronze, slightly warmed, into a vat of fermenting wine, leaving it there for several weeks, and then scraping off and drying the green powder that forms on the metal. The process of making verdigris was described in ancient times by Pliny. It was used by the Romans in the murals of Pompeii, and in Celtic medieval manuscripts as early as the 5th century AD. It produced a blue-green which no other pigment could imitate, but it had drawbacks: it was unstable, it could not resist dampness, it did not mix well with other colors, it could ruin other colors with which it came into contact, and it was toxic. Leonardo da Vinci, in his treatise on painting, warned artists not to use it. It was widely used in miniature paintings in Europe and Persia in the 16th and 17th centuries. Its use largely ended in the late 19th century, when it was replaced by the safer and more stable chrome green. Viridian, as described above, was patented in 1859. It became popular with painters, since, unlike other synthetic greens, it was stable and not toxic. Vincent van Gogh used it, along with Prussian blue, to create a dark blue sky with a greenish tint in his painting Café Terrace at Night.
Green earth is a natural pigment used since the time of the Roman Empire. It is composed of clay colored by iron oxide, magnesium, aluminum silicate, or potassium. Large deposits were found in the South of France near Nice, and in Italy around Verona, on Cyprus, and in Bohemia. The clay was crushed, washed to remove impurities, then powdered. It was sometimes called Green of Verona.
Mixtures of oxidized cobalt and zinc were also used to create green paints as early as the 18th century.
Cobalt green, sometimes known as Rinman's green or zinc green, is a translucent green pigment made by heating a mixture of cobalt (II) oxide and zinc oxide. Sven Rinman, a Swedish chemist, discovered this compound in 1780.
Green chrome oxide was a new synthetic green created by a chemist named Pannetier in Paris in about 1835. Emerald green was a synthetic deep green made in the 19th century by hydrating chrome oxide. It was also known as Guignet green.
There is no natural source for green food colorings which has been approved by the US Food and Drug Administration. Chlorophyll, the E numbers E140 and E141, is the most common green chemical found in nature, and only allowed in certain medicines and cosmetic materials.
Quinoline Yellow (E104) is a commonly used coloring in the United Kingdom but is banned in Australia, Japan, Norway and the United States.
Green S (E142) is prohibited in many countries, for it is known to cause hyperactivity, asthma, urticaria, and insomnia.
To create green sparks, fireworks use barium salts, such as barium chlorate, barium nitrate crystals, or barium chloride, also used for green fireplace logs. Copper salts typically burn blue, but cupric chloride (also known as "campfire blue") can also produce green flames. Green pyrotechnic flares can use a mix ratio 75:25 of boron and potassium nitrate. Smoke can be turned green by a mixture: solvent yellow 33, solvent green 3, lactose, magnesium carbonate plus sodium carbonate added to potassium chlorate.
Biology
Green is common in nature, as many plants are green because of a complex chemical known as chlorophyll, which is involved in photosynthesis. Chlorophyll absorbs the long wavelengths of light (red) and short wavelengths of light (blue) much more efficiently than the wavelengths that appear green to the human eye, so light reflected by plants is enriched in green.
Chlorophyll absorbs green light poorly because it first arose in organisms living in oceans where purple halobacteria were already exploiting photosynthesis. Their purple color arose because they extracted energy in the green portion of the spectrum using bacteriorhodopsin. The new organisms that then later came to dominate the extraction of light were selected to exploit those portions of the spectrum not used by the halobacteria.
Animals typically use the color green as camouflage, blending in with the chlorophyll green of the surrounding environment. Most fish, reptiles, amphibians, and birds appear green because of a reflection of blue light coming through an over-layer of yellow pigment. Perception of color can also be affected by the surrounding environment. For example, broadleaf forests typically have a yellow-green light about them as the trees filter the light. Turacoverdin is one chemical which can cause a green hue in birds, especially turacos. Invertebrates such as insects or mollusks often display green colors because of porphyrin pigments, sometimes caused by diet. This can cause their feces to look green as well. Other chemicals which generally contribute to greenness among organisms are flavins (lychochromes) and hemanovadin. Humans have imitated this by wearing green clothing as a camouflage in military and other fields. Substances that may impart a greenish hue to one's skin include biliverdin, the green pigment in bile, and ceruloplasmin, a protein that carries copper ions in chelation.
The green huntsman spider is green due to the presence of bilin pigments in the spider's hemolymph (circulatory system fluids) and tissue fluids.
It hunts insects in green vegetation, where it is well camouflaged.
Green eyes
There is no green pigment in green eyes; like the color of blue eyes, it is an optical illusion; its appearance is caused by the combination of an amber or light brown pigmentation of the stroma, given by a low or moderate concentration of melanin, with the blue tone imparted by the Rayleigh scattering of the reflected light.
No one is born with green eyes. An infant's eyes at birth are typically either dark or blue. After birth, cells called melanocytes begin to release melanin, the brown pigment, into the child's irises; this happens gradually over time as the melanocytes respond to light.
Green eyes are most common in Northern and Central Europe.
They can also be found in Southern Europe, West Asia, Central Asia, and South Asia. In Iceland, 89% of women and 87% of men have either blue or green eye color.
A study of Icelandic and Dutch adults found green eyes to be much more prevalent in women than in men.
In history and art
Prehistoric history
Neolithic cave paintings do not have traces of green pigments, but neolithic peoples in northern Europe did make a green dye for clothing, made from the leaves of the birch tree. It was of very poor quality, more brown than green. Ceramics from ancient Mesopotamia show people wearing vivid green costumes, but it is not known how the colors were produced.
Ancient history
In Ancient Egypt, green was the symbol of regeneration and rebirth, and of the crops made possible by the annual flooding of the Nile. For painting on the walls of tombs or on papyrus, Egyptian artists used finely ground malachite, mined in the west Sinai and the eastern desert; a paintbox with malachite pigment was found inside the tomb of King Tutankhamun. They also used less expensive green earth pigment, or mixed yellow ochre and blue azurite. To dye fabrics green, they first colored them yellow with dye made from saffron and then soaked them in blue dye from the roots of the woad plant.
For the ancient Egyptians, green had very positive associations. The hieroglyph for green represented a growing papyrus sprout, showing the close connection between green, vegetation, vigor and growth. In wall paintings, the ruler of the underworld, Osiris, was typically portrayed with a green face, because green was the symbol of good health and rebirth. Palettes of green facial makeup, made with malachite, were found in tombs. It was worn by both the living and the dead, particularly around the eyes, to protect them from evil. Tombs also often contained small green amulets in the shape of scarab beetles made of malachite, which would protect and give vigor to the deceased. It also symbolized the sea, which was called the "Very Green".
In Ancient Greece, green and blue were sometimes considered the same color, and the same word sometimes described the color of the sea and the color of trees. The philosopher Democritus described two different greens: a pale green and a leek green. Aristotle considered that green was located midway between black, symbolizing the earth, and white, symbolizing water. However, green was not counted among the four classic colors of Greek painting – red, yellow, black and white – and is rarely found in Greek art.
The Romans had a greater appreciation for the color green; it was the color of Venus, the goddess of gardens, vegetables and vineyards. The Romans made a fine green earth pigment that was widely used in the wall paintings of Pompeii, Herculaneum, Lyon, Vaison-la-Romaine, and other Roman cities. They also used the pigment verdigris, made by soaking copper plates in fermenting wine. By the second century AD, the Romans were using green in paintings, mosaics and glass, and there were ten different words in Latin for varieties of green.
Postclassical history
In the Middle Ages and Renaissance, the color of clothing showed a person's social rank and profession. Red could only be worn by the nobility, brown and gray by peasants, and green by merchants, bankers and the gentry and their families. The Mona Lisa wears green in her portrait, as does the bride in the Arnolfini portrait by Jan van Eyck.
There were no good vegetal green dyes which resisted washing and sunlight for those who wanted or were required to wear green. Green dyes were made out of the fern, plantain, buckthorn berries, the juice of nettles and of leeks, the digitalis plant, the broom plant, the leaves of the fraxinus, or ash tree, and the bark of the alder tree, but they rapidly faded or changed color. Only in the 16th century was a good green dye produced, by first dyeing the cloth blue with woad, and then yellow with Reseda luteola, also known as yellow-weed.
The pigments available to painters were more varied; monks in monasteries used verdigris, made by soaking copper in fermenting wine, to color medieval manuscripts. They also used finely-ground malachite, which made a luminous green. They used green earth colors for backgrounds.
During the early Renaissance, painters such as Duccio di Buoninsegna learned to paint faces first with a green undercoat, then with pink, which gave the faces a more realistic hue. Over the centuries the pink has faded, making some of the faces look green.
Modern history
In the 18th and 19th century
The 18th and 19th centuries brought the discovery and production of synthetic green pigments and dyes, which rapidly replaced the earlier mineral and vegetable pigments and dyes. These new dyes were more stable and brilliant than the vegetable dyes, but some contained high levels of arsenic, and were eventually banned.
In the 18th and 19th centuries, green was associated with the romantic movement in literature and art. The German poet and philosopher Goethe declared that green was the most restful color, suitable for decorating bedrooms. Painters such as John Constable and Jean-Baptiste-Camille Corot depicted the lush green of rural landscapes and forests. Green was contrasted to the smoky grays and blacks of the Industrial Revolution.
The second half of the 19th century saw the use of green in art to create specific emotions, not just to imitate nature. One of the first to make color the central element of his picture was the American artist James McNeill Whistler, who created a series of paintings called "symphonies" or "nocturnes" of color, including Symphony in Grey and Green: The Ocean between 1866 and 1872.
The late 19th century also brought the systematic study of color theory, and particularly the study of how complementary colors such as red and green reinforced each other when they were placed next to each other. These studies were avidly followed by artists such as Vincent van Gogh. Describing his painting, The Night Cafe, to his brother Theo in 1888, Van Gogh wrote: "I sought to express with red and green the terrible human passions. The hall is blood red and pale yellow, with a green billiard table in the center, and four lamps of lemon yellow, with rays of orange and green. Everywhere it is a battle and antithesis of the most different reds and greens."
In the 20th and 21st century
In the 1980s, green became a political symbol, the color of the Green Party in Germany and in many other European countries. It symbolized the environmental movement, and also a new politics of the left which rejected traditional socialism and communism. (See section below.)
Symbolism and associations
Safety and permission
Green can communicate safety to proceed, as in traffic lights. Green and red were standardized as the colors of international railroad signals in the 19th century. The first traffic light, using green and red gas lamps, was erected in 1868 in front of the Houses of Parliament in London. It exploded the following year, injuring the policeman who operated it. In 1912, the first modern electric traffic lights were put up in Salt Lake City, Utah. Red was chosen largely because of its high visibility, and its association with danger, while green was chosen largely because it could not be mistaken for red. Today green lights universally signal that a system is turned on and working as it should. In many video games, green signifies both health and completed objectives, opposite red.
Nature, vivacity, and life
Green is the color most commonly associated in Europe and the United States with nature, vivacity and life.
It is the color of many environmental organizations, such as Greenpeace, and of the Green Parties in Europe. Many cities have designated a garden or park as a green space, and use green trash bins and containers. A green cross is commonly used to designate pharmacies in Europe.
In China, green is associated with the east, with sunrise, and with life and growth. In Thailand, the color green is considered auspicious for those born on a Wednesday (light green for those born at night).
Springtime, freshness, and hope
Green is the color most commonly associated in the United States and Europe with springtime, freshness, and hope. Green is often used to symbolize rebirth and renewal and immortality. In Ancient Egypt, the god Osiris, king of the underworld, was depicted as green-skinned. Green as the color of hope is connected with the color of springtime; hope represents the faith that things will improve after a period of difficulty, like the renewal of flowers and plants after the winter season.
Youth and inexperience
Green is the color most commonly associated in Europe and the United States with youth. It is also often used to describe anyone young or inexperienced, probably by analogy to immature, unripe fruit. Examples include green cheese, a term for a fresh, unaged cheese, and greenhorn, an inexperienced person.
Food and diet
The color green has been increasingly used by food companies, governments, and practitioners themselves to identify veganism and vegetarianism. The government of India requires food that is vegetarian to be marked with a green circle as part of the Food Safety and Standards Act of 2006 with changes to symbolism since but still maintaining the color green. In 2021, India introduced a green V to exclusively label vegan options. In the west, the V-Label, a green V designed by the European Vegetarian Union, has been used by food distributors to label vegan and vegetarian options.
Calm, tolerance, and the agreeable
Surveys also show that green is the color most associated with the calm, the agreeable, and tolerance. Red is associated with heat, blue with cold, and green with an agreeable temperature. Red is associated with dry, blue with wet, and green, in the middle, with dampness. Red is the most active color, blue the most passive; green, in the middle, is the color of neutrality and calm, sometimes used in architecture and design for these reasons.
Blue and green together symbolize harmony and balance. Experimental studies also show this calming effect as a statistically significant decrease in negative emotions and an increase in creative performance.
Jealousy and envy
Green is often associated with jealousy and envy. The expression "green-eyed monster" was first used by William Shakespeare in Othello: "it is the green-eyed monster which doth mock the meat it feeds on." Shakespeare also used it in the Merchant of Venice, speaking of "green-eyed jealousy".
Love and sexuality
Green today is not commonly associated in Europe and the United States with love and sexuality, but in stories of the medieval period it sometimes represented love and the base, natural desires of man. It was the color of the serpent in the Garden of Eden who caused the downfall of Adam and Eve. However, for the troubadours, green was the color of growing love, and light green clothing was reserved for young women who were not yet married.
In Persian and Sudanese poetry, dark-skinned women, called "green" women, were considered erotic. The Chinese term for cuckold is "to wear a green hat." This was because in ancient China, prostitutes were called "the family of the green lantern" and a prostitute's family would wear a green headscarf.
In Victorian England, the color green was associated with homosexuality.
Dragons, fairies, monsters, and devils
In legends, folk tales and films, fairies, dragons, monsters, and the devil are often shown as green.
In the Middle Ages, the devil was usually shown as either red, black or green. Dragons were usually green, because they had the heads, claws and tails of reptiles.
Modern Chinese dragons are also often green, but unlike European dragons, they are benevolent; Chinese dragons traditionally symbolize potent and auspicious powers, particularly control over water, rainfall, hurricane, and floods. The dragon is also a symbol of power, strength, and good luck. The Emperor of China usually used the dragon as a symbol of his imperial power and strength. The dragon dance is a popular feature of Chinese festivals.
In Irish and English folklore, the color was sometimes associated with witchcraft, and with faeries and spirits. The type of Irish fairy known as a leprechaun is commonly portrayed wearing a green suit, though before the 20th century he was usually described as wearing a red suit.
In theater and film, green was often connected with monsters and the inhuman. The earliest films of Frankenstein were in black and white, but in the poster for the 1935 version The Bride of Frankenstein, the monster had a green face. Actor Bela Lugosi wore green-hued makeup for the role of Dracula in the 1927–1928 Broadway stage production.
Poison and sickness
Like other common colors, green has several completely opposite associations. While it is the color most associated by Europeans and Americans with good health, it is also the color most often associated with toxicity and poison. There was a solid foundation for this association; in the nineteenth century several popular paints and pigments, notably verdigris, vert de Schweinfurt and vert de Paris, were highly toxic, containing copper or arsenic. The intoxicating drink absinthe was known as "the green fairy".
A green tinge in the skin is sometimes associated with nausea and sickness. The expression 'green at the gills' means appearing sick. The color, when combined with gold, is sometimes seen as representing the fading of youth. In some Far East cultures the color green is used as a symbol of sickness or nausea.
Social status, prosperity and the dollar
Green in Europe and the United States is sometimes associated with status and prosperity. From the Middle Ages to the 19th century it was often worn by bankers, merchants, country gentlemen and others who were wealthy but not members of the nobility. The benches in the House of Commons of the United Kingdom, where the landed gentry sat, are colored green.
In the United States green was connected with the dollar bill. Since 1861, the reverse side of the dollar bill has been green. Green was originally chosen because it deterred counterfeiters, who tried to use early camera equipment to duplicate banknotes. Also, since the banknotes were thin, the green on the back did not show through and muddle the pictures on the front of the banknote. Green continues to be used because the public now associates it with a strong and stable currency.
One of the more notable uses of this meaning is found in The Wonderful Wizard of Oz. The Emerald City in this story is a place where everyone wears tinted glasses that make everything appear green. According to the populist interpretation of the story, the city's color is used by the author, L. Frank Baum, to illustrate the financial system of America in his day, as he lived in a time when America was debating the use of paper money versus gold.
On flags
The flag of Italy (1797) was modeled after the French tricolor. It was originally the flag of the Cisalpine Republic, whose capital was Milan; red and white were the colors of Milan, and green was the color of the military uniforms of the army of the Cisalpine Republic. Other versions say it is the color of the Italian landscape, or symbolizes hope.
The flag of Brazil has a green field adapted from the flag of the Empire of Brazil. The green represented the royal family.
The flag of India was inspired by an earlier flag of the independence movement of Gandhi, which had a red band for Hinduism and a green band representing Islam, the second largest religion in India.
The flag of Pakistan symbolizes the country's commitment to Islam and to the equal rights of religious minorities: the larger portion of the flag (in a 3:2 ratio) is dark green, representing the Muslim majority (about 98% of the population), while a white vertical bar at the hoist (in a 3:1 ratio) represents religious minorities and minority religions in the country. The crescent and star symbolize progress and a bright future, respectively.
The flag of Bangladesh has a green field based on a similar flag used during the Bangladesh Liberation War of 1971. It consists of a red disc on top of a green field. The red disc represents the sun rising over Bengal, and also the blood of those who died for the independence of Bangladesh. The green field stands for the lushness of the land of Bangladesh.
The flag of the international constructed language Esperanto has a green field and a green star in a white area. The green represents hope ("esperanto" means "one who hopes"), the white represents peace and neutrality and the star represents the five inhabited continents.
Green is one of the three colors (along with red and black, or red and gold) of Pan-Africanism. Several African countries thus use the color on their flags, including Nigeria, South Africa, Ghana, Senegal, Mali, Ethiopia, Togo, Guinea, Benin, and Zimbabwe. The Pan-African colors are borrowed from the Ethiopian flag, one of the oldest independent African countries. Green on some African flags represents the natural richness of Africa.
Many flags of the Islamic world are green, as the color is considered sacred in Islam (see below). The flag of Hamas, as well as the flag of Iran, is green, symbolizing their Islamist ideology. The 1977 flag of Libya consisted of a simple green field with no other characteristics. It was the only national flag in the world with just one color and no design, insignia, or other details. Some countries used green in their flags to represent their country's lush vegetation, as in the flag of Jamaica, and hope in the future, as in the flags of Portugal and Nigeria. The green cedar of Lebanon tree on the Flag of Lebanon officially represents steadiness and tolerance.
Green is a symbol of Ireland, which is often referred to as the "Emerald Isle". The color is particularly identified with the republican and nationalist traditions in modern times. It is used this way on the flag of the Republic of Ireland, in balance with white and the Protestant orange. Green is strongly associated with the Irish holiday of St. Patrick's Day.
In politics
The first recorded green party was a political faction in Constantinople during the 6th-century Byzantine Empire, which took its name from a popular chariot racing team. They were bitter opponents of the blue faction, which supported Emperor Justinian I and which had its own chariot racing team. In 532 AD rioting between the factions began after one race, which led to the massacre of green supporters and the destruction of much of the center of Constantinople. (See Nika Riots).
Green was the traditional color of Irish nationalism, beginning in the 17th century. The green harp flag, with a traditional gaelic harp, became the symbol of the movement. It was the banner of the Society of United Irishmen, which organized the ultimately unsuccessful Irish Rebellion of 1798. When Ireland achieved independence in 1922, green was incorporated into the national flag.
In the 1970s, green became the color of the third biggest Swiss Federal Council political party, the Swiss People's Party (SVP). Its ideology combines Swiss nationalism, national conservatism, right-wing populism, economic liberalism, agrarianism, isolationism, and euroscepticism. The SVP was founded on September 22, 1971, and has 90,000 members.
In the 1980s, green became the color of a number of new European political parties organized around an agenda of environmentalism. Green was chosen for its association with nature, health, and growth. The largest green party in Europe is Alliance '90/The Greens (German: Bündnis 90/Die Grünen) in Germany, which was formed in 1993 from the merger of the German Green Party, founded in West Germany in 1980, and Alliance 90, founded during the Revolution of 1989–1990 in East Germany. In the 2009 federal elections, the party won 11% of the votes and 68 out of 622 seats in the Bundestag.
Green parties in Europe have programs based on ecology, grassroots democracy, nonviolence, and social justice. Green parties are found in over one hundred countries, and most are members of the Global Green Network.
Greenpeace is a non-governmental environmental organization which emerged from the anti-nuclear and peace movements in the 1970s. Its ship, the Rainbow Warrior, frequently tried to interfere with nuclear tests and whaling operations. The movement now has branches in forty countries.
The Australian Greens was founded in 1992. In the 2010 federal election, the party received 13% of the vote (more than 1.6 million votes) in the Senate, a first for any Australian minor party.
Green is the color associated with Puerto Rico's Independence Party, the smallest of that country's three major political parties, which advocates Puerto Rican independence from the United States.
In Indonesia, green is used by several Islamist political parties, including the National Awakening Party, Crescent Star Party, United Development Party, and the local Aceh Just and Prosperous Party.
In Taiwan, green is used by the Democratic Progressive Party and is associated with the Taiwan independence movement.
In religion
Green is the traditional color of Islam. According to tradition, the robe and banner of Muhammad were green, and according to the Koran (XVIII, 31 and LXXVI, 21) those fortunate enough to live in paradise wear green silk robes. Muhammad is quoted in a hadith as saying that "water, greenery, and a beautiful face" were three universally good things. Green was accordingly adopted as a Shi'a color.
Al-Khidr ("The Green One"), was an important Qur'anic figure who was said to have met and traveled with Moses. He was given that name because of his role as a diplomat and negotiator. Green was also considered to be the median color between light and obscurity.
Roman Catholic and more traditional Protestant clergy wear green vestments at liturgical celebrations during Ordinary Time. In the Eastern Catholic Church, green is the color of Pentecost. Green is one of the Christmas colors as well, possibly dating back to pre-Christian times, when evergreens were worshiped for their ability to maintain their color through the winter season. Romans used green holly and evergreen as decorations for their winter solstice celebration called Saturnalia, which eventually evolved into a Christmas celebration. In Ireland and Scotland especially, green is used to represent Catholics, while orange is used to represent Protestantism. This is shown on the national flag of Ireland.
In Paganism, green represents abundance, growth, wealth, renewal, and balance. In magickal practices, green is often used to bring money and luck. One figure who shares parallels with various deities is the Green Man.
In gambling and sports
Gambling tables in a casino are traditionally green. The tradition is said to have started in gambling rooms in Venice in the 16th century.
Billiards tables are traditionally covered with green woolen cloth. The first indoor tables, dating to the 15th century, were colored green after the grass courts used for the similar lawn games of the period.
Green was the traditional color worn by hunters in the 19th century, particularly the shade called hunter green. In the 20th century most hunters began wearing the color olive drab, a shade of green, instead of hunter green.
Green is a common color for sports teams. Well-known teams include A.S. Saint-Étienne of France, known as Les Verts (The Greens). The Green Bay Packers, an American football team, has the color in its official name and wears green uniforms. The NBA basketball team Boston Celtics is known for the green and white colors. In Israel, the green and white colors are identified with Maccabi Haifa F.C., a successful football club known as "The Greens". A number of national soccer teams feature the color, with the color usually reflective of the teams' national flag.
British racing green was the international motor racing color of Britain from the early 1900s until the 1960s, when it was replaced by the colors of the sponsoring automobile companies.
A green belt in karate, taekwondo, and judo symbolizes a level of proficiency in the sport.
Idioms and expressions
Having a green thumb (American English) or green fingers (British English). To be passionate about or talented at gardening. The expression was popularized beginning in 1925 by a BBC gardening program.
Greenhorn. Someone who is inexperienced.
Green-eyed monster. Refers to jealousy. (See section above on jealousy and envy).
Greenmail. A term used in finance and corporate takeovers. It refers to the practice of a company paying a high price to buy back shares of its own stock to prevent an unfriendly takeover by another company or businessman. It originated on Wall Street in the 1980s and derives from the green of dollar bills.
Green room. A room at a theater where actors rest when not onstage, or a room at a television studio where guests wait before going on-camera. It originated in the late 17th century from a room of that color at the Theatre Royal, Drury Lane in London.
Greenwashing. Environmental activists sometimes use this term to describe the advertising of a company that promotes its positive environmental practices to cover up its environmental destruction.
Green around the gills. A description of a person who looks physically ill.
Going green. An expression commonly used to refer to preserving the natural environment, and participating in activities such as recycling materials.
Looking green. A description of a person who looks revolted or repulsed.
| Physical sciences | Color terms | null |
12461 | https://en.wikipedia.org/wiki/Gradient | Gradient | In vector calculus, the gradient of a scalar-valued differentiable function of several variables is the vector field (or vector-valued function) whose value at a point gives the direction and the rate of fastest increase. The gradient transforms like a vector under change of basis of the space of variables of . If the gradient of a function is non-zero at a point , the direction of the gradient is the direction in which the function increases most quickly from , and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative. Further, a point where the gradient is the zero vector is known as a stationary point. The gradient thus plays a fundamental role in optimization theory, where it is used to minimize a function by gradient descent. In coordinate-free terms, the gradient of a function may be defined by:
where is the total infinitesimal change in for an infinitesimal displacement , and is seen to be maximal when is in the direction of the gradient . The nabla symbol , written as an upside-down triangle and pronounced "del", denotes the vector differential operator.
When a coordinate system is used in which the basis vectors are not functions of position, the gradient is given by the vector whose components are the partial derivatives of at . That is, for , its gradient is defined at the point in n-dimensional space as the vector
Note that the above definition for gradient is defined for the function only if is differentiable at . There can be functions for which partial derivatives exist in every direction but fail to be differentiable. Furthermore, this definition as the vector of partial derivatives is only valid when the basis of the coordinate system is orthonormal. For any other basis, the metric tensor at that point needs to be taken into account.
For example, the function unless at origin where , is not differentiable at the origin as it does not have a well defined tangent plane despite having well defined partial derivatives in every direction at the origin. In this particular example, under rotation of x-y coordinate system, the above formula for gradient fails to transform like a vector (gradient becomes dependent on choice of basis for coordinate system) and also fails to point towards the 'steepest ascent' in some orientations. For differentiable functions where the formula for gradient holds, it can be shown to always transform as a vector under transformation of the basis so as to always point towards the fastest increase.
The gradient is dual to the total derivative : the value of the gradient at a point is a tangent vector – a vector at each point; while the value of the derivative at a point is a cotangent vector – a linear functional on vectors. They are related in that the dot product of the gradient of at a point with another tangent vector equals the directional derivative of at of the function along ; that is, .
The gradient admits multiple generalizations to more general functions on manifolds; see .
Motivation
Consider a room where the temperature is given by a scalar field, , so at each point the temperature is , independent of time. At each point in the room, the gradient of at that point will show the direction in which the temperature rises most quickly, moving away from . The magnitude of the gradient will determine how fast the temperature rises in that direction.
Consider a surface whose height above sea level at point is . The gradient of at a point is a plane vector pointing in the direction of the steepest slope or grade at that point. The steepness of the slope at that point is given by the magnitude of the gradient vector.
The gradient can also be used to measure how a scalar field changes in other directions, rather than just the direction of greatest change, by taking a dot product. Suppose that the steepest slope on a hill is 40%. A road going directly uphill has slope 40%, but a road going around the hill at an angle will have a shallower slope. For example, if the road is at a 60° angle from the uphill direction (when both directions are projected onto the horizontal plane), then the slope along the road will be the dot product between the gradient vector and a unit vector along the road, as the dot product measures how much the unit vector along the road aligns with the steepest slope, which is 40% times the cosine of 60°, or 20%.
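As a quick numeric illustration, using only the figures already given above, the slope along the road is the steepest slope times the cosine of the angle between the road and the uphill direction. A minimal Python sketch:

```python
import math

# Worked check of the hill example above: the slope along a road is the
# steepest slope scaled by the cosine of the angle between the road and
# the uphill (gradient) direction.
steepest_slope = 0.40        # 40% grade straight uphill
angle_deg = 60.0             # road direction measured from the uphill direction

slope_along_road = steepest_slope * math.cos(math.radians(angle_deg))
print(f"{slope_along_road:.2%}")   # 20.00%
```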
More generally, if the hill height function is differentiable, then the gradient of dotted with a unit vector gives the slope of the hill in the direction of the vector, the directional derivative of along the unit vector.
Notation
The gradient of a function at point is usually written as . It may also be denoted by any of the following:
: to emphasize the vector nature of the result.
and : Written with Einstein notation, where repeated indices () are summed over.
Definition
The gradient (or gradient vector field) of a scalar function is denoted or where (nabla) denotes the vector differential operator, del. The notation is also commonly used to represent the gradient. The gradient of is defined as the unique vector field whose dot product with any vector at each point is the directional derivative of along . That is,
where the right-hand side is the directional derivative and there are many ways to represent it. Formally, the derivative is dual to the gradient; see relationship with derivative.
When a function also depends on a parameter such as time, the gradient often refers simply to the vector of its spatial derivatives only (see Spatial gradient).
The magnitude and direction of the gradient vector are independent of the particular coordinate representation.
Cartesian coordinates
In the three-dimensional Cartesian coordinate system with a Euclidean metric, the gradient, if it exists, is given by
where , , are the standard unit vectors in the directions of the , and coordinates, respectively. For example, the gradient of the function
is
or
In some applications it is customary to represent the gradient as a row vector or column vector of its components in a rectangular coordinate system; this article follows the convention of the gradient being a column vector, while the derivative is a row vector.
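Because the worked example above lost its formulas in transcription, the following is an illustrative sketch only: it assumes a sample function f(x, y, z) = x²y + z (not necessarily the function from the original example) and computes its Cartesian gradient symbolically.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Assumed sample function, chosen only for illustration; it is not
# necessarily the example used in the original article text.
f = x**2 * y + z

# The gradient in Cartesian coordinates is the vector of partial derivatives.
grad_f = sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
print(grad_f.T)                              # Matrix([[2*x*y, x**2, 1]])

# Evaluate at a point, e.g. (1, 2, 3): the direction of steepest ascent there.
print(grad_f.subs({x: 1, y: 2, z: 3}).T)     # Matrix([[4, 1, 1]])
```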
Cylindrical and spherical coordinates
In cylindrical coordinates with a Euclidean metric, the gradient is given by:
where is the axial distance, is the azimuthal or azimuth angle, is the axial coordinate, and , and are unit vectors pointing along the coordinate directions.
In spherical coordinates, the gradient is given by:
where is the radial distance, is the azimuthal angle and is the polar angle, and , and are again local unit vectors pointing in the coordinate directions (that is, the normalized covariant basis).
For the gradient in other orthogonal coordinate systems, see Orthogonal coordinates (Differential operators in three dimensions).
General coordinates
We consider general coordinates, which we write as , where is the number of dimensions of the domain. Here, the upper index refers to the position in the list of the coordinate or component, so refers to the second component—not the quantity squared. The index variable refers to an arbitrary element . Using Einstein notation, the gradient can then be written as:
(Note that its dual is ),
where and refer to the unnormalized local covariant and contravariant bases respectively, is the inverse metric tensor, and the Einstein summation convention implies summation over i and j.
If the coordinates are orthogonal we can easily express the gradient (and the differential) in terms of the normalized bases, which we refer to as and , using the scale factors (also known as Lamé coefficients) :
(and ),
where we cannot use Einstein notation, since it is impossible to avoid the repetition of more than two indices. Despite the use of upper and lower indices, , , and are neither contravariant nor covariant.
The latter expression evaluates to the expressions given above for cylindrical and spherical coordinates.
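As a hedged illustration of the orthogonal-coordinates formula, the sketch below uses the standard cylindrical scale factors (h_r, h_φ, h_z) = (1, r, 1); the sample function is an assumption chosen only for demonstration.

```python
import sympy as sp

r, phi, z = sp.symbols('r phi z', positive=True)

# For orthogonal coordinates the gradient components in the normalized basis
# are (1/h_i) * df/dq_i.  The cylindrical scale factors (1, r, 1) are standard;
# the sample function below is purely illustrative.
h = {r: 1, phi: r, z: 1}
f = r**2 * sp.sin(phi) + z

grad = [sp.simplify(sp.diff(f, q) / h[q]) for q in (r, phi, z)]
print(grad)   # [2*r*sin(phi), r*cos(phi), 1]
```

The output reproduces the familiar cylindrical gradient components (∂f/∂r, (1/r)∂f/∂φ, ∂f/∂z) for this function.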
Relationship with derivative
Relationship with total derivative
The gradient is closely related to the total derivative (total differential) : they are transpose (dual) to each other. Using the convention that vectors in are represented by column vectors, and that covectors (linear maps ) are represented by row vectors, the gradient and the derivative are expressed as a column and row vector, respectively, with the same components, but transpose of each other:
While these both have the same components, they differ in what kind of mathematical object they represent: at each point, the derivative is a cotangent vector, a linear form (or covector) which expresses how much the (scalar) output changes for a given infinitesimal change in (vector) input, while at each point, the gradient is a tangent vector, which represents an infinitesimal change in (vector) input. In symbols, the gradient is an element of the tangent space at a point, , while the derivative is a map from the tangent space to the real numbers, . The tangent spaces at each point of can be "naturally" identified with the vector space itself, and similarly the cotangent space at each point can be naturally identified with the dual vector space of covectors; thus the value of the gradient at a point can be thought of a vector in the original , not just as a tangent vector.
Computationally, given a tangent vector, the vector can be multiplied by the derivative (as matrices), which is equal to taking the dot product with the gradient:
Differential or (exterior) derivative
The best linear approximation to a differentiable function
at a point in is a linear map from to which is often denoted by or and called the differential or total derivative of at . The function , which maps to , is called the total differential or exterior derivative of and is an example of a differential 1-form.
Much as the derivative of a function of a single variable represents the slope of the tangent to the graph of the function, the directional derivative of a function in several variables represents the slope of the tangent hyperplane in the direction of the vector.
The gradient is related to the differential by the formula
for any , where is the dot product: taking the dot product of a vector with the gradient is the same as taking the directional derivative along the vector.
If is viewed as the space of (dimension ) column vectors (of real numbers), then one can regard as the row vector with components
so that is given by matrix multiplication. Assuming the standard Euclidean metric on , the gradient is then the corresponding column vector, that is,
Linear approximation to a function
The best linear approximation to a function can be expressed in terms of the gradient, rather than the derivative. The gradient of a function from the Euclidean space to at any particular point in characterizes the best linear approximation to at . The approximation is as follows:
for close to , where is the gradient of computed at , and the dot denotes the dot product on . This equation is equivalent to the first two terms in the multivariable Taylor series expansion of at .
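A minimal numeric check of this linear approximation, assuming a simple sample function f(x, y) = x² + 3y chosen only for illustration:

```python
import numpy as np

# First-order (gradient-based) approximation of an assumed sample function
# f(x, y) = x**2 + 3*y near a base point x0; purely illustrative.
def f(p):
    x, y = p
    return x**2 + 3*y

def grad_f(p):
    x, y = p
    return np.array([2*x, 3.0])

x0 = np.array([1.0, 2.0])
x  = np.array([1.1, 2.05])            # a nearby point

linear_estimate = f(x0) + grad_f(x0) @ (x - x0)
print(f(x), linear_estimate)          # 7.36 vs 7.35 (close for small steps)
```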
Relationship with Fréchet derivative
Let be an open set in . If the function is differentiable, then the differential of is the Fréchet derivative of . Thus is a function from to the space such that
where · is the dot product.
As a consequence, the usual properties of the derivative hold for the gradient, though the gradient is not a derivative itself, but rather dual to the derivative:
Linearity
The gradient is linear in the sense that if and are two real-valued functions differentiable at the point , and and are two constants, then is differentiable at , and moreover
Product rule
If and are real-valued functions differentiable at a point , then the product rule asserts that the product is differentiable at , and
Chain rule
Suppose that is a real-valued function defined on a subset of , and that is differentiable at a point . There are two forms of the chain rule applying to the gradient. First, suppose that the function is a parametric curve; that is, a function maps a subset into . If is differentiable at a point such that , then where ∘ is the composition operator: .
More generally, if instead , then the following holds:
where T denotes the transpose Jacobian matrix.
For the second form of the chain rule, suppose that is a real valued function on a subset of , and that is differentiable at the point . Then
Further properties and applications
Level sets
A level surface, or isosurface, is the set of all points where some function has a given value.
If is differentiable, then the dot product of the gradient at a point with a vector gives the directional derivative of at in the direction . It follows that in this case the gradient of is orthogonal to the level sets of . For example, a level surface in three-dimensional space is defined by an equation of the form . The gradient of is then normal to the surface.
More generally, any embedded hypersurface in a Riemannian manifold can be cut out by an equation of the form such that is nowhere zero. The gradient of is then normal to the hypersurface.
Similarly, an affine algebraic hypersurface may be defined by an equation , where is a polynomial. The gradient of is zero at a singular point of the hypersurface (this is the definition of a singular point). At a non-singular point, it is a nonzero normal vector.
Conservative vector fields and the gradient theorem
The gradient of a function is called a gradient field. A (continuous) gradient field is always a conservative vector field: its line integral along any path depends only on the endpoints of the path, and can be evaluated by the gradient theorem (the fundamental theorem of calculus for line integrals). Conversely, a (continuous) conservative vector field is always the gradient of a function.
Gradient is direction of steepest ascent
The gradient of a function at point is also the direction of its steepest ascent, i.e. it maximizes its directional derivative:
Let be an arbitrary unit vector. With the directional derivative defined as
we get, by substituting the function with its Taylor series,
where denotes higher order terms in .
Dividing by , and taking the limit yields a term which is bounded from above by the Cauchy-Schwarz inequality
Choosing maximizes the directional derivative, and equals the upper bound
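The following sketch illustrates this numerically for an assumed sample function f(x, y) = x²y: among many unit directions, the directional derivative is largest along the gradient direction, and its maximum equals the gradient's magnitude.

```python
import numpy as np

# Illustrative check that the directional derivative is maximized along the
# gradient, using an assumed sample function f(x, y) = x**2 * y.
def f(p):
    x, y = p
    return x**2 * y

def grad_f(p):
    x, y = p
    return np.array([2*x*y, x**2])

p = np.array([1.0, 2.0])
g = grad_f(p)

# Numerical directional derivatives along many unit directions.
angles = np.linspace(0, 2*np.pi, 360, endpoint=False)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
h = 1e-6
dd = np.array([(f(p + h*v) - f(p)) / h for v in dirs])

best = dirs[dd.argmax()]
print(best, g / np.linalg.norm(g))   # both ≈ [0.970, 0.243]
print(dd.max(), np.linalg.norm(g))   # both ≈ 4.12
```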
Generalizations
Jacobian
The Jacobian matrix is the generalization of the gradient for vector-valued functions of several variables and differentiable maps between Euclidean spaces or, more generally, manifolds. A further generalization for a function between Banach spaces is the Fréchet derivative.
Suppose is a function such that each of its first-order partial derivatives exist on . Then the Jacobian matrix of is defined to be an matrix, denoted by or simply . The th entry is . Explicitly
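As an illustrative example (the polar-to-Cartesian map is an assumed example, not one given in the text), the Jacobian matrix can be computed symbolically; each row is the gradient of one output component.

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# Jacobian of the polar-to-Cartesian map (x, y) = (r*cos(theta), r*sin(theta)).
F = sp.Matrix([r*sp.cos(theta), r*sp.sin(theta)])
J = F.jacobian([r, theta])
print(J)                      # Matrix([[cos(theta), -r*sin(theta)],
                              #         [sin(theta),  r*cos(theta)]])
print(sp.simplify(J.det()))   # r
```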
Gradient of a vector field
Since the total derivative of a vector field is a linear mapping from vectors to vectors, it is a tensor quantity.
In rectangular coordinates, the gradient of a vector field is defined by:
(where the Einstein summation notation is used and the tensor product of the vectors and is a dyadic tensor of type (2,0)). Overall, this expression equals the transpose of the Jacobian matrix:
In curvilinear coordinates, or more generally on a curved manifold, the gradient involves Christoffel symbols:
where are the components of the inverse metric tensor and the are the coordinate basis vectors.
Expressed more invariantly, the gradient of a vector field can be defined by the Levi-Civita connection and metric tensor:
where is the connection.
Riemannian manifolds
For any smooth function on a Riemannian manifold , the gradient of is the vector field such that for any vector field ,
that is,
where denotes the inner product of tangent vectors at defined by the metric and is the function that takes any point to the directional derivative of in the direction , evaluated at . In other words, in a coordinate chart from an open subset of to an open subset of , is given by:
where denotes the th component of in this coordinate chart.
So, the local form of the gradient takes the form:
Generalizing the case , the gradient of a function is related to its exterior derivative, since
More precisely, the gradient is the vector field associated to the differential 1-form using the musical isomorphism
(called "sharp") defined by the metric . The relation between the exterior derivative and the gradient of a function on is a special case of this in which the metric is the flat metric given by the dot product.
| Mathematics | Multivariable and vector calculus | null |
12462 | https://en.wikipedia.org/wiki/Gauss%20%28unit%29 | Gauss (unit) | The gauss (symbol: , sometimes Gs) is a unit of measurement of magnetic induction, also known as magnetic flux density. The unit is part of the Gaussian system of units, which inherited it from the older centimetre–gram–second electromagnetic units (CGS-EMU) system. It was named after the German mathematician and physicist Carl Friedrich Gauss in 1936. One gauss is defined as one maxwell per square centimetre.
As the centimetre–gram–second system of units (cgs system) has been superseded by the International System of Units (SI), the use of the gauss has been deprecated by the standards bodies, but is still regularly used in various subfields of science. The SI unit for magnetic flux density is the tesla (symbol T), which corresponds to 10,000 gauss (10⁴ G).
Name, symbol, and metric prefixes
Albeit not a component of the International System of Units, the usage of the gauss generally follows the rules for SI units. Since the name is derived from a person's name, its symbol is the uppercase letter "G". When the unit is spelled out, it is written in lowercase ("gauss"), unless it begins a sentence. The gauss may be combined with metric prefixes, such as in milligauss, mG (or mGs), or kilogauss, kG (or kGs).
Unit conversions
The gauss is the unit of magnetic flux density B in the system of Gaussian units and is equal to 1 Mx/cm² or 1 g/(Bi·s²), while the oersted is the unit of H-field. One tesla (T) corresponds to 10⁴ gauss, and one ampere (A) per metre corresponds to 4π × 10⁻³ oersted.
The units for magnetic flux Φ, which is the integral of magnetic B-field over an area, are the weber (Wb) in the SI and the maxwell (Mx) in the CGS-Gaussian system. The conversion factor is 10⁸ Mx/Wb, since flux is the integral of field over an area, area having the units of the square of distance, thus the magnetic field conversion factor times the square of the linear distance conversion factor: 10⁸ Mx/Wb = 10⁴ G/T × (10² cm/m)².
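A minimal sketch of these conversions in code, using only the factors stated above:

```python
# Conversion helpers for the relations stated above:
#   1 T = 10**4 G   and   1 Wb = 10**8 Mx.
def tesla_to_gauss(b_tesla: float) -> float:
    return b_tesla * 1e4

def weber_to_maxwell(flux_weber: float) -> float:
    return flux_weber * 1e8

print(tesla_to_gauss(1.0))      # 10000.0 G
print(weber_to_maxwell(1.0))    # 100000000.0 Mx

# Consistency check: 1e8 Mx/Wb == 1e4 G/T * (1e2 cm/m)**2
print(1e4 * (1e2)**2 == 1e8)    # True
```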
Typical values
10⁻⁹–10⁻⁸ G – the magnetic field of the human brain
10⁻⁶–10⁻³ G – the magnetic field of Galactic molecular clouds. Typical magnetic field strengths within the interstellar medium of the Milky Way are ~5 μG.
0.25–0.60 G – the Earth's magnetic field at its surface
4 G – near Jupiter's equator
25 G – the Earth's magnetic field in its core
50 G – a typical refrigerator magnet
100 G – an iron magnet
1500 G – within a sun spot
10000 to 13000 G – remanence of a neodymium-iron-boron (NIB) magnet
16000 to 22000 G – saturation of high permeability iron alloys used in transformers
3000–70000 G – a medical magnetic resonance imaging machine
10¹²–10¹³ G – the surface of a neutron star
4 × 10¹³ G – the Schwinger limit
10¹⁴ G – the magnetic field of SGR J1745-2900, orbiting the supermassive black hole Sgr A* in the center of the Milky Way.
10¹⁵ G – the magnetic field of some newly created magnetars
10¹⁷ G – the upper limit to neutron star magnetism
| Physical sciences | Magnetic field | Basics and measurement |
12463 | https://en.wikipedia.org/wiki/Glacier | Glacier | A glacier (; ) is a persistent body of dense ice that is constantly moving downhill under its own weight. A glacier forms where the accumulation of snow exceeds its ablation over many years, often centuries. It acquires distinguishing features, such as crevasses and seracs, as it slowly flows and deforms under stresses induced by its weight. As it moves, it abrades rock and debris from its substrate to create landforms such as cirques, moraines, or fjords. Although a glacier may flow into a body of water, it forms only on land and is distinct from the much thinner sea ice and lake ice that form on the surface of bodies of water.
On Earth, 99% of glacial ice is contained within vast ice sheets (also known as "continental glaciers") in the polar regions, but glaciers may be found in mountain ranges on every continent other than the Australian mainland, including Oceania's high-latitude oceanic island countries such as New Zealand. Between latitudes 35°N and 35°S, glaciers occur only in the Himalayas, Andes, and a few high mountains in East Africa, Mexico, New Guinea and on Zard-Kuh in Iran. With more than 7,000 known glaciers, Pakistan has more glacial ice than any other country outside the polar regions. Glaciers cover about 10% of Earth's land surface. Continental glaciers cover nearly or about 98% of Antarctica's , with an average thickness of ice . Greenland and Patagonia also have huge expanses of continental glaciers. The volume of glaciers, not including the ice sheets of Antarctica and Greenland, has been estimated at 170,000 km3.
Glacial ice is the largest reservoir of fresh water on Earth, holding with ice sheets about 69 percent of the world's freshwater. Many glaciers from temperate, alpine and seasonal polar climates store water as ice during the colder seasons and release it later in the form of meltwater as warmer summer temperatures cause the glacier to melt, creating a water source that is especially important for plants, animals and human uses when other sources may be scant. However, within high-altitude and Antarctic environments, the seasonal temperature difference is often not sufficient to release meltwater.
Since glacial mass is affected by long-term climatic changes, e.g., precipitation, mean temperature, and cloud cover, glacial mass changes are considered among the most sensitive indicators of climate change and are a major source of variations in sea level.
A large piece of compressed ice, or a glacier, appears blue, as large quantities of water appear blue, because water molecules absorb other colors more efficiently than blue. The other reason for the blue color of glaciers is the lack of air bubbles. Air bubbles, which give a white color to ice, are squeezed out by pressure, increasing the density of the ice.
Etymology and related terms
The word glacier is a loanword from French and goes back, via Franco-Provençal, to the Vulgar Latin , derived from the Late Latin , and ultimately Latin , meaning "ice". The processes and features caused by or related to glaciers are referred to as glacial. The process of glacier establishment, growth and flow is called glaciation. The corresponding area of study is called glaciology. Glaciers are important components of the global cryosphere.
Types
Classification by size, shape and behavior
Glaciers are categorized by their morphology, thermal characteristics, and behavior. Alpine glaciers form on the crests and slopes of mountains. A glacier that fills a valley is called a valley glacier, or alternatively, an alpine glacier or mountain glacier. A large body of glacial ice astride a mountain, mountain range, or volcano is termed an ice cap or ice field. Ice caps have an area less than by definition.
Glacial bodies larger than are called ice sheets or continental glaciers. Several kilometers deep, they obscure the underlying topography. Only nunataks protrude from their surfaces. The only extant ice sheets are the two that cover most of Antarctica and Greenland. They contain vast quantities of freshwater, enough that if both melted, global sea levels would rise by over . Portions of an ice sheet or cap that extend into water are called ice shelves; they tend to be thin with limited slopes and reduced velocities. Narrow, fast-moving sections of an ice sheet are called ice streams. In Antarctica, many ice streams drain into large ice shelves. Some drain directly into the sea, often with an ice tongue, like Mertz Glacier.
Tidewater glaciers are glaciers that terminate in the sea, including most glaciers flowing from Greenland, Antarctica, Baffin, Devon, and Ellesmere Islands in Canada, Southeast Alaska, and the Northern and Southern Patagonian Ice Fields. As the ice reaches the sea, pieces break off or calve, forming icebergs. Most tidewater glaciers calve above sea level, which often results in a tremendous impact as the iceberg strikes the water. Tidewater glaciers undergo centuries-long cycles of advance and retreat that are much less affected by climate change than other glaciers.
Classification by thermal state
Thermally, a temperate glacier is at a melting point throughout the year, from its surface to its base. The ice of a polar glacier is always below the freezing threshold from the surface to its base, although the surface snowpack may experience seasonal melting. A subpolar glacier includes both temperate and polar ice, depending on the depth beneath the surface and position along the length of the glacier. In a similar way, the thermal regime of a glacier is often described by its basal temperature. A cold-based glacier is below freezing at the ice-ground interface and is thus frozen to the underlying substrate. A warm-based glacier is above or at freezing at the interface and is able to slide at this contact. This contrast is thought to a large extent to govern the ability of a glacier to effectively erode its bed, as sliding ice promotes plucking at rock from the surface below. Glaciers which are partly cold-based and partly warm-based are known as polythermal.
Formation
Glaciers form where the accumulation of snow and ice exceeds ablation. A glacier usually originates from a cirque landform (alternatively known as a corrie or as a ) – a typically armchair-shaped geological feature (such as a depression between mountains enclosed by arêtes) – which collects and compresses through gravity the snow that falls into it. This snow accumulates and the weight of the snow falling above compacts it, forming névé (granular snow). Further crushing of the individual snowflakes and squeezing the air from the snow turns it into "glacial ice". This glacial ice will fill the cirque until it "overflows" through a geological weakness or vacancy, such as a gap between two mountains. When the mass of snow and ice reaches sufficient thickness, it begins to move by a combination of surface slope, gravity, and pressure. On steeper slopes, this can occur with as little as of snow-ice.
In temperate glaciers, snow repeatedly freezes and thaws, changing into granular ice called firn. Under the pressure of the layers of ice and snow above it, this granular ice fuses into denser firn. Over a period of years, layers of firn undergo further compaction and become glacial ice. Glacier ice is slightly more dense than ice formed from frozen water because glacier ice contains fewer trapped air bubbles.
Glacial ice has a distinctive blue tint because it absorbs some red light due to an overtone of the infrared OH stretching mode of the water molecule. (Liquid water appears blue for the same reason. The blue of glacier ice is sometimes misattributed to Rayleigh scattering of bubbles in the ice.)
Structure
A glacier originates at a location called its glacier head and terminates at its glacier foot, snout, or terminus.
Glaciers are broken into zones based on surface snowpack and melt conditions. The ablation zone is the region where there is a net loss in glacier mass. The upper part of a glacier, where accumulation exceeds ablation, is called the accumulation zone. The equilibrium line separates the ablation zone and the accumulation zone; it is the contour where the amount of new snow gained by accumulation is equal to the amount of ice lost through ablation. In general, the accumulation zone accounts for 60–70% of the glacier's surface area, more if the glacier calves icebergs. Ice in the accumulation zone is deep enough to exert a downward force that erodes underlying rock. After a glacier melts, it often leaves behind a bowl- or amphitheater-shaped depression that ranges in size from large basins like the Great Lakes to smaller mountain depressions known as cirques.
The accumulation zone can be subdivided based on its melt conditions.
The dry snow zone is a region where no melt occurs, even in the summer, and the snowpack remains dry.
The percolation zone is an area with some surface melt, causing meltwater to percolate into the snowpack. This zone is often marked by refrozen ice lenses, glands, and layers. The snowpack also never reaches the melting point.
Near the equilibrium line on some glaciers, a superimposed ice zone develops. This zone is where meltwater refreezes as a cold layer in the glacier, forming a continuous mass of ice.
The wet snow zone is the region where all of the snow deposited since the end of the previous summer has been raised to 0 °C.
The health of a glacier is usually assessed by determining the glacier mass balance or observing terminus behavior. Healthy glaciers have large accumulation zones, more than 60% of their area is snow-covered at the end of the melt season, and they have a terminus with a vigorous flow.
Following the Little Ice Age's end around 1850, glaciers around the Earth have retreated substantially. A slight cooling led to the advance of many alpine glaciers between 1950 and 1985, but since 1985 glacier retreat and mass loss has become larger and increasingly ubiquitous.
Motion
Glaciers move downhill by the force of gravity and the internal deformation of ice. At the molecular level, ice consists of stacked layers of molecules with relatively weak bonds between layers. When the amount of strain (deformation) is proportional to the stress being applied, ice will act as an elastic solid. Ice needs to be at least thick to even start flowing, but once its thickness exceeds about , the stress on the layer above exceeds the inter-layer binding strength, and that layer then moves faster than the layer below. This means that small amounts of stress can result in a large amount of strain, causing the deformation to become a plastic flow rather than elastic. Then, the glacier will begin to deform under its own weight and flow across the landscape. According to the Glen–Nye flow law, the relationship between stress and strain, and thus the rate of internal flow, can be modeled as follows:
Σ = k τⁿ
where:
Σ = shear strain (flow) rate
τ = stress
n = a constant between 2–4 (typically 3 for most glaciers)
k = a temperature-dependent constant
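A minimal sketch of evaluating this power-law relation; the flow-law constant k and the stress values below are placeholders chosen only for illustration, not measured glacier parameters:

```python
# Illustrative evaluation of the Glen-Nye power-law relation between shear
# stress and strain rate.  The constant k below is a placeholder chosen only
# for demonstration, not a measured value.
def shear_strain_rate(stress_pa: float, k: float = 1e-24, n: float = 3.0) -> float:
    """Return the shear strain (flow) rate for a given shear stress."""
    return k * stress_pa ** n

for stress in (5e4, 1e5, 2e5):          # shear stresses in pascals
    print(stress, shear_strain_rate(stress))
# Because n ≈ 3, doubling the stress increases the flow rate roughly eightfold.
```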
The lowest velocities are near the base of the glacier and along valley sides where friction acts against flow, causing the most deformation. Velocity increases inward toward the center line and upward, as the amount of deformation decreases. The highest flow velocities are found at the surface, representing the sum of the velocities of all the layers below.
Because ice can flow faster where it is thicker, the rate of glacier-induced erosion is directly proportional to the thickness of overlying ice. Consequently, pre-glacial low hollows will be deepened and pre-existing topography will be amplified by glacial action, while nunataks, which protrude above ice sheets, barely erode at all – erosion has been estimated as 5 m per 1.2 million years. This explains, for example, the deep profile of fjords, which can reach a kilometer in depth as ice is topographically steered into them. The extension of fjords inland increases the rate of ice sheet thinning since they are the principal conduits for draining ice sheets. It also makes the ice sheets more sensitive to changes in climate and the ocean.
Although evidence in favor of glacial flow was known by the early 19th century, other theories of glacial motion were advanced, such as the idea that meltwater, refreezing inside glaciers, caused the glacier to dilate and extend its length. As it became clear that glaciers behaved to some degree as if the ice were a viscous fluid, it was argued that "regelation", or the melting and refreezing of ice at a temperature lowered by the pressure on the ice inside the glacier, was what allowed the ice to deform and flow. James Forbes came up with the essentially correct explanation in the 1840s, although it was several decades before it was fully accepted.
Fracture zone and cracks
The top of a glacier is rigid because it is under low pressure. This upper section is known as the fracture zone and moves mostly as a single unit over the plastic-flowing lower section. When a glacier moves through irregular terrain, cracks called crevasses develop in the fracture zone. Crevasses form because of differences in glacier velocity. If two rigid sections of a glacier move at different speeds or directions, shear forces cause them to break apart, opening a crevasse. Crevasses are seldom more than deep but, in some cases, can be at least deep. Beneath this point, the plasticity of the ice prevents the formation of cracks. Intersecting crevasses can create isolated peaks in the ice, called seracs.
Crevasses can form in several different ways. Transverse crevasses are transverse to flow and form where steeper slopes cause a glacier to accelerate. Longitudinal crevasses form semi-parallel to flow where a glacier expands laterally. Marginal crevasses form near the edge of the glacier, caused by the reduction in speed caused by friction of the valley walls. Marginal crevasses are largely transverse to flow. Moving glacier ice can sometimes separate from the stagnant ice above, forming a bergschrund. Bergschrunds resemble crevasses but are singular features at a glacier's margins. Crevasses make travel over glaciers hazardous, especially when they are hidden by fragile snow bridges.
Below the equilibrium line, glacial meltwater is concentrated in stream channels. Meltwater can pool in proglacial lakes on top of a glacier or descend into the depths of a glacier via moulins. Streams within or beneath a glacier flow in englacial or sub-glacial tunnels. These tunnels sometimes reemerge at the glacier's surface.
Subglacial processes
Most of the important processes controlling glacial motion occur in the ice-bed contact—even though it is only a few meters thick. The bed's temperature, roughness and softness define basal shear stress, which in turn defines whether movement of the glacier will be accommodated by motion in the sediments, or if it will be able to slide. A soft bed, with high porosity and low pore fluid pressure, allows the glacier to move by sediment sliding: the base of the glacier may even remain frozen to the bed, where the underlying sediment slips underneath it like a tube of toothpaste. A hard bed cannot deform in this way; therefore the only way for hard-based glaciers to move is by basal sliding, where meltwater forms between the ice and the bed itself. Whether a bed is hard or soft depends on the porosity and pore pressure; higher porosity decreases the sediment strength (thus increases the shear stress τB).
Porosity may vary through a range of methods.
Movement of the overlying glacier may cause the bed to undergo dilatancy; the resulting shape change reorganizes closely packed blocks (a little like neatly folded, tightly packed clothes in a suitcase) into a messy jumble (just as clothes never fit back in when thrown in a disordered fashion). This increases the porosity. Unless water is added, this will necessarily reduce the pore pressure (as the pore fluids have more space to occupy).
Pressure may cause compaction and consolidation of underlying sediments. Since water is relatively incompressible, this is easier when the pore space is filled with vapor; any water must be removed to permit compression. In soils, this is an irreversible process.
Sediment degradation by abrasion and fracture decreases the size of particles, which tends to decrease pore space. However, the motion of the particles may disorder the sediment, with the opposite effect. These processes also generate heat.
Bed softness may vary in space or time, and changes dramatically from glacier to glacier. An important factor is the underlying geology; glacial speeds tend to differ more when they change bedrock than when the gradient changes. Further, bed roughness can also act to slow glacial motion. The roughness of the bed is a measure of how many boulders and obstacles protrude into the overlying ice. Ice flows around these obstacles by melting under the high pressure on their stoss side; the resultant meltwater is then forced into the cavity arising in their lee side, where it re-freezes.
As well as affecting the sediment stress, fluid pressure (pw) can affect the friction between the glacier and the bed. High fluid pressure provides a buoyancy force upwards on the glacier, reducing the friction at its base. The fluid pressure is compared to the ice overburden pressure, pi, given by ρgh. Under fast-flowing ice streams, these two pressures will be approximately equal, with an effective pressure (pi – pw) of 30 kPa; i.e. all of the weight of the ice is supported by the underlying water, and the glacier is afloat.
Basal melting and sliding
Glaciers may also move by basal sliding, where the base of the glacier is lubricated by the presence of liquid water, reducing basal shear stress and allowing the glacier to slide over the terrain on which it sits. Meltwater may be produced by pressure-induced melting, friction or geothermal heat. The more variable the amount of melting at the surface of the glacier, the faster the ice will flow. Basal sliding is dominant in temperate or warm-based glaciers.
τD = ρgh sin α
where τD is the driving stress, and α the ice surface slope in radians.
τB is the basal shear stress, a function of bed temperature and softness.
τF, the shear stress, is the lower of τB and τD. It controls the rate of plastic flow.
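A minimal sketch of the driving-stress relation above; the ice thickness and surface slope are assumed example values, not figures from the text:

```python
import math

# Driving stress from the relation above: tau_D = rho * g * h * sin(alpha).
# The thickness and surface slope below are example values only.
rho_ice = 917.0              # density of glacier ice, kg/m^3
g = 9.81                     # gravitational acceleration, m/s^2
h = 200.0                    # assumed ice thickness, m
alpha = math.radians(2.0)    # assumed surface slope of 2 degrees

tau_d = rho_ice * g * h * math.sin(alpha)
print(f"{tau_d/1000:.1f} kPa")   # ≈ 62.8 kPa, a typical order of magnitude
```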
The presence of basal meltwater depends on both bed temperature and other factors. For instance, the melting point of water decreases under pressure, meaning that water melts at a lower temperature under thicker glaciers. This acts as a "double whammy", because thicker glaciers have a lower heat conductance, meaning that the basal temperature is also likely to be higher. Bed temperature tends to vary in a cyclic fashion. A cool bed has a high strength, reducing the speed of the glacier. This increases the rate of accumulation, since newly fallen snow is not transported away. Consequently, the glacier thickens, with three consequences: firstly, the bed is better insulated, allowing greater retention of geothermal heat.
Secondly, the increased pressure can facilitate melting. Most importantly, τD is increased. These factors will combine to accelerate the glacier. As friction increases with the square of velocity, faster motion will greatly increase frictional heating, with ensuing melting – which causes a positive feedback, increasing ice speed to a faster flow rate still: west Antarctic glaciers are known to reach velocities of up to a kilometer per year.
Eventually, the ice will be surging fast enough that it begins to thin, as accumulation cannot keep up with the transport. This thinning will increase the conductive heat loss, slowing the glacier and causing freezing. This freezing will slow the glacier further, often until it is stationary, whence the cycle can begin again.
The flow of water under the glacial surface can have a large effect on the motion of the glacier itself. Subglacial lakes contain significant amounts of water, which can move fast: cubic kilometers can be transported between lakes over the course of a couple of years. This motion is thought to occur in two main modes: pipe flow involves liquid water moving through pipe-like conduits, like a sub-glacial river; sheet flow involves motion of water in a thin layer. A switch between the two flow conditions may be associated with surging behavior. Indeed, the loss of sub-glacial water supply has been linked with the shut-down of ice movement in the Kamb ice stream. The subglacial motion of water is expressed in the surface topography of ice sheets, which slump down into vacated subglacial lakes.
Speed
The speed of glacial displacement is partly determined by friction. Friction makes the ice at the bottom of the glacier move more slowly than ice at the top. In alpine glaciers, friction is also generated at the valley's sidewalls, which slows the edges relative to the center.
Mean glacial speed varies greatly but is typically around per day. There may be no motion in stagnant areas; for example, in parts of Alaska, trees can establish themselves on surface sediment deposits. In other cases, glaciers can move as fast as per day, such as in Greenland's Jakobshavn Isbræ. Glacial speed is affected by factors such as slope, ice thickness, snowfall, longitudinal confinement, basal temperature, meltwater production, and bed hardness.
A few glaciers have periods of very rapid advancement called surges. These glaciers exhibit normal movement until suddenly they accelerate, then return to their previous movement state. These surges may be caused by the failure of the underlying bedrock, the pooling of meltwater at the base of the glacier — perhaps delivered from a supraglacial lake — or the simple accumulation of mass beyond a critical "tipping point". Temporary rates up to per day have occurred when increased temperature or overlying pressure caused bottom ice to melt and water to accumulate beneath a glacier.
In glaciated areas where the glacier moves faster than one km per year, glacial earthquakes occur. These are large scale earthquakes that have seismic magnitudes as high as 6.1. The number of glacial earthquakes in Greenland peaks every year in July, August, and September and increased rapidly in the 1990s and 2000s. In a study using data from January 1993 through October 2005, more events were detected every year since 2002, and twice as many events were recorded in 2005 as there were in any other year.
Ogives
Ogives or Forbes bands are alternating wave crests and valleys that appear as dark and light bands of ice on glacier surfaces. They are linked to seasonal motion of glaciers; the width of one dark and one light band generally equals the annual movement of the glacier. Ogives are formed when ice from an icefall is severely broken up, increasing ablation surface area during summer. This creates a swale and space for snow accumulation in the winter, which in turn creates a ridge. Sometimes ogives consist only of undulations or color bands and are described as wave ogives or band ogives.
Geography
Glaciers are present on every continent and in approximately fifty countries, excluding those (Australia, South Africa) that have glaciers only on distant subantarctic island territories. Extensive glaciers are found in Antarctica, Argentina, Chile, Canada, Pakistan, Alaska, Greenland and Iceland. Mountain glaciers are widespread, especially in the Andes, the Himalayas, the Rocky Mountains, the Caucasus, Scandinavian Mountains, and the Alps. Snezhnika glacier in Pirin Mountain, Bulgaria with a latitude of 41°46′09″ N is the southernmost glacial mass in Europe. Mainland Australia currently contains no glaciers, although a small glacier on Mount Kosciuszko was present in the last glacial period. In New Guinea, small, rapidly diminishing, glaciers are located on Puncak Jaya. Africa has glaciers on Mount Kilimanjaro in Tanzania, on Mount Kenya, and in the Rwenzori Mountains. Oceanic islands with glaciers include Iceland, several of the islands off the coast of Norway including Svalbard and Jan Mayen to the far north, New Zealand and the subantarctic islands of Marion, Heard, Grande Terre (Kerguelen) and Bouvet. During glacial periods of the Quaternary, Taiwan, Hawaii on Mauna Kea and Tenerife also had large alpine glaciers, while the Faroe and Crozet Islands were completely glaciated.
The permanent snow cover necessary for glacier formation is affected by factors such as the degree of slope on the land, amount of snowfall and the winds. Glaciers can be found in all latitudes except from 20° to 27° north and south of the equator where the presence of the descending limb of the Hadley circulation lowers precipitation so much that with high insolation snow lines reach above . Between 19˚N and 19˚S, however, precipitation is higher, and the mountains above usually have permanent snow.
Even at high latitudes, glacier formation is not inevitable. Areas of the Arctic, such as Banks Island, and the McMurdo Dry Valleys in Antarctica are considered polar deserts where glaciers cannot form because they receive little snowfall despite the bitter cold. Cold air, unlike warm air, is unable to transport much water vapor. Even during glacial periods of the Quaternary, Manchuria, lowland Siberia, and central and northern Alaska, though extraordinarily cold, had such light snowfall that glaciers could not form.
In addition to the dry, unglaciated polar regions, some mountains and volcanoes in Bolivia, Chile and Argentina are high () and cold, but the relative lack of precipitation prevents snow from accumulating into glaciers. This is because these peaks are located near or in the hyperarid Atacama Desert.
Glacial geology
Erosion
Glaciers erode terrain through two principal processes: plucking and abrasion.
As glaciers flow over bedrock, they loosen and lift blocks of rock into the ice. This process, called plucking, is caused by subglacial water that penetrates fractures in the bedrock and subsequently freezes and expands. This expansion causes the ice to act as a lever that loosens the rock by lifting it. Thus, sediments of all sizes become part of the glacier's load. If a retreating glacier gains enough debris, it may become a rock glacier, like the Timpanogos Glacier in Utah.
Abrasion occurs when the ice and its load of rock fragments slide over bedrock and function as sandpaper, smoothing and polishing the bedrock below. The pulverized rock this process produces is called rock flour and is made up of rock grains between 0.002 and 0.00625 mm in size. Abrasion leads to steeper valley walls and mountain slopes in alpine settings, which can cause avalanches and rock slides, which add even more material to the glacier. Glacial abrasion is commonly characterized by glacial striations. Glaciers produce these when they contain large boulders that carve long scratches in the bedrock. By mapping the direction of the striations, researchers can determine the direction of the glacier's movement. Similar to striations are chatter marks, lines of crescent-shaped depressions in the rock underlying a glacier. They are formed by abrasion when boulders in the glacier are repeatedly caught and released as they are dragged along the bedrock. The rate of glacier erosion varies. Six factors control erosion rate:
Velocity of glacial movement
Thickness of the ice
Shape, abundance and hardness of rock fragments contained in the ice at the bottom of the glacier
Relative ease of erosion of the surface under the glacier
Thermal conditions at the glacier base
Permeability and water pressure at the glacier base
When the bedrock has frequent fractures on the surface, glacial erosion rates tend to increase as plucking is the main erosive force on the surface; when the bedrock has wide gaps between sporadic fractures, however, abrasion tends to be the dominant erosive form and glacial erosion rates become slow. Glaciers in lower latitudes tend to be much more erosive than glaciers in higher latitudes, because more meltwater reaches the glacial base, facilitating sediment production and transport for a given ice speed and ice volume.
Material that becomes incorporated in a glacier is typically carried as far as the zone of ablation before being deposited. Glacial deposits are of two distinct types:
Glacial till: material directly deposited from glacial ice. Till includes a mixture of undifferentiated material ranging from clay size to boulders, the usual composition of a moraine.
Fluvial and outwash sediments: sediments deposited by water. These deposits are stratified by size.
Larger pieces of rock that are encrusted in till or deposited on the surface are called "glacial erratics". They range in size from pebbles to boulders, but as they are often moved great distances, they may be drastically different from the material upon which they are found. Patterns of glacial erratics hint at past glacial motions.
Moraines
Glacial moraines are formed by the deposition of material from a glacier and are exposed after the glacier has retreated. They usually appear as linear mounds of till, a non-sorted mixture of rock, gravel, and boulders within a matrix of fine powdery material. Terminal or end moraines are formed at the foot or terminal end of a glacier. Lateral moraines are formed on the sides of the glacier. Medial moraines are formed when two different glaciers merge and the lateral moraines of each coalesce to form a moraine in the middle of the combined glacier. Less apparent are ground moraines, also called glacial drift, which often blanket the surface underneath the glacier downslope from the equilibrium line. The term moraine is of French origin. It was coined by peasants to describe alluvial embankments and rims found near the margins of glaciers in the French Alps. In modern geology, the term is used more broadly and is applied to a series of formations, all of which are composed of till. Moraines can also create moraine-dammed lakes.
Drumlins
Drumlins are asymmetrical, canoe-shaped hills made mainly of till. Their heights vary from 15 to 50 meters, and they can reach a kilometer in length. The steepest side of the hill faces the direction from which the ice advanced (stoss), while a longer slope is left in the ice's direction of movement (lee). Drumlins are found in groups called drumlin fields or drumlin camps. One of these fields is found east of Rochester, New York; it is estimated to contain about 10,000 drumlins. Although the process that forms drumlins is not fully understood, their shape implies that they are products of the plastic deformation zone of ancient glaciers. It is believed that many drumlins were formed when glaciers advanced over and altered the deposits of earlier glaciers.
Glacial valleys, cirques, arêtes, and pyramidal peaks
Before glaciation, mountain valleys have a characteristic "V" shape, produced by eroding water. During glaciation, these valleys are often widened, deepened and smoothed to form a U-shaped glacial valley or glacial trough, as it is sometimes called. The erosion that creates glacial valleys truncates any spurs of rock or earth that may have earlier extended across the valley, creating broadly triangular-shaped cliffs called truncated spurs. Within glacial valleys, depressions created by plucking and abrasion can be filled by lakes, called paternoster lakes. If a glacial valley runs into a large body of water, it forms a fjord.
Typically glaciers deepen their valleys more than their smaller tributaries. Therefore, when glaciers recede, the valleys of the tributary glaciers remain above the main glacier's depression and are called hanging valleys.
At the start of a classic valley glacier is a bowl-shaped cirque, which has escarped walls on three sides but is open on the side that descends into the valley. Cirques are where ice begins to accumulate in a glacier. Two glacial cirques may form back to back and erode their backwalls until only a narrow ridge, called an arête, is left. This structure may result in a mountain pass. If multiple cirques encircle a single mountain, they create pointed pyramidal peaks; particularly steep examples are called horns.
Roches moutonnées
Passage of glacial ice over an area of bedrock may cause the rock to be sculpted into a knoll called a roche moutonnée, or "sheepback" rock. Roches moutonnées may be elongated, rounded and asymmetrical in shape. They range in length from less than a meter to several hundred meters long. Roches moutonnées have a gentle slope on their up-glacier sides and a steep to vertical face on their down-glacier sides. The glacier abrades the smooth slope on the upstream side as it flows along, but tears rock fragments loose and carries them away from the downstream side via plucking.
Alluvial stratification
As the water that rises from the ablation zone moves away from the glacier, it carries fine eroded sediments with it. As the speed of the water decreases, so does its capacity to carry objects in suspension. The water thus gradually deposits the sediment as it runs, creating an alluvial plain. When this phenomenon occurs in a valley, it is called a valley train. When the deposition is in an estuary, the sediments are known as bay mud. Outwash plains and valley trains are usually accompanied by basins known as "kettles". These are small lakes formed when large ice blocks that are trapped in alluvium melt and produce water-filled depressions. Kettle diameters range from 5 m to 13 km, with depths of up to 45 meters. Most are circular in shape because the blocks of ice that formed them were rounded as they melted.
Glacial deposits
When a glacier's size shrinks below a critical point, its flow stops and it becomes stationary. Meanwhile, meltwater within and beneath the ice leaves stratified alluvial deposits. These deposits, in the forms of columns, terraces and clusters, remain after the glacier melts and are known as "glacial deposits". Glacial deposits that take the shape of hills or mounds are called kames. Some kames form when meltwater deposits sediments through openings in the interior of the ice. Others are produced by fans or deltas created by meltwater. When the glacial ice occupies a valley, it can form terraces or kames along the sides of the valley. Long, sinuous glacial deposits are called eskers. Eskers are composed of sand and gravel that was deposited by meltwater streams that flowed through ice tunnels within or beneath a glacier. They remain after the ice melts, with heights exceeding 100 meters and lengths of as long as 100 km.
Loess deposits
Very fine glacial sediments or rock flour is often picked up by wind blowing over the bare surface and may be deposited great distances from the original fluvial deposition site. These eolian loess deposits may be very deep, even hundreds of meters, as in areas of China and the Midwestern United States. Katabatic winds can be important in this process.
Retreat of glaciers due to climate change
Glaciers, which can be hundreds of thousands of years old, are used to track climate change over long periods of time. Researchers melt or crush samples from glacier ice cores, whose progressively deeper layers represent progressively earlier times in Earth's climate history. The researchers apply various instruments to the contents of bubbles trapped in the cores' layers in order to track changes in the atmosphere's composition. Temperatures are deduced from the differing relative concentrations of the gases, confirming that for at least the last million years, global temperatures have been linked to carbon dioxide concentrations.
Human activities in the industrial era have increased the concentration of carbon dioxide and other heat-trapping greenhouse gases in the air, causing current global warming. Human influence is the principal driver of changes to the cryosphere of which glaciers are a part.
Global warming creates positive feedback loops with glaciers. For example, in ice–albedo feedback, rising temperatures increase glacier melt, exposing more of Earth's land and sea surface (which is darker than glacier ice), allowing sunlight to warm the surface rather than being reflected back into space. Reference glaciers tracked by the World Glacier Monitoring Service have lost ice every year since 1988. A study covering the period 1995 to 2022 showed that the flow velocities of glaciers in the Alps accelerate and slow down to a similar extent at the same time, despite the large distances between them, which indicates that their speed is controlled by climate change.
Water runoff from melting glaciers causes global sea level to rise, a phenomenon the IPCC terms a "slow onset" event. Impacts at least partially attributable to sea level rise include for example encroachment on coastal settlements and infrastructure, existential threats to small islands and low-lying coasts, losses of coastal ecosystems and ecosystem services, groundwater salinization, and compounding damage from tropical cyclones, flooding, storm surges, and land subsidence.
Isostatic rebound
Large masses, such as ice sheets or glaciers, can depress the crust of the Earth into the mantle. The depression usually totals a third of the ice sheet or glacier's thickness. After the ice sheet or glacier melts, the mantle begins to flow back to its original position, pushing the crust back up. This post-glacial rebound, which proceeds very slowly after the melting of the ice sheet or glacier, is currently occurring in measurable amounts in Scandinavia and the Great Lakes region of North America.
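The one-third figure is consistent with a simple isostatic balance. As a rough, hedged check using typical densities assumed here (ice about 917 kg/m3 and mantle rock about 3,300 kg/m3, values not given in the text), the depression d under an ice sheet of thickness h is approximately

$$ d \approx \frac{\rho_{\text{ice}}}{\rho_{\text{mantle}}}\,h \approx \frac{917}{3300}\,h \approx 0.28\,h, $$

so a 3,000 m thick ice sheet would depress the crust by roughly 850 m, close to the one-third rule of thumb quoted above.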
A geomorphological feature created by the same process on a smaller scale is known as dilation-faulting. It occurs where previously compressed rock is allowed to return to its original shape more rapidly than can be maintained without faulting. This leads to an effect similar to what would be seen if the rock were hit by a large hammer. Dilation faulting can be observed in recently de-glaciated parts of Iceland and Cumbria.
On other planets
The polar ice caps of Mars show geologic evidence of glacial deposits. The south polar cap is especially comparable to glaciers on Earth. Topographical features and computer models indicate the existence of more glaciers in Mars' past. At mid-latitudes, between 35° and 65° north or south, Martian glaciers are affected by the thin Martian atmosphere. Because of the low atmospheric pressure, ablation near the surface is solely caused by sublimation, not melting. As on Earth, many glaciers are covered with a layer of rocks which insulates the ice. A radar instrument on board the Mars Reconnaissance Orbiter found ice under a thin layer of rocks in formations called lobate debris aprons (LDAs).
In 2015, as New Horizons flew by the Pluto-Charon system, the spacecraft discovered a massive basin covered in a layer of nitrogen ice on Pluto. A large portion of the basin's surface is divided into irregular polygonal features separated by narrow troughs, interpreted as convection cells fueled by internal heat from Pluto's interior. Glacial flows were also observed near Sputnik Planitia's margins, appearing to flow both into and out of the basin.
| Physical sciences | Glaciology | null |
12505 | https://en.wikipedia.org/wiki/Galilean%20moons | Galilean moons | The Galilean moons (), or Galilean satellites, are the four largest moons of Jupiter: Io, Europa, Ganymede, and Callisto. They are the most readily visible Solar System objects after Saturn, the dimmest of the classical planets; though their closeness to bright Jupiter makes naked-eye observation very difficult, they are readily seen with common binoculars, even under night sky conditions of high light pollution. The invention of the telescope enabled the discovery of the moons in 1610. Through this, they became the first Solar System objects discovered since humans have started tracking the classical planets, and the first objects to be found to orbit any planet beyond Earth.
They are planetary-mass moons and among the largest objects in the Solar System. All four, along with Titan, Triton, and Earth's Moon, are larger than any of the Solar System's dwarf planets. The largest, Ganymede, is the largest moon in the Solar System and surpasses the planet Mercury in size (though not mass). Callisto is only slightly smaller than Mercury in size; the smaller ones, Io and Europa, are about the size of the Moon. The three inner moons — Io, Europa, and Ganymede — are in a 4:2:1 orbital resonance with each other. While the Galilean moons are spherical, all of Jupiter's remaining moons have irregular forms because they are too small for their self-gravitation to pull them into spheres.
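As a rough illustration of that resonance, the sketch below compares approximate orbital periods; the values are standard reference figures assumed here rather than taken from this article.

```python
# Rough check of the 4:2:1 (Laplace-type) resonance among the inner three
# Galilean moons. Orbital periods in days are approximate textbook values
# assumed for illustration.
periods = {"Io": 1.769, "Europa": 3.551, "Ganymede": 7.155}

print("Europa/Io       :", round(periods["Europa"] / periods["Io"], 3))       # ~2.007
print("Ganymede/Europa :", round(periods["Ganymede"] / periods["Europa"], 3)) # ~2.015
# Each successive period is close to double the previous one, i.e. Io completes
# roughly four orbits for every two of Europa and one of Ganymede.
```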
The Galilean moons are named after Galileo Galilei, who observed them in either December 1609 or January 1610, and recognized them as satellites of Jupiter in March 1610; they remained the only known moons of Jupiter until the discovery of the fifth largest moon of Jupiter Amalthea in 1892. Galileo initially named his discovery the Cosmica Sidera ("Cosimo's stars") or Medicean Stars, but the names that eventually prevailed were chosen by Simon Marius. Marius discovered the moons independently at nearly the same time as Galileo, 8 January 1610, and gave them their present individual names, after mythological characters that Zeus seduced or abducted, which were suggested by Johannes Kepler in his Mundus Jovialis, published in 1614. Their discovery showed the importance of the telescope as a tool for astronomers by proving that there were objects in space that cannot be seen by the naked eye. The discovery of celestial bodies orbiting something other than Earth dealt a serious blow to the then-accepted (among educated Europeans) Ptolemaic world system, a geocentric theory in which everything orbits around Earth.
History
Discovery
As a result of improvements that Galileo Galilei made to the telescope, with a magnifying capability of 20×, he was able to see celestial bodies more distinctly than was previously possible. This allowed Galileo to observe in either December 1609 or January 1610 what came to be known as the Galilean moons.
On 7 January 1610, Galileo wrote a letter containing the first mention of Jupiter's moons. At the time, he saw only three of them, and he believed them to be fixed stars near Jupiter. He continued to observe these celestial orbs from 8 January to 2 March 1610. In these observations, he discovered a fourth body, and also observed that the four were not fixed stars, but rather were orbiting Jupiter.
Galileo's discovery proved the importance of the telescope as a tool for astronomers by showing that there were objects in space to be discovered that until then had remained unseen by the naked eye. More importantly, the discovery of celestial bodies orbiting something other than Earth dealt a blow to the then-accepted Ptolemaic world system, which held that Earth was at the center of the universe and all other celestial bodies revolved around it. Galileo's Sidereus Nuncius (Starry Messenger) of 13 March 1610, which announced celestial observations through his telescope, does not explicitly mention Copernican heliocentrism, a theory that placed the Sun at the center of the universe. Nevertheless, Galileo accepted the Copernican theory.
A Chinese historian of astronomy, Xi Zezong, has claimed that a "small reddish star" observed near Jupiter in 364 BCE by Chinese astronomer Gan De may have been Ganymede. If true, this might predate Galileo's discovery by around two millennia.
Simon Marius is another noted early observer of the moons; he later reported having observed them in 1609. However, because he did not publish these findings until after Galileo, there is a degree of uncertainty around his records.
Names
In 1605, Galileo had been employed as a mathematics tutor for Cosimo de' Medici. In 1609, Cosimo became Grand Duke Cosimo II of Tuscany. Galileo, seeking patronage from his now-wealthy former student and his powerful family, used the discovery of Jupiter's moons to gain it. On 13 February 1610, Galileo wrote to the Grand Duke's secretary:
"God graced me with being able, through such a singular sign, to reveal to my Lord my devotion and the desire I have that his glorious name live as equal among the stars, and since it is up to me, the first discoverer, to name these new planets, I wish, in imitation of the great sages who placed the most excellent heroes of that age among the stars, to inscribe these with the name of the Most Serene Grand Duke."
Galileo initially called his discovery the Cosmica Sidera ("Cosimo's stars"), in honour of Cosimo alone. Cosimo's secretary suggested to change the name to Medicea Sidera ("the Medician stars"), honouring all four Medici brothers (Cosimo, Francesco, Carlo, and Lorenzo). The discovery was announced in the Sidereus Nuncius ("Starry Messenger"), published in Venice in March 1610, less than two months after the first observations.
On 12 March 1610, Galileo wrote his dedicatory letter to the Duke of Tuscany, and the next day sent a copy to the Grand Duke, hoping to obtain the Grand Duke's support as quickly as possible. On 19 March, he sent the telescope he had used to first view Jupiter's moons to the Grand Duke, along with an official copy of Sidereus Nuncius (The Starry Messenger) that, following the secretary's advice, named the four moons the Medician Stars. In his dedicatory introduction, Galileo wrote:
Scarcely have the immortal graces of your soul begun to shine forth on earth than bright stars offer themselves in the heavens which, like tongues, will speak of and celebrate your most excellent virtues for all time. Behold, therefore, four stars reserved for your illustrious name ... which ... make their journeys and orbits with a marvelous speed around the star of Jupiter ... like children of the same family ... Indeed, it appears the Maker of the Stars himself, by clear arguments, admonished me to call these new planets by the illustrious name of Your Highness before all others.
Other names put forward include:
I. Principharus (for the "prince" of Tuscany), II. Victripharus (after Vittoria della Rovere), III. Cosmipharus (after Cosimo de' Medici) and IV. Fernipharus (after Duke Ferdinando de' Medici) – by Giovanni Battista Hodierna, a disciple of Galileo and author of the first ephemerides (Medicaeorum Ephemerides, 1656);
Circulatores Jovis, or Jovis Comites – by Johannes Hevelius;
Gardes, or Satellites (from the Latin satelles, satellitis, meaning "escorts") – by Jacques Ozanam.
The names that eventually prevailed were chosen by Simon Marius, who discovered the moons independently at the same time as Galileo: he named them at the suggestion of Johannes Kepler after lovers of the god Zeus (the Greek equivalent of Jupiter), in his Mundus Jovialis, published in 1614:
Jupiter is much blamed by the poets on account of his irregular loves. Three maidens are especially mentioned as having been clandestinely courted by Jupiter with success. Io, daughter of the River Inachus, Callisto of Lycaon, Europa of Agenor. Then there was Ganymede, the handsome son of King Tros, whom Jupiter, having taken the form of an eagle, transported to heaven on his back, as poets fabulously tell... I think, therefore, that I shall not have done amiss if the First is called by me Io, the Second Europa, the Third, on account of its majesty of light, Ganymede, the Fourth Callisto... This fancy, and the particular names given, were suggested to me by Kepler, Imperial Astronomer, when we met at Ratisbon fair in October 1613. So if, as a jest, and in memory of our friendship then begun, I hail him as joint father of these four stars, again I shall not be doing wrong.
Galileo steadfastly refused to use Marius' names and, as a result, invented the numbering scheme that is still used today, in parallel with proper moon names. The numbers run from Jupiter outward, thus I, II, III and IV for Io, Europa, Ganymede, and Callisto respectively. Galileo used this system in his notebooks but never actually published it. The numbered names (Jupiter x) were used until the mid-20th century, when other inner moons were discovered and Marius' names came into wide use.
Determination of longitude
Galileo's discovery had practical applications. Safe navigation required accurately determining a ship's position at sea. While latitude could be measured well enough by local astronomical observations, determining longitude required knowledge of the time of each observation synchronized to the time at a reference longitude. The longitude problem was so important that large prizes were offered for its solution at various times by Spain, Holland, and Britain.
Galileo proposed determining longitude based on the timing of the orbits of the Galilean moons. The times of the eclipses of the moons could be precisely calculated in advance and compared with local observations on land or on ship to determine the local time and hence longitude. Galileo applied in 1616 for the Spanish prize of 6,000 gold ducats with a lifetime pension of 2,000 a year, and almost two decades later for the Dutch prize, but by then he was under house arrest for possible heresy.
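The arithmetic behind the method is straightforward: Earth rotates 360° in roughly 24 hours, or 15° per hour, so the difference between local time and reference time at the moment of a predicted eclipse translates directly into a longitude difference. A minimal sketch follows; the example times are purely illustrative.

```python
# Toy longitude calculation from a Galilean-moon eclipse timing.
# Earth rotates 15 degrees of longitude per hour, so the local-vs-reference
# time difference at the instant of a predicted eclipse gives the longitude.
# The example times below are invented for illustration.

DEGREES_PER_HOUR = 360.0 / 24.0  # 15 degrees of longitude per hour

def longitude_from_timing(local_time_h, reference_time_h):
    """Return longitude in degrees (positive = east of the reference meridian)."""
    return (local_time_h - reference_time_h) * DEGREES_PER_HOUR

# Eclipse predicted for 22:00 at the reference observatory, observed at
# 20:30 local solar time -> the observer is 22.5 degrees west of the reference.
print(longitude_from_timing(20.5, 22.0))  # -22.5
```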
The main problem with the Jovian moon technique was that it was difficult to observe the Galilean moons through a telescope on a moving ship, a problem that Galileo tried to solve with the invention of the celatone. Others suggested improvements, but without success.
Land mapping surveys had the same problem determining longitude, though with less severe observational conditions. The method proved practical and was used by Giovanni Domenico Cassini and Jean Picard to re-map France.
Members
Some models predict that there may have been several generations of Galilean satellites in Jupiter's early history. Each generation of moons to have formed would have spiraled into Jupiter and been destroyed, due to tidal interactions with Jupiter's proto-satellite disk, with new moons forming from the remaining debris. By the time the present generation formed, the gas in the proto-satellite disk had thinned out to the point that it no longer greatly interfered with the moons' orbits.
Other models suggest that Galilean satellites formed in a proto-satellite disk, in which formation timescales were comparable to or shorter than orbital migration timescales. Io is anhydrous and likely has an interior of rock and metal. Europa is thought to contain 8% ice and water by mass with the remainder rock. These moons are, in increasing order of distance from Jupiter:
Io
Io (Jupiter I) is the innermost of the four Galilean moons of Jupiter; with a diameter of 3642 kilometers, it is the fourth-largest moon in the Solar System, and is only marginally larger than Earth's moon. It was named after Io, a priestess of Hera who became one of the lovers of Zeus. It was referred to as "Jupiter I", or "The first satellite of Jupiter" until the mid-20th century.
With over 400 active volcanos, Io is the most geologically active object in the Solar System. Its surface is dotted with more than 100 mountains, some of which are taller than Earth's Mount Everest. Unlike most satellites in the outer Solar System (which have a thick coating of ice), Io is primarily composed of silicate rock surrounding a molten iron or iron sulfide core.
Although not proven, data from the Galileo orbiter indicates that Io might have its own magnetic field. Io has an extremely thin atmosphere made up mostly of sulfur dioxide (SO2). If a surface data or collection vessel were to land on Io in the future, it would have to be extremely tough (similar to the tank-like bodies of the Soviet Venera landers) to survive the radiation and magnetic fields that originate from Jupiter.
Europa
Europa (Jupiter II), the second of the four Galilean moons, is the second closest to Jupiter and the smallest at 3121.6 kilometers in diameter, which is slightly smaller than Earth's Moon. The name comes from a mythical Phoenician noblewoman, Europa, who was courted by Zeus and became the queen of Crete, though the name did not become widely used until the mid-20th century.
It has a smooth and bright surface, with a layer of water surrounding the moon's mantle, thought to be 100 kilometers thick. The smooth surface includes a layer of ice, while the bottom of the ice is theorized to be liquid water. The apparent youth and smoothness of the surface have led to the hypothesis that a water ocean exists beneath it, which could conceivably serve as an abode for extraterrestrial life. Heat energy from tidal flexing ensures that the ocean remains liquid and drives geological activity. Life may exist in Europa's under-ice ocean. So far, there is no evidence that life exists on Europa, but the likely presence of liquid water has spurred calls to send a probe there.
The prominent markings that criss-cross the moon seem to be mainly albedo features, which emphasize low topography. There are few craters on Europa because its surface is tectonically active and young. Some theories suggest that Jupiter's gravity is causing these markings, as one side of Europa is constantly facing Jupiter. Volcanic water eruptions splitting the surface of Europa and even geysers have also been considered as causes. The reddish-brown color of the markings is theorized to be caused by sulfur, but because no data collection devices have been sent to Europa, scientists cannot yet confirm this. Europa is primarily made of silicate rock and likely has an iron core. It has a tenuous atmosphere composed primarily of oxygen.
Ganymede
Ganymede (Jupiter III), the third Galilean moon, is named after the mythological Ganymede, cupbearer of the Greek gods and Zeus's beloved. Ganymede is the largest natural satellite in the Solar System at 5262.4 kilometers in diameter, which makes it larger than the planet Mercury – although only at about half of its mass since Ganymede is an icy world. It is the only satellite in the Solar System known to possess a magnetosphere, likely created through convection within the liquid iron core.
Ganymede is composed primarily of silicate rock and water ice, and a salt-water ocean is believed to exist nearly 200 km below Ganymede's surface, sandwiched between layers of ice. The metallic core of Ganymede suggests a greater heat at some time in its past than had previously been proposed. The surface is a mix of two types of terrain—highly cratered dark regions and younger, but still ancient, regions with a large array of grooves and ridges. Ganymede has a high number of craters, but many are gone or barely visible due to its icy crust forming over them. The satellite has a thin oxygen atmosphere that includes O, O2, and possibly O3 (ozone), and some atomic hydrogen.
Callisto
Callisto (Jupiter IV) is the fourth and outermost Galilean moon. It is the second-largest of the four and, at 4820.6 kilometers in diameter, the third-largest moon in the Solar System, barely smaller than Mercury though only a third of the latter's mass. It is named after the Greek mythological nymph Callisto, a lover of Zeus who was a daughter of the Arkadian King Lykaon and a hunting companion of the goddess Artemis. The moon does not form part of the orbital resonance that affects the three inner Galilean satellites and thus does not experience appreciable tidal heating. Callisto is composed of approximately equal amounts of rock and ices, which makes it the least dense of the Galilean moons. It is one of the most heavily cratered satellites in the Solar System, and one major feature is a basin around 3000 km wide called Valhalla.
Callisto is surrounded by an extremely thin atmosphere composed of carbon dioxide and probably molecular oxygen. Investigation has revealed that Callisto may have a subsurface ocean of liquid water at depths of less than 300 kilometres. The likely presence of an ocean within Callisto indicates that it could harbour life, although this is less likely than on nearby Europa. Callisto has long been considered the most suitable place for a human base for future exploration of the Jupiter system, since it is furthest from the intense radiation of Jupiter's magnetic field.
Comparative structure
Fluctuations in the orbits of the moons indicate that their mean density decreases with distance from Jupiter. Callisto, the outermost and least dense of the four, has a density intermediate between ice and rock whereas Io, the innermost and densest moon, has a density intermediate between rock and iron. Callisto has an ancient, heavily cratered and unaltered ice surface and the way it rotates indicates that its density is equally distributed, suggesting that it has no rocky or metallic core but consists of a homogeneous mix of rock and ice. This may well have been the original structure of all the moons. The rotation of the three inner moons, in contrast, indicates differentiation of their interiors with denser matter at the core and lighter matter above. They also reveal significant alteration of the surface. Ganymede reveals past tectonic movement of the ice surface which required partial melting of subsurface layers. Europa reveals more dynamic and recent movement of this nature, suggesting a thinner ice crust. Finally, Io, the innermost moon, has a sulfur surface, active volcanism and no sign of ice. All this evidence suggests that the nearer a moon is to Jupiter the hotter its interior. The current model is that the moons experience tidal heating as a result of the gravitational field of Jupiter in inverse proportion to the square of their distance from the giant planet. In all but Callisto this will have melted the interior ice, allowing rock and iron to sink to the interior and water to cover the surface. In Ganymede a thick and solid ice crust then formed. In warmer Europa a thinner more easily broken crust formed. In Io the heating is so extreme that all the rock has melted and water has long ago boiled out into space.
Origin and evolution
Jupiter's regular satellites are believed to have formed from a circumplanetary disk, a ring of accreting gas and solid debris analogous to a protoplanetary disk. They may be the remnants of a score of Galilean-mass satellites that formed early in Jupiter's history.
Simulations suggest that, while the disk had a relatively high mass at any given moment, over time a substantial fraction (several tenths of a percent) of the mass of Jupiter captured from the Solar nebula was processed through it. However, a disk mass of only 2% that of Jupiter is required to explain the existing satellites. Thus, there may have been several generations of Galilean-mass satellites in Jupiter's early history. Each generation of moons would have spiraled into Jupiter, due to drag from the disk, with new moons then forming from the new debris captured from the Solar nebula. By the time the present (possibly fifth) generation formed, the disk had thinned out to the point that it no longer greatly interfered with the moons' orbits. The current Galilean moons were still affected, falling into and being partially protected by an orbital resonance which still exists for Io, Europa, and Ganymede. Ganymede's larger mass means that it would have migrated inward at a faster rate than Europa or Io. Tidal dissipation in the Jovian system is still ongoing and Callisto will likely be captured into the resonance in about 1.5 billion years, creating a 1:2:4:8 chain.
Visibility
All four Galilean moons are bright enough to be viewed from Earth without a telescope, if only they could appear farther away from Jupiter. (They are, however, easily distinguished with even low-powered binoculars.) They have apparent magnitudes between 4.6 and 5.6 when Jupiter is in opposition with the Sun, and are about one magnitude dimmer when Jupiter is in conjunction. The main difficulty in observing the moons from Earth is their proximity to Jupiter, since they are obscured by its brightness. The maximum angular separations of the moons are between 2 and 10 arcminutes from Jupiter, which is close to the limit of human visual acuity. Ganymede and Callisto, at their maximum separation, are the likeliest targets for potential naked-eye observation.
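Those angular separations can be reproduced with small-angle arithmetic. The sketch below assumes approximate values not given in the article: orbital radii of about 1.07 million km (Ganymede) and 1.88 million km (Callisto), and an Earth–Jupiter distance at opposition of roughly 4.2 AU.

```python
# Rough estimate of the maximum angular separation of a Galilean moon from
# Jupiter as seen from Earth, using the small-angle approximation.
# Orbital radii and the Earth-Jupiter distance are assumed approximate values.
import math

AU_KM = 1.496e8
earth_jupiter_km = 4.2 * AU_KM          # approximate distance at opposition

orbit_radius_km = {"Ganymede": 1.07e6, "Callisto": 1.88e6}

for moon, radius in orbit_radius_km.items():
    angle_rad = radius / earth_jupiter_km           # small-angle approximation
    angle_arcmin = math.degrees(angle_rad) * 60
    print(f"{moon}: ~{angle_arcmin:.1f} arcminutes")  # ~5.9 and ~10.3
```

The results, roughly 6 and 10 arcminutes, sit at the upper end of the 2–10 arcminute range quoted above.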
Orbit animations
GIF animations depicting the Galilean moon orbits and the resonance of Io, Europa, and Ganymede
| Physical sciences | Solar System | Astronomy |
12528 | https://en.wikipedia.org/wiki/Cis%E2%80%93trans%20isomerism | Cis–trans isomerism | Cis–trans isomerism, also known as geometric isomerism, describes certain arrangements of atoms within molecules. The prefixes "cis" and "trans" are from Latin: "this side of" and "the other side of", respectively. In the context of chemistry, cis indicates that the functional groups (substituents) are on the same side of some plane, while trans conveys that they are on opposing (transverse) sides. Cis–trans isomers are stereoisomers, that is, pairs of molecules which have the same formula but whose functional groups are in different orientations in three-dimensional space. Cis and trans isomers occur both in organic molecules and in inorganic coordination complexes. Cis and trans descriptors are not used for cases of conformational isomerism where the two geometric forms easily interconvert, such as most open-chain single-bonded structures; instead, the terms "syn" and "anti" are used.
According to IUPAC, "geometric isomerism" is an obsolete synonym of "cis–trans isomerism".
Cis–trans or geometric isomerism is classified as one type of configurational isomerism.
Organic chemistry
Very often, cis–trans stereoisomers contain double bonds or ring structures. In both cases the rotation of bonds is restricted or prevented. When the substituent groups are oriented in the same direction, the diastereomer is referred to as cis, whereas when the substituents are oriented in opposing directions, the diastereomer is referred to as trans. An example of a small hydrocarbon displaying cis–trans isomerism is but-2-ene. 1,2-Dichlorocyclohexane is another example.
Comparison of physical properties
Cis and trans isomers have distinct physical properties. Their differing shapes influence their dipole moments, boiling points, and especially melting points.
These differences can be very small, as in the case of the boiling point of straight-chain alkenes, such as pent-2-ene, which is 37 °C in the cis isomer and 36 °C in the trans isomer. The differences between cis and trans isomers can be larger if polar bonds are present, as in the 1,2-dichloroethenes. The cis isomer in this case has a boiling point of 60.3 °C, while the trans isomer has a boiling point of 47.5 °C. In the cis isomer the two polar C–Cl bond dipole moments combine to give an overall molecular dipole, so that there are intermolecular dipole–dipole forces (or Keesom forces), which add to the London dispersion forces and raise the boiling point. In the trans isomer on the other hand, this does not occur because the two C−Cl bond moments cancel and the molecule has a net zero dipole moment (it does however have a non-zero quadrupole moment).
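The cancellation argument can be made concrete with a toy vector sum. The sketch below assumes, purely for illustration, that each C–Cl bond contributes a dipole of fixed magnitude at 60° to the C=C axis; the point is only that the perpendicular components add in the cis isomer and cancel in the trans isomer.

```python
# Toy illustration of why cis-1,2-dichloroethene has a net dipole moment while
# the trans isomer does not: two equal bond dipoles either add or cancel.
# The bond-dipole magnitude and geometry are simplified assumptions.
import math

BOND_DIPOLE = 1.5          # arbitrary magnitude (illustrative, not measured)
ANGLE = math.radians(60)   # assumed angle of each C-Cl dipole from the C=C axis

def net_dipole(same_side):
    # Perpendicular components add if the Cl atoms are on the same side (cis)
    # and cancel if they are on opposite sides (trans); the components along
    # the C=C axis point in opposite directions and cancel in both cases.
    perp1 = BOND_DIPOLE * math.sin(ANGLE)
    perp2 = perp1 if same_side else -perp1
    return abs(perp1 + perp2)

print("cis  net dipole :", round(net_dipole(True), 2))   # nonzero
print("trans net dipole:", round(net_dipole(False), 2))  # 0.0
```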
The two isomers of butenedioic acid, maleic acid (cis) and fumaric acid (trans), likewise differ markedly in their properties.
Polarity is key in determining relative boiling point as strong intermolecular forces raise the boiling point. In the same manner, symmetry is key in determining relative melting point as it allows for better packing in the solid state, even if it does not alter the polarity of the molecule. Another example of this is the relationship between oleic acid and elaidic acid; oleic acid, the cis isomer, has a melting point of 13.4 °C, making it a liquid at room temperature, while the trans isomer, elaidic acid, has the much higher melting point of 43 °C, due to the straighter trans isomer being able to pack more tightly, and is solid at room temperature.
Thus, trans alkenes, which are less polar and more symmetrical, have lower boiling points and higher melting points, and cis alkenes, which are generally more polar and less symmetrical, have higher boiling points and lower melting points.
In the case of geometric isomers that are a consequence of double bonds, and, in particular, when both substituents are the same, some general trends usually hold. These trends can be attributed to the fact that the dipoles of the substituents in a cis isomer will add up to give an overall molecular dipole. In a trans isomer, the dipoles of the substituents will cancel out due to being on opposite sides of the molecule. Trans isomers also tend to have lower densities than their cis counterparts.
As a general trend, trans alkenes tend to have higher melting points and lower solubility in inert solvents, as trans alkenes, in general, are more symmetrical than cis alkenes.
Vicinal coupling constants (3JHH), measured by NMR spectroscopy, are larger for trans (range: 12–18 Hz; typical: 15 Hz) than for cis (range: 0–12 Hz; typical: 8 Hz) isomers.
Stability
Usually for acyclic systems trans isomers are more stable than cis isomers. This difference is attributed to the unfavorable steric interaction of the substituents in the cis isomer. Therefore, trans isomers have a less-exothermic heat of combustion, indicating higher thermochemical stability. In the Benson heat of formation group additivity dataset, cis isomers suffer a 1.10 kcal/mol stability penalty. Exceptions to this rule exist, such as 1,2-difluoroethylene, 1,2-difluorodiazene (FN=NF), and several other halogen- and oxygen-substituted ethylenes. In these cases, the cis isomer is more stable than the trans isomer. This phenomenon is called the cis effect.
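To get a feel for what a 1.10 kcal/mol penalty means, it can be converted into an equilibrium population ratio at room temperature. The sketch below treats the enthalpy difference as if it were the free-energy difference, ignoring entropy; this is a simplifying assumption made only for illustration.

```python
# Rough Boltzmann estimate of the trans:cis equilibrium ratio implied by a
# 1.10 kcal/mol stability difference, assuming (for illustration only) that the
# enthalpy difference approximates the free-energy difference.
import math

R = 1.987e-3      # gas constant in kcal/(mol*K)
T = 298.15        # room temperature in kelvin
delta = 1.10      # kcal/mol stability penalty for the cis isomer

ratio = math.exp(delta / (R * T))    # trans population / cis population
print(f"trans:cis ~ {ratio:.1f}:1")  # roughly 6:1 at room temperature
```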
E–Z notation
In principle, cis–trans notation should not be used for alkenes with more than two different substituents. Instead, the E–Z notation is used, based on the priority of the substituents using the Cahn–Ingold–Prelog (CIP) priority rules for absolute configuration. The IUPAC standard designations E and Z are unambiguous in all cases, and therefore are especially useful for tri- and tetrasubstituted alkenes to avoid any confusion about which groups are being identified as cis or trans to each other.
Z (from the German zusammen) means "together". E (from the German entgegen) means "opposed" in the sense of "opposite". That is, Z has the higher-priority groups cis to each other and E has the higher-priority groups trans to each other. Whether a molecular configuration is designated E or Z is determined by the CIP rules; higher atomic numbers are given higher priority. For each of the two atoms in the double bond, it is necessary to determine the priority of each substituent. If both the higher-priority substituents are on the same side, the arrangement is Z; if on opposite sides, the arrangement is E.
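A highly simplified sketch of that decision procedure follows. It ranks each carbon's two substituents only by the atomic number of the directly attached atom (the full CIP rules go further when first atoms tie) and then checks whether the two winners lie on the same side; the helper name and inputs are illustrative.

```python
# Simplified E/Z assignment: on each double-bond carbon, the substituent whose
# first atom has the higher atomic number wins priority; if the two winning
# substituents are on the same side the alkene is Z, otherwise E.
# This ignores the tie-breaking layers of the full CIP rules.

ATOMIC_NUMBER = {"H": 1, "C": 6, "N": 7, "O": 8, "F": 9, "Cl": 17, "Br": 35}

def assign_ez(carbon1, carbon2):
    """carbon1/carbon2: dicts mapping side ('up'/'down') to the first atom of
    the substituent on that side of the double bond."""
    top1 = max(carbon1, key=lambda side: ATOMIC_NUMBER[carbon1[side]])
    top2 = max(carbon2, key=lambda side: ATOMIC_NUMBER[carbon2[side]])
    return "Z" if top1 == top2 else "E"

# 2-chlorobut-2-ene drawn with the two methyl groups trans to each other:
# C2 carries Cl (up) and the C1 methyl (down); C3 carries the C4 methyl (up)
# and H (down). The higher-priority groups (Cl and the C4 methyl) end up on
# the same side, so the configuration is Z.
print(assign_ez({"up": "Cl", "down": "C"}, {"up": "C", "down": "H"}))  # Z
```

This reproduces the example discussed below, where the methyl groups are trans yet the compound is designated (Z).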
Because the cis–trans and E–Z systems compare different groups on the alkene, it is not strictly true that Z corresponds to cis and E corresponds to trans. For example, trans-2-chlorobut-2-ene (the two methyl groups, C1 and C4, on the but-2-ene backbone are trans to each other) is (Z)-2-chlorobut-2-ene (the chlorine and C4 are together because C1 and C4 are opposite).
Undefined alkene stereochemistry
Wavy single bonds are the standard way to represent unknown or unspecified stereochemistry or a mixture of isomers (as with tetrahedral stereocenters). A crossed double-bond has been used sometimes; it is no longer considered an acceptable style for general use by IUPAC but may still be required by computer software.
Inorganic chemistry
Cis–trans isomerism can also occur in inorganic compounds.
Diazenes
Diazenes (and the related diphosphenes) can also exhibit cis–trans isomerism. As with organic compounds, the cis isomer is generally the more reactive of the two, being the only isomer that can reduce alkenes and alkynes to alkanes, but for a different reason: the trans isomer cannot line its hydrogens up suitably to reduce the alkene, but the cis isomer, being shaped differently, can.
Coordination complexes
Coordination complexes with octahedral or square planar geometries can also exhibit cis-trans isomerism.
For example, there are two isomers of square planar Pt(NH3)2Cl2, as explained by Alfred Werner in 1893. The cis isomer, whose full name is cis-diamminedichloroplatinum(II), was shown in 1969 by Barnett Rosenberg to have antitumor activity, and is now a chemotherapy drug known by the short name cisplatin. In contrast, the trans isomer (transplatin) has no useful anticancer activity. Each isomer can be synthesized using the trans effect to control which isomer is produced.
For octahedral complexes of formula MX4Y2, two isomers also exist. (Here M is a metal atom, and X and Y are two different types of ligands.) In the cis isomer, the two Y ligands are adjacent to each other at 90°, as is true for the two chlorine atoms shown in green in cis-[Co(NH3)4Cl2]+, at left. In the trans isomer shown at right, the two Cl atoms are on opposite sides of the central Co atom.
A related type of isomerism in octahedral MX3Y3 complexes is facial–meridional (or fac–mer) isomerism, in which different numbers of ligands are cis or trans to each other. Metal carbonyl compounds can be characterized as fac or mer using infrared spectroscopy.
| Physical sciences | Stereochemistry | Chemistry |
12546 | https://en.wikipedia.org/wiki/Gorilla | Gorilla | Gorillas are herbivorous, predominantly ground-dwelling great apes that inhabit the tropical forests of equatorial Africa. The genus Gorilla is divided into two species: the eastern gorilla and the western gorilla, and either four or five subspecies. The DNA of gorillas is highly similar to that of humans, from 95 to 99% depending on what is included, and they are the next closest living relatives to humans after chimpanzees.
Gorillas are the largest living primates, reaching heights between 1.25 and 1.8 metres, weights between 100 and 270 kg, and arm spans up to 2.6 metres, depending on species and sex. They tend to live in troops, with the leader being called a silverback. The eastern gorilla is distinguished from the western by darker fur colour and some other minor morphological differences. Gorillas tend to live 35–40 years in the wild.
Gorillas' natural habitats cover tropical or subtropical forest in Sub-Saharan Africa. Although their range covers a small percentage of Sub-Saharan Africa, gorillas cover a wide range of elevations. The mountain gorilla inhabits the Albertine Rift montane cloud forests of the Virunga Volcanoes, ranging in altitude from . Lowland gorillas live in dense forests and lowland swamps and marshes as low as sea level, with western lowland gorillas living in Central West African countries and eastern lowland gorillas living in the Democratic Republic of the Congo near its border with Rwanda.
There are thought to be around 316,000 western gorillas in the wild, and 5,000 eastern gorillas. Both species are classified as Critically Endangered by the IUCN; all subspecies are classified as Critically Endangered with the exception of the mountain gorilla, which is classified as Endangered. Threats to their survival include poaching, habitat destruction, and disease. However, conservation efforts have been successful in some areas where they live.
History and etymology
The word gorilla comes from the history of Hanno the Navigator ( 500 BC), a Carthaginian explorer on an expedition to the west African coast to the area that later became Sierra Leone. Members of the expedition encountered "savage people, the greater part of whom were women, whose bodies were hairy, and whom our interpreters called Gorillae". It is unknown whether what the explorers encountered were what we now call gorillas, another species of ape or monkeys, or humans. Skins of gorillai women, brought back by Hanno, are reputed to have been kept at Carthage until Rome destroyed the city 350 years later at the end of the Punic Wars, 146 BC.
In 1625 Andrew Battel mentioned the existence of the animal under the name Pongo.
A century and a half after Battel's story was published, one writer claimed that "the large species, described by Buffon and other authors as of the size of a man, is held by many to be a Chimera."
The American physician and missionary Thomas Staughton Savage and naturalist Jeffries Wyman first described the western gorilla in 1847 from specimens obtained in Liberia. They called it Troglodytes gorilla, using the then-current name of the chimpanzee genus. The species name was derived from the "Gorillae" described by Hanno.
Evolution and classification
The closest relatives of gorillas are the other two Homininae genera, chimpanzees and humans, all of them having diverged from a common ancestor about 7 million years ago. Human gene sequences differ only 1.6% on average from the sequences of corresponding gorilla genes, but there is further difference in how many copies each gene has.
Until recently, gorillas were considered to be a single species, with three subspecies: the western lowland gorilla, the eastern lowland gorilla and the mountain gorilla. There is now agreement that there are two species, each with two subspecies. More recently, a third subspecies has been claimed to exist in one of the species. The separate species and subspecies developed from a single type of gorilla during the Ice Age, when their forest habitats shrank and became isolated from each other. Primatologists continue to explore the relationships between various gorilla populations. The species and subspecies listed here are the ones upon which most scientists agree.
The proposed third subspecies of Gorilla beringei, which has not yet received a trinomen, is the Bwindi population of the mountain gorilla, sometimes called the Bwindi gorilla.
Some variations that distinguish the classifications of gorilla include varying density, size, hair colour, length, culture, and facial widths. Population genetics of the lowland gorillas suggest that the western and eastern lowland populations diverged around 261 thousand years ago.
Characteristics
Wild male gorillas weigh , while adult females weigh . Adult males are tall, with an arm span that stretches from . Female gorillas are shorter at , with smaller arm spans. Colin Groves (1970) calculated the average weight of 42 wild adult male gorillas at 144 kg, while Smith and Jungers (1997) found the average weight of 19 wild adult male gorillas to be 169 kg. Adult male gorillas are known as silverbacks due to the characteristic silver hair on their backs reaching to the hips. The tallest gorilla recorded was a silverback with an arm span of , a chest of , and a weight of , shot in Alimbongo, northern Kivu in May 1938. The heaviest gorilla recorded was a silverback shot in Ambam, Cameroon, which weighed . Males in captivity can be overweight and reach weights up to .
The eastern gorilla is more darkly coloured than the western gorilla, with the mountain gorilla being the darkest of all. The mountain gorilla also has the thickest hair. The western lowland gorilla can be brown or greyish with a reddish forehead. In addition, gorillas that live in lowland forest are more slender and agile than the more bulky mountain gorillas. The eastern gorilla also has a longer face and broader chest than the western gorilla. Like humans, gorillas have individual fingerprints.
Their eye colour is dark brown, framed by a black ring around the iris. Gorilla facial structure is described as mandibular prognathism, that is, the mandible protrudes farther out than the maxilla. Adult males also have a prominent sagittal crest.
Gorillas move around by knuckle-walking, although they sometimes walk upright for short distances, typically while carrying food or in defensive situations. A 2018 study investigating the hand posture of 77 mountain gorillas at Bwindi Impenetrable National Park (8% of the population) found that knuckle walking was done only 60% of the time, and they also supported their weight on their fists, the backs of their hands/feet, and on their palms/soles (with the digits flexed). Such a range of hand postures was previously thought to have been used by only orangutans. Studies of gorilla handedness have yielded varying results, with some arguing for no preference for either hand, and others right-hand dominance for the general population.
Studies have shown gorilla blood is not reactive to anti-A and anti-B monoclonal antibodies, which would, in humans, indicate type O blood. Due to novel sequences, though, it is different enough to not conform with the human ABO blood group system, into which the other great apes fit.
A gorilla's lifespan is normally between 35 and 40 years, although zoo gorillas may live for 50 years or more in rare circumstances. At , Fatou is the oldest gorilla ever, the oldest female gorilla ever, the oldest living gorilla, and the oldest living female gorilla. The oldest male gorilla ever was Ozoum, who reached the age of 61 years, 24 days. The oldest living male gorilla is Guhonda, aged .
Distribution and habitat
Gorillas have a patchy distribution. The range of the two species is separated by the Congo River and its tributaries. The western gorilla lives in west central Africa, while the eastern gorilla lives in east central Africa. Between the species, and even within the species, gorillas live in a variety of habitats and elevations. Gorilla habitat ranges from montane forest to swampland. Eastern gorillas inhabit montane and submontane forests between above sea level.
Mountain gorillas live in montane forests at the higher end of the elevation range, while eastern lowland gorillas live in submontane forests at the lower end. In addition, eastern lowland gorillas live in montane bamboo forests, as well as lowland forests ranging from in elevation. Western gorillas live in both lowland swamp forests and montane forests, at elevations ranging from sea level to . Western lowland gorillas live in swamp and lowland forests ranging up to , and Cross River gorillas live in low-lying and submontane forests ranging from .
Ecology
Diet and foraging
A gorilla's day is divided between rest periods and travel or feeding periods. Diets differ between and within species. Mountain gorillas mostly eat foliage, such as leaves, stems, pith, and shoots, while fruit makes up a very small part of their diets. Mountain gorilla food is widely distributed and neither individuals nor groups have to compete with one another. Their home ranges vary from , and their movements range around or less on an average day. Despite eating a few species in each habitat, mountain gorillas have flexible diets and can live in a variety of habitats.
Eastern lowland gorillas have more diverse diets, which vary seasonally. Leaves and pith are commonly eaten, but fruits can make up as much as 25% of their diets. Since fruit is less available, lowland gorillas must travel farther each day, and their home ranges vary from , with day ranges . Eastern lowland gorillas will also eat insects, preferably ants. Western lowland gorillas depend on fruits more than the others and they are more dispersed across their range. They travel even farther than the other gorilla subspecies, at per day on average, and have larger home ranges of . Western lowland gorillas have less access to terrestrial herbs, although they can access aquatic herbs in some areas. Termites and ants are also eaten.
Gorillas rarely drink water "because they consume succulent vegetation that is almost half water as well as morning dew", although both mountain and lowland gorillas have been observed drinking.
Nesting
Gorillas construct nests for daytime and night use. Nests tend to be simple aggregations of branches and leaves, constructed by individual animals. Gorillas, unlike chimpanzees or orangutans, tend to sleep in nests on the ground. The young nest with their mothers but construct their own nests after three years of age, initially close to those of their mothers. Gorilla nests are distributed arbitrarily, and the choice of tree species for nest sites and construction appears to be opportunistic. Nest-building by great apes is now considered not just animal architecture but an important instance of tool use.
Gorillas make a new nest to sleep on each day; even if remaining in the same place, they do not use the previous one. Usually, they are made an hour before dusk, to be ready to sleep when night falls. Gorillas sleep longer than humans, an average of 12 hours per day.
Interspecies interactions
One possible predator of gorillas is the leopard. Gorilla remains have been found in leopard scat, but this may be the result of scavenging. When the group is attacked by humans, leopards, or other gorillas, an individual silverback will protect the group, even at the cost of his own life. Gorillas do not appear to directly compete with chimpanzees in areas where they overlap. When fruit is abundant, gorilla and chimpanzee diets converge, but when fruit is scarce gorillas resort to vegetation. The two apes may also feed on different species, whether fruit or insects. Gorillas and chimpanzees may ignore or avoid each other when feeding on the same tree, but they have also been documented to form social bonds. Conversely, coalitions of chimpanzees have been observed attacking families of gorillas including silverbacks and killing infants.
Behaviour
Social structure
Gorillas live in groups called troops. Troops tend to be made up of one adult male or silverback, a harem of multiple adult females, and their offspring. However, multiple-male troops also exist. A silverback is typically more than 12 years of age and is named for the distinctive patch of silver hair on his back, which comes with maturity. Silverbacks also have large canine teeth that come with maturity. Both males and females tend to emigrate from their natal groups. For mountain gorillas, females disperse from their natal troops more than males. Mountain gorillas and western lowland gorillas also commonly transfer to a second new group.
Mature males also tend to leave their groups and establish their own troops by attracting emigrating females. However, male mountain gorillas sometimes stay in their natal troops and become subordinate to the silverback. If the silverback dies, these males may be able to become dominant or mate with the females. This behaviour has not been observed in eastern lowland gorillas. In a single-male group, when the silverback dies, the females and their offspring disperse and find a new troop. Without a silverback to protect them, the infants will likely fall victim to infanticide; joining a new group is likely a tactic against this. However, while gorilla troops usually disband after the silverback dies, female eastern lowland gorillas and their offspring have been recorded staying together until a new silverback transfers into the group, which likely serves as protection from leopards.
The silverback is the centre of the troop's attention, making all the decisions, mediating conflicts, determining the movements of the group, leading the others to feeding sites, and taking responsibility for the safety and well-being of the troop. Younger males subordinate to the silverback, known as blackbacks, may serve as backup protection. Blackbacks are aged between 8 and 12 years and lack the silver back hair. The bond a silverback has with his females forms the core of gorilla social life; bonds between them are maintained by grooming and staying close together. Females form strong relationships with males to gain mating opportunities and protection from predators and infanticidal outside males. Aggressive behaviour between males and females does occur but rarely leads to serious injury. Relationships between females vary: maternally related females in a troop tend to be friendly towards each other and associate closely, but otherwise females have few friendly encounters and commonly act aggressively towards each other.
Females may fight for social access to males and a male may intervene. Male gorillas have weak social bonds, particularly in multiple-male groups with apparent dominance hierarchies and strong competition for mates. Males in all-male groups, though, tend to have friendly interactions and socialise through play, grooming, and staying together, and occasionally they even engage in homosexual interactions. Severe aggression is rare in stable groups, but when two mountain gorilla groups meet the two silverbacks can sometimes engage in a fight to the death, using their canines to cause deep, gaping injuries.
Reproduction and parenting
Females mature at 10–12 years (earlier in captivity), and males at 11–13 years. A female's first ovulatory cycle occurs when she is six years of age, and is followed by a two-year period of adolescent infertility. The estrous cycle lasts 30–33 days, with outward ovulation signs subtle compared to those of chimpanzees. The gestation period lasts 8.5 months. Female mountain gorillas first give birth at 10 years of age and have four-year interbirth intervals. Males can be fertile before reaching adulthood. Gorillas mate year round.
Females will purse their lips and slowly approach a male while making eye contact. This serves to urge the male to mount her. If the male does not respond, then she will try to attract his attention by reaching towards him or slapping the ground. In multiple-male groups, solicitation indicates female preference, but females can be forced to mate with multiple males. Males incite copulation by approaching a female and displaying at her or touching her and giving a "train grunt". Recently, gorillas have been observed engaging in face-to-face sex, a trait once considered unique to humans and bonobos.
Gorilla infants are vulnerable and dependent, thus mothers, their primary caregivers, are important to their survival. Male gorillas are not active in caring for the young, but they do play a role in socialising them to other youngsters. The silverback has a largely supportive relationship with the infants in his troop and shields them from aggression within the group. Infants remain in contact with their mothers for the first five months and mothers stay near the silverback for protection. Infants suckle at least once per hour and sleep with their mothers in the same nest.
Infants begin to break contact with their mothers after five months, but only for brief periods each time. By 12 months old, infants venture short distances from their mothers. At around 18–21 months, the distance between mother and offspring increases and they regularly spend time away from each other. In addition, nursing decreases to once every two hours. Infants spend only half of their time with their mothers by 30 months. They enter their juvenile period in their third year, and it lasts until their sixth year. At this time, gorillas are weaned and sleep in a separate nest from their mothers. After their offspring are weaned, females begin to ovulate and soon become pregnant again. The presence of play partners, including the silverback, minimises conflict over weaning between mother and offspring.
Communication
Twenty-five distinct vocalisations are recognised, many of which are used primarily for group communication within dense vegetation. Sounds classified as grunts and barks are heard most frequently while traveling, and indicate the whereabouts of individual group members. They may also be used during social interactions when discipline is required. Screams and roars signal alarm or warning, and are produced most often by silverbacks. Deep, rumbling belches suggest contentment and are heard frequently during feeding and resting periods. They are the most common form of intragroup communication.
Conflicts are most often resolved by displays and other threat behaviours that are intended to intimidate without becoming physical, so fights do not occur very frequently. The ritualised charge display is unique to gorillas. The entire sequence has nine steps: (1) progressively quickening hooting, (2) symbolic feeding, (3) rising bipedally, (4) throwing vegetation, (5) chest-beating with cupped hands, (6) a one-legged kick, (7) sideways running from a two-legged to a four-legged stance, (8) slapping and tearing vegetation, and (9) thumping the ground with the palms to end the display.
The acoustic frequency of a gorilla's chest-beat varies with body size: smaller gorillas tend to produce higher-frequency beats, while larger ones produce lower frequencies. Males chest-beat most often when females are ready to mate.
Intelligence
Gorillas are considered highly intelligent. A few individuals in captivity, such as Koko, have been taught a subset of sign language. Like the other great apes, gorillas can laugh, grieve, have "rich emotional lives", develop strong family bonds, make and use tools, and think about the past and future. Some researchers believe gorillas have spiritual feelings or religious sentiments. They have been shown to have cultures in different areas revolving around different methods of food preparation, and will show individual colour preferences.
Tool use
The following observations were made by a team led by Thomas Breuer of the Wildlife Conservation Society in September 2005. Gorillas are now known to use tools in the wild. A female gorilla in the Nouabalé-Ndoki National Park in the Republic of Congo was recorded using a stick as if to gauge the depth of water whilst crossing a swamp. A second female was seen using a tree stump as a bridge and also as a support whilst fishing in the swamp. This means all of the great apes are now known to use tools.
In September 2005, a two-and-a-half-year-old gorilla in the Republic of Congo was discovered using rocks to smash open palm nuts inside a game sanctuary. While this was the first such observation for a gorilla, over 40 years previously, chimpanzees had been seen using tools in the wild 'fishing' for termites. Nonhuman great apes are endowed with semiprecision grips, and have been able to use both simple tools and even weapons, such as improvising a club from a convenient fallen branch.
Scientific study
American physician and missionary Thomas Staughton Savage obtained the first specimens (the skull and other bones) during his time in Liberia. The first scientific description of gorillas dates back to an article by Savage and the naturalist Jeffries Wyman in 1847 in Proceedings of the Boston Society of Natural History, where Troglodytes gorilla is described, now known as the western gorilla. Other species of gorilla were described in the next few years.
The explorer Paul Du Chaillu was the first westerner to see a live gorilla during his travel through western equatorial Africa from 1856 to 1859. He brought dead specimens to the UK in 1861.
The first systematic study was not conducted until the 1920s, when Carl Akeley of the American Museum of Natural History traveled to Africa to hunt for an animal to be shot and stuffed. On his first trip, he was accompanied by his friends Mary Bradley, a mystery writer, her husband, and their young daughter Alice, who would later write science fiction under the pseudonym James Tiptree Jr. After their trip, Mary Bradley wrote On the Gorilla Trail. She later became an advocate for the conservation of gorillas and wrote several more books (mainly for children). In the late 1920s and early 1930s, Robert Yerkes and his wife Ava helped further the study of gorillas when they sent Harold Bingham to Africa. Yerkes also wrote a book about the great apes in 1929.
After World War II, George Schaller was one of the first researchers to go into the field and study primates. In 1959, he conducted a systematic study of the mountain gorilla in the wild and published his work. Years later, at the behest of Louis Leakey and the National Geographic, Dian Fossey conducted a much longer and more comprehensive study of the mountain gorilla. When she published her work, many misconceptions and myths about gorillas were finally disproved, including the myth that gorillas are violent.
Western lowland gorillas (G. g. gorilla) are believed to be one of the zoonotic origins of HIV/AIDS. The simian immunodeficiency virus strain that infects them, SIVgor, is similar to a certain strain of HIV-1.
Genome sequencing
The gorilla became the next-to-last great ape genus to have its genome sequenced. The first gorilla genome was generated with short-read and Sanger sequencing using DNA from a female western lowland gorilla named Kamilah. This gave scientists further insight into the evolution and origin of humans. Although chimpanzees are the closest extant relatives of humans, 15% of the human genome was found to be more like that of the gorilla. In addition, 30% of the gorilla genome "is closer to human or chimpanzee than the latter are to each other; this is rarer around coding genes, indicating pervasive selection throughout great ape evolution, and has functional consequences in gene expression." Analysis of the gorilla genome has cast doubt on the idea that the rapid evolution of hearing genes gave rise to language in humans, as the same evolution also occurred in gorillas.
Captivity
Gorillas have been highly prized by western zoos since the 19th century, though the earliest attempts to keep them in captive facilities ended in their early deaths. In the late 1920s, the care of captive gorillas improved significantly. Colo (December 22, 1956 – January 17, 2017) of the Columbus Zoo and Aquarium was the first gorilla to be born in captivity.
Captive gorillas exhibit stereotypic behaviours, including eating disorders such as regurgitation and reingestion, self-injurious or conspecific aggression, pacing, rocking, finger-sucking or lip-smacking, and overgrooming. Negative vigilance behaviours towards visitors include staring, posturing, and charging at them. Groups of bachelor gorillas containing young silverbacks have significantly higher levels of aggression and wounding rates than mixed age and sex groups.
The use of both internal and external privacy screens on exhibit windows has been shown to alleviate stresses from visual effects of high crowd densities, leading to decreased stereotypic behaviors in the gorillas. Playing naturalistic auditory stimuli as opposed to classical music, rock music, or no auditory enrichment (which allows for crowd noise, machinery, etc. to be heard) has been noted to reduce stress behavior as well. Enrichment modifications to feed and foraging, where clover-hay is added to an exhibit floor, decrease stereotypic activities while simultaneously increasing positive food-related behaviors.
Recent research on captive gorilla welfare emphasises the need to shift from a one-size-fits-all group approach to individual assessments of how welfare increases or decreases in response to a variety of factors. Individual characteristics such as age, sex, personality, and individual history are essential to understanding how stressors will affect each gorilla's welfare differently.
Conservation status
All species (and subspecies) of gorilla are listed as endangered or critically endangered on the IUCN Red List. All gorillas are listed in Appendix I of the Convention on International Trade in Endangered Species (CITES), meaning that international export and import of the species, including of parts and derivatives, is regulated. Thanks to conservation efforts, around 316,000 western lowland gorillas are thought to exist in the wild, with 4,000 in zoos; eastern lowland gorillas have a population of under 5,000 in the wild and 24 in zoos. Mountain gorillas are the most severely endangered, with an estimated population of about 880 left in the wild and none in zoos. Threats to gorilla survival include habitat destruction and poaching for the bushmeat trade. Gorillas are closely related to humans and are therefore susceptible to many of the same diseases. In 2004, a population of several hundred gorillas in the Odzala National Park, Republic of Congo, was essentially wiped out by the Ebola virus. A 2006 study published in Science concluded that more than 5,000 gorillas may have died in recent outbreaks of the Ebola virus in central Africa. The researchers indicated that, in conjunction with commercial hunting of these apes, the virus creates "a recipe for rapid ecological extinction". Gorillas in captivity have also been observed to be infected with COVID-19.
Conservation efforts include the Great Apes Survival Project, a partnership between the United Nations Environment Programme and UNESCO, as well as an international treaty, the Agreement on the Conservation of Gorillas and Their Habitats, concluded under the UNEP-administered Convention on Migratory Species. The Gorilla Agreement is the first legally binding instrument exclusively targeting gorilla conservation; it came into effect on 1 June 2008. Governments of countries where gorillas live have banned their killing and trading, but weak law enforcement still poses a threat, since the governments rarely apprehend the poachers, traders, and consumers who rely on gorillas for profit.
Cultural significance
In Cameroon's Lebialem highlands, folk stories connect people and gorillas via totems; a gorilla's death means the connected person will die also. This creates a local conservation ethic. Many different indigenous peoples interact with wild gorillas. Some have detailed knowledge; the Baka have words to distinguish at least ten types of gorilla individuals, by sex, age, and relationships. In 1861, alongside tales of hunting enormous gorillas, the traveller and anthropologist Paul Du Chaillu reported the Cameroonian story that a pregnant woman who sees a gorilla will give birth to one.
In 1911, the anthropologist Albert Jenks noted the Bulu people's knowledge of gorilla behaviour and ecology, and their gorilla stories. In one such story, "The Gorilla and the Child", a gorilla speaks to people, seeking help and trust, and stealing a baby; a man accidentally kills the baby while attacking the gorilla. Even far from where gorillas live, savannah tribes pursue "cult-like worship" of the apes. Some beliefs are widespread among indigenous peoples. The Fang name for gorilla is ngi while the Bulu name is njamong; the root ngi means fire, denoting a positive energy. From the Central African Republic to Cameroon and Gabon, stories of reincarnations as gorillas, totems, and transformations similar to those recorded by Du Chaillu are still told in the 21st century.
Since gaining international attention, gorillas have been a recurring element of many aspects of popular culture and media. They were usually portrayed as murderous and aggressive. Inspired by Emmanuel Frémiet's Gorilla Carrying off a Woman, gorillas have been depicted kidnapping human women. This theme was used in films such as Ingagi (1930) and most notably King Kong (1933). The comedic play The Gorilla, which debuted in 1925, featured an escaped gorilla taking a woman from her house. Several films would use the "escaped gorilla" trope including The Strange Case of Doctor Rx (1942), The Gorilla Man (1943), Gorilla at Large (1954) and the Disney cartoons The Gorilla Mystery (1930) and Donald Duck and the Gorilla (1944).
Gorillas have been used as opponents to jungle-themed heroes such as Tarzan and Sheena, Queen of the Jungle, as well as superheroes. The DC Comics supervillain Gorilla Grodd is an enemy of the Flash. Gorillas also serve as antagonists in the 1968 film Planet of the Apes. More positive and sympathetic portrayals of gorillas include the films Son of Kong (1933), Mighty Joe Young (1949), Gorillas in the Mist (1988) and Instinct (1999) and the 1992 novel Ishmael. Gorillas have been featured in video games as well, notably Donkey Kong.
| Biology and health sciences | Primates | null |
12552 | https://en.wikipedia.org/wiki/Great%20auk | Great auk | The great auk (Pinguinus impennis), also known as the penguin or garefowl, is a species of flightless alcid that first appeared around 400,000 years ago and became extinct in the mid-19th century. It was the only modern species in the genus Pinguinus. It is unrelated to the penguins of the Southern Hemisphere, which were named for their resemblance to this species.
It bred on rocky, remote islands with easy access to the ocean and a plentiful food supply, a rarity in nature that provided only a few breeding sites for the great auks. During the non-breeding season, the auk foraged in the waters of the North Atlantic, ranging as far south as northern Spain and along the coastlines of Canada, Greenland, Iceland, the Faroe Islands, Norway, Ireland, and Great Britain.
The great auk was the largest alcid to survive into the modern era, and the second-largest member of the alcid family overall (the prehistoric Miomancalla was larger). It had a black back and a white belly. The black beak was heavy and hooked, with grooves on its surface. During summer, great auk plumage showed a white patch over each eye; during winter, the great auk lost these patches, instead developing a white band stretching between the eyes. The wings were short, rendering the bird flightless. Instead, the great auk was a powerful swimmer, a trait that it used in hunting. Its favoured prey were fish, including Atlantic menhaden and capelin, as well as crustaceans. Although agile in the water, it was clumsy on land. Great auk pairs mated for life. They nested in extremely dense and social colonies, laying one egg on bare rock. The egg was white with variable brown marbling. Both parents participated in the incubation of the egg for around six weeks before the young hatched. The young left the nest site after two to three weeks, although the parents continued to care for it.
The great auk was an important part of many Native American cultures, both as a food source and as a symbolic item. Many Maritime Archaic people were buried with great auk bones. One discovered burial included a person covered by more than 200 great auk beaks, which are presumed to be the remnants of a cloak made of great auk skins. Early European explorers to the Americas used the great auk as a convenient food source or as fishing bait, reducing its numbers. The bird's down was in high demand in Europe, a factor that largely eliminated the European populations by the mid-16th century. Around the same time, nations such as Great Britain began to realise that the great auk was disappearing, and it became the beneficiary of many early environmental laws, but despite these measures it was still hunted.
Its growing rarity increased interest from European museums and private collectors in obtaining skins and eggs of the bird. On 3 June 1844, the last two confirmed specimens were killed on Eldey, off the coast of Iceland, ending the last known breeding attempt. Later reports of roaming individuals being seen or caught are unconfirmed. A report of one great auk in 1852 is considered by some to be the last sighting of a member of the species. The great auk is mentioned in several novels, and the scientific journal of the American Ornithological Society was named The Auk (now Ornithology) in honour of the bird until 2021.
Taxonomy and evolution
Analysis of mtDNA sequences has confirmed morphological and biogeographical studies suggesting that the razorbill is the closest living relative of the great auk. The great auk also was related closely to the little auk or dovekie, which underwent a radically different evolution compared to Pinguinus. Due to its outward similarity to the razorbill (apart from flightlessness and size), the great auk often was placed in the genus Alca, following Linnaeus.
The oldest known fossil records of the modern great auk are from the Boxgrove Palaeolithic site of England and Lower Town Hill Formation of Bermuda, both of which are dated to the Middle Pleistocene at least 400,000 years BP. The Pliocene sister species, Pinguinus alfrednewtoni, and molecular evidence show that the three closely related genera diverged soon after their common ancestor, a bird probably similar to a stout Xantus's murrelet, had spread to the coasts of the Atlantic. Apparently, by that time, the murres, or Atlantic guillemots, already had split from the other Atlantic alcids. Razorbill-like birds were common in the Atlantic during the Pliocene, but the evolution of the little auk is sparsely documented. The molecular data are compatible with either possibility, but the weight of evidence suggests placing the great auk in a distinct genus. Some ornithologists still believe it is more appropriate to retain the species in the genus Alca. It is the only recorded British bird made extinct in historic times.
The placement of the great auk among its closest relatives was examined in a 2004 genetic study, which supported the razorbill as its sister taxon.
Pinguinus alfrednewtoni was a larger, and also flightless, member of the genus Pinguinus that lived during the Early Pliocene. Known from bones found in the Yorktown Formation of the Lee Creek Mine in North Carolina, it is believed to have split, along with the great auk, from a common ancestor. Pinguinus alfrednewtoni lived in the western Atlantic, while the great auk lived in the eastern Atlantic. After the former died out following the Pliocene, the great auk took over its territory. The great auk was not related closely to the other extinct genera of flightless alcids, Mancalla, Praemancalla, and Alcodes.
Etymology
The great auk was one of the 4,400 animal species formally described by Carl Linnaeus in his eighteenth-century work Systema Naturae, in which it was given the binomial Alca impennis. The name Alca is a Latin derivative of the Scandinavian word for razorbills and their relatives. The bird was known in literature even before this and was described by Charles d'Ecluse in 1605 as Mergus Americanus. This description also included a woodcut that represents the oldest unambiguous visual depiction of the bird.
The species was not placed in its own scientific genus, Pinguinus, until 1791. The generic name is derived from the Spanish, Portuguese, and French name for the species, in turn from the Latin pinguis, meaning "plump"; the specific name, impennis, is from Latin and refers to the lack of flight feathers, or pennae.
The Irish name for the great auk means "big seabird/auk", and the Basque name means "spearbill". Its early French name was apponatz; the modern French name is grand pingouin. The Norse called the great auk geirfugl, which means "spearbird". This has led to an alternative English common name for the bird, garefowl or gairfowl. The Inuit name for the great auk was isarukitsok, which meant "little wing".
The word "penguin" first appears in the sixteenth century as a synonym for "great auk". Although the etymology is debated, the generic name "penguin" may be derived from the Welsh pen gwyn "white head", either because the birds lived in New Brunswick on White Head Island (Pen Gwyn in Welsh) or because the great auk had such large white circles on its head. When European explorers discovered what today are known as penguins in the Southern Hemisphere, they noticed their similar appearance to the great auk and named them after this bird, although biologically, they are not closely related. Whalers also lumped the northern and southern birds together under the common name "woggins".
Description
The flightless great auk was the second-largest member of both its family and the order Charadriiformes overall, surpassed only by the mancalline Miomancalla; it is, however, the largest species to survive into modern times. The great auks that lived farther north averaged larger in size than the more southerly members of the species. Males and females were similar in plumage, although there is evidence for differences in size, particularly in bill and femur length. The back was primarily a glossy black, and the belly was white. The neck and legs were short, and the head and wings small. During summer, it developed a wide white eye patch over each eye, which had a hazel or chestnut iris. Auks are known for their close resemblance to penguins; their webbed feet and countershading are a result of convergent evolution in the water. During winter the great auk moulted and lost this eye patch, which was replaced with a wide white band and a grey line of feathers that stretched from the eye to the ear. During the summer, its chin and throat were blackish-brown and the inside of the mouth was yellow. In winter, the throat became white. Some individuals reportedly had grey plumage on their flanks, but the purpose, seasonal duration, and frequency of this variation are unknown. The bill was large and curved downward at the top; it also had deep white grooves in both the upper and lower mandibles, up to seven on the upper mandible and twelve on the lower mandible in summer, although there were fewer in winter. The wings and the longest wing feathers were short. Its feet and short claws were black, while the webbed skin between the toes was brownish black. The legs were set far back on the bird's body, which gave it powerful swimming and diving abilities.
Hatchlings were described as grey and downy, but their exact appearance is unknown, since no skins exist today. Juvenile birds had fewer prominent grooves in their beaks than adults and they had mottled white and black necks, while the eye spot found in adults was not present; instead, a grey line ran through the eyes (which still had white eye rings) to just below the ears.
Great auk calls included low croaking and a hoarse scream. A captive great auk was observed making a gurgling noise when anxious. It is not known what its other vocalisations were, but it is believed that they were similar to those of the razorbill, only louder and deeper.
Distribution and habitat
The great auk was found in the cold North Atlantic coastal waters along the coasts of Canada, the northeastern United States, Norway, Greenland, Iceland, the Faroe Islands, Ireland, Great Britain, France, and the Iberian Peninsula. Pleistocene fossils indicate the great auk also inhabited Southern France, Italy, and other coasts of the Mediterranean basin. It was common on the Grand Banks of Newfoundland. In recorded history, the great auk typically did not go farther south than Massachusetts Bay in the winter. Great auk bones have been found as far south as Florida, where it may have been present during four periods: approximately 1000 BC and 1000 AD, as well as during the fifteenth century and the seventeenth century. It has been suggested that some of the bones discovered in Florida may be the result of aboriginal trading. In the eastern Atlantic, the southernmost records of this species are two isolated bones, one from Madeira and another from the Neolithic site of El Harhoura 2 in Morocco.
The great auk left the North Atlantic waters for land only to breed, even roosting at sea when not breeding. The rookeries of the great auk were found from Baffin Bay to the Gulf of St. Lawrence, across the far northern Atlantic, including Iceland, and in Norway and the British Isles in Europe. For their nesting colonies the great auks required rocky islands with sloping shorelines that provided access to the sea. These were very limiting requirements and it is believed that the great auk never had more than 20 breeding colonies. The nesting sites also needed to be close to rich feeding areas and to be far enough from the mainland to discourage visitation by predators such as humans and polar bears. The localities of only seven former breeding colonies are known: Papa Westray in the Orkney Islands, St. Kilda off Scotland, Grimsey Island, Eldey Island, Geirfuglasker near Iceland, Funk Island near Newfoundland, and the Bird Rocks (Rochers-aux-Oiseaux) in the Gulf of St. Lawrence. Records suggest that this species may have bred on Cape Cod in Massachusetts. By the late eighteenth and early nineteenth centuries, the breeding range of the great auk was restricted to Funk Island, Grimsey Island, Eldey Island, the Gulf of St. Lawrence, and the St. Kilda islands. Funk Island was the largest known breeding colony. After the chicks fledged, the great auk migrated north and south away from the breeding colonies and they tended to go southward during late autumn and winter.
Ecology and behaviour
The great auk was never observed and described by modern scientists during its existence and is known only from the accounts of laymen, such as sailors, so its behaviour is not well known and is difficult to reconstruct. Much may be inferred from its close living relative, the razorbill, as well as from remaining soft tissue.
Great auks walked slowly and sometimes used their wings to help them traverse rough terrain. When they did run, it was awkwardly and with short steps in a straight line. They had few natural predators, mainly large marine mammals, such as the orca, and white-tailed eagles. Polar bears preyed on nesting colonies of the great auk. Based on observations by the naturalist Otto Fabricius (the only scientist to make primary observations of the great auk), some auks were "stupid and tame" whilst others were difficult to approach, which he suggested was related to the bird's age. Humans preyed upon them as food, for feathers, and as specimens for museums and private collections. Great auks reacted to noises but were rarely frightened by the sight of something. They used their bills aggressively, both in the dense nesting sites and when threatened or captured by humans. These birds are believed to have had a lifespan of approximately 20 to 25 years. During the winter, the great auk migrated south, either in pairs or in small groups, but never with the entire nesting colony.
The great auk was an excellent swimmer, using its wings to propel itself underwater. While swimming, the head was held up but the neck was drawn in. This species was capable of banking, veering, and turning underwater. The great auk was known to dive to considerable depths, and it has been claimed that the species was able to dive deeper still. To conserve energy, most dives were shallow. It also could hold its breath for 15 minutes, longer than a seal. Its ability to dive so deeply reduced competition with other alcid species. The great auk was capable of accelerating underwater, then shooting out of the water to land on a rocky ledge above the ocean's surface.
Diet
This alcid typically fed in shoaling waters that were shallower than those frequented by other alcids, although after the breeding season they were sighted far from land. They are believed to have fed cooperatively in flocks. Their main food was fish, usually small, though occasionally their prey was up to half the bird's own length. Based on remains associated with great auk bones found on Funk Island and on ecological and morphological considerations, it seems that Atlantic menhaden and capelin were their favoured prey. Other suggested prey include lumpsuckers, shorthorn sculpins, cod, and sand lance, as well as crustaceans. The young of the great auk are believed to have eaten plankton and, possibly, fish and crustaceans regurgitated by adults.
Reproduction
Historical descriptions of great auk breeding behaviour are somewhat unreliable. Great auks began pairing in early and mid-May. They are believed to have mated for life (although some theorise that great auks could have mated outside their pair, a trait seen in the razorbill). Once paired, they nested at the base of cliffs in colonies, likely where they copulated. Mated pairs had a social display in which they bobbed their heads and displayed their white eye patch, bill markings, and yellow mouth. These colonies were extremely crowded and dense, with nesting birds packed closely together. The colonies were very social. When the colonies included other species of alcid, the great auks were dominant due to their size.
Female great auks laid only one egg each year, between late May and early June, although they could lay a replacement egg if the first one was lost. In years when food was scarce, the great auks did not breed. The single egg was laid on bare ground at varying distances from the shore. The egg was ovate and elongate in shape. It was yellowish white to light ochre, with a varying pattern of black, brown, or greyish spots and lines that often were concentrated at the large end. It is believed that the variation in the egg streaks enabled the parents to recognise their egg among those in the vast colony. The pair took turns incubating the egg in an upright position for the 39 to 44 days before it hatched, typically in June, although eggs could be present at the colonies as late as August.
The parents also took turns feeding their chick. According to one account, the chick was covered with grey down. The young bird took only two or three weeks to mature enough to abandon the nest and the land for the water, typically around the middle of July. The parents continued to care for their young after they fledged, and adults could be seen swimming with their young perched on their backs. Great auks matured sexually when they were four to seven years old.
Relationship with humans
The great auk was a food source for Neanderthals more than 100,000 years ago, as evidenced by well-cleaned bones found by their campfires. Images believed to depict the great auk also were carved into the walls of the El Pendo Cave in Camargo, Spain, and Paglicci, Italy, more than 35,000 years ago, and cave paintings 20,000 years old have been found in France's Grotte Cosquer.
Native Americans valued the great auk as a food source during the winter and as an important cultural symbol. Images of the great auk have been found in bone necklaces. A person buried at the Maritime Archaic site at Port au Choix, Newfoundland, dating to about 2000 BC, was found surrounded by more than 200 great auk beaks, which are believed to have been part of a suit made from their skins, with the heads left attached as decoration. Nearly half of the bird bones found in graves at this site were of the great auk, suggesting that it had great cultural significance for the Maritime Archaic people. The extinct Beothuks of Newfoundland made pudding out of the eggs of the great auk. The Dorset Eskimos also hunted it. The Saqqaq in Greenland overhunted the species, causing a local reduction in range.
Later, European sailors used the great auks as a navigational beacon, as the presence of these birds signalled that the Grand Banks of Newfoundland were near.
This species is estimated to have had a maximum population in the millions. The great auk was hunted on a significant scale for food, eggs, and its down feathers from at least the eighth century. Prior to that, hunting by local natives is documented from Late Stone Age Scandinavia and eastern North America, as well as from early fifth-century Labrador, where the bird seems to have occurred only as a straggler. Early explorers, including Jacques Cartier, and numerous ships attempting to find gold on Baffin Island were not provisioned with food for the journey home and therefore used great auks both as a convenient food source and as bait for fishing. Reportedly, some of the later vessels anchored next to a colony and ran out planks to the land. The sailors then herded hundreds of great auks onto the ships, where they were slaughtered. Some authors have questioned the reports of this hunting method and whether it was successful. Great auk eggs were also a valued food source, as the eggs were three times the size of a murre's and had a large yolk. These sailors also introduced rats onto the islands, which preyed upon the nests.
Extinction
The Little Ice Age may have reduced the population of the great auk by exposing more of their breeding islands to predation by polar bears, but massive exploitation by humans for their down drastically reduced the population, with recent evidence indicating the latter alone is likely the primary driver of its extinction. By the mid-sixteenth century, the nesting colonies along the European side of the Atlantic were nearly all eliminated by humans killing this bird for its down, which was used to make pillows. In 1553, the great auk received its first official protection. In 1794, Great Britain banned the killing of this species for its feathers. In St. John's, those violating a 1775 law banning hunting the great auk for its feathers or eggs were publicly flogged, though hunting for use as fishing bait was still permitted. On the North American side, eider down initially was preferred, but once the eiders were nearly driven to extinction in the 1770s, down collectors switched to the great auk at the same time that hunting for food, fishing bait, and oil decreased.
The great auk had disappeared from Funk Island by 1800. An account by Aaron Thomas of HMS Boston from 1794 described how the bird had been slaughtered systematically until then.
With its increasing rarity, specimens of the great auk and its eggs became collectible and highly prized by rich Europeans, and the loss of a large number of its eggs to collection contributed to the demise of the species. Eggers, individuals who visited the nesting sites of the great auk to collect their eggs, quickly realised that the birds did not all lay their eggs on the same day, so they could make return visits to the same breeding colony. Eggers collected only the eggs without embryos and typically discarded the eggs with embryos growing inside them.
On the islet of Stac an Armin, St. Kilda, Scotland, in July 1840, the last great auk seen in Britain was caught and killed. Three men from St. Kilda caught a single "garefowl", noticing its little wings and the large white spot on its head. They tied it up and kept it alive for three days, until a large storm arose. Believing that the bird was a witch and was causing the storm, they then killed it by beating it with a stick.
The last colony of great auks lived on Geirfuglasker (the "Great Auk Rock") off Iceland. This islet was a volcanic rock surrounded by cliffs that made it inaccessible to humans, but in 1830, the islet submerged after a volcanic eruption, and the birds moved to the nearby island of Eldey, which was accessible from a single side. When the colony was discovered in 1835, nearly fifty birds were present. Museums, desiring the skins of the great auk for preservation and display, quickly began collecting birds from the colony. The last pair, found incubating an egg, was killed there on 3 June 1844, on request from a merchant who wanted specimens.
Jón Brandsson and Sigurður Ísleifsson, the men who had killed the last birds, were later interviewed by great auk specialist John Wolley, and Sigurður described the act to him.
A later claim of a live individual sighted in 1852 on the Grand Banks of Newfoundland has been accepted by the International Union for Conservation of Nature and Natural Resources.
Alleged sightings of the auk continued for decades after it was believed extinct. The last alleged sighting occurred in the Lofotens in 1927. Errol Fuller noted that several of the later sightings were hoaxes or misidentifications of penguins that had been released near Norway.
There is ongoing discussion about the possibility of reviving the great auk using DNA from collected specimens, though the idea remains controversial.
Preserved specimens
Today, 78 skins of the great auk remain, mostly in museum collections, along with approximately 75 eggs and 24 complete skeletons. All but four of the surviving skins are in summer plumage, and only two of these are immature. No hatchling specimens exist. Each egg and skin has been assigned a number by specialists. Although thousands of isolated bones were collected, from nineteenth-century Funk Island to Neolithic middens, only a few complete skeletons exist. Natural mummies also are known from Funk Island, and the eyes and internal organs of the last two birds from 1844 are stored in the Zoological Museum, Copenhagen. The whereabouts of the skins of the last two individuals had been unknown for more than a hundred years, but that mystery has been partly resolved using DNA extracted from the organs of the last individuals and from the skins of the candidate specimens suggested by Errol Fuller (those in the Übersee-Museum Bremen, the Royal Belgian Institute of Natural Sciences, the Zoological Museum of Kiel University, the Los Angeles County Museum of Natural History, and the Landesmuseum Natur und Mensch Oldenburg). A positive match was found between the organs of the male individual and the skin now in the Royal Belgian Institute of Natural Sciences in Brussels. No match was found between the female organs and any specimen from Fuller's list, but the authors speculate that the skin in the Cincinnati Museum of Natural History and Science may be a potential candidate, due to a common history with the Los Angeles specimen.
Following the bird's extinction, remains of the great auk increased dramatically in value, and auctions of specimens created intense interest in Victorian Britain, where 15 specimens are now located, the largest number of any country. A specimen was bought in 1971 by the Icelandic Museum of Natural History for £9,000, which placed it in the Guinness Book of Records as the most expensive stuffed bird ever sold. The price of its eggs sometimes reached up to 11 times the amount earned by a skilled worker in a year. The present whereabouts of six of the eggs are unknown, and several other eggs have been destroyed accidentally. Two mounted skins were destroyed in the twentieth century, one in the Mainz Museum during the Second World War and one in the Museu Bocage, Lisbon, which was lost to a fire in 1978.
Cultural depictions
Children's books
Charles Kingsley's The Water-Babies: A Fairy Tale for a Land-Baby (1863) features the last great auk (referred to in the book as a gairfowl) telling the tale of the demise of her species. Different illustrations of the auk are included in the original 1863 version, the 1889 version illustrated by Linley Sambourne, the 1916 version by Frank A. Nankivell, and the 1916 version by Jessie Willcox Smith. Kingsley's auk implicates the "nasty fellows" who "shot us so, and knocked us on the head, and took our eggs." While Kingsley portrays the extinction as sad, he provides his opinion that "there are better things come in her place," namely human colonization of the islands for the cod fishing industry, which would serve to feed the poor. He concludes the discussion with a quote from Tennyson: "The old order changeth, giving place to the new; And God fulfils Himself in many ways."
Enid Blyton's The Island of Adventure (1944) sends one of the protagonists on a failed search for what he believes is a lost colony of the species.
Literature and journalism
The great auk is also present in a wide variety of other works of fiction.
In the short story The Harbor-Master by Robert W. Chambers, the discovery and attempted recovery of the last known pair of great auks is central to the plot (which also involves a proto-Lovecraftian element of suspense). The story first appeared in Ainslee's Magazine (August 1898) and was slightly revised to become the first five chapters of Chambers' episodic novel In Search of the Unknown, (Harper and Brothers Publishers, New York, 1904).
Penguin Island, a 1908 French satirical novel by the Nobel Prize winning author Anatole France, narrates the fictional history of a great auk population that is mistakenly baptized by a nearsighted missionary.
In his novel Ulysses (1922), James Joyce mentions the bird while the novel's main character is drifting into sleep. He associates the great auk with the mythical roc as a method of formally returning the main character to a sleepy land of fantasy and memory.
W. S. Merwin mentions the great auk in a short litany of extinct animals in his poem "For a Coming Extinction", one of the poems from his 1967 collection, "The Lice".
Night of the Auk, a 1956 Broadway drama by Arch Oboler, depicts a group of astronauts returning from the Moon to discover that a full-blown nuclear war has broken out. Oboler draws a parallel between the anthropogenic extinction of the great auk and the story's nuclear extinction of humankind.
A great auk is collected by fictional naturalist Stephen Maturin in the Patrick O'Brian historical novel The Surgeon's Mate (1980). This work also details the harvesting of a colony of auks.
Farley Mowat devotes the first section, "Spearbill", of his book Sea of Slaughter (1984) to the history of the great auk.
Elizabeth Kolbert's Pulitzer Prize-winning book, The Sixth Extinction: An Unnatural History (2014), includes a chapter on the great auk.
Performing arts
The great auk is the subject of a ballet, Still Life at the Penguin Café (1988), and a song, "A Dream Too Far", in the ecological musical Rockford's Rock Opera (2010).
Mascots
The great auk is the mascot of the Archmere Academy in Claymont, Delaware, and the Adelaide University Choral Society (AUCS) in Australia.
The great auk was formerly the mascot of the Lindsay Frost campus of Sir Sandford Fleming College in Ontario. In 2012, the two separate sports programs of Fleming College were combined and the great auk mascot went extinct. The Lindsay Frost campus's student-owned bar, student centre, and lounge is still known as the Auk's Lodge.
It was also the mascot of the now-defunct Knowledge Masters educational competition.
Names
The scientific journal of the American Ornithologists' Union, now called Ornithology, was named The Auk in honour of this bird until 2021.
According to Homer Hickam's memoir, Rocket Boys, and its film adaptation, October Sky, the early rockets he and his friends built were named "Auk".
A British cigarette company, Great Auk Cigarettes, was named after this bird.
Fine arts
Walton Ford, the American painter, has featured great auks in two paintings: The Witch of St. Kilda and Funk Island. Replica skins and eggs were made and sold in the 1920s for collectors.
The English painter and writer Errol Fuller produced Last Stand for his monograph on the species.
The great auk also appeared on one stamp in a set of five depicting extinct birds issued by Cuba in 1974.
| Biology and health sciences | Charadriiformes | Animals |
12558 | https://en.wikipedia.org/wiki/Galaxy | Galaxy | A galaxy is a system of stars, stellar remnants, interstellar gas, dust, and dark matter bound together by gravity. The word is derived from the Greek (), literally 'milky', a reference to the Milky Way galaxy that contains the Solar System. Galaxies, averaging an estimated 100 million stars, range in size from dwarfs with less than a thousand stars, to the largest galaxies known – supergiants with one hundred trillion stars, each orbiting its galaxy's center of mass. Most of the mass in a typical galaxy is in the form of dark matter, with only a few percent of that mass visible in the form of stars and nebulae. Supermassive black holes are a common feature at the centres of galaxies.
Galaxies are categorised according to their visual morphology as elliptical, spiral, or irregular. The Milky Way is an example of a spiral galaxy. It is estimated that there are between 200 billion and 2 trillion galaxies in the observable universe. Most galaxies are 1,000 to 100,000 parsecs in diameter (approximately 3,000 to 300,000 light-years) and are separated by distances on the order of millions of parsecs (or megaparsecs). For comparison, the Milky Way has a diameter of at least 26,800 parsecs (87,400 ly) and is separated from the Andromeda Galaxy, its nearest large neighbour, by just over 750,000 parsecs (2.5 million ly).
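The parenthetical light-year figures follow from the standard conversion of roughly 3.26 light-years per parsec. As an illustrative check (the conversion factor is standard, not a value taken from this article), the Milky Way's quoted diameter converts as

$1\ \text{pc} \approx 3.26\ \text{ly}, \qquad d_{\text{MW}} \approx 26{,}800\ \text{pc} \times 3.26\ \tfrac{\text{ly}}{\text{pc}} \approx 87{,}400\ \text{ly},$

in agreement with the value given above; the same factor links the 1,000–100,000 pc size range to approximately 3,000–300,000 ly.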
The space between galaxies is filled with a tenuous gas (the intergalactic medium) with an average density of less than one atom per cubic metre. Most galaxies are gravitationally organised into groups, clusters and superclusters. The Milky Way is part of the Local Group, which it dominates along with the Andromeda Galaxy. The group is part of the Virgo Supercluster. At the largest scale, these associations are generally arranged into sheets and filaments surrounded by immense voids. Both the Local Group and the Virgo Supercluster are contained in a much larger cosmic structure named Laniakea.
Etymology
The word galaxy was borrowed via French and Medieval Latin from the Greek term for the Milky Way, () 'milky (circle)', named after its appearance as a milky band of light in the sky. In Greek mythology, Zeus places his son, born by a mortal woman, the infant Heracles, on Hera's breast while she is asleep so the baby will drink her divine milk and thus become immortal. Hera wakes up while breastfeeding and then realises she is nursing an unknown baby: she pushes the baby away, some of her milk spills, and it produces the band of light known as the Milky Way.
In the astronomical literature, the capitalised word "Galaxy" is often used to refer to the Milky Way galaxy, to distinguish it from the other galaxies in the observable universe. The English term Milky Way can be traced back to a story by Geoffrey Chaucer.
Galaxies were initially discovered telescopically and were known as spiral nebulae. Most 18th- to 19th-century astronomers considered them either unresolved star clusters or anagalactic nebulae, and they were simply thought of as part of the Milky Way; their true composition and nature remained a mystery. Observations using larger telescopes of a few nearby bright galaxies, like the Andromeda Galaxy, began resolving them into huge conglomerations of stars, and based simply on the apparent faintness and sheer population of those stars, the true distances of these objects placed them well beyond the Milky Way. For this reason they were popularly called island universes, but this term quickly fell into disuse, as the word universe implied the entirety of existence. Instead, they became known simply as galaxies.
Nomenclature
Millions of galaxies have been catalogued, but only a few have well-established names, such as the Andromeda Galaxy, the Magellanic Clouds, the Whirlpool Galaxy, and the Sombrero Galaxy. Astronomers work with numbers from certain catalogues, such as the Messier catalogue, the NGC (New General Catalogue), the IC (Index Catalogue), the CGCG (Catalogue of Galaxies and of Clusters of Galaxies), the MCG (Morphological Catalogue of Galaxies), the UGC (Uppsala General Catalogue of Galaxies), and the PGC (Catalogue of Principal Galaxies, also known as LEDA). All the well-known galaxies appear in one or more of these catalogues but each time under a different number. For example, Messier 109 (or "M109") is a spiral galaxy having the number 109 in the catalogue of Messier. It also has the designations NGC 3992, UGC 6937, CGCG 269–023, MCG +09-20-044, and PGC 37617 (or LEDA 37617), among others. Millions of fainter galaxies are known by their identifiers in sky surveys such as the Sloan Digital Sky Survey.
Observation history
Milky Way
Greek philosopher Democritus (450–370 BCE) proposed that the bright band on the night sky known as the Milky Way might consist of distant stars.
Aristotle (384–322 BCE), however, believed the Milky Way was caused by "the ignition of the fiery exhalation of some stars that were large, numerous and close together" and that the "ignition takes place in the upper part of the atmosphere, in the region of the World that is continuous with the heavenly motions." Neoplatonist philosopher Olympiodorus the Younger (–570 CE) was critical of this view, arguing that if the Milky Way was sublunary (situated between Earth and the Moon) it should appear different at different times and places on Earth, and that it should have parallax, which it did not. In his view, the Milky Way was celestial.
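Olympiodorus's reasoning turns on parallax: a nearby object should appear to shift against the distant background when viewed from widely separated points on Earth. As a hedged illustration (the numbers here are illustrative and not drawn from the text), the small-angle relation is

$p \approx \frac{b}{d},$

where $b$ is the baseline between observers and $d$ the distance to the object. A sublunary object at roughly the Moon's distance ($d \approx 3.8 \times 10^{5}\ \text{km}$) viewed from sites a few thousand kilometres apart ($b \approx 6 \times 10^{3}\ \text{km}$) would show $p \approx 0.016\ \text{rad} \approx 1^{\circ}$, an easily noticeable shift; the absence of any measurable shift for the Milky Way therefore implied that it lay far beyond the sublunary region.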
According to Mohani Mohamed, Arabian astronomer Ibn al-Haytham (965–1037) made the first attempt at observing and measuring the Milky Way's parallax, and he thus "determined that because the Milky Way had no parallax, it must be remote from the Earth, not belonging to the atmosphere." Persian astronomer al-Biruni (973–1048) proposed the Milky Way galaxy was "a collection of countless fragments of the nature of nebulous stars." Andalusian astronomer Avempace (d. 1138) proposed that it was composed of many stars that almost touched one another and appeared to be a continuous image due to the effect of refraction from sublunary material, citing his observation of the conjunction of Jupiter and Mars as evidence of this occurring when two objects were near. In the 14th century, the Syrian-born Ibn Qayyim al-Jawziyya proposed the Milky Way galaxy was "a myriad of tiny stars packed together in the sphere of the fixed stars."
Actual proof of the Milky Way consisting of many stars came in 1610 when the Italian astronomer Galileo Galilei used a telescope to study it and discovered it was composed of a huge number of faint stars. In 1750, English astronomer Thomas Wright, in his An Original Theory or New Hypothesis of the Universe, correctly speculated that it might be a rotating body of a huge number of stars held together by gravitational forces, akin to the Solar System but on a much larger scale, and that the resulting disk of stars could be seen as a band on the sky from a perspective inside it. In his 1755 treatise, Immanuel Kant elaborated on Wright's idea about the Milky Way's structure.
The first project to describe the shape of the Milky Way and the position of the Sun was undertaken by William Herschel in 1785 by counting the number of stars in different regions of the sky. He produced a diagram of the shape of the galaxy with the Solar System close to the center. Using a refined approach, Kapteyn in 1920 arrived at the picture of a small (diameter about 15 kiloparsecs) ellipsoid galaxy with the Sun close to the center. A different method by Harlow Shapley based on the cataloguing of globular clusters led to a radically different picture: a flat disk with diameter approximately 70 kiloparsecs and the Sun far from the centre. Both analyses failed to take into account the absorption of light by interstellar dust present in the galactic plane; but after Robert Julius Trumpler quantified this effect in 1930 by studying open clusters, the present picture of the Milky Way galaxy emerged.
Distinction from other nebulae
A few galaxies outside the Milky Way are visible on a dark night to the unaided eye, including the Andromeda Galaxy, Large Magellanic Cloud, Small Magellanic Cloud, and the Triangulum Galaxy. In the 10th century, Persian astronomer Abd al-Rahman al-Sufi made the earliest recorded identification of the Andromeda Galaxy, describing it as a "small cloud". In 964, he probably mentioned the Large Magellanic Cloud in his Book of Fixed Stars, referring to "Al Bakr of the southern Arabs", since at a declination of about 70° south it was not visible where he lived. It was not well known to Europeans until Magellan's voyage in the 16th century. The Andromeda Galaxy was later independently noted by Simon Marius in 1612.
In 1734, philosopher Emanuel Swedenborg in his Principia speculated that there might be other galaxies outside our own, formed into galactic clusters that were minuscule parts of a universe extending far beyond what could be seen. These views "are remarkably close to the present-day views of the cosmos."
In 1745, Pierre Louis Maupertuis conjectured that some nebula-like objects were collections of stars with unique properties, including a glow exceeding the light its stars produced on their own, and repeated Johannes Hevelius's view that the bright spots were massive and flattened due to their rotation.
In 1750, Thomas Wright correctly speculated that the Milky Way was a flattened disk of stars, and that some of the nebulae visible in the night sky might be separate Milky Ways.
Toward the end of the 18th century, Charles Messier compiled a catalog containing the 109 brightest celestial objects having nebulous appearance. Subsequently, William Herschel assembled a catalog of 5,000 nebulae. In 1845, Lord Rosse examined the nebulae catalogued by Herschel and observed the spiral structure of Messier object M51, now known as the Whirlpool Galaxy.
In 1912, Vesto M. Slipher made spectrographic studies of the brightest spiral nebulae to determine their composition. Slipher discovered that the spiral nebulae have high Doppler shifts, indicating that they are moving at a rate exceeding the velocity of the stars he had measured. He found that the majority of these nebulae are moving away from us.
In 1917, Heber Doust Curtis observed nova S Andromedae within the "Great Andromeda Nebula", as the Andromeda Galaxy, Messier object M31, was then known. Searching the photographic record, he found 11 more novae. Curtis noticed that these novae were, on average, 10 magnitudes fainter than those that occurred within the Milky Way. As a result, he was able to come up with a distance estimate of 150,000 parsecs. He became a proponent of the so-called "island universes" hypothesis, which holds that spiral nebulae are actually independent galaxies.
In 1920 a debate took place between Harlow Shapley and Heber Curtis, the Great Debate, concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the universe. To support his claim that the Great Andromeda Nebula is an external galaxy, Curtis noted the appearance of dark lanes resembling the dust clouds in the Milky Way, as well as the significant Doppler shift.
In 1922, the Estonian astronomer Ernst Öpik gave a distance determination that supported the theory that the Andromeda Nebula is indeed a distant extra-galactic object. Using the new 100-inch Mt. Wilson telescope, Edwin Hubble was able to resolve the outer parts of some spiral nebulae as collections of individual stars and identified some Cepheid variables, thus allowing him to estimate the distance to the nebulae: they were far too distant to be part of the Milky Way. In 1926 Hubble produced a classification of galactic morphology that is used to this day.
Multi-wavelength observation
Advances in astronomy have always been driven by technology. After centuries of success in optical astronomy, recent decades have seen major progress in other regions of the electromagnetic spectrum.
The dust present in the interstellar medium is opaque to visual light. It is more transparent to far-infrared, which can be used to observe the interior regions of giant molecular clouds and galactic cores in great detail. Infrared is also used to observe distant, red-shifted galaxies that were formed much earlier. Water vapor and carbon dioxide absorb a number of useful portions of the infrared spectrum, so high-altitude or space-based telescopes are used for infrared astronomy.
The first non-visual study of galaxies, particularly active galaxies, was made using radio frequencies. The Earth's atmosphere is nearly transparent to radio between 5 MHz and 30 GHz. The ionosphere blocks signals below this range. Large radio interferometers have been used to map the active jets emitted from active nuclei.
Ultraviolet and X-ray telescopes can observe highly energetic galactic phenomena. Ultraviolet flares are sometimes observed when a star in a distant galaxy is torn apart from the tidal forces of a nearby black hole. The distribution of hot gas in galactic clusters can be mapped by X-rays. The existence of supermassive black holes at the cores of galaxies was confirmed through X-ray astronomy.
Modern research
In 1944, Hendrik van de Hulst predicted that microwave radiation with a wavelength of 21 cm would be detectable from interstellar atomic hydrogen gas; it was observed in 1951. This radiation is not affected by dust absorption, and so its Doppler shift can be used to map the motion of the gas in the Milky Way. These observations led to the hypothesis of a rotating bar structure in the center of the Milky Way. With improved radio telescopes, hydrogen gas could also be traced in other galaxies.
In the 1970s, Vera Rubin uncovered a discrepancy between observed galactic rotation speed and that predicted by the visible mass of stars and gas. Today, the galaxy rotation problem is thought to be explained by the presence of large quantities of unseen dark matter.
Beginning in the 1990s, the Hubble Space Telescope yielded improved observations. Among other things, its data helped establish that the missing dark matter in the Milky Way could not consist solely of inherently faint and small stars. The Hubble Deep Field, an extremely long exposure of a relatively empty part of the sky, provided evidence that there are about 125 billion (1.25×10^11) galaxies in the observable universe. Improved technology in detecting the spectra invisible to humans (radio telescopes, infrared cameras, and X-ray telescopes) allows detection of other galaxies that are not detected by Hubble. Particularly, surveys in the Zone of Avoidance (the region of sky blocked at visible-light wavelengths by the Milky Way) have revealed a number of new galaxies.
A 2016 study published in The Astrophysical Journal, led by Christopher Conselice of the University of Nottingham, used 20 years of Hubble images to estimate that the observable universe contained at least two trillion (2×10^12) galaxies. However, later observations with the New Horizons space probe from outside the zodiacal light reduced this to roughly 200 billion (2×10^11).
Types and morphology
Galaxies come in three main types: ellipticals, spirals, and irregulars. A slightly more extensive description of galaxy types based on their appearance is given by the Hubble sequence. Since the Hubble sequence is entirely based upon visual morphological type (shape), it may miss certain important characteristics of galaxies such as star formation rate in starburst galaxies and activity in the cores of active galaxies.
Many galaxies are thought to contain a supermassive black hole at their center. This includes the Milky Way, whose core region is called the Galactic Center.
Ellipticals
The Hubble classification system rates elliptical galaxies on the basis of their ellipticity, ranging from E0, being nearly spherical, up to E7, which is highly elongated. These galaxies have an ellipsoidal profile, giving them an elliptical appearance regardless of the viewing angle. Their appearance shows little structure and they typically have relatively little interstellar matter. Consequently, these galaxies also have a low proportion of open clusters and a reduced rate of new star formation. Instead, they are dominated by generally older, more evolved stars that are orbiting the common center of gravity in random directions. The stars contain low abundances of heavy elements because star formation ceases after the initial burst. In this sense they have some similarity to the much smaller globular clusters.
Type-cD galaxies
The largest galaxies are the type-cD galaxies.
First described in a 1964 paper by Thomas A. Matthews and others, they are a subtype of the more general class of D galaxies, which are giant elliptical galaxies, but much larger. They are popularly known as supergiant elliptical galaxies and constitute the largest and most luminous galaxies known. These galaxies feature a central elliptical nucleus with an extensive, faint halo of stars extending to megaparsec scales. The profile of their surface brightness as a function of radius (or distance from the core) falls off more slowly than that of their smaller counterparts.
The formation of these cD galaxies remains an active area of research, but the leading model is that they are the result of mergers of smaller galaxies in the environments of dense clusters, or even outside of clusters with random overdensities. These processes are the mechanisms that drive the formation of fossil groups or fossil clusters, where a large, relatively isolated supergiant elliptical resides in the middle of the cluster and is surrounded by an extensive cloud of X-rays as the residue of these galactic collisions. Another, older model posits the phenomenon of cooling flow, where the heated gases in clusters collapse towards their centers as they cool, forming stars in the process, a phenomenon observed in clusters such as Perseus, and more recently in the Phoenix Cluster.
Shell galaxy
A shell galaxy is a type of elliptical galaxy where the stars in its halo are arranged in concentric shells. About one-tenth of elliptical galaxies have a shell-like structure, which has never been observed in spiral galaxies. These structures are thought to develop when a larger galaxy absorbs a smaller companion galaxy—that as the two galaxy centers approach, they start to oscillate around a center point, and the oscillation creates gravitational ripples forming the shells of stars, similar to ripples spreading on water. For example, galaxy NGC 3923 has over 20 shells.
Spirals
Spiral galaxies resemble spiraling pinwheels. Though the stars and other visible material contained in such a galaxy lie mostly on a plane, the majority of mass in spiral galaxies exists in a roughly spherical halo of dark matter which extends beyond the visible component, as demonstrated by the universal rotation curve concept.
Spiral galaxies consist of a rotating disk of stars and interstellar medium, along with a central bulge of generally older stars. Extending outward from the bulge are relatively bright arms. In the Hubble classification scheme, spiral galaxies are listed as type S, followed by a letter (a, b, or c) which indicates the degree of tightness of the spiral arms and the size of the central bulge. An Sa galaxy has tightly wound, poorly defined arms and possesses a relatively large core region. At the other extreme, an Sc galaxy has open, well-defined arms and a small core region. A galaxy with poorly defined arms is sometimes referred to as a flocculent spiral galaxy, in contrast to the grand design spiral galaxy, which has prominent and well-defined spiral arms. The speed at which a galaxy rotates is thought to correlate with the flatness of the disc, as some spiral galaxies have thick bulges while others are thin and dense.
In spiral galaxies, the spiral arms have the shape of approximate logarithmic spirals, a pattern that can be theoretically shown to result from a disturbance in a uniformly rotating mass of stars. Like the stars, the spiral arms rotate around the center, but they do so with constant angular velocity. The spiral arms are thought to be areas of high-density matter, or "density waves". As stars move through an arm, the space velocity of each stellar system is modified by the gravitational force of the higher density. (The velocity returns to normal after the stars depart on the other side of the arm.) This effect is akin to a "wave" of slowdowns moving along a highway full of moving cars. The arms are visible because the high density facilitates star formation, and therefore they harbor many bright and young stars.
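As a rough illustration of the arm geometry described above, the sketch below traces points along a logarithmic spiral, r = r0·exp(k·θ). The scale r0, pitch parameter k, number of points, and number of turns are arbitrary illustrative values, not measured properties of any particular galaxy.

```python
import math

def log_spiral_arm(r0=1.0, pitch_k=0.2, n_points=200, turns=2.0):
    """Return (x, y) points along a logarithmic spiral r = r0 * exp(k * theta)."""
    points = []
    for i in range(n_points):
        theta = turns * 2 * math.pi * i / (n_points - 1)
        r = r0 * math.exp(pitch_k * theta)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Two-armed spiral: the second arm is the first rotated by 180 degrees.
arm1 = log_spiral_arm()
arm2 = [(-x, -y) for (x, y) in arm1]
print(arm1[:3])
```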
Barred spiral galaxy
A majority of spiral galaxies, including the Milky Way galaxy, have a linear, bar-shaped band of stars that extends outward to either side of the core, then merges into the spiral arm structure. In the Hubble classification scheme, these are designated by an SB, followed by a lower-case letter (a, b or c) which indicates the form of the spiral arms (in the same manner as the categorization of normal spiral galaxies). Bars are thought to be temporary structures that can occur as a result of a density wave radiating outward from the core, or else due to a tidal interaction with another galaxy. Many barred spiral galaxies are active, possibly as a result of gas being channeled into the core along the arms.
Our own galaxy, the Milky Way, is a large disk-shaped barred-spiral galaxy about 30 kiloparsecs in diameter and a kiloparsec thick. It contains about two hundred billion (2×10^11) stars and has a total mass of about six hundred billion (6×10^11) times the mass of the Sun.
Super-luminous spiral
Recently, researchers described galaxies called super-luminous spirals. They are very large, with diameters of up to 437,000 light-years (compared to the Milky Way's 87,400-light-year diameter). With a mass of 340 billion solar masses, they generate a significant amount of ultraviolet and mid-infrared light. They are thought to have a star formation rate around 30 times that of the Milky Way.
Other morphologies
Peculiar galaxies are galactic formations that develop unusual properties due to tidal interactions with other galaxies.
A ring galaxy has a ring-like structure of stars and interstellar medium surrounding a bare core. A ring galaxy is thought to occur when a smaller galaxy passes through the core of a spiral galaxy. Such an event may have affected the Andromeda Galaxy, as it displays a multi-ring-like structure when viewed in infrared radiation.
A lenticular galaxy is an intermediate form that has properties of both elliptical and spiral galaxies. These are categorized as Hubble type S0, and they possess ill-defined spiral arms with an elliptical halo of stars (barred lenticular galaxies receive Hubble classification SB0).
Irregular galaxies are galaxies that can not be readily classified into an elliptical or spiral morphology.
An Irr-I galaxy has some structure but does not align cleanly with the Hubble classification scheme.
Irr-II galaxies do not possess any structure that resembles a Hubble classification, and may have been disrupted. Nearby examples of (dwarf) irregular galaxies include the Magellanic Clouds.
A dark or "ultra diffuse" galaxy is an extremely-low-luminosity galaxy. It may be the same size as the Milky Way, but have a visible star count only one percent of the Milky Way's. Multiple mechanisms for producing this type of galaxy have been proposed, and it is possible that different dark galaxies formed by different means. One candidate explanation for the low luminosity is that the galaxy lost its star-forming gas at an early stage, resulting in old stellar populations.
Dwarfs
Despite the prominence of large elliptical and spiral galaxies, most galaxies are dwarf galaxies. They are relatively small when compared with other galactic formations, being about one hundredth the size of the Milky Way, with only a few billion stars. Blue compact dwarf galaxies contain large clusters of young, hot, massive stars. Ultra-compact dwarf galaxies have been discovered that are only 100 parsecs across.
Many dwarf galaxies may orbit a single larger galaxy; the Milky Way has at least a dozen such satellites, with an estimated 300–500 yet to be discovered.
Most of the information we have about dwarf galaxies comes from observations of the Local Group, which contains two spiral galaxies, the Milky Way and Andromeda, and many dwarf galaxies. These dwarf galaxies are classified as either irregular or dwarf elliptical/dwarf spheroidal galaxies.
A study of 27 Milky Way neighbors found that in all dwarf galaxies, the central mass is approximately 10 million solar masses, regardless of whether the galaxy has thousands or millions of stars. This suggests that galaxies are largely formed by dark matter, and that the minimum size may indicate a form of warm dark matter incapable of gravitational coalescence on a smaller scale.
Variants
Interacting
Interactions between galaxies are relatively frequent, and they can play an important role in galactic evolution. Near misses between galaxies result in warping distortions due to tidal interactions, and may cause some exchange of gas and dust.
Collisions occur when two galaxies pass directly through each other and have sufficient relative momentum not to merge. The stars of interacting galaxies usually do not collide, but the gas and dust within the two galaxies interact, sometimes triggering star formation. A collision can severely distort the galaxies' shapes, forming bars, rings or tail-like structures.
At the extreme of interactions are galactic mergers, where the galaxies' relative momenta are insufficient to allow them to pass through each other. Instead, they gradually merge to form a single, larger galaxy. Mergers can result in significant changes to the galaxies' original morphology. If one of the galaxies is much more massive than the other, the result is known as cannibalism, where the more massive galaxy remains relatively undisturbed and the smaller one is torn apart. The Milky Way galaxy is currently in the process of cannibalizing the Sagittarius Dwarf Elliptical Galaxy and the Canis Major Dwarf Galaxy.
Starburst
Stars are created within galaxies from a reserve of cold gas that forms giant molecular clouds. Some galaxies have been observed to form stars at an exceptional rate, which is known as a starburst. Were they to continue doing so, they would consume their reserve of gas in a time span shorter than the galaxy's lifespan. Hence starburst activity usually lasts only about ten million years, a relatively brief period in a galaxy's history. Starburst galaxies were more common during the universe's early history, but still contribute an estimated 15% to total star production.
Starburst galaxies are characterized by dusty concentrations of gas and the appearance of newly formed stars, including massive stars that ionize the surrounding clouds to create H II regions. These stars produce supernova explosions, creating expanding remnants that interact powerfully with the surrounding gas. These outbursts trigger a chain reaction of star-building that spreads throughout the gaseous region. Only when the available gas is nearly consumed or dispersed does the activity end.
Starbursts are often associated with merging or interacting galaxies. The prototype example of such a starburst-forming interaction is M82, which experienced a close encounter with the larger M81. Irregular galaxies often exhibit spaced knots of starburst activity.
Radio galaxy
A radio galaxy is a galaxy with giant regions of radio emission extending well beyond its visible structure. These energetic radio lobes are powered by jets from its active galactic nucleus. Radio galaxies are classified according to the Fanaroff–Riley classification. The FR I class has lower radio luminosity and exhibits structures which are more elongated; the FR II class has higher radio luminosity. The correlation of radio luminosity and structure suggests that the sources in these two types of galaxies may differ.
Radio galaxies can also be classified as giant radio galaxies (GRGs), whose radio emissions can extend to scales of megaparsecs (3.26 million light-years). Alcyoneus is an FR II class low-excitation radio galaxy which has the largest observed radio emission, with lobed structures spanning 5 megaparsecs (16×10^6 ly). For comparison, another similarly sized giant radio galaxy is 3C 236, with lobes 15 million light-years across. Radio emissions, however, are not always considered part of the main galaxy itself.
A giant radio galaxy is a special class of objects characterized by the presence of radio lobes generated by relativistic jets powered by the central galaxy's supermassive black hole. Giant radio galaxies are different from ordinary radio galaxies in that they can extend to much larger scales, reaching upwards to several megaparsecs across, far larger than the diameters of their host galaxies.
A "normal" radio galaxy do not have a source that is a supermassive black hole or monster neutron star; instead the source is synchrotron radiation from relativistic electrons accelerated by supernova. These sources are comparatively short lived, making the radio spectrum from normal radio galaxies an especially good way to study star formation.
Active galaxy
Some observable galaxies are classified as "active" if they contain an active galactic nucleus (AGN). A significant portion of the galaxy's total energy output is emitted by the active nucleus instead of its stars, dust and interstellar medium. There are multiple classification and naming schemes for AGNs, but those in the lower ranges of luminosity are called Seyfert galaxies, while those with luminosities much greater than that of the host galaxy are known as quasi-stellar objects or quasars. Models of AGNs suggest that a significant fraction of their light is shifted to far-infrared frequencies because optical and UV emission in the nucleus is absorbed and re-emitted by the dust and gas surrounding it.
The standard model for an active galactic nucleus is based on an accretion disc that forms around a supermassive black hole (SMBH) at the galaxy's core region. The radiation from an active galactic nucleus results from the gravitational energy of matter as it falls toward the black hole from the disc. The AGN's luminosity depends on the SMBH's mass and the rate at which matter falls onto it.
In about 10% of these galaxies, a diametrically opposed pair of energetic jets ejects particles from the galaxy core at velocities close to the speed of light. The mechanism for producing these jets is not well understood.
Seyfert galaxy
Seyfert galaxies are one of the two largest groups of active galaxies, along with quasars. They have quasar-like nuclei (very luminous, distant and bright sources of electromagnetic radiation) with very high surface brightnesses; but unlike quasars, their host galaxies are clearly detectable. Seen through a telescope, a Seyfert galaxy appears like an ordinary galaxy with a bright star superimposed atop the core. Seyfert galaxies are divided into two principal subtypes based on the frequencies observed in their spectra.
Quasar
Quasars are the most energetic and distant members of active galactic nuclei. Extremely luminous, they were first identified as high-redshift sources of electromagnetic energy, including radio waves and visible light, that appeared more similar to stars than to extended sources like galaxies. Their luminosity can be 100 times that of the Milky Way. The nearest known quasar, Markarian 231, is about 581 million light-years from Earth, while others have been discovered as far away as UHZ1, roughly 13.2 billion light-years distant. Quasars are noteworthy for providing the first demonstration that gravity can act as a lens for light.
Other AGNs
Blazars are believed to be active galaxies with a relativistic jet pointed in the direction of Earth. A radio galaxy emits radio frequencies from relativistic jets. A unified model of these types of active galaxies explains their differences based on the observer's position.
Possibly related to active galactic nuclei (as well as starburst regions) are low-ionization nuclear emission-line regions (LINERs). The emission from LINER-type galaxies is dominated by weakly ionized elements. The excitation sources for the weakly ionized lines include post-AGB stars, AGN, and shocks. Approximately one-third of nearby galaxies are classified as containing LINER nuclei.
Luminous infrared galaxy
Luminous infrared galaxies (LIRGs) are galaxies with luminosities, the measurement of electromagnetic power output, above 10^11 L☉ (solar luminosities). In most cases, most of their energy comes from large numbers of young stars which heat surrounding dust, which reradiates the energy in the infrared. Luminosity high enough to be a LIRG requires a star formation rate of at least 18 M☉ yr^−1. Ultra-luminous infrared galaxies (ULIRGs) are at least ten times more luminous still and form stars at rates >180 M☉ yr^−1. Many LIRGs also emit radiation from an AGN. Infrared galaxies emit more energy in the infrared than at all other wavelengths combined, with peak emission typically at wavelengths of 60 to 100 microns. LIRGs are believed to be created from the strong interaction and merger of spiral galaxies. While uncommon in the local universe, LIRGs and ULIRGs were more prevalent when the universe was younger.
Physical diameters
Galaxies do not have a definite boundary by their nature, and are characterized by a gradually decreasing stellar density as a function of increasing distance from their center, making measurements of their true extents difficult. Nevertheless, astronomers over the past few decades have developed several criteria for defining the sizes of galaxies.
Angular diameter
As early as the time of Edwin Hubble in 1936, there have been attempts to characterize the diameters of galaxies. The earliest efforts were based on the observed angle subtended by the galaxy and its estimated distance, leading to an angular diameter (also called "metric diameter").
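The size obtained this way follows from the small-angle relation between angular size, distance, and physical diameter. The sketch below applies that relation; the distance and angular size used are illustrative assumptions, not catalogued values for any particular galaxy.

```python
import math

def physical_diameter_kpc(distance_kpc, angular_size_arcmin):
    """Small-angle approximation: diameter ~ distance * angle (in radians)."""
    angle_rad = math.radians(angular_size_arcmin / 60.0)
    return distance_kpc * angle_rad

# Example: a galaxy subtending 3 arcminutes at an assumed distance of 780 kpc.
print(round(physical_diameter_kpc(780.0, 3.0), 2), "kpc")
```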
Isophotal diameter
The isophotal diameter is a conventional way of measuring a galaxy's size based on its apparent surface brightness. Isophotes are curves in a diagram, such as an image of a galaxy, that join points of equal brightness, and they are useful in defining the extent of the galaxy. The apparent brightness flux of a galaxy is measured in units of magnitudes per square arcsecond (mag/arcsec^2; sometimes expressed as mag arcsec^−2), which defines the brightness depth of the isophote. To illustrate how this unit works, a typical galaxy has a brightness flux of 18 mag/arcsec^2 at its central region. This brightness is equivalent to the light of an 18th-magnitude hypothetical point object (like a star) being spread out evenly over one square arcsecond of sky. The isophotal diameter is typically defined as the region enclosing all the light down to 25 mag/arcsec^2 in the blue B-band, which is then referred to as the D25 standard.
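To make the convention concrete, the following sketch assumes a simple exponential disk whose surface brightness follows mu(r) = mu0 + 2.5·log10(e)·(r/h), and solves for the radius at which mu reaches the 25 mag/arcsec^2 isophote. The central surface brightness mu0 and scale length h are illustrative assumptions, not values for a real galaxy.

```python
import math

def isophotal_radius(mu0=18.0, scale_length_kpc=3.0, mu_limit=25.0):
    """Radius (kpc) where an exponential disk mu(r) = mu0 + 1.086*(r/h)
    reaches the limiting isophote mu_limit (mag/arcsec^2)."""
    k = 2.5 * math.log10(math.e)          # about 1.086 mag per scale length
    return (mu_limit - mu0) / k * scale_length_kpc

r25 = isophotal_radius()
print("R25 ~", round(r25, 1), "kpc; D25 ~", round(2 * r25, 1), "kpc")
```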
Effective radius (half-light) and its variations
The half-light radius (also known as the effective radius, Re) is a measure based on the galaxy's overall brightness flux. It is the radius within which half (50%) of the galaxy's total brightness flux is emitted. This was first proposed by Gérard de Vaucouleurs in 1948. The choice of 50% was arbitrary, but proved useful in further work by R. A. Fish in 1963, who established a luminosity concentration law relating the brightnesses of elliptical galaxies to their respective Re, and by José Luis Sérsic in 1968, who defined a mass-radius relation in galaxies.
In defining Re, it is necessary to capture the galaxy's overall brightness flux. A method employed by Bershady in 2000 suggests measuring twice the radius at which the local flux, measured at an arbitrarily chosen radius, divided by the overall average flux equals 0.2. Using the half-light radius allows a rough estimate of a galaxy's size, but is not particularly helpful in determining its morphology.
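As a minimal sketch of the half-light idea, assuming a discretized radial profile (the exponential disk below is an illustrative choice, not a fitted model), the effective radius is simply the radius that encloses 50% of the summed flux.

```python
import math

def half_light_radius(radii, fluxes_in_annuli, fraction=0.5):
    """Return the radius enclosing `fraction` of the total flux,
    given annular fluxes ordered from the centre outward."""
    total = sum(fluxes_in_annuli)
    running = 0.0
    for r, f in zip(radii, fluxes_in_annuli):
        running += f
        if running >= fraction * total:
            return r
    return radii[-1]

# Illustrative exponential disk sampled in thin annuli: I(r) = exp(-r/h), annulus flux ~ r*I(r)*dr.
h = 3.0
radii = [0.05 * i for i in range(1, 1000)]
annuli = [r * math.exp(-r / h) for r in radii]
print("Re ~", round(half_light_radius(radii, annuli), 2))   # about 1.68 scale lengths for this profile
```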
Variations of this method exist. In particular, in the ESO-Uppsala Catalogue of Galaxies values of 50%, 70%, and 90% of the total blue light (the light detected through a B-band specific filter) had been used to calculate a galaxy's diameter.
Petrosian magnitude
First described by Vahe Petrosian in 1976, a modified version of this method has been used by the Sloan Digital Sky Survey (SDSS). It employs a mathematical model of a galaxy whose radius is determined by the azimuthally (horizontally) averaged profile of its brightness flux. In particular, the SDSS employed the Petrosian magnitude in the R-band (658 nm, in the red part of the visible spectrum) to capture as much of a galaxy's brightness flux as possible while counteracting the effects of background noise. For a galaxy with an exponential brightness profile, this approach is expected to capture all of its brightness flux, and about 80% for galaxies whose profiles follow de Vaucouleurs's law.
Petrosian magnitudes have the advantage of being redshift and distance independent, allowing the measurement of the galaxy's apparent size since the Petrosian radius is defined in terms of the galaxy's overall luminous flux.
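A sketch of this kind of ratio criterion is shown below; it finds the radius at which the local surface brightness divided by the mean surface brightness interior to that radius falls to a chosen threshold. The 0.2 threshold and the exponential profile are illustrative assumptions for demonstration, and real survey pipelines work with azimuthally averaged annuli and their own parameter choices.

```python
import math

def petrosian_like_radius(profile, ratio=0.2, r_max=50.0, dr=0.01):
    """Find r where profile(r) / <mean profile inside r> first drops to `ratio`.
    `profile` is a function returning surface brightness at radius r."""
    enclosed = 0.0                      # running integral of 2*pi*r*I(r) dr
    r = dr
    while r < r_max:
        enclosed += 2 * math.pi * r * profile(r) * dr
        mean_inside = enclosed / (math.pi * r * r)
        if profile(r) / mean_inside <= ratio:
            return r
        r += dr
    return None

def exp_disk(r, h=3.0):
    return math.exp(-r / h)

print("Petrosian-style radius ~", round(petrosian_like_radius(exp_disk), 2))
```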
A critique of an earlier version of this method was issued by the Infrared Processing and Analysis Center, noting that the method can produce errors of up to 10% in the values compared with using the isophotal diameter. The use of Petrosian magnitudes also has the disadvantage of missing most of the light outside the Petrosian aperture, which is defined relative to the galaxy's overall brightness profile, especially for elliptical galaxies, with higher signal-to-noise ratios at greater distances and redshifts. A correction for this method was issued by Graham et al. in 2005, based on the assumption that galaxies follow Sérsic's law.
Near-infrared method
This method has been used by 2MASS as an adaptation of the previously used isophotal measurement methods. Since 2MASS operates in the near infrared, which has the advantage of being able to recognize dimmer, cooler, and older stars, it takes a different approach from other methods, which normally use the B-filter. The details of the method used by 2MASS are described thoroughly in a document by Jarrett et al., with the survey measuring several parameters.
The standard aperture ellipse (area of detection) is defined by the infrared isophote at the Ks band (roughly 2.2 μm wavelength) of 20 mag/arcsec^2. The overall luminous flux of the galaxy has been gathered by at least four methods: a circular aperture extending 7 arcseconds from the center; an isophote at 20 mag/arcsec^2; a "total" aperture defined by the radial light distribution that covers the supposed extent of the galaxy; and the Kron aperture (defined as 2.5 times the first-moment radius, an integration of the flux of the "total" aperture).
Larger-scale structures
Deep-sky surveys show that galaxies are often found in groups and clusters. Solitary galaxies that have not significantly interacted with other galaxies of comparable mass in the past few billion years are relatively scarce. Only about 5% of the galaxies surveyed are isolated in this sense. However, they may have interacted and even merged with other galaxies in the past, and may still be orbited by smaller satellite galaxies.
On the largest scale, the universe is continually expanding, resulting in an average increase in the separation between individual galaxies (see Hubble's law). Associations of galaxies can overcome this expansion on a local scale through their mutual gravitational attraction. These associations formed early, as clumps of dark matter pulled their respective galaxies together. Nearby groups later merged to form larger-scale clusters. This ongoing merging process, as well as an influx of infalling gas, heats the intergalactic gas in a cluster to very high temperatures of 30–100 megakelvins. About 70–80% of a cluster's mass is in the form of dark matter, with 10–30% consisting of this heated gas and the remaining few percent in the form of galaxies.
Most galaxies are gravitationally bound to a number of other galaxies. These form a fractal-like hierarchical distribution of clustered structures, with the smallest such associations being termed groups. A group of galaxies is the most common type of galactic cluster; these formations contain the majority of galaxies (as well as most of the baryonic mass) in the universe. To remain gravitationally bound to such a group, each member galaxy must have a sufficiently low velocity to prevent it from escaping (see Virial theorem). If there is insufficient kinetic energy, however, the group may evolve into a smaller number of galaxies through mergers.
Clusters of galaxies consist of hundreds to thousands of galaxies bound together by gravity. Clusters of galaxies are often dominated by a single giant elliptical galaxy, known as the brightest cluster galaxy, which, over time, tidally destroys its satellite galaxies and adds their mass to its own.
Superclusters contain tens of thousands of galaxies, which are found in clusters, groups and sometimes individually. At the supercluster scale, galaxies are arranged into sheets and filaments surrounding vast empty voids. Above this scale, the universe appears to be the same in all directions (isotropic and homogeneous), though this notion has been challenged in recent years by numerous findings of large-scale structures that appear to exceed this scale. The Hercules–Corona Borealis Great Wall, the largest structure found so far, is 10 billion light-years (three gigaparsecs) in length.
The Milky Way galaxy is a member of an association named the Local Group, a relatively small group of galaxies that has a diameter of approximately one megaparsec. The Milky Way and the Andromeda Galaxy are the two brightest galaxies within the group; many of the other member galaxies are dwarf companions of these two. The Local Group itself is a part of a cloud-like structure within the Virgo Supercluster, a large, extended structure of groups and clusters of galaxies centered on the Virgo Cluster. In turn, the Virgo Supercluster is a portion of the Laniakea Supercluster.
Magnetic fields
Galaxies have magnetic fields of their own. A galaxy's magnetic field influences its dynamics in multiple ways, including affecting the formation of spiral arms and transporting angular momentum in gas clouds. The latter effect is particularly important, as it is a necessary factor for the gravitational collapse of those clouds, and thus for star formation.
The typical average equipartition strength for spiral galaxies is about 10 μG (microgauss) or 1 nT (nanotesla). By comparison, the Earth's magnetic field has an average strength of about 0.3 G (gauss) or 30 μT (microtesla). Radio-faint galaxies like M 31 and M 33, the Milky Way's neighbors, have weaker fields (about 5 μG), while gas-rich galaxies with high star-formation rates, like M 51, M 83 and NGC 6946, have 15 μG on average. In prominent spiral arms, the field strength can be up to 25 μG, in regions where cold gas and dust are also concentrated. The strongest total equipartition fields (50–100 μG) were found in starburst galaxies, for example in M 82 and the Antennae, and in nuclear starburst regions, such as the centers of NGC 1097 and other barred galaxies.
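The unit comparisons above follow directly from the conversion 1 G = 10^-4 T; the small sketch below just restates that arithmetic (the example field strengths are the ones quoted in the text).

```python
GAUSS_TO_TESLA = 1e-4

def microgauss_to_nanotesla(b_microgauss):
    """Convert a field strength from microgauss to nanotesla."""
    return b_microgauss * 1e-6 * GAUSS_TO_TESLA * 1e9   # uG -> G -> T -> nT

print(microgauss_to_nanotesla(10))      # typical spiral galaxy: 10 uG = 1.0 nT
print(0.3 * GAUSS_TO_TESLA * 1e6)       # Earth's field: 0.3 G = 30.0 uT
```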
Formation and evolution
Formation
Current models of the formation of galaxies in the early universe are based on the ΛCDM model. About 300,000 years after the Big Bang, atoms of hydrogen and helium began to form, in an event called recombination. Nearly all the hydrogen was neutral (non-ionized) and readily absorbed light, and no stars had yet formed. As a result, this period has been called the "dark ages". It was from density fluctuations (or anisotropic irregularities) in this primordial matter that larger structures began to appear. As a result, masses of baryonic matter started to condense within cold dark matter halos. These primordial structures allowed gas to condense into protogalaxies, large-scale gas clouds that were precursors to the first galaxies.
As gas falls into the gravitational wells of the dark matter halos, its pressure and temperature rise. To condense further, the gas must radiate energy. This process was slow in the early universe, which was dominated by hydrogen atoms and molecules, inefficient radiators compared to heavier elements. As clumps of gas aggregate, forming rotating disks, temperatures and pressures continue to increase. Some places within the disk reach densities high enough to form stars.
Once protogalaxies began to form and contract, the first halo stars, called Population III stars, appeared within them. These were composed of primordial gas, almost entirely of hydrogen and helium.
Emission from the first stars heated the remaining gas, helping to trigger additional star formation; the ultraviolet emission from the first generation of stars re-ionized the surrounding neutral hydrogen in expanding spheres that eventually reached the entire universe, an event called reionization. The most massive stars collapsed in violent supernova explosions, releasing heavy elements ("metals") into the interstellar medium. This metal content was incorporated into Population II stars.
Theoretical models for early galaxy formation have been verified and informed by a large number and variety of sophisticated astronomical observations. Photometric observations generally need spectroscopic confirmation due to the large number of mechanisms that can introduce systematic errors. For example, a high-redshift (z ~ 16) photometric observation by the James Webb Space Telescope (JWST) was later corrected to be closer to z ~ 5.
Nevertheless, confirmed observations from the JWST and other observatories are accumulating, allowing systematic comparison of early galaxies to predictions of theory.
Evidence for individual Population III stars in early galaxies is even more challenging. Even seemingly confirmed spectroscopic evidence may turn out to have other origins. For example, astronomers reported He II emission as evidence for Population III stars in the Cosmos Redshift 7 galaxy, at a redshift of 6.60. Subsequent observations found metallic emission lines (O III) that are inconsistent with such primordial stars.
Evolution
Once stars begin to form, emit radiation, and in some cases explode, the process of galaxy formation becomes very complex, involving interactions between the forces of gravity, radiation, and thermal energy. Many details are still poorly understood.
Within a billion years of a galaxy's formation, key structures begin to appear. Globular clusters, the central supermassive black hole, and a galactic bulge of metal-poor Population II stars form. The creation of a supermassive black hole appears to play a key role in actively regulating the growth of galaxies by limiting the total amount of additional matter added. During this early epoch, galaxies undergo a major burst of star formation.
During the following two billion years, the accumulated matter settles into a galactic disc. A galaxy will continue to absorb infalling material from high-velocity clouds and dwarf galaxies throughout its life. This matter is mostly hydrogen and helium. The cycle of stellar birth and death slowly increases the abundance of heavy elements, eventually allowing the formation of planets.
Star formation rates in galaxies depend upon their local environment. Isolated "void" galaxies have the highest rates per stellar mass, "field" galaxies associated with spiral galaxies have lower rates, and galaxies in dense clusters have the lowest rates.
The evolution of galaxies can be significantly affected by interactions and collisions. Mergers of galaxies were common during the early epoch, and the majority of galaxies were peculiar in morphology. Given the distances between the stars, the great majority of stellar systems in colliding galaxies will be unaffected. However, gravitational stripping of the interstellar gas and dust that makes up the spiral arms produces a long train of stars known as tidal tails. Examples of these formations can be seen in NGC 4676 or the Antennae Galaxies.
The Milky Way galaxy and the nearby Andromeda Galaxy are moving toward each other at about 130 km/s, and—depending upon the lateral movements—the two might collide in about five to six billion years. Although the Milky Way has never collided with a galaxy as large as Andromeda before, it has collided and merged with other galaxies in the past. Cosmological simulations indicate that, 11 billion years ago, it merged with a particularly large galaxy that has been labeled the Kraken.
Such large-scale interactions are rare. As time passes, mergers of two systems of equal size become less common. Most bright galaxies have remained fundamentally unchanged for the last few billion years, and the net rate of star formation probably also peaked about ten billion years ago.
Future trends
Spiral galaxies, like the Milky Way, produce new generations of stars as long as they have dense molecular clouds of interstellar hydrogen in their spiral arms. Elliptical galaxies are largely devoid of this gas, and so form few new stars. The supply of star-forming material is finite; once stars have converted the available supply of hydrogen into heavier elements, new star formation will come to an end.
The current era of star formation is expected to continue for up to one hundred billion years, and then the "stellar age" will wind down after about ten trillion to one hundred trillion years (10^13–10^14 years), as the smallest, longest-lived stars in the visible universe, tiny red dwarfs, begin to fade. At the end of the stellar age, galaxies will be composed of compact objects: brown dwarfs, white dwarfs that are cooling or cold ("black dwarfs"), neutron stars, and black holes. Eventually, as a result of gravitational relaxation, all stars will either fall into central supermassive black holes or be flung into intergalactic space as a result of collisions.
Gigabyte (https://en.wikipedia.org/wiki/Gigabyte)
The gigabyte is a multiple of the unit byte for digital information. The prefix giga means 10^9 in the International System of Units (SI). Therefore, one gigabyte is one billion bytes. The unit symbol for the gigabyte is GB.
This definition is used in all contexts of science (especially data science), engineering, business, and many areas of computing, including storage capacities of hard drives, solid-state drives, and tapes, as well as data transmission speeds. However, the term is also used in some fields of computer science and information technology to denote 1,073,741,824 (1024^3 or 2^30) bytes, particularly for sizes of RAM. Thus, some usage of gigabyte has been ambiguous. To resolve this difficulty, IEC 80000-13 clarifies that a gigabyte (GB) is 10^9 bytes and specifies the term gibibyte (GiB) to denote 2^30 bytes. These differences are still readily seen, for example, when a 400 GB drive's capacity is displayed by Microsoft Windows as 372 GB instead of 372 GiB. Analogously, a memory module that is labeled as having the size "1 GB" has one gibibyte (1 GiB) of storage capacity.
In response to litigation over whether the makers of electronic storage devices must conform to Microsoft Windows' use of a binary definition of "GB" instead of the metric/decimal definition, the United States District Court for the Northern District of California rejected that argument, ruling that "the U.S. Congress has deemed the decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce'."
Definition
The term gigabyte has a standard definition of 1000^3 bytes, as well as a discouraged meaning of 1024^3 bytes. The latter binary usage originated as compromise technical jargon for byte multiples that needed to be expressed in a power of 2, but lacked a convenient name. As 1024 (2^10) is approximately 1000 (10^3), roughly corresponding to SI multiples, it was used for binary multiples as well.
In 1998 the International Electrotechnical Commission (IEC) published standards for binary prefixes, requiring that the gigabyte strictly denote 1000^3 bytes and the gibibyte denote 1024^3 bytes. By the end of 2007, the IEC standard had been adopted by the IEEE, EU, and NIST, and in 2009 it was incorporated in the International System of Quantities. Nevertheless, the term gigabyte continues to be widely used with the following two different meanings:
Base 10 (decimal)
1 GB = 1,000,000,000 bytes (= 1000^3 B = 10^9 B)
Based on powers of 10, this definition uses the prefix giga- as defined in the International System of Units (SI). This is the definition recommended by the International Electrotechnical Commission (IEC). It is used in networking contexts and for most storage media, particularly hard drives, flash-based storage, and DVDs, and is also consistent with the other uses of the SI prefix in computing, such as CPU clock speeds or measures of performance. The file manager of Mac OS X version 10.6 and later versions is a notable example of this usage in software, reporting file sizes in decimal units.
Base 2 (binary)
1 GiB = 1,073,741,824 bytes (= 1024^3 B = 2^30 B).
The binary definition uses powers of the base 2, as does the architectural principle of binary computers.
This usage is widely promulgated by some operating systems, such as Microsoft Windows in reference to computer memory (e.g., RAM). This definition is synonymous with the unambiguous unit gibibyte.
Consumer confusion
Since the first disk drive, the IBM 350, disk drive manufacturers have expressed hard drive capacities using decimal prefixes. With the advent of gigabyte-range drive capacities, manufacturers labelled many consumer hard drive, solid-state drive and USB flash drive capacities in certain size classes expressed in decimal gigabytes, such as "500 GB". The exact capacity of a given drive model is usually slightly larger than the class designation. Practically all manufacturers of hard disk drives and flash-memory disk devices continue to define one gigabyte as 1,000,000,000 bytes, which is displayed on the packaging. Some operating systems, such as Mac OS X, Ubuntu, and Debian, express hard drive capacity or file size using decimal multipliers, while others, such as Microsoft Windows, report size using binary multipliers. This discrepancy causes confusion, as a disk with an advertised capacity of, for example, 400 GB (meaning 400,000,000,000 bytes, equal to about 372 GiB) might be reported by the operating system as "372 GB".
For RAM, the JEDEC memory standards use IEEE 100 nomenclature, which quotes the gigabyte as 1,073,741,824 bytes (2^30 bytes).
The difference between units based on decimal and binary prefixes increases as a semi-logarithmic (linear-log) function; for example, the decimal kilobyte value is nearly 98% of the kibibyte, a megabyte is under 96% of a mebibyte, and a gigabyte is just over 93% of a gibibyte. This means that a 300 GB (279 GiB) hard disk might be indicated variously as "300 GB", "279 GB" or "279 GiB", depending on the operating system. As storage sizes increase and larger units are used, these differences become more pronounced.
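The decimal-versus-binary arithmetic above can be checked directly. The sketch below computes the prefix ratios and the reported size of a drive advertised in decimal gigabytes; the 400 GB figure is simply the example used earlier in the article.

```python
def decimal_to_binary_ratio(power):
    """Ratio of the decimal prefix (1000^power) to the binary prefix (1024^power)."""
    return 1000 ** power / 1024 ** power

for name, p in [("kilo/kibi", 1), ("mega/mebi", 2), ("giga/gibi", 3)]:
    print(f"{name}: {decimal_to_binary_ratio(p):.1%}")
# kilo/kibi: 97.7%, mega/mebi: 95.4%, giga/gibi: 93.1%

advertised_gb = 400                              # decimal gigabytes on the label
reported_gib = advertised_gb * 10**9 / 2**30
print(f"{advertised_gb} GB drive is about {reported_gib:.1f} GiB")   # about 372.5 GiB
```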
US lawsuits
A lawsuit decided in 2019, which arose from alleged breach of contract and other claims over the binary and decimal definitions used for "gigabyte", ended in favour of the manufacturers, with courts holding that the legal definition of gigabyte or GB is 1 GB = 1,000,000,000 (10^9) bytes (the decimal definition). Specifically, the courts held that "the U.S. Congress has deemed the decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce' .... The California Legislature has likewise adopted the decimal system for all 'transactions in this state'."
Earlier lawsuits had ended in settlement with no court ruling on the question, such as a lawsuit against drive manufacturer Western Digital. Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity.
Seagate was sued on similar grounds and also settled.
Other contexts
Because of their physical design, the capacity of modern computer random-access memory devices, such as DIMM modules, is always a multiple of a power of 1024. It is thus convenient to use prefixes denoting powers of 1024, known as binary prefixes, in describing them. For example, a memory capacity of 1,073,741,824 bytes (1024^3 B) is conveniently expressed as 1 GiB rather than as 1.074 GB. The former specification is, however, often quoted as "1 GB" when applied to random-access memory.
Software allocates memory in varying degrees of granularity as needed to fulfill data structure requirements, and binary multiples are usually not required. Other computer capacities and rates, like storage hardware size, data transfer rates, clock speeds, operations per second, etc., do not depend on an inherent base, and are usually presented in decimal units. For example, the manufacturer of a "300 GB" hard drive is claiming a capacity of 300,000,000,000 bytes, not 300 × 1024^3 bytes (which would be 322,122,547,200 bytes).
Examples of gigabyte-sized storage
One hour of SDTV video at 2.2 Mbit/s is approximately 1 GB.
Seven minutes of HDTV video at 19.39 Mbit/s is approximately 1 GB.
114 minutes of uncompressed CD-quality audio at 1.4 Mbit/s is approximately 1 GB.
A single-layer DVD+R disc can hold about 4.7 GB.
A dual-layered DVD+R disc can hold about 8.5 GB.
A single-layer Blu-ray can hold about 25 GB.
The largest Nintendo Switch cartridge available on the market holds about 32 GB.
A dual-layered Blu-ray can hold about 50 GB.
A triple-layered Ultra HD Blu-ray can hold about 100 GB.
Unicode character
The "gigabyte" symbol is encoded by Unicode at code point .
Galaxy groups and clusters (https://en.wikipedia.org/wiki/Galaxy%20groups%20and%20clusters)
Galaxy groups and clusters are the largest known gravitationally bound objects to have arisen thus far in the process of cosmic structure formation. They form the densest part of the large-scale structure of the Universe. In models for the gravitational formation of structure with cold dark matter, the smallest structures collapse first and eventually build the largest structures, clusters of galaxies. Clusters thus formed relatively recently, between 10 billion years ago and now. Groups and clusters may contain ten to thousands of individual galaxies. The clusters themselves are often associated with larger, non-gravitationally bound groups called superclusters.
Groups of galaxies
Groups of galaxies are the smallest aggregates of galaxies. They typically contain no more than 50 galaxies in a diameter of 1 to 2 megaparsecs (Mpc) (see 10^22 m for distance comparisons). Their mass is approximately 10^13 solar masses. The spread of velocities for the individual galaxies is about 150 km/s. However, this definition should be used as a guide only, as larger and more massive galaxy systems are sometimes classified as galaxy groups. Groups are the most common structures of galaxies in the universe, comprising at least 50% of the galaxies in the local universe. Groups have a mass range between those of the very large elliptical galaxies and clusters of galaxies.
Our own galaxy, the Milky Way, is contained in the Local Group of more than 54 galaxies.
In July 2017, S. Paul, R. S. John et al. defined clear distinguishing parameters for classifying galaxy aggregations as ‘galaxy groups’ and ‘clusters’ on the basis of scaling laws that they followed. According to this paper, galaxy aggregations less massive than 8 × 10^13 solar masses are classified as galaxy groups.
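A minimal sketch of the mass-based cut described above; the threshold comes from the text, while the helper name and example masses are illustrative assumptions.

```python
GROUP_CLUSTER_THRESHOLD_MSUN = 8e13   # solar masses, per Paul, John et al. (2017)

def classify_aggregation(mass_msun):
    """Classify a galaxy aggregation by total mass."""
    return "galaxy group" if mass_msun < GROUP_CLUSTER_THRESHOLD_MSUN else "galaxy cluster"

print(classify_aggregation(1e13))   # typical group mass -> galaxy group
print(classify_aggregation(5e14))   # massive system     -> galaxy cluster
```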
Clusters of galaxies
Clusters are larger than groups, although there is no sharp dividing line between the two. When observed visually, clusters appear to be collections of galaxies held together by mutual gravitational attraction. However, their velocities are too large for them to remain gravitationally bound by their mutual attractions, implying the presence of either an additional invisible mass component or an additional attractive force besides gravity. X-ray studies have revealed the presence of large amounts of intergalactic gas known as the intracluster medium. This gas is very hot, between 10^7 K and 10^8 K, and hence emits X-rays in the form of bremsstrahlung and atomic line emission.
The total mass of the gas is greater than that of the galaxies by roughly a factor of two. However, this is still not enough mass to keep the galaxies in the cluster. Since this gas is in approximate hydrostatic equilibrium with the overall cluster gravitational field, the total mass distribution can be determined. It turns out the total mass deduced from this measurement is approximately six times larger than the mass of the galaxies or the hot gas. The missing component is known as dark matter and its nature is unknown. In a typical cluster perhaps only 5% of the total mass is in the form of galaxies, maybe 10% in the form of hot X-ray emitting gas and the remainder is dark matter. Brownstein and Moffat use a theory of modified gravity to explain X-ray cluster masses without dark matter. Observations of the Bullet Cluster are the strongest evidence for the existence of dark matter; however, Brownstein and Moffat have shown that their modified gravity theory can also account for the properties of the cluster.
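The mass determination mentioned above rests on the hydrostatic-equilibrium relation M(<r) = -(k_B T r / (G μ m_p)) (dln n/dln r + dln T/dln r). The sketch below evaluates it for an assumed isothermal gas with a power-law density profile; the temperature, logarithmic slope, and radius are illustrative placeholders, not measurements of any particular cluster.

```python
# Physical constants (SI)
G = 6.674e-11        # m^3 kg^-1 s^-2
K_B = 1.381e-23      # J/K
M_P = 1.673e-27      # kg, proton mass
MU = 0.6             # mean molecular weight of ionized intracluster gas (assumed)
MSUN = 1.989e30      # kg
MPC = 3.086e22       # m

def hydrostatic_mass_msun(T_kelvin, r_mpc, dln_n_dln_r, dln_T_dln_r=0.0):
    """Cluster mass within radius r, assuming the hot gas is in hydrostatic equilibrium."""
    r = r_mpc * MPC
    mass_kg = -(K_B * T_kelvin * r) / (G * MU * M_P) * (dln_n_dln_r + dln_T_dln_r)
    return mass_kg / MSUN

# Isothermal gas at 5e7 K with density falling as r^-2, evaluated at 1 Mpc (all assumed).
print(f"{hydrostatic_mass_msun(5e7, 1.0, -2.0):.2e} solar masses")
```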
Observational methods
Clusters of galaxies have been found in surveys by a number of observational techniques and have been studied in detail using many methods:
Optical or infrared: The individual galaxies of clusters can be studied through optical or infrared imaging and spectroscopy. Galaxy clusters are found by optical or infrared telescopes by searching for overdensities, and then confirmed by finding several galaxies at a similar redshift. Infrared searches are more useful for finding more distant (higher redshift) clusters.
X-ray: The hot plasma emits X-rays that can be detected by X-ray telescopes. The cluster gas can be studied using both X-ray imaging and X-ray spectroscopy. Clusters are quite prominent in X-ray surveys and along with AGN are the brightest X-ray emitting extragalactic objects.
Radio: A number of diffuse structures emitting at radio frequencies have been found in clusters. Groups of radio sources (that may include diffuse structures or AGN) have been used as tracers of cluster location. At high redshift imaging around individual radio sources (in this case AGN) has been used to detect proto-clusters (clusters in the process of forming).
Sunyaev-Zel'dovich effect: The hot electrons in the intracluster medium scatter radiation from the cosmic microwave background through inverse Compton scattering. This produces a "shadow" in the observed cosmic microwave background at some radio frequencies.
Gravitational lensing: Clusters of galaxies contain enough matter to distort the observed orientations of galaxies behind them. The observed distortions can be used to model the distribution of dark matter in the cluster.
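The Sunyaev-Zel'dovich decrement mentioned in the list above is conventionally quantified by the Compton y-parameter; the definition below is supplied for context and is not part of the original text:

```latex
y = \frac{\sigma_{\mathrm{T}}}{m_{\mathrm{e}} c^{2}} \int n_{\mathrm{e}}\, k_{\mathrm{B}} T_{\mathrm{e}}\, \mathrm{d}l
```

where σ_T is the Thomson cross-section and n_e and T_e are the electron density and temperature integrated along the line of sight through the intracluster medium.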
Temperature and density
Clusters of galaxies are the most recent and most massive objects to have arisen in the hierarchical structure formation of the Universe, and the study of clusters reveals how galaxies form and evolve. Clusters have two important properties: their masses are large enough to retain any energetic gas ejected from member galaxies, and the thermal energy of the gas within the cluster is observable within the X-ray bandpass. The observed state of gas within a cluster is determined by a combination of shock heating during accretion, radiative cooling, and thermal feedback triggered by that cooling. The density, temperature, and substructure of the intracluster X-ray gas therefore represent the entire thermal history of cluster formation. To better understand this thermal history one needs to study the entropy of the gas, because entropy is the quantity most directly changed by increasing or decreasing the thermal energy of the intracluster gas.
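The "entropy" referred to above is, in X-ray cluster studies, usually a proxy quantity rather than the thermodynamic entropy itself. A common working definition (an assumption of this sketch, not stated in the original text) uses the gas temperature T and electron density n_e:

```latex
K = \frac{k_{\mathrm{B}} T}{n_{\mathrm{e}}^{2/3}}
```

K is unchanged by adiabatic compression or expansion, rises when heat is added (for example by accretion shocks or feedback), and falls when the gas cools radiatively, which is why it traces the thermal history of the cluster.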
List of groups and clusters
| Physical sciences | Basics_3 | null |
12572 | https://en.wikipedia.org/wiki/Grus%20%28constellation%29 | Grus (constellation) | Grus (, or colloquially ) is a constellation in the southern sky. Its name is Latin for the crane, a type of bird. It is one of twelve constellations conceived by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. Grus first appeared on a celestial globe published in 1598 in Amsterdam by Plancius and Jodocus Hondius and was depicted in Johann Bayer's star atlas Uranometria of 1603. French explorer and astronomer Nicolas-Louis de Lacaille gave Bayer designations to its stars in 1756, some of which had been previously considered part of the neighbouring constellation Piscis Austrinus. The constellations Grus, Pavo, Phoenix and Tucana are collectively known as the "Southern Birds".
The constellation's brightest star, Alpha Gruis, is also known as Alnair and appears as a 1.7-magnitude blue-white star. Beta Gruis is a red giant variable star with a minimum magnitude of 2.3 and a maximum magnitude of 2.0. Six star systems have been found to have planets: the red dwarf Gliese 832 is one of the closest stars to Earth to have a planetary system. Another—WASP-95—has a planet that orbits every two days. Deep-sky objects found in Grus include the planetary nebula IC 5148, also known as the Spare Tyre Nebula, and a group of four interacting galaxies known as the Grus Quartet.
History
The stars that form Grus were originally considered part of the neighbouring constellation Piscis Austrinus (the southern fish), with Gamma Gruis seen as part of the fish's tail. The stars were first defined as a separate constellation by the astronomer Petrus Plancius, who created twelve new constellations based on the observations of the southern sky by the Dutch explorers Pieter Dirkszoon Keyser and Frederick de Houtman, who had sailed on the first Dutch trading expedition, known as the Eerste Schipvaart, to the East Indies. Grus first appeared on a 35-centimetre-diameter celestial globe published in 1598 in Amsterdam by Plancius with Jodocus Hondius. Its first depiction in a celestial atlas was in the German cartographer Johann Bayer's Uranometria of 1603. De Houtman included it in his southern star catalogue the same year under the Dutch name Den Reygher, "The Heron", but Bayer followed Plancius and Hondius in using Grus.
An alternative name for the constellation, Phoenicopterus (Latin "flamingo"), was used briefly during the early 17th century, seen in the 1605 work Cosmographiae Generalis by Paul Merula of Leiden University and a c. 1625 globe by Dutch globe maker Pieter van den Keere. Astronomer Ian Ridpath has reported the symbolism likely came from Plancius originally, who had worked with both of these people. Grus and the nearby constellations Phoenix, Tucana and Pavo are collectively called the "Southern Birds".
The stars that correspond to Grus were generally too far south to be seen from China. In Chinese astronomy, Gamma and Lambda Gruis may have been included in the tub-shaped asterism Bàijiù, along with stars from Piscis Austrinus. In Central Australia, the Arrernte and Luritja people living on a mission in Hermannsburg viewed the sky as divided between them, east of the Milky Way representing Arrernte camps and west denoting Luritja camps. Alpha and Beta Gruis, along with Fomalhaut, Alpha Pavonis and the stars of Musca, were all claimed by the Arrernte.
Characteristics
Grus is bordered by Piscis Austrinus to the north, Sculptor to the northeast, Phoenix to the east, Tucana to the south, Indus to the southwest, and Microscopium to the west. Bayer straightened the tail of Piscis Austrinus to make way for Grus in his Uranometria. Covering 366 square degrees, it ranks 45th of the 88 modern constellations in size and covers 0.887% of the night sky. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Gru". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined as a polygon of 6 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −36.31° and −56.39°. Grus is located too far south to be seen by observers in the British Isles and the northern United States, though it can easily be seen from Florida or San Diego; the whole constellation is visible to observers south of latitude 33°N.
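The area and visibility figures quoted above can be checked with simple arithmetic. The sketch below is illustrative only: it assumes the celestial sphere covers about 41,253 square degrees and uses the rule of thumb that a point at declination δ clears the horizon for observers at latitudes below 90° + δ.

```python
import math

# Total area of the celestial sphere: 4*pi steradians, converted to
# square degrees with (180/pi)^2 deg^2 per steradian (~41,253 deg^2).
total_sky_deg2 = 4 * math.pi * (180 / math.pi) ** 2

grus_area_deg2 = 366.0  # square degrees, as quoted in the text
print(f"Fraction of sky: {100 * grus_area_deg2 / total_sky_deg2:.3f}%")  # ~0.887%

# Southernmost declination of Grus from the Delporte boundaries.
southern_limit_deg = -56.39
# The whole constellation rises above the horizon only for observers south of:
print(f"Fully visible south of latitude {90 + southern_limit_deg:.1f} N")  # ~33.6 N
```

Both results agree with the figures in the paragraph above (0.887% of the sky, and visibility of the whole constellation south of roughly latitude 33°N).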
Features
Stars
Keyser and de Houtman assigned twelve stars to the constellation. Bayer depicted Grus on his chart, but did not assign its stars Bayer designations. French explorer and astronomer Nicolas-Louis de Lacaille labelled them Alpha to Phi in 1756 with some omissions. In 1879, American astronomer Benjamin Gould added Kappa, Nu, Omicron and Xi, which had all been catalogued by Lacaille but not given Bayer designations. Lacaille considered them too faint, while Gould thought otherwise. Xi Gruis had originally been placed in Microscopium. Conversely, Gould dropped Lacaille's Sigma as he thought it was too dim.
Grus has several bright stars. Marking the left wing is Alpha Gruis, a blue-white star of spectral type B6V and apparent magnitude 1.7, around 101 light-years from Earth. Its traditional name, Alnair, means "the bright one" and refers to its status as the brightest star in Grus (although the Arabians saw it as the brightest star in the Fish's tail, as Grus was then depicted). Alnair is around 380 times as luminous and has over 3 times the diameter of the Sun. Lying 5 degrees west of Alnair and marking the Crane's heart is Beta Gruis (proper name Tiaki), a red giant of spectral type M5III. It has a diameter of 0.8 astronomical units (AU) (if placed in the Solar System it would extend to the orbit of Venus) and is located around 170 light-years from Earth. It is a variable star with a minimum magnitude of 2.3 and a maximum magnitude of 2.0. An imaginary line drawn from the Great Square of Pegasus through Fomalhaut leads to Alnair and Beta Gruis.
Lying in the northwest corner of the constellation and marking the crane's eye is Gamma Gruis, a blue-white subgiant of spectral type B8III and magnitude 3.0 lying around 211 light-years from Earth. Also known as Al Dhanab, it has finished fusing its core hydrogen and has begun cooling and expanding, which will see it transform into a red giant.
There are several double stars visible to the naked eye in Grus. Forming a triangle with Alnair and Beta, Delta Gruis is an optical double whose components—Delta1 and Delta2—are separated by 45 arcseconds. Delta1 is a yellow giant of spectral type G7III and magnitude 4.0, 309 light-years from Earth, and may have its own magnitude 12 orange dwarf companion. Delta2 is a red giant of spectral type M4.5III and semiregular variable that ranges between magnitudes 3.99 and 4.2, located 325 light-years from Earth. It has around 3 times the mass and 135 times the diameter of the Sun. Mu Gruis, composed of Mu1 and Mu2, is also an optical double—both stars are yellow giants of spectral type G8III around 2.5 times as massive as the Sun with surface temperatures of around 4900 K. Mu1 is the brighter of the two at magnitude 4.8 located around 275 light-years from Earth, while Mu2 the dimmer at magnitude 5.11 lies 265 light-years distant from Earth. Pi Gruis, an optical double with a variable component, is composed of Pi1 Gruis and Pi2. Pi1 is a semi-regular red giant of spectral type S5, ranging from magnitude 5.31 to 7.01 over a period of 191 days, and is around 532 light-years from Earth. One of the brightest S-class stars to Earth viewers, it has a companion star of apparent magnitude 10.9 with sunlike properties, being a yellow main sequence star of spectral type G0V. The pair make up a likely binary system. Pi2 is a giant star of spectral type F3III-IV located around 130 light-years from Earth, and is often brighter than its companion at magnitude 5.6. Marking the right wing is Theta Gruis, yet another double star, lying 5 degrees east of Delta1 and Delta2.
RZ Gruis is a binary system of apparent magnitude 12.3 with occasional dimming to 13.4, whose components—a white dwarf and main sequence star—are thought to orbit each other roughly every 8.5 to 10 hours. It belongs to the UX Ursae Majoris subgroup of cataclysmic variable star systems, in which material from the donor star is drawn to the white dwarf, forming an accretion disc that remains bright and outshines the two component stars. The system is poorly understood, though the donor star has been calculated to be of spectral type F5V. These stars have spectra very similar to novae that have returned to quiescence after outbursts, yet they have not been observed to have erupted themselves. The American Association of Variable Star Observers recommends watching them for future events. CE Gruis (also known as Grus V-1) is a faint (magnitude 18–21) star system also composed of a white dwarf and donor star; in this case the two are so close they are tidally locked. In such systems, known as polars, material from the donor star does not form an accretion disc around the white dwarf, but rather streams directly onto it.
Six star systems are thought to have planetary systems. Tau1 Gruis is a yellow star of magnitude 6.0 located around 106 light-years away. It may be a main sequence star or be just beginning to depart from the sequence as it expands and cools. In 2002 the star was found to have a planetary companion. HD 215456, HD 213240 and WASP-95 are yellow sunlike stars discovered to have two planets, a planet and a remote red dwarf, and a hot Jupiter, respectively; this last—WASP-95b—completes an orbit round its sun in a mere two days. Gliese 832 is a red dwarf of spectral type M1.5V and apparent magnitude 8.66 located only 16.1 light-years distant; hence it is one of the nearest stars to the Solar System. A Jupiter-like planet—Gliese 832 b—orbiting the red dwarf over a period of 9.4±0.4 years was discovered in 2008. WISE 2220−3628 is a brown dwarf of spectral type Y, and hence one of the coolest star-like objects known. It has been calculated as being around 26 light-years distant from Earth.
In July 2019, astronomers reported finding a star, S5-HVS1, traveling faster than any other star detected so far. The star lies in the constellation Grus in the southern sky, about 29,000 light-years from Earth, and may have been propelled out of the Milky Way galaxy after interacting with Sagittarius A*, the supermassive black hole at the center of the galaxy.
Deep-sky objects
Nicknamed the spare-tyre nebula, IC 5148 is a planetary nebula located around 1 degree west of Lambda Gruis. Around 3000 light-years distant, it is expanding at 50 kilometres a second, one of the fastest rates of expansion of all planetary nebulae.
Northeast of Theta Gruis are four interacting galaxies known as the Grus Quartet. These galaxies are NGC 7552, NGC 7590, NGC 7599, and NGC 7582. The latter three galaxies occupy an area of sky only 10 arcminutes across and are sometimes referred to as the "Grus Triplet," although all four are part of a larger loose group of galaxies called the IC 1459 Grus Group. NGC 7552 and 7582 are exhibiting high starburst activity; this is thought to have arisen because of the tidal forces from interacting. Located on the border of Grus with Piscis Austrinus, IC 1459 is a peculiar E3 giant elliptical galaxy. It has a fast counterrotating stellar core, and shells and ripples in its outer region. The galaxy has an apparent magnitude of 11.9 and is around 80 million light-years distant.
NGC 7424 is a barred spiral galaxy with an apparent magnitude of 10.4, located around 4 degrees west of the Grus Triplet. Approximately 37.5 million light-years distant, it is about 100,000 light-years in diameter, has well defined spiral arms and is thought to resemble the Milky Way. Two ultraluminous X-ray sources and one supernova have been observed in NGC 7424. SN 2001ig was discovered in 2001 and classified as a Type IIb supernova, one that initially showed a weak hydrogen line in its spectrum, but this emission later became undetectable and was replaced by lines of oxygen, magnesium and calcium, as well as other features that resembled the spectrum of a Type Ib supernova. A massive star of spectral type F, A or B is thought to be the surviving binary companion to SN 2001ig, which was believed to have been a Wolf–Rayet star.
Located near Alnair is NGC 7213, a face-on type 1 Seyfert galaxy located approximately 71.7 million light-years from Earth. It has an apparent magnitude of 12.1. Appearing undisturbed in visible light, it shows signs of having undergone a collision or merger when viewed at longer wavelengths, with disturbed patterns of ionized hydrogen including a filament of gas around 64,000 light-years long. It is part of a group of ten galaxies.
NGC 7410 is a spiral galaxy discovered by British astronomer John Herschel during observations at the Cape of Good Hope in October 1834. The galaxy has a visual magnitude of 11.7 and is approximately 122 million light-years distant from Earth.
| Physical sciences | Other | Astronomy |
12581 | https://en.wikipedia.org/wiki/Glass | Glass | Glass is an amorphous (non-crystalline) solid. Because it is often transparent and chemically inert, glass has found widespread practical, technological, and decorative use in window panes, tableware, and optics. Some common objects made of glass are named after the material, e.g., a "glass" for drinking, "glasses" for vision correction, and a "magnifying glass".
Glass is most often formed by rapid cooling (quenching) of the molten form. Some glasses such as volcanic glass are naturally occurring, and obsidian has been used to make arrowheads and knives since the Stone Age. Archaeological evidence suggests glassmaking dates back to at least 3600 BC in Mesopotamia, Egypt, or Syria. The earliest known glass objects were beads, perhaps created accidentally during metalworking or the production of faience, which is a form of pottery using lead glazes.
Due to its ease of formability into any shape, glass has been traditionally used for vessels, such as bowls, vases, bottles, jars and drinking glasses. Soda–lime glass, containing around 70% silica, accounts for around 90% of modern manufactured glass. Glass can be coloured by adding metal salts or painted and printed with vitreous enamels, leading to its use in stained glass windows and other glass art objects.
The refractive, reflective and transmission properties of glass make glass suitable for manufacturing optical lenses, prisms, and optoelectronics materials. Extruded glass fibres have applications as optical fibres in communications networks, thermal insulating material when matted as glass wool to trap air, or in glass-fibre reinforced plastic (fibreglass).
Microscopic structure
The standard definition of a glass (or vitreous solid) is a non-crystalline solid formed by rapid melt quenching. However, the term "glass" is often defined in a broader sense, to describe any non-crystalline (amorphous) solid that exhibits a glass transition when heated towards the liquid state.
Glass is an amorphous solid. Although the atomic-scale structure of glass shares characteristics of the structure of a supercooled liquid, glass exhibits all the mechanical properties of a solid. As in other amorphous solids, the atomic structure of a glass lacks the long-range periodicity observed in crystalline solids. Due to chemical bonding constraints, glasses do possess a high degree of short-range order with respect to local atomic polyhedra. The notion that glass flows to an appreciable extent over extended periods well below the glass transition temperature is not supported by empirical research or theoretical analysis (see viscosity in solids). Though atomic motion at glass surfaces can be observed, and viscosity on the order of 10^17–10^18 Pa·s can be measured in glass, such a high value reinforces the fact that glass would not change shape appreciably even over very long periods of time.
Formation from a supercooled liquid
For melt quenching, if the cooling is sufficiently rapid (relative to the characteristic crystallization time) then crystallization is prevented and instead, the disordered atomic configuration of the supercooled liquid is frozen into the solid state at Tg. The tendency for a material to form a glass while quenched is called glass-forming ability. This ability can be predicted by the rigidity theory. Generally, a glass exists in a structurally metastable state with respect to its crystalline form, although in certain circumstances, for example in atactic polymers, there is no crystalline analogue of the amorphous phase.
Glass is sometimes considered to be a liquid due to its lack of a first-order phase transition where certain thermodynamic variables such as volume, entropy and enthalpy are discontinuous through the glass transition range. The glass transition may be described as analogous to a second-order phase transition where the intensive thermodynamic variables such as the thermal expansivity and heat capacity are discontinuous. However, the equilibrium theory of phase transformations does not hold for glass, and hence the glass transition cannot be classed as one of the classical equilibrium phase transformations in solids.
Occurrence in nature
Glass can form naturally from volcanic magma. Obsidian is a common volcanic glass with high silica (SiO2) content formed when felsic lava extruded from a volcano cools rapidly. Impactite is a form of glass formed by the impact of a meteorite, where Moldavite (found in central and eastern Europe), and Libyan desert glass (found in areas in the eastern Sahara, the deserts of eastern Libya and western Egypt) are notable examples. Vitrification of quartz can also occur when lightning strikes sand, forming hollow, branching rootlike structures called fulgurites. Trinitite is a glassy residue formed from the desert floor sand at the Trinity nuclear bomb test site. Edeowie glass, found in South Australia, is proposed to originate from Pleistocene grassland fires, lightning strikes, or hypervelocity impact by one or several asteroids or comets.
History
Naturally occurring obsidian glass was used by Stone Age societies as it fractures along very sharp edges, making it ideal for cutting tools and weapons.
Glassmaking dates back at least 6000 years, long before humans had discovered how to smelt iron. Archaeological evidence suggests that the first true synthetic glass was made in Lebanon and the coastal north Syria, Mesopotamia or ancient Egypt. The earliest known glass objects, of the mid-third millennium BC, were beads, perhaps initially created as accidental by-products of metalworking (slags) or during the production of faience, a pre-glass vitreous material made by a process similar to glazing.
Early glass was rarely transparent and often contained impurities and imperfections, and is technically faience rather than true glass, which did not appear until the 15th century BC. However, red-orange glass beads excavated from the Indus Valley Civilization dated before 1700 BC (possibly as early as 1900 BC) predate sustained glass production, which appeared around 1600 BC in Mesopotamia and 1500 BC in Egypt.
During the Late Bronze Age, there was a rapid growth in glassmaking technology in Egypt and Western Asia. Archaeological finds from this period include coloured glass ingots, vessels, and beads.
Much early glass production relied on cold-working techniques borrowed from stoneworking, such as grinding and carving glass in a cold state.
The term glass has its origins in the late Roman Empire, in the Roman glass making centre at Trier (located in current-day Germany) where the late-Latin term glesum originated, likely from a Germanic word for a transparent, lustrous substance. Glass objects have been recovered across the Roman Empire in domestic, funerary, and industrial contexts, as well as trade items in marketplaces in distant provinces. Examples of Roman glass have been found outside of the former Roman Empire in China, the Baltics, the Middle East, and India. The Romans perfected cameo glass, produced by etching and carving through fused layers of different colours to produce a design in relief on the glass object.
In post-classical West Africa, Benin was a manufacturer of glass and glass beads.
Glass was used extensively in Europe during the Middle Ages. Anglo-Saxon glass has been found across England during archaeological excavations of both settlement and cemetery sites. From the 10th century onwards, glass was employed in stained glass windows of churches and cathedrals, with famous examples at Chartres Cathedral and the Basilica of Saint-Denis. By the 14th century, architects were designing buildings with walls of stained glass such as Sainte-Chapelle, Paris, (1203–1248) and the East end of Gloucester Cathedral. With the change in architectural style during the Renaissance period in Europe, the use of large stained glass windows became much less prevalent, although stained glass had a major revival with Gothic Revival architecture in the 19th century.
During the 13th century, the island of Murano, Venice, became a centre for glass making, building on medieval techniques to produce colourful ornamental pieces in large quantities. Murano glass makers developed the exceptionally clear colourless glass cristallo, so called for its resemblance to natural crystal, which was extensively used for windows, mirrors, ships' lanterns, and lenses. In the 13th, 14th, and 15th centuries, enamelling and gilding on glass vessels were perfected in Egypt and Syria. Towards the end of the 17th century, Bohemia became an important region for glass production, remaining so until the start of the 20th century. By the 17th century, glass in the Venetian tradition was also being produced in England. In about 1675, George Ravenscroft invented lead crystal glass, with cut glass becoming fashionable in the 18th century. Ornamental glass objects became an important art medium during the Art Nouveau period in the late 19th century.
Throughout the 20th century, new mass production techniques led to the widespread availability of glass in much larger amounts, making it practical as a building material and enabling new applications of glass. In the 1920s a mould-etch process was developed, in which art was etched directly into the mould so that each cast piece emerged from the mould with the image already on the surface of the glass. This reduced manufacturing costs and, combined with a wider use of coloured glass, led to cheap glassware in the 1930s, which later became known as Depression glass. In the 1950s, Pilkington Bros., England, developed the float glass process, producing high-quality distortion-free flat sheets of glass by floating on molten tin. Modern multi-story buildings are frequently constructed with curtain walls made almost entirely of glass. Laminated glass has been widely applied to vehicles for windscreens. Optical glass for spectacles has been used since the Middle Ages. The production of lenses has become increasingly proficient, aiding astronomers as well as having other applications in medicine and science. Glass is also employed as the aperture cover in many solar energy collectors.
In the 21st century, glass manufacturers have developed different brands of chemically strengthened glass for widespread application in touchscreens for smartphones, tablet computers, and many other types of information appliances. These include Gorilla Glass, developed and manufactured by Corning, AGC Inc.'s Dragontrail and Schott AG's Xensation.
Physical properties
Optical
Glass is in widespread use in optical systems due to its ability to refract, reflect, and transmit light following geometrical optics. The most common and oldest applications of glass in optics are as lenses, windows, mirrors, and prisms. The key optical properties of glass (refractive index, dispersion, and transmission) are strongly dependent on its chemical composition and, to a lesser degree, its thermal history. Optical glass typically has a refractive index of 1.4 to 2.4, and an Abbe number (which characterises dispersion) of 15 to 100. The refractive index may be modified by high-density (refractive index increases) or low-density (refractive index decreases) additives.
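The Abbe number mentioned above is conventionally defined from the refractive indices measured at three Fraunhofer lines; the definition is supplied here for context and is not taken from the original text:

```latex
V_{d} = \frac{n_{d} - 1}{n_{F} - n_{C}}
```

where n_d, n_F and n_C are the refractive indices at the d (587.6 nm), F (486.1 nm) and C (656.3 nm) wavelengths. A larger V_d indicates lower dispersion.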
Glass transparency results from the absence of grain boundaries which diffusely scatter light in polycrystalline materials. Semi-opacity due to crystallization may be induced in many glasses by maintaining them for a long period at a temperature just insufficient to cause fusion. In this way, the crystalline, devitrified material, known as Réaumur's glass porcelain is produced. Although generally transparent to visible light, glasses may be opaque to other wavelengths of light. While silicate glasses are generally opaque to infrared wavelengths with a transmission cut-off at 4 μm, heavy-metal fluoride and chalcogenide glasses are transparent to infrared wavelengths of 7 to 18 μm. The addition of metallic oxides results in different coloured glasses as the metallic ions will absorb wavelengths of light corresponding to specific colours.
Other
In the manufacturing process, glasses can be poured, formed, extruded and moulded into forms ranging from flat sheets to highly intricate shapes. The finished product is brittle but can be laminated or tempered to enhance durability. Glass is typically inert, resistant to chemical attack, and can mostly withstand the action of water, making it an ideal material for the manufacture of containers for foodstuffs and most chemicals. Nevertheless, although usually highly resistant to chemical attack, glass will corrode or dissolve under some conditions. The materials that make up a particular glass composition affect how quickly the glass corrodes. Glasses containing a high proportion of alkali or alkaline earth elements are more susceptible to corrosion than other glass compositions.
The density of glass varies with chemical composition, with values ranging from for fused silica to for dense flint glass. Glass is stronger than most metals, with a theoretical tensile strength for pure, flawless glass estimated at due to its ability to undergo reversible compression without fracture. However, the presence of scratches, bubbles, and other microscopic flaws leads to a typical range of in most commercial glasses. Several processes such as toughening can increase the strength of glass. Carefully drawn flawless glass fibres can be produced with a strength of up to .
Reputed flow
The observation that old windows are sometimes found to be thicker at the bottom than at the top is often offered as supporting evidence for the view that glass flows over a timescale of centuries, the assumption being that the glass has exhibited the liquid property of flowing from one shape to another. This assumption is incorrect, as once solidified, glass stops flowing. The sags and ripples observed in old glass were already there the day it was made; manufacturing processes used in the past produced sheets with imperfect surfaces and non-uniform thickness (the near-perfect float glass used today only became widespread in the 1960s).
A 2017 study computed the rate of flow of the medieval glass used in Westminster Abbey from the year 1268. The study found that the room temperature viscosity of this glass was roughly 10^24 Pa·s, which is about 10^16 times less viscous than a previous estimate made in 1998, which focused on soda-lime silicate glass. Even with this lower viscosity, the study authors calculated that the maximum flow rate of medieval glass is 1 nm per billion years, making it impossible to observe in a human timescale.
Types
Silicate glasses
Silicon dioxide (SiO2) is a common fundamental constituent of glass. Fused quartz is a glass made from chemically pure silica. It has very low thermal expansion and excellent resistance to thermal shock, being able to survive immersion in water while red hot, resists high temperatures (1000–1500 °C) and chemical weathering, and is very hard. It is also transparent to a wider spectral range than ordinary glass, extending from the visible further into both the UV and IR ranges, and is sometimes used where transparency to these wavelengths is necessary. Fused quartz is used for high-temperature applications such as furnace tubes, lighting tubes, melting crucibles, etc. However, its high melting temperature (1723 °C) and viscosity make it difficult to work with. Therefore, normally, other substances (fluxes) are added to lower the melting temperature and simplify glass processing.
Soda–lime glass
Sodium carbonate (Na2CO3, "soda") is a common additive and acts to lower the glass-transition temperature. However, sodium silicate is water-soluble, so lime (CaO, calcium oxide, generally obtained from limestone), along with magnesium oxide (MgO) and aluminium oxide (Al2O3), is commonly added to improve chemical durability. Soda–lime glasses, composed of silica together with soda (Na2O), lime (CaO), magnesia (MgO) and alumina (Al2O3), account for over 75% of manufactured glass, containing about 70 to 74% silica by weight. Soda–lime–silicate glass is transparent, easily formed, and most suitable for window glass and tableware. However, it has a high thermal expansion and poor resistance to heat. Soda–lime glass is typically used for windows, bottles, light bulbs, and jars.
Borosilicate glass
Borosilicate glasses (e.g. Pyrex, Duran) typically contain 5–13% boron trioxide (B2O3). Borosilicate glasses have fairly low coefficients of thermal expansion (7740 Pyrex CTE is 3.25 × 10^-6/°C as compared to about 9 × 10^-6/°C for a typical soda–lime glass). They are, therefore, less subject to stress caused by thermal expansion and thus less vulnerable to cracking from thermal shock. They are commonly used for e.g. labware, household cookware, and sealed beam car head lamps.
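The connection between the expansion coefficients quoted above and thermal shock can be illustrated with a rough estimate of the stress induced when a glass surface is suddenly cooled. The sketch below is a simplification: the Young's modulus, Poisson's ratio and temperature step are illustrative assumptions, not values from the article.

```python
def thermal_stress_pa(cte_per_c, delta_t_c, youngs_modulus_pa=64e9, poisson=0.2):
    """Approximate biaxial stress in a constrained surface layer suddenly
    cooled by delta_t_c degrees Celsius (fully constrained, elastic model)."""
    return youngs_modulus_pa * cte_per_c * delta_t_c / (1 - poisson)

delta_t = 100  # illustrative sudden temperature drop in degrees Celsius

for name, cte in [("borosilicate", 3.25e-6), ("soda-lime", 9e-6)]:
    stress_mpa = thermal_stress_pa(cte, delta_t) / 1e6
    print(f"{name}: roughly {stress_mpa:.0f} MPa for a {delta_t} C quench")
```

Because the stress scales linearly with the CTE, the roughly threefold lower expansion of borosilicate glass translates directly into a correspondingly lower thermal stress, which is why it is less prone to cracking from thermal shock.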
Lead glass
The addition of lead(II) oxide into silicate glass lowers the melting point and viscosity of the melt. The high density of lead glass (silica + lead oxide (PbO) + potassium oxide (K2O) + soda (Na2O) + zinc oxide (ZnO) + alumina) results in a high electron density, and hence high refractive index, making the look of glassware more brilliant and causing noticeably more specular reflection and increased optical dispersion. Lead glass has a high elasticity, making the glassware more workable and giving rise to a clear "ring" sound when struck. However, lead glass cannot withstand high temperatures well. Lead oxide also facilitates the solubility of other metal oxides and is used in coloured glass. The viscosity decrease of lead glass melt is very significant (roughly 100 times in comparison with soda glass); this allows easier removal of bubbles and working at lower temperatures, hence its frequent use as an additive in vitreous enamels and glass solders. The high ionic radius of the Pb2+ ion renders it highly immobile and hinders the movement of other ions; lead glasses therefore have high electrical resistance, about two orders of magnitude higher than soda–lime glass (10^8.5 vs 10^6.5 Ω⋅cm, DC at 250 °C).
Aluminosilicate glass
Aluminosilicate glass typically contains 5–10% alumina (Al2O3). Aluminosilicate glass tends to be more difficult to melt and shape compared to borosilicate compositions but has excellent thermal resistance and durability. Aluminosilicate glass is extensively used for fibreglass, used for making glass-reinforced plastics (boats, fishing rods, etc.), top-of-stove cookware, and halogen bulb glass.
Other oxide additives
The addition of barium also increases the refractive index. Thorium oxide gives glass a high refractive index and low dispersion and was formerly used in producing high-quality lenses, but due to its radioactivity has been replaced by lanthanum oxide in modern eyeglasses. Iron can be incorporated into glass to absorb infrared radiation, for example in heat-absorbing filters for movie projectors, while cerium(IV) oxide can be used for glass that absorbs ultraviolet wavelengths. Fluorine lowers the dielectric constant of glass. Fluorine is highly electronegative and lowers the polarizability of the material. Fluoride silicate glasses are used in the manufacture of integrated circuits as an insulator.
Glass-ceramics
Glass-ceramic materials contain both non-crystalline glass and crystalline ceramic phases. They are formed by controlled nucleation and partial crystallisation of a base glass by heat treatment. Crystalline grains are often embedded within a non-crystalline intergranular phase of grain boundaries. Glass-ceramics exhibit advantageous thermal, chemical, biological, and dielectric properties as compared to metals or organic polymers.
The most commercially important property of glass-ceramics is their imperviousness to thermal shock. Thus, glass-ceramics have become extremely useful for countertop cooking and industrial processes. The negative thermal expansion coefficient (CTE) of the crystalline ceramic phase can be balanced with the positive CTE of the glassy phase. At a certain point (~70% crystalline) the glass-ceramic has a net CTE near zero. This type of glass-ceramic exhibits excellent mechanical properties and can sustain repeated and quick temperature changes up to 1000 °C.
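The "~70% crystalline" figure above follows from a simple rule-of-mixtures argument. In the sketch below the two phase CTEs are illustrative assumptions chosen only to demonstrate the calculation; they are not values given in the article.

```python
def zero_cte_crystal_fraction(cte_glass, cte_crystal):
    """Volume fraction f of crystalline phase for which the linear rule of
    mixtures f*cte_crystal + (1 - f)*cte_glass gives a net CTE of zero."""
    return cte_glass / (cte_glass - cte_crystal)

# Illustrative values: a glassy phase expanding at +7e-6/K and a crystalline
# phase with negative expansion at -3e-6/K (hypothetical numbers).
fraction = zero_cte_crystal_fraction(cte_glass=7e-6, cte_crystal=-3e-6)
print(f"Net CTE of zero at about {100 * fraction:.0f}% crystalline")  # ~70%
```

With these particular numbers the balance point happens to land at 70%; real glass-ceramics are tuned toward zero net CTE by adjusting both the composition and the heat treatment.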
Fibreglass
Fibreglass (also called glass fibre reinforced plastic, GRP) is a composite material made by reinforcing a plastic resin with glass fibres. It is made by melting glass and stretching the glass into fibres. These fibres are woven together into a cloth and left to set in a plastic resin.
Fibreglass has the properties of being lightweight and corrosion resistant and is a good insulator enabling its use as building insulation material and for electronic housing for consumer products. Fibreglass was originally used in the United Kingdom and United States during World War II to manufacture radomes. Uses of fibreglass include building and construction materials, boat hulls, car body parts, and aerospace composite materials.
Glass-fibre wool is an excellent thermal and sound insulation material, commonly used in buildings (e.g. attic and cavity wall insulation), and plumbing (e.g. pipe insulation), and soundproofing. It is produced by forcing molten glass through a fine mesh by centripetal force and breaking the extruded glass fibres into short lengths using a stream of high-velocity air. The fibres are bonded with an adhesive spray and the resulting wool mat is cut and packed in rolls or panels.
Non-silicate glasses
Besides common silica-based glasses many other inorganic and organic materials may also form glasses, including metals, aluminates, phosphates, borates, chalcogenides, fluorides, germanates (glasses based on GeO2), tellurites (glasses based on TeO2), antimonates (glasses based on Sb2O3), arsenates (glasses based on As2O3), titanates (glasses based on TiO2), tantalates (glasses based on Ta2O5), nitrates, carbonates, plastics, acrylic, and many other substances. Some of these glasses (e.g. Germanium dioxide (GeO2, Germania), in many respects a structural analogue of silica, fluoride, aluminate, phosphate, borate, and chalcogenide glasses) have physicochemical properties useful for their application in fibre-optic waveguides in communication networks and other specialised technological applications.
Silica-free glasses may often have poor glass-forming tendencies. Novel techniques, including containerless processing by aerodynamic levitation (cooling the melt whilst it floats on a gas stream) or splat quenching (pressing the melt between two metal anvils or rollers), may be used to increase the cooling rate or to reduce crystal nucleation triggers.
Amorphous metals
In the past, small batches of amorphous metals with high surface area configurations (ribbons, wires, films, etc.) have been produced through the implementation of extremely rapid rates of cooling. Amorphous metal wires have been produced by sputtering molten metal onto a spinning metal disk.
Several alloys have been produced in layers with thicknesses exceeding 1 millimetre. These are known as bulk metallic glasses (BMG). Liquidmetal Technologies sells several zirconium-based BMGs.
Batches of amorphous steel have also been produced that demonstrate mechanical properties far exceeding those found in conventional steel alloys.
Experimental evidence indicates that the system Al-Fe-Si may undergo a first-order transition to an amorphous form (dubbed "q-glass") on rapid cooling from the melt. Transmission electron microscopy (TEM) images indicate that q-glass nucleates from the melt as discrete particles with uniform spherical growth in all directions. While x-ray diffraction reveals the isotropic nature of q-glass, a nucleation barrier exists implying an interfacial discontinuity (or internal surface) between the glass and melt phases.
Polymers
Important polymer glasses include amorphous and glassy pharmaceutical compounds. These are useful because the solubility of the compound is greatly increased when it is amorphous compared to the same crystalline composition. Many emerging pharmaceuticals are practically insoluble in their crystalline forms. Many polymer thermoplastics familiar to everyday use are glasses. For many applications, like glass bottles or eyewear, polymer glasses (acrylic glass, polycarbonate or polyethylene terephthalate) are a lighter alternative to traditional glass.
Molecular liquids and molten salts
Molecular liquids, electrolytes, molten salts, and aqueous solutions are mixtures of different molecules or ions that do not form a covalent network but interact only through weak van der Waals forces or transient hydrogen bonds. In a mixture of three or more ionic species of dissimilar size and shape, crystallization can be so difficult that the liquid can easily be supercooled into a glass. Examples include LiCl:RH2O (a solution of lithium chloride salt and water molecules) in the composition range 4<R<8, sugar glass, or Ca0.4K0.6(NO3)1.4. Glass electrolytes in the form of Ba-doped Li-glass and Ba-doped Na-glass have been proposed as solutions to problems identified with organic liquid electrolytes used in modern lithium-ion battery cells.
Production
Following the glass batch preparation and mixing, the raw materials are transported to the furnace. Soda–lime glass for mass production is melted in glass-melting furnaces. Smaller-scale furnaces for speciality glasses include electric melters, pot furnaces, and day tanks.
After melting, homogenization and refining (removal of bubbles), the glass is formed. This may be achieved manually by glassblowing, which involves gathering a mass of hot semi-molten glass, inflating it into a bubble using a hollow blowpipe, and forming it into the required shape by blowing, swinging, rolling, or moulding. While hot, the glass can be worked using hand tools, cut with shears, and additional parts such as handles or feet attached by welding.
Flat glass for windows and similar applications is formed by the float glass process, developed between 1953 and 1957 by Sir Alastair Pilkington and Kenneth Bickerstaff of the UK's Pilkington Brothers, who created a continuous ribbon of glass using a molten tin bath on which the molten glass flows unhindered under the influence of gravity. The top surface of the glass is subjected to nitrogen under pressure to obtain a polished finish. Container glass for common bottles and jars is formed by blowing and pressing methods. This glass is often slightly modified chemically (with more alumina and calcium oxide) for greater water resistance.
Once the desired form is obtained, glass is usually annealed for the removal of stresses and to increase the glass's hardness and durability. Surface treatments, coatings or lamination may follow to improve the chemical durability (glass container coatings, glass container internal treatment), strength (toughened glass, bulletproof glass, windshields), or optical properties (insulated glazing, anti-reflective coating).
New chemical glass compositions or new treatment techniques can be initially investigated in small-scale laboratory experiments. The raw materials for laboratory-scale glass melts are often different from those used in mass production because the cost factor has a low priority. In the laboratory mostly pure chemicals are used. Care must be taken that the raw materials have not reacted with moisture or other chemicals in the environment (such as alkali or alkaline earth metal oxides and hydroxides, or boron oxide), or that the impurities are quantified (loss on ignition). Evaporation losses during glass melting should be considered during the selection of the raw materials, e.g., sodium selenite may be preferred over easily evaporating selenium dioxide (SeO2). Also, more readily reacting raw materials may be preferred over relatively inert ones, such as aluminium hydroxide (Al(OH)3) over alumina (Al2O3). Usually, the melts are carried out in platinum crucibles to reduce contamination from the crucible material. Glass homogeneity is achieved by homogenizing the raw materials mixture (glass batch), stirring the melt, and crushing and re-melting the first melt. The obtained glass is usually annealed to prevent breakage during processing.
Colour
Colour in glass may be obtained by addition of homogeneously distributed electrically charged ions (or colour centres). While ordinary soda–lime glass appears colourless in thin section, iron(II) oxide (FeO) impurities produce a green tint in thick sections. Manganese dioxide (MnO2), which gives glass a purple colour, may be added to remove the green tint given by FeO. FeO and chromium(III) oxide (Cr2O3) additives are used in the production of green bottles. Iron(III) oxide, on the other hand, produces yellow or yellow-brown glass. Low concentrations (0.025 to 0.1%) of cobalt oxide (CoO) produce rich, deep blue cobalt glass. Chromium is a very powerful colouring agent, yielding dark green.
Sulphur combined with carbon and iron salts produces amber glass ranging from yellowish to almost black. A glass melt can also acquire an amber colour from a reducing combustion atmosphere. Cadmium sulfide produces imperial red, and combined with selenium can produce shades of yellow, orange, and red. Addition of copper(II) oxide (CuO) produces a turquoise colour in glass, in contrast to copper(I) oxide (Cu2O) which gives a dull red-brown colour.
Uses
Architecture and windows
Soda–lime sheet glass is typically used as a transparent glazing material, typically as windows in external walls of buildings. Float or rolled sheet glass products are cut to size either by scoring and snapping the material, laser cutting, water jets, or diamond-bladed saw. The glass may be thermally or chemically tempered (strengthened) for safety and bent or curved during heating. Surface coatings may be added for specific functions such as scratch resistance, blocking specific wavelengths of light (e.g. infrared or ultraviolet), dirt-repellence (e.g. self-cleaning glass), or switchable electrochromic coatings.
Structural glazing systems represent one of the most significant architectural innovations of modern times, and glass buildings now often dominate the skylines of many modern cities. These systems use stainless steel fittings countersunk into recesses in the corners of the glass panels, allowing strengthened panes to appear unsupported and creating a flush exterior. Structural glazing systems have their roots in the iron and glass conservatories of the nineteenth century.
Tableware
Glass is an essential component of tableware and is typically used for water, beer and wine drinking glasses. Wine glasses are typically stemware, i.e. goblets formed from a bowl, stem, and foot. Crystal or lead crystal glass may be cut and polished to produce decorative drinking glasses with gleaming facets. Other uses of glass in tableware include decanters, jugs, plates, and bowls.
Packaging
The inert and impermeable nature of glass makes it a stable and widely used material for food and drink packaging as glass bottles and jars. Most container glass is soda–lime glass, produced by blowing and pressing techniques. Container glass has a lower magnesium oxide and sodium oxide content than flat glass, and a higher silica, calcium oxide, and aluminium oxide content. Its higher content of water-insoluble oxides imparts slightly higher chemical durability against water, which is advantageous for storing beverages and food. Glass packaging is sustainable, readily recycled, reusable and refillable.
For electronics applications, glass can be used as a substrate in the manufacture of integrated passive devices, thin-film bulk acoustic resonators, and as a hermetic sealing material in device packaging, including very thin solely glass based encapsulation of integrated circuits and other semiconductors in high manufacturing volumes.
Laboratories
Glass is an important material in scientific laboratories for the manufacture of experimental apparatus because it is relatively cheap, readily formed into required shapes for experiment, easy to keep clean, can withstand heat and cold treatment, is generally non-reactive with many reagents, and its transparency allows for the observation of chemical reactions and processes. Laboratory glassware applications include flasks, Petri dishes, test tubes, pipettes, graduated cylinders, glass-lined metallic containers for chemical processing, fractionation columns, glass pipes, Schlenk lines, gauges, and thermometers. Although most standard laboratory glassware has been mass-produced since the 1920s, scientists still employ skilled glassblowers to manufacture bespoke glass apparatus for their experimental requirements.
Optics
Glass is a ubiquitous material in optics because of its ability to refract, reflect, and transmit light. These and other optical properties can be controlled by varying chemical compositions, thermal treatment, and manufacturing techniques. The many applications of glass in optics include glasses for eyesight correction, imaging optics (e.g. lenses and mirrors in telescopes, microscopes, and cameras), fibre optics in telecommunications technology, and integrated optics. Microlenses and gradient-index optics (where the refractive index is non-uniform) find application in e.g. reading optical discs, laser printers, photocopiers, and laser diodes.
Modern Art
The 19th century saw a revival in ancient glassmaking techniques including cameo glass, achieved for the first time since the Roman Empire, initially mostly for pieces in a neo-classical style. The Art Nouveau movement made great use of glass, with René Lalique, Émile Gallé, and Daum of Nancy in the first French wave of the movement, producing coloured vases and similar pieces, often in cameo glass or lustre glass techniques.
Louis Comfort Tiffany in America specialised in stained glass, both secular and religious, in panels and his famous lamps. The early 20th century saw the large-scale factory production of glass art by firms such as Waterford and Lalique. Small studios may hand-produce glass artworks. Techniques for producing glass art include blowing, kiln-casting, fusing, slumping, pâte de verre, flame-working, hot-sculpting and cold-working. Cold work includes traditional stained glass work and other methods of shaping glass at room temperature. Objects made out of glass include vessels, paperweights, marbles, beads, sculptures and installation art.
| Technology | Materials | null |
12582 | https://en.wikipedia.org/wiki/Gel%20electrophoresis | Gel electrophoresis | Gel electrophoresis is an electrophoresis method for separation and analysis of biomacromolecules (DNA, RNA, proteins, etc.) and their fragments, based on their size and charge through a gel. It is used in clinical chemistry to separate proteins by charge or size (IEF agarose, essentially size independent) and in biochemistry and molecular biology to separate a mixed population of DNA and RNA fragments by length, to estimate the size of DNA and RNA fragments or to separate proteins by charge.
Nucleic acid molecules are separated by applying an electric field to move the negatively charged molecules through a gel matrix of agarose, polyacrylamide, or other substances. Shorter molecules move faster and migrate farther than longer ones because shorter molecules migrate more easily through the pores of the gel. This phenomenon is called sieving. Proteins are separated by the charge in agarose because the pores of the gel are too large to sieve proteins. Gel electrophoresis can also be used for the separation of nanoparticles.
Gel electrophoresis uses a gel as an anticonvective medium or sieving medium during electrophoresis. Gels suppress the thermal convection caused by the application of the electric field and can also simply serve to maintain the finished separation so that a post electrophoresis stain can be applied. DNA gel electrophoresis is usually performed for analytical purposes, often after amplification of DNA via polymerase chain reaction (PCR), but may be used as a preparative technique prior to use of other methods such as mass spectrometry, RFLP, PCR, cloning, DNA sequencing, or southern blotting for further characterization.
Physical basis
Electrophoresis is a process that enables the sorting of molecules based on charge, size, or shape. Using an electric field, molecules (such as DNA) can be made to move through a gel made of agarose or polyacrylamide. The electric field consists of a negative charge at one end which pushes the molecules through the gel, and a positive charge at the other end that pulls the molecules through the gel. The molecules being sorted are dispensed into a well in the gel material. The gel is placed in an electrophoresis chamber, which is then connected to a power source. When the electric field is applied, the larger molecules move more slowly through the gel while the smaller molecules move faster. The different sized molecules form distinct bands on the gel.
The term "gel" in this instance refers to the matrix used to contain, then separate the target molecules. In most cases, the gel is a crosslinked polymer whose composition and porosity are chosen based on the specific weight and composition of the target to be analyzed. When separating proteins or small nucleic acids (DNA, RNA, or oligonucleotides) the gel is usually composed of different concentrations of acrylamide and a cross-linker, producing different sized mesh networks of polyacrylamide. When separating larger nucleic acids (greater than a few hundred bases), the preferred matrix is purified agarose. In both cases, the gel forms a solid, yet porous matrix. Acrylamide, in contrast to polyacrylamide, is a neurotoxin and must be handled using appropriate safety precautions to avoid poisoning. Agarose is composed of long unbranched chains of uncharged carbohydrates without cross-links resulting in a gel with large pores allowing for the separation of macromolecules and macromolecular complexes.
Electrophoresis refers to the electromotive force (EMF) that is used to move the molecules through the gel matrix. By placing the molecules in wells in the gel and applying an electric field, the molecules will move through the matrix at different rates, determined largely by their mass when the charge-to-mass ratio (Z) of all species is uniform. However, when charges are not all uniform the electrical field generated by the electrophoresis procedure will cause the molecules to migrate differentially according to charge. Species that are net positively charged will migrate towards the cathode which is negatively charged (because this is an electrolytic rather than galvanic cell), whereas species that are net negatively charged will migrate towards the positively charged anode. Mass remains a factor in the speed with which these non-uniformly charged molecules migrate through the matrix toward their respective electrodes.
If several samples have been loaded into adjacent wells in the gel, they will run parallel in individual lanes. Depending on the number of different molecules, each lane shows the separation of the components from the original mixture as one or more distinct bands, one band per component. Incomplete separation of the components can lead to overlapping bands, or indistinguishable smears representing multiple unresolved components. Bands in different lanes that end up at the same distance from the top contain molecules that passed through the gel at the same speed, which usually means they are approximately the same size. There are molecular weight size markers available that contain a mixture of molecules of known sizes. If such a marker was run on one lane in the gel parallel to the unknown samples, the bands observed can be compared to those of the unknown to determine their size. The distance a band travels is approximately inversely proportional to the logarithm of the size of the molecule (alternatively, this can be stated as: the distance traveled is inversely proportional to the log of the sample's molecular weight).
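The roughly linear relationship between migration distance and the logarithm of fragment size is what makes size estimation against a marker lane possible. The sketch below uses made-up ladder measurements purely to illustrate the semi-log fit; the numbers are hypothetical and not from the original text.

```python
import numpy as np

# Hypothetical ladder: known fragment sizes (bp) and measured migration
# distances (mm) from the wells for a single gel run.
ladder_bp = np.array([10000, 5000, 2000, 1000, 500, 250])
ladder_mm = np.array([12.0, 18.5, 27.0, 33.5, 40.0, 46.5])

# Fit migration distance as a linear function of log10(fragment size).
slope, intercept = np.polyfit(np.log10(ladder_bp), ladder_mm, 1)

def estimate_size_bp(distance_mm):
    """Invert the fit to estimate fragment size from migration distance."""
    return 10 ** ((distance_mm - intercept) / slope)

print(f"A band at 30 mm corresponds to roughly {estimate_size_bp(30.0):.0f} bp")
```

In practice the calibration is only approximate, since very large or very small fragments deviate from the semi-log behaviour and band distortion affects the measured distances.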
There are limits to electrophoretic techniques. Since passing a current through a gel causes heating, gels may melt during electrophoresis. Electrophoresis is performed in buffer solutions to reduce pH changes due to the electric field, which is important because the charge of DNA and RNA depends on pH, but running for too long can exhaust the buffering capacity of the solution. There are also limitations in determining the molecular weight by SDS-PAGE, especially when trying to find the MW of an unknown protein. Certain biological variables are difficult or impossible to minimize and can affect electrophoretic migration. Such factors include protein structure, post-translational modifications, and amino acid composition. For example, tropomyosin is an acidic protein that migrates abnormally on SDS-PAGE gels. This is because the acidic residues are repelled by the negatively charged SDS, leading to an inaccurate mass-to-charge ratio and migration. Further, different preparations of genetic material may not migrate consistently with each other, for morphological or other reasons.
Types of gel
The types of gel most typically used are agarose and polyacrylamide gels. Each type of gel is well-suited to different types and sizes of the analyte. Polyacrylamide gels are usually used for proteins and have very high resolving power for small fragments of DNA (5-500 bp). Agarose gels, on the other hand, have lower resolving power for DNA but have a greater range of separation, and are therefore used for DNA fragments of usually 50–20,000 bp in size, but the resolution of over 6 Mb is possible with pulsed field gel electrophoresis (PFGE). Polyacrylamide gels are run in a vertical configuration while agarose gels are typically run horizontally in a submarine mode. They also differ in their casting methodology, as agarose sets thermally, while polyacrylamide forms in a chemical polymerization reaction.
Agarose
Agarose gels are made from the natural polysaccharide polymers extracted from seaweed.
Agarose gels are easily cast and handled compared to other matrices because the gel setting is a physical rather than chemical change. Samples are also easily recovered. After the experiment is finished, the resulting gel can be stored in a plastic bag in a refrigerator.
Agarose gels do not have a uniform pore size, but are optimal for electrophoresis of proteins that are larger than 200 kDa. Agarose gel electrophoresis can also be used for the separation of DNA fragments ranging from 50 base pairs to several megabases (millions of bases), the largest of which require specialized apparatus. The distance between DNA bands of different lengths is influenced by the percent agarose in the gel, with higher percentages requiring longer run times, sometimes days. Instead, high-percentage agarose gels should be run with pulsed field electrophoresis (PFE) or field inversion electrophoresis.
"Most agarose gels are made with between 0.7% (good separation or resolution of large 5–10kb DNA fragments) and 2% (good resolution for small 0.2–1kb fragments) agarose dissolved in electrophoresis buffer. Up to 3% can be used for separating very tiny fragments but a vertical polyacrylamide gel is more appropriate in this case. Low percentage gels are very weak and may break when you try to lift them. High percentage gels are often brittle and do not set evenly. 1% gels are common for many applications."
Polyacrylamide
Polyacrylamide gel electrophoresis (PAGE) is used for separating proteins ranging in size from 5 to 2,000 kDa due to the uniform pore size provided by the polyacrylamide gel. Pore size is controlled by modulating the concentrations of acrylamide and bis-acrylamide powder used in creating a gel. Care must be used when creating this type of gel, as acrylamide is a potent neurotoxin in its liquid and powdered forms.
Traditional DNA sequencing techniques such as Maxam-Gilbert or Sanger methods used polyacrylamide gels to separate DNA fragments differing by a single base-pair in length so the sequence could be read. Most modern DNA separation methods now use agarose gels, except for particularly small DNA fragments. It is currently most often used in the field of immunology and protein analysis, often used to separate different proteins or isoforms of the same protein into separate bands. These can be transferred onto a nitrocellulose or PVDF membrane to be probed with antibodies and corresponding markers, such as in a western blot.
Typically resolving gels are made in 6%, 8%, 10%, 12% or 15%. Stacking gel (5%) is poured on top of the resolving gel and a gel comb (which forms the wells and defines the lanes where proteins, sample buffer, and ladders will be placed) is inserted. The percentage chosen depends on the size of the protein that one wishes to identify or probe in the sample. The smaller the known weight, the higher the percentage that should be used. Changes in the buffer system of the gel can help to further resolve proteins of very small sizes.
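As an informal illustration of that rule of thumb, the sketch below maps an expected protein size to a suggested resolving-gel percentage; the cut-offs are invented for illustration and are not prescriptive.

```python
def suggest_resolving_gel(protein_kda):
    """Very rough guide: smaller proteins -> higher acrylamide percentage."""
    if protein_kda >= 100:
        return "6-8%"
    elif protein_kda >= 40:
        return "10%"
    elif protein_kda >= 15:
        return "12%"
    else:
        return "15% (or consider a modified buffer system for very small proteins)"

print(suggest_resolving_gel(25))  # e.g. a 25 kDa protein -> "12%"
```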
Starch
Partially hydrolysed potato starch is another non-toxic medium for protein electrophoresis. The gels are slightly more opaque than acrylamide or agarose. Non-denatured proteins can be separated according to charge and size. They are visualised using Naphthal Black or Amido Black staining. Typical starch gel concentrations are 5% to 10%.
Gel conditions
Denaturing
Denaturing gels are run under conditions that disrupt the natural structure of the analyte, causing it to unfold into a linear chain. Thus, the mobility of each macromolecule depends only on its linear length and its mass-to-charge ratio. In this way, the secondary, tertiary, and quaternary levels of biomolecular structure are disrupted, leaving only the primary structure to be analyzed.
Nucleic acids are often denatured by including urea in the buffer, while proteins are denatured using sodium dodecyl sulfate, usually as part of the SDS-PAGE process. For full denaturation of proteins, it is also necessary to reduce the covalent disulfide bonds that stabilize their tertiary and quaternary structure, a method called reducing PAGE. Reducing conditions are usually maintained by the addition of beta-mercaptoethanol or dithiothreitol. For a general analysis of protein samples, reducing PAGE is the most common form of protein electrophoresis.
Denaturing conditions are necessary for proper estimation of molecular weight of RNA. RNA is able to form more intramolecular interactions than DNA, which may change its electrophoretic mobility. Urea, DMSO and glyoxal are the most often used denaturing agents to disrupt RNA structure. Originally, highly toxic methylmercury hydroxide was often used in denaturing RNA electrophoresis, and it may still be the method of choice for some samples.
Denaturing gel electrophoresis is used in the DNA and RNA banding pattern-based methods temperature gradient gel electrophoresis (TGGE) and denaturing gradient gel electrophoresis (DGGE).
Native
Native gels are run in non-denaturing conditions so that the analyte's natural structure is maintained. This allows the physical size of the folded or assembled complex to affect the mobility, allowing for analysis of all four levels of the biomolecular structure. For biological samples, detergents are used only to the extent that they are necessary to lyse lipid membranes in the cell. Complexes remain—for the most part—associated and folded as they would be in the cell. One downside, however, is that complexes may not separate cleanly or predictably, as it is difficult to predict how the molecule's shape and size will affect its mobility. Addressing and solving this problem is a major aim of preparative native PAGE.
Unlike denaturing methods, native gel electrophoresis does not use a charged denaturing agent. The molecules being separated (usually proteins or nucleic acids) therefore differ not only in molecular mass and intrinsic charge, but also the cross-sectional area, and thus experience different electrophoretic forces dependent on the shape of the overall structure. For proteins, since they remain in the native state they may be visualized not only by general protein staining reagents but also by specific enzyme-linked staining.
A specific experimental example of an application of native gel electrophoresis is to check for enzymatic activity to verify the presence of the enzyme in the sample during protein purification. For example, for the protein alkaline phosphatase, the staining solution is a mixture of 4-chloro-2-methylbenzenediazonium salt with 3-phospho-2-naphthoic acid-2'-4'-dimethyl aniline in Tris buffer. This stain is commercially sold as a kit for staining gels. If the protein is present, the reaction takes place in the following order: it starts with the de-phosphorylation of 3-phospho-2-naphthoic acid-2'-4'-dimethyl aniline by alkaline phosphatase (water is needed for the reaction). The phosphate group is released and replaced by an alcohol group from water. The electrophile 4-chloro-2-methylbenzenediazonium (Fast Red TR diazonium salt) displaces the alcohol group, forming the final product, a red azo dye. As its name implies, this is the final visible-red product of the reaction. In undergraduate protein-purification experiments, the gel is usually run next to commercial purified samples to visualize the results and conclude whether or not purification was successful.
Native gel electrophoresis is typically used in proteomics and metallomics. However, native PAGE is also used to scan genes (DNA) for unknown mutations as in single-strand conformation polymorphism.
Buffers
Buffers in gel electrophoresis are used to provide ions that carry a current and to maintain the pH at a relatively constant value.
These buffers have plenty of ions in them, which is necessary for the passage of electricity through them. Something like distilled water or benzene contains few ions, which is not ideal for use in electrophoresis. There are a number of buffers used for electrophoresis. The most common for nucleic acids are Tris/acetate/EDTA (TAE) and Tris/borate/EDTA (TBE). Many other buffers have been proposed, e.g. lithium borate (LB), which is rarely used based on PubMed citations, isoelectric histidine, pK-matched Goods buffers, etc.; in most cases the purported rationale is lower current (less heat) and/or matched ion mobilities, which leads to longer buffer life. Borate is problematic; it can polymerize or interact with cis diols such as those found in RNA. TAE has the lowest buffering capacity but provides the best resolution for larger DNA. This means a lower voltage and more time, but a better product. LB is relatively new and is ineffective in resolving fragments larger than 5 kbp; however, with its low conductivity, a much higher voltage can be used (up to 35 V/cm), which means a shorter analysis time for routine electrophoresis. As low as one base pair size difference can be resolved in a 3% agarose gel with an extremely low-conductivity medium (1 mM lithium borate).
Most SDS-PAGE protein separations are performed using a "discontinuous" (or DISC) buffer system that significantly enhances the sharpness of the bands within the gel. During electrophoresis in a discontinuous gel system, an ion gradient is formed in the early stage of electrophoresis that causes all of the proteins to focus into a single sharp band in a process called isotachophoresis. Separation of the proteins by size is achieved in the lower, "resolving" region of the gel. The resolving gel typically has a much smaller pore size, which leads to a sieving effect that now determines the electrophoretic mobility of the proteins.
Visualization
After the electrophoresis is complete, the molecules in the gel can be stained to make them visible. DNA may be visualized using ethidium bromide which, when intercalated into DNA, fluoresces under ultraviolet light, while protein may be visualised using silver stain or Coomassie brilliant blue dye. Other methods may also be used to visualize the separation of the mixture's components on the gel. If the molecules to be separated contain radioactivity, for example in a DNA sequencing gel, an autoradiogram can be recorded of the gel. Photographs can be taken of gels, often using a Gel Doc system.
Downstream processing
After separation, an additional separation method may then be used, such as isoelectric focusing or SDS-PAGE. The gel will then be physically cut, and the protein complexes extracted from each portion separately. Each extract may then be analysed, such as by peptide mass fingerprinting or de novo peptide sequencing after in-gel digestion. This can provide a great deal of information about the identities of the proteins in a complex.
Applications
Estimation of the size of DNA molecules following restriction enzyme digestion, e.g. in restriction mapping of cloned DNA.
Analysis of PCR products, e.g. in molecular genetic diagnosis or genetic fingerprinting
Separation of restricted genomic DNA prior to Southern transfer, or of RNA prior to Northern transfer.
Gel electrophoresis is used in forensics, molecular biology, genetics, microbiology and biochemistry. The results can be analyzed quantitatively by visualizing the gel with UV light and a gel imaging device. The image is recorded with a computer-operated camera, and the intensity of the band or spot of interest is measured and compared against standards or markers loaded on the same gel. The measurement and analysis are mostly done with specialized software.
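As a simple illustration of that comparison, the sketch below (with invented intensities and amounts; real workflows use dedicated gel-analysis software) fits a standard curve from the known lanes and interpolates an unknown band.

```python
import numpy as np

std_ng        = np.array([25, 50, 100, 200])            # amounts loaded in standard lanes (ng)
std_intensity = np.array([1.1e4, 2.2e4, 4.1e4, 8.3e4])  # integrated band intensities (arbitrary units)

# Linear calibration of intensity against loaded amount
slope, intercept = np.polyfit(std_ng, std_intensity, 1)

unknown_intensity = 3.0e4
print((unknown_intensity - intercept) / slope)  # estimated amount (ng) in the unknown lane
```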
Depending on the type of analysis being performed, other techniques are often implemented in conjunction with the results of gel electrophoresis, providing a wide range of field-specific applications.
Nucleic acids
In the case of nucleic acids, the direction of migration, from negative to positive electrodes, is due to the naturally occurring negative charge carried by their sugar-phosphate backbone.
Double-stranded DNA fragments naturally behave as long rods, so their migration through the gel is relative to their size or, for cyclic fragments, their radius of gyration. Circular DNA such as plasmids, however, may show multiple bands: the speed of migration may depend on whether the molecule is relaxed or supercoiled. Single-stranded DNA or RNA tends to fold up into molecules with complex shapes and migrate through the gel in a complicated manner based on their tertiary structure. Therefore, agents that disrupt the hydrogen bonds, such as sodium hydroxide or formamide, are used to denature the nucleic acids and cause them to behave as long rods again.
Gel electrophoresis of large DNA or RNA is usually done by agarose gel electrophoresis. See the "chain termination method" page for an example of a polyacrylamide DNA sequencing gel. Characterization through ligand interaction of nucleic acids or fragments may be performed by mobility shift affinity electrophoresis.
Electrophoresis of RNA samples can be used to check for genomic DNA contamination and also for RNA degradation. RNA from eukaryotic organisms shows distinct bands of 28S and 18S rRNA, the 28S band being approximately twice as intense as the 18S band. Degraded RNA has less sharply defined bands, has a smeared appearance, and the intensity ratio is less than 2:1.
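A toy check of that 2:1 rule of thumb might look like the following sketch; the threshold is arbitrary, and dedicated instruments report an RNA integrity number instead.

```python
def rna_looks_intact(intensity_28s, intensity_18s, min_ratio=1.8):
    """Flag RNA as likely intact if the 28S:18S intensity ratio is near 2:1."""
    return (intensity_28s / intensity_18s) >= min_ratio

print(rna_looks_intact(2.1e4, 1.0e4))  # True
print(rna_looks_intact(1.2e4, 1.0e4))  # False (possible degradation)
```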
Proteins
Proteins, unlike nucleic acids, can have varying charges and complex shapes; therefore, they may not migrate into the polyacrylamide gel at similar rates, or at all, when a negative-to-positive EMF is applied to the sample. Proteins, therefore, are usually denatured in the presence of a detergent such as sodium dodecyl sulfate (SDS) that coats the proteins with a negative charge. Generally, the amount of SDS bound is relative to the size of the protein (usually 1.4 g of SDS per gram of protein), so that the resulting denatured proteins have an overall negative charge, and all the proteins have a similar charge-to-mass ratio. Since denatured proteins act like long rods instead of having a complex tertiary shape, the rate at which the resulting SDS-coated proteins migrate in the gel is relative only to their size and not their charge or shape.
Proteins are usually analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE), by native gel electrophoresis, by preparative native gel electrophoresis (QPNC-PAGE), or by 2-D electrophoresis.
Characterization through ligand interaction may be performed by electroblotting or by affinity electrophoresis in agarose or by capillary electrophoresis as for estimation of binding constants and determination of structural features like glycan content through lectin binding.
Nanoparticles
A novel application for gel electrophoresis is the separation or characterization of metal or metal oxide nanoparticles (e.g. Au, Ag, ZnO, SiO2) regarding the size, shape, or surface chemistry of the nanoparticles. The aim is to obtain a more homogeneous sample (e.g. narrower particle size distribution), which then can be used in further products/processes (e.g. self-assembly processes). For the separation of nanoparticles within a gel, the key parameter is the ratio of the particle size to the mesh size, whereby two migration mechanisms were identified: the unrestricted mechanism, where the particle size << mesh size, and the restricted mechanism, where particle size is similar to mesh size.
History
1930s – first reports of the use of sucrose for gel electrophoresis; moving-boundary electrophoresis (Tiselius)
1950 – introduction of "zone electrophoresis" (Tiselius); paper electrophoresis
1955 – introduction of starch gels, mediocre separation (Smithies)
1959 – introduction of acrylamide gels (Raymond and Weintraub); discontinuous electrophoresis (Ornstein and Davis); accurate control of parameters such as pore size and stability
1965 – introduction of free-flow electrophoresis (Hannig)
1966 – first use of agar gels
1969 – introduction of denaturing agents especially SDS separation of protein subunit (Weber and Osborn)
1970 – Laemmli separated 28 components of T4 phage using a stacking gel and SDS
1972 – agarose gels with ethidium bromide stain
1975 – 2-dimensional gels (O’Farrell); isoelectric focusing, then SDS gel electrophoresis
1977 – sequencing gels (Sanger)
1981 – introduction of capillary electrophoresis (Jorgenson and Lukacs)
1984 – pulsed-field gel electrophoresis enables separation of large DNA molecules (Schwartz and Cantor)
2004 – introduction of a standardized polymerization time for acrylamide gel solutions to optimize gel properties, in particular gel stability (Kastenholz)
A 1959 book on electrophoresis by Milan Bier cites references from the 1800s. However, Oliver Smithies made significant contributions. Bier states: "The method of Smithies ... is finding wide application because of its unique separatory power." Taken in context, Bier clearly implies that Smithies' method is an improvement.
| Technology | Biotechnology | null |
12584 | https://en.wikipedia.org/wiki/Golgi%20apparatus | Golgi apparatus | The Golgi apparatus, also known as the Golgi complex, Golgi body, or simply the Golgi, is an organelle found in most eukaryotic cells. Part of the endomembrane system in the cytoplasm, it packages proteins into membrane-bound vesicles inside the cell before the vesicles are sent to their destination. It resides at the intersection of the secretory, lysosomal, and endocytic pathways. It is of particular importance in processing proteins for secretion, containing a set of glycosylation enzymes that attach various sugar monomers to proteins as the proteins move through the apparatus.
The Golgi apparatus was identified in 1898 by the Italian biologist and pathologist Camillo Golgi. The organelle was later named after him in the 1910s.
Discovery
Because of its large size and distinctive structure, the Golgi apparatus was one of the first organelles to be discovered and observed in detail. It was discovered in 1898 by Italian physician Camillo Golgi during an investigation of the nervous system. After first observing it under his microscope, he termed the structure the apparato reticolare interno ("internal reticular apparatus"). Some doubted the discovery at first, arguing that the appearance of the structure was merely an optical illusion created by Golgi’s observation technique. With the development of modern microscopes in the twentieth century, the discovery was confirmed. Early references to the Golgi apparatus referred to it by various names, including the Golgi–Holmgren apparatus, Golgi–Holmgren ducts, and Golgi–Kopsch apparatus. The term Golgi apparatus was used in 1910 and first appeared in scientific literature in 1913, while "Golgi complex" was introduced in 1956.
Subcellular localization
The subcellular localization of the Golgi apparatus varies among eukaryotes. In mammals, a single Golgi apparatus is usually located near the cell nucleus, close to the centrosome. Tubular connections are responsible for linking the stacks together. Localization and tubular connections of the Golgi apparatus are dependent on microtubules. In experiments it is seen that as microtubules are depolymerized the Golgi apparatuses lose mutual connections and become individual stacks throughout the cytoplasm. In yeast, multiple Golgi apparatuses are scattered throughout the cytoplasm (as observed in Saccharomyces cerevisiae). In plants, Golgi stacks are not concentrated at the centrosomal region and do not form Golgi ribbons. Organization of the plant Golgi depends on actin cables and not microtubules. The common feature among Golgi is that they are adjacent to endoplasmic reticulum (ER) exit sites.
Structure
In most eukaryotes, the Golgi apparatus is made up of a series of compartments and is a collection of fused, flattened membrane-enclosed disks known as cisternae (singular: cisterna, also called "dictyosomes"), originating from vesicular clusters that bud off the endoplasmic reticulum (ER). A mammalian cell typically contains 40 to 100 stacks of cisternae. Between four and eight cisternae are usually present in a stack; however, in some protists as many as sixty cisternae have been observed. This collection of cisternae is broken down into cis, medial, and trans compartments, making up two main networks: the cis Golgi network (CGN) and the trans Golgi network (TGN). The CGN is the first cisternal structure, and the TGN is the final, from which proteins are packaged into vesicles destined to lysosomes, secretory vesicles, or the cell surface. The TGN is usually positioned adjacent to the stack, but can also be separate from it. The TGN may act as an early endosome in yeast and plants.
There are structural and organizational differences in the Golgi apparatus among eukaryotes. In some yeasts, Golgi stacking is not observed. Pichia pastoris does have stacked Golgi, while Saccharomyces cerevisiae does not. In plants, the individual stacks of the Golgi apparatus seem to operate independently.
The Golgi apparatus tends to be larger and more numerous in cells that synthesize and secrete large amounts of substances; for example, the antibody-secreting plasma B cells of the immune system have prominent Golgi complexes.
In all eukaryotes, each cisternal stack has a cis entry face and a trans exit face. These faces are characterized by unique morphology and biochemistry. Within individual stacks are assortments of enzymes responsible for selectively modifying protein cargo. These modifications influence the fate of the protein. The compartmentalization of the Golgi apparatus is advantageous for separating enzymes, thereby maintaining consecutive and selective processing steps: enzymes catalyzing early modifications are gathered in the cis face cisternae, and enzymes catalyzing later modifications are found in trans face cisternae of the Golgi stacks.
Function
The Golgi apparatus is a major collection and dispatch station of protein products received from the endoplasmic reticulum. Proteins synthesized in the ER are packaged into vesicles, which then fuse with the Golgi apparatus. These cargo proteins are modified and destined for secretion via exocytosis or for use in the cell.
In this respect, the Golgi can be thought of as similar to a post office: it packages and labels items which it then sends to different parts of the cell or to the extracellular space. The Golgi apparatus is also involved in lipid transport and lysosome formation.
The structure and function of the Golgi apparatus are intimately linked. Individual stacks have different assortments of enzymes, allowing for progressive processing of cargo proteins as they travel from the cis to the trans Golgi face. Enzymatic reactions within the Golgi stacks occur exclusively near its membrane surfaces, where enzymes are anchored. This feature is in contrast to the ER, which has soluble proteins and enzymes in its lumen. Much of the enzymatic processing is post-translational modification of proteins. For example, phosphorylation of oligosaccharides on lysosomal proteins occurs in the early CGN. Cis cisternae are associated with the removal of mannose residues. Removal of mannose residues and addition of N-acetylglucosamine occur in medial cisternae. Addition of galactose and sialic acid occurs in the trans cisternae. Sulfation of tyrosines and carbohydrates occurs within the TGN. Other general post-translational modifications of proteins include the addition of carbohydrates (glycosylation) and phosphates (phosphorylation). Protein modifications may form a signal sequence that determines the final destination of the protein. For example, the Golgi apparatus adds a mannose-6-phosphate label to proteins destined for lysosomes. Another important function of the Golgi apparatus is in the formation of proteoglycans. Enzymes in the Golgi append proteins to glycosaminoglycans, thus creating proteoglycans. Glycosaminoglycans are long unbranched polysaccharide molecules present in the extracellular matrix of animals.
Vesicular transport
The vesicles that leave the rough endoplasmic reticulum are transported to the cis face of the Golgi apparatus, where they fuse with the Golgi membrane and empty their contents into the lumen. Once inside the lumen, the molecules are modified, then sorted for transport to their next destinations.
Those proteins destined for areas of the cell other than either the endoplasmic reticulum or the Golgi apparatus are moved through the Golgi cisternae towards the trans face, to a complex network of membranes and associated vesicles known as the trans-Golgi network (TGN). This area of the Golgi is the point at which proteins are sorted and shipped to their intended destinations by their placement into one of at least three different types of vesicles, depending upon the signal sequence they carry.
Current models of vesicular transport and trafficking
Model 1: Anterograde vesicular transport between stable compartments
In this model, the Golgi is viewed as a set of stable compartments that work together. Each compartment has a unique collection of enzymes that work to modify protein cargo. Proteins are delivered from the ER to the cis face using COPII-coated vesicles. Cargo then progresses toward the trans face in COPI-coated vesicles. This model proposes that COPI vesicles move in two directions: anterograde vesicles carry secretory proteins, while retrograde vesicles recycle Golgi-specific trafficking proteins.
Strengths: The model explains observations of compartments, polarized distribution of enzymes, and waves of moving vesicles. It also attempts to explain how Golgi-specific enzymes are recycled.
Weaknesses: Since the amount of COPI vesicles varies drastically among types of cells, this model cannot easily explain high trafficking activity within the Golgi for both small and large cargoes. Additionally, there is no convincing evidence that COPI vesicles move in both the anterograde and retrograde directions.
This model was widely accepted from the early 1980s until the late 1990s.
Model 2: Cisternal progression/maturation
In this model, the fusion of COPII vesicles from the ER begins the formation of the first cis-cisterna of the Golgi stack, which progresses later to become mature TGN cisternae. Once matured, the TGN cisternae dissolve to become secretory vesicles. While this progression occurs, COPI vesicles continually recycle Golgi-specific proteins by delivery from older to younger cisternae. Different recycling patterns may account for the differing biochemistry throughout the Golgi stack. Thus, the compartments within the Golgi are seen as discrete kinetic stages of the maturing Golgi apparatus.
Strengths: The model addresses the existence of Golgi compartments, as well as differing biochemistry within the cisternae, transport of large proteins, transient formation and disintegration of the cisternae, and retrograde mobility of native Golgi proteins, and it can account for the variability seen in the structures of the Golgi.
Weaknesses: This model cannot easily explain the observation of fused Golgi networks, tubular connections among cisternae, and differing kinetics of secretory cargo exit.
Model 3: Cisternal progression/maturation with heterotypic tubular transport
This model is an extension of the cisternal progression/maturation model. It incorporates the existence of tubular connections among the cisternae that form the Golgi ribbon, in which cisternae within a stack are linked. This model posits that the tubules are important for bidirectional traffic in the ER-Golgi system: they allow for fast anterograde traffic of small cargo and/or the retrograde traffic of native Golgi proteins.
Strengths: This model encompasses the strengths of the cisternal progression/maturation model that also explains rapid trafficking of cargo, and how native Golgi proteins can recycle independently of COPI vesicles.
Weaknesses: This model cannot explain the transport kinetics of large protein cargo, such as collagen. Additionally, tubular connections are not prevalent in plant cells. The roles that these connections have can be attributed to a cell-specific specialization rather than a universal trait. If the membranes are continuous, that suggests the existence of mechanisms that preserve the unique biochemical gradients observed throughout the Golgi apparatus.
Model 4: Rapid partitioning in a mixed Golgi
This rapid partitioning model is the most drastic alteration of the traditional vesicular trafficking point of view. Proponents of this model hypothesize that the Golgi works as a single unit, containing domains that function separately in the processing and export of protein cargo. Cargo from the ER move between these two domains, and randomly exit from any level of the Golgi to their final location. This model is supported by the observation that cargo exits the Golgi in a pattern best described by exponential kinetics. The existence of domains is supported by fluorescence microscopy data.
Strengths: Notably, this model explains the exponential kinetics of cargo exit of both large and small proteins, whereas other models cannot.
Weaknesses: This model falls short on explaining the observation of discrete compartments and polarized biochemistry of the Golgi cisternae. It also does not explain formation and disintegration of the Golgi network, nor the role of COPI vesicles.
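For reference, the "exponential kinetics" cited in support of this model corresponds to first-order exit, which can be written as follows (a generic sketch of the kinetics, not a result specific to any one study):

```latex
% Cargo leaves the Golgi at a rate proportional to the amount still present,
% so the amount remaining, N(t), decays exponentially from its initial value N_0:
\[
  \frac{dN}{dt} = -kN \quad\Longrightarrow\quad N(t) = N_0\, e^{-kt},
\]
% where k is an empirical exit-rate constant fitted to the observed traffic.
```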
Model 5: Stable compartments as cisternal model progenitors
This is the most recent model. In this model, the Golgi is seen as a collection of stable compartments defined by Rab (G-protein) GTPases.
Strengths: This model is consistent with numerous observations and encompasses some of the strengths of the cisternal progression/maturation model. Additionally, what is known of the Rab GTPase roles in mammalian endosomes can help predict putative roles within the Golgi. This model is unique in that it can explain the observation of "megavesicle" transport intermediates.
Weaknesses: This model does not explain morphological variations in the Golgi apparatus, nor define a role for COPI vesicles. This model does not apply well for plants, algae, and fungi in which individual Golgi stacks are observed (transfer of domains between stacks is not likely). Additionally, megavesicles are not established to be intra-Golgi transporters.
Though there are multiple models that attempt to explain vesicular traffic throughout the Golgi, no individual model can independently explain all observations of the Golgi apparatus. Currently, the cisternal progression/maturation model is the most accepted among scientists, accommodating many observations across eukaryotes. The other models are still important in framing questions and guiding future experimentation. Among the fundamental unanswered questions are the directionality of COPI vesicles and role of Rab GTPases in modulating protein cargo traffic.
Brefeldin A
Brefeldin A (BFA) is a fungal metabolite used experimentally to disrupt the secretion pathway as a method of testing Golgi function. BFA blocks the activation of some ADP-ribosylation factors (ARFs). ARFs are small GTPases which regulate vesicular trafficking through the binding of COPs to endosomes and the Golgi. BFA inhibits the function of several guanine nucleotide exchange factors (GEFs) that mediate GTP-binding of ARFs. Treatment of cells with BFA thus disrupts the secretion pathway, promoting disassembly of the Golgi apparatus and distributing Golgi proteins to the endosomes and ER.
Gallery
| Biology and health sciences | Organelles and other cell parts | null |
12608 | https://en.wikipedia.org/wiki/Geodesy | Geodesy | Geodesy or geodetics is the science of measuring and representing the geometry, gravity, and spatial orientation of the Earth in temporally varying 3D. It is called planetary geodesy when studying other astronomical bodies, such as planets or circumplanetary systems. Geodesy is an earth science and many consider the study of Earth's shape and gravity to be central to that science. It is also a discipline of applied mathematics.
Geodynamical phenomena, including crustal motion, tides, and polar motion, can be studied by designing global and national control networks, applying space geodesy and terrestrial geodetic techniques, and relying on datums and coordinate systems. Geodetic job titles include geodesist and geodetic surveyor.
History
Geodesy began in pre-scientific antiquity; the very word geodesy comes from the Ancient Greek geodaisia (literally, "division of Earth").
Early ideas about the figure of the Earth held the Earth to be flat and the heavens a physical dome spanning over it. Two early arguments for a spherical Earth were that lunar eclipses appear to an observer as circular shadows and that Polaris appears lower and lower in the sky to a traveler headed South.
Definition
In English, geodesy refers to the science of measuring and representing geospatial information, while geomatics encompasses practical applications of geodesy on local and regional scales, including surveying.
In German, geodesy can refer to either higher geodesy (höhere Geodäsie or Erdmessung, literally "geomensuration") — concerned with measuring Earth on the global scale, or engineering geodesy (Ingenieurgeodäsie) that includes surveying — measuring parts or regions of Earth.
For the longest time, geodesy was the science of measuring and understanding Earth's geometric shape, orientation in space, and gravitational field; however, geodetic science and operations are applied to other astronomical bodies in our Solar System also.
To a large extent, Earth's shape is the result of rotation, which causes its equatorial bulge, and the competition of geological processes such as the collision of plates, as well as of volcanism, resisted by Earth's gravitational field. This applies to the solid surface, the liquid surface (dynamic sea surface topography), and Earth's atmosphere. For this reason, the study of Earth's gravitational field is called physical geodesy.
Geoid and reference ellipsoid
The geoid essentially is the figure of Earth abstracted from its topographical features. It is an idealized equilibrium surface of seawater, the mean sea level surface in the absence of currents and air pressure variations, and continued under the continental masses. Unlike a reference ellipsoid, the geoid is irregular and too complicated to serve as the computational surface for solving geometrical problems like point positioning. The geometrical separation between the geoid and a reference ellipsoid is called geoidal undulation, and it varies globally between ±110 m based on the GRS 80 ellipsoid.
A reference ellipsoid, customarily chosen to be the same size (volume) as the geoid, is described by its semi-major axis (equatorial radius) a and flattening f. The quantity f = (a − b)/a, where b is the semi-minor axis (polar radius), is purely geometrical. The mechanical ellipticity of Earth (dynamical flattening, symbol J2) can be determined to high precision by observation of satellite orbit perturbations. Its relationship with geometrical flattening is indirect and depends on the internal density distribution or, in simplest terms, the degree of central concentration of mass.
The 1980 Geodetic Reference System (GRS 80), adopted at the XVII General Assembly of the International Union of Geodesy and Geophysics (IUGG), posited a 6,378,137 m semi-major axis and a 1:298.257 flattening. GRS 80 essentially constitutes the basis for geodetic positioning by the Global Positioning System (GPS) and is thus also in widespread use outside the geodetic community. Numerous systems used for mapping and charting are becoming obsolete as countries increasingly move to global, geocentric reference systems utilizing the GRS 80 reference ellipsoid.
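As a small illustration, derived GRS 80 quantities can be computed from the defining constants quoted above; the reciprocal flattening below is the commonly cited value carried to more digits than the rounded 1:298.257.

```python
a = 6_378_137.0          # GRS 80 semi-major axis (m)
inv_f = 298.257222101    # GRS 80 reciprocal flattening
f = 1.0 / inv_f
b = a * (1.0 - f)        # semi-minor (polar) axis, about 6,356,752.3 m
e2 = f * (2.0 - f)       # first eccentricity squared, about 0.00669438
print(b, e2)
```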
The geoid is a "realizable" surface, meaning it can be consistently located on Earth by suitable simple measurements from physical objects like a tide gauge. The geoid can, therefore, be considered a physical ("real") surface. The reference ellipsoid, however, has many possible instantiations and is not readily realizable, so it is an abstract surface. The third primary surface of geodetic interest — the topographic surface of Earth — is also realizable.
Coordinate systems in space
The locations of points in 3D space most conveniently are described by three cartesian or rectangular coordinates, X, Y, and Z. Since the advent of satellite positioning, such coordinate systems are typically geocentric, with the Z-axis aligned to Earth's (conventional or instantaneous) rotation axis.
Before the era of satellite geodesy, the coordinate systems associated with a geodetic datum attempted to be geocentric, but with the origin differing from the geocenter by hundreds of meters due to regional deviations in the direction of the plumbline (vertical). These regional geodetic datums, such as ED 50 (European Datum 1950) or NAD 27 (North American Datum 1927), have ellipsoids associated with them that are regional "best fits" to the geoids within their areas of validity, minimizing the deflections of the vertical over these areas.
It is only because GPS satellites orbit about the geocenter that this point becomes naturally the origin of a coordinate system defined by satellite geodetic means, as the satellite positions in space themselves get computed within such a system.
Geocentric coordinate systems used in geodesy can be divided naturally into two classes:
The inertial reference systems, where the coordinate axes retain their orientation relative to the fixed stars or, equivalently, to the rotation axes of ideal gyroscopes. The X-axis points to the vernal equinox.
The co-rotating reference systems (also ECEF or "Earth Centred, Earth Fixed"), in which the axes are "attached" to the solid body of Earth. The X-axis lies within the Greenwich observatory's meridian plane.
The coordinate transformation between these two systems to good approximation is described by (apparent) sidereal time, which accounts for variations in Earth's axial rotation (length-of-day variations). A more accurate description also accounts for polar motion as a phenomenon closely monitored by geodesists.
Coordinate systems in the plane
In geodetic applications like surveying and mapping, two general types of coordinate systems in the plane are in use:
Plano-polar, with points in the plane defined by their distance, s, from a specified point along a ray having a direction α from a baseline or axis.
Rectangular, with points defined by distances from two mutually perpendicular axes, x and y. Contrary to the mathematical convention, in geodetic practice, the x-axis points North and the y-axis East.
One can intuitively use rectangular coordinates in the plane for one's current location, in which case the x-axis will point to the local north. More formally, such coordinates can be obtained from 3D coordinates using the artifice of a map projection. It is impossible to map the curved surface of Earth onto a flat map surface without deformation. The compromise most often chosen — called a conformal projection — preserves angles and length ratios so that small circles get mapped as small circles and small squares as squares.
An example of such a projection is UTM (Universal Transverse Mercator). Within the map plane, we have rectangular coordinates x and y. In this case, the north direction used for reference is the map north, not the local north. The difference between the two is called meridian convergence.
It is easy enough to "translate" between polar and rectangular coordinates in the plane: let, as above, direction and distance be α and s respectively; then we have x = s cos α and y = s sin α.
The reverse transformation is given by s = √(x² + y²) and α = arctan(y/x), with the quadrant chosen according to the signs of x and y.
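A minimal sketch of the two conversions just given, assuming the geodetic convention described above (x north, y east, α clockwise from north):

```python
import math

def polar_to_rect(s, alpha_deg):
    a = math.radians(alpha_deg)
    return s * math.cos(a), s * math.sin(a)       # (x, y) = (north, east)

def rect_to_polar(x, y):
    s = math.hypot(x, y)
    alpha = math.degrees(math.atan2(y, x)) % 360  # atan2 handles the quadrants
    return s, alpha

print(polar_to_rect(100.0, 30.0))    # ~ (86.60, 50.00)
print(rect_to_polar(86.60, 50.00))   # ~ (100.0, 30.0)
```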
Heights
In geodesy, point or terrain heights are "above sea level" as an irregular, physically defined surface.
Height systems in use are:
Orthometric heights
Dynamic heights
Geopotential heights
Normal heights
Each system has its advantages and disadvantages. Both orthometric and normal heights are expressed in metres above sea level, whereas geopotential numbers are measures of potential energy (unit: m² s⁻²) and not metric. The reference surface is the geoid, an equigeopotential surface approximating the mean sea level as described above. For normal heights, the reference surface is the so-called quasi-geoid, which has a few-metre separation from the geoid due to the density assumption in its continuation under the continental masses.
One can relate these heights through the geoid undulation concept to ellipsoidal heights (also known as geodetic heights), representing the height of a point above the reference ellipsoid. Satellite positioning receivers typically provide ellipsoidal heights unless fitted with special conversion software based on a model of the geoid.
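The usual (approximate) relation connecting these height types can be written as follows, where h is the ellipsoidal height, H the orthometric height, and N the geoid undulation:

```latex
% GNSS receivers deliver h; subtracting a geoid model's undulation N yields the
% familiar height above sea level H.
\[
  h \;\approx\; H + N
\]
```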
Geodetic datums
Because coordinates and heights of geodetic points always get obtained within a system that itself was constructed based on real-world observations, geodesists introduced the concept of a "geodetic datum" (plural datums): a physical (real-world) realization of a coordinate system used for describing point locations. This realization follows from choosing (therefore conventional) coordinate values for one or more datum points. In the case of height data, it suffices to choose one datum point — the reference benchmark, typically a tide gauge at the shore. Thus we have vertical datums, such as the NAVD 88 (North American Vertical Datum 1988), NAP (Normaal Amsterdams Peil), the Kronstadt datum, the Trieste datum, and numerous others.
In both mathematics and geodesy, a coordinate system is a "coordinate system" per ISO terminology, whereas the International Earth Rotation and Reference Systems Service (IERS) uses the term "reference system" for the same. When coordinates are realized by choosing datum points and fixing a geodetic datum, ISO speaks of a "coordinate reference system", whereas IERS uses a "reference frame" for the same. The ISO term for a datum transformation again is a "coordinate transformation".
Positioning
General geopositioning, or simply positioning, is the determination of the location of points on Earth, by myriad techniques. Geodetic positioning employs geodetic methods to determine a set of precise geodetic coordinates of a point on land, at sea, or in space. It may be done within a coordinate system (point positioning or absolute positioning) or relative to another point (relative positioning). One computes the position of a point in space from measurements linking terrestrial or extraterrestrial points of known location ("known points") with terrestrial ones of unknown location ("unknown points"). The computation may involve transformations between or among astronomical and terrestrial coordinate systems. Known points used in point positioning can be GNSS continuously operating reference stations or triangulation points of a higher-order network.
Traditionally, geodesists built a hierarchy of networks to allow point positioning within a country. The highest in this hierarchy were triangulation networks, densified into the networks of traverses (polygons) into which local mapping and surveying measurements, usually collected using a measuring tape, a corner prism, and the red-and-white poles, are tied.
Commonly used nowadays is GPS, except for specialized measurements (e.g., in underground or high-precision engineering). The higher-order networks are measured with static GPS, using differential measurement to determine vectors between terrestrial points. These vectors then get adjusted in a traditional network fashion. A global polyhedron of permanently operating GPS stations under the auspices of the IERS is the basis for defining a single global, geocentric reference frame that serves as the "zero-order" (global) reference to which national measurements are attached.
Real-time kinematic positioning (RTK GPS) is employed frequently in survey mapping. In that measurement technique, unknown points can get quickly tied into nearby terrestrial known points.
One purpose of point positioning is the provision of known points for mapping measurements, also known as (horizontal and vertical) control. There can be thousands of those geodetically determined points in a country, usually documented by national mapping agencies. Surveyors involved in real estate and insurance will use these to tie their local measurements.
Geodetic problems
In geometrical geodesy, there are two main problems:
First geodetic problem (also known as direct or forward geodetic problem): given the coordinates of a point and the direction (azimuth) and distance to a second point, determine the coordinates of that second point.
Second geodetic problem (also known as inverse or reverse geodetic problem): given the coordinates of two points, determine the azimuth and length of the (straight, curved, or geodesic) line connecting those points.
The solutions to both problems in plane geometry reduce to simple trigonometry and are valid for small areas on Earth's surface; on a sphere, solutions become significantly more complex as, for example, in the inverse problem, the azimuths differ going between the two end points along the arc of the connecting great circle.
The general solution is called the geodesic for the surface considered, and the differential equations for the geodesic are solvable numerically. On the ellipsoid of revolution, geodesics are expressible in terms of elliptic integrals, which are usually evaluated in terms of a series expansion — see, for example, Vincenty's formulae.
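As a concrete but deliberately simplified example, the inverse problem can be solved on a sphere with the haversine formula and a bearing computation; accurate geodetic work uses ellipsoidal solutions such as Vincenty's formulae. The radius and test coordinates below are illustrative.

```python
import math

def inverse_on_sphere(lat1, lon1, lat2, lon2, radius_m=6_371_000.0):
    """Spherical approximation of the second geodetic problem: distance and forward azimuth."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # Great-circle distance via the haversine formula
    h = math.sin((p2 - p1) / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    distance = 2 * radius_m * math.asin(math.sqrt(h))
    # Initial bearing (azimuth), clockwise from north
    azimuth = math.degrees(math.atan2(
        math.sin(dlon) * math.cos(p2),
        math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon))) % 360
    return distance, azimuth

print(inverse_on_sphere(60.17, 24.94, 52.52, 13.40))  # roughly Helsinki to Berlin
```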
Observational concepts
As defined in geodesy (and also astronomy), some basic observational concepts like angles and coordinates include (most commonly from the viewpoint of a local observer):
Plumbline or vertical: (the line along) the direction of local gravity.
Zenith: the (direction to the) intersection of the upwards-extending gravity vector at a point and the celestial sphere.
Nadir: the (direction to the) antipodal point where the downward-extending gravity vector intersects the (obscured) celestial sphere.
Celestial horizon: a plane perpendicular to the gravity vector at a point.
Azimuth: the direction angle within the plane of the horizon, typically counted clockwise from the north (in geodesy and astronomy) or the south (in France).
Elevation: the angular height of an object above the horizon; alternatively, zenith distance, equal to 90 degrees minus elevation.
Local topocentric coordinates: azimuth (direction angle within the plane of the horizon), elevation angle (or zenith angle), distance.
North celestial pole: the extension of Earth's (precessing and nutating) instantaneous spin axis extended northward to intersect the celestial sphere. (Similarly for the south celestial pole.)
Celestial equator: the (instantaneous) intersection of Earth's equatorial plane with the celestial sphere.
Meridian plane: any plane perpendicular to the celestial equator and containing the celestial poles.
Local meridian: the plane which contains the direction to the zenith and the celestial pole.
Measurements
The reference surface (level) used to determine height differences and height reference systems is known as mean sea level. The traditional spirit level directly produces such (for practical purposes most useful) heights above sea level; the more economical use of GPS instruments for height determination requires precise knowledge of the figure of the geoid, as GPS only gives heights above the GRS80 reference ellipsoid. As geoid determination improves, one may expect that the use of GPS in height determination shall increase, too.
The theodolite is an instrument used to measure horizontal and vertical (relative to the local vertical) angles to target points. In addition, the tachymeter determines, electronically or electro-optically, the distance to a target and is highly automated or even robotic in operations. Widely used for the same purpose is the method of free station position.
Commonly for local detail surveys, tachymeters are employed, although the old-fashioned rectangular technique using an angle prism and steel tape is still an inexpensive alternative. As mentioned, also there are quick and relatively accurate real-time kinematic (RTK) GPS techniques. Data collected are tagged and recorded digitally for entry into Geographic Information System (GIS) databases.
Geodetic GNSS (most commonly GPS) receivers directly produce 3D coordinates in a geocentric coordinate frame. One such frame is WGS84, as well as frames by the International Earth Rotation and Reference Systems Service (IERS). GNSS receivers have almost completely replaced terrestrial instruments for large-scale base network surveys.
To monitor the Earth's rotation irregularities and plate tectonic motions and for planet-wide geodetic surveys, methods of very-long-baseline interferometry (VLBI) measuring distances to quasars, lunar laser ranging (LLR) measuring distances to prisms on the Moon, and satellite laser ranging (SLR) measuring distances to prisms on artificial satellites, are employed.
Gravity is measured using gravimeters, of which there are two kinds. First are absolute gravimeters, based on measuring the acceleration of free fall (e.g., of a reflecting prism in a vacuum tube). They are used to establish vertical geospatial control or in the field. Second, relative gravimeters are spring-based and more common. They are used in gravity surveys over large areas — to establish the figure of the geoid over these areas. The most accurate relative gravimeters are called superconducting gravimeters, which are sensitive to one-thousandth of one-billionth of Earth-surface gravity. Twenty-some superconducting gravimeters are used worldwide in studying Earth's tides, rotation, interior, oceanic and atmospheric loading, as well as in verifying the Newtonian constant of gravitation.
In the future, gravity and altitude might become measurable using the special-relativistic concept of time dilation as gauged by optical clocks.
Units and measures on the ellipsoid
Geographical latitude and longitude are stated in the units degree, minute of arc, and second of arc. They are angles, not metric measures, and describe the direction of the local normal to the reference ellipsoid of revolution. This direction is approximately the same as the direction of the plumbline, i.e., local gravity, which is also the normal to the geoid surface. For this reason, astronomical position determination – measuring the direction of the plumbline by astronomical means – works reasonably well when one also uses an ellipsoidal model of the figure of the Earth.
One geographical mile, defined as one minute of arc on the equator, equals 1,855.32571922 m. One nautical mile is one minute of astronomical latitude. The radius of curvature of the ellipsoid varies with latitude, being longest at the poles and shortest at the equator, and the length of the nautical mile varies accordingly.
A metre was originally defined as the 10-millionth part of the length from the equator to the North Pole along the meridian through Paris (the target was not quite reached in actual implementation, as it is off by 200 ppm in the current definitions). This means that one kilometre roughly equals (1/40,000) × 360 × 60 ≈ 0.54 meridional minutes of arc, i.e., about 0.54 nautical miles. (This is not exactly so, as the two units were defined on different bases; the international nautical mile is 1,852 m exactly, which corresponds to rounding 1,000/0.54 m to four digits.)
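The arithmetic in the preceding paragraph can be checked directly, assuming the nominal 40,000 km meridional circumference:

```python
circumference_m = 40_000_000
minutes_of_arc = 360 * 60

metres_per_minute = circumference_m / minutes_of_arc
print(metres_per_minute)         # ~1851.85 m per minute, versus the 1,852 m nautical mile
print(1000 / metres_per_minute)  # 1 km is ~0.54 minutes of arc (~0.54 nautical miles)
```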
Temporal changes
Various techniques are used in geodesy to study temporally changing surfaces, bodies of mass, physical fields, and dynamical systems. Points on Earth's surface change their location due to a variety of mechanisms:
Continental plate motion, plate tectonics
The episodic motion of tectonic origin, especially close to fault lines
Periodic effects due to tides and tidal loading
Postglacial land uplift due to isostatic adjustment
Mass variations due to hydrological changes, including the atmosphere, cryosphere, land hydrology, and oceans
Sub-daily polar motion
Length-of-day variability
Earth's center-of-mass (geocenter) variations
Anthropogenic movements such as reservoir construction or petroleum or water extraction
Geodynamics is the discipline that studies deformations and motions of Earth's crust and its solidity as a whole. Often the study of Earth's irregular rotation is included in the above definition. Geodynamical studies require terrestrial reference frames realized by the stations belonging to the Global Geodetic Observing System (GGOS).
Techniques for studying geodynamic phenomena on global scales include:
Satellite positioning by GPS, GLONASS, Galileo, and BeiDou
Very-long-baseline interferometry (VLBI)
Satellite laser ranging (SLR) and lunar laser ranging (LLR)
DORIS
Regionally and locally precise leveling
Precise tachymeters
Monitoring of gravity change using land, airborne, shipborne, and spaceborne gravimetry
Satellite altimetry based on microwave and laser observations for studying the ocean surface, sea level rise, and ice cover monitoring
Interferometric synthetic aperture radar (InSAR) using satellite images.
Notable geodesists
| Physical sciences | Geophysics | null |
12610 | https://en.wikipedia.org/wiki/Grand%20Unified%20Theory | Grand Unified Theory | A Grand Unified Theory (GUT) is any model in particle physics that merges the electromagnetic, weak, and strong forces (the three gauge interactions of the Standard Model) into a single force at high energies. Although this unified force has not been directly observed, many GUT models theorize its existence. If the unification of these three interactions is possible, it raises the possibility that there was a grand unification epoch in the very early universe in which these three fundamental interactions were not yet distinct.
Experiments have confirmed that at high energy, the electromagnetic interaction and weak interaction unify into a single combined electroweak interaction. GUT models predict that at even higher energy, the strong and electroweak interactions will unify into one electronuclear interaction. This interaction is characterized by one larger gauge symmetry and thus several force carriers, but one unified coupling constant. Unifying gravity with the electronuclear interaction would provide a more comprehensive theory of everything (TOE) rather than a Grand Unified Theory. Thus, GUTs are often seen as an intermediate step towards a TOE.
The novel particles predicted by GUT models are expected to have extremely high masses—around the GUT scale of about 10¹⁶ GeV (just three orders of magnitude below the Planck scale of about 10¹⁹ GeV)—and so are well beyond the reach of any foreseeable particle collider experiments. Therefore, the particles predicted by GUT models cannot be observed directly; instead, the effects of grand unification might be detected through indirect observations of the following:
proton decay,
electric dipole moments of elementary particles,
or the properties of neutrinos.
Some GUTs, such as the Pati–Salam model, predict the existence of magnetic monopoles.
While GUTs might be expected to offer simplicity over the complications present in the Standard Model, realistic models remain complicated because they need to introduce additional fields and interactions, or even additional dimensions of space, in order to reproduce observed fermion masses and mixing angles. This difficulty, in turn, may be related to the existence of family symmetries beyond the conventional GUT models. Due to this and the lack of any observed effect of grand unification so far, there is no generally accepted GUT model.
Models that do not unify the three interactions using one simple group as the gauge symmetry but do so using semisimple groups can exhibit similar properties and are sometimes referred to as Grand Unified Theories as well.
History
Historically, the first true GUT, which was based on the simple Lie group SU(5), was proposed by Howard Georgi and Sheldon Glashow in 1974. The Georgi–Glashow model was preceded by the semisimple Lie algebra Pati–Salam model, proposed also in 1974 by Abdus Salam and Jogesh Pati, who pioneered the idea of unifying gauge interactions.
The acronym GUT was first coined in 1978 by CERN researchers John Ellis, Andrzej Buras, Mary K. Gaillard, and Dimitri Nanopoulos, however in the final version of their paper they opted for the less anatomical GUM (Grand Unification Mass). Nanopoulos later that year was the first to use the acronym in a paper.
Motivation
The fact that the electric charges of electrons and protons seem to cancel each other exactly to extreme precision is essential for the existence of the macroscopic world as we know it, but this important property of elementary particles is not explained in the Standard Model of particle physics. While the description of strong and weak interactions within the Standard Model is based on gauge symmetries governed by the simple symmetry groups SU(3) and SU(2), which allow only discrete charges, the remaining component, the weak hypercharge interaction, is described by an abelian symmetry U(1) which in principle allows for arbitrary charge assignments. The observed charge quantization, namely the postulation that all known elementary particles carry electric charges which are exact multiples of one-third of the "elementary" charge, has led to the idea that hypercharge interactions and possibly the strong and weak interactions might be embedded in one Grand Unified interaction described by a single, larger simple symmetry group containing the Standard Model. This would automatically predict the quantized nature and values of all elementary particle charges. Since this also results in a prediction for the relative strengths of the fundamental interactions which we observe, in particular, the weak mixing angle, grand unification ideally reduces the number of independent input parameters but is also constrained by observations.
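One standard illustration of such a prediction, assuming the usual SU(5) hypercharge normalization and working at tree level, is the value of the weak mixing angle at the unification scale:

```latex
% The embedding of hypercharge in the unified group fixes the mixing of the
% SU(2) and U(1) couplings at the unification scale M_GUT:
\[
  \sin^2\theta_W \;=\; \frac{g'^2}{g^2 + g'^2} \;=\; \frac{3}{8}
  \quad \text{at } M_{\mathrm{GUT}},
\]
% which is then run down with the renormalization group for comparison with the
% measured low-energy value of roughly 0.23.
```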
Grand unification is reminiscent of the unification of electric and magnetic forces by Maxwell's field theory of electromagnetism in the 19th century, but its physical implications and mathematical structure are qualitatively different.
Unification of matter particles
SU(5)
SU(5) is the simplest GUT. The smallest simple Lie group which contains the standard model, and upon which the first Grand Unified Theory was based, is SU(5).
Such group symmetries allow the reinterpretation of several known particles, including the photon, W and Z bosons, and gluon, as different states of a single particle field. However, it is not obvious that the simplest possible choices for the extended "Grand Unified" symmetry should yield the correct inventory of elementary particles. The fact that all currently known matter particles fit perfectly into three copies of the smallest group representations of SU(5) and immediately carry the correct observed charges is one of the first and most important reasons why people believe that a Grand Unified Theory might actually be realized in nature.
The two smallest irreducible representations of SU(5) are the 5 (the defining representation) and the 10. (The numbers indicate the dimension of the representation.) In the standard assignment, the 5-bar (the conjugate of the defining representation) contains the charge conjugates of the right-handed down-type quark color triplet and a left-handed lepton isospin doublet, while the 10 contains the six up-type quark components, the left-handed down-type quark color triplet, and the right-handed electron. This scheme has to be replicated for each of the three known generations of matter. It is notable that the theory is anomaly free with this matter content.
The hypothetical right-handed neutrinos are singlets of SU(5), which means their mass is not forbidden by any symmetry; they do not need spontaneous electroweak symmetry breaking, which explains why their mass can be heavy (see seesaw mechanism).
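As a hedged bookkeeping check of the preceding two paragraphs (the particle labels follow standard particle-physics notation and are not symbols taken from the original text), one generation fills these SU(5) representations as
5-bar: the down-type antiquark (3 states) plus the lepton doublet (2 states), for 5 Weyl fermions;
10: the quark doublet (6 states), the up-type antiquark (3 states) and the positron (1 state), for 10 Weyl fermions.
That gives 15 Weyl fermions per generation, and adding the right-handed neutrino as an SU(5) singlet brings the count to the 16 states that later fit into a single spinor representation of SO(10).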
SO(10)
The next simple Lie group which contains the standard model is SO(10).
Here, the unification of matter is even more complete, since the irreducible 16-dimensional spinor representation contains both the 5-bar and 10 of SU(5) and a right-handed neutrino, and thus the complete particle content of one generation of the extended standard model with neutrino masses. This is already the largest simple group that achieves the unification of matter in a scheme involving only the already known matter particles (apart from the Higgs sector).
Since different standard model fermions are grouped together in larger representations, GUTs specifically predict relations among the fermion masses, such as between the electron and the down quark, the muon and the strange quark, and the tau lepton and the bottom quark for SU(5) and SO(10). Some of these mass relations hold approximately, but most do not (see Georgi–Jarlskog mass relation).
The boson matrix for SO(10) is found by taking the matrix from the representation of SU(5) and adding an extra row and column for the right-handed neutrino. The bosons are found by adding a partner to each of the 20 charged bosons (2 right-handed W bosons, 6 massive charged gluons and 12 X/Y type bosons) and adding an extra heavy neutral Z-boson to make 5 neutral bosons in total. The boson matrix will have a boson or its new partner in each row and column. These pairs combine to create the familiar 16D Dirac spinor matrices of SO(10).
E6
In some forms of string theory, including E8 × E8 heterotic string theory, the resultant four-dimensional theory after spontaneous compactification on a six-dimensional Calabi–Yau manifold resembles a GUT based on the group E6. Notably E6 is the only exceptional simple Lie group to have any complex representations, a requirement for a theory to contain chiral fermions (namely all weakly-interacting fermions). Hence the other four (G2, F4, E7, and E8) can't be the gauge group of a GUT.
Extended Grand Unified Theories
Non-chiral extensions of the Standard Model with vectorlike split-multiplet particle spectra, which naturally appear in the higher SU(N) GUTs, considerably modify the desert physics and lead to realistic (string-scale) grand unification for the conventional three quark-lepton families even without using supersymmetry (see below). On the other hand, due to a new missing-VEV mechanism emerging in the supersymmetric SU(8) GUT, a simultaneous solution to the gauge hierarchy (doublet-triplet splitting) problem and to the problem of unification of flavor can be argued for.
GUTs with four families / generations, SU(8): Assuming 4 generations of fermions instead of 3 makes a total of 64 types of particles. These can be put into representations of SU(8). This can be divided into SU(5) × SU(3)_F × U(1), which is the SU(5) theory together with some heavy bosons which act on the generation number.
GUTs with four families / generations, O(16): Again assuming 4 generations of fermions, the 128 particles and anti-particles can be put into a single spinor representation of O(16).
Symplectic groups and quaternion representations
Symplectic gauge groups could also be considered. For example, Sp(8) (which is called Sp(4) in the article symplectic group) has a representation in terms of 4 × 4 quaternion unitary matrices, which has a 16-dimensional real representation and so might be considered as a candidate for a gauge group. Sp(8) has 32 charged bosons and 4 neutral bosons. Its subgroups include SU(4), so it can at least contain the gluons and photon of SU(3) × U(1), although it is probably not possible to have weak bosons acting on chiral fermions in this representation. A quaternion representation of the fermions might be:
A further complication with quaternion representations of fermions is that there are two types of multiplication, left multiplication and right multiplication, which must be taken into account. It turns out that including left- and right-handed quaternion matrices is equivalent to including a single right-multiplication by a unit quaternion, which adds an extra SU(2) and so has an extra neutral boson and two more charged bosons. Thus the group of left- and right-handed quaternion matrices is Sp(8) × SU(2), which does include the standard model bosons:
If is a quaternion-valued spinor, is a quaternion Hermitian matrix coming from , and is a pure vector quaternion (both of which are 4-vector bosons), then the interaction term is:
Octonion representations
It can be noted that a generation of 16 fermions can be put into the form of an octonion with each element of the octonion being an 8-vector. If the 3 generations are then put in a 3×3 Hermitian matrix with certain additions for the diagonal elements, then these matrices form an exceptional (Grassmann) Jordan algebra, which has the symmetry group of one of the exceptional Lie groups (F4, E6, E7, or E8) depending on the details.
Because they are fermions, the anti-commutators of the Jordan algebra become commutators. It is known that E8 has E6 as a subgroup and so is big enough to include the Standard Model. An E8 gauge group, for example, would have 8 neutral bosons, 120 charged bosons and 120 charged anti-bosons. To account for the 248 fermions in the lowest multiplet of E8, these would either have to include anti-particles (and so have baryogenesis), have new undiscovered particles, or have gravity-like (spin connection) bosons affecting elements of the particles' spin direction. Each of these possibilities presents theoretical problems.
Beyond Lie groups
Other structures have been suggested, including Lie 3-algebras and Lie superalgebras. Neither of these fits with Yang–Mills theory. In particular, Lie superalgebras would introduce bosons with incorrect statistics. Supersymmetry, however, does fit with Yang–Mills.
Unification of forces and the role of supersymmetry
The unification of forces is possible due to the energy scale dependence of force coupling parameters in quantum field theory called renormalization group "running", which allows parameters with vastly different values at usual energies to converge to a single value at a much higher energy scale.
The renormalization group running of the three gauge couplings in the Standard Model has been found to nearly, but not quite, meet at the same point if the hypercharge is normalized so that it is consistent with SU(5) or SO(10) GUTs, which are precisely the GUT groups which lead to a simple fermion unification. This is a significant result, as other Lie groups lead to different normalizations. However, if the supersymmetric extension MSSM is used instead of the Standard Model, the match becomes much more accurate. In this case, the coupling constants of the strong and electroweak interactions meet at the grand unification energy, also known as the GUT scale:
approximately 10^16 GeV.
It is commonly believed that this matching is unlikely to be a coincidence, and is often quoted as one of the main motivations to further investigate supersymmetric theories despite the fact that no supersymmetric partner particles have been experimentally observed. Also, most model builders simply assume supersymmetry because it solves the hierarchy problem—i.e., it stabilizes the electroweak Higgs mass against radiative corrections.
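A minimal Python sketch of this one-loop running is given below. The beta-function coefficients are the conventional Standard Model and MSSM values; the inverse couplings at the Z mass are illustrative round numbers assumed for this sketch, and the MSSM coefficients are applied all the way down to the Z mass, ignoring superpartner thresholds, so the output is qualitative rather than a precise fit.

import math

# Illustrative inverse couplings at the Z mass (GUT-normalized hypercharge); assumed round values.
ALPHA_INV_MZ = {"U(1)_Y": 59.0, "SU(2)_L": 29.6, "SU(3)_c": 8.5}
MZ = 91.19  # GeV

# Conventional one-loop coefficients b_i, defined so that d(alpha_i^-1)/d ln(mu) = -b_i / (2*pi).
B_SM   = {"U(1)_Y": 41.0 / 10, "SU(2)_L": -19.0 / 6, "SU(3)_c": -7.0}
B_MSSM = {"U(1)_Y": 33.0 / 5,  "SU(2)_L": 1.0,       "SU(3)_c": -3.0}

def alpha_inv(group, mu, coeffs):
    # One-loop running of the inverse coupling from MZ up to the scale mu (in GeV).
    return ALPHA_INV_MZ[group] - coeffs[group] / (2 * math.pi) * math.log(mu / MZ)

for mu in (1e3, 1e13, 2e16):
    sm   = [round(alpha_inv(g, mu, B_SM), 1)   for g in ALPHA_INV_MZ]
    mssm = [round(alpha_inv(g, mu, B_MSSM), 1) for g in ALPHA_INV_MZ]
    print("mu = %.0e GeV   SM: %s   MSSM: %s" % (mu, sm, mssm))

With these inputs the three inverse couplings nearly meet, pairwise, between about 10^13 and 10^17 GeV in the Standard Model case, while in the MSSM case all three come much closer together near 2 × 10^16 GeV, which is the behaviour described above.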
Neutrino masses
Since Majorana masses of the right-handed neutrino are forbidden by SO(10) symmetry, SO(10) GUTs predict the Majorana masses of right-handed neutrinos to be close to the GUT scale, where the symmetry is spontaneously broken in those models. In supersymmetric GUTs, this scale tends to be larger than would be desirable to obtain realistic masses of the light, mostly left-handed neutrinos (see neutrino oscillation) via the seesaw mechanism. These predictions are independent of the Georgi–Jarlskog mass relations, wherein some GUTs predict other fermion mass ratios.
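As a rough, hedged illustration of the seesaw estimate invoked here (the input values below are assumptions chosen only to set the scale, not numbers from the text): with a Dirac mass m_D of order the electroweak scale and a right-handed Majorana mass M_R somewhat below the GUT scale,
m_ν ~ m_D^2 / M_R ~ (10^2 GeV)^2 / (10^15 GeV) = 10^-11 GeV ~ 10^-2 eV,
which is in the range suggested by neutrino oscillation data; pushing M_R all the way up to the GUT scale of 10^16 GeV makes the light neutrinos correspondingly lighter, which is the tension described above.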
Proposed theories
Several theories have been proposed, but none is currently universally accepted. An even more ambitious theory that includes all fundamental forces, including gravitation, is termed a theory of everything. Some common mainstream GUT models are:
Pati–Salam model — SU(4) × SU(2) × SU(2)
Georgi–Glashow model — SU(5); and Flipped SU(5) — SU(5) × U(1)
SO(10) model; and Flipped SO(10) — SO(10) × U(1)
E6 model; and Trinification — SU(3) × SU(3) × SU(3)
minimal left-right model — SU(3)C × SU(2)L × SU(2)R × U(1)B-L
331 model — SU(3)C × SU(3)L × U(1)
chiral color
Not quite GUTs:
Technicolor models
Little Higgs
String theory
Causal fermion systems
M-theory
Preons
Loop quantum gravity
Causal dynamical triangulation theory
Note: These models refer to Lie algebras, not to Lie groups; several distinct Lie groups can share the same Lie algebra, so the gauge group is not uniquely fixed by the model.
The most promising candidate is SO(10).
(Minimal) SO(10) does not contain any exotic fermions (i.e. additional fermions besides the Standard Model fermions and the right-handed neutrino), and it unifies each generation into a single irreducible representation. A number of other GUT models are based upon subgroups of SO(10). They are the minimal left-right model, SU(5), flipped SU(5) and the Pati–Salam model. The GUT group E6 contains SO(10), but models based upon it are significantly more complicated. The primary reason for studying E6 models comes from heterotic string theory.
GUT models generically predict the existence of topological defects such as monopoles, cosmic strings, domain walls, and others. But none have been observed. Their absence is known as the monopole problem in cosmology. Many GUT models also predict proton decay, although not the Pati–Salam model. As of now, proton decay has never been experimentally observed. The minimal experimental limit on the proton's lifetime essentially rules out minimal SU(5) and heavily constrains the other models. The lack of detected supersymmetry to date also constrains many models.
Some GUT theories, like SU(5) and SO(10), suffer from what is called the doublet-triplet problem. These theories predict that for each electroweak Higgs doublet, there is a corresponding colored Higgs triplet field with a very small mass (many orders of magnitude smaller than the GUT scale). In theory, unifying quarks with leptons, the Higgs doublet would also be unified with a Higgs triplet. Such triplets have not been observed. They would also cause extremely rapid proton decay (far below current experimental limits) and prevent the gauge coupling strengths from running together in the renormalization group.
Most GUT models require a threefold replication of the matter fields. As such, they do not explain why there are three generations of fermions. Most GUT models also fail to explain the little hierarchy between the fermion masses for different generations.
Ingredients
A GUT model consists of a gauge group which is a compact Lie group, a connection form for that Lie group, a Yang–Mills action for that connection given by an invariant symmetric bilinear form over its Lie algebra (which is specified by a coupling constant for each factor), a Higgs sector consisting of a number of scalar fields taking on values within real/complex representations of the Lie group and chiral Weyl fermions taking on values within a complex rep of the Lie group. The Lie group contains the Standard Model group and the Higgs fields acquire VEVs leading to a spontaneous symmetry breaking to the Standard Model. The Weyl fermions represent matter.
Current evidence
The discovery of neutrino oscillations indicates that the Standard Model is incomplete, but there is currently no clear evidence that nature is described by any Grand Unified Theory. Neutrino oscillations have led to renewed interest in certain GUTs such as SO(10).
Among the few possible experimental tests of certain GUTs are proton decay and fermion masses. There are a few more specialized tests for supersymmetric GUTs. However, minimum proton lifetimes from experiment (at or exceeding the ~10^34 year range) have ruled out simpler GUTs and most non-SUSY models.
The maximum upper limit on proton lifetime (if the proton is unstable) is calculated at 6×10^39 years for SUSY models and 1.4×10^36 years for minimal non-SUSY GUTs.
The gauge coupling strengths of QCD, the weak interaction and hypercharge seem to meet at a common energy scale called the GUT scale, equal approximately to 10^16 GeV (slightly less than the Planck energy of 10^19 GeV), which is somewhat suggestive. This interesting numerical observation is called gauge coupling unification, and it works particularly well if one assumes the existence of superpartners of the Standard Model particles. Still, it is possible to achieve the same by postulating, for instance, that ordinary (non-supersymmetric) SO(10) models break with an intermediate gauge scale, such as the one of the Pati–Salam group.
| Physical sciences | Particle physics: General | Physics |
12612 | https://en.wikipedia.org/wiki/General%20aviation | General aviation | General aviation (GA) is defined by the International Civil Aviation Organization (ICAO) as all civil aviation aircraft operations except for commercial air transport or aerial work, which is defined as specialized aviation services for other purposes. However, for statistical purposes, ICAO uses a definition of general aviation which includes aerial work.
General aviation thus represents the "private transport" and recreational components of aviation, most of which is accomplished with light aircraft.
Definition
The International Civil Aviation Organization (ICAO) defines civil aviation aircraft operations in three categories: General Aviation (GA), Aerial Work (AW) and Commercial Air Transport (CAT). Aerial work operations are separated from general aviation by ICAO by this definition. Aerial work is when an aircraft is used for specialized services such as agriculture, construction, photography, surveying, observation and patrol, search and rescue, and aerial advertisement. However, for statistical purposes ICAO includes aerial work within general aviation, and has proposed officially extending the definition of general aviation to include aerial work, to reflect common usage. The proposed ICAO classification includes instructional flying as part of general aviation (non-aerial-work).
The International Council of Aircraft Owner and Pilot Associations (IAOPA) refers to the category as general aviation/aerial work (GA/AW) to avoid ambiguity. Their definition of general aviation includes:
Corporate aviation: company own-use flight operations
Fractional ownership operations: aircraft operated by a specialized company on behalf of two or more co-owners
Business aviation (or travel): self-flown for business purposes
Personal/private travel: travel for personal reasons/personal transport
Air tourism: self-flown incoming/outgoing tourism
Recreational flying: powered/powerless leisure flying activities
Air sports: aerobatics, air races, competitions, rallies, etc.
General aviation thus includes both commercial and non-commercial activities.
IAOPA's definition of aerial work includes, but is not limited to:
Agricultural flights, including crop dusting
Banner towing
Aerial firefighting
Medical evacuation
Pilot training
Search and rescue
Sightseeing flights
Skydiving flights
Organ transplant transport flights
Commercial air transport includes:
Scheduled air services
Non-scheduled air transport
Air cargo services
Air taxi operations
However, in some countries, air taxi is regarded as being part of GA/AW.
Private flights are made in a wide variety of aircraft: light and ultra-light aircraft, sport aircraft, homebuilt aircraft, business aircraft (like private jets), gliders and helicopters. Flights can be carried out under both visual flight and instrument flight rules, and can use controlled airspace with permission.
The majority of the world's air traffic falls into the category of general aviation, and most of the world's airports serve GA exclusively. Flying clubs are considered a part of general aviation.
Geography
Europe
In 2003, the European Aviation Safety Agency was established as the central EU regulator, taking over responsibility for legislating airworthiness and environmental regulation from the national authorities.
United Kingdom
Of the 21,000 civil aircraft registered in the United Kingdom, 96 percent are engaged in GA operations, and annually the GA fleet accounts for between 1.25 and 1.35 million hours flown. There are 28,000 private pilot licence holders, and 10,000 certified glider pilots. Some of the 19,000 pilots who hold professional licences are also engaged in GA activities. GA operates from more than 1,800 airports and landing sites or aerodromes, ranging in size from large regional airports to farm strips.
GA is regulated by the Civil Aviation Authority. The main focus is on standards of airworthiness and pilot licensing, and the objective is to promote high standards of safety.
North America
General aviation is particularly popular in North America, with over 6,300 airports available for public use by pilots of general aviation aircraft (around 5,200 airports in the U.S. and over 1,000 in Canada). In comparison, scheduled flights operate from around 560 airports in the U.S. According to the U.S. Aircraft Owners and Pilots Association, general aviation provides more than one percent of the United States' GDP, accounting for 1.3 million jobs in professional services and manufacturing.
Regulation
Most countries have a civil aviation authority that oversees all civil aviation, including general aviation, adhering to the standardized codes of the International Civil Aviation Organization (ICAO).
Safety
Aviation accident rate statistics are necessarily estimates. According to the U.S. National Transportation Safety Board, general aviation in the United States (excluding charter) suffered 1.31 fatal accidents for every 100,000 hours of flying in 2005, compared to 0.016 for scheduled airline flights. In Canada, recreational flying accounted for 0.7 fatal accidents for every 1,000 aircraft, while air taxi accounted for 1.1 fatal accidents for every 100,000 hours. More experienced GA pilots appear generally safer, although the relationship between flight hours, accident frequency, and accident rate is complex and often difficult to assess.
A small number of commercial aviation accidents in the United States have involved collisions with general aviation flights, notably TWA Flight 553, Piedmont Airlines Flight 22, Allegheny Airlines Flight 853, PSA Flight 182 and Aeroméxico Flight 498.
| Technology | Concepts of aviation | null |
12628 | https://en.wikipedia.org/wiki/GIMP | GIMP | The GNU Image Manipulation Program, commonly known by its acronym GIMP, is a free and open-source raster graphics editor used for image manipulation (retouching) and image editing, free-form drawing, transcoding between different image file formats, and more specialized tasks. It is extensible by means of plugins, and scriptable. It is not designed to be used for drawing, though some artists and creators have used it in this way.
GIMP is part of the GNU project and released under the GNU General Public License (3.0-or-later) and is available for Linux, macOS, and Microsoft Windows.
History
In 1995, Spencer Kimball and Peter Mattis began developing GIMP—originally named General Image Manipulation Program—as a semester-long project at the University of California, Berkeley for the eXperimental Computing Facility. The acronym was coined first, with the letter G being added to -IMP as a reference to "the gimp" in the scene from the 1994 film Pulp Fiction.
The initial public release of GIMP (0.54) came in 1996. The editor was quickly adopted, and a community of contributors formed. The community began developing tutorials and artwork and sharing better workflows and techniques.
In the following year, Kimball and Mattis met with Richard Stallman of the GNU Project while he visited UC Berkeley and asked if they could change General in the application's name to GNU (the name of the operating system created by Stallman), and Stallman approved. The application subsequently formed part of the GNU software collection.
The first release only supported Unix systems, such as Linux, SGI IRIX and HP-UX. Since then, GIMP has been ported to other operating systems, including Microsoft Windows (1997, GIMP 1.1) and macOS.
A GUI toolkit called GTK (at the time known as the GIMP ToolKit) was developed to facilitate the development of GIMP. The development of the GIMP ToolKit has been attributed to Peter Mattis becoming disenchanted with the Motif toolkit GIMP originally used. Motif was used up to GIMP 0.60.
Mascot
GIMP's mascot is called Wilber and was created in GIMP by Tuomas Kuosmanen, known as tigert, on 25 September 1997. Wilber received additional accessories from other GIMP developers, which can be found in the Wilber Construction Kit, included in the GIMP source code as /docs/Wilber_Construction_Kit.xcf.gz.
Development
GIMP is primarily developed by volunteers as a free and open source software project associated with both the GNU and GNOME projects. Development takes place in a public git source code repository, on public mailing lists and in public chat channels on the GIMPNET IRC network.
New features are held in public separate source code branches and merged into the main (or development) branch when the GIMP team is sure they won't damage existing functions. Sometimes this means that features that appear complete do not get merged or take months or years before they become available in GIMP.
GIMP itself is released as source code. After a source code release, installers and packages are made for different operating systems by parties who might not be in contact with the maintainers of GIMP.
The version number used in GIMP is expressed in a major-minor-micro format, with each number carrying a specific meaning: the first (major) number is incremented only for major developments (and is currently 2). The second (minor) number is incremented with each release of new features, with odd numbers reserved for in-progress development versions and even numbers assigned to stable releases; the third (micro) number is incremented before and after each release (resulting in even numbers for releases, and odd numbers for development snapshots) with any bug fixes subsequently applied and released for a stable version.
GIMP has previously taken part in several rounds of the Google Summer of Code (GSoC). From 2006 to 2009 there were nine GSoC projects listed as successful, although not all successful projects were merged into GIMP immediately. The healing brush and perspective clone tools and Ruby bindings were created as part of the 2006 GSoC and can be used in version 2.8.0 of GIMP. Other completed projects became available in a stable version of GIMP later, among them Vector Layers (end of 2008, in 2.8 and master) and a JPEG 2000 plug-in (mid 2009, in 2.8 and master). Several of the GSoC projects were completed in 2008 but were merged into a stable GIMP release later, between 2009 and 2014, for versions 2.8.x and 2.10.x. Some of them needed more work on their code for the master tree.
The second public development version in the 2.9 series was 2.9.4, with many deep improvements following the initial public version 2.9.2. The third public 2.9 development version was 2.9.6. One of its new features was the removal of the 4 GB size limit on XCF files. The increase of the possible number of threads to 64 is also important for modern parallel execution on current AMD Ryzen and Intel Xeon processors. Version 2.9.8 included many bug fixes and improvements in gradients and clips. Improvements in performance and optimization beyond bug hunting were the development targets for 2.10.0. A macOS beta became available with version 2.10.4.
The next stable version in the roadmap is 3.0, with a GTK3 port. The 2.99 series is the development series leading up to 3.0. The first release candidate for version 3.0, RC1, was released on 6 November 2024.
GIMP developers meet during the annual Libre Graphics Meeting. Interaction designers from OpenUsability have also contributed to GIMP.
Versions
Distribution
The current version of GIMP works with numerous operating systems, including Linux, macOS and Windows. Many Linux distributions, such as Fedora Linux and Debian, include GIMP as a part of their desktop operating systems.
GIMP began to host its own downloads after discontinuing use of SourceForge in 2013. SourceForge later repossessed GIMP's dormant account and hosted advertising-laden versions of GIMP for Windows.
In 2022, GIMP was published on the Microsoft Store for Windows.
Professional reviews
Lifewire reviewed GIMP favorably in March 2019, writing that "[f]or those who have never experienced Photoshop, GIMP is simply a very powerful image manipulation program," and "[i]f you're willing to invest some time learning it, it can be a very good graphics tool."
GIMP's fitness for use in professional environments is regularly reviewed; it is often compared to and suggested as a possible replacement for Adobe Photoshop.
GIMP 2.6 was used to create nearly all of the art in Lucas the Game, an independent video game by developer Timothy Courtney. Courtney started development of Lucas the Game in early 2014, and the video game was published in July 2015 for PC and Mac. Courtney explains GIMP is a powerful tool, fully capable of large professional projects, such as video games.
The single-window mode introduced in GIMP 2.8 was reviewed in 2012 by Ryan Paul of Ars Technica, who noted that it made the user experience feel "more streamlined and less cluttered". Michael Burns, writing for Macworld in 2014, described the single-window interface of GIMP 2.8.10 as a "big improvement".
In his review of GIMP for ExtremeTech in October 2013, David Cardinal noted that GIMP's reputation of being hard to use and lacking features has "changed dramatically over the last couple years", and that it was "no longer a crippled alternative to Photoshop". He described GIMP's scripting as one of its strengths, but also remarked that some of Photoshop's features such as Text, 3D commands, Adjustment Layers and History are either less powerful or missing in GIMP. Cardinal favorably described the UFRaw converter for raw images used with GIMP, noting that it still "requires some patience to figure out how to use those more advanced capabilities". Cardinal stated that GIMP is "easy enough to try" despite not having as well developed documentation and help system as those for Photoshop, concluding that it "has become a worthy alternative to Photoshop for anyone on a budget who doesn't need all of Photoshop's vast feature set".
The user interface has been criticized for being "hard to use".
Features
Tools used to perform image editing can be accessed via the toolbox, through menus and dialogue windows. They include filters and brushes, as well as transformation, selection, layer and masking tools. GIMP's developers have asserted that it has, or at least aspires to, functionality similar to Photoshop's, but with a different user interface. Also, as of 2024 and version 2.10, a fundamental and essential difference between GIMP, on one hand, and major commercial software like Photoshop and Serif Affinity Photo, on the other, is that very few of GIMP's editing operations occur as non-destructive edits, unlike the main commercial software.
Color
There are several ways of selecting colors, including palettes, color choosers and using an eyedropper tool to select a color on the canvas. The built-in color choosers include RGB/HSV/LAB/LCH selector or scales, water-color selector, CMYK selector and a color-wheel selector. Colors can also be selected using hexadecimal color codes, as used in HTML color selection. GIMP has native support for indexed color and RGB color spaces; other color spaces are supported using decomposition, where each channel of the new color space becomes a black-and-white image. CMYK, LAB and HSV (hue, saturation, value) are supported this way. Color blending can be achieved using the Blend tool, by applying a gradient to the surface of an image and using GIMP's color modes. Gradients are also integrated into tools such as the brush tool, when the user paints this way the output color slowly changes. There are a number of default gradients included with GIMP; a user can also create custom gradients with tools provided. Gradient plug-ins are also available.
Selections and paths
GIMP selection tools include a rectangular and circular selection tool, free select tool, and fuzzy select tool (also known as magic wand). More advanced selection tools include the select by color tool for selecting contiguous regions of color—and the scissors select tool, which creates selections semi-automatically between areas of highly contrasting colors. GIMP also supports a quick mask mode where a user can use a brush to paint the area of a selection. Visibly this looks like a red colored overlay being added or removed. The foreground select tool is an implementation of Simple interactive object extraction (SIOX), a method used to perform the extraction of foreground elements, such as a person or a tree in focus. The Paths Tool allows a user to create vectors (also known as Bézier curves). Users can use paths to create complex selections, including around natural curves. They can paint (or "stroke") the paths with brushes, patterns, or various line styles. Users can name and save paths for reuse.
Image editing
There are many tools that can be used for editing images in GIMP. The more common tools include a paint brush, pencil, airbrush, eraser and ink tools used to create new or blended pixels. The Bucket Fill tool can be used to fill a selection with a color or pattern. The Blend tool can be used to fill a selection with a color gradient. These color transitions can be applied to large regions or smaller custom path selections.
GIMP also provides "smart" tools that use a more complex algorithm to do things that otherwise would be time-consuming or impossible. These include:
Clone tool, which copies pixels using a brush
Healing brush, which copies pixels from an area and corrects tone and color
Perspective clone tool, which works like the clone tool but corrects for distance changes
Blur and sharpen tools
The Smudge tool can be used to subtly smear a selection where it stands
Dodge and burn tool is a brush that makes target pixels lighter (dodges) or darker (burns)
Layers, layer masks and channels
An image being edited in GIMP can consist of many layers in a stack. The user manual suggests that "A good way to visualize a GIMP image is as a stack of transparencies," where in GIMP terminology, each level (analogous to a transparency) is called a layer. Each layer in an image is made up of several channels. In an RGB image, there are normally three or four channels: a red, a green and a blue channel, and possibly a fourth. Color sublayers look like slightly different gray images, but when put together they make a complete image. The fourth channel that may be part of a layer is the alpha channel (or layer mask). This channel measures opacity where a whole or part of an image can be completely visible, partially visible or invisible. Each layer has a layer mode that can be set to change the colors in the image.
Text layers can be created using the text tool, allowing a user to write on an image. Text layers can be transformed in several ways, such as converting them to a path or selection.
Automation, scripts and plug-ins
GIMP has approximately 150 standard effects and filters, including Drop Shadow, Blur, Motion Blur and Noise.
GIMP operations can be automated with scripting languages. Script-Fu is a Scheme-based language implemented using a TinyScheme interpreter built into GIMP. GIMP can also be scripted in Perl, Python (Python-Fu), or Tcl, using interpreters external to GIMP. New features can be added to GIMP not only by changing program code (GIMP core), but also by creating plug-ins. These are external programs that are executed and controlled by the main GIMP program. MathMap is an example of a plug-in written in C.
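As a hedged illustration of this scripting support, the following minimal Python-Fu sketch creates and displays a small filled image. It is meant for the GIMP 2.x Python-Fu console; the procedure and constant names follow the 2.8/2.10 procedure database as best recalled and should be checked against GIMP's Procedure Browser before use.

from gimpfu import *

def make_placeholder(width=256, height=256):
    # Create a new RGB image with a single layer and fill it with the foreground color.
    image = pdb.gimp_image_new(width, height, RGB)
    layer = pdb.gimp_layer_new(image, width, height, RGB_IMAGE,
                               "background", 100, NORMAL_MODE)
    pdb.gimp_image_insert_layer(image, layer, None, -1)
    pdb.gimp_context_set_foreground((230, 230, 230))
    pdb.gimp_edit_fill(layer, FOREGROUND_FILL)  # fills the whole layer while no selection is active
    pdb.gimp_display_new(image)                 # show the result in a new image window
    return image

make_placeholder()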
There is support for several methods of sharpening and blurring images, including the blur and sharpen tool. The unsharp mask tool is used to sharpen an image selectively – it sharpens only those areas of an image that are sufficiently detailed. The Unsharp Mask tool is considered to give more targeted results for photographs than a normal sharpening filter. The Selective Gaussian Blur tool works in a similar way, except it blurs areas of an image with little detail.
GIMP-ML is an extension for machine learning with 15 filters.
GEGL
The Generic Graphics Library (GEGL) was first introduced as part of GIMP in the 2.6 release. This initial introduction did not yet exploit all of the capabilities of GEGL; as of the 2.6 release, GIMP could use GEGL to perform high bit-depth color operations, and because of this, less information is lost when performing color operations. When GEGL is fully integrated, GIMP will have a higher color bit depth and a better non-destructive workflow. GIMP 2.8.x supports only 8-bit color, which is much lower than what digital cameras produce (12-bit or higher). Full support for high bit depth is included with GIMP 2.10. OpenCL enables hardware acceleration for some operations.
CTX
CTX is a new rasterizer for vector graphics in GIMP 3.0. Some simple objects, like lines and circles, can be reduced to vector objects.
File formats
GIMP supports importing and exporting with a large number of different file formats. GIMP's native format XCF is designed to store all information GIMP can contain about an image; XCF is named after the eXperimental Computing Facility where GIMP was authored. Import and export capability can be extended to additional file formats by means of plug-ins. Since version 2.9.6 and the new stable 2.10.x tree, XCF files can be larger than 4 GB.
Forks and derivatives
Because of the free and open-source nature of GIMP, several forks, variants and derivatives of the computer program have been created to fit the needs of their creators. While GIMP is cross-platform, variants of GIMP may not be. These variants are neither hosted nor linked on the GIMP site. The GIMP site does not host GIMP builds for Windows or Unix-like operating systems either, although it does include a link to a Windows build.
Forks
CinePaint: Formerly Film Gimp, it is a fork of GIMP version 1.0.4, used for frame-by-frame retouching of feature film. CinePaint supports up to 32-bit IEEE-floating point color depth per channel, as well as color management and HDR. CinePaint is used primarily within the film industry due mainly to its support of high-fidelity image formats. It is available for BSD, Linux, and macOS.
GIMP classic: A patch against GIMP v2.6.8 source code created to undo changes made to the user interface in GIMP v2.4 through v2.6. A build of GIMP classic for Ubuntu is available. As of March 2011, a new patch could be downloaded that patches against the experimental GIMP v2.7.
GIMP Portable: A portable version of GIMP for Microsoft Windows XP or later that preserves brushes and presets between computers.
GIMPshop: Derivative that aimed to replicate the Adobe Photoshop in some form. Development of GIMPshop was halted in 2006 and the project disavowed by the developer, Scott Moschella, after an unrelated party registered "GIMPshop" as part of an Internet domain name and passed off the website as belonging to Moschella while accepting donations and making revenue from advertising but passing on none of the income to Moschella.
GimPhoto: GimPhoto follows the Photoshop-UI tradition of GIMPshop. More modifications are possible with the GimPad tool. GimPhoto stands at version 24.1 for Linux and Windows (based on GIMP v2.4.3) and version 26.1 on macOS (based on GIMP v2.6.8). Installers are included for Windows 7, 8.1, and 10; macOS 10.6+; Ubuntu 14 and Fedora; as well as source code. Only one developer is at work in this project, so fast updates and new versions based on Gimp 2.8.x or 2.9.x are not planned.
McGimp: An independent port for macOS that aims to run GIMP directly on this platform and integrates multiple plug-ins intended to optimize photos.
Seashore: An easier-to-use image editing application for macOS.
Glimpse: a discontinued fork of GIMP that was started because the word "gimp" is also a derogatory word for disabled people.
Extensions
GIMP's functionality can be extended with plugins. Notable ones include:
GIMP-ML, which provides machine learning-based image enhancement. A port of GIMP-ML to Python 3 is the next development target.
GIMP Animation Package (GAP), official plugin for creating animations. GAP can save animations in several formats, including GIF and AVI.
Resynthesizer, which provides content-aware fill. Original part of Paul Harrison's PhD thesis, now maintained by Lloyd Konneker.
G'MIC, which adds image filters and effects.
| Technology | Multimedia_2 | null |
12630 | https://en.wikipedia.org/wiki/Geometric%20series | Geometric series | In mathematics, a geometric series is a series summing the terms of an infinite geometric sequence, in which the ratio of consecutive terms is constant. For example, the series 1/2 + 1/4 + 1/8 + 1/16 + ... is a geometric series with common ratio 1/2, which converges to the sum of 1. Each term in a geometric series is the geometric mean of the term before it and the term after it, in the same way that each term of an arithmetic series is the arithmetic mean of its neighbors.
While Greek philosopher Zeno's paradoxes about time and motion (5th century BCE) have been interpreted as involving geometric series, such series were formally studied and applied a century or two later by Greek mathematicians, for example used by Archimedes to calculate the area inside a parabola (3rd century BCE). Today, geometric series are used in mathematical finance, calculating areas of fractals, and various computer science topics.
Though geometric series most commonly involve real or complex numbers, there are also important results and applications for matrix-valued geometric series, function-valued geometric series, p-adic number geometric series, and most generally geometric series of elements of abstract algebraic fields, rings, and semirings.
Definition and examples
The geometric series is an infinite series derived from a special type of sequence called a geometric progression. This means that it is the sum of infinitely many terms of a geometric progression: starting from the initial term a, each subsequent term is the previous term multiplied by a constant number known as the common ratio r. By multiplying each term with the common ratio continuously, the geometric series can be defined mathematically as
a + ar + ar^2 + ar^3 + ...,
the sum running over all non-negative integer powers of r.
The sum of a finite initial segment of an infinite geometric series is called a finite geometric series, that is
a + ar + ar^2 + ... + ar^n.
When r > 1, it is often called a growth rate or rate of expansion. When 0 < r < 1, it is often called a decay rate or shrink rate, where the idea that it is a "rate" comes from interpreting the term index as a sort of discrete time variable. When an application area has specialized vocabulary for specific types of growth, expansion, shrinkage, and decay, that vocabulary will also often be used to name parameters of geometric series. In economics, for instance, rates of increase and decrease of price levels are called inflation rates and deflation rates, while rates of increase in values of investments include rates of return and interest rates.
When summing infinitely many terms, the geometric series can either be convergent or divergent. Convergence means there is a value after summing infinitely many terms, whereas divergence means no value after summing. The convergence of a geometric series can be described depending on the value of the common ratio, as discussed in the next section. Grandi's series is an example of a divergent series that can be expressed as 1 - 1 + 1 - 1 + ..., where the initial term is 1 and the common ratio is -1; it diverges because its partial sums alternate between the two values 1 and 0 and never settle on a single value.
Decimal numbers that have repeated patterns that continue forever can be interpreted as geometric series and thereby converted to expressions of the ratio of two integers. For example, the repeating decimal 0.7777... can be written as the geometric series 7/10 + 7/100 + 7/1000 + 7/10000 + ..., where the initial term is a = 7/10 and the common ratio is r = 1/10.
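Applying the closed form a / (1 - r) derived further below, this example indeed evaluates to a ratio of two integers:
0.7777... = (7/10) / (1 - 1/10) = (7/10) / (9/10) = 7/9.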
Convergence of the series and its proof
The convergence of the infinite sequence of partial sums of the infinite geometric series depends on the magnitude of the common ratio alone:
If |r| < 1, the terms of the series approach zero (becoming smaller and smaller in magnitude) and the sequence of partial sums converges to a limit value of a / (1 - r).
If |r| > 1, the terms of the series become larger and larger in magnitude and the partial sums of the terms also get larger and larger in magnitude, so the series diverges.
If |r| = 1, the terms of the series become no larger or smaller in magnitude and the sequence of partial sums of the series does not converge. When r = 1, all the terms of the series are the same and the partial sums grow to infinity. When r = -1, the terms take two values a and -a alternately, and therefore the sequence of partial sums of the terms oscillates between the two values a and 0. One example can be found in Grandi's series. When r = i and a = 1, the partial sums circulate periodically among the values 1, 1 + i, i, 0, never converging to a limit. Generally, when r is a primitive n-th root of unity with n > 1 and a is nonzero, the partial sums of the series will circulate indefinitely with a period of n, never converging to a limit.
The rate of convergence shows how quickly the sequence approaches its limit. In the case of the geometric series—the relevant sequence is the sequence of partial sums S_n and its limit is S = a / (1 - r)—the rate and order of convergence are found from the limit of |S_(n+1) - S| / |S_n - S|^q as n grows,
where q represents the order of convergence. Using the closed form for S_n and choosing the order of convergence q = 1 gives a rate of convergence equal to |r|: each successive error |S_n - S| shrinks by a factor of |r|.
When the series converges, the rate of convergence gets slower as |r| approaches 1. The pattern of convergence also depends on the sign or complex argument of the common ratio. If 0 < r < 1, the terms all share the same sign and the partial sums of the terms approach their eventual limit monotonically. If -1 < r < 0, adjacent terms in the geometric series alternate between positive and negative, and the partial sums of the terms oscillate above and below their eventual limit. For complex r with |r| < 1, the partial sums converge in a spiraling pattern.
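A small numeric check of this behaviour in Python, with illustrative values a = 1 and r = 0.5 assumed for the sketch:

a, r = 1.0, 0.5
limit = a / (1 - r)                  # limit of the partial sums
partial, errors = 0.0, []
for n in range(10):
    partial += a * r**n              # add the next term a*r^n
    errors.append(abs(partial - limit))
for previous, current in zip(errors, errors[1:]):
    print(current / previous)        # each printed ratio equals 0.5, the common ratio

For a geometric series the error shrinks by exactly the factor |r| at every step, which is why every ratio printed above is 0.5.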
The convergence is proved as follows. The partial sum of the first n + 1 terms of a geometric series, up to and including the ar^n term,
S_n = a + ar + ar^2 + ... + ar^n,
is given by the closed form
S_n = a (1 - r^(n+1)) / (1 - r),
where r is the common ratio. The case r = 1 is merely a simple addition, S_n = a (n + 1), a case of an arithmetic series. The formula for the partial sums with r ≠ 1 can be derived as follows: multiplying the partial sum by r gives r S_n = ar + ar^2 + ... + ar^(n+1), and subtracting this from S_n leaves S_n - r S_n = a - ar^(n+1), so that
S_n = a (1 - r^(n+1)) / (1 - r)
for r ≠ 1. As r approaches 1, polynomial division or L'Hospital's rule recovers the case S_n = a (n + 1).
As n approaches infinity, the absolute value of r must be less than one for this sequence of partial sums to converge to a limit. When it does, the series converges absolutely. The infinite series then becomes
S = a + ar + ar^2 + ar^3 + ... = a / (1 - r)
for |r| < 1.
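A quick numeric sanity check of these closed forms in Python, using assumed illustrative values a = 3 and r = 0.2:

a, r = 3.0, 0.2
for n in (1, 5, 20):
    direct = sum(a * r**k for k in range(n + 1))     # term-by-term partial sum
    closed = a * (1 - r**(n + 1)) / (1 - r)          # closed form for S_n
    print(n, direct, closed, a / (1 - r))            # both columns approach a/(1-r) = 3.75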
This convergence result is widely applied to prove the convergence of other series as well, whenever those series's terms can be bounded from above by a suitable geometric series; that proof strategy is the basis for the ratio test and root test for the convergence of infinite series.
Connection to the power series
Like the geometric series, a power series has one parameter for a common variable x raised to successive powers, corresponding to the geometric series's common ratio r, but it has additional parameters, one for each term in the series, for the distinct coefficients of each x^k, rather than just a single additional parameter for all terms, the common coefficient a in each term of a geometric series. The geometric series can therefore be considered a class of power series in which the sequence of coefficients is constant and equal to a, and in which the variable x equals the common ratio r.
This special class of power series plays an important role in mathematics, for instance for the study of ordinary generating functions in combinatorics and the summation of divergent series in analysis. Many other power series can be written as transformations and combinations of geometric series, making the geometric series formula a convenient tool for calculating formulas for those power series as well.
As a power series, the geometric series has a radius of convergence of 1. This could be seen as a consequence of the Cauchy–Hadamard theorem and the fact that the n-th root of |a| approaches 1 for any nonzero a, or as a consequence of the ratio test for the convergence of infinite series, with the ratio of consecutive terms having magnitude |x| and so implying convergence only for |x| < 1. However, both the ratio test and the Cauchy–Hadamard theorem are proven using the geometric series formula as a logically prior result, so such reasoning would be subtly circular.
Background
2,500 years ago, Greek mathematicians believed that an infinitely long list of positive numbers must sum to infinity. Therefore, Zeno of Elea created a paradox, demonstrating as follows: in order to walk from one place to another, one must first walk half the distance there, and then half of the remaining distance, and half of that remaining distance, and so on, covering infinitely many intervals before arriving. In doing so, he partitioned a fixed distance into an infinitely long list of halved remaining distances, each with a length greater than zero. Zeno's paradox revealed to the Greeks that their assumption about an infinitely long list of positive numbers needing to add up to infinity was incorrect.
Euclid's Elements has the distinction of being the world's oldest continuously used mathematical textbook, and it includes a demonstration of the sum of finite geometric series in Book IX, Proposition 35, illustrated in an adjacent figure.
Archimedes in his The Quadrature of the Parabola used the sum of a geometric series to compute the area enclosed by a parabola and a straight line. Archimedes' theorem states that the total area under the parabola is 4/3 of the area of the blue triangle. His method was to dissect the area into infinitely many triangles as shown in the adjacent figure. He determined that each green triangle has 1/8 the area of the blue triangle, each yellow triangle has 1/8 the area of a green triangle, and so forth. Assuming that the blue triangle has area 1, then, the total area is the sum of the infinite series
1 + 2(1/8) + 4(1/8)^2 + 8(1/8)^3 + ....
Here the first term represents the area of the blue triangle, the second term is the area of the two green triangles, the third term is the area of the four yellow triangles, and so on. Simplifying the fractions gives
1 + 1/4 + 1/16 + 1/64 + ...,
a geometric series with common ratio r = 1/4, and its sum is 1 / (1 - 1/4) = 4/3.
In addition to his elegantly simple proof of the divergence of the harmonic series, Nicole Oresme proved that the arithmetico-geometric series known as Gabriel's Staircase, 1/2 + 2/4 + 3/8 + 4/16 + ..., sums to 2.
The diagram for his geometric proof, similar to the adjacent diagram, shows a two-dimensional geometric series. The first dimension is horizontal, in the bottom row, representing the geometric series with initial value 1/2 and common ratio 1/2, whose sum is 1/2 + 1/4 + 1/8 + ... = 1.
The second dimension is vertical, where the bottom row is a new initial term and each subsequent row above it shrinks according to the same common ratio 1/2, making another geometric series with sum 1 + 1/2 + 1/4 + ... = 2.
This approach generalizes usefully to higher dimensions, and that generalization is described below.
Applications
As mentioned above, the geometric series can be applied in the field of economics, where the common ratio may represent rates of increase and decrease of price levels (inflation and deflation rates) or rates of increase in the values of investments (rates of return and interest rates). More specifically in mathematical finance, geometric series can also be applied to the time value of money; that is, to represent the present values of perpetual annuities, sums of money to be paid each year indefinitely into the future. This sort of calculation is used to compute the annual percentage rate of a loan, such as a mortgage loan. It can also be used to estimate the present value of expected stock dividends, or the terminal value of a financial asset assuming a stable growth rate. However, the assumption that interest rates are constant is generally incorrect and payments are unlikely to continue forever since the issuer of the perpetual annuity may lose its ability or end its commitment to make continued payments, so estimates like these are only heuristic guidelines for decision making rather than scientific predictions of actual current values.
In addition to finding the area enclosed by a parabola and a line in Archimedes' The Quadrature of the Parabola, the geometric series may also be applied in finding the Koch snowflake's area, described as the union of infinitely many equilateral triangles (see figure). Each side of the green triangle is exactly 1/3 the size of a side of the large blue triangle and therefore has exactly 1/9 the area. Similarly, each yellow triangle has 1/9 the area of a green triangle, and so forth. All of these triangles can be represented in terms of geometric series: the blue triangle's area is the first term, the three green triangles' area is the second term, the twelve yellow triangles' area is the third term, and so forth. Excluding the initial 1, this series has a common ratio r = 4/9, and by taking the blue triangle as a unit of area, the total area of the snowflake is 1 + (1/3) / (1 - 4/9) = 8/5.
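A short Python check of this value, summing the first few generations of added triangles (taking the blue triangle as a unit of area, as above):

total = 1.0                      # area of the blue triangle
added = 1.0 / 3.0                # first generation: 3 green triangles of area 1/9 each
for _ in range(30):
    total += added
    added *= 4.0 / 9.0           # each generation has 4 times as many triangles, each 1/9 the area
print(total)                     # approaches 8/5 = 1.6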
Various topics in computer science may include the application of geometric series in the following:
Algorithm analysis: analyzing the time complexity of recursive algorithms (like divide-and-conquer) and in amortized analysis for operations with varying costs, such as dynamic array resizing (see the sketch after this list).
Data structures: analyzing the space and time complexities of operations in data structures like balanced binary search trees and heaps.
Computer graphics: crucial in rendering algorithms for anti-aliasing, for mipmapping, and for generating fractals, where the scale of detail varies geometrically.
Networking and communication: modelling retransmission delays in exponential backoff algorithms and are used in data compression and error-correcting codes for efficient communication.
Probabilistic and randomized algorithms: analyzing random walks, Markov chains, and geometric distributions, which are essential in probabilistic and randomized algorithms.
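As promised in the first bullet above, here is a hedged Python sketch of the amortized-analysis argument for dynamic array resizing. Capacity doubling is an assumption of the illustration; the point is that the copying work done by all resizes forms a geometric series 1 + 2 + 4 + ... bounded by twice the number of appends.

def total_copy_cost(n):
    # Count elements copied by all resizes while appending n items to a doubling array.
    copies, capacity = 0, 1
    for size in range(1, n + 1):
        if size > capacity:      # array is full: allocate double the space and copy everything over
            copies += capacity
            capacity *= 2
    return copies

for n in (10, 1000, 100000):
    print(n, total_copy_cost(n), total_copy_cost(n) / n)   # ratio stays below 2: O(1) amortized appends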
Beyond real and complex numbers
While geometric series with real and complex number parameters a and r are most common, geometric series of more general terms such as functions, matrices, and p-adic numbers also find application. The mathematical operations used to express a geometric series given its parameters are simply addition and repeated multiplication, and so it is natural, in the context of modern algebra, to define geometric series with parameters from any ring or field. Further generalization to geometric series with parameters from semirings is more unusual, but also has applications; for instance, in the study of fixed-point iteration of transformation functions, as in transformations of automata via rational series.
In order to analyze the convergence of these general geometric series, then on top of addition and multiplication, one must also have some metric of distance between partial sums of the series. This can introduce new subtleties into the questions of convergence, such as the distinctions between uniform convergence and pointwise convergence in series of functions, and can lead to strong contrasts with intuitions from the real numbers, such as in the convergence of the series 1 + 2 + 4 + 8 + ..., with a = 1 and r = 2, to -1 in the 2-adic numbers using the 2-adic absolute value as a convergence metric. In that case, the 2-adic absolute value of the common ratio is 1/2, and while this is counterintuitive from the perspective of real number absolute value (where |2| = 2, naturally), it is nonetheless well-justified in the context of p-adic analysis.
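A brief Python illustration of that 2-adic statement: the partial sums of 1 + 2 + 4 + ... are 2^(n+1) - 1, and their 2-adic distance to -1 is the 2-adic absolute value of 2^(n+1), which shrinks to zero even though the real-valued distance grows without bound.

from fractions import Fraction

def two_adic_abs(x):
    # 2-adic absolute value of a nonzero rational number x.
    x = Fraction(x)
    num, den, v = x.numerator, x.denominator, 0
    while num % 2 == 0:
        num //= 2
        v += 1
    while den % 2 == 0:
        den //= 2
        v -= 1
    return Fraction(1, 2) ** v

for n in range(1, 8):
    partial = sum(2**k for k in range(n + 1))            # equals 2**(n+1) - 1
    print(n, partial, two_adic_abs(partial - (-1)))      # 2-adic distance to -1 shrinks toward 0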
When the multiplication of the parameters is not commutative, as it often is not for matrices or general physical operators, particularly in quantum mechanics, then the standard way of writing the geometric series, a + ar + ar^2 + ar^3 + ..., multiplying from the right, may need to be distinguished from the alternative a + ra + r^2 a + r^3 a + ..., multiplying from the left, and also from the symmetric version multiplying half on each side. These choices may correspond to important alternatives with different strengths and weaknesses in applications, as in the case of ordering the mutual interferences of drift and diffusion differently at infinitesimal temporal scales in Itô integration and Stratonovich integration in stochastic calculus.
| Mathematics | Sequences and series | null |