Nuclear magnetic resonance crystallography (NMR crystallography) is a method that uses primarily NMR spectroscopy to determine the structure of solid materials on the atomic scale. Solid-state NMR spectroscopy is thus the principal tool, possibly supplemented by quantum chemistry calculations (e.g. density functional theory), powder diffraction and related methods. If suitable crystals can be grown, crystallographic diffraction methods would generally be preferred to determine the crystal structure, which for organic compounds comprises the molecular structures and the molecular packing. The main interest in NMR crystallography is in microcrystalline materials that are amenable to this method but not to X-ray, neutron or electron diffraction, largely because the interactions measured in NMR crystallography are of comparatively short range.
== Introduction ==
When applied to organic molecules, NMR crystallography aims to include structural information not only on a single molecule but also on the molecular packing (i.e. the crystal structure). In contrast to X-ray crystallography, single crystals are not necessary for solid-state NMR, and structural information can be obtained from high-resolution spectra of disordered solids. Polymorphism, for example, is an area of interest for NMR crystallography, since it is encountered occasionally (and may often be previously undiscovered) in organic compounds. In such cases a change in the molecular structure and/or in the molecular packing can lead to polymorphism, which can be investigated by NMR crystallography.
== Dipolar couplings-based approach ==
The spin interaction that is usually employed for structural analyses via solid state NMR spectroscopy is the magnetic dipolar interaction.
Additional knowledge about other interactions within the studied system, such as the chemical shift or the electric quadrupole interaction, can be helpful as well; in some cases the chemical shift alone has been employed, e.g. for zeolites.
The dipolar-coupling-based approach parallels protein NMR spectroscopy to some extent: for proteins in solution, multiple residual dipolar couplings are measured and used as constraints in the protein structure calculation.
In NMR crystallography of organic molecules, the observed spins are often spin-1/2 nuclei of moderate magnetogyric ratio (13C, 15N, 31P, etc.). 1H is usually excluded because its large magnetogyric ratio and high spin concentration lead to a network of strong homonuclear dipolar couplings. There are two common ways to exploit 1H nonetheless: 1H spin diffusion experiments (see below) and specific labelling with 2H spins (spin 1). The latter is also popular, e.g., in NMR spectroscopic investigations of hydrogen bonds in solution and in the solid state.
Both intra- and intermolecular structural elements can be investigated e.g. via deuterium REDOR (an established solid state NMR pulse sequence to measure dipolar couplings between deuterons and other spins).
This can provide an additional constraint for an NMR crystallographic structural investigation in that it can be used to find and characterize e.g. intermolecular hydrogen bonds.
=== Dipolar interaction ===
The above-mentioned dipolar interaction can be measured directly, e.g. between pairs of heteronuclear spins such as 13C/15N in many organic compounds. Furthermore, the strength of the dipolar interaction modulates parameters such as the longitudinal relaxation time or the spin diffusion rate, which can therefore be examined to obtain structural information. 1H spin diffusion, for example, has been measured and provides rich structural information.
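Because the dipolar coupling falls off with the inverse cube of the internuclear distance, it acts as a molecular ruler. As a rough illustration (not any specific published protocol), the following Python sketch evaluates the coupling constant for an isolated 13C–15N spin pair from the standard expression d = -(μ0/4π)·γ1·γ2·ħ/r³; the gyromagnetic ratios are standard tabulated values, and the distances are arbitrary examples.

```python
import math

MU0_OVER_4PI = 1e-7        # SI value of mu_0 / (4 pi)
HBAR = 1.054571817e-34     # reduced Planck constant, J s

# Gyromagnetic ratios in rad s^-1 T^-1 (standard tabulated values)
GAMMA = {"1H": 2.6752219e8, "13C": 6.728284e7, "15N": -2.71262e7}

def dipolar_coupling_hz(nuc1, nuc2, r_angstrom):
    """Dipolar coupling constant d / (2 pi), in Hz, for an isolated
    spin pair: d = -(mu0 / 4 pi) * gamma1 * gamma2 * hbar / r**3."""
    r = r_angstrom * 1e-10  # convert angstrom to metres
    d = -MU0_OVER_4PI * GAMMA[nuc1] * GAMMA[nuc2] * HBAR / r**3
    return d / (2 * math.pi)

# Arbitrary example distances for a 13C-15N spin pair
for r in (1.0, 1.5, 2.5):
    print(f"r = {r:.1f} A -> |d| = {abs(dipolar_coupling_hz('13C', '15N', r)):7.1f} Hz")
```

Doubling the distance reduces the coupling eightfold, which is why even modest distance changes are readily detectable in REDOR-type experiments.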
=== Chemical shift interaction ===
The chemical shift interaction can be used in conjunction with the dipolar interaction to determine the orientation of the dipolar interaction frame (principal axes system) with respect to the molecular frame (dipolar chemical shift spectroscopy). In some cases the orientation of the chemical shift tensor is fixed by symmetry arguments, as for the 13C spin in ketones (sp2 hybridisation). If the orientation of a dipolar interaction (between the spin of interest and, e.g., another heteronucleus) is measured with respect to the chemical shift coordinate system, the two pieces of information (the chemical shift tensor orientation in the molecular frame and the dipole tensor orientation relative to the chemical shift tensor) together give the orientation of the dipole tensor in the molecular frame. However, this method is only suitable for small molecules (or polymers with a small repeat unit such as polyglycine), and it provides only selective (and usually intramolecular) structural information.
== Crystal structure refinements ==
The dipolar interaction yields the most direct structural information, as it makes it possible to measure distances between spins. However, its sensitivity is limited, and even though dipolar-based NMR crystallography makes structure elucidation possible, other methods are needed to obtain high-resolution structures. For these reasons much work has been done to include other NMR observables such as the chemical shift anisotropy, the J-coupling and the quadrupolar interaction. These anisotropic interactions are highly sensitive to the local three-dimensional environment, making it possible to refine structures from powdered samples to a quality rivalling single-crystal X-ray diffraction. This, however, relies on adequate methods for predicting these interactions, as they do not depend on the structure in a straightforward fashion.
== Comparison with diffraction methods ==
A drawback of NMR crystallography is that the method is typically more time-consuming and more expensive (due to spectrometer costs and isotope labelling) than X-ray crystallography. It often elucidates only part of the structure, and isotope labelling and experiments may have to be tailored to obtain key structural information. Also, a given molecular structure may not always be suitable for a purely NMR-based crystallographic approach, but NMR can still play an important role in a multimodal (NMR plus diffraction) study.
Unlike in the case of diffraction methods, it appears that NMR crystallography needs to work on a case-by-case basis. The reason is that different molecular systems will exhibit different spin physics and different observables which can be probed. The method may therefore not find widespread use as different systems will require tailored experimental designs to study them.
== References ==
A carbohydrate is a biomolecule composed of carbon (C), hydrogen (H), and oxygen (O) atoms. The typical hydrogen-to-oxygen atomic ratio is 2:1, analogous to that of water, and is represented by the empirical formula Cm(H2O)n (where m and n may differ). This formula does not imply direct covalent bonding between hydrogen and oxygen atoms; for example, in CH2O, hydrogen is covalently bonded to carbon, not oxygen. While the 2:1 hydrogen-to-oxygen ratio is characteristic of many carbohydrates, exceptions exist. For instance, uronic acids and deoxy-sugars like fucose deviate from this precise stoichiometric definition. Conversely, some compounds conforming to this definition, such as formaldehyde and acetic acid, are not classified as carbohydrates.
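The Cm(H2O)n stoichiometry is easy to test programmatically. The sketch below (illustrative only; the helper names and example formulas are our own) checks whether a molecular formula has the 2:1 hydrogen-to-oxygen ratio, reproducing the point that formaldehyde and acetic acid pass the test while the carbohydrate fucose fails it.

```python
import re

def parse_formula(formula):
    """Count atoms in a simple molecular formula such as 'C6H12O6'."""
    counts = {}
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] = counts.get(element, 0) + int(number or "1")
    return counts

def fits_cm_h2o_n(formula):
    """True if the formula matches Cm(H2O)n, i.e. H:O is exactly 2:1."""
    c = parse_formula(formula)
    return c.get("C", 0) > 0 and c.get("O", 0) > 0 \
        and c.get("H", 0) == 2 * c.get("O", 0)

examples = {
    "glucose": "C6H12O6",    # fits the rule and is a carbohydrate
    "fucose": "C6H12O5",     # a deoxy-sugar: a carbohydrate that fails the test
    "formaldehyde": "CH2O",  # fits the rule but is not a carbohydrate
    "acetic acid": "C2H4O2", # fits the rule but is not a carbohydrate
}
for name, f in examples.items():
    print(f"{name:12s} {f:8s} fits Cm(H2O)n: {fits_cm_h2o_n(f)}")
```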
The term is predominantly used in biochemistry, functioning as a synonym for saccharide (from Ancient Greek σάκχαρον (sákkharon) 'sugar'), a group that includes sugars, starch, and cellulose. The saccharides are divided into four chemical groups: monosaccharides, disaccharides, oligosaccharides, and polysaccharides. Monosaccharides and disaccharides, the smallest (lower molecular weight) carbohydrates, are commonly referred to as sugars. While the scientific nomenclature of carbohydrates is complex, the names of the monosaccharides and disaccharides very often end in the suffix -ose, which was originally taken from the word glucose (from Ancient Greek γλεῦκος (gleûkos) 'wine, must'), and is used for almost all sugars (e.g., fructose (fruit sugar), sucrose (cane or beet sugar), ribose, lactose (milk sugar)).
Carbohydrates perform numerous roles in living organisms. Polysaccharides serve as an energy store (e.g., starch and glycogen) and as structural components (e.g., cellulose in plants and chitin in arthropods and fungi). The 5-carbon monosaccharide ribose is an important component of coenzymes (e.g., ATP, FAD and NAD) and the backbone of the genetic molecule known as RNA. The related deoxyribose is a component of DNA. Saccharides and their derivatives include many other important biomolecules that play key roles in the immune system, fertilization, preventing pathogenesis, blood clotting, and development.
Carbohydrates are central to nutrition and are found in a wide variety of natural and processed foods. Starch is a polysaccharide and is abundant in cereals (wheat, maize, rice), potatoes, and processed food based on cereal flour, such as bread, pizza or pasta. Sugars appear in human diet mainly as table sugar (sucrose, extracted from sugarcane or sugar beets), lactose (abundant in milk), glucose and fructose, both of which occur naturally in honey, many fruits, and some vegetables. Table sugar, milk, or honey is often added to drinks and many prepared foods such as jam, biscuits and cakes.
Cellulose, a polysaccharide found in the cell walls of all plants, is one of the main components of insoluble dietary fiber. Although it is not digestible by humans, cellulose and insoluble dietary fiber generally help maintain a healthy digestive system by facilitating bowel movements. Other polysaccharides contained in dietary fiber include resistant starch and inulin, which feed some bacteria in the microbiota of the large intestine, and are metabolized by these bacteria to yield short-chain fatty acids.
== Terminology ==
In scientific literature, the term "carbohydrate" has many synonyms, like "sugar" (in the broad sense), "saccharide", "ose", "glucide", "hydrate of carbon" or "polyhydroxy compounds with aldehyde or ketone". Some of these terms, especially "carbohydrate" and "sugar", are also used with other meanings.
In food science and in many informal contexts, the term "carbohydrate" often means any food that is particularly rich in the complex carbohydrate starch (such as cereals, bread and pasta) or simple carbohydrates, such as sugar (found in candy, jams, and desserts). This informality is sometimes confusing since it confounds chemical structure and digestibility in humans.
The term "carbohydrate" (or "carbohydrate by difference") refers also to dietary fiber, which is a carbohydrate, but, unlike sugars and starches, fibers are not hydrolyzed by human digestive enzymes. Fiber generally contributes little food energy in humans, but is often included in the calculation of total food energy. The fermentation of soluble fibers by gut microflora can yield short-chain fatty acids, and soluble fiber is estimated to provide about 2 kcal/g.
== History ==
The history of carbohydrates dates back around 10,000 years, to the cultivation of sugarcane in Papua New Guinea during the Neolithic agricultural revolution. The term "carbohydrate" was first proposed by the German chemist Carl Schmidt in 1844. In 1856, glycogen, a form of carbohydrate storage in animal livers, was discovered by the French physiologist Claude Bernard.
== Structure ==
Formerly the name "carbohydrate" was used in chemistry for any compound with the formula Cm(H2O)n. Following this definition, some chemists considered formaldehyde (CH2O) to be the simplest carbohydrate, while others claimed that title for glycolaldehyde. Today, the term is generally understood in the biochemical sense, which excludes compounds with only one or two carbons and includes many biological carbohydrates that deviate from this formula. Indeed, many ubiquitous and abundant carbohydrates depart from it by carrying additional chemical groups, such as N-acetyl (e.g., chitin), sulfate (e.g., glycosaminoglycans), carboxylic acid and deoxy modifications (e.g., fucose and sialic acid).
Natural saccharides are generally built of simple carbohydrates called monosaccharides, with general formula (CH2O)n where n is three or more. A typical monosaccharide has the structure H–(CHOH)x(C=O)–(CHOH)y–H, that is, an aldehyde or ketone with many hydroxyl groups added, usually one on each carbon atom that is not part of the aldehyde or ketone functional group. Examples of monosaccharides are glucose, fructose, and glyceraldehyde. However, some biological substances commonly called "monosaccharides" do not conform to this formula (e.g., uronic acids and deoxy-sugars such as fucose), and there are many chemicals that do conform to this formula but are not considered to be monosaccharides (e.g., formaldehyde CH2O and inositol (CH2O)6).
The open-chain form of a monosaccharide often coexists with a closed ring form where the aldehyde/ketone carbonyl group carbon (C=O) and hydroxyl group (–OH) react forming a hemiacetal with a new C–O–C bridge.
Monosaccharides can be linked together into what are called polysaccharides (or oligosaccharides) in a large variety of ways. Many carbohydrates contain one or more modified monosaccharide units that have had one or more groups replaced or removed. For example, deoxyribose, a component of DNA, is a modified version of ribose; chitin is composed of repeating units of N-acetyl glucosamine, a nitrogen-containing form of glucose.
== Division ==
Carbohydrates are polyhydroxy aldehydes, ketones, alcohols, acids, their simple derivatives and their polymers having linkages of the acetal type. They may be classified according to their degree of polymerization, and may be divided initially into three principal groups, namely sugars, oligosaccharides and polysaccharides.
== Monosaccharides ==
Monosaccharides are the simplest carbohydrates in that they cannot be hydrolyzed to smaller carbohydrates. They are aldehydes or ketones with two or more hydroxyl groups. The general chemical formula of an unmodified monosaccharide is (CH2O)n, literally a "carbon hydrate". Monosaccharides are important fuel molecules as well as building blocks for nucleic acids. The smallest monosaccharides, for which n = 3, are dihydroxyacetone and D- and L-glyceraldehyde.
=== Classification of monosaccharides ===
Monosaccharides are classified according to three different characteristics: the placement of the carbonyl group, the number of carbon atoms, and the chiral handedness. If the carbonyl group is an aldehyde, the monosaccharide is an aldose; if the carbonyl group is a ketone, the monosaccharide is a ketose. Monosaccharides with three carbon atoms are called trioses, those with four tetroses, those with five pentoses, those with six hexoses, and so on. These two systems of classification are often combined. For example, glucose is an aldohexose (a six-carbon aldehyde), ribose is an aldopentose (a five-carbon aldehyde), and fructose is a ketohexose (a six-carbon ketone).
Each carbon atom bearing a hydroxyl group (-OH), with the exception of the first and last carbons, is asymmetric, making it a stereocenter with two possible configurations (R or S). Because of this asymmetry, a number of isomers may exist for any given monosaccharide formula. By the Le Bel–van't Hoff rule, the aldohexose D-glucose, for example, with the formula (CH2O)6, has four stereogenic centers among its six carbon atoms, making D-glucose one of 2⁴ = 16 possible stereoisomers. In the case of glyceraldehyde, an aldotriose, there is one pair of possible stereoisomers, which are enantiomers and epimers. 1,3-Dihydroxyacetone, the ketose corresponding to the aldose glyceraldehyde, is a symmetric molecule with no stereocenters. The assignment of D or L is made according to the orientation of the asymmetric carbon furthest from the carbonyl group: in a standard Fischer projection, if the hydroxyl group is on the right the molecule is a D sugar, otherwise it is an L sugar. The "D-" and "L-" prefixes should not be confused with "d-" or "l-", which indicate the direction in which the sugar rotates plane-polarized light. This usage of "d-" and "l-" is no longer followed in carbohydrate chemistry.
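The 2^n counting from the Le Bel–van't Hoff rule can be captured in a few lines. In this sketch, the stereocenter-counting convention for unmodified straight-chain sugars (n - 2 CHOH carbons for an aldose, n - 3 for a 2-ketose) is the stated assumption:

```python
def stereoisomers(n_carbons, kind):
    """Stereocenters and stereoisomer count of an unmodified
    straight-chain sugar.

    An aldose with n carbons has n - 2 CHOH stereocenters (C1 is the
    aldehyde, Cn the terminal CH2OH); a 2-ketose has n - 3 (C2 is the
    ketone).  The Le Bel-van't Hoff rule then gives 2**centers isomers.
    """
    centers = n_carbons - 2 if kind == "aldose" else n_carbons - 3
    centers = max(centers, 0)
    return centers, 2 ** centers

for n, kind, name in [(3, "aldose", "glyceraldehyde"),
                      (3, "ketose", "dihydroxyacetone"),
                      (5, "aldose", "aldopentoses (e.g. ribose)"),
                      (6, "aldose", "aldohexoses (e.g. glucose)"),
                      (6, "ketose", "ketohexoses (e.g. fructose)")]:
    c, iso = stereoisomers(n, kind)
    print(f"{name:28s}: {c} stereocenter(s) -> {iso} stereoisomer(s)")
```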
=== Ring-straight chain isomerism ===
The aldehyde or ketone group of a straight-chain monosaccharide will react reversibly with a hydroxyl group on a different carbon atom to form a hemiacetal or hemiketal, forming a heterocyclic ring with an oxygen bridge between two carbon atoms. Rings with five and six atoms are called furanose and pyranose forms, respectively, and exist in equilibrium with the straight-chain form.
During the conversion from the straight-chain form to the cyclic form, the carbon atom bearing the carbonyl oxygen, called the anomeric carbon, becomes a stereogenic center with two possible configurations: the oxygen atom may take a position either above or below the plane of the ring. The resulting pair of possible stereoisomers are called anomers. In the α anomer, the -OH substituent on the anomeric carbon rests on the opposite side (trans) of the ring from the CH2OH side branch. The alternative form, in which the CH2OH substituent and the anomeric hydroxyl are on the same side (cis) of the plane of the ring, is called the β anomer.
=== Use in living organisms ===
Monosaccharides are the major fuel source for metabolism, and glucose is an energy-rich molecule utilized to generate ATP in almost all living organisms. Glucose is a high-energy substrate produced in plants through photosynthesis by combining energy-poor water and carbon dioxide in an endothermic reaction fueled by solar energy. When monosaccharides are not immediately needed, they are often converted to more space-efficient (i.e., less water-soluble) forms, often polysaccharides. In animals, glucose circulating in the blood is a major metabolic substrate and is oxidized in the mitochondria to produce ATP for performing useful cellular work. In humans and other animals, serum glucose levels must be regulated carefully to maintain glucose within acceptable limits and prevent the deleterious effects of hypo- or hyperglycemia. Hormones such as insulin and glucagon serve to keep glucose levels in balance: insulin stimulates glucose uptake into muscle and fat cells when glucose levels are high, whereas glucagon helps to raise glucose levels if they dip too low by stimulating hepatic glucose synthesis. In many animals, including humans, this storage form is glycogen, especially in liver and muscle cells. In plants, starch is used for the same purpose. The most abundant carbohydrate, cellulose, is a structural component of the cell wall of plants and many forms of algae. Ribose is a component of RNA. Deoxyribose is a component of DNA. Lyxose is a component of lyxoflavin found in the human heart. Ribulose and xylulose occur in the pentose phosphate pathway. Galactose, a component of the milk sugar lactose, is found in galactolipids in plant cell membranes and in glycoproteins in many tissues. Mannose occurs in human metabolism, especially in the glycosylation of certain proteins. Fructose, or fruit sugar, is found in many plants and in humans; it is absorbed directly in the intestines during digestion, metabolized in the liver, and found in semen. Trehalose, a major sugar of insects, is rapidly hydrolyzed into two glucose molecules to support continuous flight.
== Disaccharides ==
Two joined monosaccharides are called a disaccharide, the simplest kind of polysaccharide. Examples include sucrose and lactose. They are composed of two monosaccharide units bound together by a covalent bond known as a glycosidic linkage formed via a dehydration reaction, resulting in the loss of a hydrogen atom from one monosaccharide and a hydroxyl group from the other. The formula of unmodified disaccharides is C12H22O11. Although there are numerous kinds of disaccharides, a handful of disaccharides are particularly notable.
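The C12H22O11 formula follows directly from the dehydration arithmetic: two hexose units minus one water. A minimal sketch of that bookkeeping (the dictionary representation of formulas is just for illustration):

```python
def combine(*formulas):
    """Sum element counts over several formulas given as dicts."""
    total = {}
    for f in formulas:
        for element, n in f.items():
            total[element] = total.get(element, 0) + n
    return total

def condense(unit1, unit2):
    """Join two sugar units via a glycosidic bond, losing one H2O."""
    total = combine(unit1, unit2)
    total["H"] -= 2
    total["O"] -= 1
    return total

glucose = {"C": 6, "H": 12, "O": 6}
fructose = {"C": 6, "H": 12, "O": 6}
print(condense(glucose, fructose))  # {'C': 12, 'H': 22, 'O': 11}: sucrose
```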
Sucrose is the most abundant disaccharide, and the main form in which carbohydrates are transported in plants. It is composed of one D-glucose molecule and one D-fructose molecule. The systematic name for sucrose, O-α-D-glucopyranosyl-(1→2)-D-fructofuranoside, indicates four things:
Its monosaccharides: glucose and fructose
Their ring types: glucose is a pyranose and fructose is a furanose
How they are linked together: the oxygen on carbon number 1 (C1) of α-D-glucose is linked to the C2 of D-fructose.
The -oside suffix indicates that the anomeric carbon of both monosaccharides participates in the glycosidic bond.
Lactose, a disaccharide composed of one D-galactose molecule and one D-glucose molecule, occurs naturally in mammalian milk. The systematic name for lactose is O-β-D-galactopyranosyl-(1→4)-D-glucopyranose. Other notable disaccharides include maltose (two D-glucoses linked α-1,4) and cellobiose (two D-glucoses linked β-1,4). Disaccharides can be classified into two types, reducing and non-reducing: if the anomeric carbon of one unit remains free as a hemiacetal (i.e., it is not engaged in the glycosidic bond), the disaccharide retains a reducing group and is called a reducing disaccharide or biose.
== Oligosaccharides and polysaccharides ==
=== Oligosaccharides ===
Oligosaccharides are saccharide polymers composed of three to ten monosaccharide units connected via glycosidic linkages, similar to disaccharides. They are usually linked to lipids or to amino acid side chains through N- or O-glycosidic bonds, forming glycolipids and glycoproteins, though some, like the raffinose series and the fructooligosaccharides, are not. They have roles in cell recognition and cell adhesion.
=== Polysaccharides ===
== Nutrition ==
Carbohydrate consumed in food yields 3.87 kilocalories of energy per gram for simple sugars, and 3.57 to 4.12 kilocalories per gram for complex carbohydrate in most other foods. Relatively high levels of carbohydrate are associated with processed foods or refined foods made from plants, including sweets, cookies and candy, table sugar, honey, soft drinks, breads and crackers, jams and fruit products, pastas and breakfast cereals. Refined carbohydrates from processed foods such as white bread or rice, soft drinks, and desserts are readily digestible, and many are known to have a high glycemic index, which reflects a rapid assimilation of glucose. By contrast, the digestion of whole, unprocessed, fiber-rich foods such as beans, peas, and whole grains produces a slower and steadier release of glucose and energy into the body. Animal-based foods generally have the lowest carbohydrate levels, although milk does contain a high proportion of lactose.
Organisms typically cannot metabolize all types of carbohydrate to yield energy. Glucose is a nearly universal and accessible source of energy. Many organisms also have the ability to metabolize other monosaccharides and disaccharides but glucose is often metabolized first. In Escherichia coli, for example, the lac operon will express enzymes for the digestion of lactose when it is present, but if both lactose and glucose are present, the lac operon is repressed, resulting in the glucose being used first (see: Diauxie). Polysaccharides are also common sources of energy. Many organisms can easily break down starches into glucose; most organisms, however, cannot metabolize cellulose or other polysaccharides such as chitin and arabinoxylans. These carbohydrate types can be metabolized by some bacteria and protists. Ruminants and termites, for example, use microorganisms to process cellulose, fermenting it to caloric short-chain fatty acids. Even though humans lack the enzymes to digest fiber, dietary fiber represents an important dietary element for humans. Fibers promote healthy digestion, help regulate postprandial glucose and insulin levels, reduce cholesterol levels, and promote satiety.
The Institute of Medicine recommends that American and Canadian adults get between 45 and 65% of dietary energy from whole-grain carbohydrates. The Food and Agriculture Organization and World Health Organization jointly recommend that national dietary guidelines set a goal of 55–75% of total energy from carbohydrates, but only 10% directly from sugars (their term for simple carbohydrates). A 2017 Cochrane Systematic Review concluded that there was insufficient evidence to support the claim that whole grain diets can affect cardiovascular disease.
=== Classification ===
The term complex carbohydrate was first used in the U.S. Senate Select Committee on Nutrition and Human Needs publication Dietary Goals for the United States (1977) where it was intended to distinguish sugars from other carbohydrates (which were perceived to be nutritionally superior). However, the report put "fruit, vegetables and whole-grains" in the complex carbohydrate column, despite the fact that these may contain sugars as well as polysaccharides. The standard usage, however, is to classify carbohydrates chemically: simple if they are sugars (monosaccharides and disaccharides) and complex if they are polysaccharides (or oligosaccharides). Carbohydrates are sometimes divided into "available carbohydrates", which are absorbed in the small intestine and "unavailable carbohydrates", which pass to the large intestine, where they are subject to fermentation by the gastrointestinal microbiota.
==== Glycemic index ====
The glycemic index (GI) and glycemic load concepts characterize the potential of carbohydrates in food to raise blood glucose compared with a reference food (generally pure glucose). Expressed numerically, carbohydrate-containing foods can be grouped as high-GI (70 and above), moderate-GI (56–69), or low-GI (55 or less) relative to pure glucose (GI = 100). Consumption of carbohydrate-rich, high-GI foods causes an abrupt increase in blood glucose concentration that declines rapidly after the meal, whereas low-GI foods produce a lower blood glucose concentration that returns to baseline gradually.
Glycemic load combines the quality of the carbohydrate in a food (its GI) with the quantity of carbohydrate in a single serving of that food.
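Numerically, glycemic load is simply GL = GI × grams of carbohydrate per serving / 100. The sketch below implements this together with the GI bands given above; the food values in the example are rough illustrative figures, not authoritative data.

```python
def gi_category(gi):
    """Classify a glycemic index value using the bands given above."""
    if gi >= 70:
        return "high"
    if gi >= 56:
        return "moderate"
    return "low"

def glycemic_load(gi, carb_grams_per_serving):
    """Glycemic load: GI weighted by carbohydrate grams per serving."""
    return gi * carb_grams_per_serving / 100

# Rough illustrative values only, not authoritative nutrition data
foods = [("glucose (reference)", 100, 10),
         ("white bread", 75, 14),
         ("lentils", 32, 17)]
for name, gi, carbs in foods:
    print(f"{name:20s} GI={gi:3d} ({gi_category(gi):8s}) "
          f"GL per serving = {glycemic_load(gi, carbs):4.1f}")
```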
=== Health effects of dietary carbohydrate restriction ===
Low-carbohydrate diets may miss the health advantages – such as increased intake of dietary fiber and phytochemicals – afforded by high-quality plant foods such as legumes and pulses, whole grains, fruits, and vegetables. A meta-analysis of moderate quality listed halitosis, headache and constipation among the adverse effects of the diet.
Carbohydrate-restricted diets can be as effective as low-fat diets in helping achieve weight loss over the short term when overall calorie intake is reduced. An Endocrine Society scientific statement said that "when calorie intake is held constant [...] body-fat accumulation does not appear to be affected by even very pronounced changes in the amount of fat vs carbohydrate in the diet." In the long term, low-carbohydrate diets do not appear to confer a "metabolic advantage", and effective weight loss or maintenance depends on the level of calorie restriction, not the ratio of macronutrients in a diet. Diet advocates reason that carbohydrates cause undue fat accumulation by increasing blood insulin levels; however, a more balanced diet that restricts refined carbohydrates can also reduce serum glucose and insulin levels and may also suppress lipogenesis and promote fat oxidation. As far as energy expenditure itself is concerned, the claim that low-carbohydrate diets have a "metabolic advantage" is not supported by clinical evidence. Further, it is not clear how low-carbohydrate dieting affects cardiovascular health, although two reviews showed that carbohydrate restriction may improve lipid markers of cardiovascular disease risk.
Carbohydrate-restricted diets are no more effective than a conventional healthy diet in preventing the onset of type 2 diabetes, but for people with type 2 diabetes, they are a viable option for losing weight or helping with glycemic control. There is limited evidence to support routine use of low-carbohydrate dieting in managing type 1 diabetes. The American Diabetes Association recommends that people with diabetes should adopt a generally healthy diet, rather than a diet focused on carbohydrate or other macronutrients.
An extreme form of low-carbohydrate diet – the ketogenic diet – is established as a medical diet for treating epilepsy. Through celebrity endorsement during the early 21st century, it became a fad diet as a means of weight loss, but with risks of undesirable side effects, such as low energy levels and increased hunger, insomnia, nausea, and gastrointestinal discomfort. The British Dietetic Association named it one of the "top 5 worst celeb diets to avoid in 2018".
== Sources ==
Most dietary carbohydrates contain glucose, either as their only building block (as in the polysaccharides starch and glycogen), or together with another monosaccharide (as in the disaccharides sucrose and lactose). Unbound glucose is one of the main ingredients of honey. Glucose is extremely abundant and has been isolated from a variety of natural sources across the world, including male cones of the coniferous tree Wollemia nobilis in Rome, the roots of Ilex asprella plants in China, and straws from rice in California.
== Metabolism ==
Carbohydrate metabolism is the series of biochemical processes responsible for the formation, breakdown and interconversion of carbohydrates in living organisms.
The most important carbohydrate is glucose, a simple sugar (monosaccharide) that is metabolized by nearly all known organisms. Glucose and other carbohydrates are part of a wide variety of metabolic pathways across species: plants synthesize carbohydrates from carbon dioxide and water by photosynthesis, storing the absorbed energy internally, often in the form of starch or lipids. Plant components are consumed by animals and fungi, and used as fuel for cellular respiration. Oxidation of one gram of carbohydrate yields approximately 16 kJ (4 kcal) of energy, while the oxidation of one gram of lipids yields about 38 kJ (9 kcal). The human body stores between 300 and 500 g of carbohydrates depending on body weight, with the skeletal muscle contributing a large portion of the storage. Energy obtained from metabolism (e.g., oxidation of glucose) is usually stored temporarily within cells in the form of ATP. Organisms capable of anaerobic or aerobic respiration metabolize glucose (with oxygen in the aerobic case) to release energy, with carbon dioxide and water as the byproducts of aerobic metabolism.
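The energy densities quoted above (about 16 kJ/g for carbohydrate and 38 kJ/g for lipid) permit a back-of-the-envelope energy estimate for a food from its macronutrient masses. A minimal sketch, where the protein factor and the serving composition are added assumptions:

```python
KJ_PER_GRAM = {"carbohydrate": 16.0, "fat": 38.0,
               "protein": 17.0}  # protein factor is an added assumption

KCAL_PER_KJ = 1 / 4.184

def food_energy_kj(grams_by_nutrient):
    """Approximate metabolizable energy (kJ) from macronutrient masses."""
    return sum(KJ_PER_GRAM[n] * g for n, g in grams_by_nutrient.items())

# Placeholder composition for a hypothetical 100 g serving
serving = {"carbohydrate": 50.0, "fat": 10.0, "protein": 8.0}
kj = food_energy_kj(serving)
print(f"~{kj:.0f} kJ (~{kj * KCAL_PER_KJ:.0f} kcal)")
```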
=== Catabolism ===
Catabolism is the metabolic reaction which cells undergo to break down larger molecules, extracting energy. There are two major metabolic pathways of monosaccharide catabolism: glycolysis and the citric acid cycle.
In glycolysis, oligo- and polysaccharides are first cleaved to smaller monosaccharides by enzymes called glycoside hydrolases. The monosaccharide units can then enter monosaccharide catabolism. An investment of 2 ATP is required in the early steps of glycolysis to phosphorylate glucose to glucose 6-phosphate (G6P) and fructose 6-phosphate (F6P) to fructose 1,6-bisphosphate (FBP), thereby pushing the reaction forward irreversibly. In some cases, as with humans, not all carbohydrate types are usable, as the necessary digestive and metabolic enzymes are not present.
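The 2-ATP investment described above is recovered later in the pathway: in standard glycolysis the payoff phase generates 4 ATP per glucose, for a net yield of 2. A toy ledger of this textbook stoichiometry (simplified to the four ATP-coupled steps only):

```python
# Simplified ATP ledger for glycolysis of one glucose molecule.
# Only the four ATP-coupled steps are listed; all other steps are omitted.
steps = [
    ("hexokinase: glucose -> G6P", -1),           # investment phase
    ("phosphofructokinase: F6P -> FBP", -1),      # investment phase
    ("phosphoglycerate kinase (x2 trioses)", +2), # payoff phase
    ("pyruvate kinase (x2 trioses)", +2),         # payoff phase
]

atp = 0
for name, delta in steps:
    atp += delta
    print(f"{name:38s} ATP {delta:+d} (running total {atp:+d})")
print(f"Net ATP per glucose from glycolysis: {atp:+d}")
```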
== Carbohydrate chemistry ==
Carbohydrate chemistry is a large and economically important branch of organic chemistry. Some of the main organic reactions that involve carbohydrates are:
Amadori rearrangement
Carbohydrate acetalisation
Carbohydrate digestion
Cyanohydrin reaction
Koenigs–Knorr reaction
Lobry de Bruyn–Van Ekenstein transformation
Nef reaction
Wohl degradation
Tipson-Cohen reaction
Ferrier rearrangement
Ferrier II reaction
== Chemical synthesis ==
Carbohydrate synthesis is a sub-field of organic chemistry concerned specifically with the generation of natural and unnatural carbohydrate structures. This can include the synthesis of monosaccharide residues or structures containing more than one monosaccharide, known as oligosaccharides. Selective formation of glycosidic linkages and selective reactions of hydroxyl groups are very important, and the usage of protecting groups is extensive.
Common reactions for glycosidic bond formation are as follows:
Chemical glycosylation
Fischer glycosidation
Koenigs–Knorr reaction
Crich beta-mannosylation
While some common protection methods are as below:
Carbohydrate acetalisation
Trimethylsilyl
Benzyl ether
p-Methoxybenzyl ether
== See also ==
Bioplastic
Carbohydrate NMR
Gluconeogenesis – a process by which glucose is synthesized from non-carbohydrate sources.
Glycobiology
Glycogen
Glycoinformatics
Glycolipid
Glycome
Glycomics
Glycosyl
Macromolecule
Saccharic acid
== References ==
== Further reading ==
"Compolition of foods raw, processed, prepared" (PDF). United States Department of Agriculture. September 2015. Archived (PDF) from the original on October 31, 2016. Retrieved October 30, 2016.
== External links ==
Carbohydrates, including interactive models and animations (Requires MDL Chime)
IUPAC-IUBMB Joint Commission on Biochemical Nomenclature (JCBN): Carbohydrate Nomenclature
Carbohydrates detailed
Carbohydrates and Glycosylation – The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Functional Glycomics Gateway, a collaboration between the Consortium for Functional Glycomics and Nature Publishing Group
Characterization, when used in materials science, refers to the broad and general process by which a material's structure and properties are probed and measured. It is a fundamental process in the field of materials science, without which no scientific understanding of engineering materials could be ascertained. The scope of the term often differs; some definitions limit the term's use to techniques which study the microscopic structure and properties of materials, while others use the term to refer to any materials analysis process including macroscopic techniques such as mechanical testing, thermal analysis and density calculation. The scale of the structures observed in materials characterization ranges from angstroms, such as in the imaging of individual atoms and chemical bonds, up to centimeters, such as in the imaging of coarse grain structures in metals.
While many characterization techniques have been practiced for centuries, such as basic optical microscopy, new techniques and methodologies are constantly emerging. In particular the advent of the electron microscope and secondary ion mass spectrometry in the 20th century revolutionized the field, allowing the imaging and analysis of structures and compositions on much smaller scales than was previously possible, leading to a huge increase in understanding of why different materials show different properties and behaviors. Over the last 30 years, atomic force microscopy has further increased the maximum possible resolution for the analysis of certain samples.
== Microscopy ==
Microscopy is a category of characterization techniques which probe and map the surface and sub-surface structure of a material. These techniques can use photons, electrons, ions or physical cantilever probes to gather data about a sample's structure on a range of length scales. Some common examples of microscopy techniques include:
Optical microscopy
Scanning electron microscopy (SEM)
Transmission electron microscopy (TEM)
Field ion microscopy (FIM)
Scanning probe microscopy (SPM)
Atomic force microscopy (AFM)
Scanning tunneling microscopy (STM)
X-ray diffraction topography (XRT)
Atom probe tomography (APT)
== Spectroscopy ==
Spectroscopy is a category of characterization techniques which use a range of principles to reveal the chemical composition, composition variation, crystal structure and photoelectric properties of materials. Some common examples of spectroscopy techniques include:
=== Optical radiation ===
Ultraviolet-visible spectroscopy (UV-vis)
Fourier transform infrared spectroscopy (FTIR)
Thermoluminescence (TL)
Photoluminescence (PL)
=== X-ray ===
X-ray diffraction (XRD)
Small-angle X-ray scattering (SAXS)
Energy-dispersive X-ray spectroscopy (EDX, EDS)
Wavelength dispersive X-ray spectroscopy (WDX, WDS)
Electron energy loss spectroscopy (EELS)
X-ray photoelectron spectroscopy (XPS)
Auger electron spectroscopy (AES)
X-ray photon correlation spectroscopy (XPCS)
=== Mass spectrometry ===
Modes of mass spectrometry:
Electron ionization (EI)
Thermal ionization mass spectrometry (TI-MS)
MALDI-TOF
Secondary ion mass spectrometry (SIMS)
=== Nuclear spectroscopy ===
Nuclear magnetic resonance spectroscopy (NMR)
Mössbauer spectroscopy (MBS)
Perturbed angular correlation (PAC)
=== Other ===
Photon correlation spectroscopy/Dynamic light scattering (DLS)
Terahertz spectroscopy (THz)
Electron paramagnetic/spin resonance (EPR, ESR)
Small-angle neutron scattering (SANS)
Rutherford backscattering spectrometry (RBS)
Spatially resolved acoustic spectroscopy (SRAS)
== Macroscopic testing ==
A huge range of techniques are used to characterize various macroscopic properties of materials, including:
Mechanical testing, including tensile, compressive, torsional, creep, fatigue, toughness and hardness testing
Differential thermal analysis (DTA)
Dielectric thermal analysis (DEA, DETA)
Thermogravimetric analysis (TGA)
Differential scanning calorimetry (DSC)
Impulse excitation technique (IET)
Ultrasound techniques, including resonant ultrasound spectroscopy and time domain ultrasonic testing methods
== See also ==
Analytical chemistry
Instrumental chemistry
Semiconductor characterization techniques
Wafer bond characterization
Polymer characterization
Lipid bilayer characterization
Lignin characterization
Characterization of nanoparticles
MEMS for in situ mechanical characterization
== References ==
Tectonophysics, a branch of geophysics, is the study of the physical processes that underlie tectonic deformation. This includes measurement or calculation of the stress and strain fields on Earth's surface and the rheologies of the crust, mantle, lithosphere and asthenosphere.
== Overview ==
Tectonophysics is concerned with movements in the Earth's crust and deformations over scales from meters to thousands of kilometers. These govern processes on local and regional scales and at structural boundaries, such as the destruction of continental crust (e.g. gravitational instability) and oceanic crust (e.g. subduction), convection in the Earth's mantle (availability of melts), the course of continental drift, and second-order effects of plate tectonics such as thermal contraction of the lithosphere. This involves the measurement of a hierarchy of strains in rocks and plates as well as deformation rates; the study of laboratory analogues of natural systems; and the construction of models for the history of deformation.
== History ==
Tectonophysics was adopted as the name of a new section of the AGU on April 19, 1940, at AGU's 21st Annual Meeting. According to the AGU section history (https://tectonophysics.agu.org/agu-100/section-history/), in the words of Norman Bowen, the main goal of the tectonophysics section was to "designate this new borderline field between geophysics, physics and geology … for the solution of problems of tectonics." From 1940 onward, AGU members presented papers whose contents progressively defined the scope of the field, so the term predates the 1954 publications sometimes credited with founding it.
The field was consolidated in 1954 when Mikhail Vladimirovich Gzovskii published three papers in the journal Izvestiya Akad. Nauk SSSR, Seriya Geofizicheskaya: "On the tasks and content of tectonophysics", "Tectonic stress fields", and "Modeling of tectonic stress fields". He defined the main goals of tectonophysical research to be the study of the mechanisms of folding and faulting as well as of large structural units of the Earth's crust. He later created the Laboratory of Tectonophysics at the Institute of Physics of the Earth, Academy of Sciences of the USSR, Moscow.
== Applications ==
In coal mines, large horizontal stresses in the rock (around two to three times greater than the vertical pressure from the overlying strata) are caused by tectonic stress and can be predicted with plate-tectonic stress maps. For example, because West Virginia experiences tectonic stress from east to west, as of the 1990s significantly more roof collapses occurred in mines running north to south than in mines running east to west.
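The vertical component of the stress in such settings is just the overburden, σv = ρgh, so the quoted two-to-three-fold tectonic amplification is easy to put in numbers. A rough sketch, in which the rock density and seam depth are illustrative assumptions:

```python
G = 9.81            # gravitational acceleration, m/s^2
RHO_ROCK = 2500.0   # overburden density, kg/m^3 (assumed typical value)

def vertical_stress_mpa(depth_m):
    """Overburden (lithostatic) stress sigma_v = rho * g * h, in MPa."""
    return RHO_ROCK * G * depth_m / 1e6

depth = 300.0  # m, an illustrative coal-seam depth
sv = vertical_stress_mpa(depth)
for k in (2.0, 3.0):  # tectonic amplification factors quoted in the text
    print(f"depth {depth:.0f} m: sigma_v = {sv:.1f} MPa, "
          f"sigma_h ~ {k:.0f} x sigma_v = {k * sv:.1f} MPa")
```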
== See also ==
== Notes ==
== References ==
== External links ==
American Geophysical Union Tectonophysics Section
Planetary science (or more rarely, planetology) is the scientific study of planets (including Earth), celestial bodies (such as moons, asteroids, comets) and planetary systems (in particular those of the Solar System) and the processes of their formation. It studies objects ranging in size from micrometeoroids to gas giants, with the aim of determining their composition, dynamics, formation, interrelations and history. It is a strongly interdisciplinary field, which originally grew from astronomy and Earth science, and now incorporates many disciplines, including planetary geology, cosmochemistry, atmospheric science, physics, oceanography, hydrology, theoretical planetary science, glaciology, and exoplanetology. Allied disciplines include space physics, when concerned with the effects of the Sun on the bodies of the Solar System, and astrobiology.
There are interrelated observational and theoretical branches of planetary science. Observational research can involve combinations of space exploration, predominantly with robotic spacecraft missions using remote sensing, and comparative, experimental work in Earth-based laboratories. The theoretical component involves considerable computer simulation and mathematical modelling.
Planetary scientists are generally located in the astronomy and physics or Earth sciences departments of universities or research centres, though there are several purely planetary science institutes worldwide. Generally, planetary scientists study one of the Earth sciences, astronomy, astrophysics, geophysics, or physics at the graduate level and concentrate their research in planetary science disciplines. There are several major conferences each year, and a wide range of peer reviewed journals. Some planetary scientists work at private research centres and often initiate partnership research tasks.
== History ==
The history of planetary science may be said to have begun with the Ancient Greek philosopher Democritus, who is reported by Hippolytus as saying: "The ordered worlds are boundless and differ in size, and that in some there is neither sun nor moon, but that in others, both are greater than with us, and yet with others more in number. And that the intervals between the ordered worlds are unequal, here more and there less, and that some increase, others flourish and others decay, and here they come into being and there they are eclipsed. But that they are destroyed by colliding with one another. And that some ordered worlds are bare of animals and plants and all water."
In more modern times, planetary science began in astronomy, from studies of the unresolved planets. In this sense, the original planetary astronomer would be Galileo, who discovered the four largest moons of Jupiter, the mountains on the Moon, and first observed the rings of Saturn, all objects of intense later study. Galileo's study of the lunar mountains in 1609 also began the study of extraterrestrial landscapes: his observation "that the Moon certainly does not possess a smooth and polished surface" suggested that it and other worlds might appear "just like the face of the Earth itself".
Advances in telescope construction and instrumental resolution gradually allowed increased identification of the atmospheric as well as surface details of the planets. The Moon was initially the most heavily studied, due to its proximity to the Earth, as it always exhibited elaborate features on its surface, and the technological improvements gradually produced more detailed lunar geological knowledge. In this scientific process, the main instruments were astronomical optical telescopes (and later radio telescopes) and finally robotic exploratory spacecraft, such as space probes.
The Solar System has now been relatively well-studied, and a good overall understanding of the formation and evolution of this planetary system exists. However, there are large numbers of unsolved questions, and the rate of new discoveries is very high, partly due to the large number of interplanetary spacecraft currently exploring the Solar System.
== Disciplines ==
Planetary science studies observational and theoretical astronomy, geology (astrogeology), atmospheric science, and an emerging subspecialty in planetary oceans, called planetary oceanography.
=== Planetary astronomy ===
This is both an observational and a theoretical science. Observational researchers are predominantly concerned with the study of the small bodies of the Solar System: those that are observed by telescopes, both optical and radio, so that characteristics of these bodies such as shape, spin, surface materials and weathering are determined, and the history of their formation and evolution can be understood.
Theoretical planetary astronomy is concerned with dynamics: the application of the principles of celestial mechanics to the Solar System and extrasolar planetary systems. Observing exoplanets and determining their physical properties, exoplanetology, is a major area of research besides Solar System studies. Each planet has its own dedicated branch of study.
=== Planetary geology ===
In planetary science, the term geology is used in its broadest sense, to mean the study of the surface and interior parts of planets and moons, from their core to their magnetosphere. The best-known research topics of planetary geology deal with the planetary bodies in the near vicinity of the Earth: the Moon, and the two neighboring planets: Venus and Mars. Of these, the Moon was studied first, using methods developed earlier on the Earth. Planetary geology focuses on celestial objects that exhibit a solid surface or have significant solid physical states as part of their structure. Planetary geology applies geology, geophysics and geochemistry to planetary bodies.
==== Planetary geomorphology ====
Geomorphology studies the features on planetary surfaces and reconstructs the history of their formation, inferring the physical processes that acted on the surface. Planetary geomorphology includes the study of several classes of surface features:
Impact features (multi-ringed basins, craters)
Volcanic and tectonic features (lava flows, fissures, rilles)
Glacial features
Aeolian features
Space weathering – erosional effects generated by the harsh environment of space (continuous micrometeorite bombardment, high-energy particle rain, impact gardening). For example, the thin dust cover on the surface of the lunar regolith is a result of micrometeorite bombardment.
Hydrological features: the liquid involved can range from water to hydrocarbon and ammonia, depending on the location within the Solar System. This category includes the study of paleohydrological features (paleochannels, paleolakes).
The history of a planetary surface can be deciphered by mapping features from top to bottom according to their deposition sequence, as first determined on terrestrial strata by Nicolas Steno. For example, stratigraphic mapping prepared the Apollo astronauts for the field geology they would encounter on their lunar missions. Overlapping sequences were identified on images taken by the Lunar Orbiter program, and these were used to prepare a lunar stratigraphic column and geological map of the Moon.
==== Cosmochemistry, geochemistry and petrology ====
One of the main problems when generating hypotheses on the formation and evolution of objects in the Solar System is the lack of samples that can be analyzed in the laboratory, where a large suite of tools are available, and the full body of knowledge derived from terrestrial geology can be brought to bear. Direct samples from the Moon, asteroids and Mars are present on Earth, removed from their parent bodies, and delivered as meteorites. Some of these have suffered contamination from the oxidising effect of Earth's atmosphere and the infiltration of the biosphere, but those meteorites collected in the last few decades from Antarctica are almost entirely pristine.
The different types of meteorites that originate from the asteroid belt cover almost all parts of the structure of differentiated bodies: meteorites even exist that come from the core-mantle boundary (pallasites). The combination of geochemistry and observational astronomy has also made it possible to trace the HED meteorites back to a specific asteroid in the main belt, 4 Vesta.
The comparatively few known Martian meteorites have provided insight into the geochemical composition of the Martian crust, although the unavoidable lack of information about their points of origin on the diverse Martian surface has meant that they do not provide more detailed constraints on theories of the evolution of the Martian lithosphere. As of July 24, 2013, 65 samples of Martian meteorites had been discovered on Earth. Many were found in either Antarctica or the Sahara Desert.
During the Apollo era, in the Apollo program, 384 kilograms of lunar samples were collected and transported to the Earth, and three Soviet Luna robots also delivered regolith samples from the Moon. These samples provide the most comprehensive record of the composition of any Solar System body besides the Earth. The number of lunar meteorites has also grown quickly in recent years: as of April 2008, 54 meteorites had been officially classified as lunar. Eleven of these are from the US Antarctic meteorite collection, 6 are from the Japanese Antarctic meteorite collection, and the other 37 are from hot desert localities in Africa, Australia, and the Middle East. The total mass of recognized lunar meteorites is close to 50 kg.
==== Planetary geophysics and space physics ====
Space probes made it possible to collect data in not only the visible light region but in other areas of the electromagnetic spectrum. The planets can be characterized by their force fields: gravity and their magnetic fields, which are studied through geophysics and space physics.
Measuring the changes in acceleration experienced by spacecraft as they orbit has allowed fine details of the gravity fields of the planets to be mapped. For example, in the 1970s, the gravity field disturbances above lunar maria were measured through lunar orbiters, which led to the discovery of concentrations of mass, mascons, beneath the Imbrium, Serenitatis, Crisium, Nectaris and Humorum basins.
If a planet's magnetic field is sufficiently strong, its interaction with the solar wind forms a magnetosphere around the planet. Early space probes discovered the gross dimensions of the terrestrial magnetic field, which extends about 10 Earth radii towards the Sun. The solar wind, a stream of charged particles, streams out and around the terrestrial magnetic field and continues downstream in the magnetotail, for hundreds of Earth radii. Inside the magnetosphere there are relatively dense regions of solar wind particles, the Van Allen radiation belts.
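The quoted extent of about 10 Earth radii follows from pressure balance between the solar-wind ram pressure and the magnetic pressure of the compressed dipole field. A minimal sketch of this standard Chapman–Ferraro estimate, with typical assumed solar-wind conditions:

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T m / A
M_PROTON = 1.6726e-27      # proton mass, kg
B_EQ = 3.1e-5              # Earth's equatorial surface field, T

def standoff_distance_re(n_cm3, v_km_s):
    """Chapman-Ferraro magnetopause standoff distance in Earth radii.

    Balances solar-wind ram pressure rho * v**2 against the magnetic
    pressure of the compressed dipole, (2 * B_EQ)**2 / (2 * MU0) at
    one Earth radius, falling off as (R_E / r)**6.
    """
    rho = n_cm3 * 1e6 * M_PROTON        # proton mass density, kg/m^3
    ram = rho * (v_km_s * 1e3) ** 2     # ram pressure, Pa
    mag = (2 * B_EQ) ** 2 / (2 * MU0)   # magnetic pressure at r = R_E, Pa
    return (mag / ram) ** (1 / 6)

# Typical (assumed) solar-wind conditions: 6 protons/cm^3 at 400 km/s
print(f"standoff ~ {standoff_distance_re(6.0, 400.0):.1f} Earth radii")
```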
Planetary geophysics includes, but is not limited to, seismology and tectonophysics, geophysical fluid dynamics, mineral physics, geodynamics, mathematical geophysics, and geophysical surveying.
==== Planetary geodesy ====
Planetary geodesy (also known as planetary geodetics) deals with the measurement and representation of the planets of the Solar System, their gravitational fields and geodynamic phenomena (polar motion in three-dimensional, time-varying space). The science of geodesy has elements of both astrophysics and planetary sciences. The shape of the Earth is to a large extent the result of its rotation, which causes its equatorial bulge, and of the competition between geologic processes such as the collision of plates and vulcanism, resisted by the Earth's gravity field. These principles can be applied to the solid surface of Earth (orogeny): few mountains are higher than 10 km (6 mi), and few deep-sea trenches are deeper, because a mountain as tall as, for example, 15 km (9 mi) would develop so much pressure at its base, due to gravity, that the rock there would become plastic, and the mountain would slump back to a height of roughly 10 km (6 mi) in a geologically insignificant time. Some or all of these geologic principles can be applied to other planets besides Earth. For instance, on Mars, whose surface gravity is much less, the largest volcano, Olympus Mons, is 27 km (17 mi) high at its peak, a height that could not be maintained on Earth. The Earth geoid is essentially the figure of the Earth abstracted from its topographic features; correspondingly, the Mars geoid (areoid) is the figure of Mars abstracted from its topographic features. Surveying and mapping are two important fields of application of geodesy.
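The slump argument can be made quantitative: a mountain's base begins to flow when the lithostatic pressure ρgh reaches the effective strength of the rock, so hmax ≈ σyield/(ρg). A back-of-the-envelope sketch, where the strength and density are order-of-magnitude assumptions chosen to reproduce the roughly 10 km terrestrial limit:

```python
YIELD_STRENGTH = 2.7e8  # Pa, effective crustal rock strength (assumed)
RHO_ROCK = 2800.0       # kg/m^3, crustal density (assumed)

def max_mountain_height_km(surface_gravity):
    """h_max ~ sigma_yield / (rho * g): height at which basal rock flows."""
    return YIELD_STRENGTH / (RHO_ROCK * surface_gravity) / 1e3

for body, g in [("Earth", 9.81), ("Mars", 3.71)]:
    print(f"{body}: g = {g} m/s^2 -> h_max ~ {max_mountain_height_km(g):.0f} km")
```

With the same assumed strength, the lower Martian gravity raises the limit to roughly 26 km, consistent with the height of Olympus Mons.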
=== Planetary atmospheric science ===
An atmosphere is an important transitional zone between the solid planetary surface and the higher rarefied ionizing and radiation belts. Not all planets have atmospheres: their existence depends on the mass of the planet and on its distance from the Sun; at too great a distance, frozen atmospheres occur. Besides the four giant planets, three of the four terrestrial planets (Earth, Venus, and Mars) have significant atmospheres. Two moons have significant atmospheres: Saturn's moon Titan and Neptune's moon Triton. A tenuous atmosphere exists around Mercury.
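The dependence on planetary mass and temperature (hence distance from the Sun) can be illustrated with the classic escape-velocity rule of thumb: a gas is retained over geologic time roughly when the escape velocity exceeds about six times the gas's mean thermal speed. The factor of six and the temperatures below are conventional rough values, not measurements:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
AMU = 1.6605e-27     # atomic mass unit, kg

def escape_velocity(mass_kg, radius_m):
    """Surface escape velocity, sqrt(2GM/R), in m/s."""
    return math.sqrt(2 * G * mass_kg / radius_m)

def mean_thermal_speed(molar_mass_amu, temperature_k):
    """Mean Maxwell-Boltzmann speed, sqrt(8kT / (pi m)), in m/s."""
    m = molar_mass_amu * AMU
    return math.sqrt(8 * K_B * temperature_k / (math.pi * m))

def retains(mass_kg, radius_m, molar_mass_amu, temperature_k, factor=6.0):
    """Rule of thumb: a gas is retained if v_esc > factor * v_thermal."""
    return (escape_velocity(mass_kg, radius_m)
            > factor * mean_thermal_speed(molar_mass_amu, temperature_k))

# (mass in kg, radius in m, rough dayside/upper-atmosphere temperature in K)
bodies = {"Earth": (5.97e24, 6.37e6, 1000.0),
          "Mars": (6.42e23, 3.39e6, 250.0),
          "Moon": (7.35e22, 1.74e6, 390.0)}
for name, (m, r, t) in bodies.items():
    kept = [gas for gas, mu in [("H2", 2), ("N2", 28), ("CO2", 44)]
            if retains(m, r, mu, t)]
    print(f"{name:5s} retains: {kept or 'none'}")
```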
The effects of the rotation rate of a planet about its axis can be seen in atmospheric streams and currents. Seen from space, these features show as bands and eddies in the cloud system and are particularly visible on Jupiter and Saturn.
=== Planetary oceanography ===
=== Exoplanetology ===
Exoplanetology studies exoplanets, the planets existing outside our Solar System. Until recently, the means of studying exoplanets have been extremely limited, but with the current rate of innovation in research technology, exoplanetology has become a rapidly developing subfield of astronomy.
== Comparative planetary science ==
Planetary science frequently makes use of the method of comparison to give a greater understanding of the object of study. This can involve comparing the dense atmospheres of Earth and Saturn's moon Titan, the evolution of outer Solar System objects at different distances from the Sun, or the geomorphology of the surfaces of the terrestrial planets, to give only a few examples.
The main comparison that can be made is to features on the Earth, as it is much more accessible and allows a much greater range of measurements to be made. Earth analog studies are particularly common in planetary geology, geomorphology, and also in atmospheric science.
The use of terrestrial analogs was first described by Gilbert (1886).
== In fiction ==
In Frank Herbert's 1965 science fiction novel Dune, the major secondary character Liet-Kynes serves as the "Imperial Planetologist" for the fictional planet Arrakis, a position he inherited from his father Pardot Kynes. In this role, a planetologist is described as having the skills of an ecologist, geologist, meteorologist, and biologist, as well as a basic understanding of human sociology, and applies this expertise to the study of entire planets. In the Dune series, planetologists are employed to understand planetary resources and to plan terraforming or other planetary-scale engineering projects. This fictional position has had an impact on the discourse surrounding planetary science itself and is referred to by one author as a "touchstone" within the related disciplines. In one example, a publication by Sybil P. Seitzinger in the journal Nature opens with a brief introduction on the fictional role in Dune, and suggests we should consider appointing individuals with skills similar to those of Liet-Kynes to help manage human activity on Earth.
== Professional activity ==
=== Journals ===
=== Professional bodies ===
This non-exhaustive list, given in alphabetical order, includes institutions and universities with major groups of people working in planetary science.
Division for Planetary Sciences (DPS) of the American Astronomical Society
American Geophysical Union
Meteoritical Society
Europlanet
==== Government space agencies ====
Canadian Space Agency (CSA)
China National Space Administration (CNSA, People's Republic of China)
Centre national d'études spatiales (CNES), the French National Centre for Space Research
Deutsches Zentrum für Luft- und Raumfahrt (DLR), the German Aerospace Center
European Space Agency (ESA)
Indian Space Research Organisation (ISRO)
Israel Space Agency (ISA)
Italian Space Agency
Japan Aerospace Exploration Agency (JAXA)
NASA (National Aeronautics and Space Administration, United States of America)
Jet Propulsion Laboratory (JPL)
Goddard Space Flight Center (GSFC)
Ames Research Center
National Space Organization (Taiwan)
Russian Federal Space Agency
UK Space Agency (UKSA)
=== Major conferences ===
Lunar and Planetary Science Conference (LPSC), organized by the Lunar and Planetary Institute in Houston. Held annually since 1970, occurs in March.
Division for Planetary Sciences (DPS) meeting held annually since 1970 at a different location each year, predominantly within the mainland US. Occurs around October.
American Geophysical Union (AGU) annual Fall meeting in December in San Francisco.
American Geophysical Union (AGU) Joint Assembly (co-sponsored with other societies) in April–May, in various locations around the world.
Meteoritical Society annual meeting, held during the Northern Hemisphere summer, generally alternating between North America and Europe.
European Planetary Science Congress (EPSC), held annually around September at a location within Europe.
Smaller workshops and conferences on particular fields occur worldwide throughout the year.
== See also ==
Areography (geography of Mars)
Planetary cartography
Planetary coordinate system
Selenography – study of the surface and physical features of the Moon
Theoretical planetology
Timeline of Solar System exploration
== References ==
== Further reading ==
Carr, Michael H., Saunders, R. S., Strom, R. G., Wilhelms, D. E. 1984. The Geology of the Terrestrial Planets. NASA.
Morrison, David. 1994. Exploring Planetary Worlds. W. H. Freeman. ISBN 0-7167-5043-0
Hargitai H et al. (2015) Classification and Characterization of Planetary Landforms. In: Hargitai H (ed) Encyclopedia of Planetary Landforms. Springer. doi:10.1007/978-1-4614-3134-3 https://link.springer.com/content/pdf/bbm%3A978-1-4614-3134-3%2F1.pdf
Hauber E et al. (2019) Planetary geologic mapping. In: Hargitai H (ed) Planetary Cartography and GIS. Springer.
Page D (2015) The Geology of Planetary Landforms. In: Hargitai H (ed) Encyclopedia of Planetary Landforms. Springer.
Rossi, A.P., van Gasselt S (eds) (2018) Planetary Geology. Springer
== External links ==
Planetary Science Research Discoveries (articles)
The Planetary Society (world's largest space-interest group: see also their active news blog)
Planetary Exploration Newsletter (PSI-published professional newsletter, weekly distribution)
Women in Planetary Science (professional networking and news)
Mineral physics is the science of materials that compose the interior of planets, particularly the Earth. It overlaps with petrophysics, which focuses on whole-rock properties. It provides information that allows interpretation of surface measurements of seismic waves, gravity anomalies, geomagnetic fields and electromagnetic fields in terms of properties in the deep interior of the Earth. This information can be used to provide insights into plate tectonics, mantle convection, the geodynamo and related phenomena.
Laboratory work in mineral physics requires measurements at high pressure. The most common tool is a diamond anvil cell, which uses diamonds to put a small sample under pressures that can approach the conditions in the Earth's interior.
== Creating high pressures ==
=== Shock compression ===
Many of the pioneering studies in mineral physics involved explosions or projectiles that subject a sample to a shock. For a brief time interval, the sample is under pressure as the shock wave passes through. Pressures as high as any in the Earth have been achieved by this method. However, the method has some disadvantages. The pressure is very non-uniform, and the compression is not isentropic, so the passing pressure wave heats the sample. The conditions of the experiment must be interpreted in terms of a set of pressure-density curves called Hugoniot curves.
=== Multi-anvil press ===
Multi-anvil presses involve an arrangement of anvils to concentrate pressure from a press onto a sample. Typically the apparatus uses an arrangement of eight cube-shaped tungsten carbide anvils to compress a ceramic octahedron containing the sample and a ceramic or rhenium (Re) metal furnace. The anvils are typically placed in a large hydraulic press. The method was developed by Kawai and Endo in Japan. Unlike shock compression, the pressure exerted is steady, and the sample can be heated using a furnace. Pressures of about 28 GPa (equivalent to depths of 840 km), and temperatures above 2300 °C, can be attained using WC anvils and a lanthanum chromite furnace. The apparatus is very bulky and cannot achieve pressures like those in the diamond anvil cell (below), but it can handle much larger samples that can be quenched and examined after the experiment. Recently, sintered diamond anvils have been developed for this type of press that can reach pressures of 90 GPa (2700 km depth).
=== Diamond anvil cell ===
The diamond anvil cell is a small table-top device for concentrating pressure. It can compress a small (sub-millimeter sized) piece of material to extreme pressures, which can exceed 3,000,000 atmospheres (300 gigapascals). This is beyond the pressures at the center of the Earth. The concentration of pressure at the tip of the diamonds is possible because of their hardness, while their transparency and high thermal conductivity allow a variety of probes to be used to examine the state of the sample. The sample can be heated to thousands of degrees.
== Creating high temperatures ==
Achieving temperatures found within the interior of the Earth is just as important to the study of mineral physics as creating high pressures. Several methods are used to reach and measure these temperatures. Resistive heating is the most common, and its temperature is the simplest to measure. The application of a voltage to a wire heats the wire and the surrounding area. A large variety of heater designs are available, including those that heat the entire diamond anvil cell (DAC) body and those that fit inside the body to heat the sample chamber. In air, only temperatures below about 700 °C can be reached, because diamond oxidizes above this temperature. With an argon atmosphere, temperatures up to 1700 °C can be reached without damaging the diamonds. A tungsten resistive heater with Ar in a BX90 DAC has been reported to reach 1400 °C.
Laser heating is done in a diamond anvil cell with Nd:YAG or CO2 lasers to achieve temperatures above 6000 K. Spectroscopy is used to measure black-body radiation from the sample to determine the temperature. Laser heating continues to extend the temperature range that can be reached in the diamond anvil cell, but it suffers from two significant drawbacks. First, temperatures below 1200 °C are difficult to measure using this method. Second, large temperature gradients exist in the sample because only the portion of the sample hit by the laser is heated.
== Properties of materials ==
=== Equations of state ===
To deduce the properties of minerals in the deep Earth, it is necessary to know how their density varies with pressure and temperature. Such a relation is called an equation of state (EOS). A simple example of an EOS that is predicted by the Debye model for harmonic lattice vibrations is the Mie–Grüneisen equation of state:

\left(\frac{dP}{dT}\right) = \frac{\gamma_{D}}{V} C_{V},

where C_V is the heat capacity and γ_D is the Debye gamma. The latter is one of many Grüneisen parameters that play an important role in high-pressure physics. A more realistic EOS is the Birch–Murnaghan equation of state.: 66–73
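As a quick illustration of what the Mie–Grüneisen relation predicts, the thermal pressure slope (dP/dT) at constant volume can be evaluated numerically. The sketch below uses rough MgO-like values; the Debye gamma, heat capacity and molar volume are all assumptions chosen for illustration.

```python
# Thermal pressure slope (dP/dT)_V = gamma_D * C_V / V from the
# Mie-Grüneisen equation of state, with illustrative MgO-like values.

GAMMA_D = 1.5    # Debye gamma (dimensionless, assumed)
C_V = 50.0       # J/(mol K), near the Dulong-Petit limit for MgO (assumed)
V = 1.125e-5     # m^3/mol, molar volume of MgO (assumed)

dP_dT = GAMMA_D * C_V / V   # Pa/K
print(f"(dP/dT)_V ~ {dP_dT / 1e6:.1f} MPa per kelvin")   # -> ~6.7 MPa/K
```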
=== Interpreting seismic velocities ===
Inversion of seismic data gives profiles of seismic velocity as a function of depth. These must still be interpreted in terms of the properties of the minerals. A very useful heuristic was discovered by Francis Birch: plotting data for a large number of rocks, he found a linear relation between the compressional wave velocity v_p of rocks and minerals of a constant average atomic weight \overline{M} and their density \rho:

v_{p} = a\overline{M} + b\rho.

This relationship became known as Birch's law.
This makes it possible to extrapolate known velocities for minerals at the surface to predict velocities deeper in the Earth.
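A minimal sketch of how Birch's law can be used in practice: fit the coefficients a and b to laboratory measurements at constant mean atomic weight, then extrapolate to a higher density. The data points below are invented for illustration.

```python
import numpy as np

# Birch's law: v_p = a*Mbar + b*rho for rocks of constant mean atomic
# weight Mbar. Fit a and b by least squares, then extrapolate.

Mbar = 21.0                                # mean atomic weight (assumed constant)
rho = np.array([2700.0, 3000.0, 3300.0])   # kg/m^3 (illustrative lab data)
vp = np.array([6.0e3, 7.0e3, 8.0e3])       # m/s (illustrative lab data)

# Linear model in the two regressors (Mbar, rho).
A = np.column_stack([np.full_like(rho, Mbar), rho])
(a, b), *_ = np.linalg.lstsq(A, vp, rcond=None)

rho_deep = 4000.0                          # kg/m^3, a deeper-Earth density
print(f"extrapolated v_p ~ {(a * Mbar + b * rho_deep) / 1e3:.1f} km/s")
```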
=== Other physical properties ===
Viscosity
Creep (deformation)
Melting
Electrical conduction and other transport properties
=== Methods of crystal interrogation ===
There are a number of experimental procedures designed to extract information from both single and powdered crystals. Some techniques can be used in a diamond anvil cell (DAC) or a multi anvil press (MAP). Some techniques are summarized in the following table.
=== First principles calculations ===
Using quantum mechanical numerical techniques, it is possible to achieve very accurate predictions of a crystal's properties, including structure, thermodynamic stability, elastic properties and transport properties. The limit of such calculations tends to be computing power, as computation run times of weeks or even months are not uncommon.: 107–109
== History ==
The field of mineral physics was not named until the 1960s, but its origins date back at least to the early 20th century, when seismic work by Oldham and Gutenberg showed that the outer core does not allow shear waves to propagate and must therefore be fluid.
A landmark in the history of mineral physics was the publication of Density of the Earth by Erskine Williamson, a mathematical physicist, and Leason Adams, an experimentalist. Working at the Geophysical Laboratory in the Carnegie Institution of Washington, they considered a problem that had long puzzled scientists. It was known that the average density of the Earth was about twice that of the crust, but it was not known whether this was due to compression or changes in composition in the interior. Williamson and Adams assumed that deeper rock is compressed adiabatically (without releasing heat) and derived the Adams–Williamson equation, which determines the density profile from measured densities and elastic properties of rocks. They measured some of these properties using a 500-ton hydraulic press that applied pressures of up to 1.2 gigapascals (GPa). They concluded that the Earth's mantle had a different composition than the crust, perhaps ferromagnesian silicates, and the core was some combination of iron and nickel. They estimated the pressure and density at the center to be 320 GPa and 10,700 kg/m3, not far off the current estimates of 360 GPa and 13,000 kg/m3.
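A minimal numerical sketch of the Adams–Williamson approach follows, assuming a constant seismic parameter Φ = K_S/ρ (real applications take Φ from seismic velocity profiles); the starting density and the value of Φ are illustrative assumptions, not values from their paper.

```python
import math

# Adams-Williamson equation, d(rho)/dr = -rho(r) * g(r) / Phi(r),
# integrated inward from the top of the mantle with a constant
# seismic parameter Phi (a strong simplification; real profiles use
# Phi(r) = v_p^2 - (4/3) v_s^2 from seismology).

G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
R = 6.371e6          # m, Earth's radius
M = 5.972e24         # kg, Earth's mass
PHI = 5.0e7          # m^2/s^2, assumed constant seismic parameter

rho = 3300.0         # kg/m^3, density at the top of the mantle (assumed)
m = M                # mass enclosed below the current radius
r, dr = R, -1.0e3    # integrate inward in 1 km steps

while r > 0.55 * R:  # stop near the base of the mantle
    g = G * m / r**2
    rho += -rho * g / PHI * dr            # d(rho) = -rho*g/Phi * dr (dr < 0)
    m += 4.0 * math.pi * r**2 * rho * dr  # remove the shell just crossed
    r += dr

print(f"rho at r = {r / 1e3:.0f} km: {rho:.0f} kg/m^3")
```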
The experimental work at the Geophysical Laboratory benefited from the pioneering work of Percy Bridgman at Harvard University, who developed methods for high-pressure research that led to a Nobel Prize in Physics. A student of his, Francis Birch, led a program to apply high-pressure methods to geophysics.
Birch extended the Adams–Williamson equation to include the effects of temperature. In 1952, he published a classic paper, Elasticity and constitution of the Earth's interior, in which he established some basic facts: the mantle is predominantly silicates; the boundary between the upper and lower mantle is associated with a phase transition; and the inner and outer core are both iron alloys.
== References ==
== Further reading ==
== External links ==
"Teaching Mineral Physics Across the Curriculum". On the cutting edge - professional development for geoscience faculty. Retrieved 21 May 2012. | Wikipedia/Mineral_physics |
Bilbao Crystallographic Server is an open access website offering online crystallographic database and programs aimed at analyzing, calculating and visualizing problems of structural and mathematical crystallography, solid state physics and structural chemistry. Initiated in 1997 by the Materials Laboratory of the Department of Condensed Matter Physics at the University of the Basque Country, Bilbao, Spain, the Bilbao Crystallographic Server is developed and maintained by academics.
== Information on contents and an overview of tools hosted ==
Focusing on crystallographic data and applications of the group theory in solid state physics, the server is built on a core of databases and contains different shells.
=== Space Groups Retrieval Tools ===
The set of databases includes data from International Tables for Crystallography, Vol. A: Space-Group Symmetry, and the data on maximal subgroups of space groups as listed in International Tables for Crystallography, Vol. A1: Symmetry Relations between Space Groups. A k-vector database with Brillouin zone figures and classification tables of the k-vectors for space groups is also available via the KVEC tool.
=== Magnetic Space Groups ===
In 2011, magnetic space group data compiled from the works of H. T. Stokes & B. J. Campbell and of D. Litvin (general positions/symmetry operations and Wyckoff positions for different settings, along with systematic absence rules) were incorporated into the server, and a new shell was dedicated to the related tools (MGENPOS, MWYCKPOS, MAGNEXT).
=== Group-Subgroup Relations of Space Groups ===
This shell contains applications which are essential for problems involving group-subgroup relations between space groups. Given the space group types of G and H and their index, the program SUBGROUPGRAPH provides graphs of maximal subgroups for a group-subgroup pair G > H, all the different subgroups H and their distribution into conjugacy classes. The Wyckoff position splitting rules for a group-subgroup pair are calculated by the program WYCKSPLIT.
=== Representation Theory Applications ===
The fourth shell includes programs on representation theory of space and point groups. REPRES constructs little group and full group irreducible representations for a given space group and a k-vector; CORREL deals with the correlations between the irreducible representations of group-subgroup related space groups. The program POINT lists character tables of crystallographic point groups, Kronecker multiplication tables of their irreducible representations and further useful symmetry information.
=== Solid State Theory Applications ===
This shell is related to solid state physics and structural chemistry. The program PSEUDO performs an evaluation of the pseudosymmetry of a given structure with respect to supergroups of its space group. AMPLIMODES performs the symmetry-mode analysis of any distorted structure of displacive type. The analysis consists in decomposing the symmetry-breaking distortion present in the distorted structure into contributions from different symmetry-adapted modes. Given the high and low symmetry structures, the program calculates the amplitudes and polarization vectors of the distortion modes of different symmetry frozen in the structure. The program SAM calculates symmetry-adapted modes for the centre of the Brillouin zone and classifies them according to their infrared and Raman activity. NEUTRON computes the phonon extinction rules in inelastic neutron scattering. Its results are also relevant for diffuse-scattering experiments.
=== Structure Utilities ===
A set of structure utilities has been included for various applications such as: the transformation of unit cells (CELLTRAN) or complete structures (TRANSTRU); strain tensor calculation (STRAIN); assignment of Wyckoff Positions (WPASSIGN); equivalent descriptions of a given structure (EQUIVSTRU); comparison of different structures with support for the affine normalizers of monoclinic space groups. STRUCTURE RELATIONS calculates the possible transformation matrices for a given pair of group-subgroup related structures.
=== Incommensurate Crystal Structures Database ===
The Bilbao Crystallographic Server also hosts the B-IncStrDB: Bilbao Incommensurate Crystal Structures Database, a database for incommensurately modulated and composite structures.
== Scientific Research ==
In addition to receiving citations from scientific articles and theses, the Bilbao Crystallographic Server also actively publishes research reports in internationally reviewed articles, as well as hosting and participating in international workshops, summer schools and conferences. A list of these publications and events is accessible from the server's web page.
== Development History and People ==
The Bilbao Crystallographic Server came to life in 1997 as a scientific project by the Departments of Condensed Matter Physics and Applied Physics II of the University of the Basque Country (EHU) under the supervision of J. Manuel Perez-Mato (EHU) and Mois I. Aroyo (EHU), in coordination with Gotzon Madariaga (EHU) and Hans Wondratschek (Karlsruhe Institute of Technology, Germany), with funding from the Basque government and several ministries of the Spanish government. The initial code was written by then Ph.D. students Eli Kroumova (EHU) and Svet Ivantchev (EHU), and the very first shells, related to retrieval tools, group-subgroup relations and space group representations, soon appeared online.
Afterwards, in collaboration with Harold T. Stokes and Dorian M. Hatch from Brigham Young University, USA, the server extended its services to include symmetry modes analysis. Asen K. Kirov, a Ph.D. student from Sofia University, Bulgaria contributed to the server, working on programs dedicated to irreducible representations and extinction rules.
In 2001, Ph.D. student Cesar Capillas began his research on the server and became the main developer and system administrator focusing on structure relations, such as pseudosymmetry and phase transitions. Danel Orobengoa, also a Ph.D. student, joined the developer team in 2005 and worked mainly on symmetry modes, k-vector classification tables and non-characteristic orbits (in collaboration with Massimo Nespolo of the Nancy-Université, France), writing his Ph.D. thesis on the applications of the server for ferroic materials.
In 2009, Ph.D. student Gemma de la Flor and post-doc Emre S. Tasci were recruited for the development team: de la Flor worked mainly on the identification and interpretation of symmetry operations and on structure comparison, while Tasci became the new system administrator and main developer, focusing on the structure relations concerning phase transitions. The Bilbao Crystallographic Server team took its current (2012) line-up in 2010 with the addition of Ph.D. student Samuel Vidal Gallego, whose main research field is the magnetic space groups.
== References ==
== External links ==
Official website
A crystallographer is a scientist who practices crystallography, that is, who studies crystals.
== Career paths ==
The work of crystallographers spans several academic disciplines, including the life sciences, chemistry, physics, and materials science. They may work in research and manufacturing, which could include growing crystals for use in computer chips, solar cells, or medications. Within the life sciences, they may crystallize biological materials (such as proteins or viruses) or drugs. Their expertise can also be useful in forensic science, and they may study materials using computer simulations.
Most working crystallographers have a graduate degree. There are very few opportunities for those with a bachelor's degree or associate degree.
== By country ==
=== Germany ===
In 2013, one working group, the Young Crystallographers, was established within the German Crystallographic Society (DGK). As of 2024, the Young Crystallographers have about 250 members. The working group also awards the annual Lieselotte Templeton Prize, named after the German-American scientist Lieselotte Templeton.
=== South Africa ===
Of the 78 South African crystallographers profiled in 2001/2, each scientist had on average 2.6 contacts within South Africa and 2.0 contacts internationally. The majority of these scientists worked in Gauteng.
=== United States ===
The U.S. Bureau of Labor Statistics groups crystallographers with geoscientists for statistical purposes. However, as of the 2010s, the largest demand for crystallographers actually comes from the medical and life sciences.
== References ==
Valence shell electron pair repulsion (VSEPR) theory (VESP-ər or və-SEP-ər): 410 is a model used in chemistry to predict the geometry of individual molecules from the number of electron pairs surrounding their central atoms. It is also named the Gillespie–Nyholm theory after its two main developers, Ronald Gillespie and Ronald Nyholm.
The premise of VSEPR is that the valence electron pairs surrounding an atom tend to repel each other. The greater the repulsion, the higher in energy (less stable) the molecule is. Therefore, the VSEPR-predicted molecular geometry of a molecule is the one that has as little of this repulsion as possible. Gillespie has emphasized that the electron-electron repulsion due to the Pauli exclusion principle is more important in determining molecular geometry than the electrostatic repulsion.
The insights of VSEPR theory are derived from topological analysis of the electron density of molecules. Such quantum chemical topology (QCT) methods include the electron localization function (ELF) and the quantum theory of atoms in molecules (AIM or QTAIM).
== History ==
The idea of a correlation between molecular geometry and number of valence electron pairs (both shared and unshared pairs) was originally proposed in 1939 by Ryutaro Tsuchida in Japan, and was independently presented in a Bakerian Lecture in 1940 by Nevil Sidgwick and Herbert Powell of the University of Oxford. In 1957, Ronald Gillespie and Ronald Sydney Nyholm of University College London refined this concept into a more detailed theory, capable of choosing between various alternative geometries.
== Overview ==
VSEPR theory is used to predict the arrangement of electron pairs around central atoms in molecules, especially simple and symmetric molecules. A central atom is defined in this theory as an atom which is bonded to two or more other atoms, while a terminal atom is bonded to only one other atom.: 398 For example, in the molecule methyl isocyanate (H3C-N=C=O), the two carbons and one nitrogen are central atoms, and the three hydrogens and one oxygen are terminal atoms.: 416 The geometry of the central atoms and their non-bonding electron pairs in turn determine the geometry of the larger whole molecule.
The number of electron pairs in the valence shell of a central atom is determined after drawing the Lewis structure of the molecule, and expanding it to show all bonding groups and lone pairs of electrons.: 410–417 In VSEPR theory, a double bond or triple bond is treated as a single bonding group. The sum of the number of atoms bonded to a central atom and the number of lone pairs formed by its nonbonding valence electrons is known as the central atom's steric number.
The electron pairs (or groups if multiple bonds are present) are assumed to lie on the surface of a sphere centered on the central atom and tend to occupy positions that minimize their mutual repulsions by maximizing the distance between them.: 410–417 The number of electron pairs (or groups), therefore, determines the overall geometry that they will adopt. For example, when there are two electron pairs surrounding the central atom, their mutual repulsion is minimal when they lie at opposite poles of the sphere. Therefore, the central atom is predicted to adopt a linear geometry. If there are 3 electron pairs surrounding the central atom, their repulsion is minimized by placing them at the vertices of an equilateral triangle centered on the atom. Therefore, the predicted geometry is trigonal. Likewise, for 4 electron pairs, the optimal arrangement is tetrahedral.: 410–417
As a tool in predicting the geometry adopted with a given number of electron pairs, an often used physical demonstration of the principle of minimal electron pair repulsion utilizes inflated balloons. Through handling, balloons acquire a slight surface electrostatic charge that results in the adoption of roughly the same geometries when they are tied together at their stems as the corresponding number of electron pairs. For example, five balloons tied together adopt the trigonal bipyramidal geometry, just as do the five bonding pairs of a PCl5 molecule.
=== Steric number ===
The steric number of a central atom in a molecule is the number of atoms bonded to that central atom, called its coordination number, plus the number of lone pairs of valence electrons on the central atom. In the molecule SF4, for example, the central sulfur atom has four ligands; the coordination number of sulfur is four. In addition to the four ligands, sulfur also has one lone pair in this molecule. Thus, the steric number is 4 + 1 = 5.
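The steric-number bookkeeping is simple enough to express in a few lines of code. The sketch below encodes the definition above together with the common base geometries; it is illustrative, not an exhaustive table.

```python
# Steric number = (atoms bonded to the central atom) + (lone pairs on
# the central atom), as in the SF4 example above (4 ligands + 1 lone
# pair = 5). Base geometries for the most common steric numbers:

BASE_GEOMETRY = {
    2: "linear",
    3: "trigonal planar",
    4: "tetrahedral",
    5: "trigonal bipyramidal",
    6: "octahedral",
    7: "pentagonal bipyramidal",
}

def steric_number(n_ligands, n_lone_pairs):
    return n_ligands + n_lone_pairs

sn = steric_number(4, 1)        # SF4: four F ligands, one lone pair
print(sn, BASE_GEOMETRY[sn])    # -> 5 trigonal bipyramidal
```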
=== Degree of repulsion ===
The overall geometry is further refined by distinguishing between bonding and nonbonding electron pairs. The bonding electron pair shared in a sigma bond with an adjacent atom lies further from the central atom than a nonbonding (lone) pair of that atom, which is held close to its positively charged nucleus. VSEPR theory therefore views repulsion by the lone pair to be greater than the repulsion by a bonding pair. As such, when a molecule has 2 interactions with different degrees of repulsion, VSEPR theory predicts the structure where lone pairs occupy positions that allow them to experience less repulsion. Lone pair–lone pair (lp–lp) repulsions are considered stronger than lone pair–bonding pair (lp–bp) repulsions, which in turn are considered stronger than bonding pair–bonding pair (bp–bp) repulsions, distinctions that then guide decisions about overall geometry when 2 or more non-equivalent positions are possible.: 410–417 For instance, when 5 valence electron pairs surround a central atom, they adopt a trigonal bipyramidal molecular geometry with two collinear axial positions and three equatorial positions. An electron pair in an axial position has three close equatorial neighbors only 90° away and a fourth much farther at 180°, while an equatorial electron pair has only two adjacent pairs at 90° and two at 120°. The repulsion from the close neighbors at 90° is more important, so that the axial positions experience more repulsion than the equatorial positions; hence, when there are lone pairs, they tend to occupy equatorial positions as shown in the diagrams of the next section for steric number five.
The difference between lone pairs and bonding pairs may also be used to rationalize deviations from idealized geometries. For example, the H2O molecule has four electron pairs in its valence shell: two lone pairs and two bond pairs. The four electron pairs are spread so as to point roughly towards the apices of a tetrahedron. However, the bond angle between the two O–H bonds is only 104.5°, rather than the 109.5° of a regular tetrahedron, because the two lone pairs (whose density or probability envelopes lie closer to the oxygen nucleus) exert a greater mutual repulsion than the two bond pairs.: 410–417
A bond of higher bond order also exerts greater repulsion since the pi bond electrons contribute. For example, in isobutylene, (H3C)2C=CH2, the H3C−C=C angle (124°) is larger than the H3C−C−CH3 angle (111.5°). However, in the carbonate ion, CO2−3, all three C−O bonds are equivalent with angles of 120° due to resonance.
== AXE method ==
The "AXE method" of electron counting is commonly used when applying the VSEPR theory. The electron pairs around a central atom are represented by a formula AXmEn, where A represents the central atom and always has an implied subscript one. Each X represents a ligand (an atom bonded to A). Each E represents a lone pair of electrons on the central atom.: 410–417 The total number of X and E is known as the steric number. For example, in a molecule AX3E2, the atom A has a steric number of 5.
When the substituent (X) atoms are not all the same, the geometry is still approximately valid, but the bond angles may be slightly different from the ones where all the outside atoms are the same. For example, the double-bond carbons in alkenes like C2H4 are AX3E0, but the bond angles are not all exactly 120°. Likewise, SOCl2 is AX3E1, but because the X substituents are not identical, the X–A–X angles are not all equal.
Based on the steric number and distribution of Xs and Es, VSEPR theory makes the predictions in the following tables.
=== Main-group elements ===
For main-group elements, there are stereochemically active lone pairs E whose number can vary from 0 to 3. Note that the geometries are named according to the atomic positions only and not the electron arrangement. For example, the description of AX2E1 as a bent molecule means that the three atoms AX2 are not in one straight line, although the lone pair helps to determine the geometry.
=== Transition metals (Kepert model) ===
The lone pairs on transition metal atoms are usually stereochemically inactive, meaning that their presence does not change the molecular geometry. For example, the hexaaquo complexes M(H2O)6 are all octahedral for M = V3+, Mn3+, Co3+, Ni2+ and Zn2+, despite the fact that the electronic configurations of the central metal ion are d2, d4, d6, d8 and d10 respectively.: 542 The Kepert model ignores all lone pairs on transition metal atoms, so that the geometry around all such atoms corresponds to the VSEPR geometry for AXn with 0 lone pairs E.: 542 This is often written MLn, where M = metal and L = ligand. The Kepert model predicts the following geometries for coordination numbers of 2 through 9:
== Examples ==
The methane molecule (CH4) is tetrahedral because there are four pairs of electrons. The four hydrogen atoms are positioned at the vertices of a tetrahedron, and the bond angle is cos−1(−1⁄3) ≈ 109° 28′. This is referred to as an AX4 type of molecule. As mentioned above, A represents the central atom and X represents an outer atom.: 410–417
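The quoted tetrahedral angle follows directly from cos−1(−1⁄3) and can be verified numerically:

```python
import math

# The ideal tetrahedral bond angle quoted above is arccos(-1/3).
angle = math.degrees(math.acos(-1.0 / 3.0))
print(f"{angle:.2f} degrees")   # -> 109.47 degrees (= 109 deg 28 min)
```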
The ammonia molecule (NH3) has three pairs of electrons involved in bonding, but there is a lone pair of electrons on the nitrogen atom.: 392–393 It is not bonded with another atom; however, it influences the overall shape through repulsions. As in methane above, there are four regions of electron density. Therefore, the overall orientation of the regions of electron density is tetrahedral. On the other hand, there are only three outer atoms. This is referred to as an AX3E type molecule because the lone pair is represented by an E.: 410–417 By definition, the molecular shape or geometry describes the geometric arrangement of the atomic nuclei only, which is trigonal-pyramidal for NH3.: 410–417
Steric numbers of 7 or greater are possible, but are less common. The steric number of 7 occurs in iodine heptafluoride (IF7); the base geometry for a steric number of 7 is pentagonal bipyramidal. The most common geometry for a steric number of 8 is a square antiprismatic geometry.: 1165 Examples of this include the octacyanomolybdate (Mo(CN)4−8) and octafluorozirconate (ZrF4−8) anions.: 1165 The nonahydridorhenate ion (ReH2−9) in potassium nonahydridorhenate is a rare example of a compound with a steric number of 9, which has a tricapped trigonal prismatic geometry.: 254
Steric numbers beyond 9 are very rare, and it is not clear what geometry is generally favoured. Possible geometries for steric numbers of 10, 11, 12, or 14 are bicapped square antiprismatic (or bicapped dodecadeltahedral), octadecahedral, icosahedral, and bicapped hexagonal antiprismatic, respectively. No compounds with steric numbers this high involving monodentate ligands exist, and those involving multidentate ligands can often be analysed more simply as complexes with lower steric numbers when some multidentate ligands are treated as a unit.: 1165, 1721
== Exceptions ==
There are groups of compounds where VSEPR fails to predict the correct geometry.
=== Some AX2E0 molecules ===
The shapes of heavier Group 14 element alkyne analogues (RM≡MR, where M = Si, Ge, Sn or Pb) have been computed to be bent.
=== Some AX2E2 molecules ===
One example of the AX2E2 geometry is molecular lithium oxide, Li2O, a linear rather than bent structure, which is ascribed to its bonds being essentially ionic and the strong lithium-lithium repulsion that results. Another example is O(SiH3)2 with an Si–O–Si angle of 144.1°, which compares to the angles in Cl2O (110.9°), (CH3)2O (111.7°), and N(CH3)3 (110.9°). Gillespie and Robinson rationalize the Si–O–Si bond angle based on the observed ability of a ligand's lone pair to most greatly repel other electron pairs when the ligand electronegativity is greater than or equal to that of the central atom. In O(SiH3)2, the central atom is more electronegative, and the lone pairs are less localized and more weakly repulsive. The larger Si–O–Si bond angle results from this and strong ligand-ligand repulsion by the relatively large -SiH3 ligand. Burford et al. showed through X-ray diffraction studies that Cl3Al–O–PCl3 has a linear Al–O–P bond angle and is therefore a non-VSEPR molecule.
=== Some AX6E1 and AX8E1 molecules ===
Some AX6E1 molecules, e.g. xenon hexafluoride (XeF6) and the Te(IV) and Bi(III) anions, TeCl2−6, TeBr2−6, BiCl3−6, BiBr3−6 and BiI3−6, are octahedral, rather than pentagonal pyramids, and the lone pair does not affect the geometry to the degree predicted by VSEPR. Similarly, the octafluoroxenate ion (XeF2−8) in nitrosonium octafluoroxenate(VI): 498 is a square antiprism with minimal distortion, despite having a lone pair. One rationalization is that steric crowding of the ligands allows little or no room for the non-bonding lone pair; another rationalization is the inert-pair effect.: 214
=== Square planar ML4 complexes ===
The Kepert model predicts that ML4 transition metal molecules are tetrahedral in shape, and it cannot explain the formation of square planar complexes.: 542 The majority of such complexes exhibit a d8 configuration as for the tetrachloroplatinate (PtCl2−4) ion. The explanation of the shape of square planar complexes involves electronic effects and requires the use of crystal field theory.: 562–4
=== Complexes with strong d-contribution ===
Some transition metal complexes with low d electron count have unusual geometries, which can be ascribed to d subshell bonding interaction. Gillespie found that this interaction produces bonding pairs that also occupy the respective antipodal points (ligand opposed) of the sphere. This phenomenon is an electronic effect resulting from the bilobed shape of the underlying sdx hybrid orbitals. The repulsion of these bonding pairs leads to a different set of shapes.
The gas phase structures of the triatomic halides of the heavier members of group 2, (i.e., calcium, strontium and barium halides, MX2), are not linear as predicted but are bent, (approximate X–M–X angles: CaF2, 145°; SrF2, 120°; BaF2, 108°; SrCl2, 130°; BaCl2, 115°; BaBr2, 115°; BaI2, 105°). It has been proposed by Gillespie that this is also caused by bonding interaction of the ligands with the d subshell of the metal atom, thus influencing the molecular geometry.
=== Superheavy elements ===
Relativistic effects on the electron orbitals of superheavy elements are predicted to influence the molecular geometry of some compounds. For instance, the 6d5/2 electrons in nihonium play an unexpectedly strong role in bonding, so NhF3 should assume a T-shaped geometry, instead of a trigonal planar geometry like its lighter congener BF3. In contrast, the extra stability of the 7p1/2 electrons in tennessine is predicted to make TsF3 trigonal planar, unlike the T-shaped geometry observed for IF3 and predicted for AtF3; similarly, OgF4 should have a tetrahedral geometry, while XeF4 has a square planar geometry and RnF4 is predicted to have the same.
== Odd-electron molecules ==
The VSEPR theory can be extended to molecules with an odd number of electrons by treating the unpaired electron as a "half electron pair"—for example, Gillespie and Nyholm: 364–365 suggested that the decrease in the bond angle in the series NO+2 (180°), NO2 (134°), NO−2 (115°) indicates that a given set of bonding electron pairs exert a weaker repulsion on a single non-bonding electron than on a pair of non-bonding electrons. In effect, they considered nitrogen dioxide as an AX2E0.5 molecule, with a geometry intermediate between NO+2 and NO−2. Similarly, chlorine dioxide (ClO2) is an AX2E1.5 molecule, with a geometry intermediate between ClO+2 and ClO−2.
Finally, the methyl radical (CH3) is predicted to be trigonal pyramidal like the methyl anion (CH−3), but with a larger bond angle (as in the trigonal planar methyl cation (CH+3)). However, in this case, the VSEPR prediction is not quite true, as CH3 is actually planar, although its distortion to a pyramidal geometry requires very little energy.
== See also ==
Bent's rule (effect of ligand electronegativity)
Comparison of software for molecular mechanics modeling
Linear combination of atomic orbitals
Molecular geometry
Molecular modelling
Molecular Orbital Theory (MOT)
Thomson problem
Valence Bond Theory (VBT)
Valency interaction formula
== References ==
== Further reading ==
Lagowski, J. J., ed. (2004). Chemistry: Foundations and Applications. Vol. 3. New York: Macmillan. pp. 99–104. ISBN 978-0-02-865721-9.
== External links ==
VSEPR AR—3D VSEPR Theory Visualization with Augmented Reality app
3D Chem—Chemistry, structures, and 3D molecules
Indiana University Molecular Structure Center (IUMSC)
The Crystallography Open Database (COD) is a database of crystal structures. Unlike similar crystallographic databases, the database is entirely open-access, with registered users able to contribute published and unpublished structures of small molecules and small to medium-sized unit cell crystals to the database. As of November 2024, the database has more than 520,000 entries. The database has various contributors, and contains Crystallographic Information Files as defined by the International Union of Crystallography (IUCr). There are currently five sites worldwide that mirror this database. The 3D structures of compounds can be converted to input files for 3D printers.
== See also ==
Crystallography
Crystallographic database
== References ==
== External links ==
https://www.crystallography.net
http://cod.ibt.lt/
https://archive.today/20130714225104/http://cod.ensicaen.fr/
https://qiserver.ugr.es/cod/
http://nanocrystallography.research.pdx.edu/search/codmirror/
https://nanocrystallography.research.pdx.edu
https://crystallography.io/ Archived 2022-05-16 at the Wayback Machine
Electroanalytical methods are a class of techniques in analytical chemistry which study an analyte by measuring the potential (volts) and/or current (amperes) in an electrochemical cell containing the analyte. These methods can be broken down into several categories depending on which aspects of the cell are controlled and which are measured. The three main categories are potentiometry (the difference in electrode potentials is measured), amperometry (the electric current is the analytical signal), and coulometry (the charge passed during a certain time is recorded).
== Potentiometry ==
Potentiometry passively measures the potential of a solution between two electrodes, affecting the solution very little in the process. One electrode is called the reference electrode and has a constant potential, while the other one is an indicator electrode whose potential changes with the sample's composition. Therefore, the difference in potential between the two electrodes gives an assessment of the sample's composition. In fact, since the potentiometric measurement is a non-destructive measurement, assuming that the electrode is in equilibrium with the solution, we are measuring the solution's potential.
Potentiometry usually uses indicator electrodes made selectively sensitive to the ion of interest, such as fluoride in fluoride selective electrodes, so that the potential solely depends on the activity of this ion of interest.
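The idealized response of such an ion-selective electrode is described by the Nernst equation. The sketch below shows the familiar ~59 mV-per-decade behaviour at room temperature; the standard-potential value E0 is an arbitrary placeholder, not a tabulated constant.

```python
import math

# Idealized ion-selective electrode response (Nernst equation):
#   E = E0 + (R*T / (z*F)) * ln(a)
# where a is the activity of the ion of interest and z its charge.

R, F = 8.314, 96485.0   # J/(mol K), C/mol
T = 298.15              # K
E0 = 0.200              # V, placeholder electrode constant (assumed)

def electrode_potential(activity, z):
    return E0 + (R * T / (z * F)) * math.log(activity)

# Fluoride (z = -1): a tenfold increase in activity shifts E by ~ -59 mV.
for a in (1e-4, 1e-3):
    print(f"a = {a:.0e}: E = {electrode_potential(a, -1) * 1000:.1f} mV")
```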
The time it takes the electrode to establish equilibrium with the solution affects the sensitivity and accuracy of the measurement. In aquatic environments, platinum is often used due to its high electron transfer kinetics, although an electrode made from several metals can be used in order to enhance the electron transfer kinetics. The most common potentiometric electrode is by far the glass-membrane electrode used in a pH meter.
A variant of potentiometry is chronopotentiometry, which consists of applying a constant current and measuring the potential as a function of time. It was initiated by Weber.
== Amperometry ==
Amperometry denotes the family of electrochemical techniques in which a current is measured as a function of an independent variable, typically time (chronoamperometry) or electrode potential (voltammetry). Chronoamperometry is the technique in which the current is measured at a fixed potential at different times since the start of polarisation. It is typically carried out in unstirred solution at a fixed electrode, i.e., under experimental conditions that avoid convection as a mode of mass transfer to the electrode. Voltammetry, on the other hand, is a subclass of amperometry in which the current is measured while the potential applied to the electrode is varied. The different voltammetric techniques are defined by the waveform that describes how the potential varies as a function of time.
=== Chronoamperometry ===
In chronoamperometry, a sudden step in potential is applied at the working electrode and the current is measured as a function of time. Since this is not an exhaustive electrolysis method, microelectrodes are used and the experiments are kept very short, typically 20 ms to 1 s, so as not to consume the analyte.
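For a planar macroelectrode under pure diffusion control, the current transient after the potential step follows the Cottrell equation, i(t) = nFAc√(D/(πt)). The sketch below evaluates it over the time window mentioned above; the electrode area and diffusion coefficient are illustrative assumptions.

```python
import math

# Cottrell equation for the current transient after a potential step
# at a planar electrode: i(t) = n * F * A * c * sqrt(D / (pi * t)).

F = 96485.0     # C/mol
n = 1           # electrons transferred
A = 7.85e-7     # m^2, area of a 0.5 mm radius disc electrode (assumed)
c = 1.0         # mol/m^3 (= 1 mmol/L, typical per the text)
D = 1.0e-9      # m^2/s, diffusion coefficient (assumed)

for t in (0.02, 0.1, 1.0):   # s, within the 20 ms - 1 s window above
    i = n * F * A * c * math.sqrt(D / (math.pi * t))
    print(f"t = {t:5.2f} s: i = {i * 1e6:.2f} uA")
```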
=== Voltammetry ===
Voltammetry consists of applying a constant and/or varying potential at an electrode's surface and measuring the resulting current with a three-electrode system. This method can reveal the reduction potential of an analyte and its electrochemical reactivity. It is, in practical terms, non-destructive, since only a very small amount of the analyte is consumed at the two-dimensional surface of the working and auxiliary electrodes. In practice, the analyte solution is usually disposed of, since it is difficult to separate the analyte from the bulk electrolyte and the experiment requires only a small amount of analyte. A normal experiment may involve 1–10 mL of solution with an analyte concentration between 1 and 10 mmol/L. More advanced voltammetric techniques can work with microliter volumes and down to nanomolar concentrations. Chemically modified electrodes are employed for the analysis of organic and inorganic samples.
==== Polarography ====
Polarography is a subclass of voltammetry that uses a dropping mercury electrode as the working electrode.
== Coulometry ==
Coulometry uses applied current or potential to convert an analyte from one oxidation state to another completely. In these experiments, the total current passed is measured directly or indirectly to determine the number of electrons passed. Knowing the number of electrons passed can indicate the concentration of the analyte or when the concentration is known, the number of electrons transferred in the redox reaction. Typical forms of coulometry include bulk electrolysis, also known as Potentiostatic coulometry or controlled potential coulometry, as well as a variety of coulometric titrations.
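The underlying bookkeeping is Faraday's law of electrolysis: the amount of analyte converted is Q/(zF). A minimal sketch, with illustrative numbers:

```python
# Coulometric analysis via Faraday's law: moles converted = Q / (z * F),
# where Q is the charge passed and z the electrons per molecule.

F = 96485.0   # C/mol

def moles_converted(charge_coulombs, z):
    return charge_coulombs / (z * F)

# Example: 19.3 C passed in a 2-electron reduction of a 10 mL sample.
mol = moles_converted(19.3, 2)
print(f"{mol * 1e6:.1f} umol -> {mol / 0.010 * 1000:.2f} mmol/L")
# -> 100.0 umol -> 10.00 mmol/L
```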
== References ==
== Bibliography ==
In chemical analysis, chromatography is a laboratory technique for the separation of a mixture into its components. The mixture is dissolved in a fluid solvent (gas or liquid) called the mobile phase, which carries it through a system (a column, a capillary tube, a plate, or a sheet) on which a material called the stationary phase is fixed. Because the different constituents of the mixture tend to have different affinities for the stationary phase and are retained for different lengths of time depending on their interactions with its surface sites, the constituents travel at different apparent velocities in the mobile fluid, causing them to separate. The separation is based on the differential partitioning between the mobile and the stationary phases. Subtle differences in a compound's partition coefficient result in differential retention on the stationary phase and thus affect the separation.
Chromatography may be preparative or analytical. The purpose of preparative chromatography is to separate the components of a mixture for later use, and is thus a form of purification. This process is associated with higher costs due to its mode of production. Analytical chromatography is done normally with smaller amounts of material and is for establishing the presence or measuring the relative proportions of analytes in a mixture. The two types are not mutually exclusive.
== Etymology and pronunciation ==
Chromatography is derived from Greek χρῶμα chrōma, which means "color", and γράφειν gráphein, which means "to write". The combination of these two terms was directly inherited from the invention of the technique, which was first used to separate biological pigments.
== History ==
The method was developed by botanist Mikhail Tsvet in 1901–1905 in universities of Kazan and Warsaw. He developed the technique and coined the term chromatography in the first decade of the 20th century, primarily for the separation of plant pigments such as chlorophyll, carotenes, and xanthophylls. Since these components separate in bands of different colors (green, orange, and yellow, respectively) they directly inspired the name of the technique. New types of chromatography developed during the 1930s and 1940s made the technique useful for many separation processes.
Chromatography technique developed substantially as a result of the work of Archer John Porter Martin and Richard Laurence Millington Synge during the 1940s and 1950s, for which they won the 1952 Nobel Prize in Chemistry. They established the principles and basic techniques of partition chromatography, and their work encouraged the rapid development of several chromatographic methods: paper chromatography, gas chromatography, and what would become known as high-performance liquid chromatography. Since then, the technology has advanced rapidly. Researchers found that the main principles of Tsvet's chromatography could be applied in many different ways, resulting in the different varieties of chromatography described below. Advances are continually improving the technical performance of chromatography, allowing the separation of increasingly similar molecules.
== Terms ==
Analyte – the substance to be separated during chromatography. It is also normally what is needed from the mixture.
Analytical chromatography – the use of chromatography to determine the existence and possibly also the concentration of analyte(s) in a sample.
Bonded phase – a stationary phase that is covalently bonded to the support particles or to the inside wall of the column tubing.
Chromatogram – the visual output of the chromatograph. In the case of an optimal separation, different peaks or patterns on the chromatogram correspond to different components of the separated mixture. Plotted on the x-axis is the retention time and plotted on the y-axis a signal (for example obtained by a spectrophotometer, mass spectrometer or a variety of other detectors) corresponding to the response created by the analytes exiting the system. In the case of an optimal system the signal is proportional to the concentration of the specific analyte separated.
Chromatograph – an instrument that enables a sophisticated separation, e.g. gas chromatographic or liquid chromatographic separation.
Chromatography – a physical method of separation that distributes components to separate between two phases, one stationary (stationary phase), the other (the mobile phase) moving in a definite direction.
Eluent (sometimes spelled eluant) – the solvent or solvent mixture used in elution chromatography; synonymous with mobile phase.
Eluate – the mixture of solute (see Eluite) and solvent (see Eluent) exiting the column.
Effluent – the stream flowing out of a chromatographic column. In practice, it is used synonymously with eluate, but the term more precisely refers to the stream independent of the separation taking place.
Eluite – a more precise term for solute or analyte. It is a sample component leaving the chromatographic column.
Eluotropic series – a list of solvents ranked according to their eluting power.
Immobilized phase – a stationary phase that is immobilized on the support particles, or on the inner wall of the column tubing.
Mobile phase – the phase that moves in a definite direction. It may be a liquid (LC and capillary electrochromatography, CEC), a gas (GC), or a supercritical fluid (supercritical-fluid chromatography, SFC). The mobile phase consists of the sample being separated/analyzed and the solvent that moves the sample through the column. In the case of HPLC the mobile phase consists of a non-polar solvent(s) such as hexane in normal phase or a polar solvent such as methanol in reverse phase chromatography and the sample being separated. The mobile phase moves through the chromatography column (the stationary phase) where the sample interacts with the stationary phase and is separated.
Preparative chromatography – the use of chromatography to purify sufficient quantities of a substance for further use, rather than analysis.
Retention time – the characteristic time it takes for a particular analyte to pass through the system (from the column inlet to the detector) under set conditions. See also: Kovats' retention index
Sample – the matter analyzed in chromatography. It may consist of a single component or it may be a mixture of components. When the sample is treated in the course of an analysis, the phase or the phases containing the analytes of interest is/are referred to as the sample whereas everything out of interest separated from the sample before or in the course of the analysis is referred to as waste.
Solute – the sample components in partition chromatography.
Solvent – any substance capable of solubilizing another substance, and especially the liquid mobile phase in liquid chromatography.
Stationary phase – the substance fixed in place for the chromatography procedure. Examples include the silica layer in thin-layer chromatography
Detector – the instrument used for qualitative and quantitative detection of analytes after separation.
Chromatography is based on the concept of the partition coefficient. Any solute partitions between two immiscible solvents. When one solvent is made immobile (by adsorption on a solid support matrix) and the other mobile, the result is the most common application of chromatography. If the support matrix, or stationary phase, is polar (e.g., cellulose or silica), the technique is normal phase chromatography; if a non-polar stationary phase (e.g., a non-polar C-18 derivative) is used, the technique is known as reversed phase.
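In idealized partition chromatography, the partition coefficient K translates into retention through the retention factor k = K(Vs/Vm) and the retention time t_R = t0(1 + k). A minimal sketch, with an assumed phase-volume ratio and dead time:

```python
# Link between the partition coefficient
#   K = [solute]_stationary / [solute]_mobile
# and retention: k = K * (Vs / Vm), t_R = t0 * (1 + k).

t0 = 60.0          # s, dead time of an unretained solute (assumed)
Vs_over_Vm = 0.1   # stationary/mobile phase volume ratio (assumed)

def retention_time(K):
    k = K * Vs_over_Vm
    return t0 * (1.0 + k)

# A solute partitioning more strongly into the stationary phase elutes later.
for K in (5.0, 50.0):
    print(f"K = {K:4.0f}: t_R = {retention_time(K):.0f} s")
```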
== Techniques by chromatographic bed shape ==
=== Column chromatography ===
Column chromatography is a separation technique in which the stationary bed is within a tube. The particles of the solid stationary phase or the support coated with a liquid stationary phase may fill the whole inside volume of the tube (packed column) or be concentrated on or along the inside tube wall, leaving an open, unrestricted path for the mobile phase in the middle part of the tube (open tubular column). Differences in rates of movement through the medium translate into different retention times for the components of the sample.
In 1978, W. Clark Still introduced a modified version of column chromatography called flash column chromatography (flash). The technique is very similar to the traditional column chromatography, except that the solvent is driven through the column by applying positive pressure. This allowed most separations to be performed in less than 20 minutes, with improved separations compared to the old method. Modern flash chromatography systems are sold as pre-packed plastic cartridges, and the solvent is pumped through the cartridge. Systems may also be linked with detectors and fraction collectors providing automation. The introduction of gradient pumps resulted in quicker separations and less solvent usage.
In expanded bed adsorption, a fluidized bed is used, rather than a solid phase made by a packed bed. This allows omission of initial clearing steps such as centrifugation and filtration, for culture broths or slurries of broken cells.
Phosphocellulose chromatography utilizes the binding affinity of many DNA-binding proteins for phosphocellulose. The stronger a protein's interaction with DNA, the higher the salt concentration needed to elute that protein.
=== Planar chromatography ===
Planar chromatography is a separation technique in which the stationary phase is present as or on a plane. The plane can be a paper, serving as such or impregnated by a substance as the stationary bed (paper chromatography) or a layer of solid particles spread on a support such as a glass plate (thin-layer chromatography). Different compounds in the sample mixture travel different distances according to how strongly they interact with the stationary phase as compared to the mobile phase. The specific Retention factor (Rf) of each chemical can be used to aid in the identification of an unknown substance.
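The retention factor Rf mentioned above is simply the ratio of the distance travelled by a compound to the distance travelled by the solvent front, so it is a number between 0 and 1. A trivial sketch with illustrative plate measurements:

```python
# Retention factor (Rf) in planar chromatography: distance moved by
# the spot divided by the distance moved by the solvent front.

def rf(spot_distance_cm, solvent_front_cm):
    return spot_distance_cm / solvent_front_cm

print(f"Rf = {rf(2.5, 5.0):.2f}")   # -> Rf = 0.50
```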
==== Paper chromatography ====
Paper chromatography is a technique that involves placing a small dot or line of sample solution onto a strip of chromatography paper. The paper is placed in a container with a shallow layer of solvent and sealed. As the solvent rises through the paper, it meets the sample mixture, which starts to travel up the paper with the solvent. This paper is made of cellulose, a polar substance, and the compounds within the mixture travel further if they are less polar. More polar substances bond with the cellulose paper more quickly, and therefore do not travel as far.
==== Thin-layer chromatography (TLC) ====
Thin-layer chromatography (TLC) is a widely employed laboratory technique used to separate different biochemicals on the basis of their relative attractions to the stationary and mobile phases. It is similar to paper chromatography. However, instead of using a stationary phase of paper, it involves a stationary phase of a thin layer of adsorbent like silica gel, alumina, or cellulose on a flat, inert substrate. TLC is very versatile; multiple samples can be separated simultaneously on the same layer, making it very useful for screening applications such as testing drug levels and water purity.
The possibility of cross-contamination is low, since each separation is performed on a new layer. Compared to paper, it has the advantage of faster runs, better separations, better quantitative analysis, and the choice between different adsorbents. For even better resolution and faster separation that utilizes less solvent, high-performance TLC can be used. An older popular use had been to differentiate chromosomes by observing distance in gel (separation was a separate step).
== Displacement chromatography ==
The basic principle of displacement chromatography is:
A molecule with a high affinity for the chromatography matrix (the displacer) competes effectively for binding sites, and thus displaces all molecules with lesser affinities.
There are distinct differences between displacement and elution chromatography. In elution mode, substances typically emerge from a column in narrow, Gaussian peaks. Wide separation of peaks, preferably to baseline, is desired for maximum purification. The speed at which any component of a mixture travels down the column in elution mode depends on many factors. But for two substances to travel at different speeds, and thereby be resolved, there must be substantial differences in some interaction between the biomolecules and the chromatography matrix. Operating parameters are adjusted to maximize the effect of this difference. In many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. Thus, two drawbacks to elution mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. Displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than "peaks". Because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentrations.
== Techniques by physical state of mobile phase ==
=== Gas chromatography ===
Gas chromatography (GC), also sometimes known as gas-liquid chromatography (GLC), is a separation technique in which the mobile phase is a gas. Gas chromatographic separation is always carried out in a column, which is typically "packed" or "capillary". Packed columns are the routine workhorses of gas chromatography, being cheaper and easier to use and often giving adequate performance. Capillary columns generally give far superior resolution and, although more expensive, are becoming widely used, especially for complex mixtures. Capillary columns can be split into three classes: porous layer open tubular (PLOT), wall-coated open tubular (WCOT) and support-coated open tubular (SCOT) columns. In PLOT columns the stationary phase is adsorbed onto the column walls, while in WCOT columns the stationary phase is chemically bonded to the walls. SCOT columns combine the two types: support particles adhere to the column walls, and those particles carry a liquid phase chemically bonded onto them. Both packed and capillary columns are made from non-adsorbent and chemically inert materials. Stainless steel and glass are the usual materials for packed columns, and quartz or fused silica for capillary columns.
Gas chromatography is based on a partition equilibrium of analyte between a solid or viscous liquid stationary phase (often a liquid silicone-based material) and a mobile gas (most often helium). The stationary phase is adhered to the inside of a small-diameter (commonly 0.18–0.53 mm inside diameter) glass or fused-silica tube (a capillary column) or a solid matrix inside a larger metal tube (a packed column). It is widely used in analytical chemistry; though the high temperatures used in GC make it unsuitable for high molecular weight biopolymers or proteins (heat denatures them), frequently encountered in biochemistry, it is well suited for use in the petrochemical, environmental monitoring and remediation, and industrial chemical fields. It is also used extensively in chemistry research.
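Retention in a wall-coated capillary column can be related to the partition coefficient through the column's phase ratio. The sketch below shows the standard relationships; the column dimensions and the partition coefficient value are illustrative, not data from any particular instrument.

```python
def phase_ratio(inner_diameter_um: float, film_thickness_um: float) -> float:
    """Phase ratio of a wall-coated open tubular column: beta = r / (2 * df)."""
    return (inner_diameter_um / 2.0) / (2.0 * film_thickness_um)

def retention_factor(K: float, beta: float) -> float:
    """k = K / beta relates the partition coefficient K to the retention factor k."""
    return K / beta

beta = phase_ratio(250.0, 0.25)        # a common 0.25 mm i.d., 0.25 um film column
print(beta)                            # -> 250.0
print(retention_factor(1000.0, beta))  # k = 4.0 for an assumed K of 1000
```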
=== Liquid chromatography ===
Liquid chromatography (LC) is a separation technique in which the mobile phase is a liquid. It can be carried out either in a column or a plane. Present day liquid chromatography that generally utilizes very small packing particles and a relatively high pressure is referred to as high-performance liquid chromatography.
In HPLC the sample is forced by a liquid at high pressure (the mobile phase) through a column that is packed with a stationary phase composed of irregularly or spherically shaped particles, a porous monolithic layer, or a porous membrane. Monoliths are "sponge-like chromatographic media" and are made up of an unending block of organic or inorganic parts. HPLC is historically divided into two different sub-classes based on the polarity of the mobile and stationary phases. Methods in which the stationary phase is more polar than the mobile phase (e.g., toluene as the mobile phase, silica as the stationary phase) are termed normal phase liquid chromatography (NPLC) and the opposite (e.g., water-methanol mixture as the mobile phase and C18 (octadecylsilyl) as the stationary phase) is termed reversed phase liquid chromatography (RPLC).
=== Supercritical fluid chromatography ===
Supercritical fluid chromatography is a separation technique in which the mobile phase is a fluid above and relatively close to its critical temperature and pressure.
== Techniques by separation mechanism ==
=== Affinity chromatography ===
Affinity chromatography is based on selective non-covalent interaction between an analyte and specific molecules. It is very specific, but not very robust. It is often used in biochemistry in the purification of proteins bound to tags. These fusion proteins are labeled with compounds such as His-tags, biotin or antigens, which bind to the stationary phase specifically. After purification, these tags are usually removed and the pure protein is obtained.
Affinity chromatography often utilizes a biomolecule's affinity for the cations of a metal (Zn, Cu, Fe, etc.). Columns are often manually prepared and can be designed specifically for the proteins of interest. Traditional affinity columns are used as a preparative step to flush out unwanted biomolecules, or as a primary step in analyzing a protein with unknown physical properties.
Liquid chromatography techniques also exist that exploit affinity properties. Immobilized metal affinity chromatography (IMAC) is useful for separating the aforementioned molecules based on their relative affinity for the metal. Often these columns can be loaded with different metals to create a column with a targeted affinity.
=== Ion exchange chromatography ===
Ion exchange chromatography (usually referred to as ion chromatography) uses an ion exchange mechanism to separate analytes based on their respective charges. It is usually performed in columns but can also be useful in planar mode. Ion exchange chromatography uses a charged stationary phase to separate charged compounds including anions, cations, amino acids, peptides, and proteins. In conventional methods the stationary phase is an ion-exchange resin that carries charged functional groups that interact with oppositely charged groups of the compound to be retained. There are two types of ion exchange chromatography: cation exchange and anion exchange. In cation-exchange chromatography the stationary phase has a negative charge and the exchangeable ion is a cation, whereas in anion-exchange chromatography the stationary phase has a positive charge and the exchangeable ion is an anion. Ion exchange chromatography is commonly used to purify proteins using FPLC.
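Whether cation or anion exchange is appropriate for a protein depends on its net charge at the working pH. The sketch below estimates net charge from textbook side-chain pKa values via the Henderson-Hasselbalch relation; it is a rough, illustrative calculation (the function name and pKa set are assumptions, and local structural effects on pKa are ignored).

```python
# Basic side chains (positive when protonated) and acidic side chains
# (negative when deprotonated), with approximate textbook pKa values.
PKA_POS = {"K": 10.5, "R": 12.5, "H": 6.0}
PKA_NEG = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1}

def net_charge(sequence: str, ph: float) -> float:
    charge = 1 / (1 + 10 ** (ph - 9.0))    # N-terminus, pKa ~9.0
    charge -= 1 / (1 + 10 ** (2.0 - ph))   # C-terminus, pKa ~2.0
    for aa in sequence:
        if aa in PKA_POS:
            charge += 1 / (1 + 10 ** (ph - PKA_POS[aa]))
        elif aa in PKA_NEG:
            charge -= 1 / (1 + 10 ** (PKA_NEG[aa] - ph))
    return charge

print(round(net_charge("DDKKHE", 7.0), 2))  # negative -> candidate for anion exchange
```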
=== Size-exclusion chromatography ===
Size-exclusion chromatography (SEC) is also known as gel permeation chromatography (GPC) or gel filtration chromatography and separates molecules according to their size (or more accurately according to their hydrodynamic diameter or hydrodynamic volume).
Smaller molecules are able to enter the pores of the media and are therefore temporarily trapped and removed from the flow of the mobile phase. The average residence time in the pores depends upon the effective size of the analyte molecules. However, molecules that are larger than the average pore size of the packing are excluded and thus suffer essentially no retention; such species are the first to be eluted. It is generally a low-resolution chromatography technique and thus is often reserved for the final, "polishing" step of a purification. It is also useful for determining the tertiary structure and quaternary structure of purified proteins, especially since it can be carried out under native solution conditions.
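Size-exclusion columns are commonly calibrated with standards of known molar mass, since log M varies roughly linearly with elution volume over the working range. The sketch below fits such a calibration; the standards, volumes, and column behavior are hypothetical.

```python
import numpy as np

# Hypothetical calibration: elution volumes (mL) of standards of known molar mass.
Ve = np.array([10.0, 12.0, 14.0, 16.0])
M = np.array([5e5, 5e4, 5e3, 5e2])

b, a = np.polyfit(Ve, np.log10(M), 1)   # fit log10(M) = a + b * Ve

def molar_mass(ve: float) -> float:
    """Interpolate the molar mass of an unknown from its elution volume."""
    return 10 ** (a + b * ve)

print(f"{molar_mass(13.0):.0f}")  # mass of an unknown eluting at 13 mL (~16000)
```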
=== Expanded bed adsorption chromatographic separation ===
An expanded bed chromatographic adsorption (EBA) column for a biochemical separation process comprises a pressure-equalization liquid distributor with a self-cleaning function below a porous blocking sieve plate at the bottom of the expanded bed, and an upper nozzle assembly with a backflush cleaning function at the top of the expanded bed. Better distribution of the feedstock liquor added into the expanded bed ensures that the fluid passing through the bed layer displays a state of piston flow, which increases the separation efficiency of the expanded bed.
Expanded-bed adsorption (EBA) chromatography is a convenient and effective technique for the capture of proteins directly from unclarified crude sample. In EBA chromatography, the settled bed is first expanded by upward flow of equilibration buffer. The crude feed, which is a mixture of soluble proteins, contaminants, cells, and cell debris, is then passed upward through the expanded bed. Target proteins are captured on the adsorbent, while particulates and contaminants pass through. A change to elution buffer while maintaining upward flow results in desorption of the target protein in expanded-bed mode. Alternatively, if the flow is reversed, the adsorbed particles will quickly settle and the proteins can be desorbed by an elution buffer. The mode used for elution (expanded-bed versus settled-bed) depends on the characteristics of the feed. After elution, the adsorbent is cleaned with a predefined cleaning-in-place (CIP) solution, with cleaning followed by either column regeneration (for further use) or storage.
== Special techniques ==
=== Reversed-phase chromatography ===
Reversed-phase chromatography (RPC) is any liquid chromatography procedure in which the mobile phase is significantly more polar than the stationary phase. It is so named because in normal-phase liquid chromatography, the mobile phase is significantly less polar than the stationary phase. Hydrophobic molecules in the mobile phase tend to adsorb to the relatively hydrophobic stationary phase. Hydrophilic molecules in the mobile phase will tend to elute first. Separating columns typically consist of a C8 or C18 carbon chain bonded to a silica particle substrate.
=== Hydrophobic interaction chromatography ===
Hydrophobic interaction chromatography (HIC) is a purification and analytical technique that separates analytes, such as proteins, based on hydrophobic interactions between the analyte and the chromatographic matrix. It can provide a non-denaturing orthogonal approach to reversed phase separation, preserving native structures and potentially protein activity. In hydrophobic interaction chromatography, the matrix material is lightly substituted with hydrophobic groups, such as methyl, ethyl, propyl, butyl, octyl, or phenyl groups. At high salt concentrations, non-polar side chains on the surface of proteins "interact" with the hydrophobic groups; that is, both types of groups are excluded by the polar solvent (hydrophobic effects are augmented by increased ionic strength). Thus, the sample is applied to the column in a highly polar buffer, which drives an association of hydrophobic patches on the analyte with the stationary phase. The eluent is typically an aqueous buffer with decreasing salt concentration, increasing concentration of detergent (which disrupts hydrophobic interactions), or changes in pH. The type of salt used is of critical importance, with more kosmotropic salts, as defined by the Hofmeister series, providing the most water structuring around the molecule and the resulting hydrophobic pressure. Ammonium sulfate is frequently used for this purpose. The addition of organic solvents or other less polar constituents may assist in improving resolution.
In general, hydrophobic interaction chromatography (HIC) is advantageous if the sample is sensitive to pH change or to the harsh solvents typically used in other types of chromatography, but not to high salt concentrations. Commonly, it is the amount of salt in the buffer which is varied. In 2012, Müller and Franzreb described the effects of temperature on HIC using bovine serum albumin (BSA) with four different types of hydrophobic resin. The study altered temperature so as to affect the binding affinity of BSA onto the matrix. It was concluded that cycling temperature from 40 to 10 degrees Celsius would not be adequate to effectively wash all BSA from the matrix, but could be very effective if the column were to be used only a few times. Using temperature to effect elution allows laboratories to reduce the amount of salt purchased, cutting costs.
If high salt concentrations and temperature fluctuations are to be avoided, a more hydrophobic compound can be used to compete with the sample and elute it. This so-called salt-independent method of HIC showed a direct isolation of human immunoglobulin G (IgG) from serum with satisfactory yield, using β-cyclodextrin as a competitor to displace IgG from the matrix. This largely opens up the possibility of using HIC with salt-sensitive samples, since high salt concentrations precipitate proteins.
=== Hydrodynamic chromatography ===
Hydrodynamic chromatography (HDC) is derived from the observed phenomenon that large droplets move faster than small ones. In a column, this happens because the center of mass of larger droplets is prevented from being as close to the sides of the column as smaller droplets because of their larger overall size. Larger droplets will elute first from the middle of the column while smaller droplets stick to the sides of the column and elute last. This form of chromatography is useful for separating analytes by molar mass (or molecular mass), size, shape, and structure when used in conjunction with light scattering detectors, viscometers, and refractometers. The two main types of HDC are open tube and packed column. Open tube offers rapid separation times for small particles, whereas packed column HDC can increase resolution and is better suited for particles with an average molecular mass larger than 10⁵ daltons. HDC differs from other types of chromatography because the separation only takes place in the interstitial volume, which is the volume surrounding and in between particles in a packed column.
HDC shares the same order of elution as size-exclusion chromatography (SEC), but the two processes still vary in many ways. In a study comparing the two types of separation, Isenberg, Brewer, Côté, and Striegel used both methods for polysaccharide characterization and concluded that HDC coupled with multiangle light scattering (MALS) achieves a more accurate molar mass distribution than SEC with off-line MALS, in significantly less time. This is largely because SEC is a more destructive technique: the pores in the column degrade the analyte during separation, which tends to impact the mass distribution. However, the main disadvantage of HDC is the low resolution of analyte peaks, which makes SEC a more viable option for chemicals that are not easily degradable and where rapid elution is not important.
HDC plays an especially important role in the field of microfluidics. The first successful apparatus for HDC-on-a-chip system was proposed by Chmela, et al. in 2002. Their design was able to achieve separations using an 80 mm long channel on the timescale of 3 minutes for particles with diameters ranging from 26 to 110 nm, but the authors expressed a need to improve the retention and dispersion parameters. In a 2010 publication by Jellema, Markesteijn, Westerweel, and Verpoorte, implementing HDC with a recirculating bidirectional flow resulted in high resolution, size based separation with only a 3 mm long channel. Having such a short channel and high resolution was viewed as especially impressive considering that previous studies used channels that were 80 mm in length. For a biological application, in 2007, Huh, et al. proposed a microfluidic sorting device based on HDC and gravity, which was useful for preventing potentially dangerous particles with diameter larger than 6 microns from entering the bloodstream when injecting contrast agents in ultrasounds. This study also made advances for environmental sustainability in microfluidics due to the lack of outside electronics driving the flow, which came as an advantage of using a gravity based device.
=== Two-dimensional chromatography ===
In some cases, the selectivity provided by the use of one column can be insufficient to resolve analytes in complex samples. Two-dimensional chromatography aims to increase the resolution of these peaks by using a second column with different physico-chemical properties. Since the mechanism of retention on this new solid support is different from the first-dimension separation, it can be possible to separate compounds by two-dimensional chromatography that are indistinguishable by one-dimensional chromatography. Furthermore, the separation in the second dimension occurs faster than in the first. An example of a two-dimensional planar separation is one in which the sample is spotted at one corner of a square plate, developed, air-dried, then rotated by 90° and redeveloped in a second solvent system.
Two-dimensional chromatography can be applied to GC or LC separations. The heart-cutting approach selects a specific region of interest on the first dimension for separation, and the comprehensive approach uses all analytes in the second-dimension separation.
=== Simulated moving-bed chromatography ===
The simulated moving bed (SMB) technique is a variant of high performance liquid chromatography; it is used to separate particles and/or chemical compounds that would be difficult or impossible to resolve otherwise. This increased separation is brought about by a valve-and-column arrangement that is used to lengthen the stationary phase indefinitely.
In the moving bed technique of preparative chromatography the feed entry and the analyte recovery are simultaneous and continuous, but because of practical difficulties with a continuously moving bed, simulated moving bed technique was proposed. In the simulated moving bed technique instead of moving the bed, the sample inlet and the analyte exit positions are moved continuously, giving the impression of a moving bed.
True moving bed chromatography (TMBC) is only a theoretical concept. Its simulation, SMBC, is achieved by the use of a multiplicity of columns in series and a complex valve arrangement. This valve arrangement provides for sample and solvent feed, and analyte and waste takeoff, at appropriate locations on any column. It allows the sample entry position to be switched at regular intervals in one direction and the solvent entry position in the opposite direction, while the analyte and waste takeoff positions are changed appropriately as well.
=== Pyrolysis gas chromatography ===
Pyrolysis–gas chromatography–mass spectrometry is a method of chemical analysis in which the sample is heated to decomposition to produce smaller molecules that are separated by gas chromatography and detected using mass spectrometry.
Pyrolysis is the thermal decomposition of materials in an inert atmosphere or a vacuum. The sample is put into direct contact with a platinum wire, or placed in a quartz sample tube, and rapidly heated to 600–1000 °C. Depending on the application even higher temperatures are used. Three different heating techniques are used in actual pyrolyzers: Isothermal furnace, inductive heating (Curie point filament), and resistive heating using platinum filaments. Large molecules cleave at their weakest points and produce smaller, more volatile fragments. These fragments can be separated by gas chromatography. Pyrolysis GC chromatograms are typically complex because a wide range of different decomposition products is formed. The data can either be used as fingerprints to prove material identity or the GC/MS data is used to identify individual fragments to obtain structural information. To increase the volatility of polar fragments, various methylating reagents can be added to a sample before pyrolysis.
Besides the usage of dedicated pyrolyzers, pyrolysis GC of solid and liquid samples can be performed directly inside Programmable Temperature Vaporizer (PTV) injectors that provide quick heating (up to 30 °C/s) and high maximum temperatures of 600–650 °C. This is sufficient for some pyrolysis applications. The main advantage is that no dedicated instrument has to be purchased and pyrolysis can be performed as part of routine GC analysis. In this case, quartz GC inlet liners have to be used. Quantitative data can be acquired, and good results of derivatization inside the PTV injector are published as well.
=== Fast protein liquid chromatography ===
Fast protein liquid chromatography (FPLC), is a form of liquid chromatography that is often used to analyze or purify mixtures of proteins. As in other forms of chromatography, separation is possible because the different components of a mixture have different affinities for two materials, a moving fluid (the "mobile phase") and a porous solid (the stationary phase). In FPLC the mobile phase is an aqueous solution, or "buffer". The buffer flow rate is controlled by a positive-displacement pump and is normally kept constant, while the composition of the buffer can be varied by drawing fluids in different proportions from two or more external reservoirs. The stationary phase is a resin composed of beads, usually of cross-linked agarose, packed into a cylindrical glass or plastic column. FPLC resins are available in a wide range of bead sizes and surface ligands depending on the application.
=== Countercurrent chromatography ===
Countercurrent chromatography (CCC) is a type of liquid-liquid chromatography, where both the stationary and mobile phases are liquids and the liquid stationary phase is held stagnant by a strong centrifugal force.
==== Hydrodynamic countercurrent chromatography (CCC) ====
The operating principle of a CCC instrument requires a column consisting of an open tube coiled around a bobbin. The bobbin is rotated in a double-axis gyratory motion (a cardioid), which causes a variable gravity (G) field to act on the column during each rotation. This motion causes the column to see one partitioning step per revolution, and components of the sample separate in the column according to their partition coefficients between the two immiscible liquid phases used. There are many types of CCC available today, including high-speed CCC (HSCCC) and high-performance CCC (HPCCC). HPCCC is the latest and best-performing version of the instrumentation currently available.
==== Centrifugal partition chromatography (CPC) ====
In the CPC (centrifugal partition chromatography or hydrostatic countercurrent chromatography) instrument, the column consists of a series of cells interconnected by ducts attached to a rotor. This rotor rotates on its central axis, creating the centrifugal field necessary to hold the stationary phase in place. The separation process in CPC is governed solely by the partitioning of solutes between the stationary and mobile phases, a mechanism that can be described using the partition coefficients (KD) of solutes. CPC instruments are commercially available for laboratory, pilot, and industrial-scale separations, with column sizes ranging from about 10 milliliters to 10 liters in volume.
=== Periodic counter-current chromatography ===
In contrast to countercurrent chromatography (see above), periodic counter-current chromatography (PCC) uses a solid stationary phase and only a liquid mobile phase. It is thus much more similar to conventional affinity chromatography than to countercurrent chromatography. PCC uses multiple columns, which during the loading phase are connected in line. This mode allows for overloading the first column in this series without losing product, which already breaks through the column before the resin is fully saturated. The breakthrough product is captured on the subsequent column(s). In the next step the columns are disconnected from one another. The first column is washed and eluted, while the other column(s) are still being loaded. Once the (initially) first column is re-equilibrated, it is re-introduced to the loading stream, but as the last column. The process then continues in a cyclic fashion.
=== Chiral chromatography ===
Chiral chromatography involves the separation of stereoisomers. In the case of enantiomers, these have no chemical or physical differences apart from being three-dimensional mirror images. To enable chiral separations to take place, either the mobile phase or the stationary phase must themselves be made chiral, giving differing affinities between the analytes. Chiral chromatography HPLC columns (with a chiral stationary phase) in both normal and reversed phase are commercially available.
Conventional chromatography is incapable of separating racemic mixtures of enantiomers. However, in some cases nonracemic mixtures of enantiomers may be separated unexpectedly by conventional liquid chromatography (e.g., HPLC without a chiral mobile phase or stationary phase).
=== Aqueous normal-phase chromatography ===
Aqueous normal-phase (ANP) chromatography is characterized by the elution behavior of classical normal phase mode (i.e. where the mobile phase is significantly less polar than the stationary phase) in which water is one of the mobile phase solvent system components. It is distinguished from hydrophilic interaction liquid chromatography (HILIC) in that the retention mechanism is due to adsorption rather than partitioning.
== Applications ==
Chromatography is used in many fields including the pharmaceutical industry, the food and beverage industry, the chemical industry, forensic science, environment analysis, and hospitals.
== See also ==
== References ==
== External links ==
IUPAC Nomenclature for Chromatography
Overlapping Peaks Program – Learning by Simulations
Chromatography Videos – MIT OCW – Digital Lab Techniques Manual
Chromatography Equations Calculators – MicroSolv Technology Corporation
Computational materials science and engineering uses modeling, simulation, theory, and informatics to understand materials. The main goals include discovering new materials, determining material behavior and mechanisms, explaining experiments, and exploring materials theories. It is analogous to computational chemistry and computational biology as an increasingly important subfield of materials science.
== Introduction ==
Just as materials science spans all length scales, from electrons to components, so do its computational sub-disciplines. While many methods and variations have been and continue to be developed, seven main simulation techniques, or motifs, have emerged.
These computer simulation methods use underlying models and approximations to understand material behavior in more complex scenarios than pure theory generally allows and with more detail and precision than is often possible from experiments. Each method can be used independently to predict materials properties and mechanisms, to feed information to other simulation methods run separately or concurrently, or to directly compare or contrast with experimental results.
One notable sub-field of computational materials science is integrated computational materials engineering (ICME), which seeks to use computational results and methods in conjunction with experiments, with a focus on industrial and commercial application. Major current themes in the field include uncertainty quantification and propagation throughout simulations for eventual decision making, data infrastructure for sharing simulation inputs and results, high-throughput materials design and discovery, and new approaches given significant increases in computing power and the continuing history of supercomputing.
== Materials simulation methods ==
=== Electronic structure ===
Electronic structure methods solve the Schrödinger equation to calculate the energy of a system of electrons and atoms, the fundamental units of condensed matter.
Many variations of electronic structure methods exist of varying computational complexity, with a range of trade-offs between speed and accuracy.
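As a toy illustration of what an electronic structure method does at its core, the sketch below solves the one-dimensional, single-particle Schrödinger equation for a harmonic well by finite differences (atomic units, hbar = m = 1). Real electronic structure codes face the far harder many-electron problem, so this is only a conceptual sketch with illustrative grid parameters.

```python
import numpy as np

n, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
V = 0.5 * x**2                      # harmonic potential

# Kinetic operator -(1/2) d^2/dx^2 discretized as a tridiagonal matrix.
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]
print(E)                            # close to the exact levels 0.5, 1.5, 2.5
```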
==== Density functional theory ====
Due to its balance of computational cost and predictive capability, density functional theory (DFT) has the most significant use in materials science. DFT most often refers to the calculation of the lowest-energy state of the system; however, molecular dynamics (atomic motion through time) can be run with DFT computing the forces between atoms.
While DFT and many other electronic structure methods are described as ab initio, there are still approximations and inputs. Within DFT there are increasingly complex, accurate, and slow approximations underlying the simulation, because the exact exchange-correlation functional is not known. The simplest model is the local-density approximation (LDA), becoming more complex with the generalized-gradient approximation (GGA) and beyond.
An additional common approximation is to use a pseudopotential in place of core electrons, significantly speeding up simulations.
=== Atomistic methods ===
This section discusses the two major atomic simulation methods in materials science. Other particle-based methods include material point method and particle-in-cell, most often used for solid mechanics and plasma physics, respectively.
==== Molecular dynamics ====
Molecular dynamics (MD) is the historical name for simulations of classical atomic motion through time. Typically, interactions between atoms are defined and fit to both experimental and electronic structure data with a wide variety of models, called interatomic potentials. With the interactions prescribed (forces), Newtonian motion is numerically integrated. The forces for MD can also be calculated using electronic structure methods based on either the Born–Oppenheimer approximation or Car–Parrinello approaches.
The simplest models include only van der Waals-type attractions and a steep repulsion to keep atoms apart; the nature of these models is derived from dispersion forces. More complex models include effects due to Coulomb interactions (e.g. ionic charges in ceramics), covalent bonds and angles (e.g. polymers), and electronic charge density (e.g. metals). Some models use fixed bonds, defined at the start of the simulation, while others have dynamic bonding. More recent efforts strive for robust, transferable models with generic functional forms: spherical harmonics, Gaussian kernels, and neural networks. In addition, MD can be used to simulate groupings of atoms within generic particles, called coarse-grained modeling, e.g. creating one particle per monomer within a polymer.
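A minimal MD sketch in the spirit of the simplest models described above: two particles interacting through a Lennard-Jones potential, integrated with the velocity Verlet scheme in reduced units. All parameter values are illustrative.

```python
import numpy as np

eps, sigma, dt = 1.0, 1.0, 0.002   # reduced Lennard-Jones units

def lj_force(r_vec):
    """Force on particle 1 from U = 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
    r = np.linalg.norm(r_vec)
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6**2 - sr6) * r_vec / r**2

pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros_like(pos)
f = lj_force(pos[0] - pos[1])
forces = np.array([f, -f])

for _ in range(1000):             # velocity Verlet loop (mass = 1)
    vel += 0.5 * dt * forces      # first half kick
    pos += dt * vel               # drift
    f = lj_force(pos[0] - pos[1])
    forces = np.array([f, -f])
    vel += 0.5 * dt * forces      # second half kick

print(np.linalg.norm(pos[0] - pos[1]))  # oscillates around the minimum at 2**(1/6)
```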
==== Kinetic Monte Carlo ====
Monte Carlo in the context of materials science most often refers to atomistic simulations relying on rates. In kinetic Monte Carlo (kMC) rates for all possible changes within the system are defined and probabilistically evaluated. Because there is no restriction of directly integrating motion (as in molecular dynamics), kMC methods are able to simulate significantly different problems with much longer timescales.
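A minimal kMC step, in the Gillespie style, picks the next event with probability proportional to its rate and draws the elapsed time from an exponential distribution. The three rates below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmc_step(rates):
    """One kinetic Monte Carlo step: which event fires, and how long it took."""
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)
    dt = rng.exponential(1.0 / total)
    return event, dt

rates = np.array([1e3, 5e2, 1.0])   # e.g. two fast hops and one rare detachment
t, counts = 0.0, [0, 0, 0]
for _ in range(10_000):
    event, dt = kmc_step(rates)
    counts[event] += 1
    t += dt
print(t, counts)   # event frequencies follow the rates; rare events still occur
```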
=== Mesoscale methods ===
The methods listed here are among the most common and the most directly tied to materials science specifically, where atomistic and electronic structure calculations are also widely used in computational chemistry and computational biology and continuum level simulations are common in a wide array of computational science application domains.
Other methods within materials science include cellular automata for solidification and grain growth, Potts model approaches for grain evolution and other Monte Carlo techniques, as well as direct simulation of grain structures analogous to dislocation dynamics.
==== Dislocation dynamics ====
Plastic deformation in metals is dominated by the movement of dislocations, which are crystalline defects in materials with line type character. Rather than simulating the movement of tens of billions of atoms to model plastic deformation, which would be prohibitively computationally expensive, discrete dislocation dynamics (DDD) simulates the movement of dislocation lines. The overall goal of dislocation dynamics is to determine the movement of a set of dislocations given their initial positions, and external load and interacting microstructure. From this, macroscale deformation behavior can be extracted from the movement of individual dislocations by theories of plasticity.
A typical DDD simulation goes as follows. A dislocation line can be modelled as a set of nodes connected by segments. This is similar to a mesh used in finite element modelling. Then, the forces on each of the nodes of the dislocation are calculated. These forces include any externally applied forces, forces due to the dislocation interacting with itself or other dislocations, forces from obstacles such as solutes or precipitates, and the drag force on the dislocation due to its motion, which is proportional to its velocity. The general method behind a DDD simulation is to calculate the forces on a dislocation at each of its nodes, from which the velocity of the dislocation at its nodes can be extracted. Then, the dislocation is moved forward according to this velocity and a given timestep. This procedure is then repeated. Over time, the dislocation may encounter enough obstacles such that it can no longer move and its velocity is near zero, at which point the simulation can be stopped and a new experiment can be conducted with this new dislocation arrangement.
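The loop just described can be sketched schematically as follows. The force model here is a placeholder (a constant driving force per node) standing in for the segment-segment interactions, obstacle forces, and applied loads a real DDD code would compute; only the force-to-velocity-to-motion structure of the time step is illustrated, and all numerical values are assumptions.

```python
import numpy as np

B = 1.0e-4    # drag coefficient, illustrative
dt = 1.0e-9   # time step (s), illustrative

def nodal_forces(nodes, applied_stress):
    # Placeholder: a uniform driving force per node. A real code would sum
    # self- and pairwise dislocation interactions, obstacle forces, etc.
    return np.tile([applied_stress, 0.0, 0.0], (len(nodes), 1))

nodes = np.array([[float(i), 0.0, 0.0] for i in range(5)])  # one discretized line
for step in range(100):
    F = nodal_forces(nodes, applied_stress=1.0e6)
    v = F / B            # overdamped mobility law: velocity ~ force / drag
    nodes += v * dt      # advance each node by one time step
print(nodes[:, 0])       # the line has glided in the x-direction
```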
Both small-scale and large-scale dislocation simulations exist. For example, 2D dislocation models have been used to model the glide of a dislocation through a single plane as it interacts with various obstacles, such as precipitates. This further captures phenomena such as shearing and bowing of precipitates. The drawback of 2D DDD simulations is that phenomena involving movement out of a glide plane, such as cross slip and climb, cannot be captured, although such simulations are computationally cheaper to run. Small 3D DDD simulations have been used to simulate phenomena such as dislocation multiplication at Frank-Read sources, and larger simulations can capture work hardening in a metal with many dislocations, which interact with each other and can multiply. A number of 3D DDD codes exist, such as ParaDiS, microMegas, and MDDP, among others.
There are other methods for simulating dislocation motion, from full molecular dynamics simulations, continuum dislocation dynamics, and phase field models.
==== Phase field ====
Phase field methods are focused on phenomena dependent on interfaces and interfacial motion. Both the free energy function and the kinetics (mobilities) are defined in order to propagate the interfaces within the system through time.
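A minimal one-dimensional phase field sketch: an Allen-Cahn update with a double-well free energy and a gradient energy term propagates a diffuse interface between two phases. Parameters are illustrative and chosen only for numerical stability.

```python
import numpy as np

# Double-well free energy f = phi^2 (1 - phi)^2 with wells at phi = 0 and 1.
n, dx, dt, M, kappa = 200, 1.0, 0.1, 1.0, 1.0
phi = np.where(np.arange(n) < n // 2, 0.0, 1.0)   # sharp initial interface

for _ in range(2000):
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
    dfdphi = 2 * phi * (1 - phi) * (1 - 2 * phi)  # derivative of the double well
    phi += dt * M * (kappa * lap - dfdphi)        # Allen-Cahn update

print(phi[n // 2 - 3 : n // 2 + 3])               # a smooth diffuse interface
```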
==== Crystal plasticity ====
Crystal plasticity simulates the effects of atomic-based, dislocation motion without directly resolving either. Instead, the crystal orientations are updated through time with elasticity theory, plasticity through yield surfaces, and hardening laws. In this way, the stress-strain behavior of a material can be determined.
=== Continuum simulation ===
==== Finite element method ====
Finite element methods divide systems in space and solve the relevant physical equations throughout that decomposition. This ranges from thermal, mechanical, electromagnetic, to other physical phenomena. It is important to note from a materials science perspective that continuum methods generally ignore material heterogeneity and assume local materials properties to be identical throughout the system.
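As a minimal illustration of the method, the sketch below assembles and solves a one-dimensional steady-state heat conduction problem with linear finite elements; the conductivity and source values are illustrative.

```python
import numpy as np

# Solve -k u'' = q on [0, 1] with u(0) = u(1) = 0, using linear elements.
n_el, k, q = 10, 1.0, 1.0
n = n_el + 1
h = 1.0 / n_el

K = np.zeros((n, n))
f = np.zeros(n)
for e in range(n_el):                            # assemble element contributions
    Ke = (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    fe = q * h / 2.0 * np.ones(2)
    K[e : e + 2, e : e + 2] += Ke
    f[e : e + 2] += fe

K[0, :], K[-1, :] = 0.0, 0.0                     # enforce u = 0 at both ends
K[0, 0] = K[-1, -1] = 1.0
f[0] = f[-1] = 0.0

u = np.linalg.solve(K, f)
print(u.max())   # ~0.125, the exact midpoint value q/(8k)
```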
== Materials modeling methods ==
All of the simulation methods described above contain models of materials behavior. The exchange-correlation functional for density functional theory, interatomic potential for molecular dynamics, and free energy functional for phase field simulations are examples. The degree to which each simulation method is sensitive to changes in the underlying model can be drastically different. Models themselves are often directly useful for materials science and engineering, not only to run a given simulation.
=== CALPHAD ===
Phase diagrams are integral to materials science, and the development of computational phase diagrams stands as one of the most important and successful examples of ICME. The Calculation of PHase Diagram (CALPHAD) method does not, generally speaking, constitute a simulation; instead, the models and optimizations result in phase diagrams that predict phase stability, which is extremely useful in materials design and materials process optimization.
== Comparison of methods ==
For each material simulation method, there is a fundamental unit, characteristic length and time scale, and associated model(s).
== Multi-scale simulation ==
Many of the methods described can be combined, either running simultaneously or separately, feeding information between length scales or accuracy levels.
=== Concurrent multi-scale ===
Concurrent simulations in this context means methods used directly together, within the same code, with the same time step, and with direct mapping between the respective fundamental units.
One type of concurrent multiscale simulation is quantum mechanics/molecular mechanics (QM/MM). This involves running a small portion (often a molecule or protein of interest) with a more accurate electronic structure calculation and surrounding it with a larger region of fast-running, less accurate classical molecular dynamics. Many other methods exist, such as atomistic-continuum simulations, similar to QM/MM except using molecular dynamics and the finite element method as the fine (high-fidelity) and coarse (low-fidelity) methods, respectively.
=== Hierarchical multi-scale ===
Hierarchical simulation refers to those which directly exchange information between methods, but are run in separate codes, with differences in length and/or time scales handled through statistical or interpolative techniques.
A common method of accounting for crystal orientation effects together with geometry embeds crystal plasticity within finite element simulations.
=== Model development ===
Building a materials model at one scale often requires information from another, lower scale. Some examples are included here.
The most common scenario for classical molecular dynamics simulations is to develop the interatomic model directly using density functional theory, most often electronic structure calculations. Classical MD can therefore be considered a hierarchical multi-scale technique, as well as a coarse-grained method (ignoring electrons). Similarly, coarse grained molecular dynamics are reduced or simplified particle simulations directly trained from all-atom MD simulations. These particles can represent anything from carbon-hydrogen pseudo-atoms, entire polymer monomers, to powder particles.
Density functional theory is also often used to train and develop CALPHAD-based phase diagrams.
== Software and tools ==
Each modeling and simulation method has a combination of commercial, open-source, and lab-based codes. Open source software is becoming increasingly common, as are community codes which combine development efforts together. Examples include Quantum ESPRESSO (DFT), LAMMPS (MD), ParaDIS (DD), FiPy (phase field), and MOOSE (Continuum). In addition, open software from other communities is often useful for materials science, e.g. GROMACS developed within computational biology.
== Conferences ==
All major materials science conferences include computational research. Focusing entirely on computational efforts, the TMS ICME World Congress meets biannually. The Gordon Research Conference on Computational Materials Science and Engineering began in 2020. Many other method specific smaller conferences are also regularly organized.
== Journals ==
Many materials science journals, as well as those from related disciplines welcome computational materials research. Those dedicated to the field include Computational Materials Science, Modelling and Simulation in Materials Science and Engineering, and npj Computational Materials.
== Related fields ==
Computational materials science is one sub-discipline of both computational science and computational engineering, containing significant overlap with computational chemistry and computational physics. In addition, many atomistic methods are common between computational chemistry, computational biology, and CMSE; similarly, many continuum methods overlap with many other fields of computational engineering.
== See also ==
== References ==
== External links ==
TMS World Congress on Integrated Computational Materials Engineering (ICME)
nanoHUB computational materials resources
High-performance liquid chromatography (HPLC), formerly referred to as high-pressure liquid chromatography, is a technique in analytical chemistry used to separate, identify, and quantify specific components in mixtures. The mixtures can originate from food, chemicals, pharmaceuticals, biological, environmental and agriculture, etc., which have been dissolved into liquid solutions.
It relies on high-pressure pumps, which deliver mixtures of various solvents, called the mobile phase. The mobile phase flows through the system, collecting the sample mixture on the way and delivering it into a cylinder, called the column, which is filled with solid particles of adsorbent material, called the stationary phase.
Each component in the sample interacts differently with the adsorbent material, causing different migration rates for each component. These different rates lead to separation as the species flow out of the column into a specific detector such as a UV detector. The output of the detector is a graph, called a chromatogram. Chromatograms are graphical representations of the signal intensity versus time or volume, showing peaks, which represent components of the sample. Each component appears at a characteristic time, called its retention time, with a peak area proportional to its amount.
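The reduction of detector output to a retention time and a peak area can be sketched as follows, using a synthetic Gaussian peak in place of real detector data; all values are illustrative.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 2001)                          # time axis (min)
signal = 50.0 * np.exp(-((t - 4.2) ** 2) / (2 * 0.1**2))  # one synthetic peak

retention_time = t[np.argmax(signal)]              # position of the apex
area = float(np.sum(signal) * (t[1] - t[0]))       # rectangle-rule integral
print(retention_time, area)                        # -> 4.2 and ~12.5
```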
HPLC is widely used for manufacturing (e.g., during the production process of pharmaceutical and biological products), legal (e.g., detecting performance enhancement drugs in urine), research (e.g., separating the components of a complex biological sample, or of similar synthetic chemicals from each other), and medical (e.g., detecting vitamin D levels in blood serum) purposes.
Chromatography can be described as a mass transfer process involving adsorption and/or partition. As mentioned, HPLC relies on pumps to pass a pressurized liquid and a sample mixture through a column filled with adsorbent, leading to the separation of the sample components. The active component of the column, the adsorbent, is typically a granular material made of solid particles (e.g., silica, polymers, etc.), 1.5–50 μm in size, on which various reagents can be bonded. The components of the sample mixture are separated from each other due to their different degrees of interaction with the adsorbent particles. The pressurized liquid is typically a mixture of solvents (e.g., water, buffers, acetonitrile and/or methanol) and is referred to as a "mobile phase". Its composition and temperature play a major role in the separation process by influencing the interactions taking place between sample components and adsorbent. These interactions are physical in nature, such as hydrophobic (dispersive), dipole–dipole and ionic, most often a combination.
== Operation ==
The liquid chromatograph is a complex instrument built on sophisticated and delicate technology. To operate the system properly, the user needs at least a basic understanding of how the instrument acquires and processes data, in order to avoid incorrect data and distorted results.
HPLC is distinguished from traditional ("low pressure") liquid chromatography because operational pressures are significantly higher (around 50–1400 bar), while ordinary liquid chromatography typically relies on the force of gravity to pass the mobile phase through the packed column. Due to the small sample amount separated in analytical HPLC, typical column dimensions are 2.1–4.6 mm diameter, and 30–250 mm length. Also HPLC columns are made with smaller adsorbent particles (1.5–50 μm in average particle size). This gives HPLC superior resolving power (the ability to distinguish between compounds) when separating mixtures, which makes it a popular chromatographic technique.
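The resolving power referred to above is commonly quantified by the plate number N and the resolution Rs between adjacent peaks. The sketch below uses the standard width-at-base formulas; the peak values are illustrative.

```python
def plate_number(t_r: float, w_base: float) -> float:
    """Column efficiency: N = 16 * (tR / w)^2, with w the peak width at base."""
    return 16.0 * (t_r / w_base) ** 2

def resolution(t1: float, w1: float, t2: float, w2: float) -> float:
    """Rs = 2 * (t2 - t1) / (w1 + w2); Rs >= 1.5 is commonly taken as baseline."""
    return 2.0 * (t2 - t1) / (w1 + w2)

print(plate_number(5.0, 0.2))           # -> 10000 plates
print(resolution(5.0, 0.2, 5.4, 0.22))  # -> ~1.9, baseline-resolved
```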
The schematic of an HPLC instrument typically includes solvent reservoirs, one or more pumps, a solvent degasser, a sampler, a column, and a detector. The solvents are prepared in advance according to the needs of the separation; they pass through the degasser to remove dissolved gasses, are mixed to become the mobile phase, and then flow through the sampler, which brings the sample mixture into the mobile phase stream, which in turn carries it into the column. The pumps deliver the desired flow and composition of the mobile phase through the stationary phase inside the column, then directly into a flow-cell inside the detector. The detector generates a signal proportional to the amount of sample component emerging from the column, hence allowing for quantitative analysis of the sample components. The detector also marks the time of emergence, the retention time, which serves for initial identification of the component. More advanced detectors also provide additional information specific to the analyte's characteristics, such as a UV-VIS spectrum or a mass spectrum, which can provide insight into its structural features. Detectors in common use include UV/Vis, photodiode array (PDA)/diode array, and mass spectrometry detectors.
A digital microprocessor and user software control the HPLC instrument and provide data analysis. Some models of mechanical pumps in an HPLC instrument can mix multiple solvents together at ratios that change over time, generating a composition gradient in the mobile phase. Most HPLC instruments also have a column oven that allows for adjusting the temperature at which the separation is performed.
The sample mixture to be separated and analyzed is introduced, in a discrete small volume (typically microliters), into the stream of mobile phase percolating through the column. The components of the sample move through the column, each at a different velocity, which are a function of specific physical interactions with the adsorbent, the stationary phase. The velocity of each component depends on its chemical nature, on the nature of the stationary phase (inside the column) and on the composition of the mobile phase. The time at which a specific analyte elutes (emerges from the column) is called its retention time. The retention time, measured under particular conditions, is an identifying characteristic of a given analyte.
Many different types of columns are available, filled with adsorbents varying in particle size, porosity, and surface chemistry. The use of smaller particle size packing materials requires the use of higher operational pressure ("backpressure") and typically improves chromatographic resolution (the degree of peak separation between consecutive analytes emerging from the column). Sorbent particles may be ionic, hydrophobic or polar in nature.
The most common mode of liquid chromatography is reversed phase, whereby the mobile phases used include any miscible combination of water or buffers with various organic solvents (the most common are acetonitrile and methanol). Some HPLC techniques use water-free mobile phases (see normal-phase chromatography below). The aqueous component of the mobile phase may contain acids (such as formic, phosphoric or trifluoroacetic acid) or salts to assist in the separation of the sample components. The composition of the mobile phase may be kept constant ("isocratic elution mode") or varied ("gradient elution mode") during the chromatographic analysis. Isocratic elution is typically effective in the separation of simple mixtures. Gradient elution is required for complex mixtures, with varying interactions with the stationary and mobile phases. This is the reason why in gradient elution the composition of the mobile phase is typically varied from low to high eluting strength. The eluting strength of the mobile phase is reflected by analyte retention times, as high eluting strength speeds up the elution (resulting in shorter retention times). For example, a typical gradient profile in reversed phase chromatography might start at 5% acetonitrile (in water or aqueous buffer) and progress linearly to 95% acetonitrile over 5–25 minutes. Periods of constant mobile phase composition (plateau) may also be part of a gradient profile. For example, the mobile phase composition may be kept constant at 5% acetonitrile for 1–3 min, followed by a linear change up to 95% acetonitrile.
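A gradient profile like the one just described is essentially a piecewise-linear table of (time, %B) points, where %B is the organic fraction. The sketch below encodes such a profile, with an initial hold followed by a linear ramp; the time points are illustrative.

```python
import numpy as np

# (minutes, %B) breakpoints: hold at 5% for 2 min, then ramp to 95% by 22 min.
profile = [(0.0, 5.0), (2.0, 5.0), (22.0, 95.0)]

def percent_b(t_min: float) -> float:
    """Mobile phase organic fraction at time t, by linear interpolation."""
    times, comps = zip(*profile)
    return float(np.interp(t_min, times, comps))

for t in (0.0, 2.0, 12.0, 22.0):
    print(t, percent_b(t))    # 5%, 5% (end of hold), 50%, 95%
```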
The chosen composition of the mobile phase depends on the intensity of interactions between various sample components ("analytes") and stationary phase (e.g., hydrophobic interactions in reversed-phase HPLC). Depending on their affinity for the stationary and mobile phases, analytes partition between the two during the separation process taking place in the column. This partitioning process is similar to that which occurs during a liquid–liquid extraction but is continuous, not step-wise.
In the example using a water/acetonitrile gradient, the more hydrophobic components elute (come off the column) late; once the mobile phase grows richer in acetonitrile (i.e., becomes a stronger eluting solution), their elution speeds up.
The choice of mobile phase components, additives (such as salts or acids) and gradient conditions depends on the nature of the column and sample components. Often a series of trial runs is performed with the sample in order to find the HPLC method which gives adequate separation.
== History and development ==
Prior to HPLC, scientists used benchtop column liquid chromatographic techniques. Liquid chromatographic systems were largely inefficient due to the flow rate of solvents being dependent on gravity. Separations took many hours, and sometimes days to complete. Gas chromatography (GC) at the time was more powerful than liquid chromatography (LC), however, it was obvious that gas phase separation and analysis of very polar high molecular weight biopolymers was impossible. GC was ineffective for many life science and health applications for biomolecules, because they are mostly non-volatile and thermally unstable at the high temperatures of GC. As a result, alternative methods were hypothesized which would soon result in the development of HPLC.
Following on the seminal work of Martin and Synge in 1941, it was predicted by Calvin Giddings, Josef Huber, and others in the 1960s that LC could be operated in a high-efficiency mode by reducing the packing-particle diameter substantially below the typical LC (and GC) level of 150 μm and using pressure to increase the mobile phase velocity. These predictions underwent extensive experimentation and refinement from the 1960s through the 1970s and continue to the present day. Early developmental research began to improve LC particles, for example the historic Zipax, a superficially porous particle.
The 1970s brought about many developments in hardware and instrumentation. Researchers began using pumps and injectors to make a rudimentary design of an HPLC system. Gas amplifier pumps were ideal because they operated at constant pressure and did not require leak-free seals or check valves for steady flow and good quantitation. Hardware milestones were made at Dupont IPD (Industrial Polymers Division) such as a low-dwell-volume gradient device being utilized as well as replacing the septum injector with a loop injection valve.
While instrumentation developments were important, the history of HPLC is primarily about the history and evolution of particle technology. After the introduction of porous layer particles, there has been a steady trend to reduced particle size to improve efficiency. However, by decreasing particle size, new problems arose. The practical disadvantages stem from the excessive pressure drop needed to force mobile fluid through the column and the difficulty of preparing a uniform packing of extremely fine materials. Every time particle size is reduced significantly, another round of instrument development usually must occur to handle the pressure.
== Types ==
=== Partition chromatography ===
Partition chromatography was one of the first kinds of chromatography that chemists developed, and is rarely used these days. The partition coefficient principle has been applied in paper chromatography, thin layer chromatography, gas phase and liquid–liquid separation applications. The 1952 Nobel Prize in Chemistry was awarded to Archer John Porter Martin and Richard Laurence Millington Synge for their development of the technique, which they used to separate amino acids. Partition chromatography uses a retained solvent, on the surface or within the grains or fibers of an "inert" solid supporting matrix, as with paper chromatography; or takes advantage of some coulombic and/or hydrogen-donor interaction with the stationary phase. Analyte molecules partition between a liquid stationary phase and the eluent. Just as in hydrophilic interaction chromatography (HILIC; a sub-technique within HPLC), this method separates analytes based on differences in their polarity. HILIC most often uses a bonded polar stationary phase and a mobile phase made primarily of acetonitrile, with water as the strong component. Partition HPLC has been used historically on unbonded silica or alumina supports. Each works effectively for separating analytes by relative polar differences. HILIC bonded phases have the advantage of separating acidic, basic and neutral solutes in a single chromatographic run.
The polar analytes diffuse into a stationary water layer associated with the polar stationary phase and are thus retained. The stronger the interactions between the polar analyte and the polar stationary phase (relative to the mobile phase) the longer the elution time. The interaction strength depends on the functional groups part of the analyte molecular structure, with more polarized groups (e.g., hydroxyl-) and groups capable of hydrogen bonding inducing more retention. Coulombic (electrostatic) interactions can also increase retention. Use of more polar solvents in the mobile phase will decrease the retention time of the analytes, whereas more hydrophobic solvents tend to increase retention times.
=== Normal–phase chromatography ===
Normal–phase chromatography was one of the first kinds of HPLC that chemists developed, but has decreased in use over the last decades. Also known as normal-phase HPLC (NP-HPLC), this method separates analytes based on their affinity for a polar stationary surface such as silica; hence it is based on analyte ability to engage in polar interactions (such as hydrogen-bonding or dipole-dipole type of interactions) with the sorbent surface. NP-HPLC uses a non-polar, non-aqueous mobile phase (e.g., chloroform), and works effectively for separating analytes readily soluble in non-polar solvents. The analyte associates with and is retained by the polar stationary phase. Adsorption strengths increase with increased analyte polarity. The interaction strength depends not only on the functional groups present in the structure of the analyte molecule, but also on steric factors. The effect of steric hindrance on interaction strength allows this method to resolve (separate) structural isomers.
The use of more polar solvents in the mobile phase will decrease the retention time of analytes, whereas more hydrophobic solvents tend to induce slower elution (increased retention times). Very polar solvents such as traces of water in the mobile phase tend to adsorb to the solid surface of the stationary phase forming a stationary bound (water) layer which is considered to play an active role in retention. This behavior is somewhat peculiar to normal phase chromatography because it is governed almost exclusively by an adsorptive mechanism (i.e., analytes interact with a solid surface rather than with the solvated layer of a ligand attached to the sorbent surface; see also reversed-phase HPLC below). Adsorption chromatography is still somewhat used for structural isomer separations in both column and thin-layer chromatography formats on activated (dried) silica or alumina supports.
Partition- and NP-HPLC fell out of favor in the 1970s with the development of reversed-phase HPLC because of poor reproducibility of retention times due to the presence of a water or protic organic solvent layer on the surface of the silica or alumina chromatographic media. This layer changes with any changes in the composition of the mobile phase (e.g., moisture level) causing drifting retention times.
Recently, partition chromatography has become popular again with the development of HILIC bonded phases, which demonstrate improved reproducibility, and due to a better understanding of the range of usefulness of the technique.
=== Displacement chromatography ===
The use of displacement chromatography is rather limited, and is mostly used for preparative chromatography. The basic principle is based on a molecule with a high affinity for the chromatography matrix (the displacer) which is used to compete effectively for binding sites, and thus displace all molecules with lesser affinities.
There are distinct differences between displacement and elution chromatography. In elution mode, substances typically emerge from a column in narrow, Gaussian peaks. Wide separation of peaks, preferably to baseline, is desired in order to achieve maximum purification. The speed at which any component of a mixture travels down the column in elution mode depends on many factors. But for two substances to travel at different speeds, and thereby be resolved, there must be substantial differences in some interaction between the biomolecules and the chromatography matrix. Operating parameters are adjusted to maximize the effect of this difference. In many cases, baseline separation of the peaks can be achieved only with gradient elution and low column loadings. Thus, two drawbacks to elution mode chromatography, especially at the preparative scale, are operational complexity, due to gradient solvent pumping, and low throughput, due to low column loadings. Displacement chromatography has advantages over elution chromatography in that components are resolved into consecutive zones of pure substances rather than "peaks". Because the process takes advantage of the nonlinearity of the isotherms, a larger column feed can be separated on a given column with the purified components recovered at significantly higher concentration.
=== Reversed-phase liquid chromatography (RP-LC) ===
Reversed-phase HPLC (RP-HPLC) is the most widespread mode of chromatography. It has a non-polar stationary phase and an aqueous, moderately polar mobile phase. In reversed-phase methods, substances are retained in the system the more hydrophobic they are. For the retention of organic materials, the stationary phases packed inside the columns consist mainly of porous granules of silica gel in various shapes, mainly spherical, at different diameters (1.5, 2, 3, 5, 7, 10 μm) and with varying pore diameters (60, 100, 150, 300 Å), on whose surface various hydrocarbon ligands such as C3, C4, C8, and C18 are chemically bonded. There are also polymeric hydrophobic particles that serve as stationary phases when solutions at extreme pH are needed, as well as hybrid silica polymerized with organic substances. The longer the hydrocarbon ligand on the stationary phase, the longer the sample components can be retained. Most current methods for the separation of biomedical materials use C18-type columns, sometimes called by trade names such as ODS (octadecylsilane) or RP-18 (reversed phase 18).
The most common RP stationary phases are based on a silica support, which is surface-modified by bonding RMe2SiCl, where R is a straight chain alkyl group such as C18H37 or C8H17.
With such stationary phases, retention time is longer for lipophilic molecules, whereas polar molecules elute more readily (emerge early in the analysis). A chromatographer can increase retention times by adding more water to the mobile phase, thereby making the interactions of the hydrophobic analyte with the hydrophobic stationary phase relatively stronger. Similarly, an investigator can decrease retention time by adding more organic solvent to the mobile phase. RP-HPLC is so commonly used among biologists and life-science users that it is often incorrectly referred to as just "HPLC" without further specification. The pharmaceutical industry also regularly employs RP-HPLC to qualify drugs before their release.
RP-HPLC operates on the principle of hydrophobic interactions, which originate from the high symmetry in the dipolar water structure and play the most important role in all processes in life science. RP-HPLC allows the measurement of these interactive forces. The binding of the analyte to the stationary phase is proportional to the contact surface area around the non-polar segment of the analyte molecule upon association with the ligand on the stationary phase. This solvophobic effect is dominated by the tendency of water to reduce the cavity around the analyte and the C18 chain, relative to the complex of both. The energy released in this process is proportional to the surface tension of the eluent (water: 7.3×10−6 J/cm2, methanol: 2.2×10−6 J/cm2) and to the hydrophobic surface area of the analyte and the ligand, respectively. The retention can be decreased by adding a less polar solvent (methanol, acetonitrile) into the mobile phase to reduce the surface tension of water. Gradient elution uses this effect by automatically reducing the polarity and the surface tension of the aqueous mobile phase during the course of the analysis.
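One common empirical summary of this behavior (a standard description in method development, not stated above) is the linear solvent strength model, in which the logarithm of the retention factor falls linearly with the organic-solvent fraction. The parameter values in this sketch are purely illustrative:

```python
def lss_retention_factor(phi, log_kw=3.0, S=4.0):
    """Retention factor k versus organic-solvent fraction phi (0 to 1)
    under the linear solvent strength model: log10(k) = log10(kw) - S*phi.
    log_kw and S are illustrative placeholders; real values are fitted
    per analyte and per column."""
    return 10 ** (log_kw - S * phi)

for phi in (0.2, 0.4, 0.6, 0.8):
    print(f"{phi:.0%} organic -> k = {lss_retention_factor(phi):.2f}")
```

The steep, roughly exponential drop in retention with increasing organic content is exactly why a modest gradient in %organic sweeps both weakly and strongly retained analytes off the column in a single run.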
Structural properties of the analyte molecule can play an important role in its retention characteristics. In general, an analyte with a larger hydrophobic surface area (C–H, C–C, and generally non-polar atomic bonds, such as S-S and others) is retained longer, because it does not interact with the water structure. On the other hand, analytes with a higher polar surface area (a result of polar groups such as -OH, -NH2, COO− or -NH3+ in their structure) are retained less, as they are better integrated into water. Interactions with the stationary phase can also be affected by steric or exclusion effects, whereby a very large molecule may have only restricted access to the pores of the stationary phase, where the interactions with surface ligands (alkyl chains) take place. Such surface hindrance typically results in less retention.
Retention time increases with the hydrophobic (non-polar) surface area of the molecules. For example, branched-chain compounds can elute more rapidly than their corresponding linear isomers because their overall surface area is lower. Similarly, organic compounds with single C–C bonds frequently elute later than those with a C=C bond or even a triple bond, as the double or triple bond makes the molecule more compact than a single C–C bond.
Another important factor is the mobile phase pH, since it can change the hydrophobic character of an ionizable analyte. For this reason most methods use a buffering agent, such as sodium phosphate, to control the pH. Buffers serve multiple purposes: they control the pH, which affects the ionization state of ionizable analytes; they affect the charge on the ionizable silica surface of the stationary phase between the bonded-phase ligands; and in some cases they even act as ion-pairing agents to neutralize analyte charge. Ammonium formate is commonly added in mass spectrometry to improve detection of certain analytes by the formation of analyte-ammonium adducts. A volatile organic acid such as acetic acid, or most commonly formic acid, is often added to the mobile phase if mass spectrometry is used to analyze the column effluents.
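The effect of pH on the ionization state, and hence on the hydrophobicity, of an acidic analyte follows the Henderson–Hasselbalch relation; a minimal sketch, with an illustrative pKa:

```python
def fraction_ionized_acid(pH, pKa):
    """Fraction of a monoprotic weak acid present as the (less hydrophobic,
    hence less retained) anion, from the Henderson-Hasselbalch relation."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

# A carboxylic acid with pKa ~ 4.8 (illustrative value, not from the text):
for pH in (2.5, 4.8, 7.0):
    print(f"pH {pH}: {fraction_ionized_acid(pH, 4.8):.1%} ionized")
```

Buffering near pH 2.5 keeps such an acid almost fully protonated (and retained), while near-neutral pH leaves it almost fully ionized, which is why small pH drifts around the pKa produce large retention-time shifts.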
Trifluoroacetic acid (TFA) as an additive to the mobile phase is widely used for complex mixtures of biomedical samples, mostly peptides and proteins, using mostly UV-based detectors. TFA is rarely used in mass spectrometry methods, due to the residues it can leave in the detector and solvent delivery system, which interfere with the analysis and detection. However, TFA can be highly effective in improving the retention of analytes such as carboxylic acids in applications utilizing other detectors such as UV-VIS, as it is a fairly strong organic acid. The effects of acids and buffers vary by application but generally improve chromatographic resolution when dealing with ionizable components.
Reversed-phase columns are quite difficult to damage compared to normal silica columns, thanks to the shielding effect of the bonded hydrophobic ligands. However, most reversed-phase columns consist of alkyl-derivatized silica particles and are prone to hydrolysis of the silica at extreme pH conditions in the mobile phase. Most types of RP columns should not be used with aqueous bases, as these will hydrolyze and dissolve the underlying silica particle. Selected brands of hybrid or enhanced silica-based RP columns can be used at extreme pH conditions. The use of extremely acidic conditions is also not recommended, as such conditions might hydrolyze the bonded phase as well as corrode the inside walls of the metallic parts of the HPLC equipment.
As a rule, in most cases RP-HPLC columns should be flushed with clean solvent after use to remove residual acids or buffers, and stored in an appropriate composition of solvent. Some biomedical applications require a non-metallic environment for optimal separation. For such sensitive cases, a useful test for the metal content of a column is to inject a sample that is a mixture of 2,2'- and 4,4'-bipyridine. Because the 2,2'-bipy can chelate metals, the shape of the peak for the 2,2'-bipy will be distorted (tailed) when metal ions are present on the surface of the silica.
=== Size-exclusion chromatography ===
Size-exclusion chromatography (SEC) separates polymer molecules and biomolecules on the basis of differences in their molecular size (actually by a particle's Stokes radius). The separation process is based on the ability of sample molecules to permeate through the pores of gel spheres packed inside the column, and depends on the relative size of the analyte molecules and the pore size of the packing. The process also relies on the absence of any interactions with the packing material surface.
Two types of SEC are usually distinguished:
Gel permeation chromatography (GPC)—separation of synthetic polymers (aqueous or organic soluble). GPC is a powerful technique for polymer characterization using primarily organic solvents.
Gel filtration chromatography (GFC)—separation of water-soluble biopolymers. GFC uses primarily aqueous solvents (typically for aqueous soluble biopolymers, such as proteins, etc.).
The separation principle in SEC is based on the full or partial penetration of the sample molecules into the porous stationary-phase particles during their transport through the column. The mobile-phase eluent is selected in such a way that it totally prevents interactions with the stationary phase's surface. Under these conditions, the smaller the molecule, the more it is able to penetrate the pore space, and its movement through the column takes longer. On the other hand, the larger the molecule, the higher the probability that it will not fully penetrate the pores of the stationary phase, and may even travel around them; thus it will elute earlier. The molecules are separated in order of decreasing molecular weight, with the largest molecules eluting from the column first and smaller molecules eluting later. Molecules larger than the pore size do not enter the pores at all and elute together as the first peak in the chromatogram; this is called the total exclusion volume, which defines the exclusion limit for a particular column. Small molecules permeate fully through the pores of the stationary-phase particles, elute last, and mark the end of the chromatogram; they may appear as a total penetration marker.
In the biomedical sciences, SEC is generally considered a low-resolution chromatography and is thus often reserved for the final, "polishing" step of a purification. It is also useful for determining the tertiary and quaternary structure of purified proteins. SEC is used primarily for the analysis of large molecules such as proteins or polymers. SEC also works in a preparative way by trapping the smaller molecules in the pores of the particles. The larger molecules simply pass by the pores, as they are too large to enter them. Larger molecules therefore flow through the column more quickly than smaller molecules: that is, the smaller the molecule, the longer the retention time.
This technique is widely used for the molecular weight determination of polysaccharides. SEC is the official technique (suggested by European pharmacopeia) for the molecular weight comparison of different commercially available low-molecular weight heparins.
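Molecular weight determination by SEC rests on a calibration of log(molar mass) against elution volume, built from standards; within the selective-permeation range the relation is roughly linear. All numbers in this sketch are invented for illustration, not from any real column:

```python
import numpy as np

# Elution volumes (mL) of standards of known molar mass (illustrative data):
ve_std = np.array([10.2, 11.5, 12.8, 14.1])
logM_std = np.log10([500_000, 100_000, 20_000, 4_000])

# Fit log10(M) = a + b * Ve over the selective-permeation range
b, a = np.polyfit(ve_std, logM_std, 1)

def molar_mass(ve):
    """Estimate the molar mass of an unknown from its elution volume,
    using the linear calibration fitted above."""
    return 10 ** (a + b * ve)

print(f"Unknown at Ve = 12.0 mL -> M ~ {molar_mass(12.0):,.0f} g/mol")
```

The same scheme underlies routine comparisons of commercial heparin batches: each sample's elution profile is mapped onto the calibration to yield a molecular weight distribution.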
=== Ion-exchange chromatography ===
Ion-exchange chromatography (IEC) or ion chromatography (IC) is an analytical technique for the separation and determination of ionic solutes in aqueous samples of environmental, industrial (e.g., metal industry and industrial wastewater), biological, pharmaceutical, and food origin. Retention is based on the attraction between solute ions and charged sites bound to the stationary phase. Solute ions with the same charge as the ions on the column are repelled and elute without retention, while solute ions with the opposite charge to the charged sites of the column are retained on it. Solute ions that are retained on the column can be eluted from it by changing the mobile phase composition, for example by increasing its salt concentration or pH, or by increasing the column temperature.
Types of ion exchangers include polystyrene resins, cellulose and dextran ion exchangers (gels), and controlled-pore glass or porous silica gel. Polystyrene resins allow cross-linkage, which increases the stability of the chain. Higher cross-linkage reduces swelling, which increases the equilibration time and ultimately improves selectivity. Cellulose and dextran ion exchangers possess larger pore sizes and low charge densities, making them suitable for protein separation.
In general, ion exchangers favor the binding of ions of higher charge and smaller radius.
An increase in counter ion (with respect to the functional groups in resins) concentration reduces the retention time, as it creates a strong competition with the solute ions. A decrease in pH reduces the retention time in cation exchange while an increase in pH reduces the retention time in anion exchange. By lowering the pH of the solvent in a cation exchange column, for instance, more hydrogen ions are available to compete for positions on the anionic stationary phase, thereby eluting weakly bound cations.
This form of chromatography is widely used in the following applications: water purification, preconcentration of trace components, ligand-exchange chromatography, ion-exchange chromatography of proteins, high-pH anion-exchange chromatography of carbohydrates and oligosaccharides, and others.
=== Bioaffinity chromatography ===
High performance affinity chromatography (HPAC) works by passing a sample solution through a column packed with a stationary phase that contains an immobilized biologically active ligand. The ligand is in fact a substrate that has a specific binding affinity for the target molecule in the sample solution. The target molecule binds to the ligand, while the other molecules in the sample solution pass through the column, having little or no retention. The target molecule is then eluted from the column using a suitable elution buffer.
This chromatographic process relies on the capability of the bonded active substances to form stable, specific, and reversible complexes thanks to their biological recognition of certain specific sample components. The formation of these complexes involves the participation of common molecular forces such as the Van der Waals interaction, electrostatic interaction, dipole-dipole interaction, hydrophobic interaction, and the hydrogen bond. An efficient, biospecific bond is formed by a simultaneous and concerted action of several of these forces in the complementary binding sites.
=== Aqueous normal-phase chromatography ===
Aqueous normal-phase chromatography (ANP) is also called hydrophilic interaction liquid chromatography (HILIC). It is a chromatographic technique that encompasses the mobile phase region between reversed-phase chromatography (RP) and organic normal-phase chromatography (ONP). HILIC is used to achieve unique selectivity for hydrophilic compounds, showing normal-phase elution order while using "reversed-phase solvents", i.e., relatively polar, mostly non-aqueous solvents in the mobile phase. Many biological molecules, especially those found in biological fluids, are small polar compounds that are not well retained by reversed-phase HPLC. This has made HILIC an attractive alternative and a useful approach for the analysis of polar molecules. Additionally, because HILIC routinely uses mixtures of water with polar organic solvents such as ACN and methanol, it can be easily coupled to MS.
== Isocratic and gradient elution ==
A separation in which the mobile phase composition remains constant throughout the procedure is termed isocratic (meaning constant composition). The word was coined by Csaba Horváth, one of the pioneers of HPLC.
The mobile phase composition does not have to remain constant. A separation in which the mobile phase composition is changed during the separation process is described as a gradient elution. For example, a gradient can start at 10% methanol in water, and end at 90% methanol in water after 20 minutes. The two components of the mobile phase are typically termed "A" and "B"; A is the "weak" solvent which allows the solute to elute only slowly, while B is the "strong" solvent which rapidly elutes the solutes from the column. In reversed-phase chromatography, solvent A is often water or an aqueous buffer, while B is an organic solvent miscible with water, such as acetonitrile, methanol, THF, or isopropanol.
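For illustration, the linear gradient just described (10% to 90% methanol over 20 minutes) can be written as a simple function of time; this sketch is not from any instrument's control software, and the endpoint values are just the example given above:

```python
def linear_gradient(t, t_start=0.0, t_end=20.0, b_start=10.0, b_end=90.0):
    """%B (the "strong" solvent, e.g. methanol) at time t in minutes,
    for the linear gradient described in the text: 10% to 90% B over
    20 minutes. Before t_start and after t_end the composition is held."""
    if t <= t_start:
        return b_start
    if t >= t_end:
        return b_end
    return b_start + (b_end - b_start) * (t - t_start) / (t_end - t_start)

print([linear_gradient(t) for t in (0, 5, 10, 20)])  # [10.0, 30.0, 50.0, 90.0]
```

Real gradient programs are usually specified as a table of such segments (holds, linear ramps, and steps), each of which can be modelled by the same piecewise-linear rule.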
In isocratic elution, peak width increases linearly with retention time, according to the equation for N, the number of theoretical plates. This can be a major disadvantage when analyzing a sample that contains analytes with a wide range of retention factors. With a weaker mobile phase, the runtime is lengthened and slowly eluting peaks become broad, leading to reduced sensitivity. A stronger mobile phase would improve runtime and the broadening of later peaks, but results in diminished peak separation, especially for quickly eluting analytes, which may have insufficient time to fully resolve. This issue is addressed through the changing mobile phase composition of gradient elution.
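The peak-width relation invoked here is the standard plate-count definition, restated to make the linear broadening explicit (a standard identity, not specific to this article):

N = 16\left(\frac{t_R}{w}\right)^{2} \quad\Longrightarrow\quad w = \frac{4\,t_R}{\sqrt{N}}

so at constant plate count N, the base width w grows in direct proportion to the retention time t_R, which is why late isocratic peaks are low and broad.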
By starting from a weaker mobile phase and strengthening it during the runtime, gradient elution decreases the retention of the later-eluting components so that they elute faster, giving narrower (and taller) peaks for most components, while also allowing for the adequate separation of earlier-eluting components. This also improves the peak shape for tailed peaks, as the increasing concentration of the organic eluent pushes the tailing part of a peak forward. This also increases the peak height (the peak looks "sharper"), which is important in trace analysis. The gradient program may include sudden "step" increases in the percentage of the organic component, or different slopes at different times – all according to the desire for optimum separation in minimum time.
In isocratic elution, the retention order does not change if the column dimensions (length and inner diameter) change – that is, the peaks elute in the same order. In gradient elution, however, the elution order may change as the dimensions or flow rate change, if they are not scaled down or up according to the change.
The driving force in reversed phase chromatography originates in the high order of the water structure. The role of the organic component of the mobile phase is to reduce this high order and thus reduce the retarding strength of the aqueous component.
== Parameters ==
=== Theoretical ===
The theory of high-performance liquid chromatography (HPLC) is, at its core, the same as general chromatography theory. It has been used as the basis for system-suitability tests, as can be seen in the USP Pharmacopeia: a set of quantitative criteria that test the suitability of the HPLC system for the required analysis at any step of it.
This relation is also represented as a normalized, unitless factor known as the retention factor, or retention parameter, which is the experimental measurement of the capacity ratio: k' = (tR − t0)/t0. Here tR is the retention time of the specific component, and t0 is the time it takes a non-retained substance to elute through the system without any retention; t0 is therefore called the void time.
The ratio between the retention factors, k', of two adjacent peaks in the chromatogram is used to evaluate the degree of separation between them, and is called the selectivity factor, α = k'2/k'1.
The plate count N as a criterion for system efficiency was developed for isocratic conditions, i.e., a constant mobile phase composition throughout the run. In gradient conditions, where the mobile phase changes with time during the chromatographic run, it is more appropriate to use the parameter peak capacity Pc as a measure of system efficiency. Peak capacity is defined as the number of peaks that can be separated within a retention window at a specific, pre-defined resolution factor, usually ~1. It can also be envisioned as the runtime measured in the number of average peak widths: Pc = 1 + tg/w(ave), where tg is the gradient time and w(ave) is the average peak width at the base.
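A minimal numerical sketch of the quantities just defined (retention factor, selectivity, peak capacity), using illustrative retention times rather than data from any real run:

```python
def retention_factor(t_r, t_0):
    """k' = (tR - t0) / t0, with t0 the void (non-retained) time."""
    return (t_r - t_0) / t_0

def selectivity(k1, k2):
    """Selectivity alpha between two adjacent peaks (k2 >= k1)."""
    return k2 / k1

def peak_capacity(t_g, w_ave):
    """Peak capacity for gradient runs: the gradient time measured in
    average base peak widths, plus the first peak: Pc = 1 + tg / w_ave."""
    return 1 + t_g / w_ave

t0 = 1.0                          # min, void time (illustrative)
k1 = retention_factor(4.0, t0)    # peak eluting at 4.0 min
k2 = retention_factor(5.2, t0)    # adjacent peak at 5.2 min
print(f"k1={k1:.1f}, k2={k2:.1f}, alpha={selectivity(k1, k2):.2f}")
print(f"Pc = {peak_capacity(t_g=30.0, w_ave=0.25):.0f} peaks")
```

With these numbers the adjacent peaks give α ≈ 1.40, and a 30-minute gradient producing 0.25-minute-wide peaks has a peak capacity of about 121.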
The parameters are largely derived from two sets of chromatographic theory: plate theory (as part of partition chromatography), and the rate theory of chromatography / Van Deemter equation. Of course, they can be put in practice through analysis of HPLC chromatograms, although rate theory is considered the more accurate theory.
They are analogous to the calculation of the retention factor for a paper chromatography separation, but describe how well HPLC separates a mixture into two or more components that are detected as peaks (bands) on a chromatogram. The HPLC parameters are the efficiency factor (N), the retention factor (kappa prime), and the separation factor (alpha). Together these factors are variables in a resolution equation, which describes how well two components' peaks are separated or overlap each other. These parameters are mostly used only for describing HPLC reversed-phase and HPLC normal-phase separations, since those separations tend to be more subtle than other HPLC modes (e.g., ion exchange and size exclusion).
Void volume is the amount of space in a column that is occupied by solvent. It is the space within the column that is outside of the column's internal packing material. Void volume is measured on a chromatogram as the first component peak detected, which is usually the solvent that was present in the sample mixture; ideally the sample solvent flows through the column without interacting with the column, but is still detectable as distinct from the HPLC solvent. The void volume is used as a correction factor.
Efficiency factor (N) practically measures how sharp component peaks on the chromatogram are, as the ratio of the component peak's retention time to the width of the peak at its widest point (at the baseline). Peaks that are tall, sharp, and relatively narrow indicate that the separation method efficiently removed a component from the mixture, i.e., high efficiency. Efficiency is very dependent upon the HPLC column and the HPLC method used. The efficiency factor is synonymous with the plate number and the 'number of theoretical plates'.
Retention factor (kappa prime) measures how long a component of the mixture is retained by the column, as judged from the position of its peak on the chromatogram's time axis (since HPLC chromatograms are a function of time). Each chromatogram peak has its own retention factor (e.g., kappa1 for the retention factor of the first peak). This factor may be corrected for the void volume of the column.
Separation factor (alpha) is a relative comparison of how well two neighboring components of the mixture were separated (i.e., two neighboring bands on a chromatogram). This factor is defined in terms of a ratio of the retention factors of a pair of neighboring chromatogram peaks, and may also be corrected for the void volume of the column. The greater the separation factor value is above 1.0, the better the separation, up to about 2.0, beyond which an HPLC method is probably not needed for the separation.
Resolution equations relate the three factors such that high efficiency and separation factors improve the resolution of component peaks in an HPLC separation.
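The three factors are commonly combined in the fundamental (Purnell) resolution equation, Rs = (√N/4)·((α−1)/α)·(k2/(1+k2)); a small sketch with illustrative values:

```python
from math import sqrt

def resolution(N, alpha, k2):
    """Fundamental resolution equation: Rs = (sqrt(N)/4) * ((alpha-1)/alpha)
    * (k2/(1+k2)), with N the plate count, alpha the separation factor and
    k2 the retention factor of the later-eluting peak."""
    return (sqrt(N) / 4.0) * ((alpha - 1.0) / alpha) * (k2 / (1.0 + k2))

# Illustrative: 10,000 plates, alpha = 1.1, k2 = 4
print(f"Rs = {resolution(10_000, 1.1, 4.0):.2f}")   # ~1.82; baseline ~1.5
```

The square-root dependence on N is why doubling the column length (or plate count) gives only a ~41% resolution gain, whereas small improvements in selectivity α pay off much faster.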
=== Internal diameter ===
The internal diameter (ID) of an HPLC column is an important parameter. A smaller ID can improve the detection response because of the reduced lateral diffusion of the solute band. It can also affect the separation selectivity when flow rate and injection volume are not scaled down or up in proportion to the smaller or larger diameter used, in both isocratic and gradient modes (see the scaling sketch after the column classes below). The ID also determines the quantity of analyte that can be loaded onto the column. Larger-diameter columns are usually seen in preparative applications, such as the purification of a drug product for later use. Low-ID columns have improved sensitivity and lower solvent consumption, as in the recent ultra-high-performance liquid chromatography (UHPLC).
Larger ID columns (over 10 mm) are used to purify usable amounts of material because of their large loading capacity.
Analytical scale columns (4.6 mm) have been the most common type of columns, though narrower columns are rapidly gaining in popularity. They are used in traditional quantitative analysis of samples and often use a UV-Vis absorbance detector.
Narrow-bore columns (1–2 mm) are used for applications where more sensitivity is desired, either with special UV-Vis detectors, fluorescence detection, or with other detection methods like liquid chromatography-mass spectrometry.
Capillary columns (under 0.3 mm) are used almost exclusively with alternative detection means such as mass spectrometry. They are usually made from fused silica capillaries, rather than the stainless steel tubing that larger columns employ.
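A common rule of thumb when transferring a method between column diameters is to scale the flow rate and injection volume with the cross-sectional area; a sketch (the 4.6 mm and 2.1 mm diameters are illustrative, not prescriptive):

```python
def scale_by_diameter(value, id_from_mm, id_to_mm):
    """Scale a flow rate or injection volume with the column
    cross-sectional area, i.e. by (id_to / id_from)**2, to keep the
    same linear velocity and on-column concentration."""
    return value * (id_to_mm / id_from_mm) ** 2

# 1.0 mL/min on a 4.6 mm analytical column -> 2.1 mm narrow-bore column:
print(f"{scale_by_diameter(1.0, 4.6, 2.1):.3f} mL/min")   # ~0.208 mL/min
```

Keeping the linear velocity constant in this way preserves the retention behavior (and, in gradient mode, helps preserve elution order) when moving between column formats.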
=== Particle size ===
Most traditional HPLC is performed with the stationary phase attached to the outside of small spherical silica particles (very small beads). These particles come in a variety of sizes with 5 μm beads being the most common. Smaller particles generally provide more surface area and better separations, but the pressure required for optimum linear velocity increases by the inverse of the particle diameter squared.
According to the equations for column velocity, efficiency and backpressure, reducing the particle diameter by half while keeping the size of the column the same will double the column velocity and efficiency, but increase the backpressure fourfold. Small-particle HPLC can also decrease peak broadening. Larger particles are used in preparative HPLC (column diameters 5 cm up to >30 cm) and for non-HPLC applications such as solid-phase extraction.
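The scaling just stated can be encoded directly; this sketch simply applies the rules from the text (efficiency roughly ∝ 1/dp, backpressure ∝ 1/dp² at fixed column size), with illustrative starting values:

```python
def rescale_for_particle_size(dp_from_um, dp_to_um, N=10_000, dP_bar=100.0):
    """Rescale plate count and backpressure when changing particle
    diameter at constant column dimensions, per the rules in the text:
    N scales as 1/dp, pressure as 1/dp**2. Starting N and pressure
    are illustrative placeholders."""
    ratio = dp_from_um / dp_to_um
    return N * ratio, dP_bar * ratio ** 2

print(rescale_for_particle_size(5.0, 2.5))   # (20000.0, 400.0)
```

Halving the particles from 5 μm to 2.5 μm thus doubles the plate count but quadruples the pressure, which is precisely the trade-off that drove the development of UHPLC hardware.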
=== Pore size ===
Many stationary phases are porous to provide greater surface area. Small pores provide greater surface area while larger pore size has better kinetics, especially for larger analytes. For example, a protein which is only slightly smaller than a pore might enter the pore but does not easily leave once inside.
=== Pump pressure ===
Pumps vary in pressure capacity, but their performance is measured by their ability to yield a consistent and reproducible volumetric flow rate. Pressure may reach as high as about 41 MPa (6,000 lbf/in2), or roughly 400 atmospheres. Modern HPLC systems have been improved to work at much higher pressures, and therefore are able to use much smaller particle sizes in the columns (<2 μm). These "ultra high performance liquid chromatography" systems, or UHPLCs, which could also be known as ultra-high-pressure chromatography systems, can work at up to 120 MPa (17,405 lbf/in2), or about 1,200 atmospheres. The term "UPLC" is a trademark of the Waters Corporation, but is sometimes used to refer to the more general technique of UHPLC.
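The unit conversions behind these figures are straightforward; a quick sketch (conversion constants only, no instrument specifics):

```python
PSI_PER_MPA = 145.038   # pounds-force per square inch in one megapascal
ATM_PER_MPA = 9.8692    # standard atmospheres in one megapascal

for mpa in (41.4, 120.0):
    print(f"{mpa} MPa = {mpa * PSI_PER_MPA:,.0f} psi"
          f" = {mpa * ATM_PER_MPA:,.0f} atm")
```

Running this reproduces the quoted pairs: roughly 6,000 psi / 409 atm for a conventional system at ~41 MPa, and 17,405 psi / 1,184 atm for a UHPLC at 120 MPa.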
=== Detectors ===
HPLC detectors fall into two main categories: universal and selective. Universal detectors typically measure a bulk property (e.g., refractive index) by measuring a difference in a physical property between the mobile phase alone and the mobile phase with solute, while selective detectors measure a solute property (e.g., UV-Vis absorbance) by responding to a physical or chemical property of the solute itself. HPLC most commonly uses a UV-Vis absorbance detector; however, a wide range of other chromatography detectors can be used. A universal detector that complements UV-Vis absorbance detection is the charged aerosol detector (CAD). Another commonly utilized detector is the refractive index detector, which provides readings by measuring the changes in the refractive index of the eluent as it moves through the flow cell. In certain cases, it is possible to use multiple detectors; for example, LC-MS normally combines UV-Vis with a mass spectrometer.
When used with an electrochemical detector (ECD) the HPLC-ECD selectively detects neurotransmitters such as: norepinephrine, dopamine, serotonin, glutamate, GABA, acetylcholine and others in neurochemical analysis research applications. The HPLC-ECD detects neurotransmitters to the femtomolar range. Other methods to detect neurotransmitters include liquid chromatography-mass spectrometry, ELISA, or radioimmunoassays.
=== Autosamplers ===
Large numbers of samples can be automatically injected onto an HPLC system by the use of HPLC autosamplers. In addition, HPLC autosamplers apply an injection volume and technique that is exactly the same for each injection, so they provide a high degree of injection-volume precision.
It is possible to enable sample stirring within the sampling-chamber, thus promoting homogeneity.
== Applications ==
=== Manufacturing ===
HPLC has many applications in both laboratory and clinical science. It is a common technique used in pharmaceutical development, as it is a dependable way to obtain and ensure product purity. While HPLC can produce extremely high-quality (pure) products, it is not always the primary method used in the production of bulk drug materials. According to the European pharmacopoeia, HPLC is used in only 15.5% of syntheses. However, it plays a role in 44% of syntheses in the United States pharmacopoeia. This could possibly be due to differences in monetary and time constraints, as HPLC on a large scale can be an expensive technique. The increase in specificity, precision, and accuracy that occurs with HPLC unfortunately corresponds to an increase in cost.
=== Legal ===
This technique is also used for the detection of illicit drugs in various samples. The most common method of drug detection has been the immunoassay, which is much more convenient. However, convenience comes at the cost of specificity and coverage of a wide range of drugs, so HPLC has been used as an alternative method. As HPLC is a method of determining (and possibly increasing) purity, using HPLC alone to evaluate concentrations of drugs is somewhat insufficient. Therefore, HPLC in this context is often performed in conjunction with mass spectrometry. Using liquid chromatography-mass spectrometry (LC-MS) instead of gas chromatography-mass spectrometry (GC-MS) circumvents the need for derivatizing with acetylating or alkylation agents, which can be a burdensome extra step. LC-MS has been used to detect a variety of agents like doping agents, drug metabolites, glucuronide conjugates, amphetamines, opioids, cocaine, BZDs, ketamine, LSD, cannabis, and pesticides. Performing HPLC in conjunction with mass spectrometry reduces the absolute need for standardizing HPLC experimental runs.
=== Research ===
Similar assays can be performed for research purposes, detecting concentrations of potential clinical candidates like anti-fungal and asthma drugs. This technique is also useful for observing multiple species in collected samples, but requires the use of standard solutions when information about species identity is sought. It is used as a method to confirm the results of synthesis reactions, as purity is essential in this type of research. However, mass spectrometry is still the more reliable way to identify species.
=== Medical and health sciences ===
Medical use of HPLC typically employs a mass spectrometer (MS) as the detector, so the technique is called LC-MS, or LC-MS/MS for tandem MS, where two types of MS are operated sequentially. When the HPLC instrument is connected to more than one detector, it is called a hyphenated LC system. Pharmaceutical applications are the major users of HPLC, LC-MS and LC-MS/MS. This includes drug development and pharmacology (the scientific study of the effects of drugs and chemicals on living organisms), personalized medicine, public health and diagnostics. While urine is the most common medium for analyzing drug concentrations, blood serum is the sample collected for most medical analyses with HPLC. One of the most important roles of LC-MS and LC-MS/MS in the clinical lab is newborn screening (NBS) for metabolic disorders and follow-up diagnostics. The infants' samples come in the form of dried blood spots (DBS), which are simple to prepare and transport, enabling safe and accessible diagnostics, both locally and globally.
Other methods of detection of molecules that are useful for clinical studies have been tested against HPLC, namely immunoassays. In one example of this, competitive protein binding assays (CPBA) and HPLC were compared for sensitivity in detection of vitamin D. Useful for diagnosing vitamin D deficiencies in children, it was found that sensitivity and specificity of this CPBA reached only 40% and 60%, respectively, of the capacity of HPLC. While an expensive tool, the accuracy of HPLC is nearly unparalleled.
== See also ==
History of chromatography
Capillary electrochromatography
Column chromatography
Csaba Horváth
Ion chromatography
Micellar liquid chromatography
== References ==
== Further reading ==
L. R. Snyder, J.J. Kirkland, and J. W. Dolan, Introduction to Modern Liquid Chromatography, John Wiley & Sons, New York, 2009.
M. W. Dong, Modern HPLC for Practicing Scientists, Wiley, 2006.
L. R. Snyder, J.J. Kirkland, and J. L. Glajch, Practical HPLC Method Development, John Wiley & Sons, New York, 1997.
S. Ahuja and H. T. Rasmussen (ed), HPLC Method Development for Pharmaceuticals, Academic Press, 2007.
S. Ahuja and M.W. Dong (ed), Handbook of Pharmaceutical Analysis by HPLC, Elsevier/Academic Press, 2005.
Y. V. Kazakevich and R. LoBrutto (ed.), HPLC for Pharmaceutical Scientists, Wiley, 2007.
U. D. Neue, HPLC Columns: Theory, Technology, and Practice, Wiley-VCH, New York, 1997.
M. C. McMaster, HPLC: A Practical User's Guide, Wiley, 2007.
== External links ==
HPLC Chromatography Principle, Application [Basic Note] – 2020, at Rxlalit.com (usurped link) | Wikipedia/High-performance_liquid_chromatography |
Chemistry is often called the central science because of its role in connecting the physical sciences, which include chemistry, with the life sciences, pharmaceutical sciences and applied sciences such as medicine and engineering. The nature of this relationship is one of the main topics in the philosophy of chemistry and in scientometrics. The phrase was popularized by its use in a textbook by Theodore L. Brown and H. Eugene LeMay, titled Chemistry: The Central Science, which was first published in 1977, with a fifteenth edition published in 2021.
The central role of chemistry can be seen in the systematic and hierarchical classification of the sciences by Auguste Comte, in which each discipline provides a more general framework for the area it precedes (mathematics → astronomy → physics → chemistry → biology → social sciences). Balaban and Klein have more recently proposed a diagram showing the partial ordering of the sciences in which chemistry may be argued to be "the central science", since it provides a significant degree of branching. In forming these connections, the lower field cannot be fully reduced to the higher ones; it is recognized that the lower fields possess emergent ideas and concepts that do not exist in the higher fields of science.
Thus chemistry is built on an understanding of the laws of physics that govern particles such as atoms, protons, neutrons and electrons, as well as concepts such as thermodynamics, although it has been shown that chemistry has not been "fully 'reduced' to quantum mechanics". Concepts such as the periodicity of the elements and chemical bonds in chemistry are emergent in that they are more than the underlying forces defined by physics.
In the same way, biology cannot be fully reduced to chemistry, although the machinery that is responsible for life is composed of molecules. For instance, the machinery of evolution may be described in terms of chemistry by the understanding that it is a mutation in the order of genetic base pairs in the DNA of an organism. However, chemistry cannot fully describe the process since it does not contain concepts such as natural selection that are responsible for driving evolution. Chemistry is fundamental to biology since it provides a methodology for studying and understanding the molecules that compose cells.
Connections made by chemistry are formed through various sub-disciplines that utilize concepts from multiple scientific disciplines. Chemistry and physics are both needed in the areas of physical chemistry, nuclear chemistry, and theoretical chemistry. Chemistry and biology intersect in the areas of biochemistry, medicinal chemistry, molecular biology, chemical biology, molecular genetics, and immunochemistry. Chemistry and the earth sciences intersect in areas like geochemistry and hydrology.
== See also ==
Fundamental science
Hard and soft science
Philosophy of chemistry
Special sciences
Unity of science
== References == | Wikipedia/The_central_science |
Materials informatics is a field of study that applies the principles of informatics and data science to materials science and engineering to improve the understanding, use, selection, development, and discovery of materials. The term "materials informatics" is frequently used interchangeably with "data science", "machine learning", and "artificial intelligence" by the community. This is an emerging field, aiming at high-speed and robust acquisition, management, analysis, and dissemination of diverse materials data, with the goal of greatly reducing the time and risk required to develop, produce, and deploy new materials, which generally takes longer than 20 years.
This field of endeavor is not limited to some traditional understandings of the relationship between materials and information. Some more narrow interpretations include combinatorial chemistry, process modeling, materials databases, materials data management, and product life cycle management. Materials informatics is at the convergence of these concepts, but also transcends them and has the potential to achieve greater insights and deeper understanding by applying lessons learned from data gathered on one type of material to others. By gathering appropriate meta data, the value of each individual data point can be greatly expanded.
== Databases ==
Databases are essential for any informatics research and applications. In materials informatics, many databases exist containing both empirical data obtained experimentally and theoretical data obtained computationally. Big data suitable for machine learning is particularly difficult to obtain for experimental work, due to the lack of a standard for reporting data and the variability in the experimental environment. This lack of big data has led to a growing effort to develop machine learning techniques that utilize extremely small data sets. On the other hand, large uniform databases of theoretical density functional theory (DFT) calculations exist, and these databases have proven their utility in high-throughput material screening and discovery.
Some common DFT databases and high throughput tools are listed below:
Databases: MaterialsProject.org, MaterialsWeb.org (University of Florida)
HT software: Pymatgen, MPInterfaces
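As an illustration of such high-throughput tooling, pymatgen can construct and manipulate crystal structures in a few lines. This is a minimal sketch assuming a current pymatgen installation; the CsCl example and lattice constant are illustrative, not tied to any database entry:

```python
from pymatgen.core import Lattice, Structure

# Build a simple CsCl-type structure: a cubic lattice with a two-atom basis
# given in fractional coordinates (illustrative 4.2 angstrom lattice constant).
lattice = Lattice.cubic(4.2)
structure = Structure(lattice, ["Cs", "Cl"],
                      [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])

print(structure.composition.reduced_formula)  # "CsCl"
print(structure.volume)                       # 74.088 (cubic angstroms)
```

Objects like this Structure are the common currency of high-throughput workflows: they can be written to DFT input files, screened by composition or symmetry, and compared against database entries in bulk.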
== Beyond computational methods? ==
The concept of materials informatics is addressed by the Materials Research Society. For example, materials informatics was the theme of the December 2006 issue of the MRS Bulletin. The issue was guest-edited by John Rodgers of Innovative Materials, Inc., and David Cebon of Cambridge University, who described the "high payoff for developing methodologies that will accelerate the insertion of materials, thereby saving millions of investment dollars."
The editors focused on the limited definition of materials informatics as primarily focused on computational methods to process and interpret data. They stated that "specialized informatics tools for data capture, management, analysis, and dissemination" and "advances in computing power, coupled with computational modeling and simulation and materials properties databases" will enable such accelerated insertion of materials.
A broader definition of materials informatics goes beyond the use of computational methods to carry out the same experimentation, viewing materials informatics as a framework in which a measurement or computation is one step in an information-based learning process that uses the power of a collective to achieve greater efficiency in exploration. When properly organized, this framework crosses materials boundaries to uncover fundamental knowledge of the basis of physical, mechanical, and engineering properties.
== Challenges ==
While there are many who believe in the future of informatics in the materials development and scaling process, many challenges remain. Hill, et al., write that "Today, the materials community faces serious challenges to bringing about this data-accelerated research paradigm, including diversity of research areas within materials, lack of data standards, and missing incentives for sharing, among others. Nonetheless, the landscape is rapidly changing in ways that should benefit the entire materials research enterprise."
This remaining tension between traditional materials development methodologies and the use of more computationally, machine learning, and analytics approaches will likely exist for some time as the materials industry overcomes some of the cultural barriers necessary to fully embrace such new ways of thinking.
== Analogy from Biology ==
The overarching goals of bioinformatics and systems biology may provide a useful analogy. Andrew Murray of Harvard University expresses the hope that such an approach "will save us from the era of 'one graduate student, one gene, one PhD'". Similarly, the goal of materials informatics is to save us from one graduate student, one alloy, one PhD. Such goals will require more sophisticated strategies and research paradigms than applying data-science methods to the same task set currently undertaken by students.
== See also ==
Material selection
Structural bioinformatics
Data mining
Cheminformatics
== External links ==
Primary Journals: Journal of Materials Informatics (Editor-in-Chief: Tong-Yi Zhang), Materials Informatics and Data Science (Editor-in-Chief: Yaroslava G. Yingling)
ICME community on MaterialsTechnology@TMS
The Material Informatics Workshop: Theory and Application (March 2007 JOM-e issue on M.I.)
K. Rajan, Materials informatics, Materials Today, Volume 8, Issue 10, October 2005, Pages 38-45, ISSN 1369-7021, doi:10.1016/S1369-7021(05)71123-8.
May 2016 APL Materials Issues on Materials Genome/Materials Informatics—P. Littlewood and C.L. Phillips, APL Materials, Volume 4, Issue 5, May 2016
Material Informatics Industry Outlook to 2030
== References ==
Chapter 5: The Importance of Data [1] in Going to Extremes: Meeting the Emerging Demand for Durable Polymer Matrix Composites [2]
MRS Bulletin: Materials Informatics, Volume 31, No. 12.[3] | Wikipedia/Materials_informatics |
Stratigraphy is a branch of geology concerned with the study of rock layers (strata) and layering (stratification). It is primarily used in the study of sedimentary and layered volcanic rocks.
Stratigraphy has three related subfields: lithostratigraphy (lithologic stratigraphy), biostratigraphy (biologic stratigraphy), and chronostratigraphy (stratigraphy by age).
== Historical development ==
Catholic priest Nicolas Steno established the theoretical basis for stratigraphy when he introduced the law of superposition, the principle of original horizontality and the principle of lateral continuity in a 1669 work on the fossilization of organic remains in layers of sediment.
The first practical large-scale application of stratigraphy was by William Smith in the 1790s and early 19th century. Known as the "Father of English geology", Smith recognized the significance of strata or rock layering and the importance of fossil markers for correlating strata; he created the first geological map of England. Other influential applications of stratigraphy in the early 19th century were by Georges Cuvier and Alexandre Brongniart, who studied the geology of the region around Paris.
== Lithostratigraphy ==
Variation in rock units, most obviously displayed as visible layering, is due to physical contrasts in rock type (lithology). This variation can occur vertically as layering (bedding), or laterally, and reflects changes in environments of deposition (known as facies change). These variations provide a lithostratigraphy or lithologic stratigraphy of the rock unit. Key concepts in stratigraphy involve understanding how certain geometric relationships between rock layers arise and what these geometries imply about their original depositional environment. The basic concept in stratigraphy, called the law of superposition, states: in an undeformed stratigraphic sequence, the oldest strata occur at the base of the sequence.
Chemostratigraphy studies the changes in the relative proportions of trace elements and isotopes within and between lithologic units. Carbon and oxygen isotope ratios vary with time, and researchers can use those to map subtle changes that occurred in the paleoenvironment. This has led to the specialized field of isotopic stratigraphy.
Cyclostratigraphy documents the often cyclic changes in the relative proportions of minerals (particularly carbonates), grain size, thickness of sediment layers (varves) and fossil diversity with time, related to seasonal or longer term changes in palaeoclimates.
== Biostratigraphy ==
Biostratigraphy or paleontologic stratigraphy is based on fossil evidence in the rock layers. Strata from widespread locations containing the same fossil fauna and flora are said to be correlatable in time. Biologic stratigraphy was based on William Smith's principle of faunal succession, which predated, and was one of the first and most powerful lines of evidence for, biological evolution. It provides strong evidence for the formation (speciation) and extinction of species. The geologic time scale was developed during the 19th century, based on the evidence of biologic stratigraphy and faunal succession. This timescale remained a relative scale until the development of radiometric dating, which was based on an absolute time framework, leading to the development of chronostratigraphy.
One important development is the Vail curve, which attempts to define a global historical sea-level curve according to inferences from worldwide stratigraphic patterns. Stratigraphy is also commonly used to delineate the nature and extent of hydrocarbon-bearing reservoir rocks, seals, and traps of petroleum geology.
== Chronostratigraphy ==
Chronostratigraphy is the branch of stratigraphy that places an absolute age, rather than a relative age, on rock strata. The branch is concerned with deriving geochronological data for rock units, both directly and inferentially, so that the sequence of time-relative events that created the rock formations can be derived. The ultimate aim of chronostratigraphy is to place dates on the sequence of deposition of all rocks within a geological region, then within every region, and by extension to provide an entire geologic record of the Earth.
A gap or missing strata in the geological record of an area is called a stratigraphic hiatus. This may be the result of a halt in the deposition of sediment. Alternatively, the gap may be due to removal by erosion, in which case it may be called a stratigraphic vacuity. It is called a hiatus because deposition was on hold for a period of time. A physical gap may represent both a period of non-deposition and a period of erosion. A geologic fault may cause the appearance of a hiatus.
=== Magnetostratigraphy ===
Magnetostratigraphy is a chronostratigraphic technique used to date sedimentary and volcanic sequences. The method works by collecting oriented samples at measured intervals throughout a section. The samples are analyzed to determine their detrital remanent magnetism (DRM), that is, the polarity of Earth's magnetic field at the time a stratum was deposited. For sedimentary rocks this is possible because, as they fall through the water column, very fine-grained magnetic minerals (< 17 μm) behave like tiny compasses, orienting themselves with Earth's magnetic field. Upon burial, that orientation is preserved. For volcanic rocks, magnetic minerals, which form in the melt, orient themselves with the ambient magnetic field, and are fixed in place upon crystallization of the lava.
Oriented paleomagnetic core samples are collected in the field; mudstones, siltstones, and very fine-grained sandstones are the preferred lithologies because the magnetic grains are finer and more likely to orient with the ambient field during deposition. If the ancient magnetic field was oriented similarly to today's field (North Magnetic Pole near the North Rotational Pole), the strata retain a normal polarity. If the data indicate that the North Magnetic Pole was near the South Rotational Pole, the strata exhibit reversed polarity.
Results of the individual samples are analyzed by removing the natural remanent magnetization (NRM) to reveal the DRM. Following statistical analysis, the results are used to generate a local magnetostratigraphic column that can then be compared against the Global Magnetic Polarity Time Scale.
This technique is used to date sequences that generally lack fossils or interbedded igneous rocks. The continuous nature of the sampling means that it is also a powerful technique for the estimation of sediment-accumulation rates.
== See also ==
== References ==
== Further reading ==
Christopherson, R. W., 2008. Geosystems: An Introduction to Physical Geography, 7th ed., New York: Pearson Prentice-Hall. ISBN 978-0-13-600598-8.
Montenari, M., 2016. Stratigraphy and Timescales, 1st ed., Amsterdam: Academic Press (Elsevier). ISBN 978-0-12-811549-7.
== External links ==
ICS Subcommission for Stratigraphic Information
University of South Carolina Sequence Stratigraphy Web
International Commission on Stratigraphy
University of Georgia (USA) Stratigraphy Lab
Stratigraphy.net A stratigraphic data provider.
Agenames.org A global index of stratigraphic terms | Wikipedia/Stratigraphy |
Molecular modelling encompasses all methods, theoretical and computational, used to model or mimic the behaviour of molecules. The methods are used in the fields of computational chemistry, drug design, computational biology and materials science to study molecular systems ranging from small chemical systems to large biological molecules and material assemblies. The simplest calculations can be performed by hand, but computers are inevitably required to perform molecular modelling of any reasonably sized system. The common feature of molecular modelling methods is the atomistic-level description of the molecular systems. This may include treating atoms as the smallest individual units (a molecular mechanics approach), or explicitly modelling protons and neutrons with their quarks, anti-quarks and gluons, and electrons with their photons (a quantum chemistry approach).
== Molecular mechanics ==
Molecular mechanics is one aspect of molecular modelling, as it involves the use of classical mechanics (Newtonian mechanics) to describe the physical basis behind the models. Molecular models typically describe atoms (nucleus and electrons collectively) as point charges with an associated mass. The interactions between neighbouring atoms are described by spring-like interactions (representing chemical bonds) and Van der Waals forces. The Lennard-Jones potential is commonly used to describe the latter. The electrostatic interactions are computed based on Coulomb's law. Atoms are assigned coordinates in Cartesian space or in internal coordinates, and can also be assigned velocities in dynamical simulations. The atomic velocities are related to the temperature of the system, a macroscopic quantity. The collective mathematical expression is termed a potential function and is related to the system internal energy (U), a thermodynamic quantity equal to the sum of potential and kinetic energies. Methods which minimize the potential energy are termed energy minimization methods (e.g., steepest descent and conjugate gradient), while methods that model the behaviour of the system with propagation of time are termed molecular dynamics.
E = E_{\text{bonds}} + E_{\text{angle}} + E_{\text{dihedral}} + E_{\text{non-bonded}}

E_{\text{non-bonded}} = E_{\text{electrostatic}} + E_{\text{van der Waals}}
This function, referred to as a potential function, computes the molecular potential energy as a sum of energy terms that describe the deviation of bond lengths, bond angles and torsion angles away from equilibrium values, plus terms for non-bonded pairs of atoms describing van der Waals and electrostatic interactions. The set of parameters consisting of equilibrium bond lengths, bond angles, partial charge values, force constants and van der Waals parameters are collectively termed a force field. Different implementations of molecular mechanics use different mathematical expressions and different parameters for the potential function. The common force fields in use today have been developed by using chemical theory, experimental reference data, and high level quantum calculations. The method, termed energy minimization, is used to find positions of zero gradient for all atoms, in other words, a local energy minimum. Lower energy states are more stable and are commonly investigated because of their role in chemical and biological processes. A molecular dynamics simulation, on the other hand, computes the behaviour of a system as a function of time. It involves solving Newton's laws of motion, principally the second law,
\mathbf{F} = m\mathbf{a}. Integration of Newton's laws of motion, using different integration algorithms, leads to atomic trajectories in space and time. The force on an atom is defined as the negative gradient of the potential energy function. The energy minimization method is useful for obtaining a static picture for comparing states of similar systems, while molecular dynamics provides information about dynamic processes with the intrinsic inclusion of temperature effects.
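As a toy illustration of the non-bonded terms in the potential function above, the following sketch sums Lennard-Jones and Coulomb pair energies over all atom pairs. The sigma/epsilon values are argon-like placeholders, not parameters from any real force field:

```python
import numpy as np

def nonbonded_energy(coords, charges, sigma=3.4, epsilon=0.996,
                     coulomb_k=332.06):
    """Sum of Lennard-Jones and Coulomb pair energies over all atom pairs.
    Units are illustrative (angstroms, kcal/mol, elementary charges);
    coulomb_k is the standard Coulomb constant in these units."""
    e_lj, e_coul = 0.0, 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            sr6 = (sigma / r) ** 6
            e_lj += 4.0 * epsilon * (sr6 ** 2 - sr6)    # LJ 12-6 term
            e_coul += coulomb_k * charges[i] * charges[j] / r
    return e_lj + e_coul

# Two neutral atoms near the LJ minimum (illustrative geometry):
coords = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0]])
print(nonbonded_energy(coords, charges=[0.0, 0.0]))   # ~ -1.0 kcal/mol
```

The negative gradient of such a function with respect to the coordinates is exactly the force used in energy minimization and molecular dynamics.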
== Variables ==
Molecules can be modelled either in vacuum, or in the presence of a solvent such as water. Simulations of systems in vacuum are referred to as gas-phase simulations, while those that include the presence of solvent molecules are referred to as explicit solvent simulations. In another type of simulation, the effect of solvent is estimated using an empirical mathematical expression; these are termed implicit solvation simulations.
=== Coordinate representations ===
Most force fields are distance-dependent, which makes Cartesian coordinates the most convenient expression for them. Yet the comparatively rigid nature of bonds between specific atoms, which in essence defines what is meant by the designation molecule, makes an internal coordinate system the most logical representation. In some fields the IC representation (bond length, angle between bonds, and twist angle of the bond) is termed the Z-matrix or torsion angle representation. Unfortunately, continuous motions in Cartesian space often require discontinuous angular branches in internal coordinates, making it relatively hard to work with force fields in the internal coordinate representation; conversely, a simple displacement of an atom in Cartesian space may not be a straight-line trajectory, due to the prohibitions of the interconnected bonds. Thus, it is very common for computational optimization programs to flip back and forth between representations during their iterations. This can dominate the calculation time of the potential itself, and in long-chain molecules it can introduce cumulative numerical inaccuracy. While all conversion algorithms produce mathematically identical results, they differ in speed and numerical accuracy. Currently, the fastest and most accurate torsion-to-Cartesian conversion is the Natural Extension Reference Frame (NERF) method.
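The core of any torsion-to-Cartesian conversion is placing each new atom from its three internal coordinates relative to three previously placed atoms. The following is a schematic sketch of that per-atom step in the spirit of NERF, not the optimized published routine; the geometry in the usage example is invented:

```python
import numpy as np

def place_atom(a, b, c, bond, angle, torsion):
    """Place atom D from internal coordinates (bond length C-D, bond angle
    B-C-D, torsion A-B-C-D, angles in radians), given the Cartesian
    positions of the three previously placed atoms A, B, C."""
    # Position of D in a local frame anchored at C
    d_local = np.array([
        -bond * np.cos(angle),
        bond * np.sin(angle) * np.cos(torsion),
        bond * np.sin(angle) * np.sin(torsion),
    ])
    bc = (c - b) / np.linalg.norm(c - b)
    ab = (b - a) / np.linalg.norm(b - a)
    n = np.cross(ab, bc)
    n /= np.linalg.norm(n)
    m = np.column_stack([bc, np.cross(n, bc), n])  # local -> global rotation
    return m @ d_local + c

a = np.array([0.0, 0.0, 0.0])
b = np.array([1.5, 0.0, 0.0])
c = np.array([2.0, 1.4, 0.0])
d = place_atom(a, b, c, bond=1.5,
               angle=np.deg2rad(109.5), torsion=np.deg2rad(60.0))
print(d)
```

Applying this step atom by atom along a chain converts a Z-matrix into Cartesian coordinates; the speed and rounding behavior of exactly this inner loop are what distinguish the conversion algorithms compared in the text.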
== Applications ==
Molecular modelling methods are used routinely to investigate the structure, dynamics, surface properties, and thermodynamics of inorganic, biological, and polymeric systems. A large number of force-field parameter sets and molecular models are readily available in databases today. The types of biological activity that have been investigated using molecular modelling include protein folding, enzyme catalysis, protein stability, conformational changes associated with biomolecular function, and molecular recognition of proteins, DNA, and membrane complexes.
== See also ==
== References ==
== Further reading == | Wikipedia/Molecular_modelling |
The Protein Data Bank (PDB) is a database for the three-dimensional structural data of large biological molecules such as proteins and nucleic acids, which is overseen by the Worldwide Protein Data Bank (wwPDB). This structural data is obtained and deposited by biologists and biochemists worldwide through the use of experimental methodologies such as X-ray crystallography, NMR spectroscopy, and, increasingly, cryo-electron microscopy. All submitted data are reviewed by expert biocurators and, once approved, are made freely available on the Internet under the CC0 Public Domain Dedication. Global access to the data is provided by the websites of the wwPDB member organizations (PDBe, PDBj, RCSB PDB, and BMRB).
The PDB is a key resource in areas of structural biology, such as structural genomics. Most major scientific journals and some funding agencies now require scientists to submit their structure data to the PDB. Many other databases use protein structures deposited in the PDB. For example, SCOP and CATH classify protein structures, while PDBsum provides a graphic overview of PDB entries using information from other sources, such as Gene Ontology.
== History ==
Two forces converged to initiate the PDB: a small but growing collection of sets of protein structure data determined by X-ray diffraction; and the newly available (1968) molecular graphics display, the Brookhaven RAster Display (BRAD), to visualize these protein structures in 3-D. In 1969, with the sponsorship of Walter Hamilton at the Brookhaven National Laboratory, Edgar Meyer (Texas A&M University) began to write software to store atomic coordinate files in a common format to make them available for geometric and graphical evaluation. By 1971, one of Meyer's programs, SEARCH, enabled researchers to remotely access information from the database to study protein structures offline. SEARCH was instrumental in enabling networking, thus marking the functional beginning of the PDB.
The Protein Data Bank was announced in October 1971 in Nature New Biology as a joint venture between Cambridge Crystallographic Data Centre, UK and Brookhaven National Laboratory, US.
Upon Hamilton's death in 1973, Tom Koetzle took over direction of the PDB for the subsequent 20 years. In January 1994, Joel Sussman of Israel's Weizmann Institute of Science was appointed head of the PDB. In October 1998, the PDB was transferred to the Research Collaboratory for Structural Bioinformatics (RCSB); the transfer was completed in June 1999. The new director was Helen M. Berman of Rutgers University (one of the managing institutions of the RCSB, the other being the San Diego Supercomputer Center at UC San Diego). In 2003, with the formation of the wwPDB, the PDB became an international organization. The founding members are PDBe (Europe), RCSB (US), and PDBj (Japan). The BMRB joined in 2006. Each of the four members of wwPDB can act as deposition, data processing and distribution centers for PDB data. The data processing refers to the fact that wwPDB staff review and annotate each submitted entry. The data are then automatically checked for plausibility (the source code for this validation software has been made available to the public at no charge).
== Contents ==
The PDB database is updated weekly (UTC+0 Wednesday), along with its holdings list. As of 10 January 2023, the PDB comprised:
162,041 structures in the PDB have a structure factor file.
11,242 structures have an NMR restraint file.
5,774 structures in the PDB have a chemical shifts file.
13,388 structures in the PDB have a 3DEM map file deposited in EM Data Bank
Most structures are determined by X-ray diffraction, but about 7% of structures are determined by protein NMR. When using X-ray diffraction, approximations of the coordinates of the atoms of the protein are obtained, whereas using NMR, the distance between pairs of atoms of the protein is estimated. The final conformation of the protein is obtained from NMR by solving a distance geometry problem. After 2013, a growing number of proteins have been determined by cryo-electron microscopy.
For PDB structures determined by X-ray diffraction that have a structure factor file, their electron density map may be viewed. The data of such structures may be viewed on the three PDB websites.
Historically, the number of structures in the PDB has grown at an approximately exponential rate, with 100 registered structures in 1982, 1,000 structures in 1993, 10,000 in 1999, 100,000 in 2014, and 200,000 in January 2023.
== File format ==
The file format initially used by the PDB was called the PDB file format. The original format was restricted by the width of computer punch cards to 80 characters per line. Around 1996, the "macromolecular Crystallographic Information file" format, mmCIF, which is an extension of the CIF format, was phased in. mmCIF became the standard format for the PDB archive in 2014. In 2019, the wwPDB announced that depositions for crystallographic methods would only be accepted in mmCIF format.
An XML version of PDB, called PDBML, was described in 2005.
The structure files can be downloaded in any of these three formats, though an increasing number of structures do not fit the legacy PDB format. Individual files are easily downloaded into graphics packages from Internet URLs:
For PDB format files, use, e.g., http://www.pdb.org/pdb/files/4hhb.pdb.gz or http://pdbe.org/download/4hhb
For PDBML (XML) files, use, e.g., http://www.pdb.org/pdb/files/4hhb.xml.gz or http://pdbe.org/pdbml/4hhb
The "4hhb" is the PDB identifier. Each structure published in PDB receives a four-character alphanumeric identifier, its PDB ID. (This is not a unique identifier for biomolecules, because several structures for the same molecule—in different environments or conformations—may be contained in PDB with different PDB IDs.)
== Viewing the data ==
The structure files may be viewed using one of several free and open source computer programs, including Jmol, Pymol, VMD, Molstar and Rasmol. Other non-free, shareware programs include ICM-Browser, MDL Chime, UCSF Chimera, Swiss-PDB Viewer, StarBiochem (a Java-based interactive molecular viewer with integrated search of protein databank), Sirius, and VisProt3DS (a tool for Protein Visualization in 3D stereoscopic view in anaglyph and other modes), and Discovery Studio. The RCSB PDB website contains an extensive list of both free and commercial molecule visualization programs and web browser plugins.
== See also ==
Crystallographic database
Protein structure
Protein structure prediction
Protein structure database
PDBREPORT lists all anomalies (also errors) in PDB structures
PDBsum—extracts data from other databases about PDB structures
Proteopedia—a collaborative 3D encyclopedia of proteins and other molecules
== References ==
== External links ==
The Worldwide Protein Data Bank (wwPDB)—parent site to regional hosts (below)
RCSB Protein Data Bank (US)
PDBe (Europe)
PDBj (Japan)
BMRB, Biological Magnetic Resonance Data Bank (US)
wwPDB Documentation—documentation on both the PDB and PDBML file formats
Looking at Structures Archived 2011-03-24 at the Wayback Machine—The RCSB's introduction to crystallography
PDBsum Home Page—Extracts data from other databases about PDB structures.
Nucleic Acid Database, NDB—a PDB mirror especially for searching for nucleic acids
Introductory PDB tutorial sponsored by PDB
PDBe: Quick Tour on EBI Train OnLine | Wikipedia/Protein_Data_Bank |
In nuclear physics and particle physics, the strong interaction, also called the strong force or strong nuclear force, is one of the four known fundamental interactions. It confines quarks into protons, neutrons, and other hadron particles, and also binds neutrons and protons to create atomic nuclei, where it is called the nuclear force.
Most of the mass of a proton or neutron is the result of the strong interaction energy; the individual quarks provide only about 1% of the mass of a proton. At the range of 10⁻¹⁵ m (1 femtometer, slightly more than the radius of a nucleon), the strong force is approximately 100 times as strong as electromagnetism, 10⁶ times as strong as the weak interaction, and 10³⁸ times as strong as gravitation.
In the context of atomic nuclei, the force binds protons and neutrons together to form a nucleus and is called the nuclear force (or residual strong force). Because the force is mediated by massive, short-lived mesons on this scale, the residual strong interaction obeys a distance-dependent behavior between nucleons that is quite different from when it is acting to bind quarks within hadrons. There are also differences in the binding energies of the nuclear force with regard to nuclear fusion versus nuclear fission. Nuclear fusion accounts for most energy production in the Sun and other stars. Nuclear fission allows for decay of radioactive elements and isotopes, although it is often mediated by the weak interaction. Artificially, the energy associated with the nuclear force is partially released in nuclear power and nuclear weapons, both in uranium- or plutonium-based fission weapons and in fusion weapons like the hydrogen bomb.
== History ==
Before 1971, physicists were uncertain as to how the atomic nucleus was bound together. It was known that the nucleus was composed of protons and neutrons and that protons possessed positive electric charge, while neutrons were electrically neutral. By the understanding of physics at that time, positive charges would repel one another and the positively charged protons should cause the nucleus to fly apart. However, this was never observed. New physics was needed to explain this phenomenon.
A stronger attractive force was postulated to explain how the atomic nucleus was bound despite the protons' mutual electromagnetic repulsion. This hypothesized force was called the strong force, which was believed to be a fundamental force that acted on the protons and neutrons that make up the nucleus.
In 1964, Murray Gell-Mann, and separately George Zweig, proposed that baryons, which include protons and neutrons, and mesons were composed of elementary particles. Zweig called the elementary particles "aces" while Gell-Mann called them "quarks"; the theory came to be called the quark model. The strong attraction between nucleons was the side-effect of a more fundamental force that bound the quarks together into protons and neutrons. The theory of quantum chromodynamics explains that quarks carry what is called a color charge, although it has no relation to visible color. Quarks with unlike color charge attract one another as a result of the strong interaction, and the particle that mediates this was called the gluon.
== Behavior of the strong interaction ==
The strong interaction is observable at two ranges, and mediated by different force carriers in each one. On a scale less than about 0.8 fm (roughly the radius of a nucleon), the force is carried by gluons and holds quarks together to form protons, neutrons, and other hadrons. On a larger scale, up to about 3 fm, the force is carried by mesons and binds nucleons (protons and neutrons) together to form the nucleus of an atom. In the former context, it is often known as the color force, and is so strong that if hadrons are struck by high-energy particles, they produce jets of massive particles instead of emitting their constituents (quarks and gluons) as freely moving particles. This property of the strong force is called color confinement.
=== Within hadrons ===
The word strong is used since the strong interaction is the "strongest" of the four fundamental forces. At a distance of 10⁻¹⁵ m, its strength is around 100 times that of the electromagnetic force, some 10⁶ times as great as that of the weak force, and about 10³⁸ times that of gravitation.
The strong force is described by quantum chromodynamics (QCD), a part of the Standard Model of particle physics. Mathematically, QCD is a non-abelian gauge theory based on a local (gauge) symmetry group called SU(3).
The force carrier particle of the strong interaction is the gluon, a massless gauge boson. Gluons are thought to interact with quarks and other gluons by way of a type of charge called color charge. Color charge is analogous to electromagnetic charge, but it comes in three types (±red, ±green, and ±blue) rather than one, which results in different rules of behavior. These rules are described by quantum chromodynamics (QCD), the theory of quark–gluon interactions.
Unlike the photon in electromagnetism, which is neutral, the gluon carries a color charge. Quarks and gluons are the only fundamental particles that carry non-vanishing color charge, and hence they participate in strong interactions only with each other. The strong force is the expression of the gluon interaction with other quark and gluon particles.
All quarks and gluons in QCD interact with each other through the strong force. The strength of interaction is parameterized by the strong coupling constant. This strength is modified by the gauge color charge of the particle, a group-theoretical property.
The strong force acts between quarks. Unlike all other forces (electromagnetic, weak, and gravitational), the strong force does not diminish in strength with increasing distance between pairs of quarks. After a limiting distance (about the size of a hadron) has been reached, it remains at a strength of about 10,000 N, no matter how much farther the distance between the quarks grows. As the separation between the quarks grows, the energy added to the pair creates new pairs of matching quarks between the original two; hence it is impossible to isolate quarks. The explanation is that the amount of work done against a force of 10,000 N is enough to create particle–antiparticle pairs within a very short distance. The energy added to the system by pulling two quarks apart would create a pair of new quarks that will pair up with the original ones. In QCD, this phenomenon is called color confinement; as a result, only hadrons, not individual free quarks, can be observed. The failure of all experiments that have searched for free quarks is considered to be evidence of this phenomenon.
The elementary quark and gluon particles involved in a high energy collision are not directly observable. The interaction produces jets of newly created hadrons that are observable. Those hadrons are created, as a manifestation of mass–energy equivalence, when sufficient energy is deposited into a quark–quark bond, as when a quark in one proton is struck by a very fast quark of another impacting proton during a particle accelerator experiment. However, quark–gluon plasmas have been observed.
=== Between hadrons ===
While color confinement implies that the strong force acts without distance-diminishment between pairs of quarks in compact collections of bound quarks (hadrons), at distances approaching or greater than the radius of a proton, a residual force (described below) remains. It manifests as a force between the "colorless" hadrons, and is known as the nuclear force or residual strong force (and historically as the strong nuclear force).
The nuclear force acts between hadrons, known as mesons and baryons. This "residual strong force", acting indirectly, transmits gluons that form part of the virtual π and ρ mesons, which, in turn, transmit the force between nucleons that holds the nucleus (beyond the hydrogen-1 nucleus) together.
The residual strong force is thus a minor residuum of the strong force that binds quarks together into protons and neutrons. This same force is much weaker between neutrons and protons, because it is mostly neutralized within them, in the same way that electromagnetic forces between neutral atoms (van der Waals forces) are much weaker than the electromagnetic forces that hold electrons in association with the nucleus, forming the atoms.
Unlike the strong force, the residual strong force diminishes with distance, and does so rapidly. The decrease is approximately as a negative exponential power of distance, though there is no simple expression known for this; see Yukawa potential. The rapid decrease with distance of the attractive residual force and the less rapid decrease of the repulsive electromagnetic force acting between protons within a nucleus, causes the instability of larger atomic nuclei, such as all those with atomic numbers larger than 82 (the element lead).
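For orientation, the Yukawa potential mentioned above has the standard textbook form (written in natural units, with g the coupling strength and m the mass of the exchanged meson):

```latex
V(r) = -\,g^{2}\,\frac{e^{-mr}}{r}
```

The exponential factor is what produces the rapid fall-off described above, with an effective range on the order of 1/m.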
Although the nuclear force is weaker than the strong interaction itself, it is still highly energetic: transitions produce gamma rays. The mass of a nucleus is significantly different from the summed masses of the individual nucleons. This mass defect is due to the potential energy associated with the nuclear force. Differences between mass defects power nuclear fusion and nuclear fission.
== Unification ==
The so-called Grand Unified Theories (GUT) aim to describe the strong interaction and the electroweak interaction as aspects of a single force, similarly to how the electromagnetic and weak interactions were unified by the Glashow–Weinberg–Salam model into electroweak interaction. The strong interaction has a property called asymptotic freedom, wherein the strength of the strong force diminishes at higher energies (or temperatures). The theorized energy where its strength becomes equal to the electroweak interaction is the grand unification energy. However, no Grand Unified Theory has yet been successfully formulated to describe this process, and Grand Unification remains an unsolved problem in physics.
If GUT is correct, after the Big Bang and during the electroweak epoch of the universe, the electroweak force separated from the strong force. Accordingly, a grand unification epoch is hypothesized to have existed prior to this.
== See also ==
Mathematical formulation of quantum mechanics
Mathematical formulation of the Standard Model
Nuclear binding energy
QCD matter
Quantum field theory
Yukawa interaction
== References ==
== Further reading ==
Christman, J.R. (2001). "MISN-0-280: The Strong Interaction" (PDF).
Ding, Minghui; Roberts, Craig; Schmidt, Sebastian, eds. (2024). Strong Interactions in the Standard Model: Massless Bosons to Compact Stars. MDPI. ISBN 978-3-7258-1502-9.
Halzen, F.; Martin, A.D. (1984). Quarks and Leptons: An Introductory Course in Modern Particle Physics. John Wiley & Sons. ISBN 978-0-471-88741-6.
Kane, G.L. (1987). Modern Elementary Particle Physics. Perseus Books. ISBN 978-0-201-11749-3.
Morris, R. (2003). The Last Sorcerers: The Path from Alchemy to the Periodic Table. Joseph Henry Press. ISBN 978-0-309-50593-2.
== External links == | Wikipedia/Strong_nuclear_force |
In crystallography, direct methods are a family of methods for estimating the phases of the Fourier transform of the scattering density from the corresponding magnitudes. The methods generally exploit constraints or statistical correlations between the phases of different Fourier components that result from the fact that the scattering density must be a positive real number.
In two dimensions, it is relatively easy to solve the phase problem directly, but not so in three dimensions. The key step was taken by Hauptman and Karle, who developed a practical method to employ the Sayre equation, for which they were awarded the 1985 Nobel Prize in Chemistry. The Nobel Prize citation was "for their outstanding achievements in the development of direct methods for the determination of crystal structures."
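To give the flavor of the relations involved, the Sayre equation for an equal-atom structure relates one structure factor to products of others (a standard form; the scaling factor θ depends on the scattering angle):

```latex
F_{\mathbf{h}} = \theta_{\mathbf{h}} \sum_{\mathbf{k}} F_{\mathbf{k}}\,F_{\mathbf{h}-\mathbf{k}}
```

For strong reflections this leads to the triplet phase relation φ_h ≈ φ_k + φ_{h−k}, from which probable phases can be estimated from the measured magnitudes.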
At present, direct methods are the preferred method for phasing crystals of small molecules having up to 1000 atoms in the asymmetric unit. However, they are generally not feasible by themselves for larger molecules such as proteins.
Several software packages implement direct methods.
== See also ==
Direct methods (electron microscopy)
Phase problem
X-ray crystallography
== References == | Wikipedia/Direct_methods_(crystallography) |
Clinical chemistry (also known as chemical pathology, clinical biochemistry or medical biochemistry) is a division in pathology and medical laboratory sciences focusing on quantitative and qualitative tests of important compounds, referred to as analytes or markers, in bodily fluids and tissues using analytical techniques and specialized instruments. This interdisciplinary field includes knowledge from medicine, biology, chemistry, biomedical engineering, informatics, and an applied form of biochemistry (not to be confused with medicinal chemistry, which involves basic research for drug development).
The discipline originated in the late 19th century with the use of simple chemical reaction tests for various components of blood and urine. Many decades later, clinical chemists in most clinical laboratories use automated analyzers. These instruments automate steps ranging from specimen pipetting and labelling to advanced measurement techniques such as spectrometry, chromatography, photometry, and potentiometry. The resulting measurements help identify uncommon analytes and detect changes in the optical and electrical properties and the concentrations of naturally occurring analytes such as enzymes, ions, and electrolytes, all of which are important for diagnosing diseases.
Blood and urine are the most common test specimens clinical chemists or medical laboratory scientists collect for clinical routine tests, with a main focus on serum and plasma in blood. There are now many blood tests and clinical urine tests with extensive diagnostic capabilities. Some clinical tests require clinical chemists to process the specimen before testing. Clinical chemists and medical laboratory scientists serve as the interface between the laboratory side and the clinical practice, providing suggestions to physicians on which test panel to order and interpret any irregularities in test results that reflect on the patient's health status and organ system functionality. This allows healthcare providers to make more accurate evaluation of a patient's health and to diagnose disease, predicting the progression of a disease (prognosis), screening, and monitoring the treatment's efficiency in a timely manner. The type of test required dictates what type of sample is used.
== Common Analytes ==
Common analytes that clinical chemistry tests measure include:
== Panel tests ==
A physician may order many laboratory tests on one specimen, referred to as a test panel, when a single test cannot provide sufficient information to make a swift and accurate diagnosis and treatment plan. A test panel is a group of tests that a clinical chemist runs on one sample to look for changes in many analytes that may be indicative of specific medical concerns or the health status of an organ system. Thus, panel tests provide a more extensive evaluation of a patient's health, have higher predictive values for confirming or disproving a disease, and are quick and cost-effective.
=== Metabolic Panel ===
A Metabolic Panel (MP) is a routine group of blood tests commonly used for health screenings, disease detection, and monitoring vital signs of hospitalized patients with specific medical conditions. An MP analyzes common analytes in the blood to assess the functions of the kidneys and liver, as well as electrolyte and acid–base balances. There are two types of MPs: the Basic Metabolic Panel (BMP) and the Comprehensive Metabolic Panel (CMP).
==== Basic Metabolic Panel ====
BMP is a panel of tests that measures eight analytes in the blood's fluid portion (plasma). The results of the BMP provide valuable information about a patient's kidney function, blood sugar level, electrolyte levels, and the acid-base balance. Abnormal changes in one or more of these analytes can be a sign of serious health issues:
Sodium, potassium, chloride, and carbon dioxide: these electrolytes carry electrical charges and help regulate the body's water balance, the acid–base balance of the blood, and kidney function (see the sketch after this list for a derived quantity computed from these values).
Calcium: this charged electrolyte is essential for proper nerve and muscle function, blood clotting, and bone health. Changes in the calcium level can be signs of bone disease, muscle cramps or spasms, thyroid disease, or other conditions.
Glucose: this measures the blood sugar level, a crucial energy source for the body and brain. High glucose levels can be a sign of diabetes or insulin resistance.
Urea and creatinine: these are waste products that the kidneys filter out of the blood. Urea measurements are helpful in detecting and treating kidney failure and related metabolic disorders, whereas creatinine measurements give information on kidney health, help track renal dialysis treatment, and allow monitoring of hospitalized patients on diuretics.
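As an illustration of how several of these analytes combine into a derived quantity, the sketch below computes the anion gap, a standard screening value calculated from the sodium, chloride, and carbon dioxide (bicarbonate) results of a BMP. The example concentrations are illustrative only, not clinical guidance.

```python
def anion_gap(na: float, cl: float, hco3: float) -> float:
    """Anion gap = [Na+] - ([Cl-] + [HCO3-]), all concentrations in mmol/L."""
    return na - (cl + hco3)

# Example values within typical adult reference intervals (illustrative).
print(anion_gap(na=140.0, cl=104.0, hco3=24.0))   # -> 12.0 mmol/L
```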
==== Comprehensive Metabolic Panel ====
The comprehensive metabolic panel (CMP) comprises 14 tests: the BMP plus total protein, albumin, alkaline phosphatase (ALP), alanine aminotransferase (ALT), aspartate aminotransferase (AST), and bilirubin.
== Specimen Processing ==
For blood tests, clinical chemists must process the specimen to obtain plasma and serum before testing for targeted analytes. This is most easily done by centrifugation, which packs the denser blood cells and platelets to the bottom of the centrifuge tube, leaving the liquid serum fraction resting above the packed cells. This initial step before analysis has recently been included in instruments that operate on the "integrated system" principle. Plasma is obtained by centrifugation before clotting occurs.
== Instruments ==
Most current medical laboratories now have highly automated analyzers to accommodate the high workload typical of a hospital laboratory, and accept samples for up to about 700 different kinds of tests. Even the largest of laboratories rarely do all these tests themselves, and some must be referred to other labs. Tests performed are closely monitored and quality controlled.
== Specialties ==
The large array of tests can be categorised into sub-specialities of:
General or routine chemistry – commonly ordered blood chemistries (e.g., liver and kidney function tests).
Special chemistry – elaborate techniques such as electrophoresis, and manual testing methods.
Clinical endocrinology – the study of hormones, and diagnosis of endocrine disorders.
Toxicology – the study of drugs of abuse and other chemicals.
Therapeutic Drug Monitoring – measurement of therapeutic medication levels to optimize dosage.
Urinalysis – chemical analysis of urine for a wide array of diseases, along with other fluids such as CSF and effusions
Fecal analysis – mostly for detection of gastrointestinal disorders.
== See also ==
Reference ranges for common blood tests
Medical technologist
Clinical Biochemistry (journal)
== Notes and references ==
== Bibliography ==
Burtis, Carl A.; Ashwood, Edward R.; Bruns, David E. (2006). Tietz textbook of clinical chemistry (4th ed.). Saunders. p. 2448. ISBN 978-0-7216-0189-2.
== External links ==
American Association of Clinical Chemistry
Association for Mass Spectrometry: Applications to the Clinical Lab (MSACL) | Wikipedia/Clinical_chemistry |
Crystallographic Information File (CIF) is a standard text file format for representing crystallographic information, promulgated by the International Union of Crystallography (IUCr). CIF was developed by the IUCr Working Party on Crystallographic Information in an effort sponsored by the IUCr Commission on Crystallographic Data and the IUCr Commission on Journals. The file format was initially published by Hall, Allen, and Brown and has since been revised, most recently versions 1.1 and 2.0. Full specifications for the format are available at the IUCr website. Many computer programs for molecular viewing are compatible with this format, including Jmol.
== mmCIF ==
Closely related is mmCIF, macromolecular CIF, which is intended as a successor to the Protein Data Bank (PDB) format. It is now the default format used by the Protein Data Bank.
Also closely related is Crystallographic Information Framework, a broader system of exchange protocols based on data dictionaries and relational rules expressible in different machine-readable manifestations, including, but not restricted to, Crystallographic Information File and XML.
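To illustrate the simple tag–value structure of the text format, here is a minimal hand-rolled sketch that collects scalar data items from a CIF file. Real parsers additionally handle loop_ tables, quoting rules, and multi-line values; the file name below is hypothetical.

```python
def read_cif_scalars(path: str) -> dict:
    """Collect simple `_tag value` pairs from a CIF file, ignoring
    loop_ constructs, comments, and multi-line values."""
    items = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith("_"):
                parts = line.split(None, 1)
                if len(parts) == 2:
                    tag, value = parts
                    items[tag] = value.strip("'\"")
    return items

data = read_cif_scalars("example.cif")     # hypothetical input file
print(data.get("_cell_length_a"))          # a unit-cell edge, if present
```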
== References ==
== External links ==
International Union of Crystallography | Wikipedia/Crystallographic_Information_File |
Polymer physics is the field of physics that studies polymers, their fluctuations, mechanical properties, as well as the kinetics of reactions involving degradation of polymers and polymerisation of monomers.
While it focuses on the perspective of condensed matter physics, polymer physics was originally a branch of statistical physics. Polymer physics and polymer chemistry are also related to the field of polymer science, which is considered to be the applied branch of polymer research.
Polymers are large molecules and thus are too complicated to treat with fully deterministic methods. Yet statistical approaches can yield results and are often pertinent, since large polymers (i.e., polymers with many monomers) are described efficiently in the thermodynamic limit of infinitely many monomers (although the actual size is clearly finite).
Thermal fluctuations continuously affect the shape of polymers in liquid solutions, and modeling their effect requires the use of principles from statistical mechanics and dynamics. As a corollary, temperature strongly affects the physical behavior of polymers in solution, causing phase transitions, melts, and so on.
The statistical approach to polymer physics is based on an analogy between polymer behavior and either Brownian motion or another type of a random walk, the self-avoiding walk. The simplest possible polymer model is presented by the ideal chain, corresponding to a simple random walk. Experimental approaches for characterizing polymers are also common, using polymer characterization methods, such as size exclusion chromatography, viscometry, dynamic light scattering, and Automatic Continuous Online Monitoring of Polymerization Reactions (ACOMP) for determining the chemical, physical, and material properties of polymers. These experimental methods help the mathematical modeling of polymers and give a better understanding of the properties of polymers.
Flory is considered the first scientist to establish the field of polymer physics.
French scientists have contributed since the 1970s (e.g. Pierre-Gilles de Gennes, J. des Cloizeaux).
Doi and Edwards wrote a famous textbook on polymer physics.
The Soviet/Russian school of physics (I. M. Lifshitz, A. Yu. Grosberg, A. R. Khokhlov, V. N. Pokrovskii) has been very active in the development of polymer physics.
== Models ==
Models of polymer chains are split into two types: "ideal" models, and "real" models. Ideal chain models assume that there are no interactions between chain monomers. This assumption is valid for certain polymeric systems, where the positive and negative interactions between the monomer effectively cancel out. Ideal chain models provide a good starting point for the investigation of more complex systems and are better suited for equations with more parameters.
=== Ideal chains ===
The freely-jointed chain is the simplest model of a polymer. In this model, fixed length polymer segments are linearly connected, and all bond and torsion angles are equiprobable. The polymer can therefore be described by a simple random walk and ideal chain. The model can be extended to include extensible segments in order to represent bond stretching.
The freely-rotating chain improves the freely-jointed chain model by taking into account that polymer segments make a fixed bond angle to neighbouring units because of specific chemical bonding. Under this fixed angle, the segments are still free to rotate and all torsion angles are equally likely.
The hindered rotation model assumes that the torsion angle is hindered by a potential energy. This makes the probability of each torsion angle proportional to a Boltzmann factor:
$P(\theta) \propto \exp\left(-U(\theta)/kT\right)$, where $U(\theta)$ is the potential determining the probability of each value of $\theta$.
In the rotational isomeric state model, the allowed torsion angles are determined by the positions of the minima in the rotational potential energy. Bond lengths and bond angles are constant.
The Worm-like chain is a more complex model. It takes the persistence length into account. Polymers are not completely flexible; bending them requires energy. At the length scale below persistence length, the polymer behaves more or less like a rigid rod.
The finite extensible nonlinear elastic model takes into account non-linearity for finite chains. It is used for computational simulations.
=== Real chains ===
Interactions between chain monomers can be modelled as excluded volume. This causes a reduction in the conformational possibilities of the chain, and leads to a self-avoiding random walk. Self-avoiding random walks have different statistics to simple random walks.
== Solvent and temperature effect ==
The statistics of a single polymer chain depends upon the solubility of the polymer in the solvent. For a solvent in which the polymer is very soluble (a "good" solvent), the chain is more expanded, while for a solvent in which the polymer is insoluble or barely soluble (a "bad" solvent), the chain segments stay close to each other. In the limit of a very bad solvent the polymer chain merely collapses to form a hard sphere, while in a good solvent the chain swells in order to maximize the number of polymer-fluid contacts. For this case the radius of gyration is approximated using Flory's mean field approach which yields a scaling for the radius of gyration of:
$R_g \sim N^{\nu}$,
where $R_g$ is the radius of gyration of the polymer, $N$ is the number of bond segments (equal to the degree of polymerization) of the chain and $\nu$ is the Flory exponent. For a good solvent, $\nu \approx 3/5$; for a poor solvent, $\nu = 1/3$. Therefore, a polymer in good solvent has a larger size and behaves like a fractal object; in bad solvent it behaves like a solid sphere. In the so-called $\theta$ solvent, $\nu = 1/2$, which is the result of a simple random walk; the chain behaves as if it were an ideal chain.
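A quick numerical illustration of the three scaling regimes, taking the prefactor to be simply the segment length b (an assumption made only for illustration):

```python
# Compare R_g ~ b * N**nu for the three solvent regimes.
b = 1.0       # segment length, arbitrary units
N = 10_000    # degree of polymerization

for label, nu in [("good solvent", 3 / 5), ("theta solvent", 1 / 2), ("poor solvent", 1 / 3)]:
    print(f"{label:13s}  nu = {nu:.3f}  R_g ~ {b * N ** nu:8.1f}")
```

For the same chain, the good-solvent coil comes out roughly an order of magnitude larger than the collapsed poor-solvent globule, as the exponents imply.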
The quality of a solvent depends also on temperature. For a flexible polymer, low temperature may correspond to poor quality and high temperature makes the same solvent good. At a particular temperature, called the theta (θ) temperature, the solvent behaves as a θ solvent and the chain statistics are those of an ideal chain.
== Excluded volume interaction ==
The ideal chain model assumes that polymer segments can overlap with each other as if the chain were a phantom chain. In reality, two segments cannot occupy the same space at the same time. This interaction between segments is called the excluded volume interaction.
The simplest formulation of excluded volume is the self-avoiding random walk, a random walk that cannot repeat its previous path. A path of this walk of N steps in three dimensions represents a conformation of a polymer with excluded volume interaction. Because of the self-avoiding nature of this model, the number of possible conformations is significantly reduced. The radius of gyration is generally larger than that of the ideal chain.
== Flexibility and reptation ==
Whether a polymer is flexible or not depends on the scale of interest. For example, the persistence length of double-stranded DNA is about 50 nm. Looking at length scale smaller than 50 nm, it behaves more or less like a rigid rod. At length scale much larger than 50 nm, it behaves like a flexible chain.
Reptation is the thermal motion of very long, linear, entangled macromolecules in polymer melts or concentrated polymer solutions. Derived from the word reptile, reptation suggests the movement of entangled polymer chains as being analogous to snakes slithering through one another. Pierre-Gilles de Gennes introduced (and named) the concept of reptation into polymer physics in 1971 to explain the dependence of the mobility of a macromolecule on its length. Reptation is used as a mechanism to explain viscous flow in an amorphous polymer. Sir Sam Edwards and Masao Doi later refined reptation theory. A consistent theory of the thermal motion of polymers was given by Vladimir Pokrovskii. Similar phenomena also occur in proteins.
== Example model (simple random-walk, freely jointed) ==
The study of long chain polymers has been a source of problems within the realms of statistical mechanics since about the 1950s. One of the reasons, however, that scientists were interested in their study is that the equations governing the behavior of a polymer chain are independent of the chain chemistry. What is more, the governing equation turns out to be a random walk, or diffusive walk, in space. Indeed, the Schrödinger equation is itself a diffusion equation in imaginary time, t' = it.
=== Random walks in time ===
The first example of a random walk is one in space, whereby a particle undergoes a random motion due to external forces in its surrounding medium. A typical example would be a pollen grain in a beaker of water. If one could somehow "dye" the path the pollen grain has taken, the path observed is defined as a random walk.
Consider a toy problem of a train moving along a 1D track in the x-direction. Suppose that the train moves either a distance of +b or −b (b is the same for each step), depending on whether a coin lands heads or tails when flipped. Let's start by considering the statistics of the steps the toy train takes (where Si is the i-th step taken):
$\langle S_i \rangle = 0$; due to a priori equal probabilities
$\langle S_i S_j \rangle = b^2 \delta_{ij}.$
The second quantity is known as the correlation function. The delta is the Kronecker delta, which tells us that if the indices i and j are different, then the result is 0, but if i = j then the Kronecker delta is 1, so the correlation function returns a value of b². This makes sense, because if i = j then we are considering the same step. Rather trivially, it can then be shown that the average displacement of the train on the x-axis is 0:
$x = \sum_{i=1}^{N} S_i$
$\langle x \rangle = \left\langle \sum_{i=1}^{N} S_i \right\rangle = \sum_{i=1}^{N} \langle S_i \rangle.$
As stated, $\langle S_i \rangle = 0$, so the sum is still 0.
The same method demonstrated above can also be used to calculate the root-mean-square value of the problem. The result of this calculation is given below:
$x_{\mathrm{rms}} = \sqrt{\langle x^2 \rangle} = b\sqrt{N}.$
From the diffusion equation it can be shown that the distance a diffusing particle moves in a medium is proportional to the root of the time the system has been diffusing for, where the proportionality constant is the root of the diffusion constant. The above relation, although cosmetically different, reveals similar physics, where N is simply the number of steps moved (loosely connected with time) and b is the characteristic step length. As a consequence we can consider diffusion as a random walk process.
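A short simulation of the toy train confirms the b√N scaling of the root-mean-square displacement; the step length and number of walkers below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
b, N, walkers = 1.0, 1000, 20_000

# Each step is +b or -b with equal probability; sum N steps per walker.
steps = rng.choice([-b, b], size=(walkers, N))
x = steps.sum(axis=1)

print(np.sqrt((x ** 2).mean()))   # close to b * sqrt(N)
print(b * np.sqrt(N))             # 31.62...
```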
=== Random walks in space ===
Random walks in space can be thought of as snapshots of the path taken by a random walker in time. One such example is the spatial configuration of long chain polymers.
There are two types of random walk in space: self-avoiding random walks, where the links of the polymer chain interact and do not overlap in space, and pure random walks, where the links of the polymer chain are non-interacting and links are free to lie on top of one another. The former type is most applicable to physical systems, but their solutions are harder to get at from first principles.
By considering a freely jointed, non-interacting polymer chain, the end-to-end vector is
$\mathbf{R} = \sum_{i=1}^{N} \mathbf{r}_i$
where $\mathbf{r}_i$ is the vector position of the i-th link in the chain.
As a result of the central limit theorem, if N ≫ 1 then we expect a Gaussian distribution for the end-to-end vector. We can also make statements of the statistics of the links themselves;
$\langle \mathbf{r}_i \rangle = 0$; by the isotropy of space
$\langle \mathbf{r}_i \cdot \mathbf{r}_j \rangle = 3b^2 \delta_{ij}$; all the links in the chain are uncorrelated with one another
Using the statistics of the individual links, it is easily shown that
$\langle \mathbf{R} \rangle = 0$
$\langle \mathbf{R} \cdot \mathbf{R} \rangle = 3Nb^2.$
Notice this last result is the same as that found for random walks in time.
Assuming, as stated, that the distribution of end-to-end vectors for a very large number of identical polymer chains is Gaussian, the probability distribution has the following form:
$P = \frac{1}{\left(\frac{2\pi Nb^{2}}{3}\right)^{3/2}} \exp\left(\frac{-3\mathbf{R}\cdot\mathbf{R}}{2Nb^{2}}\right).$
What use is this to us? Recall that, according to the principle of equally likely a priori probabilities, the number of microstates, Ω, at some physical value is directly proportional to the probability distribution at that physical value, viz.:
$\Omega(\mathbf{R}) = c\,P(\mathbf{R})$
where c is an arbitrary proportionality constant. Given our distribution function, there is a maximum corresponding to $\mathbf{R} = 0$. Physically this amounts to there being more microstates with an end-to-end vector of 0 than any other microstate. Now, by considering
$S(\mathbf{R}) = k_{B} \ln \Omega(\mathbf{R})$
$\Delta S(\mathbf{R}) = S(\mathbf{R}) - S(0)$
$\Delta F = -T\,\Delta S(\mathbf{R})$
where F is the Helmholtz free energy, and it can be shown that
$\Delta F = k_{B}T\,\frac{3R^{2}}{2Nb^{2}} = \frac{1}{2}KR^{2}\quad ; \quad K = \frac{3k_{B}T}{Nb^{2}}.$
which has the same form as the potential energy of a spring, obeying Hooke's law.
This result is known as the entropic spring result and amounts to saying that upon stretching a polymer chain you are doing work on the system to drag it away from its (preferred) equilibrium state. An example of this is a common elastic band, composed of long chain (rubber) polymers. By stretching the elastic band you are doing work on the system and the band behaves like a conventional spring, except that unlike the case with a metal spring, all of the work done appears immediately as thermal energy, much as in the thermodynamically similar case of compressing an ideal gas in a piston.
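To get a feel for the magnitudes involved, the effective spring constant $K = 3k_BT/(Nb^2)$ can be evaluated numerically; the chain parameters below (N and b) are illustrative assumptions, not measured values for any particular polymer.

```python
kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # room temperature, K
N = 1000            # number of chain links (assumed for illustration)
b = 0.5e-9          # effective link length, m (assumed for illustration)

K = 3 * kB * T / (N * b**2)   # entropic spring constant from Delta F = (1/2) K R^2
print(f"K = {K:.2e} N/m")     # ~5e-5 N/m: a single chain is a very soft spring
```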
It might at first be astonishing that the work done in stretching the polymer chain can be related entirely to the change in entropy of the system as a result of the stretching. However, this is typical of systems that do not store any energy as potential energy, such as ideal gases. That such systems are entirely driven by entropy changes at a given temperature can be seen whenever they are allowed to do work on the surroundings (such as when an elastic band does work on the environment by contracting, or an ideal gas does work on the environment by expanding). Because the free energy change in such cases derives entirely from entropy change rather than internal (potential) energy conversion, in both cases the work done can be drawn entirely from thermal energy in the polymer, with 100% efficiency of conversion of thermal energy to work. In both the ideal gas and the polymer, this is made possible by a material entropy increase from contraction that makes up for the loss of entropy from absorption of the thermal energy, and cooling of the material.
== See also ==
File dynamics
Important publications in polymer physics.
Polymer characterization
Protein dynamics
Reptation
Soft matter
Flory–Huggins solution theory
Time–temperature superposition
== References ==
== External links ==
Plastic & polymer formulations | Wikipedia/Polymer_physics |
Neutron diffraction or elastic neutron scattering is the application of neutron scattering to the determination of the atomic and/or magnetic structure of a material. A sample to be examined is placed in a beam of thermal or cold neutrons to obtain a diffraction pattern that provides information about the structure of the material. The technique is similar to X-ray diffraction, but due to their different scattering properties, neutrons and X-rays provide complementary information: X-rays are suited for superficial analysis, strong X-rays from synchrotron radiation are suited for shallow depths or thin specimens, while neutrons, having high penetration depth, are suited for bulk samples.
== History ==
=== Discovery of the neutron ===
In 1921, American chemist and physicist William D. Harkins introduced the term "neutron" while studying atomic structure and nuclear reactions. He proposed the existence of a neutral particle within the atomic nucleus, though there was no experimental evidence for it at the time. In 1932, British physicist James Chadwick provided experimental proof of the neutron's existence. His discovery confirmed the presence of this neutral subatomic particle, earning him the Nobel Prize in Physics in 1935. Chadwick's research was influenced by earlier work from Irène and Frédéric Joliot-Curie, who had detected unexplained neutral radiation but had not recognized it as a distinct particle. Neutrons are subatomic particles that exist in the nucleus of the atom; they have a slightly higher mass than protons but no electrical charge.
In the 1930s, Enrico Fermi and colleagues made theoretical contributions that established the foundation of neutron scattering. Fermi developed a framework for understanding how neutrons interact with atomic nuclei.
=== Early diffraction work ===
Diffraction was first observed in 1936 by two groups, von Halban and Preiswerk and by Mitchell and Powers. In 1944, Ernest O. Wollan, with a background in X-ray scattering from his PhD work under Arthur Compton, recognized the potential for applying thermal neutrons from the newly operational X-10 nuclear reactor to crystallography. Joined by Clifford G. Shull they developed neutron diffraction throughout the 1940s.
Neutron diffraction experiments were carried out in 1945 by Ernest O. Wollan using the Graphite Reactor at Oak Ridge. He was joined shortly thereafter (June 1946) by Clifford Shull, and together they established the basic principles of the technique, and applied it successfully to many different materials, addressing problems like the structure of ice and the microscopic arrangements of magnetic moments in materials. For this achievement, Shull was awarded one half of the 1994 Nobel Prize in Physics. (Wollan died in 1984). (The other half of the 1994 Nobel Prize for Physics went to Bert Brockhouse for development of the inelastic scattering technique at the Chalk River facility of AECL. This also involved the invention of the triple axis spectrometer).
=== 1950–60s ===
Neutron sources such as reactors and spallation sources were developed, providing high-intensity neutron beams and enabling advanced scattering experiments. Notably, the High Flux Isotope Reactor (HFIR) at Oak Ridge and the Institut Laue–Langevin (ILL) in Grenoble, France, emerged as key institutions for neutron scattering studies.
=== 1970–1980s ===
This period saw major advancements in neutron scattering, with new techniques developed to explore different aspects of material structure and behaviour.
Small angle neutron scattering (SANS): Used to investigate large-scale structural features in materials. The works of Glatter and Kratky also helped in the advancements of this method, though it was primarily developed for X-rays.
Inelastic neutron scattering (INS): Provides insights into the dynamic process at the microscopic level. Majorly used to examine atomic and molecular motions.
=== 1990-present ===
Recent advancements focus on improved sources, using sophisticated detectors and enhanced computational techniques. Spallation sources have been developed at SNS (Spallation Neutron Source) in the U.S. and ISIS Neutron and Muon Source in the U.K., which can generate pulsed neutron beams for time-of-flight experiments. Neutron imaging and reflectometry were also developed, which are powerful tools to analyse surfaces, interfaces and thin film structures, thus providing valuable insights into the material properties.
== Comparison of neutron scattering, XRD and electron scattering ==
== Principle ==
=== Processes ===
Neutrons are produced through three major processes: fission, spallation, and low-energy nuclear reactions.
==== Fission ====
In research reactors, fission takes place when a fissile nucleus, such as uranium-235 (235U), absorbs a neutron and subsequently splits into two smaller fragments. This process releases energy along with additional neutrons. On average, each fission event produces about 2.5 neutrons. While one neutron is required to maintain the chain reaction, the surplus neutrons can be utilized for various experimental applications.
==== Spallation ====
In spallation sources, high-energy protons (on the order of 1 GeV) bombard a heavy metal target (e.g., uranium (U), tungsten (W), tantalum (Ta), lead (Pb), or mercury (Hg)). This interaction causes the nuclei to eject neutrons. Proton interactions result in around ten to thirty neutrons per event, of which the bulk are known as "evaporation neutrons" (~2 MeV), while a minority are identified as "cascade neutrons" with energies reaching up to the GeV range. Although spallation is a very efficient technique of neutron production, it generates high-energy particles and therefore requires shielding for safety.
==== Low energy nuclear reactions ====
Low-energy nuclear reactions are the basis of neutron production in accelerator-driven sources. The target materials are selected based on the energy levels: lighter metals such as lithium (Li) and beryllium (Be) can be used to achieve their maximum possible reaction rate under 30 MeV, while heavier elements such as tungsten (W) and carbon (C) provide better performance above 312 MeV. These Compact Accelerator-driven Neutron Sources (CANS) have matured and are now approaching the performance of fission and spallation sources.
=== De-Broglie relation ===
Neutron scattering relies on the wave-particle dual nature of neutrons. The De Broglie relation links the wavelength (λ) of a neutron to its momentum:
$\lambda = h/(mv)$
where h is the Planck constant, m is the mass of the neutron, and v is its velocity, so that p = mv is the neutron's momentum.
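As a quick numerical check of the relation, a thermal neutron at the conventional reference speed of 2200 m/s comes out near 1.8 Å (CODATA constants):

```python
h = 6.62607015e-34        # Planck constant, J*s
m_n = 1.67492749804e-27   # neutron mass, kg

def neutron_wavelength(v: float) -> float:
    """De Broglie wavelength lambda = h / (m * v), in metres."""
    return h / (m_n * v)

lam = neutron_wavelength(2200.0)
print(f"{lam * 1e10:.2f} angstrom")   # ~1.80 angstrom
```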
== Scattering ==
Neutron scattering is used to detect the distance between atoms and study the dynamics of materials. It involves two major principles: elastic scattering and inelastic scattering.
Elastic scattering provides insight into the structural properties of materials by looking at the angles at which neutrons are scattered. The resulting pattern of the scattering provides information regarding the atomic structure of crystals, liquids and amorphous materials.
Inelastic scattering focuses on material dynamics through the study of neutron energy and momentum changes during interactions. It is key to study phonons, magnons, and other excitations of solid materials.
== Neutron matter interaction ==
X-rays interact with matter electrostatically, through the electron cloud of atoms; this limits their application since they are scattered strongly by electrons. While electrically neutral, neutrons primarily interact with matter through the short-range strong force with atomic nuclei. Nuclei are far smaller than the electron cloud, meaning most materials are transparent to neutrons and allow deep penetration. The interaction between a neutron and a nucleus is described by the Fermi pseudopotential, which treats the nucleus effectively as a point-like scatterer. While most elements have a low tendency to absorb neutrons, certain ones such as cadmium (Cd), gadolinium (Gd), helium (3He), lithium (6Li), and boron (10B) exhibit strong neutron absorption due to nuclear resonance effects. The likelihood of absorption increases with neutron wavelength (σa ∝ λ), meaning slower neutrons are absorbed more readily than faster ones.
== Instrumental and sample requirements ==
The technique requires a source of neutrons. Neutrons are usually produced in a nuclear reactor or spallation source. At a research reactor, other components are needed, including a crystal monochromator (in the case of thermal neutrons), as well as filters to select the desired neutron wavelength. Some parts of the setup may also be movable. For the long-wavelength neutrons, crystals cannot be used and gratings are used instead as diffractive optical components. At a spallation source, the time of flight technique is used to sort the energies of the incident neutrons (higher energy neutrons are faster), so no monochromator is needed, but rather a series of aperture elements synchronized to filter neutron pulses with the desired wavelength.
The technique is most commonly performed as powder diffraction, which only requires a polycrystalline powder. Single crystal work is also possible, but the crystals must be much larger than those that are used in single-crystal X-ray crystallography. It is common to use crystals that are about 1 mm³.
The technique also requires a device that can detect the neutrons after they have been scattered.
Summarizing, the main disadvantage of neutron diffraction is the requirement for a nuclear reactor. For single crystal work, the technique requires relatively large crystals, which are usually challenging to grow. The advantages of the technique are many: sensitivity to light atoms, ability to distinguish isotopes, absence of radiation damage, and a penetration depth of several cm.
== Nuclear scattering ==
Like all quantum particles, neutrons can exhibit wave phenomena typically associated with light or sound. Diffraction is one of these phenomena; it occurs when waves encounter obstacles whose size is comparable with the wavelength. If the wavelength of a quantum particle is short enough, atoms or their nuclei can serve as diffraction obstacles. When a beam of neutrons emanating from a reactor is slowed and selected properly by their speed, their wavelength lies near one angstrom (0.1 nm), the typical separation between atoms in a solid material. Such a beam can then be used to perform a diffraction experiment. Impinging on a crystalline sample, it will scatter under a limited number of well-defined angles, according to the same Bragg law that describes X-ray diffraction.
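The Bragg condition referred to here takes its standard form, with d the spacing of the diffracting lattice planes, θ the glancing angle between the beam and the planes, and n a positive integer (the diffraction order):

```latex
n\lambda = 2d\sin\theta
```

For a neutron wavelength near 1 Å and plane spacings of a few Å, the resulting angles are conveniently large, which is what makes crystallographic work with slow neutrons practical.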
Neutrons and X-rays interact with matter differently. X-rays interact primarily with the electron cloud surrounding each atom. The contribution to the diffracted X-ray intensity is therefore larger for atoms with larger atomic number (Z). On the other hand, neutrons interact directly with the nucleus of the atom, and the contribution to the diffracted intensity depends on each isotope; for example, regular hydrogen and deuterium contribute differently. It is also often the case that light (low Z) atoms contribute strongly to the diffracted intensity, even in the presence of large-Z atoms. The scattering length varies from isotope to isotope rather than linearly with the atomic number. An element like vanadium strongly scatters X-rays, but its nucleus hardly scatters neutrons, which is why it is often used as a container material. Non-magnetic neutron diffraction is directly sensitive to the positions of the nuclei of the atoms.
The nuclei of atoms, from which neutrons scatter, are tiny compared with the neutron wavelength and act as point scatterers. Consequently, no atomic form factor is needed to describe the shape of the electron cloud of the atom, and the scattering power of an atom does not fall off with the scattering angle as it does for X-rays. Diffractograms therefore can show strong, well-defined diffraction peaks even at high angles, particularly if the experiment is done at low temperatures. Many neutron sources are equipped with liquid helium cooling systems that allow data collection at temperatures down to 4.2 K. The superb high-angle (i.e. high-resolution) information means that the atomic positions in the structure can be determined with high precision. On the other hand, Fourier maps (and to a lesser extent difference Fourier maps) derived from neutron data suffer from series termination errors, sometimes so much that the results are meaningless.
== Magnetic scattering ==
Although neutrons are uncharged, they carry a magnetic moment, and therefore interact with magnetic moments, including those arising from the electron cloud around an atom. Neutron diffraction can therefore reveal the microscopic magnetic structure of a material.
Magnetic scattering does require an atomic form factor as it is caused by the much larger electron cloud around the tiny nucleus. The intensity of the magnetic contribution to the diffraction peaks will therefore decrease towards higher angles.
== Uses ==
Neutron diffraction can be used to determine the static structure factor of gases, liquids or amorphous solids. Most experiments, however, aim at the structure of crystalline solids, making neutron diffraction an important tool of crystallography.
Neutron diffraction is closely related to X-ray powder diffraction. In fact, the single crystal version of the technique is less commonly used because currently available neutron sources require relatively large samples, and large single crystals are hard or impossible to come by for most materials. Future developments, however, may well change this picture. Because the data typically take the form of a 1D powder diffractogram, they are usually processed using Rietveld refinement. In fact, the latter method found its origin in neutron diffraction (at Petten in the Netherlands) and was later extended for use in X-ray diffraction.
One practical application of elastic neutron scattering/diffraction is that the lattice constant of metals and other crystalline materials can be very accurately measured. Together with an accurately aligned micropositioner, a map of the lattice constant through the metal can be derived. This can easily be converted to the stress field experienced by the material. This has been used to analyse stresses in aerospace and automotive components, to give just two examples. The high penetration depth permits measuring residual stresses in bulk components such as crankshafts, pistons, rails, and gears. This technique has led to the development of dedicated stress diffractometers, such as the ENGIN-X instrument at the ISIS neutron source.
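The conversion from a measured lattice constant to strain, and then to stress, can be sketched as follows. This is a simple uniaxial Hooke's-law estimate with a hypothetical measured value; real residual-stress analysis uses the full elasticity tensor, diffraction elastic constants and a measured stress-free reference.

```python
# Sketch: converting a measured lattice constant into strain and, in a
# uniaxial Hooke's-law approximation, stress.

def strain(a_measured, a_reference):
    """Elastic lattice strain relative to the stress-free lattice constant."""
    return (a_measured - a_reference) / a_reference

E_STEEL = 210e9  # Pa, illustrative Young's modulus for steel

a0 = 2.8665  # angstrom, stress-free alpha-iron lattice constant (indicative)
a = 2.8671   # angstrom, hypothetical measured value at one map point

eps = strain(a, a0)
print(f"strain = {eps:.2e}, approx stress = {eps * E_STEEL / 1e6:.0f} MPa")
```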
Neutron diffraction can also be employed to give insight into the 3D structure of any material that diffracts.
Another use is the determination of the solvation number of ion pairs in electrolyte solutions.
The magnetic scattering effect has been used since the establishment of the neutron diffraction technique to quantify magnetic moments in materials, and study the magnetic dipole orientation and structure. One of the earliest applications of neutron diffraction was in the study of magnetic dipole orientations in antiferromagnetic transition metal oxides such as manganese, iron, nickel, and cobalt oxides. These experiments, first performed by Clifford Shull, were the first to show the existence of the antiferromagnetic arrangement of magnetic dipoles in a material structure. Now, neutron diffraction continues to be used to characterize newly developed magnetic materials.
=== Hydrogen, null-scattering and contrast variation ===
Neutron diffraction can be used to establish the structure of low atomic number materials like proteins and surfactants much more easily with lower flux than at a synchrotron radiation source. This is because some low atomic number materials have a higher cross section for neutron interaction than higher atomic weight materials.
One major advantage of neutron diffraction over X-ray diffraction is that the latter is rather insensitive to the presence of hydrogen (H) in a structure, whereas the nuclei 1H and 2H (i.e. Deuterium, D) are strong scatterers for neutrons. The greater scattering power of protons and deuterons means that the position of hydrogen in a crystal and its thermal motions can be determined with greater precision by neutron diffraction. The structures of metal hydride complexes, e.g., Mg2FeH6 have been assessed by neutron diffraction.
The neutron scattering lengths bH = −3.7406(11) fm and bD = 6.671(4) fm, for H and D respectively, have opposite signs, which allows the technique to distinguish them. In fact, there is a particular isotope ratio for which the contribution of the element would cancel; this is called null-scattering.
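With the scattering lengths quoted above, the null-scattering composition follows from solving x·bH + (1 − x)·bD = 0 for the hydrogen fraction x, as in this sketch.

```python
# Sketch: the "null-scattering" H/D isotope ratio, at which the
# average coherent scattering length of the hydrogen sites vanishes:
#     x * b_H + (1 - x) * b_D = 0

b_H = -3.7406  # fm, coherent scattering length of 1H
b_D = 6.671    # fm, coherent scattering length of 2H (D)

x_H = b_D / (b_D - b_H)  # fraction of 1H
print(f"null scattering at ~{x_H:.1%} H, {1 - x_H:.1%} D")  # ~64.1% H
```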
Working with a relatively high concentration of H in a sample is nevertheless undesirable. The scattering intensity from H nuclei has a large inelastic component, which creates a large continuous background that is more or less independent of scattering angle. The elastic pattern typically consists of sharp Bragg reflections if the sample is crystalline, and these tend to drown in the inelastic background. This is even more serious when the technique is used for the study of liquid structure. Nevertheless, by preparing samples with different isotope ratios, it is possible to vary the scattering contrast enough to highlight one element in an otherwise complicated structure. The variation of other elements is possible but usually rather expensive. Hydrogen is inexpensive and particularly interesting, because it plays an exceptionally large role in biochemical structures and is difficult to study structurally in other ways.
== Applications ==
=== Study of hydrogen storage materials ===
Since neutron diffraction is particularly sensitive to lighter elements like hydrogen, it can be used for their detection. It can play a role in determining the crystal structure and hydrogen binding sites within metal hydrides, a class of materials of interest for hydrogen storage applications. The arrangement of hydrogen atoms in the lattice influences the storage capacity and kinetics of the material.
=== Magnetic structure determination ===
Neutron diffraction is also a useful technique for determining magnetic structures in materials, as neutrons interact with magnetic moments. It was used, for example, to determine the antiferromagnetic structure of manganese oxide (MnO). Neutron diffraction studies can measure the magnitude of magnetic moments, and orientation studies show how neutron diffraction can detect the precise alignment of magnetic moments in materials, something that is much more challenging with X-rays.
=== Phase transition in ferroelectrics ===
Neutron diffraction has been widely employed to understand phase transitions in materials including ferroelectrics, whose crystal structures change with temperature or pressure. It has been used, for example, to study the ferroelectric phase transition in lead titanate (PbTiO3) and to analyse the atomic displacements and corresponding lattice distortions involved.
=== Residual stress analysis in engineering materials ===
Neutron diffraction can be used as a technique for the nondestructive assessment of residual stresses in engineering materials, including metals and alloys.
=== Lithium-ion batteries ===
Neutron diffraction is especially useful for the investigation of lithium-ion battery materials, because lithium atoms scatter X-rays only weakly and are therefore nearly invisible to X-ray diffraction. It can further be used to investigate the structural evolution of lithium-ion battery cathode materials during charge and discharge cycles.
=== High temperature superconductors ===
Neutron diffraction has played an important role in revealing the crystal and magnetic structures of high-temperature superconductors. Neutron diffraction studies of magnetic order in the high-temperature superconductor YBa2Cu3O6+x, together with work by research teams across the globe, have revealed the relationship between magnetic ordering and superconductivity, delivering crucial insights into the mechanism of high-temperature superconductivity.
=== Mechanical behaviour of alloys ===
Advancements in neutron diffraction have facilitated in situ investigations into the mechanical deformation of alloys under load, permitting observation of the mechanisms of deformation. The deformation behavior of titanium alloys under mechanical loads, for example, can be investigated using in situ neutron diffraction. This technique allows real-time monitoring of lattice strains and phase transformations throughout deformation.
=== Neutron diffraction for ion channels ===
Neutron diffraction can be used to study ion channels, highlighting how neutrons interact with biological structures to reveal atomic details. Neutron diffraction is particularly sensitive to light elements like hydrogen, making it ideal for mapping water molecules, ion positions, and hydrogen bonds within the channel. By analysing neutron scattering patterns, researchers can determine ion binding sites, hydration structures, and conformational changes essential for ion transport and selectivity.
== Current developments in neutron diffraction ==
=== Advancements in Neutron Diffraction Research ===
Neutron diffraction has made significant progress, particularly at Oak Ridge National Laboratory (ORNL), which operates a suite of 12 diffractometers—seven at the Spallation Neutron Source (SNS) and five at the High Flux Isotope Reactor (HFIR). These instruments are designed for different applications and are grouped into three categories: powder diffraction, single crystal diffraction, and advanced diffraction techniques.
To further enhance neutron diffraction research, ORNL is undertaking several key projects:
Expansion of the SNS First Target Station: New beamlines equipped with state-of-the-art instruments are being installed to broaden the scope of scientific investigations.
Proton Power Upgrade: This initiative aims to double the proton power used for neutron production, which will enhance research efficiency, allow for the study of smaller and more complex samples, and support the eventual development of a next-generation neutron source at SNS.
Development of the SNS Second Target Station: A new facility is being constructed to house 22 beamlines, making it a leading source for cold neutron research, crucial for studying soft matter, biological systems, and quantum materials.
Enhancements at HFIR: Planned upgrades include optimizing the cold neutron guide hall to improve experimental capabilities, expanding isotope production (including plutonium-238 for space exploration), and enhancing the performance of existing instruments.
These advancements are set to significantly improve neutron diffraction techniques, allowing for more precise and detailed analysis of material structures. By expanding research capabilities and increasing neutron production efficiency, these developments will support a wide range of scientific fields, from materials science to energy research and quantum physics.
=== Modern trends in neutron scattering information technology ===
Neutron diffraction technology is evolving rapidly, with a focus on improving beam intensity and instrument efficiency. Modern instruments are designed to produce smaller, more intense beams, enabling high-precision studies of smaller samples, which is particularly beneficial for new material research. Advanced detectors, such as boron-based alternatives to helium-3, are being developed to address material shortages, while improved neutron spin manipulation enhances the study of magnetic and structural properties. Computational advancements, including simulations and virtual instruments, are optimizing neutron sources, streamlining experimental design, and integrating machine learning for data analysis. Multiplexing and event-based acquisition systems are enhancing data collection by capturing multiple datasets simultaneously. Additionally, next-generation spallation sources like the European Spallation Source (ESS) and Oak Ridge's Second Target Station (STS) are increasing neutron production efficiency. Lastly, the rise of remote-controlled experiments and automation is improving accessibility and precision in neutron diffraction research.
=== Current trends in structural biology ===
Modern advancements in neutron diffraction are enhancing data precision, broadening structural research applications, and refining experimental methodologies. A key focus is the improved visualization of hydrogen atoms in biological macromolecules, crucial for studying enzymatic activity and hydrogen bonding. The expansion of specialized diffractometers has increased accessibility in structural biology, with techniques like monochromatic, quasi-Laue, and time-of-flight methods being optimized for efficiency. Innovations in sample preparation, particularly protein deuteration, are minimizing background noise and reducing the need for large crystals. Additionally, computational tools, including quantum chemical modeling, are aiding in the interpretation of complex molecular interactions. Improved neutron sources, such as spallation facilities, along with advanced detectors, are further boosting measurement accuracy and structural resolution. These developments are solidifying neutron diffraction as a critical technique for exploring the molecular architecture of biological systems.
== See also ==
Crystallography
Crystallographic database
Electron diffraction
Grazing incidence diffraction
Inelastic neutron scattering
X-ray diffraction computed tomography
== References ==
== Further reading ==
Lovesey, S. W. (1984). Theory of Neutron Scattering from Condensed Matter; Volume 1: Neutron Scattering. Oxford: Clarendon Press. ISBN 0-19-852015-8.
Lovesey, S. W. (1984). Theory of Neutron Scattering from Condensed Matter; Volume 2: Condensed Matter. Oxford: Clarendon Press. ISBN 0-19-852017-4.
Squires, G.L. (1996). Introduction to the Theory of Thermal Neutron Scattering (2nd ed.). Mineola, New York: Dover Publications Inc. ISBN 0-486-69447-X.
Young, R.A., ed. (1993). The Rietveld Method. Oxford: Oxford University Press & International Union of Crystallography. ISBN 0-19-855577-6.
== External links ==
National Institute of Standards and Technology Center for Neutron Research
From Bragg's law to neutron diffraction
Integrated Infrastructure Initiative for Neutron Scattering and Muon Spectroscopy (NMI3) - a European consortium of 18 partner organisations from 12 countries, including all major facilities in the fields of neutron scattering and muon spectroscopy
Frank Laboratory of Neutron Physics of Joint Institute for Nuclear Research (JINR)
IAEA neutron beam instrument database
Chemical thermodynamics is the study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics. Chemical thermodynamics involves not only laboratory measurements of various thermodynamic properties, but also the application of mathematical methods to the study of chemical questions and the spontaneity of processes.
The structure of chemical thermodynamics is based on the first two laws of thermodynamics. Starting from the first and second laws of thermodynamics, four equations called the "fundamental equations of Gibbs" can be derived. From these four, a multitude of equations, relating the thermodynamic properties of the thermodynamic system can be derived using relatively simple mathematics. This outlines the mathematical framework of chemical thermodynamics.
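For reference, one standard way of writing these four fundamental equations is sketched below, assuming only pressure–volume work; U, H, S and G are the state functions discussed later in this article, F is the Helmholtz energy, and μi and Ni are the chemical potential and amount of chemical species i as defined under Chemical reactions.

```latex
% Sketch: the four fundamental equations of Gibbs for a system with
% only PV work; F denotes the Helmholtz energy.
\begin{align}
  \mathrm{d}U &= T\,\mathrm{d}S - P\,\mathrm{d}V + \sum_i \mu_i\,\mathrm{d}N_i\\
  \mathrm{d}H &= T\,\mathrm{d}S + V\,\mathrm{d}P + \sum_i \mu_i\,\mathrm{d}N_i\\
  \mathrm{d}F &= -S\,\mathrm{d}T - P\,\mathrm{d}V + \sum_i \mu_i\,\mathrm{d}N_i\\
  \mathrm{d}G &= -S\,\mathrm{d}T + V\,\mathrm{d}P + \sum_i \mu_i\,\mathrm{d}N_i
\end{align}
```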
== History ==
In 1865, the German physicist Rudolf Clausius, in his Mechanical Theory of Heat, suggested that the principles of thermochemistry, e.g. the heat evolved in combustion reactions, could be applied to the principles of thermodynamics. Building on the work of Clausius, between the years 1873-76 the American mathematical physicist Willard Gibbs published a series of three papers, the most famous one being the paper On the Equilibrium of Heterogeneous Substances. In these papers, Gibbs showed how the first two laws of thermodynamics could be measured graphically and mathematically to determine both the thermodynamic equilibrium of chemical reactions as well as their tendencies to occur or proceed. Gibbs’ collection of papers provided the first unified body of thermodynamic theorems from the principles developed by others, such as Clausius and Sadi Carnot.
During the early 20th century, two major publications successfully applied the principles developed by Gibbs to chemical processes and thus established the foundation of the science of chemical thermodynamics. The first was the 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall. This book was responsible for supplanting the chemical affinity with the term free energy in the English-speaking world. The second was the 1933 book Modern Thermodynamics by the methods of Willard Gibbs written by E. A. Guggenheim. In this manner, Lewis, Randall, and Guggenheim are considered as the founders of modern chemical thermodynamics because of the major contribution of these two books in unifying the application of thermodynamics to chemistry.
== Overview ==
The primary objective of chemical thermodynamics is the establishment of a criterion for determination of the feasibility or spontaneity of a given transformation. In this manner, chemical thermodynamics is typically used to predict the energy exchanges that occur in the following processes:
Chemical reactions
Phase changes
The formation of solutions
The following state functions are of primary concern in chemical thermodynamics:
Internal energy (U)
Enthalpy (H)
Entropy (S)
Gibbs free energy (G)
Most identities in chemical thermodynamics arise from application of the first and second laws of thermodynamics, particularly the law of conservation of energy, to these state functions.
The three laws of thermodynamics (global, unspecific forms):
The energy of the universe is constant.
In any spontaneous process, there is always an increase in entropy of the universe.
The entropy of a perfect crystal (well ordered) at 0 Kelvin is zero.
== Chemical energy ==
Chemical energy is the energy that can be released when chemical substances undergo a transformation through a chemical reaction. Breaking and making chemical bonds involves energy release or uptake, often as heat that may be either absorbed by or evolved from the chemical system.
Energy released (or absorbed) because of a reaction between chemical substances ("reactants") is equal to the difference between the energy content of the products and the reactants. This change in energy is called the change in internal energy of a chemical system. It can be calculated from
$\Delta_{\rm f}U^{\rm o}_{\mathrm{reactants}}$, the internal energy of formation of the reactant molecules, related to the bond energies of the molecules under consideration, and $\Delta_{\rm f}U^{\rm o}_{\mathrm{products}}$, the internal energy of formation of the product molecules. The change in internal energy is equal to the heat change if it is measured under conditions of constant volume, as in a closed rigid container such as a bomb calorimeter. However, at constant pressure, as in reactions in vessels open to the atmosphere, the measured heat is usually not equal to the internal energy change, because pressure-volume work also releases or absorbs energy. (The heat change at constant pressure is called the enthalpy change; in this case the widely tabulated enthalpies of formation are used.)
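A minimal constant-pressure example of this bookkeeping, using rounded, widely tabulated standard enthalpies of formation for the combustion of methane (exact values differ slightly between tables):

```python
# Sketch: standard reaction enthalpy from enthalpies of formation,
#     dH_rxn = sum(n * dHf, products) - sum(n * dHf, reactants)
# for CH4 + 2 O2 -> CO2 + 2 H2O(l). Values in kJ/mol are rounded
# standard-table numbers.

dHf = {"CH4": -74.8, "O2": 0.0, "CO2": -393.5, "H2O(l)": -285.8}

reactants = {"CH4": 1, "O2": 2}
products = {"CO2": 1, "H2O(l)": 2}

dH_rxn = (sum(n * dHf[s] for s, n in products.items())
          - sum(n * dHf[s] for s, n in reactants.items()))
print(f"dH_rxn = {dH_rxn:.1f} kJ/mol")  # about -890 kJ/mol (exothermic)
```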
A related term is the heat of combustion, which is the chemical energy released due to a combustion reaction and of interest in the study of fuels. Food is similar to hydrocarbon and carbohydrate fuels, and when it is oxidized, its energy release is similar (though assessed differently than for a hydrocarbon fuel — see food energy).
In chemical thermodynamics, the term used for the chemical potential energy is chemical potential, and sometimes the Gibbs-Duhem equation is used.
== Chemical reactions ==
In most cases of interest in chemical thermodynamics there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which create entropy in the universe unless they are at equilibrium or are maintained at a "running equilibrium" through "quasi-static" changes by being coupled to constraining devices, such as pistons or electrodes, to deliver and receive external work. Even for homogeneous "bulk" systems, the free-energy functions depend on the composition, as do all the extensive thermodynamic potentials, including the internal energy. If the quantities { Ni }, the number of chemical species, are omitted from the formulae, it is impossible to describe compositional changes.
=== Gibbs function or Gibbs Energy ===
For an unstructured, homogeneous "bulk" system, there are still various extensive compositional variables { Ni } that G depends on, which specify the composition (the amounts of each chemical substance, expressed as the numbers of molecules present or the numbers of moles). Explicitly,
$G = G(T, P, \{N_{i}\})\,.$
For the case where only PV work is possible,
$\mathrm{d}G = -S\,\mathrm{d}T + V\,\mathrm{d}P + \sum_{i}\mu_{i}\,\mathrm{d}N_{i}\,,$
a restatement of the fundamental thermodynamic relation, in which μi is the chemical potential for the i-th component in the system
$\mu_{i} = \left(\frac{\partial G}{\partial N_{i}}\right)_{T,P,N_{j\neq i}}\,.$
The expression for dG is especially useful at constant T and P, conditions that are easy to achieve experimentally and that approximate the conditions in living creatures:
$(\mathrm{d}G)_{T,P} = \sum_{i}\mu_{i}\,\mathrm{d}N_{i}\,.$
=== Chemical affinity ===
While this formulation is mathematically defensible, it is not particularly transparent since one does not simply add or remove molecules from a system. There is always a process involved in changing the composition; e.g., a chemical reaction (or many), or movement of molecules from one phase (liquid) to another (gas or solid). We should find a notation which does not seem to imply that the amounts of the components ( Ni ) can be changed independently. All real processes obey conservation of mass, and in addition, conservation of the numbers of atoms of each kind.
Consequently, we introduce an explicit variable to represent the degree of advancement of a process, a progress variable ξ for the extent of reaction (Prigogine & Defay, p. 18; Prigogine, pp. 4–7; Guggenheim, pp. 37, 62), and use the partial derivative ∂G/∂ξ (in place of the widely used "ΔG", since the quantity at issue is not a finite change). The result is an understandable expression for the dependence of dG on chemical reactions (or other processes). If there is just one reaction
$(\mathrm{d}G)_{T,P} = \left(\frac{\partial G}{\partial \xi}\right)_{T,P}\mathrm{d}\xi\,.$
If we introduce the stoichiometric coefficient for the i-th component in the reaction
$\nu_{i} = \partial N_{i}/\partial \xi$
(negative for reactants), which tells how many molecules of i are produced or consumed, we obtain an algebraic expression for the partial derivative
$\left(\frac{\partial G}{\partial \xi}\right)_{T,P} = \sum_{i}\mu_{i}\nu_{i} = -\mathbb{A}\,,$
where we introduce a concise and historical name for this quantity, the "affinity", symbolized by A, as introduced by Théophile de Donder in 1923 (De Donder; Prigogine & Defay, p. 69; Guggenheim, pp. 37, 240). The minus sign ensures that in a spontaneous change, when the change in the Gibbs free energy of the process is negative, the chemical species have a positive affinity for each other. The differential of G takes on a simple form that displays its dependence on composition change:
$(\mathrm{d}G)_{T,P} = -\mathbb{A}\,\mathrm{d}\xi\,.$
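As a numerical illustration, the sketch below evaluates the affinity for an ideal-gas reaction, assuming ideal-gas chemical potentials μi = μi° + RT ln(pi/p°), a rounded standard reaction Gibbs energy for ammonia synthesis, and hypothetical partial pressures.

```python
# Sketch: affinity A = -(dG/dxi)_{T,P} = -sum(nu_i * mu_i) for the
# ideal-gas reaction N2 + 3 H2 -> 2 NH3 at 298.15 K, using
# mu_i = mu_i^o + R*T*ln(p_i / p^o) with p^o = 1 bar.
import math

R, T = 8.314, 298.15   # J/(mol*K), K
dG0 = -32.9e3          # J/mol, ~2 x dGf(NH3), rounded table value

nu = {"N2": -1, "H2": -3, "NH3": 2}      # stoichiometric coefficients
p = {"N2": 2.0, "H2": 6.0, "NH3": 0.1}   # bar, hypothetical pressures

lnQ = sum(n * math.log(p[s]) for s, n in nu.items())  # ln of reaction quotient
affinity = -(dG0 + R * T * lnQ)
print(f"A = {affinity / 1e3:.1f} kJ/mol")  # positive: reaction runs forward
```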
If there are a number of chemical reactions going on simultaneously, as is usually the case,
$(\mathrm{d}G)_{T,P} = -\sum_{k}\mathbb{A}_{k}\,\mathrm{d}\xi_{k}\,,$
with a set of reaction coordinates { ξj }, avoiding the notion that the amounts of the components ( Ni ) can be changed independently. The expressions above are equal to zero at thermodynamic equilibrium, while they are negative when chemical reactions proceed at a finite rate, producing entropy. This can be made even more explicit by introducing the reaction rates dξj/dt. For every physically independent process (Prigogine & Defay, p. 38; Prigogine, p. 24)
$\mathbb{A}\,\dot{\xi} \geq 0\,.$
This is a remarkable result since the chemical potentials are intensive system variables, depending only on the local molecular milieu. They cannot "know" whether temperature and pressure (or any other system variables) are going to be held constant over time. It is a purely local criterion and must hold regardless of any such constraints. Of course, it could have been obtained by taking partial derivatives of any of the other fundamental state functions, but nonetheless is a general criterion for (−T times) the entropy production from that spontaneous process; or at least any part of it that is not captured as external work. (See Constraints below.)
We now relax the requirement of a homogeneous "bulk" system by letting the chemical potentials and the affinity apply to any locality in which a chemical reaction (or any other process) is occurring. By accounting for the entropy production due to irreversible processes, the equality for dG is now replaced by
$\mathrm{d}G = -S\,\mathrm{d}T + V\,\mathrm{d}P - \sum_{k}\mathbb{A}_{k}\,\mathrm{d}\xi_{k} + \delta W'$
or
$(\mathrm{d}G)_{T,P} = -\sum_{k}\mathbb{A}_{k}\,\mathrm{d}\xi_{k} + \delta W'\,.$
Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings, or it may simply be dissipated, appearing as T times a corresponding increase in the entropy of the system and its surroundings. Or it may go partly toward doing external work and partly toward creating entropy. The important point is that the extent of reaction for a chemical reaction may be coupled to the displacement of some external mechanical or electrical quantity in such a way that one can advance only if the other also does. The coupling may occasionally be rigid, but it is often flexible and variable.
=== Solutions ===
In solution chemistry and biochemistry, the Gibbs free energy decrease (∂G/∂ξ, in molar units, denoted cryptically by ΔG) is commonly used as a surrogate for (−T times) the global entropy produced by spontaneous chemical reactions in situations where no work is being done; or at least no "useful" work; i.e., other than perhaps ± P dV. The assertion that all spontaneous reactions have a negative ΔG is merely a restatement of the second law of thermodynamics, giving it the physical dimensions of energy and somewhat obscuring its significance in terms of entropy. When no useful work is being done, it would be less misleading to use the Legendre transforms of the entropy appropriate for constant T, or for constant T and P, the Massieu functions −F/T and −G/T, respectively.
== Non-equilibrium ==
Generally the systems treated with the conventional chemical thermodynamics are either at equilibrium or near equilibrium. Ilya Prigogine developed the thermodynamic treatment of open systems that are far from equilibrium. In doing so he has discovered phenomena and structures of completely new and completely unexpected types. His generalized, nonlinear and irreversible thermodynamics has found surprising applications in a wide variety of fields.
The non-equilibrium thermodynamics has been applied for explaining how ordered structures e.g. the biological systems, can develop from disorder. Even if Onsager's relations are utilized, the classical principles of equilibrium in thermodynamics still show that linear systems close to equilibrium always develop into states of disorder which are stable to perturbations and cannot explain the occurrence of ordered structures.
Prigogine called these systems dissipative systems, because they are formed and maintained by the dissipative processes which take place because of the exchange of energy between the system and its environment and because they disappear if that exchange ceases. They may be said to live in symbiosis with their environment.
The method which Prigogine used to study the stability of the dissipative structures to perturbations is of very great general interest. It makes it possible to study the most varied problems, such as city traffic problems, the stability of insect communities, the development of ordered biological structures and the growth of cancer cells to mention but a few examples.
=== System constraints ===
In this regard, it is crucial to understand the role of walls and other constraints, and the distinction between independent processes and coupling. Contrary to the clear implications of many reference sources, the previous analysis is not restricted to homogeneous, isotropic bulk systems which can deliver only PdV work to the outside world, but applies even to the most structured systems. There are complex systems with many chemical "reactions" going on at the same time, some of which are really only parts of the same, overall process. An independent process is one that could proceed even if all others were unaccountably stopped in their tracks. Understanding this is perhaps a "thought experiment" in chemical kinetics, but actual examples exist.
A gas-phase reaction at constant temperature and pressure which results in an increase in the number of molecules will lead to an increase in volume. Inside a cylinder closed with a piston, it can proceed only by doing work on the piston. The extent variable for the reaction can increase only if the piston moves out, and conversely if the piston is pushed inward, the reaction is driven backwards.
Similarly, a redox reaction might occur in an electrochemical cell with the passage of current through a wire connecting the electrodes. The half-cell reactions at the electrodes are constrained if no current is allowed to flow. The current might be dissipated as Joule heating, or it might in turn run an electrical device like a motor doing mechanical work. An automobile lead-acid battery can be recharged, driving the chemical reaction backwards. In this case as well, the reaction is not an independent process. Some, perhaps most, of the Gibbs free energy of reaction may be delivered as external work.
The hydrolysis of ATP to ADP and phosphate can drive the force-times-distance work delivered by living muscles, and synthesis of ATP is in turn driven by a redox chain in mitochondria and chloroplasts, which involves the transport of ions across the membranes of these cellular organelles. The coupling of processes here, and in the previous examples, is often not complete. Gas can leak slowly past a piston, just as it can slowly leak out of a rubber balloon. Some reaction may occur in a battery even if no external current is flowing. There is usually a coupling coefficient, which may depend on relative rates, which determines what percentage of the driving free energy is turned into external work, or captured as "chemical work", a misnomer for the free energy of another chemical process.
== See also ==
Thermodynamic databases for pure substances
laws of thermodynamics
== References ==
== Further reading ==
Herbert B. Callen (1960). Thermodynamics. Wiley & Sons. The clearest account of the logical foundations of the subject. ISBN 0-471-13035-4. Library of Congress Catalog No. 60-5597
Ilya Prigogine & R. Defay, translated by D.H. Everett; Chapter IV (1954). Chemical Thermodynamics. Longmans, Green & Co. Exceptionally clear on the logical foundations as applied to chemistry; includes non-equilibrium thermodynamics.
Ilya Prigogine (1967). Thermodynamics of Irreversible Processes, 3rd ed. Interscience: John Wiley & Sons. A simple, concise monograph explaining all the basic ideas. Library of Congress Catalog No. 67-29540
E.A. Guggenheim (1967). Thermodynamics: An Advanced Treatment for Chemists and Physicists, 5th ed. North Holland; John Wiley & Sons (Interscience). A remarkably astute treatise. Library of Congress Catalog No. 67-20003
Th. De Donder (1922). "L'affinite. Applications aux gaz parfaits". Bulletin de la Classe des Sciences, Académie Royale de Belgique. Series 5. 8: 197–205.
Th. De Donder (1922). "Sur le theoreme de Nernst". Bulletin de la Classe des Sciences, Académie Royale de Belgique. Series 5. 8: 205–210.
== External links ==
Chemical Thermodynamics - University of North Carolina
Chemical energetics (Introduction to thermodynamics and the First Law)
Thermodynamics of chemical equilibrium (Entropy, Second Law and free energy)
Crystal Growth & Design is a monthly peer-reviewed scientific journal published by the American Chemical Society. It was established in January 2001 as a bimonthly journal and changed to a monthly frequency in 2006. The editor-in-chief is Jonathan W. Steed from Durham University.
== Aims and scope ==
The focus of the journal is theory and experimentation pertaining to the design, growth, and application of crystalline materials. Processes involved in achieving the various stages of development are also covered. Included in this focus are the physical, chemical, and biological properties of these materials, and how these impact the process. Fundamental research that contributes to these areas is also part of the scope of this journal. The intended audience is scientists and engineers working in the fields of crystal growth, crystal engineering, and the industrial application of crystalline materials.
== Abstracting and indexing ==
The journal is indexed in Science Citation Index, Current Contents/Physical, Chemical & Earth Sciences, Scopus, EBSCOhost, and Chemical Abstracts Service/CASSI.
== Most cited articles ==
The most cited recent articles are:
Desiraju, Gautam R. (2008). "Polymorphism: the Same and Not Quite the Same". Crystal Growth & Design. 8: 3–5. doi:10.1021/cg701000q.
Schultheiss, Nate; Newman, Ann (2009). "Pharmaceutical Cocrystals and Their Physicochemical Properties". Crystal Growth & Design. 9 (6): 2950–2967. doi:10.1021/cg900129f. PMC 2690398. PMID 19503732.
Good, David J.; Rodriguez-Hornedo, Naír (2009). "Solubility Advantage of Pharmaceutical Cocrystals". Crystal Growth & Design. 9 (5): 2252. doi:10.1021/cg801039j.
== References ==
Membrane proteins are common proteins that are part of, or interact with, biological membranes. Membrane proteins fall into several broad categories depending on their location. Integral membrane proteins are a permanent part of a cell membrane and can either penetrate the membrane (transmembrane) or associate with one or the other side of a membrane (integral monotopic). Peripheral membrane proteins are transiently associated with the cell membrane.
Membrane proteins are common, and medically important—about a third of all human proteins are membrane proteins, and these are targets for more than half of all drugs. Nonetheless, compared to other classes of proteins, determining membrane protein structures remains a challenge in large part due to the difficulty in establishing experimental conditions that can preserve the correct (native) conformation of the protein in isolation from its native environment.
== Function ==
Membrane proteins perform a variety of functions vital to the survival of organisms:
Membrane receptor proteins relay signals between the cell's internal and external environments.
Transport proteins move molecules and ions across the membrane. They can be categorized according to the Transporter Classification database.
Membrane enzymes may have many activities, such as oxidoreductase, transferase or hydrolase.
Cell adhesion molecules allow cells to identify each other and interact; examples include proteins involved in the immune response.
The localization of proteins in membranes can be predicted reliably using hydrophobicity analyses of protein sequences, i.e. the localization of hydrophobic amino acid sequences.
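One common form of such an analysis is a sliding-window hydropathy plot; the sketch below uses the standard Kyte–Doolittle scale, with a conventional (not fixed) window length and threshold, applied to a hypothetical sequence.

```python
# Sketch: Kyte-Doolittle sliding-window hydropathy analysis. Windows
# whose mean hydropathy exceeds ~1.6 (with a 19-residue window) are
# classic candidates for transmembrane helices.

KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9,
      "A": 1.8, "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3,
      "P": -1.6, "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5,
      "K": -3.9, "R": -4.5}

def hydropathy(seq, window=19):
    """Mean Kyte-Doolittle hydropathy at each window start position."""
    return [sum(KD[aa] for aa in seq[i:i + window]) / window
            for i in range(len(seq) - window + 1)]

seq = "MKTLLILAVLLAIVLFLLGGSSDEEKRKHHQQ"  # hypothetical sequence
scores = hydropathy(seq)
tm_like = [i for i, s in enumerate(scores) if s > 1.6]
print("window starts above threshold:", tm_like)
```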
=== Integral membrane proteins ===
Integral membrane proteins are permanently attached to the membrane. Such proteins can be separated from the biological membranes only using detergents, nonpolar solvents, or sometimes denaturing agents. They can be classified according to their relationship with the bilayer:
Integral polytopic proteins are transmembrane proteins that span across the membrane more than once. These proteins may have different transmembrane topology. These proteins have one of two structural architectures:
Helix bundle proteins, which are present in all types of biological membranes;
Beta barrel proteins, which are found only in outer membranes of Gram-negative bacteria, and outer membranes of mitochondria and chloroplasts.
Bitopic proteins are transmembrane proteins that span across the membrane only once. Transmembrane helices from these proteins have significantly different amino acid distributions to transmembrane helices from polytopic proteins.
Integral monotopic proteins are integral membrane proteins that are attached to only one side of the membrane and do not span the whole way across.
=== Peripheral membrane proteins ===
Peripheral membrane proteins are temporarily attached either to the lipid bilayer or to integral proteins by a combination of hydrophobic, electrostatic, and other non-covalent interactions. Peripheral proteins dissociate following treatment with a polar reagent, such as a solution with an elevated pH or high salt concentrations.
Integral and peripheral proteins may be post-translationally modified, with added fatty acid, diacylglycerol or prenyl chains, or GPI (glycosylphosphatidylinositol), which may be anchored in the lipid bilayer.
=== Polypeptide toxins ===
Polypeptide toxins and many antibacterial peptides, such as colicins or hemolysins, and certain proteins involved in apoptosis, are sometimes considered a separate category. These proteins are water-soluble but can undergo significant conformational changes, form oligomeric complexes and associate irreversibly or reversibly with the lipid bilayer.
== In genomes ==
Membrane proteins, like soluble globular proteins, fibrous proteins, and disordered proteins, are common. It is estimated that 20–30% of all genes in most genomes encode for membrane proteins. For instance, about 1000 of the ~4200 proteins of E. coli are thought to be membrane proteins, 600 of which have been experimentally verified to be membrane resident. In humans, current thinking suggests that fully 30% of the genome encodes membrane proteins.
== In disease ==
Membrane proteins are the targets of over 50% of all modern medicinal drugs. Among the human diseases in which membrane proteins have been implicated are heart disease, Alzheimer's and cystic fibrosis.
== Purification of membrane proteins ==
Although membrane proteins play an important role in all organisms, their purification has historically, and continues to be, a huge challenge for protein scientists. In 2008, 150 unique structures of membrane proteins were available, and by 2019 only 50 human membrane proteins had had their structures elucidated. In contrast, approximately 25% of all proteins are membrane proteins. Their hydrophobic surfaces make structural and especially functional characterization difficult. Detergents can be used to render membrane proteins water-soluble, but these can also alter protein structure and function. Making membrane proteins water-soluble can also be achieved through engineering the protein sequence, replacing selected hydrophobic amino acids with hydrophilic ones, taking great care to maintain secondary structure while revising overall charge.
Affinity chromatography is one of the best solutions for purification of membrane proteins. The polyhistidine-tag is a commonly used tag for membrane protein purification, and the alternative rho1D4 tag has also been successfully used.
== See also ==
== References ==
== Further reading ==
Johnson JE, Cornell RB (1999). "Amphitropic proteins: regulation by reversible membrane interactions (review)". Molecular Membrane Biology. 16 (3): 217–35. doi:10.1080/096876899294544. PMID 10503244.
Alenghat FJ, Golan DE (2013). "Membrane protein dynamics and functional implications in mammalian cells". Current Topics in Membranes. 72: 89–120. doi:10.1016/b978-0-12-417027-8.00003-9. ISBN 9780124170278. PMC 4193470. PMID 24210428.
== External links ==
=== Organizations ===
Membrane Protein Structural Dynamics Consortium
Experts for Membrane Protein Research and Purification
=== Membrane protein databases ===
TCDB - Transporter Classification database, a comprehensive classification of transmembrane transporter proteins
Orientations of Proteins in Membranes (OPM) database - 3D structures of integral and peripheral membrane proteins arranged in the lipid bilayer
Protein Data Bank of Transmembrane Proteins - 3D models of transmembrane proteins approximately arranged in the lipid bilayer.
TransportDB - Genomics-oriented database of transporters from TIGR
Membrane PDB Archived 2020-08-03 at the Wayback Machine - Database of 3D structures of integral membrane proteins and hydrophobic peptides with an emphasis on crystallization conditions
Mpstruc database Archived 2013-12-25 at the Wayback Machine - A curated list of selected transmembrane proteins from the Protein Data Bank
MemProtMD - a database of membrane protein structures simulated by coarse-grained molecular dynamics
Membranome database provides information about bitopic proteins from several model organisms
Reflection high-energy electron diffraction (RHEED) is a technique used to characterize the surface of crystalline materials. RHEED systems gather information only from the surface layer of the sample, which distinguishes RHEED from other materials characterization methods that also rely on diffraction of high-energy electrons. Transmission electron microscopy, another common electron diffraction method, samples mainly the bulk of the sample due to the geometry of the system, although in special cases it can provide surface information. Low-energy electron diffraction (LEED) is also surface sensitive, but LEED achieves surface sensitivity through the use of low-energy electrons.
== Introduction ==
A RHEED system requires an electron source (gun), photoluminescent detector screen and a sample with a clean surface, although modern RHEED systems have additional parts to optimize the technique. The electron gun generates a beam of electrons which strike the sample at a very small angle relative to the sample surface. Incident electrons diffract from atoms at the surface of the sample, and a small fraction of the diffracted electrons interfere constructively at specific angles and form regular patterns on the detector. The electrons interfere according to the position of atoms on the sample surface, so the diffraction pattern at the detector is a function of the sample surface. Figure 1 shows the most basic setup of a RHEED system.
== Surface diffraction ==
In the RHEED setup, only atoms at the sample surface contribute to the RHEED pattern. The glancing angle of incident electrons allows them to escape the bulk of the sample and to reach the detector. Atoms at the sample surface diffract (scatter) the incident electrons due to the wavelike properties of electrons.
The diffracted electrons interfere constructively at specific angles according to the crystal structure and spacing of the atoms at the sample surface and the wavelength of the incident electrons. Some of the electron waves created by constructive interference collide with the detector, creating specific diffraction patterns according to the surface features of the sample. Users characterize the crystallography of the sample surface through analysis of the diffraction patterns. Figure 2 shows a RHEED pattern.
Two types of diffraction contribute to RHEED patterns. Some incident electrons undergo a single, elastic scattering event at the crystal surface, a process termed kinematic scattering. Dynamic scattering occurs when electrons undergo multiple diffraction events in the crystal and lose some of their energy due to interactions with the sample. Users extract qualitative data from the kinematically diffracted electrons, which account for the high-intensity spots or rings common to RHEED patterns. RHEED users also analyze dynamically scattered electrons with complex techniques and models to gather quantitative information from RHEED patterns.
=== Kinematic scattering analysis ===
RHEED users construct Ewald's spheres to find the crystallographic properties of the sample surface. Ewald's spheres show the allowed diffraction conditions for kinematically scattered electrons in a given RHEED setup. The diffraction pattern at the screen relates to the Ewald's sphere geometry, so RHEED users can directly calculate the reciprocal lattice of the sample with a RHEED pattern, the energy of the incident electrons and the distance from the detector to the sample. The user must relate the geometry and spacing of the spots of a perfect pattern to the Ewald's sphere in order to determine the reciprocal lattice of the sample surface.
The Ewald's sphere analysis is similar to that for bulk crystals, however the reciprocal lattice for the sample differs from that for a 3D material due to the surface sensitivity of the RHEED process. The reciprocal lattices of bulk crystals consist of a set of points in 3D space. However, only the first few layers of the material contribute to the diffraction in RHEED, so there are no diffraction conditions in the dimension perpendicular to the sample surface. Due to the lack of a third diffracting condition, the reciprocal lattice of a crystal surface is a series of infinite rods extending perpendicular to the sample's surface. These rods originate at the conventional 2D reciprocal lattice points of the sample's surface.
The Ewald's sphere is centered on the sample surface with a radius equal to the magnitude of the wavevector of the incident electrons,

$|k_{i}| = 2\pi/\lambda$ (1)

where λ is the electrons' de Broglie wavelength.
Diffraction conditions are satisfied where the rods of reciprocal lattice intersect the Ewald's sphere. Therefore, the magnitude of a vector from the origin of the Ewald's sphere to the intersection of any reciprocal lattice rods is equal in magnitude to that of the incident beam. This is expressed as
$|k_{hl}| = |k_{i}|$ (2)
Here, khl is the wave vector of the elastically diffracted electrons of order (hl) at any intersection of the reciprocal lattice rods with the Ewald's sphere. The projections of the two vectors onto the plane of the sample's surface differ by a reciprocal lattice vector Ghl:

$G_{hl} = k_{hl}^{\parallel} - k_{i}^{\parallel}$ (3)
Figure 3 shows the construction of the Ewald's sphere and provides examples of the G, khl and ki vectors.
Many of the reciprocal lattice rods meet the diffraction condition, however the RHEED system is designed such that only the low orders of diffraction are incident on the detector. The RHEED pattern at the detector is a projection only of the k vectors that are within the angular range that contains the detector. The size and position of the detector determine which of the diffracted electrons are within the angular range that reaches the detector, so the geometry of the RHEED pattern can be related back to the geometry of the reciprocal lattice of the sample surface through use of trigonometric relations and the distance from the sample to detector.
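In the small-angle limit this trigonometric step reduces to d = λL/w, relating the streak spacing w on the screen to a surface lattice row spacing d; in the sketch below the camera length and streak spacing are hypothetical numbers.

```python
# Sketch: relating RHEED streak separation on the screen to the row
# spacing of the surface lattice. In the small-angle limit,
#     d = lambda * L / w
# where L is the sample-to-screen distance and w the streak spacing.

wavelength = 0.086  # angstrom, roughly 20 keV electrons
L = 300.0           # mm, hypothetical sample-to-detector distance
w = 6.5             # mm, hypothetical spacing between adjacent streaks

d = wavelength * L / w  # angstrom (the mm units cancel)
print(f"surface row spacing d = {d:.2f} angstrom")  # ~3.97
```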
The k vectors are labeled such that the vector k00 that forms the smallest angle with the sample surface is called the 0th order beam. The 0th order beam is also known as the specular beam. Each successive intersection of a rod and the sphere further from the sample surface is labeled as a higher order reflection.
Because of the way the center of the Ewald's sphere is positioned, the specular beam forms the same angle with the substrate as the incident electron beam. The specular point has the greatest intensity on a RHEED pattern and is labeled as the (00) point by convention. The other points on the RHEED pattern are indexed according to the reflection order they project.
The radius of the Ewald's sphere is much larger than the spacing between reciprocal lattice rods because the incident beam has a very short wavelength due to its high-energy electrons. Rows of reciprocal lattice rods actually intersect the Ewald's sphere as approximate planes, because identical rows of parallel reciprocal lattice rods sit directly in front of and behind the single row shown. Figure 3 shows a cross-sectional view of a single row of reciprocal lattice rods fulfilling the diffraction conditions. The reciprocal lattice rods in Figure 3 show the end-on view of these planes, which are perpendicular to the plane of the figure.
The intersections of these effective planes with the Ewald's sphere forms circles, called Laue circles. The RHEED pattern is a collection of points on the perimeters of concentric Laue circles around the center point. However, interference effects between the diffracted electrons still yield strong intensities at single points on each Laue circle. Figure 4 shows the intersection of one of these planes with the Ewald's Sphere.
The azimuthal angle affects the geometry and intensity of RHEED patterns. The azimuthal angle is the angle at which the incident electrons intersect the ordered crystal lattice on the surface of the sample. Most RHEED systems are equipped with a sample holder that can rotate the crystal around an axis perpendicular to the sample surface. RHEED users rotate the sample to optimize the intensity profiles of patterns. Users generally index at least 2 RHEED scans at different azimuth angles for reliable characterization of the crystal's surface structure. Figure 5 shows a schematic diagram of an electron beam incident on the sample at different azimuth angles.
Users sometimes rotate the sample around an axis perpendicular to the sampling surface during RHEED experiments to create a RHEED pattern called the azimuthal plot. Rotating the sample changes the intensity of the diffracted beams due to their dependence on the azimuth angle. RHEED specialists characterize film morphologies by measuring the changes in beam intensity and comparing these changes to theoretical calculations, which can effectively model the dependence of the intensity of diffracted beams on the azimuth angle.
=== Dynamic scattering analysis ===
The dynamically, or inelastically, scattered electrons provide several types of information about the sample as well. The brightness or intensity at a point on the detector depends on dynamic scattering, so all analysis involving the intensity must account for dynamic scattering. Some inelastically scattered electrons penetrate the bulk crystal and fulfill Bragg diffraction conditions. These inelastically scattered electrons can reach the detector to yield Kikuchi diffraction patterns, which are useful for calculating diffraction conditions. Kikuchi patterns are characterized by lines connecting the intense diffraction points on a RHEED pattern. Figure 6 shows a RHEED pattern with visible Kikuchi lines.
== RHEED system requirements ==
=== Electron gun ===
The electron gun is one of the most important pieces of equipment in a RHEED system. The gun limits the resolution and testing limits of the system. Tungsten filaments are the primary electron source for the electron gun of most RHEED systems due to the low work function of tungsten. In the typical setup, the tungsten filament is the cathode and a positively biased anode draws electrons from the tip of the tungsten filament.
The magnitude of the anode bias determines the energy of the incident electrons. The optimal anode bias depends upon the type of information desired. At large incident angles, electrons with high energy can penetrate the surface of the sample and degrade the surface sensitivity of the instrument. However, the dimensions of the Laue zones are proportional to the inverse square of the electron energy, meaning that more information is recorded at the detector at higher incident electron energies. For general surface characterization, the electron gun is operated in the range of 10–30 keV.
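For these operating energies, the de Broglie wavelength follows from the relativistically corrected relation λ = h / √(2·m·e·V·(1 + eV/2mc²)); a short sketch:

```python
# Sketch: de Broglie wavelength of RHEED electrons with the standard
# relativistic correction.
import math

H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
E_CH = 1.602176634e-19  # elementary charge, C
C = 2.99792458e8        # speed of light, m/s

def electron_wavelength_angstrom(volts):
    """Relativistically corrected electron wavelength (angstrom)."""
    p = math.sqrt(2 * M_E * E_CH * volts
                  * (1 + E_CH * volts / (2 * M_E * C**2)))
    return H / p * 1e10

for kv in (10, 20, 30):
    print(f"{kv} keV -> {electron_wavelength_angstrom(kv * 1e3):.4f} angstrom")
```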
In a typical RHEED setup, one magnetic and one electric field focus the incident beam of electrons. A negatively biased Wehnelt electrode positioned between the cathode filament and anode applies a small electric field, which focuses the electrons as they pass through the anode. An adjustable magnetic lens focuses the electrons onto the sample surface after they pass through the anode. A typical RHEED source has a focal length around 50 cm. The beam is focused to the smallest possible point at the detector rather than the sample surface so that the diffraction pattern has the best resolution.
Phosphor screens that exhibit photoluminescence are widely used as detectors. These detectors emit green light from areas where electrons hit their surface and are common to TEM as well. The detector screen is useful for aligning the pattern to an optimal position and intensity. CCD cameras capture the patterns to allow for digital analysis.
=== Sample surface ===
The sample surface must be extremely clean for effective RHEED experiments. Contaminants on the sample surface interfere with the electron beam and degrade the quality of the RHEED pattern. RHEED users employ two main techniques to create clean sample surfaces. Small samples can be cleaved in the vacuum chamber prior to RHEED analysis. The newly exposed, cleaved surface is analyzed. Large samples, or those that are not able to be cleaved prior to RHEED analysis can be coated with a passive oxide layer prior to analysis. Subsequent heat treatment under the vacuum of the RHEED chamber removes the oxide layer and exposes the clean sample surface.
=== Vacuum requirements ===
Because gas molecules diffract electrons and affect the quality of the electron gun, RHEED experiments are performed under vacuum. The RHEED system must operate at a pressure low enough to prevent significant scattering of the electron beams by gas molecules in the chamber. At electron energies of 10 keV, a chamber pressure of 10−5 mbar or lower is necessary to prevent significant scattering of electrons by the background gas. In practice, RHEED systems are operated under ultra high vacuums. The chamber pressure is minimized as much as possible in order to optimize the process. The vacuum conditions limit the types of materials and processes that can be monitored in situ with RHEED.
== RHEED patterns of real surfaces ==
Previous analysis focused only on diffraction from a perfectly flat surface of a crystal surface. However, non-flat surfaces add additional diffraction conditions to RHEED analysis.
Streaked or elongated spots are common in RHEED patterns. As Figure 3 shows, the reciprocal lattice rods of the lowest orders intersect the Ewald sphere at very small angles, so the intersection between the rods and the sphere is not a single point if the sphere and rods have finite thickness. The incident electron beam diverges, and electrons in the beam have a range of energies, so in practice the Ewald sphere is not infinitely thin as it is theoretically modeled. The reciprocal lattice rods have a finite thickness as well, with their diameters dependent on the quality of the sample surface. Streaks appear in the place of perfect points when broadened rods intersect the Ewald sphere. Diffraction conditions are fulfilled over the entire intersection of the rods with the sphere, yielding elongated points or 'streaks' along the vertical axis of the RHEED pattern. In real cases, streaky RHEED patterns indicate a flat sample surface, while broadening of the streaks indicates a small area of coherence on the surface.
Surface features and polycrystalline surfaces add complexity to, or change, RHEED patterns relative to those from perfectly flat surfaces. Growing films, nucleating particles, crystal twinning, grains of varying size, and adsorbed species add complicated diffraction conditions to those of a perfect surface. Superimposed patterns of the substrate and heterogeneous materials, complex interference patterns, and degradation of the resolution are characteristic of complex surfaces or those partially covered with heterogeneous materials.
== Specialized RHEED techniques ==
=== Film growth ===
RHEED is an extremely popular technique for monitoring the growth of thin films. In particular, RHEED is well suited for use with molecular beam epitaxy (MBE), a process used to form high quality, ultrapure thin films under ultrahigh vacuum growth conditions. The intensities of individual spots on the RHEED pattern fluctuate in a periodic manner as a result of the relative surface coverage of the growing thin film. Figure 8 shows an example of the intensity fluctuating at a single RHEED point during MBE growth.
Each full period corresponds to the formation of a single atomic layer of the film. The oscillation period is highly dependent on the material system, electron energy, and incident angle, so researchers obtain empirical data to correlate the intensity oscillations and film coverage before using RHEED for monitoring film growth.
Video 1 depicts a metrology instrument recording the RHEED intensity oscillations and deposition rate for process control and analysis.
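Because each full oscillation corresponds to one completed monolayer, a deposition rate follows directly from the dominant frequency of the recorded intensity trace. The following minimal sketch illustrates that arithmetic on synthetic data; it is not tied to any particular instrument's software, and the trace, the 2 s period, and the 0.28 nm monolayer thickness are made-up illustrative values.

```python
# Minimal sketch: estimate a growth rate from RHEED intensity oscillations.
# Synthetic data only; a real trace would come from the acquisition system.
import numpy as np

def growth_rate_from_oscillations(t, intensity, monolayer_nm):
    """Estimate growth rate (nm/s) from the dominant oscillation frequency."""
    y = intensity - np.mean(intensity)          # remove the DC offset
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    spectrum = np.abs(np.fft.rfft(y))
    f_osc = freqs[1:][np.argmax(spectrum[1:])]  # skip the zero-frequency bin
    # One full oscillation period corresponds to one completed monolayer.
    return f_osc * monolayer_nm

t = np.linspace(0, 60, 3000)                          # 60 s of synthetic data
intensity = 1.0 + 0.3 * np.cos(2 * np.pi * t / 2.0)   # one monolayer every 2 s
print(f"growth rate ≈ {growth_rate_from_oscillations(t, intensity, 0.28):.3f} nm/s")
```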
=== RHEED-TRAXS ===
Reflection high energy electron diffraction - total reflection angle X-ray spectroscopy is a technique for monitoring the chemical composition of crystals. RHEED-TRAXS analyzes X-ray spectral lines emitted from a crystal as a result of electrons from a RHEED gun colliding with the surface.
RHEED-TRAXS is preferable to X-ray microanalysis (XMA) techniques such as EDS and WDS because the incidence angle of the electrons on the surface is very small, typically less than 5°. As a result, the electrons do not penetrate deeply into the crystal, meaning the X-ray emission is restricted to the top of the crystal, allowing for real-time, in-situ monitoring of surface stoichiometry.
The experimental setup is fairly simple. Electrons are fired onto a sample, causing X-ray emission. These X-rays are then detected using a silicon–lithium (Si–Li) crystal placed behind beryllium windows, which maintain the vacuum.
=== MCP-RHEED ===
MCP-RHEED is a system in which an electron beam is amplified by a micro-channel plate (MCP). This system consists of an electron gun and an MCP equipped with a fluorescent screen opposite to the electron gun. Because of the amplification, the intensity of the electron beam can be decreased by several orders of magnitude and the damage to the samples is diminished. This method is used to observe the growth of insulator crystals such as organic films and alkali halide films, which are easily damaged by electron beams.
== References ==
== Further reading ==
Introduction to RHEED, A.S. Arrot, Ultrathin Magnetic Structures I, Springer-Verlag, 1994, pp. 177–220
A Review of the Geometrical Fundamentals of RHEED with Application to Silicon Surfaces, John E. Mahan, Kent M. Geib, G.Y. Robinson, and Robert G. Long, J.V.S.T., A 8, 1990, pp. 3692–3700 | Wikipedia/Reflection_high-energy_electron_diffraction |
In materials science, segregation is the enrichment of atoms, ions, or molecules at a microscopic region in a materials system. While the terms segregation and adsorption are essentially synonymous, in practice, segregation is often used to describe the partitioning of molecular constituents to defects from solid solutions, whereas adsorption is generally used to describe such partitioning from liquids and gases to surfaces. The molecular-level segregation discussed in this article is distinct from other types of materials phenomena that are often called segregation, such as particle segregation in granular materials, and phase separation or precipitation, wherein molecules are segregated into macroscopic regions of different compositions. Segregation has many practical consequences, ranging from the formation of soap bubbles, to microstructural engineering in materials science, to the stabilization of colloidal suspensions.
Segregation can occur in various materials classes. In polycrystalline solids, segregation occurs at defects, such as dislocations, grain boundaries, stacking faults, or the interface between two phases. In liquid solutions, chemical gradients exist near second phases and surfaces due to combinations of chemical and electrical effects.
Segregation which occurs in well-equilibrated systems due to the intrinsic chemical properties of the system is termed equilibrium segregation. Segregation that occurs due to the processing history of the sample (but that would disappear at long times) is termed non-equilibrium segregation.
== History ==
Equilibrium segregation is associated with the lattice disorder at interfaces, where there are sites of energy different from those within the lattice at which the solute atoms can deposit themselves. The equilibrium segregation is so termed because the solute atoms segregate themselves to the interface or surface in accordance with the statistics of thermodynamics in order to minimize the overall free energy of the system. This sort of partitioning of solute atoms between the grain boundary and the lattice was predicted by McLean in 1957.
Non-equilibrium segregation, first theorized by Westbrook in 1964, occurs as a result of solutes coupling to vacancies which are moving to grain boundary sources or sinks during quenching or application of stress. It can also occur as a result of solute pile-up at a moving interface.
There are two main features of non-equilibrium segregation, by which it is most easily distinguished from equilibrium segregation. In the non-equilibrium effect, the magnitude of the segregation increases with increasing temperature, and the alloy can be homogenized without further quenching because its lowest energy state corresponds to a uniform solute distribution. In contrast, the equilibrium segregated state, by definition, is the lowest energy state in a system that exhibits equilibrium segregation, and the extent of the segregation effect decreases with increasing temperature. The details of non-equilibrium segregation are not discussed here, but can be found in the review by Harries and Marwick.
== Importance ==
Segregation of a solute to surfaces and grain boundaries in a solid produces a section of material with a discrete composition and its own set of properties that can have important (and often deleterious) effects on the overall properties of the material. These 'zones' with an increased concentration of solute can be thought of as the cement between the bricks of a building. The structural integrity of the building depends not only on the material properties of the brick, but also greatly on the properties of the long lines of mortar in between.
Segregation to grain boundaries, for example, can lead to grain boundary fracture as a result of temper brittleness, creep embrittlement, stress relief cracking of weldments, hydrogen embrittlement, environmentally assisted fatigue, grain boundary corrosion, and some kinds of intergranular stress corrosion cracking. An important field of study of impurity segregation processes is Auger electron spectroscopy (AES) of grain boundaries. This technique, developed by Ilyin, includes tensile fracturing of special specimens directly inside the UHV chamber of the Auger electron spectrometer.
Segregation to grain boundaries can also affect their respective migration rates, and so affects sinterability, as well as the grain boundary diffusivity (although sometimes these effects can be used advantageously).
Segregation to free surfaces also has important consequences involving the purity of metallurgical samples. Because of the favorable segregation of some impurities to the surface of the material, a very small concentration of impurity in the bulk of the sample can lead to a very significant coverage of the impurity on a cleaved surface of the sample. In applications where an ultra-pure surface is needed (for example, in some nanotechnology applications), the segregation of impurities to surfaces requires a much higher purity of bulk material than would be needed if segregation effects did not exist. The following figure illustrates this concept with two cases in which the total fraction of impurity atoms is 0.25 (25 impurity atoms in 100 total). In the representation on the left, these impurities are equally distributed throughout the sample, and so the fractional surface coverage of impurity atoms is also approximately 0.25. In the representation to the right, however, the same number of impurity atoms are shown segregated on the surface, so that an observation of the surface composition would yield a much higher impurity fraction (in this case, about 0.69). In fact, in this example, were impurities to completely segregate to the surface, an impurity fraction of just 0.36 could completely cover the surface of the material. In an application where surface interactions are important, this result could be disastrous.
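The arithmetic behind this illustration can be reproduced in a few lines. The sketch below assumes the figure's geometry is a 10 × 10 two-dimensional grid of atoms (100 sites, of which the 36 perimeter sites count as "surface"); that grid size is an inference from the quoted numbers, not something stated explicitly above.

```python
# Minimal sketch of the surface-coverage example: 25 impurity atoms in a
# 10 x 10 grid, either distributed uniformly or fully segregated to the
# perimeter ("surface") sites. The grid geometry is an assumption.
def surface_coverage(n_side, n_impurity, segregated):
    total = n_side * n_side
    surface_sites = 4 * n_side - 4        # perimeter sites of the 2D grid
    if segregated:
        return min(n_impurity, surface_sites) / surface_sites
    return n_impurity / total             # uniform: surface fraction = bulk fraction

print(surface_coverage(10, 25, segregated=False))  # 0.25
print(surface_coverage(10, 25, segregated=True))   # ~0.69 (25 of 36 perimeter sites)
print((4 * 10 - 4) / 100)                          # 0.36: fraction that covers the surface
```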
While the intergranular failure problems noted above are sometimes severe, they are rarely the cause of major service failures (in structural steels, for example), as suitable safety margins are included in the designs. Perhaps the greater concern is that with the development of new technologies and materials with new and more extensive mechanical property requirements, and with the increasing impurity contents as a result of the increased recycling of materials, we may see intergranular failure in materials and situations not seen currently. Thus, a greater understanding of all of the mechanisms surrounding segregation might lead to being able to control these effects in the future. Modeling potentials, experimental work, and related theories are still being developed to explain these segregation mechanisms for increasingly complex systems.
== Theories of Segregation ==
Several theories describe the equilibrium segregation activity in materials. The adsorption theories for the solid-solid interface and the solid-vacuum surface are direct analogues of theories well known in the field of gas adsorption on the free surfaces of solids.
=== Langmuir–McLean theory for surface and grain boundary segregation in binary systems ===
This is the earliest theory specifically for grain boundaries, in which McLean uses a model of P solute atoms distributed at random amongst N lattice sites and p solute atoms distributed at random amongst n independent grain boundary sites. The total free energy due to the solute atoms is then:
G = pe + PE - kT\left[\ln(n!\,N!) - \ln\left((n-p)!\,p!\,(N-P)!\,P!\right)\right]
where E and e are the energies of the solute atom in the lattice and in the grain boundary, respectively, and the k ln term represents the configurational entropy of the arrangement of the solute atoms in the bulk and grain boundary. McLean used basic statistical mechanics to find the fractional monolayer of segregant, X_b, at which the system energy is minimized (the equilibrium state), by differentiating G with respect to p and noting that the sum of p and P is constant. Here the grain boundary analogue of Langmuir adsorption at free surfaces becomes:
\frac{X_b}{X_b^0 - X_b} = \frac{X_c}{1 - X_c}\exp\left(\frac{-\Delta G}{RT}\right)
Here, X_b^0 is the fraction of the grain boundary monolayer available for segregated atoms at saturation, X_b is the actual fraction covered with segregant, X_c is the bulk solute molar fraction, and ΔG is the free energy of segregation per mole of solute.
Values of ΔG were estimated by McLean using the elastic strain energy, E_el, released by the segregation of solute atoms. The solute atom is represented by an elastic sphere fitted into a spherical hole in an elastic matrix continuum. The elastic energy associated with the solute atom is given by:
E_\text{el} = \frac{24\pi K\mu_0 r_0 (r_1 - r_0)^2}{3K + 4\mu_0}
where K is the solute bulk modulus, μ_0 is the matrix shear modulus, and r_0 and r_1 are the atomic radii of the matrix and impurity atoms, respectively. This method gives values correct to within a factor of two (as compared with experimental data for grain boundary segregation), but greater accuracy is obtained using the method of Seah and Hondros, described in the following section.
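A short numerical sketch ties the two formulas together: the misfit elastic energy provides an estimate of ΔG, which the Langmuir–McLean isotherm then converts into a boundary coverage. All material parameters below are order-of-magnitude placeholders, not data for any specific alloy.

```python
# Minimal sketch: McLean's elastic-strain estimate of the segregation energy,
# fed into the Langmuir-McLean isotherm. Parameter values are placeholders.
import numpy as np

R = 8.314       # gas constant, J/(mol K)
N_A = 6.022e23  # Avogadro's number, 1/mol

def elastic_energy(K, mu0, r0, r1):
    """E_el per atom (J): misfitting elastic sphere in an elastic matrix."""
    return 24 * np.pi * K * mu0 * r0 * (r1 - r0) ** 2 / (3 * K + 4 * mu0)

def mclean_coverage(dG, Xc, T, Xb0=1.0):
    """Solve Xb/(Xb0 - Xb) = Xc/(1 - Xc) * exp(-dG/RT) for Xb."""
    k = Xc / (1 - Xc) * np.exp(-dG / (R * T))
    return Xb0 * k / (1 + k)

E_el = elastic_energy(K=1.6e11, mu0=8e10, r0=1.25e-10, r1=1.40e-10)
dG = -E_el * N_A      # segregation releases the strain energy (J/mol)
for T in (600.0, 800.0, 1000.0):
    print(f"T = {T:6.0f} K   X_b = {mclean_coverage(dG, Xc=1e-3, T=T):.3f}")
```

As expected for equilibrium segregation, the computed coverage falls with increasing temperature.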
=== Free energy of grain boundary segregation in binary systems ===
Using truncated BET theory (the gas adsorption theory developed by Brunauer, Emmett, and Teller), Seah and Hondros write the solid-state analogue as:
\frac{X_b}{X_b^0 - X_b} = \frac{X_c}{X_c^0}\exp\left(\frac{-\Delta G'}{RT}\right)
where \Delta G = \Delta G' + \Delta G_\text{sol}, and X_c^0 is the solid solubility, which is known for many elements (and can be found in metallurgical handbooks). In the dilute limit, a slightly soluble substance has
X_c^0 = \exp\left(\frac{\Delta G_\text{sol}}{RT}\right),
so the above equation reduces to that found with the Langmuir–McLean theory. This equation is only valid for X_c ≤ X_c^0. If there is an excess of solute such that a second phase appears, the solute content is limited to X_c^0 and the equation becomes
\frac{X_b}{X_b^0 - X_b} = \exp\left(\frac{-\Delta G'}{RT}\right)
This theory for grain boundary segregation, derived from truncated BET theory, provides excellent agreement with experimental data obtained by Auger electron spectroscopy and other techniques.
=== More complex systems ===
Other models exist to describe more complex binary systems. The above theories operate on the assumption that the segregated atoms are non-interacting. If, in a binary system, adjacent adsorbate atoms are allowed an interaction energy ω, such that they can attract (when ω is negative) or repel (when ω is positive) each other, the solid-state analogue of the Fowler adsorption theory is developed as
\frac{X_b}{X_b^0 - X_b} = \frac{X_c}{1 - X_c}\exp\left[\frac{-\Delta G - Z_1\omega\,X_b/X_b^0}{RT}\right].
When ω is zero, this theory reduces to that of Langmuir and McLean. However, as ω becomes more negative, the segregation shows progressively sharper rises as the temperature falls, until eventually the rise in segregation is discontinuous at a certain temperature, as shown in the following figure.
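Because X_b appears on both sides of the Fowler isotherm, it has to be solved self-consistently. The sketch below does this by damped fixed-point iteration with illustrative parameter values (ΔG, ω, Z_1 and X_c are placeholders); it shows the coverage rising much more steeply on cooling when ω is negative than in the non-interacting (Langmuir–McLean) limit ω = 0.

```python
# Minimal sketch: solve the Fowler isotherm by damped fixed-point iteration.
# All parameter values are illustrative placeholders.
import numpy as np

R = 8.314  # J/(mol K)

def fowler_coverage(dG, w, Z1, Xc, T, Xb0=1.0, iters=200):
    Xb = 0.5 * Xb0                                   # initial guess
    for _ in range(iters):
        k = Xc / (1 - Xc) * np.exp((-dG - Z1 * w * Xb / Xb0) / (R * T))
        Xb = 0.5 * Xb + 0.5 * Xb0 * k / (1 + k)      # damped update
    return Xb

for T in (400.0, 600.0, 800.0, 1000.0):
    lm = fowler_coverage(dG=-2e4, w=0.0, Z1=4, Xc=1e-3, T=T)    # w = 0 limit
    fw = fowler_coverage(dG=-2e4, w=-8e3, Z1=4, Xc=1e-3, T=T)   # attractive case
    print(f"T = {T:6.0f} K   w=0: {lm:.3f}   w<0: {fw:.3f}")
```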
Guttmann, in 1975, extended the Fowler theory to allow for interactions between two co-segregating species in multicomponent systems. This modification is vital to explaining the segregation behavior that results in the intergranular failures of engineering materials. More complex theories are detailed in the work by Guttmann and by McLean and Guttmann.
=== The free energy of surface segregation in binary systems ===
The Langmuir–McLean equation for segregation, when using the regular solution model for a binary system, is valid for surface segregation (although sometimes the equation will be written replacing
X_b with X_s). The free energy of surface segregation is \Delta G_s = \Delta H_s - T\,\Delta S. The enthalpy is given by
-\Delta H_s = \gamma_0^s - \gamma_1^s - \frac{2H_m}{Z X_c (1 - X_c)}\left[Z_1 (X_c - X_s) + Z_v\left(X_c - \frac{1}{2}\right)\right] + \frac{24\pi K\mu_0 r_0 (r_1 - r_0)^2}{3K + 4\mu_0}
where γ_0^s and γ_1^s are the matrix surface energies without and with solute, H_m is their heat of mixing, Z and Z_1 are the coordination numbers in the matrix and at the surface, and Z_v is the coordination number for surface atoms to the layer below. The last term in this equation is the elastic strain energy E_el, given above, and is governed by the mismatch between the solute and the matrix atoms. For solid metals, the surface energies scale with the melting points. The surface segregation enrichment ratio increases when the solute atom size is larger than the matrix atom size and when the melting point of the solute is lower than that of the matrix.
A chemisorbed gaseous species on the surface can also have an effect on the surface composition of a binary alloy. In the presence of a coverage Θ of a chemisorbed species, it is proposed that the Langmuir–McLean model is valid with the free energy of surface segregation given by ΔG_chem, where
\Delta G_\text{chem} = \Delta G_s + (E_B - E_A)\Theta
Here, E_A and E_B are the chemisorption energies of the gas on solute A and matrix B, and Θ is the fractional coverage. At high temperatures, evaporation from the surface can take place, causing a deviation from the McLean equation. At lower temperatures, both grain boundary and surface segregation can be limited by the diffusion of atoms from the bulk to the surface or interface.
== Kinetics of segregation ==
In some situations where segregation is important, the segregant atoms do not have sufficient time to reach their equilibrium level as defined by the above adsorption theories. The kinetics of segregation become a limiting factor and must be analyzed as well. Most existing models of segregation kinetics follow the McLean approach. In the model for equilibrium monolayer segregation, the solute atoms are assumed to segregate to a grain boundary from two infinite half-crystals or to a surface from one infinite half-crystal. The diffusion in the crystals is described by Fick's laws. The ratio of the solute concentration in the grain boundary to that in the adjacent atomic layer of the bulk is given by an enrichment ratio,
β. Most models assume β to be a constant, but in practice this is only true for dilute systems with low segregation levels. In this dilute limit, if X_b^0 is one monolayer, β is given as
\beta = \frac{X_b}{X_c} = \frac{\exp\left(-\Delta G'/RT\right)}{X_c^0}.
The kinetics of segregation can be described by the following equation:
\frac{X_b(t) - X_b(0)}{X_b(\infty) - X_b(0)} = 1 - \exp\left(\frac{FDt}{\beta^2 f^2}\right)\operatorname{erfc}\left[\left(\frac{FDt}{\beta^2 f^2}\right)^{1/2}\right]
where F = 4 for grain boundaries and F = 1 for the free surface, X_b(t) is the boundary content at time t, D is the solute bulk diffusivity, and f is related to the atomic sizes of the solute and the matrix, b and a, respectively, by f = a^3 b^{-2}. For short times, this equation is approximated by:
\frac{X_b(t) - X_b(0)}{X_b(\infty) - X_b(0)} = \frac{2}{\beta f}\sqrt{\frac{FDt}{\pi}} = \frac{2}{\beta}\,\frac{b^2}{a^3}\sqrt{\frac{FDt}{\pi}}
In practice, β is not a constant but generally falls as segregation proceeds due to saturation. If β starts high and falls rapidly as the segregation saturates, the above equation is valid until the point of saturation.
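The following sketch evaluates the full kinetic expression and its short-time approximation side by side. It uses scipy for the complementary error function; the diffusivity, enrichment ratio, and atomic-size factor are illustrative placeholders.

```python
# Minimal sketch: McLean-type segregation kinetics and its short-time limit.
# Parameter values are illustrative placeholders, not data for a real alloy.
import numpy as np
from scipy.special import erfc

def kinetics_fraction(t, F, D, beta, f):
    """(X_b(t) - X_b(0)) / (X_b(inf) - X_b(0)) from the full expression."""
    x = F * D * t / (beta ** 2 * f ** 2)
    return 1.0 - np.exp(x) * erfc(np.sqrt(x))

def kinetics_short_time(t, F, D, beta, f):
    return (2.0 / (beta * f)) * np.sqrt(F * D * t / np.pi)

params = dict(F=4, D=1e-18, beta=1e3, f=2.5e-10)   # grain boundary: F = 4
for t in (1e2, 1e3, 1e4):
    print(f"t = {t:8.0f} s   full: {kinetics_fraction(t, **params):.4f}"
          f"   short-time: {kinetics_short_time(t, **params):.4f}")
```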
== In metal castings ==
All metal castings experience segregation to some extent, and a distinction is made between macrosegregation and microsegregation. Microsegregation refers to localized differences in composition between dendrite arms, and can be significantly reduced by a homogenizing heat treatment. This is possible because the distances involved (typically on the order of 10 to 100 μm) are sufficiently small for diffusion to be a significant mechanism. This is not the case in macrosegregation. Therefore, macrosegregation in metal castings cannot be remedied or removed using heat treatment.
== Further reading ==
== See also ==
== References == | Wikipedia/Segregation_(materials_science) |
Geodynamics is a subfield of geophysics dealing with dynamics of the Earth. It applies physics, chemistry and mathematics to the understanding of how mantle convection leads to plate tectonics and geologic phenomena such as seafloor spreading, mountain building, volcanoes, earthquakes, and faulting. It also attempts to probe the internal activity by measuring magnetic fields, gravity, and seismic waves, as well as the mineralogy of rocks and their isotopic composition. Methods of geodynamics are also applied to exploration of other planets.
== Overview ==
Geodynamics is generally concerned with processes that move materials throughout the Earth. In the Earth's interior, movement happens when rocks melt or deform and flow in response to a stress field. This deformation may be brittle, elastic, or plastic, depending on the magnitude of the stress and the material's physical properties, especially the stress relaxation time scale. Rocks are structurally and compositionally heterogeneous and are subjected to variable stresses, so it is common to see different types of deformation in close spatial and temporal proximity. When working with geological timescales and lengths, it is convenient to use the continuous medium approximation and equilibrium stress fields to consider the average response to average stress.
Experts in geodynamics commonly use data from geodetic GPS, InSAR, and seismology, along with numerical models, to study the evolution of the Earth's lithosphere, mantle and core.
Work performed by geodynamicists may include:
Modeling brittle and ductile deformation of geologic materials
Predicting patterns of continental accretion and breakup of continents and supercontinents
Observing surface deformation and relaxation due to ice sheets and post-glacial rebound, and making related conjectures about the viscosity of the mantle
Finding and understanding the driving mechanisms behind plate tectonics.
== Deformation of rocks ==
Rocks and other geological materials experience strain according to three distinct modes (elastic, plastic, and brittle), depending on the properties of the material and the magnitude of the stress field. Stress is defined as the average force per unit area exerted on each part of the rock. Pressure is the part of stress that changes the volume of a solid; shear stress changes the shape. If there is no shear, the fluid is in hydrostatic equilibrium. Since, over long periods, rocks readily deform under pressure, the Earth is in hydrostatic equilibrium to a good approximation. The pressure on rock depends only on the weight of the rock above, and this depends on gravity and the density of the rock. In a body like the Moon, the density is almost constant, so a pressure profile is readily calculated. In the Earth, the compression of rocks with depth is significant, and an equation of state is needed to calculate changes in density of rock even when it is of uniform composition.
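As a concrete illustration of the hydrostatic argument, the pressure at depth z in a body of constant density reduces to P(z) = ρgz. The sketch below evaluates this for lunar-like values (ρ ≈ 3300 kg/m³, surface gravity 1.62 m/s²), treating g as constant near the surface; for the Earth, as noted above, an equation of state for the compressed rock would be needed instead.

```python
# Minimal sketch: hydrostatic pressure profile for a constant-density body,
# with gravity treated as constant near the surface (adequate only for
# shallow depths; not valid for the compressible Earth).
def pressure_profile(depths_m, rho=3300.0, g=1.62):
    """Pressure (Pa) at each depth, P = rho * g * z."""
    return [rho * g * z for z in depths_m]

depths = (1e3, 1e4, 1e5)   # 1 km, 10 km, 100 km
for z, P in zip(depths, pressure_profile(depths)):
    print(f"depth {z:>9.0f} m : P ≈ {P:.2e} Pa")
```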
=== Elastic ===
Elastic deformation is always reversible, which means that if the stress field associated with elastic deformation is removed, the material will return to its previous state. Materials only behave elastically when the relative arrangement of material components (e.g. atoms or crystals) along the axis being considered remains unchanged. This means that the magnitude of the stress cannot exceed the yield strength of a material, and the time scale of the stress cannot approach the relaxation time of the material. If stress exceeds the yield strength of a material, bonds begin to break (and reform), which can lead to ductile or brittle deformation.
=== Ductile ===
Ductile or plastic deformation happens when the temperature of a system is high enough so that a significant fraction of the material microstates (figure 1) are unbound, which means that a large fraction of the chemical bonds are in the process of being broken and reformed. During ductile deformation, this process of atomic rearrangement redistributes stress and strain towards equilibrium faster than they can accumulate. Examples include bending of the lithosphere under volcanic islands or sedimentary basins, and bending at oceanic trenches. Ductile deformation happens when transport processes such as diffusion and advection, which rely on chemical bonds being broken and reformed, redistribute strain about as fast as it accumulates.
=== Brittle ===
When strain localizes faster than these relaxation processes can redistribute it, brittle deformation occurs. The mechanism for brittle deformation involves a positive feedback between the accumulation or propagation of defects (especially those produced by strain in areas of high strain) and the localization of strain along these dislocations and fractures. In other words, any fracture, however small, tends to focus strain at its leading edge, which causes the fracture to extend.
In general, the mode of deformation is controlled not only by the amount of stress, but also by the distribution of strain and strain associated features. Whichever mode of deformation ultimately occurs is the result of a competition between processes that tend to localize strain, such as fracture propagation, and relaxational processes, such as annealing, that tend to delocalize strain.
=== Deformation structures ===
Structural geologists study the results of deformation, using observations of rock, especially the mode and geometry of deformation to reconstruct the stress field that affected the rock over time. Structural geology is an important complement to geodynamics because it provides the most direct source of data about the movements of the Earth. Different modes of deformation result in distinct geological structures, e.g. brittle fracture in rocks or ductile folding.
== Thermodynamics ==
The physical characteristics of rocks that control the rate and mode of strain, such as yield strength or viscosity, depend on the thermodynamic state of the rock and its composition. The most important thermodynamic variables in this case are temperature and pressure. Both of these increase with depth, so to a first approximation the mode of deformation can be understood in terms of depth. Within the upper lithosphere, brittle deformation is common because under low pressure rocks have relatively low brittle strength, while at the same time low temperature reduces the likelihood of ductile flow. Below the brittle-ductile transition zone, ductile deformation becomes dominant. Elastic deformation happens when the time scale of stress is shorter than the relaxation time for the material. Seismic waves are a common example of this type of deformation. At temperatures high enough to melt rocks, the ductile shear strength approaches zero, which is why shear-mode elastic deformation (S-waves) will not propagate through melts.
== Forces ==
The main motive force behind stress in the Earth is provided by thermal energy from radioisotope decay, friction, and residual heat. Cooling at the surface and heat production within the Earth create a metastable thermal gradient from the hot core to the relatively cool lithosphere. This thermal energy is converted into mechanical energy by thermal expansion. Deeper and hotter rocks often have higher thermal expansion and lower density relative to overlying rocks. Conversely, rock that is cooled at the surface can become less buoyant than the rock below it. Eventually this can lead to a Rayleigh-Taylor instability (Figure 2), or interpenetration of rock on different sides of the buoyancy contrast.
Negative thermal buoyancy of the oceanic plates is the primary cause of subduction and plate tectonics, while positive thermal buoyancy may lead to mantle plumes, which could explain intraplate volcanism. The relative importance of heat production vs. heat loss for buoyant convection throughout the whole Earth remains uncertain and understanding the details of buoyant convection is a key focus of geodynamics.
== Methods ==
Geodynamics is a broad field which combines observations from many different types of geological study into a broad picture of the dynamics of Earth. Close to the surface of the Earth, data include field observations, geodesy, radiometric dating, petrology, mineralogy, drilling boreholes and remote sensing techniques. However, beyond a few kilometers depth, most of these kinds of observations become impractical. Geologists studying the geodynamics of the mantle and core must rely entirely on remote sensing, especially seismology, and on experimentally recreating the conditions found in the Earth in high pressure, high temperature experiments (see also the Adams–Williamson equation).
=== Numerical modeling ===
Because of the complexity of geological systems, computer modeling is used to test theoretical predictions about geodynamics using data from these sources.
There are two main ways of geodynamic numerical modeling.
Modelling to reproduce a specific observation: This approach aims to answer what causes a specific state of a particular system.
Modelling to produce basic fluid dynamics: This approach aims to answer how a specific system works in general.
Basic fluid dynamics modelling can further be subdivided into instantaneous studies, which aim to reproduce the instantaneous flow in a system due to a given buoyancy distribution, and time-dependent studies, which either aim to reproduce a possible evolution of a given initial condition over time or a statistical (quasi) steady-state of a given system.
== See also ==
Computational Infrastructure for Geodynamics – Organization that advances Earth science
Cytherodynamics
== References ==
Bibliography
== External links ==
Geological Survey of Canada - Geodynamics Program
Geodynamics Homepage - JPL/NASA
NASA Planetary geodynamics
Los Alamos National Laboratory–Geodynamics & National Security
Computational Infrastructure for Geodynamics Archived 2014-05-17 at the Wayback Machine | Wikipedia/Geodynamics |
Interface and colloid science is an interdisciplinary intersection of branches of chemistry, physics, nanoscience and other fields dealing with colloids, heterogeneous systems consisting of a mechanical mixture of particles between 1 nm and 1000 nm dispersed in a continuous medium. A colloidal solution is a heterogeneous mixture in which the particle size of the substance is intermediate between that of a true solution and a suspension, i.e. between 1 and 1000 nm. Smoke from a fire is an example of a colloidal system in which tiny particles of solid float in air. Just like true solutions, colloidal particles are small and cannot be seen by the naked eye. They easily pass through filter paper. But colloidal particles are big enough to be blocked by parchment paper or animal membrane.
Interface and colloid science has applications and ramifications in the chemical industry, pharmaceuticals, biotechnology, ceramics, minerals, nanotechnology, and microfluidics, among others.
There are many books dedicated to this scientific discipline, and there is a glossary of terms, Nomenclature in Dispersion Science and Technology, published by the US National Institute of Standards and Technology.
== See also ==
Interface (matter)
Electrokinetic phenomena
Surface science
== References ==
== External links ==
Max Planck Institute of Colloids and Interfaces
American Chemical Society division of Colloid & Surface Chemistry | Wikipedia/Interface_and_colloid_science |
In crystallography, a crystallographic point group is a three-dimensional point group whose symmetry operations are compatible with a three-dimensional crystallographic lattice. According to the crystallographic restriction it may only contain one-, two-, three-, four- and sixfold rotations or rotoinversions. This reduces the number of crystallographic point groups to 32 (from an infinity of general point groups). These 32 groups are the same as the 32 types of morphological (external) crystalline symmetries derived in 1830 by Johann Friedrich Christian Hessel from a consideration of observed crystal forms. In 1867 Axel Gadolin, who was unaware of the previous work of Hessel, found the crystallographic point groups independently using stereographic projection to represent the symmetry elements of the 32 groups.
In the classification of crystals, to each space group is associated a crystallographic point group by "forgetting" the translational components of the symmetry operations, that is, by turning screw rotations into rotations, glide reflections into reflections and moving all symmetry elements into the origin. Each crystallographic point group defines the (geometric) crystal class of the crystal.
The point group of a crystal determines, among other things, the directional variation of physical properties that arise from its structure, including optical properties such as birefringence, or electro-optical features such as the Pockels effect.
== Notation ==
The point groups are named according to their component symmetries. There are several standard notations used by crystallographers, mineralogists, and physicists.
For the correspondence of the two systems below, see crystal system.
=== Schoenflies notation ===
In Schoenflies notation, point groups are denoted by a letter symbol with a subscript. The symbols used in crystallography mean the following:
Cn (for cyclic) indicates that the group has an n-fold rotation axis. Cnh is Cn with the addition of a mirror (reflection) plane perpendicular to the axis of rotation. Cnv is Cn with the addition of n mirror planes parallel to the axis of rotation.
S2n (for Spiegel, German for mirror) denotes a group with only a 2n-fold rotation-reflection axis.
Dn (for dihedral, or two-sided) indicates that the group has an n-fold rotation axis plus n twofold axes perpendicular to that axis. Dnh has, in addition, a mirror plane perpendicular to the n-fold axis. Dnd has, in addition to the elements of Dn, mirror planes parallel to the n-fold axis.
The letter T (for tetrahedron) indicates that the group has the symmetry of a tetrahedron. Td includes improper rotation operations, T excludes improper rotation operations, and Th is T with the addition of an inversion.
The letter O (for octahedron) indicates that the group has the symmetry of an octahedron, with (Oh) or without (O) improper operations (those that change handedness).
Due to the crystallographic restriction theorem, n = 1, 2, 3, 4, or 6 in 2- or 3-dimensional space.
D4d and D6d are actually forbidden because they contain improper rotations with n=8 and 12 respectively. The 27 point groups in the table plus T, Td, Th, O and Oh constitute 32 crystallographic point groups.
=== Hermann–Mauguin notation ===
An abbreviated form of the Hermann–Mauguin notation commonly used for space groups also serves to describe crystallographic point groups. Group names are
=== The correspondence between different notations ===
== Isomorphisms ==
Many of the crystallographic point groups share the same internal structure. For example, the point groups 1̄, 2, and m contain different geometric symmetry operations (inversion, rotation, and reflection, respectively) but all share the structure of the cyclic group C2. All isomorphic groups are of the same order, but not all groups of the same order are isomorphic. The point groups which are isomorphic are shown in the following table:
This table makes use of cyclic groups (C1, C2, C3, C4, C6), dihedral groups (D2, D3, D4, D6), one of the alternating groups (A4), and one of the symmetric groups (S4). Here the symbol " × " indicates a direct product.
== Deriving the crystallographic point group (crystal class) from the space group ==
Leave out the Bravais lattice type.
Convert all symmetry elements with translational components into their respective symmetry elements without translation symmetry. (Glide planes are converted into simple mirror planes; screw axes are converted into simple axes of rotation.)
Axes of rotation, rotoinversion axes, and mirror planes remain unchanged.
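At the level of Hermann–Mauguin symbols, this procedure can be mimicked by simple string substitutions. The sketch below is a string-level illustration only (it ignores unconventional settings and does not validate symbols), not a full crystallographic parser.

```python
# Minimal sketch: derive a point-group symbol from a Hermann-Mauguin space
# group symbol by dropping the lattice letter, replacing glide planes with
# mirrors, and stripping screw-axis translation digits. Illustration only.
GLIDES = set("abcnde")

def simplify_element(el):
    el = "".join("m" if ch in GLIDES else ch for ch in el)   # glides -> m
    if len(el) >= 2 and el[0].isdigit() and el[1].isdigit():
        el = el[0] + el[2:]                                  # screw axis -> rotation
    return el

def point_group_from_space_group(sg):
    elements = sg.split()[1:]            # leave out the Bravais lattice type
    return "".join("/".join(simplify_element(p) for p in el.split("/"))
                   for el in elements)

print(point_group_from_space_group("P 21/c"))    # -> 2/m
print(point_group_from_space_group("C m c 21"))  # -> mm2
print(point_group_from_space_group("I a -3 d"))  # -> m-3m
```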
== See also ==
Molecular symmetry
Point group
Space group
Point groups in three dimensions
Crystal system
== References ==
== External links ==
Point-group symbols in International Tables for Crystallography (2006). Vol. A, ch. 12.1, pp. 818-820
Names and symbols of the 32 crystal classes in International Tables for Crystallography (2006). Vol. A, ch. 10.1, p. 794
Pictorial overview of the 32 groups | Wikipedia/Crystallographic_point_group |
High-resolution transmission electron microscopy is an imaging mode of specialized transmission electron microscopes that allows for direct imaging of the atomic structure of samples. It is a powerful tool to study properties of materials on the atomic scale, such as semiconductors, metals, nanoparticles and sp2-bonded carbon (e.g., graphene, C nanotubes). While this term is often also used to refer to high resolution scanning transmission electron microscopy, mostly in high angle annular dark field mode, this article describes mainly the imaging of an object by recording the two-dimensional spatial wave amplitude distribution in the image plane, similar to a "classic" light microscope. For disambiguation, the technique is also often referred to as phase contrast transmission electron microscopy, although this term is less appropriate. At present, the highest point resolution realised in high resolution transmission electron microscopy is around 0.5 ångströms (0.050 nm). At these small scales, individual atoms of a crystal and defects can be resolved. For 3-dimensional crystals, it is necessary to combine several views, taken from different angles, into a 3D map. This technique is called electron tomography.
One of the difficulties with high resolution transmission electron microscopy is that image formation relies on phase contrast. In phase-contrast imaging, contrast is not intuitively interpretable, as the image is influenced by aberrations of the imaging lenses in the microscope. The largest contributions for uncorrected instruments typically come from defocus, spherical aberration and astigmatism of the objective lens. These values can be estimated from the so-called Thon ring pattern (named for Friedrich Thon) appearing in the Fourier transform modulus of an image of a thin amorphous film.
== Image contrast and interpretation ==
The contrast of a high resolution transmission electron microscopy image arises from the interference in the image plane of the electron wave with itself. Due to our inability to record the phase of an electron wave, only the amplitude in the image plane is recorded. However, a large part of the structure information of the sample is contained in the phase of the electron wave. In order to detect it, the aberrations of the microscope (like defocus) have to be tuned in a way that converts the phase of the wave at the specimen exit plane into amplitudes in the image plane.
The interaction of the electron wave with the crystallographic structure of the sample is complex, but a qualitative idea of the interaction can readily be obtained. Each imaging electron interacts independently with the sample. Above the sample, the wave of an electron can be approximated as a plane wave incident on the sample surface. As it penetrates the sample, it is attracted by the positive atomic potentials of the atom cores, and channels along the atom columns of the crystallographic lattice (s-state model). At the same time, the interaction between the electron wave in different atom columns leads to Bragg diffraction. The exact description of dynamical scattering of electrons in a sample not satisfying the weak phase object approximation, which is almost all real samples, still remains the holy grail of electron microscopy. However, the physics of electron scattering and electron microscope image formation are sufficiently well known to allow accurate simulation of electron microscope images.
As a result of the interaction with a crystalline sample, the electron exit wave right below the sample φe(x,u) as a function of the spatial coordinate x is a superposition of a plane wave and a multitude of diffracted beams with different in-plane spatial frequencies u (spatial frequencies correspond to scattering angles, or distances of rays from the optical axis in a diffraction plane). The phase change of φe(x,u) relative to the incident wave peaks at the location of the atom columns. The exit wave now passes through the imaging system of the microscope where it undergoes further phase change and interferes as the image wave in the imaging plane (mostly a digital pixel detector like a CCD camera). The recorded image is not a direct representation of the sample's crystallographic structure. For instance, high intensity might or might not indicate the presence of an atom column in that precise location (see simulation). The relationship between the exit wave and the image wave is a highly nonlinear one and is a function of the aberrations of the microscope. It is described by the contrast transfer function.
=== The phase contrast transfer function ===
The phase contrast transfer function is a function of limiting apertures and aberrations in the imaging lenses of a microscope. It describes their effect on the phase of the exit wave φe(x,u) and propagates it to the image wave. Following Williams and Carter, if one assumes the weak phase object approximation (thin sample), the contrast transfer function becomes
\mathrm{CTF}(u) = A(u)\,E(u)\,2\sin(\chi(u))
where A(u) is the aperture function, E(u) describes the attenuation of the wave for higher spatial frequency u, also called envelope function. χ(u) is a function of the aberrations of the electron optical system.
The last, sinusoidal term of the contrast transfer function will determine the sign with which components of frequency u will enter contrast in the final image. If one takes into account only spherical aberration to third order and defocus, χ is rotationally symmetric about the optical axis of the microscope and thus only depends on the modulus u = |u|, given by
\chi(u) = \frac{\pi}{2}C_s\lambda^3 u^4 - \pi\,\Delta f\,\lambda u^2
where Cs is the spherical aberration coefficient, λ is the electron wavelength, and Δf is the defocus. In transmission electron microscopy, defocus can easily be controlled and measured to high precision. Thus one can easily alter the shape of the contrast transfer function by defocusing the sample. Contrary to optical applications, defocusing can increase the precision and interpretability of the micrographs.
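A short numerical sketch makes the role of χ(u) concrete. Sign conventions for defocus differ between texts; in the sketch below underfocus is entered as a positive number so that the defocus term offsets the spherical-aberration term, which reproduces the point resolution of roughly 6 nm⁻¹ quoted later in this section for the CM300 (C_s = 0.6 mm, λ = 1.97 pm).

```python
# Minimal sketch: chi(u) at (extended) Scherzer defocus for CM300-like values,
# and the first zero crossing, which marks the point resolution. Underfocus
# is taken positive here; defocus sign conventions vary between texts.
import numpy as np

C_s, lam = 0.6e-3, 1.97e-12                 # m
df = 1.2 * np.sqrt(C_s * lam)               # |extended Scherzer defocus|

def chi(u):
    return 0.5 * np.pi * C_s * lam ** 3 * u ** 4 - np.pi * df * lam * u ** 2

u = np.linspace(1e8, 8e9, 4000)             # spatial frequency, 1/m
crossings = np.where(np.diff(np.sign(chi(u))) != 0)[0]
print(f"|Scherzer defocus| ≈ {df * 1e9:.1f} nm")                  # ≈ 41 nm
print(f"first zero of CTF ≈ {u[crossings[0]] / 1e9:.1f} nm^-1")   # ≈ 6 nm^-1
```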
The aperture function cuts off beams scattered above a certain critical angle (set by the objective pole piece, for example), thus effectively limiting the attainable resolution. However, it is the envelope function E(u) which usually dampens the signal of beams scattered at high angles, and imposes a maximum on the transmitted spatial frequency. This maximum determines the highest resolution attainable with a microscope and is known as the information limit. E(u) can be described as a product of single envelopes:
E(u) = E_s(u)\,E_c(u)\,E_d(u)\,E_v(u)\,E_D(u),
due to
Es(u): angular spread of the source
Ec(u): chromatic aberration
Ed(u): specimen drift
Ev(u): specimen vibration
ED(u): detector
Specimen drift and vibration can be minimized in a stable environment. It is usually the spherical aberration Cs that limits spatial coherency and defines Es(u) and the chromatic aberration Cc, together with current and voltage instabilities that define the temporal coherency in Ec(u). These two envelopes determine the information limit by damping the signal transfer in Fourier space with increasing spatial frequency u
E_s(u) = \exp\left[-\left(\frac{\pi\alpha}{\lambda}\right)^2\left(\frac{\delta\chi(u)}{\delta u}\right)^2\right] = \exp\left[-\left(\frac{\pi\alpha}{\lambda}\right)^2\left(C_s\lambda^3 u^3 + \Delta f\,\lambda u\right)^2\right],
where α is the semiangle of the pencil of rays illuminating the sample. Clearly, if the wave aberration (here represented by Cs and Δf) vanished, this envelope function would be a constant. In the case of an uncorrected transmission electron microscope with fixed Cs, the damping due to this envelope function can be minimized by optimizing the defocus at which the image is recorded (Lichte defocus).
The temporal envelope function can be expressed as
E_c(u) = \exp\left[-\frac{1}{2}\left(\pi\lambda\delta\right)^2 u^4\right].
Here, δ is the focal spread with the chromatic aberration Cc as the parameter:
\delta = C_c\sqrt{4\left(\frac{\Delta I_\text{obj}}{I_\text{obj}}\right)^2 + \left(\frac{\Delta E}{V_\text{acc}}\right)^2 + \left(\frac{\Delta V_\text{acc}}{V_\text{acc}}\right)^2}
The terms ΔI_obj/I_obj and ΔV_acc/V_acc represent instabilities in the total current of the magnetic lenses and in the acceleration voltage, and ΔE/V_acc is the energy spread of electrons emitted by the source.
The information limit of current state-of-the-art transmission electron microscopes is well below 1 Å. The TEAM project at Lawrence Berkeley National Laboratory resulted in the first transmission electron microscope to reach an information limit of <0.5 Å in 2009 by the use of a highly stable mechanical and electrical environment, an ultra-bright, monochromated electron source and double-hexapole aberration correctors.
=== Optimum defocus in high resolution transmission electron microscopy ===
Choosing the optimum defocus is crucial to fully exploit the capabilities of an electron microscope in high resolution transmission electron microscopy mode. However, there is no simple answer as to which one is the best.
In Gaussian focus, one sets the defocus to zero; the sample is in focus. As a consequence, contrast in the image plane gets its image components from the minimal area of the sample and the contrast is localized (no blurring and information overlap from other parts of the sample). The contrast transfer function becomes a function that oscillates quickly with C_s u^4. This means that for certain diffracted beams with a spatial frequency u the contribution to contrast in the recorded image will be reversed, thus making interpretation of the image difficult.
==== Scherzer defocus ====
In Scherzer defocus, one aims to counter the term in u4 with the parabolic term Δfu2 of χ(u). Thus by choosing the right defocus value Δf one flattens χ(u) and creates a wide band where low spatial frequencies u are transferred into image intensity with a similar phase. In 1949, Scherzer found that the optimum defocus depends on microscope properties like the spherical aberration Cs and the accelerating voltage (through λ) in the following way:
\Delta f_\text{Scherzer} = -1.2\sqrt{C_s\lambda}
where the factor 1.2 defines the extended Scherzer defocus. For the CM300 at NCEM, C_s = 0.6 mm and an accelerating voltage of 300 keV (λ = 1.97 pm) result in Δf_Scherzer = −41.25 nm.
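The quoted wavelength follows from the relativistic de Broglie relation λ = h / √(2m₀eV(1 + eV/2m₀c²)). A quick numerical check with standard physical constants:

```python
# Minimal sketch: relativistic electron wavelength for a given accelerating
# voltage, checked against the 1.97 pm quoted for 300 keV electrons.
import math

h, m0, e, c = 6.62607e-34, 9.10938e-31, 1.60218e-19, 2.99792e8

def electron_wavelength(V):
    """de Broglie wavelength (m) of electrons accelerated through V volts."""
    eV = e * V
    return h / math.sqrt(2 * m0 * eV * (1 + eV / (2 * m0 * c ** 2)))

print(f"lambda(300 kV) ≈ {electron_wavelength(300e3) * 1e12:.2f} pm")  # ≈ 1.97 pm
```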
The point resolution of a microscope is defined as the spatial frequency ures where the contrast transfer function crosses the abscissa for the first time. At Scherzer defocus this value is maximized:
u_\text{res}(\text{Scherzer}) = \left(0.6\,\lambda^{3/4} C_s^{1/4}\right)^{-1},
which corresponds to 6.1 nm^−1 on the CM300. Contributions with a spatial frequency higher than the point resolution can be filtered out with an appropriate aperture, leading to easily interpretable images at the cost of a significant loss of information.
==== Gabor defocus ====
Gabor defocus is used in electron holography where both amplitude and phase of the image wave are recorded. One thus wants to minimize crosstalk between the two. The Gabor defocus can be expressed as a function of the Scherzer defocus as
\Delta f_\text{Gabor} = 0.56\,\Delta f_\text{Scherzer}
==== Lichte defocus ====
To exploit all beams transmitted through the microscope up to the information limit, one relies on a complex method called exit wave reconstruction which consists in mathematically reversing the effect of the contrast transfer function to recover the original exit wave φe(x,u). To maximize the information throughput, Hannes Lichte proposed in 1991 a defocus of a fundamentally different nature than the Scherzer defocus: because the dampening of the envelope function scales with the first derivative of χ(u), Lichte proposed a focus minimizing the modulus of dχ(u)/du
\Delta f_\text{Lichte} = -0.75\,C_s (u_{\max}\lambda)^2,
where u_max is the maximum transmitted spatial frequency. For the CM300 with an information limit of 0.8 Å, the Lichte defocus lies at −272 nm.
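The −272 nm figure can be checked directly from the formula; the short sketch below does so with the values quoted in the text (C_s = 0.6 mm, λ = 1.97 pm, information limit 0.8 Å).

```python
# Minimal sketch: numerical check of the Lichte defocus for CM300-like values.
C_s = 0.6e-3               # m
lam = 1.97e-12             # m
u_max = 1.0 / 0.8e-10      # information limit of 0.8 angstrom, in 1/m

df_lichte = -0.75 * C_s * (u_max * lam) ** 2
print(f"Lichte defocus ≈ {df_lichte * 1e9:.0f} nm")   # ≈ -273 nm (text: -272 nm)
```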
=== Exit wave reconstruction ===
To recover φe(x,u), the wave in the image plane is numerically propagated back to the sample. If all properties of the microscope are well known, it is possible to recover the real exit wave with very high accuracy.
First however, both phase and amplitude of the electron wave in the image plane must be measured. As our instruments only record amplitudes, an alternative method to recover the phase has to be used. There are two methods in use today:
Holography, which was developed by Gabor expressly for transmission electron microscopy applications, uses a prism to split the beam into a reference beam and a second one passing through the sample. Phase changes between the two are then translated in small shifts of the interference pattern, which allows recovering both phase and amplitude of the interfering wave.
Through focal series method takes advantage of the fact that the contrast transfer function is focus dependent. A series of about 20 pictures is shot under the same imaging conditions with the exception of the focus which is incremented between each take. Together with exact knowledge of the contrast transfer function, the series allows for computation of φe(x,u) (see figure).
Both methods extend the point resolution of the microscope past the information limit, which is the highest possible resolution achievable on a given machine. The ideal defocus value for this type of imaging is known as Lichte defocus and is usually several hundred nanometers negative.
== See also ==
== Articles ==
Topical review "Optics of high-performance electron Microscopes" Sci. Technol. Adv. Mater. 9 (2008) 014107 (30pages) free download
CTF Explorer by Max V. Sidorov, freeware program to calculate the contrast transfer function
High Resolution Transmission Electron Microscopy Overview
== References == | Wikipedia/High-resolution_electron_microscopy |
A crystallographic database is a database specifically designed to store information about the structure of molecules and crystals. Crystals are solids having, in all three dimensions of space, a regularly repeating arrangement of atoms, ions, or molecules. They are characterized by symmetry, morphology, and directionally dependent physical properties. A crystal structure describes the arrangement of atoms, ions, or molecules in a crystal. (Molecules need to crystallize into solids so that their regularly repeating arrangements can be taken advantage of in X-ray, neutron, and electron diffraction based crystallography).
Crystal structures of crystalline material are typically determined from X-ray or neutron single-crystal diffraction data and stored in crystal structure databases. They are routinely identified by comparing reflection intensities and lattice spacings from X-ray powder diffraction data with entries in powder-diffraction fingerprinting databases.
Crystal structures of nanometer sized crystalline samples can be determined via structure factor amplitude information from single-crystal electron diffraction data or structure factor amplitude and phase angle information from Fourier transforms of HRTEM images of crystallites. They are stored in crystal structure databases specializing in nanocrystals and can be identified by comparing zone axis subsets in lattice-fringe fingerprint plots with entries in a lattice-fringe fingerprinting database.
Crystallographic databases differ in access and usage rights and offer varying degrees of search and analysis capacity. Many provide structure visualization capabilities. They can be browser based or installed locally. Newer versions are built on the relational database model and support the Crystallographic Information File (CIF) as a universal data exchange format.
== Overview ==
Crystallographic data are primarily extracted from published scientific articles and supplementary material. Newer versions of crystallographic databases are built on the relational database model, which enables efficient cross-referencing of tables. Cross-referencing serves to derive additional data or enhance the search capacity of the database.
Data exchange among crystallographic databases, structure visualization software, and structure refinement programs has been facilitated by the emergence of the Crystallographic Information File (CIF) format. The CIF format is the standard file format for the exchange and archiving of crystallographic data.
It was adopted by the International Union of Crystallography (IUCr), who also provides full specifications of the format. It is supported by all major crystallographic databases.
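At its core, CIF is a plain-text tag–value format (for example, a line such as `_cell_length_a 5.4309`), which makes simple entries easy to extract. The sketch below reads only single-line scalar items; the file name is hypothetical, and loop_ constructs, multi-line values, and quoting are deliberately ignored, so real applications should use a dedicated CIF library.

```python
# Minimal sketch: pull single-line scalar data items out of a CIF file.
# Hypothetical file name; loop_ blocks and quoted/multi-line values ignored.
def read_cif_scalars(path):
    items = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line.startswith("_"):
                parts = line.split(None, 1)
                if len(parts) == 2:        # tag and value on the same line
                    items[parts[0]] = parts[1]
    return items

cif = read_cif_scalars("example_structure.cif")      # hypothetical file
print(cif.get("_cell_length_a"))
print(cif.get("_symmetry_space_group_name_H-M"))
```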
The increasing automation of the crystal structure determination process has resulted in ever higher publishing rates of new crystal structures and, consequentially, new publishing models. Minimalistic articles contain only crystal structure tables, structure images, and, possibly, abstract-like structure description. They tend to be published in author-financed or subsidized open-access journals. Acta Crystallographica Section E and Zeitschrift für Kristallographie belong in this category. More elaborate contributions may go to traditional subscriber-financed journals. Hybrid journals, on the other hand, embed individual author-financed open-access articles among subscriber-financed ones. Publishers may also make scientific articles available online, as Portable Document Format (PDF) files.
Crystal structure data in CIF format are linked to scientific articles as supplementary material. CIFs may be accessible directly from the publisher's website, crystallographic databases, or both. In recent years, many publishers of crystallographic journals have come to interpret CIFs as formatted versions of open data, i.e. representing non-copyrightable facts, and therefore tend to make them freely available online, independent of the accessibility status of linked scientific articles.
== Trends ==
As of 2008, more than 700,000 crystal structures had been published and stored in crystal structure databases. The publishing rate has reached more than 50,000 crystal structures per year. These numbers refer to published and republished crystal structures from experimental data. Crystal structures are republished owing to corrections for symmetry errors, improvements of lattice and atomic parameters, and differences in diffraction technique or experimental conditions. As of 2016, there are about 1,000,000 molecule and crystal structures known and published, approximately half of them in open access.
Crystal structures are typically categorized as minerals, metals-alloys, inorganics, organics, nucleic acids, and biological macromolecules. Individual crystal structure databases cater for users in specific chemical, molecular-biological, or related disciplines by covering super- or subsets of these categories. Minerals are a subset of mostly inorganic compounds. The category ‘metals-alloys’ covers metals, alloys, and intermetallics. Metals-alloys and inorganics can be merged into ‘non-organics’. Organic compounds and biological macromolecules are separated according to molecular size. Organic salts and organometallics tend to be attributed to organics, while metalloproteins tend to be attributed to biological macromolecules. Nucleic acids are a subset of biological macromolecules.
Comprehensiveness can refer to the number of entries in a database. On those terms, a crystal structure database can be regarded as comprehensive, if it contains a collection of all (re-)published crystal structures in the category of interest and is updated frequently. Searching for structures in such a database can replace more time-consuming scanning of the open literature. Access to crystal structure databases differs widely. It can be divided into reading and writing access. Reading access rights (search, download) affect the number and range of users. Restricted reading access is often coupled with restricted usage rights. Writing access rights (upload, edit, delete), on the other hand, determine the number and range of contributors to the database. Restricted writing access is often coupled with high data integrity.
In terms of user numbers and daily access rates, comprehensive and thoroughly vetted open-access crystal structure databases naturally surpass comparable databases with more restricted access and usage rights. Independent of comprehensiveness, open-access crystal structure databases have spawned open-source software projects, such as search-analysis tools, visualization software, and derivative databases. Scientific progress has been slowed down by restricting access or usage rights as well as by limiting comprehensiveness or data integrity. Restricted access or usage rights are commonly associated with commercial crystal structure databases. Lack of comprehensiveness or data integrity, on the other hand, is associated with some of the open-access crystal structure databases other than the Crystallography Open Database (COD) and its macromolecular open-access counterpart, the Worldwide Protein Data Bank. Apart from that, several crystal structure databases are freely available for primarily educational purposes, in particular mineralogical databases and educational offshoots of the COD.
Crystallographic databases can specialize in crystal structures, crystal phase identification, crystallization, crystal morphology, or various physical properties. More integrative databases combine several categories of compounds or specializations. Structures of incommensurate phases, 2D materials, nanocrystals, thin films on substrates, and predicted crystal structures are collected in tailored special structure databases.
== Search ==
Search capacities of crystallographic databases differ widely. Basic functionality comprises search by keywords, physical properties, and chemical elements. Of particular importance is search by compound name and lattice parameters. Very useful are search options that allow the use of wildcard characters and logical connectives in search strings. If supported, the scope of the search can be constrained by the exclusion of certain chemical elements.
More sophisticated algorithms depend on the material type covered. Organic compounds might be searched for on the basis of certain molecular fragments. Inorganic compounds, on the other hand, might be of interest with regard to a certain type of coordination geometry. More advanced algorithms deal with conformation analysis (organics), supramolecular chemistry (organics), interpolyhedral connectivity (‘non-organics’) and higher-order molecular structures (biological macromolecules). Search algorithms used for a more complex analysis of physical properties, e.g. phase transitions or structure-property relationships, might apply group-theoretical concepts.
Modern versions of crystallographic databases are based on the relational database model. Communication with the database usually happens via a dialect of the Structured Query Language (SQL). Web-based databases typically process the search algorithm on the server interpreting supported scripting elements, while desktop-based databases run locally installed and usually precompiled search engines.
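As a rough illustration of such SQL-based communication, the following Python sketch queries a hypothetical SQLite table of structures by lattice parameter. The table name, columns, and tolerance are assumptions for the example; actual crystallographic databases use their own schemas and SQL dialects.

```python
# Minimal sketch of a lattice-parameter search, assuming a hypothetical
# SQLite table "structures(name, a, b, c, alpha, beta, gamma)".
import sqlite3

def find_by_cell(conn, a, tol=0.02):
    """Return entries whose a-axis length matches within a relative tolerance."""
    cur = conn.execute(
        "SELECT name, a, b, c FROM structures WHERE a BETWEEN ? AND ?",
        (a * (1 - tol), a * (1 + tol)),
    )
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE structures(name TEXT, a REAL, b REAL, c REAL, "
             "alpha REAL, beta REAL, gamma REAL)")
conn.execute("INSERT INTO structures VALUES "
             "('rock salt', 5.64, 5.64, 5.64, 90, 90, 90)")
print(find_by_cell(conn, 5.6))   # [('rock salt', 5.64, 5.64, 5.64)]
```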
== Crystal phase identification ==
Crystalline material may be divided into single crystals, twin crystals, polycrystals, and crystal powder. In a single crystal, the arrangement of atoms, ions, or molecules is defined by a single crystal structure in one orientation. Twin crystals, on the other hand, consist of single-crystalline twin domains, which are aligned by twin laws and separated by domain walls.
Polycrystals are made of a large number of small single crystals, or crystallites, held together by thin layers of amorphous solid. Crystal powder is obtained by grinding crystals, resulting in powder particles, made up of one or more crystallites. Both polycrystals and crystal powder consist of many crystallites with varying orientation.
Crystal phases are defined as regions with the same crystal structure, irrespective of orientation or twinning. Single and twinned crystalline specimens therefore constitute individual crystal phases. Polycrystalline or crystal powder samples may consist of more than one crystal phase. Such a phase comprises all the crystallites in the sample with the same crystal structure.
Crystal phases can be identified by successfully matching suitable crystallographic parameters with their counterparts in database entries. Prior knowledge of the chemical composition of the crystal phase can be used to reduce the number of database entries to a small selection of candidate structures and thus simplify the crystal phase identification process considerably.
=== Powder diffraction fingerprinting (1D) ===
Applying standard diffraction techniques to crystal powders or polycrystals is tantamount to collapsing the 3D reciprocal space, as obtained via single-crystal diffraction, onto a 1D axis. The resulting partial-to-total overlap of symmetry-independent reflections renders the structure determination process more difficult, if not impossible.
Powder diffraction data can be plotted as diffracted intensity (I) versus reciprocal lattice spacing (1/d). Reflection positions and intensities of known crystal phases, mostly from X-ray diffraction data, are stored, as d-I data pairs, in the Powder Diffraction File (PDF) database. The list of d-I data pairs is highly characteristic of a crystal phase and, thus, suitable for the identification, also called ‘fingerprinting’, of crystal phases.
Search-match algorithms compare selected test reflections of an unknown crystal phase with entries in the database. Intensity-driven algorithms utilize the three most intense lines (so-called ‘Hanawalt search’), while d-spacing-driven algorithms are based on the eight to ten largest d-spacings (so-called ‘Fink search’).
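The intensity-driven idea can be illustrated with a toy Python sketch: the three strongest lines of an unknown pattern are compared against stored d-I lists. The d-spacings, intensities, and tolerance below are illustrative, not real Powder Diffraction File entries.

```python
# Toy sketch of an intensity-driven ("Hanawalt-like") search: compare the
# three strongest lines of an unknown pattern against stored d-I lists.

def strongest_lines(pattern, n=3):
    """Return the n most intense (d, I) reflections."""
    return sorted(pattern, key=lambda di: di[1], reverse=True)[:n]

def matches(unknown, entry, d_tol=0.02):
    """True if each strong line of the unknown has a counterpart in the entry."""
    entry_ds = [d for d, _ in entry]
    return all(
        any(abs(d - d_ref) <= d_tol for d_ref in entry_ds)
        for d, _ in strongest_lines(unknown)
    )

database = {
    "phase A": [(3.14, 100), (2.22, 60), (1.92, 40)],   # (d / angstrom, rel. I)
    "phase B": [(2.81, 100), (1.99, 55), (1.63, 15)],
}
unknown = [(3.13, 95), (2.23, 70), (1.91, 35), (1.50, 10)]
print([name for name, entry in database.items() if matches(unknown, entry)])
# -> ['phase A']
```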
X-ray powder diffraction fingerprinting has become the standard tool for the identification of single or multiple crystal phases and is widely used in such fields as metallurgy, mineralogy, forensic science, archeology, condensed matter physics, and the biological and pharmaceutical sciences.
=== Lattice-fringe fingerprinting (2D) ===
Powder diffraction patterns of very small single crystals, or crystallites, are subject to size-dependent peak broadening, which, below a certain size, renders powder diffraction fingerprinting useless. In this case, peak resolution is only possible in 3D reciprocal space, i.e. by applying single-crystal electron diffraction techniques.
High-Resolution Transmission Electron Microscopy (HRTEM) provides images and diffraction patterns of nanometer sized crystallites. Fourier transforms of HRTEM images and electron diffraction patterns both supply information about the projected reciprocal lattice geometry for a certain crystal orientation, where the projection axis coincides with the optical axis of the microscope.
Projected lattice geometries can be represented by so-called ‘lattice-fringe fingerprint plots’ (LFFPs), also called angular covariance plots. The horizontal axis of such a plot is given in reciprocal lattice length and is limited by the point resolution of the microscope. The vertical axis is defined as acute angle between Fourier transformed lattice fringes or electron diffraction spots. A 2D data point is defined by the length of a reciprocal lattice vector and its (acute) angle with another reciprocal lattice vector. Sets of 2D data points that obey Weiss's zone law are subsets of the entirety of data points in an LFFP. A suitable search-match algorithm using LFFPs, therefore, tries to find matching zone axis subsets in the database. It is, essentially, a variant of a lattice matching algorithm.
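The construction of such 2D data points can be illustrated with a short Python sketch; the reciprocal lattice vectors below are hypothetical values for a single projected zone axis.

```python
# Sketch of computing 2D data points for a lattice-fringe fingerprint plot:
# for each ordered pair of projected reciprocal lattice vectors, record the
# length of one vector and the acute angle it makes with the other.
import itertools
import numpy as np

def lffp_points(g_vectors):
    """Return (|g|, acute angle in degrees) pairs for all vector pairs."""
    points = []
    for g1, g2 in itertools.permutations(g_vectors, 2):
        cos_a = np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2))
        angle = np.degrees(np.arccos(np.clip(abs(cos_a), 0.0, 1.0)))  # acute
        points.append((np.linalg.norm(g1), angle))
    return points

# Two reciprocal lattice vectors (in 1/angstrom) from a hypothetical zone axis:
g = [np.array([0.25, 0.0]), np.array([0.10, 0.20])]
for length, angle in lffp_points(g):
    print(f"|g| = {length:.3f} 1/A, acute angle = {angle:.1f} deg")
```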
In the case of electron diffraction patterns, structure factor amplitudes can be used, in a later step, to further discern among a selection of candidate structures (so-called 'structure factor fingerprinting'). Structure factor amplitudes from electron diffraction data are far less reliable than their counterparts from X-ray single-crystal and powder diffraction data. Existing precession electron diffraction techniques greatly improve the quality of structure factor amplitudes, increase their number and, thus, make structure factor amplitude information much more useful for the fingerprinting process.
Fourier transforms of HRTEM images, on the other hand, supply information not only about the projected reciprocal lattice geometry and structure factor amplitudes, but also structure factor phase angles. After crystallographic image processing, structure factor phase angles are far more reliable than structure factor amplitudes. Further discernment of candidate structures is then mainly based on structure factor phase angles and, to a lesser extent, structure factor amplitudes (so-called 'structure factor fingerprinting').
=== Morphological fingerprinting (3D) ===
The Generalized Steno Law states that the interfacial angles between identical faces of any single crystal of the same material are, by nature, restricted to the same value. This offers the opportunity to fingerprint crystalline materials on the basis of optical goniometry, also known as crystallometry. In order to employ this technique successfully, one must consider the observed point group symmetry of the measured faces and creatively apply the rule that "crystal morphologies are often combinations of simple (i.e. low multiplicity) forms where the individual faces have the lowest possible Miller indices for any given zone axis". This ensures that the correct indexing of the crystal faces is obtained for any single crystal.
For crystals of low symmetry, it is in many cases possible to derive the ratios of the crystal axes from optical goniometry with high accuracy and precision and to identify a crystalline material on that basis alone, employing databases such as 'Crystal Data'. Provided that the crystal faces have been correctly indexed and the interfacial angles have been measured to better than a few fractions of a tenth of a degree, a crystalline material can be identified quite unambiguously by angle comparisons against two rather comprehensive databases: the 'Bestimmungstabellen für Kristalle (Определитель Кристаллов)' and the 'Barker Index of Crystals'.
Since Steno's Law can be further generalized for a single crystal of any material to include the angles between either all identically indexed net planes (i.e. vectors of the reciprocal lattice, also known as 'potential reflections in diffraction experiments') or all identically indexed lattice directions (i.e. vectors of the direct lattice, also known as zone axes), opportunities exist for morphological fingerprinting of nanocrystals in the transmission electron microscope (TEM) by means of transmission electron goniometry.
The specimen goniometer of a TEM is thereby employed analogously to the goniometer head of an optical goniometer. The optical axis of the TEM is then analogous to the reference direction of an optical goniometer. While in optical goniometry net-plane normals (reciprocal lattice vectors) need to be successively aligned parallel to the reference direction of an optical goniometer in order to derive measurements of interfacial angles, the corresponding alignment needs to be done for zone axes (direct lattice vectors) in transmission electron goniometry. (Note that such alignments are by their nature quite trivial for nanocrystals in a TEM after the microscope has been aligned by standard procedures.)
Since transmission electron goniometry is based on Bragg's Law for the transmission (Laue) case (diffraction of electron waves), interzonal angles (i.e. angles between lattice directions) can be measured by a procedure that is analogous to the measurement of interfacial angles in an optical goniometer on the basis of the reflection of light. The complements to interfacial angles of external crystal faces can, on the other hand, be directly measured from a zone-axis diffraction pattern or from the Fourier transform of a high resolution TEM image that shows crossed lattice fringes.
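For the simplest (cubic) case, the interzonal angle reduces to the angle between the corresponding direct lattice vectors, as the following Python sketch illustrates; lower-symmetry crystals require the metric tensor.

```python
# Sketch: the angle between two zone axes [u1 v1 w1] and [u2 v2 w2] in a
# cubic crystal is the angle between the corresponding direct lattice vectors.
import numpy as np

def interzonal_angle_cubic(uvw1, uvw2):
    v1, v2 = np.asarray(uvw1, float), np.asarray(uvw2, float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

print(interzonal_angle_cubic([0, 0, 1], [0, 1, 1]))   # 45.0
print(interzonal_angle_cubic([0, 0, 1], [1, 1, 1]))   # approx. 54.7
```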
=== Lattice matching (3D) ===
Lattice parameters of unknown crystal phases can be obtained from X-ray, neutron, or electron diffraction data. Single-crystal diffraction experiments supply orientation matrices, from which lattice parameters can be deduced. Alternatively, lattice parameters can be obtained from powder or polycrystal diffraction data via profile fitting without a structural model (the so-called 'Le Bail method').
Arbitrarily defined unit cells can be transformed to a standard setting and, from there, further reduced to a primitive smallest cell. Sophisticated algorithms compare such reduced cells with corresponding database entries. More powerful algorithms also consider derivative super- and subcells. The lattice-matching process can be further sped up by precalculating and storing reduced cells for all entries. The algorithm searches for matches within a certain range of the lattice parameters. More accurate lattice parameters allow a narrower range and, thus, a better match.
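The final comparison step can be illustrated with a minimal Python sketch that tests whether two reduced cells agree within chosen tolerances; computing the reduced cell itself (e.g. by Niggli reduction) is the harder part and is omitted here.

```python
# Sketch of the comparison step in lattice matching: test whether two
# reduced cells agree within user-chosen tolerances on lengths and angles.

def cells_match(cell1, cell2, len_tol=0.01, ang_tol=0.2):
    """cell = (a, b, c, alpha, beta, gamma); lengths in angstroms, angles in degrees."""
    lengths_ok = all(
        abs(l1 - l2) / l1 <= len_tol for l1, l2 in zip(cell1[:3], cell2[:3])
    )
    angles_ok = all(
        abs(a1 - a2) <= ang_tol for a1, a2 in zip(cell1[3:], cell2[3:])
    )
    return lengths_ok and angles_ok

print(cells_match((5.64, 5.64, 5.64, 90, 90, 90),
                  (5.63, 5.65, 5.64, 90, 90, 90)))   # True
```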
Lattice matching is useful in identifying crystal phases in the early stages of single-crystal diffraction experiments and, thus, avoiding unnecessary full data collection and structure determination procedures for already known crystal structures. The method is particularly important for single-crystalline samples that need to be preserved. If, on the other hand, some or all of the crystalline sample material can be ground, powder diffraction fingerprinting is usually the better option for crystal phase identification, provided that the peak resolution is good enough. However, lattice matching algorithms are still better at treating derivative super- and subcells.
== Visualization ==
Newer versions of crystal structure databases integrate the visualization of crystal and molecular structures. Specialized or integrative crystallographic databases may provide morphology or tensor visualization output.
=== Crystal structures ===
The crystal structure describes the three-dimensional periodic arrangement of atoms, ions, or molecules in a crystal. The unit cell represents the simplest repeating unit of the crystal structure. It is a parallelepiped containing a certain spatial arrangement of atoms, ions, molecules, or molecular fragments. From the unit cell the crystal structure can be fully reconstructed via translations.
The visualization of a crystal structure can be reduced to the arrangement of atoms, ions, or molecules in the unit cell, with or without cell outlines. Structure elements extending beyond single unit cells, such as isolated molecular or polyhedral units as well as chain, net, or framework structures, can often be better understood by extending the structure representation into adjacent cells.
The space group of a crystal is a mathematical description of the symmetry inherent in the structure. The motif of the crystal structure is given by the asymmetric unit, a minimal subset of the unit cell contents. The unit cell contents can be fully reconstructed via the symmetry operations of the space group on the asymmetric unit. Visualization interfaces usually allow for switching between asymmetric unit and full structure representations.
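The reconstruction of unit-cell contents from the asymmetric unit can be illustrated for the simple space group P-1, whose two operations are (x, y, z) and (-x, -y, -z); the atom position below is illustrative.

```python
# Sketch of reconstructing unit-cell contents from an asymmetric unit by
# applying space-group symmetry operations to fractional coordinates,
# shown here for space group P-1.
import numpy as np

symmetry_ops = [
    lambda p: p,      # identity:  x,  y,  z
    lambda p: -p,     # inversion: -x, -y, -z
]

def expand(asymmetric_unit):
    """Apply each operation and wrap coordinates back into [0, 1)."""
    return [np.mod(op(np.asarray(pos)), 1.0)
            for pos in asymmetric_unit for op in symmetry_ops]

atoms = [np.array([0.10, 0.25, 0.40])]     # one atom in the asymmetric unit
for p in expand(atoms):
    print(p)   # [0.1 0.25 0.4] and [0.9 0.75 0.6]
```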
Bonds between atoms or ions can be identified by characteristic short distances between them. They can be classified as covalent, ionic, hydrogen, or other bonds including hybrid forms. Bond angles can be deduced from the bond vectors in groups of atoms or ions. Bond distances and angles can be made available to the user in tabular form or interactively, by selecting pairs or groups of atoms or ions. In ball-and-stick models of crystal structures, balls represent atoms and sticks represent bonds.
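Bond identification by characteristic short distances can be sketched as follows; the covalent radii are common textbook values, the coordinates are illustrative, and production software uses curated radius tables and handles periodic boundary conditions.

```python
# Sketch of identifying bonds via short distances: flag any atom pair
# closer than the sum of covalent radii plus a tolerance.
import itertools
import numpy as np

covalent_radii = {"C": 0.76, "H": 0.31, "O": 0.66}   # in angstroms

def find_bonds(atoms, tol=0.4):
    """atoms: list of (element, xyz in angstroms). Returns bonded index pairs."""
    bonds = []
    for (i, (el1, p1)), (j, (el2, p2)) in itertools.combinations(
            enumerate(atoms), 2):
        d = np.linalg.norm(np.asarray(p1) - np.asarray(p2))
        if d <= covalent_radii[el1] + covalent_radii[el2] + tol:
            bonds.append((i, j, d))
    return bonds

atoms = [("C", (0.0, 0.0, 0.0)), ("O", (1.22, 0.0, 0.0)), ("H", (3.0, 0.0, 0.0))]
print(find_bonds(atoms))   # only the C=O pair, d = 1.22
```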
Since organic chemists are particularly interested in molecular structures, it can be useful to single out individual molecular units interactively from the drawing. Organic molecular units need to be given both as 2D structural formulae and as full 3D molecular structures. Molecules on special-symmetry positions need to be reconstructed from the asymmetric unit. Protein crystallographers are interested in the molecular structures of biological macromolecules, so provisions need to be made to represent molecular subunits as helices, sheets, or coils.
Crystal structure visualization can be integrated into a crystallographic database. Alternatively, the crystal structure data are exchanged between the database and the visualization software, preferably using the CIF format. Web-based crystallographic databases can integrate crystal structure visualization capability. Depending on the complexity of the structure, lighting, and 3D effects, crystal structure visualization can require a significant amount of processing power, which is why the actual visualization is typically run on the client.
Currently, web-integrated crystal structure visualization is based on Java applets from open-source projects such as Jmol. Web-integrated crystal structure visualization is tailored for examining crystal structures in web browsers, often supporting wide color spectra (up to 32 bit) and window size adaptation. However, web-generated crystal structure images are not always suitable for publishing due to issues such as resolution depth, color choice, grayscale contrast, or labeling (positioning, font type, font size).
=== Morphology and physical properties ===
Mineralogists, in particular, are interested in morphological appearances of individual crystals, as defined by the actually formed crystal faces (tracht) and their relative sizes (habit). More advanced visualization capabilities allow for displaying surface characteristics, imperfections inside the crystal, lighting (reflection, shadow, and translucency), and 3D effects (interactive rotatability, perspective, and stereo viewing).
Crystal physicists, in particular, are interested in anisotropic physical properties of crystals. The directional dependence of a crystal's physical property is described by a 3D tensor and depends on the orientation of the crystal. Tensor shapes are made more palpable by adding lighting effects (reflection and shadow). 2D sections of interest are selected for display by rotating the tensor interactively around one or more axes.
Crystal morphology or physical property data can be stored in specialized databases or added to more comprehensive crystal structure databases. The Crystal Morphology Database (CMD) is an example of a web-based crystal morphology database with integrated visualization capabilities.
== See also ==
Chemical database
Biological database
== References ==
== External links ==
=== Crystal structures ===
American Mineralogist Crystal Structure Database (AMCSD) (contents: crystal structures of minerals, access: free, size: large)
Cambridge Structural Database (CSD) (contents: crystal structures of organics and metal-organics, access: restricted, size: very large)
Crystallography Open Database (COD) (contents: crystal structures of organics, metalorganics, minerals, inorganics, metals, alloys, and intermetallics, access: free, size: very large)
COD+ (Web Interface for COD) (contents: crystal structures of organics, metalorganics, minerals, inorganics, metals, alloys, and intermetallics, access: free, size: very large)
Database of Zeolite Structures (contents: crystal structures of zeolites, access: free, size: small)
Incommensurate Structures Database (contents: incommensurate structures, access: free, size: small)
Inorganic Crystal Structure Database (ICSD) (contents: crystal structures of minerals and inorganics, access: restricted, size: large)
MaterialsProject Database (contents: crystal structures of inorganic compounds, access: free, size: large)
Materials Platform for Data Science (MPDS) or PAULING FILE (contents: critically evaluated crystal structures, as well as physical properties and phase diagrams, from the world scientific literature, access: partially free, size: very large)
MaterialsWeb Database (contents: crystal structures of inorganic 2D materials and bulk compounds, access: free, size: large)
Metals Structure Database (CRYSTMET) (contents: crystal structures of metals, alloys, and intermetallics, access: restricted, size: large)
Mineralogy Database (contents: crystal structures of minerals, access: free, size: medium)
MinCryst (contents: crystal structures of minerals, access: free, size: medium)
NIST Structural Database (contents: crystal structures of metals, alloys, and intermetallics, access: restricted, size: large)
NIST Surface Structure Database (contents: surface and interface structures, access: restricted, size: small-medium)
Nucleic Acid Database (contents: crystal and molecular structures of nucleic acids, access: free, size: medium)
Pearson's Crystal Data (contents: crystal structures of inorganics, minerals, salts, oxides, hydrides, metals, alloys, and intermetallics, access: restricted, size: very large)
Worldwide Protein Data Bank (PDB) (contents: crystal and molecular structures of biological macromolecules, access: free, size: very large)
Wiki Crystallography Database (WCD) (contents: crystal structures of organics, metalorganics, minerals, inorganics, metals, alloys, and intermetallics, access: free, size: medium)
=== Crystal phase identification ===
Match! (method: powder diffraction fingerprinting)
NIST Crystal Data (method: lattice matching)
Powder Diffraction File (PDF) (method: powder diffraction fingerprinting)
=== Specialized databases ===
Educational Subset of the Crystallography Open Database (EDU-COD) (specialization: crystal and molecule structures for college education, access: free, size: medium)
Biological Macromolecule Crystallization Database (BMCD) (specialization: crystallization of biological macromolecules, access: free, size: medium)
Crystal Morphology Database (CMD) (specialization: morphology of crystals, access: free, size: very small)
Database of Hypothetical Structures (specialization: predicted zeolite-like crystal structures, access: free, size: large)
Database of Zeolite Structures (specialization: crystal structures of zeolites, access: free, size: small)
Hypothetical MOFs Database (specialization: predicted metal-organic framework crystal structures, access: free, size: large)
Incommensurate Structures Database (specialization: incommensurate structures, access: free, size: small)
Marseille Protein Crystallization Database (MPCD) (specialization: crystallization of biological macromolecules, access: free, size: medium)
MOFomics (specialization: pore structures of metal-organic frameworks, access: free, size: medium)
Nano-Crystallography Database (NCD) (specialization: crystal structures of nanometer sized crystallites, access: free, size: small)
NIST Surface Structure Database (specialization: surface and interface structures, access: restricted, size: small-medium)
Predicted Crystallography Open Database (PCOD) (specialization: predicted crystal structures of organics, metal-organics, metals, alloys, intermetallics, and inorganics, access: free, size: very large)
Theoretical Crystallography Open Database (TCOD) (specialization: crystal structures of organics, metal-organics, metals, alloys, intermetallics, and inorganics that were refined or predicted from density functional theory with some experimental input, access: free, size: small)
ZEOMICS (specialization: pore structures of zeolites, access: free, size: small)
X-ray crystallography is the experimental science of determining the atomic and molecular structure of a crystal, in which the crystalline structure causes a beam of incident X-rays to diffract in specific directions. By measuring the angles and intensities of the X-ray diffraction, a crystallographer can produce a three-dimensional picture of the density of electrons within the crystal and the positions of the atoms, as well as their chemical bonds, crystallographic disorder, and other information.
X-ray crystallography has been fundamental in the development of many scientific fields. In its first decades of use, this method determined the size of atoms, the lengths and types of chemical bonds, and the atomic-scale differences between various materials, especially minerals and alloys. The method has also revealed the structure and function of many biological molecules, including vitamins, drugs, proteins and nucleic acids such as DNA. X-ray crystallography is still the primary method for characterizing the atomic structure of materials and in differentiating materials that appear similar in other experiments. X-ray crystal structures can also help explain unusual electronic or elastic properties of a material, shed light on chemical interactions and processes, or serve as the basis for designing pharmaceuticals against diseases.
Modern work involves a number of steps, all of which are important. The preliminary steps include preparing good quality samples, careful recording of the diffracted intensities, and processing of the data to remove artifacts. A variety of different methods are then used to obtain an estimate of the atomic structure, generically called direct methods. With an initial estimate, further computational techniques such as those involving difference maps are used to complete the structure. The final step is a numerical refinement of the atomic positions against the experimental data, sometimes assisted by ab-initio calculations. In almost all cases new structures are deposited in databases available to the international community.
== History ==
Crystals, though long admired for their regularity and symmetry, were not investigated scientifically until the 17th century. Johannes Kepler hypothesized in his work Strena seu de Nive Sexangula (A New Year's Gift of Hexagonal Snow) (1611) that the hexagonal symmetry of snowflake crystals was due to a regular packing of spherical water particles. The Danish scientist Nicolas Steno (1669) pioneered experimental investigations of crystal symmetry. Steno showed that the angles between the faces are the same in every exemplar of a particular type of crystal (law of constancy of interfacial angles). René Just Haüy (1784) discovered that every face of a crystal can be described by simple stacking patterns of blocks of the same shape and size (law of decrements). Hence, William Hallowes Miller in 1839 was able to give each face a unique label of three small integers, the Miller indices which remain in use for identifying crystal faces. Haüy's study led to the idea that crystals are a regular three-dimensional array (a Bravais lattice) of atoms and molecules; a single unit cell is repeated indefinitely along three principal directions. In the 19th century, a complete catalog of the possible symmetries of a crystal was worked out by Johan Hessel, Auguste Bravais, Evgraf Fedorov, Arthur Schönflies and (belatedly) William Barlow (1894). Barlow proposed several crystal structures in the 1880s that were validated later by X-ray crystallography; however, the available data were too scarce in the 1880s to accept his models as conclusive.
Wilhelm Röntgen discovered X-rays in 1895. Physicists were uncertain of the nature of X-rays, but suspected that they were waves of electromagnetic radiation. The Maxwell theory of electromagnetic radiation was well accepted, and experiments by Charles Glover Barkla showed that X-rays exhibited phenomena associated with electromagnetic waves, including transverse polarization and spectral lines akin to those observed in the visible wavelengths. Barkla created the X-ray notation for sharp spectral lines, noting in 1909 two separate energies, which he at first named "A" and "B"; supposing that there might be lines prior to "A", he then started an alphabetic numbering beginning with "K". Single-slit experiments in the laboratory of Arnold Sommerfeld suggested that X-rays had a wavelength of about 1 angstrom. X-rays have not only wave but also particle properties, which led Sommerfeld to coin the name Bremsstrahlung for the continuous spectra formed when electrons bombarded a material. Albert Einstein introduced the photon concept in 1905, but it was not broadly accepted until 1922, when Arthur Compton confirmed it by the scattering of X-rays from electrons. The particle-like properties of X-rays, such as their ionization of gases, had prompted William Henry Bragg to argue in 1907 that X-rays were not electromagnetic radiation. Bragg's view proved unpopular, and the observation of X-ray diffraction by Max von Laue in 1912 confirmed that X-rays are a form of electromagnetic radiation.
The idea that crystals could be used as a diffraction grating for X-rays arose in 1912 in a conversation between Paul Peter Ewald and Max von Laue in the English Garden in Munich. Ewald had proposed a resonator model of crystals for his thesis, but this model could not be validated using visible light, since the wavelength was much larger than the spacing between the resonators. Von Laue realized that electromagnetic radiation of a shorter wavelength was needed, and suggested that X-rays might have a wavelength comparable to the unit-cell spacing in crystals. Von Laue worked with two technicians, Walter Friedrich and his assistant Paul Knipping, to shine a beam of X-rays through a copper sulfate crystal and record its diffraction on a photographic plate. After being developed, the plate showed a large number of well-defined spots arranged in a pattern of intersecting circles around the spot produced by the central beam. The results were presented to the Bavarian Academy of Sciences and Humanities in June 1912 as "Interferenz-Erscheinungen bei Röntgenstrahlen" (Interference phenomena in X-rays). Von Laue developed a law that connects the scattering angles and the size and orientation of the unit-cell spacings in the crystal, for which he was awarded the Nobel Prize in Physics in 1914.
After Von Laue's pioneering research, the field developed rapidly, most notably by physicists William Lawrence Bragg and his father William Henry Bragg. In 1912–1913, the younger Bragg developed Bragg's law, which connects the scattering with evenly spaced planes within a crystal. The Braggs, father and son, shared the 1915 Nobel Prize in Physics for their work in crystallography. The earliest structures were generally simple; as computational and experimental methods improved over the next decades, it became feasible to deduce reliable atomic positions for more complicated arrangements of atoms.
The earliest structures were simple inorganic crystals and minerals, but even these revealed fundamental laws of physics and chemistry. The first atomic-resolution structure to be "solved" (i.e., determined) in 1914 was that of table salt. The distribution of electrons in the table-salt structure showed that crystals are not necessarily composed of covalently bonded molecules, and proved the existence of ionic compounds. The structure of diamond was solved in the same year, proving the tetrahedral arrangement of its chemical bonds and showing that the length of C–C single bond was about 1.52 angstroms. Other early structures included copper, calcium fluoride (CaF2, also known as fluorite), calcite (CaCO3) and pyrite (FeS2) in 1914; spinel (MgAl2O4) in 1915; the rutile and anatase forms of titanium dioxide (TiO2) in 1916; pyrochroite (Mn(OH)2) and, by extension, brucite (Mg(OH)2) in 1919. Also in 1919, sodium nitrate (NaNO3) and caesium dichloroiodide (CsICl2) were determined by Ralph Walter Graystone Wyckoff, and the wurtzite (hexagonal ZnS) structure was determined in 1920.
The structure of graphite was solved in 1916 by the related method of powder diffraction, which was developed by Peter Debye and Paul Scherrer and, independently, by Albert Hull in 1917. The structure of graphite was determined from single-crystal diffraction in 1924 by two groups independently. Hull also used the powder method to determine the structures of various metals, such as iron and magnesium.
== Contributions in different areas ==
=== Chemistry ===
X-ray crystallography has led to a better understanding of chemical bonds and non-covalent interactions. The initial studies revealed the typical radii of atoms, and confirmed many theoretical models of chemical bonding, such as the tetrahedral bonding of carbon in the diamond structure, the octahedral bonding of metals observed in ammonium hexachloroplatinate (IV), and the resonance observed in the planar carbonate group and in aromatic molecules. Kathleen Lonsdale's 1928 structure of hexamethylbenzene established the hexagonal symmetry of benzene and showed a clear difference in bond length between the aliphatic C–C bonds and aromatic C–C bonds; this finding led to the idea of resonance between chemical bonds, which had profound consequences for the development of chemistry. Her conclusions were anticipated by William Henry Bragg, who published models of naphthalene and anthracene in 1921 based on other molecules, an early form of molecular replacement.
The first structure of an organic compound, hexamethylenetetramine, was solved in 1923. This was rapidly followed by several studies of different long-chain fatty acids, which are an important component of biological membranes. In the 1930s, the structures of much larger molecules with two-dimensional complexity began to be solved. A significant advance was the structure of phthalocyanine, a large planar molecule that is closely related to porphyrin molecules important in biology, such as heme, corrin and chlorophyll.
In the 1920s, Victor Moritz Goldschmidt and later Linus Pauling developed rules for eliminating chemically unlikely structures and for determining the relative sizes of atoms. These rules led to the structure of brookite (1928) and an understanding of the relative stability of the rutile, brookite and anatase forms of titanium dioxide.
The distance between two bonded atoms is a sensitive measure of the bond strength and its bond order; thus, X-ray crystallographic studies have led to the discovery of even more exotic types of bonding in inorganic chemistry, such as metal-metal double bonds, metal-metal quadruple bonds, and three-center, two-electron bonds. X-ray crystallography—or, strictly speaking, an inelastic Compton scattering experiment—has also provided evidence for the partly covalent character of hydrogen bonds. In the field of organometallic chemistry, the X-ray structure of ferrocene initiated scientific studies of sandwich compounds, while that of Zeise's salt stimulated research into "back bonding" and metal-pi complexes. Finally, X-ray crystallography had a pioneering role in the development of supramolecular chemistry, particularly in clarifying the structures of the crown ethers and the principles of host–guest chemistry.
=== Materials science and mineralogy ===
The application of X-ray crystallography to mineralogy began with the structure of garnet, which was determined in 1924 by Menzer. A systematic X-ray crystallographic study of the silicates was undertaken in the 1920s. This study showed that, as the Si/O ratio is altered, the silicate crystals exhibit significant changes in their atomic arrangements. Machatschki extended these insights to minerals in which aluminium substitutes for the silicon atoms of the silicates. The first application of X-ray crystallography to metallurgy also occurred in the mid-1920s. Most notably, Linus Pauling's structure of the alloy Mg2Sn led to his theory of the stability and structure of complex ionic crystals. Many complicated inorganic and organometallic systems have been analyzed using single-crystal methods, such as fullerenes, metalloporphyrins, and other complicated compounds. Single-crystal diffraction is also used in the pharmaceutical industry. The Cambridge Structural Database contains over 1,000,000 structures as of June 2019; most of these structures were determined by X-ray crystallography.
On October 17, 2012, the Curiosity rover on the planet Mars at "Rocknest" performed the first X-ray diffraction analysis of Martian soil. The results from the rover's CheMin analyzer revealed the presence of several minerals, including feldspar, pyroxenes and olivine, and suggested that the Martian soil in the sample was similar to the "weathered basaltic soils" of Hawaiian volcanoes.
=== Biological macromolecular crystallography ===
X-ray crystallography of biological molecules took off with Dorothy Crowfoot Hodgkin, who solved the structures of cholesterol (1937), penicillin (1946) and vitamin B12 (1956), for which she was awarded the Nobel Prize in Chemistry in 1964. In 1969, she succeeded in solving the structure of insulin, on which she worked for over thirty years.
Crystal structures of proteins (which are irregular and hundreds of times larger than cholesterol) began to be solved in the late 1950s, beginning with the structure of sperm whale myoglobin by Sir John Cowdery Kendrew, for which he shared the Nobel Prize in Chemistry with Max Perutz in 1962. Since that success, 190,000 X-ray crystal structures of proteins, nucleic acids and other biological molecules have been determined. The nearest competing method in number of structures analyzed is nuclear magnetic resonance (NMR) spectroscopy, which has resolved less than one tenth as many. Crystallography can solve structures of arbitrarily large molecules, whereas solution-state NMR is restricted to relatively small ones (less than 70 kDa). X-ray crystallography is used routinely to determine how a pharmaceutical drug interacts with its protein target and what changes might improve it. However, intrinsic membrane proteins remain challenging to crystallize because they require detergents or other denaturants to solubilize them in isolation, and such detergents often interfere with crystallization. Membrane proteins are a large component of the genome, and include many proteins of great physiological importance, such as ion channels and receptors. Helium cryogenics are used to prevent radiation damage in protein crystals.
== Methods ==
=== Overview ===
Two limiting cases of X-ray crystallography are commonly distinguished: "small-molecule" (which includes continuous inorganic solids) and "macromolecular" crystallography. Small-molecule crystallography typically involves crystals with fewer than 100 atoms in their asymmetric unit; such crystal structures are usually so well resolved that the atoms can be discerned as isolated "blobs" of electron density. In contrast, macromolecular crystallography often involves tens of thousands of atoms in the unit cell. Such crystal structures are generally less well-resolved; the atoms and chemical bonds appear as tubes of electron density, rather than as isolated atoms. In general, small molecules are also easier to crystallize than macromolecules; however, X-ray crystallography has proven possible even for viruses and proteins with hundreds of thousands of atoms, through improved crystallographic imaging and technology.
The technique of single-crystal X-ray crystallography has three basic steps. The first—and often most difficult—step is to obtain an adequate crystal of the material under study. The crystal should be sufficiently large (typically larger than 0.1 mm in all dimensions), pure in composition and regular in structure, with no significant internal imperfections such as cracks or twinning.
In the second step, the crystal is placed in an intense beam of X-rays, usually of a single wavelength (monochromatic X-rays), producing the regular pattern of reflections. The angles and intensities of diffracted X-rays are measured, with each compound having a unique diffraction pattern. As the crystal is gradually rotated, previous reflections disappear and new ones appear; the intensity of every spot is recorded at every orientation of the crystal. Multiple data sets may have to be collected, with each set covering slightly more than half a full rotation of the crystal and typically containing tens of thousands of reflections.
In the third step, these data are combined computationally with complementary chemical information to produce and refine a model of the arrangement of atoms within the crystal. The final, refined model of the atomic arrangement—now called a crystal structure—is usually stored in a public database.
=== Crystallization ===
Although crystallography can be used to characterize the disorder in an impure or irregular crystal, crystallography generally requires a pure crystal of high regularity to solve the structure of a complicated arrangement of atoms. Pure, regular crystals can sometimes be obtained from natural or synthetic materials, such as samples of metals, minerals or other macroscopic materials. The regularity of such crystals can sometimes be improved with macromolecular crystal annealing and other methods. However, in many cases, obtaining a diffraction-quality crystal is the chief barrier to solving its atomic-resolution structure.
Small-molecule and macromolecular crystallography differ in the range of possible techniques used to produce diffraction-quality crystals. Small molecules generally have few degrees of conformational freedom, and may be crystallized by a wide range of methods, such as chemical vapor deposition and recrystallization. By contrast, macromolecules generally have many degrees of freedom and their crystallization must be carried out so as to maintain a stable structure. For example, proteins and larger RNA molecules cannot be crystallized if their tertiary structure has been unfolded; therefore, the range of crystallization conditions is restricted to solution conditions in which such molecules remain folded.
Protein crystals are almost always grown in solution. The most common approach is to lower the solubility of the component molecules very gradually; if this is done too quickly, the molecules will precipitate from solution, forming a useless dust or amorphous gel on the bottom of the container. Crystal growth in solution is characterized by two steps: nucleation of a microscopic crystallite (possibly having only 100 molecules), followed by growth of that crystallite, ideally to a diffraction-quality crystal. The solution conditions that favor the first step (nucleation) are not always the same conditions that favor the second step (subsequent growth). The solution conditions should disfavor the first step (nucleation) but favor the second (growth), so that only one large crystal forms per droplet. If nucleation is favored too much, a shower of small crystallites will form in the droplet, rather than one large crystal; if favored too little, no crystal will form whatsoever. Other approaches involve crystallizing proteins under oil, where aqueous protein solutions are dispensed under liquid oil, and water evaporates through the layer of oil. Different oils have different evaporation permeabilities and therefore yield different concentration rates for a given precipitant/protein mixture.
It is difficult to predict good conditions for nucleation or growth of well-ordered crystals. In practice, favorable conditions are identified by screening; a very large batch of the molecules is prepared, and a wide variety of crystallization solutions are tested. Hundreds, even thousands, of solution conditions are generally tried before finding the successful one. The various conditions can use one or more physical mechanisms to lower the solubility of the molecule; for example, some may change the pH, some contain salts of the Hofmeister series or chemicals that lower the dielectric constant of the solution, and still others contain large polymers such as polyethylene glycol that drive the molecule out of solution by entropic effects. It is also common to try several temperatures for encouraging crystallization, or to gradually lower the temperature so that the solution becomes supersaturated. These methods require large amounts of the target molecule, as they use high concentrations of the molecule(s) to be crystallized. Due to the difficulty in obtaining such large quantities (milligrams) of crystallization-grade protein, robots have been developed that are capable of accurately dispensing crystallization trial drops on the order of 100 nanoliters in volume. This means that 10-fold less protein is used per experiment when compared to crystallization trials set up by hand (on the order of 1 microliter).
Several factors are known to inhibit crystallization. The growing crystals are generally held at a constant temperature and protected from shocks or vibrations that might disturb their crystallization. Impurities in the molecules or in the crystallization solutions are often inimical to crystallization. Conformational flexibility in the molecule also tends to make crystallization less likely, due to entropy. Molecules that tend to self-assemble into regular helices are often unwilling to assemble into crystals. Crystals can be marred by twinning, which can occur when a unit cell can pack equally favorably in multiple orientations, although recent advances in computational methods may allow solving the structure of some twinned crystals. Having failed to crystallize a target molecule, a crystallographer may try again with a slightly modified version of the molecule; even small changes in molecular properties can lead to large differences in crystallization behavior.
=== Data collection ===
==== Mounting the crystal ====
The crystal is mounted for measurements so that it may be held in the X-ray beam and rotated. There are several methods of mounting. In the past, crystals were loaded into glass capillaries with the crystallization solution (the mother liquor). Crystals of small molecules are typically attached with oil or glue to a glass fiber or a loop, which is made of nylon or plastic and attached to a solid rod. Protein crystals are scooped up by a loop, then flash-frozen with liquid nitrogen. This freezing reduces the radiation damage of the X-rays, as well as thermal motion (the Debye-Waller effect). However, untreated protein crystals often crack if flash-frozen; therefore, they are generally pre-soaked in a cryoprotectant solution before freezing. This pre-soak may itself cause the crystal to crack, ruining it for crystallography. Generally, successful cryo-conditions are identified by trial and error.
The capillary or loop is mounted on a goniometer, which allows it to be positioned accurately within the X-ray beam and rotated. Since both the crystal and the beam are often very small, the crystal must be centered within the beam to within ~25 micrometers accuracy, which is aided by a camera focused on the crystal. The most common type of goniometer is the "kappa goniometer", which offers three angles of rotation: the ω angle, which rotates about an axis perpendicular to the beam; the κ angle, about an axis at ~50° to the ω axis; and, finally, the φ angle about the loop/capillary axis. When the κ angle is zero, the ω and φ axes are aligned. The κ rotation allows for convenient mounting of the crystal, since the arm in which the crystal is mounted may be swung out towards the crystallographer. The oscillations carried out during data collection (mentioned below) involve the ω axis only. An older type of goniometer is the four-circle goniometer, and its relatives such as the six-circle goniometer.
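The composition of the three goniometer rotations can be sketched as a product of rotation matrices. The axis conventions below are illustrative only, since they vary between instruments.

```python
# Toy sketch of composing kappa-goniometer rotations: the crystal
# orientation is the product of rotations about the omega, kappa, and phi
# axes, with the kappa axis inclined (here ~50 degrees) to the omega axis.
import numpy as np

def rotation(axis, angle_deg):
    """Rotation matrix about a unit axis (Rodrigues' formula)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    a = np.radians(angle_deg)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)

alpha = np.radians(50.0)                          # kappa-axis inclination
omega_axis = np.array([0.0, 0.0, 1.0])
kappa_axis = np.array([0.0, np.sin(alpha), np.cos(alpha)])
phi_axis = np.array([0.0, 0.0, 1.0])

def orientation(omega, kappa, phi):
    """Crystal orientation as the product of the three axis rotations."""
    return (rotation(omega_axis, omega)
            @ rotation(kappa_axis, kappa)
            @ rotation(phi_axis, phi))

print(orientation(10.0, 0.0, 20.0))   # with kappa = 0, omega and phi axes coincide
```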
==== Recording the reflections ====
The relative intensities of the reflections provide the information needed to determine the arrangement of molecules within the crystal in atomic detail. The intensities of these reflections may be recorded with photographic film, an area detector (such as a pixel detector) or with a charge-coupled device (CCD) image sensor. The peaks at small angles correspond to low-resolution data, whereas those at high angles represent high-resolution data; thus, an upper limit on the eventual resolution of the structure can be determined from the first few images. Some measures of diffraction quality can be determined at this point, such as the mosaicity of the crystal and its overall disorder, as observed in the peak widths. Some pathologies of the crystal that would render it unfit for solving the structure can also be diagnosed quickly at this point.
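The relation between detector angle and resolution follows from Bragg's law, d = λ/(2 sin θ); the short Python sketch below evaluates it for a few angles, assuming the common Cu Kα wavelength.

```python
# Sketch relating detector angle to resolution via Bragg's law: reflections
# at higher angles probe smaller d-spacings, i.e. higher resolution.
import numpy as np

def d_spacing(two_theta_deg, wavelength=1.5406):   # Cu K-alpha, in angstroms
    theta = np.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * np.sin(theta))

for tt in (10.0, 30.0, 60.0, 120.0):
    print(f"2-theta = {tt:5.1f} deg  ->  d = {d_spacing(tt):.2f} A")
```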
One set of spots is insufficient to reconstruct the whole crystal; it represents only a small slice of the full three dimensional set. To collect all the necessary information, the crystal must be rotated step-by-step through 180°, with an image recorded at every step; actually, slightly more than 180° is required to cover reciprocal space, due to the curvature of the Ewald sphere. However, if the crystal has a higher symmetry, a smaller angular range such as 90° or 45° may be recorded. The rotation axis should be changed at least once, to avoid developing a "blind spot" in reciprocal space close to the rotation axis. It is customary to rock the crystal slightly (by 0.5–2°) to catch a broader region of reciprocal space.
Multiple data sets may be necessary for certain phasing methods. For example, multi-wavelength anomalous dispersion phasing requires that the scattering be recorded at least three (and usually four, for redundancy) wavelengths of the incoming X-ray radiation. A single crystal may degrade too much during the collection of one data set, owing to radiation damage; in such cases, data sets on multiple crystals must be taken.
=== Crystal symmetry, unit cell, and image scaling ===
The recorded series of two-dimensional diffraction patterns, each corresponding to a different crystal orientation, is converted into a three-dimensional set. Data processing begins with indexing the reflections. This means identifying the dimensions of the unit cell and which image peak corresponds to which position in reciprocal space. A byproduct of indexing is to determine the symmetry of the crystal, i.e., its space group. Some space groups can be eliminated from the beginning. For example, reflection symmetries cannot be observed in chiral molecules; thus, only 65 space groups of 230 possible are allowed for protein molecules, which are almost always chiral. Indexing is generally accomplished using an autoindexing routine. Having assigned symmetry, the data is then integrated. This converts the hundreds of images containing the thousands of reflections into a single file, consisting of (at the very least) records of the Miller index of each reflection and an intensity for each reflection. At this stage, the file often also includes error estimates and measures of partiality (what part of a given reflection was recorded on that image).
A full data set may consist of hundreds of separate images taken at different orientations of the crystal. These have to be merged and scaled: reflections that appear in two or more images are identified (merging), and the images are brought onto a consistent intensity scale (scaling). Optimizing the intensity scale is critical because the relative intensity of the peaks is the key information from which the structure is determined. The repetitive technique of crystallographic data collection and the often high symmetry of crystalline materials cause the diffractometer to record many symmetry-equivalent reflections multiple times. This allows calculating the symmetry-related R-factor, a reliability index based upon how similar the measured intensities of symmetry-equivalent reflections are, thus assessing the quality of the data.
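One common definition of this symmetry-related R-factor (often called Rmerge) sums the absolute deviations of each measured intensity from the mean of its symmetry-equivalent group and normalizes by the total intensity; exact definitions vary between programs. A minimal Python sketch with illustrative intensities:

```python
# Sketch of a symmetry-related R-factor (R_merge-like): absolute deviations
# from each group mean, normalized by the summed intensities.

def r_merge(groups):
    """groups: list of lists of intensities of symmetry-equivalent reflections."""
    num = den = 0.0
    for intensities in groups:
        mean = sum(intensities) / len(intensities)
        num += sum(abs(i - mean) for i in intensities)
        den += sum(intensities)
    return num / den

measurements = [[1020.0, 980.0, 1005.0], [410.0, 395.0]]   # illustrative
print(f"R_merge = {r_merge(measurements):.3f}")
```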
=== Initial phasing ===
The intensity of each diffraction 'spot' is proportional to the modulus squared of the structure factor. The structure factor is a complex number containing information relating to both the amplitude and phase of a wave. In order to obtain an interpretable electron density map, both amplitude and phase must be known (an electron density map allows a crystallographer to build a starting model of the molecule). The phase cannot be directly recorded during a diffraction experiment: this is known as the phase problem (a numerical sketch follows the list of phasing methods below). Initial phase estimates can be obtained in a variety of ways:
Ab initio phasing or direct methods – This is usually the method of choice for small molecules (<1000 non-hydrogen atoms), and has been used successfully to solve the phase problems for small proteins. If the resolution of the data is better than 1.4 Å (140 pm), direct methods can be used to obtain phase information, by exploiting known phase relationships between certain groups of reflections.
Molecular replacement – if a related structure is known, it can be used as a search model in molecular replacement to determine the orientation and position of the molecules within the unit cell. The phases obtained this way can be used to generate electron density maps.
Anomalous X-ray scattering (MAD or SAD phasing) – the X-ray wavelength may be scanned past an absorption edge of an atom, which changes the scattering in a known way. By recording full sets of reflections at three different wavelengths (far below, far above and in the middle of the absorption edge) one can solve for the substructure of the anomalously diffracting atoms and hence the structure of the whole molecule. The most popular method of incorporating anomalous scattering atoms into proteins is to express the protein in a methionine auxotroph (a host incapable of synthesizing methionine) in media rich in seleno-methionine, which contains selenium atoms. A multi-wavelength anomalous dispersion (MAD) experiment can then be conducted around the absorption edge, which should then yield the position of any methionine residues within the protein, providing initial phases.
Heavy atom methods (multiple isomorphous replacement) – If electron-dense metal atoms can be introduced into the crystal, direct methods or Patterson-space methods can be used to determine their location and to obtain initial phases. Such heavy atoms can be introduced either by soaking the crystal in a heavy atom-containing solution, or by co-crystallization (growing the crystals in the presence of a heavy atom). As in multi-wavelength anomalous dispersion phasing, the changes in the scattering amplitudes can be interpreted to yield the phases. Although this is the original method by which protein crystal structures were solved, it has largely been superseded by multi-wavelength anomalous dispersion phasing with selenomethionine.
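To make the phase problem concrete, the following Python sketch computes a structure factor F(hkl) = Σj fj exp(2πi(hxj + kyj + lzj)) for an illustrative body-centred arrangement of identical atoms with a constant scattering factor; only |F|², not the phase, is experimentally accessible.

```python
# Sketch of the structure factor for a reflection (h, k, l):
# F_hkl = sum_j f_j * exp(2*pi*i*(h*x_j + k*y_j + l*z_j)),
# with f_j the scattering factor of atom j (constant here for simplicity)
# and (x_j, y_j, z_j) its fractional coordinates. The measured intensity
# is proportional to |F_hkl|^2; the phase of F_hkl is lost.
import numpy as np

def structure_factor(hkl, positions, f):
    h = np.asarray(hkl, float)
    xyz = np.asarray(positions, float)
    return np.sum(f * np.exp(2j * np.pi * (xyz @ h)))

# Body-centred arrangement of identical atoms: (0,0,0) and (1/2,1/2,1/2).
positions = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5)]
F = structure_factor((1, 0, 0), positions, f=1.0)
print(abs(F) ** 2)   # approx. 0: (100) is systematically absent for body centring
```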
=== Model building and phase refinement ===
Having obtained initial phases, an initial model can be built. The atomic positions in the model and their respective Debye-Waller factors (or B-factors, accounting for the thermal motion of the atom) can be refined to fit the observed diffraction data, ideally yielding a better set of phases. A new model can then be fit to the new electron density map and successive rounds of refinement are carried out. This iterative process continues until the correlation between the diffraction data and the model is maximized. The agreement is measured by an R-factor defined as
$$R = \frac{\sum_{\text{all reflections}} \left| F_{\text{obs}} - F_{\text{calc}} \right|}{\sum_{\text{all reflections}} \left| F_{\text{obs}} \right|},$$
where F is the structure factor. A similar quality criterion is Rfree, which is calculated from a subset (~10%) of reflections that were not included in the structure refinement. Both R factors depend on the resolution of the data. As a rule of thumb, Rfree should be approximately the resolution in ångströms divided by 10; thus, a data set with 2 Å resolution should yield a final Rfree of about 0.2. Chemical bonding features such as stereochemistry, hydrogen bonding and the distribution of bond lengths and angles are complementary measures of model quality. In iterative model building, it is common to encounter phase bias or model bias: because phase estimates come from the model, each calculated map tends to show density wherever the model has density, regardless of whether density is truly there. This problem can be mitigated by maximum-likelihood weighting and by checking with omit maps.
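The R-factor defined above is straightforward to evaluate once observed and calculated structure-factor amplitudes are available. The following Python sketch computes R and Rfree from toy amplitude lists; in real refinement the free set is chosen before refinement begins, and the numbers here are invented purely for illustration.

```python
def r_factor(f_obs, f_calc):
    """Crystallographic R-factor: sum|Fobs - Fcalc| / sum|Fobs|."""
    assert len(f_obs) == len(f_calc)
    return (sum(abs(o - c) for o, c in zip(f_obs, f_calc))
            / sum(abs(o) for o in f_obs))

# Hypothetical structure-factor amplitudes; every 10th reflection is
# held out of refinement as the "free" set used for Rfree.
f_obs  = [120.0, 85.5, 60.2, 45.1, 99.7, 72.3, 55.0, 41.8, 110.4, 66.6]
f_calc = [118.2, 87.0, 58.9, 46.0, 97.5, 70.8, 56.2, 40.9, 108.8, 64.1]

work = [i for i in range(len(f_obs)) if i % 10 != 0]
free = [i for i in range(len(f_obs)) if i % 10 == 0]

r_work = r_factor([f_obs[i] for i in work], [f_calc[i] for i in work])
r_free = r_factor([f_obs[i] for i in free], [f_calc[i] for i in free])
print(f"R_work = {r_work:.3f}, R_free = {r_free:.3f}")
```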
It may not be possible to observe every atom in the asymmetric unit. In many cases, crystallographic disorder smears the electron density map. Weakly scattering atoms such as hydrogen are routinely invisible. It is also possible for a single atom to appear multiple times in an electron density map, e.g., if a protein sidechain has multiple (<4) allowed conformations. In still other cases, the crystallographer may detect that the covalent structure deduced for the molecule was incorrect, or changed. For example, proteins may be cleaved or undergo post-translational modifications that were not detected prior to the crystallization.
=== Disorder ===
A common challenge in refinement of crystal structures results from crystallographic disorder. Disorder can take many forms but in general involves the coexistence of two or more species or conformations. Failure to recognize disorder results in flawed interpretation. Pitfalls from improper modeling of disorder are illustrated by the discounted hypothesis of bond stretch isomerism. Disorder is modelled with respect to the relative population of the components, often only two, and their identity. In structures of large molecules and ions, solvent and counterions are often disordered.
=== Applied computational data analysis ===
The use of computational methods for powder X-ray diffraction data analysis is now widespread. It typically compares the experimental data to the simulated diffractogram of a model structure, taking into account the instrumental parameters, and refines the structural or microstructural parameters of the model using a least-squares minimization algorithm. Most available tools allowing phase identification and structural refinement are based on the Rietveld method; some of them are open and free software such as FullProf Suite, Jana2006, MAUD, Rietan and GSAS, while others are available under commercial licenses, such as Diffrac.Suite TOPAS and Match!. Most of these tools also allow Le Bail refinement (also referred to as profile matching), that is, refinement of the cell parameters based on the Bragg peak positions and peak profiles, without taking the crystallographic structure itself into account. More recent tools allow the refinement of both structural and microstructural data, such as the FAULTS program included in the FullProf Suite, which allows the refinement of structures with planar defects (e.g. stacking faults, twinning, intergrowths).
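The following Python sketch illustrates the general idea of such least-squares profile refinement: a pattern built from a few peak positions and intensities is fitted to "observed" data by adjusting a scale factor and a peak width. It is a schematic toy, not a representation of how FullProf, TOPAS or other Rietveld programs work internally; the peak positions, intensities and Gaussian profile are all assumptions of the example.

```python
import numpy as np
from scipy.optimize import least_squares

two_theta = np.linspace(10, 60, 2000)
bragg_pos = np.array([15.0, 25.0, 32.0, 44.0])  # hypothetical peak positions
rel_int   = np.array([1.0, 0.6, 0.3, 0.45])     # hypothetical relative intensities

def pattern(params):
    """Simulated diffractogram: Gaussian peaks at fixed Bragg positions."""
    scale, width = params
    profile = sum(h * np.exp(-0.5 * ((two_theta - p) / width) ** 2)
                  for p, h in zip(bragg_pos, rel_int))
    return scale * profile

# Synthetic "observed" data: the true pattern plus noise.
rng = np.random.default_rng(0)
observed = pattern([100.0, 0.15]) + rng.normal(0, 1.0, two_theta.size)

# Least-squares refinement of scale and peak width against the data.
fit = least_squares(lambda p: pattern(p) - observed, x0=[80.0, 0.2])
print("refined scale, width:", fit.x)
```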
=== Deposition of the structure ===
Once the model of a molecule's structure has been finalized, it is often deposited in a crystallographic database such as the Cambridge Structural Database (for small molecules), the Inorganic Crystal Structure Database (ICSD) (for inorganic compounds) or the Protein Data Bank (for protein and sometimes nucleic acids). Many structures obtained in private commercial ventures to crystallize medicinally relevant proteins are not deposited in public crystallographic databases.
== Contribution of women to X-ray crystallography ==
A number of women were pioneers in X-ray crystallography at a time when they were excluded from most other branches of physical science.
Kathleen Lonsdale was a research student of William Henry Bragg, who had 11 women research students out of a total of 18. She is known for both her experimental and theoretical work. Lonsdale joined his crystallography research team at the Royal Institution in London in 1923, and after getting married and having children, went back to work with Bragg as a researcher. She confirmed the structure of the benzene ring, carried out studies of diamond, was one of the first two women to be elected to the Royal Society in 1945, and in 1949 was appointed the first female tenured professor of chemistry and head of the Department of Crystallography at University College London. Lonsdale always advocated greater participation of women in science and said in 1970: "Any country that wants to make full use of all its potential scientists and technologists could do so, but it must not expect to get the women quite so simply as it gets the men. ... Is it utopian, then, to suggest that any country that really wants married women to return to a scientific career, when her children no longer need her physical presence, should make special arrangements to encourage her to do so?" Early in her career, Lonsdale began a collaboration with William T. Astbury on a set of 230 space group tables, which was published in 1924 and became an essential tool for crystallographers.
In 1932 Dorothy Hodgkin joined the laboratory of the physicist John Desmond Bernal, who was a former student of Bragg, in Cambridge, UK. She and Bernal took the first X-ray photographs of crystalline proteins. Hodgkin also played a role in the foundation of the International Union of Crystallography. She was awarded the Nobel Prize in Chemistry in 1964 for her work using X-ray techniques to study the structures of penicillin, insulin and vitamin B12. Her work on penicillin began in 1942 during the war and on vitamin B12 in 1948. While her group slowly grew, their predominant focus was on the X-ray analysis of natural products. She is the only British woman ever to have won a Nobel Prize in a science subject.
Rosalind Franklin took the X-ray photograph of a DNA fibre that proved key to James Watson and Francis Crick's discovery of the double helix, for which they both won the Nobel Prize for Physiology or Medicine in 1962. Watson revealed in his autobiographic account of the discovery of the structure of DNA, The Double Helix, that he had used Franklin's X-ray photograph without her permission. Franklin died of cancer in her 30s, before Watson received the Nobel Prize. Franklin also carried out important structural studies of carbon in coal and graphite, and of plant and animal viruses.
Isabella Karle of the United States Naval Research Laboratory developed an experimental approach to the mathematical theory of crystallography. Her work improved the speed and accuracy of chemical and biomedical analysis. Yet only her husband Jerome shared the 1985 Nobel Prize in Chemistry with Herbert Hauptman, "for outstanding achievements in the development of direct methods for the determination of crystal structures". Other prize-giving bodies have recognized Isabella with numerous awards in her own right.
Women have written many textbooks and research papers in the field of X-ray crystallography. For many years Lonsdale edited the International Tables for Crystallography, which provide information on crystal lattices, symmetry, and space groups, as well as mathematical, physical and chemical data on structures. Olga Kennard of the University of Cambridge founded and ran the Cambridge Crystallographic Data Centre, an internationally recognized source of structural data on small molecules, from 1965 until 1997. Jenny Pickworth Glusker, a British scientist, co-authored Crystal Structure Analysis: A Primer, first published in 1971 and, as of 2010, in its third edition. Eleanor Dodson, an Australian-born biologist who began as Dorothy Hodgkin's technician, was the main instigator behind CCP4, the collaborative computing project that currently shares more than 250 software tools with protein crystallographers worldwide.
== Nobel Prizes involving X-ray crystallography ==
== See also ==
== Notes ==
== References ==
== Further reading ==
=== International Tables for Crystallography ===
=== Bound collections of articles ===
=== Textbooks ===
=== Applied computational data analysis ===
=== Historical ===
== External links ==
=== Tutorials ===
Learning Crystallography
Simple, non-technical introduction
The Crystallography Collection, video series from the Royal Institution
"Small Molecule Crystalization" (PDF) at Illinois Institute of Technology website
International Union of Crystallography
Crystallography 101
Interactive structure factor tutorial, demonstrating properties of the diffraction pattern of a 2D crystal.
Picturebook of Fourier Transforms, illustrating the relationship between crystal and diffraction pattern in 2D.
Lecture notes on X-ray crystallography and structure determination
Online lecture on Modern X-ray Scattering Methods for Nanoscale Materials Analysis by Richard J. Matyi
Interactive Crystallography Timeline Archived 2021-06-30 at the Wayback Machine from the Royal Institution
=== Primary databases ===
Crystallography Open Database (COD)
Protein Data Bank (PDB)
Nucleic Acid Databank Archived 2018-07-14 at the Wayback Machine (NDB)
Cambridge Structural Database (CSD)
Inorganic Crystal Structure Database (ICSD)
Biological Macromolecule Crystallization Database (BMCD)
=== Derivative databases ===
PDBsum
Proteopedia – the collaborative, 3D encyclopedia of proteins and other molecules
RNABase
HIC-Up database of PDB ligands Archived 2020-08-08 at the Wayback Machine
Structural Classification of Proteins database
CATH Protein Structure Classification
List of transmembrane proteins with known 3D structure Archived 2011-04-11 at the Wayback Machine
Orientations of Proteins in Membranes database
=== Structural validation ===
MolProbity structural validation suite
ProSA-web
NQ-Flipper (check for unfavorable rotamers of Asn and Gln residues)
DALI server (identifies proteins similar to a given protein)
The French Crystallographic Association (L’Association française de cristallographie, or AFC) brings together physicists, chemists and biologists who use crystals and crystallography in their research or develop new crystallographic methods. Originally part of the French Society of Mineralogy, the AFC was founded in 1953 by Hubert Curien and André Guinier.
Today, its main goals are to promote the dissemination of knowledge and exchange between French-speaking crystallographers from all fields, and in particular to organize or support specialized or interdisciplinary workshops and conferences, educational actions and training courses in the area of crystallography. During its biannual conferences, the AFC awards three PhD prizes, one in each of its research areas: physics, chemistry and biology.
Claude Sauter, a scientist at the Institut de Biologie Moléculaire et Cellulaire in Strasbourg, has been President of the AFC since 1 January 2022.
== Presidents of the AFC ==
== See also ==
International Union of Crystallography
European Crystallographic Association
== References ==
== External links ==
Website of the French Crystallographic Association
Molecular mechanics uses classical mechanics to model molecular systems. The Born–Oppenheimer approximation is assumed valid and the potential energy of all systems is calculated as a function of the nuclear coordinates using force fields. Molecular mechanics can be used to study molecular systems ranging in size and complexity from small molecules to large biological systems or material assemblies with many thousands to millions of atoms.
All-atomistic molecular mechanics methods have the following properties:
Each atom is simulated as one particle
Each particle is assigned a radius (typically the van der Waals radius), polarizability, and a constant net charge (generally derived from quantum calculations and/or experiment)
Bonded interactions are treated as springs with an equilibrium distance equal to the experimental or calculated bond length
Variants on this theme are possible. For example, many simulations have historically used a united-atom representation in which each terminal methyl group or intermediate methylene unit was considered one particle, and large protein systems are commonly simulated using a bead model that assigns two to four particles per amino acid.
== Functional form ==
The following functional abstraction, termed an interatomic potential function or force field in chemistry, calculates the molecular system's potential energy (E) in a given conformation as a sum of individual energy terms.
$$E = E_{\text{covalent}} + E_{\text{noncovalent}}$$
where the components of the covalent and noncovalent contributions are given by the following summations:
$$E_{\text{covalent}} = E_{\text{bond}} + E_{\text{angle}} + E_{\text{dihedral}}$$
$$E_{\text{noncovalent}} = E_{\text{electrostatic}} + E_{\text{van der Waals}}$$
The exact functional form of the potential function, or force field, depends on the particular simulation program being used. Generally the bond and angle terms are modeled as harmonic potentials centered around equilibrium bond-length values derived from experiment or from theoretical electronic-structure calculations performed with ab initio quantum chemistry software such as Gaussian. For accurate reproduction of vibrational spectra, the Morse potential can be used instead, at additional computational cost. The dihedral or torsional terms typically have multiple minima and thus cannot be modeled as harmonic oscillators, though their specific functional form varies with the implementation. This class of terms may include improper dihedral terms, which function as correction factors for out-of-plane deviations (for example, they can be used to keep benzene rings planar, or to correct the geometry and chirality of tetrahedral atoms in a united-atom representation).
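As a concrete illustration of the bond-stretch terms just described, the following Python sketch evaluates a harmonic term and, for comparison, a Morse potential. The parameter values are illustrative assumptions, not taken from any published force field.

```python
import math

def harmonic_bond(r, r0, k):
    """Harmonic bond-stretch energy E = k (r - r0)^2 (one common
    convention; some force fields fold a factor of 1/2 into k)."""
    return k * (r - r0) ** 2

def morse_bond(r, r0, d_e, a):
    """Morse potential E = D_e (1 - exp(-a (r - r0)))^2, more accurate
    for vibrational spectra but costlier to evaluate."""
    return d_e * (1.0 - math.exp(-a * (r - r0))) ** 2

# Hypothetical C-C parameters, for illustration only.
r0 = 1.53  # equilibrium bond length, angstroms
print(harmonic_bond(1.60, r0, k=300.0))          # k in kcal/mol/A^2, assumed
print(morse_bond(1.60, r0, d_e=85.0, a=2.0))     # D_e, a assumed
```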
The non-bonded terms are much more computationally costly to calculate in full, since a typical atom is bonded to only a few of its neighbors but interacts with every other atom in the molecule. Fortunately the van der Waals term falls off rapidly. It is typically modeled using a 6–12 Lennard-Jones potential, which means that attractive forces fall off with distance as r⁻⁶ and repulsive forces as r⁻¹², where r represents the distance between two atoms. The repulsive r⁻¹² part is, however, unphysical, because real repulsion increases exponentially with decreasing distance. Describing van der Waals forces by the Lennard-Jones 6–12 potential therefore introduces inaccuracies which become significant at short distances. Generally a cutoff radius is used to speed up the calculation, so that atom pairs whose distances are greater than the cutoff have a van der Waals interaction energy of zero.
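A minimal Python sketch of the 6–12 Lennard-Jones term with a simple cutoff follows; the parameter values are argon-like and merely illustrative.

```python
def lennard_jones(r, epsilon, sigma, cutoff=10.0):
    """6-12 Lennard-Jones potential with a hard cutoff.

    E(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6); pairs beyond the
    cutoff contribute zero, as described in the text.
    """
    if r >= cutoff:
        return 0.0
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# Near the minimum, r = 2**(1/6) * sigma, the energy equals -epsilon.
print(lennard_jones(2 ** (1 / 6) * 3.4, epsilon=0.238, sigma=3.4))  # ~ -0.238
```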
The electrostatic terms are notoriously difficult to calculate well because they do not fall off rapidly with distance, and long-range electrostatic interactions are often important features of the system under study (especially for proteins). The basic functional form is the Coulomb potential, which falls off only as r⁻¹. A variety of methods are used to address this problem, the simplest being a cutoff radius similar to that used for the van der Waals terms. However, this introduces a sharp discontinuity between atoms inside and atoms outside the radius. Switching or scaling functions that modulate the apparent electrostatic energy are somewhat more accurate: they multiply the calculated energy by a scaling factor that varies smoothly from 1 at an inner cutoff radius to 0 at the outer cutoff radius. Other more sophisticated but computationally intensive methods include particle mesh Ewald (PME) and the multipole algorithm.
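The sketch below shows a Coulomb term damped by a switching function of the kind described above. The particular smoothstep form and the cutoff radii are illustrative choices among several used in practice.

```python
def switched_coulomb(r, q1, q2, r_on=8.0, r_off=10.0, k_e=332.06):
    """Coulomb energy with a smooth switch between r_on and r_off.

    k_e ~ 332.06 kcal*A/(mol*e^2) converts to kcal/mol for charges in
    units of the elementary charge; the cubic switch in r^2 below is
    one common choice, not the only one.
    """
    if r >= r_off:
        return 0.0
    energy = k_e * q1 * q2 / r
    if r <= r_on:
        return energy
    # Scaling factor falls smoothly from 1 at r_on to 0 at r_off.
    t = (r_off**2 - r**2) / (r_off**2 - r_on**2)
    switch = t * t * (3.0 - 2.0 * t)
    return energy * switch

print(switched_coulomb(9.0, 1.0, -1.0))  # partially switched-off attraction
```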
In addition to the functional form of each energy term, a useful energy function must be assigned parameters for force constants, van der Waals multipliers, and other constant terms. These terms, together with the equilibrium bond, angle, and dihedral values, partial charge values, atomic masses and radii, and energy function definitions, are collectively termed a force field. Parameterization is typically done by fitting to experimental values and theoretical calculation results. In its latest MM4 version, Norman L. Allinger's force field calculates heats of formation for hydrocarbons with an RMS error of 0.35 kcal/mol, vibrational spectra with an RMS error of 24 cm⁻¹, rotational barriers with an RMS error of 2.2°, C−C bond lengths within 0.004 Å and C−C−C angles within 1°. Later MM4 versions also cover compounds with heteroatoms such as aliphatic amines.
Each force field is parameterized to be internally consistent, but the parameters are generally not transferable from one force field to another.
== Areas of application ==
The main use of molecular mechanics is in the field of molecular dynamics. This uses the force field to calculate the forces acting on each particle and a suitable integrator to model the dynamics of the particles and predict trajectories. Given enough sampling and subject to the ergodic hypothesis, molecular dynamics trajectories can be used to estimate thermodynamic parameters of a system or probe kinetic properties, such as reaction rates and mechanisms.
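A minimal sketch of the integrator side of molecular dynamics is given below: a velocity-Verlet scheme propagating particles under forces supplied by an arbitrary force function. The harmonic two-particle "force field" is a toy stand-in for a real one, and all units are arbitrary.

```python
import numpy as np

def velocity_verlet(pos, vel, forces_fn, masses, dt, n_steps):
    """Velocity-Verlet integration, the kind of scheme an MD engine uses
    to propagate particles (schematic sketch).

    pos, vel: (N, 3) arrays; masses: (N,); forces_fn(pos) -> (N, 3).
    """
    f = forces_fn(pos)
    for _ in range(n_steps):
        vel += 0.5 * dt * f / masses[:, None]   # half kick
        pos += dt * vel                          # drift
        f = forces_fn(pos)                       # new forces
        vel += 0.5 * dt * f / masses[:, None]   # second half kick
    return pos, vel

# Toy example: two particles joined by a harmonic spring (k = 1, r0 = 1).
def spring_forces(pos, k=1.0, r0=1.0):
    d = pos[1] - pos[0]
    r = np.linalg.norm(d)
    f = -k * (r - r0) * d / r   # force on particle 1
    return np.array([-f, f])

pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
vel = np.zeros((2, 3))
pos, vel = velocity_verlet(pos, vel, spring_forces, np.ones(2),
                           dt=0.01, n_steps=100)
```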
Molecular mechanics is also used within QM/MM, which allows study of proteins and enzyme kinetics. The system is divided into two regions—one of which is treated with quantum mechanics (QM) allowing breaking and formation of bonds and the rest of the protein is modeled using molecular mechanics (MM). MM alone does not allow the study of mechanisms of enzymes, which QM allows. QM also produces more exact energy calculation of the system although it is much more computationally expensive.
Another application of molecular mechanics is energy minimization, whereby the force field is used as an optimization criterion. This method uses an appropriate algorithm (e.g. steepest descent) to find the molecular structure of a local energy minimum. These minima correspond to stable conformers of the molecule (in the chosen force field) and molecular motion can be modelled as vibrations around and interconversions between these stable conformers. It is thus common to find local energy minimization methods combined with global energy optimization, to find the global energy minimum (and other low energy states). At finite temperature, the molecule spends most of its time in these low-lying states, which thus dominate the molecular properties. Global optimization can be accomplished using simulated annealing, the Metropolis algorithm and other Monte Carlo methods, or using different deterministic methods of discrete or continuous optimization. While the force field represents only the enthalpic component of free energy (and only this component is included during energy minimization), it is possible to include the entropic component through the use of additional methods, such as normal mode analysis.
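The sketch below illustrates steepest-descent minimization on a toy one-dimensional energy surface with two minima; starting near one minimum, the algorithm converges to that local minimum, as described above. A fixed step size is assumed for simplicity, whereas practical minimizers use line searches or adaptive steps.

```python
import numpy as np

def steepest_descent(energy_grad, x0, step=0.01, tol=1e-6, max_iter=10000):
    """Steepest descent: repeatedly step along the negative gradient
    (i.e. the force) until it nearly vanishes (sketch only)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = energy_grad(x)
        if np.linalg.norm(g) < tol:
            break
        x -= step * g
    return x

# Toy "conformational" energy E(x) = x^4 - 2 x^2 with minima at x = +/-1.
grad = lambda x: 4 * x**3 - 4 * x
print(steepest_descent(grad, x0=[0.8]))  # converges to the nearby minimum, x = 1
```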
Molecular mechanics potential energy functions have been used to calculate binding constants, protein folding kinetics, protonation equilibria, active site coordinates, and to design binding sites.
== Environment and solvation ==
In molecular mechanics, several ways exist to define the environment surrounding a molecule or molecules of interest. A system can be simulated in vacuum (termed a gas-phase simulation) with no surrounding environment, but this is usually undesirable because it introduces artifacts in the molecular geometry, especially in charged molecules. Surface charges that would ordinarily interact with solvent molecules instead interact with each other, producing molecular conformations that are unlikely to be present in any other environment. The most accurate way to solvate a system is to place explicit water molecules in the simulation box with the molecules of interest and treat the water molecules as interacting particles like those in the other molecule(s). A variety of water models exist with increasing levels of complexity, representing water as a simple hard sphere (a united-atom model), as three separate particles with fixed bond angle, or even as four or five separate interaction centers to account for unpaired electrons on the oxygen atom. As water models grow more complex, related simulations grow more computationally intensive. A compromise method has been found in implicit solvation, which replaces the explicitly represented water molecules with a mathematical expression that reproduces the average behavior of water molecules (or other solvents such as lipids). This method is useful to prevent artifacts that arise from vacuum simulations and reproduces bulk solvent properties well, but cannot reproduce situations in which individual water molecules create specific interactions with a solute that are not well captured by the solvent model, such as water molecules that are part of the hydrogen bond network within a protein.
== Software packages ==
This is a limited list; many more packages are available.
== See also ==
== References ==
== Literature ==
== External links ==
Molecular dynamics simulation methods revised
Molecular mechanics - it is simple
Proteins are large biomolecules and macromolecules that comprise one or more long chains of amino acid residues. Proteins perform a vast array of functions within organisms, including catalysing metabolic reactions, DNA replication, responding to stimuli, providing structure to cells and organisms, and transporting molecules from one location to another. Proteins differ from one another primarily in their sequence of amino acids, which is dictated by the nucleotide sequence of their genes, and which usually results in protein folding into a specific 3D structure that determines its activity.
A linear chain of amino acid residues is called a polypeptide. A protein contains at least one long polypeptide. Short polypeptides, containing less than 20–30 residues, are rarely considered to be proteins and are commonly called peptides. The individual amino acid residues are bonded together by peptide bonds between adjacent amino acid residues. The sequence of amino acid residues in a protein is defined by the sequence of a gene, which is encoded in the genetic code. In general, the genetic code specifies 20 standard amino acids; but in certain organisms the genetic code can include selenocysteine and—in certain archaea—pyrrolysine. Shortly after or even during synthesis, the residues in a protein are often chemically modified by post-translational modification, which alters the physical and chemical properties, folding, stability, activity, and ultimately, the function of the proteins. Some proteins have non-peptide groups attached, which can be called prosthetic groups or cofactors. Proteins can work together to achieve a particular function, and they often associate to form stable protein complexes.
Once formed, proteins only exist for a certain period and are then degraded and recycled by the cell's machinery through the process of protein turnover. A protein's lifespan is measured in terms of its half-life and covers a wide range. They can exist for minutes or years with an average lifespan of 1–2 days in mammalian cells. Abnormal or misfolded proteins are degraded more rapidly either due to being targeted for destruction or due to being unstable.
Like other biological macromolecules such as polysaccharides and nucleic acids, proteins are essential parts of organisms and participate in virtually every process within cells. Many proteins are enzymes that catalyse biochemical reactions and are vital to metabolism. Some proteins have structural or mechanical functions, such as actin and myosin in muscle, and the cytoskeleton's scaffolding proteins that maintain cell shape. Other proteins are important in cell signaling, immune responses, cell adhesion, and the cell cycle. In animals, proteins are needed in the diet to provide the essential amino acids that cannot be synthesized. Digestion breaks the proteins down for metabolic use.
== History and etymology ==
=== Discovery and early studies ===
Proteins have been studied and recognized since the 1700s by Antoine Fourcroy and others, who often collectively called them "albumins", or "albuminous materials" (Eiweisskörper, in German). Gluten, for example, was first separated from wheat in published research around 1747, and later determined to exist in many plants. In 1789, Antoine Fourcroy recognized three distinct varieties of animal proteins: albumin, fibrin, and gelatin. Vegetable (plant) proteins studied in the late 1700s and early 1800s included gluten, plant albumin, gliadin, and legumin.
Proteins were first described by the Dutch chemist Gerardus Johannes Mulder and named by the Swedish chemist Jöns Jacob Berzelius in 1838. Mulder carried out elemental analysis of common proteins and found that nearly all proteins had the same empirical formula, C₄₀₀H₆₂₀N₁₀₀O₁₂₀P₁S₁. He came to the erroneous conclusion that they might be composed of a single type of (very large) molecule. The term "protein" to describe these molecules was proposed by Mulder's associate Berzelius; protein is derived from the Greek word πρώτειος (proteios), meaning "primary", "in the lead", or "standing in front", + -in. Mulder went on to identify the products of protein degradation such as the amino acid leucine for which he found a (nearly correct) molecular weight of 131 Da.
Early nutritional scientists such as the German Carl von Voit believed that protein was the most important nutrient for maintaining the structure of the body, because it was generally believed that "flesh makes flesh." Around 1862, Karl Heinrich Ritthausen isolated the amino acid glutamic acid. Thomas Burr Osborne compiled a detailed review of the vegetable proteins at the Connecticut Agricultural Experiment Station. Working with Lafayette Mendel, Osborne established several nutritionally essential amino acids in feeding experiments with laboratory rats. Diets lacking an essential amino acid stunted the rats' growth, consistent with Liebig's law of the minimum. The final essential amino acid to be discovered, threonine, was identified by William Cumming Rose.
The difficulty in purifying proteins impeded work by early protein biochemists. Proteins could be obtained in large quantities from blood, egg whites, and keratin, but individual proteins were unavailable. In the 1950s, the Armour Hot Dog Company purified 1 kg of bovine pancreatic ribonuclease A and made it freely available to scientists. This gesture helped ribonuclease A become a major target for biochemical study for the following decades.
=== Polypeptides ===
The understanding of proteins as polypeptides, or chains of amino acids, came through the work of Franz Hofmeister and Hermann Emil Fischer in 1902. The central role of proteins as enzymes in living organisms that catalyzed reactions was not fully appreciated until 1926, when James B. Sumner showed that the enzyme urease was in fact a protein.
Linus Pauling is credited with the successful prediction of regular protein secondary structures based on hydrogen bonding, an idea first put forth by William Astbury in 1933. Later work by Walter Kauzmann on denaturation, based partly on previous studies by Kaj Linderstrøm-Lang, contributed an understanding of protein folding and structure mediated by hydrophobic interactions.
The first protein to have its amino acid chain sequenced was insulin, by Frederick Sanger, in 1949. Sanger correctly determined the amino acid sequence of insulin, thus conclusively demonstrating that proteins consisted of linear polymers of amino acids rather than branched chains, colloids, or cyclols. He won the Nobel Prize for this achievement in 1958. Christian Anfinsen's studies of the oxidative folding process of ribonuclease A, for which he won the Nobel Prize in 1972, solidified the thermodynamic hypothesis of protein folding, according to which the folded form of a protein represents its free energy minimum.
=== Structure ===
With the development of X-ray crystallography, it became possible to determine protein structures as well as their sequences. The use of computers and increasing computing power has supported the determination of increasingly complex protein structures. The first protein structures to be solved were hemoglobin by Max Perutz and myoglobin by John Kendrew, in 1958. In 1999, Roger Kornberg determined the highly complex structure of RNA polymerase using high-intensity X-rays from synchrotrons.
Since then, cryo-electron microscopy (cryo-EM) of large macromolecular assemblies has been developed. Cryo-EM uses protein samples that are frozen rather than crystals, and beams of electrons rather than X-rays. It causes less damage to the sample, allowing scientists to obtain more information and analyze larger structures. Computational protein structure prediction of small protein structural domains has helped researchers to approach atomic-level resolution of protein structures.
As of April 2024, the Protein Data Bank contains 181,018 X-ray, 19,809 EM and 12,697 NMR protein structures.
== Classification ==
Proteins are primarily classified by sequence and structure, although other classifications are commonly used. Especially for enzymes the EC number system provides a functional classification scheme. Similarly, gene ontology classifies both genes and proteins by their biological and biochemical function, and by their intracellular location.
Sequence similarity is used to classify proteins both in terms of evolutionary and functional similarity. This may use either whole proteins or protein domains, especially in multi-domain proteins. Protein domains allow protein classification by a combination of sequence, structure and function, and they can be combined in many ways. In an early study of 170,000 proteins, about two-thirds were assigned at least one domain, with larger proteins containing more domains (e.g. proteins larger than 600 amino acids having an average of more than 5 domains).
== Biochemistry ==
Most proteins consist of linear polymers built from a repertoire of up to 20 different L-α-amino acids. All proteinogenic amino acids share a common structure in which an α-carbon is bonded to an amino group, a carboxyl group, and a variable side chain. Only proline differs from this basic structure: its side chain is cyclical, bonding back to the amino group and limiting protein chain flexibility. The side chains of the standard amino acids have a variety of chemical structures and properties, and it is the combined effect of all of the amino acids that determines a protein's three-dimensional structure and chemical reactivity.
The amino acids in a polypeptide chain are linked by peptide bonds between amino and carboxyl groups. An individual amino acid in a chain is called a residue, and the linked series of carbon, nitrogen, and oxygen atoms are known as the main chain or protein backbone.: 19 The peptide bond has two resonance forms that confer some double-bond character to the backbone. The alpha carbons are roughly coplanar with the nitrogen and the carbonyl (C=O) group. The other two dihedral angles in the peptide bond determine the local shape assumed by the protein backbone. One consequence of the N−C(O) double-bond character is that proteins are somewhat rigid.: 31 A polypeptide chain ends with a free amino group, known as the N-terminus or amino terminus, and a free carboxyl group, known as the C-terminus or carboxy terminus. By convention, peptide sequences are written N-terminus to C-terminus, correlating with the order in which proteins are synthesized by ribosomes.
The words protein, polypeptide, and peptide are a little ambiguous and can overlap in meaning. Protein is generally used to refer to the complete biological molecule in a stable conformation, whereas peptide is generally reserved for short amino acid oligomers that often lack a stable 3D structure. But the boundary between the two is not well defined and usually lies near 20–30 residues.
Proteins can interact with many types of molecules and ions, including with other proteins, with lipids, with carbohydrates, and with DNA.
=== Abundance in cells ===
A typical bacterial cell, e.g. E. coli or Staphylococcus aureus, is estimated to contain about 2 million protein molecules. Smaller bacteria, such as Mycoplasma or spirochetes, contain fewer molecules, on the order of 50,000 to 1 million. By contrast, eukaryotic cells are larger and thus contain much more protein. For instance, yeast cells have been estimated to contain about 50 million protein molecules and human cells on the order of 1 to 3 billion. The concentration of individual protein copies ranges from a few molecules per cell up to 20 million. Not all genes coding for proteins are expressed in most cells, and their number depends on, for example, cell type and external stimuli. For instance, of the 20,000 or so proteins encoded by the human genome, only 6,000 are detected in lymphoblastoid cells. The most abundant protein in nature is thought to be RuBisCO, an enzyme that catalyzes the incorporation of carbon dioxide into organic matter in photosynthesis. Plants can consist of as much as 1% by weight of this enzyme.
== Synthesis ==
=== Biosynthesis ===
Proteins are assembled from amino acids using information encoded in genes. Each protein has its own unique amino acid sequence that is specified by the nucleotide sequence of the gene encoding this protein. The genetic code is read in three-nucleotide units called codons, and each three-nucleotide combination designates an amino acid; for example AUG (adenine–uracil–guanine) is the code for methionine. Because DNA contains four nucleotides, the total number of possible codons is 64; hence, there is some redundancy in the genetic code, with some amino acids specified by more than one codon.: 1002–42 Genes encoded in DNA are first transcribed into pre-messenger RNA (mRNA) by proteins such as RNA polymerase. Most organisms then process the pre-mRNA (a primary transcript) using various forms of post-transcriptional modification to form the mature mRNA, which is then used as a template for protein synthesis by the ribosome. In prokaryotes the mRNA may either be used as soon as it is produced, or be bound by a ribosome after having moved away from the nucleoid. In contrast, eukaryotes make mRNA in the cell nucleus and then translocate it across the nuclear membrane into the cytoplasm, where protein synthesis then takes place. The rate of protein synthesis is higher in prokaryotes than in eukaryotes and can reach up to 20 amino acids per second.
The process of synthesizing a protein from an mRNA template is known as translation. The mRNA is loaded onto the ribosome and is read three nucleotides at a time by matching each codon to its base pairing anticodon located on a transfer RNA molecule, which carries the amino acid corresponding to the codon it recognizes. The enzyme aminoacyl tRNA synthetase "charges" the tRNA molecules with the correct amino acids. The growing polypeptide is often termed the nascent chain. Proteins are always biosynthesized from N-terminus to C-terminus.: 1002–42
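The codon-by-codon logic of translation can be sketched in a few lines of Python; the table below contains only a handful of the 64 codons of the standard genetic code, purely for illustration.

```python
# Minimal sketch of translation: read an mRNA 5'->3' three nucleotides
# at a time and build the peptide N-terminus to C-terminus.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": None, "UAG": None, "UGA": None,   # stop codons
}

def translate(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue is None:      # stop codon: release the chain
            break
        peptide.append(residue)
    return "-".join(peptide)

print(translate("AUGUUUGGCAAAUAA"))  # Met-Phe-Gly-Lys
```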
The size of a synthesized protein can be measured by the number of amino acids it contains and by its total molecular mass, which is normally reported in units of daltons (synonymous with atomic mass units), or the derivative unit kilodalton (kDa). The average size of a protein increases from Archaea to Bacteria to Eukaryote (283, 311, 438 residues and 31, 34, 49 kDa respectively) due to a bigger number of protein domains constituting proteins in higher organisms. For instance, yeast proteins are on average 466 amino acids long and 53 kDa in mass. The largest known proteins are the titins, a component of the muscle sarcomere, with a molecular mass of almost 3,000 kDa and a total length of almost 27,000 amino acids.
=== Chemical synthesis ===
Short proteins can be synthesized chemically by a family of peptide synthesis methods. These rely on organic synthesis techniques such as chemical ligation to produce peptides in high yield. Chemical synthesis allows for the introduction of non-natural amino acids into polypeptide chains, such as attachment of fluorescent probes to amino acid side chains. These methods are useful in laboratory biochemistry and cell biology, though generally not for commercial applications. Chemical synthesis is inefficient for polypeptides longer than about 300 amino acids, and the synthesized proteins may not readily assume their native tertiary structure. Most chemical synthesis methods proceed from C-terminus to N-terminus, opposite the biological reaction.
== Structure ==
Most proteins fold into unique 3D structures. The shape into which a protein naturally folds is known as its native conformation.: 36 Although many proteins can fold unassisted, simply through the chemical properties of their amino acids, others require the aid of molecular chaperones to fold into their native states.: 37 Biochemists often refer to four distinct aspects of a protein's structure:: 30–34
Primary structure: the amino acid sequence. A protein is a polyamide.
Secondary structure: regularly repeating local structures stabilized by hydrogen bonds. The most common examples are the α-helix, β-sheet and turns. Because secondary structures are local, many regions of distinct secondary structure can be present in the same protein molecule.
Tertiary structure: the overall shape of a single protein molecule; the spatial relationship of the secondary structures to one another. Tertiary structure is generally stabilized by nonlocal interactions, most commonly the formation of a hydrophobic core, but also through salt bridges, hydrogen bonds, disulfide bonds, and even post-translational modifications. The term "tertiary structure" is often used as synonymous with the term fold. The tertiary structure is what controls the basic function of the protein.
Quaternary structure: the structure formed by several protein molecules (polypeptide chains), usually called protein subunits in this context, which function as a single protein complex.
Quinary structure: the signatures of protein surface that organize the crowded cellular interior. Quinary structure is dependent on transient, yet essential, macromolecular interactions that occur inside living cells.
Proteins are not entirely rigid molecules. In addition to these levels of structure, proteins may shift between several related structures while they perform their functions. In the context of these functional rearrangements, these tertiary or quaternary structures are usually referred to as "conformations", and transitions between them are called conformational changes. Such changes are often induced by the binding of a substrate molecule to an enzyme's active site, or the physical region of the protein that participates in chemical catalysis. In solution, protein structures vary because of thermal vibration and collisions with other molecules.: 368–75
Proteins can be informally divided into three main classes, which correlate with typical tertiary structures: globular proteins, fibrous proteins, and membrane proteins. Almost all globular proteins are soluble and many are enzymes. Fibrous proteins are often structural, such as collagen, the major component of connective tissue, or keratin, the protein component of hair and nails. Membrane proteins often serve as receptors or provide channels for polar or charged molecules to pass through the cell membrane.: 165–85
Intramolecular hydrogen bonds within proteins that are poorly shielded from water attack, and hence promote their own dehydration, are called dehydrons.
=== Protein domains ===
Many proteins are composed of several protein domains, i.e. segments of a protein that fold into distinct structural units.: 134 Domains usually have specific functions, such as enzymatic activities (e.g. kinase) or they serve as binding modules.: 155–156
=== Sequence motif ===
Short amino acid sequences within proteins often act as recognition sites for other proteins. For instance, SH3 domains typically bind to short PxxP motifs (i.e. two prolines [P] separated by two unspecified amino acids [x], although the surrounding amino acids may determine the exact binding specificity). Many such motifs have been collected in the Eukaryotic Linear Motif (ELM) database.
== Cellular functions ==
Proteins are the chief actors within the cell, said to be carrying out the duties specified by the information encoded in genes. With the exception of certain types of RNA, most other biological molecules are relatively inert elements upon which proteins act. Proteins make up half the dry weight of an Escherichia coli cell, whereas other macromolecules such as DNA and RNA make up only 3% and 20%, respectively. The set of proteins expressed in a particular cell or cell type is known as its proteome.: 120
The chief characteristic of proteins that allows their diverse set of functions is their ability to bind other molecules specifically and tightly. The region of the protein responsible for binding another molecule is known as the binding site and is often a depression or "pocket" on the molecular surface. This binding ability is mediated by the tertiary structure of the protein, which defines the binding site pocket, and by the chemical properties of the surrounding amino acids' side chains. Protein binding can be extraordinarily tight and specific; for example, the ribonuclease inhibitor protein binds to human angiogenin with a sub-femtomolar dissociation constant (<10⁻¹⁵ M) but does not bind at all to its amphibian homolog onconase (>1 M). Extremely minor chemical changes, such as the addition of a single methyl group to a binding partner, can sometimes suffice to nearly eliminate binding; for example, the aminoacyl tRNA synthetase specific to the amino acid valine discriminates against the very similar side chain of the amino acid isoleucine.
Proteins can bind to other proteins as well as to small-molecule substrates. When proteins bind specifically to other copies of the same molecule, they can oligomerize to form fibrils; this process occurs often in structural proteins that consist of globular monomers that self-associate to form rigid fibers. Protein–protein interactions regulate enzymatic activity, control progression through the cell cycle, and allow the assembly of large protein complexes that carry out many closely related reactions with a common biological function. Proteins can bind to, or be integrated into, cell membranes. The ability of binding partners to induce conformational changes in proteins allows the construction of enormously complex signaling networks.: 830–49
Because interactions between proteins are reversible and depend heavily on the availability of different groups of partner proteins to form aggregates capable of carrying out discrete sets of functions, studying the interactions between specific proteins is key to understanding important aspects of cellular function and, ultimately, the properties that distinguish particular cell types.
=== Enzymes ===
The best-known role of proteins in the cell is as enzymes, which catalyse chemical reactions. Enzymes are usually highly specific and accelerate only one or a few chemical reactions. Enzymes carry out most of the reactions involved in metabolism, as well as manipulating DNA in processes such as DNA replication, DNA repair, and transcription. Some enzymes act on other proteins to add or remove chemical groups in a process known as post-translational modification. About 4,000 reactions are known to be catalysed by enzymes. The rate acceleration conferred by enzymatic catalysis is often enormous—as much as a 10¹⁷-fold increase in rate over the uncatalysed reaction in the case of orotate decarboxylase (78 million years without the enzyme, 18 milliseconds with the enzyme).
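The orotate decarboxylase figure can be checked with back-of-the-envelope arithmetic, as in the short Python calculation below.

```python
# Half-time of 78 million years without the enzyme versus 18 ms with it
# corresponds to roughly a 10^17-fold rate acceleration.
seconds_per_year = 3.156e7
uncatalysed = 78e6 * seconds_per_year   # ~2.5e15 s
catalysed = 18e-3                       # 18 ms
print(f"rate acceleration ~ {uncatalysed / catalysed:.1e}")  # ~1.4e17
```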
The molecules bound and acted upon by enzymes are called substrates. Although enzymes can consist of hundreds of amino acids, it is usually only a small fraction of the residues that come in contact with the substrate, and an even smaller fraction—three to four residues on average—that are directly involved in catalysis. The region of the enzyme that binds the substrate and contains the catalytic residues is known as the active site.: 389
Dirigent proteins are members of a class of proteins that dictate the stereochemistry of a compound synthesized by other enzymes.
=== Cell signaling and ligand binding ===
Many proteins are involved in the process of cell signaling and signal transduction. Some proteins, such as insulin, are extracellular proteins that transmit a signal from the cell in which they were synthesized to other cells in distant tissues. Others are membrane proteins that act as receptors whose main function is to bind a signaling molecule and induce a biochemical response in the cell. Many receptors have a binding site exposed on the cell surface and an effector domain within the cell, which may have enzymatic activity or may undergo a conformational change detected by other proteins within the cell.: 251–81
Antibodies are protein components of an adaptive immune system whose main function is to bind antigens, or foreign substances in the body, and target them for destruction. Antibodies can be secreted into the extracellular environment or anchored in the membranes of specialized B cells known as plasma cells. Whereas enzymes are limited in their binding affinity for their substrates by the necessity of conducting their reaction, antibodies have no such constraints. An antibody's binding affinity to its target is extraordinarily high.: 275–50
Many ligand transport proteins bind particular small biomolecules and transport them to other locations in the body of a multicellular organism. These proteins must have a high binding affinity when their ligand is present in high concentrations, and release the ligand when it is present at low concentrations in the target tissues. The canonical example of a ligand-binding protein is haemoglobin, which transports oxygen from the lungs to other organs and tissues in all vertebrates and has close homologs in every biological kingdom.: 222–29 Lectins are sugar-binding proteins which are highly specific for their sugar moieties. Lectins typically play a role in biological recognition phenomena involving cells and proteins. Receptors and hormones are highly specific binding proteins.
Transmembrane proteins can serve as ligand transport proteins that alter the permeability of the cell membrane to small molecules and ions. The membrane alone has a hydrophobic core through which polar or charged molecules cannot diffuse. Membrane proteins contain internal channels that allow such molecules to enter and exit the cell. Many ion channel proteins are specialized to select for only a particular ion; for example, potassium and sodium channels often discriminate for only one of the two ions.: 232–34
=== Structural proteins ===
Structural proteins confer stiffness and rigidity to otherwise-fluid biological components. Most structural proteins are fibrous proteins; for example, collagen and elastin are critical components of connective tissue such as cartilage, and keratin is found in hard or filamentous structures such as hair, nails, feathers, hooves, and some animal shells.: 178–81 Some globular proteins can play structural functions, for example, actin and tubulin are globular and soluble as monomers, but polymerize to form long, stiff fibers that make up the cytoskeleton, which allows the cell to maintain its shape and size.: 490
Other proteins that serve structural functions are motor proteins such as myosin, kinesin, and dynein, which are capable of generating mechanical forces. These proteins are crucial for the cellular motility of single-celled organisms and the sperm of many multicellular organisms which reproduce sexually. They generate the forces exerted by contracting muscles: 258–64, 272 and play essential roles in intracellular transport.: 481, 490
== Methods of study ==
Methods commonly used to study protein structure and function include immunohistochemistry, site-directed mutagenesis, X-ray crystallography, nuclear magnetic resonance and mass spectrometry. The activities and structures of proteins may be examined in vitro, in vivo, and in silico. In vitro studies of purified proteins in controlled environments are useful for learning how a protein carries out its function: for example, enzyme kinetics studies explore the chemical mechanism of an enzyme's catalytic activity and its relative affinity for various possible substrate molecules. By contrast, in vivo experiments can provide information about the physiological role of a protein in the context of a cell or even a whole organism, and can often provide more information about protein behavior in different contexts. In silico studies use computational methods to study proteins.
=== Protein purification ===
Proteins may be purified from other cellular components using a variety of techniques such as ultracentrifugation, precipitation, electrophoresis, and chromatography;: 21–24 the advent of genetic engineering has made possible a number of methods to facilitate purification.
To perform in vitro analysis, a protein must be purified away from other cellular components. This process usually begins with cell lysis, in which a cell's membrane is disrupted and its internal contents released into a solution known as a crude lysate. The resulting mixture can be purified using ultracentrifugation, which fractionates the various cellular components into fractions containing soluble proteins; membrane lipids and proteins; cellular organelles; and nucleic acids. Precipitation by a method known as salting out can concentrate the proteins from this lysate. Various types of chromatography are then used to isolate the protein or proteins of interest based on properties such as molecular weight, net charge and binding affinity.: 21–24 The level of purification can be monitored using various types of gel electrophoresis if the desired protein's molecular weight and isoelectric point are known, by spectroscopy if the protein has distinguishable spectroscopic features, or by enzyme assays if the protein has enzymatic activity. Additionally, proteins can be isolated according to their charge using electrofocusing.
For natural proteins, a series of purification steps may be necessary to obtain protein sufficiently pure for laboratory applications. To simplify this process, genetic engineering is often used to add chemical features to proteins that make them easier to purify without affecting their structure or activity. Here, a "tag" consisting of a specific amino acid sequence, often a series of histidine residues (a "His-tag"), is attached to one terminus of the protein. As a result, when the lysate is passed over a chromatography column containing nickel, the histidine residues ligate the nickel and attach to the column while the untagged components of the lysate pass unimpeded. A number of tags have been developed to help researchers purify specific proteins from complex mixtures.
=== Cellular localization ===
The study of proteins in vivo is often concerned with the synthesis and localization of the protein within the cell. Although many intracellular proteins are synthesized in the cytoplasm and membrane-bound or secreted proteins in the endoplasmic reticulum, the specifics of how proteins are targeted to specific organelles or cellular structures are often unclear. A useful technique for assessing cellular localization uses genetic engineering to express in a cell a fusion protein or chimera consisting of the natural protein of interest linked to a "reporter" such as green fluorescent protein (GFP). The fused protein's position within the cell can then be cleanly and efficiently visualized using microscopy.
Other methods for elucidating the cellular location of proteins requires the use of known compartmental markers for regions such as the ER, the Golgi, lysosomes or vacuoles, mitochondria, chloroplasts, plasma membrane, etc. With the use of fluorescently tagged versions of these markers or of antibodies to known markers, it becomes much simpler to identify the localization of a protein of interest. For example, indirect immunofluorescence will allow for fluorescence colocalization and demonstration of location. Fluorescent dyes are used to label cellular compartments for a similar purpose.
Other possibilities exist, as well. For example, immunohistochemistry usually uses an antibody to one or more proteins of interest that are conjugated to enzymes yielding either luminescent or chromogenic signals that can be compared between samples, allowing for localization information. Another applicable technique is cofractionation in sucrose (or other material) gradients using isopycnic centrifugation. While this technique does not prove colocalization of a compartment of known density and the protein of interest, it indicates an increased likelihood.
Finally, the gold-standard method of cellular localization is immunoelectron microscopy. This technique uses an antibody to the protein of interest, along with classical electron microscopy techniques. The sample is prepared for normal electron microscopic examination, and then treated with an antibody to the protein of interest that is conjugated to an extremely electro-dense material, usually gold. This allows for the localization of both ultrastructural details as well as the protein of interest.
Through another genetic engineering application known as site-directed mutagenesis, researchers can alter the protein sequence and hence its structure, cellular localization, and susceptibility to regulation. This technique even allows the incorporation of unnatural amino acids into proteins, using modified tRNAs, and may allow the rational design of new proteins with novel properties.
=== Proteomics ===
The total complement of proteins present at a time in a cell or cell type is known as its proteome, and the study of such large-scale data sets defines the field of proteomics, named by analogy to the related field of genomics. Key experimental techniques in proteomics include 2D electrophoresis, which allows the separation of many proteins, mass spectrometry, which allows rapid high-throughput identification of proteins and sequencing of peptides (most often after in-gel digestion), protein microarrays, which allow the detection of the relative levels of the various proteins present in a cell, and two-hybrid screening, which allows the systematic exploration of protein–protein interactions. The total complement of biologically possible such interactions is known as the interactome. A systematic attempt to determine the structures of proteins representing every possible fold is known as structural genomics.
=== Structure determination ===
Discovering the tertiary structure of a protein, or the quaternary structure of its complexes, can provide important clues about how the protein performs its function and how it can be affected, i.e. in drug design. As proteins are too small to be seen under a light microscope, other methods have to be employed to determine their structure. Common experimental methods include X-ray crystallography and NMR spectroscopy, both of which can produce structural information at atomic resolution. NMR experiments, however, provide information from which a subset of distances between pairs of atoms can be estimated, and the final possible conformations for a protein are determined by solving a distance geometry problem. Dual polarisation interferometry is a quantitative analytical method for measuring the overall protein conformation and conformational changes due to interactions or other stimuli. Circular dichroism is another laboratory technique for determining the internal β-sheet / α-helical composition of proteins. Cryoelectron microscopy is used to produce lower-resolution structural information about very large protein complexes, including assembled viruses;: 340–41 a variant known as electron crystallography can produce high-resolution information in some cases, especially for two-dimensional crystals of membrane proteins. Solved structures are usually deposited in the Protein Data Bank (PDB), a freely available resource from which structural data about thousands of proteins can be obtained in the form of Cartesian coordinates for each atom in the protein.
Many more gene sequences are known than protein structures. Further, the set of solved structures is biased toward proteins that can be easily subjected to the conditions required in X-ray crystallography, one of the major structure determination methods. In particular, globular proteins are comparatively easy to crystallize in preparation for X-ray crystallography. Membrane proteins and large protein complexes, by contrast, are difficult to crystallize and are underrepresented in the PDB. Structural genomics initiatives have attempted to remedy these deficiencies by systematically solving representative structures of major fold classes. Protein structure prediction methods attempt to provide a means of generating a plausible structure for proteins whose structures have not been experimentally determined.
=== Structure prediction ===
Complementary to the field of structural genomics, protein structure prediction develops efficient mathematical models of proteins to predict their three-dimensional structures computationally, rather than determining them by laboratory observation. The most successful type of structure prediction, known as homology modeling, relies on the existence of a "template" structure with sequence similarity to the protein being modeled; structural genomics' goal is to provide sufficient representation in solved structures to model most of those that remain. Although producing accurate models remains a challenge when only distantly related template structures are available, it has been suggested that sequence alignment is the bottleneck in this process, as quite accurate models can be produced if a "perfect" sequence alignment is known. Many structure prediction methods have served to inform the emerging field of protein engineering, in which novel protein folds have already been designed. Many proteins (in eukaryotes ~33%) contain large unstructured but biologically functional segments and can be classified as intrinsically disordered proteins. Predicting and analysing protein disorder is an important part of protein structure characterisation.
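Because the passage above singles out sequence alignment as the bottleneck of homology modeling, a sketch of the textbook global-alignment recurrence (Needleman–Wunsch) may be useful. The match/mismatch/gap scores and the example sequences are illustrative assumptions; real pipelines use substitution matrices such as BLOSUM and affine gap penalties.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score by dynamic programming (toy scoring)."""
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap                    # gaps down the first column
    for j in range(1, m + 1):
        F[0][j] = j * gap                    # gaps along the first row
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[n][m]

print(needleman_wunsch("HEAGAWGHEE", "PAWHEAE"))   # toy amino-acid strings
```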
=== In silico simulation of dynamical processes ===
A more complex computational problem is the prediction of intermolecular interactions, such as in molecular docking, protein folding, protein–protein interaction and chemical reactivity. Mathematical models to simulate these dynamical processes involve molecular mechanics, in particular, molecular dynamics. In this regard, in silico simulations discovered the folding of small α-helical protein domains such as the villin headpiece and the HIV accessory protein, and hybrid methods combining standard molecular dynamics with quantum mechanical mathematics have explored the electronic states of rhodopsins.
Beyond classical molecular dynamics, quantum dynamics methods allow the simulation of proteins in atomistic detail with an accurate description of quantum mechanical effects. Examples include the multi-layer multi-configuration time-dependent Hartree method and the hierarchical equations of motion approach, which have been applied to plant cryptochromes and bacteria light-harvesting complexes, respectively. Both quantum and classical mechanical simulations of biological-scale systems are extremely computationally demanding, so distributed computing initiatives such as the Folding@home project facilitate the molecular modeling by exploiting advances in GPU parallel processing and Monte Carlo techniques.
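As a sketch of the integration loop at the heart of classical molecular dynamics, the velocity Verlet scheme applied to a single harmonic "bond" is shown below; all parameter values are illustrative reduced units, and production force fields sum many bonded and non-bonded terms rather than this single interaction.

```python
# Velocity Verlet integration of one harmonic bond (reduced units).
k, r0, m, dt = 100.0, 1.0, 1.0, 0.01   # force constant, rest length, mass, timestep
x, v = 1.2, 0.0                        # start from a stretched bond

def force(x):
    return -k * (x - r0)               # Hooke's-law restoring force

for step in range(1000):
    a = force(x) / m
    x += v * dt + 0.5 * a * dt ** 2    # position update
    a_new = force(x) / m               # force at the new position
    v += 0.5 * (a + a_new) * dt        # velocity update (average acceleration)

print(f"bond length after 1000 steps: {x:.3f}")
```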
=== Chemical analysis ===
The total nitrogen content of organic matter is mainly formed by the amino groups in proteins. The Total Kjeldahl Nitrogen (TKN) is a measure of nitrogen widely used in the analysis of (waste) water, soil, food, feed and organic matter in general. As the name suggests, the Kjeldahl method is applied. More sensitive methods are available.
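As a sketch of how a TKN measurement is conventionally turned into a protein estimate: crude protein is commonly taken as nitrogen multiplied by 6.25, on the assumption that proteins average roughly 16% nitrogen by mass; material-specific factors (e.g. about 5.7 for wheat) differ, and the sample value below is hypothetical.

```python
def crude_protein(nitrogen, factor=6.25):
    """Estimate crude protein from Kjeldahl nitrogen.
    The default factor 6.25 assumes ~16% nitrogen by mass."""
    return nitrogen * factor

print(crude_protein(40.0))   # 40 mg/L TKN -> about 250 mg/L crude protein
```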
== Digestion ==
In the absence of catalysts, proteins are slow to hydrolyze. The breakdown of proteins to small peptides and amino acids (proteolysis) is a step in digestion; these breakdown products are then absorbed in the small intestine. The hydrolysis of proteins relies on enzymes called proteases or peptidases. Proteases, which are themselves proteins, come in several types according to the particular peptide bonds that they cleave as well as their tendency to cleave peptide bonds at the terminus of a protein (exopeptidases) vs peptide bonds at the interior of the protein (endopeptidases). Pepsin is an endopeptidase in the stomach. Subsequent to the stomach, the pancreas secretes other proteases to complete the hydrolysis; these include trypsin and chymotrypsin.
Protein hydrolysis is employed commercially as a means of producing amino acids from bulk sources of protein, such as blood meal, feathers, and keratin. Such materials are treated with hot hydrochloric acid, which effects the hydrolysis of the peptide bonds.
== Mechanical properties ==
The mechanical properties of proteins are highly diverse and are often central to their biological function, as in the case of proteins like keratin and collagen. For instance, the ability of muscle tissue to continually expand and contract is directly tied to the elastic properties of its underlying protein makeup. Beyond fibrous proteins, the conformational dynamics of enzymes and the structure of biological membranes, among other biological functions, are governed by the mechanical properties of the proteins. Outside of their biological context, the unique mechanical properties of many proteins, along with their relative sustainability when compared to synthetic polymers, have made them desirable targets for next-generation materials design.
Young's modulus, E, is calculated as the axial stress σ over the resulting strain ε. It is a measure of the relative stiffness of a material. In the context of proteins, this stiffness often directly correlates to biological function. For example, collagen, found in connective tissue, bones, and cartilage, and keratin, found in nails, claws, and hair, have observed stiffnesses that are several orders of magnitude higher than that of elastin, which is thought to give elasticity to structures such as blood vessels, pulmonary tissue, and bladder tissue, among others. In comparison to this, globular proteins, such as bovine serum albumin, which float relatively freely in the cytosol and often function as enzymes (and thus undergo frequent conformational changes), have comparably much lower Young's moduli.
The Young's modulus of a single protein can be found through molecular dynamics simulation. Using either atomistic force fields, such as CHARMM or GROMOS, or coarse-grained force fields like Martini, a single protein molecule can be stretched by a uniaxial force while the resulting extension is recorded in order to calculate the strain. Experimentally, methods such as atomic force microscopy can be used to obtain similar data. The internal dynamics of proteins involve subtle elastic and plastic deformations induced by viscoelastic forces, which can be probed by nano-rheology techniques. These estimates yield typical spring constants around k ≈ 100 pN/nm, equivalent to Young's moduli of E ≈ 100 MPa, and typical friction coefficients of γ ≈ 0.1 pN·s/nm, corresponding to a viscosity of η ≈ 0.01 pN·s/nm² = 10⁷ cP (that is, about 10⁷ times more viscous than water).
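The conversion quoted above from a spring constant to a Young's modulus can be checked by treating the protein as a uniform elastic rod, for which E ≈ kL/A; the length and cross-sectional area below are illustrative assumptions for a small globular protein, not measured values.

```python
# Order-of-magnitude check: E = k * L / A for an elastic rod.
k = 100e-12 / 1e-9     # 100 pN/nm converted to N/m
L = 4e-9               # assumed rod length, 4 nm
A = 4e-18              # assumed cross-section, 4 nm^2
E = k * L / A          # Young's modulus in pascals
print(f"E ≈ {E / 1e6:.0f} MPa")   # ≈ 100 MPa, consistent with the text
```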
At the macroscopic level, the Young's modulus of cross-linked protein networks can be obtained through more traditional mechanical testing, and experimentally observed values have been reported for a number of proteins.
== See also ==
== References ==
== Further reading ==
Textbooks
History
Tanford C, Reynolds JA (2001). Nature's Robots: A History of Proteins. Oxford New York: Oxford University Press, USA. ISBN 978-0-19-850466-5.
== External links ==
=== Databases and projects ===
NCBI Entrez Protein database
NCBI Protein Structure database
Human Protein Reference Database
Human Proteinpedia
Folding@Home (Stanford University) Archived 2012-09-08 at the Wayback Machine
Protein Databank in Europe (see also PDBeQuips, short articles and tutorials on interesting PDB structures)
Research Collaboratory for Structural Bioinformatics (see also Molecule of the Month Archived 2020-07-24 at the Wayback Machine, presenting short accounts on selected proteins from the PDB)
Proteopedia – Life in 3D: rotatable, zoomable 3D model with wiki annotations for every known protein molecular structure.
UniProt the Universal Protein Resource
=== Tutorials and educational websites ===
"An Introduction to Proteins" from HOPES (Huntington's Disease Outreach Project for Education at Stanford)
Proteins: Biogenesis to Degradation – The Virtual Library of Biochemistry and Cell Biology | Wikipedia/Protein |
In materials science, slip is the large displacement of one part of a crystal relative to another part along crystallographic planes and directions. Slip occurs by the passage of dislocations on close-packed planes, which are planes containing the greatest number of atoms per area, and in close-packed directions (most atoms per length). Close-packed planes are known as slip or glide planes. A slip system describes the set of symmetrically identical slip planes and associated family of slip directions for which dislocation motion can easily occur and lead to plastic deformation. The magnitude and direction of slip are represented by the Burgers vector, b.
An external force makes parts of the crystal lattice glide along each other, changing the material's geometry. A critical resolved shear stress is required to initiate slip.
== Slip systems ==
=== Face centered cubic crystals ===
Slip in face centered cubic (fcc) crystals occurs along the close-packed plane. Specifically, the slip plane is of type {111}, and the direction is of type <110>. A typical specific combination is the (111) plane paired with the [110] direction.
Given the permutations of the slip plane types and direction types, fcc crystals have 12 slip systems. In the fcc lattice, the norm of the Burgers vector, b, can be calculated using the following equation:
{\displaystyle |b|={\frac {a}{2}}|\langle 110\rangle |={\frac {a{\sqrt {2}}}{2}}}
Where a is the lattice constant of the unit cell.
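Evaluating this formula for a concrete case, here aluminium, an fcc metal with the textbook lattice constant a ≈ 0.405 nm:

```python
import math

def burgers_fcc(a):
    """|b| = (a/2)|<110>| = a*sqrt(2)/2 for fcc slip."""
    return a * math.sqrt(2) / 2

print(burgers_fcc(0.405))   # ≈ 0.286 nm for aluminium
```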
=== Body centered cubic crystals ===
Slip in body-centered cubic (bcc) crystals occurs along the plane of shortest Burgers vector as well; however, unlike fcc, there are no truly close-packed planes in the bcc crystal structure.
Thus, slip in bcc crystals generally requires thermal activation.
Some bcc materials (e.g. α-Fe) can contain up to 48 slip systems.
There are six slip planes of type {110}, each with two <111> directions (12 systems). There are 24 {123} and 12 {112} planes each with one <111> direction (36 systems, for a total of 48). Although the number of possible slip systems is much higher in bcc crystals than fcc crystals, the ductility is not necessarily higher due to increased lattice friction stresses.
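These counts can be verified by enumerating the plane normals of each family together with the <111> directions and keeping the pairs in which the direction lies in the plane (zero dot product with the normal); a sketch:

```python
from itertools import permutations, product

def family(indices):
    """Distinct members of a {hkl} family, counted up to overall sign."""
    vecs = set()
    for perm in permutations(indices):
        for signs in product((1, -1), repeat=3):
            v = tuple(p * s for p, s in zip(perm, signs))
            neg = tuple(-c for c in v)
            vecs.add(v if v > neg else neg)   # canonical sign choice
    return vecs

dirs = family((1, 1, 1))                      # the four <111> directions
for plane in [(1, 1, 0), (1, 1, 2), (1, 2, 3)]:
    normals = family(plane)
    systems = sum(1 for n in normals for d in dirs
                  if sum(a * b for a, b in zip(n, d)) == 0)
    print(plane, len(normals), "planes,", systems, "slip systems")
# {110}: 6 planes, 12 systems; {112}: 12 planes, 12; {123}: 24 planes, 24
```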
While the {123} and {112} planes are not exactly identical in activation energy to {110}, they are so close in energy that for all intents and purposes they can be treated as identical.
A typical specific combination is the (110) slip plane and the [111] slip direction, for which the norm of the Burgers vector is:
{\displaystyle |b|={\frac {a}{2}}|\langle 111\rangle |={\frac {{\sqrt {3}}a}{2}}}
=== Hexagonal close packed crystals ===
Slip in hexagonal close packed (hcp) metals is much more limited than in bcc and fcc crystal structures. Usually, hcp crystal structures allow slip on the densely packed basal {0001} planes along the <1120> directions.
The activation of other slip planes depends on various parameters, e.g. the c/a ratio.
Since there are only 2 independent slip systems on the basal planes, additional slip or twin systems need to be activated for arbitrary plastic deformation. This typically requires a much higher resolved shear stress and can result in the brittle behavior of some hcp polycrystals. However, other hcp materials such as pure titanium show large amounts of ductility.
Cadmium, zinc, magnesium, titanium, and beryllium have a slip plane at {0001} and a slip direction of <1120>. This creates a total of three slip systems, depending on orientation. Other combinations are also possible.
There are two types of dislocations in crystals that can induce slip - edge dislocations and screw dislocations. Edge dislocations have the direction of the Burgers vector perpendicular to the dislocation line, while screw dislocations have the direction of the Burgers vector parallel to the dislocation line. The type of dislocations generated largely depends on the direction of the applied stress, temperature, and other factors. Screw dislocations can easily cross slip from one plane to another if the other slip plane contains the direction of the Burgers vector.
== Slip band ==
Formation of slip bands indicates a concentrated unidirectional slip on certain planes, causing a stress concentration. Typically, slip bands induce surface steps (i.e. roughness due to persistent slip bands during fatigue) and a stress concentration which can be a crack nucleation site. Slip bands extend until impinged by a boundary, and the generated stress from dislocation pile-up against that boundary will either stop or transmit the operating slip.
Slip bands formed under cyclic loading are termed persistent slip bands (PSBs), whereas those formed under monotonic loading are termed dislocation planar arrays (or simply slip bands). Monotonic slip bands can be viewed as boundary sliding due to dislocation glide, lacking the strong localisation of plastic deformation that PSBs exhibit in the form of tongue- and ribbon-like extrusions. PSBs are normally studied with the (effective) Burgers vector aligned with the extrusion plane, because a PSB extends across the grain and intensifies during fatigue; a monotonic slip band has one Burgers vector for propagation and another for plane extrusion, both controlled by the conditions at the tip.
== Identification of slip activity ==
The main methods of identifying the active slip system are slip trace analysis of single crystals or polycrystals, diffraction techniques such as neutron diffraction and high angular resolution electron backscatter diffraction elastic strain analysis, and transmission electron microscopy diffraction imaging of dislocations.
In slip trace analysis, only the slip plane is measured, and the slip direction is inferred. In zirconium, for example, this enables the identification of slip activity on a basal, prism, or 1st/2nd order pyramidal plane. In the case of a 1st-order pyramidal plane trace, the slip could be in either ⟨𝑎⟩ or ⟨𝑐 + 𝑎⟩ directions; slip trace analysis cannot discriminate between these.
Diffraction-based studies measure the residual dislocation content instead of the slipped dislocations, which is only a good approximation for systems that accumulate networks of geometrically necessary dislocations, such as face-centred cubic polycrystals. In low-symmetry crystals such as hexagonal zirconium, there could be regions of predominantly single slip where geometrically necessary dislocations may not accumulate. Residual dislocation content does not distinguish between glissile and sessile dislocations. Glissile dislocations contribute to slip and hardening, but sessile dislocations contribute only to latent hardening.
Diffraction methods cannot generally resolve the slip plane of a residual dislocation. For example, in Zr, the screw components of ⟨𝑎⟩ dislocations could slip on prismatic, basal, or 1st-order pyramidal planes. Similarly, ⟨𝑐 + 𝑎⟩ screw dislocations could slip on either 1st or 2nd order pyramidal planes.
== See also ==
Miller indices
Persistent slip bands
== References ==
== External links ==
An online tutorial on slip, explained on DoITPoMS | Wikipedia/Slip_(materials_science) |
A macromolecule is a very large molecule important to biological processes, such as a protein or nucleic acid. It is composed of thousands of covalently bonded atoms. Many macromolecules are polymers of smaller molecules called monomers. The most common macromolecules in biochemistry are biopolymers (nucleic acids, proteins, and carbohydrates) and large non-polymeric molecules such as lipids, nanogels and macrocycles. Synthetic fibers and experimental materials such as carbon nanotubes are also examples of macromolecules.
== Definition ==
The term macromolecule (macro- + molecule) was coined by Nobel laureate Hermann Staudinger in the 1920s, although his first relevant publication on this field only mentions high molecular compounds (in excess of 1,000 atoms). At that time the term polymer, as introduced by Berzelius in 1832, had a different meaning from that of today: it was simply another form of isomerism, for example with benzene and acetylene, and had little to do with size.
Usage of the term to describe large molecules varies among the disciplines. For example, while biology refers to macromolecules as the four large molecules comprising living things, in chemistry, the term may refer to aggregates of two or more molecules held together by intermolecular forces rather than covalent bonds but which do not readily dissociate.
According to the standard IUPAC definition, the term macromolecule as used in polymer science refers only to a single molecule. For example, a single polymeric molecule is appropriately described as a "macromolecule" or "polymer molecule" rather than a "polymer," which suggests a substance composed of macromolecules.
Because of their size, macromolecules are not conveniently described in terms of stoichiometry alone. The structure of simple macromolecules, such as homopolymers, may be described in terms of the individual monomer subunit and total molecular mass. Complicated biomacromolecules, on the other hand, require multi-faceted structural description such as the hierarchy of structures used to describe proteins. In British English, such a molecule tends to be called a "high polymer".
== Properties ==
Macromolecules often have unusual physical properties that do not occur for smaller molecules.
Another common macromolecular property that does not characterize smaller molecules is their relative insolubility in water and similar solvents, instead forming colloids. Many require salts or particular ions to dissolve in water. Similarly, many proteins will denature if the solute concentration of their solution is too high or too low.
High concentrations of macromolecules in a solution can alter the rates and equilibrium constants of the reactions of other macromolecules, through an effect known as macromolecular crowding. This comes from macromolecules excluding other molecules from a large part of the volume of the solution, thereby increasing the effective concentrations of these molecules.
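A crude first-order sketch of this excluded-volume effect: if crowding macromolecules exclude a volume fraction φ, the remaining solutes are confined to the accessible fraction (1 − φ) of the solution, which raises their effective concentration. Real crowding effects on thermodynamic activities are stronger and nonlinear; the volume fraction used below is an illustrative value only.

```python
def effective_concentration(c, excluded_fraction):
    """Naive excluded-volume estimate: solutes see only (1 - phi) of
    the volume; real activity coefficients rise much faster."""
    return c / (1.0 - excluded_fraction)

print(effective_concentration(1.0, 0.3))   # ~1.43x increase for phi = 0.3
```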
== Major macromolecules ==
Proteins are polymers of amino acids joined by peptide bonds.
DNA and RNA are polymers of nucleotides joined by phosphodiester bonds. These nucleotides consist of a phosphate group, a sugar (ribose in the case of RNA, deoxyribose in the case of DNA), and a nucleotide base (either adenine, guanine, thymine, uracil, or cytosine, where thymine occurs only in DNA and uracil only in RNA).
Polysaccharides (such as starch, cellulose, and chitin) are polymers of monosaccharides joined by glycosidic bonds.
Some lipids (organic nonpolar molecules) are macromolecules, with a variety of different structures.
== Linear biopolymers ==
All living organisms are dependent on three essential biopolymers for their biological functions: DNA, RNA and proteins. Each of these molecules is required for life since each plays a distinct, indispensable role in the cell. The simple summary is that DNA makes RNA, and then RNA makes proteins.
DNA, RNA, and proteins all consist of a repeating structure of related building blocks (nucleotides in the case of DNA and RNA, amino acids in the case of proteins). In general, they are all unbranched polymers, and so can be represented in the form of a string. Indeed, they can be viewed as a string of beads, with each bead representing a single nucleotide or amino acid monomer linked together through covalent chemical bonds into a very long chain.
In most cases, the monomers within the chain have a strong propensity to interact with other amino acids or nucleotides. In DNA and RNA, this can take the form of Watson–Crick base pairs (G–C and A–T or A–U), although many more complicated interactions can and do occur.
=== Structural features ===
Because of the double-stranded nature of DNA, essentially all of the nucleotides take the form of Watson–Crick base pairs between nucleotides on the two complementary strands of the double helix.
In contrast, both RNA and proteins are normally single-stranded. Therefore, they are not constrained by the regular geometry of the DNA double helix, and so fold into complex three-dimensional shapes dependent on their sequence. These different shapes are responsible for many of the common properties of RNA and proteins, including the formation of specific binding pockets, and the ability to catalyse biochemical reactions.
==== DNA is optimised for encoding information ====
DNA is an information storage macromolecule that encodes the complete set of instructions (the genome) that are required to assemble, maintain, and reproduce every living organism.
DNA and RNA are both capable of encoding genetic information, because there are biochemical mechanisms which read the information coded within a DNA or RNA sequence and use it to generate a specified protein. On the other hand, the sequence information of a protein molecule is not used by cells to functionally encode genetic information.
DNA has three primary attributes that allow it to be far better than RNA at encoding genetic information. First, it is normally double-stranded, so that there are a minimum of two copies of the information encoding each gene in every cell. Second, DNA has a much greater stability against breakdown than does RNA, an attribute primarily associated with the absence of the 2'-hydroxyl group within every nucleotide of DNA. Third, highly sophisticated DNA surveillance and repair systems are present which monitor damage to the DNA and repair the sequence when necessary. Analogous systems have not evolved for repairing damaged RNA molecules. Consequently, chromosomes can contain many billions of atoms, arranged in a specific chemical structure.
==== Proteins are optimised for catalysis ====
Proteins are functional macromolecules responsible for catalysing the biochemical reactions that sustain life. Proteins carry out all functions of an organism, for example photosynthesis, neural function, vision, and movement.
The single-stranded nature of protein molecules, together with their composition of 20 or more different amino acid building blocks, allows them to fold into a vast number of different three-dimensional shapes, while providing binding pockets through which they can specifically interact with all manner of molecules. In addition, the chemical diversity of the different amino acids, together with different chemical environments afforded by local 3D structure, enables many proteins to act as enzymes, catalyzing a wide range of specific biochemical transformations within cells. In addition, proteins have evolved the ability to bind a wide range of cofactors and coenzymes, smaller molecules that can endow the protein with specific activities beyond those associated with the polypeptide chain alone.
==== RNA is multifunctional ====
RNA is multifunctional; its primary function is to encode proteins, according to the instructions within a cell's DNA. RNA molecules also control and regulate many aspects of protein synthesis in eukaryotes.
RNA encodes genetic information that can be translated into the amino acid sequence of proteins, as evidenced by the messenger RNA molecules present within every cell, and the RNA genomes of a large number of viruses. The single-stranded nature of RNA, together with tendency for rapid breakdown and a lack of repair systems means that RNA is not so well suited for the long-term storage of genetic information as is DNA.
In addition, RNA is a single-stranded polymer that can, like proteins, fold into a very large number of three-dimensional structures. Some of these structures provide binding sites for other molecules and chemically active centers that can catalyze specific chemical reactions on those bound molecules. The limited number of different building blocks of RNA (4 nucleotides vs >20 amino acids in proteins), together with their lack of chemical diversity, results in catalytic RNA (ribozymes) being generally less-effective catalysts than proteins for most biological reactions.
== Branched biopolymers ==
Carbohydrate macromolecules (polysaccharides) are formed from polymers of monosaccharides. Because monosaccharides have multiple functional groups, polysaccharides can form linear polymers (e.g. cellulose) or complex branched structures (e.g. glycogen). Polysaccharides perform numerous roles in living organisms, acting as energy stores (e.g. starch) and as structural components (e.g. chitin in arthropods and fungi). Many carbohydrates contain modified monosaccharide units that have had functional groups replaced or removed.
Polyphenols consist of a branched structure of multiple phenolic subunits. They can perform structural roles (e.g. lignin) as well as roles as secondary metabolites involved in signalling, pigmentation and defense.
== Synthetic macromolecules ==
Some examples of macromolecules are synthetic polymers (plastics, synthetic fibers, and synthetic rubber), graphene, and carbon nanotubes. Polymers may be prepared from inorganic matter as well as for instance in inorganic polymers and geopolymers. The incorporation of inorganic elements enables the tunability of properties and/or responsive behavior as for instance in smart inorganic polymers.
== See also ==
List of biophysically important macromolecular crystal structures
Small molecule
Soft matter
== References ==
== External links ==
Synopsis of Chapter 5, Campbell & Reece, 2002
Lecture notes on the structure and function of macromolecules
Several (free) introductory macromolecule related internet-based courses Archived 2011-07-18 at the Wayback Machine
Giant Molecules! by Ulysses Magee, ISSA Review Winter 2002–2003, ISSN 1540-9864. Cached HTML version of a missing PDF file. Retrieved March 10, 2010. The article is based on the book, Inventing Polymer Science: Staudinger, Carothers, and the Emergence of Macromolecular Chemistry by Yasu Furukawa. | Wikipedia/Macromolecule |
The crystallographic restriction theorem in its basic form was based on the observation that the rotational symmetries of a crystal are usually limited to 2-fold, 3-fold, 4-fold, and 6-fold. However, quasicrystals can occur with other diffraction pattern symmetries, such as 5-fold; these were not discovered until 1982 by Dan Shechtman.
Crystals are modeled as discrete lattices, generated by a list of independent finite translations (Coxeter 1989). Because discreteness requires that the spacings between lattice points have a lower bound, the group of rotational symmetries of the lattice at any point must be a finite group (alternatively, the point is the only system allowing for infinite rotational symmetry). The strength of the theorem is that not all finite groups are compatible with a discrete lattice; in any dimension, we will have only a finite number of compatible groups.
== Dimensions 2 and 3 ==
The special cases of 2D (wallpaper groups) and 3D (space groups) are most heavily used in applications, and they can be treated together.
=== Lattice proof ===
A rotation symmetry in dimension 2 or 3 must move a lattice point to a succession of other lattice points in the same plane, generating a regular polygon of coplanar lattice points. We now confine our attention to the plane in which the symmetry acts (Scherrer 1946), illustrated with lattice vectors in the figure.
Now consider an 8-fold rotation, and the displacement vectors between adjacent points of the polygon. If a displacement exists between any two lattice points, then that same displacement is repeated everywhere in the lattice. So collect all the edge displacements to begin at a single lattice point. The edge vectors become radial vectors, and their 8-fold symmetry implies a regular octagon of lattice points around the collection point. But this is impossible, because the new octagon is about 80% as large as the original. The significance of the shrinking is that it is unlimited. The same construction can be repeated with the new octagon, and again and again until the distance between lattice points is as small as we like; thus no discrete lattice can have 8-fold symmetry. The same argument applies to any k-fold rotation, for k greater than 6.
A shrinking argument also eliminates 5-fold symmetry. Consider a regular pentagon of lattice points. If it exists, then we can take every other edge displacement and (head-to-tail) assemble a 5-point star, with the last edge returning to the starting point. The vertices of such a star are again vertices of a regular pentagon with 5-fold symmetry, but about 60% smaller than the original.
Thus the theorem is proved.
The existence of quasicrystals and Penrose tilings shows that the assumption of a linear translation is necessary. Penrose tilings may have 5-fold rotational symmetry and a discrete lattice, and any local neighborhood of the tiling is repeated infinitely many times, but there is no linear translation for the tiling as a whole. And without the discrete lattice assumption, the above construction not only fails to reach a contradiction, but produces a (non-discrete) counterexample. Thus 5-fold rotational symmetry cannot be eliminated by an argument missing either of those assumptions. A Penrose tiling of the whole (infinite) plane can only have exact 5-fold rotational symmetry (of the whole tiling) about a single point, however, whereas the 4-fold and 6-fold lattices have infinitely many centres of rotational symmetry.
=== Trigonometry proof ===
Consider two lattice points A and B separated by a translation vector r. Consider an angle α such that a rotation of angle α about any lattice point is a symmetry of the lattice. Rotating about point B by α maps point A to a new point A'. Similarly, rotating about point A by α maps B to a point B'. Since both rotations mentioned are symmetry operations, A' and B' must both be lattice points. Due to periodicity of the crystal, the new vector r' which connects them must be equal to an integer multiple of r:
{\displaystyle \mathbf {r} '=m\mathbf {r} }
with m an integer. The four translation vectors, three of length r = |r| and one, connecting A' and B', of length r' = |r'|, form a trapezium. Therefore, the length of r' is also given by:
{\displaystyle r'=2r\cos \alpha -r.}
Combining the two equations gives:
{\displaystyle \cos \alpha ={\frac {m+1}{2}}={\frac {M}{2}}}
where M = m + 1 is also an integer. Bearing in mind that |cos α| ≤ 1, the allowed integers are M ∈ {−2, −1, 0, 1, 2}. Solving for the possible values of α reveals that the only values in the 0° to 180° range are 0°, 60°, 90°, 120°, and 180°. In radians, the only allowed rotations consistent with lattice periodicity are given by 2π/n, where n = 1, 2, 3, 4, 6. This corresponds to 1-, 2-, 3-, 4-, and 6-fold symmetry, respectively, and therefore excludes the possibility of 5-fold or greater than 6-fold symmetry.
=== Short trigonometry proof ===
Consider a line of atoms A-O-B, separated by distance a. Rotate the entire row by θ = +2π/n and θ = −2π/n, with point O kept fixed. After the rotation by +2π/n, A is moved to the lattice point C and after the rotation by -2π/n, B is moved to the lattice point D. Due to the assumed periodicity of the lattice, the two lattice points C and D will be also in a line directly below the initial row; moreover C and D will be separated by r = ma, with m an integer. But by trigonometry, the separation between these points is:
{\displaystyle 2a\cos {\theta }=2a\cos {\frac {2\pi }{n}}.}
Equating the two relations gives:
{\displaystyle 2\cos {\frac {2\pi }{n}}=m}
This is satisfied only by n = 1, 2, 3, 4, 6.
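The conclusion can be checked numerically by scanning candidate orders n and testing whether 2cos(2π/n) is an integer; a sketch:

```python
import math

def allowed_orders(n_max=24, tol=1e-9):
    """Orders n for which 2*cos(2*pi/n) is an integer, as required
    by the periodicity argument above."""
    orders = []
    for n in range(1, n_max + 1):
        m = 2 * math.cos(2 * math.pi / n)
        if abs(m - round(m)) < tol:
            orders.append(n)
    return orders

print(allowed_orders())   # [1, 2, 3, 4, 6]
```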
=== Matrix proof ===
For an alternative proof, consider matrix properties. The sum of the diagonal elements of a matrix is called the trace of the matrix. In 2D and 3D every rotation is a planar rotation, and the trace is a function of the angle alone. For a 2D rotation, the trace is 2 cos θ; for a 3D rotation, 1 + 2 cos θ.
Examples
Consider a 60° (6-fold) rotation matrix with respect to an orthonormal basis in 2D.
{\displaystyle {\begin{bmatrix}{1/2}&-{{\sqrt {3}}/2}\\{{\sqrt {3}}/2}&{1/2}\end{bmatrix}}}
The trace is precisely 1, an integer.
Consider a 45° (8-fold) rotation matrix.
{\displaystyle {\begin{bmatrix}{1/{\sqrt {2}}}&-{1/{\sqrt {2}}}\\{1/{\sqrt {2}}}&{1/{\sqrt {2}}}\end{bmatrix}}}
The trace is 2/√2, not an integer.
Selecting a basis formed from vectors that span the lattice, neither orthogonality nor unit length is guaranteed, only linear independence. However, the trace of the rotation matrix is the same with respect to any basis, since the trace is a similarity invariant under linear transformations. In the lattice basis, the rotation operation must map every lattice point into an integer number of lattice vectors, so the entries of the rotation matrix in the lattice basis – and hence the trace – are necessarily integers. As in the other proofs, this implies that the only allowed rotational symmetries correspond to 1-, 2-, 3-, 4- or 6-fold invariance. For example, wallpapers and crystals cannot be rotated by 45° and remain invariant; the only possible angles are 360°, 180°, 120°, 90° or 60°.
Example
Consider a 60° (360°/6) rotation matrix with respect to the oblique lattice basis for a tiling by equilateral triangles.
{\displaystyle {\begin{bmatrix}0&-1\\1&1\end{bmatrix}}}
The trace is still 1. The determinant (always +1 for a rotation) is also preserved.
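The change of basis can be reproduced numerically: conjugating the Cartesian rotation matrix by a matrix B whose columns are the lattice basis vectors yields the integer matrix above for a 60° rotation of the triangular lattice, while a 45° rotation (square lattice, identity basis) has non-integer entries.

```python
import numpy as np

def rotation(theta_deg):
    """2D Cartesian rotation matrix."""
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

# Columns of B are the two basis vectors of the equilateral-triangle lattice
B = np.array([[1.0, 0.5],
              [0.0, np.sqrt(3) / 2]])
print(np.round(np.linalg.inv(B) @ rotation(60) @ B))   # [[0, -1], [1, 1]]
print(rotation(45))     # square-lattice case: trace 2/sqrt(2), not an integer
```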
The general crystallographic restriction on rotations does not guarantee that a rotation will be compatible with a specific lattice. For example, a 60° rotation will not work with a square lattice; nor will a 90° rotation work with a rectangular lattice.
== Higher dimensions ==
When the dimension of the lattice rises to four or more, rotations need no longer be planar; the 2D proof is inadequate. However, restrictions still apply, though more symmetries are permissible. For example, the hypercubic lattice has an eightfold rotational symmetry, corresponding to an eightfold rotational symmetry of the hypercube. This is of interest, not just for mathematics, but for the physics of quasicrystals under the cut-and-project theory. In this view, a 3D quasicrystal with 8-fold rotation symmetry might be described as the projection of a slab cut from a 4D lattice.
The following 4D rotation matrix is the aforementioned eightfold symmetry of the hypercube (and the cross-polytope):
{\displaystyle A={\begin{bmatrix}0&0&0&-1\\1&0&0&0\\0&-1&0&0\\0&0&-1&0\end{bmatrix}}.}
Transforming this matrix to the new coordinates given by
{\displaystyle B={\begin{bmatrix}-1/2&0&-1/2&{\sqrt {2}}/2\\1/2&{\sqrt {2}}/2&-1/2&0\\-1/2&0&-1/2&-{\sqrt {2}}/2\\-1/2&{\sqrt {2}}/2&1/2&0\end{bmatrix}}}
will produce:
{\displaystyle BAB^{-1}={\begin{bmatrix}{\sqrt {2}}/2&{\sqrt {2}}/2&0&0\\-{\sqrt {2}}/2&{\sqrt {2}}/2&0&0\\0&0&-{\sqrt {2}}/2&{\sqrt {2}}/2\\0&0&-{\sqrt {2}}/2&-{\sqrt {2}}/2\end{bmatrix}}.}
This third matrix then corresponds to a rotation both by 45° (in the first two dimensions) and by 135° (in the last two). Projecting a slab of hypercubes along the first two dimensions of the new coordinates produces an Ammann–Beenker tiling (another such tiling is produced by projecting along the last two dimensions), which therefore also has 8-fold rotational symmetry on average.
The A4 lattice and F4 lattice have order 10 and order 12 rotational symmetries, respectively.
To state the restriction for all dimensions, it is convenient to shift attention away from rotations alone and concentrate on the integer matrices (Bamberg, Cairns & Kilminster 2003). We say that a matrix A has order k when its k-th power A^k (but no lower power) equals the identity. Thus a 6-fold rotation matrix in the equilateral triangle basis is an integer matrix with order 6. Let OrdN denote the set of integers that can be the order of an N×N integer matrix. For example, Ord2 = {1, 2, 3, 4, 6}. We wish to state an explicit formula for OrdN.
Define a function ψ based on Euler's totient function φ; it will map positive integers to non-negative integers. For an odd prime p and a positive integer k, set ψ(p^k) equal to the totient function value φ(p^k), which in this case is p^k − p^(k−1). Do the same for ψ(2^k) when k > 1. Set ψ(2) and ψ(1) to 0. Using the fundamental theorem of arithmetic, we can write any other positive integer uniquely as a product of prime powers, m = ∏α pα^kα; set ψ(m) = Σα ψ(pα^kα). This differs from the totient itself, because it is a sum instead of a product.
The crystallographic restriction in general form states that OrdN consists of those positive integers m such that ψ(m) ≤ N.
For m > 2, the values of ψ(m) are equal to twice the algebraic degree of cos(2π/m); therefore, ψ(m) is strictly less than m, attaining the value m − 1 exactly when m is prime.
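Both ψ and OrdN can be computed directly from the definitions; a sketch using trial-division factorization (the search bound on m is arbitrary):

```python
def psi(m):
    """psi as defined above: add phi(p^k) = p^k - p^(k-1) for each
    exact prime-power divisor, except that 2^1 (and 1) contribute 0."""
    total, p = 0, 2
    while p * p <= m:
        if m % p == 0:
            pk = 1
            while m % p == 0:
                m //= p
                pk *= p
            total += 0 if pk == 2 else pk - pk // p
        p += 1
    if m > 1:                                  # leftover prime, exponent 1
        total += 0 if m == 2 else m - 1
    return total

def ord_n(N, m_max=100):
    """OrdN: integers m with psi(m) <= N."""
    return [m for m in range(1, m_max + 1) if psi(m) <= N]

print(ord_n(2))   # [1, 2, 3, 4, 6]
print(ord_n(4))   # [1, 2, 3, 4, 5, 6, 8, 10, 12]
```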
These additional symmetries do not allow a planar slice to have, say, 8-fold rotation symmetry. In the plane, the 2D restrictions still apply. Thus the cuts used to model quasicrystals necessarily have thickness.
Integer matrices are not limited to rotations; for example, a reflection is also a symmetry of order 2. But by insisting on determinant +1, we can restrict the matrices to proper rotations.
== Formulation in terms of isometries ==
The crystallographic restriction theorem can be formulated in terms of isometries of Euclidean space. A set of isometries can form a group. By a discrete isometry group we will mean an isometry group that maps each point to a discrete subset of RN, i.e. the orbit of any point is a set of isolated points. With this terminology, the crystallographic restriction theorem in two and three dimensions can be formulated as follows.
For every discrete isometry group in two- and three-dimensional space which includes translations spanning the whole space, all isometries of finite order are of order 1, 2, 3, 4 or 6.
Isometries of order n include, but are not restricted to, n-fold rotations. The theorem also excludes S8, S12, D4d, and D6d (see point groups in three dimensions), even though they have 4- and 6-fold rotational symmetry only.
Rotational symmetry of any order about an axis is compatible with translational symmetry along that axis.
This restriction implies that for every discrete isometry group in four- and five-dimensional space which includes translations spanning the whole space, all isometries of finite order are of order 1, 2, 3, 4, 5, 6, 8, 10, or 12.
All isometries of finite order in six- and seven-dimensional space are of order 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15, 18, 20, 24 or 30.
== See also ==
Crystallographic point group
Crystallography
== Notes ==
== References ==
Bamberg, John; Cairns, Grant; Kilminster, Devin (March 2003), "The crystallographic restriction, permutations, and Goldbach's conjecture" (PDF), American Mathematical Monthly, 110 (3): 202–209, CiteSeerX 10.1.1.124.8582, doi:10.2307/3647934, JSTOR 3647934
Elliott, Stephen (1998), The Physics and Chemistry of Solids, Wiley, ISBN 978-0-471-98194-7
Coxeter, H. S. M. (1989), Introduction to Geometry (2nd ed.), Wiley, ISBN 978-0-471-50458-0
Scherrer, W. (1946), "Die Einlagerung eines regulären Vielecks in ein Gitter", Elemente der Mathematik, 1 (6): 97–98
Shechtman, D.; Blech, I.; Gratias, D.; Cahn, JW (1984), "Metallic phase with long-range orientational order and no translational symmetry", Physical Review Letters, 53 (20): 1951–1953, Bibcode:1984PhRvL..53.1951S, doi:10.1103/PhysRevLett.53.1951
== External links ==
The crystallographic restriction
The crystallographic restriction theorem by CSIC | Wikipedia/Crystallographic_restriction_theorem |
The Journal of Chemical Crystallography is a peer-reviewed scientific journal publishing original (primary) research and review articles on crystallography and spectroscopy. It is published monthly by Springer Science+Business Media.
The editor-in-chief of Journal of Chemical Crystallography is W.T. Pennington. According to the Journal Citation Reports, the journal has a 2020 impact factor of 0.603.
== Scope ==
The Journal of Chemical Crystallography covers crystal chemistry and physics and their relation to problems of molecular structure; structural studies of solids, liquids, gases, and solutions involving spectroscopic, spectrometric, X-ray, and electron and neutron diffraction; and theoretical studies.
== Abstracting and indexing ==
Journal of Chemical Crystallography is abstracted and indexed in the following databases:
Chemical Abstracts Service - CASSI
Science Citation Index - Web of Science
Scopus
GeoRef
EMBiology
== References ==
== External links ==
Official website | Wikipedia/Journal_of_Chemical_Crystallography |
Polymer science or macromolecular science is a subfield of materials science concerned with polymers, primarily synthetic polymers such as plastics and elastomers. The field of polymer science includes researchers in multiple disciplines including chemistry, physics, and engineering.
== Subdisciplines ==
This science comprises three main sub-disciplines:
Polymer chemistry or macromolecular chemistry is concerned with the chemical synthesis and chemical properties of polymers.
Polymer physics is concerned with the physical properties of polymer materials and engineering applications. Specifically, it seeks to present the mechanical, thermal, electronic and optical properties of polymers with respect to the underlying physics governing a polymer microstructure. Despite originating as an application of statistical physics to chain structures, polymer physics has now evolved into a discipline in its own right.
Polymer characterization is concerned with the analysis of chemical structure, morphology, and the determination of physical properties in relation to compositional and structural parameters.
== History of polymer science ==
The first modern example of polymer science is Henri Braconnot's work in the 1830s. Braconnot, along with Christian Schönbein and others, developed derivatives of the natural polymer cellulose, producing new, semi-synthetic materials, such as celluloid and cellulose acetate. The term "polymer" was coined in 1833 by Jöns Jakob Berzelius, though Berzelius did little that would be considered polymer science in the modern sense. In the 1840s, Friedrich Ludersdorf and Nathaniel Hayward independently discovered that adding sulfur to raw natural rubber (polyisoprene) helped prevent the material from becoming sticky. In 1844 Charles Goodyear received a U.S. patent for vulcanizing natural rubber with sulfur and heat. Thomas Hancock had received a patent for the same process in the UK the year before. This process strengthened natural rubber and prevented it from melting with heat without losing flexibility. This made practical products such as waterproofed articles possible. It also facilitated practical manufacture of such rubberized materials. Vulcanized rubber represents the first commercially successful product of polymer research. In 1884 Hilaire de Chardonnet started the first artificial fiber plant based on regenerated cellulose, or viscose rayon, as a substitute for silk, but it was very flammable. In 1907 Leo Baekeland invented the first synthetic plastic, a thermosetting phenol–formaldehyde resin called Bakelite.
Despite significant advances in polymer synthesis, the molecular nature of polymers was not understood until the work of Hermann Staudinger in 1922. Prior to Staudinger's work, polymers were understood in terms of the association theory or aggregate theory, which originated with Thomas Graham in 1861. Graham proposed that cellulose and other polymers were colloids, aggregates of molecules having small molecular mass connected by an unknown intermolecular force. Hermann Staudinger was the first to propose that polymers consisted of long chains of atoms held together by covalent bonds. It took over a decade for Staudinger's work to gain wide acceptance in the scientific community, work for which he was awarded the Nobel Prize in 1953.
The World War II era marked the emergence of a strong commercial polymer industry. The limited or restricted supply of natural materials such as silk and rubber necessitated the increased production of synthetic substitutes, such as nylon and synthetic rubber. In the intervening years, the development of advanced polymers such as Kevlar and Teflon have continued to fuel a strong and growing polymer industry.
The growth in industrial applications was mirrored by the establishment of strong academic programs and research institutes. In 1946, Herman Mark established the Polymer Research Institute at Brooklyn Polytechnic, the first research facility in the United States dedicated to polymer research. Mark is also recognized as a pioneer in establishing curriculum and pedagogy for the field of polymer science. In 1950, the POLY division of the American Chemical Society was formed, and has since grown to the second-largest division in this association with nearly 8,000 members. Fred W. Billmeyer, Jr., a Professor of Analytical Chemistry had once said that "although the scarcity of education in polymer science is slowly diminishing but it is still evident in many areas. What is most unfortunate is that it appears to exist, not because of a lack of awareness but, rather, a lack of interest."
== Nobel prizes related to polymer science ==
2005 (Chemistry) Robert Grubbs, Richard Schrock, Yves Chauvin for olefin metathesis.
2002 (Chemistry) John Bennett Fenn, Koichi Tanaka, and Kurt Wüthrich for the development of methods for identification and structure analyses of biological macromolecules.
2000 (Chemistry) Alan G. MacDiarmid, Alan J. Heeger, and Hideki Shirakawa for work on conductive polymers, contributing to the advent of molecular electronics.
1991 (Physics) Pierre-Gilles de Gennes for developing a generalized theory of phase transitions with particular applications to describing ordering and phase transitions in polymers.
1974 (Chemistry) Paul J. Flory for contributions to theoretical polymer chemistry.
1963 (Chemistry) Giulio Natta and Karl Ziegler for contributions in polymer synthesis. (Ziegler-Natta catalysis).
1953 (Chemistry) Hermann Staudinger for contributions to the understanding of macromolecular chemistry.
== References ==
McLeish T.C.B. (2009) Polymer Physics. In: Meyers R. (eds) Encyclopedia of Complexity and Systems Science. Springer, New York, NY. doi:10.1007/978-0-387-30440-3_409
Asua, José M. (August 2007). Polymer Reaction Engineering (Hardcover - 392 pages). Wiley, John & Sons. ISBN 978-1-4051-4442-1.
== External links ==
List of scholarly journals pertaining to polymer science | Wikipedia/Polymer_science |
Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter, especially the solid and liquid phases, that arise from electromagnetic forces between atoms and electrons. More generally, the subject deals with condensed phases of matter: systems of many constituents with strong interactions among them. More exotic condensed phases include the superconducting phase exhibited by certain materials at extremely low cryogenic temperatures, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, the Bose–Einstein condensates found in ultracold atomic systems, and liquid crystals. Condensed matter physicists seek to understand the behavior of these phases by experiments to measure various material properties, and by applying the physical laws of quantum mechanics, electromagnetism, statistical mechanics, and other physics theories to develop mathematical models and predict the properties of extremely large groups of atoms.
The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists, and the Division of Condensed Matter Physics is the largest division of the American Physical Society. These include solid state and soft matter physicists, who study quantum and non-quantum physical properties of matter respectively. Both types study a great range of materials, providing many research, funding and employment opportunities. The field overlaps with chemistry, materials science, engineering and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics.
A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as solid-state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the more comprehensive specialty of condensed matter physics. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. According to the founding director of the Max Planck Institute for Solid State Research, physics professor Manuel Cardona, it was Albert Einstein who created the modern field of condensed matter physics starting with his seminal 1905 article on the photoelectric effect and photoluminescence which opened the fields of photoelectron spectroscopy and photoluminescence spectroscopy, and later his 1907 article on the specific heat of solids which introduced, for the first time, the effect of lattice vibrations on the thermodynamic properties of crystals, in particular the specific heat. Deputy Director of the Yale Quantum Institute A. Douglas Stone makes a similar priority case for Einstein in his work on the synthetic history of quantum mechanics.
== Etymology ==
According to physicist Philip Warren Anderson, the use of the term "condensed matter" to designate a field of study was coined by him and Volker Heine, when they changed the name of their group at the Cavendish Laboratories, Cambridge, from Solid state theory to Theory of Condensed Matter in 1967, as they felt it better included their interest in liquids, nuclear matter, and so on. Although Anderson and Heine helped popularize the name "condensed matter", it had been used in Europe for some years, most prominently in the Springer-Verlag journal Physics of Condensed Matter, launched in 1963. The name "condensed matter physics" emphasized the commonality of scientific problems encountered by physicists working on solids, liquids, plasmas, and other complex matter, whereas "solid state physics" was often associated with restricted industrial applications of metals and semiconductors. In the 1960s and 70s, some physicists felt the more comprehensive name better fit the funding environment and Cold War politics of the time.
References to "condensed" states can be traced to earlier sources. For example, in the introduction to his 1947 book Kinetic Theory of Liquids, Yakov Frenkel proposed that "The kinetic theory of liquids must accordingly be developed as a generalization and extension of the kinetic theory of solid bodies. As a matter of fact, it would be more correct to unify them under the title of 'condensed bodies'".
== History ==
=== Classical physics ===
One of the first studies of condensed states of matter was by English chemist Humphry Davy, in the first decades of the nineteenth century. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre, ductility and high electrical and thermal conductivity. This indicated that the atoms in John Dalton's atomic theory were not indivisible as Dalton claimed, but had inner structure. Davy further claimed that elements that were then believed to be gases, such as nitrogen and hydrogen could be liquefied under the right conditions and would then behave as metals.
In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquefied chlorine and went on to liquefy all known gaseous elements, except for nitrogen, hydrogen, and oxygen. Shortly after, in 1869, Irish chemist Thomas Andrews studied the phase transition from a liquid to a gas and coined the term critical point to describe the condition where a gas and a liquid were indistinguishable as phases, and Dutch physicist Johannes van der Waals supplied the theoretical framework which allowed the prediction of critical behavior based on measurements at much higher temperatures. By 1908, James Dewar and Heike Kamerlingh Onnes were successfully able to liquefy hydrogen and the then newly discovered helium respectively.
Paul Drude in 1900 proposed the first theoretical model for a classical electron moving through a metallic solid. Drude's model described properties of metals in terms of a gas of free electrons, and was the first microscopic model to explain empirical observations such as the Wiedemann–Franz law. However, despite the success of Drude's model, it had one notable problem: it was unable to correctly explain the electronic contribution to the specific heat and magnetic properties of metals, and the temperature dependence of resistivity at low temperatures.
In 1911, three years after helium was first liquefied, Onnes working at University of Leiden discovered superconductivity in mercury, when he observed the electrical resistivity of mercury to vanish at temperatures below a certain value. The phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades. Albert Einstein, in 1922, said regarding contemporary theories of superconductivity that "with our far-reaching ignorance of the quantum mechanics of composite systems we are very far from being able to compose a theory out of these vague ideas."
=== Advent of quantum mechanics ===
Drude's classical model was augmented by Wolfgang Pauli, Arnold Sommerfeld, Felix Bloch and other physicists. Pauli realized that the free electrons in metal must obey the Fermi–Dirac statistics. Using this idea, he developed the theory of paramagnetism in 1926. Shortly after, Sommerfeld incorporated the Fermi–Dirac statistics into the free electron model, making it better able to explain the heat capacity. Two years later, Bloch used quantum mechanics to describe the motion of an electron in a periodic lattice.
The mathematics of crystal structures developed by Auguste Bravais, Yevgraf Fyodorov and others was used to classify crystals by their symmetry group, and tables of crystal structures were the basis for the series International Tables of Crystallography, first published in 1935. Band structure calculations were first used in 1930 to predict the properties of new materials, and in 1947 John Bardeen, Walter Brattain and William Shockley developed the first semiconductor-based transistor, heralding a revolution in electronics.
In 1879, Edwin Herbert Hall working at the Johns Hopkins University discovered that a voltage developed across conductors which was transverse to both an electric current in the conductor and a magnetic field applied perpendicular to the current. This phenomenon, arising due to the nature of charge carriers in the conductor, came to be termed the Hall effect, but it was not properly explained at the time because the electron was not experimentally discovered until 18 years later. After the advent of quantum mechanics, Lev Landau in 1930 developed the theory of Landau quantization and laid the foundation for a theoretical explanation of the quantum Hall effect which was discovered half a century later.
Magnetism as a property of matter has been known in China since 4000 BC. However, the first modern studies of magnetism only started with the development of electrodynamics by Faraday, Maxwell and others in the nineteenth century, which included classifying materials as ferromagnetic, paramagnetic and diamagnetic based on their response to magnetization. Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials. In 1906, Pierre Weiss introduced the concept of magnetic domains to explain the main properties of ferromagnets. The first attempt at a microscopic description of magnetism was by Wilhelm Lenz and Ernst Ising through the Ising model, which described magnetic materials as consisting of a periodic lattice of spins that collectively acquired magnetization. The Ising model was solved exactly to show that spontaneous magnetization cannot occur in one dimension but is possible in higher-dimensional lattices. Further research, such as by Bloch on spin waves and Néel on antiferromagnetism, led to developing new magnetic materials with applications to magnetic storage devices.
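For concreteness, the Ising model places a spin s_i = ±1 on each lattice site, with energy

$$H = -J \sum_{\langle i,j \rangle} s_i s_j,$$

where the sum runs over nearest-neighbour pairs and J > 0 favours aligned spins. The one-dimensional chain shows no spontaneous magnetization at any finite temperature, whereas Onsager's exact solution of the two-dimensional square lattice (1944) yields a finite critical temperature k_B T_c = 2J/\ln(1+\sqrt{2}) ≈ 2.27 J.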
=== Modern many-body physics ===
The Sommerfeld model and spin models for ferromagnetism illustrated the successful application of quantum mechanics to condensed matter problems in the 1930s. However, there still were several unsolved problems, most notably the description of superconductivity and the Kondo effect. After World War II, several ideas from quantum field theory were applied to condensed matter problems. These included recognition of collective excitation modes of solids and the important notion of a quasiparticle. Soviet physicist Lev Landau used the idea for the Fermi liquid theory, wherein low energy properties of interacting fermion systems were given in terms of what are now termed Landau quasiparticles. Landau also developed a mean-field theory for continuous phase transitions, which described ordered phases as spontaneous breakdown of symmetry. The theory also introduced the notion of an order parameter to distinguish between ordered phases. Eventually in 1957, John Bardeen, Leon Cooper and Robert Schrieffer developed the so-called BCS theory of superconductivity, based on Cooper's discovery a year earlier that an arbitrarily small attraction between two electrons of opposite spin, mediated by phonons in the lattice, can give rise to a bound state called a Cooper pair.
The study of phase transitions and the critical behavior of observables, termed critical phenomena, was a major field of interest in the 1960s. Leo Kadanoff, Benjamin Widom and Michael Fisher developed the ideas of critical exponents and Widom scaling. These ideas were unified by Kenneth G. Wilson in 1972, under the formalism of the renormalization group in the context of quantum field theory.
The quantum Hall effect was discovered by Klaus von Klitzing, Dorda and Pepper in 1980 when they observed the Hall conductance to be integer multiples of a fundamental constant, e²/h. The effect was observed to be independent of parameters such as system size and impurities. In 1981, theorist Robert Laughlin proposed a theory explaining the unanticipated precision of the integer plateaus. It also implied that the Hall conductance is proportional to a topological invariant, called the Chern number, whose relevance for the band structure of solids was formulated by David J. Thouless and collaborators. Shortly after, in 1982, Horst Störmer and Daniel Tsui observed the fractional quantum Hall effect, where the conductance was a rational multiple of the constant e²/h. Laughlin, in 1983, realized that this was a consequence of quasiparticle interaction in the Hall states and formulated a variational method solution, named the Laughlin wavefunction. The study of topological properties of the fractional Hall effect remains an active field of research. Decades later, the aforementioned topological band theory advanced by David J. Thouless and collaborators was further expanded, leading to the discovery of topological insulators.
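For a sense of the magnitudes: on the ν-th integer plateau the Hall conductance is σ_xy = ν e²/h, so the Hall resistance is

$$R_{xy} = \frac{h}{\nu e^2} \approx \frac{25\,812.8\ \Omega}{\nu},$$

where h/e² ≈ 25 812.8 Ω is now known as the von Klitzing constant; the quantization is so precise that it serves as a practical standard of electrical resistance.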
In 1986, Karl Müller and Johannes Bednorz discovered the first high temperature superconductor, La2-xBaxCuO4, which is superconducting at temperatures as high as 39 kelvin. It was realized that the high temperature superconductors are examples of strongly correlated materials where the electron–electron interactions play an important role. A satisfactory theoretical description of high-temperature superconductors is still not known and the field of strongly correlated materials continues to be an active research topic.
In 2012, several groups released preprints which suggest that samarium hexaboride has the properties of a topological insulator in accord with the earlier theoretical predictions. Since samarium hexaboride is an established Kondo insulator, i.e. a strongly correlated electron material, it is expected that the existence of a topological Dirac surface state in this material would lead to a topological insulator with strong electronic correlations.
== Theoretical ==
Theoretical condensed matter physics involves the use of theoretical models to understand properties of states of matter. These include models to study the electronic properties of solids, such as the Drude model, band structure theory and density functional theory. Theoretical models have also been developed to study the physics of phase transitions, such as the Ginzburg–Landau theory, critical exponents and the use of mathematical methods of quantum field theory and the renormalization group. Modern theoretical studies involve the use of numerical computation of electronic structure and mathematical tools to understand phenomena such as high-temperature superconductivity, topological phases, and gauge symmetries.
=== Emergence ===
Theoretical understanding of condensed matter physics is closely related to the notion of emergence, wherein complex assemblies of particles behave in ways dramatically different from their individual constituents. For example, a range of phenomena related to high temperature superconductivity are understood poorly, although the microscopic physics of individual electrons and lattices is well known. Similarly, models of condensed matter systems have been studied where collective excitations behave like photons and electrons, thereby describing electromagnetism as an emergent phenomenon. Emergent properties can also occur at the interface between materials: one example is the lanthanum aluminate-strontium titanate interface, where two band-insulators are joined to create conductivity and superconductivity.
=== Electronic theory of solids ===
The metallic state has historically been an important building block for studying properties of solids. The first theoretical description of metals was given by Paul Drude in 1900 with the Drude model, which explained electrical and thermal properties by describing a metal as an ideal gas of the then-newly discovered electrons. He was able to derive the empirical Wiedemann–Franz law and get results in close agreement with the experiments. This classical model was then improved by Arnold Sommerfeld, who incorporated the Fermi–Dirac statistics of electrons and was able to explain the anomalous behavior of the specific heat of metals in the Wiedemann–Franz law. In 1912, the structure of crystalline solids was studied by Max von Laue and Paul Knipping, when they observed the X-ray diffraction pattern of crystals and concluded that crystals get their structure from periodic lattices of atoms. In 1928, Swiss physicist Felix Bloch provided a wave function solution to the Schrödinger equation with a periodic potential, known as Bloch's theorem.
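The Wiedemann–Franz law referred to above states that the ratio of thermal to electrical conductivity of a metal is proportional to temperature. In the free-electron treatment the constant of proportionality, the Lorenz number, is

$$L = \frac{\kappa}{\sigma T} = \frac{\pi^2}{3}\left(\frac{k_B}{e}\right)^2 \approx 2.44 \times 10^{-8}\ \mathrm{W\,\Omega\,K^{-2}},$$

in good agreement with measured values for many simple metals.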
Calculating electronic properties of metals by solving the many-body wavefunction is often computationally hard, and hence approximation methods are needed to obtain meaningful predictions. The Thomas–Fermi theory, developed in the 1920s, was used to estimate system energy and electronic density by treating the local electron density as a variational parameter. Later in the 1930s, Douglas Hartree, Vladimir Fock and John Slater developed the so-called Hartree–Fock wavefunction as an improvement over the Thomas–Fermi model. The Hartree–Fock method accounted for exchange statistics of single particle electron wavefunctions. In general, it is very difficult to solve the Hartree–Fock equation; only the free electron gas case can be solved exactly. Finally in 1964–65, Walter Kohn, Pierre Hohenberg and Lu Jeu Sham proposed density functional theory (DFT), which gave realistic descriptions for bulk and surface properties of metals. Density functional theory has been widely used since the 1970s for band structure calculations of a variety of solids.
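In the Kohn–Sham formulation, the content of DFT can be summarized by a total-energy functional of the electron density n(r) (written here in atomic units):

$$E[n] = T_s[n] + \int v_{\text{ext}}(\mathbf{r})\, n(\mathbf{r})\, d^3 r + \frac{1}{2} \iint \frac{n(\mathbf{r})\, n(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, d^3 r\, d^3 r' + E_{xc}[n],$$

where T_s is the kinetic energy of a non-interacting reference system and all many-body effects beyond the classical Hartree term are collected in the exchange–correlation functional E_xc, which in practice must be approximated.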
=== Symmetry breaking ===
Some states of matter exhibit symmetry breaking, where the relevant laws of physics possess some form of symmetry that is broken. A common example is crystalline solids, which break continuous translational symmetry. Other examples include magnetized ferromagnets, which break rotational symmetry, and more exotic states such as the ground state of a BCS superconductor, that breaks U(1) phase rotational symmetry.
Goldstone's theorem in quantum field theory states that in a system with broken continuous symmetry, there may exist excitations with arbitrarily low energy, called the Goldstone bosons. For example, in crystalline solids, these correspond to phonons, which are quantized versions of lattice vibrations.
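Acoustic phonons make this concrete: at small wavevector k their dispersion is linear,

$$\omega(k) = c_s |k| \to 0 \quad \text{as } k \to 0,$$

with c_s the sound velocity, so arbitrarily long-wavelength lattice vibrations cost arbitrarily little energy, exactly as Goldstone's theorem requires for the broken translational symmetry.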
=== Phase transition ===
Phase transition refers to the change of phase of a system, which is brought about by change in an external parameter such as temperature, pressure, or molar composition. In a single-component system, a classical phase transition occurs at a temperature (at a specific pressure) where there is an abrupt change in the order of the system. For example, when ice melts and becomes water, the ordered hexagonal crystal structure of ice is modified to a hydrogen bonded, mobile arrangement of water molecules.
In quantum phase transitions, the temperature is set to absolute zero, and a non-thermal control parameter, such as pressure or magnetic field, causes a phase transition when order is destroyed by quantum fluctuations originating from the Heisenberg uncertainty principle. Here, the different quantum phases of the system refer to distinct ground states of the Hamiltonian. Understanding the behavior of quantum phase transitions is important in the difficult task of explaining the properties of rare-earth magnetic insulators, high-temperature superconductors, and other substances.
Two classes of phase transitions occur: first-order transitions and second-order or continuous transitions. For the latter, the two phases involved do not co-exist at the transition temperature, also called the critical point. Near the critical point, systems undergo critical behavior, wherein several of their properties, such as correlation length, specific heat, and magnetic susceptibility, diverge according to power laws. These critical phenomena present serious challenges to physicists because normal macroscopic laws are no longer valid in the region, and novel ideas and methods must be invented to find the new laws that can describe the system.
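In terms of the reduced temperature t = (T − T_c)/T_c, for example, the correlation length and the susceptibility diverge as

$$\xi \sim |t|^{-\nu}, \qquad \chi \sim |t|^{-\gamma},$$

where the critical exponents ν and γ take universal values shared by broad classes of otherwise unrelated systems.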
The simplest theory that can describe continuous phase transitions is the Ginzburg–Landau theory, which works in the so-called mean-field approximation. However, it can only roughly explain continuous phase transitions for ferroelectrics and type I superconductors, which involve long-range microscopic interactions. For other types of systems, which involve short-range interactions near the critical point, a better theory is needed.
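A minimal worked example of the mean-field approach: expanding the free energy in the order parameter ψ,

$$F(\psi) = F_0 + a_0 (T - T_c)\, \psi^2 + \frac{b}{2}\, \psi^4, \qquad a_0, b > 0,$$

and minimizing over ψ gives ψ = 0 for T > T_c and ψ = ±\sqrt{a_0 (T_c - T)/b} for T < T_c, so the order parameter grows as (T_c − T)^{1/2}: the mean-field value β = 1/2 of the corresponding critical exponent.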
Near the critical point, fluctuations happen over a broad range of size scales, while the behavior of the whole system is scale invariant. Renormalization group methods successively average out the shortest-wavelength fluctuations in stages while retaining their effects in the next stage. Thus, the changes of a physical system as viewed at different size scales can be investigated systematically. The methods, together with powerful computer simulation, contribute greatly to the explanation of the critical phenomena associated with continuous phase transitions.
== Experimental ==
Experimental condensed matter physics involves the use of experimental probes to try to discover new properties of materials. Such probes include effects of electric and magnetic fields, measurements of response functions, transport properties and thermometry. Commonly used experimental methods include spectroscopy, with probes such as X-rays, infrared light and inelastic neutron scattering, and the study of thermal response, such as specific heat and thermal transport measurements.
=== Scattering ===
Several condensed matter experiments involve scattering of an experimental probe, such as X-ray, optical photons, neutrons, etc., on constituents of a material. The choice of scattering probe depends on the observation energy scale of interest. Visible light has energy on the scale of 1 electron volt (eV) and is used as a scattering probe to measure variations in material properties such as the dielectric constant and refractive index. X-rays have energies of the order of 10 keV and hence are able to probe atomic length scales, and are used to measure variations in electron charge density and crystal structure.
Neutrons can also probe atomic length scales and are used to study the scattering off nuclei and electron spins and magnetization (as neutrons have spin but no charge). Coulomb and Mott scattering measurements can be made by using electron beams as scattering probes. Similarly, positron annihilation can be used as an indirect measurement of local electron density. Laser spectroscopy is an excellent tool for studying the microscopic properties of a medium, for example, to study forbidden transitions in media with nonlinear optical spectroscopy.
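The energy–length correspondence invoked above is easy to check numerically. The following is a minimal Python sketch (constants are CODATA values; the function names are illustrative, not from any particular library):

```python
import math

H = 6.62607015e-34          # Planck constant, J s
C = 2.99792458e8            # speed of light, m/s
EV = 1.602176634e-19        # joules per electron volt
M_NEUTRON = 1.67492750e-27  # neutron mass, kg

def photon_wavelength(energy_ev):
    """Photon wavelength from its energy: lambda = h c / E."""
    return H * C / (energy_ev * EV)

def de_broglie_wavelength(energy_ev, mass_kg):
    """Non-relativistic matter-wave wavelength: lambda = h / sqrt(2 m E)."""
    return H / math.sqrt(2.0 * mass_kg * energy_ev * EV)

# 10 keV X-rays and 25 meV thermal neutrons both come out near 1 angstrom,
# which is why both are suited to probing atomic length scales.
print(photon_wavelength(10e3) * 1e10)                  # ~1.24 angstrom
print(de_broglie_wavelength(0.025, M_NEUTRON) * 1e10)  # ~1.8 angstrom
```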
=== External magnetic fields ===
In experimental condensed matter physics, external magnetic fields act as thermodynamic variables that control the state, phase transitions and properties of material systems. Nuclear magnetic resonance (NMR) is a method by which external magnetic fields are used to find resonance modes of individual nuclei, thus giving information about the atomic, molecular, and bond structure of their environment. NMR experiments can be made in magnetic fields with strengths up to 60 tesla; higher magnetic fields can improve the quality of NMR measurement data. The study of quantum oscillations is another experimental method where high magnetic fields are used to study material properties such as the geometry of the Fermi surface. High magnetic fields will be useful in experimentally testing the various theoretical predictions such as the quantized magnetoelectric effect, the image magnetic monopole, and the half-integer quantum Hall effect.
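For scale: a nucleus in a field B precesses at the Larmor frequency ν = (γ/2π) B. For ¹H, γ/2π ≈ 42.58 MHz/T, so the 60 T fields quoted above correspond to

$$\nu \approx 42.58\ \mathrm{MHz/T} \times 60\ \mathrm{T} \approx 2.55\ \mathrm{GHz}.$$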
=== Magnetic resonance spectroscopy ===
The local structure, as well as the structure of the nearest neighbour atoms, can be investigated in condensed matter with magnetic resonance methods, such as electron paramagnetic resonance (EPR) and nuclear magnetic resonance (NMR), which are very sensitive to the details of the surroundings of nuclei and electrons by means of the hyperfine coupling. Both localized electrons and specific stable or unstable isotopes of the nuclei become the probe of these hyperfine interactions, which couple the electron or nuclear spin to the local electric and magnetic fields. These methods are suitable to study defects, diffusion, phase transitions and magnetic order. Common experimental methods include NMR, nuclear quadrupole resonance (NQR), implanted radioactive probes as in the case of muon spin spectroscopy (μSR), Mössbauer spectroscopy, β-NMR and perturbed angular correlation (PAC). PAC is especially well suited to the study of phase changes at extreme temperatures above 2000 °C because the method itself is temperature independent.
=== Cold atomic gases ===
Ultracold atom trapping in optical lattices is an experimental tool commonly used in condensed matter physics, and in atomic, molecular, and optical physics. The method involves using optical lasers to form an interference pattern, which acts as a lattice, in which ions or atoms can be placed at very low temperatures. Cold atoms in optical lattices are used as quantum simulators, that is, they act as controllable systems that can model behavior of more complicated systems, such as frustrated magnets. In particular, they are used to engineer one-, two- and three-dimensional lattices for a Hubbard model with pre-specified parameters, and to study phase transitions for antiferromagnetic and spin liquid ordering.
In 1995, a gas of rubidium atoms cooled down to a temperature of 170 nK was used to experimentally realize the Bose–Einstein condensate, a novel state of matter originally predicted by S. N. Bose and Albert Einstein, wherein a large number of atoms occupy one quantum state.
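Condensation sets in when the thermal de Broglie wavelength λ_T = h/\sqrt{2π m k_B T} becomes comparable to the interparticle spacing; for an ideal uniform Bose gas of density n the quantitative criterion is

$$n\, \lambda_T^3 = \zeta(3/2) \approx 2.612,$$

which makes clear why such extraordinarily low temperatures are needed at the low densities of trapped atomic gases.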
== Applications ==
Research in condensed matter physics has given rise to several device applications, such as the development of the semiconductor transistor, laser technology, magnetic storage, liquid crystals, optical fibres and several phenomena studied in the context of nanotechnology. Methods such as scanning tunneling microscopy can be used to control processes at the nanometer scale and have given rise to the study of nanofabrication, including molecular machines. Such machines were developed, for example, by Nobel laureates in chemistry Ben Feringa, Jean-Pierre Sauvage and Fraser Stoddart. Feringa and his team developed multiple molecular machines, such as a molecular car, a molecular windmill and many more.
In quantum computation, information is represented by quantum bits, or qubits. The qubits may decohere quickly before useful computation is completed. This serious problem must be solved before quantum computing may be realized. To solve this problem, several promising approaches are proposed in condensed matter physics, including Josephson junction qubits, spintronic qubits using the spin orientation of magnetic materials, and the topological non-Abelian anyons from fractional quantum Hall effect states.
Condensed matter physics also has important uses for biomedicine. For example, magnetic resonance imaging is widely used in medical imaging of soft tissue and other physiological features which cannot be viewed with traditional x-ray imaging.
== Further reading ==
Anderson, Philip W. (2018-03-09). Basic Notions Of Condensed Matter Physics. CRC Press. ISBN 978-0-429-97374-1.
Girvin, Steven M.; Yang, Kun (2019-02-28). Modern Condensed Matter Physics. Cambridge University Press. ISBN 978-1-108-57347-4.
Coleman, Piers (2015). Introduction to Many-Body Physics, Cambridge University Press, ISBN 0-521-86488-7.
P. M. Chaikin and T. C. Lubensky (2000). Principles of Condensed Matter Physics, Cambridge University Press; 1st edition, ISBN 0-521-79450-1
Alexander Altland and Ben Simons (2006). Condensed Matter Field Theory, Cambridge University Press, ISBN 0-521-84508-4.
Michael P. Marder (2010). Condensed Matter Physics, second edition, John Wiley and Sons, ISBN 0-470-61798-5.
Lillian Hoddeson, Ernest Braun, Jürgen Teichmann and Spencer Weart, eds. (1992). Out of the Crystal Maze: Chapters from the History of Solid State Physics, Oxford University Press, ISBN 0-19-505329-X.
== External links ==
Media related to Condensed matter physics at Wikimedia Commons | Wikipedia/Condensed-matter_physics |
Zeitschrift für Kristallographie – New Crystal Structures is a bimonthly peer-reviewed scientific journal published in English. Its first issue was published in December 1997 and bore the subtitle "International journal for structural, physical, and chemical aspects of crystalline materials." Created as a spin-off of Zeitschrift für Kristallographie for reporting novel and refined crystal structures, it began at volume 212 in order to remain aligned with the numbering of the parent journal. Paul von Groth, Professor of Mineralogy at the University of Strasbourg, established Zeitschrift für Krystallographie und Mineralogie in 1877; after several name changes, the journal adopted its present name, Zeitschrift für Kristallographie – Crystalline Materials, in 2010.
The inaugural editors-in-chief were Hans Georg von Schnering of the Max Planck Institute for Solid State Research in Stuttgart and Heinz Hermann Schulz of the Ludwig-Maximilians-Universität München. In 2016, the editor-in-chief was Hubert Huppertz (Universität Innsbruck). In recent years the journal has sharpened its profile as a journal providing new crystal structure determinations (and redeterminations) together with a short description of the source of the material and the most important features of each structure.
== Editorial board ==
Christian Hübschle, Bayreuth University, Germany; Oliver Janka, Münster University, Germany; Andreas Lemmerer, Johannesburg University, South Africa; Guido J. Reiss, Düsseldorf University, Germany; Edward R. T. Tiekink, Sunway University, Malaysia
The journal is indexed in various databases and, according to the Journal Citation Reports, had a 2020 impact factor of 0.451.
== Abstracting and indexing ==
The journal is abstracted and indexed in:
Chemical Abstracts Service
Current Contents/Physical, Chemical and Earth Sciences
EBSCO databases
Inspec
Science Citation Index Expanded
Scopus
Publons
Web of Science – Current Contents/Physical, Chemical and Earth Sciences
Reaxys
== External links ==
Official website | Wikipedia/Zeitschrift_für_Kristallographie_–_New_Crystal_Structures |
Electron crystallography is a subset of methods in electron diffraction focusing upon detailed determination of the positions of atoms in solids using a transmission electron microscope (TEM). It can involve the use of high-resolution transmission electron microscopy images, electron diffraction patterns including convergent-beam electron diffraction or combinations of these. It has been successful in determining some bulk structures, and also surface structures. Two related methods are low-energy electron diffraction which has solved the structure of many surfaces, and reflection high-energy electron diffraction which is used to monitor surfaces often during growth.
The technique dates back to soon after the discovery of electron diffraction in 1927–28, and was used in many early works. However, for many years quantitative electron crystallography was not used; instead, the diffraction information was combined qualitatively with imaging results. A number of advances from the 1950s in particular laid the foundation for more quantitative work, ranging from accurate methods to perform forward calculations to methods to invert to maps of the atomic structure. With the improvement of the imaging capabilities of electron microscopes, crystallographic data are now commonly obtained by combining images with electron diffraction information, or in some cases by collecting three-dimensional electron diffraction data by a number of different approaches.
== History ==
The general approach dates back to the work in 1924 of Louis de Broglie in his PhD thesis Recherches sur la théorie des quanta, where he introduced the concept of electrons as matter waves. The wave nature was experimentally confirmed for electron beams in the work of two groups, the first being the Davisson–Germer experiment, the other by George Paget Thomson and Alexander Reid. Alexander Reid, who was Thomson's graduate student, performed the first experiments, but he died soon after in a motorcycle accident. These experiments were rapidly followed by the first non-relativistic diffraction model for electrons by Hans Bethe based upon the Schrödinger equation, which is very close to how electron diffraction is now described. Significantly, Clinton Davisson and Lester Germer noticed that their results could not be interpreted using a Bragg's law approach, as the positions were systematically different; the approach of Hans Bethe, which includes both multiple scattering and the refraction due to the average potential, yielded more accurate results. Very quickly there were multiple advances, for instance Seishi Kikuchi's observations of lines that can be used for crystallographic indexing due to combined elastic and inelastic scattering, gas electron diffraction developed by Herman Mark and Raymond Wierl, diffraction in liquids by Louis Maxwell, and the first electron microscopes developed by Max Knoll and Ernst Ruska.
Despite early successes, such as the determination of the positions of hydrogen atoms in NH4Cl crystals by W. E. Laschkarew and I. D. Usykin in 1933, boric acid by John M. Cowley in 1953 and orthoboric acid by William Houlder Zachariasen in 1954, electron diffraction for many years was a qualitative technique used to check samples within electron microscopes. John M. Cowley explains in a 1968 paper: "Thus was founded the belief, amounting in some cases almost to an article of faith, and persisting even to the present day, that it is impossible to interpret the intensities of electron diffraction patterns to gain structural information." This has slowly changed. One key step was the development in 1936 by Walther Kossel and Gottfried Möllenstedt of convergent beam electron diffraction (CBED). This approach was extended by Peter Goodman and Gunter Lehmpfuhl, then mainly by the groups of John Steeds and Michiyoshi Tanaka, who showed how to use CBED patterns to determine point groups and space groups. This was combined with other transmission electron microscopy approaches, typically where both local microstructure and atomic structure were of importance.
A second key set of work was that by the group of Boris Vainshtein, who demonstrated solving the structure of many different materials such as clays and micas using powder diffraction patterns, a success attributed to the samples being relatively thin. (Since the advent of precession electron diffraction it has become clear that averaging over many different electron beam directions and thicknesses significantly reduces dynamical diffraction effects, which was probably also important.)
More complete crystallographic analysis of intensity data was slow to develop. One of the key steps was the demonstration in 1976 by Douglas L. Dorset and Herbert A. Hauptman that conventional direct methods for X-ray crystallography could be used. Another was the demonstration in 1986 that a Patterson function could be powerful in the seminal solution of the silicon (111) 7×7 reconstructed surface by Kunio Takayanagi using ultra-high vacuum electron diffraction. More complete analyses were the demonstration that classical inversion methods could be used for surfaces in 1997 by Dorset and Laurence D. Marks, and in 1998 the work by Jon Gjønnes, who combined three-dimensional electron diffraction with precession electron diffraction and direct methods to solve an intermetallic, also using dynamical refinements.
At the same time as approaches to invert diffraction data using electrons were established, the resolution of electron microscopes became good enough that images could be combined with diffraction information. At first the resolution was poor: in 1956, James Menter published the first electron microscope images showing the lattice structure of a material at 1.2 nm resolution. In 1968, Aaron Klug and David DeRosier used electron microscopy to visualise the structure of the tail of bacteriophage T4, a common virus, a key step in the use of electrons for macromolecular structure determination. The first quantitative matching of atomic-scale images and dynamical simulations was published in 1972 by J. G. Allpress, E. A. Hewat, A. F. Moodie and J. V. Sanders. By the early 1980s the resolution of electron microscopes was sufficient to resolve the atomic structure of materials, for instance with the 600 kV instrument led by Vernon Cosslett, so combinations of high-resolution transmission electron microscopy and diffraction became standard across many areas of science. Most of the research published using these approaches is described as electron microscopy, without the addition of the term electron crystallography.
== Comparison with X-ray crystallography ==
Electron crystallography can complement X-ray crystallography for studies of very small crystals (<0.1 micrometers), both inorganic and organic, as well as proteins, such as membrane proteins, that cannot easily form the large 3-dimensional crystals required for that process. Protein structures are usually determined from either 2-dimensional crystals (sheets or helices), polyhedrons such as viral capsids, or dispersed individual proteins. Electrons can be used in these situations, whereas X-rays cannot, because electrons interact more strongly with atoms than X-rays do. Thus, X-rays will travel through a thin 2-dimensional crystal without diffracting significantly, whereas electrons can be used to form an image. Conversely, the strength of the electron–matter interaction makes thick (e.g. 3-dimensional, >1 micrometer) crystals impervious to electrons, which only penetrate short distances.
One of the main difficulties in X-ray crystallography is determining phases in the diffraction pattern. Because of the complexity of X-ray lenses, it is difficult to form an image of the crystal being diffracted, and hence phase information is lost. Fortunately, electron microscopes can resolve atomic structure in real space, and the crystallographic structure factor phase information can be experimentally determined from an image's Fourier transform. The Fourier transform of an atomic resolution image is similar to, but not identical with, a diffraction pattern, with reciprocal lattice spots reflecting the symmetry and spacing of a crystal. Aaron Klug was the first to realize, already in 1968, that the phase information could be read out directly from the Fourier transform of an electron microscopy image that had been scanned into a computer. For this, and his studies on virus structures and transfer RNA, Klug received the Nobel Prize for chemistry in 1982.
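The idea can be illustrated with a few lines of NumPy. This is only a schematic sketch: the random array stands in for a digitized atomic-resolution image, and real work would involve calibration, symmetry averaging and correction of microscope aberrations.

```python
import numpy as np

image = np.random.rand(256, 256)         # stand-in for a digitized HREM image

F = np.fft.fftshift(np.fft.fft2(image))  # Fourier transform of the image
amplitudes = np.abs(F)                   # |F|: also obtainable from a diffraction pattern
phases = np.angle(F)                     # arg(F): lost in diffraction, preserved in imaging
```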
== Radiation damage ==
A problem common to X-ray crystallography and electron crystallography is radiation damage, by which especially organic molecules and proteins are damaged as they are being imaged, limiting the resolution that can be obtained. This is especially troublesome in the setting of electron crystallography, where the radiation damage is focused on far fewer atoms. One technique used to limit radiation damage is electron cryomicroscopy, in which the samples undergo cryofixation and imaging takes place at liquid nitrogen or even liquid helium temperatures. Because of this problem, X-ray crystallography has been much more successful in determining the structure of proteins that are especially vulnerable to radiation damage. Radiation damage was recently investigated using MicroED of thin 3D crystals in a frozen hydrated state.
== Protein structures determined by electron crystallography ==
The first electron crystallographic protein structure to achieve atomic resolution was bacteriorhodopsin, determined by Richard Henderson and coworkers at the Medical Research Council Laboratory of Molecular Biology in 1990. However, already in 1975 Unwin and Henderson had determined the first membrane protein structure at intermediate resolution (7 Ångström), showing for the first time the internal structure of a membrane protein, with its alpha-helices standing perpendicular to the plane of the membrane. Since then, several other high-resolution structures have been determined by electron crystallography, including the light-harvesting complex, the nicotinic acetylcholine receptor, and the bacterial flagellum. The highest resolution protein structure solved by electron crystallography of 2D crystals is that of the water channel aquaporin-0. In 2012, Jan Pieter Abrahams and coworkers extended electron crystallography to 3D protein nanocrystals by rotation electron diffraction (RED).
== Application to inorganic materials ==
Electron crystallographic studies on inorganic crystals using high-resolution electron microscopy (HREM) images were first performed by Aaron Klug in 1978 and by Sven Hovmöller and coworkers in 1984. HREM images were used because they make it possible to select (by computer software) only the very thin regions close to the edge of the crystal for structure analysis (see also crystallographic image processing). This is of crucial importance since in the thicker parts of the crystal the exit-wave function (which carries the information about the intensity and position of the projected atom columns) is no longer linearly related to the projected crystal structure. Moreover, not only do the HREM images change their appearance with increasing crystal thickness, they are also very sensitive to the chosen setting of the defocus Δf of the objective lens (see the HREM images of GaN for example). To cope with this complexity, methods based upon the Cowley–Moodie multislice algorithm and non-linear imaging theory have been developed to simulate images; this only became possible once the FFT method was developed.
In addition to electron microscopy images, it is also possible to use electron diffraction (ED) patterns for crystal structure determination. The utmost care must be taken to record such ED patterns from the thinnest areas in order to keep most of the structure-related intensity differences between the reflections (quasi-kinematical diffraction conditions). Just as with X-ray diffraction patterns, the important crystallographic structure factor phases are lost in electron diffraction patterns and must be uncovered by special crystallographic methods such as direct methods, maximum likelihood or (more recently) the charge-flipping method. On the other hand, ED patterns of inorganic crystals often have a high resolution (i.e. interplanar spacings with high Miller indices) well below 1 Ångström. This is comparable to the point resolution of the best electron microscopes. Under favourable conditions it is possible to use ED patterns from a single orientation to determine the complete crystal structure. Alternatively, a hybrid approach can be used, in which HRTEM images are used for solving and ED intensities for refining the crystal structure.
Recent progress in structure analysis by ED was made by introducing the Vincent–Midgley precession technique for recording electron diffraction patterns. The intensities obtained in this way are usually much closer to the kinematical intensities, so that even structures that are out of range when processing conventional (selected area) electron diffraction data can be determined.
Crystal structures determined via electron crystallography can be checked for their quality by using first-principles calculations within density functional theory (DFT). This approach has been used to assist in solving surface structures and for the validation of several metal-rich structures which were only accessible by HRTEM and ED, respectively.
Recently, two very complicated zeolite structures have been determined by electron crystallography combined with X-ray powder diffraction. These are more complex than the most complex zeolite structures determined by X-ray crystallography.
== Further reading ==
Zou, X.D.; Hovmöller, S.; Oleynikov, P. (2011). Electron Crystallography – Electron Microscopy and Electron Diffraction. IUCr Texts on Crystallography 16, Oxford University Press. http://ukcatalogue.oup.com/product/9780199580200.do ISBN 978-0-19-958020-0
Downing, K. H.; Meisheng, H.; Wenk, H.-R.; O'Keefe, M. A. (1990). "Resolution of oxygen atoms in staurolite by three-dimensional transmission electron microscopy". Nature. 348 (6301): 525–528. Bibcode:1990Natur.348..525D. doi:10.1038/348525a0. S2CID 4340756.
Zou, X.D.; Hovmöller, S. (2008). "Electron crystallography: Imaging and Single Crystal Diffraction from Powders". Acta Crystallographica A. 64 (Pt 1): 149–160. Bibcode:2008AcCrA..64..149Z. doi:10.1107/S0108767307060084. PMID 18156680.
T.E. Weirich, X.D. Zou & J.L. Lábár (2006). Electron Crystallography: Novel Approaches for Structure Determination of Nanosized Materials. Springer Netherlands, ISBN 978-1-4020-3919-5
== External links ==
Interview with Aaron Klug, Nobel laureate for work on crystallographic electron microscopy. Freeview video by the Vega Science Trust.
Raunser, S; Walz, T (2009). "Electron Crystallography as a Technique to Study the Structure on Membrane Proteins in a Lipidic Environment". Annual Review of Biophysics. 38 (1): 89–105. doi:10.1146/annurev.biophys.050708.133649. PMID 19416061. | Wikipedia/Electron_crystallography |
A molecule is a group of two or more atoms that are held together by attractive forces known as chemical bonds; depending on context, the term may or may not include ions that satisfy this criterion. In quantum physics, organic chemistry, and biochemistry, the distinction from ions is dropped and molecule is often used when referring to polyatomic ions.
A molecule may be homonuclear, that is, it consists of atoms of one chemical element, e.g. two atoms in the oxygen molecule (O2); or it may be heteronuclear, a chemical compound composed of more than one element, e.g. water (two hydrogen atoms and one oxygen atom; H2O). In the kinetic theory of gases, the term molecule is often used for any gaseous particle regardless of its composition. This relaxes the requirement that a molecule contains two or more atoms, since the noble gases are individual atoms. Atoms and complexes connected by non-covalent interactions, such as hydrogen bonds or ionic bonds, are typically not considered single molecules.
Concepts similar to molecules have been discussed since ancient times, but modern investigation into the nature of molecules and their bonds began in the 17th century. Refined over time by scientists such as Robert Boyle, Amedeo Avogadro, Jean Perrin, and Linus Pauling, the study of molecules is today known as molecular physics or molecular chemistry.
== Etymology ==
According to Merriam-Webster and the Online Etymology Dictionary, the word "molecule" derives from the Latin "moles" or small unit of mass. The word is derived from French molécule (1678), from Neo-Latin molecula, diminutive of Latin moles "mass, barrier". The word, which until the late 18th century was used only in Latin form, became popular after being used in works of philosophy by Descartes.
== History ==
The definition of the molecule has evolved as knowledge of the structure of molecules has increased. Earlier definitions were less precise, defining molecules as the smallest particles of pure chemical substances that still retain their composition and chemical properties. This definition often breaks down since many substances in ordinary experience, such as rocks, salts, and metals, are composed of large crystalline networks of chemically bonded atoms or ions, but are not made of discrete molecules.
The modern concept of molecules can be traced back towards pre-scientific and Greek philosophers such as Leucippus and Democritus, who argued that all the universe is composed of atoms and voids. Circa 450 BC Empedocles imagined fundamental elements (fire, earth, air, and water) and "forces" of attraction and repulsion allowing the elements to interact.
A fifth element, the incorruptible quintessence aether, was considered to be the fundamental building block of the heavenly bodies. The viewpoint of Leucippus and Empedocles, along with the aether, was accepted by Aristotle and passed to medieval and renaissance Europe.
In a more concrete manner, however, the concept of aggregates or units of bonded atoms, i.e. "molecules", traces its origins to Robert Boyle's 1661 hypothesis, in his famous treatise The Sceptical Chymist, that matter is composed of clusters of particles and that chemical change results from the rearrangement of the clusters. Boyle argued that matter's basic elements consisted of various sorts and sizes of particles, called "corpuscles", which were capable of arranging themselves into groups. In 1789, William Higgins published views on what he called combinations of "ultimate" particles, which foreshadowed the concept of valency bonds. If, for example, according to Higgins, the force between the ultimate particle of oxygen and the ultimate particle of nitrogen were 6, then the strength of the force would be divided accordingly, and similarly for the other combinations of ultimate particles.
Amedeo Avogadro created the word "molecule". In his 1811 paper "Essay on Determining the Relative Masses of the Elementary Molecules of Bodies", he essentially states, according to Partington's A Short History of Chemistry, that: "The smallest particles of gases are not necessarily simple atoms, but are made up of a certain number of these atoms united by attraction to form a single molecule." In coordination with these concepts, in 1833 the French chemist Marc Antoine Auguste Gaudin presented a clear account of Avogadro's hypothesis, regarding atomic weights, by making use of "volume diagrams", which clearly show both semi-correct molecular geometries, such as a linear water molecule, and correct molecular formulas, such as H2O.
In 1917, Linus Pauling, then an undergraduate chemical engineering student in the United States, was learning the Dalton hook-and-eye bonding method, which was the mainstream description of bonds between atoms at the time. Pauling, however, was not satisfied with this method and looked to the newly emerging field of quantum physics for a new one. In 1926, French physicist Jean Perrin received the Nobel Prize in physics for proving, conclusively, the existence of molecules. He did this by calculating the Avogadro constant using three different methods, all involving liquid phase systems: first, using a gamboge soap-like emulsion; second, by doing experimental work on Brownian motion; and third, by confirming Einstein's theory of particle rotation in the liquid phase.
In 1927, the physicists Fritz London and Walter Heitler applied the new quantum mechanics to deal with the saturable, nondynamic forces of attraction and repulsion, i.e., exchange forces, of the hydrogen molecule. Their valence bond treatment of this problem, in their joint paper, was a landmark in that it brought chemistry under quantum mechanics. Their work was an influence on Pauling, who had just received his doctorate and visited Heitler and London in Zürich on a Guggenheim Fellowship.
Subsequently, in 1931, building on the work of Heitler and London and on theories found in Lewis' famous article, Pauling published his ground-breaking article "The Nature of the Chemical Bond", in which he used quantum mechanics to calculate properties and structures of molecules, such as angles between bonds and rotation about bonds. On these concepts, Pauling developed hybridization theory to account for bonds in molecules such as CH4, in which four sp³ hybridised orbitals are overlapped by hydrogen's 1s orbital, yielding four sigma (σ) bonds. The four bonds are of the same length and strength, which yields the familiar tetrahedral molecular structure of methane.
== Molecular science ==
The science of molecules is called molecular chemistry or molecular physics, depending on whether the focus is on chemistry or physics. Molecular chemistry deals with the laws governing the interaction between molecules that results in the formation and breakage of chemical bonds, while molecular physics deals with the laws governing their structure and properties. In practice, however, this distinction is vague. In molecular sciences, a molecule consists of a stable system (bound state) composed of two or more atoms. Polyatomic ions may sometimes be usefully thought of as electrically charged molecules. The term unstable molecule is used for very reactive species, i.e., short-lived assemblies (resonances) of electrons and nuclei, such as radicals, molecular ions, Rydberg molecules, transition states, van der Waals complexes, or systems of colliding atoms as in Bose–Einstein condensate.
== Prevalence ==
Molecules as components of matter are common. They also make up most of the oceans and atmosphere. Most organic substances are molecules. The substances of life are molecules, e.g. proteins, the amino acids of which they are composed, the nucleic acids (DNA and RNA), sugars, carbohydrates, fats, and vitamins. The nutrient minerals are generally ionic compounds, thus they are not molecules, e.g. iron sulfate.
However, the majority of familiar solid substances on Earth are made partly or completely of crystals or ionic compounds, which are not made of molecules. These include all of the minerals that make up the substance of the Earth, sand, clay, pebbles, rocks, boulders, bedrock, the molten interior, and the core of the Earth. All of these contain many chemical bonds, but are not made of identifiable molecules.
No typical molecule can be defined for salts nor for covalent crystals, although these are often composed of repeating unit cells that extend either in a plane, e.g. graphene; or three-dimensionally e.g. diamond, quartz, sodium chloride. The theme of repeated unit-cellular-structure also holds for most metals which are condensed phases with metallic bonding. Thus solid metals are not made of molecules. In glasses, which are solids that exist in a vitreous disordered state, the atoms are held together by chemical bonds with no presence of any definable molecule, nor any of the regularity of repeating unit-cellular-structure that characterizes salts, covalent crystals, and metals.
== Bonding ==
Molecules are generally held together by covalent bonding. Several non-metallic elements exist only as molecules in the environment either in compounds or as homonuclear molecules, not as free atoms: for example, hydrogen.
While some argue that a metallic crystal can be considered a single giant molecule held together by metallic bonding, others point out that metals behave very differently from molecules.
=== Covalent ===
A covalent bond is a chemical bond that involves the sharing of electron pairs between atoms. These electron pairs are termed shared pairs or bonding pairs, and the stable balance of attractive and repulsive forces between atoms, when they share electrons, is termed covalent bonding.
=== Ionic ===
Ionic bonding is a type of chemical bond that involves the electrostatic attraction between oppositely charged ions, and is the primary interaction occurring in ionic compounds. The ions are atoms that have lost one or more electrons (termed cations) and atoms that have gained one or more electrons (termed anions). This transfer of electrons is termed electrovalence in contrast to covalence. In the simplest case, the cation is a metal atom and the anion is a nonmetal atom, but these ions can be of a more complicated nature, e.g. molecular ions like NH4+ or SO42−. At normal temperatures and pressures, ionic bonding mostly creates solids (or occasionally liquids) without separate identifiable molecules, but the vaporization/sublimation of such materials does produce separate molecules where electrons are still transferred fully enough for the bonds to be considered ionic rather than covalent.
== Molecular size ==
Most molecules are far too small to be seen with the naked eye, although molecules of many polymers can reach macroscopic sizes, including biopolymers such as DNA. Molecules commonly used as building blocks for organic synthesis have a dimension of a few angstroms (Å) to several dozen Å, or around one billionth of a meter. Single molecules cannot usually be observed by light (as noted above), but small molecules and even the outlines of individual atoms may be traced in some circumstances by use of an atomic force microscope. Some of the largest molecules are macromolecules or supermolecules.
The smallest molecule is the diatomic hydrogen (H2), with a bond length of 0.74 Å.
Effective molecular radius is the size a molecule displays in solution.
== Molecular formulas ==
=== Chemical formula types ===
The chemical formula for a molecule uses one line of chemical element symbols, numbers, and sometimes also other symbols, such as parentheses, dashes, brackets, and plus (+) and minus (−) signs. These are limited to one typographic line of symbols, which may include subscripts and superscripts.
A compound's empirical formula is a very simple type of chemical formula. It is the simplest integer ratio of the chemical elements that constitute it. For example, water is always composed of a 2:1 ratio of hydrogen to oxygen atoms, and ethanol (ethyl alcohol) is always composed of carbon, hydrogen, and oxygen in a 2:6:1 ratio. However, this does not determine the kind of molecule uniquely – dimethyl ether has the same ratios as ethanol, for instance. Molecules with the same atoms in different arrangements are called isomers. Carbohydrates, for example, have the same ratio (carbon:hydrogen:oxygen = 1:2:1), and thus the same empirical formula, but different total numbers of atoms in the molecule.
The molecular formula reflects the exact number of atoms that compose the molecule and so characterizes different molecules. However, different isomers can have the same atomic composition while being different molecules.
The empirical formula is often the same as the molecular formula but not always. For example, the molecule acetylene has molecular formula C2H2, but the simplest integer ratio of elements is CH.
The molecular mass can be calculated from the chemical formula and is expressed in conventional atomic mass units equal to 1/12 of the mass of a neutral carbon-12 (12C isotope) atom. For network solids, the term formula unit is used in stoichiometric calculations.
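The relations between molecular formula, empirical formula and molecular mass are simple enough to compute directly; here is a minimal Python sketch (the small mass table and the function names are illustrative only):

```python
from functools import reduce
from math import gcd

ATOMIC_MASS = {"H": 1.008, "C": 12.011, "O": 15.999}  # unified atomic mass units (u)

def molecular_mass(counts):
    """Molecular mass from element counts, e.g. {'H': 2, 'O': 1} for water."""
    return sum(ATOMIC_MASS[element] * n for element, n in counts.items())

def empirical_formula(counts):
    """Reduce element counts to their simplest integer ratio."""
    divisor = reduce(gcd, counts.values())
    return {element: n // divisor for element, n in counts.items()}

print(molecular_mass({"H": 2, "O": 1}))     # ~18.015 u for H2O
print(empirical_formula({"C": 2, "H": 2}))  # {'C': 1, 'H': 1}: acetylene C2H2 reduces to CH
```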
=== Structural formula ===
For molecules with a complicated 3-dimensional structure, especially involving atoms bonded to four different substituents, a simple molecular formula or even semi-structural chemical formula may not be enough to completely specify the molecule. In this case, a graphical type of formula called a structural formula may be needed. Structural formulas may in turn be represented with a one-dimensional chemical name, but such chemical nomenclature requires many words and terms which are not part of chemical formulas.
== Molecular geometry ==
Molecules have fixed equilibrium geometries (bond lengths and angles) about which they continuously oscillate through vibrational and rotational motions. A pure substance is composed of molecules with the same average geometrical structure. The chemical formula and the structure of a molecule are the two important factors that determine its properties, particularly its reactivity. Isomers share a chemical formula but normally have very different properties because of their different structures. Stereoisomers, a particular type of isomer, may have very similar physico-chemical properties and at the same time different biochemical activities.
== Molecular spectroscopy ==
Molecular spectroscopy deals with the response (spectrum) of molecules interacting with probing signals of known energy (or frequency, according to the Planck relation). Molecules have quantized energy levels that can be analyzed by detecting the molecule's energy exchange through absorbance or emission.
Spectroscopy does not generally refer to diffraction studies where particles such as neutrons, electrons, or high energy X-rays interact with a regular arrangement of molecules (as in a crystal).
Microwave spectroscopy commonly measures changes in the rotation of molecules, and can be used to identify molecules in outer space. Infrared spectroscopy measures the vibration of molecules, including stretching, bending or twisting motions. It is commonly used to identify the kinds of bonds or functional groups in molecules. Changes in the arrangements of electrons yield absorption or emission lines in ultraviolet, visible or near infrared light, and result in colour. Nuclear magnetic resonance spectroscopy measures the environment of particular nuclei in the molecule, and can be used to characterise the numbers of atoms in different positions in a molecule.
== Theoretical aspects ==
The study of molecules by molecular physics and theoretical chemistry is largely based on quantum mechanics and is essential for the understanding of the chemical bond. The simplest of molecules is the hydrogen molecule-ion, H2+, and the simplest of all the chemical bonds is the one-electron bond. H2+ is composed of two positively charged protons and one negatively charged electron, which means that the Schrödinger equation for the system can be solved more easily due to the lack of electron–electron repulsion. With the development of fast digital computers, approximate solutions for more complicated molecules became possible and are one of the main aspects of computational chemistry.
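Within the Born–Oppenheimer approximation (nuclei clamped at separation R), the electronic Hamiltonian of H2+ contains a single electron attracted to two protons:

$$\hat{H} = -\frac{\hbar^2}{2 m_e} \nabla^2 - \frac{e^2}{4\pi\varepsilon_0}\left(\frac{1}{r_A} + \frac{1}{r_B}\right) + \frac{e^2}{4\pi\varepsilon_0 R},$$

where r_A and r_B are the distances from the electron to the two nuclei; the absence of an electron–electron repulsion term is what makes the problem separable (in prolate spheroidal coordinates).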
When trying to define rigorously whether an arrangement of atoms is sufficiently stable to be considered a molecule, IUPAC suggests that it "must correspond to a depression on the potential energy surface that is deep enough to confine at least one vibrational state". This definition does not depend on the nature of the interaction between the atoms, but only on the strength of the interaction. In fact, it includes weakly bound species that would not traditionally be considered molecules, such as the helium dimer, He2, which has one vibrational bound state and is so loosely bound that it is only likely to be observed at very low temperatures.
Whether or not an arrangement of atoms is sufficiently stable to be considered a molecule is inherently an operational definition. Philosophically, therefore, a molecule is not a fundamental entity (in contrast, for instance, to an elementary particle); rather, the concept of a molecule is the chemist's way of making a useful statement about the strengths of atomic-scale interactions in the world that we observe.
== External links ==
Molecule of the Month – School of Chemistry, University of Bristol | Wikipedia/Molecule |
In mathematics, the tensor representations of the general linear group are those that are obtained by taking finitely many tensor products of the fundamental representation and its dual. The irreducible factors of such a representation are also called tensor representations, and can be obtained by applying Schur functors (associated to Young tableaux). These coincide with the rational representations of the general linear group.
More generally, a matrix group is any subgroup of the general linear group. A tensor representation of a matrix group is any representation that is contained in a tensor representation of the general linear group. For example, the orthogonal group O(n) admits a tensor representation on the space of all trace-free symmetric tensors of order two. For orthogonal groups, the tensor representations are contrasted with the spin representations.
The classical groups, like the symplectic group, have the property that all finite-dimensional representations are tensor representations (by Weyl's construction), while other representations (like the metaplectic representation) exist in infinite dimensions.
== References ==
Roe Goodman; Nolan Wallach (2009), Symmetry, representations, and invariants, Springer, chapters 9 and 10.
Bargmann, V., & Todorov, I. T. (1977). Spaces of analytic functions on a complex cone as carriers for the symmetric tensor representations of SO(n). Journal of Mathematical Physics, 18(6), 1141–1148. | Wikipedia/Tensor_representation |
John Wiley & Sons, Inc., commonly known as Wiley, is an American multinational publishing company that focuses on academic publishing and instructional materials. The company was founded in 1807 and produces books, journals, and encyclopedias, in print and electronically, as well as online products and services, training materials, and educational materials for undergraduate, graduate, and continuing education students.
== History ==
The company was established in 1807 when Charles Wiley opened a print shop in Manhattan. The company was the publisher of 19th century American literary figures like James Fenimore Cooper, Washington Irving, Herman Melville, and Edgar Allan Poe, as well as of legal, religious, and other non-fiction titles. The firm was renamed John Wiley & Son in 1865. Wiley later shifted its focus to scientific, technical, and engineering subject areas, abandoning its literary interests.
Wiley's son John (born in Flatbush, New York, October 4, 1808; died in East Orange, New Jersey, February 21, 1891) took over the business when Charles Wiley died in 1826. The firm was successively named Wiley, Lane & Co., then Wiley & Putnam, and then John Wiley. The company acquired its present name in 1876, when John's second son William H. Wiley joined his brother Charles in the business.
Through the 20th century, the company expanded its publishing activities into the sciences and higher education.
In 1960 Wiley set up a European branch in London, which later moved to Chichester.
In 1982, Wiley acquired the publishing operations of the British firm Heyden & Son.
In 1989, Wiley acquired the life science publisher Liss.
In 1996, Wiley acquired the German technical publisher VCH.
In 1997, Wiley acquired the professional publisher Van Nostrand Reinhold (the successor to the company started by David Van Nostrand) from Thomson Learning.
In 1999, Wiley acquired the professional publisher Jossey-Bass from Pearson.
In 2001, Wiley acquired the publisher Hungry Minds (formerly IDG Books, including most titles formerly published by Macmillan General Reference) from International Data Group.
In 2005, Wiley acquired the British medical publisher Whurr.
Wiley marked its bicentennial in 2007. In conjunction with the anniversary, the company published Knowledge for Generations: Wiley and the Global Publishing Industry, 1807–2007, depicting Wiley's role in the evolution of publishing against a social, cultural, and economic backdrop. Wiley has also created an online community called Wiley Living History, offering excerpts from Knowledge for Generations and a forum for visitors and Wiley employees to post their comments and anecdotes.
In 2021, Wiley acquired Hindawi and J&J Editorial.
In 2023, Academic Partnerships acquired Wiley's online education business for $150 million.
=== High-growth and emerging markets ===
In December 2010, Wiley opened an office in Dubai. Wiley established publishing operations in India in 2006 (though it has had a sales presence since 1966), and has established a presence in North Africa through sales contracts with academic institutions in Tunisia, Libya, and Egypt. On April 16, 2012, the company announced the establishment of Wiley Brasil Editora LTDA in São Paulo, Brazil, effective May 1, 2012.
=== Strategic acquisition and divestiture ===
Wiley's scientific, technical, and medical business was expanded by the acquisition of Blackwell Publishing in February 2007 for US$1.12 billion, its largest purchase to that time. The combined business, named Scientific, Technical, Medical, and Scholarly (also known as Wiley-Blackwell), publishes, in print and online, 1,600 scholarly peer-reviewed journals and an extensive collection of books, reference works, databases, and laboratory manuals in the life and physical sciences, medicine and allied health, engineering, the humanities, and the social sciences.
Through a backfile initiative completed in 2007, 8.2 million pages of journal content have been made available online, a collection dating back to 1799. Wiley-Blackwell also publishes on behalf of about 700 professional and scholarly societies; among them are the American Cancer Society (ACS), for which it publishes Cancer, the flagship ACS journal; the Sigma Theta Tau International Honor Society of Nursing; and the American Anthropological Association. Other journals published include Angewandte Chemie, Advanced Materials, Hepatology, International Finance and Liver Transplantation.
Launched as a pilot in 1997 with fifty journals and expanded through 1998, Wiley Interscience provided online access to Wiley journals, reference works, and books, including backfile content. Journals previously from Blackwell Publishing were available online from Blackwell Synergy until they were integrated into Wiley Interscience on June 30, 2008. In December 2007, Wiley also began distributing its technical titles through the Safari Books Online e-reference service. Interscience was supplanted by Wiley Online Library in 2010.
On February 17, 2012, Wiley announced the acquisition of Inscape Holdings Inc., which provides DISC assessments and training for interpersonal business skills. A month later, Wiley announced its intention to divest assets in the areas of travel (including the Frommer's brand), culinary, general interest, nautical, pets, and crafts, as well as the Webster's New World and CliffsNotes brands. The planned divestiture was aligned with Wiley's "increased strategic focus on content and services for research, learning, and professional practices, and on lifelong learning through digital technology". In May 2012, the company acquired publishing company Harlan Davidson, Inc., which is a family-owned business based in Illinois. On August 13 of the same year, Wiley announced it entered into a definitive agreement to sell all of its travel assets, including all of its interests in the Frommer's brand, to Google Inc. On November 6, 2012, Houghton Mifflin Harcourt acquired Wiley's cookbooks, dictionaries and study guides. In 2013, Wiley sold its pets, crafts and general interest lines to Turner Publishing Company and its nautical line to Fernhurst Books. HarperCollins acquired parts of Wiley Canada's trade operations in 2013; the remaining Canadian trade operations were merged into Wiley U.S.
In 2021, Wiley acquired the Hindawi publishing firm for $298 million in cash to expand its open access journals portfolio.
Wiley stated it would keep the Hindawi journals under their previous brand and continue developing the open source publishing platform Phenom. In 2023, after more than 7,000 article retractions in Hindawi journals linked to papers originating from paper mills, Wiley announced that it would cease using the Hindawi brand and integrate Hindawi's 200 remaining journals into its main portfolio. The Wiley CEO who had initiated the Hindawi acquisition stepped down in the wake of those announcements.
In 2021, Wiley announced the acquisition of eJournalPress (EJP), a provider of web-based technology solutions for scholarly publishing companies.
== Products ==
=== Brands and partnerships ===
Wiley's Professional Development brands include For Dummies, Jossey-Bass, Pfeiffer, Wrox Press, J.K. Lasser, Sybex, Fisher Investments Press, and Bloomberg Press. The STMS business is also known as Wiley-Blackwell, formed following the acquisition of Blackwell Publishing in February 2007. Brands include The Cochrane Library and more than 1,500 journals.
Wiley has publishing alliances with partners including Microsoft, CFA Institute, the Culinary Institute of America, the American Institute of Architects, the National Geographic Society, and the Institute of Electrical and Electronics Engineers (IEEE). Wiley-Blackwell also publishes journals on behalf of more than 700 professional and scholarly society partners including the New York Academy of Sciences, American Cancer Society, The Physiological Society, British Ecological Society, American Association of Anatomists, Society for the Psychological Study of Social Issues and The London School of Economics and Political Science, making it the world's largest society publisher.
Wiley partners with GreyCampus to provide professional learning solutions around big data and digital literacy. Wiley has also partnered with five other higher-education publishers to create CourseSmart, a company developed to sell college textbooks in eTextbook format on a common platform. In 2002, Wiley created a partnership with French publisher Anuman Interactive in order to launch a series of e-books adapted from the For Dummies collection. In 2013, Wiley partnered with American Graphics Institute to create an online education video and e-book subscription service called The Digital Classroom.
In 2016, Wiley launched a worldwide partnership with Christian H. Cooper to create a preparation program for candidates taking the Financial Risk Manager (FRM) exam offered by the Global Association of Risk Professionals. The program combines the adaptive learning technology of Wiley's existing efficient learning platform with Cooper's legacy FRM book series, and was built on the view that the FRM designation would grow to rival the Chartered Financial Analyst designation, serving tens of thousands of candidates worldwide.
With the integration of digital technology and the traditional print medium, Wiley has stated that in the near future its customers will be able to search across all its content regardless of original medium and assemble a custom product in the format of choice. Web resources are also enabling new types of publisher-customer interactions within the company's various businesses.
=== Open access ===
In 2016, Wiley started a collaboration with the open access publisher Hindawi to help convert nine Wiley journals to full open access. In 2018 a further announcement was made indicating that the Wiley-Hindawi collaboration would launch an additional four new fully open access journals.
On January 18, 2019, Wiley signed a contract with Project DEAL to begin open access to its academic journals for more than 700 academic institutions. It is the first contract between a publisher and a leading research nation (Germany) toward open access to scientific research.
=== Higher education ===
Higher Education's "WileyPLUS" is an online product that combines electronic versions of texts with media resources and tools for instructors and students. It is intended to provide a single source from which instructors can manage their courses, create presentations, and assign and grade homework and tests; students can receive hints and explanations as they work on homework, and link back to relevant sections of the text.
"Wiley Custom Select" launched in February 2009 as a custom textbook system allowing instructors to combine content from different Wiley textbooks and lab manuals and add in their own material. The company has begun to make content from its STMS business available to instructors through the system, with content from its Professional/Trade business to follow.
In September 2019, Wiley entered into a collaboration with IIM Lucknow to offer analytics courses for finance executives.
=== Online Program Management ===
In November 2011, Wiley Education Services announced the purchase of Deltak for $220 million. Wiley later acquired The Learning House in 2018. This made Wiley one of the largest online program management (OPM) providers at the time, with 60 university partners and more than 700 online programs.
In June 2023, Wiley announced it would divest several business units, including Wiley University Services. The business's full-year 2023 revenue was $208 million, an 8% reduction from the prior year; in 2020, Wiley had reported $232 million in OPM revenue, with organic growth of 11% over the prior year.
In November 2023, Academic Partnerships announced they would purchase Wiley's OPM business for $110 million.
=== Medicine ===
In January 2008, Wiley launched a new version of its evidence-based medicine (EBM) product, InfoPOEMs with InfoRetriever, under the name Essential Evidence Plus, providing primary-care clinicians with point-of-care access to the most extensive source of EBM information via their PDAs/handheld devices and desktop computers. Essential Evidence Plus includes the InfoPOEMs daily EBM content alerting service and two new content resources—EBM Guidelines, a collection of practice guidelines, evidence summaries, and images, and e-Essential Evidence, a reference for general practitioners, nurses, and physician assistants providing first-contact care.
=== Architecture and design ===
In October 2008, Wiley launched a new online service providing continuing education units (CEU) and professional development hour (PDH) credits to architects and designers. The initial courses are adapted from Wiley books, extending their reach into the digital space. Wiley is an accredited AIA continuing education provider.
=== Wiley Online Library ===
Wiley Online Library is a subscription-based library of John Wiley & Sons that launched on August 7, 2010, replacing Wiley Interscience. It is a collection of online resources covering life, health, and physical sciences as well as social science and the humanities. To its members, Wiley Online Library delivers access to over 4 million articles from 1,600 journals, more than 22,000 books, and hundreds of reference works, laboratory protocols, and databases from John Wiley & Sons and its imprints, including Wiley-Blackwell, Wiley-VCH, and Jossey-Bass. The online library is implemented on top of the Literatum platform, developed by Atypon which Wiley acquired in 2016.
== Corporate structure ==
=== Governance and operations ===
While the company is led by an independent management team and Board of Directors, the involvement of the Wiley family is ongoing, with sixth-generation members (and siblings) Peter Booth Wiley as the non-executive chairman of the board and Bradford Wiley II as a Director and past chairman of the board. Seventh-generation members Jesse and Nate Wiley work in the company's Professional/Trade and Scientific, Technical, Medical, and Scholarly businesses, respectively.
Wiley has been publicly owned since 1962, and listed on the New York Stock Exchange since 1995; its stock is traded under the symbols NYSE: WLY (for its Class A stock) and NYSE: WLYB (for its Class B stock).
Wiley's operations are organized into three business divisions:
Scientific, Technical, Medical, and Scholarly (STMS), also known as Wiley-Blackwell
Professional Development
Global Education
The company has approximately 10,000 employees worldwide, with headquarters in Hoboken, New Jersey, since 2002.
=== Corporate culture ===
In 2008, Wiley was named for the second consecutive year to Forbes magazine's annual list of the "400 Best Big Companies in America". In 2007, Book Business magazine cited Wiley as "One of the 20 Best Book Publishing Companies to Work For". For two consecutive years, 2006 and 2005, Fortune magazine named Wiley one of the "100 Best Companies to Work For". Wiley Canada was named to Canadian Business magazine's 2006 list of "Best Workplaces in Canada", and Wiley Australia has received the Australian government's "Employer of Choice for Women" citation every year since its inception in 2001. In 2004, Wiley was named to the U.S. Environmental Protection Agency's "Best Workplaces for Commuters" list. Working Mother magazine in 2003 listed Wiley as one of the "100 Best Companies for Working Mothers", and that same year, the company received the Enterprise Award from the New Jersey Business & Industry Association in recognition of its contribution to the state's economic growth. In 1998, Financial Times selected Wiley as one of the "most respected companies" with a "strong and well thought out strategy" in its global survey of CEOs.
In August 2009, the company announced a proposed reduction of Wiley-Blackwell staff in content management operations in the UK and Australia by approximately 60, in conjunction with an increase of staff in Asia. In March 2010, it announced a similar reorganization of its Wiley-Blackwell central marketing operations that would lay off approximately 40 employees. The company's position was that the primary goal of this restructuring was to increase workflow efficiency. In June 2012, it announced the proposed closing of its Edinburgh facility in June 2013 with the intention of relocating journal content management activities currently performed there to Oxford and Asia. The move would lay off approximately 50 employees.
Wiley is a signatory of the SDG Publishers Compact, and has taken steps to support the achievement of the Sustainable Development Goals (SDGs) in the publishing industry. These include becoming carbon neutral and supporting reforestation.
Wiley's Natural Resources Forum was one of six out of 100 journals to receive the highest possible "Five Wheel" impact rating from an SDG Impact Intensity journal rating system analyzing data from 2016 to 2020.
=== Gender pay gap ===
Wiley reported a mean 2017 gender pay gap of 21.1% for its UK workforce, while the median was 21.5%. The gender bonus gaps are far higher, at 50.7% for the median measure and 42.3% for the mean. Wiley said: "Our mean and median bonus gaps are driven by our highest earners, who are predominantly male."
== Controversies ==
=== Forced inclusion of authors' content in AI LLMs ===
In August 2024, it was reported that Wiley was projected to earn $44 million (£33 million) from partnerships with Artificial Intelligence (AI) firms that utilize authors' content to train Large Language Models (LLMs). Authors are not provided with an opt-out option for these deals.
=== Journal protests ===
In 2020, the entire editorial board of the European Law Journal resigned over a dispute about contract terms and the behavior of its publisher, Wiley, which did not allow the editorial board to decide on editorial appointments and decisions.
A majority of the editorial board of the journal Diversity & Distributions resigned in 2018 after Wiley allegedly blocked the publication of a letter protesting the publisher's decision to make the journal entirely open access.
=== Publication practices ===
According to Retraction Watch, Wiley has made some articles disappear from its journals without any explanation.
=== Manipulation of bibliometrics ===
According to Goodhart's law and concerned academics like the signatories of the San Francisco Declaration on Research Assessment, commercial academic publishers benefit from manipulation of bibliometrics and scientometrics like the journal impact factor, which is often used as proxy of prestige and can influence revenues, including public subsidies in the form of subscriptions and free work from academics.
Five Wiley journals, which exhibited unusual levels of self-citation, had their journal impact factor of 2019 suspended from Journal Citation Reports in 2020, a sanction which hit 34 journals in total.
=== Publication of "Paper Mill" generated papers ===
In April 2022, the journal Science revealed that a Ukrainian company, International Publisher Ltd., run by Ksenia Badziun, operates a Russian website where academics can purchase authorships in soon-to-be-published academic papers. Over a two-year period, researchers found that at least 419 articles "appeared to match manuscripts that later appeared in dozens of different journals" and that "more than 100 of these identified papers were published in 68 journals run by established publishers, including Elsevier, Oxford University Press, Springer Nature, Taylor & Francis, Wolters Kluwer, and Wiley-Blackwell." Wiley-Blackwell claimed that they were examining the specific papers that were identified and brought to their attention.
In 2024, Wiley closed down 19 of the about 250 journals it had acquired in the Hindawi deal, after retracting "more than 11,300 'compromised' studies over the past two years"; Wiley had earlier shuttered four journals for publishing fake articles coming from paper mills.
=== COI between climate research and fossil fuel industry ===
Wiley is a publisher of climate change research, but also publishes a journal dedicated to fossil fuel exploration. Climate scientists are concerned that this conflict of interest could undermine the credibility of climate science because they believe that fossil fuel extraction and climate action are incompatible.
== Copyright cases ==
=== Hindawi case ===
In 2021, Wiley purchased the open access publisher Hindawi. Shortly afterward, many articles published by Hindawi were retracted, and Scopus removed them from its database.
=== Photographer copyrights ===
A 2013 lawsuit brought by a stock photo agency for alleged violation of a 1997 license was dismissed for procedural reasons.
A 2014 ruling by the District Court for the Southern District of New York, later affirmed by the Second Circuit, says that Wiley infringed on the copyright of photographer Tom Bean by using his photos beyond the scope of the license it had purchased. The case was connected to a larger set of copyright infringement cases brought by photo agency DRK against various publishers.
A 2015 9th Circuit Court of Appeals opinion established that another photo agency had standing to sue Wiley for its usage of photos beyond the scope of the license acquired.
=== Used books ===
In 2018, a Southern District of New York court upheld an award of over $39 million to Wiley and other textbook publishers in extensive litigation against Book Dog Books, a reseller of used books that was found to have held and distributed counterfeit copies. The court found that circumstantial evidence was sufficient to establish distribution of the 116 titles for which counterfeit copies had been presented, and of 5 other titles. It also found that unchallenged testimony on how the publishers usually acquired licenses from authors was sufficient to establish the publishers' copyright in the books in question.
=== Kirtsaeng v. John Wiley & Sons ===
In 2008, John Wiley & Sons filed suit against Thailand native Supap Kirtsaeng over the sale of textbooks made outside of the United States and then imported into the country. In 2013, the U.S. Supreme Court held 6–3 that the first-sale doctrine applied to copies of copyrighted works made and sold abroad at lower prices, reversing the Second Circuit decision which had favored Wiley.
=== Internet Archive lawsuit ===
In June 2020, Wiley was one of a group of publishers who sued the Internet Archive, arguing that its collection of e-books was denying authors and publishers revenue and accusing the library of "willful mass copyright infringement".
== Antitrust cases ==
In September 2024, Lucina Uddin, a neuroscience professor at UCLA, sued John Wiley & Sons along with five other academic journal publishers in a proposed class-action lawsuit, alleging that the publishers violated antitrust law by agreeing not to compete against each other for manuscripts and by denying scholars payment for peer review services.
== References ==
== Further reading ==
The First One Hundred and Fifty Years: A History of John Wiley and Sons Incorporated 1807–1957. New York: John Wiley & Sons. 1957.
Moore, John Hammond (1982). Wiley: One Hundred and Seventy Five Years of Publishing. New York: John Wiley & Sons. ISBN 978-0-471-86082-2.
Munroe, Mary H. (2004). "John Wiley Timeline". The Academic Publishing Industry: A Story of Merger and Acquisition. Archived from the original on October 20, 2014 – via Northern Illinois University.
Wiley, Peter Booth; Chaves, Frances; Grolier Club (2010). John Wiley & Sons: 200 years of publishing (PDF). Hoboken, NJ: John Wiley & Sons.
Wright, Robert E.; Jacobson, Timothy C.; Smith, George David (2007). Knowledge for Generations: Wiley and the Global Publishing Industry, 1807–2007. Hoboken, New Jersey: John Wiley & Sons. ISBN 978-0-471-75721-4.
== External links ==
Official website | Wikipedia/Wiley_Interscience |
In tensor analysis, a mixed tensor is a tensor which is neither strictly covariant nor strictly contravariant; at least one of the indices of a mixed tensor will be a subscript (covariant) and at least one of the indices will be a superscript (contravariant).
A mixed tensor of type or valence $\textstyle{\binom{M}{N}}$, also written "type (M, N)", with both M > 0 and N > 0, is a tensor which has M contravariant indices and N covariant indices. Such a tensor can be defined as a linear function which maps an (M + N)-tuple of M one-forms and N vectors to a scalar.
== Changing the tensor type ==
Consider the following octet of related tensors:
$T_{\alpha \beta \gamma },\ T_{\alpha \beta }{}^{\gamma },\ T_{\alpha }{}^{\beta }{}_{\gamma },\ T_{\alpha }{}^{\beta \gamma },\ T^{\alpha }{}_{\beta \gamma },\ T^{\alpha }{}_{\beta }{}^{\gamma },\ T^{\alpha \beta }{}_{\gamma },\ T^{\alpha \beta \gamma }.$
The first one is covariant, the last one contravariant, and the remaining ones mixed. Notationally, these tensors differ from each other by the covariance/contravariance of their indices. A given contravariant index of a tensor can be lowered using the metric tensor gμν, and a given covariant index can be raised using the inverse metric tensor gμν. Thus, gμν could be called the index lowering operator and gμν the index raising operator.
Generally, the covariant metric tensor, contracted with a tensor of type (M, N), yields a tensor of type (M − 1, N + 1), whereas its contravariant inverse, contracted with a tensor of type (M, N), yields a tensor of type (M + 1, N − 1).
=== Examples ===
As an example, a mixed tensor of type (1, 2) can be obtained by raising an index of a covariant tensor of type (0, 3),
$T_{\alpha \beta }{}^{\lambda }=T_{\alpha \beta \gamma }\,g^{\gamma \lambda },$
where $T_{\alpha \beta }{}^{\lambda }$ is the same tensor as $T_{\alpha \beta }{}^{\gamma }$, because
$T_{\alpha \beta }{}^{\lambda }\,\delta _{\lambda }{}^{\gamma }=T_{\alpha \beta }{}^{\gamma },$
with Kronecker δ acting here like an identity matrix.
Likewise,
$T_{\alpha }{}^{\lambda }{}_{\gamma }=T_{\alpha \beta \gamma }\,g^{\beta \lambda },$
$T_{\alpha }{}^{\lambda \epsilon }=T_{\alpha \beta \gamma }\,g^{\beta \lambda }\,g^{\gamma \epsilon },$
$T^{\alpha \beta }{}_{\gamma }=g_{\gamma \lambda }\,T^{\alpha \beta \lambda },$
$T^{\alpha }{}_{\lambda \epsilon }=g_{\lambda \beta }\,g_{\epsilon \gamma }\,T^{\alpha \beta \gamma }.$
Raising an index of the metric tensor is equivalent to contracting it with its inverse, yielding the Kronecker delta,
$g^{\mu \lambda }\,g_{\lambda \nu }=g^{\mu }{}_{\nu }=\delta ^{\mu }{}_{\nu },$
so any mixed version of the metric tensor will be equal to the Kronecker delta, which will also be mixed.
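The raising and lowering operations above are easy to check numerically. The following sketch (an illustration using assumed random data, not drawn from the article's sources) uses NumPy's einsum with a symmetric, invertible metric to raise an index of a type (0, 3) tensor, confirm that lowering it again recovers the original components, and verify that the mixed metric equals the Kronecker delta:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary symmetric, invertible metric g_{mu nu} and its inverse g^{mu nu}.
A = rng.standard_normal((4, 4))
g_lo = A @ A.T + 4 * np.eye(4)   # symmetric positive definite, hence invertible
g_up = np.linalg.inv(g_lo)

# A random covariant tensor T_{alpha beta gamma} of type (0, 3).
T_lll = rng.standard_normal((4, 4, 4))

# Raise the last index: T_{alpha beta}^{lambda} = T_{alpha beta gamma} g^{gamma lambda}.
T_llu = np.einsum('abg,gl->abl', T_lll, g_up)

# Lower it again: contracting with g_{lambda nu} must give back T_{alpha beta nu}.
T_back = np.einsum('abl,ln->abn', T_llu, g_lo)
assert np.allclose(T_back, T_lll)

# Any mixed version of the metric equals the Kronecker delta.
assert np.allclose(np.einsum('ml,ln->mn', g_up, g_lo), np.eye(4))
print("index raising/lowering round-trip verified")
```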
== See also ==
Covariance and contravariance of vectors
Einstein notation
Ricci calculus
Tensor (intrinsic definition)
Two-point tensor
== References ==
D.C. Kay (1988). Tensor Calculus. Schaum’s Outlines, McGraw Hill (USA). ISBN 0-07-033484-6.
Wheeler, J.A.; Misner, C.; Thorne, K.S. (1973). "§3.5 Working with Tensors". Gravitation. W.H. Freeman & Co. pp. 85–86. ISBN 0-7167-0344-0.
R. Penrose (2007). The Road to Reality. Vintage books. ISBN 978-0-679-77631-4.
== External links ==
Index Gymnastics, Wolfram Alpha | Wikipedia/Type_of_a_tensor |
The elasticity tensor is a fourth-rank tensor describing the stress-strain relation in a linear elastic material. Other names are elastic modulus tensor and stiffness tensor. Common symbols include $\mathbf{C}$ and $\mathbf{Y}$.
The defining equation can be written as
$T^{ij}=C^{ijkl}E_{kl}$
where $T^{ij}$ and $E_{kl}$ are the components of the Cauchy stress tensor and infinitesimal strain tensor, and $C^{ijkl}$ are the components of the elasticity tensor. Summation over repeated indices is implied. This relationship can be interpreted as a generalization of Hooke's law to a 3D continuum.
A general fourth-rank tensor $\mathbf{F}$ in 3D has $3^{4}=81$ independent components $F_{ijkl}$, but the elasticity tensor has at most 21 independent components. This fact follows from the symmetry of the stress and strain tensors, together with the requirement that the stress derives from an elastic energy potential. For isotropic materials, the elasticity tensor has just two independent components, which can be chosen to be the bulk modulus and shear modulus.
== Definition ==
The most general linear relation between two second-rank tensors $\mathbf{T}$, $\mathbf{E}$ is
$T^{ij}=C^{ijkl}E_{kl}$
where $C^{ijkl}$ are the components of a fourth-rank tensor $\mathbf{C}$. The elasticity tensor is defined as $\mathbf{C}$ for the case where $\mathbf{T}$ and $\mathbf{E}$ are the stress and strain tensors, respectively.
The compliance tensor $\mathbf{K}$ is defined from the inverse stress-strain relation:
$E^{ij}=K^{ijkl}T_{kl}$
The two are related by
$K_{ijpq}C^{pqkl}={\frac {1}{2}}\left(\delta _{i}^{k}\delta _{j}^{l}+\delta _{i}^{l}\delta _{j}^{k}\right)$
where $\delta _{n}^{m}$ is the Kronecker delta.
Unless otherwise noted, this article assumes $\mathbf{C}$ is defined from the stress-strain relation of a linear elastic material, in the limit of small strain.
== Special cases ==
=== Isotropic ===
For an isotropic material, $\mathbf{C}$ simplifies to
$C^{ijkl}=\lambda \!\left(X\right)g^{ij}g^{kl}+\mu \!\left(X\right)\left(g^{ik}g^{jl}+g^{il}g^{kj}\right)$
where $\lambda$ and $\mu$ are scalar functions of the material coordinates $X$, and $\mathbf{g}$ is the metric tensor in the reference frame of the material. In an orthonormal Cartesian coordinate basis, there is no distinction between upper and lower indices, and the metric tensor can be replaced with the Kronecker delta:
$C_{ijkl}=\lambda \!\left(X\right)\delta _{ij}\delta _{kl}+\mu \!\left(X\right)\left(\delta _{ik}\delta _{jl}+\delta _{il}\delta _{kj}\right)\quad {\text{[Cartesian coordinates]}}$
Substituting the first equation into the stress-strain relation and summing over repeated indices gives
$T^{ij}=\lambda \!\left(X\right)\cdot \left(\mathrm {Tr} \,\mathbf {E} \right)g^{ij}+2\mu \!\left(X\right)E^{ij}$
where $\mathrm {Tr} \,\mathbf {E} \equiv E^{i}{}_{i}$ is the trace of $\mathbf{E}$.
In this form, $\lambda$ and $\mu$ can be identified with the first and second Lamé parameters, respectively.
An equivalent expression is
$T^{ij}=K\!\left(X\right)\cdot \left(\mathrm {Tr} \,\mathbf {E} \right)g^{ij}+2\mu \!\left(X\right)\Sigma ^{ij}$
where $K=\lambda +(2/3)\mu$ is the bulk modulus, and
$\Sigma ^{ij}\equiv E^{ij}-(1/3)\left(\mathrm {Tr} \,\mathbf {E} \right)g^{ij}$
are the components of the shear tensor $\mathbf {\Sigma }$.
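As a numerical illustration of the isotropic case (a minimal sketch with assumed Lamé constants, not a prescribed implementation), the Cartesian form of $C_{ijkl}$ can be assembled with NumPy and checked against the closed-form stress-strain relation and the bulk-modulus identity:

```python
import numpy as np

lam, mu = 1.2, 0.8            # assumed Lamé parameters (arbitrary values)
d = np.eye(3)                 # Kronecker delta in Cartesian coordinates

# C_ijkl = lam d_ij d_kl + mu (d_ik d_jl + d_il d_kj)
C = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,kj->ijkl', d, d)))

# A random symmetric strain tensor E_kl.
rng = np.random.default_rng(1)
E = rng.standard_normal((3, 3))
E = 0.5 * (E + E.T)

# Stress via the elasticity tensor versus the closed-form isotropic relation.
T_full = np.einsum('ijkl,kl->ij', C, E)
T_iso = lam * np.trace(E) * d + 2 * mu * E
assert np.allclose(T_full, T_iso)

# Bulk modulus K = lam + (2/3) mu relates the mean stress to Tr E.
K = lam + 2 * mu / 3
assert np.isclose(np.trace(T_full) / 3, K * np.trace(E))
print("isotropic Hooke's law verified")
```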
=== Cubic crystals ===
The elasticity tensor of a cubic crystal has components
$C^{ijkl}=\lambda g^{ij}g^{kl}+\mu \left(g^{ik}g^{jl}+g^{il}g^{kj}\right)+\alpha \left(a^{i}a^{j}a^{k}a^{l}+b^{i}b^{j}b^{k}b^{l}+c^{i}c^{j}c^{k}c^{l}\right)$
where $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{c}$ are unit vectors corresponding to the three mutually perpendicular axes of the crystal unit cell. The coefficients $\lambda$, $\mu$, and $\alpha$ are scalars; because they are coordinate-independent, they are intrinsic material constants. Thus, a crystal with cubic symmetry is described by three independent elastic constants.
In an orthonormal Cartesian coordinate basis, there is no distinction between upper and lower indices, and $g^{ij}$ is the Kronecker delta, so the expression simplifies to
$C_{ijkl}=\lambda \delta _{ij}\delta _{kl}+\mu \left(\delta _{ik}\delta _{jl}+\delta _{il}\delta _{kj}\right)+\alpha \left(a_{i}a_{j}a_{k}a_{l}+b_{i}b_{j}b_{k}b_{l}+c_{i}c_{j}c_{k}c_{l}\right)$
=== Other crystal classes ===
There are similar expressions for the components of $\mathbf{C}$ in other crystal symmetry classes. The number of independent elastic constants for several of these is given in table 1.
== Properties ==
=== Symmetries ===
The elasticity tensor has several symmetries that follow directly from its defining equation $T^{ij}=C^{ijkl}E_{kl}$. The symmetry of the stress and strain tensors implies that
$C_{ijkl}=C_{jikl}\qquad {\text{and}}\qquad C_{ijkl}=C_{ijlk}.$
Usually, one also assumes that the stress derives from an elastic energy potential $U$:
$T^{ij}={\frac {\partial U}{\partial E_{ij}}}$
which implies
$C_{ijkl}={\frac {\partial ^{2}U}{\partial E_{ij}\partial E_{kl}}}$
Hence, $\mathbf{C}$ must be symmetric under interchange of the first and second pairs of indices:
$C_{ijkl}=C_{klij}$
The symmetries listed above reduce the number of independent components from 81 to 21. If a material has additional symmetries, then this number is further reduced.
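The count of 21 can be reproduced by brute force: group the $3^4 = 81$ index quadruples into equivalence classes under the minor symmetries $C_{ijkl}=C_{jikl}=C_{ijlk}$ and the major symmetry $C_{ijkl}=C_{klij}$, then count the classes. A short illustrative sketch:

```python
from itertools import product

def canonical(i, j, k, l):
    # Minor symmetries: the order within each index pair is irrelevant.
    pair1, pair2 = tuple(sorted((i, j))), tuple(sorted((k, l)))
    # Major symmetry: the order of the two pairs is irrelevant.
    return min((pair1, pair2), (pair2, pair1))

classes = {canonical(i, j, k, l) for i, j, k, l in product(range(3), repeat=4)}
print(len(classes))  # 21
```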
=== Transformations ===
Under rotation, the components $C^{ijkl}$ transform as
$C'_{ijkl}=R_{ip}R_{jq}R_{kr}R_{ls}C^{pqrs}$
where $C'_{ijkl}$ are the covariant components in the rotated basis, and $R_{ij}$ are the elements of the corresponding rotation matrix. A similar transformation rule holds for other linear transformations.
=== Invariants ===
The components of $\mathbf{C}$ generally acquire different values under a change of basis. Nevertheless, for certain types of transformations, there are specific combinations of components, called invariants, that remain unchanged. Invariants are defined with respect to a given set of transformations, formally known as a group operation. For example, an invariant with respect to the group of proper orthogonal transformations, called SO(3), is a quantity that remains constant under arbitrary 3D rotations. $\mathbf{C}$ possesses two linear invariants and seven quadratic invariants with respect to SO(3). The linear invariants are
$L_{1}=C^{ij}{}_{ij},\qquad L_{2}=C^{ii}{}_{jj},$
and the quadratic invariants are
$\left\{L_{1}^{2},\,L_{2}^{2},\,L_{1}L_{2},\,C_{ijkl}C^{ijkl},\,C_{iikl}C^{jjkl},\,C_{iikl}C^{jkjl},\,C_{kiil}C^{kjjl}\right\}$
These quantities are linearly independent, that is, none can be expressed as a linear combination of the others.
They are also complete, in the sense that there are no additional independent linear or quadratic invariants.
== Decompositions ==
A common strategy in tensor analysis is to decompose a tensor into simpler components that can be analyzed separately. For example, the displacement gradient tensor $\mathbf {W} =\mathbf {\nabla } \mathbf {\xi }$ can be decomposed as
$\mathbf {W} ={\frac {1}{3}}\Theta \mathbf {g} +\mathbf {\Sigma } +\mathbf {R}$
where $\Theta$ is a rank-0 tensor (a scalar), equal to the trace of $\mathbf{W}$; $\mathbf{\Sigma}$ is symmetric and trace-free; and $\mathbf{R}$ is antisymmetric. Component-wise,
$\Sigma ^{ij}\equiv W^{(ij)}={\frac {1}{2}}\left(W^{ij}+W^{ji}\right)-{\frac {1}{3}}\left(\mathrm {Tr} \,\mathbf {W} \right)g^{ij},\qquad R^{ij}\equiv W^{[ij]}={\frac {1}{2}}\left(W^{ij}-W^{ji}\right)$
Here and later, symmetrization and antisymmetrization are denoted by $(ij)$ and $[ij]$, respectively. This decomposition is irreducible, in the sense of being invariant under rotations, and is an important tool in the conceptual development of continuum mechanics.
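This rank-2 decomposition is straightforward to verify numerically. The sketch below (illustrative only; Cartesian coordinates are assumed so that $g_{ij}=\delta_{ij}$) splits a random displacement gradient into its trace, symmetric trace-free, and antisymmetric parts and checks that they sum back to the original:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((3, 3))   # a generic displacement gradient

theta = np.trace(W)                                 # rank-0 (scalar) part
sigma = 0.5 * (W + W.T) - (theta / 3) * np.eye(3)   # symmetric, trace-free
R = 0.5 * (W - W.T)                                 # antisymmetric

assert np.isclose(np.trace(sigma), 0)
assert np.allclose(R, -R.T)
assert np.allclose(W, (theta / 3) * np.eye(3) + sigma + R)
print("W = (1/3) Theta g + Sigma + R verified")
```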
The elasticity tensor has rank 4, and its decompositions are more complex and varied than those of a rank-2 tensor. A few examples are described below.
=== M and N tensors ===
This decomposition is obtained by symmetrization and antisymmetrization of the middle two indices:
$C^{ijkl}=M^{ijkl}+N^{ijkl}$
where
$M^{ijkl}\equiv C^{i(jk)l}={\frac {1}{2}}\left(C^{ijkl}+C^{ikjl}\right),\qquad N^{ijkl}\equiv C^{i[jk]l}={\frac {1}{2}}\left(C^{ijkl}-C^{ikjl}\right)$
A disadvantage of this decomposition is that $M^{ijkl}$ and $N^{ijkl}$ do not obey all the original symmetries of $C^{ijkl}$, as they are not symmetric under interchange of the first two indices. In addition, it is not irreducible, so it is not invariant under linear transformations such as rotations.
=== Irreducible representations ===
An irreducible representation can be built by considering the notion of a totally symmetric tensor, which is invariant under the interchange of any two indices. A totally symmetric tensor $\mathbf{S}$ can be constructed from $\mathbf{C}$ by summing over all $4!=24$ permutations of the indices:
$S^{ijkl}={\frac {1}{4!}}\sum _{(i,j,k,l)\in \mathbb {S} _{4}}C^{ijkl}={\frac {1}{4!}}\left(C^{ijkl}+C^{jikl}+C^{ikjl}+\ldots \right)$
where $\mathbb {S} _{4}$ is the set of all permutations of the four indices. Owing to the symmetries of $C^{ijkl}$, this sum reduces to
$S^{ijkl}={\frac {1}{3}}\left(C^{ijkl}+C^{iklj}+C^{iljk}\right)$
The difference
$A^{ijkl}\equiv C^{ijkl}-S^{ijkl}={\frac {1}{3}}\left(2C^{ijkl}-C^{ilkj}-C^{iklj}\right)$
is an asymmetric tensor (not antisymmetric). The decomposition
$C^{ijkl}=S^{ijkl}+A^{ijkl}$
can be shown to be unique and irreducible with respect to $\mathbb {S} _{4}$. In other words, any additional symmetrization operations on $\mathbf{S}$ or $\mathbf{A}$ will either leave it unchanged or evaluate to zero. It is also irreducible with respect to arbitrary linear transformations, that is, the general linear group $GL(3,\mathbb {R} )$.
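Under the symmetries stated earlier, the totally symmetric part and its complement can be computed directly from the component formulas above. The following sketch (illustrative; a random tensor stands in for a physical $\mathbf{C}$) builds a tensor with elasticity-tensor symmetries and checks both the reduced three-term formula for $S$ and the decomposition $C = S + A$:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(3)

# Build a random tensor with elasticity-tensor symmetries:
# C_ijkl = C_jikl = C_ijlk = C_klij.
X = rng.standard_normal((3, 3, 3, 3))
C = X + X.transpose(1, 0, 2, 3)
C = C + C.transpose(0, 1, 3, 2)
C = C + C.transpose(2, 3, 0, 1)

# Totally symmetric part: average over all 24 index permutations.
S = sum(C.transpose(p) for p in permutations(range(4))) / 24

# Thanks to the symmetries of C, this equals (C^{ijkl} + C^{iklj} + C^{iljk}) / 3.
S_reduced = (C + C.transpose(0, 2, 3, 1) + C.transpose(0, 3, 1, 2)) / 3
assert np.allclose(S, S_reduced)

A = C - S   # asymmetric (but not antisymmetric) remainder
assert np.allclose(C, S + A)
print("C = S + A decomposition verified")
```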
However, this decomposition is not irreducible with respect to the group of rotations SO(3). Instead, $\mathbf{S}$ decomposes into three irreducible parts, and $\mathbf{A}$ into two:
$C^{ijkl}=S^{ijkl}+A^{ijkl}=\left(^{(1)}\!S^{ijkl}+\,^{(2)}\!S^{ijkl}+\,^{(3)}\!S^{ijkl}\right)+\left(^{(1)}\!A^{ijkl}+\,^{(2)}\!A^{ijkl}\right)$
See Itin (2020) for explicit expressions in terms of the components of $\mathbf{C}$.
This representation decomposes the space of elasticity tensors into a direct sum of subspaces:
${\mathcal {C}}=\left(^{(1)}\!{\mathcal {C}}\oplus \,^{(2)}\!{\mathcal {C}}\oplus \,^{(3)}\!{\mathcal {C}}\right)\oplus \left(^{(4)}\!{\mathcal {C}}\oplus \,^{(5)}\!{\mathcal {C}}\right)$
with dimensions
$21=(1\oplus 5\oplus 9)\oplus (1\oplus 5)$
These subspaces are each isomorphic to a harmonic tensor space $\mathbb {H} _{n}(\mathbb {R} ^{3})$, the space of 3D, totally symmetric, traceless tensors of rank $n$. In particular, $^{(1)}\!{\mathcal {C}}$ and $^{(4)}\!{\mathcal {C}}$ correspond to $\mathbb {H} _{0}$, $^{(2)}\!{\mathcal {C}}$ and $^{(5)}\!{\mathcal {C}}$ correspond to $\mathbb {H} _{2}$, and $^{(3)}\!{\mathcal {C}}$ corresponds to $\mathbb {H} _{4}$.
== See also ==
Continuum mechanics
Solid mechanics
Constitutive equation
Strength of materials
List of materials properties § Mechanical properties
Representation theory of finite groups
Voigt notation
== Footnotes ==
== References ==
== Bibliography == | Wikipedia/Elasticity_tensor |
In multilinear algebra, a tensor decomposition is any scheme for expressing a "data tensor" (M-way array) as a sequence of elementary operations acting on other, often simpler tensors. Many tensor decompositions generalize some matrix decompositions.
Tensors are generalizations of matrices to higher dimensions (or rather to higher orders, i.e. the higher number of dimensions) and can consequently be treated as multidimensional fields.
The main tensor decompositions are:
Tensor rank decomposition;
Higher-order singular value decomposition;
Tucker decomposition;
matrix product states and operators (also known as tensor trains);
online tensor decompositions;
hierarchical Tucker decomposition;
block term decomposition
== Notation ==
This section introduces basic notations and operations that are widely used in the field.
== Introduction ==
A multi-way graph with K perspectives is a collection of K matrices $X_{1},X_{2},\ldots ,X_{K}$ with dimensions I × J (where I, J are the number of nodes). This collection of matrices is naturally represented as a tensor X of size I × J × K. In order to avoid overloading the term "dimension", we call an I × J × K tensor a three-"mode" tensor, where "modes" are the numbers of indices used to index the tensor.
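In code, stacking the K matrices along a third axis produces exactly this three-mode tensor. A minimal sketch (sizes and names are illustrative assumptions):

```python
import numpy as np

I, J, K = 4, 5, 3   # nodes x nodes x perspectives (illustrative sizes)
rng = np.random.default_rng(4)

# K matrices X_1, ..., X_K, each of size I x J.
matrices = [rng.integers(0, 2, size=(I, J)) for _ in range(K)]

# Stack them along a new third mode to obtain the I x J x K tensor X.
X = np.stack(matrices, axis=2)
print(X.shape)   # (4, 5, 3)
print(X.ndim)    # 3 modes
assert np.array_equal(X[:, :, 1], matrices[1])  # mode-3 slices recover X_2
```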
== References == | Wikipedia/Tensor_decomposition |
In mathematics, there are usually many different ways to construct a topological tensor product of two topological vector spaces. For Hilbert spaces or nuclear spaces there is a simple well-behaved theory of tensor products (see Tensor product of Hilbert spaces), but for general Banach spaces or locally convex topological vector spaces the theory is notoriously subtle.
== Motivation ==
One of the original motivations for topological tensor products ${\hat {\otimes }}$ is the fact that tensor products of the spaces of smooth real-valued functions on $\mathbb {R} ^{n}$ do not behave as expected. There is an injection
$C^{\infty }(\mathbb {R} ^{n})\otimes C^{\infty }(\mathbb {R} ^{m})\hookrightarrow C^{\infty }(\mathbb {R} ^{n+m})$
but this is not an isomorphism. For example, the function $f(x,y)=e^{xy}$ cannot be expressed as a finite linear combination of smooth functions in $C^{\infty }(\mathbb {R} _{x})\otimes C^{\infty }(\mathbb {R} _{y})$.
We only get an isomorphism after constructing the topological tensor product; i.e.,
$C^{\infty }(\mathbb {R} ^{n})\,{\hat {\otimes }}\,C^{\infty }(\mathbb {R} ^{m})\cong C^{\infty }(\mathbb {R} ^{n+m}).$
This article first details the construction in the Banach space case. The space $C^{\infty }(\mathbb {R} ^{n})$ is not a Banach space and further cases are discussed at the end.
== Tensor products of Hilbert spaces ==
The algebraic tensor product of two Hilbert spaces A and B has a natural positive definite sesquilinear form (scalar product) induced by the sesquilinear forms of A and B. So in particular it has a natural positive definite quadratic form, and the corresponding completion is a Hilbert space A ⊗ B, called the (Hilbert space) tensor product of A and B.
If the vectors ai and bj run through orthonormal bases of A and B, then the vectors ai⊗bj form an orthonormal basis of A ⊗ B.
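For finite-dimensional spaces the construction is concrete: the tensor product of coordinate vectors is the Kronecker product, and products of orthonormal basis vectors are again orthonormal. A small sketch (a finite-dimensional, real-valued stand-in for the general statement):

```python
import numpy as np

# Orthonormal bases of A = R^2 and B = R^3 (columns of orthogonal matrices from QR).
A_basis = np.linalg.qr(np.random.default_rng(5).standard_normal((2, 2)))[0]
B_basis = np.linalg.qr(np.random.default_rng(6).standard_normal((3, 3)))[0]

# The vectors a_i (x) b_j, realized as Kronecker products in R^6.
prod_basis = np.array([np.kron(A_basis[:, i], B_basis[:, j])
                       for i in range(2) for j in range(3)])

# The Gram matrix is the identity: the a_i (x) b_j form an orthonormal basis.
assert np.allclose(prod_basis @ prod_basis.T, np.eye(6))
print("a_i (x) b_j is an orthonormal basis of A (x) B")
```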
== Cross norms and tensor products of Banach spaces ==
We shall use the notation from (Ryan 2002) in this section. The obvious way to define the tensor product of two Banach spaces $A$ and $B$ is to copy the method for Hilbert spaces: define a norm on the algebraic tensor product, then take the completion in this norm. The problem is that there is more than one natural way to define a norm on the tensor product.
If $A$ and $B$ are Banach spaces, the algebraic tensor product of $A$ and $B$ means the tensor product of $A$ and $B$ as vector spaces and is denoted by $A\otimes B$.
The algebraic tensor product $A\otimes B$ consists of all finite sums
$x=\sum _{i=1}^{n}a_{i}\otimes b_{i},$
where $n$ is a natural number depending on $x$, and $a_{i}\in A$ and $b_{i}\in B$ for $i=1,\ldots ,n$.
When $A$ and $B$ are Banach spaces, a crossnorm (or cross norm) $p$ on the algebraic tensor product $A\otimes B$ is a norm satisfying the conditions
$p(a\otimes b)=\|a\|\|b\|,\qquad p'(a'\otimes b')=\|a'\|\|b'\|.$
Here $a'$ and $b'$ are elements of the topological dual spaces of $A$ and $B$, respectively, and $p'$ is the dual norm of $p$. The term reasonable crossnorm is also used for the definition above.
There is a cross norm $\pi$ called the projective cross norm, given by
$\pi (x)=\inf \left\{\sum _{i=1}^{n}\|a_{i}\|\|b_{i}\|:x=\sum _{i=1}^{n}a_{i}\otimes b_{i}\right\},$
where $x\in A\otimes B$. It turns out that the projective cross norm agrees with the largest cross norm (Ryan 2002, pp. 15–16).
There is a cross norm $\varepsilon$ called the injective cross norm, given by
$\varepsilon (x)=\sup \left\{\left|(a'\otimes b')(x)\right|:a'\in A',b'\in B',\|a'\|=\|b'\|=1\right\},$
where $x\in A\otimes B$. Here $A'$ and $B'$ denote the topological duals of $A$ and $B$, respectively. Note that the injective cross norm is only in some reasonable sense the "smallest".
The completions of the algebraic tensor product in these two norms are called the projective and injective tensor products, and are denoted by $A{\hat {\otimes }}_{\pi }B$ and $A{\hat {\otimes }}_{\varepsilon }B$. When $A$ and $B$ are Hilbert spaces, the norm used for their Hilbert space tensor product is not equal to either of these norms in general. Some authors denote it by $\sigma$, so the Hilbert space tensor product in the section above would be $A{\hat {\otimes }}_{\sigma }B$.
A uniform crossnorm $\alpha$ is an assignment to each pair $(X,Y)$ of Banach spaces of a reasonable crossnorm on $X\otimes Y$ so that if $X,W,Y,Z$ are arbitrary Banach spaces, then for all (continuous linear) operators $S:X\to W$ and $T:Y\to Z$, the operator $S\otimes T:X\otimes _{\alpha }Y\to W\otimes _{\alpha }Z$ is continuous and $\|S\otimes T\|\leq \|S\|\|T\|$.
If $A$ and $B$ are two Banach spaces and $\alpha$ is a uniform cross norm, then $\alpha$ defines a reasonable cross norm on the algebraic tensor product $A\otimes B$. The normed linear space obtained by equipping $A\otimes B$ with that norm is denoted by $A\otimes _{\alpha }B$. The completion of $A\otimes _{\alpha }B$, which is a Banach space, is denoted by $A{\hat {\otimes }}_{\alpha }B$. The value of the norm given by $\alpha$ on $A\otimes B$ and on the completed tensor product $A{\hat {\otimes }}_{\alpha }B$ for an element $x$ in $A{\hat {\otimes }}_{\alpha }B$ (or $A\otimes _{\alpha }B$) is denoted by $\alpha _{A,B}(x)$ or $\alpha (x)$.
A uniform crossnorm $\alpha$ is said to be finitely generated if, for every pair $(X,Y)$ of Banach spaces and every $u\in X\otimes Y$,
$\alpha (u;X\otimes Y)=\inf\{\alpha (u;M\otimes N):\dim M,\dim N<\infty \}.$
A uniform crossnorm $\alpha$ is cofinitely generated if, for every pair $(X,Y)$ of Banach spaces and every $u\in X\otimes Y$,
$\alpha (u)=\sup\{\alpha ((Q_{E}\otimes Q_{F})u;(X/E)\otimes (Y/F)):\dim X/E,\dim Y/F<\infty \}.$
A tensor norm is defined to be a finitely generated uniform crossnorm. The projective cross norm $\pi$ and the injective cross norm $\varepsilon$ defined above are tensor norms, called the projective tensor norm and the injective tensor norm, respectively.
If $A$ and $B$ are arbitrary Banach spaces and $\alpha$ is an arbitrary uniform cross norm, then
$\varepsilon _{A,B}(x)\leq \alpha _{A,B}(x)\leq \pi _{A,B}(x).$
== Tensor products of locally convex topological vector spaces ==
The topologies of locally convex topological vector spaces $A$ and $B$ are given by families of seminorms. For each choice of seminorm on $A$ and on $B$, we can define the corresponding family of cross norms on the algebraic tensor product $A\otimes B$, and by choosing one cross norm from each family we get some cross norms on $A\otimes B$, defining a topology. There are in general an enormous number of ways to do this. The two most important ways are to take all the projective cross norms, or all the injective cross norms. The completions of the resulting topologies on $A\otimes B$ are called the projective and injective tensor products, and denoted by $A\otimes _{\gamma }B$ and $A\otimes _{\lambda }B$.
There is a natural map from $A\otimes _{\gamma }B$ to $A\otimes _{\lambda }B$. If $A$ or $B$ is a nuclear space, then the natural map from $A\otimes _{\gamma }B$ to $A\otimes _{\lambda }B$ is an isomorphism. Roughly speaking, this means that if $A$ or $B$ is nuclear, then there is only one sensible tensor product of $A$ and $B$. This property characterizes nuclear spaces.
== See also ==
Fréchet space – Locally convex topological vector space that is also a complete metric space
Fredholm kernel – type of a kernel on a Banach space
Inductive tensor product – binary operation on topological vector spaces
Projective topology – coarsest topology making certain functions continuous
== References ==
Ryan, R.A. (2002), Introduction to Tensor Products of Banach Spaces, New York: Springer.
Grothendieck, A. (1955), "Produits tensoriels topologiques et espaces nucléaires", Memoirs of the American Mathematical Society, 16. | Wikipedia/Topological_tensor_product |
In mathematics, a homogeneous polynomial, sometimes called quantic in older texts, is a polynomial whose nonzero terms all have the same degree. For example, $x^{5}+2x^{3}y^{2}+9xy^{4}$ is a homogeneous polynomial of degree 5, in two variables; the sum of the exponents in each term is always 5. The polynomial $x^{3}+3x^{2}y+z^{7}$ is not homogeneous, because the sum of exponents does not match from term to term. The function defined by a homogeneous polynomial is always a homogeneous function.
An algebraic form, or simply form, is a function defined by a homogeneous polynomial. A binary form is a form in two variables. A form is also a function defined on a vector space, which may be expressed as a homogeneous function of the coordinates over any basis.
A polynomial of degree 0 is always homogeneous; it is simply an element of the field or ring of the coefficients, usually called a constant or a scalar. A form of degree 1 is a linear form. A form of degree 2 is a quadratic form. In geometry, the Euclidean distance is the square root of a quadratic form.
Homogeneous polynomials are ubiquitous in mathematics and physics. They play a fundamental role in algebraic geometry, as a projective algebraic variety is defined as the set of the common zeros of a set of homogeneous polynomials.
== Properties ==
A homogeneous polynomial defines a homogeneous function. This means that, if a multivariate polynomial P is homogeneous of degree d, then
$P(\lambda x_{1},\ldots ,\lambda x_{n})=\lambda ^{d}\,P(x_{1},\ldots ,x_{n}),$
for every $\lambda$ in any field containing the coefficients of P. Conversely, if the above relation is true for infinitely many $\lambda$, then the polynomial is homogeneous of degree d.
In particular, if P is homogeneous then
$P(x_{1},\ldots ,x_{n})=0\quad \Rightarrow \quad P(\lambda x_{1},\ldots ,\lambda x_{n})=0,$
for every $\lambda$. This property is fundamental in the definition of a projective variety.
Any nonzero polynomial may be decomposed, in a unique way, as a sum of homogeneous polynomials of different degrees, which are called the homogeneous components of the polynomial.
Given a polynomial ring
{\displaystyle R=K[x_{1},\ldots ,x_{n}]}
over a field (or, more generally, a ring) K, the homogeneous polynomials of degree d form a vector space (or a module), commonly denoted R_d. The above unique decomposition means that R is the direct sum of the R_d (sum over all nonnegative integers).
The dimension of the vector space (or free module) R_d is the number of different monomials of degree d in n variables (that is, the maximal number of nonzero terms in a homogeneous polynomial of degree d in n variables). It is equal to the binomial coefficient
{\displaystyle {\binom {d+n-1}{n-1}}={\binom {d+n-1}{d}}={\frac {(d+n-1)!}{d!(n-1)!}}.}
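This count is easy to sanity-check numerically (plain Python; math.comb needs Python 3.8 or later):

```python
# Sketch: count monomials of degree d in n variables by brute force and
# compare with the binomial coefficient C(d+n-1, n-1).
from math import comb
from itertools import product

def count_monomials(d, n):
    # exponent tuples (e1, ..., en) with e1 + ... + en == d
    return sum(1 for e in product(range(d + 1), repeat=n) if sum(e) == d)

d, n = 5, 3
print(count_monomials(d, n), comb(d + n - 1, n - 1))  # both print 21
```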
Homogeneous polynomials satisfy Euler's identity for homogeneous functions. That is, if P is a homogeneous polynomial of degree d in the indeterminates x_1, …, x_n, one has, whatever the commutative ring of the coefficients,
{\displaystyle dP=\sum _{i=1}^{n}x_{i}{\frac {\partial P}{\partial x_{i}}},}
where ∂P/∂x_i denotes the formal partial derivative of P with respect to x_i.
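Euler's identity can be verified symbolically for the example polynomial (a sympy sketch under the stated degree-5 assumption):

```python
# Sketch: verify Euler's identity d*P == sum_i x_i * dP/dx_i for the
# degree-5 homogeneous polynomial used earlier in this article.
import sympy as sp

x, y = sp.symbols('x y')
P = x**5 + 2*x**3*y**2 + 9*x*y**4   # homogeneous, degree d = 5

euler_sum = x*sp.diff(P, x) + y*sp.diff(P, y)
print(sp.expand(euler_sum - 5*P))   # 0, confirming the identity
```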
== Homogenization ==
A non-homogeneous polynomial P(x1,...,xn) can be homogenized by introducing an additional variable x0 and defining the homogeneous polynomial sometimes denoted hP:
{\displaystyle {^{h}\!P}(x_{0},x_{1},\dots ,x_{n})=x_{0}^{d}P\left({\frac {x_{1}}{x_{0}}},\dots ,{\frac {x_{n}}{x_{0}}}\right),}
where d is the degree of P. For example, if
{\displaystyle P(x_{1},x_{2},x_{3})=x_{3}^{3}+x_{1}x_{2}+7,}
then
{\displaystyle ^{h}\!P(x_{0},x_{1},x_{2},x_{3})=x_{3}^{3}+x_{0}x_{1}x_{2}+7x_{0}^{3}.}
A homogenized polynomial can be dehomogenized by setting the additional variable x0 = 1. That is
{\displaystyle P(x_{1},\dots ,x_{n})={^{h}\!P}(1,x_{1},\dots ,x_{n}).}
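The homogenization map is straightforward to implement. Below is a minimal sympy sketch; the helper name `homogenize` is ours, not a sympy function:

```python
# Sketch: homogenize the article's example polynomial with an extra
# variable x0, then dehomogenize by setting x0 = 1.
import sympy as sp

x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3')
P = x3**3 + x1*x2 + 7

def homogenize(poly, extra, variables):
    d = sp.Poly(poly, *variables).total_degree()
    return sp.expand(extra**d * poly.subs({v: v / extra for v in variables}))

hP = homogenize(P, x0, (x1, x2, x3))
print(hP)                  # x0*x1*x2 + 7*x0**3 + x3**3
print(hP.subs(x0, 1) - P)  # 0: dehomogenization recovers P
```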
== See also ==
Multi-homogeneous polynomial
Quasi-homogeneous polynomial
Diagonal form
Graded algebra
Hilbert series and Hilbert polynomial
Multilinear form
Multilinear map
Polarization of an algebraic form
Schur polynomial
Symbol of a differential operator
== External links ==
Media related to Homogeneous polynomials at Wikimedia Commons
Weisstein, Eric W. "Homogeneous Polynomial". MathWorld.
In mathematics, the covariant derivative is a way of specifying a derivative along tangent vectors of a manifold. Alternatively, the covariant derivative is a way of introducing and working with a connection on a manifold by means of a differential operator, to be contrasted with the approach given by a principal connection on the frame bundle – see affine connection. In the special case of a manifold isometrically embedded into a higher-dimensional Euclidean space, the covariant derivative can be viewed as the orthogonal projection of the Euclidean directional derivative onto the manifold's tangent space. In this case the Euclidean derivative is broken into two parts, the extrinsic normal component (dependent on the embedding) and the intrinsic covariant derivative component.
The name is motivated by the importance of changes of coordinates in physics: the covariant derivative transforms covariantly under a general coordinate transformation, that is, linearly via the Jacobian matrix of the transformation.
This article presents an introduction to the covariant derivative of a vector field with respect to a vector field, both in a coordinate-free language and using a local coordinate system and the traditional index notation. The covariant derivative of a tensor field is presented as an extension of the same concept. The covariant derivative generalizes straightforwardly to a notion of differentiation associated to a connection on a vector bundle, also known as a Koszul connection.
== History ==
Historically, at the turn of the 20th century, the covariant derivative was introduced by Gregorio Ricci-Curbastro and Tullio Levi-Civita in the theory of Riemannian and pseudo-Riemannian geometry. Ricci and Levi-Civita (following ideas of Elwin Bruno Christoffel) observed that the Christoffel symbols used to define the curvature could also provide a notion of differentiation which generalized the classical directional derivative of vector fields on a manifold. This new derivative – the Levi-Civita connection – was covariant in the sense that it satisfied Riemann's requirement that objects in geometry should be independent of their description in a particular coordinate system.
It was soon noted by other mathematicians, prominent among these being Hermann Weyl, Jan Arnoldus Schouten, and Élie Cartan, that a covariant derivative could be defined abstractly without the presence of a metric. The crucial feature was not a particular dependence on the metric, but that the Christoffel symbols satisfied a certain precise second-order transformation law. This transformation law could serve as a starting point for defining the derivative in a covariant manner. Thus the theory of covariant differentiation forked off from the strictly Riemannian context to include a wider range of possible geometries.
In the 1940s, practitioners of differential geometry began introducing other notions of covariant differentiation in general vector bundles which were, in contrast to the classical bundles of interest to geometers, not part of the tensor analysis of the manifold. By and large, these generalized covariant derivatives had to be specified ad hoc by some version of the connection concept. In 1950, Jean-Louis Koszul unified these new ideas of covariant differentiation in a vector bundle by means of what is known today as a Koszul connection or a connection on a vector bundle. Using ideas from Lie algebra cohomology, Koszul successfully converted many of the analytic features of covariant differentiation into algebraic ones. In particular, Koszul connections eliminated the need for awkward manipulations of Christoffel symbols (and other analogous non-tensorial objects) in differential geometry. Thus they quickly supplanted the classical notion of covariant derivative in many post-1950 treatments of the subject.
== Motivation ==
The covariant derivative is a generalization of the directional derivative from vector calculus. As with the directional derivative, the covariant derivative is a rule, ∇_u v, which takes as its inputs: (1) a vector, u, defined at a point P, and (2) a vector field v defined in a neighborhood of P. The output is the vector ∇_u v(P), also at the point P. The primary difference from the usual directional derivative is that ∇_u v must, in a certain precise sense, be independent of the manner in which it is expressed in a coordinate system.
A vector may be described as a list of numbers in terms of a basis, but as a geometrical object the vector retains its identity regardless of how it is described. For a geometric vector written in components with respect to one basis, when the basis is changed the components transform according to a change of basis formula, with the coordinates undergoing a covariant transformation. The covariant derivative is required to transform, under a change in coordinates, by a covariant transformation in the same way as a basis does (hence the name).
In the case of Euclidean space, one usually defines the directional derivative of a vector field in terms of the difference between two vectors at two nearby points.
In such a system one translates one of the vectors to the origin of the other, keeping it parallel, then takes their difference within the same vector space. With a Cartesian (fixed orthonormal) coordinate system "keeping it parallel" amounts to keeping the components constant. This ordinary directional derivative on Euclidean space is the first example of a covariant derivative.
Next, one must take into account changes of the coordinate system. For example, if the Euclidean plane is described by polar coordinates, "keeping it parallel" does not amount to keeping the polar components constant under translation, since the coordinate grid itself "rotates". Thus, the same covariant derivative written in polar coordinates contains extra terms that describe how the coordinate grid itself rotates, or how in more general coordinates the grid expands, contracts, twists, interweaves, etc.
Consider the example of a particle moving along a curve γ(t) in the Euclidean plane. In polar coordinates, γ may be written in terms of its radial and angular coordinates by γ(t) = (r(t), θ(t)). A vector at a particular time t (for instance, a constant acceleration of the particle) is expressed in terms of (e_r, e_θ), where e_r and e_θ are unit tangent vectors for the polar coordinates, serving as a basis to decompose a vector in terms of radial and tangential components. At a slightly later time, the new basis in polar coordinates appears slightly rotated with respect to the first set. The covariant derivatives of the basis vectors (the Christoffel symbols) serve to express this change.
In a curved space, such as the surface of the Earth (regarded as a sphere), the translation of tangent vectors between different points is not well defined, and its analog, parallel transport, depends on the path along which the vector is translated. A vector on a globe on the equator at point Q is directed to the north. Suppose we transport the vector (keeping it parallel) first along the equator to the point P, then drag it along a meridian to the N pole, and finally transport it along another meridian back to Q. Then we notice that the parallel-transported vector along a closed circuit does not return as the same vector; instead, it has another orientation. This would not happen in Euclidean space and is caused by the curvature of the surface of the globe. The same effect occurs if we drag the vector along an infinitesimally small closed surface subsequently along two directions and then back. This infinitesimal change of the vector is a measure of the curvature, and can be defined in terms of the covariant derivative.
=== Remarks ===
The definition of the covariant derivative does not use the metric in space. However, for each metric there is a unique torsion-free covariant derivative called the Levi-Civita connection such that the covariant derivative of the metric is zero.
The properties of a derivative imply that ∇_v u depends on the values of u in a neighborhood of a point p in the same way as e.g. the derivative of a scalar function f along a curve at a given point p depends on the values of f in a neighborhood of p.
The information in a neighborhood of a point p in the covariant derivative can be used to define parallel transport of a vector. Also the curvature, torsion, and geodesics may be defined only in terms of the covariant derivative or other related variation on the idea of a linear connection.
== Informal definition using an embedding into Euclidean space ==
Suppose an open subset U of a d-dimensional Riemannian manifold M is embedded into Euclidean space (ℝⁿ, ⟨·,·⟩) via a twice continuously differentiable (C²) mapping
{\displaystyle {\vec {\Psi }}:\mathbb {R} ^{d}\supset U\to \mathbb {R} ^{n}}
such that the tangent space at Ψ(p) is spanned by the vectors
{\displaystyle \left\{\left.{\frac {\partial {\vec {\Psi }}}{\partial x^{i}}}\right|_{p}:i\in \{1,\dots ,d\}\right\}}
and the scalar product ⟨·,·⟩ on ℝⁿ is compatible with the metric on M:
{\displaystyle g_{ij}=\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{i}}},{\frac {\partial {\vec {\Psi }}}{\partial x^{j}}}\right\rangle .}
(Since the manifold metric is always assumed to be regular, the compatibility condition implies linear independence of the partial derivative tangent vectors.)
For a tangent vector field
{\displaystyle {\vec {V}}=v^{j}{\frac {\partial {\vec {\Psi }}}{\partial x^{j}}},}
one has
{\displaystyle {\frac {\partial {\vec {V}}}{\partial x^{i}}}={\frac {\partial }{\partial x^{i}}}\left(v^{j}{\frac {\partial {\vec {\Psi }}}{\partial x^{j}}}\right)={\frac {\partial v^{j}}{\partial x^{i}}}{\frac {\partial {\vec {\Psi }}}{\partial x^{j}}}+v^{j}{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{i}\,\partial x^{j}}}.}
The last term is not tangential to M, but can be expressed as a linear combination of the tangent space base vectors using the Christoffel symbols as linear factors plus a vector orthogonal to the tangent space:
{\displaystyle v^{j}{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{i}\,\partial x^{j}}}=v^{j}{\Gamma ^{k}}_{ij}{\frac {\partial {\vec {\Psi }}}{\partial x^{k}}}+{\vec {n}}.}
In the case of the Levi-Civita connection, the covariant derivative ∇_{e_i} V, also written ∇_i V, is defined as the orthogonal projection of the usual derivative onto the tangent space:
{\displaystyle \nabla _{\mathbf {e} _{i}}{\vec {V}}:={\frac {\partial {\vec {V}}}{\partial x^{i}}}-{\vec {n}}=\left({\frac {\partial v^{k}}{\partial x^{i}}}+v^{j}{\Gamma ^{k}}_{ij}\right){\frac {\partial {\vec {\Psi }}}{\partial x^{k}}}.}
From here it may be computationally convenient to obtain a relation between the Christoffel symbols for the Levi-Civita connection and the metric. To do this we first note that, since the vector n in the previous equation is orthogonal to the tangent space,
{\displaystyle \left\langle {\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{i}\,\partial x^{j}}},{\frac {\partial {\vec {\Psi }}}{\partial x^{l}}}\right\rangle =\left\langle {\Gamma ^{k}}_{ij}{\frac {\partial {\vec {\Psi }}}{\partial x^{k}}}+{\vec {n}},{\frac {\partial {\vec {\Psi }}}{\partial x^{l}}}\right\rangle =\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{k}}},{\frac {\partial {\vec {\Psi }}}{\partial x^{l}}}\right\rangle {\Gamma ^{k}}_{ij}=g_{kl}\,{\Gamma ^{k}}_{ij}.}
Then, since the partial derivative of a component g_ab of the metric with respect to a coordinate x^c is
{\displaystyle {\frac {\partial g_{ab}}{\partial x^{c}}}={\frac {\partial }{\partial x^{c}}}\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{a}}},{\frac {\partial {\vec {\Psi }}}{\partial x^{b}}}\right\rangle =\left\langle {\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{c}\,\partial x^{a}}},{\frac {\partial {\vec {\Psi }}}{\partial x^{b}}}\right\rangle +\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{a}}},{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{c}\,\partial x^{b}}}\right\rangle ,}
any triplet i, j, k of indices yields a system of equations
{\displaystyle \left\{{\begin{alignedat}{2}{\frac {\partial g_{jk}}{\partial x^{i}}}=&&\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{j}}},{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{k}\partial x^{i}}}\right\rangle &+\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{k}}},{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{i}\partial x^{j}}}\right\rangle \\{\frac {\partial g_{ki}}{\partial x^{j}}}=&\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{i}}},{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{j}\partial x^{k}}}\right\rangle &&+\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{k}}},{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{i}\partial x^{j}}}\right\rangle \\{\frac {\partial g_{ij}}{\partial x^{k}}}=&\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{i}}},{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{j}\partial x^{k}}}\right\rangle &+\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{j}}},{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{k}\partial x^{i}}}\right\rangle &&.\end{alignedat}}\right.}
(Here the symmetry of the scalar product has been used, and the order of the partial differentiations has been swapped.)
Adding the first two equations and subtracting the third, we obtain
{\displaystyle {\frac {\partial g_{jk}}{\partial x^{i}}}+{\frac {\partial g_{ki}}{\partial x^{j}}}-{\frac {\partial g_{ij}}{\partial x^{k}}}=2\left\langle {\frac {\partial {\vec {\Psi }}}{\partial x^{k}}},{\frac {\partial ^{2}{\vec {\Psi }}}{\partial x^{i}\,\partial x^{j}}}\right\rangle .}
Thus the Christoffel symbols for the Levi-Civita connection are related to the metric by
{\displaystyle g_{kl}{\Gamma ^{k}}_{ij}={\frac {1}{2}}\left({\frac {\partial g_{jl}}{\partial x^{i}}}+{\frac {\partial g_{li}}{\partial x^{j}}}-{\frac {\partial g_{ij}}{\partial x^{l}}}\right).}
If g is nondegenerate then Γ^k_ij can be solved for directly as
{\displaystyle {\Gamma ^{k}}_{ij}={\frac {1}{2}}g^{kl}\left({\frac {\partial g_{jl}}{\partial x^{i}}}+{\frac {\partial g_{li}}{\partial x^{j}}}-{\frac {\partial g_{ij}}{\partial x^{l}}}\right).}
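This formula lends itself to direct symbolic computation. Below is a minimal sketch (sympy assumed; the Euclidean plane in polar coordinates is our own choice of test metric) evaluating Γ^k_ij from a metric matrix:

```python
# Sketch: compute Gamma^k_ij = (1/2) g^{kl} (d_i g_jl + d_j g_li - d_l g_ij)
# for the Euclidean plane in polar coordinates.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = (r, th)
g = sp.Matrix([[1, 0], [0, r**2]])   # metric ds^2 = dr^2 + r^2 dtheta^2
ginv = g.inv()
n = len(coords)

Gamma = [[[sp.simplify(sum(ginv[k, l] * (sp.diff(g[j, l], coords[i])
                                         + sp.diff(g[l, i], coords[j])
                                         - sp.diff(g[i, j], coords[l]))
                           for l in range(n)) / 2)
           for j in range(n)] for i in range(n)] for k in range(n)]

# Nonzero symbols of the plane: Gamma^r_{theta theta} = -r,
# Gamma^theta_{r theta} = Gamma^theta_{theta r} = 1/r.
print(Gamma[0][1][1], Gamma[1][0][1], Gamma[1][1][0])
```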
For a very simple example that captures the essence of the description above, draw a circle on a flat sheet of paper. Travel around the circle at a constant speed. The derivative of your velocity, your acceleration vector, always points radially inward. Roll this sheet of paper into a cylinder. Now the (Euclidean) derivative of your velocity has a component that sometimes points inward toward the axis of the cylinder depending on whether you're near a solstice or an equinox. (At the point of the circle when you are moving parallel to the axis, there is no inward acceleration. Conversely, at a point (1/4 of a circle later) when the velocity is along the cylinder's bend, the inward acceleration is maximum.) This is the (Euclidean) normal component. The covariant derivative component is the component parallel to the cylinder's surface, and is the same as that before you rolled the sheet into a cylinder.
== Formal definition ==
A covariant derivative is a (Koszul) connection on the tangent bundle and other tensor bundles: it differentiates vector fields in a way analogous to the usual differential on functions. The definition extends to a differentiation on the dual of vector fields (i.e. covector fields) and to arbitrary tensor fields, in a unique way that ensures compatibility with the tensor product and trace operations (tensor contraction).
=== Functions ===
Given a point p ∈ M of the manifold M, a real function f : M → ℝ on the manifold and a tangent vector v ∈ T_pM, the covariant derivative of f at p along v is the scalar at p, denoted (∇_v f)_p, that represents the principal part of the change in the value of f when the argument of f is changed by the infinitesimal displacement vector v. (This is the differential of f evaluated against the vector v.) Formally, there is a differentiable curve φ : [−1, 1] → M such that φ(0) = p and φ′(0) = v, and the covariant derivative of f at p is defined by
{\displaystyle \left(\nabla _{\mathbf {v} }f\right)_{p}=\left(f\circ \phi \right)^{\prime }\left(0\right)=\lim _{t\to 0}{\frac {f(\phi \left(t\right))-f(p)}{t}}.}
When v : M → T_pM is a vector field on M, the covariant derivative ∇_v f : M → ℝ is the function that associates with each point p in the common domain of f and v the scalar (∇_v f)_p.
For a scalar function f and vector field v, the covariant derivative ∇_v f coincides with the Lie derivative L_v(f), and with the exterior derivative df(v).
=== Vector fields ===
Given a point p of the manifold M, a vector field u : M → T_pM defined in a neighborhood of p and a tangent vector v ∈ T_pM, the covariant derivative of u at p along v is the tangent vector at p, denoted (∇_v u)_p, such that the following properties hold (for any tangent vectors v, x and y at p, vector fields u and w defined in a neighborhood of p, scalar values g and h at p, and scalar function f defined in a neighborhood of p):
(∇_v u)_p is linear in v, so
{\displaystyle \left(\nabla _{g\mathbf {x} +h\mathbf {y} }\mathbf {u} \right)_{p}=g(p)\left(\nabla _{\mathbf {x} }\mathbf {u} \right)_{p}+h(p)\left(\nabla _{\mathbf {y} }\mathbf {u} \right)_{p}}
(∇_v u)_p is additive in u, so
{\displaystyle \left(\nabla _{\mathbf {v} }\left[\mathbf {u} +\mathbf {w} \right]\right)_{p}=\left(\nabla _{\mathbf {v} }\mathbf {u} \right)_{p}+\left(\nabla _{\mathbf {v} }\mathbf {w} \right)_{p}}
(∇_v u)_p obeys the product rule; i.e., where ∇_v f is defined above,
{\displaystyle \left(\nabla _{\mathbf {v} }\left[f\mathbf {u} \right]\right)_{p}=f(p)\left(\nabla _{\mathbf {v} }\mathbf {u} \right)_{p}+\left(\nabla _{\mathbf {v} }f\right)_{p}\mathbf {u} _{p}.}
Note that (∇_v u)_p depends not only on the value of u at p but also on values of u in a neighborhood of p, because the last property, the product rule, involves the directional derivative of f (by the vector v).
If u and v are both vector fields defined over a common domain, then ∇_v u denotes the vector field whose value at each point p of the domain is the tangent vector (∇_v u)_p.
=== Covector fields ===
Given a field of covectors (or one-form) α defined in a neighborhood of p, its covariant derivative (∇_v α)_p is defined in a way to make the resulting operation compatible with tensor contraction and the product rule. That is, (∇_v α)_p is defined as the unique one-form at p such that the following identity is satisfied for all vector fields u in a neighborhood of p:
{\displaystyle \left(\nabla _{\mathbf {v} }\alpha \right)_{p}\left(\mathbf {u} _{p}\right)=\nabla _{\mathbf {v} }\left[\alpha \left(\mathbf {u} \right)\right]_{p}-\alpha _{p}\left[\left(\nabla _{\mathbf {v} }\mathbf {u} \right)_{p}\right].}
The covariant derivative of a covector field along a vector field v is again a covector field.
=== Tensor fields ===
Once the covariant derivative is defined for fields of vectors and covectors it can be defined for arbitrary tensor fields by imposing the following identities for every pair of tensor fields φ and ψ in a neighborhood of the point p:
{\displaystyle \nabla _{\mathbf {v} }\left(\varphi \otimes \psi \right)_{p}=\left(\nabla _{\mathbf {v} }\varphi \right)_{p}\otimes \psi (p)+\varphi (p)\otimes \left(\nabla _{\mathbf {v} }\psi \right)_{p},}
and for φ and ψ of the same valence,
{\displaystyle \nabla _{\mathbf {v} }(\varphi +\psi )_{p}=(\nabla _{\mathbf {v} }\varphi )_{p}+(\nabla _{\mathbf {v} }\psi )_{p}.}
The covariant derivative of a tensor field along a vector field v is again a tensor field of the same type.
Explicitly, let T be a tensor field of type (p, q). Consider T to be a differentiable multilinear map of smooth sections α1, α2, ..., αq of the cotangent bundle T∗M and of sections X1, X2, ..., Xp of the tangent bundle TM, written T(α1, α2, ..., X1, X2, ...) into R. The covariant derivative of T along Y is given by the formula
{\displaystyle {\begin{aligned}(\nabla _{Y}T)\left(\alpha _{1},\alpha _{2},\ldots ,X_{1},X_{2},\ldots \right)=&{}\nabla _{Y}\left(T\left(\alpha _{1},\alpha _{2},\ldots ,X_{1},X_{2},\ldots \right)\right)\\&{}-T\left(\nabla _{Y}\alpha _{1},\alpha _{2},\ldots ,X_{1},X_{2},\ldots \right)-T\left(\alpha _{1},\nabla _{Y}\alpha _{2},\ldots ,X_{1},X_{2},\ldots \right)-\cdots \\&{}-T\left(\alpha _{1},\alpha _{2},\ldots ,\nabla _{Y}X_{1},X_{2},\ldots \right)-T\left(\alpha _{1},\alpha _{2},\ldots ,X_{1},\nabla _{Y}X_{2},\ldots \right)-\cdots \end{aligned}}}
== Coordinate description ==
Given coordinate functions x^i, i = 0, 1, 2, …, any tangent vector can be described by its components in the basis
{\displaystyle \mathbf {e} _{i}={\frac {\partial }{\partial x^{i}}}.}
The covariant derivative of a basis vector along a basis vector is again a vector and so can be expressed as a linear combination Γ^k e_k. To specify the covariant derivative it is enough to specify the covariant derivative of each basis vector field e_i along e_j:
{\displaystyle \nabla _{\mathbf {e} _{j}}\mathbf {e} _{i}={\Gamma ^{k}}_{ij}\mathbf {e} _{k},}
where the coefficients Γ^k_ij are the components of the connection with respect to a system of local coordinates. In the theory of Riemannian and pseudo-Riemannian manifolds, the components of the Levi-Civita connection with respect to a system of local coordinates are called Christoffel symbols.
Then using the rules in the definition, we find that for general vector fields v = v^j e_j and u = u^i e_i we get
{\displaystyle {\begin{aligned}\nabla _{\mathbf {v} }\mathbf {u} &=\nabla _{v^{j}\mathbf {e} _{j}}u^{i}\mathbf {e} _{i}\\&=v^{j}\nabla _{\mathbf {e} _{j}}u^{i}\mathbf {e} _{i}\\&=v^{j}u^{i}\nabla _{\mathbf {e} _{j}}\mathbf {e} _{i}+v^{j}\mathbf {e} _{i}\nabla _{\mathbf {e} _{j}}u^{i}\\&=v^{j}u^{i}{\Gamma ^{k}}_{ij}\mathbf {e} _{k}+v^{j}{\partial u^{i} \over \partial x^{j}}\mathbf {e} _{i}\end{aligned}}}
so
{\displaystyle \nabla _{\mathbf {v} }\mathbf {u} =\left(v^{j}u^{i}{\Gamma ^{k}}_{ij}+v^{j}{\partial u^{k} \over \partial x^{j}}\right)\mathbf {e} _{k}.}
The first term in this formula is responsible for "twisting" the coordinate system with respect to the covariant derivative and the second for changes of components of the vector field u. In particular
{\displaystyle \nabla _{\mathbf {e} _{j}}\mathbf {u} =\nabla _{j}\mathbf {u} =\left({\frac {\partial u^{i}}{\partial x^{j}}}+u^{k}{\Gamma ^{i}}_{kj}\right)\mathbf {e} _{i}}
In words: the covariant derivative is the usual derivative along the coordinates with correction terms which tell how the coordinates change.
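As a concrete, hedged check of these correction terms (a sympy sketch; the field and coordinates are our own test choices, not from the text), one can verify that a constant Cartesian vector field has vanishing covariant derivative even though its polar components vary:

```python
# Sketch: components (nabla_j u)^i = d_j u^i + Gamma^i_{kj} u^k in polar
# coordinates, for the constant Cartesian field u = e_x written in the
# polar coordinate basis; its covariant derivative vanishes identically.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = (r, th)
# Christoffel symbols of the Euclidean plane in polar coordinates,
# stored as Gamma[(k, i, j)] = Gamma^k_{ij}:
Gamma = {(0, 1, 1): -r, (1, 0, 1): 1/r, (1, 1, 0): 1/r}

# e_x in the coordinate basis (d_r, d_theta): u^r = cos(theta),
# u^theta = -sin(theta)/r.
u = [sp.cos(th), -sp.sin(th)/r]

for i in range(2):
    for j in range(2):
        val = sp.diff(u[i], coords[j]) + sum(
            Gamma.get((i, k, j), 0) * u[k] for k in range(2))
        print(i, j, sp.simplify(val))   # all four components are 0
```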
For covectors similarly we have
{\displaystyle \nabla _{\mathbf {e} _{j}}{\mathbf {\theta } }=\left({\frac {\partial \theta _{i}}{\partial x^{j}}}-\theta _{k}{\Gamma ^{k}}_{ij}\right){\mathbf {e} ^{*}}^{i}}
where e*^i(e_j) = δ^i_j.
The covariant derivative of a type (r, s) tensor field along e_c is given by the expression:
{\displaystyle {\begin{aligned}{(\nabla _{e_{c}}T)^{a_{1}\ldots a_{r}}}_{b_{1}\ldots b_{s}}={}&{\frac {\partial }{\partial x^{c}}}{T^{a_{1}\ldots a_{r}}}_{b_{1}\ldots b_{s}}\\&+\,{\Gamma ^{a_{1}}}_{dc}{T^{da_{2}\ldots a_{r}}}_{b_{1}\ldots b_{s}}+\cdots +{\Gamma ^{a_{r}}}_{dc}{T^{a_{1}\ldots a_{r-1}d}}_{b_{1}\ldots b_{s}}\\&-\,{\Gamma ^{d}}_{b_{1}c}{T^{a_{1}\ldots a_{r}}}_{db_{2}\ldots b_{s}}-\cdots -{\Gamma ^{d}}_{b_{s}c}{T^{a_{1}\ldots a_{r}}}_{b_{1}\ldots b_{s-1}d}.\end{aligned}}}
Or, in words: take the partial derivative of the tensor and add:
a term +Γ^{a_i}_{dc} for every upper index a_i, and a term −Γ^d_{b_i c} for every lower index b_i.
If instead of a tensor, one is trying to differentiate a tensor density (of weight +1), then one also adds a term
{\displaystyle -{\Gamma ^{d}}_{dc}{T^{a_{1}\ldots a_{r}}}_{b_{1}\ldots b_{s}}.}
If it is a tensor density of weight W, then multiply that term by W.
For example, √(−g) is a scalar density (of weight +1), so we get:
{\displaystyle \left({\sqrt {-g}}\right)_{;c}=\left({\sqrt {-g}}\right)_{,c}-{\sqrt {-g}}\,{\Gamma ^{d}}_{dc}}
where semicolon ";" indicates covariant differentiation and comma "," indicates partial differentiation. Incidentally, this particular expression is equal to zero, because the covariant derivative of a function solely of the metric is always zero.
== Notation ==
In textbooks on physics, the covariant derivative is sometimes simply stated in terms of its components in this equation.
Often a notation is used in which the covariant derivative is given with a semicolon, while a normal partial derivative is indicated by a comma. In this notation we write the same as:
{\displaystyle \nabla _{e_{j}}\mathbf {v} \ {\stackrel {\mathrm {def} }{=}}\ {v^{s}}_{;j}\mathbf {e} _{s}\;\;\;\;\;\;{v^{i}}_{;j}={v^{i}}_{,j}+v^{k}{\Gamma ^{i}}_{kj}}
In case two or more indexes appear after the semicolon, all of them must be understood as covariant derivatives:
{\displaystyle \nabla _{e_{k}}\left(\nabla _{e_{j}}\mathbf {v} \right)\ {\stackrel {\mathrm {def} }{=}}\ {v^{s}}_{;jk}\mathbf {e} _{s}}
In some older texts (notably Adler, Bazin & Schiffer, Introduction to General Relativity), the covariant derivative is denoted by a double pipe and the partial derivative by single pipe:
{\displaystyle \nabla _{e_{j}}\mathbf {v} \ {\stackrel {\mathrm {def} }{=}}\ {v^{i}}_{||j}={v^{i}}_{|j}+v^{k}{\Gamma ^{i}}_{kj}}
== Covariant derivative by field type ==
For a scalar field φ, covariant differentiation is simply partial differentiation:
{\displaystyle \phi _{;a}\equiv \partial _{a}\phi }
For a contravariant vector field λ^a, we have:
{\displaystyle {\lambda ^{a}}_{;b}\equiv \partial _{b}\lambda ^{a}+{\Gamma ^{a}}_{bc}\lambda ^{c}}
For a covariant vector field λ_a, we have:
{\displaystyle \lambda _{a;c}\equiv \partial _{c}\lambda _{a}-{\Gamma ^{b}}_{ca}\lambda _{b}}
For a type (2,0) tensor field τ^ab, we have:
{\displaystyle {\tau ^{ab}}_{;c}\equiv \partial _{c}\tau ^{ab}+{\Gamma ^{a}}_{cd}\tau ^{db}+{\Gamma ^{b}}_{cd}\tau ^{ad}}
For a type (0,2) tensor field τ_ab, we have:
{\displaystyle \tau _{ab;c}\equiv \partial _{c}\tau _{ab}-{\Gamma ^{d}}_{ca}\tau _{db}-{\Gamma ^{d}}_{cb}\tau _{ad}}
For a type (1,1) tensor field τ^a_b, we have:
{\displaystyle {\tau ^{a}}_{b;c}\equiv \partial _{c}{\tau ^{a}}_{b}+{\Gamma ^{a}}_{cd}{\tau ^{d}}_{b}-{\Gamma ^{d}}_{cb}{\tau ^{a}}_{d}}
The notation above is meant in the sense
{\displaystyle {\tau ^{ab}}_{;c}\equiv \left(\nabla _{\mathbf {e} _{c}}\tau \right)^{ab}}
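These rules are mechanical enough to code directly. The sketch below (sympy; the helper names and the polar-coordinate metric are our own choices, not a library API) applies the type-(0,2) rule to the metric itself and confirms metric compatibility of the Levi-Civita connection, ∇_c g_ab = 0:

```python
# Sketch: apply tau_{ab;c} = d_c tau_ab - Gamma^d_{ca} tau_db
# - Gamma^d_{cb} tau_ad to the polar metric; for the Levi-Civita
# connection the result must vanish (metric compatibility).
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = (r, th)
g = sp.Matrix([[1, 0], [0, r**2]])
Gamma = {(0, 1, 1): -r, (1, 0, 1): 1/r, (1, 1, 0): 1/r}  # Gamma[(d, a, b)]

def cov_deriv_02(tau, a, b, c):
    term = sp.diff(tau[a, b], coords[c])
    term -= sum(Gamma.get((d, c, a), 0) * tau[d, b] for d in range(2))
    term -= sum(Gamma.get((d, c, b), 0) * tau[a, d] for d in range(2))
    return sp.simplify(term)

print(all(cov_deriv_02(g, a, b, c) == 0
          for a in range(2) for b in range(2) for c in range(2)))  # True
```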
== Properties ==
In general, covariant derivatives do not commute. For example, for a covector field λ_a, in general λ_{a;bc} ≠ λ_{a;cb}. The Riemann tensor R^d_{abc} is defined such that:
{\displaystyle \lambda _{a;bc}-\lambda _{a;cb}={R^{d}}_{abc}\lambda _{d}}
or, equivalently,
{\displaystyle {\lambda ^{a}}_{;bc}-{\lambda ^{a}}_{;cb}=-{R^{a}}_{dbc}\lambda ^{d}}
The covariant derivative of a (2,0)-tensor field fulfills:
{\displaystyle {\tau ^{ab}}_{;cd}-{\tau ^{ab}}_{;dc}=-{R^{a}}_{ecd}\tau ^{eb}-{R^{b}}_{ecd}\tau ^{ae}}
The latter can be shown by taking (without loss of generality) τ^{ab} = λ^a μ^b.
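This noncommutativity can be verified symbolically. The following sketch (sympy; the unit 2-sphere and the sign conventions matching the displayed identity are our own assumptions) checks one component of λ_{a;bc} − λ_{a;cb} = R^d_{abc} λ_d for an arbitrary covector field:

```python
# Sketch: verify lambda_{a;bc} - lambda_{a;cb} = R^d_{abc} lambda_d on the
# unit 2-sphere, for an arbitrary covector field (f, h).
import sympy as sp

th, ph = sp.symbols('theta phi')
x = (th, ph)
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
ginv = g.inv()
n = 2

def Gam(k, i, j):   # Christoffel symbols Gamma^k_{ij}
    return sum(ginv[k, l] * (sp.diff(g[j, l], x[i]) + sp.diff(g[l, i], x[j])
                             - sp.diff(g[i, j], x[l])) for l in range(n)) / 2

f = sp.Function('f')(th, ph)
h = sp.Function('h')(th, ph)
lam = [f, h]

# first covariant derivative: lambda_{a;b}
D1 = [[sp.diff(lam[a], x[b]) - sum(Gam(d, a, b) * lam[d] for d in range(n))
       for b in range(n)] for a in range(n)]

def D2(a, b, c):   # second covariant derivative lambda_{a;bc}
    return (sp.diff(D1[a][b], x[c])
            - sum(Gam(d, a, c) * D1[d][b] for d in range(n))
            - sum(Gam(d, b, c) * D1[a][d] for d in range(n)))

def Riemann(d, a, b, c):   # R^d_{abc}
    return (sp.diff(Gam(d, c, a), x[b]) - sp.diff(Gam(d, b, a), x[c])
            + sum(Gam(d, b, e) * Gam(e, c, a) for e in range(n))
            - sum(Gam(d, c, e) * Gam(e, b, a) for e in range(n)))

a, b, c = 0, 0, 1   # check one component; the others work the same way
lhs = D2(a, b, c) - D2(a, c, b)
rhs = sum(Riemann(d, a, b, c) * lam[d] for d in range(n))
print(sp.simplify(lhs - rhs))   # 0
```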
== Derivative along a curve ==
Since the covariant derivative ∇_X T of a tensor field T at a point p depends only on the value of the vector field X at p, one can define the covariant derivative along a smooth curve γ(t) in a manifold:
{\displaystyle D_{t}T=\nabla _{{\dot {\gamma }}(t)}T.}
Note that the tensor field T only needs to be defined on the curve γ(t) for this definition to make sense. In particular, γ̇(t) is a vector field along the curve γ itself. If ∇_{γ̇(t)} γ̇(t) vanishes, then the curve is called a geodesic of the covariant derivative. If the covariant derivative is the Levi-Civita connection of a positive-definite metric, then the geodesics for the connection are precisely the geodesics of the metric that are parametrized by arc length.
The derivative along a curve is also used to define the parallel transport along the curve.
Sometimes the covariant derivative along a curve is called the absolute or intrinsic derivative.
== Relation to Lie derivative ==
A covariant derivative introduces an extra geometric structure on a manifold that allows vectors in neighboring tangent spaces to be compared: there is no canonical way to compare vectors from different tangent spaces because there is no canonical coordinate system.
There is however another generalization of directional derivatives which is canonical: the Lie derivative, which evaluates the change of one vector field along the flow of another vector field. Thus, one must know both vector fields in a neighborhood, not merely at a single point. The covariant derivative on the other hand introduces its own change for vectors in a given direction, and it only depends on the vector direction at a single point, rather than a vector field in a neighborhood of a point. In other words, the covariant derivative is linear (over C∞(M)) in the direction argument, while the Lie derivative is linear in neither argument.
Note that the antisymmetrized covariant derivative ∇_u v − ∇_v u and the Lie derivative L_u v differ by the torsion of the connection, so that if a connection is torsion-free, then its antisymmetrization is the Lie derivative.
== References ==
Kobayashi, Shoshichi; Nomizu, Katsumi (1996). Foundations of Differential Geometry, Vol. 1 (New ed.). Wiley Interscience. ISBN 0-471-15733-3.
I.Kh. Sabitov (2001) [1994], "Covariant differentiation", Encyclopedia of Mathematics, EMS Press
Sternberg, Shlomo (1964). Lectures on Differential Geometry. Prentice-Hall.
Spivak, Michael (1999). A Comprehensive Introduction to Differential Geometry (Volume Two). Publish or Perish, Inc.
The Maxwell stress tensor (named after James Clerk Maxwell) is a symmetric second-order tensor in three dimensions that is used in classical electromagnetism to represent the interaction between electromagnetic forces and mechanical momentum. In simple situations, such as a point charge moving freely in a homogeneous magnetic field, it is easy to calculate the forces on the charge from the Lorentz force law. When the situation becomes more complicated, this ordinary procedure can become impractically difficult, with equations spanning multiple lines. It is therefore convenient to collect many of these terms in the Maxwell stress tensor, and to use tensor arithmetic to find the answer to the problem at hand.
In the relativistic formulation of electromagnetism, the nine components of the Maxwell stress tensor appear, negated, as components of the electromagnetic stress–energy tensor, which is the electromagnetic component of the total stress–energy tensor. The latter describes the density and flux of energy and momentum in spacetime.
== Motivation ==
As outlined below, the electromagnetic force is written in terms of E and B. Using vector calculus and Maxwell's equations, symmetry is sought in the terms containing E and B, and introducing the Maxwell stress tensor simplifies the result.
In the above relation for conservation of momentum, ∇·σ is the momentum flux density and plays a role similar to S in Poynting's theorem.
The above derivation assumes complete knowledge of both ρ and J (both free and bound charges and currents). For the case of nonlinear materials (such as magnetic iron with a B-H curve), the nonlinear Maxwell stress tensor must be used.
== Equation ==
In physics, the Maxwell stress tensor is the stress tensor of an electromagnetic field. As derived above, it is given by:
{\displaystyle \sigma _{ij}=\epsilon _{0}E_{i}E_{j}+{\frac {1}{\mu _{0}}}B_{i}B_{j}-{\frac {1}{2}}\left(\epsilon _{0}E^{2}+{\frac {1}{\mu _{0}}}B^{2}\right)\delta _{ij},}
where ε0 is the electric constant and μ0 is the magnetic constant, E is the electric field, B is the magnetic field and δ_ij is the Kronecker delta. In the Gaussian system, it is given by:
{\displaystyle \sigma _{ij}={\frac {1}{4\pi }}\left(E_{i}E_{j}+H_{i}H_{j}-{\frac {1}{2}}\left(E^{2}+H^{2}\right)\delta _{ij}\right),}
where H is the magnetizing field.
An alternative way of expressing this tensor is:
{\displaystyle {\overset {\leftrightarrow }{\boldsymbol {\sigma }}}={\frac {1}{4\pi }}\left[\mathbf {E} \otimes \mathbf {E} +\mathbf {H} \otimes \mathbf {H} -{\frac {E^{2}+H^{2}}{2}}\mathbb {I} \right]}
where ⊗ is the dyadic product, and the last tensor is the unit dyad:
{\displaystyle \mathbb {I} \equiv {\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}}=\left(\mathbf {\hat {x}} \otimes \mathbf {\hat {x}} +\mathbf {\hat {y}} \otimes \mathbf {\hat {y}} +\mathbf {\hat {z}} \otimes \mathbf {\hat {z}} \right)}
The element ij of the Maxwell stress tensor has units of momentum per unit of area per unit time and gives the flux of momentum parallel to the ith axis crossing a surface normal to the jth axis (in the negative direction) per unit of time.
These units can also be seen as units of force per unit of area (negative pressure), and the ij element of the tensor can also be interpreted as the force parallel to the ith axis suffered by a surface normal to the jth axis per unit of area. Indeed, the diagonal elements give the tension (pulling) acting on a differential area element normal to the corresponding axis. Unlike forces due to the pressure of an ideal gas, an area element in the electromagnetic field also feels a force in a direction that is not normal to the element. This shear is given by the off-diagonal elements of the stress tensor.
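A small numeric sketch (numpy; the field values are arbitrary test data, not from the text) assembles the SI-unit tensor above and checks two of the properties just described — symmetry, and that the trace equals minus the electromagnetic energy density:

```python
# Sketch: build the SI Maxwell stress tensor for sample E and B fields and
# check two properties: symmetry, and trace(sigma) = -(energy density).
import numpy as np

eps0 = 8.8541878128e-12   # electric constant, F/m
mu0 = 1.25663706212e-6    # magnetic constant, H/m

def maxwell_stress(E, B):
    u = 0.5 * (eps0 * E @ E + (B @ B) / mu0)   # energy density
    return (eps0 * np.outer(E, E) + np.outer(B, B) / mu0
            - u * np.eye(3))

E = np.array([1.0e5, -2.0e4, 3.0e4])   # V/m (test values)
B = np.array([0.2, 0.1, -0.05])        # T   (test values)
sigma = maxwell_stress(E, B)

print(np.allclose(sigma, sigma.T))       # True: symmetric
u = 0.5 * (eps0 * E @ E + (B @ B) / mu0)
print(np.isclose(np.trace(sigma), -u))   # True: trace is -u
```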
It has recently been shown that the Maxwell stress tensor is the real part of a more general complex electromagnetic stress tensor whose imaginary part accounts for reactive electrodynamical forces.
== In magnetostatics ==
If the field is only magnetic (which is largely true in motors, for instance), some of the terms drop out, and the equation in SI units becomes:
{\displaystyle \sigma _{ij}={\frac {1}{\mu _{0}}}B_{i}B_{j}-{\frac {1}{2\mu _{0}}}B^{2}\delta _{ij}\,.}
For cylindrical objects, such as the rotor of a motor, this is further simplified to:
{\displaystyle \sigma _{rt}={\frac {1}{\mu _{0}}}B_{r}B_{t}-{\frac {1}{2\mu _{0}}}B^{2}\delta _{rt}\,,}
where r denotes the radial direction (outward from the cylinder) and t the tangential direction (around the cylinder). B_r is the flux density in the radial direction, and B_t is the flux density in the tangential direction. It is the tangential force which spins the motor.
== In electrostatics ==
In electrostatics the effects of magnetism are not present. In this case the magnetic field vanishes, i.e. B = 0, and we obtain the electrostatic Maxwell stress tensor. It is given in component form by
{\displaystyle \sigma _{ij}=\varepsilon _{0}E_{i}E_{j}-{\frac {1}{2}}\varepsilon _{0}E^{2}\delta _{ij}}
and in symbolic form by
{\displaystyle {\boldsymbol {\sigma }}=\varepsilon _{0}\mathbf {E} \otimes \mathbf {E} -{\frac {1}{2}}\varepsilon _{0}(\mathbf {E} \cdot \mathbf {E} )\mathbf {I} }
where I is the appropriate identity tensor (usually 3 × 3).
== Eigenvalue ==
The eigenvalues of the Maxwell stress tensor are given by:
{\displaystyle \{\lambda \}=\left\{-\left({\frac {\epsilon _{0}}{2}}E^{2}+{\frac {1}{2\mu _{0}}}B^{2}\right),~\pm {\sqrt {\left({\frac {\epsilon _{0}}{2}}E^{2}-{\frac {1}{2\mu _{0}}}B^{2}\right)^{2}+{\frac {\epsilon _{0}}{\mu _{0}}}\left({\boldsymbol {E}}\cdot {\boldsymbol {B}}\right)^{2}}}\right\}}
These eigenvalues are obtained by iteratively applying the matrix determinant lemma, in conjunction with the Sherman–Morrison formula.
Noting that the characteristic equation matrix, σ − λ𝕀, can be written as
{\displaystyle {\overleftrightarrow {\boldsymbol {\sigma }}}-\lambda \mathbf {\mathbb {I} } =-\left(\lambda +V\right)\mathbf {\mathbb {I} } +\epsilon _{0}\mathbf {E} \mathbf {E} ^{\textsf {T}}+{\frac {1}{\mu _{0}}}\mathbf {B} \mathbf {B} ^{\textsf {T}}}
where
{\displaystyle V={\frac {1}{2}}\left(\epsilon _{0}E^{2}+{\frac {1}{\mu _{0}}}B^{2}\right)}
we set
{\displaystyle \mathbf {U} =-\left(\lambda +V\right)\mathbf {\mathbb {I} } +\epsilon _{0}\mathbf {E} \mathbf {E} ^{\textsf {T}}}
Applying the matrix determinant lemma once, this gives us
{\displaystyle \det {\left({\overleftrightarrow {\boldsymbol {\sigma }}}-\lambda \mathbf {\mathbb {I} } \right)}=\left(1+{\frac {1}{\mu _{0}}}\mathbf {B} ^{\textsf {T}}\mathbf {U} ^{-1}\mathbf {B} \right)\det {\left(\mathbf {U} \right)}}
Applying it again yields,
{\displaystyle \det {\left({\overleftrightarrow {\boldsymbol {\sigma }}}-\lambda \mathbf {\mathbb {I} } \right)}=\left(1+{\frac {1}{\mu _{0}}}\mathbf {B} ^{\textsf {T}}\mathbf {U} ^{-1}\mathbf {B} \right)\left(1-{\frac {\epsilon _{0}\mathbf {E} ^{\textsf {T}}\mathbf {E} }{\lambda +V}}\right)\left(-\lambda -V\right)^{3}}
From the last multiplicand on the RHS, we immediately see that λ = −V is one of the eigenvalues.
To find the inverse of U, we use the Sherman–Morrison formula:
{\displaystyle \mathbf {U} ^{-1}=-\left(\lambda +V\right)^{-1}-{\frac {\epsilon _{0}\mathbf {E} \mathbf {E} ^{\textsf {T}}}{\left(\lambda +V\right)^{2}-\left(\lambda +V\right)\epsilon _{0}\mathbf {E} ^{\textsf {T}}\mathbf {E} }}}
Factoring out a (−λ − V) term in the determinant, we are left with finding the zeros of the rational function:
{\displaystyle \left(-\left(\lambda +V\right)-{\frac {\epsilon _{0}\left(\mathbf {E} \cdot \mathbf {B} \right)^{2}}{\mu _{0}\left(-\left(\lambda +V\right)+\epsilon _{0}\mathbf {E} ^{\textsf {T}}\mathbf {E} \right)}}\right)\left(-\left(\lambda +V\right)+\epsilon _{0}\mathbf {E} ^{\textsf {T}}\mathbf {E} \right)}
Thus, once we solve
{\displaystyle -\left(\lambda +V\right)\left(-\left(\lambda +V\right)+\epsilon _{0}E^{2}\right)-{\frac {\epsilon _{0}}{\mu _{0}}}\left(\mathbf {E} \cdot \mathbf {B} \right)^{2}=0}
we obtain the other two eigenvalues.
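As a hedged numerical cross-check (numpy; the sample field values are our own), the closed-form eigenvalues can be compared against direct diagonalization:

```python
# Sketch: compare the closed-form eigenvalues above with a direct
# numerical diagonalization of the Maxwell stress tensor.
import numpy as np

eps0, mu0 = 8.8541878128e-12, 1.25663706212e-6
E = np.array([2.0e5, 1.0e5, -3.0e4])   # V/m (test values)
B = np.array([0.3, -0.2, 0.1])         # T   (test values)

u_E, u_B = 0.5 * eps0 * (E @ E), 0.5 * (B @ B) / mu0
sigma = (eps0 * np.outer(E, E) + np.outer(B, B) / mu0
         - (u_E + u_B) * np.eye(3))

root = np.sqrt((u_E - u_B)**2 + (eps0 / mu0) * (E @ B)**2)
closed_form = np.sort([-(u_E + u_B), -root, root])
numeric = np.sort(np.linalg.eigvalsh(sigma))
print(np.allclose(closed_form, numeric))   # True
```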
== See also ==
Ricci calculus
Energy density of electric and magnetic fields
Poynting vector
Electromagnetic stress–energy tensor
Magnetic pressure
Magnetic tension
== References ==
David J. Griffiths, "Introduction to Electrodynamics" pp. 351–352, Benjamin Cummings Inc., 2008
John David Jackson, "Classical Electrodynamics, 3rd Ed.", John Wiley & Sons, Inc., 1999
Richard Becker, "Electromagnetic Fields and Interactions", Dover Publications Inc., 1964
Scalar quantities or simply scalars are physical quantities that can be described by a single pure number (a scalar, typically a real number), accompanied by a unit of measurement, as in "10 cm" (ten centimeters).
Examples of scalars are length, mass, charge, volume, and time.
Scalars may represent the magnitude of physical quantities, as speed is the magnitude of velocity. Scalars do not represent a direction.
Scalars are unaffected by changes to a vector space basis (i.e., a coordinate rotation) but may be affected by translations (as in relative speed).
A change of a vector space basis changes the description of a vector in terms of the basis used but does not change the vector itself, while a scalar has nothing to do with this change. In classical physics, like Newtonian mechanics, rotations and reflections preserve scalars, while in relativity, Lorentz transformations or space-time translations preserve scalars. The term "scalar" has its origin in the multiplication of vectors by a unitless scalar, which is a uniform scaling transformation.
== Relationship with the mathematical concept ==
A scalar in physics and other areas of science is also a scalar in mathematics, as an element of a mathematical field used to define a vector space. For example, the magnitude (or length) of an electric field vector is calculated as the square root of its absolute square (the inner product of the electric field with itself); so, the inner product's result is an element of the mathematical field for the vector space in which the electric field is described. As the vector space in this example and usual cases in physics is defined over the mathematical field of real numbers or complex numbers, the magnitude is also an element of the field, so it is mathematically a scalar. Since the inner product is independent of any vector space basis, the electric field magnitude is also physically a scalar.
The mass of an object is unaffected by a change of vector space basis so it is also a physical scalar, described by a real number as an element of the real number field. Since a field is a vector space with addition defined based on vector addition and multiplication defined as scalar multiplication, the mass is also a mathematical scalar.
== Scalar field ==
Since scalars may mostly be treated as special cases of multi-dimensional quantities such as vectors and tensors, physical scalar fields might be regarded as a special case of more general fields, like vector fields, spinor fields, and tensor fields.
== Units ==
Like other physical quantities, a physical quantity of scalar is also typically expressed by a numerical value and a physical unit, not merely a number, to provide its physical meaning. It may be regarded as the product of the number and the unit (e.g., 1 km as a physical distance is the same as 1,000 m). A physical distance does not depend on the length of each base vector of the coordinate system where the base vector length corresponds to the physical distance unit in use. (E.g., 1 m base vector length means the meter unit is used.) A physical distance differs from a metric in the sense that it is not just a real number while the metric is calculated to a real number, but the metric can be converted to the physical distance by converting each base vector length to the corresponding physical unit.
Any change of a coordinate system may affect the formula for computing scalars (for example, the Euclidean formula for distance in terms of coordinates relies on the basis being orthonormal), but not the scalars themselves. Vectors themselves also do not change by a change of a coordinate system, but their descriptions change (e.g., a change of the numbers representing a position vector by rotating the coordinate system in use).
== Classical scalars ==
An example of a scalar quantity is temperature: the temperature at a given point is a single number. Velocity, on the other hand, is a vector quantity.
Other examples of scalar quantities are mass, charge, volume, time, speed, pressure, and electric potential at a point inside a medium. The distance between two points in three-dimensional space is a scalar, but the direction from one of those points to the other is not, since describing a direction requires two physical quantities such as the angle on the horizontal plane and the angle away from that plane. Force cannot be described using a scalar, since force has both direction and magnitude; however, the magnitude of a force alone can be described with a scalar, for instance the gravitational force acting on a particle is not a scalar, but its magnitude is. The speed of an object is a scalar (e.g., 180 km/h), while its velocity is not (e.g. a velocity of 180 km/h in a roughly northwest direction might consist of 108 km/h northward and 144 km/h westward).
Some other examples of scalar quantities in Newtonian mechanics are electric charge and charge density.
== Relativistic scalars ==
In the theory of relativity, one considers changes of coordinate systems that trade space for time. As a consequence, several physical quantities that are scalars in "classical" (non-relativistic) physics need to be combined with other quantities and treated as four-vectors or tensors. For example, the charge density at a point in a medium, which is a scalar in classical physics, must be combined with the local current density (a 3-vector) to comprise a relativistic 4-vector. Similarly, energy density must be combined with momentum density and pressure into the stress–energy tensor.
Examples of scalar quantities in relativity include electric charge, spacetime interval (e.g., proper time and proper length), and invariant mass.
== See also ==
Invariant (physics)
Relative scalar
Scalar (mathematics)
== References ==
Feynman, R.P.; Leighton, R.B.; Sands, M. (1963). The Feynman Lectures on Physics. Vol. 1. ISBN 978-0-201-02116-5.
Arfken, George (1985). Mathematical Methods for Physicists (third ed.). Academic press. ISBN 0-12-059820-5.
== External links ==
Media related to Scalar physical quantities at Wikimedia Commons
In physics, a covariant transformation is a rule that specifies how certain entities, such as vectors or tensors, change under a change of basis. The transformation that describes the new basis vectors as a linear combination of the old basis vectors is defined as a covariant transformation. Conventionally, indices identifying the basis vectors are placed as lower indices and so are all entities that transform in the same way. The inverse of a covariant transformation is a contravariant transformation. Whenever a vector should be invariant under a change of basis, that is to say it should represent the same geometrical or physical object having the same magnitude and direction as before, its components must transform according to the contravariant rule. Conventionally, indices identifying the components of a vector are placed as upper indices and so are all indices of entities that transform in the same way. The sum over pairwise matching indices of a product with the same lower and upper indices is invariant under a transformation.
A vector itself is a geometrical quantity, in principle, independent (invariant) of the chosen basis. A vector v is given, say, in components vi on a chosen basis ei. On another basis, say e′j, the same vector v has different components v′j and
{\displaystyle \mathbf {v} =\sum _{i}v^{i}{\mathbf {e} }_{i}=\sum _{j}{v'\,}^{j}\mathbf {e} '_{j}.}
As a vector, v should be invariant to the chosen coordinate system and independent of any chosen basis, i.e. its "real world" direction and magnitude should appear the same regardless of the basis vectors. If we perform a change of basis by transforming the vectors ei into the basis vectors e′j, we must also ensure that the components vi transform into the new components v′j to compensate.
The needed transformation of v is called the contravariant transformation rule.
In the shown example, a vector
{\textstyle \mathbf {v} =\sum _{i\in \{x,y\}}v^{i}{\mathbf {e} }_{i}=\sum _{j\in \{r,\phi \}}{v'\,}^{j}\mathbf {e} '_{j}}
is described by two different coordinate systems: a rectangular coordinate system (the black grid), and a radial coordinate system (the red grid). Basis vectors have been chosen for both coordinate systems: ex and ey for the rectangular coordinate system, and er and eφ for the radial coordinate system. The radial basis vectors er and eφ appear rotated anticlockwise with respect to the rectangular basis vectors ex and ey. The covariant transformation, performed to the basis vectors, is thus an anticlockwise rotation, rotating from the first basis vectors to the second basis vectors.
The coordinates of v must be transformed into the new coordinate system, but the vector v itself, as a mathematical object, remains independent of the basis chosen, appearing to point in the same direction and with the same magnitude, invariant to the change of coordinates. The contravariant transformation ensures this, by compensating for the rotation between the different bases. If we view v from the context of the radial coordinate system, it appears to be rotated more clockwise from the basis vectors er and eφ, compared to how it appeared relative to the rectangular basis vectors ex and ey. Thus, the needed contravariant transformation to v in this example is a clockwise rotation.
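This compensation can be checked numerically. The following is a minimal sketch (Python with NumPy; the rotation angle and components are arbitrary assumptions for illustration) in which the basis is rotated anticlockwise and the components are rotated clockwise, leaving the vector itself unchanged:
<syntaxhighlight lang="python">
import numpy as np

theta = 0.3  # assumed rotation angle (anticlockwise), for illustration only

# Covariant transformation: the new basis vectors (columns) are the old
# standard basis vectors rotated anticlockwise.
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
E_old = np.eye(2)        # columns: e_x, e_y
E_new = R @ E_old        # columns: e'_1, e'_2

# Contravariant transformation: the components must use the inverse
# (clockwise) rotation so that the vector itself is unchanged.
v_old = np.array([2.0, 1.0])
v_new = np.linalg.inv(R) @ v_old

# v = sum_i v^i e_i = sum_j v'^j e'_j
assert np.allclose(E_old @ v_old, E_new @ v_new)
</syntaxhighlight>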
== Examples of covariant transformation ==
=== The derivative of a function transforms covariantly ===
The explicit form of a covariant transformation is best introduced with the transformation properties of the derivative of a function. Consider a scalar function f (like the temperature at a location in a space) defined on a set of points p, identifiable in a given coordinate system
{\displaystyle x^{i},\;i=0,1,\dots }
(such a collection is called a manifold). If we adopt a new coordinate system
{\displaystyle {x'}^{j},j=0,1,\dots }
then for each i, the original coordinate
{\displaystyle {x}^{i}}
can be expressed as a function of the new coordinates, so
{\displaystyle x^{i}\left({x'}^{j}\right),j=0,1,\dots }
One can express the derivative of f in old coordinates in terms of the new coordinates, using the chain rule of the derivative, as
{\displaystyle {\frac {\partial f}{\partial {x}^{i}}}={\frac {\partial f}{\partial {x'}^{j}}}\;{\frac {\partial {x'}^{j}}{\partial {x}^{i}}}}
This is the explicit form of the covariant transformation rule. The notation of a normal derivative with respect to the coordinates sometimes uses a comma, as follows
{\displaystyle f_{,i}\ {\stackrel {\mathrm {def} }{=}}\ {\frac {\partial f}{\partial x^{i}}}}
where the index i is placed as a lower index, because of the covariant transformation.
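The covariant transformation rule above can also be verified symbolically. Below is a minimal sketch (Python with SymPy), assuming a simple linear change of coordinates x′¹ = x¹ + x², x′² = x¹ − x², chosen purely for illustration:
<syntaxhighlight lang="python">
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**2 * x2                      # an arbitrary scalar function

# Assumed change of coordinates (illustration only):
# x'1 = x1 + x2, x'2 = x1 - x2, inverse x1 = (x'1+x'2)/2, x2 = (x'1-x'2)/2.
X1, X2 = sp.symbols('X1 X2')        # stand-ins for x'1, x'2
f_new = f.subs({x1: (X1 + X2)/2, x2: (X1 - X2)/2})

# Right-hand side of the covariant rule:
# (df/dx'^j)(dx'^j/dx^1), with dx'1/dx1 = 1 and dx'2/dx1 = 1 here.
rhs = (sp.diff(f_new, X1) + sp.diff(f_new, X2)).subs({X1: x1 + x2, X2: x1 - x2})

# Left-hand side: derivative taken directly in the old coordinates.
assert sp.simplify(sp.diff(f, x1) - rhs) == 0
</syntaxhighlight>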
=== Basis vectors transform covariantly ===
A vector can be expressed in terms of basis vectors. For a certain coordinate system, we can choose the vectors tangent to the coordinate grid. This basis is called the coordinate basis.
To illustrate the transformation properties, consider again the set of points p, identifiable in a given coordinate system
{\displaystyle x^{i}}
where
{\displaystyle i=0,1,\dots }
(manifold). A scalar function f, that assigns a real number to every point p in this space, is a function of the coordinates
{\displaystyle f\;\left(x^{0},x^{1},\dots \right)}
. A curve is a one-parameter collection of points c, say with curve parameter λ, c(λ). A tangent vector v to the curve is the derivative
{\displaystyle dc/d\lambda }
along the curve with the derivative taken at the point p under consideration. Note that we can see the tangent vector v as an operator (the directional derivative) which can be applied to a function
{\displaystyle \mathbf {v} [f]\ {\stackrel {\mathrm {def} }{=}}\ {\frac {df}{d\lambda }}={\frac {d\;\;}{d\lambda }}f(c(\lambda ))}
The parallel between the tangent vector and the operator can also be worked out in coordinates
{\displaystyle \mathbf {v} [f]={\frac {dx^{i}}{d\lambda }}{\frac {\partial f}{\partial x^{i}}}}
or in terms of operators
{\displaystyle \partial /\partial x^{i}}
{\displaystyle \mathbf {v} ={\frac {dx^{i}}{d\lambda }}{\frac {\partial \;\;}{\partial x^{i}}}={\frac {dx^{i}}{d\lambda }}\mathbf {e} _{i}}
where we have written
{\displaystyle \mathbf {e} _{i}=\partial /\partial x^{i}}
, the tangent vectors to the curves which make up the coordinate grid itself.
If we adopt a new coordinate system
{\displaystyle {x'}^{i},\;i=0,1,\dots }
then for each i, the old coordinate
{\displaystyle {x^{i}}}
can be expressed as a function of the new system, so
{\displaystyle x^{i}\left({x'}^{j}\right),j=0,1,\dots }
Let
{\displaystyle \mathbf {e} '_{i}={\partial }/{\partial {x'}^{i}}}
be the basis of tangent vectors in this new coordinate system. We can express
{\displaystyle \mathbf {e} _{i}}
in the new system by applying the chain rule on x. As a function of coordinates we find the following transformation
{\displaystyle \mathbf {e} '_{i}={\frac {\partial }{\partial {x'}^{i}}}={\frac {\partial x^{j}}{\partial {x'}^{i}}}{\frac {\partial }{\partial x^{j}}}={\frac {\partial x^{j}}{\partial {x'}^{i}}}\mathbf {e} _{j}}
which indeed is the same as the covariant transformation for the derivative of a function.
== Contravariant transformation ==
The components of a (tangent) vector transform in a different way, called contravariant transformation. Consider a tangent vector v and call its components
{\displaystyle v^{i}}
on a basis
{\displaystyle \mathbf {e} _{i}}
. On another basis
{\displaystyle \mathbf {e} '_{i}}
we call the components
{\displaystyle {v'}^{i}}
, so
{\displaystyle \mathbf {v} =v^{i}\mathbf {e} _{i}={v'}^{i}\mathbf {e} '_{i}}
in which
{\displaystyle v^{i}={\frac {dx^{i}}{d\lambda }}\;{\mbox{ and }}\;{v'}^{i}={\frac {d{x'}^{i}}{d\lambda }}}
If we express the new components in terms of the old ones, then
{\displaystyle {v'}^{i}={\frac {d{x'}^{i}}{d\lambda \;\;}}={\frac {\partial {x'}^{i}}{\partial x^{j}}}{\frac {dx^{j}}{d\lambda }}={\frac {\partial {x'}^{i}}{\partial x^{j}}}{v}^{j}}
This is the explicit form of a transformation called the contravariant transformation, and we note that it is precisely the inverse of the covariant rule. In order to distinguish them from the covariant (tangent) vectors, the index is placed on top.
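As a sketch of the contravariant rule in action (Python with SymPy; the curve and the radial coordinate are assumptions for illustration), the r-component of a tangent vector computed directly agrees with the component obtained from the rule:
<syntaxhighlight lang="python">
import sympy as sp

lam = sp.symbols('lam')

# An assumed curve in Cartesian coordinates (illustration only).
x = sp.cos(lam)
y = lam * sp.sin(lam)

# New radial coordinate r(x, y), as in the radial system of the figure.
X, Y = sp.symbols('X Y')
r_of_xy = sp.sqrt(X**2 + Y**2)

vx, vy = sp.diff(x, lam), sp.diff(y, lam)        # old components dx^i/dlam
vr_direct = sp.diff(sp.sqrt(x**2 + y**2), lam)   # new component dr/dlam

# Contravariant rule: v'^r = (dr/dx) v^x + (dr/dy) v^y.
vr_rule = (sp.diff(r_of_xy, X) * vx + sp.diff(r_of_xy, Y) * vy).subs({X: x, Y: y})

assert sp.simplify(vr_direct - vr_rule) == 0
</syntaxhighlight>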
=== Basis differential forms transform contravariantly ===
An example of a contravariant transformation is given by a differential form df. For f as a function of coordinates
{\displaystyle x^{i}}
, df can be expressed in terms of the basis
{\displaystyle dx^{i}}
. The differentials dx transform according to the contravariant rule since
{\displaystyle d{x'}^{i}={\frac {\partial {x'}^{i}}{\partial {x}^{j}}}{dx}^{j}}
== Dual properties ==
Entities that transform covariantly (like basis vectors) and the ones that transform contravariantly (like components of a vector and differential forms) are "almost the same" and yet they are different. They have "dual" properties.
What is behind this is mathematically known as the dual space, which always goes together with a given linear vector space.
Take any vector space T. A function f on T is called linear if, for any vectors v, w and scalar α:
{\displaystyle {\begin{aligned}f(\mathbf {v} +\mathbf {w} )&=f(\mathbf {v} )+f(\mathbf {w} )\\f(\alpha \mathbf {v} )&=\alpha f(\mathbf {v} )\end{aligned}}}
A simple example is the function which assigns a vector the value of one of its components (called a projection function). It has a vector as argument and assigns a real number, the value of a component.
All such scalar-valued linear functions together form a vector space, called the dual space of T. The sum f+g is again a linear function for linear f and g, and the same holds for scalar multiplication αf.
Given a basis
{\displaystyle \mathbf {e} _{i}}
for T, we can define a basis, called the dual basis for the dual space in a natural way by taking the set of linear functions mentioned above: the projection functions. Each projection function (indexed by ω) produces the number 1 when applied to one of the basis vectors
{\displaystyle \mathbf {e} _{i}}
. For example,
{\displaystyle \omega ^{0}}
gives a 1 on
{\displaystyle \mathbf {e} _{0}}
and zero elsewhere. Applying this linear function
{\displaystyle {\omega }^{0}}
to a vector
{\displaystyle \mathbf {v} =v^{i}\mathbf {e} _{i}}
, gives (using its linearity)
{\displaystyle \omega ^{0}(\mathbf {v} )=\omega ^{0}(v^{i}\mathbf {e} _{i})=v^{i}\omega ^{0}(\mathbf {e} _{i})=v^{0}}
so just the value of the first coordinate. For this reason it is called the projection function.
There are as many dual basis vectors
{\displaystyle \omega ^{i}}
as there are basis vectors
{\displaystyle \mathbf {e} _{i}}
, so the dual space has the same dimension as the linear space itself. It is "almost the same space", except that the elements of the dual space (called dual vectors) transform covariantly and the elements of the tangent vector space transform contravariantly.
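A finite-dimensional sketch may help (Python with NumPy; the basis is an arbitrary assumption): if the basis vectors are stored as the columns of a matrix, the dual basis functions are the rows of its inverse.
<syntaxhighlight lang="python">
import numpy as np

# Columns of E are an assumed (non-orthogonal) basis e_0, e_1, e_2 of R^3.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 2.0, 1.0]])

# Rows of W are the dual basis functions omega^0, omega^1, omega^2,
# characterised by omega^i(e_j) = 1 if i == j and 0 otherwise.
W = np.linalg.inv(E)
assert np.allclose(W @ E, np.eye(3))

# Applying omega^0 to v = v^i e_i returns the component v^0.
v_components = np.array([3.0, -1.0, 2.0])
v = E @ v_components
assert np.isclose(W[0] @ v, v_components[0])
</syntaxhighlight>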
Sometimes an extra notation is introduced where the real value of a linear function σ on a tangent vector u is given as
{\displaystyle \sigma [\mathbf {u} ]:=\langle \sigma ,\mathbf {u} \rangle }
where
{\displaystyle \langle \sigma ,\mathbf {u} \rangle }
is a real number. This notation emphasizes the bilinear character of the form. It is linear in σ since that is a linear function and it is linear in u since that is an element of a vector space.
== Co- and contravariant tensor components ==
=== Without coordinates ===
A tensor of type (r, s) may be defined as a real-valued multilinear function of r dual vectors and s vectors. Since vectors and dual vectors may be defined without dependence on a coordinate system, a tensor defined in this way is independent of the choice of a coordinate system.
The notation of a tensor is
{\displaystyle {\begin{aligned}&T\left(\sigma ,\ldots ,\rho ,\mathbf {u} ,\ldots ,\mathbf {v} \right)\\\equiv {}&{T^{\sigma \ldots \rho }}_{\mathbf {u} \ldots \mathbf {v} }\end{aligned}}}
for dual vectors (differential forms) ρ, σ and tangent vectors
{\displaystyle \mathbf {u} ,\mathbf {v} }
. In the second notation the distinction between vectors and differential forms is more obvious.
=== With coordinates ===
Because a tensor depends linearly on its arguments, it is completely determined if one knows the values on a basis
{\displaystyle \omega ^{i}\ldots \omega ^{j}}
and
{\displaystyle \mathbf {e} _{k}\ldots \mathbf {e} _{l}}:
{\displaystyle T(\omega ^{i},\ldots ,\omega ^{j},\mathbf {e} _{k}\ldots \mathbf {e} _{l})={T^{i\ldots j}}_{k\ldots l}}
The numbers
{\displaystyle {T^{i\ldots j}}_{k\ldots l}}
are called the components of the tensor on the chosen basis.
If we choose another basis (whose vectors are linear combinations of the original basis vectors), we can use the linear properties of the tensor and we will find that the components with upper indices transform as dual basis vectors (so contravariantly), whereas the components with lower indices transform as the basis of tangent vectors and are thus covariant. For a tensor of rank 2, we can verify that
{\displaystyle {A'}_{ij}={\frac {\partial x^{l}}{\partial {x'}^{i}}}{\frac {\partial x^{m}}{\partial {x'}^{j}}}A_{lm}}
covariant tensor
{\displaystyle {A'\,}^{ij}={\frac {\partial {x'}^{i}}{\partial x^{l}}}{\frac {\partial {x'}^{j}}{\partial x^{m}}}A^{lm}}
contravariant tensor
For a mixed co- and contravariant tensor of rank 2
{\displaystyle {A'\,}^{i}{}_{j}={\frac {\partial {x'}^{i}}{\partial x^{l}}}{\frac {\partial x^{m}}{\partial {x'}^{j}}}A^{l}{}_{m}}
mixed co- and contravariant tensor
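These component rules can be checked numerically. The sketch below (Python with NumPy; the linear coordinate change and the tensor are arbitrary assumptions) verifies that a covariant rank-2 tensor transformed this way yields the same scalar when evaluated on contravariantly transformed components:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

M = rng.normal(size=(3, 3))      # assumed linear coordinate change x' = M x
Minv = np.linalg.inv(M)          # so that dx^l/dx'^i = Minv[l, i]

A = rng.normal(size=(3, 3))      # covariant components A_lm
u = rng.normal(size=3)           # contravariant components u^l
v = rng.normal(size=3)

# Covariant rule: A'_ij = (dx^l/dx'^i)(dx^m/dx'^j) A_lm
A_new = np.einsum('li,mj,lm->ij', Minv, Minv, A)

# Contravariant rule for components: u'^i = (dx'^i/dx^l) u^l
u_new, v_new = M @ u, M @ v

# The number A(u, v) does not depend on the basis.
assert np.isclose(u @ A @ v, u_new @ A_new @ v_new)
</syntaxhighlight>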
== See also ==
Covariance and contravariance of vectors
General covariance
Lorentz covariance
== References ==
In computer vision, the trifocal tensor (also tritensor) is a 3×3×3 array of numbers (i.e., a tensor) that incorporates all projective geometric relationships among three views. It relates the coordinates of corresponding points or lines in three views, being independent of the scene structure and depending only on the relative motion (i.e., pose) among the three views and their intrinsic calibration parameters. Although the tensor is made up of 27 elements, only 18 of them are actually independent. Hence, the trifocal tensor can be considered as the generalization of the fundamental matrix in three views.
There is also a so-called calibrated trifocal tensor, which relates the coordinates of points and lines in three views given their intrinsic parameters and encodes the relative pose of the cameras up to global scale, totalling 11 independent elements or degrees of freedom. The reduced degrees of freedom allow for fewer correspondences to fit the model, at the cost of increased nonlinearity.
== Correlation slices ==
The tensor can also be seen as a collection of three rank-two 3 × 3 matrices
{\displaystyle {\mathbf {T} }_{1},\;{\mathbf {T} }_{2},\;{\mathbf {T} }_{3}}
known as its correlation slices. Assuming that the projection matrices of three views are
{\displaystyle {\mathbf {P} }=[{\mathbf {I} }\;|\;{\mathbf {0} }]}
,
{\displaystyle {\mathbf {P} }'=[{\mathbf {A} }\;|\;{\mathbf {a} }_{4}]}
and
{\displaystyle {\mathbf {P} ''}=[{\mathbf {B} }\;|\;{\mathbf {b} }_{4}]}
, the correlation slices of the corresponding tensor can be expressed in closed form as
{\displaystyle {\mathbf {T} }_{i}={\mathbf {a} }_{i}{\mathbf {b} }_{4}^{t}-{\mathbf {a} }_{4}{\mathbf {b} }_{i}^{t},\;i=1\ldots 3}
, where
{\displaystyle {\mathbf {a} }_{i},\;{\mathbf {b} }_{i}}
are respectively the ith columns of the camera matrices. In practice, however, the tensor is estimated from point and line matches across the three views.
== Trilinear constraints ==
One of the most important properties of the trifocal tensor is that it gives rise to linear relationships between lines and points in three images. More specifically, for triplets of corresponding points
{\displaystyle {\mathbf {x} }\;\leftrightarrow \;{\mathbf {x} }'\;\leftrightarrow \;{\mathbf {x} }''}
and any corresponding lines
{\displaystyle {\mathbf {l} }\;\leftrightarrow \;{\mathbf {l} }'\;\leftrightarrow \;{\mathbf {l} }''}
through them, the following trilinear constraints hold:
{\displaystyle ({\mathbf {l} }^{\prime t}\left[{\mathbf {T} }_{1},\;{\mathbf {T} }_{2},\;{\mathbf {T} }_{3}\right]{\mathbf {l} }'')[{\mathbf {l} }]_{\times }={\mathbf {0} }^{t}}
{\displaystyle {\mathbf {l} }^{\prime t}\left(\sum _{i}x_{i}{\mathbf {T} }_{i}\right){\mathbf {l} }''=0}
{\displaystyle {\mathbf {l} }^{\prime t}\left(\sum _{i}x_{i}{\mathbf {T} }_{i}\right)[{\mathbf {x} }'']_{\times }={\mathbf {0} }^{t}}
{\displaystyle [{\mathbf {x} }']_{\times }\left(\sum _{i}x_{i}{\mathbf {T} }_{i}\right){\mathbf {l} }''={\mathbf {0} }}
{\displaystyle [{\mathbf {x} }']_{\times }\left(\sum _{i}x_{i}{\mathbf {T} }_{i}\right)[{\mathbf {x} }'']_{\times }={\mathbf {0} }_{3\times 3}}
where
{\displaystyle [\cdot ]_{\times }}
denotes the skew-symmetric cross product matrix.
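The closed-form slices and the last of these constraints can be sanity-checked numerically. The following sketch (Python with NumPy; the cameras and the 3D point are random assumptions, and this is illustrative rather than an estimation method) builds the tensor from canonical cameras and verifies the point–point–point trilinearity:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

def skew(v):
    """Skew-symmetric matrix [v]_x such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Assumed random cameras in canonical form P = [I|0], P' = [A|a4], P'' = [B|b4].
A, a4 = rng.normal(size=(3, 3)), rng.normal(size=3)
B, b4 = rng.normal(size=(3, 3)), rng.normal(size=3)

# Correlation slices T_i = a_i b4^t - a4 b_i^t (a_i, b_i: i-th columns of A, B).
T = [np.outer(A[:, i], b4) - np.outer(a4, B[:, i]) for i in range(3)]

# Project a random 3D point into the three views.
X = rng.normal(size=4)
x = X[:3]                              # P X with P = [I | 0]
xp = np.hstack([A, a4[:, None]]) @ X   # P' X
xpp = np.hstack([B, b4[:, None]]) @ X  # P'' X

# Point-point-point trilinearity: [x']_x (sum_i x_i T_i) [x'']_x = 0_{3x3}.
S = sum(x[i] * T[i] for i in range(3))
assert np.allclose(skew(xp) @ S @ skew(xpp), np.zeros((3, 3)), atol=1e-10)
</syntaxhighlight>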
== Transfer ==
Given the trifocal tensor of three views and a pair of matched points in two views, it is possible to determine the location of the point in the third view without any further information. This is known as point transfer, and a similar result holds for lines and conics. For general curves, the transfer can be realized through a local differential curve model of osculating circles (i.e., curvature), which can then be transferred as conics. The transfer of third-order models reflecting space torsion using calibrated trifocal tensors has been studied, but remains an open problem for uncalibrated trifocal tensors.
== Estimation ==
=== Uncalibrated ===
The classical case is estimation from 6 point correspondences, which gives 3 solutions.
The case estimating the trifocal tensor from 9 line correspondences has only recently been solved.
=== Calibrated ===
Estimating the calibrated trifocal tensor has been cited as notoriously difficult, and requires 4 point correspondences.
The case of using only three point correspondences has recently been solved, where the points are attributed with tangent directions or incident lines. With only two of the points having incident lines, this is a minimal problem of degree 312 (so there can be at most 312 solutions); it is relevant for the case of general curves (whose points have tangents) and for feature points with attributed directions (such as SIFT directions). The same technique solved the mixed case of three point correspondences and one line correspondence, which has also been shown to be minimal, with degree 216.
== References ==
== Further reading ==
Hartley, Richard I. (1997). "Lines and Points in Three Views and the Trifocal Tensor". International Journal of Computer Vision. 22 (2): 125–140. doi:10.1023/A:1007936012022. S2CID 8979544.
Torr, P. H. S.; Zisserman, A. (1997). "Robust Parameterization and Computation of the Trifocal Tensor". Image and Vision Computing. 15 (8): 591–607. CiteSeerX 10.1.1.41.3172. doi:10.1016/S0262-8856(97)00010-3.
== External links ==
Visualization of trifocal geometry (originally by Sylvain Bougnoux of INRIA Robotvis, requires Java)
=== Algorithms ===
Matlab implementation of the uncalibrated trifocal tensor estimation and comparison to pairwise fundamental matrices
C++ implementation of the calibrated trifocal tensor estimation using optimized Homotopy Continuation code. Presently includes cases of three corresponding points with lines at these points (as in feature positions and orientations, or curve points with tangents), and also for three corresponding points and one line correspondence.
In geometry and linear algebra, a Cartesian tensor uses an orthonormal basis to represent a tensor in a Euclidean space in the form of components. Converting a tensor's components from one such basis to another is done through an orthogonal transformation.
The most familiar coordinate systems are the two-dimensional and three-dimensional Cartesian coordinate systems. Cartesian tensors may be used with any Euclidean space, or more technically, any finite-dimensional vector space over the field of real numbers that has an inner product.
Use of Cartesian tensors occurs in physics and engineering, such as with the Cauchy stress tensor and the moment of inertia tensor in rigid body dynamics. Sometimes general curvilinear coordinates are convenient, as in high-deformation continuum mechanics, or even necessary, as in general relativity. While orthonormal bases may be found for some such coordinate systems (e.g. tangent to spherical coordinates), Cartesian tensors may provide considerable simplification for applications in which rotations of rectilinear coordinate axes suffice. The transformation is a passive transformation, since the coordinates are changed and not the physical system.
== Cartesian basis and related terminology ==
=== Vectors in three dimensions ===
In 3D Euclidean space,
{\displaystyle \mathbb {R} ^{3}}
, the standard basis is ex, ey, ez. Each basis vector points along the x-, y-, and z-axes, and the vectors are all unit vectors (or normalized), so the basis is orthonormal.
Throughout, when referring to Cartesian coordinates in three dimensions, a right-handed system is assumed; this is much more common than a left-handed system in practice. See orientation (vector space) for details.
For Cartesian tensors of order 1, a Cartesian vector a can be written algebraically as a linear combination of the basis vectors ex, ey, ez:
{\displaystyle \mathbf {a} =a_{\text{x}}\mathbf {e} _{\text{x}}+a_{\text{y}}\mathbf {e} _{\text{y}}+a_{\text{z}}\mathbf {e} _{\text{z}}}
where the coordinates of the vector with respect to the Cartesian basis are denoted ax, ay, az. It is common and helpful to display the basis vectors as column vectors
{\displaystyle \mathbf {e} _{\text{x}}={\begin{pmatrix}1\\0\\0\end{pmatrix}}\,,\quad \mathbf {e} _{\text{y}}={\begin{pmatrix}0\\1\\0\end{pmatrix}}\,,\quad \mathbf {e} _{\text{z}}={\begin{pmatrix}0\\0\\1\end{pmatrix}}}
when we have a coordinate vector in a column vector representation:
{\displaystyle \mathbf {a} ={\begin{pmatrix}a_{\text{x}}\\a_{\text{y}}\\a_{\text{z}}\end{pmatrix}}}
A row vector representation is also legitimate, although in the context of general curvilinear coordinate systems the row and column vector representations are used separately for specific reasons – see Einstein notation and covariance and contravariance of vectors for why.
The term "component" of a vector is ambiguous: it could refer to:
a specific coordinate of the vector such as az (a scalar), and similarly for x and y, or
the coordinate scalar-multiplying the corresponding basis vector, in which case the "y-component" of a is ayey (a vector), and similarly for x and z.
A more general notation is tensor index notation, which has the flexibility of numerical values rather than fixed coordinate labels. The Cartesian labels are replaced by tensor indices in the basis vectors ex ↦ e1, ey ↦ e2, ez ↦ e3 and coordinates ax ↦ a1, ay ↦ a2, az ↦ a3. In general, the notation e1, e2, e3 refers to any basis, and a1, a2, a3 refers to the corresponding coordinate system; although here they are restricted to the Cartesian system. Then:
{\displaystyle \mathbf {a} =a_{1}\mathbf {e} _{1}+a_{2}\mathbf {e} _{2}+a_{3}\mathbf {e} _{3}=\sum _{i=1}^{3}a_{i}\mathbf {e} _{i}}
It is standard to use the Einstein notation—the summation sign for summation over an index that is present exactly twice within a term may be suppressed for notational conciseness:
{\displaystyle \mathbf {a} =\sum _{i=1}^{3}a_{i}\mathbf {e} _{i}\equiv a_{i}\mathbf {e} _{i}}
An advantage of the index notation over coordinate-specific notations is the independence of the dimension of the underlying vector space, i.e. the same expression on the right hand side takes the same form in higher dimensions (see below). Previously, the Cartesian labels x, y, z were just labels and not indices. (It is informal to say "i = x, y, z").
=== Second-order tensors in three dimensions ===
A dyadic tensor T is an order-2 tensor formed by the tensor product ⊗ of two Cartesian vectors a and b, written T = a ⊗ b. Analogous to vectors, it can be written as a linear combination of the tensor basis ex ⊗ ex ≡ exx, ex ⊗ ey ≡ exy, ..., ez ⊗ ez ≡ ezz (the right-hand side of each identity is only an abbreviation, nothing more):
{\displaystyle {\begin{aligned}\mathbf {T} =\quad &\left(a_{\text{x}}\mathbf {e} _{\text{x}}+a_{\text{y}}\mathbf {e} _{\text{y}}+a_{\text{z}}\mathbf {e} _{\text{z}}\right)\otimes \left(b_{\text{x}}\mathbf {e} _{\text{x}}+b_{\text{y}}\mathbf {e} _{\text{y}}+b_{\text{z}}\mathbf {e} _{\text{z}}\right)\\[5pt]{}=\quad &a_{\text{x}}b_{\text{x}}\mathbf {e} _{\text{x}}\otimes \mathbf {e} _{\text{x}}+a_{\text{x}}b_{\text{y}}\mathbf {e} _{\text{x}}\otimes \mathbf {e} _{\text{y}}+a_{\text{x}}b_{\text{z}}\mathbf {e} _{\text{x}}\otimes \mathbf {e} _{\text{z}}\\[4pt]{}+{}&a_{\text{y}}b_{\text{x}}\mathbf {e} _{\text{y}}\otimes \mathbf {e} _{\text{x}}+a_{\text{y}}b_{\text{y}}\mathbf {e} _{\text{y}}\otimes \mathbf {e} _{\text{y}}+a_{\text{y}}b_{\text{z}}\mathbf {e} _{\text{y}}\otimes \mathbf {e} _{\text{z}}\\[4pt]{}+{}&a_{\text{z}}b_{\text{x}}\mathbf {e} _{\text{z}}\otimes \mathbf {e} _{\text{x}}+a_{\text{z}}b_{\text{y}}\mathbf {e} _{\text{z}}\otimes \mathbf {e} _{\text{y}}+a_{\text{z}}b_{\text{z}}\mathbf {e} _{\text{z}}\otimes \mathbf {e} _{\text{z}}\end{aligned}}}
Representing each basis tensor as a matrix:
{\displaystyle {\begin{aligned}\mathbf {e} _{\text{x}}\otimes \mathbf {e} _{\text{x}}&\equiv \mathbf {e} _{\text{xx}}={\begin{pmatrix}1&0&0\\0&0&0\\0&0&0\end{pmatrix}}\,,&\mathbf {e} _{\text{x}}\otimes \mathbf {e} _{\text{y}}&\equiv \mathbf {e} _{\text{xy}}={\begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix}}\,,&\mathbf {e} _{\text{z}}\otimes \mathbf {e} _{\text{z}}&\equiv \mathbf {e} _{\text{zz}}={\begin{pmatrix}0&0&0\\0&0&0\\0&0&1\end{pmatrix}}\end{aligned}}}
then T can be represented more systematically as a matrix:
{\displaystyle \mathbf {T} ={\begin{pmatrix}a_{\text{x}}b_{\text{x}}&a_{\text{x}}b_{\text{y}}&a_{\text{x}}b_{\text{z}}\\a_{\text{y}}b_{\text{x}}&a_{\text{y}}b_{\text{y}}&a_{\text{y}}b_{\text{z}}\\a_{\text{z}}b_{\text{x}}&a_{\text{z}}b_{\text{y}}&a_{\text{z}}b_{\text{z}}\end{pmatrix}}}
See matrix multiplication for the notational correspondence between matrices and the dot and tensor products.
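In components, the dyadic T = a ⊗ b is exactly the outer product of the coordinate columns; a minimal sketch (Python with NumPy, arbitrary example vectors):
<syntaxhighlight lang="python">
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # components (a_x, a_y, a_z), arbitrary
b = np.array([4.0, 5.0, 6.0])   # components (b_x, b_y, b_z), arbitrary

T = np.outer(a, b)              # T_ij = a_i b_j, the matrix shown above
assert np.isclose(T[0, 2], a[0] * b[2])   # e.g. T_xz = a_x b_z
</syntaxhighlight>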
More generally, whether or not T is a tensor product of two vectors, it is always a linear combination of the basis tensors with coordinates Txx, Txy, ..., Tzz:
{\displaystyle {\begin{aligned}\mathbf {T} =\quad &T_{\text{xx}}\mathbf {e} _{\text{xx}}+T_{\text{xy}}\mathbf {e} _{\text{xy}}+T_{\text{xz}}\mathbf {e} _{\text{xz}}\\[4pt]{}+{}&T_{\text{yx}}\mathbf {e} _{\text{yx}}+T_{\text{yy}}\mathbf {e} _{\text{yy}}+T_{\text{yz}}\mathbf {e} _{\text{yz}}\\[4pt]{}+{}&T_{\text{zx}}\mathbf {e} _{\text{zx}}+T_{\text{zy}}\mathbf {e} _{\text{zy}}+T_{\text{zz}}\mathbf {e} _{\text{zz}}\end{aligned}}}
while in terms of tensor indices:
{\displaystyle \mathbf {T} =T_{ij}\mathbf {e} _{ij}\equiv \sum _{ij}T_{ij}\mathbf {e} _{i}\otimes \mathbf {e} _{j}\,,}
and in matrix form:
{\displaystyle \mathbf {T} ={\begin{pmatrix}T_{\text{xx}}&T_{\text{xy}}&T_{\text{xz}}\\T_{\text{yx}}&T_{\text{yy}}&T_{\text{yz}}\\T_{\text{zx}}&T_{\text{zy}}&T_{\text{zz}}\end{pmatrix}}}
Second-order tensors occur naturally in physics and engineering when physical quantities have directional dependence in the system, often in a "stimulus-response" way. This can be mathematically seen through one aspect of tensors – they are multilinear functions. A second-order tensor T which takes in a vector u of some magnitude and direction will return a vector v, of a different magnitude and in a different direction to u, in general. The notation used for functions in mathematical analysis leads us to write v = T(u), while the same idea can be expressed in matrix and index notations (including the summation convention), respectively:
{\displaystyle {\begin{aligned}{\begin{pmatrix}v_{\text{x}}\\v_{\text{y}}\\v_{\text{z}}\end{pmatrix}}&={\begin{pmatrix}T_{\text{xx}}&T_{\text{xy}}&T_{\text{xz}}\\T_{\text{yx}}&T_{\text{yy}}&T_{\text{yz}}\\T_{\text{zx}}&T_{\text{zy}}&T_{\text{zz}}\end{pmatrix}}{\begin{pmatrix}u_{\text{x}}\\u_{\text{y}}\\u_{\text{z}}\end{pmatrix}}\,,&v_{i}&=T_{ij}u_{j}\end{aligned}}}
By "linear", if u = ρr + σs for two scalars ρ and σ and vectors r and s, then in function and index notations:
{\displaystyle {\begin{aligned}\mathbf {v} &=&&\mathbf {T} (\rho \mathbf {r} +\sigma \mathbf {s} )&=&&\rho \mathbf {T} (\mathbf {r} )+\sigma \mathbf {T} (\mathbf {s} )\\[1ex]v_{i}&=&&T_{ij}(\rho r_{j}+\sigma s_{j})&=&&\rho T_{ij}r_{j}+\sigma T_{ij}s_{j}\end{aligned}}}
and similarly for the matrix notation. The function, matrix, and index notations all mean the same thing. The matrix forms provide a clear display of the components, while the index form allows easier tensor-algebraic manipulation of the formulae in a compact manner. Both provide the physical interpretation of directions; vectors have one direction, while second-order tensors connect two directions together. One can associate a tensor index or coordinate label with a basis vector direction.
Second-order tensors are the minimum needed to describe changes in the magnitudes and directions of vectors: the dot product of two vectors is always a scalar, while the cross product of two vectors is always a pseudovector perpendicular to the plane defined by the vectors, so these products of vectors alone cannot produce a new vector of arbitrary magnitude in an arbitrary direction. (See also below for more on the dot and cross products). The tensor product of two vectors is a second-order tensor, although this has no obvious directional interpretation by itself.
The previous idea can be continued: if T takes in two vectors p and q, it will return a scalar r. In function notation we write r = T(p, q), while in matrix and index notations (including the summation convention) respectively:
{\displaystyle r={\begin{pmatrix}p_{\text{x}}&p_{\text{y}}&p_{\text{z}}\end{pmatrix}}{\begin{pmatrix}T_{\text{xx}}&T_{\text{xy}}&T_{\text{xz}}\\T_{\text{yx}}&T_{\text{yy}}&T_{\text{yz}}\\T_{\text{zx}}&T_{\text{zy}}&T_{\text{zz}}\end{pmatrix}}{\begin{pmatrix}q_{\text{x}}\\q_{\text{y}}\\q_{\text{z}}\end{pmatrix}}=p_{i}T_{ij}q_{j}}
The tensor T is linear in both input vectors. When vectors and tensors are written without reference to components, and indices are not used, sometimes a dot ⋅ is placed where summations over indices (known as tensor contractions) are taken. For the above cases:
{\displaystyle {\begin{aligned}\mathbf {v} &=\mathbf {T} \cdot \mathbf {u} \\r&=\mathbf {p} \cdot \mathbf {T} \cdot \mathbf {q} \end{aligned}}}
motivated by the dot product notation:
{\displaystyle \mathbf {a} \cdot \mathbf {b} \equiv a_{i}b_{i}}
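On components, these contractions are ordinary matrix products; a minimal sketch (Python with NumPy, arbitrary example values):
<syntaxhighlight lang="python">
import numpy as np

T = np.arange(9.0).reshape(3, 3)   # components T_ij, arbitrary
u = np.array([1.0, 0.0, 2.0])
p = np.array([0.5, 1.0, -1.0])
q = np.array([2.0, 1.0, 0.0])

v = np.einsum('ij,j->i', T, u)       # v_i = T_ij u_j, i.e. v = T . u
r = np.einsum('i,ij,j->', p, T, q)   # r = p_i T_ij q_j, i.e. r = p . T . q

assert np.allclose(v, T @ u)
assert np.isclose(r, p @ (T @ q))
</syntaxhighlight>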
More generally, a tensor of order m which takes in n vectors (where n is between 0 and m inclusive) will return a tensor of order m − n; see Tensor § As multilinear maps for further generalizations and details. The concepts above also apply to pseudovectors in the same way as for vectors. The vectors and tensors themselves can vary throughout space, in which case we have vector fields and tensor fields, and they can also depend on time.
Following are some examples. For the electrical conduction example, in which the conductivity tensor σ relates the current density J to the applied electric field E, the index and matrix notations would be:
{\displaystyle {\begin{aligned}J_{i}&=\sigma _{ij}E_{j}\equiv \sum _{j}\sigma _{ij}E_{j}\\{\begin{pmatrix}J_{\text{x}}\\J_{\text{y}}\\J_{\text{z}}\end{pmatrix}}&={\begin{pmatrix}\sigma _{\text{xx}}&\sigma _{\text{xy}}&\sigma _{\text{xz}}\\\sigma _{\text{yx}}&\sigma _{\text{yy}}&\sigma _{\text{yz}}\\\sigma _{\text{zx}}&\sigma _{\text{zy}}&\sigma _{\text{zz}}\end{pmatrix}}{\begin{pmatrix}E_{\text{x}}\\E_{\text{y}}\\E_{\text{z}}\end{pmatrix}}\end{aligned}}}
while for the rotational kinetic energy T:
{\displaystyle {\begin{aligned}T&={\frac {1}{2}}\omega _{i}I_{ij}\omega _{j}\equiv {\frac {1}{2}}\sum _{ij}\omega _{i}I_{ij}\omega _{j}\,,\\&={\frac {1}{2}}{\begin{pmatrix}\omega _{\text{x}}&\omega _{\text{y}}&\omega _{\text{z}}\end{pmatrix}}{\begin{pmatrix}I_{\text{xx}}&I_{\text{xy}}&I_{\text{xz}}\\I_{\text{yx}}&I_{\text{yy}}&I_{\text{yz}}\\I_{\text{zx}}&I_{\text{zy}}&I_{\text{zz}}\end{pmatrix}}{\begin{pmatrix}\omega _{\text{x}}\\\omega _{\text{y}}\\\omega _{\text{z}}\end{pmatrix}}\,.\end{aligned}}}
See also constitutive equation for more specialized examples.
=== Vectors and tensors in n dimensions ===
In n-dimensional Euclidean space over the real numbers,
{\displaystyle \mathbb {R} ^{n}}
, the standard basis is denoted e1, e2, e3, ... en. Each basis vector ei points along the positive xi axis, with the basis being orthonormal. Component j of ei is given by the Kronecker delta:
{\displaystyle (\mathbf {e} _{i})_{j}=\delta _{ij}}
A vector in
{\displaystyle \mathbb {R} ^{n}}
takes the form:
{\displaystyle \mathbf {a} =a_{i}\mathbf {e} _{i}\equiv \sum _{i}a_{i}\mathbf {e} _{i}\,.}
Similarly for the order-2 tensor above, for each vector a and b in
{\displaystyle \mathbb {R} ^{n}}
:
{\displaystyle \mathbf {T} =a_{i}b_{j}\mathbf {e} _{ij}\equiv \sum _{ij}a_{i}b_{j}\mathbf {e} _{i}\otimes \mathbf {e} _{j}\,,}
or more generally:
{\displaystyle \mathbf {T} =T_{ij}\mathbf {e} _{ij}\equiv \sum _{ij}T_{ij}\mathbf {e} _{i}\otimes \mathbf {e} _{j}\,.}
== Transformations of Cartesian vectors (any number of dimensions) ==
=== Meaning of "invariance" under coordinate transformations ===
The position vector x in
{\displaystyle \mathbb {R} ^{n}}
is a simple and common example of a vector, and can be represented in any coordinate system. Consider the case of rectangular coordinate systems with orthonormal bases only. It is possible to have a coordinate system with rectangular geometry if the basis vectors are all mutually perpendicular and not normalized, in which case the basis is orthogonal but not orthonormal. However, orthonormal bases are easier to manipulate and are often used in practice. The following results hold for orthonormal bases, not for merely orthogonal ones.
In one rectangular coordinate system, x as a contravector has coordinates xi and basis vectors ei, while as a covector it has coordinates xi and basis covectors ei, and we have:
{\displaystyle {\begin{aligned}\mathbf {x} &=x^{i}\mathbf {e} _{i}\,,&\mathbf {x} &=x_{i}\mathbf {e} ^{i}\end{aligned}}}
In another rectangular coordinate system, x as a contravector has coordinates x̄i and basis vectors ēi, while as a covector it has coordinates x̄i and basis covectors ēi, and we have:
{\displaystyle {\begin{aligned}\mathbf {x} &={\bar {x}}^{i}{\bar {\mathbf {e} }}_{i}\,,&\mathbf {x} &={\bar {x}}_{i}{\bar {\mathbf {e} }}^{i}\end{aligned}}}
Each new coordinate is a function of all the old ones, and vice versa for the inverse function:
{\displaystyle {\begin{aligned}{\bar {x}}{}^{i}={\bar {x}}{}^{i}\left(x^{1},x^{2},\ldots \right)\quad &\rightleftharpoons \quad x{}^{i}=x{}^{i}\left({\bar {x}}^{1},{\bar {x}}^{2},\ldots \right)\\{\bar {x}}{}_{i}={\bar {x}}{}_{i}\left(x_{1},x_{2},\ldots \right)\quad &\rightleftharpoons \quad x{}_{i}=x{}_{i}\left({\bar {x}}_{1},{\bar {x}}_{2},\ldots \right)\end{aligned}}}
and similarly each new basis vector is a function of all the old ones, and vice versa for the inverse function:
{\displaystyle {\begin{aligned}{\bar {\mathbf {e} }}{}_{j}={\bar {\mathbf {e} }}{}_{j}\left(\mathbf {e} _{1},\mathbf {e} _{2},\ldots \right)\quad &\rightleftharpoons \quad \mathbf {e} {}_{j}=\mathbf {e} {}_{j}\left({\bar {\mathbf {e} }}_{1},{\bar {\mathbf {e} }}_{2},\ldots \right)\\{\bar {\mathbf {e} }}{}^{j}={\bar {\mathbf {e} }}{}^{j}\left(\mathbf {e} ^{1},\mathbf {e} ^{2},\ldots \right)\quad &\rightleftharpoons \quad \mathbf {e} {}^{j}=\mathbf {e} {}^{j}\left({\bar {\mathbf {e} }}^{1},{\bar {\mathbf {e} }}^{2},\ldots \right)\end{aligned}}}
for all i, j.
A vector is invariant under any change of basis, so if coordinates transform according to a transformation matrix L, the bases transform according to the matrix inverse L−1, and conversely if the coordinates transform according to inverse L−1, the bases transform according to the matrix L. The difference between each of these transformations is shown conventionally through the indices as superscripts for contravariance and subscripts for covariance, and the coordinates and bases are linearly transformed according to the following rules:
where Lij represents the entries of the transformation matrix (row number i and column number j) and (L−1)ik denotes the entries of the inverse of the matrix L.
If L is an orthogonal transformation (orthogonal matrix), the objects transforming by it are defined as Cartesian tensors. This geometrically has the interpretation that a rectangular coordinate system is mapped to another rectangular coordinate system, in which the norm of the vector x is preserved (and distances are preserved).
The determinant of L is det(L) = ±1, which corresponds to two types of orthogonal transformation: (+1) for rotations and (−1) for improper rotations (including reflections).
There are considerable algebraic simplifications; from the definition of an orthogonal transformation, the matrix transpose is the inverse:
{\displaystyle {\boldsymbol {\mathsf {L}}}^{\textsf {T}}={\boldsymbol {\mathsf {L}}}^{-1}\Rightarrow \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{i}{}^{j}=\left({\boldsymbol {\mathsf {L}}}^{\textsf {T}}\right)_{i}{}^{j}=({\boldsymbol {\mathsf {L}}})^{j}{}_{i}={\mathsf {L}}^{j}{}_{i}}
From the previous table, orthogonal transformations of covectors and contravectors are identical. There is then no need to distinguish between raising and lowering indices, and in this context and in applications to physics and engineering the indices are usually all subscripted to avoid confusion with exponents. All indices will be lowered in the remainder of this article. One can determine the actual raised and lowered indices by considering which quantities are covectors or contravectors, and the relevant transformation rules.
Exactly the same transformation rules apply to any vector a, not only the position vector. If its components ai do not transform according to the rules, a is not a vector.
Despite the similarity between the expressions above, for the change of coordinates such as x̄j = xi Lij, and the action of a tensor on a vector like bi = Tij aj, L is not a tensor, but T is. In the change of coordinates, L is a matrix used to relate two rectangular coordinate systems with orthonormal bases. For the tensor relating a vector to a vector, the vectors and tensors throughout the equation all belong to the same coordinate system and basis.
=== Derivatives and Jacobian matrix elements ===
The entries of L are partial derivatives of the new or old coordinates with respect to the old or new coordinates, respectively.
Differentiating xi with respect to xk:
{\displaystyle {\frac {\partial {\bar {x}}_{i}}{\partial x_{k}}}={\frac {\partial }{\partial x_{k}}}(x_{j}{\mathsf {L}}_{ji})={\mathsf {L}}_{ji}{\frac {\partial x_{j}}{\partial x_{k}}}=\delta _{kj}{\mathsf {L}}_{ji}={\mathsf {L}}_{ki}}
so
{\displaystyle {{\mathsf {L}}_{i}}^{j}\equiv {\mathsf {L}}_{ij}={\frac {\partial {\bar {x}}_{j}}{\partial x_{i}}}}
is an element of the Jacobian matrix. There is a (partially mnemonic) correspondence between index positions attached to L and in the partial derivative: i at the top and j at the bottom, in each case, although for Cartesian tensors the indices can be lowered.
Conversely, differentiating xj with respect to xi:
{\displaystyle {\frac {\partial x_{j}}{\partial {\bar {x}}_{k}}}={\frac {\partial }{\partial {\bar {x}}_{k}}}\left({\bar {x}}_{i}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{ij}\right)={\frac {\partial {\bar {x}}_{i}}{\partial {\bar {x}}_{k}}}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{ij}=\delta _{ki}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{ij}=\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{kj}}
so
{\displaystyle \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{i}{}^{j}\equiv \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{ij}={\frac {\partial x_{j}}{\partial {\bar {x}}_{i}}}}
is an element of the inverse Jacobian matrix, with a similar index correspondence.
Many sources state transformations in terms of these partial derivatives, and the explicit matrix equations in 3d are:
{\displaystyle {\begin{aligned}{\bar {\mathbf {x} }}&={\boldsymbol {\mathsf {L}}}\mathbf {x} \\{\begin{pmatrix}{\bar {x}}_{1}\\{\bar {x}}_{2}\\{\bar {x}}_{3}\end{pmatrix}}&={\begin{pmatrix}{\frac {\partial {\bar {x}}_{1}}{\partial x_{1}}}&{\frac {\partial {\bar {x}}_{1}}{\partial x_{2}}}&{\frac {\partial {\bar {x}}_{1}}{\partial x_{3}}}\\{\frac {\partial {\bar {x}}_{2}}{\partial x_{1}}}&{\frac {\partial {\bar {x}}_{2}}{\partial x_{2}}}&{\frac {\partial {\bar {x}}_{2}}{\partial x_{3}}}\\{\frac {\partial {\bar {x}}_{3}}{\partial x_{1}}}&{\frac {\partial {\bar {x}}_{3}}{\partial x_{2}}}&{\frac {\partial {\bar {x}}_{3}}{\partial x_{3}}}\end{pmatrix}}{\begin{pmatrix}x_{1}\\x_{2}\\x_{3}\end{pmatrix}}\end{aligned}}}
similarly for
{\displaystyle \mathbf {x} ={\boldsymbol {\mathsf {L}}}^{-1}{\bar {\mathbf {x} }}={\boldsymbol {\mathsf {L}}}^{\textsf {T}}{\bar {\mathbf {x} }}}
=== Projections along coordinate axes ===
As with all linear transformations, L depends on the basis chosen. For two orthonormal bases
{\displaystyle {\begin{aligned}{\bar {\mathbf {e} }}_{i}\cdot {\bar {\mathbf {e} }}_{j}&=\mathbf {e} _{i}\cdot \mathbf {e} _{j}=\delta _{ij}\,,&\left|\mathbf {e} _{i}\right|&=\left|{\bar {\mathbf {e} }}_{i}\right|=1\,,\end{aligned}}}
projecting x onto the x̄ axes:
{\displaystyle {\bar {x}}_{i}={\bar {\mathbf {e} }}_{i}\cdot \mathbf {x} ={\bar {\mathbf {e} }}_{i}\cdot x_{j}\mathbf {e} _{j}=x_{j}{\mathsf {L}}_{ij}\,,}
projecting x onto the x axes:
{\displaystyle x_{i}=\mathbf {e} _{i}\cdot \mathbf {x} =\mathbf {e} _{i}\cdot {\bar {x}}_{j}{\bar {\mathbf {e} }}_{j}={\bar {x}}_{j}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{ij}\,.}
Hence the components reduce to direction cosines between the x̄i and xj axes:
{\displaystyle {\begin{aligned}{\mathsf {L}}_{ij}&={\bar {\mathbf {e} }}_{i}\cdot \mathbf {e} _{j}=\cos \theta _{ij}\\\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{ij}&=\mathbf {e} _{i}\cdot {\bar {\mathbf {e} }}_{j}=\cos \theta _{ji}\end{aligned}}}
where θij and θji are the angles between the x̄i and xj axes. In general, θij is not equal to θji, because for example θ12 and θ21 are two different angles.
The transformation of coordinates can then be written in terms of these direction cosines, and the explicit matrix equations in 3d are:
{\displaystyle {\begin{aligned}{\bar {\mathbf {x} }}&={\boldsymbol {\mathsf {L}}}\mathbf {x} \\{\begin{pmatrix}{\bar {x}}_{1}\\{\bar {x}}_{2}\\{\bar {x}}_{3}\end{pmatrix}}&={\begin{pmatrix}{\bar {\mathbf {e} }}_{1}\cdot \mathbf {e} _{1}&{\bar {\mathbf {e} }}_{1}\cdot \mathbf {e} _{2}&{\bar {\mathbf {e} }}_{1}\cdot \mathbf {e} _{3}\\{\bar {\mathbf {e} }}_{2}\cdot \mathbf {e} _{1}&{\bar {\mathbf {e} }}_{2}\cdot \mathbf {e} _{2}&{\bar {\mathbf {e} }}_{2}\cdot \mathbf {e} _{3}\\{\bar {\mathbf {e} }}_{3}\cdot \mathbf {e} _{1}&{\bar {\mathbf {e} }}_{3}\cdot \mathbf {e} _{2}&{\bar {\mathbf {e} }}_{3}\cdot \mathbf {e} _{3}\end{pmatrix}}{\begin{pmatrix}x_{1}\\x_{2}\\x_{3}\end{pmatrix}}={\begin{pmatrix}\cos \theta _{11}&\cos \theta _{12}&\cos \theta _{13}\\\cos \theta _{21}&\cos \theta _{22}&\cos \theta _{23}\\\cos \theta _{31}&\cos \theta _{32}&\cos \theta _{33}\end{pmatrix}}{\begin{pmatrix}x_{1}\\x_{2}\\x_{3}\end{pmatrix}}\end{aligned}}}
similarly for
{\displaystyle \mathbf {x} ={\boldsymbol {\mathsf {L}}}^{-1}{\bar {\mathbf {x} }}={\boldsymbol {\mathsf {L}}}^{\textsf {T}}{\bar {\mathbf {x} }}}
The geometric interpretation is that each x̄i component equals the sum of the projections of the xj components onto the x̄i axis.
The numbers ei⋅ej arranged into a matrix would form a symmetric matrix (a matrix equal to its own transpose) due to the symmetry in the dot products; in fact it is the metric tensor g. By contrast ēi⋅ej and ei⋅ēj do not form symmetric matrices in general, as displayed above. Therefore, while the L matrices are still orthogonal, they are not symmetric.
Apart from a rotation about any one axis, in which the xi and x̄i for some i coincide, the angles are not the same as Euler angles, and so the L matrices are not the same as the rotation matrices.
== Transformation of the dot and cross products (three dimensions only) ==
The dot product and cross product occur very frequently, in applications of vector analysis to physics and engineering, examples include:
power transferred P by an object exerting a force F with velocity v along a straight-line path:
{\displaystyle P=\mathbf {v} \cdot \mathbf {F} }
tangential velocity v at a point x of a rotating rigid body with angular velocity ω:
{\displaystyle \mathbf {v} ={\boldsymbol {\omega }}\times \mathbf {x} }
potential energy U of a magnetic dipole of magnetic moment m in a uniform external magnetic field B:
{\displaystyle U=-\mathbf {m} \cdot \mathbf {B} }
angular momentum J for a particle with position vector r and momentum p:
{\displaystyle \mathbf {J} =\mathbf {r} \times \mathbf {p} }
torque τ acting on an electric dipole of electric dipole moment p in a uniform external electric field E:
{\displaystyle {\boldsymbol {\tau }}=\mathbf {p} \times \mathbf {E} }
induced surface current density jS in a magnetic material of magnetization M on a surface with unit normal n:
{\displaystyle \mathbf {j} _{\mathrm {S} }=\mathbf {M} \times \mathbf {n} }
How these products transform under orthogonal transformations is illustrated below.
=== Dot product, Kronecker delta, and metric tensor ===
The dot product ⋅ of each possible pairing of the basis vectors follows from the basis being orthonormal. For perpendicular pairs we have
{\displaystyle {\begin{array}{llll}\mathbf {e} _{\text{x}}\cdot \mathbf {e} _{\text{y}}&=\mathbf {e} _{\text{y}}\cdot \mathbf {e} _{\text{z}}&=\mathbf {e} _{\text{z}}\cdot \mathbf {e} _{\text{x}}&=\\\mathbf {e} _{\text{y}}\cdot \mathbf {e} _{\text{x}}&=\mathbf {e} _{\text{z}}\cdot \mathbf {e} _{\text{y}}&=\mathbf {e} _{\text{x}}\cdot \mathbf {e} _{\text{z}}&=0\end{array}}}
while for parallel pairs we have
{\displaystyle \mathbf {e} _{\text{x}}\cdot \mathbf {e} _{\text{x}}=\mathbf {e} _{\text{y}}\cdot \mathbf {e} _{\text{y}}=\mathbf {e} _{\text{z}}\cdot \mathbf {e} _{\text{z}}=1.}
Replacing Cartesian labels by index notation as shown above, these results can be summarized by
{\displaystyle \mathbf {e} _{i}\cdot \mathbf {e} _{j}=\delta _{ij}}
where δij are the components of the Kronecker delta. The Cartesian basis can be used to represent δ in this way.
In addition, each metric tensor component gij with respect to any basis is the dot product of a pairing of basis vectors:
{\displaystyle g_{ij}=\mathbf {e} _{i}\cdot \mathbf {e} _{j}.}
For the Cartesian basis the components arranged into a matrix are:
{\displaystyle \mathbf {g} ={\begin{pmatrix}g_{\text{xx}}&g_{\text{xy}}&g_{\text{xz}}\\g_{\text{yx}}&g_{\text{yy}}&g_{\text{yz}}\\g_{\text{zx}}&g_{\text{zy}}&g_{\text{zz}}\\\end{pmatrix}}={\begin{pmatrix}\mathbf {e} _{\text{x}}\cdot \mathbf {e} _{\text{x}}&\mathbf {e} _{\text{x}}\cdot \mathbf {e} _{\text{y}}&\mathbf {e} _{\text{x}}\cdot \mathbf {e} _{\text{z}}\\\mathbf {e} _{\text{y}}\cdot \mathbf {e} _{\text{x}}&\mathbf {e} _{\text{y}}\cdot \mathbf {e} _{\text{y}}&\mathbf {e} _{\text{y}}\cdot \mathbf {e} _{\text{z}}\\\mathbf {e} _{\text{z}}\cdot \mathbf {e} _{\text{x}}&\mathbf {e} _{\text{z}}\cdot \mathbf {e} _{\text{y}}&\mathbf {e} _{\text{z}}\cdot \mathbf {e} _{\text{z}}\\\end{pmatrix}}={\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\\\end{pmatrix}}}
so the metric tensor takes the simplest possible form, namely the δ:
{\displaystyle g_{ij}=\delta _{ij}}
This is not true for general bases: orthogonal coordinates have diagonal metrics containing various scale factors (i.e. not necessarily 1), while general curvilinear coordinates could also have nonzero entries for off-diagonal components.
The dot product of two vectors a and b transforms according to
{\displaystyle \mathbf {a} \cdot \mathbf {b} ={\bar {a}}_{j}{\bar {b}}_{j}=a_{i}{\mathsf {L}}_{ij}b_{k}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{jk}=a_{i}\delta _{i}{}_{k}b_{k}=a_{i}b_{i}}
which is intuitive, since the dot product of two vectors is a single scalar independent of any coordinates. This also applies more generally to any coordinate systems, not just rectangular ones; the dot product in one coordinate system is the same in any other.
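A short numerical sketch of this invariance (Python with NumPy; the rotation angle and vectors are arbitrary assumptions):
<syntaxhighlight lang="python">
import numpy as np

theta = 0.7  # arbitrary rotation angle about the z-axis
L = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

assert np.allclose(L.T @ L, np.eye(3))       # orthogonal: transpose is inverse
assert np.isclose(np.linalg.det(L), 1.0)     # a proper rotation

a = np.array([1.0, -2.0, 0.5])
b = np.array([0.3, 1.0, 2.0])

# The dot product of the transformed components equals the original one.
assert np.isclose((L @ a) @ (L @ b), a @ b)
</syntaxhighlight>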
=== Cross product, Levi-Civita symbol, and pseudovectors ===
For the cross product (×) of two vectors, the results are (almost) the other way round. Again, assuming a right-handed 3d Cartesian coordinate system, cyclic permutations in perpendicular directions yield the next vector in the cyclic collection of vectors:
{\displaystyle {\begin{aligned}\mathbf {e} _{\text{x}}\times \mathbf {e} _{\text{y}}&=\mathbf {e} _{\text{z}}&\mathbf {e} _{\text{y}}\times \mathbf {e} _{\text{z}}&=\mathbf {e} _{\text{x}}&\mathbf {e} _{\text{z}}\times \mathbf {e} _{\text{x}}&=\mathbf {e} _{\text{y}}\\[1ex]\mathbf {e} _{\text{y}}\times \mathbf {e} _{\text{x}}&=-\mathbf {e} _{\text{z}}&\mathbf {e} _{\text{z}}\times \mathbf {e} _{\text{y}}&=-\mathbf {e} _{\text{x}}&\mathbf {e} _{\text{x}}\times \mathbf {e} _{\text{z}}&=-\mathbf {e} _{\text{y}}\end{aligned}}}
while parallel vectors clearly vanish:
{\displaystyle \mathbf {e} _{\text{x}}\times \mathbf {e} _{\text{x}}=\mathbf {e} _{\text{y}}\times \mathbf {e} _{\text{y}}=\mathbf {e} _{\text{z}}\times \mathbf {e} _{\text{z}}={\boldsymbol {0}}}
and replacing Cartesian labels by index notation as above, these can be summarized by:
{\displaystyle \mathbf {e} _{i}\times \mathbf {e} _{j}={\begin{cases}+\mathbf {e} _{k}&{\text{cyclic permutations: }}(i,j,k)=(1,2,3),(2,3,1),(3,1,2)\\[2pt]-\mathbf {e} _{k}&{\text{anticyclic permutations: }}(i,j,k)=(2,1,3),(3,2,1),(1,3,2)\\[2pt]{\boldsymbol {0}}&i=j\end{cases}}}
where i, j, k are indices which take values 1, 2, 3. It follows that:
{\displaystyle {\mathbf {e} _{k}\cdot \mathbf {e} _{i}\times \mathbf {e} _{j}}={\begin{cases}+1&{\text{cyclic permutations: }}(i,j,k)=(1,2,3),(2,3,1),(3,1,2)\\[2pt]-1&{\text{anticyclic permutations: }}(i,j,k)=(2,1,3),(3,2,1),(1,3,2)\\[2pt]0&i=j{\text{ or }}j=k{\text{ or }}k=i\end{cases}}}
These permutation relations and their corresponding values are important, and there is an object coinciding with this property: the Levi-Civita symbol, denoted by ε. The Levi-Civita symbol entries can be represented by the Cartesian basis:
{\displaystyle \varepsilon _{ijk}=\mathbf {e} _{i}\cdot \mathbf {e} _{j}\times \mathbf {e} _{k}}
which geometrically corresponds to the volume of a cube spanned by the orthonormal basis vectors, with sign indicating orientation (and not a "positive or negative volume"). Here, the orientation is fixed by ε123 = +1, for a right-handed system. A left-handed system would fix ε123 = −1 or equivalently ε321 = +1.
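The entries of ε can be generated directly from these triple products; a minimal sketch (Python with NumPy, assuming the right-handed standard basis):
<syntaxhighlight lang="python">
import numpy as np

e = np.eye(3)  # rows: the right-handed orthonormal basis e_1, e_2, e_3

# epsilon_ijk = e_i . (e_j x e_k)
eps = np.array([[[e[i] @ np.cross(e[j], e[k]) for k in range(3)]
                 for j in range(3)]
                for i in range(3)])

assert eps[0, 1, 2] == 1.0    # cyclic permutation
assert eps[1, 0, 2] == -1.0   # anticyclic permutation
assert eps[0, 0, 1] == 0.0    # repeated index
</syntaxhighlight>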
The scalar triple product can now be written:
{\displaystyle \mathbf {c} \cdot \mathbf {a} \times \mathbf {b} =c_{i}\mathbf {e} _{i}\cdot a_{j}\mathbf {e} _{j}\times b_{k}\mathbf {e} _{k}=\varepsilon _{ijk}c_{i}a_{j}b_{k}}
with the geometric interpretation of volume (of the parallelepiped spanned by a, b, c) and algebraically it is a determinant:
{\displaystyle \mathbf {c} \cdot \mathbf {a} \times \mathbf {b} ={\begin{vmatrix}c_{\text{x}}&a_{\text{x}}&b_{\text{x}}\\c_{\text{y}}&a_{\text{y}}&b_{\text{y}}\\c_{\text{z}}&a_{\text{z}}&b_{\text{z}}\end{vmatrix}}}
This in turn can be used to rewrite the cross product of two vectors as follows:
{\displaystyle {\begin{aligned}(\mathbf {a} \times \mathbf {b} )_{i}={\mathbf {e} _{i}\cdot \mathbf {a} \times \mathbf {b} }&=\varepsilon _{\ell jk}{(\mathbf {e} _{i})}_{\ell }a_{j}b_{k}=\varepsilon _{\ell jk}\delta _{i\ell }a_{j}b_{k}=\varepsilon _{ijk}a_{j}b_{k}\\\Rightarrow \quad {\mathbf {a} \times \mathbf {b} }=(\mathbf {a} \times \mathbf {b} )_{i}\mathbf {e} _{i}&=\varepsilon _{ijk}a_{j}b_{k}\mathbf {e} _{i}\end{aligned}}}
Contrary to its appearance, the Levi-Civita symbol is not a tensor but a pseudotensor; the components transform according to:
{\displaystyle {\bar {\varepsilon }}_{pqr}=\det({\boldsymbol {\mathsf {L}}})\varepsilon _{ijk}{\mathsf {L}}_{ip}{\mathsf {L}}_{jq}{\mathsf {L}}_{kr}\,.}
Therefore, the transformation of the cross product of a and b is:
{\displaystyle {\begin{aligned}&\left({\bar {\mathbf {a} }}\times {\bar {\mathbf {b} }}\right)_{i}\\[1ex]{}={}&{\bar {\varepsilon }}_{ijk}{\bar {a}}_{j}{\bar {b}}_{k}\\[1ex]{}={}&\det({\boldsymbol {\mathsf {L}}})\;\;\varepsilon _{pqr}\;\;{\mathsf {L}}_{pi}{\mathsf {L}}_{qj}{\mathsf {L}}_{rk}\;\;a_{m}{\mathsf {L}}_{mj}\;\;b_{n}{\mathsf {L}}_{nk}\\[1ex]{}={}&\det({\boldsymbol {\mathsf {L}}})\;\;\varepsilon _{pqr}\;\;{\mathsf {L}}_{pi}\;\;{\mathsf {L}}_{qj}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{jm}\;\;{\mathsf {L}}_{rk}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{kn}\;\;a_{m}\;\;b_{n}\\[1ex]{}={}&\det({\boldsymbol {\mathsf {L}}})\;\;\varepsilon _{pqr}\;\;{\mathsf {L}}_{pi}\;\;\delta _{qm}\;\;\delta _{rn}\;\;a_{m}\;\;b_{n}\\[1ex]{}={}&\det({\boldsymbol {\mathsf {L}}})\;\;{\mathsf {L}}_{pi}\;\;\varepsilon _{pqr}a_{q}b_{r}\\[1ex]{}={}&\det({\boldsymbol {\mathsf {L}}})\;\;(\mathbf {a} \times \mathbf {b} )_{p}{\mathsf {L}}_{pi}\end{aligned}}}
and so a × b transforms as a pseudovector, because of the determinant factor.
The tensor index notation applies to any object which has entities that form multidimensional arrays – not everything with indices is a tensor by default. Instead, tensors are defined by how their coordinates and basis elements change under a transformation from one coordinate system to another.
Note the cross product of two vectors is a pseudovector, while the cross product of a pseudovector with a vector is another vector.
=== Applications of the δ tensor and ε pseudotensor ===
Other identities can be formed from the δ tensor and ε pseudotensor. A notable and very useful identity converts two Levi-Civita symbols contracted over two adjacent indices into an antisymmetrized combination of Kronecker deltas:
{\displaystyle \varepsilon _{ijk}\varepsilon _{pqk}=\delta _{ip}\delta _{jq}-\delta _{iq}\delta _{jp}}
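This identity can be verified by brute force over all index values. A short NumPy sketch of my own (not from the article; indices run 0 to 2):

```python
# Brute-force check of eps_ijk eps_pqk = delta_ip delta_jq - delta_iq delta_jp.
import itertools
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in itertools.permutations(range(3)):
    eps[i, j, k] = np.sign((j - i) * (k - i) * (k - j))  # +1 / -1 by parity

lhs = np.einsum('ijk,pqk->ijpq', eps, eps)      # contract over the last index
delta = np.eye(3)
rhs = (np.einsum('ip,jq->ijpq', delta, delta)
       - np.einsum('iq,jp->ijpq', delta, delta))
assert np.allclose(lhs, rhs)
```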
The index forms of the dot and cross products, together with this identity, greatly facilitate the manipulation and derivation of other identities in vector calculus and algebra, which in turn are used extensively in physics and engineering. For instance, it is clear the dot and cross products are distributive over vector addition:
{\displaystyle {\begin{aligned}\mathbf {a} \cdot (\mathbf {b} +\mathbf {c} )&=a_{i}(b_{i}+c_{i})=a_{i}b_{i}+a_{i}c_{i}=\mathbf {a} \cdot \mathbf {b} +\mathbf {a} \cdot \mathbf {c} \\[1ex]\mathbf {a} \times (\mathbf {b} +\mathbf {c} )&=\mathbf {e} _{i}\varepsilon _{ijk}a_{j}(b_{k}+c_{k})=\mathbf {e} _{i}\varepsilon _{ijk}a_{j}b_{k}+\mathbf {e} _{i}\varepsilon _{ijk}a_{j}c_{k}=\mathbf {a} \times \mathbf {b} +\mathbf {a} \times \mathbf {c} \end{aligned}}}
without resort to any geometric constructions – the derivation in each case is a quick line of algebra. Although the procedure is less obvious, the vector triple product can also be derived. Rewriting in index notation:
{\displaystyle \left[\mathbf {a} \times (\mathbf {b} \times \mathbf {c} )\right]_{i}=\varepsilon _{ijk}a_{j}(\varepsilon _{k\ell m}b_{\ell }c_{m})=(\varepsilon _{ijk}\varepsilon _{k\ell m})a_{j}b_{\ell }c_{m}}
and because cyclic permutations of indices in the ε symbol do not change its value, cyclically permuting the indices in εkℓm to obtain εℓmk allows us to use the above δ-ε identity to convert the ε symbols into δ tensors:
{\displaystyle {\begin{aligned}\left[\mathbf {a} \times (\mathbf {b} \times \mathbf {c} )\right]_{i}{}={}&\left(\delta _{i\ell }\delta _{jm}-\delta _{im}\delta _{j\ell }\right)a_{j}b_{\ell }c_{m}\\{}={}&\delta _{i\ell }\delta _{jm}a_{j}b_{\ell }c_{m}-\delta _{im}\delta _{j\ell }a_{j}b_{\ell }c_{m}\\{}={}&a_{j}b_{i}c_{j}-a_{j}b_{j}c_{i}\\{}={}&\left[(\mathbf {a} \cdot \mathbf {c} )\mathbf {b} -(\mathbf {a} \cdot \mathbf {b} )\mathbf {c} \right]_{i}\end{aligned}}}
thus:
{\displaystyle \mathbf {a} \times (\mathbf {b} \times \mathbf {c} )=(\mathbf {a} \cdot \mathbf {c} )\mathbf {b} -(\mathbf {a} \cdot \mathbf {b} )\mathbf {c} }
Note this is antisymmetric in b and c, as expected from the left hand side. Similarly, via index notation or even just cyclically relabelling a, b, and c in the previous result and taking the negative:
{\displaystyle (\mathbf {a} \times \mathbf {b} )\times \mathbf {c} =(\mathbf {c} \cdot \mathbf {a} )\mathbf {b} -(\mathbf {c} \cdot \mathbf {b} )\mathbf {a} }
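Both expansions, and the non-associativity they imply, are easy to confirm numerically. A NumPy sketch of my own:

```python
# Check both triple-product expansions and that the cross product
# is not associative for generic vectors.
import numpy as np

rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 3))

lhs1 = np.cross(a, np.cross(b, c))          # a x (b x c)
rhs1 = (a @ c) * b - (a @ b) * c            # (a.c)b - (a.b)c
lhs2 = np.cross(np.cross(a, b), c)          # (a x b) x c
rhs2 = (c @ a) * b - (c @ b) * a            # (c.a)b - (c.b)a

assert np.allclose(lhs1, rhs1) and np.allclose(lhs2, rhs2)
assert not np.allclose(lhs1, lhs2)          # non-associativity, generically
```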
and the difference in results shows that the cross product is not associative. More complex identities, like the quadruple products
{\displaystyle (\mathbf {a} \times \mathbf {b} )\cdot (\mathbf {c} \times \mathbf {d} ),\quad (\mathbf {a} \times \mathbf {b} )\times (\mathbf {c} \times \mathbf {d} ),\quad \ldots }
and so on, can be derived in a similar manner.
== Transformations of Cartesian tensors (any number of dimensions) ==
Tensors are defined as quantities which transform in a certain way under linear transformations of coordinates.
=== Second order ===
Let a = aiei and b = biei be two vectors, so that their components transform according to āj = aiLij and b̄j = biLij.
Taking the tensor product gives:
{\displaystyle \mathbf {a} \otimes \mathbf {b} =a_{i}\mathbf {e} _{i}\otimes b_{j}\mathbf {e} _{j}=a_{i}b_{j}\mathbf {e} _{i}\otimes \mathbf {e} _{j}}
then applying the transformation to the components
{\displaystyle {\bar {a}}_{p}{\bar {b}}_{q}=a_{i}{\mathsf {L}}_{i}{}_{p}b_{j}{\mathsf {L}}_{j}{}_{q}={\mathsf {L}}_{i}{}_{p}{\mathsf {L}}_{j}{}_{q}a_{i}b_{j}}
and to the bases
{\displaystyle {\bar {\mathbf {e} }}_{p}\otimes {\bar {\mathbf {e} }}_{q}=\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{pi}\mathbf {e} _{i}\otimes \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{qj}\mathbf {e} _{j}=\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{pi}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{qj}\mathbf {e} _{i}\otimes \mathbf {e} _{j}={\mathsf {L}}_{ip}^{-1}{\mathsf {L}}_{jq}^{-1}\mathbf {e} _{i}\otimes \mathbf {e} _{j}}
gives the transformation law of an order-2 tensor. The tensor a⊗b is invariant under this transformation:
{\displaystyle {\begin{aligned}{\bar {a}}_{p}{\bar {b}}_{q}{\bar {\mathbf {e} }}_{p}\otimes {\bar {\mathbf {e} }}_{q}{}={}&{\mathsf {L}}_{kp}{\mathsf {L}}_{\ell q}a_{k}b_{\ell }\,\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{pi}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{qj}\mathbf {e} _{i}\otimes \mathbf {e} _{j}\\[1ex]{}={}&{\mathsf {L}}_{kp}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{pi}{\mathsf {L}}_{\ell q}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{qj}\,a_{k}b_{\ell }\mathbf {e} _{i}\otimes \mathbf {e} _{j}\\[1ex]{}={}&\delta _{k}{}_{i}\delta _{\ell j}\,a_{k}b_{\ell }\mathbf {e} _{i}\otimes \mathbf {e} _{j}\\[1ex]{}={}&a_{i}b_{j}\mathbf {e} _{i}\otimes \mathbf {e} _{j}\end{aligned}}}
More generally, for any order-2 tensor
{\displaystyle \mathbf {R} =R_{ij}\mathbf {e} _{i}\otimes \mathbf {e} _{j}\,,}
the components transform according to:
{\displaystyle {\bar {R}}_{pq}={\mathsf {L}}_{i}{}_{p}{\mathsf {L}}_{j}{}_{q}R_{ij},}
and the basis transforms by:
{\displaystyle {\bar {\mathbf {e} }}_{p}\otimes {\bar {\mathbf {e} }}_{q}=\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{ip}\mathbf {e} _{i}\otimes \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{jq}\mathbf {e} _{j}}
If R does not transform according to this rule – whatever quantity R may be – it is not an order-2 tensor.
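A small numerical illustration of this law (my own sketch, taking a rotation about the z-axis as the transformation L): the components of the dyad a ⊗ b follow R̄pq = LipLjqRij, which in matrix form is R̄ = LᵀRL.

```python
# Order-2 Cartesian tensor transformation: R_bar = L^T R L, checked on a dyad.
import numpy as np

theta = 0.3
L = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # proper orthogonal

rng = np.random.default_rng(2)
a, b = rng.standard_normal((2, 3))
R = np.outer(a, b)                       # dyad components a_i b_j

a_bar = a @ L                            # a_bar_p = a_i L_ip
b_bar = b @ L
R_bar = np.einsum('ip,jq,ij->pq', L, L, R)

assert np.allclose(R_bar, np.outer(a_bar, b_bar))
assert np.allclose(R_bar, L.T @ R @ L)
```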
=== Any order ===
More generally, for any order p tensor
{\displaystyle \mathbf {T} =T_{j_{1}j_{2}\cdots j_{p}}\mathbf {e} _{j_{1}}\otimes \mathbf {e} _{j_{2}}\otimes \cdots \mathbf {e} _{j_{p}}}
the components transform according to:
{\displaystyle {\bar {T}}_{j_{1}j_{2}\cdots j_{p}}={\mathsf {L}}_{i_{1}j_{1}}{\mathsf {L}}_{i_{2}j_{2}}\cdots {\mathsf {L}}_{i_{p}j_{p}}T_{i_{1}i_{2}\cdots i_{p}}}
and the basis transforms by:
{\displaystyle {\bar {\mathbf {e} }}_{j_{1}}\otimes {\bar {\mathbf {e} }}_{j_{2}}\cdots \otimes {\bar {\mathbf {e} }}_{j_{p}}=\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{j_{1}i_{1}}\mathbf {e} _{i_{1}}\otimes \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{j_{2}i_{2}}\mathbf {e} _{i_{2}}\cdots \otimes \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{j_{p}i_{p}}\mathbf {e} _{i_{p}}}
For a pseudotensor S of order p, the components transform according to:
{\displaystyle {\bar {S}}_{j_{1}j_{2}\cdots j_{p}}=\det({\boldsymbol {\mathsf {L}}}){\mathsf {L}}_{i_{1}j_{1}}{\mathsf {L}}_{i_{2}j_{2}}\cdots {\mathsf {L}}_{i_{p}j_{p}}S_{i_{1}i_{2}\cdots i_{p}}\,.}
== Pseudovectors as antisymmetric second order tensors ==
The antisymmetric nature of the cross product can be recast into a tensorial form as follows. Let c be a vector, a be a pseudovector, b be another vector, and T be a second order tensor such that:
{\displaystyle \mathbf {c} =\mathbf {a} \times \mathbf {b} =\mathbf {T} \cdot \mathbf {b} }
As the cross product is linear in a and b, the components of T can be found by inspection, and they are:
{\displaystyle \mathbf {T} ={\begin{pmatrix}0&-a_{\text{z}}&a_{\text{y}}\\a_{\text{z}}&0&-a_{\text{x}}\\-a_{\text{y}}&a_{\text{x}}&0\\\end{pmatrix}}}
so the pseudovector a can be written as an antisymmetric tensor. This transforms as a tensor, not a pseudotensor. For the mechanical example above of the tangential velocity of a rigid body, given by v = ω × x, this can be rewritten as v = Ω ⋅ x, where Ω is the tensor corresponding to the pseudovector ω:
{\displaystyle {\boldsymbol {\Omega }}={\begin{pmatrix}0&-\omega _{\text{z}}&\omega _{\text{y}}\\\omega _{\text{z}}&0&-\omega _{\text{x}}\\-\omega _{\text{y}}&\omega _{\text{x}}&0\\\end{pmatrix}}}
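The correspondence c = a × b = T ⋅ b is straightforward to check numerically. A minimal sketch (the function name antisym is mine):

```python
# The antisymmetric matrix built from a pseudovector reproduces the
# cross product as an ordinary matrix-vector product.
import numpy as np

def antisym(a):
    """Return T such that T @ b == np.cross(a, b)."""
    ax, ay, az = a
    return np.array([[0.0, -az,  ay],
                     [ az, 0.0, -ax],
                     [-ay,  ax, 0.0]])

rng = np.random.default_rng(3)
a, b = rng.standard_normal((2, 3))
assert np.allclose(antisym(a) @ b, np.cross(a, b))
```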
For an example in electromagnetism, while the electric field E is a vector field, the magnetic field B is a pseudovector field. These fields are defined from the Lorentz force for a particle of electric charge q traveling at velocity v:
{\displaystyle \mathbf {F} =q(\mathbf {E} +\mathbf {v} \times \mathbf {B} )=q(\mathbf {E} -\mathbf {B} \times \mathbf {v} )}
and considering the second term containing the cross product of a pseudovector B and velocity vector v, it can be written in matrix form, with F, E, and v as column vectors and B as an antisymmetric matrix:
{\displaystyle {\begin{pmatrix}F_{\text{x}}\\F_{\text{y}}\\F_{\text{z}}\\\end{pmatrix}}=q{\begin{pmatrix}E_{\text{x}}\\E_{\text{y}}\\E_{\text{z}}\\\end{pmatrix}}-q{\begin{pmatrix}0&-B_{\text{z}}&B_{\text{y}}\\B_{\text{z}}&0&-B_{\text{x}}\\-B_{\text{y}}&B_{\text{x}}&0\\\end{pmatrix}}{\begin{pmatrix}v_{\text{x}}\\v_{\text{y}}\\v_{\text{z}}\\\end{pmatrix}}}
If a pseudovector is explicitly given by a cross product of two vectors (as opposed to entering the cross product with another vector), then such pseudovectors can also be written as antisymmetric tensors of second order, with each entry a component of the cross product. The angular momentum of a classical pointlike particle orbiting about an axis, defined by J = x × p, is another example of a pseudovector, with corresponding antisymmetric tensor:
{\displaystyle \mathbf {J} ={\begin{pmatrix}0&-J_{\text{z}}&J_{\text{y}}\\J_{\text{z}}&0&-J_{\text{x}}\\-J_{\text{y}}&J_{\text{x}}&0\\\end{pmatrix}}={\begin{pmatrix}0&-(xp_{\text{y}}-yp_{\text{x}})&(zp_{\text{x}}-xp_{\text{z}})\\(xp_{\text{y}}-yp_{\text{x}})&0&-(yp_{\text{z}}-zp_{\text{y}})\\-(zp_{\text{x}}-xp_{\text{z}})&(yp_{\text{z}}-zp_{\text{y}})&0\\\end{pmatrix}}}
Although Cartesian tensors do not occur in the theory of relativity, the tensor form of orbital angular momentum J enters the spacelike part of the relativistic angular momentum tensor, and the above tensor form of the magnetic field B enters the spacelike part of the electromagnetic tensor.
== Vector and tensor calculus ==
The following formulae are only so simple in Cartesian coordinates – in general curvilinear coordinates there are factors of the metric and its determinant – see tensors in curvilinear coordinates for more general analysis.
=== Vector calculus ===
Following are the differential operators of vector calculus. Throughout, let Φ(r, t) be a scalar field, and
{\displaystyle {\begin{aligned}\mathbf {A} (\mathbf {r} ,t)&=A_{\text{x}}(\mathbf {r} ,t)\mathbf {e} _{\text{x}}+A_{\text{y}}(\mathbf {r} ,t)\mathbf {e} _{\text{y}}+A_{\text{z}}(\mathbf {r} ,t)\mathbf {e} _{\text{z}}\\[1ex]\mathbf {B} (\mathbf {r} ,t)&=B_{\text{x}}(\mathbf {r} ,t)\mathbf {e} _{\text{x}}+B_{\text{y}}(\mathbf {r} ,t)\mathbf {e} _{\text{y}}+B_{\text{z}}(\mathbf {r} ,t)\mathbf {e} _{\text{z}}\end{aligned}}}
be vector fields, in which all scalar and vector fields are functions of the position vector r and time t.
The gradient operator in Cartesian coordinates is given by:
{\displaystyle \nabla =\mathbf {e} _{\text{x}}{\frac {\partial }{\partial x}}+\mathbf {e} _{\text{y}}{\frac {\partial }{\partial y}}+\mathbf {e} _{\text{z}}{\frac {\partial }{\partial z}}}
and in index notation, this is usually abbreviated in various ways:
{\displaystyle \nabla _{i}\equiv \partial _{i}\equiv {\frac {\partial }{\partial x_{i}}}}
This operator acts on a scalar field Φ to obtain the vector field directed in the maximum rate of increase of Φ:
{\displaystyle \left(\nabla \Phi \right)_{i}=\nabla _{i}\Phi }
The index notation for the dot and cross products carries over to the differential operators of vector calculus.
The directional derivative of a scalar field Φ is the rate of change of Φ along some direction vector a (not necessarily a unit vector), formed out of the components of a and the gradient:
{\displaystyle \mathbf {a} \cdot (\nabla \Phi )=a_{j}(\nabla \Phi )_{j}}
The divergence of a vector field A is:
{\displaystyle \nabla \cdot \mathbf {A} =\nabla _{i}A_{i}}
Note the interchange of the components of the gradient and vector field yields a different differential operator
{\displaystyle \mathbf {A} \cdot \nabla =A_{i}\nabla _{i}}
which could act on scalar or vector fields. In fact, if A is replaced by the velocity field u(r, t) of a fluid, this is a term in the material derivative (with many other names) of continuum mechanics, with another term being the partial time derivative:
{\displaystyle {\frac {D}{Dt}}={\frac {\partial }{\partial t}}+\mathbf {u} \cdot \nabla }
which usually acts on the velocity field, leading to the non-linearity in the Navier–Stokes equations.
As for the curl of a vector field A, this can be defined as a pseudovector field by means of the ε symbol:
{\displaystyle \left(\nabla \times \mathbf {A} \right)_{i}=\varepsilon _{ijk}\nabla _{j}A_{k}}
which is only valid in three dimensions, or an antisymmetric tensor field of second order via antisymmetrization of indices, indicated by delimiting the antisymmetrized indices by square brackets (see Ricci calculus):
{\displaystyle \left(\nabla \times \mathbf {A} \right)_{ij}=\nabla _{i}A_{j}-\nabla _{j}A_{i}=2\nabla _{[i}A_{j]}}
which is valid in any number of dimensions. In each case, the order of the gradient and vector field components should not be interchanged as this would result in a different differential operator:
{\displaystyle \varepsilon _{ijk}A_{j}\nabla _{k}\,,\qquad A_{i}\nabla _{j}-A_{j}\nabla _{i}=2A_{[i}\nabla _{j]}}
which could act on scalar or vector fields.
Finally, the Laplacian operator is defined in two ways, the divergence of the gradient of a scalar field Φ:
{\displaystyle \nabla \cdot (\nabla \Phi )=\nabla _{i}(\nabla _{i}\Phi )}
or the square of the gradient operator, which acts on a scalar field Φ or a vector field A:
{\displaystyle {\begin{aligned}(\nabla \cdot \nabla )\Phi &=(\nabla _{i}\nabla _{i})\Phi \\(\nabla \cdot \nabla )\mathbf {A} &=(\nabla _{i}\nabla _{i})\mathbf {A} \end{aligned}}}
In physics and engineering, the gradient, divergence, curl, and Laplacian operator arise inevitably in fluid mechanics, Newtonian gravitation, electromagnetism, heat conduction, and even quantum mechanics.
Vector calculus identities can be derived in a similar way to those of vector dot and cross products and combinations. For example, in three dimensions, the curl of a cross product of two vector fields A and B:
{\displaystyle {\begin{aligned}&\left[\nabla \times (\mathbf {A} \times \mathbf {B} )\right]_{i}\\{}={}&\varepsilon _{ijk}\nabla _{j}(\varepsilon _{k\ell m}A_{\ell }B_{m})\\{}={}&(\varepsilon _{ijk}\varepsilon _{\ell mk})\nabla _{j}(A_{\ell }B_{m})\\{}={}&(\delta _{i\ell }\delta _{jm}-\delta _{im}\delta _{j\ell })(B_{m}\nabla _{j}A_{\ell }+A_{\ell }\nabla _{j}B_{m})\\{}={}&(B_{j}\nabla _{j}A_{i}+A_{i}\nabla _{j}B_{j})-(B_{i}\nabla _{j}A_{j}+A_{j}\nabla _{j}B_{i})\\{}={}&(B_{j}\nabla _{j})A_{i}+A_{i}(\nabla _{j}B_{j})-B_{i}(\nabla _{j}A_{j})-(A_{j}\nabla _{j})B_{i}\\{}={}&\left[(\mathbf {B} \cdot \nabla )\mathbf {A} +\mathbf {A} (\nabla \cdot \mathbf {B} )-\mathbf {B} (\nabla \cdot \mathbf {A} )-(\mathbf {A} \cdot \nabla )\mathbf {B} \right]_{i}\\\end{aligned}}}
where the product rule was used, and throughout the differential operator was not interchanged with A or B. Thus:
{\displaystyle \nabla \times (\mathbf {A} \times \mathbf {B} )=(\mathbf {B} \cdot \nabla )\mathbf {A} +\mathbf {A} (\nabla \cdot \mathbf {B} )-\mathbf {B} (\nabla \cdot \mathbf {A} )-(\mathbf {A} \cdot \nabla )\mathbf {B} }
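This identity can be spot-checked symbolically. A SymPy sketch of my own (the helper names curl, div, and dir_grad are mine, not a standard API):

```python
# Symbolic check: curl(A x B) = (B.grad)A + A(div B) - B(div A) - (A.grad)B.
import sympy as sp

x, y, z = sp.symbols('x y z')
X = (x, y, z)
A = sp.Matrix([sp.Function(f'A{i}')(x, y, z) for i in range(3)])
B = sp.Matrix([sp.Function(f'B{i}')(x, y, z) for i in range(3)])

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

def div(F):
    return sum(sp.diff(F[i], X[i]) for i in range(3))

def dir_grad(V, F):
    # (V . grad) F, componentwise
    return sp.Matrix([sum(V[j] * sp.diff(F[i], X[j]) for j in range(3))
                      for i in range(3)])

lhs = curl(A.cross(B))
rhs = dir_grad(B, A) + A * div(B) - B * div(A) - dir_grad(A, B)
assert sp.simplify(lhs - rhs) == sp.zeros(3, 1)
```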
=== Tensor calculus ===
One can continue the operations on tensors of higher order. Let T = T(r, t) denote a second order tensor field, again dependent on the position vector r and time t.
For instance, the gradient of a vector field in two equivalent notations ("dyadic" and "tensor", respectively) is:
{\displaystyle (\nabla \mathbf {A} )_{ij}\equiv (\nabla \otimes \mathbf {A} )_{ij}=\nabla _{i}A_{j}}
which is a tensor field of second order.
The divergence of a tensor is:
{\displaystyle (\nabla \cdot \mathbf {T} )_{j}=\nabla _{i}T_{ij}}
which is a vector field. This arises in continuum mechanics in Cauchy's laws of motion – the divergence of the Cauchy stress tensor σ is a vector field, related to body forces acting on the fluid.
== Difference from the standard tensor calculus ==
Cartesian tensors are as in tensor algebra, but the Euclidean structure of the underlying space and the restriction to orthonormal bases bring some simplifications compared to the general theory.
The general tensor algebra consists of general mixed tensors of type (p, q):
{\displaystyle \mathbf {T} =T_{j_{1}j_{2}\cdots j_{q}}^{i_{1}i_{2}\cdots i_{p}}\mathbf {e} _{i_{1}i_{2}\cdots i_{p}}^{j_{1}j_{2}\cdots j_{q}}}
with basis elements:
{\displaystyle \mathbf {e} _{i_{1}i_{2}\cdots i_{p}}^{j_{1}j_{2}\cdots j_{q}}=\mathbf {e} _{i_{1}}\otimes \mathbf {e} _{i_{2}}\otimes \cdots \mathbf {e} _{i_{p}}\otimes \mathbf {e} ^{j_{1}}\otimes \mathbf {e} ^{j_{2}}\otimes \cdots \mathbf {e} ^{j_{q}}}
the components transform according to:
{\displaystyle {\bar {T}}_{\ell _{1}\ell _{2}\cdots \ell _{q}}^{k_{1}k_{2}\cdots k_{p}}={\mathsf {L}}_{i_{1}}{}^{k_{1}}{\mathsf {L}}_{i_{2}}{}^{k_{2}}\cdots {\mathsf {L}}_{i_{p}}{}^{k_{p}}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{\ell _{1}}{}^{j_{1}}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{\ell _{2}}{}^{j_{2}}\cdots \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{\ell _{q}}{}^{j_{q}}T_{j_{1}j_{2}\cdots j_{q}}^{i_{1}i_{2}\cdots i_{p}}}
as for the bases:
{\displaystyle {\bar {\mathbf {e} }}_{k_{1}k_{2}\cdots k_{p}}^{\ell _{1}\ell _{2}\cdots \ell _{q}}=\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{k_{1}}{}^{i_{1}}\left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{k_{2}}{}^{i_{2}}\cdots \left({\boldsymbol {\mathsf {L}}}^{-1}\right)_{k_{p}}{}^{i_{p}}{\mathsf {L}}_{j_{1}}{}^{\ell _{1}}{\mathsf {L}}_{j_{2}}{}^{\ell _{2}}\cdots {\mathsf {L}}_{j_{q}}{}^{\ell _{q}}\mathbf {e} _{i_{1}i_{2}\cdots i_{p}}^{j_{1}j_{2}\cdots j_{q}}}
For Cartesian tensors, only the order p + q of the tensor matters in a Euclidean space with an orthonormal basis, and all p + q indices can be lowered. A Cartesian basis does not exist unless the vector space has a positive-definite metric, and thus cannot be used in relativistic contexts.
== History ==
Dyadic tensors were historically the first approach to formulating second-order tensors, similarly triadic tensors for third-order tensors, and so on. Cartesian tensors use tensor index notation, in which the variance may be glossed over and is often ignored, since the components remain unchanged by raising and lowering indices.
== See also ==
Tensor algebra
Tensor calculus
Tensors in curvilinear coordinates
Rotation group
== References ==
=== General references ===
D. C. Kay (1988). Tensor Calculus. Schaum's Outlines. McGraw Hill. pp. 18–19, 31–32. ISBN 0-07-033484-6.
M. R. Spiegel; S. Lipschutz; D. Spellman (2009). Vector analysis. Schaum's Outlines (2nd ed.). McGraw Hill. p. 227. ISBN 978-0-07-161545-7.
J.R. Tyldesley (1975). An introduction to tensor analysis for engineers and applied scientists. Longman. pp. 5–13. ISBN 0-582-44355-5.
=== Further reading and applications ===
S. Lipschutz; M. Lipson (2009). Linear Algebra. Schaum's Outlines (4th ed.). McGraw Hill. ISBN 978-0-07-154352-1.
Pei Chi Chou (1992). Elasticity: Tensor, Dyadic, and Engineering Approaches. Courier Dover Publications. ISBN 048-666-958-0.
T. W. Körner (2012). Vectors, Pure and Applied: A General Introduction to Linear Algebra. Cambridge University Press. p. 216. ISBN 978-11070-3356-6.
R. Torretti (1996). Relativity and Geometry. Courier Dover Publications. p. 103. ISBN 0-4866-90466.
J. J. L. Synge; A. Schild (1978). Tensor Calculus. Courier Dover Publications. p. 128. ISBN 0-4861-4139-X.
C. A. Balafoutis; R. V. Patel (1991). Dynamic Analysis of Robot Manipulators: A Cartesian Tensor Approach. The Kluwer International Series in Engineering and Computer Science: Robotics: vision, manipulation and sensors. Vol. 131. Springer. ISBN 0792-391-454.
S. G. Tzafestas (1992). Robotic systems: advanced techniques and applications. Springer. ISBN 0-792-317-491.
T. Dass; S. K. Sharma (1998). Mathematical Methods In Classical And Quantum Physics. Universities Press. p. 144. ISBN 817-371-0899.
G. F. J. Temple (2004). Cartesian Tensors: An Introduction. Dover Books on Mathematics Series. Dover. ISBN 0-4864-3908-9.
H. Jeffreys (1961). Cartesian Tensors. Cambridge University Press. ISBN 9780521054232.
== External links ==
Cartesian Tensors
V. N. Kaliakin, Brief Review of Tensors, University of Delaware
R. E. Hunt, Cartesian Tensors, University of Cambridge
In the general theory of relativity, the Einstein field equations (EFE; also known as Einstein's equations) relate the geometry of spacetime to the distribution of matter within it.
The equations were published by Albert Einstein in 1915 in the form of a tensor equation which related the local spacetime curvature (expressed by the Einstein tensor) with the local energy, momentum and stress within that spacetime (expressed by the stress–energy tensor).
Analogously to the way that electromagnetic fields are related to the distribution of charges and currents via Maxwell's equations, the EFE relate the spacetime geometry to the distribution of mass–energy, momentum and stress, that is, they determine the metric tensor of spacetime for a given arrangement of stress–energy–momentum in the spacetime. The relationship between the metric tensor and the Einstein tensor allows the EFE to be written as a set of nonlinear partial differential equations when used in this way. The solutions of the EFE are the components of the metric tensor. The inertial trajectories of particles and radiation (geodesics) in the resulting geometry are then calculated using the geodesic equation.
As well as implying local energy–momentum conservation, the EFE reduce to Newton's law of gravitation in the limit of a weak gravitational field and velocities that are much less than the speed of light.
Exact solutions for the EFE can only be found under simplifying assumptions such as symmetry. Special classes of exact solutions are most often studied since they model many gravitational phenomena, such as rotating black holes and the expanding universe. Further simplification is achieved in approximating the spacetime as having only small deviations from flat spacetime, leading to the linearized EFE. These equations are used to study phenomena such as gravitational waves.
== Mathematical form ==
The Einstein field equations (EFE) may be written in the form:
{\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu },}
where Gμν is the Einstein tensor, gμν is the metric tensor, Tμν is the stress–energy tensor, Λ is the cosmological constant and κ is the Einstein gravitational constant.
The Einstein tensor is defined as
{\displaystyle G_{\mu \nu }=R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu },}
where Rμν is the Ricci curvature tensor, and R is the scalar curvature. This is a symmetric second-degree tensor that depends on only the metric tensor and its first and second derivatives.
The Einstein gravitational constant is defined as
{\displaystyle \kappa ={\frac {8\pi G}{c^{4}}}\approx 2.07665\times 10^{-43}\,{\textrm {N}}^{-1},}
where G is the Newtonian constant of gravitation and c is the speed of light in vacuum.
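For reference, the quoted numerical value follows directly from standard values of G and c. A minimal sketch (the constant values are the usual CODATA-style figures, assumed here):

```python
# Compute the Einstein gravitational constant kappa = 8*pi*G / c^4.
import math

G = 6.67430e-11        # Newtonian constant of gravitation, m^3 kg^-1 s^-2
c = 2.99792458e8       # speed of light in vacuum, m/s
kappa = 8 * math.pi * G / c**4
print(f"{kappa:.5e}")  # ~2.07665e-43 N^-1 (i.e. s^2 kg^-1 m^-1)
```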
The EFE can thus also be written as
{\displaystyle R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu }.}
In standard units, each term on the left has quantity dimension of L−2.
The expression on the left represents the curvature of spacetime as determined by the metric; the expression on the right represents the stress–energy–momentum content of spacetime. The EFE can then be interpreted as a set of equations dictating how stress–energy–momentum determines the curvature of spacetime.
These equations, together with the geodesic equation, which dictates how freely falling matter moves through spacetime, form the core of the mathematical formulation of general relativity.
The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge-fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
Although the Einstein field equations were initially formulated in the context of a four-dimensional theory, some theorists have explored their consequences in n dimensions. The equations in contexts outside of general relativity are still referred to as the Einstein field equations. The vacuum field equations (obtained when Tμν is everywhere zero) define Einstein manifolds.
The equations are more complex than they appear. Given a specified distribution of matter and energy in the form of a stress–energy tensor, the EFE are understood to be equations for the metric tensor gμν, since both the Ricci tensor and scalar curvature depend on the metric in a complicated nonlinear manner. When fully written out, the EFE are a system of ten coupled, nonlinear, hyperbolic-elliptic partial differential equations.
=== Sign convention ===
The above form of the EFE is the standard established by Misner, Thorne, and Wheeler (MTW). The authors analyzed conventions that exist and classified these according to three signs ([S1] [S2] [S3]):
{\displaystyle {\begin{aligned}g_{\mu \nu }&=[S1]\times \operatorname {diag} (-1,+1,+1,+1)\\[6pt]{R^{\mu }}_{\alpha \beta \gamma }&=[S2]\times \left(\Gamma _{\alpha \gamma ,\beta }^{\mu }-\Gamma _{\alpha \beta ,\gamma }^{\mu }+\Gamma _{\sigma \beta }^{\mu }\Gamma _{\gamma \alpha }^{\sigma }-\Gamma _{\sigma \gamma }^{\mu }\Gamma _{\beta \alpha }^{\sigma }\right)\\[6pt]G_{\mu \nu }&=[S3]\times \kappa T_{\mu \nu }\end{aligned}}}
The third sign above is related to the choice of convention for the Ricci tensor:
{\displaystyle R_{\mu \nu }=[S2]\times [S3]\times {R^{\alpha }}_{\mu \alpha \nu }}
With these definitions Misner, Thorne, and Wheeler classify themselves as (+ + +), whereas Weinberg (1972) is (+ − −), Peebles (1980) and Efstathiou et al. (1990) are (− + +), while Rindler (1977), Atwater (1974), Collins, Martin & Squires (1989) and Peacock (1999) are (− + −).
Authors including Einstein have used a different sign in their definition for the Ricci tensor which results in the sign of the constant on the right side being negative:
{\displaystyle R_{\mu \nu }-{\frac {1}{2}}Rg_{\mu \nu }-\Lambda g_{\mu \nu }=-\kappa T_{\mu \nu }.}
The sign of the cosmological term would change in both these versions if the (+ − − −) metric sign convention is used rather than the MTW (− + + +) metric sign convention adopted here.
=== Equivalent formulations ===
Taking the trace with respect to the metric of both sides of the EFE one gets
{\displaystyle R-{\frac {D}{2}}R+D\Lambda =\kappa T,}
where D is the spacetime dimension. Solving for R and substituting this in the original EFE, one gets the following equivalent "trace-reversed" form:
{\displaystyle R_{\mu \nu }-{\frac {2}{D-2}}\Lambda g_{\mu \nu }=\kappa \left(T_{\mu \nu }-{\frac {1}{D-2}}Tg_{\mu \nu }\right).}
In D = 4 dimensions this reduces to
{\displaystyle R_{\mu \nu }-\Lambda g_{\mu \nu }=\kappa \left(T_{\mu \nu }-{\frac {1}{2}}T\,g_{\mu \nu }\right).}
Reversing the trace again would restore the original EFE. The trace-reversed form may be more convenient in some cases (for example, when one is interested in the weak-field limit and can replace gμν in the expression on the right with the Minkowski metric without significant loss of accuracy).
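The equivalence of the two forms can be illustrated numerically. A sketch of my own, which treats the Ricci tensor as an arbitrary symmetric array over a Minkowski background purely to check the algebra (the values of κ and Λ are arbitrary):

```python
# Build T from a random "Ricci tensor" via the EFE, then confirm the
# trace-reversed form holds identically (D = 4, Minkowski metric).
import numpy as np

rng = np.random.default_rng(4)
D, kappa, Lam = 4, 1.0, 0.1
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)

Ric = rng.standard_normal((D, D)); Ric = (Ric + Ric.T) / 2
R = np.einsum('mn,mn->', g_inv, Ric)               # scalar curvature
T = (Ric - 0.5 * R * g + Lam * g) / kappa           # EFE solved for T
T_tr = np.einsum('mn,mn->', g_inv, T)               # trace of T

lhs = Ric - (2 / (D - 2)) * Lam * g
rhs = kappa * (T - (1 / (D - 2)) * T_tr * g)
assert np.allclose(lhs, rhs)
```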
== Cosmological constant ==
In the Einstein field equations
{\displaystyle G_{\mu \nu }+\Lambda g_{\mu \nu }=\kappa T_{\mu \nu }\,,}
the term containing the cosmological constant Λ was absent from the version in which Einstein originally published them. He then included the term to allow for a universe that is not expanding or contracting. This effort was unsuccessful because:
any desired steady state solution described by this equation is unstable, and
observations by Edwin Hubble showed that our universe is expanding.
Einstein then abandoned Λ, remarking to George Gamow "that the introduction of the cosmological term was the biggest blunder of his life".
The inclusion of this term does not create inconsistencies. For many years the cosmological constant was almost universally assumed to be zero. More recent astronomical observations have shown an accelerating expansion of the universe, and to explain this a positive value of Λ is needed. The effect of the cosmological constant is negligible at the scale of a galaxy or smaller.
Einstein thought of the cosmological constant as an independent parameter, but its term in the field equation can also be moved algebraically to the other side and incorporated as part of the stress–energy tensor:
{\displaystyle T_{\mu \nu }^{\mathrm {(vac)} }=-{\frac {\Lambda }{\kappa }}g_{\mu \nu }\,.}
This tensor describes a vacuum state with an energy density ρvac and isotropic pressure pvac that are fixed constants and given by
{\displaystyle \rho _{\mathrm {vac} }=-p_{\mathrm {vac} }={\frac {\Lambda }{\kappa }},}
where it is assumed that Λ has SI unit m−2 and κ is defined as above.
The existence of a cosmological constant is thus equivalent to the existence of a vacuum energy and a pressure of opposite sign. This has led to the terms "cosmological constant" and "vacuum energy" being used interchangeably in general relativity.
== Features ==
=== Conservation of energy and momentum ===
General relativity is consistent with the local conservation of energy and momentum expressed as
{\displaystyle \nabla _{\beta }T^{\alpha \beta }={T^{\alpha \beta }}_{;\beta }=0.}
This conservation law is a physical requirement. With his field equations Einstein ensured that general relativity is consistent with this conservation condition.
=== Nonlinearity ===
The nonlinearity of the EFE distinguishes general relativity from many other fundamental physical theories. For example, Maxwell's equations of electromagnetism are linear in the electric and magnetic fields, and charge and current distributions (i.e. the sum of two solutions is also a solution); another example is the Schrödinger equation of quantum mechanics, which is linear in the wavefunction.
=== Correspondence principle ===
The EFE reduce to Newton's law of gravity by using both the weak-field approximation and the low-velocity approximation. The constant G appearing in the EFE is determined by making these two approximations.
== Vacuum field equations ==
If the energy–momentum tensor Tμν is zero in the region under consideration, then the field equations are also referred to as the vacuum field equations. By setting Tμν = 0 in the trace-reversed field equations, the vacuum field equations, also known as 'Einstein vacuum equations' (EVE), can be written as
{\displaystyle R_{\mu \nu }=0\,.}
In the case of nonzero cosmological constant, the equations are
{\displaystyle R_{\mu \nu }={\frac {\Lambda }{{\frac {D}{2}}-1}}g_{\mu \nu }\,.}
The solutions to the vacuum field equations are called vacuum solutions. Flat Minkowski space is the simplest example of a vacuum solution. Nontrivial examples include the Schwarzschild solution and the Kerr solution.
Manifolds with a vanishing Ricci tensor, Rμν = 0, are referred to as Ricci-flat manifolds and manifolds with a Ricci tensor proportional to the metric as Einstein manifolds.
== Einstein–Maxwell equations ==
If the energy–momentum tensor Tμν is that of an electromagnetic field in free space, i.e. if the electromagnetic stress–energy tensor
{\displaystyle T^{\alpha \beta }=\,-{\frac {1}{\mu _{0}}}\left({F^{\alpha }}^{\psi }{F_{\psi }}^{\beta }+{\tfrac {1}{4}}g^{\alpha \beta }F_{\psi \tau }F^{\psi \tau }\right)}
is used, then the Einstein field equations are called the Einstein–Maxwell equations (with cosmological constant Λ, taken to be zero in conventional relativity theory):
{\displaystyle G^{\alpha \beta }+\Lambda g^{\alpha \beta }=-{\frac {\kappa }{\mu _{0}}}\left({F^{\alpha }}^{\psi }{F_{\psi }}^{\beta }+{\tfrac {1}{4}}g^{\alpha \beta }F_{\psi \tau }F^{\psi \tau }\right).}
Additionally, the covariant Maxwell equations are also applicable in free space:
{\displaystyle {\begin{aligned}{F^{\alpha \beta }}_{;\beta }&=0\\F_{[\alpha \beta ;\gamma ]}&={\tfrac {1}{3}}\left(F_{\alpha \beta ;\gamma }+F_{\beta \gamma ;\alpha }+F_{\gamma \alpha ;\beta }\right)={\tfrac {1}{3}}\left(F_{\alpha \beta ,\gamma }+F_{\beta \gamma ,\alpha }+F_{\gamma \alpha ,\beta }\right)=0,\end{aligned}}}
where the semicolon represents a covariant derivative, and the brackets denote anti-symmetrization. The first equation asserts that the 4-divergence of the 2-form F is zero, and the second that its exterior derivative is zero. From the latter, it follows by the Poincaré lemma that in a coordinate chart it is possible to introduce an electromagnetic field potential Aα such that
{\displaystyle F_{\alpha \beta }=A_{\alpha ;\beta }-A_{\beta ;\alpha }=A_{\alpha ,\beta }-A_{\beta ,\alpha }}
in which the comma denotes a partial derivative. This is often taken as equivalent to the covariant Maxwell equation from which it is derived. However, there are global solutions of the equation that may lack a globally defined potential.
== Solutions ==
The solutions of the Einstein field equations are metrics of spacetime. These metrics describe the structure of the spacetime including the inertial motion of objects in the spacetime. As the field equations are non-linear, they cannot always be completely solved (i.e. without making approximations). For example, there is no known complete solution for a spacetime with two massive bodies in it (which is a theoretical model of a binary star system, for example). However, approximations are usually made in these cases. These are commonly referred to as post-Newtonian approximations. Even so, there are several cases where the field equations have been solved completely, and those are called exact solutions.
The study of exact solutions of Einstein's field equations is one of the activities of cosmology. It leads to the prediction of black holes and to different models of evolution of the universe.
One can also discover new solutions of the Einstein field equations via the method of orthonormal frames as pioneered by Ellis and MacCallum. In this approach, the Einstein field equations are reduced to a set of coupled, nonlinear, ordinary differential equations. As discussed by Hsu and Wainwright, self-similar solutions to the Einstein field equations are fixed points of the resulting dynamical system. New solutions have been discovered using these methods by LeBlanc and Kohli and Haslam.
== Linearized EFE ==
The nonlinearity of the EFE makes finding exact solutions difficult. One way of solving the field equations is to make an approximation, namely, that far from the source(s) of gravitating matter, the gravitational field is very weak and the spacetime approximates that of Minkowski space. The metric is then written as the sum of the Minkowski metric and a term representing the deviation of the true metric from the Minkowski metric, ignoring higher-power terms. This linearization procedure can be used to investigate the phenomena of gravitational radiation.
== Polynomial form ==
Despite the EFE as written containing the inverse of the metric tensor, they can be arranged in a form that contains the metric tensor in polynomial form and without its inverse. First, the determinant of the metric in 4 dimensions can be written
{\displaystyle \det(g)={\tfrac {1}{24}}\varepsilon ^{\alpha \beta \gamma \delta }\varepsilon ^{\kappa \lambda \mu \nu }g_{\alpha \kappa }g_{\beta \lambda }g_{\gamma \mu }g_{\delta \nu }}
using the Levi-Civita symbol; and the inverse of the metric in 4 dimensions can be written as:
{\displaystyle g^{\alpha \kappa }={\frac {{\tfrac {1}{6}}\varepsilon ^{\alpha \beta \gamma \delta }\varepsilon ^{\kappa \lambda \mu \nu }g_{\beta \lambda }g_{\gamma \mu }g_{\delta \nu }}{\det(g)}}\,.}
Substituting this expression of the inverse of the metric into the equations then multiplying both sides by a suitable power of det(g) to eliminate it from the denominator results in polynomial equations in the metric tensor and its first and second derivatives. The Einstein–Hilbert action from which the equations are derived can also be written in polynomial form by suitable redefinitions of the fields.
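Both formulas are easy to validate numerically against standard linear algebra routines. A NumPy sketch of my own for a random symmetric 4 × 4 matrix standing in for the metric (assumed non-degenerate, and treating ε as the plain permutation symbol):

```python
# Check the 4D Levi-Civita formulas for det(g) and the inverse metric.
import itertools
import numpy as np

def eps4():
    e = np.zeros((4, 4, 4, 4))
    for p in itertools.permutations(range(4)):
        s = 1
        for i in range(4):            # sign via inversion counting
            for j in range(i + 1, 4):
                if p[i] > p[j]:
                    s = -s
        e[p] = s
    return e

eps = eps4()
rng = np.random.default_rng(5)
g = rng.standard_normal((4, 4)); g = (g + g.T) / 2

det_g = np.einsum('abcd,klmn,ak,bl,cm,dn->', eps, eps, g, g, g, g) / 24
g_inv = np.einsum('abcd,klmn,bl,cm,dn->ak', eps, eps, g, g, g) / (6 * det_g)

assert np.isclose(det_g, np.linalg.det(g))
assert np.allclose(g_inv, np.linalg.inv(g))
```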
== See also ==
== Notes ==
== References ==
See General relativity resources.
Misner, Charles W.; Thorne, Kip S.; Wheeler, John Archibald (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 978-0-7167-0344-0.
Weinberg, Steven (1972). Gravitation and Cosmology. John Wiley & Sons. ISBN 0-471-92567-5.
Peacock, John A. (1999). Cosmological Physics. Cambridge University Press. ISBN 978-0521410724.
== External links ==
"Einstein equations", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Caltech Tutorial on Relativity — A simple introduction to Einstein's Field Equations.
The Meaning of Einstein's Equation — An explanation of Einstein's field equation, its derivation, and some of its consequences
Video Lecture on Einstein's Field Equations by MIT Physics Professor Edmund Bertschinger.
Arch and scaffold: How Einstein found his field equations Physics Today November 2015, History of the Development of the Field Equations
=== External images ===
The Einstein field equation on the wall of the Museum Boerhaave in downtown Leiden
Suzanne Imber, "The impact of general relativity on the Atacama Desert", Einstein field equation on the side of a train in Bolivia.
In mathematics, a tensor is an algebraic object that describes a multilinear relationship between sets of algebraic objects associated with a vector space. Tensors may map between different objects such as vectors, scalars, and even other tensors. There are many types of tensors, including scalars and vectors (which are the simplest tensors), dual vectors, multilinear maps between vector spaces, and even some operations such as the dot product. Tensors are defined independent of any basis, although they are often referred to by their components in a basis related to a particular coordinate system; those components form an array, which can be thought of as a high-dimensional matrix.
Tensors have become important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as mechanics (stress, elasticity, quantum mechanics, fluid mechanics, moment of inertia, ...), electrodynamics (electromagnetic tensor, Maxwell tensor, permittivity, magnetic susceptibility, ...), and general relativity (stress–energy tensor, curvature tensor, ...). In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example the stress within an object may vary from one location to another. This leads to the concept of a tensor field. In some areas, tensor fields are so ubiquitous that they are often simply called "tensors".
Tullio Levi-Civita and Gregorio Ricci-Curbastro popularised tensors in 1900 – continuing the earlier work of Bernhard Riemann, Elwin Bruno Christoffel, and others – as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor.
== Definition ==
Although seemingly different, the various approaches to defining tensors describe the same geometric concept using different language and at different levels of abstraction.
=== As multidimensional arrays ===
A tensor may be represented as a (potentially multidimensional) array. Just as a vector in an n-dimensional space is represented by a one-dimensional array with n components with respect to a given basis, any tensor with respect to a basis is represented by a multidimensional array. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the multidimensional array are known as the components of the tensor. They are denoted by indices giving their position in the array, as subscripts and superscripts, following the symbolic name of the tensor. For example, the components of an order-2 tensor T could be denoted Tij , where i and j are indices running from 1 to n, or also by T ij. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. Thus while Tij and T ij can both be expressed as n-by-n matrices, and are numerically related via index juggling, the difference in their transformation laws indicates it would be improper to add them together.
The total number of indices (m) required to identify each component uniquely is equal to the dimension or the number of ways of an array, which is why a tensor is sometimes referred to as an m-dimensional array or an m-way array. The total number of indices is also called the order, degree or rank of a tensor, although the term "rank" generally has another meaning in the context of matrices and tensors.
Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each type of tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis (see Covariance and contravariance of vectors), where the new basis vectors
e
^
i
{\displaystyle \mathbf {\hat {e}} _{i}}
are expressed in terms of the old basis vectors
e
j
{\displaystyle \mathbf {e} _{j}}
as,
{\displaystyle \mathbf {\hat {e}} _{i}=\sum _{j=1}^{n}\mathbf {e} _{j}R_{i}^{j}=\mathbf {e} _{j}R_{i}^{j}.}
Here R ji are the entries of the change of basis matrix, and in the rightmost expression the summation sign was suppressed: this is the Einstein summation convention, which will be used throughout this article. The components vi of a column vector v transform with the inverse of the matrix R,
{\displaystyle {\hat {v}}^{i}=\left(R^{-1}\right)_{j}^{i}v^{j},}
where the hat denotes the components in the new basis. This is called a contravariant transformation law, because the vector components transform by the inverse of the change of basis. In contrast, the components, wi, of a covector (or row vector), w, transform with the matrix R itself,
{\displaystyle {\hat {w}}_{i}=w_{j}R_{i}^{j}.}
This is called a covariant transformation law, because the covector components transform by the same matrix as the change of basis matrix. The components of a more general tensor are transformed by some combination of covariant and contravariant transformations, with one transformation law for each index. If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is called contravariant and is conventionally denoted with an upper index (superscript). If the transformation matrix of an index is the basis transformation itself, then the index is called covariant and is denoted with a lower index (subscript).
As a simple example, the matrix of a linear operator with respect to a basis is a rectangular array T that transforms under a change of basis matrix R = (Rji) by T̂ = R−1TR. For the individual matrix entries, this transformation law has the form
{\displaystyle {\hat {T}}_{j'}^{i'}=\left(R^{-1}\right)_{i}^{i'}T_{j}^{i}R_{j'}^{j}}
so the tensor corresponding to the matrix of a linear operator has one covariant and one contravariant index: it is of type (1,1).
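A minimal numerical sketch of this law (my own illustration, not from the article): transforming the operator's matrix by R−1TR and the vector contravariantly keeps the operator's action consistent across bases.

```python
# A (1,1)-tensor (linear operator) transforms as T_hat = R^-1 T R.
import numpy as np

rng = np.random.default_rng(6)
n = 3
T = rng.standard_normal((n, n))      # operator components in the old basis
v = rng.standard_normal(n)           # contravariant vector components
R = rng.standard_normal((n, n))      # change-of-basis matrix (invertible)
R_inv = np.linalg.inv(R)

T_hat = R_inv @ T @ R                # new components of the operator
v_hat = R_inv @ v                    # contravariant transformation
# the components of Tv transform contravariantly as well:
assert np.allclose(T_hat @ v_hat, R_inv @ (T @ v))
```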
Combinations of covariant and contravariant components with the same index allow us to express geometric invariants. For example, the fact that a vector is the same object in different coordinate systems can be captured by the following equations, using the formulas defined above:
{\displaystyle \mathbf {v} ={\hat {v}}^{i}\,\mathbf {\hat {e}} _{i}=\left(\left(R^{-1}\right)_{j}^{i}{v}^{j}\right)\left(\mathbf {e} _{k}R_{i}^{k}\right)=\left(\left(R^{-1}\right)_{j}^{i}R_{i}^{k}\right){v}^{j}\mathbf {e} _{k}=\delta _{j}^{k}{v}^{j}\mathbf {e} _{k}={v}^{k}\,\mathbf {e} _{k}={v}^{i}\,\mathbf {e} _{i},}
where δkj is the Kronecker delta, which functions similarly to the identity matrix, and has the effect of renaming indices (j into k in this example). This shows several features of the component notation: the ability to re-arrange terms at will (commutativity), the need to use different indices when working with multiple objects in the same expression, the ability to rename indices, and the manner in which contravariant and covariant tensors combine so that all instances of the transformation matrix and its inverse cancel, so that expressions like viei can immediately be seen to be geometrically identical in all coordinate systems.
Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations. That is, the components
(Tv)i are given by (Tv)i = Tijvj. These components transform contravariantly, since
{\displaystyle \left({\widehat {Tv}}\right)^{i'}={\hat {T}}_{j'}^{i'}{\hat {v}}^{j'}=\left[\left(R^{-1}\right)_{i}^{i'}T_{j}^{i}R_{j'}^{j}\right]\left[\left(R^{-1}\right)_{k}^{j'}v^{k}\right]=\left(R^{-1}\right)_{i}^{i'}(Tv)^{i}.}
The transformation law for an order p + q tensor with p contravariant indices and q covariant indices is thus given as,
{\displaystyle {\hat {T}}_{j'_{1},\ldots ,j'_{q}}^{i'_{1},\ldots ,i'_{p}}=\left(R^{-1}\right)_{i_{1}}^{i'_{1}}\cdots \left(R^{-1}\right)_{i_{p}}^{i'_{p}}\,T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots ,i_{p}}\,R_{j'_{1}}^{j_{1}}\cdots R_{j'_{q}}^{j_{q}}.}
Here the primed indices denote components in the new coordinates, and the unprimed indices denote the components in the old coordinates. Such a tensor is said to be of order or type (p, q). The terms "order", "type", "rank", "valence", and "degree" are all sometimes used for the same concept. Here, the term "order" or "total order" will be used for the total dimension of the array (or its generalization in other definitions), p + q in the preceding example, and the term "type" for the pair giving the number of contravariant and covariant indices. A tensor of type (p, q) is also called a (p, q)-tensor for short.
This discussion motivates the following formal definition:
Definition. A tensor of type (p, q) is an assignment of a multidimensional array
{\displaystyle T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}[\mathbf {f} ]}
to each basis f = (e1, ..., en) of an n-dimensional vector space such that, if we apply the change of basis
{\displaystyle \mathbf {f} \mapsto \mathbf {f} \cdot R=\left(\mathbf {e} _{i}R_{1}^{i},\dots ,\mathbf {e} _{i}R_{n}^{i}\right)}
then the multidimensional array obeys the transformation law
{\displaystyle T_{j'_{1}\dots j'_{q}}^{i'_{1}\dots i'_{p}}[\mathbf {f} \cdot R]=\left(R^{-1}\right)_{i_{1}}^{i'_{1}}\cdots \left(R^{-1}\right)_{i_{p}}^{i'_{p}}\,T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots ,i_{p}}[\mathbf {f} ]\,R_{j'_{1}}^{j_{1}}\cdots R_{j'_{q}}^{j_{q}}.}
The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci.
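The general law is convenient to express with np.einsum. A sketch of my own for a type (1, 2) tensor; transforming back with the inverse data recovers the original components:

```python
# Type (1,2) transformation: T'^{a}_{bc} = (R^-1)^a_i T^i_{jk} R^j_b R^k_c.
import numpy as np

rng = np.random.default_rng(7)
n = 3
T = rng.standard_normal((n, n, n))   # one contravariant, two covariant indices
R = rng.standard_normal((n, n))      # invertible change-of-basis matrix
R_inv = np.linalg.inv(R)

T_new = np.einsum('ai,ijk,jb,kc->abc', R_inv, T, R, R)

# applying the inverse transformation recovers the original components
T_back = np.einsum('ai,ijk,jb,kc->abc', R, T_new, R_inv, R_inv)
assert np.allclose(T_back, T)
```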
An equivalent definition of a tensor uses the representations of the general linear group. There is an action of the general linear group on the set of all ordered bases of an n-dimensional vector space. If
f = (f1, ..., fn) is an ordered basis, and R = (Rij) is an invertible n × n matrix, then the action is given by
{\displaystyle \mathbf {f} R=\left(\mathbf {f} _{i}R_{1}^{i},\dots ,\mathbf {f} _{i}R_{n}^{i}\right).}
Let F be the set of all ordered bases. Then F is a principal homogeneous space for GL(n). Let W be a vector space and let
ρ be a representation of GL(n) on W (that is, a group homomorphism ρ : GL(n) → GL(W)). Then a tensor of type ρ is an equivariant map T : F → W. Equivariance here means that
{\displaystyle T(FR)=\rho \left(R^{-1}\right)T(F).}
When ρ is a tensor representation of the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds, and readily generalizes to other groups.
=== As multilinear maps ===
A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common in differential geometry is to define tensors relative to a fixed (finite-dimensional) vector space V, which is usually taken to be a particular vector space of some geometrical significance like the tangent space to a manifold. In this approach, a type (p, q) tensor T is defined as a multilinear map,
{\displaystyle T:\underbrace {V^{*}\times \dots \times V^{*}} _{p{\text{ copies}}}\times \underbrace {V\times \dots \times V} _{q{\text{ copies}}}\rightarrow \mathbf {R} ,}
where V∗ is the corresponding dual space of covectors, which is linear in each of its arguments. The above assumes V is a vector space over the real numbers, ℝ. More generally, V can be taken over any field F (e.g. the complex numbers), with F replacing ℝ as the codomain of the multilinear maps.
By applying a multilinear map T of type (p, q) to a basis {ej} for V and a canonical cobasis {εi} for V∗,
{\displaystyle T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\equiv T\left({\boldsymbol {\varepsilon }}^{i_{1}},\ldots ,{\boldsymbol {\varepsilon }}^{i_{p}},\mathbf {e} _{j_{1}},\ldots ,\mathbf {e} _{j_{q}}\right),}
a (p + q)-dimensional array of components can be obtained. A different choice of basis will yield different components. But, because T is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of T thus form a tensor according to that definition. Moreover, such an array can be realized as the components of some multilinear map T. This motivates viewing multilinear maps as the intrinsic objects underlying tensors.
In viewing a tensor as a multilinear map, it is conventional to identify the double dual V∗∗ of the vector space V, i.e., the space of linear functionals on the dual vector space V∗, with the vector space V. There is always a natural linear map from V to its double dual, given by evaluating a linear form in V∗ against a vector in V. This linear mapping is an isomorphism in finite dimensions, and it is often then expedient to identify V with its double dual.
=== Using tensor products ===
For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements of tensor products of vector spaces, which in turn are defined through a universal property.
A type (p, q) tensor is defined in this context as an element of the tensor product of vector spaces,
{\displaystyle T\in \underbrace {V\otimes \dots \otimes V} _{p{\text{ copies}}}\otimes \underbrace {V^{*}\otimes \dots \otimes V^{*}} _{q{\text{ copies}}}.}
A basis vi of V and basis wj of W naturally induce a basis vi ⊗ wj of the tensor product V ⊗ W. The components of a tensor T are the coefficients of the tensor with respect to the basis obtained from a basis {ei} for V and its dual basis {εj}, i.e.
{\displaystyle T=T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\;\mathbf {e} _{i_{1}}\otimes \cdots \otimes \mathbf {e} _{i_{p}}\otimes {\boldsymbol {\varepsilon }}^{j_{1}}\otimes \cdots \otimes {\boldsymbol {\varepsilon }}^{j_{q}}.}
Using the properties of the tensor product, it can be shown that these components satisfy the transformation law for a type (p, q) tensor. Moreover, the universal property of the tensor product gives a one-to-one correspondence between tensors defined in this way and tensors defined as multilinear maps.
This one-to-one correspondence can be realized as follows, because in the finite-dimensional case there exists a canonical isomorphism between a vector space and its double dual:
{\displaystyle U\otimes V\cong \left(U^{**}\right)\otimes \left(V^{**}\right)\cong \left(U^{*}\otimes V^{*}\right)^{*}\cong \operatorname {Hom} ^{2}\left(U^{*}\times V^{*};\mathbb {F} \right)}
The last isomorphism uses the universal property of the tensor product: there is a one-to-one correspondence between {\displaystyle \operatorname {Hom} ^{2}\left(U^{*}\times V^{*};\mathbb {F} \right)} and {\displaystyle \operatorname {Hom} \left(U^{*}\otimes V^{*};\mathbb {F} \right)}.
Tensor products can be defined in great generality – for example, involving arbitrary modules over a ring. In principle, one could define a "tensor" simply to be an element of any tensor product. However, the mathematics literature usually reserves the term tensor for an element of a tensor product of any number of copies of a single vector space V and its dual, as above.
=== Tensors in infinite dimensions ===
This discussion of tensors so far assumes finite dimensionality of the spaces involved, where the spaces of tensors obtained by each of these constructions are naturally isomorphic. Constructions of spaces of tensors based on the tensor product and multilinear mappings can be generalized, essentially without modification, to vector bundles or coherent sheaves. For infinite-dimensional vector spaces, inequivalent topologies lead to inequivalent notions of tensor, and these various isomorphisms may or may not hold depending on what exactly is meant by a tensor (see topological tensor product). In some applications, it is the tensor product of Hilbert spaces that is intended, whose properties are the most similar to the finite-dimensional case. A more modern view is that it is the tensors' structure as a symmetric monoidal category that encodes their most important properties, rather than the specific models of those categories.
=== Tensor fields ===
In many applications, especially in differential geometry and physics, it is natural to consider a tensor with components that are functions of the point in a space. This was the setting of Ricci's original work. In modern mathematical terminology such an object is called a tensor field, often referred to simply as a tensor.
In this context, a coordinate basis is often chosen for the tangent vector space. The transformation law may then be expressed in terms of partial derivatives of the coordinate functions,
{\displaystyle {\bar {x}}^{i}\left(x^{1},\ldots ,x^{n}\right),}
defining a coordinate transformation,
{\displaystyle {\hat {T}}_{j'_{1}\dots j'_{q}}^{i'_{1}\dots i'_{p}}\left({\bar {x}}^{1},\ldots ,{\bar {x}}^{n}\right)={\frac {\partial {\bar {x}}^{i'_{1}}}{\partial x^{i_{1}}}}\cdots {\frac {\partial {\bar {x}}^{i'_{p}}}{\partial x^{i_{p}}}}{\frac {\partial x^{j_{1}}}{\partial {\bar {x}}^{j'_{1}}}}\cdots {\frac {\partial x^{j_{q}}}{\partial {\bar {x}}^{j'_{q}}}}T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\left(x^{1},\ldots ,x^{n}\right).}
== History ==
The concepts of later tensor analysis arose from the work of Carl Friedrich Gauss in differential geometry, and the formulation was much influenced by the theory of algebraic forms and invariants developed during the middle of the nineteenth century. The word "tensor" itself was introduced in 1846 by William Rowan Hamilton to describe something different from what is now meant by a tensor. Gibbs introduced dyadics and polyadic algebra, which are also tensors in the modern sense. The contemporary usage was introduced by Woldemar Voigt in 1898.
Tensor calculus was developed around 1890 by Gregorio Ricci-Curbastro under the title absolute differential calculus, and originally presented in 1892. It was made accessible to many mathematicians by the publication of Ricci-Curbastro and Tullio Levi-Civita's 1900 classic text Méthodes de calcul différentiel absolu et leurs applications (Methods of absolute differential calculus and their applications). In Ricci's notation, he refers to "systems" with covariant and contravariant components, which are known as tensor fields in the modern sense.
In the 20th century, the subject came to be known as tensor analysis, and achieved broader acceptance with the introduction of Albert Einstein's theory of general relativity, around 1915. General relativity is formulated completely in the language of tensors. Einstein had learned about them, with great difficulty, from the geometer Marcel Grossmann. Levi-Civita then initiated a correspondence with Einstein to correct mistakes Einstein had made in his use of tensor analysis. The correspondence lasted 1915–17, and was characterized by mutual respect:
I admire the elegance of your method of computation; it must be nice to ride through these fields upon the horse of true mathematics while the like of us have to make our way laboriously on foot.
Tensors and tensor fields were also found to be useful in other fields such as continuum mechanics. Some well-known examples of tensors in differential geometry are quadratic forms such as metric tensors, and the Riemann curvature tensor. The exterior algebra of Hermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory of differential forms, as naturally unified with tensor calculus. The work of Élie Cartan made differential forms one of the basic kinds of tensors used in mathematics, and Hassler Whitney popularized the tensor product.
From about the 1920s onwards, it was realised that tensors play a basic role in algebraic topology (for example in the Künneth theorem). Correspondingly there are types of tensors at work in many branches of abstract algebra, particularly in homological algebra and representation theory. Multilinear algebra can be developed in greater generality than for scalars coming from a field. For example, scalars can come from a ring. But the theory is then less geometric and computations more technical and less algorithmic. Tensors are generalized within category theory by means of the concept of monoidal category, from the 1960s.
== Examples ==
An elementary example of a mapping describable as a tensor is the dot product, which maps two vectors to a scalar. A more complex example is the Cauchy stress tensor T, which takes a directional unit vector v as input and maps it to the stress vector T(v), the force (per unit area) exerted by material on the negative side of the plane orthogonal to v against the material on the positive side, thus expressing a relationship between these two vectors. The cross product, where two vectors are mapped to a third one, is strictly speaking not a tensor because it changes its sign under those transformations that change the orientation of the coordinate system. The totally anti-symmetric symbol
{\displaystyle \varepsilon _{ijk}}
nevertheless allows a convenient handling of the cross product in equally oriented three dimensional coordinate systems.
Important examples of tensors on vector spaces and tensor fields on manifolds are classified according to their type (n, m), where n is the number of contravariant indices, m is the number of covariant indices, and n + m gives the total order of the tensor. For example, a bilinear form is the same thing as a (0, 2)-tensor; an inner product is an example of a (0, 2)-tensor, but not all (0, 2)-tensors are inner products. For a maximally covariant antisymmetric tensor of type (0, M), M equals the dimensionality of the underlying vector space or manifold, because a separate index is needed to select each dimension of the space.
Raising an index on an (n, m)-tensor produces an (n + 1, m − 1)-tensor; symmetrically, lowering an index produces an (n − 1, m + 1)-tensor. Contraction of an upper with a lower index of an (n, m)-tensor produces an (n − 1, m − 1)-tensor.
== Properties ==
Assuming a basis of a real vector space, e.g., a coordinate frame in the ambient space, a tensor can be represented as an organized multidimensional array of numerical values with respect to this specific basis. Changing the basis transforms the values in the array in a characteristic way that allows tensors to be defined as objects adhering to this transformational behavior. For example, there are invariants of tensors that must be preserved under any change of the basis, thereby making only certain multidimensional arrays of numbers a tensor. Compare this to the array representing
{\displaystyle \varepsilon _{ijk}}, which is not a tensor, because its sign changes under transformations that reverse the orientation.
Because the components of vectors and their duals transform differently under the change of their dual bases, there is a covariant and/or contravariant transformation law that relates the arrays, which represent the tensor with respect to one basis and that with respect to the other one. The numbers of, respectively, vectors: n (contravariant indices) and dual vectors: m (covariant indices) in the input and output of a tensor determine the type (or valence) of the tensor, a pair of natural numbers (n, m), which determine the precise form of the transformation law. The order of a tensor is the sum of these two numbers.
The order (also degree or rank) of a tensor is thus the sum of the orders of its arguments plus the order of the resulting tensor. This is also the dimensionality of the array of numbers needed to represent the tensor with respect to a specific basis, or equivalently, the number of indices needed to label each component in that array. For example, in a fixed basis, a standard linear map that maps a vector to a vector is represented by a matrix (a 2-dimensional array), and is therefore a 2nd-order tensor. A simple vector can be represented as a 1-dimensional array, and is therefore a 1st-order tensor. Scalars are simple numbers and are thus 0th-order tensors. In this way the tensor representing the scalar product, taking two vectors and yielding a scalar, has order 2 + 0 = 2, the same as the stress tensor, which takes one vector and returns another: 1 + 1 = 2. The
{\displaystyle \varepsilon _{ijk}}
-symbol, mapping two vectors to one vector, would have order 2 + 1 = 3.
The collection of tensors on a vector space and its dual forms a tensor algebra, which allows products of arbitrary tensors. Simple applications of tensors of order 2, which can be represented as a square matrix, can be solved by clever arrangement of transposed vectors and by applying the rules of matrix multiplication, but the tensor product should not be confused with this.
== Notation ==
There are several notational systems that are used to describe tensors and perform calculations involving them.
=== Ricci calculus ===
Ricci calculus is the modern formalism and notation for tensor indices: indicating inner and outer products, covariance and contravariance, summations of tensor components, symmetry and antisymmetry, and partial and covariant derivatives.
=== Einstein summation convention ===
The Einstein summation convention dispenses with writing summation signs, leaving the summation implicit. Any repeated index symbol is summed over: if the index i is used twice in a given term of a tensor expression, it means that the term is to be summed for all i. Several distinct pairs of indices may be summed this way.
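As an illustration (a minimal sketch, not part of the convention itself; the arrays and index strings below are arbitrary choices), the summation convention corresponds directly to index-contraction routines in numerical libraries:

import numpy as np

A = np.arange(9.0).reshape(3, 3)   # components A^i_j in a fixed basis
v = np.array([1.0, 2.0, 3.0])      # components v^j

# w^i = A^i_j v^j: the repeated index j is summed over implicitly.
w = np.einsum('ij,j->i', A, v)
assert np.allclose(w, A @ v)

# Two distinct repeated pairs summed in one term: s = A^i_j B^j_i.
B = np.ones((3, 3))
s = np.einsum('ij,ji->', A, B)
assert np.isclose(s, np.trace(A @ B))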
=== Penrose graphical notation ===
Penrose graphical notation is a diagrammatic notation which replaces the symbols for tensors with shapes, and their indices by lines and curves. It is independent of basis elements, and requires no symbols for the indices.
=== Abstract index notation ===
The abstract index notation is a way to write tensors such that the indices are no longer thought of as numerical, but rather are indeterminates. This notation captures the expressiveness of indices and the basis-independence of index-free notation.
=== Component-free notation ===
A component-free treatment of tensors uses notation that emphasises that tensors do not rely on any basis, and is defined in terms of the tensor product of vector spaces.
== Operations ==
There are several operations on tensors that again produce a tensor. The linear nature of tensors implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the scaling of a vector. On components, these operations are simply performed component-wise. These operations do not change the type of the tensor; but there are also operations that produce a tensor of different type.
=== Tensor product ===
The tensor product takes two tensors, S and T, and produces a new tensor, S ⊗ T, whose order is the sum of the orders of the original tensors. When described as multilinear maps, the tensor product simply multiplies the two tensors, i.e.,
{\displaystyle (S\otimes T)(v_{1},\ldots ,v_{n},v_{n+1},\ldots ,v_{n+m})=S(v_{1},\ldots ,v_{n})T(v_{n+1},\ldots ,v_{n+m}),}
which again produces a map that is linear in all its arguments. On components, the effect is to multiply the components of the two input tensors pairwise, i.e.,
{\displaystyle (S\otimes T)_{j_{1}\ldots j_{k}j_{k+1}\ldots j_{k+m}}^{i_{1}\ldots i_{l}i_{l+1}\ldots i_{l+n}}=S_{j_{1}\ldots j_{k}}^{i_{1}\ldots i_{l}}T_{j_{k+1}\ldots j_{k+m}}^{i_{l+1}\ldots i_{l+n}}.}
If S is of type (l, k) and T is of type (n, m), then the tensor product S ⊗ T has type (l + n, k + m).
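As a numerical sketch (the shapes and values below are arbitrary stand-ins), the pairwise multiplication of components can be checked directly:

import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((2, 2))   # components S^i_j of a (1, 1)-tensor
T = rng.standard_normal((2,))     # components T^k of a (1, 0)-tensor

# (S ⊗ T)^{ik}_j = S^i_j T^k: components of the product multiply pairwise.
P = np.einsum('ij,k->ijk', S, T)
assert P.shape == (2, 2, 2)
assert np.isclose(P[1, 0, 1], S[1, 0] * T[1])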
=== Contraction ===
Tensor contraction is an operation that reduces a type (n, m) tensor to a type (n − 1, m − 1) tensor, of which the trace is a special case. It thereby reduces the total order of a tensor by two. The operation is achieved by summing components for which one specified contravariant index is the same as one specified covariant index to produce a new component. Components for which those two indices are different are discarded. For example, a (1, 1)-tensor
{\displaystyle T_{i}^{j}}
can be contracted to a scalar through
{\displaystyle T_{i}^{i}}
, where the summation is again implied. When the (1, 1)-tensor is interpreted as a linear map, this operation is known as the trace.
The contraction is often used in conjunction with the tensor product to contract an index from each tensor.
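Both uses can be sketched numerically (an illustrative example with arbitrary values, not part of the original text):

import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))   # S^i_j, a (1, 1)-tensor
T = rng.standard_normal((3, 3))   # T^j_k, a (1, 1)-tensor

# Contracting a (1, 1)-tensor over its two indices gives the trace.
assert np.isclose(np.einsum('ii->', S), np.trace(S))

# Tensor product followed by contraction of one index from each tensor
# reproduces composition of the linear maps (matrix multiplication).
C = np.einsum('ij,jk->ik', S, T)
assert np.allclose(C, S @ T)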
The contraction can also be understood using the definition of a tensor as an element of a tensor product of copies of the space V with the space V∗ by first decomposing the tensor into a linear combination of simple tensors, and then applying a factor from V∗ to a factor from V. For example, a tensor
{\displaystyle T\in V\otimes V\otimes V^{*}}
can be written as a linear combination
{\displaystyle T=v_{1}\otimes w_{1}\otimes \alpha _{1}+v_{2}\otimes w_{2}\otimes \alpha _{2}+\cdots +v_{N}\otimes w_{N}\otimes \alpha _{N}.}
The contraction of T on the first and last slots is then the vector
{\displaystyle \alpha _{1}(v_{1})w_{1}+\alpha _{2}(v_{2})w_{2}+\cdots +\alpha _{N}(v_{N})w_{N}.}
In a vector space with an inner product (also known as a metric) g, the term contraction is used for removing two contravariant or two covariant indices by forming a trace with the metric tensor or its inverse. For example, a (2, 0)-tensor
{\displaystyle T^{ij}}
can be contracted to a scalar through
{\displaystyle T^{ij}g_{ij}}
(yet again assuming the summation convention).
=== Raising or lowering an index ===
When a vector space is equipped with a nondegenerate bilinear form (or metric tensor as it is often called in this context), operations can be defined that convert a contravariant (upper) index into a covariant (lower) index and vice versa. A metric tensor is a (symmetric) (0, 2)-tensor; it is thus possible to contract an upper index of a tensor with one of the lower indices of the metric tensor in the product. This produces a new tensor with the same index structure as the previous tensor, but with the lower index generally shown in the same position as the contracted upper index. This operation is graphically known as lowering an index.
Conversely, the inverse operation can be defined, and is called raising an index. This is equivalent to a similar contraction on the product with a (2, 0)-tensor. This inverse metric tensor has components that are the matrix inverse of those of the metric tensor.
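A minimal numerical sketch of the two operations; the Minkowski metric below is just one convenient choice of nondegenerate metric, used here for illustration only:

import numpy as np

# Minkowski metric with signature (-, +, +, +), components g_{ij}.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)                 # components g^{ij} of the inverse metric

v_up = np.array([2.0, 1.0, 0.0, 3.0])    # contravariant components v^i

v_down = g @ v_up                        # lowering: v_j = g_{ji} v^i
v_back = g_inv @ v_down                  # raising:  v^i = g^{ij} v_j
assert np.allclose(v_back, v_up)         # the two operations are mutually inverse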
== Applications ==
=== Continuum mechanics ===
Important examples are provided by continuum mechanics. The stresses inside a solid body or fluid are described by a tensor field. The stress tensor and strain tensor are both second-order tensor fields, and are related in a general linear elastic material by a fourth-order elasticity tensor field. In detail, the tensor quantifying stress in a 3-dimensional solid object has components that can be conveniently represented as a 3 × 3 array. The three faces of a cube-shaped infinitesimal volume segment of the solid are each subject to some given force. The force's vector components are also three in number. Thus, 3 × 3, or 9 components are required to describe the stress at this cube-shaped infinitesimal segment. Within the bounds of this solid is a whole mass of varying stress quantities, each requiring 9 quantities to describe. Thus, a second-order tensor is needed.
If a particular surface element inside the material is singled out, the material on one side of the surface will apply a force on the other side. In general, this force will not be orthogonal to the surface, but it will depend on the orientation of the surface in a linear manner. This is described by a tensor of type (2, 0), in linear elasticity, or more precisely by a tensor field of type (2, 0), since the stresses may vary from point to point.
=== Other examples from physics ===
Common applications include:
Electromagnetic tensor (or Faraday tensor) in electromagnetism
Finite deformation tensors for describing deformations and strain tensor for strain in continuum mechanics
Permittivity and electric susceptibility are tensors in anisotropic media
Four-tensors in general relativity (e.g. stress–energy tensor), used to represent momentum fluxes
Spherical tensor operators are the eigenfunctions of the quantum angular momentum operator in spherical coordinates
Diffusion tensors, the basis of diffusion tensor imaging, represent rates of diffusion in biological environments
Quantum mechanics and quantum computing utilize tensor products for combination of quantum states
=== Computer vision and optics ===
The concept of a tensor of order two is often conflated with that of a matrix. Tensors of higher order do however capture ideas important in science and engineering, as has been shown successively in numerous areas as they develop. This happens, for instance, in the field of computer vision, with the trifocal tensor generalizing the fundamental matrix.
The field of nonlinear optics studies the changes to material polarization density under extreme electric fields. The polarization waves generated are related to the generating electric fields through the nonlinear susceptibility tensor. If the polarization P is not linearly proportional to the electric field E, the medium is termed nonlinear. To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is given by a Taylor series in E whose coefficients are the nonlinear susceptibilities:
{\displaystyle {\frac {P_{i}}{\varepsilon _{0}}}=\sum _{j}\chi _{ij}^{(1)}E_{j}+\sum _{jk}\chi _{ijk}^{(2)}E_{j}E_{k}+\sum _{jk\ell }\chi _{ijk\ell }^{(3)}E_{j}E_{k}E_{\ell }+\cdots .}
Here {\displaystyle \chi ^{(1)}} is the linear susceptibility, {\displaystyle \chi ^{(2)}} gives the Pockels effect and second harmonic generation, and {\displaystyle \chi ^{(3)}} gives the Kerr effect. This expansion shows the way higher-order tensors arise naturally in the subject matter.
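As a rough numerical sketch of evaluating the truncated series (the susceptibility arrays below are random stand-ins, not physical values), each term is a contraction of a susceptibility tensor against copies of the field:

import numpy as np

rng = np.random.default_rng(0)
chi1 = rng.standard_normal((3, 3))      # hypothetical chi^(1)_{ij}
chi2 = rng.standard_normal((3, 3, 3))   # hypothetical chi^(2)_{ijk}
E = np.array([0.1, 0.0, 0.2])           # a sample field, arbitrary units

# P_i / eps0 = chi1_{ij} E_j + chi2_{ijk} E_j E_k + ... (series truncated here).
P_over_eps0 = (np.einsum('ij,j->i', chi1, E)
               + np.einsum('ijk,j,k->i', chi2, E, E))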
=== Machine learning ===
The properties of tensors, especially tensor decomposition, have enabled their use in machine learning to embed higher dimensional data in artificial neural networks. This notion of tensor differs significantly from that in other areas of mathematics and physics, in the sense that a tensor is usually regarded as a numerical quantity in a fixed basis, and the dimension of the spaces along the different axes of the tensor need not be the same.
== Generalizations ==
=== Tensor products of vector spaces ===
The vector spaces of a tensor product need not be the same, and sometimes the elements of such a more general tensor product are called "tensors". For example, an element of the tensor product space V ⊗ W is a second-order "tensor" in this more general sense, and an order-d tensor may likewise be defined as an element of a tensor product of d different vector spaces. A type (n, m) tensor, in the sense defined previously, is also a tensor of order n + m in this more general sense. The concept of tensor product can be extended to arbitrary modules over a ring.
=== Tensors in infinite dimensions ===
The notion of a tensor can be generalized in a variety of ways to infinite dimensions. One, for instance, is via the tensor product of Hilbert spaces. Another way of generalizing the idea of tensor, common in nonlinear analysis, is via the multilinear maps definition where instead of using finite-dimensional vector spaces and their algebraic duals, one uses infinite-dimensional Banach spaces and their continuous dual. Tensors thus live naturally on Banach manifolds and Fréchet manifolds.
=== Tensor densities ===
Suppose that a homogeneous medium fills R³, so that the density of the medium is described by a single scalar value ρ in kg⋅m⁻³. The mass, in kg, of a region Ω is obtained by multiplying ρ by the volume of the region Ω, or equivalently integrating the constant ρ over the region:
{\displaystyle m=\int _{\Omega }\rho \,dx\,dy\,dz,}
where the Cartesian coordinates x, y, z are measured in m. If the units of length are changed into cm, then the numerical values of the coordinate functions must be rescaled by a factor of 100:
{\displaystyle x'=100x,\quad y'=100y,\quad z'=100z.}
The numerical value of the density ρ must then also transform by {\displaystyle 100^{-3}} to compensate, so that the numerical value of the mass in kg is still given by the integral of {\displaystyle \rho \,dx\,dy\,dz}. Thus {\displaystyle \rho '=100^{-3}\rho } (in units of kg⋅cm⁻³).
More generally, if the Cartesian coordinates x, y, z undergo a linear transformation, then the numerical value of the density ρ must change by a factor of the reciprocal of the absolute value of the determinant of the coordinate transformation, so that the integral remains invariant, by the change of variables formula for integration. Such a quantity that scales by the reciprocal of the absolute value of the determinant of the coordinate transition map is called a scalar density. To model a non-constant density, ρ is a function of the variables x, y, z (a scalar field), and under a curvilinear change of coordinates, it transforms by the reciprocal of the Jacobian of the coordinate change. For more on the intrinsic meaning, see Density on a manifold.
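A small numerical check of the scalar-density rule (the values are illustrative):

import math

rho = 1000.0                  # density in kg/m^3
side_m = 0.5                  # cube edge in m
mass = rho * side_m**3        # mass in kg

# Change units m -> cm: the coordinate map is R = 100 I, so det R = 100^3.
side_cm = 100.0 * side_m
rho_cm = rho / abs(100.0**3)  # a scalar density picks up |det R|^{-1}
assert math.isclose(mass, rho_cm * side_cm**3)   # the mass is unchanged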
A tensor density transforms like a tensor under a coordinate change, except that it in addition picks up a factor of the absolute value of the determinant of the coordinate transition:
{\displaystyle T_{j'_{1}\dots j'_{q}}^{i'_{1}\dots i'_{p}}[\mathbf {f} \cdot R]=\left|\det R\right|^{-w}\left(R^{-1}\right)_{i_{1}}^{i'_{1}}\cdots \left(R^{-1}\right)_{i_{p}}^{i'_{p}}T_{j_{1},\ldots ,j_{q}}^{i_{1},\ldots ,i_{p}}[\mathbf {f} ]R_{j'_{1}}^{j_{1}}\cdots R_{j'_{q}}^{j_{q}}.}
Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor. An example of a tensor density is the current density of electromagnetism.
Under an affine transformation of the coordinates, a tensor transforms by the linear part of the transformation itself (or its inverse) on each index. These come from the rational representations of the general linear group. But this is not quite the most general linear transformation law that such an object may have: tensor densities are non-rational, but are still semisimple representations. A further class of transformations come from the logarithmic representation of the general linear group, a reducible but not semisimple representation, consisting of an (x, y) ∈ R2 with the transformation law
{\displaystyle (x,y)\mapsto (x+y\log \left|\det R\right|,y).}
=== Geometric objects ===
The transformation law for a tensor behaves as a functor on the category of admissible coordinate systems, under general linear transformations (or, other transformations within some class, such as local diffeomorphisms). This makes a tensor a special case of a geometrical object, in the technical sense that it is a function of the coordinate system transforming functorially under coordinate changes. Examples of objects obeying more general kinds of transformation laws are jets and, more generally still, natural bundles.
=== Spinors ===
When changing from one orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not simply connected (see orientation entanglement and plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1. A spinor is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant.
Spinors are elements of the spin representation of the rotation group, while tensors are elements of its tensor representations. Other classical groups have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well.
== See also ==
The dictionary definition of tensor at Wiktionary
Array data type, for tensor storage and manipulation
Bitensor
This article incorporates material from tensor on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
| Wikipedia/Application_of_tensor_theory_in_engineering |
In mechanics, strain is defined as relative deformation, compared to a reference position configuration. Different equivalent choices may be made for the expression of a strain field depending on whether it is defined with respect to the initial or the final configuration of the body and on whether the metric tensor or its dual is considered.
Strain has dimension of a length ratio, with SI base units of meter per meter (m/m).
Hence strains are dimensionless and are usually expressed as a decimal fraction or a percentage.
Parts-per notation is also used, e.g., parts per million or parts per billion (sometimes called "microstrains" and "nanostrains", respectively), corresponding to μm/m and nm/m.
Strain can be formulated as the spatial derivative of displacement:
{\displaystyle {\boldsymbol {\varepsilon }}\doteq {\cfrac {\partial }{\partial \mathbf {X} }}\left(\mathbf {x} -\mathbf {X} \right)={\boldsymbol {F}}'-{\boldsymbol {I}},}
where I is the identity tensor.
The displacement of a body may be expressed in the form x = F(X), where X is the reference position of material points of the body;
displacement has units of length and does not distinguish between rigid body motions (translations and rotations) and deformations (changes in shape and size) of the body.
The spatial derivative of a uniform translation is zero, thus strains measure how much a given displacement differs locally from a rigid-body motion.
A strain is in general a tensor quantity. Physical insight into strains can be gained by observing that a given strain can be decomposed into normal and shear components. The amount of stretch or compression along material line elements or fibers is the normal strain, and the amount of distortion associated with the sliding of plane layers over each other is the shear strain, within a deforming body. Strain may be produced by elongation, shortening, volume change, or angular distortion.
The state of strain at a material point of a continuum body is defined as the totality of all the changes in length of material lines or fibers, the normal strain, which pass through that point and also the totality of all the changes in the angle between pairs of lines initially perpendicular to each other, the shear strain, radiating from this point. However, it is sufficient to know the normal and shear components of strain on a set of three mutually perpendicular directions.
If there is an increase in length of the material line, the normal strain is called tensile strain; otherwise, if there is reduction or compression in the length of the material line, it is called compressive strain.
== Strain regimes ==
Depending on the amount of strain, or local deformation, the analysis of deformation is subdivided into three deformation theories:
Finite strain theory, also called large strain theory, large deformation theory, deals with deformations in which both rotations and strains are arbitrarily large. In this case, the undeformed and deformed configurations of the continuum are significantly different and a clear distinction has to be made between them. This is commonly the case with elastomers, plastically-deforming materials and other fluids and biological soft tissue.
Infinitesimal strain theory, also called small strain theory, small deformation theory, small displacement theory, or small displacement-gradient theory where strains and rotations are both small. In this case, the undeformed and deformed configurations of the body can be assumed identical. The infinitesimal strain theory is used in the analysis of deformations of materials exhibiting elastic behavior, such as materials found in mechanical and civil engineering applications, e.g. concrete and steel.
Large-displacement or large-rotation theory, which assumes small strains but large rotations and displacements.
== Strain measures ==
In each of these theories the strain is then defined differently. The engineering strain is the most common definition applied to materials used in mechanical and structural engineering, which are subjected to very small deformations. On the other hand, for some materials, e.g., elastomers and polymers, subjected to large deformations, the engineering definition of strain is not applicable, e.g. typical engineering strains greater than 1%; thus other more complex definitions of strain are required, such as stretch, logarithmic strain, Green strain, and Almansi strain.
=== Engineering strain ===
Engineering strain, also known as Cauchy strain, is expressed as the ratio of total deformation to the initial dimension of the material body on which forces are applied. In the case of a material line element or fiber axially loaded, its elongation gives rise to an engineering normal strain or engineering extensional strain e, which equals the relative elongation or the change in length ΔL per unit of the original length L of the line element or fibers (in meters per meter). The normal strain is positive if the material fibers are stretched and negative if they are compressed. Thus, we have
{\displaystyle e={\frac {\Delta L}{L}}={\frac {l-L}{L}},}
where e is the engineering normal strain, L is the original length of the fiber and l is the final length of the fiber.
The true shear strain is defined as the change in the angle (in radians) between two material line elements initially perpendicular to each other in the undeformed or initial configuration. The engineering shear strain is defined as the tangent of that angle, and is equal to the length of deformation at its maximum divided by the perpendicular length in the plane of force application, which sometimes makes it easier to calculate.
=== Stretch ratio ===
The stretch ratio or extension ratio (symbol λ) is an alternative measure related to the extensional or normal strain of an axially loaded differential line element. It is defined as the ratio between the final length l and the initial length L of the material line.
λ
=
l
L
{\displaystyle \lambda ={\frac {l}{L}}}
The extension ratio λ is related to the engineering strain e by
e
=
λ
−
1
{\displaystyle e=\lambda -1}
This equation implies that when the normal strain is zero, so that there is no deformation, the stretch ratio is equal to unity.
The stretch ratio is used in the analysis of materials that exhibit large deformations, such as elastomers, which can sustain stretch ratios of 3 or 4 before they fail. On the other hand, traditional engineering materials, such as concrete or steel, fail at much lower stretch ratios.
=== Logarithmic strain ===
The logarithmic strain ε is also called true strain or Hencky strain. Considering an incremental strain (Ludwik)
{\displaystyle \delta \varepsilon ={\frac {\delta l}{l}}}
the logarithmic strain is obtained by integrating this incremental strain:
{\displaystyle {\begin{aligned}\int \delta \varepsilon &=\int _{L}^{l}{\frac {\delta l}{l}}\\\varepsilon &=\ln \left({\frac {l}{L}}\right)=\ln(\lambda )\\&=\ln(1+e)\\&=e-{\frac {e^{2}}{2}}+{\frac {e^{3}}{3}}-\cdots \end{aligned}}}
where e is the engineering strain. The logarithmic strain provides the correct measure of the final strain when deformation takes place in a series of increments, taking into account the influence of the strain path.
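A short numerical check of this path-dependence claim (the lengths below are illustrative): engineering strains of successive increments do not add up to the total engineering strain, while logarithmic strains do.

import math

L, l_mid, l_final = 1.00, 1.20, 1.50    # a two-step stretch, arbitrary values

# Engineering strains of the two increments do not sum to the total:
e_total = (l_final - L) / L                           # 0.50
e_sum = (l_mid - L) / L + (l_final - l_mid) / l_mid   # 0.45
assert abs(e_total - e_sum) > 1e-9

# Logarithmic strains are additive along the strain path:
eps_total = math.log(l_final / L)
eps_sum = math.log(l_mid / L) + math.log(l_final / l_mid)
assert math.isclose(eps_total, eps_sum)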
=== Green strain ===
The Green strain is defined as:
{\displaystyle \varepsilon _{G}={\tfrac {1}{2}}\left({\frac {l^{2}-L^{2}}{L^{2}}}\right)={\tfrac {1}{2}}(\lambda ^{2}-1)}
=== Almansi strain ===
The Euler-Almansi strain is defined as
{\displaystyle \varepsilon _{E}={\tfrac {1}{2}}\left({\frac {l^{2}-L^{2}}{l^{2}}}\right)={\tfrac {1}{2}}\left(1-{\frac {1}{\lambda ^{2}}}\right)}
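The different strain measures can be compared numerically; a sketch with an arbitrary stretch ratio confirms that they agree to first order for small deformations:

import math

lam = 1.1                       # stretch ratio, an illustrative value
e = lam - 1                     # engineering strain
eps_log = math.log(lam)         # logarithmic (Hencky) strain
eps_G = 0.5 * (lam**2 - 1)      # Green strain
eps_E = 0.5 * (1 - 1 / lam**2)  # Euler-Almansi strain

# All measures agree with e up to terms of order e^2.
for eps in (eps_log, eps_G, eps_E):
    assert abs(eps - e) < 1.5 * e**2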
== Strain tensor ==
The (infinitesimal) strain tensor (symbol
{\displaystyle {\boldsymbol {\varepsilon }}}
) is defined in the International System of Quantities (ISQ), more specifically in ISO 80000-4 (Mechanics), as a "tensor quantity representing the deformation of matter caused by stress. Strain tensor is symmetric and has three linear strain and three shear strain (Cartesian) components."
ISO 80000-4 further defines linear strain as the "quotient of change in length of an object and its length" and shear strain as the "quotient of parallel displacement of two surfaces of a layer and the thickness of the layer".
Thus, strains are classified as either normal or shear. A normal strain is perpendicular to the face of an element, and a shear strain is parallel to it. These definitions are consistent with those of normal stress and shear stress.
The strain tensor can then be expressed in terms of normal and shear components as:
{\displaystyle {\underline {\underline {\boldsymbol {\varepsilon }}}}={\begin{bmatrix}\varepsilon _{xx}&\varepsilon _{xy}&\varepsilon _{xz}\\\varepsilon _{yx}&\varepsilon _{yy}&\varepsilon _{yz}\\\varepsilon _{zx}&\varepsilon _{zy}&\varepsilon _{zz}\\\end{bmatrix}}={\begin{bmatrix}\varepsilon _{xx}&{\tfrac {1}{2}}\gamma _{xy}&{\tfrac {1}{2}}\gamma _{xz}\\{\tfrac {1}{2}}\gamma _{yx}&\varepsilon _{yy}&{\tfrac {1}{2}}\gamma _{yz}\\{\tfrac {1}{2}}\gamma _{zx}&{\tfrac {1}{2}}\gamma _{zy}&\varepsilon _{zz}\\\end{bmatrix}}}
=== Geometric setting ===
Consider a two-dimensional, infinitesimal, rectangular material element with dimensions dx × dy, which, after deformation, takes the form of a rhombus. The deformation is described by the displacement field u. Writing AB for the horizontal edge of the undeformed element and ab for its deformed image, geometry gives
{\displaystyle \mathrm {length} (AB)=dx}
and
{\displaystyle {\begin{aligned}\mathrm {length} (ab)&={\sqrt {\left(dx+{\frac {\partial u_{x}}{\partial x}}dx\right)^{2}+\left({\frac {\partial u_{y}}{\partial x}}dx\right)^{2}}}\\&={\sqrt {dx^{2}\left(1+{\frac {\partial u_{x}}{\partial x}}\right)^{2}+dx^{2}\left({\frac {\partial u_{y}}{\partial x}}\right)^{2}}}\\&=dx~{\sqrt {\left(1+{\frac {\partial u_{x}}{\partial x}}\right)^{2}+\left({\frac {\partial u_{y}}{\partial x}}\right)^{2}}}\end{aligned}}}
For very small displacement gradients the squares of the derivative of
{\displaystyle u_{y}} and {\displaystyle u_{x}} are negligible and we have
{\displaystyle \mathrm {length} (ab)\approx dx\left(1+{\frac {\partial u_{x}}{\partial x}}\right)=dx+{\frac {\partial u_{x}}{\partial x}}dx}
=== Normal strain ===
For an isotropic material that obeys Hooke's law, a normal stress will cause a normal strain. Normal strains produce dilations.
The normal strain in the x-direction of the rectangular element is defined by
{\displaystyle \varepsilon _{x}={\frac {\text{extension}}{\text{original length}}}={\frac {\mathrm {length} (ab)-\mathrm {length} (AB)}{\mathrm {length} (AB)}}={\frac {\partial u_{x}}{\partial x}}}
Similarly, the normal strain in the y- and z-directions becomes
{\displaystyle \varepsilon _{y}={\frac {\partial u_{y}}{\partial y}}\quad ,\qquad \varepsilon _{z}={\frac {\partial u_{z}}{\partial z}}}
=== Shear strain ===
The engineering shear strain (γxy) is defined as the change in angle between lines AC and AB. Therefore,
{\displaystyle \gamma _{xy}=\alpha +\beta }
From the geometry of the figure, we have
{\displaystyle {\begin{aligned}\tan \alpha &={\frac {{\tfrac {\partial u_{y}}{\partial x}}dx}{dx+{\tfrac {\partial u_{x}}{\partial x}}dx}}={\frac {\tfrac {\partial u_{y}}{\partial x}}{1+{\tfrac {\partial u_{x}}{\partial x}}}}\\\tan \beta &={\frac {{\tfrac {\partial u_{x}}{\partial y}}dy}{dy+{\tfrac {\partial u_{y}}{\partial y}}dy}}={\frac {\tfrac {\partial u_{x}}{\partial y}}{1+{\tfrac {\partial u_{y}}{\partial y}}}}\end{aligned}}}
For small displacement gradients we have
{\displaystyle {\frac {\partial u_{x}}{\partial x}}\ll 1~;~~{\frac {\partial u_{y}}{\partial y}}\ll 1}
For small rotations, i.e. α and β are ≪ 1 we have tan α ≈ α, tan β ≈ β. Therefore,
{\displaystyle \alpha \approx {\frac {\partial u_{y}}{\partial x}}~;~~\beta \approx {\frac {\partial u_{x}}{\partial y}}}
thus
{\displaystyle \gamma _{xy}=\alpha +\beta ={\frac {\partial u_{y}}{\partial x}}+{\frac {\partial u_{x}}{\partial y}}}
By interchanging x and y and ux and uy, it can be shown that γxy = γyx.
Similarly, for the yz- and xz-planes, we have
{\displaystyle \gamma _{yz}=\gamma _{zy}={\frac {\partial u_{y}}{\partial z}}+{\frac {\partial u_{z}}{\partial y}}\quad ,\qquad \gamma _{zx}=\gamma _{xz}={\frac {\partial u_{z}}{\partial x}}+{\frac {\partial u_{x}}{\partial z}}}
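In components, these formulas say that the infinitesimal strain tensor is the symmetric part of the displacement gradient. A minimal sketch, assuming a hypothetical, spatially uniform displacement gradient:

import numpy as np

# Hypothetical displacement gradient, grad_u[i, j] = du_i/dx_j.
grad_u = np.array([[0.001, 0.002, 0.0],
                   [0.000, -0.001, 0.0005],
                   [0.0,    0.0,   0.0]])

# Infinitesimal strain tensor: symmetric part of the displacement gradient.
eps = 0.5 * (grad_u + grad_u.T)

# Normal strains sit on the diagonal; engineering shear strains are twice
# the off-diagonal entries, e.g. gamma_xy = du_y/dx + du_x/dy.
gamma_xy = grad_u[1, 0] + grad_u[0, 1]
assert np.isclose(gamma_xy, 2 * eps[0, 1])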
=== Volume strain ===
== Metric tensor ==
A strain field associated with a displacement is defined, at any point, by the change in length of the tangent vectors representing the speeds of arbitrarily parametrized curves passing through that point. A basic geometric result, due to Fréchet, von Neumann and Jordan, states that, if the lengths of the tangent vectors fulfil the axioms of a norm and the parallelogram law, then the length of a vector is the square root of the value of the quadratic form associated, by the polarization formula, with a positive definite bilinear map called the metric tensor.
== See also ==
Stress measures
Strain rate
Strain tensor
| Wikipedia/Strain_(materials_science) |
In mathematics, and in particular functional analysis, the tensor product of Hilbert spaces is a way to extend the tensor product construction so that the result of taking a tensor product of two Hilbert spaces is another Hilbert space. Roughly speaking, the tensor product is the metric space completion of the ordinary tensor product. This is an example of a topological tensor product. The tensor product allows Hilbert spaces to be collected into a symmetric monoidal category.
== Definition ==
Since Hilbert spaces have inner products, one would like to introduce an inner product, and thereby a topology, on the tensor product that arises naturally from the inner products on the factors. Let
{\displaystyle H_{1}} and {\displaystyle H_{2}} be two Hilbert spaces with inner products {\displaystyle \langle \cdot ,\cdot \rangle _{1}} and {\displaystyle \langle \cdot ,\cdot \rangle _{2},} respectively. Construct the tensor product of {\displaystyle H_{1}} and {\displaystyle H_{2}} as vector spaces as explained in the article on tensor products. We can turn this vector space tensor product into an inner product space by defining
{\displaystyle \left\langle \phi _{1}\otimes \phi _{2},\psi _{1}\otimes \psi _{2}\right\rangle =\left\langle \phi _{1},\psi _{1}\right\rangle _{1}\,\left\langle \phi _{2},\psi _{2}\right\rangle _{2}\quad {\mbox{for all }}\phi _{1},\psi _{1}\in H_{1}{\mbox{ and }}\phi _{2},\psi _{2}\in H_{2}}
and extending by linearity. That this inner product is the natural one is justified by the identification of scalar-valued bilinear maps on
{\displaystyle H_{1}\times H_{2}} and linear functionals on their vector space tensor product. Finally, take the completion under this inner product. The resulting Hilbert space is the tensor product of {\displaystyle H_{1}} and {\displaystyle H_{2}.}
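In finite dimensions this defining identity can be checked directly. A sketch using Kronecker products as a finite-dimensional stand-in for simple tensors (the vectors below are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
phi1, psi1 = rng.standard_normal(3), rng.standard_normal(3)  # vectors in H1 = R^3
phi2, psi2 = rng.standard_normal(2), rng.standard_normal(2)  # vectors in H2 = R^2

# <phi1 ⊗ phi2, psi1 ⊗ psi2> = <phi1, psi1>_1 <phi2, psi2>_2
lhs = np.dot(np.kron(phi1, phi2), np.kron(psi1, psi2))
rhs = np.dot(phi1, psi1) * np.dot(phi2, psi2)
assert np.isclose(lhs, rhs)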
=== Explicit construction ===
The tensor product can also be defined without appealing to the metric space completion. If
{\displaystyle H_{1}} and {\displaystyle H_{2}} are two Hilbert spaces, one associates to every simple tensor product {\displaystyle x_{1}\otimes x_{2}} the rank one operator from {\displaystyle H_{1}^{*}} to {\displaystyle H_{2}} that maps a given {\displaystyle x^{*}\in H_{1}^{*}} as
{\displaystyle x^{*}\mapsto x^{*}(x_{1})\,x_{2}.}
This extends to a linear identification between
{\displaystyle H_{1}\otimes H_{2}} and the space of finite rank operators from {\displaystyle H_{1}^{*}} to {\displaystyle H_{2}.} The finite rank operators are embedded in the Hilbert space {\displaystyle HS(H_{1}^{*},H_{2})} of Hilbert–Schmidt operators from {\displaystyle H_{1}^{*}} to {\displaystyle H_{2}.}
The scalar product in
{\displaystyle HS(H_{1}^{*},H_{2})}
is given by
{\displaystyle \langle T_{1},T_{2}\rangle =\sum _{n}\left\langle T_{1}e_{n}^{*},T_{2}e_{n}^{*}\right\rangle ,}
where
{\displaystyle \left(e_{n}^{*}\right)} is an arbitrary orthonormal basis of {\displaystyle H_{1}^{*}.}
Under the preceding identification, one can define the Hilbertian tensor product of
{\displaystyle H_{1}} and {\displaystyle H_{2},} that is isometrically and linearly isomorphic to {\displaystyle HS(H_{1}^{*},H_{2}).}
=== Universal property ===
The Hilbert tensor product
{\displaystyle H_{1}\otimes H_{2}}
is characterized by the following universal property (Kadison & Ringrose 1997, Theorem 2.6.4):
A weakly Hilbert-Schmidt mapping
{\displaystyle L:H_{1}\times H_{2}\to K} is defined as a bilinear map for which a real number {\displaystyle d} exists, such that
{\displaystyle \sum _{i,j=1}^{\infty }{\bigl |}\left\langle L(e_{i},f_{j}),u\right\rangle {\bigr |}^{2}\leq d^{2}\,\|u\|^{2}}
for all {\displaystyle u\in K} and one (hence all) orthonormal bases {\displaystyle e_{1},e_{2},\ldots } of {\displaystyle H_{1}} and {\displaystyle f_{1},f_{2},\ldots } of {\displaystyle H_{2}.}
As with any universal property, this characterizes the tensor product H uniquely, up to isomorphism. The same universal property, with obvious modifications, also applies for the tensor product of any finite number of Hilbert spaces. It is essentially the same universal property shared by all definitions of tensor products, irrespective of the spaces being tensored: this implies that any space with a tensor product is a symmetric monoidal category, and Hilbert spaces are a particular example thereof.
=== Infinite tensor products ===
Two different definitions have historically been proposed for the tensor product of an arbitrary-sized collection
{\textstyle \{H_{n}\}_{n\in N}} of Hilbert spaces. Von Neumann's traditional definition simply takes the "obvious" tensor product: to compute {\textstyle \bigotimes _{n}{H_{n}}}, first collect all simple tensors of the form {\textstyle \bigotimes _{n\in N}{e_{n}}} such that {\textstyle \prod _{n\in N}{\|e_{n}\|}<\infty }. The latter describes a pre-inner product through the polarization identity, so take the closed span of such simple tensors modulo that inner product's isotropy subspaces. This definition is almost never separable, in part because, in physical applications, "most" of the space describes impossible states. Modern authors typically use instead a definition due to Guichardet: to compute {\textstyle \bigotimes _{n}{H_{n}}}, first select a unit vector {\textstyle v_{n}\in H_{n}} in each Hilbert space, and then collect all simple tensors of the form {\textstyle \bigotimes _{n\in N}{e_{n}}}, in which only finitely many {\textstyle e_{n}} are not {\textstyle v_{n}}. Then take the {\displaystyle L^{2}} completion of these simple tensors.
=== Operator algebras ===
Let
{\displaystyle {\mathfrak {A}}_{i}} be the von Neumann algebra of bounded operators on {\displaystyle H_{i}} for {\displaystyle i=1,2.} Then the von Neumann tensor product of the von Neumann algebras is the strong completion of the set of all finite linear combinations of simple tensor products {\displaystyle A_{1}\otimes A_{2}} where {\displaystyle A_{i}\in {\mathfrak {A}}_{i}} for {\displaystyle i=1,2.} This is exactly equal to the von Neumann algebra of bounded operators of {\displaystyle H_{1}\otimes H_{2}.}
Unlike for Hilbert spaces, one may take infinite tensor products of von Neumann algebras, and for that matter C*-algebras of operators, without defining reference states. This is one advantage of the "algebraic" method in quantum statistical mechanics.
== Properties ==
If
{\displaystyle H_{1}} and {\displaystyle H_{2}} have orthonormal bases {\displaystyle \left\{\phi _{k}\right\}} and {\displaystyle \left\{\psi _{l}\right\},} respectively, then {\displaystyle \left\{\phi _{k}\otimes \psi _{l}\right\}} is an orthonormal basis for {\displaystyle H_{1}\otimes H_{2}.}
In particular, the Hilbert dimension of the tensor product is the product (as cardinal numbers) of the Hilbert dimensions.
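A finite-dimensional sketch of this basis property, with Kronecker products standing in for the tensor product (an illustration, not part of the original text):

import numpy as np

# Standard orthonormal bases of R^2 and R^3; their Kronecker products
# form an orthonormal basis of the 6-dimensional tensor product.
basis1 = np.eye(2)
basis2 = np.eye(3)
products = [np.kron(e, f) for e in basis1 for f in basis2]

G = np.array([[u @ v for v in products] for u in products])
assert np.allclose(G, np.eye(6))   # orthonormal: the Gram matrix is the identity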
== Examples and applications ==
The following examples show how tensor products arise naturally.
Given two measure spaces
{\displaystyle X} and {\displaystyle Y}, with measures {\displaystyle \mu } and {\displaystyle \nu } respectively, one may look at {\displaystyle L^{2}(X\times Y),} the space of functions on {\displaystyle X\times Y} that are square integrable with respect to the product measure {\displaystyle \mu \times \nu .}
If
{\displaystyle f} is a square integrable function on {\displaystyle X,} and {\displaystyle g} is a square integrable function on {\displaystyle Y,} then we can define a function {\displaystyle h} on {\displaystyle X\times Y} by {\displaystyle h(x,y)=f(x)g(y).}
The definition of the product measure ensures that all functions of this form are square integrable, so this defines a bilinear mapping
{\displaystyle L^{2}(X)\times L^{2}(Y)\to L^{2}(X\times Y).}
Linear combinations of functions of the form
{\displaystyle f(x)g(y)} are also in {\displaystyle L^{2}(X\times Y).}
It turns out that the set of linear combinations is in fact dense in
{\displaystyle L^{2}(X\times Y),} if {\displaystyle L^{2}(X)} and {\displaystyle L^{2}(Y)}
are separable. This shows that
{\displaystyle L^{2}(X)\otimes L^{2}(Y)} is isomorphic to {\displaystyle L^{2}(X\times Y),}
and it also explains why we need to take the completion in the construction of the Hilbert space tensor product.
Similarly, we can show that
{\displaystyle L^{2}(X;H)}, denoting the space of square integrable functions {\displaystyle X\to H,} is isomorphic to {\displaystyle L^{2}(X)\otimes H} if this space is separable. The isomorphism maps {\displaystyle f(x)\otimes \phi \in L^{2}(X)\otimes H} to {\displaystyle f(x)\phi \in L^{2}(X;H).}
We can combine this with the previous example and conclude that
{\displaystyle L^{2}(X)\otimes L^{2}(Y)} and {\displaystyle L^{2}(X\times Y)} are both isomorphic to {\displaystyle L^{2}\left(X;L^{2}(Y)\right).}
Tensor products of Hilbert spaces arise often in quantum mechanics. If some particle is described by the Hilbert space
{\displaystyle H_{1},} and another particle is described by {\displaystyle H_{2},} then the system consisting of both particles is described by the tensor product of {\displaystyle H_{1}} and {\displaystyle H_{2}.}
For example, the state space of a quantum harmonic oscillator is
{\displaystyle L^{2}(\mathbb {R} ),} so the state space of two oscillators is {\displaystyle L^{2}(\mathbb {R} )\otimes L^{2}(\mathbb {R} ),} which is isomorphic to {\displaystyle L^{2}\left(\mathbb {R} ^{2}\right).}
Therefore, the two-particle system is described by wave functions of the form
{\displaystyle \psi \left(x_{1},x_{2}\right).}
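A finite-dimensional sketch of forming a composite product state (the single-particle state vectors below are chosen arbitrarily):

import numpy as np

# Finite-dimensional stand-in: single-particle states as unit vectors.
psi1 = np.array([1.0, 0.0])                    # first particle
psi2 = np.array([1.0, 1.0]) / np.sqrt(2.0)     # second particle

# The product state of the composite system lives in the tensor product.
psi = np.kron(psi1, psi2)
assert np.isclose(np.linalg.norm(psi), 1.0)    # normalization is preserved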
A more intricate example is provided by the Fock spaces, which describe a variable number of particles.
== References ==
== Bibliography ==
Kadison, Richard V.; Ringrose, John R. (1997). Fundamentals of the theory of operator algebras. Vol. I. Graduate Studies in Mathematics. Vol. 15. Providence, R.I.: American Mathematical Society. ISBN 978-0-8218-0819-1. MR 1468229.
Weidmann, Joachim (1980). Linear operators in Hilbert spaces. Graduate Texts in Mathematics. Vol. 68. Berlin, New York: Springer-Verlag. ISBN 978-0-387-90427-6. MR 0566954.
| Wikipedia/Tensor_product_of_Hilbert_spaces |
The Journal of Differential Geometry is a peer-reviewed scientific journal of mathematics published by International Press on behalf of Lehigh University in 3 volumes of 3 issues each per year. The journal publishes an annual supplement in book form called Surveys in Differential Geometry. It covers differential geometry and related subjects such as differential equations, mathematical physics, algebraic geometry, and geometric topology. The editor-in-chief is Shing-Tung Yau of Harvard University.
== History ==
The journal was established in 1967 by Chuan-Chih Hsiung, who was a professor in the Department of Mathematics at Lehigh University at the time. Hsiung served as the journal's editor-in-chief, and later co-editor-in-chief, until his death in 2009.
In May 1996, the annual Geometry and Topology conference which was held at Harvard University was dedicated to commemorating the 30th anniversary of the journal and the 80th birthday of its founder. Similarly, in May 2008 Harvard held a conference dedicated to the 40th anniversary of the Journal of Differential Geometry.
== Reception ==
In his 2005 book Mathematical Publishing: A Guidebook, Steven Krantz writes: "At some very prestigious journals, like the Annals of Mathematics or the Journal of Differential Geometry, the editorial board meets every couple of months and debates each paper in detail."
The journal is abstracted and indexed in MathSciNet, Zentralblatt MATH, Current Contents/Physical, Chemical & Earth Sciences, and the Science Citation Index. According to the Journal Citation Reports, the journal has a 2013 impact factor of 1.093.
== External links ==
Official website
Surveys in Differential Geometry web page
In mathematics, specifically linear algebra, a degenerate bilinear form f (x, y ) on a vector space V is a bilinear form such that the map from V to V∗ (the dual space of V ) given by v ↦ (x ↦ f (x, v )) is not an isomorphism. An equivalent definition when V is finite-dimensional is that it has a non-trivial kernel: there exists some non-zero x in V such that
$$f(x,y)=0$$
for all $y\in V$.
== Nondegenerate forms ==
A nondegenerate or nonsingular form is a bilinear form that is not degenerate, meaning that $v\mapsto (x\mapsto f(x,v))$ is an isomorphism, or equivalently in finite dimensions, if and only if $f(x,y)=0$ for all $y\in V$ implies that $x=0$.
== Using the determinant ==
If V is finite-dimensional then, relative to some basis for V, a bilinear form is degenerate if and only if the determinant of the associated matrix is zero – if and only if the matrix is singular, and accordingly degenerate forms are also called singular forms. Likewise, a nondegenerate form is one for which the associated matrix is non-singular, and accordingly nondegenerate forms are also referred to as non-singular forms. These statements are independent of the chosen basis.
== Related notions ==
If for a quadratic form Q there is a non-zero vector v ∈ V such that Q(v) = 0, then Q is an isotropic quadratic form. If Q has the same sign for all non-zero vectors, it is a definite quadratic form or an anisotropic quadratic form.
There is the closely related notion of a unimodular form and a perfect pairing; these agree over fields but not over general rings.
== Examples ==
The study of real quadratic algebras shows the distinction between types of quadratic forms. The product zz* is a quadratic form for each of the complex numbers, split-complex numbers, and dual numbers. For z = x + εy, the dual number form is $x^{2}$, which is a degenerate quadratic form. The split-complex case is an isotropic form, and the complex case is a definite form.
The most important examples of nondegenerate forms are inner products and symplectic forms. Symmetric nondegenerate forms are important generalizations of inner products, in that often all that is required is that the map $V\to V^{*}$ be an isomorphism, not positivity. For example, a manifold with an inner product structure on its tangent spaces is a Riemannian manifold, while relaxing this to a symmetric nondegenerate form yields a pseudo-Riemannian manifold.
== Infinite dimensions ==
Note that in an infinite-dimensional space, we can have a bilinear form $f$ for which $v\mapsto (x\mapsto f(x,v))$ is injective but not surjective. For example, on the space of continuous functions on a closed bounded interval, the form
$$f(\phi ,\psi )=\int \psi (x)\phi (x)\,dx$$
is not surjective: for instance, the Dirac delta functional is in the dual space but not of the required form. On the other hand, this bilinear form satisfies $f(\phi ,\psi )=0$ for all $\phi$ implies that $\psi =0$. In such a case where $f$ satisfies injectivity (but not necessarily surjectivity), $f$ is said to be weakly nondegenerate.
== Terminology ==
If f vanishes identically on all vectors it is said to be totally degenerate. Given any bilinear form f on V the set of vectors
$$\{x\in V\mid f(x,y)=0{\text{ for all }}y\in V\}$$
forms a totally degenerate subspace of V. The map f is nondegenerate if and only if this subspace is trivial.
Geometrically, an isotropic line of the quadratic form corresponds to a point of the associated quadric hypersurface in projective space. Such a line is additionally isotropic for the bilinear form if and only if the corresponding point is a singularity. Hence, over an algebraically closed field, Hilbert's Nullstellensatz guarantees that the quadratic form always has isotropic lines, while the bilinear form has them if and only if the surface is singular.
== See also ==
Indefinite inner product space – generalization of Hilbert space with indefinite signature
Dual system
Linear form – Linear map from a vector space to its field of scalars
In mathematics, the structure tensor, also referred to as the second-moment matrix, is a matrix derived from the gradient of a function. It describes the distribution of the gradient in a specified neighborhood around a point and makes the information invariant to the observing coordinates. The structure tensor is often used in image processing and computer vision.
== The 2D structure tensor ==
=== Continuous version ===
For a function $I$ of two variables p = (x, y), the structure tensor is the 2×2 matrix
$$S_{w}(p)={\begin{bmatrix}\int w(r)(I_{x}(p-r))^{2}\,dr&\int w(r)I_{x}(p-r)I_{y}(p-r)\,dr\\\int w(r)I_{x}(p-r)I_{y}(p-r)\,dr&\int w(r)(I_{y}(p-r))^{2}\,dr\end{bmatrix}}$$
where $I_{x}$ and $I_{y}$ are the partial derivatives of $I$ with respect to x and y; the integrals range over the plane $\mathbb{R}^{2}$; and w is some fixed "window function" (such as a Gaussian blur), a distribution on two variables. Note that the matrix $S_{w}$ is itself a function of p = (x, y).
The formula above can also be written as $S_{w}(p)=\int w(r)\,S_{0}(p-r)\,dr$, where $S_{0}$ is the matrix-valued function defined by
$$S_{0}(p)={\begin{bmatrix}(I_{x}(p))^{2}&I_{x}(p)I_{y}(p)\\I_{x}(p)I_{y}(p)&(I_{y}(p))^{2}\end{bmatrix}}$$
If the gradient $\nabla I=(I_{x},I_{y})^{\mathrm{T}}$ of $I$ is viewed as a 2×1 (single-column) matrix, where $(\cdot)^{\mathrm{T}}$ denotes the transpose operation, turning a row vector into a column vector, the matrix $S_{0}$ can be written as the matrix product $(\nabla I)(\nabla I)^{\mathrm{T}}$, or tensor or outer product $\nabla I\otimes \nabla I$. Note however that the structure tensor $S_{w}(p)$ cannot be factored in this way in general, except if $w$ is a Dirac delta function.
=== Discrete version ===
In image processing and other similar applications, the function $I$ is usually given as a discrete array of samples $I[p]$, where p is a pair of integer indices. The 2D structure tensor at a given pixel is usually taken to be the discrete sum
$$S_{w}[p]={\begin{bmatrix}\sum _{r}w[r](I_{x}[p-r])^{2}&\sum _{r}w[r]I_{x}[p-r]I_{y}[p-r]\\\sum _{r}w[r]I_{x}[p-r]I_{y}[p-r]&\sum _{r}w[r](I_{y}[p-r])^{2}\end{bmatrix}}$$
Here the summation index r ranges over a finite set of index pairs (the "window", typically $\{-m\ldots +m\}\times \{-m\ldots +m\}$ for some m), and w[r] is a fixed "window weight" that depends on r, such that the sum of all weights is 1. The values $I_{x}[p],I_{y}[p]$ are the partial derivatives sampled at pixel p, which, for instance, may be estimated from $I$ by finite difference formulas.
The formula of the structure tensor can also be written as $S_{w}[p]=\sum _{r}w[r]\,S_{0}[p-r]$, where $S_{0}$ is the matrix-valued array such that
$$S_{0}[p]={\begin{bmatrix}(I_{x}[p])^{2}&I_{x}[p]I_{y}[p]\\I_{x}[p]I_{y}[p]&(I_{y}[p])^{2}\end{bmatrix}}$$
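The discrete sum above is what most implementations compute. A minimal sketch in Python/NumPy (not from the original article), using central differences for the derivatives and a sampled Gaussian as the window w; the test image and the window width are placeholder choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_2d(img, sigma=1.5):
    """Return the three distinct components (Sxx, Sxy, Syy) of S_w at every pixel."""
    img = img.astype(float)
    # Central-difference estimates of the partial derivatives I_x, I_y.
    Iy, Ix = np.gradient(img)          # np.gradient returns d/d(row), d/d(col)
    # Window the products of derivatives; a Gaussian plays the role of w[r].
    Sxx = gaussian_filter(Ix * Ix, sigma)
    Sxy = gaussian_filter(Ix * Iy, sigma)
    Syy = gaussian_filter(Iy * Iy, sigma)
    return Sxx, Sxy, Syy

# Usage: a synthetic image with a single dominant orientation.
yy, xx = np.mgrid[0:64, 0:64]
img = np.sin(0.3 * xx + 0.1 * yy)      # stripes => strongly anisotropic gradient
Sxx, Sxy, Syy = structure_tensor_2d(img)
```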
=== Interpretation ===
The importance of the 2D structure tensor $S_{w}$ stems from the fact that its eigenvalues $\lambda _{1},\lambda _{2}$ (which can be ordered so that $\lambda _{1}\geq \lambda _{2}\geq 0$) and the corresponding eigenvectors $e_{1},e_{2}$ summarize the distribution of the gradient $\nabla I=(I_{x},I_{y})$ of $I$ within the window defined by $w$ centered at $p$.
Namely, if $\lambda _{1}>\lambda _{2}$, then $e_{1}$ (or $-e_{1}$) is the direction that is maximally aligned with the gradient within the window. In particular, if $\lambda _{1}>0,\lambda _{2}=0$, then the gradient is always a multiple of $e_{1}$ (positive, negative or zero); this is the case if and only if $I$ within the window varies along the direction $e_{1}$ but is constant along $e_{2}$. This condition on the eigenvalues is also called the linear symmetry condition, because then the iso-curves of $I$ consist of parallel lines, i.e. there exists a one-dimensional function $g$ which can generate the two-dimensional function $I$ as $I(x,y)=g(d^{\mathrm{T}}p)$ for some constant vector $d=(d_{x},d_{y})^{\mathrm{T}}$ and the coordinates $p=(x,y)^{\mathrm{T}}$.
If $\lambda _{1}=\lambda _{2}$, on the other hand, the gradient in the window has no predominant direction, which happens, for instance, when the image has rotational symmetry within that window. This condition on the eigenvalues is also called the balanced body, or directional equilibrium, condition because it holds when all gradient directions in the window are equally frequent/probable.
Furthermore, the condition $\lambda _{1}=\lambda _{2}=0$ happens if and only if the function $I$ is constant ($\nabla I=(0,0)$) within $W$.
More generally, the value of $\lambda _{k}$, for k=1 or k=2, is the $w$-weighted average, in the neighborhood of p, of the square of the directional derivative of $I$ along $e_{k}$. The relative discrepancy between the two eigenvalues of $S_{w}$ is an indicator of the degree of anisotropy of the gradient in the window, namely how strongly it is biased towards a particular direction (and its opposite). This attribute can be quantified by the coherence, defined as
$$c_{w}=\left({\frac {\lambda _{1}-\lambda _{2}}{\lambda _{1}+\lambda _{2}}}\right)^{2}$$
if $\lambda _{2}>0$. This quantity is 1 when the gradient is totally aligned, and 0 when it has no preferred direction. The formula is undefined, even in the limit, when the image is constant in the window ($\lambda _{1}=\lambda _{2}=0$). Some authors define it as 0 in that case.
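Continuing the NumPy sketch above (again a hypothetical illustration, not the article's code), the eigenvalues and coherence can be computed per pixel in closed form for the 2×2 case:

```python
import numpy as np

def coherence(Sxx, Sxy, Syy, eps=1e-12):
    """Per-pixel eigenvalues and coherence of the 2x2 structure tensor."""
    tr = Sxx + Syy                                   # lambda1 + lambda2
    disc = np.sqrt((Sxx - Syy) ** 2 + 4 * Sxy ** 2)  # lambda1 - lambda2 (>= 0)
    lam1 = 0.5 * (tr + disc)
    lam2 = 0.5 * (tr - disc)
    # ((l1 - l2)/(l1 + l2))^2, defined as 0 where the image is locally constant.
    c = np.where(tr > eps, (disc / np.maximum(tr, eps)) ** 2, 0.0)
    return lam1, lam2, c
```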
Note that the average of the gradient $\nabla I$ inside the window is not a good indicator of anisotropy. Aligned but oppositely oriented gradient vectors would cancel out in this average, whereas in the structure tensor they are properly added together. This is a reason why $(\nabla I)(\nabla I)^{\mathrm{T}}$ is used in the averaging of the structure tensor to optimize the direction, instead of $\nabla I$.
By expanding the effective radius of the window function $w$ (that is, increasing its variance), one can make the structure tensor more robust in the face of noise, at the cost of diminished spatial resolution. The formal basis for this property is described in more detail below, where it is shown that a multi-scale formulation of the structure tensor, referred to as the multi-scale structure tensor, constitutes a true multi-scale representation of directional data under variations of the spatial extent of the window function.
=== Complex version ===
The interpretation and implementation of the 2D structure tensor becomes particularly accessible using complex numbers. The structure tensor consists of 3 real numbers
$$S_{w}(p)={\begin{bmatrix}\mu _{20}&\mu _{11}\\\mu _{11}&\mu _{02}\end{bmatrix}}$$
where $\mu _{20}=\int w(r)(I_{x}(p-r))^{2}\,dr$, $\mu _{02}=\int w(r)(I_{y}(p-r))^{2}\,dr$ and $\mu _{11}=\int w(r)I_{x}(p-r)I_{y}(p-r)\,dr$,
in which the integrals can be replaced by summations for a discrete representation. Using Parseval's identity it is clear that the three real numbers are the second order moments of the power spectrum of $I$. The following second order complex moment of the power spectrum of $I$ can then be written as
$$\kappa _{20}=\mu _{20}-\mu _{02}+i2\mu _{11}=\int w(r)\left(I_{x}(p-r)+iI_{y}(p-r)\right)^{2}\,dr=(\lambda _{1}-\lambda _{2})\exp(i2\phi )$$
where $i={\sqrt {-1}}$ and $\phi =\angle e_{1}$ is the direction angle of the most significant eigenvector of the structure tensor, whereas $\lambda _{1}$ and $\lambda _{2}$ are the most and the least significant eigenvalues. From this, it follows that $\kappa _{20}$ contains both a certainty $|\kappa _{20}|=\lambda _{1}-\lambda _{2}$ and the optimal direction in double-angle representation, since it is a complex number consisting of two real numbers. It follows also that if the gradient is represented as a complex number and is remapped by squaring (i.e. the argument angle of the complex gradient is doubled), then averaging acts as an optimizer in the mapped domain, since it directly delivers both the optimal direction (in double-angle representation) and the associated certainty. The complex number thus represents how much linear structure (linear symmetry) there is in the image $I$, and the complex number is obtained directly by averaging the gradient in its (complex) double-angle representation, without computing the eigenvalues and the eigenvectors explicitly.
Likewise, the following second order complex moment of the power spectrum of $I$, which happens to be always real because $I$ is real,
$$\kappa _{11}=\mu _{20}+\mu _{02}=\int w(r)|I_{x}(p-r)+iI_{y}(p-r)|^{2}\,dr=\lambda _{1}+\lambda _{2}$$
can be obtained, with $\lambda _{1}$ and $\lambda _{2}$ being the eigenvalues as before. Notice that this time the magnitude of the complex gradient is squared (which is always real).
However, decomposing the structure tensor into its eigenvectors yields its tensor components as
$$S_{w}(p)=\lambda _{1}e_{1}e_{1}^{\mathrm{T}}+\lambda _{2}e_{2}e_{2}^{\mathrm{T}}=(\lambda _{1}-\lambda _{2})e_{1}e_{1}^{\mathrm{T}}+\lambda _{2}\left(e_{1}e_{1}^{\mathrm{T}}+e_{2}e_{2}^{\mathrm{T}}\right)=(\lambda _{1}-\lambda _{2})e_{1}e_{1}^{\mathrm{T}}+\lambda _{2}E$$
where $E$ is the identity matrix in 2D, because the two eigenvectors are always orthogonal (and sum to unity). The first term in the last expression of the decomposition, $(\lambda _{1}-\lambda _{2})e_{1}e_{1}^{\mathrm{T}}$, represents the linear symmetry component of the structure tensor containing all directional information (as a rank-1 matrix), whereas the second term represents the balanced body component of the tensor, which lacks any directional information (containing an identity matrix $E$). To know how much directional information there is in $I$ is then the same as checking how large $\lambda _{1}-\lambda _{2}$ is compared to $\lambda _{2}$.
Evidently, $\kappa _{20}$ is the complex equivalent of the first term in the tensor decomposition, whereas ${\tfrac {1}{2}}(\kappa _{11}-|\kappa _{20}|)=\lambda _{2}$ is the equivalent of the second term (the extracted source had the difference reversed; with $|\kappa _{20}|=\lambda _{1}-\lambda _{2}$ and $\kappa _{11}=\lambda _{1}+\lambda _{2}$ the half-difference must be taken in this order). Thus the two scalars, comprising three real numbers,
$${\begin{aligned}\kappa _{20}&=(\lambda _{1}-\lambda _{2})\exp(i2\phi )&=w*(h*I)^{2}\\\kappa _{11}&=\lambda _{1}+\lambda _{2}&=w*|h*I|^{2}\end{aligned}}$$
where $h(x,y)=(x+iy)\exp(-(x^{2}+y^{2})/(2\sigma ^{2}))$ is the (complex) gradient filter, and $*$ is convolution, constitute a complex representation of the 2D structure tensor. As discussed here and elsewhere, $w$ defines the local image region, and is usually a Gaussian (with a certain variance defining the outer scale), while $\sigma$ is the (inner scale) parameter determining the effective frequency range in which the orientation $2\phi$ is to be estimated.
The elegance of the complex representation stems from the fact that the two components of the structure tensor can be obtained as averages and independently. In turn, this means that $\kappa _{20}$ and $\kappa _{11}$ can be used in a scale space representation to describe the evidence for the presence of a unique orientation and the evidence for the alternative hypothesis, the presence of multiple balanced orientations, without computing the eigenvectors and eigenvalues. A functional such as squaring the complex numbers has, to this date, not been shown to exist for structure tensors with dimensions higher than two. In Bigun 91, it has been put forward with due argument that this is because complex numbers constitute a commutative algebra, whereas quaternions, the possible candidate with which to construct such a functional, constitute a non-commutative algebra.
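As an illustrative sketch of the complex pair $(\kappa _{20},\kappa _{11})$ (hypothetical NumPy/SciPy code, not from the article; it approximates $h*I$ by derivative-of-Gaussian filtering at the inner scale and uses a Gaussian for the averaging window $w$):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def kappa_pair(img, inner_sigma=1.0, outer_sigma=3.0):
    """kappa20 (complex, double-angle orientation) and kappa11 (real energy)."""
    img = img.astype(float)
    # Derivative-of-Gaussian responses approximate h * I at the inner scale.
    Ix = gaussian_filter(img, inner_sigma, order=(0, 1))  # d/dx (along columns)
    Iy = gaussian_filter(img, inner_sigma, order=(1, 0))  # d/dy (along rows)
    g = Ix + 1j * Iy                                      # complex gradient
    # Averaging with w (outer scale): k20 = w*(g^2), k11 = w*|g|^2.
    g2 = g * g
    k20 = gaussian_filter(g2.real, outer_sigma) + 1j * gaussian_filter(g2.imag, outer_sigma)
    k11 = gaussian_filter(np.abs(g) ** 2, outer_sigma)
    # Orientation (single angle) and certainty per pixel:
    phi = 0.5 * np.angle(k20)
    certainty = np.abs(k20)          # equals lambda1 - lambda2
    return k20, k11, phi, certainty
```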
The complex representation of the structure tensor is frequently used in fingerprint analysis to obtain direction maps containing certainties which in turn are used to enhance them, to find the locations of the global (cores and deltas) and local (minutia) singularities, as well as automatically evaluate the quality of the fingerprints.
== The 3D structure tensor ==
=== Definition ===
The structure tensor can be defined also for a function $I$ of three variables p = (x, y, z) in an entirely analogous way. Namely, in the continuous version we have $S_{w}(p)=\int w(r)\,S_{0}(p-r)\,dr$, where
$$S_{0}(p)={\begin{bmatrix}(I_{x}(p))^{2}&I_{x}(p)I_{y}(p)&I_{x}(p)I_{z}(p)\\I_{x}(p)I_{y}(p)&(I_{y}(p))^{2}&I_{y}(p)I_{z}(p)\\I_{x}(p)I_{z}(p)&I_{y}(p)I_{z}(p)&(I_{z}(p))^{2}\end{bmatrix}}$$
where $I_{x},I_{y},I_{z}$ are the three partial derivatives of $I$, and the integral ranges over $\mathbb{R}^{3}$.
In the discrete version, $S_{w}[p]=\sum _{r}w[r]\,S_{0}[p-r]$, where
$$S_{0}[p]={\begin{bmatrix}(I_{x}[p])^{2}&I_{x}[p]I_{y}[p]&I_{x}[p]I_{z}[p]\\I_{x}[p]I_{y}[p]&(I_{y}[p])^{2}&I_{y}[p]I_{z}[p]\\I_{x}[p]I_{z}[p]&I_{y}[p]I_{z}[p]&(I_{z}[p])^{2}\end{bmatrix}}$$
and the sum ranges over a finite set of 3D indices, usually $\{-m\ldots +m\}\times \{-m\ldots +m\}\times \{-m\ldots +m\}$ for some m.
=== Interpretation ===
As in the two-dimensional case, the eigenvalues $\lambda _{1},\lambda _{2},\lambda _{3}$ of $S_{w}[p]$, and the corresponding eigenvectors ${\hat {e}}_{1},{\hat {e}}_{2},{\hat {e}}_{3}$, summarize the distribution of gradient directions within the neighborhood of p defined by the window $w$. This information can be visualized as an ellipsoid whose semi-axes are equal to the eigenvalues and directed along their corresponding eigenvectors.
In particular, if the ellipsoid is stretched along one axis only, like a cigar (that is, if $\lambda _{1}$ is much larger than both $\lambda _{2}$ and $\lambda _{3}$), it means that the gradient in the window is predominantly aligned with the direction $e_{1}$, so that the isosurfaces of $I$ tend to be flat and perpendicular to that vector. This situation occurs, for instance, when p lies on a thin plate-like feature, or on the smooth boundary between two regions with contrasting values.
If the ellipsoid is flattened in one direction only, like a pancake (that is, if $\lambda _{3}$ is much smaller than both $\lambda _{1}$ and $\lambda _{2}$), it means that the gradient directions are spread out but perpendicular to $e_{3}$, so that the isosurfaces tend to be like tubes parallel to that vector. This situation occurs, for instance, when p lies on a thin line-like feature, or on a sharp corner of the boundary between two regions with contrasting values.
Finally, if the ellipsoid is roughly spherical (that is, if $\lambda _{1}\approx \lambda _{2}\approx \lambda _{3}$), it means that the gradient directions in the window are more or less evenly distributed, with no marked preference, so that the function $I$ is mostly isotropic in that neighborhood. This happens, for instance, when the function has spherical symmetry in the neighborhood of p. In particular, if the ellipsoid degenerates to a point (that is, if the three eigenvalues are zero), it means that $I$ is constant (has zero gradient) within the window.
== The multi-scale structure tensor ==
The structure tensor is an important tool in scale space analysis. The multi-scale structure tensor (or multi-scale second moment matrix) of a function $I$ is, in contrast to other one-parameter scale-space features, an image descriptor that is defined over two scale parameters. One scale parameter, referred to as local scale $t$, is needed for determining the amount of pre-smoothing when computing the image gradient $(\nabla I)(x;t)$. Another scale parameter, referred to as integration scale $s$, is needed for specifying the spatial extent of the window function $w(\xi ;s)$ that determines the weights for the region in space over which the components of the outer product of the gradient by itself, $(\nabla I)(\nabla I)^{\mathrm{T}}$, are accumulated.
More precisely, suppose that $I$ is a real-valued signal defined over $\mathbb{R}^{k}$. For any local scale $t>0$, let a multi-scale representation $I(x;t)$ of this signal be given by $I(x;t)=h(x;t)*I(x)$, where $h(x;t)$ represents a pre-smoothing kernel. Furthermore, let $(\nabla I)(x;t)$ denote the gradient of the scale space representation. Then, the multi-scale structure tensor/second-moment matrix is defined by
$$\mu (x;t,s)=\int _{\xi \in \mathbb {R} ^{k}}(\nabla I)(x-\xi ;t)\,(\nabla I)^{\mathrm{T}}(x-\xi ;t)\,w(\xi ;s)\,d\xi$$
Conceptually, one may ask if it would be sufficient to use any self-similar families of smoothing functions $h(x;t)$ and $w(\xi ;s)$. If one were naively to apply, for example, a box filter, however, then undesirable artifacts could easily occur. If one wants the multi-scale structure tensor to be well-behaved over both increasing local scales $t$ and increasing integration scales $s$, then it can be shown that both the smoothing function and the window function have to be Gaussian. The conditions that specify this uniqueness are similar to the scale-space axioms that are used for deriving the uniqueness of the Gaussian kernel for a regular Gaussian scale space of image intensities.
There are different ways of handling the two-parameter scale variations in this family of image descriptors. If we keep the local scale parameter $t$ fixed and apply increasingly broadened versions of the window function by increasing the integration scale parameter $s$ only, then we obtain a true formal scale space representation of the directional data computed at the given local scale $t$. If we couple the local scale and integration scale by a relative integration scale $r\geq 1$, such that $s=rt$, then for any fixed value of $r$ we obtain a reduced self-similar one-parameter variation, which is frequently used to simplify computational algorithms, for example in corner detection, interest point detection, texture analysis and image matching. By varying the relative integration scale $r\geq 1$ in such a self-similar scale variation, we obtain another alternative way of parameterizing the multi-scale nature of directional data obtained by increasing the integration scale.
A conceptually similar construction can be performed for discrete signals, with the convolution integral replaced by a convolution sum and with the continuous Gaussian kernel $g(x;t)$ replaced by the discrete Gaussian kernel $T(n;t)$:
$$\mu (x;t,s)=\sum _{n\in \mathbb {Z} ^{k}}(\nabla I)(x-n;t)\,(\nabla I)^{\mathrm{T}}(x-n;t)\,w(n;s)$$
When quantizing the scale parameters $t$ and $s$ in an actual implementation, a finite geometric progression $\alpha ^{i}$ is usually used, with i ranging from 0 to some maximum scale index m. Thus, the discrete scale levels will bear certain similarities to an image pyramid, although spatial subsampling may not necessarily be used, in order to preserve more accurate data for subsequent processing stages.
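A minimal sketch of the two-parameter computation (hypothetical NumPy/SciPy code, using sampled Gaussians rather than the discrete Gaussian kernel $T(n;t)$ mentioned above; scales here are variances, while `gaussian_filter` takes standard deviations):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_structure_tensor(img, t, s):
    """mu(x; t, s): pre-smooth at local scale t, integrate at scale s."""
    img = img.astype(float)
    # Gradient of the scale-space representation at local scale t.
    Ix = gaussian_filter(img, np.sqrt(t), order=(0, 1))
    Iy = gaussian_filter(img, np.sqrt(t), order=(1, 0))
    # Integrate the outer-product components with the window w(.; s).
    mu_xx = gaussian_filter(Ix * Ix, np.sqrt(s))
    mu_xy = gaussian_filter(Ix * Iy, np.sqrt(s))
    mu_yy = gaussian_filter(Iy * Iy, np.sqrt(s))
    return mu_xx, mu_xy, mu_yy

# Coupled scales s = r * t with relative integration scale r >= 1, e.g. r = 2:
# mu = multiscale_structure_tensor(img, t=2.0, s=2.0 * 2.0)
```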
== Applications ==
The eigenvalues of the structure tensor play a significant role in many image processing algorithms, for problems like corner detection, interest point detection, and feature tracking. The structure tensor also plays a central role in the Lucas-Kanade optical flow algorithm and in its extensions to estimate affine shape adaptation, where the magnitude of $\lambda _{2}$ is an indicator of the reliability of the computed result. The tensor has been used for scale space analysis, estimation of local surface orientation from monocular or binocular cues, non-linear fingerprint enhancement, diffusion-based image processing, and several other image processing problems. The structure tensor can also be applied in geology to filter seismic data.
=== Processing spatio-temporal video data with the structure tensor ===
The three-dimensional structure tensor has been used to analyze three-dimensional video data (viewed as a function of x, y, and time t). If one in this context aims at image descriptors that are invariant under Galilean transformations, to make it possible to compare image measurements that have been obtained under variations of a priori unknown image velocities $v=(v_{x},v_{y})^{\mathrm{T}}$,
$${\begin{bmatrix}x'\\y'\\t'\end{bmatrix}}=G{\begin{bmatrix}x\\y\\t\end{bmatrix}}={\begin{bmatrix}x-v_{x}\,t\\y-v_{y}\,t\\t\end{bmatrix}},$$
it is, however, from a computational viewpoint preferable to parameterize the components in the structure tensor/second-moment matrix $S$ using the notion of Galilean diagonalization
$$S'=R_{\text{space}}^{-\mathrm{T}}\,G^{-\mathrm{T}}\,S\,G^{-1}\,R_{\text{space}}^{-1}={\begin{bmatrix}\nu _{1}&&\\&\nu _{2}&\\&&\nu _{3}\end{bmatrix}}$$
where $G$ denotes a Galilean transformation of spacetime and $R_{\text{space}}$ a two-dimensional rotation over the spatial domain, compared to the above-mentioned use of eigenvalues of a 3-D structure tensor, which corresponds to an eigenvalue decomposition and a (non-physical) three-dimensional rotation of spacetime
$$S''=R_{\text{spacetime}}^{-\mathrm{T}}\,S\,R_{\text{spacetime}}^{-1}={\begin{bmatrix}\lambda _{1}&&\\&\lambda _{2}&\\&&\lambda _{3}\end{bmatrix}}.$$
To obtain true Galilean invariance, however, the shape of the spatio-temporal window function also needs to be adapted, corresponding to the transfer of affine shape adaptation from spatial to spatio-temporal image data. In combination with local spatio-temporal histogram descriptors, these concepts together allow for Galilean invariant recognition of spatio-temporal events.
== See also ==
Tensor
Tensor operator
Directional derivative
Gaussian
Corner detection
Edge detection
Lucas-Kanade method
Affine shape adaptation
Generalized structure tensor
== Resources ==
Download MATLAB Source
Structure Tensor Tutorial (Original)
In mathematics, any vector space $V$ has a corresponding dual vector space (or just dual space for short) consisting of all linear forms on $V$, together with the vector space structure of pointwise addition and scalar multiplication by constants.
The dual space as defined above is defined for all vector spaces, and to avoid ambiguity may also be called the algebraic dual space.
When defined for a topological vector space, there is a subspace of the dual space, corresponding to continuous linear functionals, called the continuous dual space.
Dual vector spaces find application in many branches of mathematics that use vector spaces, such as in tensor analysis with finite-dimensional vector spaces.
When applied to vector spaces of functions (which are typically infinite-dimensional), dual spaces are used to describe measures, distributions, and Hilbert spaces. Consequently, the dual space is an important concept in functional analysis.
Early terms for dual include polarer Raum [Hahn 1927], espace conjugué, adjoint space [Alaoglu 1940], and transponierter Raum [Schauder 1930] and [Banach 1932]. The term dual is due to Bourbaki 1938.
== Algebraic dual space ==
Given any vector space $V$ over a field $F$, the (algebraic) dual space $V^{*}$ (alternatively denoted by $V^{\lor }$ or $V'$) is defined as the set of all linear maps $\varphi :V\to F$ (linear functionals). Since linear maps are vector space homomorphisms, the dual space may be denoted $\hom(V,F)$.
The dual space $V^{*}$ itself becomes a vector space over $F$ when equipped with an addition and scalar multiplication satisfying
$${\begin{aligned}(\varphi +\psi )(x)&=\varphi (x)+\psi (x)\\(a\varphi )(x)&=a\left(\varphi (x)\right)\end{aligned}}$$
for all $\varphi ,\psi \in V^{*}$, $x\in V$, and $a\in F$.
Elements of the algebraic dual space $V^{*}$ are sometimes called covectors, one-forms, or linear forms. The pairing of a functional $\varphi$ in the dual space $V^{*}$ and an element $x$ of $V$ is sometimes denoted by a bracket: $\varphi (x)=[x,\varphi ]$ or $\varphi (x)=\langle x,\varphi \rangle$. This pairing defines a nondegenerate bilinear mapping $\langle \cdot ,\cdot \rangle :V\times V^{*}\to F$ called the natural pairing.
=== Finite-dimensional case ===
If $V$ is finite-dimensional, then $V^{*}$ has the same dimension as $V$. Given a basis $\{\mathbf {e} _{1},\dots ,\mathbf {e} _{n}\}$ in $V$, it is possible to construct a specific basis in $V^{*}$, called the dual basis. This dual basis is a set $\{\mathbf {e} ^{1},\dots ,\mathbf {e} ^{n}\}$ of linear functionals on $V$, defined by the relation
$$\mathbf {e} ^{i}(c^{1}\mathbf {e} _{1}+\cdots +c^{n}\mathbf {e} _{n})=c^{i},\quad i=1,\ldots ,n$$
for any choice of coefficients $c^{i}\in F$. In particular, letting in turn each one of those coefficients be equal to one and the other coefficients zero gives the system of equations
$$\mathbf {e} ^{i}(\mathbf {e} _{j})=\delta _{j}^{i}$$
where $\delta _{j}^{i}$ is the Kronecker delta symbol. This property is referred to as the bi-orthogonality property.
For example, if $V$ is $\mathbb{R}^{2}$, let its basis be chosen as $\{\mathbf {e} _{1}=(1/2,1/2),\mathbf {e} _{2}=(0,1)\}$. The basis vectors are not orthogonal to each other. Then, $\mathbf {e} ^{1}$ and $\mathbf {e} ^{2}$ are one-forms (functions that map a vector to a scalar) such that $\mathbf {e} ^{1}(\mathbf {e} _{1})=1$, $\mathbf {e} ^{1}(\mathbf {e} _{2})=0$, $\mathbf {e} ^{2}(\mathbf {e} _{1})=0$, and $\mathbf {e} ^{2}(\mathbf {e} _{2})=1$. (Note: The superscript here is the index, not an exponent.) This system of equations can be expressed using matrix notation as
$${\begin{bmatrix}e^{11}&e^{12}\\e^{21}&e^{22}\end{bmatrix}}{\begin{bmatrix}e_{11}&e_{21}\\e_{12}&e_{22}\end{bmatrix}}={\begin{bmatrix}1&0\\0&1\end{bmatrix}}.$$
Solving for the unknown values in the first matrix shows the dual basis to be $\{\mathbf {e} ^{1}=(2,0),\mathbf {e} ^{2}=(-1,1)\}$. Because $\mathbf {e} ^{1}$ and $\mathbf {e} ^{2}$ are functionals, they can be rewritten as $\mathbf {e} ^{1}(x,y)=2x$ and $\mathbf {e} ^{2}(x,y)=-x+y$.
In general, when $V$ is $\mathbb{R}^{n}$, if $E=[\mathbf {e} _{1}|\cdots |\mathbf {e} _{n}]$ is a matrix whose columns are the basis vectors and ${\hat {E}}=[\mathbf {e} ^{1}|\cdots |\mathbf {e} ^{n}]$ is a matrix whose columns are the dual basis vectors, then
$${\hat {E}}^{\mathrm{T}}\cdot E=I_{n},$$
where $I_{n}$ is the identity matrix of order $n$. The biorthogonality property of these two basis sets allows any point $\mathbf {x} \in V$ to be represented as
$$\mathbf {x} =\sum _{i}\langle \mathbf {x} ,\mathbf {e} ^{i}\rangle \mathbf {e} _{i}=\sum _{i}\langle \mathbf {x} ,\mathbf {e} _{i}\rangle \mathbf {e} ^{i},$$
even when the basis vectors are not orthogonal to each other. Strictly speaking, the above statement only makes sense once the inner product $\langle \cdot ,\cdot \rangle$ and the corresponding duality pairing are introduced, as described below in § Bilinear products and dual spaces.
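A quick numerical check of the two statements above (a sketch, assuming NumPy; solving ${\hat {E}}^{\mathrm{T}}E=I_{n}$ gives ${\hat {E}}=(E^{-1})^{\mathrm{T}}$):

```python
import numpy as np

# Basis of R^2 from the example, as columns of E.
E = np.array([[0.5, 0.0],
              [0.5, 1.0]])

# Dual basis vectors as columns of E_hat, from E_hat^T @ E = I.
E_hat = np.linalg.inv(E).T
print(E_hat)               # columns: e^1 = (2, 0), e^2 = (-1, 1)

# Biorthogonality and the expansion x = sum_i <x, e^i> e_i:
x = np.array([3.0, 4.0])
coeffs = E_hat.T @ x       # c^i = e^i(x)
assert np.allclose(E @ coeffs, x)
```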
In particular, if $\mathbb{R}^{n}$ is interpreted as the space of columns of $n$ real numbers, its dual space is typically written as the space of rows of $n$ real numbers. Such a row acts on $\mathbb{R}^{n}$ as a linear functional by ordinary matrix multiplication. This is because a functional maps every $n$-vector $x$ into a real number $y$. Then, seeing this functional as a matrix $M$, and $x$ as an $n\times 1$ matrix, and $y$ as a $1\times 1$ matrix (trivially, a real number), if $Mx=y$ then, by dimension reasons, $M$ must be a $1\times n$ matrix; that is, $M$ must be a row vector.
If $V$ consists of the space of geometrical vectors in the plane, then the level curves of an element of $V^{*}$ form a family of parallel lines in $V$, because the range is 1-dimensional, so that every point in the range is a multiple of any one nonzero element. So an element of $V^{*}$ can be intuitively thought of as a particular family of parallel lines covering the plane. To compute the value of a functional on a given vector, it suffices to determine which of the lines the vector lies on. Informally, this "counts" how many lines the vector crosses.
More generally, if $V$ is a vector space of any dimension, then the level sets of a linear functional in $V^{*}$ are parallel hyperplanes in $V$, and the action of a linear functional on a vector can be visualized in terms of these hyperplanes.
=== Infinite-dimensional case ===
If $V$ is not finite-dimensional but has a basis $\mathbf {e} _{\alpha }$ indexed by an infinite set $A$, then the same construction as in the finite-dimensional case yields linearly independent elements $\mathbf {e} ^{\alpha }$ ($\alpha \in A$) of the dual space, but they will not form a basis.
For instance, consider the space $\mathbb{R}^{\infty }$, whose elements are those sequences of real numbers that contain only finitely many non-zero entries, which has a basis indexed by the natural numbers $\mathbb{N}$. For $i\in \mathbb{N}$, $\mathbf {e} _{i}$ is the sequence consisting of all zeroes except in the $i$-th position, which is 1. The dual space of $\mathbb{R}^{\infty }$ is (isomorphic to) $\mathbb{R}^{\mathbb{N}}$, the space of all sequences of real numbers: each real sequence $(a_{n})$ defines a function where the element $(x_{n})$ of $\mathbb{R}^{\infty }$ is sent to the number
$$\sum _{n}a_{n}x_{n},$$
which is a finite sum because there are only finitely many nonzero $x_{n}$. The dimension of $\mathbb{R}^{\infty }$ is countably infinite, whereas $\mathbb{R}^{\mathbb{N}}$ does not have a countable basis.
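A small illustration (hypothetical code, not from the article): an element of $\mathbb{R}^{\infty }$ can be stored as a finite dict of its nonzero entries, while a functional in $\mathbb{R}^{\mathbb{N}}$ can be an arbitrary rule $n\mapsto a_{n}$; their pairing is always a finite sum.

```python
def pair(x, a):
    """Apply the functional (a_n) in R^N to x in R^infty.
    x: dict {index: value} with finitely many nonzero entries.
    a: any function from indices to reals (an arbitrary sequence)."""
    return sum(value * a(n) for n, value in x.items())

x = {0: 2.0, 5: -1.0}            # the sequence (2, 0, 0, 0, 0, -1, 0, ...)
a = lambda n: float(n + 1)       # the sequence (1, 2, 3, ...), not in R^infty
print(pair(x, a))                # 2*1 + (-1)*6 = -4.0
```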
This observation generalizes to any infinite-dimensional vector space $V$ over any field $F$: a choice of basis $\{\mathbf {e} _{\alpha }:\alpha \in A\}$ identifies $V$ with the space $(F^{A})_{0}$ of functions $f:A\to F$ such that $f_{\alpha }=f(\alpha )$ is nonzero for only finitely many $\alpha \in A$, where such a function $f$ is identified with the vector $\sum _{\alpha \in A}f_{\alpha }\mathbf {e} _{\alpha }$ in $V$ (the sum is finite by the assumption on $f$, and any $v\in V$ may be written uniquely in this way by the definition of the basis).
The dual space of $V$ may then be identified with the space $F^{A}$ of all functions from $A$ to $F$: a linear functional $T$ on $V$ is uniquely determined by the values $\theta _{\alpha }=T(\mathbf {e} _{\alpha })$ it takes on the basis of $V$, and any function $\theta :A\to F$ (with $\theta (\alpha )=\theta _{\alpha }$) defines a linear functional $T$ on $V$ by
$$T\left(\sum _{\alpha \in A}f_{\alpha }\mathbf {e} _{\alpha }\right)=\sum _{\alpha \in A}f_{\alpha }T(\mathbf {e} _{\alpha })=\sum _{\alpha \in A}f_{\alpha }\theta _{\alpha }.$$
Again, the sum is finite because $f_{\alpha }$ is nonzero for only finitely many $\alpha$.
The set $(F^{A})_{0}$ may be identified (essentially by definition) with the direct sum of infinitely many copies of $F$ (viewed as a 1-dimensional vector space over itself) indexed by $A$, i.e. there are linear isomorphisms
$$V\cong (F^{A})_{0}\cong \bigoplus _{\alpha \in A}F.$$
On the other hand, $F^{A}$ is (again by definition) the direct product of infinitely many copies of $F$ indexed by $A$, and so the identification
$$V^{*}\cong \left(\bigoplus _{\alpha \in A}F\right)^{*}\cong \prod _{\alpha \in A}F^{*}\cong \prod _{\alpha \in A}F\cong F^{A}$$
is a special case of a general result relating direct sums (of modules) to direct products.
If a vector space is not finite-dimensional, then its (algebraic) dual space is always of larger dimension (as a cardinal number) than the original vector space. This is in contrast to the case of the continuous dual space, discussed below, which may be isomorphic to the original vector space even if the latter is infinite-dimensional.
The proof of this inequality between dimensions results from the following.
If $V$ is an infinite-dimensional $F$-vector space, the arithmetical properties of cardinal numbers imply that
$$\mathrm {dim} (V)=|A|<|F|^{|A|}=|V^{\ast }|=\mathrm {max} (|\mathrm {dim} (V^{\ast })|,|F|),$$
where cardinalities are denoted as absolute values. For proving that $\mathrm {dim} (V)<\mathrm {dim} (V^{*})$, it suffices to prove that $|F|\leq |\mathrm {dim} (V^{\ast })|$, which can be done with an argument similar to Cantor's diagonal argument. The exact dimension of the dual is given by the Erdős–Kaplansky theorem.
=== Bilinear products and dual spaces ===
If V is finite-dimensional, then V is isomorphic to V∗. But there is in general no natural isomorphism between these two spaces. Any bilinear form ⟨·,·⟩ on V gives a mapping of V into its dual space via
$$v\mapsto \langle v,\cdot \rangle$$
where the right hand side is defined as the functional on V taking each w ∈ V to ⟨v, w⟩. In other words, the bilinear form determines a linear mapping
$$\Phi _{\langle \cdot ,\cdot \rangle }:V\to V^{*}$$
defined by
$$\left[\Phi _{\langle \cdot ,\cdot \rangle }(v),w\right]=\langle v,w\rangle .$$
If the bilinear form is nondegenerate, then this is an isomorphism onto a subspace of V∗.
If V is finite-dimensional, then this is an isomorphism onto all of V∗. Conversely, any isomorphism $\Phi$ from V to a subspace of V∗ (resp., all of V∗ if V is finite dimensional) defines a unique nondegenerate bilinear form $\langle \cdot ,\cdot \rangle _{\Phi }$ on V by
$$\langle v,w\rangle _{\Phi }=(\Phi (v))(w)=[\Phi (v),w].$$
Thus there is a one-to-one correspondence between isomorphisms of V to a subspace of (resp., all of) V∗ and nondegenerate bilinear forms on V.
If the vector space V is over the complex field, then sometimes it is more natural to consider sesquilinear forms instead of bilinear forms.
In that case, a given sesquilinear form ⟨·,·⟩ determines an isomorphism of V with the complex conjugate of the dual space
$$\Phi _{\langle \cdot ,\cdot \rangle }:V\to {\overline {V^{*}}}.$$
The conjugate of the dual space ${\overline {V^{*}}}$ can be identified with the set of all additive complex-valued functionals f : V → C such that
$$f(\alpha v)={\overline {\alpha }}f(v).$$
=== Injection into the double-dual ===
There is a natural homomorphism $\Psi$ from $V$ into the double dual $V^{**}=\hom(V^{*},F)$, defined by $(\Psi (v))(\varphi )=\varphi (v)$ for all $v\in V$, $\varphi \in V^{*}$. In other words, if $\mathrm {ev} _{v}:V^{*}\to F$ is the evaluation map defined by $\varphi \mapsto \varphi (v)$, then $\Psi :V\to V^{**}$ is defined as the map $v\mapsto \mathrm {ev} _{v}$. This map $\Psi$ is always injective; and it is always an isomorphism if $V$ is finite-dimensional.
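The evaluation map is easy to phrase as code. A toy sketch (hypothetical; functionals on $\mathbb{R}^{n}$ represented as row vectors via NumPy):

```python
import numpy as np

def Psi(v):
    """Send v in V to ev_v in V**: the map that evaluates functionals at v."""
    return lambda phi: float(phi @ v)   # phi is a functional (row vector)

v = np.array([1.0, 2.0, 3.0])
phi = np.array([4.0, 0.0, -1.0])        # phi(x) = 4*x1 - x3
assert Psi(v)(phi) == phi @ v           # (Psi(v))(phi) = phi(v) = 1.0
```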
Indeed, the isomorphism of a finite-dimensional vector space with its double dual is an archetypal example of a natural isomorphism.
Infinite-dimensional Hilbert spaces are not isomorphic to their algebraic double duals, but instead to their continuous double duals.
=== Transpose of a linear map ===
If f : V → W is a linear map, then the transpose (or dual) f∗ : W∗ → V∗ is defined by
$$f^{*}(\varphi )=\varphi \circ f$$
for every $\varphi \in W^{*}$. The resulting functional $f^{*}(\varphi )$ in $V^{*}$ is called the pullback of $\varphi$ along $f$.
The following identity holds for all $\varphi \in W^{*}$ and $v\in V$:
$$[f^{*}(\varphi ),\,v]=[\varphi ,\,f(v)],$$
where the bracket [·,·] on the left is the natural pairing of V with its dual space, and that on the right is the natural pairing of W with its dual. This identity characterizes the transpose, and is formally similar to the definition of the adjoint.
The assignment f ↦ f∗ produces an injective linear map between the space of linear operators from V to W and the space of linear operators from W∗ to V∗; this homomorphism is an isomorphism if and only if W is finite-dimensional.
If V = W then the space of linear maps is actually an algebra under composition of maps, and the assignment is then an antihomomorphism of algebras, meaning that (fg)∗ = g∗f∗.
In the language of category theory, taking the dual of vector spaces and the transpose of linear maps is therefore a contravariant functor from the category of vector spaces over F to itself.
It is possible to identify (f∗)∗ with f using the natural injection into the double dual.
If the linear map f is represented by the matrix A with respect to two bases of V and W, then f∗ is represented by the transpose matrix $A^{\mathrm{T}}$ with respect to the dual bases of W∗ and V∗, hence the name.
Alternatively, as f is represented by A acting on the left on column vectors, f∗ is represented by the same matrix acting on the right on row vectors.
These points of view are related by the canonical inner product on Rn, which identifies the space of column vectors with the dual space of row vectors.
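In matrix terms the identity $[f^{*}(\varphi ),v]=[\varphi ,f(v)]$ is just associativity of matrix multiplication. A brief check (a sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # f: V = R^2 -> W = R^3, acting as x -> A x
v = rng.standard_normal(2)
phi = rng.standard_normal(3)      # functional on W, as a row vector

# Pullback f*(phi) = phi o f is represented by A^T acting on phi:
pullback = A.T @ phi
assert np.isclose(pullback @ v, phi @ (A @ v))   # [f*(phi), v] = [phi, f(v)]
```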
=== Quotient spaces and annihilators ===
Let $S$ be a subset of $V$. The annihilator of $S$ in $V^{*}$, denoted here $S^{0}$, is the collection of linear functionals $f\in V^{*}$ such that $[f,s]=0$ for all $s\in S$. That is, $S^{0}$ consists of all linear functionals $f:V\to F$ such that the restriction to $S$ vanishes: $f|_{S}=0$.
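Computationally, for a subspace of $\mathbb{R}^{n}$ spanned by a few vectors, the annihilator is the null space of the matrix having those vectors as rows. A sketch (assuming NumPy/SciPy):

```python
import numpy as np
from scipy.linalg import null_space

# S spans a 1-dimensional subspace of R^3.
S = np.array([[1.0, 2.0, 0.0]])

# Functionals f (as row vectors) with f(s) = 0 for all s in S:
# these are solutions of S @ f = 0, i.e. the null space of S.
ann = null_space(S)                  # columns form a basis of S^0
assert np.allclose(S @ ann, 0.0)     # every basis functional kills S
print(ann.shape[1])                  # dim S^0 = 3 - 1 = 2
```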
Within finite dimensional vector spaces, the annihilator is dual to (isomorphic to) the orthogonal complement.
The annihilator of a subset is itself a vector space.
The annihilator of the zero vector is the whole dual space: $\{0\}^{0}=V^{*}$, and the annihilator of the whole space is just the zero covector: $V^{0}=\{0\}\subseteq V^{*}$. Furthermore, the assignment of an annihilator to a subset of $V$ reverses inclusions, so that if $\{0\}\subseteq S\subseteq T\subseteq V$, then
$$\{0\}\subseteq T^{0}\subseteq S^{0}\subseteq V^{*}.$$
If $A$ and $B$ are two subsets of $V$, then
$$A^{0}+B^{0}\subseteq (A\cap B)^{0}.$$
If $(A_{i})_{i\in I}$ is any family of subsets of $V$ indexed by $i$ belonging to some index set $I$, then
$$\left(\bigcup _{i\in I}A_{i}\right)^{0}=\bigcap _{i\in I}A_{i}^{0}.$$
In particular, if $A$ and $B$ are subspaces of $V$, then
$$(A+B)^{0}=A^{0}\cap B^{0}\qquad {\text{and}}\qquad (A\cap B)^{0}=A^{0}+B^{0}.$$
If $V$ is finite-dimensional and $W$ is a vector subspace, then $W^{00}=W$ after identifying $W$ with its image in the second dual space under the double duality isomorphism $V\approx V^{**}$. In particular, forming the annihilator is a Galois connection on the lattice of subsets of a finite-dimensional vector space.
If $W$ is a subspace of $V$ then the quotient space $V/W$ is a vector space in its own right, and so has a dual. By the first isomorphism theorem, a functional $f:V\to F$ factors through $V/W$ if and only if $W$ is in the kernel of $f$. There is thus an isomorphism
$$(V/W)^{*}\cong W^{0}.$$
As a particular consequence, if $V$ is a direct sum of two subspaces $A$ and $B$, then $V^{*}$ is a direct sum of $A^{0}$ and $B^{0}$.
=== Dimensional analysis ===
The dual space is analogous to a "negative"-dimensional space. Most simply, since a vector $v\in V$ can be paired with a covector $\varphi \in V^{*}$ by the natural pairing $\langle x,\varphi \rangle :=\varphi (x)\in F$ to obtain a scalar, a covector can "cancel" the dimension of a vector, similar to reducing a fraction. Thus while the direct sum $V\oplus V^{*}$ is a $2n$-dimensional space (if $V$ is $n$-dimensional), $V^{*}$ behaves as a $(-n)$-dimensional space, in the sense that its dimensions can be canceled against the dimensions of $V$. This is formalized by tensor contraction.
This arises in physics via dimensional analysis, where the dual space has inverse units. Under the natural pairing, these units cancel, and the resulting scalar value is dimensionless, as expected. For example, in (continuous) Fourier analysis, or more broadly time–frequency analysis: given a one-dimensional vector space with a unit of time $t$, the dual space has units of frequency: occurrences per unit of time (units of $1/t$). For example, if time is measured in seconds, the corresponding dual unit is the inverse second: over the course of 3 seconds, an event that occurs 2 times per second occurs a total of 6 times, corresponding to $3\,\mathrm{s}\cdot 2\,\mathrm{s}^{-1}=6$. Similarly, if the primal space measures length, the dual space measures inverse length.
== Continuous dual space ==
When dealing with topological vector spaces, the continuous linear functionals from the space into the base field $\mathbb{F} =\mathbb{C}$ (or $\mathbb{R}$) are particularly important. This gives rise to the notion of the "continuous dual space" or "topological dual", which is a linear subspace of the algebraic dual space $V^{*}$, denoted by $V'$. For any finite-dimensional normed vector space or topological vector space, such as Euclidean n-space, the continuous dual and the algebraic dual coincide. This is however false for any infinite-dimensional normed space, as shown by the example of discontinuous linear maps. Nevertheless, in the theory of topological vector spaces the terms "continuous dual space" and "topological dual space" are often replaced by "dual space". For a topological vector space $V$, its continuous dual space, or topological dual space, or just dual space (in the sense of the theory of topological vector spaces), $V'$, is defined as the space of all continuous linear functionals $\varphi :V\to \mathbb{F}$.
Important examples of continuous dual spaces are the space of compactly supported test functions $\mathcal{D}$ and its dual $\mathcal{D}'$, the space of arbitrary distributions (generalized functions); the space of arbitrary test functions $\mathcal{E}$ and its dual $\mathcal{E}'$, the space of compactly supported distributions; and the space of rapidly decreasing test functions $\mathcal{S}$, the Schwartz space, and its dual $\mathcal{S}'$, the space of tempered distributions (slowly growing distributions) in the theory of generalized functions.
=== Properties ===
If X is a Hausdorff topological vector space (TVS), then the continuous dual space of X is identical to the continuous dual space of the completion of X.
=== Topologies on the dual ===
There is a standard construction for introducing a topology on the continuous dual {\displaystyle V'} of a topological vector space {\displaystyle V}. Fix a collection {\displaystyle {\mathcal {A}}} of bounded subsets of {\displaystyle V}.
This gives the topology on {\displaystyle V'} of uniform convergence on sets from {\displaystyle {\mathcal {A}},} or, what is the same thing, the topology generated by seminorms of the form
{\displaystyle \|\varphi \|_{A}=\sup _{x\in A}|\varphi (x)|,}
where {\displaystyle \varphi } is a continuous linear functional on {\displaystyle V}, and {\displaystyle A} runs over the class {\displaystyle {\mathcal {A}}.}
This means that a net of functionals {\displaystyle \varphi _{i}} tends to a functional {\displaystyle \varphi } in {\displaystyle V'} if and only if
{\displaystyle {\text{ for all }}A\in {\mathcal {A}}\qquad \|\varphi _{i}-\varphi \|_{A}=\sup _{x\in A}|\varphi _{i}(x)-\varphi (x)|{\underset {i\to \infty }{\longrightarrow }}0.}
Usually (but not necessarily) the class {\displaystyle {\mathcal {A}}} is supposed to satisfy the following conditions:
Each point {\displaystyle x} of {\displaystyle V} belongs to some set {\displaystyle A\in {\mathcal {A}}}:
{\displaystyle {\text{ for all }}x\in V\quad {\text{ there exists some }}A\in {\mathcal {A}}\quad {\text{ such that }}x\in A.}
Each two sets {\displaystyle A\in {\mathcal {A}}} and {\displaystyle B\in {\mathcal {A}}} are contained in some set {\displaystyle C\in {\mathcal {A}}}:
{\displaystyle {\text{ for all }}A,B\in {\mathcal {A}}\quad {\text{ there exists some }}C\in {\mathcal {A}}\quad {\text{ such that }}A\cup B\subseteq C.}
{\displaystyle {\mathcal {A}}} is closed under the operation of multiplication by scalars:
{\displaystyle {\text{ for all }}A\in {\mathcal {A}}\quad {\text{ and all }}\lambda \in {\mathbb {F} }\quad {\text{ it holds that }}\lambda \cdot A\in {\mathcal {A}}.}
If these requirements are fulfilled, then the corresponding topology on {\displaystyle V'} is Hausdorff and the sets
{\displaystyle U_{A}~=~\left\{\varphi \in V'~:~\|\varphi \|_{A}<1\right\},\qquad {\text{ for }}A\in {\mathcal {A}},}
form its local base.
Here are the three most important special cases.
The strong topology on {\displaystyle V'} is the topology of uniform convergence on bounded subsets in {\displaystyle V} (so here {\displaystyle {\mathcal {A}}} can be chosen as the class of all bounded subsets in {\displaystyle V}).
If {\displaystyle V} is a normed vector space (for example, a Banach space or a Hilbert space), then the strong topology on {\displaystyle V'} is normed (in fact a Banach space if the field of scalars is complete), with the norm
{\displaystyle \|\varphi \|=\sup _{\|x\|\leq 1}|\varphi (x)|.}
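As a rough numerical illustration (ours, not from the source): on a finite-dimensional space with the ℓ p norm, a functional represented by a coefficient vector a has dual norm equal to the ℓ q norm of a, with 1/p + 1/q = 1. The Python sketch below checks this by sampling the unit sphere and by constructing the Hölder equality case; all names and values are invented.

```python
import numpy as np

# Sketch: for phi(x) = a . x on R^n with the l^p norm, the dual norm
# sup over ||x||_p <= 1 of |phi(x)| equals ||a||_q, 1/p + 1/q = 1 (Hoelder).
rng = np.random.default_rng(0)
p = 3.0
q = p / (p - 1.0)                 # conjugate exponent
a = rng.normal(size=5)            # coefficients representing phi

# Random points of the unit l^p sphere give a lower bound on the sup ...
xs = rng.normal(size=(100_000, 5))
xs /= np.linalg.norm(xs, ord=p, axis=1, keepdims=True)
print(np.abs(xs @ a).max())       # slightly below ||a||_q

# ... and the Hoelder equality case attains it exactly.
x_star = np.sign(a) * np.abs(a) ** (q / p)
x_star /= np.linalg.norm(x_star, ord=p)
assert np.isclose(x_star @ a, np.linalg.norm(a, ord=q))
```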
The stereotype topology on {\displaystyle V'} is the topology of uniform convergence on totally bounded sets in {\displaystyle V} (so here {\displaystyle {\mathcal {A}}} can be chosen as the class of all totally bounded subsets in {\displaystyle V}).
The weak topology on {\displaystyle V'} is the topology of uniform convergence on finite subsets in {\displaystyle V} (so here {\displaystyle {\mathcal {A}}} can be chosen as the class of all finite subsets in {\displaystyle V}).
Each of these three choices of topology on {\displaystyle V'} leads to a variant of the reflexivity property for topological vector spaces:
If {\displaystyle V'} is endowed with the strong topology, then the corresponding notion of reflexivity is the standard one: the spaces reflexive in this sense are just called reflexive.
If {\displaystyle V'} is endowed with the stereotype dual topology, then the corresponding reflexivity is presented in the theory of stereotype spaces: the spaces reflexive in this sense are called stereotype.
If {\displaystyle V'} is endowed with the weak topology, then the corresponding reflexivity is presented in the theory of dual pairs: the spaces reflexive in this sense are arbitrary (Hausdorff) locally convex spaces with the weak topology.
=== Examples ===
Let 1 < p < ∞ be a real number and consider the Banach space ℓ p of all sequences a = (an) for which
{\displaystyle \|\mathbf {a} \|_{p}=\left(\sum _{n=0}^{\infty }|a_{n}|^{p}\right)^{\frac {1}{p}}<\infty .}
Define the number q by 1/p + 1/q = 1. Then the continuous dual of ℓ p is naturally identified with ℓ q: given an element {\displaystyle \varphi \in (\ell ^{p})'}, the corresponding element of ℓ q is the sequence {\displaystyle (\varphi (\mathbf {e} _{n}))}, where {\displaystyle \mathbf {e} _{n}} denotes the sequence whose n-th term is 1 and all others are zero. Conversely, given an element a = (an) ∈ ℓ q, the corresponding continuous linear functional {\displaystyle \varphi } on ℓ p is defined by
{\displaystyle \varphi (\mathbf {b} )=\sum _{n}a_{n}b_{n}}
for all b = (bn) ∈ ℓ p (see Hölder's inequality).
In a similar manner, the continuous dual of ℓ 1 is naturally identified with ℓ ∞ (the space of bounded sequences).
Furthermore, the continuous duals of the Banach spaces c (consisting of all convergent sequences, with the supremum norm) and c0 (the sequences converging to zero) are both naturally identified with ℓ 1.
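As an illustrative aside (not from the source), the ℓ p–ℓ q pairing above can be checked numerically after truncating to finitely many terms; the sequences and exponents below are invented.

```python
import numpy as np

# Sketch of the pairing phi(b) = sum_n a_n b_n with a in l^q, b in l^p,
# truncated to 1000 terms; Hoelder bounds |phi(b)| by ||a||_q ||b||_p.
rng = np.random.default_rng(1)
p, q = 1.5, 3.0                                # conjugate: 1/1.5 + 1/3 = 1
n = np.arange(1, 1001)
a = rng.normal(size=1000) / n                  # decays fast enough for l^q
b = rng.normal(size=1000) / n                  # decays fast enough for l^p

phi_b = np.sum(a * b)                          # the functional applied to b
bound = np.linalg.norm(a, q) * np.linalg.norm(b, p)
assert abs(phi_b) <= bound
print(phi_b, bound)
```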
By the Riesz representation theorem, the continuous dual of a Hilbert space is again a Hilbert space which is anti-isomorphic to the original space.
This gives rise to the bra–ket notation used by physicists in the mathematical formulation of quantum mechanics.
By the Riesz–Markov–Kakutani representation theorem, the continuous dual of certain spaces of continuous functions can be described using measures.
=== Transpose of a continuous linear map ===
If T : V → W is a continuous linear map between two topological vector spaces, then the (continuous) transpose T′ : W′ → V′ is defined by the same formula as before:
{\displaystyle T'(\varphi )=\varphi \circ T,\quad \varphi \in W'.}
The resulting functional T′(φ) is in V′. The assignment T → T′ produces a linear map between the space of continuous linear maps from V to W and the space of linear maps from W′ to V′.
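In finite dimensions the transpose is just the matrix transpose acting on the dual side. A hedged sketch (all names ours) that checks this, together with the composition rule stated next:

```python
import numpy as np

# Represent T : V -> W by a matrix and a functional phi in W' by a row
# vector; then T'(phi) = phi o T corresponds to the row vector phi @ T.
rng = np.random.default_rng(2)
T = rng.normal(size=(3, 4))       # T maps V = R^4 into W = R^3
phi = rng.normal(size=3)          # a functional on W
v = rng.normal(size=4)

assert np.isclose((phi @ T) @ v,  # apply T'(phi) to v ...
                  phi @ (T @ v))  # ... same as applying phi to T(v)

# The rule (U o T)' = T' o U' is matrix transposition reversing products:
U = rng.normal(size=(2, 3))
assert np.allclose((U @ T).T, T.T @ U.T)
```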
When T and U are composable continuous linear maps, then
{\displaystyle (U\circ T)'=T'\circ U'.}
When V and W are normed spaces, the norm of the transpose in L(W′, V′) is equal to that of T in L(V, W).
Several properties of transposition depend upon the Hahn–Banach theorem.
For example, the bounded linear map T has dense range if and only if the transpose T′ is injective.
When T is a compact linear map between two Banach spaces V and W, then the transpose T′ is compact.
This can be proved using the Arzelà–Ascoli theorem.
When V is a Hilbert space, there is an antilinear isomorphism iV from V onto its continuous dual V′.
For every bounded linear map T on V, the transpose and the adjoint operators are linked by
{\displaystyle i_{V}\circ T^{*}=T'\circ i_{V}.}
When T is a continuous linear map between two topological vector spaces V and W, then the transpose T′ is continuous when W′ and V′ are equipped with "compatible" topologies: for example, when for X = V and X = W, both duals X′ have the strong topology β(X′, X) of uniform convergence on bounded sets of X, or both have the weak-∗ topology σ(X′, X) of pointwise convergence on X.
The transpose T′ is continuous from β(W′, W) to β(V′, V), or from σ(W′, W) to σ(V′, V).
=== Annihilators ===
Assume that W is a closed linear subspace of a normed space V, and consider the annihilator of W in V′,
{\displaystyle W^{\perp }=\{\varphi \in V':W\subseteq \ker \varphi \}.}
Then, the dual of the quotient V / W can be identified with W⊥, and the dual of W can be identified with the quotient V′ / W⊥.
Indeed, let P denote the canonical surjection from V onto the quotient V / W ; then, the transpose P′ is an isometric isomorphism from (V / W )′ into V′, with range equal to W⊥.
If j denotes the injection map from W into V, then the kernel of the transpose j′ is the annihilator of W:
{\displaystyle \ker(j')=W^{\perp }}
and it follows from the Hahn–Banach theorem that j′ induces an isometric isomorphism
V′ / W⊥ → W′.
=== Further properties ===
If the dual of a normed space V is separable, then so is the space V itself.
The converse is not true: for example, the space ℓ 1 is separable, but its dual ℓ ∞ is not.
=== Double dual ===
In analogy with the case of the algebraic double dual, there is always a naturally defined continuous linear operator Ψ : V → V′′ from a normed space V into its continuous double dual V′′, defined by
{\displaystyle \Psi (x)(\varphi )=\varphi (x),\quad x\in V,\ \varphi \in V'.}
As a consequence of the Hahn–Banach theorem, this map is in fact an isometry, meaning ‖ Ψ(x) ‖ = ‖ x ‖ for all x ∈ V.
Normed spaces for which the map Ψ is a bijection are called reflexive.
When V is a topological vector space then Ψ(x) can still be defined by the same formula, for every x ∈ V, however several difficulties arise.
First, when V is not locally convex, the continuous dual may be equal to { 0 } and the map Ψ trivial.
However, if V is Hausdorff and locally convex, the map Ψ is injective from V to the algebraic dual V′∗ of the continuous dual, again as a consequence of the Hahn–Banach theorem.
Second, even in the locally convex setting, several natural vector space topologies can be defined on the continuous dual V′, so that the continuous double dual V′′ is not uniquely defined as a set. Saying that Ψ maps from V to V′′, or in other words, that Ψ(x) is continuous on V′ for every x ∈ V, is a reasonable minimal requirement on the topology of V′, namely that the evaluation mappings
{\displaystyle \varphi \in V'\mapsto \varphi (x),\quad x\in V,}
be continuous for the chosen topology on V′. Further, there is still a choice of a topology on V′′, and continuity of Ψ depends upon this choice.
As a consequence, defining reflexivity in this framework is more involved than in the normed case.
== See also ==
Covariance and contravariance of vectors
Dual module
Dual norm
Duality (mathematics)
Duality (projective geometry)
Pontryagin duality
Reciprocal lattice – dual space basis, in crystallography
== External links ==
Weisstein, Eric W. "Dual Vector Space". MathWorld.
In physics, specifically for special relativity and general relativity, a four-tensor is an abbreviation for a tensor in a four-dimensional spacetime.
== Generalities ==
General four-tensors are usually written in tensor index notation as
{\displaystyle A_{\;\nu _{1},\nu _{2},\ldots ,\nu _{m}}^{\mu _{1},\mu _{2},\ldots ,\mu _{n}}}
with the indices taking integer values from 0 to 3, with 0 for the timelike components and 1, 2, 3 for spacelike components. There are n contravariant indices and m covariant indices.
In special and general relativity, many four-tensors of interest are first order (four-vectors) or second order, but higher-order tensors occur. Examples are listed next.
In special relativity, the vector basis can be restricted to being orthonormal, in which case all four-tensors transform under Lorentz transformations. In general relativity, more general coordinate transformations are necessary since such a restriction is not in general possible.
== Examples ==
=== First-order tensors ===
In special relativity, one of the simplest non-trivial examples of a four-tensor is the four-displacement
{\displaystyle x^{\mu }=\left(x^{0},x^{1},x^{2},x^{3}\right)=(ct,x,y,z)}
a four-tensor with contravariant rank 1 and covariant rank 0. Four-tensors of this kind are usually known as four-vectors. Here the component x0 = ct gives the displacement of a body in time (coordinate time t is multiplied by the speed of light c so that x0 has dimensions of length). The remaining components of the four-displacement form the spatial displacement vector x = (x1, x2, x3).
The four-momentum for massive or massless particles is
{\displaystyle p^{\mu }=\left(p^{0},p^{1},p^{2},p^{3}\right)=\left({\frac {E}{c}},p_{x},p_{y},p_{z}\right)}
combining its energy (divided by c) p0 = E/c and 3-momentum p = (p1, p2, p3).
For a particle with invariant mass {\displaystyle m_{0}}, also known as rest mass, the four-momentum is defined by
{\displaystyle p^{\mu }=m_{0}{\frac {dx^{\mu }}{d\tau }}}
with {\displaystyle \tau } the proper time of the particle.
The relativistic mass is
{\displaystyle m=\gamma m_{0}}
with Lorentz factor
{\displaystyle \gamma ={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}={\frac {1}{\sqrt {1-\beta ^{2}}}}={\frac {dt}{d\tau }}.}
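As a hedged numerical sketch (values our own), the four-momentum of a massive particle can be assembled from its 3-velocity, and the invariant p·p = −(m0c)² checked in the (−+++) convention used in the next subsection:

```python
import numpy as np

c = 299_792_458.0                     # speed of light, m/s
m0 = 9.109e-31                        # electron rest mass, kg (example)
v = np.array([0.6 * c, 0.0, 0.0])     # 3-velocity

gamma = 1.0 / np.sqrt(1.0 - (v @ v) / c**2)             # Lorentz factor
p = np.concatenate(([gamma * m0 * c], gamma * m0 * v))  # p^mu = (E/c, p)

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # Minkowski metric, (-+++)
assert np.isclose(p @ eta @ p, -(m0 * c) ** 2)  # invariant mass recovered
```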
=== Second-order tensors ===
The Minkowski metric tensor with an orthonormal basis for the (−+++) convention is
{\displaystyle \eta ^{\mu \nu }={\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}}
used for calculating the line element and raising and lowering indices. The above applies to Cartesian coordinates. In general relativity, the metric tensor is given by much more general expressions for curvilinear coordinates.
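A minimal sketch (example values ours) of using the metric to lower an index, x_μ = η_{μν} x^ν, and to compute an interval:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # (-+++) convention, as above

x_up = np.array([2.0, 1.0, 0.0, 0.0])    # contravariant components x^mu
x_dn = eta @ x_up                        # covariant components x_mu
print(x_dn)                              # [-2.  1.  0.  0.]

ds2 = x_up @ eta @ x_up                  # eta_{mu nu} x^mu x^nu
print(ds2)                               # -3.0, a timelike separation
```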
The angular momentum L = x ∧ p of a particle with relativistic mass m and relativistic momentum p (as measured by an observer in a lab frame) combines with another vector quantity N = mx − pt (without a standard name) in the relativistic angular momentum tensor
{\displaystyle M^{\mu \nu }={\begin{pmatrix}0&-N^{1}c&-N^{2}c&-N^{3}c\\N^{1}c&0&L^{12}&-L^{31}\\N^{2}c&-L^{12}&0&L^{23}\\N^{3}c&L^{31}&-L^{23}&0\end{pmatrix}}}
with components
{\displaystyle M^{\alpha \beta }=X^{\alpha }P^{\beta }-X^{\beta }P^{\alpha }.}
The stress–energy tensor of a continuum or field generally takes the form of a second-order tensor, usually denoted by T. The timelike component corresponds to energy density (energy per unit volume), the mixed spacetime components to momentum density (momentum per unit volume), and the purely spacelike parts to the 3d stress tensor.
The electromagnetic field tensor combines the electric field E and the magnetic field B:
{\displaystyle F^{\mu \nu }={\begin{pmatrix}0&-E_{x}/c&-E_{y}/c&-E_{z}/c\\E_{x}/c&0&-B_{z}&B_{y}\\E_{y}/c&B_{z}&0&-B_{x}\\E_{z}/c&-B_{y}&B_{x}&0\end{pmatrix}}}
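A short sketch (helper and values ours) that assembles this matrix from the 3-vectors E and B and checks its antisymmetry:

```python
import numpy as np

def field_tensor(E, B, c=299_792_458.0):
    """Assemble F^{mu nu} from 3-vectors E and B, following the matrix above."""
    Ex, Ey, Ez = np.asarray(E, dtype=float) / c
    Bx, By, Bz = B
    return np.array([[0.0, -Ex, -Ey, -Ez],
                     [ Ex, 0.0, -Bz,  By],
                     [ Ey,  Bz, 0.0, -Bx],
                     [ Ez, -By,  Bx, 0.0]])

F = field_tensor(E=[1e5, 0.0, 0.0], B=[0.0, 0.0, 1e-3])  # arbitrary values
assert np.allclose(F, -F.T)   # the field tensor is antisymmetric
```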
The electromagnetic displacement tensor combines the electric displacement field D and magnetic field intensity H as follows
{\displaystyle {\mathcal {D}}^{\mu \nu }={\begin{pmatrix}0&-D_{x}c&-D_{y}c&-D_{z}c\\D_{x}c&0&-H_{z}&H_{y}\\D_{y}c&H_{z}&0&-H_{x}\\D_{z}c&-H_{y}&H_{x}&0\end{pmatrix}}.}
The magnetization-polarization tensor combines the P and M fields
{\displaystyle {\mathcal {M}}^{\mu \nu }={\begin{pmatrix}0&P_{x}c&P_{y}c&P_{z}c\\-P_{x}c&0&-M_{z}&M_{y}\\-P_{y}c&M_{z}&0&-M_{x}\\-P_{z}c&-M_{y}&M_{x}&0\end{pmatrix}}.}
The three field tensors are related by
{\displaystyle {\mathcal {D}}^{\mu \nu }={\frac {1}{\mu _{0}}}F^{\mu \nu }-{\mathcal {M}}^{\mu \nu },}
which is equivalent to the definitions of the D and H fields.
The electric dipole moment d and magnetic dipole moment μ of a particle are unified into a single tensor
{\displaystyle \sigma ^{\mu \nu }={\begin{pmatrix}0&d_{x}&d_{y}&d_{z}\\-d_{x}&0&\mu _{z}/c&-\mu _{y}/c\\-d_{y}&-\mu _{z}/c&0&\mu _{x}/c\\-d_{z}&\mu _{y}/c&-\mu _{x}/c&0\end{pmatrix}}.}
The Ricci curvature tensor is another second-order tensor.
=== Higher-order tensors ===
In general relativity, there are curvature tensors which tend to be higher order, such as the Riemann curvature tensor and Weyl curvature tensor which are both fourth order tensors.
== See also ==
Spin tensor
Tetrad (general relativity)
Diffusion-weighted magnetic resonance imaging (DWI or DW-MRI) is the use of specific MRI sequences, and of software that generates images from the resulting data, to exploit the diffusion of water molecules and thereby generate contrast in MR images. It allows the mapping of the diffusion process of molecules, mainly water, in biological tissues, in vivo and non-invasively. Molecular diffusion in tissues is not random, but reflects interactions with many obstacles, such as macromolecules, fibers, and membranes. Water molecule diffusion patterns can therefore reveal microscopic details about tissue architecture, either normal or in a diseased state. A special kind of DWI, diffusion tensor imaging (DTI), has been used extensively to map white matter tractography in the brain.
== Introduction ==
In diffusion weighted imaging (DWI), the intensity of each image element (voxel) reflects the best estimate of the rate of water diffusion at that location. Because the mobility of water is driven by thermal agitation and highly dependent on its cellular environment, the hypothesis behind DWI is that findings may indicate (early) pathologic change. For instance, DWI is more sensitive to early changes after a stroke than more traditional MRI measurements such as T1 or T2 relaxation rates. A variant of diffusion weighted imaging, diffusion spectrum imaging (DSI), was used in deriving the Connectome data sets; DSI is a variant of diffusion-weighted imaging that is sensitive to intra-voxel heterogeneities in diffusion directions caused by crossing fiber tracts and thus allows more accurate mapping of axonal trajectories than other diffusion imaging approaches.
Diffusion-weighted images are very useful for diagnosing vascular strokes in the brain. DWI is also used increasingly in the staging of non-small-cell lung cancer, where it is a serious candidate to replace positron emission tomography as the 'gold standard' for this type of disease. Diffusion tensor imaging is being developed for studying the diseases of the white matter of the brain as well as for studies of other body tissues (see below). DWI is most applicable when the tissue of interest is dominated by isotropic water movement, e.g. grey matter in the cerebral cortex and major brain nuclei, or in the body—where the diffusion rate appears to be the same when measured along any axis. However, DWI also remains sensitive to T1 and T2 relaxation. To disentangle diffusion and relaxation effects on image contrast, one may obtain quantitative images of the diffusion coefficient, or more exactly the apparent diffusion coefficient (ADC). The ADC concept was introduced to take into account the fact that the diffusion process is complex in biological tissues and reflects several different mechanisms.
Diffusion tensor imaging (DTI) is important when a tissue—such as the neural axons of white matter in the brain or muscle fibers in the heart—has an internal fibrous structure analogous to the anisotropy of some crystals. Water will then diffuse more rapidly in the direction aligned with the internal structure (axial diffusion), and more slowly as it moves perpendicular to the preferred direction (radial diffusion). This also means that the measured rate of diffusion will differ depending on the direction from which an observer is looking.
Diffusion Basis Spectrum Imaging (DBSI) further separates DTI signals into discrete anisotropic diffusion tensors and a spectrum of isotropic diffusion tensors to better differentiate sub-voxel cellular structures. For example, anisotropic diffusion tensors correlate to axonal fibers, while low isotropic diffusion tensors correlate to cells and high isotropic diffusion tensors correlate to larger structures (such as the lumen or brain ventricles). DBSI has been shown to differentiate some types of brain tumors and multiple sclerosis with higher specificity and sensitivity than conventional DTI. DBSI has also been useful in determining microstructure properties of the brain.
Traditionally, in diffusion-weighted imaging (DWI), three gradient-directions are applied, sufficient to estimate the trace of the diffusion tensor or 'average diffusivity', a putative measure of edema. Clinically, trace-weighted images have proven to be very useful to diagnose vascular strokes in the brain, by early detection (within a couple of minutes) of the hypoxic edema.
More extended DTI scans derive neural tract directional information from the data using 3D or multidimensional vector algorithms based on six or more gradient directions, sufficient to compute the diffusion tensor. The diffusion tensor model is a rather simple model of the diffusion process, assuming homogeneity and linearity of the diffusion within each image voxel. From the diffusion tensor, diffusion anisotropy measures such as the fractional anisotropy (FA), can be computed. Moreover, the principal direction of the diffusion tensor can be used to infer the white-matter connectivity of the brain (i.e. tractography; trying to see which part of the brain is connected to which other part).
Recently, more advanced models of the diffusion process have been proposed that aim to overcome the weaknesses of the diffusion tensor model. Amongst others, these include q-space imaging and generalized diffusion tensor imaging.
== Mechanism ==
Diffusion imaging is an MRI method that produces in vivo magnetic resonance images of biological tissues sensitized with the local characteristics of molecular diffusion, generally water (but other moieties can also be investigated using MR spectroscopic approaches).
MRI can be made sensitive to the motion of molecules. Regular MRI acquisition utilizes the behavior of protons in water to generate contrast between clinically relevant features of a particular subject. The versatile nature of MRI is due to this capability of producing contrast related to the structure of tissues at the microscopic level. In a typical
{\displaystyle T_{1}}-weighted image, water molecules in a sample are excited with the imposition of a strong magnetic field. This causes many of the protons in water molecules to precess simultaneously, producing signals in MRI. In {\displaystyle T_{2}}-weighted images, contrast is produced by measuring the loss of coherence or synchrony between the water protons. When water is in an environment where it can freely tumble, relaxation tends to take longer. In certain clinical situations, this can generate contrast between an area of pathology and the surrounding healthy tissue.
To sensitize MRI images to diffusion, the static magnetic field strength (B0) is varied linearly by a pulsed field gradient. Since precession is proportional to the magnet strength, the protons begin to precess at different rates, resulting in dispersion of the phase and signal loss. Another gradient pulse is applied in the same magnitude but with opposite direction to refocus or rephase the spins. The refocusing will not be perfect for protons that have moved during the time interval between the pulses, and the signal measured by the MRI machine is reduced. This "field gradient pulse" method was initially devised for NMR by Stejskal and Tanner, who derived the reduction in signal due to the application of the pulse gradient related to the amount of diffusion that is occurring through the following equation:
{\displaystyle {\frac {S(TE)}{S_{0}}}=\exp \left[-\gamma ^{2}G^{2}\delta ^{2}\left(\Delta -{\frac {\delta }{3}}\right)D\right]}
where {\displaystyle S_{0}} is the signal intensity without the diffusion weighting, {\displaystyle S} is the signal with the gradient, {\displaystyle \gamma } is the gyromagnetic ratio, {\displaystyle G} is the strength of the gradient pulse, {\displaystyle \delta } is the duration of the pulse, {\displaystyle \Delta } is the time between the two pulses, and finally, {\displaystyle D} is the diffusion coefficient.
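As a hedged numerical sketch, the attenuation can be evaluated directly from this equation; the parameter values below are merely illustrative of a clinical scanner and are not taken from the source.

```python
import numpy as np

gamma = 2.675e8   # 1H gyromagnetic ratio, rad s^-1 T^-1
G = 40e-3         # gradient strength, T/m
delta = 20e-3     # gradient pulse duration, s
Delta = 40e-3     # separation of the two pulses, s
D = 3.0e-9        # free-water diffusion coefficient, m^2/s

# Stejskal-Tanner: the whole gradient prefactor is the "b factor" below.
b = gamma**2 * G**2 * delta**2 * (Delta - delta / 3.0)
attenuation = np.exp(-b * D)          # S(TE)/S0

print(b * 1e-6, attenuation)          # ~1527 s/mm^2 and ~0.01 here
```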
In order to localize this signal attenuation to get images of diffusion one has to combine the pulsed magnetic field gradient pulses used for MRI (aimed at localization of the signal, but those gradient pulses are too weak to produce a diffusion related attenuation) with additional "motion-probing" gradient pulses, according to the Stejskal and Tanner method. This combination is not trivial, as cross-terms arise between all gradient pulses. The equation set by Stejskal and Tanner then becomes inaccurate and the signal attenuation must be calculated, either analytically or numerically, integrating all gradient pulses present in the MRI sequence and their interactions. The result quickly becomes very complex given the many pulses present in the MRI sequence, and as a simplification, Le Bihan suggested gathering all the gradient terms in a "b factor" (which depends only on the acquisition parameters) so that the signal attenuation simply becomes:
{\displaystyle {\frac {S(TE)}{S_{0}}}=\exp(-b\cdot ADC)}
Also, the diffusion coefficient, {\displaystyle D}, is replaced by an apparent diffusion coefficient, {\displaystyle ADC}, to indicate that the diffusion process is not free in tissues, but hindered and modulated by many mechanisms (restriction in closed spaces, tortuosity around obstacles, etc.) and that other sources of IntraVoxel Incoherent Motion (IVIM) such as blood flow in small vessels or cerebrospinal fluid in ventricles also contribute to the signal attenuation.
At the end, images are "weighted" by the diffusion process: In those diffusion-weighted images (DWI) the signal is more attenuated the faster the diffusion and the larger the b factor is. However, those diffusion-weighted images are still also sensitive to T1 and T2 relaxivity contrast, which can sometimes be confusing. It is possible to calculate "pure" diffusion maps (or more exactly ADC maps where the ADC is the sole source of contrast) by collecting images with at least 2 different values,
{\displaystyle b_{1}} and {\displaystyle b_{2}}, of the b factor according to:
{\displaystyle \mathrm {ADC} (x,y,z)=\ln[S_{2}(x,y,z)/S_{1}(x,y,z)]/(b_{1}-b_{2})}
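A minimal sketch of this two-point ADC computation on synthetic signal values (all numbers invented for illustration):

```python
import numpy as np

b1, b2 = 0.0, 1000.0                  # b-values, s/mm^2
true_adc = 0.8e-3                     # mm^2/s, typical of brain parenchyma

S1 = 1000.0 * np.exp(-b1 * true_adc)  # signal of the b1 (here b=0) image
S2 = 1000.0 * np.exp(-b2 * true_adc)  # signal of the b2 image

adc = np.log(S2 / S1) / (b1 - b2)     # the two-point formula above
assert np.isclose(adc, true_adc)      # recovers 0.8e-3 exactly here
```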
Although this ADC concept has been extremely successful, especially for clinical applications, it has been challenged recently, as new, more comprehensive models of diffusion in biological tissues have been introduced. Those models have been made necessary, as diffusion in tissues is not free. In this condition, the ADC seems to depend on the choice of b values (the ADC seems to decrease when using larger b values), as the plot of ln(S/So) is not linear with the b factor, as expected from the above equations. This deviation from a free diffusion behavior is what makes diffusion MRI so successful, as the ADC is very sensitive to changes in tissue microstructure. Among the most popular models are the biexponential model, which assumes the presence of 2 water pools in slow or intermediate exchange, and the cumulant-expansion (also called kurtosis) model, which does not necessarily require the presence of 2 pools.
=== Diffusion model ===
Given the concentration {\displaystyle \rho } and flux {\displaystyle J}, Fick's first law gives a relationship between the flux and the concentration gradient:
{\displaystyle J(x,t)=-D\nabla \rho (x,t)}
where D is the diffusion coefficient. Then, given conservation of mass, the continuity equation relates the time derivative of the concentration with the divergence of the flux:
{\displaystyle {\frac {\partial \rho (x,t)}{\partial t}}=-\nabla \cdot J(x,t)}
Putting the two together, we get the diffusion equation:
{\displaystyle {\frac {\partial \rho (x,t)}{\partial t}}=D\nabla ^{2}\rho (x,t).}
=== Magnetization dynamics ===
With no diffusion present, the change in nuclear magnetization over time is given by the classical Bloch equation
{\displaystyle {\frac {d{\vec {M}}}{dt}}=\gamma {\vec {M}}\times {\vec {B}}-{\frac {M_{x}{\vec {i}}+M_{y}{\vec {j}}}{T_{2}}}-{\frac {(M_{z}-M_{0}){\vec {k}}}{T_{1}}}}
which has terms for precession, T2 relaxation, and T1 relaxation.
In 1956, H.C. Torrey mathematically showed how the Bloch equations for magnetization would change with the addition of diffusion. Torrey modified Bloch's original description of transverse magnetization to include diffusion terms and the application of a spatially varying gradient. Since the magnetization
{\displaystyle M} is a vector, there are 3 diffusion equations, one for each dimension. The Bloch-Torrey equation is:
{\displaystyle {\frac {d{\vec {M}}}{dt}}=\gamma {\vec {M}}\times {\vec {B}}-{\frac {M_{x}{\vec {i}}+M_{y}{\vec {j}}}{T_{2}}}-{\frac {(M_{z}-M_{0}){\vec {k}}}{T_{1}}}+\nabla \cdot {\vec {D}}\nabla {\vec {M}}}
where {\displaystyle {\vec {D}}} is now the diffusion tensor.
For the simplest case where the diffusion is isotropic the diffusion tensor is a multiple of the identity:
{\displaystyle {\vec {D}}=D\cdot {\vec {I}}=D\cdot {\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}},}
then the Bloch-Torrey equation will have the solution
{\displaystyle {M}={M}_{\text{bloch}}e^{-{\frac {1}{3}}\gamma ^{2}G^{2}t^{3}D}\sim e^{-bD_{0}}}
The exponential term will be referred to as the attenuation {\displaystyle A}. Anisotropic diffusion will have a similar solution for the diffusion tensor, except that what will be measured is the apparent diffusion coefficient (ADC). In general, the attenuation is:
{\displaystyle A=e^{-\sum _{i,j}b_{ij}D_{ij}}}
where the {\displaystyle b_{ij}} terms incorporate the gradient fields {\displaystyle G_{x}}, {\displaystyle G_{y}}, and {\displaystyle G_{z}}.
=== Grayscale ===
The standard grayscale of DWI images is to represent increased diffusion restriction as brighter.
== ADC image ==
An apparent diffusion coefficient (ADC) image, or an ADC map, is an MRI image that more specifically shows diffusion than conventional DWI, by eliminating the T2 weighting that is otherwise inherent to conventional DWI. ADC imaging does so by acquiring multiple conventional DWI images with different amounts of DWI weighting, and the change in signal is proportional to the rate of diffusion. Contrary to DWI images, the standard grayscale of ADC images is to represent a smaller magnitude of diffusion as darker.
Cerebral infarction leads to diffusion restriction, and the difference between images with various DWI weighting will therefore be minor, leading to an ADC image with low signal in the infarcted area. A decreased ADC may be detected minutes after a cerebral infarction. The high signal of infarcted tissue on conventional DWI is a result of its partial T2 weighting.
== Diffusion tensor imaging ==
Diffusion tensor imaging (DTI) is a magnetic resonance imaging technique that enables the measurement of the restricted diffusion of water in tissue in order to produce neural tract images instead of using this data solely for the purpose of assigning contrast or colors to pixels in a cross-sectional image. It also provides useful structural information about muscle—including heart muscle—as well as other tissues such as the prostate.
In DTI, each voxel has one or more pairs of parameters: a rate of diffusion and a preferred direction of diffusion—described in terms of three-dimensional space—for which that parameter is valid. The properties of each voxel of a single DTI image are usually calculated by vector or tensor math from six or more different diffusion weighted acquisitions, each obtained with a different orientation of the diffusion sensitizing gradients. In some methods, hundreds of measurements—each making up a complete image—are made to generate a single resulting calculated image data set. The higher information content of a DTI voxel makes it extremely sensitive to subtle pathology in the brain. In addition the directional information can be exploited at a higher level of structure to select and follow neural tracts through the brain—a process called tractography.
A more precise statement of the image acquisition process is that the image-intensities at each position are attenuated, depending on the strength (b-value) and direction of the so-called magnetic diffusion gradient, as well as on the local microstructure in which the water molecules diffuse. The more attenuated the image is at a given position, the greater diffusion there is in the direction of the diffusion gradient. In order to measure the tissue's complete diffusion profile, one needs to repeat the MR scans, applying different directions (and possibly strengths) of the diffusion gradient for each scan.
=== Mathematical foundation—tensors ===
Diffusion MRI relies on the mathematics and physical interpretations of the geometric quantities known as tensors. Only a special case of the general mathematical notion is relevant to imaging, which is based on the concept of a symmetric matrix. Diffusion itself is tensorial, but in many cases the objective is not really about trying to study brain diffusion per se, but rather just trying to take advantage of diffusion anisotropy in white matter for the purpose of finding the orientation of the axons and the magnitude or degree of anisotropy. Tensors have a real, physical existence in a material or tissue so that they do not move when the coordinate system used to describe them is rotated. There are numerous different possible representations of a tensor (of rank 2), but among these, this discussion focuses on the ellipsoid because of its physical relevance to diffusion and because of its historical significance in the development of diffusion anisotropy imaging in MRI.
The following matrix displays the components of the diffusion tensor:
{\displaystyle {\bar {D}}={\begin{vmatrix}D_{\color {red}xx}&D_{xy}&D_{xz}\\D_{xy}&D_{\color {red}yy}&D_{yz}\\D_{xz}&D_{yz}&D_{\color {red}zz}\end{vmatrix}}}
The same matrix of numbers has a simultaneous second use: it describes the shape and orientation of an ellipsoid. The same numbers can also be used, in a third way, in matrix mathematics to sort out eigenvectors and eigenvalues, as explained below.
=== Physical tensors ===
The idea of a tensor in physical science evolved from attempts to describe physical quantities. The first properties tensors were applied to were those that can be described by a single number, such as temperature. Properties that can be described this way are called scalars; these can be considered tensors of rank 0, or 0th-order tensors. Tensors can also be used to describe quantities that have directionality, such as mechanical force. These quantities require specification of both magnitude and direction, and are often represented with a vector. A three-dimensional vector can be described with three components: its projection on the x, y, and z axes. Vectors of this sort can be considered tensors of rank 1, or 1st-order tensors.
A tensor is often a physical or biophysical property that determines the relationship between two vectors. When a force is applied to an object, movement can result. If the movement is in a single direction, the transformation can be described using a vector—a tensor of rank 1. However, in a tissue, diffusion leads to movement of water molecules along trajectories that proceed along multiple directions over time, leading to a complex projection onto the Cartesian axes. This pattern is reproducible if the same conditions and forces are applied to the same tissue in the same way. If there is an internal anisotropic organization of the tissue that constrains diffusion, then this fact will be reflected in the pattern of diffusion. The relationship between the properties of driving force that generate diffusion of the water molecules and the resulting pattern of their movement in the tissue can be described by a tensor. The collection of molecular displacements of this physical property can be described with nine components—each one associated with a pair of axes xx, yy, zz, xy, yx, xz, zx, yz, zy. These can be written as a matrix similar to the one at the start of this section.
Diffusion from a point source in the anisotropic medium of white matter behaves in a similar fashion. The first pulse of the Stejskal Tanner diffusion gradient effectively labels some water molecules and the second pulse effectively shows their displacement due to diffusion. Each gradient direction applied measures the movement along the direction of that gradient. Six or more gradients are summed to get all the measurements needed to fill in the matrix, assuming it is symmetric above and below the diagonal (red subscripts).
In 1848, Henri Hureau de Sénarmont applied a heated point to a polished crystal surface that had been coated with wax. In some materials that had "isotropic" structure, a ring of melt would spread across the surface in a circle. In anisotropic crystals the spread took the form of an ellipse. In three dimensions this spread is an ellipsoid. As Adolf Fick showed in the 1850s, diffusion exhibits many of the same patterns as those seen in the transfer of heat.
=== Mathematics of ellipsoids ===
At this point, it is helpful to consider the mathematics of ellipsoids. An ellipsoid can be described by the formula:
{\displaystyle ax^{2}+by^{2}+cz^{2}=1.}
This equation describes a quadric surface. The relative values of a, b, and c determine if the quadric describes an ellipsoid or a hyperboloid.
As it turns out, three more components can be added as follows:
{\displaystyle ax^{2}+by^{2}+cz^{2}+dyz+ezx+fxy=1.}
Many combinations of a, b, c, d, e, and f still describe ellipsoids, but the additional components (d, e, f) describe the rotation of the ellipsoid relative to the orthogonal axes of the Cartesian coordinate system. These six variables can be represented by a matrix similar to the tensor matrix defined at the start of this section (since diffusion is symmetric, we only need six instead of nine components—the components below the diagonal elements of the matrix are the same as the components above the diagonal). This is what is meant when it is stated that a second-order symmetric tensor can be represented by an ellipsoid—if the diffusion values of the six terms of the quadric ellipsoid are placed into the matrix, this generates an ellipsoid angled off the orthogonal grid. Its shape will be more elongated if the relative anisotropy is high.
Mathematically, the diffusion matrix {\displaystyle {\bar {D}}} is a covariance matrix. The ellipsoid that shows the pattern of dispersion is given by the equation {\displaystyle {\vec {v}}^{T}{\bar {D}}^{-1}{\vec {v}}=1}, where {\displaystyle {\vec {v}}} is the displacement, the column vector {\displaystyle (x,y,z)^{T}}.
When the ellipsoid/tensor is represented by a matrix, we can apply a useful technique from standard matrix mathematics and linear algebra—that is to "diagonalize" the matrix. This has two important meanings in imaging. The idea is that there are two equivalent ellipsoids—of identical shape but with different size and orientation. The first one is the measured diffusion ellipsoid sitting at an angle determined by the axons, and the second one is perfectly aligned with the three Cartesian axes. The term "diagonalize" refers to the three components of the matrix along a diagonal from upper left to lower right (the components with red subscripts in the matrix at the start of this section). The variables {\displaystyle ax^{2}}, {\displaystyle by^{2}}, and {\displaystyle cz^{2}} are along the diagonal (red subscripts), but the variables d, e and f are "off diagonal". It then becomes possible to do a vector processing step in which we rewrite our matrix and replace it with a new matrix multiplied by three different vectors of unit length (length=1.0). The matrix is diagonalized because the off-diagonal components are all now zero. The rotation angles required to get to this equivalent position now appear in the three vectors and can be read out as the x, y, and z components of each of them. Those three vectors are called "eigenvectors" or characteristic vectors. They contain the orientation information of the original ellipsoid. The three axes of the ellipsoid are now directly along the main orthogonal axes of the coordinate system so we can easily infer their lengths. These lengths are the eigenvalues or characteristic values.
Diagonalization of a matrix is done by finding a second matrix that it can be multiplied with followed by multiplication by the inverse of the second matrix—wherein the result is a new matrix in which three diagonal (xx, yy, zz) components have numbers in them but the off-diagonal components (xy, yz, zx) are 0. The second matrix provides eigenvector information.
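A hedged sketch of this diagonalization using numpy; the tensor values are invented, and numpy.linalg.eigh returns the eigenvalues in ascending order:

```python
import numpy as np

# A symmetric diffusion tensor with invented entries (units: mm^2/s).
D = np.array([[1.7e-3, 0.2e-3, 0.1e-3],
              [0.2e-3, 0.5e-3, 0.0e-3],
              [0.1e-3, 0.0e-3, 0.4e-3]])

evals, evecs = np.linalg.eigh(D)   # eigenvalues ascending, orthonormal columns
l3, l2, l1 = evals                 # relabel so that l1 >= l2 >= l3
e1 = evecs[:, 2]                   # principal eigenvector: the fiber direction

# Diagonalized form: rotating into the eigenvector frame zeroes the
# off-diagonal components and leaves the eigenvalues on the diagonal.
assert np.allclose(evecs.T @ D @ evecs, np.diag(evals))
print(l1, l2, l3, e1)
```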
=== Measures of anisotropy and diffusivity ===
In present-day clinical neurology, various brain pathologies may be best detected by looking at particular measures of anisotropy and diffusivity. The underlying physical process of diffusion causes a group of water molecules to move out from a central point, and gradually reach the surface of an ellipsoid if the medium is anisotropic (it would be the surface of a sphere for an isotropic medium). The ellipsoid formalism functions also as a mathematical method of organizing tensor data. Measurement of an ellipsoid tensor further permits a retrospective analysis, to gather information about the process of diffusion in each voxel of the tissue.
In an isotropic medium such as cerebrospinal fluid, water molecules are moving due to diffusion and they move at equal rates in all directions. By knowing the detailed effects of diffusion gradients we can generate a formula that allows us to convert the signal attenuation of an MRI voxel into a numerical measure of diffusion—the diffusion coefficient D. When various barriers and restricting factors such as cell membranes and microtubules interfere with the free diffusion, we are measuring an "apparent diffusion coefficient", or ADC, because the measurement misses all the local effects and treats the attenuation as if all the movement rates were solely due to Brownian motion. The ADC in anisotropic tissue varies depending on the direction in which it is measured. Diffusion is fast along the length of (parallel to) an axon, and slower perpendicularly across it.
Once we have measured the voxel from six or more directions and corrected for attenuations due to T2 and T1 effects, we can use information from our calculated ellipsoid tensor to describe what is happening in the voxel. If you consider an ellipsoid sitting at an angle in a Cartesian grid then you can consider the projection of that ellipse onto the three axes. The three projections can give you the ADC along each of the three axes ADCx, ADCy, ADCz. This leads to the idea of describing the average diffusivity in the voxel which will simply be
{\displaystyle (ADC_{x}+ADC_{y}+ADC_{z})/3=ADC_{i}.}
We use the i subscript to signify that this is what the isotropic diffusion coefficient would be with the effects of anisotropy averaged out.
The ellipsoid itself has a principal long axis and then two more small axes that describe its width and depth. All three of these are perpendicular to each other and cross at the center point of the ellipsoid. We call the axes in this setting eigenvectors and the measures of their lengths eigenvalues. The lengths are symbolized by the Greek letter λ. The long one pointing along the axon direction will be λ1 and the two small axes will have lengths λ2 and λ3. In the setting of the DTI tensor ellipsoid, we can consider each of these as a measure of the diffusivity along each of the three primary axes of the ellipsoid. This is a little different from the ADC since that was a projection on the axis, while λ is an actual measurement of the ellipsoid we have calculated.
The diffusivity along the principal axis, λ1 is also called the longitudinal diffusivity or the axial diffusivity or even the parallel diffusivity λ∥. Historically, this is closest to what Richards originally measured with the vector length in 1991. The diffusivities in the two minor axes are often averaged to produce a measure of radial diffusivity
{\displaystyle \lambda _{\perp }=(\lambda _{2}+\lambda _{3})/2.}
This quantity is an assessment of the degree of restriction due to membranes and other effects and proves to be a sensitive measure of degenerative pathology in some neurological conditions. It can also be called the perpendicular diffusivity ({\displaystyle \lambda _{\perp }}).
Another commonly used measure that summarizes the total diffusivity is the trace—which is the sum of the three eigenvalues,
{\displaystyle \mathrm {tr} (\Lambda )=\lambda _{1}+\lambda _{2}+\lambda _{3}}
where {\displaystyle \Lambda } is a diagonal matrix with eigenvalues {\displaystyle \lambda _{1}}, {\displaystyle \lambda _{2}} and {\displaystyle \lambda _{3}} on its diagonal.
If we divide this sum by three we have the mean diffusivity,
{\displaystyle \mathrm {MD} =(\lambda _{1}+\lambda _{2}+\lambda _{3})/3,}
which equals ADCi since
{\displaystyle {\begin{aligned}\mathrm {tr} (\Lambda )/3&=\mathrm {tr} (V^{-1}V\Lambda )/3\\&=\mathrm {tr} (V\Lambda V^{-1})/3\\&=\mathrm {tr} (D)/3\\&=ADC_{i}\end{aligned}}}
where {\displaystyle V} is the matrix of eigenvectors and {\displaystyle D} is the diffusion tensor.
Aside from describing the amount of diffusion, it is often important to describe the relative degree of anisotropy in a voxel. At one extreme would be the sphere of isotropic diffusion and at the other extreme would be a cigar or pencil shaped very thin prolate spheroid. The simplest measure is obtained by dividing the longest axis of the ellipsoid by the shortest = (λ1/λ3). However, this proves to be very susceptible to measurement noise, so increasingly complex measures were developed to capture the measure while minimizing the noise. An important element of these calculations is the sum of squares of the diffusivity differences = (λ1 − λ2)² + (λ1 − λ3)² + (λ2 − λ3)². We use the square root of the sum of squares to obtain a sort of weighted average—dominated by the largest component. One objective is to keep the number near 0 if the voxel is spherical but near 1 if it is elongate. This leads to the fractional anisotropy or FA, which is the square root of the sum of squares (SRSS) of the diffusivity differences, divided by the SRSS of the diffusivities. When the second and third axes are small relative to the principal axis, the number in the numerator is almost equal to the number in the denominator. We also multiply by {\displaystyle 1/{\sqrt {2}}} so that FA has a maximum value of 1. The whole formula for FA looks like this:
{\displaystyle \mathrm {FA} ={\frac {\sqrt {3((\lambda _{1}-\operatorname {E} [\lambda ])^{2}+(\lambda _{2}-\operatorname {E} [\lambda ])^{2}+(\lambda _{3}-\operatorname {E} [\lambda ])^{2})}}{\sqrt {2(\lambda _{1}^{2}+\lambda _{2}^{2}+\lambda _{3}^{2})}}}}
where {\textstyle \operatorname {E} [\lambda ]=(\lambda _{1}+\lambda _{2}+\lambda _{3})/3\,.}
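A small sketch implementing the FA formula above, with sanity checks at the two extremes (test values ours):

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """FA from the three eigenvalues, following the formula above."""
    mean = (l1 + l2 + l3) / 3.0
    num = np.sqrt(3.0 * ((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2))
    den = np.sqrt(2.0 * (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return num / den

print(fractional_anisotropy(1.0, 1.0, 1.0))    # isotropic sphere: FA = 0
print(fractional_anisotropy(1.0, 1e-4, 1e-4))  # thin "cigar": FA close to 1
```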
The fractional anisotropy can also be separated into linear, planar, and spherical measures depending on the "shape" of the diffusion ellipsoid. For example, a "cigar" shaped prolate ellipsoid indicates a strongly linear anisotropy, a "flying saucer" or oblate spheroid represents diffusion in a plane, and a sphere is indicative of isotropic diffusion, equal in all directions. If the eigenvalues of the diffusion vector are sorted such that
{\displaystyle \lambda _{1}\geq \lambda _{2}\geq \lambda _{3}\geq 0}, then the measures can be calculated as follows:
For the linear case, where {\displaystyle \lambda _{1}\gg \lambda _{2}\simeq \lambda _{3}},
{\displaystyle C_{l}={\frac {\lambda _{1}-\lambda _{2}}{\lambda _{1}+\lambda _{2}+\lambda _{3}}}}
For the planar case, where {\displaystyle \lambda _{1}\simeq \lambda _{2}\gg \lambda _{3}},
{\displaystyle C_{p}={\frac {2(\lambda _{2}-\lambda _{3})}{\lambda _{1}+\lambda _{2}+\lambda _{3}}}}
For the spherical case, where {\displaystyle \lambda _{1}\simeq \lambda _{2}\simeq \lambda _{3}},
{\displaystyle C_{s}={\frac {3\lambda _{3}}{\lambda _{1}+\lambda _{2}+\lambda _{3}}}}
Each measure lies between 0 and 1 and they sum to unity. An additional anisotropy measure can be used to describe the deviation from the spherical case:
{\displaystyle C_{a}=C_{l}+C_{p}=1-C_{s}={\frac {\lambda _{1}+\lambda _{2}-2\lambda _{3}}{\lambda _{1}+\lambda _{2}+\lambda _{3}}}}
There are other metrics of anisotropy used, including the relative anisotropy (RA):
{\displaystyle \mathrm {RA} ={\frac {\sqrt {(\lambda _{1}-\operatorname {E} [\lambda ])^{2}+(\lambda _{2}-\operatorname {E} [\lambda ])^{2}+(\lambda _{3}-\operatorname {E} [\lambda ])^{2}}}{{\sqrt {3}}\operatorname {E} [\lambda ]}}}
and the volume ratio (VR):
{\displaystyle \mathrm {VR} ={\frac {\lambda _{1}\lambda _{2}\lambda _{3}}{\operatorname {E} [\lambda ]^{3}}}}
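A hedged sketch collecting these shape measures in one helper; the eigenvalues are assumed sorted λ1 ≥ λ2 ≥ λ3 ≥ 0 and the example values are invented:

```python
import numpy as np

def shape_measures(l1, l2, l3):
    """C_l, C_p, C_s (often called the Westin measures) plus RA and VR."""
    s = l1 + l2 + l3
    mean = s / 3.0
    cl = (l1 - l2) / s                        # linear measure
    cp = 2.0 * (l2 - l3) / s                  # planar measure
    cs = 3.0 * l3 / s                         # spherical; cl + cp + cs == 1
    ra = (np.sqrt((l1 - mean)**2 + (l2 - mean)**2 + (l3 - mean)**2)
          / (np.sqrt(3.0) * mean))            # relative anisotropy
    vr = (l1 * l2 * l3) / mean**3             # volume ratio
    return cl, cp, cs, ra, vr

cl, cp, cs, ra, vr = shape_measures(1.7e-3, 0.3e-3, 0.2e-3)
assert np.isclose(cl + cp + cs, 1.0)
print(cl, cp, cs, ra, vr)
```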
== Applications ==
The most common application of conventional DWI (without DTI) is in acute brain ischemia. DWI directly visualizes the ischemic necrosis in cerebral infarction in the form of a cytotoxic edema, appearing as a high DWI signal within minutes of arterial occlusion. When DWI is combined with perfusion MRI, which detects the hypoperfused territory, the mismatch between the perfusion deficit and the diffusion lesion (the infarcted core) provides an estimate of the salvageable penumbra.
Another application area of DWI is in oncology. Tumors are in many instances highly cellular, giving restricted diffusion of water, and therefore appear with a relatively high signal intensity in DWI. DWI is commonly used to detect and stage tumors, and also to monitor tumor response to treatment over time. DWI can also be collected to visualize the whole body using a technique called 'diffusion-weighted whole-body imaging with background body signal suppression' (DWIBS). Some more specialized diffusion MRI techniques such as diffusion kurtosis imaging (DKI) have also been shown to predict the response of cancer patients to chemotherapy treatment.
The principal application is in the imaging of white matter where the location, orientation, and anisotropy of the tracts can be measured. The architecture of the axons in parallel bundles, and their myelin sheaths, facilitate the diffusion of the water molecules preferentially along their main direction. Such preferentially oriented diffusion is called anisotropic diffusion.
The imaging of this property is an extension of diffusion MRI. If a series of diffusion gradients (i.e. magnetic field variations in the MRI magnet) are applied that can determine at least 3 directional vectors (use of 6 different gradients is the minimum and additional gradients improve the accuracy for "off-diagonal" information), it is possible to calculate, for each voxel, a tensor (i.e. a symmetric positive definite 3×3 matrix) that describes the 3-dimensional shape of diffusion. The fiber direction is indicated by the tensor's main eigenvector. This vector can be color-coded, yielding a cartography of the tracts' position and direction (red for left-right, blue for superior-inferior, and green for anterior-posterior). The brightness is weighted by the fractional anisotropy which is a scalar measure of the degree of anisotropy in a given voxel. Mean diffusivity (MD) or trace is a scalar measure of the total diffusion within a voxel. These measures are commonly used clinically to localize white matter lesions that do not show up on other forms of clinical MRI.
Applications in the brain:
Tract-specific localization of white matter lesions such as trauma and in defining the severity of diffuse traumatic brain injury. The localization of tumors in relation to the white matter tracts (infiltration, deflection), has been one of the most important initial applications. In surgical planning for some types of brain tumors, surgery is aided by knowing the proximity and relative position of the corticospinal tract and a tumor.
Diffusion tensor imaging data can be used to perform tractography within white matter. Fiber tracking algorithms can be used to track a fiber along its whole length (e.g. the corticospinal tract, through which the motor information transit from the motor cortex to the spinal cord and the peripheral nerves). Tractography is a useful tool for measuring deficits in white matter, such as in aging. Its estimation of fiber orientation and strength is increasingly accurate, and it has widespread potential implications in the fields of cognitive neuroscience and neurobiology.
The use of DTI for the assessment of white matter in development, pathology and degeneration has been the focus of over 2,500 research publications since 2005. It promises to be very helpful in distinguishing Alzheimer's disease from other types of dementia. Applications in brain research include the investigation of neural networks in vivo, as well as in connectomics.
Applications for peripheral nerves:
Brachial plexus: DTI can differentiate normal nerves (as shown in tractograms of the spinal cord and brachial plexus) from traumatically injured nerve roots.
Cubital tunnel syndrome: metrics derived from DTI (FA and RD) can differentiate asymptomatic adults from those with compression of the ulnar nerve at the elbow.
Carpal Tunnel Syndrome: Metrics derived from DTI (lower FA and MD) differentiate healthy adults from those with carpal tunnel syndrome
== Research ==
Early in the development of DTI based tractography, a number of researchers pointed out a flaw in the diffusion tensor model. The tensor analysis assumes that there is a single ellipsoid in each imaging voxel—as if all of the axons traveling through a voxel traveled in exactly the same direction. This is often true, but it can be estimated that in more than 30% of the voxels in a standard resolution brain image, there are at least two different neural tracts traveling in different directions that pass through each other. In the classic diffusion ellipsoid tensor model, the information from the crossing tract appears as noise or unexplained decreased anisotropy in a given voxel.
David Tuch was among the first to describe a solution to this problem. The idea is best understood by conceptually placing a kind of geodesic dome around each image voxel. This icosahedron provides a mathematical basis for passing a large number of evenly spaced gradient trajectories through the voxel—each coinciding with one of the apices of the icosahedron. We can then look into the voxel from a large number of different directions (typically 40 or more). We use "n-tuple" tessellations to add more evenly spaced apices to the original icosahedron (20 faces)—an idea that also had its precedents in paleomagnetism research several decades earlier. We want to know which direction lines turn up the maximum anisotropic diffusion measures. If there is a single tract, there will be only two maxima, pointing in opposite directions. If two tracts cross in the voxel, there will be two pairs of maxima, and so on. We can still use tensor mathematics to use the maxima to select groups of gradients to package into several different tensor ellipsoids in the same voxel, or use more complex higher-rank tensor analyses, or we can do a true "model free" analysis that picks the maxima, and then continue to do the tractography.
The Q-Ball method of tractography is an implementation in which David Tuch provides a mathematical alternative to the tensor model. Instead of forcing the diffusion anisotropy data into a group of tensors, the mathematics deploys probability distributions together with a piece of classic geometric tomography and vector mathematics developed nearly 100 years ago: the Funk–Radon transform.
Note that there is ongoing debate about the best way to preprocess DW-MRI. Several in-vivo studies have shown that the choice of software and of the correction functions applied (directed at correcting artefacts arising from e.g. motion and eddy currents) has a meaningful impact on DTI parameter estimates from tissue. Consequently, this is the topic of a multinational study directed by the diffusion study group of the ISMRM.
=== Summary ===
For DTI, it is generally possible to use linear algebra, matrix mathematics and vector mathematics to analyze the tensor data.
In some cases, the full set of tensor properties is of interest, but for tractography it is usually necessary to know only the magnitude and orientation of the primary axis or vector. This primary axis, the one with the greatest length, corresponds to the largest eigenvalue, and its orientation is encoded in the matched eigenvector. Only this axis is needed because it is assumed that the largest eigenvalue is aligned with the main axon direction, which is what tractography follows.
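A minimal sketch of this step, assuming NumPy and a symmetric 3×3 tensor per voxel (values are illustrative):

```python
# Sketch: extract the principal eigenvector (assumed fiber direction) and the
# mean diffusivity from one voxel's diffusion tensor.
import numpy as np

D = np.array([[1.7, 0.1, 0.0],   # a symmetric 3x3 diffusion tensor (units arbitrary)
              [0.1, 0.3, 0.0],
              [0.0, 0.0, 0.2]])

vals, vecs = np.linalg.eigh(D)        # eigh: symmetric solver, eigenvalues ascending
principal_direction = vecs[:, -1]     # eigenvector matched to the largest eigenvalue
mean_diffusivity = vals.mean()        # MD, i.e. trace(D)/3
print(principal_direction, mean_diffusivity)
```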
== See also ==
Connectogram
Connectome
Tractography
== Explanatory notes ==
== References ==
== External links ==
PNRC: About Diffusion MRI
White Matter Atlas
In differential geometry and general relativity, a bitensor (or bi-tensor) is a tensorial object that depends on two points in a manifold, as opposed to ordinary tensors which depend on a single point. Bitensors provide a framework for describing relationships between different points in spacetime and are used in the study of various phenomena in curved spacetime.
== Definition ==
A bitensor is a tensorial object that depends on two points in a manifold, rather than on a single point as ordinary tensors do.
A bitensor field $B$ can be formally defined as a map from the product manifold to an appropriate vector space, $B : M \times M \to V$, where $M$ is a smooth manifold and $V$ is the vector space corresponding to the tensor space being considered.
In the language of fiber bundles, a bitensor of type $(r,s,r',s')$ is defined as a section of the exterior tensor product bundle $T_s^r M \boxtimes T_{s'}^{r'} M$, where $T_s^r M$ denotes the tensor bundle of rank $(r,s)$ and $\boxtimes$ represents the exterior tensor product:
$$B \in \Gamma\left(T_s^r M \boxtimes T_{s'}^{r'} M\right),$$
where $\Gamma$ denotes the space of sections.
The exterior tensor product bundle is constructed as
$$\mathcal{V}_1 \boxtimes \mathcal{V}_2 = \mathrm{pr}_1^* \mathcal{V}_1 \otimes \mathrm{pr}_2^* \mathcal{V}_2,$$
where $\mathrm{pr}_i$ are the projection operators onto the respective factors of the product manifold $M \times M$, and $\mathrm{pr}_i^*$ denotes the pullback of the respective bundles.
In coordinate notation, a bitensor $T$ with components $T_{\alpha\beta'\ldots}^{\mu\nu'\ldots}(x,y)$ has indices associated with two different points $x$ and $y$ in the manifold. By convention, unprimed indices (such as $\mu$, $\alpha$) refer to the first point, while primed indices (such as $\nu'$, $\beta'$) refer to the second point. The simplest example of a bitensor is a biscalar field, which is a scalar function of two points. Applications include parallel transport, heat kernels, and various Green's functions employed in quantum field theory in curved spacetime.
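As an illustration of one such application (a standard example in the literature, not taken from the text above), the parallel propagator is a bitensor with one index at each point; it carries a vector at $y$ to a vector at $x$ by parallel transport along a chosen geodesic:

```latex
% The parallel propagator g^{\alpha}{}_{\beta'}(x,y): one index at x, one at y.
% It maps a vector at y to the parallel-transported vector at x.
V^{\alpha}(x) \;=\; g^{\alpha}{}_{\beta'}(x,y)\, V^{\beta'}(y)
```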
== History ==
The concept of bitensors was first formally developed by mathematician Harold Stanley Ruse in his 1931 paper An Absolute Partial Differential Calculus, published in the Quarterly Journal of Mathematics. Ruse introduced bitensors as a generalization of tensor calculus to functions of two sets of variables, drawing an analogy with partial differentiation in elementary calculus. He developed the formalism for bitensor transformations, covariant derivatives, and scalar connections, establishing the foundation for what he termed an "absolute partial differential calculus."
== See also ==
Parallel transport
Pullback
Propagator
Riemann curvature tensor
Stokes flow
Synge's world function
== References ==
In machine learning, the term tensor informally refers to two different concepts (i) a way of organizing data and (ii) a multilinear (tensor) transformation. Data may be organized in a multidimensional array (M-way array), informally referred to as a "data tensor"; however, in the strict mathematical sense, a tensor is a multilinear mapping over a set of domain vector spaces to a range vector space. Observations, such as images, movies, volumes, sounds, and relationships among words and concepts, stored in an M-way array ("data tensor"), may be analyzed either by artificial neural networks or tensor methods.
Tensor decomposition factorizes data tensors into smaller tensors. Operations on data tensors can be expressed in terms of matrix multiplication and the Kronecker product. The computation of gradients, a crucial aspect of backpropagation, can be performed using software libraries such as PyTorch and TensorFlow.
Computations are often performed on graphics processing units (GPUs) using CUDA, and on dedicated hardware such as Google's Tensor Processing Unit or Nvidia's Tensor core. These developments have greatly accelerated neural network architectures, and increased the size and complexity of models that can be trained.
== History ==
A tensor is by definition a multilinear map. In mathematics, this may express a multilinear relationship between sets of algebraic objects. In physics, tensor fields, considered as tensors at each point in space, are useful in expressing mechanics such as stress or elasticity. In machine learning, the exact use of tensors depends on the statistical approach being used.
By 2001, the fields of signal processing and statistics were making use of tensor methods. Pierre Comon surveys the early adoption of tensor methods in telecommunications, radio surveillance, chemometrics and sensor processing. Linear tensor rank methods (such as CANDECOMP/PARAFAC) analyzed M-way arrays ("data tensors") composed of higher-order statistics that were employed in blind source separation problems to compute a linear model of the data. He noted several early limitations in determining the tensor rank and in efficient tensor rank decomposition.
In the early 2000s, multilinear tensor methods crossed over into computer vision, computer graphics and machine learning with papers by Vasilescu, alone or in collaboration with Terzopoulos, such as Human Motion Signatures, TensorFaces, TensorTextures and Multilinear Projection. Multilinear algebra, the algebra of higher-order tensors, is a suitable and transparent framework for analyzing the multifactor structure of an ensemble of observations and for addressing the difficult problem of disentangling the causal factors based on second order or higher order statistics associated with each causal factor.
Tensor (multilinear) factor analysis disentangles and reduces the influence of different causal factors with multilinear subspace learning.
When treating an image or a video as a 2- or 3-way array, i.e., "data matrix/tensor", tensor methods reduce spatial or time redundancies as demonstrated by Wang and Ahuja.
Yoshua Bengio, Geoff Hinton and their collaborators briefly discuss the relationship between deep neural networks and tensor factor analysis beyond the use of M-way arrays ("data tensors") as inputs. One of the early uses of tensors for neural networks appeared in natural language processing. A single word can be expressed as a vector via Word2vec, so a relationship between two words can be encoded in a matrix. However, for more complex relationships such as subject-object-verb, it is necessary to build higher-dimensional networks. In 2009, the work of Sutskever introduced Bayesian Clustered Tensor Factorization to model relational concepts while reducing the parameter space. From 2014 to 2015, tensor methods became more common in convolutional neural networks (CNNs). Tensor methods organize neural network weights in a "data tensor", and analyze them to reduce the number of weights. Lebedev et al. accelerated CNN networks for character classification (the recognition of letters and digits in images) by using 4D kernel tensors.
== Definition ==
Let $\mathbb{F}$ be a field such as the real numbers $\mathbb{R}$ or the complex numbers $\mathbb{C}$. A tensor $\mathcal{T} \in \mathbb{F}^{I_0 \times I_1 \times \ldots \times I_C}$ is a multilinear transformation from a set of domain vector spaces to a range vector space:
$$\mathcal{T} : \mathbb{F}^{I_1} \times \mathbb{F}^{I_2} \times \ldots \times \mathbb{F}^{I_C} \to \mathbb{F}^{I_0}$$
Here, $C$ and $I_0, I_1, \ldots, I_C$ are positive integers, and $(C+1)$ is the number of modes of the tensor (also known as the number of ways of a multi-way array). The dimensionality of mode $c$ is $I_c$, for $0 \leq c \leq C$.
In statistics and machine learning, an image is vectorized when viewed as a single observation, and a collection of vectorized images is organized as a "data tensor". For example, a set of facial images
$\{\mathbf{d}_{i_p,i_e,i_l,i_v} \in \mathbb{R}^{I_X}\}$ with $I_X$ pixels that are the consequences of multiple causal factors, such as a facial geometry $i_p$ $(1 \leq i_p \leq I_P)$, an expression $i_e$ $(1 \leq i_e \leq I_E)$, an illumination condition $i_l$ $(1 \leq i_l \leq I_L)$, and a viewing condition $i_v$ $(1 \leq i_v \leq I_V)$, may be organized into a data tensor (i.e. multiway array) $\mathcal{D} \in \mathbb{R}^{I_X \times I_P \times I_E \times I_L \times I_V}$, where $I_P$ is the total number of facial geometries, $I_E$ the total number of expressions, $I_L$ the total number of illumination conditions, and $I_V$ the total number of viewing conditions. Tensor factorization methods such as TensorFaces and multilinear (tensor) independent component analysis factorize the data tensor into a set of vector spaces that span the causal factor representations, where an image is the result of a tensor transformation $\mathcal{T}$ that maps a set of causal factor representations to the pixel space.
Another approach to using tensors in machine learning is to embed various data types directly. For example, a grayscale image is commonly represented as a discrete 2-way array $\mathbf{D} \in \mathbb{R}^{I_{RX} \times I_{CX}}$ with dimensionality $I_{RX} \times I_{CX}$, where $I_{RX}$ is the number of rows and $I_{CX}$ the number of columns. When an image is treated as a 2-way array or 2nd-order tensor (i.e. as a collection of column/row observations), tensor factorization methods compute the image column space, the image row space and the normalized PCA coefficients or the ICA coefficients. Similarly, a color image with RGB channels, $\mathcal{D} \in \mathbb{R}^{N \times M \times 3}$, may be viewed as a 3rd-order data tensor or 3-way array.
In natural language processing, a word might be expressed as a vector $v$ via the Word2vec algorithm, so that $v$ becomes a mode-1 tensor $v \mapsto \mathcal{A} \in \mathbb{R}^{N}$. The embedding of subject-object-verb semantics requires embedding relationships among three words. Because a word is itself a vector, subject-object-verb semantics could be expressed using mode-3 tensors:
$$v_a \times v_b \times v_c \mapsto \mathcal{A} \in \mathbb{R}^{N \times N \times N}.$$
In practice the neural network designer is primarily concerned with the specification of embeddings, the connection of tensor layers, and the operations performed on them in a network. Modern machine learning frameworks manage the optimization, tensor factorization and backpropagation automatically.
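As a hedged sketch of the mode-3 embedding just described (NumPy assumed; random vectors stand in for Word2vec embeddings):

```python
# Sketch: encoding a subject-object-verb triple as a mode-3 tensor via the
# outer product of three word vectors.
import numpy as np

N = 4                                 # embedding dimension (tiny, for illustration)
v_subj, v_obj, v_verb = (np.random.rand(N) for _ in range(3))

A = np.einsum('i,j,k->ijk', v_subj, v_obj, v_verb)   # A in R^{N x N x N}
print(A.shape)                        # (4, 4, 4)
```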
=== As unit values ===
Tensors may be used as the unit values of neural networks which extend the concept of scalar, vector and matrix values to multiple dimensions.
The output value of a single-layer unit $y_m$ is the sum-product of its input units and the connection weights, filtered through the activation function $f$:
$$y_m = f\left(\sum_n x_n u_{m,n}\right),$$
where $y_m \in \mathbb{R}$.
If each output element of $y_m$ is a scalar, then we have the classical definition of an artificial neural network. By replacing each unit component with a tensor, the network is able to express higher-dimensional data such as images or videos:
$$y_m \in \mathbb{R}^{I_0 \times I_1 \times \ldots \times I_C}.$$
This use of tensors to replace unit values is common in convolutional neural networks where each unit might be an image processed through multiple layers. By embedding the data in tensors such network structures enable learning of complex data types.
=== In fully connected layers ===
Tensors may also be used to compute the layers of a fully connected neural network, where the tensor is applied to the entire layer instead of individual unit values.
The output value of a single-layer unit $y_m$ is the sum-product of its input units and the connection weights, filtered through the activation function $f$:
$$y_m = f\left(\sum_n x_n u_{m,n}\right).$$
The vectors $x$ and $y$ of input and output values can be expressed as mode-1 tensors, while the hidden weights can be expressed as a mode-2 tensor. In this example the unit values are scalars while the tensors take on the dimensions of the network layers:
$$x_n \mapsto \mathcal{X} \in \mathbb{R}^{1 \times N}, \qquad y_m \mapsto \mathcal{Y} \in \mathbb{R}^{1 \times M}, \qquad u_{m,n} \mapsto \mathcal{U} \in \mathbb{R}^{N \times M}.$$
In this notation, the output values can be computed as a tensor product of the input and weight tensors:
$$\mathcal{Y} = f(\mathcal{X}\mathcal{U}),$$
which computes the sum-product as a tensor multiplication (similar to matrix multiplication).
This formulation of tensors enables the entire layer of a fully connected network to be efficiently computed by mapping the units and weights to tensors.
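A minimal sketch of this formulation (NumPy assumed; the ReLU activation is our choice for illustration):

```python
# Sketch: a whole fully connected layer computed as one tensor (matrix) product
# followed by the activation function.
import numpy as np

def f(z):                     # activation function (ReLU, as an example)
    return np.maximum(z, 0.0)

N, M = 5, 3
X = np.random.rand(1, N)      # input layer as a 1 x N tensor
U = np.random.randn(N, M)     # hidden weights as an N x M tensor

Y = f(X @ U)                  # Y = f(XU), a 1 x M tensor of output values
print(Y.shape)                # (1, 3)
```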
=== In convolutional layers ===
A different reformulation of neural networks allows tensors to express the convolution layers of a neural network. A convolutional layer has multiple inputs, each of which is a spatial structure such as an image or volume. The inputs are convolved by filtering before being passed to the next layer. A typical use is to perform feature detection or isolation in image recognition.
Convolution is often computed as the multiplication of an input signal $g$ with a filter kernel $f$. In two dimensions the discrete, finite form is:
$$(f * g)_{x,y} = \sum_{j=-w}^{w} \sum_{k=-w}^{w} f_{j,k}\, g_{x+j,\,y+k},$$
where $w$ is the width of the kernel.
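The following sketch (our own illustration, assuming NumPy) evaluates this formula directly, at interior points only so that all indices stay in range:

```python
# Direct (unoptimized) 2-D convolution per the formula above.
import numpy as np

def conv2d(f, g, w):
    """(f*g)_{x,y} = sum_{j=-w..w} sum_{k=-w..w} f_{j,k} g_{x+j,y+k}."""
    H, W = g.shape
    out = np.zeros((H - 2 * w, W - 2 * w))
    for x in range(w, H - w):
        for y in range(w, W - w):
            acc = 0.0
            for j in range(-w, w + 1):
                for k in range(-w, w + 1):
                    acc += f[j + w, k + w] * g[x + j, y + k]
            out[x - w, y - w] = acc
    return out

g = np.random.rand(8, 8)            # input signal (e.g. an image patch)
f = np.ones((3, 3)) / 9.0           # 3x3 averaging kernel, so w = 1
print(conv2d(f, g, w=1).shape)      # (6, 6)
```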
This definition can be rephrased as a matrix-vector product in terms of tensors that express the kernel, data and inverse transform of the kernel:
$$\mathcal{Y} = \mathcal{A}[(Cg) \odot (Bd)],$$
where $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$ are the inverse transform, data and kernel. The derivation is more complex when the filtering kernel also includes a non-linear activation function such as sigmoid or ReLU.
The hidden weights of the convolution layer are the parameters to the filter. These can be reduced with a pooling layer which reduces the resolution (size) of the data, and can also be expressed as a tensor operation.
=== Tensor factorization ===
An important contribution of tensors in machine learning is the ability to factorize tensors to decompose data into constituent factors or reduce the learned parameters. Data tensor modeling techniques stem from the linear tensor decomposition (CANDECOMP/Parafac decomposition) and the multilinear tensor decompositions (Tucker).
==== Tucker decomposition ====
Tucker decomposition, for example, takes a 3-way array $\mathcal{X} \in \mathbb{R}^{I \times J \times K}$ and decomposes the tensor into three matrices $\mathcal{A}, \mathcal{B}, \mathcal{C}$ and a smaller tensor $\mathcal{G}$. The shapes of the matrices and of the new tensor are such that the total number of elements is reduced. The new tensors have shapes
$$\mathcal{A} \in \mathbb{R}^{I \times P}, \qquad \mathcal{B} \in \mathbb{R}^{J \times Q}, \qquad \mathcal{C} \in \mathbb{R}^{K \times R}, \qquad \mathcal{G} \in \mathbb{R}^{P \times Q \times R}.$$
Then the original tensor can be expressed as the tensor product of these four tensors:
$$\mathcal{X} = \mathcal{G} \times \mathcal{A} \times \mathcal{B} \times \mathcal{C}.$$
In the example shown in the figure, the dimensions of the tensors are $\mathcal{X}$: $I=8$, $J=6$, $K=3$; $\mathcal{A}$: $I=8$, $P=5$; $\mathcal{B}$: $J=6$, $Q=4$; $\mathcal{C}$: $K=3$, $R=2$; $\mathcal{G}$: $P=5$, $Q=4$, $R=2$.
The total number of elements in the Tucker factorization is
$$|\mathcal{A}| + |\mathcal{B}| + |\mathcal{C}| + |\mathcal{G}| = (I \times P) + (J \times Q) + (K \times R) + (P \times Q \times R) = 8 \times 5 + 6 \times 4 + 3 \times 2 + 5 \times 4 \times 2 = 110.$$
The number of elements in the original $\mathcal{X}$ is 144, resulting in a data reduction from 144 down to 110 elements, a reduction of about 23% in parameters or data size. For much larger initial tensors, and depending on the rank (redundancy) of the tensor, the gains can be more significant.
The work of Rabanser et al. provides an introduction to tensors with more details on the extension of Tucker decomposition to N-dimensions beyond the mode-3 example given here.
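A minimal sketch of the reconstruction step for the shapes quoted above (NumPy assumed; random factors stand in for learned ones, since the figure's data is not available):

```python
# Sketch: reconstruct X (8x6x3) from the Tucker core G (5x4x2) and the factor
# matrices A (8x5), B (6x4), C (3x2).
import numpy as np

I, J, K = 8, 6, 3
P, Q, R = 5, 4, 2
G = np.random.rand(P, Q, R)
A, B, C = np.random.rand(I, P), np.random.rand(J, Q), np.random.rand(K, R)

# Mode products G x1 A x2 B x3 C, written as a single einsum contraction:
X = np.einsum('pqr,ip,jq,kr->ijk', G, A, B, C)
print(X.shape)                                    # (8, 6, 3)
print(A.size + B.size + C.size + G.size, X.size)  # 110 vs 144 elements
```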
==== Tensor trains ====
Another technique for decomposing tensors rewrites the initial tensor as a sequence (train) of smaller sized tensors. A tensor-train (TT) is a sequence of tensors of reduced rank, called canonical factors. The original tensor can be expressed as the sum-product of the sequence.
$$\mathcal{X} = \mathcal{G}_1 \mathcal{G}_2 \mathcal{G}_3 \ldots \mathcal{G}_d$$
Developed in 2011 by Ivan Oseledets, the author observes that Tucker decomposition is "suitable for small dimensions, especially for the three-dimensional case. For large d it is not suitable." Thus tensor trains can be used to factorize larger tensors in higher dimensions.
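A hedged sketch of how a tensor train is evaluated (our own illustration; `tt_element` is a hypothetical helper name, not a library function):

```python
# Sketch: the entry X[i1,...,id] of a TT tensor is a product of matrices, one
# slice from each core G_k of shape (r_{k-1}, n_k, r_k), with r_0 = r_d = 1.
import numpy as np

def tt_element(cores, index):
    """Evaluate one entry of the tensor represented by a list of TT cores."""
    v = np.ones((1, 1))
    for G, i in zip(cores, index):
        v = v @ G[:, i, :]            # multiply by the i-th slice of each core
    return v[0, 0]

# Three cores for a 4 x 5 x 6 tensor with TT ranks (1, 2, 3, 1):
cores = [np.random.rand(1, 4, 2), np.random.rand(2, 5, 3), np.random.rand(3, 6, 1)]
print(tt_element(cores, (0, 1, 2)))
```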
=== Tensor graphs ===
The unified data architecture and automatic differentiation of tensors has enabled higher-level designs of machine learning in the form of tensor graphs. This leads to new architectures, such as tensor-graph convolutional networks (TGCN), which identify highly non-linear associations in data, combine multiple relations, and scale gracefully, while remaining robust and performant.
These developments are impacting all areas of machine learning, such as text mining and clustering, time varying data, and neural networks wherein the input data is a social graph and the data changes dynamically.
== Hardware ==
Tensors provide a unified way to train neural networks for more complex data sets. However, training is expensive to compute on classical CPU hardware.
In 2014, Nvidia developed cuDNN, CUDA Deep Neural Network, a library for a set of optimized primitives written in the parallel CUDA language. CUDA and thus cuDNN run on dedicated GPUs that implement unified massive parallelism in hardware. These GPUs were not yet dedicated chips for tensors, but rather existing hardware adapted for parallel computation in machine learning.
In the period 2015–2017 Google invented the Tensor Processing Unit (TPU). TPUs are dedicated, fixed function hardware units that specialize in the matrix multiplications needed for tensor products. Specifically, they implement an array of 65,536 multiply units that can perform a 256x256 matrix sum-product in just one global instruction cycle.
Later in 2017, Nvidia released its own Tensor Core with the Volta GPU architecture. Each Tensor Core is a microunit that can perform a 4x4 matrix sum-product, and there are eight tensor cores per streaming multiprocessor (SM). The full GV100 GPU has 84 SMs, resulting in 672 tensor cores. This device accelerated machine learning by 12x over the previous Tesla GPUs. The number of tensor cores scales as the number of cores and SM units continues to grow in each new generation of cards.
The development of GPU hardware, combined with the unified architecture of tensor cores, has enabled the training of much larger neural networks. In 2022, the largest neural network was Google's PaLM, with 540 billion learned parameters (network weights); for comparison, the older GPT-3 language model has over 175 billion learned parameters and produces human-like text. Size is not everything: Stanford's much smaller 2023 Alpaca model claims to be better, having learned from the 7-billion-parameter variant of Meta/Facebook's 2023 LLaMA model. The widely popular chatbot ChatGPT is built on top of GPT-3.5 (and, after an update, GPT-4) using supervised and reinforcement learning.
== References ==
In continuum mechanics, the finite strain theory—also called large strain theory, or large deformation theory—deals with deformations in which strains and/or rotations are large enough to invalidate assumptions inherent in infinitesimal strain theory. In this case, the undeformed and deformed configurations of the continuum are significantly different, requiring a clear distinction between them. This is commonly the case with elastomers, plastically deforming materials, fluids, and biological soft tissue.
== Displacement field ==
== Deformation gradient tensor ==
The deformation gradient tensor $\mathbf{F}(\mathbf{X},t) = F_{jK}\,\mathbf{e}_j \otimes \mathbf{I}_K$ is related to both the reference and the current configuration, as seen by the unit vectors $\mathbf{e}_j$ and $\mathbf{I}_K$; therefore it is a two-point tensor.
Two types of deformation gradient tensor may be defined.
Due to the assumption of continuity of $\chi(\mathbf{X},t)$, $\mathbf{F}$ has the inverse $\mathbf{H} = \mathbf{F}^{-1}$, where $\mathbf{H}$ is the spatial deformation gradient tensor. Then, by the implicit function theorem, the Jacobian determinant $J(\mathbf{X},t)$ must be nonsingular, i.e.
$$J(\mathbf{X},t) = \det \mathbf{F}(\mathbf{X},t) \neq 0$$
The material deformation gradient tensor $\mathbf{F}(\mathbf{X},t) = F_{jK}\,\mathbf{e}_j \otimes \mathbf{I}_K$ is a second-order tensor that represents the gradient of the mapping function or functional relation $\chi(\mathbf{X},t)$, which describes the motion of a continuum. The material deformation gradient tensor characterizes the local deformation at a material point with position vector $\mathbf{X}$, i.e., deformation at neighbouring points, by transforming (linear transformation) a material line element emanating from that point from the reference configuration to the current or deformed configuration, assuming continuity in the mapping function $\chi(\mathbf{X},t)$, i.e. a differentiable function of $\mathbf{X}$ and time $t$, which implies that cracks and voids do not open or close during the deformation. Thus we have,
$$\begin{aligned} d\mathbf{x} &= \frac{\partial \mathbf{x}}{\partial \mathbf{X}}\, d\mathbf{X} \qquad &\text{or} \qquad dx_j &= \frac{\partial x_j}{\partial X_K}\, dX_K \\ &= \nabla \chi(\mathbf{X},t)\, d\mathbf{X} \qquad &\text{or} \qquad dx_j &= F_{jK}\, dX_K\,. \\ &= \mathbf{F}(\mathbf{X},t)\, d\mathbf{X} \end{aligned}$$
=== Relative displacement vector ===
Consider a particle or material point $P$ with position vector $\mathbf{X} = X_I \mathbf{I}_I$ in the undeformed configuration (Figure 2). After a displacement of the body, the new position of the particle, indicated by $p$ in the new configuration, is given by the vector position $\mathbf{x} = x_i \mathbf{e}_i$. The coordinate systems for the undeformed and deformed configuration can be superimposed for convenience.
Consider now a material point $Q$ neighboring $P$, with position vector $\mathbf{X} + \Delta\mathbf{X} = (X_I + \Delta X_I)\mathbf{I}_I$. In the deformed configuration this particle has a new position $q$ given by the position vector $\mathbf{x} + \Delta\mathbf{x}$. Assuming that the line segments $\Delta\mathbf{X}$ and $\Delta\mathbf{x}$ joining the particles $P$ and $Q$ in the undeformed and deformed configurations, respectively, are very small, we can express them as $d\mathbf{X}$ and $d\mathbf{x}$. Thus from Figure 2 we have
$$\begin{aligned} \mathbf{x} + d\mathbf{x} &= \mathbf{X} + d\mathbf{X} + \mathbf{u}(\mathbf{X} + d\mathbf{X}) \\ d\mathbf{x} &= \mathbf{X} - \mathbf{x} + d\mathbf{X} + \mathbf{u}(\mathbf{X} + d\mathbf{X}) \\ &= d\mathbf{X} + \mathbf{u}(\mathbf{X} + d\mathbf{X}) - \mathbf{u}(\mathbf{X}) \\ &= d\mathbf{X} + d\mathbf{u} \end{aligned}$$
where $d\mathbf{u}$ is the relative displacement vector, which represents the relative displacement of $Q$ with respect to $P$ in the deformed configuration.
==== Taylor approximation ====
For an infinitesimal element $d\mathbf{X}$, and assuming continuity of the displacement field, it is possible to use a Taylor series expansion around point $P$, neglecting higher-order terms, to approximate the components of the relative displacement vector for the neighboring particle $Q$ as
$$\begin{aligned} \mathbf{u}(\mathbf{X} + d\mathbf{X}) &= \mathbf{u}(\mathbf{X}) + d\mathbf{u} \qquad &\text{or} \qquad u_i^* &= u_i + du_i \\ &\approx \mathbf{u}(\mathbf{X}) + \nabla_{\mathbf{X}}\mathbf{u} \cdot d\mathbf{X} \qquad &\text{or} \qquad u_i^* &\approx u_i + \frac{\partial u_i}{\partial X_J}\, dX_J\,. \end{aligned}$$
Thus, the previous equation $d\mathbf{x} = d\mathbf{X} + d\mathbf{u}$ can be written as
$$\begin{aligned} d\mathbf{x} &= d\mathbf{X} + d\mathbf{u} \\ &= d\mathbf{X} + \nabla_{\mathbf{X}}\mathbf{u} \cdot d\mathbf{X} \\ &= \left(\mathbf{I} + \nabla_{\mathbf{X}}\mathbf{u}\right) d\mathbf{X} \\ &= \mathbf{F}\, d\mathbf{X} \end{aligned}$$
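A minimal numerical sketch of the deformation gradient (our own illustration, assuming NumPy): $\mathbf{F}$ is computed column by column from central finite differences of a given motion $\chi(\mathbf{X})$, here a simple shear:

```python
# Sketch: F_jK = d x_j / d X_K for a given motion chi(X), by finite differences.
import numpy as np

def chi(X):                          # a simple shear motion x = chi(X)
    gamma = 0.5
    return np.array([X[0] + gamma * X[1], X[1], X[2]])

def deformation_gradient(chi, X, h=1e-6):
    F = np.zeros((3, 3))
    for K in range(3):
        dX = np.zeros(3); dX[K] = h
        F[:, K] = (chi(X + dX) - chi(X - dX)) / (2 * h)   # column K of F
    return F

F = deformation_gradient(chi, np.array([1.0, 2.0, 3.0]))
print(F)                  # [[1, 0.5, 0], [0, 1, 0], [0, 0, 1]]
print(np.linalg.det(F))   # J = det F = 1 (volume-preserving shear)
```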
=== Time-derivative of the deformation gradient ===
Calculations that involve the time-dependent deformation of a body often require a time derivative of the deformation gradient to be calculated. A geometrically consistent definition of such a derivative requires an excursion into differential geometry but we avoid those issues in this article.
The time derivative of $\mathbf{F}$ is
$$\dot{\mathbf{F}} = \frac{\partial \mathbf{F}}{\partial t} = \frac{\partial}{\partial t}\left[\frac{\partial \mathbf{x}(\mathbf{X},t)}{\partial \mathbf{X}}\right] = \frac{\partial}{\partial \mathbf{X}}\left[\frac{\partial \mathbf{x}(\mathbf{X},t)}{\partial t}\right] = \frac{\partial}{\partial \mathbf{X}}\left[\mathbf{V}(\mathbf{X},t)\right]$$
where $\mathbf{V}$ is the (material) velocity. The derivative on the right hand side represents a material velocity gradient. It is common to convert that into a spatial gradient by applying the chain rule for derivatives, i.e.,
$$\dot{\mathbf{F}} = \frac{\partial}{\partial \mathbf{X}}\left[\mathbf{V}(\mathbf{X},t)\right] = \frac{\partial}{\partial \mathbf{X}}\left[\mathbf{v}(\mathbf{x}(\mathbf{X},t),t)\right] = \left.\frac{\partial}{\partial \mathbf{x}}\left[\mathbf{v}(\mathbf{x},t)\right]\right|_{\mathbf{x}=\mathbf{x}(\mathbf{X},t)} \cdot \frac{\partial \mathbf{x}(\mathbf{X},t)}{\partial \mathbf{X}} = \boldsymbol{l} \cdot \mathbf{F}$$
where $\boldsymbol{l} = (\nabla_{\mathbf{x}}\mathbf{v})^T$ is the spatial velocity gradient and $\mathbf{v}(\mathbf{x},t) = \mathbf{V}(\mathbf{X},t)$ is the spatial (Eulerian) velocity at $\mathbf{x} = \mathbf{x}(\mathbf{X},t)$. If the spatial velocity gradient is constant in time, the above equation can be solved exactly to give
$$\mathbf{F} = e^{\boldsymbol{l}\,t}$$
assuming $\mathbf{F} = \mathbf{1}$ at $t = 0$. There are several methods of computing the exponential above.
Related quantities often used in continuum mechanics are the rate of deformation tensor and the spin tensor defined, respectively, as:
$$\boldsymbol{d} = \tfrac{1}{2}\left(\boldsymbol{l} + \boldsymbol{l}^T\right)\,, \qquad \boldsymbol{w} = \tfrac{1}{2}\left(\boldsymbol{l} - \boldsymbol{l}^T\right)\,.$$
The rate of deformation tensor gives the rate of stretching of line elements while the spin tensor indicates the rate of rotation or vorticity of the motion.
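A short sketch (our own, assuming NumPy and SciPy) computing $\boldsymbol{d}$ and $\boldsymbol{w}$ for a constant shear-flow velocity gradient, together with the exact solution $\mathbf{F} = e^{\boldsymbol{l}\,t}$:

```python
# Sketch: rate of deformation d, spin w, and F(t) = expm(l t) for constant l.
import numpy as np
from scipy.linalg import expm

l = np.array([[0.0, 0.3, 0.0],     # spatial velocity gradient of a shear flow
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

d = 0.5 * (l + l.T)                # rate of deformation (stretching) tensor
w = 0.5 * (l - l.T)                # spin tensor
F = expm(l * 2.0)                  # F at t = 2, assuming F = 1 at t = 0
print(d, w, F, sep='\n')
```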
The material time derivative of the inverse of the deformation gradient (keeping the reference configuration fixed) is often required in analyses that involve finite strains. This derivative is
$$\frac{\partial}{\partial t}\left(\mathbf{F}^{-1}\right) = -\mathbf{F}^{-1} \cdot \dot{\mathbf{F}} \cdot \mathbf{F}^{-1}\,.$$
The above relation can be verified by taking the material time derivative of $\mathbf{F}^{-1} \cdot d\mathbf{x} = d\mathbf{X}$ and noting that $\dot{\mathbf{X}} = 0$.
=== Polar decomposition of the deformation gradient tensor ===
The deformation gradient $\mathbf{F}$, like any invertible second-order tensor, can be decomposed, using the polar decomposition theorem, into a product of two second-order tensors (Truesdell and Noll, 1965): an orthogonal tensor and a positive definite symmetric tensor, i.e.,
$$\mathbf{F} = \mathbf{R}\mathbf{U} = \mathbf{V}\mathbf{R}$$
where the tensor $\mathbf{R}$ is a proper orthogonal tensor, i.e., $\mathbf{R}^{-1} = \mathbf{R}^T$ and $\det \mathbf{R} = +1$, representing a rotation; the tensor $\mathbf{U}$ is the right stretch tensor; and $\mathbf{V}$ the left stretch tensor. The terms right and left mean that they are to the right and left of the rotation tensor $\mathbf{R}$, respectively.
$\mathbf{U}$ and $\mathbf{V}$ are both positive definite, i.e. $\mathbf{x} \cdot \mathbf{U} \cdot \mathbf{x} > 0$ and $\mathbf{x} \cdot \mathbf{V} \cdot \mathbf{x} > 0$ for all non-zero $\mathbf{x} \in \mathbb{R}^3$, and symmetric tensors of second order, i.e. $\mathbf{U} = \mathbf{U}^T$ and $\mathbf{V} = \mathbf{V}^T$.
This decomposition implies that the deformation of a line element $d\mathbf{X}$ in the undeformed configuration onto $d\mathbf{x}$ in the deformed configuration, i.e., $d\mathbf{x} = \mathbf{F}\,d\mathbf{X}$, may be obtained either by first stretching the element by $\mathbf{U}$, i.e. $d\mathbf{x}' = \mathbf{U}\,d\mathbf{X}$, followed by a rotation $\mathbf{R}$, i.e., $d\mathbf{x} = \mathbf{R}\,d\mathbf{x}'$; or equivalently, by applying a rigid rotation $\mathbf{R}$ first, i.e., $d\mathbf{x}' = \mathbf{R}\,d\mathbf{X}$, followed later by a stretching $\mathbf{V}$, i.e., $d\mathbf{x} = \mathbf{V}\,d\mathbf{x}'$ (see Figure 3).
Due to the orthogonality of $\mathbf{R}$,
$$\mathbf{V} = \mathbf{R} \cdot \mathbf{U} \cdot \mathbf{R}^T$$
so that $\mathbf{U}$ and $\mathbf{V}$ have the same eigenvalues or principal stretches, but different eigenvectors or principal directions $\mathbf{N}_i$ and $\mathbf{n}_i$, respectively. The principal directions are related by
$$\mathbf{n}_i = \mathbf{R}\mathbf{N}_i.$$
This polar decomposition, which is unique as $\mathbf{F}$ is invertible with a positive determinant, is a corollary of the singular-value decomposition.
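A minimal sketch of this decomposition (our own illustration; SciPy's `scipy.linalg.polar` is used):

```python
# Sketch: polar decomposition F = R U = V R, with consistency checks.
import numpy as np
from scipy.linalg import polar

F = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.2, 0.0],
              [0.0, 0.0, 0.9]])

R, U = polar(F, side='right')      # F = R U (right stretch tensor U)
_, V = polar(F, side='left')       # F = V R (left stretch tensor V)

assert np.allclose(F, R @ U) and np.allclose(F, V @ R)
assert np.allclose(V, R @ U @ R.T)           # V = R U R^T
print(np.linalg.det(R))                      # +1: a proper rotation
```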
=== Transformation of a surface and volume element ===
To transform quantities that are defined with respect to areas in a deformed configuration to those relative to areas in a reference configuration, and vice versa, we use Nanson's relation, expressed as
$$da\,\mathbf{n} = J\,dA\,\mathbf{F}^{-T} \cdot \mathbf{N}$$
where $da$ is an area of a region in the deformed configuration, $dA$ is the same area in the reference configuration, $\mathbf{n}$ is the outward normal to the area element in the current configuration while $\mathbf{N}$ is the outward normal in the reference configuration, $\mathbf{F}$ is the deformation gradient, and $J = \det \mathbf{F}$.
The corresponding formula for the transformation of the volume element is
$$dv = J\,dV$$
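A quick numerical check of Nanson's relation (our own illustration, assuming NumPy):

```python
# Sketch: transform a reference area element dA N with a given F.
import numpy as np

F = np.array([[2.0, 0.3, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.5]])
J = np.linalg.det(F)

N = np.array([0.0, 0.0, 1.0])      # unit normal in the reference configuration
dA = 1.0                           # reference area element

da_n = J * dA * np.linalg.inv(F).T @ N      # da n = J dA F^{-T} N
da = np.linalg.norm(da_n)                   # deformed area
n = da_n / da                               # deformed unit normal
print(J, da, n)
```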
== Fundamental strain tensors ==
A strain tensor is defined by the IUPAC as:
"A symmetric tensor that results when a deformation gradient tensor is factorized into a rotation tensor followed or preceded by a symmetric tensor".
Since a pure rotation should not induce any strains in a deformable body, it is often convenient to use rotation-independent measures of deformation in continuum mechanics. As a rotation followed by its inverse rotation leads to no change ($\mathbf{R}\mathbf{R}^T = \mathbf{R}^T\mathbf{R} = \mathbf{I}$), we can exclude the rotation by multiplying the deformation gradient tensor $\mathbf{F}$ by its transpose.
Several rotation-independent deformation gradient tensors (or "deformation tensors", for short) are used in mechanics. In solid mechanics, the most popular of these are the right and left Cauchy–Green deformation tensors.
=== Cauchy strain tensor (right Cauchy–Green deformation tensor) ===
In 1839, George Green introduced a deformation tensor known as the right Cauchy–Green deformation tensor or Green's deformation tensor (the IUPAC recommends that this tensor be called the Cauchy strain tensor), defined as:
$$\mathbf{C} = \mathbf{F}^T\mathbf{F} = \mathbf{U}^2 \qquad \text{or} \qquad C_{IJ} = F_{kI}\,F_{kJ} = \frac{\partial x_k}{\partial X_I}\frac{\partial x_k}{\partial X_J}.$$
Physically, the Cauchy–Green tensor gives us the square of the local change in distances due to deformation, i.e.
$$d\mathbf{x}^2 = d\mathbf{X} \cdot \mathbf{C} \cdot d\mathbf{X}$$
Invariants of $\mathbf{C}$ are often used in the expressions for strain energy density functions. The most commonly used invariants are
$$\begin{aligned} I_1^C &:= \operatorname{tr}(\mathbf{C}) = C_{II} = \lambda_1^2 + \lambda_2^2 + \lambda_3^2 \\ I_2^C &:= \tfrac{1}{2}\left[(\operatorname{tr}\mathbf{C})^2 - \operatorname{tr}(\mathbf{C}^2)\right] = \tfrac{1}{2}\left[(C_{JJ})^2 - C_{IK}C_{KI}\right] = \lambda_1^2\lambda_2^2 + \lambda_2^2\lambda_3^2 + \lambda_3^2\lambda_1^2 \\ I_3^C &:= \det(\mathbf{C}) = J^2 = \lambda_1^2\lambda_2^2\lambda_3^2. \end{aligned}$$
where $J := \det \mathbf{F}$ is the determinant of the deformation gradient $\mathbf{F}$ and $\lambda_i$ are stretch ratios for the unit fibers that are initially oriented along the eigenvector directions of the right (reference) stretch tensor (these are not generally aligned with the three axes of the coordinate system).
=== Finger strain tensor ===
The IUPAC recommends that the inverse of the right Cauchy–Green deformation tensor (called the Cauchy strain tensor in that document), i.e., $\mathbf{C}^{-1}$, be called the Finger strain tensor. However, that nomenclature is not universally accepted in applied mechanics.
$$\mathbf{f} = \mathbf{C}^{-1} = \mathbf{F}^{-1}\mathbf{F}^{-T} \qquad \text{or} \qquad f_{IJ} = \frac{\partial X_I}{\partial x_k}\frac{\partial X_J}{\partial x_k}$$
=== Green strain tensor (left Cauchy–Green deformation tensor) ===
Reversing the order of multiplication in the formula for the right Cauchy-Green deformation tensor leads to the left Cauchy–Green deformation tensor which is defined as:
$$\mathbf{B} = \mathbf{F}\mathbf{F}^T = \mathbf{V}^2 \qquad \text{or} \qquad B_{ij} = \frac{\partial x_i}{\partial X_K}\frac{\partial x_j}{\partial X_K}$$
The left Cauchy–Green deformation tensor is often called the Finger deformation tensor, named after Josef Finger (1894).
The IUPAC recommends that this tensor be called the Green strain tensor.
Invariants of $\mathbf{B}$ are also used in the expressions for strain energy density functions. The conventional invariants are defined as
$$\begin{aligned} I_1 &:= \operatorname{tr}(\mathbf{B}) = B_{ii} = \lambda_1^2 + \lambda_2^2 + \lambda_3^2 \\ I_2 &:= \tfrac{1}{2}\left[(\operatorname{tr}\mathbf{B})^2 - \operatorname{tr}(\mathbf{B}^2)\right] = \tfrac{1}{2}\left(B_{ii}^2 - B_{jk}B_{kj}\right) = \lambda_1^2\lambda_2^2 + \lambda_2^2\lambda_3^2 + \lambda_3^2\lambda_1^2 \\ I_3 &:= \det \mathbf{B} = J^2 = \lambda_1^2\lambda_2^2\lambda_3^2 \end{aligned}$$
where $J := \det \mathbf{F}$ is the determinant of the deformation gradient.
For compressible materials, a slightly different set of invariants is used:
$$\bar{I}_1 := J^{-2/3} I_1\,; \qquad \bar{I}_2 := J^{-4/3} I_2\,; \qquad J \neq 1\,.$$
=== Piola strain tensor (Cauchy deformation tensor) ===
Earlier, in 1828, Augustin-Louis Cauchy introduced a deformation tensor defined as the inverse of the left Cauchy–Green deformation tensor, $\mathbf{B}^{-1}$. This tensor has also been called the Piola strain tensor by the IUPAC and the Finger tensor in the rheology and fluid dynamics literature.
$$\mathbf{c} = \mathbf{B}^{-1} = \mathbf{F}^{-T}\mathbf{F}^{-1} \qquad \text{or} \qquad c_{ij} = \frac{\partial X_K}{\partial x_i}\frac{\partial X_K}{\partial x_j}$$
=== Spectral representation ===
If there are three distinct principal stretches $\lambda_i$, the spectral decompositions of $\mathbf{C}$ and $\mathbf{B}$ are given by
$$\mathbf{C} = \sum_{i=1}^{3} \lambda_i^2\, \mathbf{N}_i \otimes \mathbf{N}_i \qquad \text{and} \qquad \mathbf{B} = \sum_{i=1}^{3} \lambda_i^2\, \mathbf{n}_i \otimes \mathbf{n}_i$$
Furthermore,
$$\mathbf{U} = \sum_{i=1}^{3} \lambda_i\, \mathbf{N}_i \otimes \mathbf{N}_i\,; \qquad \mathbf{V} = \sum_{i=1}^{3} \lambda_i\, \mathbf{n}_i \otimes \mathbf{n}_i$$
$$\mathbf{R} = \sum_{i=1}^{3} \mathbf{n}_i \otimes \mathbf{N}_i\,; \qquad \mathbf{F} = \sum_{i=1}^{3} \lambda_i\, \mathbf{n}_i \otimes \mathbf{N}_i$$
Observe that
$$\mathbf{V} = \mathbf{R}\,\mathbf{U}\,\mathbf{R}^T = \sum_{i=1}^{3} \lambda_i\, \mathbf{R}\,(\mathbf{N}_i \otimes \mathbf{N}_i)\,\mathbf{R}^T = \sum_{i=1}^{3} \lambda_i\, (\mathbf{R}\,\mathbf{N}_i) \otimes (\mathbf{R}\,\mathbf{N}_i)$$
Therefore, the uniqueness of the spectral decomposition also implies that $\mathbf{n}_i = \mathbf{R}\,\mathbf{N}_i$. The left stretch ($\mathbf{V}$) is also called the spatial stretch tensor while the right stretch ($\mathbf{U}$) is called the material stretch tensor.
The effect of $\mathbf{F}$ acting on $\mathbf{N}_i$ is to stretch the vector by $\lambda_i$ and to rotate it to the new orientation $\mathbf{n}_i$, i.e.,
$$\mathbf{F}\,\mathbf{N}_i = \lambda_i\,(\mathbf{R}\,\mathbf{N}_i) = \lambda_i\,\mathbf{n}_i$$
In a similar vein,
$$\mathbf{F}^{-T}\,\mathbf{N}_i = \frac{1}{\lambda_i}\,\mathbf{n}_i\,; \qquad \mathbf{F}^T\,\mathbf{n}_i = \lambda_i\,\mathbf{N}_i\,; \qquad \mathbf{F}^{-1}\,\mathbf{n}_i = \frac{1}{\lambda_i}\,\mathbf{N}_i\,.$$
==== Examples ====
Uniaxial extension of an incompressible material
This is the case where a specimen is stretched in the 1-direction with a stretch ratio of $\alpha = \alpha_1$. If the volume remains constant, the contraction in the other two directions is such that $\alpha_1\alpha_2\alpha_3 = 1$, or $\alpha_2 = \alpha_3 = \alpha^{-0.5}$. Then:
$$\mathbf{F} = \begin{bmatrix} \alpha & 0 & 0 \\ 0 & \alpha^{-0.5} & 0 \\ 0 & 0 & \alpha^{-0.5} \end{bmatrix}$$
$$\mathbf{B} = \mathbf{C} = \begin{bmatrix} \alpha^2 & 0 & 0 \\ 0 & \alpha^{-1} & 0 \\ 0 & 0 & \alpha^{-1} \end{bmatrix}$$
Simple shear
$$\mathbf{F} = \begin{bmatrix} 1 & \gamma & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$$\mathbf{B} = \begin{bmatrix} 1+\gamma^2 & \gamma & 0 \\ \gamma & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$$\mathbf{C} = \begin{bmatrix} 1 & \gamma & 0 \\ \gamma & 1+\gamma^2 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
Rigid body rotation
$$\mathbf{F} = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$$\mathbf{B} = \mathbf{C} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \mathbf{1}$$
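These examples can be checked numerically; a minimal sketch (our own, assuming NumPy):

```python
# Sketch: verify the simple-shear and rigid-rotation examples.
import numpy as np

gamma, theta = 0.4, np.pi / 6

F_shear = np.array([[1, gamma, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
print(F_shear @ F_shear.T)   # B = F F^T: [[1+g^2, g, 0], [g, 1, 0], [0, 0, 1]]
print(F_shear.T @ F_shear)   # C = F^T F: [[1, g, 0], [g, 1+g^2, 0], [0, 0, 1]]

c, s = np.cos(theta), np.sin(theta)
F_rot = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])
print(F_rot @ F_rot.T)       # B = C = I: a rigid rotation produces no deformation
```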
=== Derivatives of stretch ===
Derivatives of the stretch with respect to the right Cauchy–Green deformation tensor are used to derive the stress-strain relations of many solids, particularly hyperelastic materials. These derivatives are
$$\frac{\partial \lambda_i}{\partial \mathbf{C}} = \frac{1}{2\lambda_i}\,\mathbf{N}_i \otimes \mathbf{N}_i = \frac{1}{2\lambda_i}\,\mathbf{R}^T\,(\mathbf{n}_i \otimes \mathbf{n}_i)\,\mathbf{R}\,; \qquad i = 1,2,3$$
and follow from the observations that
$$\mathbf{C} : (\mathbf{N}_i \otimes \mathbf{N}_i) = \lambda_i^2\,; \qquad \frac{\partial \mathbf{C}}{\partial \mathbf{C}} = \mathsf{I}^{(s)}\,; \qquad \mathsf{I}^{(s)} : (\mathbf{N}_i \otimes \mathbf{N}_i) = \mathbf{N}_i \otimes \mathbf{N}_i.$$
=== Physical interpretation of deformation tensors ===
Let $\mathbf{X} = X^i\,\boldsymbol{E}_i$ be a Cartesian coordinate system defined on the undeformed body and let $\mathbf{x} = x^i\,\boldsymbol{E}_i$ be another system defined on the deformed body. Let a curve $\mathbf{X}(s)$ in the undeformed body be parametrized using $s \in [0,1]$. Its image in the deformed body is $\mathbf{x}(\mathbf{X}(s))$.
The undeformed length of the curve is given by
$$l_X = \int_0^1 \left|\frac{d\mathbf{X}}{ds}\right| ds = \int_0^1 \sqrt{\frac{d\mathbf{X}}{ds} \cdot \frac{d\mathbf{X}}{ds}}\; ds = \int_0^1 \sqrt{\frac{d\mathbf{X}}{ds} \cdot \boldsymbol{I} \cdot \frac{d\mathbf{X}}{ds}}\; ds$$
After deformation, the length becomes
$$\begin{aligned} l_x &= \int_0^1 \left|\frac{d\mathbf{x}}{ds}\right| ds = \int_0^1 \sqrt{\frac{d\mathbf{x}}{ds} \cdot \frac{d\mathbf{x}}{ds}}\; ds = \int_0^1 \sqrt{\left(\frac{d\mathbf{x}}{d\mathbf{X}} \cdot \frac{d\mathbf{X}}{ds}\right) \cdot \left(\frac{d\mathbf{x}}{d\mathbf{X}} \cdot \frac{d\mathbf{X}}{ds}\right)}\; ds \\ &= \int_0^1 \sqrt{\frac{d\mathbf{X}}{ds} \cdot \left[\left(\frac{d\mathbf{x}}{d\mathbf{X}}\right)^T \cdot \frac{d\mathbf{x}}{d\mathbf{X}}\right] \cdot \frac{d\mathbf{X}}{ds}}\; ds \end{aligned}$$
Note that the right Cauchy–Green deformation tensor is defined as
$$\boldsymbol{C} := \boldsymbol{F}^T \cdot \boldsymbol{F} = \left(\frac{d\mathbf{x}}{d\mathbf{X}}\right)^T \cdot \frac{d\mathbf{x}}{d\mathbf{X}}$$
Hence,
$$l_x = \int_0^1 \sqrt{\frac{d\mathbf{X}}{ds} \cdot \boldsymbol{C} \cdot \frac{d\mathbf{X}}{ds}}\; ds$$
which indicates that changes in length are characterized by $\boldsymbol{C}$.
== Finite strain tensors ==
The concept of strain is used to evaluate how much a given displacement differs locally from a rigid body displacement. One such strain measure for large deformations is the Lagrangian finite strain tensor, also called the Green-Lagrangian strain tensor or Green–St-Venant strain tensor, defined as
$$\mathbf{E} = \frac{1}{2}(\mathbf{C} - \mathbf{I}) \qquad \text{or} \qquad E_{KL} = \frac{1}{2}\left(\frac{\partial x_j}{\partial X_K}\frac{\partial x_j}{\partial X_L} - \delta_{KL}\right)$$
or as a function of the displacement gradient tensor
$$\mathbf{E} = \frac{1}{2}\left[(\nabla_{\mathbf{X}}\mathbf{u})^T + \nabla_{\mathbf{X}}\mathbf{u} + (\nabla_{\mathbf{X}}\mathbf{u})^T \cdot \nabla_{\mathbf{X}}\mathbf{u}\right]$$
or
$$E_{KL} = \frac{1}{2}\left(\frac{\partial u_K}{\partial X_L} + \frac{\partial u_L}{\partial X_K} + \frac{\partial u_M}{\partial X_K}\frac{\partial u_M}{\partial X_L}\right)$$
The Green-Lagrangian strain tensor is a measure of how much $\mathbf{C}$ differs from $\mathbf{I}$.
The Eulerian finite strain tensor, or Eulerian-Almansi finite strain tensor, referenced to the deformed configuration (i.e. Eulerian description), is defined as
$$\mathbf{e} = \frac{1}{2}(\mathbf{I} - \mathbf{c}) = \frac{1}{2}(\mathbf{I} - \mathbf{B}^{-1}) \qquad \text{or} \qquad e_{rs} = \frac{1}{2}\left(\delta_{rs} - \frac{\partial X_M}{\partial x_r}\frac{\partial X_M}{\partial x_s}\right)$$
or as a function of the displacement gradients we have
$$e_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{\partial u_k}{\partial x_i}\frac{\partial u_k}{\partial x_j}\right)$$
=== Seth–Hill family of generalized strain tensors ===
B. R. Seth from the Indian Institute of Technology Kharagpur was the first to show that the Green and Almansi strain tensors are special cases of a more general strain measure. The idea was further expanded upon by Rodney Hill in 1968. The Seth–Hill family of strain measures (also called Doyle-Ericksen tensors) can be expressed as
$$\mathbf{E}_{(m)} = \frac{1}{2m}\left(\mathbf{U}^{2m} - \mathbf{I}\right) = \frac{1}{2m}\left[\mathbf{C}^m - \mathbf{I}\right]$$
For different values of $m$ we have:
Green-Lagrangian strain tensor
{\displaystyle \mathbf {E} _{(1)}={\frac {1}{2}}(\mathbf {U} ^{2}-\mathbf {I} )={\frac {1}{2}}(\mathbf {C} -\mathbf {I} )}
Biot strain tensor
{\displaystyle \mathbf {E} _{(1/2)}=(\mathbf {U} -\mathbf {I} )=\mathbf {C} ^{1/2}-\mathbf {I} }
Logarithmic strain, Natural strain, True strain, or Hencky strain
{\displaystyle \mathbf {E} _{(0)}=\ln \mathbf {U} ={\frac {1}{2}}\,\ln \mathbf {C} }
Almansi strain
{\displaystyle \mathbf {E} _{(-1)}={\frac {1}{2}}\left[\mathbf {I} -\mathbf {U} ^{-2}\right]}
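A hedged numerical sketch (assumed stretch value; scalar principal-axis form) compares the four measures above for a uniaxial principal stretch λ, for which E(m) reduces to (λ^(2m) − 1)/(2m) with the Hencky strain ln λ as the m → 0 limit:

```python
# Sketch comparing Seth-Hill strain measures for an assumed stretch lam.
import numpy as np

lam = 1.2  # assumed principal stretch

def seth_hill(lam, m):
    """Principal value of the Seth-Hill strain E_(m) for stretch lam."""
    if m == 0:
        return np.log(lam)              # logarithmic (Hencky) strain
    return (lam**(2 * m) - 1) / (2 * m)

for m, name in [(1, "Green-Lagrangian"), (0.5, "Biot"),
                (0, "Hencky"), (-1, "Almansi")]:
    print(f"{name:16s} E_({m}) = {seth_hill(lam, m):+.4f}")
# 0.2200, 0.2000, 0.1823, 0.1528: all agree to first order in (lam - 1).
```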
The second-order approximation of these tensors is
{\displaystyle \mathbf {E} _{(m)}={\boldsymbol {\varepsilon }}+{\tfrac {1}{2}}(\nabla \mathbf {u} )^{T}\cdot \nabla \mathbf {u} -(1-m){\boldsymbol {\varepsilon }}^{T}\cdot {\boldsymbol {\varepsilon }}}
where {\displaystyle {\boldsymbol {\varepsilon }}} is the infinitesimal strain tensor.
Many other different definitions of tensors {\displaystyle \mathbf {E} } are admissible, provided that they all satisfy the conditions that:
{\displaystyle \mathbf {E} } vanishes for all rigid-body motions
the dependence of {\displaystyle \mathbf {E} } on the displacement gradient tensor {\displaystyle \nabla \mathbf {u} } is continuous, continuously differentiable and monotonic
it is also desired that {\displaystyle \mathbf {E} } reduces to the infinitesimal strain tensor {\displaystyle {\boldsymbol {\varepsilon }}} as the norm {\displaystyle |\nabla \mathbf {u} |\to 0}
An example is the set of tensors
{\displaystyle \mathbf {E} ^{(n)}=\left({\mathbf {U} }^{n}-{\mathbf {U} }^{-n}\right)/2n}
which do not belong to the Seth–Hill class, but have the same 2nd-order approximation as the Seth–Hill measures at {\displaystyle m=0} for any value of {\displaystyle n}.
=== Physical interpretation of the finite strain tensor ===
The diagonal components {\displaystyle E_{KL}} of the Lagrangian finite strain tensor are related to the normal strain, e.g.
{\displaystyle E_{11}=e_{(\mathbf {I} _{1})}+{\frac {1}{2}}e_{(\mathbf {I} _{1})}^{2}}
where {\displaystyle e_{(\mathbf {I} _{1})}} is the normal strain or engineering strain in the direction {\displaystyle \mathbf {I} _{1}}.
The off-diagonal components {\displaystyle E_{KL}} of the Lagrangian finite strain tensor are related to shear strain, e.g.
{\displaystyle E_{12}={\frac {1}{2}}{\sqrt {2E_{11}+1}}{\sqrt {2E_{22}+1}}\sin \phi _{12}}
where {\displaystyle \phi _{12}} is the change in the angle between two line elements that were originally perpendicular with directions {\displaystyle \mathbf {I} _{1}} and {\displaystyle \mathbf {I} _{2}}, respectively.
Under certain circumstances, i.e. small displacements and small displacement rates, the components of the Lagrangian finite strain tensor may be approximated by the components of the infinitesimal strain tensor.
== Compatibility conditions ==
The problem of compatibility in continuum mechanics involves the determination of allowable single-valued continuous fields on bodies. These allowable conditions leave the body without unphysical gaps or overlaps after a deformation. Most such conditions apply to simply-connected bodies. Additional conditions are required for the internal boundaries of multiply connected bodies.
=== Compatibility of the deformation gradient ===
The necessary and sufficient conditions for the existence of a compatible {\displaystyle {\boldsymbol {F}}} field over a simply connected body are
{\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {F}}={\boldsymbol {0}}}
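As a small symbolic illustration (an assumed smooth deformation, not a result from the text), any F obtained as the gradient of a deformation x(X) satisfies this curl condition row by row:

```python
# Sketch: a deformation gradient F = dx/dX is automatically compatible.
import sympy as sp

X1, X2, X3 = sp.symbols('X1 X2 X3')
# assumed smooth deformation x(X)
x = sp.Matrix([X1 + sp.Rational(1, 2) * X2**2, X2 + X1 * X3, X3])
F = x.jacobian([X1, X2, X3])

def curl(row):
    f1, f2, f3 = row
    return sp.Matrix([sp.diff(f3, X2) - sp.diff(f2, X3),
                      sp.diff(f1, X3) - sp.diff(f3, X1),
                      sp.diff(f2, X1) - sp.diff(f1, X2)])

for i in range(3):
    assert curl(list(F.row(i))) == sp.zeros(3, 1)  # each row is curl-free
print("curl F = 0: F is a compatible deformation gradient")
```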
=== Compatibility of the right Cauchy–Green deformation tensor ===
The necessary and sufficient conditions for the existence of a compatible {\displaystyle {\boldsymbol {C}}} field over a simply connected body are
{\displaystyle R_{\alpha \beta \rho }^{\gamma }:={\frac {\partial }{\partial X^{\rho }}}[\,_{(X)}\Gamma _{\alpha \beta }^{\gamma }]-{\frac {\partial }{\partial X^{\beta }}}[\,_{(X)}\Gamma _{\alpha \rho }^{\gamma }]+\,_{(X)}\Gamma _{\mu \rho }^{\gamma }\,_{(X)}\Gamma _{\alpha \beta }^{\mu }-\,_{(X)}\Gamma _{\mu \beta }^{\gamma }\,_{(X)}\Gamma _{\alpha \rho }^{\mu }=0}
We can show these are the mixed components of the Riemann–Christoffel curvature tensor. Therefore, the necessary conditions for {\displaystyle {\boldsymbol {C}}}-compatibility are that the Riemann–Christoffel curvature of the deformation is zero.
=== Compatibility of the left Cauchy–Green deformation tensor ===
General sufficiency conditions for the left Cauchy–Green deformation tensor in three dimensions were derived by Amit Acharya. Compatibility conditions for two-dimensional {\displaystyle {\boldsymbol {B}}} fields were found by Janet Blume.
== See also ==
Infinitesimal strain
Compatibility (mechanics)
Curvilinear coordinates
Piola–Kirchhoff stress tensor, the stress tensor for finite deformations.
Stress measures
Strain partitioning
== References ==
== Further reading ==
Dill, Ellis Harold (2006). Continuum Mechanics: Elasticity, Plasticity, Viscoelasticity. Germany: CRC Press. ISBN 0-8493-9779-0.
Dimitrienko, Yuriy (2011). Nonlinear Continuum Mechanics and Large Inelastic Deformations. Germany: Springer. ISBN 978-94-007-0033-8.
Hutter, Kolumban; Klaus Jöhnk (2004). Continuum Methods of Physical Modeling. Germany: Springer. ISBN 3-540-20619-1.
Lubarda, Vlado A. (2001). Elastoplasticity Theory. CRC Press. ISBN 0-8493-1138-1.
Macosko, C. W. (1994). Rheology: principles, measurement and applications. VCH Publishers. ISBN 1-56081-579-5.
Mase, George E. (1970). Continuum Mechanics. McGraw-Hill Professional. ISBN 0-07-040663-4.
Mase, G. Thomas; George E. Mase (1999). Continuum Mechanics for Engineers (Second ed.). CRC Press. ISBN 0-8493-1855-6.
Nemat-Nasser, Sia (2006). Plasticity: A Treatise on Finite Deformation of Heterogeneous Inelastic Materials. Cambridge: Cambridge University Press. ISBN 0-521-83979-3.
Rees, David (2006). Basic Engineering Plasticity – An Introduction with Engineering and Manufacturing Applications. Butterworth-Heinemann. ISBN 0-7506-8025-3.
== External links ==
Prof. Amit Acharya's notes on compatibility on iMechanica | Wikipedia/Finite_deformation_tensors |
In mathematics, the tensor product of modules is a construction that allows arguments about bilinear maps (e.g. multiplication) to be carried out in terms of linear maps. The module construction is analogous to the construction of the tensor product of vector spaces, but can be carried out for a pair of modules over a commutative ring resulting in a third module, and also for a pair of a right-module and a left-module over any ring, in which case the result is an abelian group. Tensor products are important in areas of abstract algebra, homological algebra, algebraic topology, algebraic geometry, operator algebras and noncommutative geometry. The universal property of the tensor product of vector spaces extends to more general situations in abstract algebra. The tensor product of an algebra and a module can be used for extension of scalars. For a commutative ring, the tensor product of modules can be iterated to form the tensor algebra of a module, allowing one to define multiplication in the module in a universal way.
== Balanced product ==
For a ring R, a right R-module M, a left R-module N, and an abelian group G, a map φ: M × N → G is said to be R-balanced, R-middle-linear or an R-balanced product if for all m, m′ in M, n, n′ in N, and r in R the following hold:
{\displaystyle {\begin{aligned}\varphi (m,n+n')&=\varphi (m,n)+\varphi (m,n')&&{\text{Dl}}_{\varphi }\\\varphi (m+m',n)&=\varphi (m,n)+\varphi (m',n)&&{\text{Dr}}_{\varphi }\\\varphi (m\cdot r,n)&=\varphi (m,r\cdot n)&&{\text{A}}_{\varphi }\\\end{aligned}}}
The set of all such balanced products over R from M × N to G is denoted by LR(M, N; G).
If φ, ψ are balanced products, then each of the operations φ + ψ and −φ defined pointwise is a balanced product. This turns the set LR(M, N; G) into an abelian group.
For M and N fixed, the map G ↦ LR(M, N; G) is a functor from the category of abelian groups to itself. The morphism part is given by mapping a group homomorphism g : G → G′ to the function φ ↦ g ∘ φ, which goes from LR(M, N; G) to LR(M, N; G′).
Remarks
Properties (Dl) and (Dr) express biadditivity of φ, which may be regarded as distributivity of φ over addition.
Property (A) resembles some associative property of φ.
Every ring R is an R-bimodule. So the ring multiplication (r, r′) ↦ r ⋅ r′ in R is an R-balanced product R × R → R.
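As a minimal sketch of these remarks (illustrative, with Z playing the role of R), integer multiplication can be spot-checked against the three balanced-product axioms:

```python
# Sketch: phi(m, n) = m * n is a Z-balanced product Z x Z -> Z, matching
# the remark that ring multiplication is an R-balanced product.
import random

def phi(m, n):
    return m * n

for _ in range(1000):
    m, m2, n, n2, r = (random.randint(-50, 50) for _ in range(5))
    assert phi(m, n + n2) == phi(m, n) + phi(m, n2)   # Dl: left distributivity
    assert phi(m + m2, n) == phi(m, n) + phi(m2, n)   # Dr: right distributivity
    assert phi(m * r, n) == phi(m, r * n)             # A: middle associativity
print("phi is Z-balanced on all sampled tuples")
```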
== Definition ==
For a ring R, a right R-module M, a left R-module N, the tensor product over R
{\displaystyle M\otimes _{R}N}
is an abelian group together with a balanced product (as defined above)
{\displaystyle \otimes :M\times N\to M\otimes _{R}N}
which is universal in the following sense:
For every abelian group G and every balanced product
{\displaystyle f:M\times N\to G}
there is a unique group homomorphism
{\displaystyle {\tilde {f}}:M\otimes _{R}N\to G}
such that
{\displaystyle {\tilde {f}}\circ \otimes =f.}
As with all universal properties, the above property defines the tensor product uniquely up to a unique isomorphism: any other abelian group and balanced product with the same properties will be isomorphic to M ⊗R N and ⊗. Indeed, the mapping ⊗ is called canonical, or more explicitly: the canonical mapping (or balanced product) of the tensor product.
The definition does not prove the existence of M ⊗R N; see below for a construction.
The tensor product can also be defined as a representing object for the functor G → LR(M,N;G); explicitly, this means there is a natural isomorphism:
{\displaystyle {\begin{cases}\operatorname {Hom} _{\mathbb {Z} }(M\otimes _{R}N,G)\simeq \operatorname {L} _{R}(M,N;G)\\g\mapsto g\circ \otimes \end{cases}}}
This is a succinct way of stating the universal mapping property given above. (If a priori one is given this natural isomorphism, then {\displaystyle \otimes } can be recovered by taking {\displaystyle G=M\otimes _{R}N} and then mapping the identity map.)
Similarly, given the natural identification
{\displaystyle \operatorname {L} _{R}(M,N;G)=\operatorname {Hom} _{R}(M,\operatorname {Hom} _{\mathbb {Z} }(N,G))}
, one can also define M ⊗R N by the formula
{\displaystyle \operatorname {Hom} _{\mathbb {Z} }(M\otimes _{R}N,G)\simeq \operatorname {Hom} _{R}(M,\operatorname {Hom} _{\mathbb {Z} }(N,G)).}
This is known as the tensor-hom adjunction; see also § Properties.
For each x in M, y in N, one writes x ⊗ y for the image of (x, y) under the canonical map {\displaystyle \otimes :M\times N\to M\otimes _{R}N}. It is often called a pure tensor. Strictly speaking, the correct notation would be x ⊗R y but it is conventional to drop R here. Then, immediately from the definition, there are relations:
x ⊗ (y + y′) = x ⊗ y + x ⊗ y′
(x + x′) ⊗ y = x ⊗ y + x′ ⊗ y
(x ⋅ r) ⊗ y = x ⊗ (r ⋅ y)
The universal property of a tensor product has the following important consequence:
Proof: For the first statement, let L be the subgroup of
{\displaystyle M\otimes _{R}N} generated by elements of the form in question, {\displaystyle Q=(M\otimes _{R}N)/L} and q the quotient map to Q. We have: {\displaystyle 0=q\circ \otimes } as well as {\displaystyle 0=0\circ \otimes }. Hence, by the uniqueness part of the universal property, q = 0. The second statement is because to define a module homomorphism, it is enough to define it on the generating set of the module. {\displaystyle \square }
== Application of the universal property of tensor products ==
=== Determining whether a tensor product of modules is zero ===
In practice, it is sometimes more difficult to show that a tensor product of R-modules {\displaystyle M\otimes _{R}N} is nonzero than it is to show that it is 0. The universal property gives a convenient way for checking this.
To check that a tensor product {\displaystyle M\otimes _{R}N} is nonzero, one can construct an R-bilinear map {\displaystyle f:M\times N\rightarrow G} to an abelian group {\displaystyle G} such that {\displaystyle f(m,n)\neq 0}. This works because if {\displaystyle m\otimes n=0}, then {\displaystyle f(m,n)={\bar {f}}(m\otimes n)={\bar {f}}(0)=0}.
For example, to see that {\displaystyle \mathbb {Z} /p\mathbb {Z} \otimes _{\mathbb {Z} }\mathbb {Z} /p\mathbb {Z} } is nonzero, take {\displaystyle G} to be {\displaystyle \mathbb {Z} /p\mathbb {Z} } and {\displaystyle (m,n)\mapsto mn}. This says that the pure tensors {\displaystyle m\otimes n\neq 0} as long as {\displaystyle mn} is nonzero in {\displaystyle \mathbb {Z} /p\mathbb {Z} }.
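A short numerical companion to this example (p = 7 assumed; names illustrative): the bilinear map f(m, n) = mn mod p certifies that the corresponding pure tensors are nonzero.

```python
# Sketch: nonzeroness witnesses for Z/p (x) Z/p via the map f(m, n) = mn mod p.
p = 7  # assumed prime

def f(m, n):
    return (m * n) % p

witnesses = [(m, n) for m in range(1, p) for n in range(1, p) if f(m, n) != 0]
print(len(witnesses), "pure tensors certified nonzero")  # 36 = (p - 1)**2
```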
=== For equivalent modules ===
The proposition says that one can work with explicit elements of the tensor products instead of invoking the universal property directly each time. This is very convenient in practice. For example, if R is commutative and the left and right actions by R on modules are considered to be equivalent, then
{\displaystyle M\otimes _{R}N} can naturally be furnished with the R-scalar multiplication by extending
{\displaystyle r\cdot (x\otimes y):=(r\cdot x)\otimes y=x\otimes (r\cdot y)}
to the whole {\displaystyle M\otimes _{R}N} by the previous proposition (strictly speaking, what is needed is a bimodule structure, not commutativity; see a paragraph below). Equipped with this R-module structure, {\displaystyle M\otimes _{R}N} satisfies a universal property similar to the above: for any R-module G, there is a natural isomorphism:
{\displaystyle {\begin{cases}\operatorname {Hom} _{R}(M\otimes _{R}N,G)\simeq \{R{\text{-bilinear maps }}M\times N\to G\},\\g\mapsto g\circ \otimes \end{cases}}}
If R is not necessarily commutative but if M has a left action by a ring S (for example, R), then
{\displaystyle M\otimes _{R}N} can be given the left S-module structure, like above, by the formula
{\displaystyle s\cdot (x\otimes y):=(s\cdot x)\otimes y.}
Analogously, if N has a right action by a ring S, then
{\displaystyle M\otimes _{R}N} becomes a right S-module.
=== Tensor product of linear maps and a change of base ring ===
Given linear maps
{\displaystyle f:M\to M'} of right modules over a ring R and {\displaystyle g:N\to N'} of left modules, there is a unique group homomorphism
{\displaystyle {\begin{cases}f\otimes g:M\otimes _{R}N\to M'\otimes _{R}N'\\x\otimes y\mapsto f(x)\otimes g(y)\end{cases}}}
The construction has a consequence that tensoring is a functor: each right R-module M determines the functor
{\displaystyle M\otimes _{R}-:R{\text{-Mod}}\longrightarrow {\text{Ab}}}
from the category of left modules to the category of abelian groups that sends N to M ⊗ N and a module homomorphism f to the group homomorphism 1 ⊗ f.
If
{\displaystyle f:R\to S} is a ring homomorphism and if M is a right S-module and N a left S-module, then there is the canonical surjective homomorphism:
{\displaystyle M\otimes _{R}N\to M\otimes _{S}N}
induced by
{\displaystyle M\times N{\overset {\otimes _{S}}{\longrightarrow }}M\otimes _{S}N.}
The resulting map is surjective since pure tensors x ⊗ y generate the whole module. In particular, taking R to be
{\displaystyle \mathbb {Z} }, this shows that every tensor product of modules is a quotient of a tensor product of abelian groups.
=== Several modules ===
(This section needs to be updated. For now, see § Properties for the more general discussion.)
It is possible to extend the definition to a tensor product of any number of modules over the same commutative ring. For example, the universal property of M1 ⊗ M2 ⊗ M3 is that each trilinear map on M1 × M2 × M3 corresponds to a unique linear map on M1 ⊗ M2 ⊗ M3.
The binary tensor product is associative: (M1 ⊗ M2) ⊗ M3 is naturally isomorphic to M1 ⊗ (M2 ⊗ M3). The tensor product of three modules defined by the universal property of trilinear maps is isomorphic to both of these iterated tensor products.
== Properties ==
=== Modules over general rings ===
Let R1, R2, R3, R be rings, not necessarily commutative.
For an R1-R2-bimodule M12 and a left R2-module M20, {\displaystyle M_{12}\otimes _{R_{2}}M_{20}} is a left R1-module.
For a right R2-module M02 and an R2-R3-bimodule M23, {\displaystyle M_{02}\otimes _{R_{2}}M_{23}} is a right R3-module.
(associativity) For a right R1-module M01, an R1-R2-bimodule M12, and a left R2-module M20 we have:
{\displaystyle \left(M_{01}\otimes _{R_{1}}M_{12}\right)\otimes _{R_{2}}M_{20}=M_{01}\otimes _{R_{1}}\left(M_{12}\otimes _{R_{2}}M_{20}\right).}
Since R is an R-R-bimodule, we have
{\displaystyle R\otimes _{R}R=R} with the ring multiplication {\displaystyle mn=:m\otimes _{R}n} as its canonical balanced product.
=== Modules over commutative rings ===
Let R be a commutative ring, and M, N and P be R-modules. Then (in the below, "=" denotes canonical isomorphisms; this attitude is permissible since a tensor product is defined only up to unique isomorphisms)
Identity
{\displaystyle R\otimes _{R}M=M.}
Associativity
{\displaystyle (M\otimes _{R}N)\otimes _{R}P=M\otimes _{R}(N\otimes _{R}P).}
Symmetry
{\displaystyle M\otimes _{R}N=N\otimes _{R}M.}
In fact, for any permutation σ of the set {1, ..., n}, there is a unique isomorphism:
{\displaystyle {\begin{cases}M_{1}\otimes _{R}\cdots \otimes _{R}M_{n}\longrightarrow M_{\sigma (1)}\otimes _{R}\cdots \otimes _{R}M_{\sigma (n)}\\x_{1}\otimes \cdots \otimes x_{n}\longmapsto x_{\sigma (1)}\otimes \cdots \otimes x_{\sigma (n)}\end{cases}}}
The first three properties (plus identities on morphisms) say that the category of R-modules, with R commutative, forms a symmetric monoidal category.
Distribution over direct sums
{\displaystyle M\otimes _{R}(N\oplus P)=(M\otimes _{R}N)\oplus (M\otimes _{R}P).}
In fact,
{\displaystyle M\otimes _{R}\left(\bigoplus \nolimits _{i\in I}N_{i}\right)=\bigoplus \nolimits _{i\in I}\left(M\otimes _{R}N_{i}\right),}
for an index set I of arbitrary cardinality. Since finite products coincide with finite direct sums, this implies:
Distribution over finite products
For any finitely many
{\displaystyle N_{i}},
{\displaystyle M\otimes _{R}\prod _{i=1}^{n}N_{i}=\prod _{i=1}^{n}M\otimes _{R}N_{i}.}
Base extension
If S is an R-algebra, writing
{\displaystyle -_{S}=S\otimes _{R}-},
{\displaystyle (M\otimes _{R}N)_{S}=M_{S}\otimes _{S}N_{S};}
cf. § Extension of scalars. A corollary is:
Distribution over localization
For any multiplicatively closed subset S of R,
{\displaystyle S^{-1}(M\otimes _{R}N)=S^{-1}M\otimes _{S^{-1}R}S^{-1}N}
as an {\displaystyle S^{-1}R}-module, since {\displaystyle S^{-1}R} is an R-algebra and {\displaystyle S^{-1}-=S^{-1}R\otimes _{R}-}.
Commutativity with direct limits
For any direct system of R-modules Mi,
{\displaystyle (\varinjlim M_{i})\otimes _{R}N=\varinjlim (M_{i}\otimes _{R}N).}
Adjunction
{\displaystyle \operatorname {Hom} _{R}(M\otimes _{R}N,P)=\operatorname {Hom} _{R}(M,\operatorname {Hom} _{R}(N,P)){\text{.}}}
A corollary is:
Right-exactness
If
{\displaystyle 0\to N'{\overset {f}{\to }}N{\overset {g}{\to }}N''\to 0}
is an exact sequence of R-modules, then
{\displaystyle M\otimes _{R}N'{\overset {1\otimes f}{\to }}M\otimes _{R}N{\overset {1\otimes g}{\to }}M\otimes _{R}N''\to 0}
is an exact sequence of R-modules, where
{\displaystyle (1\otimes f)(x\otimes y)=x\otimes f(y).}
Tensor-hom relation
There is a canonical R-linear map:
{\displaystyle \operatorname {Hom} _{R}(M,N)\otimes P\to \operatorname {Hom} _{R}(M,N\otimes P),}
which is an isomorphism if either M or P is a finitely generated projective module (see § As linearity-preserving maps for the non-commutative case); more generally, there is a canonical R-linear map:
{\displaystyle \operatorname {Hom} _{R}(M,N)\otimes \operatorname {Hom} _{R}(M',N')\to \operatorname {Hom} _{R}(M\otimes M',N\otimes N')}
which is an isomorphism if either {\displaystyle (M,N)} or {\displaystyle (M,M')} is a pair of finitely generated projective modules.
To give a practical example, suppose M, N are free modules with bases
{\displaystyle e_{i},i\in I} and {\displaystyle f_{j},j\in J}. Then M is the direct sum {\displaystyle M=\bigoplus _{i\in I}Re_{i}} and the same for N. By the distributive property, one has:
{\displaystyle M\otimes _{R}N=\bigoplus _{i,j}R(e_{i}\otimes f_{j});}
i.e., {\displaystyle e_{i}\otimes f_{j},\,i\in I,j\in J} are an R-basis of {\displaystyle M\otimes _{R}N}. Even if M is not free, a free presentation of M can be used to compute tensor products.
The tensor product, in general, does not commute with inverse limit: on the one hand,
{\displaystyle \mathbb {Q} \otimes _{\mathbb {Z} }\mathbb {Z} /p^{n}=0} (cf. "examples"). On the other hand,
{\displaystyle \left(\varprojlim \mathbb {Z} /p^{n}\right)\otimes _{\mathbb {Z} }\mathbb {Q} =\mathbb {Z} _{p}\otimes _{\mathbb {Z} }\mathbb {Q} =\mathbb {Z} _{p}\left[p^{-1}\right]=\mathbb {Q} _{p}}
where {\displaystyle \mathbb {Z} _{p},\mathbb {Q} _{p}} are the ring of p-adic integers and the field of p-adic numbers. See also "profinite integer" for an example in a similar spirit.
If R is not commutative, the order of tensor products could matter in the following way: we "use up" the right action of M and the left action of N to form the tensor product
{\displaystyle M\otimes _{R}N}; in particular, {\displaystyle N\otimes _{R}M} would not even be defined. If M, N are bi-modules, then {\displaystyle M\otimes _{R}N} has the left action coming from the left action of M and the right action coming from the right action of N; those actions need not be the same as the left and right actions of {\displaystyle N\otimes _{R}M}.
The associativity holds more generally for non-commutative rings: if M is a right R-module, N a (R, S)-module and P a left S-module, then
{\displaystyle (M\otimes _{R}N)\otimes _{S}P=M\otimes _{R}(N\otimes _{S}P)}
as abelian groups.
The general form of adjoint relation of tensor products says: if R is not necessarily commutative, M is a right R-module, N is a (R, S)-module, P is a right S-module, then as abelian group
{\displaystyle \operatorname {Hom} _{S}(M\otimes _{R}N,P)=\operatorname {Hom} _{R}(M,\operatorname {Hom} _{S}(N,P)),\,f\mapsto f'}
where {\displaystyle f'} is given by {\displaystyle f'(x)(y)=f(x\otimes y)}.
=== Tensor product of an R-module with the fraction field ===
Let R be an integral domain with fraction field K.
For any R-module M, {\displaystyle K\otimes _{R}M\cong K\otimes _{R}(M/M_{\operatorname {tor} })} as R-modules, where {\displaystyle M_{\operatorname {tor} }} is the torsion submodule of M.
If M is a torsion R-module then {\displaystyle K\otimes _{R}M=0}, and if M is not a torsion module then {\displaystyle K\otimes _{R}M\neq 0}.
If N is a submodule of M such that {\displaystyle M/N} is a torsion module, then {\displaystyle K\otimes _{R}N\cong K\otimes _{R}M} as R-modules by {\displaystyle x\otimes n\mapsto x\otimes n}.
In {\displaystyle K\otimes _{R}M}, {\displaystyle x\otimes m=0} if and only if {\displaystyle x=0} or {\displaystyle m\in M_{\operatorname {tor} }}. In particular, {\displaystyle M_{\operatorname {tor} }=\operatorname {ker} (M\to K\otimes _{R}M)} where {\displaystyle m\mapsto 1\otimes m}.
{\displaystyle K\otimes _{R}M\cong M_{(0)}} where {\displaystyle M_{(0)}} is the localization of the module {\displaystyle M} at the prime ideal {\displaystyle (0)} (i.e., the localization with respect to the nonzero elements).
=== Extension of scalars ===
The adjoint relation in the general form has an important special case: for any R-algebra S, M a right R-module, P a right S-module, using
{\displaystyle \operatorname {Hom} _{S}(S,-)=-}, we have the natural isomorphism:
{\displaystyle \operatorname {Hom} _{S}(M\otimes _{R}S,P)=\operatorname {Hom} _{R}(M,\operatorname {Res} _{R}(P)).}
This says that the functor
{\displaystyle -\otimes _{R}S} is a left adjoint to the forgetful functor {\displaystyle \operatorname {Res} _{R}}, which restricts an S-action to an R-action. Because of this, {\displaystyle -\otimes _{R}S} is often called the extension of scalars from R to S. In representation theory, when R, S are group algebras, the above relation becomes Frobenius reciprocity.
==== Examples ====
{\displaystyle R^{n}\otimes _{R}S=S^{n}}, for any R-algebra S (i.e., a free module remains free after extending scalars).
For a commutative ring
{\displaystyle R} and a commutative R-algebra S, we have:
{\displaystyle S\otimes _{R}R[x_{1},\dots ,x_{n}]=S[x_{1},\dots ,x_{n}];}
in fact, more generally,
{\displaystyle S\otimes _{R}(R[x_{1},\dots ,x_{n}]/I)=S[x_{1},\dots ,x_{n}]/IS[x_{1},\dots ,x_{n}],}
where {\displaystyle I} is an ideal.
Using {\displaystyle \mathbb {C} =\mathbb {R} [x]/(x^{2}+1)}, the previous example and the Chinese remainder theorem, we have as rings
{\displaystyle \mathbb {C} \otimes _{\mathbb {R} }\mathbb {C} =\mathbb {C} [x]/(x^{2}+1)=\mathbb {C} [x]/(x+i)\times \mathbb {C} [x]/(x-i)=\mathbb {C} ^{2}.}
This gives an example when a tensor product is a direct product.
{\displaystyle \mathbb {R} \otimes _{\mathbb {Z} }\mathbb {Z} [i]=\mathbb {R} [i]=\mathbb {C} }.
== Examples ==
The structure of a tensor product of quite ordinary modules may be unpredictable.
Let G be an abelian group in which every element has finite order (that is, G is a torsion abelian group; for example G can be a finite abelian group or {\displaystyle \mathbb {Q} /\mathbb {Z} }). Then:
{\displaystyle \mathbb {Q} \otimes _{\mathbb {Z} }G=0.}
Indeed, any {\displaystyle x\in \mathbb {Q} \otimes _{\mathbb {Z} }G} is of the form
{\displaystyle x=\sum _{i}r_{i}\otimes g_{i},\qquad r_{i}\in \mathbb {Q} ,g_{i}\in G.}
If {\displaystyle n_{i}} is the order of {\displaystyle g_{i}}, then we compute:
{\displaystyle x=\sum (r_{i}/n_{i})n_{i}\otimes g_{i}=\sum r_{i}/n_{i}\otimes n_{i}g_{i}=0.}
Similarly, one sees {\displaystyle \mathbb {Q} /\mathbb {Z} \otimes _{\mathbb {Z} }\mathbb {Q} /\mathbb {Z} =0.}
Here are some identities useful for calculation: Let R be a commutative ring, I, J ideals, M, N R-modules. Then
{\displaystyle R/I\otimes _{R}M=M/IM}. If M is flat, {\displaystyle IM=I\otimes _{R}M}.
{\displaystyle M/IM\otimes _{R/I}N/IN=M\otimes _{R}N\otimes _{R}R/I} (because tensoring commutes with base extensions)
{\displaystyle R/I\otimes _{R}R/J=R/(I+J)}.
Example: If G is an abelian group, {\displaystyle G\otimes _{\mathbb {Z} }\mathbb {Z} /n=G/nG}; this follows from 1.
Example: {\displaystyle \mathbb {Z} /n\otimes _{\mathbb {Z} }\mathbb {Z} /m=\mathbb {Z} /{\gcd(n,m)}}; this follows from 3. In particular, for distinct prime numbers p, q,
{\displaystyle \mathbb {Z} /p\mathbb {Z} \otimes \mathbb {Z} /q\mathbb {Z} =0.}
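A small numerical companion (values illustrative): the identity above reduces these tensor products to a gcd computation, and coprime moduli give the zero group.

```python
# Sketch: Z/n (x)_Z Z/m = Z/gcd(n, m), evaluated for a few sample pairs.
from math import gcd

for n, m in [(4, 6), (5, 7), (12, 18)]:
    g = gcd(n, m)
    print(f"Z/{n} (x) Z/{m} = Z/{g}" + ("  (zero group)" if g == 1 else ""))
```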
Tensor products can be applied to control the order of elements of groups. Let G be an abelian group. Then the multiples of 2 in {\displaystyle G\otimes \mathbb {Z} /2\mathbb {Z} } are zero.
Example: Let {\displaystyle \mu _{n}} be the group of n-th roots of unity. It is a cyclic group and cyclic groups are classified by orders. Thus, non-canonically, {\displaystyle \mu _{n}\approx \mathbb {Z} /n} and thus, when g is the gcd of n and m,
{\displaystyle \mu _{n}\otimes _{\mathbb {Z} }\mu _{m}\approx \mu _{g}.}
Example: Consider {\displaystyle \mathbb {Q} \otimes _{\mathbb {Z} }\mathbb {Q} }. Since {\displaystyle \mathbb {Q} \otimes _{\mathbb {Q} }\mathbb {Q} } is obtained from {\displaystyle \mathbb {Q} \otimes _{\mathbb {Z} }\mathbb {Q} } by imposing {\displaystyle \mathbb {Q} }-linearity on the middle, we have the surjection
{\displaystyle \mathbb {Q} \otimes _{\mathbb {Z} }\mathbb {Q} \to \mathbb {Q} \otimes _{\mathbb {Q} }\mathbb {Q} }
whose kernel is generated by elements of the form
{\displaystyle {r \over s}x\otimes y-x\otimes {r \over s}y}
where r, s, x, y are integers and s is nonzero. Since
{\displaystyle {r \over s}x\otimes y={r \over s}x\otimes {s \over s}y=x\otimes {r \over s}y,}
the kernel actually vanishes; hence, {\displaystyle \mathbb {Q} \otimes _{\mathbb {Z} }\mathbb {Q} =\mathbb {Q} \otimes _{\mathbb {Q} }\mathbb {Q} =\mathbb {Q} }.
However, consider {\displaystyle \mathbb {C} \otimes _{\mathbb {R} }\mathbb {C} } and {\displaystyle \mathbb {C} \otimes _{\mathbb {C} }\mathbb {C} }. As an {\displaystyle \mathbb {R} }-vector space, {\displaystyle \mathbb {C} \otimes _{\mathbb {R} }\mathbb {C} } has dimension 4, but {\displaystyle \mathbb {C} \otimes _{\mathbb {C} }\mathbb {C} } has dimension 2. Thus, {\displaystyle \mathbb {C} \otimes _{\mathbb {R} }\mathbb {C} } and {\displaystyle \mathbb {C} \otimes _{\mathbb {C} }\mathbb {C} } are not isomorphic.
Example: We propose to compare {\displaystyle \mathbb {R} \otimes _{\mathbb {Z} }\mathbb {R} } and {\displaystyle \mathbb {R} \otimes _{\mathbb {R} }\mathbb {R} }. As in the previous example, we have
{\displaystyle \mathbb {R} \otimes _{\mathbb {Z} }\mathbb {R} =\mathbb {R} \otimes _{\mathbb {Q} }\mathbb {R} }
as abelian groups and thus as {\displaystyle \mathbb {Q} }-vector spaces (any {\displaystyle \mathbb {Z} }-linear map between {\displaystyle \mathbb {Q} }-vector spaces is {\displaystyle \mathbb {Q} }-linear). As a {\displaystyle \mathbb {Q} }-vector space, {\displaystyle \mathbb {R} } has dimension (cardinality of a basis) of the continuum. Hence, {\displaystyle \mathbb {R} \otimes _{\mathbb {Q} }\mathbb {R} } has a {\displaystyle \mathbb {Q} }-basis indexed by a product of continua; thus its {\displaystyle \mathbb {Q} }-dimension is the continuum. Hence, for dimension reasons, there is a non-canonical isomorphism of {\displaystyle \mathbb {Q} }-vector spaces:
{\displaystyle \mathbb {R} \otimes _{\mathbb {Z} }\mathbb {R} \approx \mathbb {R} \otimes _{\mathbb {R} }\mathbb {R} .}
Consider the modules {\displaystyle M=\mathbb {C} [x,y,z]/(f),N=\mathbb {C} [x,y,z]/(g)} for {\displaystyle f,g\in \mathbb {C} [x,y,z]} irreducible polynomials such that {\displaystyle \gcd(f,g)=1}. Then,
{\displaystyle {\frac {\mathbb {C} [x,y,z]}{(f)}}\otimes _{\mathbb {C} [x,y,z]}{\frac {\mathbb {C} [x,y,z]}{(g)}}\cong {\frac {\mathbb {C} [x,y,z]}{(f,g)}}}
Another useful family of examples comes from changing the scalars. Notice that
{\displaystyle {\frac {\mathbb {Z} [x_{1},\ldots ,x_{n}]}{(f_{1},\ldots ,f_{k})}}\otimes _{\mathbb {Z} }R\cong {\frac {R[x_{1},\ldots ,x_{n}]}{(f_{1},\ldots ,f_{k})}}}
Good examples of this phenomenon to look at are when {\displaystyle R=\mathbb {Q} ,\mathbb {C} ,\mathbb {Z} /(p^{k}),\mathbb {Z} _{p},\mathbb {Q} _{p}}.
== Construction ==
The construction of M ⊗ N takes a quotient of a free abelian group with basis the symbols m ∗ n, used here to denote the ordered pair (m, n), for m in M and n in N by the subgroup generated by all elements of the form
−m ∗ (n + n′) + m ∗ n + m ∗ n′
−(m + m′) ∗ n + m ∗ n + m′ ∗ n
(m · r) ∗ n − m ∗ (r · n)
where m, m′ in M, n, n′ in N, and r in R. The quotient map takes m ∗ n = (m, n) to the coset containing m ∗ n; that is,
{\displaystyle \otimes :M\times N\to M\otimes _{R}N,\,(m,n)\mapsto [m*n]}
is balanced, and the subgroup has been chosen minimally so that this map is balanced. The universal property of ⊗ follows from the universal properties of a free abelian group and a quotient.
If S is a subring of a ring R, then
{\displaystyle M\otimes _{R}N} is the quotient group of {\displaystyle M\otimes _{S}N} by the subgroup generated by {\displaystyle xr\otimes _{S}y-x\otimes _{S}ry,\,r\in R,x\in M,y\in N}, where {\displaystyle x\otimes _{S}y} is the image of {\displaystyle (x,y)} under {\displaystyle \otimes :M\times N\to M\otimes _{S}N}. In particular, any tensor product of R-modules can be constructed, if so desired, as a quotient of a tensor product of abelian groups by imposing the R-balanced product property.
More category-theoretically, let σ be the given right action of R on M; i.e., σ(m, r) = m · r and τ the left action of R on N. Then, provided the tensor product of abelian groups is already defined, the tensor product of M and N over R can be defined as the coequalizer:
{\displaystyle M\otimes R\otimes N{{{} \atop {\overset {\sigma \times 1}{\to }}} \atop {{\underset {1\times \tau }{\to }} \atop {}}}M\otimes N\to M\otimes _{R}N}
where {\displaystyle \otimes } without a subscript refers to the tensor product of abelian groups.
In the construction of the tensor product over a commutative ring R, the R-module structure can be built in from the start by forming the quotient of a free R-module by the submodule generated by the elements given above for the general construction, augmented by the elements r ⋅ (m ∗ n) − m ∗ (r ⋅ n). Alternately, the general construction can be given a Z(R)-module structure by defining the scalar action by r ⋅ (m ⊗ n) = m ⊗ (r ⋅ n) when this is well-defined, which is precisely when r ∈ Z(R), the centre of R.
The direct product of M and N is rarely isomorphic to the tensor product of M and N. When R is not commutative, then the tensor product requires that M and N be modules on opposite sides, while the direct product requires they be modules on the same side. In all cases the only function from M × N to G that is both linear and bilinear is the zero map.
== As linear maps ==
In the general case, not all the properties of a tensor product of vector spaces extend to modules. Yet, some useful properties of the tensor product, considered as module homomorphisms, remain.
=== Dual module ===
The dual module of a right R-module E, is defined as HomR(E, R) with the canonical left R-module structure, and is denoted E∗. The canonical structure is the pointwise operations of addition and scalar multiplication. Thus, E∗ is the set of all R-linear maps E → R (also called linear forms), with operations
{\displaystyle (\phi +\psi )(u)=\phi (u)+\psi (u),\quad \phi ,\psi \in E^{*},u\in E}
{\displaystyle (r\cdot \phi )(u)=r\cdot \phi (u),\quad \phi \in E^{*},u\in E,r\in R.}
The dual of a left R-module is defined analogously, with the same notation.
There is always a canonical homomorphism E → E∗∗ from E to its second dual. It is an isomorphism if E is a free module of finite rank. In general, E is called a reflexive module if the canonical homomorphism is an isomorphism.
=== Duality pairing ===
We denote the natural pairing of its dual E∗ and a right R-module E, or of a left R-module F and its dual F∗ as
{\displaystyle \langle \cdot ,\cdot \rangle :E^{*}\times E\to R:(e',e)\mapsto \langle e',e\rangle =e'(e)}
{\displaystyle \langle \cdot ,\cdot \rangle :F\times F^{*}\to R:(f,f')\mapsto \langle f,f'\rangle =f'(f).}
The pairing is left R-linear in its left argument, and right R-linear in its right argument:
{\displaystyle \langle r\cdot g,h\cdot s\rangle =r\cdot \langle g,h\rangle \cdot s,\quad r,s\in R.}
=== An element as a (bi)linear map ===
In the general case, each element of the tensor product of modules gives rise to a left R-linear map, to a right R-linear map, and to an R-bilinear form. Unlike the commutative case, in the general case the tensor product is not an R-module, and thus does not support scalar multiplication.
Given right R-module E and right R-module F, there is a canonical homomorphism θ : F ⊗R E∗ → HomR(E, F) such that θ(f ⊗ e′) is the map e ↦ f ⋅ ⟨e′, e⟩.
Given left R-module E and right R-module F, there is a canonical homomorphism θ : F ⊗R E → HomR(E∗, F) such that θ(f ⊗ e) is the map e′ ↦ f ⋅ ⟨e, e′⟩.
Both cases hold for general modules, and become isomorphisms if the modules E and F are restricted to being finitely generated projective modules (in particular free modules of finite ranks). Thus, an element of a tensor product of modules over a ring R maps canonically onto an R-linear map, though as with vector spaces, constraints apply to the modules for this to be equivalent to the full space of such linear maps.
Given right R-module E and left R-module F, there is a canonical homomorphism θ : F∗ ⊗R E∗ → LR(F × E, R) such that θ(f′ ⊗ e′) is the map (f, e) ↦ ⟨f, f′⟩ ⋅ ⟨e′, e⟩. Thus, an element of a tensor product ξ ∈ F∗ ⊗R E∗ may be thought of giving rise to or acting as an R-bilinear map F × E → R.
=== Trace ===
Let R be a commutative ring and E an R-module. Then there is a canonical R-linear map:
{\displaystyle E^{*}\otimes _{R}E\to R}
induced through linearity by {\displaystyle \phi \otimes x\mapsto \phi (x)}; it is the unique R-linear map corresponding to the natural pairing.
If E is a finitely generated projective R-module, then one can identify
{\displaystyle E^{*}\otimes _{R}E=\operatorname {End} _{R}(E)}
through the canonical homomorphism mentioned above, and then the above is the trace map:
{\displaystyle \operatorname {tr} :\operatorname {End} _{R}(E)\to R.}
When R is a field, this is the usual trace of a linear transformation.
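A brief numerical sketch over the field of real numbers (assumed setting, not part of the article): identifying E∗ ⊗ E with End(E) for E = R³, the canonical map φ ⊗ x ↦ φ(x) becomes the ordinary matrix trace, here computed as a full contraction.

```python
# Sketch: the pairing map on E* (x) E = End(E) is the trace for E = R^3.
import numpy as np

A = np.arange(9.0).reshape(3, 3)       # an endomorphism of R^3
tr_via_pairing = np.einsum('ii->', A)  # contract the E* and E slots
assert tr_via_pairing == np.trace(A)
print(tr_via_pairing)                  # 12.0
```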
== Example from differential geometry: tensor field ==
The most prominent example of a tensor product of modules in differential geometry is the tensor product of the spaces of vector fields and differential forms. More precisely, if R is the (commutative) ring of smooth functions on a smooth manifold M, then one puts
{\displaystyle {\mathfrak {T}}_{q}^{p}=\Gamma (M,TM)^{\otimes p}\otimes _{R}\Gamma (M,T^{*}M)^{\otimes q}}
where Γ means the space of sections and the superscript {\displaystyle \otimes p} means tensoring p times over R. By definition, an element of {\displaystyle {\mathfrak {T}}_{q}^{p}} is a tensor field of type (p, q).
As R-modules, {\displaystyle {\mathfrak {T}}_{p}^{q}} is the dual module of {\displaystyle {\mathfrak {T}}_{q}^{p}}.
To lighten the notation, put {\displaystyle E=\Gamma (M,TM)} and so {\displaystyle E^{*}=\Gamma (M,T^{*}M)}. When p, q ≥ 1, for each (k, l) with 1 ≤ k ≤ p, 1 ≤ l ≤ q, there is an R-multilinear map:
{\displaystyle E^{p}\times {E^{*}}^{q}\to {\mathfrak {T}}_{q-1}^{p-1},\,(X_{1},\dots ,X_{p},\omega _{1},\dots ,\omega _{q})\mapsto \langle X_{k},\omega _{l}\rangle X_{1}\otimes \cdots \otimes {\widehat {X_{l}}}\otimes \cdots \otimes X_{p}\otimes \omega _{1}\otimes \cdots {\widehat {\omega _{l}}}\otimes \cdots \otimes \omega _{q}}
where {\displaystyle E^{p}} means {\displaystyle \prod _{1}^{p}E} and the hat means a term is omitted. By the universal property, it corresponds to a unique R-linear map:
{\displaystyle C_{l}^{k}:{\mathfrak {T}}_{q}^{p}\to {\mathfrak {T}}_{q-1}^{p-1}.}
It is called the contraction of tensors in the index (k, l). Unwinding what the universal property says one sees:
{\displaystyle C_{l}^{k}(X_{1}\otimes \cdots \otimes X_{p}\otimes \omega _{1}\otimes \cdots \otimes \omega _{q})=\langle X_{k},\omega _{l}\rangle X_{1}\otimes \cdots {\widehat {X_{l}}}\cdots \otimes X_{p}\otimes \omega _{1}\otimes \cdots {\widehat {\omega _{l}}}\cdots \otimes \omega _{q}.}
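A pointwise numerical illustration (components at a single point, with assumed random values): for a pure type (2, 1) tensor X₁ ⊗ X₂ ⊗ ω, the contraction C¹₁ pairs the first vector slot with the covector slot, leaving ⟨X₁, ω⟩ X₂ as the formula above unwinds.

```python
# Sketch: contraction C^1_1 of a pure (2,1)-tensor at a point, via einsum.
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.standard_normal(3)
X2 = rng.standard_normal(3)
omega = rng.standard_normal(3)
T = np.einsum('i,j,a->ija', X1, X2, omega)  # pure tensor X1 (x) X2 (x) omega

contracted = np.einsum('iji->j', T)         # sum over the paired index
expected = (omega @ X1) * X2                # <X1, omega> X2
assert np.allclose(contracted, expected)
print(contracted)
```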
Remark: The preceding discussion is standard in textbooks on differential geometry (e.g., Helgason). In a way, the sheaf-theoretic construction (i.e., the language of sheaf of modules) is more natural and increasingly more common; for that, see the section § Tensor product of sheaves of modules.
== Relationship to flat modules ==
In general,
{\displaystyle -\otimes _{R}-:{\text{Mod-}}R\times R{\text{-Mod}}\longrightarrow \mathrm {Ab} }
is a bifunctor which accepts a right and a left R-module pair as input, and assigns them to the tensor product in the category of abelian groups.
By fixing a right R-module M, a functor
{\displaystyle M\otimes _{R}-:R{\text{-Mod}}\longrightarrow \mathrm {Ab} }
arises, and symmetrically a left R-module N could be fixed to create a functor
{\displaystyle -\otimes _{R}N:{\text{Mod-}}R\longrightarrow \mathrm {Ab} .}
Unlike the Hom bifunctor {\displaystyle \mathrm {Hom} _{R}(-,-),} the tensor functor is covariant in both inputs.
It can be shown that {\displaystyle M\otimes _{R}-} and {\displaystyle -\otimes _{R}N} are always right exact functors, but not necessarily left exact ({\displaystyle 0\to \mathbb {Z} \to \mathbb {Z} \to \mathbb {Z} _{n}\to 0}, where the first map is multiplication by {\displaystyle n}, is exact but not after taking the tensor with {\displaystyle \mathbb {Z} _{n}}). By definition, a module T is a flat module if {\displaystyle T\otimes _{R}-} is an exact functor.
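A concrete sketch of this failure of left exactness (n = 4 assumed; names illustrative): tensoring the exact sequence 0 → Z → Z → Z/n → 0 with Z/n turns the injection "multiply by n" into the zero map on Z/n.

```python
# Sketch: (multiplication by n) (x) id on Z (x) Z/n = Z/n is the zero map.
n = 4

def times_n_mod_n(x):
    """Image in Z/n of n*x, i.e. the tensored injection applied to x."""
    return (n * x) % n

image = {times_n_mod_n(x) for x in range(n)}
print(image)  # {0}: an injective map became zero after tensoring with Z/n
```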
If {\displaystyle \{m_{i}\mid i\in I\}} and {\displaystyle \{n_{j}\mid j\in J\}} are generating sets for M and N, respectively, then {\displaystyle \{m_{i}\otimes n_{j}\mid i\in I,j\in J\}} will be a generating set for {\displaystyle M\otimes _{R}N.}
Because the tensor functor {\displaystyle M\otimes _{R}-} sometimes fails to be left exact, this may not be a minimal generating set, even if the original generating sets are minimal. If M is a flat module, the functor {\displaystyle M\otimes _{R}-} is exact by the very definition of a flat module. If the tensor products are taken over a field F, we are in the case of vector spaces as above. Since all F-modules are flat, the bifunctor {\displaystyle -\otimes _{R}-} is exact in both positions, and if the two given generating sets are bases, then {\displaystyle \{m_{i}\otimes n_{j}\mid i\in I,j\in J\}} indeed forms a basis for {\displaystyle M\otimes _{F}N}.
== Additional structure ==
If S and T are commutative R-algebras, then, similar to § For equivalent modules, S ⊗R T will be a commutative R-algebra as well, with the multiplication map defined by (m1 ⊗ m2) (n1 ⊗ n2) = (m1n1 ⊗ m2n2) and extended by linearity. In this setting, the tensor product becomes a fibered coproduct in the category of commutative R-algebras. (But it is not a coproduct in the category of R-algebras.)
If M and N are both R-modules over a commutative ring, then their tensor product is again an R-module. If R is a ring, RM is a left R-module, and the commutator rs − sr of any two elements r and s of R is in the annihilator of M, then we can make M into a right R-module by setting mr = rm. The action of R on M factors through an action of a quotient commutative ring. In this case the tensor product of M with itself over R is again an R-module. This is a very common technique in commutative algebra.
== Generalization ==
=== Tensor product of complexes of modules ===
If X, Y are complexes of R-modules (R a commutative ring), then their tensor product is the complex given by
{\displaystyle (X\otimes _{R}Y)_{n}=\sum _{i+j=n}X_{i}\otimes _{R}Y_{j},}
with the differential given by: for x in Xi and y in Yj,
{\displaystyle d_{X\otimes Y}(x\otimes y)=d_{X}(x)\otimes y+(-1)^{i}x\otimes d_{Y}(y).}
For example, if C is a chain complex of flat abelian groups and if G is an abelian group, then the homology group of {\displaystyle C\otimes _{\mathbb {Z} }G} is the homology group of C with coefficients in G (see also: universal coefficient theorem).
=== Tensor product of sheaves of modules ===
The tensor product of sheaves of modules is the sheaf associated to the pre-sheaf of the tensor products of the modules of sections over open subsets.
In this setup, for example, one can define a tensor field on a smooth manifold M as a (global or local) section of the tensor product (called tensor bundle)
{\displaystyle (TM)^{\otimes p}\otimes _{O}(T^{*}M)^{\otimes q}}
where O is the sheaf of rings of smooth functions on M and the bundles {\displaystyle TM,T^{*}M} are viewed as locally free sheaves on M.
The exterior bundle on M is the subbundle of the tensor bundle consisting of all antisymmetric covariant tensors. Sections of the exterior bundle are differential forms on M.
One important case when one forms a tensor product over a sheaf of non-commutative rings appears in the theory of D-modules; that is, tensor products over the sheaf of differential operators.
== See also ==
Tor functor
Tensor product of algebras
Tensor product of fields
Derived tensor product
Eilenberg–Watts theorem
== Notes ==
== References ==
Bourbaki, Algebra
Helgason, Sigurdur (1978), Differential geometry, Lie groups and symmetric spaces, Academic Press, ISBN 0-12-338460-5
Northcott, D.G. (1984), Multilinear Algebra, Cambridge University Press, ISBN 613-0-04808-4.
Hazewinkel, Michiel; Gubareni, Nadezhda Mikhaĭlovna; Gubareni, Nadiya; Kirichenko, Vladimir V. (2004), Algebras, rings and modules, Springer, ISBN 978-1-4020-2690-4.
May, Peter (1999). A concise course in algebraic topology (PDF). University of Chicago Press. | Wikipedia/Tensor_product_of_modules |
A tensor is an algebraic object that describes a multilinear relationship between sets of algebraic objects related to a vector space.
Tensor may also refer to:
== Mathematics ==
Tensor (intrinsic definition)
Tensor field
Tensor product
Tensor (obsolete), the norm used on the quaternion algebra in William Rowan Hamilton's work; see Classical Hamiltonian quaternions § Tensor
Symmetric tensor, a tensor that is invariant under a permutation of its vector arguments
== Computer science ==
Tensor (machine learning), the application of tensors to artificial neural networks
Tensor Processing Unit, an integrated circuit developed by Google for neural network machine learning
Google Tensor, a system on a chip (SoC) found on some Pixel smartphones beginning with the Pixel 6
TensorFlow, a technology developed by Google
== Other uses ==
Tensor Trucks, a skateboarding truck company
Tensor lamp, a trademarked brand of small high-intensity low-voltage desk lamp
== See also ==
Tensor muscle (disambiguation)
Tensor type, in tensor analysis
Category: Tensors
Glossary of tensor theory
Curvature tensor (disambiguation)
Stress tensor (disambiguation)
Tense (disambiguation) | Wikipedia/Tensor_(disambiguation) |
In mechanics, strain is defined as relative deformation, compared to a reference position configuration. Different equivalent choices may be made for the expression of a strain field depending on whether it is defined with respect to the initial or the final configuration of the body and on whether the metric tensor or its dual is considered.
Strain has dimension of a length ratio, with SI base units of meter per meter (m/m).
Hence strains are dimensionless and are usually expressed as a decimal fraction or a percentage.
Parts-per notation is also used, e.g., parts per million or parts per billion (sometimes called "microstrains" and "nanostrains", respectively), corresponding to μm/m and nm/m.
Strain can be formulated as the spatial derivative of displacement:
{\displaystyle {\boldsymbol {\varepsilon }}\doteq {\cfrac {\partial }{\partial \mathbf {X} }}\left(\mathbf {x} -\mathbf {X} \right)={\boldsymbol {F}}'-{\boldsymbol {I}},}
where I is the identity tensor.
The displacement of a body may be expressed in the form x = F(X), where X is the reference position of material points of the body;
displacement has units of length and does not distinguish between rigid body motions (translations and rotations) and deformations (changes in shape and size) of the body.
The spatial derivative of a uniform translation is zero, thus strains measure how much a given displacement differs locally from a rigid-body motion.
A strain is in general a tensor quantity. Physical insight into strains can be gained by observing that a given strain can be decomposed into normal and shear components. The amount of stretch or compression along material line elements or fibers is the normal strain, and the amount of distortion associated with the sliding of plane layers over each other is the shear strain, within a deforming body. This could be applied by elongation, shortening, or volume changes, or angular distortion.
The state of strain at a material point of a continuum body is defined as the totality of all the changes in length of material lines or fibers, the normal strain, which pass through that point and also the totality of all the changes in the angle between pairs of lines initially perpendicular to each other, the shear strain, radiating from this point. However, it is sufficient to know the normal and shear components of strain on a set of three mutually perpendicular directions.
If there is an increase in length of the material line, the normal strain is called tensile strain; otherwise, if there is reduction or compression in the length of the material line, it is called compressive strain.
== Strain regimes ==
Depending on the amount of strain, or local deformation, the analysis of deformation is subdivided into three deformation theories:
Finite strain theory, also called large strain theory, large deformation theory, deals with deformations in which both rotations and strains are arbitrarily large. In this case, the undeformed and deformed configurations of the continuum are significantly different and a clear distinction has to be made between them. This is commonly the case with elastomers, plastically-deforming materials and other fluids and biological soft tissue.
Infinitesimal strain theory, also called small strain theory, small deformation theory, small displacement theory, or small displacement-gradient theory where strains and rotations are both small. In this case, the undeformed and deformed configurations of the body can be assumed identical. The infinitesimal strain theory is used in the analysis of deformations of materials exhibiting elastic behavior, such as materials found in mechanical and civil engineering applications, e.g. concrete and steel.
Large-displacement or large-rotation theory, which assumes small strains but large rotations and displacements.
== Strain measures ==
In each of these theories the strain is then defined differently. The engineering strain is the most common definition applied to materials used in mechanical and structural engineering, which are subjected to very small deformations. On the other hand, for some materials, e.g., elastomers and polymers, subjected to large deformations, the engineering definition of strain is not applicable, e.g. typical engineering strains greater than 1%; thus other more complex definitions of strain are required, such as stretch, logarithmic strain, Green strain, and Almansi strain.
=== Engineering strain ===
Engineering strain, also known as Cauchy strain, is expressed as the ratio of total deformation to the initial dimension of the material body on which forces are applied. In the case of a material line element or fiber axially loaded, its elongation gives rise to an engineering normal strain or engineering extensional strain e, which equals the relative elongation or the change in length ΔL per unit of the original length L of the line element or fibers (in meters per meter). The normal strain is positive if the material fibers are stretched and negative if they are compressed. Thus, we have
{\displaystyle e={\frac {\Delta L}{L}}={\frac {l-L}{L}}},
where e is the engineering normal strain, L is the original length of the fiber and l is the final length of the fiber.
The true shear strain is defined as the change in the angle (in radians) between two material line elements initially perpendicular to each other in the undeformed or initial configuration. The engineering shear strain is defined as the tangent of that angle, and is equal to the length of deformation at its maximum divided by the perpendicular length in the plane of force application, which sometimes makes it easier to calculate.
=== Stretch ratio ===
The stretch ratio or extension ratio (symbol λ) is an alternative measure related to the extensional or normal strain of an axially loaded differential line element. It is defined as the ratio between the final length l and the initial length L of the material line.
{\displaystyle \lambda ={\frac {l}{L}}}
The extension ratio λ is related to the engineering strain e by
{\displaystyle e=\lambda -1}
This equation implies that when the normal strain is zero, so that there is no deformation, the stretch ratio is equal to unity.
The stretch ratio is used in the analysis of materials that exhibit large deformations, such as elastomers, which can sustain stretch ratios of 3 or 4 before they fail. On the other hand, traditional engineering materials, such as concrete or steel, fail at much lower stretch ratios.
=== Logarithmic strain ===
The logarithmic strain ε is also called true strain or Hencky strain. Considering an incremental strain (after Ludwik)
{\displaystyle \delta \varepsilon ={\frac {\delta l}{l}}}
the logarithmic strain is obtained by integrating this incremental strain:
{\displaystyle {\begin{aligned}\int \delta \varepsilon &=\int _{L}^{l}{\frac {\delta l}{l}}\\\varepsilon &=\ln \left({\frac {l}{L}}\right)=\ln(\lambda )\\&=\ln(1+e)\\&=e-{\frac {e^{2}}{2}}+{\frac {e^{3}}{3}}-\cdots \end{aligned}}}
where e is the engineering strain. The logarithmic strain provides the correct measure of the final strain when deformation takes place in a series of increments, taking into account the influence of the strain path.
=== Green strain ===
The Green strain is defined as:
{\displaystyle \varepsilon _{G}={\tfrac {1}{2}}\left({\frac {l^{2}-L^{2}}{L^{2}}}\right)={\tfrac {1}{2}}(\lambda ^{2}-1)}
=== Almansi strain ===
The Euler-Almansi strain is defined as
{\displaystyle \varepsilon _{E}={\tfrac {1}{2}}\left({\frac {l^{2}-L^{2}}{l^{2}}}\right)={\tfrac {1}{2}}\left(1-{\frac {1}{\lambda ^{2}}}\right)}
== Strain tensor ==
The (infinitesimal) strain tensor (symbol ε) is defined in the International System of Quantities (ISQ), more specifically in ISO 80000-4 (Mechanics), as a "tensor quantity representing the deformation of matter caused by stress. Strain tensor is symmetric and has three linear strain and three shear strain (Cartesian) components."
ISO 80000-4 further defines linear strain as the "quotient of change in length of an object and its length" and shear strain as the "quotient of parallel displacement of two surfaces of a layer and the thickness of the layer".
Thus, strains are classified as either normal or shear. A normal strain is perpendicular to the face of an element, and a shear strain is parallel to it. These definitions are consistent with those of normal stress and shear stress.
The strain tensor can then be expressed in terms of normal and shear components as:
{\displaystyle {\underline {\underline {\boldsymbol {\varepsilon }}}}={\begin{bmatrix}\varepsilon _{xx}&\varepsilon _{xy}&\varepsilon _{xz}\\\varepsilon _{yx}&\varepsilon _{yy}&\varepsilon _{yz}\\\varepsilon _{zx}&\varepsilon _{zy}&\varepsilon _{zz}\\\end{bmatrix}}={\begin{bmatrix}\varepsilon _{xx}&{\tfrac {1}{2}}\gamma _{xy}&{\tfrac {1}{2}}\gamma _{xz}\\{\tfrac {1}{2}}\gamma _{yx}&\varepsilon _{yy}&{\tfrac {1}{2}}\gamma _{yz}\\{\tfrac {1}{2}}\gamma _{zx}&{\tfrac {1}{2}}\gamma _{zy}&\varepsilon _{zz}\\\end{bmatrix}}}
=== Geometric setting ===
Consider a two-dimensional, infinitesimal, rectangular material element with dimensions dx × dy, which, after deformation, takes the form of a rhombus. The deformation is described by the displacement field u. From the geometry of the adjacent figure we have
{\displaystyle \mathrm {length} (AB)=dx}
and
{\displaystyle {\begin{aligned}\mathrm {length} (ab)&={\sqrt {\left(dx+{\frac {\partial u_{x}}{\partial x}}dx\right)^{2}+\left({\frac {\partial u_{y}}{\partial x}}dx\right)^{2}}}\\&={\sqrt {dx^{2}\left(1+{\frac {\partial u_{x}}{\partial x}}\right)^{2}+dx^{2}\left({\frac {\partial u_{y}}{\partial x}}\right)^{2}}}\\&=dx~{\sqrt {\left(1+{\frac {\partial u_{x}}{\partial x}}\right)^{2}+\left({\frac {\partial u_{y}}{\partial x}}\right)^{2}}}\end{aligned}}}
For very small displacement gradients the squares of the derivatives of uy and ux are negligible and we have
{\displaystyle \mathrm {length} (ab)\approx dx\left(1+{\frac {\partial u_{x}}{\partial x}}\right)=dx+{\frac {\partial u_{x}}{\partial x}}dx}
=== Normal strain ===
For an isotropic material that obeys Hooke's law, a normal stress will cause a normal strain. Normal strains produce dilations.
The normal strain in the x-direction of the rectangular element is defined by
{\displaystyle \varepsilon _{x}={\frac {\text{extension}}{\text{original length}}}={\frac {\mathrm {length} (ab)-\mathrm {length} (AB)}{\mathrm {length} (AB)}}={\frac {\partial u_{x}}{\partial x}}}
Similarly, the normal strain in the y- and z-directions becomes
{\displaystyle \varepsilon _{y}={\frac {\partial u_{y}}{\partial y}}\quad ,\qquad \varepsilon _{z}={\frac {\partial u_{z}}{\partial z}}}
=== Shear strain ===
The engineering shear strain (γxy) is defined as the change in angle between lines AC and AB. Therefore,
{\displaystyle \gamma _{xy}=\alpha +\beta }
From the geometry of the figure, we have
{\displaystyle {\begin{aligned}\tan \alpha &={\frac {{\tfrac {\partial u_{y}}{\partial x}}dx}{dx+{\tfrac {\partial u_{x}}{\partial x}}dx}}={\frac {\tfrac {\partial u_{y}}{\partial x}}{1+{\tfrac {\partial u_{x}}{\partial x}}}}\\\tan \beta &={\frac {{\tfrac {\partial u_{x}}{\partial y}}dy}{dy+{\tfrac {\partial u_{y}}{\partial y}}dy}}={\frac {\tfrac {\partial u_{x}}{\partial y}}{1+{\tfrac {\partial u_{y}}{\partial y}}}}\end{aligned}}}
For small displacement gradients we have
{\displaystyle {\frac {\partial u_{x}}{\partial x}}\ll 1~;~~{\frac {\partial u_{y}}{\partial y}}\ll 1}
For small rotations, i.e. α and β are ≪ 1 we have tan α ≈ α, tan β ≈ β. Therefore,
{\displaystyle \alpha \approx {\frac {\partial u_{y}}{\partial x}}~;~~\beta \approx {\frac {\partial u_{x}}{\partial y}}}
thus
{\displaystyle \gamma _{xy}=\alpha +\beta ={\frac {\partial u_{y}}{\partial x}}+{\frac {\partial u_{x}}{\partial y}}}
By interchanging x and y and ux and uy, it can be shown that γxy = γyx.
Similarly, for the yz- and xz-planes, we have
{\displaystyle \gamma _{yz}=\gamma _{zy}={\frac {\partial u_{y}}{\partial z}}+{\frac {\partial u_{z}}{\partial y}}\quad ,\qquad \gamma _{zx}=\gamma _{xz}={\frac {\partial u_{z}}{\partial x}}+{\frac {\partial u_{x}}{\partial z}}}
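In practice, all of these components are obtained at once as the symmetric part of the displacement gradient. A minimal numpy sketch, with an invented gradient matrix purely for illustration:

```python
import numpy as np

# Hypothetical displacement gradient H[i, j] = du_i/dx_j at a material point
# (numbers invented for illustration).
H = np.array([[0.0010, 0.0004, 0.0000],
              [0.0002, -0.0005, 0.0003],
              [0.0000, 0.0001, 0.0008]])

eps = 0.5 * (H + H.T)       # infinitesimal strain tensor eps_ij
gamma_xy = 2.0 * eps[0, 1]  # engineering shear gamma_xy = du_y/dx + du_x/dy

print(np.allclose(eps, eps.T))  # True: the strain tensor is symmetric
print(gamma_xy)                 # 0.0006
```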
=== Volume strain ===
The volumetric strain, also called bulk strain, is the relative variation of the volume arising from dilation or compression. For small deformations it equals the trace of the infinitesimal strain tensor:
δ = ΔV/V0 ≈ εxx + εyy + εzz
== Metric tensor ==
A strain field associated with a displacement is defined, at any point, by the change in length of the tangent vectors representing the speeds of arbitrarily parametrized curves passing through that point. A basic geometric result, due to Fréchet, von Neumann and Jordan, states that, if the lengths of the tangent vectors fulfil the axioms of a norm and the parallelogram law, then the length of a vector is the square root of the value of the quadratic form associated, by the polarization formula, with a positive definite bilinear map called the metric tensor.
== See also ==
Stress measures
Strain rate
Strain tensor
== References ==
In mathematics, Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields on a differentiable manifold, with or without a metric tensor or connection. It is also the modern name for what used to be called the absolute differential calculus (the foundation of tensor calculus), tensor calculus or tensor analysis developed by Gregorio Ricci-Curbastro in 1887–1896, and subsequently popularized in a paper written with his pupil Tullio Levi-Civita in 1900. Jan Arnoldus Schouten developed the modern notation and formalism for this mathematical framework, and made contributions to the theory, during its applications to general relativity and differential geometry in the early twentieth century. The basis of modern tensor analysis was developed by Bernhard Riemann in a paper from 1861.
A component of a tensor is a real number that is used as a coefficient of a basis element for the tensor space. The tensor is the sum of its components multiplied by their corresponding basis elements. Tensors and tensor fields can be expressed in terms of their components, and operations on tensors and tensor fields can be expressed in terms of operations on their components. The description of tensor fields and operations on them in terms of their components is the focus of the Ricci calculus. This notation allows an efficient expression of such tensor fields and operations. While much of the notation may be applied with any tensors, operations relating to a differential structure are only applicable to tensor fields. Where needed, the notation extends to components of non-tensors, particularly multidimensional arrays.
A tensor may be expressed as a linear sum of the tensor product of vector and covector basis elements. The resulting tensor components are labelled by indices of the basis. Each index has one possible value per dimension of the underlying vector space. The number of indices equals the degree (or order) of the tensor.
For compactness and convenience, the Ricci calculus incorporates Einstein notation, which implies summation over indices repeated within a term and universal quantification over free indices. Expressions in the notation of the Ricci calculus may generally be interpreted as a set of simultaneous equations relating the components as functions over a manifold, usually more specifically as functions of the coordinates on the manifold. This allows intuitive manipulation of expressions with familiarity of only a limited set of rules.
== Applications ==
Tensor calculus has many applications in physics, engineering and computer science including elasticity, continuum mechanics, electromagnetism (see mathematical descriptions of the electromagnetic field), general relativity (see mathematics of general relativity), quantum field theory, and machine learning.
Working with a main proponent of the exterior calculus Élie Cartan, the influential geometer Shiing-Shen Chern summarizes the role of tensor calculus: "In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus."
== Notation for indices ==
=== Basis-related distinctions ===
==== Space and time coordinates ====
Where a distinction is to be made between the space-like basis elements and a time-like element in the four-dimensional spacetime of classical physics, this is conventionally done through indices as follows:
The lowercase Latin alphabet a, b, c, ... is used to indicate restriction to 3-dimensional Euclidean space, which take values 1, 2, 3 for the spatial components; and the time-like element, indicated by 0, is shown separately.
The lowercase Greek alphabet α, β, γ, ... is used for 4-dimensional spacetime, which typically take values 0 for time components and 1, 2, 3 for the spatial components.
Some sources use 4 instead of 0 as the index value corresponding to time; in this article, 0 is used. Otherwise, in general mathematical contexts, any symbols can be used for the indices, generally running over all dimensions of the vector space.
==== Coordinate and index notation ====
The author(s) will usually make it clear whether a subscript is intended as an index or as a label.
For example, in 3-D Euclidean space and using Cartesian coordinates; the coordinate vector A = (A1, A2, A3) = (Ax, Ay, Az) shows a direct correspondence between the subscripts 1, 2, 3 and the labels x, y, z. In the expression Ai, i is interpreted as an index ranging over the values 1, 2, 3, while the x, y, z subscripts are only labels, not variables. In the context of spacetime, the index value 0 conventionally corresponds to the label t.
==== Reference to basis ====
Indices themselves may be labelled using diacritic-like symbols, such as a hat (ˆ), bar (¯), tilde (˜), or prime (′) as in:
{\displaystyle X_{\hat {\phi }}\,,Y_{\bar {\lambda }}\,,Z_{\tilde {\eta }}\,,T_{\mu '}}
to denote a possibly different basis for that index. An example is in Lorentz transformations from one frame of reference to another, where one frame could be unprimed and the other primed, as in:
{\displaystyle v^{\mu '}=v^{\nu }L_{\nu }{}^{\mu '}.}
This is not to be confused with van der Waerden notation for spinors, which uses hats and overdots on indices to reflect the chirality of a spinor.
=== Upper and lower indices ===
Ricci calculus, and index notation more generally, distinguishes between lower indices (subscripts) and upper indices (superscripts); the latter are not exponents, even though they may look as such to the reader only familiar with other parts of mathematics.
In the special case that the metric tensor is everywhere equal to the identity matrix, it is possible to drop the distinction between upper and lower indices, and then all indices could be written in the lower position. Coordinate formulae in linear algebra such as
{\displaystyle a_{ij}b_{jk}}
for the product of matrices may be examples of this. But in general, the distinction between upper and lower indices should be maintained.
==== Covariant tensor components ====
A lower index (subscript) indicates covariance of the components with respect to that index:
{\displaystyle A_{\alpha \beta \gamma \cdots }}
==== Contravariant tensor components ====
An upper index (superscript) indicates contravariance of the components with respect to that index:
{\displaystyle A^{\alpha \beta \gamma \cdots }}
==== Mixed-variance tensor components ====
A tensor may have both upper and lower indices:
{\displaystyle A_{\alpha }{}^{\beta }{}_{\gamma }{}^{\delta \cdots }.}
Ordering of indices is significant, even when of differing variance. However, when it is understood that no indices will be raised or lowered while retaining the base symbol, covariant indices are sometimes placed below contravariant indices for notational convenience (e.g. with the generalized Kronecker delta).
==== Tensor type and degree ====
The number of each upper and lower indices of a tensor gives its type: a tensor with p upper and q lower indices is said to be of type (p, q), or to be a type-(p, q) tensor.
The number of indices of a tensor, regardless of variance, is called the degree of the tensor (alternatively, its valence, order or rank, although rank is ambiguous). Thus, a tensor of type (p, q) has degree p + q.
==== Summation convention ====
The same symbol occurring twice (one upper and one lower) within a term indicates a pair of indices that are summed over:
{\displaystyle A_{\alpha }B^{\alpha }\equiv \sum _{\alpha }A_{\alpha }B^{\alpha }\quad {\text{or}}\quad A^{\alpha }B_{\alpha }\equiv \sum _{\alpha }A^{\alpha }B_{\alpha }\,.}
The operation implied by such a summation is called tensor contraction:
{\displaystyle A_{\alpha }B^{\beta }\rightarrow A_{\alpha }B^{\alpha }\equiv \sum _{\alpha }A_{\alpha }B^{\alpha }\,.}
This summation may occur more than once within a term with a distinct symbol per pair of indices, for example:
{\displaystyle A_{\alpha }{}^{\gamma }B^{\alpha }C_{\gamma }{}^{\beta }\equiv \sum _{\alpha }\sum _{\gamma }A_{\alpha }{}^{\gamma }B^{\alpha }C_{\gamma }{}^{\beta }\,.}
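In numerical work this convention maps directly onto index-contraction primitives such as numpy.einsum, which sums over every index letter shared between operands (a plain array cannot record the upper/lower distinction, so that bookkeeping is lost). A sketch of the double contraction above:

```python
import numpy as np

n = 4
A = np.random.rand(n, n)   # stands in for A_alpha^gamma
B = np.random.rand(n)      # B^alpha
C = np.random.rand(n, n)   # C_gamma^beta

# A_alpha^gamma B^alpha C_gamma^beta: sum over alpha and gamma, beta survives
result = np.einsum('ag,a,gb->b', A, B, C)

# The same contraction written as explicit loops, for comparison:
check = np.zeros(n)
for b in range(n):
    for a in range(n):
        for g in range(n):
            check[b] += A[a, g] * B[a] * C[g, b]
assert np.allclose(result, check)
```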
Other combinations of repeated indices within a term are considered to be ill-formed, such as an index repeated twice in the same (both upper or both lower) position, or an index occurring more than twice in a single term.
The reason for excluding such formulae is that although these quantities could be computed as arrays of numbers, they would not in general transform as tensors under a change of basis.
==== Multi-index notation ====
If a tensor has a list of all upper or lower indices, one shorthand is to use a capital letter for the list:
{\displaystyle A_{i_{1}\cdots i_{n}}B^{i_{1}\cdots i_{n}j_{1}\cdots j_{m}}C_{j_{1}\cdots j_{m}}\equiv A_{I}B^{IJ}C_{J},}
where I = i1 i2 ⋅⋅⋅ in and J = j1 j2 ⋅⋅⋅ jm.
==== Sequential summation ====
A pair of vertical bars | ⋅ | around a set of all-upper indices or all-lower indices (but not both), associated with contraction with another set of indices when the expression is completely antisymmetric in each of the two sets of indices:
{\displaystyle A_{|\alpha \beta \gamma |\cdots }B^{\alpha \beta \gamma \cdots }=A_{\alpha \beta \gamma \cdots }B^{|\alpha \beta \gamma |\cdots }=\sum _{\alpha <\beta <\gamma }A_{\alpha \beta \gamma \cdots }B^{\alpha \beta \gamma \cdots }}
means a restricted sum over index values, where each index is constrained to being strictly less than the next. More than one group can be summed in this way, for example:
{\displaystyle {\begin{aligned}&A_{|\alpha \beta \gamma |}{}^{|\delta \epsilon \cdots \lambda |}B^{\alpha \beta \gamma }{}_{\delta \epsilon \cdots \lambda |\mu \nu \cdots \zeta |}C^{\mu \nu \cdots \zeta }\\[3pt]={}&\sum _{\alpha <\beta <\gamma }~\sum _{\delta <\epsilon <\cdots <\lambda }~\sum _{\mu <\nu <\cdots <\zeta }A_{\alpha \beta \gamma }{}^{\delta \epsilon \cdots \lambda }B^{\alpha \beta \gamma }{}_{\delta \epsilon \cdots \lambda \mu \nu \cdots \zeta }C^{\mu \nu \cdots \zeta }\end{aligned}}}
When using multi-index notation, an underarrow is placed underneath the block of indices:
{\displaystyle A_{\underset {\rightharpoondown }{P}}{}^{\underset {\rightharpoondown }{Q}}B^{P}{}_{Q{\underset {\rightharpoondown }{R}}}C^{R}=\sum _{\underset {\rightharpoondown }{P}}\sum _{\underset {\rightharpoondown }{Q}}\sum _{\underset {\rightharpoondown }{R}}A_{P}{}^{Q}B^{P}{}_{QR}C^{R}}
where
{\displaystyle {\underset {\rightharpoondown }{P}}=|\alpha \beta \gamma |\,,\quad {\underset {\rightharpoondown }{Q}}=|\delta \epsilon \cdots \lambda |\,,\quad {\underset {\rightharpoondown }{R}}=|\mu \nu \cdots \zeta |}
==== Raising and lowering indices ====
By contracting an index with a non-singular metric tensor, the type of a tensor can be changed, converting a lower index to an upper index or vice versa:
{\displaystyle B^{\gamma }{}_{\beta \cdots }=g^{\gamma \alpha }A_{\alpha \beta \cdots }\quad {\text{and}}\quad A_{\alpha \beta \cdots }=g_{\alpha \gamma }B^{\gamma }{}_{\beta \cdots }}
The base symbol in many cases is retained (e.g. using A where B appears here), and when there is no ambiguity, repositioning an index may be taken to imply this operation.
=== Correlations between index positions and invariance ===
This table summarizes how the manipulation of covariant and contravariant indices fit in with invariance under a passive transformation between bases, with the components of each basis set in terms of the other reflected in the first column. The barred indices refer to the final coordinate system after the transformation.
The Kronecker delta is used, see also below.
== General outlines for index notation and operations ==
Tensors are equal if and only if every corresponding component is equal; e.g., tensor A equals tensor B if and only if
{\displaystyle A^{\alpha }{}_{\beta \gamma }=B^{\alpha }{}_{\beta \gamma }}
for all α, β, γ. Consequently, there are facets of the notation that are useful in checking that an equation makes sense (an analogous procedure to dimensional analysis).
=== Free and dummy indices ===
Indices not involved in contractions are called free indices. Indices used in contractions are termed dummy indices, or summation indices.
=== A tensor equation represents many ordinary (real-valued) equations ===
The components of tensors (like Aα, Bβγ etc.) are just real numbers. Since the indices take various integer values to select specific components of the tensors, a single tensor equation represents many ordinary equations. If a tensor equality has n free indices, and if the dimensionality of the underlying vector space is m, the equality represents mⁿ equations: each index takes on every value of a specific set of values.
For instance, if
{\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }=T^{\alpha }{}_{\beta }{}_{\delta }}
is in four dimensions (that is, each index runs from 0 to 3 or from 1 to 4), then because there are three free indices (α, β, δ), there are 4³ = 64 equations. Three of these are:
{\displaystyle {\begin{aligned}A^{0}B_{1}{}^{0}C_{00}+A^{0}B_{1}{}^{1}C_{10}+A^{0}B_{1}{}^{2}C_{20}+A^{0}B_{1}{}^{3}C_{30}+D^{0}{}_{1}{}E_{0}&=T^{0}{}_{1}{}_{0}\\A^{1}B_{0}{}^{0}C_{00}+A^{1}B_{0}{}^{1}C_{10}+A^{1}B_{0}{}^{2}C_{20}+A^{1}B_{0}{}^{3}C_{30}+D^{1}{}_{0}{}E_{0}&=T^{1}{}_{0}{}_{0}\\A^{1}B_{2}{}^{0}C_{02}+A^{1}B_{2}{}^{1}C_{12}+A^{1}B_{2}{}^{2}C_{22}+A^{1}B_{2}{}^{3}C_{32}+D^{1}{}_{2}{}E_{2}&=T^{1}{}_{2}{}_{2}.\end{aligned}}}
This illustrates the compactness and efficiency of using index notation: many equations which all share a similar structure can be collected into one simple tensor equation.
=== Indices are replaceable labels ===
Replacing any index symbol throughout by another leaves the tensor equation unchanged (provided there is no conflict with other symbols already used). This can be useful when manipulating indices, such as using index notation to verify vector calculus identities or identities of the Kronecker delta and Levi-Civita symbol (see also below). An example of a correct change is:
{\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }\rightarrow A^{\lambda }B_{\beta }{}^{\mu }C_{\mu \delta }+D^{\lambda }{}_{\beta }{}E_{\delta }\,,}
whereas an erroneous change is:
{\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }\nrightarrow A^{\lambda }B_{\beta }{}^{\gamma }C_{\mu \delta }+D^{\alpha }{}_{\beta }{}E_{\delta }\,.}
In the first replacement, λ replaced α and μ replaced γ everywhere, so the expression still has the same meaning. In the second, λ did not fully replace α, and μ did not fully replace γ (incidentally, the contraction on the γ index became a tensor product), which is entirely inconsistent for reasons shown next.
=== Indices are the same in every term ===
The free indices in a tensor expression always appear in the same (upper or lower) position throughout every term, and in a tensor equation the free indices are the same on each side. Dummy indices (which implies a summation over that index) need not be the same, for example:
{\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D^{\alpha }{}_{\delta }E_{\beta }=T^{\alpha }{}_{\beta }{}_{\delta }}
as for an erroneous expression:
{\displaystyle A^{\alpha }B_{\beta }{}^{\gamma }C_{\gamma \delta }+D_{\alpha }{}_{\beta }{}^{\gamma }E^{\delta }.}
In other words, non-repeated indices must be of the same type in every term of the equation. In the above identity, α, β, δ line up throughout and γ occurs twice in one term due to a contraction (once as an upper index and once as a lower index), and thus it is a valid expression. In the invalid expression, while β lines up, α and δ do not, and γ appears twice in one term (contraction) and once in another term, which is inconsistent.
=== Brackets and punctuation used once where implied ===
When applying a rule to a number of indices (differentiation, symmetrization etc., shown next), the bracket or punctuation symbols denoting the rules are only shown on one group of the indices to which they apply.
If the brackets enclose covariant indices – the rule applies only to all covariant indices enclosed in the brackets, not to any contravariant indices which happen to be placed intermediately between the brackets.
Similarly if brackets enclose contravariant indices – the rule applies only to all enclosed contravariant indices, not to intermediately placed covariant indices.
== Symmetric and antisymmetric parts ==
=== Symmetric part of tensor ===
Parentheses, ( ), around multiple indices denotes the symmetrized part of the tensor. When symmetrizing p indices using σ to range over permutations of the numbers 1 to p, one takes a sum over the permutations of those indices ασ(i) for i = 1, 2, 3, ..., p, and then divides by the number of permutations:
{\displaystyle A_{(\alpha _{1}\alpha _{2}\cdots \alpha _{p})\alpha _{p+1}\cdots \alpha _{q}}={\dfrac {1}{p!}}\sum _{\sigma }A_{\alpha _{\sigma (1)}\cdots \alpha _{\sigma (p)}\alpha _{p+1}\cdots \alpha _{q}}\,.}
For example, two symmetrizing indices mean there are two indices to permute and sum over:
{\displaystyle A_{(\alpha \beta )\gamma \cdots }={\dfrac {1}{2!}}\left(A_{\alpha \beta \gamma \cdots }+A_{\beta \alpha \gamma \cdots }\right)}
while for three symmetrizing indices, there are three indices to sum over and permute:
{\displaystyle A_{(\alpha \beta \gamma )\delta \cdots }={\dfrac {1}{3!}}\left(A_{\alpha \beta \gamma \delta \cdots }+A_{\gamma \alpha \beta \delta \cdots }+A_{\beta \gamma \alpha \delta \cdots }+A_{\alpha \gamma \beta \delta \cdots }+A_{\gamma \beta \alpha \delta \cdots }+A_{\beta \alpha \gamma \delta \cdots }\right)}
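A symmetrizer of this kind is straightforward to implement for numerical arrays. The following Python sketch (the helper name is ours) averages an array over all permutations of the chosen axes:

```python
import itertools
import math
import numpy as np

def symmetrize(T, axes):
    """Average T over all permutations of the listed axes
    (the remaining axes are left fixed)."""
    out = np.zeros_like(T)
    for perm in itertools.permutations(axes):
        full = list(range(T.ndim))
        for src, dst in zip(axes, perm):
            full[dst] = src          # place axis `src` where `dst` was
        out += np.transpose(T, full)
    return out / math.factorial(len(axes))

T = np.random.rand(3, 3, 3)
S = symmetrize(T, (0, 1))            # A_(alpha beta) gamma
assert np.allclose(S, np.swapaxes(S, 0, 1))  # symmetric in the first two slots
```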
The symmetrization is distributive over addition;
{\displaystyle A_{(\alpha }\left(B_{\beta )\gamma \cdots }+C_{\beta )\gamma \cdots }\right)=A_{(\alpha }B_{\beta )\gamma \cdots }+A_{(\alpha }C_{\beta )\gamma \cdots }}
Indices are not part of the symmetrization when they are:
not on the same level, for example;
{\displaystyle A_{(\alpha }B^{\beta }{}_{\gamma )}={\dfrac {1}{2!}}\left(A_{\alpha }B^{\beta }{}_{\gamma }+A_{\gamma }B^{\beta }{}_{\alpha }\right)}
within the parentheses and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example;
{\displaystyle A_{(\alpha }B_{|\beta |}{}_{\gamma )}={\dfrac {1}{2!}}\left(A_{\alpha }B_{\beta \gamma }+A_{\gamma }B_{\beta \alpha }\right)}
Here the α and γ indices are symmetrized, β is not.
=== Antisymmetric or alternating part of tensor ===
Square brackets, [ ], around multiple indices denotes the antisymmetrized part of the tensor. For p antisymmetrizing indices – the sum over the permutations of those indices ασ(i) multiplied by the signature of the permutation sgn(σ) is taken, then divided by the number of permutations:
{\displaystyle {\begin{aligned}&A_{[\alpha _{1}\cdots \alpha _{p}]\alpha _{p+1}\cdots \alpha _{q}}\\[3pt]={}&{\dfrac {1}{p!}}\sum _{\sigma }\operatorname {sgn}(\sigma )A_{\alpha _{\sigma (1)}\cdots \alpha _{\sigma (p)}\alpha _{p+1}\cdots \alpha _{q}}\\={}&\delta _{\alpha _{1}\cdots \alpha _{p}}^{\beta _{1}\dots \beta _{p}}A_{\beta _{1}\cdots \beta _{p}\alpha _{p+1}\cdots \alpha _{q}}\\\end{aligned}}}
where δβ1⋅⋅⋅βpα1⋅⋅⋅αp is the generalized Kronecker delta of degree 2p, with scaling as defined below.
For example, two antisymmetrizing indices imply:
{\displaystyle A_{[\alpha \beta ]\gamma \cdots }={\dfrac {1}{2!}}\left(A_{\alpha \beta \gamma \cdots }-A_{\beta \alpha \gamma \cdots }\right)}
while three antisymmetrizing indices imply:
{\displaystyle A_{[\alpha \beta \gamma ]\delta \cdots }={\dfrac {1}{3!}}\left(A_{\alpha \beta \gamma \delta \cdots }+A_{\gamma \alpha \beta \delta \cdots }+A_{\beta \gamma \alpha \delta \cdots }-A_{\alpha \gamma \beta \delta \cdots }-A_{\gamma \beta \alpha \delta \cdots }-A_{\beta \alpha \gamma \delta \cdots }\right)}
as for a more specific example, if F represents the electromagnetic tensor, then the equation
{\displaystyle 0=F_{[\alpha \beta ,\gamma ]}={\dfrac {1}{3!}}\left(F_{\alpha \beta ,\gamma }+F_{\gamma \alpha ,\beta }+F_{\beta \gamma ,\alpha }-F_{\beta \alpha ,\gamma }-F_{\alpha \gamma ,\beta }-F_{\gamma \beta ,\alpha }\right)\,}
represents Gauss's law for magnetism and Faraday's law of induction.
As before, the antisymmetrization is distributive over addition;
{\displaystyle A_{[\alpha }\left(B_{\beta ]\gamma \cdots }+C_{\beta ]\gamma \cdots }\right)=A_{[\alpha }B_{\beta ]\gamma \cdots }+A_{[\alpha }C_{\beta ]\gamma \cdots }}
As with symmetrization, indices are not antisymmetrized when they are:
not on the same level, for example;
{\displaystyle A_{[\alpha }B^{\beta }{}_{\gamma ]}={\dfrac {1}{2!}}\left(A_{\alpha }B^{\beta }{}_{\gamma }-A_{\gamma }B^{\beta }{}_{\alpha }\right)}
within the square brackets and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example;
{\displaystyle A_{[\alpha }B_{|\beta |}{}_{\gamma ]}={\dfrac {1}{2!}}\left(A_{\alpha }B_{\beta \gamma }-A_{\gamma }B_{\beta \alpha }\right)}
Here the α and γ indices are antisymmetrized, β is not.
=== Sum of symmetric and antisymmetric parts ===
Any tensor can be written as the sum of its symmetric and antisymmetric parts on two indices:
{\displaystyle A_{\alpha \beta \gamma \cdots }=A_{(\alpha \beta )\gamma \cdots }+A_{[\alpha \beta ]\gamma \cdots }}
as can be seen by adding the above expressions for A(αβ)γ⋅⋅⋅ and A[αβ]γ⋅⋅⋅. This decomposition does not hold for more than two indices.
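Continuing the symmetrization sketch above (reusing its imports and the symmetrize helper), the antisymmetrizer only adds the sign of each permutation, and the two-index decomposition can be checked numerically:

```python
def antisymmetrize(T, axes):
    """Signed average of T over all permutations of the listed axes."""
    out = np.zeros_like(T)
    for perm in itertools.permutations(axes):
        full = list(range(T.ndim))
        for src, dst in zip(axes, perm):
            full[dst] = src
        # sign of the permutation = (-1)^(number of inversions)
        inversions = sum(1 for i in range(len(perm)) for j in range(i)
                         if perm[j] > perm[i])
        out += (-1) ** inversions * np.transpose(T, full)
    return out / math.factorial(len(axes))

T = np.random.rand(4, 4, 4)
# Any tensor is the sum of its symmetric and antisymmetric parts on two indices:
assert np.allclose(T, symmetrize(T, (0, 1)) + antisymmetrize(T, (0, 1)))
```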
== Differentiation ==
For compactness, derivatives may be indicated by adding indices after a comma or semicolon.
=== Partial derivative ===
While most of the expressions of the Ricci calculus are valid for arbitrary bases, the expressions involving partial derivatives of tensor components with respect to coordinates apply only with a coordinate basis: a basis that is defined through differentiation with respect to the coordinates. Coordinates are typically denoted by xμ, but do not in general form the components of a vector. In flat spacetime with linear coordinatization, a tuple of differences in coordinates, Δxμ, can be treated as a contravariant vector. With the same constraints on the space and on the choice of coordinate system, the partial derivatives with respect to the coordinates yield a result that is effectively covariant. Aside from use in this special case, the partial derivatives of components of tensors do not in general transform covariantly, but are useful in building expressions that are covariant, albeit still with a coordinate basis if the partial derivatives are explicitly used, as with the covariant, exterior and Lie derivatives below.
To indicate partial differentiation of the components of a tensor field with respect to a coordinate variable xγ, a comma is placed before an appended lower index of the coordinate variable.
{\displaystyle A_{\alpha \beta \cdots ,\gamma }={\dfrac {\partial }{\partial x^{\gamma }}}A_{\alpha \beta \cdots }}
This may be repeated (without adding further commas):
{\displaystyle A_{\alpha _{1}\alpha _{2}\cdots \alpha _{p}\,,\,\alpha _{p+1}\cdots \alpha _{q}}={\dfrac {\partial }{\partial x^{\alpha _{q}}}}\cdots {\dfrac {\partial }{\partial x^{\alpha _{p+2}}}}{\dfrac {\partial }{\partial x^{\alpha _{p+1}}}}A_{\alpha _{1}\alpha _{2}\cdots \alpha _{p}}.}
These components do not transform covariantly, unless the expression being differentiated is a scalar. This derivative is characterized by the product rule and the derivatives of the coordinates
{\displaystyle x^{\alpha }{}_{,\gamma }=\delta _{\gamma }^{\alpha },}
where δ is the Kronecker delta.
=== Covariant derivative ===
The covariant derivative is only defined if a connection is defined. For any tensor field, a semicolon ( ; ) placed before an appended lower (covariant) index indicates covariant differentiation. Less common alternatives to the semicolon include a forward slash ( / ) or in three-dimensional curved space a single vertical bar ( | ).
The covariant derivative of a scalar function, a contravariant vector and a covariant vector are:
{\displaystyle f_{;\beta }=f_{,\beta }}
{\displaystyle A^{\alpha }{}_{;\beta }=A^{\alpha }{}_{,\beta }+\Gamma ^{\alpha }{}_{\gamma \beta }A^{\gamma }}
{\displaystyle A_{\alpha ;\beta }=A_{\alpha ,\beta }-\Gamma ^{\gamma }{}_{\alpha \beta }A_{\gamma }\,,}
where Γαγβ are the connection coefficients.
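As a concrete sketch of the vector formula, the covariant derivative can be evaluated symbolically; here we assume sympy and use the standard Christoffel symbols of the unit 2-sphere (the example vector field is arbitrary):

```python
import sympy as sp

th, ph = sp.symbols('theta phi', positive=True)
x = [th, ph]

# Christoffel symbols of the unit 2-sphere (Levi-Civita connection),
# indexed as Gamma[a][b][c] = Gamma^a_{bc}
Gamma = [[[0, 0], [0, -sp.sin(th) * sp.cos(th)]],
         [[0, sp.cot(th)], [sp.cot(th), 0]]]

# An arbitrary contravariant vector field A^alpha, for illustration
A = [sp.sin(th), sp.cos(ph)]

# A^alpha_{;beta} = A^alpha_{,beta} + Gamma^alpha_{gamma beta} A^gamma
covA = [[sp.simplify(sp.diff(A[a], x[b])
                     + sum(Gamma[a][g][b] * A[g] for g in range(2)))
         for b in range(2)] for a in range(2)]
print(covA)
```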
For an arbitrary tensor:
{\displaystyle {\begin{aligned}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\gamma }&\\=T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s},\gamma }&+\,\Gamma ^{\alpha _{1}}{}_{\delta \gamma }T^{\delta \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}+\cdots +\Gamma ^{\alpha _{r}}{}_{\delta \gamma }T^{\alpha _{1}\cdots \alpha _{r-1}\delta }{}_{\beta _{1}\cdots \beta _{s}}\\&-\,\Gamma ^{\delta }{}_{\beta _{1}\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\delta \beta _{2}\cdots \beta _{s}}-\cdots -\Gamma ^{\delta }{}_{\beta _{s}\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\delta }\,.\end{aligned}}}
An alternative notation for the covariant derivative of any tensor is the subscripted nabla symbol ∇β. For the case of a vector field Aα:
{\displaystyle \nabla _{\beta }A^{\alpha }=A^{\alpha }{}_{;\beta }\,.}
The covariant formulation of the directional derivative of any tensor field along a vector vγ may be expressed as its contraction with the covariant derivative, e.g.:
{\displaystyle v^{\gamma }A_{\alpha ;\gamma }\,.}
The components of this derivative of a tensor field transform covariantly, and hence form another tensor field, despite subexpressions (the partial derivative and the connection coefficients) separately not transforming covariantly.
This derivative is characterized by the product rule:
{\displaystyle (A^{\alpha }{}_{\beta \cdots }B^{\gamma }{}_{\delta \cdots })_{;\epsilon }=A^{\alpha }{}_{\beta \cdots ;\epsilon }B^{\gamma }{}_{\delta \cdots }+A^{\alpha }{}_{\beta \cdots }B^{\gamma }{}_{\delta \cdots ;\epsilon }\,.}
==== Connection types ====
A Koszul connection on the tangent bundle of a differentiable manifold is called an affine connection.
A connection is a metric connection when the covariant derivative of the metric tensor vanishes:
{\displaystyle g_{\mu \nu ;\xi }=0\,.}
An affine connection that is also a metric connection is called a Riemannian connection. A Riemannian connection that is torsion-free (i.e., for which the torsion tensor vanishes: Tαβγ = 0) is a Levi-Civita connection.
The Γαβγ for a Levi-Civita connection in a coordinate basis are called Christoffel symbols of the second kind.
=== Exterior derivative ===
The exterior derivative of a totally antisymmetric type (0, s) tensor field with components Aα1⋅⋅⋅αs (also called a differential form) is a derivative that is covariant under basis transformations. It does not depend on either a metric tensor or a connection: it requires only the structure of a differentiable manifold. In a coordinate basis, it may be expressed as the antisymmetrization of the partial derivatives of the tensor components:
{\displaystyle (\mathrm {d} A)_{\gamma \alpha _{1}\cdots \alpha _{s}}={\frac {\partial }{\partial x^{[\gamma }}}A_{\alpha _{1}\cdots \alpha _{s}]}=A_{[\alpha _{1}\cdots \alpha _{s},\gamma ]}.}
This derivative is not defined on any tensor field with contravariant indices or that is not totally antisymmetric. It is characterized by a graded product rule.
=== Lie derivative ===
The Lie derivative is another derivative that is covariant under basis transformations. Like the exterior derivative, it does not depend on either a metric tensor or a connection. The Lie derivative of a type (r, s) tensor field T along (the flow of) a contravariant vector field Xρ may be expressed using a coordinate basis as
{\displaystyle {\begin{aligned}({\mathcal {L}}_{X}T)^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}&\\=X^{\gamma }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s},\gamma }&-\,X^{\alpha _{1}}{}_{,\gamma }T^{\gamma \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}-\cdots -X^{\alpha _{r}}{}_{,\gamma }T^{\alpha _{1}\cdots \alpha _{r-1}\gamma }{}_{\beta _{1}\cdots \beta _{s}}\\&+\,X^{\gamma }{}_{,\beta _{1}}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\gamma \beta _{2}\cdots \beta _{s}}+\cdots +X^{\gamma }{}_{,\beta _{s}}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\gamma }\,.\end{aligned}}}
This derivative is characterized by the product rule and the fact that the Lie derivative of a contravariant vector field along itself is zero:
{\displaystyle ({\mathcal {L}}_{X}X)^{\alpha }=X^{\gamma }X^{\alpha }{}_{,\gamma }-X^{\alpha }{}_{,\gamma }X^{\gamma }=0\,.}
== Notable tensors ==
=== Kronecker delta ===
The Kronecker delta is like the identity matrix when multiplied and contracted:
{\displaystyle {\begin{aligned}\delta _{\beta }^{\alpha }\,A^{\beta }&=A^{\alpha }\\\delta _{\nu }^{\mu }\,B_{\mu }&=B_{\nu }.\end{aligned}}}
The components δαβ are the same in any basis and form an invariant tensor of type (1, 1), i.e. the identity of the tangent bundle over the identity mapping of the base manifold, and so its trace is an invariant.
Its trace is the dimensionality of the space; for example, in four-dimensional spacetime,
{\displaystyle \delta _{\rho }^{\rho }=\delta _{0}^{0}+\delta _{1}^{1}+\delta _{2}^{2}+\delta _{3}^{3}=4.}
The Kronecker delta is one of the family of generalized Kronecker deltas. The generalized Kronecker delta of degree 2p may be defined in terms of the Kronecker delta by (a common definition includes an additional multiplier of p! on the right):
{\displaystyle \delta _{\beta _{1}\cdots \beta _{p}}^{\alpha _{1}\cdots \alpha _{p}}=\delta _{\beta _{1}}^{[\alpha _{1}}\cdots \delta _{\beta _{p}}^{\alpha _{p}]},}
and acts as an antisymmetrizer on p indices:
{\displaystyle \delta _{\beta _{1}\cdots \beta _{p}}^{\alpha _{1}\cdots \alpha _{p}}\,A^{\beta _{1}\cdots \beta _{p}}=A^{[\alpha _{1}\cdots \alpha _{p}]}.}
=== Torsion tensor ===
An affine connection has a torsion tensor Tαβγ:
{\displaystyle T^{\alpha }{}_{\beta \gamma }=\Gamma ^{\alpha }{}_{\beta \gamma }-\Gamma ^{\alpha }{}_{\gamma \beta }-\gamma ^{\alpha }{}_{\beta \gamma },}
where γαβγ are given by the components of the Lie bracket of the local basis, which vanish when it is a coordinate basis.
For a Levi-Civita connection this tensor is defined to be zero, which for a coordinate basis gives the equations
{\displaystyle \Gamma ^{\alpha }{}_{\beta \gamma }=\Gamma ^{\alpha }{}_{\gamma \beta }.}
=== Riemann curvature tensor ===
If this tensor is defined as
{\displaystyle R^{\rho }{}_{\sigma \mu \nu }=\Gamma ^{\rho }{}_{\nu \sigma ,\mu }-\Gamma ^{\rho }{}_{\mu \sigma ,\nu }+\Gamma ^{\rho }{}_{\mu \lambda }\Gamma ^{\lambda }{}_{\nu \sigma }-\Gamma ^{\rho }{}_{\nu \lambda }\Gamma ^{\lambda }{}_{\mu \sigma }\,,}
then it is the commutator of the covariant derivative with itself:
{\displaystyle A_{\nu ;\rho \sigma }-A_{\nu ;\sigma \rho }=A_{\beta }R^{\beta }{}_{\nu \rho \sigma }\,,}
since the connection is torsionless, which means that the torsion tensor vanishes.
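Continuing the 2-sphere sketch from the covariant-derivative section (it reuses Gamma, x, and sympy from there), the defining formula above can be evaluated directly; for the unit sphere the component R^θ_{φθφ} comes out as sin²θ, as expected:

```python
# Riemann tensor from its definition:
# R^rho_{sigma mu nu} = Gamma^rho_{nu sigma, mu} - Gamma^rho_{mu sigma, nu}
#                     + Gamma^rho_{mu lam} Gamma^lam_{nu sigma}
#                     - Gamma^rho_{nu lam} Gamma^lam_{mu sigma}
def riemann(r, s, m, n):
    expr = (sp.diff(Gamma[r][n][s], x[m]) - sp.diff(Gamma[r][m][s], x[n])
            + sum(Gamma[r][m][l] * Gamma[l][n][s]
                  - Gamma[r][n][l] * Gamma[l][m][s] for l in range(2)))
    return sp.simplify(expr)

print(riemann(0, 1, 0, 1))   # R^theta_{phi theta phi} = sin(theta)**2
```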
This can be generalized to get the commutator for two covariant derivatives of an arbitrary tensor as follows:
{\displaystyle {\begin{aligned}T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\gamma \delta }&-T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s};\delta \gamma }\\&\!\!\!\!\!\!\!\!\!\!=-R^{\alpha _{1}}{}_{\rho \gamma \delta }T^{\rho \alpha _{2}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s}}-\cdots -R^{\alpha _{r}}{}_{\rho \gamma \delta }T^{\alpha _{1}\cdots \alpha _{r-1}\rho }{}_{\beta _{1}\cdots \beta _{s}}\\&+R^{\sigma }{}_{\beta _{1}\gamma \delta }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\sigma \beta _{2}\cdots \beta _{s}}+\cdots +R^{\sigma }{}_{\beta _{s}\gamma \delta }T^{\alpha _{1}\cdots \alpha _{r}}{}_{\beta _{1}\cdots \beta _{s-1}\sigma }\,\end{aligned}}}
which are often referred to as the Ricci identities.
=== Metric tensor ===
The metric tensor gαβ is used for lowering indices and gives the length of any space-like curve
{\displaystyle {\text{length}}=\int _{y_{1}}^{y_{2}}{\sqrt {g_{\alpha \beta }{\frac {dx^{\alpha }}{d\gamma }}{\frac {dx^{\beta }}{d\gamma }}}}\,d\gamma \,,}
where γ is any smooth strictly monotone parameterization of the path. It also gives the duration of any time-like curve
{\displaystyle {\text{duration}}=\int _{t_{1}}^{t_{2}}{\sqrt {{\frac {-1}{c^{2}}}g_{\alpha \beta }{\frac {dx^{\alpha }}{d\gamma }}{\frac {dx^{\beta }}{d\gamma }}}}\,d\gamma \,,}
where γ is any smooth strictly monotone parameterization of the trajectory. See also Line element.
The inverse matrix gαβ of the metric tensor is another important tensor, used for raising indices:
{\displaystyle g^{\alpha \beta }g_{\beta \gamma }=\delta _{\gamma }^{\alpha }\,.}
== See also ==
== Notes ==
== References ==
== Sources ==
Bishop, R.L.; Goldberg, S.I. (1968), Tensor Analysis on Manifolds (First Dover 1980 ed.), The Macmillan Company, ISBN 0-486-64039-6
Danielson, Donald A. (2003). Vectors and Tensors in Engineering and Physics (2/e ed.). Westview (Perseus). ISBN 978-0-8133-4080-7.
Dimitrienko, Yuriy (2002). Tensor Analysis and Nonlinear Tensor Functions. Kluwer Academic Publishers (Springer). ISBN 1-4020-1015-X.
Lovelock, David; Hanno Rund (1989) [1975]. Tensors, Differential Forms, and Variational Principles. Dover. ISBN 978-0-486-65840-7.
C. Møller (1952), The Theory of Relativity (3rd ed.), Oxford University Press
Synge J.L.; Schild A. (1949). Tensor Calculus (first Dover Publications 1978 ed.). ISBN 978-0-486-63612-2.
J.R. Tyldesley (1975), An introduction to Tensor Analysis: For Engineers and Applied Scientists, Longman, ISBN 0-582-44355-5
D.C. Kay (1988), Tensor Calculus, Schaum's Outlines, McGraw Hill (USA), ISBN 0-07-033484-6
T. Frankel (2012), The Geometry of Physics (3rd ed.), Cambridge University Press, ISBN 978-1107-602601
== Further reading ==
Dimitrienko, Yuriy (2002). Tensor Analysis and Nonlinear Tensor Functions. Springer. ISBN 1-4020-1015-X.
Sokolnikoff, Ivan S (1951). Tensor Analysis: Theory and Applications to Geometry and Mechanics of Continua. Wiley. ISBN 0471810525.
Borisenko, A.I.; Tarapov, I.E. (1979). Vector and Tensor Analysis with Applications (2nd ed.). Dover. ISBN 0486638332.
Itskov, Mikhail (2015). Tensor Algebra and Tensor Analysis for Engineers: With Applications to Continuum Mechanics (2nd ed.). Springer. ISBN 9783319163420.
Tyldesley, J. R. (1973). An introduction to Tensor Analysis: For Engineers and Applied Scientists. Longman. ISBN 0-582-44355-5.
Kay, D. C. (1988). Tensor Calculus. Schaum’s Outlines. McGraw Hill. ISBN 0-07-033484-6.
Grinfeld, P. (2014). Introduction to Tensor Analysis and the Calculus of Moving Surfaces. Springer. ISBN 978-1-4614-7866-9.
== External links ==
Dullemond, Kees; Peeters, Kasper (1991–2010). "Introduction to Tensor Calculus" (PDF). Retrieved 17 May 2018.
In category theory, a Lawvere theory (named after American mathematician William Lawvere) is a category that can be considered a categorical counterpart of the notion of an equational theory.
== Definition ==
Let ℵ0 be a skeleton of the category FinSet of finite sets and functions. Formally, a Lawvere theory consists of a small category L with (strictly associative) finite products and a strict identity-on-objects functor
{\displaystyle I:\aleph _{0}^{\text{op}}\rightarrow L}
preserving finite products.
A model of a Lawvere theory in a category C with finite products is a finite-product preserving functor M : L → C. A morphism of models h : M → N where M and N are models of L is a natural transformation of functors.
== Category of Lawvere theories ==
A map between Lawvere theories (L, I) and (L′, I′) is a finite-product preserving functor that commutes with I and I′. Such a map is commonly seen as an interpretation of (L, I) in (L′, I′).
Lawvere theories together with maps between them form the category Law.
== Variations ==
Variations include multisorted (or multityped) Lawvere theory, infinitary Lawvere theory, and finite-product theory.
== See also ==
Algebraic theory
Clone (algebra)
Monad (category theory)
== Notes ==
== References ==
Hyland, Martin; Power, John (2007), "The Category Theoretic Understanding of Universal Algebra: Lawvere Theories and Monads" (PDF), Electronic Notes in Theoretical Computer Science, 172 (Computation, Meaning, and Logic: Articles dedicated to Gordon Plotkin): 437–458, CiteSeerX 10.1.1.158.5440, doi:10.1016/j.entcs.2007.02.019
Lawvere, William F. (1963), "Functorial Semantics of Algebraic Theories", PhD Thesis, vol. 50, no. 5, Columbia University, pp. 869–872, Bibcode:1963PNAS...50..869L, doi:10.1073/pnas.50.5.869, PMC 221940, PMID 16591125
In mathematics, an operad is a structure that consists of abstract operations, each one having a fixed finite number of inputs (arguments) and one output, as well as a specification of how to compose these operations. Given an operad O, one defines an algebra over O to be a set together with concrete operations on this set which behave just like the abstract operations of O. For instance, there is a Lie operad L such that the algebras over L are precisely the Lie algebras; in a sense L abstractly encodes the operations that are common to all Lie algebras. An operad is to its algebras as a group is to its group representations.
== History ==
Operads originate in algebraic topology; they were introduced to characterize iterated loop spaces by J. Michael Boardman and Rainer M. Vogt in 1968 and by J. Peter May in 1972.
Martin Markl, Steve Shnider, and Jim Stasheff write in their book on operads:
"The name operad and the formal definition appear first in the early 1970's in J. Peter May's "The Geometry of Iterated Loop Spaces", but a year or more earlier, Boardman and Vogt described the same concept under the name categories of operators in standard form, inspired by PROPs and PACTs of Adams and Mac Lane. In fact, there is an abundance of prehistory. Weibel [Wei] points out that the concept first arose a century ago in A.N. Whitehead's "A Treatise on Universal Algebra", published in 1898."
The word "operad" was created by May as a portmanteau of "operations" and "monad" (and also because his mother was an opera singer).
Interest in operads was considerably renewed in the early 90s when, based on early insights of Maxim Kontsevich, Victor Ginzburg and Mikhail Kapranov discovered that some duality phenomena in rational homotopy theory could be explained using Koszul duality of operads. Operads have since found many applications, such as in deformation quantization of Poisson manifolds, the Deligne conjecture, or graph homology in the work of Maxim Kontsevich and Thomas Willwacher.
== Intuition ==
Suppose X is a set and for n ∈ ℕ we define P(n) := {f : Xⁿ → X}, the set of all functions from the cartesian product of n copies of X to X.
We can compose these functions: given f ∈ P(n), f1 ∈ P(k1), …, fn ∈ P(kn), the function f ∘ (f1, …, fn) ∈ P(k1 + ⋯ + kn) is defined as follows: given k1 + ⋯ + kn arguments from X, we divide them into n blocks, the first one having k1 arguments, the second one k2 arguments, etc., and then apply f1 to the first block, f2 to the second block, etc. We then apply f to the list of n values obtained from X in such a way.
We can also permute arguments, i.e. we have a right action ∗ of the symmetric group Sn on P(n), defined by
{\displaystyle (f*s)(x_{1},\ldots ,x_{n})=f(x_{s^{-1}(1)},\ldots ,x_{s^{-1}(n)})}
for f ∈ P(n), s ∈ Sn and x1, …, xn ∈ X.
The definition of a symmetric operad given below captures the essential properties of these two operations ∘ and ∗.
== Definition ==
=== Non-symmetric operad ===
A non-symmetric operad (sometimes called an operad without permutations, or a non-Σ or plain operad) consists of the following:
a sequence (P(n))n∈ℕ of sets, whose elements are called n-ary operations,
an element 1 in P(1) called the identity,
for all positive integers n, k1, …, kn, a composition function
{\displaystyle {\begin{aligned}\circ :P(n)\times P(k_{1})\times \cdots \times P(k_{n})&\to P(k_{1}+\cdots +k_{n})\\(\theta ,\theta _{1},\ldots ,\theta _{n})&\mapsto \theta \circ (\theta _{1},\ldots ,\theta _{n}),\end{aligned}}}
satisfying the following coherence axioms:
identity: {\displaystyle \theta \circ (1,\ldots ,1)=\theta =1\circ \theta }
associativity:
{\displaystyle {\begin{aligned}&\theta \circ {\Big (}\theta _{1}\circ (\theta _{1,1},\ldots ,\theta _{1,k_{1}}),\ldots ,\theta _{n}\circ (\theta _{n,1},\ldots ,\theta _{n,k_{n}}){\Big )}\\={}&{\Big (}\theta \circ (\theta _{1},\ldots ,\theta _{n}){\Big )}\circ (\theta _{1,1},\ldots ,\theta _{1,k_{1}},\ldots ,\theta _{n,1},\ldots ,\theta _{n,k_{n}})\end{aligned}}}
=== Symmetric operad ===
A symmetric operad (often just called operad) is a non-symmetric operad P as above, together with a right action of the symmetric group Sn on P(n) for n ∈ ℕ, denoted by ∗ and satisfying
equivariance: given a permutation t ∈ Sn,
{\displaystyle (\theta *t)\circ (\theta _{1},\ldots ,\theta _{n})=(\theta \circ (\theta _{t^{-1}(1)},\ldots ,\theta _{t^{-1}(n)}))*t'}
(where t′ on the right hand side refers to the element of Sk1+⋯+kn that acts on the set {1, 2, …, k1 + ⋯ + kn} by breaking it into n blocks, the first of size k1, the second of size k2, through the nth block of size kn, and then permutes these n blocks by t, keeping each block intact)
and given n permutations si ∈ Ski,
{\displaystyle \theta \circ (\theta _{1}*s_{1},\ldots ,\theta _{n}*s_{n})=(\theta \circ (\theta _{1},\ldots ,\theta _{n}))*(s_{1},\ldots ,s_{n})}
(where (s1, …, sn) denotes the element of Sk1+⋯+kn that permutes the first of these blocks by s1, the second by s2, etc., and keeps their overall order intact).
The permutation actions in this definition are vital to most applications, including the original application to loop spaces.
=== Morphisms ===
A morphism of operads f : P → Q consists of a sequence (fn : P(n) → Q(n))n∈ℕ that:
preserves the identity: {\displaystyle f(1)=1}
preserves composition: for every n-ary operation
θ
{\displaystyle \theta }
and operations
θ
1
,
…
,
θ
n
{\displaystyle \theta _{1},\ldots ,\theta _{n}}
,
f
(
θ
∘
(
θ
1
,
…
,
θ
n
)
)
=
f
(
θ
)
∘
(
f
(
θ
1
)
,
…
,
f
(
θ
n
)
)
{\displaystyle f(\theta \circ (\theta _{1},\ldots ,\theta _{n}))=f(\theta )\circ (f(\theta _{1}),\ldots ,f(\theta _{n}))}
preserves the permutation actions:
f
(
x
∗
s
)
=
f
(
x
)
∗
s
{\displaystyle f(x*s)=f(x)*s}
.
Operads therefore form a category denoted by
O
p
e
r
{\displaystyle {\mathsf {Oper}}}
.
=== In other categories ===
So far operads have only been considered in the category of sets. More generally, it is possible to define operads in any symmetric monoidal category C. In that case, each $P(n)$ is an object of C, the composition $\circ$ is a morphism
$$P(n)\otimes P(k_1)\otimes\cdots\otimes P(k_n)\to P(k_1+\cdots+k_n)$$
in C (where $\otimes$ denotes the tensor product of the monoidal category), and the actions of the symmetric group elements are given by isomorphisms in C.
A common example is the category of topological spaces and continuous maps, with the monoidal product given by the cartesian product. In this case, an operad is given by a sequence of spaces (instead of sets)
$\{P(n)\}_{n\geq 0}$. The structure maps of the operad (the composition and the actions of the symmetric groups) are then assumed to be continuous. The result is called a topological operad. Similarly, in the definition of a morphism of operads, it would be necessary to assume that the maps involved are continuous.
Other common settings to define operads include, for example, modules over a commutative ring, chain complexes, groupoids (or even the category of categories itself), coalgebras, etc.
=== Algebraist definition ===
Given a commutative ring R, we consider the category $R\text{-}\mathsf{Mod}$ of modules over R. An operad over R can be defined as a monoid object $(T,\gamma,\eta)$ in the monoidal category of endofunctors on $R\text{-}\mathsf{Mod}$ (it is a monad) satisfying some finiteness condition.
For example, a monoid object in the category of "polynomial endofunctors" on $R\text{-}\mathsf{Mod}$ is an operad. Similarly, a symmetric operad can be defined as a monoid object in the category of $\mathbb{S}$-objects, where $\mathbb{S}$ means a symmetric group. A monoid object in the category of combinatorial species is an operad in finite sets.
An operad in the above sense is sometimes thought of as a generalized ring. For example, Nikolai Durov defines his generalized rings as monoid objects in the monoidal category of endofunctors on $\mathbf{Set}$ that commute with filtered colimits. This is a generalization of a ring, since each ordinary ring R defines a monad $\Sigma_R : \mathbf{Set}\to\mathbf{Set}$ that sends a set X to the underlying set of the free R-module $R^{(X)}$ generated by X.
== Understanding the axioms ==
=== Associativity axiom ===
"Associativity" means that composition of operations is associative
(the function $\circ$ is associative), analogous to the axiom in category theory that $f\circ(g\circ h)=(f\circ g)\circ h$; it does not mean that the operations themselves are associative as operations.
Compare with the associative operad, below.
Associativity in operad theory means that expressions can be written involving operations without ambiguity from the omitted compositions, just as associativity for operations allows products to be written without ambiguity from the omitted parentheses.
For instance, if $\theta$ is a binary operation, which is written as $\theta(a,b)$ or $(ab)$ ($\theta$ may or may not be associative), then what is commonly written $((ab)c)$ is unambiguously written operadically as $\theta\circ(\theta,1)$. This sends $(a,b,c)$ to $(ab,c)$ (apply $\theta$ on the first two, and the identity on the third), and then the $\theta$ on the left "multiplies" $ab$ by $c$.
This is clearer when depicted as a tree of compositions (figure omitted), which yields a single 3-ary operation.
However, the expression $(((ab)c)d)$ is a priori ambiguous: it could mean $\theta\circ((\theta,1)\circ((\theta,1),1))$, if the inner compositions are performed first, or it could mean $(\theta\circ(\theta,1))\circ((\theta,1),1)$, if the outer compositions are performed first (operations are read from right to left).
Writing $x=\theta$, $y=(\theta,1)$, $z=((\theta,1),1)$, this is $x\circ(y\circ z)$ versus $(x\circ y)\circ z$. That is, the tree is missing "vertical parentheses":
If the top two rows of operations are composed first (puts an upward parenthesis at the $(ab)c\ \ d$ line; does the inner composition first), the following results:
which then evaluates unambiguously to yield a 4-ary operation.
As an annotated expression:
$$\theta_{(ab)c\cdot d}\circ((\theta_{ab\cdot c},1_d)\circ((\theta_{a\cdot b},1_c),1_d))$$
If the bottom two rows of operations are composed first (puts a downward parenthesis at the $ab\quad c\ \ d$ line; does the outer composition first), the following results:
which then evaluates unambiguously to yield a 4-ary operation.
The operad axiom of associativity is that these yield the same result, and thus that the expression $(((ab)c)d)$ is unambiguous.
=== Identity axiom ===
The identity axiom (for a binary operation) can be visualized in a tree (figure omitted), meaning that the three operations obtained are equal: pre- or post-composing with the identity makes no difference. As for categories, $1\circ 1=1$ is a corollary of the identity axiom.
== Examples ==
=== Endomorphism operad in sets and operad algebras ===
The most basic operads are the ones given in the section on "Intuition", above. For any set $X$, we obtain the endomorphism operad $\mathcal{End}_X$ consisting of all functions $X^n\to X$. These operads are important because they serve to define operad algebras. If $\mathcal{O}$ is an operad, an operad algebra over $\mathcal{O}$ is given by a set $X$ and an operad morphism $\mathcal{O}\to\mathcal{End}_X$. Intuitively, such a morphism turns each "abstract" operation of $\mathcal{O}(n)$ into a "concrete" $n$-ary operation on the set $X$. An operad algebra over $\mathcal{O}$ thus consists of a set $X$ together with concrete operations on $X$ that follow the rules abstractly specified by the operad $\mathcal{O}$.
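As a concrete illustration, here is a minimal Python sketch of composition in $\mathcal{End}_X$ (an encoding of my own, not from the article): $n$-ary operations are ordinary functions of $n$ arguments, and composition feeds blocks of the inputs to the inner operations and their results to the outer one.

```python
def compose(theta, inner_ops, arities):
    """Operadic composition in End_X.

    theta     : a function of n arguments,
    inner_ops : a list of n functions, the i-th taking arities[i] arguments,
    arities   : the list (k_1, ..., k_n).
    Returns a function of k_1 + ... + k_n arguments.
    """
    def composite(*args):
        outputs, pos = [], 0
        for op, k in zip(inner_ops, arities):
            outputs.append(op(*args[pos:pos + k]))  # i-th inner op on its block
            pos += k
        return theta(*outputs)  # feed the n intermediate results to theta
    return composite

# Over X = integers: composing addition with (multiplication, identity)
# yields the 3-ary operation (a, b, c) |-> a*b + c.
add = lambda x, y: x + y
mul = lambda x, y: x * y
ident = lambda x: x

op3 = compose(add, [mul, ident], [2, 1])
assert op3(2, 3, 4) == 2 * 3 + 4
```

An operad algebra $\mathcal{O}\to\mathcal{End}_X$ would then send each abstract operation of $\mathcal{O}(n)$ to such a function.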
=== Endomorphism operad in vector spaces and operad algebras ===
If k is a field, we can consider the category of finite-dimensional vector spaces over k; this becomes a monoidal category using the ordinary tensor product over k. We can then define endomorphism operads in this category, as follows. Let V be a finite-dimensional vector space. The endomorphism operad $\mathcal{End}_V=\{\mathcal{End}_V(n)\}$ of V consists of
$\mathcal{End}_V(n)$ = the space of linear maps $V^{\otimes n}\to V$,
(composition) given $f\in\mathcal{End}_V(n)$, $g_1\in\mathcal{End}_V(k_1)$, ..., $g_n\in\mathcal{End}_V(k_n)$, their composition is given by the map
$$V^{\otimes k_1}\otimes\cdots\otimes V^{\otimes k_n}\ {\overset{g_1\otimes\cdots\otimes g_n}{\longrightarrow}}\ V^{\otimes n}\ {\overset{f}{\to}}\ V,$$
(identity) the identity element in $\mathcal{End}_V(1)$ is the identity map $\operatorname{id}_V$,
(symmetric group action) $S_n$ operates on $\mathcal{End}_V(n)$ by permuting the components of the tensors in $V^{\otimes n}$.
If $\mathcal{O}$ is an operad, a k-linear operad algebra over $\mathcal{O}$ is given by a finite-dimensional vector space V over k and an operad morphism $\mathcal{O}\to\mathcal{End}_V$; this amounts to specifying concrete multilinear operations on V that behave like the operations of $\mathcal{O}$. (Notice the analogy between operads & operad algebras and rings & modules: a module over a ring R is given by an abelian group M together with a ring homomorphism $R\to\operatorname{End}(M)$.)
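As a hedged sketch of this composition (my own encoding, assuming numpy; not part of the article), a linear map $V^{\otimes k}\to V$ on $V=k^d$ can be stored as an array with one output axis and $k$ input axes, so the operadic composite becomes a tensor contraction:

```python
import numpy as np

d = 2  # dim V; a map V⊗...⊗V (k factors) -> V is an array with k+1 axes
rng = np.random.default_rng(0)
f  = rng.standard_normal((d, d, d))   # f in End_V(2): axes (out, in, in)
g1 = rng.standard_normal((d, d))      # g1 in End_V(1)
g2 = rng.standard_normal((d, d, d))   # g2 in End_V(2)

# Composite f o (g1, g2) in End_V(3): contract the outputs of g1, g2
# with the two inputs of f.
comp = np.einsum('iab,ap,bqr->ipqr', f, g1, g2)

# Sanity check on vectors u, v, w: evaluating the composite agrees with
# applying g1 and g2 first and then f.
u, v, w = rng.standard_normal((3, d))
lhs = np.einsum('ipqr,p,q,r->i', comp, u, v, w)
rhs = np.einsum('iab,a,b->i', f,
                np.einsum('ap,p->a', g1, u),
                np.einsum('bqr,q,r->b', g2, v, w))
assert np.allclose(lhs, rhs)
```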
Depending on applications, variations of the above are possible: for example, in algebraic topology, instead of vector spaces and tensor products between them, one uses (reasonable) topological spaces and cartesian products between them.
=== "Little something" operads ===
The little 2-disks operad is a topological operad where
$P(n)$ consists of ordered lists of n disjoint disks inside the unit disk of $\mathbb{R}^2$
centered at the origin. The symmetric group acts on such configurations by permuting the list of little disks. The operadic composition for little disks is illustrated in the accompanying figure to the right, where an element
$\theta\in P(3)$ is composed with an element $(\theta_1,\theta_2,\theta_3)\in P(2)\times P(3)\times P(4)$ to yield the element $\theta\circ(\theta_1,\theta_2,\theta_3)\in P(9)$ obtained by shrinking the configuration of $\theta_i$ and inserting it into the i-th disk of $\theta$, for $i=1,2,3$.
Analogously, one can define the little n-disks operad by considering configurations of disjoint n-balls inside the unit ball of $\mathbb{R}^n$.
Originally the little n-cubes operad or the little intervals operad (initially called little n-cubes PROPs) was defined by Michael Boardman and Rainer Vogt in a similar way, in terms of configurations of disjoint axis-aligned n-dimensional hypercubes (n-dimensional intervals) inside the unit hypercube. Later it was generalized by May to the little convex bodies operad, and "little disks" is a case of "folklore" derived from the "little convex bodies".
=== Rooted trees ===
In graph theory, rooted trees form a natural operad. Here,
$P(n)$ is the set of all rooted trees with n leaves, where the leaves are numbered from 1 to n. The group $S_n$ operates on this set by permuting the leaf labels. Operadic composition $T\circ(S_1,\ldots,S_n)$ is given by replacing the i-th leaf of $T$ by the root of the i-th tree $S_i$, for $i=1,\ldots,n$, thus attaching the n trees to $T$ and forming a larger tree, whose root is taken to be the same as the root of $T$ and whose leaves are numbered in order.
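A small Python sketch of this grafting (an encoding of my own; it additionally assumes the leaves of the outer tree are numbered in left-to-right order, which the article does not require):

```python
# A rooted tree is either a numbered leaf ('leaf', i) or a node carrying an
# ordered list of subtrees: ('node', [t1, t2, ...]).

def leaves_in_order(tree):
    """List the leaf labels of a tree from left to right."""
    if tree[0] == 'leaf':
        return [tree[1]]
    return [l for sub in tree[1] for l in leaves_in_order(sub)]

def graft(T, subtrees):
    """T o (S_1, ..., S_n): replace the i-th leaf of T by the tree S_i,
    then renumber the leaves of the result consecutively."""
    counter = {'next': 1}

    def renumber(t):
        if t[0] == 'leaf':
            i = counter['next']
            counter['next'] += 1
            return ('leaf', i)
        return ('node', [renumber(s) for s in t[1]])

    def replace(t):
        if t[0] == 'leaf':
            return subtrees[t[1] - 1]   # attach the i-th tree at leaf i
        return ('node', [replace(s) for s in t[1]])

    return renumber(replace(T))

# T has two leaves; graft a 2-leaf corolla onto leaf 1 and a bare leaf onto leaf 2.
T  = ('node', [('leaf', 1), ('leaf', 2)])
S1 = ('node', [('leaf', 1), ('leaf', 2)])
S2 = ('leaf', 1)
assert leaves_in_order(graft(T, [S1, S2])) == [1, 2, 3]
```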
=== Swiss-cheese operad ===
The Swiss-cheese operad is a two-colored topological operad defined in terms of configurations of disjoint n-dimensional disks inside a unit n-semidisk and n-dimensional semidisks, centered at the base of the unit semidisk and sitting inside of it. The operadic composition comes from gluing configurations of "little" disks inside the unit disk into the "little" disks in another unit semidisk and configurations of "little" disks and semidisks inside the unit semidisk into the other unit semidisk.
The Swiss-cheese operad was defined by Alexander A. Voronov. It was used by Maxim Kontsevich to formulate a Swiss-cheese version of Deligne's conjecture on Hochschild cohomology. Kontsevich's conjecture was proven partly by Po Hu, Igor Kriz, and Alexander A. Voronov and then fully by Justin Thomas.
=== Associative operad ===
Another class of examples of operads are those capturing the structures of algebraic structures, such as associative algebras, commutative algebras and Lie algebras. Each of these can be exhibited as a finitely presented operad, each of the three being generated by binary operations.
For example, the associative operad is a symmetric operad generated by a binary operation $\psi$, subject only to the condition that
$$\psi\circ(\psi,1)=\psi\circ(1,\psi).$$
This condition corresponds to associativity of the binary operation $\psi$; writing $\psi(a,b)$ multiplicatively, the above condition is $(ab)c=a(bc)$. This associativity of the operation should not be confused with associativity of composition, which holds in any operad; see the axiom of associativity, above.
In the associative operad, each $P(n)$ is given by the symmetric group $S_n$, on which $S_n$ acts by right multiplication. The composite $\sigma\circ(\tau_1,\dots,\tau_n)$ permutes its inputs in blocks according to $\sigma$, and within blocks according to the appropriate $\tau_i$.
The algebras over the associative operad are precisely the semigroups: sets together with a single binary associative operation. The k-linear algebras over the associative operad are precisely the associative k-algebras.
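The block-permutation composite just described can be sketched in a few lines of Python (an illustration under conventions of my own: a permutation is the list of input indices in output order):

```python
def apply(perm, items):
    """Apply a permutation: perm[j] is the index of the item placed in slot j."""
    return [items[perm[j]] for j in range(len(perm))]

def operad_compose(sigma, taus):
    """Composite sigma o (tau_1, ..., tau_n) in the associative operad.

    sigma : a permutation of n (a list of length n),
    taus  : a list of n permutations, tau_i of length k_i.
    Returns the resulting permutation of k_1 + ... + k_n.
    """
    K = sum(len(t) for t in taus)
    inputs, blocks, pos = list(range(K)), [], 0
    for t in taus:
        blocks.append(apply(t, inputs[pos:pos + len(t)]))  # permute within block
        pos += len(t)
    reordered = apply(sigma, blocks)                       # permute the blocks
    return [i for block in reordered for i in block]       # flatten

# sigma swaps the two blocks; tau_1 reverses its block of size 2, tau_2 is trivial.
assert operad_compose([1, 0], [[1, 0], [0]]) == [2, 1, 0]
```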
=== Terminal symmetric operad ===
The terminal symmetric operad is the operad which has a single n-ary operation for each n, with each $S_n$ acting trivially. The algebras over this operad are the commutative semigroups; the k-linear algebras are the commutative associative k-algebras.
=== Operads from the braid groups ===
Similarly, there is a non-$\Sigma$ operad for which each $P(n)$ is given by the Artin braid group $B_n$. Moreover, this non-$\Sigma$ operad has the structure of a braided operad, which generalizes the notion of an operad from symmetric to braid groups.
=== Linear algebra ===
In linear algebra, real vector spaces can be considered to be algebras over the operad $\mathbb{R}^\infty$ of all linear combinations. This operad is defined by $\mathbb{R}^\infty(n)=\mathbb{R}^n$ for $n\in\mathbb{N}$, with the obvious action of $S_n$ permuting components, and composition ${\vec{x}}\circ({\vec{y_1}},\ldots,{\vec{y_n}})$ given by the concatenation of the vectors $x^{(1)}{\vec{y_1}},\ldots,x^{(n)}{\vec{y_n}}$, where ${\vec{x}}=(x^{(1)},\ldots,x^{(n)})\in\mathbb{R}^n$. The vector ${\vec{x}}=(2,3,-5,0,\dots)$ for instance represents the operation of forming a linear combination with coefficients 2, 3, −5, 0, ....
This point of view formalizes the notion that linear combinations are the most general sort of operation on a vector space – saying that a vector space is an algebra over the operad of linear combinations is precisely the statement that all possible algebraic operations in a vector space are linear combinations. The basic operations of vector addition and scalar multiplication are a generating set for the operad of all linear combinations, while the linear combinations operad canonically encodes all possible operations on a vector space.
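As an illustrative sketch (my own conventions, using numpy; not from the article), composition in this operad is concatenation of scaled coefficient vectors, and the algebra structure on a vector space is "form the linear combination":

```python
import numpy as np

def compose(x, ys):
    """x o (y_1, ..., y_n): scale each y_i by x[i] and concatenate."""
    return np.concatenate([x[i] * y for i, y in enumerate(ys)])

def act(x, vectors):
    """A coefficient vector of length n sends n vectors to their linear combination."""
    return sum(c * v for c, v in zip(x, vectors))

x  = np.array([2.0, 3.0])
y1 = np.array([1.0, -1.0])
y2 = np.array([0.5])
v1, v2, v3 = (np.array(v, dtype=float) for v in ([1, 0], [0, 1], [1, 1]))

# Operad-algebra coherence: acting by the composite equals composing the actions.
lhs = act(compose(x, [y1, y2]), [v1, v2, v3])
rhs = act(x, [act(y1, [v1, v2]), act(y2, [v3])])
assert np.allclose(lhs, rhs)
```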
Similarly, affine combinations, conical combinations, and convex combinations can be considered to correspond to the sub-operads where the terms of the vector ${\vec{x}}$ sum to 1, the terms are all non-negative, or both, respectively. Graphically, these are the infinite affine hyperplane, the infinite hyper-octant, and the infinite simplex. This formalizes what is meant by $\mathbb{R}^n$ or the standard simplex being model spaces, and such observations as that every bounded convex polytope is the image of a simplex. Here suboperads correspond to more restricted operations and thus more general theories.
=== Commutative-ring operad and Lie operad ===
The commutative-ring operad is an operad whose algebras are the commutative rings. It is defined by $P(n)=\mathbb{Z}[x_1,\ldots,x_n]$, with the obvious action of $S_n$ and operadic composition given by substituting polynomials (with renumbered variables) for variables. A similar operad can be defined whose algebras are the associative, commutative algebras over some fixed base field. The Koszul dual of this operad is the Lie operad (whose algebras are the Lie algebras), and vice versa.
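A hedged sketch of this substitution composition using sympy (the library choice and helper names are my own):

```python
import sympy as sp

def x(i):
    """The i-th variable x_i."""
    return sp.Symbol(f'x{i}')

def compose(p, inner, arities):
    """Operadic composition in P(n) = Z[x_1,...,x_n]: substitute the inner
    polynomials, with renumbered variables, for the variables of p."""
    renumbered, offset = [], 0
    for q, k in zip(inner, arities):
        shift = {x(j): x(offset + j) for j in range(1, k + 1)}
        renumbered.append(q.subs(shift, simultaneous=True))
        offset += k
    outer = {x(i): r for i, r in enumerate(renumbered, start=1)}
    return sp.expand(p.subs(outer, simultaneous=True))

# p = x1*x2 composed with (x1 + x2, x1**2) yields (x1 + x2) * x3**2.
result = compose(x(1) * x(2), [x(1) + x(2), x(1)**2], [2, 1])
assert result == sp.expand((x(1) + x(2)) * x(3)**2)
```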
== Free Operads ==
Typical algebraic constructions (e.g., the free algebra construction) can be extended to operads. Let $\mathbf{Set}^{S_n}$ denote the category whose objects are sets on which the group $S_n$ acts. Then there is a forgetful functor $\mathsf{Oper}\to\prod_{n\in\mathbb{N}}\mathbf{Set}^{S_n}$, which simply forgets the operadic composition. It is possible to construct a left adjoint $\Gamma : \prod_{n\in\mathbb{N}}\mathbf{Set}^{S_n}\to\mathsf{Oper}$ to this forgetful functor (this is the usual definition of a free functor). Given a collection of operations E, $\Gamma(E)$ is the free operad on E.
Like a group or a ring, the free construction allows an operad to be expressed in terms of generators and relations. By a free representation of an operad $\mathcal{O}$, we mean writing $\mathcal{O}$ as a quotient of a free operad $\mathcal{F}=\Gamma(E)$, where E describes generators of $\mathcal{O}$ and the kernel of the epimorphism $\mathcal{F}\to\mathcal{O}$ describes the relations.
A (symmetric) operad $\mathcal{O}=\{\mathcal{O}(n)\}$ is called quadratic if it has a free presentation such that $E=\mathcal{O}(2)$ is the generator and the relation is contained in $\Gamma(E)(3)$.
== Clones ==
Clones are the special case of operads that are also closed under identifying arguments together ("reusing" some data). Clones can be equivalently defined as operads that are also a minion (or clonoid).
== Operads in homotopy theory ==
In Stasheff (2004), Stasheff writes:
Operads are particularly important and useful in categories with a good notion of "homotopy", where they play a key role in organizing hierarchies of higher homotopies.
== Higher-order operad ==
In algebra, a higher-order operad is a higher-dimensional generalization of an operad.
== See also ==
PRO (category theory)
Algebra over an operad
Higher-order operad
E∞-operad
Pseudoalgebra
Multicategory
Opetope
== Notes ==
=== Citations ===
== References ==
Tom Leinster (2004). Higher Operads, Higher Categories. Cambridge University Press. arXiv:math/0305049. Bibcode:2004hohc.book.....L. ISBN 978-0-521-53215-0.
Martin Markl, Steve Shnider, Jim Stasheff (2002). Operads in Algebra, Topology and Physics. American Mathematical Society. ISBN 978-0-8218-4362-8.{{cite book}}: CS1 maint: multiple names: authors list (link)
Markl, Martin (June 2006). "Operads and PROPs". arXiv:math/0601129.
Stasheff, Jim (June–July 2004). "What Is...an Operad?" (PDF). Notices of the American Mathematical Society. 51 (6): 630–631. Retrieved 17 January 2008.
Loday, Jean-Louis; Vallette, Bruno (2012), Algebraic Operads (PDF), Grundlehren der Mathematischen Wissenschaften, vol. 346, Berlin, New York: Springer-Verlag, ISBN 978-3-642-30361-6
Zinbiel, Guillaume W. (2012), "Encyclopedia of types of algebras 2010", in Bai, Chengming; Guo, Li; Loday, Jean-Louis (eds.), Operads and universal algebra, Nankai Series in Pure, Applied Mathematics and Theoretical Physics, vol. 9, pp. 217–298, arXiv:1101.0267, Bibcode:2011arXiv1101.0267Z, ISBN 9789814365116
Fresse, Benoit (17 May 2017), Homotopy of Operads and Grothendieck-Teichmüller Groups, Mathematical Surveys and Monographs, American Mathematical Society, ISBN 978-1-4704-3480-9, MR 3643404, Zbl 1373.55014
Miguel A. Mendéz (2015). Set Operads in Combinatorics and Computer Science. SpringerBriefs in Mathematics. ISBN 978-3-319-11712-6.
Samuele Giraudo (2018). Nonsymmetric Operads in Combinatorics. Springer International Publishing. ISBN 978-3-030-02073-6.
== External links ==
operad at the nLab
https://golem.ph.utexas.edu/category/2011/05/an_operadic_introduction_to_en.html | Wikipedia/Operad_theory |
In universal algebra and mathematical logic, a term algebra is a freely generated algebraic structure over a given signature. For example, in a signature consisting of a single binary operation, the term algebra over a set X of variables is exactly the free magma generated by X. Other synonyms for the notion include absolutely free algebra and anarchic algebra.
From a category theory perspective, a term algebra is the initial object for the category of all X-generated algebras of the same signature, and this object, unique up to isomorphism, is called an initial algebra; it generates by homomorphic projection all algebras in the category.
A similar notion is that of a Herbrand universe in logic, usually used under this name in logic programming, which is (absolutely freely) defined starting from the set of constants and function symbols in a set of clauses. That is, the Herbrand universe consists of all ground terms: terms that have no variables in them.
An atomic formula or atom is commonly defined as a predicate applied to a tuple of terms; a ground atom is then a predicate in which only ground terms appear. The Herbrand base is the set of all ground atoms that can be formed from predicate symbols in the original set of clauses and terms in its Herbrand universe. These two concepts are named after Jacques Herbrand.
Term algebras also play a role in the semantics of abstract data types, where an abstract data type declaration provides the signature of a multi-sorted algebraic structure and the term algebra is a concrete model of the abstract declaration.
== Universal algebra ==
A type $\tau$ is a set of function symbols, with each having an associated arity (i.e. number of inputs). For any non-negative integer $n$, let $\tau_n$ denote the function symbols in $\tau$ of arity $n$. A constant is a function symbol of arity 0.
Let $\tau$ be a type, and let $X$ be a non-empty set of symbols, representing the variable symbols. (For simplicity, assume $X$ and $\tau$ are disjoint.) Then the set of terms $T(X)$ of type $\tau$ over $X$ is the set of all well-formed strings that can be constructed using the variable symbols of $X$ and the constants and operations of $\tau$. Formally, $T(X)$ is the smallest set such that:
$X\cup\tau_0\subseteq T(X)$ — each variable symbol from $X$ is a term in $T(X)$, and so is each constant symbol from $\tau_0$.
For all $n\geq 1$, for all function symbols $f\in\tau_n$ and terms $t_1,\ldots,t_n\in T(X)$, we have the string $f(t_1,\ldots,t_n)\in T(X)$ — given $n$ terms $t_1,\ldots,t_n$, the application of an $n$-ary function symbol $f$ to them represents again a term.
The term algebra $\mathcal{T}(X)$ of type $\tau$ over $X$ is, in summary, the algebra of type $\tau$ that maps each expression to its string representation. Formally, $\mathcal{T}(X)$ is defined as follows:
The domain of $\mathcal{T}(X)$ is $T(X)$.
For each nullary function $f$ in $\tau_0$, $f^{\mathcal{T}(X)}()$ is defined as the string $f$.
For all $n\geq 1$, for each n-ary function $f$ in $\tau$ and elements $t_1,\ldots,t_n$ in the domain, $f^{\mathcal{T}(X)}(t_1,\ldots,t_n)$ is defined as the string $f(t_1,\ldots,t_n)$.
A term algebra is called absolutely free because for any algebra $\mathcal{A}$ of type $\tau$, and for any function $g : X\to\mathcal{A}$, $g$ extends to a unique homomorphism $g^\ast : \mathcal{T}(X)\to\mathcal{A}$, which simply evaluates each term $t\in\mathcal{T}(X)$ to its corresponding value $g^\ast(t)\in\mathcal{A}$. Formally, for each $t\in\mathcal{T}(X)$:
If $t\in X$, then $g^\ast(t)=g(t)$.
If $t=f\in\tau_0$, then $g^\ast(t)=f^{\mathcal{A}}()$.
If $t=f(t_1,\ldots,t_n)$ where $f\in\tau_n$ and $n\geq 1$, then $g^\ast(t)=f^{\mathcal{A}}(g^\ast(t_1),\ldots,g^\ast(t_n))$.
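The three clauses translate directly into a recursive evaluator. The following Python sketch assumes a representation of my own choosing (terms as nested tuples `('f', t1, ..., tn)`, variables as plain strings); it is an illustration, not the article's notation:

```python
def eval_term(t, g, interp):
    """The unique homomorphic extension g* of an assignment g.

    g      : dict mapping variable symbols to values,
    interp : dict mapping function symbols to Python functions.
    """
    if isinstance(t, str):                      # clause 1: t is a variable
        return g[t]
    f, args = t[0], t[1:]                       # t = f(t_1, ..., t_n), n >= 0
    # clauses 2 and 3: interpret f in A, applied to the evaluated subterms
    return interp[f](*(eval_term(s, g, interp) for s in args))

# An algebra on the natural numbers with the usual operations (this matches
# the A_nat of the example in the next section).
interp_nat = {'0': lambda: 0, '1': lambda: 1,
              '+': lambda a, b: a + b, '*': lambda a, b: a * b}

# With g(x) = 7, the term (x + 1) * x evaluates to 56.
term = ('*', ('+', 'x', ('1',)), 'x')
assert eval_term(term, {'x': 7}, interp_nat) == 56
```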
== Example ==
As an example, a type inspired by integer arithmetic can be defined by $\tau_0=\{0,1\}$, $\tau_1=\{\}$, $\tau_2=\{+,*\}$, and $\tau_i=\{\}$ for each $i>2$.
The best-known algebra of type $\tau$ has the natural numbers as its domain and interprets $0$, $1$, $+$, and $*$ in the usual way; we refer to it as $\mathcal{A}_{nat}$.
For the example variable set $X=\{x,y\}$, we are going to investigate the term algebra $\mathcal{T}(X)$ of type $\tau$ over $X$.
First, the set $T(X)$ of terms of type $\tau$ over $X$ is considered.
We use red color to flag its members, which otherwise may be hard to recognize due to their uncommon syntactic form.
We have e.g. ${\color{red}x}\in T(X)$, since $x\in X$ is a variable symbol; ${\color{red}1}\in T(X)$, since $1\in\tau_0$ is a constant symbol; hence ${\color{red}+x1}\in T(X)$, since $+$ is a 2-ary function symbol; hence, in turn, ${\color{red}*+x1x}\in T(X)$, since $*$ is a 2-ary function symbol.
More generally, each string in $T(X)$ corresponds to a mathematical expression built from the admitted symbols and written in Polish prefix notation; for example, the term ${\color{red}*+x1x}$ corresponds to the expression $(x+1)*x$ in usual infix notation. No parentheses are needed to avoid ambiguities in Polish notation; e.g. the infix expression $x+(1*x)$ corresponds to the term ${\color{red}+x*1x}$.
To give some counter-examples, we have e.g. ${\color{red}z}\not\in T(X)$, since $z$ is neither an admitted variable symbol nor an admitted constant symbol; ${\color{red}3}\not\in T(X)$, for the same reason; ${\color{red}+1}\not\in T(X)$, since $+$ is a 2-ary function symbol, but is used here with only one argument term (viz. ${\color{red}1}$).
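The inductive definition can be turned directly into a membership test for $T(X)$. The following Python sketch is my own illustration for this example's signature; that it works in a single left-to-right pass reflects the fact that Polish notation needs no parentheses:

```python
VARIABLES = {'x', 'y'}       # X
CONSTANTS = {'0', '1'}       # tau_0
BINARY = {'+', '*'}          # tau_2

def is_term(s):
    """Membership test for T(X): consume the string left to right,
    tracking how many complete terms are still required."""
    need = 1
    for i, ch in enumerate(s):
        if ch in VARIABLES or ch in CONSTANTS:
            need -= 1
        elif ch in BINARY:
            need += 1        # fills one open slot, opens two new ones
        else:
            return False     # symbol not in the signature
        if need == 0 and i + 1 < len(s):
            return False     # complete term followed by trailing symbols
    return need == 0

assert is_term('+x1') and is_term('*+x1x')
assert not is_term('z') and not is_term('+1')
```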
Now that the term set $T(X)$ is established, we consider the term algebra $\mathcal{T}(X)$ of type $\tau$ over $X$. This algebra uses $T(X)$ as its domain, on which addition and multiplication need to be defined.
The addition function $+^{\mathcal{T}(X)}$ takes two terms $p$ and $q$ and returns the term ${\color{red}+}pq$; similarly, the multiplication function $*^{\mathcal{T}(X)}$ maps given terms $p$ and $q$ to the term ${\color{red}*}pq$. For example, $*^{\mathcal{T}(X)}({\color{red}+x1},{\color{red}x})$ evaluates to the term ${\color{red}*+x1x}$.
Informally, the operations $+^{\mathcal{T}(X)}$ and $*^{\mathcal{T}(X)}$ are both "sluggards" in that they just record what computation should be done, rather than doing it.
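In code, these "sluggard" operations are just string builders; a two-line sketch (same illustrative conventions as the membership test above):

```python
# The term algebra operations only record the computation as a string.
plus = lambda p, q: '+' + p + q    # +^{T(X)}
times = lambda p, q: '*' + p + q   # *^{T(X)}

assert times(plus('x', '1'), 'x') == '*+x1x'
```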
As an example of the unique extendability of a homomorphism, consider $g : X\to\mathcal{A}_{nat}$ defined by $g(x)=7$ and $g(y)=3$. Informally, $g$ defines an assignment of values to variable symbols, and once this is done, every term from $T(X)$ can be evaluated in a unique way in $\mathcal{A}_{nat}$.
For example,
$$\begin{array}{lll}&g^{*}({\color{red}+x1})\\=&g^{*}({\color{red}x})+g^{*}({\color{red}1})&{\text{since }}g^{*}{\text{ is a homomorphism}}\\=&g({\color{red}x})+g^{*}({\color{red}1})&{\text{since }}g^{*}{\text{ coincides on }}X{\text{ with }}g\\=&7+g^{*}({\color{red}1})&{\text{by definition of }}g\\=&7+1&{\text{since }}g^{*}{\text{ is a homomorphism}}\\=&8&{\text{according to the well-known arithmetical rules in }}{\mathcal{A}}_{nat}\end{array}$$
In a similar way, one obtains $g^{*}({\color{red}*+x1x})=\ldots=8*g({\color{red}x})=\ldots=56$.
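The calculation above can be replayed mechanically. A small Python sketch (my own, fitted to this example's signature) parses Polish-notation terms and applies the homomorphic extension $g^{*}$:

```python
def g_star(term, g):
    """Unique homomorphic extension of g to Polish-notation terms over A_nat."""
    def parse(i):
        ch = term[i]
        if ch in g:                     # variable symbol: use the assignment
            return g[ch], i + 1
        if ch in '01':                  # constant symbol
            return int(ch), i + 1
        a, j = parse(i + 1)             # binary symbol: read two subterms
        b, k = parse(j)
        return (a + b if ch == '+' else a * b), k
    value, end = parse(0)
    assert end == len(term)             # the whole string must be one term
    return value

g = {'x': 7, 'y': 3}
assert g_star('+x1', g) == 8
assert g_star('*+x1x', g) == 56
```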
== Herbrand base ==
The signature σ of a language is a triple <O, F, P> consisting of the alphabet of constants O, function symbols F, and predicates P. The Herbrand base of a signature σ consists of all ground atoms of σ: of all formulas of the form R(t1, ..., tn), where t1, ..., tn are terms containing no variables (i.e. elements of the Herbrand universe) and R is an n-ary relation symbol (i.e. predicate). In the case of logic with equality, it also contains all equations of the form t1 = t2, where t1 and t2 contain no variables.
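To make the construction tangible, here is a hedged Python sketch that enumerates the Herbrand universe up to a fixed term depth for a small made-up signature (the symbols are illustrative, not from the article):

```python
from itertools import product

constants = ['a', 'b']
functions = {'f': 1, 'g': 2}      # function symbol -> arity
predicates = {'P': 1, 'R': 2}     # predicate symbol -> arity

def herbrand_universe(depth):
    """All ground terms buildable in at most `depth` rounds of application."""
    terms = set(constants)
    for _ in range(depth):
        new = set()
        for f, n in functions.items():
            for args in product(terms, repeat=n):
                new.add(f'{f}({",".join(args)})')
        terms |= new
    return terms

def herbrand_base(depth):
    """All ground atoms over the universe enumerated so far."""
    terms = sorted(herbrand_universe(depth))
    return {f'{p}({",".join(args)})'
            for p, n in predicates.items()
            for args in product(terms, repeat=n)}

print(sorted(herbrand_universe(1)))
# ['a', 'b', 'f(a)', 'f(b)', 'g(a,a)', 'g(a,b)', 'g(b,a)', 'g(b,b)']
assert 'P(f(a))' in herbrand_base(1)
```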
== Decidability ==
Term algebras can be shown decidable using quantifier elimination. The complexity of the decision problem is in NONELEMENTARY because binary constructors are injective and thus pairing functions.
== See also ==
Answer-set programming
Clone (algebra)
Domain of discourse / Universe (mathematics)
Rabin's tree theorem (the monadic theory of the infinite complete binary tree is decidable)
Initial algebra
Abstract data type
Term rewriting system
== References ==
== Further reading ==
Joel Berman (2005). "The structure of free algebras". In Structural Theory of Automata, Semigroups, and Universal Algebra. Springer. pp. 47–76. MR2210125.
== External links ==
Weisstein, Eric W. "Herbrand Universe". MathWorld. | Wikipedia/Term_algebra |
Science is the peer-reviewed academic journal of the American Association for the Advancement of Science (AAAS) and one of the world's top academic journals. It was first published in 1880, is currently circulated weekly and has a subscriber base of around 130,000. Because institutional subscriptions and online access serve a larger audience, its estimated readership is over 400,000 people.
Science is based in Washington, D.C., United States, with a second office in Cambridge, UK.
== Contents ==
The major focus of the journal is publishing important original scientific research and research reviews, but Science also publishes science-related news, opinions on science policy and other matters of interest to scientists and others who are concerned with the wide implications of science and technology. Unlike most scientific journals, which focus on a specific field, Science and its rival Nature cover the full range of scientific disciplines. According to the Journal Citation Reports, Science's 2023 impact factor was 44.7.
Studies of methodological quality and reliability have found that some high-prestige journals including Science "publish significantly substandard structures", and overall "reliability of published research works in several fields may be decreasing with increasing journal rank".
Although it is the journal of the AAAS, membership in the AAAS is not required to publish in Science. Papers are accepted from authors around the world. Competition to publish in Science is very intense, as an article published in such a highly cited journal can lead to attention and career advancement for the authors. Fewer than 7% of articles submitted are accepted for publication.
== History ==
Science was founded by New York journalist John Michels in 1880 with financial support from Thomas Edison and later from Alexander Graham Bell. (Edison received favorable editorial treatment in return, without disclosure of the financial relationship, at a time when his reputation was suffering due to delays producing the promised commercially viable light bulb.) However, the journal never gained enough subscribers to succeed and ended publication in March 1882. Alexander Graham Bell and Gardiner Greene Hubbard bought the magazine rights and hired young entomologist Samuel H. Scudder to resurrect the journal one year later. They had some success while covering the meetings of prominent American scientific societies, including the AAAS. However, by 1894, Science was again in financial difficulty and was sold to psychologist James McKeen Cattell for $500 (equivalent to $18,170 in 2024).
In an agreement worked out by Cattell and AAAS secretary Leland O. Howard, Science became the journal of the American Association for the Advancement of Science in 1900. During the early part of the 20th century, important articles published in Science included papers on fruit fly genetics by Thomas Hunt Morgan, gravitational lensing by Albert Einstein, and spiral nebulae by Edwin Hubble. After Cattell died in 1944, the ownership of the journal was transferred to the AAAS.
After Cattell's death in 1944, the journal lacked a consistent editorial presence until Graham DuShane became editor in 1956. In 1958, under DuShane's leadership, Science absorbed The Scientific Monthly, thus increasing the journal's circulation by over 62% from 38,000 to more than 61,000. Physicist Philip Abelson, a co-discoverer of neptunium, served as editor from 1962 to 1984. Under Abelson the efficiency of the review process was improved and the publication practices were brought up to date. During this time, papers on the Apollo program missions and some of the earliest reports on AIDS were published.
Biochemist Daniel E. Koshland Jr. served as editor from 1985 until 1995. From 1995 until 2000, neuroscientist Floyd E. Bloom held that position. Biologist Donald Kennedy became the editor of Science in 2000. Biochemist Bruce Alberts took his place in March 2008. Geophysicist Marcia McNutt became editor-in-chief in June 2013. During her tenure the family of journals expanded to include Science Robotics and Science Immunology, and open access publishing with Science Advances. Jeremy M. Berg became editor-in-chief on July 1, 2016. Former Washington University in St. Louis Provost Holden Thorp was named editor-in-chief on Monday, August 19, 2019.
In February 2001, draft results of the human genome were simultaneously published by Nature and Science with Science publishing the Celera Genomics paper and Nature publishing the publicly funded Human Genome Project. In 2007, Science (together with Nature) received the Prince of Asturias Award for Communications and Humanity. In 2015, Rush D. Holt Jr., chief executive officer of the AAAS and executive publisher of Science, stated that the journal was becoming increasingly international: "[I]nternationally co-authored papers are now the norm—they represent almost 60 percent of the papers. In 1992, it was slightly less than 20 percent."
== Availability ==
The latest editions of the journal are available online, through the main journal website, only to subscribers, AAAS members, and for delivery to IP addresses at institutions that subscribe; students, K–12 teachers, and some others can subscribe at a reduced fee. However, research articles published after 1997 are available free (with online registration) one year after they are published, i.e. delayed open access. Significant public-health related articles are also available free, sometimes immediately after publication. AAAS members may also access the pre-1997 Science archives at the Science website, where it is called "Science Classic".
The journal also participates in initiatives that provide free or low-cost access to readers in developing countries, including HINARI, OARE, AGORA, and Scidev.net.
Other features of the Science website include the free "ScienceNow" section with "up to the minute news from science", and "ScienceCareers", which provides free career resources for scientists and engineers. Science Express (Sciencexpress) provides advance electronic publication of selected Science papers.
== Affiliations ==
Science received funding for COVID-19-related coverage from the Pulitzer Center and the Heising-Simons Foundation.
== See also ==
AAAS publications
Breakthrough of the Year
List of scientific journals
== References ==
=== AAAS references ===
== External links ==
Official website | Wikipedia/Science_(journal) |
In mathematics, especially in the fields of universal algebra and graph theory, a graph algebra is a way of giving a directed graph an algebraic structure. It was introduced by McNulty and Shallon, and has seen many uses in the field of universal algebra since then.
== Definition ==
Let D = (V, E) be a directed graph, and 0 an element not in V. The graph algebra associated with D has underlying set $V\cup\{0\}$, and is equipped with a multiplication defined by the rules
xy = x if $x,y\in V$ and $(x,y)\in E$,
xy = 0 if $x,y\in V\cup\{0\}$ and $(x,y)\notin E$.
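A direct Python sketch of this definition (an illustrative encoding of my own):

```python
class GraphAlgebra:
    """The graph algebra of a directed graph D = (V, E), with 0 adjoined."""

    def __init__(self, vertices, edges):
        self.V = set(vertices)
        self.E = set(edges)          # set of pairs (x, y)

    def mul(self, x, y):
        """xy = x if (x, y) is an edge, and 0 otherwise."""
        if x in self.V and y in self.V and (x, y) in self.E:
            return x
        return 0

# D: a -> b, b -> b
A = GraphAlgebra({'a', 'b'}, {('a', 'b'), ('b', 'b')})
assert A.mul('a', 'b') == 'a'
assert A.mul('b', 'a') == 0          # no edge b -> a
assert A.mul('a', 0) == 0            # 0 is absorbing
```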
== Applications ==
This notion has made it possible to use the methods of graph theory in universal algebra and several other areas of discrete mathematics and computer science. Graph algebras have been used, for example, in constructions concerning dualities, equational theories, flatness, groupoid rings, topologies, varieties, finite-state machines, tree languages and tree automata, etc.
== See also ==
Group algebra (disambiguation)
Incidence algebra
Path algebra
== Citations ==
== Works cited ==
== Further reading == | Wikipedia/Graph_algebra |
First-order equational logic consists of quantifier-free terms of ordinary first-order logic, with equality as the only predicate symbol. The model theory of this logic was developed into universal algebra by Birkhoff, Grätzer, and Cohn. It was later made into a branch of category theory by Lawvere ("algebraic theories").
The terms of equational logic are built up from variables and constants using function symbols (or operations).
== Syllogism ==
The logic has four inference rules: Substitution, Leibniz, Transitivity, and Equanimity (each is illustrated in the Proof section below). The following notation is used in stating them.
$P[x:=E]$ denotes textual substitution of expression $E$ for variable $x$ in expression $P$. Next, $b=c$ denotes equality, for $b$ and $c$ of the same type, while $b\equiv c$, or equivalence, is defined only for $b$ and $c$ of type boolean. For $b$ and $c$ of type boolean, $b=c$ and $b\equiv c$ have the same meaning.
== Proof ==
We explain how the four inference rules are used in proofs, using the proof of $\lnot p\equiv p\equiv\bot$. The logic symbols $\top$ and $\bot$ indicate "true" and "false," respectively, and $\lnot$ indicates "not." The theorem numbers refer to theorems of A Logical Approach to Discrete Math.
$$\begin{array}{lcl}(0)&&\lnot p\equiv p\equiv\bot\\(1)&=&\quad\left\langle\;(3.9),\;\lnot(p\equiv q)\equiv\lnot p\equiv q,\;{\text{with }}q:=p\;\right\rangle\\(2)&&\lnot(p\equiv p)\equiv\bot\\(3)&=&\quad\left\langle\;{\text{Identity of }}\equiv\;(3.9),\;{\text{with }}q:=p\;\right\rangle\\(4)&&\lnot\top\equiv\bot&\quad(3.8)\end{array}$$
First, lines $(0)$–$(2)$ show a use of inference rule Leibniz: $(0)=(2)$ is the conclusion of Leibniz, and its premise $\lnot(p\equiv p)\equiv\lnot p\equiv p$ is given on line $(1)$. In the same way, the equality on lines $(2)$–$(4)$ is substantiated using Leibniz.
The "hint" on line $(1)$ is supposed to give a premise of Leibniz, showing what substitution of equals for equals is being used. This premise is theorem $(3.9)$ with the substitution $p:=q$, i.e.
$$(\lnot(p\equiv q)\equiv\lnot p\equiv q)[p:=q]$$
This shows how inference rule Substitution is used within hints.
From $(0)=(2)$ and $(2)=(4)$, we conclude by inference rule Transitivity that $(0)=(4)$. This shows how Transitivity is used.
Finally, note that line $(4)$, $\lnot\top\equiv\bot$, is a theorem, as indicated by the hint to its right. Hence, by inference rule Equanimity, we conclude that line $(0)$ is also a theorem. And $(0)$ is what we wanted to prove.
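Such equational proofs can also be cross-checked semantically. A hedged sketch using sympy (a library choice of my own, not part of the source; note that $\equiv$ is encoded as nested Equivalent, matching its associative reading, since sympy's n-ary Equivalent means mutual equivalence instead):

```python
import sympy as sp
from sympy.logic.boolalg import Not, Equivalent

p, q = sp.symbols('p q')

def valid(formula):
    """A propositional formula is a theorem iff its negation is unsatisfiable."""
    return sp.satisfiable(Not(formula)) is False

# Line (0): ¬p ≡ p ≡ ⊥, read associatively as (¬p ≡ p) ≡ ⊥.
theorem = Equivalent(Equivalent(Not(p), p), sp.false)
assert valid(theorem)

# Theorem (3.9), ¬(p ≡ q) ≡ (¬p ≡ q), and its instance under q := p
# (inference rule Substitution).
thm_39 = Equivalent(Not(Equivalent(p, q)), Equivalent(Not(p), q))
assert valid(thm_39)
assert valid(thm_39.subs(q, p))
```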
== See also ==
Theory of pure equality
== References ==
== External links ==
Sakharov, Alex. "Equational Logic." From MathWorld--A Wolfram Web Resource, created by Eric W. Weisstein. | Wikipedia/Equational_logic |
In universal algebra, an abstract algebra A is called simple if and only if it has no nontrivial congruence relations, or equivalently, if every homomorphism with domain A is either injective or constant.
As congruences on rings are characterized by their ideals, this notion is a straightforward generalization of the notion from ring theory: a ring is simple in the sense that it has no nontrivial ideals if and only if it is simple in the sense of universal algebra. The same remark applies with respect to groups and normal subgroups; hence the universal notion is also a generalization of a simple group (it is a matter of convention whether a one-element algebra should be or should not be considered simple, hence only in this special case the notions might not match).
A theorem by Roberto Magari in 1969 asserts that every variety contains a simple algebra.
== See also ==
Simple group
Simple ring
Central simple algebra
== References == | Wikipedia/Simple_algebra_(universal_algebra) |
Informally in mathematical logic, an algebraic theory is a theory that uses axioms stated entirely in terms of equations between terms with free variables. Inequalities and quantifiers are specifically disallowed. Sentential logic is the subset of first-order logic involving only algebraic sentences.
The notion is very close to the notion of algebraic structure, which, arguably, may be just a synonym.
Saying that a theory is algebraic is a stronger condition than saying it is elementary.
== Informal interpretation ==
An algebraic theory consists of a collection of n-ary functional terms with additional rules (axioms).
For example, the theory of groups is an algebraic theory because it has three functional terms: a binary operation a × b, a nullary operation 1 (neutral element), and a unary operation x ↦ x⁻¹ with the rules of associativity, neutrality and inverses respectively. Other examples include:
the theory of semigroups
the theory of lattices
the theory of rings
This is opposed to geometric theory, which involves partial functions (or binary relationships) or existential quantifiers; see e.g. Euclidean geometry, where the existence of points or lines is postulated.
== Category-based model-theoretical interpretation ==
An algebraic theory T is a category whose objects are natural numbers 0, 1, 2,..., and which, for each n, has an n-tuple of morphisms:
proj_i : n → 1, i = 1, ..., n
This allows interpreting n as a cartesian product of n copies of 1.
Example: Let's define an algebraic theory T taking hom(n, m) to be m-tuples of polynomials in n free variables X_1, ..., X_n with integer coefficients and with substitution as composition. In this case proj_i is the same as X_i. This theory T is called the theory of commutative rings.
In an algebraic theory, any morphism n → m can be described as m morphisms of signature n → 1. These latter morphisms are called n-ary operations of the theory.
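As a hedged sketch (my own, using sympy) of composition-by-substitution in this theory of commutative rings:

```python
import sympy as sp

def X(i):
    """The i-th free variable X_i; proj_i is the polynomial X_i."""
    return sp.Symbol(f'X{i}')

def compose(f, g):
    """Composition g ∘ f in T: f : n -> m and g : m -> k are tuples of
    polynomials; substitute the components of f for the variables of g."""
    subs = {X(i + 1): f[i] for i in range(len(f))}
    return tuple(sp.expand(p.subs(subs, simultaneous=True)) for p in g)

# f : 1 -> 2 duplicates the input; g : 2 -> 1 multiplies. g ∘ f is squaring.
f = (X(1), X(1))
g = (X(1) * X(2),)
assert compose(f, g) == (X(1)**2,)
```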
If E is a category with finite products, the full subcategory Alg(T, E) of the category of functors [T, E] consisting of those functors that preserve finite products is called the category of T-models or T-algebras.
Note that for the case of operation 2 → 1, the appropriate algebra A will define a morphism
A(2) ≈ A(1) × A(1) → A(1)
== See also ==
Algebraic definition
== References ==
Lawvere, F. W., 1963, Functorial Semantics of Algebraic Theories, Proceedings of the National Academy of Sciences 50, No. 5 (November 1963), 869-872
Adámek, J., Rosický, J., Vitale, E. M., Algebraic Theories. A Categorical Introduction To General Algebra
Kock, A., Reyes, G., Doctrines in categorical logic, in Handbook of Mathematical Logic, ed. J. Barwise, North Holland 1977
Algebraic theory at the nLab | Wikipedia/Algebraic_theory |
A theory of everything (TOE), final theory, ultimate theory, unified field theory, or master theory is a hypothetical singular, all-encompassing, coherent theoretical framework of physics that fully explains and links together all aspects of the universe. Finding a theory of everything is one of the major unsolved problems in physics.
Over the past few centuries, two theoretical frameworks have been developed that, together, most closely resemble a theory of everything. These two theories upon which all modern physics rests are general relativity and quantum mechanics. General relativity is a theoretical framework that only focuses on gravity for understanding the universe in regions of both large scale and high mass: planets, stars, galaxies, clusters of galaxies, etc. On the other hand, quantum mechanics is a theoretical framework that focuses primarily on three non-gravitational forces for understanding the universe in regions of both very small scale and low mass: subatomic particles, atoms, and molecules. Quantum mechanics successfully implemented the Standard Model that describes the three non-gravitational forces: strong nuclear, weak nuclear, and electromagnetic force, as well as all observed elementary particles.
General relativity and quantum mechanics have been repeatedly validated in their separate fields of relevance. Since the usual domains of applicability of general relativity and quantum mechanics are so different, most situations require that only one of the two theories be used. The two theories are considered incompatible in regions of extremely small scale – the Planck scale – such as those that exist within a black hole or during the beginning stages of the universe (i.e., the moment immediately following the Big Bang). To resolve the incompatibility, a theoretical framework revealing a deeper underlying reality, unifying gravity with the other three interactions, must be discovered to harmoniously integrate the realms of general relativity and quantum mechanics into a seamless whole: a theory of everything may be defined as a comprehensive theory that, in principle, would be capable of describing all physical phenomena in the universe.
In pursuit of this goal, quantum gravity has become one area of active research. One example is string theory, which evolved into a candidate for the theory of everything, but not without drawbacks (most notably, its apparent lack of currently testable predictions) and controversy. String theory posits that at the beginning of the universe (up to 10⁻⁴³ seconds after the Big Bang), the four fundamental forces were once a single fundamental force. According to string theory, every particle in the universe, at its most ultramicroscopic level (Planck length), consists of varying combinations of vibrating strings (or strands) with preferred patterns of vibration. String theory further claims that it is through these specific oscillatory patterns of strings that a particle of unique mass and force charge is created (that is to say, the electron is a type of string that vibrates one way, while the up quark is a type of string vibrating another way, and so forth). String theory/M-theory proposes six or seven dimensions of spacetime in addition to the four common dimensions for a ten- or eleven-dimensional spacetime.
== Name ==
Initially, the term theory of everything was used with an ironic reference to various overgeneralized theories. For example, a grandfather of Ijon Tichy – a character from a cycle of Stanisław Lem's science fiction stories of the 1960s – was known to work on the "General Theory of Everything". Physicist Harald Fritzsch used the term in his 1977 lectures in Varenna. Physicist John Ellis claims to have introduced the acronym "TOE" into the technical literature in an article in Nature in 1986. Over time, the term stuck in popularizations of theoretical physics research.
== Historical antecedents ==
=== Antiquity to 19th century ===
Many ancient cultures such as Babylonian astronomers and Indian astronomy studied the pattern of the Seven Sacred Luminaires/Classical Planets against the background of stars, with their interest being to relate celestial movement to human events (astrology), and the goal being to predict events by recording events against a time measure and then look for recurrent patterns. The debate between the universe having either a beginning or eternal cycles can be traced to ancient Babylonia. Hindu cosmology posits that time is infinite with a cyclic universe, where the current universe was preceded and will be followed by an infinite number of universes. Time scales mentioned in Hindu cosmology correspond to those of modern scientific cosmology. Its cycles run from an ordinary day and night to a day and night of Brahma, 8.64 billion years long.
The natural philosophy of atomism appeared in several ancient traditions. In ancient Greek philosophy, the pre-Socratic philosophers speculated that the apparent diversity of observed phenomena was due to a single type of interaction, namely the motions and collisions of atoms. The concept of 'atom' proposed by Democritus was an early philosophical attempt to unify phenomena observed in nature. The concept of 'atom' also appeared in the Nyaya-Vaisheshika school of ancient Indian philosophy.
Archimedes was possibly the first philosopher to have described nature with axioms (or principles) and then deduce new results from them. Any "theory of everything" is similarly expected to be based on axioms and to deduce all observable phenomena from them.
Following earlier atomistic thought, the mechanical philosophy of the 17th century posited that all forces could be ultimately reduced to contact forces between the atoms, then imagined as tiny solid particles.
In the late 17th century, Isaac Newton's description of the long-distance force of gravity implied that not all forces in nature result from things coming into contact. Newton's work in his Mathematical Principles of Natural Philosophy dealt with this in a further example of unification, in this case unifying Galileo's work on terrestrial gravity, Kepler's laws of planetary motion and the phenomenon of tides by explaining these apparent actions at a distance under one single law: the law of universal gravitation. Newton achieved the first great unification in physics, and he further is credited with laying the foundations of future endeavors for a grand unified theory.
In 1814, building on these results, Laplace famously suggested that a sufficiently powerful intellect could, if it knew the position and velocity of every particle at a given time, along with the laws of nature, calculate the position of any particle at any other time:
An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
Laplace thus envisaged a combination of gravitation and mechanics as a theory of everything. Modern quantum mechanics implies that uncertainty is inescapable, and thus that Laplace's vision has to be amended: a theory of everything must include gravitation and quantum mechanics. Even ignoring quantum mechanics, chaos theory is sufficient to guarantee that the future of any sufficiently complex mechanical or astronomical system is unpredictable.
In 1820, Hans Christian Ørsted discovered a connection between electricity and magnetism, triggering decades of work that culminated in 1865, in James Clerk Maxwell's theory of electromagnetism, which achieved the second great unification in physics. During the 19th and early 20th centuries, it gradually became apparent that many common examples of forces – contact forces, elasticity, viscosity, friction, and pressure – result from electrical interactions between the smallest particles of matter.
In his experiments of 1849–1850, Michael Faraday was the first to search for a unification of gravity with electricity and magnetism. However, he found no connection.
In 1900, David Hilbert published a famous list of mathematical problems. In Hilbert's sixth problem, he challenged researchers to find an axiomatic basis to all of physics. In this problem he thus asked for what today would be called a theory of everything.
=== Early 20th century ===
In the late 1920s, the then new quantum mechanics showed that the chemical bonds between atoms were examples of (quantum) electrical forces, justifying Dirac's boast that "the underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known".
After 1915, when Albert Einstein published the theory of gravity (general relativity), the search for a unified field theory combining gravity with electromagnetism began with renewed interest. In Einstein's day, the strong and the weak forces had not yet been discovered, yet he found the potential existence of two other distinct forces, gravity and electromagnetism, far more alluring. This launched his 40-year voyage in search of the so-called "unified field theory" that he hoped would show that these two forces are really manifestations of one grand, underlying principle. During the last few decades of his life, this ambition alienated Einstein from the mainstream of physics, which was instead far more excited about the emerging framework of quantum mechanics. Einstein wrote to a friend in the early 1940s, "I have become a lonely old chap who is mainly known because he doesn't wear socks and who is exhibited as a curiosity on special occasions." Prominent contributors were Gunnar Nordström, Hermann Weyl, Arthur Eddington, David Hilbert, Theodor Kaluza, Oskar Klein (see Kaluza–Klein theory), and most notably, Albert Einstein and his collaborators. Einstein searched in earnest for, but ultimately failed to find, a unifying theory (see Einstein–Maxwell–Dirac equations).
=== Late 20th century and the nuclear interactions ===
In the 20th century, the search for a unifying theory was interrupted by the discovery of the strong and weak nuclear forces, which differ both from gravity and from electromagnetism. A further hurdle was the acceptance that in a theory of everything, quantum mechanics had to be incorporated from the outset, rather than emerging as a consequence of a deterministic unified theory, as Einstein had hoped.
Gravity and electromagnetism are able to coexist as entries in a list of classical forces, but for many years it seemed that gravity could not be incorporated into the quantum framework, let alone unified with the other fundamental forces. For this reason, work on unification, for much of the 20th century, focused on understanding the three forces described by quantum mechanics: electromagnetism and the weak and strong forces. The first two were combined in 1967–1968 by Sheldon Glashow, Steven Weinberg, and Abdus Salam into the electroweak force.
Electroweak unification is a broken symmetry: the electromagnetic and weak forces appear distinct at low energies because the particles carrying the weak force, the W and Z bosons, have non-zero masses (80.4 GeV/c² and 91.2 GeV/c², respectively), whereas the photon, which carries the electromagnetic force, is massless. At higher energies W bosons and Z bosons can be created easily and the unified nature of the force becomes apparent.
While the strong and electroweak forces coexist under the Standard Model of particle physics, they remain distinct. Thus, the pursuit of a theory of everything remained unsuccessful: neither a unification of the strong and electroweak forces – which Laplace would have called 'contact forces' – nor a unification of these forces with gravitation had been achieved.
== Modern physics ==
=== Conventional sequence of theories ===
A theory of everything would unify all the fundamental interactions of nature: gravitation, the strong interaction, the weak interaction, and electromagnetism. Because the weak interaction can transform elementary particles from one kind into another, the theory of everything should also predict all the different kinds of particles possible. The usual assumed path of theories is given in the following graph, where each unification step leads one level up on the graph.
In this graph, electroweak unification occurs at around 100 GeV, grand unification is predicted to occur at 10^16 GeV, and unification of the GUT force with gravity is expected at the Planck energy, roughly 10^19 GeV.
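As a quick sanity check on the scales just quoted, the Planck energy can be computed directly from the fundamental constants. The short Python sketch below (constants are standard round CODATA values) reproduces the rough 10^19 GeV figure:

```python
# Back-of-envelope check of the Planck energy quoted above,
# E_Planck = sqrt(hbar * c^5 / G), expressed in GeV.
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
eV = 1.602176634e-19     # J per electronvolt

E_planck_J = (hbar * c**5 / G) ** 0.5
E_planck_GeV = E_planck_J / eV / 1e9
print(f"Planck energy ~ {E_planck_GeV:.3e} GeV")  # ~1.22e19 GeV
```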
Several Grand Unified Theories (GUTs) have been proposed to unify electromagnetism and the weak and strong forces. Grand unification would imply the existence of an electronuclear force; it is expected to set in at energies of the order of 10^16 GeV, far greater than could be reached by any currently feasible particle accelerator. Although the simplest grand unified theories have been experimentally ruled out, the idea of a grand unified theory, especially when linked with supersymmetry, remains a favorite candidate in the theoretical physics community. Supersymmetric grand unified theories seem plausible not only for their theoretical "beauty", but because they naturally produce large quantities of dark matter, and because the inflationary force may be related to grand unified theory physics (although it does not seem to form an inevitable part of the theory). Yet grand unified theories are clearly not the final answer; both the current standard model and all proposed GUTs are quantum field theories which require the problematic technique of renormalization to yield sensible answers. This is usually regarded as a sign that these are only effective field theories, omitting crucial phenomena relevant only at very high energies.
The final step in the graph requires resolving the separation between quantum mechanics and gravitation, often equated with general relativity. Numerous researchers concentrate their efforts on this specific step; nevertheless, no accepted theory of quantum gravity, and thus no accepted theory of everything, has emerged with observational evidence. It is usually assumed that the theory of everything will also solve the remaining problems of grand unified theories.
In addition to explaining the forces listed in the graph, a theory of everything may also explain the status of at least two candidate forces suggested by modern cosmology: an inflationary force and dark energy. Furthermore, cosmological experiments also suggest the existence of dark matter, supposedly composed of fundamental particles outside the scheme of the standard model. However, the existence of these forces and particles has not been proven.
=== String theory and M-theory ===
Since the 1990s, some physicists such as Edward Witten believe that 11-dimensional M-theory, which is described in some limits by one of the five perturbative superstring theories, and in another by the maximally-supersymmetric eleven-dimensional supergravity, is the theory of everything. There is no widespread consensus on this issue.
One remarkable property of string/M-theory is that seven extra dimensions are required for the theory's consistency, on top of the four dimensions in our universe. In this regard, string theory can be seen as building on the insights of the Kaluza–Klein theory, in which it was realized that applying general relativity to a 5-dimensional universe, with one space dimension small and curled up, looks from the 4-dimensional perspective like the usual general relativity together with Maxwell's electrodynamics. This lent credence to the idea of unifying gauge and gravity interactions, and to extra dimensions, but did not address the detailed experimental requirements. Another important property of string theory is its supersymmetry, which together with extra dimensions are the two main proposals for resolving the hierarchy problem of the standard model, which is (roughly) the question of why gravity is so much weaker than any other force. The extra-dimensional solution involves allowing gravity to propagate into the other dimensions while keeping other forces confined to a 4-dimensional spacetime, an idea that has been realized with explicit stringy mechanisms.
Research into string theory has been encouraged by a variety of theoretical and experimental factors. On the experimental side, the particle content of the standard model supplemented with neutrino masses fits into a spinor representation of SO(10), a subgroup of E8 that routinely emerges in string theory, such as in heterotic string theory or (sometimes equivalently) in F-theory. String theory has mechanisms that may explain why fermions come in three hierarchical generations, and explain the mixing rates between quark generations. On the theoretical side, it has begun to address some of the key questions in quantum gravity, such as resolving the black hole information paradox, counting the correct entropy of black holes and allowing for topology-changing processes. It has also led to many insights in pure mathematics and in ordinary, strongly-coupled gauge theory due to the Gauge/String duality.
In the late 1990s, it was noted that one major hurdle in this endeavor is that the number of possible 4-dimensional universes is incredibly large. The small, "curled up" extra dimensions can be compactified in an enormous number of different ways (one estimate is 10^500), each of which leads to different properties for the low-energy particles and forces. This array of models is known as the string theory landscape.
One proposed solution is that many or all of these possibilities are realized in one or another of a huge number of universes, but that only a small number of them are habitable. Hence what we normally conceive as the fundamental constants of the universe are ultimately the result of the anthropic principle rather than dictated by theory. This has led to criticism of string theory, arguing that it cannot make useful (i.e., original, falsifiable, and verifiable) predictions and regarding it as a pseudoscience/philosophy. Others disagree, and string theory remains an active topic of investigation in theoretical physics.
=== Loop quantum gravity ===
Current research on loop quantum gravity may eventually play a fundamental role in a theory of everything, but that is not its primary aim. Loop quantum gravity also introduces a lower bound on the possible length scales.
There have been recent claims that loop quantum gravity may be able to reproduce features resembling the Standard Model. So far only the first generation of fermions (leptons and quarks), with correct parity properties, has been modelled by Sundance Bilson-Thompson using preons constituted of braids of spacetime as the building blocks. However, there is no derivation of the Lagrangian that would describe the interactions of such particles, nor is it possible to show that such particles are fermions, nor that the gauge groups or interactions of the Standard Model are realised. Use of quantum computing concepts made it possible to demonstrate that the particles are able to survive quantum fluctuations.
This model leads to an interpretation of electric and color charge as topological quantities (electric as number and chirality of twists carried on the individual ribbons and colour as variants of such twisting for fixed electric charge).
Bilson-Thompson's original paper suggested that the higher-generation fermions could be represented by more complicated braidings, although explicit constructions of these structures were not given. The electric charge, color, and parity properties of such fermions would arise in the same way as for the first generation. The model was expressly generalized for an infinite number of generations and for the weak force bosons (but not for photons or gluons) in a 2008 paper by Bilson-Thompson, Hackett, Kauffman and Smolin.
=== Other attempts ===
Among other attempts to develop a theory of everything is the theory of causal fermion systems, giving the two current physical theories (general relativity and quantum field theory) as limiting cases.
Another theory is called Causal Sets. Like some of the approaches mentioned above, its direct goal is not necessarily to achieve a theory of everything but primarily a working theory of quantum gravity, which might eventually include the standard model and become a candidate for a theory of everything. Its founding principle is that spacetime is fundamentally discrete and that the spacetime events are related by a partial order. This partial order has the physical meaning of the causality relations between relative past and future distinguishing spacetime events.
Causal dynamical triangulation does not assume any pre-existing arena (dimensional space), but rather attempts to show how the spacetime fabric itself evolves.
Another attempt may be related to ER=EPR, a conjecture in physics stating that entangled particles are connected by a wormhole (or Einstein–Rosen bridge).
=== Present status ===
At present, there is no candidate theory of everything that includes the standard model of particle physics and general relativity and that, at the same time, is able to calculate the fine-structure constant or the mass of the electron. Most particle physicists expect that the outcome of ongoing experiments – the search for new particles at the large particle accelerators and for dark matter – is needed in order to provide further input for a theory of everything.
== Arguments against ==
In parallel to the intense search for a theory of everything, various scholars have debated the possibility of its discovery.
=== Gödel's incompleteness theorem ===
A number of scholars claim that Gödel's incompleteness theorem suggests that attempts to construct a theory of everything are bound to fail. Gödel's theorem, informally stated, asserts that any formal theory sufficient to express elementary arithmetical facts and strong enough for them to be proved is either inconsistent (both a statement and its denial can be derived from its axioms) or incomplete, in the sense that there is a true statement that can't be derived in the formal theory.
Stanley Jaki, in his 1966 book The Relevance of Physics, pointed out that, because a "theory of everything" will certainly be a consistent non-trivial mathematical theory, it must be incomplete. He claims that this dooms searches for a deterministic theory of everything.
Freeman Dyson has stated that "Gödel's theorem implies that pure mathematics is inexhaustible. No matter how many problems we solve, there will always be other problems that cannot be solved within the existing rules. […] Because of Gödel's theorem, physics is inexhaustible too. The laws of physics are a finite set of rules, and include the rules for doing mathematics, so that Gödel's theorem applies to them."
Stephen Hawking was originally a believer in the Theory of Everything, but after considering Gödel's Theorem, he concluded that one was not obtainable. "Some people will be very disappointed if there is not an ultimate theory that can be formulated as a finite number of principles. I used to belong to that camp, but I have changed my mind."
Jürgen Schmidhuber (1997) has argued against this view; he asserts that Gödel's theorems are irrelevant for computable physics. In 2000, Schmidhuber explicitly constructed limit-computable, deterministic universes whose pseudo-randomness based on undecidable, Gödel-like halting problems is extremely hard to detect but does not prevent formal theories of everything describable by very few bits of information.
A related critique was offered by Solomon Feferman and others. Douglas S. Robertson offers Conway's Game of Life as an example: the underlying rules are simple and complete, but there are formally undecidable questions about the game's behaviors. Analogously, it may (or may not) be possible to completely state the underlying rules of physics with a finite number of well-defined laws, but there is little doubt that there are questions about the behavior of physical systems which are formally undecidable on the basis of those underlying laws.
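Robertson's example is easy to make concrete: the complete rule set of the Game of Life fits in a few lines of code, yet questions about the long-run behavior of patterns evolving under these rules can be formally undecidable. The following minimal Python sketch (the coordinates and glider pattern are the standard textbook ones) implements the full rules:

```python
# Conway's Game of Life in a few lines: the complete rule set mentioned
# above. That such a short, fully specified rule still admits formally
# undecidable questions about long-run behavior is the point of
# Robertson's analogy.
from collections import Counter

def step(live):
    """live: set of (x, y) cells; returns the next generation."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the glider reappears, shifted by (1, 1)
```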
Since most physicists would consider the statement of the underlying rules to suffice as the definition of a "theory of everything", most physicists argue that Gödel's Theorem does not mean that a theory of everything cannot exist. On the other hand, the scholars invoking Gödel's Theorem appear, at least in some cases, to be referring not to the underlying rules, but to the understandability of the behavior of all physical systems, as when Hawking mentions arranging blocks into rectangles, turning the computation of prime numbers into a physical question. This definitional discrepancy may explain some of the disagreement among researchers.
=== Fundamental limits in accuracy ===
No physical theory to date is believed to be precisely accurate. Instead, physics has proceeded by a series of "successive approximations" allowing more and more accurate predictions over a wider and wider range of phenomena. Some physicists believe that it is therefore a mistake to confuse theoretical models with the true nature of reality, and hold that the series of approximations will never terminate in the "truth". Einstein himself expressed this view on occasions.
=== Definition of fundamental laws ===
There is a philosophical debate within the physics community as to whether a theory of everything deserves to be called the fundamental law of the universe. One view is the hard reductionist position that the theory of everything is the fundamental law and that all other theories that apply within the universe are a consequence of the theory of everything. Another view is that emergent laws, which govern the behavior of complex systems, should be seen as equally fundamental. Examples of emergent laws are the second law of thermodynamics and the theory of natural selection. The advocates of emergence argue that emergent laws, especially those describing complex or living systems are independent of the low-level, microscopic laws. In this view, emergent laws are as fundamental as a theory of everything.
A well-known debate over this took place between Steven Weinberg and Philip Anderson.
==== Impossibility of calculation ====
Weinberg points out that calculating the precise motion of an actual projectile in the Earth's atmosphere is impossible. So how can we know we have an adequate theory for describing the motion of projectiles? Weinberg suggests that we know principles (Newton's laws of motion and gravitation) that work "well enough" for simple examples, like the motion of planets in empty space. These principles have worked so well on simple examples that we can be reasonably confident they will work for more complex examples. For example, although general relativity includes equations that do not have exact solutions, it is widely accepted as a valid theory because all of its equations with exact solutions have been experimentally verified. Likewise, a theory of everything must work for a wide range of simple examples in such a way that we can be reasonably confident it will work for every situation in physics. Difficulties in creating a theory of everything often begin to appear when combining quantum mechanics with the theory of general relativity, as the equations of quantum mechanics begin to falter when the force of gravity is applied to them.
== See also ==
== References ==
=== Bibliography ===
Pais, Abraham (1982). Subtle is the Lord: The Science and the Life of Albert Einstein. Oxford: Oxford University Press. Ch. 17. ISBN 0-19-853907-X.
Weinberg, Steven (1993). Dreams of a Final Theory: The Search for the Fundamental Laws of Nature. London: Hutchinson Radius. ISBN 0-09-177395-4.
Powell, Corey S. (2015). "Relativity versus quantum mechanics: the battle for the universe". The Guardian.
== External links ==
The Elegant Universe, Nova episode about the search for the theory of everything and string theory.
Theory of Everything, freeview video by the Vega Science Trust, BBC and Open University.
The Theory of Everything: Are we getting closer, or is a final theory of matter and the universe impossible? Debate between John Ellis (physicist), Frank Close and Nicholas Maxwell.
Why The World Exists, a discussion between physicist Laura Mersini-Houghton, cosmologist George Francis Rayner Ellis and philosopher David Wallace about dark matter, parallel universes and explaining why these and the present Universe exist.
Theories of Everything, BBC Radio 4 discussion with Brian Greene, John Barrow & Val Gibson (In Our Time, March 25, 2004). | Wikipedia/Theory_of_everything |
Magnetostatics is the study of magnetic fields in systems where the currents are steady (not changing with time). It is the magnetic analogue of electrostatics, where the charges are stationary. The magnetization need not be static; the equations of magnetostatics can be used to predict fast magnetic switching events that occur on time scales of nanoseconds or less. Magnetostatics is even a good approximation when the currents are not static – as long as the currents do not alternate rapidly. Magnetostatics is widely used in applications of micromagnetics such as models of magnetic storage devices as in computer memory.
== Applications ==
=== Magnetostatics as a special case of Maxwell's equations ===
Starting from Maxwell's equations and assuming that charges are either fixed or move as a steady current $\mathbf{J}$, the equations separate into two equations for the electric field (see electrostatics) and two for the magnetic field. The fields are independent of time and of each other. The magnetostatic equations, in both differential and integral forms, are shown below.

Gauss's law for magnetism: $\nabla \cdot \mathbf{B} = 0$ (differential form); $\oint_S \mathbf{B} \cdot \mathrm{d}\mathbf{S} = 0$ (integral form)
Ampère's law: $\nabla \times \mathbf{H} = \mathbf{J}$ (differential form); $\oint_C \mathbf{H} \cdot \mathrm{d}\mathbf{l} = I_{\text{enc}}$ (integral form)

Here $\nabla \cdot$ denotes divergence and $\mathbf{B}$ is the magnetic flux density; the first integral is over a surface $S$ with oriented surface element $\mathrm{d}\mathbf{S}$. $\nabla \times$ denotes curl, $\mathbf{J}$ is the current density and $\mathbf{H}$ is the magnetic field intensity; the second integral is a line integral around a closed loop $C$ with line element $\mathrm{d}\mathbf{l}$. The current going through the loop is $I_{\text{enc}}$.
The quality of this approximation may be guessed by comparing the above equations with the full version of Maxwell's equations and considering the importance of the terms that have been removed. Of particular significance is the comparison of the $\mathbf{J}$ term against the $\partial \mathbf{D} / \partial t$ term. If the $\mathbf{J}$ term is substantially larger, then the smaller term may be ignored without significant loss of accuracy.
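This comparison can be illustrated numerically: for a sinusoidal field in a conductor, the conduction current density is $\sigma E$ while the displacement current density is $\varepsilon_0 \omega E$, so their ratio is $\sigma / (\varepsilon_0 \omega)$. The hedged Python sketch below uses the standard conductivity of copper (the material and frequencies are illustrative choices, not from the original text):

```python
import math

# Ratio of conduction current J = sigma*E to displacement current
# dD/dt = eps0*omega*E for a sinusoidal field in copper.
sigma = 5.96e7           # S/m, conductivity of copper
eps0 = 8.8541878128e-12  # F/m, vacuum permittivity
for f in (50.0, 1e6, 1e9):   # mains, 1 MHz, 1 GHz
    omega = 2 * math.pi * f
    ratio = sigma / (eps0 * omega)
    print(f"f = {f:8.0e} Hz: J / (dD/dt) ~ {ratio:.2e}")
# Even at 1 GHz the conduction term dominates by roughly a factor 1e9 in
# copper, so dropping dD/dt (the magnetostatic approximation) is well
# justified there.
```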
=== Re-introducing Faraday's law ===
A common technique is to solve a series of magnetostatic problems at incremental time steps and then use these solutions to approximate the term $\partial \mathbf{B} / \partial t$. Plugging this result into Faraday's law finds a value for $\mathbf{E}$ (which had previously been ignored). This method is not a true solution of Maxwell's equations but can provide a good approximation for slowly changing fields.
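A minimal sketch of this stepping scheme follows, under the simplifying assumption that each "magnetostatic solve" is the textbook field inside a long solenoid ($B = \mu_0 n I$); the point is the time-stepping and finite-difference estimate of $\partial \mathbf{B} / \partial t$, not a realistic field solver:

```python
import numpy as np

MU0 = 4e-7 * np.pi

def solenoid_B(current, turns_per_m=1000.0):
    """Quasi-static 'magnetostatic solve': field inside a long solenoid,
    B = mu0 * n * I. A stand-in for a real solver."""
    return MU0 * turns_per_m * current

# Solve a magnetostatic problem at each time step for a slowly ramping
# current, then approximate dB/dt by finite differences; Faraday's law
# (curl E = -dB/dt) would then give the induced E field.
dt = 1e-3                                   # s per step
currents = np.linspace(0.0, 10.0, 11)       # amperes, ramped over 10 ms
B = np.array([solenoid_B(I) for I in currents])
dB_dt = np.diff(B) / dt
print("B at final step :", B[-1], "T")
print("dB/dt estimate  :", dB_dt[-1], "T/s")
```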
== Solving for the magnetic field ==
=== Current sources ===
If all currents in a system are known (i.e., if a complete description of the current density $\mathbf{J}(\mathbf{r})$ is available) then the magnetic field can be determined, at a position $\mathbf{r}$, from the currents by the Biot–Savart equation:

$$\mathbf{B}(\mathbf{r}) = \frac{\mu_0}{4\pi} \int \frac{\mathbf{J}(\mathbf{r}') \times (\mathbf{r} - \mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|^{3}} \, \mathrm{d}^3\mathbf{r}'$$
This technique works well for problems where the medium is a vacuum or air or some similar material with a relative permeability of 1. This includes air-core inductors and air-core transformers. One advantage of this technique is that, if a coil has a complex geometry, it can be divided into sections and the integral evaluated for each section. Since this equation is primarily used to solve linear problems, the contributions can be added. For a very difficult geometry, numerical integration may be used.
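As an illustration of the numerical-integration route, the following Python sketch discretizes the Biot–Savart integral for a circular current loop and checks the result at the loop centre against the analytic value $\mu_0 I / (2R)$; the loop radius, current, and segment count are arbitrary choices for the example:

```python
import numpy as np

MU0 = 4e-7 * np.pi

def loop_B_center(I=1.0, R=0.1, segments=2000):
    """Numerically integrate the Biot-Savart law for a circular loop of
    radius R (in the z=0 plane) carrying current I, at the loop centre."""
    phi = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
    dphi = 2.0 * np.pi / segments
    pts = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros_like(phi)], axis=1)
    # Tangential line elements dl along the loop
    dl = np.stack([-R * np.sin(phi), R * np.cos(phi), np.zeros_like(phi)], axis=1) * dphi
    r = -pts                              # vector from each source point to origin
    rnorm = np.linalg.norm(r, axis=1, keepdims=True)
    dB = MU0 * I / (4.0 * np.pi) * np.cross(dl, r) / rnorm**3
    return dB.sum(axis=0)

Bz = loop_B_center()[2]
print(f"numeric  Bz = {Bz:.6e} T")
print(f"analytic Bz = {MU0 * 1.0 / (2 * 0.1):.6e} T")  # mu0*I/(2R)
```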
For problems where the dominant magnetic material is a highly permeable magnetic core with relatively small air gaps, a magnetic circuit approach is useful. When the air gaps are large in comparison to the magnetic circuit length, fringing becomes significant and usually requires a finite element calculation. The finite element calculation uses a modified form of the magnetostatic equations above in order to calculate the magnetic potential. The value of $\mathbf{B}$ can be found from the magnetic potential.
The magnetic field can be derived from the vector potential. Since the divergence of the magnetic flux density is always zero,

$$\mathbf{B} = \nabla \times \mathbf{A},$$

and the relation of the vector potential to current is:

$$\mathbf{A}(\mathbf{r}) = \frac{\mu_0}{4\pi} \int \frac{\mathbf{J}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|} \, \mathrm{d}^3\mathbf{r}'.$$
=== Magnetization ===
Strongly magnetic materials (i.e., ferromagnetic, ferrimagnetic or paramagnetic) have a magnetization that is primarily due to electron spin. In such materials the magnetization must be explicitly included using the relation

$$\mathbf{B} = \mu_0 (\mathbf{M} + \mathbf{H}).$$
Except in the case of conductors, electric currents can be ignored. Then Ampère's law is simply

$$\nabla \times \mathbf{H} = 0.$$
This has the general solution

$$\mathbf{H} = -\nabla \Phi_M,$$

where $\Phi_M$ is a scalar potential. Substituting this in Gauss's law gives

$$\nabla^2 \Phi_M = \nabla \cdot \mathbf{M}.$$
Thus, the divergence of the magnetization, $\nabla \cdot \mathbf{M}$, has a role analogous to the electric charge in electrostatics and is often referred to as an effective charge density $\rho_M$.
The vector potential method can also be employed with an effective current density

$$\mathbf{J}_M = \nabla \times \mathbf{M}.$$
== See also ==
Darwin Lagrangian
== References ==
== External links ==
Media related to Magnetostatics at Wikimedia Commons | Wikipedia/Magnetostatics |
Earth science or geoscience includes all fields of natural science related to the planet Earth. This is a branch of science dealing with the physical, chemical, and biological complex constitutions and synergistic linkages of Earth's four spheres: the biosphere, hydrosphere/cryosphere, atmosphere, and geosphere (or lithosphere). Earth science can be considered to be a branch of planetary science but with a much older history.
== Geology ==
Geology is broadly the study of Earth's structure, substance, and processes. Geology is largely the study of the lithosphere, or Earth's surface, including the crust and rocks. It includes the physical characteristics and processes that occur in the lithosphere as well as how they are affected by geothermal energy. It incorporates aspects of chemistry, physics, and biology as elements of geology interact. Historical geology is the application of geology to interpret Earth history and how it has changed over time.
Geochemistry studies the chemical components and processes of the Earth. Geophysics studies the physical properties of the Earth. Paleontology studies fossilized biological material in the lithosphere. Planetary geology studies geoscience as it pertains to extraterrestrial bodies. Geomorphology studies the origin of landscapes. Structural geology studies the deformation of rocks to produce mountains and lowlands. Resource geology studies how energy resources can be obtained from minerals. Environmental geology studies how pollution and contaminants affect soil and rock. Mineralogy is the study of minerals and includes the study of mineral formation, crystal structure, hazards associated with minerals, and the physical and chemical properties of minerals. Petrology is the study of rocks, including the formation and composition of rocks. Petrography is a branch of petrology that studies the typology and classification of rocks.
== Earth's interior ==
Plate tectonics, mountain ranges, volcanoes, and earthquakes are geological phenomena that can be explained in terms of physical and chemical processes in the Earth's crust. Beneath the Earth's crust lies the mantle which is heated by the radioactive decay of heavy elements. The mantle is not quite solid and consists of magma which is in a state of semi-perpetual convection. This convection process causes the lithospheric plates to move, albeit slowly. The resulting process is known as plate tectonics. Areas of the crust where new crust is created are called divergent boundaries, those where it is brought back into the Earth are convergent boundaries and those where plates slide past each other, but no new lithospheric material is created or destroyed, are referred to as transform (or conservative) boundaries. Earthquakes result from the movement of the lithospheric plates, and they often occur near convergent boundaries where parts of the crust are forced into the earth as part of subduction.
Plate tectonics might be thought of as the process by which the Earth is resurfaced. As the result of seafloor spreading, new crust and lithosphere are created by the flow of magma from the mantle to the near surface, through fissures, where it cools and solidifies. Through subduction, oceanic crust and lithosphere is returned to the convecting mantle. Volcanoes result primarily from the melting of subducted crust material. Crust material that is forced into the asthenosphere melts, and some portion of the melted material becomes light enough to rise to the surface—giving birth to volcanoes.
== Atmospheric science ==
Atmospheric science initially developed in the late-19th century as a means to forecast the weather through meteorology, the study of weather. Atmospheric chemistry was developed in the 20th century to measure air pollution and expanded in the 1970s in response to acid rain. Climatology studies the climate and climate change.
The troposphere, stratosphere, mesosphere, thermosphere, and exosphere are the five layers which make up Earth's atmosphere. 75% of the mass in the atmosphere is located within the troposphere, the lowest layer. In all, the atmosphere is made up of about 78.0% nitrogen, 20.9% oxygen, and 0.92% argon, and small amounts of other gases including CO2 and water vapor. Water vapor and CO2 cause the Earth's atmosphere to catch and hold the Sun's energy through the greenhouse effect. This makes Earth's surface warm enough for liquid water and life. In addition to trapping heat, the atmosphere also protects living organisms by shielding the Earth's surface from cosmic rays. The magnetic field—created by the internal motions of the core—produces the magnetosphere which protects Earth's atmosphere from the solar wind. As the Earth is 4.5 billion years old, it would have lost its atmosphere by now if there were no protective magnetosphere.
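The quantitative content of the greenhouse claim can be checked with the Stefan–Boltzmann law: balancing absorbed sunlight against emitted thermal radiation gives an airless-Earth "effective" temperature near 255 K, about 33 K below the observed mean surface temperature. A short Python sketch with standard round values for the solar constant and albedo (illustrative figures, not precise data):

```python
# Rough check of the greenhouse claim: Earth's no-atmosphere effective
# temperature from the Stefan-Boltzmann law vs. the observed mean
# surface temperature.
S = 1361.0               # W/m^2, solar constant
A = 0.30                 # Bond albedo (fraction of sunlight reflected)
sigma = 5.670374419e-8   # W m^-2 K^-4, Stefan-Boltzmann constant

T_eff = (S * (1 - A) / (4 * sigma)) ** 0.25
print(f"effective temperature ~ {T_eff:.0f} K; observed mean ~ 288 K")
# The ~33 K difference is the warming attributed to greenhouse gases.
```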
== Earth's magnetic field ==
== Hydrology ==
Hydrology is the study of the hydrosphere and the movement of water on Earth. It emphasizes the study of how humans use and interact with freshwater supplies. Study of water's movement is closely related to geomorphology and other branches of Earth science. Applied hydrology involves engineering to maintain aquatic environments and distribute water supplies. Subdisciplines of hydrology include oceanography, hydrogeology, ecohydrology, and glaciology. Oceanography is the study of oceans. Hydrogeology is the study of groundwater. It includes the mapping of groundwater supplies and the analysis of groundwater contaminants. Applied hydrogeology seeks to prevent contamination of groundwater and mineral springs and make it available as drinking water. The earliest exploitation of groundwater resources dates back to 3000 BC, and hydrogeology as a science was developed by hydrologists beginning in the 17th century. Ecohydrology is the study of ecological systems in the hydrosphere. It can be divided into the physical study of aquatic ecosystems and the biological study of aquatic organisms. Ecohydrology includes the effects that organisms and aquatic ecosystems have on one another as well as how these ecosystems are affected by humans. Glaciology is the study of the cryosphere, including glaciers and coverage of the Earth by ice and snow. Concerns of glaciology include access to glacial freshwater, mitigation of glacial hazards, obtaining resources that exist beneath frozen land, and addressing the effects of climate change on the cryosphere.
== Ecology ==
Ecology is the study of the biosphere. This includes the study of nature and of how living things interact with the Earth and one another and the consequences of that. It considers how living things use resources such as oxygen, water, and nutrients from the Earth to sustain themselves. It also considers how humans and other living creatures cause changes to nature.
== Physical geography ==
Physical geography is the study of Earth's systems and how they interact with one another as part of a single self-contained system. It incorporates astronomy, mathematical geography, meteorology, climatology, geology, geomorphology, biology, biogeography, pedology, and soils geography. Physical geography is distinct from human geography, which studies the human populations on Earth, though it does include human effects on the environment.
== Methodology ==
Methodologies vary depending on the nature of the subjects being studied. Studies typically fall into one of three categories: observational, experimental, or theoretical. Earth scientists often conduct sophisticated computer analysis or visit interesting locations (e.g. Antarctica or hot spot island chains) to study Earth phenomena.
A foundational idea in Earth science is the notion of uniformitarianism, which states that "ancient geologic features are interpreted by understanding active processes that are readily observed." In other words, any geologic processes at work in the present have operated in the same ways throughout geologic time. This enables those who study Earth history to apply knowledge of how the Earth's processes operate in the present to gain insight into how the planet has evolved and changed throughout long history.
== Earth's spheres ==
In Earth science, it is common to conceptualize the Earth's surface as consisting of several distinct layers, often referred to as spheres: the lithosphere, the hydrosphere, the atmosphere, and the biosphere, corresponding to rocks, water, air, and life. This concept of spheres is a useful tool for understanding the Earth's surface and its various processes. Also included by some are the cryosphere (corresponding to ice) as a distinct portion of the hydrosphere and the pedosphere (corresponding to soil) as an active and intermixed sphere.
The following fields of science are generally categorized within the Earth sciences:
Geology describes the rocky parts of the Earth's crust (or lithosphere) and its historic development. Major subdisciplines are mineralogy and petrology, geomorphology, paleontology, stratigraphy, structural geology, engineering geology, and sedimentology.
Physical geography focuses on geography as an Earth science. Physical geography is the study of Earth's seasons, climate, atmosphere, soil, streams, landforms, and oceans. Physical geography can be divided into several branches or related fields, as follows: geomorphology, biogeography, environmental geography, palaeogeography, climatology, meteorology, coastal geography, hydrology, ecology, glaciology.
Geophysics and geodesy investigate the shape of the Earth, its reaction to forces and its magnetic and gravity fields. Geophysicists explore the Earth's core and mantle as well as the tectonic and seismic activity of the lithosphere. Geophysics is commonly used to supplement the work of geologists in developing a comprehensive understanding of crustal geology, particularly in mineral and petroleum exploration. Seismologists use geophysics to understand plate tectonic movement, as well as predict seismic activity.
Geochemistry studies the processes that control the abundance, composition, and distribution of chemical compounds and isotopes in geologic environments. Geochemists use the tools and principles of chemistry to study the Earth's composition, structure, processes, and other physical aspects. Major subdisciplines are aqueous geochemistry, cosmochemistry, isotope geochemistry and biogeochemistry.
Soil science covers the outermost layer of the Earth's crust that is subject to soil formation processes (or pedosphere). Major subdivisions in this field of study include edaphology and pedology.
Ecology covers the interactions between organisms and their environment. This field of study differentiates the study of Earth from other planets in the Solar System, Earth being the only planet teeming with life.
Hydrology, oceanography and limnology are studies which focus on the movement, distribution, and quality of the water and involve all the components of the hydrologic cycle on the Earth and its atmosphere (or hydrosphere). "Sub-disciplines of hydrology include hydrometeorology, surface water hydrology, hydrogeology, watershed science, forest hydrology, and water chemistry."
Glaciology covers the icy parts of the Earth (or cryosphere).
Atmospheric sciences cover the gaseous parts of the Earth (or atmosphere) between the surface and the exosphere (about 1000 km). Major subdisciplines include meteorology, climatology, atmospheric chemistry, and atmospheric physics.
=== Earth science breakup ===
== See also ==
== References ==
=== Sources ===
== Further reading ==
== External links ==
Earth Science Picture of the Day, a service of Universities Space Research Association, sponsored by NASA Goddard Space Flight Center.
Geoethics in Planetary and Space Exploration.
Geology Buzz: Earth Science Archived 2021-11-04 at the Wayback Machine | Wikipedia/Earth_science |
In physics, quintessence is a hypothetical form of dark energy, more precisely a scalar field minimally coupled to gravity, postulated as an explanation of the observation of an accelerating rate of expansion of the universe. The first example of this scenario was proposed by Ratra and Peebles (1988) and Wetterich (1988). The concept was expanded to more general types of time-varying dark energy, and the term "quintessence" was first introduced in a 1998 paper by Robert R. Caldwell, Rahul Dave and Paul Steinhardt. It has been proposed by some physicists to be a fifth fundamental force. Quintessence differs from the cosmological constant explanation of dark energy in that it is dynamic; that is, it changes over time, unlike the cosmological constant which, by definition, does not change. Quintessence can be either attractive or repulsive depending on the ratio of its kinetic and potential energy. Those working with this postulate believe that quintessence became repulsive about ten billion years ago, about 3.5 billion years after the Big Bang.
A group of researchers argued in 2021 that observations of the Hubble tension may imply that only quintessence models with a nonzero coupling constant are viable.
== Terminology ==
The name comes from quinta essentia (fifth element). So called in Latin starting from the Middle Ages, this was the fifth element, added by Aristotle to the other four ancient classical elements because he thought it was the essence of the celestial world. Aristotle posited it to be a pure, fine, and primigenial element which he referred to as aether in his text On the Heavens. Similarly, modern quintessence would be the fifth known "dynamical, time-dependent, and spatially inhomogeneous" contribution to the overall mass–energy content of the universe.
Of course, the other four components are not the ancient Greek classical elements, but rather "baryons, neutrinos, dark matter, [and] radiation." Although neutrinos are sometimes considered radiation, the term "radiation" in this context is only used to refer to massless photons. Spatial curvature of the cosmos (which has not been detected) is excluded because it is non-dynamical and homogeneous; the cosmological constant would not be considered a fifth component in this sense, because it is non-dynamical, homogeneous, and time-independent.
== Scalar field ==
Quintessence (Q) is a scalar field with an equation of state where $w_q$, the ratio of pressure $p_q$ and density $\rho_q$, is given by the potential energy $V(Q)$ and a kinetic term:

$$w_q = \frac{p_q}{\rho_q} = \frac{\tfrac{1}{2}\dot{Q}^2 - V(Q)}{\tfrac{1}{2}\dot{Q}^2 + V(Q)}$$

Hence, quintessence is dynamic, and generally has a density and $w_q$ parameter that vary with time. Specifically, the $w_q$ parameter can vary within the range $[-1, 1]$. By contrast, a cosmological constant is static, with a fixed energy density and $w_q = -1$.
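The limiting behaviors follow directly from this expression: a kinetic-dominated field gives $w_q \to +1$, while a potential-dominated (slowly rolling) field gives $w_q \to -1$, mimicking a cosmological constant. A few illustrative evaluations in Python (the field values are entirely hypothetical):

```python
# Equation of state w_q = (K - V) / (K + V) with K = 0.5 * Qdot^2,
# evaluated for a few illustrative (hypothetical) field values.
def w_q(Q_dot, V):
    K = 0.5 * Q_dot**2
    return (K - V) / (K + V)

print(w_q(1.0, 0.0))   # kinetic-dominated: w -> +1
print(w_q(0.0, 1.0))   # potential-dominated: w -> -1 (cosmological-constant-like)
print(w_q(1.0, 1.5))   # mixed case: w = -0.5, in between
```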
== Tracker behavior ==
Many models of quintessence have a tracker behavior, which according to Ratra and Peebles (1988) and Paul Steinhardt et al. (1999) partly solves the cosmological constant problem. In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter-radiation equality, which triggers quintessence to start having characteristics similar to dark energy, eventually dominating the universe. This naturally sets the low scale of the dark energy. When comparing the predicted expansion rate of the universe as given by the tracker solutions with cosmological data, a main feature of tracker solutions is that one needs four parameters to properly describe the behavior of their equation of state, whereas it has been shown that at most a two-parameter model can optimally be constrained by mid-term future data (horizon 2015–2020).
== Specific models ==
Some special cases of quintessence are phantom dark energy, in which wq < −1, and k-essence (short for kinetic quintessence), which has a non-standard form of kinetic energy. If phantom energy were to exist, it would cause a big rip in the universe due to the growing energy density of dark energy, which would cause the expansion of the universe to increase at a faster-than-exponential rate.
=== Holographic dark energy ===
Holographic dark energy models, compared with cosmological constant models, imply a high degeneracy. It has been suggested that dark energy might originate from quantum fluctuations of spacetime, and is limited by the event horizon of the universe.
Studies with quintessence dark energy found that it dominates gravitational collapse in a spacetime simulation, based on the holographic thermalization. These results show that the smaller the state parameter of quintessence is, the harder it is for the plasma to thermalize.
== See also ==
Aether (classical element)
Phantom dark energy
Quintom
== References ==
== Further reading ==
Wetterich, Christof (1987-09-24). "Cosmology and the fate of dilatation symmetry". Nuclear Physics B. 302 (4): 668–696. arXiv:1711.03844. Bibcode:1988NuPhB.302..668W. doi:10.1016/0550-3213(88)90193-9. S2CID 118970077.
Ostriker, J. P.; Steinhardt, P. (January 2001). "The Quintessential Universe". Scientific American. 284 (1): 46–53. Bibcode:2001SciAm.284a..46O. doi:10.1038/scientificamerican0101-46. PMID 11132422.
Krauss, Lawrence M. (2000). Quintessence: The Search for Missing Mass in the Universe. Basic Books. ISBN 978-0465037414. | Wikipedia/Quintessence_(physics) |
Computational particle physics refers to the methods and computing tools developed in and used by particle physics research. Like computational chemistry or computational biology, it is, for particle physics, both a specific branch and an interdisciplinary field relying on computer science, theoretical and experimental particle physics, and mathematics.
The main fields of computational particle physics are: lattice field theory (numerical computations), automatic calculation of particle interaction or decay (computer algebra) and event generators (stochastic methods).
== Computing tools ==
Computer algebra: Many of the computer algebra languages were developed initially to help particle physics calculations: Reduce, Mathematica, Schoonschip, Form, GiNaC.
Data Grid: The largest planned use of the grid systems will be for the analysis of the LHC-produced data. Large software packages have been developed to support this application, like the LHC Computing Grid (LCG). A similar effort in the wider e-Science community is the GridPP collaboration, a consortium of particle physicists from UK institutions and CERN.
Data Analysis Tools: These tools are motivated by the fact that particle physics experiments and simulations often create large datasets, e.g. see references.
Software Libraries: Many software libraries are used for particle physics computations. Also important are packages that simulate particle physics interactions using Monte Carlo simulation techniques (i.e. event generators); a toy sketch of the idea follows the list below.
CompHEP
UrQMD
APFEL
Geant4
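To make the event-generator idea concrete, the hedged Python sketch below samples isotropic two-body decays of a particle at rest. It is a toy illustration of the Monte Carlo approach only, not the actual algorithm of any of the packages listed above:

```python
import math
import random

# Toy "event generator": sample isotropic two-body decays of a particle
# of mass M at rest into two massless daughters. A minimal illustration
# of the Monte Carlo idea behind real event generators.
def generate_event(M=1.0):
    cos_theta = random.uniform(-1.0, 1.0)   # isotropic in cos(theta)
    phi = random.uniform(0.0, 2.0 * math.pi)
    sin_theta = math.sqrt(1.0 - cos_theta**2)
    E = M / 2.0                              # each daughter carries M/2
    p = (E * sin_theta * math.cos(phi),
         E * sin_theta * math.sin(phi),
         E * cos_theta)
    # Momentum conservation: the second daughter is back-to-back.
    return p, tuple(-x for x in p)

random.seed(1)
events = [generate_event() for _ in range(10000)]
mean_pz = sum(e[0][2] for e in events) / len(events)
print(f"mean daughter p_z over 10k events: {mean_pz:+.4f} (isotropy -> ~0)")
```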
== History ==
Particle physics played a role in the early history of the internet; the World-Wide Web was created by Tim Berners-Lee when working at CERN in 1991.
=== Computer Algebra ===
Note: This section contains an excerpt from 'Computer Algebra in Particle Physics' by Stefan Weinzierl
Particle physics is an important field of application for computer algebra and exploits the capabilities of Computer Algebra Systems (CAS). This leads to valuable feedback for the development of CAS.

Looking at the history of computer algebra systems, the first programs date back to the 1960s. The first systems were almost entirely based on LISP ("LISt Programming language"). LISP is an interpreted language and, as the name already indicates, designed for the manipulation of lists. Its importance for symbolic computer programs in the early days has been compared to the importance of FORTRAN for numerical programs in the same period. Already in this first period, the program REDUCE had some special features for the application to high energy physics. An exception to the LISP-based programs was SCHOONSHIP, written in assembler language by Martinus J. G. Veltman and specially designed for applications in particle physics. The use of assembler code led to an incredibly fast program (compared to the interpreted programs of the time) and allowed the calculation of more complex scattering processes in high energy physics. It has been claimed that the program's importance was recognized in 1999 by the award of half of the Nobel Prize in Physics to Veltman. The program MACSYMA also deserves to be mentioned explicitly, since it triggered important developments with regard to algorithms.

In the 1980s, new computer algebra systems started to be written in C. This enabled better exploitation of the computer's resources (compared to the interpreted language LISP) while maintaining portability (which would not have been possible in assembler language). This period also marked the appearance of the first commercial computer algebra systems, among which Mathematica and Maple are the best-known examples. In addition, a few dedicated programs appeared; an example relevant to particle physics is the program FORM by J. Vermaseren, a (portable) successor to SCHOONSHIP. More recently, issues of the maintainability of large projects became more and more important, and the overall programming paradigm changed from procedural programming to object-oriented design. In terms of programming languages this was reflected by a move from C to C++. Following this change of paradigm, the library GiNaC was developed, which allows symbolic calculations within C++.
Code generation for computer algebra can also be used in this area.
=== Lattice field theory ===
Lattice field theory was created by Kenneth Wilson in 1974. Simulation techniques were later developed from statistical mechanics.
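The statistical-mechanics connection can be illustrated with a toy Metropolis simulation of a free scalar field on a one-dimensional periodic lattice; the action, lattice size, and parameters below are arbitrary illustrative choices and bear no resemblance to production lattice-QCD codes:

```python
import math
import random

# Minimal Metropolis sketch for a 1-D lattice scalar field with action
# S = sum_x [ 0.5*(phi[x+1]-phi[x])^2 + 0.5*m2*phi[x]^2 ]  (lattice units).
# Illustrates how statistical-mechanics sampling underlies lattice
# field theory simulations.
N, m2 = 32, 0.5
phi = [0.0] * N

def local_action(x, value):
    """Terms of the action involving site x, with phi[x] set to `value`."""
    left, right = phi[(x - 1) % N], phi[(x + 1) % N]
    return 0.5 * ((value - left) ** 2 + (right - value) ** 2) + 0.5 * m2 * value**2

random.seed(0)
for sweep in range(2000):
    for x in range(N):
        old, new = phi[x], phi[x] + random.uniform(-1.0, 1.0)
        dS = local_action(x, new) - local_action(x, old)
        if dS < 0 or random.random() < math.exp(-dS):  # Metropolis accept
            phi[x] = new

print("<phi^2> ~", sum(v * v for v in phi) / N)
```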
Since the early 1980s, LQCD researchers have pioneered the use of massively parallel computers in large scientific applications, using virtually all available computing systems including traditional mainframes, large PC clusters, and high-performance systems. In addition, it has also been used as a benchmark for high-performance computing, starting with the IBM Blue Gene supercomputer.
Eventually national and regional QCD grids were created: LATFOR (continental Europe), UKQCD and USQCD. The ILDG (International Lattice Data Grid) is an international venture comprising grids from the UK, the US, Australia, Japan and Germany, and was formed in 2002.
== See also ==
Les Houches Accords
CHEP Conference
Computational physics
== References ==
== External links ==
Brown University. Computational High Energy Physics (CHEP) group page Archived 2015-05-18 at the Wayback Machine
International Research Network for Computational Particle Physics Archived 2016-03-05 at the Wayback Machine. Center for Computational Sciences, Univ. of Tsukuba, Japan.
History of computing at CERN | Wikipedia/Computational_particle_physics |
A theory is a systematic and rational form of abstract thinking about a phenomenon, or the conclusions derived from such thinking. It involves contemplative and logical reasoning, often supported by processes such as observation, experimentation, and research. Theories can be scientific, falling within the realm of empirical and testable knowledge, or they may belong to non-scientific disciplines, such as philosophy, art, or sociology. In some cases, theories may exist independently of any formal discipline.
In modern science, the term "theory" refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with the scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for them or empirically contradict ("falsify") them. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge, in contrast to more common uses of the word "theory" that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis). Scientific theories are distinguished from hypotheses, which are individual empirically testable conjectures, and from scientific laws, which are descriptive accounts of the way nature behaves under certain conditions.
Theories guide the enterprise of finding facts rather than of reaching goals, and are neutral concerning alternatives among values. A theory can be a body of knowledge, which may or may not be associated with particular explanatory models. To theorize is to develop this body of knowledge.
The word theory or "in theory" is sometimes used outside of science to refer to something which the speaker did not experience or test before. In science, this same concept is referred to as a hypothesis, and the word "hypothetically" is used both inside and outside of science. In its usage outside of science, the word "theory" is very often contrasted with "practice" (from the Greek praxis, πρᾶξις), a term for doing. A "classical example" of the distinction between "theoretical" and "practical" uses the discipline of medicine: medical theory involves trying to understand the causes and nature of health and sickness, while the practical side of medicine is trying to make people healthy. These two things are related but can be independent, because it is possible to research health and sickness without curing specific patients, and it is possible to cure a patient without knowing how the cure worked.
== Ancient usage ==
The English word theory derives from a technical term in philosophy in Ancient Greek. As an everyday word, theoria, θεωρία, meant "looking at, viewing, beholding", but in more technical contexts it came to refer to contemplative or speculative understandings of natural things, such as those of natural philosophers, as opposed to more practical ways of knowing things, like that of skilled orators or artisans. English-speakers have used the word theory since at least the late 16th century. Modern uses of the word theory derive from the original definition, but have taken on new shades of meaning, still based on the idea of a theory as a thoughtful and rational explanation of the general nature of things.
Although it has more mundane meanings in Greek, the word θεωρία apparently developed special uses early in the recorded history of the Greek language. In the book From Religion to Philosophy, Francis Cornford suggests that the Orphics used the word theoria to mean "passionate sympathetic contemplation". Pythagoras changed the word to mean "the passionless contemplation of rational, unchanging truth" of mathematical knowledge, because he considered this intellectual pursuit the way to reach the highest plane of existence. Pythagoras emphasized subduing emotions and bodily desires to help the intellect function at the higher plane of theory. Thus, it was Pythagoras who gave the word theory the specific meaning that led to the classical and modern concept of a distinction between theory (as uninvolved, neutral thinking) and practice.
Aristotle's terminology, as already mentioned, contrasts theory with praxis or practice, and this contrast persists to the present day. For Aristotle, both practice and theory involve thinking, but the aims are different. Theoretical contemplation considers things humans do not move or change, such as nature, so it has no human aim apart from itself and the knowledge it helps create. On the other hand, praxis involves thinking, but always with an aim to desired actions, whereby humans cause change or movement themselves for their own ends. Any human movement that involves no conscious choice and thinking could not be an example of praxis or doing.
== Formality ==
Theories are analytical tools for understanding, explaining, and making predictions about a given subject matter. There are theories in many and varied fields of study, including the arts and sciences. A formal theory is syntactic in nature and is only meaningful when given a semantic component by applying it to some content (e.g., facts and relationships of the actual historical world as it is unfolding). Theories in various fields of study are often expressed in natural language, but can be constructed in such a way that their general form is identical to a theory as it is expressed in the formal language of mathematical logic. Theories may be expressed mathematically, symbolically, or in common language, but are generally expected to follow principles of rational thought or logic.
A theory is constructed of a set of sentences that are thought to be true statements about the subject under consideration. However, the truth of any one of these statements is always relative to the whole theory. Therefore, the same statement may be true with respect to one theory, and not true with respect to another. In ordinary language, for instance, a statement such as "He is a terrible person" cannot be judged as true or false without reference to some interpretation of who "He" is and, for that matter, what a "terrible person" is under the theory.
Sometimes two theories have exactly the same explanatory power because they make the same predictions. A pair of such theories is called indistinguishable or observationally equivalent, and the choice between them reduces to convenience or philosophical preference.
The form of theories is studied formally in mathematical logic, especially in model theory. When theories are studied in mathematics, they are usually expressed in some formal language and their statements are closed under application of certain procedures called rules of inference. A special case of this, an axiomatic theory, consists of axioms (or axiom schemata) and rules of inference. A theorem is a statement that can be derived from those axioms by application of these rules of inference. Theories used in applications are abstractions of observed phenomena and the resulting theorems provide solutions to real-world problems. Obvious examples include arithmetic (abstracting concepts of number), geometry (concepts of space), and probability (concepts of randomness and likelihood).
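The notion of a theory as a set of statements closed under rules of inference can be illustrated in a few lines of code: start from a set of axioms (plain strings here) and repeatedly apply a single rule, modus ponens, until no new theorems appear. The axioms below are arbitrary placeholders, not drawn from any particular formal system:

```python
# Toy illustration of "closure under rules of inference": from P and
# P->Q, derive Q (modus ponens), iterating to a fixed point. The set of
# derivable statements is this tiny theory's set of theorems.
axioms = {"A", "A->B", "B->C"}

def theorems(axioms):
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for s in list(known):
            if "->" in s:
                p, q = s.split("->", 1)
                if p in known and q not in known:
                    known.add(q)   # modus ponens
                    changed = True
    return known

print(sorted(theorems(axioms)))  # A, B and C are all derivable
```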
Gödel's incompleteness theorem shows that no consistent, recursively enumerable theory (that is, one whose theorems form a recursively enumerable set) in which the concept of natural numbers can be expressed, can include all true statements about them. As a result, some domains of knowledge cannot be formalized, accurately and completely, as mathematical theories. (Here, formalizing accurately and completely means that all true propositions—and only true propositions—are derivable within the mathematical system.) This limitation, however, in no way precludes the construction of mathematical theories that formalize large bodies of scientific knowledge.
=== Underdetermination ===
A theory is underdetermined (also called indeterminacy of data to theory) if a rival, inconsistent theory is at least as consistent with the evidence. Underdetermination is an epistemological issue about the relation of evidence to conclusions.
A theory that lacks supporting evidence is generally, more properly, referred to as a hypothesis.
=== Intertheoretic reduction and elimination ===
If a new theory better explains and predicts a phenomenon than an old theory (i.e., it has more explanatory power), we are justified in believing that the newer theory describes reality more correctly. This is called an intertheoretic reduction because the terms of the old theory can be reduced to the terms of the new one. For instance, our historical understanding of sound, light and heat has been reduced to wave compressions and rarefactions, electromagnetic waves, and molecular kinetic energy, respectively. These terms, which are identified with each other, are called intertheoretic identities. When an old and new theory are parallel in this way, we can conclude that the new one describes the same reality, only more completely.
When a new theory uses new terms that do not reduce to terms of an older theory, but rather replace them because they misrepresent reality, it is called an intertheoretic elimination. For instance, the obsolete scientific theory that put forward an understanding of heat transfer in terms of the movement of caloric fluid was eliminated when a theory of heat as energy replaced it. Also, the theory that phlogiston is a substance released from burning and rusting material was eliminated with the new understanding of the reactivity of oxygen.
=== Versus theorems ===
Theories are distinct from theorems. A theorem is derived deductively from axioms (basic assumptions) according to a formal system of rules, sometimes as an end in itself and sometimes as a first step toward being tested or applied in a concrete situation; theorems are said to be true in the sense that the conclusions of a theorem are logical consequences of the axioms. Theories are abstract and conceptual, and are supported or challenged by observations in the world. They are 'rigorously tentative', meaning that they are proposed as true and expected to satisfy careful examination to account for the possibility of faulty inference or incorrect observation. Sometimes theories are incorrect, meaning that an explicit set of observations contradicts some fundamental claim or application of the theory, but more often theories are corrected to conform to new observations, by restricting the class of phenomena the theory applies to or changing the assertions made. An example of the former is the restriction of classical mechanics to phenomena involving macroscopic length scales and particle speeds much lower than the speed of light.
== Theory–practice relationship ==
Theory is often distinguished from practice or praxis. The question of whether theoretical models of work are relevant to work itself is of interest to scholars of professions such as medicine, engineering, law, and management.
The gap between theory and practice has been framed as a problem of knowledge transfer: there is a task of translating research knowledge for application in practice and of ensuring that practitioners are made aware of it. Academics have been criticized for not attempting to transfer the knowledge they produce to practitioners. Another framing supposes that theory and practice seek to understand different problems and model the world in different ways (using different ontologies and epistemologies). Another framing says that research does not produce theory that is relevant to practice.
In the context of management, Van de Ven and Johnson propose a form of engaged scholarship where scholars examine problems that occur in practice, in an interdisciplinary fashion, producing results that create both new practical knowledge and new theoretical models, with the theoretical results shared in an academic fashion. They use a metaphor of "arbitrage" of ideas between disciplines, distinguishing it from collaboration.
== Scientific ==
In science, the term "theory" refers to "a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment." Theories must also meet further requirements, such as the ability to make falsifiable predictions with consistent accuracy across a broad area of scientific inquiry, and production of strong evidence in favor of the theory from multiple independent sources (consilience).
The strength of a scientific theory is related to the diversity of phenomena it can explain, which is measured by its ability to make falsifiable predictions with respect to those phenomena. Theories are improved (or replaced by better theories) as more evidence is gathered, so that accuracy in prediction improves over time; this increased accuracy corresponds to an increase in scientific knowledge. Scientists use theories as a foundation to gain further scientific knowledge, as well as to accomplish goals such as inventing technology or curing diseases.
=== Definitions from scientific organizations ===
The United States National Academy of Sciences defines scientific theories as follows: The formal scientific definition of "theory" is quite different from the everyday meaning of the word. It refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence. Many scientific theories are so well established that no new evidence is likely to alter them substantially. For example, no new evidence will demonstrate that the Earth does not orbit around the sun (heliocentric theory), or that living things are not made of cells (cell theory), that matter is not composed of atoms, or that the surface of the Earth is not divided into solid plates that have moved over geological timescales (the theory of plate tectonics) ... One of the most useful properties of scientific theories is that they can be used to make predictions about natural events or phenomena that have not yet been observed.
From the American Association for the Advancement of Science:
A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not "guesses" but reliable accounts of the real world. The theory of biological evolution is more than "just a theory." It is as factual an explanation of the universe as the atomic theory of matter or the germ theory of disease. Our understanding of gravity is still a work in progress. But the phenomenon of gravity, like evolution, is an accepted fact.
The term theory is not appropriate for describing scientific models or untested but intricate hypotheses.
=== Philosophical views ===
The logical positivists thought of scientific theories as deductive theories—that a theory's content is based on some formal system of logic and on basic axioms. In a deductive theory, any sentence which is a logical consequence of one or more of the axioms is also a sentence of that theory. This is called the received view of theories.
In the semantic view of theories, which has largely replaced the received view, theories are viewed as scientific models. A model is an abstract and informative representation of reality (a "model of reality"), similar to the way that a map is a graphical model that represents the territory of a city or country. In this approach, theories are a specific category of models that fulfill the necessary criteria. (See Theories as models for further discussion.)
=== In physics ===
In physics the term theory is generally used for a mathematical framework—derived from a small set of basic postulates (usually symmetries, like equality of locations in space or in time, or identity of electrons, etc.)—which is capable of producing experimental predictions for a given category of physical systems. One good example is classical electromagnetism, which encompasses results derived from gauge symmetry (sometimes called gauge invariance) in the form of a few equations called Maxwell's equations. The specific mathematical aspects of classical electromagnetic theory are termed "laws of electromagnetism", reflecting the level of consistent and reproducible evidence that supports them. Within electromagnetic theory generally, there are numerous hypotheses about how electromagnetism applies to specific situations. Many of these hypotheses are already considered adequately tested, with new ones always in the making and perhaps untested.
=== Regarding the term "theoretical" ===
Certain tests may be infeasible or technically difficult. As a result, theories may make predictions that have not been confirmed or proven incorrect. These predictions may be described informally as "theoretical". They can be tested later, and if they are incorrect, this may lead to revision, invalidation, or rejection of the theory.
== Mathematical ==
In mathematics, the term theory is used differently than in science; necessarily so, since mathematics contains no explanations of natural phenomena per se, even though it may help provide insight into natural systems or be inspired by them. In the general sense, a mathematical theory is a branch of mathematics devoted to some specific topics or methods, such as set theory, number theory, group theory, probability theory, game theory, control theory, perturbation theory, etc., such as might be appropriate for a single textbook.
In mathematical logic, a theory has a related but different sense: it is the collection of theorems that can be deduced from a given set of axioms, together with a given set of inference rules.
== Philosophical ==
A theory can be either descriptive as in science, or prescriptive (normative) as in philosophy. The latter are those whose subject matter consists not of empirical data, but rather of ideas. At least some of the elementary theorems of a philosophical theory are statements whose truth cannot necessarily be scientifically tested through empirical observation.
A field of study is sometimes named a "theory" because its basis is some initial set of assumptions describing the field's approach to the subject. These assumptions are the elementary theorems of the particular theory, and can be thought of as the axioms of that field. Some commonly known examples include set theory and number theory; however, literary theory, critical theory, and music theory are also of the same form.
=== Metatheory ===
One form of philosophical theory is a metatheory or meta-theory. A metatheory is a theory whose subject matter is some other theory or set of theories. In other words, it is a theory about theories. Statements made in the metatheory about the theory are called metatheorems.
== Political ==
A political theory is an ethical theory about the law and government. Often, the term "political theory" refers to a general view, specific ethic, political belief, or attitude about politics.
== Jurisprudential ==
In social science, jurisprudence is the philosophical theory of law. Contemporary philosophy of law addresses problems internal to law and legal systems, and problems of law as a particular social institution.
== Examples ==
Most of the following are scientific theories. Some are not, but rather encompass a body of knowledge or art, such as Music theory and Visual Arts Theories.
Anthropology:
Carneiro's circumscription theory
Astronomy:
Alpher–Bethe–Gamow theory —
B2FH Theory —
Copernican theory —
Newton's theory of gravitation —
Hubble's law —
Kepler's laws of planetary motion — Ptolemaic theory
Biology:
Cell theory —
Chemiosmotic theory —
Evolution —
Germ theory —
Symbiogenesis
Chemistry:
Molecular theory —
Kinetic theory of gases —
Molecular orbital theory —
Valence bond theory —
Transition state theory —
RRKM theory —
Chemical graph theory —
Flory–Huggins solution theory —
Marcus theory —
Lewis theory (successor to Brønsted–Lowry acid–base theory) —
HSAB theory —
Debye–Hückel theory —
Thermodynamic theory of polymer elasticity —
Reptation theory —
Polymer field theory —
Møller–Plesset perturbation theory —
Density functional theory —
Frontier molecular orbital theory —
Polyhedral skeletal electron pair theory —
Baeyer strain theory —
Quantum theory of atoms in molecules —
Collision theory —
Ligand field theory (successor to Crystal field theory) —
Variational transition-state theory —
Benson group increment theory —
Specific ion interaction theory
Climatology:
Climate change theory (general study of climate changes)
anthropogenic climate change (ACC)/
anthropogenic global warming (AGW) theories (due to human activity)
Computer Science:
Automata theory —
Queueing theory
Cosmology:
Big Bang Theory —
Cosmic inflation —
Loop quantum gravity —
Superstring theory —
Supergravity —
Supersymmetric theory —
Multiverse theory —
Holographic principle —
Quantum gravity —
M-theory
Economics:
Macroeconomic theory —
Microeconomic theory —
Law of Supply and demand
Education:
Constructivist theory —
Critical pedagogy theory —
Education theory —
Multiple intelligence theory —
Progressive education theory
Engineering:
Circuit theory —
Control theory —
Signal theory —
Systems theory —
Information theory
Film:
Film theory
Geology:
Plate tectonics
Humanities:
Critical theory
Jurisprudence or 'Legal theory':
Natural law —
Legal positivism —
Legal realism —
Critical legal studies
Law: see Jurisprudence; also Case theory
Linguistics:
X-bar theory —
Government and Binding —
Principles and parameters —
Universal grammar
Literature:
Literary theory
Mathematics:
Approximation theory —
Arakelov theory —
Asymptotic theory —
Bifurcation theory —
Catastrophe theory —
Category theory —
Chaos theory —
Choquet theory —
Coding theory —
Combinatorial game theory —
Computability theory —
Computational complexity theory —
Deformation theory —
Dimension theory —
Ergodic theory —
Field theory —
Galois theory —
Game theory —
Gauge theory —
Graph theory —
Group theory —
Hodge theory —
Homology theory —
Homotopy theory —
Ideal theory —
Intersection theory —
Invariant theory —
Iwasawa theory —
K-theory —
KK-theory —
Knot theory —
L-theory —
Lie theory —
Littlewood–Paley theory —
Matrix theory —
Measure theory —
Model theory —
Module theory —
Morse theory —
Nevanlinna theory —
Number theory —
Obstruction theory —
Operator theory —
Order theory —
PCF theory —
Perturbation theory —
Potential theory —
Probability theory —
Ramsey theory —
Rational choice theory —
Representation theory —
Ring theory —
Set theory —
Shape theory —
Small cancellation theory —
Spectral theory —
Stability theory —
Stable theory —
Sturm–Liouville theory —
Surgery theory —
Twistor theory —
Yang–Mills theory
Music:
Music theory
Philosophy:
Proof theory —
Speculative reason —
Theory of truth —
Type theory —
Value theory —
Virtue theory
Physics:
Acoustic theory —
Antenna theory —
Atomic theory —
BCS theory —
Conformal field theory —
Dirac hole theory —
Dynamo theory —
Landau theory —
M-theory —
Perturbation theory —
Theory of relativity (successor to classical mechanics) —
Gauge theory —
Quantum field theory —
Scattering theory —
String theory —
Quantum information theory
Psychology:
Cognitive dissonance theory —
Attachment theory —
Object permanence —
Poverty of stimulus —
Attribution theory —
Self-fulfilling prophecy —
Stockholm syndrome
Public Budgeting:
Incrementalism —
Zero-based budgeting
Public Administration:
Organizational theory
Semiotics:
Intertheoricity –
Transferogenesis
Sociology:
Critical theory —
Engaged theory —
Social theory —
Sociological theory –
Social capital theory
Statistics:
Extreme value theory
Theatre:
Performance theory
Visual Arts:
Aesthetics —
Art educational theory —
Architecture —
Composition —
Anatomy —
Color theory —
Perspective —
Visual perception —
Geometry —
Manifolds
Other:
Obsolete scientific theories
== See also ==
== Notes ==
== References ==
=== Citations ===
=== Sources ===
== Further reading ==
Eisenhardt, K. M., & Graebner, M. E. (2007). Theory building from cases: Opportunities and challenges. Academy of Management Journal, 50(1), 25–32.
== External links ==
"How science works: Even theories change", Understanding Science by the University of California Museum of Paleontology.
What is a Theory? | Wikipedia/Theory |
In theoretical physics and mathematical physics, analytical mechanics, or theoretical mechanics is a collection of closely related formulations of classical mechanics. Analytical mechanics uses scalar properties of motion representing the system as a whole—usually its kinetic energy and potential energy. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation.
Analytical mechanics was developed by many scientists and mathematicians during the 18th century and onward, after Newtonian mechanics. Newtonian mechanics considers vector quantities of motion, particularly accelerations, momenta, and forces, of the constituents of the system; it can also be called vectorial mechanics. A scalar is represented by magnitude alone, whereas a vector is represented by both magnitude and direction. The results of these two different approaches are equivalent, but the analytical mechanics approach has many advantages for complex problems.
Analytical mechanics takes advantage of a system's constraints to solve problems. The constraints limit the degrees of freedom the system can have, and can be used to reduce the number of coordinates needed to solve for the motion. The formalism is well suited to arbitrary choices of coordinates, known in the context as generalized coordinates. The kinetic and potential energies of the system are expressed using these generalized coordinates or momenta, and the equations of motion can be readily set up; thus, analytical mechanics allows numerous mechanical problems to be solved with greater efficiency than fully vectorial methods. It does not always work for non-conservative or dissipative forces like friction, in which case one may revert to Newtonian mechanics.
Two dominant branches of analytical mechanics are Lagrangian mechanics (using generalized coordinates and corresponding generalized velocities in configuration space) and Hamiltonian mechanics (using coordinates and corresponding momenta in phase space). Both formulations are equivalent by a Legendre transformation on the generalized coordinates, velocities and momenta; therefore, both contain the same information for describing the dynamics of a system. There are other formulations such as Hamilton–Jacobi theory, Routhian mechanics, and Appell's equation of motion. All equations of motion for particles and fields, in any formalism, can be derived from the widely applicable result called the principle of least action. One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries.
Analytical mechanics does not introduce new physics and is not more general than Newtonian mechanics. Rather it is a collection of equivalent formalisms which have broad application. In fact the same principles and formalisms can be used in relativistic mechanics and general relativity, and with some modifications, quantum mechanics and quantum field theory.
Analytical mechanics is used widely, from fundamental physics to applied mathematics, particularly chaos theory.
The methods of analytical mechanics apply to discrete particles, each with a finite number of degrees of freedom. They can be modified to describe continuous fields or fluids, which have infinite degrees of freedom. The definitions and equations have a close analogy with those of mechanics.
== Motivation ==
The goal of mechanical theory is to solve mechanical problems, such as arise in physics and engineering. Starting from a physical system—such as a mechanism or a star system—a mathematical model is developed in the form of a differential equation. The model can be solved numerically or analytically to determine the motion of the system.
Newton's vectorial approach to mechanics describes motion with the help of vector quantities such as force, velocity, acceleration. These quantities characterise the motion of a body idealised as a "mass point" or a "particle" understood as a single point to which a mass is attached. Newton's method has been successfully applied to a wide range of physical problems, including the motion of a particle in Earth's gravitational field and the motion of planets around the Sun. In this approach, Newton's laws describe the motion by a differential equation and then the problem is reduced to the solving of that equation.
When a mechanical system contains many particles, however (such as a complex mechanism or a fluid), Newton's approach is difficult to apply. Using a Newtonian approach is possible, under proper precautions, namely isolating each single particle from the others and determining all the forces acting on it. Such analysis is cumbersome even in relatively simple systems. Newton thought that his third law "action equals reaction" would take care of all complications. This is false even for such a simple system as the rotation of a solid body. In more complicated systems, the vectorial approach cannot give an adequate description.
The analytical approach simplifies problems by treating mechanical systems as ensembles of particles that interact with each other, rather than considering each particle as an isolated unit. In the vectorial approach, forces must be determined individually for each particle, whereas in the analytical approach it is enough to know one single function which contains implicitly all the forces acting on and in the system.
Such simplification is often done using certain kinematic conditions which are stated a priori. However, the analytical treatment does not require the knowledge of these forces and takes these kinematic conditions for granted.
Still, deriving the equations of motion of a complicated mechanical system requires a unifying basis from which they follow. This is provided by various variational principles: behind each set of equations there is a principle that expresses the meaning of the entire set. Given a fundamental and universal quantity called action, the principle that this action be stationary under small variation of some other mechanical quantity generates the required set of differential equations. The statement of the principle does not require any special coordinate system, and all results are expressed in generalized coordinates. This means that the analytical equations of motion do not change upon a coordinate transformation, an invariance property that is lacking in the vectorial equations of motion.
It is not altogether clear what is meant by 'solving' a set of differential equations. A problem is regarded as solved when the particles' coordinates at time t are expressed as simple functions of t and of parameters defining the initial positions and velocities. However, 'simple function' is not a well-defined concept: nowadays, a function f(t) is not regarded as a formal expression in t (elementary function) as in the time of Newton but most generally as a quantity determined by t, and it is not possible to draw a sharp line between 'simple' and 'not simple' functions. If one speaks merely of 'functions', then every mechanical problem is solved as soon as it has been well stated in differential equations, because the initial conditions and t determine the coordinates at t. This is especially true at present, with modern methods of computer modelling which provide arithmetical solutions to mechanical problems to any desired degree of accuracy, the differential equations being replaced by difference equations.
Still, though lacking precise definitions, it is obvious that the two-body problem has a simple solution, whereas the three-body problem has not. The two-body problem is solved by formulas involving parameters; their values can be changed to study the class of all solutions, that is, the mathematical structure of the problem. Moreover, an accurate mental or drawn picture can be made for the motion of two bodies, and it can be as real and accurate as the real bodies moving and interacting. In the three-body problem, parameters can also be assigned specific values; however, the solution at these assigned values or a collection of such solutions does not reveal the mathematical structure of the problem. As in many other problems, the mathematical structure can be elucidated only by examining the differential equations themselves.
Analytical mechanics aims at even more: not at understanding the mathematical structure of a single mechanical problem, but that of a class of problems so wide that they encompass most of mechanics. It concentrates on systems to which Lagrangian or Hamiltonian equations of motion are applicable and that include a very wide range of problems indeed.
Development of analytical mechanics has two objectives: (i) increase the range of solvable problems by developing standard techniques with a wide range of applicability, and (ii) understand the mathematical structure of mechanics. In the long run, however, (ii) can help (i) more than a concentration on specific problems for which methods have already been designed.
== Intrinsic motion ==
=== Generalized coordinates and constraints ===
In Newtonian mechanics, one customarily uses all three Cartesian coordinates, or other 3D coordinate system, to refer to a body's position during its motion. In physical systems, however, some structure or other system usually constrains the body's motion from taking certain directions and pathways. So a full set of Cartesian coordinates is often unneeded, as the constraints determine the evolving relations among the coordinates, which relations can be modeled by equations corresponding to the constraints. In the Lagrangian and Hamiltonian formalisms, the constraints are incorporated into the motion's geometry, reducing the number of coordinates to the minimum needed to model the motion. These are known as generalized coordinates, denoted qi (i = 1, 2, 3...).
=== Difference between curvilinear and generalized coordinates ===
Generalized coordinates incorporate constraints on the system. There is one generalized coordinate qi for each degree of freedom (for convenience labelled by an index i = 1, 2...N), i.e. each way the system can change its configuration, such as curvilinear lengths or angles of rotation. Generalized coordinates are not the same as curvilinear coordinates. The number of curvilinear coordinates equals the dimension of the position space in question (usually 3 for 3d space), while the number of generalized coordinates is not necessarily equal to this dimension; constraints can reduce the number of degrees of freedom (hence the number of generalized coordinates required to define the configuration of the system), following the general rule: (number of generalized coordinates) = (dimension of position space) × (number of particles) − (number of constraints). For example, a double pendulum confined to a plane involves 2 × 2 = 4 Cartesian coordinates and two length constraints, leaving 2 generalized coordinates (the two angles).
For a system with N degrees of freedom, the generalized coordinates can be collected into an N-tuple:
{\displaystyle \mathbf {q} =(q_{1},q_{2},\dots ,q_{N})}
and the time derivative (here denoted by an overdot) of this tuple gives the generalized velocities:
{\displaystyle {\frac {d\mathbf {q} }{dt}}=\left({\frac {dq_{1}}{dt}},{\frac {dq_{2}}{dt}},\dots ,{\frac {dq_{N}}{dt}}\right)\equiv \mathbf {\dot {q}} =({\dot {q}}_{1},{\dot {q}}_{2},\dots ,{\dot {q}}_{N}).}
=== D'Alembert's principle of virtual work ===
D'Alembert's principle states that infinitesimal virtual work done by a force across reversible displacements is zero, which is the work done by a force consistent with ideal constraints of the system. The idea of a constraint is useful, since it limits what the system can do and can provide steps to solving for the motion of the system. The equation for D'Alembert's principle is:
{\displaystyle \delta W={\boldsymbol {\mathcal {Q}}}\cdot \delta \mathbf {q} =0\,,}
where
{\displaystyle {\boldsymbol {\mathcal {Q}}}=({\mathcal {Q}}_{1},{\mathcal {Q}}_{2},\dots ,{\mathcal {Q}}_{N})}
are the generalized forces (script Q instead of ordinary Q is used here to prevent conflict with canonical transformations below) and q are the generalized coordinates. This leads to the generalized form of Newton's laws in the language of analytical mechanics:
{\displaystyle {\boldsymbol {\mathcal {Q}}}={\frac {d}{dt}}\left({\frac {\partial T}{\partial \mathbf {\dot {q}} }}\right)-{\frac {\partial T}{\partial \mathbf {q} }}\,,}
where T is the total kinetic energy of the system, and the notation
{\displaystyle {\frac {\partial }{\partial \mathbf {q} }}=\left({\frac {\partial }{\partial q_{1}}},{\frac {\partial }{\partial q_{2}}},\dots ,{\frac {\partial }{\partial q_{N}}}\right)}
is a useful shorthand (see matrix calculus for this notation).
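As a concrete illustration (a minimal sketch using SymPy; the planar pendulum and the projection formula Q = F · ∂r/∂q are standard textbook material, not taken from this article), the generalized force conjugate to a pendulum's angle follows by projecting the Cartesian gravity force onto ∂r/∂θ:

```python
# A minimal sketch (assumed example): the generalized force for a planar
# pendulum under gravity F = (0, −mg), with generalized coordinate θ.
import sympy as sp

theta, m, g, l = sp.symbols('theta m g l')
r = sp.Matrix([l * sp.sin(theta), -l * sp.cos(theta)])  # position in terms of θ
F = sp.Matrix([0, -m * g])                              # applied force

Q = F.dot(r.diff(theta))  # Q = F · ∂r/∂θ
print(sp.simplify(Q))     # -g*l*m*sin(theta)
```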
=== Constraints ===
If the curvilinear coordinate system is defined by the standard position vector r, and if the position vector can be written in terms of the generalized coordinates q and time t in the form:
{\displaystyle \mathbf {r} =\mathbf {r} (\mathbf {q} (t),t)}
and this relation holds for all times t, then the constraints on the system are holonomic. Vector r is explicitly dependent on t in cases when the constraints vary with time, not just because of q(t). For time-independent situations, the constraints are also called scleronomic; for time-dependent cases they are called rheonomic.
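A minimal sketch of a holonomic constraint (an assumed SymPy example, not from the text): for a planar pendulum of fixed length l, the position vector is written entirely in terms of the single generalized coordinate θ(t), and the length constraint is satisfied identically.

```python
# A minimal sketch (assumed example): r = r(q(t)) for a planar pendulum;
# the holonomic constraint x² + y² = l² is built into the parametrization.
import sympy as sp

t = sp.symbols('t')
l = sp.symbols('l', positive=True)
theta = sp.Function('theta')

r = sp.Matrix([l * sp.sin(theta(t)), -l * sp.cos(theta(t))])
print(sp.simplify(r.dot(r)))  # l**2, independent of theta(t)
```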
== Lagrangian mechanics ==
The introduction of generalized coordinates and the fundamental Lagrangian function:
{\displaystyle L(\mathbf {q} ,\mathbf {\dot {q}} ,t)=T(\mathbf {q} ,\mathbf {\dot {q}} ,t)-V(\mathbf {q} ,\mathbf {\dot {q}} ,t)}
where T is the total kinetic energy and V is the total potential energy of the entire system, then either following the calculus of variations or using the above formula leads to the Euler–Lagrange equations;
{\displaystyle {\frac {d}{dt}}\left({\frac {\partial L}{\partial \mathbf {\dot {q}} }}\right)={\frac {\partial L}{\partial \mathbf {q} }}\,,}
which are a set of N second-order ordinary differential equations, one for each qi(t).
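As a worked example (a sketch assuming SymPy; the simple pendulum is a standard illustration, not a derivation from this article), the Euler–Lagrange equation applied to L = T − V for a pendulum of length l yields its familiar equation of motion:

```python
# A minimal sketch (assumed example): the Euler–Lagrange equation
# d/dt(∂L/∂q̇) − ∂L/∂q = 0 for a simple pendulum, q = θ.
import sympy as sp

t = sp.symbols('t')
m, g, l = sp.symbols('m g l', positive=True)
theta = sp.Function('theta')

T = sp.Rational(1, 2) * m * l**2 * theta(t).diff(t)**2  # kinetic energy
V = -m * g * l * sp.cos(theta(t))                       # potential energy
L = T - V

eom = sp.diff(L.diff(theta(t).diff(t)), t) - L.diff(theta(t))
print(sp.simplify(eom))  # m*l**2*θ'' + g*l*m*sin(θ); setting this to 0 gives the motion
```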
This formulation identifies the actual path followed by the motion as a selection of the path over which the time integral of kinetic energy is least, assuming the total energy to be fixed, and imposing no conditions on the time of transit.
The Lagrangian formulation uses the configuration space of the system, the set of all possible generalized coordinates:
{\displaystyle {\mathcal {C}}=\{\mathbf {q} \in \mathbb {R} ^{N}\}\,,}
where {\displaystyle \mathbb {R} ^{N}} is N-dimensional real space (see also set-builder notation). The particular solution to the Euler–Lagrange equations is called a (configuration) path or trajectory, i.e. one particular q(t) subject to the required initial conditions. The general solutions form a set of possible configurations as functions of time:
{\displaystyle \{\mathbf {q} (t)\in \mathbb {R} ^{N}\,:\,t\geq 0,t\in \mathbb {R} \}\subseteq {\mathcal {C}}\,,}
The configuration space can be defined more generally, and indeed more deeply, in terms of topological manifolds and the tangent bundle.
== Hamiltonian mechanics ==
The Legendre transformation of the Lagrangian replaces the generalized coordinates and velocities (q, q̇) with (q, p); the generalized coordinates and the generalized momenta conjugate to the generalized coordinates:
{\displaystyle \mathbf {p} ={\frac {\partial L}{\partial \mathbf {\dot {q}} }}=\left({\frac {\partial L}{\partial {\dot {q}}_{1}}},{\frac {\partial L}{\partial {\dot {q}}_{2}}},\cdots {\frac {\partial L}{\partial {\dot {q}}_{N}}}\right)=(p_{1},p_{2}\cdots p_{N})\,,}
and introduces the Hamiltonian (which is in terms of generalized coordinates and momenta):
{\displaystyle H(\mathbf {q} ,\mathbf {p} ,t)=\mathbf {p} \cdot \mathbf {\dot {q}} -L(\mathbf {q} ,\mathbf {\dot {q}} ,t)}
where {\displaystyle \cdot } denotes the dot product, also leading to Hamilton's equations:
{\displaystyle \mathbf {\dot {p}} =-{\frac {\partial H}{\partial \mathbf {q} }}\,,\quad \mathbf {\dot {q}} =+{\frac {\partial H}{\partial \mathbf {p} }}\,,}
which are now a set of 2N first-order ordinary differential equations, one for each qi(t) and pi(t). Another result from the Legendre transformation relates the time derivatives of the Lagrangian and Hamiltonian:
{\displaystyle {\frac {dH}{dt}}=-{\frac {\partial L}{\partial t}}\,,}
which is often considered one of Hamilton's equations of motion in addition to the others. The generalized momenta can be written in terms of the generalized forces in the same way as Newton's second law:
{\displaystyle \mathbf {\dot {p}} ={\boldsymbol {\mathcal {Q}}}\,.}
Analogous to the configuration space, the set of all momenta is the generalized momentum space:
{\displaystyle {\mathcal {M}}=\{\mathbf {p} \in \mathbb {R} ^{N}\}\,.}
("Momentum space" also refers to "k-space"; the set of all wave vectors (given by De Broglie relations) as used in quantum mechanics and theory of waves)
The set of all positions and momenta form the phase space:
{\displaystyle {\mathcal {P}}={\mathcal {C}}\times {\mathcal {M}}=\{(\mathbf {q} ,\mathbf {p} )\in \mathbb {R} ^{2N}\}\,,}
that is, the Cartesian product of the configuration space and generalized momentum space.
A particular solution to Hamilton's equations is called a phase path, a particular curve (q(t),p(t)) subject to the required initial conditions. The set of all phase paths, the general solution to the differential equations, is the phase portrait:
{\displaystyle \{(\mathbf {q} (t),\mathbf {p} (t))\in \mathbb {R} ^{2N}\,:\,t\geq 0,t\in \mathbb {R} \}\subseteq {\mathcal {P}}\,,}
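To trace one phase path numerically (a minimal sketch; the harmonic-oscillator Hamiltonian and the symplectic Euler update are assumed standard choices, not prescriptions from this article), Hamilton's equations can be stepped directly:

```python
# A minimal sketch (assumed example): integrating Hamilton's equations
# q̇ = ∂H/∂p, ṗ = −∂H/∂q for H = p²/2m + k q²/2 (harmonic oscillator),
# tracing one phase path (q(t), p(t)) from given initial conditions.
import numpy as np

m, k = 1.0, 1.0
dt, steps = 0.01, 1000
q, p = 1.0, 0.0

path = []
for _ in range(steps):
    p -= dt * k * q   # ṗ = −∂H/∂q = −k q   (symplectic Euler update)
    q += dt * p / m   # q̇ = +∂H/∂p = p / m
    path.append((q, p))

H = 0.5 * p**2 / m + 0.5 * k * q**2
print(f"H after {steps} steps: {H:.4f}")  # stays near 0.5 for a time-independent H
```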
=== The Poisson bracket ===
All dynamical variables can be derived from position q, momentum p, and time t, and written as a function of these: A = A(q, p, t). If A(q, p, t) and B(q, p, t) are two scalar valued dynamical variables, the Poisson bracket is defined by the generalized coordinates and momenta:
{\displaystyle {\begin{aligned}\{A,B\}\equiv \{A,B\}_{\mathbf {q} ,\mathbf {p} }&={\frac {\partial A}{\partial \mathbf {q} }}\cdot {\frac {\partial B}{\partial \mathbf {p} }}-{\frac {\partial A}{\partial \mathbf {p} }}\cdot {\frac {\partial B}{\partial \mathbf {q} }}\\&\equiv \sum _{k}{\frac {\partial A}{\partial q_{k}}}{\frac {\partial B}{\partial p_{k}}}-{\frac {\partial A}{\partial p_{k}}}{\frac {\partial B}{\partial q_{k}}}\,,\end{aligned}}}
Calculating the total derivative of one of these, say A, and substituting Hamilton's equations into the result leads to the time evolution of A:
{\displaystyle {\frac {dA}{dt}}=\{A,H\}+{\frac {\partial A}{\partial t}}\,.}
This equation in A is closely related to the equation of motion in the Heisenberg picture of quantum mechanics, in which classical dynamical variables become quantum operators (indicated by hats (^)), and the Poisson bracket is replaced by the commutator of operators via Dirac's canonical quantization:
{\displaystyle \{A,B\}\rightarrow {\frac {1}{i\hbar }}[{\hat {A}},{\hat {B}}]\,.}
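A one-degree-of-freedom Poisson bracket is easy to check symbolically (a minimal sketch assuming SymPy; the helper function poisson is an invented name, not an API from any library):

```python
# A minimal sketch (assumed helper): the canonical bracket and the time
# evolution q̇ = {q, H}, ṗ = {p, H} for H = p²/2 + q²/2.
import sympy as sp

q, p = sp.symbols('q p')

def poisson(A, B):
    """{A, B} = ∂A/∂q ∂B/∂p − ∂A/∂p ∂B/∂q, one degree of freedom."""
    return sp.diff(A, q) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, q)

H = p**2 / 2 + q**2 / 2
print(poisson(q, p))  # 1: the canonical relation {q, p} = 1
print(poisson(q, H))  # p: recovers q̇
print(poisson(p, H))  # -q: recovers ṗ
```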
== Properties of the Lagrangian and the Hamiltonian ==
Following are overlapping properties between the Lagrangian and Hamiltonian functions.
All the individual generalized coordinates qi(t), velocities q̇i(t) and momenta pi(t) for every degree of freedom are mutually independent. Explicit time-dependence of a function means the function actually includes time t as a variable in addition to the q(t), p(t), not simply as a parameter through q(t) and p(t), which would mean explicit time-independence.
The Lagrangian is invariant under addition of the total time derivative of any function of q and t, that is:
{\displaystyle L'=L+{\frac {d}{dt}}F(\mathbf {q} ,t)\,,}
so each Lagrangian L and L′ describes exactly the same motion. In other words, the Lagrangian of a system is not unique.
Analogously, the Hamiltonian is invariant under addition of the partial time derivative of any function of q, p and t, that is:
{\displaystyle K=H+{\frac {\partial }{\partial t}}G(\mathbf {q} ,\mathbf {p} ,t)\,,}
(K is a frequently used letter in this case). This property is used in canonical transformations (see below).
If the Lagrangian is independent of some generalized coordinates, then the generalized momenta conjugate to those coordinates are constants of the motion, i.e. are conserved; this immediately follows from Lagrange's equations:
{\displaystyle {\frac {\partial L}{\partial q_{j}}}=0\,\rightarrow \,{\frac {dp_{j}}{dt}}={\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{j}}}=0}
Such coordinates are "cyclic" or "ignorable". It can be shown that the Hamiltonian is also cyclic in exactly the same generalized coordinates.
If the Lagrangian is time-independent the Hamiltonian is also time-independent (i.e. both are constant in time).
If the kinetic energy is a homogeneous function of degree 2 of the generalized velocities, and the Lagrangian is explicitly time-independent, then:
{\displaystyle T((\lambda {\dot {q}}_{i})^{2},(\lambda {\dot {q}}_{j}\lambda {\dot {q}}_{k}),\mathbf {q} )=\lambda ^{2}T(({\dot {q}}_{i})^{2},{\dot {q}}_{j}{\dot {q}}_{k},\mathbf {q} )\,,\quad L(\mathbf {q} ,\mathbf {\dot {q}} )\,,}
where λ is a constant, then the Hamiltonian will be the total conserved energy, equal to the total kinetic and potential energies of the system:
{\displaystyle H=T+V=E\,.}
This is the basis for the Schrödinger equation; inserting quantum operators directly yields it.
== Principle of least action ==
Action is another quantity in analytical mechanics defined as a functional of the Lagrangian:
{\displaystyle {\mathcal {S}}=\int _{t_{1}}^{t_{2}}L(\mathbf {q} ,\mathbf {\dot {q}} ,t)dt\,.}
A general way to find the equations of motion from the action is the principle of least action:
{\displaystyle \delta {\mathcal {S}}=\delta \int _{t_{1}}^{t_{2}}L(\mathbf {q} ,\mathbf {\dot {q}} ,t)dt=0\,,}
where the departure t1 and arrival t2 times are fixed. The term "path" or "trajectory" refers to the time evolution of the system as a path through configuration space {\displaystyle {\mathcal {C}}}, in other words q(t) tracing out a path in {\displaystyle {\mathcal {C}}}. The path for which action is least is the path taken by the system.
From this principle, all equations of motion in classical mechanics can be derived. This approach can be extended to fields rather than a system of particles (see below), and underlies the path integral formulation of quantum mechanics, and is used for calculating geodesic motion in general relativity.
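The stationarity of the action can be checked numerically (a minimal sketch; the free-particle Lagrangian and the sinusoidal trial path are assumed choices for illustration): discretizing S = ∫ L dt shows that the straight path between fixed endpoints has a smaller action than a perturbed path.

```python
# A minimal sketch (assumed example): discretized action S = ∫ ½ q̇² dt
# for a free particle; the true (straight) path minimizes it.
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]

def action(q):
    qdot = np.gradient(q, dt)
    return np.sum(0.5 * qdot**2) * dt

straight = t.copy()                    # true path between (0, 0) and (1, 1)
bent = t + 0.2 * np.sin(np.pi * t)     # same endpoints, perturbed interior

print(action(straight), action(bent))  # ≈ 0.5 < ≈ 0.599
```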
== Hamilton–Jacobi mechanics ==
Canonical transformations
The invariance of the Hamiltonian (under addition of the partial time derivative of an arbitrary function of p, q, and t) allows the Hamiltonian in one set of coordinates q and momenta p to be transformed into a new set Q = Q(q, p, t) and P = P(q, p, t), in four possible ways:
{\displaystyle {\begin{aligned}&K(\mathbf {Q} ,\mathbf {P} ,t)=H(\mathbf {q} ,\mathbf {p} ,t)+{\frac {\partial }{\partial t}}G_{1}(\mathbf {q} ,\mathbf {Q} ,t)\\&K(\mathbf {Q} ,\mathbf {P} ,t)=H(\mathbf {q} ,\mathbf {p} ,t)+{\frac {\partial }{\partial t}}G_{2}(\mathbf {q} ,\mathbf {P} ,t)\\&K(\mathbf {Q} ,\mathbf {P} ,t)=H(\mathbf {q} ,\mathbf {p} ,t)+{\frac {\partial }{\partial t}}G_{3}(\mathbf {p} ,\mathbf {Q} ,t)\\&K(\mathbf {Q} ,\mathbf {P} ,t)=H(\mathbf {q} ,\mathbf {p} ,t)+{\frac {\partial }{\partial t}}G_{4}(\mathbf {p} ,\mathbf {P} ,t)\\\end{aligned}}}
With the restriction on P and Q such that the transformed Hamiltonian system is:
{\displaystyle \mathbf {\dot {P}} =-{\frac {\partial K}{\partial \mathbf {Q} }}\,,\quad \mathbf {\dot {Q}} =+{\frac {\partial K}{\partial \mathbf {P} }}\,,}
the above transformations are called canonical transformations, each function Gn is called a generating function of the "nth kind" or "type-n". The transformation of coordinates and momenta can allow simplification for solving Hamilton's equations for a given problem.
The choice of Q and P is completely arbitrary, but not every choice leads to a canonical transformation. One simple criterion for a transformation q → Q and p → P to be canonical is that the Poisson bracket equal unity,
{\displaystyle \{Q_{i},P_{i}\}=1}
for all i = 1, 2,...N. If this does not hold then the transformation is not canonical.
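The bracket criterion can be verified directly (a minimal sketch assuming SymPy; the phase-space rotation below is an assumed example of a transformation to test, not taken from the text):

```python
# A minimal sketch (assumed example): checking {Q, P} = 1 for the
# phase-space rotation Q = q cos a + p sin a, P = −q sin a + p cos a.
import sympy as sp

q, p, a = sp.symbols('q p a')
Q = q * sp.cos(a) + p * sp.sin(a)
P = -q * sp.sin(a) + p * sp.cos(a)

bracket = sp.diff(Q, q) * sp.diff(P, p) - sp.diff(Q, p) * sp.diff(P, q)
print(sp.simplify(bracket))  # 1, so the transformation is canonical
```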
The Hamilton–Jacobi equation
By setting the canonically transformed Hamiltonian K = 0, and the type-2 generating function equal to Hamilton's principal function (also the action {\displaystyle {\mathcal {S}}}) plus an arbitrary constant C:
{\displaystyle G_{2}(\mathbf {q} ,t)={\mathcal {S}}(\mathbf {q} ,t)+C\,,}
the generalized momenta become:
{\displaystyle \mathbf {p} ={\frac {\partial {\mathcal {S}}}{\partial \mathbf {q} }}}
and P is constant, then the Hamilton–Jacobi equation (HJE) can be derived from the type-2 canonical transformation:
{\displaystyle H=-{\frac {\partial {\mathcal {S}}}{\partial t}}}
where H is the Hamiltonian as before:
{\displaystyle H=H(\mathbf {q} ,\mathbf {p} ,t)=H\left(\mathbf {q} ,{\frac {\partial {\mathcal {S}}}{\partial \mathbf {q} }},t\right)}
Another related function is Hamilton's characteristic function
{\displaystyle W(\mathbf {q} )={\mathcal {S}}(\mathbf {q} ,t)+Et}
used to solve the HJE by additive separation of variables for a time-independent Hamiltonian H.
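For a free particle the separation is explicit (a minimal sketch assuming SymPy; H = p²/2m and the separated form are standard, the check itself is an assumed illustration): with S = W − Et and W′ = √(2mE), the HJE holds identically.

```python
# A minimal sketch (assumed example): verifying H(q, ∂S/∂q) + ∂S/∂t = 0 for
# a free particle using S(q, t) = W(q) − E t with W(q) = sqrt(2 m E) q.
import sympy as sp

q, t = sp.symbols('q t')
m, E = sp.symbols('m E', positive=True)

W = sp.sqrt(2 * m * E) * q  # Hamilton's characteristic function
S = W - E * t               # Hamilton's principal function

p = sp.diff(S, q)                                   # p = ∂S/∂q
print(sp.simplify(p**2 / (2 * m) + sp.diff(S, t)))  # 0: the HJE is satisfied
```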
The study of the solutions of the Hamilton–Jacobi equations leads naturally to the study of symplectic manifolds and symplectic topology. In this formulation, the solutions of the Hamilton–Jacobi equations are the integral curves of Hamiltonian vector fields.
== Routhian mechanics ==
Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics, not often used but especially useful for removing cyclic coordinates. If the Lagrangian of a system has s cyclic coordinates q = q1, q2, ... qs with conjugate momenta p = p1, p2, ... ps, with the rest of the coordinates non-cyclic and denoted ζ = ζ1, ζ2, ..., ζN − s, they can be removed by introducing the Routhian:
{\displaystyle R=\mathbf {p} \cdot \mathbf {\dot {q}} -L(\mathbf {q} ,\mathbf {p} ,{\boldsymbol {\zeta }},{\dot {\boldsymbol {\zeta }}})\,,}
which leads to a set of 2s Hamiltonian equations for the cyclic coordinates q,
{\displaystyle {\dot {\mathbf {q} }}=+{\frac {\partial R}{\partial \mathbf {p} }}\,,\quad {\dot {\mathbf {p} }}=-{\frac {\partial R}{\partial \mathbf {q} }}\,,}
and N − s Lagrangian equations in the non-cyclic coordinates ζ.
{\displaystyle {\frac {d}{dt}}{\frac {\partial R}{\partial {\dot {\boldsymbol {\zeta }}}}}={\frac {\partial R}{\partial {\boldsymbol {\zeta }}}}\,.}
Set up in this way, although the Routhian has the form of the Hamiltonian, it can be thought of as a Lagrangian with N − s degrees of freedom.
The coordinates q do not have to be cyclic; the partition between which coordinates enter the Hamiltonian equations and which enter the Lagrangian equations is arbitrary. It is simply convenient to let the Hamiltonian equations remove the cyclic coordinates, leaving the non-cyclic coordinates to the Lagrangian equations of motion.
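A standard use case is the planar central-force problem (a sketch assuming SymPy; the Lagrangian and the conserved momentum p_φ are textbook material, the code itself is an assumed illustration): the cyclic angle φ is eliminated, and the Routhian yields the radial equation with its centrifugal term.

```python
# A minimal sketch (assumed example): Routhian for L = ½m(ṙ² + r²φ̇²) − V(r)
# with cyclic φ and conserved p_φ = m r² φ̇; R = p_φ φ̇ − L.
import sympy as sp

t = sp.symbols('t')
m, p_phi = sp.symbols('m p_phi', positive=True)
r = sp.Function('r')
V = sp.Function('V')

phidot = p_phi / (m * r(t)**2)  # eliminate φ̇ via the conserved momentum
L = sp.Rational(1, 2) * m * (r(t).diff(t)**2 + r(t)**2 * phidot**2) - V(r(t))
R = p_phi * phidot - L

# Lagrangian equation in the non-cyclic coordinate: d/dt ∂R/∂ṙ − ∂R/∂r = 0
radial = sp.diff(R.diff(r(t).diff(t)), t) - R.diff(r(t))
print(sp.simplify(radial))  # −m r̈ + p_φ²/(m r³) − V′(r): the radial equation
```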
== Appellian mechanics ==
Appell's equations of motion involve generalized accelerations, the second time derivatives of the generalized coordinates:
{\displaystyle \alpha _{r}={\ddot {q}}_{r}={\frac {d^{2}q_{r}}{dt^{2}}}\,,}
as well as generalized forces mentioned above in D'Alembert's principle. The equations are
{\displaystyle {\mathcal {Q}}_{r}={\frac {\partial S}{\partial \alpha _{r}}}\,,\quad S={\frac {1}{2}}\sum _{k=1}^{N}m_{k}\mathbf {a} _{k}^{2}\,,}
where
{\displaystyle \mathbf {a} _{k}={\ddot {\mathbf {r} }}_{k}={\frac {d^{2}\mathbf {r} _{k}}{dt^{2}}}}
is the acceleration of the kth particle, the second time derivative of its position vector. Each acceleration ak is expressed in terms of the generalized accelerations αr; likewise each rk is expressed in terms of the generalized coordinates qr.
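In the simplest case (a minimal sketch assuming SymPy; the single free particle on a line is an assumed illustration), the "energy of acceleration" S = ½ m α² gives back Newton's second law:

```python
# A minimal sketch (assumed example): Appell's equation Q = ∂S/∂α for one
# particle on a line, where α = q̈ and S = ½ m α².
import sympy as sp

m, alpha = sp.symbols('m alpha')
S = sp.Rational(1, 2) * m * alpha**2
print(sp.diff(S, alpha))  # m*alpha, i.e. Q = m q̈ (Newton's second law)
```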
== Classical field theory ==
=== Lagrangian field theory ===
Generalized coordinates apply to discrete particles. For N scalar fields φi(r, t) where i = 1, 2, ... N, the Lagrangian density is a function of these fields and their space and time derivatives, and possibly the space and time coordinates themselves:
{\displaystyle {\mathcal {L}}={\mathcal {L}}(\phi _{1},\phi _{2},\dots ,\nabla \phi _{1},\nabla \phi _{2},\dots ,\partial _{t}\phi _{1},\partial _{t}\phi _{2},\ldots ,\mathbf {r} ,t)\,.}
and the Euler–Lagrange equations have an analogue for fields:
{\displaystyle \partial _{\mu }\left({\frac {\partial {\mathcal {L}}}{\partial (\partial _{\mu }\phi _{i})}}\right)={\frac {\partial {\mathcal {L}}}{\partial \phi _{i}}}\,,}
where ∂μ denotes the 4-gradient and the summation convention has been used. For N scalar fields, these Lagrangian field equations are a set of N second order partial differential equations in the fields, which in general will be coupled and nonlinear.
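As a worked instance (a sketch assuming SymPy; the Klein–Gordon density in 1+1 dimensions is a standard example, not one the article derives), applying the field Euler–Lagrange equation recovers the Klein–Gordon equation:

```python
# A minimal sketch (assumed example): field Euler–Lagrange equation for
# L = ½(∂tφ)² − ½(∂xφ)² − ½m²φ² in 1+1 dimensions.
import sympy as sp

t, x, m = sp.symbols('t x m')
phi = sp.Function('phi')(t, x)
phi_t = sp.Derivative(phi, t)
phi_x = sp.Derivative(phi, x)

Ldens = sp.Rational(1, 2) * (phi_t**2 - phi_x**2 - m**2 * phi**2)

# ∂t(∂L/∂φ_t) + ∂x(∂L/∂φ_x) − ∂L/∂φ = 0
eom = (sp.diff(Ldens.diff(phi_t), t)
       + sp.diff(Ldens.diff(phi_x), x)
       - Ldens.diff(phi))
print(sp.simplify(eom))  # φ_tt − φ_xx + m²φ; setting it to 0 is Klein–Gordon
```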
This scalar field formulation can be extended to vector fields, tensor fields, and spinor fields.
The Lagrangian is the volume integral of the Lagrangian density:
{\displaystyle L=\int _{\mathcal {V}}{\mathcal {L}}\,dV\,.}
Originally developed for classical fields, the above formulation is applicable to all physical fields in classical, quantum, and relativistic situations: such as Newtonian gravity, classical electromagnetism, general relativity, and quantum field theory. It is a question of determining the correct Lagrangian density to generate the correct field equation.
=== Hamiltonian field theory ===
The corresponding "momentum" field densities conjugate to the N scalar fields φi(r, t) are:
{\displaystyle \pi _{i}(\mathbf {r} ,t)={\frac {\partial {\mathcal {L}}}{\partial {\dot {\phi }}_{i}}}\,\quad {\dot {\phi }}_{i}\equiv {\frac {\partial \phi _{i}}{\partial t}}}
where in this context the overdot denotes a partial time derivative, not a total time derivative. The Hamiltonian density {\displaystyle {\mathcal {H}}} is defined by analogy with mechanics:
{\displaystyle {\mathcal {H}}(\phi _{1},\phi _{2},\ldots ,\pi _{1},\pi _{2},\ldots ,\mathbf {r} ,t)=\sum _{i=1}^{N}{\dot {\phi }}_{i}(\mathbf {r} ,t)\pi _{i}(\mathbf {r} ,t)-{\mathcal {L}}\,.}
The equations of motion are:
{\displaystyle {\dot {\phi }}_{i}=+{\frac {\delta {\mathcal {H}}}{\delta \pi _{i}}}\,,\quad {\dot {\pi }}_{i}=-{\frac {\delta {\mathcal {H}}}{\delta \phi _{i}}}\,,}
where the variational derivative
{\displaystyle {\frac {\delta }{\delta \phi _{i}}}={\frac {\partial }{\partial \phi _{i}}}-\partial _{\mu }{\frac {\partial }{\partial (\partial _{\mu }\phi _{i})}}}
must be used instead of merely partial derivatives. For N fields, these Hamiltonian field equations are a set of 2N first order partial differential equations, which in general will be coupled and nonlinear.
Again, the volume integral of the Hamiltonian density is the Hamiltonian
{\displaystyle H=\int _{\mathcal {V}}{\mathcal {H}}\,dV\,.}
== Symmetry, conservation, and Noether's theorem ==
Symmetry transformations in classical space and time
Each transformation can be described by an operator (i.e. a function acting on the position r or momentum p variables to change them). Symmetries are the cases in which the operator leaves the equations of motion unchanged; the classical examples are spatial translations, time translations, rotations, and Galilean boosts. For a rotation, the operator is the rotation matrix R(n̂, θ) about an axis defined by the unit vector n̂ and angle θ.
Noether's theorem
Noether's theorem states that a continuous symmetry transformation of the action corresponds to a conservation law, i.e. the action (and hence the Lagrangian) does not change under a transformation parameterized by a parameter s:
{\displaystyle L[q(s,t),{\dot {q}}(s,t)]=L[q(t),{\dot {q}}(t)]}
the Lagrangian describes the same motion independent of s, which can be a length, angle of rotation, or time. The momenta conjugate to q will be conserved.
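A minimal symbolic check (a sketch assuming SymPy; the free particle and constant shift are assumed illustrations): the free-particle Lagrangian is unchanged by the translation q → q + s, and the conjugate momentum is the conserved Noether charge.

```python
# A minimal sketch (assumed example): translation invariance of
# L = ½ m q̇² and the conserved momentum p = ∂L/∂q̇.
import sympy as sp

t, s, m = sp.symbols('t s m')
q = sp.Function('q')

L = sp.Rational(1, 2) * m * q(t).diff(t)**2
L_shifted = L.subs(q(t), q(t) + s).doit()  # q → q + s, with s constant
print(sp.simplify(L - L_shifted))          # 0: the Lagrangian is invariant

print(L.diff(q(t).diff(t)))                # m*Derivative(q(t), t): conserved p
```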
== See also ==
Lagrangian mechanics
Hamiltonian mechanics
Theoretical mechanics
Classical mechanics
Hamilton–Jacobi equation
Hamilton's principle
Kinematics
Kinetics (physics)
Non-autonomous mechanics
Udwadia–Kalaba equation
== References and notes == | Wikipedia/Analytical_dynamics |
In theoretical physics, quantum chromodynamics (QCD) is the study of the strong interaction between quarks mediated by gluons. Quarks are fundamental particles that make up composite hadrons such as the proton, neutron and pion. QCD is a type of quantum field theory called a non-abelian gauge theory, with symmetry group SU(3). The QCD analog of electric charge is a property called color. Gluons are the force carriers of the theory, just as photons are for the electromagnetic force in quantum electrodynamics. The theory is an important part of the Standard Model of particle physics. A large body of experimental evidence for QCD has been gathered over the years.
QCD exhibits three salient properties:
Color confinement. Because the force between two color charges remains constant as they are separated, the energy grows until a quark–antiquark pair is spontaneously produced, turning the initial hadron into a pair of hadrons instead of isolating a color charge. Although analytically unproven, color confinement is well established from lattice QCD calculations and decades of experiments.
Asymptotic freedom, a steady reduction in the strength of interactions between quarks and gluons as the energy scale of those interactions increases (and the corresponding length scale decreases). The asymptotic freedom of QCD was discovered in 1973 by David Gross and Frank Wilczek, and independently by David Politzer in the same year. For this work, all three shared the 2004 Nobel Prize in Physics.
Chiral symmetry breaking, the spontaneous symmetry breaking of an important global symmetry of quarks, detailed below, with the result of generating masses for hadrons far above the masses of the quarks, and making pseudoscalar mesons exceptionally light. Yoichiro Nambu was awarded the 2008 Nobel Prize in Physics for elucidating the phenomenon in 1960, a dozen years before the advent of QCD. Lattice simulations have confirmed all his generic predictions.
== Terminology ==
Physicist Murray Gell-Mann coined the word quark in its present sense. It originally comes from the phrase "Three quarks for Muster Mark" in Finnegans Wake by James Joyce. On June 27, 1978, Gell-Mann wrote a private letter to the editor of the Oxford English Dictionary, in which he related that he had been influenced by Joyce's words: "The allusion to three quarks seemed perfect." (Originally, only three quarks had been discovered.)
The three kinds of charge in QCD (as opposed to one in quantum electrodynamics or QED) are usually referred to as "color charge" by loose analogy to the three kinds of color (red, green and blue) perceived by humans. Other than this nomenclature, the quantum parameter "color" is completely unrelated to the everyday, familiar phenomenon of color.
The force between quarks is known as the colour force (or color force) or strong interaction, and is responsible for the nuclear force.
Since the theory of electric charge is dubbed "electrodynamics", the Greek word χρῶμα (chrōma, "color") is applied to the theory of color charge, "chromodynamics".
== History ==
With the invention of bubble chambers and spark chambers in the 1950s, experimental particle physics discovered a large and ever-growing number of particles called hadrons. It seemed that such a large number of particles could not all be fundamental. First, the particles were classified by charge and isospin by Eugene Wigner and Werner Heisenberg; then, in 1953–56, according to strangeness by Murray Gell-Mann and Kazuhiko Nishijima (see Gell-Mann–Nishijima formula). To gain greater insight, the hadrons were sorted into groups having similar properties and masses using the eightfold way, invented in 1961 by Gell-Mann and Yuval Ne'eman. Gell-Mann and George Zweig, correcting an earlier approach of Shoichi Sakata, went on to propose in 1963 that the structure of the groups could be explained by the existence of three flavors of smaller particles inside the hadrons: the quarks. Gell-Mann also briefly discussed a field theory model in which quarks interact with gluons.
Perhaps the first remark that quarks should possess an additional quantum number was made as a short footnote in the preprint of Boris Struminsky in connection with the Ω− hyperon being composed of three strange quarks with parallel spins (this situation was peculiar, because since quarks are fermions, such a combination is forbidden by the Pauli exclusion principle):
Three identical quarks cannot form an antisymmetric S-state. In order to realize an antisymmetric orbital S-state, it is necessary for the quark to have an additional quantum number.
Boris Struminsky was a PhD student of Nikolay Bogolyubov. The problem considered in this preprint was suggested by Nikolay Bogolyubov, who advised Boris Struminsky in this research. In the beginning of 1965, Nikolay Bogolyubov, Boris Struminsky and Albert Tavkhelidze wrote a preprint with a more detailed discussion of the additional quark quantum degree of freedom. This work was also presented by Albert Tavkhelidze, without the consent of his collaborators, at an international conference in Trieste (Italy) in May 1965.
A similar mysterious situation arose with the Δ++ baryon; in the quark model, it is composed of three up quarks with parallel spins. In 1964–65, Greenberg and Han–Nambu independently resolved the problem by proposing that quarks possess an additional SU(3) gauge degree of freedom, later called color charge. Han and Nambu noted that quarks might interact via an octet of vector gauge bosons: the gluons.
Since free quark searches consistently failed to turn up any evidence for the new particles, and because an elementary particle back then was defined as a particle that could be separated and isolated, Gell-Mann often said that quarks were merely convenient mathematical constructs, not real particles. The meaning of this statement was usually clear in context: He meant quarks are confined, but he also was implying that the strong interactions could probably not be fully described by quantum field theory.
Richard Feynman argued that high energy experiments showed quarks are real particles: he called them partons (since they were parts of hadrons). By particles, Feynman meant objects that travel along paths, elementary particles in a field theory.
The difference between Feynman's and Gell-Mann's approaches reflected a deep split in the theoretical physics community. Feynman thought the quarks have a distribution of position or momentum, like any other particle, and he (correctly) believed that the diffusion of parton momentum explained diffractive scattering. Although Gell-Mann believed that certain quark charges could be localized, he was open to the possibility that the quarks themselves could not be localized because space and time break down. This was the more radical approach of S-matrix theory.
James Bjorken proposed that pointlike partons would imply certain relations in deep inelastic scattering of electrons and protons, which were verified in experiments at SLAC in 1969. This led physicists to abandon the S-matrix approach for the strong interactions.
In 1973 the concept of color as the source of a "strong field" was developed into the theory of QCD by physicists Harald Fritzsch and Heinrich Leutwyler, together with physicist Murray Gell-Mann. In particular, they employed the general field theory developed in 1954 by Chen Ning Yang and Robert Mills (see Yang–Mills theory), in which the carrier particles of a force can themselves radiate further carrier particles. (This is different from QED, where the photons that carry the electromagnetic force do not radiate further photons.)
The discovery of asymptotic freedom in the strong interactions by David Gross, David Politzer and Frank Wilczek allowed physicists to make precise predictions of the results of many high energy experiments using the quantum field theory technique of perturbation theory. Evidence of gluons was discovered in three-jet events at PETRA in 1979. These experiments became more and more precise, culminating in the verification of perturbative QCD at the level of a few percent at LEP, at CERN.
The other side of asymptotic freedom is confinement. Since the force between color charges does not decrease with distance, it is believed that quarks and gluons can never be liberated from hadrons. This aspect of the theory is verified within lattice QCD computations, but is not mathematically proven. One of the Millennium Prize Problems announced by the Clay Mathematics Institute requires a claimant to produce such a proof. Other aspects of non-perturbative QCD are the exploration of phases of quark matter, including the quark–gluon plasma.
== Theory ==
=== Some definitions ===
Every field theory of particle physics is based on certain symmetries of nature whose existence is deduced from observations. These can be
local symmetries, which are the symmetries that act independently at each point in spacetime. Each such symmetry is the basis of a gauge theory and requires the introduction of its own gauge bosons.
global symmetries, which are symmetries whose operations must be simultaneously applied to all points of spacetime.
QCD is a non-abelian gauge theory (or Yang–Mills theory) of the SU(3) gauge group obtained by taking the color charge to define a local symmetry.
Since the strong interaction does not discriminate between different flavors of quark, QCD has approximate flavor symmetry, which is broken by the differing masses of the quarks.
There are additional global symmetries whose definitions require the notion of chirality, the discrimination between left- and right-handed particles. If the spin of a particle has a positive projection on its direction of motion then it is called right-handed; otherwise, it is left-handed. Chirality and handedness are not the same, but become approximately equivalent at high energies.
Chiral symmetries involve independent transformations of these two types of particle.
Vector symmetries (also called diagonal symmetries) mean the same transformation is applied on the two chiralities.
Axial symmetries are those in which one transformation is applied on left-handed particles and the inverse on the right-handed particles.
=== Additional remarks: duality ===
As mentioned, asymptotic freedom means that at large energy – corresponding also to short distances – there is practically no interaction between the particles. This is in contrast – more precisely one would say dual – to what one is used to, since usually one associates the absence of interactions with large distances. However, as already noted in the original paper of Franz Wegner, a solid state theorist who in 1971 introduced simple gauge-invariant lattice models, the high-temperature behaviour of the original model, e.g. the strong decay of correlations at large distances, corresponds to the low-temperature behaviour of the (usually ordered!) dual model, namely the asymptotic decay of non-trivial correlations, e.g. short-range deviations from almost perfect arrangements, at short distances. Here, in contrast to Wegner, we have only the dual model, which is the one described in this article.
=== Symmetry groups ===
The color group SU(3) corresponds to the local symmetry whose gauging gives rise to QCD. The electric charge labels a representation of the local symmetry group U(1), which is gauged to give QED: this is an abelian group. If one considers a version of QCD with Nf flavors of massless quarks, then there is a global (chiral) flavor symmetry group SUL(Nf) × SUR(Nf) × UB(1) × UA(1). The chiral symmetry is spontaneously broken by the QCD vacuum to the vector (L+R) SUV(Nf) with the formation of a chiral condensate. The vector symmetry, UB(1) corresponds to the baryon number of quarks and is an exact symmetry. The axial symmetry UA(1) is exact in the classical theory, but broken in the quantum theory, an occurrence called an anomaly. Gluon field configurations called instantons are closely related to this anomaly.
There are two different types of SU(3) symmetry: the symmetry that acts on the different colors of quarks, which is an exact gauge symmetry mediated by the gluons, and the flavor symmetry that rotates different flavors of quarks into each other, flavor SU(3). Flavor SU(3) is an approximate symmetry of the vacuum of QCD, and is not a fundamental symmetry at all. It is an accidental consequence of the small mass of the three lightest quarks.
In the QCD vacuum there are vacuum condensates of all the quarks whose mass is less than the QCD scale. This includes the up and down quarks, and to a lesser extent the strange quark, but not any of the others. The vacuum is symmetric under SU(2) isospin rotations of up and down, and to a lesser extent under rotations of up, down, and strange, or full flavor group SU(3), and the observed particles make isospin and SU(3) multiplets.
The approximate flavor symmetries do have associated gauge bosons, observed particles like the rho and the omega, but these particles are nothing like the gluons and they are not massless. They are emergent gauge bosons in an approximate string description of QCD.
=== Lagrangian ===
The dynamics of the quarks and gluons are defined by the quantum chromodynamics Lagrangian. The gauge invariant QCD Lagrangian is
{\displaystyle {\mathcal {L}}_{\mathrm {QCD} }={\bar {\psi }}_{i}\left(i(\gamma ^{\mu }D_{\mu })_{ij}-m\,\delta _{ij}\right)\psi _{j}-{\tfrac {1}{4}}G_{\mu \nu }^{a}G_{a}^{\mu \nu },}
where {\displaystyle \psi _{i}(x)} is the quark field, a dynamical function of spacetime, in the fundamental representation of the SU(3) gauge group, indexed by {\displaystyle i} and {\displaystyle j} running from 1 to 3; {\displaystyle {\bar {\psi }}_{i}} is the Dirac adjoint of {\displaystyle \psi _{i}}; {\displaystyle D_{\mu }} is the gauge covariant derivative; and the γμ are gamma matrices connecting the spinor representation to the vector representation of the Lorentz group.
Herein, the gauge covariant derivative
{\displaystyle \left(D_{\mu }\right)_{ij}=\partial _{\mu }\delta _{ij}-ig\left(T_{a}\right)_{ij}{\mathcal {A}}_{\mu }^{a}}
couples the quark field with a coupling strength {\displaystyle g} to the gluon fields via the infinitesimal SU(3) generators {\displaystyle T_{a}} in the fundamental representation. An explicit representation of these generators is given by {\displaystyle T_{a}=\lambda _{a}/2}, wherein the {\displaystyle \lambda _{a}\ (a=1\ldots 8)} are the Gell-Mann matrices.
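As a concrete check, the generators and structure constants can be verified numerically. The following sketch (assuming numpy is available; the 0-based indices are an artifact of the code, not of the physics) builds the Gell-Mann matrices and extracts the structure constants from the commutation relation [Ta, Tb] = i fabc Tc:

```python
import numpy as np

# Build the eight Gell-Mann matrices lambda_a; the generators are T_a = lambda_a / 2.
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1;   lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = lam / 2

def f(a, b, c):
    """Structure constant f_abc, extracted using tr(T_a T_b) = delta_ab / 2."""
    comm = T[a] @ T[b] - T[b] @ T[a]
    return (-2j * np.trace(comm @ T[c])).real

print(f(0, 1, 2))   # f_123 = 1         (0-based indices)
print(f(3, 4, 7))   # f_458 = sqrt(3)/2
```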
The symbol {\displaystyle G_{\mu \nu }^{a}} represents the gauge invariant gluon field strength tensor, analogous to the electromagnetic field strength tensor, Fμν, in quantum electrodynamics. It is given by:
{\displaystyle G_{\mu \nu }^{a}=\partial _{\mu }{\mathcal {A}}_{\nu }^{a}-\partial _{\nu }{\mathcal {A}}_{\mu }^{a}+gf^{abc}{\mathcal {A}}_{\mu }^{b}{\mathcal {A}}_{\nu }^{c}\,,}
where {\displaystyle {\mathcal {A}}_{\mu }^{a}(x)} are the gluon fields, dynamical functions of spacetime, in the adjoint representation of the SU(3) gauge group, indexed by a, b and c running from 1 to 8; and fabc are the structure constants of SU(3) (the generators of the adjoint representation). Note that the rules to raise or lower the a, b, or c indices are trivial, (+, ..., +), so that fabc = fabc = fabc, whereas for the μ or ν indices one has the non-trivial relativistic rules corresponding to the metric signature (+ − − −).
The variables m and g correspond to the quark mass and coupling of the theory, respectively, which are subject to renormalization.
An important theoretical concept is the Wilson loop (named after Kenneth G. Wilson). In lattice QCD, the final term of the above Lagrangian is discretized via Wilson loops, and more generally the behavior of Wilson loops can distinguish confined and deconfined phases.
=== Fields ===
Quarks are massive spin-1⁄2 fermions that carry a color charge whose gauging is the content of QCD. Quarks are represented by Dirac fields in the fundamental representation 3 of the gauge group SU(3). They also carry electric charge (either −1⁄3 or +2⁄3) and participate in weak interactions as part of weak isospin doublets. They carry global quantum numbers including the baryon number, which is 1⁄3 for each quark, hypercharge and one of the flavor quantum numbers.
Gluons are spin-1 bosons that also carry color charges, since they lie in the adjoint representation 8 of SU(3). They have no electric charge, do not participate in the weak interactions, and have no flavor. They lie in the singlet representation 1 of all these symmetry groups.
Each type of quark has a corresponding antiquark, of which the charge is exactly opposite. They transform in the conjugate representation to quarks, denoted {\displaystyle {\bar {\mathbf {3} }}}.
=== Dynamics ===
According to the rules of quantum field theory, and the associated Feynman diagrams, the above theory gives rise to three basic interactions: a quark may emit (or absorb) a gluon, a gluon may emit (or absorb) a gluon, and two gluons may directly interact. This contrasts with QED, in which only the first kind of interaction occurs, since photons have no charge. Diagrams involving Faddeev–Popov ghosts must be considered too (except in the unitarity gauge).
=== Area law and confinement ===
Detailed computations with the above-mentioned Lagrangian show that the effective potential between a quark and its anti-quark in a meson contains a term that increases in proportion to the distance between the quark and anti-quark ({\displaystyle \propto r}), which represents some kind of "stiffness" of the interaction between the particle and its anti-particle at large distances, similar to the entropic elasticity of a rubber band (see below). This leads to confinement of the quarks to the interior of hadrons, i.e. mesons and nucleons, with typical radii Rc, corresponding to earlier "bag models" of the hadrons. The order of magnitude of the "bag radius" is 1 fm (= 10−15 m). Moreover, the above-mentioned stiffness is quantitatively related to the so-called "area law" behavior of the expectation value of the Wilson loop product PW of the ordered coupling constants around a closed loop W; i.e., {\displaystyle \langle P_{W}\rangle } is proportional to the area enclosed by the loop. For this behavior the non-abelian character of the gauge group is essential.
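The linearly rising term is often combined with the short-distance, Coulomb-like part into the phenomenological Cornell potential; together with the area-law statement for a loop of enclosed area A(W) and string tension σ, this can be summarized as (a standard parameterization quoted here for orientation, not a result derived in this article):
{\displaystyle V(r)\approx -{\frac {4}{3}}{\frac {\alpha _{s}}{r}}+\sigma r,\qquad \langle P_{W}\rangle \sim e^{-\sigma A(W)}.}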
== Methods ==
Further analysis of the content of the theory is complicated. Various techniques have been developed to work with QCD. Some of them are discussed briefly below.
=== Perturbative QCD ===
This approach is based on asymptotic freedom, which allows perturbation theory to be used accurately in experiments performed at very high energies. Although limited in scope, this approach has resulted in the most precise tests of QCD to date.
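The statement can be made concrete with the one-loop renormalization-group formula for the strong coupling, using the standard reference value αs(MZ) ≈ 0.118 and nf = 5 active flavors; the sketch below is only a crude orientation, not the higher-order running used in actual precision tests:

```python
import math

def alpha_s(Q, alpha_ref=0.118, mu=91.19, nf=5):
    """One-loop running coupling; alpha_ref is taken at scale mu (GeV)."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_ref / (1 + b0 * alpha_ref * math.log(Q**2 / mu**2))

for Q in (10, 91.19, 1000):
    print(f"alpha_s({Q} GeV) ~ {alpha_s(Q):.4f}")   # decreases as Q grows
```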
=== Lattice QCD ===
Among non-perturbative approaches to QCD, the most well established is lattice QCD. This approach uses a discrete set of spacetime points (called the lattice) to reduce the analytically intractable path integrals of the continuum theory to a very difficult numerical computation that is then carried out on supercomputers like the QCDOC, which was constructed for precisely this purpose. While it is a slow and resource-intensive approach, it has wide applicability, giving insight into parts of the theory inaccessible by other means, in particular into the explicit forces acting between quarks and antiquarks in a meson. However, the numerical sign problem makes it difficult to use lattice methods to study QCD at high density and low temperature (e.g. nuclear matter or the interior of neutron stars).
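The flavor of such computations can be conveyed by a deliberately tiny toy model: a two-dimensional U(1) lattice gauge theory with the Wilson plaquette action and Metropolis updates. Real lattice QCD uses SU(3) matrices in four dimensions and far more sophisticated algorithms; everything below is illustrative only.

```python
import math, random

L = 8            # lattice size
beta = 2.0       # inverse coupling
random.seed(1)
# Link variables are phases theta[x][y][mu] with mu = 0 (x-link) or 1 (y-link).
theta = [[[random.uniform(0, 2 * math.pi) for _ in range(2)]
          for _ in range(L)] for _ in range(L)]

def plaq(x, y):
    """Plaquette angle: ordered sum of the four link phases around a square."""
    xp, yp = (x + 1) % L, (y + 1) % L
    return (theta[x][y][0] + theta[xp][y][1]
            - theta[x][yp][0] - theta[x][y][1])

def local_action(x, y, mu):
    """Wilson action restricted to the two plaquettes containing link (x, y, mu)."""
    if mu == 0:
        s = math.cos(plaq(x, y)) + math.cos(plaq(x, (y - 1) % L))
    else:
        s = math.cos(plaq(x, y)) + math.cos(plaq((x - 1) % L, y))
    return -beta * s

for sweep in range(200):                 # Metropolis sweeps
    for x in range(L):
        for y in range(L):
            for mu in range(2):
                old = theta[x][y][mu]
                s_old = local_action(x, y, mu)
                theta[x][y][mu] = (old + random.uniform(-0.5, 0.5)) % (2 * math.pi)
                if random.random() > math.exp(-(local_action(x, y, mu) - s_old)):
                    theta[x][y][mu] = old   # reject the proposed update

avg = sum(math.cos(plaq(x, y)) for x in range(L) for y in range(L)) / L**2
print(f"average plaquette at beta={beta}: {avg:.3f}")
```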
=== 1/N expansion ===
A well-known approximation scheme, the 1⁄N expansion, starts from the idea that the number of colors is infinite, and makes a series of corrections to account for the fact that it is not. Until now, it has been the source of qualitative insight rather than a method for quantitative predictions. Modern variants include the AdS/CFT approach.
=== Effective theories ===
For specific problems, effective theories may be written down that give qualitatively correct results in certain limits. In the best of cases, these may then be obtained as systematic expansions in some parameters of the QCD Lagrangian. One such effective field theory is chiral perturbation theory or ChiPT, which is the QCD effective theory at low energies. More precisely, it is a low energy expansion based on the spontaneous chiral symmetry breaking of QCD, which is an exact symmetry when quark masses are equal to zero, but for the u, d and s quarks, which have small masses, it is still a good approximate symmetry. Depending on the number of quarks that are treated as light, one uses either SU(2) ChiPT or SU(3) ChiPT. Other effective theories are heavy quark effective theory (which expands around heavy quark mass near infinity), and soft-collinear effective theory (which expands around large ratios of energy scales). In addition to effective theories, models like the Nambu–Jona-Lasinio model and the chiral model are often used when discussing general features.
=== QCD sum rules ===
Based on an Operator product expansion one can derive sets of relations that connect different observables with each other.
== Experimental tests ==
The notion of quark flavors was prompted by the necessity of explaining the properties of hadrons during the development of the quark model. The notion of color was necessitated by the puzzle of the Δ++. This has been dealt with in the section on the history of QCD.
The first evidence for quarks as real constituent elements of hadrons was obtained in deep inelastic scattering experiments at SLAC. The first evidence for gluons came in three-jet events at PETRA.
Several good quantitative tests of perturbative QCD exist:
The running of the QCD coupling as deduced from many observations
Scaling violation in polarized and unpolarized deep inelastic scattering
Vector boson production at colliders (this includes the Drell–Yan process)
Direct photons produced in hadronic collisions
Jet cross sections in colliders
Event shape observables at the LEP
Heavy-quark production in colliders
Quantitative tests of non-perturbative QCD are fewer, because the predictions are harder to make. The best is probably the running of the QCD coupling as probed through lattice computations of heavy-quarkonium spectra. There is a recent claim about the mass of the heavy meson Bc. Other non-perturbative tests are currently at the level of 5% at best. Continuing work on masses and form factors of hadrons and their weak matrix elements is a promising source of future quantitative tests. The whole subject of quark matter and the quark–gluon plasma is a non-perturbative test bed for QCD that still remains to be properly exploited.
One qualitative prediction of QCD is that there exist composite particles made solely of gluons called glueballs that have not yet been definitively observed experimentally. A definitive observation of a glueball with the properties predicted by QCD would strongly confirm the theory. In principle, if glueballs could be definitively ruled out, this would be a serious experimental blow to QCD. But, as of 2013, scientists are unable to confirm or deny the existence of glueballs definitively, despite the fact that particle accelerators have sufficient energy to generate them.
== Cross-relations to condensed matter physics ==
There are unexpected cross-relations to condensed matter physics. For example, the notion of gauge invariance forms the basis of the well-known Mattis spin glasses, which are systems with the usual spin degrees of freedom {\displaystyle s_{i}=\pm 1} for i = 1, ..., N, with the special fixed "random" couplings
{\displaystyle J_{i,k}=\epsilon _{i}\,J_{0}\,\epsilon _{k}\,.}
Here the εi and εk quantities can independently and "randomly" take the values ±1, which corresponds to a most-simple gauge transformation
{\displaystyle s_{i}\to s_{i}\cdot \epsilon _{i}\,,\quad J_{i,k}\to \epsilon _{i}J_{i,k}\epsilon _{k}\,,\quad s_{k}\to s_{k}\cdot \epsilon _{k}\,.}
This means that thermodynamic expectation values of measurable quantities, e.g. of the energy
{\textstyle {\mathcal {H}}:=-\sum s_{i}\,J_{i,k}\,s_{k}\,,}
are invariant.
However, here the coupling degrees of freedom {\displaystyle J_{i,k}}, which in QCD correspond to the gluons, are "frozen" to fixed values (quenched). In QCD, by contrast, they "fluctuate" (annealing), and through the large number of gauge degrees of freedom the entropy plays an important role (see below).
For positive J0 the thermodynamics of the Mattis spin glass corresponds in fact simply to a "ferromagnet in disguise", just because these systems have no "frustration" at all. This term is a basic measure in spin glass theory. Quantitatively it is identical with the loop product
{\displaystyle P_{W}:=J_{i,k}J_{k,l}\cdots J_{n,m}J_{m,i}}
along a closed loop W. However, for a Mattis spin glass – in contrast to "genuine" spin glasses – the quantity PW never becomes negative.
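The sign property is easy to check numerically. The sketch below (purely illustrative; the couplings are restricted to the bonds of one closed loop) builds Mattis couplings and compares the loop product with that of independent random couplings:

```python
import random

random.seed(0)
N = 6  # number of bonds around one closed loop W

def loop_product(J):
    """Ordered product of the couplings around the closed loop."""
    p = 1.0
    for Jk in J:
        p *= Jk
    return p

# Mattis couplings J_{i,i+1} = eps_i * J0 * eps_{i+1}: every eps appears twice
# around a closed loop, so the product is J0**N > 0 -- no frustration.
J0 = 1.0
eps = [random.choice((-1, 1)) for _ in range(N)]
J_mattis = [eps[i] * J0 * eps[(i + 1) % N] for i in range(N)]
print("Mattis loop product:", loop_product(J_mattis))

# A "genuine" spin glass with independent random signs can give a negative
# (frustrated) loop product.
J_random = [random.choice((-1.0, 1.0)) for _ in range(N)]
print("random loop product:", loop_product(J_random))
```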
The basic spin-glass notion of "frustration" is actually analogous to the Wilson loop quantity of QCD. The only difference is again that in QCD one is dealing with SU(3) matrices, and that one is dealing with a "fluctuating" quantity. Energetically, a perfect absence of frustration should be unfavorable and atypical for a spin glass, which means that one should add the loop product to the Hamiltonian by some kind of term representing a "punishment". In QCD the Wilson loop is essential for the Lagrangian right away.
The relation between QCD and "disordered magnetic systems" (to which the spin glasses belong) was additionally stressed in a paper by Fradkin, Huberman and Shenker, which also emphasizes the notion of duality.
A further analogy consists in the already mentioned similarity to polymer physics, where, analogously to Wilson loops, so-called "entangled nets" appear, which are important for the formation of the entropy elasticity (force proportional to the length) of a rubber band. The non-abelian character of SU(3) corresponds thereby to the non-trivial "chemical links", which glue different loop segments together, and "asymptotic freedom" means in the polymer analogy simply the fact that in the short-wave limit, i.e. for {\displaystyle 0\leftarrow \lambda _{w}\ll R_{c}} (where Rc is a characteristic correlation length for the glued loops, corresponding to the above-mentioned "bag radius", while λw is the wavelength of an excitation), any non-trivial correlation vanishes totally, as if the system had crystallized.
There is also a correspondence between confinement in QCD – the fact that the color field is only different from zero in the interior of hadrons – and the behaviour of the usual magnetic field in the theory of type-II superconductors: there the magnetism is confined to the interior of the Abrikosov flux-line lattice, i.e., the London penetration depth λ of that theory is analogous to the confinement radius Rc of quantum chromodynamics. Mathematically, this correspondence is supported by the second term,
{\displaystyle \propto gG_{\mu }^{a}{\bar {\psi }}_{i}\gamma ^{\mu }T_{ij}^{a}\psi _{j}\,,}
on the right-hand side of the Lagrangian.
== See also ==
For overviews:
Standard Model
Strong interaction
Quark
Gluon
Hadron
Color confinement
QCD matter
Quark–gluon plasma
For details:
Gauge theory
Quantum gauge theory, BRST quantization and Faddeev–Popov ghost
Quantum field theory – a more general category
For techniques:
Lattice QCD
1/N expansion
Perturbative QCD
Soft-collinear effective theory
Heavy quark effective theory
Chiral model
Nambu–Jona-Lasinio model
For experiments:
Deep inelastic scattering
Jet (particle physics)
Quark–gluon plasma
Quantum electrodynamics
Symmetry in quantum mechanics
Yang–Mills theory
Yang–Mills existence and mass gap
== References ==
== Further reading ==
Greiner, Walter; Schramm, Stefan; Stein, Eckart (2007). Quantum Chromodynamics. Berlin Heidelberg: Springer. ISBN 978-3-540-48535-3.
Halzen, Francis; Martin, Alan (1984). Quarks & Leptons: An Introductory Course in Modern Particle Physics. John Wiley & Sons. ISBN 978-0-471-88741-6.
Creutz, Michael (1985). Quarks, Gluons and Lattices. Cambridge University Press. ISBN 978-0-521-31535-7.
Gross, Franz; Klempt, Eberhard; Brodsky, Stanley J.; Buras, Andrzej J.; Burkert, Volker D.; Heinrich, Gudrun; Jakobs, Karl; Meyer, Curtis A.; Orginos, Kostas; Strickland, Michael; Stachel, Johanna; Zanderighi, Giulia; Brambilla, Nora; Braun-Munzinger, Peter; Britzger, Daniel (2023-12-12). "50 Years of quantum chromodynamics: Introduction and Review". The European Physical Journal C. 83 (12): 1125. arXiv:2212.11107. Bibcode:2023EPJC...83.1125G. doi:10.1140/epjc/s10052-023-11949-2. ISSN 1434-6052. A highly technical review with almost 5000 references.
== External links ==
Frank Wilczek (2000). "QCD made simple" (PDF). Physics Today. 53 (8): 22–28. Bibcode:2000PhT....53h..22W. doi:10.1063/1.1310117.
Particle data group
The millennium prize for proving confinement
Ab Initio Determination of Light Hadron Masses
Andreas S Kronfeld The Weight of the World Is Quantum Chromodynamics
Andreas S Kronfeld Quantum chromodynamics with advanced computing
Standard model gets right answer
Quantum Chromodynamics
CERN Courier, The history of QCD with Prof. Dr. Harald Fritzsch
The relationship between chemistry and physics is a topic of debate in the philosophy of science. The issue is a complicated one, since both physics and chemistry are divided into multiple subfields, each with their own goals. A major theme is whether, and in what sense, chemistry can be said to "reduce" to physics.
== Background ==
Although physics and chemistry are branches of science that both study matter, they differ in the scopes of their respective subjects. While physics focuses on phenomena such as force, motion, electromagnetism, elementary particles, and spacetime, chemistry is concerned mainly with the structure and reactions of atoms and molecules, but does not necessarily deal with non-baryonic matter. However, the two disciplines overlap in subjects concerning the behaviour of fluids, the thermodynamics of chemical reactions, the magnetic forces between atoms and molecules, and quantum chemistry. Moreover, the laws of chemistry highly depend on the laws of quantum mechanics.
In some respects the two sciences have developed independently, though less so towards the end of the twentieth century. There are many areas of major overlap: for instance, both chemical physics and physical chemistry combine the two, while materials science is an interdisciplinary area which combines both, as well as some elements of engineering. This was deliberate: as the National Academies of Sciences, Engineering, and Medicine recognized, there are limitations to trying to force science into categories rather than focusing on the issues of importance, an approach now common in materials science.
== Historical views ==
In the 19th century, Auguste Comte, in his hierarchy of the sciences, classified chemistry as more dependent than physics, as chemistry requires physics.
In 1958, Paul Oppenheim and Hilary Putnam put forward the idea that in the 20th century chemistry had been reduced to physics, as evidence for the unity of science.
== References ==
Astroparticle physics, also called particle astrophysics, is a branch of particle physics that studies elementary particles of astrophysical origin and their relation to astrophysics and cosmology. It is a relatively new field of research emerging at the intersection of particle physics, astronomy, astrophysics, detector physics, relativity, solid state physics, and cosmology. Partly motivated by the discovery of neutrino oscillation, the field has undergone rapid development, both theoretically and experimentally, since the early 2000s.
== History ==
The field of astroparticle physics evolved out of optical astronomy. With the growth of detector technology came the more mature field of astrophysics, which involved multiple physics subtopics, such as mechanics, electrodynamics, thermodynamics, plasma physics, nuclear physics, relativity, and particle physics. Particle physicists found astrophysics necessary because of the difficulty of producing particles with energies comparable to those found in space. For example, the cosmic ray spectrum contains particles with energies as high as 1020 eV, whereas a proton–proton collision at the Large Hadron Collider occurs at an energy of ~1012 eV.
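A quick estimate makes the gap concrete: a cosmic-ray proton striking a proton at rest yields a center-of-mass energy of roughly the square root of 2Emp c2, so even the most energetic collider collisions fall short by more than an order of magnitude (numbers rounded):

```python
import math

m_p = 0.938e9      # proton rest energy, eV
E_cosmic = 1e20    # highest observed cosmic-ray energies, eV

# Fixed-target center-of-mass energy in the ultra-relativistic limit.
sqrt_s = math.sqrt(2 * E_cosmic * m_p)
print(f"sqrt(s) ~ {sqrt_s / 1e12:.0f} TeV")   # ~430 TeV, versus ~10 TeV-scale collisions at the LHC
```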
The field can be said to have begun in 1910, when the German physicist Theodor Wulf measured the ionization in the air, an indicator of gamma radiation, at the bottom and top of the Eiffel Tower. He found far more ionization at the top than would be expected if terrestrial sources alone accounted for this radiation.
The Austrian physicist Victor Francis Hess hypothesized that some of the ionization was caused by radiation from the sky. In order to defend this hypothesis, Hess designed instruments capable of operating at high altitudes and performed observations on ionization up to an altitude of 5.3 km. From 1911 to 1913, Hess made ten flights to meticulously measure ionization levels. Through prior calculations, he did not expect there to be any ionization above an altitude of 500 m if terrestrial sources were the sole cause of radiation. His measurements, however, revealed that although the ionization levels initially decreased with altitude, they began to rise sharply at some point. At the peaks of his flights, he found that the ionization levels were much greater than at the surface. Hess was then able to conclude that "a radiation of very high penetrating power enters our atmosphere from above". Furthermore, one of Hess's flights was during a near-total eclipse of the Sun. Since he did not observe a dip in ionization levels, Hess reasoned that the source had to be further away in space. For this discovery, Hess was one of the people awarded the Nobel Prize in Physics in 1936. In 1925, Robert Millikan confirmed Hess's findings and subsequently coined the term 'cosmic rays'.
Many physicists knowledgeable about the origins of the field of astroparticle physics prefer to attribute this 'discovery' of cosmic rays by Hess as the starting point for the field.
== Topics of research ==
While it may be difficult to decide on a standard 'textbook' description of the field of astroparticle physics, the field can be characterized by the topics of research that are actively being pursued. The journal Astroparticle Physics accepts papers that are focused on new developments in the following areas:
High-energy cosmic-ray physics and astrophysics;
Particle cosmology;
Particle astrophysics;
Related astrophysics: supernova, active galactic nuclei, cosmic abundances, dark matter etc.;
High-energy, VHE and UHE gamma-ray astronomy;
High- and low-energy neutrino astronomy;
Instrumentation and detector developments related to the above-mentioned fields.
=== Open questions ===
One main task for the future of the field is simply to thoroughly define itself beyond working definitions and clearly differentiate itself from astrophysics and other related topics.
Current unsolved problems for the field of astroparticle physics include the characterization of dark matter and dark energy. Observations of the orbital velocities of stars in the Milky Way and other galaxies, starting with Walter Baade and Fritz Zwicky in the 1930s, along with the observed velocities of galaxies in galactic clusters, found motions far exceeding what the energy density of the visible matter could account for. Since the early 1990s some candidates have been found to partially explain some of the missing dark matter, but they are nowhere near sufficient to offer a full explanation. The finding of an accelerating universe suggests that a large part of the missing dark matter is stored as dark energy in a dynamical vacuum.
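The logic of the velocity argument can be sketched numerically: for a circular orbit, v(r) is the square root of GM(<r)/r, so if essentially all of the mass were the visible mass concentrated well inside the orbit, v would fall off as r−1/2, whereas measured rotation curves stay roughly flat out to large radii. The mass value below is purely illustrative:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_visible = 1.5e41   # kg; illustrative luminous mass, assumed concentrated centrally
kpc = 3.086e19       # meters

def v_keplerian(r):
    """Circular speed if all the mass lies inside radius r."""
    return math.sqrt(G * M_visible / r)

for r_kpc in (5, 10, 20, 40):
    v = v_keplerian(r_kpc * kpc) / 1e3
    print(f"r = {r_kpc:>2} kpc: Keplerian v ~ {v:3.0f} km/s (observed curves stay ~flat)")
```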
Another question for astroparticle physicists is why there is so much more matter than antimatter in the universe today. Baryogenesis is the term for the hypothetical processes that produced the unequal numbers of baryons and antibaryons in the early universe, which is why the universe is made of matter today, rather than antimatter.
== Experimental facilities ==
The rapid development of this field has led to the design of new types of infrastructure. In underground laboratories or with specially designed telescopes, antennas and satellite experiments, astroparticle physicists employ new detection methods to observe a wide range of cosmic particles including neutrinos, gamma rays and cosmic rays at the highest energies. They are also searching for dark matter and gravitational waves. Experimental particle physicists are limited by the technology of their terrestrial accelerators, which are only able to produce a small fraction of the energies found in nature.
The following is an incomplete list of laboratories and experiments in astroparticle physics.
=== Underground laboratories ===
These facilities are located deep underground, to shield very sensitive experiments from cosmic rays that would otherwise preclude the observation of very rare phenomena.
China Jinping Underground Laboratory is a deep underground laboratory in the Jinping Mountains of Sichuan, China.
Kamioka Observatory is a neutrino and gravitational waves laboratory located underground in the Mozumi Mine near the Kamioka section of the city of Hida in Gifu Prefecture, Japan. It is the site of the Super-Kamiokande experiment.
Laboratori Nazionali del Gran Sasso (LNGS) is a laboratory that hosts experiments requiring a low-background environment. Its experimental halls are located within the Gran Sasso mountain, near L'Aquila (Italy).
SNOLAB is located 2 km underground in an active mine, in Greater Sudbury (Canada). Expanded from the original Sudbury Neutrino Observatory, the entire underground laboratory is operated as a cleanroom, hosting experiments in neutrino physics and dark matter searches.
Sanford Underground Research Facility (SURF) located in Lead, South Dakota hosts multiple experiments and is funded in part by the United States Department of Energy.
=== Neutrino detectors ===
Very large neutrino detectors are required to record the extremely rare interactions of neutrinos with atomic matter.
IceCube (Antarctica). The largest particle detector in the world, it was completed in December 2010. The purpose of the detector is to investigate high-energy neutrinos, search for dark matter, observe supernova explosions, and search for exotic particles such as magnetic monopoles.
ANTARES (Toulon, France). A Neutrino detector 2.5 km under the Mediterranean Sea off the coast of Toulon, France. Designed to locate and observe neutrino flux in the direction of the southern hemisphere.
NESTOR Project (Pylos, Greece). The target of the international collaboration is the deployment of a neutrino telescope on the sea floor off Pylos, Greece.
BOREXINO, a real-time detector, installed at LNGS, designed to detect neutrinos from the Sun with an organic liquid scintillator target.
=== Dark matter detectors ===
Experiments are dedicated to the direct detection of dark matter interactions with the detector target material.
LZ experiment is a dark matter direct detection experiment hoping to observe weakly interacting massive particles (WIMPs) scatters on xenon nuclei. The experiment is located at the Sanford Underground Research Facility (SURF) in South Dakota, and is managed by the United States Department of Energy's Lawrence Berkeley National Lab.
XENONnT, the upgrade of XENON1T, is a dark matter direct search experiment located at LNGS and is expected to be sensitive to WIMPs with spin-independent cross section of 10−48 cm2.
The Global Argon Dark Matter Collaboration operates a series of liquid argon experiments: DarkSide-50 at LNGS, DEAP-3600 at SNOLAB, and the upcoming DarkSide-20k detector at LNGS. These experiments look for WIMPs and heavier dark matter particle candidates.
The Cryogenic Dark Matter Search (CDMS) is a series of experiments searching for WIMP interactions with semiconductor detectors at millikelvin temperatures.
The CERN Axion Solar Telescope (CERN, Switzerland) searches for axions originating from the Sun.
=== Cosmic ray observatories ===
Interested in high-energy cosmic ray detection are:
Pierre Auger Observatory (Malargüe, Argentina) detects and investigates high energy cosmic rays using two techniques. One is to study the particles interactions with water placed in surface detector tanks. The other technique is to track the development of air showers through observation of ultraviolet light emitted high in the Earth's atmosphere.
Telescope Array Project (Delta, Utah), an experiment for the detection of ultra high energy cosmic rays (UHECRs) using a ground array and fluorescence techniques in the desert of west Utah.
== See also ==
Astroparticle Physics (journal)
Urca process
Unsolved problems in physics
== References ==
== External links ==
Aspera European network portal
www.astroparticle.org: all about astroparticle physics...
Aspera news
Virtual Institute of Astroparticle Physics
Helmholtz Alliance for Astroparticle Physics
UCLA Astro-Particle Physics at UCLA
Journal of Cosmology and Astroparticle Physics
Astroparticle Physics in the Netherlands
Astroparticle and High Energy Physics
ASD: Astroparticle Physics Laboratory at NASA
Teaching Astroparticle Physics
In physics, phenomenology is the application of theoretical physics to experimental data by making quantitative predictions based upon known theories. It is related to the philosophical notion of the same name in that these predictions describe anticipated behaviors for the phenomena in reality. Phenomenology stands in contrast with experimentation in the scientific method, in which the goal of the experiment is to test a scientific hypothesis instead of making predictions.
Phenomenology is commonly applied to the field of particle physics, where it forms a bridge between the mathematical models of theoretical physics (such as quantum field theories and theories of the structure of space-time) and the results of the high-energy particle experiments. It is sometimes used in other fields such as in condensed matter physics and plasma physics, when there are no existing theories for the observed experimental data.
== Applications in particle physics ==
=== Standard Model consequences ===
Within the well-tested and generally accepted Standard Model, phenomenology is the calculating of detailed predictions for experiments, usually at high precision (e.g., including radiative corrections).
Examples include:
Next-to-leading order calculations of particle production rates and distributions.
Monte Carlo simulation studies of physics processes at colliders (see the toy sketch after this list).
Extraction of parton distribution functions from data.
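As a toy version of the Monte Carlo item above, one can generate scattering angles from the leading-order e+e− → μ+μ− angular distribution, dN/dcosθ proportional to 1 + cos2θ, and estimate the fraction of events passing a detector acceptance cut; the cut value and event count are arbitrary choices for the example:

```python
import random

random.seed(42)

def sample_costheta():
    """Acceptance-rejection sampling of cos(theta) from 1 + cos^2(theta)."""
    while True:
        c = random.uniform(-1, 1)
        if random.uniform(0, 2) < 1 + c * c:   # the density is bounded by 2
            return c

n, accepted = 100_000, 0
for _ in range(n):
    if abs(sample_costheta()) < 0.9:           # toy detector acceptance cut
        accepted += 1

frac = accepted / n
exact = (0.9 + 0.9**3 / 3) / (1 + 1 / 3)       # analytic value of the same ratio
print(f"MC acceptance: {frac:.4f} (exact {exact:.4f})")
```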
==== CKM matrix calculations ====
The CKM matrix is useful in these predictions:
Application of heavy quark effective field theory to extract CKM matrix elements.
Using lattice QCD to extract quark masses and CKM matrix elements from experiment.
=== Theoretical models ===
In Physics beyond the Standard Model, phenomenology addresses the experimental consequences of new models: how their new particles could be searched for, how the model parameters could be measured, and how the model could be distinguished from other, competing models.
==== Phenomenological analysis ====
Phenomenological analyses, in which one studies the experimental consequences of adding the most general set of beyond-the-Standard-Model effects in a given sector of the Standard Model, usually parameterized in terms of anomalous couplings and higher-dimensional operators. In this case, the term "phenomenological" is being used more in its philosophy of science sense.
== See also ==
Effective theory
Phenomenological model
Phenomenological quantum gravity
== References ==
== External links ==
Papers on phenomenology are available on the hep-ph archive of the ArXiv.org e-print archive
List of topics on phenomenology from IPPP, the Institute for Particle Physics Phenomenology at University of Durham, UK
Collider Phenomenology: Basic knowledge and techniques, lectures by Tao Han
Pheno '08 Symposium on particle physics phenomenology, including slides from the talks linked from the symposium program.
A Grand Unified Theory (GUT) is any model in particle physics that merges the electromagnetic, weak, and strong forces (the three gauge interactions of the Standard Model) into a single force at high energies. Although this unified force has not been directly observed, many GUT models theorize its existence. If the unification of these three interactions is possible, it raises the possibility that there was a grand unification epoch in the very early universe in which these three fundamental interactions were not yet distinct.
Experiments have confirmed that at high energy, the electromagnetic interaction and weak interaction unify into a single combined electroweak interaction. GUT models predict that at even higher energy, the strong and electroweak interactions will unify into one electronuclear interaction. This interaction is characterized by one larger gauge symmetry and thus several force carriers, but one unified coupling constant. Unifying gravity with the electronuclear interaction would provide a more comprehensive theory of everything (TOE) rather than a Grand Unified Theory. Thus, GUTs are often seen as an intermediate step towards a TOE.
The novel particles predicted by GUT models are expected to have extremely high masses—around the GUT scale of 1016 GeV/c2 (only three orders of magnitude below the Planck scale of 1019 GeV/c2)—and so are well beyond the reach of any foreseeable particle collider experiment. Therefore, the particles predicted by GUT models cannot be observed directly; instead, the effects of grand unification might be detected through indirect observations of the following:
proton decay,
electric dipole moments of elementary particles,
or the properties of neutrinos.
Some GUTs, such as the Pati–Salam model, predict the existence of magnetic monopoles.
While GUTs might be expected to offer simplicity over the complications present in the Standard Model, realistic models remain complicated because they need to introduce additional fields and interactions, or even additional dimensions of space, in order to reproduce observed fermion masses and mixing angles. This difficulty, in turn, may be related to the existence of family symmetries beyond the conventional GUT models. Due to this and the lack of any observed effect of grand unification so far, there is no generally accepted GUT model.
Models that do not unify the three interactions using one simple group as the gauge symmetry but do so using semisimple groups can exhibit similar properties and are sometimes referred to as Grand Unified Theories as well.
== History ==
Historically, the first true GUT, which was based on the simple Lie group SU(5), was proposed by Howard Georgi and Sheldon Glashow in 1974. The Georgi–Glashow model was preceded by the semisimple Lie algebra Pati–Salam model by Abdus Salam and Jogesh Pati also in 1974, who pioneered the idea to unify gauge interactions.
The acronym GUT was first coined in 1978 by CERN researchers John Ellis, Andrzej Buras, Mary K. Gaillard, and Dimitri Nanopoulos; however, in the final version of their paper they opted for the less anatomical GUM (Grand Unification Mass). Later that year, Nanopoulos was the first to use the acronym in a paper.
== Motivation ==
The fact that the electric charges of electrons and protons seem to cancel each other exactly to extreme precision is essential for the existence of the macroscopic world as we know it, but this important property of elementary particles is not explained in the Standard Model of particle physics. While the description of strong and weak interactions within the Standard Model is based on gauge symmetries governed by the simple symmetry groups SU(3) and SU(2) which allow only discrete charges, the remaining component, the weak hypercharge interaction is described by an abelian symmetry U(1) which in principle allows for arbitrary charge assignments. The observed charge quantization, namely the postulation that all known elementary particles carry electric charges which are exact multiples of one-third of the "elementary" charge, has led to the idea that hypercharge interactions and possibly the strong and weak interactions might be embedded in one Grand Unified interaction described by a single, larger simple symmetry group containing the Standard Model. This would automatically predict the quantized nature and values of all elementary particle charges. Since this also results in a prediction for the relative strengths of the fundamental interactions which we observe, in particular, the weak mixing angle, grand unification ideally reduces the number of independent input parameters but is also constrained by observations.
Grand unification is reminiscent of the unification of electric and magnetic forces by Maxwell's field theory of electromagnetism in the 19th century, but its physical implications and mathematical structure are qualitatively different.
== Unification of matter particles ==
=== SU(5) ===
SU(5) is the simplest GUT. The smallest simple Lie group which contains the standard model, and upon which the first Grand Unified Theory was based, is
{\displaystyle {\rm {SU(5)\supset SU(3)\times SU(2)\times U(1).}}}
Such group symmetries allow the reinterpretation of several known particles, including the photon, W and Z bosons, and gluon, as different states of a single particle field. However, it is not obvious that the simplest possible choices for the extended "Grand Unified" symmetry should yield the correct inventory of elementary particles. The fact that all currently known matter particles fit perfectly into three copies of the smallest group representations of SU(5) and immediately carry the correct observed charges, is one of the first and most important reasons why people believe that a Grand Unified Theory might actually be realized in nature.
The two smallest irreducible representations of SU(5) are 5 (the defining representation) and 10. (These bold numbers indicate the dimension of the representation.) In the standard assignment, the 5 contains the charge conjugates of the right-handed down-type quark color triplet and a left-handed lepton isospin doublet, while the 10 contains the six up-type quark components, the left-handed down-type quark color triplet, and the right-handed electron. This scheme has to be replicated for each of the three known generations of matter. It is notable that the theory is anomaly free with this matter content.
The hypothetical right-handed neutrinos are a singlet of SU(5), which means their mass is not forbidden by any symmetry; it does not require spontaneous electroweak symmetry breaking, which explains why this mass can be heavy (see seesaw mechanism).
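For one generation, the type I seesaw relation alluded to here takes the standard textbook form, with Dirac mass mD and heavy Majorana mass MR:
{\displaystyle M={\begin{pmatrix}0&m_{D}\\m_{D}&M_{R}\end{pmatrix}},\qquad m_{\nu }\simeq {\frac {m_{D}^{2}}{M_{R}}}\quad (M_{R}\gg m_{D}),}
so a large MR automatically makes the light eigenvalue mν small.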
=== SO(10) ===
The next simple Lie group which contains the standard model is
{\displaystyle {\rm {SO(10)\supset SU(5)\supset SU(3)\times SU(2)\times U(1).}}}
Here, the unification of matter is even more complete, since the irreducible spinor representation 16 contains both the 5 and 10 of SU(5) and a right-handed neutrino, and thus the complete particle content of one generation of the extended standard model with neutrino masses. This is already the largest simple group that achieves the unification of matter in a scheme involving only the already known matter particles (apart from the Higgs sector).
Since different standard model fermions are grouped together in larger representations, GUTs specifically predict relations among the fermion masses, such as between the electron and the down quark, the muon and the strange quark, and the tau lepton and the bottom quark for SU(5) and SO(10). Some of these mass relations hold approximately, but most do not (see Georgi–Jarlskog mass relation).
The boson matrix for SO(10) is found by taking the 15 × 15 matrix from the 10 + 5 representation of SU(5) and adding an extra row and column for the right-handed neutrino. The bosons are found by adding a partner to each of the 20 charged bosons (2 right-handed W bosons, 6 massive charged gluons and 12 X/Y type bosons) and adding an extra heavy neutral Z-boson to make 5 neutral bosons in total. The boson matrix will have a boson or its new partner in each row and column. These pairs combine to create the familiar 16D Dirac spinor matrices of SO(10).
=== E6 ===
In some forms of string theory, including E8 × E8 heterotic string theory, the resultant four-dimensional theory after spontaneous compactification on a six-dimensional Calabi–Yau manifold resembles a GUT based on the group E6. Notably E6 is the only exceptional simple Lie group to have any complex representations, a requirement for a theory to contain chiral fermions (namely all weakly-interacting fermions). Hence the other four (G2, F4, E7, and E8) can't be the gauge group of a GUT.
=== Extended Grand Unified Theories ===
Non-chiral extensions of the Standard Model with vectorlike split-multiplet particle spectra, which naturally appear in the higher SU(N) GUTs, considerably modify the desert physics and lead to realistic (string-scale) grand unification for conventional three quark–lepton families even without using supersymmetry (see below). On the other hand, due to a new missing-VEV mechanism emerging in the supersymmetric SU(8) GUT, a simultaneous solution to the gauge hierarchy (doublet–triplet splitting) problem and the problem of unification of flavor can be argued for.
GUTs with four families / generations, SU(8): Assuming 4 generations of fermions instead of 3 makes a total of 64 types of particles. These can be put into 64 = 8 + 56 representations of SU(8). This can be divided into SU(5) × SU(3)F × U(1) which is the SU(5) theory together with some heavy bosons which act on the generation number.
GUTs with four families / generations, O(16): Again assuming 4 generations of fermions, the 128 particles and anti-particles can be put into a single spinor representation of O(16).
=== Symplectic groups and quaternion representations ===
Symplectic gauge groups could also be considered. For example, Sp(8) (which is called Sp(4) in the article symplectic group) has a representation in terms of 4 × 4 quaternion unitary matrices which has a 16-dimensional real representation, and so might be considered as a candidate for a gauge group. Sp(8) has 32 charged bosons and 4 neutral bosons. Its subgroups include SU(4), so it can at least contain the gluons and photon of SU(3) × U(1), although it is probably not possible to have weak bosons acting on chiral fermions in this representation. A quaternion representation of the fermions might be:
{\displaystyle {\begin{bmatrix}e+i\ {\overline {e}}+j\ v+k\ {\overline {v}}\\u_{r}+i\ {\overline {u}}_{\mathrm {\overline {r}} }+j\ d_{\mathrm {r} }+k\ {\overline {d}}_{\mathrm {\overline {r}} }\\u_{g}+i\ {\overline {u}}_{\mathrm {\overline {g}} }+j\ d_{\mathrm {g} }+k\ {\overline {d}}_{\mathrm {\overline {g}} }\\u_{b}+i\ {\overline {u}}_{\mathrm {\overline {b}} }+j\ d_{\mathrm {b} }+k\ {\overline {d}}_{\mathrm {\overline {b}} }\\\end{bmatrix}}_{\mathrm {L} }}
A further complication with quaternion representations of fermions is that there are two types of multiplication: left multiplication and right multiplication which must be taken into account. It turns out that including left and right-handed 4 × 4 quaternion matrices is equivalent to including a single right-multiplication by a unit quaternion which adds an extra SU(2) and so has an extra neutral boson and two more charged bosons. Thus the group of left- and right-handed 4 × 4 quaternion matrices is Sp(8) × SU(2) which does include the standard model bosons:
{\displaystyle \mathrm {SU(4,\mathbb {H} )_{L}\times \mathbb {H} _{R}=Sp(8)\times SU(2)\supset SU(4)\times SU(2)\supset SU(3)\times SU(2)\times U(1)} }
If {\displaystyle \psi } is a quaternion-valued spinor, {\displaystyle A_{\mu }^{ab}} is a quaternion hermitian 4 × 4 matrix coming from Sp(8), and {\displaystyle B_{\mu }} is a pure vector quaternion (both of which are 4-vector bosons), then the interaction term is
{\displaystyle {\overline {\psi ^{a}}}\gamma _{\mu }\left(A_{\mu }^{ab}\psi ^{b}+\psi ^{a}B_{\mu }\right).}
=== Octonion representations ===
It can be noted that a generation of 16 fermions can be put into the form of an octonion with each element of the octonion being an 8-vector. If the 3 generations are then put in a 3x3 hermitian matrix with certain additions for the diagonal elements then these matrices form an exceptional (Grassmann) Jordan algebra, which has the symmetry group of one of the exceptional Lie groups (F4, E6, E7, or E8) depending on the details.
{\displaystyle \psi ={\begin{bmatrix}a&e&\mu \\{\overline {e}}&b&\tau \\{\overline {\mu }}&{\overline {\tau }}&c\end{bmatrix}},\qquad [\psi _{A},\psi _{B}]\subset \mathrm {J} _{3}(\mathbb {O} ).}
Because they are fermions the anti-commutators of the Jordan algebra become commutators. It is known that E6 has subgroup O(10) and so is big enough to include the Standard Model. An E8 gauge group, for example, would have 8 neutral bosons, 120 charged bosons and 120 charged anti-bosons. To account for the 248 fermions in the lowest multiplet of E8, these would either have to include anti-particles (and so have baryogenesis), have new undiscovered particles, or have gravity-like (spin connection) bosons affecting elements of the particles spin direction. Each of these possesses theoretical problems.
=== Beyond Lie groups ===
Other structures have been suggested including Lie 3-algebras and Lie superalgebras. Neither of these fit with Yang–Mills theory. In particular Lie superalgebras would introduce bosons with incorrect statistics. Supersymmetry, however, does fit with Yang–Mills.
== Unification of forces and the role of supersymmetry ==
The unification of forces is possible due to the energy scale dependence of force coupling parameters in quantum field theory called renormalization group "running", which allows parameters with vastly different values at usual energies to converge to a single value at a much higher energy scale.
The renormalization group running of the three gauge couplings in the Standard Model has been found to nearly, but not quite, meet at the same point if the hypercharge is normalized so that it is consistent with SU(5) or SO(10) GUTs, which are precisely the GUT groups which lead to a simple fermion unification. This is a significant result, as other Lie groups lead to different normalizations. However, if the supersymmetric extension MSSM is used instead of the Standard Model, the match becomes much more accurate. In this case, the coupling constants of the strong and electroweak interactions meet at the grand unification energy, also known as the GUT scale:
{\displaystyle \Lambda _{\text{GUT}}\approx 10^{16}\,{\text{GeV}}.}
It is commonly believed that this matching is unlikely to be a coincidence, and is often quoted as one of the main motivations to further investigate supersymmetric theories despite the fact that no supersymmetric partner particles have been experimentally observed. Also, most model builders simply assume supersymmetry because it solves the hierarchy problem—i.e., it stabilizes the electroweak Higgs mass against radiative corrections.
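The near-meeting can be reproduced with the standard one-loop beta coefficients for the inverse couplings, running as alpha_i^{-1}(Q) = alpha_i^{-1}(MZ) − (bi/2π) ln(Q/MZ) with GUT-normalized hypercharge. The sketch below uses textbook inputs at MZ and, for simplicity, runs the MSSM coefficients from MZ rather than from the superpartner mass scale, which shifts the meeting point only slightly:

```python
import math

M_Z = 91.19                     # GeV
alpha_inv = (59.0, 29.6, 8.5)   # U(1)_Y (GUT-normalized), SU(2)_L, SU(3)_c at M_Z
b_SM = (41 / 10, -19 / 6, -7)   # one-loop Standard Model coefficients
b_MSSM = (33 / 5, 1, -3)        # one-loop MSSM coefficients

def run(b, Q):
    """Inverse couplings at scale Q from one-loop running."""
    return tuple(a - bi / (2 * math.pi) * math.log(Q / M_Z)
                 for a, bi in zip(alpha_inv, b))

for Q in (1e3, 1e13, 2e16):
    sm = "  ".join(f"{x:5.1f}" for x in run(b_SM, Q))
    mssm = "  ".join(f"{x:5.1f}" for x in run(b_MSSM, Q))
    print(f"Q = {Q:8.0e} GeV   SM: {sm}   MSSM: {mssm}")
```

At Q of about 2×1016 GeV the three MSSM values come out nearly equal (about 24), while the Standard Model values do not.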
== Neutrino masses ==
Since Majorana masses of the right-handed neutrino are forbidden by SO(10) symmetry, SO(10) GUTs predict the Majorana masses of right-handed neutrinos to be close to the GUT scale where the symmetry is spontaneously broken in those models. In supersymmetric GUTs, this scale tends to be larger than would be desirable to obtain realistic masses of the light, mostly left-handed neutrinos (see neutrino oscillation) via the seesaw mechanism. These predictions are independent of the Georgi–Jarlskog mass relations, wherein some GUTs predict other fermion mass ratios.
== Proposed theories ==
Several theories have been proposed, but none is currently universally accepted. An even more ambitious theory that includes all fundamental forces, including gravitation, is termed a theory of everything. Some common mainstream GUT models are:
Pati–Salam model – SU(4) × SU(2) × SU(2)
Georgi–Glashow model – SU(5); and Flipped SU(5) – SU(5) × U(1)
SO(10) model; and Flipped SO(10) – SO(10) × U(1)
E6 model; and Trinification – SU(3) × SU(3) × SU(3)
minimal left-right model – SU(3)C × SU(2)L × SU(2)R × U(1)B−L
331 model – SU(3)C × SU(3)L × U(1)X
chiral color
Not quite GUTs:
Note: These models refer to Lie algebras not to Lie groups. The Lie group could be
{\displaystyle [\mathrm {SU} (4)\times \mathrm {SU} (2)\times \mathrm {SU} (2)]/\mathbb {Z} _{2},} just to take a random example.
The most promising candidate is SO(10).
(Minimal) SO(10) does not contain any exotic fermions (i.e. additional fermions besides the Standard Model fermions and the right-handed neutrino), and it unifies each generation into a single irreducible representation. A number of other GUT models are based upon subgroups of SO(10). They are the minimal left-right model, SU(5), flipped SU(5) and the Pati–Salam model. The GUT group E6 contains SO(10), but models based upon it are significantly more complicated. The primary reason for studying E6 models comes from E8 × E8 heterotic string theory.
GUT models generically predict the existence of topological defects such as monopoles, cosmic strings, domain walls, and others. But none have been observed. Their absence is known as the monopole problem in cosmology. Many GUT models also predict proton decay, although not the Pati–Salam model; to date, proton decay has never been experimentally observed. The minimal experimental limit on the proton's lifetime essentially rules out minimal SU(5) and heavily constrains the other models. The lack of detected supersymmetry to date also constrains many models.
Some GUT theories, such as SU(5) and SO(10), suffer from what is called the doublet-triplet problem. These theories predict that for each electroweak Higgs doublet there is a corresponding colored Higgs triplet field with a very small mass (many orders of magnitude smaller than the GUT scale). In a theory unifying quarks with leptons, the Higgs doublet would indeed be unified with such a Higgs triplet. No such triplets have been observed; they would cause extremely rapid proton decay (with a lifetime far below current experimental limits) and prevent the gauge coupling strengths from running together in the renormalization group.
Most GUT models require a threefold replication of the matter fields; as such, they do not explain why there are three generations of fermions. Most GUT models also fail to explain the hierarchy between the fermion masses of the different generations.
== Ingredients ==
A GUT model consists of a gauge group (a compact Lie group), a connection form for that Lie group, a Yang–Mills action for that connection given by an invariant symmetric bilinear form over its Lie algebra (specified by a coupling constant for each factor), a Higgs sector consisting of a number of scalar fields taking values in real or complex representations of the Lie group, and chiral Weyl fermions taking values in a complex representation of the Lie group. The Lie group contains the Standard Model group, and the Higgs fields acquire VEVs, leading to a spontaneous symmetry breaking down to the Standard Model. The Weyl fermions represent matter.
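Schematically, and with conventions left loose, these ingredients assemble into a Lagrangian of the form

\mathcal{L} = -\frac{1}{4g^{2}}\,\mathrm{Tr}\,F_{\mu\nu}F^{\mu\nu} + \bar{\psi}\,i\gamma^{\mu}D_{\mu}\psi + |D_{\mu}\phi|^{2} - V(\phi),

where F_{μν} is the field strength of the connection, ψ denotes the chiral Weyl fermions, φ the Higgs scalars, and the minima of the potential V(φ) set the VEVs that break the symmetry. This is a generic sketch, not the Lagrangian of any particular GUT.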
== Current evidence ==
The discovery of neutrino oscillations indicates that the Standard Model is incomplete, but there is currently no clear evidence that nature is described by any Grand Unified Theory. Neutrino oscillations have, however, led to renewed interest in certain GUTs, such as SO(10).
Among the few possible experimental tests of certain GUTs are proton decay and fermion masses, with a few additional tests specific to supersymmetric GUTs. However, the minimum proton lifetimes established by experiment (at or above the 10^34–10^35 year range) have ruled out the simpler GUTs and most non-SUSY models.
The maximum upper limit on the proton lifetime (if unstable) is calculated at 6×10^39 years for SUSY models and 1.4×10^36 years for minimal non-SUSY GUTs.
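The order of magnitude of these lifetimes can be recovered from the standard dimensional estimate τ_p ∼ M_X⁴/(α_GUT² m_p⁵) for decay mediated by a heavy gauge boson of mass M_X. The sketch below uses rough, illustrative inputs:

```python
# Dimensional estimate of the proton lifetime from heavy gauge boson exchange:
#   Gamma ~ alpha_GUT^2 * m_p^5 / M_X^4   (natural units, GeV)
GEV_INV_TO_S = 6.58e-25       # hbar in GeV*s, converts 1/GeV to seconds
SECONDS_PER_YEAR = 3.15e7

alpha_GUT = 1 / 25            # rough unified coupling
m_p = 0.938                   # proton mass, GeV
for M_X in (1e15, 1e16):      # heavy gauge boson mass, GeV
    gamma = alpha_GUT**2 * m_p**5 / M_X**4            # decay width in GeV
    tau_years = (1 / gamma) * GEV_INV_TO_S / SECONDS_PER_YEAR
    print(f"M_X = {M_X:.0e} GeV  ->  tau_p ~ {tau_years:.1e} yr")
```

With M_X ≈ 10^15 GeV the estimate lands near 10^31 years, well below the experimental bound, while M_X ≈ 10^16 GeV gives roughly 10^35 years, which is why current limits cut into, but do not yet exclude, GUT-scale models.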
The gauge coupling strengths of QCD, the weak interaction and hypercharge seem to meet at a common energy scale called the GUT scale, approximately equal to 10^16 GeV (slightly less than the Planck energy of 10^19 GeV), which is somewhat suggestive. This numerical observation is called gauge coupling unification, and it works particularly well if one assumes the existence of superpartners of the Standard Model particles. Still, it is possible to achieve the same result by postulating, for instance, that ordinary (non-supersymmetric) SO(10) models break with an intermediate gauge scale, such as that of the Pati–Salam group.
== See also ==
B − L quantum number
Classical unified field theories
Paradigm shift
Physics beyond the Standard Model
Theory of everything
X and Y bosons
== Notes ==
== References ==
== Further reading ==
Hawking, Stephen. A Brief History of Time. Includes a brief popular overview.
Langacker, Paul (2012). "Grand unification". Scholarpedia. 7 (10): 11419. Bibcode:2012SchpJ...711419L. doi:10.4249/scholarpedia.11419.
== External links ==
The Algebra of Grand Unified Theories
The many-body problem is a general name for a vast category of physical problems pertaining to the properties of microscopic systems made of many interacting particles.
== Terminology ==
Microscopic here implies that quantum mechanics has to be used to provide an accurate description of the system. Many can be anywhere from three to infinity (in the case of a practically infinite, homogeneous or periodic system, such as a crystal), although three- and four-body systems can be treated by specific means (respectively the Faddeev and Faddeev–Yakubovsky equations) and are thus sometimes separately classified as few-body systems.
== Explanation of the problem ==
In general terms, while the underlying physical laws that govern the motion of each individual particle may (or may not) be simple, the study of the collection of particles can be extremely complex. In such a quantum system, the repeated interactions between particles create quantum correlations, or entanglement. As a consequence, the wave function of the system is a complicated object holding a large amount of information, which usually makes exact or analytical calculations impractical or even impossible.
This becomes especially clear by comparison with classical mechanics. Imagine a single particle that can be described with k numbers (take, for example, a free particle described by its position and velocity vectors, so that k = 6). In classical mechanics, n such particles can simply be described by k·n numbers, so the dimension of the classical many-body system scales linearly with the number of particles n.
In quantum mechanics, however, the many-body system is in general in a superposition of combinations of single-particle states, and all k^n different combinations have to be accounted for. The dimension of the quantum many-body system therefore scales exponentially with n, much faster than in classical mechanics.
Because the required numerical expense grows so quickly, simulating the dynamics of more than three quantum-mechanical particles is already infeasible for many physical systems. Thus, many-body theoretical physics most often relies on a set of approximations specific to the problem at hand, and ranks among the most computationally intensive fields of science.
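A quick way to appreciate this exponential wall is to tabulate the memory needed just to store the state vector of n spin-1/2 particles (k = 2). This is a minimal sketch, assuming 16 bytes per complex amplitude:

```python
# Memory needed to store the full state vector of n spin-1/2 particles
# (complex128 = 16 bytes per amplitude), illustrating the 2^n growth.
for n in (10, 20, 30, 40, 50):
    dim = 2**n                  # Hilbert space dimension
    gib = dim * 16 / 2**30      # memory in GiB
    print(f"n = {n:2d}  dim = 2^{n} = {dim:.3e}  memory ~ {gib:.3e} GiB")
```

Around n ≈ 50 the bare state vector already exceeds the memory of the largest supercomputers, which is why approximate methods such as those listed under Approaches below are indispensable.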
In many cases, emergent phenomena may arise which bear little resemblance to the underlying elementary laws.
Many-body problems play a central role in condensed matter physics.
== Examples ==
Condensed matter physics (solid-state physics, nanoscience, superconductivity)
Bose–Einstein condensation and Superfluids
Quantum chemistry (computational chemistry, molecular physics)
Atomic physics
Molecular physics
Nuclear physics (Nuclear structure, nuclear reactions, nuclear matter)
Quantum chromodynamics (Lattice QCD, hadron spectroscopy, QCD matter, quark–gluon plasma)
== Approaches ==
Mean-field theory and extensions (e.g. Hartree–Fock, Random phase approximation)
Dynamical mean field theory
Many-body perturbation theory and Green's function-based methods
Configuration interaction
Coupled cluster
Various Monte-Carlo approaches
Density functional theory
Lattice gauge theory
Matrix product state
Neural network quantum states
Numerical renormalization group
== Further reading ==
Jenkins, Stephen. "The Many Body Problem and Density Functional Theory".
Thouless, D. J. (1972). The quantum mechanics of many-body systems. New York: Academic Press. ISBN 0-12-691560-1.
Fetter, A. L.; Walecka, J. D. (2003). Quantum Theory of Many-Particle Systems. New York: Dover. ISBN 0-486-42827-3.
Nozières, P. (1997). Theory of Interacting Fermi Systems. Addison-Wesley. ISBN 0-201-32824-0.
Mattuck, R. D. (1976). A guide to Feynman diagrams in the many-body problem. New York: McGraw-Hill. ISBN 0-07-040954-4.
== References ==
Vehicle dynamics is the study of vehicle motion, e.g., how a vehicle's forward movement changes in response to driver inputs, propulsion system outputs, ambient conditions, air/surface/water conditions, etc.
Vehicle dynamics is a part of engineering primarily based on classical mechanics.
It may be applied for motorized vehicles (such as automobiles), bicycles and motorcycles, aircraft, and watercraft.
== Factors affecting vehicle dynamics ==
The aspects of a vehicle's design which affect the dynamics can be grouped into drivetrain and braking, suspension and steering, distribution of mass, aerodynamics and tires.
=== Drivetrain and braking ===
Automobile layout (i.e. location of engine and driven wheels)
Powertrain
Braking system
=== Suspension and steering ===
Some attributes relate to the geometry of the suspension, steering and chassis. These include:
Ackermann steering geometry
Axle track
Camber angle
Caster angle
Ride height
Roll center
Scrub radius
Steering ratio
Toe
Wheel alignment
Wheelbase
=== Distribution of mass ===
Some attributes or aspects of vehicle dynamics are purely due to mass and its distribution. These include:
Center of mass
Moment of inertia
Roll moment
Sprung mass
Unsprung mass
Weight distribution
=== Aerodynamics ===
Some attributes or aspects of vehicle dynamics are purely aerodynamic. These include:
Automobile drag coefficient
Automotive aerodynamics
Center of pressure
Downforce
Ground effect in cars
=== Tires ===
Some attributes or aspects of vehicle dynamics can be attributed directly to the tires. These include:
Camber thrust
Circle of forces
Contact patch
Cornering force
Ground pressure
Pacejka's Magic Formula
Pneumatic trail
Radial Force Variation
Relaxation length
Rolling resistance
Self aligning torque
Skid
Slip angle
Slip (vehicle dynamics)
Spinout
Steering ratio
Tire load sensitivity
== Vehicle behaviours ==
Some attributes or aspects of vehicle dynamics are purely dynamic. These include:
Body flex
Body roll
Bump steer
Bundorf analysis
Directional stability
Critical speed
Noise, vibration, and harshness
Pitch
Ride quality
Roll
Speed wobble
Understeer, oversteer, lift-off oversteer, and fishtailing
Weight transfer and load transfer
Yaw
== Analysis and simulation ==
The dynamic behavior of vehicles can be analysed in several different ways, ranging from a simple spring-mass system, through a three-degree-of-freedom (DoF) bicycle model, to highly complex models built in a multibody system simulation package such as MSC ADAMS or Modelica. As computers have become faster and software user interfaces have improved, commercial packages such as CarSim have become widely used in industry for rapidly evaluating hundreds of test conditions much faster than real time. Vehicle models are often simulated with advanced controller designs provided as software in the loop (SIL) with controller design software such as Simulink, or with physical hardware in the loop (HIL).
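As an illustration of the simpler end of this range, the sketch below integrates a linear two-degree-of-freedom single-track ("bicycle") model at constant forward speed; all parameter values are illustrative placeholders rather than data for any particular vehicle:

```python
import numpy as np

# Linear 2-DOF single-track ("bicycle") model at constant forward speed u.
# States: lateral velocity v [m/s], yaw rate r [rad/s]. Input: steer angle delta [rad].
m, Iz = 1500.0, 2500.0     # mass [kg], yaw inertia [kg m^2] (illustrative)
a, b = 1.2, 1.6            # CG to front/rear axle distances [m]
Cf, Cr = 80e3, 90e3        # front/rear cornering stiffnesses [N/rad]
u = 25.0                   # forward speed [m/s]

def derivs(v, r, delta):
    alpha_f = (v + a * r) / u - delta    # front slip angle
    alpha_r = (v - b * r) / u            # rear slip angle
    Fyf, Fyr = -Cf * alpha_f, -Cr * alpha_r
    vdot = (Fyf + Fyr) / m - u * r       # lateral acceleration balance
    rdot = (a * Fyf - b * Fyr) / Iz      # yaw moment balance
    return vdot, rdot

# Forward-Euler response to a 2-degree step steer over 3 seconds
v = r = 0.0
dt, delta = 0.001, np.radians(2.0)
for _ in range(int(3.0 / dt)):
    vdot, rdot = derivs(v, r, delta)
    v, r = v + vdot * dt, r + rdot * dt
print(f"steady-state yaw rate ~ {r:.3f} rad/s, lateral velocity ~ {v:.3f} m/s")
```

The model captures the understeer/oversteer balance through the relative sizes of b·Cr and a·Cf, but it ignores load transfer, tire nonlinearity and roll, which is where the multibody packages mentioned above take over.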
Vehicle motions are largely due to the shear forces generated between the tires and road, and therefore the tire model is an essential part of the math model. In current vehicle simulator models, the tire model is the weakest and most difficult part to simulate. The tire model must produce realistic shear forces during braking, acceleration, cornering, and combinations, on a range of surface conditions. Many models are in use. Most are semi-empirical, such as the Pacejka Magic Formula model.
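The Magic Formula expresses a tire force as F(x) = D·sin(C·arctan(Bx − E(Bx − arctan Bx))) in a slip quantity x (slip angle or slip ratio). A minimal sketch for lateral force follows; the coefficient values are illustrative placeholders, not fitted tire data:

```python
import numpy as np

# Pacejka "Magic Formula" for lateral tire force as a function of slip angle.
# B: stiffness factor, C: shape factor, D: peak force [N], E: curvature factor.
def magic_formula(slip_angle_rad, B=10.0, C=1.9, D=4000.0, E=0.97):
    Bx = B * slip_angle_rad
    return D * np.sin(C * np.arctan(Bx - E * (Bx - np.arctan(Bx))))

for deg in (1, 2, 5, 10):
    F = magic_formula(np.radians(deg))
    print(f"slip angle {deg:2d} deg  ->  lateral force ~ {F:.0f} N")
```

The force rises roughly linearly at small slip, peaks, and then falls off, which is the qualitative behavior a usable tire model must reproduce across braking, acceleration and cornering.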
Racing car games and simulators are also a form of vehicle dynamics simulation. In early versions many simplifications were necessary in order to get real-time performance with reasonable graphics. However, improvements in computer speed, combined with interest in realistic physics, have led to driving simulators that are used for vehicle engineering with detailed models such as CarSim.
It is important that the models should agree with real world test results, hence many of the following tests are correlated against results from instrumented test vehicles.
Techniques include:
Linear range constant radius understeer
Fishhook
Frequency response
Lane change
Moose test
Sinusoidal steering
Skidpad
Swept path analysis
== See also ==
Automotive suspension design
Automobile handling
Hunting oscillation
Multi-axis shaker table
Vehicular metrics
4-poster
7 post shaker
== References ==
== Further reading ==
Bakker, Egbert; Nyborg, Lars; Pacejka, Hans B. (1987). "Tyre modelling for use in vehicle dynamics studies" (PDF). Society of Automotive Engineers. A new way of representing tyre data obtained from measurements in pure cornering and pure braking conditions.
Gillespie, Thomas D. (1992). Fundamentals of Vehicle Dynamics (2nd printing). Warrendale, PA: Society of Automotive Engineers. ISBN 978-1-56091-199-9. Mathematically oriented derivation of standard vehicle dynamics equations, and definitions of standard terms.
Milliken, William F. (2002). Chassis Design: Principles and Analysis. Society of Automotive Engineers. Vehicle dynamics as developed by Maurice Olley from the 1930s onwards; the first comprehensive analytical synthesis of vehicle dynamics.
Milliken, William F.; Milliken, Douglas L. (1995). Race Car Vehicle Dynamics (4th printing). Warrendale, PA: Society of Automotive Engineers. ISBN 978-1-56091-526-3. The standard reference for automotive suspension engineers.
Reimpell, Jörnsen; Stoll, Helmut; Betzler, Jürgen W. (2001). The Automotive Chassis: Engineering Principles. Translated from the German by AGET (2nd ed.). Warrendale, PA: Society of Automotive Engineers. ISBN 978-0-7680-0657-5. Archived from the original on 2012-11-02. Retrieved 2017-09-17. Vehicle dynamics and chassis design from a race car perspective.
Guiggiani, Massimo (2014). The Science of Vehicle Dynamics (1st ed.). Dordrecht: Springer. ISBN 978-94-017-8532-7. Handling, braking, and ride of road and race cars.
Meywerk, Martin (2015). Vehicle Dynamics (1st ed.). West Sussex: John Wiley & Sons. ISBN 978-1-118-97135-2. Lecture notes for the MOOC Vehicle Dynamics on iversity.