Dataset columns:
id: int64 (values 39 to 79M)
url: string (length 32 to 168)
text: string (length 7 to 145k)
source: string (length 2 to 105)
categories: list (1 to 6 items)
token_count: int64 (values 3 to 32.2k)
subcategories: list (0 to 27 items)
16,821,478
https://en.wikipedia.org/wiki/CFD-DEM%20model
A CFD-DEM model is suitable for the modeling or simulation of fluid-solids or fluid-particles systems. In a typical CFD-DEM model, the motion of the discrete solids or particles phase is obtained by the Discrete Element Method (DEM), which applies Newton's laws of motion to every particle, while the flow of the continuum fluid is described by the locally averaged Navier–Stokes equations, which can be solved by traditional Computational Fluid Dynamics (CFD). The model was first proposed by Tsuji et al. The interactions between the fluid phase and the solids phase are modeled according to Newton's third law. Software Open source and non-commercial software: The open source CFD software OpenFOAM includes particle methods, including DEM, and solvers that couple CFD-DEM. CFDEMcoupling (DCS Computing GmbH) couples CFD from OpenFOAM with the open source DEM software LIGGGHTS. MFiX (an open source multiphase flow simulation package). The commercial software Simcenter STAR-CCM+ is an integrated multiphysics solution capable of CFD-DEM coupling involving single or multiphase flow, chemical reactions, electromagnetism and heat transfer. Parallelization OpenMP has been shown by Amritkar et al. to be more efficient than MPI in performing coupled CFD-DEM calculations in a parallel framework. References Simulation software
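To make the coupling concrete, the sketch below shows a minimal, purely illustrative DEM update of the kind described above: each particle is advanced with Newton's second law under gravity plus a fluid force. The Stokes drag law, the parameter values, and the function names are assumptions made for this example (particle-particle contact forces and the CFD solve for the fluid are omitted); they are not taken from the article or from any of the software packages it lists.

```python
import numpy as np

def dem_step(pos, vel, fluid_vel, dt, d_p=1e-3, rho_p=2500.0, mu_f=1e-3):
    """Advance particle positions/velocities by one explicit time step.

    Newton's second law is applied to every particle; the fluid acts on the
    particles through a simple Stokes drag term (an illustrative assumption).
    """
    m_p = rho_p * np.pi * d_p**3 / 6.0                  # particle mass
    g = np.array([0.0, 0.0, -9.81])                     # gravity
    drag = 3.0 * np.pi * mu_f * d_p * (fluid_vel - vel) # Stokes drag per particle
    acc = g + drag / m_p                                # Newton's second law
    vel_new = vel + acc * dt
    pos_new = pos + vel_new * dt
    return pos_new, vel_new

# Example: 1000 particles settling in an upward fluid stream of 0.01 m/s.
# In a full CFD-DEM coupling, fluid_vel would be interpolated from the CFD grid.
n = 1000
pos = np.random.rand(n, 3) * 0.1
vel = np.zeros((n, 3))
u_fluid = np.tile([0.0, 0.0, 0.01], (n, 1))
for _ in range(100):
    pos, vel = dem_step(pos, vel, u_fluid, dt=1e-4)
```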
CFD-DEM model
[ "Physics" ]
284
[ "Computational physics stubs", "Computational physics" ]
7,721,911
https://en.wikipedia.org/wiki/Mestranol
Mestranol, sold under the brand names Enovid, Norinyl, and Ortho-Novum among others, is an estrogen medication which has been used in birth control pills, menopausal hormone therapy, and the treatment of menstrual disorders. It is formulated in combination with a progestin and is not available alone. It is taken by mouth. Side effects of mestranol include nausea, breast tension, edema, and breakthrough bleeding among others. It is an estrogen, or an agonist of the estrogen receptors, the biological target of estrogens like estradiol. Mestranol is a prodrug of ethinylestradiol in the body. Mestranol was discovered in 1956 and was introduced for medical use in 1957. It was the estrogen component in the first birth control pill. In 1969, mestranol was replaced by ethinylestradiol in most birth control pills, although mestranol continues to be used in a few birth control pills even today. Mestranol remains available only in a few countries, including the United States, United Kingdom, Japan, and Chile. Medical uses Mestranol was employed as the estrogen component in many of the first oral contraceptives, such as mestranol/noretynodrel (brand name Enovid) and mestranol/norethisterone (brand names Ortho-Novum, Norinyl), and is still in use today. In addition to its use as an oral contraceptive, mestranol has been used as a component of menopausal hormone therapy for the treatment of menopausal symptoms. Side effects Pharmacology Mestranol is a biologically inactive prodrug of ethinylestradiol to which it is demethylated in the liver (via O-dealkylation) with a conversion efficiency of 70% (50 μg of mestranol is pharmacokinetically bioequivalent to 35 μg of ethinylestradiol). It has been found to possess 0.1 to 2.3% of the relative binding affinity of estradiol (100%) for the estrogen receptor, compared to 75 to 190% for ethinylestradiol. The elimination half-life of mestranol has been reported to be 50 minutes. The elimination half-life of the active form of mestranol, ethinylestradiol, is 7 to 36 hours. The effective ovulation-inhibiting dosage of mestranol has been studied in women. It has been reported to be about 98% effective at inhibiting ovulation at a dosage of 75 or 80 μg/day. In another study, the ovulation rate was 15.4% at 50 μg/day, 5.7% at 80 μg/day, and 1.1% at 100 μg/day. Chemistry Mestranol, also known as ethinylestradiol 3-methyl ether (EEME) or as 17α-ethynyl-3-methoxyestra-1,3,5(10)-trien-17β-ol, is a synthetic estrane steroid and a derivative of estradiol. It is specifically a derivative of ethinylestradiol (17α-ethynylestradiol) with a methyl ether at the C3 position. History In April 1956, noretynodrel was investigated in Puerto Rico in the first large-scale clinical trial of a progestogen as an oral contraceptive. The trial was conducted in Puerto Rico due to the high birth rate in the country and concerns of moral censure in the United States. It was discovered early in the study that the initial chemical syntheses of noretynodrel had been contaminated with small amounts (1–2%) of the 3-methyl ether of ethinylestradiol (noretynodrel having been synthesized from ethinylestradiol). When this impurity was removed, higher rates of breakthrough bleeding occurred. As a result, that same year (1956), the impurity was serendipitously identified as a very potent synthetic estrogen (and eventually as a prodrug of ethinylestradiol), given the name mestranol, and added back to the formulation. This resulted in Enovid by G. D. 
Searle & Company, the first oral contraceptive and a combination of 9.85 mg noretynodrel and 150 μg mestranol per pill. Around 1969, mestranol was replaced by ethinylestradiol in most combined oral contraceptives due to widespread panic about the recently uncovered increased risk of venous thromboembolism with estrogen-containing oral contraceptives. The rationale was that ethinylestradiol was approximately twice as potent by weight as mestranol and hence that the dose could be halved, which it was thought might result in a lower incidence of venous thromboembolism. Whether this actually did result in a lower incidence of venous thromboembolism has never been assessed. Society and culture Generic names Mestranol is the generic name of the drug and its , , , , , and , while mestranolo is its . Brand names Mestranol has been marketed under a variety of brand names, mostly or exclusively in combination with progestins, including Devocin, Enavid, Enovid, Femigen, Mestranol, Norbiogest, Ortho-Novin, Ortho-Novum, Ovastol, and Tranel among others. Today, it continues to be sold in combination with progestins under brand names including Lutedion, Necon, Norinyl, Ortho-Novum, and Sophia. Availability Mestranol remains available only in the United States, the United Kingdom, Japan, and Chile. It is only marketed in combination with progestins, such as norethisterone. Research Mestranol has been studied as a male contraceptive and was found to be highly effective. At a dosage of 0.45 mg/day, it suppressed gonadotropin levels, reduced sperm count to zero within 4 to 6 weeks, and decreased libido, erectile function, and testicular size. Gynecomastia occurred in all of the men. These findings contributed to the conclusion that estrogens would be unacceptable as contraceptives for men. Environmental presence In 2021, mestranol was one of the 12 compounds identified in sludge samples taken from 12 wastewater treatment plants in California that were collectively associated with estrogenic activity in vitro. References Ethynyl compounds Estranes Estrogen ethers Hormonal contraception Prodrugs Synthetic estrogens
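As a minimal illustration of the bioequivalence arithmetic stated in the pharmacology section above, the snippet below applies the roughly 70% demethylation efficiency; the function name and its use are a hedged sketch of that arithmetic only, not a clinical dosing tool.

```python
# Mestranol is demethylated to ethinylestradiol with ~70% efficiency, so a
# mestranol dose is worth roughly 0.7 times its weight in ethinylestradiol.
# The function name and default value are illustrative assumptions.

def ee_equivalent_ug(mestranol_dose_ug, conversion_efficiency=0.70):
    """Approximate ethinylestradiol-equivalent dose in micrograms."""
    return mestranol_dose_ug * conversion_efficiency

print(ee_equivalent_ug(50))  # -> 35.0 ug, matching the 50 ug ~ 35 ug figure in the text
```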
Mestranol
[ "Chemistry" ]
1,470
[ "Chemicals in medicine", "Prodrugs" ]
7,724,083
https://en.wikipedia.org/wiki/Penicillium%20crustosum
Penicillium crustosum is a blue-green or blue-grey mold that can cause food spoilage, particularly of protein-rich foods such as meats and cheeses. It is identified by its complex biseriate conidiophores on which phialides produce asexual spores. It can grow at fairly low temperatures (it is a psychrophile) and in low-water-activity environments. Penicillium crustosum produces mycotoxins, most notoriously the neurotoxic penitrems A through G, of which penitrem A is the best known. Penitrem G has been shown to have insecticidal activity. In addition, P. crustosum can produce thomitrems A and E, and roquefortine C. Consumption of foods spoiled by this mold can cause transient neurological symptoms such as tremors. In dogs, symptoms can include vomiting, convulsions, tremors, ataxia, and tachycardia. References crustosum Fungi described in 1930 Taxa named by Charles Thom Fungus species
Penicillium crustosum
[ "Biology" ]
237
[ "Fungi", "Fungus species" ]
7,725,171
https://en.wikipedia.org/wiki/RUNX2
Runt-related transcription factor 2 (RUNX2), also known as core-binding factor subunit alpha-1 (CBF-alpha-1), is a protein that in humans is encoded by the RUNX2 gene. RUNX2 is a key transcription factor associated with osteoblast differentiation. It has also been suggested that Runx2 plays a cell proliferation regulatory role in cell cycle entry and exit in osteoblasts, as well as endothelial cells. Runx2 suppresses pre-osteoblast proliferation by affecting cell cycle progression in the G1 phase. In osteoblasts, the level of Runx2 is highest in the G1 phase and lowest in the S, G2, and M phases. The comprehensive cell cycle regulatory mechanisms in which Runx2 may act are still unknown, although it is generally accepted that the varying activity and levels of Runx2 throughout the cell cycle contribute to cell cycle entry and exit, as well as cell cycle progression. These functions are especially important when discussing bone cancer, particularly osteosarcoma development, which can be attributed to aberrant cell proliferation control. Function Osteoblast differentiation This protein is a member of the RUNX family of transcription factors and has a Runt DNA-binding domain. It is essential for osteoblastic differentiation and skeletal morphogenesis. It acts as a scaffold for nucleic acids and regulatory factors involved in skeletal gene expression. The protein can bind DNA either as a monomer or, with more affinity, as a subunit of a heterodimeric complex. Transcript variants of the gene that encode different protein isoforms result from the use of alternate promoters as well as alternate splicing. The cellular dynamics of Runx2 protein are also important for proper osteoblast differentiation. Runx2 protein is detected in preosteoblasts and the expression is upregulated in immature osteoblasts and downregulated in mature osteoblasts. It is the first transcription factor required for determination of osteoblast commitment, followed by Sp7 and Wnt-signaling. Runx2 is responsible for inducing the differentiation of multipotent mesenchymal cells into immature osteoblasts, as well as activating expression of several key downstream proteins that maintain osteoblast differentiation and bone matrix genes. Knock-out of the DNA-binding activity results in inhibition of osteoblastic differentiation. Because of this, Runx2 is often referred to as the master regulator of bone. Cell cycle regulation In addition to being the master regulator of osteoblast differentiation, Runx2 has also been shown to play several roles in cell cycle regulation. This is due, in part, to the fact that Runx2 interacts with many cellular proliferation genes on a transcription level, such as c-Myb and C/EBP, as well as p53. These functions are critical for osteoblast proliferation and maintenance. This is often controlled via oscillating levels of Runx2 throughout the cell cycle, due to regulated degradation and transcriptional activity. Oscillating levels of Runx2 within the cell contribute to cell cycle dynamics. In the MC3T3-E1 osteoblast cell line, Runx2 levels are at a maximum during G1 and at a minimum during G2, S, and mitosis. In addition, the oscillations in Runx2 contribute to G1-related anti-proliferative function. It has also been proposed that decreasing levels of Runx2 leads to cell cycle exit for proliferating and differentiating osteoblasts, and that Runx2 plays a role in mediating the final stages of osteoblast differentiation via this mechanism. Current research posits that the levels of Runx2 serve various functions. 
In addition, Runx2 has been shown to interact with several kinases that help facilitate cell-cycle-dependent dynamics via direct protein phosphorylation. Furthermore, Runx2 controls the gene expression of cyclin D2, D3, and the CDK inhibitor p21(cip1) in hematopoietic cells. It has been shown that on a molecular level, Runx2 associates with the cdc2 partner cyclin B1 during mitosis. The phosphorylation state of Runx2 also mediates its DNA-binding activity. The Runx2 DNA-binding activity is correlated with cellular proliferation, which suggests Runx2 phosphorylation may also be related to Runx2-mediated cellular proliferation and cell cycle control. To support this, it has been noted that Runx2 is phosphorylated at Ser451 by cdc2 kinase, which facilitates cell cycle progression through the regulation of G2 and M phases. Pathology Cleidocranial dysplasia Mutations in Runx2 are associated with the disease cleidocranial dysostosis. One study proposes that this phenotype arises partly due to Runx2 dosage insufficiency. Because Runx2 promotes exit from the cell cycle, insufficient amounts of Runx2 are related to the increased proliferation of osteoblasts observed in patients with cleidocranial dysostosis. Osteosarcoma Variants of Runx2 have been associated with the osteosarcoma phenotype. Current research suggests that this is partly due to the role of Runx2 in regulating the cell cycle. Runx2 plays a role as a tumor suppressor of osteoblasts by halting cell cycle progression at G1. The oscillations of Runx2 levels in the osteosarcoma ROS and SaOS cell lines are aberrant compared with those in the normal osteoblast cell line MC3T3-E1, suggesting that deregulation of Runx2 levels may contribute to abnormal cell proliferation through an inability to exit the cell cycle. Molecularly, it has been proposed that proteasome inhibition by MG132 can stabilize Runx2 protein levels in late G1 and S in MC3T3 cells, but not in osteosarcoma cells, which consequently leads to a cancerous phenotype. Regulation and co-factors Due to its role as a master transcription factor of osteoblast differentiation, the regulation of Runx2 is intricately connected to other processes within the cell. Twist, Msh homeobox 2 (Msx2), and promyelocytic leukemia zinc-finger protein (PLZF) act upstream of Runx2. Osterix (Osx) acts downstream of Runx2 and serves as a marker for normal osteoblast differentiation. Zinc finger protein 521 (ZFP521) and activating transcription factor 4 (ATF4) are cofactors of Runx2. Binding of the transcriptional coregulator WWTR1 (TAZ) to Runx2 promotes transcription. Furthermore, in proliferating chondrocytes, Runx2 is inhibited by CyclinD1/CDK4 as part of the cell cycle. Interactions RUNX2 has been shown to interact with AR, ER-α, C-Fos, c-Jun, HDAC3, MYST4, SMAD1, SMAD3, STUB1, and SMOC1. miR-133 and CyclinD1/CDK4 directly inhibit Runx2. See also RUNX1 RUNX3 References Further reading External links GeneReviews/NCBI/NIH/UW entry on Cleidocranial Dysplasia Transcription factors
RUNX2
[ "Chemistry", "Biology" ]
1,614
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
7,725,457
https://en.wikipedia.org/wiki/Tin%20ceiling
A tin ceiling is an architectural element, consisting of a ceiling finished with tinplate panels into which designs have been pressed, that was very popular in Victorian buildings in North America in the late 19th and early 20th century. They were also popular in Australia where they were commonly known as pressed metal ceilings or Wunderlich ceilings (after the main Australian manufacturer Wunderlich). They were also used in South Africa. History Tin ceilings were introduced to North America as an affordable alternative to the exquisite plasterwork used in European homes. They gained popularity in the late 1800s as Americans sought sophisticated interior design. Durable and lightweight, tin ceilings were appealing to home and business owners alike as a functionally attractive design element that was readily available. Important critics such as John Ruskin, George Gilbert Scott, Charles Eastlake and William Morris debated the implications of faux materials. These critics believed it was morally wrong and deceptive to imitate another material, and blamed the "art of shamming", as opposed to honesty in architecture, for the degradation of society. Nevertheless, tin ceilings lasted longer than plaster ones and were easier to clean. They encapsulated ideas of democracy, making such decoration available to the middle-class majority, who supported machine production. Decorative metal ceilings were first made of corrugated iron sheets, appearing in the United States by the early 1870s. It was during the late Victorian era that thin rolled tin-plate was being mass-produced. Tinplate was originally made by dipping iron in molten tin in order to prevent rust. Later, steel replaced iron as the more cost-effective solution. Tinplate was not the only sheet metal used to make stamped ceilings. Copper, lead (known as terneplate) and zinc were other common architectural metals in the industry. Between 1890 and 1930, approximately forty-five companies in the United States marketed metal ceilings; most were in Ohio, Pennsylvania, and New York, located along railroad lines that served as the main routes for delivering the pressed metal products directly to contractors. The Wheeling Corrugating Company out of Wheeling, West Virginia, became the leading tin ceiling manufacturer in the late 1800s. At that time, Wheeling Corrugating was a large steel mill that also made products from its steel sheets, such as roofing and siding. Sheets of tin were stamped one at a time using rope drop hammers and cast iron molds. Using this method of production, metal was sandwiched between two interlocking tools. The top tool, or "ram," was lifted up by a rope or chain, then dropped down onto the bottom die, smashing into the metal that was underneath and permanently embedding intricate patterns into the tin. Someone who saw the merit of this modern machine for its artistic potential was Frank Lloyd Wright. In his articles "The Art and Craft of the Machine" and "In the Cause of Architecture" (the latter a series published by Architectural Record), Wright elaborates on his modern theory of science and art and the role of the machine in the future of art. Tin ceilings were traditionally painted white to give the appearance of hand-carved or molded plaster. They were incorporated into residential living rooms and parlors as well as schools, hospitals and commercial businesses where painted tin was often used as wainscoting. 
In the 1930s, tin ceilings began to lose their popularity, and steel later became scarce because of the effort to collect scrap metal during WWII. Many sheet metal companies began making other products in order to stay in business. In the 21st century, some renewed interest has been shown in tin ceilings. The renewed interest has stemmed from businesses undertaking renovations and from nostalgia for the turn of the century. The W.F. Norman Corporation still produces original tin ceilings and ornaments using the same rope drop hammers it first used in 1898. Several other companies offer conventional tin ceilings as well as panels made to fit into a drop-ceiling grid. Restoration Tin ceilings were built to last, and in the absence of prolonged moisture damage leading to corrosion, they usually did; however, the wear and tear over the hundred years since the heyday of tin has led to a burgeoning restoration industry. Magazines such as The Old-House Journal were created to offer articles about restoration, repair and installation practices for historic preservation of tin ceilings. Environmental hazards from the lead paint used on turn-of-the-century tin ceilings mean that this is a job for experts in the field. Often restoration is achieved by simply stripping old paint, treating the metal with a protective base coat, patching minor damaged areas, and repainting. In some cases, where small sections of a ceiling have been damaged, partial restoration is needed. Panels can be easily replaced through companies that still manufacture original design components. If, however, a ceiling requires a historic pattern that is no longer in production, good quality panels from the existing ceiling may be used to create a mold and new customized tin can be pressed. If full restoration is needed, meaning no part of the existing ceiling remains structurally sound, a professional can help design a new ceiling appropriate for the period and structure using existing molds or creating reproductions based on photographic evidence or architectural drawings. This latter method can be extremely expensive and is rarely cost-effective, due to the cost of making a custom mold for the panel and usually also for the metal trim used with the original project. More detailed information for repair and replacement of decorative metal ceilings can be found in the National Park Service Technical Preservation Services. Modern adaptation Several companies now offer hand-painted finishes for metalwork, as well as a more permanent look that can be achieved with powder-coated finishes. For the low end of the market, imitation panels are pressed from plastic or aluminum. Tin is now fashionably used for art work, back splashes, cabinet faces, wainscoting and much more. For over 100 years the tin panel was made with nail rails around the outside of the panel, designed to overlap each other. Panels were nailed into wood furring strips, which were prevalent prior to the invention of plywood. Today, nail-up panels can easily be brad-nailed or hand-nailed into plywood without the need for the original furring strips. There is also a patented interlocking tin panel that will screw directly into existing drywall/popcorn/plaster ceilings, without the need for extensive plywood installation. Tin panels today are made in and sizes for easier handling and one-person installation. 
Today, most tin ceiling manufacturers actually use recycled blackplate steel in a thickness of only . There are some manufacturers who also use actual tin plated steel, which is simply the blackplate steel with a thin coating of bright tin plate adhered to the base metal. Other manufacturers utilize aluminum, as it is rustproof and will last a lifetime. This finish is also an option with dropped ceilings. References Architectural elements Building materials Interior design
Tin ceiling
[ "Physics", "Technology", "Engineering" ]
1,391
[ "Building engineering", "Architecture", "Construction", "Materials", "Architectural elements", "Components", "Matter", "Building materials" ]
7,726,829
https://en.wikipedia.org/wiki/Oxy-fuel%20combustion%20process
Oxy-fuel combustion is the process of burning a fuel using pure oxygen, or a mixture of oxygen and recirculated flue gas, instead of air. Since the nitrogen component of air is not heated, fuel consumption is reduced, and higher flame temperatures are possible. Historically, the primary use of oxy-fuel combustion has been in welding and cutting of metals, especially steel, since oxy-fuel allows for higher flame temperatures than can be achieved with an air-fuel flame. It has also received considerable attention in recent decades as a potential carbon capture and storage technology. Research is currently being done on firing fossil fuel power plants with an oxygen-enriched gas mix instead of air. Almost all of the nitrogen is removed from input air, yielding a stream that is approximately 95% oxygen. Firing with pure oxygen would result in too high a flame temperature, so the mixture is diluted with recycled flue gas, or staged combustion is used. The recycled flue gas can also be used to carry fuel into the boiler and ensure adequate convective heat transfer to all boiler areas. Oxy-fuel combustion produces approximately 75% less flue gas than air-fired combustion and produces exhaust consisting primarily of CO2 and H2O. Economy and efficiency The justification for using oxy-fuel is to produce a CO2-rich flue gas ready for sequestration. Oxy-fuel combustion has significant advantages over traditional air-fired plants. Among these are: The mass and volume of the flue gas are reduced by approximately 75%. Because the flue gas volume is reduced, less heat is lost in the flue gas. The size of the flue gas treatment equipment can be reduced by 75%. The flue gas is primarily CO2, suitable for sequestration. The concentration of pollutants in the flue gas is higher, making separation easier. Most of the flue gases are condensable; this makes compression separation possible. Heat of condensation can be captured and reused rather than lost in the flue gas. Because nitrogen from air is absent, nitrogen oxide production is greatly reduced. If the fuel contains sulfur, sulfuric acid can possibly be recovered instead of being released as a dangerous environmental pollutant or "lost" in flue gas desulfurization. Economically speaking, this method costs more than a traditional air-fired plant. The main problem has been separating oxygen from the air. This process requires a large amount of energy; nearly 15% of the output of a coal-fired power station can be consumed by it. However, chemical looping combustion, a new technology which is not yet practical, could be used to reduce this cost. In chemical looping combustion, the oxygen required to burn the coal is produced internally by oxidation and reduction reactions, as opposed to using more expensive methods of generating oxygen by separating it from air. At present, in the absence of any need to reduce CO2 emissions, oxy-fuel is not competitive; however, it is a viable alternative to removing CO2 from the flue gas of a conventional air-fired fossil fuel plant. An oxygen concentrator might also help, as it simply removes nitrogen. In industries other than power generation, oxy-fuel combustion can be competitive due to higher sensible heat availability. Oxy-fuel combustion is common in various aspects of metal production. 
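The roughly 75% reduction in flue gas volume quoted above follows from a simple mole balance, sketched below for methane as the fuel; the stoichiometric firing and 21%/79% dry-air composition are simplifying assumptions made only for this illustration.

```python
# Back-of-the-envelope mole balance for CH4 + 2 O2 -> CO2 + 2 H2O, comparing
# air-fired and oxy-fired flue gas per mole of methane burned. Stoichiometric
# firing and dry air (21% O2 / 79% N2) are simplifying assumptions.

O2_PER_CH4 = 2.0
N2_PER_O2_IN_AIR = 79.0 / 21.0          # mol N2 carried along per mol O2 when firing with air

flue_air = 1.0 + 2.0 + O2_PER_CH4 * N2_PER_O2_IN_AIR   # CO2 + H2O + inert N2
flue_oxy = 1.0 + 2.0                                     # CO2 + H2O only

reduction = 1.0 - flue_oxy / flue_air
print(f"Flue gas per mol CH4: air-fired {flue_air:.2f} mol, oxy-fired {flue_oxy:.2f} mol")
print(f"Reduction: {reduction:.0%}")     # roughly 70-75%, consistent with the text
```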
The glass industry has been converting to oxy-fuel since the early 1990s because glass furnaces require a temperature of approximately 1500 °C, which is not economically attainable at adiabatic flame temperatures for air-fuel combustion unless heat is regenerated between the flue stream and the incoming air stream. Developed in the mid-19th century, glass furnace regenerators are large and expensive high-temperature brick ducts filled with brick arranged in a checkerboard pattern to capture heat as flue gas exits the furnace. When the flue duct is thoroughly heated, air flow is reversed and the flue duct becomes the air inlet, releasing its heat into the incoming air, and allowing for higher furnace temperatures than can be attained with air-fuel only. Two sets of regenerative flue ducts allowed for the air flow to be reversed at regular intervals, and thus maintain a high temperature in the incoming air. By allowing new furnaces to be built without the expense of regenerators, and especially with the added benefit of nitrogen oxide reduction, which allows glass plants to meet emission restrictions, oxy-fuel is cost-effective without the need to reduce CO2 emissions. Oxy-fuel combustion also reduces CO2 release at the glass plant location, although this may be offset by CO2 production due to electric power generation which is necessary to produce oxygen for the combustion process. Oxy-fuel combustion may also be cost-effective in the incineration of low-BTU-value hazardous waste fuels. It is often combined with staged combustion for nitrogen oxide reduction, since pure oxygen can stabilize combustion characteristics of a flame. Pilot plants There are pilot plants undergoing initial proof-of-concept testing to evaluate the technologies for scaling up to commercial plants, including Callide A Power Station in Queensland, Australia Schwarze Pumpe Power Station in Spremberg, Germany CIUDEN in Cubillos del Sil, Spain NET Power Demonstration Facility White Rose plant One case study of oxy-fuel combustion is the proposed White Rose plant in North Yorkshire, United Kingdom. The planned project was an oxy-fuel power plant coupled with air separation to capture two million tons of carbon dioxide per year. The carbon dioxide would then be delivered by pipeline to be sequestered in a saline aquifer beneath the North Sea. However, in late 2015 and early 2016, following withdrawal of funding by the Drax Group and the U.K. government, construction was halted. The unforeseen loss of the funding from the UK government's CCS Commercialisation Programme, along with decreased subsidies for renewable energy, left the White Rose Plant with insufficient funds to continue development. Environmental impact One of the major environmental impacts of burning fossil fuels is the release of CO2, which contributes to climate change. Because oxyfuel combustion results in flue gas that already has a high concentration of CO2, it is easier to purify and store the CO2 rather than releasing it to the atmosphere. Many fossil fuels, such as coal and oil shale, produce ash as a result of combustion. This ash also needs to be disposed of, which may impact the environment. So far studies indicate that, in general, oxyfuel combustion does not significantly affect the composition of ash produced. Measurements have shown similar mineral and heavy metal concentrations regardless of whether an air or oxyfuel environment was used. 
However, one notable exception is that oxyfuel ashes often have lower concentrations of calcium oxide or calcium hydroxide (free lime). Free lime forms when carbonate minerals in fuels like coal and oil shale decompose at the high temperatures occurring during combustion (calcination). Calcination is an equilibrium reaction, and a higher partial pressure of CO2 shifts the equilibrium back toward the carbonate minerals. Free lime is reactive and can potentially affect the environment, for instance by increasing the alkalinity of the ash. Because oxyfuel combustion takes place in a CO2-rich atmosphere, decomposition is reduced and the ash generally contains less free lime. Flue gas desulfurization is usually employed to reduce the acidity of flue gases, whose sulfur oxides would otherwise react with atmospheric moisture to form acid rain. Besides sulfur and its oxides, another potential acid rain component is formed from nitrogen oxides interacting with water; eliminating nitrogen from combustion reduces this factor altogether. See also Air separation Cryogenic energy storage Premixed flame Chemical looping combustion Carbon capture and storage References Combustion Fuel technology Carbon capture and storage
Oxy-fuel combustion process
[ "Chemistry", "Engineering" ]
1,597
[ "Geoengineering", "Combustion", "Carbon capture and storage" ]
7,728,392
https://en.wikipedia.org/wiki/Entropy%20%28order%20and%20disorder%29
In thermodynamics, entropy is often associated with the amount of order or disorder in a thermodynamic system. This stems from Rudolf Clausius' 1862 assertion that any thermodynamic process always "admits to being reduced [reduction] to the alteration in some way or another of the arrangement of the constituent parts of the working body" and that internal work associated with these alterations is quantified energetically by a measure of "entropy" change, according to the following differential expression: dS = δQ/T, where δQ = motional energy ("heat") that is transferred reversibly to the system from the surroundings and T = the absolute temperature at which the transfer occurs. In the years to follow, Ludwig Boltzmann translated these 'alterations of arrangement' into a probabilistic view of order and disorder in gas-phase molecular systems. In the context of entropy, "perfect internal disorder" has often been regarded as describing thermodynamic equilibrium, but since the thermodynamic concept is so far from everyday thinking, the use of the term in physics and chemistry has caused much confusion and misunderstanding. In recent years, to interpret the concept of entropy, by further describing the 'alterations of arrangement', there has been a shift away from the words 'order' and 'disorder', to words such as 'spread' and 'dispersal'. History This "molecular ordering" entropy perspective traces its origins to molecular movement interpretations developed by Rudolf Clausius in the 1850s, particularly with his 1862 visual conception of molecular disgregation. Similarly, in 1859, after reading a paper on the diffusion of molecules by Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. In 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and was so inspired by it that he spent much of his long and distinguished life developing the subject further. Later, Boltzmann, in efforts to develop a kinetic theory for the behavior of a gas, applied the laws of probability to Maxwell's and Clausius' molecular interpretation of entropy so as to begin to interpret entropy in terms of order and disorder. Similarly, in 1882 Hermann von Helmholtz used the word "Unordnung" (disorder) to describe entropy. Overview To highlight the fact that order and disorder are commonly understood to be measured in terms of entropy, below are current science encyclopedia and science dictionary definitions of entropy: A measure of the unavailability of a system's energy to do work; also a measure of disorder; the higher the entropy the greater the disorder. A measure of disorder; the higher the entropy the greater the disorder. In thermodynamics, a parameter representing the state of disorder of a system at the atomic, ionic, or molecular level; the greater the disorder the higher the entropy. A measure of disorder in the universe or of the unavailability of the energy in a system to do work. Entropy and disorder also have associations with equilibrium. Technically, entropy, from this perspective, is defined as a thermodynamic property which serves as a measure of how close a system is to equilibrium—that is, to perfect internal disorder. Likewise, the value of the entropy of a distribution of atoms and molecules in a thermodynamic system is a measure of the disorder in the arrangements of its particles. 
In a stretched-out piece of rubber, for example, the arrangement of the molecules of its structure has an "ordered" distribution and has zero entropy, while the "disordered" kinky distribution of the atoms and molecules in the rubber in the non-stretched state has positive entropy. Similarly, in a gas, the order is perfect and the measure of entropy of the system has its lowest value when all the molecules are in one place, whereas when more points are occupied the gas is all the more disorderly and the measure of the entropy of the system has its largest value. In systems ecology, as another example, the entropy of a collection of items comprising a system is defined as a measure of their disorder or equivalently the relative likelihood of the instantaneous configuration of the items. Moreover, according to theoretical ecologist and chemical engineer Robert Ulanowicz, "that entropy might provide a quantification of the heretofore subjective notion of disorder has spawned innumerable scientific and philosophical narratives." In particular, many biologists have taken to speaking in terms of the entropy of an organism, or about its antonym negentropy, as a measure of the structural order within an organism. The mathematical basis with respect to the association entropy has with order and disorder began, essentially, with the famous Boltzmann formula, S = kB ln W, which relates entropy S to the number of possible states W in which a system can be found. As an example, consider a box that is divided into two sections. What is the probability that a certain number, or all of the particles, will be found in one section versus the other when the particles are randomly allocated to different places within the box? If you only have one particle, then that system of one particle can subsist in two states, one side of the box versus the other. If you have more than one particle, or define states as being further locational subdivisions of the box, the entropy is larger because the number of states is greater. The relationship between entropy, order, and disorder in the Boltzmann equation is so clear among physicists that according to the views of thermodynamic ecologists Sven Jorgensen and Yuri Svirezhev, "it is obvious that entropy is a measure of order or, most likely, disorder in the system." In this direction, the second law of thermodynamics, as famously enunciated by Rudolf Clausius in 1865, states that "the entropy of the universe tends to a maximum". Thus, if entropy is associated with disorder and if the entropy of the universe is headed towards maximal entropy, then many are often puzzled as to the nature of the "ordering" process and operation of evolution in relation to Clausius' most famous version of the second law, which states that the universe is headed towards maximal "disorder". In the 2003 book SYNC – the Emerging Science of Spontaneous Order by Steven Strogatz, for example, we find "Scientists have often been baffled by the existence of spontaneous order in the universe. The laws of thermodynamics seem to dictate the opposite, that nature should inexorably degenerate toward a state of greater disorder, greater entropy. Yet all around us we see magnificent structures—galaxies, cells, ecosystems, human beings—that have all somehow managed to assemble themselves." The common argument used to explain this is that, locally, entropy can be lowered by external action, e.g. 
solar heating action, and that this applies to machines, such as a refrigerator, where the entropy in the cold chamber is being reduced, to growing crystals, and to living organisms. This local increase in order is, however, only possible at the expense of an entropy increase in the surroundings; here more disorder must be created. The condition for this statement is that living systems are open systems, in which heat, mass, and/or work may transfer into or out of the system. Unlike temperature, the putative entropy of a living system would drastically change if the organism were thermodynamically isolated. If an organism were in this type of "isolated" situation, its entropy would increase markedly as the once-living components of the organism decayed to an unrecognizable mass. Phase change Owing to these early developments, the typical example of entropy change ΔS is that associated with phase change. Solids, for example, which are typically ordered on the molecular scale, usually have smaller entropy than liquids, and liquids have smaller entropy than gases; colder gases have smaller entropy than hotter gases. Moreover, according to the third law of thermodynamics, at absolute zero temperature, crystalline structures are approximated to have perfect "order" and zero entropy. This correlation occurs because the numbers of different microscopic quantum energy states available to an ordered system are usually much smaller than the number of states available to a system that appears to be disordered. From his famous 1896 Lectures on Gas Theory, Boltzmann diagrams the structure of a solid body by postulating that each molecule in the body has a "rest position". According to Boltzmann, if it approaches a neighbor molecule it is repelled by it, but if it moves farther away there is an attraction. This, of course, was a revolutionary perspective in its time; many, during these years, did not believe in the existence of either atoms or molecules (see: history of the molecule). According to these early views, and others such as those developed by William Thomson, if energy in the form of heat is added to a solid, so as to make it into a liquid or a gas, a common depiction is that the ordering of the atoms and molecules becomes more random and chaotic with an increase in temperature: Thus, according to Boltzmann, owing to increases in thermal motion, whenever heat is added to a working substance, the rest positions of molecules will be pushed apart, the body will expand, and this will create more molar-disordered distributions and arrangements of molecules. These disordered arrangements, subsequently, correlate, via probability arguments, to an increase in the measure of entropy. Entropy-driven order Entropy has been historically, e.g. by Clausius and Helmholtz, associated with disorder. However, in common speech, order is used to describe organization, structural regularity, or form, like that found in a crystal compared with a gas. This commonplace notion of order is described quantitatively by Landau theory. In Landau theory, the development of order in the everyday sense coincides with the change in the value of a mathematical quantity, a so-called order parameter. An example of an order parameter for crystallization is "bond orientational order" describing the development of preferred directions (the crystallographic axes) in space. For many systems, phases with more structural (e.g. 
crystalline) order exhibit less entropy than fluid phases under the same thermodynamic conditions. In these cases, labeling phases as ordered or disordered according to the relative amount of entropy (per the Clausius/Helmholtz notion of order/disorder) or via the existence of structural regularity (per the Landau notion of order/disorder) produces matching labels. However, there is a broad class of systems that manifest entropy-driven order, in which phases with organization or structural regularity, e.g. crystals, have higher entropy than structurally disordered (e.g. fluid) phases under the same thermodynamic conditions. In these systems phases that would be labeled as disordered by virtue of their higher entropy (in the sense of Clausius or Helmholtz) are ordered in both the everyday sense and in Landau theory. Under suitable thermodynamic conditions, entropy has been predicted or discovered to induce systems to form ordered liquid-crystals, crystals, and quasicrystals. In many systems, directional entropic forces drive this behavior. More recently, it has been shown that it is possible to precisely engineer particles for target ordered structures. Adiabatic demagnetization In the quest for ultra-cold temperatures, a temperature lowering technique called adiabatic demagnetization is used, where atomic entropy considerations are utilized which can be described in order-disorder terms. In this process, a sample of solid such as chrome-alum salt, whose molecules are equivalent to tiny magnets, is inside an insulated enclosure cooled to a low temperature, typically 2 or 4 kelvins, with a strong magnetic field being applied to the container using a powerful external magnet, so that the tiny molecular magnets are aligned, forming a well-ordered "initial" state at that low temperature. This magnetic alignment means that the magnetic energy of each molecule is minimal. The external magnetic field is then reduced, a removal that is considered to be closely reversible. Following this reduction, the atomic magnets then assume random less-ordered orientations, owing to thermal agitations, in the "final" state: The "disorder" and hence the entropy associated with the change in the atomic alignments has clearly increased. In terms of energy flow, the movement from a magnetically aligned state requires energy from the thermal motion of the molecules, converting thermal energy into magnetic energy. Yet, according to the second law of thermodynamics, because no heat can enter or leave the container, due to its adiabatic insulation, the system should exhibit no change in entropy, i.e. ΔS = 0. The increase in disorder, however, associated with the randomizing directions of the atomic magnets represents an entropy increase. To compensate for this, the disorder (entropy) associated with the temperature of the specimen must decrease by the same amount. The temperature thus falls as a result of this process of thermal energy being converted into magnetic energy. If the magnetic field is then increased, the temperature rises and the magnetic salt has to be cooled again using a cold material such as liquid helium. Difficulties with the term "disorder" In recent years the long-standing use of the term "disorder" to discuss entropy has met with some criticism. Critics of the terminology state that entropy is not a measure of 'disorder' or 'chaos', but rather a measure of energy's diffusion or dispersal to more microstates. 
Shannon's use of the term 'entropy' in information theory refers to the most compressed, or least dispersed, amount of code needed to encompass the content of a signal. See also Entropy Entropy production Entropy rate History of entropy Entropy of mixing Entropy (information theory) Entropy (computing) Entropy (energy dispersal) Second law of thermodynamics Entropy (statistical thermodynamics) Entropy (classical thermodynamics) References External links Lambert, F. L. Entropy Sites — A Guide Lambert, F. L. Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms – Examples of Entropy Increase? Nonsense! Journal of Chemical Education Thermodynamic entropy State functions
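As a worked illustration of the two-section box example in the overview above, the sketch below counts microstates with the binomial coefficient and converts them to an entropy via the Boltzmann formula S = kB ln W; the assumption of distinguishable, non-interacting particles is made purely for the counting argument.

```python
from math import comb, log

K_B = 1.380649e-23   # Boltzmann constant, J/K

def boltzmann_entropy(num_states):
    """S = k_B ln W for W equally probable microstates."""
    return K_B * log(num_states)

# Two-section box: N distinguishable particles, each independently on the left
# or right side, so W = C(N, n_left) microstates for a given left/right split.
# Non-interacting, distinguishable particles are assumed for counting only.
N = 10
w_all_left = comb(N, N)         # every particle in one section: 1 microstate
w_even_split = comb(N, N // 2)  # half on each side: 252 microstates for N = 10

print(boltzmann_entropy(w_all_left))    # 0.0 -- the most "ordered" arrangement
print(boltzmann_entropy(w_even_split))  # larger entropy for the spread-out arrangement
```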
Entropy (order and disorder)
[ "Physics", "Chemistry" ]
2,937
[ "State functions", "Thermodynamic properties", "Physical quantities", "Thermodynamic entropy", "Entropy", "Statistical mechanics" ]
7,729,010
https://en.wikipedia.org/wiki/McLafferty%20rearrangement
The McLafferty rearrangement is a reaction observed in mass spectrometry during the fragmentation or dissociation of organic molecules. It is sometimes found that a molecule containing a keto-group undergoes β-cleavage, with the gain of the γ-hydrogen atom, as first reported by Anthony Nicholson working in the Division of Chemical Physics at the CSIRO in Australia. This rearrangement may take place by a radical or ionic mechanism. The reaction A description of the reaction was later published by the American chemist Fred McLafferty in 1959, leading to his name being associated with the process. See also The Type II Norrish reaction is the equivalent photochemical process α-cleavage References Further reading External links Fred McLafferty Faculty Webpage at Cornell University Tandem mass spectrometry Rearrangement reactions Name reactions
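As a purely illustrative piece of arithmetic (2-hexanone is a textbook substrate and is not mentioned in the article above), the sketch below computes the nominal mass of the fragment expected when the molecular ion loses an alkene through the rearrangement.

```python
# Nominal-mass arithmetic for a commonly cited McLafferty substrate, 2-hexanone.
# The gamma-hydrogen migrates to the carbonyl and beta-cleavage expels a neutral
# alkene, so the observed fragment is the molecular ion minus that alkene.
# The specific compound is an illustrative assumption, not from the article.

NOMINAL = {"C": 12, "H": 1, "O": 16}

def nominal_mass(formula):
    """Nominal mass from a {element: count} dictionary."""
    return sum(NOMINAL[el] * n for el, n in formula.items())

molecular_ion = nominal_mass({"C": 6, "H": 12, "O": 1})   # 2-hexanone, m/z 100
neutral_loss = nominal_mass({"C": 3, "H": 6})              # propene expelled by beta-cleavage
print(molecular_ion - neutral_loss)                        # 58, the enol fragment ion
```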
McLafferty rearrangement
[ "Physics", "Chemistry" ]
240
[ "Spectrum (physical sciences)", "Organic reactions", "Name reactions", "Tandem mass spectrometry", "Mass spectrometry", "Rearrangement reactions" ]
7,729,301
https://en.wikipedia.org/wiki/Debye%E2%80%93H%C3%BCckel%20theory
The Debye–Hückel theory was proposed by Peter Debye and Erich Hückel as a theoretical explanation for departures from ideality in solutions of electrolytes and plasmas. It is a linearized Poisson–Boltzmann model, which assumes an extremely simplified model of the electrolyte solution but nevertheless gave accurate predictions of mean activity coefficients for ions in dilute solution. The Debye–Hückel equation provides a starting point for modern treatments of non-ideality of electrolyte solutions. Overview In the chemistry of electrolyte solutions, an ideal solution is a solution whose colligative properties are proportional to the concentration of the solute. Real solutions may show departures from this kind of ideality. In order to accommodate these effects in the thermodynamics of solutions, the concept of activity was introduced: the properties are then proportional to the activities of the ions. Activity, a, is proportional to concentration, c. The proportionality constant is known as an activity coefficient, γ. In an ideal electrolyte solution the activity coefficients for all the ions are equal to one. Ideality of an electrolyte solution can be achieved only in very dilute solutions. Non-ideality of more concentrated solutions arises principally (but not exclusively) because ions of opposite charge attract each other due to electrostatic forces, while ions of the same charge repel each other. In consequence ions are not randomly distributed throughout the solution, as they would be in an ideal solution. Activity coefficients of single ions cannot be measured experimentally because an electrolyte solution must contain both positively charged ions and negatively charged ions. Instead, a mean activity coefficient, γ±, is defined. For example, with the electrolyte NaCl, γ± = (γ+γ−)^1/2. In general, the mean activity coefficient of a fully dissociated electrolyte of formula AnBm is given by γ± = (γ+^n γ−^m)^1/(n+m). Activity coefficients are themselves functions of concentration as the amount of inter-ionic interaction increases as the concentration of the electrolyte increases. Debye and Hückel developed a theory with which single ion activity coefficients could be calculated. By calculating the mean activity coefficients from them, the theory could be tested against experimental data. It was found to give excellent agreement for "dilute" solutions. The model A description of Debye–Hückel theory includes a very detailed discussion of the assumptions and their limitations as well as the mathematical development and applications. A snapshot of a 2-dimensional section of an idealized electrolyte solution is shown in the picture. The ions are shown as spheres with unit electrical charge. The solvent (pale blue) is shown as a uniform medium, without structure. On average, each ion is surrounded more closely by ions of opposite charge than by ions of like charge. These concepts were developed into a quantitative theory involving ions of charge z1e+ and z2e−, where z can be any integer. The principal assumption is that departure from ideality is due to electrostatic interactions between ions, mediated by Coulomb's law: the force of interaction between two electric charges, separated by a distance r in a medium of relative permittivity εr, is given by F = z1z2e^2/(4πε0εr r^2). It is also assumed that The solute is completely dissociated; it is a strong electrolyte. Ions are spherical and are not polarized by the surrounding electric field. Solvation of ions is ignored except insofar as it determines the effective sizes of the ions. 
The solvent plays no role other than providing a medium of constant relative permittivity (dielectric constant). There is no electrostriction. Individual ions surrounding a "central" ion can be represented by a statistically averaged cloud of continuous charge density, with a minimum distance of closest approach. The last assumption means that each cation is surrounded by a spherically symmetric cloud of other ions. The cloud has a net negative charge. Similarly each anion is surrounded by a cloud with net positive charge. Mathematical development The deviation from ideality is taken to be a function of the potential energy resulting from the electrostatic interactions between ions and their surrounding clouds. To calculate this energy two steps are needed. The first step is to specify the electrostatic potential for ion j by means of Poisson's equation, ∇²ψ(r) = −ρ(r)/(ε0εr). ψ(r) is the total potential at a distance, r, from the central ion and ρ(r) is the averaged charge density of the surrounding cloud at that distance. To apply this formula it is essential that the cloud has spherical symmetry, that is, the charge density is a function only of distance from the central ion as this allows the Poisson equation to be cast in terms of spherical coordinates with no angular dependence. The second step is to calculate the charge density by means of a Boltzmann distribution, ni(r) = ni0 exp(−zieψ(r)/(kBT)), where kB is the Boltzmann constant and T is the temperature. This distribution also depends on the potential ψ(r) and this introduces a serious difficulty in terms of the superposition principle. Nevertheless, the two equations can be combined to produce the Poisson–Boltzmann equation. Solution of this equation is far from straightforward. Debye and Hückel expanded the exponential as a truncated Taylor series to first order. The zeroth order term vanishes because the solution is on average electrically neutral (so that Σ ni zi = 0), which leaves us with only the first order term. The result has the form of the Helmholtz equation , which has an analytical solution. This equation applies to electrolytes with equal numbers of ions of each charge. Nonsymmetrical electrolytes require another term with ψ2. For symmetrical electrolytes, this reduces to the modified spherical Bessel equation The coefficients and are fixed by the boundary conditions. As , must not diverge, so . At , which is the distance of the closest approach of ions, the force exerted by the charge should be balanced by the force of other ions, imposing , from which is found, yielding The electrostatic potential energy, , of the ion at is This is the potential energy of a single ion in a solution. The multiple-charge generalization from electrostatics gives an expression for the potential energy of the entire solution. The mean activity coefficient is given by the logarithm of this quantity as follows: log γ± = −A|z+z−|√I / (1 + B a0 √I), where I is the ionic strength and a0 is a parameter that represents the distance of closest approach of ions. For aqueous solutions at 25 °C A = 0.51 mol−1/2dm3/2 and B = 3.29 nm−1mol−1/2dm3/2; both are constants that depend on temperature. If is expressed in terms of molality, instead of molarity (as in the equation above and in the rest of this article), then an experimental value for of water is at 25 °C. It is common to use a base-10 logarithm, in which case we factor , so A is . The multiplier before in the equation is for the case when the dimensions of are . 
When the dimensions of are , the multiplier must be dropped from the equation. The most significant aspect of this result is the prediction that the mean activity coefficient is a function of ionic strength rather than the electrolyte concentration. For very low values of the ionic strength the value of the denominator in the expression above becomes nearly equal to one. In this situation the mean activity coefficient is proportional to the square root of the ionic strength. This is known as the Debye–Hückel limiting law. In this limit the equation is given as follows: log γ± = −A|z+z−|√I. The excess osmotic pressure obtained from Debye–Hückel theory is in cgs units: Therefore, the total pressure is the sum of the excess osmotic pressure and the ideal pressure . The osmotic coefficient is then given by Nondimensionalization Taking the differential equation from earlier (as stated above, the equation only holds for low concentrations): Using the Buckingham π theorem on this problem results in the following dimensionless groups: is called the reduced scalar electric potential field. is called the reduced radius. The existing groups may be recombined to form two other dimensionless groups for substitution into the differential equation. The first is what could be called the square of the reduced inverse screening length, . The second could be called the reduced central ion charge, (with a capital Z). Note that, though is already dimensionless, without the substitution given below, the differential equation would still be dimensional. To obtain the nondimensionalized differential equation and initial conditions, use the groups to eliminate in favor of , then eliminate in favor of while carrying out the chain rule and substituting , then eliminate in favor of (no chain rule needed), then eliminate in favor of , then eliminate in favor of . The resulting equations are as follows: For table salt in 0.01 M solution at 25 °C, a typical value of is 0.0005636, while a typical value of is 7.017, highlighting the fact that, in low concentrations, is a target for a zero order of magnitude approximation such as perturbation analysis. Unfortunately, because of the boundary condition at infinity, regular perturbation does not work. The same boundary condition prevents us from finding the exact solution to the equations. Singular perturbation may work, however. Limitations and extensions This equation for gives satisfactory agreement with experimental measurements for low electrolyte concentrations, typically less than 10−3 mol/L. Deviations from the theory occur at higher concentrations and with electrolytes that produce ions of higher charges, particularly unsymmetrical electrolytes. Essentially these deviations occur because the model is oversimplified, so there is little to be gained from making small adjustments to the model. The individual assumptions can be challenged in turn. Complete dissociation. Ion association may take place, particularly with ions of higher charge. This was followed up in detail by Niels Bjerrum. The Bjerrum length is the separation at which the electrostatic interaction between two ions is comparable in magnitude to kT. Weak electrolytes. A weak electrolyte is one that is not fully dissociated. As such it has a dissociation constant. The dissociation constant can be used to calculate the extent of dissociation and hence to make the correction needed to calculate activity coefficients. Ions are spherical, not point charges and are not polarized. 
Many ions such as the nitrate ion, NO3−, are not spherical. Polyatomic ions are also polarizable. Role of the solvent. The solvent is not a structureless medium but is made up of molecules. The water molecules in aqueous solution are both dipolar and polarizable. Both cations and anions have a strong primary solvation shell and a weaker secondary solvation shell. Ion–solvent interactions are ignored in Debye–Hückel theory. Moreover, ionic radius is assumed to be negligible, but at higher concentrations, the ionic radius becomes comparable to the radius of the ionic atmosphere. Most extensions to Debye–Hückel theory are empirical in nature. They usually allow the Debye–Hückel equation to be followed at low concentration and add further terms in some power of the ionic strength to fit experimental observations. The main extensions are the Davies equation, Pitzer equations and specific ion interaction theory. One such extended Debye–Hückel equation is given by: where as its common logarithm is the activity coefficient, is the integer charge of the ion (1 for H+, 2 for Mg2+ etc.), is the ionic strength of the aqueous solution, and is the size or effective diameter of the ion in angstrom. The effective hydrated radius of the ion, a is the radius of the ion and its closely bound water molecules. Large ions and less highly charged ions bind water less tightly and have smaller hydrated radii than smaller, more highly charged ions. Typical values are 3Å for ions such as H+, Cl−, CN−, and HCOO−. The effective diameter for the hydronium ion is 9Å. and are constants with values of respectively 0.5085 and 0.3281 at 25 °C in water . The extended Debye–Hückel equation provides accurate results for μ ≤ 0.1. For solutions of greater ionic strengths, the Pitzer equations should be used. In these solutions the activity coefficient may actually increase with ionic strength. The Debye–Hückel equation cannot be used in the solutions of surfactants where the presence of micelles influences on the electrochemical properties of the system (even rough judgement overestimates γ for ~50%). Electrolytes mixtures The theory can be applied also to dilute solutions of mixed electrolytes. Freezing point depression measurements has been used to this purpose. Conductivity The treatment given so far is for a system not subject to an external electric field. When conductivity is measured the system is subject to an oscillating external field due to the application of an AC voltage to electrodes immersed in the solution. Debye and Hückel modified their theory in 1926 and their theory was further modified by Lars Onsager in 1927. All the postulates of the original theory were retained. In addition it was assumed that the electric field causes the charge cloud to be distorted away from spherical symmetry. After taking this into account, together with the specific requirements of moving ions, such as viscosity and electrophoretic effects, Onsager was able to derive a theoretical expression to account for the empirical relation known as Kohlrausch's Law, for the molar conductivity, Λm. is known as the limiting molar conductivity, K is an empirical constant and c is the electrolyte concentration. Limiting here means "at the limit of the infinite dilution"). Onsager's expression is where A and B are constants that depend only on known quantities such as temperature, the charges on the ions and the dielectric constant and viscosity of the solvent. This is known as the Debye–Hückel–Onsager equation. 
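The two conductivity expressions referred to in this passage did not survive extraction. In standard notation (a reconstruction, not the article's original typography) they read:

\Lambda_m = \Lambda_m^0 - K\sqrt{c} \qquad \text{(Kohlrausch's law)}

\Lambda_m = \Lambda_m^0 - (A + B\,\Lambda_m^0)\sqrt{c} \qquad \text{(Debye–Hückel–Onsager equation)}

Here Λm0 is the limiting molar conductivity, c is the electrolyte concentration, and A and B are the constants described above, depending on temperature, the ionic charges, and the dielectric constant and viscosity of the solvent.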
However, this equation only applies to very dilute solutions and has been largely superseded by other equations due to Fuoss and Onsager, 1932 and 1957 and later. Summary of Debye and Hückel's first article on the theory of dilute electrolytes The English title of the article is "On the Theory of Electrolytes. I. Freezing Point Depression and Related Phenomena". It was originally published in 1923 in volume 24 of a German-language journal . An English translation of the article is included in a book of collected papers presented to Debye by "his pupils, friends, and the publishers on the occasion of his seventieth birthday on March 24, 1954". Another English translation was completed in 2019. The article deals with the calculation of properties of electrolyte solutions that are under the influence of ion-induced electric fields, thus it deals with electrostatics. In the same year they first published this article, Debye and Hückel, hereinafter D&H, also released an article that covered their initial characterization of solutions under the influence of electric fields called "On the Theory of Electrolytes. II. Limiting Law for Electric Conductivity", but that subsequent article is not (yet) covered here. In the following summary (as yet incomplete and unchecked), modern notation and terminology are used, from both chemistry and mathematics, in order to prevent confusion. Also, with a few exceptions to improve clarity, the subsections in this summary are (very) condensed versions of the same subsections of the original article. Introduction D&H note that the Guldberg–Waage formula for electrolyte species in chemical reaction equilibrium in classical form is where is a notation for multiplication, is a dummy variable indicating the species, is the number of species participating in the reaction, is the mole fraction of species , is the stoichiometric coefficient of species , K is the equilibrium constant. D&H say that, due to the "mutual electrostatic forces between the ions", it is necessary to modify the Guldberg–Waage equation by replacing with , where is an overall activity coefficient, not a "special" activity coefficient (a separate activity coefficient associated with each species)—which is what is used in modern chemistry . The relationship between and the special activity coefficients is Fundamentals D&H use the Helmholtz and Gibbs free entropies and to express the effect of electrostatic forces in an electrolyte on its thermodynamic state. Specifically, they split most of the thermodynamic potentials into classical and electrostatic terms: where is Helmholtz free entropy, is entropy, is internal energy, is temperature, is Helmholtz free energy. D&H give the total differential of as where is pressure, is volume. By the definition of the total differential, this means that which are useful further on. As stated previously, the internal energy is divided into two parts: where indicates the classical part, indicates the electric part. Similarly, the Helmholtz free entropy is also divided into two parts: D&H state, without giving the logic, that It would seem that, without some justification, Without mentioning it specifically, D&H later give what might be the required (above) justification while arguing that , an assumption that the solvent is incompressible. The definition of the Gibbs free entropy is where is Gibbs free energy. 
D&H give the total differential of as At this point D&H note that, for water containing 1 mole per liter of potassium chloride (nominal pressure and temperature aren't given), the electric pressure amounts to 20 atmospheres. Furthermore, they note that this level of pressure gives a relative volume change of 0.001. Therefore, they neglect change in volume of water due to electric pressure, writing and put D&H say that, according to Planck, the classical part of the Gibbs free entropy is where is a species, is the number of different particle types in solution, is the number of particles of species i, is the particle specific Gibbs free entropy of species i, is the Boltzmann constant, is the mole fraction of species i. Species zero is the solvent. The definition of is as follows, where lower-case letters indicate the particle specific versions of the corresponding extensive properties: D&H don't say so, but the functional form for may be derived from the functional dependence of the chemical potential of a component of an ideal mixture upon its mole fraction. D&H note that the internal energy of a solution is lowered by the electrical interaction of its ions, but that this effect can't be determined by using the crystallographic approximation for distances between dissimilar atoms (the cube root of the ratio of total volume to the number of particles in the volume). This is because there is more thermal motion in a liquid solution than in a crystal. The thermal motion tends to smear out the natural lattice that would otherwise be constructed by the ions. Instead, D&H introduce the concept of an ionic atmosphere or cloud. Like the crystal lattice, each ion still attempts to surround itself with oppositely charged ions, but in a more free-form manner; at small distances away from positive ions, one is more likely to find negative ions and vice versa. The potential energy of an arbitrary ion solution Electroneutrality of a solution requires that where is the total number of ions of species i in the solution, is the charge number of species i. To bring an ion of species i, initially far away, to a point within the ion cloud requires interaction energy in the amount of , where is the elementary charge, and is the value of the scalar electric potential field at . If electric forces were the only factor in play, the minimal-energy configuration of all the ions would be achieved in a close-packed lattice configuration. However, the ions are in thermal equilibrium with each other and are relatively free to move. Thus they obey Boltzmann statistics and form a Boltzmann distribution. All species' number densities are altered from their bulk (overall average) values by the corresponding Boltzmann factor , where is the Boltzmann constant, and is the temperature. Thus at every point in the cloud Note that in the infinite temperature limit, all ions are distributed uniformly, with no regard for their electrostatic interactions. The charge density is related to the number density: When combining this result for the charge density with the Poisson equation from electrostatics, a form of the Poisson–Boltzmann equation results: This equation is difficult to solve and does not follow the principle of linear superposition for the relationship between the number of charges and the strength of the potential field. It has been solved analyticallt by the Swedish mathematician Thomas Hakon Gronwall and his collaborators physical chemists V. K. 
La Mer and Karl Sandved in a 1928 article from Physikalische Zeitschrift dealing with extensions to Debye–Huckel theory. However, for sufficiently low concentrations of ions, a first-order Taylor series expansion approximation for the exponential function may be used ( for ) to create a linear differential equation. D&H say that this approximation holds at large distances between ions, which is the same as saying that the concentration is low. Lastly, they claim without proof that the addition of more terms in the expansion has little effect on the final solution. Thus The Poisson–Boltzmann equation is transformed to because the first summation is zero due to electroneutrality. Factor out the scalar potential and assign the leftovers, which are constant, to . Also, let be the ionic strength of the solution: So, the fundamental equation is reduced to a form of the Helmholtz equation: Today, is called the Debye screening length. D&H recognize the importance of the parameter in their article and characterize it as a measure of the thickness of the ion atmosphere, which is an electrical double layer of the Gouy–Chapman type. The equation may be expressed in spherical coordinates by taking at some arbitrary ion: The equation has the following general solution (keep in mind that is a positive constant): where , , and are undetermined constants The electric potential is zero at infinity by definition, so must be zero. In the next step, D&H assume that there is a certain radius , beyond which no ions in the atmosphere may approach the (charge) center of the singled out ion. This radius may be due to the physical size of the ion itself, the sizes of the ions in the cloud, and any water molecules that surround the ions. Mathematically, they treat the singled out ion as a point charge to which one may not approach within the radius . The potential of a point charge by itself is D&H say that the total potential inside the sphere is where is a constant that represents the potential added by the ionic atmosphere. No justification for being a constant is given. However, one can see that this is the case by considering that any spherical static charge distribution is subject to the mathematics of the shell theorem. The shell theorem says that no force is exerted on charged particles inside a sphere (of arbitrary charge). Since the ion atmosphere is assumed to be (time-averaged) spherically symmetric, with charge varying as a function of radius , it may be represented as an infinite series of concentric charge shells. Therefore, inside the radius , the ion atmosphere exerts no force. If the force is zero, then the potential is a constant (by definition). In a combination of the continuously distributed model which gave the Poisson–Boltzmann equation and the model of the point charge, it is assumed that at the radius , there is a continuity of and its first derivative. Thus By the definition of electric potential energy, the potential energy associated with the singled out ion in the ion atmosphere is Notice that this only requires knowledge of the charge of the singled out ion and the potential of all the other ions. 
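The expressions that this part of the summary walks through (the general solution of the linearized equation, the potential inside the closest-approach radius, and the contribution of the ion atmosphere) were lost in extraction. In standard SI-style notation, which may differ from the cgs convention of the original article, they are usually written as follows; this is a reconstruction of the textbook results rather than the article's own formulas:

\psi(r) = A'\,\frac{e^{-\kappa r}}{r} + A''\,\frac{e^{\kappa r}}{r}, \qquad A'' = 0 \ \text{so that}\ \psi \to 0 \ \text{as}\ r \to \infty

\psi(r) = \frac{z_j e}{4\pi \varepsilon_0 \varepsilon_r}\,\frac{e^{\kappa a}}{1 + \kappa a}\,\frac{e^{-\kappa r}}{r}, \qquad r \ge a

\psi(r) = \frac{z_j e}{4\pi \varepsilon_0 \varepsilon_r\, r} + \psi_{\text{atm}}, \qquad r \le a, \qquad \psi_{\text{atm}} = -\frac{z_j e}{4\pi \varepsilon_0 \varepsilon_r}\,\frac{\kappa}{1 + \kappa a}

Here 1/κ is the Debye screening length introduced earlier, a is the distance of closest approach, and ψ_atm is the constant potential contributed by the surrounding ion cloud inside the sphere of radius a.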
To calculate the potential energy of the entire electrolyte solution, one must use the multiple-charge generalization for electric potential energy: The additional electric term to the thermodynamic potential Experimental verification of the theory To verify the validity of the Debye–Hückel theory, many experimental approaches to measuring activity coefficients have been tried; the difficulty is that measurements must be pushed to very high dilutions. Typical examples are measurements of vapour pressure, freezing point and osmotic pressure (indirect methods) and measurement of electric potential in cells (a direct method). At high dilutions, good results have been obtained using liquid membrane cells, which have made it possible to investigate aqueous media as dilute as 10−4 M. For 1:1 electrolytes (such as NaCl or KCl) the Debye–Hückel equation is found to be entirely correct, but for 2:2 or 3:2 electrolytes negative deviations from the Debye–Hückel limiting law are observed: this behavior appears only in the very dilute region, while in more concentrated regions the deviation becomes positive. One possibility is that the Debye–Hückel equation fails to predict this behavior because of the linearization of the Poisson–Boltzmann equation; studies of the question began only in the last years of the 20th century, because it was not previously possible to investigate the 10−4 M region, so new theories may yet emerge. See also Electrolyte Chemical activity Ionic strength Poisson-Boltzmann equation Debye length Bjerrum length Bates-Guggenheim Convention Ionic atmosphere Electrical double layer Ion association Davies equation Pitzer equation Specific ion Interaction Theory References Thermodynamic models Electrochemistry Equilibrium chemistry Peter Debye
Debye–Hückel theory
[ "Physics", "Chemistry" ]
5,339
[ "Thermodynamic models", "Electrochemistry", "Thermodynamics", "Equilibrium chemistry" ]
12,385,248
https://en.wikipedia.org/wiki/Copper-64
Copper-64 (64Cu) is a positron- and beta-emitting isotope of copper, with applications for molecular radiotherapy and positron emission tomography. Its unusually long half-life (12.7 hours) for a positron-emitting isotope makes it increasingly useful when attached to various ligands, for PET and PET-CT scanning. Properties 64Cu has a half-life of 12.7 hours and decays 17.9% by positron emission to 64Ni, 39.0% by beta decay to 64Zn, 43.1% by electron capture to 64Ni, and 0.475% gamma radiation/internal conversion. These emissions are 0.579 MeV, 0.653 MeV and 1.35 MeV for beta minus, positron, and gamma respectively. Production Copper-64 can be produced by several different reactions, with the most common methods using either a reactor or a particle accelerator. Thermal neutrons can produce 64Cu in low specific activity (the number of decays per second per amount of substance) and low yield through the 63Cu(n,γ)64Cu reaction. At the University of Missouri Research Reactor Center (MURR) 64Cu was produced using high-energy neutrons via the 64Zn(n,p)64Cu nuclear reaction in high specific activity but low yield. Using a biomedical cyclotron, the 64Ni(p,n)64Cu nuclear reaction can produce large quantities of the nuclide with high specific activity. Applications As a positron emitter, 64Cu has been used to produce experimental and clinical radiopharmaceuticals for the imaging of a range of conditions. Its beta emissions also raise the possibility of therapeutic applications. Compared to typical PET radionuclides it has a relatively long half-life, which can be advantageous for therapy and for imaging certain physiological processes. PET imaging Bone metastases Experimental preclinical work has shown that 64Cu linked to methanephosphonate functional groups has potential as a bone imaging agent. Neuroendocrine tumors (NETs) Neuroendocrine tumors (NETs) are localised clinically using a range of DOTA-based radiopharmaceuticals. For PET imaging these are typically gallium-68 based. A commercial 64Cu-DOTA-TATE product has been FDA approved for localization of somatostatin receptor positive NETs since 2020. Prostate cancer Bombesin is a peptide whose receptor, BB2, has been shown to be overexpressed in prostate cancer. CB-TE2A, a stable chelation system for 64Cu, was incorporated into bombesin analogs for in vitro and in vivo studies of prostate cancer. PET-CT imaging studies showed that it underwent selective uptake into prostate tumor xenografts with decreased uptake into non-target tissues. Other preclinical studies have shown that, by targeting the gastrin-releasing peptide receptor, pancreatic and breast cancer can also be detected. Renal perfusion Ethylglyoxal bis(thiosemicarbazone) (ETS) has potential utility as a PET radiopharmaceutical with the various isotopes of copper. 64Cu-ETS has been used for experimental preclinical myocardial, cerebral and tumor perfusion evaluations, with a linear relationship between renal uptake and blood flow. Renal perfusion can also be evaluated with CT or MRI instead of PET, but with drawbacks: CT requires administration of potentially allergenic contrast agents, while MRI avoids the use of ionising radiation but is difficult to implement and often suffers from motion artefacts. PET with 64Cu can offer quantitative measurements of renal perfusion. Wilson's disease Wilson's disease is a rare condition in which copper is retained excessively in the body. Toxic levels of copper can lead to organ failure and premature death.
64Cu has been used experimentally to study whole-body retention of copper in subjects with this disease. The technique can also distinguish heterozygous carriers from homozygous normal subjects. Cancer therapy 64Cu-ATSM (diacetyl-bis(N4-methylthiosemicarbazone)) has been shown to increase the survival time of tumor-bearing animals. Areas of low oxygen have been shown to be resistant to external beam radiotherapy because hypoxia reduces the lethal effects of ionizing radiation. 64Cu was believed to kill these cells because of its unique decay properties. In animal models bearing colorectal tumors with and without induced hypoxia, 64Cu-ATSM was preferentially taken up by hypoxic cells over normoxic cells. The results demonstrated that this compound increased survival of the tumor-bearing hamsters compared with controls. See also Nuclear medicine Radioactive tracer Radionuclide Radiopharmacology References Isotopes of copper Positron emitters Medical isotopes
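As a small illustration of the decay arithmetic described in the Properties section above, the sketch below computes the fraction of 64Cu remaining after a given time and apportions the decayed nuclei among the branches quoted in the article. The function names are hypothetical helpers for illustration only, not part of any existing library.

import math

HALF_LIFE_H = 12.7          # 64Cu half-life in hours, as quoted above
BRANCHING = {               # branching fractions quoted above
    "beta_plus_to_64Ni": 0.179,
    "beta_minus_to_64Zn": 0.390,
    "electron_capture_to_64Ni": 0.431,
}

def remaining_fraction(hours: float) -> float:
    """Fraction of the original 64Cu nuclei still present after `hours`."""
    decay_constant = math.log(2) / HALF_LIFE_H
    return math.exp(-decay_constant * hours)

def decays_by_branch(n0: float, hours: float) -> dict:
    """Apportion the nuclei that decayed within `hours` among the listed branches."""
    decayed = n0 * (1.0 - remaining_fraction(hours))
    return {branch: decayed * frac for branch, frac in BRANCHING.items()}

if __name__ == "__main__":
    # After 24 h only about 27% of the activity remains, which is why the
    # 12.7 h half-life is considered long for a PET nuclide.
    print(f"fraction left after 24 h: {remaining_fraction(24.0):.3f}")
    print(decays_by_branch(1e6, 24.0))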
Copper-64
[ "Chemistry" ]
987
[ "Isotopes of copper", "Chemicals in medicine", "Isotopes", "Medical isotopes" ]
12,385,647
https://en.wikipedia.org/wiki/Four-center%20two-electron%20bond
A 4-center 2-electron (4c–2e) bond is a type of chemical bond in which four atoms share two electrons in bonding, with a net bond order of . This type of bonding differs from the usual covalent bond, which involves two atoms sharing two electrons (2c–2e bonding). Four-center two-electron bonding is postulated in certain cluster compounds. For instance, the borane anion is an octahedron with an additional proton attached to one of the triangular faces. As a result, the octahedron is distorted and a B–B–B–H rhomboid ring can be identified in which this 4c–2e bonding takes place. This type of bonding is associated with electron-deficient rhomboid rings in general and is a relatively new research field, fitting in with the already well-established three-center two-electron bond. An example of a purely organic compound with four-center two-electron bonding is the adamantyl dication. The bond joins the four bridgehead atoms in a tetrahedral geometry. Tetracyanoethylene forms a dianionic dimer in which the two alkenes are joined face-to-face by a rectangular four-center two-electron bond. Various solid salts of this dianion have been studied to determine bond strengths and vibrational spectroscopic details. References Chemical bonding
Four-center two-electron bond
[ "Physics", "Chemistry", "Materials_science" ]
292
[ "Chemical bonding", "Condensed matter physics", "nan" ]
1,716,835
https://en.wikipedia.org/wiki/Graver%20Tank%20%26%20Manufacturing%20Co.%20v.%20Linde%20Air%20Products%20Co.
Graver Tank & Manufacturing Co. v. Linde Air Products Co., 339 U.S. 605 (1950), was an important United States Supreme Court decision in the area of patent law, establishing the propriety of the doctrine of equivalents, and explaining how and when it was to be used. Facts The plaintiff Linde Air Products Co. owned a patent for an electric welding process, and sued defendants including the Graver company for infringing the patent. The defendants asserted that they were not infringing the patent because the patented welding process used a welding composition made of alkaline earth metal silicate and calcium fluoride (usually expressed as silicates of calcium and magnesium), while the purported infringers substituted a similar element, manganese, for the patentee's magnesium. The United States district court found infringement, and the Court of Appeals affirmed the infringement claim. Issue The Supreme Court agreed to review the case, limited to the question of whether the substitution of a similar material not claimed in the patent itself would save the defendants from being held liable for infringements. Result The Court, in an opinion written by Justice Robert Jackson, raised the doctrine of equivalents. It noted that if another party could use a process exactly the same as one that is patented, but escape infringement by making some obvious substitution of materials, it would deprive the patentee of the exclusive control meant to come with a patent. This would undermine the profitability of the patent, which would go against the policy of encouraging inventors to invent by giving the opportunity to profit from the labor of invention. The Court also outlined how the doctrine should be used, noting that "what constitutes equivalency must be determined against the context of the patent, the prior art, and the particular circumstances of the case." The Court laid out two possible tests to determine equivalency. Under the first of these (which has since come to be known as the "triple identity" test), something is deemed equivalent if: It performs substantially the same function in substantially the same way to obtain the same result. Under the second test, something is deemed equivalent if there is only an "insubstantial change" between each of the features of the accused device or process and the patent claim. In this case, the Court gave particular weight to the determination of "whether persons reasonably skilled in the art would have known of the interchangeability of an ingredient not contained in the patent with one that was." Finding that the substitution of magnesium for manganese was both obvious to anyone working in the field, and was an insubstantial change, the Court upheld the finding of patent infringement. Dissent Justice Hugo Black dissented, joined by Justice Douglas. They contended that it is the responsibility of the person seeking the patent to claim everything that the patent covers, and noted that processes exist for a patent to be amended. They asserted that it was the responsibility of the Patent Office to determine the scope of the invention, and it was therefore an intrusion for courts to be expanding the scope of the patent beyond what the Patent Office has determined. Later developments The employment of this doctrine raised a great deal of controversy, as many legal commentators thought that it allowed patentees to protect more than they had specifically requested, and indeed more than they may have been permitted to request in a patent claim. 
The doctrine was again questioned by the Supreme Court in Warner-Jenkinson Company, Inc. v. Hilton Davis Chemical Co., which unanimously reaffirmed it, although with some refinements. See also List of United States Supreme Court cases, volume 339 References External links Linde plc 1950 in United States case law United States patent case law United States Supreme Court cases United States Supreme Court cases of the Vinson Court Welding
Graver Tank & Manufacturing Co. v. Linde Air Products Co.
[ "Engineering" ]
774
[ "Welding", "Mechanical engineering" ]
1,717,012
https://en.wikipedia.org/wiki/Shunt%20%28electrical%29
A shunt is a device that is designed to provide a low-resistance path for an electrical current in a circuit. It is typically used to divert current away from a system or component in order to prevent overcurrent. Electrical shunts are commonly used in a variety of applications including power distribution systems, electrical measurement systems, automotive and marine applications. Defective device bypass One example is in miniature Christmas lights which are wired in series. When the filament burns out in one of the incandescent light bulbs, the full line voltage appears across the burnt out bulb. A shunt resistor, which has been connected in parallel across the filament before it burnt out, will then short out to bypass the burnt filament and allow the rest of the string to light. If too many lights burn out however, a shunt will also burn out, requiring the use of a multimeter to find the point of failure. Photovoltaics In photovoltaics, the term is widely used to describe an unwanted short circuit between the front and back surface contacts of a solar cell, usually caused by wafer damage. Lightning arrester A gas-filled tube can also be used as a shunt, particularly in a lightning arrester. Neon, like other noble gases, has a high breakdown voltage, so that normally current will not flow across it. However, a direct lightning strike (such as on a radio tower antenna) will cause the shunt to arc and conduct the massive amount of electricity to ground, protecting transmitters and other equipment. Another older form of lightning arrester employs a simple narrow spark gap, over which an arc will jump when a high voltage is present. While a low cost solution, its high triggering voltage offers almost no protection for modern solid-state electronic devices powered by the protected circuit. Electrical noise bypass Capacitors are used as shunts to redirect high-frequency noise to ground before it can propagate to the load or other circuit components. Use in electronic filter circuits The term shunt is used in filter and similar circuits with a ladder topology to refer to the components connected between the line and common. The term is used in this context to distinguish the shunt components connected between the signal and return lines from the components connected in series along the signal line. More generally, the term shunt can be used for a component connected in parallel with another. For instance, shunt m-derived half section is a common filter section from the image impedance method of filter design. Diodes as shunts Where devices are vulnerable to reverse polarity of a signal or power supply, a diode may be used to protect the circuit. If connected in series with the circuit it simply prevents reversed current, but if connected in parallel it can shunt the reversed supply, causing a fuse or other current limiting circuit to open. All semiconductor diodes have a threshold voltage – typically between 0.5 volt and 1 volt – that must be exceeded before significant current will flow through the diode in the normally allowed direction. Two anti-parallel shunt diodes (one to conduct current in each direction) can be used to limit the signal flowing past them to no more than their threshold voltages, in order to protect later components from overload. Shunts as circuit protection When a circuit must be protected from overvoltage and there are failure modes in the power supply that can produce such overvoltages, the circuit may be protected by a device commonly called a crowbar circuit. 
When this device detects an overvoltage it causes a short circuit between the power supply and its return. This will cause both an immediate drop in voltage (protecting the device) and an instantaneous high current which is expected to open a current sensitive device (such as a fuse or circuit breaker). This device is called a crowbar as it is likened to dropping an actual crowbar across a set of bus bars (exposed electrical conductors). Battle short On warships, it is common to install battle short shunts across fuses for essential equipment before entering combat. This bypasses overcurrent protection at a time when removing power to the equipment is not an appropriate reaction. Shunting an instrument but series connected in circuit As an introduction to the next chapter, this figure shows that the term "shunt resistor" should be understood in the context of what it shunts. In this example the resistor RL would be understood as "the shunt resistor" (to the load L), because this resistor would pass current around the load L. RL is connected in parallel with the load L. However, the series resistors RM1 and RM2 are low Ohmic resistors (like in the photo) meant to pass current around the instruments M1 and M2, and function as shunt resistors to those instruments. RM1 and RM2 are connected in parallel with M1 and M2. If seen without the instruments these two resistors would be considered series resistors in this circuit. Use in current measuring An ammeter shunt allows the measurement of current values too large to be directly measured by a particular ammeter. In this case, a separate shunt, a resistor of very low but accurately known resistance, is placed in parallel with a voltmeter, so that virtually all of the current to be measured will flow through the shunt (provided that the very high internal resistance of the voltmeter takes such a low portion of the current that it can be considered negligible). The resistance is chosen so that the resultant voltage drop is measurable but low enough not to disrupt the circuit. The voltage across the shunt is proportional to the current flowing through it, and so the measured voltage can be scaled to directly display the current value. Shunts are rated by maximum current and voltage drop at that current. For example, a 500 A, 75 mV shunt would have a resistance of , a maximum allowable current of 500 amps and at that current the voltage drop would be 75 millivolts. By convention, most shunts are designed to drop 50 mV, 75 mV or 100 mV when operating at their full rated current and most ammeters consist of a shunt and a voltmeter with full-scale deflections of 50, 75, or 100 mV. All shunts have a derating factor for continuous (more than 2 minutes) use, 66% being the most common, so the example shunt should not be operated above 330 A (and 50 mV drop) longer than that. This limitation is due to thermal limits at which a shunt will no longer operate correctly. For manganin, a common shunt material, at 80 °C thermal drift begins to occur, at 120 °C thermal drift is a significant problem where error, depending on the design of the shunt, can be several percent and at 140 °C the manganin alloy becomes permanently damaged due to annealing resulting in the resistance value drifting up or down. If the current being measured is also at a high voltage potential this voltage will be present in the connecting leads too and in the reading instrument itself. Sometimes, the shunt is inserted in the return leg (grounded side) to avoid this problem. 
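As a minimal sketch of the ammeter-shunt arithmetic described above: the example 500 A, 75 mV shunt works out to 75 mV / 500 A = 0.15 milliohm, and a measured voltage drop is scaled back to current by Ohm's law, subject to the typical 66% continuous-duty derating mentioned above. The function names below are hypothetical helpers for illustration, not part of any instrument library.

def shunt_resistance(rated_current_a: float, rated_drop_v: float) -> float:
    """Resistance implied by a shunt's rating, e.g. 500 A / 75 mV -> 0.15 milliohm."""
    return rated_drop_v / rated_current_a

def current_from_drop(measured_drop_v: float, resistance_ohm: float) -> float:
    """Scale the voltage measured across the shunt back to the circuit current."""
    return measured_drop_v / resistance_ohm

def continuous_limit(rated_current_a: float, derating: float = 0.66) -> float:
    """Maximum current for continuous (>2 min) use with a typical 66% derating."""
    return rated_current_a * derating

if __name__ == "__main__":
    r = shunt_resistance(500.0, 0.075)                      # 1.5e-4 ohm = 0.15 milliohm
    print(f"R = {r * 1e3:.2f} milliohm")
    print(f"I at 50 mV drop = {current_from_drop(0.050, r):.0f} A")   # about 333 A
    print(f"continuous limit = {continuous_limit(500.0):.0f} A")      # 330 A, as stated above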
Some alternatives to shunts can provide isolation from the high voltage by not directly connecting the meter to the high voltage circuit. Examples of devices that can provide this isolation are Hall effect current sensors and current transformers (see clamp meters). Current shunts are considered more accurate and cheaper than Hall effect devices. Common accuracy specifications of such devices are ±0.1%, ±0.25% or ±0.5%. The Thomas-type double manganin walled shunt and MI type (improved Thomas-type design) were used by NIST and other standards laboratories as the legal reference of an ohm until superseded in 1990 by the quantum Hall effect. Thomas-type shunts are still used as secondary standards to take very accurate current measurements, as using quantum Hall effect is a time-consuming process. The accuracy of these types of shunts is measured in the ppm and sub-ppm scale of drift per year of set resistance. Where the circuit is grounded (earthed) on one side, a current measuring shunt can be inserted either in the ungrounded conductor or in the grounded conductor. A shunt in the ungrounded conductor must be insulated for the full circuit voltage to ground; the measuring instrument must be inherently isolated from ground or must include a resistive voltage divider or an isolation amplifier between the relatively high common-mode voltage and lower voltages inside the instrument. A shunt in the grounded conductor may not detect leakage current that bypasses the shunt, but it will not experience high common-mode voltage to ground. The load is removed from a direct path to ground, which may create problems for control circuitry, result in unwanted emissions, or both. See also Burden voltage Shunt generator Shunt wound motor Shunt jumper Zero-ohm link Fuse (electrical) Bead (electrical) References External links Electrical engineering
Shunt (electrical)
[ "Engineering" ]
1,847
[ "Electrical engineering" ]
1,717,346
https://en.wikipedia.org/wiki/Retrosynthetic%20analysis
Retrosynthetic analysis is a technique for solving problems in the planning of organic syntheses. This is achieved by transforming a target molecule into simpler precursor structures regardless of any potential reactivity/interaction with reagents. Each precursor material is examined using the same method. This procedure is repeated until simple or commercially available structures are reached. These simpler/commercially available compounds can be used to form a synthesis of the target molecule. Retrosynthetic analysis was used as early as 1917 in Robinson's Tropinone total synthesis. Important conceptual work on retrosynthetic analysis was published by George Vladutz in 1963. E.J. Corey formalized and popularized the concept from 1967 onwards in his article General methods for the construction of complex molecules and his book The Logic of Chemical Synthesis. The power of retrosynthetic analysis becomes evident in the design of a synthesis. The goal of retrosynthetic analysis is a structural simplification. Often, a synthesis will have more than one possible synthetic route. Retrosynthesis is well suited for discovering different synthetic routes and comparing them in a logical and straightforward fashion. A database may be consulted at each stage of the analysis, to determine whether a component already exists in the literature. In that case, no further exploration of that compound would be required. If that compound exists, it can be a jumping point for further steps developed to reach a synthesis. Definitions Disconnection A retrosynthetic step involving the breaking of a bond to form two (or more) synthons. Retron A minimal molecular substructure that enables certain transformations. Retrosynthetic tree A directed acyclic graph of several (or all) possible retrosyntheses of a single target. Synthon A fragment of a compound that assists in the formation of a synthesis, derived from that target molecule. A synthon and the corresponding commercially available synthetic equivalent are shown below: Target The desired final compound. Transform The reverse of a synthetic reaction; the formation of starting materials from a single product. Example Shown below is a retrosynthetic analysis of phenylacetic acid: In planning the synthesis, two synthons are identified. A nucleophilic "-COOH" group, and an electrophilic "PhCH2+" group. Both synthons do not exist as written; synthetic equivalents corresponding to the synthons are reacted to produce the desired product. In this case, the cyanide anion is the synthetic equivalent for the −COOH synthon, while benzyl bromide is the synthetic equivalent for the benzyl synthon. The synthesis of phenylacetic acid determined by retrosynthetic analysis is thus: PhCH2Br + NaCN → PhCH2CN + NaBr PhCH2CN + 2 H2O → PhCH2COOH + NH3 In fact, phenylacetic acid has been synthesized from benzyl cyanide, itself prepared by the analogous reaction of benzyl bromide with sodium cyanide. Strategies Functional group strategies Manipulation of functional groups can lead to significant reductions in molecular complexity. Stereochemical strategies Numerous chemical targets have distinct stereochemical demands. Stereochemical transformations (such as the Claisen rearrangement and Mitsunobu reaction) can remove or transfer the desired chirality thus simplifying the target. Structure-goal strategies Directing a synthesis toward a desirable intermediate can greatly narrow the focus of analysis. This allows bidirectional search techniques. 
Transform-based strategies The application of transformations to retrosynthetic analysis can lead to powerful reductions in molecular complexity. Unfortunately, powerful transform-based retrons are rarely present in complex molecules, and additional synthetic steps are often needed to establish their presence. Topological strategies The identification of one or more key bond disconnections may lead to the identification of key substructures or difficult to identify rearrangement transformations in order to identify the key structures. Disconnections that preserve ring structures are encouraged. Disconnections that create rings larger than 7 members are discouraged. Disconnection involves creativity. See also Organic synthesis Total synthesis References External links Centre for Molecular and Biomolecular Informatics Presentation on ARChem Route Designer, ACS, Philadelphia, September 2008 for more info on ARChem see the SimBioSys pages. Manifold, Software freely available for academic users developed by PostEra Retrosynthesis planning tool: ICSynth by InfoChem Spaya, Software freely available proposed by Iktos Chemical synthesis Organic chemistry Chemical reaction engineering
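The recursive procedure the article describes, repeatedly disconnecting a target into simpler precursors until commercially available materials are reached, can be illustrated with a toy depth-first search. Everything below is a hypothetical illustration: the transform table, compound names and function names are made up for this sketch and do not represent a real synthesis-planning API, which would consult reaction and compound databases as described above. The example data mirrors the phenylacetic acid route given in the Example section.

# Toy retrosynthetic search: walk backwards from a target through a table of
# "transforms" (reversed reactions) until only purchasable materials remain.
PURCHASABLE = {"benzyl bromide", "sodium cyanide", "water"}

# Each transform maps a target compound to one or more candidate precursor sets.
TRANSFORMS = {
    "phenylacetic acid": [["benzyl cyanide", "water"]],        # nitrile hydrolysis, reversed
    "benzyl cyanide": [["benzyl bromide", "sodium cyanide"]],  # nucleophilic substitution, reversed
}

def retrosynthesize(target: str, depth: int = 0) -> None:
    """Print a simple retrosynthetic tree for `target`, depth-first."""
    indent = "  " * depth
    if target in PURCHASABLE:
        print(f"{indent}{target}  [commercially available]")
        return
    routes = TRANSFORMS.get(target)
    if not routes:
        print(f"{indent}{target}  [no known disconnection]")
        return
    print(f"{indent}{target}")
    for precursors in routes:              # each route is one possible disconnection
        for precursor in precursors:
            retrosynthesize(precursor, depth + 1)

if __name__ == "__main__":
    retrosynthesize("phenylacetic acid")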
Retrosynthetic analysis
[ "Chemistry", "Engineering" ]
927
[ "Chemical engineering", "Chemical reaction engineering", "nan", "Chemical synthesis" ]
1,717,684
https://en.wikipedia.org/wiki/Ribosomal%20RNA
Ribosomal ribonucleic acid (rRNA) is a type of non-coding RNA which is the primary component of ribosomes, essential to all cells. rRNA is a ribozyme which carries out protein synthesis in ribosomes. Ribosomal RNA is transcribed from ribosomal DNA (rDNA) and then bound to ribosomal proteins to form small and large ribosome subunits. rRNA is the physical and mechanical factor of the ribosome that forces transfer RNA (tRNA) and messenger RNA (mRNA) to process and translate the latter into proteins. Ribosomal RNA is the predominant form of RNA found in most cells; it makes up about 80% of cellular RNA despite never being translated into proteins itself. Ribosomes are composed of approximately 60% rRNA and 40% ribosomal proteins, though this ratio differs between prokaryotes and eukaryotes. Structure Although the primary structure of rRNA sequences can vary across organisms, base-pairing within these sequences commonly forms stem-loop configurations. The length and position of these rRNA stem-loops allow them to create three-dimensional rRNA structures that are similar across species. Because of these configurations, rRNA can form tight and specific interactions with ribosomal proteins to form ribosomal subunits. These ribosomal proteins contain basic residues (as opposed to acidic residues) and aromatic residues (i.e. phenylalanine, tyrosine and tryptophan) allowing them to form chemical interactions with their associated RNA regions, such as stacking interactions. Ribosomal proteins can also cross-link to the sugar-phosphate backbone of rRNA with binding sites that consist of basic residues (i.e. lysine and arginine). All ribosomal proteins (including the specific sequences that bind to rRNA) have been identified. These interactions along with the association of the small and large ribosomal subunits result in a functioning ribosome capable of synthesizing proteins. Ribosomal RNA organizes into two types of major ribosomal subunit: the large subunit (LSU) and the small subunit (SSU). One of each type come together to form a functioning ribosome. The subunits are at times referred to by their size-sedimentation measurements (a number with an "S" suffix). In prokaryotes, the LSU and SSU are called the 50S and 30S subunits, respectively. In eukaryotes, they are a little larger; the LSU and SSU of eukaryotes are termed the 60S and 40S subunits, respectively. In the ribosomes of prokaryotes such as bacteria, the SSU contains a single small rRNA molecule (~1500 nucleotides) while the LSU contains one single small rRNA and a single large rRNA molecule (~3000 nucleotides). These are combined with ~50 ribosomal proteins to form ribosomal subunits. There are three types of rRNA found in prokaryotic ribosomes: 23S and 5S rRNA in the LSU and 16S rRNA in the SSU. In the ribosomes of eukaryotes such as humans, the SSU contains a single small rRNA (~1800 nucleotides) while the LSU contains two small rRNAs and one molecule of large rRNA (~5000 nucleotides). Eukaryotic rRNA has over 70 ribosomal proteins which interact to form larger and more polymorphic ribosomal units in comparison to prokaryotes. There are four types of rRNA in eukaryotes: 3 species in the LSU and 1 in the SSU. Yeast has been the traditional model for observation of eukaryotic rRNA behavior and processes, leading to a deficit in diversification of research. 
It has only been within the last decade that technical advances (specifically in the field of Cryo-EM) have allowed for preliminary investigation into ribosomal behavior in other eukaryotes. In yeast, the LSU contains the 5S, 5.8S and 28S rRNAs. The combined 5.8S and 28S are roughly equivalent in size and function to the prokaryotic 23S rRNA subtype, minus expansion segments (ESs) that are localized to the surface of the ribosome which were thought to occur only in eukaryotes. However recently, the Asgard phyla, namely, Lokiarchaeota and Heimdallarchaeota, considered the closest archaeal relatives to Eukarya, were reported to possess two supersized ESs in their 23S rRNAs. Likewise, the 5S rRNA contains a 108‐nucleotide insertion in the ribosomes of the halophilic archaeon Halococcus morrhuae. A eukaryotic SSU contains the 18S rRNA subunit, which also contains ESs. SSU ESs are generally smaller than LSU ESs. SSU and LSU rRNA sequences are widely used for study of evolutionary relationships among organisms, since they are of ancient origin, are found in all known forms of life and are resistant to horizontal gene transfer. rRNA sequences are conserved (unchanged) over time due to their crucial role in the function of the ribosome. Phylogenic information derived from the 16s rRNA is currently used as the main method of delineation between similar prokaryotic species by calculating nucleotide similarity. The canonical tree of life is the lineage of the translation system. LSU rRNA subtypes have been called ribozymes because ribosomal proteins cannot bind to the catalytic site of the ribosome in this area (specifically the peptidyl transferase center, or PTC). The SSU rRNA subtypes decode mRNA in its decoding center (DC). Ribosomal proteins cannot enter the DC. The structure of rRNA is able to drastically change to affect tRNA binding to the ribosome during translation of other mRNAs. In 16S rRNA, this is thought to occur when certain nucleotides in the rRNA appear to alternate base pairing between one nucleotide or another, forming a "switch" that alters the rRNA's conformation. This process is able to affect the structure of the LSU and SSU, suggesting that this conformational switch in the rRNA structure affects the entire ribosome in its ability to match a codon with its anticodon in tRNA selection as well as decode mRNA. Assembly Ribosomal RNA's integration and assembly into ribosomes begins with their folding, modification, processing and assembly with ribosomal proteins to form the two ribosomal subunits, the LSU and the SSU. In Prokaryotes, rRNA incorporation occurs in the cytoplasm due to the lack of membrane-bound organelles. In Eukaryotes, however, this process primarily takes place in the nucleolus and is initiated by the synthesis of pre-RNA. This requires the presence of all three RNA polymerases. In fact, the transcription of pre-RNA by RNA polymerase I accounts for about 60% of cell's total cellular RNA transcription. This is followed by the folding of the pre-RNA so that it can be assembled with ribosomal proteins. This folding is catalyzed by endo- and exonucleases, RNA helicases, GTPases and ATPases. The rRNA subsequently undergoes endo- and exonucleolytic processing to remove external and internal transcribed spacers. The pre-RNA then undergoes modifications such as methylation or pseudouridinylation before ribosome assembly factors and ribosomal proteins assemble with the pre-RNA to form pre-ribosomal particles. 
Upon going under more maturation steps and subsequent exit from the nucleolus into the cytoplasm, these particles combine to form the ribosomes. The basic and aromatic residues found within the primary structure of rRNA allow for favorable stacking interactions and attraction to ribosomal proteins, creating a cross-linking effect between the backbone of rRNA and other components of the ribosomal unit. More detail on the initiation and beginning portion of these processes can be found in the "Biosynthesis" section. Function Universally conserved secondary structural elements in rRNA among different species show that these sequences are some of the oldest discovered. They serve critical roles in forming the catalytic sites of translation of mRNA. During translation of mRNA, rRNA functions to bind both mRNA and tRNA to facilitate the process of translating mRNA's codon sequence into amino acids. rRNA initiates the catalysis of protein synthesis when tRNA is sandwiched between the SSU and LSU. In the SSU, the mRNA interacts with the anticodons of the tRNA. In the LSU, the amino acid acceptor stem of the tRNA interacts with the LSU rRNA. The ribosome catalyzes ester-amide exchange, transferring the C-terminus of a nascent peptide from a tRNA to the amine of an amino acid. These processes are able to occur due to sites within the ribosome in which these molecules can bind, formed by the rRNA stem-loops. A ribosome has three of these binding sites called the A, P and E sites: In general, the A (aminoacyl) site contains an aminoacyl-tRNA (a tRNA esterified to an amino acid on the 3' end). The P (peptidyl) site contains a tRNA esterified to the nascent peptide. The free amino (NH2) group of the A site tRNA attacks the ester linkage of P site tRNA, causing transfer of the nascent peptide to the amino acid in the A site. This reaction is takes place in the peptidyl transferase center The E (exit) site contains a tRNA that has been discharged, with a free 3' end (with no amino acid or nascent peptide). A single mRNA can be translated simultaneously by multiple ribosomes. This is called a polysome. In prokaryotes, much work has been done to further identify the importance of rRNA in translation of mRNA. For example, it has been found that the A site consists primarily of 16S rRNA. Apart from various protein elements that interact with tRNA at this site, it is hypothesized that if these proteins were removed without altering ribosomal structure, the site would continue to function normally. In the P site, through the observation of crystal structures it has been shown the 3' end of 16s rRNA can fold into the site as if a molecule of mRNA. This results in intermolecular interactions that stabilize the subunits. Similarly, like the A site, the P site primarily contains rRNA with few proteins. The peptidyl transferase center, for example, is formed by nucleotides from the 23S rRNA subunit. In fact, studies have shown that the peptidyl transferase center contains no proteins, and is entirely initiated by the presence of rRNA. Unlike the A and P sites, the E site contains more proteins. Because proteins are not essential for the functioning of the A and P sites, the E site molecular composition shows that it is perhaps evolved later. In primitive ribosomes, it is likely that tRNAs exited from the P site. Additionally, it has been shown that E-site tRNA bind with both the 16S and 23S rRNA subunits. 
Subunits and associated ribosomal RNA Both prokaryotic and eukaryotic ribosomes can be broken down into two subunits, one large and one small. The exemplary species used in the table below for their respective rRNAs are the bacterium Escherichia coli (prokaryote) and human (eukaryote). Note that "nt" represents the length of the rRNA type in nucleotides and the "S" (such as in "16S) represents Svedberg units. S units of the subunits (or the rRNAs) cannot simply be added because they represent measures of sedimentation rate rather than of mass. The sedimentation rate of each subunit is affected by its shape, as well as by its mass. The nt units can be added as these represent the integer number of units in the linear rRNA polymers (for example, the total length of the human rRNA = 7216 nt). Gene clusters coding for rRNA are commonly called "ribosomal DNA" or rDNA (note that the term seems to imply that ribosomes contain DNA, which is not the case). In prokaryotes In prokaryotes a small 30S ribosomal subunit contains the 16S ribosomal RNA. The large 50S ribosomal subunit contains two rRNA species (the 5S and 23S ribosomal RNAs). Therefore it can be deduced that in both bacteria and archaea there is one rRNA gene that codes for all three rRNA types :16S, 23S and 5S. Bacterial 16S ribosomal RNA, 23S ribosomal RNA, and 5S rRNA genes are typically organized as a co-transcribed operon. As shown by the image in this section, there is an internal transcribed spacer between 16S and 23S rRNA genes. There may be one or more copies of the operon dispersed in the genome (for example, Escherichia coli has seven). Typically in bacteria there are between one and fifteen copies. Archaea contains either a single rRNA gene operon or up to four copies of the same operon. The 3' end of the 16S ribosomal RNA (in a ribosome) recognizes a sequence on the 5' end of mRNA called the Shine-Dalgarno sequence. In eukaryotes In contrast, eukaryotes generally have many copies of the rRNA genes organized in tandem repeats. In humans, approximately 300–400 repeats are present in five clusters, located on chromosomes 13 (RNR1), 14 (RNR2), 15 (RNR3), 21 (RNR4) and 22 (RNR5). Diploid humans have 10 clusters of genomic rDNA which in total make up less than 0.5% of the human genome. It was previously accepted that repeat rDNA sequences were identical and served as redundancies or failsafes to account for natural replication errors and point mutations. However, sequence variation in rDNA (and subsequently rRNA) in humans across multiple chromosomes has been observed, both within and between human individuals. Many of these variations are palindromic sequences and potential errors due to replication. Certain variants are also expressed in a tissue-specific manner in mice. Mammalian cells have 2 mitochondrial (12S and 16S) rRNA molecules and 4 types of cytoplasmic rRNA (the 28S, 5.8S, 18S, and 5S subunits). The 28S, 5.8S, and 18S rRNAs are encoded by a single transcription unit (45S) separated by 2 internally transcribed spacers. The first spacer corresponds to the one found in bacteria and archaea, and the other spacer is an insertion into what was the 23S rRNA in prokaryotes. The 45S rDNA is organized into 5 clusters (each has 30–40 repeats) on chromosomes 13, 14, 15, 21, and 22. These are transcribed by RNA polymerase I. The DNA for the 5S subunit occurs in tandem arrays (~200–300 true 5S genes and many dispersed pseudogenes), the largest one on the chromosome 1q41-42. 5S rRNA is transcribed by RNA polymerase III. 
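The comparison table referred to earlier in this section did not survive extraction. For orientation, the values usually quoted for the two exemplary species are summarized here; these are standard reference lengths and may differ slightly from the figures in the original table. Prokaryotic ribosome (E. coli), 70S overall: large subunit 50S with 23S rRNA (about 2904 nt) and 5S rRNA (about 120 nt); small subunit 30S with 16S rRNA (about 1542 nt). Eukaryotic ribosome (human), 80S overall: large subunit 60S with 28S rRNA (about 5070 nt), 5.8S rRNA (about 156 nt) and 5S rRNA (about 121 nt); small subunit 40S with 18S rRNA (about 1869 nt). As a check on the arithmetic mentioned above, 1869 + 156 + 5070 + 121 = 7216 nt, the total length quoted for human rRNA.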
The 18S rRNA in most eukaryotes is in the small ribosomal subunit, and the large subunit contains three rRNA species (the 5S, 5.8S and 28S in mammals, 25S in plants, rRNAs). In flies, the large subunit contains four rRNA species instead of three with a split in the 5.8S rRNA that presents a shorter 5.8S subunit (123 nt) and a 30 nucleotide subunit named the 2S rRNA. Both fragments are separated by an internally transcribed spacer of 28 nucleotides. Since the 2S rRNA is small and highly abundant, its presence can interfere with construction of sRNA libraries and compromise the quantification of other sRNAs. The 2S subunit is retrieved in fruit fly and dark-winged fungus gnat species but absent from mosquitoes. The tertiary structure of the small subunit ribosomal RNA (SSU rRNA) has been resolved by X-ray crystallography. The secondary structure of SSU rRNA contains 4 distinct domains—the 5', central, 3' major and 3' minor domains. A model of the secondary structure for the 5' domain (500-800 nucleotides) is shown. Biosynthesis In eukaryotes As the building-blocks for the organelle, production of rRNA is ultimately the rate-limiting step in the synthesis of a ribosome. In the nucleolus, rRNA is synthesized by RNA polymerase I using the specialty genes (rDNA) that encode for it, which are found repeatedly throughout the genome. The genes coding for 18S, 28S and 5.8S rRNA are located in the nucleolus organizer region and are transcribed into large precursor rRNA (pre-rRNA) molecules by RNA polymerase I. These pre-rRNA molecules are separated by external and internal spacer sequences and then methylated, which is key for later assembly and folding. After separation and release as individual molecules, assembly proteins bind to each naked rRNA strand and fold it into its functional form using cooperative assembly and progressive addition of more folding proteins as needed. The exact details of how the folding proteins bind to the rRNA and how correct folding is achieved remains unknown. The rRNA complexes are then further processed by reactions involving exo- and endo-nucleolytic cleavages guided by snoRNA (small nucleolar RNAs) in complex with proteins. As these complexes are compacted together to form a cohesive unit, interactions between rRNA and surrounding ribosomal proteins are constantly remodeled throughout assembly in order to provide stability and protect binding sites. This process is referred to as the "maturation" phase of the rRNA lifecycle. The modifications that occur during maturation of rRNA have been found to contribute directly to control of gene expression by providing physical regulation of translational access of tRNA and mRNA. Some studies have found that extensive methylation of various rRNA types is also necessary during this time to maintain ribosome stability. The genes for 5S rRNA are located inside the nucleolus and are transcribed into pre-5S rRNA by RNA polymerase III. The pre-5S rRNA enters the nucleolus for processing and assembly with 28S and 5.8S rRNA to form the LSU. 18S rRNA forms the SSUs by combining with numerous ribosomal proteins. Once both subunits are assembled, they are individually exported into the cytoplasm to form the 80S unit and begin initiation of translation of mRNA. Ribosomal RNA is non-coding and is never translated into proteins of any kind: rRNA is only transcribed from rDNA and then matured for use as a structural building block for ribosomes. 
Transcribed rRNA is bound to ribosomal proteins to form the subunits of ribosomes and acts as the physical structure that pushes mRNA and tRNA through the ribosome to process and translate them. Eukaryotic regulation Synthesis of rRNA is up-regulated and down-regulated to maintain homeostasis by a variety of processes and interactions: The kinase AKT indirectly promotes synthesis of rRNA as RNA polymerase I is AKT-dependent. Certain angiogenic ribonucleases, such as angiogenin (ANG), can translocate and accumulate in the nucleolus. When the concentration of ANG becomes too high, some studies have found that ANG can bind to the promoter region of rDNA and unnecessarily increase rRNA transcription. This can be damaging to the nucleolus and can even lead to unchecked transcription and cancer. During times of cellular glucose restriction, AMP-activated protein kinase (AMPK) discourages metabolic processes that consume energy but are non-essential. As a result, it is capable of phosphorylating RNA polymerase I (at the Ser-635 site) in order to down-regulate rRNA synthesis by disrupting transcription initiation. Impairment or removal of more than one pseudouridine or 2′-O-methylation region from the ribosome decoding center significantly reduces the rate of translation by reducing the rate of incorporation of new amino acids. Formation of heterochromatin is essential to silencing rRNA transcription, without which ribosomal RNA is synthesized unchecked and greatly decreases the lifespan of the organism. In prokaryotes Similar to eukaryotes, the production of rRNA is the rate-limiting step in the prokaryotic synthesis of a ribosome. In E. coli, it has been found that rRNA is transcribed from the two promoters P1 and P2 found within seven different rrn operons. The P1 promoter is specifically responsible for regulating rRNA synthesis during moderate to high bacterial growth rates. Because the transcriptional activity of this promoter is directly proportional to the growth rate, it is primarily responsible for rRNA regulation. An increased rRNA concentration serves as a negative feedback mechanism to ribosome synthesis. High NTP concentration has been found to be required for efficient transcription of the rrn P1 promoters. They are thought to form stabilizing complexes with RNA polymerase and the promoters. In bacteria specifically, this association of high NTP concentration with increased rRNA synthesis provides a molecular explanation as to why ribosomal and thus protein synthesis is dependent on growth rate. A low growth rate yields lower rRNA/ribosomal synthesis rates while a higher growth rate yields a higher rRNA/ribosomal synthesis rate. This allows a cell to save energy or increase its metabolic activity depending on its needs and available resources. In prokaryotic cells, each rRNA gene or operon is transcribed into a single RNA precursor that includes 16S, 23S, 5S rRNA and tRNA sequences along with transcribed spacers. The RNA processing then begins before the transcription is complete. During processing reactions, the rRNAs and tRNAs are released as separate molecules. Prokaryotic regulation Because of the vital role rRNA plays in the cell physiology of prokaryotes, there is much overlap in rRNA regulation mechanisms. 
At the transcriptional level, there are both positive and negative effectors of rRNA transcription that facilitate a cell's maintenance of homeostasis: An UP element upstream of the rrn P1 promoter can bind a subunit of RNA polymerase, thus promoting transcription of rRNA. Transcription factors such as FIS bind upstream of the promoter and interact with RNA polymerase, which facilitates transcription. Anti-termination factors bind downstream of the rrn P2 promoter, preventing premature transcription termination. Due to the stringent response, when the availability of amino acids is low, ppGpp (a negative effector) can inhibit transcription from both the P1 and P2 promoters. Degradation Ribosomal RNA is quite stable in comparison to other common types of RNA and persists for longer periods of time in a healthy cellular environment. Once assembled into functional units, ribosomal RNA within ribosomes is stable in the stationary phase of the cell life cycle for many hours. Degradation can be triggered via "stalling" of a ribosome, a state that occurs when the ribosome recognizes faulty mRNA or encounters other processing difficulties that cause translation by the ribosome to cease. Once a ribosome stalls, a specialized pathway on the ribosome is initiated to target the entire complex for disassembly. In eukaryotes As with any protein or RNA, rRNA production is prone to errors resulting in the production of non-functional rRNA. To correct this, the cell allows for degradation of rRNA through the non-functional rRNA decay (NRD) pathway. Much of the research on this topic was conducted on eukaryotic cells, specifically Saccharomyces cerevisiae yeast. Currently, only a basic understanding of how cells are able to target functionally defective ribosomes for ubiquitination and degradation in eukaryotes is available. The NRD pathway for the 40S subunit may be independent or separate from the NRD pathway for the 60S subunit. It has been observed that certain genes were able to affect degradation of certain pre-RNAs, but not others. Numerous proteins are involved in the NRD pathway, such as Mms1p and Rtt101p, which are believed to complex together to target ribosomes for degradation. Mms1p and Rtt101p are found to bind together and Rtt101p is believed to recruit a ubiquitin E3 ligase complex, allowing for the non-functional ribosomes to be ubiquitinated before being degraded. Prokaryotes lack a homolog for Mms1, so it is unclear how prokaryotes are able to degrade non-functional rRNAs. The growth rate of eukaryotic cells did not seem to be significantly affected by the accumulation of non-functional rRNAs. In prokaryotes Although there is far less research available on ribosomal RNA degradation in prokaryotes in comparison to eukaryotes, there has still been interest in whether bacteria follow a similar degradation scheme in comparison to the NRD in eukaryotes. Much of the research done for prokaryotes has been conducted on Escherichia coli. Many differences were found between eukaryotic and prokaryotic rRNA degradation, leading researchers to believe that the two degrade using different pathways. Certain mutations in rRNA that were able to trigger rRNA degradation in eukaryotes were unable to do so in prokaryotes. Point mutations in a 23S rRNA would cause both 23S and 16S rRNAs to be degraded, in comparison to eukaryotes, in which mutations in one subunit would only cause that subunit to be degraded. 
Researchers found that removal of a whole helix structure (H69) from the 23S rRNA did not trigger its degradation. This led them to believe that H69 was critical for endonucleases to recognize and degrade the mutated rRNA. Sequence conservation and stability Due to the prevalent and unwavering nature of rRNA across all organisms, the study of its resistance to gene transfer, mutation, and alteration without destruction of the organism has become a popular field of interest. Ribosomal RNA genes have been found to be tolerant to modification and incursion. When the rRNA sequence is altered, cells have been found to become compromised and quickly cease normal function. These key traits of rRNA have become especially important for gene database projects (comprehensive online resources such as SILVA or SINA) where alignment of ribosomal RNA sequences from across the different biologic domains greatly eases "taxonomic assignment, phylogenetic analysis and the investigation of microbial diversity." Examples of resilience: Addition of large, nonsensical RNA fragments into many parts of the 16S rRNA unit does not observably alter the function of the ribosomal unit as a whole. The non-coding RNA RD7 has the capability to alter processing of rRNA to make the molecules resistant to degradation by carboxylic acid. This is a crucial mechanism in maintaining rRNA concentrations during active growth when acid build-up (due to the substrate phosphorylation required to produce ATP) can become toxic to intracellular functions. Insertion of hammerhead ribozymes that are capable of cis-cleavages along 16S rRNA greatly inhibits function and diminishes stability. While most cellular functions degrade heavily after only a short period of exposure to hypoxic environments, rRNA remains un-degraded and resolved after six days of prolonged hypoxia. Only after such an extended period of time do rRNA intermediates (indicative of degradation finally occurring) begin to present themselves. Significance Ribosomal RNA characteristics are important in evolution, and thus in taxonomy and medicine. rRNA is one of only a few gene products present in all cells. For this reason, genes that encode the rRNA (rDNA) are sequenced to identify an organism's taxonomic group, calculate related groups, and estimate rates of species divergence. As a result, many thousands of rRNA sequences are known and stored in specialized databases such as RDP-II and SILVA. Alterations to rRNA are what allow certain disease-causing bacteria, such as Mycobacterium tuberculosis (the bacterium that causes tuberculosis), to develop extreme drug resistance. Due to similar issues, this has become a prevalent problem in veterinary medicine where the main method for handling bacterial infection in pets is administration of drugs that attack the peptidyl-transferase centre (PTC) of the bacterial ribosome. Mutations in 23S rRNA have created complete resistance to these drugs as they operate together in an unknown fashion to bypass the PTC entirely. rRNA is the target of numerous clinically relevant antibiotics: chloramphenicol, erythromycin, kasugamycin, micrococcin, paromomycin, linezolid, alpha-sarcin, spectinomycin, streptomycin, and thiostrepton. rRNA has been shown to be the origin of species-specific microRNAs, like miR-663 in humans and miR-712 in mice. These particular miRNAs originate from the internal transcribed spacers of the rRNA. 
Human genes 45S: RNR1, RNR2, RNR3, RNR4, RNR5; (unclustered) RNA18SN1, RNA18SN2, RNA18SN3, RNA18SN4, RNA18SN5, RNA28SN1, RNA28SN2, RNA28SN3, RNA28SN4, RNA28SN5, RNA45SN1, RNA45SN2, RNA45SN3, RNA45SN4, RNA45SN5, RNA5-8SN1, RNA5-8SN2, RNA5-8SN3, RNA5-8SN4, RNA5-8SN5 5S: RNA5S1, RNA5S2, RNA5S3, RNA5S4, RNA5S5, RNA5S6, RNA5S7, RNA5S8, RNA5S9, RNA5S10, RNA5S11, RNA5S12, RNA5S13, RNA5S14, RNA5S15, RNA5S16, RNA5S17 Mt: MT-RNR1, MT-TV (co-opted), MT-RNR2 See also Ribotyping Diazaborine B, a maturation inhibitor of rRNAs for the large ribosomal subunit References External links 16S rRNA, BioMineWiki Ribosomal Database Project II SILVA rRNA Database Project (also includes Eukaryotes (18S) and LSU (23S/28S)) Video: rRNA: sequence, function & synthesis Halococcus morrhuae (archaebacterium) 5S rRNA Protein biosynthesis RNA Non-coding RNA Ribozymes
Ribosomal RNA
[ "Chemistry" ]
6,593
[ "Catalysis", "Protein biosynthesis", "Gene expression", "Biosynthesis", "Ribozymes" ]
1,718,317
https://en.wikipedia.org/wiki/Lorenz%20gauge%20condition
In electromagnetism, the Lorenz gauge condition or Lorenz gauge (after Ludvig Lorenz) is a partial gauge fixing of the electromagnetic vector potential by requiring ∂_μ A^μ = 0. The name is frequently confused with Hendrik Lorentz, who has given his name to many concepts in this field. (See, however, the Note added below for a different interpretation.) The condition is Lorentz invariant. The Lorenz gauge condition does not completely determine the gauge: one can still make a gauge transformation A^μ → A^μ + ∂^μ f, where ∂^μ is the four-gradient and f is any harmonic scalar function: that is, a scalar function obeying ∂_μ ∂^μ f = 0, the equation of a massless scalar field. The Lorenz gauge condition is used to eliminate the redundant spin-0 component in Maxwell's equations when these are used to describe a massless spin-1 quantum field. It is also used for massive spin-1 fields where the concept of gauge transformations does not apply at all. Description In electromagnetism, the Lorenz condition is generally used in calculations of time-dependent electromagnetic fields through retarded potentials. The condition is A^μ_{,μ} = ∂_μ A^μ = 0, where A^μ is the four-potential, the comma denotes a partial differentiation and the repeated index indicates that the Einstein summation convention is being used. The condition has the advantage of being Lorentz invariant. It still leaves substantial gauge degrees of freedom. In ordinary vector notation and SI units, the condition is ∇ · A + (1/c²) ∂φ/∂t = 0, where A is the magnetic vector potential and φ is the electric potential; see also gauge fixing. In Gaussian units the condition is ∇ · A + (1/c) ∂φ/∂t = 0. A quick justification of the Lorenz gauge can be found using Maxwell's equations and the relation between the magnetic vector potential and the magnetic field: B = ∇ × A. Therefore, ∇ × (E + ∂A/∂t) = 0. Since the curl is zero, that means there is a scalar function φ such that E + ∂A/∂t = −∇φ. This gives a well known equation for the electric field: E = −∇φ − ∂A/∂t. This result can be plugged into the Ampère–Maxwell equation, ∇ × B = μ₀J + (1/c²) ∂E/∂t. This leaves ∇(∇ · A + (1/c²) ∂φ/∂t) = μ₀J − (1/c²) ∂²A/∂t² + ∇²A. To have Lorentz invariance, the time derivatives and spatial derivatives must be treated equally (i.e. of the same order). Therefore, it is convenient to choose the Lorenz gauge condition, which makes the left hand side zero and gives the result □A = (1/c²) ∂²A/∂t² − ∇²A = μ₀J. A similar procedure with a focus on the electric scalar potential and making the same gauge choice will yield □φ = (1/c²) ∂²φ/∂t² − ∇²φ = ρ/ε₀. These are simpler and more symmetric forms of the inhomogeneous Maxwell's equations. Here c is the vacuum velocity of light, and □ is the d'Alembertian operator with the sign convention used above, □ = (1/c²) ∂²/∂t² − ∇². These equations are not only valid under vacuum conditions, but also in polarized media, if ρ and J are source density and circulation density, respectively, of the electromagnetic induction fields E and B calculated as usual from φ and A by the equations E = −∇φ − ∂A/∂t and B = ∇ × A. The explicit solutions for φ and A – unique, if all quantities vanish sufficiently fast at infinity – are known as retarded potentials. History When originally published in 1867, Lorenz's work was not received well by James Clerk Maxwell. Maxwell had eliminated the Coulomb electrostatic force from his derivation of the electromagnetic wave equation since he was working in what would nowadays be termed the Coulomb gauge. The Lorenz gauge hence contradicted Maxwell's original derivation of the EM wave equation by introducing a retardation effect to the Coulomb force and bringing it inside the EM wave equation alongside the time varying electric field, which was introduced in Lorenz's paper "On the identity of the vibrations of light with electrical currents". 
Lorenz's work was the first use of symmetry to simplify Maxwell's equations after Maxwell himself published his 1865 paper. In 1888, retarded potentials came into general use after Heinrich Rudolf Hertz's experiments on electromagnetic waves. In 1895, a further boost to the theory of retarded potentials came after J. J. Thomson's interpretation of data for electrons (after which investigation into electrical phenomena changed from time-dependent electric charge and electric current distributions over to moving point charges). Note added on 26 November 2024: It should be pointed out that Lorenz actually derived the 'condition' from postulated integral expressions for the potentials (nowadays known as retarded potentials), whereas Lorentz (and before him Emil Wiechert) imposed it on the potentials to fix the gauge (see, e.g., his 1904 Encyclopedia article on electron theory). So Lorenz's equation is not a real condition but a mathematical result. It is therefore misleading to attribute the gauge condition to Lorenz. See also Gauge fixing References External links and further reading General Further reading See also History Electromagnetism Concepts in physics
Lorenz gauge condition
[ "Physics" ]
923
[ "Electromagnetism", "Physical phenomena", "Fundamental interactions", "nan" ]
1,720,933
https://en.wikipedia.org/wiki/Surface%20gravity
The surface gravity, g, of an astronomical object is the gravitational acceleration experienced at its surface at the equator, including the effects of rotation. The surface gravity may be thought of as the acceleration due to gravity experienced by a hypothetical test particle which is very close to the object's surface and which, in order not to disturb the system, has negligible mass. For objects where the surface is deep in the atmosphere and the radius not known, the surface gravity is given at the 1 bar pressure level in the atmosphere. Surface gravity is measured in units of acceleration, which, in the SI system, are meters per second squared. It may also be expressed as a multiple of the Earth's standard surface gravity, which is equal to 9.80665 m/s². In astrophysics, the surface gravity may be expressed as log g, which is obtained by first expressing the gravity in cgs units, where the unit of acceleration and surface gravity is centimeters per second squared (cm/s2), and then taking the base-10 logarithm of the cgs value of the surface gravity. Therefore, the surface gravity of Earth could be expressed in cgs units as 980.665 cm/s2, and taking the base-10 logarithm ("log g") of 980.665 gives 2.992 as "log g". The surface gravity of a white dwarf is very high, and of a neutron star even higher. A white dwarf's surface gravity is around 100,000 g (about 10⁶ m/s²), whilst the neutron star's compactness gives it a surface gravity with typical values of the order of 10¹² m/s² (that is, more than 10¹¹ times that of Earth). One measure of such immense gravity is that neutron stars have an escape velocity of around 100,000 km/s, about a third of the speed of light. Since black holes do not have a surface, the surface gravity is not defined. Relationship of surface gravity to mass and radius In the Newtonian theory of gravity, the gravitational force exerted by an object is proportional to its mass: an object with twice the mass produces twice as much force. Newtonian gravity also follows an inverse square law, so that moving an object twice as far away divides its gravitational force by four, and moving it ten times as far away divides it by 100. This is similar to the intensity of light, which also follows an inverse square law: as the distance increases, the light appears dimmer. Generally speaking, this can be understood as geometric dilution corresponding to point-source radiation into three-dimensional space. A large object, such as a planet or star, will usually be approximately round, approaching hydrostatic equilibrium (where all points on the surface have the same amount of gravitational potential energy). On a small scale, higher parts of the terrain are eroded, with eroded material deposited in lower parts of the terrain. On a large scale, the planet or star itself deforms until equilibrium is reached. For most celestial objects, the result is that the planet or star in question can be treated as a near-perfect sphere when the rotation rate is low. However, for young, massive stars, the equatorial azimuthal velocity can be quite high—up to 200 km/s or more—causing a significant amount of equatorial bulge. Examples of such rapidly rotating stars include Achernar, Altair, Regulus A and Vega. The fact that many large celestial objects are approximately spheres makes it easier to calculate their surface gravity. According to the shell theorem, the gravitational force outside a spherically symmetric body is the same as if its entire mass were concentrated in the center, as was established by Sir Isaac Newton. 
Therefore, the surface gravity of a planet or star with a given mass will be approximately inversely proportional to the square of its radius, and the surface gravity of a planet or star with a given average density will be approximately proportional to its radius. For example, the recently discovered planet, Gliese 581 c, has at least 5 times the mass of Earth, but is unlikely to have 5 times its surface gravity. If its mass is no more than 5 times that of the Earth, as is expected, and if it is a rocky planet with a large iron core, it should have a radius approximately 50% larger than that of Earth. Gravity on such a planet's surface would be approximately 2.2 times as strong as on Earth. If it is an icy or watery planet, its radius might be as large as twice the Earth's, in which case its surface gravity might be no more than 1.25 times as strong as the Earth's. These proportionalities may be expressed by the formula g = m/r², where g is the surface gravity of an object, expressed as a multiple of the Earth's, m is its mass, expressed as a multiple of the Earth's mass, and r its radius, expressed as a multiple of the Earth's (mean) radius (6,371 km). For instance, Mars has a mass of 6.42 × 10²³ kg = 0.107 Earth masses and a mean radius of 3,390 km = 0.532 Earth radii. The surface gravity of Mars is therefore approximately 0.107/0.532² ≈ 0.38 times that of Earth. Without using the Earth as a reference body, the surface gravity may also be calculated directly from Newton's law of universal gravitation, which gives the formula g = GM/r², where M is the mass of the object, r is its radius, and G is the gravitational constant. If ρ denotes the mean density of the object, this can also be written as g = (4π/3)Gρr, so that, for fixed mean density, the surface gravity is proportional to the radius r. Solving for mass, this equation can be written as M = gr²/G. Density, however, is not constant: it increases as the planet grows in size, since planets are not incompressible bodies. That is why the experimental relationship between surface gravity and mass does not grow with the 1/3 power but approximately with the 1/2 power, g ≈ m^(1/2), here with g in times Earth's surface gravity and m in times Earth's mass. In fact, the exoplanets found fulfilling this relationship have been found to be rocky planets. Thus, for rocky planets, density grows with mass as ρ ∝ m^(1/4). Gas giants For gas giant planets such as Jupiter, Saturn, Uranus, and Neptune, the surface gravity is given at the 1 bar pressure level in the atmosphere. It has been found that for giant planets with masses in the range up to 100 times Earth's mass, their surface gravity is nevertheless very similar and close to 1 g, a region named the gravity plateau. Non-spherically symmetric objects Most real astronomical objects are not perfectly spherically symmetric. One reason for this is that they are often rotating, which means that they are affected by the combined effects of gravitational force and centrifugal force. This causes stars and planets to be oblate, which means that their surface gravity is smaller at the equator than at the poles. This effect was exploited by Hal Clement in his SF novel Mission of Gravity, dealing with a massive, fast-spinning planet where gravity was much higher at the poles than at the equator. To the extent that an object's internal distribution of mass differs from a symmetric model, the measured surface gravity may be used to deduce things about the object's internal structure. This fact has been put to practical use since 1915–1916, when Roland Eötvös's torsion balance was used to prospect for oil near the city of Egbell (now Gbely, Slovakia). 
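As a quick numerical check of the proportionality discussed above, the Mars figure can be reproduced directly. This is an illustrative sketch only; the Earth constants used in the second part (G, Earth's mass and mean radius) are standard reference values rather than figures taken from this article.

```python
# Surface gravity in Earth units: g = m / r**2, with mass and radius in Earth units.
mars_mass = 0.107      # Earth masses
mars_radius = 0.532    # Earth radii
print(round(mars_mass / mars_radius**2, 3))   # ~0.378 -> about 0.38 g, as stated above

# Absolute form from Newton's law, g = G*M / R**2 (standard reference constants).
G = 6.674e-11          # m^3 kg^-1 s^-2
M_earth = 5.972e24     # kg
R_earth = 6.371e6      # m
print(round(G * M_earth / R_earth**2, 2))     # ~9.82 m/s^2, close to standard gravity
```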
In 1924, the torsion balance was used to locate the Nash Dome oil fields in Texas. It is sometimes useful to calculate the surface gravity of simple hypothetical objects which are not found in nature. The surface gravity of infinite planes, tubes, lines, hollow shells, cones, and even more unrealistic structures may be used to provide insights into the behavior of real structures. Black holes In relativity, the Newtonian concept of acceleration turns out not to be clear cut. For a black hole, which must be treated relativistically, one cannot define a surface gravity as the acceleration experienced by a test body at the object's surface because there is no surface, although the event horizon is a natural alternative candidate, but this still presents a problem because the acceleration of a test body at the event horizon of a black hole turns out to be infinite in relativity. Because of this, a renormalized value is used that corresponds to the Newtonian value in the non-relativistic limit. The value used is generally the local proper acceleration (which diverges at the event horizon) multiplied by the gravitational time dilation factor (which goes to zero at the event horizon). For the Schwarzschild case, this value is mathematically well behaved for all non-zero values of the mass M and the radial coordinate r. When one talks about the surface gravity of a black hole, one is defining a notion that behaves analogously to the Newtonian surface gravity, but is not the same thing. In fact, the surface gravity of a general black hole is not well defined. However, one can define the surface gravity for a black hole whose event horizon is a Killing horizon. The surface gravity of a static Killing horizon is the acceleration, as exerted at infinity, needed to keep an object at the horizon. Mathematically, if k^a is a suitably normalized Killing vector, then the surface gravity κ is defined by k^a ∇_a k^b = κ k^b, where the equation is evaluated at the horizon. For a static and asymptotically flat spacetime, the normalization should be chosen so that the Killing vector has unit norm as r → ∞. For the Schwarzschild solution, take k to be the time translation Killing vector ∂/∂t, and more generally for the Kerr–Newman solution take k = ∂/∂t + Ω ∂/∂φ, the linear combination of the time translation and axisymmetry Killing vectors which is null at the horizon, where Ω is the angular velocity. Schwarzschild solution Since is a Killing vector implies . In coordinates . Performing a coordinate change to the advanced Eddington–Finkelstein coordinates causes the metric to take the form Under a general change of coordinates the Killing vector transforms as giving the vectors and Considering the entry for gives the differential equation Therefore, the surface gravity for the Schwarzschild solution with mass M is κ = 1/(4M) (κ = c⁴/(4GM) in SI units). Kerr solution The surface gravity for the uncharged, rotating black hole is, simply where is the Schwarzschild surface gravity, and is the spring constant of the rotating black hole. is the angular velocity at the event horizon. This expression gives a simple Hawking temperature of . Kerr–Newman solution The surface gravity for the Kerr–Newman solution is where is the electric charge, is the angular momentum, define to be the locations of the two horizons and . Dynamical black holes Surface gravity for stationary black holes is well defined. This is because all stationary black holes have a horizon that is Killing. Recently, there has been a shift towards defining the surface gravity of dynamical black holes whose spacetime does not admit a timelike Killing vector (field). 
Several definitions have been proposed over the years by various authors, such as peeling surface gravity and Kodama surface gravity. At present, there is no consensus or agreement on which definition, if any, is correct. Semiclassical results indicate that the peeling surface gravity is ill-defined for transient objects formed in the finite time of a distant observer. References External links Newtonian surface gravity Exploratorium – Your Weight on Other Worlds Gravity Black holes General relativity
Surface gravity
[ "Physics", "Astronomy" ]
2,302
[ "Physical phenomena", "Black holes", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "General relativity", "Density", "Theory of relativity", "Stellar phenomena", "Astronomical objects" ]
683,065
https://en.wikipedia.org/wiki/Trisodium%20citrate
Trisodium citrate is a chemical compound with the molecular formula Na3C6H5O7. It is sometimes referred to simply as "sodium citrate", though sodium citrate can refer to any of the three sodium salts of citric acid. It possesses a saline, mildly tart taste, and is a mild alkali. Uses Foods Sodium citrate is chiefly used as a food additive, usually for flavor or as a preservative. Its E number is E331. Sodium citrate is employed as a flavoring agent in certain varieties of club soda. It is common as an ingredient in bratwurst, and is also used in commercial ready-to-drink beverages and drink mixes, contributing a tart flavor. It is found in gelatin mix, ice cream, yogurt, jams, sweets, milk powder, processed cheeses, carbonated beverages, wine, and butter chicken, amongst others. Because the elements in Na3C6H5O7 spell "Na C H O", "Nacho Cheese" is a convenient mnemonic for trisodium citrate's chemical formula. Sodium citrate can be used as an emulsifying stabilizer when making cheese. It allows the cheese to melt without becoming greasy by stopping the fats from separating. Buffering As a conjugate base of a weak acid, citrate can perform as a buffering agent or acidity regulator, resisting changes in pH. It is used to control acidity in some substances, such as gelatin desserts. It can be found in the milk minicontainers used with coffee machines. The compound is the product of antacids, such as Alka-Seltzer, when they are dissolved in water. The pH range of a solution of 5 g/100 ml water at 25 °C is 7.5 to 9.0. It is added to many commercially packaged dairy products to control the pH impact of the gastrointestinal system of humans, mainly in processed products such as cheese and yogurt, although it also has beneficial effects on the physical gel microstructure. Chemistry Sodium citrate is a component in Benedict's qualitative solution, often used in organic analysis to detect the presence of reducing sugars such as glucose. Medicine In 1914, the Belgian doctor Albert Hustin and the Argentine physician and researcher Luis Agote successfully used sodium citrate as an anticoagulant in blood transfusions, with Richard Lewisohn determining its correct concentration in 1915. It continues to be used in blood-collection tubes and for the preservation of blood in blood banks. The citrate ion chelates calcium ions in the blood by forming calcium citrate complexes, disrupting the blood clotting mechanism. Recently, trisodium citrate has also been used as a locking agent in vascath and haemodialysis lines instead of heparin due to its lower risk of systemic anticoagulation. In 2003, Ööpik et al. showed the use of sodium citrate (0.5 g/kg body weight) improved running performance over 5 km by 30 seconds. Sodium citrate is used to relieve discomfort in urinary-tract infections, such as cystitis, to reduce the acidosis seen in distal renal tubular acidosis, and can also be used as an osmotic laxative. It is a major component of the WHO oral rehydration solution. It is used as an antacid, especially prior to anaesthesia, for caesarian section procedures to reduce the risks associated with the aspiration of gastric contents. Boiler descaling Sodium citrate is a particularly effective agent for removal of carbonate scale from boilers without removing them from operation and for cleaning automobile radiators. See also Monosodium citrate Disodium citrate Citric acid References Food acidity regulators Citrates Organic sodium salts Chelating agents E-number additives
Trisodium citrate
[ "Chemistry" ]
828
[ "Salts", "Organic sodium salts", "Edible salt", "Chelating agents", "Process chemicals" ]
683,109
https://en.wikipedia.org/wiki/Triality
In mathematics, triality is a relationship among three vector spaces, analogous to the duality relation between dual vector spaces. Most commonly, it describes those special features of the Dynkin diagram D4 and the associated Lie group Spin(8), the double cover of 8-dimensional rotation group SO(8), arising because the group has an outer automorphism of order three. There is a geometrical version of triality, analogous to duality in projective geometry. Of all simple Lie groups, Spin(8) has the most symmetrical Dynkin diagram, D4. The diagram has four nodes with one node located at the center, and the other three attached symmetrically. The symmetry group of the diagram is the symmetric group S3 which acts by permuting the three legs. This gives rise to an S3 group of outer automorphisms of Spin(8). This automorphism group permutes the three 8-dimensional irreducible representations of Spin(8); these being the vector representation and two chiral spin representations. These automorphisms do not project to automorphisms of SO(8). The vector representation—the natural action of SO(8) (hence Spin(8)) on —consists over the real numbers of Euclidean 8-vectors and is generally known as the "defining module", while the chiral spin representations are also known as "half-spin representations", and all three of these are fundamental representations. No other connected Dynkin diagram has an automorphism group of order greater than 2; for other Dn (corresponding to other even Spin groups, Spin(2n)), there is still the automorphism corresponding to switching the two half-spin representations, but these are not isomorphic to the vector representation. Roughly speaking, symmetries of the Dynkin diagram lead to automorphisms of the Tits building associated with the group. For special linear groups, one obtains projective duality. For Spin(8), one finds a curious phenomenon involving 1-, 2-, and 4-dimensional subspaces of 8-dimensional space, historically known as "geometric triality". The exceptional 3-fold symmetry of the D4 diagram also gives rise to the Steinberg group 3D4. General formulation A duality between two vector spaces over a field is a non-degenerate bilinear form i.e., for each non-zero vector in one of the two vector spaces, the pairing with is a non-zero linear functional on the other. Similarly, a triality between three vector spaces over a field is a non-degenerate trilinear form i.e., each non-zero vector in one of the three vector spaces induces a duality between the other two. By choosing vectors in each on which the trilinear form evaluates to 1, we find that the three vector spaces are all isomorphic to each other, and to their duals. Denoting this common vector space by , the triality may be re-expressed as a bilinear multiplication where each corresponds to the identity element in . The non-degeneracy condition now implies that is a composition algebra. It follows that has dimension 1, 2, 4 or 8. If further and the form used to identify with its dual is positive definite, then is a Euclidean Hurwitz algebra, and is therefore isomorphic to R, C, H or O. Conversely, composition algebras immediately give rise to trialities by taking each equal to the algebra, and contracting the multiplication with the inner product on the algebra to make a trilinear form. An alternative construction of trialities uses spinors in dimensions 1, 2, 4 and 8. The eight-dimensional case corresponds to the triality property of Spin(8). 
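For readers who want the definitions above in symbols, the following summary restates them in one common notation; the symbols V_1, V_2, V_3, F and the product sign used here are chosen for illustration and are not taken from this article.

```latex
% Duality: a non-degenerate bilinear pairing of two vector spaces over a field F;
% non-degenerate means pairing with any fixed non-zero vector gives a non-zero functional.
\[
  B \colon V_1 \times V_2 \to F .
\]
% Triality: a trilinear form, non-degenerate in the sense that fixing a non-zero
% vector in any one factor induces a duality between the remaining two spaces.
\[
  T \colon V_1 \times V_2 \times V_3 \to F .
\]
% Choosing vectors e_i with T(e_1, e_2, e_3) = 1 identifies the three spaces with a
% single space V and turns T into a bilinear multiplication, making V a composition
% algebra, hence of dimension 1, 2, 4 or 8.
\[
  V \times V \to V, \qquad (x, y) \mapsto x \circ y, \qquad \dim V \in \{1, 2, 4, 8\}.
\]
```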
See also Triple product, which may be related to the 4-dimensional triality (on quaternions) References John Frank Adams (1981), Spin(8), Triality, F4 and all that, in "Superspace and supergravity", edited by Stephen Hawking and Martin Roček, Cambridge University Press, pages 435–445. John Frank Adams (1996), Lectures on Exceptional Lie Groups (Chicago Lectures in Mathematics), edited by Zafer Mahmud and Mamora Mimura, University of Chicago Press. Further reading External links Spinors and Trialities by John Baez Triality with Zometool by David Richter Lie groups Spinors
Triality
[ "Mathematics" ]
917
[ "Lie groups", "Mathematical structures", "Algebraic structures" ]
683,116
https://en.wikipedia.org/wiki/SO%288%29
In mathematics, SO(8) is the special orthogonal group acting on eight-dimensional Euclidean space. It may be regarded as either a real or complex simple Lie group of rank 4 and dimension 28. Spin(8) Like all special orthogonal groups SO(n) with n ≥ 2, SO(8) is not simply connected, and, like all SO(n) with n > 2, its fundamental group is isomorphic to Z2. The universal cover of SO(8) is the spin group Spin(8). Center The center of SO(8) is Z2, the diagonal matrices {±I} (as for all SO(2n) with 2n ≥ 4), while the center of Spin(8) is Z2×Z2 (as for all Spin(4n), 4n ≥ 4). Triality SO(8) is unique among the simple Lie groups in that its Dynkin diagram (D4 under the Dynkin classification) possesses a three-fold symmetry. This gives rise to a peculiar feature of Spin(8) known as triality. Related to this is the fact that the two spinor representations, as well as the fundamental vector representation, of Spin(8) are all eight-dimensional (for all other spin groups the spinor representation is either smaller or larger than the vector representation). The triality automorphism of Spin(8) lives in the outer automorphism group of Spin(8), which is isomorphic to the symmetric group S3 that permutes these three representations. The automorphism group acts on the center Z2 × Z2 (which also has automorphism group isomorphic to S3, which may also be considered as the general linear group over the finite field with two elements, S3 ≅ GL(2,2)). When one quotients Spin(8) by one central Z2, breaking this symmetry and obtaining SO(8), the remaining outer automorphism group is only Z2. The triality symmetry acts again on the further quotient SO(8)/Z2. Sometimes Spin(8) appears naturally in an "enlarged" form, as the automorphism group of Spin(8), which breaks up as a semidirect product: Aut(Spin(8)) ≅ PSO(8) ⋊ S3. Unit octonions Elements of SO(8) can be described with unit octonions, analogously to how elements of SO(2) can be described with unit complex numbers and elements of SO(4) can be described with unit quaternions. However, the relationship is more complicated, partly due to the non-associativity of the octonions. A general element in SO(8) can be described as the product of 7 left-multiplications, 7 right-multiplications and also 7 bimultiplications by unit octonions (a bimultiplication being the composition of a left-multiplication and a right-multiplication by the same octonion; it is unambiguously defined because the octonions obey the Moufang identities). It can be shown that an element of SO(8) can be constructed with bimultiplications, by first showing that pairs of reflections through the origin in 8-dimensional space correspond to pairs of bimultiplications by unit octonions. The triality automorphism of Spin(8) described below provides similar constructions with left multiplications and right multiplications. Octonions and triality If and , it can be shown that this is equivalent to , meaning that without ambiguity. A triple of maps that preserve this identity, so that is called an isotopy. If the three maps of an isotopy are in , the isotopy is called an orthogonal isotopy. If , then following the above can be described as the product of bimultiplications of unit octonions, say . Let be the corresponding products of left and right multiplications by the conjugates (i.e., the multiplicative inverses) of the same unit octonions, so , . A simple calculation shows that is an isotopy. As a result of the non-associativity of the octonions, the only other orthogonal isotopy for is . 
As the set of orthogonal isotopies produce a 2-to-1 cover of , they must in fact be . Multiplicative inverses of octonions are two-sided, which means that is equivalent to . This means that a given isotopy can be permuted cyclically to give two further isotopies and . This produces an order 3 outer automorphism of . This "triality" automorphism is exceptional among spin groups. There is no triality automorphism of , as for a given the corresponding maps are only uniquely determined up to sign. Root system Weyl group Its Weyl/Coxeter group has 4! × 8 = 192 elements. Cartan matrix See also Octonions Clifford algebra G2 References (originally published in 1954 by Columbia University Press) Lie groups Octonions
SO(8)
[ "Mathematics" ]
1,068
[ "Lie groups", "Mathematical structures", "Algebraic structures" ]
683,322
https://en.wikipedia.org/wiki/HEPA
HEPA (high efficiency particulate air) filter, also known as a high efficiency particulate arresting filter, is an efficiency standard of air filters. Filters meeting the HEPA standard must satisfy certain levels of efficiency. Common standards require that a HEPA air filter must remove—from the air that passes through—at least 99.95% (ISO, European Standard) or 99.97% (ASME, U.S. DOE) of particles whose diameter is equal to 0.3 μm, with the filtration efficiency increasing for particle diameters both less than and greater than 0.3 μm. HEPA filters capture pollen, dirt, dust, moisture, bacteria (0.2–2.0 μm), viruses (0.02–0.3 μm), and submicron liquid aerosol (0.02–0.5 μm). Some microorganisms, for example, Aspergillus niger, Penicillium citrinum, Staphylococcus epidermidis, and Bacillus subtilis, are captured by HEPA filters with photocatalytic oxidation (PCO). A HEPA filter is also able to capture some viruses and bacteria which are ≤0.3 μm. A HEPA filter is also able to capture floor dust which contains bacteroidia, clostridia, and bacilli. HEPA was commercialized in the 1950s, and the original term became a registered trademark and later a generic trademark for highly efficient filters. HEPA filters are used in applications that require contamination control, such as the manufacturing of hard disk drives, medical devices, semiconductors, nuclear, food and pharmaceutical products, as well as in hospitals, homes, and vehicles. Mechanism HEPA filters are composed of a mat of randomly arranged fibers. The fibers are typically composed of polypropylene or fiberglass with diameters between 0.5 and 2.0 micrometers. Most of the time, these filters are composed of tangled bundles of fine fibers. These fibers create a narrow convoluted pathway through which air passes. When the largest particles are passing through this pathway, the bundles of fibers behave like a kitchen sieve which physically blocks the particles from passing through. However, when smaller particles pass with the air, as the air twists and turns, the smaller particles cannot keep up with the motion of the air and thus they collide with the fibers. The smallest particles have very little inertia and move randomly as a result of collisions with individual air molecules (Brownian motion). Because of their movement, they end up crashing into the fibers. Key factors affecting filter function are fiber diameter, filter thickness, and face velocity, which is the measured air speed at an inlet or outlet of a heating, ventilation and air conditioning (HVAC) system. Face velocity is measured in m/s and can be calculated as the volume flow rate (m3/s) divided by the face area (m2). The air space between HEPA filter fibers is typically much greater than 0.3 μm, yet HEPA filters still capture even the smallest particulate matter at very high efficiency. Unlike sieves or membrane filters, where particles smaller than openings or pores can pass through, HEPA filters are designed to target a range of particle sizes. These particles are trapped (they stick to a fiber) through a combination of the following three mechanisms: Diffusion; particles below 0.3 μm are captured by diffusion in a HEPA filter. This mechanism results from the smallest particles, especially those below 0.1 μm in diameter, colliding with gas molecules. The small particles are effectively blown or bounced around and collide with the filter media fibers. 
This behavior is similar to Brownian motion and raises the probability that a particle will be stopped by either interception or impaction; this mechanism becomes dominant at lower airflow. Interception; particles following a line of flow in the air stream come within one radius of a fiber and adhere to it. Mid size particles are being captured by this process. Impaction; larger particles are unable to avoid fibers by following the curving contours of the air stream and are forced to embed in one of them directly; this effect increases with diminishing fiber separation and higher air flow velocity. Diffusion predominates below the 0.1 μm diameter particle size, whilst impaction and interception predominate above 0.4 μm. In between, near the most penetrating particle size (MPPS) 0.21 μm, both diffusion and interception are comparatively inefficient. Because this is the weakest point in the filter's performance, the HEPA specifications use the retention of particles near this size (0.3 μm) to classify the filter. However it is possible for particles smaller than the MPPS to not have filtering efficiency greater than that of the MPPS. This is due to the fact that these particles can act as nucleation sites for mostly condensation and form particles near the MPPS. Gas filtration HEPA filters are designed to arrest very fine particles effectively, but they do not filter out gasses and odor molecules. Circumstances requiring filtration of volatile organic compounds, chemical vapors, or cigarette, pet or flatulence odors call for the use of an activated carbon (charcoal) or other type of filter instead of or in addition to a HEPA filter. Carbon cloth filters, claimed to be many times more efficient than the granular activated carbon form at adsorption of gaseous pollutants, are known as high efficiency gas adsorption filters (HEGA) and were originally developed by the British Armed Forces as a defense against chemical warfare. Pre-filter and HEPA filter A HEPA bag filter can be used in conjunction with a pre-filter (usually carbon-activated) to extend the usage life of the more expensive HEPA filter. In such setup, the first stage in the filtration process is made up of a pre-filter which removes most of the larger dust, hair, PM10 and pollen particles from the air. The second stage high-quality HEPA filter removes the finer particles that escape from the pre-filter. This is common in air handling units. Specifications HEPA filters, as defined by the United States Department of Energy (DOE) standard adopted by most American industries, remove at least 99.97% of aerosols 0.3 micrometers (μm) in diameter. The filter's minimal resistance to airflow, or pressure drop, is usually specified around at its nominal volumetric flow rate. The specification used in the European Union: European Standard EN 1822-1:2019, from which ISO 29463 is derived, defines several classes of filters by their retention at the given most penetrating particle size (MPPS): Efficient Particulate Air filters (EPA), High Efficiency Particulate Air filters (HEPA), and Ultra Low Particulate Air filters (ULPA). The averaged efficiency of the filter is called "overall", and the efficiency at a specific point is called "local": See also the different classes for air filters for comparison. Specifications for respirators For respirators, MSHA and NIOSH define HEPA as filters blocking ≥ 99.97% of 0.3 micron DOP particles, under 30 CFR 11 and 42 CFR 84. 
Since the transition to 42 CFR 84 in 1995, use of the term HEPA has been deprecated except for powered air-purifying respirators. However, by definition, ANSI Z88.2-2015 considers N100, R100, P100, and HE as HEPA filters. Marketing Some companies use the marketing term "True HEPA" to give consumers assurance that their air filters meet the HEPA standard, although this term has no legal or scientific meaning. Products that are marketed to be "HEPA-type," "HEPA-like," "HEPA-style" or "99% HEPA" do not satisfy the HEPA standard and may not have been tested in independent laboratories. Although some such filters may come reasonably close to HEPA standards, others fall significantly short. Efficacy and safety In general terms (and allowing for some variation depending on factors such as the air-flow rate, the physical properties of the particles being filtered, as well as engineering details of the entire filtration-system design and not just the filter-media properties), HEPA filters experience the most difficulty in capturing particles in the size range of 0.15 to 0.2 μm. HEPA filtration works by mechanical means, unlike ionic and ozone treatment technologies, which use negative ions and ozone gas respectively. As a result, the likelihood of triggering pulmonary side-effects such as asthma and allergies is much lower with HEPA purifiers. To ensure that a HEPA filter is working efficiently, the filters should be inspected and changed at least every six months in commercial settings. In residential settings, and depending on the general ambient air quality, these filters can be changed every two to three years. Failing to change a HEPA filter in a timely fashion will result in it putting stress on the machine or system and not removing particles from the air properly. Additionally, depending on the gasketing materials chosen in the design of the system, a clogged HEPA filter can result in extensive bypassing of airflow around the filter. Applications Biomedical HEPA filters are critical in the prevention of the spread of airborne bacterial and viral organisms and, therefore, infection. Typically, medical-use HEPA filtration systems also incorporate high-energy ultraviolet light units or panels with anti-microbial coating to kill off the live bacteria and viruses trapped by the filter media. Some of the best-rated HEPA units have an efficiency rating of 99.995%, which assures a very high level of protection against airborne disease transmission. COVID-19 HEPA filters are capable of removing viruses, including SARS-CoV-2, the virus that causes COVID-19 (particles of 60–140 nanometer diameter), from the air, and as such saw a surge in adoption during the pandemic in order to mitigate infection risks. To combat supply chain and cost issues hindering adoption of HEPA filters during the COVID-19 pandemic, a professor at the University of California, Davis, created a simple do-it-yourself air purifier design called the Corsi–Rosenthal box. It involves arranging 4 HEPA filters in a cubic shape, the bottom being made out of cardboard, sealing the filter sides with tape and adding a fan on top. In addition, the COVID-19 pandemic resulted in a surge of new air purifier products from new and established brands such as Dyson or Xiaomi hitting the markets. 
For a HEPA filter in a vacuum cleaner to be effective, the vacuum cleaner must be designed so that all the air drawn into the machine is expelled through the filter, with none of the air leaking past it. This is often referred to as "Sealed HEPA" or sometimes the more vague "True HEPA". Vacuum cleaners simply labeled "HEPA" may have a HEPA filter, but not all air necessarily passes through it. Finally, vacuum cleaner filters marketed as "HEPA-like" will typically use a filter of a similar construction to HEPA, but without the filtering efficiency. Because of the extra density of a true HEPA filter, HEPA vacuum cleaners require more powerful motors to provide adequate cleaning power. Some newer models claim to be better than the earlier ones with the inclusion of "washable" filters. Generally, washable true HEPA filters are expensive. A high-quality HEPA filter can trap 99.97% of dust particles that are 0.3 microns in diameter. For comparison, a human hair is about 50 to 150 microns in diameter. So, a true HEPA filter is effectively trapping particles several hundred times smaller than the width of a human hair. Some manufacturers claim filter standards such as "HEPA 4," without explaining the meaning behind them. This refers to their Minimum Efficiency Reporting Value (MERV) rating. These ratings are used to rate the ability of an air cleaner filter to remove dust from the air as it passes through the filter. MERV is a standard used to measure the overall efficiency of a filter. The MERV scale ranges from 1 to 16, and measures a filter's ability to remove particles from 10 to 0.3 micrometer in size. Filters with higher ratings not only remove more particles from the air, but they also remove smaller particles. Heating, ventilation, and air conditioning Heating, ventilation, and air conditioning (HVAC) is technology that uses air filters, such as HEPA filters, to remove pollutants from the air either indoors or in vehicles. Pollutants include smoke, viruses, powders, etc., and can originate either outside or inside. HVAC is used to provide environmental comfort and in polluted cities to maintain health. Vehicles Airlines Modern airliners use HEPA filters to reduce the spread of airborne pathogens in recirculated air. Critics have expressed concern about the effectiveness and state of repair of air filtering systems, since they think that much of the air in an airplane cabin is recirculated. Almost all of the air in a pressurized aircraft is, in fact, brought in from the outside, circulated through the cabin and then exhausted through outflow valves in the rear of the aircraft. About 40 percent of the cabin's air goes through a HEPA filter and the other 60 percent comes from outside the plane. Certified air filters block and capture 99.97 percent of airborne particles. Motor vehicles In 2016, it was announced that the Tesla Model X would have the world's first HEPA-grade filter in a Tesla car. Following the release of the Model X, Tesla has updated the Model S to also have an optional HEPA air filter. History The idea behind the development of the HEPA filter was born from gas masks worn by soldiers fighting in World War II. A piece of paper found inserted into a German gas mask had a remarkably high capture efficiency for chemical smoke. The British Army Chemical Corps duplicated this and began to manufacture it in large quantities for their own service gas masks. They needed another solution for operational headquarters, where individual gas masks were impractical. 
The Army Chemical Corps developed a combination mechanical blower and air purifier unit, which incorporated cellulose-asbestos paper in a deeply-pleated form with spacers between the pleats. It was referred to as an "absolute" air filter and laid the groundwork for further research to come in developing the HEPA filter. The next phase of the HEPA filter was designed in the 1940s and was used in the Manhattan Project to prevent the spread of airborne radioactive contaminants. The US Army Chemical Corps and National Defense Research Committee needed to develop a filter suitable for removing radioactive materials from the air. The Army Chemical Corps asked Nobel Laureate Irving Langmuir to recommend filter test methods and other general recommendations for creating the material to filter out these radioactive particles. He identified 0.3 micron size particles to be the "most penetrating size"—the most difficult and concerning. It was commercialized in the 1950s, and the original term became a registered trademark and later a generic trademark for highly efficient filters. Over the decades filters have evolved to satisfy the higher and higher demands for air quality in various high technology industries, such as aerospace, pharmaceutical industry, hospitals, health care, nuclear fuels, nuclear power, and integrated circuit fabrication. See also – trap particles with high voltage – vacuum cleaner with high efficiency air filter (MERV) – Removes 99.999% of dust, pollen, mold, bacteria, and particles larger than 120 nm (0.12 μm) References Further reading TSI Application Note ITI-041: Mechanisms of Filtration for High Efficiency Fibrous Filters 9382659989 Building biology Air filters Cleanroom technology Gas technologies Indoor air pollution
HEPA
[ "Chemistry", "Engineering" ]
3,360
[ "Building engineering", "Filters", "Cleanroom technology", "Air filters", "Building biology" ]
683,342
https://en.wikipedia.org/wiki/Fuse%20%28electrical%29
In electronics and electrical engineering, a fuse is an electrical safety device that operates to provide overcurrent protection of an electrical circuit. Its essential component is a metal wire or strip that melts when too much current flows through it, thereby stopping or interrupting the current. It is a sacrificial device; once a fuse has operated, it is an open circuit, and must be replaced or rewired, depending on its type. Fuses have been used as essential safety devices from the early days of electrical engineering. Today there are thousands of different fuse designs which have specific current and voltage ratings, breaking capacity, and response times, depending on the application. The time and current operating characteristics of fuses are chosen to provide adequate protection without needless interruption. Wiring regulations usually define a maximum fuse current rating for particular circuits. A fuse can be used to mitigate short circuits, overloading, mismatched loads, or device failure. When a damaged live wire makes contact with a metal case that is connected to ground, a short circuit will form and the fuse will melt. A fuse is an automatic means of removing power from a faulty system, often abbreviated to ADS (automatic disconnection of supply). Circuit breakers can be used as an alternative to fuses, but have significantly different characteristics. History Louis Clément François Breguet recommended the use of reduced-section conductors to protect telegraph stations from lightning strikes; by melting, the smaller wires would protect apparatus and wiring inside the building. A variety of wire or foil fusible elements were in use to protect telegraph cables and lighting installations as early as 1864. A fuse was patented by Thomas Edison in 1890 as part of his electric distribution system. Construction A fuse consists of a metal strip or wire fuse element, of small cross-section compared to the circuit conductors, mounted between a pair of electrical terminals, and (usually) enclosed by a non-combustible housing. The fuse is arranged in series to carry all the charge passing through the protected circuit. The resistance of the element generates heat due to the current flow. The size and construction of the element is (empirically) determined so that the heat produced for a normal current does not cause the element to attain a high temperature. If too high of a current flows, the element rises to a higher temperature and either directly melts, or else melts a soldered joint within the fuse, opening the circuit. The fuse element is made of zinc, copper, silver, aluminum, or alloys among these or other various metals to provide stable and predictable characteristics. The fuse ideally would carry its rated current indefinitely, and melt quickly on a small excess. The element must not be damaged by minor harmless surges of current, and must not oxidize or change its behavior after possibly years of service. The fuse elements may be shaped to increase heating effect. In large fuses, current may be divided between multiple strips of metal. A dual-element fuse may contain a metal strip that melts instantly on a short circuit, and also contain a low-melting solder joint that responds to long-term overload of low values compared to a short circuit. Fuse elements may be supported by steel or nichrome wires, so that no strain is placed on the element, but a spring may be included to increase the speed of parting of the element fragments. 
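As a rough illustration of the element heating described above, here is a minimal sketch; the resistance and current values are made-up examples, not figures from any particular fuse datasheet.

```python
# Joule heating in the fuse element: P = I**2 * R (steady state, before melting).
def element_power(current_a: float, resistance_ohm: float) -> float:
    """Power dissipated in the fuse element, in watts."""
    return current_a ** 2 * resistance_ohm

R_ELEMENT = 0.005  # ohm, hypothetical cold resistance of the element
print(element_power(10, R_ELEMENT))   # 0.5 W at a rated 10 A -> element stays cool
print(element_power(50, R_ELEMENT))   # 12.5 W at a 5x overload -> element heats and melts
```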
The fuse element may be surrounded by air, or by materials intended to speed the quenching of the arc. Silica sand or non-conducting liquids may be used. Characteristics Rated current IN: The maximum current that the fuse can continuously conduct without interrupting the circuit. Time vs current characteristics The speed at which a fuse blows depends on how much current flows through it and the material of which the fuse is made. Manufacturers can provide a plot of current vs time, often plotted on logarithmic scales, to characterize the device and to allow comparison with the characteristics of protective devices upstream and downstream of the fuse. The operating time is not a fixed interval but decreases as the current increases. Fuses are designed to have particular characteristics of operating time compared to current. A standard fuse may require twice its rated current to open in one second, a fast-blow fuse may require twice its rated current to blow in 0.1 seconds, and a slow-blow fuse may require twice its rated current for tens of seconds to blow. Fuse selection depends on the load's characteristics. Semiconductor devices may use a fast or ultrafast fuse as semiconductor devices heat rapidly when excess current flows. The fastest blowing fuses are designed for the most sensitive electrical equipment, where even a short exposure to an overload current could be damaging. Normal fast-blow fuses are the most general purpose fuses. A time-delay fuse (also known as an anti-surge or slow-blow fuse) is designed to allow a current which is above the rated value of the fuse to flow for a short period of time without the fuse blowing. These types of fuse are used on equipment such as motors, which can draw larger than normal currents for up to several seconds while coming up to speed. The I2t value The I2t rating is related to the amount of energy let through by the fuse element when it clears the electrical fault. This term is normally used in short circuit conditions and the values are used to perform co-ordination studies in electrical networks. I2t parameters are provided by charts in manufacturer data sheets for each fuse family. For coordination of fuse operation with upstream or downstream devices, both melting I2t and clearing I2t are specified. The melting I2t is proportional to the amount of energy required to begin melting the fuse element. The clearing I2t is proportional to the total energy let through by the fuse when clearing a fault. The energy is mainly dependent on current and time for fuses as well as the available fault level and system voltage. Since the I2t rating of the fuse is proportional to the energy it lets through, it is a measure of the thermal damage from the heat and magnetic forces that will be produced by a fault. Breaking capacity The breaking capacity is the maximum current that can safely be interrupted by the fuse. This should be higher than the prospective short-circuit current. Miniature fuses may have an interrupting rating only 10 times their rated current. Fuses for small, low-voltage, usually residential, wiring systems are commonly rated, in North American practice, to interrupt 10,000 amperes. Fuses for commercial or industrial power systems must have higher interrupting ratings, with some low-voltage current-limiting high interrupting fuses rated for 300,000 amperes. Fuses for high-voltage equipment, up to 115,000 volts, are rated by the total apparent power (megavolt-amperes, MVA) of the fault level on the circuit. 
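Because the I2t values described above are time integrals of the squared current, they can be estimated numerically from a sampled current waveform. The short Python sketch below is purely illustrative: the fault waveform, sample interval, and melting I2t figure are hypothetical values chosen for the example, not data-sheet numbers.

import math

def i2t(samples, dt):
    # Approximate the I^2.t integral (in A^2.s) of a sampled current
    # waveform with the rectangle rule: sum of i(t)^2 * dt.
    return sum(i * i * dt for i in samples)

# Hypothetical prospective fault current: 50 Hz sine wave, 2 kA peak,
# sampled every 0.1 ms over the first half cycle (10 ms).
dt = 1e-4
fault = [2000.0 * math.sin(2 * math.pi * 50.0 * n * dt) for n in range(100)]

fault_energy = i2t(fault, dt)
print(f"I^2t of the first half cycle: {fault_energy:.0f} A^2.s")

# Purely illustrative melting I^2t for some fuse, not a real data-sheet value.
melting_i2t = 5000.0
if fault_energy > melting_i2t:
    print("the element would begin to melt within the first half cycle")
else:
    print("the element would not yet have begun to melt")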
Some fuses are designated high rupture capacity (HRC) or high breaking capacity (HBC) and are usually filled with sand or a similar material. Low-voltage high rupture capacity (HRC) fuses are used in the area of main distribution boards in low-voltage networks where there is a high prospective short circuit current. They are generally larger than screw-type fuses, and have ferrule cap or blade contacts. High rupture capacity fuses may be rated to interrupt current of 120 kA. HRC fuses are widely used in industrial installations and are also used in the public power grid, e.g. in transformer stations, main distribution boards, or in building junction boxes and as meter fuses. In some countries, because of the high fault current available where these fuses are used, local regulations may permit only trained personnel to change these fuses. Some varieties of HRC fuse include special handling features. Rated voltage The voltage rating of the fuse must be equal to or, greater than, what would become the open-circuit voltage. For example, a glass tube fuse rated at 32 volts would not reliably interrupt current from a voltage source of 120 or 230 V. If a 32 V fuse attempts to interrupt the 120 or 230 V source, an arc may result. Plasma inside the glass tube may continue to conduct current until the current diminishes to the point where the plasma becomes a non-conducting gas. Rated voltage should be higher than the maximum voltage source it would have to disconnect. Connecting fuses in series does not increase the rated voltage of the combination, nor of any one fuse. Medium-voltage fuses rated for a few thousand volts are never used on low voltage circuits, because of their cost and because they cannot properly clear the circuit when operating at very low voltages. Voltage drop The manufacturer may specify the voltage drop across the fuse at rated current. There is a direct relationship between a fuse's cold resistance and its voltage drop value. Once current is applied, resistance and voltage drop of a fuse will constantly grow with the rise of its operating temperature until the fuse finally reaches thermal equilibrium. The voltage drop should be taken into account, particularly when using a fuse in low-voltage applications. Voltage drop often is not significant in more traditional wire type fuses, but can be significant in other technologies such as resettable (PPTC) type fuses. Temperature derating Ambient temperature will change a fuse's operational parameters. A fuse rated for 1 A at 25 °C may conduct up to 10% or 20% more current at −40 °C and may open at 80% of its rated value at 100 °C. Operating values will vary with each fuse family and are provided in manufacturer data sheets. Markings Most fuses are marked on the body or end caps with markings that indicate their ratings. Surface-mount technology "chip type" fuses feature few or no markings, making identification very difficult. Similar appearing fuses may have significantly different properties, identified by their markings. Fuse markings will generally convey the following information, either explicitly as text, or else implicit with the approval agency marking for a particular type: Current rating of the fuse. Voltage rating of the fuse. Time-current characteristic; i.e. fuse speed. Approvals by national and international standards agencies. Manufacturer/part number/series. 
Interrupting rating (breaking capacity) Packages and materials Fuses come in a vast array of sizes and styles to serve in many applications, manufactured in standardised package layouts to make them easily interchangeable. Fuse bodies may be made of ceramic, glass, plastic, fiberglass, molded mica laminates, or molded compressed fibre depending on application and voltage class. Cartridge (ferrule) fuses have a cylindrical body terminated with metal end caps. Some cartridge fuses are manufactured with end caps of different sizes to prevent accidental insertion of the wrong fuse rating in a holder, giving them a bottle shape. Fuses for low voltage power circuits may have bolted blade or tag terminals which are secured by screws to a fuseholder. Some blade-type terminals are held by spring clips. Blade type fuses often require the use of a special purpose extractor tool to remove them from the fuse holder. Renewable fuses have replaceable fuse elements, allowing the fuse body and terminals to be reused if not damaged after a fuse operation. Fuses designed for soldering to a printed circuit board have radial or axial wire leads. Surface mount fuses have solder pads instead of leads. High-voltage fuses of the expulsion type have fiber or glass-reinforced plastic tubes and an open end, and can have the fuse element replaced. Semi-enclosed fuses are fuse wire carriers in which the fusible wire itself can be replaced. The exact fusing current is not as well controlled as an enclosed fuse, and it is extremely important to use the correct diameter and material when replacing the fuse wire, and for these reasons these fuses are slowly falling from favour. These are still used in consumer units in some parts of the world, but are becoming less common. While glass fuses have the advantage of a fuse element visible for inspection purposes, they have a low breaking capacity (interrupting rating), which generally restricts them to applications of 15 A or less at 250 VAC. Ceramic fuses have the advantage of a higher breaking capacity, facilitating their use in circuits with higher current and voltage. Filling a fuse body with sand provides additional cooling of the arc and increases the breaking capacity of the fuse. Medium-voltage fuses may have liquid-filled envelopes to assist in the extinguishing of the arc. Some types of distribution switchgear use fuse links immersed in the oil that fills the equipment. Fuse packages may include a rejection feature such as a pin, slot, or tab, which prevents interchange of otherwise similar appearing fuses. For example, fuse holders for North American class RK fuses have a pin that prevents installation of similar-appearing class H fuses, which have a much lower breaking capacity and a solid blade terminal that lacks the slot of the RK type. Dimensions Fuses can be built with different sized enclosures to prevent interchange of different ratings of fuse. For example, bottle style fuses distinguish between ratings with different cap diameters. Automotive glass fuses were made in different lengths, to prevent high-rated fuses being installed in a circuit intended for a lower rating. Special features Glass cartridge and plug fuses allow direct inspection of the fusible element. Other fuses have other indication methods including: Indicating pin or striker pin — extends out of the fuse cap when the element is blown. Indicating disc — a coloured disc (flush mounted in the end cap of the fuse) falls out when the element is blown. 
Element window — a small window built into the fuse body to provide visual indication of a blown element. External trip indicator — similar function to striker pin, but can be externally attached (using clips) to a compatible fuse. Some fuses allow a special purpose micro switch or relay unit to be fixed to the fuse body. When the fuse element blows, the indicating pin extends to activate the micro switch or relay, which, in turn, triggers an event. Some fuses for medium-voltage applications use two or three separate barrels and two or three fuse elements in parallel. Fuse standards IEC 60269 fuses The International Electrotechnical Commission publishes standard 60269 for low-voltage power fuses. The standard is in four volumes, which describe general requirements, fuses for industrial and commercial applications, fuses for residential applications, and fuses to protect semiconductor devices. The IEC standard unifies several national standards, thereby improving the interchangeability of fuses in international trade. All fuses of different technologies tested to meet IEC standards will have similar time-current characteristics, which simplifies design and maintenance. UL 248 fuses (North America) In the United States and Canada, low-voltage fuses to 1 kV AC rating are made in accordance with Underwriters Laboratories standard UL 248 or the harmonized Canadian Standards Association standard C22.2 No. 248. This standard applies to fuses rated 1 kV or less, AC or DC, and with breaking capacity up to 200 kA. These fuses are intended for installations following Canadian Electrical Code, Part I (CEC), or the National Electrical Code, NFPA 70 (NEC). The standard ampere ratings for fuses (and circuit breakers) in USA/Canada are considered 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90, 100, 110, 125, 150, 175, 200, 225, 250, 300, 350, 400, 450, 500, 600, 700, 800, 1000, 1200, 1600, 2000, 2500, 3000, 4000, 5000, and 6000 amperes. Additional standard ampere ratings for fuses are 1, 3, 6, 10, and 601. UL 248 currently has 19 "parts". UL 248-1 sets the general requirements for fuses, while the latter parts are dedicated to specific fuse sizes (ex: 248-8 for Class J, 248-10 for Class L), or for categories of fuses with unique properties (ex: 248-13 for semiconductor fuses, 248-19 for photovoltaic fuses). The general requirements (248-1) apply except as modified by the supplemental part (248-x). For example, UL 248-19 allows photovoltaic fuses to be rated up to 1500 volts, DC, versus 1000 volts under the general requirements. IEC and UL nomenclature varies slightly. IEC standards refer to a "fuse" as the assembly of a fusible link and a fuse holder. In North American standards, the fuse is the replaceable portion of the assembly, and a fuse link would be a bare metal element for installation in a fuse. Automotive fuses Automotive fuses are used to protect the wiring and electrical equipment for vehicles. There are several different types of automotive fuses and their usage is dependent upon the specific application, voltage, and current demands of the electrical circuit. Automotive fuses can be mounted in fuse blocks, inline fuse holders, or fuse clips. Some automotive fuses are occasionally used in non-automotive electrical applications. Standards for automotive fuses are published by SAE International (formerly known as the Society of Automotive Engineers). 
Automotive fuses can be classified into four distinct categories: Blade fuses Glass tube or Bosch type Fusible links Fuse limiters Most automotive fuses rated at 32 volts are used on circuits rated 24 volts DC and below. Some vehicles use a dual 12/42 V DC electrical system that will require a fuse rated at 58 V DC. High voltage fuses Fuses are used on power systems up to 115,000 volts AC. High-voltage fuses are used to protect instrument transformers used for electricity metering, or for small power transformers where the expense of a circuit breaker is not warranted. A circuit breaker at 115 kV may cost up to five times as much as a set of power fuses, so the resulting saving can be tens of thousands of dollars. In medium-voltage distribution systems, a power fuse may be used to protect a transformer serving 1–3 houses. Pole-mounted distribution transformers are nearly always protected by a fusible cutout, which can have the fuse element replaced using live-line maintenance tools. Medium-voltage fuses are also used to protect motors, capacitor banks and transformers and may be mounted in metal enclosed switchgear, or (rarely in new designs) on open switchboards. Expulsion fuses Large power fuses use fusible elements made of silver, copper or tin to provide stable and predictable performance. High voltage expulsion fuses surround the fusible link with gas-evolving substances, such as boric acid. When the fuse blows, heat from the arc causes the boric acid to evolve large volumes of gases. The associated high pressure (often greater than 100 atmospheres) and cooling gases rapidly quench the resulting arc. The hot gases are then explosively expelled out of the end(s) of the fuse. Such fuses can only be used outdoors. These type of fuses may have an impact pin to operate a switch mechanism, so that all three phases are interrupted if any one fuse blows. High-power fuse means that these fuses can interrupt several kiloamperes. Some manufacturers have tested their fuses for up to 63 kA short-circuit current. Comparison with circuit breakers Fuses have the advantages of often being less costly and simpler than a circuit breaker for similar ratings. The blown fuse must be replaced with a new device which is less convenient than simply resetting a breaker and therefore likely to discourage people from ignoring faults. On the other hand, replacing a fuse without isolating the circuit first (most building wiring designs do not provide individual isolation switches for each fuse) can be dangerous in itself, particularly if the fault is a short circuit. In terms of protection response time, fuses tend to isolate faults more quickly (depending on their operating time) than circuit breakers. A fuse can clear a fault within a quarter cycle of the fault current, while a circuit breaker may take around half to one cycle to clear the fault. The response time of a fuse can be as fast as 0.002 seconds, whereas a circuit breaker typically responds in the range of 0.02 to 0.05 seconds. High rupturing capacity fuses can be rated to safely interrupt up to 300,000 amperes at 600 V AC. Special current-limiting fuses are applied ahead of some molded-case breakers to protect the breakers in low-voltage power circuits with high short-circuit levels. Current-limiting fuses operate so quickly that they limit the total "let-through" energy that passes into the circuit, helping to protect downstream equipment from damage. 
These fuses open in less than one cycle of the AC power frequency; circuit breakers cannot match this speed. Some types of circuit breakers must be maintained on a regular basis to ensure their mechanical operation during an interruption. This is not the case with fuses, which rely on melting processes where no mechanical operation is required for the fuse to operate under fault conditions. In a multi-phase power circuit, if only one fuse opens, the remaining phases will have higher than normal currents, and unbalanced voltages, with possible damage to motors. Fuses only sense overcurrent, or to a degree, over-temperature, and cannot usually be used independently with protective relaying to provide more advanced protective functions, for example, ground fault detection. Some manufacturers of medium-voltage distribution fuses combine the overcurrent protection characteristics of the fusible element with the flexibility of relay protection by adding a pyrotechnic device to the fuse operated by external protective relays. For domestic applications, miniature circuit breakers (MCBs) are widely used as an alternative to fuses. Their rated current depends on the load current of the equipment to be protected and the ambient operational temperature. They are available in the following ratings: 6A, 10A, 16A, 20A, 25A, 32A, 45A, 50A, 63A, 80A, 100A, 125A. Fuse boxes United Kingdom In the UK, older electrical consumer units (also called fuse boxes) are fitted either with semi-enclosed (rewirable) fuses or cartridge fuses (Fuse wire is commonly supplied to consumers as short lengths of 5 A-, 15 A- and 30 A-rated wire wound on a piece of cardboard.) Modern consumer units usually contain miniature circuit breakers (MCBs) instead of fuses, though cartridge fuses are sometimes still used, as in some applications MCBs are prone to nuisance tripping. Renewable fuses (rewirable or cartridge) allow user replacement, but this can be hazardous as it is easy to put a higher-rated or double fuse element (link or wire) into the holder (overfusing), or simply fitting it with copper wire or even a totally different type of conducting object (coins, hairpins, paper clips, nails, etc.) to the existing carrier. One form of fuse box abuse was to put a penny in the socket, which defeated overcurrent protection and resulted in a dangerous condition. Such tampering will not be visible without full inspection of the fuse. Fuse wire was never used in North America for this reason, although renewable fuses continue to be made for distribution boards. The Wylex standard consumer unit was very popular in the United Kingdom until the wiring regulations started demanding residual-current devices (RCDs) for sockets that could feasibly supply equipment outside the equipotential zone. The design does not allow for fitting of RCDs or RCBOs. Some Wylex standard models were made with an RCD instead of the main switch, but (for consumer units supplying the entire installation) this is no longer compliant with the wiring regulations as alarm systems should not be RCD-protected. There are two styles of fuse base that can be screwed into these units: one designed for rewirable fusewire carriers and one designed for cartridge fuse carriers. Over the years MCBs have been made for both styles of base. In both cases, higher rated carriers had wider pins, so a carrier couldn't be changed for a higher rated one without also changing the base. Cartridge fuse carriers are also now available for DIN-rail enclosures. 
North America In North America, fuses were used in buildings wired before 1960. These Edison base fuses would screw into a fuse socket similar to Edison-base incandescent lamps. Ratings were 5, 10, 15, 20, 25, and 30 amperes. To prevent installation of fuses with an excessive current rating, later fuse boxes included rejection features in the fuse-holder socket, commonly known as Rejection Base (Type S fuses) which have smaller diameters that vary depending on the rating of the fuse. This means that fuses can only be replaced by the preset (Type S) fuse rating. This is a North American, tri-national standard (UL 4248–11; CAN/CSA-C22.2 NO. 4248.11-07 (R2012); and, NMX-J-009/4248/11-ANCE). Existing Edison fuse boards can easily be converted to only accept Rejection Base (Type S) fuses, by screwing-in a tamper-proof adapter. This adapter screws into the existing Edison fuse holder, and has a smaller diameter threaded hole to accept the designated Type S rated fuse. Some companies manufacture resettable miniature thermal circuit breakers, which screw into a fuse socket. Some installations use these Edison-base circuit breakers. However, any such breaker sold today does have one flaw. It may be installed in a circuit-breaker box with a door. If so, if the door is closed, the door may hold down the breaker's reset button. While in this state, the breaker is effectively useless: it does not provide any overcurrent protection. In the 1950s, fuses in new residential or industrial construction for branch circuit protection were superseded by low voltage circuit breakers. Fuses are widely used for protection of electric motor circuits; for small overloads, the motor protection circuit will open the controlling contactor automatically, and the fuse will only operate for short circuits or extreme overload. Coordination of fuses in series Where several fuses are connected in series at the various levels of a power distribution system, it is desirable to blow (clear) only the fuse (or other overcurrent device) electrically closest to the fault. This process is called "coordination" and may require the time-current characteristics of two fuses to be plotted on a common current basis. Fuses are selected so that the minor branch fuse disconnects its circuit well before the supplying, feeder fuse starts to melt. In this way, only the faulty circuit is interrupted with minimal disturbance to other circuits fed by a common supplying fuse. Where the fuses in a system are of similar types, simple rule-of-thumb ratios between ratings of the fuse closest to the load and the next fuse towards the source can be used. Other circuit protectors Resettable fuses So-called self-resetting fuses use a thermoplastic conductive element known as a polymeric positive temperature coefficient (PPTC) thermistor that impedes the circuit during an overcurrent condition (by increasing device resistance). The PPTC thermistor is self-resetting in that when current is removed, the device will cool and revert to low resistance. These devices are often used in aerospace/nuclear applications where replacement is difficult, or on a computer motherboard so that a shorted mouse or keyboard does not cause motherboard damage. Thermal fuses A thermal fuse is often found in consumer equipment such as coffee makers, hair dryers or transformers powering small consumer electronics devices. They contain a fusible, temperature-sensitive composition which holds a spring contact mechanism normally closed. 
When the surrounding temperature gets too high, the composition melts and allows the spring contact mechanism to break the circuit. The device can be used to prevent a fire in a hair dryer for example, by cutting off the power supply to the heater elements when the air flow is interrupted (e.g., the blower motor stops or the air intake becomes accidentally blocked). Thermal fuses are 'one shot', non-resettable devices which must be replaced once they have been activated (blown). Cable limiter A cable limiter is similar to a fuse but is intended only for protection of low voltage power cables. It is used, for example, in networks where multiple cables may be used in parallel. It is not intended to provide overload protection, but instead protects a cable that is exposed to a short circuit. The characteristics of the limiter are matched to the size of cable so that the limiter clears a fault before the cable insulation is damaged. Unicode symbol The Unicode character for the fuse's schematic symbol, found in the Miscellaneous Technical block, is (⏛). See also Antifuse Circuit breaker Power system protection Programmable ROM Recloser Shunt (electrical) Bead (electrical) Zero-ohm link Notes References Richard C. Dorf (ed.) The Electrical Engineering Handbook, CRC Press, Boca Raton, 1993, External links Fuse-selection checklist Len Lundy, "The fuse-selection checklist: a quick update", EDN Magazine, 26 Sept 1996, p121 wiki.diyfaq.org.uk - Fuses vs MCBs Electric power systems components Electrical components Electrical wiring Over-current protection devices Safety equipment 19th-century inventions
Fuse (electrical)
[ "Physics", "Technology", "Engineering" ]
6,067
[ "Electrical components", "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring", "Components" ]
683,368
https://en.wikipedia.org/wiki/Young%20tableau
In mathematics, a Young tableau (; plural: tableaux) is a combinatorial object useful in representation theory and Schubert calculus. It provides a convenient way to describe the group representations of the symmetric and general linear groups and to study their properties. Young tableaux were introduced by Alfred Young, a mathematician at Cambridge University, in 1900. They were then applied to the study of the symmetric group by Georg Frobenius in 1903. Their theory was further developed by many mathematicians, including Percy MacMahon, W. V. D. Hodge, G. de B. Robinson, Gian-Carlo Rota, Alain Lascoux, Marcel-Paul Schützenberger and Richard P. Stanley. Definitions Note: this article uses the English convention for displaying Young diagrams and tableaux. Diagrams A Young diagram (also called a Ferrers diagram, particularly when represented using dots) is a finite collection of boxes, or cells, arranged in left-justified rows, with the row lengths in non-increasing order. Listing the number of boxes in each row gives a partition of a non-negative integer , the total number of boxes of the diagram. The Young diagram is said to be of shape , and it carries the same information as that partition. Containment of one Young diagram in another defines a partial ordering on the set of all partitions, which is in fact a lattice structure, known as Young's lattice. Listing the number of boxes of a Young diagram in each column gives another partition, the conjugate or transpose partition of ; one obtains a Young diagram of that shape by reflecting the original diagram along its main diagonal. There is almost universal agreement that in labeling boxes of Young diagrams by pairs of integers, the first index selects the row of the diagram, and the second index selects the box within the row. Nevertheless, two distinct conventions exist to display these diagrams, and consequently tableaux: the first places each row below the previous one, the second stacks each row on top of the previous one. Since the former convention is mainly used by Anglophones while the latter is often preferred by Francophones, it is customary to refer to these conventions respectively as the English notation and the French notation; for instance, in his book on symmetric functions, Macdonald advises readers preferring the French convention to "read this book upside down in a mirror" (Macdonald 1979, p. 2). This nomenclature probably started out as jocular. The English notation corresponds to the one universally used for matrices, while the French notation is closer to the convention of Cartesian coordinates; however, French notation differs from that convention by placing the vertical coordinate first. The figure on the right shows, using the English notation, the Young diagram corresponding to the partition (5, 4, 1) of the number 10. The conjugate partition, measuring the column lengths, is (3, 2, 2, 2, 1). Arm and leg length In many applications, for example when defining Jack functions, it is convenient to define the arm length aλ(s) of a box s as the number of boxes to the right of s in the diagram λ in English notation. Similarly, the leg length lλ(s) is the number of boxes below s. The hook length of a box s is the number of boxes to the right of s or below s in English notation, including the box s itself; in other words, the hook length is aλ(s) + lλ(s) + 1. 
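These quantities follow directly from a partition and its conjugate. The Python sketch below is illustrative only; it uses the partition (5, 4, 1) discussed above, recovers the conjugate partition (3, 2, 2, 2, 1), lists the hook length of every box, and then applies the hook length formula (treated later in this article) to count the standard tableaux of that shape.

from math import factorial

def conjugate(shape):
    # Column lengths of the Young diagram, e.g. [5, 4, 1] -> [3, 2, 2, 2, 1].
    return [sum(1 for row in shape if row > j) for j in range(shape[0])]

def arm(shape, i, j):
    return shape[i] - (j + 1)              # boxes to the right of box (i, j)

def leg(shape, i, j):
    return conjugate(shape)[j] - (i + 1)   # boxes below box (i, j)

def hook(shape, i, j):
    return arm(shape, i, j) + leg(shape, i, j) + 1

shape = [5, 4, 1]
print(conjugate(shape))                    # [3, 2, 2, 2, 1]
hooks = [hook(shape, i, j) for i, row in enumerate(shape) for j in range(row)]
print(hooks)                               # [7, 5, 4, 3, 1, 5, 3, 2, 1, 1]

# Hook length formula (used later in the article): the number of standard
# Young tableaux of this shape is n! divided by the product of all hook lengths.
n = sum(shape)
product = 1
for h in hooks:
    product *= h
print(factorial(n) // product)             # 288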
Tableaux A Young tableau is obtained by filling in the boxes of the Young diagram with symbols taken from some alphabet, which is usually required to be a totally ordered set. Originally that alphabet was a set of indexed variables , , ..., but now one usually uses a set of numbers for brevity. In their original application to representations of the symmetric group, Young tableaux have distinct entries, arbitrarily assigned to boxes of the diagram. A tableau is called standard if the entries in each row and each column are increasing. The number of distinct standard Young tableaux on entries is given by the involution numbers 1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, 9496, ... . In other applications, it is natural to allow the same number to appear more than once (or not at all) in a tableau. A tableau is called semistandard, or column strict, if the entries weakly increase along each row and strictly increase down each column. Recording the number of times each number appears in a tableau gives a sequence known as the weight of the tableau. Thus the standard Young tableaux are precisely the semistandard tableaux of weight (1,1,...,1), which requires every integer up to to occur exactly once. In a standard Young tableau, the integer is a descent if appears in a row strictly below . The sum of the descents is called the major index of the tableau. Variations There are several variations of this definition: for example, in a row-strict tableau the entries strictly increase along the rows and weakly increase down the columns. Also, tableaux with decreasing entries have been considered, notably, in the theory of plane partitions. There are also generalizations such as domino tableaux or ribbon tableaux, in which several boxes may be grouped together before assigning entries to them. Skew tableaux A skew shape is a pair of partitions (, ) such that the Young diagram of contains the Young diagram of ; it is denoted by . If and , then the containment of diagrams means that for all . The skew diagram of a skew shape is the set-theoretic difference of the Young diagrams of and : the set of squares that belong to the diagram of but not to that of . A skew tableau of shape is obtained by filling the squares of the corresponding skew diagram; such a tableau is semistandard if entries increase weakly along each row, and increase strictly down each column, and it is standard if moreover all numbers from 1 to the number of squares of the skew diagram occur exactly once. While the map from partitions to their Young diagrams is injective, this is not the case for the map from skew shapes to skew diagrams; therefore the shape of a skew diagram cannot always be determined from the set of filled squares only. Although many properties of skew tableaux only depend on the filled squares, some operations defined on them do require explicit knowledge of and , so it is important that skew tableaux do record this information: two distinct skew tableaux may differ only in their shape, while they occupy the same set of squares, each filled with the same entries. Young tableaux can be identified with skew tableaux in which is the empty partition (0) (the unique partition of 0). 
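The row and column conditions above can be checked mechanically. The Python sketch below is a minimal illustration, assuming a skew tableau of shape λ/μ is represented as a list of rows, each containing only the entries of the boxes of λ that are not in μ; the helper names and the sample filling are invented for the example.

def is_semistandard(lam, mu, rows):
    # rows[i] holds the entries of row i in the boxes of lam that are not in mu,
    # i.e. columns mu[i], mu[i]+1, ..., lam[i]-1 (0-indexed).
    mu = list(mu) + [0] * (len(lam) - len(mu))
    entry = {}
    for i, lam_i in enumerate(lam):
        for k, j in enumerate(range(mu[i], lam_i)):
            entry[(i, j)] = rows[i][k]
    for (i, j), v in entry.items():
        if (i, j + 1) in entry and entry[(i, j + 1)] < v:    # rows must weakly increase
            return False
        if (i + 1, j) in entry and entry[(i + 1, j)] <= v:   # columns must strictly increase
            return False
    return True

def is_standard(lam, mu, rows):
    # Standard: semistandard, and the entries are exactly 1, 2, ..., number of boxes.
    values = sorted(v for row in rows for v in row)
    boxes = sum(lam) - sum(mu)
    return is_semistandard(lam, mu, rows) and values == list(range(1, boxes + 1))

# Skew shape (3, 2)/(1,) filled with 1 2 in the first row and 1 3 in the second:
# semistandard, but not standard because the entry 1 is repeated.
print(is_semistandard([3, 2], [1], [[1, 2], [1, 3]]))   # True
print(is_standard([3, 2], [1], [[1, 2], [1, 3]]))       # False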
Any skew semistandard tableau of shape with positive integer entries gives rise to a sequence of partitions (or Young diagrams), by starting with , and taking for the partition places further in the sequence the one whose diagram is obtained from that of by adding all the boxes that contain a value  ≤  in ; this partition eventually becomes equal to . Any pair of successive shapes in such a sequence is a skew shape whose diagram contains at most one box in each column; such shapes are called horizontal strips. This sequence of partitions completely determines , and it is in fact possible to define (skew) semistandard tableaux as such sequences, as is done by Macdonald (Macdonald 1979, p. 4). This definition incorporates the partitions and in the data comprising the skew tableau. Overview of applications Young tableaux have numerous applications in combinatorics, representation theory, and algebraic geometry. Various ways of counting Young tableaux have been explored and lead to the definition of and identities for Schur functions. Many combinatorial algorithms on tableaux are known, including Schützenberger's jeu de taquin and the Robinson–Schensted–Knuth correspondence. Lascoux and Schützenberger studied an associative product on the set of all semistandard Young tableaux, giving it the structure called the plactic monoid (French: le monoïde plaxique). In representation theory, standard Young tableaux of size describe bases in irreducible representations of the symmetric group on letters. The standard monomial basis in a finite-dimensional irreducible representation of the general linear group are parametrized by the set of semistandard Young tableaux of a fixed shape over the alphabet {1, 2, ..., }. This has important consequences for invariant theory, starting from the work of Hodge on the homogeneous coordinate ring of the Grassmannian and further explored by Gian-Carlo Rota with collaborators, de Concini and Procesi, and Eisenbud. The Littlewood–Richardson rule describing (among other things) the decomposition of tensor products of irreducible representations of into irreducible components is formulated in terms of certain skew semistandard tableaux. Applications to algebraic geometry center around Schubert calculus on Grassmannians and flag varieties. Certain important cohomology classes can be represented by Schubert polynomials and described in terms of Young tableaux. Applications in representation theory Young diagrams are in one-to-one correspondence with irreducible representations of the symmetric group over the complex numbers. They provide a convenient way of specifying the Young symmetrizers from which the irreducible representations are built. Many facts about a representation can be deduced from the corresponding diagram. Below, we describe two examples: determining the dimension of a representation and restricted representations. In both cases, we will see that some properties of a representation can be determined by using just its diagram. Young tableaux are involved in the use of the symmetric group in quantum chemistry studies of atoms, molecules and solids. Young diagrams also parametrize the irreducible polynomial representations of the general linear group (when they have at most nonempty rows), or the irreducible representations of the special linear group (when they have at most nonempty rows), or the irreducible complex representations of the special unitary group (again when they have at most nonempty rows). 
In these cases semistandard tableaux with entries up to play a central role, rather than standard tableaux; in particular it is the number of those tableaux that determines the dimension of the representation. Dimension of a representation The dimension of the irreducible representation of the symmetric group corresponding to a partition of is equal to the number of different standard Young tableaux that can be obtained from the diagram of the representation. This number can be calculated by the hook length formula. A hook length of a box in a Young diagram of shape is the number of boxes that are in the same row to the right of it plus those boxes in the same column below it, plus one (for the box itself). By the hook-length formula, the dimension of an irreducible representation is n! divided by the product of the hook lengths of all boxes in the diagram of the representation: The figure on the right shows hook-lengths for all boxes in the diagram of the partition 10 = 5 + 4 + 1. Thus the dimension of the corresponding irreducible representation is 10! divided by the product of those hook lengths, which equals 288. Similarly, the dimension of the irreducible representation of corresponding to the partition λ of n (with at most r parts) is the number of semistandard Young tableaux of shape λ (containing only the entries from 1 to r), which is given by the hook-length formula: where the index i gives the row and j the column of a box. For instance, for the partition (5,4,1) we get as dimension of the corresponding irreducible representation of (traversing the boxes by rows): Restricted representations A representation of the symmetric group on elements, is also a representation of the symmetric group on elements, . However, an irreducible representation of may not be irreducible for . Instead, it may be a direct sum of several representations that are irreducible for . These representations are then called the factors of the restricted representation (see also induced representation). The question of determining this decomposition of the restricted representation of a given irreducible representation of Sn, corresponding to a partition of , is answered as follows. One forms the set of all Young diagrams that can be obtained from the diagram of shape by removing just one box (which must be at the end both of its row and of its column); the restricted representation then decomposes as a direct sum of the irreducible representations of corresponding to those diagrams, each occurring exactly once in the sum. See also Robinson–Schensted correspondence Schur–Weyl duality Notes References William Fulton. Young Tableaux, with Applications to Representation Theory and Geometry. Cambridge University Press, 1997, . Lecture 4 Howard Georgi, Lie Algebras in Particle Physics, 2nd Edition - Westview Macdonald, I. G. Symmetric functions and Hall polynomials. Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, Oxford, 1979. viii+180 pp. Laurent Manivel. Symmetric Functions, Schubert Polynomials, and Degeneracy Loci. American Mathematical Society. Jean-Christophe Novelli, Igor Pak, Alexander V. Stoyanovskii, "A direct bijective proof of the Hook-length formula", Discrete Mathematics and Theoretical Computer Science 1 (1997), pp. 53–67. Bruce E. Sagan. The Symmetric Group. Springer, 2001, Predrag Cvitanović, Group Theory: Birdtracks, Lie's, and Exceptional Groups. Princeton University Press, 2008. External links Eric W. Weisstein. "Ferrers Diagram". From MathWorld—A Wolfram Web Resource. Eric W. Weisstein. "Young Tableau." From MathWorld—A Wolfram Web Resource. 
Semistandard tableaux entry in the FindStat database Standard tableaux entry in the FindStat database Representation theory of finite groups Symmetric functions Integer partitions
Young tableau
[ "Physics", "Mathematics" ]
2,881
[ "Symmetry", "Number theory", "Symmetric functions", "Integer partitions", "Algebra" ]
683,561
https://en.wikipedia.org/wiki/Total%20variation
In mathematics, the total variation identifies several slightly different concepts, related to the (local or global) structure of the codomain of a function or a measure. For a real-valued continuous function f, defined on an interval [a, b] ⊂ R, its total variation on the interval of definition is a measure of the one-dimensional arclength of the curve with parametric equation x ↦ f(x), for x ∈ [a, b]. Functions whose total variation is finite are called functions of bounded variation. Historical note The concept of total variation for functions of one real variable was first introduced by Camille Jordan in the paper . He used the new concept in order to prove a convergence theorem for Fourier series of discontinuous periodic functions whose variation is bounded. The extension of the concept to functions of more than one variable however is not simple for various reasons. Definitions Total variation for functions of one real variable The total variation of a real-valued (or more generally complex-valued) function , defined on an interval is the quantity where the supremum runs over the set of all partitions of the given interval. Which means that . Total variation for functions of n > 1 real variables Let Ω be an open subset of Rn. Given a function f belonging to L1(Ω), the total variation of f in Ω is defined as where is the set of continuously differentiable vector functions of compact support contained in , is the essential supremum norm, and is the divergence operator. This definition does not require that the domain of the given function be a bounded set. Total variation in measure theory Classical total variation definition Following , consider a signed measure on a measurable space : then it is possible to define two set functions and , respectively called upper variation and lower variation, as follows clearly The variation (also called absolute variation) of the signed measure is the set function and its total variation is defined as the value of this measure on the whole space of definition, i.e. Modern definition of total variation norm uses upper and lower variations to prove the Hahn–Jordan decomposition: according to his version of this theorem, the upper and lower variation are respectively a non-negative and a non-positive measure. Using a more modern notation, define Then and are two non-negative measures such that The last measure is sometimes called, by abuse of notation, total variation measure. Total variation norm of complex measures If the measure is complex-valued i.e. is a complex measure, its upper and lower variation cannot be defined and the Hahn–Jordan decomposition theorem can only be applied to its real and imaginary parts. However, it is possible to follow and define the total variation of the complex-valued measure as follows The variation of the complex-valued measure is the set function where the supremum is taken over all partitions of a measurable set into a countable number of disjoint measurable subsets. This definition coincides with the above definition for the case of real-valued signed measures. Total variation norm of vector-valued measures The variation so defined is a positive measure (see ) and coincides with the one defined by when is a signed measure: its total variation is defined as above. This definition works also if is a vector measure: the variation is then defined by the following formula where the supremum is as above. 
This definition is slightly more general than the one given by since it requires only to consider finite partitions of the space : this implies that it can be used also to define the total variation on finite-additive measures. Total variation of probability measures The total variation of any probability measure is exactly one, therefore it is not interesting as a means of investigating the properties of such measures. However, when μ and ν are probability measures, the total variation distance of probability measures can be defined as where the norm is the total variation norm of signed measures. Using the property that , we eventually arrive at the equivalent definition and its values are non-trivial. The factor above is usually dropped (as is the convention in the article total variation distance of probability measures). Informally, this is the largest possible difference between the probabilities that the two probability distributions can assign to the same event. For a categorical distribution it is possible to write the total variation distance as follows It may also be normalized to values in by halving the previous definition as follows Basic properties Total variation of differentiable functions The total variation of a function can be expressed as an integral involving the given function instead of as the supremum of the functionals of definitions and . The form of the total variation of a differentiable function of one variable The total variation of a differentiable function , defined on an interval , has the following expression if is Riemann integrable If is differentiable and monotonic, then the above simplifies to For any differentiable function , we can decompose the domain interval , into subintervals (with ) in which is locally monotonic, then the total variation of over can be written as the sum of local variations on those subintervals: The form of the total variation of a differentiable function of several variables Given a function defined on a bounded open set , with of class , the total variation of has the following expression . Proof The first step in the proof is to first prove an equality which follows from the Gauss–Ostrogradsky theorem. Lemma Under the conditions of the theorem, the following equality holds: Proof of the lemma From the Gauss–Ostrogradsky theorem: by substituting , we have: where is zero on the border of by definition: Proof of the equality Under the conditions of the theorem, from the lemma we have: in the last part could be omitted, because by definition its essential supremum is at most one. On the other hand, we consider and which is the up to approximation of in with the same integral. We can do this since is dense in . Now again substituting into the lemma: This means we have a convergent sequence of that tends to as well as we know that . Q.E.D. It can be seen from the proof that the supremum is attained when The function is said to be of bounded variation precisely if its total variation is finite. Total variation of a measure The total variation is a norm defined on the space of measures of bounded variation. The space of measures on a σ-algebra of sets is a Banach space, called the ca space, relative to this norm. It is contained in the larger Banach space, called the ba space, consisting of finitely additive (as opposed to countably additive) measures, also with the same norm. The distance function associated to the norm gives rise to the total variation distance between two measures μ and ν. 
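For categorical (discrete) probability distributions, and for functions sampled on a grid, the quantities above reduce to finite sums that are easy to compute. The Python sketch below is illustrative only: the example distributions and the sample grid are made up, and it uses the convention that keeps the factor 1/2, so the distance lies in [0, 1] (as noted above, some authors drop that factor).

import math

def total_variation_distance(p, q):
    # p, q: dicts mapping outcomes to probabilities (each summing to 1).
    # Returns sup over events A of |P(A) - Q(A)| = 1/2 * sum over x of |p(x) - q(x)|.
    outcomes = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in outcomes)

# Hypothetical example: a fair die against a die loaded towards six.
fair = {x: 1 / 6 for x in range(1, 7)}
loaded = {1: 0.1, 2: 0.1, 3: 0.1, 4: 0.1, 5: 0.1, 6: 0.5}
print(total_variation_distance(fair, loaded))     # 0.333...

# Total variation of a sampled function of one variable: the supremum over
# partitions is approximated by summing |f(x_{k+1}) - f(x_k)| on a fine grid.
xs = [k / 1000 for k in range(1001)]
f = [math.sin(2 * math.pi * x) for x in xs]
print(sum(abs(b - a) for a, b in zip(f, f[1:])))  # close to 4, the integral of |f'| on [0, 1]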
For finite measures on R, the link between the total variation of a measure μ and the total variation of a function, as described above, goes as follows. Given μ, define a function by Then, the total variation of the signed measure μ is equal to the total variation, in the above sense, of the function . In general, the total variation of a signed measure can be defined using Jordan's decomposition theorem by for any signed measure μ on a measurable space . Applications Total variation can be seen as a non-negative real-valued functional defined on the space of real-valued functions (for the case of functions of one variable) or on the space of integrable functions (for the case of functions of several variables). As a functional, total variation finds applications in several branches of mathematics and engineering, like optimal control, numerical analysis, and calculus of variations, where the solution to a certain problem has to minimize its value. As an example, use of the total variation functional is common in the following two kind of problems Numerical analysis of differential equations: it is the science of finding approximate solutions to differential equations. Applications of total variation to these problems are detailed in the article "total variation diminishing" Image denoising: in image processing, denoising is a collection of methods used to reduce the noise in an image reconstructed from data obtained by electronic means, for example data transmission or sensing. "Total variation denoising" is the name for the application of total variation to image noise reduction; further details can be found in the papers of and . A sensible extension of this model to colour images, called Colour TV, can be found in . See also Bounded variation p-variation Total variation diminishing Total variation denoising Quadratic variation Total variation distance of probability measures Kolmogorov–Smirnov test Anisotropic diffusion Notes Historical references . . . . . . . (available at Gallica). This is, according to Boris Golubov, the first paper on functions of bounded variation. . . The paper containing the first proof of Vitali covering theorem. References . . Available at Numdam. . . (available at the Polish Virtual Library of Science). English translation from the original French by Laurence Chisholm Young, with two additional notes by Stefan Banach. . External links One variable "Total variation" on PlanetMath. One and more variables Function of bounded variation at Encyclopedia of Mathematics Measure theory . . Jordan decomposition at Encyclopedia of Mathematics Applications (a work dealing with total variation application in denoising problems for image processing). . . Tony F. Chan and Jackie (Jianhong) Shen (2005), Image Processing and Analysis - Variational, PDE, Wavelet, and Stochastic Methods, SIAM, (with in-depth coverage and extensive applications of Total Variations in modern image processing, as started by Rudin, Osher, and Fatemi). Mathematical analysis
Total variation
[ "Mathematics" ]
1,989
[ "Mathematical analysis" ]
683,583
https://en.wikipedia.org/wiki/Transcutaneous%20electrical%20nerve%20stimulation
A transcutaneous electrical nerve stimulation (TENS or TNS) is a device that produces mild electric current to stimulate the nerves for therapeutic purposes. TENS, by definition, covers the complete range of transcutaneously applied currents used for nerve excitation, but the term is often used with a more restrictive intent, namely, to describe the kind of pulses produced by portable stimulators used to reduce pain. The unit is usually connected to the skin using two or more electrodes which are typically conductive gel pads. A typical battery-operated TENS unit is able to modulate pulse width, frequency, and intensity. Generally, TENS is applied at high frequency (>50 Hz) with an intensity below motor contraction (sensory intensity) or low frequency (<10 Hz) with an intensity that produces motor contraction. More recently, many TENS units use a mixed frequency mode which alleviates tolerance to repeated use. Intensity of stimulation should be strong but comfortable with greater intensities, regardless of frequency, producing the greatest analgesia. While the use of TENS has proved effective in clinical studies, there is controversy over which conditions the device should be used to treat. Medical uses Pain Transcutaneous electrical nerve stimulation is a commonly used treatment approach to alleviate acute and chronic pain by reducing the sensitization of dorsal horn neurons, elevating levels of gamma-aminobutyric acid and glycine, and inhibiting glial activation. Many systematic reviews and meta-analyses assessing clinical trials looking at the efficacy of TENS for different sources of pain, however, have been inconclusive due to a lack of high-quality and unbiased evidence. Potential benefits of TENS treatment include its safety profile, relative affordability, ease of self-administration, and availability over-the-counter without a prescription. In principle, an adequate intensity of stimulation is necessary to achieve pain relief with TENS. An analysis of treatment fidelity—meaning that the delivery of TENS in a trial was in accordance with current clinical advice, such as using "a strong but comfortable sensation" and suitable, frequent treatment durations—showed that higher-fidelity trials tended to have a positive outcome. Acute pain For people with recent-onset pain i.e., fewer than three months, such as pain associated with surgery, trauma, and medical procedures, TENS may be better than placebo in some cases. The evidence of benefit is very weak, though. Musculoskeletal and neck/back pain There is some evidence to support a benefit of using TENS in chronic musculoskeletal pain. Results from a task force on neck pain in 2008 found no clinically significant benefit of TENS for the treatment of neck pain when compared to placebo. A 2010 review did not find evidence to support the use of TENS for chronic low back pain. Another study examining knee osteoarthritis patients found that TENS demonstrated efficacy and a better safety profile relative to weak opiates. Given the age, comorbidity frequency, tendency toward polypharmacy, and sensitivity to adverse reactions among individuals most frequently reporting osteoarthritis, TENS could be a non-pharmacological alternative to analgesics in the management of knee osteoarthritis pain. Neuropathy and phantom limb pain There is tentative evidence that TENS may be useful for painful diabetic neuropathy. As of 2015, the efficacy of TENS for phantom limb pain is unknown; no randomized controlled trials have been performed. 
A few studies have shown objective evidence that TENS may modulate or suppress pain signals in the brain. One used evoked cortical potentials to show that electric stimulation of peripheral A-beta sensory fibers reliably suppressed A-delta fiber nociceptive (pain perception) processing. Two other studies used functional magnetic resonance imaging (fMRI): one showed that high-frequency TENS produced a decrease in pain-related cortical activations in patients with carpal tunnel syndrome, while the other showed that low-frequency TENS decreased shoulder impingement pain and modulated pain-induced activation in the brain. Labor and menstrual pain Early studies found that TENS "has been shown not to be effective in postoperative and labour pain." These studies also had questionable ability to truly blind the patients. However, more recent studies have shown that TENS was "effective for relieving labour pain, and they are well considered by pregnant participants." One study also showed that there was a significant change in laboring individuals' time to request analgesia such as an epidural. The group with the TENS waited five additional hours relative to those without TENS. Both groups were satisfied with the pain relief that they had from their choices. No maternal, infant, or labor problems were noted. There is tentative evidence that TENS may be helpful for treating pain from dysmenorrhoea, however further research is required. Cancer pain Non-pharmacological treatment options for people experiencing pain caused by cancer are much needed, however, it is not clear from the weak studies that have been published if TENS is an effective approach. Bladder function Percutaneous and transcutaneous electrical nerve stimulation in the tibial nerve have been used in the treatment of overactive bladder and urinary retention. Sometimes it is also done in the sacrum. Systematic review studies have shown limited evidence on the effectiveness, and more quality research is needed. A major trial found that in a care home context transcutaneous posterior tibial nerve stimulation did not improve urinary incontinence. Dentistry TENS has been extensively used in non-odontogenic orofacial pain relief. In addition, TENS and ultra low frequency-TENS (ULF-TENS) are commonly employed in diagnosis and treatment of temporomandibular joint dysfunction (TMD). Further clinical studies are required to determine its efficacy. Tremor A wearable neuromodulation device that delivers electrical stimulation to nerves in the wrist is now available by prescription. Worn around the wrist, it acts as a non-invasive treatment for those living with essential tremor. The stimulator has electrodes that are placed circumferentially around a patient's wrist. Positioning the electrodes on generally opposing sides of the target nerve can result in improved stimulation of the nerve. In clinical trials reductions in hand tremors were reported following noninvasive median and radial nerve stimulation. Transcutaneous afferent patterned stimulation (TAPS) is a tremor-customized therapy, based on the patient's measured tremor frequency, and is delivered transcutaneously to the median and radial nerves of a patient's wrist. The patient specific TAPS stimulation is determined through a calibration process performed by the accelerometer and microprocessor on the device. The Cala ONE delivers TAPS in a wrist-worn device that is calibrated to treat tremor symptoms. 
Cala ONE received de novo FDA clearance in April 2018 for the transient relief of hand tremors in adults with essential tremor and is currently marketed as Cala Trio. Contraindications People who have implanted electronic medical devices, including pacemakers and cardioverter-defibrillators, are advised not to use TENS. In addition, caution should be taken before using TENS in those who are pregnant, have epilepsy, have an active malignancy, have deep vein thrombosis, have skin that is damaged, or are frail. The use of TENS is likely to be less effective on areas of numb skin or decreased sensation due to nerve damage. It may also cause skin irritation due to the inability to feel currents until they are too high. There is an unknown level of risk when placing electrodes over an infection (possible spreading due to muscle contractions), but cross-contamination with the electrodes themselves is of greater concern. There are several anatomical locations where TENS electrodes are contraindicated: Over the eyes due to the risk of increasing intraocular pressure Transcerebrally On the front of the neck due to the risk of acute hypotension (through a vasovagal response) or even a laryngospasm Through the chest using anterior and posterior electrode positions, or other transthoracic applications understood as "across a thoracic diameter"; this does not preclude coplanar applications Internally, except for specific applications of dental, vaginal, and anal stimulation that employ specialized TENS units On broken skin areas or wounds, although it can be placed around wounds Over a tumor or malignancy, based on in vitro experiments where electricity promotes cell growth Directly over the spinal column Cardiac pacemakers TENS used across an artificial cardiac pacemaker or other indwelling stimulator, including across its leads, may cause interference and failure of the implanted device. Serious accidents have been recorded in cases when this principle was not observed. A 2009 review in this area suggests that electrotherapy, including TENS, is "best avoided" in patients with pacemakers or implantable cardioverter-defibrillators (ICDs). They add that "there is no consensus and it may be possible to safely deliver these modalities in a proper setting with device and patient monitoring", and recommend further research. The review found several reports of ICDs administering inappropriate treatment due to interference with TENS devices, but notes that the reports on pacemakers are mixed: some non-programmable pacemakers were inhibited by TENS, but others were unaffected or auto-reprogrammed. Pregnancy TENS should be used with caution in people with epilepsy and in pregnant women; it should not be used over the area of the uterus, as the effects of electrical stimulation on the developing fetus are not known. Side effects Overall, TENS has been found to be safe compared with pharmaceutical medications for treating pain. Potential side effects include skin itching near the electrodes and mild redness of the skin (erythema). Some people also report that they dislike the sensation associated with TENS. Device types The TENS device stimulates the sensory nerves and a small portion of the peripheral motor nerves; the stimulation triggers multiple mechanisms that modulate the sense of pain in a patient. TENS operates by two main mechanisms: it stimulates competing sensory neurons at the pain perception gate, and it stimulates the opiate response. Which mechanism predominates varies with the type of device. 
The table below lists the types of devices: History Electrical stimulation for pain control was used in ancient Rome, in AD 63. It was reported by Scribonius Largus that pain was relieved by standing on an electrical fish at the seashore. In the 16th through the 18th centuries various electrostatic devices were used for headache and other pains. Benjamin Franklin was a proponent of this method for pain relief. In the 19th century a device called the Electreat, along with numerous other devices, was used for pain control and cancer cures. Only the Electreat survived into the 20th century, but it was not portable and offered limited control of the stimulus. Development of the modern TENS unit is generally credited to C. Norman Shealy. Modern The first modern, patient-wearable TENS was patented in the United States in 1974. It was initially used for testing the tolerance of chronic pain patients to electrical stimulation before implantation of electrodes in the spinal cord dorsal column. The electrodes were attached to an implanted receiver, which received its power from an antenna worn on the surface of the skin. Although intended only for testing tolerance to electrical stimulation, many of the patients said they received so much relief from the TENS itself that they never returned for the implant. A number of companies began manufacturing TENS units after the commercial success of the Medtronic device became known. The neurological division of Medtronic, founded by Don Maurer, Ed Schuck and Charles Ray, developed a number of applications for implanted electrical stimulation devices for treatment of epilepsy, Parkinson's disease, and other disorders of the nervous system. Today many people confuse TENS with electrical muscle stimulation (EMS). EMS and TENS devices look similar, with both using long electric lead wires and electrodes. TENS is for blocking pain, whereas EMS is for stimulating muscles. Beginning in the late 1970s, further research into electronic pain-reduction devices was conducted in the USSR as part of its space program. Dr. Alexander Karasev developed scenar (or skenar) devices, and later, in the early 2000s, cosmodic devices. Each of these device types uses the fundamental technique of reading electrical signals in the skin, analyzing the signals, and returning therapeutic electrical pulses into the nerves. He terms TENS devices first-generation electronic pain relief devices, scenar devices second-generation devices, cosmodic devices third-generation devices, and the D.O.V.E. (Device Organizing Vital Energy) device an advanced second-generation device which automatically incorporates some cosmodic therapeutic features. Research As reported, TENS has different effects on the brain. A randomized controlled trial in 2017 showed that sensory ULF-TENS applied to the skin proximal to the trigeminal nerve reduced the effect of acute mental stress as assessed by heart rate variability (HRV). Further high-quality studies are required to determine the effectiveness of TENS for treating dementia. A head-mounted TENS device called Cefaly was approved by the United States Food and Drug Administration (FDA) in March 2014 for the prevention of migraine attacks. The Cefaly device was found effective in preventing migraine attacks in a randomized sham-controlled trial. This was the first TENS device the FDA approved for pain prevention, as opposed to pain suppression. 
A study performed on healthy human subjects demonstrates that repeated application of TENS can generate analgesic tolerance within five days, reducing its efficacy. The study noted that TENS causes the release of endogenous opioids, and that the analgesia is likely due to opioid tolerance mechanisms. The pain reduction ability of TENS is unconfirmed by sufficient randomized controlled trials so far. One meta-analysis of several hundred TENS studies concluded that there was a significant overall reduction of pain intensity due to TENS, but there were too few participants and controls to be entirely certain of their validity. Therefore, the authors downgraded their confidence in the results by two levels, to low-certainty. See also Electroacupuncture Electrical muscle stimulation Erotic electrostimulation — for sexual uses of TENS devices Microcurrent electrical neuromuscular stimulator References Books cited Further reading Electrotherapy Neurotechnology Medical equipment Pain management
Transcutaneous electrical nerve stimulation
[ "Biology" ]
2,974
[ "Medical equipment", "Medical technology" ]
683,590
https://en.wikipedia.org/wiki/Microcurrent%20electrical%20neuromuscular%20stimulator
A microcurrent electrical neuromuscular stimulator or MENS (also microamperage electrical neuromuscular stimulator) is a device used to send weak electrical signals into the body. Such devices apply extremely small microamp [uA] electrical currents (less than 1 milliampere [mA]) to the tissues using electrodes placed on the skin. One microampere [uA] is 1 millionth of an ampere and the uses of MENS are distinct from those of "TENS" which runs at one milliamp [mA] or one thousandth of an amp. Uses MENS uses include treatments for pain, diabetic neuropathy, age-related macular degeneration, wound healing, tendon repair, plantar fasciitis and ruptured ligament recovery. Most microcurrent treatments concentrate on pain and/or speeding healing and recovery. It is commonly used by professional and performance athletes with acute pain and/or muscle tenderness as it is drug-free and non-invasive, thus avoiding testing and recovery issues. It is also used as a cosmetic treatment. History The body's electrical capabilities were studied at least as early as 1830, when the Italian Carlo Matteucci is credited as being one of the first to measure the electrical current in injured tissue. Bioelectricity received less attention after the discovery of penicillin, when the focus of medical research and treatments turned toward the body's chemical processes. Attention began to return to these properties and the possibilities of using very low current for healing in the mid-1900s. In a study published in 1969, for example, a team of researchers led by L.E. Wolcott applied microcurrent to a wide variety of wounds, using negative polarity over lesions in the initial phase, and then alternating application of positive and negative electrodes every three days. The stimulation current ranged from 200 to 800 uA and the treated group showed 200%-350% faster healing rates, with stronger tensile strength of scar tissue and antibacterial effects. In 1991, the German scientists Drs. Erwin Neher and Bert Sakmann shared the Nobel Prize in Physiology or Medicine for their development of the patch-clamp technique that allows the detection of minute electrical currents through cell membranes. This method allowed the detection of more than 20 different types of ion channel which permit positive or negatively charged ions to cross the cell membrane, confirming that cellular electrical activity is not limited only to nerve and muscle tissues. Efficacy A study by a neuroretinologist in the late 1980s suggested that microcurrent stimulation of acupuncture points for the eye had positive effects in slowing and even stopping progression of macular degeneration. This treatment is used to treat both the Wet and Dry forms of AMD. This study was based on Ngok Cheng's research on the increased amounts of ATP levels in living tissue after being stimulated with microcurrent. Mechanisms of action While the mechanisms of efficacy are not well established, a few studies have shown that there may be a correlation between the traditional Chinese medical system of acupuncture and microcurrent. A study published in 1975 by Reichmanis, Marino, and Becker concluded in part that. "At most acupuncture points on most subjects, there were greater electrical conductance maxims than at control sites." Manufacturers Many companies manufacture microcurrent devices for both professional and personal use, and microcurrent stimulation is used as a "complementary" veterinary modality. 
See also Acupuncture Electrical muscle stimulation Electroacupuncture Macular degeneration Percutaneous tibial nerve stimulation TENS References Medical equipment Electrotherapy Bioelectromagnetic-based therapies
Microcurrent electrical neuromuscular stimulator
[ "Biology" ]
770
[ "Medical equipment", "Medical technology" ]
684,210
https://en.wikipedia.org/wiki/Mean%20curvature
In mathematics, the mean curvature of a surface is an extrinsic measure of curvature that comes from differential geometry and that locally describes the curvature of an embedded surface in some ambient space such as Euclidean space. The concept was used by Sophie Germain in her work on elasticity theory. Jean Baptiste Marie Meusnier used it in 1776, in his studies of minimal surfaces. It is important in the analysis of minimal surfaces, which have mean curvature zero, and in the analysis of physical interfaces between fluids (such as soap films) which, for example, have constant mean curvature in static flows, by the Young–Laplace equation. Definition Let be a point on the surface inside the three dimensional Euclidean space . Each plane through containing the normal line to cuts in a (plane) curve. Fixing a choice of unit normal gives a signed curvature to that curve. As the plane is rotated by an angle (always containing the normal line) that curvature can vary. The maximal curvature and minimal curvature are known as the principal curvatures of . The mean curvature at is then the average of the signed curvature over all angles : . By applying Euler's theorem, this is equal to the average of the principal curvatures : More generally , for a hypersurface the mean curvature is given as More abstractly, the mean curvature is the trace of the second fundamental form divided by n (or equivalently, the shape operator). Additionally, the mean curvature may be written in terms of the covariant derivative as using the Gauss-Weingarten relations, where is a smoothly embedded hypersurface, a unit normal vector, and the metric tensor. A surface is a minimal surface if and only if the mean curvature is zero. Furthermore, a surface which evolves under the mean curvature of the surface , is said to obey a heat-type equation called the mean curvature flow equation. The sphere is the only embedded surface of constant positive mean curvature without boundary or singularities. However, the result is not true when the condition "embedded surface" is weakened to "immersed surface". Surfaces in 3D space For a surface defined in 3D space, the mean curvature is related to a unit normal of the surface: where the normal chosen affects the sign of the curvature. The sign of the curvature depends on the choice of normal: the curvature is positive if the surface curves "towards" the normal. The formula above holds for surfaces in 3D space defined in any manner, as long as the divergence of the unit normal may be calculated. Mean Curvature may also be calculated where I and II denote first and second quadratic form matrices, respectively. If is a parametrization of the surface and are two linearly independent vectors in parameter space then the mean curvature can be written in terms of the first and second fundamental forms as where , , , , , . For the special case of a surface defined as a function of two coordinates, e.g. , and using the upward pointing normal the (doubled) mean curvature expression is In particular at a point where , the mean curvature is half the trace of the Hessian matrix of . If the surface is additionally known to be axisymmetric with , where comes from the derivative of . Implicit form of mean curvature The mean curvature of a surface specified by an equation can be calculated by using the gradient and the Hessian matrix The mean curvature is given by: Another form is as the divergence of the unit normal. 
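For a surface given as a graph z = f(x, y), this divergence form reduces to the closed-form expression quoted above and is easy to evaluate numerically. The following sketch is an illustration only; the helper name, the finite-difference step, and the test surface are arbitrary choices. It estimates H at a point by finite differences and checks it against a sphere of radius 2, for which the magnitude of the mean curvature should be 1/2 everywhere.

```python
import numpy as np

def mean_curvature_graph(f, x, y, h=1e-4):
    """Estimate the mean curvature H of the graph z = f(x, y) at (x, y),
    using the upward unit normal (H is negative where the surface bends
    away from that normal, as on the top of a sphere)."""
    # Central finite differences for the first and second derivatives of f.
    fx  = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy  = (f(x, y + h) - f(x, y - h)) / (2 * h)
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    num = (1 + fy**2) * fxx - 2 * fx * fy * fxy + (1 + fx**2) * fyy
    return num / (2 * (1 + fx**2 + fy**2) ** 1.5)

# Upper hemisphere of a sphere of radius R: |H| should equal 1/R at every point.
R = 2.0
sphere = lambda x, y: np.sqrt(R**2 - x**2 - y**2)
print(mean_curvature_graph(sphere, 0.3, -0.4))   # ~ -0.5 with the upward normal
```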
A unit normal is given by and the mean curvature is In fluid mechanics An alternate definition is occasionally used in fluid mechanics to avoid factors of two: . This results in the pressure according to the Young–Laplace equation inside an equilibrium spherical droplet being surface tension times ; the two curvatures are equal to the reciprocal of the droplet's radius . Minimal surfaces A minimal surface is a surface which has zero mean curvature at all points. Classic examples include the catenoid, helicoid and Enneper surface. Recent discoveries include Costa's minimal surface and the Gyroid. CMC surfaces An extension of the idea of a minimal surface are surfaces of constant mean curvature. The surfaces of unit constant mean curvature in hyperbolic space are called Bryant surfaces. See also Gaussian curvature Mean curvature flow Inverse mean curvature flow First variation of area formula Stretched grid method Notes References . Differential geometry Differential geometry of surfaces Surfaces Curvature (mathematics)
Mean curvature
[ "Physics" ]
887
[ "Geometric measurement", "Physical quantities", "Curvature (mathematics)" ]
685,179
https://en.wikipedia.org/wiki/Schwinger%27s%20quantum%20action%20principle
The Schwinger's quantum action principle is a variational approach to quantum mechanics and quantum field theory. This theory was introduced by Julian Schwinger in a series of articles starting 1950. Approach In Schwinger's approach, the action principle is targeted towards quantum mechanics. The action becomes a quantum action, i.e. an operator, . Although it is superficially different from the path integral formulation where the action is a classical function, the modern formulation of the two formalisms are identical. Suppose we have two states defined by the values of a complete set of commuting operators at two times. Let the early and late states be and , respectively. Suppose that there is a parameter in the Lagrangian which can be varied, usually a source for a field. The main equation of Schwinger's quantum action principle is: where the derivative is with respect to small changes () in the parameter, and with the Lagrange operator. In the path integral formulation, the transition amplitude is represented by the sum over all histories of , with appropriate boundary conditions representing the states and . The infinitesimal change in the amplitude is clearly given by Schwinger's formula. Conversely, starting from Schwinger's formula, it is easy to show that the fields obey canonical commutation relations and the classical equations of motion, and so have a path integral representation. Schwinger's formulation was most significant because it could treat fermionic anticommuting fields with the same formalism as bose fields, thus implicitly introducing differentiation and integration with respect to anti-commuting coordinates. See also Source field References Perturbation theory Quantum field theory Principles
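As a concrete statement of the principle described above, the variation of the transition amplitude between the early and late states is commonly written, with ħ restored, as in the sketch below; the labels used for the two states are an illustrative choice of notation.

```latex
% The quantum action principle: the variation of a transition amplitude equals
% (i/hbar) times the matrix element of the variation of the action operator.
\[
  \delta \langle \alpha_2, t_2 \mid \alpha_1, t_1 \rangle
  \;=\; \frac{i}{\hbar}\,
  \langle \alpha_2, t_2 \mid \delta S \mid \alpha_1, t_1 \rangle ,
  \qquad
  S = \int_{t_1}^{t_2} L \, dt .
\]
```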
Schwinger's quantum action principle
[ "Physics" ]
352
[ "Quantum field theory", "Quantum mechanics", "Perturbation theory" ]
685,311
https://en.wikipedia.org/wiki/Experimental%20physics
Experimental physics is the category of disciplines and sub-disciplines in the field of physics that are concerned with the observation of physical phenomena and experiments. Methods vary from discipline to discipline, from simple experiments and observations, such as Galileo's experiments, to more complicated ones, such as the Large Hadron Collider. Overview Experimental physics is a branch of physics that is concerned with data acquisition, data-acquisition methods, and the detailed conceptualization (beyond simple thought experiments) and realization of laboratory experiments. It is often contrasted with theoretical physics, which is more concerned with predicting and explaining the physical behaviour of nature than with acquiring empirical data. Although experimental and theoretical physics are concerned with different aspects of nature, they both share the same goal of understanding it and have a symbiotic relationship. The former provides data about the universe, which can then be analyzed in order to be understood, while the latter provides explanations for the data and thus offers insight into how to better acquire data and set up experiments. Theoretical physics can also offer insight into what data is needed in order to gain a better understanding of the universe, and into what experiments to devise in order to obtain it. The tension between experimental and theoretical aspects of physics was expressed by James Clerk Maxwell as "It is not till we attempt to bring the theoretical part of our training into contact with the practical that we begin to experience the full effect of what Faraday has called 'mental inertia' - not only the difficulty of recognizing, among the concrete objects before us, the abstract relation which we have learned from books, but the distracting pain of wrenching the mind away from the symbols to the objects, and from the objects back to the symbols. This however is the price we have to pay for new ideas." History As a distinct field, experimental physics was established in early modern Europe, during what is known as the Scientific Revolution, by physicists such as Galileo Galilei, Christiaan Huygens, Johannes Kepler, Blaise Pascal and Sir Isaac Newton. In the early 17th century, Galileo made extensive use of experimentation to validate physical theories, which is the key idea in the modern scientific method. Galileo formulated and successfully tested several results in dynamics, in particular the law of inertia, which later became the first law in Newton's laws of motion. In Galileo's Two New Sciences, a dialogue between the characters Simplicio and Salviati discuss the motion of a ship (as a moving frame) and how that ship's cargo is indifferent to its motion. Huygens used the motion of a boat along a Dutch canal to illustrate an early form of the conservation of momentum. Experimental physics is considered to have reached a high point with the publication of the Philosophiae Naturalis Principia Mathematica in 1687 by Sir Isaac Newton (1643–1727). In 1687, Newton published the Principia, detailing two comprehensive and successful physical laws: Newton's laws of motion, from which arise classical mechanics; and Newton's law of universal gravitation, which describes the fundamental force of gravity. Both laws agreed well with experiment. The Principia also included several theories in fluid dynamics. From the late 17th century onward, thermodynamics was developed by physicist and chemist Robert Boyle, Thomas Young, and many others. 
In 1733, Daniel Bernoulli used statistical arguments with classical mechanics to derive thermodynamic results, initiating the field of statistical mechanics. In 1798, Benjamin Thompson (Count Rumford) demonstrated the conversion of mechanical work into heat, and in 1847 James Prescott Joule stated the law of conservation of energy, in the form of heat as well as mechanical energy. Ludwig Boltzmann, in the nineteenth century, was responsible for the modern form of statistical mechanics. Besides classical mechanics and thermodynamics, another great field of experimental inquiry within physics was the nature of electricity. Observations in the 17th and 18th centuries by scientists such as Boyle, Stephen Gray, and Benjamin Franklin created a foundation for later work. These observations also established our basic understanding of electrical charge and current. By 1808 John Dalton had discovered that atoms of different elements have different weights and proposed the modern theory of the atom. It was Hans Christian Ørsted who first proposed the connection between electricity and magnetism after observing the deflection of a compass needle by a nearby electric current. By the early 1830s Michael Faraday had demonstrated that magnetic fields and electricity could generate each other. In 1864 James Clerk Maxwell presented to the Royal Society a set of equations that described this relationship between electricity and magnetism. Maxwell's equations also predicted correctly that light is an electromagnetic wave. Starting with astronomy, the principles of natural philosophy crystallized into fundamental laws of physics which were enunciated and improved in the succeeding centuries. By the 19th century, the sciences had segmented into multiple fields with specialized researchers, and the field of physics, although logically pre-eminent, could no longer claim sole ownership of the entire field of scientific research. Current experiments Some examples of prominent experimental physics projects are: The Relativistic Heavy Ion Collider, which collides heavy ions such as gold ions (it is the first heavy ion collider) and protons; it is located at Brookhaven National Laboratory, on Long Island, USA. HERA, which collides electrons or positrons with protons, and is part of DESY, located in Hamburg, Germany. The LHC, or Large Hadron Collider, completed construction in 2008 but suffered a series of setbacks. The LHC began operations in 2008, but was shut down for maintenance until the summer of 2009. It is the world's most energetic collider and is located at CERN, on the French–Swiss border near Geneva. The collider became fully operational on March 29, 2010, a year and a half later than originally planned. LIGO, the Laser Interferometer Gravitational-Wave Observatory, is a large-scale physics experiment and observatory to detect cosmic gravitational waves and to develop gravitational-wave observations as an astronomical tool. Currently two LIGO observatories exist: LIGO Livingston Observatory in Livingston, Louisiana, and LIGO Hanford Observatory near Richland, Washington. The JWST, or James Webb Space Telescope, launched in 2021 as the successor to the Hubble Space Telescope. It surveys the sky in the infrared region. The main goals of the JWST are to understand the initial stages of the universe, galaxy formation, the formation of stars and planets, and the origins of life. 
Mississippi State Axion Search (2016 completion), Light Shining Through a Wall Experiment (LSW); EM Source: .7m, 50W continuous radio wave emitter Method Experimental physics uses two main methods of experimental research, controlled experiments, and natural experiments. Controlled experiments are often used in laboratories as laboratories can offer a controlled environment. Natural experiments are used, for example, in astrophysics when observing celestial objects where control of the variables in effect is impossible. Famous experiments Famous experiments include: Bell test experiments Cavendish experiment Chicago Pile-1 Cowan–Reines neutrino experiment Davisson–Germer experiment Delayed-choice quantum eraser Double-slit experiment Eddington experiment Eötvös experiment Fizeau experiment Foucault pendulum Franck–Hertz experiment Geiger–Marsden experiment Gravity Probe A and Gravity Probe B Hafele–Keating experiment Homestake experiment Kite experiment Oil drop experiment Michelson–Morley experiment Rømer's determination of the speed of light Stern–Gerlach experiment Torricelli's experiment Wu experiment Experimental techniques Some well-known experimental techniques include: Crystallography Ellipsometry Faraday cage Interferometry NMR Laser cooling Laser spectroscopy Raman spectroscopy Signal processing Spectroscopy STM Vacuum technique X-ray spectroscopy Inelastic neutron scattering Prominent experimental physicists Famous experimental physicists include: Archimedes (c. 287 BC – c. 212 BC) Alhazen (965–1039) Al-Biruni (973–1043) Al-Khazini (fl. 1115–1130) Galileo Galilei (1564–1642) Evangelista Torricelli (1608–1647) Robert Boyle (1627–1691) Christiaan Huygens (1629–1695) Robert Hooke (1635–1703) Isaac Newton (1643–1727) Ole Rømer (1644–1710) Stephen Gray (1666–1736) Daniel Bernoulli (1700-1782) Benjamin Franklin (1706–1790) Laura Bassi (1711–1778) Henry Cavendish (1731–1810) Joseph Priestley (1733–1804) William Herschel (1738–1822) Alessandro Volta (1745–1827) Pierre-Simon Laplace (1749–1827) Benjamin Thompson (1753–1814) John Dalton (1766–1844) Thomas Young (1773–1829) Carl Friedrich Gauss (1777–1855) Hans Christian Ørsted (1777–1851) Humphry Davy (1778–1829) Augustin-Jean Fresnel (1788–1827) Michael Faraday (1791–1867) James Prescott Joule (1818–1889) William Thomson, Lord Kelvin (1824–1907) James Clerk Maxwell (1831–1879) Ernst Mach (1838–1916) John William Strutt (3rd Baron Rayleigh) (1842–1919) Wilhelm Röntgen (1845–1923) Karl Ferdinand Braun (1850–1918) Henri Becquerel (1852–1908) Albert Abraham Michelson (1852–1931) Heike Kamerlingh Onnes (1853–1926) J. J. Thomson (1856–1940) Heinrich Hertz (1857–1894) Jagadish Chandra Bose (1858–1937) Pierre Curie (1859–1906) William Henry Bragg (1862–1942) Marie Curie (1867–1934) Robert Andrews Millikan (1868–1953) Ernest Rutherford (1871–1937) Lise Meitner (1878–1968) Max von Laue (1879–1960) Clinton Davisson (1881–1958) Hans Geiger (1882–1945) C. V. 
Raman (1888–1970) William Lawrence Bragg (1890–1971) James Chadwick (1891–1974) Arthur Compton (1892–1962) Pyotr Kapitsa (1894–1984) Charles Drummond Ellis (1895–1980) John Cockcroft (1897–1967) Patrick Blackett (Baron Blackett) (1897–1974) Ukichiro Nakaya (1900–1962) Enrico Fermi (1901–1954) Ernest Lawrence (1901–1958) Walter Houser Brattain (1902–1987) Pavel Cherenkov (1904–1990) Abraham Alikhanov (1904–1970) Carl David Anderson (1905–1991) Felix Bloch (1905–1983) Ernst Ruska (1906–1988) John Bardeen (1908–1991) William Shockley (1910–1989) Dorothy Hodgkin (1910–1994) Luis Walter Alvarez (1911–1988) Chien-Shiung Wu (1912–1997) Willis Lamb (1913–2008) Charles Hard Townes (1915–2015) Rosalind Franklin (1920–1958) Owen Chamberlain (1920–2006) Nicolaas Bloembergen (1920–2017) Vera Rubin (1928–2016) Mildred Dresselhaus (1930–2017) Rainer Weiss (1932–) Carlo Rubbia (1934–) Barry Barish (1936–) Samar Mubarakmand (1942–) Serge Haroche (1944–) Anton Zeilinger (1945–) Alain Aspect (1947–) Gerd Binnig (1947–) Steven Chu (1948–) Wolfgang Ketterle (1957–) Andre Geim (1958–) Lene Hau (1959–) Timelines See the timelines below for listings of physics experiments. Timeline of atomic and subatomic physics Timeline of classical mechanics Timeline of electromagnetism and classical optics Timeline of gravitational physics and relativity Timeline of nuclear fusion Timeline of particle discoveries Timeline of particle physics technology Timeline of states of matter and phase transitions Timeline of thermodynamics See also Physics Engineering Experimental science Measuring instrument Pulse programming References Further reading External links
Experimental physics
[ "Physics" ]
2,510
[ "Experimental physics" ]
685,428
https://en.wikipedia.org/wiki/Levetiracetam
Levetiracetam, sold under the brand name Keppra among others, is a novel antiepileptic drug (medication) used to treat epilepsy. It is used for partial-onset, myoclonic, or tonic–clonic seizures, and is taken either by mouth as an immediate or extended release formulation or by injection into a vein. "Levetiracetam was discovered in 1992 through screening in audiogenic seizure susceptible mice and, 3 years later, was reported to exhibit saturable, stereospecific binding in brain to a approximately 90 kDa protein, later identified as the ubiquitous synaptic vesicle glycoprotein SV2A." "The discovery process identifying levetiracetam's antiepileptic potential was unique because it challenged several dogmas of antiepileptic drug discovery, and thereby encountered skepticism from the epilepsy community." Common side effects of levetiracetam include sleepiness, dizziness, feeling tired, and aggression. Severe side effects may include psychosis, suicide, and allergic reactions such as Stevens–Johnson syndrome or anaphylaxis. Levetiracetam is the S-enantiomer of etiracetam. It acts as a synaptic vesicle glycoprotein 2A (SV2A) ligand. Levetiracetam was approved for medical use in the United States in 1999 and is available as a generic medication. In 2022, it was the 123rd most commonly prescribed medication in the United States, with more than 5million prescriptions. It is on the World Health Organization's List of Essential Medicines. Medical uses Focal epilepsy Levetiracetam is effective as single-drug treatment for newly diagnosed focal epilepsy in adults. It reduces focal seizures by 50% or more as an add-on medication. Partial-complex epilepsy Levetiracetam is effective as add-on treatment for partial (focal) epilepsy. Generalized epilepsy Levetiracetam is effective for treatment of generalized tonic-clonic epilepsy. It has been approved in the United States as add-on treatment for myoclonic, and tonic-clonic seizures. Levetiracetam has been approved in the European Union as a monotherapy treatment for epilepsy in the case of partial seizures or as an adjunctive therapy for partial, myoclonic, and tonic-clonic seizures. Levetiracetam is sometimes used off label to treat status epilepticus. Prevention of seizures Based on low-quality evidence, levetiracetam is about as effective as phenytoin for prevention of early seizures after traumatic brain injury. It may be effective for prevention of seizures associated with subarachnoid hemorrhages. Other Levetiracetam has not been found to be useful for treatment of neuropathic pain, nor for treatment of essential tremors. Levetiracetam has not been found to be useful for treating all developmental disorders within the autism spectrum; studies have only proven to be an effective treatment for partial, myoclonic, or tonic-clonic seizures associated with autism spectrum disorder. Special groups Levetiracetam's efficacy and tolerability in individuals with intellectual disability is comparable to those without. Studies in female pregnant rats have shown minor fetal skeletal abnormalities when given maximum recommended human doses of levetiracetam orally throughout pregnancy and lactation. Studies were conducted to look for increased adverse effects in the elderly population as compared to younger patients. One such study published in Epilepsy Research showed no significant increase in incidence of adverse symptoms experienced by young or elderly patients with disorders of the central nervous system. 
Adverse effects The most common adverse effects of levetiracetam treatment include effects on the central nervous system such as somnolence, decreased energy, headache, dizziness, mood swings and coordination difficulties. These adverse effects are most pronounced in the first month of therapy. About 4% of patients dropped out of pre-approval clinical trials due to these side effects. About 13% of people taking levetiracetam experience adverse neuropsychiatric symptoms, which are usually mild. These include agitation, hostility, apathy, anxiety, emotional lability, and depression. Serious psychiatric adverse side effects that are reversed by drug discontinuation occur in about 1%. These include hallucinations, suicidal thoughts, or psychosis. These occurred mostly within the first month of therapy, but they could develop at any time during treatment. Although rare, Stevens–Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN), which appears as a painful spreading rash with redness and blistering and/or peeling skin, have been reported in patients treated with levetiracetam. The incidence of SJS following exposure to anti-epileptics such as levetiracetam is about 1 in 3,000. Levetiracetam should not be used in people who have previously shown hypersensitivity to levetiracetam or any of the inactive ingredients in the tablet or oral solution. Such hypersensitivity reactions include, but are not limited to, unexplained rash with redness or blistered skin, difficulty breathing, and tightness in the chest or airways. In a study, the incidence of decreased bone mineral density of patients on levetiracetam was significantly higher than those for other epileptic medications. Suicide Levetiracetam, along with other anti-epileptic drugs, can increase the risk of suicidal behavior or thoughts. People taking levetiracetam should be monitored closely for signs of worsening depression, suicidal thoughts or tendencies, or any altered emotional or behavioral states. Kidney and liver Kidney impairment decreases the rate of elimination of levetiracetam from the body. Individuals with reduced kidney function may require dose adjustments. Kidney function can be estimated from the rate of creatinine clearance. Dose adjustment of levetiracetam is not necessary in liver impairment. Drug interactions No significant pharmacokinetic interactions were observed between levetiracetam or its major metabolite and concomitant medications. The pharmacokinetic profile of levetiracetam is not influenced by phenytoin, phenobarbital, primidone, carbamazepine, valproic acid, lamotrigine, gabapentin, digoxin, ethinylestradiol, or warfarin. Mechanism of action The exact mechanism by which levetiracetam acts to treat epilepsy is unknown. Levetiracetam does not exhibit pharmacologic actions similar to that of classical anticonvulsants. It does not inhibit voltage-dependent Na+ channels, does not affect GABAergic transmission, and does not bind to GABAergic or glutamatergic receptors. However, the drug binds to SV2A, a synaptic vesicle glycoprotein, and inhibits presynaptic calcium channels, reducing neurotransmitter release and acting as a neuromodulator. This is believed to impede impulse conduction across synapses. As of 2024, this is widely accepted to be its mechanism of action. However, the molecular basis of this action remains unknown. Pharmacokinetics FDA provides a detailed review of the pharmacology and biopharmaceutics of Levetiracetam in 2013. 
Absorption The absorption of levetiracetam tablets and oral solution is rapid and essentially complete. The bioavailability of levetiracetam is close to 100 percent, and the effect of food on absorption is minor. Distribution The volume of distribution of levetiracetam is similar to total body water. Levetiracetam modestly binds to plasma proteins (less than 10%). Metabolism Levetiracetam does not undergo extensive metabolism, and the metabolites formed are not active and do not exert pharmacological activity. Metabolism of levetiracetam is not by liver cytochrome P450 enzymes, but through other metabolic pathways such as hydrolysis and hydroxylation. Excretion In persons with normal kidney function, levetiracetam is eliminated from the body primarily by the kidneys with about 66 percent of the original drug passed unchanged into urine. The plasma half-life of levetiracetam in adults is about 6 to 8 hours, although the mean CSF half life of approx. 24 hours better reflects levels at site of action. Analogues Brivaracetam, a chemical analogue to levetiracetam, is a racetam derivative with similar properties. Society and culture Levetiracetam is available as regular and extended release oral formulations and as intravenous formulations. The immediate release tablet has been available as a generic in the United States since 2008, and in the UK since 2011. The patent for the extended release tablet will expire in 2028. The branded version Keppra is manufactured by UCB Pharmaceuticals S.A. In 2015, Aprecia's orally disintegrating tablet form of the drug manufactured using pharmaceutical 3D printing techniques was approved by the FDA, under the trade name Spritam. Some have said that the drug has been improved by 3D printing, as the formula used now has improved disintegration properties. Legal status Australia Levetiracetam is a Schedule 4 substance in Australia under the Poisons Standard (February 2020). A Schedule 4 substance is classified as "Prescription Only Medicine, or Prescription Animal Remedy – Substances, the use or supply of which should be by or on the order of persons permitted by State or Territory legislation to prescribe and should be available from a pharmacist on prescription." Japan Under Japanese law, levetiracetam and other racetams cannot be brought into the country except for personal use by a traveler for whom it has been prescribed. Travelers who plan to bring more than a month's worth must apply for an import certificate, known as a . Research Levetiracetam has been studied in the past for treating symptoms of neurobiological conditions such as Tourette syndrome, and anxiety disorder. However, its most serious adverse effects are behavioral, and its benefit-risk ratio in these conditions is not well understood. Levetiracetam is being tested as a drug to reduce hyperactivity in the hippocampus in Alzheimer's disease. Additionally, Levetiracetam has been experimentally shown to reduce Levodopa-induced dyskinesia, a type of movement disorder, or dyskinesia associated with the use of Levodopa, a medication used to treat Parkinson's disease. Of the ten medications evaluated in a 2023 systematic review of the literature, levetiracetam was found to be the only medication with sufficient evidence showing that it may cause seizure freedom in some infants. Further, adverse effects from levetiracetam were rarely severe enough for the medication to be discontinued in this age group. 
Because available research included only 2 published studies reporting seizure freedom rates, however, the strength of the evidence was judged to be low. References Anticonvulsants Antidyskinetic agents Belgian inventions Butyramides Enantiopure drugs Racetams World Health Organization essential medicines
Levetiracetam
[ "Chemistry" ]
2,320
[ "Stereochemistry", "Enantiopure drugs" ]
686,036
https://en.wikipedia.org/wiki/Wave%20vector
In physics, a wave vector (or wavevector) is a vector used in describing a wave, with a typical unit being cycle per metre. It has a magnitude and direction. Its magnitude is the wavenumber of the wave (inversely proportional to the wavelength), and its direction is perpendicular to the wavefront. In isotropic media, this is also the direction of wave propagation. A closely related vector is the angular wave vector (or angular wavevector), with a typical unit being radian per metre. The wave vector and angular wave vector are related by a fixed constant of proportionality, 2 radians per cycle. It is common in several fields of physics to refer to the angular wave vector simply as the wave vector, in contrast to, for example, crystallography. It is also common to use the symbol for whichever is in use. In the context of special relativity, a wave four-vector can be defined, combining the (angular) wave vector and (angular) frequency. Definition The terms wave vector and angular wave vector have distinct meanings. Here, the wave vector is denoted by and the wavenumber by . The angular wave vector is denoted by and the angular wavenumber by . These are related by . A sinusoidal traveling wave follows the equation where: is position, is time, is a function of and describing the disturbance describing the wave (for example, for an ocean wave, would be the excess height of the water, or for a sound wave, would be the excess air pressure). is the amplitude of the wave (the peak magnitude of the oscillation), is a phase offset, is the (temporal) angular frequency of the wave, describing how many radians it traverses per unit of time, and related to the period by the equation is the angular wave vector of the wave, describing how many radians it traverses per unit of distance, and related to the wavelength by the equation The equivalent equation using the wave vector and frequency is where: is the frequency is the wave vector Direction of the wave vector The direction in which the wave vector points must be distinguished from the "direction of wave propagation". The "direction of wave propagation" is the direction of a wave's energy flow, and the direction that a small wave packet will move, i.e. the direction of the group velocity. For light waves in vacuum, this is also the direction of the Poynting vector. On the other hand, the wave vector points in the direction of phase velocity. In other words, the wave vector points in the normal direction to the surfaces of constant phase, also called wavefronts. In a lossless isotropic medium such as air, any gas, any liquid, amorphous solids (such as glass), and cubic crystals, the direction of the wavevector is the same as the direction of wave propagation. If the medium is anisotropic, the wave vector in general points in directions other than that of the wave propagation. The wave vector is always perpendicular to surfaces of constant phase. For example, when a wave travels through an anisotropic medium, such as light waves through an asymmetric crystal or sound waves through a sedimentary rock, the wave vector may not point exactly in the direction of wave propagation. In solid-state physics In solid-state physics, the "wavevector" (also called k-vector) of an electron or hole in a crystal is the wavevector of its quantum-mechanical wavefunction. 
These electron waves are not ordinary sinusoidal waves, but they do have a kind of envelope function which is sinusoidal, and the wavevector is defined via that envelope wave, usually using the "physics definition". See Bloch's theorem for further details. In special relativity A moving wave surface in special relativity may be regarded as a hypersurface (a 3D subspace) in spacetime, formed by all the events passed by the wave surface. A wavetrain (denoted by some variable ) can be regarded as a one-parameter family of such hypersurfaces in spacetime. This variable is a scalar function of position in spacetime. The derivative of this scalar is a vector that characterizes the wave, the four-wavevector. The four-wavevector is a wave four-vector that is defined, in Minkowski coordinates, as: where the angular frequency is the temporal component, and the wavenumber vector is the spatial component. Alternately, the wavenumber can be written as the angular frequency divided by the phase-velocity , or in terms of inverse period and inverse wavelength . When written out explicitly its contravariant and covariant forms are: In general, the Lorentz scalar magnitude of the wave four-vector is: The four-wavevector is null for massless (photonic) particles, where the rest mass An example of a null four-wavevector would be a beam of coherent, monochromatic light, which has phase-velocity {for light-like/null} which would have the following relation between the frequency and the magnitude of the spatial part of the four-wavevector: {for light-like/null} The four-wavevector is related to the four-momentum as follows: The four-wavevector is related to the four-frequency as follows: The four-wavevector is related to the four-velocity as follows: Lorentz transformation Taking the Lorentz transformation of the four-wavevector is one way to derive the relativistic Doppler effect. The Lorentz matrix is defined as In the situation where light is being emitted by a fast moving source and one would like to know the frequency of light detected in an earth (lab) frame, we would apply the Lorentz transformation as follows. Note that the source is in a frame and earth is in the observing frame, . Applying the Lorentz transformation to the wave vector and choosing just to look at the component results in where is the direction cosine of with respect to So {|cellpadding="2" style="border:2px solid #ccccff" | |} Source moving away (redshift) As an example, to apply this to a situation where the source is moving directly away from the observer (), this becomes: Source moving towards (blueshift) To apply this to a situation where the source is moving straight towards the observer (), this becomes: Source moving tangentially (transverse Doppler effect) To apply this to a situation where the source is moving transversely with respect to the observer (), this becomes: See also Plane-wave expansion Plane of incidence References Further reading Wave mechanics Vector physical quantities
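To make the Lorentz-transformation discussion above concrete, the sketch below (illustrative helper names; units with c = 1) boosts the null four-wavevector of a light wave along its propagation direction and compares the transformed frequency with the familiar relativistic Doppler factor for a receding observer.

```python
import numpy as np

def boost_x(beta):
    """Lorentz boost along +x (c = 1) acting on contravariant four-vectors
    (k^0, k^x, k^y, k^z)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return np.array([[gamma, -gamma * beta, 0.0, 0.0],
                     [-gamma * beta, gamma, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

# Light wave of angular frequency omega travelling along +x in the source frame:
# null four-wavevector K = (omega, k_x, k_y, k_z) with |k| = omega.
omega = 2.0
K = np.array([omega, omega, 0.0, 0.0])

beta = 0.6          # observer frame moving at 0.6c along +x, i.e. receding from the source
K_obs = boost_x(beta) @ K

# The observed frequency should be redshifted by sqrt((1 - beta)/(1 + beta)).
print(K_obs[0], omega * np.sqrt((1 - beta) / (1 + beta)))   # both 1.0
```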
Wave vector
[ "Physics", "Mathematics" ]
1,395
[ "Physical phenomena", "Physical quantities", "Quantity", "Classical mechanics", "Waves", "Wave mechanics", "Vector physical quantities" ]
9,991,540
https://en.wikipedia.org/wiki/Observed%20information
In statistics, the observed information, or observed Fisher information, is the negative of the second derivative (the Hessian matrix) of the "log-likelihood" (the logarithm of the likelihood function). It is a sample-based version of the Fisher information. Definition Suppose we observe random variables , independent and identically distributed with density f(X; θ), where θ is a (possibly unknown) vector. Then the log-likelihood of the parameters given the data is . We define the observed information matrix at as Since the inverse of the information matrix is the asymptotic covariance matrix of the corresponding maximum-likelihood estimator, the observed information is often evaluated at the maximum-likelihood estimate for the purpose of significance testing or confidence-interval construction. The invariance property of maximum-likelihood estimators allows the observed information matrix to be evaluated before being inverted. Alternative definition Andrew Gelman, David Dunson and Donald Rubin define observed information instead in terms of the parameters' posterior probability, : Fisher information The Fisher information is the expected value of the observed information given a single observation distributed according to the hypothetical model with parameter : . Comparison with the expected information The comparison between the observed information and the expected information remains an active and ongoing area of research and debate. Efron and Hinkley provided a frequentist justification for preferring the observed information to the expected information when employing normal approximations to the distribution of the maximum-likelihood estimator in one-parameter families in the presence of an ancillary statistic that affects the precision of the MLE. Lindsay and Li showed that the observed information matrix gives the minimum mean squared error as an approximation of the true information if an error term of is ignored. In Lindsay and Li's case, the expected information matrix still requires evaluation at the obtained ML estimates, introducing randomness. However, when the construction of confidence intervals is of primary focus, there are reported findings that the expected information outperforms the observed counterpart. Yuan and Spall showed that the expected information outperforms the observed counterpart for confidence-interval constructions of scalar parameters in the mean squared error sense. This finding was later generalized to multiparameter cases, although the claim had been weakened to the expected information matrix performing at least as well as the observed information matrix. See also Fisher information matrix Fisher information metric References Information theory Estimation theory
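The contrast between observed and expected information discussed above can be seen in a small simulation in the Cauchy location family, the setting used by Efron and Hinkley. The sketch below is illustrative only: it assumes NumPy and SciPy are available, approximates the second derivative by a finite difference, and uses the fact that the expected Fisher information for n i.i.d. Cauchy(θ, 1) observations is n/2.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n, theta_true = 200, 1.0
x = theta_true + rng.standard_cauchy(n)          # Cauchy(location = 1, scale = 1) sample

def loglik(theta):
    # Log-likelihood of the Cauchy(theta, 1) sample, dropping the constant -n*log(pi).
    return -np.sum(np.log1p((x - theta) ** 2))

# Maximum-likelihood estimate of the location parameter.
theta_hat = minimize_scalar(lambda t: -loglik(t), bracket=(-5.0, 5.0)).x

# Observed information: minus the second derivative of the log-likelihood at the MLE,
# approximated here by a central finite difference.
h = 1e-4
observed_info = -(loglik(theta_hat + h) - 2 * loglik(theta_hat) + loglik(theta_hat - h)) / h**2

expected_info = n / 2                            # expected Fisher information for this model
print(theta_hat, observed_info, expected_info)
```

The two information numbers generally differ from sample to sample; how to use that difference when quoting a standard error for the MLE is exactly the question debated in the literature cited above.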
Observed information
[ "Mathematics", "Technology", "Engineering" ]
485
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
18,021,657
https://en.wikipedia.org/wiki/Noether%27s%20second%20theorem
In mathematics and theoretical physics, Noether's second theorem relates symmetries of an action functional with a system of differential equations. The theorem is named after its discoverer, Emmy Noether. The action S of a physical system is an integral of a so-called Lagrangian function L, from which the system's behavior can be determined by the principle of least action. Specifically, the theorem says that if the action has an infinite-dimensional Lie algebra of infinitesimal symmetries parameterized linearly by k arbitrary functions and their derivatives up to order m, then the functional derivatives of L satisfy a system of k differential equations. Noether's second theorem is sometimes used in gauge theory. Gauge theories are the basic elements of all modern field theories of physics, such as the prevailing Standard Model. Mathematical formulation First variation formula Suppose that we have a dynamical system specified in terms of independent variables , dependent variables , and a Lagrangian function of some finite order . Here is the collection of all th order partial derivatives of the dependent variables. As a general rule, Latin indices from the middle of the alphabet take the values , Greek indices take the values , and the summation convention applies to them. Multiindex notation for the Latin indices is also introduced as follows. A multiindex of length is an ordered list of ordinary indices. The length is denoted as . The summation convention does not directly apply to multiindices since the summation over lengths needs to be displayed explicitly, e.g. The variation of the Lagrangian with respect to an arbitrary variation of the dependent variables is and applying the inverse product rule of differentiation we get where are the Euler-Lagrange expressions of the Lagrangian, and the coefficients (Lagrangian momenta) are given by Variational symmetries A variation is an infinitesimal symmetry of the Lagrangian if under this variation. It is an infinitesimal quasi-symmetry if there is a current such that . It should be remarked that it is possible to extend infinitesimal (quasi-)symmetries by including variations with as well, i.e. the independent variables are also varied. However, such symmetries can always be rewritten so that they act only on the dependent variables. Therefore, in the sequel we restrict to so-called vertical variations, where . For Noether's second theorem, we consider those variational symmetries (called gauge symmetries) which are parametrized linearly by a set of arbitrary functions and their derivatives. These variations have the generic form where the coefficients can depend on the independent and dependent variables as well as the derivatives of the latter up to some finite order, the are arbitrarily specifiable functions of the independent variables, and the Latin indices take the values , where is some positive integer. For these variations to be (exact, i.e. not quasi-) gauge symmetries of the Lagrangian, it is necessary that for all possible choices of the functions . If the variations are quasi-symmetries, it is then necessary that the current also depends linearly and differentially on the arbitrary functions, i.e. then , where . For simplicity, we will assume that all gauge symmetries are exact symmetries, but the general case is handled similarly. 
Noether's second theorem The statement of Noether's second theorem is that whenever a Lagrangian as above admits gauge symmetries parametrized linearly by arbitrary functions and their derivatives, there exist linear differential relations between the Euler-Lagrange equations of . Combining the first variation formula with the fact that the variations are symmetries, we get where on the first term proportional to the Euler-Lagrange expressions, further integrations by parts can be performed as where in particular for , Hence, we have an off-shell relation where with . This relation is valid for any choice of the gauge parameters . Choosing them to be compactly supported, and integrating the relation over the manifold of independent variables, the integral total divergence terms vanish due to Stokes' theorem. Then from the fundamental lemma of the calculus of variations, we obtain that identically as off-shell relations (in fact, since the are linear in the Euler-Lagrange expressions, they necessarily vanish on-shell). Inserting this back into the initial equation, we also obtain the off-shell conservation law . The expressions are differential in the Euler-Lagrange expressions; specifically we have where Hence, the equations are differential relations to which the Euler-Lagrange expressions are subject, and therefore the Euler-Lagrange equations of the system are not independent. Converse result A converse of the second Noether theorem can also be established. Specifically, suppose that the Euler-Lagrange expressions of the system are subject to differential relations Letting be an arbitrary -tuple of functions, the formal adjoint of the operator acts on these functions through the formula which defines the adjoint operator uniquely. The coefficients of the adjoint operator are obtained through integration by parts as before, specifically where Then the definition of the adjoint operator together with the relations states that for each -tuple of functions , the value of the adjoint on the functions when contracted with the Euler-Lagrange expressions is a total divergence, viz. Therefore, if we define the variations the variation of the Lagrangian is a total divergence, hence the variations are quasi-symmetries for every value of the functions . See also Noether's first theorem Noether identities Gauge symmetry (mathematics) Notes References Further reading Calculus of variations Partial differential equations Conservation laws Theorems in mathematical physics Quantum field theory Symmetry
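A standard illustration of the theorem is free electromagnetism, sketched below in the usual notation; the only ingredient needed for the identity is the antisymmetry of the field strength.

```latex
% Worked example: the Maxwell Lagrangian and its gauge symmetry
% with an arbitrary gauge parameter eps(x).
\[
  L = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu},
  \qquad
  F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,
  \qquad
  \delta A_\mu = \partial_\mu \varepsilon .
\]
% Euler-Lagrange expressions with respect to the potentials A_mu:
\[
  E^{\mu} \;\equiv\; \frac{\delta L}{\delta A_\mu} \;=\; \partial_\nu F^{\nu\mu} .
\]
% Noether's second theorem yields one differential identity per gauge parameter;
% here it is the single off-shell relation
\[
  \partial_\mu E^{\mu} \;=\; \partial_\mu \partial_\nu F^{\nu\mu} \;\equiv\; 0 ,
\]
% which holds identically because the symmetric pair of derivatives is contracted
% with the antisymmetric field strength, so the four field equations E^mu = 0
% are not independent.
```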
Noether's second theorem
[ "Physics", "Mathematics" ]
1,211
[ "Quantum field theory", "Mathematical theorems", "Equations of physics", "Conservation laws", "Quantum mechanics", "Theorems in mathematical physics", "Geometry", "Mathematical problems", "Symmetry", "Physics theorems" ]
18,026,038
https://en.wikipedia.org/wiki/Bismuth%28III%29%20iodide
Bismuth(III) iodide is the inorganic compound with the formula BiI3. This gray-black salt is the product of the reaction of bismuth and iodine, a reaction which once was of interest in qualitative inorganic analysis. Bismuth(III) iodide adopts a distinctive crystal structure, with iodide centres occupying a hexagonally closest-packed lattice and bismuth centres occupying either none or two-thirds of the octahedral holes (alternating by layer); bismuth is therefore said to occupy one third of the total octahedral holes. Synthesis Bismuth(III) iodide forms upon heating an intimate mixture of iodine and bismuth powder: 2Bi + 3I2 → 2BiI3 BiI3 can also be made by the reaction of bismuth oxide with aqueous hydroiodic acid: Bi2O3(s) + 6HI(aq) → 2BiI3(s) + 3H2O(l) Reactions Since bismuth(III) iodide is insoluble in water, an aqueous solution can be tested for the presence of Bi3+ ions by adding a source of iodide such as potassium iodide. A black precipitate of bismuth(III) iodide indicates a positive test. Bismuth(III) iodide forms iodobismuth(III) anions when heated with halide donors: 2 NaI + BiI3 → Na2[BiI5] Bismuth(III) iodide catalyzes the Mukaiyama aldol reaction. Bi(III) is also used in a Barbier-type allylation of carbonyl compounds in combination with a reducing agent such as zinc or magnesium. References Bismuth iodide Iodides Metal halides
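As a minimal illustration of the synthesis stoichiometry above (2Bi + 3I2 → 2BiI3), the following sketch computes how much iodine is consumed for a given mass of bismuth and the theoretical yield of BiI3; the atomic masses are rounded values and the function names are chosen here purely for illustration.

```python
# Stoichiometry sketch for 2 Bi + 3 I2 -> 2 BiI3 (illustrative only).
M_BI = 208.98            # g/mol, bismuth (rounded)
M_I = 126.90             # g/mol, iodine atom (rounded)
M_I2 = 2 * M_I           # g/mol, molecular iodine
M_BII3 = M_BI + 3 * M_I  # g/mol, BiI3

def iodine_needed(mass_bi_g: float) -> float:
    """Mass of I2 (g) consumed per mass_bi_g grams of Bi (3 mol I2 per 2 mol Bi)."""
    mol_bi = mass_bi_g / M_BI
    return mol_bi * 1.5 * M_I2

def theoretical_yield(mass_bi_g: float) -> float:
    """Mass of BiI3 (g) produced from mass_bi_g grams of Bi (1:1 mole ratio)."""
    return (mass_bi_g / M_BI) * M_BII3

if __name__ == "__main__":
    print(iodine_needed(10.0))      # ~18.2 g of I2 for 10 g of Bi
    print(theoretical_yield(10.0))  # ~28.2 g of BiI3
```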
Bismuth(III) iodide
[ "Chemistry" ]
393
[ "Inorganic compounds", "Metal halides", "Salts" ]
18,026,501
https://en.wikipedia.org/wiki/Consolidation%20ratio
Consolidation ratio, within network infrastructure for Internet hosting, is the number of virtual servers that can run on each physical host machine. Many companies arrive at that figure through trial and error, stacking virtual machines on top of each other until performance slows to a crawl. "It's sort of capacity planning by bloody nose," observes Bob Gill, managing director of server research for analyst firm TheInfoPro Inc. of New York. The recent V-index showed that the average consolidation ratio is actually lower than expected: 6.3:1 VMs per physical host (actual ratio) vs. 9.8:1 (perceived). See also Nagle's algorithm References Computer networking Networking algorithms
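As a small sketch of the metric itself, the consolidation ratio is simply total virtual machines divided by physical hosts; the example below, using made-up inventory numbers, shows how a fleet-wide average and the densest single host are obtained.

```python
# Consolidation ratio = virtual machines per physical host (illustrative sketch).
from statistics import mean

# Hypothetical inventory: VM count on each physical host.
vms_per_host = [4, 7, 6, 9, 5, 7]

def consolidation_ratio(vm_counts: list[int]) -> float:
    """Average number of VMs running per physical host."""
    return mean(vm_counts)

print(f"{consolidation_ratio(vms_per_host):.1f}:1")  # fleet average, e.g. 6.3:1
print(f"densest host: {max(vms_per_host)}:1")        # heaviest-loaded single host
```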
Consolidation ratio
[ "Technology", "Engineering" ]
143
[ "Computer networking", "Computer engineering", "Computer network stubs", "Computer science", "Computing stubs" ]
4,457,082
https://en.wikipedia.org/wiki/Kohn%E2%80%93Sham%20equations
The Kohn–Sham equations are a set of mathematical equations used in quantum mechanics to simplify the complex problem of understanding how electrons behave in atoms and molecules. They introduce fictitious non-interacting electrons and use them to find the most stable arrangement of electrons, which helps scientists understand and predict the properties of matter at the atomic and molecular scale. Description In physics and quantum chemistry, specifically density functional theory, the Kohn–Sham equation is the non-interacting Schrödinger equation (more precisely, a Schrödinger-like equation) of a fictitious system (the "Kohn–Sham system") of non-interacting particles (typically electrons) that generate the same density as any given system of interacting particles. In the Kohn–Sham theory the introduction of the noninteracting kinetic energy functional Ts into the energy expression leads, upon functional differentiation, to a collection of one-particle equations whose solutions are the Kohn–Sham orbitals. The Kohn–Sham equation is defined by a local effective (fictitious) external potential in which the non-interacting particles move, typically denoted as vs(r) or veff(r), called the Kohn–Sham potential. If the particles in the Kohn–Sham system are non-interacting fermions (non-fermion density functional theory has also been researched), the Kohn–Sham wavefunction is a single Slater determinant constructed from a set of orbitals that are the lowest-energy solutions to \(\left(-\tfrac{\hbar^2}{2m}\nabla^2 + v_{\mathrm{eff}}(\mathbf{r})\right)\varphi_i(\mathbf{r}) = \varepsilon_i\,\varphi_i(\mathbf{r}).\) This eigenvalue equation is the typical representation of the Kohn–Sham equations. Here εi is the orbital energy of the corresponding Kohn–Sham orbital φi, and the density for an N-particle system is \(\rho(\mathbf{r}) = \sum_{i=1}^{N} |\varphi_i(\mathbf{r})|^2.\) History The Kohn–Sham equations are named after Walter Kohn and Lu Jeu Sham, who introduced the concept at the University of California, San Diego, in 1965. Kohn received a Nobel Prize in Chemistry in 1998 for the Kohn–Sham equations and other work related to density functional theory (DFT). Kohn–Sham potential In Kohn–Sham density functional theory, the total energy of a system is expressed as a functional of the charge density as \(E[\rho] = T_s[\rho] + \int v_{\mathrm{ext}}(\mathbf{r})\,\rho(\mathbf{r})\,\mathrm{d}^3r + E_H[\rho] + E_{xc}[\rho],\) where Ts is the Kohn–Sham kinetic energy, which is expressed in terms of the Kohn–Sham orbitals as \(T_s[\rho] = -\tfrac{\hbar^2}{2m}\sum_{i=1}^{N}\int \varphi_i^*(\mathbf{r})\,\nabla^2\varphi_i(\mathbf{r})\,\mathrm{d}^3r,\) vext is the external potential acting on the interacting system (at minimum, for a molecular system, the electron–nuclei interaction), EH is the Hartree (or Coulomb) energy and Exc is the exchange–correlation energy. The Kohn–Sham equations are found by varying the total energy expression with respect to a set of orbitals, subject to constraints on those orbitals, to yield the Kohn–Sham potential as \(v_{\mathrm{eff}}(\mathbf{r}) = v_{\mathrm{ext}}(\mathbf{r}) + \int \frac{\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}^3r' + \frac{\delta E_{xc}[\rho]}{\delta \rho(\mathbf{r})},\) where the last term, the functional derivative of the exchange–correlation energy with respect to the density, is the exchange–correlation potential. This term, and the corresponding energy expression, are the only unknowns in the Kohn–Sham approach to density functional theory. An approximation that does not vary the orbitals is Harris functional theory. The Kohn–Sham orbital energies εi, in general, have little physical meaning (see Koopmans' theorem). The sum of the orbital energies is related to the total energy as \(E = \sum_{i=1}^{N}\varepsilon_i - E_H[\rho] + E_{xc}[\rho] - \int \frac{\delta E_{xc}[\rho]}{\delta\rho(\mathbf{r})}\,\rho(\mathbf{r})\,\mathrm{d}^3r.\) Because the orbital energies are non-unique in the more general restricted open-shell case, this equation only holds true for specific choices of orbital energies (see Koopmans' theorem). References Density functional theory Electron Eponymous equations of physics Quantum mechanics
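To make the self-consistency implicit in these equations concrete, here is a schematic (not production) sketch of the Kohn–Sham self-consistent field loop: build the effective potential from the current density, solve the single-particle eigenvalue problem, rebuild the density from the lowest N orbitals, and iterate to convergence. The grid, the soft-Coulomb Hartree term, and the crude LDA-like exchange stand-in are toy choices made only to show the structure of the iteration, not a real DFT implementation.

```python
# Toy 1D Kohn-Sham self-consistent field loop (illustrative sketch only).
import numpy as np

n_grid, L, n_orb = 200, 10.0, 2          # grid points, box size, occupied orbitals
x = np.linspace(-L / 2, L / 2, n_grid)
dx = x[1] - x[0]

# Kinetic energy operator: -(1/2) d^2/dx^2 via central finite differences.
T = -0.5 * (np.diag(np.ones(n_grid - 1), -1)
            - 2 * np.diag(np.ones(n_grid))
            + np.diag(np.ones(n_grid - 1), 1)) / dx**2
v_ext = 0.5 * x**2                        # external potential (harmonic trap)

def hartree(rho):
    """Soft-Coulomb Hartree potential built from the current density."""
    return np.array([np.sum(rho / np.sqrt((x - xi) ** 2 + 1.0)) * dx for xi in x])

def v_xc(rho):
    """Crude LDA-like stand-in for the exchange-correlation potential."""
    return -(3.0 / np.pi * rho) ** (1.0 / 3.0)

rho = np.ones(n_grid) * n_orb / L         # initial guess for the density
for it in range(100):
    v_eff = v_ext + hartree(rho) + v_xc(rho)   # Kohn-Sham potential from rho
    eps, phi = np.linalg.eigh(T + np.diag(v_eff))  # eigenvalue problem
    phi = phi / np.sqrt(dx)                    # normalize orbitals on the grid
    rho_new = np.sum(phi[:, :n_orb] ** 2, axis=1)  # density from lowest N orbitals
    if np.max(np.abs(rho_new - rho)) < 1e-6:
        break
    rho = 0.5 * rho + 0.5 * rho_new            # simple density mixing for stability
print(it, eps[:n_orb])                         # iterations used, occupied orbital energies
```

The mixing step is a standard practical trick; without it, naive iteration of the density can oscillate rather than converge.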
Kohn–Sham equations
[ "Physics", "Chemistry" ]
695
[ "Electron", "Density functional theory", "Quantum chemistry", "Equations of physics", "Molecular physics", "Theoretical physics", "Eponymous equations of physics", "Quantum mechanics" ]
4,458,422
https://en.wikipedia.org/wiki/Electron-withdrawing%20group
An electron-withdrawing group (EWG) is a group or atom that has the ability to draw electron density toward itself and away from other adjacent atoms. This electron density transfer is often achieved by resonance or inductive effects. Electron-withdrawing groups have significant impacts on fundamental chemical processes such as acid-base reactions, redox potentials, and substitution reactions. Consequences of EWGs Effects on Brønsted acidity Electron-withdrawing groups exert an "inductive" or "electron-pulling" effect on covalent bonds. The stronger the electron-withdrawing group, the lower the pKa of the carboxylic acid to which it is attached. The inductive effect is cumulative: trichloroacetic acid is 1000x stronger than chloroacetic acid. The impact of the EWG on pKa decreases with distance from the carboxyl group. For benzoic acids, the effect is quantified by the Hammett equation: \(\log\frac{K}{K_0} = \sigma\rho\) where \(K_0\) is the reference constant (for the unsubstituted compound), \(\sigma\) is the substituent constant, and \(\rho\) is the reaction constant. Effect on Lewis acidity EWGs enhance Lewis acidity, making compounds more reactive as Lewis acids. For example, fluorine is a stronger electron-withdrawing substituent than methyl, resulting in an increased Lewis acidity of boron trifluoride relative to trimethylborane. Electron-withdrawing groups also tend to reduce Lewis basicity. Effect on aromatic substitution reactions Electrophilic aromatic substitution is famously affected by EWGs. The effect is transmitted by inductive and resonance effects. Benzene bearing an EWG typically undergoes electrophilic substitution at the meta positions. Overall the rates are diminished; thus EWGs are called deactivating. Aromatic rings bearing electron-withdrawing groups are, by contrast, more prone to nucleophilic aromatic substitution. For example, chlorodinitrobenzene is far more susceptible to reactions displacing chloride than is chlorobenzene. Effects on redox potential In the context of electron transfer, these groups enhance the oxidizing power of the attached species. For example, tetracyanoethylene serves as an oxidant because it bears four cyano substituents, which are electron-withdrawing groups. Oxidants with EWGs are stronger than the parent compound: acetylferrocenium is 300 mV more oxidizing than ferrocenium. Comparison with electron-donating groups Electron-withdrawing groups have the opposite effect of electron-donating groups (EDGs). Both terms describe functional groups, but electron-withdrawing groups pull electron density away from the rest of the molecule, whereas EDGs push electron density onto it. See also Electron-donating group References Physical organic chemistry Chemical bonding
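A small numeric sketch of the Hammett relation above: given a reaction constant ρ and tabulated substituent constants σ, the shift in log K (equivalently, in pKa) relative to the unsubstituted parent follows directly. The σ values below are approximate literature values quoted only for illustration, and ρ = 1.0 corresponds to the defining reference reaction, ionization of benzoic acids in water.

```python
# Hammett equation sketch: log10(K/K0) = rho * sigma  (illustrative only).
# sigma_para values are approximate; positive sigma marks electron-withdrawing groups.
sigma_para = {"H": 0.00, "CH3": -0.17, "Cl": 0.23, "CN": 0.66, "NO2": 0.78}
rho = 1.0            # benzoic acid ionization in water (reference reaction)
pKa_benzoic = 4.20   # parent benzoic acid, approximate

for group, sigma in sigma_para.items():
    # EWGs (positive sigma) lower the pKa; EDGs (negative sigma) raise it.
    pKa = pKa_benzoic - rho * sigma
    print(f"{group:>3}: sigma = {sigma:+.2f}, predicted pKa ~ {pKa:.2f}")
```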
Electron-withdrawing group
[ "Physics", "Chemistry", "Materials_science" ]
575
[ "nan", "Chemical bonding", "Condensed matter physics", "Physical organic chemistry" ]
4,458,810
https://en.wikipedia.org/wiki/Neuromodulation
Neuromodulation is the physiological process by which a given neuron uses one or more chemicals to regulate diverse populations of neurons. Neuromodulators typically bind to metabotropic, G-protein coupled receptors (GPCRs) to initiate a second messenger signaling cascade that induces a broad, long-lasting signal. This modulation can last for hundreds of milliseconds to several minutes. Some of the effects of neuromodulators include altering intrinsic firing activity, increasing or decreasing voltage-dependent currents, altering synaptic efficacy, increasing bursting activity and reconfiguring synaptic connectivity. Major neuromodulators in the central nervous system include: dopamine, serotonin, acetylcholine, histamine, norepinephrine, nitric oxide, and several neuropeptides. Cannabinoids can also be powerful CNS neuromodulators. Neuromodulators can be packaged into vesicles and released by neurons, secreted as hormones and delivered through the circulatory system. A neuromodulator can be conceptualized as a neurotransmitter that is not reabsorbed by the pre-synaptic neuron or broken down into a metabolite. Some neuromodulators end up spending a significant amount of time in the cerebrospinal fluid (CSF), influencing (or "modulating") the activity of several other neurons in the brain. Neuromodulator systems The major neurotransmitter systems are the noradrenaline (norepinephrine) system, the dopamine system, the serotonin system, and the cholinergic system. Drugs targeting the neurotransmitter of such systems affect the whole system, which explains the mode of action of many drugs. Most other neurotransmitters, on the other hand, e.g. glutamate, GABA and glycine, are used very generally throughout the central nervous system. Noradrenaline system The noradrenaline system consists of around 15,000 neurons, primarily in the locus coeruleus. This is diminutive compared to the more than 100 billion neurons in the brain. As with dopaminergic neurons in the substantia nigra, neurons in the locus coeruleus tend to be melanin-pigmented. Noradrenaline is released from the neurons, and acts on adrenergic receptors. Noradrenaline is often released steadily so that it can prepare the supporting glial cells for calibrated responses. Despite containing a relatively small number of neurons, when activated, the noradrenaline system plays major roles in the brain including involvement in suppression of the neuroinflammatory response, stimulation of neuronal plasticity through LTP, regulation of glutamate uptake by astrocytes and LTD, and consolidation of memory. Dopamine system The dopamine or dopaminergic system consists of several pathways, originating from the ventral tegmentum or substantia nigra as examples. It acts on dopamine receptors. Parkinson's disease is at least in part related to dropping out of dopaminergic cells in deep-brain nuclei, primarily the melanin-pigmented neurons in the substantia nigra but secondarily the noradrenergic neurons of the locus coeruleus. Treatments potentiating the effect of dopamine precursors have been proposed and effected, with moderate success. Dopamine pharmacology Cocaine, for example, blocks the reuptake of dopamine, leaving these neurotransmitters in the synaptic gap for longer. AMPT prevents the conversion of tyrosine to L-DOPA, the precursor to dopamine; reserpine prevents dopamine storage within vesicles; and deprenyl inhibits monoamine oxidase (MAO)-B and thus increases dopamine levels. 
Serotonin system The serotonin created by the brain comprises around 10% of total body serotonin. The majority (80-90%) is found in the gastrointestinal (GI) tract. It travels around the brain along the medial forebrain bundle and acts on serotonin receptors. In the peripheral nervous system (such as in the gut wall) serotonin regulates vascular tone. Serotonin pharmacology Selective serotonin reuptake inhibitors (SSRIs) such as fluoxetine are widely used antidepressants that specifically block the reuptake of serotonin with less effect on other transmitters. Tricyclic antidepressants also block reuptake of biogenic amines from the synapse, but may primarily affect serotonin or norepinephrine or both. They typically take four to six weeks to alleviate any symptoms of depression. They are considered to have immediate and long-term effects. Monoamine oxidase inhibitors allow reuptake of biogenic amine neurotransmitters from the synapse, but inhibit an enzyme which normally destroys (metabolizes) some of the transmitters after their reuptake. More of the neurotransmitters (especially serotonin, noradrenaline and dopamine) are available for release into synapses. MAOIs take several weeks to alleviate the symptoms of depression. Although changes in neurochemistry are found immediately after taking these antidepressants, symptoms may not begin to improve until several weeks after administration. Increased transmitter levels in the synapse alone does not relieve the depression or anxiety. Cholinergic system The cholinergic system consists of projection neurons from the pedunculopontine nucleus, laterodorsal tegmental nucleus, and basal forebrain and interneurons from the striatum and nucleus accumbens. It is not yet clear whether acetylcholine as a neuromodulator acts through volume transmission or classical synaptic transmission, as there is evidence to support both theories. Acetylcholine binds to both metabotropic muscarinic receptors (mAChR) and the ionotropic nicotinic receptors (nAChR). The cholinergic system has been found to be involved in responding to cues related to the reward pathway, enhancing signal detection and sensory attention, regulating homeostasis, mediating the stress response, and encoding the formation of memories. GABA Gamma-aminobutyric acid (GABA) has an inhibitory effect on brain and spinal cord activity. GABA is an amino acid that is the primary inhibitory neurotransmitter for the central nervous system (CNS). It reduces neuronal excitability by inhibiting nerve transmission. GABA has a multitude of different functions during development and influences the migration, proliferation, and proper morphological development of neurons. It also influences the timing of critical periods and potentially primes the earliest neuronal networks. There are two main types of GABA receptors: GABAa and GABAb. GABAa receptors inhibit neurotransmitter release and/or neuronal excitability and are a ligand-gated chloride channel. GABAb receptors are slower to react due to a GCPR that acts to inhibit neurons. GABA can be the culprit for many disorders ranging from schizophrenia to major depressive disorder because of its inhibitory characteristics being dampened. Neuropeptides Neuropeptides are small proteins used for communication in the nervous system. Neuropeptides represent the most diverse class of signaling molecules. There are 90 known genes that encode human neuropeptide precursors. In invertebrates, there are ~50 known genes encoding neuropeptide precursors. 
Most neuropeptides bind to G-protein coupled receptors, however some neuropeptides directly gate ion channels or act through kinase receptors. Opioid peptides – a large family of endogenous neuropeptides that are widely distributed throughout the central and peripheral nervous system. Opiate drugs such as heroin and morphine act at the receptors of these neurotransmitters. Endorphins Enkephalins Dynorphins Vasopressin Oxytocin Gastrin Cholecystokinins Somatostatin Cortistatins RF-amides Neuropeptide FF Neuropeptide Y - Pancreatic Polypeptide Peptide YY Prolactin-releasing peptide Calcitonin Adrenomedullin Natriuretic Bombesin-like peptides Endothelin Glucagon Secretin Vasoactive Intestinal Peptide Growth Hormone Releasing Hormone Gastric Inhibitory Peptide Corticotropin Releasing Hormone Urocortin Urotensin Substance P Neuromedins Tensin Kinin Granin Nerve Growth Factor Motilin Ghrelin Galanin Neuropeptide B/W Neurexophilin Insulin Relaxin Agouti-related protein homolog gene Prolactin Apelin Metastasis-suppressor Diazepam-binding inhibitor Cerebellins Leptin Adiponectin Visfatin Resistin Nucleibindin Ubiquitin Neuromuscular systems Neuromodulators may alter the output of a physiological system by acting on the associated inputs (for instance, central pattern generators). However, modeling work suggests that this alone is insufficient, because the neuromuscular transformation from neural input to muscular output may be tuned for particular ranges of input. Stern et al. (2007) suggest that neuromodulators must act not only on the input system but must change the transformation itself to produce the proper contractions of muscles as output. Volume transmission Neurotransmitter systems are systems of neurons in the brain expressing certain types of neurotransmitters, and thus form distinct systems. Activation of the system causes effects in large volumes of the brain, called volume transmission. Volume transmission is the diffusion of neurotransmitters through the brain extracellular fluid released at points that may be remote from the target cells with the resulting activation of extra-synaptic receptors, and with a longer time course than for transmission at a single synapse. Such prolonged transmitter action is called tonic transmission, in contrast to the phasic transmission that occurs rapidly at single synapses. Tonic Transmission There are three main components of tonic transmission: Continued release, sustained release, and baseline regulation. In the context of neuromodulation, continuous release is responsible for releasing neurotransmitters/neuromodulators at a constant low level from glial cells and tonic active neurons. Sustained Influence provides long-term stability to the entire process, and baseline regulation ensures that the neurons are in a continued state of readiness to respond to any signals. Acetylcholine, noradrenaline, dopamine, norepinephrine, and serotonin are some of the main components in tonic transmission to mediate arousal and attention. Phasic Transmission There are three main components of phasic transmission: burst release, transient effects, and stimulus-driven effects. As the name suggests, burst release is in charge of releasing neurotransmitters/neuromodulators in intense, acute bursts. Transient effects create acute momentary adjustments in neural activity. Lastly, as the name suggests, stimulus-driven effects react to sensory input, external stressors, and reward stimuli, which involve dopamine, norepinephrine, and serotonin. 
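As a toy illustration of the tonic versus phasic distinction drawn above, the sketch below generates a constant low-level "tonic" release trace and a burst-like "phasic" trace; all numbers are arbitrary didactic values, not physiological measurements.

```python
# Toy traces contrasting tonic (steady baseline) and phasic (burst) release.
# All values are arbitrary illustrative units, not physiological data.
import numpy as np

t = np.arange(0, 10.0, 0.01)                      # time axis, seconds
tonic = np.full_like(t, 0.2)                      # constant low-level release

phasic = np.zeros_like(t)
for burst_time in (2.0, 5.0, 8.0):                # stimulus-driven bursts
    phasic += 1.0 * np.exp(-((t - burst_time) ** 2) / (2 * 0.05 ** 2))

total = tonic + phasic                            # combined signal at a target neuron
print(f"tonic mean: {tonic.mean():.2f}, phasic peak: {phasic.max():.2f}")
```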
Types of Neuromodulation Therapies and Treatments There are two main categories of neuromodulation therapy: chemical and electrical. Electrical Neuromodulator Therapies Electrical neuromodulation has three subcategories: deep brain, spinal cord, and transcranial stimulation, each aiming to treat specific conditions. Deep brain stimulation involves electrodes surgically implanted into specific regions of the brain commonly responsible for movement and motor control, and is used for disorders such as Parkinson's disease and tremor. Spinal cord stimulation works through electrodes placed near the spinal cord that send electrical signals through the body to treat various forms of chronic pain, such as lower back pain and complex regional pain syndrome (CRPS). This form of neuromodulation is considered one of the higher-risk treatments because of the manipulation required near the spinal cord. Transcranial magnetic stimulation is slightly different in that it utilizes a magnetic field to generate electrical currents in the brain. This treatment is widely used for various mental health conditions such as depression, obsessive-compulsive disorder, and other mood disorders. Neuromodulation is often used as a treatment for moderate to severe migraine by way of nerve stimulation. These treatments work by acting on the basic ascending pain pathways, and there are three main modes: a device connected to the body that sends electrical pulses directly to the affected site (transcutaneous electrical nerve stimulation), a device that stimulates the brain directly (transcranial magnetic stimulation), or a device held close to the neck that blocks the modulation of pain signals from the PNS to the CNS. The two most notable forms of stimulation used in these treatments are electrical and magnetic: electrical nerve stimulation includes transcranial alternating current stimulation and transcranial direct current stimulation, while magnetic stimulation includes single-pulse and repetitive transcranial magnetic stimulation. Chemical Neuromodulator Therapies Chemical neuromodulation mostly consists of combining natural and artificial chemical substances to treat various conditions. It uses both invasive and non-invasive modes of treatment, including pumps, injections, and oral medications. This mode of treatment can be used to manage immune responses such as inflammation, as well as mood and motor disorders. See also Three-factor learning 5-HT2c receptor agonist Natural neuroactive substance References External links North American Neuromodulation Society Neuromodulation and Neural Plasticity International Neuromodulation Society Scholarpedia article on neuromodulation Neurochemistry Neurophysiology
Neuromodulation
[ "Chemistry", "Biology" ]
2,963
[ "Biochemistry", "Neurochemistry" ]
4,459,356
https://en.wikipedia.org/wiki/Solvated%20electron
A solvated electron is a free electron in a solution, in which it behaves like an anion. Saying that an electron is solvated in a solution means that it is bound by the solution. The notation for a solvated electron in formulas of chemical reactions is "e−". Often, discussions of solvated electrons focus on their solutions in ammonia, which are stable for days, but solvated electrons also occur in water and many other solvents; in fact, they occur in any solvent that mediates outer-sphere electron transfer. The solvated electron is responsible for a great deal of radiation chemistry. Ammonia solutions Liquid ammonia will dissolve all of the alkali metals and other electropositive metals such as Ca, Sr, Ba, Eu, and Yb (also Mg using an electrolytic process), giving characteristic blue solutions. For alkali metals in liquid ammonia, the solution is blue when dilute and copper-colored when more concentrated (> 3 molar). These solutions conduct electricity. The blue colour of the solution is due to ammoniated electrons, which absorb energy in the visible region of light. The diffusivity of the solvated electron in liquid ammonia can be determined using potential-step chronoamperometry. Solvated electrons in ammonia are the anions of salts called electrides. Na + 6 NH3 → [Na(NH3)6]+ + e− The reaction is reversible: evaporation of the ammonia solution produces a film of metallic sodium. Case study: Li in NH3 A lithium–ammonia solution at −60 °C is saturated at about 15 mol% metal (MPM). When the concentration is increased in this range, electrical conductivity increases from 10^−2 to 10^4 Ω−1cm−1 (larger than that of liquid mercury). At around 8 MPM, a "transition to the metallic state" (TMS) takes place (also called a "metal-to-nonmetal transition" (MNMT)). At 4 MPM a liquid-liquid phase separation takes place: the less dense gold-colored phase becomes immiscible with a denser blue phase. Above 8 MPM the solution is bronze/gold-colored. In the same concentration range the overall density decreases by 30%. Other solvents Alkali metals also dissolve in some small primary amines, such as methylamine and ethylamine, and in hexamethylphosphoramide, forming blue solutions. Tetrahydrofuran (THF) dissolves alkali metal, but a Birch reduction analogue does not proceed without a diamine ligand. Solvated electron solutions of the alkaline earth metals magnesium, calcium, strontium and barium in ethylenediamine have been used to intercalate graphite with these metals. Water Solvated electrons are involved in the reaction of alkali metals with water, even though the solvated electron has only a fleeting existence there. Below pH = 9.6 the hydrated electron reacts with the hydronium ion giving atomic hydrogen, which in turn can react with the hydrated electron giving hydroxide ion and the usual molecular hydrogen H2. Solvated electrons can be found even in the gas phase. This implies their possible existence in the upper atmosphere of Earth and involvement in nucleation and aerosol formation. The standard electrode potential of the hydrated electron is −2.77 V. Its equivalent conductivity of 177 Mho cm2 is similar to that of the hydroxide ion. This value of equivalent conductivity corresponds to a diffusivity of 4.75×10^−5 cm2s−1.
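The diffusivity quoted above can be cross-checked against the equivalent conductivity with the Nernst–Einstein relation, D = λRT/(z²F²); the short sketch below performs that arithmetic with rounded constants, purely as a consistency check rather than as a measured value.

```python
# Nernst-Einstein cross-check: D = lambda * R * T / (z^2 * F^2)
# for the hydrated electron, using the equivalent conductivity quoted above.
R = 8.314          # J mol^-1 K^-1
F = 96485.0        # C mol^-1
T = 298.15         # K
z = 1              # charge number of e-(aq)
lam = 177.0        # S cm^2 mol^-1, equivalent conductivity from the text

D = lam * R * T / (z**2 * F**2)   # the units reduce to cm^2 s^-1
print(f"D ~ {D:.2e} cm^2/s")      # ~4.7e-05 cm^2/s, consistent with ~4.75e-5
```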
Reactivity Although quite stable, the blue ammonia solutions containing solvated electrons degrade rapidly in the presence of catalysts to give colorless solutions of sodium amide: 2 [Na(NH3)6]+e− → H2 + 2 NaNH2 + 10 NH3 Electride salts can be isolated by the addition of macrocyclic ligands such as crown ether and cryptands to solutions containing solvated electrons. These ligands strongly bind the cations and prevent their re-reduction by the electron. [Na(NH3)6]+e− + cryptand → [Na(cryptand)]+e−+ 6 NH3 The solvated electron reacts with oxygen to form a superoxide radical (O2.−). With nitrous oxide, solvated electrons react to form hydroxyl radicals (HO.). Uses Solvated electrons are involved in electrode processes, a broad area with many technical applications (electrosynthesis, electroplating, electrowinning). A specialized use of sodium-ammonia solutions is the Birch reduction. Other reactions where sodium is used as a reducing agent also are assumed to involve solvated electrons, e.g. the use of sodium in ethanol as in the Bouveault–Blanc reduction. Work by Cullen et al. showed that metal-ammonia solutions can be used to intercalate a range of layered materials, which can then be exfoliated in polar, aprotic solvents, to produce ionic solutions of two-dimensional materials. An example of this is the intercalation of graphite with potassium and ammonia, which is then exfoliated by spontaneous dissolution in THF to produce a graphenide solution. History The observation of the color of metal-electride solutions is generally attributed to Humphry Davy. In 1807–1809, he examined the addition of grains of potassium to gaseous ammonia (liquefaction of ammonia was invented in 1823). James Ballantyne Hannay and J. Hogarth repeated the experiments with sodium in 1879–1880. W. Weyl in 1864 and C. A. Seely in 1871 used liquid ammonia, whereas Hamilton Cady in 1897 related the ionizing properties of ammonia to that of water. Charles A. Kraus measured the electrical conductance of metal ammonia solutions and in 1907 attributed it to the electrons liberated from the metal. In 1918, G. E. Gibson and W. L. Argo introduced the solvated electron concept. They noted based on absorption spectra that different metals and different solvents (methylamine, ethylamine) produce the same blue color, attributed to a common species, the solvated electron. In the 1970s, solid salts containing electrons as the anion were characterized. References Further reading The electrochemistry of the solvated electron. Technische Universiteit Eindhoven. IAEA On the Electrolytic Generation of Hydrated Electron. Fundamentals of Radiation Chemistry, chapter 6, p. 145–198, Academic Press, 1999. Tables of bimolecular rate constants of hydrated electrons, hydrogen atoms and hydroxyl radicals with inorganic and organic compounds, International Journal of Applied Radiation and Isotopes Anbar, Neta Solutions Nuclear chemistry Organic chemistry Radiation Electrides
Solvated electron
[ "Physics", "Chemistry" ]
1,415
[ "Transport phenomena", "Electron", "Physical phenomena", "Electrides", "Nuclear chemistry", "Salts", "Homogeneous chemical mixtures", "Waves", "Radiation", "nan", "Nuclear physics", "Solutions" ]
4,461,378
https://en.wikipedia.org/wiki/Monocline
A monocline (or, rarely, a monoform) is a step-like fold in rock strata consisting of a zone of steeper dip within an otherwise horizontal or gently dipping sequence. Formation Monoclines may be formed in several different ways (see diagram) By differential compaction over an underlying structure, particularly a large fault at the edge of a basin due to the greater compactibility of the basin fill, the amplitude of the fold will die out gradually upwards. By mild reactivation of an earlier extensional fault during a phase of inversion causing folding in the overlying sequence. As a form of fault propagation fold during upward propagation of an extensional fault in basement into an overlying cover sequence. As a form of fault propagation fold during upward propagation of a reverse fault in basement into an overlying cover sequence. Examples Waterpocket Fold in Capitol Reef National Park, Utah Comb Ridge in southern Utah Grandview-Phantom Monocline in Grand Canyon, Arizona Grand Hogback in Colorado Lebombo Mountains in Southern Africa Lapstone Monocline in the Blue Mountains (Australia) Beaumaris Monocline in Victoria (Australia) Purbeck Monocline on the Isle of Purbeck, Dorset, England Fore-Sudetic Monocline, Poland Sindh Monocline, Pakistan Torres Flexure, southern Brazil See also Anticline Homocline Syncline References Structural geology Deformation (mechanics)
Monocline
[ "Materials_science", "Engineering" ]
287
[ "Deformation (mechanics)", "Materials science" ]
14,020,842
https://en.wikipedia.org/wiki/Particle%20size
Particle size is a notion introduced for comparing dimensions of solid particles (flecks), liquid particles (droplets), or gaseous particles (bubbles). The notion of particle size applies to particles in colloids, in ecology, in granular material (whether airborne or not), and to particles that form a granular material (see also grain size). Measurement There are several methods for measuring particle size and particle size distribution. Some of them are based on light, others on ultrasound, or electric field, or gravity, or centrifugation. The use of sieves is a common measurement technique; however, this process can be more susceptible to human error and is time consuming. Technology such as dynamic image analysis (DIA) can make particle size distribution analyses much easier. This approach can be seen in instruments like Retsch Technology's CAMSIZER or the Sympatec QICPIC series of instruments. They still lack the capability of inline measurements for real time monitoring in production environments. Therefore, inline imaging devices like the SOPAT system are most efficient. Machine learning algorithms are used to increase the performance of particle size measurement. This line of research can yield low-cost and real time particle size analysis. In all methods the size is an indirect measure, obtained by a model that transforms, in an abstract way, the real particle shape into a simple and standardized shape, like a sphere (the most usual) or a cuboid (when a minimum bounding box is used), where the size parameter (e.g. the diameter of the sphere) makes sense. An exception is the mathematical morphology approach, where no shape hypothesis is necessary. Definition of the particle size for an ensemble (collection) of particles presents another problem. Real systems are practically always polydisperse, which means that the particles in an ensemble have different sizes. The notion of particle size distribution reflects this polydispersity. There is often a need for a certain average particle size for the ensemble of particles. Expressions for sphere size The particle size of a spherical object can be unambiguously and quantitatively defined by its diameter. However, a typical material object is likely to be irregular in shape and non-spherical. The above quantitative definition of particle size cannot be applied to non-spherical particles. There are several ways of extending the above quantitative definition to apply to non-spherical particles. Existing definitions are based on replacing a given particle with an imaginary sphere that has one of the properties identical with the particle. Volume-based particle size Volume-based particle size equals the diameter of the sphere that has the same volume as a given particle. Typically used in sieve analysis, as shape hypothesis (sieve's mesh size as the sphere diameter). \(D = \sqrt[3]{\frac{6V}{\pi}}\) where D: diameter of representative sphere, V: volume of particle. Area-based particle size Area-based particle size equals the diameter of the sphere that has the same surface area as a given particle. Typically used in optical granulometry techniques. \(D = \sqrt{\frac{A}{\pi}}\) where D: diameter of representative sphere, A: surface area of particle. Indirect measure expressions In some measures the size (a length dimension in the expression) cannot be obtained directly, only calculated as a function of other dimensions and parameters. The main cases are illustrated below. Weight-based (spheroidal) particle size Weight-based particle size equals the diameter of the sphere that has the same weight as a given particle.
Useful as a hypothesis in centrifugation and decantation, or when the number of particles can be estimated (to obtain the average particle weight as the sample weight divided by the number of particles in the sample). \(D = \sqrt[3]{\frac{6W}{\pi \rho g}}\) where D: diameter of representative sphere, W: weight of particle, ρ: density of particle, g: gravitational acceleration. This formula is only valid when all particles have the same density. Aerodynamic particle size Hydrodynamic or aerodynamic particle size equals the diameter of the sphere that has the same drag coefficient as a given particle. Another complexity in defining particle size in a fluid medium appears for particles with sizes below a micrometre. When a particle becomes that small, the thickness of the interface layer becomes comparable with the particle size. As a result, the position of the particle surface becomes uncertain. There is a convention for placing this imaginary surface at a certain position suggested by Gibbs and presented in many books on interface and colloid science. International conventions There is an international standard on presenting various characteristic particle sizes, ISO 9276 (Representation of results of particle size analysis). This set of various average sizes includes the median size, geometric mean size, and average size. In the selection of particles of specific small sizes, the use of ISO 565 and ISO 3310-1 for the choice of mesh size is common. Colloidal particle In materials science and colloidal chemistry, the term colloidal particle refers to a small amount of matter having a size typical for colloids and with a clear phase boundary. The dispersed-phase particles have a diameter between approximately 1 and 1000 nanometers. Colloids are heterogeneous in nature, invisible to the naked eye, and always move in a random zig-zag-like motion known as Brownian motion. The scattering of light by colloidal particles is known as the Tyndall effect. See also Dynamic light scattering Grain size Laser diffraction analysis Micromeritics Dispersion Technology Sauter mean diameter References 8. ISO Standard 14644-1 Classification Airborne Particles Cleanliness Colloidal chemistry Size
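The equivalent-sphere definitions above translate directly into code; the sketch below computes volume-, area-, and weight-based equivalent diameters for a hypothetical particle, with input values invented purely for illustration.

```python
# Equivalent-sphere diameters for a particle, per the definitions above
# (illustrative sketch; input values are arbitrary).
import math

def d_volume(volume_m3: float) -> float:
    """Diameter of the sphere with the same volume: D = (6V/pi)^(1/3)."""
    return (6.0 * volume_m3 / math.pi) ** (1.0 / 3.0)

def d_area(area_m2: float) -> float:
    """Diameter of the sphere with the same surface area: D = sqrt(A/pi)."""
    return math.sqrt(area_m2 / math.pi)

def d_weight(weight_n: float, density_kg_m3: float, g: float = 9.81) -> float:
    """Diameter of the sphere with the same weight: D = (6W/(pi*rho*g))^(1/3)."""
    return (6.0 * weight_n / (math.pi * density_kg_m3 * g)) ** (1.0 / 3.0)

# A hypothetical irregular grain: volume 1.0e-12 m^3, surface area 8.0e-8 m^2,
# weight 2.6e-8 N at a quartz-like density of 2650 kg/m^3.
print(d_volume(1.0e-12))          # ~1.24e-4 m (about 124 um)
print(d_area(8.0e-8))             # ~1.60e-4 m; rough particles carry extra surface area
print(d_weight(2.6e-8, 2650.0))   # ~1.24e-4 m, consistent with the volume-based value
```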
Particle size
[ "Physics", "Chemistry", "Mathematics" ]
1,097
[ "Geometric measurement", "Colloidal chemistry", "Physical quantities", "Quantity", "Colloids", "Size", "Surface science", "Physical objects", "Particles", "Matter" ]
14,023,957
https://en.wikipedia.org/wiki/5-Methyltetrahydropteroyltriglutamate%E2%80%94homocysteine%20S-methyltransferase
In enzymology, a 5-methyltetrahydropteroyltriglutamate—homocysteine S-methyltransferase is an enzyme that catalyzes the chemical reaction 5-methyltetrahydropteroyltri-L-glutamate + L-homocysteine ⇌ tetrahydropteroyltri-L-glutamate + L-methionine Thus, the two substrates of this enzyme are 5-methyltetrahydropteroyltri-L-glutamate and L-homocysteine, whereas its two products are tetrahydropteroyltri-L-glutamate and L-methionine. This enzyme participates in methionine metabolism. It has two cofactors: orthophosphate and zinc. Nomenclature This enzyme belongs to the family of transferases, specifically those transferring one-carbon groups (methyltransferases). The systematic name of this enzyme class is 5-methyltetrahydropteroyltri-L-glutamate:L-homocysteine S-methyltransferase. Other names in common use include tetrahydropteroyltriglutamate methyltransferase, homocysteine methylase, methyltransferase, tetrahydropteroylglutamate-homocysteine transmethylase, methyltetrahydropteroylpolyglutamate:homocysteine methyltransferase, cobalamin-independent methionine synthase, methionine synthase (cobalamin-independent), and MetE. Structure The enzyme from Escherichia coli consists of two alpha8-beta8 (TIM) barrels positioned face to face and thought to have evolved by gene duplication. The active site lies between the tops of the two barrels; the N-terminal barrel binds 5-methyltetrahydropteroyltri-L-glutamic acid and the C-terminal barrel binds homocysteine. Homocysteine is coordinated to a zinc ion, as initially suggested by spectroscopy and mutagenesis. References Further reading EC 2.1.1 Zinc enzymes Enzymes of known structure Protein families
5-Methyltetrahydropteroyltriglutamate—homocysteine S-methyltransferase
[ "Biology" ]
481
[ "Protein families", "Protein classification" ]
14,024,091
https://en.wikipedia.org/wiki/Caffeoyl-CoA%20O-methyltransferase
In enzymology, a caffeoyl-CoA O-methyltransferase is an enzyme that catalyzes the chemical reaction S-adenosyl-L-methionine + caffeoyl-CoA ⇌ S-adenosyl-L-homocysteine + feruloyl-CoA Thus, the two substrates of this enzyme are S-adenosyl methionine and caffeoyl-CoA, whereas its two products are S-adenosylhomocysteine and feruloyl-CoA. A large number of natural products are generated via a step involving this enzyme. This enzyme belongs to the family of transferases, specifically those transferring one-carbon groups (methyltransferases). The systematic name of this enzyme class is S-adenosyl-L-methionine:caffeoyl-CoA 3-O-methyltransferase. Other names in common use include caffeoyl coenzyme A methyltransferase, caffeoyl-CoA 3-O-methyltransferase, and trans-caffeoyl-CoA 3-O-methyltransferase. This enzyme participates in phenylpropanoid biosynthesis. Structural studies As of late 2007, two structures have been solved for this class of enzymes. References EC 2.1.1 Enzymes of known structure O-methylated hydroxycinnamic acids metabolism O-methylation
Caffeoyl-CoA O-methyltransferase
[ "Chemistry" ]
318
[ "O-methylation", "Methylation" ]
14,024,418
https://en.wikipedia.org/wiki/Viral%20nonstructural%20protein
In virology, a nonstructural protein is a protein encoded by a virus but that is not part of the viral particle. They typically include the various enzymes and transcription factors the virus uses to replicate itself, such as a viral protease (3CL/nsp5, etc.), an RNA replicase or other template-directed polymerases, and some means to control the host. Examples NSP1 (rotavirus) NSP4 (rotavirus) NSP5 (rotavirus) Influenza non-structural protein NS1 influenza protein HBcAg, core antigen of hepatitis B Bunyaviridae nonstructural S proteins See also Viral structural protein References
Viral nonstructural protein
[ "Chemistry" ]
147
[ "Biochemistry stubs", "Protein stubs" ]
85,746
https://en.wikipedia.org/wiki/Stoma
In botany, a stoma (: stomata, from Greek στόμα, "mouth"), also called a stomate (: stomates), is a pore found in the epidermis of leaves, stems, and other organs, that controls the rate of gas exchange between the internal air spaces of the leaf and the atmosphere. The pore is bordered by a pair of specialized parenchyma cells known as guard cells that regulate the size of the stomatal opening. The term is usually used collectively to refer to the entire stomatal complex, consisting of the paired guard cells and the pore itself, which is referred to as the stomatal aperture. Air, containing oxygen, which is used in respiration, and carbon dioxide, which is used in photosynthesis, passes through stomata by gaseous diffusion. Water vapour diffuses through the stomata into the atmosphere as part of a process called transpiration. Stomata are present in the sporophyte generation of the vast majority of land plants, with the exception of liverworts, as well as some mosses and hornworts. In vascular plants the number, size and distribution of stomata varies widely. Dicotyledons usually have more stomata on the lower surface of the leaves than the upper surface. Monocotyledons such as onion, oat and maize may have about the same number of stomata on both leaf surfaces. In plants with floating leaves, stomata may be found only on the upper epidermis and submerged leaves may lack stomata entirely. Most tree species have stomata only on the lower leaf surface. Leaves with stomata on both the upper and lower leaf surfaces are called amphistomatous leaves; leaves with stomata only on the lower surface are hypostomatous, and leaves with stomata only on the upper surface are epistomatous or hyperstomatous. Size varies across species, with end-to-end lengths ranging from 10 to 80 μm and width ranging from a few to 50 μm. Function CO2 gain and water loss Carbon dioxide, a key reactant in photosynthesis, is present in the atmosphere at a concentration of about 400 ppm. Most plants require the stomata to be open during daytime. The air spaces in the leaf are saturated with water vapour, which exits the leaf through the stomata in a process known as transpiration. Therefore, plants cannot gain carbon dioxide without simultaneously losing water vapour. Alternative approaches Ordinarily, carbon dioxide is fixed to ribulose 1,5-bisphosphate (RuBP) by the enzyme RuBisCO in mesophyll cells exposed directly to the air spaces inside the leaf. This exacerbates the transpiration problem for two reasons: first, RuBisCo has a relatively low affinity for carbon dioxide, and second, it fixes oxygen to RuBP, wasting energy and carbon in a process called photorespiration. For both of these reasons, RuBisCo needs high carbon dioxide concentrations, which means wide stomatal apertures and, as a consequence, high water loss. Narrower stomatal apertures can be used in conjunction with an intermediary molecule with a high carbon dioxide affinity, phosphoenolpyruvate carboxylase (PEPcase). Retrieving the products of carbon fixation from PEPCase is an energy-intensive process, however. As a result, the PEPCase alternative is preferable only where water is limiting but light is plentiful, or where high temperatures increase the solubility of oxygen relative to that of carbon dioxide, magnifying RuBisCo's oxygenation problem. C.A.M. plants A group of mostly desert plants called "C.A.M." 
plants (crassulacean acid metabolism, after the family Crassulaceae, which includes the species in which the CAM process was first discovered) open their stomata at night (when water evaporates more slowly from leaves for a given degree of stomatal opening), use PEPcase to fix carbon dioxide and store the products in large vacuoles. The following day, they close their stomata and release the carbon dioxide fixed the previous night into the presence of RuBisCO. This saturates RuBisCO with carbon dioxide, allowing minimal photorespiration. This approach, however, is severely limited by the capacity to store fixed carbon in the vacuoles, so it is preferable only when water is severely limited. Opening and closing However, most plants do not have CAM and must therefore open and close their stomata during the daytime, in response to changing conditions, such as light intensity, humidity, and carbon dioxide concentration. When conditions are conducive to stomatal opening (e.g., high light intensity and high humidity), a proton pump drives protons (H+) from the guard cells. This means that the cells' electrical potential becomes increasingly negative. The negative potential opens potassium voltage-gated channels and so an uptake of potassium ions (K+) occurs. To maintain this internal negative voltage so that entry of potassium ions does not stop, negative ions balance the influx of potassium. In some cases, chloride ions enter, while in other plants the organic ion malate is produced in guard cells. This increase in solute concentration lowers the water potential inside the cell, which results in the diffusion of water into the cell through osmosis. This increases the cell's volume and turgor pressure. Then, because of rings of cellulose microfibrils that prevent the width of the guard cells from swelling, and thus only allow the extra turgor pressure to elongate the guard cells, whose ends are held firmly in place by surrounding epidermal cells, the two guard cells lengthen by bowing apart from one another, creating an open pore through which gas can diffuse. When the roots begin to sense a water shortage in the soil, abscisic acid (ABA) is released. ABA binds to receptor proteins in the guard cells' plasma membrane and cytosol, which first raises the pH of the cytosol of the cells and cause the concentration of free Ca2+ to increase in the cytosol due to influx from outside the cell and release of Ca2+ from internal stores such as the endoplasmic reticulum and vacuoles. This causes the chloride (Cl−) and organic ions to exit the cells. Second, this stops the uptake of any further K+ into the cells and, subsequently, the loss of K+. The loss of these solutes causes an increase in water potential, which results in the diffusion of water back out of the cell by osmosis. This makes the cell plasmolysed, which results in the closing of the stomatal pores. Guard cells have more chloroplasts than the other epidermal cells from which guard cells are derived. Their function is controversial. Inferring stomatal behavior from gas exchange The degree of stomatal resistance can be determined by measuring leaf gas exchange of a leaf. The transpiration rate is dependent on the diffusion resistance provided by the stomatal pores and also on the humidity gradient between the leaf's internal air spaces and the outside air. Stomatal resistance (or its inverse, stomatal conductance) can therefore be calculated from the transpiration rate and humidity gradient. 
This allows scientists to investigate how stomata respond to changes in environmental conditions, such as light intensity and concentrations of gases such as water vapor, carbon dioxide, and ozone. Evaporation (E) can be calculated as where ei and ea are the partial pressures of water in the leaf and in the ambient air respectively, P is atmospheric pressure, and r is stomatal resistance. The inverse of r is conductance to water vapor (g), so the equation can be rearranged to and solved for g: Photosynthetic CO2 assimilation (A) can be calculated from where Ca and Ci are the atmospheric and sub-stomatal partial pressures of CO2 respectively. The rate of evaporation from a leaf can be determined using a photosynthesis system. These scientific instruments measure the amount of water vapour leaving the leaf and the vapor pressure of the ambient air. Photosynthetic systems may calculate water use efficiency (A/E), g, intrinsic water use efficiency (A/g), and Ci. These scientific instruments are commonly used by plant physiologists to measure CO2 uptake and thus measure photosynthetic rate. Evolution There is little evidence of the evolution of stomata in the fossil record, but they had appeared in land plants by the middle of the Silurian period. They may have evolved by the modification of conceptacles from plants' alga-like ancestors. However, the evolution of stomata must have happened at the same time as the waxy cuticle was evolving – these two traits together constituted a major advantage for early terrestrial plants. Development There are three major epidermal cell types which all ultimately derive from the outermost (L1) tissue layer of the shoot apical meristem, called protodermal cells: trichomes, pavement cells and guard cells, all of which are arranged in a non-random fashion. An asymmetrical cell division occurs in protodermal cells resulting in one large cell that is fated to become a pavement cell and a smaller cell called a meristemoid that will eventually differentiate into the guard cells that surround a stoma. This meristemoid then divides asymmetrically one to three times before differentiating into a guard mother cell. The guard mother cell then makes one symmetrical division, which forms a pair of guard cells. Cell division is inhibited in some cells so there is always at least one cell between stomata. Stomatal patterning is controlled by the interaction of many signal transduction components such as EPF (Epidermal Patterning Factor), ERL (ERecta Like) and YODA (a putative MAP kinase kinase kinase). Mutations in any one of the genes which encode these factors may alter the development of stomata in the epidermis. For example, a mutation in one gene causes more stomata that are clustered together, hence is called Too Many Mouths (TMM). Whereas, disruption of the SPCH (SPeecCHless) gene prevents stomatal development all together.  Inhibition of stomatal production can occur by the activation of EPF1, which activates TMM/ERL, which together activate YODA. YODA inhibits SPCH, causing SPCH activity to decrease, preventing asymmetrical cell division that initiates stomata formation. Stomatal development is also coordinated by the cellular peptide signal called stomagen, which signals the activation of the SPCH, resulting in increased number of stomata. Environmental and hormonal factors can affect stomatal development. Light increases stomatal development in plants; while, plants grown in the dark have a lower amount of stomata. 
Auxin represses stomatal development by affecting their development at the receptor level like the ERL and TMM receptors. However, a low concentration of auxin allows for equal division of a guard mother cell and increases the chance of producing guard cells. Most angiosperm trees have stomata only on their lower leaf surface. Poplars and willows have them on both surfaces. When leaves develop stomata on both leaf surfaces, the stomata on the lower surface tend to be larger and more numerous, but there can be a great degree of variation in size and frequency about species and genotypes. White ash and white birch leaves had fewer stomata but larger in size. On the other hand sugar maple and silver maple had small stomata that were more numerous. Types Different classifications of stoma types exist. One that is widely used is based on the types that Julien Joseph Vesque introduced in 1889, was further developed by Metcalfe and Chalk, and later complemented by other authors. It is based on the size, shape and arrangement of the subsidiary cells that surround the two guard cells. They distinguish for dicots: (meaning star-celled) stomata have guard cells that are surrounded by at least five radiating cells forming a star-like circle. This is a rare type that can for instance be found in the family Ebenaceae. (meaning unequal celled) stomata have guard cells between two larger subsidiary cells and one distinctly smaller one. This type of stomata can be found in more than thirty dicot families, including Brassicaceae, Solanaceae, and Crassulaceae. It is sometimes called cruciferous type. (meaning irregular celled) stomata have guard cells that are surrounded by cells that have the same size, shape and arrangement as the rest of the epidermis cells. This type of stomata can be found in more than hundred dicot families such as Apocynaceae, Boraginaceae, Chenopodiaceae, and Cucurbitaceae. It is sometimes called ranunculaceous type. (meaning cross-celled) stomata have guard cells surrounded by two subsidiary cells, that each encircle one end of the opening and contact each other opposite to the middle of the opening. This type of stomata can be found in more than ten dicot families such as Caryophyllaceae and Acanthaceae. It is sometimes called caryophyllaceous type. stomata are bordered by just one subsidiary cell that differs from the surrounding epidermis cells, its length parallel to the stoma opening. This type occurs for instance in the Molluginaceae and Aizoaceae. (meaning parallel celled) stomata have one or more subsidiary cells parallel to the opening between the guard cells. These subsidiary cells may reach beyond the guard cells or not. This type of stomata can be found in more than hundred dicot families such as Rubiaceae, Convolvulaceae and Fabaceae. It is sometimes called rubiaceous type. In monocots, several different types of stomata occur such as: gramineous or graminoid (meaning grass-like) stomata have two guard cells surrounded by two lens-shaped subsidiary cells. The guard cells are narrower in the middle and bulbous on each end. This middle section is strongly thickened. The axis of the subsidiary cells are parallel stoma opening. This type can be found in monocot families including Poaceae and Cyperaceae. (meaning six-celled) stomata have six subsidiary cells around both guard cells, one at either end of the opening of the stoma, one adjoining each guard cell, and one between that last subsidiary cell and the standard epidermis cells. 
This type can be found in some monocot families. (meaning four-celled) stomata have four subsidiary cells, one on either end of the opening, and one next to each guard cell. This type occurs in many monocot families, but also can be found in some dicots, such as Tilia and several Asclepiadaceae. In ferns, four different types are distinguished: stomata have two guard cells in one layer with only ordinary epidermis cells, but with two subsidiary cells on the outer surface of the epidermis, arranged parallel to the guard cells, with a pore between them, overlying the stoma opening. stomata have two guard cells that are entirely encircled by one continuous subsidiary cell (like a donut). stomata have two guard cells that are entirely encircled by one subsidiary cell that has not merged its ends (like a sausage). stomata have two guard cells that are largely encircled by one subsidiary cell, but also contact ordinary epidermis cells (like a U or horseshoe). A catalogue of leaf epidermis prints showing stomata from a wide range of species can be found in Wikimedia commons https://commons.wikimedia.org/wiki/Category:Leaf_epidermis_and_stomata_prints Stomatal crypts Stomatal crypts are sunken areas of the leaf epidermis which form a chamber-like structure that contains one or more stomata and sometimes trichomes or accumulations of wax. Stomatal crypts can be an adaption to drought and dry climate conditions when the stomatal crypts are very pronounced. However, dry climates are not the only places where they can be found. The following plants are examples of species with stomatal crypts or antechambers: Nerium oleander, conifers, Hakea and Drimys winteri which is a species of plant found in the cloud forest. Stomata as pathogenic pathways Stomata are holes in the leaf by which pathogens can enter unchallenged. However, stomata can sense the presence of some, if not all, pathogens. However, pathogenic bacteria applied to Arabidopsis plant leaves can release the chemical coronatine, which induce the stomata to reopen. Stomata and climate change Response of stomata to environmental factors Photosynthesis, plant water transport (xylem) and gas exchange are regulated by stomatal function which is important in the functioning of plants. Stomata are responsive to light with blue light being almost 10 times as effective as red light in causing stomatal response. Research suggests this is because the light response of stomata to blue light is independent of other leaf components like chlorophyll. Guard cell protoplasts swell under blue light provided there is sufficient availability of potassium. Multiple studies have found support that increasing potassium concentrations may increase stomatal opening in the mornings, before the photosynthesis process starts, but that later in the day sucrose plays a larger role in regulating stomatal opening. Zeaxanthin in guard cells acts as a blue light photoreceptor which mediates the stomatal opening. The effect of blue light on guard cells is reversed by green light, which isomerizes zeaxanthin. Stomatal density and aperture (length of stomata) varies under a number of environmental factors such as atmospheric CO2 concentration, light intensity, air temperature and photoperiod (daytime duration). Decreasing stomatal density is one way plants have responded to the increase in concentration of atmospheric CO2 ([CO2]atm). 
Although the stomatal response to changes in [CO2]atm is the least understood mechanistically, it has begun to plateau and is soon expected to affect transpiration and photosynthesis processes in plants. Drought inhibits stomatal opening, but research on soybeans suggests moderate drought does not have a significant effect on stomatal closure of its leaves. There are different mechanisms of stomatal closure. Low humidity stresses guard cells, causing turgor loss, termed hydropassive closure. Hydroactive closure, by contrast, affects the whole leaf under drought stress and is believed to be triggered most likely by abscisic acid. Future adaptations during climate change It is expected that [CO2]atm will reach 500–1000 ppm by 2100. For 96% of the past 400,000 years, CO2 concentrations were below 280 ppm. From this figure, it is highly probable that the genotypes of today's plants have diverged from those of their pre-industrial relatives. The gene HIC (high carbon dioxide) encodes a negative regulator of stomatal development in plants. Research into the HIC gene using Arabidopsis thaliana found no increase in stomatal development for the dominant allele, but a large increase for the 'wild type' recessive allele, both in response to rising atmospheric CO2 levels. These studies imply that the plant's response to changing CO2 levels is largely controlled by genetics. Agricultural implications The CO2 fertiliser effect has been greatly overestimated; in Free-Air Carbon dioxide Enrichment (FACE) experiments, results show that increased atmospheric CO2 levels enhance photosynthesis, reduce transpiration, and increase water use efficiency (WUE). Increased biomass is one of the effects, with simulations from experiments predicting a 5–20% increase in crop yields at 550 ppm of CO2. Rates of leaf photosynthesis were shown to increase by 30–50% in C3 plants, and by 10–25% in C4 plants, under doubled CO2 levels. The existence of a feedback mechanism results in a phenotypic plasticity in response to [CO2]atm that may have been an adaptive trait in the evolution of plant respiration and function. Predicting how stomata perform during adaptation is useful for understanding the productivity of plant systems for both natural and agricultural systems. Plant breeders and farmers are beginning to work together, using evolutionary and participatory plant breeding, to find the best-suited species, such as heat- and drought-resistant crop varieties, that could naturally evolve to meet the change in the face of food security challenges. References External links Plant anatomy Plant cells Plant physiology Photosynthesis
Stoma
[ "Chemistry", "Biology" ]
4,419
[ "Biochemistry", "Photosynthesis", "Plants", "Plant physiology" ]
85,754
https://en.wikipedia.org/wiki/Phonon
A phonon is a collective excitation in a periodic, elastic arrangement of atoms or molecules in condensed matter, specifically in solids and some liquids. A type of quasiparticle in physics, a phonon is an excited state in the quantum mechanical quantization of the modes of vibrations for elastic structures of interacting particles. Phonons can be thought of as quantized sound waves, similar to photons as quantized light waves. The study of phonons is an important part of condensed matter physics. They play a major role in many of the physical properties of condensed matter systems, such as thermal conductivity and electrical conductivity, as well as in models of neutron scattering and related effects. The concept of phonons was introduced in 1930 by Soviet physicist Igor Tamm. The name phonon was suggested by Yakov Frenkel. It comes from the Greek word (), which translates to sound or voice, because long-wavelength phonons give rise to sound. The name emphasizes the analogy to the word photon, in that phonons represent wave-particle duality for sound waves in the same way that photons represent wave-particle duality for light waves. Solids with more than one atom in the smallest unit cell exhibit both acoustic and optical phonons. Definition A phonon is the quantum mechanical description of an elementary vibrational motion in which a lattice of atoms or molecules uniformly oscillates at a single frequency. In classical mechanics this designates a normal mode of vibration. Normal modes are important because any arbitrary lattice vibration can be considered to be a superposition of these elementary vibration modes (cf. Fourier analysis). While normal modes are wave-like phenomena in classical mechanics, phonons have particle-like properties too, in a way related to the wave–particle duality of quantum mechanics. Lattice dynamics The equations in this section do not use axioms of quantum mechanics but instead use relations for which there exists a direct correspondence in classical mechanics. For example: a rigid regular, crystalline (not amorphous) lattice is composed of N particles. These particles may be atoms or molecules. N is a large number, say of the order of 1023, or on the order of the Avogadro number for a typical sample of a solid. Since the lattice is rigid, the atoms must be exerting forces on one another to keep each atom near its equilibrium position. These forces may be Van der Waals forces, covalent bonds, electrostatic attractions, and others, all of which are ultimately due to the electric force. Magnetic and gravitational forces are generally negligible. The forces between each pair of atoms may be characterized by a potential energy function V that depends on the distance of separation of the atoms. The potential energy of the entire lattice is the sum of all pairwise potential energies multiplied by a factor of 1/2 to compensate for double counting: where ri is the position of the ith atom, and V is the potential energy between two atoms. It is difficult to solve this many-body problem explicitly in either classical or quantum mechanics. In order to simplify the task, two important approximations are usually imposed. First, the sum is only performed over neighboring atoms. Although the electric forces in real solids extend to infinity, this approximation is still valid because the fields produced by distant atoms are effectively screened. Secondly, the potentials V are treated as harmonic potentials. 
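The double-counting factor of 1/2 and the nearest-neighbour restriction described above can be made concrete with a short numerical sketch. The Lennard-Jones pair potential used here is an illustrative choice, not something specified in the text; any pair potential V(r) would serve.

import numpy as np

def pair_potential(r, epsilon=1.0, sigma=1.0):
    # Illustrative Lennard-Jones pair potential V(r); assumed for this sketch only.
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

def lattice_energy(positions, cutoff=1.5):
    # Sum V(|r_i - r_j|) over ordered pairs and multiply by 1/2 to compensate
    # for double counting; the cutoff keeps only nearby (screened) neighbours.
    total = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = np.linalg.norm(positions[i] - positions[j])
            if r < cutoff:
                total += 0.5 * pair_potential(r)
    return total

a = 2.0 ** (1.0 / 6.0)  # equilibrium spacing (minimum of the Lennard-Jones potential)
chain = np.array([[i * a, 0.0, 0.0] for i in range(10)])
print(lattice_energy(chain))  # about -9.0: one -epsilon per nearest-neighbour bond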
This is permissible as long as the atoms remain close to their equilibrium positions. Formally, this is accomplished by Taylor expanding V about its equilibrium value to quadratic order, giving V proportional to the displacement x2 and the elastic force simply proportional to x. The error in ignoring higher order terms remains small if x remains close to the equilibrium position. The resulting lattice may be visualized as a system of balls connected by springs. The following figure shows a cubic lattice, which is a good model for many types of crystalline solid. Other lattices include a linear chain, which is a very simple lattice which we will shortly use for modeling phonons. (For other common lattices, see crystal structure.) The potential energy of the lattice may now be written as Here, ω is the natural frequency of the harmonic potentials, which are assumed to be the same since the lattice is regular. Ri is the position coordinate of the ith atom, which we now measure from its equilibrium position. The sum over nearest neighbors is denoted (nn). It is important to mention that the mathematical treatment given here is highly simplified in order to make it accessible to non-experts. The simplification has been achieved by making two basic assumptions in the expression for the total potential energy of the crystal. These assumptions are that (i) the total potential energy can be written as a sum of pairwise interactions, and (ii) each atom interacts with only its nearest neighbors. These are used only sparingly in modern lattice dynamics. A more general approach is to express the potential energy in terms of force constants. See, for example, the Wiki article on multiscale Green's functions. Lattice waves Due to the connections between atoms, the displacement of one or more atoms from their equilibrium positions gives rise to a set of vibration waves propagating through the lattice. One such wave is shown in the figure to the right. The amplitude of the wave is given by the displacements of the atoms from their equilibrium positions. The wavelength λ is marked. There is a minimum possible wavelength, given by twice the equilibrium separation a between atoms. Any wavelength shorter than this can be mapped onto a wavelength longer than 2a, due to the periodicity of the lattice. This can be thought of as a consequence of the Nyquist–Shannon sampling theorem, the lattice points being viewed as the "sampling points" of a continuous wave. Not every possible lattice vibration has a well-defined wavelength and frequency. However, the normal modes do possess well-defined wavelengths and frequencies. One-dimensional lattice In order to simplify the analysis needed for a 3-dimensional lattice of atoms, it is convenient to model a 1-dimensional lattice or linear chain. This model is complex enough to display the salient features of phonons. Classical treatment The forces between the atoms are assumed to be linear and nearest-neighbour, and they are represented by an elastic spring. Each atom is assumed to be a point particle and the nucleus and electrons move in step (adiabatic theorem): n − 1 n n + 1 ← a → ···o++++++o++++++o++++++o++++++o++++++o++++++o++++++o++++++o++++++o··· →→→→→→ un − 1unun + 1 where labels the th atom out of a total of , is the distance between atoms when the chain is in equilibrium, and the displacement of the th atom from its equilibrium position. 
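The harmonic approximation just described amounts to replacing the pair potential by its Taylor expansion about the equilibrium separation, so that the quadratic coefficient plays the role of a spring constant. The sketch below estimates that coefficient numerically for the same illustrative Lennard-Jones potential; none of the numbers are taken from the article.

import numpy as np

def pair_potential(r, epsilon=1.0, sigma=1.0):
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

r0 = 2.0 ** (1.0 / 6.0)  # equilibrium separation (potential minimum)
h = 1e-5

# Effective spring constant k = V''(r0), estimated by central differences.
k_spring = (pair_potential(r0 + h) - 2.0 * pair_potential(r0) + pair_potential(r0 - h)) / h ** 2

# Harmonic approximation: V(r0 + x) ~ V(r0) + (1/2) k x^2 for small displacement x.
x = 0.05
v_exact = pair_potential(r0 + x)
v_harmonic = pair_potential(r0) + 0.5 * k_spring * x ** 2
print(k_spring, v_exact, v_harmonic)  # the last two approximately agree for small x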
If C is the elastic constant of the spring and the mass of the atom, then the equation of motion of the th atom is This is a set of coupled equations. Since the solutions are expected to be oscillatory, new coordinates are defined by a discrete Fourier transform, in order to decouple them. Put Here, corresponds and devolves to the continuous variable of scalar field theory. The are known as the normal coordinates for continuum field modes with for . Substitution into the equation of motion produces the following decoupled equations (this requires a significant manipulation using the orthonormality and completeness relations of the discrete Fourier transform), These are the equations for decoupled harmonic oscillators which have the solution Each normal coordinate Qk represents an independent vibrational mode of the lattice with wavenumber , which is known as a normal mode. The second equation, for , is known as the dispersion relation between the angular frequency and the wavenumber. In the continuum limit, →0, →∞, with held fixed, → , a scalar field, and . This amounts to classical free scalar field theory, an assembly of independent oscillators. Quantum treatment A one-dimensional quantum mechanical harmonic chain consists of N identical atoms. This is the simplest quantum mechanical model of a lattice that allows phonons to arise from it. The formalism for this model is readily generalizable to two and three dimensions. In contrast to the previous section, the positions of the masses are not denoted by , but instead by as measured from their equilibrium positions. (I.e. if particle is at its equilibrium position.) In two or more dimensions, the are vector quantities. The Hamiltonian for this system is where m is the mass of each atom (assuming it is equal for all), and xi and pi are the position and momentum operators, respectively, for the ith atom and the sum is made over the nearest neighbors (nn). However one expects that in a lattice there could also appear waves that behave like particles. It is customary to deal with waves in Fourier space which uses normal modes of the wavevector as variables instead of coordinates of particles. The number of normal modes is the same as the number of particles. Still, the Fourier space is very useful given the periodicity of the system. A set of N "normal coordinates" Qk may be introduced, defined as the discrete Fourier transforms of the xk and N "conjugate momenta" Πk defined as the Fourier transforms of the pk: The quantity k turns out to be the wavenumber of the phonon, i.e. 2 divided by the wavelength. This choice retains the desired commutation relations in either real space or wavevector space From the general result The potential energy term is where The Hamiltonian may be written in wavevector space as The couplings between the position variables have been transformed away; if the Q and Π were Hermitian (which they are not), the transformed Hamiltonian would describe N uncoupled harmonic oscillators. The form of the quantization depends on the choice of boundary conditions; for simplicity, periodic boundary conditions are imposed, defining the (N + 1)th atom as equivalent to the first atom. Physically, this corresponds to joining the chain at its ends. The resulting quantization is The upper bound to n comes from the minimum wavelength, which is twice the lattice spacing a, as discussed above. 
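The dispersion relation referred to above is not written out in the text. As an illustration, the sketch below uses the standard textbook result for a monatomic chain with spring constant C, atomic mass m and lattice spacing a, namely omega(k) = 2*sqrt(C/m)*|sin(k*a/2)|; treat this formula as an assumption of the sketch rather than as something reproduced from the article.

import numpy as np

def omega(k, C=1.0, m=1.0, a=1.0):
    # Standard monatomic-chain dispersion relation (assumed form, see note above).
    return 2.0 * np.sqrt(C / m) * np.abs(np.sin(0.5 * k * a))

a = 1.0
k = np.linspace(-np.pi / a, np.pi / a, 201)  # first Brillouin zone
w = omega(k, a=a)

# Long-wavelength limit: omega ~ sqrt(C/m) * a * |k|, i.e. linear in k (sound waves).
print(omega(1e-3), np.sqrt(1.0) * a * 1e-3)  # nearly equal
# Maximum frequency, reached at the zone boundary k = pi/a:
print(w.max(), 2.0 * np.sqrt(1.0))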
The harmonic oscillator eigenvalues or energy levels for the mode ωk are: The levels are evenly spaced at: where ħω is the zero-point energy of a quantum harmonic oscillator. An exact amount of energy ħω must be supplied to the harmonic oscillator lattice to push it to the next energy level. By analogy to the photon case when the electromagnetic field is quantized, the quantum of vibrational energy is called a phonon. All quantum systems show wavelike and particlelike properties simultaneously. The particle-like properties of the phonon are best understood using the methods of second quantization and operator techniques described later. Three-dimensional lattice This may be generalized to a three-dimensional lattice. The wavenumber k is replaced by a three-dimensional wavevector k. Furthermore, each k is now associated with three normal coordinates. The new indices s = 1, 2, 3 label the polarization of the phonons. In the one-dimensional model, the atoms were restricted to moving along the line, so the phonons corresponded to longitudinal waves. In three dimensions, vibration is not restricted to the direction of propagation, and can also occur in the perpendicular planes, like transverse waves. This gives rise to the additional normal coordinates, which, as the form of the Hamiltonian indicates, we may view as independent species of phonons. Dispersion relation For a one-dimensional alternating array of two types of ion or atom of mass m1, m2 repeated periodically at a distance a, connected by springs of spring constant K, two modes of vibration result: where k is the wavevector of the vibration related to its wavelength by . The connection between frequency and wavevector, ω = ω(k), is known as a dispersion relation. The plus sign results in the so-called optical mode, and the minus sign to the acoustic mode. In the optical mode two adjacent different atoms move against each other, while in the acoustic mode they move together. The speed of propagation of an acoustic phonon, which is also the speed of sound in the lattice, is given by the slope of the acoustic dispersion relation, (see group velocity.) At low values of k (i.e. long wavelengths), the dispersion relation is almost linear, and the speed of sound is approximately ωa, independent of the phonon frequency. As a result, packets of phonons with different (but long) wavelengths can propagate for large distances across the lattice without breaking apart. This is the reason that sound propagates through solids without significant distortion. This behavior fails at large values of k, i.e. short wavelengths, due to the microscopic details of the lattice. For a crystal that has at least two atoms in its primitive cell, the dispersion relations exhibit two types of phonons, namely, optical and acoustic modes corresponding to the upper blue and lower red curve in the diagram, respectively. The vertical axis is the energy or frequency of phonon, while the horizontal axis is the wavevector. The boundaries at − and are those of the first Brillouin zone. A crystal with N ≥ 2 different atoms in the primitive cell exhibits three acoustic modes: one longitudinal acoustic mode and two transverse acoustic modes. The number of optical modes is 3N – 3. The lower figure shows the dispersion relations for several phonon modes in GaAs as a function of wavevector k in the principal directions of its Brillouin zone. The modes are also referred to as the branches of phonon dispersion. 
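The two-branch result for the alternating chain is likewise only described in words above. The sketch below evaluates the standard textbook expression omega^2 = K*(1/m1 + 1/m2) +/- K*sqrt((1/m1 + 1/m2)^2 - 4*sin^2(k*a/2)/(m1*m2)), with a taken as the repeat distance of the two-atom cell; the specific masses are illustrative, and the formula itself is an assumption of the sketch rather than a quotation from the article.

import numpy as np

def diatomic_branches(k, K=1.0, m1=1.0, m2=2.0, a=1.0):
    # Acoustic (minus sign) and optical (plus sign) branches of the two-atom chain.
    s = 1.0 / m1 + 1.0 / m2
    root = np.sqrt(s ** 2 - 4.0 * np.sin(0.5 * k * a) ** 2 / (m1 * m2))
    return np.sqrt(K * (s - root)), np.sqrt(K * (s + root))

k = np.linspace(0.0, np.pi, 101)  # half of the first Brillouin zone for a = 1
acoustic, optical = diatomic_branches(k)

print(acoustic[0], optical[0])    # acoustic branch starts at zero; optical at sqrt(2K(1/m1+1/m2))
print(acoustic[-1], optical[-1])  # at the zone boundary a frequency gap remains when m1 != m2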
In general, if there are p atoms (denoted by N earlier) in the primitive unit cell, there will be 3p branches of phonon dispersion in a 3-dimensional crystal. Out of these, 3 branches correspond to acoustic modes and the remaining 3p-3 branches will correspond to optical modes. In some special directions, some branches coincide due to symmetry. These branches are called degenerate. In acoustic modes, all the p atoms vibrate in phase. So there is no change in the relative displacements of these atoms during the wave propagation. Study of phonon dispersion is useful for modeling propagation of sound waves in solids, which is characterized by phonons. The energy of each phonon, as given earlier, is ħω. The velocity of the wave also is given in terms of ω and k . The direction of the wave vector is the direction of the wave propagation and the phonon polarization vector gives the direction in which the atoms vibrate. Actually, in general, the wave velocity in a crystal is different for different directions of k. In other words, most crystals are anisotropic for phonon propagation. A wave is longitudinal if the atoms vibrate in the same direction as the wave propagation. In a transverse wave, the atoms vibrate perpendicular to the wave propagation. However, except for isotropic crystals, waves in a crystal are not exactly longitudinal or transverse. For general anisotropic crystals, the phonon waves are longitudinal or transverse only in certain special symmetry directions. In other directions, they can be nearly longitudinal or nearly transverse. It is only for labeling convenience, that they are often called longitudinal or transverse but are actually quasi-longitudinal or quasi-transverse. Note that in the three-dimensional case, there are two directions perpendicular to a straight line at each point on the line. Hence, there are always two (quasi) transverse waves for each (quasi) longitudinal wave. Many phonon dispersion curves have been measured by inelastic neutron scattering. The physics of sound in fluids differs from the physics of sound in solids, although both are density waves: sound waves in fluids only have longitudinal components, whereas sound waves in solids have longitudinal and transverse components. This is because fluids cannot support shear stresses (but see viscoelastic fluids, which only apply to high frequencies). Interpretation of phonons using second quantization techniques The above-derived Hamiltonian may look like a classical Hamiltonian function, but if it is interpreted as an operator, then it describes a quantum field theory of non-interacting bosons. The second quantization technique, similar to the ladder operator method used for quantum harmonic oscillators, is a means of extracting energy eigenvalues without directly solving the differential equations. 
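The branch counting stated above (3 acoustic and 3p - 3 optical branches for p atoms per primitive cell in three dimensions) is simple bookkeeping, shown here explicitly.

def phonon_branches(p):
    # p = number of atoms in the primitive unit cell of a 3-D crystal
    acoustic = 3              # one longitudinal plus two transverse acoustic modes
    optical = 3 * p - 3       # the remaining branches are optical
    return acoustic, optical

for p in (1, 2, 5):
    print(p, phonon_branches(p))  # e.g. p = 2 gives 3 acoustic and 3 optical branches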
Given the Hamiltonian, , as well as the conjugate position, , and conjugate momentum defined in the quantum treatment section above, we can define creation and annihilation operators:   and   The following commutators can be easily obtained by substituting in the canonical commutation relation: Using this, the operators bk† and bk can be inverted to redefine the conjugate position and momentum as:   and   Directly substituting these definitions for and into the wavevector space Hamiltonian, as it is defined above, and simplifying then results in the Hamiltonian taking the form: This is known as the second quantization technique, also known as the occupation number formulation, where nk = bk†bk is the occupation number. This can be seen to be a sum of N independent oscillator Hamiltonians, each with a unique wave vector, and compatible with the methods used for the quantum harmonic oscillator (note that nk is hermitian). When a Hamiltonian can be written as a sum of commuting sub-Hamiltonians, the energy eigenstates will be given by the products of eigenstates of each of the separate sub-Hamiltonians. The corresponding energy spectrum is then given by the sum of the individual eigenvalues of the sub-Hamiltonians. As with the quantum harmonic oscillator, one can show that bk† and bk respectively create and destroy a single field excitation, a phonon, with an energy of ħωk. Three important properties of phonons may be deduced from this technique. First, phonons are bosons, since any number of identical excitations can be created by repeated application of the creation operator bk†. Second, each phonon is a "collective mode" caused by the motion of every atom in the lattice. This may be seen from the fact that the creation and annihilation operators, defined here in momentum space, contain sums over the position and momentum operators of every atom when written in position space. (See position and momentum space.) Finally, using the position–position correlation function, it can be shown that phonons act as waves of lattice displacement. This technique is readily generalized to three dimensions, where the Hamiltonian takes the form: This can be interpreted as the sum of 3N independent oscillator Hamiltonians, one for each wave vector and polarization. Acoustic and optical phonons Solids with more than one atom in the smallest unit cell exhibit two types of phonons: acoustic phonons and optical phonons. Acoustic phonons are coherent movements of atoms of the lattice out of their equilibrium positions. If the displacement is in the direction of propagation, then in some areas the atoms will be closer, in others farther apart, as in a sound wave in air (hence the name acoustic). Displacement perpendicular to the propagation direction is comparable to waves on a string. If the wavelength of acoustic phonons goes to infinity, this corresponds to a simple displacement of the whole crystal, and this costs zero deformation energy. Acoustic phonons exhibit a linear relationship between frequency and phonon wave-vector for long wavelengths. The frequencies of acoustic phonons tend to zero with longer wavelength. Longitudinal and transverse acoustic phonons are often abbreviated as LA and TA phonons, respectively. Optical phonons are out-of-phase movements of the atoms in the lattice, one atom moving to the left, and its neighbor to the right. This occurs if the lattice basis consists of two or more atoms. 
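The algebra of the creation and annihilation operators described above can be checked numerically in a truncated Fock space. The sketch below builds the single-mode ladder operators as matrices in the number basis, verifies the canonical commutator, and recovers the evenly spaced oscillator spectrum; the dimension and the value of the energy quantum are illustrative choices.

import numpy as np

N = 12                                      # truncated Fock-space dimension (illustrative)
b = np.diag(np.sqrt(np.arange(1, N)), 1)    # annihilation operator in the number basis
b_dag = b.conj().T                          # creation operator

# Canonical commutator [b, b_dag] = 1 (exact except in the last row of a truncated space).
comm = b @ b_dag - b_dag @ b
print(np.allclose(np.diag(comm)[:-1], 1.0))

# Single-mode Hamiltonian H = hbar*omega * (b_dag b + 1/2), with hbar*omega set to 1 here.
hbar_omega = 1.0
H = hbar_omega * (b_dag @ b + 0.5 * np.eye(N))
print(np.diag(H)[:5])                       # evenly spaced levels 0.5, 1.5, 2.5, ...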
They are called optical because in ionic crystals, such as sodium chloride, fluctuations in displacement create an electrical polarization that couples to the electromagnetic field. Hence, they can be excited by infrared radiation, the electric field of the light will move every positive sodium ion in the direction of the field, and every negative chloride ion in the other direction, causing the crystal to vibrate. Optical phonons have a non-zero frequency at the Brillouin zone center and show no dispersion near that long wavelength limit. This is because they correspond to a mode of vibration where positive and negative ions at adjacent lattice sites swing against each other, creating a time-varying electrical dipole moment. Optical phonons that interact in this way with light are called infrared active. Optical phonons that are Raman active can also interact indirectly with light, through Raman scattering. Optical phonons are often abbreviated as LO and TO phonons, for the longitudinal and transverse modes respectively; the splitting between LO and TO frequencies is often described accurately by the Lyddane–Sachs–Teller relation. When measuring optical phonon energy experimentally, optical phonon frequencies are sometimes given in spectroscopic wavenumber notation, where the symbol ω represents ordinary frequency (not angular frequency), and is expressed in units of cm−1. The value is obtained by dividing the frequency by the speed of light in vacuum. In other words, the wave-number in cm−1 units corresponds to the inverse of the wavelength of a photon in vacuum that has the same frequency as the measured phonon. Crystal momentum By analogy to photons and matter waves, phonons have been treated with wavevector k as though it has a momentum ħk; however, this is not strictly correct, because ħk is not actually a physical momentum; it is called the crystal momentum or pseudomomentum. This is because k is only determined up to addition of constant vectors (the reciprocal lattice vectors and integer multiples thereof). For example, in the one-dimensional model, the normal coordinates Q and Π are defined so that where for any integer n. A phonon with wavenumber k is thus equivalent to an infinite family of phonons with wavenumbers k ± , k ± , and so forth. Physically, the reciprocal lattice vectors act as additional chunks of momentum which the lattice can impart to the phonon. Bloch electrons obey a similar set of restrictions. It is usually convenient to consider phonon wavevectors k which have the smallest magnitude |k| in their "family". The set of all such wavevectors defines the first Brillouin zone. Additional Brillouin zones may be defined as copies of the first zone, shifted by some reciprocal lattice vector. Thermodynamics The thermodynamic properties of a solid are directly related to its phonon structure. The entire set of all possible phonons that are described by the phonon dispersion relations combine in what is known as the phonon density of states which determines the heat capacity of a crystal. By the nature of this distribution, the heat capacity is dominated by the high-frequency part of the distribution, while thermal conductivity is primarily the result of the low-frequency region. At absolute zero temperature, a crystal lattice lies in its ground state, and contains no phonons. A lattice at a nonzero temperature has an energy that is not constant, but fluctuates randomly about some mean value. 
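The spectroscopic-wavenumber convention mentioned above (ordinary frequency divided by the speed of light, quoted in cm^-1) is a one-line conversion; the 10 THz input below is only an example value.

c_cm_per_s = 2.99792458e10        # speed of light in cm/s

def frequency_to_wavenumber(nu_hz):
    # Spectroscopic wavenumber in cm^-1: ordinary frequency divided by c.
    return nu_hz / c_cm_per_s

print(frequency_to_wavenumber(10.0e12))   # a 10 THz phonon is about 334 cm^-1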
These energy fluctuations are caused by random lattice vibrations, which can be viewed as a gas of phonons. Because these phonons are generated by the temperature of the lattice, they are sometimes designated thermal phonons. Thermal phonons can be created and destroyed by random energy fluctuations. In the language of statistical mechanics this means that the chemical potential for adding a phonon is zero. This behavior is an extension of the harmonic potential into the anharmonic regime. The behavior of thermal phonons is similar to the photon gas produced by an electromagnetic cavity, wherein photons may be emitted or absorbed by the cavity walls. This similarity is not coincidental, for it turns out that the electromagnetic field behaves like a set of harmonic oscillators, giving rise to black-body radiation. Both gases obey the Bose–Einstein statistics: in thermal equilibrium and within the harmonic regime, the probability of finding phonons or photons in a given state with a given angular frequency is: where ωk,s is the frequency of the phonons (or photons) in the state, kB is the Boltzmann constant, and T is the temperature. Phonon tunneling Phonons have been shown to exhibit quantum tunneling behavior (or phonon tunneling) where, across gaps up to a nanometer wide, heat can flow via phonons that "tunnel" between two materials. This type of heat transfer works between distances too large for conduction to occur but too small for radiation to occur and therefore cannot be explained by classical heat transfer models. Operator formalism The phonon Hamiltonian is given by In terms of the creation and annihilation operators, these are given by Here, in expressing the Hamiltonian in operator formalism, we have not taken into account the ħωq term as, given a continuum or infinite lattice, the ħωq terms will add up yielding an infinite term. Because the difference in energy is what we measure and not the absolute value of it, the constant term ħωq can be ignored without changing the equations of motion. Hence, the ħωq factor is absent in the operator formalized expression for the Hamiltonian. The ground state, also called the "vacuum state", is the state composed of no phonons. Hence, the energy of the ground state is 0. When a system is in the state , we say there are nα phonons of type α, where nα is the occupation number of the phonons. The energy of a single phonon of type α is given by ħωq and the total energy of a general phonon system is given by n1ħω1 + n2ħω2 +.... As there are no cross terms (e.g. n1ħω2), the phonons are said to be non-interacting. The action of the creation and annihilation operators is given by: and, The creation operator, aα† creates a phonon of type α while aα annihilates one. Hence, they are respectively the creation and annihilation operators for phonons. Analogous to the quantum harmonic oscillator case, we can define particle number operator as The number operator commutes with a string of products of the creation and annihilation operators if and only if the number of creation operators is equal to number of annihilation operators. It can be shown that phonons are symmetric under exchange (i.e.  = ), so therefore they are considered bosons. Nonlinearity As well as photons, phonons can interact via parametric down conversion and form squeezed coherent states. Predicted properties Recent research has shown that phonons and rotons may have a non-negligible mass and be affected by gravity just as standard particles are. 
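The Bose-Einstein distribution referred to above is not written out in the text; the sketch below uses the standard form for the mean thermal occupation of a mode, n = 1/(exp(hbar*omega/(kB*T)) - 1), evaluated for an illustrative 1 THz mode.

import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J/K

def bose_einstein(omega, T):
    # Mean number of thermal phonons in a mode of angular frequency omega at temperature T.
    return 1.0 / np.expm1(hbar * omega / (kB * T))

omega = 2.0 * np.pi * 1.0e12           # angular frequency of a 1 THz mode (illustrative)
for T in (10.0, 100.0, 300.0):
    print(T, bose_einstein(omega, T))  # occupation rises with temperature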
In particular, phonons are predicted to have a kind of negative mass and negative gravity. This can be explained by how phonons are known to travel faster in denser materials. Because the part of a material pointing towards a gravitational source is closer to the object, it becomes denser on that end. From this, it is predicted that phonons would deflect away as it detects the difference in densities, exhibiting the qualities of a negative gravitational field. Although the effect would be too small to measure, it is possible that future equipment could lead to successful results. Superconductivity Superconductivity is a state of electronic matter in which electrical resistance vanishes and magnetic fields are expelled from the material. In a superconductor, electrons are bound together into Cooper pairs by a weak attractive force. In a conventional superconductor, this attraction is caused by an exchange of phonons between the electrons. The evidence that phonons, the vibrations of the ionic lattice, are relevant for superconductivity is provided by the isotope effect, the dependence of the superconducting critical temperature on the mass of the ions. Other research In 2019, researchers were able to isolate individual phonons without destroying them for the first time. They have been also shown to form “phonon winds” where an electric current in a graphene surface is generated by a liquid flow above it due to the viscous forces at the liquid–solid interface. See also Boson Brillouin scattering Fracton Linear elasticity Mechanical wave Phonon scattering Carrier scattering Phononic crystal Rayleigh wave Relativistic heat conduction Rigid unit modes SASER Second sound Surface acoustic wave Surface phonon Thermal conductivity Vibration References External links * Optical and acoustic modes with movies in Bar-Ziv Lab. Quasiparticles Bosons 1932 introductions
Phonon
[ "Physics", "Materials_science" ]
6,104
[ "Matter", "Bosons", "Condensed matter physics", "Quasiparticles", "Subatomic particles" ]
85,757
https://en.wikipedia.org/wiki/Trans-lunar%20injection
A trans-lunar injection (TLI) is a propulsive maneuver, which is used to send a spacecraft to the Moon. Typical lunar transfer trajectories approximate Hohmann transfers, although low-energy transfers have also been used in some cases, as with the Hiten probe. For short duration missions without significant perturbations from sources outside the Earth-Moon system, a fast Hohmann transfer is typically more practical. A spacecraft performs TLI to begin a lunar transfer from a low circular parking orbit around Earth. The large TLI burn, usually performed by a chemical rocket engine, increases the spacecraft's velocity, changing its orbit from a circular low Earth orbit to a highly eccentric orbit. As the spacecraft begins coasting on the lunar transfer arc, its trajectory approximates an elliptical orbit about the Earth with an apogee near to the radius of the Moon's orbit. The TLI burn is sized and timed to precisely target the Moon as it revolves around the Earth. The burn is timed so that the spacecraft nears apogee as the Moon approaches. Finally, the spacecraft enters the Moon's sphere of influence, making a hyperbolic lunar swingby. Free return In some cases it is possible to design a TLI to target a free return trajectory, so that the spacecraft will loop around behind the Moon and return to Earth without need for further propulsive maneuvers. Such free return trajectories add a margin of safety to human spaceflight missions, since the spacecraft will return to Earth "for free" after the initial TLI burn. The Apollos 8, 10 and 11 began on a free return trajectory, while the later missions used a functionally similar hybrid trajectory, in which a midway course correction is required to reach the Moon. Modeling Patched conics TLI targeting and lunar transfers are a specific application of the n body problem, which may be approximated in various ways. The simplest way to explore lunar transfer trajectories is by the method of patched conics. The spacecraft is assumed to accelerate only under classical 2 body dynamics, being dominated by the Earth until it reaches the Moon's sphere of influence. Motion in a patched-conic system is deterministic and simple to calculate, lending itself for rough mission design and "back of the envelope" studies. Restricted circular three body (RC3B) More realistically, however, the spacecraft is subject to gravitational forces from many bodies. Gravitation from Earth and Moon dominate the spacecraft's acceleration, and since the spacecraft's own mass is negligible in comparison, the spacecraft's trajectory may be better approximated as a restricted three-body problem. This model is a closer approximation but lacks an analytic solution, requiring numerical calculation. Further accuracy More detailed simulation involves modeling the Moon's true orbital motion; gravitation from other astronomical bodies; the non-uniformity of the Earth's and Moon's gravity; including solar radiation pressure; and so on. Propagating spacecraft motion in such a model is numerically intensive, but necessary for true mission accuracy. History The first space probe to attempt TLI was the Soviet Union's Luna 1 on January 2, 1959 which was designed to impact the Moon. The burn however didn't go exactly as planned and the spacecraft missed the Moon by more than three times its radius and was sent into a heliocentric orbit. Luna 2 performed the same maneuver more accurately on September 12, 1959 and crashed into the Moon two days later. 
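Within the patched-conic, two-body picture described above, the size of a TLI burn from a circular parking orbit can be estimated with the vis-viva equation. The sketch below assumes an impulsive burn, ignores the Moon's gravity, and uses an illustrative 185 km parking orbit; it is an order-of-magnitude estimate, not a mission design.

import math

mu_earth = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
r_leo = (6371.0 + 185.0) * 1e3   # illustrative 185 km circular parking orbit radius, m
r_moon = 384400.0e3              # mean radius of the Moon's orbit, m (transfer apogee)

v_circ = math.sqrt(mu_earth / r_leo)      # circular parking-orbit speed
a_transfer = 0.5 * (r_leo + r_moon)       # semi-major axis of the transfer ellipse
v_perigee = math.sqrt(mu_earth * (2.0 / r_leo - 1.0 / a_transfer))  # vis-viva speed at perigee

print(v_circ, v_perigee, v_perigee - v_circ)   # delta-v of roughly 3.1 km/s, in line with the Apollo figures below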
The Soviets repeated this success with 22 more Luna missions and 5 Zond missions travelling to the Moon between 1959 and 1976. The United States launched its first lunar impactor attempt, Ranger 3, on January 26, 1962, which failed to reach the Moon. This was followed by the first US success, Ranger 4, on April 23, 1962. Another 27 US missions to the Moon were launched from 1962 to 1973, including five successful Surveyor soft landers, five Lunar Orbiter surveillance probes, and nine Apollo missions, which landed the first humans on the Moon. For the Apollo lunar missions, TLI was performed by the restartable J-2 engine in the S-IVB third stage of the Saturn V rocket. This particular TLI burn lasted approximately 350 seconds, providing 3.05 to 3.25 km/s (10,000 to 10,600 ft/s) of change in velocity, at which point the spacecraft was traveling at approximately 10.4 km/s (34150 ft/s) relative to the Earth. The Apollo 8 TLI was spectacularly observed from the Hawaiian Islands in the pre-dawn sky south of Waikiki, photographed and reported in the papers the next day. In 1969, the Apollo 10 pre-dawn TLI was visible from Cloncurry, Australia. It was described as resembling car headlights coming over a hill in fog, with the spacecraft appearing as a bright comet with a greenish tinge. In 1990 Japan launched its first lunar mission, using the Hiten satellite to fly by the Moon and place the Hagoromo microsatellite in a lunar orbit. Following that, it explored a novel low delta-v TLI method with a 6-month transfer time (compared to 3 days for Apollo). The 1994 US Clementine spacecraft, designed to showcase lightweight technologies, used a 3 week long TLI with two intermediate Earth flybys before entering a lunar orbit. In 1997 Asiasat-3 became the first commercial satellite to reach the Moon's sphere of influence when, after a launch failure, it swung by the Moon twice as a low delta-v way to reach its desired geostationary orbit. It passed within 6200 km of the Moon's surface. The 2003 ESA SMART-1 technology demonstrator satellite became the first European satellite to orbit the Moon. After being launched into a geostationary transfer orbit (GTO), it used solar powered ion engines for propulsion. As a result of its extremely low delta-v TLI maneuver, the spacecraft took over 13 months to reach a lunar orbit and 17 months to reach its desired orbit. China launched its first Moon mission in 2007, placing the Chang'e 1 spacecraft in a lunar orbit. It used multiple burns to slowly raise its apogee to reach the vicinity of the Moon. India followed in 2008, launching the Chandrayaan-1 into a GTO and, like the Chinese spacecraft, increasing its apogee over a number of burns. The soft lander Beresheet from the Israel Aerospace Industries, used this maneuver in 2019, but crashed on the Moon. In 2011 the NASA GRAIL satellites used a low delta-v route to the Moon, passing by the Sun-Earth L1 point, and taking over 3 months. See also Astrodynamics Comparison of super heavy lift launch systems Low energy transfer Trans-Earth injection Trans-Mars injection References Astrodynamics Spacecraft propulsion Orbital maneuvers Exploration of the Moon Apollo program ja:月遷移軌道
Trans-lunar injection
[ "Engineering" ]
1,428
[ "Astrodynamics", "Aerospace engineering" ]
85,767
https://en.wikipedia.org/wiki/Pseudonymous%20remailer
A pseudonymous remailer or nym server, as opposed to an anonymous remailer, is an Internet software program designed to allow people to write pseudonymous messages on Usenet newsgroups and send pseudonymous email. Unlike purely anonymous remailers, it assigns its users a user name, and it keeps a database of instructions on how to return messages to the real user. These instructions usually involve the anonymous remailer network itself, thus protecting the true identity of the user. Primordial pseudonymous remailers once recorded enough information to trace the identity of the real user, making it possible for someone to obtain the identity of the real user through legal or illegal means. This form of pseudonymous remailer is no longer common. David Chaum wrote an article in 1981 that described many of the features present in modern pseudonymous remailers. The Penet remailer, which lasted from 1993 to 1996, was a popular pseudonymous remailer. Contemporary nym servers A nym server (short for "pseudonym server") is a server that provides an untraceable e-mail address, such that neither the nym server operator nor the operators of the remailers involved can discover which nym corresponds to which real identity. To set up a nym, one creates a PGP keypair and submits it to the nym server, along with instructions (called a reply block) to anonymous remailers (such as Cypherpunk or Mixmaster) on how to send a message to one's real address. The nym server returns a confirmation through this reply block. One then sends a message to the address in the confirmation. To send a message through the nym server so that the From address is the nym, one adds a few headers, signs the message with one's nym key, encrypts it with the nym server key, and sends the message to the nym server, optionally routing it through some anonymous remailers. When the nym server receives the message it decrypts it and sends it on to the intended recipient, with the From address indicating one's nym. When the nym server gets a message addressed to the nym, it appends it to the nym's reply block and sends it to the first remailer in the chain, which sends it to the next and so on until it reaches your real address. It is considered good practice to include instructions to encrypt it on the way, so that someone (or some organization) doing in/out traffic analysis on the nym server cannot easily match the message received by you to the one sent by the nym server. Existing "multi-use reply block" nym servers were shown to be susceptible to passive traffic analysis with one month's worth of incoming spam (based on 2005 figures) in a paper by Bram Cohen, Len Sassaman, and Nick Mathewson. See also Anonymity Anonymous P2P Anonymous remailer Cypherpunk anonymous remailer (Type I) Mixmaster anonymous remailer (Type II) Mixminion (Type III) I2P-Bote Onion routing Tor (network) Data privacy Penet remailer Traffic analysis References Further reading External links Anonymous Remailer FAQ Mixmaster FAQ Official I2P-Bote eepsite (I2P-internal) Anonymity networks Internet Protocol based network software Routing Network architecture
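The layered handling of a reply block described above can be sketched conceptually. The code below is a toy illustration only: the hop names and keys are hypothetical, the "encryption" is a placeholder XOR rather than PGP, and real nym servers and remailers use the Cypherpunk/Mixmaster message formats rather than anything resembling this. It is meant only to show that each hop learns no more than the identity of the next hop.

import base64

def toy_encrypt(key, plaintext):
    # Placeholder "encryption" (repeating-key XOR plus base64) used only to show layering.
    keyb = key.encode()
    return base64.b64encode(bytes(b ^ keyb[i % len(keyb)] for i, b in enumerate(plaintext)))

def toy_decrypt(key, ciphertext):
    keyb = key.encode()
    raw = base64.b64decode(ciphertext)
    return bytes(b ^ keyb[i % len(keyb)] for i, b in enumerate(raw))

hops = ["remailer-a", "remailer-b", "remailer-c"]     # forwarding order (hypothetical names)
keys = {"remailer-a": "key-a", "remailer-b": "key-b", "remailer-c": "key-c"}

# Build the reply block from the inside out: only the innermost layer holds the
# real address, and every outer layer names nothing but the next hop.
block = toy_encrypt(keys[hops[-1]], b"deliver-to: user@example.invalid")
for earlier, later in zip(reversed(hops[:-1]), reversed(hops[1:])):
    block = toy_encrypt(keys[earlier], b"next-hop: " + later.encode() + b"\n" + block)

# Each remailer peels exactly one layer and forwards the remainder.
for name in hops[:-1]:
    layer = toy_decrypt(keys[name], block)
    header, block = layer.split(b"\n", 1)
    print(name, "learns only:", header.decode())
print(hops[-1], "learns only:", toy_decrypt(keys[hops[-1]], block).decode())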
Pseudonymous remailer
[ "Engineering" ]
720
[ "Network architecture", "Computer networks engineering" ]
86,061
https://en.wikipedia.org/wiki/Cygnus%20X-1
Cygnus X-1 (abbreviated Cyg X-1) is a galactic X-ray source in the constellation Cygnus and was the first such source widely accepted to be a black hole. It was discovered in 1964 during a rocket flight and is one of the strongest X-ray sources detectable from Earth, producing a peak X-ray flux density of (). It remains among the most studied astronomical objects in its class. The compact object is now estimated to have a mass about 21.2 times the mass of the Sun and has been shown to be too small to be any known kind of normal star or other likely object besides a black hole. If so, the radius of its event horizon has "as upper bound to the linear dimension of the source region" of occasional X-ray bursts lasting only for about 1 ms. Cygnus X-1 belongs to a high-mass X-ray binary system, located about 2.22 kiloparsecs from the Sun, that includes a blue supergiant variable star designated HDE 226868, which it orbits at about 0.2 AU, or 20% of the distance from Earth to the Sun. A stellar wind from the star provides material for an accretion disk around the X-ray source. Matter in the inner disk is heated to millions of degrees, generating the observed X-rays. A pair of relativistic jets, arranged perpendicularly to the disk, are carrying part of the energy of the infalling material away into interstellar space. This system may belong to a stellar association called Cygnus OB3, which would mean that Cygnus X-1 is about 5 million years old and formed from a progenitor star that had more than . The majority of the star's mass was shed, most likely as a stellar wind. If this star had then exploded as a supernova, the resulting force would most likely have ejected the remnant from the system. Hence the star may have instead collapsed directly into a black hole. Cygnus X-1 was the subject of a friendly scientific wager between physicists Stephen Hawking and Kip Thorne in 1975, with Hawking—betting that it was not a black hole—hoping to lose. Hawking conceded the bet in 1990 after observational data had strengthened the case that there was indeed a black hole in the system. , this hypothesis lacked direct empirical evidence but was generally accepted based on indirect evidence. Discovery and observation Observation of X-ray emissions allows astronomers to study celestial phenomena involving gas with temperatures in the millions of degrees. However, because X-ray emissions are blocked by Earth's atmosphere, observation of celestial X-ray sources is not possible without lifting instruments to altitudes where the X-rays can penetrate. Cygnus X-1 was discovered using X-ray instruments that were carried aloft by a sounding rocket launched from White Sands Missile Range in New Mexico. As part of an ongoing effort to map these sources, a survey was conducted in 1964 using two Aerobee suborbital rockets. The rockets carried Geiger counters to measure X-ray emission in wavelength range 1– across an 8.4° section of the sky. These instruments swept across the sky as the rockets rotated, producing a map of closely spaced scans. As a result of these surveys, eight new sources of cosmic X-rays were discovered, including Cyg XR-1 (later Cyg X-1) in the constellation Cygnus. The celestial coordinates of this source were estimated as right ascension 19h53m and declination 34.6°. It was not associated with any especially prominent radio or optical source at that position. 
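The millisecond X-ray bursts quoted above bound the size of the emitting region through the usual light-travel-time argument: a source cannot vary coherently on a timescale shorter than the time light needs to cross it. A minimal sketch of that estimate:

c = 2.998e8        # speed of light, m/s
dt = 1.0e-3        # ~1 ms burst timescale, as quoted above

# Causality limit on the linear size of the emitting region.
size_upper_bound = c * dt
print(size_upper_bound / 1e3)   # about 300 km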
Seeing a need for longer-duration studies, in 1963 Riccardo Giacconi and Herb Gursky proposed the first orbital satellite to study X-ray sources. NASA launched their Uhuru satellite in 1970, which led to the discovery of 300 new X-ray sources. Extended Uhuru observations of Cygnus X-1 showed fluctuations in the X-ray intensity that occurs several times a second. This rapid variation meant that the X-ray generation must occur over a compact region no larger than ~ (roughly the size of Jupiter), as the speed of light restricts communication between more distant regions. In April–May 1971, Luc Braes and George K. Miley from Leiden Observatory, and independently Robert M. Hjellming and Campbell Wade at the National Radio Astronomy Observatory, detected radio emission from Cygnus X-1, and their accurate radio position pinpointed the X-ray source to the star AGK2 +35 1910 = HDE 226868. On the celestial sphere, this star lies about half a degree from the 4th-magnitude star Eta Cygni. It is a supergiant star that is by itself incapable of emitting the observed quantities of X-rays. Hence, the star must have a companion that could heat gas to the millions of degrees needed to produce the radiation source for Cygnus X-1. Louise Webster and Paul Murdin, at the Royal Greenwich Observatory, and Charles Thomas Bolton, working independently at the University of Toronto's David Dunlap Observatory, announced the discovery of a massive hidden companion to HDE 226868 in 1972. Measurements of the Doppler shift of the star's spectrum demonstrated the companion's presence and allowed its mass to be estimated from the orbital parameters. Based on the high predicted mass of the object, they surmised that it may be a black hole, as the largest possible neutron star cannot exceed three times the mass of the Sun. With further observations strengthening the evidence, by the end of 1973 the astronomical community generally conceded that Cygnus X-1 was most likely a black hole. More precise measurements of Cygnus X-1 demonstrated variability down to a single millisecond. This interval is consistent with turbulence in a disk of accreted matter surrounding a black hole—the accretion disk. X-ray bursts that last for about a third of a second match the expected time frame of matter falling toward a black hole. Cygnus X-1 has since been studied extensively using observations by orbiting and ground-based instruments. The similarities between the emissions of X-ray binaries such as HDE 226868/Cygnus X-1 and active galactic nuclei suggests a common mechanism of energy generation involving a black hole, an orbiting accretion disk and associated jets. For this reason, Cygnus X-1 is identified among a class of objects called microquasars; an analog of the quasars, or quasi-stellar radio sources, now known to be distant active galactic nuclei. Scientific studies of binary systems such as HDE 226868/Cygnus X-1 may lead to further insights into the mechanics of active galaxies. Binary system The compact object and blue supergiant star form a binary system in which they orbit around their center of mass every 5.599829 days. From the perspective of Earth, the compact object never goes behind the other star; in other words, the system does not eclipse. However, the inclination of the orbital plane to the line of sight from Earth remains uncertain, with predictions ranging from 27° to 65°. A 2007 study estimated the inclination as , which would mean that the semi-major axis is about , or 20% of the distance from Earth to the Sun. 
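The orbital figures quoted above can be cross-checked with Kepler's third law. The total mass used below combines the ~21.2 solar-mass compact object with an illustrative mid-range value for the supergiant taken from the mass range given later in the article; it is a rough consistency check, not a derivation.

import math

G = 6.674e-11            # m^3 kg^-1 s^-2
M_sun = 1.989e30         # kg
AU = 1.496e11            # m

P = 5.599829 * 86400.0                 # orbital period, s
M_total = (21.2 + 25.0) * M_sun        # compact object + illustrative companion mass

# Kepler's third law: a^3 = G * M_total * P^2 / (4 * pi^2)
a = (G * M_total * P ** 2 / (4.0 * math.pi ** 2)) ** (1.0 / 3.0)
print(a / AU)                          # roughly 0.2 AU, consistent with the separation quoted above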
The orbital eccentricity is thought to be only , meaning a nearly circular orbit. Earth's distance to this system is calculated by trigonometric parallax as , and by radio astrometry as . The HDE 226868/Cygnus X-1 system shares a common motion through space with an association of massive stars named Cygnus OB3, which is located at roughly 2000 parsecs from the Sun. This implies that HDE 226868, Cygnus X-1 and this OB association may have formed at the same time and location. If so, then the age of the system is about . The motion of HDE 226868 with respect to Cygnus OB3 is , a typical value for random motion within a stellar association. HDE 226868 is about from the center of the association and could have reached that separation in about —which roughly agrees with estimated age of the association. With a galactic latitude of 4° and galactic longitude 71°, this system lies inward along the same Orion Spur, in which the Sun is located within the Milky Way, near where the spur approaches the Sagittarius Arm. Cygnus X-1 has been described as belonging to the Sagittarius Arm, though the structure of the Milky Way is not well established. Compact object From various techniques, the mass of the compact object appears to be greater than the maximum mass for a neutron star. Stellar evolutionary models suggest a mass of , while other techniques resulted in 10 solar masses. Measuring periodicities in the X-ray emission near the object yielded a more precise value of . In all cases, the object is most likely a black hole—a region of space with a gravitational field that is strong enough to prevent the escape of electromagnetic radiation from the interior. The boundary of this region is called the event horizon and has an effective radius called the Schwarzschild radius, which is about for Cygnus X-1. Anything (including matter and photons) that passes through this boundary is unable to escape. New measurements published in 2021 yielded an estimated mass of . Evidence of just such an event horizon may have been detected in 1992 using ultraviolet (UV) observations with the High Speed Photometer on the Hubble Space Telescope. As self-luminous clumps of matter spiral into a black hole, their radiation is emitted in a series of pulses that are subject to gravitational redshift as the material approaches the horizon. That is, the wavelengths of the radiation steadily increase, as predicted by general relativity. Matter hitting a solid, compact object would emit a final burst of energy, whereas material passing through an event horizon would not. Two such "dying pulse trains" were observed, which is consistent with the existence of a black hole. The spin of the compact object is not yet well determined. Past analysis of data from the space-based Chandra X-ray Observatory suggested that Cygnus X-1 was not rotating to any significant degree. However, evidence announced in 2011 suggests that it is rotating extremely rapidly, approximately 790 times per second. Formation The largest star in the Cygnus OB3 association has a mass 40 times that of the Sun. As more massive stars evolve more rapidly, this implies that the progenitor star for Cygnus X-1 had more than 40 solar masses. Given the current estimated mass of the black hole, the progenitor star must have lost over 30 solar masses of material. Part of this mass may have been lost to HDE 226868, while the remainder was most likely expelled by a strong stellar wind. The helium enrichment of HDE 226868's outer atmosphere may be evidence for this mass transfer. 
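The Schwarzschild radius mentioned above follows directly from the mass estimate via r_s = 2GM/c^2; the sketch below evaluates it for the ~21.2 solar-mass figure.

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_sun = 1.989e30       # kg

M = 21.2 * M_sun
r_s = 2.0 * G * M / c ** 2
print(r_s / 1e3)       # about 63 km for a 21.2 solar-mass black hole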
Possibly the progenitor may have evolved into a Wolf–Rayet star, which ejects a substantial proportion of its atmosphere using just such a powerful stellar wind. If the progenitor star had exploded as a supernova, then observations of similar objects show that the remnant would most likely have been ejected from the system at a relatively high velocity. As the object remained in orbit, this indicates that the progenitor may have collapsed directly into a black hole without exploding (or at most produced only a relatively modest explosion). Accretion disk The compact object is thought to be orbited by a thin, flat disk of accreting matter known as an accretion disk. This disk is intensely heated by friction between ionized gas in faster-moving inner orbits and that in slower outer ones. It is divided into a hot inner region with a relatively high level of ionization—forming a plasma—and a cooler, less ionized outer region that extends to an estimated 500 times the Schwarzschild radius, or about 15,000 km. Though highly and erratically variable, Cygnus X-1 is typically the brightest persistent source of hard X-rays—those with energies from about 30 up to several hundred kiloelectronvolts—in the sky. The X-rays are produced as lower-energy photons in the thin inner accretion disk, then given more energy through Compton scattering with very high-temperature electrons in a geometrically thicker, but nearly transparent corona enveloping it, as well as by some further reflection from the surface of the thin disk. An alternative possibility is that the X-rays may be Compton-scattered by the base of a jet instead of a disk corona. The X-ray emission from Cygnus X-1 can vary in a somewhat repetitive pattern called quasi-periodic oscillations (QPO). The mass of the compact object appears to determine the distance at which the surrounding plasma begins to emit these QPOs, with the emission radius decreasing as the mass decreases. This technique has been used to estimate the mass of Cygnus X-1, providing a cross-check with other mass derivations. Pulsations with a stable period, similar to those resulting from the spin of a neutron star, have never been seen from Cygnus X-1. The pulsations from neutron stars are caused by the neutron star's rotating magnetic field, but the no-hair theorem guarantees that the magnetic field of a black hole is exactly aligned with its rotation axis and thus is static. For example, the X-ray binary V 0332+53 was thought to be a possible black hole until pulsations were found. Cygnus X-1 has also never displayed X-ray bursts similar to those seen from neutron stars. Cygnus X-1 unpredictably changes between two X-ray states, although the X-rays may vary continuously between those states as well. In the most common state, the X-rays are "hard", which means that more of the X-rays have high energy. In the less common state, the X-rays are "soft", with more of the X-rays having lower energy. The soft state also shows greater variability. The hard state is believed to originate in a corona surrounding the inner part of the more opaque accretion disk. The soft state occurs when the disk draws closer to the compact object (possibly as close as ), accompanied by cooling or ejection of the corona. When a new corona is generated, Cygnus X-1 transitions back to the hard state. The spectral transition of Cygnus X-1 can be explained using a two-component advective flow solution, as proposed by Chakrabarti and Titarchuk. 
A hard state is generated by the inverse Comptonisation of seed photons from the Keplarian disk and likewise synchrotron photons produced by the hot electrons in the centrifugal-pressure–supported boundary layer (CENBOL). The X-ray flux from Cygnus X-1 varies periodically every 5.6 days, especially during superior conjunction when the orbiting objects are most closely aligned with Earth and the compact source is the more distant. This indicates that the emissions are being partially blocked by circumstellar matter, which may be the stellar wind from the star HDE 226868. There is a roughly 300-day periodicity in the emission, which could be caused by the precession of the accretion disk. Jets As accreted matter falls toward the compact object, it loses gravitational potential energy. Part of this released energy is dissipated by jets of particles, aligned perpendicular to the accretion disk, that flow outward with relativistic velocities (that is, the particles are moving at a significant fraction of the speed of light). This pair of jets provide a means for an accretion disk to shed excess energy and angular momentum. They may be created by magnetic fields within the gas that surrounds the compact object. The Cygnus X-1 jets are inefficient radiators and so release only a small proportion of their energy in the electromagnetic spectrum. That is, they appear "dark". The estimated angle of the jets to the line of sight is 30°, and they may be precessing. One of the jets is colliding with a relatively dense part of the interstellar medium (ISM), forming an energized ring that can be detected by its radio emission. This collision appears to be forming a nebula that has been observed in the optical wavelengths. To produce this nebula, the jet must have an estimated average power of 4–, or . This is more than 1,000 times the power emitted by the Sun. There is no corresponding ring in the opposite direction because that jet is facing a lower-density region of the ISM. In 2006, Cygnus X-1 became the first stellar-mass black hole found to display evidence of gamma-ray emission in the very high-energy band, above . The signal was observed at the same time as a flare of hard X-rays, suggesting a link between the events. The X-ray flare may have been produced at the base of the jet, while the gamma rays could have been generated where the jet interacts with the stellar wind of HDE 226868. HDE 226868 HDE 226868 is a supergiant star with a spectral class of O9.7 Iab, which is on the borderline between class-O and class-B stars. It has an estimated surface temperature of 31,000 K and mass approximately 20–40 times the mass of the Sun. Based on a stellar evolutionary model, at the estimated distance of 2,000 parsecs, this star may have a radius equal to about 15–17 times the solar radius and has approximately 300,000–400,000 times the luminosity of the Sun. For comparison, the compact object is estimated to be orbiting HDE 226868 at a distance of about 40 solar radii, or twice the radius of this star. The surface of HDE 226868 is being tidally distorted by the gravity of the massive companion, forming a tear-drop shape that is further distorted by rotation. This causes the optical brightness of the star to vary by 0.06 magnitudes during each 5.6-day binary orbit, with the minimum magnitude occurring when the system is aligned with the line of sight. The "ellipsoidal" pattern of light variation results from the limb darkening and gravity darkening of the star's surface. 
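The radius, temperature and luminosity quoted above for HDE 226868 can be checked for rough consistency with the blackbody relation L = 4*pi*R^2*sigma*T^4. The mid-range radius used below is an illustrative choice, and the result should only be read as an order-of-magnitude check.

import math

sigma = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
R_sun = 6.957e8        # m
L_sun = 3.828e26       # W

R = 16.0 * R_sun       # mid-range of the 15-17 solar-radius estimate
T = 31000.0            # K, quoted surface temperature

L = 4.0 * math.pi * R ** 2 * sigma * T ** 4
print(L / L_sun)       # a few hundred thousand solar luminosities, the same order as quoted above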
When the spectrum of HDE 226868 is compared to the similar star Alnilam, the former shows an overabundance of helium and an underabundance of carbon in its atmosphere. The ultraviolet and hydrogen-alpha spectral lines of HDE 226868 show profiles similar to the star P Cygni, which indicates that the star is surrounded by a gaseous envelope that is being accelerated away from the star at speeds of about 1,500 km/s. Like other stars of its spectral type, HDE 226868 is thought to be shedding mass in a stellar wind at an estimated rate of solar masses per year; or one solar mass every 400,000 years. The gravitational influence of the compact object appears to be reshaping this stellar wind, producing a focused wind geometry rather than a spherically symmetrical wind. X-rays from the region surrounding the compact object heat and ionize this stellar wind. As the object moves through different regions of the stellar wind during its 5.6-day orbit, the UV lines, the radio emission, and the X-rays themselves all vary. The Roche lobe of HDE 226868 defines the region of space around the star where orbiting material remains gravitationally bound. Material that passes beyond this lobe may fall toward the orbiting companion. This Roche lobe is believed to be close to the surface of HDE 226868 but not overflowing, so the material at the stellar surface is not being stripped away by its companion. However, a significant proportion of the stellar wind emitted by the star is being drawn onto the compact object's accretion disk after passing beyond this lobe. The gas and dust between the Sun and HDE 226868 results in a reduction in the apparent magnitude of the star, as well as a reddening of the hue—red light can more effectively penetrate the dust in the interstellar medium. The estimated value of the interstellar extinction (AV) is 3.3 magnitudes. Without the intervening matter, HDE 226868 would be a fifth-magnitude star, and thus visible to the unaided eye. Stephen Hawking and Kip Thorne Cygnus X-1 was the subject of a bet between physicists Stephen Hawking and Kip Thorne, in which Hawking bet against the existence of black holes in the region. Hawking later described this as an "insurance policy" of sorts. In his book A Brief History of Time he wrote: According to the updated tenth-anniversary edition of A Brief History of Time, Hawking has conceded the bet due to subsequent observational data in favor of black holes. In his own book Black Holes and Time Warps, Thorne reports that Hawking conceded the bet by breaking into Thorne's office while he was in Russia, finding the framed bet, and signing it. While Hawking referred to the bet as taking place in 1975, the written bet itself (in Thorne's handwriting, with his and Hawking's signatures) bears additional witness signatures under a legend stating "Witnessed this tenth day of December 1974". This date was confirmed by Kip Thorne on the January 10, 2018 episode of Nova on PBS. In popular culture Cygnus X-1 is the subject of a two-part song series by Canadian progressive rock band Rush. The first part, "Book I: The Voyage", is the last song on the 1977 album A Farewell to Kings. The second part, "Book II: Hemispheres", is the first song on the following 1978 album, Hemispheres. The lyrics describe an explorer aboard the spaceship Rocinante, who travels to the black hole, believing that there may be something beyond it. As he moves closer, it becomes increasingly difficult to control the ship, and he is eventually drawn in by the pull of gravity. 
In the 1979 Disney live-action science fiction film The Black Hole, the scientific survey ship captained by Dr. Hans Reinhardt to study the black hole of the film's title is the Cygnus, presumably (although never stated as such) named for the first-identified black hole, Cygnus X-1. See also X-ray binary List of nearest black holes Stellar black hole References External links Cygnus X-1 at Constellation Guide NuSTAR and Suzaku observations of the hard state in Cygnus X-1: locating the inner accretion disk Michael Parker, 29 May 2015 NuSTAR's First View of High-Energy X-ray Universe NASA/JPL-Caltech June 28, 2012 098298 O-type supergiants Cygnus (constellation) 226868 Stellar black holes X-ray binaries Cygni, V1357 Durchmusterung objects Rotating ellipsoidal variables
Cygnus X-1
[ "Physics", "Astronomy" ]
4,816
[ "Black holes", "Stellar black holes", "Cygnus (constellation)", "Unsolved problems in physics", "Constellations" ]
86,092
https://en.wikipedia.org/wiki/Bile
Bile (from Latin bilis), or gall, is a yellow-green/misty green fluid produced by the liver of most vertebrates that aids the digestion of lipids in the small intestine. In humans, bile is primarily composed of water, is produced continuously by the liver, and is stored and concentrated in the gallbladder. After a human eats, this stored bile is discharged into the first section of the small intestine. Composition In the human liver, bile is composed of 97–98% water, 0.7% bile salts, 0.2% bilirubin, 0.51% fats (cholesterol, fatty acids, and lecithin), and 200 meq/L inorganic salts. The two main pigments of bile are bilirubin, which is orange-yellow, and its oxidised form biliverdin, which is green. When mixed, they are responsible for the brown color of feces. About of bile is produced per day in adult human beings. Function Bile or gall acts to some extent as a surfactant, helping to emulsify the lipids in food. Bile salt anions are hydrophilic on one side and hydrophobic on the other side; consequently, they tend to aggregate around droplets of lipids (triglycerides and phospholipids) to form micelles, with the hydrophobic sides towards the fat and hydrophilic sides facing outwards. The hydrophilic sides are negatively charged, and this charge prevents fat droplets coated with bile from re-aggregating into larger fat particles. Ordinarily, the micelles in the duodenum have a diameter around 1–50 μm in humans. The dispersion of food fat into micelles provides a greatly increased surface area for the action of the enzyme pancreatic lipase, which digests the triglycerides, and is able to reach the fatty core through gaps between the bile salts. A triglyceride is broken down into two fatty acids and a monoglyceride, which are absorbed by the villi on the intestine walls. After being transferred across the intestinal membrane, the fatty acids reform into triglycerides before being absorbed into the lymphatic system through lacteals. Without bile salts, most of the lipids in food would be excreted in feces, undigested. Since bile increases the absorption of fats, it is an important part of the absorption of the fat-soluble substances, such as the vitamins A, D, E, and K. Besides its digestive function, bile serves also as the route of excretion for bilirubin, a byproduct of red blood cells recycled by the liver. Bilirubin derives from the breakdown of hemoglobin; it is conjugated with glucuronic acid (glucuronidation) in the liver before being excreted in bile. Bile tends to be alkaline on average. The pH of common duct bile (7.50 to 8.05) is higher than that of the corresponding gallbladder bile (6.80 to 7.65). Bile in the gallbladder becomes more acidic the longer a person goes without eating, though resting slows this fall in pH. As an alkali, it also has the function of neutralizing excess stomach acid before it enters the duodenum, the first section of the small intestine. Bile salts also act as bactericides, destroying many of the microbes that may be present in the food. Clinical significance In the absence of bile, fats become indigestible and are instead excreted in feces, a condition called steatorrhea. Feces lack their characteristic brown color and instead are white or gray, and greasy. Steatorrhea can lead to deficiencies in essential fatty acids and fat-soluble vitamins. In addition, past the small intestine (which is normally responsible for absorbing fat from food) the gastrointestinal tract and gut flora are not adapted to processing fats, leading to problems in the large intestine.
The cholesterol contained in bile will occasionally accrete into lumps in the gallbladder, forming gallstones. Cholesterol gallstones are generally treated through surgical removal of the gallbladder. However, they can sometimes be dissolved by increasing the concentration of certain naturally occurring bile acids, such as chenodeoxycholic acid and ursodeoxycholic acid. On an empty stomach – after repeated vomiting, for example – a person's vomit may be green or dark yellow, and very bitter. The bitter and greenish component may be bile or normal digestive juices originating in the stomach. Bile may be forced into the stomach secondary due to a weakened valve (pylorus), the presence of certain drugs including alcohol, or powerful muscular contractions and duodenal spasms. This is known as biliary reflux. Obstruction Biliary obstruction refers to a condition when bile ducts which deliver bile from the gallbladder or liver to the duodenum become obstructed. The blockage of bile might cause a buildup of bilirubin in the bloodstream which can result in jaundice. There are several potential causes for biliary obstruction including gallstones, cancer, trauma, choledochal cysts, or other benign causes of bile duct narrowing. The most common cause of bile duct obstruction is when gallstone(s) are dislodged from the gallbladder into the cystic duct or common bile duct resulting in a blockage. A blockage of the gallbladder or cystic duct may cause cholecystitis. If the blockage is beyond the confluence of the pancreatic duct, this may cause gallstone pancreatitis. In some instances of biliary obstruction, the bile may become infected by bacteria resulting in ascending cholangitis. Society and culture In medical theories prevalent in the West from classical antiquity to the Middle Ages, the body's health depended on the equilibrium of four "humors", or vital fluids, two of which related to bile: blood, phlegm, "yellow bile" (choler), and "black bile". These "humors" are believed to have their roots in the appearance of a blood sedimentation test made in open air, which exhibits a dark clot at the bottom ("black bile"), a layer of unclotted erythrocytes ("blood"), a layer of white blood cells ("phlegm") and a layer of clear yellow serum ("yellow bile"). Excesses of black bile and yellow bile were thought to produce depression and aggression, respectively, and the Greek names for them gave rise to the English words cholera (from Greek χολή kholē, "bile") and melancholia. In the former of those senses, the same theories explain the derivation of the English word bilious from bile, the meaning of gall in English as "exasperation" or "impudence", and the Latin word cholera, derived from the Greek kholé, which was passed along into some Romance languages as words connoting anger, such as colère (French) and cólera (Spanish). Soap Soap can be mixed with bile from mammals, such as ox gall. This mixture, called bile soap or gall soap, can be applied to textiles a few hours before washing as a traditional and effective method for removing various kinds of tough stains. Food Pinapaitan is a dish in Philippine cuisine that uses bile as flavoring. Other areas where bile is commonly used as a cooking ingredient include Laos and northern parts of Thailand. During the Boshin War, Satsuma soldiers of the early Imperial Japanese Army reportedly ate human livers boiled in bile. The practice of eating a slain enemy's liver, known as , was a tradition of the Satsuma people. 
Bears In regions where bile products are a popular ingredient in traditional medicine, the use of bears in bile-farming has been widespread. This practice has been condemned by activists, and some pharmaceutical companies have developed synthetic (non-ursine) alternatives. Principal acids See also Bile acid sequestrant Enterohepatic circulation Intestinal juice References Further reading Seleem HM, Nada AS, Naguib MA, Abdelmaksoud OR, El-Gazzarah AR (2021). Serum immunoglobulin G4 in patients with nonmalignant common bile duct stricture. Menoufia Med J; 34:1275-83. Body fluids Digestive system Biomolecules Hepatology
Bile
[ "Chemistry", "Biology" ]
1,784
[ "Digestive system", "Natural products", "Organic compounds", "Organ systems", "Structural biology", "Biomolecules", "Biochemistry", "Molecular biology" ]
86,113
https://en.wikipedia.org/wiki/Winding%20number
In mathematics, the winding number or winding index of a closed curve in the plane around a given point is an integer representing the total number of times that the curve travels counterclockwise around the point, i.e., the curve's number of turns. For certain open plane curves, the number of turns may be a non-integer. The winding number depends on the orientation of the curve, and it is negative if the curve travels around the point clockwise. Winding numbers are fundamental objects of study in algebraic topology, and they play an important role in vector calculus, complex analysis, geometric topology, differential geometry, and physics (such as in string theory). Intuitive description Suppose we are given a closed, oriented curve in the xy plane. We can imagine the curve as the path of motion of some object, with the orientation indicating the direction in which the object moves. Then the winding number of the curve is equal to the total number of counterclockwise turns that the object makes around the origin. When counting the total number of turns, counterclockwise motion counts as positive, while clockwise motion counts as negative. For example, if the object first circles the origin four times counterclockwise, and then circles the origin once clockwise, then the total winding number of the curve is three. Using this scheme, a curve that does not travel around the origin at all has winding number zero, while a curve that travels clockwise around the origin has negative winding number. Therefore, the winding number of a curve may be any integer. The following pictures show curves with winding numbers between −2 and 3. Formal definition Let γ : [0, 1] → ℂ \ {a} be a continuous closed path on the plane minus one point a. The winding number of γ around a is the integer wind(γ, a) = s(1) − s(0), where (ρ, s) is the path γ written in polar coordinates, i.e. the lifted path through the covering map p : (ρ, s) ↦ a + ρe^(2πis). The winding number is well defined because of the existence and uniqueness of the lifted path (given the starting point in the covering space) and because all the fibers of p are of the form s + ℤ (so the above expression does not depend on the choice of the starting point). It is an integer because the path is closed. Alternative definitions Winding number is often defined in different ways in various parts of mathematics. All of the definitions below are equivalent to the one given above: Alexander numbering A simple combinatorial rule for defining the winding number was proposed by August Ferdinand Möbius in 1865 and again independently by James Waddell Alexander II in 1928. Any curve partitions the plane into several connected regions, one of which is unbounded. The winding numbers of the curve around two points in the same region are equal. The winding number around (any point in) the unbounded region is zero. Finally, the winding numbers for any two adjacent regions differ by exactly 1; the region with the larger winding number appears on the left side of the curve (with respect to motion down the curve). Differential geometry In differential geometry, parametric equations are usually assumed to be differentiable (or at least piecewise differentiable). In this case, the polar coordinate θ is related to the rectangular coordinates x and y by the equation dθ = (x dy − y dx) / (x² + y²), which is found by differentiating the following definition for θ: θ = arctan(y/x). By the fundamental theorem of calculus, the total change in θ is equal to the integral of dθ.
We can therefore express the winding number of a differentiable curve γ about the origin as a line integral: wind(γ, 0) = (1/2π) ∮γ (x dy − y dx) / (x² + y²). The one-form dθ (defined on the complement of the origin) is closed but not exact, and it generates the first de Rham cohomology group of the punctured plane. In particular, if ω is any closed differentiable one-form defined on the complement of the origin, then the integral of ω along closed loops gives a multiple of the winding number. Complex analysis Winding numbers play a very important role throughout complex analysis (c.f. the statement of the residue theorem). In the context of complex analysis, the winding number of a closed curve in the complex plane can be expressed in terms of the complex coordinate z = x + iy. Specifically, if we write z = re^(iθ), then dz = e^(iθ) dr + ire^(iθ) dθ, and therefore dz/z = dr/r + i dθ. As γ is a closed curve, the total change in ln(r) is zero, and thus the integral of dz/z is equal to i multiplied by the total change in θ. Therefore, the winding number of the closed path γ about the origin is given by the expression (1/2πi) ∮γ dz/z. More generally, if γ is a closed curve parameterized by t ∈ [α, β], the winding number of γ about z0, also known as the index of z0 with respect to γ, is defined for complex z0 not on the curve as Indγ(z0) = (1/2πi) ∮γ dz/(z − z0). This is a special case of the famous Cauchy integral formula. Some of the basic properties of the winding number in the complex plane are given by the following theorem: Theorem. Let γ be a closed path and let Ω be the set complement of the image of γ, that is, Ω = ℂ \ γ([α, β]). Then the index of z with respect to γ, Indγ(z), is (i) integer-valued, i.e., Indγ(z) ∈ ℤ for all z ∈ Ω; (ii) constant over each component (i.e., maximal connected subset) of Ω; and (iii) zero if z is in the unbounded component of Ω. As an immediate corollary, this theorem gives the winding number of a circular path γ about a point a. As expected, the winding number counts the number of (counterclockwise) loops γ makes around a: Corollary. If γ is the path defined by γ(t) = a + re^(2πint), t ∈ [0, 1], then Indγ(a) = n. Topology In topology, the winding number is an alternate term for the degree of a continuous mapping. In physics, winding numbers are frequently called topological quantum numbers. In both cases, the same concept applies. The above example of a curve winding around a point has a simple topological interpretation. The complement of a point in the plane is homotopy equivalent to the circle, such that maps from the circle to itself are really all that need to be considered. It can be shown that each such map can be continuously deformed to (is homotopic to) one of the standard maps z ↦ zⁿ, where multiplication in the circle is defined by identifying it with the complex unit circle. The set of homotopy classes of maps from a circle to a topological space form a group, which is called the first homotopy group or fundamental group of that space. The fundamental group of the circle is the group of the integers, Z; and the winding number of a complex curve is just its homotopy class. Maps from the 3-sphere to itself are also classified by an integer which is also called the winding number or sometimes Pontryagin index. Turning number One can also consider the winding number of the path with respect to the tangent of the path itself. As a path followed through time, this would be the winding number with respect to the origin of the velocity vector. In this case the example illustrated at the beginning of this article has a winding number of 3, because the small loop is counted. This is only defined for immersed paths (i.e., for differentiable paths with nowhere vanishing derivatives), and is the degree of the tangential Gauss map.
This is called the turning number, rotation number, rotation index or index of the curve, and can be computed as the total curvature divided by 2π. Polygons In polygons, the turning number is referred to as the polygon density. For convex polygons, and more generally simple polygons (not self-intersecting), the density is 1, by the Jordan curve theorem. By contrast, for a regular star polygon {p/q}, the density is q. Space curves Turning number cannot be defined for space curves as degree requires matching dimensions. However, for locally convex, closed space curves, one can define tangent turning sign as , where is the turning number of the stereographic projection of its tangent indicatrix. Its two values correspond to the two non-degenerate homotopy classes of locally convex curves. Winding number and Heisenberg ferromagnet equations The winding number is closely related with the (2 + 1)-dimensional continuous Heisenberg ferromagnet equations and its integrable extensions: the Ishimori equation etc. Solutions of the last equations are classified by the winding number or topological charge (topological invariant and/or topological quantum number). Applications Point in polygon A point's winding number with respect to a polygon can be used to solve the point in polygon (PIP) problem – that is, it can be used to determine if the point is inside the polygon or not. Generally, the ray casting algorithm is a better alternative to the PIP problem as it does not require trigonometric functions, contrary to the winding number algorithm. Nevertheless, the winding number algorithm can be sped up so that it, too, does not require calculations involving trigonometric functions. The sped-up version of the algorithm, also known as Sunday's algorithm, is recommended in cases where non-simple polygons should also be accounted for. See also Argument principle Coin rotation paradox Linking coefficient Nonzero-rule Polygon density Residue theorem Schläfli symbol Topological degree theory Topological quantum number Twist (disambiguation)#Mathematics, science, and technology Wilson loop Writhe References External links Algebraic topology Complex analysis Differential geometry
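To make the point-in-polygon application concrete, the following sketch implements a trigonometry-free winding-number test of the kind described above in Python; the source gives no code, so the language choice, the function names, and the tuple-based point representation are illustrative assumptions rather than anything prescribed by the article.

```python
def is_left(p0, p1, p2):
    # > 0 if p2 lies left of the directed line p0 -> p1, < 0 if right, 0 if collinear.
    return (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1])

def winding_number(point, polygon):
    """Winding number of a closed polygon (list of (x, y) vertices,
    first vertex not repeated at the end) around `point`."""
    wn = 0
    n = len(polygon)
    for i in range(n):
        v0, v1 = polygon[i], polygon[(i + 1) % n]
        if v0[1] <= point[1]:
            # upward crossing that passes to the left of the point
            if v1[1] > point[1] and is_left(v0, v1, point) > 0:
                wn += 1
        else:
            # downward crossing that passes to the right of the point
            if v1[1] <= point[1] and is_left(v0, v1, point) < 0:
                wn -= 1
    return wn

square = [(0, 0), (2, 0), (2, 2), (0, 2)]   # counterclockwise
print(winding_number((1, 1), square))        # 1 (inside)
print(winding_number((3, 1), square))        # 0 (outside)
```

Under the nonzero rule, a point counts as inside the polygon exactly when the returned winding number is nonzero; a plain ray-casting test would instead report only the parity of crossings.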
Winding number
[ "Mathematics" ]
1,859
[ "Fields of abstract algebra", "Topology", "Algebraic topology" ]
86,347
https://en.wikipedia.org/wiki/Group%20%28periodic%20table%29
In chemistry, a group (also known as a family) is a column of elements in the periodic table of the chemical elements. There are 18 numbered groups in the periodic table; the 14 f-block columns, between groups 2 and 3, are not numbered. The elements in a group have similar physical or chemical characteristics of the outermost electron shells of their atoms (i.e., the same core charge), because most chemical properties are dominated by the orbital location of the outermost electron. The modern numbering system of "group 1" to "group 18" has been recommended by the International Union of Pure and Applied Chemistry (IUPAC) since 1988. The 1-18 system is based on each atom's s, p and d electrons beyond those in atoms of the preceding noble gas. Two older incompatible naming schemes can assign the same number to different groups depending on the system being used. The older schemes were used by the Chemical Abstract Service (CAS, more popular in the United States), and by IUPAC before 1988 (more popular in Europe). The system of eighteen groups is generally accepted by the chemistry community, but some dissent exists about membership of elements number 1 and 2 (hydrogen and helium). Similar variation on the inner transition metals continues to exist in textbooks, although the correct positioning has been known since 1948 and was twice endorsed by IUPAC in 1988 (together with the 1–18 numbering) and 2021. Groups may also be identified using their topmost element, or have a specific name. For example, group 16 is also described as the "oxygen group" and as the "chalcogens". An exception is the "iron group", which usually refers to group 8, but in chemistry may also mean iron, cobalt, and nickel, or some other set of elements with similar chemical properties. In astrophysics and nuclear physics, it usually refers to iron, cobalt, nickel, chromium, and manganese. Group names Modern group names are numbers 1–18, with the 14 f-block columns remaining unnumbered (together making the 32 columns in the periodic table). Also, trivial names (like halogens) are common. In history, several sets of group names have been used, based on Roman numberings I–VIII, and "A" and "B" suffixes. List of group names Coinage metals: authors differ on whether roentgenium (Rg) is considered a coinage metal. It is in group 11, like the other coinage metals, and is expected to be chemically similar to gold. On the other hand, being extremely radioactive and short-lived, it cannot actually be used for coinage as the name suggests, and on that basis it is sometimes excluded. triels (group 13), from Greek tri: three, III tetrels (group 14), from Greek tetra: four, IV pentel (group 15), from Greek penta: five, V CAS and old IUPAC numbering (A/B) Two earlier group number systems exist: CAS (Chemical Abstracts Service) and old IUPAC. Both use numerals (Arabic or Roman) and letters A and B. Both systems agree on the numbers. The numbers indicate approximately the highest oxidation number of the elements in that group, and so indicate similar chemistry with other elements with the same numeral. The number proceeds in a linearly increasing fashion for the most part, once on the left of the table, and once on the right (see List of oxidation states of the elements), with some irregularities in the transition metals. However, the two systems use the letters differently. For example, potassium (K) has one valence electron. Therefore, it is located in group 1. Calcium (Ca) is in group 2, for it contains two valence electrons. 
In the old IUPAC system the letters A and B were designated to the left (A) and right (B) part of the table, while in the CAS system the letters A and B are designated to main group elements (A) and transition elements (B). The old IUPAC system was frequently used in Europe, while the CAS is most common in America. The new IUPAC scheme was developed to replace both systems as they confusingly used the same names to mean different things. The new system simply numbers the groups increasingly from left to right on the standard periodic table. The IUPAC proposal was first circulated in 1985 for public comments, and was later included as part of the 1990 edition of the Nomenclature of Inorganic Chemistry. Non-columnwise groups While groups are defined to be columns in the periodic table, as described above, there are also sets of elements named "group" that are not a column: Similar sets: noble metals, coinage metals, precious metals, refractory metals. References Further reading Periodic table
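To illustrate how the three numbering conventions line up, the correspondence sketched above can be written as a small lookup. The mapping below is reconstructed from general periodic-table conventions rather than quoted from a standard, so individual labels (for example, the old IUPAC use of "0" for group 18 and "VIII" for groups 8–10) should be checked against an authoritative table before being relied on.

```python
# Approximate correspondence between modern IUPAC group numbers (1-18),
# CAS (American) labels, and old IUPAC (European) labels.  Illustrative only.
GROUP_LABELS = {
    1: ("IA", "IA"),     2: ("IIA", "IIA"),
    3: ("IIIB", "IIIA"), 4: ("IVB", "IVA"),   5: ("VB", "VA"),
    6: ("VIB", "VIA"),   7: ("VIIB", "VIIA"),
    8: ("VIIIB", "VIII"), 9: ("VIIIB", "VIII"), 10: ("VIIIB", "VIII"),
    11: ("IB", "IB"),    12: ("IIB", "IIB"),
    13: ("IIIA", "IIIB"), 14: ("IVA", "IVB"), 15: ("VA", "VB"),
    16: ("VIA", "VIB"),  17: ("VIIA", "VIIB"), 18: ("VIIIA", "0"),
}

def describe_group(n):
    cas, old_iupac = GROUP_LABELS[n]
    return f"group {n}: CAS {cas}, old IUPAC {old_iupac}"

print(describe_group(14))   # group 14: CAS IVA, old IUPAC IVB
```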
Group (periodic table)
[ "Chemistry" ]
993
[ "Periodic table", "Groups (periodic table)" ]
86,350
https://en.wikipedia.org/wiki/Period%20%28periodic%20table%29
A period on the periodic table is a row of chemical elements. All elements in a row have the same number of electron shells. Each successive element in a period has one more proton and is less metallic than its predecessor. Arranged this way, elements in the same group (column) have similar chemical and physical properties, reflecting the periodic law. For example, the halogens lie in the second-to-last group (group 17) and share similar properties, such as high reactivity and the tendency to gain one electron to arrive at a noble-gas electronic configuration. To date, a total of 118 elements have been discovered and confirmed. Modern quantum mechanics explains these periodic trends in properties in terms of electron shells. As atomic number increases, shells fill with electrons in approximately the order shown in the ordering rule diagram. The filling of each shell corresponds to a row in the table. In the f-block and p-block of the periodic table, elements within the same period generally do not exhibit trends and similarities in properties (vertical trends down groups are more significant). However, in the d-block, trends across periods become significant, and in the f-block elements show a high degree of similarity across periods. Periods There are currently seven complete periods in the periodic table, comprising the 118 known elements. Any new elements will be placed into an eighth period; see extended periodic table. The elements are colour-coded below by their block: red for the s-block, yellow for the p-block, blue for the d-block, and green for the f-block. Period 1 The first period contains fewer elements than any other, with only two, hydrogen and helium. They therefore do not follow the octet rule, but rather a duplet rule. Chemically, helium behaves like a noble gas, and thus is taken to be part of the group 18 elements. However, in terms of its electronic structure it belongs to the s-block, and is therefore sometimes classified as a group 2 element, or simultaneously both 2 and 18. Hydrogen readily loses and gains an electron, and so behaves chemically as both a group 1 and a group 17 element. Hydrogen (H) is the most abundant of the chemical elements, constituting roughly 75% of the universe's elemental mass. Ionized hydrogen is just a proton. Stars in the main sequence are mainly composed of hydrogen in its plasma state. Elemental hydrogen is relatively rare on Earth, and is industrially produced from hydrocarbons such as methane. Hydrogen can form compounds with most elements and is present in water and most organic compounds. Helium (He) exists only as a gas except in extreme conditions. It is the second-lightest element and is the second-most abundant in the universe. Most helium was formed during the Big Bang, but new helium is created through nuclear fusion of hydrogen in stars. On Earth, helium is relatively rare, only occurring as a byproduct of the natural decay of some radioactive elements. Such 'radiogenic' helium is trapped within natural gas in concentrations of up to seven percent by volume. Period 2 Period 2 elements involve the 2s and 2p orbitals. They include the biologically most essential elements besides hydrogen: carbon, nitrogen, and oxygen. Lithium (Li) is the lightest metal and the least dense solid element. In its non-ionized state it is one of the most reactive elements, and so is only ever found naturally in compounds. It is the heaviest primordial element forged in large quantities during the Big Bang.
Beryllium (Be) has one of the highest melting points of all the light metals. Small amounts of beryllium were synthesised during the Big Bang, although most of it decayed or reacted further within stars to create larger nuclei, like carbon, nitrogen or oxygen. Beryllium is classified by the International Agency for Research on Cancer as a group 1 carcinogen. Between 1% and 15% of people are sensitive to beryllium and may develop an inflammatory reaction in their respiratory system and skin, called chronic beryllium disease. Boron (B) does not occur naturally as a free element, but in compounds such as borates. It is an essential plant micronutrient, required for cell wall strength and development, cell division, seed and fruit development, sugar transport and hormone development, though high levels are toxic. Carbon (C) is the fourth-most abundant element in the universe by mass after hydrogen, helium and oxygen and is the second-most abundant element in the human body by mass after oxygen, the third-most abundant by number of atoms. There are an almost infinite number of compounds that contain carbon due to carbon's ability to form long stable chains of C—C bonds. All organic compounds, those essential for life, contain at least one atom of carbon; combined with hydrogen, oxygen, nitrogen, sulfur, and phosphorus, carbon is the basis of every important biological compound. Nitrogen (N) is found mainly as mostly inert diatomic gas, N2, which makes up 78% of the Earth's atmosphere by volume. It is an essential component of proteins and therefore of life. Oxygen (O) comprising 21% of the atmosphere by volume and is required for respiration by all (or nearly all) animals, as well as being the principal component of water. Oxygen is the third-most abundant element in the universe, and oxygen compounds dominate the Earth's crust. Fluorine (F) is the most reactive element in its non-ionized state, and so is never found that way in nature. Neon (Ne) is a noble gas used in neon lighting. Period 3 All period three elements occur in nature and have at least one stable isotope. All but the noble gas argon are essential to basic geology and biology. Sodium (Na) is an alkali metal. It is present in Earth's oceans in large quantities in the form of sodium chloride (table salt). Magnesium (Mg) is an alkaline earth metal. Magnesium ions are found in chlorophyll. Aluminium (Al) is a post-transition metal. It is the most abundant metal in the Earth's crust. Silicon (Si) is a metalloid. It is a semiconductor, making it the principal component in many integrated circuits. Silicon dioxide is the principal constituent of sand. As Carbon is to Biology, Silicon is to Geology. Phosphorus (P) is a nonmetal essential to DNA. It is highly reactive, and as such is never found in nature as a free element. Sulfur (S) is a nonmetal. It is found in two amino acids: cysteine and methionine. Chlorine (Cl) is a halogen. Since it is one of the most reactive elements, it is often found on the Earth's surface as sodium chloride. Its compounds used as a disinfectant, especially in swimming pools. Argon (Ar) is a noble gas, making it almost entirely nonreactive. Incandescent lamps are often filled with noble gases such as argon in order to preserve the filaments at high temperatures. Period 4 Period 4 includes the biologically essential elements potassium and calcium, and is the first period in the d-block with the lighter transition metals. 
These include iron, the heaviest element forged in main-sequence stars and a principal component of the Earth, as well as other important metals such as cobalt, nickel, and copper. Almost all have biological roles. Completing the fourth period are six p-block elements: gallium, germanium, arsenic, selenium, bromine, and krypton. Period 5 Period 5 has the same number of elements as period 4 and follows the same general structure but with one more post transition metal and one fewer nonmetal. Of the three heaviest elements with biological roles, two (molybdenum and iodine) are in this period; tungsten, in period 6, is heavier, along with several of the early lanthanides. Period 5 also includes technetium, the lightest exclusively radioactive element. Period 6 Period 6 is the first period to include the f-block, with the lanthanides (also known as the rare earth elements), and includes the heaviest stable elements. Many of these heavy metals are toxic and some are radioactive, but platinum and gold are largely inert. Period 7 All elements of period 7 are radioactive. This period contains the heaviest element which occurs naturally on Earth, plutonium. All of the subsequent elements in the period have been synthesized artificially. Whilst five of these (from americium to einsteinium) are now available in macroscopic quantities, most are extremely rare, having only been prepared in microgram amounts or less. Some of the later elements have only ever been identified in laboratories in quantities of a few atoms at a time. Although the rarity of many of these elements means that experimental results are not very extensive, periodic and group trends in behaviour appear to be less well defined for period 7 than for other periods. Whilst francium and radium do show typical properties of groups 1 and 2, respectively, the actinides display a much greater variety of behaviour and oxidation states than the lanthanides. These peculiarities of period 7 may be due to a variety of factors, including a large degree of spin–orbit coupling and relativistic effects, ultimately caused by the very high positive electrical charge from their massive atomic nuclei. Period 8 No element of the eighth period has yet been synthesized. A g-block is predicted. It is not clear if all elements predicted for the eighth period are in fact physically possible. Therefore, there may not be a ninth period. See also Group (periodic table) References Periodic table Period (periodic table)
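Because each of the seven periods described above closes with a noble gas, an element's period can be recovered from its atomic number alone. The short sketch below assumes only the period-ending atomic numbers (2, 10, 18, 36, 54, 86 and 118); the function name is illustrative and not part of any standard library.

```python
# Atomic numbers of the noble gases that close periods 1-7.
PERIOD_ENDS = [2, 10, 18, 36, 54, 86, 118]

def period_of(atomic_number):
    """Return the period (row) of an element from its atomic number."""
    if not 1 <= atomic_number <= 118:
        raise ValueError("only elements 1-118 have been confirmed")
    for period, last in enumerate(PERIOD_ENDS, start=1):
        if atomic_number <= last:
            return period

print(period_of(1))    # 1 (hydrogen)
print(period_of(26))   # 4 (iron)
print(period_of(92))   # 7 (uranium)
```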
Period (periodic table)
[ "Chemistry" ]
1,998
[ "Periodic table", "Periods (periodic table)" ]
87,019
https://en.wikipedia.org/wiki/Ductility
Ductility refers to the ability of a material to sustain significant plastic deformation before fracture. Plastic deformation is the permanent distortion of a material under applied stress, as opposed to elastic deformation, which is reversible upon removing the stress. Ductility is a critical mechanical performance indicator, particularly in applications that require materials to bend, stretch, or deform in other ways without breaking. The extent of ductility can be quantitatively assessed using the percent elongation at break, given by the equation: %EL = ((lf − l0) / l0) × 100, where lf is the length of the material after fracture and l0 is the original length before testing. This formula helps in quantifying how much a material can stretch under tensile stress before failure, providing key insights into its ductile behavior. Ductility is an important consideration in engineering and manufacturing. It defines a material's suitability for certain manufacturing operations (such as cold working) and its capacity to absorb mechanical overload like in an engine. Some metals that are generally described as ductile include gold and copper, while platinum is the most ductile of all metals in pure form. However, not all metals experience ductile failure as some can be characterized with brittle failure like cast iron. Polymers generally can be viewed as ductile materials as they typically allow for plastic deformation. Inorganic materials, including a wide variety of ceramics and semiconductors, are generally characterized by their brittleness. This brittleness primarily stems from their strong ionic or covalent bonds, which maintain the atoms in a rigid, densely packed arrangement. Such a rigid lattice structure restricts the movement of atoms or dislocations, essential for plastic deformation. The significant difference in ductility observed between metals and inorganic semiconductor or insulator can be traced back to each material's inherent characteristics, including the nature of their defects, such as dislocations, and their specific chemical bonding properties. Consequently, unlike ductile metals and some organic materials with ductility (%EL) from 1.2% to over 1200%, brittle inorganic semiconductors and ceramic insulators typically show much smaller ductility at room temperature. Malleability, a similar mechanical property, is characterized by a material's ability to deform plastically without failure under compressive stress. Historically, materials were considered malleable if they were amenable to forming by hammering or rolling. Lead is an example of a material which is relatively malleable but not ductile. Materials science Ductility is especially important in metalworking, as materials that crack, break or shatter under stress cannot be manipulated using metal-forming processes such as hammering, rolling, drawing or extruding. Malleable materials can be formed cold using stamping or pressing, whereas brittle materials may be cast or thermoformed. High degrees of ductility occur due to metallic bonds, which are found predominantly in metals; this leads to the common perception that metals are ductile in general. In metallic bonds valence shell electrons are delocalized and shared between many atoms. The delocalized electrons allow metal atoms to slide past one another without being subjected to strong repulsive forces that would cause other materials to shatter. The ductility of steel varies depending on the alloying constituents. Increasing the levels of carbon decreases ductility.
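As a small worked illustration of the percent-elongation formula above (and of the reduction-of-area measure defined in the next section), the sketch below evaluates both quantities from before-and-after gauge measurements; the numerical values are invented for the example and do not describe any particular alloy.

```python
def percent_elongation(l0, lf):
    """Percent elongation at break: (lf - l0) / l0 * 100."""
    return (lf - l0) / l0 * 100.0

def percent_reduction_in_area(a0, af):
    """Percent reduction of the gauge cross-sectional area at fracture."""
    return (a0 - af) / a0 * 100.0

# Hypothetical tensile-test readings for a ductile metal specimen.
l0, lf = 50.0, 62.5        # gauge length in mm, before and after fracture
a0, af = 78.5, 47.1        # cross-sectional area in mm^2

print(f"%EL = {percent_elongation(l0, lf):.1f}%")           # %EL = 25.0%
print(f"%RA = {percent_reduction_in_area(a0, af):.1f}%")    # %RA = 40.0%
```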
Many plastics and amorphous solids, such as Play-Doh, are also malleable. The most ductile metal is platinum and the most malleable metal is gold. When highly stretched, such metals distort via formation, reorientation and migration of dislocations and crystal twins without noticeable hardening. Quantification Basic definitions The quantities commonly used to define ductility in a tension test are relative elongation (in percent) and reduction of area at fracture. Fracture strain is the engineering strain at which a test specimen fractures during a uniaxial tensile test. Percent elongation, or engineering strain at fracture, can be written as: %EL = ((lf − l0) / l0) × 100. Percent reduction in area can be written as: %RA = ((A0 − Af) / A0) × 100, where A0 is the original cross-sectional area and Af is the cross-sectional area at fracture; the area of concern is the cross-sectional area of the gauge of the specimen. According to Shigley's Mechanical Engineering Design, 'significant' denotes about 5.0 percent elongation. Effect of sample dimensions An important point concerning the value of the ductility (nominal strain at failure) in a tensile test is that it commonly exhibits a dependence on sample dimensions. However, a universal parameter should exhibit no such dependence (and, indeed, there is no dependence for properties such as stiffness, yield stress and ultimate tensile strength). This occurs because the measured strain (displacement) at fracture commonly incorporates contributions from both the uniform deformation occurring up to the onset of necking and the subsequent deformation of the neck (during which there is little or no deformation in the rest of the sample). The significance of the contribution from neck development depends on the "aspect ratio" (length / diameter) of the gauge length, being greater when the ratio is low. This is a simple geometric effect, which has been clearly identified. There have been both experimental studies and theoretical explorations of the effect, mostly based on Finite Element Method (FEM) modelling. Nevertheless, it is not universally appreciated and, since the range of sample dimensions in common use is quite wide, it can lead to highly significant variations (by factors of up to 2 or 3) in ductility values obtained for the same material in different tests. A more meaningful representation of ductility would be obtained by identifying the strain at the onset of necking, which should be independent of sample dimensions. This point can be difficult to identify on a (nominal) stress-strain curve, because the peak (representing the onset of necking) is often relatively flat. Moreover, some (brittle) materials fracture before the onset of necking, such that there is no peak. In practice, for many purposes it is preferable to carry out a different kind of test, designed to evaluate the toughness (energy absorbed during fracture), rather than use ductility values obtained in tensile tests. In an absolute sense, "ductility" values are therefore virtually meaningless. The actual (true) strain in the neck at the point of fracture bears no direct relation to the raw number obtained from the nominal stress-strain curve; the true strain in the neck is often considerably higher. Also, the true stress at the point of fracture is usually higher than the apparent value according to the plot. The load often drops while the neck develops, but the sectional area in the neck is also dropping (more sharply), so the true stress there is rising. There is no simple way of estimating this value, since it depends on the geometry of the neck.
While the true strain at fracture is a genuine indicator of "ductility", it cannot readily be obtained from a conventional tensile test. The Reduction in Area (RA) is defined as the decrease in sectional area at the neck (usually obtained by measurement of the diameter at one or both of the fractured ends), divided by the original sectional area. It is sometimes stated that this is a more reliable indicator of the "ductility" than the elongation at failure (partly in recognition of the fact that the latter is dependent on the aspect ratio of the gauge length, although this dependence is far from being universally appreciated). There is something in this argument, but the RA is still some way from being a genuinely meaningful parameter. One objection is that it is not easy to measure accurately, particularly with samples that are not circular in section. Rather more fundamentally, it is affected by both the uniform plastic deformation that took place before necking and by the development of the neck. Furthermore, it is sensitive to exactly what happens in the latter stages of necking, when the true strain is often becoming very high and the behavior is of limited significance in terms of a meaningful definition of strength (or toughness). There has again been extensive study of this issue. Ductile–brittle transition temperature Metals can undergo two different types of fractures: brittle fracture or ductile fracture. Failure propagation occurs faster in brittle materials due to the ability for ductile materials to undergo plastic deformation. Thus, ductile materials are able to sustain more stress due to their ability to absorb more energy prior to failure than brittle materials are. The plastic deformation results in the material following a modification of the Griffith equation, where the critical fracture stress increases due to the plastic work required to extend the crack adding to the work necessary to form the crack - work corresponding to the increase in surface energy that results from the formation of an addition crack surface. The plastic deformation of ductile metals is important as it can be a sign of the potential failure of the metal. Yet, the point at which the material exhibits a ductile behavior versus a brittle behavior is not only dependent on the material itself but also on the temperature at which the stress is being applied to the material. The temperature where the material changes from brittle to ductile or vice versa is crucial for the design of load-bearing metallic products. The minimum temperature at which the metal transitions from a brittle behavior to a ductile behavior, or from a ductile behavior to a brittle behavior, is known as the ductile-brittle transition temperature (DBTT). Below the DBTT, the material will not be able to plastically deform, and the crack propagation rate increases rapidly leading to the material undergoing brittle failure rapidly. Furthermore, DBTT is important since, once a material is cooled below the DBTT, it has a much greater tendency to shatter on impact instead of bending or deforming (low temperature embrittlement). Thus, the DBTT indicates the temperature at which, as temperature decreases, a material's ability to deform in a ductile manner decreases and so the rate of crack propagation drastically increases. In other words, solids are very brittle at very low temperatures, and their toughness becomes much higher at elevated temperatures. 
For more general applications, it is preferred to have a lower DBTT to ensure the material has a wider ductility range. This ensures that sudden cracks are inhibited so that failures in the metal body are prevented. It has been determined that the more slip systems a material has, the wider the range of temperatures ductile behavior is exhibited at. This is due to the slip systems allowing for more motion of dislocations when a stress is applied to the material. Thus, in materials with a lower amount of slip systems, dislocations are often pinned by obstacles leading to strain hardening, which increases the materials strength which makes the material more brittle. For this reason, FCC (face centered cubic) structures are ductile over a wide range of temperatures, BCC (body centered cubic) structures are ductile only at high temperatures, and HCP (hexagonal closest packed) structures are often brittle over wide ranges of temperatures. This leads to each of these structures having different performances as they approach failure (fatigue, overload, and stress cracking) under various temperatures, and shows the importance of the DBTT in selecting the correct material for a specific application. For example, zamak 3 exhibits good ductility at room temperature but shatters when impacted at sub-zero temperatures. DBTT is a very important consideration in selecting materials that are subjected to mechanical stresses. A similar phenomenon, the glass transition temperature, occurs with glasses and polymers, although the mechanism is different in these amorphous materials. The DBTT is also dependent on the size of the grains within the metal, as typically smaller grain size leads to an increase in tensile strength, resulting in an increase in ductility and decrease in the DBTT. This increase in tensile strength is due to the smaller grain sizes resulting in grain boundary hardening occurring within the material, where the dislocations require a larger stress to cross the grain boundaries and continue to propagate throughout the material. It has been shown that by continuing to refine ferrite grains to reduce their size, from 40 microns down to 1.3 microns, that it is possible to eliminate the DBTT entirely so that a brittle fracture never occurs in ferritic steel (as the DBTT required would be below absolute zero). In some materials, the transition is sharper than others and typically requires a temperature-sensitive deformation mechanism. For example, in materials with a body-centered cubic (bcc) lattice the DBTT is readily apparent, as the motion of screw dislocations is very temperature sensitive because the rearrangement of the dislocation core prior to slip requires thermal activation. This can be problematic for steels with a high ferrite content. This famously resulted in serious hull cracking in Liberty ships in colder waters during World War II, causing many sinkings. DBTT can also be influenced by external factors such as neutron radiation, which leads to an increase in internal lattice defects and a corresponding decrease in ductility and increase in DBTT. The most accurate method of measuring the DBTT of a material is by fracture testing. Typically four-point bend testing at a range of temperatures is performed on pre-cracked bars of polished material. Two fracture tests are typically utilized to determine the DBTT of specific metals: the Charpy V-Notch test and the Izod test. 
The Charpy V-notch test determines the impact energy absorption ability or toughness of the specimen by measuring the potential energy difference resulting from the collision between a mass on a free-falling pendulum and the machined V-shaped notch in the sample, resulting in the pendulum breaking through the sample. The DBTT is determined by repeating this test over a variety of temperatures and noting when the resulting fracture changes to a brittle behavior which occurs when the absorbed energy is dramatically decreased. The Izod test is essentially the same as the Charpy test, with the only differentiating factor being the placement of the sample; In the former the sample is placed vertically, while in the latter the sample is placed horizontally with respect to the bottom of the base. For experiments conducted at higher temperatures, dislocation activity increases. At a certain temperature, dislocations shield the crack tip to such an extent that the applied deformation rate is not sufficient for the stress intensity at the crack-tip to reach the critical value for fracture (KiC). The temperature at which this occurs is the ductile–brittle transition temperature. If experiments are performed at a higher strain rate, more dislocation shielding is required to prevent brittle fracture, and the transition temperature is raised. See also Deformation Work hardening, which improves ductility in uniaxial tension by delaying the onset of instability Strength of materials Further reading References External links Ductility definition at engineersedge.com DoITPoMS Teaching and Learning Package- "The Ductile-Brittle Transition Continuum mechanics Deformation (mechanics) Physical properties
Ductility
[ "Physics", "Materials_science", "Engineering" ]
3,014
[ "Physical phenomena", "Continuum mechanics", "Deformation (mechanics)", "Classical mechanics", "Materials science", "Physical properties" ]
87,027
https://en.wikipedia.org/wiki/Malleability%20%28cryptography%29
Malleability is a property of some cryptographic algorithms. An encryption algorithm is "malleable" if it is possible to transform a ciphertext into another ciphertext which decrypts to a related plaintext. That is, given an encryption of a plaintext m, it is possible to generate another ciphertext which decrypts to f(m), for a known function f, without necessarily knowing or learning m. Malleability is often an undesirable property in a general-purpose cryptosystem, since it allows an attacker to modify the contents of a message. For example, suppose that a bank uses a stream cipher to hide its financial information, and a user sends an encrypted message containing, say, an instruction to transfer a certain sum to a particular account. If an attacker can modify the message on the wire, and can guess the format of the unencrypted message, the attacker could change the amount of the transaction, or the recipient of the funds. Malleability does not refer to the attacker's ability to read the encrypted message. Both before and after tampering, the attacker cannot read the encrypted message. On the other hand, some cryptosystems are malleable by design. In other words, in some circumstances it may be viewed as a feature that anyone can transform an encryption of m into a valid encryption of f(m) (for some restricted class of functions f) without necessarily learning m. Such schemes are known as homomorphic encryption schemes. A cryptosystem may be semantically secure against chosen plaintext attacks or even non-adaptive chosen ciphertext attacks (CCA1) while still being malleable. However, security against adaptive chosen ciphertext attacks (CCA2) is equivalent to non-malleability. Example malleable cryptosystems In a stream cipher, the ciphertext is produced by taking the exclusive or of the plaintext m and a pseudorandom stream S(k) based on a secret key k, as E(m) = m ⊕ S(k). An adversary can construct an encryption of m ⊕ t for any t, as E(m) ⊕ t = (m ⊕ t) ⊕ S(k). In the RSA cryptosystem, a plaintext m is encrypted as E(m) = m^e mod n, where (e, n) is the public key. Given such a ciphertext, an adversary can construct an encryption of m·t for any t, as E(m)·t^e mod n = (m·t)^e mod n. For this reason, RSA is commonly used together with padding methods such as OAEP or PKCS1. In the ElGamal cryptosystem, a plaintext m is encrypted as E(m) = (g^r, m·h^r) for a random r, where g is the group generator and h is the public key. Given such a ciphertext (c1, c2), an adversary can compute (c1, t·c2), which is a valid encryption of t·m, for any t. In contrast, the Cramer-Shoup system (which is based on ElGamal) is not malleable. In the Paillier, ElGamal, and RSA cryptosystems, it is also possible to combine several ciphertexts together in a useful way to produce a related ciphertext. In Paillier, given only the public key and an encryption of m1 and m2, one can compute a valid encryption of their sum m1 + m2. In ElGamal and in RSA, one can combine encryptions of m1 and m2 to obtain a valid encryption of their product m1·m2. Block ciphers in the cipher block chaining mode of operation, for example, are partly malleable: flipping a bit in a ciphertext block will completely mangle the plaintext it decrypts to, but will result in the same bit being flipped in the plaintext of the next block. This allows an attacker to 'sacrifice' one block of plaintext in order to change some data in the next one, possibly managing to maliciously alter the message. This is essentially the core idea of the padding oracle attack on CBC, which allows the attacker to decrypt almost an entire ciphertext without knowing the key. For this and many other reasons, a message authentication code is required to guard against any method of tampering.
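A minimal sketch of the stream-cipher and unpadded-RSA cases in Python is given below. The keystream is a fixed placeholder and the RSA modulus is a deliberately tiny textbook key pair (p = 61, q = 53), chosen only to make the malleability visible; none of these parameters resemble a secure deployment, and the helper names are illustrative.

```python
# Stream-cipher malleability: flipping ciphertext bits flips the same
# plaintext bits, because E(m) = m XOR keystream.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes([0x5A] * 16)            # stands in for the pseudorandom stream S(k)
plaintext = b"PAY   100 TO A"
ciphertext = xor_bytes(plaintext, keystream)

# The attacker guesses the plaintext format and XORs in a chosen difference,
# never touching the key: E(m) XOR t decrypts to m XOR t.
delta = xor_bytes(b"PAY   100 TO A", b"PAY   900 TO A")
tampered = xor_bytes(ciphertext, delta)
print(xor_bytes(tampered, keystream))     # b'PAY   900 TO A'

# Textbook (unpadded) RSA: E(m) * t^e mod n is a valid encryption of m * t.
n, e, d = 3233, 17, 2753                  # toy key pair, n = 61 * 53
m, t = 42, 2
c = pow(m, e, n)
c_forged = (c * pow(t, e, n)) % n
print(pow(c_forged, d, n))                # 84, i.e. m * t
```

The decryption of the forged RSA ciphertext is shown only to verify the relation; a real attacker never holds the private exponent, which is exactly why padding (OAEP) or a message authentication code is needed in practice.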
Complete non-malleability Fischlin, in 2005, defined the notion of complete non-malleability as the ability of the system to remain non-malleable while giving the adversary additional power to choose a new public key which could be a function of the original public key. In other words, the adversary shouldn't be able to come up with a ciphertext whose underlying plaintext is related to the original message through a relation that also takes public keys into account. See also Homomorphic encryption References Cryptography Theory of cryptography
Malleability (cryptography)
[ "Mathematics", "Engineering" ]
908
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
87,175
https://en.wikipedia.org/wiki/Herd%20immunity
Herd immunity (also called herd effect, community immunity, population immunity, or mass immunity) is a form of indirect protection that applies only to contagious diseases. It occurs when a sufficient percentage of a population has become immune to an infection, whether through previous infections or vaccination, that the communicable pathogen cannot maintain itself in the population, its low incidence thereby reducing the likelihood of infection for individuals who lack immunity. Once the herd immunity has been reached, disease gradually disappears from a population and may result in eradication or permanent reduction of infections to zero if achieved worldwide. Herd immunity created via vaccination has contributed to the reduction of many diseases. Effects Protection of those without immunity Some individuals either cannot develop immunity after vaccination or for medical reasons cannot be vaccinated. Newborn infants are too young to receive many vaccines, either for safety reasons or because passive immunity renders the vaccine ineffective. Individuals who are immunodeficient due to HIV/AIDS, lymphoma, leukemia, bone marrow cancer, an impaired spleen, chemotherapy, or radiotherapy may have lost any immunity that they previously had and vaccines may not be of any use for them because of their immunodeficiency. A portion of those vaccinated may not develop long-term immunity. Vaccine contraindications may prevent certain individuals from being vaccinated. In addition to not being immune, individuals in one of these groups may be at a greater risk of developing complications from infection because of their medical status, but they may still be protected if a large enough percentage of the population is immune. High levels of immunity in one age group can create herd immunity for other age groups. Vaccinating adults against pertussis reduces pertussis incidence in infants too young to be vaccinated, who are at the greatest risk of complications from the disease. This is especially important for close family members, who account for most of the transmissions to young infants. In the same manner, children receiving vaccines against pneumococcus reduces pneumococcal disease incidence among younger, unvaccinated siblings. Vaccinating children against pneumococcus and rotavirus has had the effect of reducing pneumococcus- and rotavirus-attributable hospitalizations for older children and adults, who do not normally receive these vaccines. Influenza (flu) is more severe in the elderly than in younger age groups, but influenza vaccines lack effectiveness in this demographic due to a waning of the immune system with age. The prioritization of school-age children for seasonal flu immunization, which is more effective than vaccinating the elderly, however, has been shown to create a certain degree of protection for the elderly. For sexually transmitted infections (STIs), high levels of immunity in heterosexuals of one sex induces herd immunity for heterosexuals of both sexes. Vaccines against STIs that are targeted at heterosexuals of one sex result in significant declines in STIs in heterosexuals of both sexes if vaccine uptake in the target sex is high. Herd immunity from female vaccination does not, however, extend to males who have sex with males. High-risk behaviors make eliminating STIs difficult because, even though most infections occur among individuals with moderate risk, the majority of transmissions occur because of individuals who engage in high-risk behaviors. 
For this reason, in certain populations it may be necessary to immunize high-risk individuals regardless of sex. Evolutionary pressure and serotype replacement Herd immunity itself acts as an evolutionary pressure on pathogens, influencing viral evolution by encouraging the production of novel strains, referred to as escape mutants, that are able to evade herd immunity and infect previously immune individuals. The evolution of new strains is known as serotype replacement, or serotype shifting, as the prevalence of a specific serotype declines due to high levels of immunity, allowing other serotypes to replace it. At the molecular level, viruses escape from herd immunity through antigenic drift, which is when mutations accumulate in the portion of the viral genome that encodes for the virus's surface antigen, typically a protein of the virus capsid, producing a change in the viral epitope. Alternatively, the reassortment of separate viral genome segments, or antigenic shift, which is more common when there are more strains in circulation, can also produce new serotypes. When either of these occurs, memory T cells no longer recognize the virus, so people are not immune to the dominant circulating strain. For both influenza and norovirus, epidemics temporarily induce herd immunity until a new dominant strain emerges, causing successive waves of epidemics. As this evolution poses a challenge to herd immunity, broadly neutralizing antibodies and "universal" vaccines that can provide protection beyond a specific serotype are in development. Initial vaccines against Streptococcus pneumoniae significantly reduced nasopharyngeal carriage of vaccine serotypes (VTs), including antibiotic-resistant types, only to be entirely offset by increased carriage of non-vaccine serotypes (NVTs). This did not result in a proportionate increase in disease incidence though, since NVTs were less invasive than VTs. Since then, pneumococcal vaccines that provide protection from the emerging serotypes have been introduced and have successfully countered their emergence. The possibility of future shifting remains, so further strategies to deal with this include expansion of VT coverage and the development of vaccines that use either killed whole-cells, which have more surface antigens, or proteins present in multiple serotypes. Eradication of diseases If herd immunity has been established and maintained in a population for a sufficient time, the disease is inevitably eliminated: no more endemic transmissions occur. If elimination is achieved worldwide and the number of cases is permanently reduced to zero, then a disease can be declared eradicated. Eradication can thus be considered the final effect or end-result of public health initiatives to control the spread of contagious disease. In cases in which herd immunity is compromised, on the contrary, disease outbreaks among the unvaccinated population are likely to occur. The benefits of eradication include ending all morbidity and mortality caused by the disease, financial savings for individuals, health care providers, and governments, and enabling resources used to control the disease to be used elsewhere. To date, two diseases have been eradicated using herd immunity and vaccination: rinderpest and smallpox. Eradication efforts that rely on herd immunity are currently underway for poliomyelitis, though civil unrest and distrust of modern medicine have made this difficult. 
Mandatory vaccination may be beneficial to eradication efforts if not enough people choose to get vaccinated. Free riding Herd immunity is vulnerable to the free rider problem. Individuals who lack immunity, including those who choose not to vaccinate, free ride off the herd immunity created by those who are immune. As the number of free riders in a population increases, outbreaks of preventable diseases become more common and more severe due to loss of herd immunity. Individuals may choose to free ride or be hesitant to vaccinate for a variety of reasons, including the belief that vaccines are ineffective, or that the risks associated with vaccines are greater than those associated with infection, mistrust of vaccines or public health officials, bandwagoning or groupthinking, social norms or peer pressure, and religious beliefs. Individuals are also more likely to choose not to receive vaccines when vaccination rates are high enough to convince them that they may not need to be vaccinated, since a sufficient percentage of others are already immune. Mechanism Individuals who are immune to a disease act as a barrier in the spread of disease, slowing or preventing the transmission of disease to others. An individual's immunity can be acquired via a natural infection or through artificial means, such as vaccination. When a critical proportion of the population becomes immune, called the herd immunity threshold (HIT) or herd immunity level (HIL), the disease may no longer persist in the population, ceasing to be endemic. The theoretical basis for herd immunity generally assumes that vaccines induce solid immunity, that populations mix at random, that the pathogen does not evolve to evade the immune response, and that there is no non-human vector for the disease. Theoretical basis The critical value, or threshold, in a given population, is the point where the disease reaches an endemic steady state, which means that the infection level is neither growing nor declining exponentially. This threshold can be calculated from the effective reproduction number Re, which is obtained by taking the product of the basic reproduction number R0, the average number of new infections caused by each case in an entirely susceptible population that is homogeneous, or well-mixed, meaning each individual is equally likely to come into contact with any other susceptible individual in the population, and S, the proportion of the population who are susceptible to infection, and setting this product to be equal to 1: R0 × S = 1. S can be rewritten as (1 − p), where p is the proportion of the population that is immune so that p + S equals one. Then, the equation can be rearranged to place p by itself as follows: R0 × (1 − p) = 1, so (1 − p) = 1/R0 and p = 1 − 1/R0. With p by itself on the left side of the equation, it can be renamed as pc, representing the critical proportion of the population needed to be immune to stop the transmission of disease, which is the same as the "herd immunity threshold" HIT: pc = HIT = 1 − 1/R0. R0 functions as a measure of contagiousness, so low R0 values are associated with lower HITs, whereas higher R0s result in higher HITs. For example, the HIT for a disease with an R0 of 2 is theoretically only 50%, whereas for a disease with an R0 of 10 the theoretical HIT is 90%. When the effective reproduction number Re of a contagious disease is reduced to and sustained below 1 new individual per infection, the number of cases occurring in the population gradually decreases until the disease has been eliminated. 
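To make the threshold relation above concrete, the following short Python sketch (not part of the source article; the R0 values are illustrative assumptions only) computes the theoretical herd immunity threshold pc = 1 − 1/R0:

```python
def herd_immunity_threshold(r0: float) -> float:
    """Theoretical herd immunity threshold p_c = 1 - 1/R0 for R0 > 1."""
    if r0 <= 1:
        return 0.0  # a disease with R0 <= 1 fades out without any population immunity
    return 1.0 - 1.0 / r0

# Illustrative R0 values (assumed for demonstration, not measured data)
for r0 in (2.0, 10.0):
    print(f"R0 = {r0:g}: HIT = {herd_immunity_threshold(r0):.0%}")
# R0 = 2: HIT = 50%
# R0 = 10: HIT = 90%
```

The printed values reproduce the 50% and 90% figures quoted above for R0 values of 2 and 10.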
If a population is immune to a disease in excess of that disease's HIT, the number of cases reduces at a faster rate, outbreaks are even less likely to happen, and outbreaks that occur are smaller than they would be otherwise. If the population immunity falls below the herd immunity threshold, where the effective reproduction number increases to above 1, the population is said to have an "immunity gap", and then the disease is neither in a steady state nor decreasing in incidence, but is actively spreading through the population and infecting a larger number of people than usual. An assumption in these calculations is that populations are homogeneous, or well-mixed, meaning that every individual is equally likely to come into contact with any other individual, when in reality populations are better described as social networks as individuals tend to cluster together, remaining in relatively close contact with a limited number of other individuals. In these networks, transmission only occurs between those who are geographically or physically close to one another. The shape and size of a network are likely to alter a disease's HIT, making incidence either more or less common. Mathematical models can use contact matrices to estimate the likelihood of encounters and thus transmission. In heterogeneous populations, R0 is considered to be a measure of the number of cases generated by a "typical" contagious person, which depends on how individuals within a network interact with each other. Interactions within networks are more common than between networks, in which case the most highly connected networks transmit disease more easily, resulting in a higher R0 and a higher HIT than would be required in a less connected network. In networks that either opt not to become immune or are not immunized sufficiently, diseases may persist despite not existing in better-immunized networks. Overshoot The cumulative proportion of individuals who get infected during the course of a disease outbreak can exceed the HIT. This is because the HIT does not represent the point at which the disease stops spreading, but rather the point at which each infected person infects fewer than one additional person on average. When the HIT is reached, the number of additional infections does not immediately drop to zero. The excess of the cumulative proportion of infected individuals over the theoretical HIT is known as the overshoot (a short numerical sketch below illustrates this for a simple epidemic model). Boosts Vaccination The primary way to boost levels of immunity in a population is through vaccination. Vaccination was originally based on the observation that milkmaids exposed to cowpox were immune to smallpox, so the practice of inoculating people with the cowpox virus began as a way to prevent smallpox. Well-developed vaccines provide protection in a far safer way than natural infections, as vaccines generally do not cause the diseases they protect against and severe adverse effects are significantly less common than complications from natural infections. The immune system does not distinguish between natural infections and vaccines, forming an active response to both, so immunity induced via vaccination is similar to what would have occurred from contracting and recovering from the disease. To achieve herd immunity through vaccination, vaccine manufacturers aim to produce vaccines with low failure rates, and policy makers aim to encourage their use. 
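Returning to the overshoot discussed above, it can be illustrated numerically with a standard susceptible–infected–recovered (SIR) model. The sketch below is a minimal example and is not taken from the source article; the SIR final-size relation z = 1 − exp(−R0·z) and the R0 value are assumptions for illustration only:

```python
import math

def epidemic_final_size(r0: float, tol: float = 1e-10) -> float:
    """Solve the SIR final-size relation z = 1 - exp(-R0 * z) by fixed-point iteration."""
    z = 0.5  # initial guess for the cumulative infected fraction
    for _ in range(1000):
        z_new = 1.0 - math.exp(-r0 * z)
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z

r0 = 2.0  # assumed illustrative value
hit = 1.0 - 1.0 / r0
final = epidemic_final_size(r0)
print(f"HIT: {hit:.0%}, final size: {final:.0%}, overshoot: {final - hit:.0%}")
# HIT: 50%, final size: 80%, overshoot: 30%
```

For R0 = 2 the threshold is 50%, but roughly 80% of an entirely susceptible population is ultimately infected in an uncontrolled epidemic, an overshoot of about 30 percentage points.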
After the successful introduction and widespread use of a vaccine, sharp declines in the incidence of diseases it protects against can be observed, which decreases the number of hospitalizations and deaths caused by such diseases. Assuming a vaccine is 100% effective, the equation used for calculating the herd immunity threshold can be used for calculating the vaccination level needed to eliminate a disease, written as Vc, so that Vc = 1 − 1/R0. Vaccines are usually imperfect, however, so the effectiveness, E, of a vaccine must be accounted for: Vc = (1 − 1/R0)/E. From this equation, it can be observed that if E is less than (1 − 1/R0), then it is impossible to eliminate a disease, even if the entire population is vaccinated. Similarly, waning vaccine-induced immunity, as occurs with acellular pertussis vaccines, requires higher levels of booster vaccination to sustain herd immunity. If a disease has ceased to be endemic to a population, then natural infections no longer contribute to a reduction in the fraction of the population that is susceptible. Only vaccination contributes to this reduction. The relation between vaccine coverage and effectiveness and disease incidence can be shown by subtracting the product of the effectiveness of a vaccine and the proportion of the population that is vaccinated, pv, from the herd immunity threshold equation as follows: (1 − 1/R0) − (E × pv). It can be observed from this equation that, all other things being equal ("ceteris paribus"), any increase in either vaccine coverage or vaccine effectiveness, including any increase in excess of a disease's HIT, further reduces the number of cases of a disease. The rate of decline in cases depends on a disease's R0, with diseases with lower R0 values experiencing sharper declines. Vaccines usually have at least one contraindication for a specific population for medical reasons, but if both effectiveness and coverage are high enough then herd immunity can protect these individuals. Vaccine effectiveness is often, but not always, adversely affected by passive immunity, so additional doses are recommended for some vaccines while others are not administered until after an individual has lost his or her passive immunity. Passive immunity Individual immunity can also be gained passively, when antibodies to a pathogen are transferred from one individual to another. This can occur naturally, whereby maternal antibodies, primarily immunoglobulin G antibodies, are transferred across the placenta and in colostrum to fetuses and newborns. Passive immunity can also be gained artificially, when a susceptible person is injected with antibodies from the serum or plasma of an immune person. Protection generated from passive immunity is immediate, but wanes over the course of weeks to months, so any contribution to herd immunity is temporary. For diseases that are especially severe among fetuses and newborns, such as influenza and tetanus, pregnant women may be immunized in order to transfer antibodies to the child. In the same way, high-risk groups that are either more likely to experience infection, or are more likely to develop complications from infection, may receive antibody preparations to prevent these infections or to reduce the severity of symptoms. Cost–benefit analysis Herd immunity is often accounted for when conducting cost–benefit analyses of vaccination programs. It is regarded as a positive externality of high levels of immunity, producing an additional benefit of disease reduction that would not occur had no herd immunity been generated in the population. 
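As a worked illustration of the coverage relation just given (a sketch under stated assumptions, not a definitive epidemiological tool; the R0 and effectiveness values are hypothetical), the following Python snippet computes the critical vaccination coverage Vc = (1 − 1/R0)/E and reports when elimination is impossible:

```python
def critical_vaccination_coverage(r0, effectiveness):
    """Return Vc = (1 - 1/R0) / E, or None if elimination is impossible (E < 1 - 1/R0)."""
    hit = 1.0 - 1.0 / r0          # herd immunity threshold
    if effectiveness < hit:
        return None               # even 100% coverage cannot reach the threshold
    return hit / effectiveness

# Hypothetical example values: R0 = 5, vaccine effectiveness E = 0.9
coverage = critical_vaccination_coverage(5.0, 0.9)
if coverage is None:
    print("Elimination impossible even with full coverage")
else:
    print(f"Required coverage Vc = {coverage:.1%}")  # Required coverage Vc = 88.9%
```

With these assumed numbers the threshold is 80%, so a 90%-effective vaccine must reach roughly 89% of the population; had E been below 0.8, the function would report that elimination is impossible, matching the condition stated above.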
Therefore, herd immunity's inclusion in cost–benefit analyses results both in more favorable cost-effectiveness or cost–benefit ratios, and an increase in the number of disease cases averted by vaccination. Study designs done to estimate herd immunity's benefit include recording disease incidence in households with a vaccinated member, randomizing a population in a single geographic area to be vaccinated or not, and observing the incidence of disease before and after beginning a vaccination program. From these, it can be observed that disease incidence may decrease to a level beyond what can be predicted from direct protection alone, indicating that herd immunity contributed to the reduction. When serotype replacement is accounted for, it reduces the predicted benefits of vaccination. History Herd immunity was recognized as a naturally occurring phenomenon in the 1930s when it was observed that after a significant number of children had become immune to measles, the number of new infections temporarily decreased. Mass vaccination to induce herd immunity has since become common and proved successful in preventing the spread of many contagious diseases. Opposition to vaccination has posed a challenge to herd immunity, allowing preventable diseases to persist in or return to populations with inadequate vaccination rates. The exact herd immunity threshold (HIT) varies depending on the basic reproduction number of the disease. An example of a disease with a high threshold was the measles, with a HIT exceeding 95%. The term "herd immunity" was first used in 1894 by American veterinary scientist and then Chief of the Bureau of Animal Industry of the US Department of Agriculture Daniel Elmer Salmon to describe the healthy vitality and resistance to disease of well-fed herds of hogs. In 1916 veterinary scientists inside the same Bureau of Animal Industry used the term to refer to the immunity arising following recovery in cattle infected with brucellosis, also known as "contagious abortion." By 1923 it was being used by British bacteriologists to describe experimental epidemics with mice, experiments undertaken as part of efforts to model human epidemic disease. By the end of the 1920s the concept was used extensively - particularly among British scientists - to describe the build up of immunity in populations to diseases such as diphtheria, scarlet fever, and influenza. Herd immunity was recognized as a naturally occurring phenomenon in the 1930s when A. W. Hedrich published research on the epidemiology of measles in Baltimore, and took notice that after many children had become immune to measles, the number of new infections temporarily decreased, including among susceptible children. In spite of this knowledge, efforts to control and eliminate measles were unsuccessful until mass vaccination using the measles vaccine began in the 1960s. Mass vaccination, discussions of disease eradication, and cost–benefit analyses of vaccination subsequently prompted more widespread use of the term herd immunity. In the 1970s, the theorem used to calculate a disease's herd immunity threshold was developed. During the smallpox eradication campaign in the 1960s and 1970s, the practice of ring vaccination, to which herd immunity is integral, began as a way to immunize every person in a "ring" around an infected individual to prevent outbreaks from spreading. Since the adoption of mass and ring vaccination, complexities and challenges to herd immunity have arisen. 
Modeling of the spread of contagious disease originally made a number of assumptions, namely that entire populations are susceptible and well-mixed, which is not the case in reality, so more precise equations have been developed. In recent decades, it has been recognized that the dominant strain of a microorganism in circulation may change due to herd immunity, either because of herd immunity acting as an evolutionary pressure or because herd immunity against one strain allowed another already-existing strain to spread. Emerging or ongoing fears and controversies about vaccination have reduced or eliminated herd immunity in certain communities, allowing preventable diseases to persist in or return to these communities. See also Premunity Social distancing Notes References External links A visual simulation of herd immunity written by Shane Killian and modified by Robert Webb Herd immunity simulation Epidemiology Immunology Infection-control measures Vaccination Medical terminology
Herd immunity
[ "Biology", "Environmental_science" ]
4,200
[ "Epidemiology", "Immunology", "Vaccination", "Environmental social science" ]
87,310
https://en.wikipedia.org/wiki/Supercavitation
Supercavitation is the use of a cavitation bubble to reduce skin friction drag on a submerged object and enable high speeds. Applications include torpedoes and propellers, but in theory, the technique could be extended to an entire underwater vessel. Physical principle Cavitation is the formation of vapour bubbles in liquid caused by flow around an object. Bubbles form when water accelerates around sharp corners and the pressure drops below the vapour pressure. Pressure increases upon deceleration, and the water generally reabsorbs the vapour; however, vapour bubbles can implode and apply small concentrated impulses that may damage surfaces like ship propellers and pump impellers. The potential for vapour bubbles to form in a liquid is given by the nondimensional cavitation number. It equals local pressure minus vapour pressure, divided by dynamic pressure. At increasing depths (or pressures in piping), the potential for cavitation is lower because the difference between local pressure and vapour pressure is greater. A supercavitating object is a high-speed submerged object that is designed to initiate a cavitation bubble at its nose. The bubble extends (either naturally or augmented with internally generated gas) past the aft end of the object and prevents contact between the sides of the object and the liquid. This separation substantially reduces the skin friction drag on the supercavitating object. A key feature of the supercavitating object is the nose, which typically has a sharp edge around its perimeter to form the cavitation bubble. The nose may be articulated and shaped as a flat disk or cone. The shape of the supercavitating object is generally slender so the cavitation bubble encompasses the object. If the bubble is not long enough to encompass the object, especially at slower speeds, the bubble can be enlarged and extended by injecting high-pressure gas near the object's nose. The very high speed required for supercavitation can be temporarily reached by underwater-fired projectiles and projectiles entering water. For sustained supercavitation, rocket propulsion is used, and the high-pressure rocket gas can be routed to the nose to enhance the cavitation bubble. In principle, supercavitating objects can be maneuvered using various methods, including the following: Drag fins that project through the bubble into the surrounding liquid A tilted object nose Gas injected asymmetrically near the nose to distort the cavity's geometry Vectoring rocket thrust through gimbaling for a single nozzle Differential thrust from multiple nozzles Applications The Russian Navy developed the VA-111 Shkval supercavitation torpedo, which uses rocket propulsion and exceeds the speed of conventional torpedoes by at least a factor of five. NII-24 began development in 1960 under the code name "Шквал" (Squall). The VA-111 Shkval has been in service (exclusively in the Russian Navy) since 1977 with mass production starting in 1978. Several models were developed, with the most successful, the M-5, completed by 1972. From 1972 to 1977, over 300 test launches were conducted (95% of them on Issyk Kul lake). In 2006, German weapons manufacturer Diehl BGT Defence announced their own supercavitating torpedo, the Barracuda, now officially named (). According to Diehl, it reaches speeds greater than . In 1994, the United States Navy began development of the Rapid Airborne Mine Clearance System (RAMICS), a sea mine clearance system invented by C Tech Defense Corporation. 
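As a brief aside before the remaining application examples, the cavitation number described under Physical principle can be written as sigma = (p − pv) / (½·rho·v²). The short Python sketch below is an illustration only; the water properties and speed are assumed example values, not figures from the article:

```python
def cavitation_number(local_pressure_pa, vapour_pressure_pa, density_kg_m3, speed_m_s):
    """Cavitation number sigma = (p - p_v) / (0.5 * rho * v**2)."""
    dynamic_pressure = 0.5 * density_kg_m3 * speed_m_s ** 2
    return (local_pressure_pa - vapour_pressure_pa) / dynamic_pressure

# Assumed example: near-surface water at ~1 atm, vapour pressure ~2.3 kPa (about 20 degrees C),
# density ~998 kg/m^3, object speed 50 m/s
sigma = cavitation_number(101_325.0, 2_300.0, 998.0, 50.0)
print(f"sigma = {sigma:.3f}")  # a small sigma means cavitation is more likely to occur
```

Higher speeds raise the dynamic pressure and greater depths raise the local pressure, which is why supercavitating vehicles rely on very high speeds (or injected gas) to sustain the cavity, consistent with the description above.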
The system is based on a supercavitating projectile stable in both air and water. RAMICS projectiles have been produced in diameters of , , and . The projectile's terminal ballistic design enables the explosive destruction of sea mines as deep as with a single round. In 2000 at Aberdeen Proving Ground, RAMICS projectiles fired from a hovering Sea Cobra gunship successfully destroyed a range of live underwater mines. As of March 2009, Northrop Grumman had completed the initial phase of RAMICS testing for introduction into the fleet. Iran claimed to have successfully tested its first supercavitation torpedo, the Hoot (Whale), on 2–3 April 2006. Some sources have speculated it is based on the Russian VA-111 Shkval supercavitation torpedo, which travels at the same speed. Russian Foreign Minister Sergey Lavrov denied supplying Iran with the technology. In 2004, DARPA announced the Underwater Express program, a research and evaluation program to demonstrate the use of supercavitation for a high-speed underwater craft application. The US Navy's ultimate goal is a new class of underwater craft for littoral missions that can transport small groups of navy personnel or specialized military cargo at speeds up to 100 knots. DARPA awarded contracts to Northrop Grumman and General Dynamics Electric Boat in late 2006. In 2009, DARPA announced progress on a new class of submarine: a prototype ship named the Ghost uses supercavitation to propel itself atop two struts with sharpened edges. It was designed for stealth operations by Gregory Sancoff of Juliet Marine Systems. The vessel rides smoothly in choppy water and has reached speeds of 29 knots. The Chinese Navy and US Navy are reportedly working on their own supercavitating submarines using technical information obtained on the Russian VA-111 Shkval supercavitation torpedo. A supercavitating propeller uses supercavitation to reduce water skin friction and increase propeller speed. The design is used in military applications, high-performance racing boats, and model racing boats. It operates fully submerged with wedge-shaped blades to force cavitation on the entire forward face, starting at the leading edge. Since the cavity collapses well behind the blade, the supercavitating propeller avoids spalling damage caused by cavitation, which is a problem with conventional propellers. Supercavitating ammunition is used with German and Russian underwater firearms, and other similar weapons. Alleged incidents The Kursk submarine disaster was initially thought to have been caused by a faulty Shkval supercavitating torpedo, though later evidence points to a faulty 65-76 torpedo. See also Supercavitating torpedo "Shkval" supercavitating torpedo APS amphibious rifle SPP-1 underwater pistol Supercavitating propeller References Further reading Office of Naval Research (2004, June 14). Mechanics and energy conversion: high-speed (supercavitating) undersea weaponry (D&I). Retrieved April 12, 2006, from Office of Naval Research Home Page Savchenko Y. N. (n.d.). CAV 2001 - Fourth Annual Symposium on Cavitation - California Institute of Technology Retrieved April 9, 2006, archived at Wayback Machine Hargrove, J. (2003). Supercavitation and aerospace technology in the development of high-speed underwater vehicles. In 42nd AIAA Aerospace Sciences Meeting and Exhibit. Texas A&M University. Kirschner et al. (2001, October) Supercavitation research and development. Undersea Defense Technologies Miller, D. (1995). Supercavitation: going to war in a bubble. 
Jane's Intelligence Review. Retrieved Apr 14, 2006, from Defence & Security Intelligence & Analysis | Jane's 360 Graham-Rowe, Duncan (2000). Faster than a speeding bullet. New Scientist, 167(2248), 26–30. Tulin, M. P. (1963). Supercavitating flows - small perturbation theory. Laurel, Md, Hydronautics Inc. Niam, J. W. (Dec 2014). Numerical Simulation of Supercavitation External links Supercavitation Research Group at the University of Minnesota Diehl BGT Defence's "Barracuda" - a German supercavitating Torpedo DARPA Underwater Express Program Global Security.org on Supercavitation How to Build a Supercavitating Weapon, Scientific American Fluid dynamics
Supercavitation
[ "Chemistry", "Engineering" ]
1,633
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
87,410
https://en.wikipedia.org/wiki/Coral%20reef
A coral reef is an underwater ecosystem characterized by reef-building corals. Reefs are formed of colonies of coral polyps held together by calcium carbonate. Most coral reefs are built from stony corals, whose polyps cluster in groups. Coral belongs to the class Anthozoa in the animal phylum Cnidaria, which includes sea anemones and jellyfish. Unlike sea anemones, corals secrete hard carbonate exoskeletons that support and protect the coral. Most reefs grow best in warm, shallow, clear, sunny and agitated water. Coral reefs first appeared 485 million years ago, at the dawn of the Early Ordovician, displacing the microbial and sponge reefs of the Cambrian. Sometimes called rainforests of the sea, shallow coral reefs form some of Earth's most diverse ecosystems. They occupy less than 0.1% of the world's ocean area, about half the area of France, yet they provide a home for at least 25% of all marine species, including fish, mollusks, worms, crustaceans, echinoderms, sponges, tunicates and other cnidarians. Coral reefs flourish in ocean waters that provide few nutrients. They are most commonly found at shallow depths in tropical waters, but deep water and cold water coral reefs exist on smaller scales in other areas. Shallow tropical coral reefs have declined by 50% since 1950, partly because they are sensitive to water conditions. They are under threat from excess nutrients (nitrogen and phosphorus), rising ocean heat content and acidification, overfishing (e.g., from blast fishing, cyanide fishing, spearfishing on scuba), sunscreen use, and harmful land-use practices, including runoff and seeps (e.g., from injection wells and cesspools). Coral reefs deliver ecosystem services for tourism, fisheries and shoreline protection. The annual global economic value of coral reefs has been estimated at anywhere from US$30–375 billion (1997 and 2003 estimates) to US$2.7 trillion (a 2020 estimate) to US$9.9 trillion (a 2014 estimate). Though the shallow water tropical coral reefs are best known, there are also deeper water reef-forming corals, which live in colder water and in temperate seas. Formation Most coral reefs were formed after the Last Glacial Period when melting ice caused sea level to rise and flood continental shelves. Most coral reefs are less than 10,000 years old. As communities established themselves, the reefs grew upwards, pacing rising sea levels. Reefs that rose too slowly could become drowned, without sufficient light. Coral reefs are also found in the deep sea away from continental shelves, around oceanic islands and atolls. The majority of these islands are volcanic in origin. Others have tectonic origins where plate movements lifted the deep ocean floor. In The Structure and Distribution of Coral Reefs, Charles Darwin set out his theory of the formation of atoll reefs, an idea he conceived during the voyage of the Beagle. He theorized that uplift and subsidence of Earth's crust under the oceans formed the atolls. Darwin set out a sequence of three stages in atoll formation. A fringing reef forms around an extinct volcanic island as the island and ocean floor subside. As the subsidence continues, the fringing reef becomes a barrier reef and ultimately an atoll reef. Darwin predicted that underneath each lagoon would be a bedrock base, the remains of the original volcano. Subsequent research supported this hypothesis. 
Darwin's theory followed from his understanding that coral polyps thrive in the tropics where the water is agitated, but can only live within a limited depth range, starting just below low tide. Where the level of the underlying earth allows, the corals grow around the coast to form fringing reefs, and can eventually grow to become a barrier reef. Where the bottom is rising, fringing reefs can grow around the coast, but coral raised above sea level dies. If the land subsides slowly, the fringing reefs keep pace by growing upwards on a base of older, dead coral, forming a barrier reef enclosing a lagoon between the reef and the land. A barrier reef can encircle an island, and once the island sinks below sea level a roughly circular atoll of growing coral continues to keep up with the sea level, forming a central lagoon. Barrier reefs and atolls do not usually form complete circles but are broken in places by storms. Like sea level rise, a rapidly subsiding bottom can overwhelm coral growth, killing the coral and the reef, due to what is called coral drowning. Corals that rely on zooxanthellae can die when the water becomes too deep for their symbionts to adequately photosynthesize, due to decreased light exposure. The two main variables determining the geomorphology, or shape, of coral reefs are the nature of the substrate on which they rest, and the history of the change in sea level relative to that substrate. The approximately 20,000-year-old Great Barrier Reef offers an example of how coral reefs formed on continental shelves. Sea level was then lower than in the 21st century. As sea level rose, the water and the corals encroached on what had been hills of the Australian coastal plain. By 13,000 years ago, sea level had risen to lower than at present, and many hills of the coastal plains had become continental islands. As sea level rise continued, water topped most of the continental islands. The corals could then overgrow the hills, forming cays and reefs. Sea level on the Great Barrier Reef has not changed significantly in the last 6,000 years. The age of living reef structure is estimated to be between 6,000 and 8,000 years. Although the Great Barrier Reef formed along a continental shelf, and not around a volcanic island, Darwin's principles apply. Development stopped at the barrier reef stage, since Australia is not about to submerge. It formed the world's largest barrier reef, from shore, stretching for . Healthy tropical coral reefs grow horizontally from per year, and grow vertically anywhere from per year; however, they grow only at depths shallower than because of their need for sunlight, and cannot grow above sea level. Material As the name implies, coral reefs are made up of coral skeletons from mostly intact coral colonies. As other chemical elements present in corals become incorporated into the calcium carbonate deposits, aragonite is formed. However, shell fragments and the remains of coralline algae such as the green-segmented genus Halimeda can add to the reef's ability to withstand damage from storms and other threats. Such mixtures are visible in structures such as Eniwetok Atoll. In the geologic past The times of maximum reef development were in the Middle Cambrian (513–501 Ma), Devonian (416–359 Ma) and Carboniferous (359–299 Ma), owing to extinct order Rugosa corals, and Late Cretaceous (100–66 Ma) and Neogene (23 Ma–present), owing to order Scleractinia corals. 
Not all reefs in the past were formed by corals: those in the Early Cambrian (542–513 Ma) resulted from calcareous algae and archaeocyathids (small animals with conical shape, probably related to sponges) and in the Late Cretaceous (100–66 Ma), when reefs formed by a group of bivalves called rudists existed; one of the valves formed the main conical structure and the other, much smaller valve acted as a cap. Measurements of the oxygen isotopic composition of the aragonitic skeleton of coral reefs, such as Porites, can indicate changes in sea surface temperature and sea surface salinity conditions during the growth of the coral. This technique is often used by climate scientists to infer a region's paleoclimate. Types Since Darwin's identification of the three classical reef formations – the fringing reef around a volcanic island becoming a barrier reef and then an atoll – scientists have identified further reef types. While some sources find only three, Thomas lists "Four major forms of large-scale coral reefs" – the fringing reef, barrier reef, atoll and table reef based on Stoddart, D.R. (1969). Spalding et al. list four main reef types that can be clearly illustrated – the fringing reef, barrier reef, atoll, and "bank or platform reef"—and notes that many other structures exist which do not conform easily to strict definitions, including the "patch reef". Fringing reef A fringing reef, also called a shore reef, is directly attached to a shore, or borders it with an intervening narrow, shallow channel or lagoon. It is the most common reef type. Fringing reefs follow coastlines and can extend for many kilometres. They are usually less than 100 metres wide, but some are hundreds of metres wide. Fringing reefs are initially formed on the shore at the low water level and expand seawards as they grow in size. The final width depends on where the sea bed begins to drop steeply. The surface of the fringe reef generally remains at the same height: just below the waterline. In older fringing reefs, whose outer regions pushed far out into the sea, the inner part is deepened by erosion and eventually forms a lagoon. Fringing reef lagoons can become over 100 metres wide and several metres deep. Like the fringing reef itself, they run parallel to the coast. The fringing reefs of the Red Sea are "some of the best developed in the world" and occur along all its shores except off sandy bays. Barrier reef Barrier reefs are separated from a mainland or island shore by a deep channel or lagoon. They resemble the later stages of a fringing reef with its lagoon but differ from the latter mainly in size and origin. Their lagoons can be several kilometres wide and 30 to 70 metres deep. Above all, the offshore outer reef edge formed in open water rather than next to a shoreline. Like an atoll, it is thought that these reefs are formed either as the seabed lowered or sea level rose. Formation takes considerably longer than for a fringing reef, thus barrier reefs are much rarer. The best known and largest example of a barrier reef is the Australian Great Barrier Reef. Other major examples are the Mesoamerican Barrier Reef System and the New Caledonian Barrier Reef. Barrier reefs are also found on the coasts of Providencia, Mayotte, the Gambier Islands, on the southeast coast of Kalimantan, on parts of the coast of Sulawesi, southeastern New Guinea and the south coast of the Louisiade Archipelago. 
Platform reef Platform reefs, variously called bank or table reefs, can form on the continental shelf, as well as in the open ocean, in fact anywhere where the seabed rises close enough to the surface of the ocean to enable the growth of zooxanthellate, reef-forming corals. Platform reefs are found in the southern Great Barrier Reef, the Swain and Capricorn Group on the continental shelf, about 100–200 km from the coast. Some platform reefs of the northern Mascarenes are several thousand kilometres from the mainland. Unlike fringing and barrier reefs which extend only seaward, platform reefs grow in all directions. They are variable in size, ranging from a few hundred metres to many kilometres across. Their usual shape is oval to elongated. Parts of these reefs can reach the surface and form sandbanks and small islands around which may form fringing reefs. A lagoon may form in the middle of a platform reef. Platform reefs are typically situated within atolls, where they adopt the name "patch reefs" and often span a diameter of just a few dozen meters. In instances where platform reefs develop along elongated structures, such as old and weathered barrier reefs, they tend to arrange themselves in a linear formation. This is the case, for example, on the east coast of the Red Sea near Jeddah. In old platform reefs, the inner part can be so heavily eroded that it forms a pseudo-atoll. These can be distinguished from real atolls only by detailed investigation, possibly including core drilling. Some platform reefs of the Laccadives are U-shaped, due to wind and water flow. Atoll Atolls or atoll reefs are more or less circular or continuous barrier reefs that extend all the way around a lagoon without a central island. They are usually formed from fringing reefs around volcanic islands. Over time, the island erodes away and sinks below sea level. Atolls may also be formed by the sinking of the seabed or rising of the sea level. A ring of reefs results, which encloses a lagoon. Atolls are numerous in the South Pacific, where they usually occur in mid-ocean, for example, in the Caroline Islands, the Cook Islands, French Polynesia, the Marshall Islands and Micronesia. Atolls are found in the Indian Ocean, for example, in the Maldives, the Chagos Islands, the Seychelles and around Cocos Island. The entire Maldives consist of 26 atolls. Other reef types or variants Apron reef – short reef resembling a fringing reef, but more sloped; extending out and downward from a point or peninsular shore. The initial stage of a fringing reef. Bank reef – isolated, flat-topped reef larger than a patch reef and usually on mid-shelf regions and linear or semi-circular in shape; a type of platform reef. Patch reef – common, isolated, comparatively small reef outcrop, usually within a lagoon or embayment, often circular and surrounded by sand or seagrass. Can be considered as a type of platform reef or as features of fringing reefs, atolls and barrier reefs. The patches may be surrounded by a ring of reduced seagrass cover referred to as a grazing halo. Ribbon reef – long, narrow, possibly winding reef, usually associated with an atoll lagoon. Also called a shelf-edge reef or sill reef. 
Drying reef – a part of a reef which is above water at low tide but submerged at high tide Habili – reef specific to the Red Sea; does not reach near enough to the surface to cause visible surf; may be a hazard to ships (from the Arabic for "unborn") Microatoll – community of species of corals; vertical growth limited by average tidal height; growth morphologies offer a low-resolution record of patterns of sea level change; fossilized remains can be dated using radioactive carbon dating and have been used to reconstruct Holocene sea levels Cays – small, low-elevation, sandy islands formed on the surface of coral reefs from eroded material that piles up, forming an area above sea level; can be stabilized by plants to become habitable; occur in tropical environments throughout the Pacific, Atlantic and Indian Oceans (including the Caribbean and on the Great Barrier Reef and Belize Barrier Reef), where they provide habitable and agricultural land Seamount or guyot – formed when a coral reef on a volcanic island subsides; tops of seamounts are rounded and guyots are flat; flat tops of guyots, or tablemounts, are due to erosion by waves, winds, and atmospheric processes Zones Coral reef ecosystems contain distinct zones that host different kinds of habitats. Usually, three major zones are recognized: the fore reef, reef crest, and the back reef (frequently referred to as the reef lagoon). The three zones are physically and ecologically interconnected. Reef life and oceanic processes create opportunities for the exchange of seawater, sediments, nutrients and marine life. Most coral reefs exist in waters less than 50 m deep. Some inhabit tropical continental shelves where cool, nutrient-rich upwelling does not occur, such as the Great Barrier Reef. Others are found in the deep ocean surrounding islands or as atolls, such as in the Maldives. The reefs surrounding islands form when islands subside into the ocean, and atolls form when an island subsides below the surface of the sea. Alternatively, Moyle and Cech distinguish six zones, though most reefs possess only some of the zones. The reef surface is the shallowest part of the reef. It is subject to surge and tides. When waves pass over shallow areas, they shoal, as shown in the adjacent diagram. This means the water is often agitated. These are the precise condition under which corals flourish. The light is sufficient for photosynthesis by the symbiotic zooxanthellae, and agitated water brings plankton to feed the coral. The off-reef floor is the shallow sea floor surrounding a reef. This zone occurs next to reefs on continental shelves. Reefs around tropical islands and atolls drop abruptly to great depths and do not have such a floor. Usually sandy, the floor often supports seagrass meadows which are important foraging areas for reef fish. The reef drop-off is, for its first 50 m, habitat for reef fish who find shelter on the cliff face and plankton in the water nearby. The drop-off zone applies mainly to the reefs surrounding oceanic islands and atolls. The reef face is the zone above the reef floor or the reef drop-off. This zone is often the reef's most diverse area. Coral and calcareous algae provide complex habitats and areas that offer protection, such as cracks and crevices. Invertebrates and epiphytic algae provide much of the food for other organisms. A common feature on this forereef zone is spur and groove formations that serve to transport sediment downslope. 
The reef flat is the sandy-bottomed flat, which can be behind the main reef, containing chunks of coral. This zone may border a lagoon and serve as a protective area, or it may lie between the reef and the shore, and in this case is a flat, rocky area. Fish tend to prefer it when it is present. The reef lagoon is an entirely enclosed region, which creates an area less affected by wave action and often contains small reef patches. However, the topography of coral reefs is constantly changing. Each reef is made up of irregular patches of algae, sessile invertebrates, and bare rock and sand. The size, shape and relative abundance of these patches change from year to year in response to the various factors that favor one type of patch over another. Growing coral, for example, produces constant change in the fine structure of reefs. On a larger scale, tropical storms may knock out large sections of reef and cause boulders on sandy areas to move. Locations Coral reefs are estimated to cover 284,300 km2 (109,800 sq mi), just under 0.1% of the oceans' surface area. The Indo-Pacific region (including the Red Sea, Indian Ocean, Southeast Asia and the Pacific) account for 91.9% of this total. Southeast Asia accounts for 32.3% of that figure, while the Pacific including Australia accounts for 40.8%. Atlantic and Caribbean coral reefs account for 7.6%. Although corals exist both in temperate and tropical waters, shallow-water reefs form only in a zone extending from approximately 30° N to 30° S of the equator. Tropical corals do not grow at depths of over . The optimum temperature for most coral reefs is , and few reefs exist in waters below . When the net production by reef building corals no longer keeps pace with relative sea level and the reef structure permanently drowns a Darwin Point is reached. One such point exists at the northwestern end of the Hawaiian Archipelago; see Evolution of Hawaiian volcanoes#Coral atoll stage. However, reefs in the Persian Gulf have adapted to temperatures of in winter and in summer. 37 species of scleractinian corals inhabit such an environment around Larak Island. Deep-water coral inhabits greater depths and colder temperatures at much higher latitudes, as far north as Norway. Although deep water corals can form reefs, little is known about them. The northernmost coral reef on Earth is located near Eilat, Israel. Coral reefs are rare along the west coasts of the Americas and Africa, due primarily to upwelling and strong cold coastal currents that reduce water temperatures in these areas (the Humboldt, Benguela, and Canary Currents, respectively). Corals are seldom found along the coastline of South Asia—from the eastern tip of India (Chennai) to the Bangladesh and Myanmar borders—as well as along the coasts of northeastern South America and Bangladesh, due to the freshwater release from the Amazon and Ganges Rivers respectively. 
Significant coral reefs include: The Great Barrier Reef—largest, comprising over 2,900 individual reefs and 900 islands stretching for over off Queensland, Australia The Mesoamerican Barrier Reef System—second largest, stretching from Isla Contoy at the tip of the Yucatán Peninsula down to the Bay Islands of Honduras The New Caledonia Barrier Reef—second longest double barrier reef, covering The Andros, Bahamas Barrier Reef—third largest, following the east coast of Andros Island, Bahamas, between Andros and Nassau The Red Sea—includes 6,000-year-old fringing reefs located along a coastline The Florida Reef Tract—largest continental US reef and the third-largest coral barrier reef, extends from Soldier Key, located in Biscayne Bay, to the Dry Tortugas in the Gulf of Mexico Blake Plateau has the world's largest known deep-water coral reef, comprising a 6.4 million acre reef that stretches from Miami to Charleston, S. C. Its discovery was announced in January 2024. Pulley Ridge—deepest photosynthetic coral reef, Florida Numerous reefs around the Maldives The Philippines coral reef area, the second-largest in Southeast Asia, is estimated at 26,000 square kilometres. 915 reef fish species and more than 400 scleractinian coral species, 12 of which are endemic are found there. The Raja Ampat Islands in Indonesia's Southwest Papua province offer the highest known marine diversity. Bermuda is known for its northernmost coral reef system, located at . The presence of coral reefs at this high latitude is due to the proximity of the Gulf Stream. Bermuda coral species represent a subset of those found in the greater Caribbean. The world's northernmost individual coral reef is located in the Finlayson Channel, in the inside passage of British Columbia, Canada. The world's southernmost coral reef is at Lord Howe Island, in the Pacific Ocean off the east coast of Australia. Coral When alive, corals are colonies of small animals embedded in calcium carbonate shells. Coral heads consist of accumulations of individual animals called polyps, arranged in diverse shapes. Polyps are usually tiny, but they can range in size from a pinhead to across. Reef-building or hermatypic corals live only in the photic zone (above 70 m), the depth to which sufficient sunlight penetrates the water. Zooxanthellae Coral polyps do not photosynthesize, but have a symbiotic relationship with microscopic algae (dinoflagellates) of the genus Symbiodinium, commonly referred to as zooxanthellae. These organisms live within the polyps' tissues and provide organic nutrients that nourish the polyp in the form of glucose, glycerol and amino acids. Because of this relationship, coral reefs grow much faster in clear water, which admits more sunlight. Without their symbionts, coral growth would be too slow to form significant reef structures. Corals get up to 90% of their nutrients from their symbionts. In return, as an example of mutualism, the corals shelter the zooxanthellae, averaging one million for every cubic centimetre of coral, and provide a constant supply of the carbon dioxide they need for photosynthesis. The varying pigments in different species of zooxanthellae give them an overall brown or golden-brown appearance and give brown corals their colors. Other pigments such as reds, blues, greens, etc. come from colored proteins made by the coral animals. 
Coral that loses a large fraction of its zooxanthellae becomes white (or sometimes pastel shades in corals that are pigmented with their own proteins) and is said to be bleached, a condition which, unless corrected, can kill the coral. There are eight clades of Symbiodinium phylotypes. Most research has been conducted on clades A–D. Each clade contributes their own benefits as well as less compatible attributes to the survival of their coral hosts. Each photosynthetic organism has a specific level of sensitivity to photodamage to compounds needed for survival, such as proteins. Rates of regeneration and replication determine the organism's ability to survive. Phylotype A is found more in the shallow waters. It is able to produce mycosporine-like amino acids that are UV resistant, using a derivative of glycerin to absorb the UV radiation and allowing them to better adapt to warmer water temperatures. In the event of UV or thermal damage, if and when repair occurs, it will increase the likelihood of survival of the host and symbiont. This leads to the idea that, evolutionarily, clade A is more UV resistant and thermally resistant than the other clades. Clades B and C are found more frequently in deeper water, which may explain their higher vulnerability to increased temperatures. Terrestrial plants that receive less sunlight because they are found in the undergrowth are analogous to clades B, C, and D. Since clades B through D are found at deeper depths, they require an elevated light absorption rate to be able to synthesize as much energy. With elevated absorption rates at UV wavelengths, these phylotypes are more prone to coral bleaching versus the shallow clade A. Clade D has been observed to be high temperature-tolerant, and has a higher rate of survival than clades B and C during modern bleaching events. Skeleton Reefs grow as polyps and other organisms deposit calcium carbonate, the basis of coral, as a skeletal structure beneath and around themselves, pushing the coral head's top upwards and outwards. Waves, grazing fish (such as parrotfish), sea urchins, sponges and other forces and organisms act as bioeroders, breaking down coral skeletons into fragments that settle into spaces in the reef structure or form sandy bottoms in associated reef lagoons. Typical shapes for coral species are named by their resemblance to terrestrial objects such as wrinkled brains, cabbages, table tops, antlers, wire strands and pillars. These shapes can depend on the life history of the coral, like light exposure and wave action, and events such as breakages. Reproduction Corals reproduce both sexually and asexually. An individual polyp uses both reproductive modes within its lifetime. Corals reproduce sexually by either internal or external fertilization. The reproductive cells are found on the mesenteries, membranes that radiate inward from the layer of tissue that lines the stomach cavity. Some mature adult corals are hermaphroditic; others are exclusively male or female. A few species change sex as they grow. Internally fertilized eggs develop in the polyp for a period ranging from days to weeks. Subsequent development produces a tiny larva, known as a planula. Externally fertilized eggs develop during synchronized spawning. Polyps across a reef simultaneously release eggs and sperm into the water en masse. Spawn disperse over a large area. The timing of spawning depends on time of year, water temperature, and tidal and lunar cycles. 
Spawning is most successful given little variation between high and low tide. The less water movement, the better the chance for fertilization. The release of eggs or planula usually occurs at night and is sometimes in phase with the lunar cycle (three to six days after a full moon). The period from release to settlement lasts only a few days, but some planulae can survive afloat for several weeks. During this process, the larvae may use several different cues to find a suitable location for settlement. At long distances sounds from existing reefs are likely important, while at short distances chemical compounds become important. The larvae are vulnerable to predation and environmental conditions. The lucky few planulae that successfully attach to substrate then compete for food and space. Gallery of reef-building corals Other reef builders Corals are the most prodigious reef-builders. However many other organisms living in the reef community contribute skeletal calcium carbonate in the same manner as corals. These include coralline algae, some sponges and bivalves. Reefs are always built by the combined efforts of these different phyla, with different organisms leading reef-building in different geological periods. Coralline algae Coralline algae are important contributors to reef structure. Although their mineral deposition rates are much slower than corals, they are more tolerant of rough wave-action, and so help to create a protective crust over those parts of the reef subjected to the greatest forces by waves, such as the reef front facing the open ocean. They also strengthen the reef structure by depositing limestone in sheets over the reef surface. Sponges "Sclerosponge" is the descriptive name for all Porifera that build reefs. In the early Cambrian period, Archaeocyatha sponges were the world's first reef-building organisms, and sponges were the only reef-builders until the Ordovician. Sclerosponges still assist corals building modern reefs, but like coralline algae are much slower-growing than corals and their contribution is (usually) minor. In the northern Pacific Ocean cloud sponges still create deep-water mineral-structures without corals, although the structures are not recognizable from the surface like tropical reefs. They are the only extant organisms known to build reef-like structures in cold water. Bivalves Oyster reefs are dense aggregations of oysters living in colonial communities. Other regionally-specific names for these structures include oyster beds and oyster banks. Oyster larvae require a hard substrate or surface to attach on, which includes the shells of old or dead oysters. Thus reefs can build up over time as new larvae settle on older individuals. Crassostrea virginica were once abundant in Chesapeake Bay and shorelines bordering the Atlantic coastal plain until the late nineteenth century. Ostrea angasi is a species of flat oyster that had also formed large reefs in South Australia. Hippuritida, an extinct order of bivalves known as rudists, were major reef-building organisms during the Cretaceous. By the mid-Cretaceous, rudists became the dominant tropical reef-builders, becoming more numerous than scleractinian corals. During this period, ocean temperatures and saline levels—which corals are sensitive to—were higher than it is today, which may have contributed to the success of rudist reefs. Gastropods Some gastropods, like family Vermetidae, are sessile and cement themselves to the substrate, contributing to the reef building. 
Darwin's paradox In The Structure and Distribution of Coral Reefs, published in 1842, Darwin described how coral reefs were found in some tropical areas but not others, with no obvious cause. The largest and strongest corals grew in parts of the reef exposed to the most violent surf and corals were weakened or absent where loose sediment accumulated. Tropical waters contain few nutrients yet a coral reef can flourish like an "oasis in the desert". This has given rise to the ecosystem conundrum, sometimes called "Darwin's paradox": "How can such high production flourish in such nutrient poor conditions?" Coral reefs support over one-quarter of all marine species. This diversity results in complex food webs, with large predator fish eating smaller forage fish that eat yet smaller zooplankton and so on. However, all food webs eventually depend on plants, which are the primary producers. Coral reefs typically produce 5–10 grams of carbon per square meter per day (gC·m−2·day−1) biomass. One reason for the unusual clarity of tropical waters is their nutrient deficiency and drifting plankton. Further, the sun shines year-round in the tropics, warming the surface layer, making it less dense than subsurface layers. The warmer water is separated from deeper, cooler water by a stable thermocline, where the temperature makes a rapid change. This keeps the warm surface waters floating above the cooler deeper waters. In most parts of the ocean, there is little exchange between these layers. Organisms that die in aquatic environments generally sink to the bottom, where they decompose, which releases nutrients in the form of nitrogen (N), phosphorus (P) and potassium (K). These nutrients are necessary for plant growth, but in the tropics, they do not directly return to the surface. Plants form the base of the food chain and need sunlight and nutrients to grow. In the ocean, these plants are mainly microscopic phytoplankton which drift in the water column. They need sunlight for photosynthesis, which powers carbon fixation, so they are found only relatively near the surface, but they also need nutrients. Phytoplankton rapidly use nutrients in the surface waters, and in the tropics, these nutrients are not usually replaced because of the thermocline. Explanations Around coral reefs, lagoons fill in with material eroded from the reef and the island. They become havens for marine life, providing protection from waves and storms. Most importantly, reefs recycle nutrients, which happens much less in the open ocean. In coral reefs and lagoons, producers include phytoplankton, as well as seaweed and coralline algae, especially small types called turf algae, which pass nutrients to corals. The phytoplankton form the base of the food chain and are eaten by fish and crustaceans. Recycling reduces the nutrient inputs needed overall to support the community. Corals also absorb nutrients, including inorganic nitrogen and phosphorus, directly from water. Many corals extend their tentacles at night to catch zooplankton that pass near. Zooplankton provide the polyp with nitrogen, and the polyp shares some of the nitrogen with the zooxanthellae, which also require this element. Sponges live in crevices in the reefs. They are efficient filter feeders, and in the Red Sea they consume about 60% of the phytoplankton that drifts by. Sponges eventually excrete nutrients in a form that corals can use. The roughness of coral surfaces is key to coral survival in agitated waters. 
Normally, a boundary layer of still water surrounds a submerged object, which acts as a barrier. Waves breaking on the extremely rough edges of corals disrupt the boundary layer, allowing the corals access to passing nutrients. Turbulent water thereby promotes reef growth. Without the access to nutrients brought by rough coral surfaces, even the most effective recycling would not suffice. Deep nutrient-rich water entering coral reefs through isolated events may have significant effects on temperature and nutrient systems. This water movement disrupts the relatively stable thermocline that usually exists between warm shallow water and deeper colder water. Temperature regimes on coral reefs in the Bahamas and Florida are highly variable with temporal scales of minutes to seasons and spatial scales across depths. Water can pass through coral reefs in various ways, including current rings, surface waves, internal waves and tidal changes. Movement is generally created by tides and wind. As tides interact with varying bathymetry and wind mixes with surface water, internal waves are created. An internal wave is a gravity wave that moves along density stratification within the ocean. When a water parcel encounters a different density it oscillates and creates internal waves. While internal waves generally have a lower frequency than surface waves, they often form as a single wave that breaks into multiple waves as it hits a slope and moves upward. This vertical breakup of internal waves causes significant diapycnal mixing and turbulence. Internal waves can act as nutrient pumps, bringing plankton and cool nutrient-rich water to the surface. The irregular structure characteristic of coral reef bathymetry may enhance mixing and produce pockets of cooler water and variable nutrient content. Arrival of cool, nutrient-rich water from depths due to internal waves and tidal bores has been linked to growth rates of suspension feeders and benthic algae as well as plankton and larval organisms. The seaweed Codium isthmocladum reacts to deep water nutrient sources because their tissues have different concentrations of nutrients dependent upon depth. Aggregations of eggs, larval organisms and plankton on reefs respond to deep water intrusions. Similarly, as internal waves and bores move vertically, surface-dwelling larval organisms are carried toward the shore. This has significant biological importance to cascading effects of food chains in coral reef ecosystems and may provide yet another key to unlocking the paradox. Cyanobacteria provide soluble nitrates via nitrogen fixation. Coral reefs often depend on surrounding habitats, such as seagrass meadows and mangrove forests, for nutrients. Seagrass and mangroves supply dead plants and animals that are rich in nitrogen and serve to feed fish and animals from the reef by supplying wood and vegetation. Reefs, in turn, protect mangroves and seagrass from waves and produce sediment in which the mangroves and seagrass can root. Biodiversity Coral reefs form some of the world's most productive ecosystems, providing complex and varied marine habitats that support a wide range of organisms. 
Fringing reefs just below low tide level have a mutually beneficial relationship with mangrove forests at high tide level and sea grass meadows in between: the reefs protect the mangroves and seagrass from strong currents and waves that would damage them or erode the sediments in which they are rooted, while the mangroves and sea grass protect the coral from large influxes of silt, fresh water and pollutants. This level of variety in the environment benefits many coral reef animals, which, for example, may feed in the sea grass and use the reefs for protection or breeding. Reefs are home to a variety of animals, including fish, seabirds, sponges, cnidarians (which includes some types of corals and jellyfish), worms, crustaceans (including shrimp, cleaner shrimp, spiny lobsters and crabs), mollusks (including cephalopods), echinoderms (including starfish, sea urchins and sea cucumbers), sea squirts, sea turtles and sea snakes. Aside from humans, mammals are rare on coral reefs, with visiting cetaceans such as dolphins the main exception. A few species feed directly on corals, while others graze on algae on the reef. Reef biomass is positively related to species diversity. The same hideouts in a reef may be regularly inhabited by different species at different times of day. Nighttime predators such as cardinalfish and squirrelfish hide during the day, while damselfish, surgeonfish, triggerfish, wrasses and parrotfish hide from eels and sharks. The great number and diversity of hiding places in coral reefs, i.e. refuges, are the most important factor causing the great diversity and high biomass of the organisms in coral reefs. Coral reefs also have a very high degree of microorganism diversity compared to other environments. Algae Reefs are chronically at risk of algal encroachment. Overfishing and excess nutrient supply from onshore can enable algae to outcompete and kill the coral. Increased nutrient levels can be a result of sewage or chemical fertilizer runoff. Runoff can carry nitrogen and phosphorus which promote excess algae growth. Algae can sometimes out-compete the coral for space. The algae can then smother the coral by decreasing the oxygen supply available to the reef. Decreased oxygen levels can slow down calcification rates, weakening the coral and leaving it more susceptible to disease and degradation. Algae inhabit a large percentage of surveyed coral locations. The algal population consists of turf algae, coralline algae and macro algae. Some sea urchins (such as Diadema antillarum) eat these algae and could thus decrease the risk of algal encroachment. Sponges Sponges are essential for the functioning of the coral reef system. Algae and corals in coral reefs produce organic material. This is filtered through sponges which convert this organic material into small particles which in turn are absorbed by algae and corals. Although sponges are essential to the reef system, they are quite different from corals: corals are complex, many-celled animals, whereas sponges are very simple organisms without true tissues. The two groups are alike in being sessile aquatic invertebrates, but otherwise they differ greatly. Sea sponges occur in many species, shapes and sizes, each with its own characteristics; reef-associated forms include the tube sponge, vase sponge, yellow sponge and bright red tree sponge. 
Medicinal Qualities of Sea Sponges- Sea sponges have provided the base for many life saving medications. Scientists began to study them in the 1940s and after a few years, discovered that sea sponges contain properties that can stop viral infections. The first drug developed from sea sponges was released in 1969. Fish Over 4,000 species of fish inhabit coral reefs. The reasons for this diversity remain unclear. Hypotheses include the "lottery", in which the first (lucky winner) recruit to a territory is typically able to defend it against latecomers, "competition", in which adults compete for territory, and less-competitive species must be able to survive in poorer habitat, and "predation", in which population size is a function of postsettlement piscivore mortality. Healthy reefs can produce up to 35 tons of fish per square kilometre each year, but damaged reefs produce much less. Invertebrates Sea urchins, Dotidae and sea slugs eat seaweed. Some species of sea urchins, such as Diadema antillarum, can play a pivotal part in preventing algae from overrunning reefs. Researchers are investigating the use of native collector urchins, Tripneustes gratilla, for their potential as biocontrol agents to mitigate the spread of invasive algae species on coral reefs. Nudibranchia and sea anemones eat sponges. A number of invertebrates, collectively called "cryptofauna", inhabit the coral skeletal substrate itself, either boring into the skeletons (through the process of bioerosion) or living in pre-existing voids and crevices. Animals boring into the rock include sponges, bivalve mollusks, and sipunculans. Those settling on the reef include many other species, particularly crustaceans and polychaete worms. Seabirds Coral reef systems provide important habitats for seabird species, some endangered. For example, Midway Atoll in Hawaii supports nearly three million seabirds, including two-thirds (1.5 million) of the global population of Laysan albatross, and one-third of the global population of black-footed albatross. Each seabird species has specific sites on the atoll where they nest. Altogether, 17 species of seabirds live on Midway. The short-tailed albatross is the rarest, with fewer than 2,200 surviving after excessive feather hunting in the late 19th century. Other Sea snakes feed exclusively on fish and their eggs. Marine birds, such as herons, gannets, pelicans and boobies, feed on reef fish. Some land-based reptiles intermittently associate with reefs, such as monitor lizards, the marine crocodile and semiaquatic snakes, such as Laticauda colubrina. Sea turtles, particularly hawksbill sea turtles, feed on sponges. Ecosystem services Coral reefs deliver ecosystem services to tourism, fisheries and coastline protection. The global economic value of coral reefs has been estimated to be between US$29.8 billion and $375 billion per year. About 500 million people benefit from ecosystem services provided by coral reefs. The economic cost over a 25-year period of destroying one square kilometre of coral reef has been estimated to be somewhere between $137,000 and $1,200,000. To improve the management of coastal coral reefs, the World Resources Institute (WRI) developed and published tools for calculating the value of coral reef-related tourism, shoreline protection and fisheries, partnering with five Caribbean countries. As of April 2011, published working papers covered St. Lucia, Tobago, Belize, and the Dominican Republic. 
The WRI was "making sure that the study results support improved coastal policies and management planning". The Belize study estimated the value of reef and mangrove services at $395–559 million annually. Bermuda's coral reefs provide economic benefits to the Island worth on average $722 million per year, based on six key ecosystem services, according to Sarkis et al (2010). Shoreline protection Coral reefs protect shorelines by absorbing wave energy, and many small islands would not exist without reefs. Coral reefs can reduce wave energy by 97%, helping to prevent loss of life and property damage. Coastlines protected by coral reefs are also more stable in terms of erosion than those without. Reefs can attenuate waves as well as or better than artificial structures designed for coastal defence such as breakwaters. An estimated 197 million people who live both below 10 m elevation and within 50 km of a reef consequently may receive risk reduction benefits from reefs. Restoring reefs is significantly cheaper than building artificial breakwaters in tropical environments. Expected damages from flooding would double, and costs from frequent storms would triple without the topmost meter of reefs. For 100-year storm events, flood damages would increase by 91% to $US 272 billion without the top meter. Fisheries About six million tons of fish are taken each year from coral reefs. Well-managed reefs have an average annual yield of 15 tons of seafood per square kilometre. Southeast Asia's coral reef fisheries alone yield about $2.4 billion annually from seafood. Threats Since their emergence 485 million years ago, coral reefs have faced many threats, including disease, predation, invasive species, bioerosion by grazing fish, algal blooms, and geologic hazards. Recent human activities present new threats. From 2009 to 2018, coral reefs worldwide declined 14%. Human activities that threaten coral include coral mining, bottom trawling, and the digging of canals and accesses into islands and bays, all of which can damage marine ecosystems if not done sustainably. Other localized threats include blast fishing, overfishing, coral overmining, and marine pollution, including use of the banned anti-fouling biocide tributyltin; although absent in developed countries, these activities continue in places with few environmental protections or poor regulatory enforcement. Chemicals in sunscreens may awaken latent viral infections in zooxanthellae and impact reproduction. However, concentrating tourism activities via offshore platforms has been shown to limit the spread of coral disease by tourists. Greenhouse gas emissions present a broader threat through sea temperature rise and sea level rise, resulting in widespread coral bleaching and loss of coral cover. Climate change causes more frequent and more severe storms, also changes ocean circulation patterns, which can destroy coral reefs.Ocean acidification also affects corals by decreasing calcification rates and increasing dissolution rates, although corals can adapt their calcifying fluids to changes in seawater pH and carbonate levels to mitigate the impact. Volcanic and human-made aerosol pollution can modulate regional sea surface temperatures. In 2011, two researchers suggested that "extant marine invertebrates face the same synergistic effects of multiple stressors" that occurred during the end-Permian extinction, and that genera "with poorly buffered respiratory physiology and calcareous shells", such as corals, were particularly vulnerable. 
Corals respond to stress by "bleaching", or expelling their colorful zooxanthellate endosymbionts. Corals with Clade C zooxanthellae are generally vulnerable to heat-induced bleaching, whereas corals with the hardier Clade A or D are generally resistant, as are tougher coral genera like Porites and Montipora. Every 4–7 years, an El Niño event causes some reefs with heat-sensitive corals to bleach, with especially widespread bleachings in 1998 and 2010. However, reefs that experience a severe bleaching event become resistant to future heat-induced bleaching, due to rapid directional selection. Similar rapid adaption may protect coral reefs from global warming. A large-scale systematic study of the Jarvis Island coral community, which experienced ten El Niño-coincident coral bleaching events from 1960 to 2016, found that the reef recovered from almost complete death after severe events. Protection Marine protected areas (MPAs) are areas designated because they provide various kinds of protection to ocean and/or estuarine areas. They are intended to promote responsible fishery management and habitat protection. MPAs can also encompass social and biological objectives, including reef restoration, aesthetics, biodiversity and economic benefits. The effectiveness of MPAs is still debated. For example, a study investigating the success of a small number of MPAs in Indonesia, the Philippines and Papua New Guinea found no significant differences between the MPAs and unprotected sites. Furthermore, in some cases they can generate local conflict, due to a lack of community participation, clashing views of the government and fisheries, effectiveness of the area and funding. In some situations, as in the Phoenix Islands Protected Area, MPAs provide revenue to locals. The level of income provided is similar to the income they would have generated without controls. Overall, it appears the MPA's can provide protection to local coral reefs, but that clear management and sufficient funds are required. The Caribbean Coral Reefs – Status Report 1970–2012, states that coral decline may be reduced or even reversed. For this overfishing needs to be stopped, especially fishing on species key to coral reefs, such as parrotfish. Direct human pressure on coral reefs should also be reduced and the inflow of sewage should be minimised. Measures to achieve this could include restricting coastal settlement, development and tourism. The report shows that healthier reefs in the Caribbean are those with large, healthy populations of parrotfish. These occur in countries that protect parrotfish and other species, like sea urchins. They also often ban fish trapping and spearfishing. Together these measures help creating "resilient reefs". Protecting networks of diverse and healthy reefs, not only climate refugia, helps ensure the greatest chance of genetic diversity, which is critical for coral to adapt to new climates. A variety of conservation methods applied across marine and terrestrial threatened ecosystems makes coral adaption more likely and effective. Designating a reef as a biosphere reserve, marine park, national monument or world heritage site can offer protections. For example, Belize's barrier reef, Sian Ka'an, the Galapagos islands, Great Barrier Reef, Henderson Island, Palau and Papahānaumokuākea Marine National Monument are world heritage sites. 
In Australia, the Great Barrier Reef is protected by the Great Barrier Reef Marine Park Authority, and is the subject of much legislation, including a biodiversity action plan. Australia compiled a Coral Reef Resilience Action Plan. This plan consists of adaptive management strategies, including reducing carbon footprint. A public awareness plan provides education on the "rainforests of the sea" and how people can reduce carbon emissions. Inhabitants of Ahus Island, Manus Province, Papua New Guinea, have followed a generations-old practice of restricting fishing in six areas of their reef lagoon. Their cultural traditions allow line fishing, but no net or spear fishing. Both biomass and individual fish sizes are significantly larger than in places where fishing is unrestricted. Increased levels of atmospheric CO2 contribute to ocean acidification, which in turn damages coral reefs. To help combat ocean acidification, several countries have put laws in place to reduce greenhouse gases such as carbon dioxide. Many land use laws aim to reduce CO2 emissions by limiting deforestation. Deforestation can release significant amounts of CO2 absent sequestration via active follow-up forestry programs. Deforestation can also cause erosion, which flows into the ocean, contributing to ocean acidification. Incentives are used to reduce miles traveled by vehicles, which reduces carbon emissions into the atmosphere, thereby reducing the amount of dissolved CO2 in the ocean. State and federal governments also regulate land activities that affect coastal erosion. High-end satellite technology can monitor reef conditions. The United States Clean Water Act puts pressure on state governments to monitor and limit run-off of polluted water. Restoration Coral reef restoration has grown in prominence over the past several decades because of the unprecedented reef die-offs around the planet. Coral stressors can include pollution, warming ocean temperatures, extreme weather events, and overfishing. With the deterioration of global reefs, fish nurseries, biodiversity, coastal development and livelihood, and natural beauty are under threat. Fortunately, researchers have taken it upon themselves to develop a new field, coral restoration, in the 1970s–1980s Coral farming Coral aquaculture, also known as coral farming or coral gardening, is showing promise as a potentially effective tool for restoring coral reefs. The "gardening" process bypasses the early growth stages of corals when they are most at risk of dying. Coral seeds are grown in nurseries, then replanted on the reef. Coral is farmed by coral farmers whose interests range from reef conservation to increased income. Due to its straight forward process and substantial evidence of the technique having a significant effect on coral reef growth, coral nurseries became the most widespread and arguably the most effective method for coral restoration. Coral gardens take advantage of a coral's natural ability to fragment and continuing to grow if the fragments are able to anchor themselves onto new substrates. This method was first tested by Baruch Rinkevich in 1995 which found success at the time. By today's standards, coral farming has grown into a variety of different forms, but still has the same goals of cultivating corals. Consequently, coral farming quickly replaced previously used transplantation methods or the act of physically moving sections or whole colonies of corals into a new area. 
Transplantation has seen success in the past and decades of experiments have led to a high success and survival rate. However, this method still requires the removal of corals from existing reefs. With the current state of reefs, this kind of method should generally be avoided if possible. Saving healthy corals from eroding substrates or reefs that are doomed to collapse could be a major advantage of utilizing transplantation. Coral gardens generally take on the safe forms no matter where you go. It begins with the establishment of a nursery where operators can observe and care for coral fragments. It goes without saying that nurseries should be established in areas that are going to maximize growth and minimize mortality. Floating offshore coral trees or even aquariums are possible locations where corals can grow. After a location has been determined, collection and cultivation can occur. The major benefit of using coral farms is it lowers polyp and juvenile mortality rates. By removing predators and recruitment obstacles, corals are able to mature without much hindrance. However, nurseries cannot stop climate stressors. Warming temperatures or hurricanes can still disrupt or even kill nursery corals. Technology is becoming more popular in the coral farming process. Teams from the Reef Restoration and Adaptation Program (RRAP) have trialled coral counting technology utilizing a prototype robotic camera. The camera uses computer vision and learning algorithms to detect and count individual coral babies and track their growth and health in real time. This technology, with research led by QUT, is intended to be used during annual coral spawning events and will provide researchers with control that is not currently possible when mass-producing corals. Creating substrates Efforts to expand the size and number of coral reefs generally involve supplying substrate to allow more corals to find a home. Substrate materials include discarded vehicle tires, scuttled ships, subway cars and formed concrete, such as reef balls. Reefs grow unaided on marine structures such as oil rigs. In large restoration projects, propagated hermatypic coral on substrate can be secured with metal pins, superglue or milliput. Needle and thread can also attach A-hermatype coral to substrate. Biorock is a substrate produced by a patented process that runs low voltage electrical currents through seawater to cause dissolved minerals to precipitate onto steel structures. The resultant white carbonate (aragonite) is the same mineral that makes up natural coral reefs. Corals rapidly colonize and grow at accelerated rates on these coated structures. The electrical currents also accelerate the formation and growth of both chemical limestone rock and the skeletons of corals and other shell-bearing organisms, such as oysters. The vicinity of the anode and cathode provides a high-pH environment which inhibits the growth of competitive filamentous and fleshy algae. The increased growth rates fully depend on the accretion activity. Under the influence of the electric field, corals display an increased growth rate, size and density. Simply having many structures on the ocean floor is not enough to form coral reefs. Restoration projects must consider the complexity of the substrates they are creating for future reefs. Researchers conducted an experiment near Ticao Island in the Philippines in 2013 where several substrates in varying complexities were laid in the nearby degraded reefs. 
Large complexity consisted of plots that had both a human-made substrates of both smooth and rough rocks with a surrounding fence, medium consisted of only the human-made substrates, and small had neither the fence or substrates. After one month, researchers found that there was a positive correlation between structure complexity and recruitment rates of larvae. The medium complexity performed the best with larvae favoring rough rocks over smooth rocks. Following one year of their study, researchers visited the site and found that many of the sites were able to support local fisheries. They came to the conclusion that reef restoration could be done cost-effectively and will yield long term benefits given they are protected and maintained. Relocation One case study with coral reef restoration was conducted on the island of Oahu in Hawaii. The University of Hawaii operates a Coral Reef Assessment and Monitoring Program to help relocate and restore coral reefs in Hawaii. A boat channel from the island of Oahu to the Hawaii Institute of Marine Biology on Coconut Island was overcrowded with coral reefs. Many areas of coral reef patches in the channel had been damaged from past dredging in the channel. Dredging covers corals with sand. Coral larvae cannot settle on sand; they can only build on existing reefs or compatible hard surfaces, such as rock or concrete. Because of this, the university decided to relocate some of the coral. They transplanted them with the help of United States Army divers, to a site relatively close to the channel. They observed little if any damage to any of the colonies during transport and no mortality of coral reefs was observed on the transplant site. While attaching the coral to the transplant site, they found that coral placed on hard rock grew well, including on the wires that attached the corals to the site. No environmental effects were seen from the transplantation process, recreational activities were not decreased, and no scenic areas were affected. As an alternative to transplanting coral themselves, juvenile fish can also be encouraged to relocate to existing coral reefs by auditory simulation. In damaged sections of the Great Barrier Reef, loudspeakers playing recordings of healthy reef environments were found to attract fish twice as often as equivalent patches where no sound was played, and also increased species biodiversity by 50%. Heat-tolerant symbionts Another possibility for coral restoration is gene therapy: inoculating coral with genetically modified bacteria, or naturally-occurring heat-tolerant varieties of coral symbiotes, may make it possible to grow corals that are more resistant to climate change and other threats. Warming oceans are forcing corals to adapt to unprecedented temperatures. Those that do not have a tolerance for the elevated temperatures experience coral bleaching and eventually mortality. There is already research that looks to create genetically modified corals that can withstand a warming ocean. Madeleine J. H. van Oppen, James K. Oliver, Hollie M. Putnam, and Ruth D. Gates described four different ways that gradually increase in human intervention to genetically modify corals. These methods focus on altering the genetics of the zooxanthellae within coral rather than the alternative. The first method is to induce acclimatization of the first generation of corals. The idea is that when adult and offspring corals are exposed to stressors, the zooxanthellae will gain a mutation. 
This method is based mostly on the chance that the zooxanthellae will acquire the specific trait that will allow them to survive better in warmer waters. The second method focuses on identifying what different kinds of zooxanthellae are within the coral and configuring how much of each zooxanthella lives within the coral at a given age. Use of zooxanthellae from the previous method would only boost success rates for this method. However, this method would only be applicable to younger corals, for now, because previous experiments in manipulating zooxanthellae communities at later life stages have all failed. The third method focuses on selective breeding tactics. Once selected, corals would be reared and exposed to simulated stressors in a laboratory. The last method is to genetically modify the zooxanthellae itself. When preferred mutations are acquired, the genetically modified zooxanthellae will be introduced to an aposymbiotic polyp and a new coral will be produced. This method is the most laborious of the four, but researchers believe it should be used more widely and that it holds the most promise in genetic engineering for coral restoration. Invasive algae Hawaiian coral reefs smothered by the spread of invasive algae were managed with a two-pronged approach: divers manually removed invasive algae, with the support of super-sucker barges. Grazing pressure on invasive algae needed to be increased to prevent the regrowth of the algae. Researchers found that native collector urchins were reasonable candidate grazers for algae biocontrol, to extirpate the remaining invasive algae from the reef. Invasive algae in Caribbean reefs Macroalgae, better known as seaweed, have the potential to cause reef collapse because they can outcompete many coral species. Macroalgae can overgrow corals, shade them, block recruitment, release biochemicals that hinder spawning, and foster bacteria harmful to corals. Historically, algae growth was controlled by herbivorous fish and sea urchins. Parrotfish are a prime example of reef caretakers. Consequently, these grazers can be considered keystone species for reef environments because of their role in protecting reefs. Before the 1980s, Jamaica's reefs were thriving and well cared for; however, this changed after Hurricane Allen struck in 1980 and an unknown disease spread across the Caribbean. In the wake of these events, massive damage was caused to both the reefs and the sea urchin population across Jamaica's reefs and into the Caribbean Sea. As little as 2% of the original sea urchin population survived the disease. Pioneer macroalgae first colonized the destroyed reefs, and larger, more resilient macroalgae eventually succeeded them as the dominant organisms. Parrotfish and other herbivorous fish were few in number because of decades of overfishing and bycatch at the time. Historically, the Jamaican coast had 90% coral cover; this was reduced to 5% by the 1990s. Eventually, corals were able to recover in areas where sea urchin populations were increasing. Sea urchins were able to feed, multiply and clear off substrates, leaving areas for coral polyps to anchor and mature. However, sea urchin populations are still not recovering as fast as researchers predicted, despite being highly fecund. It is unknown whether the mysterious disease is still present and preventing sea urchin populations from rebounding. Regardless, these areas are slowly recovering with the aid of sea urchin grazing. 
This event supports an early restoration idea of cultivating and releasing sea urchins into reefs to prevent algal overgrowth. Microfragmentation and fusion In 2014, Christopher Page, Erinn Muller, and David Vaughan from the International Center for Coral Reef Research & Restoration at Mote Marine Laboratory in Summerland Key, Florida developed a new technology called "microfragmentation", in which they use a specialized diamond band saw to cut corals into 1 cm2 fragments instead of 6 cm2 to advance the growth of brain, boulder, and star corals. Corals Orbicella faveolata and Montastraea cavernosa were outplanted off the Florida's shores in several microfragment arrays. After two years, O. faveolata had grown 6.5x its original size while M. cavernosa had grown nearly twice its size. Under conventional means, both corals would have required decades to reach the same size. It is suspected that if predation events had not occurred near the beginning of the experiment O. faveolata would have grown at least ten times its original size. By using this method, Mote Marine Laboratory successfully generated 25,000 corals within a single year, subsequently transplanting 10,000 of them into the Florida Keys. Shortly after, they discovered that these microfragments fused with other microfragments from the same parent coral. Typically, corals that are not from the same parent fight and kill nearby corals in an attempt to survive and expand. This new technology is known as "fusion" and has been shown to grow coral heads in just two years instead of the typical 25–75 years. After fusion occurs, the reef will act as a single organism rather than several independent reefs. Currently, there has been no published research into this method. See also Deep-water coral — Corals living in the cold waters of deeper, darker parts of the oceans Mesophotic coral reef — Corals living in the mesopelagic or twilight zone References Further references Coral Reef Protection: What Are Coral Reefs?. US EPA. External links Corals and Coral Reefs overview at the Smithsonian Ocean Portal About Corals Australian Institute of Marine Science. International Coral Reef Initiative Moorea Coral Reef Long Term Ecological Research Site (US NSF) ARC Centre of Excellence for Coral Reef Studies NOAA's Coral-List Listserver for Coral Reef Information and News NOAA's Coral Reef Conservation Program NOAA's Coral Reef Information System ReefBase: A Global Information System on Coral Reefs National Coral Reef Institute Nova Southeastern University Marine Aquarium Council NCORE National Center for Coral Reef Research University of Miami Microdocs : 4 kinds of Reef & Reef structure Reef Relief Active Florida environmental non-profit focusing on coral reef education and protection Global Reef Record – Catlin Seaview Survey of reef, a database of images and other information "Corals and Coral Reefs" (archived). Nancy Knowlton, iBioSeminars, 2011. Nancy Knowlton's Seminar: "Corals and Coral Reefs". Nancy Knowlton, iBioSeminars, 2011. About coral reefs Living Reefs Foundation, Bermuda Caribbean Coral Reefs – Status Report 1970-2012'' by the IUCN. – , featuring the report. Animal products Environmental impact of fishing Coastal and oceanic landforms Ecosystems Oceanographical terminology Oceanography
Coral reef
[ "Physics", "Chemistry", "Biology", "Environmental_science" ]
13,834
[ "Hydrology", "Symbiosis", "Natural products", "Applied and interdisciplinary physics", "Coral reefs", "Oceanography", "Animal products", "Biogeomorphology", "Ecosystems" ]
87,793
https://en.wikipedia.org/wiki/Joseph-Louis%20Lagrange
Joseph-Louis Lagrange (born Giuseppe Luigi Lagrangia or Giuseppe Ludovico De la Grange Tournier; 25 January 1736 – 10 April 1813), also reported as Giuseppe Luigi Lagrange or Lagrangia, was an Italian mathematician, physicist and astronomer, later naturalized French. He made significant contributions to the fields of analysis, number theory, and both classical and celestial mechanics. In 1766, on the recommendation of Leonhard Euler and d'Alembert, Lagrange succeeded Euler as the director of mathematics at the Prussian Academy of Sciences in Berlin, Prussia, where he stayed for over twenty years, producing many volumes of work and winning several prizes of the French Academy of Sciences. Lagrange's treatise on analytical mechanics (Mécanique analytique, 4. ed., 2 vols. Paris: Gauthier-Villars et fils, 1788–89), which was written in Berlin and first published in 1788, offered the most comprehensive treatment of classical mechanics since Isaac Newton and formed a basis for the development of mathematical physics in the nineteenth century. In 1787, at age 51, he moved from Berlin to Paris and became a member of the French Academy of Sciences. He remained in France until the end of his life. He was instrumental in the decimalisation process in Revolutionary France, became the first professor of analysis at the École Polytechnique upon its opening in 1794, was a founding member of the Bureau des Longitudes, and became Senator in 1799. Scientific contribution Lagrange was one of the creators of the calculus of variations, deriving the Euler–Lagrange equations for extrema of functionals. He extended the method to include possible constraints, arriving at the method of Lagrange multipliers. Lagrange invented the method of solving differential equations known as variation of parameters, applied differential calculus to the theory of probabilities and worked on solutions for algebraic equations. He proved that every natural number is a sum of four squares. His treatise Theorie des fonctions analytiques laid some of the foundations of group theory, anticipating Galois. In calculus, Lagrange developed a novel approach to interpolation and Taylor's theorem. He studied the three-body problem for the Earth, Sun and Moon (1764) and the movement of Jupiter's satellites (1766), and in 1772 found the special-case solutions to this problem that yield what are now known as Lagrangian points. Lagrange is best known for transforming Newtonian mechanics into a branch of analysis, Lagrangian mechanics. He presented the mechanical "principles" as simple results of the variational calculus. Biography Early years Firstborn of eleven children as Giuseppe Lodovico Lagrangia, Lagrange was of Italian and French descent. His paternal great-grandfather was a French captain of cavalry, whose family originated from the French region of Tours. After serving under Louis XIV, he had entered the service of Charles Emmanuel II, Duke of Savoy, and married a Conti from the noble Roman family. Lagrange's father, Giuseppe Francesco Lodovico, was a doctor in Law at the University of Torino, while his mother was the only child of a rich doctor of Cambiano, in the countryside of Turin. He was raised as a Roman Catholic (but later on became an agnostic). His father, who had charge of the King's military chest and was Treasurer of the Office of Public Works and Fortifications in Turin, should have maintained a good social position and wealth, but before his son grew up he had lost most of his property in speculations. 
A career as a lawyer was planned out for Lagrange by his father, and certainly Lagrange seems to have accepted this willingly. He studied at the University of Turin and his favourite subject was classical Latin. At first, he had no great enthusiasm for mathematics, finding Greek geometry rather dull. It was not until he was seventeen that he showed any taste for mathematics – his interest in the subject being first excited by a paper by Edmond Halley from 1693 which he came across by accident. Alone and unaided he threw himself into mathematical studies; at the end of a year's incessant toil he was already an accomplished mathematician. Charles Emmanuel III appointed Lagrange to serve as the "Sostituto del Maestro di Matematica" (mathematics assistant professor) at the Royal Military Academy of the Theory and Practice of Artillery in 1755, where he taught courses in calculus and mechanics to support the Piedmontese army's early adoption of the ballistics theories of Benjamin Robins and Leonhard Euler. In that capacity, Lagrange was the first to teach calculus in an engineering school. According to Alessandro Papacino D'Antoni, the academy's military commander and famous artillery theorist, Lagrange unfortunately proved to be a problematic professor with his oblivious teaching style, abstract reasoning, and impatience with artillery and fortification-engineering applications. In this academy one of his students was François Daviet. Variational calculus Lagrange is one of the founders of the calculus of variations. Starting in 1754, he worked on the problem of the tautochrone, discovering a method of maximizing and minimizing functionals in a way similar to finding extrema of functions. Lagrange wrote several letters to Leonhard Euler between 1754 and 1756 describing his results. He outlined his "δ-algorithm", leading to the Euler–Lagrange equations of variational calculus and considerably simplifying Euler's earlier analysis. Lagrange also applied his ideas to problems of classical mechanics, generalising the results of Euler and Maupertuis. Euler was very impressed with Lagrange's results. It has been stated that "with characteristic courtesy he withheld a paper he had previously written, which covered some of the same ground, in order that the young Italian might have time to complete his work, and claim the undisputed invention of the new calculus"; however, this chivalric view has been disputed. Lagrange published his method in two memoirs of the Turin Society in 1762 and 1773. Miscellanea Taurinensia In 1758, with the aid of his pupils (mainly with Daviet), Lagrange established a society, which was subsequently incorporated as the Turin Academy of Sciences, and most of his early writings are to be found in the five volumes of its transactions, usually known as the Miscellanea Taurinensia. Many of these are elaborate papers. The first volume contains a paper on the theory of the propagation of sound; in this he indicates a mistake made by Newton, obtains the general differential equation for the motion, and integrates it for motion in a straight line. This volume also contains the complete solution of the problem of a string vibrating transversely; in this paper, he points out a lack of generality in the solutions previously given by Brook Taylor, D'Alembert, and Euler, and arrives at the conclusion that the form of the curve at any time t is given by the equation y = a sin(mx) sin(nt). The article concludes with a masterly discussion of echoes, beats, and compound sounds. 
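To make the quoted standing-wave form concrete, the short LaTeX fragment below checks that it satisfies the one-dimensional wave equation; the wave-equation notation and the relation n = mc (with c the propagation speed) are standard assumptions added here for illustration, not wording from Lagrange's memoir.

```latex
% Check of the quoted form y(x,t) = a\sin(mx)\sin(nt) against the
% one-dimensional wave equation y_{tt} = c^2 y_{xx} (assumed notation,
% with c the propagation speed along the string):
\[
  y_{tt} = -a\,n^{2}\sin(mx)\sin(nt), \qquad
  y_{xx} = -a\,m^{2}\sin(mx)\sin(nt),
\]
% so the equation is satisfied exactly when n^2 = c^2 m^2, i.e. n = mc,
% which ties the temporal frequency of each mode to its spatial wavenumber.
```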
Other articles in this volume are on recurring series, probabilities, and the calculus of variations. The second volume contains a long paper embodying the results of several papers in the first volume on the theory and notation of the calculus of variations, and he illustrates its use by deducing the principle of least action, and by solutions of various problems in dynamics. The third volume includes the solution of several dynamical problems by means of the calculus of variations; some papers on the integral calculus; a solution of Fermat's problem: given an integer n which is not a perfect square, to find a number x such that nx^2 + 1 is a perfect square; and the general differential equations of motion for three bodies moving under their mutual attractions. The next work he produced was in 1764 on the libration of the Moon, and an explanation as to why the same face was always turned to the earth, a problem which he treated by the aid of virtual work. His solution is especially interesting as containing the germ of the idea of generalised equations of motion, equations which he first formally proved in 1780. Berlin Already by 1756, Euler and Maupertuis, seeing Lagrange's mathematical talent, tried to persuade Lagrange to come to Berlin, but he shyly refused the offer. In 1765, d'Alembert interceded on Lagrange's behalf with Frederick of Prussia and by letter, asked him to leave Turin for a considerably more prestigious position in Berlin. He again turned down the offer, responding that It seems to me that Berlin would not be at all suitable for me while M. Euler is there. In 1766, after Euler left Berlin for Saint Petersburg, Frederick himself wrote to Lagrange expressing the wish of "the greatest king in Europe" to have "the greatest mathematician in Europe" resident at his court. Lagrange was finally persuaded. He spent the next twenty years in Prussia, where he produced a long series of papers published in the Berlin and Turin transactions, and composed his monumental work, the Mécanique analytique. In 1767, he married his cousin Vittoria Conti. Lagrange was a favourite of the king, who frequently lectured him on the advantages of perfect regularity of life. The lesson was accepted, and Lagrange studied his mind and body as though they were machines, and experimented to find the exact amount of work which he could do before exhaustion. Every night he set himself a definite task for the next day, and on completing any branch of a subject he wrote a short analysis to see what points in the demonstrations or the subject-matter were capable of improvement. He carefully planned his papers before writing them, usually without a single erasure or correction. Nonetheless, during his years in Berlin, Lagrange's health was rather poor, and that of his wife Vittoria was even worse. She died in 1783 after years of illness and Lagrange was very depressed. In 1786, Frederick II died, and the climate of Berlin became difficult for Lagrange. Paris In 1786, following Frederick's death, Lagrange received similar invitations from states including Spain and Naples, and he accepted the offer of Louis XVI to move to Paris. In France he was received with every mark of distinction and special apartments in the Louvre were prepared for his reception, and he became a member of the French Academy of Sciences, which later became part of the Institut de France (1795). 
At the beginning of his residence in Paris, he was seized with an attack of melancholy, and even the printed copy of his Mécanique on which he had worked for a quarter of a century lay for more than two years unopened on his desk. Curiosity as to the results of the French Revolution first stirred him out of his lethargy, a curiosity which soon turned to alarm as the revolution developed. It was about the same time, 1792, that the unaccountable sadness of his life and his timidity moved the compassion of 24-year-old Renée-Françoise-Adélaïde Le Monnier, daughter of his friend, the astronomer Pierre Charles Le Monnier. She insisted on marrying him and proved a devoted wife to whom he became warmly attached. In September 1793, the Reign of Terror began. Under the intervention of Antoine Lavoisier, who himself was by then already thrown out of the academy along with many other scholars, Lagrange was specifically exempted by name in the decree of October 1793 that ordered all foreigners to leave France. On 4 May 1794, Lavoisier and 27 other tax farmers were arrested and sentenced to death and guillotined on the afternoon after the trial. Lagrange said on the death of Lavoisier: It took only a moment to cause this head to fall and a hundred years will not suffice to produce its like. Though Lagrange had been preparing to escape from France while there was yet time, he was never in any danger; different revolutionary governments (and at a later time, Napoleon) gave him honours and distinctions. This luckiness or safety may to some extent be due to his life attitude he expressed many years before: "I believe that, in general, one of the first principles of every wise man is to conform strictly to the laws of the country in which he is living, even when they are unreasonable". A striking testimony to the respect in which he was held was shown in 1796 when the French commissary in Italy was ordered to attend in the full state on Lagrange's father and tender the congratulations of the republic on the achievements of his son, who "had done honour to all mankind by his genius, and whom it was the special glory of Piedmont to have produced". It may be added that Napoleon, when he attained power, warmly encouraged scientific studies in France, and was a liberal benefactor of them. Appointed senator in 1799, he was the first signer of the Sénatus-consulte which in 1802 annexed his fatherland Piedmont to France. He acquired French citizenship in consequence. The French claimed he was a French mathematician, but the Italians continued to claim him as Italian. Units of measurement Lagrange was involved in the development of the metric system of measurement in the 1790s. He was offered the presidency of the Commission for the reform of weights and measures (la Commission des Poids et Mesures) when he was preparing to escape. After Lavoisier's death in 1794, it was largely Lagrange who influenced the choice of the metre and kilogram units with decimal subdivision, by the commission of 1799. Lagrange was also one of the founding members of the Bureau des Longitudes in 1795. École Normale In 1795, Lagrange was appointed to a mathematical chair at the newly established École Normale, which enjoyed only a short existence of four months. His lectures there were elementary; they contain nothing of any mathematical importance, though they do provide a brief historical insight into his reason for proposing undecimal or Base 11 as the base number for the reformed system of weights and measures. 
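As a small aside on the undecimal proposal mentioned above, the following Python sketch shows what writing a quantity in base 11 involves; the function name and the sample value are invented purely for illustration.

```python
def to_base_11(n: int) -> str:
    """Write a non-negative integer undecimally (base 11), using 'A' for ten."""
    digits = "0123456789A"
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, 11)
        out.append(digits[r])
    return "".join(reversed(out))

# Example: the decimal number 1793 written in base 11.
print(to_base_11(1793))  # -> "1390", since 1*11**3 + 3*11**2 + 9*11 + 0 = 1793
```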
The lectures were published because the professors had to "pledge themselves to the representatives of the people and to each other neither to read nor to repeat from memory" ["Les professeurs aux Écoles Normales ont pris, avec les Représentants du Peuple, et entr'eux l'engagement de ne point lire ou débiter de mémoire des discours écrits"]. The discourses were ordered and taken down in shorthand to enable the deputies to see how the professors acquitted themselves. It was also thought the published lectures would interest a significant portion of the citizenry ["Quoique des feuilles sténographiques soient essentiellement destinées aux élèves de l'École Normale, on doit prévoir quיelles seront lues par une grande partie de la Nation"]. École Polytechnique In 1794, Lagrange was appointed professor of the École Polytechnique; and his lectures there, described by mathematicians who had the good fortune to be able to attend them, were almost perfect both in form and matter. Beginning with the merest elements, he led his hearers on until, almost unknown to themselves, they were themselves extending the bounds of the subject: above all he impressed on his pupils the advantage of always using general methods expressed in a symmetrical notation. However, Lagrange does not seem to have been a successful teacher. Fourier, who attended his lectures in 1795, wrote: his voice is very feeble, at least in that he does not become heated; he has a very marked Italian accent and pronounces the s like z [...] The students, of whom the majority are incapable of appreciating him, give him little welcome, but the professeurs make amends for it. Late years In 1810, Lagrange started a thorough revision of the Mécanique analytique, but he was able to complete only about two-thirds of it before his death in Paris in 1813, in 128 rue du Faubourg Saint-Honoré. Napoleon honoured him with the Grand Croix of the Ordre Impérial de la Réunion just two days before he died. He was buried that same year in the Panthéon in Paris. The inscription on his tomb reads in translation:JOSEPH LOUIS LAGRANGE. Senator. Count of the Empire. Grand Officer of the Legion of Honour. Grand Cross of the Imperial Order of the Reunion. Member of the Institute and the Bureau of Longitude. Born in Turin on 25 January 1736. Died in Paris on 10 April 1813. Work in Berlin Lagrange was extremely active scientifically during the twenty years he spent in Berlin. Not only did he produce his Mécanique analytique, but he contributed between one and two hundred papers to the Academy of Turin, the Berlin Academy, and the French Academy. Some of these are really treatises, and all without exception are of a high order of excellence. Except for a short time when he was ill he produced on average about one paper a month. Of these, note the following as amongst the most important. First, his contributions to the fourth and fifth volumes, 1766–1773, of the Miscellanea Taurinensia; of which the most important was the one in 1771, in which he discussed how numerous astronomical observations should be combined so as to give the most probable result. And later, his contributions to the first two volumes, 1784–1785, of the transactions of the Turin Academy; to the first of which he contributed a paper on the pressure exerted by fluids in motion, and to the second an article on integration by infinite series, and the kind of problems for which it is suitable. 
Most of the papers sent to Paris were on astronomical questions, and among these are his paper on the Jovian system in 1766, his essay on the problem of three bodies in 1772, his work on the secular equation of the Moon in 1773, and his treatise on cometary perturbations in 1778. These were all written on subjects proposed by the Académie française, and in each case, the prize was awarded to him. Lagrangian mechanics Between 1772 and 1788, Lagrange re-formulated Classical/Newtonian mechanics to simplify formulas and ease calculations. These mechanics are called Lagrangian mechanics. Algebra The greater number of his papers during this time were, however, contributed to the Prussian Academy of Sciences. Several of them deal with questions in algebra. His discussion of representations of integers by quadratic forms (1769) and by more general algebraic forms (1770). His tract on the Theory of Elimination, 1770. Lagrange's theorem that the order of a subgroup H of a group G must divide the order of G. His papers of 1770 and 1771 on the general process for solving an algebraic equation of any degree via the Lagrange resolvents. This method fails to give a general formula for solutions of an equation of degree five and higher because the auxiliary equation involved has a higher degree than the original one. The significance of this method is that it exhibits the already known formulas for solving equations of second, third, and fourth degrees as manifestations of a single principle, and was foundational in Galois theory. The complete solution of a binomial equation (namely an equation of the form x^n ± a = 0) is also treated in these papers. In 1773, Lagrange considered a functional determinant of order 3, a special case of a Jacobian. He also proved the expression for the volume of a tetrahedron with one of the vertices at the origin as the one-sixth of the absolute value of the determinant formed by the coordinates of the other three vertices. Number theory Several of his early papers also deal with questions of number theory. Lagrange (1766–1769) was the first European to prove that Pell's equation x^2 − ny^2 = 1 has a nontrivial solution in the integers for any non-square natural number n. He proved the theorem, stated by Bachet without justification, that every positive integer is the sum of four squares, 1770. He proved Wilson's theorem that (for any integer n > 1): n is a prime if and only if (n − 1)! + 1 is a multiple of n, 1771. His papers of 1773, 1775, and 1777 gave demonstrations of several results enunciated by Fermat, and not previously proved. His Recherches d'Arithmétique of 1775 developed a general theory of binary quadratic forms to handle the general problem of when an integer is representable by the form ax^2 + bxy + cy^2. He made contributions to the theory of continued fractions. Other mathematical work There are also numerous articles on various points of analytical geometry. In two of them, written rather later, in 1792 and 1793, he reduced the equations of the quadrics (or conicoids) to their canonical forms. During the years from 1772 to 1785, he contributed a long series of papers which created the science of partial differential equations. A large part of these results was collected in the second edition of Euler's integral calculus which was published in 1794. Astronomy Lastly, there are numerous papers on problems in astronomy. 
Of these the most important are the following: Attempting to solve the general three-body problem, with the consequent discovery of the two constant-pattern solutions, collinear and equilateral, 1772. Those solutions were later seen to explain what are now known as the Lagrangian points. On the attraction of ellipsoids, 1773: this is founded on Maclaurin's work. On the secular equation of the Moon, 1773; also noticeable for the earliest introduction of the idea of the potential. The potential of a body at any point is the sum of the mass of every element of the body when divided by its distance from the point. Lagrange showed that if the potential of a body at an external point were known, the attraction in any direction could be at once found. The theory of the potential was elaborated in a paper sent to Berlin in 1777. On the motion of the nodes of a planet's orbit, 1774. On the stability of the planetary orbits, 1776. Two papers in which the method of determining the orbit of a comet from three observations is completely worked out, 1778 and 1783: this has not indeed proved practically available, but his system of calculating the perturbations by means of mechanical quadratures has formed the basis of most subsequent researches on the subject. His determination of the secular and periodic variations of the elements of the planets, 1781–1784: the upper limits assigned for these agree closely with those obtained later by Le Verrier, and Lagrange proceeded as far as the knowledge then possessed of the masses of the planets permitted. Three papers on the method of interpolation, 1783, 1792 and 1793: the part of finite differences dealing therewith is now in the same stage as that in which Lagrange left it. Fundamental treatise Over and above these various papers he composed his fundamental treatise, the Mécanique analytique. In this book, he lays down the law of virtual work, and from that one fundamental principle, by the aid of the calculus of variations, deduces the whole of mechanics, both of solids and fluids. The object of the book is to show that the subject is implicitly included in a single principle, and to give general formulae from which any particular result can be obtained. The method of generalised co-ordinates by which he obtained this result is perhaps the most brilliant result of his analysis. Instead of following the motion of each individual part of a material system, as D'Alembert and Euler had done, he showed that, if we determine its configuration by a sufficient number of variables x, called generalized coordinates, whose number is the same as that of the degrees of freedom possessed by the system, then the kinetic and potential energies of the system can be expressed in terms of those variables, and the differential equations of motion thence deduced by simple differentiation. For example, in dynamics of a rigid system he replaces the consideration of the particular problem by the general equation, which is now usually written in the form d/dt(∂T/∂ẋ) − ∂T/∂x + ∂V/∂x = 0, where T represents the kinetic energy and V represents the potential energy of the system. He then presented what we now know as the method of Lagrange multipliers—though this is not the first time that method was published—as a means to solve this equation. Amongst other minor theorems here given it may suffice to mention the proposition that the kinetic energy imparted by the given impulses to a material system under given constraints is a maximum, and the principle of least action. 
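As a minimal illustration of the generalized-coordinate equation just quoted, the LaTeX fragment below applies it to a simple pendulum; the pendulum, with coordinate θ, rod length l and mass m, is a standard textbook example and is not taken from the Mécanique analytique itself.

```latex
% Simple pendulum with generalized coordinate \theta (illustrative example):
% kinetic energy T = (1/2) m l^2 \dot{\theta}^2, potential V = -m g l \cos\theta.
\[
  \frac{d}{dt}\frac{\partial T}{\partial \dot{\theta}}
  - \frac{\partial T}{\partial \theta}
  + \frac{\partial V}{\partial \theta}
  = m l^{2}\ddot{\theta} + m g l \sin\theta = 0,
\]
% i.e. \ddot{\theta} + (g/l)\sin\theta = 0: the equation of motion follows
% from the two energies by differentiation alone, as described in the text.
```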
All the analysis is so elegant that Sir William Rowan Hamilton said the work could be described only as a scientific poem. Lagrange remarked that mechanics was really a branch of pure mathematics analogous to a geometry of four dimensions, namely, the time and the three coordinates of the point in space; and it is said that he prided himself that from the beginning to the end of the work there was not a single diagram. At first no printer could be found who would publish the book; but Legendre at last persuaded a Paris firm to undertake it, and it was issued under the supervision of Laplace, Cousin, Legendre (editor) and Condorcet in 1788. Work in France Differential calculus and calculus of variations Lagrange's lectures on the differential calculus at École Polytechnique form the basis of his treatise Théorie des fonctions analytiques, which was published in 1797. This work is the extension of an idea contained in a paper he had sent to the Berlin papers in 1772, and its object is to substitute for the differential calculus a group of theorems based on the development of algebraic functions in series, relying in particular on the principle of the generality of algebra. A somewhat similar method had been previously used by John Landen in the Residual Analysis, published in London in 1758. Lagrange believed that he could thus get rid of those difficulties, connected with the use of infinitely large and infinitely small quantities, to which philosophers objected in the usual treatment of the differential calculus. The book is divided into three parts: of these, the first treats of the general theory of functions, and gives an algebraic proof of Taylor's theorem, the validity of which is, however, open to question; the second deals with applications to geometry; and the third with applications to mechanics. Another treatise on the same lines was his Leçons sur le calcul des fonctions, issued in 1804, with the second edition in 1806. It is in this book that Lagrange formulated his celebrated method of Lagrange multipliers, in the context of problems of variational calculus with integral constraints. These works devoted to differential calculus and calculus of variations may be considered as the starting point for the researches of Cauchy, Jacobi, and Weierstrass. Infinitesimals At a later period Lagrange fully embraced the use of infinitesimals in preference to founding the differential calculus on the study of algebraic forms; and in the preface to the second edition of the Mécanique Analytique, which was issued in 1811, he justifies the employment of infinitesimals, and concludes by saying that: When we have grasped the spirit of the infinitesimal method, and have verified the exactness of its results either by the geometrical method of prime and ultimate ratios, or by the analytical method of derived functions, we may employ infinitely small quantities as a sure and valuable means of shortening and simplifying our proofs. Number theory His Résolution des équations numériques, published in 1798, was also the fruit of his lectures at École Polytechnique. There he gives the method of approximating the real roots of an equation by means of continued fractions, and enunciates several other theorems. In a note at the end, he shows how Fermat's little theorem, that is a^(p−1) − 1 ≡ 0 (mod p), where p is a prime and a is prime to p, may be applied to give the complete algebraic solution of any binomial equation.
He also here explains how the equation whose roots are the squares of the differences of the roots of the original equation may be used so as to give considerable information as to the position and nature of those roots. Celestial mechanics A theory of the planetary motions had formed the subject of some of the most remarkable of Lagrange's Berlin papers. In 1806 the subject was reopened by Poisson, who, in a paper read before the French Academy, showed that Lagrange's formulae led to certain limits for the stability of the orbits. Lagrange, who was present, now discussed the whole subject afresh, and in a letter communicated to the academy in 1808 explained how, by the variation of arbitrary constants, the periodical and secular inequalities of any system of mutually interacting bodies could be determined. Prizes and distinctions Euler proposed Lagrange for election to the Berlin Academy and he was elected on 2 September 1756. He was elected a Fellow of the Royal Society of Edinburgh in 1790, a Fellow of the Royal Society and a foreign member of the Royal Swedish Academy of Sciences in 1806. In 1808, Napoleon made Lagrange a Grand Officer of the Legion of Honour and a Count of the Empire. He was awarded the Grand Croix of the Ordre Impérial de la Réunion in 1813, a week before his death in Paris, and was buried in the Panthéon, a mausoleum dedicated to the most honoured French people. Lagrange was awarded the 1764 prize of the French Academy of Sciences for his memoir on the libration of the Moon. In 1766 the academy proposed a problem of the motion of the satellites of Jupiter, and the prize again was awarded to Lagrange. He also shared or won the prizes of 1772, 1774, and 1778. Lagrange is one of the 72 prominent French scientists who were commemorated on plaques at the first stage of the Eiffel Tower when it first opened. Rue Lagrange in the 5th Arrondissement in Paris is named after him. In Turin, the street where the house of his birth still stands is named via Lagrange. The lunar crater Lagrange and the asteroid 1006 Lagrangea also bear his name. See also List of things named after Joseph-Louis Lagrange Four-dimensional space Gauss's law History of the metre Lagrange's role in measurement reform Seconds pendulum Notes References Citations Sources The initial version of this article was taken from the public domain resource A Short Account of the History of Mathematics (4th edition, 1908) by W. W. Rouse Ball. Columbia Encyclopedia, 6th ed., 2005, "Lagrange, Joseph Louis." W. W. Rouse Ball, 1908, "Joseph Louis Lagrange (1736–1813)" A Short Account of the History of Mathematics, 4th ed. also on Gutenberg Chanson, Hubert, 2007, "Velocity Potential in Real Fluid Flows: Joseph-Louis Lagrange's Contribution," La Houille Blanche 5: 127–31. Fraser, Craig G., 2005, "Théorie des fonctions analytiques" in Grattan-Guinness, I., ed., Landmark Writings in Western Mathematics. Elsevier: 258–76. Lagrange, Joseph-Louis. (1811). Mécanique Analytique. Courcier (reissued by Cambridge University Press, 2009; ) Lagrange, J.L. (1781) "Mémoire sur la Théorie du Mouvement des Fluides"(Memoir on the Theory of Fluid Motion) in Serret, J.A., ed., 1867. Oeuvres de Lagrange, Vol. 4. Paris" Gauthier-Villars: 695–748. Pulte, Helmut, 2005, "Méchanique Analytique" in Grattan-Guinness, I., ed., Landmark Writings in Western Mathematics''. Elsevier: 208–24. 
External links Lagrange, Joseph Louis de: The Encyclopedia of Astrobiology, Astronomy and Space Flight The Founders of Classical Mechanics: Joseph Louis Lagrange The Lagrange Points Derivation of Lagrange's result (not Lagrange's method) Lagrange's works (in French) Oeuvres de Lagrange, edited by Joseph Alfred Serret, Paris 1867, digitized by Göttinger Digitalisierungszentrum (Mécanique analytique is in volumes 11 and 12.) Joseph Louis de Lagrange – Œuvres complètes Gallica-Math Inventaire chronologique de l'œuvre de Lagrange Persee Mécanique analytique (Paris, 1811-15) Lagrangian mechanics 1736 births 1813 deaths Scientists from Turin 18th-century Italian mathematicians 19th-century Italian mathematicians Burials at the Panthéon, Paris Counts of the First French Empire Italian people of French descent Naturalized citizens of France French agnostics 18th-century French astronomers 18th-century Italian astronomers Mathematical analysts Members of the French Academy of Sciences Members of the Prussian Academy of Sciences Members of the Royal Swedish Academy of Sciences Honorary members of the Saint Petersburg Academy of Sciences Number theorists French geometers Scientists from the Kingdom of Sardinia Grand Officers of the Legion of Honour Fellows of the Royal Society 18th-century French mathematicians 19th-century French mathematicians
Joseph-Louis Lagrange
[ "Physics", "Mathematics" ]
6,903
[ "Mathematical analysis", "Lagrangian mechanics", "Classical mechanics", "Mathematical analysts", "Number theorists", "Number theory", "Dynamical systems" ]
87,806
https://en.wikipedia.org/wiki/Cornucopia
In classical antiquity, the cornucopia (; ), also called the horn of plenty, was a symbol of abundance and nourishment, commonly a large horn-shaped container overflowing with produce, flowers, or nuts. In Greek, it was called the "horn of Amalthea" (), after Amalthea, a nurse of Zeus, who is often part of stories of the horn's origin. Baskets or panniers of this form were traditionally used in western Asia and Europe to hold and carry newly harvested food products. The horn-shaped basket would be worn on the back or slung around the torso, leaving the harvester's hands free for picking. In Greek/Roman mythology Mythology offers multiple explanations of the origin of the cornucopia. One of the best-known involves the birth and nurturance of the infant Zeus, who had to be hidden from his devouring father Cronus. In a cave on Mount Ida on the island of Crete, baby Zeus was cared for and protected by a number of divine attendants, including the goat Amalthea ("Nourishing Goddess"), who fed him with her milk. The suckling future king of the gods had unusual abilities and strength, and in playing with his nursemaid accidentally broke off one of her horns, which then had the divine power to provide unending nourishment, as the foster mother had to the god. In another myth, the cornucopia was created when Heracles (Roman Hercules) wrestled with the river god Achelous and ripped off one of his horns; river gods were sometimes depicted as horned. This version is represented in the Achelous and Hercules mural painting by the American Regionalist artist Thomas Hart Benton. The cornucopia became the attribute of several Greek and Roman deities, particularly those associated with the harvest, prosperity, or spiritual abundance, such as personifications of Earth (Gaia or Terra); the child Plutus, god of riches and son of the grain goddess Demeter; the nymph Maia; and Fortuna, the goddess of luck, who had the power to grant prosperity. In Roman Imperial cult, abstract Roman deities who fostered peace (pax Romana) and prosperity were also depicted with a cornucopia, including Abundantia, "Abundance" personified, and Annona, goddess of the grain supply to the city of Rome. Hades, the classical ruler of the underworld in the mystery religions, was a giver of agricultural, mineral and spiritual wealth, and in art often holds a cornucopia. Modern depictions In modern depictions, the cornucopia is typically a hollow, horn-shaped wicker basket filled with various kinds of festive fruit and vegetables. In most of North America, the cornucopia has come to be associated with Thanksgiving and the harvest. Cornucopia is also the name of the annual November Food and Wine celebration in Whistler, British Columbia, Canada. Two cornucopias are seen in the flag and state seal of Idaho. The Great Seal of North Carolina depicts Liberty standing and Plenty holding a cornucopia. The coats of arms of Colombia, Panama, Peru, Venezuela, Victoria, Australia and Kharkiv, Ukraine, also feature the cornucopia, symbolizing prosperity. Cornucopia motifs appear in some modern literature, such as Terry Pratchett's Wintersmith, and Suzanne Collins's The Hunger Games. The horn of plenty is used for body art and at Thanksgiving, as it is a symbol of fertility, fortune and abundance. 
Gallery See also Akshaya Patra Ark of the Covenant Chalice of Doña Urraca Cup of Jamshid Drinking horn Holy Chalice Holy Grail List of mythological objects Nanteos Cup Relic Sampo Venus of Laussel Śarīra Cintamani Mani stone Ashtamangala Yasakani no Magatama Kaustubha Gem Luminous gemstones Philosopher's stone Sendai Daikannon statue Syamantaka Gem Eight Treasures Cornucopian Notes References External links Food storage containers Heraldic charges Iconography Magic items Mythological objects Objects in Greek mythology Ornaments Ornaments (architecture) Roman mythology Symbols Thanksgiving Visual motifs
Cornucopia
[ "Physics", "Mathematics" ]
862
[ "Visual motifs", "Symbols", "Magic items", "Physical objects", "Matter" ]
87,872
https://en.wikipedia.org/wiki/Antiproton
The antiproton, p̄ (pronounced p-bar), is the antiparticle of the proton. Antiprotons are stable, but they are typically short-lived, since any collision with a proton will cause both particles to be annihilated in a burst of energy. The existence of the antiproton, with electric charge of −1e, opposite to the electric charge of +1e of the proton, was predicted by Paul Dirac in his 1933 Nobel Prize lecture. Dirac received the Nobel Prize for his 1928 publication of his Dirac equation that predicted the existence of positive and negative solutions to Einstein's energy equation (E² = (pc)² + (mc²)²) and the existence of the positron, the antimatter analog of the electron, with opposite charge and spin. The antiproton was first experimentally confirmed in 1955 at the Bevatron particle accelerator by University of California, Berkeley physicists Emilio Segrè and Owen Chamberlain, for which they were awarded the 1959 Nobel Prize in Physics. In terms of valence quarks, an antiproton consists of two up antiquarks and one down antiquark (u̅u̅d̅). The properties of the antiproton that have been measured all match the corresponding properties of the proton, with the exception that the antiproton has electric charge and magnetic moment that are the opposites of those in the proton, which is to be expected from the antimatter equivalent of a proton. The questions of how matter is different from antimatter, and the relevance of antimatter in explaining how our universe survived the Big Bang, remain open problems—open, in part, due to the relative scarcity of antimatter in today's universe. Occurrence in nature Antiprotons have been detected in cosmic rays beginning in 1979, first by balloon-borne experiments and more recently by satellite-based detectors. The standard picture for their presence in cosmic rays is that they are produced in collisions of cosmic ray protons with atomic nuclei in the interstellar medium, via the reaction, where A represents a nucleus: p + A → p̄ + p + p + A The secondary antiprotons (p̄) then propagate through the galaxy, confined by the galactic magnetic fields. Their energy spectrum is modified by collisions with other atoms in the interstellar medium, and antiprotons can also be lost by "leaking out" of the galaxy. The antiproton cosmic ray energy spectrum is now measured reliably and is consistent with this standard picture of antiproton production by cosmic ray collisions. These experimental measurements set upper limits on the number of antiprotons that could be produced in exotic ways, such as from annihilation of supersymmetric dark matter particles in the galaxy or from the Hawking radiation caused by the evaporation of primordial black holes. This also provides a lower limit on the antiproton lifetime of about 1–10 million years. Since the galactic storage time of antiprotons is about 10 million years, an intrinsic decay lifetime would modify the galactic residence time and distort the spectrum of cosmic ray antiprotons. This is significantly more stringent than the best laboratory limits on the antiproton lifetime, which include results from the LEAR collaboration at CERN, the antihydrogen Penning trap of Gabrielse et al., the BASE experiment at CERN, and the APEX collaboration at Fermilab, which searched for specific antiproton decay modes.
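For context on the production reaction quoted above, the minimum energy needed to create an antiproton can be estimated from relativistic kinematics. The sketch below is an added back-of-the-envelope calculation, not taken from the article, and assumes the simplest case of a proton striking a free proton at rest (p + p → p + p + p + p̄); for a nuclear target, Fermi motion lowers the effective threshold somewhat. The result of roughly 5.6 GeV of kinetic energy is consistent with the Bevatron discovery mentioned above.

# Threshold estimate: the invariant mass sqrt(s) must reach 4*m_p to make p + p + p + pbar.
m_p = 0.938272   # proton rest energy in GeV

# Fixed-target kinematics: s = 2*m_p**2 + 2*m_p*E_lab, so E_lab = 7*m_p at threshold.
E_lab = (16 * m_p**2 - 2 * m_p**2) / (2 * m_p)
T_threshold = E_lab - m_p                       # kinetic energy of the beam proton
print(f"threshold kinetic energy: about {T_threshold:.2f} GeV")   # about 5.63 GeV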
The magnitudes of the properties of the antiproton are predicted by CPT symmetry to be exactly related to those of the proton. In particular, CPT symmetry predicts the mass and lifetime of the antiproton to be the same as those of the proton, and the electric charge and magnetic moment of the antiproton to be opposite in sign and equal in magnitude to those of the proton. CPT symmetry is a basic consequence of quantum field theory and no violations of it have ever been detected. List of recent cosmic ray detection experiments BESS: balloon-borne experiment, flown in 1993, 1995, 1997, 2000, 2002, 2004 (Polar-I) and 2007 (Polar-II). CAPRICE: balloon-borne experiment, flown in 1994 and 1998. HEAT: balloon-borne experiment, flown in 2000. AMS: space-based experiment, prototype flown on the Space Shuttle in 1998, intended for the International Space Station, launched May 2011. PAMELA: satellite experiment to detect cosmic rays and antimatter from space, launched June 2006. A recent report described the discovery of 28 antiprotons in the South Atlantic Anomaly. Modern experiments and applications Production Antiprotons were routinely produced at Fermilab for collider physics operations in the Tevatron, where they were collided with protons. The use of antiprotons allows for a higher average energy of collisions between quarks and antiquarks than would be possible in proton–proton collisions. This is because the valence quarks in the proton, and the valence antiquarks in the antiproton, tend to carry the largest fraction of the proton or antiproton's momentum. Formation of antiprotons requires energy equivalent to a temperature of 10 trillion K (10¹³ K), and this does not tend to happen naturally. However, at CERN, protons are accelerated in the Proton Synchrotron to an energy of 26 GeV and then smashed into an iridium rod. The protons bounce off the iridium nuclei with enough energy for matter to be created. A range of particles and antiparticles are formed, and the antiprotons are separated off using magnets in vacuum. Measurements In July 2011, the ASACUSA experiment at CERN determined the mass of the antiproton to be about 1,836.15 times that of the electron. This is the same as the mass of a proton, within the level of certainty of the experiment. In October 2017, scientists working on the BASE experiment at CERN reported a measurement of the antiproton magnetic moment to a precision of 1.5 parts per billion. It is consistent with the most precise measurement of the proton magnetic moment (also made by BASE in 2014), which supports the hypothesis of CPT symmetry. This measurement represents the first time that a property of antimatter is known more precisely than the equivalent property in matter. In January 2022, by comparing the charge-to-mass ratios between the antiproton and the negatively charged hydrogen ion, the BASE experiment determined that the antiproton's charge-to-mass ratio is identical to the proton's, down to 16 parts per trillion. Possible applications Antiprotons have been shown within laboratory experiments to have the potential to treat certain cancers, in a method similar to that currently used for ion (proton) therapy. The primary difference between antiproton therapy and proton therapy is that following ion energy deposition the antiproton annihilates, depositing additional energy in the cancerous region. See also Antineutron Antiprotonic helium List of particles Recycling antimatter Positron References Antimatter Baryons Nucleons Proton
Antiproton
[ "Physics" ]
1,433
[ "Antimatter", "Nucleons", "Matter", "Nuclear physics" ]
87,945
https://en.wikipedia.org/wiki/Terminator%20%28solar%29
A terminator or twilight zone is a moving line that divides the daylit side and the dark night side of a planetary body. The terminator is defined as the locus of points on a planet or moon where the line through the center of its parent star is tangent to the surface. An observer on the terminator of such an orbiting body with an atmosphere would experience twilight due to light scattering by particles in the gaseous layer. Earth's terminator On Earth, the terminator is a circle with a diameter that is approximately that of Earth. The terminator passes through any point on Earth's surface twice a day, at sunrise and at sunset, apart from polar regions where this only occurs when the point is not experiencing midnight sun or polar night. The circle separates the portion of Earth experiencing daylight from that experiencing darkness (night). While a little over one half of Earth is illuminated at any point in time (with exceptions during eclipses), the terminator path varies by time of day due to Earth's rotation on its axis. The terminator path also varies by time of year due to Earth's orbital revolution around the Sun; thus, the plane of the terminator is nearly parallel to planes created by lines of longitude during the equinoxes, and its maximum angle is approximately 23.5° to the pole during the solstices. Surface transit speed At the equator, under flat conditions (without obstructions like mountains or at a height above any such obstructions), the terminator moves at approximately 463 metres per second (about 1,670 km/h). This speed can appear to increase when near obstructions, such as the height of a mountain, as the shadow of the obstruction will be cast over the ground in advance of the terminator along a flat landscape. The speed of the terminator decreases as it approaches the poles, where it can reach a speed of zero (full-day sunlight or darkness). Supersonic aircraft like jet fighters or Concorde and Tupolev Tu-144 supersonic transports are the only aircraft able to overtake the maximum speed of the terminator at the equator. However, slower vehicles can overtake the terminator at higher latitudes, and it is possible to walk faster than the terminator at the poles, near to the equinoxes. The visual effect is that of seeing the sun rise in the west, or set in the east. Grey-line radio propagation Strength of radio propagation changes between day- and night-side of the ionosphere. This is primarily because the D layer, which absorbs high frequency signals, disappears rapidly on the dark side of the terminator, whereas the E and F layers above the D layer take longer to form. This time-difference puts the ionosphere into a unique intermediate state along the terminator, called the "grey line". Amateur radio operators take advantage of conditions along the terminator to perform long-distance communications. Called "gray-line" or "grey-line" propagation, this signal path is a type of skywave propagation. Under good conditions, radio waves can travel along the terminator to antipodal points. Gallery Lunar terminator The lunar terminator is the division between the illuminated and dark hemispheres of the Moon. It is the lunar equivalent of the division between night and day on the Earth spheroid, although the Moon's much lower rate of rotation means it takes longer for it to pass across the surface. At the equator, it moves at about 15.4 kilometres per hour, as fast as an athletic human can run on Earth.
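The transit speeds quoted above follow from simple arithmetic: the equatorial circumference divided by the length of one day-night cycle. A quick sanity check using rounded standard values (the constants below are approximations added for illustration, not figures quoted by this article):

# Terminator surface speed at the equator, for Earth and for the Moon.
earth_circumference_km = 40075.0        # equatorial circumference
solar_day_hours = 24.0
moon_circumference_km = 10921.0         # lunar equatorial circumference
synodic_month_hours = 29.53 * 24.0      # one lunar day-night cycle

earth_speed_kmh = earth_circumference_km / solar_day_hours
moon_speed_kmh = moon_circumference_km / synodic_month_hours

print(f"Earth terminator at equator: {earth_speed_kmh:.0f} km/h "
      f"({earth_speed_kmh / 3.6:.0f} m/s)")    # ~1670 km/h, ~464 m/s
print(f"Lunar terminator at equator: {moon_speed_kmh:.1f} km/h")   # ~15.4 km/h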
Due to the angle at which sunlight strikes this portion of the Moon, shadows cast by craters and other geological features are elongated, thereby making such features more apparent to the observer. This phenomenon is similar to the lengthening of shadows on Earth when the Sun is low in the sky. For this reason, much lunar photographic study centers on the illuminated area near the lunar terminator, and the resulting shadows provide accurate descriptions of the lunar terrain. Lunar terminator illusion The lunar terminator (or tilt) illusion is an optical illusion arising from the expectation of an observer on Earth that the direction of sunlight illuminating the Moon (i.e. a line perpendicular to the terminator) should correspond with the position of the Sun, but does not appear to do so. The illusion results from misinterpreting the arrangement of objects in the sky according to intuition based on planar geometry. Scientific significance Examination of a terminator can yield information about the surface of a planetary body; for example, the presence of an atmosphere can create a fuzzier terminator. As the particles within an atmosphere are at a higher elevation, the light source can remain visible even after it has set at ground level. These particles scatter the light, reflecting some of it to the ground. Hence, the sky can remain illuminated even after the sun has set. Images showing a planetary terminator can be used to map topography: the position of the tip of a mountain behind the terminator line is measured when the Sun still or already illuminates it while the base of the mountain remains in shadow.  Low Earth orbit satellites take advantage of the fact that certain polar orbits set near the terminator do not suffer from eclipse, therefore their solar cells are continuously lit by sunlight. Such orbits are called dawn-dusk orbits, a type of Sun-synchronous orbit. This prolongs the operational life of a LEO satellite, as onboard battery life is prolonged. It also enables specific experiments that require minimum interference from the Sun, as the designers can opt to install the relevant sensors on the dark side of the satellite. See also Subsolar point Ground track Colongitude Lunar grazing occultation Lunar phase References External links Current terminator aa.usno.navy.mil – Website calculating synthetic images (B&W or color) representing the terminator for a given time (date & hour) The Moon Terminator Illusion (video) Earth phenomena Light Solar phenomena Parts of a day Articles containing video clips Lunar science
Terminator (solar)
[ "Physics", "Technology" ]
1,207
[ "Physical phenomena", "Earth phenomena", "Spectrum (physical sciences)", "Parts of a day", "Electromagnetic spectrum", "Waves", "Light", "Solar phenomena", "Stellar phenomena", "Components" ]
87,947
https://en.wikipedia.org/wiki/Sharkovskii%27s%20theorem
In mathematics, Sharkovskii's theorem (also spelled Sharkovsky, Sharkovskiy, Šarkovskii or Sarkovskii), named after Oleksandr Mykolayovych Sharkovsky, who published it in 1964, is a result about discrete dynamical systems. One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period. Statement For some interval I ⊆ ℝ, suppose that f : I → I is a continuous function. The number x is called a periodic point of period m if f^m(x) = x, where f^m denotes the iterated function obtained by composition of m copies of f. The number x is said to have least period m if, in addition, f^k(x) ≠ x for all 0 < k < m. Sharkovskii's theorem concerns the possible least periods of periodic points of f. Consider the following ordering of the positive integers, sometimes called the Sharkovskii ordering: 3 ≺ 5 ≺ 7 ≺ ⋯ ≺ 2·3 ≺ 2·5 ≺ ⋯ ≺ 4·3 ≺ 4·5 ≺ ⋯ ≺ 8·3 ≺ 8·5 ≺ ⋯ ≺ ⋯ ≺ 2³ ≺ 2² ≺ 2 ≺ 1. It consists of: the odd numbers in increasing order, 2 times the odd numbers in increasing order, 4 times the odd numbers in increasing order, 8 times the odd numbers in increasing order, etc., and finally, the powers of two in decreasing order. This ordering is a total order: every positive integer appears exactly once somewhere on this list. However, it is not a well-order. In a well-order, every subset would have an earliest element, but in this order there is no earliest power of two. Sharkovskii's theorem states that if f has a periodic point of least period m, and m precedes n in the above ordering, then f has also a periodic point of least period n. One consequence is that if f has only finitely many periodic points, then they must all have periods that are powers of two. Furthermore, if there is a periodic point of period three, then there are periodic points of all other periods. Sharkovskii's theorem does not state that there are stable cycles of those periods, just that there are cycles of those periods. For systems such as the logistic map, the bifurcation diagram shows a range of parameter values for which apparently the only cycle has period 3. In fact, there must be cycles of all periods there, but they are not stable and therefore not visible on the computer-generated picture. The assumption of continuity is important. Without this assumption, the discontinuous piecewise linear function f : [0, 3) → [0, 3) defined as f(x) = x + 1 for 0 ≤ x < 2 and f(x) = x − 2 for 2 ≤ x < 3, for which every value has period 3, would be a counterexample. Similarly essential is the assumption of f being defined on an interval. Otherwise f(x) = (1 − x)⁻¹, which is defined on the real numbers except x = 1, and for which every non-zero value has period 3, would be a counterexample. Generalizations and related results Sharkovskii also proved the converse theorem: every upper set of the above order is the set of periods for some continuous function from an interval to itself. In fact all such sets of periods are achieved by the family of truncated tent maps T_h(x) = min(h, 1 − 2|x − 1/2|) on [0, 1], for h ∈ [0, 1], except for the empty set of periods, which is achieved by a map without periodic points such as x ↦ x + 1 on the real line. On the other hand, with additional information on the combinatorial structure of the interval map acting on the points in a periodic orbit, a period-n point may force period-3 (and hence all periods). Namely, if the orbit type (the cyclic permutation generated by the map acting on the points in the periodic orbit) has a so-called stretching pair, then this implies the existence of a periodic point of period-3. It can be shown (in an asymptotic sense) that almost all cyclic permutations admit at least one stretching pair, and hence almost all orbit types imply period-3.
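To illustrate the remark about the period-3 window (an added sketch, not part of the original article): iterating the logistic map x ↦ r·x·(1 − x) at a parameter inside the window settles onto an attracting cycle of period 3, while the cycles of all other periods guaranteed by the theorem remain present but unstable and invisible. The value r = 3.83 is assumed here to lie inside the window (roughly 3.829 to 3.841):

def logistic(x, r=3.83):
    return r * x * (1.0 - x)

x = 0.5
for _ in range(10_000):      # discard the transient so the orbit reaches the attractor
    x = logistic(x)

orbit = []
for _ in range(6):           # record a few iterates on the attractor
    x = logistic(x)
    orbit.append(round(x, 6))

print(orbit)                 # the values repeat with period 3, e.g. [a, b, c, a, b, c]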
Tien-Yien Li and James A. Yorke showed in 1975 that not only does the existence of a period-3 cycle imply the existence of cycles of all periods, but in addition it implies the existence of an uncountable infinitude of points that never map to any cycle (chaotic points)—a property known as period three implies chaos. Sharkovskii's theorem does not immediately apply to dynamical systems on other topological spaces. It is easy to find a circle map with periodic points of period 3 only: take a rotation by 120 degrees, for example. But some generalizations are possible, typically involving the mapping class group of the space minus a periodic orbit. For example, Peter Kloeden showed that Sharkovskii's theorem holds for triangular mappings, i.e., mappings for which the i-th component depends only on the first i components x₁, …, xᵢ. References External links Keith Burns and Boris Hasselblatt, The Sharkovsky theorem: a natural direct proof scholarpedia: Sharkovsky ordering by Aleksandr Nikolayevich Sharkovsky Eponymous theorems of physics Theorems in dynamical systems Soviet inventions
Sharkovskii's theorem
[ "Physics", "Mathematics" ]
966
[ "Theorems in dynamical systems", "Mathematical theorems", "Equations of physics", "Eponymous theorems of physics", "Physics theorems", "Mathematical problems", "Dynamical systems" ]
88,168
https://en.wikipedia.org/wiki/Bilge
The bilge of a ship or boat is the part of the hull that would rest on the ground if the vessel were unsupported by water. The "turn of the bilge" is the transition from the bottom of a hull to the sides of a hull. Internally, the bilges (usually used in the plural in this context) is the lowest compartment on a ship or seaplane, on either side of the keel and (in a traditional wooden vessel) between the floors. The first known use of the word is from 1513. Bilge water The word is sometimes also used to describe the water that collects in this area. Water that does not drain off the side of the deck or through a hole in the hull, which it would typically do via a scupper, instead drains down into the ship into the bilge. This water may be from rough seas, rain, leaks in the hull or stuffing box, or other interior spillage. The collected water must be pumped out to prevent the bilge from becoming too full and threatening to sink the ship. Bilge water can be found aboard almost every vessel. Depending on the ship's design and function, bilge water may contain water, oil, urine, detergents, solvents, chemicals, pitch, particles, and other materials. By housing water in a compartment, the bilge keeps these liquids below decks, making it safer for the crew to operate the vessel and for people to move around in heavy weather. Regulations Discharge of bilge liquids is regulated for commercial vessels under Marpol Annex I as it can lead to bilge pollution. Princess Cruises' Caribbean Princess was fined $40 million USD for dumping bilge into the ocean in 2016. Bilge water can be offloaded at a port, or treated to remove pollutants. Even treated bilge water is harmful to the environment, all the way up the food chain. The European Maritime Safety Agency tracks bilge dumping by satellite. There are an estimated 3000 cases of illegal bilge dumping per year in Europe. Bilge maintenance Methods of removing water from bilges have included buckets and pumps. Modern vessels usually use electric bilge pumps controlled by automated bilge switches. Bilge coatings are applied to protect the bilge surfaces. The water that collects is often noxious, and "bilge water" or just "bilge" has thus become a derogatory colloquial term used to refer to something bad, fouled, or otherwise offensive. Bilges may contain partitions to damp the rush of water from side to side and fore and aft to avoid destabilizing the ship due to the free surface effect. Partitions may contain limber holes to allow water to flow at a controlled rate into lower compartments. Cleaning the bilge and bilge water is also possible using "passive" methods such as bioremediation, which uses bacteria or archaea to break down the hydrocarbons in the bilge water into harmless byproducts. Of the two general schools of thought on bioremediation, the one that uses beneficial microbes local to the bilge is regarded as being more "green" because it does not introduce foreign bacteria to the waters that the vessel sits in or travels through. But archaea that are non-indigenous also can be used and discharged, since the archaea will die off anyway, leaving only local indigenous microbes remaining. Bilge alarm Large commercial vessels need bilge alarms to notify the crew how much and where the bilge water is rising. These bilge alarms are electric devices that are also designed to detect leakages in the ship early before major damage is done to the vessel. Oil content meters are sometimes referred to as bilge alarms. 
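As a rough illustration of the automated control described above (a generic sketch with made-up switch heights, not a description of any particular vessel's equipment), a bilge switch typically starts the pump above one float level, stops it below a lower one, and a separate high-water switch raises the alarm if the level keeps rising despite the pump running:

PUMP_ON_CM = 10.0     # hypothetical float-switch heights
PUMP_OFF_CM = 4.0
ALARM_CM = 25.0

def bilge_controller(level_cm, pump_running):
    """Return (pump_running, alarm) for the current bilge water level."""
    if level_cm >= PUMP_ON_CM:
        pump_running = True           # start pumping once the upper float lifts
    elif level_cm <= PUMP_OFF_CM:
        pump_running = False          # stop once the level drops below the lower float
    alarm = level_cm >= ALARM_CM      # independent high-water alarm
    return pump_running, alarm

# Example: rising water turns the pump on at 10 cm and trips the alarm at 25 cm.
state = False
for level in [2, 6, 11, 18, 26, 14, 3]:
    state, alarm = bilge_controller(level, state)
    print(level, state, alarm)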
"Bilge rat" The term "bilge rat" typically refers to members of a ship's crew who work in the bowels of the vessel, where rats would sometimes breed. The term was sometimes used in the Royal Navy to describe stokers who shovelled coal into the boilers of steam-powered warships. It is utilized as an insult, and often employed as such by writers of pirate fiction. In popular culture The term bilgewater is commonly used to mean nonsense. See also Basement Sump Notes References Naval architecture Ship compartments ru:Трюм
Bilge
[ "Engineering" ]
879
[ "Naval architecture", "Marine engineering" ]
88,231
https://en.wikipedia.org/wiki/List%20of%20IBM%20products
The list of IBM products is a partial list of products, services, and subsidiaries of International Business Machines (IBM) Corporation and its predecessor corporations, beginning in the 1890s. Context Products, services, and subsidiaries have been offered from International Business Machines (IBM) Corporation and its predecessor corporations since the 1890s. This list comprises those offerings and is eclectic; it includes, for example, the AN/FSQ-7, which was not a product in the sense of offered for sale, but was a product in the sense of manufactured—produced by the labor of IBM. Several machines manufactured for the Astronomical Computing Bureau at Columbia University are included, as are some machines built only as demonstrations of IBM technology. Missing are many RPQs, OEM products (semiconductors, for example), and supplies (punched cards, for example). These products and others are missing simply because no one has added them. IBM sometimes uses the same number for a system and for the principal component of that system. For example, the IBM 604 Calculating Unit is a component of the IBM 604 Calculating Punch. And different IBM divisions used the same model numbers; for example IBM 01 without context clues could be a reference to a keypunch or to IBM's first electric typewriter. Number sequence may not correspond to product development sequence. For example, the 402 tabulator was an improved, modernized 405. IBM uses two naming structures for its modern hardware products. Products are normally given a three- or four-digit machine type and a model number (it can be a mix of letters and numbers). A product may also have a marketing or brand name. For instance, 2107 is the machine type for the IBM System Storage DS8000. While the majority of products are listed here by machine type, there are instances where only a marketing or brand name is used. Care should be taken when searching for a particular product as sometimes the type and model numbers overlap. For instance the IBM storage product known as the Enterprise Storage Server is machine type 2105, and the IBM printing product known as the IBM Infoprint 2105 is machine type 2705, so searching for an IBM 2105 could result in two different products—or the wrong product—being found. IBM introduced the 80-column rectangular hole punched card in 1928. Pre-1928 machine models that continued in production with the new 80-column card format had the same model number as before. Machines manufactured prior to 1928 were, in some cases, retrofitted with 80-column card readers and/or punches thus there existed machines with pre-1928 dates of manufacture that contain 1928 technology. This list is organized by classifications of both machines and applications, rather than by product name. Thus some (few) entries will be duplicated. The 1420, for example, is listed both as a member of the 1401 family and as a machine for Bank and finance. IBM product names have varied over the years; for example these two texts both reference the same product. Mechanical Key Punch, Type 1 (in Machine Methods of Accounting, IBM, 1936) Mechanical Punch, Type 001 (in IBM Electric Punched Card Accounting Machines: Principles of Operation, IBM, 1946) This article uses the name, or combination of names, most descriptive of the product. Thus the entry for the above is IBM 001: Mechanical Key Punch Products of The Tabulating Machine Company can be identified by date, before 1933 when the subsidiaries were merged into IBM. 
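To make the type-model naming scheme described above concrete, the small illustrative lookup below (the helper function and the model number 922 are hypothetical; the machine types quoted are those given in this article) shows why a bare number such as 2105 needs context:

MACHINE_TYPES = {
    "2105": "Enterprise Storage Server",
    "2705": "IBM Infoprint 2105",      # marketed with "2105" in its name, different machine type
    "2107": "IBM System Storage DS8000",
}

def describe(designation: str) -> str:
    machine_type, _, model = designation.partition("-")
    name = MACHINE_TYPES.get(machine_type, "unknown machine type")
    return f"type {machine_type}, model {model or 'n/a'}: {name}"

print(describe("2107-922"))   # type 2107, model 922: IBM System Storage DS8000
print(describe("2105"))       # the storage server, not the Infoprint printer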
Unit record equipment Keypunches and verifiers Hollerith Keyboard (pantograph) punch: Manual card punch, 1890 IBM 001: Mechanical Key Punch, 1910 IBM 003: Lever Set Gang Punch, 1920 IBM 010: Card Punch IBM 011: Electric Key Punch, 1923 IBM 012: Electric Duplicating Key Punch, 1926 IBM 013: Badge Punch IBM 015: Motor Drive Key Punch, 1915 IBM 016: Motor Drive Duplicating Key Punch, 1927 IBM 020: Card Punch IBM 024: Card Punch (electronic—tube, BCD zone codes); 1949 IBM 026: Printing Card Punch (electronic—tube, BCD zone codes); 1949 IBM 027: Card Proof Punch, 1956 IBM 028: Printing Card Proof Punch, 1956 IBM 029: Card Punch (electric—diodes & relays, EBCDIC zone codes); 1964 IBM 031: Alphabetic Duplicating Key Punch; 1933 IBM 032: Alphabetic Printing Key Punch; 1933 IBM 033: Alphabetic Duplicating Printing Punch IBM 034: Alphabetic Duplicating Printing Key Punch; 1933 IBM 036: Alphabetic Printing Punch, 1930 IBM 037: Alphabetic Stencil Punch IBM 040: Tape Controlled Card Punch; 1941 IBM 041: Tape to Card Punch IBM 043: Tape Controlled Card Punch IBM 044: Tape Controlled Card Punch IBM 046: Tape-to-Card Punch IBM 047: Tape-to-Card Printing Punch IBM 051: Mechanical Verifier IBM 052: Motor Drive Verifier IBM 053: Motor Drive Verifier IBM 054: Motor Drive Verifier IBM 055: Alphabetic Verifier, 1946 IBM 056: Card Verifier (electronic—tube, BCD zone codes); 1949 IBM 058: Card Operated Typewriter IBM 059: Card Verifier (electric, diodes & relays, EBCDIC zone codes); 1964 IBM 060: Card to Tape Punch (5 channel) IBM 063: Card-Controlled Tape Punch IBM Data Transceiver: A 65 or 66 in combination with a 67 or 68 IBM 065: Data Transceiver Card Unit IBM 066: Data Transceiver Printing Card Unit IBM 067: Telegraph Signal Unit for 065/066 IBM 068: Telephone Signal Unit for 065/066 IBM 116: Numeric Duplicating Punch IBM 129: Card Data Recorder (integrated circuits—SLT, EBCDIC zone codes); 1971 IBM 131: Alphabetic Duplicating Punch IBM 143: Tape Controlled Card Punch IBM 151: Verifier IBM 155: Numeric Verifier IBM 156: Alphabetic Verifier IBM 163: Card Controlled Tape Punch IBM 210: Electric Verifier IBM 797: Document Numbering Punch; 1951 IBM 824: Typewriter Card Punch IBM 826: Typewriter Card Punch Printing IBM 884: Typewriter Tape Punch IBM 963: Tape Punch IBM 5471: Printer-Keyboard for System/3 IBM 5475: Data Entry Keyboard for System/3 IBM 5496: Data Recorder, Keypunch for IBM System/3's 96 column cards IBM 5924: IBM 029 attached with a special keyboard to allow input of Chinese, Japanese and Korean characters (RPQ) IBM Port-A-Punch: Port-A-Punch; 1958 IBM Votomatic: Voting machine (Port-A-Punch balloting, 1965) Sorters, statistical, and derived machines Hollerith automatic sorter: Horizontal sorter, 1901 Hollerith 2: Card counting sorter IBM 70: Hollerith Vertical Sorter; 1908 IBM 71: Vertical Sorter; 1928 IBM 74: Printing Card Counting Sorter, 1930 IBM 75: Card Counting Sorter IBM 76: Searching Sorter Punch IBM 80: Card Sorter, 1925 IBM 81: Card Stencil Sorter IBM 82: Card Sorter, 1948 IBM 83: Card Sorter, 1955 IBM 84: Card Sorter, 1959 IBM 86: Coupon Sorter IBM 101: Statistical Machine; 1952 IBM 524: Duplicating Summary Punch (Numerical card punch, features of an 016 and can also be connected to a 101) IBM 106: Coupon Statistical Machine IBM 108: Card Proving Machine; 196X IBM 867: IBM 108 Output Typewriter IBM 109: Statistical Sorter IBM 5486: Card Sorter for IBM System/3's 96 column cards IBM 9900: Continuous Multiple Access Comparator Collators IBM 072: Alphabetic Collator IBM 077: Electric 
Punched Card Collator; 1937 IBM 078: Stencil Collator IBM 079: Stencil Printing Collator IBM 085: Numerical Collator; 1957 IBM 087: Alphabetic Collator IBM 088: Numerical Collator IBM 089: Alphabetic Collator IBM 188: Alphabetic Collator Reproducing punch, summary punch, gang punch, and derived machines IBM 501: Automatic Numbering Gang Punch IBM 511: Automatic Reproducing Punch IBM 512: Reproducing Punch, 1940 IBM 513: Reproducing Punch, 1945 IBM 514: Reproducing Punch IBM 515: Interpreting Reproducing Punch IBM 516: Automatic Summary Punch IBM 517: Gang Summary Punch, 1929 IBM 518: Gang Summary Punch, 1929 IBM 519: End Printing Reproducing Punch, 1946 IBM 520: Computing Punch IBM 522: Duplicator Summary Punch IBM 523: Gang Summary Punch; 1949 IBM 524: Duplicating Summary Punch (Numerical card punch, features of an 016 and up to 2 can also be connected to a 101) IBM 526: Printing Summary Punch (electronic, BCD zone codes, "an 026 arranged for summary punching") IBM 528: Accumulating Reproducer IBM 534: Card Punch (connects to 870, 108, 1230, 1232) IBM 545: Output Punch (an 029 plus connector) IBM 549: Ticket Converter Interpreters IBM 548: Interpreter IBM 550: Numerical Interpreter, 1935 IBM 551: Automatic Check Writing Interpreter, 1935 IBM 552: Alphabetic Interpreter IBM 554: Interpreter IBM 555: Alphabetic Interpreter IBM 556: Interpreter IBM 557: Alphabetic Interpreter IBM 938: Electrostatic Card Printer Tabulators, accounting machines, printers Hollerith Census Tabulator: 1890 Hollerith Integrating Tabulator: 1896 Hollerith Automatic Feed Tabulator: 1900 IBM 090: Hollerith Type I Tabulator, 1906 IBM 091: Hollerith Type III Tabulator, 1921 IBM 092: Electric Tabulating Machine(first Plugboard, later known as a Control Panel) IBM 093: Automatic Control Tabulator, 1914 (2 sets of reading brushes, STOP cards not needed) Hollerith Type 3-S Tabulator: 192x IBM 094: Non-print Automatic Checking Machine IBM 211: Accounting Machine IBM 212: Accounting Machine IBM 285: Electric Accounting Machine; 1927 IBM 297: Numerical Accounting Machine IBM 298: Numerical Accounting Machine IBM 301: Hollerith Type IV Tabulator, 1928 IBM 375: Invoicing Tabulator IBM Direct Subtraction Accounting Machine: IBM ATB: Alphabetic Tabulating model B; 1931 IBM ATC: Alphabetic Tabulating model C; 1931? 
(soon after the ATB) IBM 401: Tabulator; 1933 IBM Electromatic Table Printing Machine: Typesetting-quality printer; 1946 402 and known versions IBM 402: Alphabetic Accounting Machine 1948 IBM 402: Computing Accounting Machine (with solid-state computing device) IBM 403: Alphabetic Accounting Machine, 1948(MLP—multiple line printing)(version of 402) IBM 403: Computing Accounting Machine (with solid-state computing device)(version of 402) IBM 412: Accounting Machine (version of 402) IBM 417: Numerical Accounting Machine (version of 402) IBM 419: Numerical Accounting Machine(version of 402) IBM 513, 514, 517, 519, 523, 526, 528, or 549: Summary punch for 402 IBM 916: Bill Feed for 402(single sheet feed) IBM 923: Tape-Controlled Carriage for 402 IBM 924: Dual Feed Tape Carriage for 402 IBM 1997: Tape-Controlled Bill Feed 402 404 IBM 404: Accounting Machine 405 and known versions IBM 405: Alphabetic Bookkeeping and Accounting Machine; 1934 (later: 405 Electric Punched Card Accounting Machine) IBM 416: Numerical Accounting Machine(version of 405) IBM 514, 519, 523, 526, 528, 549: Summary punch for 405 IBM 921: International Automatic Carriage for 405, 416 (1938) 407 and known versions IBM 407: Alphabetic Accounting Machine; 1949 IBM 407: Computing Accounting Machine (with solid-state computing device) IBM 408: Alphabetic Accounting Machine, 1957(version of 407) IBM 409: Accounting Machine; 1959(version of 407) IBM 421: WTC Computing Accounting Machine (with solid-state computing device)(version of 407) IBM 444: Accounting Machine(version of 407) IBM 447: WTC Computing Accounting Machine (with solid-state computing device)(version of 407) IBM 514, 519, 523, 528, 549: Summary punch for 407 IBM 922: Tape-Controlled Carriage for 407 IBM 418: Numerical Accounting Machine IBM 420: Alphabetical Accounting Machine IBM 424: WTC Computing Accounting Machine (with solid-state computing device) IBM 426: Accounting Machine IBM 427: WTC Accounting Machine (for instance, suitable for British £sd currency) IBM 450: Accounting Machine IBM 632: Accounting Machine IBM 850: Stencil Cutter IBM 856: Card-A-Type IBM 857: Document Writer IBM 858: Cardatype Accounting Machine, 1955 IBM 534: IBM 858 Card Punch (similar to 024) IBM 536: IBM 858 Printing Card Punch (similar to 026) IBM 858: IBM 858 Control Unit IBM 863: IBM 858 Arithmetic Unit IBM 866: IBM 858 Non-Transmitting Typewriter IBM 868: IBM 858 Transmitting Typewriter IBM 961: IBM 858 8-channel Tape Punch IBM 962: IBM 858 5-channel Tape Punch IBM 972-1: IBM 858 Auxiliary Keyboard for Manual Entry—Twelve columns of keys* IBM 861: Stencil Charger IBM 869: Typewriter IBM 870: Document Writing System IBM 834: IBM 870 Control Unit IBM 836: IBM 870 Control Unit IBM 865: IBM 870 Output typewriters IBM 866: IBM 870 Non-transmitting Typewriter IBM 868: IBM 870 Transmitting Typewriter IBM 536: IBM 870 Printing Card Punch IBM 961: IBM 870 Tape Punch (8 channel) IBM 962: IBM 870 Tape Punch (5 track) IBM 972-2: IBM 870 Auxiliary Keyboard IBM 919: Comparing Bill Feed IBM 920: Bill Feed IBM 921: International Automatic Carriage IBM 933: Carbon Ribbon Feed IBM 939: Electrostatic Address Label Printer IBM 953: Multiline Posting Machine IBM 954: Facsimile Posting Machine (fused carbon copy fanfold printout onto an account ledger card) IBM 964: Auxiliary Printing Tape Punch IBM 966: Code Comparing Unit IBM 973: Keyboard IBM 6400: Accounting Machine system; 1962 IBM 6405: Account Machine IBM 6410: Account Machine IBM 6420: Account Machine IBM 6430: Account Machine IBM 6422: Auto 
Ledger Feed IBM 6425: Magnetic Ledger Unit IBM 6426: Card Punch IBM 6428: Card Reader IBM 6454: Paper Tape Reader IBM 6455: Paper Tape Punch Calculators IBM 20–8704-1 Machine Load Computer: Slide rule to calculate punch card processing time; 1953-1959 IBM 600: Automatic Multiplying Punch; 1931 IBM 601: Electric Multiplier aka Automatic Cross-Footing Multiplying Punch; 1933 IBM Relay Calculator: aka The IBM Pluggable Sequence Relay Calculator (Aberdeen Machine) IBM 602: Calculating Punch; 1946 IBM 602A: Calculating Punch; 1948 IBM 603: Electronic Multiplier; 1946 IBM 604: Electronic Calculating Punch; 1948 IBM 604: IBM 604 Calculating Unit IBM 521: IBM 604 Card Read Punch IBM 541: IBM 604 Card Read Punch IBM 605: Electronic Calculator; 1949 (version of 604) IBM 527: IBM 605 High-Speed Punch IBM CPC: Card Programmed Electronic Calculator; 1949 IBM 604: IBM 604 Calculating Unit IBM 521: IBM 604 Card Read Punch IBM 402: Accounting Machine IBM 417: Accounting Machine IBM 941: IBM CPC Auxiliary Storage Unit; (16—10-digit words) IBM CPC-II: Card Programmed Electronic Calculator; 1949 IBM 605: Electronic Calculating Punch IBM 527: Card Read Punch IBM 412: Accounting Machine IBM 418: Accounting Machine IBM 941: IBM CPC Auxiliary Storage Unit; (16—10-digit words) IBM 607: Electronic Calculator; 1953 IBM 529: IBM 607 Card Read Punch IBM 542: IBM 607 Card Read Punch IBM 942: IBM 607 Electronic Storage Unit; 1953 IBM 608: Transistorized Electronic Calculator; 1957 IBM 535: IBM 608 Card Read Punch IBM 609: Calculator; (transistorized) 1960 IBM 623: Calculating Punch IBM 625: Calculating Punch IBM 626: Calculating Punch IBM 628: Magnetic Core Calculator IBM 565: IBM 628 Punching Unit IBM 632, IBM 633: Electronic Typing Calculator; 1958 IBM 614: IBM 632/3 Typewriter output IBM 630: IBM 632 Arithmetic Unit IBM 631: IBM 632 Buffer memory IBM 634: IBM 632 Non-printing Card Punch IBM 635: IBM 632 Non-Printing Card Punch IBM 636: IBM 632/3 Printing Card Punch IBM 637: IBM 632 Printing Card Punch IBM 638: IBM 632 Companion Keyboard IBM 641: IBM 632 Card Reader IBM 645: IBM 632 Card Reader IBM 648: IBM 632 Tape Punch IBM 649: IBM 632 Paper Tape Reader IBM 644: Calculating Punch Time equipment division IBM manufactured a range of clocks and other devices until 1958 when they sold the Time Equipment Division to Simplex Time Recorder Company (SimplexGrinnell, as of 2001). Typewriters IBM Remote control keyboard IBM Electric typewriter: Model 01, 1935; Model 01 (Formsholder), Model 02 (Formswriter), Model 10 (Front Feed) and Model 01 (Carbon Ribbon Model), 1937; Chinese Typewriter and Model 04 Arabic Electric Typewriter, 1946; Model 07 Card Stencil Typewriter, 1947; Models 01 and 06 with Automatic Line Selector, 1948; IBM Electromatic typewriter: Model 03 (Hektowriter), 1938; Model 06 (Toll Biller), 1940; Model 08 (Auto. Formswriter) and Model 09 (Manifest), 1941; IBM Electric Executive Typewriter, 1944; IBM Electric typewriter, both Standard and Executive: Model A, 1948, 1949; Model B, 1954; Model C, 1959; Model D, 1967; Flexowriter: sold to Friden, Inc. in the late 1950s Typeball-based IBM Selectric typewriter: IBM 6121: IBM 700 Series Selectric I, 1961; IBM 6126: IBM 800 Series Selectric II (1971) and Correcting Selectric II (1973); IBM 6701, 6702, 6703, 6704, 6705: IBM Selectric III and Correcting Selectric III. 
Selectric-based typewriters: IBM Selectric Composer, 1966; IBM 6375: IBM Electronic Selectric Composer, 1975; IBM 6240: Magnetic card typewriter; 1977 IBM Electronic Typewriter 50 and Electronic Typewriter 60, 1978; IBM Personal Typewriter, 1982; Daisy wheel-based IBM Wheelwriter; Wheelwriter 3 and Wheelwriter 5, 1984; Wheelwriter System/20 and System/40, 1985; Wheelwriter 6, 1986; Wheelwriter Series II and Personal Wheelwriter, 1988; IBM Quietwriter; IBM dictation machines IBM dictation machines are always referenced by family and model name and never by machine type. In fact the models are sometimes mistakenly taken to be machine types. There are three brand names and several well known models: IBM Executary dictation equipment line (1960-1972). IBM Executary Model 211 Dictation Machine (6165-211) IBM Executary Model 212 Transcribing Machine (6166-212) IBM Executary Model 224 Dictation Unit (6161-224) IBM Executary Model 271 Recorder (6171-271) IBM input processing equipment (1972-1975) IBM 6:5 Cartridge System (1975-1981) 6:5 Recorder (6164-281) 6:5 Transcriber (6164-282) 6:5 Portable (6164-284) Copier/Duplicators IBM Copiers: IBM Copier (Machine type 6800-001); introduced 1970, withdrawn June 30, 1981 IBM Copier II (Machine type 6801-001); introduced 1972 IBM 3896 tape/document converter (a modified IBM Copier II); withdrawn 1980 IBM Series III Copier Model 10 (Machine type 6802-001); introduced 1976 IBM Series III Copier Model 20 (Machine type 6803-001); introduced 1976 IBM Series III Copier Model 30 (Machine type 6805-001) IBM Series III Copier Model 40 (Machine type 6806-001) IBM Series III Copier Model 50 (Machine type 6809-001) IBM Series III Copier/Duplicator Model 60 (Machine type 6808-001) IBM Series III Copier/Duplicator Model 70 (Machine type 8880-001) IBM Series III Copier/Duplicator Model 85 (Machine type 8885-001) IBM Executive 102 Copier (Machine type 6820-001);introduced 1981, withdrawn 1982 Collators (a collator was a feature of a copier, but was sold as a separate machine type): IBM 6852-001 Collator IBM 6852-002 Collator IBM 6852-003 Collator IBM 6852-004 Collator IBM 8881-001 Collator IBM 8881-002 Collator IBM also sold a range of copier supplies including paper rolls (marketed as IBM General Copy Bond), cut sheet paper (marketed as IBM multi-system paper) and toner. The IBM line of Copier/Duplicators, and their associated service contracts, were sold to Eastman Kodak in 1988. World War II ordnance and related products M1 Carbine: Rifle M7 grenade launchers for M1 Garand rifles Browning Automatic Rifle: light machine gun 20-millimeter aircraft cannon Aircraft and naval fire-control instruments 90-millimeter anti-aircraft gun directors and prediction units Supercharger impellers Norden bombsight Other non-computer products IBM 805: IBM Test Scoring Machine, 1938 IBM 820 Time Punch IBM 9902: Test Scoring Punch IBM Lectern: 1954 IBM Radiotype — IBM Scanistor: Experimental solid-state optical scanning device IBM Shoebox: Voice recognition, 1962 IBM Ticketograph: 1937 IBM Toll Collection System — IBM Wireless Translation System: 1947 IBM Hydrogen Peroxide Analyzer: 1982 IBM PW 200 Percussive Welder: 1960s IBM Industrial Scale: 1930s IBM Style 5011: ¼ horsepower electric coffee mill; 1920s IBM Style 5117: ½ horsepower meat chopper; late 1920s IBM Electric Scoreboard: 1949 IBM Cheese Slicer: 1901 Computers based on vacuum tubes (1950s) For these computers most components were unique to a specific computer and are shown here immediately following the computer entry. 
IBM 305: RAMAC: Random Access Method of Accounting and Control; 1956 IBM 305: Processing Unit IBM 323: IBM 305 Card Punch IBM 340: IBM 305 Power Supply IBM 350: IBM 305 Disk Storage IBM 370: IBM 305 Printer (not to be confused with the much later System/370 computers) IBM 380: IBM 305 Console IBM 381: IBM 305 Remote Printing Station IBM 382: IBM 305 Paper Tape Reader IBM 407: IBM 305 Accounting Machine (models R1, R2 used on-line) IBM 610: Automatic Decimal Point Computer; 1957 IBM 650: Magnetic Drum Data Processing Machine; 1954 IBM 355: IBM 650 RAMAC (Disk drive) IBM 407: IBM 650 Accounting machine on-line IBM 533: IBM 650 Card Read Punch IBM 537: IBM 650 Card Read Punch IBM 543: IBM 650 Card Reader IBM 544: IBM 650 Card Punch IBM 650: IBM 650 Console Unit IBM 652: IBM 650 Disk and Magnetic Tape Control Unit IBM 653: IBM 650 Auxiliary Unit (60—10-digit words of auxiliary storage, index registers, and decimal floating point) IBM 654: IBM 650 Auxiliary Alphabetic Unit IBM 655: IBM 650 Power Unit IBM 727: Magnetic Tape Reader/Recorder (7 Track—6 data bits & 1 parity bit; 200 characters/inch) IBM 838: Inquiry Station IBM 701: Electronic Data Processing Machine; 1952. Known as the Defense Calculator while in development. IBM 706: IBM 701 Electrostatic Storage Unit (2048—36-bit words) IBM 711: IBM 701 Card reader (150 cards/min); 1952 IBM 716: IBM 701 Printer (150 lines/min); 1952 IBM 721: IBM 701 Punched card recorder; 1952 (100 cards/min) IBM 726: IBM 701 Dual Magnetic Tape Reader/Recorder (7 Track—6 data bits & 1 parity bit; 100 characters/inch) IBM 727: Magnetic Tape Reader/Recorder (7 Track—6 data bits & 1 parity bit; 200 characters/inch) IBM 731: IBM 701 Magnetic Drum Reader/Recorder; 1952 IBM 736: IBM 701 Power Frame #1 IBM 737: IBM 701/IBM 704/IBM 709 Magnetic Core Storage Unit (4096—36-bit words) IBM 740: IBM 701/IBM 704/IBM 709 Cathode Ray Tube Output Recorder IBM 741: IBM 701 Power Frame #2 IBM 746: IBM 701 Power Distribution Unit IBM 753: IBM 701 Magnetic Tape Control Unit IBM 780: Cathode Ray Tube Display (used with IBM 740) IBM 702: Electronic Data Processing Machine; 1953 IBM 712: IBM 702 Card Reader IBM 717: IBM 702 Printer IBM 922: Tape-Controlled Carriage IBM 722: IBM 702 Card Punch IBM 727: Magnetic Tape Reader/Recorder (7 Track—6 data bits & 1 parity bit; 200 characters/inch) IBM 732: IBM 702 Magnetic Drum Storage Unit IBM 752: IBM 702 Tape Control Unit IBM 756: IBM 702 Card Reader Control Unit IBM 757: IBM 702 Printer Control Unit IBM 758: IBM 702 Card Punch Control Unit IBM 704: Data Processing System; 1956 IBM 711: Card Reader IBM 716: Line Printer IBM 721: Card Punch IBM 727: Magnetic Tape Reader/Recorder (7 Track—6 data bits & 1 parity bit; 200 characters/inch) IBM 733: Magnetic Drum IBM 737: IBM 701/IBM 704/IBM 709 Magnetic Core Storage Unit (4096—36-bit words, 6-bit BCD characters) IBM 738: IBM 704/IBM 709 Magnetic Core Storage Unit (32768—36-bit words, 6-bit BCD characters) IBM 740: IBM 701/IBM 704/IBM 709 Cathode Ray Tube Output Recorder IBM 780: Cathode Ray Tube Display (used with IBM 740) IBM Card-to-Tape Converter (described in IBM 704 Reference manual) IBM 714: Card Reader IBM 727: Magnetic Tape Reader/Recorder (7 Track—6 data bits & 1 parity bit; 200 characters/inch) IBM 759: Card Reader Control Unit IBM Tape-to-Card Converter (described in IBM 704 Reference manual) IBM 722: Card Punch IBM 727: Magnetic Tape Reader/Recorder (7 Track—6 data bits & 1 parity bit; 200 characters/inch) IBM 758: Control Unit IBM Tape-controlled Printer (described in IBM 704 
Reference manual) IBM 717: Printer IBM 922: Tape-Controlled Carriage IBM 727: Magnetic Tape Reader/Recorder (7 Track—6 data bits & 1 parity bit; 200 characters/inch) IBM 757: Control Unit IBM Tape-controlled Printer (described in IBM 704 Reference manual) IBM 720: Printer IBM 727: Magnetic Tape Reader/Recorder (7 Track—6 data bits & 1 parity bit; 200 characters/inch) IBM 719: Printer (dot matrix, 60 print positions) IBM 730: Printer (dot matrix, 120 print positions) IBM 760: Printer Control Unit IBM 705: Data Processing System; 1954 IBM 714: Card Reader IBM 717: Printer IBM 922: Tape-Controlled Carriage IBM 720: Printer IBM 722: Card Punch IBM 727: Magnetic Tape Reader/Recorder (7 Track—6 data bits & 1 parity bit; 200 characters/inch) IBM 729: Magnetic tape drive models 1 and 3 (7 Track—6 data bits & 1 parity bit; 200/556/800 characters/inch) IBM 730: Printer (dot matrix, 120 print positions) IBM 734: Magnetic Drum Storage IBM 754: Tape Control IBM 757: Printer Control IBM 758: Card Punch Control IBM 759: Card Reader Control IBM 760: Control and Storage; connects 2 727 tape units and a 720A or 730A printer to CPU. IBM 767: Data Synchronizer IBM 774: Tape Data Selector IBM 777: Tape Record Coordinator IBM 782: Console IBM 709: Data Processing System; 1958 IBM 711: Card Reader IBM 716: Printer IBM 721: Card Punch IBM 729: Magnetic tape drive (7 Track—6 data bits & 1 parity bit; 200/556/800 characters/inch) IBM 733: Magnetic Drum IBM 737: IBM 701/IBM 704/IBM 709 Magnetic Core Storage Unit (4096—36-bit words, 6-bit BCD characters) IBM 738: IBM 704/IBM 709 Magnetic Core Storage Unit (32768—36-bit words, 6-bit BCD characters) IBM 740: IBM 701/IBM 704/IBM 709 Cathode Ray Tube Output Recorder IBM 755: Tape Control Unit IBM 766: Data Synchronizer IBM 780: Cathode Ray Tube Display (used with IBM 740) Other (system not known) IBM 735: Print Control IBM 739: Additional Core Storage IBM 742: Power Unit IBM 743: Power Supply IBM 744: Power Unit IBM 745: Power Unit IBM 747: Tape Data Selector PS IBM 748: Data Synchronizer IBM 771: Card/Tape Converter IBM 775: Record Storage Unit IBM 776: Sp EDPM IBM 781: Console IBM 786: Stretch Solid-state computers based on discrete transistors (1960s) Further information: IBM mainframe, IBM minicomputer. IBM 1400 series: 1240, 1401, 1410, 1420, 1440, 1450, 1460, 7010 IBM 1240: Banking system; 1963 IBM 1241: Bank Processing Unit IBM 1401: Small business computer; 1959 IBM 1402: IBM 1401 Card reader/punch IBM 1403: IBM 1401 Printer, type chain; 1959 IBM 1416: IBM 1403 and IBM 3203 Interchangeable Train Cartridge IBM 1405: IBM 1401/1410 RAMAC (Disk drive) IBM 1406: IBM 1401 Memory Expansion Unit (4000/8000/12000—6-bit characters, check bit, and wordmark) IBM 1407: IBM 1401 Console Inquiry Station IBM 1409: IBM 1401 Console Auxiliary IBM 7641: IBM 1401/1410/1460 Hypertape Control IBM 1410: Midrange business computer; 1960 IBM 1411: IBM 1410 processing unit IBM 1414: IBM 1410/7010: I/O Synchronizer IBM 1014: IBM 1414 Remote Inquiry Unit IBM 1415: IBM 1410/7010—Console IBM 7631: IBM 1410/7010, IBM 7070/7074, 7080—File Control IBM 1420: High-speed bank transit system; 1962 IBM 1440: Low-cost business computer; 1962 IBM 1441: IBM 1440 Processing unit; 1962 IBM 1442: IBM 1440, IBM 1130, and IBM System/360 Card reader/punch IBM 1443: IBM 1440/IBM 1620 II Printer, flying type bar IBM 1447: IBM 1240/1401/1440/1450/1460 Operator's Console IBM 1448: IBM 1240/1440/1460 Transmission Control Unit(between system and 1030/1050/1060/AT&T...) 
IBM 1450: Data Processing System for small banks; 1968 IBM 1460: Almost twice as fast as the 1401; 1963 IBM 1447: IBM 1460 System Console IBM 1461: IBM 1460—Input/Output Control IBM 1462: IBM 1460—Printer Control IBM 7010: High-capacity version of 1410; 1962 IBM 1620 IBM 1620: Data Processing System; 1959 IBM 1443: IBM 1440/IBM 1620 II Printer, flying type bar IBM 1621: IBM 1620 Paper tape reader IBM 1622: IBM 1620 Punched card reader/punch IBM 1623: IBM 1620 I Memory Expansion Unit (20000/40000—4-bit digits, flag and check bits; CF8421) IBM 1624: IBM 1620 Paper tape punch IBM 1625: IBM 1620 II Memory Unit (20000/40000/60000—4-bit digits, flag and check bits; CF8421) IBM 1626: IBM 1620 Plotter control IBM 1627: IBM 1620 Plotter. Also used by IBM 1130. IBM 7030 (Stretch) IBM 7030: Supercomputer; 1960 (Stretch) IBM 353: IBM 7030 Disk drive IBM 354: IBM 7030 Disk drive controller IBM 7152: IBM 7030 Operator's Console IBM 7302: IBM 7030 Core Storage (16384 72-bit words: 64 data bits & 8 ECC bits) IBM 7303: IBM 7030 Disk Storage IBM 7503: IBM 7030 Punched card reader IBM 7612: IBM 7030 Disk Synchronizer IBM 7619: IBM 7030 I/O exchange (8, 16, 24, or 32 I/O channels) IBM 7070 series: 7070, 7072, 7074 IBM 7070: Intermediate data processing system; 1960 IBM 7072: Intermediate data processing system; 1962 IBM 7074: Intermediate data processing system; 1961 IBM 729: IBM 7070/IBM 7074 Magnetic tape Unit IBM 1301: IBM 7070/IBM 7074 Disk Storage IBM 1302: IBM 7070/IBM 7074 Disk Storage IBM 7104: IBM 7074 High-Speed Processor IBM 7150: IBM 7070/IBM 7074 Console Control Unit IBM 7300: IBM 7070/IBM 7074 Disk Storage IBM 7301: IBM 7070/IBM 7074 Core Storage (5000/9990—10-digit words) IBM 7340: IBM 7070/IBM 7074 hypertape (7074 only) IBM 7400: IBM 7070/IBM 7074 Printer IBM 7500: IBM 7070/IBM 7074 Card Reader IBM 7501: IBM 7070/IBM 7074 Console Card Reader IBM 7550: IBM 7070/IBM 7074 Card Punch IBM 7600: IBM 7070/IBM 7074 Input-Output Control IBM 7601: IBM 7070 Arithmetic and Program Control IBM 7602: IBM 7070/IBM 7074 Core Storage Controller for IBM 7301 IBM 7603: IBM 7070/IBM 7074 Input-Output Synchronizer IBM 7604: IBM 7070/IBM 7074 Tape Control IBM 7605: IBM 7070/IBM 7074 Disk Control IBM 7631: IBM 1410/IBM 7010, IBM 7070/IBM 7074, IBM 7080 File Control IBM 7640: IBM 7074/IBM 7080 Hypertape Control IBM 7802: IBM 7070/IBM 7074 Power Converter IBM 7907: IBM 7070/IBM 7074 Data Channel (8 bit) IBM 7710: Data Communication Unit IBM 7711: Data Communication Unit IBM 7080 IBM 7080: High-capacity business computer; 1961 IBM 717: IBM 7080 150 LPM printer IBM 720: IBM 7080 500 LPM printer IBM 729: IBM 7080 Magnetic tape Unit IBM 730: IBM 7080 1000 LPM printer IBM 735: IBM 7080 Printer Control for IBM 730 IBM 757: IBM 7080 printer control for 717 IBM 760: IBM 7080 Control and Storage Model 1 for IBM 720 Printer Model 2 for IBM 730 Printer IBM 1301: IBM 7080 Disk Storage IBM 1302: IBM 7080 Disk Storage IBM 7153: IBM 7080 Console Control Unit IBM 7302: IBM 7080 Core Storage (80000/160000—6-bit characters, check bit ; CBA8421) IBM 7305: IBM 7080 Core Storage Controller and I/O Controller for IBM 7302 IBM 7502: IBM 7080 Console Card Reader IBM 7621: IBM 7080 Tape Control (729) IBM 7622: IBM 7080 Signal Control (vacuum tube peripherals) IBM 7631: IBM 7080 File Control IBM 7640: IBM 7080 Hypertape Control IBM 7800: IBM 7080 Power Converter IBM 7801: IBM 7080 Power Control IBM 7908: IBM 7080 Data Channel (8 bit) IBM 7090 series: 7040, 7044, 7090, 7094, 7094 II IBM 7040: Low-cost version of 7094; 1963 Included an extension 
to the 7090/7094 instruction set to handle character string(s) thus improving the speed of commercial applications (COBOL). IBM 7106: Processing Unit IBM 1414: IBM 7040 I/O Synchronizer IBM 1014: IBM 1414 Remote Inquiry Unit IBM 1401: IBM 7040 card, printer, magnetic tape, tele-processing input/output IBM 7044: Low-cost version of 7094; 1963 This was a high performance version of the 7040 with the same extensions to the 7090/7094 instruction set; it also attached 7094 I/O devices. IBM 7107: Processing Unit IBM 1414: IBM 7040 I/O Synchronizer IBM 1401: IBM 7040 card, printer, magnetic tape, tele-processing input/output IBM 7090: High-capacity scientific computer; 1959 IBM 7094: Improved version of 7090; 1962 IBM 7094 II: Improved version of 7094; 1964 IBM 711: IBM 7090/IBM 7094 Card Reader IBM 716: IBM 7090/IBM 7094 Printer IBM 721: IBM 7090/IBM 7094 Card Punch IBM 729: IBM 7090/IBM 7094 Magnetic tape Unit IBM 1301: IBM 7090/IBM 7094 Disk Storage IBM 1302: IBM 7090/IBM 7094 Disk Storage IBM 7151: IBM 7090 Console Control Unit IBM 7151-2: IBM 7094 Console Control Unit IBM 7302: IBM 7090/IBM 7094/IBM 7094 II Core Storage (32768—36-bit words, 6-bit BCD characters) IBM 7320: IBM 7090/IBM 7094 Drum Storage IBM 7340: IBM 7090/IBM 7094 Hypertape IBM 7606: IBM 7090/IBM 7094/IBM 7094 II Multiplexer and Core Storage Controller for IBM 7302 IBM 7607: IBM 7090/IBM 7094 Data Channel (6 bit) IBM 7608: IBM 7090 Power Converter IBM 7617: IBM 7090/IBM 7094 Data Channel Console IBM 7618: IBM 7090 Power Control IBM 7631: IBM 7090/IBM 7094 File Control IBM 7640: IBM 7090/IBM 7094 Hypertape Control IBM 7909: IBM 7090/IBM 7094 Data Channel (8 bit) IBM 2361: NASA's Manned Spacecraft Center IBM 7094 II Core Storage Unit (524288—36-bit words); 1964 Later solid-state computers & systems Computers based on SLT or discrete IC CPUs (1964–1989) IBM 1130: high-precision scientific computer; 1965 IBM 1132: IBM 1130 Printer, based on IBM 407 type-wheel mechanism IBM 1133: IBM 1130 Multiplexer and cycle stealer, to connect an IBM 1403 fast printer IBM 2020: System/360 Model 20 Central Processing Unit; almost a 360: 1966 IBM 2022: System/360 Model 22 Central Processing Unit; small range 360 IBM 2025: System/360 Model 25 Central Processing Unit; small range 360 IBM 2030: System/360 Model 30 Central Processing Unit; small range 360 IBM 2040: System/360 Model 40 Central Processing Unit; small range 360 IBM 2044: System/360 Model 44 Central Processing Unit; scientific 360; business with special feature IBM 2050: System/360 Model 50 Central Processing Unit; mid range 360 IBM 2060: System/360 Models 60 and 62 Central Processing Unit; mid-range 360; announced but never released IBM 2064: System/360 Models 64 and 66 Central Processing Unit; mid range 360; multi-processor with virtual memory (DAT); announced but never released IBM 2065: System/360 Model 65 Central Processing Unit; mid range 360: used by NASA in Apollo project IBM 2067: System/360 Model 67 Central Processing Unit; mid range 360; multi-processor with virtual memory (DAT) IBM 2070: System/360 Model 70 Central Processing Unit; high range 360; announced but never released IBM 2075: System/360 Model 75 Central Processing Unit; high range 360 IBM 2085: System/360 Model 85 Central Processing Unit; high range 360 IBM 5450: Display console used with Model 85 (80 characters x 35 lines) IBM 2091: System/360 Model 91 Central Processing Unit; high range 360 IBM 2095: System/360 Model 95 Central Processing Unit; high range 360 IBM 2195: System/360 Model 195 Central Processing Unit; 
high range 360 IBM 3031: System/370-compatible mainframe; high range (first series to incorporate integral, i.e., internal, stand-alone channels, these being stripped-down 3158-type CPUs, but operating only in "channel mode") IBM 3017: Power Distribution Unit/Motor Generator (3031 processor complex) IBM 3032: System/370-compatible mainframe; high range (first series to incorporate integral, i.e., internal, stand-alone channels, these being stripped-down 3158-type CPUs, but operating only in "channel mode") IBM 3027: Power and Coolant Distribution Unit (3032 processor complex) IBM 3033: System/370-compatible multiprocessor complex; high range; 1977 (first series to incorporate integral, i.e., internal, stand-alone channels, these being stripped-down 3158-type CPUs, but operating only in "channel mode") IBM 3037: Power and Coolant Distribution Unit (3033 processor complex) IBM 3036: Dual-display (operator's) console, shipped with 303X IBM 3038: Multiprocessor Communication Unit for 3033 MP IBM 3042: Attached processor for 3033 Model A IBM 3081: System/370-compatible dual-processor mainframe; high range; models: D, G, G2, GX, K (1981), K2, KX (2 = enhanced version); 1980 IBM 3082: Processor Controller IBM 3087: Coolant Distribution Unit IBM 3089: Power Unit IBM 3083: System/370-compatible mainframe, single processor 3081; high range; models: B (1982), B2, BX, CX, E (1982), E2, EX, J (1982), J2, JX IBM 3084: System/370-compatible Quad-processor mainframe; high range; 3081 + 3081 with same serial number, but two on/off switches; models: Q 2-way, Q 2-way2, QX 2-way, Q 4-way, Q 4-way2, QX 4-way; 1982 IBM 3090: System/370 mainframe; high range; J series supersedes S series. Models: 150, 150E, 180, 200 (1985), 400 2-way (1985), 400 4-way (1985), 600E (1987), 600S (1988). A 400 actually consists of two 200s mounted together in a single frame. Although it provides enormous computing power, some limits, like CSA size, are still fixed by the 16MB line in MVS. IBM 3097: Power and Coolant Distribution Unit IBM 3115: System/370 Model 115 Central Processing Unit; small range IBM 3125: System/370 Model 125 Central Processing Unit; small range IBM 3135: System/370 Model 135 Central Processing Unit; small range IBM 3145: System/370 Model 145 Central Processing Unit; small range IBM 3155: System/370 Model 155 Central Processing Unit; mid range; without virtual memory [DAT] unless upgraded to 155-II IBM 3165: System/370 Model 165 Central Processing Unit; mid range; without virtual memory [DAT] unless upgraded to 165-II IBM 3066: Display console used with Models 165 and 168 (80 characters x 35 lines) IBM 3138: System/370 Model 138 Central Processing Unit; small range; IBM 3148: System/370 Model 148 Central Processing Unit; small range; IBM 3158: System/370 Model 158 Central Processing Unit; mid range; IBM 3168: System/370 Model 168 Central Processing Unit; high range; IBM 3066: Display console used with Models 165 and 168 (80 characters x 35 lines) IBM 3195: System/370 Model 195 Central Processing Unit; high range; without virtual memory [DAT] IBM 3741: data station; 1973 IBM 3790: distributed computer; announced 1975 (followed by the IBM 8100) IBM 3791: Controller, model 1 or 2. IBM 3792: Auxiliary control unit. IBM 3793: Keyboard-Printer.
IBM 4300: series of System/370-compatible mainframe models; 1979 IBM 4321: System/370-compatible mainframe; low range; successor of 4331 IBM 4331: System/370-compatible mainframe; low range; 1979 IBM 4341: System/370-compatible mainframe; mid range; 1979 IBM 4361: System/370-compatible mainframe; low range; 1983 IBM 4381: System/370-compatible mainframe; mid range; 1983 IBM 5100: portable computer; evolution of the 1973 SCAMP (Special Computer APL Machine Portable) prototype; 1975 IBM 5103: Dot matrix printer IBM 5110: portable computer; models 1, 2 & 3 featured a QIC tape drive, and then floppy disk drives; 1978 IBM 5120: portable computer; featured two built-in 8-inch 1.2 MB floppy disk drives; 1980 IBM 5280: Distributed Data System; 1980 IBM 5281: Data Station for 5280 IBM 5282: Dual Data Station for 5280 IBM 5285: Programmable Data Station IBM 5286: Dual Programmable Data Station IBM 5288: Programmable Control Unit IBM 5225: Printer for 5280 (floor-standing; Models 1, 2, 3, 4) IBM 5256: Printer for 5280 (table-top, dot-matrix; Models 1, 2, 3) IBM 5320: System/32, low-end business computer; 1975 IBM 5340: System/34 System Unit, successor of System/32, but also had a second System/3 processor; 1977 IBM 5360: System/36 System Unit IBM 5362: System/36 System Unit IBM 5363: System/36 System Unit IBM 5364: System/36 System Unit IBM 5381: System/38 System Unit; 1978 IBM 5382: System/38 System Unit IBM 5410: System/3 model 10 processor; for small businesses; 1969 IBM 5415: System/3 model 15 processor; 1973 IBM 5520: Administrative System; 1979 IBM 8100: distributed computer; announced 1978 IBM 8150: processor IBM 9370: series of System/370 mainframe models; partly replaced IBM 8100; low range; 1986 IBM 9371: "Micro Channel 370" ESA models 010, 012, 014 (later 110, 112, 114); 1990 IBM 9373: models 20, 30 IBM 9375: models 40, 50, 60 IBM 9377: models 80 and 90 IBM Series/1: brand name for process control computers; 1976 IBM System/3: brand name for small business computers; 1969 IBM System/36: brand name for minicomputers; successor of System/34; 1983 IBM System/38: brand name for minicomputers; indirect successor of IBM Future Systems project; 1979 IBM System/360: brand name for mainframes; 1964 IBM System/370: brand name for mainframes, successor of System/360; 1970 Application System/400: brand name for computers, successor of System/38; 1988 Computers based on discrete IC CPUs (1990–present) IBM ES/9000 family of System/390 mainframes; 1990 IBM ES/9021: water-cooled ES/9000 type IBM ES/9121: air-cooled standalone ES/9000 type IBM ES/9221: air-cooled rack mounted ES/9000 type IBM 9406: AS/400 minicomputer IBM AS/400: midrange computer system, successor to System/38; 1988 System/390: brand name for mainframes with ESA/390 architecture; successor of System/370; 1990 Computers based on microprocessor CPUs (1981–present) Computers IBM System/23: DataMaster, based on the Intel 8085; 5322: Desktop all-in-one model; 5324: Floor tower model IBM 2003: a very small mainframe with System/390 architecture; 1990s, also known as Multiprise 2000 IBM 2064: zSeries z900; note number collision with earlier System/360-64; 2000 IBM 2066: zSeries z800; less powerful variant of the z900 IBM 2084: zSeries z990; successor of larger z900 models IBM 2086: zSeries z890; successor of the z800 and smaller z900 models; 2004 IBM 2094: System z9 Enterprise Class (z9 EC); initially known as z9-109; 2005 IBM 2096: System z9 Business Class (z9 BC); successor to z890; 2006
IBM 2097: System z10 Enterprise Class (z10 EC); successor to z9 EC; 2008 IBM 2098: System z10 Business Class (z10 BC); successor to z9 BC; 2008 IBM 2817: zEnterprise 196 (z196); successor to z10 EC; 2010 IBM 2818: zEnterprise 114 (z114); successor to z10 BC; 2011 IBM 2827: zEnterprise EC12 (zEC12); successor to z196; 2012 IBM 2828: zEnterprise BC12 (zBC12); successor to z114; 2013 IBM 2964: IBM z Systems z13 (z13); successor to zEC12; 2015 IBM Personal Computer: Superseded the IBM Portable Computer. IBM 5150: the classic IBM PC—1981 IBM 5160: IBM Personal Computer XT—1983 IBM 5162: IBM Personal Computer XT/286 IBM 5271: IBM 3270 PC—1983 IBM 5160 Model 588: PC XT/370, a PC XT with a special add-in card containing an Intel 8087 math coprocessor and two modified Motorola 68000 chips to execute/emulate the System/370 instructions—1983. IBM 5155: IBM Portable—1984 IBM 4860: IBM PCjr—1984 IBM 5170: IBM Personal Computer/AT—1984 IBM 5140: IBM Convertible—1986 IBM 5281: IBM 3270 PC but based on an IBM AT. IBM 5550: Personal Computer Series for Japan, South Korea, Taiwan and China IBM 5510: IBM JX (for Japan, Australia and New Zealand) IBM 5511: IBM JX (for Japan, Australia and New Zealand) IBM 5530: Smaller desktop, without communications adapter IBM 5535: Portable IBM 5541: Desktop IBM 5551: Floor standing IBM 5561: Larger floor standing IBM PS/2: range IBM PS/1: range, later succeeded by IBM Aptiva IBM Aptiva: Personal Computer IBM PS/ValuePoint: range IBM RT PC: series; ROMP-based; 1986 IBM 4575: System/88 processor; 1986 IBM 4576: System/88 processor IBM 7060, also known as Multiprise 3000: a very small mainframe with System/390 architecture; models H30, H50, H70; 1999 IBM System 9000: lab data controller, based on Motorola 68000 IBM 9075: PCradio, a battery-powered personal computer; 1991 IBM 9672: largest mainframes from System/390 line; 1994 G1: 9672-Rn1, 9672-Enn, 9672-Pnn G2: 9672-Rn2, 9672-Rn3 G3: 9672-Rn4 G4: 9672-Rn5 G5: 9672-nn6 G6: 9672-nn7 IBM 9674: coupling facility for interconnecting IBM 9672 computers IBM PC Series: PC300 and 700 range including 300GL and 300PL IBM NetVista: Corporate PCs IBM ThinkCentre: PC range now made under license by Lenovo Group IBM ThinkPad: Notebooks now made under license by Lenovo Group IBM IntelliStation Workstations: Pro based on Intel PC processors, and POWER based on PowerPC processors System/390: brand name for mainframes with ESA/390 architecture; successor of System/370; 1990 IBM AS/400: Later iSeries and System i, merged into IBM Power Systems in 2008; 1988 IBM System p: First RS/6000, then pSeries, then p5 and now System p5, merged into IBM Power Systems in 2008; 1990 IBM System x: Originally PC Server, then Netfinity, then xSeries and now System x System z: brand name for mainframes with z/Architecture; rename of zSeries; 2006 zSeries: brand name for mainframes with z/Architecture; successor of System/390; 2000 IBM PureSystems: Converged system IBM System Cluster 1350 IBM BladeCenter: IBM's Blade server architecture IBM eServer 32x: AMD processor-based server products IBM OpenPower: POWER5 based hardware for running Linux. Supercomputers IBM Blue Gene: 2000 IBM Kittyhawk: 2008 White paper issued. 
Microprocessors IBM 801: Pioneering prototype RISC processor; 1980 IBM ROMP: RISC processor, also known as 032 processor IBM APC: RISC Processor, successor to the 032 IBM CnC/M68000: Processor for XT/370 and AT/370 IBM P/370: Processor for Personal System 370 IBM P/390 microprocessor: processor for P/390 and R/390 IBM Power: Processors for some RS/6000 and successors, later IBM AS/400, and IBM Power Systems POWER1 POWER2 POWER3 POWER4 POWER5 POWER6 POWER7 POWER8 POWER9 Power10 PowerPC: Processors for some RS/6000 and successors and earlier IBM AS/400, some also used in non-IBM systems PowerPC 601 PowerPC 603 PowerPC 604 PowerPC 620 PowerPC 7xx PowerPC 4xx embedded CPUs IBM RS64 PowerPC 970 Cell microprocessor Gekko, Broadway and Xenon CPUs for game consoles. IBM z/Architecture processors: for z/Architecture mainframes IBM z10 IBM z196 IBM zEC12 IBM z13 IBM z14 IBM z15 IBM Telum Solid-state computer peripherals Punched card and paper tape equipment IBM 1011: IBM 1401/1440/1460/1414 I/O Sync—Paper Tape Reader IBM 1012: IBM 1401/1440/1460—Tape Punch IBM 1017: IBM S/360—Paper Tape Reader IBM 1018: IBM S/360—Paper Tape Punch IBM 1134: paper tape reader IBM 1402: IBM 1401 and several other systems card reader/punch IBM 1412: Punched card reader/punch IBM 1442: IBM 1440 and IBM System/360 Card reader/punch IBM 1444: IBM 1240/1440 Punched card reader/punch IBM 1622: IBM 1620 Card reader/punch IBM 1902: Paper Tape Punch IBM 1903: Paper Tape Reader IBM 2501: IBM System/360 Card reader (up to 1,200 cpm) IBM 2502: Card Reader IBM 2520 Card Read Punch (Model A1), Card Punch (Models A2, A3) IBM 2540: IBM System/360 Card reader/punch IBM 2560: IBM System/360 Model 20 Multifunction card machine (reader/punch/interpreter/multi-hopper) IBM 2671: Paper Tape Reader IBM 2826: Control unit for 1017 and 1018 IBM 3504: Card reader IBM 3505: Card reader IBM 3525: Multi-function card unit IBM 5424: IBM System/3 MFCU Multi Function Card Unit (reader/punch/printer/multi-hopper)- 96 column cards IBM 5425: IBM System/370 MFCU Multi Function Card Unit (reader/punch/printer/multi-hopper), for handling 96-column cards Microfilm products IBM announced a range of Microfilm products in 1963 and 1964 and withdrew them in 1969. IBM 9921: Document Viewer Model I IBM 9922: Document Viewer Model II IBM 9948: Thermal Copier IBM 9949: Micro Viewer IBM 9950: Diazo Copier IBM 9951: Camera IBM 9952: Standard Micro-Viewer-Printer IBM 9953: Viewer-Printer Stacker Module IBM 9954: Diazo Copier IBM 9955: Microfiche Processor IBM 9956: Camera IBM 9965: Diazo Copier Printer/plotter equipment IBM 1094: Line-Entry Keyboard IBM 1403: High-Speed Impact Printer IBM 1404: IBM 1401/Sys360—Printer IBM 1416: Impact Printer print character chain IBM 1445: IBM 1240/1401/1440/Sys360—Printer IBM 1446: IBM 1440—Printer Control unit for 1403 IBM 2203: Printer IBM 2213: Printer IBM 2245: Line printer for Chinese, Japanese and Korean text IBM 2280: Film Recorder IBM 2282: Film Recorder/Scanner IBM 2285: Display Copier IBM 2680: High-speed photo typesetter; 1967 IBM 3130: Advanced Function Printer IBM 3160: Advanced Function Printer IBM 3170: Full Color Digital Printer IBM 3203: Printer IBM 3211: High-Speed Impact Printer for Sys/370 IBM 3216: 3211 Impact Printer's character print train IBM 3262: Line printer IBM 3268: Dot matrix printer IBM 3284: Printer IBM 3287: Color printer; 1979 IBM 3288: Line printer IBM 3800: First laser printer introduced by IBM; 1976–1990. incl. 
photo IBM 3800-1: Early laser printer, 1975 IBM 3800-2: Part of IBM Kanji System for Japanese language processing, 1979 IBM 3800-3: Continuous form printer; 1982 IBM 3811: Control Unit for 3211 IBM 3812: Table top page printer; 12 ppm, 1986 IBM 3816: Table top page printer; 24 ppm, 1989 IBM 3820: Laser page printer; 20 ppm, 1985 IBM 3825: Laser page printer; 58 ppm, 1989 IBM 3827: Laser page printer; 92 ppm, 1988 IBM 3828: MICR Laser page printer; 92 ppm, 1990 IBM 3829: Laser page printer; 92 ppm, 1993 IBM 3835: Continuous forms laser printer; 88ppm, 1988 IBM 3852-2: Inkjet printer for IBM 3192 terminal IBM 3900: Various models 001; OW1 DR1/2 etc., succeeded by infoprint 4000 IBM 3935: Laser page printer; 35 ppm, 1993 IBM 4000: Various models succeeded by infoprint 4100 IBM 4019: Laser printer for PC. 10 text pages per minute. IBM 4039-16L: Lexmark laser printer IBM 4055: InfoWindow touch screen display IBM 4079: Color inkjet printer IBM 4201: ProPrinterII Model 002 IBM 4202: ProPrinter XL IBM 4207: ProPrinter X24 IBM 4208: ProPrinter XL24 IBM 4210: APA matrix table top WS printer for the S/38-36 IBM 4214: Table top printer IBM 4216: Personal pageprinter model 020 IBM 4224: Table top serial printer; 1986 IBM 4230: Tabletop matrix printer, 600cps. Also 4232 IBM 4234: Floor standing dot band printer; 1986 IBM 4245: Line printer IBM 4247: Tabletop matrix printer, 1100cps IBM 4248: Impact printer; 1984 IBM 4250/II: ElectroCompositor model 002 IBM 4279: Terminal Control Unit (for 4506 Digital TV Displays) IBM 4506: Digital TV display unit IBM 4975: Printer IBM 5083: Tablet IBM 5087: Screen printer IBM 5201: Printer IBM 5202: Printer (Quietwriter III) IBM 5203: Line printer for System/3. Ran at 100 or 200 lines per minute. IBM 5210: Printer IBM 5211: Printer 160 or 300 lpm, sold with System/34 IBM 5215: Selectric-element printer for Displaywriter IBM 5218: Daisywheel printer for Displaywriter IBM 5219: Letter quality printer IBM 5223: Wheelprinter E IBM 5224: Table top printer IBM 5225: Floor standing printer IBM 5253: CRT display station for 5520; 1979 IBM 5254: CRT display station for 5520; 1979 IBM 5256: Table top printer; 1977 IBM 5257: Daisy wheel printer for 5520; 1979 IBM 5258: Ink jet printer for 5520; 1979 IBM 5262: Floor standing line printer IBM 5294: Twinax remote control unit IBM 5394: Twinax remote controller (also 5494) IBM 6180: Color plotter IBM 6186: Color plotter IBM 6262: Line Printer IBM 6400: Line matrix printer IBM 6500: IPDS printer, coax or twinax attached IBM 6670: Information Distributor; combination laser printer and photocopier; part of Office System/6; 1979 IBM 7701: Magnetic Tape Transmission Terminal; 1960 IBM 7372: Color plotter, 6 pen, desktop IBM 7374: Color plotter IBM 7375: Color plotter IBM 7350: Image processor, a specialized terminal for scientific and research applications; 1983 IBM 7400: IBM 7070/IBM 7074 Printer IBM 7404: Graphic Output IBM 7456: Plant floor terminal IBM 7900: IBM 7070/IBM 7074 Inquiry Station IBM 8775: Terminal IBM LPFK: Lighted Program Function Keyboard IBM XY749: Plotter IBM XY750: Plotter Graphics displays IBM 2350: Graphics display system; 1977 IBM 5081: Color and monochrome display; separate RGB connections, capable of 1280×1024 resolution, up to diagonal. IBM 5080: Graphics System; for System/370 IBM 5085: Graphics Processor. Part of IBM 5080 Graphics System for System/370. IBM 5088: Graphics Channel Controller. Part of IBM 5080 Graphics System for System/370. 
IBM 6090: High-end graphics system for the System/370 IBM 6153: Advanced monochrome graphics display IBM 6154: Advanced color graphics display IBM 6155: Extended monochrome graphics display Data storage units Core storage IBM 2360: Processor Storage for the (never shipped) IBM System/360 models 60 and 64 IBM 2361: Large Capacity Storage for the IBM System/360 models 50, 60, 62, 65, 70, and 75 IBM 2362: Processor Storage for the (never shipped) IBM System/360 models 62, 66, 68 and 70 IBM 2365: Processor Storage for the IBM System/360 models 65, 67, 75 and 85 IBM 2385: Processor Storage for the IBM System/360 model 85 IBM 2395: Processor Storage for the IBM System/360 models 91 and 95 Direct-access storage devices In IBM's terminology beginning with the System/360 disk and such devices featuring short access times were collectively called DASD. The IBM 2321 Data Cell is a DASD that used tape as its storage medium. See also history of IBM magnetic disk drives. IBM 353: Disk drive for IBM 7030 Stretch IBM 1301: IBM 1240/1410/1440/1460/70XX—Disk drive; 1961 IBM 1302: Disk drive IBM 1311: IBM 1240/1401/1410/1440/1450/1460/1620/7010/1710/7740 Disk drive using IBM 1316 disk pack IBM 1316: 2,000,000-character removable disk pack for 1311, 2311; 1962 IBM 1405: Disk drive IBM 1742: IBM System Storage DS4500 IBM 1814: IBM System Storage DS4700 IBM 1750: IBM System Storage DS6000 Series IBM 1815: IBM System Storage DS4800 IBM 2072: IBM Storwize V3700 (IBM FlashSystem 5000) IBM 2073: IBM Storwize V7000 Unified IBM 2076: IBM Storwize V7000 (IBM FlashSystem 7200) IBM 2078: IBM Storwize V5000 IBM 2105: Enterprise Storage Server, or ESS, or Shark (utilized 7133) IBM 2106: Extender for IBM 2105 Shark IBM 2107: IBM System Storage DS8000 Series IBM 2301: Drum Storage Unit IBM 2302: Disk drive IBM 2303: Drum Storage Unit IBM 2305-1: Fixed head disk 3.0 MB/s Transfer rate, 5 MB capacity IBM 2305-2: Fixed head disk 1.5 MB/s Transfer rate, 10 MB capacity IBM 2310: Cartridge disk drive, used 2315 cartridge. IBM 2315: 1 MB cartridge used on 2310 and with a disk drive component on multiple systems, e.g. IBM 1130. IBM 2311: Disk drive using IBM 1316 disk pack (removable—7.5 MB) IBM 2312: Disk drive using IBM 2316 disk pack (removable—28.6 MB) IBM 2313: Disk facility with 4 disk drives using IBM 2316 disk pack (removable—28.6 MB) IBM 2314: Disk subsystem with 9 drives, one spare using IBM 2316 disk pack (removable—28.6 MB) IBM 2318: Disk facility with 2 disk drives using IBM 2316 disk pack (removable—28.6 MB) IBM 2319: Disk Facility with 3 disk drives using IBM 2316 disk pack (removable—28.6 MB) IBM 2316: 28.6 MB Disk pack for 2314 et al. IBM 2321: Data cell drive. Drive with removable cells containing tape strips (400 MB) IBM 2421: IBM System Storage DS8000 Series with 1 year's warranty IBM 2422: IBM System Storage DS8000 Series with 2 years' warranty IBM 2423: IBM System Storage DS8000 Series with 3 years' warranty IBM 2424: IBM System Storage DS8000 Series with 4 years' warranty IBM 2810: IBM XIV Storage System (Generations 1 through 3; varies by model) IBM 2812: IBM XIV Storage System (Generations 1 through 3; varies by model) IBM 2851: IBM Scale-Out Network Attached Storage (SONAS) IBM 3310: Fixed FBA drive IBM 3330: Disk drive. (100 MB each spindle, up to 32 spindles per "subsystem"); 1970 IBM 3336: Disk pack for 3330–1, 3330–2; 1970 IBM 3330-11: Disk drive. Double the density of 3330–1; 1973. 
IBM 3336-11: Disk pack for 3330–11; 1973 IBM 3333: Disk drive, a variant of 3330 and 3333-11 IBM 3340: 'Winchester'-type disk drive, removable. Model -4, more?; 1973 IBM 3348: 35 or 70 MB data modules used with IBM 3340 IBM 3344: Four 3340's simulated with a 3350 HDA under the covers IBM 3350: Disk drive (317.10 MB—1976) IBM 3363: Optical disk drive IBM 3370: FBA drive (used to store microcode and config info for the 3090. Connected through 3092); native DASD for 4331, 4361 (70 MB—1979). IBM 3375: Disk drive ("The Ugly Duckling" of IBM's DASD devices). 409.8 MB/actuator. First with dual-path access (via 'D' box) IBM 3380: Disk drive; 2.46 GB per each 2-drive module (1981), later double- and triple-density versions IBM 3390: Disk drive; 1, 2, 3 and 9 GB initially; later expanded to 27 GB IBM 3540: Diskette I/O unit IBM 3830: Storage control models 1, 2 and 3 IBM 3850: Mass Storage System (MSS); virtual 3330-1 volumes, each backed up by a pair of cartridges, 1974 IBM 3830-11: Provided virtual 3330-1 (3330V) drives to the host; attached staging 3330 and 3350 drives for use by the 3851, 1974 IBM 3851: Mass Storage Facility. Robot arms retrieving cylindrical helically scanned tape cartridges. IBM 3880: Dual-channel DASD controller for 3350,3375,3380. 1981. Later models with up to 64MB cache. First hard disk cache in the industry. IBM 3990: Quad-channel DASD controller for 3390. IBM 4662: IBM FlashSystem 5200 IBM 4963: Disk subsystem IBM 4964: Diskette unit for Series/1 IBM 4965: Diskette drive and I/O expansion unit IBM 4966: Diskette magazine unit IBM 4967: High performance disk subsystem IBM 5444: Fixed/Removable disk file for System/3 IBM 5445: Disk Storage for System/3 IBM 5447: Disk Storage and Control for System/3 IBM 7133: SSA Disk Enclosure (for RS/6000) IBM 7300: IBM 7070/IBM 7074 Disk Storage IBM 7320: Drum drive IBM 9331: 8" Floppy disk drive IBM 9332: Disk drive; 1986 IBM 9333: Serial Link Disk Subsystem IBM 9335: Disk subsystem in a set of drawers. For AS/400, System 36/38 or 9370 IBM 9337: Disk Array Subsystem; 1992 IBM 9345: Disk Array Subsystem; employed commodity 5¼" hard drives; simulated 3390 hard disks but had a smaller track capacity Magnetic tape storage IBM 050: Magnetic Data Inscriber (key operated, records on tape cartridge for IBM 2495 data entry into an IBM System 360) IBM 729: Magnetic tape drive (7 Track—6 data bits & 1 parity bit; 200/556/800 characters/inch) IBM 2401: Magnetic tape drive (7 Track—6 data bits & 1 parity bit; 200/556/800 characters/inch) IBM 2401: Magnetic tape drive (9 Track—8 data bits & 1 parity bit; 800/1600 characters/inch) IBM 2415: Magnetic tape drive (9 Track—8 data bits & 1 parity bit; 800/1600 characters/inch) IBM 2420: Magnetic tape drive (9 Track—8 data bits & 1 parity bit) IBM 2440: Magnetic tape drive (9 Track—8 data bits & 1 parity bit) IBM 2495: Tape Cartridge Reader (reads IBM 050 prepared cartridges into an IBM System 360) IBM 3400-4: Lower density tape IBM 3400-6: Normal tape IBM 3410: Magnetic tape drive (9 Track—8 data bits & 1 parity bit); 1971 IBM 3411: Magnetic tape unit and controller IBM 3420: Magnetic tape drive (9 Track—8 data bits & 1 parity bit) IBM 3422: Magnetic tape drive (9 Track—8 data bits & 1 parity bit); 1986 IBM 3424: Tape unit. Brazil and SA only. 
IBM 3430: Top loading tape drive; 1983 IBM 3440: Magnetic tape drive (9 Track—8 data bits & 1 parity bit) IBM 3480: Cartridge tape drive; 1984 IBM 3490: Cartridge tape drive; 1991 IBM 3494: Enterprise tape library IBM Virtual Tape Server (VTS): tape virtualization engine for IBM 3494 IBM 3495: Robotic tape library IBM 3573 models L2U, L3S, F3S: TS3100 Tape Library IBM 3573 models L4U, L2H, F3H: TS3200 Tape Library IBM 3576: TS3310 Tape Library IBM 3577: TS3400 Tape Library IBM 3580: LTO tape drive IBM 3584: TS3500 Tape Library IBM 3584: TS4500 Tape Library IBM 3588 model F3B: TS1030 Tape Drive; LTO3 IBM 3588 model F4A: TS1040 Tape Drive; 2007; LTO4; TS2340 is a standalone version. IBM 3590: tape drive (Magstar) IBM 3592: TS1120 Tape Drive; model J1A known as Jaguar in 2004; model E05 in 2007 IBM 3803: Magnetic tape control unit for 3420 drives (9 Track—8 data bits & 1 parity bit) IBM 3954: TS7510 and TS7520 Virtualization Engines IBM 3956: TS7740 Virtualization Engine; models CC6 and CX6 IBM 3957: TS7700 Virtualization Engine; model V06 IBM 4480: Cartridge drives which could be mounted by a robot IBM 4580: System/88 disk drive IBM 4581: System/88 disk drive IBM 4585: Autoload streaming magnetic tape unit IBM 4968: Autoload streaming magnetic tape unit IBM 6157: Streaming tape drive IBM 7208: 8-mm SCSI tape drive IBM 7330: Magnetic tape drive (7 Track—6 data bits & 1 parity bit; 200/556 characters/inch) IBM 7340: Hypertape IBM 8809: Magnetic tape unit IBM 9347: Magnetic tape drive (9 Track—8 data bits & 1 parity bit) IBM 9349: Magnetic tape drive (9 Track—8 data bits & 1 parity bit) Optical storage IBM 1350: Photo Image Retrieval System IBM 1360: Photodigital Storage System (terabit) IBM 1352: Cell File IBM 1361: Cell File and Control IBM 1364: Photo-Digital Reader IBM 1365: Photo-Digital Recorder IBM 1367: Data Controller IBM 3995: Optical Library (terabyte) Storage networking and virtualization IBM 3044: Fiber optic channel extender link IBM 9034: ESCON/Parallel Converter IBM 2005: Storage area network (SAN) Fibre Channel switch (OEM from Brocade Communications Systems) IBM 2029: Dense Wavelength Division Multiplexer (OEM from Nortel) IBM 2031: Storage area network (SAN) Fibre Channel switch (OEM from McData) IBM 2032: Storage area network (SAN) Fibre Channel switch (OEM from McData) IBM 2053: Storage area network (SAN) Fibre Channel switch (OEM from Cisco) IBM 2054: Storage area network (SAN) Fibre Channel switch (OEM from Cisco) IBM 2061: Storage area network (SAN) Fibre Channel switch (OEM from Cisco) IBM 2062: Storage area network (SAN) Fibre Channel switch (OEM from Cisco) IBM 2103-H07: SAN Fibre Channel Hub IBM 2109: Storage area network (SAN) Fibre Channel switch (OEM from Brocade Communications Systems) IBM 2498: Storage area network (SAN) Fibre Channel switch (OEM from Brocade Communications Systems) IBM 2499: Storage area network (SAN) Fibre Channel switch (OEM from Brocade Communications Systems) IBM 3534: Storage area network (SAN) Fibre Channel switch (OEM from Brocade Communications Systems) IBM SAN File System: software for sharing file systems in a SAN IBM 2145: System Storage SAN Volume Controller (SVC) IBM 9729: Optical Wavelength Division Multiplexer Coprocessor units IBM 2938: Array processor; attaches to 2044 (model 1) or 2165 (model 2) IBM 3092: IBM 3090 Processor controller IBM 3838: Array processor; 1976 IBM 4758: PCI Cryptographic Coprocessor IBM 4764: PCI-X Cryptographic Coprocessor IBM 4765: PCIe Cryptographic Coprocessor IBM 4767: PCIe
Cryptographic Coprocessor (Crypto Express5S [CEX5S] on Z, MTM 4767–002, FC EJ32/EJ33 on Power) IBM 4768: PCIe Cryptographic Coprocessor (Crypto Express6S [CEX6S] on Z) IBM 4769: PCIe Cryptographic Coprocessor (Crypto Express7S [CEX7S] on Z) Channels and input/output control units IBM 2820: Drum Storage Control Unit for 2301 Drum Storage Units IBM 2821: Control unit (for 2540 Reader/Punch and 1403 Printer) IBM 2822: Paper Tape Reader Control IBM 2835: Control unit model 1 (for 2305-1 Disk) IBM 2835: Control unit model 2 (for 2305-2 Disk) IBM 2841: DASD Control unit (for 2311, 2302, 2303, 2321 and 7320) IBM 2846: Channel controller for System/360 Model 67 IBM 2860: Selector Channel (for SYS/360 2065 & above, 370/165, 168 and 195) IBM 2870: Multiplex Channel (for SYS/360 2065 & above, 370/165, 168 and 195) IBM 2880: Block Multiplex Channel (for 360/85 and 195, 370/165, 168, 195) IBM 2914: Switching Unit (for manually switching channels between central processing units) IBM 3088: Multisystem channel communications unit IBM 3172: LAN Interconnect Controller (or Nways Interconnect Controller) IBM 3814: Switching Management System IBM 4959: I/O expansion unit IBM 4987: Programmable communication subsystem IBM 5085: Graphics Processor. Part of IBM 5080 Graphics System. IBM 5088: Graphics Channel Controller. Part of IBM 5080 Graphics System. IBM 5209: 5250-3270 link protocol converter IBM 7299: Active Star Hub for twinax terminals IBM 7426: Terminal interface unit IBM 7621: Tape Control IBM 7909: Data Channel IBM 8102: Storage and I/O unit for 8100 Information System Data communications devices IBM 3270 IBM 3178: Display station for IBM 3270 IBM 3179: Display station (color or graphics) for IBM 3270 IBM 3180: Monochrome display station, configurable to 80 columns (24, 32 or 43 rows), 132 columns (27 rows) IBM 3191: Monochrome display station IBM 3192G: Terminal. 24 or 32 lines. Graphics. IBM 3193: Display station IBM 3194: Advanced function color display IBM 3196: Display station IBM 3197: Color display work station IBM 3279: Color graphic terminal; 1979 IBM 3290: Gas panel display terminal with 62x160 screen configurable with one to four logical screens, each of which could be further subdivided into partitions under software control; 1983 IBM 3174: 3270 Subsystem controller IBM 3271: Remote 3270 control unit IBM 3272: Local 3270 control unit IBM 3274: 3270 Control unit IBM 3275: Display station IBM 3276: 3270 Control unit display station IBM 3277: Terminal IBM 3278: Display station IBM 3299: 3270 Terminal Multiplexer IBM 1009: IBM 1401/1440/1414/1460 Data Transmission Unit IBM 1013: Card Transmission Terminal IBM 1015: Inquiry/Display Terminal IBM 2210: NWays Multiprotocol Router (router) IBM 2217: NWays Multiprotocol Concentrator IBM 2250: Vector Graphics Display Terminal IBM 2260: CRT Terminal IBM 2265: Display Station IBM 2701: Data Adapter Unit (communication controller) IBM 2702: Transmission Control (communication controller) IBM 2703: Transmission Control (communication controller) IBM 2740: Typewriter communication terminal; 1965 IBM 2741: Typewriter communication terminal; 1965 IBM 2770: Data Communications System; 1969 IBM 2772: Multi-Purpose Control Unit: 1969 IBM 2922: Programmable terminal; 1972 IBM 2840: Display unit IBM 3101: ASCII display station IBM 3102: Thermal printer for attachment to IBM 3101, 3151, 3161, etc. 
IBM 3104: Display station for attachment to IBM 5250 IBM 2840: Display Control Unit Model I for 2250 Model-II Analog Displays IBM 2840: Display Control Unit Model II for 2250 Model III Analog Displays IBM 2848: Display Controller (for 2260) IBM 3151: ASCII display station IBM 3161: ASCII display station IBM 3163: ASCII display station IBM 3164: ASCII color display station IBM 3192: Monochrome display station, configurable to 80 columns (24, 32 or 43 rows), 132 columns (27 rows). Record and playback keystrokes function. All configuration done through keyboard. IBM 3486: 3487, 3488 "Info Window" twinax displays IBM 3735: Programmable Buffered Terminal IBM 3767: Communication terminal IBM 3780: Data communications terminal; 1972 IBM 3781: Card Punch (optional) IBM 3770: Data Communication system. All Terminals came with integrated desk IBM 3771: Communication Terminal Models 1, 2 and 3 IBM 3773: Communication Terminal Models 1, P1, 2, P2, 3 and P3 IBM 3774: Communication Terminal Models 1, P1, 2 and P2 IBM 3775: Communication Terminal Models 1 and P1 IBM 3776: Communication Terminal Models 1 and IBM 3777: Communication Terminal Model 1 IBM 3783: Card Attachment Unit, attached 2502 or 3521 to any 3770 terminal except 3777 IBM 3784: Line Printer, optional second printer for the 3774 IBM 7740: Communication control unit; 1963 IBM 7750: Transmission Control Unit IBM 3704: Communication Controller IBM 3705: Communication Controller IBM 3708: Network control unit IBM 3710: Network Controller IBM 3720: Communication Controller IBM 3721: Expansion unit for IBM 3720 IBM 3724: Controller IBM 3725: Communication Controller IBM 3728: Communication control matrix switch IBM 3745: High-speed communication controller; 1988. Model -410, more? IBM 3746: Multiprotocol Controller IBM 5250: CRT terminal; 1977 IBM 5251: Display Station IBM 5252: Dual display CRT terminal; 1978 IBM 7171: ASCII Device Attachment Control Unit (S/370 Channel-attached protocol converter for mapping ASCII display screens to IBM 3270 format) Power supply/distribution units IBM 3089: IBM 3081/IBM 3090 Power controller. 
50 Hz → 400 kHz Modems IBM 3833: Modem; 1985 IBM 3834: Modem; 1985 IBM 3863: Modem IBM 3864: Modem IBM 3865: Modem IBM 3868: Rack-mounted modem IBM 5810: Limited-distance multi-modem enclosure (for 5811 and 5812) IBM 5811: Limited-distance modem IBM 5812: Limited-distance modem IBM 5841: 1,200-bit/s modem IBM 5842: 2,400-bit/s modem; 1986 IBM 5865: Modem IBM 5866: Modem IBM 5868: Rack mounted modem Magnetic ink and optical readers IBM 1210: Magnetic character-reader/sorter; 1959 IBM 1219: Reader/sorter (to sort things like postal orders); 1961 IBM 1230: Test Scoring IBM 1231: Optical Mark Page Reader IBM 1232: Optical Mark Page Reader IBM 1255: Magnetic Character Reader IBM 1259: Magnetic Character Reader IBM 1270: Optical Reader Sorter IBM 1275: Optical Reader Sorter IBM 1285: IBM 1401/1440/1460/Sys360 Optical Reader for printed numbers IBM 1287: S/360 Optical Reader for handwritten numbers IBM 1288: S/360 Optical Page Reader for hand written numbers and OCR-A Font IBM 1412: Magnetic Character Reader IBM 1418: IBM 1401/1460/Sys360—Optical Reader IBM 1419: IBM 1401/1410/Sys360—Magnetic Character Reader IBM 1428: IBM 1401/1460/Sys360—Optical Reader IBM 1975: Optical Page Reader (Used at SSA from 1965 to 1977) IBM 2956-2: Optical Mark/Hole Reader IBM 2956-3: Optical Mark/Hole Reader IBM 2956-5: Multi-Pocket MCR Reader Sorter (RPQ W19976) IBM 3881: Optical Mark Reader IBM 3886: Optical Character Reader IBM 3890: Document processor IBM 3897: Image capture system IBM 3898: Image processor Other IBM 3117: Image scanner IBM 3118: Image scanner IBM 4577: System/88 expansion cabinet IBM 4993: Series/1-S/370 termination enclosure IBM 4997: Rack enclosure IBM 7170: Device attachment control unit IBM 7770: Audio Response Unit IBM 7772: Audio Response Unit IBM 9037: Sysplex Timer IBM PC components and peripherals IBM 2215: 15" Multisync Color Monitor with Digital Controls 65 kHz for Asia Pacific IBM 4707: Monochrome monitor for Wheelwriter word processor IBM 5144: PC convertible monochrome display IBM 5145: PC convertible color display IBM 5151: IBM PC Display—Monochrome (green) CRT monitor, designed for MDA (1981) IBM 5152: IBM PC Graphics Printer (technically this was an Epson MX-80 dot matrix printer (1979), but it was IBM-labelled (1981) IBM 5153: IBM PC Color Display—CRT monitor, designed for CGA (1983) IBM 5154: IBM Enhanced Color Display—for EGA (1984) IBM 5161: Expansion Unit for the IBM PC, a second chassis that was connected via ISA bus extender and receiver cards and a 60-pin cable connector; the Expansion Unit had its own power supply with enough wattage to drive up to two hard drives (the IBM 5150's original power supply was insufficient for hard drives) (1981–1987?) 
IBM 5173: PC Network baseband extender IBM 5175: IBM Professional Graphics Controller (PGC, PGA) (1984) IBM 5181: Personal Computer Compact Printer IBM 5182: Personal Computer Color Printer IBM 5201: Quietwriter Printer Model 2 IBM 5202: Quietwriter III printer IBM 6312: PS/ValuePoint Color Display IBM 6314: PS/ValuePoint Color Display IBM 6317: Color display IBM 6319: PS/ValuePoint Color Display IBM 6324: Color display IBM 6325: Color display IBM 6327: Color display IBM 8503: Monochrome monitor for PC IBM 8507: PS/2 monochrome display IBM 8512: PS/2 color display IBM 8513: PS/2 color display IBM 8514: PS/2 large color display IBM 8514/A: Display adaptor IBM T220/T221 LCD monitors: 9503 Ultra-high resolution monitor IBM 9521: Monitor IBM 9524: Monitor IBM 9525: Monitor IBM 9527: Monitor IBM E74: CRT monitor, ca 2001 IBM E74M: CRT monitor with built-in speakers and microphone (model no. 6517-U7N) ca 2001 IBM PC keyboard (84 keys)(1981) IBM PC keyboard (101 keys) Enhanced (1984) Monochrome Display Adapter (MDA) Color Graphics Adapter (CGA) Enhanced Graphics Adapter (EGA) Professional Graphics controller (PGC) Multicolor Graphics Adapter (MCGA) Video Graphics Array (VGA) Micro Channel architecture (MCA): 32-bit expansion bus for PS/2 Mwave IBM Deskstar, Travelstar and Ultrastar series of hard disk drives for desktops and laptops, respectively (Acquired by hard disk drive division of Hitachi) Embedded systems, application-specific machines/systems Airline reservation systems Deltamatic: Delta Air Lines reservations system PANAMAC: Pan American World Airways reservations system Programmed Airline Reservations System (PARS): airline reservations system Sabre: reservations system, originally used by American Airlines IBM 9081: airlines version of the 3081 IBM 9083: airlines version of the 3083 IBM 9190: airlines version of the 3090 Bank and finance IBM 801: Proof Machine IBM 802: Proof Machine, 24 pockets IBM 803: Proof Machine, 32 pockets; 1949 to 1981, a product for 32 years! IBM 1201: Proof Inscriber. Proofing machine that was also an inscriber IBM 1202: Utility Inscriber, an electric type-writer, used to inscribe documents with magnetic ink IBM 1203: Unit Inscriber (keyoperated, print on checks, etc. 
with magnetic ink) IBM 1206: Unit Inscriber (CMC-7 encoder) IBM 1240: Banking system; 1963 IBM 1241: Bank Processing Unit IBM 1260: Electronic Inscriber (keyoperated for proving deposits, sorting and listing of checks) IBM 1420: High-speed Bank Transit System; 1962 IBM 1450: Data Processing System for small banks; 1968 IBM 2730: Transaction validation terminal; 1971 IBM 2984: Cash dispensing terminal; 1972 IBM 3600: Finance Communication System; 1973 IBM 3601: Branch Controller IBM 3602: Branch Controller IBM 3604: Teller Terminal (Keyboard/Magnetic Swipe/Display/Optional PINpad) IBM 3606: Teller Terminal (Keyboard/Magnetic Swipe/Display) IBM 3608: Printer with Keyboard and Display IBM 3609: Printer IBM 3610: Document Printer IBM 3611: Passbook Printer IBM 3612: Document/Passbook Printer IBM 3613: Journal Printer IBM 3614: Automatic teller machine (ATM aka CTF); 1973 IBM 3615: Administrative Printer IBM 3616: Journal Printer IBM 3618: Administrative Line Printer (155 lpm, first IBM band printer) IBM 3619: Line Printer ('Australian' administrative printer version) IBM 3620: Magnetic Stripe Reader Encoder and Journal/Document Printer IBM 3621: Statement Printer with Magnetic Stripe Reader and optional Keyboard/PINpad IBM 3624: Through-the-wall ATM; 1979 IBM 3670: Brokerage communications system; 1971 IBM 3895: Deposit processing system; 1978 IBM 4700: Branch Banking Equipment; 1981 IBM 4701: Branch Controller (8" floppy disc) IBM 4702: Branch Controller (5¼" HD floppy disc; hard disc) IBM 4704: Teller Terminal (Keyboard/Magnetic Swipe/Display/Optional PINpad) IBM 4710: Journal/Cutform Printer IBM 4712: Journal/Cutform Printer IBM 4713: Verification Printer IBM 4715: Printer IBM 4720: Cutform/Passbook Printer IBM 4722: Passbook Printer IBM 4723: Document Processor IBM 4730: Counter-style Personal Banking Machine (PBM); 1983 IBM 4731: In-lobby PBM; 1983 IBM 4732: In-lobby PBM; 1987 IBM 4736: Cash-only PBM IBM 4737: Self-service transaction station IBM 4781: Table Top ATM; 1991 (re-badged Diebold 1060) IBM 4782: In-lobby ATM; 1991 (re-badged Diebold 1062) IBM 4783: Cash-only ATM; 1991 (re-badged Diebold 1064) IBM 4785: Exterior ATM; 1991 (re-badged Diebold 1072) IBM 4786: Exterior Cash-only ATM; 1991 (re-badged Diebold 1071) IBM 4787: Exterior Drive-up ATM; 1991 (re-badged Diebold 1073) IBM 4788: Exterior Self-standing Cash-only ATM; 1991 (re-badged Diebold 1074) IBM 4789: Cash-only ATM; 1991 (re-badged Diebold 1063) IBM 5922: Low-speed magnetic ink character recognition (MICR) Reader IBM 5995: Branch Controller Computer-aided drafting (CAD) IBM 7361: Fastdraft System; 1982, a low-cost drafting system using a light pen and a CRT screen IBM 7361: Graphics Processor Unit IBM 3251: Graphics Display Station Model 2 Word processing IBM MT/ST: Magnetic Tape/Selectric Typewriter; 1964 IBM MC/ST: Magnetic Card/Selectric Typewriter (Mag Card); 1969 IBM Displaywriter System; 1980 IBM 6360: IBM Displaywriter: Diskette Unit IBM 6361: IBM Displaywriter: Mag Card Unit IBM 6580: IBM Displaywriter: Display Station IBM Office System/6 IBM 6/420: stand-alone information processing unit; part of the Office System/6; 1978 IBM 6/430: information processor; part of the Office System/6; 1977 IBM 6/440: information processor; part of the Office System/6; 1977 IBM 6/442: information processor; part of the Office System/6; 1978 IBM 6/450: information processor; part of the Office System/6; 1977 IBM 6/452: information processor; part of the Office System/6; 1978 Other document processing IBM 1282: Optical reader card 
punch IBM 3740: Data entry system; 1973 IBM 3741: Data Station Models 1 and 2, Programmable Work Stations Models 3 and 4 IBM 3742: Dual Data Station IBM 3713: Printer IBM 3715: Printer IBM 3717: Printer IBM 3747: Data Converter IBM 3694: Document Processor; 1980 IBM 3881: Optical Mark Reader; 1972 IBM 3886: Optical Character Reader; 1972 IBM 3890: Document Processor; 1973 IBM 3891: Document Processor; 1989 IBM 3892: Document Processor; 1987 IBM 3895: Document Reader/Inscriber; 1977 IBM 5321: Mag Card Unit for System/32; 1976 IBM 6640: Document printer; 1976; in 1977 reassigned being part of the Office System/6 IBM 9370: Document reproducer; 1966 Educational IBM 1500: Computer-assisted instruction system; 1966 IBM 1510: Display Console IBM 1512: Image Projector Government: avionics, computation, command and control, and space systems IBM Relay Calculator: aka The IBM Pluggable Sequence Relay Calculator (Aberdeen Machine), 1944 IBM NORC: Naval Ordnance Research Calculator; 1954 AN/FSQ-7: computer for the Semi-Automatic Ground Environment; 1959 (IBM had the manufacturing contract.) IBM 728: Magnetic Tape Reader/Recorder (7 Track—6 data bits & 1 synchronization bit; 248 characters/inch) AN/FSQ-8 Combat Control Central: variant of the AN/FSQ-7 AN/FSQ-31V: US Air Force Command and Control Data Processing Element for SACCS; 1959–1960 IBM 4020: IBM id for the AN/FSQ-31V AN/FSQ-32: SAGE Solid State Computer IBM 2361: NASA's Manned Spacecraft Center IBM 7094 II Core Storage Unit (524288—36-bit words); 1964 ASC-15 Titan II Guidance Computer Gemini Guidance Computer Saturn Guidance Computer Saturn instrument unit IBM System/4 Pi: avionics computers; military and NASA; 1967 Skylab Onboard Computers Space Shuttle General Purpose Computer AN/ASQ-155 computer IBM RAD6000: Radiation-hardened single board computer, based on the IBM RISC Single Chip CPU ASCI White Supercomputer: Built as stage three of the Accelerated Strategic Computing Initiative (ASCI) started by the U.S. Department of Energy and the National Nuclear Security Administration IBM 7950: Cryptanalytic computer using 7030 as CPU; 1962 (Harvest) IBM 7951: IBM 7950 Stream coprocessor IBM 7952: IBM 7950 High performance core storage (1024—72-bit words: 64 data bits & 8 ECC bits) IBM 7955: IBM 7950 Tractor Magnetic tape system (22 Track—16 data bits & 6 ECC bits; 2,400 words/inch) IBM 7959: IBM 7950 High-speed I/O exchange IBM 9020: for FAA and one system for the UK CAA. IBM 7201: enhanced 2065 (S/360-65) used as a Computing Element (CE) in the IBM 9020 complex IBM 7231: enhanced 2050 (S/360-50) used as an Input Output Control Element (IOCE) in the IBM 9020 complex IBM 7251: 512KiB (byte = 8 bits + P) core Storage Element (SE) used in the IBM 9020 complex IBM 7289-02: Peripheral Adapter Module (PAM) used in the IBM 9020D complex IBM 7289-04: Display Element (DE) used in the IBM 9020E complex IBM 7262: System Console (SC) used in the IBM 9020D complex IBM 7265: Configuration Console (CC) used in the IBM 9020E complex Industry and manufacturing IBM 357: Data Collection system; 1959 IBM 013: Badge Punch IBM 024/026: Card Punch (81 col) IBM 357: Input Station (Badge and/or serial card reader) IBM 358: Input Control Unit IBM 360: Clock Read-Out Control IBM 361: Read-Out Clock IBM 372: Manual Entry IBM 373: Punch Switch IBM 374: Cartridge Reader IBM 1001: Data transmission system; 1960 IBM 1030: Data Collection system; 1963 IBM 1031: Input Station. IBM 1032: Digital Time Unit. IBM 1033: Printer. 
IBM 1034: Card Punch IBM 1035: Badge Reader IBM 1050: Data Communications System; 1963 IBM 1026: Transmission Control Unit IBM 1051: Central Control Unit IBM 1052: Printer-Keyboard, based on Selectric mechanism IBM 1053: Console Printer, based on Selectric mechanism IBM 1054: Paper Tape Reader IBM 1055: Paper Tape Punch IBM 1057: Punched Card Output IBM 1058: Printing Card Punch Output IBM 1092: Programmed Keyboard (keyboard storage for input to 1050) IBM 1093: Programmed Keyboard (used in tandem with 1092 for transmission to 24/26 or 7770) IBM 1060: Data Communications System IBM 1026: Transmission Control Unit IBM 1070: Process Communication System; 1964 IBM 1026: IBM 1030/1050/1060/1070 Transmission Control Unit IBM 1071: Terminal Control Unit IBM 1072: Terminal Multiplexer IBM 1073: Latching Contact Operate Model 1 IBM 1073: Counter Terminal Model 2 IBM 1073: Digital-Pulse Converter Model 3 IBM 1074: Binary Display IBM 1075: Decimal Display IBM 1076: Manual Binary Input IBM 1077: Manual Decimal Input IBM 1078: Pulse Counter IBM 1080: Data Acquisition System IBM 1081: DAS Control...for analytical applications IBM 1082: Punched Card Input IBM 1083: Remote Control (provides Operator Scan Request) IBM 1084: Sampler Reader (Technicon Sampler 40) IBM 1055: Paper Tape Punch IBM 1057: Punched Card Output IBM 1058: Printing Card Punch Output IBM 1710: Control system based on IBM 1620; 1961 IBM 1620: IBM 1710 Central Processing Unit IBM 1711: IBM 1710 Data Converter (A/D) IBM 1712: IBM 1710 Multiplexer and Terminal Unit IBM 1720: Control system based on IBM 1620; 1961 IBM 1800: Process control variant of the IBM 1130; 1964 IBM 2790: Data Communications System; 1969 IBM 2715: Transmission controller IBM 2791: Area Station IBM 2793: Area Station IBM 2795: Data Entry Unit IBM 2796: Data Entry Unit IBM 2797: Data Entry Unit IBM 2798: Guided Display Unit IBM 3630: Plant Communications System; 1978 IBM 3730: Distributed office communication system; 1978 IBM Series/1: brand name for process control computers; 1976 IBM 4953: Series/1 processor model 3; 1976 IBM 4954: Series/1 processor model 4 IBM 4955: Series/1 processor model 5; 1976 IBM 4956: Series/1 processor model 6 IBM 4982: Sensor I/O unit IBM 5010: System/7 processor; industrial control; 1970 IBM 5012: Multifunction Module IBM 5013: Digital Input/Output Module IBM 5014: Analog Input Module IBM 5022: Disk Storage Unit IBM 5025 Enclosure IBM 5028: Operator Station IBM 5010E: System/7 Maritime Application/Bridge System; 1974 IBM 5090: N01 Radar Navigation Interface Module IBM 5090: N02 Bridge Console IBM 5026: C03 Enclosure (vibration hardened) IBM 5230: Data Collection system; IBM 5231: Controller Models 1,2, and 3 IBM 5234: Time Entry Station Models 1 and 2 IBM 5235: Data Entry Station IBM 5230: Data Collection System Accessory Package IBM 5275: Direct Numerical Control Station; 1973 IBM 5531: Industrial computer for plant environments; 1984 IBM 5937: Industrial Terminal; 1976 IBM 7531: Industrial computer; 1985 IBM 7532: Industrial computer; 1985 IBM 7535: Industrial robotic system; 1982 IBM 7552: Industrial computer; 1986 IBM 7565: Industrial robotic system; 1982 IBM 7700: Data Acquisition System, not marketed; 1964 IBM 9003: Industrial computer; 1985 Medical/science/lab equipment IBM 2991: Blood cell separator; 1972; model 2 1976 IBM 2997: Blood cell separator; 1977 IBM 5880: Electrocardiograph system; 1978 IBM 9630: Gas chromograph; 1985 Research/advertising (not product) machines IBM Columbia Difference Tabulator: 1931 IBM ASCC: Automatic 
Sequence Controlled Calculator (aka. Harvard Mark I); 1944 IBM SSEC: Selective Sequence Electronic Calculator; 1948 IBM Deep Blue: Chess playing computer developed for 1997 match with Garry Kasparov IBM Watson: An artificially intelligent computer system capable of answering questions posed in natural language, specifically developed to answer questions on the quiz show Jeopardy!. Retail/point-of-sale (POS) IBM 3650: Retail Store System; 1973 IBM 3651: Store Controller Model A50 or B50 IBM 3653: Point of Sale Terminal IBM 3657: Ticket Unit IBM 3659: Remote Communications Unit IBM 3784: Line Printer IBM 3660: Supermarket System; 1973 IBM 3651: Store Controller Model A60 or B60 IBM 3661: Store Controller IBM 3663: Supermarket Terminal ; 1973 IBM 3666 Checkout Scanner IBM 3669: Store Communications Unit IBM 4610: SureMark Retail Printer IBM 4683: PC Based Retail System; 1987 IBM 4693: PC Based Retail System IBM 4694: PC Based Retail System IBM SurePOS 300: Cost effective PC Based Retail System IBM SurePOS 500: All in one PC Based Retail System IBM SurePOS 700: High performance PC Based Retail System IBM SureOne: PC Based Retail System AnyPlace POS: Customer touch screen kiosk BART (Bay Area Rapid Transport) fare collection machines; 1972 Telecommunications International Time Recording Co. Series 970: Telephone System (1930s) SAIS (Semi-Automatic Intercept System): Added automated custom intercept messages to the Bell System's operator-based centralized intercept system, using a computer-controlled magnetic drum audio playback medium. Late 1960s. IBM 1750: Switching System IBM 1755: Operator station IBM 2750: Switching System IBM 3750: Switching System IBM 3755: Operator Desk IBM 8750: Business Communications System (ROLM) IBM 9750: Business Communications System (ROLM) IBM 9751: CBX: Main component of 9750 system IBM Simon: Smartphone; 1994 Unclassified IBM TouchMobile a hand-held computer announced in 1993 Computer software Some software listings are for software families, not products (Fortran was not a product; Fortran H was a product). Some IBM software products were distributed free (no charge for the software itself, a common practice early in the industry). The term "Program Product" was used by IBM to denote that the software is generally available at an additional charge. Prior to June 1969, the majority of software packages written by IBM were available at no charge to IBM customers; with the June 1969 announcement, new software not designated as "System Control Programming" became Program Products, although existing non-system software remained available for free. 
Operating systems AIX, IBM's family of proprietary UNIX OS's (Advanced Interactive eXecutive) on multiple platforms BPS/360 (Basic Programming Support/360) BOS/360 (Basic Operating System/360) TOS/360 (Tape Operating System/360) DM2, Disk Monitor System Version 2 for the IBM 1130 DOS/360 (Disk Operating System/360) DOS/VS (Disk Operating System/Virtual Storage—370), virtual memory successor to DOS/360 DOS/VSE (Virtual Storage Extended—370, 4300) VSE/AF (VSE/Advanced Functions) enhancements to DOS/VSE VSE/SP (VSE/System Package), integrates DOS/VSE, VSE/AF and other products, replaces SSX/VSE VSE/ESA (Virtual Storage Extended/Enterprise System Architecture), replaces VSE/SP z/VSE for z/Architecture DPCX (Distributed Processing Control eXecutive) for IBM 8100 DPPX (Distributed Processing Programming eXecutive) for IBM 8100 and, later, the ES/9370 CPF (Control Program Facility) for the System/38 IBM i, previously i5/OS and OS/400, successor to CPF for AS/400, IBM Power Systems, and PureSystems IBSYS/IBJOB (IBM 7090/94 operating system) IX/370 An IBM proprietary UNIX OS (Interactive eXecutive for IBM System/370) Model 44 Programming System for the System/360 Model 44 OS/360 (Operating System/360 for IBM System/360) PCP (Primary Control Program option) MFT (Multiprogramming with a Fixed number of Tasks option) MVT (Multiprogramming with a Variable number of Tasks option) M65MP (Model 65 Multiprocessor option) OS/VS1 (Operating System—Virtual Storage 1) for IBM System/370, virtual memory successor to MFT OS/VS2 (Operating System—Virtual Storage 2) for IBM System/370, virtual memory successor to MVT SVS: Release 1 (Single Virtual Storage) MVS: Release 2–3.8 (Multiple Virtual address Spaces) MVS/370 (OS/VS2 2.0-3.8, MVS/SE, MVS/SP V1) MVS/SE: MVS System Extensions Release 1: based on OS/VS2 R3.7 plus selectable units Release 2: based on OS/VS2 R3.8 plus selectable units MVS/SP V1: MVS/System Product, replacement for MVS/SE MVS/XA (Multiple Virtual Systems—Extended Architecture): MVS/SP V2 MVS/ESA (Multiple Virtual Systems—Enterprise Systems Architecture) MVS/SP V3 MVS/ESA SP V4 MVS/ESA SP V5 OS/390, successor to MVS/ESA for IBM System/390 z/OS, successor to OS/390 for z/Architecture and, up through Version 1.5, System/390 OS/2 (Operating System/2) for the IBM PS/2 and other x86 systems PC DOS (Personal Computer Disk Operating System) System Support Program for System/34, System/36 Transaction Processing Facility (TPF), formerly IBM Airline Control Program (ACP) z/TPF, successor to TPF TSS/360 (Time Sharing System, a failed predecessor to VM/CMS, intended for the IBM System/360 Model 67) CP-67 May refer to either a package for the 360/67 or only to the Control program of that package. CP/CMS Another name for the CP-67 package for the 360/67; predecessor to VM. VM, sometimes called VM/CMS (Virtual Machine/Conversational Monitor System) Successor systems to CP-67 for the S/370 and later machines. First appeared as Virtual Machine Facility/370 and most recently as z/VM. VM/SE Virtual Machine/System Extension, also known as System Extension Program Product (SEPP). An enhancement to Virtual Machine Facility/370, replaced by VM/SP. VM/BSE Virtual Machine/Basic System Extension, also known as Basic System Extension Program Product (BSEPP). An enhancement to Virtual Machine Facility/370, providing some of the facilities of VM/se, replaced by VM/SP. VM/SP Virtual Machine/System Product, replacing VM/SE and the base for all future VM versions. 
VM/XA Virtual Machine/Extended Architecture 31-bit VM VM/XA MF (Virtual machine/Extended architecture Migration Aid) VM/XA SF (Virtual Machine/Extended Architecture Systems Facility), successor to VM/XA SF VM/XA SP (Virtual Machine/Extended Architecture Systems Product), successor to VM/XA SF VM/ESA (Virtual Machine/Enterprise System Architecture), successor to VM/XA z/VM, successor to VM/ESA 4690 OS (retail) Utilities and languages A20 handler for the PC (address line 20 handler) Ada ALGOL 60 ALGOL F compiler for OS/360 APL IBM APL implementations IBM APL2 implementations Autocoder macro assemblers for various machines, with nothing in common but the name COBOL IBM COBOL compilers IBM Compilers (formerly VisualAge compilers (C/C++, Fortran, Java, et al.)) CSP (Cross System Product) Document Composition Facility (DCF) A package that contains SCRIPT/VS, the GML Starter Set (GMLSS) and supporting files. Eclipse an IDE EGL (Enterprise Generation Language) FARGO (Fourteen-o-one Automatic Report Generation Operation). Predecessor of RPG for the IBM 1401 FAP assembler for the IBM 709, 7090, and 7094 (FORTRAN Assembly Program) FORTRAN (originally developed by IBM for the 704) (FORmula TRANslator) Generalized Markup Language (GML) A document markup language, part of Document Composition Facility (DCF) BookMaster An enhanced version of the GML Starter Set (GMLSS) in DCF BookManager BUILD/MVS and BookManager BUILD/VM An enhanced version of BookMaster. IBM Information Access Gave customers access to the Retain and PTF databases, circa 1981 ISPF Interactive System Productivity Facility. An IDE for MVS and z/OS systems JCL batch job language for OS/360 and successors JES1, JES2 and JES3, job entry and spooling subsystems MAP (Macro Assembly Program in the IBJOB component of IBSYS) Pascal PL/I (Programming Language/One) PL/I F compiler for OS/360 and PL/I D compiler for DOS/360 PL/I Optimizing Compiler and PL/I Checkout Compiler IBM Enterprise PL/I IBM PL/I for OS/2, AIX, Linux, and z/OS PL/S (Programming Language/Systems), originally named BSL (Basic Systems Language), later PL/AS, PL/X POWER spooler for DOS/360 and successors (Program Output Writers and Execution Readers) REXX scripting language (REstructured eXtended eXecutor) RPG (Report Program Generator) RPG for IBM 1401 and System/360 RPG II for System/3, System/32, System/34, System/36, and System/370 RPG III for System/38, its successor AS/400, and System/370 RPG IV for RISC AS/400 and other machines running IBM i SOAP (Symbolic Optimal Assembly Program for IBM 650) Script A document markup language SCRIPT component of CP/CMS SCRIPT/370 SCRIPT/VS Component of Document Composition Facility (DCF) SCRIPT/PC A subset of SCRIPT running under PC DOS SPS (Symbolic Programming System). An assembler for IBM 1401 or IBM 1620 systems, less capable than Autocoder VFU (Vocabulary File Utility) for IBM 7772 XEDIT an editor for VM/CMS systems Middleware and applications IBM distributes its diverse collection of software products over several brands; mainly: IBM's own branding for many software products originally developed in-house; Lotus: collaboration and communication; Rational: software development and maintenance; Tivoli: management, operations, and Cloud; WebSphere: Internet. 
Watson Main article: IBM Watson Watsonx Main article: IBM Watsonx 9PAC Report generator for the IBM 7090 (709 PACkage) IBM Administrative Terminal System (ATS) Online Text Entry, Editing, Processing, Storage and Retrieval IBM Advanced Text Management System (ATMS) A CICS-based successor to ATS, ATMS served as the text entry system for STorage And Retrieval System (STAIRS) IBM Assistant Series (Filing Assistant, Reporting Assistant, Graphing Assistant, Writing Assistant and Planning Assistant) IBM Audio Distribution System IBM BS12 (IBM Business System 12) IBM CICS (Customer Information Control System) IBM CICS Transaction Gateway IBM CICS Web interpreter, IBM OD390 IBM Cloudscape Pure Java Database Server. Now open source Apache Derby IBM Cognos Business Intelligence Business Intelligence Suite IBM Concurrent Copy, backup software IBM Content Manager OnDemand (CMOD) IBM Db2 Relational DBMS (DataBase 2) IBM DB2 Content Manager IBM DB2 Document Manager IBM DB2 Records Manager IBM Deep Computing Visualization for Linux V1.2 IBM DISOSS Distributed Office Support System IBM Document Composition Facility (DCF); includes SCRIPT/VS IBM Document Library Facility (DLF) IBM BookMaster IBM BookManager IBM FileNet products, P8 Business Process Management and Enterprise Content Management (FileNet bought by IBM) IBM Graphical Data Display Manager (GDDM). IBM Generalized Information System (GIS). IBM HTTP Server IBM i2 Analyst's Notebook and COPLINK IBM Information Management System (IMS) Hierarchical database management system (DBMS) IBM Informix Dynamic Server IBM Lotus cc:Mail IBM Lotus Connections IBM Lotus Expeditor IBM Lotus QuickPlace IBM Lotus Quickr IBM Lotus Notes (Lotus Development was bought by IBM in 1995) IBM Lotus Sametime IBM Lotus SmartSuite Office Suite IBM Lotus Symphony Office Suite IBM Maximo Asset Management IBM Network Design and Analysis (NETDA) IBM Network Performance Monitor (NPM) IBM OfficeVision (originally named PROFS) IBM OMEGAMON IBM Personal Communications Emulator, also known as Host Access Client IBM Planning Analytics IBM Print Management Facility (PMF) IBM Print Services Facility (PSF) IBM QualityStage Acquired from Ascential Rational Software's products (Rational bought by IBM in 2003) IBM Rational Application Developer IBM Rational Software Architect IBM Rational System Architect IBM Rational Asset Manager IBM Rational Automation Framework Previously known as IBM Rational Automation Framework for WebSphere IBM Red Brick Database Server IBM RFID Information Center (RFIDIC) Tracking and tracing products through supply chains IBM Screen Definition Facility II (SDF II), a software tool for the interactive development of screen definition panels. 
IBM SearchManager text search, successor to STAIRS IBM Security Key Lifecycle Manager IBM Softek TDMF IBM STorage And Information Retrieval System (STAIRS) Text search IBM Sterling B2B Integrator IBM Teleprocessing Network Simulator (TPNS) IBM Tivoli Access Manager (TAM) IBM Tivoli Application Dependency Discovery Manager (TADDM) IBM Tivoli Asset Manager for IT (TAMIT) IBM Tivoli Framework (Tivoli Systems was bought by IBM in 1995) IBM Tivoli Change and Configuration Management Database (CCMDB) IBM Tivoli Compliance Insight Manager (TCIM) IBM Tivoli Monitoring IBM Tivoli Netview IBM Tivoli Netcool IBM Tivoli Provisioning Manager IBM Tivoli Service Automation Manager IBM Tivoli Storage Manager (Formerly ADSM, moved to Tivoli in 1999) IBM Tivoli Storage Manager FastBack IBM Tivoli Workload Scheduler IBM Tivoli System Automation IBM U2, including IBM UniVerse and IBM UniData Dimensional database DBMS IBM ViaVoice Dictation (early version: IBM VoiceType) IBM Virtualization Engine IBM VSPC IBM WebSphere IBM WebSphere Application Server IBM WebSphere Adapters IBM Websphere Business Events IBM WebSphere Banking Transformation Toolkit IBM Websphere Host On-Demand (HOD) Host On-Demand Web-based TN3270, TN5250 and VT440 Terminal Emulation. IBM WebSphere Message Broker IBM WebSphere MQ (previously known as IBM MQSeries) IBM WebSphere Portal IBM WebSphere Portlet Factory IBM WebSphere Process Server WebSphere Service Registry and Repository IBM Worklight (Mobile application platform) IBM Workplace Web Content Management (IWWCM) Web content management for WebSphere Portal and Domino servers (Presence Online dba Aptrix bought by IBM in 2003) IBM Works Office suite for OS/2 IBM Z Operational Log and Data Analytics IBM Z Anomaly Analytics with Watson IBM z/OS Workload Interaction Navigator TOURCast CoScripter ICCF Interactive Computing and Control Facility. An interactive editor that runs under CICS on DOS/VSE. Now included as part of "VSE Central Functions." NCCF Network Communications Control Facility. A network monitoring and control subsystem Watson Customer Engagement The Watson Customer Engagement (commonly known as WCE and formerly known as IBM Commerce) business unit supports marketing, commerce, and supply chain software development and product offerings for IBM. Software and solutions offered as part of these three portfolios by WCE are as follows: Watson Marketing Portfolio Watson Campaign Automation IBM Tealeaf IBM Campaign Customer Experience Analytics Watson Marketing Insights IBM Journey Designer Watson Real-Time Personalization Watson Content Hub Watson Commerce IBM Configure, Price, Quote IBM Digital Commerce IBM WebSphere Commerce Watson Commerce Insights IBM Order Management IBM Store Engagement Watson Order Optimizer IBM Call Center IBM Inventory Visibility IBM Watson Pay IBM Payment Gateway IBM Dynamic Pricing IBM Price Optimization IBM Price Management IBM Markdown Optimization Forms Experience Builder Watson Supply Chain IBM Supply Chain Business Network IBM Connect:Direct IBM Supply Chain Insights IBM B2B Integration Portfolio IBM Strategic Supply Management Watsonx watsonx.ai watsonx.data watsonx.governance Models IBM Granite Data centers Portable Modular Data Center Scalable Modular Data Center Services Call/360 timesharing service (1968) IBM's service bureau business: an in-house service, offered until 1957. See SBC, below. 
Silverpop, an Atlanta-based software company. Service Bureau Corporation (SBC) was a subsidiary of IBM formed in 1957 to operate IBM's former service bureau business as an independent company. In 1973 it was sold to Control Data Corporation. See also IBM Product Center History of IBM magnetic disk drives History of hard disk drives OS/360 and successors :Category:IBM products Notes References External links IBM Mainframe Family tree & chronology IBM Storage basic information sources IBM Offering Information products IBM IBM
List of IBM products
[ "Technology" ]
25,407
[ "Computing-related lists", "IBM lists" ]
88,340
https://en.wikipedia.org/wiki/Galvanism
Galvanism is a term invented by the late 18th-century physicist and chemist Alessandro Volta to refer to the generation of electric current by chemical action. The term also came to refer to the discoveries of its namesake, Luigi Galvani, specifically the generation of electric current within biological organisms and the contraction/convulsion of biological muscle tissue upon contact with electric current. While Volta theorized and later demonstrated the phenomenon of his "Galvanism" to be replicable with otherwise inert materials, Galvani thought his discovery to be a confirmation of the existence of "animal electricity," a vital force which gave life to organic matter. History Johann Georg Sulzer Galvanic phenomena were described in the literature before it was understood that they were of an electrical nature. In 1752, when the Swiss mathematician and physicist Johann Georg Sulzer placed his tongue between a piece of lead and a piece of silver, joined at their edges, he perceived a taste similar to that of iron(II) sulfate. Neither of the metals alone produced this taste. He realized that the contact between the metals probably did not produce a solution of either on the tongue. He did, however, not realize that this was an electrical phenomenon. He concluded that the contact between the metals caused their particles to vibrate, producing this taste by stimulating the nerves of the tongue. Luigi Galvani According to popular legend, Galvani discovered the effects of electricity on muscle tissue when investigating an unrelated phenomenon which required skinned frogs in the 1780s and 1790s. His assistant is claimed to have accidentally touched a scalpel to the sciatic nerve of the frog and this resulted in a spark and animation of its legs. This was building on the theories of Giovanni Battista Beccaria, Felice Fontana, Leopoldo Marco Antonio Caldani, and . Galvani was investigating the effects of distant atmospheric electricity (lightning) on prepared frog legs when he discovered the legs convulsed not only when lightning struck but also when he pressed the brass hooks attached to the frog's spinal cord to the iron railing they were suspended from. In his laboratory, Galvani later discovered that he could replicate this phenomenon by touching metal electrodes of brass connected to the frog's spinal cord to an iron plate. He concluded that this was proof of "animal electricity," the electric power which animated living things. Alessandro Volta Alessandro Volta, a contemporary physicist, believed that the effect was explicable not by any vital force but rather it was the presence of two different metals that was generating the electricity. Volta demonstrated his theory by creating the first chemical electric battery. Despite their differences in opinion, Volta named the phenomenon of the chemical generation of electricity "Galvanism" after Galvani. Galvani publishes his work On March 27, 1791, Galvani published a book about his work on animal electricity. It contained comprehensive details of his 11 years of research and experimentation on the topic. The 1797 edition of Gren’s Grundriss der Naturlehre provides the first explicit definition of 'galvanism' as clearly reflecting Volta’s opinion in the following terms: Galvani from Bologna was the first to observe muscular motions elicited by the contact between two different metals; after him, the phenomena of this sort were termed and included under the name of Galvanism. 
Giovanni Aldini Giovanni Aldini, Galvani's nephew, continued his uncle's work after Luigi Galvani died in 1798. In 1803, Aldini performed a famous public demonstration of the electro-stimulation technique of deceased limbs on the corpse of an executed criminal George Foster at Newgate in London. The Newgate Calendar describes what happened when the galvanic process was used on the body: Galvani has been called the father of electrophysiology. The debate between Galvani and Volta "would result in the creation of electrophysiology, electromagnetism, electrochemistry and the electric battery." Scientific and intellectual legacy Literature Mary Shelley's Frankenstein, wherein a man stitches together a human body from corpses and brings it to life, was inspired in part by the theory and demonstrations of Galvanism which may have been conducted by James Lind. Although the Creature was described in later works as a composite of whole body parts grafted together from cadavers and reanimated by the use of electricity, this description is not consistent with Shelley's work; both the use of electricity and the cobbled-together image of Frankenstein's monster were more the result of James Whale's popular 1931 film adaptation of the story. Abiogenesis Galvanism influenced metaphysical thought in the domain of abiogenesis, the underlying process of the generation of living forms. In 1836, Andrew Crosse recorded what he referred to as "the perfect insect, standing erect on a few bristles which formed its tail," as having appeared during an experiment wherein he used electricity to produce mineral crystals. While Crosse himself never claimed to have generated the insects, even in private, the scientific world at the time viewed the connection between life and electricity to be sufficiently clear that he received threats against his life for this "blasphemy." Medicine Giovanni Aldini is claimed to have applied Galvanic principles (application of electricity to biological organisms) in successfully alleviating the symptoms of "several cases of insanity", and with "complete success". Today, electroconvulsive therapy is used as a treatment option for severely depressed pregnant mothers (as it is the least harmful for the developing fetus) and people suffering treatment-resistant major depressive disorder. It is found to be effective for half of those who receive treatment while the other half may relapse within 12 months. The modern application of electricity to the human body for medical diagnostics and treatments is practiced under the term electrophysiology. This includes the monitoring of the electric activity of the heart, muscles, and even the brain, respectively termed electrocardiography, electromyography, and electrocorticography. See also Action potential Bioelectromagnetics Electrochemistry Electrohomeopathy Electrotherapy Electrotherapy (cosmetic) Hallerian physiology, for a counter-theory to Galvanism References External links The history of galvanism Electrochemistry Muscular system
Galvanism
[ "Chemistry" ]
1,267
[ "Electrochemistry" ]
11,437,063
https://en.wikipedia.org/wiki/Levomefolic%20acid
Levomefolic acid (INN, also known as L-5-MTHF, L-methylfolate and L-5-methyltetrahydrofolate and (6S)-5-methyltetrahydrofolate, and (6S)-5-MTHF) is the primary biologically active form of folate used at the cellular level for DNA reproduction, the cysteine cycle and the regulation of homocysteine. It is also the form found in circulation and transported across membranes into tissues and across the blood–brain barrier. In the cell, L-methylfolate is used in the methylation of homocysteine to form methionine and tetrahydrofolate (THF). THF is the immediate acceptor of one carbon unit for the synthesis of thymidine-DNA, purines (RNA and DNA) and methionine. The un-methylated form, folic acid (vitamin B9), is a synthetic form of folate, and must undergo enzymatic reduction by dihydrofolate reductase (DHFR) to become biologically active. It is synthesized in the absorptive cells of the small intestine from polyglutamylated dietary folate. It is a methylated derivative of tetrahydrofolate. Levomefolic acid is generated by methylenetetrahydrofolate reductase (MTHFR) from 5,10-methylenetetrahydrofolate (MTHF) and used to recycle homocysteine back to methionine by methionine synthase (MS). L-Methylfolate is water-soluble and primarily excreted via the kidneys. In a study of 21 subjects with coronary artery disease, peak plasma levels were reached in one to three hours following oral or parenteral administration. Peak concentrations were found to be more than seven times higher than folic acid (129 ng/ml vs. 14.1 ng/ml). Patients at risk for vitamin B12 deficiency should consult with their medical provider prior to taking L-Methylfolate. The interrelationship between these two vitamins (L-Methylfolate and B12) is best explained by the methyl trap hypothesis. Metabolism Medical uses Major depressive disorder Research suggests that levomefolic acid (L-methylfolate) taken with a first-line antidepressant provides a modest adjunctive antidepressant effect for individuals who do not respond or have only a partial therapeutic response to SSRI or SNRI medication, and might be a more cost-effective adjunctive agent than second-generation antipsychotics. Cardiovascular disease and cancer Levomefolic acid (and folic acid in turn) has been proposed for treatment of cardiovascular disease and advanced cancers such as breast and colorectal cancers. It bypasses several metabolic steps in the body and better binds thymidylate synthase with FdUMP, a metabolite of the drug fluorouracil. Patent issues In March 2012, Merck & Cie of Switzerland, Pamlab LLC (maker of Metanx and Cerefolin, Neevo DHA, and Deplin), and South Alabama Medical Science Foundation (SAMSF) (the plaintiffs) filed a complaint in the United States District Court for the Eastern District of Texas against four defendants: Macoven Pharmaceuticals (owned by Pernix Therapeutics), Gnosis SpA of Italy, Gnosis U.S.A and Gnosis Bioresearch Switzerland. The plaintiffs alleged that the defendants infringed on several of the plaintiffs' patents. The Macoven products named in the suit are: "Vitaciric-B", "ALZ-NAC", "PNV DHA", and l-methylfolate calcium (levomefolate calcium). In September 2012, the same three plaintiffs filed a complaint requesting that the International Trade Commission begin a investigation of the same four defendants. The complaint states that Gnosis' "Extrafolic-S" and products which are made from it, infringe upon three of their patents: , , and . 
Formulations Levomefolate calcium, a calcium salt of levomefolic acid is sold under the brand name Metafolin and incorporated in Deplin. Levomefolate magnesium is a magnesium salt of levomefolic acid, manufactured as DeltaFolate, a primary ingredient in EnLyte. See also 5,10-Methylenetetrahydrofolate S-Adenosylmethionine (SAMe) References External links Calcium L5-methyltetrahydrofolate (L-5-MTHF-Ca) Folates Coenzymes Medical food Patent case law
Levomefolic acid
[ "Chemistry" ]
998
[ "Organic compounds", "Coenzymes" ]
11,437,108
https://en.wikipedia.org/wiki/5%2C10-Methylenetetrahydrofolate
5,10-Methylenetetrahydrofolate (N5,N10-Methylenetetrahydrofolate; 5,10-CH2-THF) is a cofactor in several biochemical reactions. It exists in nature as the diastereoisomer [6R]-5,10-methylene-THF. As an intermediate in one-carbon metabolism, 5,10-CH2-THF converts to 5-methyltetrahydrofolate, 5-formyltetrahydrofolate, and methenyltetrahydrofolate. It is a substrate for the enzyme methylenetetrahydrofolate reductase (MTHFR). It is mainly produced by the reaction of tetrahydrofolate with serine, catalyzed by the enzyme serine hydroxymethyltransferase. Selected functions Formaldehyde equivalent Methylenetetrahydrofolate is a source of the equivalent of formaldehyde, or CH2²⁺, in biosyntheses. Methylenetetrahydrofolate is also an intermediate in the detoxification of formaldehyde. Pyrimidine biosynthesis It is the one-carbon donor for thymidylate synthase, for methylation of 2-deoxyuridine-5'-monophosphate (dUMP) to 2-deoxythymidine-5'-monophosphate (dTMP). The coenzyme is necessary for the biosynthesis of thymidine and is the C1-donor in the reactions catalyzed by thymidylate synthase (TS) and thymidylate synthase (FAD). Biomodulator [6R]-5,10-methylene-THF is a biomodulator that has been shown to enhance the desired cytotoxic antitumor effect of fluorouracil (5-FU) and can bypass the metabolic pathway required by other folates (such as leucovorin) to achieve necessary activation. The active metabolite is being evaluated in clinical trials for patients with colorectal cancer in combination with 5-FU. See also 5,10-Methenyltetrahydrofolate References Folates Coenzymes
5,10-Methylenetetrahydrofolate
[ "Chemistry" ]
478
[ "Organic compounds", "Coenzymes" ]
11,437,402
https://en.wikipedia.org/wiki/Dimethyl%20sulfoxide%20%28data%20page%29
This page provides supplementary chemical data on dimethyl sulfoxide. Material Safety Data Sheet The handling of this chemical may require notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet (MSDS) for this chemical from a reliable source such as SIRI, and follow its directions. Structure and properties Thermodynamic properties Vapor pressure of liquid: vapor pressure at 20 °C = 0.556 mbar = 0.417 mmHg Distillation data Spectral data References Chemical data pages Chemical data pages cleanup
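The vapour-pressure entry above quotes the same value in two different units. The short Python sketch below is purely illustrative (not part of the original data page); it shows the unit conversion involved, using the standard definitions 1 mbar = 100 Pa and 1 mmHg ≈ 133.322 Pa.

```python
# Convert the quoted DMSO vapour pressure at 20 °C between common pressure units.
# The value 0.556 mbar is taken from the data page above; the conversion factors
# are standard definitions, not data from this page.

P_MBAR = 0.556          # vapour pressure of DMSO at 20 °C, in millibar (from the table)

PA_PER_MBAR = 100.0     # 1 mbar = 100 Pa (exact)
PA_PER_MMHG = 133.322   # 1 mmHg ≈ 133.322 Pa

p_pa = P_MBAR * PA_PER_MBAR      # pressure in pascal
p_mmhg = p_pa / PA_PER_MMHG      # pressure in mmHg (torr)

print(f"{P_MBAR} mbar = {p_pa:.1f} Pa = {p_mmhg:.3f} mmHg")
# Expected output: 0.556 mbar = 55.6 Pa = 0.417 mmHg, matching the value quoted above.
```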
Dimethyl sulfoxide (data page)
[ "Chemistry" ]
111
[ "Chemical data pages", "nan" ]
11,438,514
https://en.wikipedia.org/wiki/Analog%20device
Analog devices are a combination of both analog machine and analog media that can together measure, record, reproduce, receive or broadcast continuous information, for example, the almost infinite number of grades of transparency, voltage, resistance, rotation, or pressure. In theory, the continuous information in an analog signal has an infinite number of possible values with the only limitation on resolution being the accuracy of the analog device. Analog media are materials with analog properties, such as photographic film, which are used in analog devices, such as cameras. Example devices Non-electrical There are notable non-electrical analog devices, such as some clocks (sundials, water clocks), the astrolabe, slide rules, the governor of a steam engine, the planimeter (a simple device that measures the surface area of a closed shape), Kelvin's mechanical tide predictor, acoustic rangefinders, servomechanisms (e.g. the thermostat), a simple mercury thermometer, a weighing scale, and the speedometer. Electrical The telautograph is an analogue precursor to the modern fax machine. It transmits electrical impulses recorded by potentiometers to stepping motors attached to a pen, thus being able to reproduce a drawing or signature made by the sender at the receiver's station. It was the first such device to transmit drawings to a stationary sheet of paper; previous inventions in Europe used rotating drums to make such transmissions. An analog synthesizer is a synthesizer that uses analog circuits and analog computer techniques to generate sound electronically. The analog television encodes television and transports the picture and sound information as an analog signal, that is, by varying the amplitude and/or frequencies of the broadcast signal. All systems preceding digital television, such as NTSC, PAL, and SECAM are analog television systems. An analog computer is a form of computer that uses electrical, mechanical, or hydraulic phenomena to model the problem being solved. More generally an analog computer uses one kind of physical quantity to represent the behavior of another physical system, or mathematical function. Modeling a real physical system in a computer is called simulation. Example processes Media The chemical reactions in photographic film and film stock involve analog processes, with camera as machinery. Interfacing the digital and analog worlds In electronics, a digital-to-analog converter is a circuit for converting a digital signal (usually binary) to an analog signal (current, voltage or electric charge). Digital-to-analog converters are interfaces between the digital world and analog worlds. An analog-to-digital converter is an electronic circuit that converts continuous signals to discrete digital numbers. References Citations Analog circuits Electronic circuits Electronic engineering
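To make the digital-to-analog and analog-to-digital interfacing concrete, here is a minimal Python sketch of the ideal transfer functions of an N-bit DAC and ADC. The reference voltage, bit width and function names are illustrative assumptions and do not describe any particular device.

```python
# Ideal N-bit converter models: a DAC maps an integer code to a voltage,
# an ADC maps a voltage back to the nearest integer code (quantisation).

V_REF = 5.0   # assumed full-scale reference voltage
N_BITS = 8    # assumed converter resolution

def dac(code: int, n_bits: int = N_BITS, v_ref: float = V_REF) -> float:
    """Ideal DAC: convert an integer code (0 .. 2^n - 1) to an analog voltage."""
    levels = 2 ** n_bits - 1
    return (code / levels) * v_ref

def adc(voltage: float, n_bits: int = N_BITS, v_ref: float = V_REF) -> int:
    """Ideal ADC: quantise a voltage in [0, v_ref] to the nearest integer code."""
    levels = 2 ** n_bits - 1
    voltage = min(max(voltage, 0.0), v_ref)   # clip to the input range
    return round(voltage / v_ref * levels)

code = adc(3.3)              # digitise 3.3 V
print(code, dac(code))       # e.g. 168 and ~3.294 V; the ~6 mV gap is quantisation error
```

The round trip never reproduces the input exactly: the residual error illustrates why an analog signal has, in principle, unlimited resolution while its digital representation does not.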
Analog device
[ "Technology", "Engineering" ]
542
[ "Computer engineering", "Analog circuits", "Electronic circuits", "Electronic engineering", "Electrical engineering" ]
11,438,515
https://en.wikipedia.org/wiki/Material%20flow%20accounting
Material flow accounting (MFA) is the study of material flows on a national or regional scale. It is therefore sometimes also referred to as regional, national or economy-wide material flow analysis. Introduction Material flow accounting provides economy-wide data on material use. Through international standardization, this data has become reliable and comparable across countries. Increasingly, the data are also being made available in medium- to long-term time series allowing for the analysis of past trends as well as potential future developments. Material flow accounts provide information on the material inputs into, the changes in material stock within, and the material outputs in the form of exports to other economies or discharges to the environment of an economy. Material flow accounting can be used in national planning, especially for scarce resources, and also allows for forecasting. The method can be used to assess environmental burdens associated with the economic activities of a nation and to determine how material intensive an economy is. The principal concept underlying MFA is a simple model of the interrelation between the economy and the environment, in which the economy is an embedded subsystem of the environment. Similar to living beings, this subsystem is dependent on a constant throughput of materials and energy. Raw materials, water and air are extracted from the natural system as inputs, transformed into products and finally re-transferred to the natural system as outputs (waste and emissions). In order to highlight the similarity to natural metabolic processes, the terms "industrial" or "societal" metabolism have been introduced. In MFA studies for a region or on a national level the flows of materials between the natural environment and the economy are analyzed and quantified on a physical level. The focus may be on individual substances (e.g. cadmium flows), specific materials, or bulk material flows (e.g. steel and steel scrap flows within an economy). Researchers in this field are organized in the Socio-Economic Metabolism (SEM) section of the International Society for Industrial Ecology (ISIE). Statistics related to material flow accounting are usually compiled by national statistical offices, using economic, agricultural and trade statistics measuring the exchange of material between different products available in an economy. Scope and indicators Aside from calculating the net additions to stock (NAS) as a balancing item, flows within the economy are not considered (advances are currently being made in the field of dynamic stock modelling). MFA covers all solid, gaseous, and liquid materials, mobilized by humans or by their livestock, with the exception of bulk water and air. The unit of measurement is most commonly (metric) tonnes per year (t/a). Flows are distinguished by whether they are extracted domestically (domestic extraction, DE) or are trade flows (imports or exports). Materials are most commonly grouped according to four main material categories: biomass, fossil energy carriers, metals, and non-metallic minerals. The last of these categories may be further differentiated by type of use into industrial and construction minerals. MFA seeks to provide a complete picture of an economy's material use so that materials are included in these accounts irrespective of whether or not they have direct market value. The most prominent non-market flows covered by MFA are grazed biomass and used crop residues as well as waste rock extracted during mining activities. 
In 2010, these material flows accounted for 21% of global extraction. The data collected in MFA is used to calculate several different standardized indicators: Direct material input (DMI) is a measure of the total material inputs into an economy and is calculated as the sum of domestic extraction (DE) and imports. The physical trade balance (PTB) is a measure of net-imports and is calculated as the difference between imports and exports. Reflecting that material and money flow in opposite directions during trade, this contrasts with the monetary trade balance, which calculates net-exports. Domestic material consumption (DMC) is a measure of apparent consumption and is calculated from domestic extraction plus imports minus exports (or DE plus PTB). Economy-wide MFA is a satellite system to the system of national accounts and provides a rich empirical database for analytical studies. More information on how the statistics are collected, under what legal framework and how they are defined is available in Economy-wide material flow accounts. In addition, the following indicators may also be used in material flow accounting: Total material requirement (TMR) includes the domestic extraction of resources (minerals, fossil fuels, biomass), the indirect flows caused by and associated with the domestic extraction (called "hidden flows") and the imports. Hidden flows are materials that are extracted or moved, but do not enter the economy. According to the OECD, they are the "displacement of environmental assets without absorption into the economic sphere", such as overburden from mining operations. Domestic processed output (DPO) is defined by the OECD as "the total mass of materials which have been used in the national economy, before flowing into the environment. These flows occur at the processing, manufacturing, use, and final disposal stages of the economic production-consumption chain." Total domestic output (TDO) includes the domestic processed output (DPO) plus the hidden flows associated with the domestic production. Raw material consumption (RMC) includes the raw materials that are embodied in traded products (e.g. metal ores from which metals have been extracted). RMC does not include hidden flows. RMC is most commonly calculated via environmentally extended input–output analysis. See also References External links LIAISE KIT: Economy-wide accounts The Sustainable Scale Project European Topic Centre on Sustainable Consumption and Production CSIRO and UNEP Material Flow and Resource Productivity Database for Asia and the Pacific Material Flow Accounts (environmental accounting) Resource economics Industrial ecology
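As a worked illustration of the indicator definitions just given (DMI, PTB and DMC), the Python sketch below computes them from made-up tonnage figures. The numbers are purely hypothetical; only the arithmetic relationships come from the text above.

```python
# Economy-wide MFA indicators, following the definitions above:
#   DMI = domestic extraction (DE) + imports
#   PTB = imports - exports                 (physical trade balance, i.e. net imports)
#   DMC = DE + imports - exports = DE + PTB (domestic material consumption)
# All figures in tonnes per year; the example values are invented for illustration.

flows = {                                   # hypothetical national material flows [t/a]
    "biomass":                {"DE": 120e6, "imports": 10e6, "exports": 15e6},
    "fossil energy carriers": {"DE": 40e6,  "imports": 60e6, "exports": 5e6},
    "metals":                 {"DE": 25e6,  "imports": 20e6, "exports": 8e6},
    "non-metallic minerals":  {"DE": 300e6, "imports": 5e6,  "exports": 2e6},
}

DE      = sum(f["DE"] for f in flows.values())
imports = sum(f["imports"] for f in flows.values())
exports = sum(f["exports"] for f in flows.values())

DMI = DE + imports          # direct material input
PTB = imports - exports     # physical trade balance
DMC = DE + PTB              # apparent consumption

print(f"DE  = {DE/1e6:6.1f} Mt/a")
print(f"DMI = {DMI/1e6:6.1f} Mt/a")
print(f"PTB = {PTB/1e6:6.1f} Mt/a")
print(f"DMC = {DMC/1e6:6.1f} Mt/a")
```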
Material flow accounting
[ "Chemistry", "Engineering" ]
1,158
[ "Industrial ecology", "Industrial engineering", "Environmental engineering" ]
11,440,843
https://en.wikipedia.org/wiki/Plutonium-242
Plutonium-242 (²⁴²Pu or Pu-242) is the second longest-lived isotope of plutonium, with a half-life of 375,000 years. The half-life of ²⁴²Pu is about 15 times that of ²³⁹Pu; so it is one-fifteenth as radioactive, and not one of the larger contributors to nuclear waste radioactivity. ²⁴²Pu's gamma ray emissions are also weaker than those of the other isotopes. It is not fissile (but it is fissionable by fast neutrons), and its neutron capture cross section is low. In the nuclear fuel cycle Plutonium-242 is produced by successive neutron capture on ²³⁹Pu, ²⁴⁰Pu, and ²⁴¹Pu. The odd-mass isotopes ²³⁹Pu and ²⁴¹Pu have about a 3/4 chance of undergoing fission on capture of a thermal neutron and about a 1/4 chance of retaining the neutron and becoming the following isotope. The proportion of ²⁴²Pu is low at low burnup but increases nonlinearly. ²⁴²Pu has a particularly low cross section for thermal neutron capture; and it takes three neutron absorptions to become another fissile isotope (either curium-245 or plutonium-241) and then one more neutron to undergo fission. Even then, there is a chance either of those two fissile isotopes will absorb the fourth neutron instead of fissioning, becoming curium-246 (on the way to even heavier actinides like californium, which is a neutron emitter by spontaneous fission and difficult to handle) or becoming ²⁴²Pu again, so the mean number of neutrons absorbed until fission is even higher than 4. Therefore, ²⁴²Pu is particularly unsuited to recycling in a thermal reactor and would be better used in a fast reactor where it can be fissioned directly. However, ²⁴²Pu's low cross section means that relatively little of it is transmuted during one cycle in a thermal reactor. Decay ²⁴²Pu alpha decays into uranium-238, before continuing along the uranium series. ²⁴²Pu decays by spontaneous fission in about 5.5 × 10⁻⁴% of cases. References Actinides Nuclear materials Isotopes of plutonium
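The claim that ²⁴²Pu is roughly one-fifteenth as radioactive as ²³⁹Pu follows from the inverse proportionality between activity and half-life for a fixed number of atoms. A small Python check of that arithmetic is sketched below; the ²³⁹Pu half-life is an assumption taken from standard nuclide tables, not from this article.

```python
import math

# Specific activity is proportional to the decay constant lambda = ln(2) / T_half,
# so for equal numbers of atoms the activity ratio is the inverse ratio of half-lives.

T_HALF_PU242 = 3.75e5   # years (quoted in the article above)
T_HALF_PU239 = 2.41e4   # years (assumed, from standard nuclide tables)

lam_242 = math.log(2) / T_HALF_PU242
lam_239 = math.log(2) / T_HALF_PU239

ratio = lam_239 / lam_242          # how much more active Pu-239 is, atom for atom
print(f"Pu-239 is ~{ratio:.1f}x more active than Pu-242")    # ~15.6x
print(f"Pu-242 activity is ~1/{ratio:.0f} that of Pu-239")   # i.e. roughly 'one-fifteenth'
```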
Plutonium-242
[ "Physics", "Chemistry" ]
427
[ "Isotopes of plutonium", "Isotopes", "Materials", "Nuclear materials", "Matter" ]
11,443,297
https://en.wikipedia.org/wiki/Shear%20force
In solid mechanics, shearing forces are unaligned forces acting on one part of a body in a specific direction, and another part of the body in the opposite direction. When the forces are collinear (aligned with each other), they are called tension forces or compression forces. Shear force can also be defined in terms of planes: "If a plane is passed through a body, a force acting along this plane is called a shear force or shearing force." Force required to shear steel This section calculates the force required to cut a piece of material with a shearing action. The relevant information is the area of the material being sheared, i.e. the area across which the shearing action takes place, and the shear strength of the material. A round bar of steel is used as an example. The shear strength is calculated from the tensile strength using a factor which relates the two strengths. In this case 0.6 applies to the example steel, known as EN8 bright, although it can vary from 0.58 to 0.62 depending on application. EN8 bright has a tensile strength of 800 MPa and mild steel, for comparison, has a tensile strength of 400 MPa. To calculate the force to shear a 25 mm diameter bar of EN8 bright steel: area of the bar in mm² = (12.5²)(π) ≈ 490.8 mm²; 0.8 kN/mm² × 490.8 mm² = 392.64 kN ≈ 40 tonne-force; 40 tonne-force × 0.6 (to change force from tensile to shear) = 24 tonne-force. When working with a riveted or tensioned bolted joint, the strength comes from friction between the materials bolted together. Bolts are correctly torqued to maintain the friction. The shear force only becomes relevant when the bolts are not torqued. A bolt with property class 12.9 has a tensile strength of 1200 MPa (1 MPa = 1 N/mm²) or 1.2 kN/mm² and the yield strength is 0.90 times the tensile strength, 1080 MPa in this case. A bolt with property class 4.6 has a tensile strength of 400 MPa (1 MPa = 1 N/mm²) or 0.4 kN/mm² and the yield strength is 0.60 times the tensile strength, 240 MPa in this case. See also ASTM F568M, mechanical properties of different grades of steel fasteners Cantilever method Résal effect Newton's laws of motion § Newton's third law References Force Civil engineering
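The bar-shearing example above is simple enough to script. The following Python sketch repeats the same arithmetic (cross-sectional area × tensile strength × tensile-to-shear factor) so the intermediate numbers can be checked; the kN-per-tonne-force conversion is an assumption of the sketch, not stated in the text.

```python
import math

# Force needed to shear a round steel bar, following the worked example above:
#   shear force ≈ area × tensile strength × (shear/tensile factor)

d_mm = 25.0           # bar diameter [mm]
sigma_uts = 800.0     # tensile strength of EN8 bright steel [MPa] = [N/mm^2]
shear_factor = 0.6    # shear strength / tensile strength for this steel

area_mm2 = math.pi * (d_mm / 2) ** 2             # ≈ 490.9 mm^2
tensile_force_kN = sigma_uts * area_mm2 / 1000   # ≈ 392.7 kN
shear_force_kN = tensile_force_kN * shear_factor

KN_PER_TONNE_FORCE = 9.80665                     # assumed conversion: 1 tonne-force ≈ 9.81 kN
print(f"area          ≈ {area_mm2:.1f} mm^2")
print(f"tensile force ≈ {tensile_force_kN:.1f} kN ≈ {tensile_force_kN / KN_PER_TONNE_FORCE:.0f} tonne-force")
print(f"shear force   ≈ {shear_force_kN:.1f} kN ≈ {shear_force_kN / KN_PER_TONNE_FORCE:.0f} tonne-force")  # ≈ 24 tf
```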
Shear force
[ "Physics", "Mathematics", "Engineering" ]
544
[ "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Construction", "Civil engineering", "Wikipedia categories named after physical quantities", "Matter" ]
19,046,783
https://en.wikipedia.org/wiki/Process%20hazard%20analysis
A process hazard analysis (PHA) (or process hazard evaluation) is an exercise for the identification of hazards of a process facility and the qualitative or semi-quantitative assessment of the associated risk. A PHA provides information intended to assist managers and employees in making decisions for improving safety and reducing the consequences of unwanted or unplanned releases of hazardous materials. A PHA is directed toward analyzing potential causes and consequences of fires, explosions, releases of toxic or flammable chemicals and major spills of hazardous chemicals, and it focuses on equipment, instrumentation, utilities, human actions, and external factors that might impact the process. It is one of the elements of OSHA's program for Process Safety Management. There are several methodologies that can be used to conduct a PHA, including checklists, hazard identification (HAZID) reviews, what-if reviews and SWIFT, hazard and operability studies (HAZOP), failure mode and effect analysis (FMEA), etc. PHA methods are qualitative or, at best, semi-quantitative in nature. A simple element of risk quantification is often introduced in the form of a risk matrix, as in preliminary hazard analysis (PreHA). The selection of the methodology to be used depends on a number of factors, including the complexity of the process, the length of time a process has been in operation and if a PHA has been conducted on the process before, and if the process is unique, or industrially common. Quantitative methods for risk assessment, such as layer-of-protection analysis (LOPA) or fault tree analysis (FTA) may be used after a PHA, if the PHA team could not reach a risk decision for a given scenario. In the United States, the use of PHAs is mandated as one of the elements of the Occupational Safety and Health Administration (OSHA)' process safety management regulation for the identification of risks involved in the design, operation, and modification of processes that handle highly hazardous chemicals. See also Cyber PHA References Further reading Primatech (2017). Comparison of PHA Methods. Primatech. Retrieved 2023-06-24. Process safety
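The text notes that PHA methods often introduce a simple element of risk quantification through a risk matrix. Below is a minimal, hypothetical Python sketch of such a matrix; the 5×5 layout, category names and thresholds are invented for illustration and do not come from OSHA or from the article.

```python
# A toy 5x5 risk matrix: risk rank = likelihood score x severity score,
# then bucketed into qualitative categories. All scales and thresholds are
# illustrative assumptions, not a regulatory requirement.

def risk_rank(likelihood: int, severity: int) -> tuple[int, str]:
    """Return (score, category) for 1-5 likelihood and 1-5 severity scores."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("likelihood and severity must be integers from 1 to 5")
    score = likelihood * severity
    if score >= 15:
        category = "high - risk reduction required"
    elif score >= 6:
        category = "medium - evaluate safeguards (e.g. follow up with LOPA)"
    else:
        category = "low - manage by routine procedures"
    return score, category

# Example scenario from a HAZOP-style worksheet (hypothetical):
# flammable release, likelihood 2 ("unlikely"), severity 5 ("major").
print(risk_rank(2, 5))   # -> (10, 'medium - evaluate safeguards ...')
```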
Process hazard analysis
[ "Chemistry", "Engineering" ]
447
[ "Chemical process engineering", "Safety engineering", "Process safety" ]
19,047,010
https://en.wikipedia.org/wiki/Online%20land%20planning
Online land planning is a collaborative process in which sustainable development practices and design professionals from across the world are networked to provide advice and solutions on urban design and land planning issues. The target audience includes property owners, communities, businesses and government agencies that have limited access, time, finances or personnel to make informed decisions about land use. In many cases, this approach provides electronic documents that become the catalyst to rebuild after natural or man-made disasters, spur rural community development and stimulate or create a new microeconomy. Importance of technology One goal of online land planning is the effective use of the internet to support information sharing and decision making from remote locations such as the home. The use of the Internet, coupled with software technology such as geographic information systems (GIS), allows municipalities and other public and private organizations to compile base information, exchange information, present solutions to land planning issues, and receive feedback via the internet. Benefits for land development companies and real estate industry organizations like the Urban Land Institute include easier access to efficient digital planning technologies, along with the opportunity for immediate participation and feedback. Worldwide, local and regional governments have created their own websites with access to map-centric and enterprise GIS databases that provide operational and public-service resources. References Climate Alarm: Oxfam Briefing Paper 108 (November 2007) Food & Agriculture Organization of the United Nations Kotka IV: (1995) Prideaux, B. Building Visitor Attractions in Peripheral Areas - International Journal of Tourism Research, 4, 379-389. (2002) Vesterby, M & Krupa, K, U.S. Department of Agriculture; Bulletin 973. Shiode, N., Urban Planning, Information Technology and Cyberspace, Journal of Urban Technology (2000); McGinn, M., Getting Involved in Planning – Edinburgh: Scottish Executive Development Department (2001) Carver, S., The Future of Participatory Approaches Using Geographic Information (2003) Yigitcanlar, Tan, Australian Local Governments Practice and Prospects with Online Planning (2005) Huxol, J., A Participatory Model Using the Web (May 2004) Hall, Carly & Heffernan, Maree: GIS and its Potential Use in Human Services (2006) Penzu journal Urban planning
Online land planning
[ "Engineering" ]
465
[ "Urban planning", "Architecture" ]
19,047,417
https://en.wikipedia.org/wiki/Rational%20Synergy
Rational Synergy is a software tool that provides software configuration management (SCM) capabilities for all artifacts related to software development including source code, documents and images as well as the final built software executable and libraries. Rational Synergy also provides the repository for the change management tool known as Rational Change. Together these two tools form an integrated configuration management and change management environment that is used in software development organizations that need controlled SCM processes and an understanding of what is in a build of their software. The name Synergy refers to its database level integration with Change Management that provides views into what is in a build in terms of defects. History Synergy began in 1988 as a research project for computer-aided software engineering by software developer Pete Orelup at Computers West of Irvine, California. Computers West was supporting itself through contract software development and an application for finance and insurance at automobile dealerships on the Pick OS. In 1989, the company decided to pursue development of a software configuration management and revision control product, renamed itself CaseWare, Inc., and hired three more developers. The system was re-imagined as a platform for building SCM systems running on Unix (Sun Solaris). It was decided that a compiled language such as C++ was not sufficiently flexible, reliable, and productive, and so a new programming language called ACcent was created. ACcent has many features similar to Java, but pre-dates it by five years. It has a compiler that compiles to machine-independent byte-codes, and a virtual machine execution environment with automatic memory management. Except for the compiler and execution environment, the entire Amplify Control product was written in the ACcent language, including a scalable, networked client-server architecture and use of a SQL database with a schema flexible enough to allow customer extension of the built-in data types in ACcent without changes to the physical schema. CaseWare Amplify Control also included a distributed build automation and continuous integration system, much like today's Maven and Hudson tools. It was first released in 1990. Later a bug tracking system was also built on the platform. The company was somewhat successful, but lacked experienced leadership and started to lose market share to IBM DevOps Code ClearCase. In 1991 the company was nearly broke and the original developers walked out en masse. A new CEO was brought in, and the company was relaunched, although without the developers. Both CaseWare and Amplify Control were renamed Continuus Software in 1993. By 1997 Continuus was approaching 100m in revenue and expanded into Europe, eventually opening a help desk office in Ireland with the intention of eventually providing 24x7 support to the Fortune 500. It considered the Rational Clearcase product line as its competitor in the Engineering and Scientific market and Platinum Harvest as its competitor on Wall Street. It began to recruit CM people as sales engineers out of its client base at this point from clients. The fears over the Y2K bug was a profitable motivator for clients to buy SCM products such as Continuus at this point. 
Smaller organizations that grew too large for Visual Sourcesafe and PVCS looked to "move up" as they "got religion" after realizing that they were missing code, stomping on each other's changes or not having enough workflow to be able to run smoothly. One of Continuus' major selling points at this time was Task Based CM, a customization that one of their major clients (Tandem Computer) had requested which they had rolled into the main product. This turned into a major selling point over Rational Clearcase which still needed major add-on professional services work to adapt to a customer's workflow and methodology. Continuus tried with mixed success to jump on the .com bandwagon as well. During this period the VP of Engineering looked at getting things to work under Tomcat using servlets and a "light" version of the middleware process which was known as the "engine process". This eventually became part of the product suite which was renamed CM/Synergy and PT/Synergy. After getting Continuus to run its Informix database and server processes on Windows Server, an integration with Visual Studio was added to make Continuus look like Visual Sourcesafe to the IDE. Walt Disney bought into the product as it addressed its Y2K issues. US Internetworking (USi) became the "largest single transaction" in early 1999. Other companies that were clients at this point included Remedy (help desk software), Signet Bank, Bank of America, SAIC (including a rather bizarre collaboration with the Web development company that created the Dr. Ruth sex site), and Novell. On July 29, 1999, Continuus Software announced a public offering listing its stock on the NASDAQ Stock Market. In October 2000, the Swedish software company Telelogic agreed to purchase Continuus Software in a deal worth $42 million. Under Telelogic, Continuus was renamed Synergy. Telelogic had also recently acquired QSS and the DOORS product line. As a result, in the summer of 2001, it decided to lay off the entire Continuus Professional Services organization staff, reasoning that the QSS services folk would be able to support both products. That strategy didn't work out so well, and some of the ex-services folk were able to find consulting jobs with Continuus clients. In 2008 IBM announced that it had purchased Telelogic. Synergy was added to IBM's Rational Software family of SCM tools and named Rational Synergy. In 2021, IBM announced the withdrawal and discontinuance of support for Rational Synergy and Rational Change. Notes External links Rational Synergy Configuration management Synergy Version control systems
Rational Synergy
[ "Engineering" ]
1,189
[ "Systems engineering", "Configuration management" ]
19,047,741
https://en.wikipedia.org/wiki/D%C3%BChring%27s%20rule
Dühring's rule is a scientific rule developed by Eugen Dühring which states that a linear relationship exists between the temperatures at which two solutions exert the same vapour pressure. The rule is often used to compare a pure liquid and a solution at a given concentration. Dühring's plot is a graphical representation of such a relationship, typically with the pure liquid's boiling point along the x-axis and the mixture's boiling point along the y-axis; each line of the graph represents a constant concentration. See also Solubility Evaporator Raoult's law References Engineering thermodynamics Solutions
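Because Dühring's rule asserts a linear relation between the boiling point of the solution and that of the pure solvent at equal pressure, two known point pairs are enough to build a usable Dühring line for a given concentration. The Python sketch below does exactly that; the example temperatures are invented placeholders, not measured data.

```python
# Dühring's rule: T_solution = a * T_pure + b along a line of constant concentration.
# Given two (T_pure, T_solution) pairs for one concentration, fit the line and
# interpolate. The numbers below are hypothetical, for illustration only.

def duhring_line(p1: tuple[float, float], p2: tuple[float, float]):
    """Return a function T_solution(T_pure) from two known (T_pure, T_solution) points."""
    (x1, y1), (x2, y2) = p1, p2
    slope = (y2 - y1) / (x2 - x1)
    intercept = y1 - slope * x1
    return lambda t_pure: slope * t_pure + intercept

# Hypothetical data for an aqueous solution at one fixed concentration:
# when pure water boils at 60 °C (under vacuum) the solution boils at 66 °C,
# and when water boils at 100 °C the solution boils at 108 °C.
t_solution = duhring_line((60.0, 66.0), (100.0, 108.0))

print(f"{t_solution(80.0):.1f} °C")   # solution boiling point when water boils at 80 °C -> 87.0 °C
```

This kind of interpolation is what makes Dühring plots useful in evaporator design, where operating pressures (and hence solvent boiling points) vary between effects.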
Dühring's rule
[ "Physics", "Chemistry", "Engineering" ]
131
[ "Thermodynamics stubs", "Engineering thermodynamics", "Homogeneous chemical mixtures", "Thermodynamics", "Mechanical engineering", "Solutions", "Physical chemistry stubs" ]
19,049,087
https://en.wikipedia.org/wiki/Concentric%20objects
In geometry, two or more objects are said to be concentric when they share the same center. Any pair of (possibly unalike) objects with well-defined centers can be concentric, including circles, spheres, regular polygons, regular polyhedra, parallelograms, cones, conic sections, and quadrics. Geometric objects are coaxial if they share the same axis (line of symmetry). Geometric objects with a well-defined axis include circles (any line through the center), spheres, cylinders, conic sections, and surfaces of revolution. Concentric objects are often part of the broad category of whorled patterns, which also includes spirals (a curve which emanates from a point, moving farther away as it revolves around the point). Geometric properties In the Euclidean plane, two circles that are concentric necessarily have different radii from each other. However, circles in three-dimensional space may be concentric, and have the same radius as each other, but nevertheless be different circles. For example, two different meridians of a terrestrial globe are concentric with each other and with the globe of the earth (approximated as a sphere). More generally, every two great circles on a sphere are concentric with each other and with the sphere. By Euler's theorem in geometry on the distance between the circumcenter and incenter of a triangle, two concentric circles (with that distance being zero) are the circumcircle and incircle of a triangle if and only if the radius of one is twice the radius of the other, in which case the triangle is equilateral. The circumcircle and the incircle of a regular n-gon, and the regular n-gon itself, are concentric. For the circumradius-to-inradius ratio for various n, see Bicentric polygon#Regular polygons. The same can be said of a regular polyhedron's insphere, midsphere and circumsphere. The region of the plane between two concentric circles is an annulus, and analogously the region of space between two concentric spheres is a spherical shell. For a given point c in the plane, the set of all circles having c as their center forms a pencil of circles. Each two circles in the pencil are concentric, and have different radii. Every point in the plane, except for the shared center, belongs to exactly one of the circles in the pencil. Every two disjoint circles, and every hyperbolic pencil of circles, may be transformed into a set of concentric circles by a Möbius transformation. Applications and examples The ripples formed by dropping a small object into still water naturally form an expanding system of concentric circles. Evenly spaced circles on the targets used in target archery or similar sports provide another familiar example of concentric circles. Coaxial cable is a type of electrical cable in which the combined neutral and earth core completely surrounds the live core(s) in system of concentric cylindrical shells. Johannes Kepler's Mysterium Cosmographicum envisioned a cosmological system formed by concentric regular polyhedra and spheres. Concentric circles have been used on firearms surfaces as means of holding lubrication or reducing friction on components, similar to jewelling. Concentric circles are also found in diopter sights, a type of mechanic sights commonly found on target rifles. They usually feature a large disk with a small-diameter hole near the shooter's eye, and a front globe sight (a circle contained inside another circle, called tunnel). When these sights are correctly aligned, the point of impact will be in the middle of the front sight circle. 
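As a numerical illustration of the statement above that the incircle and circumcircle of a triangle are concentric exactly when the triangle is equilateral (with the circumradius twice the inradius), the following Python sketch computes both centres and radii for an equilateral triangle; the side length is an arbitrary choice, and the barycentric formulas used are standard ones rather than anything taken from this article.

```python
import math

# For a triangle with sides a, b, c:
#   circumradius R = abc / (4K), inradius r = K / s, with s the semiperimeter and K the area;
#   incenter has barycentric weights (a, b, c),
#   circumcenter has weights (a^2(b^2+c^2-a^2), b^2(c^2+a^2-b^2), c^2(a^2+b^2-c^2)).
# For an equilateral triangle the two centres coincide and R = 2r.

def centers_and_radii(A, B, C):
    a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
    s = (a + b + c) / 2
    K = math.sqrt(s * (s - a) * (s - b) * (s - c))      # Heron's formula
    R, r = a * b * c / (4 * K), K / s
    iw = (a, b, c)
    cw = (a*a*(b*b + c*c - a*a), b*b*(c*c + a*a - b*b), c*c*(a*a + b*b - c*c))
    bary = lambda w: tuple(sum(wi * p[i] for wi, p in zip(w, (A, B, C))) / sum(w) for i in range(2))
    return bary(iw), r, bary(cw), R

side = 2.0                                   # arbitrary side length
A, B, C = (0.0, 0.0), (side, 0.0), (side / 2, side * math.sqrt(3) / 2)
incenter, r, circumcenter, R = centers_and_radii(A, B, C)

print(math.dist(incenter, circumcenter))     # ~0.0 -> the two circles are concentric
print(R / r)                                 # ~2.0 -> R = 2r, consistent with Euler's theorem at d = 0
```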
See also Centered cube number Homoeoid Focaloid Circular symmetry Magic circle (mathematics) Osculating circle Spiral References External links Geometry: Concentric circles demonstration With interactive animation Corrosion prevention Geometric centers Visual motifs
Concentric objects
[ "Physics", "Chemistry", "Mathematics" ]
816
[ "Corrosion prevention", "Point (geometry)", "Visual motifs", "Geometric centers", "Symbols", "Corrosion", "Symmetry" ]
19,057,150
https://en.wikipedia.org/wiki/Intersection%20form%20of%20a%204-manifold
In mathematics, the intersection form of an oriented compact 4-manifold is a special symmetric bilinear form on the 2nd (co)homology group of the 4-manifold. It reflects much of the topology of the 4-manifold, including information on the existence of a smooth structure. Definition using intersection Let M be a closed 4-manifold (PL or smooth). Take a triangulation T of M. Denote by T* the dual cell subdivision. Represent classes a, b ∈ H2(M; Z2) by 2-cycles A and B modulo 2 viewed as unions of 2-simplices of T and of T*, respectively. Define the intersection form modulo 2 by the formula Q(a, b) = |A ∩ B| mod 2. This is well-defined because the intersection of a cycle and a boundary consists of an even number of points (by definition of a cycle and a boundary). If M is oriented, analogously (i.e. counting intersections with signs) one defines the intersection form Q on the 2nd homology group H2(M; Z). Using the notion of transversality, one can state the following results (which constitute an equivalent definition of the intersection form). If classes a, b ∈ H2(M; Z2) are represented by closed surfaces (or 2-cycles modulo 2) A and B meeting transversely, then Q(a, b) = |A ∩ B| mod 2. If M is oriented and classes a, b ∈ H2(M; Z) are represented by closed oriented surfaces (or 2-cycles) A and B meeting transversely, then every intersection point in A ∩ B has the sign +1 or −1 depending on the orientations, and Q(a, b) is the sum of these signs. Definition using cup product Using the notion of the cup product ⌣, one can give a dual (and so an equivalent) definition as follows. Let M be a closed oriented 4-manifold (PL or smooth). Define the intersection form on the 2nd cohomology group H^2(M; Z) by the formula Q(a, b) = ⟨a ⌣ b, [M]⟩, where [M] is the fundamental class. The definition of a cup product is dual (and so is analogous) to the above definition of the intersection form on homology of a manifold, but is more abstract. However, the definition of a cup product generalizes to complexes and topological manifolds. This is an advantage for mathematicians who are interested in complexes and topological manifolds (not only in PL and smooth manifolds). When the 4-manifold is smooth, then in de Rham cohomology, if a and b are represented by 2-forms α and β, then the intersection form can be expressed by the integral Q(a, b) = ∫M α ∧ β, where ∧ is the wedge product. The definition using cup product has a simpler analogue modulo 2 (which works for non-orientable manifolds). Of course one does not have this in de Rham cohomology. Properties and applications Poincaré duality states that the intersection form is unimodular (up to torsion). By Wu's formula, a spin 4-manifold must have even intersection form, i.e., Q(x, x) is even for every x. For a simply-connected smooth 4-manifold (or more generally one with no 2-torsion residing in the first homology), the converse holds. The signature of the intersection form is an important invariant. A 4-manifold bounds a 5-manifold if and only if it has zero signature. Van der Blij's lemma implies that a spin 4-manifold has signature a multiple of eight. In fact, Rokhlin's theorem implies that a smooth compact spin 4-manifold has signature a multiple of 16. Michael Freedman used the intersection form to classify simply-connected topological 4-manifolds. Given any unimodular symmetric bilinear form over the integers, Q, there is a simply-connected closed 4-manifold M with intersection form Q. If Q is even, there is only one such manifold. If Q is odd, there are two, with at least one (possibly both) having no smooth structure. Thus two simply-connected closed smooth 4-manifolds with the same intersection form are homeomorphic. In the odd case, the two manifolds are distinguished by their Kirby–Siebenmann invariant.
Donaldson's theorem states that a smooth simply-connected 4-manifold with a positive definite intersection form has the diagonal (scalar 1) intersection form. So Freedman's classification implies there are many non-smoothable 4-manifolds, for example the E8 manifold. References 4-manifolds Geometric topology
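An illustrative Python sketch of the signature invariant discussed above: the signature of an integer intersection form is the number of positive minus the number of negative eigenvalues. The matrix used is one standard presentation of the E8 form (the Cartan matrix of the E8 Dynkin diagram, here labelled as a chain 1–7 with node 8 attached to node 5); the labelling is an assumed convention.

```python
import numpy as np

def signature(form):
    """Signature of a symmetric bilinear form over the integers:
    (# positive eigenvalues) - (# negative eigenvalues)."""
    eigenvalues = np.linalg.eigvalsh(np.asarray(form, dtype=float))
    return int(np.sum(eigenvalues > 0) - np.sum(eigenvalues < 0))

# One presentation of the E8 form as the Cartan matrix of the E8 Dynkin diagram
# (chain 1-7, node 8 attached to node 5): even, positive definite, determinant 1.
E8 = [
    [ 2, -1,  0,  0,  0,  0,  0,  0],
    [-1,  2, -1,  0,  0,  0,  0,  0],
    [ 0, -1,  2, -1,  0,  0,  0,  0],
    [ 0,  0, -1,  2, -1,  0,  0,  0],
    [ 0,  0,  0, -1,  2, -1,  0, -1],
    [ 0,  0,  0,  0, -1,  2, -1,  0],
    [ 0,  0,  0,  0,  0, -1,  2,  0],
    [ 0,  0,  0,  0, -1,  0,  0,  2],
]

print(signature(E8))                              # 8, a multiple of eight
print(round(np.linalg.det(np.array(E8, float))))  # 1, i.e. unimodular
```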
Intersection form of a 4-manifold
[ "Mathematics" ]
836
[ "Topology", "Geometric topology" ]
19,057,218
https://en.wikipedia.org/wiki/Cleaved%20amplified%20polymorphic%20sequence
The cleaved amplified polymorphic sequence (CAPS) method is a technique in molecular biology for the analysis of genetic markers. It is an extension to the restriction fragment length polymorphism (RFLP) method, using polymerase chain reaction (PCR) to more quickly analyse the results. Like RFLP, CAPS works on the principle that genetic differences between individuals can create or abolish restriction endonuclease recognition sites, and that these differences can be detected as differences in DNA fragment lengths after digestion. In the CAPS method, PCR amplification is directed across the altered restriction site, and the products are digested with the restriction enzyme. When fractionated by agarose or polyacrylamide gel electrophoresis, the digested PCR products will give readily distinguishable patterns of bands. Alternatively, the amplified segment can be analyzed by allele-specific oligonucleotide (ASO) probes, a process that can often be done by a simple dot blot. See also RFLP References External links https://www.ncbi.nlm.nih.gov/projects/genome/probe/doc/TechCAPS.shtml DNA profiling techniques Molecular biology
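An illustrative Python sketch of the CAPS principle: a single-nucleotide difference abolishes a restriction recognition site, so digesting the two PCR products gives different fragment-length patterns. The sequences, the EcoRI site (GAATTC) and the cut offset are invented example values, not data from the article.

```python
def digest(seq, site="GAATTC", cut_offset=1):
    """Return fragment lengths after cutting seq at every occurrence of a
    restriction recognition site (cut position = site start + cut_offset)."""
    cuts, start = [], 0
    while True:
        i = seq.find(site, start)
        if i == -1:
            break
        cuts.append(i + cut_offset)   # EcoRI cuts G^AATTC, one base into the site
        start = i + 1
    begins = [0] + cuts
    ends = cuts + [len(seq)]
    return [e - b for b, e in zip(begins, ends)]

# Hypothetical PCR products from two alleles: allele B carries a SNP (A -> C)
# that destroys the EcoRI site present in allele A.
allele_a = "ATGGCCTTAGAATTCGGCATTACGGATCCA"
allele_b = "ATGGCCTTAGCATTCGGCATTACGGATCCA"

print(digest(allele_a))  # [10, 20] -> two bands on the gel
print(digest(allele_b))  # [30]     -> a single uncut band
```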
Cleaved amplified polymorphic sequence
[ "Chemistry", "Biology" ]
253
[ "Genetics techniques", "DNA profiling techniques", "Molecular and cellular biology stubs", "Biochemistry stubs", "Molecular biology", "Biochemistry" ]
19,058,043
https://en.wikipedia.org/wiki/Uncertain%20data
In computer science, uncertain data is data that contains noise that makes it deviate from the correct, intended or original values. In the age of big data, uncertainty or data veracity is one of the defining characteristics of data. Data is constantly growing in volume, variety, velocity and uncertainty (1/veracity). Uncertain data is found in abundance today on the web, in sensor networks, and within enterprises, in both their structured and unstructured sources. For example, there may be uncertainty regarding the address of a customer in an enterprise dataset, or the temperature readings captured by a sensor due to aging of the sensor. In 2012 IBM called out managing uncertain data at scale in its global technology outlook report that presents a comprehensive analysis looking three to ten years into the future seeking to identify significant, disruptive technologies that will change the world. In order to make confident business decisions based on real-world data, analyses must necessarily account for many different kinds of uncertainty present in very large amounts of data. Analyses based on uncertain data will have an effect on the quality of subsequent decisions, so the degree and types of inaccuracies in this uncertain data cannot be ignored. Uncertain data is found in the area of sensor networks; in text, where noisy text is found in abundance on social media, on the web and within enterprises, and where the structured and unstructured data may be old, outdated, or plain incorrect; and in modeling, where the mathematical model may only be an approximation of the actual process. When representing such data in a database, an appropriate uncertain database model needs to be selected. Example data model for uncertain data One way to represent uncertain data is through probability distributions. Let us take the example of a relational database. There are three main ways to represent uncertainty as probability distributions in such a database model. In attribute uncertainty, each uncertain attribute in a tuple is subject to its own independent probability distribution. For example, if readings are taken of temperature and wind speed, each would be described by its own probability distribution, as knowing the reading for one measurement would not provide any information about the other. In correlated uncertainty, multiple attributes may be described by a joint probability distribution. For example, if readings are taken of the position of an object, and the x- and y-coordinates stored, the probability of different values may depend on the distance from the recorded coordinates. As distance depends on both coordinates, it may be appropriate to use a joint distribution for these coordinates, as they are not independent. In tuple uncertainty, all the attributes of a tuple are subject to a joint probability distribution. This covers the case of correlated uncertainty, but also includes the case where there is a probability of a tuple not belonging in the relevant relation, which is indicated by all the probabilities not summing to one. For example, assume we have a tuple from a probabilistic database whose possible values carry probabilities summing to 0.9. Then, the tuple has a 10% chance of not existing in the database. References Machine learning Data mining Statistical theory
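An illustrative Python sketch of the tuple-uncertainty model described above: a tuple is stored with a discrete distribution over its possible values, and any probability mass missing from the total is the chance that the tuple does not exist in the relation. The attribute names and probabilities are invented example values.

```python
from dataclasses import dataclass

@dataclass
class UncertainTuple:
    """A tuple under tuple-level uncertainty: a joint distribution over possible
    attribute values; the missing mass is P(tuple does not exist)."""
    alternatives: list  # list of (attribute_dict, probability) pairs

    def existence_probability(self):
        return sum(p for _, p in self.alternatives)

    def nonexistence_probability(self):
        return 1.0 - self.existence_probability()

# Example: a sensor reading whose (temperature, wind) values are uncertain.
reading = UncertainTuple(alternatives=[
    ({"temperature": 21.0, "wind": 3.2}, 0.6),
    ({"temperature": 23.5, "wind": 4.1}, 0.3),
])

print(reading.existence_probability())     # 0.9
print(reading.nonexistence_probability())  # 0.1 -> 10% chance the tuple is absent
```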
Uncertain data
[ "Engineering" ]
605
[ "Artificial intelligence engineering", "Machine learning" ]
19,058,424
https://en.wikipedia.org/wiki/TESEO
Tecnica Empirica Stima Errori Operatori (TESEO) is a technique in the field of human reliability assessment (HRA) that evaluates the probability of a human error occurring throughout the completion of a specific task. From such analyses measures can then be taken to reduce the likelihood of errors occurring within a system and therefore lead to an improvement in the overall levels of safety. There exist three primary reasons for conducting an HRA: error identification, error quantification and error reduction. As there exist a number of techniques used for such purposes, they can be split into one of two classifications: first generation techniques and second generation techniques. First generation techniques work on the basis of the simple dichotomy of 'fits/doesn't fit' in the matching of the error situation in context with related error identification and quantification, and second generation techniques are more theory based in their assessment and quantification of errors. HRA techniques have been utilised in a range of industries including healthcare, engineering, nuclear, transportation and business sectors; each technique has varying uses within different disciplines. TESEO is a time based model that describes the probability of a system operator's failure as a multiplicative function of 5 main factors. These factors are as follows: K1: The type of task to be executed K2: The time available to the operator to complete the task K3: The operator's level of experience/characteristics K4: The operator's state of mind K5: The environmental and ergonomic conditions prevalent Using these figures, an overall Human Error Probability (HEP) can be calculated with the formulation provided below: K1 x K2 x K3 x K4 x K5 The specific value of each of the above factors can be obtained by consulting standard tables that take account of the method by which the HEP is derived. Background Developed in 1980 by Bello and Colombari, TESEO was created with the intention of conducting HRA in the process industries. The methodology is relatively straightforward and easy to use but is also limited; it is useful for quick overview HRA assessments, as opposed to highly detailed and in-depth assessments. Within the field of HRA, it is widely acknowledged that the technique lacks a theoretical foundation. TESEO Methodology When putting this technique into practice, it is necessary for the designated HRA assessor to thoroughly consider the task requiring assessment and therefore also consider the value for each Kn that applies in the context. Once this has been decided upon, the tables previously mentioned are then consulted, from which a related value for each of the identified factors is found to allow the HEP to be calculated. Worked Example Provided below is an example of how the TESEO methodology can be used in practice; each of the stages of the process described above is worked through in order. Context An operator works on a production transfer line that operates between two tanks. His role is to ensure the correct product is selected for transfer from one tank to the other by operating remotely located valves. The essential valves must be opened to perform the task. The operator possesses average experience for this role. The individual is in a control room that has a relatively noisy environment and poor lighting. There is a time window of five minutes for the required task.
Method The figures for the HEP calculation, obtained from the relevant tables, are given as follows: The type of task to be executed: K1 = 0.01 Time available to complete the task: K2 = 0.5 Level of experience: K3 = 1 Operator's state of mind: K4 = 1 Environmental and ergonomic conditions: K5 = 10 The final HEP figure is therefore calculated as: K1 x K2 x K3 x K4 x K5 = 0.01 x 0.5 x 1 x 1 x 10 = 0.05 Result Given the result of this calculation, it can be deduced that were the control room notified of the valves' positions and were the microclimate better, K5 would be unity, and therefore the HEP would be 0.005, representing an improvement of 1 order of magnitude. Advantages of TESEO The technique of TESEO is typically quick and straightforward in comparison to other HRA tools, not only in producing a final result, but also in sensitivity analysis, e.g. it is useful in identifying the effects that improvements in human factors have on the overall human reliability of a task. It is widely applicable to various control room designs or to procedures with varying characteristics. Disadvantages of TESEO There is limited work published with regard to the theoretical foundations of this technique, in particular relating to the justification of the five factor methodology. Regardless of the situation, it must be assumed that these 5 factors suffice for an accurate assessment of human performance; as no other factors are considered, it is unrealistic to expect these 5 factors alone to adequately describe the full range of error producing conditions. Further to this, the values of K1-5 are unsubstantiated and the suggested multiplicative relationship lacks sufficient theoretical or empirical justification. References Human reliability
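An illustrative Python sketch of the TESEO arithmetic, using the factor values from the worked example above; the function is a minimal restatement of the K1 x K2 x K3 x K4 x K5 product, not a general implementation with the standard tables.

```python
def teseo_hep(k1, k2, k3, k4, k5):
    """TESEO human error probability: the product of the five shaping factors."""
    return k1 * k2 * k3 * k4 * k5

# Worked-example values taken from the tables consulted in the text above.
print(teseo_hep(k1=0.01, k2=0.5, k3=1.0, k4=1.0, k5=10.0))  # 0.05

# Sensitivity: with a better control-room environment K5 drops to unity,
# improving the HEP by one order of magnitude.
print(teseo_hep(0.01, 0.5, 1.0, 1.0, 1.0))                  # 0.005
```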
TESEO
[ "Engineering" ]
1,070
[ "Human reliability", "Reliability engineering" ]
19,058,606
https://en.wikipedia.org/wiki/HBV%20RNA%20encapsidation%20signal%20epsilon
The HBV RNA encapsidation signal epsilon (HBV_epsilon) is an element essential for HBV virus replication. It is an RNA structure situated near the 5' end of the HBV pregenomic RNA. The structure consists of a lower stem, a bulge region, an upper stem and a tri-loop. The structure was determined and refined through enzymatic probing and NMR spectroscopy. The closure of the tri-loop was not predicted by RNA structure prediction programs but observed in the NMR structure. The regions shown to be critical for encapsidation of the RNA in the viral lifecycle are the bulge, upper stem and tri-loop which interact with the terminal protein domain of the HBV viral polymerase. See also Heron HBV RNA encapsidation signal epsilon Duck HBV RNA encapsidation signal epsilon Hepatitis B virus PRE alpha Hepatitis B virus PRE beta Hepatitis B virus PRE 1151–1410 References External links HBVRegDB Hepatitis B Virus HBV Regulatory Sequence Database (HBVRegDB) Cis-regulatory RNA elements Hepatitis B virus
HBV RNA encapsidation signal epsilon
[ "Chemistry" ]
232
[ "Biochemistry stubs", "Molecular and cellular biology stubs" ]
19,058,746
https://en.wikipedia.org/wiki/Technique%20for%20human%20error-rate%20prediction
The Technique for human error-rate prediction (THERP) is a technique that is used in the field of Human Reliability Assessment (HRA) to evaluate the probability of human error occurring throughout the completion of a task. From such an analysis (after calculating a probability of human error in a given task), some corrective measures could be taken to reduce the likelihood of errors occurring within a system. The overall goal of THERP is to apply and document probabilistic methodological analyses to increase safety during a given process. THERP is used in fields such as error identification, error quantification and error reduction. Techniques THERP may refer to a number of techniques, which are split into one of two classifications: first-generation techniques and second-generation techniques. First-generation techniques are based on a simple dichotomy, or a dichotomous structure, of whether the technique fits an error situation in the related error identification and quantification of consideration. Second-generation techniques are more theoretical in their assessment and quantification of errors, addressing, rather, the schematic’s situational or interactive elements. HRA techniques are utilized for various applications in a range of disciplines and industries including healthcare, engineering, nuclear power, transportation, and business. THERP models human error probabilities (HEPs) using a fault-tree approach (similar to an engineering risk assessment), which integrate & account for performance-shaping factors that may influence these probabilities. The probabilities for the human reliability analysis event tree (HRAET), for example, are a calculative assessment tool drawn from a database developed by authors Alan D. Swain and H. E. Guttmann. Local data from simulations or accident reports may be used instead if supplemental data may deepen the examination of human-related error. The resultant tree portrays a step-by-step account of the stages involved in a task, in a logical order. The technique is known as a total methodology because it simultaneously manages many different activities, including task analysis, error identification, and representation in the form of HRAET and HEP quantification. Background THERP is a first-generation methodology, which means that its procedures follow the way conventional reliability analysis models a machine. The technique was developed in the Sandia Laboratories for the US Nuclear Regulatory Commission. Its primary author is Swain, who developed the THERP methodology gradually over a lengthy period. THERP relies on a large human reliability database that contains HEPs and is based upon both plant data and expert judgments. The technique was the first approach in HRA to come into broad use and is still widely used in a range of applications even beyond its original nuclear setting. THERP methodology The methodology for the THERP technique is broken down into 5 main stages: 1. Define the system failures of interest These failures include functions of the system where human error has a greater likelihood of influencing the probability of a fault, and those of interest to the risk assessor; operations in which there may be no interest include those not operationally critical or those for which there already exist safety countermeasures. 2. List and analyse the related human operations, and identify human errors that can occur and relevant human error recovery modes This stage of the process necessitates a comprehensive task and human error analysis. 
The task analysis lists and sequences the discrete elements and information required by task operators. For each step of the task, possible errors are considered by the analyst and precisely defined. The possible errors are then considered by the analyst, for each task step. Such errors can be broken down into the following categories: Errors of omission – leaving out a step of the task or the whole task itself Error of commission – this involves several different types of error: Errors of selection – error in use of controls or in issuing of commands Errors of sequence – required action is carried out in the wrong order Errors of timing – task is executed before or after when required Errors of quantity – inadequate amount or in excess The opportunity for error recovery must also be considered as this, if achieved, has the potential to drastically reduce error probability for a task. The tasks and associated outcomes are input to an HRAET in order to provide a graphical representation of a task’s procedure. The trees’ compatibility with conventional event-tree methodology i.e. including binary decision points at the end of each node, allows it to be evaluated mathematically. An event tree visually displays all events that occur within a system. It starts off with an initiating event, then branches develop as various consequences of the starting event. These are represented in a number of different paths, each associated with a probability of occurrence. As mentioned previously, the tree works on a binary logic, so each event either succeeds or fails. Below is an example of an event tree that represents a system fire: Under the condition that all of a task’s sub-tasks are fully represented within an HRAET and the failure probability for each sub-task is known it is possible to calculate the final reliability for the task. 3. Estimate the relevant error probabilities HEPs for each sub-task are entered into the tree; all failure branches must have a known probability, otherwise the system will fail to provide a final answer. HRAETs provide the function of breaking down the primary operator tasks into finer steps, which are represented in the form of successes and failures. This tree indicates the order in which the events occur and also considers likely failures that may occur at each of the represented branches. The degree to which each high-level task is broken down into lower-level tasks is dependent on the availability of HEPs for the successive individual branches. The HEPs may be derived from a range of sources such as the THERP database; simulation data; historical accident data, and expert judgment. PSFs should be incorporated into these HEP calculations; the primary source of guidance for this is the THERP handbook. However, the analyst must use their own discretion when deciding the extent to which each of the factors applies to the task. 4. Estimate the effects of human error on the system failure events With the completion of the HRA, the human contribution to failure can then be assessed in comparison with the results of the overall reliability analysis. This can be completed by inserting the HEPs into the full system’s fault event tree, which allows human factors to be considered within the context of the full system. 5. Recommend changes to the system and recalculate the system failure probabilities Once the human factor contribution is known, sensitivity analysis can be used to identify how HEPs can be reduced. 
Error recovery paths may be incorporated into the event tree as this will aid the assessor when considering the possible approaches by which the identified errors can be reduced. Worked example Context The following example illustrates how the THERP methodology can be used in practice in the calculation of human error probabilities (HEPs). It is used to determine the HEP for establishing air-based ventilation using emergency purge ventilation equipment on in-tank precipitation (ITP) processing tanks 48 and 49 after failure of the nitrogen purge system following a seismic event. Assumptions In order for the final HEP calculation to be valid, the following assumptions are required to be fulfilled: There exists a seismic event initiator that leads to the establishment of air-based ventilation on the ITP processing tanks 48 and 49, possibly 50 in some cases. It is assumed that both on and offsite power is unavailable within the context and therefore control actions performed by the operator are done so locally, on the tank top The time available for operations personnel to establish air-based ventilation by use of the emergency purge ventilation, following the occurrence of the seismic event, is a duration of 3 days There is a necessity for an ITP equipment status monitoring procedure to be developed to allow for a consistent method to be adopted for the purposes of evaluating the ITP equipment and component status and selected process parameters for the period of an accident condition Assumed response times exist for the initial diagnosis of the event and for the placement of emergency purge ventilation equipment on the tank top. The former is 10 hours while the latter is 4 hours. The in-tank precipitation process has associated operational safety requirements (OSR) that identify the precise conditions under which the emergency purge ventilation equipment should be hooked up to the riser The “tank 48 system” standard operating procedure has certain conditions and actions that must be included for correct completion to be performed (see file for more details) A vital component of the emergency purge ventilation equipment unit is a flow indicator; this is required in the event of the emergency purge ventilation equipment being hooked up incorrectly as it would allow for a recovery action The personnel available to perform the necessary tasks all possess the required skills Throughout the installation of the emergency purge ventilation equipment, carried out by maintenance personnel, a tank operator must be present to monitor this process. Method The method considers various factors that may contribute to human errors and provides a systematic approach for evaluating and quantifying these probabilities. Here are the key steps involved in the THERP method: Task Analysis: The first step is to break down the overall task into discrete steps or stages. Each stage represents a specific activity or action performed by the human operator. Error Identification: For each task stage, potential human errors are identified. These errors can result from a variety of factors, such as misinterpretation, distraction, or memory lapses. Error Quantification: The next step is to assign probabilities to each identified error. These probabilities are based on historical data, expert judgment, or other relevant sources. THERP often uses a database of generic human error probabilities for different types of tasks. 
Calculation of Overall Error Probability: The overall error probability for a task is calculated by combining the probabilities of individual errors at each stage. The method considers both independent and dependent errors, recognizing that the occurrence of one error may influence the likelihood of others. Sensitivity Analysis: THERP allows for sensitivity analysis, which involves assessing the impact of variations in error probabilities on the overall result. This helps identify which factors have the most significant influence on the predicted human error rate. Documentation and Reporting: The final step involves documenting the analysis, including the task breakdown, identified errors, assigned probabilities, and the overall predicted human error rate. This information is crucial for decision-makers and system designers. THERP is widely used in industries where human performance is critical, such as nuclear power, aviation, and chemical processing. While THERP provides a systematic framework for human error prediction, it's important to note that the method relies on expert judgment and historical data, and its accuracy can be influenced by the quality of the input data and the expertise of the analysts. Keep in mind that other HRA methods, such as the Human Error Assessment and Reduction Technique (HEART) and Bayesian Network-based approaches, also exist, and the choice of method depends on the specific requirements and characteristics of the system being analyzed. An initial task analysis was carried out on the normal procedure and standard operating procedure. This allowed the operator to align and then initiate the emergency purge ventilation equipment given the loss of the ventilation system. Thereafter, each individual task was analyzed, from which it was then possible to assign error probabilities and error factors to events that represented operator responses. A number of the HEPs were adjusted to take account of various identified performance-shaping factors (PSFs). Upon assessment of the characteristics of the task and the behavior of the crew, recovery probabilities were determined. Such probabilities are influenced by factors such as task familiarity, alarms, and independent checking. Once error probabilities were decided upon for the individual tasks, event trees were then constructed, from which calculation formulations were derived. The probability of failure was obtained through the multiplication of each of the failure probabilities along the path under consideration. HRA event tree for aligning and starting emergency purge ventilation equipment on in-tank precipitation tanks 48 or 49 after a seismic event.
The summation of each of the failure path probabilities provided the total failure path probability (FT) Results Task A: Diagnosis, HEP 6.0E-4 EF=30 Task B: Visual inspection performed swiftly, recovery factor HEP=0.001 EF=3 Task C: Initiate standard operating procedure HEP= .003 EF=3 Task D: Maintainer hook-up emergency purge ventilation equipment HEP=.003 EF=3 Task E: Maintainer 2 hook-up emergency purge, recovery factor CHEP=0.5 EF=2 Task G: Tank operator instructing /verifying hook-up, recovery factor CHEP=0.5 Lower bound = .015 Upper bound = 0.15 Task H: Read flow indicator, recovery factor CHEP= .15 Lower bound= .04 Upper bound = .5 Task I: Diagnosis HEP= 1.0E-5 EF=30 Task J: Analyze LFL using portable LFL analyzer, recovery factor CHEP= 0.5 Lower bound = .015 Upper bound =.15 From the various figures and workings, it can be determined that the HEP for establishing air-based ventilation using the emergency purge ventilation equipment on In-tank Precipitation processing tanks 48 and 49 after a failure of the nitrogen purge system following a seismic event is 4.2 E-6. This numerical value is judged to be a median value on the lognormal scale. However, this result is only valid given that all the previously stated assumptions are implemented. Advantages of THERP It is possible to use THERP at all stages of design. Furthermore, THERP is not restricted to the assessment of designs already in place and due to the level of detail in the analysis it can be specifically tailored to the requirements of a particular assessment. THERP is compatible with Probabilistic Risk Assessments (PRA); the methodology of the technique means that it can be readily integrated with fault tree reliability methodologies. The THERP process is transparent and structured, providing a logical review of the human factors considered in a risk assessment; this allows the results to be examined in a straightforward manner and assumptions to be challenged. The technique can be utilized within a wide range of differing human reliability domains and has a high degree of face validity. It is a unique methodology in the way that it highlights error recovery, and it also quantitatively models a dependency relation between the various actions or errors. Disadvantages of THERP THERP analysis is very resource-intensive and may require a large amount of effort to produce reliable HEP values. This can be controlled by ensuring an accurate assessment of the level of work required in the analysis of each stage. The technique does not lend itself to system improvement. Compared to some other Human Reliability Assessment tools such as HEART, THERP is a relatively unsophisticated tool as the range of PSFs considered is generally low and the underlying psychological causes of errors are not identified. With regard to the consistency of the technique, large discrepancies have been found in practice with regard to different analysts' assessment of the risk associated with the same tasks. Such discrepancies may have arisen from either the process mapping of the tasks in question or in the estimation of the HEPs associated with each of the tasks through the use of THERP tables compared to, for example, expert judgment or the application of PSFs. The methodology fails to provide guidance to the assessor on how to model the impact of PSFs and the influence of the situation on the errors being assessed. The THERP HRAETs implicitly assume that each sub-task’s HEP is independent from all others i.e. 
the HRAET does not update itself in the event that an operator takes a suboptimal route through the task path. This is reinforced by the HEP being merely reduced by the chance of recovery from a mistake, rather than by introducing alternative (i.e. suboptimal) “success” routes into the event tree, which could allow for Bayesian updating of subsequent HEPs. THERP is a “first generation” HRA tool, and in common with other such tools has been criticized for not taking adequate account of context. Other human reliability assessments Other Human Reliability Assessments (HRA) have been created by multiple different researchers. They include cognitive reliability and error analysis method (CREAM), technique for human error assessment (THEA), cause-based decision tree (CBDT), human error repository and analysis (HERA), standardized plant analysis risk (SPAR), a technique for human error analysis (ATHEANA), hazard and operability study (HAZOP), system for predictive error analysis and reduction (SPEAR), and human error assessment and reduction technique (HEART). References Human reliability
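An illustrative Python sketch of the core THERP arithmetic: multiply the probabilities along each failure path of an event tree and sum over the failure paths to obtain the total task failure probability. The two-step task and its HEPs are invented example values (not the tank-ventilation study above), and for simplicity any unrecovered step failure is treated as a task failure; a full analysis would also weave in recovery branches.

```python
from itertools import product

def total_failure_probability(step_heps):
    """Total failure probability for a simple HRA event tree in which each step
    either succeeds (1 - HEP) or fails (HEP) and any step failure fails the task.
    Equivalent to 1 - product of the step success probabilities."""
    p_total = 0.0
    for outcome in product([False, True], repeat=len(step_heps)):  # True = step fails
        p_path = 1.0
        for hep, failed in zip(step_heps, outcome):
            p_path *= hep if failed else (1.0 - hep)
        if any(outcome):              # any failed step makes this a failure path
            p_total += p_path
    return p_total

# Hypothetical sub-task HEPs for a two-step task (diagnosis, then execution).
heps = [6.0e-4, 3.0e-3]
print(total_failure_probability(heps))     # ~0.0036
print(1 - (1 - heps[0]) * (1 - heps[1]))   # same value, closed form
```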
Technique for human error-rate prediction
[ "Engineering" ]
3,433
[ "Human reliability", "Reliability engineering" ]
19,059,421
https://en.wikipedia.org/wiki/Human%20error%20assessment%20and%20reduction%20technique
Human error assessment and reduction technique (HEART) is a technique used in the field of human reliability assessment (HRA), for the purposes of evaluating the probability of a human error occurring throughout the completion of a specific task. From such analyses measures can then be taken to reduce the likelihood of errors occurring within a system and therefore lead to an improvement in the overall levels of safety. There exist three primary reasons for conducting an HRA: error identification, error quantification, and error reduction. As there exist a number of techniques used for such purposes, they can be split into one of two classifications: first-generation techniques and second generation techniques. First generation techniques work on the basis of the simple dichotomy of 'fits/doesn't fit' in the matching of the error situation in context with related error identification and quantification and second generation techniques are more theory based in their assessment and quantification of errors. HRA techniques have been used in a range of industries including healthcare, engineering, nuclear, transportation, and business sectors. Each technique has varying uses within different disciplines. HEART method is based upon the principle that every time a task is performed there is a possibility of failure and that the probability of this is affected by one or more Error Producing Conditions (EPCs) – for instance: distraction, tiredness, cramped conditions etc. – to varying degrees. Factors which have a significant effect on performance are of greatest interest. These conditions can then be applied to a "best-case-scenario" estimate of the failure probability under ideal conditions to then obtain a final error chance. This figure assists in communication of error chances with the wider risk analysis or safety case. By forcing consideration of the EPCs potentially affecting a given procedure, HEART also has the indirect effect of providing a range of suggestions as to how the reliability may therefore be improved (from an ergonomic standpoint) and hence minimising risk. Background HEART was developed by Williams in 1986. It is a first generation HRA technique, yet it is dissimilar to many of its contemporaries in that it remains to be widely used throughout the UK. The method essentially takes into consideration all factors which may negatively affect performance of a task in which human reliability is considered to be dependent, and each of these factors is then independently quantified to obtain an overall Human Error Probability (HEP), the collective product of the factors. HEART methodology 1. The first stage of the process is to identify the full range of sub-tasks that a system operator would be required to complete within a given task. 2. Once this task description has been constructed a nominal human unreliability score for the particular task is then determined, usually by consulting local experts. Based around this calculated point, a 5th – 95th percentile confidence range is established. 3. The EPCs, which are apparent in the given situation and highly probable to have a negative effect on the outcome, are then considered and the extent to which each EPC applies to the task in question is discussed and agreed, again with local experts. As an EPC should never be considered beneficial to a task, it is calculated using the following formula: Calculated Effect = ((Max Effect – 1) × Proportion of Effect) + 1 4. 
A final estimate of the HEP is then calculated, in determination of which the identified EPCs play a large part. Only those EPCs which show much evidence with regard to their effect in the contextual situation should be used by the assessor. Worked example Context A reliability engineer has the task of assessing the probability of a plant operator failing to carry out the task of isolating a plant bypass route as required by procedure. However, the operator is fairly inexperienced in fulfilling this task and therefore typically does not follow the correct procedure; the individual is therefore unaware of the hazards created when the task is carried out. Assumptions There are various assumptions that should be considered in the context of the situation: the operator is working a shift in which he is in his 7th hour. there is talk circulating the plant that it is due to close down it is possible for the operator's work to be checked at any time local management aim to keep the plant open despite a desperate need for re-vamping and maintenance work; if the plant is closed down for a short period to attend to the problems, there is a risk that it may remain closed permanently. Method A representation of this situation using the HEART methodology would be done as follows: From the relevant tables it can be established that the type of task in this situation is of type (F), which is defined as 'Restore or shift a system to original or new state following procedures, with some checking'. This task type has the proposed nominal human unreliability value of 0.003. Other factors to be included in the calculation are provided in the table below: Result The final calculation for the normal likelihood of failure can therefore be formulated as: 0.003 x 1.8 x 6.0 x 3.4 x 2.2 x 1.12 = 0.27 Advantages HEART is very quick and straightforward to use and also has a small demand for resource usage The technique provides the user with useful suggestions as to how to reduce the occurrence of errors It provides ready linkage between Ergonomics and Process Design, with reliability improvement measures being a direct conclusion which can be drawn from the assessment procedure. It allows cost benefit analyses to be conducted It is highly flexible and applicable in a wide range of areas which contributes to the popularity of its use Disadvantages The main criticism of the HEART technique is that the EPC data has never been fully released and it is therefore not possible to fully review the validity of Williams' EPC database. Kirwan has done some empirical validation on HEART and found that it had "a reasonable level of accuracy" but was not necessarily better or worse than the other techniques in the study. Further theoretical validation is thus required. HEART relies to a high extent on expert opinion, first in the point probabilities of human error, and also in the assessed proportion of EPC effect. The final HEPs are therefore sensitive to both optimistic and pessimistic assessors. The interdependence of EPCs is not modelled in this methodology, with the HEPs being multiplied directly. This assumption of independence does not necessarily hold in a real situation. See also The curse of expertise Threat and error management Expert witnesses in English law Winner's curse Sports Illustrated cover jinx References External links HEART technique for Quantitative Human Error Assessment Human error analysis and reliability assessment - Michael Harrison Human reliability
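An illustrative Python sketch of the HEART calculation: each EPC's assessed effect is ((max effect − 1) × proportion of effect) + 1, and the final HEP is the nominal unreliability multiplied by all assessed effects. The nominal value 0.003 and the assessed effects 1.8, 6.0, 3.4, 2.2 and 1.12 come from the worked example above; the (max effect, proportion) pairs shown are only one consistent set that reproduces those effects, since the original table is not given here.

```python
def assessed_effect(max_effect, proportion):
    """HEART assessed effect of one error-producing condition (EPC):
    ((max effect - 1) x proportion of effect) + 1."""
    return (max_effect - 1.0) * proportion + 1.0

def heart_hep(nominal_unreliability, epcs):
    """Final HEP: nominal task unreliability multiplied by each EPC's assessed effect.
    epcs is a list of (max_effect, proportion_of_effect) pairs."""
    hep = nominal_unreliability
    for max_effect, proportion in epcs:
        hep *= assessed_effect(max_effect, proportion)
    return hep

# Task type F has a nominal human unreliability of 0.003 (from the example above).
# These (max effect, proportion) pairs reproduce the assessed effects
# 1.8, 6.0, 3.4, 2.2 and 1.12 used in the worked example.
epcs = [(3.0, 0.4), (6.0, 1.0), (4.0, 0.8), (2.5, 0.8), (1.2, 0.6)]
print(round(heart_hep(0.003, epcs), 3))  # ~0.271, the 0.27 quoted above
```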
Human error assessment and reduction technique
[ "Engineering" ]
1,345
[ "Human reliability", "Reliability engineering" ]
7,736,707
https://en.wikipedia.org/wiki/Die%20%28integrated%20circuit%29
A die, in the context of integrated circuits, is a small block of semiconducting material on which a given functional circuit is fabricated. Typically, integrated circuits are produced in large batches on a single wafer of electronic-grade silicon (EGS) or other semiconductor (such as GaAs) through processes such as photolithography. The wafer is cut (diced) into many pieces, each containing one copy of the circuit. Each of these pieces is called a die. There are three commonly used plural forms: dice, dies, and die. To simplify handling and integration onto a printed circuit board, most dies are packaged in various forms. Manufacturing process Most dies are composed of silicon and used for integrated circuits. The process begins with the production of monocrystalline silicon ingots. These ingots are then sliced into disks with a diameter of up to 300 mm. These wafers are then polished to a mirror finish before going through photolithography. Over many processing steps, the transistors are built up and connected with metal interconnect layers. The prepared wafers then go through wafer testing to check their functionality. The wafers are then diced and sorted to filter out the faulty dies. Functional dies are then packaged, and the completed integrated circuit is ready to be shipped. Uses A die can host many types of circuits. One common use case of an integrated circuit die is in the form of a Central Processing Unit (CPU). Through advances in fabrication technology, transistors have steadily shrunk, allowing the number of transistors on a die to grow exponentially in line with Moore's Law. Other uses for dies range from LED lighting to power semiconductor devices. Images Images of dies are commonly called die shots. See also Die preparation Integrated circuit design Wire bonding and ball bonding References External links – animation Integrated circuits
Die (integrated circuit)
[ "Technology", "Engineering" ]
372
[ "Computer engineering", "Integrated circuits" ]
7,737,918
https://en.wikipedia.org/wiki/Load%20bank
A load bank is a piece of electrical test equipment used to simulate an electrical load, to test an electric power source without connecting it to its normal operating load. During testing, adjustment, calibration, or verification procedures, a load bank is connected to the output of a power source, such as an electric generator, battery, servoamplifier or photovoltaic system, in place of its usual load. The load bank presents the source with electrical characteristics similar to its standard operating load, while dissipating the power output that would normally be consumed by it. The power is usually converted to heat by a heavy duty resistor or bank of resistive heating elements in the device, and the heat removed by a forced air or water cooling system. The device usually also includes instruments for metering, load control, and overload protection. Load banks can either be permanently installed at a facility to be connected to a power source when needed, or portable versions can be used for testing power sources such as standby generators and batteries. They are necessary adjuncts to replicate, prove, and verify the real-life demands on critical power systems. They are also used during operation of intermittent renewable power sources such as wind turbines to shed excess power that the electric power grid cannot absorb. Applications Load banks are used in a variety of applications, including: Factory testing of turbines and engine diesel generator sets Reduction of wet stacking problems in diesel engines run at light load Periodic exercising of stand-by engine generator sets Battery and UPS system testing Ground power testing Load optimization in prime power applications Removal of carbon build-up on generator piston rings Load rejection tests Data center tests (electricity and air-conditioning) Load bank types The three most common types of load banks are resistive, inductive, and capacitive. Both inductive and capacitive loads create what is known as reactance in an AC circuit. Reactance is a circuit element's opposition to an alternating current, caused by the buildup of electric or magnetic fields in the element due to the current and is the "imaginary" component of impedance, or the resistance to AC signals at a certain frequency. Capacitive reactance is equal to 1/(2⋅π⋅f⋅C), and inductive reactance is equal to 2⋅π⋅f⋅L. The unit of reactance is the ohm. Inductive reactance resists the change to current, causing the circuit current to lag voltage. Capacitive reactance resists the change to voltage, causing the circuit current to lead voltage. Resistive load bank A resistive load bank, the most common type, provides equivalent loading for both generators and prime movers. That is, for each kilowatt (or horsepower) of load applied to the generator by the load bank, an equal amount of load is applied to the prime mover by the generator. A resistive load bank, therefore, removes energy from the complete system: load bank from generator—generator from prime mover—prime mover from fuel. Additional energy is removed as a consequence of resistive load bank operation: waste heat from coolant, exhaust and generator losses and energy consumed by accessory devices. A resistive load bank impacts upon all aspects of a generating system. The load of a resistive load bank is created by the conversion of electrical energy to heat via high-power resistors such as grid resistors. This heat must be dissipated from the load bank, either by air or by water, by forced means or convection. 
In a testing system, a resistive load simulates real-life resistive loads, such as incandescent lighting and heating loads as well as the resistive or unity power factor component of magnetic (motors, transformers) loads. The most common type uses wire resistance, usually with fan cooling, and this type is often portable and moved from generator to generator for test purposes. Sometimes a load of this type is built into a building, but this is unusual. Rarely a salt water rheostat is used. It can be readily improvised, which makes it useful in remote locations. For testing automotive batteries, a carbon pile load bank allows an adjustable load to be placed on the battery or charging system, allowing accurate simulation of the heavy load on the battery during cranking of the engine. Such devices are usually portable and may include metering to show voltage and current. Inductive load bank An inductive load includes inductive (lagging power factor) loads. An inductive load consists of an iron-core reactive element which, when used in conjunction with a resistive load bank, creates a lagging power factor load. Typically, the inductive load will be rated at a numeric value 75% that of the corresponding resistive load such that when applied together a resultant 0.8 power factor load is provided. That is to say, for each 100 kW of resistive load, 75 kVAr of inductive load is provided. Other ratios are possible to obtain other power factor ratings. An inductive load is used to simulate a real-life mixed commercial loads consisting of lighting, heating, motors, transformers, etc. With a resistive-inductive load bank, full power system testing is possible, because the provided impedance supplies currents out of phase with voltage and allows for performance evaluation of generators, voltage regulators, load tap changers, conductors, switchgear and other equipment. Capacitive load bank A capacitive load bank or capacitor bank is similar to an inductive load bank in rating and purpose, except leading power factor loads are created, so reactive power is supplied from these loads to the system instead of vice versa. Hence for a mostly inductive load this can bring the power factor closer to unity improving the quality of supply. These loads simulate certain electronic or non-linear loads typical of telecommunications, computer or UPS industries. Fluorescent light tubes are also capacitive loads. Resistive Reactive (Combined) load bank A combined load bank usually consists of both resistive elements and inductors that can be used to provide load testing at non-unity PF (lagging) including the capability to test the generator set fully at 100% nameplate kVA rating. Combined load banks incorporate resistors and inductors all in a single construction which can be independently switched to allow resistive only, inductive only, or varying lagging power factor testing. Combined load banks are rated in kilovolt-amperes (kVA). It’s worth noting that combined load banks can consist of resistive, inductive, and capacitive (RLC) also. Typically, facilities require motor-driven devices, transformers and capacitors. If this is the case, then the load banks used for testing require reactive power compensation. The ideal solution is a combination of both resistive and reactive elements in one load bank package. Resistive/reactive loads are able to mimic motor loads and electromagnetic devices within a power system, as well as provide purely resistive loads. 
Many backup generators and turbines need to be commissioned at nameplate capacity using a combination of resistive and reactive load to fully qualify their operating capability. Using a resistive/reactive load bank enables comprehensive testing from a single unit. A range of resistive/reactive load banks are available to simulate these types of loads on a power source and the transformers, relays and switches which will distribute the power throughout the facility. Resistive/reactive load banks may be used for testing turbines, switchgear, rotary UPS, generators and UPS systems. They can also be used for integrated system testing of utility substation protection systems, particularly for more complex relays like distance, directional overcurrent, power directional and others. A resistive/reactive inductive and/or capacitive load is often required to test solar inverters to ensure solar panels can be stopped from producing electricity in the event of a power outage. The resistive/reactive combination load banks are used to test the engine generator set at its rated power factor. In most cases this is 0.8 power factor. Electronic load bank An electronic load bank tends to be a fully programmable, air- or water-cooled design used to simulate a solid state load and to provide constant power and current loading on circuits for precision testing. Railways Where a diesel-electric locomotive is equipped for dynamic braking, the braking resistor may be used as a load bank for testing the engine-generator set. On electric railways, old electric locomotives no longer required for regular service sometimes get converted into mobile load banks for testing the overhead line equipment and power distribution systems. See also Dummy load Relative cost of electricity generated by different sources Diesel–electric transmission Motor–generator Three-phase electric power Regenerative braking References Electric power
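An illustrative Python sketch of the sizing logic discussed above: a reactive load rated at 75% of the resistive load gives a 0.8 power factor, and the reactance formulas quoted earlier convert component values into ohms. The 50 Hz frequency and the inductance and capacitance figures are assumed example values.

```python
import math

def power_factor(p_kw, q_kvar):
    """Power factor of a combined resistive (kW) and reactive (kVAr) load."""
    s_kva = math.hypot(p_kw, q_kvar)   # apparent power in kVA
    return p_kw / s_kva

print(power_factor(100.0, 75.0))       # 0.8, the resistive/reactive ratio in the text

# Reactances from the formulas quoted above (assumed example values at 50 Hz).
f = 50.0       # Hz
L = 0.10       # henries
C = 50e-6      # farads
x_l = 2 * math.pi * f * L          # inductive reactance, ohms
x_c = 1 / (2 * math.pi * f * C)    # capacitive reactance, ohms
print(round(x_l, 1), round(x_c, 1))
```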
Load bank
[ "Physics", "Engineering" ]
1,801
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
7,737,980
https://en.wikipedia.org/wiki/Incipient%20wetness%20impregnation
Incipient wetness impregnation (IW or IWI), also called capillary impregnation or dry impregnation, is a commonly used technique for the synthesis of heterogeneous catalysts. Typically, the active metal precursor is dissolved in an aqueous or organic solution. The metal-containing solution is then added to a catalyst support, with the volume of solution matched to the total pore volume of the support. Capillary action draws the solution into the pores. Solution added in excess of the support pore volume causes the solution transport to change from a capillary action process to a diffusion process, which is much slower. The catalyst can then be dried and calcined to drive off the volatile components within the solution, depositing the metal on the catalyst surface. The maximum loading is limited by the solubility of the precursor in the solution. The concentration profile of the impregnated compound depends on the mass transfer conditions within the pores during impregnation and drying. References Catalysts Materials science
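An illustrative Python sketch of the bookkeeping behind incipient wetness impregnation: the solution volume is set equal to the support's total pore volume, and the precursor mass needed for a target metal loading follows from that. The pore volume, loading and precursor composition are invented example values, and the weight-loading convention (metal over metal plus support) is an assumption.

```python
def impregnation_plan(support_mass_g, pore_volume_ml_per_g,
                      target_metal_wt_frac, metal_frac_in_precursor):
    """Solution volume (mL) and precursor mass (g) for one incipient-wetness step."""
    solution_volume_ml = support_mass_g * pore_volume_ml_per_g   # fill the pores exactly
    # target loading defined here as metal / (metal + support) by mass (assumption)
    metal_mass_g = support_mass_g * target_metal_wt_frac / (1 - target_metal_wt_frac)
    precursor_mass_g = metal_mass_g / metal_frac_in_precursor
    return solution_volume_ml, precursor_mass_g

# Example: 10 g of support with 0.5 mL/g pore volume, aiming for 2 wt% metal,
# using a precursor that is 40 wt% metal by mass.
vol, mass = impregnation_plan(10.0, 0.5, 0.02, 0.40)
print(round(vol, 2), round(mass, 3))  # 5.0 mL of solution, ~0.51 g of precursor
# Check that this mass dissolves in 5.0 mL: the maximum loading is limited by
# the solubility of the precursor in the solution.
```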
Incipient wetness impregnation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
220
[ "Catalysis", "Catalysts", "Materials science stubs", "Applied and interdisciplinary physics", "Materials science", "nan", "Chemical reaction stubs", "Chemical kinetics", "Chemical process stubs" ]
7,739,128
https://en.wikipedia.org/wiki/Islanding
Islanding is the intentional or unintentional division of an interconnected power grid into individual disconnected regions with their own power generation. Intentional islanding is often performed as a defence in depth to mitigate a cascading blackout. If one island collapses, it will not take neighboring islands with it. For example, nuclear power plants have safety-critical cooling systems that are typically powered from the general grid. The coolant loops typically lie on a separate circuit that can also operate off of reactor power or emergency diesel generators if the grid collapses. Grid designs that lend themselves to islanding near the customer level are commonly referred to as microgrids. In a power outage, the microgrid controller disconnects the local circuit from the grid on a dedicated switch and forces any online distributed generators to power the local load. Unintentional islanding is a dangerous condition that may induce severe stress on the generator, as the generator must match any changes in electrical load alone. If not properly communicated to power line workers, unintentional island can also present a risk of electrical shock. Unlike unpowered wires, islands require special techniques to reconnect to the larger grid, because the alternating current they carry is not in phase. For these reasons, solar inverters that are designed to supply power to the grid are generally required to have some sort of automatic anti-islanding circuitry, which shorts out the panels rather than continue to power the unintentional island. Methods that detect islands without a large number of false positives constitute the subject of considerable research. Each method has some threshold that needs to be crossed before a condition is considered to be a signal of grid interruption, which leads to a "non-detection zone" (NDZ), the range of conditions where a real grid failure will be filtered out. For this reason, before field deployment, grid-interactive inverters are typically tested by reproducing at their output terminals specific grid conditions and evaluating the effectiveness of the anti-islanding methods in detecting island conditions. Intentional islanding Intentional islanding divides an electrical network into fragments with adequate power generation in each fragment to supply that fragment's loads. In practice, balancing generation and load in each fragment is difficult, and often the formation of islands requires temporarily shedding load. Synchronous generators may not deliver sufficient reactive power to prevent severe transients during fault-induced island formation, and any inverters must switch from constant-current to constant-voltage control. Assuming P≠NP, no good cut set criterion exists to implement islanding. Polynomial-time approximations exist, but finding the exactly optimal divisions can be computationally infeasible. However, islanding localizes any failures to the containing island, preventing failures from spreading. In general, blackout statistics follow a power law, such that fragmenting a network increases the probability of blackouts, but reduces the total amount of unsatisfied electricity demand. Islanding reduces the economic efficiency of the wholesale power market, and is typically a last resort applied when the grid is known to be unstable but has not yet collapsed. In particular, islanding improves resilience to threats with known time but not location, such as terrorist attacks, military strikes on electrical infrastructure, or extreme weather events. 
Home islanding Following the 2019 California power shutoffs, there was a rise in interest in the possibility of operating a house's electrical grid as an island. While typical distributed generation systems are too small to power all appliances in a home simultaneously, it is possible for them to manage critical household power needs through traditional load-frequency control. Modules installed in series between the generator and large loads like air conditioners and electric ovens measure the island power frequency and perform automatic load shedding as the inverter nears overload. Detection methods Automatically detecting an island is the subject of considerable research. Detection can be performed passively, looking for transient events on the grid; or actively, by creating small instances of those transient events that will be negligible on a large grid but detectable on a small one. Active methods may be performed by local generators or "upstream" at the utility level. Many passive methods rely on the inherent stress of operating an island. Each device in the island makes up a much larger proportion of the total load, such that the voltage and frequency changes as devices are added or removed are likely to be much larger than in normal grid conditions. However, the difference is not so large as to prevent identification errors, and voltage and frequency shifts are generally used along with other signals. The active analogue of voltage and frequency shift detection attempts to measure the overall impedance fed by the inverter. When the circuit is grid-connected, there is almost no voltage response to slight variations in inverter current; but an island will observe a change in voltage. In principle, this technique has a vanishingly small NDZ, but in practice the grid is not always an infinitely-stiff voltage source, especially if multiple inverters attempt to measure impedance simultaneously. Unlike voltage and frequency shifts, the characteristic frequency of a random circuit is highly unlikely to match standard grid power. However, many devices, like televisions, deliberately synchronize to the grid frequency. Motors, in particular, may be able to stabilize circuit frequency close to the grid standard as they "wind down". At the utility level, protective relays designed to isolate a portion of the grid can also switch in high impedance components, such that an islanded distributed generator will necessarily overload and shut down. This practice, however, relies on the widespread, and expensive, provision of high-impedance devices. Alternatively, anti-islanding circuitry can rely on out-of-band signals. For example, utilities can send a shut-down signal through power line carrier communications or a telephony hookup. Inverter-specific techniques Certain passive methods are uniquely viable with direct current generators (inverter-based resources), such as solar panels. For example, inverters typically generate a phase shift when islanding. Inverters generally match the grid signal with a phase locked loop (PLL) that tracks zero-crossings. Between those events, the inverter produces a sinusoidal output, varying the current to produce the proper voltage waveform given the previous cycle's load. When the main grid disconnects, the power factor on the island suddenly decreases, and the inverter's current no longer produces the proper waveform. By the time the waveform is completed and returns to zero, the signal will be out of phase.
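The phase-shift behaviour just described can be turned into a simple detector: compare where the voltage zero-crossings actually fall against where the phase-locked loop expected them, and flag an island when the error exceeds a threshold. The sketch below is a simplified, hypothetical illustration; the nominal frequency, trip threshold and synthetic data are placeholders, and, as the next paragraph notes, ordinary load switching can produce similar jumps.

```python
GRID_FREQ = 50.0                 # hertz (assumed nominal)
PERIOD = 1.0 / GRID_FREQ
PHASE_TRIP_DEG = 10.0            # placeholder trip threshold, in electrical degrees

def phase_error_deg(expected_crossing, measured_crossing):
    """Phase error implied by the offset between expected and measured zero-crossings."""
    return 360.0 * (measured_crossing - expected_crossing) / PERIOD

def island_suspected(expected_crossings, measured_crossings):
    """Trip if any cycle's zero-crossing drifts further than the threshold from the PLL prediction."""
    return any(abs(phase_error_deg(e, m)) > PHASE_TRIP_DEG
               for e, m in zip(expected_crossings, measured_crossings))

# Synthetic example: the grid drops out half-way and the island slowly drifts out of phase.
expected = [n * PERIOD for n in range(8)]
measured = [t if n < 4 else t + 0.0008 * (n - 3) for n, t in enumerate(expected)]
print(island_suspected(expected, measured))
```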
However, many common events, like motors starting, also cause phase jumps as new impedances are added to the circuit. A more effective technique inverts the islanding phase shift: the inverter is designed to produce output slightly misaligned with the grid, with the expectation that the grid will overwhelm the signal. The phase-locked loop then becomes unstable when the grid signal is missing; the system drifts away from the design frequency; and the inverter shuts down. A very secure islanding detection method searches for distinctive 2nd and 3rd harmonics generated by nonlinear interactions inside the inverter transformers. There are generally no other total harmonic distortion (THD) sources that match an inverter. Even noisy sources, like motors, do not produce measurable distortion on a grid-connected circuit, as the latter has essentially infinite filtration capacity. Switched-mode inverters generally have large distortions — as much as 5%. When the grid disconnects, the local circuit then exhibits inverter-induced distortion. Modern inverters attempt to minimize harmonic distortion, in some cases to unmeasurable limits, but in principle it is straightforward to design one which introduces a controlled amount of distortion to actively search for island formation. Distributed generation controversy Utilities have refused to allow installation of home solar or other distributed generation systems, on the grounds that they may create uncontrolled grid islands. In Ontario, a 2009 modification to the feed-in tariff induced many rural customers to establish small (10 kW) systems under the "capacity exempt" microFIT. However, Hydro One then refused to connect the systems to the grid after construction. The issue can be hotly political, in part because distributed generation proponents believe the islanding concern is largely pretextual. A 1999 test in the Netherlands was unable to find distributed-generation islands 60 seconds after grid collapse. Moreover, moments when distributed generation only matched distributed loads occurred at a rate of roughly 10⁻⁶ per year, and the chance that the grid would disconnect at exactly that moment was even smaller, so that the "probability of encountering an islanding is virtually zero". Unintentional islanding risk primarily concerns synchronous generators, as in microhydro. A 2004 Canadian report concluded that "Anti-islanding technology for inverter based DG systems is much better developed, and published risk assessments suggest that the current technology and standards provide adequate protection." Utilities generally argue that the distributed generators might cause the following problems: Safety concerns If an island forms, repair crews may be faced with unexpected live wires. End-user damage Distributed generators may not be able to maintain grid frequencies or voltages close to standard, and nonstandard currents can damage customer equipment. Depending on the circuit configuration, the utility may be liable for the damage. Controlled grid reconnection Reclosing distribution circuits onto an active island may damage equipment or be inhibited by out-of-phase protection relays. Procedures to prevent these outcomes may delay restoration of electric service to dropped customers. The first two claims are disputed within the power industry. For example, normal linework constantly risks exposure to live wires, and standard procedures require explicit checks to ensure that a wire is dead before worker contact.
Supervisory Control and Data Acquisition (SCADA) systems can be set to alarm if there is unexpected voltage on a purportedly-isolated line. A UK-based study concluded that "The risk of electric shock associated with islanding of PV systems under worst-case PV penetration scenarios to both network operators and customers is typically <10⁻⁹ per year." Likewise, damage to end-user devices is largely inhibited by modern island-detection systems. It is, generally, the last problem that most concerns utilities. Reclosers are commonly used to divide up the grid into smaller sections that will automatically, and quickly, re-energize the branch as soon as the fault condition (a tree branch on lines for instance) clears. There is some concern that the reclosers may not re-energize in the case of an island or that an intervening loss of synchrony might damage distributed generators on the island. However, it is neither clear that reclosers are still useful in modern utility practice nor that breaker-reclosers must act on all phases. References Bibliography Bas Verhoeven, "Probability of Islanding in Utility Network due to Grid Connected Photovoltaic Power Systems", KEMA, 1999 H. Karimi, A. Yazdani, and R. Iravani, "Negative-Sequence Current Injection for Fast Islanding Detection of a Distributed Resource Unit", IEEE Trans. on Power Electronics, vol. 23, no. 1, January 2008. Standards IEEE 1547 Standards, IEEE Standard for Interconnecting Distributed Resources with Electric Power Systems UL 1741 Table of Contents, UL 1741: Standard for Inverters, Converters, Controllers and Interconnection System Equipment for Use With Distributed Energy Resources Further reading "First-Ever Islanding Application of an Energy Storage System" accidental islanding of a generator during transformer maintenance causes severe overfrequency on the island and requires manual control of the turbines to reintegrate with the larger grid External links Distributed Energy Resources Sandia National Laboratories Electric power distribution Electric power
Islanding
[ "Physics", "Engineering" ]
2,435
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
7,739,252
https://en.wikipedia.org/wiki/Elasticity%20tensor
The elasticity tensor is a fourth-rank tensor describing the stress-strain relation in a linear elastic material. Other names are elastic modulus tensor and stiffness tensor. Common symbols include and . The defining equation can be written as σ_ij = C_ijkl ε_kl, where σ_ij and ε_kl are the components of the Cauchy stress tensor and infinitesimal strain tensor, and C_ijkl are the components of the elasticity tensor. Summation over repeated indices is implied. This relationship can be interpreted as a generalization of Hooke's law to a 3D continuum. A general fourth-rank tensor in 3D has 3⁴ = 81 independent components, but the elasticity tensor has at most 21 independent components. This fact follows from the symmetry of the stress and strain tensors, together with the requirement that the stress derives from an elastic energy potential. For isotropic materials, the elasticity tensor has just two independent components, which can be chosen to be the bulk modulus and shear modulus. Definition The most general linear relation between two second-rank tensors is where are the components of a fourth-rank tensor . The elasticity tensor is defined as for the case where and are the stress and strain tensors, respectively. The compliance tensor is defined from the inverse stress-strain relation: The two are related by where is the Kronecker delta. Unless otherwise noted, this article assumes is defined from the stress-strain relation of a linear elastic material, in the limit of small strain. Special cases Isotropic For an isotropic material, simplifies to where and are scalar functions of the material coordinates , and is the metric tensor in the reference frame of the material. In an orthonormal Cartesian coordinate basis, there is no distinction between upper and lower indices, and the metric tensor can be replaced with the Kronecker delta: Substituting the first equation into the stress-strain relation and summing over repeated indices gives where is the trace of . In this form, and can be identified with the first and second Lamé parameters. An equivalent expression is where is the bulk modulus, and are the components of the shear tensor . Cubic crystals The elasticity tensor of a cubic crystal has components where , , and are unit vectors corresponding to the three mutually perpendicular axes of the crystal unit cell. The coefficients , , and are scalars; because they are coordinate-independent, they are intrinsic material constants. Thus, a crystal with cubic symmetry is described by three independent elastic constants. In an orthonormal Cartesian coordinate basis, there is no distinction between upper and lower indices, and is the Kronecker delta, so the expression simplifies to Other crystal classes There are similar expressions for the components of in other crystal symmetry classes. The number of independent elastic constants for several of these is given in table 1. Properties Symmetries The elasticity tensor has several symmetries that follow directly from its defining equation . The symmetry of the stress and strain tensors implies that Usually, one also assumes that the stress derives from an elastic energy potential : which implies Hence, must be symmetric under interchange of the first and second pairs of indices: The symmetries listed above reduce the number of independent components from 81 to 21. If a material has additional symmetries, then this number is further reduced.
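As a concrete illustration of the isotropic form and the symmetries just listed, the sketch below builds the isotropic elasticity tensor from two Lamé parameters (arbitrary placeholder values), checks the minor and major symmetries numerically, and verifies that contracting it with a strain tensor reproduces the familiar isotropic Hooke's law.

```python
import numpy as np

lam, mu = 1.2, 0.8                     # Lame parameters; arbitrary placeholder values
d = np.eye(3)                          # Kronecker delta

# Isotropic elasticity tensor: C_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk)
C = (lam * np.einsum("ij,kl->ijkl", d, d)
     + mu * (np.einsum("ik,jl->ijkl", d, d) + np.einsum("il,jk->ijkl", d, d)))

# Minor symmetries (from the symmetry of stress and strain) and the major symmetry
# (from the existence of an elastic energy potential)
assert np.allclose(C, C.transpose(1, 0, 2, 3))   # C_ijkl = C_jikl
assert np.allclose(C, C.transpose(0, 1, 3, 2))   # C_ijkl = C_ijlk
assert np.allclose(C, C.transpose(2, 3, 0, 1))   # C_ijkl = C_klij

# Hooke's law sigma_ij = C_ijkl eps_kl reduces to sigma = lam*tr(eps)*I + 2*mu*eps
eps = np.array([[1.0e-3, 2.0e-4, 0.0],
                [2.0e-4, -5.0e-4, 1.0e-4],
                [0.0, 1.0e-4, 3.0e-4]])
sigma = np.einsum("ijkl,kl->ij", C, eps)
assert np.allclose(sigma, lam * np.trace(eps) * d + 2 * mu * eps)
print(sigma)
```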
Transformations Under rotation, the components transform as where are the covariant components in the rotated basis, and are the elements of the corresponding rotation matrix. A similar transformation rule holds for other linear transformations. Invariants The components of generally acquire different values under a change of basis. Nevertheless, for certain types of transformations, there are specific combinations of components, called invariants, that remain unchanged. Invariants are defined with respect to a given set of transformations, formally known as a group operation. For example, an invariant with respect to the group of proper orthogonal transformations, called SO(3), is a quantity that remains constant under arbitrary 3D rotations. possesses two linear invariants and seven quadratic invariants with respect to SO(3). The linear invariants are and the quadratic invariants are These quantities are linearly independent, that is, none can be expressed as a linear combination of the others. They are also complete, in the sense that there are no additional independent linear or quadratic invariants. Decompositions A common strategy in tensor analysis is to decompose a tensor into simpler components that can be analyzed separately. For example, the displacement gradient tensor can be decomposed as where is a rank-0 tensor (a scalar), equal to the trace of ; is symmetric and trace-free; and is antisymmetric. Component-wise, Here and later, symmetrization and antisymmetrization are denoted by and , respectively. This decomposition is irreducible, in the sense of being invariant under rotations, and is an important tool in the conceptual development of continuum mechanics. The elasticity tensor has rank 4, and its decompositions are more complex and varied than those of a rank-2 tensor. A few examples are described below. M and N tensors This decomposition is obtained by symmetrization and antisymmetrization of the middle two indices: where A disadvantage of this decomposition is that and do not obey all original symmetries of , as they are not symmetric under interchange of the first two indices. In addition, it is not irreducible, so it is not invariant under linear transformations such as rotations. Irreducible representations An irreducible representation can be built by considering the notion of a totally symmetric tensor, which is invariant under the interchange of any two indices. A totally symmetric tensor can be constructed from by summing over all permutations of the indices where is the set of all permutations of the four indices. Owing to the symmetries of , this sum reduces to The difference is an asymmetric tensor (not antisymmetric). The decomposition can be shown to be unique and irreducible with respect to . In other words, any additional symmetrization operations on or will either leave it unchanged or evaluate to zero. It is also irreducible with respect to arbitrary linear transformations, that is, the general linear group . However, this decomposition is not irreducible with respect to the group of rotations SO(3). Instead, decomposes into three irreducible parts, and into two: See Itin (2020) for explicit expressions in terms of the components of . This representation decomposes the space of elasticity tensors into a direct sum of subspaces: with dimensions These subspaces are each isomorphic to a harmonic tensor space . Here, is the space of 3D, totally symmetric, traceless tensors of rank .
In particular, and correspond to , and correspond to , and corresponds to . See also Continuum mechanics Solid mechanics Constitutive equation Strength of materials Representation theory of finite groups Voigt notation Footnotes References Bibliography The Feynman Lectures on Physics - The tensor of elasticity Tensor physical quantities Continuum mechanics
Elasticity tensor
[ "Physics", "Mathematics", "Engineering" ]
1,436
[ "Tensors", "Physical quantities", "Continuum mechanics", "Quantity", "Tensor physical quantities", "Classical mechanics" ]
7,739,570
https://en.wikipedia.org/wiki/Wigner%20D-matrix
The Wigner D-matrix is a unitary matrix in an irreducible representation of the groups SU(2) and SO(3). It was introduced in 1927 by Eugene Wigner, and plays a fundamental role in the quantum mechanical theory of angular momentum. The complex conjugate of the D-matrix is an eigenfunction of the Hamiltonian of spherical and symmetric rigid rotors. The letter D stands for Darstellung, which means "representation" in German. Definition of the Wigner D-matrix Let be generators of the Lie algebra of SU(2) and SO(3). In quantum mechanics, these three operators are the components of a vector operator known as angular momentum. Examples are the angular momentum of an electron in an atom, electronic spin, and the angular momentum of a rigid rotor. In all cases, the three operators satisfy the following commutation relations, where i is the imaginary unit and the Planck constant has been set equal to one. The Casimir operator commutes with all generators of the Lie algebra. Hence, it may be diagonalized together with . This defines the spherical basis used here. That is, there is a complete set of kets (i.e. orthonormal basis of joint eigenvectors labelled by quantum numbers that define the eigenvalues) with where j = 0, 1/2, 1, 3/2, 2, ... for SU(2), and j = 0, 1, 2, ... for SO(3). In both cases, . A 3-dimensional rotation operator can be written as where α, β, γ are Euler angles (characterized by the keywords: z-y-z convention, right-handed frame, right-hand screw rule, active interpretation). The Wigner D-matrix is a unitary square matrix of dimension 2j + 1 in this spherical basis with elements where is an element of the orthogonal Wigner's (small) d-matrix. That is, in this basis, is diagonal, like the γ matrix factor, but unlike the above β factor. Wigner (small) d-matrix Wigner gave the following expression: The sum over s runs over all values for which the factorials are nonnegative, i.e. , . Note: The d-matrix elements defined here are real. In the often-used z-x-z convention of Euler angles, the factor in this formula is replaced by causing half of the functions to be purely imaginary. The realness of the d-matrix elements is one of the reasons that the z-y-z convention, used in this article, is usually preferred in quantum mechanical applications. The d-matrix elements are related to Jacobi polynomials with nonnegative and Let If Then, with the relation is where It is also useful to consider the relations , where and , which lead to: Properties of the Wigner D-matrix The complex conjugate of the D-matrix satisfies a number of differential properties that can be formulated concisely by introducing the following operators with which have quantum mechanical meaning: they are space-fixed rigid rotor angular momentum operators. Further, which have quantum mechanical meaning: they are body-fixed rigid rotor angular momentum operators. The operators satisfy the commutation relations and the corresponding relations with the indices permuted cyclically. The body-fixed operators satisfy anomalous commutation relations (with a minus sign on the right-hand side).
The two sets mutually commute, and the total operators squared are equal, Their explicit form is, The operators act on the first (row) index of the D-matrix, The operators act on the second (column) index of the D-matrix, and, because of the anomalous commutation relation the raising/lowering operators are defined with reversed signs, Finally, In other words, the rows and columns of the (complex conjugate) Wigner D-matrix span irreducible representations of the isomorphic Lie algebras generated by and . An important property of the Wigner D-matrix follows from the commutation of with the time reversal operator , or Here, we used that is anti-unitary (hence the complex conjugation after moving from ket to bra), and . A further symmetry implies Orthogonality relations The Wigner D-matrix elements form a set of orthogonal functions of the Euler angles and : This is a special case of the Schur orthogonality relations. Crucially, by the Peter–Weyl theorem, they further form a complete set. The fact that are matrix elements of a unitary transformation from one spherical basis to another is represented by the relations: The group characters for SU(2) depend only on the rotation angle β, being class functions, and are therefore independent of the rotation axis; consequently they satisfy simpler orthogonality relations, through the Haar measure of the group, The completeness relation (worked out in the same reference, (3.95)) is whence, for the Kronecker product of Wigner D-matrices, Clebsch–Gordan series The set of Kronecker product matrices forms a reducible matrix representation of the groups SO(3) and SU(2). Reduction into irreducible components is by the following equation: The symbol is a Clebsch–Gordan coefficient. Relation to spherical harmonics and Legendre polynomials For integer values of , the D-matrix elements with second index equal to zero are proportional to spherical harmonics and associated Legendre polynomials, normalized to unity and with Condon and Shortley phase convention: This implies the following relationship for the d-matrix: A rotation of spherical harmonics is then effectively a composition of two rotations, When both indices are set to zero, the Wigner D-matrix elements are given by ordinary Legendre polynomials: In the present convention of Euler angles, is a longitudinal angle and is a colatitudinal angle (spherical polar angles in the physical definition of such angles). This is one of the reasons that the z-y-z convention is used frequently in molecular physics. From the time-reversal property of the Wigner D-matrix follows immediately There exists a more general relationship to the spin-weighted spherical harmonics: Connection with transition probability under rotations The absolute square of an element of the D-matrix, gives the probability that a system with spin prepared in a state with spin projection along some direction will be measured to have a spin projection along a second direction at an angle to the first direction. The set of quantities itself forms a real symmetric matrix, that depends only on the Euler angle , as indicated. Remarkably, the eigenvalue problem for the matrix can be solved completely: Here, the eigenvector, , is a scaled and shifted discrete Chebyshev polynomial, and the corresponding eigenvalue, , is the Legendre polynomial. Relation to Bessel functions In the limit when we have where is the Bessel function and is finite. List of d-matrix elements Using the sign convention of Wigner et al.,
the d-matrix elements for j = 1/2, 1, 3/2, and 2 are given below. For j = 1/2 For j = 1 For j = 3/2 For j = 2 Wigner d-matrix elements with swapped lower indices are found with the relation: Symmetries and special cases See also Clebsch–Gordan coefficients Tensor operator Symmetries in quantum mechanics References External links Representation theory of Lie groups Matrices Special hypergeometric functions Rotational symmetry
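A small numerical sketch of the sum formula given in the "Wigner (small) d-matrix" section above, using the z-y-z convention. It is an illustrative, unoptimized implementation restricted to modest j, and it is checked against the familiar closed forms for j = 1/2 from the list of d-matrix elements.

```python
import math

def wigner_small_d(j, mp, m, beta):
    """Wigner (small) d^j_{m' m}(beta) from Wigner's sum formula (z-y-z convention)."""
    # s runs over all values for which every factorial argument is nonnegative
    s_min = max(0, int(round(m - mp)))
    s_max = min(int(round(j + m)), int(round(j - mp)))
    pre = math.sqrt(
        math.factorial(int(round(j + m))) * math.factorial(int(round(j - m)))
        * math.factorial(int(round(j + mp))) * math.factorial(int(round(j - mp)))
    )
    total = 0.0
    for s in range(s_min, s_max + 1):
        sign = (-1) ** int(round(mp - m + s))
        denom = (math.factorial(int(round(j + m - s))) * math.factorial(s)
                 * math.factorial(int(round(mp - m + s)))
                 * math.factorial(int(round(j - mp - s))))
        total += (sign / denom
                  * math.cos(beta / 2) ** int(round(2 * j + m - mp - 2 * s))
                  * math.sin(beta / 2) ** int(round(mp - m + 2 * s)))
    return pre * total

beta = 0.7
# Closed forms for j = 1/2: d_{1/2,1/2} = cos(beta/2), d_{1/2,-1/2} = -sin(beta/2)
print(wigner_small_d(0.5, 0.5, 0.5, beta), math.cos(beta / 2))
print(wigner_small_d(0.5, 0.5, -0.5, beta), -math.sin(beta / 2))
```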
Wigner D-matrix
[ "Physics", "Mathematics" ]
1,555
[ "Matrices (mathematics)", "Mathematical objects", "Symmetry", "Rotational symmetry" ]
7,740,082
https://en.wikipedia.org/wiki/Parry%E2%80%93Sullivan%20invariant
In mathematics, the Parry–Sullivan invariant (or Parry–Sullivan number) is a numerical quantity of interest in the study of incidence matrices in graph theory, and of certain one-dimensional dynamical systems. It provides a partial classification of non-trivial irreducible incidence matrices. It is named after the English mathematician Bill Parry and the American mathematician Dennis Sullivan, who introduced the invariant in a joint paper published in the journal Topology in 1975. Definition Let A be an n × n incidence matrix. Then the Parry–Sullivan number of A is defined to be PS(A) = det(I − A), where I denotes the n × n identity matrix. Properties It can be shown that, for nontrivial irreducible incidence matrices, flow equivalence is completely determined by the Parry–Sullivan number and the Bowen–Franks group. References Dynamical systems Matrices Algebraic graph theory Graph invariants
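A minimal numerical illustration of the definition given above, assuming the usual formula PS(A) = det(I − A); the example incidence matrix is an arbitrary placeholder, not one taken from the original paper.

```python
import numpy as np

def parry_sullivan(A):
    """Parry-Sullivan number PS(A) = det(I - A) of a square incidence matrix A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return int(round(np.linalg.det(np.eye(n) - A)))

# Arbitrary 0-1 incidence matrix used purely for illustration
A = [[1, 1, 0],
     [0, 0, 1],
     [1, 0, 1]]
print(parry_sullivan(A))   # an integer, unchanged under flow equivalence
```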
Parry–Sullivan invariant
[ "Physics", "Mathematics" ]
169
[ "Graph theory stubs", "Mathematical objects", "Graph theory", "Matrices (mathematics)", "Algebraic graph theory", "Graph invariants", "Mechanics", "Mathematical relations", "Matrix stubs", "Algebra", "Dynamical systems" ]
5,889,331
https://en.wikipedia.org/wiki/Mixed/dual%20cycle
The dual combustion cycle (also known as the mixed cycle, Trinkler cycle, Seiliger cycle or Sabathe cycle) is a thermal cycle that is a combination of the Otto cycle and the Diesel cycle, first introduced by Russian-German engineer Gustav Trinkler, who never claimed to have developed the cycle himself. Heat is added partly at constant volume (isochoric) and partly at constant pressure (isobaric), the significance of which is that more time is available for the fuel to completely combust. Because of the lagging characteristics of the fuel, this cycle is invariably used for Diesel and hot-spot ignition engines. It consists of two adiabatic processes, two constant-volume processes, and one constant-pressure process. The dual cycle consists of the following operations: Process 1-2: Isentropic compression. Process 2-3: Addition of heat at constant volume. Process 3-4: Addition of heat at constant pressure. Process 4-5: Isentropic expansion. Process 5-1: Rejection of heat at constant volume. Bibliography Cornel Stan: Alternative Propulsion for Automobiles, Springer, 2016, , p. 48 References Thermodynamic cycles
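For an ideal (cold air-standard) analysis, the thermal efficiency of the dual cycle described above is usually written in terms of the compression ratio r, the pressure ratio α across the constant-volume heat addition (process 2-3) and the cutoff ratio β of the constant-pressure heat addition (process 3-4). The sketch below evaluates that standard textbook formula with illustrative numbers only.

```python
def dual_cycle_efficiency(r, alpha, beta, gamma=1.4):
    """Ideal cold air-standard thermal efficiency of the dual (mixed) cycle.

    r     : compression ratio V1/V2
    alpha : pressure ratio p3/p2 of the constant-volume heat addition
    beta  : cutoff ratio V4/V3 of the constant-pressure heat addition
    gamma : ratio of specific heats (1.4 for air)
    """
    return 1.0 - (1.0 / r ** (gamma - 1.0)) * (
        (alpha * beta ** gamma - 1.0)
        / ((alpha - 1.0) + gamma * alpha * (beta - 1.0))
    )

# Illustrative values only
print(dual_cycle_efficiency(r=16, alpha=1.5, beta=1.3))
# Limiting cases: beta = 1 recovers the Otto-cycle expression, alpha = 1 the Diesel one
print(dual_cycle_efficiency(r=16, alpha=1.5, beta=1.0))
```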
Mixed/dual cycle
[ "Physics", "Chemistry" ]
236
[ "Thermodynamics stubs", "Physical chemistry stubs", "Thermodynamics" ]
5,895,015
https://en.wikipedia.org/wiki/Inertial%20fusion%20power%20plant
Inertial Fusion Energy is a proposed approach to building a nuclear fusion power plant based on performing inertial confinement fusion at industrial scale. This approach to fusion power is still in a research phase. ICF was first developed shortly after the development of the laser in 1960, but was a classified US research program during its earliest years. In 1972, John Nuckolls wrote a paper predicting that compressing a target could create conditions where fusion reactions are chained together, a process known as fusion ignition or a burning plasma. On August 8, 2021, the NIF at Livermore National Laboratory became the first ICF facility in the world to demonstrate this. This breakthrough drove the US Department of Energy to create an Inertial Fusion Energy program in 2022 with a budget of 3 million dollars in its first year. Design of an IFE power plant This kind of fusion reactor would consist of two parts: Targets, which can be small capsules (<7 millimeter diameter) that contain fusion fuel. Many kinds of targets have been tested, including cylinders, shells coated with nanotubes, solid blocks, hohlraums, glass shells filled with fusion fuel, cryogenically frozen targets, plastic shells, foam shells and materials suspended on spider silk. Drivers, which are used to create a shock wave that compresses the target. This compression wave pushes the material down to the temperature and pressure where fusion occurs. Drivers that have been explored are solid-state lasers, excimer lasers, high velocity solid objects, X-rays, beams of ions (heavy ion fusion (HIF)) and beams of electrons. Net energy in ICF comes from getting fusion reactions to chain together in a process known as ignition. To get there, the fuel must be squeezed to hot, dense conditions for long enough. A key problem is that once a plasma becomes hot, it becomes hard to compress. The goal then is to avoid getting material hot until after it is compressed. In the literature, this is known as the low-adiabat approach to compression. These steps are outlined below: Keeping the plasma very cold, squeeze it together. Heat the plasma only after it is squeezed; ideally inside a "hot spot". Fusion happens, and the resulting products deposit their energy, creating more fusion. Several compression approaches attempt to do this, including central hot spot ignition, fast ignition, shock ignition and magneto-inertial fusion. ICF Research Institutions This program was originally established as a way to develop nuclear weapons, because ICF mimics the compression physics of a fission-fusion bomb. These facilities have been built around the world; below are some examples. Laser Mégajoule in France was developed in 2002 and upgraded in 2014. Omega Laser was first built in 1992 at the University of Rochester. Omega-EP was first built in 2008 at the University of Rochester as a second, more powerful laser. Gekko Laser was first built at Osaka University in Japan in 1983 but has since been upgraded nearly a dozen times. NIF was first operational in 2009 at the Livermore National Laboratory. NIKE Laser was built at the Naval Research Laboratory to study excimer (gas-based) lasers. Electra Laser was built at the Naval Research Laboratory to study excimer (gas-based) lasers. PALS laser facility in the Czech Republic was established to research ICF laser implosions. Machine 3 was developed by First Light Fusion to accelerate blocks of material to create a shockwave on the target.
There have also been multiple ICF facilities built, tested and decommissioned in the past. For example, Sandia National Laboratory pursued a series (<10 machines) of ion-beam- and electron-beam-driven ICF research programs through the 1970s and into the mid-1980s. Separately, Los Alamos built a large excimer laser facility called Aurora in the late 1980s. Livermore National Laboratory built a succession of laser facilities including Nova, Cyclops, 4-PI, SHIVA and other devices. As part of the run-up to the NIF opening and achieving ignition, Livermore National Laboratory funded a body of research around the Laser Inertial Fusion Energy program. Under this program, a reactor design was developed, and costing, reactor chambers and energy capture systems were explored. IFE Research Programs IFE development has come in waves within the United States. Below are some government programs that have been funded over the years to push this technology forward: HAPL The High Average Power Laser program was administered by the Naval Research Laboratory from 1999 to 2008. This program doled out grants to target, laser and driver teams across the United States and organized 19 meetings between member organizations. LIFE The Laser Inertial Fusion Energy program was administered by Livermore National Laboratory from 2008 to 2016. This program was funded to develop an IFE fusion power plant based around the National Ignition Facility. SDI The Strategic Defense Initiative (SDI) inadvertently supported many of the IFE laser technologies seen today. Driver Development It is still unclear which driver would work best for an IFE power plant, with supporters of different drivers pushing their favorite approach. Lasers have thus far been the most thoroughly researched. Below is a summary of the laser drivers that have been studied. The challenge with implementing laser systems comes not just from the beam but also from the optics, mirrors, amplifiers and gratings needed to put the system in place. Related Driver Technologies Depending on the driver being used, there are key related technologies that need to be matured; below are some of these: Glass that can handle the laser energy (joules) crossing through its cross section (square meters) without melting or being damaged. The glass is then used to make mirrors, lenses, gratings or windows inside the power plant. Amplifiers that can be used to increase the power of the laser beam. Compressors that can compress the laser beam or ion beam in space and time to increase the overall on-target power. Pulsed Power systems that can deliver the megajoules needed by a laser, ion-beam or solid-object driver. The workhorse of pulsed power (the Marx generator) has limitations for an ICF plant and research has gone into linear transformer drivers as an alternative power source. Laser Diodes are used as the first step in transferring electrical energy into light energy to initiate the laser beam. Such systems can be expensive and are not needed for excimer lasers. Phase-Plate Smoothing is a technique to smooth out laser beams in solid-state laser systems. Target Development There are many kinds of targets that have been developed for ICF research, but a power plant would require thousands if not millions of identical targets to be fired repeatedly. This will be exceedingly challenging. At present, the Department of Energy contracts with General Atomics to produce ICF targets for the national laboratories.
These targets are partially built at GA and then shipped across the country to the ICF facility for a shot day. The Laboratories maintain hardware and staff onsite to complete the last steps to prepare the targets for a shot. Target Examples Glass Shell targets were spheres of glass on stalks and filled with DT gas; these were some of the earliest targets. Overcoated targets involve growing chemical materials over a shell target. This can be done using directed chemical vapor deposition of plastics or layers of gold or silver. Hohlraum targets are pellets of DT fusion fuel that are surrounded by tubes of gold foil. The laser strikes the foil and creates X-rays that compress the target, simulating the compression physics of nuclear weapons. Silk Mounted Targets have been mounted on strands of spider silk; this material is the strongest material known per unit cross-section and maintains good characteristics down to cryogenic temperatures. Cryogenic targets are those that must be kept below ≈34 kelvin to condense the hydrogen gas into liquid or ≈14 kelvin to condense to a solid. Foam Wetted targets are made using a variety of carbon-hydrogen foams and filled with liquid DT material cooled to below ≈34 kelvin. Ice targets are made using a variety of carbon-hydrogen foams and filled with liquid DT material cooled to below ≈14 kelvin. Cryogenic Targets There are several ways to get tritium and deuterium into an already-made capsule. High-pressure fills work by putting the shells in a chamber with 1 to 100 atm of gas pressure and having the gas diffuse into the shell. Cryogenic foam shell fills work by wicking liquid DT into the foam. This involves getting the delicate shell down in temperature and pressure without damaging it. This is a stepwise process that can take hours to days and requires multiple containment chambers and various kinds of pumps. At cryogenic temperatures, the DT gas condenses into a liquid, which can be wicked into the foam shell. Once filled, operators slowly lower the temperature further to form the ice crystal. Ice can start forming around the equator of the target and then grow into a complete crystal. The ice is embedded within the foam shell structure. Engineers have had problems with ice cracking during this formation process, which impacts the performance of the shot. Monitoring is done using shadowgrams, 360 X-ray diagnostics, visual inspection, and other tools; the information is run through software that builds a complete picture of the target during filling. Moving Cryogenic Targets Keeping an ICF target frozen at cryogenic temperatures while delivering it to the chamber for a shot is challenging. For example, at the Laboratory for Laser Energetics the frozen target is held inside a custom-built, mobile cryogenic cart that can be moved into position under the target chamber. The cart has a coolant system and vacuum pump to keep the material cold. This cart holds the frozen target at the end of a "cold finger" which is then raised on an elevator and positioned at the center of the chamber. When the metal shroud is removed, the cryogenic target is exposed to room temperature and starts to sublimate immediately into gas. This means that the laser pulse must be coordinated precisely with the exposure of the target and everything has to happen quickly to keep the target from melting.
See also Nuclear fusion Fusion power Inertial electrostatic confinement Inertial confinement fusion Laser inertial confinement Megajoule laser National Ignition Facility Z-pinch inertial confinement Z Pulsed Power Facility Project PACER Notes and references Further reading Tutorial on Heavy-Ion Fusion Energy (Virtual National Laboratory for Heavy-Ion Fusion) Views on neutronics and activation issues facing liquid-protected IFE chambers IEEE-USA Position: Fusion Energy Research & Development (June 2006) Inertial confinement fusion Energy development Nuclear technology Nuclear power stations Osaka University research
Inertial fusion power plant
[ "Physics" ]
2,190
[ "Nuclear technology", "Nuclear physics" ]
5,895,033
https://en.wikipedia.org/wiki/Platelet-derived%20growth%20factor%20receptor
Platelet-derived growth factor receptors (PDGF-R) are cell surface tyrosine kinase receptors for members of the platelet-derived growth factor (PDGF) family. PDGF subunits -A and -B are important factors regulating cell proliferation, cellular differentiation, cell growth, development and many diseases including cancer. There are two forms of the PDGF-R, alpha and beta, each encoded by a different gene. Depending on which growth factor is bound, PDGF-R homo- or heterodimerizes. Mechanism of action The PDGF family consists of PDGF-A, -B, -C and -D, which form either homo- or heterodimers (PDGF-AA, -AB, -BB, -CC, -DD). The four PDGFs are inactive in their monomeric forms. The PDGFs bind to the protein tyrosine kinase receptors PDGF receptor-α and -β. These two receptor isoforms dimerize upon binding the PDGF dimer, leading to three possible receptor combinations, namely -αα, -ββ and -αβ. The extracellular region of the receptor consists of five immunoglobulin-like domains while the intracellular part is a tyrosine kinase domain. The ligand-binding sites of the receptors are located in the first three immunoglobulin-like domains. PDGF-CC specifically interacts with PDGFR-αα and -αβ, but not with -ββ, and thereby resembles PDGF-AB. PDGF-DD binds to PDGFR-ββ with high affinity, and to PDGFR-αβ to a markedly lower extent and is therefore regarded as PDGFR-ββ specific. PDGF-AA binds only to PDGFR-αα, while PDGF-BB is the only PDGF that can bind all three receptor combinations with high affinity. Dimerization is a prerequisite for the activation of the kinase. Kinase activation is visualized as tyrosine phosphorylation of the receptor molecules, which occurs between the dimerized receptor molecules (transphosphorylation). In conjunction with dimerization and kinase activation, the receptor molecules undergo conformational changes, which allow a basal kinase activity to phosphorylate a critical tyrosine residue, thereby "unlocking" the kinase, leading to full enzymatic activity directed toward other tyrosine residues in the receptor molecules as well as other substrates for the kinase. Expression of both receptors and each of the four PDGFs is under independent control, giving the PDGF/PDGFR system a high flexibility. Different cell types vary greatly in the ratio of PDGF isoforms and PDGFRs expressed. Different external stimuli such as inflammation, embryonic development or differentiation modulate cellular receptor expression, allowing binding of some PDGFs but not others. Additionally, some cells display only one of the PDGFR isoforms while other cells express both isoforms, simultaneously or separately. Interaction with signal transduction molecules Tyrosine phosphorylation sites in growth factor receptors serve two major purposes—to control the state of activity of the kinase and to create binding sites for downstream signal transduction molecules, which in many cases also are substrates for the kinase. The second part of the tyrosine kinase domain in the PDGFβ receptor is phosphorylated at Tyr-857, and mutant receptors carrying phenylalanine at this position have reduced kinase activity. Tyr-857 has therefore been assigned a role in positive regulation of kinase activity. Sites of tyrosine phosphorylation involved in binding signal transduction molecules have been identified in the juxtamembrane domain, the kinase insert, and in the C-terminal tail in the PDGFβ receptor.
The phosphorylated tyrosine residue and in general three adjacent C-terminal amino acid residues form specific binding sites for signal transduction molecules. Binding to these sites involves conserved stretches, denoted Src homology 2 (SH2) domains and/or phosphotyrosine-binding (PTB) domains. The specificity of these interactions appears to be very high, since mutant receptors carrying phenylalanine residues in one or several of the different phosphorylation sites generally lack the capacity to bind the targeted signal transduction molecule. The signal transduction molecules are either equipped with different enzymatic activities, or they are adaptor molecules, which in some but not all cases are found in complexes with subunits that carry a catalytic activity. Upon interaction with the activated receptor, the catalytic activities become up-regulated, through tyrosine phosphorylation or other mechanisms, generating a signal that may be unique for each type of signal transduction molecule. Examination of the different signaling cascades induced by RTKs established Ras/mitogen-activated protein kinase (MAPK), PI-3 kinase, and phospholipase C-γ (PLCγ) pathways as key downstream mediators of PDGFR signaling. In addition, reactive oxygen species (ROS)-dependent STAT3 activation has been established to be a key downstream mediator of PDGFR signaling in vascular smooth muscle cells. MAPK pathway The adaptor protein Grb2 forms a complex with Sos via the Grb2 SH3 domain. Grb2 (or the Grb2/Sos complex) is recruited to the membrane by the Grb2 SH2 domain binding to activated PDGFR-bound SHP2 (also known as PTPN11, a cytosolic PTP), thereby allowing interaction with Ras and the exchange of GDP for GTP on Ras. Whereas the interaction between Grb2 and PDGFR occurs through interaction with the SHP2 protein, Grb2 instead binds to activated EGFR through Shc, another adaptor protein that forms a complex with many receptors via its PTB domain. Once activated, Ras interacts with several proteins, notably Raf. Activated Raf stimulates MAPK-kinase (MAPKK or MEK) by phosphorylating a serine residue in its activation loop. MAPKK then phosphorylates MAPK (ERK1/2) on threonine and tyrosine residues in the activation loop, leading to its activation. Activated MAPK phosphorylates a variety of cytoplasmic substrates, as well as transcription factors, when translocated into the nucleus. MAPK family members have been found to regulate various biological functions by phosphorylation of particular target molecules (such as transcription factors, other kinases etc.) located in the cell membrane, cytoplasm and nucleus, and thus contribute to the regulation of different cellular processes such as cell proliferation, differentiation, apoptosis and immune responses. PI3K pathway The class IA phospholipid kinase, PI-3 kinase, is activated by the majority of RTKs. Similarly to other SH2 domain-containing proteins, PI-3 kinase forms a complex with PY sites on activated receptors. The main function of PI3K activation is the generation of PIP3, which functions as a second messenger to activate the downstream tyrosine kinases Btk and Itk and the Ser/Thr kinases PDK1 and Akt (PKB). The major biological functions of Akt activation can be classified into three categories – survival, proliferation and cell growth. Akt is also known to be implicated in several cancers, particularly breast cancer. PLCγ is immediately recruited by an activated RTK through the binding of its SH2 domains to phosphotyrosine sites of the receptor.
After activation, PLCγ hydrolyses its substrate PtdIns(4,5)P2 and forms two second messengers, diacylglycerol and Ins(1,4,5)P3. Ins(1,4,5)P3 stimulates the release of Ca²⁺ from intracellular stores. Ca²⁺ then binds to calmodulin, which subsequently activates a family of calmodulin-dependent protein kinases (CamKs). In addition, both diacylglycerol and Ca²⁺ activate members of the PKC family. The second messengers generated by PtdIns(4,5)P2 hydrolysis stimulate a variety of intracellular processes such as proliferation, angiogenesis, and cell motility. See also Receptor tyrosine kinase PDGF Imatinib PDGFRA PDGFRB Crenolanib (CP-868,596-26) References External links Growth factors Signal transduction Tyrosine kinase receptors
Platelet-derived growth factor receptor
[ "Chemistry", "Biology" ]
1,790
[ "Growth factors", "Signal transduction", "Biochemistry", "Neurochemistry", "Tyrosine kinase receptors" ]
5,895,398
https://en.wikipedia.org/wiki/Thrombin%20receptor
There are three known thrombin receptors (ThrR), termed PAR1, PAR3 and PAR4 (PAR for protease-activated receptor). G-protein-coupled receptors that are responsible for the coagulation effects and responses of thrombin on cells are known as protease-activated receptors, or PARs. These receptors are members of the 7-transmembrane G protein-coupled family of receptors; however, their method of activation is unique. Unlike most G-protein-coupled receptors, PARs are irreversibly activated by a proteolytic mechanism and are therefore strictly regulated. Thrombin is an allosteric serine protease and an essential effector of coagulation; it is produced at sites of vascular injury and plays a critical role in the cellular response to blood-related diseases. It binds to and cleaves the extracellular N-terminal domain of the receptor. A tethered ligand corresponding to the new N-terminus, SFLLRN, is then unmasked, binding to the second extracellular loop of the receptor and activating it. Tissue distribution PAR1, PAR3, and PAR4 are activated by thrombin. There are species-specific differences in thrombin receptor expression in platelets and other cell types, and differences in thrombin concentration may considerably affect platelet activation through distinct PARs. In human platelets, PAR1 and PAR4 are the functional thrombin receptors, whereas PAR3 and PAR4 are the functional thrombin receptors in mouse platelets. Thrombin receptors are also differentially expressed across cell types; e.g., PAR1 is expressed in fibroblasts, smooth muscle cells, sensory neurons and glial cells, whereas the other two are less clearly defined. The receptors play various roles depending on the site of activation. In fibroblasts and smooth muscle cells, activation induces growth factor and matrix production, migration and proliferation. In sensory neurons, it induces proliferation and the release of neuroactive agents. Regulation of signaling Desensitization and internalization Initial desensitization is due to rapid phosphorylation of activated receptors by kinases, which increases their affinity for arrestin. Arrestin prevents protein-receptor interaction and the receptor becomes dephosphorylated and inhibited from signaling. This is a sufficient and rapid form of termination of PAR signaling. Irreversibly activated PAR1 is internalized and terminated from further signaling by clathrin-mediated endocytosis and lysosome degradation, preventing replenishment at the cell surface. Biased signaling is a form of regulating thrombin receptors by allowing specific ligands to activate certain pathways. Thrombin-activated PAR1 can engage many G-protein-coupled pathways, whereas a biased ligand activates only a subset of them. Biased antagonists designed for thrombin receptors are of therapeutic interest for treating inflammation-related diseases. There have been studies of PAR-1 inhibitors, vorapaxar and atopaxar, which could provide an alternative treatment for atherothrombotic disease. References G protein-coupled receptors
Thrombin receptor
[ "Chemistry" ]
654
[ "G protein-coupled receptors", "Signal transduction" ]
15,110,024
https://en.wikipedia.org/wiki/Transitive%20model
In mathematical set theory, a transitive model is a model of set theory that is standard and transitive. Standard means that the membership relation is the usual one, and transitive means that the model is a transitive set or class. Examples An inner model is a transitive model containing all ordinals. A countable transitive model (CTM) is, as the name suggests, a transitive model with a countable number of elements. Properties If M is a transitive model, then ωM is the standard ω. This implies that the natural numbers, integers, and rational numbers of the model are also the same as their standard counterparts. Each real number in a transitive model is a standard real number, although not all standard reals need be included in a particular transitive model. References Set theory
Transitive model
[ "Mathematics" ]
168
[ "Mathematical logic", "Set theory" ]
16,824,055
https://en.wikipedia.org/wiki/True%20polar%20wander
True polar wander is a solid-body rotation (or reorientation) of a planet or moon with respect to its spin axis, causing the geographic locations of the north and south poles to change, or "wander". In rotational equilibrium, a planetary body has the largest moment of inertia axis aligned with the spin axis, with the smaller two moments of inertia axes lying in the plane of the equator. This is because planets are not rigid; they form a rotational bulge which affects the inertia tensor of the body. Internal or external processes that change the distribution of mass (internal or external loadings) disrupt the equilibrium and true polar wander will occur: the planet or moon will rotate as a rigid body (reorient in space) to realign the largest moment of inertia axis with the spin axis. Because stabilization of rotation by the rotational bulge is only transient, even relatively small loads can result in a significant reorientation. If the body is near the steady state but with the angular momentum not exactly lined up with the largest moment of inertia axis, the pole position will oscillate (Chandler wobble). Weather and water movements can also induce small changes. These subjects are covered in the article Polar motion. Description in the context of Earth The mass distribution of the Earth is not spherically symmetric, and the Earth has three different moments of inertia. The axis around which the moment of inertia is greatest is closely aligned with the rotation axis (the axis going through the geographic North and South Poles). The other two axes are near the equator. That is similar to a brick rotating around an axis going through its shortest dimension (a vertical axis when the brick is lying flat). On Earth and most other planets, the difference in the polar and equatorial moments of inertia is dominated by the formation of a rotational bulge: excess mass around the equator (flattening) caused by rotational deformation (planetary bodies are not rigid; they deform in response to rotation and its changes). Internal and external processes such as mantle convection, deglaciation, formation of volcanoes, or large meteorite impacts can disrupt rotational equilibrium and cause bodies to move as a whole relative to their rotation axis (reorient). Most natural loadings are small when compared to the rotational bulge and hence change the direction of the main axis of inertia only slightly. However, since the rotational bulge eventually readjusts when the spin axis moves within the body, the stabilization by the rotational bulge disappears on geological timescales and the equilibrium orientation of the planet is given by its dominant loads. Throughout true polar wander, the spin axis lies close to the main axis of inertia of the body, and the time evolution of the latter is driven by gradual readjustment of the rotational bulge. On short timescales and for rapid loadings, the secular motion of the pole is accompanied by free (or Chandler) wobbling. Such a reorientation changes the latitudes of most points on the Earth by an amount that depends on how far they are from the axis near the equator that does not move. For tidally locked bodies, the longitude of surface features can also change in time, and the dynamics of reorientation can be more rapid. Examples Cases of true polar wander have occurred several times in the course of the Earth's history. It has been suggested that east Asia moved south due to true polar wander by 25° between about 174 and 157 million years ago.
Mars, Europa, and Enceladus are also believed to have undergone true polar wander, in the case of Europa by 80°. Uranus' extreme inclination with respect to the ecliptic is not an instance of true polar wander (a shift of the body relative to its rotational axis), but instead a large shift of the rotational axis itself. This axis shift is believed to be the result of a catastrophic series of impacts that occurred billions of years ago. Distinctions and delimitations Polar wander should not be confused with precession, in which the axis of rotation moves so that the North Pole points toward a different star. There are also smaller and faster variations in the axis of rotation known as nutation. Precession is caused by the gravitational attraction of the Moon and Sun, and occurs all the time and at a much faster rate than polar wander. It does not result in changes of latitude (it changes the orientation of the rotation axis relative to the stars). True polar wander has to be distinguished from continental drift, in which different parts of the Earth's crust move in different directions because of circulation in the mantle. Because of plate tectonics, the polar wander as seen from an individual continent may differ from the true polar wander (see also apparent polar wander). The effect should also not be confused with geomagnetic reversal, the well-documented repeated reversal of the Earth's magnetic field. Tectonic plate reconstructions Paleomagnetism is used to create tectonic plate reconstructions by finding the paleolatitude of a particular site. This paleolatitude is affected both by true polar wander and by plate tectonics. To reconstruct plate tectonic histories, geologists must obtain a number of dated paleomagnetic samples. Because true polar wander is a global phenomenon but tectonic motions are specific to each plate, multiple dates allow them to separate the tectonic and true polar wander signals. See also Apparent polar wander Axial tilt Cataclysmic pole shift hypothesis (includes discussion of various historical conjectures involving rapid shift of the poles) Polar motion True polar wander on Mars References Geodesy Geodynamics Paleomagnetism
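The paleolatitude mentioned in the section on tectonic plate reconstructions is usually obtained from the measured magnetic inclination through the geocentric axial dipole relation tan(I) = 2 tan(λ). A brief sketch, assuming that standard relation and purely illustrative inclination values:

```python
import math

def paleolatitude_deg(inclination_deg):
    """Paleolatitude (degrees) from magnetic inclination via tan(I) = 2*tan(latitude)."""
    inclination = math.radians(inclination_deg)
    return math.degrees(math.atan(math.tan(inclination) / 2.0))

# Illustrative inclinations measured in dated paleomagnetic samples (degrees)
for inc in (0.0, 30.0, 49.1, 75.0):
    print(inc, "->", round(paleolatitude_deg(inc), 1), "degrees paleolatitude")
```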
True polar wander
[ "Mathematics" ]
1,178
[ "Applied mathematics", "Geodesy" ]
16,831,188
https://en.wikipedia.org/wiki/Geometric%20and%20material%20buckling
Geometric buckling is a measure of neutron leakage and material buckling is a measure of the difference between neutron production and neutron absorption. When nuclear fission occurs inside a nuclear reactor, neutrons are produced. These neutrons then, to state it simply, either react with the fuel in the reactor or escape from the reactor. These two processes are referred to as neutron absorption and neutron leakage, and their sum is the neutron loss. When the rate of neutron production is equal to the rate of neutron loss, the reactor is able to sustain a chain reaction of nuclear fissions and is considered a critical reactor. In the case of a bare, homogeneous, steady-state reactor (that is, a reactor that has only one region, a homogeneous mixture of fuel and coolant, no blanket nor reflector, and does not change over time), the geometric and material buckling are equal to each other. Derivation Both buckling terms are derived from the particular diffusion equation which is valid for neutrons: . where k is the criticality eigenvalue, is the neutrons per fission, is the macroscopic cross section for fission, and from diffusion theory, the diffusion coefficient is defined as: . In addition, the diffusion length is defined as: . Rearranging the terms, the diffusion equation becomes: . The left side is the material buckling and the right side of the equation is the geometric buckling. Geometric Buckling The geometric buckling is a Helmholtz eigenvalue problem that is readily solved for different geometries. The table below lists the geometric buckling for some common geometries. Since the diffusion theory calculations overpredict the critical dimensions, an extrapolation distance δ must be subtracted to obtain an estimate of actual values. The buckling could also be calculated using actual dimensions and extrapolated distances using the following table. Expressions for Geometric Buckling in Terms of Actual Dimensions and Extrapolated Distances. Material Buckling Material buckling is the buckling of a homogeneous configuration with respect to material properties only. If we redefine in terms of purely material properties (and assume the fundamental mode), we have: . As stated previously, the geometric buckling is defined as: . Solving for k (in the fundamental mode), ; thus, . Assuming the reactor is in a critical state (k = 1), . This expression involves only material properties; therefore, it is called the material buckling: . Critical Reactor Dimensions By equating the geometric and material buckling, one can determine the critical dimensions of a one-region nuclear reactor. References Nuclear technology
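The tables of geometric buckling expressions referred to above did not survive extraction. As a stopgap, the sketch below evaluates the standard textbook expressions for three common bare-reactor geometries using extrapolated dimensions; treat it as a hedged illustration and verify against a reactor-physics text before relying on it.

```python
import math

J0_FIRST_ZERO = 2.405  # first zero of the Bessel function J0

def buckling_slab(a):
    """Geometric buckling (cm^-2) of an infinite slab of extrapolated thickness a (cm)."""
    return (math.pi / a) ** 2

def buckling_sphere(R):
    """Geometric buckling (cm^-2) of a bare sphere of extrapolated radius R (cm)."""
    return (math.pi / R) ** 2

def buckling_cylinder(R, H):
    """Geometric buckling (cm^-2) of a finite bare cylinder of radius R and height H (cm)."""
    return (J0_FIRST_ZERO / R) ** 2 + (math.pi / H) ** 2

# Illustrative extrapolated dimensions
print(buckling_sphere(50.0))
print(buckling_cylinder(40.0, 80.0))
print(buckling_slab(60.0))
```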
Geometric and material buckling
[ "Physics" ]
537
[ "Nuclear technology", "Nuclear physics" ]
16,835,402
https://en.wikipedia.org/wiki/ZNF268
Zinc finger protein 268 is a protein that in humans is encoded by the ZNF268 gene. ZNF268 is associated with cervical cancer. References Further reading
ZNF268
[ "Chemistry" ]
37
[ "Biochemistry stubs", "Protein stubs" ]