Dataset schema (column: type, min – max):
  id             int64    39 – 79M
  url            string   lengths 32 – 168
  text           string   lengths 7 – 145k
  source         string   lengths 2 – 105
  categories     list     lengths 1 – 6
  token_count    int64    3 – 32.2k
  subcategories  list     lengths 0 – 27
24,201,048
https://en.wikipedia.org/wiki/Aspirator%20%28medicine%29
A medical aspirator is a suction machine used to remove mucus, blood, and other bodily fluids from a patient. Aspirators can be used during surgical procedures, although an operating theater is generally equipped with a central system of vacuum tubes; most aspirators are therefore portable, for use in ambulances and nursing homes, and can run on AC or battery power. They consist of a vacuum pump, a vacuum regulator and gauge, a collection canister, and sometimes a bacterial filter. Plastic tubing is used to continuously draw fluid into the collection canister. In the past, manually operated aspirators such as Potain's aspirator were used. See also Suction (medicine) References Medical equipment
Aspirator (medicine)
[ "Biology" ]
147
[ "Medical equipment", "Medical technology" ]
24,201,718
https://en.wikipedia.org/wiki/C13H12O2
The molecular formula C13H12O2 (molar mass: 200.23 g/mol) may refer to: Bisphenol F, a small aromatic organic compound Monobenzone, an organic chemical in the phenol family Molecular formulas
C13H12O2
[ "Physics", "Chemistry" ]
68
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,201,774
https://en.wikipedia.org/wiki/C21H26O3
The molecular formula C21H26O3 (molar mass: 326.42 g/mol, exact mass: 326.1882 u) may refer to: Acitretin Buparvaquone Moxestrol Octabenzone RU-16117 11-Hydroxycannabinol Molecular formulas
C21H26O3
[ "Physics", "Chemistry" ]
81
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,202,127
https://en.wikipedia.org/wiki/C17H24O3
The molecular formula C17H24O3 may refer to: Cyclandelate, a vasodilator Onchidal, a naturally occurring neurotoxin Shogaols, pungent constituents of ginger Molecular formulas
C17H24O3
[ "Physics", "Chemistry" ]
63
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,202,383
https://en.wikipedia.org/wiki/C8H8O5
The molecular formula C8H8O5 (molar mass: 184.14 g/mol, exact mass: 184.0372 u) may refer to: 3,4-Dihydroxymandelic acid Methyl gallate Molecular formulas
C8H8O5
[ "Physics", "Chemistry" ]
69
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
24,202,456
https://en.wikipedia.org/wiki/C16H17NO3
The molecular formula C16H17NO3 (molar mass: 271.31 g/mol, exact mass: 271.1208 u) may refer to: A-68930 Crinine Higenamine, or norcoclaurine Normorphine Molecular formulas
C16H17NO3
[ "Physics", "Chemistry" ]
74
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
22,690,948
https://en.wikipedia.org/wiki/Kirchhoff%20integral%20theorem
Kirchhoff's integral theorem (sometimes referred to as the Fresnel–Kirchhoff integral theorem) uses a surface integral to obtain the value of the solution of the homogeneous scalar wave equation at an arbitrary point P in terms of the values of the solution and its first-order normal derivative at all points on an arbitrary closed surface (on which the integration is performed) that encloses P. It is derived using Green's second identity and the homogeneous scalar wave equation, which makes the volume integral in Green's second identity vanish. Integral Monochromatic wave The integral has the following form for a monochromatic wave:

$$U(P) = \frac{1}{4\pi} \oint_S \left[ U \frac{\partial}{\partial \hat{n}} \left( \frac{e^{iks}}{s} \right) - \frac{e^{iks}}{s} \frac{\partial U}{\partial \hat{n}} \right] dS,$$

where the integration is performed over an arbitrary closed surface $S$ enclosing the observation point $P$, $k$ in $e^{iks}$ is the wavenumber, $s$ in $e^{iks}/s$ is the distance from an (infinitesimally small) integral surface element to the point $P$, $U$ is the spatial part of the solution of the homogeneous scalar wave equation (i.e., $V(\mathbf{r},t) = U(\mathbf{r})\,e^{-i\omega t}$ as the homogeneous scalar wave equation solution), $\hat{n}$ is the unit vector inward from and normal to the integral surface element, i.e., the inward surface normal unit vector, and $\partial/\partial\hat{n}$ denotes differentiation along the surface normal (i.e., a normal derivative), with $\frac{\partial U}{\partial \hat{n}} = \nabla U \cdot \hat{n}$ for a scalar field $U$. Note that the surface normal is inward, i.e., toward the inside of the enclosed volume, in this integral; if the more usual outward-pointing normal is used, the integral will have the opposite sign. This integral can be written in the more familiar form

$$U(P) = \frac{1}{4\pi} \oint_S \left[ \frac{e^{iks}}{s} \nabla U - U \nabla \left( \frac{e^{iks}}{s} \right) \right] \cdot \hat{n} \, dS.$$

Non-monochromatic wave A more general form can be derived for non-monochromatic waves. The complex amplitude of the wave can be represented by a Fourier integral of the form

$$V(\mathbf{r},t) = \frac{1}{\sqrt{2\pi}} \int U_\omega(\mathbf{r}) \, e^{-i\omega t} \, d\omega,$$

where, by Fourier inversion, we have

$$U_\omega(\mathbf{r}) = \frac{1}{\sqrt{2\pi}} \int V(\mathbf{r},t) \, e^{i\omega t} \, dt.$$

The integral theorem (above) is applied to each Fourier component $U_\omega$, and the following expression is obtained:

$$V(P,t) = \frac{1}{4\pi} \oint_S \left\{ [V] \frac{\partial}{\partial \hat{n}} \left( \frac{1}{s} \right) - \frac{1}{cs} \frac{\partial s}{\partial \hat{n}} \left[ \frac{\partial V}{\partial t} \right] - \frac{1}{s} \left[ \frac{\partial V}{\partial \hat{n}} \right] \right\} dS,$$

where the square brackets on $V$ terms denote retarded values, i.e. the values at time $t - s/c$. Kirchhoff showed that the above equation can be approximated to a simpler form in many cases, known as the Kirchhoff, or Fresnel–Kirchhoff, diffraction formula, which is equivalent to the Huygens–Fresnel principle, except that it provides the inclination factor, which is not defined in the Huygens–Fresnel principle. The diffraction integral can be applied to a wide range of problems in optics. Integral derivation Here, the derivation of Kirchhoff's integral theorem is introduced. First, Green's second identity is used in the following form, where the integral surface normal unit vector $\hat{n}$ points toward the volume $V$ enclosed by the integral surface $S$:

$$\oint_S \left( U_1 \frac{\partial U_2}{\partial \hat{n}} - U_2 \frac{\partial U_1}{\partial \hat{n}} \right) dS = -\int_V \left( U_1 \nabla^2 U_2 - U_2 \nabla^2 U_1 \right) dV.$$

The scalar field functions $U_1$ and $U_2$ are set as solutions of the Helmholtz equation $(\nabla^2 + k^2)\,U = 0$, where $k = 2\pi/\lambda$ is the wavenumber ($\lambda$ is the wavelength), which gives the spatial part of a complex-valued monochromatic (single frequency in time) wave expression. (The product of the spatial part and the temporal part of the wave expression is a solution of the scalar wave equation.) Then the volume part of Green's second identity is zero, so only the surface integral remains:

$$\oint_S \left( U_1 \frac{\partial U_2}{\partial \hat{n}} - U_2 \frac{\partial U_1}{\partial \hat{n}} \right) dS = 0.$$

Now $U_1$ is set as the solution of the Helmholtz equation to find, $U$, and $U_2$ is set as the spatial part of a complex-valued monochromatic spherical wave, $e^{iks}/s$, where $s$ is the distance from an observation point $P$ in the closed volume $V$. Since there is a singularity for $e^{iks}/s$ at $P$, where $s = 0$ (the value of $e^{iks}/s$ is not defined at $s = 0$), the integral surface must not include $P$. (Otherwise, the zero volume integral above is not justified.) A suggested integral surface is an inner sphere $S'$ centered at $P$ with radius $\epsilon$ and an outer arbitrary closed surface $S$.
Then the surface integral becomes

$$\oint_S \left( U \frac{\partial}{\partial \hat{n}} \frac{e^{iks}}{s} - \frac{e^{iks}}{s} \frac{\partial U}{\partial \hat{n}} \right) dS + \oint_{S'} \left( U \frac{\partial}{\partial \hat{n}} \frac{e^{iks}}{s} - \frac{e^{iks}}{s} \frac{\partial U}{\partial \hat{n}} \right) dS = 0.$$

For the integral on the inner sphere $S'$, $\frac{\partial}{\partial \hat{n}} = \frac{\partial}{\partial s}$, and by introducing the solid angle $\Omega$ in $dS = s^2 \, d\Omega$, due to $\hat{n}$ being radial there (the spherical coordinate system whose origin is at $P$ can be used to derive this equality),

$$\oint_{S'} \left( U \frac{\partial}{\partial s} \frac{e^{iks}}{s} - \frac{e^{iks}}{s} \frac{\partial U}{\partial s} \right) s^2 \, d\Omega = \oint_{S'} \left[ U \left( iks - 1 \right) e^{iks} - e^{iks} \, s \, \frac{\partial U}{\partial s} \right] d\Omega.$$

By shrinking the sphere toward zero radius, $s = \epsilon \to 0$ (but never touching $P$, to avoid the singularity), $e^{iks} \to 1$, the first and last terms in the surface integral become zero, and the integral over $S'$ becomes $-4\pi U(P)$. As a result, denoting $U$ at $P$, the location of $P$, and the position vector of a point on $S$ by $U(\mathbf{r})$, $\mathbf{r}$, and $\mathbf{r}'$ respectively,

$$U(\mathbf{r}) = \frac{1}{4\pi} \oint_S \left[ U(\mathbf{r}') \frac{\partial}{\partial \hat{n}} \left( \frac{e^{iks}}{s} \right) - \frac{e^{iks}}{s} \frac{\partial U(\mathbf{r}')}{\partial \hat{n}} \right] dS, \qquad s = |\mathbf{r} - \mathbf{r}'|.$$

See also Kirchhoff's diffraction formula Vector calculus Integral Huygens–Fresnel principle Wavefront Surface integral References Further reading The Cambridge Handbook of Physics Formulas, G. Woan, Cambridge University Press, 2010. Introduction to Electrodynamics (3rd Edition), D.J. Griffiths, Pearson Education, Dorling Kindersley, 2007. Light and Matter: Electromagnetism, Optics, Spectroscopy and Lasers, Y.B. Band, John Wiley & Sons, 2010. The Light Fantastic – Introduction to Classic and Quantum Optics, I.R. Kenyon, Oxford University Press, 2008. Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC Publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1, ISBN (VHC Inc.) 0-89573-752-3. McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994. Physical optics Gustav Kirchhoff
Kirchhoff integral theorem
[ "Physics", "Chemistry", "Materials_science" ]
1,085
[ "Crystallography", "Diffraction", "Spectroscopy", "Spectrum (physical sciences)" ]
22,691,628
https://en.wikipedia.org/wiki/European%20Underground%20Rare%20Event%20Calorimeter%20Array
The European Underground Rare Event Calorimeter Array (EURECA) is a planned dark matter search experiment using cryogenic detectors and an absorber mass of up to 1 tonne. The project will be built in the Modane Underground Laboratory and will bring together researchers working on the CRESST and EDELWEISS experiments. EURECA featured prominently in the ASPERA road map of Astroparticle Physics experiments in Europe. Dark matter Dark matter is one of the significant unsolved problems in modern science. There is considerable evidence from astronomy and cosmology that a significant fraction of the mass of the Universe, and of galaxies, is made up of non-luminous material. The nature of dark matter is currently unknown. However, a popular hypothesis is that it consists of Weakly Interacting Massive Particles (WIMPs): particles with a large mass which only interact with ordinary matter through the weak nuclear force, so that the majority that pass through the Earth do not hit a single atom. The aim of dark matter search experiments such as EURECA is to test this hypothesis by searching for WIMP dark matter interactions. WIMPs are predicted to exist by supersymmetry theory, which predicts a wide range of scattering cross-sections down to 10−10 pb, corresponding to an interaction rate of ~1 event per year in a 1 tonne detector. Existing experiments such as CRESST and EDELWEISS have already ruled out higher interaction rates, but EURECA will search down to this lower limit. Cryogenic dark matter searches Cryogenic dark matter experiments use particle detectors operating at millikelvin temperatures to search for the elastic scattering of WIMPs off atomic nuclei. A particle interaction inside an absorber crystal creates a large number of phonons; these thermalise inside a thermometer on the crystal surface, which records the rise in temperature. Such cryogenic detectors are used because they combine high sensitivity with a low energy threshold and excellent resolution. Dark matter experiments are located in deep underground laboratories and use extensive shielding to reduce the background radiation levels from cosmic rays. Early experiments were limited by the remaining background due to radioactive impurities close to the detectors. Therefore, the second phase of CRESST and EDELWEISS used new detectors capable of distinguishing electron recoil events from nuclear recoils. Electron recoils are produced by alpha, beta, and gamma particles, which account for the vast majority of background events; WIMPs (and also neutrons) produce nuclear recoils. The discrimination is done by measuring an additional signal, which is much higher for electron recoils than for nuclear recoils. CRESST detectors measure the scintillation light produced in a CaWO4 or ZnWO4 absorber crystal. EDELWEISS detectors measure the ionization produced in a semiconducting germanium crystal. EURECA EURECA will take the cryogenic detector technology pioneered by CRESST and EDELWEISS further by building a 1 tonne absorber mass made up of a large number of cryogenic detector modules. The experiment plans to use a range of detector materials. This provides a way to show whether a positive signal is due to dark matter, as the event rate from WIMPs is expected to scale with the atomic mass of the target nuclei, whereas the event rate from neutrons will be higher for lighter nuclei. The EURECA collaboration includes the member institutions of the CRESST, EDELWEISS, and ROSEBUD dark matter experiments, and some new members.
These are: Oxford University Commissariat à l'Énergie Atomique Centre National de la Recherche Scientifique Max-Planck-Institut für Physik München Technische Universität München Universität Tübingen Universität Karlsruhe Forschungszentrum Karlsruhe JINR Universidad de Zaragoza INR Kiev CERN The collaboration spokesman is Gilles Gerbier. The experiment will be built in the Modane Underground Laboratory, in the Fréjus road tunnel between France and Italy, the deepest underground laboratory in Europe. R&D activities EURECA researchers are currently involved in data taking and analysis for CRESST and EDELWEISS. In addition, there are various R&D activities under way associated with scaling up the detector technology to a 1-tonne scale. These include: Cryogenics: EURECA will require a one-tonne mass to be cooled to millikelvin temperatures. This will be done using large-scale cryogenic technology, as used to cool gravitational wave experiments and the 27 km LHC accelerator ring. Scintillators: Research is being carried out to develop large radiopure absorber crystals with good scintillation properties at low temperatures. Detector readout: EURECA will require hardware and software to read out the signals from 1000+ detector channels. See also Dark Matter Weakly Interacting Massive Particle EDELWEISS CRESST References External links EURECA Collaboration home page ASPERA: Astroparticle physics European network Experiments for dark matter search
European Underground Rare Event Calorimeter Array
[ "Physics" ]
1,022
[ "Dark matter", "Experiments for dark matter search", "Unsolved problems in physics" ]
22,692,296
https://en.wikipedia.org/wiki/Nuclear%20interaction%20length
Nuclear interaction length is the mean distance travelled by a hadronic particle before undergoing an inelastic nuclear interaction. See also Nuclear collision length Radiation length External links Particle Data Group site Experimental particle physics
Nuclear interaction length
[ "Physics" ]
41
[ "Nuclear physics", "Experimental physics", "Nuclear and atomic physics stubs", "Particle physics", "Experimental particle physics" ]
22,693,937
https://en.wikipedia.org/wiki/EDELWEISS
EDELWEISS (Expérience pour DEtecter Les WIMPs En Site Souterrain) is a dark matter search experiment located at the Modane Underground Laboratory in France. The experiment uses cryogenic detectors, measuring both the phonon and ionization signals produced by particle interactions in germanium crystals. This technique allows nuclear recoil events to be distinguished from electron recoil events. The EURECA project is a proposed future dark matter experiment which will involve researchers from EDELWEISS and the CRESST dark matter search. Dark matter Dark matter is material which does not emit or absorb light. Measurements of the rotation curves of spiral galaxies suggest it makes up the majority of the mass of galaxies, and precision measurements of the cosmic microwave background radiation suggest it accounts for a significant fraction of the density of the Universe. A possible explanation of dark matter comes from particle physics. WIMP (Weakly Interacting Massive Particle) is a general term for hypothetical particles which interact only through the weak nuclear and gravitational forces. This theory suggests our galaxy is surrounded by a dark halo of such particles. EDELWEISS is one of a number of dark matter search experiments aiming to directly detect WIMP dark matter by detecting the elastic scattering of a WIMP off an atom within a particle detector. As the interaction rate is so low, this requires sensitive detectors, good background discrimination, and a deep underground site (to reduce the background from cosmic rays). Experiment EDELWEISS is located in the Modane underground laboratory, in the Fréjus road tunnel between France and Italy, below 1800 m of rock. A 20 cm lead shield reduces the gamma background, and a polyethylene shield reduces the neutron flux. All materials close to the detectors are screened for radiopurity. A dilution refrigerator is used to cool the detectors, built in the opposite orientation to most instruments, with the detectors at the top and the refrigeration mechanism below. EDELWEISS uses high-purity germanium cryogenic bolometers cooled to 20 millikelvin above absolute zero. The phonon and ionization signals produced by a particle interaction are measured. This allows background events to be rejected, as nuclear recoil events (produced by WIMP or neutron interactions) produce much less ionization than electron recoil events (produced by alpha, beta, and gamma radiation). The detectors are similar to those used by the CDMS experiment. Simultaneous detection of ionization and heat with semiconductors at low temperature was originally an idea of Lawrence M. Krauss, Mark Srednicki, and Frank Wilczek. A major limitation of early detectors was the problem of surface events. Due to incomplete charge collection, a particle interaction near the surface of the crystal gave a reduced ionization signal, so electron recoils near the surface could be mistaken for nuclear recoils. To avoid this, the collaboration developed new detectors with interdigitated electrodes. Different voltages are applied to a series of electrodes so that the direction of the electric field is different near the surface of the crystal, allowing over 99.5% of surface events to be rejected. Results The results from the first phase of the experiment (EDELWEISS I) were published in 2005, excluding WIMP dark matter with an interaction cross-section above (at ≈85 GeV).
EDELWEISS-II ran in 2009–10 with 10 detectors, that is, 4 kg of detector mass (for a total effective exposure of 384 kg·d), setting limits on high-mass and low-mass WIMPs, and on axions. A cross-section of is excluded at 90% C.L. for a WIMP mass of 85 GeV. EDELWEISS-III had 40 detectors. EDELWEISS-III conducted its first science run in 2014–2015, with results published in 2016. EURECA design work will continue for operation after the EDELWEISS-III run. It is planned that EURECA would start operating after 2017. Collaboration EDELWEISS is a collaboration of the following member institutions: CEA – Commissariat à l'Énergie Atomique IRFU - Institut de Recherche sur les Lois Fondamentales de l'Univers IRAMIS - Institut Rayonnement Matière de Saclay CNRS – Centre National de la Recherche Scientifique CSNSM - Centre de Spectrométrie Nucléaire et de Spectrométrie de Masse, Orsay IPNL - Institut de Physique Nucléaire de Lyon Institut NÉEL, Grenoble IAS - Institut d'Astrophysique Spatiale, Paris Institutions outside France Universität Karlsruhe, Germany Forschungszentrum Karlsruhe, Germany JINR – Joint Institute for Nuclear Research, Dubna, Russia University of Oxford References External links EDELWEISS Collaboration home page EURECA Collaboration home page Experiments for dark matter search
EDELWEISS
[ "Physics" ]
1,004
[ "Dark matter", "Experiments for dark matter search", "Unsolved problems in physics" ]
22,696,587
https://en.wikipedia.org/wiki/Genetically%20modified%20animal
Genetically modified animals are animals that have been genetically modified for a variety of purposes, including producing drugs, enhancing yields, and increasing resistance to disease. The vast majority of genetically modified animals are at the research stage; the number close to entering the market remains small. Production The process of genetically engineering mammals is slow, tedious, and expensive. As with other genetically modified organisms (GMOs), genetic engineers must first isolate the gene they wish to insert into the host organism. This can be taken from a cell containing the gene or artificially synthesised. If the chosen gene or the donor organism's genome has been well studied, it may already be accessible from a genetic library. The gene is then combined with other genetic elements, including a promoter and terminator region and usually a selectable marker. A number of techniques are available for inserting the isolated gene into the host genome. With animals, DNA is generally inserted using microinjection, where it can be injected through the cell's nuclear envelope directly into the nucleus, or through the use of viral vectors. The first transgenic animals were produced by injecting viral DNA into embryos and then implanting the embryos in females. It is necessary to ensure that the inserted DNA is present in the embryonic stem cells. The embryo would develop, and it would be hoped that some of the genetic material would be incorporated into the reproductive cells. Then researchers would have to wait until the animal reached breeding age, and offspring would be screened for presence of the gene in every cell, using PCR, Southern hybridization, and DNA sequencing. New technologies are making genetic modifications easier and more precise. Gene targeting techniques, which create double-stranded breaks and take advantage of the cell's natural homologous recombination repair systems, have been developed to target insertion to exact locations. Genome editing uses artificially engineered nucleases that create breaks at specific points. There are four families of engineered nucleases: meganucleases, zinc finger nucleases, transcription activator-like effector nucleases (TALENs), and the Cas9-guideRNA system (adapted from CRISPR). TALEN and CRISPR are the two most commonly used, and each has its own advantages. TALENs have greater target specificity, while CRISPR is easier to design and more efficient. The development of the CRISPR-Cas9 gene editing system has effectively halved the amount of time needed to develop genetically modified animals. History Humans have domesticated animals since around 12,000 BCE, using selective breeding or artificial selection (as contrasted with natural selection). The process of selective breeding, in which organisms with desired traits (and thus with the desired genes) are used to breed the next generation and organisms lacking the trait are not bred, is a precursor to the modern concept of genetic modification. Various advancements in genetics allowed humans to directly alter the DNA and therefore the genes of organisms. In 1972, Paul Berg created the first recombinant DNA molecule when he combined DNA from a monkey virus with that of the lambda virus. In 1974, Rudolf Jaenisch created a transgenic mouse by introducing foreign DNA into its embryo, making it the world's first transgenic animal. However, it took another eight years before transgenic mice were developed that passed the transgene to their offspring.
Genetically modified mice were created in 1984 that carried cloned oncogenes, predisposing them to developing cancer. Mice with genes knocked out (knockout mice) were created in 1989. The first transgenic livestock were produced in 1985, and the first animals to synthesise transgenic proteins in their milk were mice, engineered in 1987 to produce human tissue plasminogen activator. The first genetically modified animal to be commercialised was the GloFish, a zebrafish with a fluorescent gene added that allows it to glow in the dark under ultraviolet light. It was released to the US market in 2003. The first genetically modified animal to be approved for food use was AquAdvantage salmon in 2015. The salmon were transformed with a growth hormone-regulating gene from a Pacific Chinook salmon and a promoter from an ocean pout, enabling them to grow year-round instead of only during spring and summer. Mammals GM mammals are created for research purposes, production of industrial or therapeutic products, agricultural uses, or improving their health. There is also a market for creating genetically modified pets. Medicine Mammals are the best models for human disease, making genetically engineered ones vital to the discovery and development of cures and treatments for many serious diseases. Knocking out genes responsible for human genetic disorders allows researchers to study the mechanism of the disease and to test possible cures. Genetically modified mice have been the most common mammals used in biomedical research, as they are cheap and easy to manipulate. Examples include humanized mice created by xenotransplantation of human gene products, used as murine human-animal hybrids to gain relevant insights in the in vivo context for understanding human-specific physiology and pathologies. Pigs are also a good target, because they have a similar body size, anatomical features, physiology, pathophysiological response, and diet. Nonhuman primates are the most similar model organisms to humans, but there is less public acceptance toward using them as research animals. In 2009, scientists announced that they had successfully transferred a gene into a primate species (marmosets) and produced a stable line of breeding transgenic primates for the first time. Their first research target for these marmosets was Parkinson's disease, but they were also considering amyotrophic lateral sclerosis and Huntington's disease. Human proteins expressed in mammals are more likely to be similar to their natural counterparts than those expressed in plants or microorganisms. Stable expression has been accomplished in sheep, pigs, rats, and other animals. In 2009, the first human biological drug produced from such an animal, a goat, was approved. The drug, ATryn, an anticoagulant which reduces the probability of blood clots during surgery or childbirth, is extracted from the goat's milk. Human alpha-1-antitrypsin is another such protein, used in treating people with alpha-1-antitrypsin deficiency. Another area is in creating pigs with greater capacity for human organ transplants (xenotransplantation). Pigs have been genetically modified so that their organs can no longer carry retroviruses, or have modifications to reduce the chance of rejection. Pig lungs from genetically modified pigs are being considered for transplantation into humans. There is even potential to create chimeric pigs that can carry human organs.
Livestock Livestock are modified with the intention of improving economically important traits such as growth rate, quality of meat, milk composition, disease resistance, and survival. Animals have been engineered to grow faster, be healthier, and resist diseases. Modifications have also improved the wool production of sheep and the udder health of cows. Goats have been genetically engineered to produce milk with strong spiderweb-like silk proteins. The goat gene sequence has been modified, using fresh umbilical cords taken from kids, to code for the human enzyme lysozyme. Researchers wanted to alter the milk produced by the goats to contain lysozyme in order to fight off bacteria causing diarrhea in humans. Enviropig was a genetically enhanced line of Yorkshire pigs in Canada created with the capability of digesting plant phosphorus more efficiently than conventional Yorkshire pigs. A transgene construct consisting of a promoter expressed in the murine parotid gland and the Escherichia coli phytase gene was introduced into the pig embryo by pronuclear microinjection. This caused the pigs to produce the enzyme phytase, which breaks down the otherwise indigestible phosphorus, in their saliva. As a result, they excrete 30 to 70% less phosphorus in manure, depending upon age and diet. The lower concentrations of phosphorus in surface runoff reduce algal growth, because phosphorus is the limiting nutrient for algae. Because algae consume large amounts of oxygen, excessive growth can result in dead zones for fish. Funding for the Enviropig program ended in April 2012, and as no new partners were found the pigs were killed. However, the genetic material will be stored at the Canadian Agricultural Genetics Repository Program. In 2006, a pig was engineered to produce omega-3 fatty acids through the expression of a roundworm gene. In 1990, the world's first transgenic bovine, Herman the Bull, was developed. Herman was genetically engineered by micro-injection of embryonic cells with the human gene coding for lactoferrin. The Dutch Parliament changed the law in 1992 to allow Herman to reproduce. Eight calves were born in 1994, and all calves inherited the lactoferrin gene. With subsequent sirings, Herman fathered a total of 83 calves. Dutch law required Herman to be slaughtered at the conclusion of the experiment. However, the Dutch Agriculture Minister at the time, Jozias van Aartsen, granted him a reprieve, provided he did not have more offspring, after the public and scientists rallied to his defence. Together with cloned cows named Holly and Belle, he lived out his retirement at Naturalis, the National Museum of Natural History in Leiden. On 2 April 2004, Herman was euthanised by veterinarians from the University of Utrecht because he suffered from osteoarthritis. At the time of his death Herman was one of the oldest bulls in the Netherlands. Herman's hide has been preserved and mounted by taxidermists and is permanently on display in Naturalis. The museum says that he represents the start of a new era in the way man deals with nature, an icon of scientific progress, and the subsequent public discussion of these issues. In October 2017, Chinese scientists announced they had used CRISPR gene editing technology to create a line of pigs with better body temperature regulation, resulting in about 24% less body fat than typical livestock. Researchers have developed GM dairy cattle that grow without horns (sometimes referred to as "polled"), which can cause injuries to farmers and other animals.
DNA was taken from the genome of Red Angus cattle, which is known to suppress horn growth, and inserted into cells taken from an elite Holstein bull called "Randy". Each of the progeny will be a clone of Randy, but without his horns, and their offspring should also be hornless. In 2011, Chinese scientists generated dairy cows genetically engineered with human genes to produce milk that would be the same as human breast milk. This could potentially benefit mothers who cannot produce breast milk but want their children to have breast milk rather than formula. The researchers claim these transgenic cows are identical to regular cows. Two months later, scientists from Argentina presented Rosita, a transgenic cow incorporating two human genes, engineered to produce milk with properties similar to those of human breast milk. In 2012, researchers from New Zealand also developed a genetically engineered cow that produced allergy-free milk. In 2016, Jayne Raper and a team announced the first trypanotolerant transgenic cow in the world. This team, spanning the International Livestock Research Institute, Scotland's Rural College, the Roslin Institute's Centre for Tropical Livestock Genetics and Health, and the City University of New York, announced that a Kenyan Boran bull had been born and had already successfully sired two offspring. Tumaini, named for the Swahili word for "hope", carries a trypanolytic factor from a baboon, introduced via CRISPR/Cas9. Research Scientists have genetically engineered several organisms, including some mammals, to include green fluorescent protein (GFP), for research purposes. GFP and other similar reporter genes allow easy visualisation and localisation of the products of the genetic modification. Fluorescent pigs have been bred to study human organ transplants, regenerating ocular photoreceptor cells, and other topics. In 2011, green-fluorescent cats were created to help find therapies for HIV/AIDS and other diseases, as feline immunodeficiency virus (FIV) is related to HIV. Researchers from the University of Wyoming have developed a way to incorporate spiders' silk-spinning genes into goats, allowing them to harvest the silk protein from the goats' milk for a variety of applications. Conservation Genetic modification of the myxoma virus has been proposed to conserve European wild rabbits in the Iberian peninsula and to help regulate them in Australia. To protect the Iberian species from viral diseases, the myxoma virus was genetically modified to immunize the rabbits, while in Australia the same myxoma virus was genetically modified to lower fertility in the Australian rabbit population. There have also been suggestions that genetic engineering could be used to bring animals back from extinction. It involves changing the genome of a close living relative to resemble the extinct one, and is currently being attempted with the passenger pigeon. Genes associated with the woolly mammoth have been added to the genome of an African elephant, although the lead researcher says he has no intention of using live elephants. Humans Gene therapy uses genetically modified viruses to deliver genes that can cure disease in humans. Although gene therapy is still relatively new, it has had some successes. It has been used to treat genetic disorders such as severe combined immunodeficiency and Leber's congenital amaurosis.
Treatments are also being developed for a range of other currently incurable diseases, such as cystic fibrosis, sickle cell anemia, Parkinson's disease, cancer, diabetes, heart disease, and muscular dystrophy. These treatments only affect somatic cells, meaning any changes would not be inheritable. Germline gene therapy results in changes that are inheritable, which has raised concerns within the scientific community. In 2015, CRISPR was used to edit the DNA of non-viable human embryos. In November 2018, He Jiankui announced that he had edited the genomes of two human embryos in an attempt to disable the CCR5 gene, which codes for a receptor that HIV uses to enter cells. He said that twin girls, Lulu and Nana, had been born a few weeks earlier, and that they carried functional copies of CCR5 along with disabled CCR5 (mosaicism) and were still vulnerable to HIV. The work was widely condemned as unethical, dangerous, and premature. Fish Genetically modified fish are used for scientific research, as pets, and as a food source. Aquaculture is a growing industry, currently providing over half of the fish consumed worldwide. Through genetic engineering, it is possible to increase growth rates, reduce food intake, remove allergenic properties, increase cold tolerance, and provide disease resistance. Detecting pollution Fish can also be used to detect aquatic pollution or function as bioreactors. Several groups have been developing zebrafish to detect pollution by attaching fluorescent proteins to genes activated by the presence of pollutants. The fish will then glow and can be used as environmental sensors. Pets The GloFish is a brand of genetically modified fluorescent zebrafish with bright red, green, and orange fluorescent color. It was originally developed by one of the groups working on pollution detection, but is now part of the ornamental fish trade, becoming the first genetically modified animal to become publicly available as a pet when it was introduced for sale in 2003. Research GM fish are widely used in basic research in genetics and development. Two species of fish, zebrafish and medaka, are most commonly modified because they have optically clear chorions (membranes in the egg), develop rapidly, and the one-cell embryo is easy to see and microinject with transgenic DNA. Zebrafish are model organisms for developmental processes, regeneration, genetics, behaviour, disease mechanisms, and toxicity testing. Their transparency allows researchers to observe developmental stages, intestinal functions, and tumour growth. The generation of transgenic protocols (whole organism, cell- or tissue-specific, tagged with reporter genes) has increased the level of information gained by studying these fish. Growth GM fish have been developed with promoters driving an over-production of "all fish" growth hormone for use in the aquaculture industry, to increase the speed of development and potentially reduce fishing pressure on wild stocks. This has resulted in dramatic growth enhancement in several species, including salmon, trout, and tilapia. AquaBounty Technologies have produced a salmon that can mature in half the time taken by wild salmon. The fish is an Atlantic salmon with a Chinook salmon (Oncorhynchus tshawytscha) gene inserted. This allows the fish to produce growth hormones all year round, compared to the wild-type fish, which produces the hormone for only part of the year. The fish also has a second gene inserted, from the eel-like ocean pout, that acts like an "on" switch for the hormone.
Pout also have antifreeze proteins in their blood, which allow the GM salmon to survive in near-freezing waters and continue their development. A wild-type salmon takes 24 to 30 months to reach market size (4–6 kg), whereas the producers of the GM salmon say that it requires only 18 months for the GM fish to reach that size. In November 2015, the US FDA approved the AquAdvantage salmon for commercial production, sale, and consumption, the first non-plant GMO food to be commercialized. AquaBounty says that to prevent the genetically modified fish from inadvertently breeding with wild salmon, all of the fish will be female and reproductively sterile, although a small percentage of the females may remain fertile. Some opponents of the GM salmon have dubbed it the "Frankenfish". Insects Research In biological research, transgenic fruit flies (Drosophila melanogaster) are model organisms used to study the effects of genetic changes on development. Fruit flies are often preferred over other animals due to their short life cycle and low maintenance requirements. They also have a relatively simple genome compared to many vertebrates, with typically only one copy of each gene, making phenotypic analysis easy. Drosophila have been used to study genetics and inheritance, embryonic development, learning, behavior, and aging. Transposons (particularly P elements) are well developed in Drosophila and provided an early method to add transgenes to their genome, although this has been superseded by more modern gene-editing techniques. Population control Due to their significance to human health, scientists are looking at ways to control mosquitoes through genetic engineering. Malaria-resistant mosquitoes have been developed in the laboratory by inserting a gene that reduces the development of the malaria parasite and then using homing endonucleases to rapidly spread that gene throughout the male population (known as a gene drive). This approach has been taken further by swapping in a lethal gene. In trials, the populations of Aedes aegypti mosquitoes, the single most important carrier of dengue fever and Zika virus, were reduced by between 80% and 90%. Another approach is to use the sterile insect technique, whereby males genetically engineered to be sterile outcompete viable males, to reduce population numbers. Other insect pests that make attractive targets are moths. Diamondback moths cause US$4 to $5 billion of damage a year worldwide. The approach is similar to that for mosquitoes: males transformed with a gene that prevents females from reaching maturity are released. They underwent field trials in 2017. Genetically modified moths have previously been released in field trials. A strain of pink bollworm that was sterilised with radiation was genetically engineered to express a red fluorescent protein, making it easier for researchers to monitor the moths. Industry The silkworm, the larval stage of Bombyx mori, is an economically important insect in sericulture. Scientists are developing strategies to enhance silk quality and quantity. There is also potential to use the silk-producing machinery to make other valuable proteins. Proteins expressed by silkworms include human serum albumin, human collagen α-chain, mouse monoclonal antibody, and N-glycanase. Silkworms have been created that produce spider silk, a stronger but extremely difficult to harvest silk, and even novel silks. Birds Attempts to produce genetically modified birds began before 1980.
Chickens have been genetically modified for a variety of purposes. These include studying embryo development, preventing the transmission of bird flu, and providing evolutionary insights by using reverse engineering to recreate dinosaur-like phenotypes. A GM chicken that produces the drug Kanuma, an enzyme that treats a rare condition, in its eggs passed regulatory approval in 2015. Disease control One potential use of GM birds could be to reduce the spread of avian disease. Researchers at the Roslin Institute have produced a strain of GM chickens (Gallus gallus domesticus) that does not transmit avian flu to other birds; however, these birds are still susceptible to contracting it. The genetic modification is an RNA molecule that prevents virus reproduction by mimicking the region of the flu virus genome that controls replication. It is referred to as a "decoy" because it diverts the flu virus enzyme, the polymerase, from functions that are required for virus replication. Evolutionary insights A team of geneticists led by University of Montana paleontologist Jack Horner is seeking to modify a chicken to express several features present in ancestral maniraptorans but absent in modern birds, such as teeth and a long tail, creating what has been dubbed a "chickenosaurus". Parallel projects have produced chicken embryos expressing dinosaur-like skull, leg, and foot anatomy. In-ovo sexing Gene editing is one possible tool in the laying-hen breeding industry to provide an alternative to chick culling. With this technology, breeding hens are given a genetic marker that is only passed down to male offspring. These males can then be identified during incubation and removed from the egg supply, so that only females hatch. For example, the Israeli startup eggXYt uses CRISPR to give male eggs a biomarker that makes them glow under certain conditions. Importantly, the resulting laying hen and the eggs it produces are not themselves genetically edited. The European Union's Director General for Health and Food Safety has confirmed that eggs made in this way can be marketed, although none are commercially available as of June 2023. Amphibians The first experiments that successfully developed transgenic amphibian embryos began in the 1980s with Xenopus laevis. Later, germline transgenic axolotls (Ambystoma mexicanum) were produced in 2006 using a technique called I-SceI-mediated transgenesis, which utilizes the I-SceI endonuclease enzyme that can break DNA at specific sites and allow foreign DNA to be inserted into the genome. Both Xenopus laevis and Ambystoma mexicanum are model organisms used to study regeneration. In addition, transgenic lines have been produced in other salamanders, including the Japanese newt Cynops pyrrhogaster and Pleurodeles waltl. Genetically modified frogs, in particular Xenopus laevis and Xenopus tropicalis, are used in developmental biology. GM frogs can also be used as pollution sensors, especially for endocrine-disrupting chemicals. There are proposals to use genetic engineering to control cane toads in Australia. Many lines of transgenic X. laevis are used to study immunology, addressing how bacteria and viruses cause infectious disease, at the University of Rochester Medical Center's X. laevis Research Resource for Immunobiology (XLRRI). Amphibians can also be used to study and validate regenerative signaling pathways such as the Wnt pathway.
The wound-healing abilities of amphibians have many practical applications and can potentially provide a foundation for scar-free repair in human plastic surgery, such as treating the skin of burn patients. Amphibians like X. laevis are suitable for experimental embryology because they have large embryos that can be easily manipulated and observed during development. In experiments with axolotls, mutants with white pigmented skin are often used because their semi-transparent skin provides an efficient way to visualize and track fluorescently tagged proteins like GFP. Amphibians are not always ideal given the resources required to produce genetically modified animals; along with its one- to two-year generation time, Xenopus laevis can be considered less than ideal for transgenic experiments because of its pseudotetraploid genome. Because the same genes appear in the genome multiple times, the chance of mutagenesis experiments working is lower. Current methods of freezing and thawing axolotl sperm render it nonfunctional, meaning transgenic lines must be maintained in a facility, and this can get quite costly. Producing transgenic axolotls poses many challenges due to their large genome size. Current methods of generating transgenic axolotls are limited to random integration of the transgene cassette into the genome, which can lead to uneven expression or silencing. Gene duplicates also complicate efforts to generate efficient gene knockouts. Despite the costs, axolotls have unique regenerative abilities and ultimately provide useful information for understanding tissue regeneration, because they can regenerate their limbs, spinal cord, skin, heart, lungs, and other organs. Naturally occurring mutant axolotls like the white strain that are often used in research have a transcriptional mutation at the Edn3 gene locus. Unlike in other model organisms, the first fluorescently labeled cells in axolotls were differentiated muscle cells rather than embryos. In these initial experiments in the early 2000s, scientists were able to visualize muscle cell regeneration in the axolotl tail using a microinjection technique, but cells could not be traced for the entire course of regeneration because overly harsh conditions caused early death of the labeled cells. Though the process of producing transgenic axolotls was a challenge, scientists were able to label cells for longer durations using a plasmid transfection technique, which involves injecting DNA into cells using an electrical pulse in a process called electroporation. Transfecting axolotl cells is thought to be more difficult because of the composition of the extracellular matrix (ECM). This technique allows spinal cord cells to be labeled and is very important in studying limb regeneration in many other cells; it has been used to study the role of the immune system in regeneration. Using gene knockout approaches, scientists can target specific regions of DNA with techniques like CRISPR/Cas9 to understand the function of certain genes from the absence of the gene of interest. For example, gene knockouts of the Sox2 gene confirm this region's role in neural stem cell amplification in the axolotl. The technology to do more complex conditional gene knockouts, which give the scientist spatiotemporal control of the gene, is not yet suitable for axolotls.
However, research in this field continues to develop and is made easier by the recent sequencing of the genome and resources created for scientists, including data portals that contain axolotl genome and transcriptome reference assemblies for identifying orthologs. Nematodes The nematode Caenorhabditis elegans is one of the major model organisms for researching molecular biology. RNA interference (RNAi) was discovered in C. elegans and can be induced by simply feeding the worms bacteria modified to express double-stranded RNA. It is also relatively easy to produce stable transgenic nematodes, and this, along with RNAi, is a major tool used in studying their genes. The most common use of transgenic nematodes has been studying gene expression and localisation by attaching reporter genes. Transgenes can also be combined with RNAi to rescue phenotypes, altered to study gene function, imaged in real time as the cells develop, or used to control expression for different tissues or developmental stages. Transgenic nematodes have been used to study viruses, toxicology, and diseases, and to detect environmental pollutants. Other Systems have been developed to create transgenic organisms in a wide variety of other animals. The gene responsible for albinism in sea cucumbers has been found and used to engineer white sea cucumbers, a rare delicacy. The technology also opens the way to investigate the genes responsible for some of the cucumbers' more unusual traits, including hibernating in summer, eviscerating their intestines, and dissolving their bodies upon death. Flatworms have the ability to regenerate themselves from a single cell. Until 2017 there was no effective way to transform them, which hampered research. By using microinjection and radiation, scientists have now created the first genetically modified flatworms. The bristle worm, a marine annelid, has been modified. It is of interest due to its reproductive cycle being synchronized with lunar phases, its regeneration capacity, and its slow evolution rate. Cnidaria such as Hydra and the sea anemone Nematostella vectensis are attractive model organisms to study the evolution of immunity and certain developmental processes. Other organisms that have been genetically modified include snails, geckos, turtles, crayfish, oysters, shrimp, clams, abalone, and sponges. Food products derived from genetically modified (GM) animals have not yet entered the European market. Nonetheless, the ongoing discussion about GM crops, and the developing debate about the safety and ethics of foods and pharmaceutical products produced by both GM animals and plants, have provoked varying views across different sectors of society. Ethics Genetic modification and genome editing hold potential for the future, but decisions regarding the use of these technologies must be based not only on what is possible, but also on what is ethically reasonable. Principles such as animal integrity, naturalness, risk identification, and animal welfare are examples of ethically important factors that must be taken into consideration; they also influence public perception and regulatory decisions by authorities. The utility of extrapolating animal data to humans has been questioned. This has led ethical committees to adopt the principles of the four Rs (Reduction, Refinement, Replacement, and Responsibility) as a guide for decision-making regarding animal experimentation.
However, complete abandonment of laboratory animals has not yet been possible, and further research is needed to develop a roadmap for robust alternatives before their use can be fully discontinued. References Genetic engineering Genetically modified organisms
Genetically modified animal
[ "Chemistry", "Engineering", "Biology" ]
6,216
[ "Biological engineering", "Genetic engineering", "Genetically modified organisms", "Molecular biology" ]
22,697,066
https://en.wikipedia.org/wiki/SARK
The SARK (Search and Rescue Knife) or NSAR (Navy Search and Rescue) is a folding knife designed by knifemaker Ernest Emerson for use as a search and rescue knife by the US military. It has a hawkbill blade with a blunt tip in order to cut trapped victims free without cutting them in the process. There is a variant with a pointed tip designed for police, known as the P-SARK (Police Search and Rescue Knife). History After a helicopter crash in 1999, which resulted in the deaths of six marines and one sailor, the United States Navy performed an assessment of its equipment and decided, among other things, that it needed a new search and rescue knife. The KA-BAR knives issued to the Special Boat Units (SBUs) had catastrophically failed to cut the marines free from their webbing. The Navy went to Emerson Knives, Inc., whose owner, Ernest Emerson, designed and fabricated a working prototype within 24 hours. The Navy found that the knife met its needs, and the model was dubbed the "SARK" (Search and Rescue Knife). The SARK is a folding knife with a wharncliffe-style blade and a blunt tip, designed so that a rescuer can cut trapped victims free without stabbing them. Seeing another need in the police community, Senior Corporal Darryl Bolke, a police officer of the Ontario Police Department, approached Emerson and asked for a modification to the SARK. Bolke's request was to make the tip of the blade pointed rather than blunt. Emerson replaced the blunt end of the SARK with a pointed end and named it the "P-SARK", or Police Search And Rescue Knife. Bolke wrote the knife policy for his department, the first of its kind in the United States. The P-SARK has been adopted by a number of law enforcement agencies since that time. In 2005, the Navy changed the requirements on the SARK to incorporate a guthook on the back of the blade for use as a line-cutter. Emerson made the change on this model, which was designated the NSAR (Navy Search And Rescue) Knife and made available only to the United States Navy. Specifications The SARK, P-SARK, and NSAR, like all of Emerson's knives, are made in the US. All three models have a wharncliffe-shaped chisel-ground blade that is long and hardened to a Rockwell hardness of 57-59 RC. The handle is long, making the knife in length when opened. The blade steel is Crucible's 154CM and is thick. The butt-end of the knife is square-shaped and has a hole for tying a lanyard. Some models are made with partially serrated blades to aid in the cutting of seatbelts or webbing. The handle of the SARK is made of two titanium liners utilizing a Walker linerlock and a double detent as the locking mechanism. Titanium is used due to its exceptional strength-to-weight ratio and corrosion resistance. The handle's scales are made from black G-10 fiberglass, although models were made for a few years using green G-10. A pocket clip held in place by three screws allows the knife to be clipped to a pocket, web gear, or MOLLE. Each model is equipped with Emerson's Wave opening mechanism, a small hook on the spine of the blade designed to catch the edge of a user's pocket, opening the blade as the knife is drawn. References External links Emerson Knives Homepage Military knives Equipment of the United States Navy Rescue equipment Pocket knives Mechanical hand tools Goods manufactured in the United States
SARK
[ "Physics" ]
741
[ "Mechanics", "Mechanical hand tools" ]
22,697,098
https://en.wikipedia.org/wiki/Distance%20of%20closest%20approach
The distance of closest approach of two objects is the distance between their centers when they are externally tangent. The objects may be geometric shapes or physical particles with well-defined boundaries. The distance of closest approach is sometimes referred to as the contact distance. For the simplest objects, spheres, the distance of closest approach is simply the sum of their radii. For non-spherical objects, the distance of closest approach is a function of the orientation of the objects, and its calculation can be difficult. The maximum packing density of hard particles, an important problem of ongoing interest, depends on their distance of closest approach. The interactions of particles typically depend on their separation, and the distance of closest approach plays an important role in determining the behavior of condensed matter systems. Excluded volume The excluded volume of particles (the volume excluded to the centers of other particles due to the presence of one) is a key parameter in such descriptions; the distance of closest approach is required to calculate the excluded volume. The excluded volume for identical spheres is just four times the volume of one sphere. For other anisotropic objects, the excluded volume depends on orientation, and its calculation can be surprisingly difficult. The simplest shapes after spheres are ellipses and ellipsoids; these have received considerable attention, yet their excluded volume is not known in closed form. Vieillard-Baron was able to provide an overlap criterion for two ellipses. His results were useful for computer simulations of hard particle systems and for packing problems using Monte Carlo simulations. The one anisotropic shape whose excluded volume can be expressed analytically is the spherocylinder; the solution of this problem is a classic work by Onsager. The problem was tackled by considering the distance between two line segments, which are the center lines of the capped cylinders. Results for other shapes are not readily available. The orientation dependence of the distance of closest approach has surprising consequences. Systems of hard particles, whose interactions are only entropic, can become ordered. Hard spherocylinders form not only orientationally ordered nematic, but also positionally ordered smectic phases. Here, the system gives up some orientational (and even positional) disorder to gain entropy elsewhere. Case of two ellipses Vieillard-Baron first investigated this problem, and although he did not obtain a result for the distance of closest approach, he derived the overlap criterion for two ellipses. His final results were useful for the study of the phase behavior of hard particles and for the packing problem using Monte Carlo simulations. Although overlap criteria have been developed, analytic solutions for the distance of closest approach and the location of the point of contact have only recently become available. The details of the calculations and a Fortran 90 subroutine are provided in the references. The procedure consists of three steps: Transformation of the two tangent ellipses, whose centers are joined by a given vector, into a circle and an ellipse whose centers are joined by a transformed vector; the circle and the ellipse remain tangent after the transformation. Analytic determination of the distance of closest approach of the circle and the ellipse; this requires the appropriate solution of a quartic equation, after which the normal at the point of contact is calculated. Determination of the distance of closest approach of the original ellipses, and the location of their point of contact, by the inverse transformations. Input: the lengths of the semiaxes and the unit vectors along the major axes of both ellipses, together with the unit vector joining the centers of the two ellipses. Output: the distance between the centers when the ellipses are externally tangent, and the location of the point of contact (a numerical sketch of this tangency condition is given below). Case of two ellipsoids Consider two ellipsoids, each with a given shape and orientation, whose centers are on a line with given direction. We wish to determine the distance between centers when the ellipsoids are in point contact externally. This distance of closest approach is a function of the shapes of the ellipsoids and their orientation. There is no analytic solution for this problem, since solving for the distance requires the solution of a sixth-order polynomial equation. Here an algorithm is developed to determine this distance, based on the analytic results for the distance of closest approach of ellipses in 2D, which can be implemented numerically. Details are given in the references. Subroutines are provided in two formats: Fortran90 and C. The algorithm consists of three steps: Constructing a plane containing the line joining the centers of the two ellipsoids, and finding the equations of the ellipses formed by the intersection of this plane and the ellipsoids. Determining the distance of closest approach of the ellipses; that is, the distance between the centers of the ellipses when they are in point contact externally. Rotating the plane until the distance of closest approach of the ellipses is a maximum. The distance of closest approach of the ellipsoids is this maximum distance. See also Apsis Impact parameter References Conic sections Distance
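The tangency condition used above can also be explored numerically. The Python sketch below finds the distance of closest approach of two ellipses of given shapes and orientations by bisecting on the center separation along a fixed direction; it is an illustrative alternative to the analytic method and the published Fortran 90 subroutine, not a reproduction of them, and all names are made up for the example.

import numpy as np

def boundary(center, a, b, phi, n=4000):
    # Sample points on the boundary of an ellipse with semiaxes a, b,
    # rotated by angle phi and centered at `center`.
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pts = np.stack([a * np.cos(t), b * np.sin(t)], axis=1)
    rot = np.array([[np.cos(phi), -np.sin(phi)],
                    [np.sin(phi),  np.cos(phi)]])
    return pts @ rot.T + center

def inside(pts, center, a, b, phi):
    # True where a point lies strictly inside the given ellipse.
    rot = np.array([[ np.cos(phi), np.sin(phi)],
                    [-np.sin(phi), np.cos(phi)]])
    local = (pts - center) @ rot.T
    return (local[:, 0] / a) ** 2 + (local[:, 1] / b) ** 2 < 1.0

def overlaps(d, e1, e2, d_hat):
    # Ellipse 1 sits at the origin, ellipse 2 at distance d along d_hat.
    c2 = d * d_hat
    return (inside(boundary(np.zeros(2), *e1), c2, *e2).any() or
            inside(boundary(c2, *e2), np.zeros(2), *e1).any())

def closest_approach(e1, e2, d_hat, tol=1e-10):
    # Bisect on the separation: the largest d at which the ellipses still
    # overlap is the distance of closest approach (external tangency).
    hi = 2.0 * (max(e1[:2]) + max(e2[:2]))
    lo = 1e-9 * hi   # strictly positive: coincident boundaries do not count as overlap
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if overlaps(mid, e1, e2, d_hat) else (lo, mid)
    return 0.5 * (lo + hi)

# Sanity check with two unit circles, where the answer is the sum of radii:
e1 = (1.0, 1.0, 0.0)            # (semiaxis a, semiaxis b, orientation phi)
e2 = (1.0, 1.0, 0.0)
print(closest_approach(e1, e2, np.array([1.0, 0.0])))  # ~2.0

The bisection relies only on the convexity of the ellipses (overlap is monotone in the separation), so the same idea extends to the ellipsoid case by applying it within the rotating plane described above.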
Distance of closest approach
[ "Physics", "Mathematics" ]
1,017
[ "Distance", "Physical quantities", "Quantity", "Size", "Space", "Spacetime", "Wikipedia categories named after physical quantities" ]
28,752,034
https://en.wikipedia.org/wiki/Telepathology
Telepathology is the practice of pathology at a distance. It uses telecommunications technology to facilitate the transfer of image-rich pathology data between distant locations for the purposes of diagnosis, education, and research. Performance of telepathology requires that a pathologist select the video images for analysis and render the diagnoses. The use of "television microscopy", the forerunner of telepathology, did not require that a pathologist have physical or virtual "hands-on" involvement in the selection of microscopic fields-of-view for analysis and diagnosis. An academic pathologist, Ronald S. Weinstein, M.D., coined the term "telepathology" in 1986. In a medical journal editorial, Weinstein outlined the actions that would be needed to create remote pathology diagnostic services. He and his collaborators published the first scientific paper on robotic telepathology. Weinstein was also granted the first U.S. patents for robotic telepathology systems and telepathology diagnostic networks. Weinstein is known to many as the "father of telepathology". In Norway, Eide and Nordrum implemented the first sustainable clinical telepathology service in 1989; this is still in operation decades later. A number of clinical telepathology services have benefited many thousands of patients in North America, Europe, and Asia. Telepathology has been successfully used for many applications, including the rendering of histopathology tissue diagnoses at a distance. Although digital pathology imaging, including virtual microscopy, is the mode of choice for telepathology services in developed countries, analog telepathology imaging is still used for patient services in some developing countries. Types of systems Telepathology systems are divided into three major types: static image-based systems, real-time systems, and virtual slide systems. Static image systems have the benefit of being the least expensive and most easily usable systems. They have the significant drawback of only being able to capture a selected subset of microscopic fields for off-site evaluation. Real-time robotic microscopy systems and virtual slides allow a consultant pathologist the opportunity to evaluate histopathology slides in their entirety, from a distance. With real-time systems, the consultant actively operates a robotically controlled motorized microscope located at a distant site—changing focus, illumination, magnification, and field of view—at will. Either an analog video camera or a digital video camera can be used for robotic microscopy. Another form of real-time microscopy involves utilizing a high-resolution video camera mounted on a pathology lab microscope to send live digital video of a slide to a large computer monitor at the pathologist's remote location via encrypted store-and-forward software. An echo-cancelling microphone at each end of the video conference allows the pathologist to communicate with the person moving the slide under the microscope. Virtual slide systems utilize automated digital slide scanners that create a digital image file of an entire glass slide (whole-slide image). This file is stored on a computer server and can be navigated at a distance, over the Internet, using a browser. Digital imaging is required for virtual microscopy. While real-time and virtual slide systems offer higher diagnostic accuracy when compared with static-image telepathology, there are drawbacks to each.
Real-time systems perform best on local area networks (LANs), but performance may suffer if they are employed during periods of high network traffic or use the Internet proper as a backbone. Expense is an issue with both real-time systems and virtual slide systems, as they can be costly. Virtual slide telepathology is emerging as the technology of choice for telepathology services. However, high-throughput virtual slide scanners (those producing one virtual slide or more per minute) are currently expensive. Also, virtual slide digital files are relatively large, often exceeding one gigabyte in size (a rough size estimate is sketched below). Storing and simultaneously retrieving large numbers of telepathology whole-slide image files can be cumbersome, introducing its own workflow challenges in the clinical laboratory. Types of telepathology platform Telepathology platforms that have adopted whole-slide imaging enable remote viewing that aids pathologists in two ways: by remote sharing of slides, and by uploading of images for expert consultations. Uses and benefits Telepathology is currently being used for a wide spectrum of clinical applications, including diagnosis of frozen section specimens, primary histopathology diagnoses, second opinion diagnoses, subspecialty pathology expert diagnoses, investigative and regulated preclinical toxicology studies, education, competency assessment, and research. Benefits of telepathology include providing immediate access to off-site pathologists for rapid frozen section diagnoses. Another benefit can be gaining direct access to subspecialty pathologists such as a renal pathologist, a neuropathologist, or a dermatopathologist for immediate consultations. Services by country Canada Canada Health Infoway is the organization responsible for the implementation of telepathology in Canada. Canada Health Infoway is a federal non-profit which provides funding for improving digital health infrastructure. Canada Health Infoway has targeted funding of $1.2 million CAD to the Telepathology Solution for the province of British Columbia. The system is designed to connect all pathologists within the province. The long-term expectations are improvements to patient care and safety through access to pathology expertise, improved timeliness of results, and quality of service. In Ontario, the University Health Network (UHN) hospitals are the primary drivers of the development of telepathology. The three northern Ontario communities of Timmins, Sault Ste. Marie and Kapuskasing have several community hospitals virtually linked to UHN pathologists via the Internet 24 hours a day. See also Anatomical pathology Cytopathology Digital pathology Juan Rosai, a surgical pathology professor working with telepathology Medical laboratory Microscopy Ronald S. Weinstein, an early innovator in telepathology Virtual microscope Virtual slide References Bibliography Maiolino P, De Vico G. Telepathology in veterinary diagnostic cytopathology. In: Kumar S, Dunn BE, editors. Telepathology. Berlin, Springer, 2009; 6:63-69. Schroeder JA. Ultrastructural telepathology: remote EM diagnostic via Internet. In: Kumar S, Dunn BE, editors. Telepathology. Berlin, Springer, 2009; 14:179-204. Sinard JH. Practical pathology informatics. New York, Springer. 2006:265-286.
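A back-of-the-envelope estimate of why whole-slide files reach the gigabyte range (the scan settings below are illustrative assumptions, not the specification of any particular scanner):

# A 20 mm x 15 mm tissue area scanned at 0.25 micrometre/pixel
# (a common "40x" setting), with 3 bytes per RGB pixel:
width_px  = 20e-3 / 0.25e-6     # 80,000 pixels
height_px = 15e-3 / 0.25e-6     # 60,000 pixels
raw_bytes = width_px * height_px * 3
print(raw_bytes / 1e9)          # ~14.4 GB uncompressed; on the order of
                                # 1 GB after typical ~10-20x JPEG compression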
External links Organizing Knowledge in a Semantic Web for Pathology Welcome to Digital Pathology at Brown Medical School Holycross Cancer Center (Poland, Kielce) Pathomorphology Department virtual slides Digital pathology: DICOM-conform draft, testbed, and first results OpenSlide - C library that provides a simple interface to read whole-slide images. Pathology Microscopy Telemedicine
Telepathology
[ "Chemistry", "Biology" ]
1,426
[ "Pathology", "Microscopy" ]
28,752,069
https://en.wikipedia.org/wiki/Saffman%E2%80%93Delbr%C3%BCck%20model
The Saffman–Delbrück model describes a lipid membrane as a thin layer of viscous fluid, surrounded by a less viscous bulk liquid. This picture was originally proposed to determine the diffusion coefficient of membrane proteins, but has also been used to describe the dynamics of fluid domains within lipid membranes. The Saffman–Delbrück formula is often applied to determine the size of an object embedded in a membrane from its observed diffusion coefficient, and is characterized by the weak logarithmic dependence of the diffusion constant on object radius. Origin In a three-dimensional highly viscous liquid, a spherical object of radius a has diffusion coefficient D = k_B T / (6π η a) by the well-known Stokes–Einstein relation. By contrast, the diffusion coefficient of a circular object embedded in a two-dimensional fluid diverges; this is Stokes' paradox. In a real lipid membrane, the diffusion coefficient may be limited by:
the size of the membrane
the inertia of the membrane (finite Reynolds number)
the effect of the liquid surrounding the membrane
Philip Saffman and Max Delbrück calculated the diffusion coefficient for these three cases, and showed that Case 3 was the relevant effect. Saffman–Delbrück formula The diffusion coefficient of a cylindrical inclusion of radius a in a membrane with thickness h and viscosity η_m, surrounded by bulk fluid with viscosity η_w, is: D_sd = [k_B T / (4π η_m h)] (ln(L_sd / a) − γ), where L_sd = h η_m / (2 η_w) is the Saffman–Delbrück length and γ ≈ 0.5772 is the Euler–Mascheroni constant. Typical values of L_sd are 0.1 to 10 micrometres. This result is an approximation applicable for radii a ≪ L_sd, which is appropriate for proteins (radii of a few nanometres), but not for micrometre-scale lipid domains. The Saffman–Delbrück formula predicts that diffusion coefficients depend only weakly on the size of the embedded object; for typical values of L_sd, changing a from 1 nm to 10 nm reduces the diffusion coefficient by only about 30% (a numerical sketch of the formula is given below). Beyond the Saffman–Delbrück length Hughes, Pailthorpe, and White extended the theory of Saffman and Delbrück to inclusions with any radius a; for a ≫ L_sd, the diffusion coefficient crosses over to an inverse dependence on the radius, with the drag dominated by the surrounding bulk fluid. A useful interpolation formula that produces the correct diffusion coefficients between these two limits exists, expressed in terms of the reduced radius ε = a/L_sd and a set of fitted numerical constants. Note that the original publication of this interpolation formula contains a typo in one of the constants; the value given in the published correction to that article should be used. Experimental studies Though the Saffman–Delbrück formula is commonly used to infer the sizes of nanometer-scale objects, recent controversial experiments on proteins have suggested that the diffusion coefficient's dependence on radius should be an inverse proportionality, D ∝ 1/a, instead of the logarithmic dependence. However, for larger objects (such as micrometre-scale lipid domains), the Saffman–Delbrück model (with the extensions above) is well established. Extending Saffman–Delbrück for Hydrodynamic Coupling of Proteins within Curved Lipid Bilayer Membranes The Saffman–Delbrück approach has also been extended in recent works for modeling hydrodynamic interactions between proteins embedded within curved lipid bilayer membranes, such as in vesicles and other structures. These works use related formulations to study the roles of the membrane hydrodynamic coupling and curvature in the collective drift-diffusion dynamics of proteins within bilayer membranes. Various models of the protein inclusions within curved membranes have been developed, including models based on series truncations, immersed boundary methods, and fluctuating hydrodynamics. References Biophysics Proteins Membrane biology
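As a quick numerical illustration of the formula, the sketch below evaluates the Saffman–Delbrück diffusion coefficient in Python; the membrane and solvent parameters are illustrative assumptions, not values from any particular experiment.

import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def saffman_delbruck_D(a, h, eta_m, eta_w, T=300.0):
    # Diffusion coefficient (m^2/s) of a cylindrical inclusion of radius a
    # in a membrane of thickness h and viscosity eta_m, surrounded by bulk
    # fluid of viscosity eta_w; valid only for a << L_sd.
    gamma = 0.5772156649015329            # Euler-Mascheroni constant
    L_sd = h * eta_m / (2.0 * eta_w)      # Saffman-Delbruck length
    return KB * T / (4.0 * np.pi * eta_m * h) * (np.log(L_sd / a) - gamma)

# Illustrative parameters: a 2 nm inclusion in a 4 nm thick membrane with
# eta_m = 0.1 Pa s, in water (eta_w = 1e-3 Pa s), giving L_sd = 200 nm.
print(saffman_delbruck_D(a=2e-9, h=4e-9, eta_m=0.1, eta_w=1e-3))
# ~3e-12 m^2/s, i.e. a few square micrometres per second

Evaluating the same function for a = 1 nm and a = 10 nm shows directly how weak the logarithmic size dependence is compared with the 1/a behaviour of the Stokes–Einstein relation.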
Saffman–Delbrück model
[ "Physics", "Chemistry", "Biology" ]
689
[ "Biomolecules by chemical classification", "Applied and interdisciplinary physics", "Membrane biology", "Biophysics", "Molecular biology", "Proteins" ]
28,752,673
https://en.wikipedia.org/wiki/TLA%2B
{{DISPLAYTITLE:TLA+}} TLA+ is a formal specification language developed by Leslie Lamport. It is used for designing, modelling, documentation, and verification of programs, especially concurrent systems and distributed systems. TLA+ is considered to be exhaustively-testable pseudocode, and its use is likened to drawing blueprints for software systems; TLA is an acronym for Temporal Logic of Actions. For design and documentation, TLA+ fulfills the same purpose as informal technical specifications. However, TLA+ specifications are written in a formal language of logic and mathematics, and the precision of specifications written in this language is intended to uncover design flaws before system implementation is underway. Since TLA+ specifications are written in a formal language, they are amenable to finite model checking. The model checker finds all possible system behaviours up to some number of execution steps, and examines them for violations of desired invariance properties such as safety and liveness. TLA+ specifications use basic set theory to define safety (bad things won't happen) and temporal logic to define liveness (good things eventually happen). TLA+ is also used to write machine-checked proofs of correctness both for algorithms and mathematical theorems. The proofs are written in a declarative, hierarchical style independent of any single theorem prover backend. Both formal and informal structured mathematical proofs can be written in TLA+; the language is similar to LaTeX, and tools exist to translate TLA+ specifications to LaTeX documents. TLA+ was introduced in 1999, following several decades of research into a verification method for concurrent systems. Ever since, a toolchain has been developed, including an IDE and a distributed model checker. The pseudocode-like language PlusCal was created in 2009; it transpiles to TLA+ and is useful for specifying sequential algorithms. TLA+2 was announced in 2014, expanding language support for proof constructs. The current TLA+ reference is The TLA+ Hyperbook by Leslie Lamport. History Modern temporal logic was developed by Arthur Prior in 1957, then called tense logic. Although Amir Pnueli was the first to seriously study the applications of temporal logic to computer science, Prior had speculated on its use a decade earlier, in 1967. Pnueli researched the use of temporal logic in specifying and reasoning about computer programs, introducing linear temporal logic in 1977. LTL became an important tool for analysis of concurrent programs, easily expressing properties such as mutual exclusion and freedom from deadlock. Concurrent with Pnueli's work on LTL, academics were working to generalize Hoare logic for verification of multiprocess programs. Leslie Lamport became interested in the problem after peer review found an error in a paper he submitted on mutual exclusion. Ed Ashcroft introduced invariance in his 1975 paper "Proving Assertions About Parallel Programs", which Lamport used to generalize Floyd's method in his 1977 paper "Proving Correctness of Multiprocess Programs". Lamport's paper also introduced safety and liveness as generalizations of partial correctness and termination, respectively. This method was used to verify the first concurrent garbage collection algorithm in a 1978 paper with Edsger Dijkstra. Lamport first encountered Pnueli's LTL during a 1978 seminar at Stanford organized by Susan Owicki.
According to Lamport, "I was sure that temporal logic was some kind of abstract nonsense that would never have any practical application, but it seemed like fun, so I attended." In 1980 he published "'Sometime' is Sometimes 'Not Never'", which became one of the most frequently-cited papers in the temporal logic literature. Lamport worked on writing temporal logic specifications during his time at SRI, but found the approach to be impractical. His search for a practical method of specification resulted in the 1983 paper "Specifying Concurrent Programming Modules", which introduced the idea of describing state transitions as boolean-valued functions of primed and unprimed variables. Work continued throughout the 1980s, and Lamport began publishing papers on the temporal logic of actions in 1990; however, it was not formally introduced until "The Temporal Logic of Actions" was published in 1994. TLA enabled the use of actions in temporal formulas, which according to Lamport "provides an elegant way to formalize and systematize all the reasoning used in concurrent system verification." TLA specifications mostly consisted of ordinary non-temporal mathematics, which Lamport found less cumbersome than a purely temporal specification. TLA provided a mathematical foundation to the specification language TLA+, introduced with the paper "Specifying Concurrent Systems with TLA+" in 1999. Later that same year, Yuan Yu wrote the TLC model checker for TLA+ specifications; TLC was used to find errors in the cache coherence protocol for a Compaq multiprocessor. Lamport published a full textbook on TLA+ in 2002, titled "Specifying Systems: The TLA+ Language and Tools for Hardware and Software Engineers". PlusCal was introduced in 2009, and the TLA+ proof system (TLAPS) in 2012. TLA+2 was announced in 2014, adding some additional language constructs as well as greatly increasing in-language support for the proof system. Lamport is engaged in creating an updated TLA+ reference, "The TLA+ Hyperbook". The incomplete work is available from his official website. Lamport is also creating The TLA+ Video Course, described therein as "a work in progress that consists of the beginning of a series of video lectures to teach programmers and software engineers how to write their own TLA+ specifications". Language TLA+ specifications are organized into modules. Modules can extend (import) other modules to use their functionality. Although the TLA+ standard is specified in typeset mathematical symbols, existing TLA+ tools use LaTeX-like symbol definitions in ASCII. TLA+ uses several terms which require definition:
State – an assignment of values to variables
Behaviour – a sequence of states
Step – a pair of successive states in a behavior
Stuttering step – a step during which variables are unchanged
Next-state relation – a relation describing how variables can change in any step
State function – an expression containing variables and constants that is not a next-state relation
State predicate – a Boolean-valued state function
Invariant – a state predicate true in all reachable states
Temporal formula – an expression containing statements in temporal logic
Safety TLA+ concerns itself with defining the set of all correct system behaviours.
For example, a one-bit clock ticking endlessly between 0 and 1 could be specified as follows:

VARIABLE clock
Init == clock \in {0, 1}
Tick == IF clock = 0 THEN clock' = 1 ELSE clock' = 0
Spec == Init /\ [][Tick]_<<clock>>

The next-state relation Tick sets clock′ (the value of clock in the next state) to 1 if clock is 0, and 0 if clock is 1. The state predicate Init is true if the value of clock is either 0 or 1. Spec is a temporal formula asserting that all behaviours of the one-bit clock must initially satisfy Init and have all steps either match Tick or be stuttering steps. Two such behaviours are:

0 -> 1 -> 0 -> 1 -> 0 -> ...
1 -> 0 -> 1 -> 0 -> 1 -> ...

The safety properties of the one-bit clock – the set of reachable system states – are adequately described by the spec.

Liveness The above spec disallows strange states for the one-bit clock, but does not say the clock will ever tick. For example, the following perpetually-stuttering behaviours are accepted:

0 -> 0 -> 0 -> 0 -> 0 -> ...
1 -> 1 -> 1 -> 1 -> 1 -> ...

A clock which does not tick is not useful, so these behaviours should be disallowed. One solution is to disable stuttering, but TLA+ requires stuttering always be enabled; a stuttering step represents a change to some part of the system not described in the spec, and is useful for refinement. To ensure the clock must eventually tick, weak fairness is asserted for Tick:

Spec == Init /\ [][Tick]_<<clock>> /\ WF_<<clock>>(Tick)

Weak fairness over an action means if that action is continuously enabled, it must eventually be taken. With weak fairness on Tick, only a finite number of stuttering steps are permitted between ticks. This temporal logical statement about Tick is called a liveness assertion. In general, a liveness assertion should be machine-closed: it shouldn't constrain the set of reachable states, only the set of possible behaviours. Most specifications do not require assertion of liveness properties. Safety properties suffice both for model checking and guidance in system implementation.

Operators TLA+ is based on ZF, so operations on variables involve set manipulation. The language includes set membership, union, intersection, difference, powerset, and subset operators. First-order logic operators such as ∧, ∨, ¬, ⇒, and ≡ are also included, as well as the universal and existential quantifiers ∀ and ∃. Hilbert's ε is provided as the CHOOSE operator, which uniquely selects an arbitrary set element. Arithmetic operators over reals, integers, and natural numbers are available from the standard modules. Temporal logic operators are built into TLA+. Temporal formulas use □P to mean P is always true, and ◇P to mean P is eventually true. The operators are combined into □◇P to mean P is true infinitely often, or ◇□P to mean eventually P will always be true. Other temporal operators include weak and strong fairness. Weak fairness WF_e(A) means if action A is enabled continuously (i.e. without interruptions), it must eventually be taken. Strong fairness SF_e(A) means if action A is enabled continually (repeatedly, with or without interruptions), it must eventually be taken. Temporal existential and universal quantification are included in TLA+, although without support from the tools. User-defined operators are similar to macros. Operators differ from functions in that their domain need not be a set: for example, the set membership operator has the category of sets as its domain, which is not a valid set in ZFC (since its existence leads to Russell's paradox).
Recursive and anonymous user-defined operators were added in TLA+2. Data structures The foundational data structure of TLA+ is the set. Sets are either explicitly enumerated or constructed from other sets using operators or with {x \in S : p} where p is some condition on x, or {e : x \in S} where e is some function of x. The unique empty set is represented as {}. Functions in TLA+ assign a value to each element in their domain, a set. [S -> T] is the set of all functions with f[x] in T, for each x in the domain set S. For example, the TLA+ function Double[x \in Nat] == x*2 is an element of the set [Nat -> Nat] so Double \in [Nat -> Nat] is a true statement in TLA+. Functions are also defined with [x \in S |-> e] for some expression e, or by modifying an existing function [f EXCEPT ![v1] = v2]. Records are a type of function in TLA+. The record [name |-> "John", age |-> 35] is a record with fields name and age, accessed with r.name and r.age, and belonging to the set of records [name : String, age : Nat]. Tuples are included in TLA+. They are explicitly defined with <<e1,e2,e3>> or constructed with operators from the standard Sequences module. Sets of tuples are defined by Cartesian product; for example, the set of all pairs of natural numbers is defined Nat \X Nat. Standard modules TLA+ has a set of standard modules containing common operators. They are distributed with the syntactic analyzer. The TLC model checker uses Java implementations for improved performance. FiniteSets: Module for working with finite sets. Provides IsFiniteSet(S) and Cardinality(S) operators. Sequences: Defines operators on tuples such as Len(S), Head(S), Tail(S), Append(S, E), concatenation, and filter. Bags: Module for working with multisets. Provides primitive set operation analogues and duplicate counting. Naturals: Defines the Natural numbers along with inequality and arithmetic operators. Integers: Defines the Integers. Reals: Defines the Real numbers along with division and infinity. RealTime: Provides definitions useful in real-time system specifications. TLC: Provides utility functions for model-checked specifications, such as logging and assertions. Standard modules are imported with the EXTENDS or INSTANCE statements. Tools IDE An integrated development environment is implemented on top of Eclipse. It includes an editor with error and syntax highlighting, plus a GUI front-end to several other TLA+ tools: The SANY syntactic analyzer, which parses and checks the spec for syntax errors. The LaTeX translator, to generate pretty-printed specs. The PlusCal translator. The TLC model checker. The TLAPS proof system. The IDE is distributed in The TLA Toolbox. Model checker The TLC model checker builds a finite state model of TLA+ specifications for checking invariance properties. TLC generates a set of initial states satisfying the spec, then performs a breadth-first search over all defined state transitions. Execution stops when all state transitions lead to states which have already been discovered. If TLC discovers a state which violates a system invariant, it halts and provides a state trace path to the offending state. TLC provides a method of declaring model symmetries to defend against combinatorial explosion. It also parallelizes the state exploration step, and can run in distributed mode to spread the workload across a large number of computers. As an alternative to exhaustive breadth-first search, TLC can use depth-first search or generate random behaviours. 
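As a concrete illustration of the breadth-first exploration described above, here is a toy explicit-state checker in Python, applied to the one-bit clock from earlier in the article. It is a sketch of the general idea only, not of TLC's actual implementation; all names are illustrative.

from collections import deque

def model_check(init, next_states, invariant):
    # Breadth-first exploration of all reachable states, in the spirit
    # of TLC: `init` enumerates initial states, `next_states` enumerates
    # successors, and `invariant` must hold in every reachable state.
    parent = {}
    frontier = deque()
    for s in init():
        if s not in parent:
            parent[s] = None
            frontier.append(s)
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            # Reconstruct a state trace from an initial state to the
            # violating state, as TLC does on an invariant violation.
            trace = [s]
            while parent[trace[-1]] is not None:
                trace.append(parent[trace[-1]])
            return list(reversed(trace))
        for t in next_states(s):
            if t not in parent:
                parent[t] = s
                frontier.append(t)
    return None  # the invariant holds in all reachable states

# The one-bit clock: Init == clock \in {0, 1}; Tick flips the bit.
print(model_check(
    init=lambda: [0, 1],
    next_states=lambda clock: [1 - clock],
    invariant=lambda clock: clock in (0, 1),
))  # prints None: the type invariant holds

Exploration stops when every transition leads to an already-discovered state, which is exactly why the model must be finite and enumerable.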
TLC operates on a subset of TLA+; the model must be finite and enumerable, and some temporal operators are not supported. In distributed mode TLC cannot check liveness properties, nor check random or depth-first behaviours. TLC is available as a command line tool or bundled with the TLA toolbox. Proof system The TLA+ Proof System, or TLAPS, mechanically checks proofs written in TLA+. It was developed at the Microsoft Research-INRIA Joint Centre to prove correctness of concurrent and distributed algorithms. The proof language is designed to be independent of any particular theorem prover; proofs are written in a declarative style, and transformed into individual obligations which are sent to back-end provers. The primary back-end provers are Isabelle and Zenon, with fallback to SMT solvers CVC3, Yices, and Z3. TLAPS proofs are hierarchically structured, easing refactoring and enabling non-linear development: work can begin on later steps before all prior steps are verified, and difficult steps are decomposed into smaller sub-steps. TLAPS works well with TLC, as the model checker quickly finds small errors before verification is begun. In turn, TLAPS can prove system properties which are beyond the capabilities of finite model checking. TLAPS does not currently support reasoning with real numbers, nor most temporal operators. Isabelle and Zenon generally cannot prove arithmetic proof obligations, requiring use of the SMT solvers. TLAPS has been used to prove correctness of Byzantine Paxos, the Memoir security architecture, components of the Pastry distributed hash table, and the Spire consensus algorithm. It is distributed separately from the rest of the TLA+ tools and is free software, distributed under the BSD license. TLA+2 greatly expanded language support for proof constructs. Industry use At Microsoft, a critical bug was discovered in the Xbox 360 memory module during the process of writing a specification in TLA+. TLA+ was used to write formal proofs of correctness for Byzantine Paxos and components of the Pastry distributed hash table. Amazon Web Services has used TLA+ since 2011. TLA+ model checking uncovered bugs in DynamoDB, S3, EBS, and an internal distributed lock manager; some bugs required state traces of 35 steps. Model checking was also used to verify aggressive optimizations. In addition, TLA+ specifications were found to hold value as documentation and design aids. Microsoft Azure used TLA+ to design Cosmos DB, a globally-distributed database with five different consistency models. Altreonic NV used TLA+ to model check OpenComRTOS. Examples A key-value store with snapshot isolation:

--------------------------- MODULE KeyValueStore ---------------------------
CONSTANTS   Key,            \* The set of all keys.
            Val,            \* The set of all values.
            TxId            \* The set of all transaction IDs.
VARIABLES   store,          \* A data store mapping keys to values.
            tx,             \* The set of open snapshot transactions.
            snapshotStore,  \* Snapshots of the store for each transaction.
            written,        \* A log of writes performed within each transaction.
            missed          \* The set of writes invisible to each transaction.
----------------------------------------------------------------------------
NoVal ==    \* Choose something to represent the absence of a value.
    CHOOSE v : v \notin Val

Store ==    \* The set of all key-value stores.
    [Key -> Val \cup {NoVal}]

Init == \* The initial predicate.
    /\ store = [k \in Key |-> NoVal]        \* All store values are initially NoVal.
    /\ tx = {}                              \* The set of open transactions is initially empty.
    /\ snapshotStore =                      \* All snapshotStore values are initially NoVal.
        [t \in TxId |-> [k \in Key |-> NoVal]]
    /\ written = [t \in TxId |-> {}]        \* All write logs are initially empty.
    /\ missed = [t \in TxId |-> {}]         \* All missed writes are initially empty.

TypeInvariant ==    \* The type invariant.
    /\ store \in Store
    /\ tx \subseteq TxId
    /\ snapshotStore \in [TxId -> Store]
    /\ written \in [TxId -> SUBSET Key]
    /\ missed \in [TxId -> SUBSET Key]

TxLifecycle ==
    /\ \A t \in tx :    \* If store != snapshot & we haven't written it, we must have missed a write.
        \A k \in Key : (store[k] /= snapshotStore[t][k] /\ k \notin written[t]) => k \in missed[t]
    /\ \A t \in TxId \ tx : \* Checks transactions are cleaned up after disposal.
        /\ \A k \in Key : snapshotStore[t][k] = NoVal
        /\ written[t] = {}
        /\ missed[t] = {}

OpenTx(t) ==    \* Open a new transaction.
    /\ t \notin tx
    /\ tx' = tx \cup {t}
    /\ snapshotStore' = [snapshotStore EXCEPT ![t] = store]
    /\ UNCHANGED <<written, missed, store>>

Add(t, k, v) == \* Using transaction t, add value v to the store under key k.
    /\ t \in tx
    /\ snapshotStore[t][k] = NoVal
    /\ snapshotStore' = [snapshotStore EXCEPT ![t][k] = v]
    /\ written' = [written EXCEPT ![t] = @ \cup {k}]
    /\ UNCHANGED <<tx, missed, store>>

Update(t, k, v) ==  \* Using transaction t, update the value associated with key k to v.
    /\ t \in tx
    /\ snapshotStore[t][k] \notin {NoVal, v}
    /\ snapshotStore' = [snapshotStore EXCEPT ![t][k] = v]
    /\ written' = [written EXCEPT ![t] = @ \cup {k}]
    /\ UNCHANGED <<tx, missed, store>>

Remove(t, k) == \* Using transaction t, remove key k from the store.
    /\ t \in tx
    /\ snapshotStore[t][k] /= NoVal
    /\ snapshotStore' = [snapshotStore EXCEPT ![t][k] = NoVal]
    /\ written' = [written EXCEPT ![t] = @ \cup {k}]
    /\ UNCHANGED <<tx, missed, store>>

RollbackTx(t) ==    \* Close the transaction without merging writes into store.
    /\ t \in tx
    /\ tx' = tx \ {t}
    /\ snapshotStore' = [snapshotStore EXCEPT ![t] = [k \in Key |-> NoVal]]
    /\ written' = [written EXCEPT ![t] = {}]
    /\ missed' = [missed EXCEPT ![t] = {}]
    /\ UNCHANGED store

CloseTx(t) ==   \* Close transaction t, merging writes into store.
    /\ t \in tx
    /\ missed[t] \cap written[t] = {}   \* Detection of write-write conflicts.
    /\ store' =                         \* Merge snapshotStore writes into store.
        [k \in Key |-> IF k \in written[t] THEN snapshotStore[t][k] ELSE store[k]]
    /\ tx' = tx \ {t}
    /\ missed' =    \* Update the missed writes for other open transactions.
        [otherTx \in TxId |-> IF otherTx \in tx' THEN missed[otherTx] \cup written[t] ELSE {}]
    /\ snapshotStore' = [snapshotStore EXCEPT ![t] = [k \in Key |-> NoVal]]
    /\ written' = [written EXCEPT ![t] = {}]

Next == \* The next-state relation.
    \/ \E t \in TxId : OpenTx(t)
    \/ \E t \in tx : \E k \in Key : \E v \in Val : Add(t, k, v)
    \/ \E t \in tx : \E k \in Key : \E v \in Val : Update(t, k, v)
    \/ \E t \in tx : \E k \in Key : Remove(t, k)
    \/ \E t \in tx : RollbackTx(t)
    \/ \E t \in tx : CloseTx(t)

Spec == \* Initialize state with Init and transition with Next.
    Init /\ [][Next]_<<store, tx, snapshotStore, written, missed>>
----------------------------------------------------------------------------
THEOREM Spec => [](TypeInvariant /\ TxLifecycle)
=============================================================================

See also Communicating sequential processes Alloy (specification language) B-Method Computation tree logic PlusCal Temporal logic Temporal logic of actions Z notation References Formal methods Formal methods tools Software using the BSD license Specification languages Formal specification languages Concurrency (computer science)
TLA+
[ "Mathematics", "Engineering" ]
5,312
[ "Specification languages", "Software engineering", "Formal methods tools", "Formal methods", "Mathematical software" ]
27,453,461
https://en.wikipedia.org/wiki/Integrated%20information%20theory
Integrated information theory (IIT) proposes a mathematical model for the consciousness of a system. It comprises a framework ultimately intended to explain why some physical systems (such as human brains) are conscious, and to be capable of providing a concrete inference about whether any physical system is conscious, to what degree, and what particular experience it has; why they feel the particular way they do in particular states (e.g. why our visual field appears extended when we gaze out at the night sky); and what it would take for other physical systems to be conscious (Are other animals conscious? Might the whole universe be?). According to IIT, a system's consciousness (what it is like subjectively) is conjectured to be identical to its causal properties (what it is like objectively). Therefore, it should be possible to account for the conscious experience of a physical system by unfolding its complete causal powers. IIT was proposed by neuroscientist Giulio Tononi in 2004. Despite significant interest, IIT remains controversial and has been widely criticized, including the claim that it is unfalsifiable pseudoscience. Overview Relationship to the "hard problem of consciousness" David Chalmers has argued that any attempt to explain consciousness in purely physical terms (i.e., to start with the laws of physics as they are currently formulated and derive the necessary and inevitable existence of consciousness) eventually runs into the so-called "hard problem". Rather than try to start from physical principles and arrive at consciousness, IIT "starts with consciousness" (accepts the existence of our own consciousness as certain) and reasons about the properties that a postulated physical substrate would need to have in order to account for it. The ability to perform this jump from phenomenology to mechanism rests on IIT's assumption that if the formal properties of a conscious experience can be fully accounted for by an underlying physical system, then the properties of the physical system must be constrained by the properties of the experience. The limitations on the physical system for consciousness to exist are unknown, and consciousness may exist on a spectrum, as implied by studies involving split-brain patients and conscious patients with large amounts of brain matter missing. Specifically, IIT moves from phenomenology to mechanism by attempting to identify the essential properties of conscious experience (dubbed "axioms") and, from there, the essential properties of conscious physical systems (dubbed "postulates"). Extensions The calculation of even a modestly-sized system's Φ is often computationally intractable, so efforts have been made to develop heuristic or proxy measures of integrated information. For example, Masafumi Oizumi and colleagues have developed both Φ* and geometric integrated information, Φ_G, which are practical approximations for integrated information. These are related to proxy measures developed earlier by Anil Seth and Adam Barrett. However, none of these proxy measures have a mathematically proven relationship to the actual value of Φ, which complicates the interpretation of analyses that use them. They can give qualitatively different results even for very small systems. In 2021, Angus Leung and colleagues published a direct application of IIT's mathematical formalism to neural data. To circumvent the computational challenges associated with larger datasets, the authors focused on neuronal population activity in the fly.
The study showed that Φ can readily be computed for smaller sets of neural data. Moreover, matching IIT's predictions, Φ was significantly decreased when the animals underwent general anesthesia. A significant computational challenge in calculating integrated information is finding the minimum information partition of a neural system, which requires iterating through all possible network partitions. To solve this problem, Daniel Toker and Friedrich T. Sommer have shown that the spectral decomposition of the correlation matrix of a system's dynamics is a quick and robust proxy for the minimum information partition. Related experimental work While the algorithm for assessing a system's Φ and conceptual structure is relatively straightforward, its high time complexity makes it computationally intractable for many systems of interest. Heuristics and approximations can sometimes be used to provide ballpark estimates of a complex system's integrated information, but precise calculations are often impossible. These computational challenges, combined with the already difficult task of reliably and accurately assessing consciousness under experimental conditions, make testing many of the theory's predictions difficult. Despite these challenges, researchers have attempted to use measures of information integration and differentiation to assess levels of consciousness in a variety of subjects. For instance, a recent study using a less computationally-intensive proxy for Φ was able to reliably discriminate between varying levels of consciousness in wakeful, sleeping (dreaming vs. non-dreaming), anesthetized, and comatose (vegetative vs. minimally-conscious vs. locked-in) individuals. IIT also makes several predictions which fit well with existing experimental evidence, and can be used to explain some counterintuitive findings in consciousness research. For example, IIT can be used to explain why some brain regions, such as the cerebellum, do not appear to contribute to consciousness, despite their size and/or functional importance. Reception Integrated information theory has received both broad criticism and support. Support Neuroscientist Christof Koch, who has helped to develop later versions of the theory, has called IIT "the only really promising fundamental theory of consciousness". Neuroscientist and consciousness researcher Anil Seth is supportive of the theory, with some caveats, claiming that "conscious experiences are highly informative and always integrated", and that "One thing that immediately follows from [IIT] is that you have a nice post hoc explanation for certain things we know about consciousness." But he also claims that "the parts of IIT that I find less promising are where it claims that integrated information actually is consciousness — that there's an identity between the two", and has criticized the panpsychist extrapolations of the theory. Philosopher David Chalmers, famous for the idea of the hard problem of consciousness, has expressed some enthusiasm about IIT. According to Chalmers, IIT is a development in the right direction, whether or not it is correct. Max Tegmark has tried to address the problem of the computational complexity behind the calculations. According to Tegmark, "the integration measure proposed by IIT is computationally infeasible to evaluate for large systems, growing super-exponentially with the system's information content." As a result, Φ can only be approximated in general. However, different ways of approximating Φ provide radically different results.
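To illustrate why the minimum-information-partition search is so expensive, the sketch below brute-forces all bipartitions of a tiny system in Python, scoring each with plain mutual information. This is a toy stand-in for IIT's actual measure, not a computation of Φ as the theory defines it; all names are illustrative.

import itertools
import numpy as np

def mutual_information(p, part):
    # Mutual information (bits) between the variables in `part` and the
    # rest, given a joint distribution p over n binary variables
    # (p has shape (2,) * n and sums to 1).
    n = p.ndim
    rest = tuple(i for i in range(n) if i not in part)
    pa = p.sum(axis=rest)          # marginal of the part
    pb = p.sum(axis=tuple(part))   # marginal of the complement
    mi = 0.0
    for idx in np.ndindex(*p.shape):
        if p[idx] > 0:
            ia = tuple(idx[i] for i in part)
            ib = tuple(idx[i] for i in rest)
            mi += p[idx] * np.log2(p[idx] / (pa[ia] * pb[ib]))
    return mi

def minimum_information_bipartition(p):
    # Exhaustive search over bipartitions: the step whose cost grows
    # super-exponentially with the number of units in the system.
    n = p.ndim
    best = (None, np.inf)
    for k in range(1, n // 2 + 1):
        for part in itertools.combinations(range(n), k):
            mi = mutual_information(p, part)
            if mi < best[1]:
                best = (part, mi)
    return best

# Three binary units: units 0 and 1 are perfectly correlated, unit 2 is
# independent, so the cheapest "cut" severs unit 2 with zero information.
p = np.zeros((2, 2, 2))
for a in (0, 1):
    for c in (0, 1):
        p[a, a, c] = 0.25
print(minimum_information_bipartition(p))  # ((2,), 0.0)

Even in this toy version, the number of candidate partitions explodes combinatorially with system size, which is the motivation for the spectral and other proxy methods mentioned above.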
Other works have shown that Φ can be computed in some large mean-field neural network models, although some assumptions of the theory have to be revised to capture phase transitions in these large systems. In 2019, the Templeton Foundation announced funding in excess of $6,000,000 to test opposing empirical predictions of IIT and a rival theory (Global Neuronal Workspace Theory, GNWT). The originators of both theories signed off on experimental protocols and data analyses, as well as the exact conditions under which each theory would count as having correctly predicted the outcome or not. Initial results were revealed in June 2023. None of GNWT's predictions passed the agreed pre-registered threshold, while two out of three of IIT's predictions did. Criticism Influential philosopher John Searle has given a critique of the theory, saying "The theory implies panpsychism" and "The problem with panpsychism is not that it is false; it does not get up to the level of being false. It is strictly speaking meaningless because no clear notion has been given to the claim." However, whether or not a theory has panpsychist implications (that all or most of what exists physically must be, be part of something that is, or be composed of parts that are, conscious) has no bearing on the scientific validity of the theory. Searle's take has also been countered by other philosophers, for misunderstanding and misrepresenting a theory that is actually resonant with his own ideas. Theoretical computer scientist Scott Aaronson has criticized IIT by demonstrating through its own formulation that an inactive series of logic gates, arranged in the correct way, would not only be conscious but be "unboundedly more conscious than humans are." Tononi himself agrees with the assessment and argues that according to IIT, an even simpler arrangement of inactive logic gates, if large enough, would also be conscious. However, he further argues that this is a strength of IIT rather than a weakness, because that is exactly the sort of cytoarchitecture followed by large portions of the cerebral cortex, especially at the back of the brain, which is the most likely neuroanatomical correlate of consciousness according to some reviews. Philosopher Tim Bayne has criticized the axiomatic foundations of the theory. He concludes that "the so-called 'axioms' that Tononi et al. appeal to fail to qualify as genuine axioms". IIT as a scientific theory of consciousness has been criticized in the scientific literature as being, by its own definitions, only "either false or unscientific". IIT has also been denounced by other members of the consciousness field as requiring "an unscientific leap of faith", though it is not clear that this is in fact the case if the theory is properly understood. The theory has also been derided for failing to answer the basic questions required of a theory of consciousness. Philosopher Adam Pautz says "As long as proponents of IIT do not address these questions, they have not put a clear theory on the table that can be evaluated as true or false." Neuroscientist Michael Graziano, proponent of the competing attention schema theory, rejects IIT as pseudoscience. He claims IIT is a "magicalist theory" that has "no chance of scientific success or understanding". Similarly, IIT has been criticized on the grounds that its claims are "not scientifically established or testable at the moment".
However, while it is true that the complete analysis suggested by IIT cannot be completed at the moment for human brains, IIT has already been applied to models of visual cortex to explain why visual space feels the way it does. Neuroscientists Björn Merker and David Rudrauf and philosopher Kenneth Williford co-authored a paper criticizing IIT on several grounds. Firstly, because it has not been demonstrated that all systems which do in fact combine integration and differentiation in the formal IIT sense are conscious, high levels of integration and differentiation of information might provide necessary conditions for consciousness, but those combinations of attributes do not by themselves amount to sufficient conditions for it. Secondly, they argue that the measure, Φ, reflects efficiency of global information transfer rather than level of consciousness, and that the correlation of Φ with level of consciousness through different states of wakefulness (e.g. awake, dreaming and dreamless sleep, anesthesia, seizures, and coma) actually reflects the level of efficient network interactions performed for cortical engagement. Hence Φ would reflect network efficiency rather than consciousness, consciousness being only one of the functions served by cortical network efficiency. A letter published on 15 September 2023 in the preprint repository PsyArXiv and signed by 124 scholars asserted that until IIT is empirically testable, it should be labeled pseudoscience. A number of researchers defended the theory in response. Regarding this letter, IIT, and what he considers a similarly unscientific theory, assembly theory (AT), the University of Cambridge and University of Oxford computer scientist Hector Zenil criticized the lack of correspondence between the methods and the theory in some of the IIT research papers, as well as the surrounding media frenzy. Zenil criticized both the shallowness and the misleading nature of the media coverage, including in apparently respected journals such as Nature and Science. He also criticized testing methods and evidence used by IIT proponents, noting that one test amounted to simply applying LZW compression to measure entropy rather than to indicate consciousness as proponents claimed. An anonymized public survey invited all authors of peer-reviewed papers published between 2013 and 2023 found by a query of Web of Science using "consciousness AND theor*". Of the total of 60 respondents, 31% partially agreed with the letter, 8% "fully" agreed, and 20% did "not at all" agree with the letter. See also Causality Consciousness Global workspace theory Hard problem of consciousness Mind–body problem Neural correlates of consciousness Phenomenology (philosophy) Phenomenology (psychology) Philosophy of mind Qualia Sentience References External links Related papers Integrated Information Theory: An Updated Account (2012) (First presentation of IIT 3.0) Websites IIT-wiki: An online learning resource aimed at teaching the foundations of IIT; includes texts, slideshows, interactive coding exercises, and sections for discussion and asking questions. integratedinformationtheory.org: a (somewhat out-of-date) hub for sources about IIT; features a graphical user interface to an old version of PyPhi. Software PyPhi: an open-source Python package for calculating integrated information.
Graphical user interface Documentation Books The Feeling of Life Itself: Why Consciousness is Widespread but Can't Be Computed by Christof Koch (2019) Phi: A Voyage from the Brain to the Soul by Giulio Tononi (2012) News articles New Scientist (2019): How does consciousness work? A radical theory has mind-blowing answers Nautilus (2017): Is Matter Conscious? Aeon (2016): Consciousness creep MIT Technology Review (2014): What It Will Take for Computers to Be Conscious Wired (2013): A Neuroscientist's Radical Theory of How Networks Become Conscious The New Yorker (2013): How Much Consciousness Does an iPhone Have? New York Times (2010): Sizing Up Consciousness by Its Bits Scientific American (2009): A "Complex" Theory of Consciousness IEEE Spectrum (2008): A Bit of Theory: Consciousness as Integrated Information Theory Talks Christof Koch (2014): The Integrated Information Theory of Consciousness David Chalmers (2014): How do you explain consciousness? Computational neuroscience Consciousness Information theory Panpsychism
Integrated information theory
[ "Mathematics", "Technology", "Engineering" ]
2,915
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
27,456,094
https://en.wikipedia.org/wiki/List%20of%20restriction%20enzyme%20cutting%20sites%3A%20A
This article contains a list of restriction enzymes whose names start with A and have a clearly defined cutting site. The following information is given for each enzyme: Name of Restriction Enzyme: Accepted name of the molecule, according to the internationally adopted nomenclature, and bibliographical references. Note: When alphabetizing, enzymes are first ordered alphabetically by their acronyms (everything before the roman numeral); then enzymes of a given acronym are ordered by their roman numerals, treating each numeral as a number and not as a string of letters. This keeps the entries ordered hierarchically while remaining essentially alphabetical (a sort key implementing this rule is sketched below). (Further reading: see the section "Nomenclature" in the article "Restriction enzyme".) PDB code: Code used to identify the structure of a protein in the PDB database of protein structures. The 3D atomic structure of a protein provides highly valuable information for understanding the intimate details of its mechanism of action. REBASE Number: Number used to identify restriction enzymes in the REBASE restriction enzyme database. This database includes important information about the enzyme, such as its recognition sequence, source, and isoschizomers, as well as other data, such as the commercial suppliers of the enzyme. Source: Organism that naturally produces the enzyme. Recognition sequence: Sequence of DNA recognized by the enzyme and to which it specifically binds. Cut: Displays the cut site, the cutting pattern, and the products of the cut. The recognition sequence and the cut site usually match, but sometimes the cut site can be dozens of nucleotides away from the recognition site. Isoschizomers and neoschizomers: An isoschizomer is a restriction enzyme that recognizes the same sequence as another. A neoschizomer is a special type of isoschizomer that recognizes the same sequence as another, but cuts in a different manner. At most the 8–10 most common isoschizomers are indicated for every enzyme, but there may be many more. Neoschizomers are shown in bold and green color font (e.g., BamHI). When "None as of [date]" is indicated, that means that there were no registered isoschizomers in the databases on that date with a clearly defined cutting site. Isoschizomers indicated in white font on a grey background correspond to enzymes not included in the current lists, as in this example of an unlisted enzyme: Abc123I Whole list navigation Restriction enzymes A § An HF version of this enzyme is available Notes Biotechnology Restriction enzyme cutting sites Restriction enzymes
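A minimal Python sketch of the sort key described in the note above (it assumes, for simplicity, that every name ends in a roman numeral; the enzyme names in the demo are partly hypothetical):

import re

ROMAN = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(numeral):
    # Standard subtractive roman-numeral parsing: IX -> 9, XI -> 11.
    total = 0
    for i, ch in enumerate(numeral):
        v = ROMAN[ch]
        nxt = ROMAN.get(numeral[i + 1]) if i + 1 < len(numeral) else 0
        total += -v if nxt and nxt > v else v
    return total

def enzyme_sort_key(name):
    # The acronym (everything before the trailing roman numeral) sorts
    # alphabetically; the numeral itself sorts by its numeric value.
    acronym, numeral = re.match(r'^(.+?)([IVXLCDM]+)$', name).groups()
    return (acronym, roman_to_int(numeral))

# Plain string sorting would put the hypothetical AbcIX before AbcV;
# numeral-aware sorting orders them 5 < 9 as intended.
print(sorted(['AbcIX', 'AbcV', 'AatII', 'AanI'], key=enzyme_sort_key))
# ['AanI', 'AatII', 'AbcV', 'AbcIX']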
List of restriction enzyme cutting sites: A
[ "Chemistry", "Biology" ]
517
[ "Genetics techniques", "Molecular-biology-related lists", "Biotechnology", "nan", "Molecular biology", "Restriction enzymes" ]
27,456,331
https://en.wikipedia.org/wiki/Sheet%20metal%20forming%20analysis
For sheet metal forming analysis within the metal forming process, a successful technique requires a non-contact optical 3D deformation measuring system. The system analyzes, calculates, and documents deformations of sheet metal parts, for example. It provides the 3D coordinates of the component's surface as well as the distribution of major and minor strain on the surface and the material thickness reduction. In the forming limit diagram, the measured deformations are compared to the material characteristics. The system supports optimization processes in sheet metal forming by means of: Fast detection of critical deformation areas Solving complex forming problems Verification of numerical simulations Verification of FE models Creation of forming limit curves (FLC) Comparison of measured deformations to the material characteristics by means of a forming limit diagram. The optical forming analysis system provides for precise and fast measurement of small and large components using a high scanning density. The forming analysis system operates independently of the material. It can analyze components made from flat blanks, tubes, or other components manufactured by an internal high-pressure forming process (IHPF, hydroforming). Functional principle explained by means of a standard measuring project The forming analysis system compares the 3D positions of measuring points in a flat and in a deformed state. Prior to the deformation, a regular point pattern is applied to the surface of the measuring object. For measuring objects which undergo high friction during the forming process, the measuring points are applied, for example, with the help of electrolytic methods. After the forming process of the measuring object, a camera (in online or stand-alone operation) records the measuring points in several images from different views. The forming analysis system works with two point types, coded and uncoded. The 3D computation of the measuring points is done using photogrammetric methods. For the automatic spatial orientation of the individual images or views, coded points are positioned close to or on the measuring object. The basic idea of photogrammetry is to look at points (coded and uncoded) from different directions and to calculate the 3D coordinates of these points from the images or point rays thus obtained. The points visible in an image have a fixed relation to each other. Therefore, by means of images made from other angles of view, it is possible to calculate the camera location using this point relation. During the acquisition of an image set, the goal is to record points from multiple different directions that show the largest possible angles to each other. It is the task of the forming analysis software to precisely find ellipses (the perspective views of the circular point markers) in all images of the image set and to determine their 3D orientation. The software interprets the images and generates 3D measuring data. In order to compute the strain, the flat state is compared to the deformed state. In a standard measuring project, the flat state, the strain reference, is not captured optically but results from the theoretical point distance defined in the project parameters. As a default, the forming analysis system presumes an exactly regular initial pattern which lies in one plane and for which the point distance is known. This is called the "virtual reference stage" and is marked as Stage 0 in italic letters in the software.
All strain values refer to the adjusted computation parameter "point distance". The software is also capable of analyzing several static deformation states (stages) within one project, where any deformation stage can be set as the strain reference at any time. This procedure may be used, for example, for the deformation analysis of tubes. To allow for a full-field view of the strain, the software changes to the so-called grid mode. This means that, based on the center points of the measuring points, a grid surface is created. Each grid line intersection point represents a 3D measuring point. The full-field color representation of the strain results from the 3D positions of these grid line intersection points (a minimal sketch of this computation is given below). References https://www.researchgate.net/publication/321168677_Investigation_of_Forming_Limit_Curves_of_Various_Sheet_Materials_Using_Hydraulic_Bulge_Testing_With_Analytical_Experimental_and_FEA_Techniques External links FEA of Sheet Metal Photogrammetry Metal forming Computer-aided engineering
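The strain computation described above can be sketched as follows in Python. This simplified version takes only the stretch of each grid direction relative to the nominal point distance and ignores shear, which a full local deformation-gradient fit would include; all names and values are illustrative.

import numpy as np

def grid_strains(points, d0):
    # points: (rows, cols, 3) array of measured 3D grid intersection points;
    # d0: the nominal point distance of the flat (reference) pattern.
    # Returns true (logarithmic) strains per grid cell along the two grid
    # directions, reported as major and minor strain.
    du = np.linalg.norm(np.diff(points, axis=0), axis=-1)  # spacing, direction 1
    dv = np.linalg.norm(np.diff(points, axis=1), axis=-1)  # spacing, direction 2
    lu = 0.5 * (du[:, :-1] + du[:, 1:])   # average the two edges of each cell
    lv = 0.5 * (dv[:-1, :] + dv[1:, :])
    e1 = np.log(lu / d0)
    e2 = np.log(lv / d0)
    return np.maximum(e1, e2), np.minimum(e1, e2)

# Synthetic check: a 5 x 5 grid with d0 = 2.0, stretched 10% along one axis.
r, c, d0 = 5, 5, 2.0
x, y = np.meshgrid(np.arange(c), np.arange(r))
pts = np.stack([1.1 * d0 * x, d0 * y, np.zeros_like(x, dtype=float)], axis=-1)
major, minor = grid_strains(pts, d0)
print(major.mean(), minor.mean())  # ~0.095 (= ln 1.1) and ~0.0

Because the measured points are full 3D coordinates, the same neighbor-distance computation works unchanged for curved, deep-drawn surfaces, which is the point of the photogrammetric reconstruction.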
Sheet metal forming analysis
[ "Engineering" ]
878
[ "Construction", "Industrial engineering", "Computer-aided engineering" ]
27,458,282
https://en.wikipedia.org/wiki/Liquid%20metal%20ion%20source
A liquid metal ion source (LMIS) is an ion source which uses a metal that is heated to the liquid state and used to form an electrospray that produces ions. An electrospray Taylor cone is formed by the application of a strong electric field, and ions are produced by field evaporation at the sharp tip of the cone, where the electric field is high. Ions from a LMIS are used in ion implantation and in focused ion beam instruments. Typically gallium is preferred for its low melting point, low vapor pressure, and relatively unreactive nature, and because the gallium ion is sufficiently heavy for ion milling. Development The LMIS technique originated in the development of colloid thruster spacecraft propulsion systems. Research beginning in the early 1960s showed that liquid metal can generate large numbers of ions. By the early 1970s, these results had spawned the development of LMIS ion microprobes. Initially, in the development of this technique, the liquid metal was supplied by a capillary tube. This method can be difficult to control at low emission currents. The "blunt-needle" LMIS technique was discovered by accident in the early 1970s. In this method a thin film of liquid metal is allowed to flow to the apex of a sharp needle. Focused ion beam Most focused ion beam instruments use a liquid-metal ion source (LMIS), often with gallium. In a gallium LMIS, gallium metal is placed in contact with a tungsten needle and heated; the gallium wets the tungsten and flows to the tip of the needle, where the opposing forces of surface tension and electric field produce the cusp-shaped Taylor cone. The tip radius of this cone is ~2 nm. The electric field at this small tip is typically greater than 1 × 10^8 V/cm and causes ionization and field emission of the gallium atoms. The ions are then accelerated to an energy of 1–50 keV and focused onto the sample with electrostatic lenses. A LMIS produces a high-current-density ion beam with a small energy spread and can deliver tens of nanoamperes of current to a sample with a spot size of a few nanometers. References Ion source
Liquid metal ion source
[ "Physics" ]
441
[ "Ion source", "Mass spectrometry", "Spectrum (physical sciences)" ]
27,459,241
https://en.wikipedia.org/wiki/List%20of%20restriction%20enzyme%20cutting%20sites%3A%20G%E2%80%93K
This article contains a list of the most studied restriction enzymes whose names start with G to K inclusive. It contains approximately 90 enzymes. The following information is given: Whole list navigation Restriction enzymes G H I K Notes Biotechnology Restriction enzyme cutting sites Restriction enzymes
List of restriction enzyme cutting sites: G–K
[ "Chemistry", "Biology" ]
51
[ "Genetics techniques", "Molecular-biology-related lists", "Biotechnology", "nan", "Molecular biology", "Restriction enzymes" ]
6,646,221
https://en.wikipedia.org/wiki/Alternatives%20to%20general%20relativity
Alternatives to general relativity are physical theories that attempt to describe the phenomenon of gravitation in competition with Einstein's theory of general relativity. There have been many different attempts at constructing an ideal theory of gravity. These attempts can be split into four broad categories based on their scope. In this article, straightforward alternatives to general relativity are discussed, which do not involve quantum mechanics or force unification. Other theories which do attempt to construct a theory using the principles of quantum mechanics are known as theories of quantized gravity. Thirdly, there are theories which attempt to explain gravity and other forces at the same time; these are known as classical unified field theories. Finally, the most ambitious theories attempt to both put gravity in quantum mechanical terms and unify forces; these are called theories of everything. None of these alternatives to general relativity have gained wide acceptance. General relativity has withstood many tests, remaining consistent with all observations so far. In contrast, many of the early alternatives have been definitively disproven. However, some of the alternative theories of gravity are supported by a minority of physicists, and the topic remains the subject of intense study in theoretical physics. Notation in this article: \(c\) is the speed of light, \(G\) is the gravitational constant. "Geometric variables" are not used. Latin indices go from 1 to 3, Greek indices go from 0 to 3. The Einstein summation convention is used. \(\eta_{\mu\nu}\) is the Minkowski metric. \(g_{\mu\nu}\) is a tensor, usually the metric tensor. These have signature (−,+,+,+). Partial differentiation is written \(\partial_\mu \phi\) or \(\phi_{,\mu}\). Covariant differentiation is written \(\nabla_\mu \phi\) or \(\phi_{;\mu}\). General relativity For comparison with alternatives, the formulas of general relativity are: \(G_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}\), which can also be written \(R_{\mu\nu} - \tfrac{1}{2} g_{\mu\nu} R = \frac{8\pi G}{c^4} T_{\mu\nu}\). The Einstein–Hilbert action for general relativity is: \(S = \frac{c^4}{16\pi G}\int R \sqrt{-g}\,\mathrm{d}^4 x + S_m\), where \(G\) is Newton's gravitational constant, \(R\) is the Ricci curvature of space, and \(S_m\) is the action due to mass. General relativity is a tensor theory, the equations all contain tensors. Nordström's theories, on the other hand, are scalar theories because the gravitational field is a scalar. Other proposed alternatives include scalar–tensor theories that contain a scalar field in addition to the tensors of general relativity, and other variants containing vector fields as well have been developed recently. Classification of theories Theories of gravity can be classified, loosely, into several categories. Most of the theories described here have: an 'action' (see the principle of least action, a variational principle based on the concept of action) a Lagrangian density a metric If a theory has a Lagrangian density for gravity, say \(L\), then the gravitational part of the action is the integral of that: \(S = \int L \sqrt{-g}\,\mathrm{d}^4 x\). In this equation it is usual, though not essential, to have \(g_{\mu\nu} \to \eta_{\mu\nu}\) at spatial infinity when using Cartesian coordinates. For example, the Einstein–Hilbert action uses \(L \propto R\), where R is the scalar curvature, a measure of the curvature of space. Almost every theory described in this article has an action. It is the most efficient known way to guarantee that the necessary conservation laws of energy, momentum and angular momentum are incorporated automatically; although it is easy to construct an action where those conservation laws are violated. Canonical methods provide another way to construct systems that have the required conservation laws, but this approach is more cumbersome to implement.
The original 1983 version of MOND did not have an action. A few theories have an action but not a Lagrangian density. A good example is Whitehead; the action there is termed non-local. A theory of gravity is a "metric theory" if and only if it can be given a mathematical representation in which two conditions hold: Condition 1: There exists a symmetric metric tensor \(g_{\mu\nu}\) of signature (−, +, +, +), which governs proper-length and proper-time measurements in the usual manner of special and general relativity: \(\mathrm{d}s^2 = g_{\mu\nu}\,\mathrm{d}x^\mu\,\mathrm{d}x^\nu\), where there is a summation over indices \(\mu\) and \(\nu\). Condition 2: Stressed matter and fields being acted upon by gravity respond in accordance with the equation: \(0 = \nabla_\nu T^{\mu\nu} = T^{\mu\nu}{}_{,\nu} + \Gamma^{\mu}{}_{\sigma\nu} T^{\sigma\nu} + \Gamma^{\nu}{}_{\sigma\nu} T^{\mu\sigma}\), where \(T^{\mu\nu}\) is the stress–energy tensor for all matter and non-gravitational fields, and where \(\nabla\) is the covariant derivative with respect to the metric and \(\Gamma\) is the Christoffel symbol. The stress–energy tensor should also satisfy an energy condition. Metric theories include (from simplest to most complex): Scalar field theories (includes conformally flat theories & Stratified theories with conformally flat space slices) Bergman Coleman Einstein (1912) Einstein–Fokker theory Lee–Lightman–Ni Littlewood Ni Nordström's theory of gravitation (first metric theory of gravity to be developed) Page–Tupper Papapetrou Rosen (1971) Whitrow–Morduch Yilmaz theory of gravitation (attempted to eliminate event horizons from the theory.) Quasilinear theories (includes Linear fixed gauge) Bollini–Giambiagi–Tiomno Deser–Laurent Whitehead's theory of gravity (intended to use only retarded potentials) Tensor theories Einstein's general relativity Fourth-order gravity (allows the Lagrangian to depend on second-order contractions of the Riemann curvature tensor) f(R) gravity (allows the Lagrangian to depend on higher powers of the Ricci scalar) Gauss–Bonnet gravity Lovelock theory of gravity (allows the Lagrangian to depend on higher-order contractions of the Riemann curvature tensor) Infinite derivative gravity Scalar–tensor theories Bekenstein Bergmann–Wagoner Brans–Dicke theory (the most well-known alternative to general relativity, intended to be better at applying Mach's principle) Jordan Nordtvedt Thiry Chameleon Pressuron Vector–tensor theories Hellings–Nordtvedt Will–Nordtvedt Bimetric theories Lightman–Lee Rastall Rosen (1975) Other metric theories (see section Modern theories below) Non-metric theories include Belinfante–Swihart Einstein–Cartan theory (intended to handle spin-orbital angular momentum interchange) Kustaanheimo (1967) Teleparallelism Gauge theory gravity A word here about Mach's principle is appropriate because a few of these theories rely on Mach's principle (e.g. Whitehead), and many mention it in passing (e.g. Einstein–Grossmann, Brans–Dicke). Mach's principle can be thought of as a half-way house between Newton and Einstein. It goes this way: Newton: Absolute space and time. Mach: The reference frame comes from the distribution of matter in the universe. Einstein: There is no reference frame. Theories from 1917 to the 1980s At the time it was published in the 17th century, Isaac Newton's theory of gravity was the most accurate theory of gravity. Since then, a number of alternatives were proposed. The theories which predate the formulation of general relativity in 1915 are discussed in history of gravitational theory. This section includes alternatives to general relativity published after general relativity but before the observations of galaxy rotation that led to the hypothesis of "dark matter".
Those considered here include (see Will and Lang): These theories are presented here without a cosmological constant or added scalar or vector potential unless specifically noted, for the simple reason that the need for one or both of these was not recognized before the supernova observations by the Supernova Cosmology Project and High-Z Supernova Search Team. How to add a cosmological constant or quintessence to a theory is discussed under Modern theories (see also Einstein–Hilbert action). Scalar field theories The scalar field theories of Nordström have already been discussed. Those of Littlewood, Bergman, Yilmaz, Whitrow and Morduch and Page and Tupper follow the general formula given by Page and Tupper. According to Page and Tupper, who discuss all these except Nordström, the general scalar field theory comes from the principle of least action, where \(\phi\) is the scalar field and the speed of light \(c\) may or may not depend on \(\phi\). Nordström, Littlewood and Bergmann, Whitrow and Morduch (in two variants) and Page and Tupper each correspond to particular choices of the functions in this general form. Page and Tupper's theory matches Yilmaz's theory to second order for an appropriate parameter choice. The gravitational deflection of light has to be zero when c is constant. Given that variable c and zero deflection of light are both in conflict with experiment, the prospect for a successful scalar theory of gravity looks very unlikely. Further, if the parameters of a scalar theory are adjusted so that the deflection of light is correct then the gravitational redshift is likely to be wrong. Ni summarized some theories and also created two more. In the first, a pre-existing special relativity space-time and universal time coordinate acts with matter and non-gravitational fields to generate a scalar field. This scalar field acts together with all the rest to generate the metric. (Misner et al. give the action without one of its terms; it involves the matter action \(S_m\) and the universal time coordinate \(t\).) This theory is self-consistent and complete. But the motion of the solar system through the universe leads to serious disagreement with experiment. In the second theory of Ni there are two arbitrary functions of the scalar field that are related to the metric. Ni quotes Rosen as having two scalar fields that are related to the metric. In Papapetrou's first theory the gravitational part of the Lagrangian is built from a single scalar field; in his second there is an additional scalar field. Bimetric theories Bimetric theories contain both the normal tensor metric and the Minkowski metric (or a metric of constant curvature), and may contain other scalar or vector fields. Rosen (1975) bimetric theory takes its action from both metrics. Lightman–Lee developed a metric theory based on the non-metric theory of Belinfante and Swihart; the result is known as BSLL theory. Given a tensor field and two constants, the action is constructed and the stress–energy tensor is derived from it. In Rastall, the metric is an algebraic function of the Minkowski metric and a vector field (see Will for the field equations). Quasilinear theories In Whitehead, the physical metric is constructed (by Synge) algebraically from the Minkowski metric and matter variables, so it doesn't even have a scalar field. In the construction, the superscript (−) indicates quantities evaluated along the past light cone of the field point. Nevertheless, the metric construction (from a non-metric theory) using the "length contraction" ansatz is criticised. Deser and Laurent and Bollini–Giambiagi–Tiomno are Linear Fixed Gauge theories.
Taking an approach from quantum field theory, these combine a Minkowski spacetime with the gauge-invariant action of a spin-two tensor field (i.e. the graviton). The Bianchi identity associated with this partial gauge invariance is wrong. Linear Fixed Gauge theories seek to remedy this by breaking the gauge invariance of the gravitational action through the introduction of auxiliary gravitational fields that couple to the tensor field. A cosmological constant can be introduced into a quasilinear theory by the simple expedient of changing the Minkowski background to a de Sitter or anti-de Sitter spacetime, as suggested by G. Temple in 1923. Temple's suggestions on how to do this were criticized by C. B. Rayner in 1955. Tensor theories Einstein's general relativity is the simplest plausible theory of gravity that can be based on just one symmetric tensor field (the metric tensor). Others include: Starobinsky (R+R^2) gravity, Gauss–Bonnet gravity, f(R) gravity, and Lovelock theory of gravity. Starobinsky Starobinsky gravity, proposed by Alexei Starobinsky, has the Lagrangian \(L = R + \frac{R^2}{6M^2}\) and has been used to explain inflation, in the form of Starobinsky inflation. Here \(M\) is a constant. Gauss–Bonnet Gauss–Bonnet gravity has an action whose extra term is proportional to the Gauss–Bonnet invariant \(\mathcal{G} = R^2 - 4R_{\mu\nu}R^{\mu\nu} + R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}\), where the coefficients of the extra terms are chosen so that the action reduces to general relativity in 4 spacetime dimensions and the extra terms are only non-trivial when more dimensions are introduced. Stelle's 4th derivative gravity Stelle's 4th derivative gravity, which is a generalization of Gauss–Bonnet gravity, has an action with terms quadratic in the curvature. f(R) f(R) gravity has the action \(S = \frac{1}{2\kappa}\int f(R)\sqrt{-g}\,\mathrm{d}^4 x\) and is a family of theories, each defined by a different function of the Ricci scalar. Starobinsky gravity is actually an \(f(R)\) theory. Infinite derivative gravity Infinite derivative gravity is a covariant theory of gravity, quadratic in curvature, torsion-free and parity-invariant, constructed so that only massless spin-2 and spin-0 components propagate in the graviton propagator around the Minkowski background. The action becomes non-local beyond a characteristic scale, and recovers general relativity in the infrared, for energies below that non-local scale. In the ultraviolet regime, at distances and time scales below the non-local scale, the gravitational interaction weakens enough to resolve the point-like singularity, which means Schwarzschild's singularity can potentially be resolved in infinite derivative theories of gravity. Lovelock Lovelock gravity has an action built from higher-order contractions of the Riemann curvature tensor and can be thought of as a generalization of general relativity. Scalar–tensor theories These all contain at least one free parameter, as opposed to general relativity, which has no free parameters. Although not normally considered a scalar–tensor theory of gravity, the 5 by 5 metric of Kaluza–Klein reduces to a 4 by 4 metric and a single scalar. So if the 5th element is treated as a scalar gravitational field instead of an electromagnetic field then Kaluza–Klein can be considered the progenitor of scalar–tensor theories of gravity. This was recognized by Thiry. Scalar–tensor theories include Thiry, Jordan, Brans and Dicke, Bergman, Nordtvedt (1970), Wagoner, Bekenstein and Barker. The action is based on the integral of a Lagrangian of the form \(L_\phi = \frac{\sqrt{-g}}{16\pi}\left[\phi R - \frac{\omega(\phi)}{\phi} g^{\mu\nu}\,\partial_\mu\phi\,\partial_\nu\phi + 2\phi\lambda(\phi)\right]\), where \(\omega(\phi)\) is a different dimensionless function for each different scalar–tensor theory. The function \(\lambda(\phi)\) plays the same role as the cosmological constant in general relativity. A dimensionless normalization constant fixes the present-day value of the gravitational coupling. An arbitrary potential can be added for the scalar.
The full version is retained in Bergman and Wagoner. Special cases are: Nordtvedt, with \(\lambda\) set to zero; since \(\lambda\) was thought to be zero at the time anyway, this would not have been considered a significant difference. The role of the cosmological constant in more modern work is discussed under Cosmological constant. Brans–Dicke, in which \(\omega\) is constant. Bekenstein variable mass theory: starting with parameters found from a cosmological solution, one determines the coupling function. Barker constant-G theory. Adjustment of \(\omega(\phi)\) allows scalar–tensor theories to tend to general relativity in the limit of \(\omega \to \infty\) in the current epoch. However, there could be significant differences from general relativity in the early universe. So long as general relativity is confirmed by experiment, general scalar–tensor theories (including Brans–Dicke) can never be ruled out entirely, but as experiments continue to confirm general relativity more precisely, the parameters have to be fine-tuned so that the predictions more closely match those of general relativity. The above examples are particular cases of Horndeski's theory, the most general Lagrangian constructed out of the metric tensor and a scalar field leading to second-order equations of motion in 4-dimensional space. Viable theories beyond Horndeski (with higher-order equations of motion) have been shown to exist. Vector–tensor theories Before we start, Will (2001) has said: "Many alternative metric theories developed during the 1970s and 1980s could be viewed as "straw-man" theories, invented to prove that such theories exist or to illustrate particular properties. Few of these could be regarded as well-motivated theories from the point of view, say, of field theory or particle physics. Examples are the vector–tensor theories studied by Will, Nordtvedt and Hellings." Hellings and Nordtvedt and Will and Nordtvedt are both vector–tensor theories. In addition to the metric tensor there is a timelike vector field \(K_\mu\). The gravitational action contains several constant coefficients (see Will for the field equations). Will and Nordtvedt and Hellings and Nordtvedt are special cases in which particular combinations of these coefficients vanish. These vector–tensor theories are semi-conservative, which means that they satisfy the laws of conservation of momentum and angular momentum but can have preferred-frame effects. When the extra coefficients are set to zero they reduce to general relativity, so, as long as general relativity is confirmed by experiment, general vector–tensor theories can never be ruled out. Other metric theories Other metric theories have been proposed; that of Bekenstein is discussed under Modern theories. Non-metric theories Cartan's theory is particularly interesting both because it is a non-metric theory and because it is so old. The status of Cartan's theory is uncertain. Will claims that all non-metric theories are eliminated by Einstein's Equivalence Principle. Will (2001) tempers that by explaining experimental criteria for testing non-metric theories against Einstein's Equivalence Principle. Misner et al. claim that Cartan's theory is the only non-metric theory to survive all experimental tests up to that date, and Turyshev lists Cartan's theory among the few that have survived all experimental tests up to that date. The following is a quick sketch of Cartan's theory as restated by Trautman. Cartan suggested a simple generalization of Einstein's theory of gravitation. He proposed a model of space-time with a metric tensor and a linear "connection" compatible with the metric but not necessarily symmetric.
The torsion tensor of the connection is related to the density of intrinsic angular momentum. Independently of Cartan, similar ideas were put forward by Sciama and by Kibble in the years 1958 to 1966, culminating in a 1976 review by Hehl et al. The original description is in terms of differential forms, but for the present article that is replaced by the more familiar language of tensors (risking loss of accuracy). As in general relativity, the Lagrangian is made up of a massless and a mass part. The Lagrangian for the massless part is built from the linear connection \(\Gamma\), the completely antisymmetric pseudo-tensor (Levi-Civita symbol) \(\epsilon\), and the metric tensor \(g_{\mu\nu}\) as usual. By assuming that the linear connection is metric, it is possible to remove the unwanted freedom inherent in the non-metric theory. The stress–energy tensor is calculated from the mass part of the Lagrangian. The space curvature is not Riemannian, but on a Riemannian space-time the Lagrangian would reduce to the Lagrangian of general relativity. Some equations of the non-metric theory of Belinfante and Swihart have already been discussed in the section on bimetric theories. A distinctively non-metric theory is given by gauge theory gravity, which replaces the metric in its field equations with a pair of gauge fields in flat spacetime. On the one hand, the theory is quite conservative because it is substantially equivalent to Einstein–Cartan theory (or general relativity in the limit of vanishing spin), differing mostly in the nature of its global solutions. On the other hand, it is radical because it replaces differential geometry with geometric algebra. Modern theories 1980s to present This section includes alternatives to general relativity published after the observations of galaxy rotation that led to the hypothesis of "dark matter". There is no known reliable comparative list of these theories. Those considered here include: Bekenstein, and three theories by Moffat. These theories are presented with a cosmological constant or added scalar or vector potential. Motivations Motivations for the more recent alternatives to general relativity are almost all cosmological, associated with or replacing such constructs as "inflation", "dark matter" and "dark energy". The basic idea is that gravity agrees with general relativity at the present epoch but may have been quite different in the early universe. In the 1980s, there was a slowly dawning realisation in the physics world that there were several problems inherent in the then-current big-bang scenario, including the horizon problem and the observation that at early times when quarks were first forming there was not enough space in the universe to contain even one quark. Inflation theory was developed to overcome these difficulties. Another alternative was constructing an alternative to general relativity in which the speed of light was higher in the early universe. The discovery of unexpected rotation curves for galaxies took everyone by surprise. Could there be more mass in the universe than we are aware of, or is the theory of gravity itself wrong? The consensus now is that the missing mass is "cold dark matter", but that consensus was only reached after trying alternatives to general relativity, and some physicists still believe that alternative models of gravity may hold the answer. In the 1990s, supernova surveys discovered the accelerated expansion of the universe, now usually attributed to dark energy.
This led to the rapid reinstatement of Einstein's cosmological constant, and quintessence arrived as an alternative to the cosmological constant. At least one new alternative to general relativity attempted to explain the supernova surveys' results in a completely different way. The measurement of the speed of gravity with the gravitational wave event GW170817 ruled out many alternative theories of gravity as explanations for the accelerated expansion. Another observation that sparked recent interest in alternatives to general relativity is the Pioneer anomaly. It was quickly discovered that alternatives to general relativity could explain this anomaly, but it is now believed to be accounted for by non-uniform thermal radiation. Cosmological constant and quintessence The cosmological constant is a very old idea, going back to Einstein in 1917. The success of the Friedmann model of the universe in which \(\Lambda = 0\) led to the general acceptance that it is zero, but the use of a non-zero value came back when data from supernovae indicated that the expansion of the universe is accelerating. In Newtonian gravity, the addition of the cosmological constant changes the Newton–Poisson equation from \(\nabla^2\phi = 4\pi G\rho\) to \(\nabla^2\phi = 4\pi G\rho - \Lambda c^2\). In general relativity, it changes the Einstein–Hilbert action from \(S = \frac{c^4}{16\pi G}\int R\sqrt{-g}\,\mathrm{d}^4 x + S_m\) to \(S = \frac{c^4}{16\pi G}\int (R - 2\Lambda)\sqrt{-g}\,\mathrm{d}^4 x + S_m\), which changes the field equation from \(G_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}\) to \(G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}\). In alternative theories of gravity, a cosmological constant can be added to the action in the same way. More generally, a scalar potential can be added to scalar–tensor theories. This can be done in every alternative to general relativity that contains a scalar field \(\phi\) by adding the term \(\phi\lambda(\phi)\) inside the Lagrangian for the gravitational part of the action. Because \(\lambda(\phi)\) is an arbitrary function of the scalar field rather than a constant, it can be set to give an acceleration that is large in the early universe and small at the present epoch. This is known as quintessence. A similar method can be used in alternatives to general relativity that use vector fields, including Rastall and vector–tensor theories: a term proportional to the square of the vector field is added to the Lagrangian for the gravitational part of the action. Farnes' theories In December 2018, the astrophysicist Jamie Farnes from the University of Oxford proposed a dark fluid theory, related to notions of gravitationally repulsive negative masses that were presented earlier by Albert Einstein. The theory may help to better understand the considerable amounts of unknown dark matter and dark energy in the universe. The theory relies on the concept of negative mass and reintroduces Fred Hoyle's creation tensor in order to allow matter creation for only negative mass particles. In this way, the negative mass particles surround galaxies and apply a pressure onto them, thereby resembling dark matter. As these hypothesised particles mutually repel one another, they push apart the Universe, thereby resembling dark energy. The creation of matter allows the density of the exotic negative mass particles to remain constant as a function of time, and so appears like a cosmological constant. Einstein's field equations are modified accordingly. According to Occam's razor, Farnes' theory is a simpler alternative to the conventional LambdaCDM model, as both dark energy and dark matter (two hypotheses) are solved using a single negative mass fluid (one hypothesis). The theory will be directly testable using the world's largest radio telescope, the Square Kilometre Array which should come online in 2022.
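To give a sense of the scale on which a cosmological constant matters, here is an illustrative Python sketch of the weak-field acceleration implied by the modified Newton–Poisson equation above, g(r) = GM/r^2 − Λc^2 r/3. The cluster mass and the rounded value of Λ are assumptions for illustration only.

import math

G   = 6.674e-11          # m^3 kg^-1 s^-2
c   = 2.998e8            # m/s
LAM = 1.1e-52            # m^-2, approximate observed value of Lambda
M_SUN = 1.989e30         # kg

def balance_radius_m(mass_kg):
    # radius where attraction and the Lambda repulsion cancel:
    # GM/r^2 = (Lambda c^2 / 3) r  =>  r = (3GM / (Lambda c^2))^(1/3)
    return (3 * G * mass_kg / (LAM * c**2)) ** (1.0 / 3.0)

r = balance_radius_m(1e15 * M_SUN)   # a rich galaxy cluster
print(f"{r / 3.086e22:.0f} Mpc")     # ~ tens of Mpc: Lambda only matters on
                                     # the very largest scales

The result (roughly 11 Mpc for this illustrative mass) shows why the cosmological constant is irrelevant for solar-system and galactic dynamics but decisive for cosmology.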
Relativistic MOND The original theory of MOND by Milgrom was developed in 1983 as an alternative to "dark matter". Departures from Newton's law of gravitation are governed by an acceleration scale, not a distance scale. MOND successfully explains the Tully–Fisher observation that the luminosity of a galaxy should scale as the fourth power of the rotation speed. It also explains why the rotation discrepancy in dwarf galaxies is particularly large. There were several problems with MOND in the beginning: (1) it did not include relativistic effects; (2) it violated the conservation of energy, momentum and angular momentum; (3) it was inconsistent in that it gives different galactic orbits for gas and for stars; (4) it did not state how to calculate gravitational lensing from galaxy clusters. By 1984, problems 2 and 3 had been solved by introducing a Lagrangian (AQUAL). A relativistic version of this based on scalar–tensor theory was rejected because it allowed waves in the scalar field to propagate faster than light. The non-relativistic form has a Lagrangian built around an interpolating function of the gravitational field gradient, and the relativistic version has a corresponding scalar-field Lagrangian with a nonstandard mass action. The interpolating functions are arbitrary functions selected to give Newtonian and MOND behaviour in the correct limits, with the MOND length scale as a parameter. By 1988, a second scalar field (PCC) fixed problems with the earlier scalar–tensor version but is in conflict with the perihelion precession of Mercury and gravitational lensing by galaxies and clusters. By 1997, MOND had been successfully incorporated in a stratified relativistic theory [Sanders], but as this is a preferred-frame theory it has problems of its own. Bekenstein introduced a tensor–vector–scalar model (TeVeS). This has two scalar fields and a vector field. The action is split into parts for gravity, scalars, vector and mass. The gravity part is the same as in general relativity. The remaining parts involve several constants, a Lagrange multiplier (calculated elsewhere) and a Lagrangian translated from flat spacetime onto the metric; square brackets in indices represent anti-symmetrization. Note that the gravitational constant appearing in the action need not equal the observed gravitational constant \(G\). The interpolating function is arbitrary; one with the right asymptotic behaviour is given as an example, though it becomes undefined over part of its range. The parametric post-Newtonian parameters of this theory have been calculated, showing that all of them are equal to general relativity's except for two preferred-frame parameters, both of which are expressed in geometric units. Moffat's theories J. W. Moffat developed a non-symmetric gravitation theory. This is not a metric theory. It was first claimed that it does not contain a black hole horizon, but Burko and Ori have found that nonsymmetric gravitational theory can contain black holes. Later, Moffat claimed that it has also been applied to explain rotation curves of galaxies without invoking "dark matter". Damour, Deser & MaCarthy have criticised nonsymmetric gravitational theory, saying that it has unacceptable asymptotic behaviour. The mathematics is not difficult but is intertwined, so the following is only a brief sketch. Starting with a non-symmetric tensor \(g_{\mu\nu}\), the Lagrangian density is split into a matter part, which is the same as for matter in general relativity, and a gravitational part containing a curvature term analogous to but not equal to the Ricci curvature in general relativity, two cosmological constants, and the antisymmetric part of \(g_{\mu\nu}\). The connection that appears is a bit difficult to explain because it is defined recursively.
However, Haugan and Kauffmann used polarization measurements of the light emitted by galaxies to impose sharp constraints on the magnitude of some of nonsymmetric gravitational theory's parameters. They also used Hughes-Drever experiments to constrain the remaining degrees of freedom. Their constraint is eight orders of magnitude sharper than previous estimates. Moffat's metric-skew-tensor-gravity (MSTG) theory is able to predict rotation curves for galaxies without either dark matter or MOND, and claims that it can also explain gravitational lensing of galaxy clusters without dark matter. It has a variable gravitational constant \(G\), increasing to a final constant value about a million years after the big bang. The theory seems to contain an asymmetric tensor field and a source current vector. The action is split into gravity, mass, skew field and skew field matter coupling terms. Both the gravity and mass terms match those of general relativity with a cosmological constant. In the skew field action and the skew field matter coupling, \(\epsilon\) is the Levi-Civita symbol. The skew field coupling is a Pauli coupling and is gauge invariant for any source current. The source current looks like a matter fermion field associated with baryon and lepton number. Scalar–tensor–vector gravity Moffat's scalar–tensor–vector gravity contains a tensor, a vector and three scalar fields. But the equations are quite straightforward. The action is split into terms for gravity, the vector field, the scalar fields and mass. The gravity term is the standard one, with the exception that \(G\) is moved inside the integral. The potential function chosen for the vector field involves a coupling constant. The functions assumed for the scalar potentials are not stated. Infinite derivative gravity In order to remove ghosts in the modified propagator, as well as to obtain asymptotic freedom, Biswas, Mazumdar and Siegel (2005) considered a string-inspired infinite set of higher-derivative terms in which the form factor is the exponential of an entire function of the d'Alembertian operator. This avoids a black hole singularity near the origin, while recovering the \(1/r\) fall-off of the general relativity potential at large distances. Lousto and Mazzitelli (1997) found an exact solution to these theories representing a gravitational shock wave. General relativity self-interaction (GRSI) The General Relativity Self-interaction or GRSI model is an attempt to explain astrophysical and cosmological observations without dark matter or dark energy by adding self-interaction terms when calculating the gravitational effects in general relativity, analogous to the self-interaction terms in quantum chromodynamics. Additionally, the model explains the Tully-Fisher relation and the radial acceleration relation, observations that are currently challenging to understand within Lambda-CDM. The model was proposed in a series of articles, the first dating from 2003. The basic point is that since, within General Relativity, gravitational fields couple to each other, this can effectively increase the gravitational interaction between massive objects. The additional gravitational strength then avoids the need for dark matter. This field coupling is the origin of General Relativity's non-linear behavior. It can be understood, in particle language, as gravitons interacting with each other (despite being massless) because they carry energy-momentum. A natural implication of this model is its explanation of the accelerating expansion of the universe without resorting to dark energy.
The increased binding energy within a galaxy requires, by energy conservation, a weakening of gravitational attraction outside said galaxy. This mimics the repulsion of dark energy. The GRSI model is inspired by the Strong Nuclear Force, where a comparable phenomenon occurs. The interaction between gluons emitted by static or nearly static quarks dramatically strengthens the quark-quark interaction, ultimately leading to quark confinement on the one hand (analogous to the need for stronger gravity to explain away dark matter) and the suppression of the Strong Nuclear Force outside hadrons (analogous to the repulsion of dark energy that balances gravitational attraction at large scales). Another parallel phenomenon is the Tully-Fisher relation in galaxy dynamics, which is analogous to the Regge trajectories emerging from the strong force. In both cases, the phenomenological formulas describing these observations are similar, albeit with different numerical factors. These parallels are expected from a theoretical point of view: General Relativity and the Strong Interaction Lagrangians have the same form. The validity of the GRSI model then simply hinges on whether the coupling of the gravitational fields is large enough so that the same effects that occur in hadrons also occur in very massive systems. This coupling is effectively given by \(\sqrt{GM/L}\), where \(G\) is the gravitational constant, \(M\) is the mass of the system, and \(L\) is a characteristic length of the system. The claim of the GRSI proponents, based either on lattice calculations, a background-field model, or the coincidental phenomenologies in galactic and hadronic dynamics mentioned in the previous paragraph, is that \(\sqrt{GM/L}\) is indeed sufficiently large for large systems such as galaxies. List of topics studied in the Model The main observations that appear to require dark matter and/or dark energy can be explained within this model, namely: The flat rotation curves of galaxies. (These results, however, have been challenged.) The Cosmic Microwave Background anisotropies. The fainter luminosities of distant supernovae and their consequence for the accelerating expansion of the universe. The formation of the Universe's large structures. The matter power spectrum. The internal dynamics of galaxy clusters, including that of the Bullet Cluster. Additionally, the model explains observations that are currently challenging to understand within Lambda-CDM: The Tully-Fisher relation. The radial acceleration relation. The Hubble tension. The cosmic coincidence, that is, the fact that at the present time the purported repulsion of dark energy nearly exactly cancels the action of gravity in the overall dynamics of the universe. Finally, the model made a prediction that the amount of missing mass (i.e., the dark mass in dark matter approaches) in elliptical galaxies correlates with the ellipticity of the galaxies. This was tested and verified. Testing of alternatives to general relativity Any putative alternative to general relativity would need to meet a variety of tests for it to become accepted. For in-depth coverage of these tests, see Misner et al. Ch. 39, Will Table 2.1, and Ni. Most such tests can be categorized as in the following subsections. Self-consistency Self-consistency among non-metric theories includes eliminating theories allowing tachyons, ghost poles and higher-order poles, and those that have problems with behaviour at infinity. Among metric theories, self-consistency is best illustrated by describing several theories that fail this test.
The classic example is the spin-two field theory of Fierz and Pauli; the field equations imply that gravitating bodies move in straight lines, whereas the equations of motion insist that gravity deflects bodies away from straight-line motion. Yilmaz (1971) contains a tensor gravitational field used to construct a metric; it is mathematically inconsistent because the functional dependence of the metric on the tensor field is not well defined. Completeness To be complete, a theory of gravity must be capable of analysing the outcome of every experiment of interest. It must therefore mesh with electromagnetism and all other physics. For instance, any theory that cannot predict from first principles the movement of planets or the behaviour of atomic clocks is incomplete. Many early theories are incomplete in that it is unclear whether the density \(\rho\) used by the theory should be calculated from the stress–energy tensor \(T\) as \(\rho = T^{\mu\nu}u_\mu u_\nu\) or as \(\rho = T^{\mu\nu}\delta_{\mu\nu}\), where \(u\) is the four-velocity and \(\delta\) is the Kronecker delta. The theories of Thiry (1948) and Jordan are incomplete unless Jordan's parameter is set to −1, in which case they match the theory of Brans–Dicke and so are worthy of further consideration. Milne is incomplete because it makes no gravitational redshift prediction. The theories of Whitrow and Morduch, Kustaanheimo, and Kustaanheimo and Nuotio are either incomplete or inconsistent. The incorporation of Maxwell's equations is incomplete unless it is assumed that they are imposed on the flat background space-time, and when that is done they are inconsistent, because they predict zero gravitational redshift when the wave version of light (Maxwell theory) is used, and nonzero redshift when the particle version (photon) is used. Another more obvious example is Newtonian gravity with Maxwell's equations; light as photons is deflected by gravitational fields (by half that of general relativity) but light as waves is not. Classical tests There are three "classical" tests (dating back to the 1910s or earlier) of the ability of gravity theories to handle relativistic effects; they are gravitational redshift, gravitational lensing (generally tested around the Sun), and anomalous perihelion advance of the planets. Each theory should reproduce the observed results in these areas, which have to date always aligned with the predictions of general relativity. In 1964, Irwin I. Shapiro found a fourth test, called the Shapiro delay. It is usually regarded as a "classical" test as well. Agreement with Newtonian mechanics and special relativity As an example of disagreement with Newtonian experiments, Birkhoff theory predicts relativistic effects fairly reliably but demands that sound waves travel at the speed of light. This was the consequence of an assumption made to simplify handling the collision of masses. The Einstein equivalence principle Einstein's Equivalence Principle has three components. The first is the uniqueness of free fall, also known as the Weak Equivalence Principle. This is satisfied if inertial mass is equal to gravitational mass. \(\eta\) is a parameter used to test the maximum allowable violation of the Weak Equivalence Principle. The first tests of the Weak Equivalence Principle were done by Eötvös before 1900 and limited \(\eta\) to less than \(5\times10^{-9}\). Modern tests have reduced that to less than \(5\times10^{-13}\). The second is Lorentz invariance. In the absence of gravitational effects the speed of light is constant. The test parameter for this is \(\delta\).
The first tests of Lorentz invariance were done by Michelson and Morley before 1890 and limited \(\delta\) to less than \(5\times10^{-3}\). Modern tests have reduced this to less than \(1\times10^{-21}\). The third is local position invariance, which includes spatial and temporal invariance. The outcome of any local non-gravitational experiment is independent of where and when it is performed. Spatial local position invariance is tested using gravitational redshift measurements. The test parameter for this is \(\alpha\). Upper limits on this, found by Pound and Rebka in 1960, limited \(\alpha\) to less than 0.1. Modern tests have reduced this to less than \(1\times10^{-4}\). Schiff's conjecture states that any complete, self-consistent theory of gravity that embodies the Weak Equivalence Principle necessarily embodies Einstein's Equivalence Principle. This is likely to be true if the theory has full energy conservation. Metric theories satisfy the Einstein Equivalence Principle. Extremely few non-metric theories satisfy this. For example, the non-metric theory of Belinfante & Swihart is eliminated by the THεμ formalism for testing Einstein's Equivalence Principle. Gauge theory gravity is a notable exception, where the strong equivalence principle is essentially the minimal coupling of the gauge covariant derivative. Parametric post-Newtonian formalism See also Tests of general relativity, Misner et al. and Will for more information. Work on developing a standardized rather than ad hoc set of tests for evaluating alternative gravitation models began with Eddington in 1922 and resulted in a standard set of parametric post-Newtonian numbers in Nordtvedt and Will and Will and Nordtvedt. Each parameter measures a different aspect of how much a theory departs from Newtonian gravity. Because we are talking about deviation from Newtonian theory here, these only measure weak-field effects. The effects of strong gravitational fields are examined later. These ten are: \(\gamma\) is a measure of space curvature, being zero for Newtonian gravity and one for general relativity; \(\beta\) is a measure of nonlinearity in the addition of gravitational fields, one for general relativity; \(\xi\) is a check for preferred-location effects; \(\alpha_1\), \(\alpha_2\) and \(\alpha_3\) measure the extent and nature of "preferred-frame effects", and any theory of gravity in which at least one of the three is nonzero is called a preferred-frame theory; \(\zeta_1\), \(\zeta_2\), \(\zeta_3\), \(\zeta_4\) and \(\alpha_3\) measure the extent and nature of breakdowns in global conservation laws, and a theory of gravity possesses 4 conservation laws for energy-momentum and 6 for angular momentum only if all five are zero. Strong gravity and gravitational waves Parametric post-Newtonian analysis is only a measure of weak-field effects. Strong-gravity effects can be seen in compact objects such as white dwarfs, neutron stars, and black holes. Experimental tests such as the stability of white dwarfs, the spin-down rate of pulsars, the orbits of binary pulsars and the existence of a black hole horizon can be used as tests of alternatives to general relativity. General relativity predicts that gravitational waves travel at the speed of light. Many alternatives to general relativity say that gravitational waves travel faster than light, possibly breaking causality. After the multi-messenger detection of the GW170817 coalescence of neutron stars, where light and gravitational waves were measured to travel at the same speed with an error of one part in \(10^{15}\), many of those modified theories of gravity were excluded. Cosmological tests Useful cosmological scale tests are just beginning to become available.
Given the limited astronomical data and the complexity of the theories, comparisons involve complex parameters. For example, Reyes et al. analyzed 70,205 luminous red galaxies with a cross-correlation involving galaxy velocity estimates and gravitational potentials estimated from lensing, and yet results are still tentative. For those theories that aim to replace dark matter, observations like the galaxy rotation curve, the Tully–Fisher relation, the faster rotation rate of dwarf galaxies, and the gravitational lensing due to galactic clusters act as constraints. For those theories that aim to replace inflation, the size of ripples in the spectrum of the cosmic microwave background radiation is the strictest test. For those theories that incorporate or aim to replace dark energy, the supernova brightness results and the age of the universe can be used as tests. Another test is the flatness of the universe. With general relativity, the combination of baryonic matter, dark matter and dark energy adds up to make the universe exactly flat. Results of testing theories Parametric post-Newtonian parameters for a range of theories (see Will and Ni for more details; Misner et al. give a table for translating parameters from the notation of Ni to that of Will). General relativity is now more than 100 years old, during which one alternative theory of gravity after another has failed to agree with ever more accurate observations. One illustrative example is the parameterized post-Newtonian formalism. The following table lists parametric post-Newtonian values for a large number of theories. If the value in a cell matches that in the column heading then the full formula is too complicated to include here. † The theory is incomplete, and can take one of two values; the value closest to zero is listed. All experimental tests agree with general relativity so far, and so parametric post-Newtonian analysis immediately eliminates all the scalar field theories in the table. A full list of parametric post-Newtonian parameters is not available for Whitehead, Deser–Laurent and Bollini–Giambiagi–Tiomno, but in these three cases the known values are in strong conflict with general relativity and experimental results. In particular, these theories predict incorrect amplitudes for the Earth's tides. (A minor modification of Whitehead's theory avoids this problem. However, the modification predicts the Nordtvedt effect, which has been experimentally constrained.) Theories that fail other tests The stratified theories of Ni and of Lee, Lightman and Ni are non-starters because they all fail to explain the perihelion advance of Mercury. The bimetric theories of Lightman and Lee, Rosen, and Rastall all fail some of the tests associated with strong gravitational fields. The scalar–tensor theories include general relativity as a special case, but only agree with the parametric post-Newtonian values of general relativity when they are equal to general relativity to within experimental error. As experimental tests get more accurate, the deviation of the scalar–tensor theories from general relativity is being squashed to zero. The same is true of vector–tensor theories: the deviation of the vector–tensor theories from general relativity is being squashed to zero. Further, vector–tensor theories are semi-conservative; they have a nonzero value for at least one preferred-frame parameter, which can have a measurable effect on the Earth's tides. Non-metric theories, such as Belinfante and Swihart, usually fail to agree with experimental tests of Einstein's equivalence principle.
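To make the parametric post-Newtonian bookkeeping of the preceding paragraphs concrete, here is a small illustrative Python sketch. General relativity's values (γ = β = 1, all eight other parameters zero) are standard, but the numerical bounds below are rough placeholders of about the right order of magnitude, and the screening function is our own construction, not a published test.

# PPN values of general relativity: gamma = beta = 1, everything else zero.
GR = {"gamma": 1.0, "beta": 1.0, "xi": 0.0,
      "alpha1": 0.0, "alpha2": 0.0, "alpha3": 0.0,
      "zeta1": 0.0, "zeta2": 0.0, "zeta3": 0.0, "zeta4": 0.0}

# |param - GR value| tolerated; rough, assumed orders of magnitude only.
BOUNDS = {"gamma": 2e-5, "beta": 1e-4}

def screen(theory):
    for name, value in theory.items():
        tol = BOUNDS.get(name, 1e-2)   # loose default for the other parameters
        if abs(value - GR[name]) > tol:
            return f"ruled out by {name}"
    return "consistent with weak-field tests"

# An early scalar theory with gamma = -1 (zero light deflection) fails at once:
print(screen({**GR, "gamma": -1.0}))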
And that leaves, as a likely valid alternative to general relativity, nothing except possibly Cartan. That was the situation until cosmological discoveries pushed the development of modern alternatives. References External links Carroll, Sean. Video lecture discussion on the possibilities and constraints to revision of the General Theory of Relativity. Theories of gravity General relativity
Alternatives to general relativity
[ "Physics" ]
9,325
[ "Theories of gravity", "General relativity", "Theoretical physics", "Theory of relativity" ]
6,648,322
https://en.wikipedia.org/wiki/Flexible%20display
A flexible display or rollable display is an electronic visual display which is flexible in nature, as opposed to the traditional flat screen displays used in most electronic devices. In recent years there has been a growing interest from numerous consumer electronics manufacturers to apply this display technology in e-readers, mobile phones and other consumer electronics. Such screens can be rolled up like a scroll without the image or text being distorted. Technologies involved in building a rollable display include electronic ink, Gyricon, Organic LCD, and OLED. Electronic paper displays which can be rolled up have been developed by E Ink. At CES 2006, Philips showed a rollable display prototype, with a screen capable of retaining an image for several months without electricity. In 2007, Philips launched a 5-inch, 320 x 240-pixel rollable display based on E Ink's electrophoretic technology. Some flexible organic light-emitting diode displays have been demonstrated. The first commercially sold flexible display was an electronic paper wristwatch. A rollable display is an important part of the development of the roll-away computer. Applications With the flat panel display having already been widely used for more than 40 years, there have been many desired changes in the display technology, focusing on developing a lighter, thinner product that is easier to carry and store. Following the development of rollable displays in recent years, scientists and engineers agree that flexible flat panel display technology has huge market potential in the future. Rollable displays can be used in many places: Mobile devices. Laptops and PDAs. A permanently conformed display that securely fits around the wrist. A child's mask for Halloween and other uses. An odd-shaped display integrated in a steering wheel or automobile. History Flexible electronic paper based displays Flexible electronic paper (e-paper) based displays were the first flexible displays conceptualized and prototyped. While the idea for this type of display is not recent and had been attempted by several companies in the past, only recently has mass production of this technology begun for implementation in consumer electronic devices. Xerox PARC The concept of developing a flexible display was first put forth by Xerox PARC (Palo Alto Research Center). In 1974, Nicholas K. Sheridon, a PARC employee, made a major breakthrough in flexible display technology and produced the first flexible e-paper display. Dubbed Gyricon, this new display technology was designed to mimic the properties of paper, but married with the capacity to display dynamic digital images. Sheridon envisioned the advent of paperless offices and sought commercial applications for Gyricon. In 2003 Gyricon LLC was formed as a direct subsidiary of Xerox to commercialize the electronic paper technology developed at Xerox PARC. Gyricon LLC's operations were short lived and in December 2005 Xerox closed the subsidiary company in a move to focus on licensing the technology instead. HP and ASU In 2005, Arizona State University (ASU) opened a 250,000 square foot facility dedicated to flexible display research named the ASU Flexible Display Center (FDC). ASU received $43.7 million from the U.S. Army Research Laboratory (ARL) towards the development of this research facility in February 2004. A planned prototype device was slated for public demonstration later that year. However, the project met a series of delays.
In December 2008, ASU in partnership with Hewlett Packard demonstrated a prototype flexible e-paper from the Flexible Display Center at the university. HP continued with the research, and in 2010, showcased another demonstration. However, due to limitations in technology, HP stated "[our company] doesn't actually see these panels being used in truly flexible or rollable displays, but instead sees them being used to simply make displays thinner and lighter." Between 2004 and 2008, ASU developed its first small-scale flexible displays. Between 2008 and 2012, ARL committed to further sponsorship of ASU's Flexible Display Center, which included an additional $50 million in research funding. Although the U.S. Army funds ASU's development of the flexible display, the center's focus is on commercial applications. Plastic Logic Plastic Logic is a company that develops and manufactures monochrome plastic flexible displays in various sizes based on its proprietary organic thin film transistor (OTFT) technology. They have also demonstrated their ability to produce colour displays with this technology; however, they are currently not capable of manufacturing them on a large scale. The displays are manufactured in the company's purpose-built factory in Dresden, Germany, which was the first factory of its kind to be built – dedicated to the high volume manufacture of organic electronics. These flexible displays are cited as being "unbreakable", because they are made completely of plastic and do not contain glass. They are also lighter and thinner than glass-based displays and low-power. Applications of this flexible display technology include signage, wristwatches and wearable devices as well as automotive and mobile devices. Organic User Interfaces and the Human Media Lab In 2004, a team led by Prof. Roel Vertegaal at Queen's University's Human Media Lab in Canada developed PaperWindows, the first prototype bendable paper computer and first Organic User Interface. Since full-colour, US Letter-sized displays were not available at the time, PaperWindows deployed a form of active projection mapping of computer windows on real paper documents that worked together as one computer through 3D tracking. At a lecture to the Gyricon and Human-Computer Interaction teams at Xerox PARC on 4 May 2007, Prof. Vertegaal publicly introduced the term Organic User Interface (OUI) as a means of describing the implications of non-flat display technologies on user interfaces of the future: paper computers, flexible form factors for computing devices, but also encompassing rigid display objects of any shape, with wrap-around, skin-like displays. The lecture was published a year later as part of a special issue on Organic User Interfaces in the Communications of the ACM. In May 2010, the Human Media Lab partnered with ASU's Flexible Display Center to produce PaperPhone, the first flexible smartphone with a flexible electrophoretic display. PaperPhone used bend gestures for navigating contents. Since then, the Human Media Lab has partnered with Plastic Logic and Intel to introduce the first flexible tablet PC and multi-display e-paper computer, PaperTab, at CES 2013, and debuted the world's first actuated flexible smartphone prototype, MorePhone, in April 2013. Others Since 2010, Sony Electronics, AU Optronics and LG Electronics have all expressed interest in developing flexible e-paper displays. However, only LG has formally announced plans for mass production of flexible e-paper displays.
Flexible OLED-based displays Research and development into flexible OLED displays largely began in the late 2000s with the main intentions of implementing this technology in mobile devices. However, this technology has recently made an appearance, to a moderate extent, in consumer television displays as well. Nokia Morph and Kinetic concepts Nokia first conceptualized the application of flexible OLED displays in mobile phones with the Nokia Morph concept mobile phone. Released to the press in February 2008, the Morph concept was a project Nokia had co-developed with the University of Cambridge. With the Morph, Nokia intended to demonstrate its vision of future mobile devices incorporating flexible and polymorphic designs, allowing the device to seamlessly change and match a variety of needs by the user within various environments. Though the focus of the Morph was to demonstrate the potential of nanotechnology, it pioneered the concept of utilizing a flexible video display in a consumer electronics device. Nokia renewed its interest in flexible mobile devices again in 2011 with the Nokia Kinetic concept. Nokia unveiled the Kinetic flexible phone prototype at Nokia World 2011 in London, alongside Nokia's new range of Windows Phone 7 devices. The Kinetic proved to be a large departure from the Morph physically, but it still incorporated Nokia's vision of polymorphism in mobile devices. Sony Sony Electronics has expressed interest in research and development towards a flexible video display since 2005. In partnership with RIKEN (the Institute of Physical and Chemical Research), Sony promised to commercialize this technology in TVs and cellphones sometime around 2010. In May 2010 Sony showcased a rollable TFT-driven OLED display. Samsung In late 2010, Samsung Electronics announced the development of a prototype 4.5 inch flexible AMOLED display. The prototype device was then showcased at Consumer Electronics Show 2011. During the 2011 Q3 quarterly earnings call, Samsung's vice president of investor relations, Robert Yi, confirmed the company's intentions of applying the technology and releasing products utilizing it by early 2012. In January 2012 Samsung acquired Liquavista, a company with expertise in manufacturing flexible displays, and announced plans to begin mass production by Q2 2012. In January 2013, Samsung unveiled its brand-new, unnamed product during the company's keynote address at CES in Las Vegas. Brian Berkeley, the senior vice president of Samsung's display lab in San Jose, California, announced the development of flexible displays. He said "the technology will let the company's partners make bendable, rollable, and foldable displays," and he demonstrated how the new phone can be rollable and flexible during his speech. During Samsung's CES 2013 keynote presentation, two prototype mobile devices codenamed "Youm" that incorporated the flexible AMOLED display technology were shown to the public. "Youm" has a curved display screen; the use of an OLED screen gives the phone deeper blacks and a higher overall contrast ratio with better power efficiency than traditional LCD displays. This phone also has the advantages of a rollable display; it is lighter, thinner, and more durable than LCD displays. Samsung stated that "Youm" panels would be seen in the market in a short time and that production would commence in 2013. Samsung subsequently released the Galaxy Round, a smartphone with an inward curving screen and body, in October 2013.
One of the Youm concepts, which featured a curved screen edge used as a secondary area for notifications and shortcuts, was developed into the Galaxy Note Edge released in 2014. In 2015, Samsung applied the technology to its flagship Galaxy S series with the release of the Galaxy S6 Edge, a variant of the S6 model with a screen sloped over both sides of the device. During a developer conference in 2018, Samsung showed a foldable smartphone prototype, which was subsequently revealed in February 2019 as the Galaxy Fold. ASU The Flexible Display Center (FDC) at Arizona State University announced a continued effort in forwarding flexible displays in 2012. On 30 May, in partnership with Army Research Lab scientists, ASU announced that it had successfully manufactured the world's largest flexible OLED display using thin-film transistor (TFT) technology. ASU intends the display to be used in "thin, lightweight, bendable and highly rugged devices." Xiaomi In January 2019, Chinese manufacturer Xiaomi showed a foldable smartphone prototype. Xiaomi's Lin Bin demoed the device in a video on the Weibo social network. The device features a large foldable display that curves 180 degrees inwards on two sides. Folded, the tablet turns into a smartphone with a screen diagonal of 4.5 inches, adjusting the user interface on the fly. Advantages Flexible displays have many advantages over glass-based ones: better durability, lighter weight, a thinner profile, and the ability to be curved for use in many devices. Moreover, a major difference between glass and rollable displays is that the display area of a rollable display can be bigger than the device itself: a flexible display measuring, for example, 5 inches in diagonal and rolling to a diameter of 7.5 mm can be stored in a device smaller than the screen itself and close to 15 mm in thickness. Technical details Electronic paper Flexible displays that use electronic paper technology commonly use electrophoretic or electrowetting technologies. However, each type of flexible electronic paper varies in specification due to different implementation techniques by different companies. HP and ASU e-paper The flexible electronic paper display technology co-developed by Arizona State University and HP employs a manufacturing process developed by HP Labs called Self-Aligned Imprint Lithography (SAIL). The screens are made by layering stacks of semiconductor materials and metals between pliable plastic sheets. The stacks need to be perfectly aligned and stay that way. Alignment is difficult to achieve because heat during manufacturing can deform the materials, and the resulting screen also needs to remain flexible. The SAIL process gets around this by 'printing' the semiconductor pattern on a fully composed substrate, so that the layers always remain in perfect alignment. The material the screen is based on withstands only a finite number of full rolls, hence limiting its commercial application as a flexible display. Specifications provided regarding the prototype display are as follows: flexible and rollable up to "about half a dozen times" "unbreakable" AUO e-paper The flexible electronic paper display announced by AUO is unique as it is the only solar-powered variant. A separate rechargeable battery is also attached for when solar charging is unavailable. 
Specifications 6-inch diagonal display size radius of curvature can reach 100 mm 9:1 high contrast ratio reflectance of 33% 16 gray levels solar powered "unbreakable" LG e-paper Specifications: 6-inch diagonal display size 1024x768 (XGA) resolution 4:3 aspect ratio TFT-based electronic display "allows bending at a range of 40 degrees from the center of the screen" 0.7 mm thickness from the side 14 g weight can drop from 1.5 m above ground with no resultant damage "unbreakable" (from tests with a small urethane hammer) List of displays by their reported curvature (lower is more sharply curved) OLED Many flexible displays are based on OLED technology and its variants. Though this technology is relatively new in comparison with e-paper based flexible displays, implementation of OLED flexible displays saw considerable growth in the last few years. ASU Specifications: 6-inch diagonal display size 480x360 resolution 4:3 aspect ratio OLED display technology with a TFT back plane Samsung Specifications: 4.5-inch diagonal display size 800x480 (WVGA), 1280x720 (WXGA) and 2560x1600 (WQXGA) resolutions AMOLED display technology "unbreakable" Concept devices Mobile devices In May 2011, the Human Media Lab at Queen's University in Canada introduced PaperPhone, the first flexible smartphone, in partnership with the Arizona State University Flexible Display Center. PaperPhone used five bend sensors to implement navigation of the user interface through bend gestures applied to the corners and sides of the display. In January 2013, the Human Media Lab introduced the first flexible tablet PC, PaperTab, in collaboration with Plastic Logic and Intel Labs, at CES. PaperTab is a multi-display environment in which each display represents a window, app or computer document. Displays are tracked in 3D to allow multi-display operations, such as collating displays to enlarge the display space, or pointing with one display onto another to pull open a document file. In April 2013 in Paris, the Human Media Lab, in collaboration with Plastic Logic, unveiled the world's first actuated flexible smartphone prototype, MorePhone. MorePhone actuates its body to notify users upon receiving a phone call or message. Nokia introduced the Kinetic concept phone at Nokia World 2011 in London. The flexible OLED display allows users to interact with the phone by twisting, bending, squeezing and folding in different manners across both the vertical and horizontal planes. The technology journalist website Engadget described interactions such as "[when] bend the screen towards yourself, [the device] acts as a selection function, or zooms in on any pictures you're viewing." Nokia envisioned this type of device being available to consumers in "as little as three years", and claimed to already possess "the technology to produce it." At CES 2013, Samsung showcased two handsets which incorporate AMOLED flexible display technology during its keynote presentation, the Youm and an unnamed Windows Phone 8 prototype device. The Youm possessed a static implementation of flexible AMOLED display technology, as its screen has a set curvature along one of its edges. The benefit of the curvature allows users "to read text messages, stock tickers, and other notifications from the side of the device even if [the user] have a case covering the screen." The unnamed Windows Phone 8 prototype device was composed of a solid base from which a flexible AMOLED display extends. 
The AMOLED display itself bends and was described by Samsung representatives as "virtually unbreakable even when dropped". Brian Berkeley, the senior vice president of Samsung Display, believed that this flexible form factor "will really begin to change how people interact with their devices, opening up new lifestyle possibilities ... [and] allow our partners to create a whole new ecosystem of devices." The Youm's form factor was ultimately utilized on the Galaxy Note Edge and on future Samsung Galaxy S series devices. ReFlex is a flexible smartphone created by Queen's University's Human Media Lab. Curved OLED TVs LG Electronics and Samsung Electronics both introduced curved OLED televisions at CES 2013, hours apart from each other. Each company presented its curved OLED prototype television as a first of its kind on account of its flexed OLED display. The technology journalist website The Verge noted that the subtle curve on the 55-inch Samsung OLED TV allowed it to have a "more panoramic, more immersive viewing experience, and actually improves viewing angles from the side." A similar experience was reported when viewing the curved 55-inch LG OLED TV. The LG set is also 3D-capable, in addition to the curvature. See also Organic user interface (OUI), the category of user interfaces commonly implemented on consumer devices with flexible displays. Flexible glass Fish scale Modular design Smart watch MSG Sphere Evans & Sutherland References Flexible displays Display technology Electronic paper technology Liquid crystal displays Flexible electronics
Flexible display
[ "Materials_science", "Mathematics", "Engineering" ]
3,728
[ "Flexible displays", "Electronic engineering", "Flexible electronics", "Display technology", "Planes (geometry)", "Thin films" ]
6,650,279
https://en.wikipedia.org/wiki/Spacecraft%20electric%20propulsion
Spacecraft electric propulsion (or just electric propulsion) is a type of spacecraft propulsion technique that uses electrostatic or electromagnetic fields to accelerate mass to high speed and thus generate thrust to modify the velocity of a spacecraft in orbit. The propulsion system is controlled by power electronics. Electric thrusters typically use much less propellant than chemical rockets because they have a higher exhaust speed (operate at a higher specific impulse) than chemical rockets. Because the available electric power is limited, the thrust is much weaker than that of chemical rockets, but electric propulsion can provide thrust for a longer time. Electric propulsion was first demonstrated in the 1960s and is now a mature and widely used technology on spacecraft. American and Russian satellites have used electric propulsion for decades. Over 500 spacecraft operating throughout the Solar System use electric propulsion for station keeping, orbit raising, or primary propulsion. In the future, the most advanced electric thrusters may be able to impart a delta-v of about 100 km/s, which is enough to take a spacecraft to the outer planets of the Solar System (with nuclear power), but is insufficient for interstellar travel. An electric rocket with an external power source (for example, power transmitted by laser to its photovoltaic panels) is a theoretical possibility for interstellar flight. However, electric propulsion is not suitable for launches from the Earth's surface, as it offers too little thrust. On a journey to Mars, an electrically powered ship might be able to carry 70% of its initial mass to the destination, while a chemical rocket could carry only a few percent. History The idea of electric propulsion for spacecraft was introduced in 1911 by Konstantin Tsiolkovsky. Earlier, Robert Goddard had noted such a possibility in his personal notebook. On 15 May 1929, the Soviet research laboratory Gas Dynamics Laboratory (GDL) commenced development of electric rocket engines. Headed by Valentin Glushko, the laboratory created the world's first example of an electrothermal rocket engine in the early 1930s. This early work by GDL was steadily carried on, and electric rocket engines were used in the 1960s on board the Voskhod 1 spacecraft and the Zond 2 Mars probe. The first test of electric propulsion was an experimental ion engine carried on board the Soviet Zond 1 spacecraft in April 1964; however, it operated erratically, possibly due to problems with the probe. The Zond 2 spacecraft also carried six pulsed plasma thrusters (PPT) that served as actuators of the attitude control system. The PPT propulsion system was tested for 70 minutes on 14 December 1964, when the spacecraft was 4.2 million kilometers from Earth. The first successful demonstration of an ion engine was NASA's SERT-1 (Space Electric Rocket Test) spacecraft. It launched on 20 July 1964 and operated for 31 minutes. A follow-up mission, SERT-2, launched on 3 February 1970. It carried two ion thrusters; one operated for more than five months and the other for almost three months. Electrically powered propulsion with a nuclear reactor was considered by Tony Martin for the interstellar Project Daedalus in 1973, but the approach was rejected because of its thrust profile, the weight of the equipment needed to convert nuclear energy into electricity, and the resulting small acceleration, which would have taken a century to reach the desired speed. 
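The payload fractions mentioned in the introduction follow from the Tsiolkovsky rocket equation. The following minimal sketch is illustrative and not from the article: the exhaust velocities (4.5 km/s for a chemical engine, 30 km/s for an ion thruster) and the 6 km/s transfer delta-v are assumed round numbers, and real mission figures also depend on trajectory, staging and tankage.

import math

def propellant_fraction(delta_v, v_exhaust):
    # Tsiolkovsky rocket equation: fraction of the initial mass that
    # must be propellant to achieve a given delta-v.
    return 1.0 - math.exp(-delta_v / v_exhaust)

DELTA_V = 6_000.0  # m/s, assumed one-way transfer delta-v (illustrative)

for name, v_e in [("chemical engine (Isp ~ 450 s)", 4_500.0),
                  ("ion thruster (Isp ~ 3000 s)", 30_000.0)]:
    f = propellant_fraction(DELTA_V, v_e)
    print(f"{name}: {f:.0%} propellant, {1 - f:.0%} delivered mass")

With these assumptions the chemical engine must devote roughly three quarters of the initial mass to propellant, while the ion thruster needs less than a fifth, which is the qualitative point behind the Mars comparison above.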
By the early 2010s, many satellite manufacturers were offering electric propulsion options on their satellites, mostly for on-orbit attitude control, while some commercial communication satellite operators were beginning to use them for geosynchronous orbit insertion in place of traditional chemical rocket engines. Types Ion and plasma drives These types of rocket-like reaction engines use electric energy to obtain thrust from propellant. Electric propulsion thrusters for spacecraft may be grouped into three families based on the type of force used to accelerate the ions of the plasma: Electrostatic If the acceleration is caused mainly by the Coulomb force (i.e. application of a static electric field in the direction of the acceleration), the device is considered electrostatic. Types: Gridded ion thruster NASA Solar Technology Application Readiness (NSTAR) HiPEP Radiofrequency ion thruster Hall-effect thruster, including its subtypes Stationary Plasma Thruster (SPT) and Thruster with Anode Layer (TAL) Colloid ion thruster Field-emission electric propulsion Nano-particle field extraction thruster Electrothermal The electrothermal category groups devices that use electromagnetic fields to generate a plasma to increase the temperature of the bulk propellant. The thermal energy imparted to the propellant gas is then converted into kinetic energy by a nozzle made either of solid material or of magnetic fields. Low molecular weight gases (e.g. hydrogen, helium, ammonia) are preferred propellants for this kind of system. An electrothermal engine uses a nozzle to convert heat into linear motion, so it is a true rocket even though the energy producing the heat comes from an external source. Performance of electrothermal systems in terms of specific impulse (Isp) is 500 to ~1000 seconds, which exceeds that of cold gas thrusters, monopropellant rockets, and even most bipropellant rockets. In the USSR, electrothermal engines entered use in 1971; the Soviet "Meteor-3", "Meteor-Priroda" and "Resurs-O" satellite series and the Russian "Elektro" satellite are equipped with them. Electrothermal systems by Aerojet (MR-510) are currently used on Lockheed Martin A2100 satellites using hydrazine as a propellant. Resistojet Arcjet Microwave Variable specific impulse magnetoplasma rocket (VASIMR) Electromagnetic Electromagnetic thrusters accelerate ions either by the Lorentz force or by the effect of electromagnetic fields where the electric field is not in the direction of the acceleration. Types: Electrodeless plasma thruster Magnetoplasmadynamic thruster Pulsed inductive thruster Pulsed plasma thruster Helicon Double Layer Thruster Magnetic field oscillating amplified thruster Non-ion drives Photonic A photonic drive interacts only with photons. Electrodynamic tether Electrodynamic tethers are long conducting wires, such as one deployed from a tether satellite, which can operate on electromagnetic principles as generators, by converting their kinetic energy to electric energy, or as motors, converting electric energy to kinetic energy. Electric potential is generated across a conductive tether by its motion through the Earth's magnetic field. The choice of the metal conductor to be used in an electrodynamic tether is determined by factors such as electrical conductivity and density. Secondary factors, depending on the application, include cost, strength, and melting point. 
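The specific-impulse figures above translate directly into the thrust obtainable per unit of electrical power: for an ideal thruster, thrust is T = mdot * v_e and jet power is P = 0.5 * mdot * v_e**2, so thrust per kilowatt falls as specific impulse rises. A minimal sketch with assumed, illustrative Isp values (not taken from the article):

G0 = 9.80665  # standard gravity, m/s^2

def thrust_per_kw(isp_seconds, efficiency=1.0):
    # Ideal relations: T = mdot * v_e and P = 0.5 * mdot * v_e**2,
    # hence T / P = 2 * efficiency / v_e, with v_e = Isp * g0.
    v_e = isp_seconds * G0
    return 2.0 * efficiency / v_e * 1000.0  # newtons per kilowatt

for name, isp in [("resistojet", 300), ("arcjet", 600),
                  ("Hall-effect thruster", 1600), ("gridded ion thruster", 3000)]:
    print(f"{name}: Isp {isp} s -> {thrust_per_kw(isp):.2f} N per kW")

At the few kilowatts available on a typical spacecraft this works out to millinewton-to-newton thrust levels, which is why electric thrusters must fire for long durations, as the next section discusses.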
Controversial Some proposed propulsion methods apparently violate currently understood laws of physics, including: Quantum Vacuum Thruster EM Drive or Cannae Drive Steady vs. unsteady Electric propulsion systems can be characterized as either steady (continuous firing for a prescribed duration) or unsteady (pulsed firings accumulating to a desired impulse). These classifications can be applied to all types of propulsion engines. Dynamic properties Electrically powered rocket engines provide thrust several orders of magnitude lower than that of chemical rockets because of the limited electrical power available in a spacecraft. A chemical rocket imparts energy to the combustion products directly, whereas an electrical system requires several steps. However, the high exhaust velocity and the lower reaction mass expended for the same thrust allow electric rockets to run on less fuel. This differs from the typical chemically powered spacecraft, whose engines consume fuel quickly, forcing the spacecraft to mostly follow an inertial trajectory. When near a planet, low-thrust propulsion may not offset the gravitational force. An electric rocket engine cannot provide enough thrust to lift the vehicle from a planet's surface, but a low thrust applied for a long interval can allow a spacecraft to manoeuvre near a planet. See also Magnetic sail, a proposed system powered by solar wind from the Sun or any star List of spacecraft with electric propulsion, a list of past and proposed spacecraft which used electric propulsion Rocket propulsion technologies (disambiguation) References External links NASA Jet Propulsion Laboratory The technological and commercial expansion of electric propulsion - D. Lev et al. Electric (Ion) Propulsion, University Center for Atmospheric Research, University of Colorado at Boulder, 2000. Distributed Power Architecture for Electric Propulsion Choueiri, Edgar Y. (2009). New dawn of electric rocket Robert G. Jahn and Edgar Y. Choueiri. Electric Propulsion Colorado State University Electric Propulsion and Plasma Engineering (CEPPE) Laboratory Stationary plasma thrusters (PDF) electric space propulsion Public Lessons Learned Entry: 0736 A Critical History of Electric Propulsion: The First Fifty Years (1906–1956) - AIAA-2004-3334 Aerospace America, AIAA publication, December 2005, Propulsion and Energy section, pp. 54–55, written by Mitchell Walker. Russian inventions Soviet inventions Spacecraft propulsion Electric motors
Spacecraft electric propulsion
[ "Technology", "Engineering" ]
1,799
[ "Electrical engineering", "Engines", "Electric motors" ]
27,116,986
https://en.wikipedia.org/wiki/Hydrological%20Ensemble%20Prediction%20Experiment
HEPEX is an international initiative bringing together hydrologists, meteorologists, researchers and end users to develop advanced probabilistic hydrological forecast techniques for improved flood, drought and water management. HEPEX was launched in 2004 as an independent, cooperative international scientific activity. During the first meeting, the overarching goal was defined as follows: to develop and test procedures to produce reliable hydrological ensemble forecasts, and to demonstrate their utility in decision making related to the water, environmental and emergency management sectors. Key questions of HEPEX are: What adaptations are required for meteorological ensemble systems to be coupled with hydrological ensemble systems? How should the existing hydrological ensemble prediction systems be modified to account for all sources of uncertainty within a forecast? What is the best way for the user community to take advantage of ensemble forecasts and to make better decisions based on them? The applications of hydrological ensemble predictions span a wide range of spatio-temporal scales, from short-term and very localized predictions to global climate change modeling. HEPEX is organised around six major themes: Input and pre-processing Ensemble techniques and process modelling Data assimilation Post-processing Verification Communication and use in decision making Organisation of HEPEX HEPEX is currently co-chaired by NOAA, the European Centre for Medium-Range Weather Forecasts and the European Commission Joint Research Centre. Co-chairs are elected during plenary HEPEX meetings. There is no formal membership for HEPEX. The HEPEX community is established through the active participation of scientists, end users and decision makers in research, discussions and exchange of information on topics related to probabilistic hydrological predictions for floods, droughts, water management or related topics. The community has been very active and has grown in importance. Information on the initiative and the possibility to actively contribute to ongoing discussions can be found on the HEPEX website. HEPEX webinars can be followed online, with the possibility of participating in the discussion; recordings are afterwards made available for online viewing. See also European Flood Alert System - Probabilistic flood forecasting on European Scale HEPEX webinars European Centre for Medium-Range Weather Forecasts NOAA External links HEPEX webpage Floods Portal of the EC Joint Research Centre MAP D-PHASE Hydrology
Hydrological Ensemble Prediction Experiment
[ "Chemistry", "Engineering", "Environmental_science" ]
458
[ "Hydrology", "Environmental engineering" ]
27,117,243
https://en.wikipedia.org/wiki/C12H15NO3S
{{DISPLAYTITLE:C12H15NO3S}} The molecular formula C12H15NO3S (molar mass: 253.32 g/mol, exact mass: 253.0773 u) may refer to: Benzylmercapturic acid Thiorphan Molecular formulas
C12H15NO3S
[ "Physics", "Chemistry" ]
67
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
2,765,047
https://en.wikipedia.org/wiki/Molecular%20motor
Molecular motors are natural (biological) or artificial molecular machines that are the essential agents of movement in living organisms. In general terms, a motor is a device that consumes energy in one form and converts it into motion or mechanical work; for example, many protein-based molecular motors harness the chemical free energy released by the hydrolysis of ATP in order to perform mechanical work. In terms of energetic efficiency, this type of motor can be superior to currently available man-made motors. One important difference between molecular motors and macroscopic motors is that molecular motors operate in the thermal bath, an environment in which the fluctuations due to thermal noise are significant. Examples Some examples of biologically important molecular motors: Cytoskeletal motors Myosins are responsible for muscle contraction, intracellular cargo transport, and producing cellular tension. Kinesin moves cargo inside cells away from the nucleus along microtubules, in anterograde transport. Dynein produces the axonemal beating of cilia and flagella and also transports cargo along microtubules towards the cell nucleus, in retrograde transport. Polymerisation motors Actin polymerization generates forces and can be used for propulsion. ATP is used. Microtubule polymerization using GTP. Dynamin is responsible for the separation of clathrin buds from the plasma membrane. GTP is used. Rotary motors: FoF1-ATP synthase family of proteins convert the chemical energy in ATP to the electrochemical potential energy of a proton gradient across a membrane or the other way around. The catalysis of the chemical reaction and the movement of protons are coupled to each other via the mechanical rotation of parts of the complex. This is involved in ATP synthesis in the mitochondria and chloroplasts as well as in pumping of protons across the vacuolar membrane. The bacterial flagellum responsible for the swimming and tumbling of E. coli and other bacteria acts as a rigid propeller that is powered by a rotary motor. This motor is driven by the flow of protons across a membrane, possibly using a similar mechanism to that found in the Fo motor in ATP synthase. Nucleic acid motors: RNA polymerase transcribes RNA from a DNA template. DNA polymerase turns single-stranded DNA into double-stranded DNA. Helicases separate double strands of nucleic acids prior to transcription or replication. ATP is used. Topoisomerases reduce supercoiling of DNA in the cell. ATP is used. RSC and SWI/SNF complexes remodel chromatin in eukaryotic cells. ATP is used. SMC proteins responsible for chromosome condensation in eukaryotic cells. Viral DNA packaging motors inject viral genomic DNA into capsids as part of their replication cycle, packing it very tightly. Several models have been put forward to explain how the protein generates the force required to drive the DNA into the capsid. An alternative proposal is that, in contrast with all other biological motors, the force is not generated directly by the protein, but by the DNA itself. In this model, ATP hydrolysis is used to drive protein conformational changes that alternatively dehydrate and rehydrate the DNA, cyclically driving it from B-DNA to A-DNA and back again. A-DNA is 23% shorter than B-DNA, and the DNA shrink/expand cycle is coupled to a protein-DNA grip/release cycle to generate the forward motion that propels DNA into the capsid. 
Enzymatic motors: The enzymes below have been shown to diffuse faster in the presence of their catalytic substrates, known as enhanced diffusion. They have also been shown to move directionally in a gradient of their substrates, known as chemotaxis. Their mechanisms of diffusion and chemotaxis are still debated. Possible mechanisms include solutal buoyancy, phoresis, or conformational changes leading to a change in effective diffusivity and kinetic asymmetry. Catalase Urease Aldolase Hexokinase Phosphoglucose isomerase Phosphofructokinase Glucose Oxidase A recent study has also shown that certain enzymes, such as hexokinase and glucose oxidase, aggregate or fragment during catalysis. This changes their hydrodynamic size, which can affect enhanced-diffusion measurements. Chemists have also created synthetic molecular motors that yield rotation, possibly generating torque. Organelle and vesicle transport There are two major families of molecular motors that transport organelles throughout the cell: the dynein family and the kinesin family. The two have very different structures from one another and different ways of achieving the similar goal of moving organelles around the cell. These transport distances, though only a few micrometers, are all mapped out ahead of time along microtubules. Kinesin These molecular motors always move towards the plus end of a microtubule Uses ATP hydrolysis during the process, converting ATP to ADP The process consists of the following: the "foot" of the motor binds using ATP, the "foot" proceeds a step, and then ADP comes off; this repeats until the destination has been reached The kinesin family consists of a multitude of different motor types Kinesin-1 (Conventional) Kinesin-2 (Heterotrimeric) Kinesin-5 (Bipolar) Kinesin-13 Dynein These molecular motors always move towards the minus end of a microtubule Uses ATP hydrolysis during the process, converting ATP to ADP Dynein is structured differently from kinesin, which requires it to have different movement methods. One of these methods is the power stroke, which allows the motor protein to "crawl" along the microtubule to its location. The structure of dynein consists of A stem containing A region that binds to dynactin Intermediate/light chains that attach to the dynactin-binding region A head A stalk With a domain that binds to the microtubule These molecular motors tend to take the path of the microtubules. This is most likely due to the fact that microtubules spring forth out of the centrosome and surround the entire volume of the cell. This in turn creates a "rail system" of the whole cell, with paths leading to its organelles. Theoretical considerations Because the motor events are stochastic, molecular motors are often modeled with the Fokker–Planck equation or with Monte Carlo methods. These theoretical models are especially useful when treating the molecular motor as a Brownian motor. Experimental observation In experimental biophysics, the activity of molecular motors is observed with many different experimental approaches, among them: Fluorescent methods: fluorescence resonance energy transfer (FRET), fluorescence correlation spectroscopy (FCS), total internal reflection fluorescence (TIRF). Magnetic tweezers can also be useful for analysis of motors that operate on long pieces of DNA. Neutron spin echo spectroscopy can be used to observe motion on nanosecond timescales. 
Optical tweezers (not to be confused with molecular tweezers) are well-suited for studying molecular motors because of their low spring constants. Scattering techniques: single particle tracking based on dark field microscopy or interferometric scattering microscopy (iSCAT) Single-molecule electrophysiology can be used to measure the dynamics of individual ion channels. Many more techniques are also used. As new technologies and methods are developed, it is expected that knowledge of naturally occurring molecular motors will be helpful in constructing synthetic nanoscale motors. Non-biological Recently, chemists and those involved in nanotechnology have begun to explore the possibility of creating molecular motors de novo. These synthetic molecular motors currently suffer many limitations that confine their use to the research laboratory. However, many of these limitations may be overcome as our understanding of chemistry and physics at the nanoscale increases. One step toward understanding nanoscale dynamics was made with the study of catalyst diffusion in the Grubbs catalyst system. Other systems like nanocars, while not technically motors, are also illustrative of recent efforts towards synthetic nanoscale motors. Other non-reacting molecules can also behave as motors. This has been demonstrated by using dye molecules that move directionally in gradients of polymer solution through favorable hydrophobic interactions. Another recent study has shown that dye molecules and hard and soft colloidal particles are able to move through gradients of polymer solution via excluded-volume effects. See also Brownian motor Brownian ratchet Cytoskeleton Molecular machines Molecular mechanics Molecular propeller Motor proteins Nanomotor Protein dynamics Synthetic molecular motors References External links MBInfo - Molecular Motor Activity MBInfo - Cytoskeleton-dependent MBInfo - Intracellular Transport Cymobase - A database for cytoskeletal and motor protein sequence information Jonathan Howard (2001), Mechanics of motor proteins and the cytoskeleton. Molecular machines Biophysics Cell movement
Molecular motor
[ "Physics", "Chemistry", "Materials_science", "Technology", "Biology" ]
1,841
[ "Machines", "Applied and interdisciplinary physics", "Physical systems", "Molecular machines", "Biophysics", "Nanotechnology" ]
2,767,457
https://en.wikipedia.org/wiki/Wired%20for%20Management
Wired for Management (WfM) was a primarily hardware-based system allowing a newly built computer without any software to be manipulated by a master computer that could access the hard disk of the new PC and copy installation software onto it. It could also be used to update software and monitor system status remotely. Intel developed the system in the 1990s; it is now considered obsolete. WfM included the Preboot Execution Environment (PXE) and Wake-on-LAN (WOL) standards. WfM has been replaced by the Intelligent Platform Management Interface standard for servers and Intel Active Management Technology for PCs. See also Provisioning (telecommunications) References Networking hardware System administration
Wired for Management
[ "Technology", "Engineering" ]
134
[ "Information systems", "Computer networks engineering", "Networking hardware", "System administration" ]
2,769,612
https://en.wikipedia.org/wiki/Spectral%20imaging
Spectral imaging is imaging that uses multiple bands across the electromagnetic spectrum. While an ordinary camera captures light across three wavelength bands in the visible spectrum, red, green, and blue (RGB), spectral imaging encompasses a wide variety of techniques that go beyond RGB. Spectral imaging may use the infrared, the visible spectrum, the ultraviolet, x-rays, or some combination of the above. It may include the acquisition of image data in visible and non-visible bands simultaneously, illumination from outside the visible range, or the use of optical filters to capture a specific spectral range. It is also possible to capture hundreds of wavelength bands for each pixel in an image. Multispectral imaging captures a small number of spectral bands, typically three to fifteen, through the use of varying filters and illumination. Many off-the-shelf RGB camera sensors can detect wavelengths of light from 300 nm to 1200 nm. A scene may be illuminated with near-infrared (NIR) light, and, simultaneously, an infrared-passing filter may be used on the camera to ensure that visible light is blocked and only NIR is captured in the image. Industrial, military, and scientific work, however, uses sensors built for the purpose. Hyperspectral imaging is another subcategory of spectral imaging, which combines spectroscopy and digital photography. In hyperspectral imaging, a complete spectrum or some spectral information (such as the Doppler shift or Zeeman splitting of a spectral line) is collected at every pixel in an image plane. A hyperspectral camera uses special hardware to capture hundreds of wavelength bands for each pixel, which can be interpreted as a complete spectrum. In other words, the camera has a high spectral resolution. The phrase "spectral imaging" is sometimes used as a shorthand way of referring to this technique, but it is preferable to use the term "hyperspectral imaging" in places where ambiguity may arise. Hyperspectral images are often represented as an image cube, which is a type of data cube. Applications of spectral imaging include art conservation, astronomy, solar physics, planetology, and Earth remote sensing. It also applies to digital and print reproduction, and to exhibition lighting design for small and medium cultural institutions. Systems Spectral imaging systems are systems that, through the acquisition of one or more images of a subject, are able to give back a spectrum for each pixel of the original images. There are a number of parameters that characterize the obtained data: Spatial resolution, which can be described in terms of the number of pixels for the whole image, or in terms of the minimum square area distinguishable on the surface. Typically it depends on the number of megapixels of the photographic camera Spectral resolution, which defines the smallest spectral variation that the system is able to distinguish Radiometric accuracy, which says how accurately the system measures the spectral reflectance percentage The most used way to achieve spectral imaging is to take an image for each desired band, using narrowband filters. This leads to a huge number of images and a large bank of filters when a significant spectral resolution is required. There is another, much more efficient technique, based on multibandpass filters, which allows a larger number of final bands to be obtained from a limited number of captured images. The captured images build a mathematical basis with enough information to reconstruct data for each pixel with a high spectral resolution. 
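At heart, the multibandpass reconstruction just described is a linear inverse problem: each captured band value is an inner product of the unknown per-pixel spectrum with a filter's spectral response, and the spectrum can be recovered, for example, by regularized least squares. The sketch below is a toy illustration with made-up filter responses and a synthetic spectrum; none of the numbers come from the article or from any specific commercial system.

import numpy as np

n_wavelengths = 31   # e.g. 400-700 nm sampled every 10 nm (assumption)
n_filters = 8        # number of multibandpass images captured (assumption)

# Rows of F are the (made-up) spectral responses of the filters.
grid = np.linspace(0.0, 1.0, n_wavelengths)
centers = np.linspace(0.1, 0.9, n_filters)
F = np.exp(-((grid[None, :] - centers[:, None]) / 0.12) ** 2)

true_spectrum = 0.5 + 0.4 * np.sin(3 * np.pi * grid)  # synthetic pixel spectrum
measurements = F @ true_spectrum                      # one value per filter

# Ridge-regularized least squares gives a smooth estimate of the spectrum.
lam = 1e-3
estimate = np.linalg.solve(F.T @ F + lam * np.eye(n_wavelengths),
                           F.T @ measurements)
print("reconstruction error:", np.linalg.norm(estimate - true_spectrum))

With only eight measurements per pixel the recovery is necessarily approximate; real systems choose filter responses and regularization so that the spectra of interest are reconstructed accurately.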
This multibandpass approach is the one followed by the Hypercolorimetric Multispectral Imaging (HMI) of Profilocolore SRL. See also Imaging spectroscopy Chemical imaging Dopplergraph Imaging spectrometer Vegetation index References Astronomical spectroscopy
Spectral imaging
[ "Physics", "Chemistry" ]
722
[ "Spectrum (physical sciences)", "Spectroscopy", "Astronomical spectroscopy", "Astrophysics" ]
2,769,779
https://en.wikipedia.org/wiki/Baum%C3%A9%20scale
The Baumé scale is a pair of hydrometer scales developed by French pharmacist Antoine Baumé in 1768 to measure the density of various liquids. The unit of the Baumé scale has been notated variously as degrees Baumé, B°, Bé° and simply Baumé (the accent is not always present). One scale measures the density of liquids heavier than water and the other, liquids lighter than water. The Baumé of distilled water is 0. The API gravity scale is based on errors in early implementations of the Baumé scale. Definitions Baumé degrees (heavy) originally represented the percent by mass of sodium chloride in water at 60 °F (16 °C). Baumé degrees (light) was calibrated with 0°Bé (light) being the density of 10% NaCl in water by mass and 10°Bé (light) set to the density of water. Consider, at near room temperature: +100°Bé (specific gravity, 3.325) would be among the densest fluids known (except some liquid metals), such as diiodomethane. Near 0°Bé would be approximately the density of water. −100°Bé (specific gravity, 0.615) would be among the lightest fluids known, such as liquid butane. Thus, the system can be understood as representing a practical spectrum of the density of liquids between −100 and 100, with values near 0 being the approximate density of water. Conversions The relationship between specific gravity (s.g.; i.e., water-specific gravity, the density relative to water) and degrees Baumé is a function of the temperature. Different versions of the scale may use different reference temperatures. Different conversion formulae can therefore be found in various handbooks. As an example, a 2008 handbook states the conversions between specific gravity and degrees Baumé at a temperature of 60 °F (16 °C) as: for liquids heavier than water, s.g. = 145 / (145 − °Bé), equivalently °Bé = 145 − 145/s.g.; for liquids lighter than water, s.g. = 140 / (130 + °Bé), equivalently °Bé = 140/s.g. − 130. The numerator in the specific gravity calculation is commonly known as the "modulus". Older handbooks give formulae of the same form but with slightly different moduli (no reference temperature being mentioned). Other scales Because of vague instructions or errors in translation, a large margin of error was introduced when the scale was adopted. The API gravity scale is a result of adapting to the subsequent errors from the Baumé scale. The Baumé scale is related to the Balling, Brix, Plato and 'specific gravity times 1000' scales. Use Before standardization on specific gravity around the time of World War II, the Baumé scale was generally used in industrial chemistry and pharmacology for the measurement of the density of liquids. Today the Baumé scale is still used in various industries such as sugar beet processing, ophthalmics, the starch industry, winemaking, industrial water treatment, metal finishing, and printed circuit board (PCB) fabrication. It is also used for caustic solutions in the refining process. See also Brix scale Ripeness in viticulture Notes References Further reading Oenology Units of density Food analysis
Baumé scale
[ "Physics", "Chemistry", "Mathematics" ]
598
[ "Physical quantities", "Units of density", "Quantity", "Density", "Food analysis", "Food chemistry", "Units of measurement" ]
2,769,817
https://en.wikipedia.org/wiki/Motor%20drive
A motor drive is a physical system that includes a motor. An adjustable speed motor drive is a system that includes a motor that has multiple operating speeds. A variable speed motor drive is a system that includes a motor that is continuously variable in speed. If the motor is generating electrical energy rather than using it, the motor drive could be called a generator drive, but it is often still referred to as a motor drive. A variable frequency drive (VFD) or variable speed drive (VSD) describes the electronic portion of the system that controls the speed of the motor. More generally, the term drive describes equipment used to control the speed of machinery. Many industrial processes such as assembly lines must operate at different speeds for different products. Where process conditions demand adjustment of flow from a pump or fan, varying the speed of the drive may save energy compared with other techniques for flow control. Where speeds may be selected from several different pre-set ranges, the drive is usually said to be adjustable speed. If the output speed can be changed without steps over a range, the drive is usually referred to as variable speed. Adjustable and variable speed drives may be purely mechanical (termed variators), electromechanical, hydraulic, or electronic. Sometimes "motor drive" refers to the drive used to control a motor, and the term is therefore used interchangeably with VFD or VSD. Electric motors AC electric motors can be run in fixed-speed operation determined by the number of stator pole pairs in the motor and the frequency of the alternating current supply. AC motors can be made for "pole changing" operation, reconnecting the stator winding to vary the number of poles so that two, sometimes three, speeds are obtained. For example, a machine with eight physical pairs of poles could be connected to allow running with either four or eight pole pairs; on a 60 Hz supply these would give synchronous speeds of 900 RPM and 450 RPM respectively. If speed changes are rare, the motor may be initially connected for one speed then re-wired for the other speed as process conditions change, or magnetic contactors can be used to switch between the two speeds as process needs fluctuate. Connections for more than three speeds are uneconomic. The number of such fixed-operation speeds is constrained by cost as the number of pole pairs increases. If many different speeds or continuously variable speeds are required, other methods are required. Direct-current motors allow for changes of speed by adjusting the shunt field current. Another way of changing the speed of a direct current motor is to change the voltage applied to the armature. An adjustable-speed motor drive might consist of an electric motor and a controller that is used to adjust the motor's operating speed. The combination of a constant-speed motor and a continuously adjustable mechanical speed-changing device might also be called an "adjustable speed motor drive". Power electronics-based variable frequency drives are rapidly making older technologies redundant. Reasons for using adjustable speed drives Process control and energy conservation are the two primary reasons for using an adjustable-speed drive. Historically, adjustable-speed drives were developed for process control, but energy conservation has emerged as an equally important objective. Acceleration control An adjustable-speed drive can often provide smoother operation compared to an alternative fixed-speed mode of operation. 
For example, in a sewage lift station, sewage usually flows through sewer pipes under the force of gravity to a wet well location. From there it is pumped up to a treatment process. When fixed-speed pumps are used, the pumps are set to start when the level of the liquid in the wet well reaches some high point and to stop when the level has been reduced to a low point. Cycling the pumps on and off results in frequent high surges of electric current to start the motors, producing electromagnetic and thermal stresses in the motors and power control equipment; the pumps and pipes are subjected to mechanical and hydraulic stresses; and the sewage treatment process is forced to accommodate surges in the flow of sewage through the process. When adjustable speed drives are used, the pumps operate continuously at a speed that increases as the wet well level increases. This matches the outflow to the average inflow and provides a much smoother operation of the process. Saving energy by using efficient adjustable-speed drives Fans and pumps consume a large part of the energy used by industrial electric motors. Where fans and pumps serve a varying process load, a simple way to vary the delivered quantity of fluid is with a damper or valve in the outlet of the fan or pump, which, by its increased pressure drop, reduces the flow in the process. However, this additional pressure drop represents energy loss. Sometimes it is economically practical to put in some device that recovers this otherwise lost energy. With a variable-speed drive on the pump or fan, the supply can be adjusted to match demand and no extra loss is introduced. For example, when a fan is driven directly by a fixed-speed motor, the airflow is designed for the maximum demand of the system, and so will usually be higher than it needs to be. Airflow can be regulated using a damper, but it is more efficient to directly regulate fan motor speed. Following the affinity laws, at 50% of the airflow the variable-speed motor consumes only about 20% of the input power, while the fixed-speed motor still consumes about 85% of the input power at half the flow (see the code sketch below). Types of drives Some prime movers (internal combustion engines, reciprocating or turbine steam engines, water wheels, and others) have a range of operating speeds which can be varied continuously (by adjusting fuel rate or similar means). However, efficiency may be low at extremes of the speed range, and there may be system reasons why the prime mover speed cannot be maintained at very low or very high speeds. Before electric motors were invented, mechanical speed changers were used to control the mechanical power provided by water wheels and steam engines. When electric motors came into use, means of controlling their speed were developed almost immediately. Today, various types of mechanical drives, hydraulic drives and electric drives compete with one another in the industrial drives market. Mechanical drives There are two types of mechanical drives: variable-pitch drives and traction drives. Variable-pitch drives are pulley and belt drives in which the pitch diameter of one or both pulleys can be adjusted. Traction drives transmit power through metal rollers running against mating metal rollers. The input-output speed ratio is adjusted by moving the rollers to change the diameters of the contact path. Many different roller shapes and mechanical designs have been used. 
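The affinity-law figures quoted above can be checked in a few lines. This is a minimal sketch; the ideal cube law ignores motor, drive and duct losses, which is why the practical variable-speed figure (about 20% power at half flow) sits above the theoretical 12.5%.

def ideal_fan_power_fraction(flow_fraction):
    # Fan affinity laws: flow ~ speed, pressure ~ speed**2, power ~ speed**3,
    # so shaft power scales with the cube of the flow fraction.
    return flow_fraction ** 3

for flow in (1.0, 0.75, 0.5, 0.25):
    print(f"{flow:.0%} flow -> {ideal_fan_power_fraction(flow):.1%} of rated power")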
Hydraulic adjustable speed drives There are three types of hydraulic drives: hydrostatic drives, hydrodynamic drives and hydroviscous drives. A hydrostatic drive consists of a hydraulic pump and a hydraulic motor. Since positive displacement pumps and motors are used, one revolution of the pump or motor corresponds to a set volume of fluid flow that is determined by the displacement, regardless of speed or torque. Speed is regulated by regulating the fluid flow with a valve or by changing the displacement of the pump or motor. Many different design variations have been used. A swash plate drive employs an axial piston pump or motor in which the swash plate angle can be changed to adjust the displacement and thus adjust the speed. Hydrodynamic drives or fluid couplings use oil to transmit torque between an impeller on the constant-speed input shaft and a rotor on the adjustable-speed output shaft. The torque converter in the automatic transmission of a car is a hydrodynamic drive. A hydroviscous drive consists of one or more discs connected to an input shaft pressed against a similar disc or discs connected to an output shaft. Torque is transmitted from the input shaft to the output shaft through an oil film between the discs. The transmitted torque is proportional to the pressure exerted by a hydraulic cylinder that presses the discs together. This effect may be used as a clutch, such as the Hele-Shaw clutch, or as a variable-speed drive, such as the Beier variable-ratio gear. Continuously variable transmission (CVT) Mechanical and hydraulic adjustable speed drives are usually called "transmissions" or "continuously variable transmissions" when they are used in vehicles, farm equipment and some other types of equipment. Electric adjustable speed drives Types of control Control can be manual, by means of a potentiometer or a linear Hall-effect device (which is more resistant to dust and grease), or automatic, for example using a rotational detector such as a Gray-code optical encoder. Types of drives There are three general categories of electric drives: DC motor drives, eddy current drives and AC motor drives. Each of these general types can be further divided into numerous variations. Electric drives generally include both an electric motor and a speed control unit or system. The term drive is often applied to the controller without the motor. In the early days of electric drive technology, electromechanical control systems were used. Later, electronic controllers were designed using various types of vacuum tubes. As suitable solid state electronic components became available, new controller designs incorporated the latest electronic technology. DC drives DC drives are DC motor speed control systems. Since the speed of a DC motor is directly proportional to armature voltage and inversely proportional to motor flux (which is a function of field current), either armature voltage or field current can be used to control speed. Eddy current drives An eddy current drive (sometimes called a "Dynamatic drive", after one of the most common brand names) consists of a fixed-speed motor (generally an induction motor) and an eddy current clutch. The clutch contains a fixed-speed rotor and an adjustable-speed rotor separated by a small air gap. A direct current in a field coil produces a magnetic field that determines the torque transmitted from the input rotor to the output rotor. 
The controller provides closed loop speed regulation by varying the clutch current, only allowing the clutch to transmit enough torque to operate at the desired speed. Speed feedback is typically provided via an integral AC tachometer. Eddy current drives are slip-controlled systems whose slip energy is necessarily all dissipated as heat. Such drives are therefore generally less efficient than AC/DC-AC conversion based drives. The motor develops the torque required by the load and operates at full speed. The output shaft transmits the same torque to the load, but turns at a slower speed. Since power is proportional to torque multiplied by speed, the input power is proportional to motor speed times operating torque, while the output power is output speed times operating torque. The difference between the motor speed and the output speed is called the slip speed. Power proportional to the slip speed times the operating torque is dissipated as heat in the clutch. While it has been surpassed by the variable-frequency drive in most variable-speed applications, the eddy current clutch is still often used to couple motors to high-inertia loads that are frequently stopped and started, such as stamping presses, conveyors, hoisting machinery, and some larger machine tools, allowing gradual starting, with less maintenance than a mechanical clutch or hydraulic transmission. AC drives AC drives are AC motor speed control systems. A slip-controlled wound-rotor induction motor (WRIM) drive controls speed by varying motor slip via the rotor slip rings, either by electronically recovering slip power fed back to the stator bus or by varying the resistance of external resistors in the rotor circuit. Along with eddy current drives, resistance-based WRIM drives have lost popularity because they are less efficient than AC/DC-AC-based WRIM drives, and they are used only in special situations. Slip energy recovery systems convert slip energy and feed it back to the WRIM's stator supply; such recovered energy would otherwise be wasted as heat in resistance-based WRIM drives. Slip energy recovery variable-speed drives are used in such applications as large pumps and fans, wind turbines, shipboard propulsion systems, large hydro-pumps and generators, and utility energy storage flywheels. Early slip energy recovery systems using electromechanical components for AC/DC-AC conversion (i.e., consisting of a rectifier, DC motor and AC generator) are termed Kramer drives, with more recent systems using variable-frequency drives (VFDs) being referred to as static Kramer drives. In general, a VFD in its most basic configuration controls the speed of an induction or synchronous motor by adjusting the frequency of the power supplied to the motor. When changing VFD frequency in standard low-performance variable-torque applications using Volt-per-Hertz (V/Hz) control, the AC motor's voltage-to-frequency ratio is maintained constant, and its power varied, between the minimum and maximum operating frequencies up to a base frequency. Constant-voltage operation above the base frequency, and therefore with a reduced V/Hz ratio, provides reduced torque and constant power capability. Regenerative AC drives are AC drives that have the capacity to recover the braking energy of a load moving faster than the motor speed (an overhauling load) and return it to the power system. 
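The constant-V/Hz behaviour described above is straightforward to express in code. A minimal sketch, with assumed ratings (400 V, 50 Hz base frequency) that are not from the article; practical drives add refinements such as low-speed voltage boost and slip compensation.

V_RATED = 400.0  # motor rated voltage in volts (assumption)
F_BASE = 50.0    # base frequency in hertz (assumption)

def vhz_voltage_command(f_cmd):
    # Basic V/Hz control: voltage tracks frequency up to the base
    # frequency, then is held constant (reduced V/Hz, field weakening).
    if f_cmd <= F_BASE:
        return V_RATED * f_cmd / F_BASE  # constant volts-per-hertz ratio
    return V_RATED                       # constant voltage above base

for f in (10, 25, 50, 75):
    print(f"{f} Hz -> {vhz_voltage_command(f):.0f} V")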
See also DC injection braking Doubly fed electric machine Regenerative variable-frequency drives Scherbius Drive References Robotics hardware Electric motors Electric power systems components Mechanical power transmission Mechanical power control Variators Electric motor control
Motor drive
[ "Physics", "Technology", "Engineering" ]
2,727
[ "Robotics hardware", "Engines", "Electric motors", "Robotics engineering", "Mechanics", "Electrical engineering", "Mechanical power transmission", "Mechanical power control" ]
2,770,172
https://en.wikipedia.org/wiki/Jojoba%20oil
Jojoba oil is the liquid produced in the seed of the Simmondsia chinensis (jojoba) plant, a shrub, which is native to southern Arizona, southern California, and northwestern Mexico. The oil makes up approximately 50% of the jojoba seed by weight. The terms "jojoba oil" and "jojoba wax" are often used interchangeably because the wax visually appears to be a mobile oil, but as a wax it is composed almost entirely (~97%) of mono-esters of long-chain fatty acids (wax esters) and alcohols (isopropyl jojobate), accompanied by only a tiny fraction of triglyceride esters. This composition accounts for its extreme shelf-life stability and extraordinary resistance to high temperatures, compared with true vegetable oils. History The O'odham Native American tribe extracted the oil from jojoba seeds to treat sores and wounds. The collection and processing of the seed from naturally occurring stands marked the beginning of jojoba domestication in the early 1970s. In 1943, natural resources of the U.S., including jojoba oil, were used during wartime as additives to motor oil, transmission oil, and differential gear oil. Machine guns were lubricated and maintained with jojoba. Appearance Unrefined jojoba oil appears as a clear golden liquid at room temperature with a slightly nutty odor. Refined jojoba oil is colorless and odorless. The melting point of jojoba oil is approximately 10 °C and the iodine value is approximately 80. Jojoba oil is relatively shelf-stable when compared with other vegetable oils, mainly because it contains few triglycerides, unlike most other vegetable oils such as grape seed oil and coconut oil. It has an oxidative stability index of approximately 60, which means that it is more shelf-stable than safflower oil, canola oil, almond oil, or squalene but less so than castor oil and coconut oil. Chemistry The fatty acid content of jojoba oil can vary significantly depending on the soil and climate in which the plant is grown, as well as when it is harvested and how the oil is processed. In general, it contains a high proportion of mono-unsaturated fatty acids, primarily 11-eicosenoic acid (gondoic acid). Uses Being derived from a plant that is slow-growing and difficult to cultivate, jojoba oil is mainly used for small-scale applications such as pharmaceuticals and cosmetics. Overall, it is used as a replacement for whale oil and its derivatives, such as cetyl alcohol. The ban on importing whale oil to the U.S. in 1971 led to the discovery that jojoba oil is "in many regards superior to sperm whale oil for applications in the cosmetics and other industries". Jojoba oil is found as an additive in many cosmetic products, especially those marketed as being made from natural ingredients. In particular, products commonly containing jojoba include lotions and moisturizers and hair shampoos and conditioners. Like olestra, jojoba oil is edible but non-caloric and non-digestible, meaning the oil will pass out of the intestines unchanged and can mimic steatorrhea, a health condition characterized by the inability to digest or absorb normal dietary fats. Thus, this indigestible oil is present in the stool, but does not indicate an intestinal disease. If consumption of jojoba oil is discontinued in a healthy person, the indigestible oil in the stool will disappear. Jojoba oil also contains approximately 12.1% of the fatty acid erucic acid, which would appear to have toxic effects on the heart at high enough doses, if it were digestible. 
See also Oleochemical Simmondsia chinensis (jojoba) seed powder References External links International Jojoba Export Council Description and chemical structure of jojoba oil Can This Unassuming Little Desert Shrub Really Save The World? - The first article from 1977 Waxes Vegetable oils Cosmetics chemicals
Jojoba oil
[ "Physics" ]
849
[ "Materials", "Matter", "Waxes" ]
2,770,230
https://en.wikipedia.org/wiki/Dirichlet%27s%20approximation%20theorem
In number theory, Dirichlet's theorem on Diophantine approximation, also called Dirichlet's approximation theorem, states that for any real numbers $\alpha$ and $N$, with $1 \leq N$, there exist integers $p$ and $q$ such that $1 \leq q \leq N$ and $|q\alpha - p| \leq \frac{1}{\lfloor N \rfloor + 1}$. Here $\lfloor N \rfloor$ represents the integer part of $N$. This is a fundamental result in Diophantine approximation, showing that any real number has a sequence of good rational approximations: in fact an immediate consequence is that for a given irrational α, the inequality $0 < \left|\alpha - \frac{p}{q}\right| < \frac{1}{q^2}$ is satisfied by infinitely many integers p and q. This shows that any irrational number has irrationality measure at least 2. The Thue–Siegel–Roth theorem says that, for algebraic irrational numbers, the exponent of 2 in the corollary to Dirichlet's approximation theorem is the best we can do: such numbers cannot be approximated by any exponent greater than 2. The Thue–Siegel–Roth theorem uses advanced techniques of number theory, but many simpler numbers such as the golden ratio can be much more easily verified to be inapproximable beyond exponent 2. Simultaneous version The simultaneous version of Dirichlet's approximation theorem states that given real numbers $\alpha_1, \ldots, \alpha_d$ and a natural number $N$, then there are integers $p_1, \ldots, p_d, q \in \mathbb{Z}$ with $1 \leq q \leq N$ such that $\left|\alpha_i - \frac{p_i}{q}\right| \leq \frac{1}{qN^{1/d}}$ for $i = 1, \ldots, d$. Method of proof Proof by the pigeonhole principle This theorem is a consequence of the pigeonhole principle. Peter Gustav Lejeune Dirichlet, who proved the result, used the same principle in other contexts (for example, the Pell equation) and by naming the principle (in German) popularized its use, though its status in textbook terms comes later. The method extends to simultaneous approximation. Proof outline: Let $\alpha$ be an irrational number and $n$ be an integer. For every $k \in \{1, 2, \ldots, n+1\}$ we can write $k\alpha = m_k + x_k$ such that $m_k$ is an integer and $0 \leq x_k < 1$. One can divide the interval $[0, 1)$ into $n$ smaller intervals of measure $\frac{1}{n}$. Now, we have $n+1$ numbers $x_1, \ldots, x_{n+1}$ and $n$ intervals. Therefore, by the pigeonhole principle, at least two of them are in the same interval. We can call those $x_{k_1}$ and $x_{k_2}$ such that $k_1 < k_2$. Now: $|(k_2 - k_1)\alpha - (m_{k_2} - m_{k_1})| = |x_{k_2} - x_{k_1}| < \frac{1}{n}.$ Dividing both sides by $k_2 - k_1$ will result in: $\left|\alpha - \frac{m_{k_2} - m_{k_1}}{k_2 - k_1}\right| < \frac{1}{(k_2 - k_1)n} \leq \frac{1}{(k_2 - k_1)^2}.$ And we proved the theorem. Proof by Minkowski's theorem Another simple proof of Dirichlet's approximation theorem is based on Minkowski's theorem applied to the set $S = \left\{ (x, y) \in \mathbb{R}^2 : -N - \tfrac{1}{2} \leq x \leq N + \tfrac{1}{2},\ |\alpha x - y| \leq \tfrac{1}{N} \right\}.$ Since the volume of $S$ is greater than $4$, Minkowski's theorem establishes the existence of a non-trivial point with integral coordinates. This proof extends naturally to simultaneous approximations by considering the set $S = \left\{ (x, y_1, \ldots, y_d) \in \mathbb{R}^{d+1} : -N - \tfrac{1}{2} \leq x \leq N + \tfrac{1}{2},\ |\alpha_i x - y_i| \leq \tfrac{1}{N^{1/d}} \right\}.$ Related theorems Legendre's theorem on continued fractions In his Essai sur la théorie des nombres (1798), Adrien-Marie Legendre derives a necessary and sufficient condition for a rational number to be a convergent of the simple continued fraction of a given real number. A consequence of this criterion, often called Legendre's theorem within the study of continued fractions, is as follows: Theorem. If α is a real number and p, q are positive integers such that $\left|\alpha - \frac{p}{q}\right| < \frac{1}{2q^2}$, then p/q is a convergent of the continued fraction of α. Proof. We follow the proof given in An Introduction to the Theory of Numbers by G. H. Hardy and E. M. Wright. Suppose α, p, q are such that $\left|\alpha - \frac{p}{q}\right| < \frac{1}{2q^2}$, and assume that α > p/q. Then we may write $\alpha - \frac{p}{q} = \frac{\theta}{q^2}$, where 0 < θ < 1/2. We write p/q as a finite continued fraction [a0; a1, ..., an], where due to the fact that each rational number has two distinct representations as finite continued fractions differing in length by one (namely, one where an = 1 and one where an ≠ 1), we may choose n to be even. (In the case where α < p/q, we would choose n to be odd.) Let p0/q0, ..., pn/qn = p/q be the convergents of this continued fraction expansion.
Set $\omega = \frac{1}{\theta} - \frac{q_{n-1}}{q_n}$, so that $\theta = \frac{q_n}{\omega q_n + q_{n-1}}$ and thus, $\alpha = \frac{p_n}{q_n} + \frac{\theta}{q_n^2} = \frac{p_n}{q_n} + \frac{1}{q_n(\omega q_n + q_{n-1})} = \frac{\omega p_n + p_{n-1}}{\omega q_n + q_{n-1}},$ where we have used the fact that $p_{n-1} q_n - p_n q_{n-1} = (-1)^n$ and that $n$ is even. Now, this equation implies that α = [a0; a1, ..., an, ω]. Since the fact that 0 < θ < 1/2 implies that ω > 1, we conclude that the continued fraction expansion of α must be [a0; a1, ..., an, b0, b1, ...], where [b0; b1, ...] is the continued fraction expansion of ω, and therefore that pn/qn = p/q is a convergent of the continued fraction of α. This theorem forms the basis for Wiener's attack, a polynomial-time exploit of the RSA cryptographic protocol that can occur for an injudicious choice of public and private keys (specifically, this attack succeeds if the prime factors of the public key n = pq satisfy p < q < 2p and the private key d is less than $\frac{1}{3}n^{1/4}$). See also Dirichlet's theorem on arithmetic progressions Hurwitz's theorem (number theory) Heilbronn set Kronecker's theorem (generalization of Dirichlet's theorem) Notes References External links Diophantine approximation Theorems in number theory
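To make the theorem's content concrete, the following sketch (our own illustration, not part of the original article) brute-forces, for a given real α and bound N, a denominator q ≤ N achieving |qα − p| ≤ 1/(N+1); the function name and the choice of α = √2 are invented for the example.

```python
from math import sqrt

def dirichlet_approximation(alpha, N):
    """Find integers p, q with 1 <= q <= N and |q*alpha - p| <= 1/(N+1).

    Dirichlet's theorem guarantees such a pair exists; here we simply
    scan all q and keep the best one (p is the nearest integer to q*alpha).
    """
    best = None
    for q in range(1, N + 1):
        p = round(q * alpha)
        err = abs(q * alpha - p)
        if best is None or err < best[2]:
            best = (p, q, err)
    p, q, err = best
    assert err <= 1 / (N + 1), "would contradict Dirichlet's theorem"
    return p, q

if __name__ == "__main__":
    p, q = dirichlet_approximation(sqrt(2), 100)
    # For alpha = sqrt(2), N = 100 this finds the convergent 99/70,
    # with |sqrt(2) - 99/70| < 1/70**2, illustrating the corollary.
    print(p, q, abs(sqrt(2) - p / q))
```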
Dirichlet's approximation theorem
[ "Mathematics" ]
1,067
[ "Mathematical theorems", "Theorems in number theory", "Mathematical relations", "Mathematical problems", "Diophantine approximation", "Approximations", "Number theory" ]
2,770,340
https://en.wikipedia.org/wiki/OS-level%20virtualization
OS-level virtualization is an operating system (OS) virtualization paradigm in which the kernel allows the existence of multiple isolated user space instances, including containers (LXC, Solaris Containers, AIX WPARs, HP-UX SRP Containers, Docker, Podman), zones (Solaris Containers), virtual private servers (OpenVZ), partitions, virtual environments (VEs), virtual kernels (DragonFly BSD), and jails (FreeBSD jail and chroot). Such instances may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares, CPU power, quantifiable hardware capabilities) of that computer. Programs running inside a container can only see the container's contents and devices assigned to the container. On Unix-like operating systems, this feature can be seen as an advanced implementation of the standard chroot mechanism, which changes the apparent root folder for the current running process and its children. In addition to isolation mechanisms, the kernel often provides resource-management features to limit the impact of one container's activities on other containers. Linux containers are all based on the virtualization, isolation, and resource management mechanisms provided by the Linux kernel, notably Linux namespaces and cgroups. Although the word container most commonly refers to OS-level virtualization, it is sometimes used to refer to fuller virtual machines operating in varying degrees of concert with the host OS, such as Microsoft's Hyper-V containers. For an overview of virtualization since 1960, see Timeline of virtualization technologies. Operation On ordinary operating systems for personal computers, a computer program can see (even though it might not be able to access) all the system's resources. They include: Hardware capabilities that can be employed, such as the CPU and the network connection Data that can be read or written, such as files, folders and network shares Connected peripherals it can interact with, such as webcam, printer, scanner, or fax The operating system may be able to allow or deny access to such resources based on which program requests them and the user account in the context in which it runs. The operating system may also hide those resources, so that when the computer program enumerates them, they do not appear in the enumeration results. Nevertheless, from a programming point of view, the computer program has interacted with those resources and the operating system has managed an act of interaction. With operating-system-virtualization, or containerization, it is possible to run programs within containers, to which only parts of these resources are allocated. A program expecting to see the whole computer, once run inside a container, can only see the allocated resources and believes them to be all that is available. Several containers can be created on each operating system, to each of which a subset of the computer's resources is allocated. Each container may contain any number of computer programs. These programs may run concurrently or separately, and may even interact with one another. Containerization has similarities to application virtualization: In the latter, only one computer program is placed in an isolated container and the isolation applies to file system only. 
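As a minimal, hedged illustration of the chroot mechanism that the article compares containers to (a sketch only, not a full container runtime; the directory path is a placeholder, and the script must run as root on a Unix-like system):

```python
import os

NEW_ROOT = "/srv/minijail"  # placeholder path; must already contain a usable file tree

def enter_chroot(new_root):
    """Change the apparent filesystem root for this process and its children.

    This is only the filesystem-isolation part of a container; real
    OS-level virtualization adds namespaces, cgroups, and resource
    management on top. Requires root privileges.
    """
    os.chroot(new_root)   # this process can no longer see files outside new_root
    os.chdir("/")         # move the working directory inside the new root

if __name__ == "__main__":
    enter_chroot(NEW_ROOT)
    print(os.listdir("/"))  # lists the contents of /srv/minijail, not the host root
```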
Uses Operating-system-level virtualization is commonly used in virtual hosting environments, where it is useful for securely allocating finite hardware resources among a large number of mutually-distrusting users. System administrators may also use it for consolidating server hardware by moving services on separate hosts into containers on the one server. Other typical scenarios include separating several programs to separate containers for improved security, hardware independence, and added resource management features. The improved security provided by the use of a chroot mechanism, however, is not perfect. Operating-system-level virtualization implementations capable of live migration can also be used for dynamic load balancing of containers between nodes in a cluster. Overhead Operating-system-level virtualization usually imposes less overhead than full virtualization because programs in OS-level virtual partitions use the operating system's normal system call interface and do not need to be subjected to emulation or be run in an intermediate virtual machine, as is the case with full virtualization (such as VMware ESXi, QEMU, or Hyper-V) and paravirtualization (such as Xen or User-mode Linux). This form of virtualization also does not require hardware support for efficient performance. Flexibility Operating-system-level virtualization is not as flexible as other virtualization approaches since it cannot host a guest operating system different from the host one, or a different guest kernel. For example, with Linux, different distributions are fine, but other operating systems such as Windows cannot be hosted. Solaris partially overcomes the limitation described above with its branded zones feature, which provides the ability to run an environment within a container that emulates an older Solaris 8 or 9 version in a Solaris 10 host. Linux branded zones (referred to as "lx" branded zones) are also available on x86-based Solaris systems, providing a complete Linux user space and support for the execution of Linux applications; additionally, Solaris provides utilities needed to install Red Hat Enterprise Linux 3.x or CentOS 3.x Linux distributions inside "lx" zones. However, in 2010 Linux branded zones were removed from Solaris; in 2014 they were reintroduced in Illumos, which is the open source Solaris fork, supporting 32-bit Linux kernels. Storage Some implementations provide file-level copy-on-write (CoW) mechanisms. (Most commonly, a standard file system is shared between partitions, and those partitions that change the files automatically create their own copies.) This is easier to back up, more space-efficient and simpler to cache than the block-level copy-on-write schemes common on whole-system virtualizers. Whole-system virtualizers, however, can work with non-native file systems and create and roll back snapshots of the entire system state.
Implementations Linux containers not listed above include: LXD, an alternative wrapper around LXC developed by Canonical Podman, an advanced Kubernetes ready root-less secure drop-in replacement for Docker with support for multiple container image formats, including OCI and Docker images Charliecloud, a set of container tools used on HPC systems Kata Containers MicroVM Platform Bottlerocket is a Linux-based open-source operating system that is purpose-built by Amazon Web Services for running containers on virtual machines or bare metal hosts Azure Linux is an open-source Linux distribution that is purpose-built by Microsoft Azure and similar to Fedora CoreOS See also Container Linux Container orchestration Flatpak package manager Linux cgroups Linux namespaces Hypervisor Portable application creators Open Container Initiative Sandbox (software development) Separation kernel Serverless computing Snap package manager Storage hypervisor Virtual private server (VPS) Virtual resource partitioning Notes References External links An introduction to virtualization A short intro to three different virtualization techniques Virtualization and containerization of application infrastructure: A comparison, June 22, 2015, by Mathijs Jeroen Scheepers Containers and persistent data, LWN.net, May 28, 2015, by Josh Berkus Operating system security Virtualization Linux Linux containerization Linux kernel features
OS-level virtualization
[ "Engineering" ]
1,554
[ "Computer networks engineering", "Virtualization" ]
2,771,009
https://en.wikipedia.org/wiki/List%20of%20games%20in%20game%20theory
Game theory studies strategic interaction between individuals in situations called games. Classes of these games have been given names. This is a list of the most commonly studied games Explanation of features Games can have several features, a few of the most common are listed here. Number of players: Each person who makes a choice in a game or who receives a payoff from the outcome of those choices is a player. Strategies per player: In a game each player chooses from a set of possible actions, known as pure strategies. If the number is the same for all players, it is listed here. Number of pure strategy Nash equilibria: A Nash equilibrium is a set of strategies which represents mutual best responses to the other strategies. In other words, if every player is playing their part of a Nash equilibrium, no player has an incentive to unilaterally change their strategy. Considering only situations where players play a single strategy without randomizing (a pure strategy) a game can have any number of Nash equilibria. Sequential game: A game is sequential if one player performs their actions after another player; otherwise, the game is a simultaneous move game. Perfect information: A game has perfect information if it is a sequential game and every player knows the strategies chosen by the players who preceded them. Constant sum: A game is a constant sum game if the sum of the payoffs to every player are the same for every single set of strategies. In these games, one player gains if and only if another player loses. A constant sum game can be converted into a zero sum game by subtracting a fixed value from all payoffs, leaving their relative order unchanged. Move by nature: A game includes a random move by nature. List of games Notes References Arthur, W. Brian “Inductive Reasoning and Bounded Rationality”, American Economic Review (Papers and Proceedings), 84,406-411, 1994. Bolton, Katok, Zwick 1998, "Dictator game giving: Rules of fairness versus acts of kindness" International Journal of Game Theory, Volume 27, Number 2 Gibbons, Robert (1992) A Primer in Game Theory, Harvester Wheatsheaf Glance, Huberman. (1994) "The dynamics of social dilemmas." Scientific American. H. W. Kuhn, Simplified Two-Person Poker; in H. W. Kuhn and A. W. Tucker (editors), Contributions to the Theory of Games, volume 1, pages 97–103, Princeton University Press, 1950. Martin J. Osborne & Ariel Rubinstein: A Course in Game Theory (1994). McKelvey, R. and T. Palfrey (1992) "An experimental study of the centipede game," Econometrica 60(4), 803-836. Nash, John (1950) "The Bargaining Problem" Econometrica 18: 155-162. Ochs, J. and A.E. Roth (1989) "An Experimental Study of Sequential Bargaining" American Economic Review 79: 355-384. Rapoport, A. (1966) The game of chicken, American Behavioral Scientist 10: 10-14. Rasmussen, Eric: Games and Information, 2004 Shubik, Martin "The Dollar Auction Game: A Paradox in Noncooperative Behavior and Escalation," The Journal of Conflict Resolution, 15, 1, 1971, 109-111. Sinervo, B., and Lively, C. (1996). "The Rock-Paper-Scissors Game and the evolution of alternative male strategies". Nature Vol.380, pp. 240–243 Skyrms, Brian. (2003) The stag hunt and Evolution of Social Structure Cambridge: Cambridge University Press. External links List of games from gametheory.net
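To make the "mutual best response" definition of a pure-strategy Nash equilibrium above concrete, here is a small sketch (our own, not from the article) that enumerates the pure-strategy Nash equilibria of a two-player game given as payoff matrices; the Prisoner's-Dilemma payoffs in the demo are the usual textbook values.

```python
def pure_nash_equilibria(payoff_a, payoff_b):
    """Return all (i, j) where row strategy i is a best response to column
    strategy j and vice versa, i.e. the pure-strategy Nash equilibria."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            a_best = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
            b_best = all(payoff_b[i][j] >= payoff_b[i][k] for k in range(cols))
            if a_best and b_best:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
A = [[3, 0], [5, 1]]  # row player's payoffs
B = [[3, 5], [0, 1]]  # column player's payoffs
print(pure_nash_equilibria(A, B))  # [(1, 1)] -- mutual defection
```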
List of games in game theory
[ "Mathematics" ]
774
[ "Game theory game classes", "Game theory" ]
5,051,129
https://en.wikipedia.org/wiki/Rapid%20Communications%20in%20Mass%20Spectrometry
Rapid Communications in Mass Spectrometry (RCM) is a biweekly peer-reviewed scientific journal published since 1987 by John Wiley & Sons. It covers research on all aspects of mass spectrometry. According to the Journal Citation Reports, the journal has a 2020 impact factor of 2.419. RCM Beynon Prize To mark the 80th birthday of John H. Beynon, the founding Editor of RCM, in 2004 an annual award was established in his honour by the publishers. The award is made on the recommendation of an ad hoc Sub-committee of the editorial board of RCM. References External links Mass spectrometry journals Academic journals established in 1987 Wiley (publisher) academic journals
Rapid Communications in Mass Spectrometry
[ "Physics", "Chemistry" ]
147
[ "Spectrum (physical sciences)", "Biochemistry journal stubs", "Biochemistry stubs", "Mass spectrometry", "Mass spectrometry journals" ]
5,052,383
https://en.wikipedia.org/wiki/Maxwell%20bridge
A Maxwell bridge is a modification to a Wheatstone bridge used to measure an unknown inductance (usually of low Q value) in terms of calibrated resistance and inductance or resistance and capacitance. When the calibrated components are a parallel resistor and capacitor, the bridge is known as a Maxwell bridge. It is named for James C. Maxwell, who first described it in 1873. It uses the principle that the positive phase angle of an inductive impedance can be compensated by the negative phase angle of a capacitive impedance when put in the opposite arm and the bridge is balanced; i.e., there is no potential difference across the detector (an AC voltmeter or ammeter) and hence no current flowing through it. The unknown inductance then becomes known in terms of this capacitance. With reference to the picture, in a typical application $R_2$ and $R_3$ are known fixed entities, and $R_1$ and $C_1$ are known variable entities. $R_1$ and $C_1$ are adjusted until the bridge is balanced. $R_x$ and $L_x$ can then be calculated based on the values of the other components: $R_x = \frac{R_2 R_3}{R_1}, \qquad L_x = R_2 R_3 C_1.$ To avoid the difficulties associated with determining the precise value of a variable capacitance, sometimes a fixed-value capacitor will be installed and more than one resistor will be made variable. It cannot be used for the measurement of high Q values. It is also unsuited for coils with Q values less than one, because of a balance convergence problem. Its use is limited to the measurement of low Q values from 1 to 10. The frequency of the AC current used to assess the unknown inductor should match the frequency of the circuit the inductor will be used in, since the impedance and therefore the assigned inductance of the component varies with frequency. For ideal inductors, this relationship is linear, so that the inductance value at an arbitrary frequency can be calculated from the inductance value measured at some reference frequency. Unfortunately, for real components, this relationship is not linear, and using a derived or calculated value in place of a measured one can lead to serious inaccuracies. A practical issue in construction of the bridge is mutual inductance: two inductors in propinquity will give rise to mutual induction: when the magnetic field of one intersects the coil of the other, it will reinforce the magnetic field in that other coil, and vice versa, distorting the inductance of both coils. To minimize mutual inductance, orient the inductors with their axes perpendicular to each other, and separate them as far as is practical. Similarly, the nearby presence of electric motors, chokes and transformers (like that in the power supply for the bridge!) may induce mutual inductance in the circuit components, so locate the circuit remotely from any of these. The frequency dependence of inductance values gives rise to other constraints on this type of bridge: the calibration frequency must be well below the lesser of the self-resonance frequency of the inductor and the self-resonance frequency of the capacitor, $f < \min(f_{\mathrm{srf},L}, f_{\mathrm{srf},C})/10$. Before those limits are approached, the ESR of the capacitor will likely have a significant effect, and will have to be explicitly modeled. For ferromagnetic core inductors, there are additional constraints. There is a minimum magnetization current required to magnetize the core of an inductor, so the current in the inductor branches of the circuit must exceed the minimum, but must not be so great as to saturate the core of either inductor.
The additional complexity of using a Maxwell-Wien bridge over simpler bridge types is warranted in circumstances where either the mutual inductance between the load and the known bridge entities, or stray electromagnetic interference, distorts the measurement results. The capacitive reactance in the bridge will exactly oppose the inductive reactance of the load when the bridge is balanced, allowing the load's resistance and reactance to be reliably determined. See also Wien bridge, a similar circuit for calibrating unknown capacitance Anderson's bridge, a modification of Maxwell's bridge that accurately measures capacitance Bridge circuit Further reading References Electrical meters Bridge circuits Measuring instruments James Clerk Maxwell Impedance measurements
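A short numeric sketch of the balance equations given above (the component values are invented purely for illustration):

```python
def maxwell_bridge_unknowns(r1, r2, r3, c1):
    """Compute the unknown inductor's series resistance and inductance
    from the balanced Maxwell-Wien bridge: Rx = R2*R3/R1, Lx = R2*R3*C1."""
    rx = r2 * r3 / r1
    lx = r2 * r3 * c1
    return rx, lx

# Example values (illustrative only): R1 = 10 kOhm, R2 = R3 = 1 kOhm, C1 = 100 nF
rx, lx = maxwell_bridge_unknowns(10e3, 1e3, 1e3, 100e-9)
print(f"Rx = {rx:.1f} ohms, Lx = {lx * 1e3:.1f} mH")  # Rx = 100.0 ohms, Lx = 100.0 mH
```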
Maxwell bridge
[ "Physics", "Technology", "Engineering" ]
889
[ "Electrical resistance and conductance", "Physical quantities", "Measuring instruments", "Impedance measurements", "Electrical meters" ]
5,053,681
https://en.wikipedia.org/wiki/TaqI
TaqI is a restriction enzyme isolated from the bacterium Thermus aquaticus in 1978. It has the recognition sequence
5'-T C G A-3'
3'-A G C T-5'
and makes the cut between the T and the C on each strand:
5'---T     C G A---3'
3'---A G C     T---5'
References Restriction enzymes Bacterial enzymes
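As a small illustration (ours, not from the article) of how a TaqI digestion can be simulated in software: since TaqI cuts between the T and C of each TCGA site, a sequence can be split at offset 1 of every occurrence. The example sequence is invented.

```python
def taqi_fragments(seq):
    """Return the fragments produced by cutting seq at every TaqI site.

    TaqI recognizes TCGA and cuts after the T (T^CGA) on each strand,
    leaving CG 5' overhangs; here we model only the top-strand cut points.
    """
    cut_points = [i + 1 for i in range(len(seq)) if seq[i:i + 4] == "TCGA"]
    fragments, start = [], 0
    for cut in cut_points:
        fragments.append(seq[start:cut])
        start = cut
    fragments.append(seq[start:])
    return fragments

print(taqi_fragments("GGTCGATTTCGACC"))  # ['GGT', 'CGATTT', 'CGACC']
```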
TaqI
[ "Biology" ]
69
[ "Genetics techniques", "Restriction enzymes" ]
5,054,730
https://en.wikipedia.org/wiki/Biomolecular%20structure
Biomolecular structure is the intricate folded, three-dimensional shape that is formed by a molecule of protein, DNA, or RNA, and that is important to its function. The structure of these molecules may be considered at any of several length scales ranging from the level of individual atoms to the relationships among entire protein subunits. This useful distinction among scales is often expressed as a decomposition of molecular structure into four levels: primary, secondary, tertiary, and quaternary. The scaffold for this multiscale organization of the molecule arises at the secondary level, where the fundamental structural elements are the molecule's various hydrogen bonds. This leads to several recognizable domains of protein structure and nucleic acid structure, including such secondary-structure features as alpha helices and beta sheets for proteins, and hairpin loops, bulges, and internal loops for nucleic acids. The terms primary, secondary, tertiary, and quaternary structure were introduced by Kaj Ulrik Linderstrøm-Lang in his 1951 Lane Medical Lectures at Stanford University. Primary structure The primary structure of a biopolymer is the exact specification of its atomic composition and the chemical bonds connecting those atoms (including stereochemistry). For a typical unbranched, un-crosslinked biopolymer (such as a molecule of a typical intracellular protein, or of DNA or RNA), the primary structure is equivalent to specifying the sequence of its monomeric subunits, such as amino acids or nucleotides. The primary structure of a protein is reported starting from the amino N-terminus to the carboxyl C-terminus, while the primary structure of a DNA or RNA molecule is known as the nucleic acid sequence reported from the 5' end to the 3' end. The nucleic acid sequence refers to the exact sequence of nucleotides that comprise the whole molecule. Often, the primary structure encodes sequence motifs that are of functional importance. Some examples of such motifs are: the C/D and H/ACA boxes of snoRNAs, LSm binding site found in spliceosomal RNAs such as U1, U2, U4, U5, U6, U12 and U3, the Shine-Dalgarno sequence, the Kozak consensus sequence and the RNA polymerase III terminator. Secondary structure The secondary structure of a biopolymer is the pattern of hydrogen bonds in the molecule. These bonds determine the general three-dimensional form of local segments of the biopolymer, but do not describe the global structure of specific atomic positions in three-dimensional space, which are considered to be tertiary structure. Secondary structure is formally defined by the hydrogen bonds of the biopolymer, as observed in an atomic-resolution structure. In proteins, the secondary structure is defined by patterns of hydrogen bonds between backbone amine and carboxyl groups (sidechain–mainchain and sidechain–sidechain hydrogen bonds are irrelevant), where the DSSP definition of a hydrogen bond is used. The secondary structure of a nucleic acid is defined by the hydrogen bonding between the nitrogenous bases. For proteins, however, the hydrogen bonding is correlated with other structural features, which has given rise to less formal definitions of secondary structure. For example, helices can adopt backbone dihedral angles in some regions of the Ramachandran plot; thus, a segment of residues with such dihedral angles is often called a helix, regardless of whether it has the correct hydrogen bonds.
Many other less formal definitions have been proposed, often applying concepts from the differential geometry of curves, such as curvature and torsion. Structural biologists solving a new atomic-resolution structure will sometimes assign its secondary structure by eye and record their assignments in the corresponding Protein Data Bank (PDB) file. The secondary structure of a nucleic acid molecule refers to the base pairing interactions within one molecule or set of interacting molecules. The secondary structure of biological RNAs can often be uniquely decomposed into stems and loops. Often, these elements or combinations of them can be further classified, e.g. tetraloops, pseudoknots and stem loops. There are many secondary structure elements of functional importance to biological RNA. Famous examples include the Rho-independent terminator stem loops and the transfer RNA (tRNA) cloverleaf. There is a minor industry of researchers attempting to determine the secondary structure of RNA molecules. Approaches include both experimental and computational methods (see also the List of RNA structure prediction software). Tertiary structure The tertiary structure of a protein or any other macromolecule is its three-dimensional structure, as defined by the atomic coordinates. Proteins and nucleic acids fold into complex three-dimensional structures which result in the molecules' functions. While such structures are diverse and complex, they are often composed of recurring, recognizable tertiary structure motifs and domains that serve as molecular building blocks. Tertiary structure is considered to be largely determined by the biomolecule's primary structure (its sequence of amino acids or nucleotides). Quaternary structure The protein quaternary structure refers to the number and arrangement of multiple protein molecules in a multi-subunit complex. For nucleic acids, the term is less common, but can refer to the higher-level organization of DNA in chromatin, including its interactions with histones, or to the interactions between separate RNA units in the ribosome or spliceosome. Viruses, in general, can be regarded as molecular machines. Bacteriophage T4 is a particularly well-studied virus and its protein quaternary structure is relatively well defined. A study by Floor (1970) showed that, during the in vivo construction of the virus by specific morphogenetic proteins, these proteins need to be produced in balanced proportions for proper assembly of the virus to occur. Insufficiency (due to mutation) in the production of one particular morphogenetic protein (e.g., a critical tail fiber protein) can lead to the production of progeny viruses almost all of which have too few of the particular protein component to properly function, i.e. to infect host cells. However, a second mutation that reduces another morphogenetic component (e.g. in the base plate or head of the phage) could in some cases restore a balance such that a higher proportion of the virus particles produced are able to function. Thus it was found that a mutation that reduces expression of one gene, whose product is employed in morphogenesis, may be partially suppressed by a mutation that reduces expression of a second morphogenetic gene, resulting in a more balanced production of the virus gene products. The concept that, in vivo, a balanced availability of components is necessary for proper molecular morphogenesis may have general applicability for understanding the assembly of protein molecular machines.
Structure determination Structure probing is the process by which biochemical techniques are used to determine biomolecular structure. This analysis can be used to define the patterns used to infer the molecular structure, to support experimental analysis of molecular structure and function, and to further understanding of the development of smaller molecules for further biological research. Structure probing analysis can be done through many different methods, which include chemical probing, hydroxyl radical probing, nucleotide analog interference mapping (NAIM), and in-line probing. Protein and nucleic acid structures can be determined using nuclear magnetic resonance spectroscopy (NMR), X-ray crystallography, or single-particle cryo-electron microscopy (cryo-EM). The first published reports for DNA (by Rosalind Franklin and Raymond Gosling in 1953) of A-DNA X-ray diffraction patterns—and also B-DNA—used analyses based on Patterson function transforms that provided only a limited amount of structural information for oriented fibers of DNA isolated from calf thymus. An alternate analysis was then proposed by Wilkins et al. in 1953 for B-DNA X-ray diffraction and scattering patterns of hydrated, bacterial-oriented DNA fibers and trout sperm heads in terms of squares of Bessel functions. Although the B-DNA form is most common under the conditions found in cells, it is not a well-defined conformation but a family or fuzzy set of DNA conformations that occur at the high hydration levels present in a wide variety of living cells. Their corresponding X-ray diffraction and scattering patterns are characteristic of molecular paracrystals with a significant degree of disorder (over 20%), and the structure is not tractable using only the standard analysis. In contrast, the standard analysis, involving only Fourier transforms of Bessel functions and DNA molecular models, is still routinely used to analyze A-DNA and Z-DNA X-ray diffraction patterns. Structure prediction Biomolecular structure prediction is the prediction of the three-dimensional structure of a protein from its amino acid sequence, or of a nucleic acid from its nucleobase (base) sequence. In other words, it is the prediction of secondary and tertiary structure from its primary structure. Structure prediction is the inverse of biomolecular design, as in rational design, protein design, nucleic acid design, and biomolecular engineering. Protein structure prediction is one of the most important goals pursued by bioinformatics and theoretical chemistry. Protein structure prediction is of high importance in medicine (for example, in drug design) and biotechnology (for example, in the design of novel enzymes). Every two years, the performance of current methods is assessed in the Critical Assessment of protein Structure Prediction (CASP) experiment. There has also been a significant amount of bioinformatics research directed at the RNA structure prediction problem. A common problem for researchers working with RNA is to determine the three-dimensional structure of the molecule given only the nucleic acid sequence. However, in the case of RNA, much of the final structure is determined by the secondary structure or intra-molecular base-pairing interactions of the molecule. This is shown by the high conservation of base pairings across diverse species. Secondary structure of small nucleic acid molecules is determined largely by strong, local interactions such as hydrogen bonds and base stacking.
Summing the free energy for such interactions, usually using a nearest-neighbor method, provides an approximation for the stability of given structure. The most straightforward way to find the lowest free energy structure would be to generate all possible structures and calculate the free energy for them, but the number of possible structures for a sequence increases exponentially with the length of the molecule. For longer molecules, the number of possible secondary structures is vast. Sequence covariation methods rely on the existence of a data set composed of multiple homologous RNA sequences with related but dissimilar sequences. These methods analyze the covariation of individual base sites in evolution; maintenance at two widely separated sites of a pair of base-pairing nucleotides indicates the presence of a structurally required hydrogen bond between those positions. The general problem of pseudoknot prediction has been shown to be NP-complete. Design Biomolecular design can be considered the inverse of structure prediction. In structure prediction, the structure is determined from a known sequence, whereas, in protein or nucleic acid design, a sequence that will form a desired structure is generated. Other biomolecules Other biomolecules, such as polysaccharides, polyphenols and lipids, can also have higher-order structure of biological consequence. See also Biomolecular Comparison of nucleic acid simulation software Gene structure List of RNA structure prediction software Non-coding RNA Notes References Biomolecules
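As a hedged illustration of the combinatorial point made above (enumerating all structures is infeasible, so dynamic programming is used instead), here is a minimal Nussinov-style base-pair-maximization sketch. It is a simplified stand-in, not one of the real free-energy methods the article refers to: production tools minimize nearest-neighbor free energy rather than maximize pair counts, and the test sequence is invented.

```python
def nussinov_max_pairs(rna, min_loop=3):
    """Maximize Watson-Crick/GU base pairs in an RNA by dynamic programming.

    A toy stand-in for thermodynamic folding: dp[i][j] holds the maximum
    number of compatible pairs in rna[i..j], with hairpin loops forced
    to contain at least min_loop unpaired bases.
    """
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(rna)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]  # case: base i left unpaired
            for k in range(i + min_loop + 1, j + 1):
                if (rna[i], rna[k]) in pairs:  # case: pair base i with base k
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + dp[i + 1][k - 1] + right)
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_max_pairs("GGGAAAUCC"))  # 3 pairs for this toy hairpin
```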
Biomolecular structure
[ "Chemistry", "Biology" ]
2,376
[ "Natural products", "Organic compounds", "Structural biology", "Biomolecules", "Biochemistry", "Molecular biology" ]
5,054,772
https://en.wikipedia.org/wiki/Mitotic%20recombination
Mitotic recombination is a type of genetic recombination that may occur in somatic cells during their preparation for mitosis in both sexual and asexual organisms. In asexual organisms, the study of mitotic recombination is one way to understand genetic linkage because it is the only source of recombination within an individual. Additionally, mitotic recombination can result in the expression of recessive alleles in an otherwise heterozygous individual. This expression has important implications for the study of tumorigenesis and lethal recessive alleles. Mitotic homologous recombination occurs mainly between sister chromatids subsequent to replication (but prior to cell division). Inter-sister homologous recombination is ordinarily genetically silent. During mitosis the incidence of recombination between non-sister homologous chromatids is only about 1% of that between sister chromatids. Discovery The discovery of mitotic recombination came from the observation of twin spotting in Drosophila melanogaster. This twin spotting, or mosaic spotting, was observed in D. melanogaster as early as 1925, but it was only in 1936 that Curt Stern explained it as a result of mitotic recombination. Prior to Stern's work, it was hypothesized that twin spotting happened because certain genes had the ability to eliminate the chromosome on which they were located. Later experiments uncovered when mitotic recombination occurs in the cell cycle and the mechanisms behind recombination. Occurrence Mitotic recombination can happen at any locus but is observable in individuals that are heterozygous at a given locus. If a crossover event between non-sister chromatids affects that locus, then both homologous chromosomes will have one chromatid containing each genotype. The resulting phenotype of the daughter cells depends on how the chromosomes line up on the metaphase plate. If the chromatids containing different alleles line up on the same side of the plate, then the resulting daughter cells will appear heterozygous and be undetectable, despite the crossover event. However, if chromatids containing the same alleles line up on the same side, the daughter cells will be homozygous at that locus. This results in twin spotting, where one cell presents the homozygous recessive phenotype and the other cell has the homozygous wild type phenotype. If those daughter cells go on to replicate and divide, the twin spots will continue to grow and reflect the differential phenotype. Mitotic recombination takes place during interphase. It has been suggested that recombination takes place during G1, when the DNA is in its 2-strand phase, and replicated during DNA synthesis. It is also possible to have the DNA break leading to mitotic recombination happen during G1, but for the repair to happen after replication. Response to DNA damage In the budding yeast Saccharomyces cerevisiae, mutations in several genes needed for mitotic (and meiotic) recombination cause increased sensitivity to inactivation by radiation and/or genotoxic chemicals. For example, gene rad52 is required for mitotic recombination as well as meiotic recombination. Rad52 mutant yeast cells have increased sensitivity to killing by X-rays, methyl methanesulfonate and the DNA crosslinking agent 8-methoxypsoralen-plus-UV light, suggesting that mitotic recombinational repair is required for removal of the different DNA damages caused by these agents. Mechanisms The mechanisms behind mitotic recombination are similar to those behind meiotic recombination. 
These include sister chromatid exchange and mechanisms related to DNA double strand break repair by homologous recombination such as single-strand annealing, synthesis-dependent strand annealing (SDSA), and gene conversion through a double Holliday junction intermediate or SDSA. In addition, non-homologous mitotic recombination is a possibility and can often be attributed to non-homologous end joining. Method There are several theories on how mitotic crossover occurs. In the simple crossover model, the two homologous chromosomes overlap on or near a common chromosomal fragile site (CFS). This leads to a double-strand break, which is then repaired using one of the two strands. This can lead to the two chromatids switching places. In another model, two overlapping sister chromatids form a double Holliday junction at a common repeat site and are later sheared in such a way that they switch places. In either model, the chromosomes are not guaranteed to trade evenly, or even to rejoin on opposite sides; thus, most patterns of cleavage do not result in any crossover event. Uneven trading introduces many of the deleterious effects of mitotic crossover. Alternatively, a crossover can occur during DNA repair if, due to extensive damage, the homologous chromosome is chosen to be the template over the sister chromatid. This leads to gene conversion, since one copy of the allele is copied across from the homologous chromosome and then synthesized into the breach on the damaged chromosome. The net effect of this would be one heterozygous chromosome and one homozygous chromosome. Advantages and disadvantages Mitotic crossover is known to occur in D. melanogaster, some asexually reproducing fungi and in normal human cells, where the event may allow normally recessive cancer-causing alleles to be expressed and thus predispose the cell in which it occurs to the development of cancer. Alternately, a cell may become a homozygous mutant for a tumor-suppressing gene, leading to the same result. For example, Bloom's syndrome is caused by a mutation in RecQ helicase, which plays a role in DNA replication and repair. This mutation leads to high rates of mitotic recombination in mice, and this recombination rate is in turn responsible for causing tumor susceptibility in those mice. At the same time, mitotic recombination may be beneficial: it may play an important role in repairing double stranded breaks, and it may be beneficial to the organism if having homozygous dominant alleles is more functional than the heterozygous state. For use in experimentation with genomes in model organisms such as Drosophila melanogaster, mitotic recombination can be induced via X-rays and the FLP-FRT recombination system. References Griffiths et al. 1999. Modern Genetic Analysis. W. H. Freeman and Company. Cellular processes Modification of genetic information Molecular genetics
Mitotic recombination
[ "Chemistry", "Biology" ]
1,429
[ "Modification of genetic information", "Molecular genetics", "Cellular processes", "Molecular biology" ]
529,953
https://en.wikipedia.org/wiki/Ablation
Ablation (from Latin ablatio, 'removal') is the removal or destruction of something from an object by vaporization, chipping, erosive processes, or by other means. Examples of ablative materials are described below, including spacecraft material for ascent and atmospheric reentry, ice and snow in glaciology, biological tissues in medicine and passive fire protection materials. Artificial intelligence In artificial intelligence (AI), especially machine learning, ablation is the removal of a component of an AI system. The term is by analogy with biology: removal of components of an organism. Biology Biological ablation is the removal of a biological structure or functionality. Genetic ablation is another term for gene silencing, in which gene expression is abolished through the alteration or deletion of genetic sequence information. In cell ablation, individual cells in a population or culture are destroyed or removed. Both can be used as experimental tools, as in loss-of-function experiments. Medicine In medicine, ablation is the removal of a part of biological tissue, usually by surgery. Surface ablation of the skin (dermabrasion, also called resurfacing because it induces regeneration) can be carried out by chemicals (chemoablation), by lasers (laser ablation), by freezing (cryoablation), or by electricity (fulguration). Its purpose is to remove skin spots, aged skin, wrinkles, thus rejuvenating it. Surface ablation is also employed in otolaryngology for several kinds of surgery, such as the one for snoring. Radiofrequency ablation (RFA) is a method of removing aberrant tissue from within the body via minimally invasive procedures; it is used to cure a variety of cardiac arrhythmias such as supraventricular tachycardia, Wolff–Parkinson–White syndrome (WPW), ventricular tachycardia, and more recently as management of atrial fibrillation. The term is often used in the context of laser ablation, a process in which a laser dissolves a material's molecular bonds. For a laser to ablate tissues, the power density or fluence must be high, otherwise thermocoagulation occurs, which is simply thermal vaporization of the tissues. Rotablation is a type of arterial cleansing that consists of inserting a tiny, diamond-tipped, drill-like device into the affected artery to remove fatty deposits or plaque. The procedure is used in the treatment of coronary heart disease to restore blood flow. Microwave ablation (MWA) is similar to RFA but at higher frequencies of electromagnetic radiation. High-intensity focused ultrasound (HIFU) ablation removes tissue from within the body noninvasively. Bone marrow ablation is a process whereby the human bone marrow cells are eliminated in preparation for a bone marrow transplant. This is performed using high-intensity chemotherapy and total body irradiation. As such, it has nothing to do with the vaporization techniques described in the rest of this article. Ablation of brain tissue is used for treating certain neurological disorders, particularly Parkinson's disease, and sometimes for psychiatric disorders as well. Recently, some researchers reported successful results with genetic ablation. In particular, genetic ablation is potentially a much more efficient method of removing unwanted cells, such as tumor cells, because large numbers of animals lacking specific cells could be generated. Genetically ablated lines can be maintained for a prolonged period of time and shared within the research community. Researchers at Columbia University report on reconstituted caspases combined from C.
elegans and humans, which maintain a high degree of target specificity. The genetic ablation techniques described could prove useful in battling cancer. Electro-ablation Electro-ablation is a process that removes material from a metallic workpiece to reduce surface roughness. Electro-ablation breaks through highly resistive oxide surfaces, such as those found on titanium and other exotic metals and alloys, without melting the underlying non-oxidised metal or alloy. This allows very quick surface finishing. The process is capable of providing surface finishing for a wide range of exotic and widely used metals and alloys, including: titanium, stainless steel, niobium, chromium–cobalt, Inconel, aluminium, and a range of widely available steels and alloys. Electro-ablation is very effective at achieving high levels of surface finishing in holes, valleys and hidden or internal surfaces on metallic workpieces (parts). The process is particularly applicable to components produced by additive manufacturing processes, such as 3D-printed metals. These components tend to be produced with roughness levels well above 5–20 micron. Electro-ablation can be used to quickly reduce the surface roughness to less than 0.8 micron, allowing the post-process to be used for volume production surface finishing. Glaciology In glaciology and meteorology, ablation—the opposite of accumulation—refers to all processes that remove snow, ice, or water from a glacier or snowfield. Ablation refers to the melting of snow or ice that runs off the glacier, evaporation, sublimation, calving, or erosive removal of snow by wind. Air temperature is typically the dominant control of ablation, with precipitation exercising secondary control. In a temperate climate during ablation season, ablation rates typically average around 2 mm/h. Where solar radiation is the dominant cause of snow ablation (e.g., if air temperatures are low under clear skies), characteristic ablation textures such as suncups and penitentes may develop on the snow surface. Ablation can refer to mass loss from the upper surface of a glacier or ocean-driven melt and calving on the face of a glacier terminus. Ablation can refer either to the processes removing ice and snow or to the quantity of ice and snow removed. Debris-covered glaciers have also been shown to greatly impact the ablation process. There is a thin debris layer that can be located on the top of glaciers that intensifies the ablation process below the ice. The debris-covered parts of a glacier that is experiencing ablation are sectioned into three categories which include ice cliffs, ponds, and debris. These three sections allow scientists to measure the heat absorbed by the debris-covered area. The calculations are dependent on the area and net absorbed heat amounts in regard to the entire debris-covered zones. These types of calculations are done for various glaciers to understand and analyze future patterns of melting. Moraine (glacial debris) is moved by natural processes that allow for down-slope movement of materials on the glacier body. It is noted that if the slope of a glacier is too high then the debris will continue to move along the glacier to a further location. The sizes and locations of glaciers vary around the world, so depending on the climate and physical geography the varieties of debris can differ. The size and magnitude of the debris is dependent on the area of the glacier and can vary from dust-size fragments to blocks as large as a house.
There have been many experiments done to demonstrate the effect of debris on the surface of glaciers. Yoshiyuki Fujii, a professor at the National Institute of Polar Research, designed an experiment that showed that the ablation rate was accelerated under a thin debris layer and was retarded under a thick one as compared with that of a natural snow surface. This science is significant due to the importance of the long-term availability of water resources and the need to assess glacier response to climate change. Natural resource availability is a major driver behind research conducted on the ablation process and the overall study of glaciers. Laser ablation Laser ablation is greatly affected by the nature of the material and its ability to absorb energy, therefore the wavelength of the ablation laser should have a minimum absorption depth. While these lasers can average a low power, they can offer peak intensity and fluence given by $I_{\text{peak}} = \frac{E_p}{\tau_p A}$ and $\Phi = \frac{E_p}{A}$, while the peak power is $P_{\text{peak}} = \frac{E_p}{\tau_p}$, where $E_p$ is the pulse energy, $\tau_p$ the pulse duration, and $A$ the focal spot area. Surface ablation of the cornea for several types of eye refractive surgery is now common, using an excimer laser system (LASIK and LASEK). Since the cornea does not grow back, a laser is used to remodel the cornea's refractive properties to correct refraction errors, such as astigmatism, myopia, and hyperopia. Laser ablation is also used to remove part of the uterine wall in women with menstruation and adenomyosis problems in a process called endometrial ablation. Researchers have demonstrated a successful technique for ablating subsurface tumors with minimal thermal damage to surrounding healthy tissue, by using a focused laser beam from an ultra-short pulse diode laser source. Marine surface coatings Antifouling paints and other related coatings are routinely used to prevent the buildup of microorganisms and other animals, such as barnacles, on the bottom hull surfaces of recreational, commercial and military sea vessels. Ablative paints are often utilized for this purpose to prevent the dilution or deactivation of the antifouling agent. Over time, the paint will slowly decompose in the water, exposing fresh antifouling compounds on the surface. Engineering the antifouling agents and the ablation rate can produce long-lived protection from the deleterious effects of biofouling. Passive fire protection Firestopping and fireproofing products can be ablative in nature. This can mean endothermic materials, or merely materials that are sacrificial and become "spent" over time while exposed to fire, such as silicone firestop products. Given sufficient time under fire or heat conditions, these products char away, crumble, and disappear. The idea is to put enough of this material in the way of the fire that a level of fire-resistance rating can be maintained, as demonstrated in a fire test. Ablative materials usually have a large concentration of organic matter that is reduced by fire to ashes. In the case of silicone, organic rubber surrounds very finely divided silica dust (up to 380 m² of combined surface area of all the dust particles per gram of this dust). When the organic rubber is exposed to fire, it burns to ash and leaves behind the silica dust with which the product started. Protoplanetary disk ablation Protoplanetary disks are rotating circumstellar disks of dense gas and dust surrounding young, newly formed stars. Shortly after star formation, stars often have leftover surrounding material that is still gravitationally bound to them, forming primitive disks that orbit around the star's equator – not too dissimilarly from the rings of Saturn.
This occurs because angular momentum is conserved as the protostellar material's radius decreases during formation, so the remaining material spins faster and gets whipped into a flattened circumstellar disk around the star. This circumstellar disk may eventually mature into what is referred to as a protoplanetary disk: a disk of gas, dust, ice and other materials from which planetary systems may form. In these disks, orbiting matter starts to accrete in the colder mid-plane of the disk from dust grains and ices sticking together. These small accretions grow from pebbles to rocks to early baby planets, called planetesimals, then protoplanets, and eventually, full planets. As it is believed that massive stars may play a role in actively triggering star formation (by introducing gravitational instabilities amongst other factors), it is plausible that young, smaller stars with disks may reside relatively near older, more massive stars. This has already been confirmed through observations to be the case in certain clusters, e.g. in the Trapezium cluster. Since massive stars tend to end their lives in supernova explosions, research is now investigating what role the shockwave of such an explosion, and the resulting supernova remnant (SNR), would play if it occurred in the line of fire of a protoplanetary disk. According to computational simulations, an SNR striking a protoplanetary disk would result in significant ablation of the disk, and this ablation would strip a substantial amount of protoplanetary material from the disk – but not necessarily destroy the disk entirely. This is an important point because a disk that survives such an interaction with sufficient material leftover to form a planetary system may inherit an altered disk chemistry from the SNR, which could have effects on the planetary systems that later form. Spaceflight In spacecraft design, ablation is used to both cool and protect mechanical parts and/or payloads that would otherwise be damaged by extremely high temperatures. Two principal applications are heat shields for spacecraft entering a planetary atmosphere from space and cooling of rocket engine nozzles. Examples include the Apollo Command Module that protected astronauts from the heat of atmospheric reentry and the Kestrel second stage rocket engine designed for exclusive use in an environment of space vacuum since no heat convection is possible. In a basic sense, ablative material is designed so that instead of heat being transmitted into the structure of the spacecraft, only the outer surface of the material bears the majority of the heating effect. The outer surface chars and burns away – but quite slowly, only gradually exposing new fresh protective material beneath. The heat is carried away from the spacecraft by the gases generated by the ablative process, and never penetrates the surface material, so the metallic and other sensitive structures it protects remain at a safe temperature. As the surface burns and disperses into space, the remaining solid material continues to insulate the craft from ongoing heat and superheated gases. The thickness of the ablative layer is calculated to be sufficient to survive the heat it will encounter on its mission. There is an entire branch of spaceflight research involving the search for new fireproofing materials to achieve the best ablative performance; this function is critical to protect the spacecraft occupants and payload from otherwise excessive heat loading.
The same technology is used in some passive fire protection applications, in some cases by the same vendors, who offer different versions of these fireproofing products, some for aerospace and some for structural fire protection. See also Electrical arc flash burns Ablative armor References External links Chemical Peeling. American Society for Dermatological Surgery. Lasik Laser Eye Surgery. USA Food an Drugs Administration info. Physics of laser ablation Plasma phenomena Materials degradation
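To make the pulsed-laser quantities discussed in the laser ablation section concrete, here is a small sketch (illustrative numbers only, and assuming a flat-top beam so that intensity is simply power divided by area):

```python
import math

def pulse_metrics(energy_j, duration_s, spot_radius_m):
    """Fluence, peak power, and peak intensity of a single flat-top laser pulse."""
    area = math.pi * spot_radius_m ** 2          # beam spot area (m^2)
    fluence = energy_j / area                    # J/m^2
    peak_power = energy_j / duration_s           # W
    peak_intensity = peak_power / area           # W/m^2
    return fluence, peak_power, peak_intensity

# 1 mJ pulse, 10 ns duration, 50 micron spot radius (invented example values)
f, p, i = pulse_metrics(1e-3, 10e-9, 50e-6)
print(f"fluence = {f / 1e4:.1f} J/cm^2, peak power = {p / 1e3:.0f} kW, "
      f"intensity = {i / 1e4:.2e} W/cm^2")
# Even a modest 1 mJ pulse reaches ~12.7 J/cm^2 and ~100 kW peak power.
```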
Ablation
[ "Physics", "Materials_science", "Engineering" ]
2,942
[ "Physical phenomena", "Plasma physics", "Plasma phenomena", "Materials science", "Materials degradation" ]
530,217
https://en.wikipedia.org/wiki/Energy%20value%20of%20coal
The energy value of coal, or fuel content, is the amount of potential energy coal contains that can be converted into heat. This value can be calculated and compared with different grades of coal and other combustible materials, which produce different amounts of heat according to their grade. While chemistry provides ways of calculating the heating value of a certain amount of a substance, there is a difference between this theoretical value and its application to real coal. The grade of a sample of coal does not precisely define its chemical composition, so calculating the coal's actual usefulness as a fuel requires determining its proximate and ultimate analysis (see "Chemical Composition" below). Chemical composition Chemical composition of the coal is defined in terms of its proximate and ultimate (elemental) analyses. The parameters of proximate analysis are moisture, volatile matter, ash, and fixed carbon. Elemental or ultimate analysis encompasses the quantitative determination of carbon, hydrogen, nitrogen, sulfur and oxygen within the coal. Additionally, specific physical and mechanical properties of coal and particular carbonization properties are also determined. The calorific value Q of coal [kJ/kg] is the heat liberated by its complete combustion with oxygen. Q is a complex function of the elemental composition of the coal. Q can be determined experimentally using calorimeters. Dulong suggests the following approximate formula for Q when the oxygen content is less than 10%: Q = 337C + 1442(H - O/8) + 93S, where C is the mass percent of carbon, H is the mass percent of hydrogen, O is the mass percent of oxygen, and S is the mass percent of sulfur in the coal. With these constants, Q is given in kilojoules per kilogram. See also Coal assay techniques Energies per unit mass Heat of combustion References "Using Charcoal Efficiently." Food and Agriculture Organization of the United Nations. Retrieved 10 Nov 2011. "Combustion Training." (c)2011 E Instruments International, LLC. Coal technology Thermodynamic properties
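A quick numeric sketch of Dulong's approximation above (the composition shown is an invented example of a bituminous-type coal, not measured data):

```python
def dulong_calorific_value(c, h, o, s):
    """Approximate calorific value Q in kJ/kg from mass percentages,
    using Dulong's formula: Q = 337*C + 1442*(H - O/8) + 93*S.
    Valid only when the oxygen content is below about 10%."""
    if o >= 10:
        raise ValueError("Dulong's formula assumes oxygen content < 10%")
    return 337 * c + 1442 * (h - o / 8) + 93 * s

# Invented example composition: 75% C, 5% H, 8% O, 1% S (by mass)
q = dulong_calorific_value(75, 5, 8, 1)
print(f"Q = {q:.0f} kJ/kg")  # about 31,136 kJ/kg
```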
Energy value of coal
[ "Physics", "Chemistry", "Mathematics" ]
411
[ "Thermodynamic properties", "Quantity", "Physical quantities", "Thermodynamics" ]
530,340
https://en.wikipedia.org/wiki/Control%20grid
The control grid is an electrode used in amplifying thermionic valves (vacuum tubes) such as the triode, tetrode and pentode, used to control the flow of electrons from the cathode to the anode (plate) electrode. The control grid usually consists of a cylindrical screen or helix of fine wire surrounding the cathode, and is surrounded in turn by the anode. The control grid was invented by Lee De Forest, who in 1906 added a grid to the Fleming valve (thermionic diode) to create the first amplifying vacuum tube, the Audion (triode). Operation In a valve, the hot cathode emits negatively charged electrons, which are attracted to and captured by the anode, which is given a positive voltage by a power supply. The control grid between the cathode and anode functions as a "gate" to control the current of electrons reaching the anode. A more negative voltage on the grid will repel the electrons back toward the cathode so fewer get through to the anode. A less negative, or positive, voltage on the grid will allow more electrons through, increasing the anode current. A given change in grid voltage causes a proportional change in plate current, so if a time-varying voltage is applied to the grid, the plate current waveform will be a copy of the applied grid voltage. A relatively small variation in voltage on the control grid causes a significantly large variation in anode current. The presence of a resistor in the anode circuit causes a large variation in voltage to appear at the anode. The variation in anode voltage can be much larger than the variation in grid voltage which caused it, and thus the tube functions as an amplifier. Construction The grid in the first triode valve consisted of a zig-zag piece of wire placed between the filament and the anode. This quickly evolved into a helix or cylindrical screen of fine wire placed between a single-strand filament (or later, a cylindrical cathode) and a cylindrical anode. The grid is usually made of a very thin wire that can resist high temperatures and is not prone to emitting electrons itself. Molybdenum alloy with a gold plating is frequently used. It is wound on soft copper sideposts, which are swaged over the grid windings to hold them in place. A 1950s variation is the frame grid, which winds very fine wire onto a rigid stamped metal frame. This allows the holding of very close tolerances, so the grid can be placed closer to the filament (or cathode). Effects of grid position Placing the control grid closer to the filament/cathode than to the anode results in greater amplification. This degree of amplification is referred to in valve data sheets as the amplification factor, or "mu". It also results in higher transconductance, which is a measure of the anode current change versus grid voltage change. The noise figure of a valve is inversely proportional to its transconductance; higher transconductance generally means a lower noise figure. Lower noise can be very important when designing a radio or television receiver. Multiple control grids A valve can contain more than one control grid. The hexode contains two such grids, one for a received signal and one for the signal from a local oscillator. The valve's inherent non-linearity causes not only both original signals but also their sum and difference frequencies to appear in the anode circuit. This can be exploited as a frequency-changer in superheterodyne receivers. Grid variations A variation of the control grid is to produce the helix with a variable pitch.
This gives the resultant valve a distinct non-linear characteristic. This is often exploited in R.F. amplifiers where an alteration of the grid bias changes the mutual conductance and hence the gain of the device. This variation usually appears in the pentode form of the valve, where it is then called a variable-mu pentode or remote-cutoff pentode. One of the principal limitations of the triode valve is that there is considerable capacitance between the grid and the anode (Cag). A phenomenon known as the Miller effect causes the effective input capacitance of an amplifier stage to be approximately Cag multiplied by one plus the voltage gain of the stage. This, and the instability of an amplifier with tuned input and output when Cag is large, can severely limit the upper operating frequency. These effects can be overcome by the addition of a screen grid; however, in the later years of the tube era, constructional techniques were developed that rendered this 'parasitic capacitance' so low that triodes operating in the upper very high frequency (VHF) bands became possible. The Mullard EC91 operated at up to 250 MHz. The anode-grid capacitance of the EC91 is quoted in manufacturer's literature as 2.5 pF, which is higher than that of many other triodes of the era, while many triodes of the 1920s had strictly comparable figures, so there was no advance in this area. However, early screen-grid tetrodes of the 1920s have a Cag of only 1 or 2 fF, around a thousand times less. 'Modern' pentodes have comparable values of Cag. Triodes were used in VHF amplifiers in 'grounded-grid' configuration, a circuit arrangement which prevents Miller feedback. References Vacuum tubes Electrodes
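To make the Miller effect figures above concrete, the sketch below computes the effective input capacitance from an assumed grid-anode capacitance and stage gain. The 2.5 pF value is the EC91 figure quoted above; the gain of 50 and the 2 fF tetrode comparison values are illustrative assumptions, not measurements of any particular circuit.

```python
# Miller effect: a grid-anode capacitance Cag appears at the amplifier
# input multiplied by (1 + A), where A is the stage's voltage gain.
# Values below are illustrative assumptions.

c_ag = 2.5e-12   # grid-anode capacitance, F (2.5 pF, as quoted for the EC91)
gain = 50.0      # assumed voltage gain of the stage

c_in_miller = c_ag * (1.0 + gain)
print(f"Effective input capacitance: {c_in_miller * 1e12:.1f} pF")  # 127.5 pF

# The same 50x stage built with a 2 fF anode-grid capacitance (the order
# quoted above for early screen-grid tetrodes) would present only ~0.1 pF
# of Miller capacitance, which is why the screen grid extended the usable
# frequency range so dramatically.
c_in_tetrode = 2e-15 * (1.0 + gain)
print(f"Tetrode-like case: {c_in_tetrode * 1e12:.3f} pF")  # 0.102 pF
```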
Control grid
[ "Physics", "Chemistry" ]
1,157
[ "Vacuum tubes", "Electrodes", "Vacuum", "Electrochemistry", "Matter" ]
530,417
https://en.wikipedia.org/wiki/Suppressor%20grid
A suppressor grid is a wire screen used in a thermionic valve (i.e. vacuum tube) to suppress secondary emission. It is also called the antidynatron grid, as it reduces or prevents dynatron oscillations. It is located between the screen grid and the plate electrode (anode). The suppressor grid is used in the pentode vacuum tube, so called because it has five concentric electrodes: cathode, control grid, screen grid, suppressor grid, and plate, and also in other tubes with more grids, such as the hexode. The suppressor grid and pentode tube were invented in 1926 by Gilles Holst and Bernard D. H. Tellegen at Philips. In a vacuum tube, electrons emitted by the heated cathode are attracted to the positively-charged plate and pass through the grids to the plate. When they strike the plate they knock other electrons out of the metal surface. This is called secondary emission. In the four-electrode vacuum tube, the tetrode, the second grid, the screen grid, is operated at a positive voltage close to the plate voltage. During portions of the cycle when the plate voltage is below the screen grid voltage, secondary electrons from the plate are attracted to the screen grid and return to the cathode through the screen grid power supply. This flow of electrons away from the plate causes a reduction of plate current when the plate voltage increases; in other words, the plate has a negative resistance with respect to the cathode. This can cause distortion in the plate waveform and parasitic oscillations called dynatron oscillations in an amplifier. In the pentode, to prevent the secondary electrons from reaching the screen grid, a suppressor grid, a coarse screen of wires, is interposed between the screen grid and plate. It is biased at the cathode voltage, often connected to the cathode inside the glass tube. The negative potential of the suppressor with respect to the plate repels the secondary electrons back to the plate. Since it is at the same potential as the cathode, the primary electrons from the cathode have no problem passing through the suppressor grid to the plate. In addition to preventing the distortion of plate current, the suppressor grid also increases the electrostatic shielding between the cathode and plate, causing the plate current to be almost independent of plate voltage. This increases the plate output resistance and the amplification factor of the tube. Pentodes can have amplification factors of 1000 or more. References Vacuum tubes Electrodes
Suppressor grid
[ "Physics", "Chemistry" ]
542
[ "Vacuum tubes", "Electrodes", "Vacuum", "Electrochemistry", "Matter" ]
530,715
https://en.wikipedia.org/wiki/Residual-current%20device
A residual-current device (RCD), residual-current circuit breaker (RCCB) or ground fault circuit interrupter (GFCI) is an electrical safety device, more specifically a form of Earth-leakage protection device, that interrupts an electrical circuit when the current passing through a conductor is not equal and opposite in both directions, therefore indicating leakage current to ground or current flowing to another powered conductor. The device's purpose is to reduce the severity of injury caused by an electric shock. This type of circuit interrupter cannot protect a person who touches both circuit conductors at the same time, since it then cannot distinguish normal current from that passing through a person. If the RCD has additional overcurrent protection integrated into the same device, then it is referred to as a residual-current circuit breaker with integrated overcurrent protection (RCBO). These devices are designed to quickly interrupt the protected circuit when it detects that the electric current is unbalanced between the supply and return conductors of the circuit. Any difference between the currents in these conductors indicates leakage current, which presents a shock hazard. Alternating 60 Hz current above 20 mA (0.020 amperes) through the human body is potentially sufficient to cause cardiac arrest or serious harm if it persists for more than a small fraction of a second. RCDs are designed to disconnect the conducting wires ("trip") quickly enough to potentially prevent serious injury to humans, and to prevent damage to electrical devices. RCDs are testable and resettable devices—a test button safely creates a small leakage condition, and another button, or switch, resets the conductors after a fault condition has been cleared. Some RCDs disconnect both the live and neutral conductors upon a fault (double pole), while a single pole RCD only disconnects the live conductor. If the fault has left the neutral wire "floating" or not at its expected ground potential for any reason, then a single-pole RCD will leave this conductor still connected to the circuit when it detects the fault. Purpose and operation RCDs are designed to disconnect the circuit if there is a leakage current. In their first implementation in the 1950s, power companies used them to prevent electricity theft where consumers grounded returning circuits rather than connecting them to neutral to inhibit electrical meters from registering their power consumption. The most common modern application is as a safety device to detect small leakage currents (typically 5–30mA) and disconnecting quickly enough (<30 milliseconds) to prevent device damage or electrocution. They are an essential part of the automatic disconnection of supply (ADS), i.e. to switch off when a fault develops, rather than rely on human intervention, one of the essential tenets of modern electrical practice. To reduce the risk of electrocution, RCDs should operate within 25–40 milliseconds with any leakage currents (through a person) of greater than 30mA, before electric shock can drive the heart into ventricular fibrillation, the most common cause of death through electric shock. By contrast, conventional circuit breakers or fuses only break the circuit when the total current is excessive (which may be thousands of times the leakage current an RCD responds to).
A small leakage current, such as through a person, can be a very serious fault, but would probably not increase the total current enough for a fuse or overload circuit breaker to isolate the circuit, and not fast enough to save a life. RCDs operate by measuring the current balance between two conductors using a differential current transformer. This measures the difference between current flowing through live and neutral. If these do not sum to zero, there is a leakage of current to somewhere else (to Earth/ground or to another circuit), and the device will open its contacts. Operation does not require a fault current to return through the earth wire in the installation; the trip will operate just as well if the return path is through plumbing or contact with the ground or anything else. Automatic disconnection and a measure of shock protection is therefore still provided even if the earth wiring of the installation is damaged or incomplete. For an RCD used with three-phase power, all three live conductors and the neutral (if fitted) must pass through the current transformer. Application Electrical plugs with incorporated RCD are sometimes installed on appliances that might be considered to pose a particular safety hazard, for example long extension leads, which might be used outdoors, or garden equipment or hair dryers, which may be used near a bath or sink. Occasionally an in-line RCD may be used to serve a similar function to one in a plug. By putting the RCD in the extension lead, protection is provided at whatever outlet is used even if the building has old wiring, such as knob and tube, or wiring that does not contain a grounding conductor. The in-line RCD can also have a lower tripping threshold than the building to further improve safety for a specific electrical device. In North America, GFI receptacles can be used in cases where there is no grounding conductor, but they must be labeled as "no equipment ground". This is referenced in the National Electric Code section 406 (D) 2; however, codes change, and someone should always consult a licensed professional and their local building and safety departments. An ungrounded GFI receptacle will trip using the built-in "test" button, but will not trip using a GFI test plug, because the plug tests by passing a small current from live to the non-existent ground. Despite this, only one GFCI receptacle at the beginning of each circuit is necessary to protect downstream receptacles. There does not appear to be a risk of using multiple GFI receptacles on the same circuit, though it is considered redundant. In Europe, RCDs can fit on the same DIN rail as the miniature circuit breakers; much like in miniature circuit breakers, the busbar arrangements in consumer units and distribution boards provide protection for anything downstream. RCBO A pure RCD will detect imbalance in the currents of the supply and return conductors of a circuit. But it cannot protect against overload or short circuit like a fuse or a miniature circuit breaker (MCB) does (except for the special case of a short circuit from live to ground, not to neutral). However, an RCD and an MCB often come integrated in the same device, thus being able to detect both supply imbalance and overload current. Such a device is called an RCBO, for residual-current circuit breaker with overcurrent protection, in Europe and Australia, and a GFCI breaker, for ground fault circuit interrupter, in the United States and Canada.
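The current-balance principle described above can be summarised in a few lines of code. The sketch below is a conceptual illustration only, with made-up sample currents and a 30mA threshold; a real device performs this comparison magnetically in the current transformer, not in software.

```python
# Conceptual model of the current-balance test performed (magnetically)
# by the differential current transformer in an RCD. Sign convention:
# current flowing out on a live conductor is positive, current returning
# on the neutral is negative. Threshold and sample values are illustrative.

TRIP_THRESHOLD_A = 0.030  # 30 mA, a common rating for shock protection

def residual_current(conductor_currents):
    """Net (residual) current over all conductors through the sensor, in amperes."""
    return sum(conductor_currents)

def should_trip(conductor_currents, threshold=TRIP_THRESHOLD_A):
    return abs(residual_current(conductor_currents)) >= threshold

# Healthy single-phase circuit: 10 A out on live, 10 A back on neutral.
print(should_trip([10.0, -10.0]))             # False - currents balance

# Earth fault: 40 mA leaks away, so only 9.96 A returns on the neutral.
print(should_trip([10.0, -9.96]))             # True - 40 mA residual

# Three-phase case: all three live conductors and the neutral must pass
# through the transformer, and it is their sum that matters.
print(should_trip([6.0, 5.5, 4.5, -15.965]))  # True - 35 mA residual
```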
Typical design The diagram depicts the internal mechanism of a residual-current device (RCD). The device is designed to be wired in-line in an appliance power cord. It is rated to carry a maximum current of 13A and is designed to trip on a leakage current of 30mA. This is an active RCD; that is, it latches electrically and therefore trips on power failure, a useful feature for equipment that could be dangerous on unexpected re-energisation. Some early RCDs were entirely electromechanical and relied on finely balanced sprung over-centre mechanisms driven directly from the current transformer. As these are hard to manufacture to the required accuracy and prone to drift in sensitivity both from pivot wear and lubricant dry-out, the electronically-amplified type with a more robust solenoid part as illustrated are now dominant. In the internal mechanism of an RCD, the incoming supply live and neutral conductors are connected to the terminals at (1), and the outgoing load conductors are connected to the terminals at (2). The earth conductor (not shown) is connected through from supply to load uninterrupted. When the reset button (3) is pressed, the contacts ((4) and another, hidden behind (5)) close, allowing current to pass. The solenoid (5) keeps the contacts closed when the reset button is released. The sense coil (6) is a differential current transformer which surrounds (but is not electrically connected to) the live and neutral conductors. In normal operation, all the current flows in and out of the live and neutral conductors. The currents in the two conductors are equal and opposite and cancel each other out. Any fault to earth (for example caused by a person touching a live component in the attached appliance) causes some of the current to take a different path, with some of the neutral current diverted, which means that there is then an imbalance in the current between the live and neutral conductors (single-phase), or, more generally, a nonzero sum of currents among various conductors (for example, three live conductors and one neutral conductor), within the RCD. This difference causes a magnetic flux in the toroidal sense coil (6), which, if sufficiently large, activates the relay (5), forcing the contacts (4) apart and thus cutting off the electricity supply to the appliance. In some designs a power failure may also cause the switch contacts to open, causing the safe trip-on-power-failure behaviour mentioned above. The test button (8) allows the correct operation of the device to be verified by passing a small current through the orange test wire (9). This simulates a fault by creating a deliberate imbalance in the sense coil. If the RCD does not trip when this button is pressed, then the device must be replaced. RCD with integral overcurrent protection (RCBO or GFCI breaker) Residual-current and over-current protection may be combined in one device for installation into the service panel; this device is known as a GFCI (Ground-Fault Circuit Interrupter) breaker in the US and Canada, and as an RCBO (residual-current circuit breaker with over-current protection) in Europe and Australia. They are effectively a combination of an RCD and an MCB. In the US, GFCI breakers are more expensive than GFCI outlets. As well as requiring both live and neutral inputs and outputs (or full three-phase), some RCDs/GFCIs require a functional earth (FE) connection.
This serves to provide both EMC immunity and to reliably operate the device if the input-side neutral connection is lost but live and earth remain. For reasons of space, many devices, especially in DIN rail format, use flying leads rather than screw terminals, especially for the neutral input and FE connections. Additionally, because of the small form factor, the output cables of some models (Eaton/MEM) are used to form the primary winding of the RCD part, and the outgoing circuit cables must be led through a specially dimensioned terminal tunnel with the current transformer part around it. This can lead to incorrect failed trip results when testing with meter probes from the screw heads of the terminals, rather than from the final circuit wiring. Having one RCD feeding another is generally unnecessary, provided they have been wired properly. One exception is the case of a TT earthing system, where the earth loop impedance may be high, meaning that a ground fault might not cause sufficient current to trip an ordinary circuit breaker or fuse. In this case a special 100mA (or greater) trip current time-delayed RCD is installed, covering the whole installation, and then more sensitive RCDs should be installed downstream of it for sockets and other circuits that are considered high-risk. RCD with additional arc fault protection circuitry In addition to ground fault circuit interrupters (GFCIs), arc-fault circuit interrupters (AFCI) are important as they offer added protection from potentially hazardous arc faults resulting from damage in branch circuit wiring as well as extensions to branches such as appliances and cord sets. By detecting arc faults and responding by interrupting power, AFCIs help reduce the likelihood of the home's electrical system being an ignition source of a fire. Dual function AFCI/GFCI devices offer both electrical fire prevention and shock prevention in one device making them a solution for many rooms in the home. Characteristics Differences in disconnection actions Major differences exist regarding the manner in which an RCD unit will act to disconnect the power to a circuit or appliance. There are four situations in which different types of RCD units are used: At the consumer power distribution level, usually in conjunction with an RCBO resettable circuit breaker; Built into a wall socket; Plugged into a wall socket, which may be part of a power-extension cable; and Built into the cord of a portable appliance, such as those intended to be used in outdoor or wet areas. The first three of those situations relate largely to usage as part of a power-distribution system and are almost always of the passive or latched variety, whereas the fourth relates solely to specific appliances and are always of the active or non-latching variety. Active means prevention of any re-activation of the power supply after any inadvertent form of power outage, as soon as the mains supply becomes re-established; latch relates to a switch inside the unit housing the RCD that remains as set following any form of power outage, but has to be reset manually after the detection of an error condition. In the fourth situation, it would be deemed to be highly undesirable, and probably very unsafe, for a connected appliance to automatically resume operation after a power disconnection, without having the operator in attendance; as such, manual reactivation of the RCD is necessary.
The difference between the modes of operation of the essentially two different types of RCD functionality is that the operation for power distribution purposes requires the internal latch to remain set within the RCD after any form of power disconnection caused either by the user turning the power off or by a power outage; such arrangements are particularly applicable for connections to refrigerators and freezers. Situation two is mostly installed just as described above, but some wall socket RCDs are available to fit the fourth situation, often by operating a switch on the fascia panel. RCDs for the first and third situation are most commonly rated at 30mA and 40ms. For the fourth situation, there is generally a greater choice of ratings available – generally all lower than the other forms, but lower values often result in more nuisance tripping. Sometimes users apply protection in addition to one of the other forms when they wish to override those with a lower rating. It may be wise to have a selection of type four RCDs available, because connections made under damp conditions or using lengthy power cables are more prone to trip-out when any of the lower ratings of RCD are used; ratings as low as 10mA are available. Number of poles and pole terminology The number of poles represents the number of conductors that are interrupted when a fault condition occurs. RCDs used on single-phase AC supplies (two current paths), such as domestic power, are usually one- or two-pole designs, also known as single- and double-pole. A single-pole RCD interrupts only the energized conductor, while a double-pole RCD interrupts both the energized and return conductors. (In a single-pole RCD, the return conductor is usually anticipated to be at ground potential at all times and therefore safe on its own). RCDs with three or more poles can be used on three-phase AC supplies (three current paths) or to disconnect the neutral conductor as well, with four-pole RCDs used to interrupt three-phase and neutral supplies. Specially designed RCDs can also be used with both AC and DC power distribution systems. The following terms are sometimes used to describe the manner in which conductors are connected and disconnected by an RCD: Single-pole or one-pole – the RCD will disconnect the energized wire only. Double-pole or two-pole – the RCD will disconnect both the energized and return wires. 1+N and 1P+N – non-standard terms used in the context of RCBOs, at times used differently by different manufacturers. Typically these terms may signify that the return (neutral) conductor is an isolating pole only, without a protective element (an unprotected but switched neutral), that the RCBO provides a conducting path and connectors for the return (neutral) conductor but this path remains uninterrupted when a fault occurs (sometimes known as "solid neutral"), or that both conductors are disconnected for some faults (such as RCD detected leakage) but only one conductor is disconnected for other faults (such as overload). Sensitivity RCD sensitivity is expressed as the rated residual operating current, noted IΔn. Preferred values have been defined by the IEC, thus making it possible to divide RCDs into three groups according to their IΔn value: high sensitivity (HS): 5 – 10 – 30mA (for direct-contact or life injury protection), medium sensitivity (MS): 100 – 300 – 500 – 1000mA (for fire protection), low sensitivity (LS): 3 – 10 – 30A (typically for protection of machines). The 5mA sensitivity is typical for GFCI outlets.
Break time (response speed) There are two groups of devices. 'G' (general use) 'instantaneous' RCDs have no intentional time delay. They must never trip at one-half of the nominal current rating, but must trip within 200 milliseconds for rated current, and within 40 milliseconds at five times rated current. 'S' (selective) or 'T' (time-delayed) RCDs have a short time delay. They are typically used at the origin of an installation for fire protection to discriminate with 'G' devices at the loads, and in circuits containing surge suppressors. They must not trip at one-half of rated current. They provide at least 130 milliseconds delay of tripping at rated current, 60 milliseconds at twice rated, and 50 milliseconds at five times rated. The maximum break time is 500ms at rated current, 200ms at twice rated, and 150ms at five times rated. Programmable earth fault relays are available to allow co-ordinated installations to minimise outage. For example, a power distribution system might have a 300mA, 300ms device at the service entry of a building, feeding several 100mA 'S' type at each sub-board, and 30mA 'G' type for each final circuit. In this way, a failure of a device to detect the fault will eventually be cleared by a higher-level device, at the cost of interrupting more circuits. Type (types of leakage current detected) IEC Standard 60755 (General requirements for residual current operated protective devices) defines the following types of RCD depending on the waveforms and frequency of the fault current: Type AC RCDs trip on alternating sinusoidal residual current, suddenly applied or smoothly increasing. Type A RCDs trip on alternating sinusoidal residual current and on residual pulsating direct current, suddenly applied or smoothly increasing. Type F RCDs trip in the same conditions as Type A and in addition: For composite residual currents, whether suddenly applied or slowly rising, intended for circuits supplied between live and neutral or between live and an earthed middle conductor; For residual pulsating direct currents superimposed on smooth direct current. Type B RCDs trip in the same conditions as Type F and in addition: For residual sinusoidal alternating currents up to 1kHz; For residual alternating currents superimposed on a smooth direct current; For residual pulsating direct currents superimposed on a smooth direct current; For residual pulsating rectified direct current which results from two or more phases; For residual smooth direct currents, whether suddenly applied or slowly increased, independent of polarity. The BEAMA RCD Handbook notes that types F and B have been introduced because some designs of types AC and A can be disabled if a DC current is present that saturates the core of the detector. Directionality RCDs may be uni-directional or bi-directional. Bi-directional devices have recently been introduced to address the problem of traditional uni-directional devices being unsuitable for certain configurations of home generation systems (PV). Surge current resistance The surge current refers to the peak current an RCD is designed to withstand using a test impulse of specified characteristics. The IEC 61008 and IEC 61009 standards require that RCDs withstand a 200A "ring wave" impulse. The standards also require RCDs classified as "selective" to withstand a 3000A impulse surge current of specified waveform. Testing of correct operation RCDs can and should be tested with a built-in test button to confirm basic functionality on a regular basis.
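The 'G' and 'S' timing limits listed above lend themselves to a small compliance check. The sketch below encodes only the break-time figures quoted in this section as a lookup table and flags a measured trip time as pass or fail; it is a simplified illustration, not a reproduction of the full IEC test tables or procedures.

```python
# Break-time limits (milliseconds) as quoted above for general-use ('G')
# and selective ('S') RCDs, indexed by (type, multiple of rated residual
# current). 'S' devices also carry minimum delays used for discrimination.
# Only the figures quoted in the text are encoded here.

MAX_TRIP_MS = {
    ("G", 1): 200, ("G", 5): 40,
    ("S", 1): 500, ("S", 2): 200, ("S", 5): 150,
}
MIN_TRIP_MS = {
    ("S", 1): 130, ("S", 2): 60, ("S", 5): 50,
}

def trip_time_ok(rcd_type, current_multiple, measured_ms):
    """True if a measured trip time satisfies the quoted limits."""
    key = (rcd_type, current_multiple)
    if key not in MAX_TRIP_MS:
        raise KeyError(f"no quoted limit for {key}")
    if measured_ms > MAX_TRIP_MS[key]:
        return False                                # too slow for safety
    return measured_ms >= MIN_TRIP_MS.get(key, 0)   # 'S' must not trip early

print(trip_time_ok("G", 5, 25))   # True: 25 ms is within the 40 ms limit
print(trip_time_ok("S", 1, 90))   # False: trips before the 130 ms minimum
print(trip_time_ok("S", 2, 180))  # True: between 60 ms and 200 ms
```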
If the switch mechanism is not operated for a long period, it can become liable to getting stuck. This is not generally a problem for an overcurrent circuit breaker, because the current involved when one trips can produce enough force to break it free if stuck; however, an RCD is designed to trip on a very small current, which can exert far too weak a force to break a stuck switch free, so the safety device would fail to operate. By operating the test button on a regular basis it can be seen whether or not a device is getting stuck. If so, then manually operating the switch a few times may free it up temporarily and replacement can be considered. More thorough testing performed by a suitably competent person as part of a periodic test of an electrical installation might include checking what amount of current is required to make each device trip, and how quickly they trip, to check that they are performing within specification. Limitations A residual-current circuit breaker cannot remove all risk of electric shock or fire. In particular, an RCD alone will not detect overload conditions, phase-to-neutral short circuits or phase-to-phase short circuits (see three-phase electric power). Over-current protection (fuses or circuit breakers) must be provided. Circuit breakers that combine the functions of an RCD with overcurrent protection respond to both types of fault. These are known as RCBOs and are available in 2-, 3- and 4-pole configurations. RCBOs will typically have separate circuits for detecting current imbalance and for overload current but use a common interrupting mechanism. Some RCBOs have separate levers for residual-current and over-current protection or use a separate indicator for ground faults. An RCD helps to protect against electric shock when current flows through a person from a phase (live / hot) to earth. It cannot protect against electric shock when current flows through a person from phase to neutral or from phase to phase, for example where a finger touches both live and neutral contacts in a light fitting; a device cannot distinguish current flow through an intended load from flow through a person, though the RCD may still trip if the person is in contact with the ground (earth), as some current may still pass through the person's finger and body to earth. Whole installations on a single RCD, common in older installations in the UK, are prone to "nuisance" trips that can cause secondary safety problems with loss of lighting and defrosting of food. Frequently the trips are caused by deteriorating insulation on heater elements, such as water heaters and cooker elements or rings. Although regarded as a nuisance, the fault is with the deteriorated element and not the RCD: replacement of the offending element will resolve the problem, but replacing the RCD will not. RCDs are not selective; for example, when a ground fault occurs on a circuit protected by a 30 mA IΔn RCD in series with a 300 mA IΔn RCD, either or both may trip. Special time-delayed types are available to provide selectivity in such installations. In the case of RCDs that need a power supply, a dangerous condition can arise if the neutral wire is broken or switched off on the supply side of the RCD, while the corresponding live conductor remains uninterrupted. The tripping circuit needs power to work and does not trip when the power supply fails. Connected equipment will not work without a neutral, but the RCD cannot protect people from contact with the energized wire.
For this reason circuit breakers must be installed in a way that ensures that the neutral wire cannot be switched off unless the live conductor is also switched off at the same time. Where there is a requirement for switching off the neutral wire, two-pole breakers (or four-pole for 3-phase) must be used. To provide some protection with an interrupted neutral, some RCDs and RCBOs are equipped with an auxiliary connection wire that must be connected to the earth busbar of the distribution board. This either enables the device to detect the missing neutral of the supply, causing the device to trip, or provides an alternative supply path for the tripping circuitry, enabling it to continue to function normally in the absence of the supply neutral. Related to this, a single-pole RCD/RCBO interrupts the energized conductor only, while a double-pole device interrupts both the energized and return conductors. Usually this is a standard and safe practice, since the return conductor is held at ground potential anyway. However, because of its design, a single-pole RCD will not isolate or disconnect all relevant wires in certain uncommon situations, for example where the return conductor is not being held, as expected, at ground potential, or where current leakage occurs between the return and earth conductors. In these cases, a double-pole RCD will offer protection, since the return conductor would also be disconnected. History and nomenclature The world's first high-sensitivity earth leakage protection system (i.e. a system capable of protecting people from the hazards of direct contact between a live conductor and earth) was a second-harmonic magnetic amplifier core-balance system, known as the magamp, developed in South Africa by Henri Rubin. Electrical hazards were of great concern in South African gold mines, and Rubin, an engineer at the company C.J. Fuchs Electrical Industries of Alberton Johannesburg, initially developed a cold-cathode system in 1955 which operated at 525V and had a tripping sensitivity of 250mA. Prior to this, core balance earth leakage protection systems operated at sensitivities of about 10A. The cold cathode system was installed in a number of gold mines and worked reliably. However, Rubin began working on a completely novel system with greatly improved sensitivity, and by early 1956, he had produced a prototype second-harmonic magnetic amplifier-type core balance system (South African Patent No. 2268/56 and Australian Patent No. 218360). The prototype magamp was rated at 220V, 60A and had an internally adjustable tripping sensitivity of 12.5–17.5mA. Very rapid tripping times were achieved through a novel design, and this combined with the high sensitivity was well within the safe current–time envelope for ventricular fibrillation determined by Charles Dalziel of the University of California, Berkeley, United States, who had estimated electrical shock hazards in humans. This system, with its associated circuit breaker, included overcurrent and short-circuit protection. In addition, the original prototype was able to trip at a lower sensitivity in the presence of an interrupted neutral, thus protecting against an important cause of electrical fire. Following the accidental electrocution of a woman in a domestic accident at the Stilfontein gold mining village near Johannesburg, a few hundred F.W.J. 20mA magamp earth leakage protection units were installed in the homes of the mining village during 1957 and 1958. F.W.J.
Electrical Industries, which later changed its name to FW Electrical Industries, continued to manufacture 20mA single phase and three phase magamp units. At the time that he worked on the magamp, Rubin also considered using transistors in this application, but concluded that the early transistors then available were too unreliable. However, with the advent of improved transistors, the company that he worked for and other companies later produced transistorized versions of earth leakage protection. In 1961, Dalziel, working with Rucker Manufacturing Co., developed a transistorized device for earth leakage protection which became known as a ground fault circuit interrupter (GFCI), sometimes colloquially shortened to ground fault interrupter (GFI). This name for high-sensitivity earth leakage protection is still in common use in the United States. In the early 1970s most North American GFCI devices were of the circuit breaker type. GFCIs built into the outlet receptacle became commonplace beginning in the 1980s. The circuit breaker type, installed into a distribution panel, suffered from accidental trips mainly caused by poor or inconsistent insulation on the wiring. False trips were frequent when insulation problems were compounded by long circuit lengths. So much current leaked along the length of the conductors' insulation that the breaker might trip with the slightest increase of current imbalance. The migration to outlet-receptacle–based protection in North American installations reduced the accidental trips and provided obvious verification that wet areas were under electrical-code–required protection. European installations continue to use primarily RCDs installed at the distribution board, which provides protection in case of damage to fixed wiring. In Europe socket-based RCDs are primarily used for retrofitting. Regulation and adoption Regulations differ widely from country to country. A single RCD installed for an entire electrical installation provides protection against shock hazards to all circuits, however, any fault may cut all power to the premises. A solution is to create groups of circuits, each with an RCD, or to use an RCBO for each individual circuit. Australia In Australia, residual current devices have been mandatory on power circuits since 1991 and on light circuits since 2000. In Queensland specifically, residual power devices have been compulsory for all new homes since 1992. A minimum of two RCDs is required per domestic installation. All socket outlets and lighting circuits are to be distributed over circuit RCDs. A maximum of three subcircuits only, may be connected to a single RCD. In Australia, the RCD testing procedure must meet a set standard – this is the AS/NZS 3760:2010 in-service safety inspection and testing of electrical equipment. Austria Austria regulated residual current devices in the ÖVE E8001-1/A1:2013-11-01 norm (most recent revision). It has been required in private housing since 1980. The maximum activation time must not exceed 0.4 seconds. It needs to be installed on all circuits with power plugs with a maximum leakage current of 30mA and a maximum rated current of 16A. Additional requirements are placed on circuits in wet areas, construction sites and commercial buildings. Belgium Belgian domestic installations are required to be equipped with a 300mA residual current device that protects all circuits. Furthermore, at least one 30mA residual current device is required that protects all circuits in "wet rooms" (e.g. 
bathroom, kitchen) as well as circuits that power certain "wet" appliances (washing machine, tumble dryer, dishwasher). Electrical underfloor heating is required to be protected by a 100mA RCD. These RCDs must be of type A. Brazil Since NBR 5410 (1997) residual current devices and grounding are required for new construction or repair in wet areas, outdoor areas, interior outlets used for external appliances, or in areas where the presence of water is more likely, such as bathrooms and kitchens. Denmark Denmark requires 30mA RCDs on all circuits that are rated for less than 20 A (circuits at greater rating are mostly used for distribution). RCDs became mandatory in 1975 for new buildings, and then for all buildings in 2008. France According to the NF C 15-100 regulation (1911 -> 2002), a general RCD not exceeding 100 to 300mA at the origin of the installation is mandatory. Moreover, all circuits must also include 30mA protections in the user's distribution board, with each RCD protecting up to 8 circuit breakers, usually on the same DIN rail (electric panels of 1 to 4 DIN rails are the norm for residential installations). Before 1991, this 30mA protection was mandatory only in rooms where there is water, high power or sensitive equipment (bathrooms, kitchens, IT...). The type of RCD required (A, AC, F) depends upon the type of the equipment that will be connected and the maximum power of the socket outlet. Minimal distances between electrical devices and water or the floor are described and mandatory. Germany Since 1 May 1984, RCDs have been mandatory for all rooms with a bath tub or a shower. Since June 2007 Germany requires the use of RCDs with a trip current of no more than 30mA on sockets rated up to 32A which are for general use. (DIN Verband der Elektrotechnik, Elektronik und Informationstechnik (VDE) 0100-410 Nr. 411.3.3). Since 1987, type "AC" RCDs have not been permitted for protecting humans against electric shock; Type "A" or Type "B" devices must be used. India According to Regulation 36 of the Electricity Regulations 1990 a) For a place of public entertainment, protection against earth leakage current must be provided by a residual current device of sensitivity not exceeding 10mA. b) For a place where the floor is likely to be wet or where the wall or enclosure is of low electrical resistance, protection against earth leakage current must be provided by a residual current device of sensitivity not exceeding 10mA. c) For an installation where hand-held equipment, apparatus or appliance is likely to be used, protection against earth leakage current must be provided by a residual current device of sensitivity not exceeding 30mA. d) For an installation other than the installation in (a), (b) and (c), protection against earth leakage current must be provided by a residual current device of sensitivity not exceeding 100mA. Italy The Italian law (n. 46 March 1990) prescribes RCDs with no more than 30mA residual current (informally called "salvavita"—life saver, after early BTicino models, or differential circuit breaker for the mode of operation) for all domestic installations to protect all the lines. The law was recently updated to mandate at least two separate RCDs for separate domestic circuits. Short-circuit and overload protection has been compulsory since 1968. Malaysia In the latest guidelines for electrical wiring in residential buildings (2008) handbook, the overall residential wiring needs to be protected by a residual current device of sensitivity not exceeding 100mA.
Additionally, all power sockets need to be protected by a residual current device of sensitivity not exceeding 30mA and all equipment in wet places (water heater, water pump) need to be protected by a residual current device of sensitivity not exceeding 10mA. New Zealand From January 2003, all new circuits originating at the switchboard supplying lighting or socket outlets (power points) in domestic buildings must have RCD protection. Residential facilities (such as boarding houses, hospitals, hotels and motels) will also require RCD protection for all new circuits originating at the switchboard supplying socket outlets. These RCDs will normally be located at the switchboard. They will provide protection for all electrical wiring and appliances plugged into the new circuits. North America In North America socket-outlets located in places where an easy path to ground exists—such as wet areas and rooms with uncovered concrete floors—must be protected by a GFCI. The US National Electrical Code has required devices in certain locations to be protected by GFCIs since the 1960s. Beginning with underwater swimming pool lights (1968) successive editions of the code have expanded the areas where GFCIs are required to include: construction sites (1974), bathrooms and outdoor areas (1975), garages (1978), areas near hot tubs or spas (1981), hotel bathrooms (1984), kitchen counter sockets (1987), crawl spaces and unfinished basements (1990), near wet bar sinks (1993), near laundry sinks (2005), in laundry rooms (2014) and in kitchens (2023). GFCIs are commonly available as an integral part of a socket or a circuit breaker installed in the distribution panelboard. GFCI sockets invariably have rectangular faces and accept so-called Decora face plates, and can be mixed with regular outlets or switches in a multi-gang box with standard cover plates. In both Canada and the US older two-wire, ungrounded NEMA 1 sockets may be replaced with NEMA 5 sockets protected by a GFCI (integral with the socket or with the corresponding circuit breaker) in lieu of rewiring the entire circuit with a grounding conductor. In such cases the sockets must be labeled "no equipment ground" and "GFCI protected"; GFCI manufacturers typically provide tags for the appropriate installation description. GFCIs approved for protection against electric shock trip at 5mA within 25ms. A GFCI device which protects equipment (not people) is allowed to trip as high as 30mA of current; this is known as an Equipment Protective Device (EPD). RCDs with trip currents as high as 500mA are sometimes deployed in environments (such as computing centers) where a lower threshold would carry an unacceptable risk of accidental trips. These high-current RCDs serve for equipment and fire protection instead of protection against the risks of electrical shocks. In the United States the American Boat and Yacht Council requires both GFCIs for outlets and Equipment Leakage Circuit Interrupters (ELCI) for the entire boat. The difference is GFCIs trip on 5mA of current whereas ELCIs trip on 30mA after up to 100ms. The greater values are intended to provide protection while minimizing nuisance trips. Norway In Norway, it has been required in all new homes since 2002, and on all new sockets since 2006. This applies to 32A sockets and below. The RCD must trigger after a maximum 0.4 seconds for 230V circuits, or 0.2 seconds for 400V circuits. South Africa South Africa mandated the use of Earth Leakage Protection devices in residential environments (e.g. 
houses, flats, hotels, etc.) from October 1974, with regulations being refined in 1975 and 1976. Devices need to be installed in new premises and when repairs are carried out. Protection is required for power outlets and lighting, with the exception of emergency lighting that should not be interrupted. The standard device used in South Africa is a hybrid of ELPD and RCCB. Switzerland According to the NIBT regulation, the use of RCD type AC is forbidden (since 2010). Taiwan Taiwan requires earth leakage circuit breakers on receptacle circuits in washrooms and on balconies, and on kitchen receptacles no more than 1.8 metres from the sink. This requirement also applies to water heater circuits in washrooms and to circuits involving devices in water, lights on metal frames, public drinking fountains, and so on. In principle, ELCBs should be installed on branch circuits, with a trip current of no more than 30mA within 0.1 second, according to Taiwanese law. Turkey Turkey has required the use of RCDs rated at no more than 30mA and 300mA in all new homes since 2004. This rule was introduced in RG-16/06/2004-25494. United Kingdom The current (18th) edition of the IET Electrical Wiring Regulations requires that all socket outlets in most installations have RCD protection, though there are exemptions. Non-armoured cables buried in walls must also be RCD protected (again with some specific exemptions). Provision of RCD protection for circuits present in bathrooms and shower rooms reduces the requirement for supplementary bonding in those locations. Two RCDs may be used to cover the installation, with upstairs and downstairs lighting and power circuits spread across both RCDs. When one RCD trips, power is maintained to at least one lighting and power circuit. Other arrangements, such as the use of RCBOs, may be employed to meet the regulations. The new requirements for RCDs do not affect most existing installations unless they are rewired, the distribution board is changed, a new circuit is installed, or alterations are made such as additional socket outlets or new cables buried in walls. RCDs used for shock protection must be of the 'immediate' operation type (not time-delayed) and must have a residual current sensitivity of no greater than 30mA. If spurious tripping would cause a greater problem than the risk of the electrical accident the RCD is supposed to prevent (examples might be a supply to a critical factory process, or to life support equipment), RCDs may be omitted, providing affected circuits are clearly labelled and the balance of risks considered; this may include the provision of alternative safety measures. The previous edition of the regulations required use of RCDs for socket outlets that were liable to be used by outdoor appliances. Normal practice in domestic installations was to use a single RCD to cover all the circuits requiring RCD protection (typically sockets and showers) but to have some circuits (typically lighting) not RCD protected. This was to avoid a potentially dangerous loss of lighting should the RCD trip. Protection arrangements for other circuits varied. To implement this arrangement it was common to install a consumer unit incorporating an RCD in what is known as a split load configuration, where one group of circuit breakers is supplied directly from the main switch (or time delay RCD in the case of a TT earth) and a second group of circuits is supplied via the RCD.
This arrangement had the recognised problems that cumulative earth leakage currents from the normal operation of many items of equipment could cause spurious tripping of the RCD, and that tripping of the RCD would disconnect power from all the protected circuits. See also Domestic AC power plugs and sockets Electrical injury Insulation monitoring device Protective relay Arc-fault circuit interrupter Isolation transformer Notes References External links GFCIs Fact Sheet (US Consumer Product Safety Commission) Test of RCCB as per IEC 61008/61009 (Residual Current Device Testing) Why RCD is tripping? - Explanation of nuisance tripping causes Testing of electrical leads and residual current devices (RCDs) (Government of Western Australia Department of Health) Electrical wiring Safety switches Electrical safety
Residual-current device
[ "Physics", "Engineering" ]
8,799
[ "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring" ]
530,786
https://en.wikipedia.org/wiki/Codebook
A codebook is a type of document used for gathering and storing cryptography codes. Originally, codebooks were often literally books, but today "codebook" is a byword for the complete record of a series of codes, regardless of physical format. Cryptography In cryptography, a codebook is a document used for implementing a code. A codebook contains a lookup table for coding and decoding; each word or phrase has one or more strings which replace it. To decipher messages written in code, corresponding copies of the codebook must be available at either end. The distribution and physical security of codebooks present a special difficulty in the use of codes compared to the secret information used in ciphers, the key, which is typically much shorter. The United States National Security Agency documents sometimes use codebook to refer to block ciphers; compare their use of combiner-type algorithm to refer to stream ciphers. Codebooks come in two forms, one-part or two-part: In one-part codes, the plaintext words and phrases and the corresponding code words are in the same alphabetical order. They are organized similarly to a standard dictionary. Such codes are half the size of two-part codes but are more vulnerable, since an attacker who recovers some code word meanings can often infer the meaning of nearby code words. One-part codes may be used simply to shorten messages for transmission or have their security enhanced with superencryption methods, such as adding a secret number to numeric code words. In two-part codes, one part is for converting plaintext to ciphertext, the other for the opposite purpose. They are usually organized similarly to a language translation dictionary, with plaintext words (in the first part) and ciphertext words (in the second part) presented like dictionary headwords. The earliest known use of a codebook system was by Gabriele de Lavinde in 1379, working for the Antipope Clement VII. Two-part codebooks go back at least as far as Antoine Rossignol in the 1600s. From the 15th century until the middle of the 19th century, nomenclators (named after the nomenclator) were the most widely used cryptographic method. Codebooks with superencryption were the most widely used cryptographic method of World War I. The JN-25 code used in World War II used a codebook of 30,000 code groups superencrypted with 30,000 random additives. The book used in a book cipher or the book used in a running key cipher can be any book shared by sender and receiver and is different from a cryptographic codebook. Social sciences In social sciences, a codebook is a document containing a list of the codes used in a set of data to refer to variables and their values, for example locations, occupations, or clinical diagnoses. Data compression Codebooks were also used in 19th- and 20th-century commercial codes for the non-cryptographic purpose of data compression. Codebooks are used in relation to precoding and beamforming in mobile networks such as 5G and LTE. The usage is standardized by 3GPP, for example in the document TS 38.331, NR; Radio Resource Control (RRC); Protocol specification. See also Block cipher modes of operation The Code Book References Cryptography Social research
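The lookup-table structure of a codebook is easy to illustrate in code. The sketch below builds a toy two-part code, with one table for encoding and its inverse for decoding; the plaintext phrases and code groups are invented for the example and have no historical source.

```python
# Toy two-part codebook: the encoding table maps plaintext phrases to code
# groups, and the decoding table is its inverse. In a genuine two-part code
# the code groups are assigned at random, so the two parts must be kept as
# separate documents (unlike a one-part code, where a single alphabetical
# list serves both purposes). All entries here are invented for illustration.

import random

phrases = ["attack at dawn", "hold position", "request supplies", "retreat"]
code_groups = [f"{n:05d}" for n in random.sample(range(100000), len(phrases))]

encode_book = dict(zip(phrases, code_groups))           # part 1: plain -> code
decode_book = {c: p for p, c in encode_book.items()}    # part 2: code -> plain

def encode(message_phrases):
    return " ".join(encode_book[p] for p in message_phrases)

def decode(ciphertext):
    return [decode_book[c] for c in ciphertext.split()]

ct = encode(["attack at dawn", "request supplies"])
print(ct)          # e.g. "48213 07751" (random groups)
print(decode(ct))  # ['attack at dawn', 'request supplies']
```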
Codebook
[ "Mathematics", "Engineering" ]
689
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
531,104
https://en.wikipedia.org/wiki/Gravity%20Probe%20B
Gravity Probe B (GP-B) was a satellite-based experiment to test two unverified predictions of general relativity: the geodetic effect and frame-dragging. This was to be accomplished by measuring, very precisely, tiny changes in the direction of spin of four gyroscopes contained in an Earth-orbiting satellite at 642 km of altitude, crossing directly over the poles. The satellite was launched on 20 April 2004 on a Delta II rocket. The spaceflight phase lasted until 2005. Its aim was to measure spacetime curvature near Earth, and thereby the stress–energy tensor (which is related to the distribution and the motion of matter in space) in and near Earth. This provided a test of general relativity, gravitomagnetism and related models. The principal investigator was Francis Everitt. Initial results confirmed the expected geodetic effect to an accuracy of about 1%. The expected frame-dragging effect was similar in magnitude to the current noise level (the noise being dominated by initially unmodeled effects due to nonuniform coatings on the gyroscopes). Work continued to model and account for these sources of error, thus permitting extraction of the frame-dragging signal. By August 2008, the frame-dragging effect had been confirmed to within 15% of the expected result, and the December 2008 NASA report indicated that the geodetic effect was confirmed to be better than 0.5%. In an article published in the journal Physical Review Letters in 2011, the authors reported that analysis of the data from all four gyroscopes resulted in a geodetic drift rate of −6601.8 ± 18.3 milliarcseconds per year and a frame-dragging drift rate of −37.2 ± 7.2 milliarcseconds per year, in good agreement with the general relativity predictions of −6606.1 mas/yr and −39.2 mas/yr, respectively. Overview Gravity Probe B was a relativity gyroscope experiment funded by NASA. Efforts were led by the Stanford University physics department with Lockheed Martin as the primary subcontractor. Mission scientists viewed it as the second relativity experiment in space, following the successful launch of Gravity Probe A (GP-A) in 1976. The mission plans were to test two unverified predictions of general relativity: the geodetic effect and frame-dragging. This was to be accomplished by measuring, very precisely, tiny changes in the direction of spin of four gyroscopes contained in an Earth satellite orbiting at 642 km altitude, crossing directly over the poles. The gyroscopes were intended to be so free from disturbance that they would provide a near-perfect spacetime reference system. This would allow them to reveal how space and time are "warped" by the presence of the Earth, and by how much the Earth's rotation "drags" space-time around with it. The geodetic effect is an effect caused by space-time being "curved" by the mass of the Earth. A gyroscope's axis when parallel transported around the Earth in one complete revolution does not end up pointing in exactly the same direction as before. The angle "missing" may be thought of as the amount the gyroscope "leans over" into the slope of the space-time curvature. A more precise explanation for the space curvature part of the geodetic precession is obtained by using a nearly flat cone to model the space curvature of the Earth's gravitational field. Such a cone is made by cutting out a thin "pie-slice" from a circle and gluing the cut edges together. The spatial geodetic precession is a measure of the missing "pie-slice" angle. Gravity Probe B was expected to measure this effect to an accuracy of one part in 10,000, the most stringent check on general relativistic predictions to date.
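The magnitude of the geodetic drift quoted here can be checked with a back-of-the-envelope calculation. For a gyroscope in a circular orbit, the standard weak-field textbook result for the geodetic (de Sitter) precession rate is (3/2)(GM)^(3/2)/(c^2 a^(5/2)); that formula is not taken from this article, and the sketch below evaluates it for GP-B's approximately 642 km polar orbit purely as a sanity check, not as the mission's actual data analysis.

```python
import math

# Geodetic (de Sitter) precession rate for a gyroscope in a circular orbit:
#   Omega = (3/2) * (G*M)^(3/2) / (c^2 * a^(5/2))
# Standard weak-field result, evaluated for GP-B's approximate orbit.

GM_EARTH = 3.986004e14       # Earth's gravitational parameter, m^3/s^2
C = 2.99792458e8             # speed of light, m/s
A = 6.371e6 + 642e3          # orbital radius: mean Earth radius + altitude, m

omega_rad_s = 1.5 * GM_EARTH**1.5 / (C**2 * A**2.5)

MAS_PER_RAD = math.degrees(1) * 3600 * 1000   # milliarcseconds per radian
SECONDS_PER_YEAR = 365.25 * 24 * 3600

print(f"{omega_rad_s * MAS_PER_RAD * SECONDS_PER_YEAR:.0f} mas/yr")
# Prints roughly 6640 mas/yr, within about half a percent of the
# 6606.1 mas/yr prediction quoted above; the small difference reflects
# this sketch's idealised circular-orbit, point-mass treatment.
```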
The much smaller frame-dragging effect is an example of gravitomagnetism. It is an analog of magnetism in classical electrodynamics, but caused by rotating masses rather than rotating electric charges. Previously, only two analyses of the laser-ranging data obtained by the two LAGEOS satellites, published in 1997 and 2004, claimed to have found the frame-dragging effect with an accuracy of about 20% and 10% respectively, whereas Gravity Probe B aimed to measure the frame-dragging effect to a precision of 1%. A recent analysis of Mars Global Surveyor data has claimed to confirm the frame-dragging effect to a precision of 0.5%, although the accuracy of this claim is disputed. The launch was planned for 19 April 2004 at Vandenberg Air Force Base but was scrubbed within 5 minutes of the scheduled launch window due to changing winds in the upper atmosphere. An unusual feature of the mission is that it had only a one-second launch window due to the precise orbit required by the experiment. On 20 April, at 9:57:23 AM PDT (16:57:23 UTC), the spacecraft was launched successfully. The satellite was placed in orbit at 11:12:33 AM (18:12:33 UTC) after a cruise period over the south pole and a short second burn. The mission lasted 16 months. Some preliminary results were presented at a special session during the American Physical Society meeting in April 2007. NASA initially requested a proposal for extending the GP-B data analysis phase through December 2007. The data analysis phase was further extended to September 2008 using funding from Richard Fairbank, Stanford and NASA, and beyond that point using non-NASA funding only. Final science results were reported in 2011. Experimental setup The Gravity Probe B experiment comprised four London moment gyroscopes and a reference telescope sighted on IM Pegasi, a binary star in the constellation Pegasus. In polar orbit, with the gyro spin directions also pointing toward IM Pegasi, the frame-dragging and geodetic effects came out at right angles, each gyroscope measuring both. The gyroscopes were housed in a dewar of superfluid helium, maintaining a temperature of under 2 kelvins. Near-absolute-zero temperatures were required to minimize molecular interference and to enable the lead and niobium components of the gyroscope mechanisms to become superconductive. At the time of their manufacture, the gyroscopes were the most nearly spherical objects ever made (two gyroscopes still hold that record, but third place has been taken by the silicon spheres made by the Avogadro project). Approximately the size of ping pong balls, they were perfectly round to within forty atoms (less than 10 nanometers). If one of these spheres were scaled to the size of the Earth, the tallest mountains and deepest ocean trench would measure only about 2.4 m high. The spheres were made of fused quartz and coated with an extremely thin layer of niobium. A primary concern was minimizing any influence on their spin, so the gyroscopes could never touch their containing compartment. They were held suspended with electric fields, spun up using a flow of helium gas, and their spin axes were sensed by monitoring the magnetic field of the superconductive niobium layer with SQUIDs. (A spinning superconductor generates a magnetic field precisely aligned with the rotation axis; see London moment.) IM Pegasi was chosen as the guide star for multiple reasons. First, it needed to be bright enough to be usable for sightings. Second, it was close to the ideal position near the celestial equator. 
Also important was its well-understood motion in the sky, knowledge of which was helped by the fact that this star emits relatively strong radio signals. In preparation for this mission, astronomers analyzed radio-based position measurements taken over several years with respect to far-distant quasars, to pin down its motion as precisely as needed. History The conceptual design for this mission was first proposed by an MIT professor, George Pugh, who was working with the U.S. Department of Defense in 1959, and was later discussed by Leonard Schiff (Stanford) in 1960 at Pugh's suggestion, based partly on a theoretical paper about detecting frame dragging that Schiff had written in 1957. It was proposed to NASA in 1961, and NASA supported the project with funds in 1964. This grant ended in 1977 after a long phase of engineering research into the basic requirements and tools for the satellite. In 1986 NASA changed plans for the Space Shuttle, which forced the mission team to switch from a shuttle-based launch design to one based on the Delta 2, and in 1995 planned tests of a prototype on a shuttle flight were cancelled as well. Gravity Probe B marks the first time that Stanford University was in control of the development and operations of a space satellite funded by NASA. The total cost of the project was about $750 million. Mission timeline This is a list of major events for the GP-B experiment.
20 April 2004: Launch of GP-B from Vandenberg AFB and successful insertion into polar orbit.
27 August 2004: GP-B entered its science phase. On mission day 129 all systems were configured to be ready for data collection, with the only exception being gyro 4, which needed further spin-axis alignment.
15 August 2005: The science phase of the mission ended and the spacecraft instruments transitioned to the final calibration mode.
26 September 2005: The calibration phase ended with liquid helium still in the dewar. The spacecraft was returned to science mode pending the depletion of liquid helium.
February 2006: Phase I of data analysis complete.
September 2006: The analysis team realised that more error analysis was necessary (particularly around the polhode motion of the gyros) than could be done in the time to April 2007, and applied to NASA for an extension of funding to the end of 2007.
December 2006: Completion of Phase III of data analysis.
14 April 2007: Announcement of the best results obtained to date. Francis Everitt gave a plenary talk at the meeting of the American Physical Society announcing initial results: "The data from the GP-B gyroscopes clearly confirm Einstein's predicted geodetic effect to a precision of better than 1 percent. However, the frame-dragging effect is 170 times smaller than the geodetic effect, and Stanford scientists are still extracting its signature from the spacecraft data."
8 December 2010: GP-B spacecraft decommissioned, left in its polar orbit.
4 May 2011: GP-B final experimental results were announced. In a public press and media event at NASA Headquarters, GP-B Principal Investigator Francis Everitt presented the final results of Gravity Probe B.
19 November 2015: Publication of the GP-B Special Volume (Volume 32, Issue 22) in the peer-reviewed journal Classical and Quantum Gravity.
On 9 February 2007, it was announced that a number of unexpected signals had been received and that these would need to be separated out before final results could be released. 
In April it was announced that the spin axes of the gyroscopes were affected by torque, in a manner that varied over time, requiring further analysis to allow the results to be corrected for this source of error. Consequently, the date for the final release of data was pushed back several times. In the data for the frame-dragging results presented at the April 2007 meeting of the American Physical Society, the random errors were much larger than the theoretically expected value and were scattered on both the positive and negative sides of a null result, causing skepticism as to whether any useful data could be extracted in the future to test this effect. In June 2007, a detailed update was released explaining the cause of the problem and the solution being worked on. Although electrostatic patches caused by non-uniform coating of the spheres were anticipated, and were thought to have been controlled for before the experiment, it was subsequently found that the final layer of the coating on the spheres left the two halves of each sphere at slightly different contact potentials, which gave the sphere an electrostatic axis. This created a classical dipole torque on each rotor, of a magnitude similar to the expected frame-dragging effect. In addition, it dissipated energy from the polhode motion by inducing currents in the housing electrodes, causing the motion to change with time. This meant that a simple time-averaged polhode model was insufficient, and a detailed orbit-by-orbit model was needed to remove the effect. As it was anticipated that "anything could go wrong", the final part of the flight mission was calibration, during which, amongst other activities, data was gathered with the spacecraft axis deliberately misaligned for 24 hours to exacerbate any potential problems. This data proved invaluable for identifying the effects. With the electrostatic torque modeled as a function of axis misalignment, and the polhode motion modeled at a sufficiently fine level, it was hoped to isolate the relativity torques to the originally expected resolution. Stanford agreed to release the raw data to the public at an unspecified date in the future. It is likely that this data will be examined by independent scientists and independently reported to the public well after the final release by the project scientists. Because future interpretations of the data by scientists outside GP-B may differ from the official results, it may take several more years for all of the data received by GP-B to be completely understood. NASA review A review by a panel of 15 experts commissioned by NASA recommended against extending the data analysis phase beyond 2008. They warned that the required reduction in noise level (due to classical torques and breaks in data collection due to solar flares) "is so large that any effect ultimately detected by this experiment will have to overcome considerable (and in our opinion, well justified) skepticism in the scientific community". Data analysis after NASA NASA funding and sponsorship of the program ended on 30 September 2008, but GP-B secured alternative funding from the King Abdulaziz City for Science and Technology in Saudi Arabia that enabled the science team to continue working at least through December 2009. On 29 August 2008, the 18th meeting of the external GP-B Science Advisory Committee was held at Stanford to report progress. 
The Stanford-based analysis group and NASA announced on 4 May 2011 that the data from GP-B indeed confirms the two predictions of Albert Einstein's general theory of relativity. The findings were published in the journal Physical Review Letters. The prospects for further experimental measurement of frame-dragging after GP-B were commented on in the journal Europhysics Letters. See also Frame-dragging Gravity Probe A Gravitomagnetism Modified Newtonian dynamics Tests of general relativity Timeline of gravitational physics and relativity References External links Gravity Probe B web site at NASA Gravity Probe B Web site at Stanford Graphic explanation of how Gravity Probe B works NASA GP-B launch site NASA article on the technologies used in Gravity Probe B General Relativistic Frame Dragging Layman's article on the project progress IOP Classical and Quantum Gravity, Volume 32, Issue 22, Special Focus Issue on Gravity Probe B Gravity Probe B Collection, The University of Alabama in Huntsville Archives and Special Collections Tests of general relativity Physics experiments Satellites orbiting Earth Spacecraft launched in 2004 Spacecraft launched by Delta II rockets
Gravity Probe B
[ "Physics" ]
3,021
[ "Experimental physics", "Physics experiments" ]
531,239
https://en.wikipedia.org/wiki/Rotational%20spectroscopy
Rotational spectroscopy is concerned with the measurement of the energies of transitions between quantized rotational states of molecules in the gas phase. The rotational spectrum (power spectral density vs. rotational frequency) of polar molecules can be measured in absorption or emission by microwave spectroscopy or by far-infrared spectroscopy. The rotational spectra of non-polar molecules cannot be observed by those methods, but can be observed and measured by Raman spectroscopy. Rotational spectroscopy is sometimes referred to as pure rotational spectroscopy to distinguish it from rotational-vibrational spectroscopy, where changes in rotational energy occur together with changes in vibrational energy, and also from ro-vibronic spectroscopy (or just vibronic spectroscopy), where rotational, vibrational and electronic energy changes occur simultaneously. For rotational spectroscopy, molecules are classified according to symmetry into spherical tops, linear molecules, and symmetric tops; analytical expressions can be derived for the rotational energy terms of these molecules. Analytical expressions can be derived for the fourth category, the asymmetric top, for rotational levels up to J = 3, but higher energy levels need to be determined using numerical methods. The rotational energies are derived theoretically by considering the molecules to be rigid rotors and then applying extra terms to account for centrifugal distortion, fine structure, hyperfine structure and Coriolis coupling. Fitting the spectra to the theoretical expressions gives numerical values of the moments of inertia, from which very precise values of molecular bond lengths and angles can be derived in favorable cases. In the presence of an electrostatic field there is Stark splitting, which allows molecular electric dipole moments to be determined. An important application of rotational spectroscopy is in the exploration of the chemical composition of the interstellar medium using radio telescopes. Applications Rotational spectroscopy has primarily been used to investigate fundamental aspects of molecular physics. It is a uniquely precise tool for the determination of molecular structure in gas-phase molecules. It can be used to establish barriers to internal rotation, such as that associated with the rotation of the methyl (CH3) group relative to the chlorophenyl (C6H4Cl) group in chlorotoluene (C7H7Cl). When fine or hyperfine structure can be observed, the technique also provides information on the electronic structures of molecules. Much of current understanding of the nature of weak molecular interactions such as van der Waals, hydrogen and halogen bonds has been established through rotational spectroscopy. In connection with radio astronomy, the technique has a key role in the exploration of the chemical composition of the interstellar medium. Microwave transitions are measured in the laboratory and matched to emissions from the interstellar medium using a radio telescope. Ammonia (NH3) was the first stable polyatomic molecule to be identified in the interstellar medium. The measurement of chlorine monoxide is important for atmospheric chemistry. Current projects in astrochemistry involve both laboratory microwave spectroscopy and observations made using modern radio telescopes such as the Atacama Large Millimeter/submillimeter Array (ALMA). Overview A molecule in the gas phase is free to rotate relative to a set of mutually orthogonal axes of fixed orientation in space, centered on the center of mass of the molecule. 
Free rotation is not possible for molecules in liquid or solid phases due to the presence of intermolecular forces. Rotation about each unique axis is associated with a set of quantized energy levels dependent on the moment of inertia about that axis and a quantum number. Thus, for linear molecules the energy levels are described by a single moment of inertia and a single quantum number, J, which defines the magnitude of the rotational angular momentum. For nonlinear molecules which are symmetric rotors (or symmetric tops; see next section), there are two moments of inertia and the energy also depends on a second rotational quantum number, K, which defines the vector component of rotational angular momentum along the principal symmetry axis. Analysis of spectroscopic data with the expressions detailed below results in quantitative determination of the value(s) of the moment(s) of inertia. From these, precise values of the molecular structure and dimensions may be obtained. For a linear molecule, analysis of the rotational spectrum provides values for the rotational constant and the moment of inertia of the molecule, and, knowing the atomic masses, can be used to determine the bond length directly. For diatomic molecules this process is straightforward. For linear molecules with more than two atoms it is necessary to measure the spectra of two or more isotopologues, such as 16O12C32S and 16O12C34S. This allows a set of simultaneous equations to be set up and solved for the bond lengths. A bond length obtained in this way is slightly different from the equilibrium bond length. This is because there is zero-point energy in the vibrational ground state, to which the rotational states refer, whereas the equilibrium bond length is at the minimum in the potential energy curve. The relation between the rotational constants is given by Bv = Be − α(v + 1/2), where v is a vibrational quantum number and α is a vibration-rotation interaction constant which can be calculated if the B values for two different vibrational states can be found. For other molecules, if the spectra can be resolved and individual transitions assigned, both bond lengths and bond angles can be deduced. When this is not possible, as with most asymmetric tops, all that can be done is to fit the spectra to three moments of inertia calculated from an assumed molecular structure. By varying the molecular structure the fit can be improved, giving a qualitative estimate of the structure. Isotopic substitution is invaluable when using this approach to the determination of molecular structure. Classification of molecular rotors In quantum mechanics the free rotation of a molecule is quantized, so that the rotational energy and the angular momentum can take only certain fixed values, which are related simply to the moment of inertia, I, of the molecule. For any molecule, there are three moments of inertia: IA, IB and IC about three mutually orthogonal axes A, B, and C with the origin at the center of mass of the system. The general convention, used in this article, is to define the axes such that IA ≤ IB ≤ IC, with the A axis corresponding to the smallest moment of inertia. Some authors, however, define the A axis as the molecular rotation axis of highest order. The particular pattern of energy levels (and, hence, of transitions in the rotational spectrum) for a molecule is determined by its symmetry. A convenient way to look at the molecules is to divide them into four different classes, based on the symmetry of their structure. 
These are spherical tops, linear molecules, symmetric tops and asymmetric tops. Selection rules Microwave and far-infrared spectra Transitions between rotational states can be observed in molecules with a permanent electric dipole moment. A consequence of this rule is that no microwave spectrum can be observed for centrosymmetric linear molecules such as N2 (dinitrogen) or HCCH (ethyne), which are non-polar. Tetrahedral molecules such as CH4 (methane), which have both a zero dipole moment and isotropic polarizability, would not have a pure rotation spectrum but for the effect of centrifugal distortion; when the molecule rotates about a 3-fold symmetry axis a small dipole moment is created, allowing a weak rotation spectrum to be observed by microwave spectroscopy. With symmetric tops, the selection rule for electric-dipole-allowed pure rotation transitions is ΔK = 0, ΔJ = ±1. Since these transitions are due to absorption (or emission) of a single photon with a spin of one, conservation of angular momentum implies that the molecular angular momentum can change by at most one unit. Moreover, the quantum number K is limited to values between +J and −J inclusive. Raman spectra For Raman spectra the molecules undergo transitions in which an incident photon is absorbed and another scattered photon is emitted. The general selection rule for such a transition to be allowed is that the molecular polarizability must be anisotropic, which means that it is not the same in all directions. Polarizability is a 3-dimensional tensor that can be represented as an ellipsoid. The polarizability ellipsoid of spherical top molecules is in fact spherical, so those molecules show no rotational Raman spectrum. For all other molecules both Stokes and anti-Stokes lines can be observed, and they have similar intensities due to the fact that many rotational states are thermally populated. The selection rule for linear molecules is ΔJ = 0, ±2. The reason for the values ±2 is that the polarizability returns to the same value twice during a rotation. The value ΔJ = 0 does not correspond to a molecular transition but rather to Rayleigh scattering, in which the incident photon merely changes direction. The selection rule for symmetric top molecules is ΔK = 0; if K = 0, then ΔJ = ±2; if K ≠ 0, then ΔJ = 0, ±1, ±2. Transitions with ΔJ = +1 are said to belong to the R series, whereas transitions with ΔJ = +2 belong to an S series. Since Raman transitions involve two photons, it is possible for the molecular angular momentum to change by two units. Units The units used for rotational constants depend on the type of measurement. With infrared spectra in the wavenumber scale, the unit is usually the inverse centimeter, written as cm−1, which is literally the number of waves in one centimeter, or the reciprocal of the wavelength in centimeters. On the other hand, for microwave spectra in the frequency scale (ν), the unit is usually the gigahertz. The relationship between these two units is derived from the expression ν = c/λ, where ν is a frequency, λ is a wavelength and c is the velocity of light. It follows that the wavenumber (the reciprocal of the wavelength in centimeters) equals ν/c, with c expressed in cm/s. As 1 GHz = 10^9 Hz, the numerical conversion can be expressed as 1 cm−1 ≈ 29.9792458 GHz, or equivalently, wavenumber in cm−1 = frequency in GHz / 29.9792458. Effect of vibration on rotation The population of vibrationally excited states follows a Boltzmann distribution, so low-frequency vibrational states are appreciably populated even at room temperatures. As the moment of inertia is higher when a vibration is excited, the rotational constants (B) decrease. Consequently, the rotation frequencies in each vibration state are different from each other. 
This can give rise to "satellite" lines in the rotational spectrum. An example is provided by cyanodiacetylene, H−C≡C−C≡C−C≡N. Further, there is a fictitious force, Coriolis coupling, between the vibrational motion of the nuclei in the rotating (non-inertial) frame. However, as long as the vibrational quantum number does not change (i.e., the molecule is in only one state of vibration), the effect of vibration on rotation is not important, because the time for vibration is much shorter than the time required for rotation. The Coriolis coupling is often negligible, too, if one is interested in low vibrational and rotational quantum numbers only. Effect of rotation on vibrational spectra Historically, the theory of rotational energy levels was developed to account for observations of vibration-rotation spectra of gases in infrared spectroscopy, which was used before microwave spectroscopy had become practical. To a first approximation, the rotation and vibration can be treated as separable, so the energy of rotation is added to the energy of vibration. For example, the rotational energy levels for linear molecules (in the rigid-rotor approximation) are F(J) = BJ(J + 1). In this approximation, the vibration-rotation wavenumbers of transitions are ν = ν0 + B′J′(J′ + 1) − B″J″(J″ + 1), where B′ and B″ are rotational constants for the upper and lower vibrational state respectively, while J′ and J″ are the rotational quantum numbers of the upper and lower levels. In reality, this expression has to be modified for the effects of anharmonicity of the vibrations, for centrifugal distortion and for Coriolis coupling. For the so-called R branch of the spectrum, J′ = J″ + 1, so that there is simultaneous excitation of both vibration and rotation. For the P branch, J′ = J″ − 1, so that a quantum of rotational energy is lost while a quantum of vibrational energy is gained. The purely vibrational transition, ΔJ = 0, gives rise to the Q branch of the spectrum. Because of the thermal population of the rotational states the P branch is slightly less intense than the R branch. Rotational constants obtained from infrared measurements are in good accord with those obtained by microwave spectroscopy, while the latter usually offers greater precision. Structure of rotational spectra Spherical top Spherical top molecules have no net dipole moment. A pure rotational spectrum cannot be observed by absorption or emission spectroscopy because there is no permanent dipole moment whose rotation can be accelerated by the electric field of an incident photon. Also the polarizability is isotropic, so that pure rotational transitions cannot be observed by Raman spectroscopy either. Nevertheless, rotational constants can be obtained by ro–vibrational spectroscopy. This occurs when a molecule is polar in the vibrationally excited state. For example, the molecule methane is a spherical top but the asymmetric C-H stretching band shows rotational fine structure in the infrared spectrum, illustrated in rovibrational coupling. This spectrum is also interesting because it shows clear evidence of Coriolis coupling in the asymmetric structure of the band. Linear molecules The rigid rotor is a good starting point from which to construct a model of a rotating molecule. It is assumed that component atoms are point masses connected by rigid bonds. A linear molecule lies on a single axis and each atom moves on the surface of a sphere around the centre of mass. 
The two degrees of rotational freedom correspond to the spherical coordinates θ and φ which describe the direction of the molecular axis, and the quantum state is determined by two quantum numbers J and M. J defines the magnitude of the rotational angular momentum, and M its component about an axis fixed in space, such as an external electric or magnetic field. In the absence of external fields, the energy depends only on J. Under the rigid rotor model, the rotational energy levels, F(J), of the molecule can be expressed as F(J) = BJ(J + 1), where B is the rotational constant of the molecule and is related to the moment of inertia of the molecule. In a linear molecule the moment of inertia about an axis perpendicular to the molecular axis is unique, that is, IB = IC, so B = h/(8π²cIB). For a diatomic molecule IB = m1m2d²/(m1 + m2), where m1 and m2 are the masses of the atoms and d is the distance between them. Selection rules dictate that during emission or absorption the rotational quantum number has to change by unity; i.e., ΔJ = ±1. Thus, the locations of the lines in a rotational spectrum will be given by ν(J″ → J′) = 2B(J″ + 1), where J″ denotes the lower level and J′ denotes the upper level involved in the transition. The diagram illustrates rotational transitions that obey the ΔJ = 1 selection rule. The dashed lines show how these transitions map onto features that can be observed experimentally. Adjacent transitions are separated by 2B in the observed spectrum. Frequency or wavenumber units can also be used for the x axis of this plot. Rotational line intensities The probability of a transition taking place is the most important factor influencing the intensity of an observed rotational line. This probability is proportional to the population of the initial state involved in the transition. The population of a rotational state depends on two factors. The number of molecules in an excited state with quantum number J, relative to the number of molecules in the ground state, NJ/N0, is given by the Boltzmann distribution as NJ/N0 = e^(−BhcJ(J + 1)/kT), where k is the Boltzmann constant and T the absolute temperature. This factor decreases as J increases. The second factor is the degeneracy of the rotational state, which is equal to 2J + 1. This factor increases as J increases. Combining the two factors, the relative population is proportional to (2J + 1)e^(−BhcJ(J + 1)/kT). The maximum relative intensity occurs at J = √(kT/2hcB) − 1/2. The diagram at the right shows an intensity pattern roughly corresponding to the spectrum above it. Centrifugal distortion When a molecule rotates, the centrifugal force pulls the atoms apart. As a result, the moment of inertia of the molecule increases, thus decreasing the value of B when it is calculated using the expression for the rigid rotor. To account for this, a centrifugal distortion correction term is added to the rotational energy levels of the diatomic molecule: F(J) = BJ(J + 1) − DJ²(J + 1)², where D is the centrifugal distortion constant. Therefore, the line positions for the rotational mode change to ν(J″ → J′) = 2B(J″ + 1) − 4D(J″ + 1)³. In consequence, the spacing between lines is not constant, as in the rigid rotor approximation, but decreases with increasing rotational quantum number. An assumption underlying these expressions is that the molecular vibration follows simple harmonic motion. In the harmonic approximation the centrifugal constant can be derived as D = h³/(32π⁴IB²d²kc), where k is the vibrational force constant. The relationship between B and D, D = 4B³/ω², where ω is the harmonic vibration wavenumber, follows. If anharmonicity is to be taken into account, terms in higher powers of J should be added to the expressions for the energy levels and line positions. 
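To make the rigid-rotor expressions above concrete, here is a short illustrative sketch using carbon monoxide as an example; the bond length and atomic masses are assumed textbook values, not taken from this article.

```python
# Rigid-rotor line positions and relative populations for a diatomic
# molecule (CO), illustrating the formulas in the preceding section.
import math

h = 6.62607015e-34       # Planck constant, J s
c = 2.99792458e10        # speed of light, cm/s (so B comes out in cm^-1)
k_B = 1.380649e-23       # Boltzmann constant, J/K
amu = 1.66053906660e-27  # atomic mass unit, kg

m1, m2 = 12.000 * amu, 15.995 * amu  # 12C and 16O masses (assumed values)
d = 112.8e-12                        # assumed C-O bond length, m

mu = m1 * m2 / (m1 + m2)             # reduced mass, kg
I = mu * d**2                        # moment of inertia, kg m^2
B = h / (8 * math.pi**2 * c * I)     # rotational constant, cm^-1
print(f"B = {B:.4f} cm^-1")          # ~1.93 cm^-1 for CO

# Line positions: transitions J -> J+1 appear at 2B(J+1), spaced by 2B
for J in range(5):
    print(f"J={J}->{J+1}: {2 * B * (J + 1):.3f} cm^-1")

# Relative populations (Boltzmann factor x degeneracy) peak near J_max
T = 300.0
def rel_pop(J):
    E = B * h * c * J * (J + 1)      # level energy in joules (B in cm^-1)
    return (2 * J + 1) * math.exp(-E / (k_B * T))

J_max = math.sqrt(k_B * T / (2 * h * c * B)) - 0.5
print(f"most populated level: J ~ {J_max:.1f}")
```

With these inputs B comes out near 1.93 cm−1, so adjacent lines are spaced by about 3.9 cm−1 and the intensity maximum at room temperature falls near J = 7, consistent with the formulas above.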
A striking example concerns the rotational spectrum of hydrogen fluoride, which was fitted to terms up to [J(J + 1)]^5. Oxygen The electric dipole moment of the dioxygen molecule, O2, is zero, but the molecule is paramagnetic with two unpaired electrons, so that there are magnetic-dipole-allowed transitions which can be observed by microwave spectroscopy. The unit electron spin has three spatial orientations with respect to the given molecular rotational angular momentum vector, K, so that each rotational level is split into three states, J = K + 1, K, and K − 1, each J state of this so-called p-type triplet arising from a different orientation of the spin with respect to the rotational motion of the molecule. The energy difference between successive J terms in any of these triplets is about 2 cm−1 (60 GHz), with the single exception of the J = 1←0 difference, which is about 4 cm−1. Selection rules for magnetic dipole transitions allow transitions between successive members of the triplet (ΔJ = ±1), so that for each value of the rotational angular momentum quantum number K there are two allowed transitions. The 16O nucleus has zero nuclear spin angular momentum, so that symmetry considerations demand that K have only odd values. Symmetric top For symmetric rotors a quantum number J is associated with the total angular momentum of the molecule. For a given value of J, there is a (2J + 1)-fold degeneracy, with the quantum number M taking the values +J ... 0 ... −J. The third quantum number, K, is associated with rotation about the principal rotation axis of the molecule. In the absence of an external electric field, the rotational energy of a symmetric top is a function of only J and K and, in the rigid rotor approximation, the energy of each rotational state is given by F(J, K) = BJ(J + 1) + (A − B)K², where B = h/(8π²cIB) and A = h/(8π²cIA) for a prolate symmetric top molecule, or A is replaced by C = h/(8π²cIC) for an oblate molecule. This gives the transition wavenumbers as ν(J″ → J′) = 2B(J″ + 1), which is the same as in the case of a linear molecule. With a first-order correction for centrifugal distortion the transition wavenumbers become ν(J″ → J′) = 2B(J″ + 1) − 4DJ(J″ + 1)³ − 2DJK(J″ + 1)K². The term in DJK has the effect of removing the degeneracy, present in the rigid rotor approximation, between transitions with different K values. Asymmetric top The quantum number J refers to the total angular momentum, as before. Since there are three independent moments of inertia, there are two other independent quantum numbers to consider, but the term values for an asymmetric rotor cannot be derived in closed form. They are obtained by individual matrix diagonalization for each J value. Formulae are available for molecules whose shape approximates to that of a symmetric top. The water molecule is an important example of an asymmetric top. It has an intense pure rotation spectrum in the far infrared region, below about 200 cm−1. For this reason far infrared spectrometers have to be freed of atmospheric water vapour either by purging with a dry gas or by evacuation. The spectrum has been analyzed in detail. Quadrupole splitting When a nucleus has a spin quantum number, I, greater than 1/2 it has a quadrupole moment. In that case, coupling of nuclear spin angular momentum with rotational angular momentum causes splitting of the rotational energy levels. If the quantum number J of a rotational level is greater than I, 2I + 1 levels are produced; but if J is less than I, 2J + 1 levels result. The effect is one type of hyperfine splitting. For example, with 14N (I = 1) in HCN, all levels with J > 0 are split into 3. The energies of the sub-levels are proportional to the nuclear quadrupole moment and a function of F and J, 
where F = J + I, J + I − 1, ..., |J − I|. Thus, observation of nuclear quadrupole splitting permits the magnitude of the nuclear quadrupole moment to be determined. This is an alternative method to the use of nuclear quadrupole resonance spectroscopy. The selection rule for rotational transitions becomes ΔJ = ±1, ΔF = 0, ±1. Stark and Zeeman effects In the presence of a static external electric field the degeneracy of each rotational state is partly removed, an instance of a Stark effect. For example, in linear molecules each energy level is split into J + 1 components. The extent of splitting depends on the square of the electric field strength and the square of the dipole moment of the molecule. In principle this provides a means to determine the value of the molecular dipole moment with high precision. Examples include carbonyl sulfide, OCS, with μ ≈ 0.715 debye. However, because the splitting depends on μ², the orientation of the dipole must be deduced from quantum-mechanical considerations. A similar removal of degeneracy will occur when a paramagnetic molecule is placed in a magnetic field, an instance of the Zeeman effect. Most species which can be observed in the gaseous state are diamagnetic. Exceptions are odd-electron molecules such as nitric oxide, NO, nitrogen dioxide, NO2, some chlorine oxides and the hydroxyl radical. The Zeeman effect has been observed with dioxygen, O2. Rotational Raman spectroscopy Molecular rotational transitions can also be observed by Raman spectroscopy. Rotational transitions are Raman-allowed for any molecule with an anisotropic polarizability, which includes all molecules except for spherical tops. This means that rotational transitions of molecules with no permanent dipole moment, which cannot be observed in absorption or emission, can be observed, by scattering, in Raman spectroscopy. Very high resolution Raman spectra can be obtained by adapting a Fourier transform infrared spectrometer. An example is the spectrum of 15N2. It shows the effect of nuclear spin, resulting in an intensity variation of 3:1 in adjacent lines. A bond length of 109.9985 ± 0.0010 pm was deduced from the data. Instruments and methods The great majority of contemporary spectrometers use a mixture of commercially available and bespoke components which users integrate according to their particular needs. Instruments can be broadly categorised according to their general operating principles. Although rotational transitions can be found across a very broad region of the electromagnetic spectrum, fundamental physical constraints exist on the operational bandwidth of instrument components. It is often impractical and costly to switch to measurements within an entirely different frequency region. The instruments and operating principles described below are generally appropriate to microwave spectroscopy experiments conducted at frequencies between 6 and 24 GHz. Absorption cells and Stark modulation A microwave spectrometer can be most simply constructed using a source of microwave radiation, an absorption cell into which sample gas can be introduced and a detector such as a superheterodyne receiver. A spectrum can be obtained by sweeping the frequency of the source while detecting the intensity of transmitted radiation. A simple section of waveguide can serve as an absorption cell. An important variation of the technique, in which an alternating current is applied across electrodes within the absorption cell, results in a modulation of the frequencies of rotational transitions. 
This is referred to as Stark modulation, and it allows the use of phase-sensitive detection methods offering improved sensitivity. Absorption spectroscopy allows the study of samples that are thermodynamically stable at room temperature. The first study of the microwave spectrum of a molecule (NH3) was performed by Cleeton & Williams in 1934. Subsequent experiments exploited powerful sources of microwaves such as the klystron, many of which were developed for radar during the Second World War. The number of experiments in microwave spectroscopy surged immediately after the war. By 1948, Walter Gordy was able to prepare a review of the results contained in approximately 100 research papers. Commercial versions of microwave absorption spectrometers were developed by Hewlett-Packard in the 1970s and were once widely used for fundamental research. Most research laboratories now exploit either Balle-Flygare or chirped-pulse Fourier transform microwave (FTMW) spectrometers. Fourier transform microwave (FTMW) spectroscopy The theoretical framework underpinning FTMW spectroscopy is analogous to that used to describe FT-NMR spectroscopy. The behaviour of the evolving system is described by optical Bloch equations. First, a short (typically 0-3 microsecond duration) microwave pulse is introduced on resonance with a rotational transition. Those molecules that absorb the energy from this pulse are induced to rotate coherently in phase with the incident radiation. De-activation of the polarisation pulse is followed by microwave emission that accompanies decoherence of the molecular ensemble. This free induction decay occurs on a timescale of 1-100 microseconds depending on instrument settings. Following pioneering work by Dicke and co-workers in the 1950s, the first FTMW spectrometer was constructed by Ekkers and Flygare in 1975. Balle–Flygare FTMW spectrometer Balle, Campbell, Keenan and Flygare demonstrated that the FTMW technique can be applied within a "free space cell" comprising an evacuated chamber containing a Fabry-Perot cavity. This technique allows a sample to be probed only milliseconds after it undergoes rapid cooling to only a few kelvins in the throat of an expanding gas jet. This was a revolutionary development because (i) cooling molecules to low temperatures concentrates the available population in the lowest rotational energy levels. Coupled with benefits conferred by the use of a Fabry-Perot cavity, this brought a great enhancement in the sensitivity and resolution of spectrometers along with a reduction in the complexity of observed spectra; (ii) it became possible to isolate and study molecules that are very weakly bound because there is insufficient energy available for them to undergo fragmentation or chemical reaction at such low temperatures. William Klemperer was a pioneer in using this instrument for the exploration of weakly bound interactions. While the Fabry-Perot cavity of a Balle-Flygare FTMW spectrometer can typically be tuned into resonance at any frequency between 6 and 18 GHz, the bandwidth of individual measurements is restricted to about 1 MHz. An animation illustrates the operation of this instrument, which is currently the most widely used tool for microwave spectroscopy. Chirped-Pulse FTMW spectrometer Noting that digitisers and related electronics technology had significantly progressed since the inception of FTMW spectroscopy, B.H. 
Pate at the University of Virginia designed a spectrometer which retains many advantages of the Balle-Flygare FT-MW spectrometer while innovating in (i) the use of a high speed (>4 GS/s) arbitrary waveform generator to generate a "chirped" microwave polarisation pulse that sweeps up to 12 GHz in frequency in less than a microsecond and (ii) the use of a high speed (>40 GS/s) oscilloscope to digitise and Fourier transform the molecular free induction decay. The result is an instrument that allows the study of weakly bound molecules but which is able to exploit a measurement bandwidth (12 GHz) that is greatly enhanced compared with the Balle-Flygare FTMW spectrometer. Modified versions of the original CP-FTMW spectrometer have been constructed by a number of groups in the United States, Canada and Europe. The instrument offers a broadband capability that is highly complementary to the high sensitivity and resolution offered by the Balle-Flygare design. Notes References Bibliography External links infrared gas spectra simulator Hyperphysics article on Rotational Spectrum A list of microwave spectroscopy research groups around the world Spectroscopy Rotation Rigid bodies mechanics
Rotational spectroscopy
[ "Physics", "Chemistry" ]
5,690
[ "Physical phenomena", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Classical mechanics", "Rotation", "Motion (physics)", "Spectroscopy" ]
531,373
https://en.wikipedia.org/wiki/Gravity%20Probe%20A
Gravity Probe A (GP-A) was a space-based experiment to test the equivalence principle, a feature of Einstein's theory of relativity. It was performed jointly by the Smithsonian Astrophysical Observatory and the National Aeronautics and Space Administration. The experiment sent a hydrogen maser, a highly accurate frequency standard, into space to measure with high precision the rate at which time passes in a weaker gravitational field. Masses cause distortions in spacetime, which leads to the effects of length contraction and time dilation, both predicted results of Albert Einstein's theory of general relativity. Because of the bending of spacetime, an observer on Earth (in a lower gravitational potential) should measure a slower rate at which time passes than an observer that is higher in altitude (at higher gravitational potential). This effect is known as gravitational time dilation. The experiment was a test of a major consequence of Einstein's general relativity, the equivalence principle. The equivalence principle states that a reference frame in a uniform gravitational field is indistinguishable from a reference frame that is under uniform acceleration. Further, the equivalence principle predicts that the phenomenon of different time flow rates, present in a uniformly accelerating reference frame, will also be present in a stationary reference frame that is in a uniform gravitational field. The probe was launched on June 18, 1976 from the NASA-Wallops Flight Center in Wallops Island, Virginia. The probe was carried via a Scout rocket and attained a height of about 10,000 km, while remaining in space for 1 hour and 55 minutes, as intended. It returned to Earth by splashing down into the Atlantic Ocean. Background The objective of the Gravity Probe A experiment was to test the validity of the equivalence principle. The equivalence principle is a key component of Albert Einstein's theory of general relativity, and states that the laws of physics are the same in an accelerating reference frame as they are in a reference frame that is acted upon by a uniform gravitational field. Equivalence principle The equivalence principle can be understood by comparing a rocket ship in two scenarios. First, imagine a rocket ship that is at rest on the Earth's surface; objects dropped within the rocket ship will fall towards the floor with an acceleration of 9.8 m/s². Now, imagine a distant rocket ship that has escaped Earth's gravitational field and is accelerating at a constant 9.8 m/s² due to thrust from its rockets; objects in the rocket ship that are unconstrained will move towards the floor with an acceleration of 9.8 m/s². This example shows one way that a uniformly accelerating reference frame is indistinguishable from a gravitational reference frame. Furthermore, the equivalence principle postulates that phenomena that are caused by inertial effects will also be present due to gravitational effects. Consider a beam of light that is shined horizontally across a rocket ship, which is accelerating. According to a non-accelerating observer outside the rocket ship, the floor of the rocket ship accelerates towards the light beam. Therefore, the light beam does not seem to travel on a horizontal path according to the inside observer; rather, the light ray appears to bend toward the floor. This is an example of an inertial effect that causes light to bend. The equivalence principle states that this inertial phenomenon will occur in a gravitational reference frame as well. 
Indeed, the phenomenon of gravitational lensing shows that matter can bend light, and this phenomenon has been observed by the Hubble Space Telescope and other experiments. Time dilation Time dilation refers to the expansion or contraction in the rate at which time passes, and was the subject of the Gravity Probe A experiment. Under Einstein's theory of general relativity, matter distorts the surrounding spacetime. This distortion causes time to pass more slowly in the vicinity of a massive object, compared to the rate experienced by a distant observer. The Schwarzschild metric, surrounding a spherically symmetric gravitating body, has a smaller time-time coefficient closer to the body, which means a slower rate of time flow there. There is a similar idea of time dilation in Einstein's theory of special relativity (which involves neither gravity nor the idea of curved spacetime). Such time dilation appears in the Rindler coordinates attached to a uniformly accelerating particle in flat spacetime. Such a particle would observe time passing faster on the side it is accelerating towards and more slowly on the opposite side. From this apparent variance in time, Einstein inferred that change in velocity affects the relativity of simultaneity for the particle. Einstein's equivalence principle generalizes this analogy, stating that an accelerating reference frame is locally indistinguishable from an inertial reference frame with a gravity force acting upon it. In this way, Gravity Probe A was a test of the equivalence principle, matching the observations in the inertial reference frame (of special relativity) of the Earth's surface affected by gravity with the predictions of special relativity for the same frame treated as accelerating upwards with respect to a free-fall reference frame, which can be thought of as inertial and gravity-free. Experimental setup The Gravity Probe A spacecraft housed an atomic hydrogen maser system. Maser is an acronym for microwave amplification by stimulated emission of radiation; a maser is similar to a laser, as it produces coherent electromagnetic waves in the microwave region of the electromagnetic spectrum. A hydrogen maser produces a very accurate signal (1.42 billion cycles per second) which is highly stable, to one part in a quadrillion (1 part in 10^15). This is equivalent to a clock that drifts by less than two seconds every 100 million years. A microwave signal derived from the maser frequency was transmitted to the ground throughout the mission. The one-way signal received from the rocket was relativistically Doppler shifted due to the speed of the rocket, and in addition was gravitationally blue-shifted by a minute amount. In addition to the hydrogen maser carried by the rocket, another hydrogen maser on the ground was used as a source for continuous transmission of a microwave signal to the rocket. A microwave transponder carried on the rocket returned the signal to the Earth. On the way up, the signal received by the rocket was Doppler shifted due to the speed of the rocket and was gravitationally red-shifted by a minute amount. The transponder signal received on the ground was Doppler shifted due to the speed of the rocket and was gravitationally blue-shifted by the same amount that it was red-shifted on the way up. 
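As a rough numeric check of the gravitational part of these shifts, the sketch below (assumed round-number values, not the mission's analysis, which also handled the much larger first-order Doppler terms described above) evaluates the static gravitational blueshift between the ground and the probe's approximate apogee.

```python
# Rough estimate (assumed values): gravitational blueshift of the space
# maser relative to the ground maser, Delta f / f = Delta Phi / c^2,
# at GP-A's ~10,000 km apogee.
GM = 3.986004e14       # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8       # speed of light, m/s
R_earth = 6.371e6      # mean Earth radius, m
h_apogee = 1.0e7       # apogee altitude, m (~10,000 km)

phi_ground = -GM / R_earth
phi_apogee = -GM / (R_earth + h_apogee)

# Fractional frequency shift: positive means the space clock runs faster
shift = (phi_apogee - phi_ground) / c**2
print(f"gravitational shift ~ {shift:.2e}")  # ~ +4.3e-10
```

The static potential difference alone gives a few parts in 10^10, the same order as the prediction quoted below once the exact trajectory and second-order velocity terms are included.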
Since the gravitational Doppler shift of the signal on the way up always exactly cancelled the gravitational Doppler shift on its way down, the two-way Doppler shift of the signal received on the ground depended only on the speed of the rocket. In a microwave frequency mixer, one-half of the two-way Doppler shift from the transponded ground maser signal was subtracted from the Doppler shift of the space maser. In this way, the Doppler shift due to the spacecraft's motion was completely cancelled out, leaving only the gravitational component of the Doppler shift. The probe was launched nearly vertically upward to cause a large change in the gravitational potential, reaching a height of about 10,000 km. At this height, general relativity predicted a clock should run 4.5 parts in 10^10 faster than one on the Earth, or about one second every 73 years. The maser oscillations represented the ticks of a clock, and by measuring the frequency of the maser as it changed elevation, the effects of gravitational time dilation were detected. Results The goal of the experiment was to measure the rate at which time passes in a higher gravitational potential, so to test this the maser in the probe was compared to a similar maser that remained on Earth. Before the two clock rates could be compared, the Doppler shift was subtracted from the clock rate measured by the maser that was sent into space, to correct for the relative motion between the observers on Earth and the motion of the probe. The two clock rates were then compared and further checked against the theoretical predictions of how the two clock rates should differ. The stability of the maser permitted measurement of changes in the rate of the maser of 1 part in 10^14 over a 100-second measurement. The experiment was thus able to test the equivalence principle. Gravity Probe A confirmed the prediction that deeper in the gravity well, the flow of time is slower, and the observed effects matched the predicted effects to an accuracy of about 70 parts per million. See also Doppler Effect General Relativity Gravitational Redshift Gravity Probe B Pound–Rebka experiment Timeline of gravitational physics Primary references References Further reading Validation of Local Position Invariance through Gravitational Red-Shift Experiment External links Gravity Probe A Collection, The University of Alabama in Huntsville Archives and Special Collections Physics experiments Tests of general relativity 1976 in science 1976 in spaceflight
Gravity Probe A
[ "Physics" ]
1,797
[ "Experimental physics", "Physics experiments" ]
531,587
https://en.wikipedia.org/wiki/Depolarization
In biology, depolarization or hypopolarization is a change within a cell, during which the cell undergoes a shift in electric charge distribution, resulting in less negative charge inside the cell compared to the outside. Depolarization is essential to the function of many cells, communication between cells, and the overall physiology of an organism. Most cells in higher organisms maintain an internal environment that is negatively charged relative to the cell's exterior. This difference in charge is called the cell's membrane potential. In the process of depolarization, the negative internal charge of the cell temporarily becomes more positive (less negative). This shift from a negative to a more positive membrane potential occurs during several processes, including an action potential. During an action potential, the depolarization is so large that the potential difference across the cell membrane briefly reverses polarity, with the inside of the cell becoming positively charged. The change in charge typically occurs due to an influx of sodium ions into a cell, although it can be mediated by an influx of any kind of cation or efflux of any kind of anion. The opposite of a depolarization is called a hyperpolarization. Usage of the term "depolarization" in biology differs from its use in physics, where it refers to situations in which any form of polarity (i.e. the presence of any electrical charge, whether positive or negative) changes to a value of zero. Depolarization is sometimes referred to as "hypopolarization" (as opposed to hyperpolarization). Physiology The process of depolarization is entirely dependent upon the intrinsic electrical nature of most cells. When a cell is at rest, the cell maintains what is known as a resting potential. The resting potential generated by nearly all cells results in the interior of the cell having a negative charge compared to the exterior of the cell. To maintain this electrical imbalance, ions are transported across the cell's plasma membrane. The transport of the ions across the plasma membrane is accomplished through several different types of transmembrane proteins embedded in the cell's plasma membrane that function as pathways for ions both into and out of the cell, such as ion channels, sodium potassium pumps, and voltage-gated ion channels. Resting potential The resting potential must be established within a cell before the cell can be depolarized. There are many mechanisms by which a cell can establish a resting potential; however, there is a typical pattern of generating this resting potential that many cells follow. The generation of a negative resting potential within the cell involves the utilization of ion channels, ion pumps, and voltage-gated ion channels by the cell. However, the process of generating the resting potential within the cell also creates an environment outside the cell that favors depolarization. The sodium potassium pump is largely responsible for the optimization of conditions on both the interior and the exterior of the cell for depolarization. By pumping three positively charged sodium ions (Na+) out of the cell for every two positively charged potassium ions (K+) pumped into the cell, not only is the resting potential of the cell established, but an unfavorable concentration gradient is created by increasing the concentration of sodium outside the cell and increasing the concentration of potassium within the cell. 
While there is an excess of potassium in the cell and of sodium outside it, the generated resting potential keeps the voltage-gated ion channels in the plasma membrane closed, preventing the pumped ions from diffusing back across the membrane; meanwhile, potassium leak channels allow a controlled passive efflux of potassium ions, which contributes to the establishment of the negative resting potential. Additionally, despite the high concentration of positively charged potassium ions, most cells contain internal components (of negative charge), which accumulate to establish a negative inner charge. Depolarization After a cell has established a resting potential, that cell has the capacity to undergo depolarization. Depolarization is the process by which the membrane potential becomes less negative, facilitating the generation of an action potential. For this rapid change to take place within the interior of the cell, several events must occur along the plasma membrane of the cell. While the sodium–potassium pump continues to work, the voltage-gated sodium and calcium channels that had been closed while the cell was at resting potential are opened in response to an initial change in voltage. As the change in the neuronal charge opens voltage-gated sodium channels, sodium ions flow into the cell down their electrochemical gradient. Sodium ions enter the cell and contribute a positive charge to the cell interior, shifting the membrane potential from negative toward positive. The initial sodium ion influx triggers the opening of additional sodium channels (a positive-feedback loop), leading to further sodium ion transfer into the cell and sustaining the depolarization process until the positive equilibrium potential is reached. Sodium channels possess an inherent inactivation mechanism that prompts rapid reclosure, even as the membrane remains depolarized. During this equilibrium, the sodium channels enter an inactivated state, temporarily halting the influx of sodium ions until the membrane potential becomes negatively charged again. Once the cell's interior is sufficiently positively charged, depolarization concludes, and the channels close once more. Repolarization After a cell has been depolarized, it undergoes one final change in internal charge. Following depolarization, the voltage-gated sodium ion channels that had been open while the cell was undergoing depolarization close again. The increased positive charge within the cell now causes the potassium channels to open. Potassium ions (K+) begin to move down the electrochemical gradient (in favor of the concentration gradient and the newly established electrical gradient). As potassium moves out of the cell, the potential within the cell decreases and approaches its resting potential once more. The sodium potassium pump works continuously throughout this process. Hyperpolarization The process of repolarization causes an overshoot in the potential of the cell. Potassium ions continue to move out of the axon to such an extent that the resting potential is exceeded and the new cell potential becomes more negative than the resting potential. The resting potential is ultimately re-established by the closing of all voltage-gated ion channels and the activity of the sodium potassium ion pump. 
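The ion gradients described above can be turned into equilibrium potentials with the Nernst equation; the article does not state the equation explicitly, but it underlies these numbers. The sketch below uses assumed textbook concentrations for a mammalian neuron, not values from this article.

```python
# Nernst equilibrium potentials for the ions that shape the resting,
# depolarized and repolarized states described above (assumed values).
import math

R, T, F = 8.314, 310.0, 96485.0   # gas constant, body temperature (K), Faraday

def nernst(z, conc_out, conc_in):
    """Equilibrium potential in millivolts for an ion of charge z."""
    return 1000 * (R * T) / (z * F) * math.log(conc_out / conc_in)

# Assumed textbook concentrations for a mammalian neuron, in mM
E_K = nernst(+1, 5.0, 140.0)     # ~ -89 mV: near the resting potential
E_Na = nernst(+1, 145.0, 12.0)   # ~ +66 mV: target of depolarization
print(f"E_K  = {E_K:.0f} mV")
print(f"E_Na = {E_Na:.0f} mV")
```

E_K near −89 mV is why a potassium-dominated membrane rests at a negative potential, while E_Na near +66 mV is the direction the membrane potential moves during the sodium influx of depolarization.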
Neurons Depolarization is essential to the functions of many cells in the human body, which is exemplified by the transmission of stimuli both within a neuron and between two neurons. The reception of stimuli, neural integration of those stimuli, and the neuron's response to stimuli all rely upon the ability of neurons to utilize depolarization to transmit stimuli either within a neuron or between neurons. Response to stimulus Stimuli to neurons can be physical, electrical, or chemical, and can either inhibit or excite the neuron being stimulated. An inhibitory stimulus is transmitted to the dendrite of a neuron, causing hyperpolarization of the neuron. The hyperpolarization following an inhibitory stimulus causes a further decrease in voltage within the neuron below the resting potential. By hyperpolarizing a neuron, an inhibitory stimulus results in a greater negative charge that must be overcome for depolarization to occur. Excitatory stimuli, on the other hand, increase the voltage in the neuron, which leads to a neuron that is easier to depolarize than the same neuron in the resting state. Whether excitatory or inhibitory, the stimulus travels down the dendrites of a neuron to the cell body for integration. Integration of stimuli Once the stimuli have reached the cell body, the nerve must integrate the various stimuli before the nerve can respond. The stimuli that have traveled down the dendrites converge at the axon hillock, where they are summed to determine the neuronal response. If the sum of the stimuli reaches a certain voltage, known as the threshold potential, depolarization continues from the axon hillock down the axon. Response The surge of depolarization traveling from the axon hillock to the axon terminal is known as an action potential. Action potentials reach the axon terminal, where the action potential triggers the release of neurotransmitters from the neuron. The neurotransmitters that are released from the axon continue on to stimulate other cells such as other neurons or muscle cells. After an action potential travels down the axon of a neuron, the resting membrane potential of the axon must be restored before another action potential can travel the axon. This is known as the refractory period of the neuron, during which the neuron cannot transmit another action potential. Rod cells of the eye The importance and versatility of depolarization within cells can be seen in the relationship between rod cells in the eye and their associated neurons. When rod cells are in the dark, they are depolarized. In the rod cells, this depolarization is maintained by ion channels that remain open due to the higher voltage of the rod cell in the depolarized state. The ion channels allow calcium and sodium to pass freely into the cell, maintaining the depolarized state. Rod cells in the depolarized state constantly release neurotransmitters which in turn stimulate the nerves associated with rod cells. This cycle is broken when rod cells are exposed to light; the absorption of light by the rod cell causes the channels that had facilitated the entry of sodium and calcium into the rod cell to close. When these channels close, the rod cells produce fewer neurotransmitters, which is perceived by the brain as an increase in light. Therefore, in the case of rod cells and their associated neurons, depolarization actually prevents a signal from reaching the brain as opposed to stimulating the transmission of the signal.
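Returning to the summation step described under "Integration of stimuli" above: to first order, the decision at the axon hillock is a threshold test on the summed excitatory and inhibitory inputs. A minimal sketch, with all millivolt values chosen for illustration only:

```python
# Threshold test sketch of summation at the axon hillock; excitatory inputs
# are positive millivolt deflections, inhibitory inputs negative. All numbers
# are illustrative assumptions, not measured values.
RESTING_MV = -70.0
THRESHOLD_MV = -55.0

def fires_action_potential(postsynaptic_potentials_mv):
    """True if the summed inputs depolarize the hillock past threshold."""
    return RESTING_MV + sum(postsynaptic_potentials_mv) >= THRESHOLD_MV

print(fires_action_potential([+6, +5, +7]))       # True: EPSPs sum past threshold
print(fires_action_potential([+6, +5, +7, -8]))   # False: an IPSP vetoes firing
```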
Vascular endothelium Endothelium is a thin layer of simple squamous epithelial cells that line the interior of both blood and lymph vessels. The endothelium that lines blood vessels is known as vascular endothelium, which is subject to and must withstand the forces of blood flow and blood pressure from the cardiovascular system. To withstand these cardiovascular forces, endothelial cells must have a structure capable of bearing the forces of circulation while also maintaining a certain level of plasticity in its strength. This plasticity in the structural strength of the vascular endothelium is essential to the overall function of the cardiovascular system. Endothelial cells within blood vessels can alter the strength of their structure to maintain the vascular tone of the blood vessel they line, prevent vascular rigidity, and even help to regulate blood pressure within the cardiovascular system. Endothelial cells accomplish these feats by using depolarization to alter their structural strength. When an endothelial cell undergoes depolarization, the result is a marked decrease in the rigidity and structural strength of the cell by altering the network of fibers that provide these cells with their structural support. Depolarization in vascular endothelium is essential not only to the structural integrity of endothelial cells, but also to the ability of the vascular endothelium to aid in the regulation of vascular tone, prevention of vascular rigidity, and the regulation of blood pressure. Heart Depolarization occurs in the four chambers of the heart: both atria first, and then both ventricles. The sinoatrial (SA) node on the wall of the right atrium initiates depolarization in the right and left atria, causing contraction, which corresponds to the P wave on an electrocardiogram. The SA node sends the depolarization wave to the atrioventricular (AV) node which—with about a 100 ms delay to let the atria finish contracting—then causes contraction in both ventricles, seen in the QRS wave. At the same time, the atria repolarize and relax. The ventricles repolarize and relax at the T wave. This process continues regularly, unless there is a problem in the heart. Depolarization blockers There are drugs, called depolarization blocking agents, that cause prolonged depolarization by holding open the channels responsible for depolarization, preventing repolarization. Examples include the nicotinic agonists, suxamethonium and decamethonium. References Further reading External links Membrane biology Electrophysiology Electrochemistry Cellular neuroscience
Depolarization
[ "Chemistry" ]
2,586
[ "Electrochemistry", "Membrane biology", "Molecular biology" ]
531,911
https://en.wikipedia.org/wiki/Laser%20cutting
Laser cutting is a technology that uses a laser to vaporize materials, resulting in a cut edge. While typically used for industrial manufacturing applications, it is now used by schools, small businesses, architecture, and hobbyists. Laser cutting works by directing the output of a high-power laser most commonly through optics. The laser optics and CNC (computer numerical control) are used to direct the laser beam to the material. A commercial laser for cutting materials uses a motion control system to follow a CNC or G-code of the pattern to be cut onto the material. The focused laser beam is directed at the material, which then either melts, burns, vaporizes away, or is blown away by a jet of gas, leaving an edge with a high-quality surface finish. History In 1965, the first production laser cutting machine was used to drill holes in diamond dies. This machine was made by the Western Electric Engineering Research Center. In 1967, the British pioneered laser-assisted oxygen jet cutting for metals. In the early 1970s, this technology was put into production to cut titanium for aerospace applications. At the same time, CO2 lasers were adapted to cut non-metals, such as textiles, because, at the time, CO2 lasers were not powerful enough to overcome the thermal conductivity of metals. Process The laser beam is generally focused using a high-quality lens on the work zone. The quality of the beam has a direct impact on the focused spot size. The narrowest part of the focused beam is generally less than in diameter. Depending upon the material thickness, kerf widths as small as are possible. In order to be able to start cutting from somewhere other than the edge, a pierce is done before every cut. Piercing usually involves a high-power pulsed laser beam which slowly makes a hole in the material, taking around 5–15 seconds for stainless steel, for example. The parallel rays of coherent light from the laser source often fall in the range between in diameter. This beam is normally focused and intensified by a lens or a mirror to a very small spot of about to create a very intense laser beam. In order to achieve the smoothest possible finish during contour cutting, the direction of the beam polarization must be rotated as it goes around the periphery of a contoured workpiece. For sheet metal cutting, the focal length is usually . Advantages of laser cutting over mechanical cutting include easier work holding and reduced contamination of workpiece (since there is no cutting edge which can become contaminated by the material or contaminate the material). Precision may be better since the laser beam does not wear during the process. There is also a reduced chance of warping the material that is being cut, as laser systems have a small heat-affected zone. Some materials are also very difficult or impossible to cut by more traditional means. Laser cutting for metals has the advantage over plasma cutting of being more precise and using less energy when cutting sheet metal; however, most industrial lasers cannot cut through the greater metal thickness that plasma can. Newer laser machines operating at higher power (6000 watts, as contrasted with early laser cutting machines' 1500-watt ratings) are approaching plasma machines in their ability to cut through thick materials, but the capital cost of such machines is much higher than that of plasma cutting machines capable of cutting thick materials like steel plate. Types There are three main types of lasers used in laser cutting. 
The CO2 laser is suited for cutting, boring, and engraving. The neodymium (Nd) and neodymium yttrium-aluminium-garnet (Nd:YAG) lasers are identical in style and differ only in the application. Nd is used for boring and where high energy but low repetition are required. The Nd:YAG laser is used where very high power is needed and for boring and engraving. Both CO2 and Nd/Nd:YAG lasers can be used for welding. CO2 lasers are commonly "pumped" by passing a current through the gas mix (DC-excited) or using radio frequency energy (RF-excited). The RF method is newer and has become more popular. Since DC designs require electrodes inside the cavity, they can encounter electrode erosion and plating of electrode material on glassware and optics. Since RF resonators have external electrodes they are not prone to those problems. CO2 lasers are used for the industrial cutting of many materials including titanium, stainless steel, mild steel, aluminium, plastic, wood, engineered wood, wax, fabrics, and paper. YAG lasers are primarily used for cutting and scribing metals and ceramics. In addition to the power source, the type of gas flow can affect performance as well. Common variants of CO2 lasers include fast axial flow, slow axial flow, transverse flow, and slab. In a fast axial flow resonator, the mixture of carbon dioxide, helium, and nitrogen is circulated at high velocity by a turbine or blower. Transverse flow lasers circulate the gas mix at a lower velocity, requiring a simpler blower. Slab or diffusion-cooled resonators have a static gas field that requires no pressurization or glassware, leading to savings on replacement turbines and glassware. The laser generator and external optics (including the focus lens) require cooling. Depending on system size and configuration, waste heat may be transferred by a coolant or directly to air. Water is a commonly used coolant, usually circulated through a chiller or heat transfer system. A laser microjet is a water-jet-guided laser in which a pulsed laser beam is coupled into a low-pressure water jet. This is used to perform laser cutting functions while using the water jet to guide the laser beam, much like an optical fiber, through total internal reflection. The advantages of this are that the water also removes debris and cools the material. Additional advantages over traditional "dry" laser cutting are high dicing speeds, parallel kerf, and omnidirectional cutting. Fiber lasers are a type of solid-state laser that is rapidly growing within the metal cutting industry. Unlike CO2, fiber technology utilizes a solid gain medium, as opposed to a gas or liquid. The "seed laser" produces the laser beam, which is then amplified within a glass fiber. With a wavelength of only 1064 nanometers, fiber lasers produce an extremely small spot size (up to 100 times smaller compared to the CO2), making them ideal for cutting reflective metal material. This is one of the main advantages of fiber compared to CO2. Fiber laser cutter benefits include: Rapid processing times. Reduced energy consumption and bills, due to greater efficiency. Greater reliability and performance: no optics to adjust or align and no lamps to replace. Minimal maintenance. The ability to process highly reflective materials such as copper and brass. Higher productivity: lower operational costs offer a greater return on investment. Methods There are many different methods of cutting using lasers, with different types used to cut different materials.
Some of the methods are vaporization, melt and blow, melt blow and burn, thermal stress cracking, scribing, cold cutting, and burning stabilized laser cutting. Vaporization cutting In vaporization cutting, the focused beam heats the surface of the material to a flashpoint and generates a keyhole. The keyhole leads to a sudden increase in absorptivity quickly deepening the hole. As the hole deepens and the material boils, the vapor generated erodes the molten walls, blowing ejecta out and further enlarging the hole. Nonmelting materials such as wood, carbon, and thermoset plastics are usually cut by this method. Melt and blow Melt and blow or fusion cutting uses high-pressure gas to blow molten material from the cutting area, greatly decreasing the power requirement. First, the material is heated to its melting point, then a gas jet blows the molten material out of the kerf, avoiding the need to raise the temperature of the material any further. Materials cut with this process are usually metals. Thermal stress cracking Brittle materials are particularly sensitive to thermal fracture, a feature exploited in thermal stress cracking. A beam is focused on the surface causing localized heating and thermal expansion. This results in a crack that can then be guided by moving the beam. The crack can be guided at speeds on the order of m/s. It is usually used in the cutting of glass. Stealth dicing of silicon wafers The separation of microelectronic chips as prepared in semiconductor device fabrication from silicon wafers may be performed by the so-called stealth dicing process, which operates with a pulsed Nd:YAG laser, the wavelength of which (1064 nm) is well adapted to the electronic band gap of silicon (1.11 eV or 1117 nm). Reactive cutting Reactive cutting is also called "burning stabilized laser gas cutting" and "flame cutting". Reactive cutting is like oxygen torch cutting but with a laser beam as the ignition source. Mostly used for cutting carbon steel in thicknesses over 1 mm. This process can be used to cut very thick steel plates with relatively little laser power. Tolerances and surface finish Laser cutters have a positioning accuracy of 10 micrometers and repeatability of 5 micrometers. Standard roughness Rz increases with the sheet thickness, but decreases with laser power and cutting speed. When cutting low carbon steel with laser power of 800 W, standard roughness Rz is 10 μm for sheet thickness of 1 mm, 20 μm for 3 mm, and 25 μm for 6 mm. An empirical relation is Rz = 12.528 × S^0.542 / (P^0.528 × V^0.322), where: S = steel sheet thickness in mm; P = laser power in kW (some new laser cutters have laser power of 4 kW); V = cutting speed in meters per minute. This process is capable of holding quite close tolerances, often to within 0.001 inch (0.025 mm). Part geometry and the mechanical soundness of the machine have much to do with tolerance capabilities. The typical surface finish resulting from laser beam cutting may range from 125 to 250 micro-inches (0.003 mm to 0.006 mm). Machine configurations There are generally three different configurations of industrial laser cutting machines: moving material, hybrid, and flying optics systems. These refer to the way that the laser beam is moved over the material to be cut or processed. For all of these, the axes of motion are typically designated the X and Y axes. If the cutting head can also be controlled, its axis is designated the Z-axis. Moving material lasers have a stationary cutting head and move the material under it.
This method provides a constant distance from the laser generator to the workpiece and a single point from which to remove cutting effluent. It requires fewer optics but requires moving the workpiece. This style of machine tends to have the fewest beam delivery optics but also tends to be the slowest. Hybrid lasers provide a table that moves in one axis (usually the X-axis) and moves the head along the shorter (Y) axis. This results in a more constant beam delivery path length than a flying optic machine and may permit a simpler beam delivery system. This can result in reduced power loss in the delivery system and more capacity per watt than flying optics machines. Flying optics lasers feature a stationary table and a cutting head (with a laser beam) that moves over the workpiece in both of the horizontal dimensions. Flying optics cutters keep the workpiece stationary during processing and often do not require material clamping. The moving mass is constant, so dynamics are not affected by varying the size of the workpiece. Flying optics machines are the fastest type, which is advantageous when cutting thinner workpieces. Flying optic machines must use some method to take into account the changing beam length from the near field (close to the resonator) cutting to the far field (far away from the resonator) cutting. Common methods for controlling this include collimation, adaptive optics, or the use of a constant beam length axis. Five and six-axis machines also permit cutting formed workpieces. In addition, there are various methods of orienting the laser beam to a shaped workpiece, maintaining a proper focus distance and nozzle standoff. Pulsing Pulsed lasers, which provide a high-power burst of energy for a short period, are very effective in some laser cutting processes, particularly for piercing, or when very small holes or very low cutting speeds are required, since if a constant laser beam were used, the heat could reach the point of melting the whole piece being cut. Most industrial lasers have the ability to pulse or cut CW (continuous wave) under NC (numerical control) program control. Double pulse lasers use a series of pulse pairs to improve material removal rate and hole quality. Essentially, the first pulse removes material from the surface and the second prevents the ejecta from adhering to the side of the hole or cut. Power consumption The main disadvantage of laser cutting is the high power consumption. Industrial laser efficiency may range from 5% to 45%. The power consumption and efficiency of any particular laser will vary depending on output power and operating parameters. This will depend on the type of laser and how well the laser is matched to the work at hand. The amount of laser cutting power required, known as heat input, for a particular job depends on the material type, thickness, process (reactive/inert) used, and desired cutting rate. Production and cutting rates The maximum cutting rate (production rate) is limited by a number of factors including laser power, material thickness, process type (reactive or inert), and material properties. Common industrial systems (≥1 kW) will cut carbon steel metal from in thickness. For many purposes, a laser can be up to thirty times faster than standard sawing.
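Picking up the empirical roughness relation from the tolerances section above, a short sketch makes the trends concrete. The cutting speed below is an assumed value (the article quotes the 800 W roughness figures without stating a speed), so treat the outputs as order-of-magnitude only:

```python
def rz_roughness_um(thickness_mm, power_kw, speed_m_per_min):
    """Empirical Rz relation from the tolerances section (low-carbon steel)."""
    return 12.528 * thickness_mm**0.542 / (power_kw**0.528 * speed_m_per_min**0.322)

# 800 W cutting at an assumed 2.5 m/min; the article quotes roughly
# 10/20/25 um for 1/3/6 mm sheets, so these should land in that ballpark.
for s_mm in (1.0, 3.0, 6.0):
    print(f"{s_mm:.0f} mm sheet: Rz ~ {rz_roughness_um(s_mm, 0.8, 2.5):.0f} um")
```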
See also Craft 3D printing Drilling Laser ablation Laser beam machining Laser beam quality Laser converting Laser drilling Laser engraving List of laser articles Mirror galvanometer Water jet cutter References Bibliography Cutting machines Cutting processes Cutting Metalworking cutting tools Articles containing video clips Hole making Machining
Laser cutting
[ "Physics", "Technology" ]
2,829
[ "Physical systems", "Machines", "Cutting machines" ]
532,034
https://en.wikipedia.org/wiki/Fluorophore
A fluorophore (or fluorochrome, similarly to a chromophore) is a fluorescent chemical compound that can re-emit light upon light excitation. Fluorophores typically contain several combined aromatic groups, or planar or cyclic molecules with several π bonds. Fluorophores are sometimes used alone, as a tracer in fluids, as a dye for staining of certain structures, as a substrate of enzymes, or as a probe or indicator (when its fluorescence is affected by environmental aspects such as polarity or ions). More generally they are covalently bonded to macromolecules, serving as markers (or dyes, or tags, or reporters) for affine or bioactive reagents (antibodies, peptides, nucleic acids). Fluorophores are notably used to stain tissues, cells, or materials in a variety of analytical methods, such as fluorescent imaging and spectroscopy. Fluorescein, via its amine-reactive isothiocyanate derivative fluorescein isothiocyanate (FITC), has been one of the most popular fluorophores. From antibody labeling, the applications have spread to nucleic acids thanks to carboxyfluorescein. Other historically common fluorophores are derivatives of rhodamine (TRITC), coumarin, and cyanine. Newer generations of fluorophores, many of which are proprietary, often perform better, being more photostable, brighter, or less pH-sensitive than traditional dyes with comparable excitation and emission. Fluorescence The fluorophore absorbs light energy of a specific wavelength and re-emits light at a longer wavelength. The absorbed wavelengths, energy transfer efficiency, and time before emission depend on both the fluorophore structure and its chemical environment, since the molecule in its excited state interacts with surrounding molecules. Wavelengths of maximum absorption (≈ excitation) and emission (for example, Absorption/Emission = 485 nm/517 nm) are the typical terms used to refer to a given fluorophore, but the whole spectrum may be important to consider. The excitation spectrum may be a very narrow or a broader band, or it may lie entirely beyond a cutoff level. The emission spectrum is usually sharper than the excitation spectrum, and it is of a longer wavelength and correspondingly lower energy. Excitation energies range from ultraviolet through the visible spectrum, and emission energies may continue from visible light into the near infrared region. The main characteristics of fluorophores are: Maximum excitation and emission wavelength (expressed in nanometers (nm)): corresponds to the peak in the excitation and emission spectra (usually one peak each). Molar absorption coefficient (in L·mol−1·cm−1): links the quantity of absorbed light, at a given wavelength, to the concentration of fluorophore in solution. Quantum yield: efficiency of the energy transferred from incident light to emitted fluorescence (the number of emitted photons per absorbed photon). Lifetime (in picoseconds): duration of the excited state of a fluorophore before returning to its ground state. It refers to the time taken for a population of excited fluorophores to decay to 1/e (≈0.368) of the original amount. Stokes shift: the difference between the maximum excitation and maximum emission wavelengths. Dark fraction: the proportion of the molecules not active in fluorescence emission. For quantum dots, prolonged single-molecule microscopy showed that 20–90% of all particles never emit fluorescence. On the other hand, conjugated polymer nanoparticles (Pdots) show almost no dark fraction in their fluorescence.
Fluorescent proteins can have a dark fraction from protein misfolding or defective chromophore formation. These characteristics drive other properties, including photobleaching or photoresistance (loss of fluorescence upon continuous light excitation). Other parameters should be considered, as the polarity of the fluorophore molecule, the fluorophore size and shape (i.e. for polarization fluorescence pattern), and other factors can change the behavior of fluorophores. Fluorophores can also be used to quench the fluorescence of other fluorescent dyes or to relay their fluorescence at even longer wavelengths. Size (molecular weight) Most fluorophores are organic small molecules of 20–100 atoms (200–1000 Dalton; the molecular weight may be higher depending on grafted modifications and conjugated molecules), but there are also much larger natural fluorophores that are proteins: green fluorescent protein (GFP) is 27 kDa, and several phycobiliproteins (PE, APC...) are ≈240 kDa. As of 2020, the smallest known fluorophore was claimed to be 3-hydroxyisonicotinaldehyde, a compound of 14 atoms and only 123 Da. Fluorescent particles like quantum dots (2–10 nm diameter, 100–100,000 atoms) are also considered fluorophores. The size of the fluorophore might sterically hinder the tagged molecule and affect the fluorescence polarity. Families Fluorophore molecules can either be utilized alone or serve as a fluorescent motif of a functional system. Based on molecular complexity and synthetic methods, fluorophore molecules can generally be classified into four categories: proteins and peptides, small organic compounds, synthetic oligomers and polymers, and multi-component systems. Fluorescent proteins GFP, YFP, and RFP (green, yellow, and red, respectively) can be attached to other specific proteins to form a fusion protein, synthesized in cells after transfection of a suitable plasmid carrier. Non-protein organic fluorophores belong to the following major chemical families: Xanthene derivatives: fluorescein, rhodamine, Oregon green, eosin, and Texas red Cyanine derivatives: cyanine, indocarbocyanine, oxacarbocyanine, thiacarbocyanine, and merocyanine Squaraine derivatives and ring-substituted squaraines, including Seta and Square dyes Squaraine rotaxane derivatives: See Tau dyes Naphthalene derivatives (dansyl and prodan derivatives) Coumarin derivatives Oxadiazole derivatives: pyridyloxazole, nitrobenzoxadiazole, and benzoxadiazole Anthracene derivatives: anthraquinones, including DRAQ5, DRAQ7, and CyTRAK Orange Pyrene derivatives: cascade blue, etc. Oxazine derivatives: Nile red, Nile blue, cresyl violet, oxazine 170, etc. Acridine derivatives: proflavin, acridine orange, acridine yellow, etc. Arylmethine derivatives: auramine, crystal violet, malachite green Tetrapyrrole derivatives: porphin, phthalocyanine, bilirubin Dipyrromethene derivatives: BODIPY, aza-BODIPY These fluorophores fluoresce due to delocalized electrons which can jump a band and stabilize the energy absorbed. For example, benzene, one of the simplest aromatic hydrocarbons, is excited at 254 nm and emits at 300 nm. This distinguishes fluorophores from quantum dots, which are fluorescent semiconductor nanoparticles. They can be attached to proteins via specific functional groups, such as amino groups (active ester, carboxylate, isothiocyanate, hydrazine), carboxyl groups (carbodiimide), thiol (maleimide, acetyl bromide), and organic azide (via click chemistry), or non-specifically (glutaraldehyde).
Additionally, various functional groups can be present to alter their properties, such as solubility, or confer special properties, such as boronic acid, which binds to sugars, or multiple carboxyl groups to bind to certain cations. When the dye contains an electron-donating and an electron-accepting group at opposite ends of the aromatic system, this dye will probably be sensitive to the environment's polarity (solvatochromic), hence called environment-sensitive. Often dyes are used inside cells, whose membranes are impermeable to charged molecules; as a result, the carboxyl groups are converted into an ester, which is removed by esterases inside the cells, e.g., fura-2AM and fluorescein-diacetate. The following dye families are trademark groups, and do not necessarily share structural similarities. CF dye (Biotium) DRAQ and CyTRAK probes (BioStatus) BODIPY (Invitrogen) EverFluor (Setareh Biotech) Alexa Fluor (Invitrogen) Bella Fluor (Setareh Biotech) DyLight Fluor (Thermo Scientific, Pierce) Atto and Tracy (Sigma Aldrich) FluoProbes (Interchim) Abberior Dyes (Abberior) DY and MegaStokes Dyes (Dyomics) Sulfo Cy dyes (Cyandye) HiLyte Fluor (AnaSpec) Seta, SeTau and Square Dyes (SETA BioMedicals) Quasar and Cal Fluor dyes (Biosearch Technologies) SureLight Dyes (APC, RPEPerCP, Phycobilisomes) (Columbia Biosciences) APC, APCXL, RPE, BPE (Phyco-Biotech, Greensea, Prozyme, Flogen) Vio Dyes (Miltenyi Biotec) Examples of frequently encountered fluorophores Reactive and conjugated dyes Abbreviations: Ex (nm): Excitation wavelength in nanometers Em (nm): Emission wavelength in nanometers MW: Molecular weight QY: Quantum yield Nucleic acid dyes Cell function dyes Fluorescent proteins Advanced fluorescent proteins StayGold and mStayGold are advanced fluorescent proteins that have significantly contributed to the field of live-cell imaging. StayGold, known for its high photostability and brightness, was originally designed as a dimeric fluorescent protein, which, while effective, posed challenges related to aggregation and labeling accuracy. To address these limitations, mStayGold was engineered as a monomeric variant, enhancing its utility in precise protein labeling. mStayGold exhibits superior photostability, maintaining fluorescence under high-irradiance conditions, and demonstrates increased brightness compared to the original StayGold. Additionally, it matures faster, allowing for quicker imaging post-transfection. These advancements make mStayGold a versatile tool for a variety of applications, including single-molecule tracking and high-resolution imaging of dynamic cellular processes, thereby expanding the capabilities of fluorescent proteins in biological research. Abbreviations: Ex (nm): Excitation wavelength in nanometers Em (nm): Emission wavelength in nanometers MW: Molecular weight QY: Quantum yield BR: Brightness: Molar absorption coefficient * quantum yield / 1000 PS: Photostability: time [sec] to reduce brightness by 50% Applications Fluorophores have particular importance in the field of biochemistry and protein studies, for example, in immunofluorescence, cell analysis, immunohistochemistry, and small molecule sensors.
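The characteristics defined earlier (excitation and emission maxima, molar absorption coefficient, quantum yield) combine into simple derived figures such as the Stokes shift and the brightness (BR) defined in the table legend above. A minimal sketch; the fluorescein-like numbers are approximate literature values assumed for illustration, not values taken from this article:

```python
PLANCK_EV_NM = 1239.84  # h*c in eV*nm

# Approximate fluorescein-like values, assumed for illustration only.
ex_nm, em_nm = 494, 521      # excitation / emission maxima (nm)
epsilon = 75_000             # molar absorption coefficient (M^-1 cm^-1)
qy = 0.79                    # quantum yield (dimensionless)

stokes_shift = em_nm - ex_nm
brightness = epsilon * qy / 1000   # the BR figure defined in the legend above

print(f"Stokes shift: {stokes_shift} nm")
print(f"Photon energy: {PLANCK_EV_NM / ex_nm:.2f} eV absorbed, "
      f"{PLANCK_EV_NM / em_nm:.2f} eV emitted")
print(f"Brightness (BR): {brightness:.1f}")
```

The emitted photon carries less energy than the absorbed one, which is exactly the longer-wavelength emission the Stokes shift describes.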
Uses outside the life sciences Fluorescent dyes find a wide use in industry, going under the name of "neon colors", such as: Multi-ton scale usages in textile dyeing and optical brighteners in laundry detergents Advanced cosmetic formulations Safety equipment and clothing Organic light-emitting diodes (OLEDs) Fine arts and design (posters and paintings) Synergists for insecticides and experimental drugs Dyes in highlighters to give off a glow-like effect Solar panels to collect more light / wavelengths Fluorescent sea dye is used to help airborne search and rescue teams locate objects in the water See also :Category:Fluorescent dyes Fluorescence in the life sciences Quenching of fluorescence Fluorescence recovery after photobleaching (FRAP) - an application for quantifying mobility of molecules in lipid bilayers. References External links The Database of fluorescent dyes Table of fluorochromes The Molecular Probes Handbook - a comprehensive resource for fluorescence technology and its applications. Dyes Luminescence
Fluorophore
[ "Chemistry" ]
2,629
[ "Luminescence", "Molecular physics" ]
532,175
https://en.wikipedia.org/wiki/Transfer%20RNA
Transfer RNA (abbreviated tRNA and formerly referred to as sRNA, for soluble RNA) is an adaptor molecule composed of RNA, typically 76 to 90 nucleotides in length (in eukaryotes). In a cell, it provides the physical link between the genetic code in messenger RNA (mRNA) and the amino acid sequence of proteins, carrying the correct sequence of amino acids to be combined by the protein-synthesizing machinery, the ribosome. Each three-nucleotide codon in mRNA is complemented by a three-nucleotide anticodon in tRNA. As such, tRNAs are a necessary component of translation, the biological synthesis of new proteins in accordance with the genetic code. Overview The process of translation starts with the information stored in the nucleotide sequence of DNA. This is first transformed into mRNA, then tRNA specifies which three-nucleotide codon from the genetic code corresponds to which amino acid. Each mRNA codon is recognized by a particular type of tRNA, which docks to it along a three-nucleotide anticodon, and together they form three complementary base pairs. On the other end of the tRNA is a covalent attachment to the amino acid corresponding to the anticodon sequence, with each type of tRNA attaching to a specific amino acid. Because the genetic code contains multiple codons that specify the same amino acid, there are several tRNA molecules bearing different anticodons which carry the same amino acid. The covalent attachment to the tRNA 3' end is catalysed by enzymes called aminoacyl tRNA synthetases. During protein synthesis, tRNAs with attached amino acids are delivered to the ribosome by proteins called elongation factors, which aid in association of the tRNA with the ribosome, synthesis of the new polypeptide, and translocation (movement) of the ribosome along the mRNA. If the tRNA's anticodon matches the mRNA, another tRNA already bound to the ribosome transfers the growing polypeptide chain from its 3' end to the amino acid attached to the 3' end of the newly delivered tRNA, a reaction catalysed by the ribosome. A large number of the individual nucleotides in a tRNA molecule may be chemically modified, often by methylation or deamidation. These unusual bases sometimes affect the tRNA's interaction with ribosomes and sometimes occur in the anticodon to alter base-pairing properties. Structure The structure of tRNA can be decomposed into its primary structure, its secondary structure (usually visualized as the cloverleaf structure), and its tertiary structure (all tRNAs have a similar L-shaped 3D structure that allows them to fit into the P and A sites of the ribosome). The cloverleaf structure becomes the 3D L-shaped structure through coaxial stacking of the helices, which is a common RNA tertiary structure motif. The lengths of each arm, as well as the loop 'diameter', in a tRNA molecule vary from species to species. The tRNA structure consists of the following: The acceptor stem is a 7- to 9-base pair (bp) stem made by the base pairing of the 5′-terminal nucleotide with the 3′-terminal nucleotide (which contains the CCA tail used to attach the amino acid). The acceptor stem may contain non-Watson-Crick base pairs. The CCA tail is a cytosine-cytosine-adenine sequence at the 3′ end of the tRNA molecule. The amino acid loaded onto the tRNA by aminoacyl tRNA synthetases, to form aminoacyl-tRNA, is covalently bonded to the 3′-hydroxyl group on the CCA tail. This sequence is important for the recognition of tRNA by enzymes and critical in translation. 
In prokaryotes, the CCA sequence is transcribed in some tRNA sequences. In most prokaryotic tRNAs and eukaryotic tRNAs, the CCA sequence is added during processing and therefore does not appear in the tRNA gene. The D loop is a 4- to 6-bp stem ending in a loop that often contains dihydrouridine. The anticodon loop is a 5-bp stem whose loop contains the anticodon. The TΨC loop is so named because of the characteristic presence of the unusual base Ψ in the loop, where Ψ is pseudouridine, a modified uridine. The modified base is often found within the sequence 5'-TΨCGA-3', with the T (ribothymidine, m5U) and A forming a base pair. The variable loop or V loop sits between the anticodon loop and the TΨC loop and, as its name implies, varies in size from 3 to 21 bases. In some tRNAs, the "loop" is long enough to form a rigid stem, the variable arm. tRNAs with a V loop more than 10 bases long are classified as "class II" and the rest are called "class I". Anticodon An anticodon is a unit of three nucleotides corresponding to the three bases of an mRNA codon. Each tRNA has a distinct anticodon triplet sequence that can form 3 complementary base pairs to one or more codons for an amino acid. Some anticodons pair with more than one codon due to wobble base pairing. Frequently, the first nucleotide of the anticodon is one not found on mRNA: inosine, which can hydrogen bond to more than one base in the corresponding codon position. In the genetic code, it is common for a single amino acid to be specified by all four third-position possibilities, or at least by both pyrimidines and purines; for example, the amino acid glycine is coded for by the codon sequences GGU, GGC, GGA, and GGG. Other modified nucleotides may also appear at the first anticodon position—sometimes known as the "wobble position"—resulting in subtle changes to the genetic code, as for example in mitochondria. The possibility of wobble bases reduces the number of tRNA types required: instead of 61 types (one for each sense codon of the standard genetic code), only 31 tRNAs are required to translate, unambiguously, all 61 sense codons. Nomenclature A tRNA is commonly named by its intended amino acid (e.g. ), by its anticodon sequence (e.g. ), or by both (e.g. or ). These two features describe the main function of the tRNA, but do not actually cover the whole diversity of tRNA variation; as a result, numerical suffixes are added to differentiate. tRNAs intended for the same amino acid are called "isotypes"; those with the same anticodon sequence are called "isoacceptors"; and those with both being the same but differing in other places are called "isodecoders". Aminoacylation Aminoacylation is the process of adding an aminoacyl group to a compound. It covalently links an amino acid to the CCA 3′ end of a tRNA molecule. Each tRNA is aminoacylated (or charged) with a specific amino acid by an aminoacyl tRNA synthetase. There is normally a single aminoacyl tRNA synthetase for each amino acid, despite the fact that there can be more than one tRNA, and more than one anticodon for an amino acid. Recognition of the appropriate tRNA by the synthetases is not mediated solely by the anticodon, and the acceptor stem often plays a prominent role. Reaction: amino acid + ATP → aminoacyl-AMP + PPi, followed by aminoacyl-AMP + tRNA → aminoacyl-tRNA + AMP. Certain organisms can have one or more aminoacyl-tRNA synthetases missing.
This leads to charging of the tRNA by a chemically related amino acid, and by use of an enzyme or enzymes, the tRNA is modified to be correctly charged. For example, Helicobacter pylori has glutaminyl tRNA synthetase missing. Thus, glutamate tRNA synthetase charges tRNA-glutamine (tRNA-Gln) with glutamate. An amidotransferase then converts the acid side chain of the glutamate to the amide, forming the correctly charged gln-tRNA-Gln. Binding to ribosome The ribosome has three binding sites for tRNA molecules that span the space between the two ribosomal subunits: the A (aminoacyl), P (peptidyl), and E (exit) sites. In addition, the ribosome has two other sites for tRNA binding that are used during mRNA decoding or during the initiation of protein synthesis. These are the T site (named after elongation factor Tu) and the I site (initiation). By convention, the tRNA binding sites are denoted with the site on the small ribosomal subunit listed first and the site on the large ribosomal subunit listed second. For example, the A site is often written A/A, the P site, P/P, and the E site, E/E. Binding proteins such as L27, L2, L14, L15, and L16 at the A- and P-sites have been determined by affinity labeling by A. P. Czernilofsky et al. (Proc. Natl. Acad. Sci, USA, pp. 230–234, 1974). Once translation initiation is complete, the first aminoacyl tRNA is located in the P/P site, ready for the elongation cycle described below. During translation elongation, tRNA first binds to the ribosome as part of a complex with elongation factor Tu (EF-Tu) or its eukaryotic (eEF-1) or archaeal counterpart. This initial tRNA binding site is called the A/T site. In the A/T site, the A-site half resides in the small ribosomal subunit where the mRNA decoding site is located. The mRNA decoding site is where the mRNA codon is read out during translation. The T-site half resides mainly on the large ribosomal subunit where EF-Tu or eEF-1 interacts with the ribosome. Once mRNA decoding is complete, the aminoacyl-tRNA is bound in the A/A site and is ready for the next peptide bond to be formed to its attached amino acid. The peptidyl-tRNA, which transfers the growing polypeptide to the aminoacyl-tRNA bound in the A/A site, is bound in the P/P site. Once the peptide bond is formed, the tRNA in the P/P site is deacylated, having a free 3' end, and the tRNA in the A/A site bears the growing polypeptide chain. To allow for the next elongation cycle, the tRNAs then move through hybrid A/P and P/E binding sites, before completing the cycle and residing in the P/P and E/E sites. Once the A/A and P/P tRNAs have moved to the P/P and E/E sites, the mRNA has also moved over by one codon and the A/T site is vacant, ready for the next round of mRNA decoding. The tRNA bound in the E/E site then leaves the ribosome. The P/I site is actually the first to bind aminoacyl-tRNA, which is delivered by an initiation factor called IF2 in bacteria. However, the existence of the P/I site in eukaryotic or archaeal ribosomes has not yet been confirmed. The P-site protein L27 has been determined by affinity labeling by E. Collatz and A. P. Czernilofsky (FEBS Lett., Vol. 63, pp. 283–286, 1976). tRNA genes Organisms vary in the number of tRNA genes in their genome. For example, the nematode worm C. elegans, a commonly used model organism in genetics studies, has 29,647 genes in its nuclear genome, of which 620 code for tRNA. The budding yeast Saccharomyces cerevisiae has 275 tRNA genes in its genome.
The number of tRNA genes per genome can vary widely, with bacterial species from groups such as Fusobacteria and Tenericutes having around 30 genes per genome while complex eukaryotic genomes such as the zebrafish (Danio rerio) can bear more than 10 thousand tRNA genes. In the human genome, which, according to January 2013 estimates, has about 20,848 protein coding genes in total, there are 497 nuclear genes encoding cytoplasmic tRNA molecules, and 324 tRNA-derived pseudogenes—tRNA genes thought to be no longer functional (although pseudo tRNAs have been shown to be involved in antibiotic resistance in bacteria). As with all eukaryotes, there are 22 mitochondrial tRNA genes in humans. Mutations in some of these genes have been associated with severe diseases like the MELAS syndrome. Regions in nuclear chromosomes, very similar in sequence to mitochondrial tRNA genes, have also been identified (tRNA-lookalikes). These tRNA-lookalikes are also considered part of the nuclear mitochondrial DNA (genes transferred from the mitochondria to the nucleus). The phenomenon of multiple nuclear copies of mitochondrial tRNA (tRNA-lookalikes) has been observed in many higher organisms from human to the opossum, suggesting the possibility that the lookalikes are functional. Cytoplasmic tRNA genes can be grouped into 49 families according to their anticodon features. These genes are found on all chromosomes, except chromosome 22 and the Y chromosome. High clustering on 6p is observed (140 tRNA genes), as well as on chromosome 1. The HGNC, in collaboration with the Genomic tRNA Database (GtRNAdb) and experts in the field, has approved unique names for human genes that encode tRNAs. Typically, tRNA genes from Bacteria are shorter (mean = 77.6 bp) than tRNAs from Archaea (mean = 83.1 bp) and eukaryotes (mean = 84.7 bp). The mature tRNA follows an opposite pattern with tRNAs from Bacteria being usually longer (median = 77.6 nt) than tRNAs from Archaea (median = 76.8 nt), with eukaryotes exhibiting the shortest mature tRNAs (median = 74.5 nt). Evolution Genomic tRNA content is a differentiating feature of genomes among biological domains of life: Archaea present the simplest situation in terms of genomic tRNA content with a uniform number of gene copies, Bacteria have an intermediate situation and Eukarya present the most complex situation. Eukarya present not only more tRNA gene content than the other two domains but also a high variation in gene copy number among different isoacceptors, and this complexity seems to be due to duplications of tRNA genes and changes in anticodon specificity. Evolution of the tRNA gene copy number across different species has been linked to the appearance of specific tRNA modification enzymes (uridine methyltransferases in Bacteria, and adenosine deaminases in Eukarya), which increase the decoding capacity of a given tRNA. As an example, tRNAAla encodes four different tRNA isoacceptors (AGC, UGC, GGC and CGC). In Eukarya, AGC isoacceptors are extremely enriched in gene copy number in comparison to the rest of isoacceptors, and this has been correlated with the A-to-I modification of its wobble base. This same trend has been shown for most amino acids of eukaryal species. Indeed, the effect of these two tRNA modifications is also seen in codon usage bias.
Highly expressed genes seem to be enriched in codons that are decoded by these modified tRNAs, which suggests a possible role of these codons—and consequently of these tRNA modifications—in translation efficiency. Many species have lost specific tRNAs during evolution. For instance, both mammals and birds lack the same 14 out of the possible 64 tRNA genes, but other life forms contain these tRNAs. For translating codons for which an exactly pairing tRNA is missing, organisms resort to a strategy called wobbling, in which imperfectly matched tRNA/mRNA pairs still give rise to translation, although this strategy also increases the propensity for translation errors. The reasons why tRNA genes have been lost during evolution remain under debate but may relate to improving resistance to viral infection. Because nucleotide triplets can present more combinations than there are amino acids and associated tRNAs, there is redundancy in the genetic code, and several different 3-nucleotide codons can express the same amino acid. This codon bias is what necessitates codon optimization. Hypothetical origin The top half of tRNA (consisting of the T arm and the acceptor stem with 5′-terminal phosphate group and 3′-terminal CCA group) and the bottom half (consisting of the D arm and the anticodon arm) are independent units in structure as well as in function. The top half may have evolved first including the 3′-terminal genomic tag which originally may have marked tRNA-like molecules for replication in early RNA world. The bottom half may have evolved later as an expansion, e.g. as protein synthesis started in RNA world and turned it into a ribonucleoprotein world (RNP world). This proposed scenario is called the genomic tag hypothesis. In fact, tRNA and tRNA-like aggregates have an important catalytic influence (i.e., as ribozymes) on replication still today. These roles may be regarded as 'molecular (or chemical) fossils' of RNA world. In March 2021, researchers reported evidence suggesting that an early form of transfer RNA could have been a replicator ribozyme molecule in the very early development of life, or abiogenesis. Evolution of type I and type II tRNAs is explained to the last nucleotide by the three 31 nucleotide minihelix tRNA evolution theorem, which also describes the pre-life to life transition on Earth. Three 31 nucleotide minihelices of known sequence were ligated in pre-life to generate a 93 nucleotide tRNA precursor. In pre-life, a 31 nucleotide D loop minihelix (GCGGCGGUAGCCUAGCCUAGCCUACCGCCGC) was ligated to two 31 nucleotide anticodon loop minihelices (GCGGCGGCCGGGCU/???AACCCGGCCGCCGC; / indicates a U-turn conformation in the RNA backbone; ? indicates unknown base identity) to form the 93 nucleotide tRNA precursor. To generate type II tRNAs, a single internal 9 nucleotide deletion occurred within ligated acceptor stems (CCGCCGCGCGGCGG goes to GGCGG). To generate type I tRNAs, an additional, related 9 nucleotide deletion occurred within ligated acceptor stems within the variable loop region (CCGCCGCGCGGCGG goes to CCGCC). These two 9 nucleotide deletions are identical on complementary RNA strands. tRNAomes (all of the tRNAs of an organism) were generated by duplication and mutation. Very clearly, life evolved from a polymer world that included RNA repeats and RNA inverted repeats (stem-loop-stems). Of particular importance were the 7 nucleotide U-turn loops (CU/???AA).
After LUCA (the last universal common (cellular) ancestor), the T loop evolved to interact with the D loop at the tRNA "elbow" (T loop: UU/CAAAU, after LUCA). Polymer world progressed to minihelix world to tRNA world, which has endured for ~4 billion years. Analysis of tRNA sequences reveals a major successful pathway in the evolution of life on Earth. tRNA-derived fragments tRNA-derived fragments (or tRFs) are short molecules that emerge after cleavage of the mature tRNAs or the precursor transcript. Both cytoplasmic and mitochondrial tRNAs can produce fragments. There are at least four structural types of tRFs believed to originate from mature tRNAs, including the relatively long tRNA halves and short 5'-tRFs, 3'-tRFs and i-tRFs. The precursor tRNA can be cleaved to produce molecules from the 5' leader or 3' trail sequences. Cleavage enzymes include Angiogenin, Dicer, RNase Z and RNase P. Especially in the case of Angiogenin, the tRFs have a characteristically unusual cyclic phosphate at their 3' end and a hydroxyl group at the 5' end. tRFs appear to play a role in RNA interference, specifically in the suppression of retroviruses and retrotransposons that use tRNA as a primer for replication. Half-tRNAs cleaved by angiogenin are also known as tiRNAs. The biogenesis of smaller fragments, including those that function as piRNAs, is less understood. tRFs have multiple dependencies and roles, such as exhibiting significant changes between sexes, among races, and with disease status. Functionally, they can be loaded on Ago and act through RNAi pathways, participate in the formation of stress granules, displace mRNAs from RNA-binding proteins or inhibit translation. At the system or the organismal level, the four types of tRFs have a diverse spectrum of activities. Functionally, tRFs are associated with viral infection, cancer, cell proliferation and also with epigenetic transgenerational regulation of metabolism. tRFs are not restricted to humans and have been shown to exist in multiple organisms. Two online tools are available for those wishing to learn more about tRFs: the framework for the interactive exploration of mitochondrial and nuclear tRNA fragments (MINTbase) and the relational database of Transfer RNA related Fragments (tRFdb). MINTbase also provides a naming scheme for tRFs called tRF-license plates (or MINTcodes) that is genome independent; the scheme compresses an RNA sequence into a shorter string. Engineered tRNAs tRNAs with modified anticodons and/or acceptor stems can be used to modify the genetic code. Scientists have successfully repurposed codons (sense and stop) to accept amino acids (natural and novel), for both initiation (see: start codon) and elongation. In 1990, tRNA (modified from the tRNA gene metY) was inserted into E. coli, causing it to initiate protein synthesis at the UAG stop codon, as long as it is preceded by a strong Shine-Dalgarno sequence. At initiation it not only inserts the traditional formylmethionine, but also formylglutamine, as glutamyl-tRNA synthase also recognizes the new tRNA. The experiment was repeated in 1993, now with an elongator tRNA modified to be recognized by the methionyl-tRNA formyltransferase. A similar result was obtained in Mycobacterium. Later experiments showed that the new tRNA was orthogonal to the regular AUG start codon, showing no detectable off-target translation initiation events in a genomically recoded E. coli strain.
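Returning to the codon–anticodon pairing described in the Anticodon section above: before wobble and modified bases are taken into account, the anticodon is simply the reverse complement of the codon. A minimal sketch:

```python
# The anticodon is the reverse complement of the codon (both read 5'->3'),
# before wobble or modified bases such as inosine are taken into account.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def anticodon_for(codon):
    return "".join(COMPLEMENT[base] for base in reversed(codon))

for codon in ("GGU", "GGC", "GGA", "GGG"):   # the four glycine codons
    print(codon, "->", anticodon_for(codon))
# GGU -> ACC, GGC -> GCC, GGA -> UCC, GGG -> CCC; wobble pairing lets fewer
# than four distinct tRNAs cover all four of these codons.
```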
tRNA biogenesis In eukaryotic cells, tRNAs are transcribed by RNA polymerase III as pre-tRNAs in the nucleus. RNA polymerase III recognizes two highly conserved downstream promoter sequences: the 5′ intragenic control region (5′-ICR, D-control region, or A box), and the 3′-ICR (T-control region or B box) inside tRNA genes. The first promoter begins at +8 of mature tRNAs and the second promoter is located 30–60 nucleotides downstream of the first promoter. The transcription terminates after a stretch of four or more thymidines. Pre-tRNAs undergo extensive modifications inside the nucleus. Some pre-tRNAs contain introns that are spliced, or cut, to form the functional tRNA molecule; in bacteria these self-splice, whereas in eukaryotes and archaea they are removed by tRNA-splicing endonucleases. Eukaryotic pre-tRNA contains a bulge-helix-bulge (BHB) structural motif that is important for recognition and precise splicing of the tRNA intron by endonucleases. This motif position and structure are evolutionarily conserved. However, some organisms, such as unicellular algae, have a non-canonical position of the BHB motif as well as of the 5′- and 3′-ends of the spliced intron sequence. The 5′ sequence is removed by RNase P, whereas the 3′ end is removed by the tRNase Z enzyme. A notable exception is in the archaeon Nanoarchaeum equitans, which does not possess an RNase P enzyme and has a promoter placed such that transcription starts at the 5′ end of the mature tRNA. The non-templated 3′ CCA tail is added by a nucleotidyl transferase. Before tRNAs are exported into the cytoplasm by Los1/Xpo-t, tRNAs are aminoacylated. The order of the processing events is not conserved. For example, in yeast, the splicing is not carried out in the nucleus but at the cytoplasmic side of mitochondrial membranes. History The existence of tRNA was first hypothesized by Francis Crick as the "adaptor hypothesis" based on the assumption that there must exist an adapter molecule capable of mediating the translation of the RNA alphabet into the protein alphabet. Paul C. Zamecnik, Mahlon Hoagland, and Mary Louise Stephenson discovered tRNA. Significant research on structure was conducted in the early 1960s by Alex Rich and Donald Caspar, two researchers in Boston, the Jacques Fresco group at Princeton University and a United Kingdom group at King's College London. In 1965, Robert W. Holley of Cornell University reported the primary structure and suggested three secondary structures. tRNA was first crystallized in Madison, Wisconsin, by Robert M. Bock. The cloverleaf structure was ascertained by several other studies in the following years and was finally confirmed using X-ray crystallography studies in 1974. Two independent groups, Kim Sung-Hou working under Alexander Rich and a British group headed by Aaron Klug, published the same crystallography findings within a year. Clinical relevance Interference with aminoacylation may be useful as an approach to treating some diseases: cancerous cells may be relatively vulnerable to disturbed aminoacylation compared to healthy cells. The protein synthesis associated with cancer and viral biology is often very dependent on specific tRNA molecules. For instance, in liver cancer, charging tRNA-Lys-CUU with lysine sustains liver cancer cell growth and metastasis, whereas healthy cells have a much lower dependence on this tRNA to support cellular physiology. Similarly, hepatitis E virus requires a tRNA landscape that substantially differs from that associated with uninfected cells.
Hence, inhibition of aminoacylation of specific tRNA species is considered a promising novel avenue for the rational treatment of a plethora of diseases. See also Cloverleaf model of tRNA Kim Sung-Hou Kissing stem-loop mRNA non-coding RNA and introns Slippery sequence tmRNA Transfer RNA-like structures Translation tRNADB Wobble hypothesis Aminoacyl-tRNA References External links tRNAdb (updated and completely restructured version of Spritzls tRNA compilation) tRNA surprising role in breast cancer growth tRNA link to heart disease and stroke GtRNAdb: Collection of tRNAs identified from complete genomes HGNC: Gene nomenclature of human tRNAs Molecule of the Month © RCSB Protein Data Bank: Transfer RNA Aminoacyl-tRNA Synthetases Elongation Factors Rfam entry for tRNA RNA Protein biosynthesis Non-coding RNA Articles containing video clips
Transfer RNA
[ "Chemistry" ]
6,011
[ "Protein biosynthesis", "Gene expression", "Biosynthesis" ]
532,405
https://en.wikipedia.org/wiki/Quantum%20number
In quantum physics and chemistry, quantum numbers are quantities that characterize the possible states of the system. To fully specify the state of the electron in a hydrogen atom, four quantum numbers are needed. The traditional set of quantum numbers includes the principal, azimuthal, magnetic, and spin quantum numbers. To describe other systems, different quantum numbers are required. For subatomic particles, one needs to introduce new quantum numbers, such as the flavour of quarks, which have no classical correspondence. Quantum numbers are closely related to eigenvalues of observables. When the corresponding observable commutes with the Hamiltonian of the system, the quantum number is said to be "good", and acts as a constant of motion in the quantum dynamics. History Electronic quantum numbers In the era of the old quantum theory, starting from Max Planck's proposal of quanta in his model of blackbody radiation (1900) and Albert Einstein's adaptation of the concept to explain the photoelectric effect (1905), and until Erwin Schrödinger published his eigenfunction equation in 1926, the concept behind quantum numbers developed based on atomic spectroscopy and theories from classical mechanics with extra ad hoc constraints. Many results from atomic spectroscopy had been summarized in the Rydberg formula involving differences between two series of energies related by integer steps. The model of the atom, first proposed by Niels Bohr in 1913, relied on a single quantum number. Together with Bohr's constraint that radiation absorption is not classical, it was able to explain the Balmer series portion of Rydberg's atomic spectrum formula. As Bohr notes in his subsequent Nobel lecture, the next step was taken by Arnold Sommerfeld in 1915. Sommerfeld's atomic model added a second quantum number and the concept of quantized phase integrals to justify them. Sommerfeld's model was still essentially two dimensional, modeling the electron as orbiting in a plane; in 1919 he extended his work to three dimensions using 'space quantization' in place of the quantized phase integrals. Karl Schwarzschild and Sommerfeld's student, Paul Epstein, independently showed that adding a third quantum number gave a complete account of the Stark effect results. A consequence of space quantization was that the electron's orbital interaction with an external magnetic field would be quantized. This seemed to be confirmed when the Stern-Gerlach experiment found quantized deflections of silver atoms in an inhomogeneous magnetic field. The confirmation would turn out to be premature: more quantum numbers would be needed. The fourth and fifth quantum numbers of the atomic era arose from attempts to understand the Zeeman effect. Like the Stern-Gerlach experiment, the Zeeman effect reflects the interaction of atoms with a magnetic field; in a weak field the experimental results were called "anomalous" because they diverged from any theory at the time. Wolfgang Pauli's solution to this issue was to introduce another quantum number taking only two possible values, ±1/2. This would ultimately become the quantized values of the projection of spin, the intrinsic angular momentum of the electron. In 1927 Ronald Fraser demonstrated that the quantization in the Stern-Gerlach experiment was due to the magnetic moment associated with the electron spin rather than its orbital angular momentum. 
Pauli's success in developing the arguments for a spin quantum number without relying on classical models set the stage for the development of quantum numbers for elementary particles in the remainder of the 20th century. Bohr, with his Aufbau or "building up" principle, and Pauli, with his exclusion principle, connected the atom's electronic quantum numbers into a framework for predicting the properties of atoms. When Schrödinger published his wave equation and calculated the energy levels of hydrogen, these two principles carried over to become the basis of atomic physics. Nuclear quantum numbers With successful models of the atom, the attention of physics turned to models of the nucleus. Beginning with Heisenberg's initial model of proton-neutron binding in 1932, Eugene Wigner introduced isospin in 1937, the first 'internal' quantum number unrelated to a symmetry in real space-time. Connection to symmetry As quantum mechanics developed, abstraction increased and models based on symmetry and invariance played increasing roles. Two years before his work on the quantum wave equation, Schrödinger applied the symmetry ideas originated by Emmy Noether and Hermann Weyl to the electromagnetic field. As quantum electrodynamics developed in the 1930s and 1940s, group theory became an important tool. By 1953 Chen Ning Yang had become obsessed with the idea that group theory could be applied to connect the conserved quantum numbers of nuclear collisions to symmetries in a field theory of nucleons. With Robert Mills, Yang developed a non-abelian gauge theory based on the conservation of the nuclear isospin quantum numbers. General properties Good quantum numbers correspond to eigenvalues of operators that commute with the Hamiltonian, quantities that can be known with precision at the same time as the system's energy. Specifically, observables that commute with the Hamiltonian are simultaneously diagonalizable with it, and so the eigenvalues and the energy (eigenvalues of the Hamiltonian) are not limited by an uncertainty relation arising from non-commutativity. Together, a specification of all of the quantum numbers of a quantum system fully characterizes a basis state of the system, and can in principle be measured together. Many observables have discrete spectra (sets of eigenvalues) in quantum mechanics, so the quantities can only be measured in discrete values. In particular, this leads to quantum numbers that take values in discrete sets of integers or half-integers, although they could approach infinity in some cases. The tally of quantum numbers varies from system to system and has no universal answer. Hence these parameters must be found for each system to be analyzed. A quantized system requires at least one quantum number. The dynamics (i.e. time evolution) of any quantum system are described by a quantum operator in the form of a Hamiltonian, H. There is one quantum number of the system corresponding to the system's energy; i.e., one of the eigenvalues of the Hamiltonian. There is also one quantum number for each linearly independent operator that commutes with the Hamiltonian. A complete set of commuting observables (CSCO) that commute with the Hamiltonian characterizes the system with all its quantum numbers. There is a one-to-one relationship between the quantum numbers and the operators of the CSCO, with each quantum number taking one of the eigenvalues of its corresponding operator. 
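To make the notion of a "good" quantum number concrete, the following minimal sketch (an invented toy system, not drawn from the article) checks numerically that an observable commuting with the Hamiltonian is simultaneously diagonalizable with it, so its eigenvalues can serve as conserved labels on the basis states:

```python
import numpy as np

# Toy 4-level system: H and A are diagonal in the same basis, so [H, A] = 0
# and the eigenvalues of A act as "good" quantum numbers.
H = np.diag([1.0, 1.0, 2.0, 3.0])      # Hamiltonian (first level degenerate)
A = np.diag([1.0, -1.0, 1.0, -1.0])    # a commuting observable, e.g. a parity

print(np.allclose(H @ A - A @ H, 0))   # True: A is a constant of motion

# Each basis state carries the pair (energy, A-eigenvalue), which is exactly
# what "labeling states by quantum numbers" means; note A distinguishes the
# two degenerate E = 1 states.
for i in range(4):
    print(f"state {i}: E = {H[i, i]:.0f}, a = {A[i, i]:+.0f}")
```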
As a result of the different basis that may be arbitrarily chosen to form a complete set of commuting operators, different sets of quantum numbers may be used for the description of the same system in different situations. Electron in a hydrogen-like atom Four quantum numbers can describe an electron energy level in a hydrogen-like atom completely: Principal quantum number (n) Azimuthal quantum number (ℓ) Magnetic quantum number (mℓ) Spin quantum number (ms) These quantum numbers are also used in the classical description of nuclear particle states (e.g. protons and neutrons). A quantum description of molecular orbitals requires other quantum numbers, because the symmetries of the molecular system are different. Principal quantum number The principal quantum number n describes the electron shell of an electron. The value of n ranges from 1 to the shell containing the outermost electron of that atom; that is, n = 1, 2, 3, .... For example, in caesium (Cs), the outermost valence electron is in the shell with energy level 6, so an electron in caesium can have an n value from 1 to 6. The average distance between the electron and the nucleus increases with n. Azimuthal quantum number The azimuthal quantum number ℓ, also known as the orbital angular momentum quantum number, describes the subshell, and gives the magnitude of the orbital angular momentum through the relation L = ħ√(ℓ(ℓ + 1)). In chemistry and spectroscopy, ℓ = 0 is called an s orbital, ℓ = 1 a p orbital, ℓ = 2 a d orbital, and ℓ = 3 an f orbital. The value of ℓ ranges from 0 to n − 1, so the first p orbital (ℓ = 1) appears in the second electron shell (n = 2), the first d orbital (ℓ = 2) appears in the third shell (n = 3), and so on. A quantum number beginning in n = 3, ℓ = 0, describes an electron in the s orbital of the third electron shell of an atom. In chemistry, this quantum number is very important, since it specifies the shape of an atomic orbital and strongly influences chemical bonds and bond angles. The azimuthal quantum number can also denote the number of angular nodes present in an orbital. For example, for p orbitals ℓ = 1, and thus the number of angular nodes in a p orbital is 1. Magnetic quantum number The magnetic quantum number mℓ describes the specific orbital within the subshell, and yields the projection of the orbital angular momentum along a specified axis: Lz = mℓ ħ. The values of mℓ range from −ℓ to ℓ, in integer steps. The s subshell (ℓ = 0) contains only one orbital, and therefore the mℓ of an electron in an s orbital will always be 0. The p subshell (ℓ = 1) contains three orbitals, so the mℓ of an electron in a p orbital will be −1, 0, or 1. The d subshell (ℓ = 2) contains five orbitals, with mℓ values of −2, −1, 0, 1, and 2. Spin magnetic quantum number The spin magnetic quantum number ms describes the intrinsic spin angular momentum of the electron within each orbital and gives the projection of the spin angular momentum along the specified axis: Sz = ms ħ. In general, the values of ms range from −s to s, where s is the spin quantum number, associated with the magnitude of the particle's intrinsic spin angular momentum: S = ħ√(s(s + 1)). An electron state has spin number s = 1/2, so ms will be +1/2 ("spin up") or −1/2 ("spin down"). Since electrons are fermions they obey the Pauli exclusion principle: each electron state must have different quantum numbers. Therefore, every orbital will be occupied with at most two electrons, one for each spin state. The Aufbau principle and Hund's Rules A multi-electron atom can be modeled qualitatively as a hydrogen-like atom with higher nuclear charge and correspondingly more electrons. 
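The counting rules above (ℓ = 0, ..., n − 1; mℓ = −ℓ, ..., ℓ; ms = ±1/2) can be checked in a few lines; this sketch (illustrative only, with an invented helper name) reproduces the 2n² shell capacities:

```python
from fractions import Fraction

def electron_states(n):
    """All allowed (n, l, ml, ms) tuples for a given principal quantum number n."""
    half = Fraction(1, 2)
    return [(n, l, ml, ms)
            for l in range(n)              # l = 0, 1, ..., n - 1
            for ml in range(-l, l + 1)     # ml = -l, ..., +l
            for ms in (half, -half)]       # ms = +1/2 or -1/2

for n in (1, 2, 3, 4):
    print(f"n = {n}: {len(electron_states(n))} states")   # 2, 8, 18, 32 = 2n^2
```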
The occupation of the electron states in such an atom can be predicted by the Aufbau principle and Hund's empirical rules for the quantum numbers. The Aufbau principle fills orbitals based on their principal and azimuthal quantum numbers (lowest n + ℓ first, with lowest n breaking ties; Hund's rule favors unpaired electrons in the outermost orbital). These rules are empirical but they can be related to electron physics. Spin-orbit coupled systems When one takes the spin–orbit interaction into consideration, the L and S operators no longer commute with the Hamiltonian, and the eigenstates of the system no longer have well-defined orbital angular momentum and spin. Thus another set of quantum numbers should be used. This set includes The total angular momentum quantum number: j = |ℓ ± s|, which gives the total angular momentum through the relation J = ħ√(j(j + 1)). The projection of the total angular momentum along a specified axis: mj, analogous to the above, which satisfies both mj = mℓ + ms and |mj| ≤ j. Parity: This is the eigenvalue under reflection: positive (+1) for states which came from even ℓ and negative (−1) for states which came from odd ℓ. The former is also known as even parity and the latter as odd parity, and is given by P = (−1)^ℓ. For example, consider the following 8 states, defined by their quantum numbers:
{| style="border: none; border-spacing: 1em 0" class="wikitable"
!
! n
! ℓ
! mℓ
! ms
| rowspan=9 style="border:0px;" |
! ℓ + s
! ℓ − s
! mℓ + ms
|-align=right
! (1)
| 2 || 1 || 1 || +1/2 || 3/2 || 1/2 || 3/2
|-align=right
! (2)
| 2 || 1 || 1 || −1/2 || 3/2 || 1/2 || 1/2
|-align=right
! (3)
| 2 || 1 || 0 || +1/2 || 3/2 || 1/2 || 1/2
|-align=right
! (4)
| 2 || 1 || 0 || −1/2 || 3/2 || 1/2 || −1/2
|-align=right
! (5)
| 2 || 1 || −1 || +1/2 || 3/2 || 1/2 || −1/2
|-align=right
! (6)
| 2 || 1 || −1 || −1/2 || 3/2 || 1/2 || −3/2
|-align=right
! (7)
| 2 || 0 || 0 || +1/2 || 1/2 || −1/2 || 1/2
|-align=right
! (8)
| 2 || 0 || 0 || −1/2 || 1/2 || −1/2 || −1/2
|}
The quantum states in the system can be described as linear combinations of these 8 states. However, in the presence of spin–orbit interaction, if one wants to describe the same system by 8 states that are eigenvectors of the Hamiltonian (i.e. each represents a state that does not mix with others over time), we should consider the following 8 states:
{| class="wikitable"
! j !! mj !! parity !!
|-
| 3/2 || 3/2 || odd || coming from state (1) above
|-
| 3/2 || 1/2 || odd || coming from states (2) and (3) above
|-
| 3/2 || −1/2 || odd || coming from states (4) and (5) above
|-
| 3/2 || −3/2 || odd || coming from state (6) above
|-
| 1/2 || 1/2 || odd || coming from states (2) and (3) above
|-
| 1/2 || −1/2 || odd || coming from states (4) and (5) above
|-
| 1/2 || 1/2 || even || coming from state (7) above
|-
| 1/2 || −1/2 || even || coming from state (8) above
|}
Atomic nuclei In nuclei, the entire assembly of protons and neutrons (nucleons) has a resultant angular momentum due to the angular momenta of each nucleon, usually denoted I. If the total angular momentum of a neutron is jn = ℓ + s and for a proton is jp = ℓ + s (where s for protons and neutrons happens to be 1/2 again (see note)), then the nuclear angular momentum quantum numbers I are given by: I = |jn − jp|, |jn − jp| + 1, ..., jn + jp. Note: The orbital angular momenta of the nuclear (and atomic) states are all integer multiples of ħ while the intrinsic angular momentum of the neutron and proton are half-integer multiples. 
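The bookkeeping in the two tables above, and in the nuclear coupling rule just stated, is the same angular momentum addition; a small sketch (illustrative, helper name invented) counts the multiplets:

```python
from fractions import Fraction

def multiplets(l, s):
    """j values from coupling l and s, with the 2j + 1 states each contributes."""
    two_j_values = range(int(2 * abs(l - s)), int(2 * (l + s)) + 1, 2)
    return {Fraction(k, 2): k + 1 for k in two_j_values}

half = Fraction(1, 2)
print(multiplets(1, half))   # {1/2: 2, 3/2: 4}: the six 2p states split as above
print(multiplets(0, half))   # {1/2: 2}: the two 2s states
```

The counts 4 + 2 + 2 = 8 match the eight spin-orbit coupled states tabulated above.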
It should be immediately apparent that the combination of the intrinsic spins of the nucleons with their orbital motion will always give half-integer values for the total spin, I, of any odd-A nucleus and integer values for any even-A nucleus. Parity together with the number I is used to label nuclear angular momentum states; examples for some isotopes of hydrogen (H), carbon (C), and sodium (Na) are:
{|
| style="text-align:right;" | ¹H || I = (1/2)+ ||   || style="text-align:right;" | ⁹C || I = (3/2)− ||   || style="text-align:right;" | ²⁰Na || I = 2+
|-
| style="text-align:right;" | ²H || I = 1+ ||   || style="text-align:right;" | ¹⁰C || I = 0+ ||   || style="text-align:right;" | ²¹Na || I = (3/2)+
|-
| style="text-align:right;" | ³H || I = (1/2)+ ||   || style="text-align:right;" | ¹¹C || I = (3/2)− ||   || style="text-align:right;" | ²²Na || I = 3+
|-
| || ||   || style="text-align:right;" | ¹²C || I = 0+ ||   || style="text-align:right;" | ²³Na || I = (3/2)+
|-
| || ||   || style="text-align:right;" | ¹³C || I = (1/2)− ||   || style="text-align:right;" | ²⁴Na || I = 4+
|-
| || ||   || style="text-align:right;" | ¹⁴C || I = 0+ ||   || style="text-align:right;" | ²⁵Na || I = (5/2)+
|-
| || ||   || style="text-align:right;" | ¹⁵C || I = (1/2)+ ||   || style="text-align:right;" | ²⁶Na || I = 3+
|}
The reason for the unusual fluctuations in I, even by differences of just one nucleon, is the odd and even numbers of protons and neutrons – pairs of nucleons have a total angular momentum of zero (just like electrons in orbitals), leaving an odd or even number of unpaired nucleons. The property of nuclear spin is an important factor for the operation of NMR spectroscopy in organic chemistry, and MRI in nuclear medicine, due to the nuclear magnetic moment interacting with an external magnetic field. Elementary particles Elementary particles contain many quantum numbers which are usually said to be intrinsic to them. However, it should be understood that the elementary particles are quantum states of the standard model of particle physics, and hence the quantum numbers of these particles bear the same relation to the Hamiltonian of this model as the quantum numbers of the Bohr atom do to its Hamiltonian. In other words, each quantum number denotes a symmetry of the problem. It is more useful in quantum field theory to distinguish between spacetime and internal symmetries. Typical quantum numbers related to spacetime symmetries are spin (related to rotational symmetry), the parity, C-parity and T-parity (related to the Poincaré symmetry of spacetime). Typical internal symmetries are lepton number and baryon number or the electric charge. (For a full list of quantum numbers of this kind see the article on flavour.) Multiplicative quantum numbers Most conserved quantum numbers are additive, so in an elementary particle reaction, the sum of the quantum numbers should be the same before and after the reaction. However, some, usually called a parity, are multiplicative; i.e., their product is conserved. All multiplicative quantum numbers belong to a symmetry (like parity) in which applying the symmetry transformation twice is equivalent to doing nothing (involution). See also Electron configuration References Further reading Physical quantities Quantum numbers Dimensionless numbers of physics
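As a sketch of the additive versus multiplicative distinction (toy bookkeeping only; the reactions and charge assignments are standard textbook examples, not taken from the article):

```python
import math

def additive_conserved(before, after):
    """Additive quantum numbers: the sums must match across the reaction."""
    return sum(before) == sum(after)

def multiplicative_conserved(before, after):
    """Multiplicative quantum numbers (parities): the products must match."""
    return math.prod(before) == math.prod(after)

# Electric charge (additive) in neutron beta decay, n -> p + e- + anti-nu:
print(additive_conserved([0], [+1, -1, 0]))       # True
# C-parity (multiplicative) in pi0 -> 2 gamma, with C(pi0) = +1, C(gamma) = -1:
print(multiplicative_conserved([+1], [-1, -1]))   # True: +1 == (-1) * (-1)
```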
Quantum number
[ "Physics", "Chemistry", "Mathematics" ]
3,931
[ "Physical phenomena", "Quantum chemistry", "Physical quantities", "Quantity", "Quantum mechanics", "Quantum numbers", "Quantum measurement", "Physical properties" ]
532,481
https://en.wikipedia.org/wiki/Principal%20quantum%20number
In quantum mechanics, the principal quantum number (symbolized n) is one of four quantum numbers assigned to each electron in an atom to describe that electron's state. Its values are natural numbers (from one), making it a discrete variable. Apart from the principal quantum number, the other quantum numbers for bound electrons are the azimuthal quantum number ℓ, the magnetic quantum number ml, and the spin quantum number s. Overview and history As n increases, the electron is also at a higher energy and is, therefore, less tightly bound to the nucleus. For higher n the electron is farther from the nucleus, on average. For each value of n there are n accepted ℓ (azimuthal) values ranging from 0 to n − 1 inclusively, hence higher-n electron states are more numerous. Accounting for two states of spin, each n-shell can accommodate up to 2n² electrons. In a simplistic one-electron model described below, the total energy of an electron is a negative inverse quadratic function of the principal quantum number n, leading to degenerate energy levels for each n > 1. In more complex systems—those having forces other than the nucleus–electron Coulomb force—these levels split. For multielectron atoms this splitting results in "subshells" parametrized by ℓ. Description of energy levels based on n alone gradually becomes inadequate for atomic numbers starting from 5 (boron) and fails completely for potassium (Z = 19) and afterwards. The principal quantum number was first created for use in the semiclassical Bohr model of the atom, distinguishing between different energy levels. With the development of modern quantum mechanics, the simple Bohr model was replaced with a more complex theory of atomic orbitals. However, the modern theory still requires the principal quantum number. Derivation There is a set of quantum numbers associated with the energy states of the atom. The four quantum numbers n, ℓ, m, and s specify the complete and unique quantum state of a single electron in an atom, called its wave function or orbital. Two electrons belonging to the same atom cannot have the same values for all four quantum numbers, due to the Pauli exclusion principle. The Schrödinger wave equation reduces to three equations that, when solved, lead to the first three quantum numbers. Therefore, the equations for the first three quantum numbers are all interrelated. The principal quantum number arose in the solution of the radial part of the wave equation as shown below. The Schrödinger wave equation describes energy eigenstates with corresponding real numbers En and a definite total energy, the value of En. The bound state energies of the electron in the hydrogen atom are given by: En = E₁/n² = −13.6 eV/n², n = 1, 2, 3, .... The parameter n can take only positive integer values. The concept of energy levels and notation were taken from the earlier Bohr model of the atom. Schrödinger's equation developed the idea from a flat two-dimensional Bohr atom to the three-dimensional wavefunction model. In the Bohr model, the allowed orbits were derived from quantized (discrete) values of orbital angular momentum, L, according to the equation L = nħ = nh/2π, where n = 1, 2, 3, ... is called the principal quantum number, and h is the Planck constant. This formula is not correct in quantum mechanics as the angular momentum magnitude is described by the azimuthal quantum number, but the energy levels are accurate and classically they correspond to the sum of potential and kinetic energy of the electron. 
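A quick numerical illustration of the inverse-square energy formula (assuming the standard Rydberg value of 13.6057 eV; the helper name is invented):

```python
def hydrogen_energy(n):
    """Bound-state energy in eV, E_n = -13.6057 eV / n**2 (non-relativistic)."""
    return -13.6057 / n ** 2

# Balmer series: photon energies and wavelengths for transitions down to n = 2.
for n in range(3, 7):
    photon_ev = hydrogen_energy(n) - hydrogen_energy(2)   # energy released
    wavelength_nm = 1239.842 / photon_ev                  # lambda = hc / E
    print(f"{n} -> 2: {photon_ev:.3f} eV, {wavelength_nm:.1f} nm")
```

The first line printed, about 1.89 eV and 656 nm, is the familiar red H-alpha line, matching the Rydberg-formula picture of integer-step energy differences.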
The principal quantum number n represents the relative overall energy of each orbital. The energy level of each orbital increases as its distance from the nucleus increases. The sets of orbitals with the same n value are often referred to as an electron shell. The minimum energy exchanged during any wave–matter interaction is the product of the wave frequency and the Planck constant. This causes the wave to display particle-like packets of energy called quanta. The difference between energy levels that have different n determines the emission spectrum of the element. In the notation of the periodic table, the main shells of electrons are labeled K (n = 1), L (n = 2), M (n = 3), etc. based on the principal quantum number. The principal quantum number is related to the radial quantum number, nr, by: n = nr + ℓ + 1, where ℓ is the azimuthal quantum number and nr is equal to the number of nodes in the radial wavefunction. The definite total energy for particle motion in a common Coulomb field and with a discrete spectrum is given by: En = −ħ²/(2 m aB² n²), where aB is the Bohr radius. This discrete energy spectrum, which resulted from the solution of the quantum mechanical problem of electron motion in the Coulomb field, coincides with the spectrum that was obtained by application of the Bohr–Sommerfeld quantization rules to the classical equations. The radial quantum number determines the number of nodes of the radial wave function R(r). Values In chemistry, values n = 1, 2, 3, 4, 5, 6, 7 are used in relation to the electron shell theory, with expected inclusion of n = 8 (and possibly 9) for yet-undiscovered period 8 elements. In atomic physics, higher n sometimes occur for description of excited states. Observations of the interstellar medium reveal atomic hydrogen spectral lines involving n on the order of hundreds; values up to 766 were detected. See also Introduction to quantum mechanics References External links Periodic Table Applet: showing principal and azimuthal quantum number for each element Quantum chemistry Atomic physics Quantum numbers
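The relation n = nr + ℓ + 1 can be turned into a one-line node counter; a sketch (illustrative only, helper name invented):

```python
def radial_nodes(n, l):
    """Radial nodes n_r of the radial wavefunction R(r): n = n_r + l + 1."""
    return n - l - 1

for name, (n, l) in {"1s": (1, 0), "2s": (2, 0), "2p": (2, 1),
                     "3s": (3, 0), "3d": (3, 2)}.items():
    print(name, radial_nodes(n, l))   # 0, 1, 0, 2, 0
```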
Principal quantum number
[ "Physics", "Chemistry" ]
1,112
[ "Quantum chemistry", "Quantum mechanics", "Quantum numbers", "Theoretical chemistry", "Atomic physics", " molecular", "Atomic", " and optical physics" ]
532,573
https://en.wikipedia.org/wiki/Azimuthal%20quantum%20number
In quantum mechanics, the azimuthal quantum number ℓ is a quantum number for an atomic orbital that determines its orbital angular momentum and describes aspects of the angular shape of the orbital. The azimuthal quantum number is the second of a set of quantum numbers that describe the unique quantum state of an electron (the others being the principal quantum number n, the magnetic quantum number mℓ, and the spin quantum number ms). For a given value of the principal quantum number n (electron shell), the possible values of ℓ are the integers from 0 to n − 1. For instance, the n = 1 shell has only orbitals with ℓ = 0, and the n = 2 shell has only orbitals with ℓ = 0 and ℓ = 1. For a given value of the azimuthal quantum number ℓ, the possible values of the magnetic quantum number mℓ are the integers from −ℓ to +ℓ, including 0. In addition, the spin quantum number ms can take two distinct values. The set of orbitals associated with a particular value of ℓ are sometimes collectively called a subshell. While originally used just for isolated atoms, atomic-like orbitals play a key role in the configuration of electrons in compounds including gases, liquids and solids. The quantum number ℓ plays an important role here via the connection to the angular dependence of the spherical harmonics for the different orbitals around each atom. Nomenclature The term "azimuthal quantum number" was introduced by Arnold Sommerfeld in 1915 as part of an ad hoc description of the energy structure of atomic spectra. Only later with the quantum model of the atom was it understood that this number, ℓ, arises from quantization of orbital angular momentum. Some textbooks and the ISO standard 80000-10:2019 call ℓ the orbital angular momentum quantum number. The energy levels of an atom in an external magnetic field depend upon the mℓ value, so it is sometimes called the magnetic quantum number. The lowercase letter ℓ is used to denote the orbital angular momentum of a single particle. For a system with multiple particles, the capital letter L is used. Relation to atomic orbitals There are four quantum numbers (n, ℓ, mℓ, ms) connected with the energy states of an isolated atom's electrons. These four numbers specify the unique and complete quantum state of any single electron in the atom, and they combine to compose the electron's wavefunction, or orbital. When solving to obtain the wave function, the Schrödinger equation resolves into three equations that lead to the first three quantum numbers, meaning that the three equations are interrelated. The azimuthal quantum number arises in solving the polar part of the wave equation, relying on the spherical coordinate system, which generally works best with models having sufficient aspects of spherical symmetry. An electron's angular momentum, L, is related to its quantum number ℓ by the following equation: L²Ψ = ħ²ℓ(ℓ + 1)Ψ, where ħ is the reduced Planck constant, L² is the orbital angular momentum operator and Ψ is the wavefunction of the electron. The quantum number ℓ is always a non-negative integer: 0, 1, 2, 3, etc. (Notably, L has no real meaning except in its use as the angular momentum operator; thus, it is standard practice to use the quantum number ℓ when referring to angular momentum). Atomic orbitals have distinctive shapes, in which letters, s, p, d, f, etc., (employing a convention originating in spectroscopy) denote the shape of the atomic orbital. The wavefunctions of these orbitals take the form of spherical harmonics, and so are described by Legendre polynomials. 
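The spectroscopic letters can be encoded directly; a minimal sketch (the helper name is invented, and the letter sequence s, p, d, f, g, h, i, k is the standard continuation, skipping "j"):

```python
SUBSHELL_LETTERS = "spdfghik"   # spectroscopic letters for l = 0, 1, 2, ...

def subshell_name(n, l):
    if not 0 <= l < n:
        raise ValueError("require 0 <= l <= n - 1")
    return f"{n}{SUBSHELL_LETTERS[l]}"

print([subshell_name(2, l) for l in range(2)])   # ['2s', '2p']
print(subshell_name(4, 3))                       # '4f'
```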
The several orbitals relating to the different (integer) values of ℓ are sometimes called sub-shells (referred to by lowercase Latin letters chosen for historical reasons), as shown in the table "Quantum subshells for the azimuthal quantum number". Each of the different angular momentum states can take 2(2ℓ + 1) electrons. This is because the third quantum number mℓ (which can be thought of loosely as the quantized projection of the angular momentum vector on the z-axis) runs from −ℓ to ℓ in integer units, and so there are 2ℓ + 1 possible states. Each distinct n, ℓ, mℓ orbital can be occupied by two electrons with opposing spins (given by the quantum number ms = ±1/2), giving 2(2ℓ + 1) electrons overall. Orbitals with higher ℓ than given in the table are perfectly permissible, but these values cover all atoms so far discovered. For a given value of the principal quantum number n, the possible values of ℓ range from 0 to n − 1; therefore, the n = 1 shell only possesses an s subshell and can only take 2 electrons, the n = 2 shell possesses an s and a p subshell and can take 8 electrons overall, the n = 3 shell possesses s, p, and d subshells and has a maximum of 18 electrons, and so on. A simplistic one-electron model results in energy levels depending on the principal number alone. In more complex atoms these energy levels split for all n > 1, placing states of higher ℓ above states of lower ℓ. For example, the energy of 2p is higher than of 2s, 3d occurs higher than 3p, which in turn is above 3s, etc. This effect eventually forms the block structure of the periodic table. No known atom possesses an electron having ℓ higher than three (f) in its ground state. The angular momentum quantum number, ℓ, and the corresponding spherical harmonic govern the number of planar nodes going through the nucleus. A planar node can be described in an electromagnetic wave as the midpoint between crest and trough, which has zero magnitude. In an s orbital, no nodes go through the nucleus, therefore the corresponding azimuthal quantum number ℓ takes the value of 0. In a p orbital, one node traverses the nucleus and therefore ℓ has the value of 1; L has the value √2ħ. Depending on the value of n, there is an angular momentum quantum number ℓ and the following series. The wavelengths listed are for a hydrogen atom: Addition of quantized angular momenta Given a quantized total angular momentum which is the sum of two individual quantized angular momenta ℓ₁ and ℓ₂, the quantum number ℓ associated with its magnitude can range from |ℓ₁ − ℓ₂| to ℓ₁ + ℓ₂ in integer steps, where ℓ₁ and ℓ₂ are quantum numbers corresponding to the magnitudes of the individual angular momenta. Total angular momentum of an electron in the atom Due to the spin–orbit interaction in an atom, the orbital angular momentum no longer commutes with the Hamiltonian, nor does the spin. These therefore change over time. However the total angular momentum J does commute with the one-electron Hamiltonian and so is constant. J is defined as J = L + S, where L is the orbital angular momentum and S the spin. The total angular momentum satisfies the same commutation relations as orbital angular momentum, namely [Ji, Jj] = iħεijk Jk, from which it follows that [Ji, J²] = 0, where Ji stand for Jx, Jy, and Jz. The quantum numbers describing the system, which are constant over time, are now j and mj, defined through the action of J² and Jz on the wavefunction: J²ψ = ħ²j(j + 1)ψ and Jzψ = ħmjψ. So that j is related to the norm of the total angular momentum and mj to its projection along a specified axis. 
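A sketch of the subshell and shell capacities implied by the 2(2ℓ + 1) count (illustrative only, helper name invented):

```python
def subshell_capacity(l):
    """Each subshell holds 2 * (2l + 1) electrons (2l + 1 ml values, 2 spins)."""
    return 2 * (2 * l + 1)

print([subshell_capacity(l) for l in range(4)])     # [2, 6, 10, 14]: s, p, d, f
print(sum(subshell_capacity(l) for l in range(3)))  # 18: capacity of n = 3 shell
```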
The j number has a particular importance for relativistic quantum chemistry, often featuring as a subscript for deeper states near the core for which spin-orbit coupling is important. As with any angular momentum in quantum mechanics, the projection of J along other axes cannot be co-defined with Jz, because they do not commute. The eigenvectors of j, mj, and parity, which are also eigenvectors of the Hamiltonian, are linear combinations of the eigenvectors of ℓ, mℓ, and ms. Beyond isolated atoms The angular momentum quantum numbers strictly refer to isolated atoms. However, they have wider uses for atoms in solids, liquids or gases. The quantum number ℓ corresponds to specific spherical harmonics and is commonly used to describe features observed in spectroscopic methods such as X-ray photoelectron spectroscopy and electron energy loss spectroscopy. (The notation is slightly different, with X-ray notation where K, L, M are used for excitations out of electron states with n = 1, 2, 3.) The angular momentum quantum numbers are also used when the electron states are described in methods such as Kohn–Sham density functional theory or with gaussian orbitals. For instance, in silicon the electronic properties used in semiconductor devices are due to the p-like states with ℓ = 1 centered at each atom, while many properties of transition metals depend upon the d-like states with ℓ = 2. History The azimuthal quantum number was carried over from the Bohr model of the atom, and was posited by Arnold Sommerfeld. The Bohr model was derived from spectroscopic analysis of atoms in combination with the Rutherford atomic model. The lowest quantum level was found to have an angular momentum of zero. Orbits with zero angular momentum were considered as oscillating charges in one dimension and so described as "pendulum" orbits, but were not found in nature. In three dimensions the orbits become spherical without any nodes crossing the nucleus, similar (in the lowest-energy state) to a skipping rope that oscillates in one large circle. See also Introduction to quantum mechanics Particle in a spherically symmetric potential Angular momentum coupling Angular momentum operator Clebsch–Gordan coefficients References External links Development of the Bohr atom The azimuthal equation explained Angular momentum Atomic physics Quantum numbers Rotational symmetry
Azimuthal quantum number
[ "Physics", "Chemistry", "Mathematics" ]
1,892
[ "Quantum chemistry", "Physical quantities", "Quantity", "Quantum mechanics", "Quantum numbers", "Rotational symmetry", "Atomic physics", "Momentum", " molecular", "Atomic", " and optical physics", "Angular momentum", "Symmetry", "Moment (physics)" ]
151,001
https://en.wikipedia.org/wiki/C-symmetry
In physics, charge conjugation is a transformation that switches all particles with their corresponding antiparticles, thus changing the sign of all charges: not only electric charge but also the charges relevant to other forces. The term C-symmetry is an abbreviation of the phrase "charge conjugation symmetry", and is used in discussions of the symmetry of physical laws under charge-conjugation. Other important discrete symmetries are P-symmetry (parity) and T-symmetry (time reversal). These discrete symmetries, C, P and T, are symmetries of the equations that describe the known fundamental forces of nature: electromagnetism, gravity, the strong and the weak interactions. Verifying whether some given mathematical equation correctly models nature requires giving physical interpretation not only to continuous symmetries, such as motion in time, but also to its discrete symmetries, and then determining whether nature adheres to these symmetries. Unlike the continuous symmetries, the interpretation of the discrete symmetries is a bit more intellectually demanding and confusing. An early surprise appeared in the 1950s, when Chien Shiung Wu demonstrated that the weak interaction violated P-symmetry. For several decades, it appeared that the combined symmetry CP was preserved, until CP-violating interactions were discovered. Both discoveries led to Nobel Prizes. The C-symmetry is particularly troublesome, physically, as the universe is primarily filled with matter, not anti-matter, whereas the naive C-symmetry of the physical laws suggests that there should be equal amounts of both. It is currently believed that CP-violation during the early universe can account for the "excess" matter, although the debate is not settled. Earlier textbooks on cosmology, predating the 1970s, routinely suggested that perhaps distant galaxies were made entirely of anti-matter, thus maintaining a net balance of zero in the universe. This article focuses on exposing and articulating the C-symmetry of various important equations and theoretical systems, including the Dirac equation and the structure of quantum field theory. The various fundamental particles can be classified according to behavior under charge conjugation; this is described in the article on C-parity. Informal overview Charge conjugation occurs as a symmetry in three different but closely related settings: a symmetry of the (classical, non-quantized) solutions of several notable differential equations, including the Klein–Gordon equation and the Dirac equation, a symmetry of the corresponding quantum fields, and in a general setting, a symmetry in (pseudo-)Riemannian geometry. In all three cases, the symmetry is ultimately revealed to be a symmetry under complex conjugation, although exactly what is being conjugated where can be at times obfuscated, depending on notation, coordinate choices and other factors. In classical fields The charge conjugation symmetry is interpreted as that of electrical charge, because in all three cases (classical, quantum and geometry), one can construct Noether currents that resemble those of classical electrodynamics. This arises because electrodynamics itself, via Maxwell's equations, can be interpreted as a structure on a U(1) fiber bundle, the so-called circle bundle. This provides a geometric interpretation of electromagnetism: the electromagnetic potential is interpreted as the gauge connection (the Ehresmann connection) on the circle bundle. 
This geometric interpretation then allows (literally almost) anything possessing a complex-number-valued structure to be coupled to the electromagnetic field, provided that this coupling is done in a gauge-invariant way. Gauge symmetry, in this geometric setting, is a statement that, as one moves around on the circle, the coupled object must also transform in a "circular way", tracking in a corresponding fashion. More formally, one says that the equations must be gauge invariant under a change of local coordinate frames on the circle. For U(1), this is just the statement that the system is invariant under multiplication by a phase factor that depends on the (space-time) coordinate. In this geometric setting, charge conjugation can be understood as the discrete symmetry that performs complex conjugation, that reverses the sense of direction around the circle. In quantum theory In quantum field theory, charge conjugation can be understood as the exchange of particles with anti-particles. To understand this statement, one must have a minimal understanding of what quantum field theory is. In (vastly) simplified terms, it is a technique for performing calculations to obtain solutions for a system of coupled differential equations via perturbation theory. A key ingredient to this process is the quantum field, one for each of the (free, uncoupled) differential equations in the system. A quantum field is conventionally written as a sum over modes, where k is the momentum, σ is a spin label, and n is an auxiliary label for other states in the system. The a and a† are creation and annihilation operators (ladder operators), and the mode functions they multiply are solutions to the (free, non-interacting, uncoupled) differential equation in question. The quantum field plays a central role because, in general, it is not known how to obtain exact solutions to the system of coupled differential equations. However, via perturbation theory, approximate solutions can be constructed as combinations of the free-field solutions. To perform this construction, one has to be able to extract and work with any one given free-field solution, on-demand, when required. The quantum field provides exactly this: it enumerates all possible free-field solutions in a vector space such that any one of them can be singled out at any given time, via the creation and annihilation operators. The creation and annihilation operators obey the canonical commutation relations, in that the one operator "undoes" what the other "creates". This implies that any given solution must be paired with its "anti-solution" so that one undoes or cancels out the other. The pairing is to be performed so that all symmetries are preserved. As one is generally interested in Lorentz invariance, the quantum field contains an integral over all possible Lorentz coordinate frames, written above as an integral over all possible momenta (it is an integral over the fiber of the frame bundle). The pairing requires that a given solution is associated with another of the opposite momentum and energy. The quantum field is also a sum over all possible spin states; the dual pairing again matching opposite spins. Likewise for any other quantum numbers, these are also paired as opposites. 
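The statement that charge conjugation exchanges particles with anti-particles can be made concrete in a truncated Fock space; the following sketch (an illustrative toy, not from the article) builds a charge operator Q = Na − Nb for one particle mode and one antiparticle mode, and checks that swapping the two factors sends Q to −Q:

```python
import numpy as np

dim = 3                                         # truncated Fock space per mode
a = np.diag(np.sqrt(np.arange(1, dim)), k=1)    # annihilation operator
N = a.T @ a                                     # number operator diag(0, 1, 2)

I = np.eye(dim)
Na = np.kron(N, I)    # particle number
Nb = np.kron(I, N)    # antiparticle number
Q = Na - Nb           # net charge

# Charge conjugation as the swap |i> (x) |j>  ->  |j> (x) |i>:
S = np.zeros((dim * dim, dim * dim))
for i in range(dim):
    for j in range(dim):
        S[j * dim + i, i * dim + j] = 1.0

print(np.allclose(S @ Q @ S.T, -Q))   # True: C Q C^{-1} = -Q
```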
There is a technical difficulty in carrying out this dual pairing: one must describe what it means for some given solution to be "dual to" some other solution and to describe it in such a way that it remains consistently dual when integrating over the fiber of the frame bundle, when integrating (summing) over the fiber that describes the spin, and when integrating (summing) over any other fibers that occur in the theory. When the fiber to be integrated over is the U(1) fiber of electromagnetism, the dual pairing is such that the direction (orientation) on the fiber is reversed. When the fiber to be integrated over is the SU(3) fiber of the color charge, the dual pairing again reverses orientation. This "just works" for SU(3) because it has two dual fundamental representations and which can be naturally paired. This prescription for a quantum field naturally generalizes to any situation where one can enumerate the continuous symmetries of the system, and define duals in a coherent, consistent fashion. The pairing ties together opposite charges in the fully abstract sense. In physics, a charge is associated with a generator of a continuous symmetry. Different charges are associated with different eigenspaces of the Casimir invariants of the universal enveloping algebra for those symmetries. This is the case for both the Lorentz symmetry of the underlying spacetime manifold, as well as the symmetries of any fibers in the fiber bundle posed above the spacetime manifold. Duality replaces the generator of the symmetry with minus the generator. Charge conjugation is thus associated with reflection along the line bundle or determinant bundle of the space of symmetries. The above then is a sketch of the general idea of a quantum field in quantum field theory. The physical interpretation is that solutions correspond to particles, and solutions correspond to antiparticles, and so charge conjugation is a pairing of the two. This sketch also provides enough hints to indicate what charge conjugation might look like in a general geometric setting. There is no particular forced requirement to use perturbation theory, to construct quantum fields that will act as middle-men in a perturbative expansion. Charge conjugation can be given a general setting. In geometry For general Riemannian and pseudo-Riemannian manifolds, one has a tangent bundle, a cotangent bundle and a metric that ties the two together. There are several interesting things one can do, when presented with this situation. One is that the smooth structure allows differential equations to be posed on the manifold; the tangent and cotangent spaces provide enough structure to perform calculus on manifolds. Of key interest is the Laplacian, and, with a constant term, what amounts to the Klein–Gordon operator. Cotangent bundles, by their basic construction, are always symplectic manifolds. Symplectic manifolds have canonical coordinates interpreted as position and momentum, obeying canonical commutation relations. This provides the core infrastructure to extend duality, and thus charge conjugation, to this general setting. A second interesting thing one can do is to construct a spin structure. Perhaps the most remarkable thing about this is that it is a very recognizable generalization to a -dimensional pseudo-Riemannian manifold of the conventional physics concept of spinors living on a (1,3)-dimensional Minkowski spacetime. The construction passes through a complexified Clifford algebra to build a Clifford bundle and a spin manifold. 
At the end of this construction, one obtains a system that is remarkably familiar, if one is already acquainted with Dirac spinors and the Dirac equation. Several analogies pass through to this general case. First, the spinors are the Weyl spinors, and they come in complex-conjugate pairs. They are naturally anti-commuting (this follows from the Clifford algebra), which is exactly what one wants to make contact with the Pauli exclusion principle. Another is the existence of a chiral element, analogous to the gamma matrix which sorts these spinors into left and right-handed subspaces. The complexification is a key ingredient, and it provides "electromagnetism" in this generalized setting. The spinor bundle doesn't "just" transform under the pseudo-orthogonal group , the generalization of the Lorentz group , but under a bigger group, the complexified spin group It is bigger in that it is a double covering of The piece can be identified with electromagnetism in several different ways. One way is that the Dirac operators on the spin manifold, when squared, contain a piece with arising from that part of the connection associated with the piece. This is entirely analogous to what happens when one squares the ordinary Dirac equation in ordinary Minkowski spacetime. A second hint is that this piece is associated with the determinant bundle of the spin structure, effectively tying together the left and right-handed spinors through complex conjugation. What remains is to work through the discrete symmetries of the above construction. There are several that appear to generalize P-symmetry and T-symmetry. Identifying the dimensions with time, and the dimensions with space, one can reverse the tangent vectors in the dimensional subspace to get time reversal, and flipping the direction of the dimensions corresponds to parity. The C-symmetry can be identified with the reflection on the line bundle. To tie all of these together into a knot, one finally has the concept of transposition, in that elements of the Clifford algebra can be written in reversed (transposed) order. The net result is that not only do the conventional physics ideas of fields pass over to the general Riemannian setting, but also the ideas of the discrete symmetries. There are two ways to react to this. One is to treat it as an interesting curiosity. The other is to realize that, in low dimensions (in low-dimensional spacetime) there are many "accidental" isomorphisms between various Lie groups and other assorted structures. Being able to examine them in a general setting disentangles these relationships, exposing more clearly "where things come from". Charge conjugation for Dirac fields The laws of electromagnetism (both classical and quantum) are invariant under the exchange of electrical charges with their negatives. For the case of electrons and quarks, both of which are fundamental particle fermion fields, the single-particle field excitations are described by the Dirac equation One wishes to find a charge-conjugate solution A handful of algebraic manipulations are sufficient to obtain the second from the first. Standard expositions of the Dirac equation demonstrate a conjugate field interpreted as an anti-particle field, satisfying the complex-transposed Dirac equation Note that some but not all of the signs have flipped. 
Transposing this back again gives almost the desired form, provided that one can find a 4×4 matrix C that transposes the gamma matrices to insert the required sign-change: C(γμ)ᵀC⁻¹ = −γμ. The charge conjugate solution is then given by the involution ψ → ψc = ηc C ψ̄ᵀ. The 4×4 matrix C, called the charge conjugation matrix, has an explicit form given in the article on gamma matrices. Curiously, this form is not representation-independent, but depends on the specific matrix representation chosen for the gamma group (the subgroup of the Clifford algebra capturing the algebraic properties of the gamma matrices). This matrix is representation dependent due to a subtle interplay involving the complexification of the spin group describing the Lorentz covariance of charged particles. The complex number ηc is an arbitrary phase factor generally taken to be ηc = 1. Charge conjugation, chirality, helicity The interplay between chirality and charge conjugation is a bit subtle, and requires articulation. It is often said that charge conjugation does not alter the chirality of particles. This is not the case for fields, the difference arising in the "hole theory" interpretation of particles, where an anti-particle is interpreted as the absence of a particle. This is articulated below. Conventionally, γ⁵ is used as the chirality operator. Under charge conjugation, it transforms as Cγ⁵C⁻¹ = (γ⁵)ᵀ, and whether or not (γ⁵)ᵀ equals γ⁵ depends on the chosen representation for the gamma matrices. In the Dirac and chiral basis, one does have that (γ⁵)ᵀ = γ⁵, while (γ⁵)ᵀ = −γ⁵ is obtained in the Majorana basis. A worked example follows. Weyl spinors For the case of massless Dirac spinor fields, chirality is equal to helicity for the positive energy solutions (and minus the helicity for negative energy solutions). One obtains this by writing out the massless Dirac equation; multiplying through, one obtains an expression involving the angular momentum operator and the totally antisymmetric tensor. This can be brought to a slightly more recognizable form by defining the 3D spin operator, taking a plane-wave state, applying the on-shell constraint, and normalizing the momentum to be a 3D unit vector. Examining the above, one concludes that angular momentum eigenstates (helicity eigenstates) correspond to eigenstates of the chiral operator. This allows the massless Dirac field to be cleanly split into a pair of Weyl spinors ψL and ψR, each individually satisfying the Weyl equation, but with opposite energy. Note the freedom one has to equate negative helicity with negative energy, and thus the anti-particle with the particle of opposite helicity. To be clear, the σ here are the Pauli matrices, and p is the momentum operator. Charge conjugation in the chiral basis Taking the Weyl representation of the gamma matrices, one may write a (now taken to be massive) Dirac spinor in terms of its left- and right-handed components; the corresponding dual (anti-particle) field follows. The charge-conjugate spinors again involve a phase factor ηc that can be taken to be ηc = 1. Note that the left and right states are inter-changed. This can be restored with a parity transformation, under which the left and right components are swapped once more; under combined charge and parity, spinors of the original chirality are recovered. Conventionally, one takes ηc = 1 globally. See however, the note below. Majorana condition The Majorana condition imposes a constraint between the field and its charge conjugate, namely that they must be equal: ψ = ψc. This is perhaps best stated as the requirement that the Majorana spinor must be an eigenstate of the charge conjugation involution. Doing so requires some notational care. 
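Before turning to the Majorana condition, the defining property of the charge conjugation matrix quoted above can be verified numerically; this sketch assumes the Dirac representation and the standard choice C = iγ²γ⁰ (one of the explicit forms given in the article on gamma matrices):

```python
import numpy as np

I2, Z = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], complex),      # Pauli matrices
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

g0 = np.block([[I2, Z], [Z, -I2]])                 # gamma^0, Dirac basis
g = [g0] + [np.block([[Z, s], [-s, Z]]) for s in sigma]   # gamma^1..gamma^3

C = 1j * g[2] @ g[0]                               # charge conjugation matrix
for mu in range(4):
    # Defining property: C gamma_mu^T C^{-1} = -gamma_mu
    assert np.allclose(C @ g[mu].T @ np.linalg.inv(C), -g[mu])
print("C gamma^T C^-1 = -gamma holds for all four gamma matrices")
```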
In many texts discussing charge conjugation, the involution is not given an explicit symbolic name, when applied to single-particle solutions of the Dirac equation. This is in contrast to the case when the quantized field is discussed, where a unitary operator is defined (as done in a later section, below). For the present section, let the involution be named as so that Taking this to be a linear operator, one may consider its eigenstates. The Majorana condition singles out one such: There are, however, two such eigenstates: Continuing in the Weyl basis, as above, these eigenstates are and The Majorana spinor is conventionally taken as just the positive eigenstate, namely The chiral operator exchanges these two, in that This is readily verified by direct substitution. Bear in mind that does not have a 4×4 matrix representation! More precisely, there is no complex 4×4 matrix that can take a complex number to its complex conjugate; this inversion would require an 8×8 real matrix. The physical interpretation of complex conjugation as charge conjugation becomes clear when considering the complex conjugation of scalar fields, described in a subsequent section below. The projectors onto the chiral eigenstates can be written as and and so the above translates to This directly demonstrates that charge conjugation, applied to single-particle complex-number-valued solutions of the Dirac equation flips the chirality of the solution. The projectors onto the charge conjugation eigenspaces are and Geometric interpretation The phase factor can be given a geometric interpretation. It has been noted that, for massive Dirac spinors, the "arbitrary" phase factor may depend on both the momentum, and the helicity (but not the chirality). This can be interpreted as saying that this phase may vary along the fiber of the spinor bundle, depending on the local choice of a coordinate frame. Put another way, a spinor field is a local section of the spinor bundle, and Lorentz boosts and rotations correspond to movements along the fibers of the corresponding frame bundle (again, just a choice of local coordinate frame). Examined in this way, this extra phase freedom can be interpreted as the phase arising from the electromagnetic field. For the Majorana spinors, the phase would be constrained to not vary under boosts and rotations. Charge conjugation for quantized fields The above describes charge conjugation for the single-particle solutions only. When the Dirac field is second-quantized, as in quantum field theory, the spinor and electromagnetic fields are described by operators. The charge conjugation involution then manifests as a unitary operator (in calligraphic font) acting on the particle fields, expressed as where the non-calligraphic is the same 4×4 matrix given before. Charge reversal in electroweak theory Charge conjugation does not alter the chirality of particles. A left-handed neutrino would be taken by charge conjugation into a left-handed antineutrino, which does not interact in the Standard Model. This property is what is meant by the "maximal violation" of C-symmetry in the weak interaction. Some postulated extensions of the Standard Model, like left-right models, restore this C-symmetry. Scalar fields The Dirac field has a "hidden" gauge freedom, allowing it to couple directly to the electromagnetic field without any further modifications to the Dirac equation or the field itself. This is not the case for scalar fields, which must be explicitly "complexified" to couple to electromagnetism. 
This is done by "tensoring in" an additional factor of the complex plane into the field, or constructing a Cartesian product with . One very conventional technique is simply to start with two real scalar fields, and and create a linear combination The charge conjugation involution is then the mapping since this is sufficient to reverse the sign on the electromagnetic potential (since this complex number is being used to couple to it). For real scalar fields, charge conjugation is just the identity map: and and so, for the complexified field, charge conjugation is just The "mapsto" arrow is convenient for tracking "what goes where"; the equivalent older notation is simply to write and and The above describes the conventional construction of a charged scalar field. It is also possible to introduce additional algebraic structure into the fields in other ways. In particular, one may define a "real" field behaving as . As it is real, it cannot couple to electromagnetism by itself, but, when complexified, would result in a charged field that transforms as Because C-symmetry is a discrete symmetry, one has some freedom to play these kinds of algebraic games in the search for a theory that correctly models some given physical reality. In physics literature, a transformation such as might be written without any further explanation. The formal mathematical interpretation of this is that the field is an element of where Thus, properly speaking, the field should be written as which behaves under charge conjugation as It is very tempting, but not quite formally correct to just multiply these out, to move around the location of this minus sign; this mostly "just works", but a failure to track it properly will lead to confusion. Combination of charge and parity reversal It was believed for some time that C-symmetry could be combined with the parity-inversion transformation (see P-symmetry) to preserve a combined CP-symmetry. However, violations of this symmetry have been identified in the weak interactions (particularly in the kaons and B mesons). In the Standard Model, this CP violation is due to a single phase in the CKM matrix. If CP is combined with time reversal (T-symmetry), the resulting CPT-symmetry can be shown using only the Wightman axioms to be universally obeyed. In general settings The analog of charge conjugation can be defined for higher-dimensional gamma matrices, with an explicit construction for Weyl spinors given in the article on Weyl–Brauer matrices. Note, however, spinors as defined abstractly in the representation theory of Clifford algebras are not fields; rather, they should be thought of as existing on a zero-dimensional spacetime. The analog of T-symmetry follows from as the T-conjugation operator for Dirac spinors. Spinors also have an inherent P-symmetry, obtained by reversing the direction of all of the basis vectors of the Clifford algebra from which the spinors are constructed. The relationship to the P and T symmetries for a fermion field on a spacetime manifold are a bit subtle, but can be roughly characterized as follows. When a spinor is constructed via a Clifford algebra, the construction requires a vector space on which to build. By convention, this vector space is the tangent space of the spacetime manifold at a given, fixed spacetime point (a single fiber in the tangent manifold). P and T operations applied to the spacetime manifold can then be understood as also flipping the coordinates of the tangent space as well; thus, the two are glued together. 
Returning to the spacetime P and T operations: flipping the parity or the direction of time in one also flips it in the other. This is a convention, and one can become unglued by failing to propagate this connection. The gluing is done by taking the tangent space as a vector space, extending it to a tensor algebra, and then using an inner product on the vector space to define a Clifford algebra. Treating each such algebra as a fiber, one obtains a fiber bundle called the Clifford bundle. Under a change of basis of the tangent space, elements of the Clifford algebra transform according to the spin group. Building a principal fiber bundle with the spin group as the fiber results in a spin structure. All that is missing in the above paragraphs are the spinors themselves. These require the "complexification" of the tangent manifold: tensoring it with the complex plane. Once this is done, the Weyl spinors can be constructed. These have the form of complex combinations w_j = (e_{2j−1} − i e_{2j})/√2, where the e_k are the basis vectors for the vector space T_pM, the tangent space at point p in the spacetime manifold M. The Weyl spinors, together with their complex conjugates, span the complexified tangent space, in the sense that every tangent vector can be written as a combination of the w_j and their conjugates. The alternating algebra over the Weyl spinors is called the spinor space; it is where the spinors live, as well as products of spinors (thus, objects with higher spin values, including vectors and tensors). See also C parity G-parity Anti-particle Antimatter Truly neutral particle Notes References Quantum field theory Symmetry Antimatter
C-symmetry
[ "Physics", "Mathematics" ]
5,272
[ "Quantum field theory", "Antimatter", "Matter", "Quantum mechanics", "Geometry", "Symmetry" ]
151,013
https://en.wikipedia.org/wiki/T-symmetry
T-symmetry or time reversal symmetry is the theoretical symmetry of physical laws under the transformation of time reversal, T : t ↦ −t. Since the second law of thermodynamics states that entropy increases as time flows toward the future, in general, the macroscopic universe does not show symmetry under time reversal. In other words, time is said to be non-symmetric, or asymmetric, except for special equilibrium states when the second law of thermodynamics predicts the time symmetry to hold. However, quantum noninvasive measurements are predicted to violate time symmetry even in equilibrium, contrary to their classical counterparts, although this has not yet been experimentally confirmed. Time asymmetries (see Arrow of time) generally are caused by one of three categories: intrinsic to the dynamic physical law (e.g., for the weak force) due to the initial conditions of the universe (e.g., for the second law of thermodynamics) due to measurements (e.g., for the noninvasive measurements) Macroscopic phenomena The second law of thermodynamics Daily experience shows that T-symmetry does not hold for the behavior of bulk materials. Of these macroscopic laws, most notable is the second law of thermodynamics. Many other phenomena, such as the relative motion of bodies with friction, or viscous motion of fluids, reduce to this, because the underlying mechanism is the dissipation of usable energy (for example, kinetic energy) into heat. The question of whether this time-asymmetric dissipation is really inevitable has been considered by many physicists, often in the context of Maxwell's demon. The name comes from a thought experiment described by James Clerk Maxwell in which a microscopic demon guards a gate between two halves of a room. It only lets slow molecules into one half, only fast ones into the other. By eventually making one side of the room cooler than before and the other hotter, it seems to reduce the entropy of the room, and reverse the arrow of time. Many analyses have been made of this; all show that when the entropy of room and demon are taken together, this total entropy does increase. Modern analyses of this problem have taken into account Claude E. Shannon's relation between entropy and information. Many interesting results in modern computing are closely related to this problem—reversible computing, quantum computing and physical limits to computing, are examples. These seemingly metaphysical questions are today, in these ways, slowly being converted into hypotheses of the physical sciences. The current consensus hinges upon the Boltzmann–Shannon identification of the logarithm of phase space volume with the negative of Shannon information, and hence to entropy. In this notion, a fixed initial state of a macroscopic system corresponds to relatively low entropy because the coordinates of the molecules of the body are constrained. As the system evolves in the presence of dissipation, the molecular coordinates can move into larger volumes of phase space, becoming more uncertain, and thus leading to increase in entropy. Big Bang One resolution to irreversibility is to say that the constant increase of entropy we observe happens only because of the initial state of our universe. Other possible states of the universe (for example, a universe at heat death equilibrium) would actually result in no increase of entropy. In this view, the apparent T-asymmetry of our universe is a problem in cosmology: why did the universe start with a low entropy?
This view, supported by cosmological observations (such as the isotropy of the cosmic microwave background) connects this problem to the question of initial conditions of the universe. Black holes The laws of gravity seem to be time reversal invariant in classical mechanics; however, specific solutions need not be. An object can cross through the event horizon of a black hole from the outside, and then fall rapidly to the central region where our understanding of physics breaks down. Since within a black hole the forward light-cone is directed towards the center and the backward light-cone is directed outward, it is not even possible to define time-reversal in the usual manner. The only way anything can escape from a black hole is as Hawking radiation. The time reversal of a black hole would be a hypothetical object known as a white hole. From the outside they appear similar. While a black hole has a beginning and is inescapable, a white hole has an ending and cannot be entered. The forward light-cones of a white hole are directed outward; and its backward light-cones are directed towards the center. The event horizon of a black hole may be thought of as a surface moving outward at the local speed of light and is just on the edge between escaping and falling back. The event horizon of a white hole is a surface moving inward at the local speed of light and is just on the edge between being swept outward and succeeding in reaching the center. They are two different kinds of horizons—the horizon of a white hole is like the horizon of a black hole turned inside-out. The modern view of black hole irreversibility is to relate it to the second law of thermodynamics, since black holes are viewed as thermodynamic objects. For example, according to the gauge–gravity duality conjecture, all microscopic processes in a black hole are reversible, and only the collective behavior is irreversible, as in any other macroscopic, thermal system. Kinetic consequences: detailed balance and Onsager reciprocal relations In physical and chemical kinetics, T-symmetry of the mechanical microscopic equations implies two important laws: the principle of detailed balance and the Onsager reciprocal relations. T-symmetry of the microscopic description together with its kinetic consequences are called microscopic reversibility. Effect of time reversal on some variables of classical physics Even Classical variables that do not change upon time reversal include: x, position of a particle in three-space; a, acceleration of the particle; F, force on the particle; E, energy of the particle; V, electric potential (voltage); E, electric field; D, electric displacement; ρ, density of electric charge; P, electric polarization; energy density of the electromagnetic field; T_ij, Maxwell stress tensor; all masses, charges, coupling constants, and other physical constants, except those associated with the weak force. Odd Classical variables that time reversal negates include: t, the time when an event occurs; v, velocity of a particle; p, linear momentum of a particle; L, angular momentum of a particle (both orbital and spin); A, electromagnetic vector potential; B, magnetic field; H, magnetic auxiliary field; j, density of electric current; M, magnetization; S, Poynting vector; P, power (rate of work done).
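A minimal numerical illustration of these parity assignments, using nothing beyond textbook mechanics: reversing the velocity alone does not retrace a charged particle's path in a magnetic field, but reversing both v and B (both T-odd, per the list above) does. The integrator and step sizes below are illustrative choices:

```python
import numpy as np

def rk4_step(x, v, B, dt):
    """One Runge-Kutta step for dx/dt = v, dv/dt = v x B (units with q/m = 1)."""
    a = lambda vv: np.cross(vv, B)
    k1x, k1v = v, a(v)
    k2x, k2v = v + 0.5*dt*k1v, a(v + 0.5*dt*k1v)
    k3x, k3v = v + 0.5*dt*k2v, a(v + 0.5*dt*k2v)
    k4x, k4v = v + dt*k3v,     a(v + dt*k3v)
    return (x + dt*(k1x + 2*k2x + 2*k3x + k4x)/6,
            v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6)

B = np.array([0.0, 0.0, 1.0])
x, v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.3])
x0, v0 = x.copy(), v.copy()

dt, n = 1e-3, 5000
for _ in range(n):          # run forward
    x, v = rk4_step(x, v, B, dt)

v, B = -v, -B               # time reversal: flip v AND B
for _ in range(n):          # run "forward" again
    x, v = rk4_step(x, v, B, dt)

# The particle retraces its helix back to the start, with velocity reversed
print(np.allclose(x, x0, atol=1e-6), np.allclose(v, -v0, atol=1e-6))  # True True
```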
Example: Magnetic Field and Onsager reciprocal relations Let us consider the example of a system of charged particles subject to a constant external magnetic field: in this case the canonical time reversal operation that reverses the velocities and the time and keeps the coordinates untouched is no longer a symmetry for the system. Under this consideration, it seems that only Onsager–Casimir reciprocal relations could hold; these equalities relate two different systems, one subject to B and another to −B, and so their utility is limited. However, it was proved that it is possible to find other time reversal operations which preserve the dynamics and hence the Onsager reciprocal relations; in conclusion, one cannot state that the presence of a magnetic field always breaks T-symmetry. Microscopic phenomena: time reversal invariance Most systems are asymmetric under time reversal, but there may be phenomena with symmetry. In classical mechanics, a velocity v reverses under the operation of T, but an acceleration does not. Therefore, one models dissipative phenomena through terms that are odd in v. However, delicate experiments in which known sources of dissipation are removed reveal that the laws of mechanics are time reversal invariant. Dissipation itself originates in the second law of thermodynamics. The motion of a charged body in a magnetic field B involves the velocity through the Lorentz force term v×B, and might seem at first to be asymmetric under T. A closer look assures us that B also changes sign under time reversal. This happens because a magnetic field is produced by an electric current, J, which reverses sign under T. Thus, the motion of classical charged particles in electromagnetic fields is also time reversal invariant. (Despite this, it is still useful to consider the time-reversal non-invariance in a local sense when the external field is held fixed, as when the magneto-optic effect is analyzed. This allows one to analyze the conditions under which optical phenomena that locally break time-reversal, such as Faraday isolators and directional dichroism, can occur.) In physics one separates the laws of motion, called kinematics, from the laws of force, called dynamics. Following the classical kinematics of Newton's laws of motion, the kinematics of quantum mechanics is built in such a way that it presupposes nothing about the time reversal symmetry of the dynamics. In other words, if the dynamics are invariant, then the kinematics will allow it to remain invariant; if the dynamics is not, then the kinematics will also show this. The structure of the quantum laws of motion is richer, and we examine these next. Time reversal in quantum mechanics This section contains a discussion of the three most important properties of time reversal in quantum mechanics; chiefly, that it must be represented as an anti-unitary operator, that it protects non-degenerate quantum states from having an electric dipole moment, that it has two-dimensional representations with the property T² = −1 (for fermions). The strangeness of this result is clear if one compares it with parity. If parity transforms a pair of quantum states into each other, then the sum and difference of these two basis states are states of good parity. Time reversal does not behave like this. It seems to violate the theorem that all abelian groups be represented by one-dimensional irreducible representations. The reason it does this is that it is represented by an anti-unitary operator. It thus opens the way to spinors in quantum mechanics.
On the other hand, the notion of quantum-mechanical time reversal turns out to be a useful tool for the development of physically motivated quantum computing and simulation settings, providing, at the same time, relatively simple tools to assess their complexity. For instance, quantum-mechanical time reversal was used to develop novel boson sampling schemes and to prove the duality between two fundamental optical operations, beam splitter and squeezing transformations. Formal notation In formal mathematical presentations of T-symmetry, three different kinds of notation for T need to be carefully distinguished: the T that is an involution, capturing the actual reversal of the time coordinate, the T that is an ordinary finite dimensional matrix, acting on spinors and vectors, and the T that is an operator on an infinite-dimensional Hilbert space. For a real (not complex) classical (unquantized) scalar field φ, the time reversal involution can simply be written as T : φ(t, x) ↦ φ(−t, x), as time reversal leaves the scalar value at a fixed spacetime point unchanged, up to an overall sign. A slightly more formal way to write this is T : φ ↦ φ′ with φ′(t, x) = φ(−t, x), which has the advantage of emphasizing that T is a map, and thus the "mapsto" notation, whereas φ′(t, x) = φ(−t, x) is a factual statement relating the old and new fields to one another. Unlike scalar fields, spinor and vector fields might have a non-trivial behavior under time reversal. In this case, one has to write ψ(t, x) ↦ T ψ(−t, x), where T is just an ordinary matrix. For complex fields, complex conjugation may be required, for which the mapping K(x + iy) = x − iy can be thought of as a 2×2 matrix. For a Dirac spinor, T cannot be written as a 4×4 matrix, because, in fact, complex conjugation is indeed required; however, it can be written as an 8×8 matrix, acting on the 8 real components of a Dirac spinor. In the general setting, there is no ab initio value to be given for T; its actual form depends on the specific equation or equations which are being examined. In general, one simply states that the equations must be time-reversal invariant, and then solves for the explicit value of T that achieves this goal. In some cases, generic arguments can be made. Thus, for example, for spinors in three-dimensional Euclidean space, or four-dimensional Minkowski space, an explicit transformation can be given. It is conventionally given as T = e^(−iπJ_y/ħ) K, where J_y is the y-component of the angular momentum operator and K is complex conjugation, as before. This form follows whenever the spinor can be described with a linear differential equation that is first-order in the time derivative, which is generally the case in order for something to be validly called "a spinor". The formal notation now makes it clear how to extend time-reversal to an arbitrary tensor field ψ_{abc...}. In this case, covariant tensor indexes will transform with one factor of the matrix T each, as ψ_{abc...}(t, x) ↦ T_a^{a'} T_b^{b'} T_c^{c'} ψ_{a'b'c'...}(−t, x), and so on. For quantum fields, there is also a third T, written as 𝒯, which is actually an infinite dimensional operator acting on a Hilbert space. It acts on quantized fields Ψ as 𝒯 Ψ(t, x) 𝒯⁻¹ = T Ψ(−t, x). This can be thought of as a special case of a tensor with one covariant, and one contravariant index, and thus two 𝒯's are required. All three of these symbols capture the idea of time-reversal; they differ with respect to the specific space that is being acted on: functions, vectors/spinors, or infinite-dimensional operators. The remainder of this article is not cautious to distinguish these three; the T that appears below is meant to be the involution, the matrix, or the operator, depending on context, left for the reader to infer.
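As a concrete check of the conventional matrix representation quoted above, the following numpy sketch builds T = e^(−iπJ_y/ħ) K for a spin-½ particle (in units with ħ = 1, so J_y = σ_y/2) and verifies that T² = −1 and that spin is T-odd; the formula follows the text, while everything else is ordinary linear algebra:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

# U = exp(-i*pi*sy/2) = cos(pi/2)*I - i*sin(pi/2)*sy = -i*sy
U = -1j * sy
T = lambda psi: U @ psi.conj()     # T = U K, with K complex conjugation

psi = np.array([0.3 + 1.0j, -0.7 + 0.2j])
assert np.allclose(T(T(psi)), -psi)            # T^2 = -1 for spin 1/2

# Spin is odd under time reversal: T S T^{-1} becomes U S* U^{-1} = -S
for S in (sx, sy, sz):
    assert np.allclose(U @ S.conj() @ np.linalg.inv(U), -S)
print("T^2 = -1 and T S T^{-1} = -S for each spin component")
```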
Anti-unitary representation of time reversal Eugene Wigner showed that a symmetry operation S of a Hamiltonian is represented, in quantum mechanics, either by a unitary operator, S = U, or an antiunitary one, S = UK, where U is unitary, and K denotes complex conjugation. These are the only operations that act on Hilbert space so as to preserve the length of the projection of any one state-vector onto another state-vector. Consider the parity operator. Acting on the position, it reverses the directions of space, so that P x P⁻¹ = −x. Similarly, it reverses the direction of momentum, so that P p P⁻¹ = −p, where x and p are the position and momentum operators. This preserves the canonical commutator [x, p] = iħ, where ħ is the reduced Planck constant, only if P is chosen to be unitary, P i P⁻¹ = i. On the other hand, the time reversal operator T does nothing to the x-operator, T x T⁻¹ = x, but it reverses the direction of p, so that T p T⁻¹ = −p. The canonical commutator is invariant only if T is chosen to be anti-unitary, i.e., T i T⁻¹ = −i. Another argument involves energy, the time-component of the four-momentum. If time reversal were implemented as a unitary operator, it would reverse the sign of the energy just as space-reversal reverses the sign of the momentum. This is not possible, because, unlike momentum, energy is always positive. Since energy in quantum mechanics is defined as the phase factor exp(−iEt) that one gets when one moves forward in time, the way to reverse time while preserving the sign of the energy is to also reverse the sense of "i", so that the sense of phases is reversed. Similarly, any operation that reverses the sense of phase, which changes the sign of i, will turn positive energies into negative energies unless it also changes the direction of time. So every antiunitary symmetry in a theory with positive energy must reverse the direction of time. Every antiunitary operator can be written as the product of the time reversal operator and a unitary operator that does not reverse time. For a particle with spin J, one can use the representation T = e^(−iπJ_y/ħ) K, where J_y is the y-component of the spin, and use has been made of the fact that J_y is purely imaginary in the standard basis. Electric dipole moments This has an interesting consequence on the electric dipole moment (EDM) of any particle. The EDM is defined through the shift in the energy of a state when it is put in an external electric field: ΔE = d·E + E·δ·E, where d is called the EDM and δ, the induced dipole moment. One important property of an EDM is that the energy shift due to it changes sign under a parity transformation. However, since d is a vector, its expectation value in a state |ψ⟩ must be proportional to ⟨ψ| J |ψ⟩, that is, the expected spin. Thus, under time reversal, an invariant state must have vanishing EDM. In other words, a non-vanishing EDM signals both P and T symmetry-breaking. Some molecules, such as water, must have EDM irrespective of whether T is a symmetry. This is correct; if a quantum system has degenerate ground states that transform into each other under parity, then time reversal need not be broken to give EDM. Experimentally observed bounds on the electric dipole moment of the nucleon currently set stringent limits on the violation of time reversal symmetry in the strong interactions, and their modern theory: quantum chromodynamics. Then, using the CPT invariance of a relativistic quantum field theory, this puts strong bounds on strong CP violation. Experimental bounds on the electron electric dipole moment also place limits on theories of particle physics and their parameters.
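Wigner's dichotomy above can be illustrated directly: an antiunitary map UK preserves the magnitude of every inner product while conjugating its phase. A small numpy sketch (generating the random unitary by QR decomposition is an implementation choice, not part of the theorem):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Random unitary U via QR; T = U K is then antiunitary
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(M)
T = lambda psi: U @ psi.conj()

phi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)

before = np.vdot(phi, psi)         # <phi|psi>
after  = np.vdot(T(phi), T(psi))   # <T phi|T psi>

assert np.isclose(after, np.conj(before))   # the phase is conjugated...
assert np.isclose(abs(after), abs(before))  # ...so transition probabilities survive
print(before, after)
```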
Kramers' theorem For T, which is an anti-unitary Z₂ symmetry generator, T² = UKUK = UU* = U(Uᵀ)⁻¹ = Φ, where Φ is a diagonal matrix of phases. As a result, U = ΦUᵀ and Uᵀ = UΦ, showing that U = ΦUΦ. This means that the entries in Φ are ±1, as a result of which one may have either T² = +1 or T² = −1. This is specific to the anti-unitarity of T. For a unitary operator, such as the parity, any phase is allowed. Next, take a Hamiltonian invariant under T. Let |a⟩ and T|a⟩ be two quantum states of the same energy. Now, if T² = −1, then one finds that the states are orthogonal: a result called Kramers' theorem. This implies that if T² = −1, then there is a twofold degeneracy in the state. This result in non-relativistic quantum mechanics presages the spin statistics theorem of quantum field theory. Quantum states that give unitary representations of time reversal, i.e., have T² = 1, are characterized by a multiplicative quantum number, sometimes called the T-parity. Time reversal of the known dynamical laws Particle physics codified the basic laws of dynamics into the standard model. This is formulated as a quantum field theory that has CPT symmetry, i.e., the laws are invariant under simultaneous operation of time reversal, parity and charge conjugation. However, time reversal itself is seen not to be a symmetry (this is usually called CP violation). There are two possible origins of this asymmetry, one through the mixing of different flavours of quarks in their weak decays, the second through a direct CP violation in strong interactions. The first is seen in experiments, the second is strongly constrained by the non-observation of the EDM of a neutron. Time reversal violation is unrelated to the second law of thermodynamics, because due to the conservation of the CPT symmetry, the effect of time reversal is to rename particles as antiparticles and vice versa. Thus the second law of thermodynamics is thought to originate in the initial conditions in the universe. Time reversal of noninvasive measurements Strong measurements (both classical and quantum) are certainly disturbing, causing asymmetry due to the second law of thermodynamics. However, noninvasive measurements should not disturb the evolution, so they are expected to be time-symmetric. Surprisingly, it is true only in classical physics but not in quantum physics, even in a thermodynamically invariant equilibrium state. This type of asymmetry is independent of CPT symmetry but has not yet been confirmed experimentally due to extreme conditions of the checking proposal. Negative group delay in quantum systems In 2024, experiments by the University of Toronto showed that under certain quantum conditions, photons can exhibit "negative time" behavior. When interacting with atomic clouds, photons appeared to exit the medium before entering it, indicating a negative group delay, especially near atomic resonance. Using the cross-Kerr effect, the team measured atomic excitation by observing phase shifts in a weak probe beam. The results showed that atomic excitation times varied from negative to positive, depending on the pulse width. For narrow pulses, the excitation time was approximately -0.82 times the non-post-selected excitation time (τ₀), while for broader pulses, it was around 0.54 times τ₀. These findings align with theoretical predictions and highlight the non-classical nature of quantum mechanics, opening new possibilities for quantum computing and photonics.
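Returning to Kramers' theorem stated above, its content is easy to see numerically: any Hermitian matrix commuting with an antiunitary T obeying T² = −1 has a doubly degenerate spectrum. A minimal sketch (the 4-dimensional toy Hilbert space and the symmetrization trick are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sy = np.array([[0, -1j], [1j, 0]])
U = np.kron(-1j * sy, np.eye(2))               # T = U K on a 4-dim space
assert np.allclose(U @ U.conj(), -np.eye(4))   # T^2 = -1

# Random Hermitian H, then enforce [H, T] = 0 by averaging with T H T^{-1}
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2
H = (H + U @ H.conj() @ U.conj().T) / 2        # T H T^{-1} = U H* U^dagger

evals = np.linalg.eigvalsh(H)
print(evals)                                   # eigenvalues come in equal pairs
assert np.allclose(evals[0::2], evals[1::2])   # Kramers doublets
```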
See also Arrow of time Causality (physics) Computing applications Limits of computation Quantum computing Reversible computing Standard model CKM matrix CP violation CPT invariance Neutrino mass Strong CP problem Wheeler–Feynman absorber theory Loschmidt's paradox Maxwell's demon Microscopic reversibility Second law of thermodynamics Time translation symmetry References Inline citations General references Maxwell's demon: entropy, information, computing, edited by H.S.Leff and A.F. Rex (IOP publishing, 1990) Maxwell's demon, 2: entropy, classical and quantum information, edited by H.S.Leff and A.F. Rex (IOP publishing, 2003) The emperor's new mind: concerning computers, minds, and the laws of physics, by Roger Penrose (Oxford university press, 2002) Multiferroic materials with time-reversal breaking optical properties CP violation, by I.I. Bigi and A.I. Sanda (Cambridge University Press, 2000) Particle Data Group on CP violation the Babar experiment in SLAC the BELLE experiment in KEK the KTeV experiment in Fermilab the CPLEAR experiment in CERN Time in physics Thermodynamics Statistical mechanics Philosophy of thermal and statistical physics Quantum field theory Symmetry
T-symmetry
[ "Physics", "Chemistry", "Mathematics" ]
4,568
[ "Quantum field theory", "Time in physics", "Physical phenomena", "Philosophy of thermal and statistical physics", "Quantum mechanics", "Thermodynamics", "Geometry", "Statistical mechanics", "Symmetry", "Dynamical systems" ]
151,040
https://en.wikipedia.org/wiki/CPT%20symmetry
Charge, parity, and time reversal symmetry is a fundamental symmetry of physical laws under the simultaneous transformations of charge conjugation (C), parity transformation (P), and time reversal (T). CPT is the only combination of C, P, and T that is observed to be an exact symmetry of nature at the fundamental level. The CPT theorem says that CPT symmetry holds for all physical phenomena, or more precisely, that any Lorentz invariant local quantum field theory with a Hermitian Hamiltonian must have CPT symmetry. In layman's terms, this stipulates that an antimatter, mirrored, and time-reversed universe would behave exactly the same as our regular universe. History The CPT theorem appeared for the first time, implicitly, in the work of Julian Schwinger in 1951 to prove the connection between spin and statistics. In 1954, Gerhart Lüders and Wolfgang Pauli derived more explicit proofs, so this theorem is sometimes known as the Lüders–Pauli theorem. At about the same time, and independently, this theorem was also proved by John Stewart Bell. These proofs are based on the principle of Lorentz invariance and the principle of locality in the interaction of quantum fields. Subsequently, Res Jost gave a more general proof in 1958 using the framework of axiomatic quantum field theory. Efforts during the late 1950s revealed the violation of P-symmetry by phenomena that involve the weak force, and there were well-known violations of C-symmetry as well. For a short time, the CP-symmetry was believed to be preserved by all physical phenomena, but in the 1960s that was found to be false too, which implied, by CPT invariance, violations of T-symmetry as well. Derivation of the CPT theorem Consider a Lorentz boost in a fixed direction z. This can be interpreted as a rotation of the time axis into the z axis, with an imaginary rotation parameter. If this rotation parameter were real, it would be possible for a 180° rotation to reverse the direction of time and of z. Reversing the direction of one axis is a reflection of space in any number of dimensions. If space has 3 dimensions, it is equivalent to reflecting all the coordinates, because an additional rotation of 180° in the x-y plane could be included. This defines a CPT transformation if we adopt the Feynman–Stueckelberg interpretation of antiparticles as the corresponding particles traveling backwards in time. This interpretation requires a slight analytic continuation, which is well-defined only under the following assumptions: The theory is Lorentz invariant; The vacuum is Lorentz invariant; The energy is bounded below. When the above hold, quantum theory can be extended to a Euclidean theory, defined by translating all the operators to imaginary time using the Hamiltonian. The commutation relations of the Hamiltonian, and the Lorentz generators, guarantee that Lorentz invariance implies rotational invariance, so that any state can be rotated by 180 degrees. Since a sequence of two CPT reflections is equivalent to a 360-degree rotation, fermions change by a sign under two CPT reflections, while bosons do not. This fact can be used to prove the spin-statistics theorem.
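The key analytic-continuation step in this derivation can be checked symbolically: continuing the boost rapidity to an imaginary value and passing to Euclidean time turns the boost into an ordinary rotation, and a 180° rotation then reverses both the Euclidean time axis and z. A sympy sketch (the change-of-basis matrix S implementing t → it is an illustrative choice):

```python
import sympy as sp

eta, theta = sp.symbols('eta theta', real=True)

# Lorentz boost along z acting on (t, z)
B = sp.Matrix([[sp.cosh(eta), sp.sinh(eta)],
               [sp.sinh(eta), sp.cosh(eta)]])

# Imaginary rapidity plus Euclidean time tau = i t: the boost becomes a rotation
S = sp.diag(sp.I, 1)
R = sp.simplify(S * B.subs(eta, sp.I * theta) * S.inv())
print(R)                      # [[cos(theta), -sin(theta)], [sin(theta), cos(theta)]]

# A 180-degree rotation flips both Euclidean time and z: the PT-style
# reflection that, with the antiparticle interpretation, yields CPT
print(R.subs(theta, sp.pi))   # [[-1, 0], [0, -1]]
```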
Consequences and implications The implication of CPT symmetry is that a "mirror-image" of our universe — with all objects having their positions reflected through an arbitrary point (corresponding to a parity inversion), all momenta reversed (corresponding to a time inversion) and with all matter replaced by antimatter (corresponding to a charge inversion) — would evolve under exactly our physical laws. The CPT transformation turns our universe into its "mirror image" and vice versa. CPT symmetry is recognized to be a fundamental property of physical laws. In order to preserve this symmetry, every violation of the combined symmetry of two of its components (such as CP) must have a corresponding violation in the third component (such as T); in fact, mathematically, these are the same thing. Thus violations in T-symmetry are often referred to as CP violations. The CPT theorem can be generalized to take into account pin groups. In 2002 Oscar Greenberg proved that, with reasonable assumptions, CPT violation implies the breaking of Lorentz symmetry. CPT violations would be expected by some string theory models, as well as by some other models that lie outside point-particle quantum field theory. Some proposed violations of Lorentz invariance, such as a compact dimension of cosmological size, could also lead to CPT violation. Non-unitary theories, such as proposals where black holes violate unitarity, could also violate CPT. As a technical point, fields with infinite spin could violate CPT symmetry. The overwhelming majority of experimental searches for Lorentz violation have yielded negative results. A detailed tabulation of these results was given in 2011 by Kostelecky and Russell. See also Poincaré symmetry and Quantum field theory Parity (physics), Charge conjugation and T-symmetry CP violation and kaon IKAROS scientific results References Sources External links Background information on Lorentz and CPT violation by Alan Kostelecký at Theoretical Physics Indiana University Charge, Parity, and Time Reversal (CPT) Symmetry at LBL CPT Invariance Tests in Neutral Kaon Decay at LBL – 8-component theory for fermions in which T-parity can be a complex number with unit radius. The CPT invariance is not a theorem but a desirable property in this class of theories. This Particle Breaks Time Symmetry – YouTube video by Veritasium An elementary discussion of CPT violation is given in chapter 15 of this student level textbook Quantum field theory Symmetry Theorems in quantum mechanics
CPT symmetry
[ "Physics", "Mathematics" ]
1,179
[ "Theorems in quantum mechanics", "Quantum field theory", "Equations of physics", "Quantum mechanics", "Theorems in mathematical physics", "Geometry", "Symmetry", "Physics theorems" ]
151,196
https://en.wikipedia.org/wiki/Acute%20radiation%20syndrome
Acute radiation syndrome (ARS), also known as radiation sickness or radiation poisoning, is a collection of health effects that are caused by being exposed to high amounts of ionizing radiation in a short period of time. Symptoms can start within an hour of exposure, and can last for several months. Early symptoms are usually nausea, vomiting and loss of appetite. In the following hours or weeks, initial symptoms may appear to improve, before the development of additional symptoms, after which either recovery or death follow. ARS involves a total dose of greater than 0.7 Gy (70 rad), that generally occurs from a source outside the body, delivered within a few minutes. Sources of such radiation can occur accidentally or intentionally. They may involve nuclear reactors, cyclotrons, certain devices used in cancer therapy, nuclear weapons, or radiological weapons. It is generally divided into three types: bone marrow, gastrointestinal, and neurovascular syndrome, with bone marrow syndrome occurring at 0.7 to 10 Gy, and neurovascular syndrome occurring at doses that exceed 50 Gy. The cells that are most affected are generally those that are rapidly dividing. At high doses, this causes DNA damage that may be irreparable. Diagnosis is based on a history of exposure and symptoms. Repeated complete blood counts (CBCs) can indicate the severity of exposure. Treatment of ARS is generally supportive care. This may include blood transfusions, antibiotics, colony-stimulating factors, or stem cell transplant. Radioactive material remaining on the skin or in the stomach should be removed. If radioiodine was inhaled or ingested, potassium iodide is recommended. Complications such as leukemia and other cancers among those who survive are managed as usual. Short-term outcomes depend on the dose exposure. ARS is generally rare. A single event can affect a large number of people, as happened in the atomic bombings of Hiroshima and Nagasaki and the Chernobyl nuclear power plant disaster. ARS differs from chronic radiation syndrome, which occurs following prolonged exposures to relatively low doses of radiation. Signs and symptoms Classically, ARS is divided into three main presentations: hematopoietic, gastrointestinal, and neurovascular. These syndromes may be preceded by a prodrome. The speed of symptom onset is related to radiation exposure, with greater doses resulting in a shorter delay in symptom onset. These presentations presume whole-body exposure, and many of them are markers that are invalid if the entire body has not been exposed. Each syndrome requires that the tissue showing the syndrome itself be exposed (e.g., gastrointestinal syndrome is not seen if the stomach and intestines are not exposed to radiation). Some areas affected are: Hematopoietic. This syndrome is marked by a drop in the number of blood cells, called aplastic anemia. This may result in infections, due to a low number of white blood cells, bleeding, due to a lack of platelets, and anemia, due to too few red blood cells in circulation. These changes can be detected by blood tests after receiving a whole-body acute dose as low as , though they might never be felt by the patient if the dose is below . Conventional trauma and burns resulting from a bomb blast are complicated by the poor wound healing caused by hematopoietic syndrome, increasing mortality. Gastrointestinal. This syndrome often follows absorbed doses of . 
The signs and symptoms of this form of radiation injury include nausea, vomiting, loss of appetite, and abdominal pain. Vomiting in this time-frame is a marker for whole body exposures that are in the fatal range above . Without exotic treatment such as bone marrow transplant, death with this dose is common, due generally more to infection than gastrointestinal dysfunction. Neurovascular. This syndrome typically occurs at absorbed doses greater than , though it may occur at doses as low as . It presents with neurological symptoms such as dizziness, headache, or decreased level of consciousness, occurring within minutes to a few hours, with an absence of vomiting, and is almost always fatal, even with aggressive intensive care. Early symptoms of ARS typically include nausea, vomiting, headaches, fatigue, fever, and a short period of skin reddening. These symptoms may occur at radiation doses as low as . These symptoms are common to many illnesses, and may not, by themselves, indicate acute radiation sickness. Dose effects A similar table and description of symptoms (given in rems, where 100 rem = 1 Sv), derived from data from the effects on humans subjected to the atomic bombings of Hiroshima and Nagasaki, the indigenous peoples of the Marshall Islands subjected to the Castle Bravo thermonuclear bomb, animal studies and lab experiment accidents, have been compiled by the U.S. Department of Defense. A person who was less than from the atomic bomb Little Boy hypocenter at Hiroshima, Japan, was found to have absorbed about 9.46 grays (Gy) of ionizing radiation. The doses at the hypocenters of the Hiroshima and Nagasaki atomic bombings were 240 and 290 Gy, respectively. Skin changes Cutaneous radiation syndrome (CRS) refers to the skin symptoms of radiation exposure. Within a few hours after irradiation, a transient and inconsistent redness (associated with itching) can occur. Then, a latent phase may occur and last from a few days up to several weeks, when intense reddening, blistering, and ulceration of the irradiated site is visible. In most cases, healing occurs by regenerative means; however, very large skin doses can cause permanent hair loss, damaged sebaceous and sweat glands, atrophy, fibrosis (mostly keloids), decreased or increased skin pigmentation, and ulceration or necrosis of the exposed tissue. As seen at Chernobyl, when skin is irradiated with high energy beta particles, moist desquamation (peeling of skin) and similar early effects can heal, only to be followed by the collapse of the dermal vascular system after two months, resulting in the loss of the full thickness of the exposed skin. Another example of skin loss caused by high-level exposure of radiation is during the 1999 Tokaimura nuclear accident, where technician Hisashi Ouchi had lost a majority of his skin due to the high amounts of radiation he absorbed during the irradiation. This effect had been demonstrated previously with pig skin using high energy beta sources at the Churchill Hospital Research Institute, in Oxford. Cause ARS is caused by exposure to a large dose of ionizing radiation (> ~0.1 Gy) over a short period of time (> ~0.1 Gy/h). Alpha and beta radiation have low penetrating power and are unlikely to affect vital internal organs from outside the body. Any type of ionizing radiation can cause burns, but alpha and beta radiation can only do so if radioactive contamination or nuclear fallout is deposited on the individual's skin or clothing. 
Gamma and neutron radiation can travel much greater distances and penetrate the body easily, so whole-body irradiation generally causes ARS before skin effects are evident. Local gamma irradiation can cause skin effects without any sickness. In the early twentieth century, radiographers would commonly calibrate their machines by irradiating their own hands and measuring the time to onset of erythema. Accidental Accidental exposure may be the result of a criticality or radiotherapy accident. There have been numerous criticality accidents dating back to atomic testing during World War II, while computer-controlled radiation therapy machines such as Therac-25 played a major part in radiotherapy accidents. The latter of the two is caused by the failure of equipment software used to monitor the radiational dose given. Human error has played a large part in accidental exposure incidents, including some of the criticality accidents, and larger scale events such as the Chernobyl disaster. Other events have to do with orphan sources, in which radioactive material is unknowingly kept, sold, or stolen. The Goiânia accident is an example, where a forgotten radioactive source was taken from a hospital, resulting in the deaths of 4 people from ARS. Theft and attempted theft of radioactive material by clueless thieves has also led to lethal exposure in at least one incident. Exposure may also come from routine spaceflight and solar flares that result in radiation effects on earth in the form of solar storms. During spaceflight, astronauts are exposed to both galactic cosmic radiation (GCR) and solar particle event (SPE) radiation. The exposure particularly occurs during flights beyond low Earth orbit (LEO). Evidence indicates past SPE radiation levels that would have been lethal for unprotected astronauts. GCR levels that might lead to acute radiation poisoning are less well understood. The latter cause is rarer, with an event possibly occurring during the solar storm of 1859. Intentional Intentional exposure is controversial as it involves the use of nuclear weapons, human experiments, or is given to a victim in an act of murder. The intentional atomic bombings of Hiroshima and Nagasaki resulted in tens of thousands of casualties; the survivors of these bombings are known today as . Nuclear weapons emit large amounts of thermal radiation as visible, infrared, and ultraviolet light, to which the atmosphere is largely transparent. This event is also known as "flash", where radiant heat and light are bombarded into any given victim's exposed skin, causing radiation burns. Death is highly likely, and radiation poisoning is almost certain if one is caught in the open with no terrain or building masking-effects within a radius of 0–3 km from a 1 megaton airburst. The 50% chance of death from the blast extends out to ~8 km from a 1 megaton atmospheric explosion. Scientific testing on humans within the United States occurred extensively throughout the atomic age. Experiments took place on a range of subjects including, but not limited to; the disabled, children, soldiers, and incarcerated persons, with the level of understanding and consent given by subjects varying from complete to none. Since 1997 there have been requirements for patients to give informed consent, and to be notified if experiments were classified. Across the world, the Soviet nuclear program involved human experiments on a large scale, which is still kept secret by the Russian government and the Rosatom agency. 
The human experiments that fall under intentional ARS exclude those that involved long term exposure. Criminal activity has involved murder and attempted murder carried out through abrupt victim contact with a radioactive substance such as polonium or plutonium. Pathophysiology The most commonly used predictor of ARS is the whole-body absorbed dose. Several related quantities, such as the equivalent dose, effective dose, and committed dose, are used to gauge long-term stochastic biological effects such as cancer incidence, but they are not designed to evaluate ARS. To help avoid confusion between these quantities, absorbed dose is measured in units of grays (in SI, unit symbol Gy) or rad (in CGS), while the others are measured in sieverts (in SI, unit symbol Sv) or rem (in CGS). 1 rad = 0.01 Gy and 1 rem = 0.01 Sv. In most of the acute exposure scenarios that lead to radiation sickness, the bulk of the radiation is external whole-body gamma, in which case the absorbed, equivalent, and effective doses are all equal. There are exceptions, such as the Therac-25 accidents and the 1958 Cecil Kelley criticality accident, where the absorbed doses in Gy or rad are the only useful quantities, because of the targeted nature of the exposure to the body. Radiotherapy treatments are typically prescribed in terms of the local absorbed dose, which might be 60 Gy or higher. The dose is fractionated to about 2 Gy per day for curative treatment, which allows normal tissues to undergo repair, allowing them to tolerate a higher dose than would otherwise be expected. The dose to the targeted tissue mass must be averaged over the entire body mass, most of which receives negligible radiation, to arrive at a whole-body absorbed dose that can be compared to the table above. DNA damage Exposure to high doses of radiation causes DNA damage, later creating serious and even lethal chromosomal aberrations if left unrepaired. Ionizing radiation can produce reactive oxygen species, and does directly damage cells by causing localized ionization events. The former is very damaging to DNA, while the latter events create clusters of DNA damage. This damage includes loss of nucleobases and breakage of the sugar-phosphate backbone that binds to the nucleobases. The DNA organization at the level of histones, nucleosomes, and chromatin also affects its susceptibility to radiation damage. Clustered damage, defined as at least two lesions within a helical turn, is especially harmful. While DNA damage happens frequently and naturally in the cell from endogenous sources, clustered damage is a unique effect of radiation exposure. Clustered damage takes longer to repair than isolated breakages, and is less likely to be repaired at all. Larger radiation doses are more prone to cause tighter clustering of damage, and closely localized damage is increasingly less likely to be repaired. Somatic mutations cannot be passed down from parent to offspring, but these mutations can propagate in cell lines within an organism. Radiation damage can also cause chromosome and chromatid aberrations, and their effects depend on in which stage of the mitotic cycle the cell is when the irradiation occurs. If the cell is in interphase, while it is still a single strand of chromatin, the damage will be replicated during the S1 phase of the cell cycle, and there will be a break on both chromosome arms; the damage then will be apparent in both daughter cells. 
If the irradiation occurs after replication, only one arm will bear the damage; this damage will be apparent in only one daughter cell. A damaged chromosome may cyclize, binding to another chromosome, or to itself. Diagnosis Diagnosis is typically made based on a history of significant radiation exposure and suitable clinical findings. An absolute lymphocyte count can give a rough estimate of radiation exposure. Time from exposure to vomiting can also give estimates of exposure levels if they are less than 10 Gy (1000 rad). Prevention A guiding principle of radiation safety is as low as reasonably achievable (ALARA). This means try to avoid exposure as much as possible and includes the three components of time, distance, and shielding. Time The longer that humans are subjected to radiation the larger the dose will be. The advice in the nuclear war manual entitled Nuclear War Survival Skills published by Cresson Kearny in the U.S. was that if one needed to leave the shelter then this should be done as rapidly as possible to minimize exposure. In chapter 12, he states that "[q]uickly putting or dumping wastes outside is not hazardous once fallout is no longer being deposited. For example, assume the shelter is in an area of heavy fallout and the dose rate outside is 400 roentgen (R) per hour, enough to give a potentially fatal dose in about an hour to a person exposed in the open. If a person needs to be exposed for only 10 seconds to dump a bucket, in this 1/360 of an hour he will receive a dose of only about 1 R. Under war conditions, an additional 1-R dose is of little concern." In peacetime, radiation workers are taught to work as quickly as possible when performing a task that exposes them to radiation. For instance, the recovery of a radioactive source should be done as quickly as possible. Shielding Usually, matter attenuates radiation, so placing any mass (e.g., lead, dirt, sandbags, vehicles, water, even air) between humans and the source will reduce the radiation dose. This is not always the case, however; care should be taken when constructing shielding for a specific purpose. For example, although high atomic number materials are very effective in shielding photons, using them to shield beta particles may cause higher radiation exposure due to the production of bremsstrahlung x-rays, and hence low atomic number materials are recommended. Also, using material with a high neutron activation cross section to shield neutrons will result in the shielding material itself becoming radioactive and hence more dangerous than if it were not present. There are many types of shielding strategies that can be used to reduce the effects of radiation exposure. Internal contamination protective equipment such as respirators are used to prevent internal deposition as a result of inhalation and ingestion of radioactive material. Dermal protective equipment, which protects against external contamination, provides shielding to prevent radioactive material from being deposited on external structures. While these protective measures do provide a barrier from radioactive material deposition, they do not shield from externally penetrating gamma radiation. This leaves anyone exposed to penetrating gamma rays at high risk of ARS. Naturally, shielding the entire body from high energy gamma radiation is optimal, but the required mass to provide adequate attenuation makes functional movement nearly impossible. 
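The "time" and "shielding" principles above lend themselves to back-of-envelope arithmetic. The sketch below reproduces Kearny's 400 R/h example and adds an exponential-attenuation estimate; the half-value-layer figure is an illustrative assumption, not a value taken from this text:

```python
def dose_R(rate_R_per_h, seconds):
    """Dose accumulated at a constant rate: the 'time' rule."""
    return rate_R_per_h * seconds / 3600.0

# Kearny's example: a 400 R/h field entered for 10 seconds
print(round(dose_R(400, 10), 2), "R")   # ~1.11 R, "of little concern" in his terms

def shielded_rate(rate, thickness_cm, hvl_cm):
    """The 'shielding' rule: gamma attenuation is roughly exponential,
    so each half-value layer (HVL) of material halves the dose rate."""
    return rate * 0.5 ** (thickness_cm / hvl_cm)

# Assuming an HVL of roughly 1 cm of lead for ~1 MeV gammas (illustrative only)
print(shielded_rate(400, 5, 1.0), "R/h")   # 12.5 R/h behind 5 cm
```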
In the event of a radiation catastrophe, medical and security personnel need mobile protection equipment in order to safely assist in containment, evacuation, and many other necessary public safety objectives. Research has been done exploring the feasibility of partial body shielding, a radiation protection strategy that provides adequate attenuation to only the most radio-sensitive organs and tissues inside the body. Irreversible stem cell damage in the bone marrow is the first life-threatening effect of intense radiation exposure and therefore one of the most important bodily elements to protect. Due to the regenerative property of hematopoietic stem cells, it is only necessary to protect enough bone marrow to repopulate the exposed areas of the body with the shielded supply. This concept allows for the development of lightweight mobile radiation protection equipment, which provides adequate protection, deferring the onset of ARS to much higher exposure doses. One example of such equipment is the 360 gamma, a radiation protection belt that applies selective shielding to protect the bone marrow stored in the pelvic area as well as other radio sensitive organs in the abdominal region without hindering functional mobility. Reduction of incorporation Where radioactive contamination is present, an elastomeric respirator, dust mask, or good hygiene practices may offer protection, depending on the nature of the contaminant. Potassium iodide (KI) tablets can reduce the risk of cancer in some situations due to slower uptake of ambient radioiodine. Although this does not protect any organ other than the thyroid gland, their effectiveness is still highly dependent on the time of ingestion, which would protect the gland for the duration of a twenty-four-hour period. They do not prevent ARS as they provide no shielding from other environmental radionuclides. Fractionation of dose If an intentional dose is broken up into a number of smaller doses, with time allowed for recovery between irradiations, the same total dose causes less cell death. Even without interruptions, a reduction in dose rate below 0.1 Gy/h also tends to reduce cell death. This technique is routinely used in radiotherapy. The human body contains many types of cells and a human can be killed by the loss of a single type of cells in a vital organ. For many short term radiation deaths (3–30 days), the loss of two important types of cells that are constantly being regenerated causes death. The loss of cells forming blood cells (bone marrow) and the cells in the digestive system (microvilli, which form part of the wall of the intestines) is fatal. Management Treatment usually involves supportive care with possible symptomatic measures employed. The former involves the possible use of antibiotics, blood products, colony stimulating factors, and stem cell transplant. Antimicrobials There is a direct relationship between the degree of the neutropenia that emerges after exposure to radiation and the increased risk of developing infection. Since there are no controlled studies of therapeutic intervention in humans, most of the current recommendations are based on animal research. The treatment of established or suspected infection following exposure to radiation (characterized by neutropenia and fever) is similar to the one used for other febrile neutropenic patients. However, important differences between the two conditions exist. 
Individuals that develop neutropenia after exposure to radiation are also susceptible to irradiation damage in other tissues, such as the gastrointestinal tract, lungs and central nervous system. These patients may require therapeutic interventions not needed in other types of neutropenic patients. The response of irradiated animals to antimicrobial therapy can be unpredictable, as was evident in experimental studies where metronidazole and pefloxacin therapies were detrimental. Antimicrobials that reduce the number of the strict anaerobic component of the gut flora (i.e., metronidazole) generally should not be given because they may enhance systemic infection by aerobic or facultative bacteria, thus facilitating mortality after irradiation. An empirical regimen of antimicrobials should be chosen based on the pattern of bacterial susceptibility and nosocomial infections in the affected area and medical center and the degree of neutropenia. Broad-spectrum empirical therapy (see below for choices) with high doses of one or more antibiotics should be initiated at the onset of fever. These antimicrobials should be directed at the eradication of Gram-negative aerobic bacilli (i.e., Enterobacteriaceae, Pseudomonas) that account for more than three quarters of the isolates causing sepsis. Because aerobic and facultative Gram-positive bacteria (mostly alpha-hemolytic streptococci) cause sepsis in about a quarter of the victims, coverage for these organisms may also be needed. A standardized management plan for people with neutropenia and fever should be devised. Empirical regimens contain antibiotics broadly active against Gram-negative aerobic bacteria (quinolones: i.e., ciprofloxacin, levofloxacin, a third- or fourth-generation cephalosporin with pseudomonal coverage: e.g., cefepime, ceftazidime, or an aminoglycoside: i.e. gentamicin, amikacin). Prognosis The prognosis for ARS is dependent on the exposure dose, with anything above 8 Gy being almost always lethal, even with medical care. Radiation burns from lower-level exposures usually manifest after 2 months, while reactions from the burns occur months to years after radiation treatment. Complications from ARS include an increased risk of developing radiation-induced cancer later in life. According to the controversial but commonly applied linear no-threshold model, any exposure to ionizing radiation, even at doses too low to produce any symptoms of radiation sickness, can induce cancer due to cellular and genetic damage. The probability of developing cancer is a linear function with respect to the effective radiation dose. Radiation cancer may occur after ionizing radiation exposure following a latent period averaging 20 to 40 years. History Acute effects of ionizing radiation were first observed when Wilhelm Röntgen intentionally subjected his fingers to X-rays in 1895. He published his observations concerning the burns that developed that eventually healed, and misattributed them to ozone. Röntgen believed the free radical produced in air by X-rays from the ozone was the cause, but other free radicals produced within the body are now understood to be more important. David Walsh first established the symptoms of radiation sickness in 1897. Ingestion of radioactive materials caused many radiation-induced cancers in the 1930s, but no one was exposed to high enough doses at high enough rates to bring on ARS. 
The atomic bombings of Hiroshima and Nagasaki resulted in high acute doses of radiation to a large number of Japanese people, allowing for greater insight into its symptoms and dangers. Red Cross Hospital Surgeon Terufumi Sasaki led intensive research into the syndrome in the weeks and months following the Hiroshima and Nagasaki bombings. Sasaki and his team were able to monitor the effects of radiation in patients of varying proximities to the blast itself, leading to the establishment of three recorded stages of the syndrome. Within 25–30 days of the explosion, Sasaki noticed a sharp drop in white blood cell count and established this drop, along with symptoms of fever, as prognostic standards for ARS. Actress Midori Naka, who was present during the atomic bombing of Hiroshima, was the first incident of radiation poisoning to be extensively studied. Her death on 24 August 1945 was the first death ever to be officially certified as a result of ARS (or "Atomic bomb disease"). There are two major databases that track radiation accidents: The American ORISE REAC/TS and the European IRSN ACCIRAD. REAC/TS shows 417 accidents occurring between 1944 and 2000, causing about 3000 cases of ARS, of which 127 were fatal. ACCIRAD lists 580 accidents with 180 ARS fatalities for an almost identical period. The two deliberate bombings are not included in either database, nor are any possible radiation-induced cancers from low doses. The detailed accounting is difficult because of confounding factors. ARS may be accompanied by conventional injuries such as steam burns, or may occur in someone with a pre-existing condition undergoing radiotherapy. There may be multiple causes for death, and the contribution from radiation may be unclear. Some documents may incorrectly refer to radiation-induced cancers as radiation poisoning, or may count all overexposed individuals as survivors without mentioning if they had any symptoms of ARS. Notable cases The following table includes only those known for their attempted survival with ARS. These cases exclude chronic radiation syndrome such as Albert Stevens, in which radiation is exposed to a given subject over a long duration. The table also necessarily excludes cases where the individual was exposed to so much radiation that death occurred before medical assistance or dose estimations could be made, such as an attempted cobalt-60 thief who reportedly died 30 minutes after exposure. The result column represents the time of exposure to the time of death attributed to the short and long term effects attributed to initial exposure. As ARS is measured by a whole-body absorbed dose, the exposure column only includes units of gray (Gy). Other animals Thousands of scientific experiments have been performed to study ARS in animals. There is a simple guide for predicting survival and death in mammals, including humans, following the acute effects of inhaling radioactive particles. See also 5-Androstenediol Biological effects of ionizing radiation Biological effects of radiation on the epigenome CBLB502 Ex-Rad List of civilian nuclear accidents List of military nuclear accidents Nuclear terrorism Orders of magnitude (radiation) Prehydrated electrons Rongelap Atoll References This article incorporates public domain material from websites or documents of the U.S. Armed Forces Radiobiology Research Institute and the U.S. Centers for Disease Control and Prevention External links – A well documented account of the biological effects of a criticality accident. 
More information on bone marrow shielding can be found in the Health Physics Radiation Safety Journal, or in the Organisation for Economic Co-operation and Development (OECD) and Nuclear Energy Agency (NEA) 2015 report "Occupational Radiation Protection in Severe Accident Management". Radioactive contamination Radiology Radiobiology Radiation health effects Medical emergencies Causes of death Effects of external causes Syndromes affecting blood Occupational hazards Wikipedia medicine articles ready to translate
Acute radiation syndrome
[ "Chemistry", "Materials_science", "Technology", "Biology" ]
5,662
[ "Radiation health effects", "Radioactive contamination", "Radiobiology", "Environmental impact of nuclear power", "Radiation effects", "Radioactivity" ]
21,235,839
https://en.wikipedia.org/wiki/Cemented%20carbide
Cemented carbides are a class of hard materials used extensively for cutting tools, as well as in other industrial applications. They consist of fine particles of carbide cemented into a composite by a binder metal. Cemented carbides commonly use tungsten carbide (WC), titanium carbide (TiC), or tantalum carbide (TaC) as the aggregate. Mentions of "carbide" or "tungsten carbide" in industrial contexts usually refer to these cemented composites. Most of the time, carbide cutters will leave a better surface finish on a part and allow for faster machining than high-speed steel or other tool steels. Carbide tools can withstand higher temperatures at the cutter-workpiece interface than standard high-speed steel tools (which is a principal reason enabling the faster machining). Carbide is usually superior for the cutting of tough materials such as carbon steel or stainless steel, as well as in situations where other cutting tools would wear away faster, such as high-quantity production runs. In situations where carbide tooling is not required, high-speed steel is preferred for its lower cost. Construction Cemented carbides are metal matrix composites where carbide particles act as the aggregate and a metallic binder serves as the matrix (analogous to concrete, where a gravel aggregate is suspended in a cement matrix). The structure of cemented carbide is conceptually similar to that of a grinding wheel, but the abrasive particles are much smaller; macroscopically, the material of a carbide cutter appears homogeneous. The process of combining the carbide particles with the binder is referred to as sintering or hot isostatic pressing (HIP). During this process, the material is heated until the binder enters a liquid phase while the carbide grains (which have a much higher melting point) remain solid. At this elevated temperature and pressure, the carbide grains rearrange themselves and compact together, forming a porous matrix. The ductility of the metal binder serves to offset the brittleness of the carbide ceramic, resulting in the composite's high overall toughness and durability. By controlling various parameters, including grain size, cobalt content, doping (e.g., with alloy carbides) and carbon content, a carbide manufacturer can tailor the carbide's performance to specific applications. The first cemented carbide developed was tungsten carbide (introduced in 1927), which uses tungsten carbide particles held together by a cobalt metal binder. Since then, other cemented carbides have been developed, such as titanium carbide, which is better suited for cutting steel, and tantalum carbide, which is tougher than tungsten carbide. Physical properties The coefficient of thermal expansion of cemented tungsten carbide is found to vary with the amount of cobalt used as a metal binder. For 5.9% cobalt samples, a coefficient of 4.4 μm/m·K was measured, whereas 13% cobalt samples have a coefficient of around 5.0 μm/m·K. Both values are only valid from to due to non-linearity in the thermal expansion process. Applications Inserts for metal cutting Carbide is more expensive per unit than other typical tool materials, and it is more brittle, making it susceptible to chipping and breaking. To offset these problems, the carbide cutting tip itself is often in the form of a small insert for a larger tipped tool whose shank is made of another material, usually carbon tool steel.
This gives the benefit of using carbide at the cutting interface without the high cost and brittleness of making the entire tool out of carbide. Most modern face mills use carbide inserts, as well as many lathe tools and endmills. In recent decades, though, solid-carbide endmills have also become more commonly used, wherever the application's characteristics make the pros (such as shorter cycle times) outweigh the cons (mentioned above). As well, modern turning (lathe) tooling may use a carbide insert on a carbide tool such as a boring bar, which is more rigid than a steel insert holder and therefore less prone to vibration; this is of particular importance with boring or threading bars that may need to reach into a part to a depth many times the tool diameter. Insert coatings To increase the life of carbide tools, they are sometimes coated. Five such coatings are TiN (titanium nitride), TiC (titanium carbide), Ti(C)N (titanium carbide-nitride), TiAlN (titanium aluminium nitride) and AlTiN (aluminium titanium nitride). (Newer coatings, known as DLC (diamond-like carbon), are beginning to surface, enabling the cutting power of diamond without the unwanted chemical reaction between real diamond and iron.) Most coatings generally increase a tool's hardness and/or lubricity. A coating allows the cutting edge of a tool to cleanly pass through the material without having the material gall (stick) to it. The coating also helps to decrease the temperature associated with the cutting process and increase the life of the tool. The coating is usually deposited via thermal chemical vapor deposition (CVD) and, for certain applications, with the mechanical physical vapor deposition (PVD) method. However, if the deposition is performed at too high a temperature, an eta phase of a Co6W6C ternary carbide forms at the interface between the carbide and the cobalt phase, which may lead to adhesion failure of the coating. Inserts for mining tools Mining and tunneling cutting tools are most often fitted with cemented carbide tips, the so-called "button bits". Artificial diamond can replace the cemented carbide buttons only when conditions are ideal, but as rock drilling is a tough job, cemented carbide button bits remain the most used type throughout the world. Rolls for hot-roll and cold-roll applications Since the mid-1960s, steel mills around the world have applied cemented carbide to the rolls of their rolling mills for both hot and cold rolling of tubes, bars, and flats. Other industrial applications This category covers countless applications, which can be split into three main areas: Engineered components Wear parts Tools and tool blanks Some key areas where cemented carbide components are used: Automotive components Canning tools for deep drawing of two-piece cans Rotary cutters for high-speed cutting of artificial fibres Metal forming tools for wire drawing and stamping applications, such as drawing dies. Rings and bushings typically for bump and seal applications Woodworking, e.g., for sawing and planing applications Pump pistons for high-performance pumps (e.g., in nuclear installations) Nozzles, e.g., high-performance nozzles for oil drilling applications Roof and tail tools and components for high wear resistance Balls for ball bearings and ballpoint pens Non-industrial uses Jewellery Tungsten carbide has become a popular material in the bridal jewellery industry, due to its extreme hardness and high resistance to scratching.
Given its brittleness, it is prone to chip, crack, or shatter in jewellery applications. Once fractured, it cannot be repaired. History The initial development of cemented and sintered carbides occurred in Germany in the 1920s. ThyssenKrupp says [in historical present tense], "Sintered tungsten carbide was developed by the 'Osram study society for electrical lighting' to replace diamonds as a material for machining metal. Not having the equipment to exploit this material on an industrial scale, Osram sells the license to Krupp at the end of 1925. In 1926 Krupp brings sintered carbide onto the market under the name WIDIA (acronym for wie Diamant = like diamond)." Machinery's Handbook gives the date of carbide tools' commercial introduction as 1927. Burghardt and Axelrod give the date of their commercial introduction in the United States as 1928. Subsequent development occurred in various countries. Although the marketing pitch was slightly hyperbolic (carbides being not entirely equal to diamond), carbide tooling offered an improvement in cutting speeds and feeds so remarkable that, like high-speed steel had done two decades earlier, it forced machine tool designers to rethink every aspect of existing designs, with an eye toward yet more rigidity and yet better spindle bearings. During World War II there was a tungsten shortage in Germany. It was found that tungsten in carbide cuts metal more efficiently than tungsten in high-speed steel, so to economise on the use of tungsten, carbides were used for metal cutting as much as possible. The name became a genericized trademark in various countries and languages, including English (widia), although the genericized sense was never especially widespread in English ("carbide" is the normal generic term). Since 2009, the name has been revived as a brand name by Kennametal, and the brand subsumes numerous popular brands of cutting tools. Uncoated tips brazed to their shanks were the first form. Clamped indexable inserts and today's wide variety of coatings are advances made in the decades since. With every passing decade, the use of carbide has become less "special" and more ubiquitous. Regarding fine-grained hardmetal, an attempt has been made to follow the scientific and technological steps associated with its production; this task is not easy, though, because of the restrictions placed by commercial, and in some cases research, organisations on publicising relevant information until long after the date of the initial work. Thus, placing data in an historical, chronological order is somewhat difficult. However, it has been possible to establish that as far back as 1929, approximately 6 years after the first patent was granted, Krupp/Osram workers had identified the positive aspects of tungsten carbide grain refinement. By 1939, they had also discovered the beneficial effects of adding a small amount of vanadium and tantalum carbide. This effectively controlled discontinuous grain growth. What was considered 'fine' in one decade was considered not so fine in the next. Thus, a grain size in the range 0.5–3.0 μm was considered fine in the early years, but by the 1990s, the era of the nano-crystalline material had arrived, with a grain size of 20–50 nm. Pobedit Pobedit is a sintered carbide alloy of about 90% tungsten carbide as a hard phase, and about 10% cobalt (Co) as a binder phase, with a small amount of additional carbon. Developed in the Soviet Union in 1929, it is described as a material from which cutting tools are made.
Later a number of similar alloys based on tungsten and cobalt were developed, and the name 'pobedit' was retained for them as well. Pobedit is usually produced by powder metallurgy in the form of plates of different shapes and sizes. The manufacturing process is as follows: a fine powder of tungsten carbide (or other refractory carbide) and a fine powder of binder material such as cobalt or nickel are intermixed and then pressed into the appropriate forms. The pressed plates are sintered at a temperature close to the melting point of the binder metal, which yields a very dense and solid substance. Plates of this superhard composite are used in the manufacture of metal-cutting and drilling tools; they are usually brazed onto the cutting tool tips. No heat post-treatment is required. The pobedit inserts at the tips of drill bits are still very widespread in Russia. See also Carbide saw References Bibliography Further reading External links Carbides Superhard materials Tungsten compounds Metalworking tools
Cemented carbide
[ "Physics" ]
2,505
[ "Materials", "Superhard materials", "Matter" ]
31,321,967
https://en.wikipedia.org/wiki/Anodic%20bonding
Anodic bonding is a wafer bonding process to seal glass to either silicon or metal without introducing an intermediate layer; it is commonly used to seal glass to silicon wafers in electronics and microfluidics. Also known as field-assisted bonding or electrostatic sealing, the process joins silicon/glass and metal/glass stacks through electric fields. The requirements for anodic bonding are clean and even wafer surfaces and atomic contact between the bonding substrates through a sufficiently powerful electrostatic field. Also necessary is the use of borosilicate glass containing a high concentration of alkali ions. The coefficient of thermal expansion (CTE) of the processed glass needs to be similar to that of the bonding partner. Anodic bonding can be applied with glass wafers at temperatures of 250 to 400 °C or with sputtered glass at 400 °C. Structured borosilicate glass layers may also be deposited by plasma-assisted e-beam evaporation. This procedure is mostly used for hermetic encapsulation of micro-mechanical silicon elements. The glass substrate encapsulation protects from environmental influences, e.g. humidity or contamination. Further, other materials can be anodically bonded to silicon, e.g. low-temperature cofired ceramics (LTCC). Overview Anodic bonding on silicon substrates is divided into bonding using a thin sheet of glass (a wafer) or a glass layer that is deposited onto the silicon using a technique such as sputtering. The glass wafer is often a sodium-containing glass such as Borofloat or Pyrex. With an intermediate glass layer, it is also possible to connect two silicon wafers. The glass layers are deposited by sputtering, spin-on of a glass solution or vapor deposition upon the processed silicon wafer. The thickness of these layers ranges from one to a few micrometers, with spin-on glass layers needing 1 μm or less. Hermetic seals of silicon to glass using an aluminum layer with a thickness of 50 to 100 nm can reach strengths of 18.0 MPa. This method enables burying electrically isolated conductors in the interface. Bonding of thermally oxidized wafers without a glass layer is also possible. The procedural steps of anodic bonding are divided into the following: Contact substrates Heating up substrates Bonding by the application of an electrostatic field Cooling down the wafer stack with a process characterized by the following variables: bond voltage UB bond temperature TB current limitation IB The typical bond strength is between 10 and 20 MPa according to pull tests, higher than the fracture strength of glass. Differing coefficients of thermal expansion pose challenges for anodic bonding. Excessive mismatch in the coefficients of thermal expansion can harm the bond through intrinsic material stresses and cause disruptions in the bonding materials. The use of sodium-containing glasses such as Borofloat or Pyrex serves to reduce the mismatch. These glasses have a similar CTE to silicon over the applied temperature range, commonly up to 400 °C. History Anodic bonding was first mentioned by Wallis and Pomerantz in 1969, who applied it to bond silicon wafers to sodium-containing glass wafers under the influence of an applied electric field. This method is still used today for the encapsulation of sensors with electrically conductive glasses. Procedural steps of anodic bonding Pretreatment of the substrates The anodic bonding procedure is able to bond hydrophilic and hydrophobic silicon surfaces equally effectively.
The surface roughness should be less than 10 nm, and the surface should be free of contamination for the procedure to work properly. Even though anodic bonding is relatively tolerant of contamination, the widely established RCA cleaning procedure is applied to remove any surface impurities. The glass wafer can also be chemically etched or powder-blasted to create small cavities in which MEMS devices can be accommodated. Further mechanisms supporting the bonding of not completely inert anode materials include planarization or polishing of the surfaces and ablation of the surface layer by electrochemical etching. Contact the substrates The wafers that meet the requirements are put into atomic contact. As soon as contact is first established, the bonding process starts close to the cathode and spreads in fronts to the edges, a process taking several minutes. The anodic bonding procedure is based on a glass wafer that is usually placed above a silicon wafer. An electrode is in contact with the glass wafer either through a needle or a full-area cathode electrode. If using a needle electrode, the bond spreads radially to the outside, which makes it impossible to trap air between the surfaces. The radius of the bonded area is approximately proportional to the square root of the time elapsed during the procedure. Below temperatures of 350 to 400 °C and a bond voltage of 500 to 1000 V, this method is neither very effective nor reliable. With a full-area cathode electrode, bond reactions occur over the whole interface as soon as the potential is applied. This is the result of a homogeneous electric field distribution at temperatures of around 300 °C and a bond voltage of 250 V. Using thin deposited glass layers, the voltages needed can be significantly reduced. Heating and bonding by application of electrostatic field The wafers are placed between the chuck and the top tool used as a bond electrode at temperatures between 200 and 500 °C (compare to image "scheme of anodic bonding procedure") but below the softening point of glass (glass transition temperature). The higher the temperature, the higher the mobility of positive ions in the glass. The applied electrical potential is several hundred volts. This causes a diffusion of sodium ions (Na+) out of the bond interface to the back side of the glass, to the cathode. Combined with humidity, this results in the formation of NaOH. The high voltage helps to support the drifting of the positive ions in the glass to the cathode. The diffusion is, consistent with the Boltzmann distribution, exponentially related to the temperature. The glass (Na2O) with its remaining oxygen ions (O2−) is negatively volume charged at the bonding surface compared to the silicon (compare to figure "ion drifting in bond glass" (1)). This is based on the depletion of Na+ ions. Unlike e.g. aluminium, silicon is an inert anode. Thus no ions drift out of the silicon into the glass during the bonding process. This produces a positive volume charge in the silicon wafer on the opposite side. As a result, a high-impedance depletion region a few micrometres (μm) thick develops at the bond barrier in the glass wafer. In the gap between silicon and glass the bond voltage drops. The bonding process starts; it is a combination of electrostatic and electrochemical processes. The electrical field intensity in the depletion region is so high that the oxygen ions drift to the bond interface and pass out to react with the silicon to form SiO2 (compare to figure "ion drifting in bond glass" (2)).
Based on the high field intensity in the depletion region or in the gap at the interface, both wafer surfaces are pressed together at a specific bond voltage and bond temperature. The temperature is maintained at 200 to 500 °C for about 5 to 20 minutes. Typically, the bonding or sealing time is longer when temperature and voltage are reduced. The pressure is applied to create intimate contact between the surfaces to ensure good electrical conduction across the wafer pair, and thus between the surfaces of the bonding partners. The thin oxide layer formed between the bond surfaces, siloxane (Si-O-Si), ensures an irreversible connection between the bonding partners. If using thermally oxidized wafers without a glass layer, the diffusion of OH− and H+ ions instead of Na+ ions leads to the bonding. Cooling down the substrate After the bonding process, slow cooling over several minutes has to take place. This can be supported by purging with an inert gas. The cooling time depends on the difference in CTE of the bonded materials: the higher the CTE difference, the longer the cooling period. References Chemical bonding Electronics manufacturing Packaging (microfabrication) Semiconductor technology Wafer bonding
Anodic bonding
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,706
[ "Electronics manufacturing", "Microtechnology", "Packaging (microfabrication)", "Chemical bonding", "Electronic engineering", "Condensed matter physics", "nan", "Semiconductor technology" ]
31,322,683
https://en.wikipedia.org/wiki/PSORTdb
PSORTdb is a database of protein subcellular localization (SCL) for bacteria and archaea. It is a member of the PSORT family of bioinformatics tools. The database consists of two datasets, ePSORTdb and cPSORTdb, which contain information determined through experimental validation and computational prediction, respectively. The ePSORTdb dataset is the largest curated collection of experimentally verified SCL data. PSORTdb was initially developed in 2005 to hold protein subcellular localization predictions for bacteria. The computational predictions in the cPSORTdb dataset were generated by the PSORT tool PSORTb 2.0, the most accurate SCL predictor of the time. The second and current version of PSORTdb was released in 2010. Entries in the database are automatically generated as newly sequenced prokaryotic genomes become available through NCBI. As of the second release, ePSORTdb contained over 12,000 entries for bacterial proteins and 800 entries for archaeal proteins; cPSORTdb contained SCL predictions for over 3,700,000 proteins in total. The ePSORTdb data is derived from a manual literature search (using PubMed) as well as Swiss-Prot annotations. The cPSORTdb predictions are generated using the updated PSORTb 3.0 tool. PSORTdb can be accessed through a web interface or via BLAST. The entire database is also available for download under the GNU General Public License. Each term in the database is associated with an identifier from the Gene Ontology in order to allow integration with other bioinformatics resources. As of 2014, PSORTdb is maintained by the laboratory of Fiona Brinkman at Simon Fraser University. See also Protein targeting References External links http://db.psort.org/ Protein databases Protein targeting Post-translational modification
PSORTdb
[ "Chemistry", "Biology" ]
384
[ "Protein targeting", "Gene expression", "Biochemical reactions", "Post-translational modification", "Cellular processes" ]
31,329,101
https://en.wikipedia.org/wiki/Polycrystalline%20silicon
Polycrystalline silicon, or multicrystalline silicon, also called polysilicon, poly-Si, or mc-Si, is a high purity, polycrystalline form of silicon, used as a raw material by the solar photovoltaic and electronics industry. Polysilicon is produced from metallurgical grade silicon by a chemical purification process called the Siemens process. This process involves distillation of volatile silicon compounds, and their decomposition into silicon at high temperatures. An emerging, alternative process of refinement uses a fluidized bed reactor. The photovoltaic industry also produces upgraded metallurgical-grade silicon (UMG-Si), using metallurgical instead of chemical purification processes. When produced for the electronics industry, polysilicon contains impurity levels of less than one part per billion (ppb), while polycrystalline solar grade silicon (SoG-Si) is generally less pure. In the 2010s, production shifted toward China, with China-based companies accounting for seven of the top ten producers and around 90% of total worldwide production capacity of approximately 1,400,000 MT. German, US, and South Korean companies account for the remainder. The polysilicon feedstock (large rods, usually broken into chunks of specific sizes and packaged in clean rooms before shipment) is directly cast into multicrystalline ingots or submitted to a recrystallization process to grow single crystal boules. The boules are then sliced into thin silicon wafers and used for the production of solar cells, integrated circuits and other semiconductor devices. Polysilicon consists of small crystals, also known as crystallites, giving the material its typical metal flake effect. While polysilicon and multisilicon are often used as synonyms, multicrystalline usually refers to crystals larger than one millimetre. Multicrystalline solar cells are the most common type of solar cells in the fast-growing PV market and consume most of the worldwide produced polysilicon. About 5 tons of polysilicon is required to manufacture 1 megawatt (MW) of conventional solar modules. Polysilicon is distinct from monocrystalline silicon and amorphous silicon. Vs monocrystalline silicon In single-crystal silicon, also known as monocrystalline silicon, the crystalline framework is homogeneous, which can be recognized by an even external colouring. The entire sample is one single, continuous and unbroken crystal as its structure contains no grain boundaries. Large single crystals are rare in nature and can also be difficult to produce in the laboratory (see also recrystallisation). In contrast, in an amorphous structure the order in atomic positions is limited to short range. Polycrystalline and paracrystalline phases are composed of a number of smaller crystals or crystallites. Polycrystalline silicon (or semi-crystalline silicon, polysilicon, poly-Si, or simply "poly") is a material consisting of multiple small silicon crystals. Polycrystalline cells can be recognized by a visible grain, a "metal flake effect". Semiconductor grade (also solar grade) polycrystalline silicon is converted to single-crystal silicon, meaning that the randomly associated crystallites of silicon in polycrystalline silicon are converted to a large single crystal. Single-crystal silicon is used to manufacture most Si-based microelectronic devices. Polycrystalline silicon can be as much as 99.9999% pure. Ultra-pure poly is used in the semiconductor industry, starting from poly rods that are two to three meters in length.
In the microelectronics industry (semiconductor industry), poly is used at both the macro and micro scales. Single crystals are grown using the Czochralski, zone melting and Bridgman–Stockbarger methods. Components At the component level, polysilicon has long been used as the conducting gate material in MOSFET and CMOS processing technologies. For these technologies it is deposited using low-pressure chemical-vapour deposition (LPCVD) reactors at high temperatures and is usually heavily doped n-type or p-type. More recently, intrinsic and doped polysilicon is being used in large-area electronics as the active and/or doped layers in thin-film transistors. Although it can be deposited by LPCVD, plasma-enhanced chemical vapour deposition (PECVD), or solid-phase crystallization of amorphous silicon in certain processing regimes, these processes still require relatively high temperatures of at least 300 °C. These temperatures make deposition of polysilicon possible for glass substrates but not for plastic substrates. The deposition of polycrystalline silicon on plastic substrates is motivated by the desire to be able to manufacture digital displays on flexible screens. Therefore, a relatively new technique called laser crystallization has been devised to crystallize a precursor amorphous silicon (a-Si) material on a plastic substrate without melting or damaging the plastic. Short, high-intensity ultraviolet laser pulses are used to heat the deposited a-Si material to above the melting point of silicon, without melting the entire substrate. The molten silicon will then crystallize as it cools. By precisely controlling the temperature gradients, researchers have been able to grow very large grains, of up to hundreds of micrometers in size in the extreme case, although grain sizes of 10 nanometers to 1 micrometer are also common. In order to create devices on polysilicon over large areas, however, a crystal grain size smaller than the device feature size is needed for homogeneity of the devices. Another method to produce poly-Si at low temperatures is metal-induced crystallization, where an amorphous-Si thin film can be crystallized at temperatures as low as 150 °C if annealed while in contact with another metal film such as aluminium, gold, or silver. Polysilicon has many applications in VLSI manufacturing. One of its primary uses is as gate electrode material for MOS devices. A polysilicon gate's electrical conductivity may be increased by depositing a metal (such as tungsten) or a metal silicide (such as tungsten silicide) over the gate. Polysilicon may also be employed as a resistor, a conductor, or as an ohmic contact for shallow junctions, with the desired electrical conductivity attained by doping the polysilicon material. One major difference between polysilicon and a-Si is that the mobility of the charge carriers of the polysilicon can be orders of magnitude larger and the material also shows greater stability under electric field and light-induced stress. This allows more complex, high-speed circuitry to be created on the glass substrate along with the a-Si devices, which are still needed for their low-leakage characteristics. When polysilicon and a-Si devices are used in the same process, this is called hybrid processing. A complete polysilicon active layer process is also used in some cases where a small pixel size is required, such as in projection displays.
Feedstock for PV industry Polycrystalline silicon is the key feedstock in the crystalline silicon based photovoltaic industry and used for the production of conventional solar cells. For the first time, in 2006, over half of the world's supply of polysilicon was being used by PV manufacturers. The solar industry was severely hindered by a shortage in supply of polysilicon feedstock and was forced to idle about a quarter of its cell and module manufacturing capacity in 2007. Only twelve factories were known to produce solar-grade polysilicon in 2008; however, by 2013 the number increased to over 100 manufacturers. Monocrystalline silicon is higher priced and a more efficient semiconductor than polycrystalline as it has undergone additional recrystallization via the Czochralski method. Deposition methods Polysilicon deposition, or the process of depositing a layer of polycrystalline silicon on a semiconductor wafer, is achieved by the chemical decomposition of silane (SiH4) at high temperatures of 580 to 650 °C. This pyrolysis process releases hydrogen: SiH4(g) → Si(s) + 2 H2(g). CVD at 500–800 °C Polysilicon layers can be deposited using 100% silane at reduced pressure, or with 20–30% silane (diluted in nitrogen) at the same total pressure. Both of these processes can deposit polysilicon on 10–200 wafers per run, at a rate of 10–20 nm/min and with thickness uniformities of ±5%. Critical process variables for polysilicon deposition include temperature, pressure, silane concentration, and dopant concentration. Wafer spacing and load size have been shown to have only minor effects on the deposition process. The rate of polysilicon deposition increases rapidly with temperature, since it follows Arrhenius behavior, that is, deposition rate = A·exp(−qEa/kT), where q is the electron charge, k is the Boltzmann constant, T is the absolute temperature, and A is a prefactor. The activation energy (Ea) for polysilicon deposition is about 1.7 eV. Based on this equation, the rate of polysilicon deposition increases as the deposition temperature increases. There is a temperature, however, above which the rate of deposition becomes faster than the rate at which unreacted silane arrives at the surface. Beyond this temperature, the deposition rate can no longer increase with temperature, since it is now being hampered by lack of silane from which the polysilicon will be generated. Such a reaction is then said to be "mass-transport-limited". When a polysilicon deposition process becomes mass-transport-limited, the reaction rate becomes dependent primarily on reactant concentration, reactor geometry, and gas flow. When the rate at which polysilicon deposition occurs is slower than the rate at which unreacted silane arrives, then it is said to be surface-reaction-limited. A deposition process that is surface-reaction-limited is primarily dependent on reactant concentration and reaction temperature. Deposition processes must be surface-reaction-limited because they result in excellent thickness uniformity and step coverage. A plot of the logarithm of the deposition rate against the reciprocal of the absolute temperature in the surface-reaction-limited region results in a straight line whose slope is equal to −qEa/k. At reduced pressure levels for VLSI manufacturing, the polysilicon deposition rate below 575 °C is too slow to be practical. Above 650 °C, poor deposition uniformity and excessive roughness will be encountered due to unwanted gas-phase reactions and silane depletion.
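As a rough numeric illustration of the Arrhenius behavior just described, the sketch below compares relative deposition rates across the quoted 580–650 °C window, assuming the stated activation energy of about 1.7 eV (the prefactor A cancels, so only the ratio is meaningful):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K, so Ea can be given directly in eV

def arrhenius_factor(temp_celsius, e_a_ev=1.7):
    """Relative deposition rate exp(-Ea/kT); the prefactor A is omitted."""
    t_kelvin = temp_celsius + 273.15
    return math.exp(-e_a_ev / (K_B * t_kelvin))

# Ratio of deposition rates at the top and bottom of the usual LPCVD window
ratio = arrhenius_factor(650) / arrhenius_factor(580)
print(f"650 C deposits ~{ratio:.1f}x faster than 580 C")  # roughly 5.8x
```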
Pressure can be varied inside a low-pressure reactor either by changing the pumping speed or changing the inlet gas flow into the reactor. If the inlet gas is composed of both silane and nitrogen, the inlet gas flow, and hence the reactor pressure, may be varied either by changing the nitrogen flow at constant silane flow, or changing both the nitrogen and silane flow to change the total gas flow while keeping the gas ratio constant. Recent investigations have shown that e-beam evaporation, followed by solid-phase crystallization (SPC) if needed, can be a cost-effective and faster alternative for producing solar-grade poly-Si thin films. Modules produced by such a method are shown to have a photovoltaic efficiency of ~6%. Polysilicon doping, if needed, is also done during the deposition process, usually by adding phosphine, arsine, or diborane. Adding phosphine or arsine results in slower deposition, while adding diborane increases the deposition rate. The deposition thickness uniformity usually degrades when dopants are added during deposition. Siemens process The Siemens process is the most commonly used method of polysilicon production, especially for electronics, with close to 75% of the world's production using this process as of 2005. The process converts metallurgical-grade Si, of approximately 98% purity, to SiHCl3 and then to silicon in a reactor, thus removing transition metal and dopant impurities. The process is relatively expensive and slow. It is a type of chemical vapor deposition process. Upgraded metallurgical-grade silicon Upgraded metallurgical-grade (UMG) silicon (also known as UMG-Si) for solar cells is being produced as a low cost alternative to polysilicon created by the Siemens process. UMG-Si greatly reduces impurities in a variety of ways that require less equipment and energy than the Siemens process. It is about 99% pure, which is three or more orders of magnitude less pure and about 10 times less expensive than polysilicon ($1.70 to $3.20 per kg from 2005 to 2008 compared to $40 to $400 per kg for polysilicon). It has the potential to provide nearly-as-good solar cell efficiency at 1/5 the capital expenditure, half the energy requirements, and less than $15/kg. In 2008 several companies were touting the potential of UMG-Si, but in 2010 the credit crisis greatly lowered the cost of polysilicon and several UMG-Si producers put plans on hold. The Siemens process will remain the dominant form of production for years to come, owing to increasingly efficient implementations of the Siemens process itself. GT Solar claims a new Siemens process can produce at $27/kg and may reach $20/kg in 5 years. GCL-Poly expects production costs to be $20/kg by end of 2011. Elkem Solar estimates their UMG costs to be $25/kg, with a capacity of 6,000 tonnes by the end of 2010. Calisolar expects UMG technology to produce at $12/kg in 5 years with boron at 0.3 ppm and phosphorus at 0.6 ppm. At $50/kg and 7.5 g/W, module manufacturers spend $0.37/W for the polysilicon. For comparison, if a CdTe manufacturer pays spot price for tellurium ($420/kg in April 2010) and has a 3 μm thickness, their cost would be 10 times less, $0.037/Watt. At 0.1 g/W and $31/ozt for silver, polysilicon solar producers spend $0.10/W on silver. Q-Cells, Canadian Solar, and Calisolar have used Timminco UMG. Timminco is able to produce UMG-Si with 0.5 ppm boron for $21/kg but was sued by shareholders because they had expected $10/kg. RSI and Dow Corning have also been in litigation over UMG-Si technology.
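The per-watt material cost figures quoted above follow from a simple multiplication; a minimal sketch (the prices and specific-consumption values are the article's 2010-era illustrative numbers, not current ones):

```python
TROY_OUNCE_G = 31.1034768  # grams per troy ounce

def material_cost_per_watt(price_usd_per_kg, grams_per_watt):
    """Feedstock cost in $/W from price ($/kg) and specific consumption (g/W)."""
    return price_usd_per_kg * grams_per_watt / 1000.0

print(material_cost_per_watt(50.0, 7.5))                        # polysilicon: ~0.375 $/W
print(material_cost_per_watt(31.0 / TROY_OUNCE_G * 1000, 0.1))  # silver at $31/ozt: ~0.0997 $/W
```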
Potential applications Currently, polysilicon is commonly used for the conducting gate materials in semiconductor devices such as MOSFETs; however, it has potential for large-scale photovoltaic devices. The abundance, stability, and low toxicity of silicon, combined with the low cost of polysilicon relative to single crystals, makes this variety of material attractive for photovoltaic production. Grain size has been shown to have an effect on the efficiency of polycrystalline solar cells. Solar cell efficiency increases with grain size. This effect is due to reduced recombination in the solar cell. Recombination, which is a limiting factor for current in a solar cell, occurs more prevalently at grain boundaries. The resistivity, mobility, and free-carrier concentration in monocrystalline silicon vary with the doping concentration of the single crystal silicon. The doping of polycrystalline silicon does have an effect on the resistivity, mobility, and free-carrier concentration, but these properties strongly depend on the polycrystalline grain size, which is a physical parameter the materials scientist can manipulate. Through the methods of crystallization used to form polycrystalline silicon, an engineer can control the size of the polycrystalline grains, which will vary the physical properties of the material. Novel ideas The use of polycrystalline silicon in the production of solar cells requires less material and therefore provides higher profits and increased manufacturing throughput. Polycrystalline silicon does not need to be deposited on a silicon wafer to form a solar cell; rather, it can be deposited on other, cheaper materials, thus reducing the cost. Not requiring a silicon wafer alleviates the silicon shortages occasionally faced by the microelectronics industry. An example of not using a silicon wafer is crystalline silicon on glass (CSG) materials. A primary concern in the photovoltaics industry is cell efficiency. However, sufficient cost savings from cell manufacturing can offset reduced efficiency in the field, for example through the use of larger solar cell arrays compared with more compact/higher efficiency designs. Designs such as CSG are attractive because of a low cost of production even with reduced efficiency. Higher efficiency devices yield modules that occupy less space and are more compact; however, the 5–10% efficiency of typical CSG devices still makes them attractive for installation in large central-service stations, such as a power station. The issue of efficiency versus cost is a value decision between an "energy dense" solar cell and less expensive alternatives where sufficient area is available for their installation. For instance, power generation in a remote location might require a more highly efficient solar cell than low-power applications such as solar accent lighting or pocket calculators, or installations near established power grids. Manufacturers Capacity The polysilicon manufacturing market is growing rapidly. According to DigiTimes, in July 2011, the total polysilicon production in 2010 was 209,000 tons. First-tier suppliers account for 64% of the market while China-based polysilicon firms have 30% of market share. The total production is likely to increase 37.4% to 281,000 tons by the end of 2011. For 2012, EETimes Asia predicts 328,000 tons production with only 196,000 tons of demand, with spot prices expected to fall 56%.
While good for renewable energy prospects, the subsequent drop in price could be brutal for manufacturers. As of late 2012, SolarIndustryMag reported that a capacity of 385,000 tons would be reached by year-end 2012. As of 2010, as established producers (mentioned below) expand their capacities, additional newcomers, many from Asia, are moving into the market. Even long-time players in the field have recently had difficulties expanding plant production. It is yet unclear which companies will be able to produce at costs low enough to be profitable after the steep drop in spot prices of the last months. Leading producers Wacker projected its total hyperpure-polysilicon production capacity to increase to 67,000 metric tons by 2014, due to its new polysilicon-production facility in Cleveland, Tennessee (US) with an annual capacity of 15,000 metric tons. Other manufacturers LDK Solar (2010: 15 kt) China. Tokuyama Corporation (2009: 8 kt, Jan 2013: 11 kt, 2015: 31 kt) Japan. MEMC/SunEdison (2010: 8 kt, Jan 2013: 18 kt) USA. Hankook Silicon (2011: 3.2 kt, 2013: 14.5 kt) Nitol Solar, (2011: 5 kt, Jan 2011), Russia Mitsubishi Polysilicon (2008: 4.3 kt) Osaka Titanium Technologies (2008: 4.2 kt) Daqo New Energy, (2011: 4.3 kt, under construction 3 kt), China Beijing Lier High-temperature Materials Co. (2012: 5 kt) Qatar Solar Technologies, at Ras Laffan, announced an 8 kt facility for start in 2013. Price Prices of polysilicon are often divided into two categories, contract and spot prices, and higher purity commands higher prices. In times of booming installation, polysilicon prices rally. Not only do spot prices surpass contract prices in the market; it is also hard to acquire enough polysilicon. Buyers will accept down payments and long-term agreements to acquire a large enough volume of polysilicon. Conversely, spot prices will fall below contract prices once solar PV installation is in a downtrend. In late 2010, booming installation brought up the spot prices of polysilicon. In the first half of 2011, prices of polysilicon remained strong owing to the feed-in tariff (FIT) policies of Italy. The solar PV price survey and market research firm, PVinsights, reported that the prices of polysilicon might be dragged down by lack of installation in the second half of 2011. As recently as 2008, prices spiked from levels around $200/kg to over $400/kg, before falling to $15/kg in 2013. Dumping The Chinese government accused United States and South Korean manufacturers of predatory pricing or "dumping". As a consequence, in 2013 it imposed import tariffs of as much as 57 percent on polysilicon shipped from these two countries in order to stop the product from being sold below cost. Waste Due to the rapid growth in manufacturing in China and the lack of regulatory controls, there have been reports of the dumping of waste silicon tetrachloride. Normally the waste silicon tetrachloride is recycled, but this adds to the cost of manufacture as it needs to be heated to . See also Amorphous silicon Cadmium telluride Metallurgical grade silicon Nanocrystalline silicon Photovoltaic module Photovoltaics Polycrystal Solar cell Thin-film solar cell Wafer (electronics) References External links Silicon, Polycrystalline Crystals Silicon solar cells Allotropes of silicon
Polycrystalline silicon
[ "Chemistry", "Materials_science" ]
4,509
[ "Allotropes", "Semiconductor materials", "Group IV semiconductors", "Allotropes of silicon", "Crystallography", "Crystals" ]
31,329,979
https://en.wikipedia.org/wiki/Conductivity%20near%20the%20percolation%20threshold
Conductivity near the percolation threshold, in physics, occurs in a mixture between a dielectric and a metallic component. The conductivity and the dielectric constant of this mixture show a critical behavior if the fraction of the metallic component reaches the percolation threshold. The behavior of the conductivity near this percolation threshold shows a smooth changeover from the conductivity of the dielectric component to the conductivity of the metallic component. This behavior can be described using two critical exponents "s" and "t", whereas the dielectric constant diverges if the threshold is approached from either side. To include the frequency-dependent behavior in electronic components, a resistor-capacitor model (R-C model) is used. Geometrical percolation For describing such a mixture of a dielectric and a metallic component we use the model of bond percolation. On a regular lattice, the bond between two nearest neighbors can either be occupied with probability p or not occupied with probability 1 − p. There exists a critical value p_c. For occupation probabilities p > p_c, an infinite cluster of the occupied bonds is formed. This value is called the percolation threshold. The region near this percolation threshold can be described by the two critical exponents ν and β (see Percolation critical exponents). With these critical exponents we have the correlation length ξ ∝ (p − p_c)^(−ν) and the percolation probability P ∝ (p − p_c)^β. Electrical percolation For the description of the electrical percolation, we identify the occupied bonds of the bond-percolation model with the metallic component having a conductivity σ_m, while the dielectric component with conductivity σ_d corresponds to non-occupied bonds. We consider the two following well-known cases of a conductor-insulator mixture and a superconductor–conductor mixture. Conductor-insulator mixture In the case of a conductor-insulator mixture we have σ_d = 0. This case describes the behaviour if the percolation threshold is approached from above: σ ∝ σ_m (p − p_c)^t for p > p_c. Below the percolation threshold we have no conductivity, because of the perfect insulator and just finite metallic clusters. The exponent t is one of the two critical exponents for electrical percolation. Superconductor–conductor mixture In the other well-known case of a superconductor-conductor mixture we have σ_m = ∞. This case is useful for the description below the percolation threshold: σ ∝ σ_d (p_c − p)^(−s) for p < p_c. Now, above the percolation threshold the conductivity becomes infinite, because of the infinite superconducting clusters. And also we get the second critical exponent s for the electrical percolation. Conductivity near the percolation threshold In the region around the percolation threshold, the conductivity assumes a scaling form: σ(p) ∝ σ_m |Δp|^t Φ±(h |Δp|^(−(t+s))), with Δp = p − p_c and h = σ_d/σ_m. At the percolation threshold, the conductivity reaches the value σ(p_c) ∝ σ_m (σ_d/σ_m)^u, with u = t/(t + s). Values for the critical exponents In different sources there exist somewhat different values for the critical exponents s, t and u in 3 dimensions. Dielectric constant The dielectric constant also shows a critical behavior near the percolation threshold. For the real part of the dielectric constant we have Re ε ∝ |p − p_c|^(−s). The R-C model Within the R-C model, the bonds in the percolation model are represented by pure resistors with conductivity σ_m = 1/R for the occupied bonds and by perfect capacitors with conductivity σ_d = iωC (where ω represents the angular frequency) for the non-occupied bonds.
Now the scaling law takes the form σ(p, ω) ∝ σ_m |Δp|^t Φ±(iωτ*), with τ* ∝ |Δp|^(−(t+s)). This scaling law contains a purely imaginary scaling variable, iωτ*, and a critical time scale τ* which diverges if the percolation threshold is approached from above as well as from below. Conductivity for dense networks For a dense network, the concepts of percolation are not directly applicable and the effective resistance is calculated in terms of geometrical properties of the network. Assuming edge length << electrode spacing and edges to be uniformly distributed, the potential can be considered to drop uniformly from one electrode to another. Sheet resistance of such a random network can be written in terms of the edge (wire) density, the resistivity, and the width and thickness of the edges (wires). See also Percolation theory References Critical phenomena
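A small numeric sketch of the DC power laws reconstructed above; the threshold used is the simple-cubic bond value p_c ≈ 0.2488, and t ≈ 2.0, s ≈ 0.73 are typical 3-dimensional exponents (prefactors are set to 1, so only the scaling is meaningful):

```python
P_C = 0.2488  # bond percolation threshold of the simple cubic lattice

def conductivity(p, sigma_m=1.0, sigma_d=0.0, t=2.0, s=0.73):
    """Leading-order conductivity of a metal-dielectric mixture near p_c.

    sigma_d = 0 gives the conductor-insulator case; a small nonzero
    sigma_d gives the divergent branch below the threshold."""
    if p > P_C:
        return sigma_m * (p - P_C) ** t        # sigma ~ sigma_m (p - p_c)^t
    if sigma_d > 0.0:
        return sigma_d * (P_C - p) ** (-s)     # sigma ~ sigma_d (p_c - p)^(-s)
    return 0.0                                 # perfect insulator below p_c

for p in (0.20, 0.24, 0.26, 0.30):
    print(f"p = {p:.2f}: sigma ~ {conductivity(p, sigma_d=1e-6):.3g}")
```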
Conductivity near the percolation threshold
[ "Physics", "Materials_science", "Mathematics" ]
853
[ "Physical phenomena", "Critical phenomena", "Condensed matter physics", "Statistical mechanics", "Dynamical systems" ]
1,988,996
https://en.wikipedia.org/wiki/Narrow-gap%20semiconductor
Narrow-gap semiconductors are semiconducting materials with a magnitude of bandgap that is smaller than 0.5 eV, which corresponds to an infrared absorption cut-off wavelength over 2.5 micron. A more extended definition includes all semiconductors with bandgaps smaller than silicon (1.1 eV). Modern terahertz, infrared, and thermographic technologies are all based on this class of semiconductors. Narrow-gap materials made it possible to realize satellite remote sensing, photonic integrated circuits for telecommunications, and unmanned vehicle Li-Fi systems, in the regime of infrared detectors and infrared vision. They are also the materials basis for terahertz technology, including security screening for concealed weapons, safe medical and industrial imaging with terahertz tomography, as well as dielectric wakefield accelerators. In addition, thermophotovoltaics embedded with narrow-gap semiconductors can potentially use the traditionally wasted portion of solar energy, which makes up ~49% of the sunlight spectrum. Spacecraft, deep ocean instruments, and vacuum physics setups use narrow-gap semiconductors to achieve cryogenic cooling. List of narrow-gap semiconductors {| class="wikitable" |- !Name !Chemical formula !Groups !Band gap (300 K) |- | Mercury cadmium telluride | Hg1−xCdxTe | II-VI | 0 to 1.5 eV |- | Mercury zinc telluride | Hg1−xZnxTe | II-VI | 0.15 to 2.25 eV |- | Lead selenide | PbSe | IV-VI | 0.27 eV |- | Lead(II) sulfide | PbS | IV-VI | 0.37 eV |- | Tellurium | Te | VI | ~0.3 eV |- | Lead telluride | PbTe | IV-VI | 0.32 eV |- | Magnetite | Fe3O4 | Transition Metal-VI | 0.14 eV |- | Indium arsenide | InAs | III-V | 0.354 eV |- | Indium antimonide | InSb | III-V | 0.17 eV |- | Germanium | Ge | IV | 0.67 eV |- |Gallium antimonide | GaSb | III-V | 0.67 eV |- | Cadmium arsenide | Cd3As2 | II-V | 0.5 to 0.6 eV |- | Bismuth telluride | Bi2Te3 | | 0.21 eV |- | Tin telluride | SnTe | IV-VI | 0.18 eV |- | Tin selenide | SnSe | IV-VI | 0.9 eV |- | Silver(I) selenide | Ag2Se | | 0.07 eV |- |Magnesium silicide |Mg2Si |II-IV |0.79 eV |} See also List of semiconductor materials Wide-bandgap semiconductor References Further reading Dornhaus, R., Nimtz, G., Schlicht, B. (1983). Narrow-Gap Semiconductors. Springer Tracts in Modern Physics 98. Semiconductor material types
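The bandgap-to-wavelength correspondence used in the opening sentence follows from λ_c = hc/E_g, i.e. λ_c[μm] ≈ 1.24/E_g[eV]; a minimal sketch applying it to a few entries from the table above:

```python
HC_EV_UM = 1.23984  # Planck constant times speed of light, expressed in eV·um

def cutoff_wavelength_um(bandgap_ev):
    """Infrared absorption cut-off wavelength (um) for a bandgap in eV."""
    return HC_EV_UM / bandgap_ev

for name, gap_ev in [("InSb", 0.17), ("PbSe", 0.27), ("Ge", 0.67)]:
    print(f"{name}: {gap_ev} eV -> ~{cutoff_wavelength_um(gap_ev):.1f} um cutoff")
```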
Narrow-gap semiconductor
[ "Physics", "Chemistry", "Materials_science" ]
682
[ "Materials science stubs", "Semiconductor materials", "Condensed matter physics", "Semiconductor material types", "Condensed matter stubs" ]
1,989,515
https://en.wikipedia.org/wiki/Breit%20equation
The Breit equation, or Dirac–Coulomb–Breit equation, is a relativistic wave equation derived by Gregory Breit in 1929 based on the Dirac equation, which formally describes two or more massive spin-1/2 particles (electrons, for example) interacting electromagnetically to the first order in perturbation theory. It accounts for magnetic interactions and retardation effects to the order of 1/c2. When other quantum electrodynamic effects are negligible, this equation has been shown to give results in good agreement with experiment. It was originally derived from the Darwin Lagrangian but later vindicated by the Wheeler–Feynman absorber theory and eventually quantum electrodynamics. Introduction The Breit equation is not only an approximation in terms of quantum mechanics, but also in terms of relativity theory as it is not completely invariant with respect to the Lorentz transformation. Just as does the Dirac equation, it treats nuclei as point sources of an external field for the particles it describes. For N particles, the Breit equation has the form (r_ij is the distance between particles i and j): (Σ_i Ĥ_D(i) + Σ_{i>j} 1/r_ij − Σ_{i>j} B̂_ij) Ψ = E Ψ, where Ĥ_D(i) is the Dirac Hamiltonian (see Dirac equation) for particle i at position r_i, and φ(r_i) is the scalar potential at that position; q_i is the charge of the particle, thus q_i = −e for electrons. The one-electron Dirac Hamiltonians of the particles, along with their instantaneous Coulomb interactions 1/r_ij, form the Dirac–Coulomb operator. To this, Breit added the operator B̂_ij (now known as the (frequency-independent) Breit operator), which is built from the Dirac matrices α(i) of the electrons; its standard form is reconstructed below. The two terms in the Breit operator account for retardation effects to the first order. The wave function in the Breit equation is a spinor with 4^N elements, since each electron is described by a Dirac bispinor with 4 elements as in the Dirac equation, and the total wave function is the tensor product of these. Breit Hamiltonians The total Hamiltonian of the Breit equation, sometimes called the Dirac–Coulomb–Breit Hamiltonian (Ĥ_DCB), can be decomposed into the following practical energy operators for electrons in electric and magnetic fields (also called the Breit–Pauli Hamiltonian), which have well-defined meanings in the interaction of molecules with magnetic fields (for instance for nuclear magnetic resonance), Ĥ_DCB = Ĥ_0 + Ĥ_1 + ... + Ĥ_6, in which the consecutive partial operators are: Ĥ_0 is the nonrelativistic Hamiltonian (m_i is the stationary mass of particle i). Ĥ_1 is connected to the dependence of mass on velocity. Ĥ_2 is a correction that partly accounts for retardation and can be described as the interaction between the magnetic dipole moments of the particles, which arise from the orbital motion of charges (also called orbit–orbit interaction). Ĥ_3 is the classical interaction between the orbital magnetic moments (from the orbital motion of charge) and spin magnetic moments (also called spin–orbit interaction). The first term describes the interaction of a particle's spin with its own orbital moment (F(r_i) is the electric field at the particle's position), and the second term between two different particles. Ĥ_4 is a nonclassical term characteristic for Dirac theory, sometimes called the Darwin term. Ĥ_5 is the magnetic moment spin–spin interaction. The first term is called the contact interaction, because it is nonzero only when the particles are at the same position; the second term is the interaction of the classical dipole–dipole type. Ĥ_6 is the interaction between spin and orbital magnetic moments with an external magnetic field H (μ_B is the Bohr magneton).
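The equations themselves did not survive extraction; as a point of reference, the standard textbook form of the N-particle Breit equation and of the frequency-independent Breit operator is reconstructed below (in Gaussian units; this is a reconstruction to be checked against the original source, not the article's lost markup):

```latex
% N-particle Breit equation: one-particle Dirac terms, instantaneous
% Coulomb terms, and the pairwise Breit correction \hat{B}_{ij}.
\left( \sum_{i=1}^{N} \hat{H}_{D}(i)
     + \sum_{i>j} \frac{q_i q_j}{r_{ij}}
     - \sum_{i>j} \hat{B}_{ij} \right) \Psi = E\,\Psi ,
\qquad
% Frequency-independent Breit operator: the first term is the magnetic
% (Gaunt) interaction, the second the leading retardation correction.
\hat{B}_{ij} = -\frac{q_i q_j}{2 r_{ij}}
  \left[ \boldsymbol{\alpha}_i \cdot \boldsymbol{\alpha}_j
       + \frac{(\boldsymbol{\alpha}_i \cdot \mathbf{r}_{ij})
               (\boldsymbol{\alpha}_j \cdot \mathbf{r}_{ij})}{r_{ij}^{2}} \right] .
```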
See also Bethe–Salpeter equation Darwin Lagrangian Two-body Dirac equations Positronium Wheeler–Feynman absorber theory References External links Tensor form of the Breit equation, Institute of Theoretical Physics, Warsaw University. Solving Nonperturbatively the Breit equation for Parapositronium, Institute of Theoretical Physics, Warsaw University. Eponymous equations of physics Quantum mechanics
Breit equation
[ "Physics" ]
817
[ "Quantum mechanics", "Theoretical physics", "Eponymous equations of physics", "Equations of physics" ]
1,989,599
https://en.wikipedia.org/wiki/Euclidean%20plane%20isometry
In geometry, a Euclidean plane isometry is an isometry of the Euclidean plane, or more informally, a way of transforming the plane that preserves geometrical properties such as length. There are four types: translations, rotations, reflections, and glide reflections (see below). The set of Euclidean plane isometries forms a group under composition: the Euclidean group in two dimensions. It is generated by reflections in lines, and every element of the Euclidean group is the composite of at most three distinct reflections. Informal discussion Informally, a Euclidean plane isometry is any way of transforming the plane without "deforming" it. For example, suppose that the Euclidean plane is represented by a sheet of transparent plastic sitting on a desk. Examples of isometries include: Shifting the sheet one inch to the right. Rotating the sheet by ten degrees around some marked point (which remains motionless). Turning the sheet over to look at it from behind. Notice that if a picture is drawn on one side of the sheet, then after turning the sheet over, we see the mirror image of the picture. These are examples of translations, rotations, and reflections respectively. There is one further type of isometry, called a glide reflection (see below under classification of Euclidean plane isometries). However, folding, cutting, or melting the sheet are not considered isometries. Neither are less drastic alterations like bending, stretching, or twisting. Formal definition An isometry of the Euclidean plane is a distance-preserving transformation of the plane. That is, it is a map f: R2 → R2 such that for any points p and q in the plane, d(f(p), f(q)) = d(p, q), where d(p, q) is the usual Euclidean distance between p and q. Classification It can be shown that there are four types of Euclidean plane isometries. (Note: the notations for the types of isometries listed below are not completely standardised.) Reflections Reflections, or mirror isometries, denoted by Fc,v, where c is a point in the plane and v is a unit vector in R2 (F is for "flip"), have the effect of reflecting the point p in the line L that is perpendicular to v and that passes through c. The line L is called the reflection axis or the associated mirror. To find a formula for Fc,v, we first use the dot product to find the component t of p − c in the v direction, t = (p − c) · v, and then we obtain the reflection of p by subtraction, Fc,v(p) = p − 2tv. The combination of rotations about the origin and reflections about a line through the origin is obtained with all orthogonal matrices (i.e. with determinant 1 and −1) forming the orthogonal group O(2). In the case of a determinant of −1 we have the matrix [cos θ, sin θ; sin θ, −cos θ], which is a reflection in the x-axis followed by a rotation by an angle θ, or equivalently, a reflection in a line making an angle of θ/2 with the x-axis. Reflection in a parallel line corresponds to adding a vector perpendicular to it. Translations Translations, denoted by Tv, where v is a vector in R2, have the effect of shifting the plane in the direction of v. That is, for any point p in the plane, Tv(p) = p + v, or in terms of (x, y) coordinates, Tv(x, y) = (x + v1, y + v2), where v = (v1, v2). A translation can be seen as a composite of two parallel reflections. Rotations Rotations, denoted by Rc,θ, where c is a point in the plane (the centre of rotation), and θ is the angle of rotation. In terms of coordinates, rotations are most easily expressed by breaking them up into two operations. First, a rotation around the origin is given by R0,θ(x, y) = (x cos θ − y sin θ, x sin θ + y cos θ). These matrices, of the form [cos θ, −sin θ; sin θ, cos θ], are the orthogonal matrices (i.e. each is a square matrix G whose transpose is its inverse, i.e. G^T G = I
), with determinant 1 (the other possibility for orthogonal matrices is −1, which gives a mirror image, see below). They form the special orthogonal group SO(2). A rotation around c can be accomplished by first translating c to the origin, then performing the rotation around the origin, and finally translating the origin back to c. That is, Rc,θ = Tc ∘ R0,θ ∘ T−c, or in other words, Rc,θ(p) = c + R0,θ(p − c). Alternatively, a rotation around the origin is performed, followed by a translation: Rc,θ(p) = R0,θ(p) + (c − R0,θ(c)). A rotation can be seen as a composite of two non-parallel reflections. Rigid transformations The set of translations and rotations together form the rigid motions or rigid displacements. This set forms a group under composition, the group of rigid motions, a subgroup of the full group of Euclidean isometries. Glide reflections Glide reflections, denoted by Gc,v,w, where c is a point in the plane, v is a unit vector in R2, and w is a non-null vector perpendicular to v, are a combination of a reflection in the line described by c and v, followed by a translation along w. That is, Gc,v,w = Tw ∘ Fc,v, or in other words, Gc,v,w(p) = Fc,v(p) + w. (It is also true that Gc,v,w = Fc,v ∘ Tw; that is, we obtain the same result if we do the translation and the reflection in the opposite order.) Alternatively we multiply by an orthogonal matrix with determinant −1 (corresponding to a reflection in a line through the origin), followed by a translation. This is a glide reflection, except in the special case that the translation is perpendicular to the line of reflection, in which case the combination is itself just a reflection in a parallel line. The identity isometry, defined by I(p) = p for all points p, is a special case of a translation, and also a special case of a rotation. It is the only isometry which belongs to more than one of the types described above. In all cases we multiply the position vector by an orthogonal matrix and add a vector; if the determinant is 1 we have a rotation, a translation, or the identity, and if it is −1 we have a glide reflection or a reflection. A "random" isometry, like taking a sheet of paper from a table and randomly laying it back, "almost surely" is a rotation or a glide reflection (they have three degrees of freedom). This applies regardless of the details of the probability distribution, as long as θ and the direction of the added vector are independent and uniformly distributed and the length of the added vector has a continuous distribution. A pure translation and a pure reflection are special cases with only two degrees of freedom, while the identity is even more special, with no degrees of freedom. Isometries as reflection group Reflections, or mirror isometries, can be combined to produce any isometry. Thus isometries are an example of a reflection group. Mirror combinations In the Euclidean plane, we have the following possibilities. [d  ] Identity Two reflections in the same mirror restore each point to its original position. All points are left fixed. Any pair of identical mirrors has the same effect. [db] Reflection As Alice found through the looking-glass, a single mirror causes left and right hands to switch. (In formal terms, topological orientation is reversed.) Points on the mirror are left fixed. Each mirror has a unique effect. [dp] Rotation Two distinct intersecting mirrors have a single point in common, which remains fixed. All other points rotate around it by twice the angle between the mirrors. Any two mirrors with the same fixed point and same angle give the same rotation, so long as they are used in the correct order. 
[dd] Translation Two distinct mirrors that do not intersect must be parallel. Every point moves the same amount, twice the distance between the mirrors, and in the same direction. No points are left fixed. Any two mirrors with the same parallel direction and the same distance apart give the same translation, so long as they are used in the correct order. [dq] Glide reflection Three mirrors. If they are all parallel, the effect is the same as a single mirror (slide a pair to cancel the third). Otherwise we can find an equivalent arrangement where two are parallel and the third is perpendicular to them. The effect is a reflection combined with a translation parallel to the mirror. No points are left fixed. Three mirrors suffice Adding more mirrors does not add more possibilities (in the plane), because they can always be rearranged to cause cancellation. Recognition We can recognize which of these isometries we have according to whether it preserves hands or swaps them, and whether it has at least one fixed point or not, as the following summary shows (omitting the identity): preserves hands with a fixed point: rotation; preserves hands with no fixed point: translation; swaps hands with a fixed point: reflection; swaps hands with no fixed point: glide reflection. (A computational sketch of this classification follows the nested group construction below.) Group structure Isometries requiring an odd number of mirrors — reflection and glide reflection — always reverse left and right. The even isometries — identity, rotation, and translation — never do; they correspond to rigid motions, and form a normal subgroup of the full Euclidean group of isometries. Neither the full group nor the even subgroup are abelian; for example, reversing the order of composition of two parallel mirrors reverses the direction of the translation they produce. Since the even subgroup is normal, it is the kernel of a homomorphism to a quotient group, where the quotient is isomorphic to a group consisting of a reflection and the identity. However the full group is not a direct product, but only a semidirect product, of the even subgroup and the quotient group. Composition Composition of isometries mixes kinds in assorted ways. We can think of the identity as either two mirrors or none; either way, it has no effect in composition. And two reflections give either a translation or a rotation, or the identity (which is both, in a trivial way). Reflection composed with either of these could cancel down to a single reflection; otherwise it gives the only available three-mirror isometry, a glide reflection. A pair of translations always reduces to a single translation; so the challenging cases involve rotations. We know a rotation composed with either a rotation or a translation must produce an even isometry. Composition with translation produces another rotation (by the same amount, with shifted fixed point), but composition with rotation can yield either translation or rotation. It is often said that composition of two rotations produces a rotation, and Euler proved a theorem to that effect in 3D; however, this is only true for rotations sharing a fixed point. Translation, rotation, and orthogonal subgroups We thus have two new kinds of isometry subgroups: all translations, and rotations sharing a fixed point. Both are subgroups of the even subgroup, within which translations are normal. Because translations are a normal subgroup, we can factor them out leaving the subgroup of isometries with a fixed point, the orthogonal group. Nested group construction The subgroup structure suggests another way to compose an arbitrary isometry: Pick a fixed point, and a mirror through it. If the isometry is odd, use the mirror; otherwise do not. If necessary, rotate around the fixed point. If necessary, translate. 
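To make the mirror-combination and recognition rules concrete, here is a short Python sketch (an illustration with invented helper names, not an established library API) that represents an isometry as an orthogonal-matrix-plus-vector pair, composes reflections, and classifies the result by its determinant and translation component, matching the summary in the Recognition section.

import numpy as np

def reflection(c, v):
    # Mirror through point c, perpendicular to unit vector v:
    # F(p) = p - 2*((p - c).v)*v = (I - 2 v v^T) p + 2 (c.v) v
    v = np.asarray(v, float) / np.linalg.norm(v)
    return np.eye(2) - 2 * np.outer(v, v), 2 * np.dot(c, v) * v

def compose(f, g):
    # Apply g first, then f: p -> A_f (A_g p + t_g) + t_f
    (Af, tf), (Ag, tg) = f, g
    return Af @ Ag, Af @ tg + tf

def classify(iso, eps=1e-9):
    A, t = iso
    if np.linalg.det(A) > 0:                      # preserves hands
        if np.allclose(A, np.eye(2), atol=eps):
            return "identity" if np.allclose(t, 0, atol=eps) else "translation"
        return "rotation"                         # has a unique fixed point
    w, V = np.linalg.eigh(A)                      # swaps hands; A is symmetric here
    axis = V[:, np.argmax(w)]                     # mirror direction (+1 eigenvector)
    return "reflection" if abs(np.dot(t, axis)) < eps else "glide reflection"

m1 = reflection([0, 0], [1, 0])                      # mirror: the y-axis
m2 = reflection([0, 0], [np.cos(0.3), np.sin(0.3)])  # mirror through origin, tilted
m3 = reflection([1, 0], [1, 0])                      # parallel to m1, shifted
print(classify(compose(m2, m1)))                 # rotation: intersecting mirrors
print(classify(compose(m3, m1)))                 # translation: parallel mirrors
print(classify(compose(m3, compose(m2, m1))))    # glide reflection: three mirrors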
The nested construction works because translations are a normal subgroup of the full group of isometries, with quotient the orthogonal group; and rotations about a fixed point are a normal subgroup of the orthogonal group, with quotient a single reflection. Discrete subgroups The subgroups discussed so far are not only infinite, they are also continuous (Lie groups). Any subgroup containing at least one non-zero translation must be infinite, but subgroups of the orthogonal group can be finite. For example, the symmetries of a regular pentagon consist of rotations by integer multiples of 72° (360° / 5), along with reflections in the five mirrors which perpendicularly bisect the edges. This is a group, D5, with 10 elements. It has a subgroup, C5, of half the size, omitting the reflections. These two groups are members of two families, Dn and Cn, for any n > 1. Together, these families constitute the rosette groups. Translations do not fold back on themselves, but we can take integer multiples of any finite translation, or sums of multiples of two such independent translations, as a subgroup. These generate the lattice of a periodic tiling of the plane. We can also combine these two kinds of discrete groups — the discrete rotations and reflections around a fixed point and the discrete translations — to generate the frieze groups and wallpaper groups. Curiously, only a few of the fixed-point groups are found to be compatible with discrete translations. In fact, lattice compatibility imposes such a severe restriction that, up to isomorphism, we have only 7 distinct frieze groups and 17 distinct wallpaper groups. For example, the pentagon symmetries, D5, are incompatible with a discrete lattice of translations. (Each higher dimension also has only a finite number of such crystallographic groups, but the number grows rapidly; for example, 3D has 230 groups and 4D has 4783.) Isometries in the complex plane In terms of complex numbers, the isometries of the plane are either of the form f(z) = ωz + a or of the form f(z) = ωz̄ + a, for some complex numbers a and ω with |ω| = 1. This is easy to prove: if a = f(0) and ω = f(1) − f(0), and if one defines g(z) = (f(z) − a)/ω, then g is an isometry with g(0) = 0 and g(1) = 1. It is then easy to see that g is either the identity or the conjugation, and the statement being proved follows from this and from the fact that f(z) = a + ωg(z). This is obviously related to the previous classification of plane isometries, since: functions of the type z ↦ z + a are translations; functions of the type z ↦ ωz are rotations (when |ω| = 1); the conjugation z ↦ z̄ is a reflection. Note that a rotation about complex point p is obtained by complex arithmetic with f(z) = ω(z − p) + p = ωz + p(1 − ω), where the last expression shows the mapping equivalent to rotation at 0 and a translation. Therefore, given direct isometry f(z) = ωz + a, one can solve p(1 − ω) = a to obtain p = a/(1 − ω) as the center for an equivalent rotation, provided that ω ≠ 1, that is, provided the direct isometry is not a pure translation. As stated by Cederberg, "A direct isometry is either a rotation or a translation." See also Beckman–Quarles theorem, a characterization of isometries as the transformations that preserve unit distances Congruence (geometry) Coordinate rotations and reflections Hjelmslev's theorem, the statement that the midpoints of corresponding pairs of points in an isometry of lines are collinear References External links Plane Isometries Crystallography Euclidean plane geometry Euclidean symmetries Group theory Articles containing proofs
Euclidean plane isometry
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
2,938
[ "Functions and mappings", "Euclidean symmetries", "Euclidean plane geometry", "Mathematical objects", "Materials science", "Group theory", "Crystallography", "Fields of abstract algebra", "Mathematical relations", "Condensed matter physics", "Articles containing proofs", "Planes (geometry)", ...
1,990,571
https://en.wikipedia.org/wiki/Millennium%20Run
The Millennium Run, or Millennium Simulation (referring to its size) is a computer N-body simulation used to investigate how the distribution of matter in the Universe has evolved over time, in particular, how the observed population of galaxies was formed. It is used by scientists working in physical cosmology to compare observations with theoretical predictions. Overview A basic scientific method for testing theories in cosmology is to evaluate their consequences for the observable parts of the universe. One piece of observational evidence is the distribution of matter, including galaxies and intergalactic gas, which are observed today. Light emitted from more distant matter must travel longer in order to reach Earth, meaning looking at distant objects is like looking further back in time. This means the evolution in time of the matter distribution in the universe can also be observed directly. The Millennium Simulation was run in 2005 by the Virgo Consortium, an international group of astrophysicists from Germany, the United Kingdom, Canada, Japan and the United States. It starts at the epoch when the Cosmic microwave background was emitted, about 379,000 years after the universe began. The cosmic background radiation has been studied by satellite experiments, and the observed inhomogeneities in the cosmic background serve as the starting point for following the evolution of the corresponding matter distribution. Using the physical laws expected to hold in the currently known cosmologies and simplified representations of the astrophysical processes observed to affect real galaxies, the initial distribution of matter is allowed to evolve, and the simulation's predictions for formation of galaxies and black holes are recorded. Since the completion of the Millennium Run simulation in 2005, a series of ever more sophisticated and higher fidelity simulations of the formation of the galaxy population have been built within its stored output and have been made publicly available over the internet. In addition to improving the treatment of the astrophysics of galaxy formation, recent versions have adjusted the parameters of the underlying cosmological model to reflect changing ideas about their precise values. To date (mid-2018) more than 950 published papers have made use of data from the Millennium Run, making it, at least by this measure, the highest impact astrophysical simulation of all time. Size of the simulation For the first scientific results, published on June 2, 2005, the Millennium Simulation traced 2160³, or just over 10 billion, "particles." These are not particles in the particle physics sense – each "particle" represents approximately a billion solar masses of dark matter. The region of space simulated was a cube about 2 billion light years on a side. This volume was populated by about 20 million "galaxies". A supercomputer located in Garching, Germany executed the simulation, which used a version of the GADGET code, for more than a month. The output of the simulation needed about 25 terabytes of storage. First results The Sloan Digital Sky Survey had challenged the current understanding of cosmology by finding black hole candidates in very bright quasars at large distances. This meant that they were created much earlier than initially expected. In successfully managing to produce quasars at early times, the Millennium Simulation demonstrated that these objects do not contradict our models of the evolution of the universe. 
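As a plausibility check on the simulation's quoted numbers, the particle mass can be estimated from the box volume and the mean matter density of the universe. The short Python sketch below uses round, assumed parameter values (a matter density of about 0.25 times the critical density and H0 of roughly 73 km/s/Mpc, typical of simulations of that era); it is an order-of-magnitude illustration, not the collaboration's actual bookkeeping.

# Rough consistency check of the quoted Millennium Run particle mass.
G     = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
ly    = 9.461e15                  # metres per light-year
M_sun = 1.989e30                  # kg
H0    = 73e3 / 3.086e22           # ~73 km/s/Mpc converted to s^-1
rho_c = 3 * H0**2 / (8 * 3.14159 * G)   # critical density, kg/m^3
box   = 2e9 * ly                  # box edge (~2 billion light-years), m
M_tot = 0.25 * rho_c * box**3     # total matter mass in the box (Omega_m ~ 0.25)
m_part = M_tot / 2160**3          # mass per simulation particle
print(f"{m_part / M_sun:.1e} solar masses per particle")  # ~1e9, as stated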
Millennium II In 2009, the same group ran the 'Millennium II' simulation (MS-II) on a smaller cube (about 400 million light years on a side), with the same number of particles but with each particle representing 6.9 million solar masses. This is a rather harder numerical task since splitting the computational domain between processors becomes harder when dense clumps of matter are present. MS-II used 1.4 million CPU hours over 2048 cores (i.e. about a month) on the Power-6 computer at Garching; a simulation was also run with the same initial conditions and fewer particles to check that features in the higher-resolution run were also seen at lower resolution. Millennium XXL In 2010, the 'Millennium XXL' simulation (MXXL) was performed, this time using a much larger cube (over 13 billion light years on a side), and 6720³ particles each representing 7 billion times the mass of the Sun. The MXXL spans a cosmological volume 216 and 27,000 times the size of the Millennium and the MS-II simulation boxes, respectively. The simulation was run on JUROPA, one of the top 15 supercomputers in the world in 2010. It used more than 12,000 cores for an equivalent of 300 years of CPU time, 30 terabytes of RAM and generated more than 100 terabytes of data. Cosmologists use the MXXL simulation to study the distribution of galaxies and dark matter halos on very large scales and how the rarest and most massive structures in the universe came about. Millennium Run Observatory In 2012, the Millennium Run Observatory (MRObs) project was launched. The MRObs is a theoretical virtual observatory that integrates detailed predictions for the dark matter (from the Millennium simulations) and for the galaxies (from semi-analytical models) with a virtual telescope to synthesize artificial observations. Astrophysicists use these virtual observations to study how the predictions from the Millennium simulations compare to the real universe, to plan future observational surveys, and to calibrate the techniques used by astronomers to analyze real observations. A first set of virtual observations produced by the MRObs have been released to the astronomical community for analysis through the MRObs Web portal. The virtual universe can also be accessed through a new online tool, the MRObs browser, which allows users to interact with the Millennium Run Relational Database where the properties of millions of dark matter halos and their galaxies from the Millennium project are being stored. Upgrades to the MRObs framework, and its extension to other types of simulations, are currently being planned. See also Illustris Eris (simulation) Bolshoi cosmological simulation References Further reading External links Millennium Simulation Data Page Press release of the June 2 results (MPG) VIRGO home page Simulating the joint evolution of quasars, galaxies and their large-scale distribution The Millennium Run Observatory Page The Millennium Run Relational Database Physical cosmology Simulation Cosmological simulation
Millennium Run
[ "Physics", "Astronomy" ]
1,263
[ "Astronomical sub-disciplines", "Theoretical physics", "Astrophysics", "Computational physics", "Cosmological simulation", "Physical cosmology" ]
1,991,073
https://en.wikipedia.org/wiki/Surface%20plasmon%20resonance
Surface plasmon resonance (SPR) is a phenomenon in which electrons in a thin metal sheet become excited by light directed at the sheet at a particular angle of incidence, and then travel parallel to the sheet. For a fixed light-source wavelength and a thin metal sheet, the angle of incidence that triggers SPR is related to the refractive index of the adjacent material, and even a small change in that refractive index shifts the resonance so that SPR is no longer observed at the original angle. This makes SPR a possible technique for detecting particular substances (analytes), and SPR biosensors have been developed to detect various important biomarkers. Explanation The surface plasmon polariton is a non-radiative electromagnetic surface wave that propagates in a direction parallel to the negative permittivity/dielectric material interface. Since the wave is on the boundary of the conductor and the external medium (air, water or vacuum for example), these oscillations are very sensitive to any change of this boundary, such as the adsorption of molecules to the conducting surface. To describe the existence and properties of surface plasmon polaritons, one can choose from various models (quantum theory, Drude model, etc.). The simplest way to approach the problem is to treat each material as a homogeneous continuum, described by a frequency-dependent relative permittivity between the external medium and the surface. This quantity, hereafter referred to as the materials' "dielectric function", is the complex permittivity. In order for the terms that describe the electronic surface plasmon to exist, the real part of the dielectric constant of the conductor must be negative and its magnitude must be greater than that of the dielectric. This condition is met in the infrared-visible wavelength region for air/metal and water/metal interfaces (where the real dielectric constant of a metal is negative and that of air or water is positive). LSPRs (localized surface plasmon resonances) are collective electron charge oscillations in metallic nanoparticles that are excited by light. They exhibit enhanced near-field amplitude at the resonance wavelength. This field is highly localized at the nanoparticle and decays rapidly away from the nanoparticle/dielectric interface into the dielectric background, though far-field scattering by the particle is also enhanced by the resonance. Light intensity enhancement is a very important aspect of LSPRs and localization means the LSPR has very high spatial resolution (subwavelength), limited only by the size of nanoparticles. Because of the enhanced field amplitude, effects that depend on the amplitude, such as the magneto-optical effect, are also enhanced by LSPRs. Implementations In order to excite surface plasmon polaritons in a resonant manner, one can use electron bombardment or an incident light beam (visible and infrared are typical). The incoming beam has to match its momentum to that of the plasmon. In the case of p-polarized light (polarization occurs parallel to the plane of incidence), this is possible by passing the light through a block of glass to increase the wavenumber (and the momentum), and achieve the resonance at a given wavelength and angle. S-polarized light (polarization occurs perpendicular to the plane of incidence) cannot excite electronic surface plasmons. 
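As a numerical illustration of this momentum-matching requirement, the sketch below evaluates the standard non-magnetic surface-plasmon dispersion relation (the general form is given in the next paragraph) with textbook-style values for gold, water and glass; all material constants here are assumptions for illustration, not measured data.

import numpy as np

eps_m   = -11.6 + 1.2j    # metal (gold-like) permittivity near 633 nm, assumed
n_d     = 1.33            # outer medium (water)
n_prism = 1.515           # glass block (BK7-like)

# Non-magnetic SPP dispersion: k_spp = (w/c) * sqrt(eps_m*eps_d / (eps_m + eps_d))
eps_d = n_d**2
n_eff = np.sqrt(eps_m * eps_d / (eps_m + eps_d)).real
print(f"SPP effective index {n_eff:.3f} exceeds n_d = {n_d}")  # no direct coupling

# Inside the prism the in-plane momentum is n_prism*sin(theta); matching it
# to the surface plasmon gives the resonance angle of incidence:
theta = np.degrees(np.arcsin(n_eff / n_prism))
print(f"Kretschmann-type resonance angle ~ {theta:.1f} degrees")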
Electronic and magnetic surface plasmons obey the following dispersion relation: k(ω) = (ω/c) √(ε1ε2(ε1μ2 − ε2μ1)/(ε1² − ε2²)), where k is the wave vector, ε is the relative permittivity, and μ is the relative permeability of the material (1: the glass block, 2: the metal film), while ω is the angular frequency and c is the speed of light in vacuum. For non-magnetic media (μ1 = μ2 = 1), this reduces to the familiar form k(ω) = (ω/c) √(ε1ε2/(ε1 + ε2)). Typical metals that support surface plasmons are silver and gold, but metals such as copper, titanium or chromium have also been used. When using light to excite SP waves, there are two configurations which are well known. In the Otto configuration, the light illuminates the wall of a glass block, typically a prism, and is totally internally reflected. A thin metal film (for example gold) is positioned close enough to the prism wall so that an evanescent wave can interact with the plasma waves on the surface and hence excite the plasmons. In the Kretschmann configuration (also known as Kretschmann–Raether configuration), the metal film is evaporated onto the glass block. The light again illuminates the glass block, and an evanescent wave penetrates through the metal film. The plasmons are excited at the outer side of the film. This configuration is used in most practical applications. SPR emission When the surface plasmon wave interacts with a local particle or irregularity, such as a rough surface, part of the energy can be re-emitted as light. This emitted light can be detected behind the metal film from various directions. Analytical implementations Surface plasmon resonance can be implemented in analytical instrumentation. SPR instruments consist of a light source, an input scheme, a prism with analyte interface, a detector, and a computer. Detectors The detectors used in surface plasmon resonance convert the photons of light reflected off the metallic film into an electrical signal. A position sensing detector (PSD) or charge-coupled device (CCD) may be used to operate as detectors. Applications Surface plasmons have been used to enhance the surface sensitivity of several spectroscopic measurements including fluorescence, Raman scattering, and second-harmonic generation. In their simplest form, SPR reflectivity measurements can be used to detect molecular adsorption, such as polymers, DNA or proteins, etc. Technically, it is common to measure the angle of minimum reflection (angle of maximum absorption). This angle changes in the order of 0.1° during adsorption of thin films (of about nanometer thickness). (See also the Examples.) In other cases the change in the absorption wavelength is followed. The mechanism of detection is based on the adsorbing molecules causing changes in the local index of refraction, changing the resonance conditions of the surface plasmon waves. The same principle is exploited in the recently developed competitive platform based on loss-less dielectric multilayers (DBR), supporting surface electromagnetic waves with sharper resonances (Bloch surface waves). If the surface is patterned with different biopolymers, using adequate optics and imaging sensors (i.e. a camera), the technique can be extended to surface plasmon resonance imaging (SPRI). This method provides a high contrast of the images based on the adsorbed amount of molecules, somewhat similar to Brewster angle microscopy (this latter is most commonly used together with a Langmuir–Blodgett trough). For nanoparticles, localized surface plasmon oscillations can give rise to the intense colors of suspensions or sols containing the nanoparticles. 
Nanoparticles or nanowires of noble metals exhibit strong absorption bands in the ultraviolet–visible light regime that are not present in the bulk metal. This extraordinary absorption increase has been exploited to increase light absorption in photovoltaic cells by depositing metal nanoparticles on the cell surface. The energy (color) of this absorption differs when the light is polarized along or perpendicular to the nanowire. Shifts in this resonance due to changes in the local index of refraction upon adsorption to the nanoparticles can also be used to detect biopolymers such as DNA or proteins. Related complementary techniques include plasmon waveguide resonance, QCM, extraordinary optical transmission, and dual-polarization interferometry. SPR immunoassay The first SPR immunoassay was proposed in 1983 by Liedberg, Nylander, and Lundström, then of the Linköping Institute of Technology (Sweden). They adsorbed human IgG onto a 600-Ångström silver film, and used the assay to detect anti-human IgG in water solution. Unlike many other immunoassays, such as ELISA, an SPR immunoassay is label-free in that a label molecule is not required for detection of the analyte. Additionally, the measurements on SPR can be followed in real time, allowing the monitoring of individual steps in sequential binding events, which is particularly useful in the assessment of, for instance, sandwich complexes. Material characterization Multi-parametric surface plasmon resonance, a special configuration of SPR, can be used to characterize layers and stacks of layers. Besides binding kinetics, MP-SPR can also provide information on structural changes in terms of layer true thickness and refractive index. MP-SPR has been applied successfully in measurements of lipid targeting and rupture, a CVD-deposited single monolayer of graphene (3.7 Å), as well as micrometer-thick polymers. Data interpretation The most common data interpretation is based on the Fresnel formulas, which treat the formed thin films as infinite, continuous dielectric layers. This interpretation may result in multiple possible refractive index and thickness values. Usually only one solution is within the reasonable data range. In multi-parametric surface plasmon resonance, two SPR curves are acquired by scanning a range of angles at two different wavelengths, which results in a unique solution for both thickness and refractive index. Metal particle plasmons are usually modeled using the Mie scattering theory. In many cases no detailed models are applied, but the sensors are calibrated for the specific application, and used with interpolation within the calibration curve. Novel applications Due to the versatility of SPR instrumentation, this technique pairs well with other approaches, leading to novel applications in various fields, such as biomedical and environmental studies. When coupled with nanotechnology, SPR biosensors can use nanoparticles as carriers for therapeutic implants. For instance, in the treatment of Alzheimer's disease, nanoparticles can be used to deliver therapeutic molecules in targeted ways. In general, SPR biosensing is demonstrating advantages over other approaches in the biomedical field due to this technique being label-free, lower in costs, applicable in point-of-care settings, and capable of producing faster results for smaller research cohorts. In the study of environmental pollutants, SPR instrumentation can be used as a replacement for former chromatography-based techniques. 
Current pollution research relies on chromatography to monitor increases in pollution in an ecosystem over time. When SPR instrumentation with a Kretschmann prism configuration was used in the detection of chlorophene, an emerging pollutant, it was demonstrated that SPR has similar precision and accuracy levels as chromatography techniques. Furthermore, SPR sensing surpasses chromatography techniques through its high-speed, straightforward analysis. Examples Layer-by-layer self-assembly One of the first common applications of surface plasmon resonance spectroscopy was the measurement of the thickness (and refractive index) of adsorbed self-assembled nanofilms on gold substrates. The resonance curves shift to higher angles as the thickness of the adsorbed film increases. This example is a 'static SPR' measurement. When higher speed observation is desired, one can select an angle right below the resonance point (the angle of minimum reflectance), and measure the reflectivity changes at that point. This is the so-called 'dynamic SPR' measurement. The interpretation of the data assumes that the structure of the film does not change significantly during the measurement. Binding constant determination SPR can be used to study the real-time kinetics of molecular interactions. Determining the affinity between two ligands involves establishing the equilibrium dissociation constant KD, representing the equilibrium value for the product quotient. This constant can be determined using dynamic SPR parameters, calculated as the dissociation rate divided by the association rate. In this process, a ligand is immobilized on the dextran surface of the SPR crystal. Through a microflow system, a solution with the analyte is injected over the ligand-covered surface. The binding of the analyte to the ligand causes an increase in the SPR signal (expressed in response units, RU). Following the association time, a solution without the analyte (typically a buffer) is introduced into the microfluidics to initiate the dissociation of the bound complex between the ligand and analyte. As the analyte dissociates from the ligand, the SPR signal decreases. From these association ('on rate', ka) and dissociation rates ('off rate', kd), the equilibrium dissociation constant ('binding constant', KD = kd/ka) can be calculated. The detected SPR signal is a consequence of the electromagnetic 'coupling' of the incident light with the surface plasmon of the gold layer. This interaction is particularly sensitive to the characteristics of the layer at the gold–solution interface, which is usually just a few nanometers thick. When substances bind to the surface, it alters the way light is reflected, causing a change in the reflection angle, which can be measured as a signal in SPR experiments. One common application is measuring the kinetics of antibody-antigen interactions. Thermodynamic analysis As SPR biosensors facilitate measurements at different temperatures, thermodynamic analysis can be performed to obtain a better understanding of the studied interaction. By performing measurements at different temperatures, typically between 4 and 40 °C, it is possible to relate association and dissociation rate constants with activation energy and thereby obtain thermodynamic parameters including binding enthalpy, binding entropy, Gibbs free energy and heat capacity. 
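The binding kinetics described above can be made concrete with a minimal simulation of the 1:1 (Langmuir) interaction model, the simplest model commonly fitted to SPR sensorgrams. All rate constants, concentrations and times in this Python sketch are invented illustrative values.

import numpy as np

k_a, k_d = 1e5, 1e-3        # association (1/(M*s)) and dissociation (1/s) rates
C, R_max = 1e-7, 100.0      # analyte concentration (M), saturation response (RU)
dt, R, trace = 0.1, 0.0, []
for step in range(12000):                    # 1200 s total
    t = step * dt
    conc = C if t < 600 else 0.0             # buffer replaces analyte at 600 s
    R += (k_a * conc * (R_max - R) - k_d * R) * dt   # dR/dt of the 1:1 model
    trace.append(R)
print(f"K_D = {k_d / k_a:.0e} M; response at end of association: {trace[5999]:.1f} RU")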
Pair-wise epitope mapping As SPR allows real-time monitoring, individual steps in sequential binding events can be thoroughly assessed when investigating the suitability between antibodies in a sandwich configuration. Additionally, it allows the mapping of epitopes as antibodies of overlapping epitopes will be associated with an attenuated signal compared to those capable of interacting simultaneously. Innovations Magnetic plasmon resonance Recently, there has been an interest in magnetic surface plasmons. These require materials with large negative magnetic permeability, a property that has only recently been made available with the construction of metamaterials. Graphene Layering graphene on top of gold has been shown to improve SPR sensor performance. Its high electrical conductivity increases the sensitivity of detection. The large surface area of graphene also facilitates the immobilization of biomolecules while its low refractive index minimizes its interference. Enhancing SPR sensitivity by incorporating graphene with other materials expands the potential of SPR sensors, making them practical in a broader range of applications. For instance, the enhanced sensitivity of graphene can be used in conjunction with a silver SPR sensor, providing a cost-effective alternative for measuring glucose levels in urine. Graphene has also been shown to improve the resistance of SPR sensors to high-temperature annealing up to 500 °C. Fiber-optic SPR Recent advancements in SPR technology have given rise to novel formats increasing the scope and applicability of SPR sensing. Fiber optic SPR involves the integration of SPR sensors onto the ends of optical fibers, enabling the direct coupling of light with the surface plasmons as the analytes are passed through a hollow SPR core. This format offers enhanced sensitivity and allows for the development of compact sensing devices, making it particularly valuable for applications requiring remote sensing in the field. It also offers an increased surface area for analytes to bind to the inner lining of the fiber optic. See also Hydrogen sensor Multi-parametric surface plasmon resonance Nano-optics Plasmon Spinplasmonics Surface plasmon polariton Waves in plasmas Localized surface plasmon Quartz crystal microbalance References Further reading A selection of free-download papers on Plasmonics in New Journal of Physics Electromagnetism Nanotechnology Spectroscopy Biochemistry methods Biophysics Forensic techniques Protein–protein interaction assays Plasmonics Optical phenomena
Surface plasmon resonance
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
3,320
[ "Physical phenomena", "Protein–protein interaction assays", "Surface science", "Fundamental interactions", "Nanotechnology", "Spectroscopy", "Plasmonics", "Electromagnetism", "Instrumental analysis", "Materials science", "Biophysics", "Biochemistry methods", "Molecular physics", "Spectrum ...
1,991,352
https://en.wikipedia.org/wiki/Polyhydroxyalkanoates
Polyhydroxyalkanoates or PHAs are polyesters produced in nature by numerous microorganisms, including through bacterial fermentation of sugars or lipids. When produced by bacteria they serve as both a source of energy and as a carbon store. More than 150 different monomers can be combined within this family to give materials with extremely different properties. These plastics are biodegradable and are used in the production of bioplastics. They can be either thermoplastic or elastomeric materials, with melting points ranging from 40 to 180 °C. The mechanical properties and biocompatibility of PHA can also be changed by blending, modifying the surface or combining PHA with other polymers, enzymes and inorganic materials, making it possible for a wider range of applications. Biosynthesis To induce PHA production in a laboratory setting, a culture of a micro-organism such as Cupriavidus necator can be placed in a suitable medium and fed appropriate nutrients so that it multiplies rapidly. Once the population has reached a substantial level, the nutrient composition can be changed to force the micro-organism to synthesize PHA. The yield of PHA obtained from the intracellular granule inclusions can be as high as 80% of the organism's dry weight. The biosynthesis of PHA is usually caused by deficiency conditions (e.g. lack of macro elements such as phosphorus, nitrogen, trace elements, or lack of oxygen) and the excess supply of carbon sources. However, the prevalence of PHA production within either a mono-culture or a set of mixed-microbial organisms can also be dependent on overall nutrient limitation, not just macro elements. This is especially the case in the 'feast/famine' cycle method for induction of PHA production, wherein carbon is periodically added and depleted to cause famine, which encourages cells to produce PHA during 'feast' as a storage method for periods of famine. Polyesters are deposited in the form of highly refractive granules in the cells. Depending upon the microorganism and the cultivation conditions, homo- or copolyesters with different hydroxyalkanoic acids are generated. PHA granules are then recovered by disrupting the cells. Recombinant Bacillus subtilis str. pBE2C1 and Bacillus subtilis str. pBE2C1AB were used in production of polyhydroxyalkanoates (PHA) and it was shown that they could use malt waste as a carbon source for lower-cost PHA production. PHA synthases are the key enzymes of PHA biosynthesis. They use the coenzyme A thioester of (R)-hydroxy fatty acids as substrates. The two classes of PHA synthases differ in the specific use of hydroxy fatty acids of short or medium chain length. The resulting PHA is of the two types: Poly (HA SCL) from hydroxy fatty acids with short chain lengths including three to five carbon atoms are synthesized by numerous bacteria, including Cupriavidus necator and Alcaligenes latus (PHB). Poly (HA MCL) from hydroxy fatty acids with medium chain lengths including six to 14 carbon atoms, can be made, for example, by Pseudomonas putida. A few bacteria, including Aeromonas hydrophila and Thiococcus pfennigii, synthesize copolyester from the above two types of hydroxy fatty acids, or at least possess enzymes that are capable of part of this synthesis. An even larger-scale synthesis can be done with the help of soil organisms: when deprived of nitrogen and phosphorus, they produce about one kilogram of PHA from three kilograms of sugar. 
The simplest and most commonly occurring form of PHA is poly-beta-hydroxybutyrate [poly(3-hydroxybutyrate), P(3HB)], produced by fermentation, which consists of 1000 to 30000 hydroxy fatty acid monomers. Industrial production In the industrial production of PHA, the polyester is extracted and purified from the bacteria by optimizing the conditions of microbial fermentation of sugar, glucose, or vegetable oil. In the 1980s, Imperial Chemical Industries developed a poly(3-hydroxybutyrate-co-3-hydroxyvalerate) obtained via fermentation. It was sold under the name "Biopol" and distributed in the U.S. by Monsanto and later Metabolix. As raw material for the fermentation, carbohydrates such as glucose and sucrose can be used, but also vegetable oil or glycerine from biodiesel production. Researchers in industry are working on methods with which transgenic crops will be developed that express PHA synthesis routes from bacteria and so produce PHA as energy storage in their tissues. Several companies are working to develop methods of producing PHA from waste water, including Veolia subsidiary Anoxkaldnes, and start-ups Micromidas, Mango Materials, Full Cycle Bioplastics, Newlight and Paques Biomaterials. PHAs are processed mainly via injection molding, extrusion and extrusion blow molding into films and hollow bodies. Material properties PHA polymers are thermoplastic, can be processed on conventional processing equipment, and are, depending on their composition, ductile and more or less elastic. They differ in their properties according to their chemical composition (homo- or copolyester, the hydroxy fatty acids contained). They are UV stable, in contrast to other bioplastics such as polylactic acid, in some cases withstand temperatures up to about 180 °C, and show a low permeation of water. The crystallinity can lie in the range of a few percent to 70%. Processability, impact strength and flexibility improve with a higher percentage of valerate in the material. PHAs are soluble in halogenated solvents such as chloroform, dichloromethane or dichloroethane. PHB is similar in its material properties to polypropylene (PP), with good resistance to moisture and good aroma barrier properties. Material made from pure PHB is relatively brittle and stiff. PHB copolymers, which may include other fatty acids such as beta-hydroxyvaleric acid, may be elastic. Applications Due to its biodegradability and potential to create bioplastics with novel properties, much interest exists to develop the use of PHA-based materials. PHA fits into the green economy as a means to create plastics from non-fossil fuel sources. Furthermore, active research is being carried out for the biotransformation "upcycling" of plastic waste (e.g., polyethylene terephthalate and polyurethane) into PHA using Pseudomonas putida bacteria. A PHA copolymer called PHBV (poly(3-hydroxybutyrate-co-3-hydroxyvalerate)) is less stiff and tougher, and it may be used as packaging material. In June 2005, US company Metabolix, Inc. received the US Presidential Green Chemistry Challenge Award (small business category) for their development and commercialisation of a cost-effective method for manufacturing PHAs. There are potential applications for PHA produced by micro-organisms within the agricultural, medical and pharmaceutical industries, primarily due to their biodegradability. 
Fixation and orthopaedic applications have included sutures, suture fasteners, meniscus repair devices, rivets, tacks, staples, screws (including interference screws), bone plates and bone plating systems, surgical mesh, repair patches, slings, cardiovascular patches, orthopedic pins (including bone-filling augmentation material), adhesion barriers, stents, guided tissue repair/regeneration devices, articular cartilage repair devices, nerve guides, tendon repair devices, atrial septal defect repair devices, pericardial patches, bulking and filling agents, vein valves, bone marrow scaffolds, meniscus regeneration devices, ligament and tendon grafts, ocular cell implants, spinal fusion cages, skin substitutes, dural substitutes, bone graft substitutes, bone dowels, wound dressings, and hemostats. References Further reading Adhithya Sankar Santhosh; Mridul Umesh (December 2020). "A Strategic Review on Use of Polyhydroxyalkanoates as an Immunostimulant in Aquaculture". Applied Food Biotechnology, Vol. 8 No. 1 (2021), 14 December 2020, Page 1-18. https://doi.org/10.22037/afb.v8i1.31255 Biomaterials Bioplastics Polyesters Thermoplastics
Polyhydroxyalkanoates
[ "Physics", "Biology" ]
1,874
[ "Biomaterials", "Materials", "Matter", "Medical technology" ]
1,991,441
https://en.wikipedia.org/wiki/Double%20beta%20decay
In nuclear physics, double beta decay is a type of radioactive decay in which two neutrons are simultaneously transformed into two protons, or vice versa, inside an atomic nucleus. As in single beta decay, this process allows the atom to move closer to the optimal ratio of protons and neutrons. As a result of this transformation, the nucleus emits two detectable beta particles, which are electrons or positrons. The literature distinguishes between two types of double beta decay: ordinary double beta decay and neutrinoless double beta decay. In ordinary double beta decay, which has been observed in several isotopes, two electrons and two electron antineutrinos are emitted from the decaying nucleus. In neutrinoless double beta decay, a hypothesized process that has never been observed, only electrons would be emitted. History The idea of double beta decay was first proposed by Maria Goeppert Mayer in 1935. In 1937, Ettore Majorana demonstrated that all results of beta decay theory remain unchanged if the neutrino were its own antiparticle, now known as a Majorana particle. In 1939, Wendell H. Furry proposed that if neutrinos are Majorana particles, then double beta decay can proceed without the emission of any neutrinos, via the process now called neutrinoless double beta decay. It is not yet known whether the neutrino is a Majorana particle, and, relatedly, whether neutrinoless double beta decay exists in nature. As parity violation in weak interactions would not be discovered until 1956, earlier calculations showed that neutrinoless double beta decay should be much more likely to occur than ordinary double beta decay, if neutrinos were Majorana particles. The predicted half-lives were on the order of 10¹⁵–10¹⁶ years. Efforts to observe the process in the laboratory date back to at least 1948, when E. L. Fireman made the first attempt to directly measure the half-life of the isotope 124Sn with a Geiger counter. Radiometric experiments through about 1960 produced negative results or false positives, not confirmed by later experiments. In 1950, for the first time the double beta decay half-life of 130Te was measured by geochemical methods to be 1.4×10²¹ years, reasonably close to the modern value. This involved detecting the concentration in minerals of the xenon produced by the decay. In 1956, after the V − A nature of weak interactions was established, it became clear that the half-life of neutrinoless double beta decay would significantly exceed that of ordinary double beta decay. Despite significant progress in experimental techniques in the 1960s–1970s, double beta decay was not observed in a laboratory until the 1980s. Experiments had only been able to establish lower bounds for the half-life. At the same time, geochemical experiments detected the double beta decay of 82Se and 128Te. Double beta decay was first observed in a laboratory in 1987 by the group of Michael Moe at UC Irvine in 82Se. Since then, many experiments have observed ordinary double beta decay in other isotopes. None of those experiments have produced positive results for the neutrinoless process, raising the half-life lower bound to the order of 10²⁵–10²⁶ years. Geochemical experiments continued through the 1990s, producing positive results for several isotopes. Double beta decay is the rarest known kind of radioactive decay; as of 2019 it has been observed in only 14 isotopes (including double electron capture in 130Ba observed in 2001, 78Kr observed in 2013, and 124Xe observed in 2019), and all have a mean lifetime over 10¹⁸ yr (table below). 
Ordinary double beta decay In a typical double beta decay, two neutrons in the nucleus are converted to protons, and two electrons and two electron antineutrinos are emitted. The process can be thought of as two simultaneous beta minus decays. In order for (double) beta decay to be possible, the final nucleus must have a larger binding energy than the original nucleus. For some nuclei, such as germanium-76, the isobar one atomic number higher (arsenic-76) has a smaller binding energy, preventing single beta decay. However, the isobar with atomic number two higher, selenium-76, has a larger binding energy, so double beta decay is allowed. The emission spectrum of the two electrons can be computed in a similar way to the beta emission spectrum using Fermi's golden rule. The differential rate is given by d³N/(dT1 dT2 d(cos θ)) ∝ F(Z, T1) p1 E1 F(Z, T2) p2 E2 (Q − T1 − T2)⁵ (1 − β1β2 cos θ), where the subscripts refer to each electron, T is kinetic energy, E is total energy, F(Z, T) is the Fermi function with Z the charge of the final-state nucleus, p is momentum, β = v/c is velocity in units of c, θ is the angle between the electrons, and Q is the Q value of the decay. For some nuclei, the process occurs as conversion of two protons to neutrons, emitting two electron neutrinos and absorbing two orbital electrons (double electron capture). If the mass difference between the parent and daughter atoms is more than 1.022 MeV/c2 (two electron masses), another decay is accessible, capture of one orbital electron and emission of one positron. When the mass difference is more than 2.044 MeV/c2 (four electron masses), emission of two positrons is possible. These theoretical decay branches have not been observed. Known double beta decay isotopes There are 35 naturally occurring isotopes capable of double beta decay. In practice, the decay can be observed when the single beta decay is forbidden by energy conservation. This happens for elements with an even atomic number and even neutron number, which are more stable due to spin-coupling. When single beta decay or alpha decay also occur, the double beta decay rate is generally too low to observe. However, the double beta decay of 238U (also an alpha emitter) has been measured radiochemically. Two other nuclides in which double beta decay has been observed, 48Ca and 96Zr, can also theoretically undergo single beta decay, but this decay is extremely suppressed and has never been observed. Similar suppression of energetically barely possible single beta decay occurs for 148Gd and 222Rn, but both these nuclides are rather short-lived alpha emitters. Fourteen isotopes have been experimentally observed undergoing two-neutrino double beta decay (β–β–) or double electron capture (εε). The table below contains nuclides with the latest experimentally measured half-lives, as of December 2016, except for 124Xe (for which double electron capture was first observed in 2019). Where two uncertainties are specified, the first one is statistical uncertainty and the second is systematic. Searches for double beta decay in isotopes that present significantly greater experimental challenges are ongoing. One such isotope is 134Xe. 
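As an illustration, the spectrum shape implied by the differential rate above can be evaluated numerically. The Python sketch below sets the Fermi functions to 1 (a common simplification) and integrates over the energy sharing between the two electrons; energies are in units of the electron mass, and the Q value used is an arbitrary illustrative number.

import numpy as np

Q = 3.0                                   # illustrative Q value, electron masses

def pE(T):
    E = T + 1.0                           # total energy of one electron
    return np.sqrt(np.maximum(E * E - 1.0, 0.0)) * E   # momentum times energy

K = np.linspace(1e-4, Q - 1e-4, 400)      # summed kinetic energy of both electrons
spec = []
for k in K:
    T1 = np.linspace(0.0, k, 200)         # how the electrons share the energy k
    spec.append(np.trapz(pE(T1) * pE(k - T1), T1) * (Q - k) ** 5)
spec = np.array(spec) / np.trapz(spec, K) # normalize the spectrum to unit area
print("summed-energy spectrum peaks near K =", round(K[np.argmax(spec)], 2))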
The following known beta-stable (or almost beta-stable in the cases 48Ca, 96Zr, and 222Rn) nuclides with A ≤ 260 are theoretically capable of double beta decay, where an asterisk marks isotopes that have a double-beta rate measured experimentally and the rest have yet to be measured experimentally: 46Ca, 48Ca*, 70Zn, 76Ge*, 80Se, 82Se*, 86Kr, 94Zr, 96Zr*, 98Mo, 100Mo*, 104Ru, 110Pd, 114Cd, 116Cd*, 122Sn, 124Sn, 128Te*, 130Te*, 134Xe, 136Xe*, 142Ce, 146Nd, 148Nd, 150Nd*, 154Sm, 160Gd, 170Er, 176Yb, 186W, 192Os, 198Pt, 204Hg, 216Po, 220Rn, 222Rn, 226Ra, 232Th, 238U*, 244Pu, 248Cm, 254Cf, 256Cf, and 260Fm. The following known beta-stable (or almost beta-stable in the case 148Gd) nuclides with A ≤ 260 are theoretically capable of double electron capture, where an asterisk marks isotopes that have a double-electron capture rate measured and the rest have yet to be measured experimentally: 36Ar, 40Ca, 50Cr, 54Fe, 58Ni, 64Zn, 74Se, 78Kr*, 84Sr, 92Mo, 96Ru, 102Pd, 106Cd, 108Cd, 112Sn, 120Te, 124Xe*, 126Xe, 130Ba*, 132Ba, 136Ce, 138Ce, 144Sm, 148Gd, 150Gd, 152Gd, 154Dy, 156Dy, 158Dy, 162Er, 164Er, 168Yb, 174Hf, 180W, 184Os, 190Pt, 196Hg, 212Rn, 214Rn, 218Ra, 224Th, 230U, 236Pu, 242Cm, 252Fm, and 258No. In particular, 36Ar is the lightest observationally stable nuclide whose decay is energetically possible. Neutrinoless double beta decay If the neutrino is a Majorana particle (i.e., the antineutrino and the neutrino are actually the same particle), and at least one type of neutrino has non-zero mass (which has been established by the neutrino oscillation experiments), then it is possible for neutrinoless double beta decay to occur. Neutrinoless double beta decay is a lepton number violating process. In the simplest theoretical treatment, known as light neutrino exchange, a nucleon absorbs the neutrino emitted by another nucleon. The exchanged neutrinos are virtual particles. With only two electrons in the final state, the electrons' total kinetic energy would be approximately the binding energy difference of the initial and final nuclei, with the nuclear recoil accounting for the rest. Because of momentum conservation, electrons are generally emitted back-to-back. The decay rate for this process is given by Γ = G |M|² |mββ|², where G is the two-body phase-space factor, M is the nuclear matrix element, and mββ is the effective Majorana mass of the electron neutrino. In the context of light Majorana neutrino exchange, mββ is given by mββ = |Σi U²ei mi|, where mi are the neutrino masses and the Uei are elements of the Pontecorvo–Maki–Nakagawa–Sakata (PMNS) matrix. Therefore, observing neutrinoless double beta decay, in addition to confirming the Majorana neutrino nature, can give information on the absolute neutrino mass scale and Majorana phases in the PMNS matrix, subject to interpretation through theoretical models of the nucleus, which determine the nuclear matrix elements, and models of the decay. The observation of neutrinoless double beta decay would require that at least one neutrino is a Majorana particle, irrespective of whether the process is engendered by neutrino exchange. Experiments Numerous experiments have searched for neutrinoless double beta decay. The best-performing experiments have a high mass of the decaying isotope and low backgrounds, with some experiments able to perform particle discrimination and electron tracking. In order to remove backgrounds from cosmic rays, most experiments are located in underground laboratories around the world. 
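For illustration, the effective Majorana mass can be evaluated numerically from the expression above. The Python sketch below assumes approximate present-day oscillation parameters and a chosen lightest-neutrino mass (all numerical values are assumptions for the example) and scans the unknown Majorana phases.

import numpy as np

s12sq, s13sq = 0.31, 0.022     # sin^2(theta12), sin^2(theta13), approximate
dm21sq = 7.4e-5                # eV^2, approximate
dm31sq = 2.5e-3                # eV^2, approximate (normal ordering)
m1 = 0.001                     # eV, assumed lightest neutrino mass
m2 = np.sqrt(m1**2 + dm21sq)
m3 = np.sqrt(m1**2 + dm31sq)

Ue1sq = (1 - s12sq) * (1 - s13sq)   # |U_e1|^2
Ue2sq = s12sq * (1 - s13sq)         # |U_e2|^2
Ue3sq = s13sq                       # |U_e3|^2

phases = np.linspace(0, 2 * np.pi, 181)
a, b = np.meshgrid(phases, phases)  # the two unknown Majorana phases
m_bb = np.abs(Ue1sq * m1 + Ue2sq * m2 * np.exp(1j * a) + Ue3sq * m3 * np.exp(1j * b))
print(f"m_bb ranges over {m_bb.min() * 1e3:.1f} to {m_bb.max() * 1e3:.1f} meV")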
Recent and proposed experiments include: Completed experiments: Gotthard TPC Heidelberg-Moscow, 76Ge detectors (1997–2001) IGEX, 76Ge detectors (1999–2002) NEMO, various isotopes using tracking calorimeters (2003–2011) Cuoricino, 130Te in ultracold TeO2 crystals (2003–2008) Experiments taking data as of November 2017: AMoRE, 100Mo enriched CaMoO4 crystals at YangYang underground laboratory COBRA, 116Cd in room temperature CdZnTe crystals CUORE, 130Te in ultracold TeO2 crystals EXO, a 136Xe and 134Xe search GERDA, a 76Ge detector KamLAND-Zen, a 136Xe search. Data collection from 2011. The Majorana Demonstrator, using high purity 76Ge p-type point-contact detectors. XMASS using liquid Xe Proposed/future experiments: CUPID, neutrinoless double-beta decay of 100Mo CANDLES, 48Ca in CaF2, at Kamioka Observatory MOON, developing 100Mo detectors nEXO, using liquid 136Xe in a time projection chamber LEGEND, Neutrinoless Double-beta Decay of 76Ge. LUMINEU, exploring 100Mo enriched ZnMoO4 crystals at LSM, France. NEXT, a Xenon TPC. NEXT-DEMO ran and NEXT-100 will run in 2016. SNO+, a liquid scintillator, will study 130Te SuperNEMO, a NEMO upgrade, will study 82Se TIN.TIN, a 124Sn detector at INO PandaX-III, an experiment with 200 kg to 1000 kg of 90% enriched 136Xe DUNE, a TPC filled with liquid Argon doped with 136Xe. NuDoubt++ will study double beta plus decays of 78Kr in a pressurized hybrid-opaque liquid scintillation detector Status While some experiments have claimed a discovery of neutrinoless double beta decay, modern searches have found no evidence for the decay. Heidelberg-Moscow controversy Some members of the Heidelberg-Moscow collaboration claimed a detection of neutrinoless beta decay in 76Ge in 2001. This claim was criticized by outside physicists as well as other members of the collaboration. In 2006, a refined estimate by the same authors stated the half-life was 2.3×10²⁵ years. This half-life has been excluded at high confidence by other experiments, including in 76Ge by GERDA. Current results As of 2017, the strongest limits on neutrinoless double beta decay have come from GERDA in 76Ge, CUORE in 130Te, and EXO-200 and KamLAND-Zen in 136Xe. Higher order simultaneous beta decay For mass numbers with more than two beta-stable isobars, quadruple beta decay and its inverse, quadruple electron capture, have been proposed as alternatives to double beta decay in the isobars with the greatest energy excess. These decays are energetically possible in eight nuclei, though partial half-lives compared to single or double beta decay are predicted to be very long; hence, quadruple beta decay is unlikely to be observed. The seven candidate nuclei for quadruple beta decay include 96Zr, 136Xe, and 150Nd capable of quadruple beta-minus decay, and 124Xe, 130Ba, 148Gd, and 154Dy capable of quadruple beta-plus decay or electron capture (though 148Gd and 154Dy are non-primordial alpha-emitters with geologically short half-lives). In theory, quadruple beta decay may be experimentally observable in three of these nuclei – 96Zr, 136Xe, and 150Nd – with the most promising candidate being 150Nd. Triple beta-minus decay is also possible for 48Ca, 96Zr, and 150Nd; triple beta-plus decay or electron capture is also possible for 148Gd and 154Dy. Moreover, such a decay mode could also be neutrinoless in physics beyond the standard model. Neutrinoless quadruple beta decay would violate lepton number in 4 units, as opposed to a lepton number breaking of two units in the case of neutrinoless double beta decay. 
Therefore, there is no 'black-box theorem' and neutrinos could be Dirac particles while allowing these types of processes. In particular, if neutrinoless quadruple beta decay is found before neutrinoless double beta decay, then the expectation is that neutrinos will be Dirac particles. So far, searches for triple and quadruple beta decay in 150Nd have remained unsuccessful. See also Double electron capture Beta decay Neutrino Particle radiation Radioactive isotope References External links Double beta decay on arxiv.org Nuclear physics Radioactivity Neutrinos
Double beta decay
[ "Physics", "Chemistry" ]
3,231
[ "Radioactivity", "Nuclear physics" ]
1,991,491
https://en.wikipedia.org/wiki/Neutrinoless%20double%20beta%20decay
Neutrinoless double beta decay (0νββ) is a commonly proposed and experimentally pursued theoretical radioactive decay process that would prove a Majorana nature of the neutrino particle. To this day, it has not been found. The discovery of neutrinoless double beta decay could shed light on the absolute neutrino masses and on their mass hierarchy (Neutrino mass). It would be the first ever observation of a violation of total lepton number conservation. A Majorana nature of neutrinos would confirm that the neutrino is its own antiparticle. To search for neutrinoless double beta decay, there are currently a number of experiments underway, with several future experiments for increased sensitivity proposed as well. History The Italian physicist Ettore Majorana first introduced the concept of a particle being its own antiparticle in 1937. Particles of this nature were subsequently named after him as Majorana particles. In 1939, Wendell H. Furry proposed the idea of the Majorana nature of the neutrino, which was associated with beta decays. Furry calculated the transition probability to be even higher for neutrinoless double beta decay than for ordinary double beta decay. It was the first idea proposed to search for the violation of lepton number conservation. It has since drawn attention as a useful way to study the nature of neutrinos. Physical relevance Conventional double beta decay Neutrinos are conventionally produced in weak decays. Weak beta decays normally produce one electron (or positron), emit an antineutrino (or neutrino) and increase the nucleus' proton number by one. The nucleus' mass (i.e. binding energy) is then lower and thus more favorable. There exist a number of elements that can decay into a nucleus of lower mass, but they cannot emit one electron only because the resulting nucleus is kinematically (that is, in terms of energy) not favorable (its energy would be higher). These nuclei can only decay by emitting two electrons (that is, via double beta decay). There are about a dozen confirmed cases of nuclei that can only decay via double beta decay. The corresponding decay equation is (A, Z) → (A, Z+2) + 2e− + 2ν̄e. It is a weak process of second order. A simultaneous decay of two nucleons in the same nucleus is extremely unlikely. Thus, the experimentally observed lifetimes of such decay processes are extremely long; the measured half-lives exceed 10¹⁸ years. A number of isotopes have been observed already to show this two-neutrino double beta decay. This conventional double beta decay is allowed in the Standard Model of particle physics. It has thus both a theoretical and an experimental foundation. Overview If the nature of the neutrinos is Majorana, then they can be emitted and absorbed in the same process without showing up in the corresponding final state. If they were Dirac particles, both of the neutrinos produced by the decay of the W bosons would be emitted and not absorbed afterwards. Neutrinoless double beta decay can only occur if the neutrino particle is Majorana, and there exists a right-handed component of the weak leptonic current or the neutrino can change its handedness between emission and absorption (between the two W vertices), which is possible for a non-zero neutrino mass (for at least one of the neutrino species). The simplest decay process is known as the light neutrino exchange. It features one neutrino emitted by one nucleon and absorbed by another nucleon. 
In the final state, the only remaining parts are the nucleus (with its proton number changed to $Z+2$) and two electrons: $(Z, A) \rightarrow (Z+2, A) + 2e^{-}$. The two electrons are emitted quasi-simultaneously. The two resulting electrons are then the only emitted particles in the final state and must carry approximately the difference of the sums of the binding energies of the two nuclei before and after the process as their kinetic energy. The heavy nuclei do not carry significant kinetic energy. In that case, the decay rate can be calculated as $\Gamma^{0\nu} = G^{0\nu}\,\big|M^{0\nu}\big|^{2}\,\langle m_{\beta\beta}\rangle^{2}$, where $G^{0\nu}$ denotes the phase space factor, $\big|M^{0\nu}\big|^{2}$ the (squared) matrix element of this nuclear decay process (according to the Feynman diagram), and $\langle m_{\beta\beta}\rangle^{2}$ the square of the effective Majorana mass. First, the effective Majorana mass can be obtained as $\langle m_{\beta\beta}\rangle = \left|\sum_{i} U_{ei}^{2}\, m_{i}\right|$, where $m_{i}$ are the Majorana neutrino masses (of the three neutrinos $\nu_{i}$) and $U_{ei}$ the elements of the neutrino mixing matrix (see PMNS matrix). Contemporary experiments to find neutrinoless double beta decays (see section on experiments) aim at both the proof of the Majorana nature of neutrinos and the measurement of this effective Majorana mass (which can only be done if the decay is actually generated by the neutrino masses). The nuclear matrix element (NME) cannot be measured independently; it must, but also can, be calculated. The calculation itself relies on sophisticated nuclear many-body theories and there exist different methods to do this. The NME differs also from nucleus to nucleus (i.e. chemical element to chemical element). Today, the calculation of the NME is a significant problem and it has been treated by different authors in different ways. One question is whether to treat the range of obtained values for $M^{0\nu}$ as the theoretical uncertainty and whether this is then to be understood as a statistical uncertainty. Different approaches are being chosen here. The obtained values for $M^{0\nu}$ often vary by factors of 2 up to about 5. Typical values lie in the range from about 0.9 to 14, depending on the decaying nucleus/element. Lastly, the phase-space factor $G^{0\nu}$ must also be calculated. It depends on the total released kinetic energy ($Q_{\beta\beta}$, i.e. the "$Q$-value") and the atomic number $Z$. Methods use Dirac wave functions, finite nuclear sizes and electron screening. There exist high-precision results for $G^{0\nu}$ for various nuclei, ranging from about 0.23 and 0.90 up to about 24.14, depending on the isotope. It is believed that, if neutrinoless double beta decay is found under certain conditions (decay rate compatible with predictions based on experimental knowledge about neutrino masses and mixing), this would indeed "likely" point at Majorana neutrinos as the main mediator (and not other sources of new physics). There are 35 nuclei that can undergo neutrinoless double beta decay (according to the aforementioned decay conditions). Experiments and results Nine different candidate nuclei are being considered in experiments to confirm neutrinoless double beta decay: 48Ca, 76Ge, 82Se, 96Zr, 100Mo, 116Cd, 130Te, 136Xe and 150Nd. They all have arguments for and against their use in an experiment. Factors to be included and revised are natural abundance, reasonably priced enrichment, and a well understood and controlled experimental technique. The higher the $Q$-value, the better are the chances of a discovery, in principle. The phase-space factor $G^{0\nu}$, and thus the decay rate, grows with $Q$. Experimentally of interest and thus measured is the sum of the kinetic energies of the two emitted electrons. It should equal the $Q$-value of the respective nucleus for neutrinoless double beta emission. The currently best limits on the lifetime of 0νββ lie in the range of roughly $10^{24}$ to $10^{26}$ years, depending on the isotope.
From this, it can be deduced that neutrinoless double beta decay is an extremely rare process, if it occurs at all. Heidelberg-Moscow collaboration The so-called "Heidelberg-Moscow collaboration" (HDM; 1990–2003) of the German Max-Planck-Institut für Kernphysik and the Russian science center Kurchatov Institute in Moscow famously claimed to have found "evidence for neutrinoless double beta decay" (Heidelberg-Moscow controversy). Initially, in 2001, the collaboration announced evidence at the 2.2σ or the 3.1σ level (depending on the calculation method used). The corresponding half-life was found to be of the order of $10^{25}$ years. This result has been a topic of discussion among many scientists and authors. To this day, no other experiment has ever confirmed the result of the HDM group. Instead, recent results from the GERDA experiment for the lifetime limit clearly disfavor and reject the values of the HDM collaboration. Neutrinoless double beta decay has not yet been found. GERDA (Germanium Detector Array) experiment The Germanium Detector Array (GERDA) collaboration's result of phase I of the detector was a limit of $2.1 \times 10^{25}$ years (90% C.L.). It used germanium both as source and detector material. Liquid argon was used for muon vetoing and as a shielding from background radiation. The $Q$-value of 76Ge for 0νββ decay is 2039 keV, but no excess of events in this region was found. Phase II of the experiment started data-taking in 2015, and it used around 36 kg of germanium for the detectors. The exposure analyzed until July 2020 was 103.7 kg⋅yr. Again, no signal was found and thus a new limit was set to $1.8 \times 10^{26}$ years (90% C.L.). The detector has stopped working and published its final results in December 2020. No neutrinoless double beta decay was observed. EXO (Enriched Xenon Observatory) experiment The Enriched Xenon Observatory-200 experiment uses xenon both as source and detector. The experiment is located in New Mexico (US) and uses a time-projection chamber (TPC) for three-dimensional spatial and temporal resolution of the electron track depositions. The EXO-200 experiment yielded a lifetime limit of $3.5 \times 10^{25}$ years (90% C.L.). When translated to effective Majorana mass, this is a limit of the same order as that obtained by GERDA I and II. Currently data-taking experiments CUORE (Cryogenic Underground Observatory for Rare Events) experiment: The CUORE experiment consists of an array of 988 ultra-cold TeO2 crystals (for a total mass of 206 kg of 130Te) used as bolometers to detect the emitted beta particles and as the source of the decay. CUORE is located underground at the Laboratori Nazionali del Gran Sasso, and it began its first physics data run in 2017. CUORE published in 2020 results from the search for neutrinoless double-beta decay in 130Te with a total exposure of 372.5 kg⋅yr, finding no evidence for 0νββ decay and setting a 90% C.I. Bayesian lower limit on the half-life, and in April 2022 a new limit was set at the same confidence level. The experiment is steadily taking data, and it is expected to finalize its physics program by 2024. KamLAND-Zen (Kamioka Liquid Scintillator Antineutrino Detector-Zen) experiment: The KamLAND-Zen experiment commenced using 13 tons of xenon-loaded liquid scintillator as a source (enriched with about 320 kg of 136Xe), contained in a nylon balloon that is surrounded by an outer balloon of 13 m diameter filled with liquid scintillator. Starting in 2011, KamLAND-Zen Phase I took data, eventually leading to a limit on the lifetime for neutrinoless double beta decay of $1.9 \times 10^{25}$ years (90% C.L.).
This limit could be improved by combining with Phase II data (data-taking started in December 2013) to $1.07 \times 10^{26}$ years (90% C.L.). For Phase II, the collaboration especially managed to reduce the background from the decay of 110mAg, which disturbed the measurements in the region of interest for the 0νββ decay of 136Xe. In August 2016, KamLAND-Zen 800 was completed, containing 800 kg of 136Xe, and subsequently reported an improved limit (90% C.L.). In 2023 the limit was improved to $2.3 \times 10^{26}$ years (90% C.L.). Proposed/future experiments nEXO experiment: As EXO-200's successor, nEXO is planned to be a ton-scale experiment and part of the next generation of 0νββ experiments. The detector material is planned to weigh about 5 t, providing a 1% energy resolution at the $Q$-value. The experiment is planned to deliver a lifetime sensitivity of about $10^{28}$ years after 10 years of data-taking. LEGEND (experiment) SuperNEMO NuDoubt++: The NuDoubt⁺⁺ experiment aims at the measurement of two-neutrino and neutrinoless positive double weak decays (2β⁺/ECβ⁺). It is based on a new detector concept combining hybrid and opaque scintillators paired with a novel light read-out technique. The technology is particularly suitable for detecting positron (β⁺) signatures. In its first phase, NuDoubt⁺⁺ is going to operate under high-pressure loading of enriched Kr-78 gas. It expects to discover two-neutrino positive double weak decay modes of Kr-78 within a 1 tonne-week exposure and is able to probe neutrinoless positive double weak decay modes at several orders of magnitude improved significance compared to current experimental limits. A 1 tonne-week exposure is also expected to yield a corresponding half-life sensitivity for Kr-78 (90% C.L.). Later phases may involve searches for positive double weak decays in Xe-124 and Cd-106. Neutrinoless muon conversion The muon decays as $\mu^- \rightarrow e^- \bar{\nu}_e \nu_\mu$ (and the charge-conjugate mode for the antimuon). Decays without neutrino emission, such as $\mu \rightarrow e\gamma$, $\mu \rightarrow eee$, and $\mu^- N \rightarrow e^- N$ conversion on a nucleus, are so unlikely that they are considered prohibited and their observation would be considered evidence of new physics. A number of experiments are pursuing this path, such as MEG (Mu to E Gamma) for $\mu \rightarrow e\gamma$, COMET and Mu2e for $\mu^- N \rightarrow e^- N$ conversion, and Mu3e for $\mu \rightarrow eee$. Neutrinoless tau conversion in the form $\tau \rightarrow 3\mu$ has been searched for by the CMS experiment. See also Double beta decay Heidelberg-Moscow controversy Neutrinoless double electron capture References Nuclear physics Standard Model Physics beyond the Standard Model Radioactivity Hypothetical processes
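As a rough numerical illustration of the formulas above, the following Python sketch evaluates the effective Majorana mass and a corresponding half-life, and inverts the same relation to turn a half-life limit into a mass bound. The mixing angles are rounded oscillation values; the neutrino masses, Majorana phases, phase-space factor and nuclear matrix element are illustrative assumptions, not measured inputs, and conventions differ on whether the electron mass is factored out of the rate.

import numpy as np

# Illustrative sketch only: m_bb = |sum_i U_ei^2 m_i| and a half-life from
# Gamma = G |M|^2 (m_bb / m_e)^2. All numerical inputs below are assumed.
th12, th13 = np.radians(33.4), np.radians(8.6)   # rounded mixing angles
m_i = np.array([0.010, 0.013, 0.051])            # neutrino masses in eV (assumed)
phases = np.array([0.0, 1.0, 2.5])               # Majorana phases in rad (assumed)

Uei_sq = np.array([(np.cos(th12) * np.cos(th13)) ** 2,
                   (np.sin(th12) * np.cos(th13)) ** 2,
                   np.sin(th13) ** 2])           # |U_ei|^2 from the PMNS matrix

m_bb = abs(np.sum(Uei_sq * m_i * np.exp(1j * phases)))
print(f"effective Majorana mass: {m_bb * 1e3:.1f} meV")

G, M, m_e = 2.4e-15, 4.0, 0.511e6                # assumed G (1/yr), NME, m_e (eV)
t_half = np.log(2) / (G * M ** 2 * (m_bb / m_e) ** 2)
print(f"corresponding half-life: {t_half:.2e} yr")

# Inverting the relation turns an experimental half-life limit into an upper
# bound on m_bb (an assumed limit of 1e26 yr is used here):
bound = m_e * np.sqrt(np.log(2) / (1.0e26 * G * M ** 2))
print(f"mass bound for a 1e26 yr limit: {bound * 1e3:.0f} meV")

With these assumed inputs the predicted half-life comes out far beyond current experimental reach, consistent with the non-observation reported above; the spread in nuclear matrix elements discussed earlier translates directly into a corresponding spread in any extracted mass bound.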
Neutrinoless double beta decay
[ "Physics", "Chemistry" ]
2,812
[ "Standard Model", "Hypotheses in physics", "Theoretical physics", "Unsolved problems in physics", "Particle physics", "Nuclear physics", "Physics beyond the Standard Model", "Radioactivity" ]
1,991,528
https://en.wikipedia.org/wiki/Vlasov%20equation
In plasma physics, the Vlasov equation is a differential equation describing time evolution of the distribution function of collisionless plasma consisting of charged particles with long-range interaction, such as the Coulomb interaction. The equation was first suggested for the description of plasma by Anatoly Vlasov in 1938 and later discussed by him in detail in a monograph. The Vlasov equation, combined with the Landau kinetic equation, describes collisional plasma. Difficulties of the standard kinetic approach First, Vlasov argues that the standard kinetic approach based on the Boltzmann equation has difficulties when applied to a description of the plasma with long-range Coulomb interaction. He mentions the following problems arising when applying the kinetic theory based on pair collisions to plasma dynamics: Theory of pair collisions disagrees with the discovery by Rayleigh, Irving Langmuir and Lewi Tonks of natural vibrations in electron plasma. Theory of pair collisions is formally not applicable to Coulomb interaction due to the divergence of the kinetic terms. Theory of pair collisions cannot explain experiments by Harrison Merrill and Harold Webb on anomalous electron scattering in gaseous plasma. Vlasov suggests that these difficulties originate from the long-range character of Coulomb interaction. He starts with the collisionless Boltzmann equation (sometimes called the Vlasov equation, anachronistically in this context) in generalized coordinates, $\frac{df}{dt} = 0$, written explicitly as a PDE, $$\frac{\partial f}{\partial t} + \frac{d\mathbf{r}}{dt} \cdot \frac{\partial f}{\partial \mathbf{r}} + \frac{d\mathbf{p}}{dt} \cdot \frac{\partial f}{\partial \mathbf{p}} = 0,$$ and adapts it to the case of a plasma, leading to the systems of equations shown below. Here $f(\mathbf{r}, \mathbf{p}, t)$ is a general distribution function of particles with momentum $\mathbf{p}$ at coordinates $\mathbf{r}$ and given time $t$. Note that the term $\frac{d\mathbf{p}}{dt}$ is the force $\mathbf{F}$ acting on the particle. The Vlasov–Maxwell system of equations (Gaussian units) Instead of collision-based kinetic description for interaction of charged particles in plasma, Vlasov utilizes a self-consistent collective field created by the charged plasma particles. Such a description uses distribution functions $f_e(\mathbf{r}, \mathbf{p}, t)$ and $f_i(\mathbf{r}, \mathbf{p}, t)$ for electrons and (positive) plasma ions. The distribution function for a species describes the number of particles of that species having approximately the momentum $\mathbf{p}$ near the position $\mathbf{r}$ at time $t$. Instead of the Boltzmann equation, the following system of equations was proposed for description of charged components of plasma (electrons and positive ions): $$\frac{\partial f_e}{\partial t} + \mathbf{v} \cdot \frac{\partial f_e}{\partial \mathbf{r}} - e\left(\mathbf{E} + \frac{1}{c}\,\mathbf{v} \times \mathbf{B}\right) \cdot \frac{\partial f_e}{\partial \mathbf{p}} = 0,$$ $$\frac{\partial f_i}{\partial t} + \mathbf{v} \cdot \frac{\partial f_i}{\partial \mathbf{r}} + Ze\left(\mathbf{E} + \frac{1}{c}\,\mathbf{v} \times \mathbf{B}\right) \cdot \frac{\partial f_i}{\partial \mathbf{p}} = 0.$$ Here $e$ is the elementary charge ($e > 0$), $Z$ the charge number of the ions, $c$ is the speed of light, $m_i$ is the mass of the ion, and $\mathbf{E}(\mathbf{r}, t)$ and $\mathbf{B}(\mathbf{r}, t)$ represent the collective self-consistent electromagnetic field created at the point $\mathbf{r}$ at time moment $t$ by all plasma particles. The essential difference of this system of equations from equations for particles in an external electromagnetic field is that the self-consistent electromagnetic field depends in a complex way on the distribution functions of electrons and ions, $f_e$ and $f_i$. The Vlasov–Poisson equation The Vlasov–Poisson equations are an approximation of the Vlasov–Maxwell equations in the non-relativistic zero-magnetic field limit: $$\frac{\partial f}{\partial t} + \mathbf{v} \cdot \frac{\partial f}{\partial \mathbf{x}} + \frac{q\mathbf{E}}{m} \cdot \frac{\partial f}{\partial \mathbf{v}} = 0,$$ and Poisson's equation for the self-consistent electric field: $$\nabla^2 \phi = -\frac{\rho}{\varepsilon_0}, \qquad \mathbf{E} = -\nabla\phi.$$ Here $q$ is the particle's electric charge, $m$ is the particle's mass, $\mathbf{E}(\mathbf{x}, t)$ is the self-consistent electric field, $\phi(\mathbf{x}, t)$ the self-consistent electric potential, $\rho$ is the electric charge density, and $\varepsilon_0$ is the electric permittivity. Vlasov–Poisson equations are used to describe various phenomena in plasma, in particular Landau damping and the distributions in a double layer plasma, where they are necessarily strongly non-Maxwellian, and therefore inaccessible to fluid models.
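A minimal numerical sketch of the Vlasov–Poisson coupling may help here: given the charge density obtained from the distribution function, the self-consistent field follows from Poisson's equation. The grid, units and density profile below are illustrative assumptions, and a periodic domain with a neutralizing background is assumed so the mean charge can be dropped.

import numpy as np

# Sketch of the field step of a Vlasov-Poisson solver (assumptions as above):
# solve laplacian(phi) = -rho/eps0 on a periodic 1D grid via FFT, then E = -dphi/dx.
def electric_field(rho, L, eps0=8.854e-12):
    n = rho.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)    # angular wavenumbers
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    nonzero = k != 0                                # mean charge dropped (background)
    phi_k[nonzero] = rho_k[nonzero] / (eps0 * k[nonzero] ** 2)
    return np.real(np.fft.ifft(-1j * k * phi_k))    # E = -grad(phi)

L = 1.0
x = np.linspace(0.0, L, 64, endpoint=False)
rho = 1e-9 * np.sin(2.0 * np.pi * x / L)            # small charge perturbation (C/m^3)
E = electric_field(rho, L)
print(E[:4])

In a full solver this field step alternates with advection of $f$ in position and velocity, which closes the self-consistent loop that distinguishes the Vlasov description from motion in a prescribed external field.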
Moment equations In fluid descriptions of plasmas (see plasma modeling and magnetohydrodynamics (MHD)) one does not consider the velocity distribution. This is achieved by replacing $f$ with plasma moments such as the number density $n$, flow velocity $\mathbf{u}$ and pressure $p$. They are named plasma moments because the $k$-th moment of $f$ can be found by integrating $\mathbf{v}^k f$ over velocity. These variables are only functions of position and time, which means that some information is lost. In multifluid theory, the different particle species are treated as different fluids with different pressures, densities and flow velocities. The equations governing the plasma moments are called the moment or fluid equations. Below the two most used moment equations are presented (in SI units). Deriving the moment equations from the Vlasov equation requires no assumptions about the distribution function. Continuity equation The continuity equation describes how the density changes with time. It can be found by integration of the Vlasov equation over the entire velocity space. After some calculations, one ends up with $$\frac{\partial n}{\partial t} + \nabla \cdot (n\mathbf{u}) = 0.$$ The number density $n$ and the momentum density $n\mathbf{u}$ are the zeroth- and first-order moments: $$n = \int f \, \mathrm{d}^3 v, \qquad n\mathbf{u} = \int \mathbf{v} f \, \mathrm{d}^3 v.$$ Momentum equation The rate of change of momentum of a particle is given by the Lorentz equation: $$\frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t} = q\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right).$$ By using this equation and the Vlasov equation, the momentum equation for each fluid becomes $$mn\frac{D\mathbf{u}}{Dt} = -\nabla \cdot \mathsf{P} + qn\left(\mathbf{E} + \mathbf{u} \times \mathbf{B}\right),$$ where $\mathsf{P}$ is the pressure tensor. The material derivative is $$\frac{D}{Dt} = \frac{\partial}{\partial t} + \mathbf{u} \cdot \nabla.$$ The pressure tensor is defined as the particle mass times the covariance matrix of the velocity: $$\mathsf{P} = m \int (\mathbf{v} - \mathbf{u})(\mathbf{v} - \mathbf{u}) f \, \mathrm{d}^3 v.$$ The frozen-in approximation As for ideal MHD, the plasma can be considered as tied to the magnetic field lines when certain conditions are fulfilled. One often says that the magnetic field lines are frozen into the plasma. The frozen-in conditions can be derived from the Vlasov equation. We introduce the scales $T$, $L$ and $V$ for time, distance and speed respectively. They represent magnitudes of the different parameters which give large changes in $f$. By large we mean that $$\left|\frac{\partial f}{\partial t}\right| \sim \frac{f}{T}, \qquad \left|\frac{\partial f}{\partial \mathbf{r}}\right| \sim \frac{f}{L}, \qquad \left|\frac{\partial f}{\partial \mathbf{v}}\right| \sim \frac{f}{V}.$$ The Vlasov equation can now be written term by term in these orders of magnitude. So far no approximations have been done. To be able to proceed we set the gyrofrequency $\omega_g = qB/m$ and the gyroradius $r_g = V/\omega_g$. Dividing the Vlasov equation by $\omega_g f$, the time-derivative and convection terms acquire the prefactors $1/(\omega_g T)$ and $r_g/L$. If $\omega_g T \gg 1$ and $r_g \ll L$, the two first terms will be much less than unity, due to the definitions of $T$, $L$ and $V$ above. Since the last term is of the order of unity, we can neglect the two first terms and write $$\left(\mathbf{E} + \mathbf{v} \times \mathbf{B}\right) \cdot \frac{\partial f}{\partial \mathbf{v}} \approx 0.$$ This equation can be decomposed into a field-aligned and a perpendicular part: $$E_\parallel \frac{\partial f}{\partial v_\parallel} + \left(\mathbf{E}_\perp + \mathbf{v} \times \mathbf{B}\right) \cdot \frac{\partial f}{\partial \mathbf{v}_\perp} \approx 0.$$ The next step is to write $\mathbf{v} = \mathbf{v}_E + \mathbf{w}$, where $$\mathbf{v}_E = \frac{\mathbf{E} \times \mathbf{B}}{B^2}.$$ It will soon be clear why this is done. With this substitution, we get $$E_\parallel \frac{\partial f}{\partial w_\parallel} + \left(\mathbf{w} \times \mathbf{B}\right) \cdot \frac{\partial f}{\partial \mathbf{w}_\perp} \approx 0.$$ If the parallel electric field is small, $$\left(\mathbf{w} \times \mathbf{B}\right) \cdot \frac{\partial f}{\partial \mathbf{w}} \approx 0.$$ This equation means that the distribution is gyrotropic. The mean velocity of a gyrotropic distribution is zero. Hence, $\mathbf{v}_E$ is identical with the mean velocity, $\mathbf{u}$, and we have $$\mathbf{u} = \frac{\mathbf{E} \times \mathbf{B}}{B^2}, \qquad \text{equivalently} \qquad \mathbf{E} + \mathbf{u} \times \mathbf{B} = 0.$$ To summarize, the gyro period and the gyro radius must be much smaller than the typical times and lengths which give large changes in the distribution function. The gyro radius is often estimated by replacing $V$ with the thermal velocity or the Alfvén velocity. In the latter case $r_g$ is often called the inertial length. The frozen-in conditions must be evaluated for each particle species separately. Because electrons have much smaller gyro period and gyro radius than ions, the frozen-in conditions will more often be satisfied. See also Fokker–Planck equation References Further reading Statistical mechanics Non-equilibrium thermodynamics Plasma physics equations Transport phenomena Moment (physics)
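The moment definitions above translate directly into a few lines of code. In the following Python sketch a one-dimensional drifting Maxwellian stands in for $f$; the particle mass, velocity grid and Maxwellian parameters are illustrative assumptions.

import numpy as np

# Sketch: number density, flow velocity and (scalar, 1D) pressure as velocity
# moments of a sampled distribution function. All parameters are assumed.
m = 1.67e-27                          # particle mass in kg (a proton)
v = np.linspace(-5e5, 5e5, 2001)      # velocity grid in m/s
dv = v[1] - v[0]
n0, u0, vth = 1e18, 5e4, 1e5          # density, drift, thermal speed (assumed)
f = n0 / (np.sqrt(2.0 * np.pi) * vth) * np.exp(-0.5 * ((v - u0) / vth) ** 2)

n = np.sum(f) * dv                     # zeroth moment: number density
u = np.sum(v * f) * dv / n             # first moment over n: flow velocity
p = m * np.sum((v - u) ** 2 * f) * dv  # mass times second central moment

print(f"n = {n:.3e} m^-3, u = {u:.3e} m/s, p = {p:.3e} Pa")

The recovered values reproduce the input Maxwellian parameters, illustrating how the fluid variables keep only a few numbers per position while discarding the detailed shape of $f$.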
Vlasov equation
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
1,371
[ "Transport phenomena", "Physical phenomena", "Physical quantities", "Equations of physics", "Non-equilibrium thermodynamics", "Quantity", "Chemical engineering", "Plasma physics equations", "Dynamical systems", "Statistical mechanics", "Moment (physics)" ]
30,193,699
https://en.wikipedia.org/wiki/Metamaterials%20Handbook
Metamaterials Handbook is a two-volume handbook on metamaterials edited by Filippo Capolino, professor of electrical engineering at the University of California. The series is designed to cover all theory and application topics related to electromagnetic metamaterials. Several disciplines have combined to study and develop electromagnetic metamaterials. Some of these disciplines are optics, physics, electromagnetic theory (including computational methods), microfabrication, microwaves, nanofabrication, nanotechnology, and nanochemistry. Theory and Phenomena of Metamaterials Theory and Phenomena of Metamaterials is the first volume of the Metamaterials Handbook. It contains contributions from researchers who have produced accepted results in the field of metamaterials. Most of the contributors are associated with Metamorphose VI AISBL, a non-profit European organization that focuses on artificial electromagnetic materials and metamaterials. Metamorphose provided access to the network of contributing researchers who work in the variety of scientific disciplines involved with metamaterials. This book is in an article-review format, covering prior work in metamaterials. It focuses on the theories underpinning metamaterial research along with the properties of metamaterials. The text covers all areas of metamaterial research. Applications of Metamaterials Applications of Metamaterials is the second volume of the Metamaterials Handbook. This book derives the organization of its topics from the previous volume. The theory, modeling, and basic properties of metamaterials that were explored in the first volume are now shown at work in applications. Devices based on electromagnetic metamaterials continue to expand the understanding of principles and modeling begun in the first volume. The applications for metamaterials are shown to be wide-ranging, encompassing electronics, telecommunications, sensing, medical instrumentation, and data storage. This book also discusses the key domains where metamaterials have already been developed. The material in this book is obtained from highly regarded sources, such as many scientific, peer-reviewed journal articles. See also Metamaterials Metamaterials (journal) Metamaterials: Physics and Engineering Explorations History of metamaterials References Metamaterials Physics books Engineering books 2009 non-fiction books
Metamaterials Handbook
[ "Materials_science", "Engineering" ]
473
[ "Metamaterials", "Materials science" ]
1,354,029
https://en.wikipedia.org/wiki/Heat%20death%20paradox
The heat death paradox, also known as the thermodynamic paradox, Clausius' paradox, and Kelvin's paradox, is a reductio ad absurdum argument that uses thermodynamics to show the impossibility of an infinitely old universe. It was formulated in February 1862 by Lord Kelvin and expanded upon by Hermann von Helmholtz and William John Macquorn Rankine. The paradox This theoretical paradox is directed at the then-mainstream strand of belief in a classical view of a sempiternal universe, whereby its matter is postulated as everlasting and having always been recognisably the universe. The heat death paradox is born of a paradigm resulting from fundamental ideas about the cosmos; resolving the paradox requires changing that paradigm. The paradox was based upon the rigid mechanical point of view of the second law of thermodynamics postulated by Rudolf Clausius and Lord Kelvin, according to which heat can only be transferred from a warmer to a colder object. It notes: if the universe were eternal, as claimed classically, it should already be cold and isotropic (its objects should have the same temperature, and the distribution of matter or radiation should be even). Kelvin compared the universe to a clock that runs slower and slower, constantly dissipating energy in impalpable heat, although he was unsure whether it would stop for ever (reach thermodynamic equilibrium). According to this model, the existence of usable energy, which can be used to perform work and produce entropy, means that the clock has not stopped, since a conversion of heat into mechanical energy (which Kelvin called a rejuvenating universe scenario) is not contemplated. According to the laws of thermodynamics, any hot object transfers heat to its cooler surroundings, until everything is at the same temperature. For two objects at the same temperature, as much heat flows from one body as flows from the other, and the net effect is no change. If the universe were infinitely old, there must have been enough time for the stars to cool and warm their surroundings. Everywhere should therefore be at the same temperature and there should either be no stars, or everything should be as hot as stars. The universe should thus achieve, or asymptotically tend to, thermodynamic equilibrium, which corresponds to a state where no thermodynamic free energy is left, and therefore no further work is possible: this is the heat death of the universe, as predicted by Lord Kelvin in 1852. The average temperature of the cosmos should also asymptotically tend to zero kelvin, and it is possible that a maximum entropy state will be reached. Kelvin's solution In February 1862, Lord Kelvin used the existence of the Sun and the stars as an empirical proof that the universe has not achieved thermodynamic equilibrium, as entropy production and free work are still possible, and there are temperature differences between objects. Helmholtz and Rankine expanded Kelvin's work soon after. Since there are stars and colder objects, the universe is not in thermodynamic equilibrium, so it cannot be infinitely old. Modern cosmology The paradox does not arise in the Big Bang or its successful Lambda-CDM refinement, which posit that the universe began roughly 13.8 billion years ago, not long enough ago for the universe to have approached thermodynamic equilibrium.
Some proposed further refinements, termed eternal inflation, restore Kelvin's idea of unending time in the more complicated form of an eternal, exponentially-expanding multiverse in which mutually-inaccessible baby universes, some of which resemble the universe we inhabit, are continually being born. Related paradoxes Olbers' paradox is another argument against an infinitely old universe, but it applies only to a static-universe scenario; also, unlike Kelvin's paradox, it relies on cosmology rather than thermodynamics. The Boltzmann brain scenario can also be related to Kelvin's paradox, as it focuses on the spontaneous generation of a brain (filled with false memories) from entropy fluctuations, in a universe which has been lying in a heat death state for an indefinite amount of time. See also Entropy (arrow of time) Graphical timeline from Big Bang to Heat Death Heat death of the universe List of paradoxes Thermodynamic temperature References Thermodynamics Physical paradoxes Physical cosmology 1862 introductions
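The thermodynamic argument at the heart of the paradox can be made concrete with a toy calculation: two finite bodies exchanging heat reach a common temperature, the total entropy rises, and no further work is possible. The heat capacities and temperatures in this Python sketch are illustrative assumptions.

import math

# Toy model of the equilibration argument (all numbers assumed): a hot body and
# cold surroundings with constant heat capacities exchange heat until T1 = T2.
C1, C2 = 1000.0, 1000.0          # heat capacities in J/K
T1, T2 = 5800.0, 3.0             # a "star" and its cold surroundings, in K

T_eq = (C1 * T1 + C2 * T2) / (C1 + C2)                 # energy conservation
dS = C1 * math.log(T_eq / T1) + C2 * math.log(T_eq / T2)

print(f"common final temperature: {T_eq:.1f} K")
print(f"total entropy change: {dS:.1f} J/K (positive: the process is irreversible)")

An infinitely old universe would have had time to complete every such equilibration, which is precisely the contradiction the paradox exploits.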
Heat death paradox
[ "Physics", "Chemistry", "Astronomy", "Mathematics" ]
913
[ "Astronomical sub-disciplines", "Theoretical physics", "Astrophysics", "Thermodynamics", "Physical cosmology", "Dynamical systems" ]
1,354,339
https://en.wikipedia.org/wiki/Puerto%20Rico%20Trench
The Puerto Rico Trench is located on the boundary between the North Atlantic Ocean and the Caribbean Sea, parallel to and north of Puerto Rico, where the oceanic trench reaches the deepest points in the Atlantic Ocean. The trench is associated with a complex transition from the Lesser Antilles frontal subduction zone between the South American plate and Caribbean plate to the oblique subduction zone and the strike-slip transform fault zone between the North American plate and Caribbean plate, which extends from the Puerto Rico Trench at the Puerto Rico–Virgin Islands microplate through the Cayman Trough at the Gonâve microplate to the Middle America Trench at the Cocos plate. Constituting the deepest points in the Atlantic Ocean, the trench is about 800 km long and has a maximum documented depth of approximately 8,400 metres. The deepest point is commonly referred to as the Milwaukee Deep, with the Brownson Deep naming the seabed surrounding it. However, more recently, the latter term has also been used interchangeably with the former to refer to this point. The exact point was identified by the DSSV Pressure Drop using a state-of-the-art Kongsberg EM124 multibeam sonar in 2018, and then directly visited and its depth verified by the crewed submersible Deep-Submergence Vehicle DSV Limiting Factor (a Triton 36000/2 model submersible) piloted by Victor Vescovo. Scientific studies have concluded that an earthquake occurring along this fault zone could generate a significant tsunami. The island of Puerto Rico, which lies immediately to the south of the fault zone and the trench, suffered a destructive tsunami soon after the 1918 San Fermín earthquake. Geology The Puerto Rico Trench is located at a boundary between two plates that pass each other along a transform boundary with only a small component of subduction. The Caribbean plate is moving to the east relative to the North American plate. The North American plate is being subducted beneath the Caribbean plate obliquely at the trench, while to the southeast, the South American plate is being more directly subducted along the Lesser Antilles subduction zone. This subduction zone explains the presence of active volcanoes over the southeastern part of the Caribbean Sea. Volcanic activity is frequent along the Lesser Antilles island arc southeast from Puerto Rico to the northern coast of South America. Although originally part of a volcanic arc, the Virgin Islands, Puerto Rico, Hispaniola, Cuba, and Jamaica do not have active volcanoes. The Virgin Islands and Puerto Rico have had no active volcanism for approximately the last 30 million years, while the last active volcanoes in Hispaniola, Thomazanue and Morne la Vigie, became extinct within the last 1.5 million years. However, the islands are at risk of earthquakes and tsunamis. The Puerto Rico Trench has produced earthquakes greater than magnitude 8.0 and is considered capable of continuing to do so. According to NASA, beneath the trench is a mass so dense that it exerts a gravitational pull on the surface of the ocean, causing it to dip somewhat. It also has a negative effect on the accuracy of navigational instruments. Public awareness Knowledge of the earthquake and tsunami risks has not been widespread among the general public of the islands located near the trench. Since 1988, the Puerto Rican Seismic Society has been trying to use the Puerto Rican media to inform people about a future earthquake that could result in a catastrophic tragedy.
Following the 2004 tsunami that affected more than forty countries in the Indian Ocean, many more people now fear the consequences that such an event would bring to the Caribbean. Local governments have begun emergency planning. In the case of Puerto Rico and the U.S. Virgin Islands, the United States government has been studying the problem for years. It is increasing its seismic investigations and developing tsunami warning systems. Seismicity On 11 October 1918, the western coast of Puerto Rico was hit by a major earthquake which caused a tsunami. The 1918 earthquake was caused by an old left-lateral strike-slip fault near the Mona Passage. In 1953, Santo Domingo, Dominican Republic, was affected by the Santo Domingo earthquake. The actual subduction zone (Puerto Rico Trench) has not ruptured in over 200 years, which is a major concern to geophysicists, as they believe it may be due for a major event. Puerto Rico has always been an area of concern to earthquake experts because, apart from the 1918 episode, there are frequent tremors in and around the island, indicating activity. A 1981 tremor was felt across the island, while another in 1985 was felt in the towns of Cayey and Salinas. The January 13, 2014 M 6.4 earthquake north of Puerto Rico occurred as a result of oblique-thrust faulting. Preliminary faulting mechanisms for the event indicate it ruptured either a structure dipping shallowly to the south and striking approximately east-west, or a near-vertical structure striking northwest-southeast. At the location of this earthquake, the North America plate moves west-southwest with respect to the Caribbean plate at a velocity of approximately 20 mm/yr, and subducts beneath the Caribbean plate at the Puerto Rico Trench. The location, depth and mechanism of the earthquake are consistent with the event occurring on this subduction zone interface. Exploration Several exploration cruises carried out by the USGS in the Puerto Rico Trench have for the first time mapped the entire trench using ship-mounted multibeam bathymetry. The seafloor was visited for the first time by the French bathyscaphe Archimède in 1964 and then by a robotic vehicle in 2012. The most conspicuous aspect of the footage was the swarm of benthic amphipods. Some of these amphipods were collected by bait bags attached to the vehicle and were brought to the surface for further analysis. The samples recovered were Scopelocheirus schellenbergi, a species of lysianassid amphipod that has so far only been found in ultradeep trenches in the Pacific. Two invertebrate creatures were also observed in the video. One soft, dark individual has been identified by Dr. Stace E. Beaulieu of Woods Hole Oceanographic Institution as a sea cucumber, tentatively assigned to the genus Peniagone. The other individual, a small crustacean, is tentatively identified as a munnopsid isopod, based on morphology and similar walking and jumping movements observed for other hadal munnopsid isopods. Because these individuals were not collected, it is not possible to obtain species-level identifications. However, these sightings likely exceed the deepest known records for the genus Peniagone and family Munnopsidae. Crewed descent The American explorer Victor Vescovo dived to the deepest point of the Puerto Rico Trench and therefore the Atlantic Ocean on 19 December 2018, as part of the Five Deeps Expedition.
He reached a depth of approximately 8,376 m at 19°42'49" N, 67°18'39" W by direct CTD pressure measurements with the Deep-Submergence Vehicle DSV Limiting Factor (a Triton 36000/2 model submersible) and thus became the first person to reach the bottom of the Atlantic Ocean, while also making the second-deepest recorded solo dive in history at that time. Many media outlets referred to the deep as the Brownson Deep, in contrast to past references to the area, where the term Milwaukee Deep was used instead. The operating area was surveyed by the support ship, the Deep Submersible Support Vessel DSSV Pressure Drop, with a Kongsberg SIMRAD EM124 multibeam echosounder system. The gathered data will be donated to the GEBCO Seabed 2030 initiative. The objective of the Five Deeps Expedition was to thoroughly map and visit the deepest points of all five of the world's oceans by the end of September 2019. See also Mariana Trench Plate tectonics List of oceanic trenches References External links Mapping of the Puerto Rico Trench, the Deepest Part of the Atlantic, is Nearing Completion – United States Geological Survey Workshop Addresses Tsunami Hazard to Puerto Rico, the Virgin Islands, and Other Caribbean Islands – United States Geological Survey Caribbean Tsunami and Earthquake Hazards Studies – Woods Hole Coastal and Marine Science Center Latest Significant Earthquakes – Puerto Rico Seismic Network Promare – Promoting Marine Research and Exploration Geography of Puerto Rico Geology of Puerto Rico Geology of the Caribbean Natural history of the Caribbean Oceanic trenches of the Atlantic Ocean Oceanic trenches of the Caribbean Sea Physical oceanography Subduction zones Seismic faults of North America
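The seismicity figures quoted above admit a rough back-of-the-envelope check in Python. The plate rate and locked interval come from the text; the fault area and rigidity are assumptions chosen only to illustrate the scaling.

import math

# Rough sketch: slip deficit on the locked interface and the moment magnitude
# it could imply. Fault dimensions and rigidity below are assumed values.
rate = 0.020                     # relative plate motion, m/yr (~20 mm/yr, from text)
years = 200                      # locked for over 200 years (from text)
slip = rate * years              # accumulated slip deficit in metres

mu = 3.0e10                      # crustal rigidity in Pa (assumed)
area = 250e3 * 50e3              # 250 km x 50 km rupture area in m^2 (assumed)
M0 = mu * area * slip            # seismic moment in N*m
Mw = (2.0 / 3.0) * (math.log10(M0) - 9.1)

print(f"slip deficit ~ {slip:.1f} m, M0 ~ {M0:.2e} N*m, Mw ~ {Mw:.1f}")

Under these assumptions the result lands near magnitude 8, consistent with the statement above that the trench has produced earthquakes greater than magnitude 8.0.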
Puerto Rico Trench
[ "Physics" ]
1,711
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
1,354,812
https://en.wikipedia.org/wiki/A%20Million%20Random%20Digits%20with%20100%2C000%20Normal%20Deviates
A Million Random Digits with 100,000 Normal Deviates is a random number book by the RAND Corporation, originally published in 1955. The book, consisting primarily of a random number table, was an important 20th-century work in the field of statistics and random numbers. Production and background It was produced starting in 1947 by an electronic simulation of a roulette wheel attached to a computer, the results of which were then carefully filtered and tested before being used to generate the table. The RAND table was an important breakthrough in delivering random numbers, because such a large and carefully prepared table had never before been available. In addition to being available in book form, one could also order the digits on a series of punched cards. The table is formatted as 400 pages, each containing 50 lines of 50 digits. Columns and lines are grouped in fives, and the lines are numbered 00000 through 19999. The standard normal deviates are another 200 pages (10 per line, lines 0000 through 9999), with each deviate given to three decimal places. There are 28 additional pages of front matter. Utility The main use of the tables was in statistics and the experimental design of scientific experiments, especially those that used the Monte Carlo method; in cryptography, they have also been used as nothing-up-my-sleeve numbers, for example in the design of the Khafre cipher. The book was one of the last of a series of random number tables produced from the mid-1920s to the 1950s, after which the development of high-speed computers allowed faster operation through the generation of pseudorandom numbers rather than reading them from tables. 2001 edition The book was reissued in 2001 with a new foreword by RAND Executive Vice President Michael D. Rich. It has generated many humorous user reviews on Amazon.com. References Additional sources George W. Brown, "History of RAND's random digits—Summary," in A.S. Householder, G.E. Forsythe, and H.H. Germond, eds., Monte Carlo Method, National Bureau of Standards Applied Mathematics Series, 12 (Washington, D.C.: U.S. Government Printing Office, 1951): 31–32. (Available here for download from the RAND Corporation.) External links Full text downloadable from rand.org 'A Million Random Digits' Was a Number-Cruncher's Bible. Now One Has Exposed Flaws in the Disorder. at wsj.com 1955 non-fiction books Probability books RAND Corporation Mathematical tables Random number generation
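The page layout described above is easy to mimic in Python. The sketch below uses an ordinary pseudorandom generator purely as a stand-in for RAND's filtered roulette-wheel source; the seed and the number of lines printed are arbitrary choices.

import random

# Sketch of the book's format: lines of 50 digits in ten groups of five,
# numbered with five-digit labels; deviates printed ten per line to 3 decimals.
rng = random.Random(0)   # a PRNG stand-in for the original hardware source

def digit_lines(first_line, count):
    for i in range(count):
        groups = ["".join(str(rng.randrange(10)) for _ in range(5))
                  for _ in range(10)]
        yield f"{first_line + i:05d}  " + " ".join(groups)

for line in digit_lines(0, 3):
    print(line)

# One line of standard normal deviates, ten per line to three decimal places:
print("  ".join(f"{rng.gauss(0.0, 1.0):6.3f}" for _ in range(10)))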
A Million Random Digits with 100,000 Normal Deviates
[ "Mathematics" ]
516
[ "Mathematical tables" ]
1,355,641
https://en.wikipedia.org/wiki/Noil
Noil refers to the short fibers that are removed during the combing process in spinning. These fibers are often then used for other purposes. Fibers are chosen for their length and evenness in specific spinning techniques, such as worsted. The short noil fibers are left over from the combing of wool or the spinning of silk. Noil may be treated as a shorter-staple fiber and spun, hand-plied, or used as wadding. Noil may also be used as a decorative additive in spinning projects like rovings and yarns. As noil is a relatively short fiber, fabric made from noil is weaker and often considered less valuable than fabric made from longer-staple fibers, though it is sometimes valued for aesthetic effects (see Slub (textiles)). Silk Silk noil is also called "raw silk", although this is a misnomer. Silk noil may also be made from the short fibres taken from silkworm cocoons – either fibres that are naturally shorter or fibres broken by emerging silk moths. Rather than having the continuous filament length of silk, the shorter fibers of silk noil have a slightly rough texture. It is relatively weak and has low resilience. It tends to have very low lustre, which makes it appear more like cotton than silk. Noil silk has the advantage of being made from protein. Thus, it has a better texture and depth than cotton and gives a nice fall and drape. Silk noil is also blended with or appended to heavier fabrics like velvets and satins to create varied textures. Made from one of the strongest protein-based natural fibres, noil saris are not as slippery as many synthetic fibres or filament silk. Being silk, it dyes easily, absorbs moisture well, and can also be waterproofed with a polyurethane coating. Such coatings increase its use in furnishings and upholstery. Silk noil hails from China, whence it was exported to Europe in the Middle Ages, especially to Italy. It was used to create silk blends through the first half of the 14th century. However, over time its use decreased. With an increase in demand and a variety of alternatives and low-cost substitutes, noil has re-surfaced, experiencing a sort of revival. In India, noil is used to make saris, materials and furnishings. See also Slub (in textiles), for a list of cloth types made from or with silk noil References Fibers Materials Textiles
Noil
[ "Physics" ]
522
[ "Materials", "Matter" ]
1,357,514
https://en.wikipedia.org/wiki/Recombinant%20DNA
Recombinant DNA (rDNA) molecules are DNA molecules formed by laboratory methods of genetic recombination (such as molecular cloning) that bring together genetic material from multiple sources, creating sequences that would not otherwise be found in the genome. Recombinant DNA is the general name for a piece of DNA that has been created by combining two or more fragments from different sources. Recombinant DNA is possible because DNA molecules from all organisms share the same chemical structure, differing only in the nucleotide sequence. Recombinant DNA molecules are sometimes called chimeric DNA because they can be made of material from two different species, like the mythical chimera. rDNA technology uses palindromic sequences and leads to the production of sticky and blunt ends. The DNA sequences used in the construction of recombinant DNA molecules can originate from any species. For example, plant DNA can be joined to bacterial DNA, or human DNA can be joined with fungal DNA. In addition, DNA sequences that do not occur anywhere in nature can be created by the chemical synthesis of DNA and incorporated into recombinant DNA molecules. Using recombinant DNA technology and synthetic DNA, any DNA sequence can be created and introduced into living organisms. Proteins that can result from the expression of recombinant DNA within living cells are termed recombinant proteins. When recombinant DNA encoding a protein is introduced into a host organism, the recombinant protein is not necessarily produced. Expression of foreign proteins requires the use of specialized expression vectors and often necessitates significant restructuring of foreign coding sequences. Recombinant DNA differs from genetic recombination in that the former results from artificial methods while the latter is a normal biological process that results in the remixing of existing DNA sequences in essentially all organisms. Production Molecular cloning is the laboratory process used to produce recombinant DNA. It is one of the two most widely used methods, along with polymerase chain reaction (PCR), for directing the replication of any specific DNA sequence chosen by the experimentalist. There are two fundamental differences between the methods. One is that molecular cloning involves replication of the DNA within a living cell, while PCR replicates DNA in the test tube, free of living cells. The other difference is that cloning involves cutting and pasting DNA sequences, while PCR amplifies by copying an existing sequence. Formation of recombinant DNA requires a cloning vector, a DNA molecule that replicates within a living cell. Vectors are generally derived from plasmids or viruses, and represent relatively small segments of DNA that contain necessary genetic signals for replication, as well as additional elements for convenience in inserting foreign DNA, identifying cells that contain recombinant DNA, and, where appropriate, expressing the foreign DNA. The choice of vector for molecular cloning depends on the choice of host organism, the size of the DNA to be cloned, and whether and how the foreign DNA is to be expressed. The DNA segments can be combined by using a variety of methods, such as restriction enzyme/ligase cloning or Gibson assembly.
In standard cloning protocols, the cloning of any DNA fragment essentially involves seven steps: (1) Choice of host organism and cloning vector, (2) Preparation of vector DNA, (3) Preparation of DNA to be cloned, (4) Creation of recombinant DNA, (5) Introduction of recombinant DNA into the host organism, (6) Selection of organisms containing recombinant DNA, and (7) Screening for clones with desired DNA inserts and biological properties. These steps are described in some detail in a related article (molecular cloning). DNA expression DNA expression requires the transfection of suitable host cells. Typically, either bacterial, yeast, insect, or mammalian cells (such as Human Embryonic Kidney cells or CHO cells) are used as host cells. Following transplantation into the host organism, the foreign DNA contained within the recombinant DNA construct may or may not be expressed. That is, the DNA may simply be replicated without expression, or it may be transcribed and translated and a recombinant protein is produced. Generally speaking, expression of a foreign gene requires restructuring the gene to include sequences that are required for producing an mRNA molecule that can be used by the host's translational apparatus (e.g. promoter, translational initiation signal, and transcriptional terminator). Specific changes to the host organism may be made to improve expression of the ectopic gene. In addition, changes may be needed to the coding sequences as well, to optimize translation, make the protein soluble, direct the recombinant protein to the proper cellular or extracellular location, and stabilize the protein from degradation. Properties of organisms containing recombinant DNA In most cases, organisms containing recombinant DNA have apparently normal phenotypes. That is, their appearance, behavior and metabolism are usually unchanged, and the only way to demonstrate the presence of recombinant sequences is to examine the DNA itself, typically using a polymerase chain reaction (PCR) test. Significant exceptions exist, and are discussed below. If the rDNA sequences encode a gene that is expressed, then the presence of RNA and/or protein products of the recombinant gene can be detected, typically using RT-PCR or western hybridization methods. Gross phenotypic changes are not the norm, unless the recombinant gene has been chosen and modified so as to generate biological activity in the host organism. Additional phenotypes that are encountered include toxicity to the host organism induced by the recombinant gene product, especially if it is over-expressed or expressed within inappropriate cells or tissues. In some cases, recombinant DNA can have deleterious effects even if it is not expressed. One mechanism by which this happens is insertional inactivation, in which the rDNA becomes inserted into a host cell's gene. In some cases, researchers use this phenomenon to "knock out" genes to determine their biological function and importance. Another mechanism by which rDNA insertion into chromosomal DNA can affect gene expression is by inappropriate activation of previously unexpressed host cell genes. This can happen, for example, when a recombinant DNA fragment containing an active promoter becomes located next to a previously silent host cell gene, or when a host cell gene that functions to restrain gene expression undergoes insertional inactivation by recombinant DNA. Applications of recombinant DNA Recombinant DNA is widely used in biotechnology, medicine and research. 
Today, recombinant proteins and other products that result from the use of DNA technology are found in essentially every pharmacy, physician or veterinarian office, medical testing laboratory, and biological research laboratory. In addition, organisms that have been manipulated using recombinant DNA technology, as well as products derived from those organisms, have found their way into many farms, supermarkets, home medicine cabinets, and even pet shops, such as those that sell GloFish and other genetically modified animals. The most common application of recombinant DNA is in basic research, in which the technology is important to most current work in the biological and biomedical sciences. Recombinant DNA is used to identify, map and sequence genes, and to determine their function. rDNA probes are employed in analyzing gene expression within individual cells, and throughout the tissues of whole organisms. Recombinant proteins are widely used as reagents in laboratory experiments and to generate antibody probes for examining protein synthesis within cells and organisms. Many additional practical applications of recombinant DNA are found in industry, food production, human and veterinary medicine, agriculture, and bioengineering. Some specific examples are identified below. Recombinant chymosin Found in rennet, chymosin is the enzyme responsible for hydrolysis of κ-casein to produce para-κ-casein and glycomacropeptide, which is the first step in the formation of cheese, and subsequently curd and whey. It was the first genetically engineered food additive used commercially. Traditionally, processors obtained chymosin from rennet, a preparation derived from the fourth stomach of milk-fed calves. Scientists engineered a non-pathogenic strain (K-12) of E. coli bacteria for large-scale laboratory production of the enzyme. This microbiologically produced recombinant enzyme, structurally identical to the calf-derived enzyme, costs less and is produced in abundant quantities. Today about 60% of U.S. hard cheese is made with genetically engineered chymosin. In 1990, the FDA granted chymosin "generally recognized as safe" (GRAS) status based on data showing that the enzyme was safe. Recombinant human insulin Recombinant human insulin has almost completely replaced insulin obtained from animal sources (e.g. pigs and cattle) for the treatment of type 1 diabetes. A variety of different recombinant insulin preparations are in widespread use. Recombinant insulin is synthesized by inserting the human insulin gene into E. coli or yeast (Saccharomyces cerevisiae), which then produces insulin for human use. Insulin produced by E. coli requires further post-translational modifications (e.g. glycosylation), whereas yeasts are able to perform these modifications themselves by virtue of being more complex host organisms. The advantage of recombinant human insulin is that, with chronic use, patients do not develop an immune defence against it, as they can with animal-sourced insulin, which stimulates the human immune system. Recombinant human growth hormone (HGH, somatotropin) Administered to patients whose pituitary glands generate insufficient quantities to support normal growth and development. Before recombinant HGH became available, HGH for therapeutic use was obtained from pituitary glands of cadavers. This unsafe practice led to some patients developing Creutzfeldt–Jakob disease. Recombinant HGH eliminated this problem, and is now used therapeutically. It has also been misused as a performance-enhancing drug by athletes and others.
Recombinant blood clotting factor VIII It is the recombinant form of factor VIII, a blood-clotting protein that is administered to patients with the bleeding disorder hemophilia, who are unable to produce factor VIII in quantities sufficient to support normal blood coagulation. Before the development of recombinant factor VIII, the protein was obtained by processing large quantities of human blood from multiple donors, which carried a very high risk of transmission of blood-borne infectious diseases, for example HIV and hepatitis B. Recombinant hepatitis B vaccine Hepatitis B infection can be successfully controlled through the use of a recombinant subunit hepatitis B vaccine, which contains a form of the hepatitis B virus surface antigen that is produced in yeast cells. The development of the recombinant subunit vaccine was an important and necessary development because hepatitis B virus, unlike other common viruses such as polio virus, cannot be grown in vitro. Recombinant antibodies Recombinant antibodies (rAbs) are produced in vitro by means of expression systems based on mammalian cells. Their monospecific binding to a specific epitope makes rAbs eligible not only for research purposes, but also as therapy options against certain cancer types, infections and autoimmune diseases. Diagnosis of HIV infection Each of the three widely used methods for diagnosing HIV infection has been developed using recombinant DNA. The antibody test (ELISA or western blot) uses a recombinant HIV protein to test for the presence of antibodies that the body has produced in response to an HIV infection. The DNA test looks for the presence of HIV genetic material using reverse transcription polymerase chain reaction (RT-PCR). Development of the RT-PCR test was made possible by the molecular cloning and sequence analysis of HIV genomes. Golden rice Golden rice is a recombinant variety of rice that has been engineered to express the enzymes responsible for β-carotene biosynthesis. This variety of rice holds substantial promise for reducing the incidence of vitamin A deficiency in the world's population. Golden rice is not currently in use, pending the resolution of regulatory and intellectual property issues. Herbicide-resistant crops Commercial varieties of important agricultural crops (including soy, maize/corn, sorghum, canola, alfalfa and cotton) have been developed that incorporate a recombinant gene that results in resistance to the herbicide glyphosate (trade name Roundup), and simplifies weed control by glyphosate application. These crops are in common commercial use in several countries. Insect-resistant crops Bacillus thuringiensis is a bacterium that naturally produces a protein (Bt toxin) with insecticidal properties. The bacterium has been applied to crops as an insect-control strategy for many years, and this practice has been widely adopted in agriculture and gardening. Recently, plants have been developed that express a recombinant form of the bacterial protein, which may effectively control some insect predators. Environmental issues associated with the use of these transgenic crops have not been fully resolved. History The idea of recombinant DNA was first proposed by Peter Lobban, a graduate student of Prof. Dale Kaiser in the Biochemistry Department at Stanford University Medical School.
The first publications describing the successful production and intracellular replication of recombinant DNA appeared in 1972 and 1973, from Stanford and UCSF. In 1980 Paul Berg, a professor in the Biochemistry Department at Stanford and an author on one of the first papers, was awarded the Nobel Prize in Chemistry for his work on nucleic acids "with particular regard to recombinant DNA". Werner Arber, Hamilton Smith, and Daniel Nathans shared the 1978 Nobel Prize in Physiology or Medicine for the discovery of restriction endonucleases which enhanced the techniques of rDNA technology. Stanford University applied for a U.S. patent on recombinant DNA on November 4, 1974, listing the inventors as Herbert W. Boyer (professor at the University of California, San Francisco) and Stanley N. Cohen (professor at Stanford University); this patent, U.S. 4,237,224A, was awarded on December 2, 1980. The first licensed drug generated using recombinant DNA technology was human insulin, developed by Genentech and licensed by Eli Lilly and Company. Controversy Scientists associated with the initial development of recombinant DNA methods recognized that the potential existed for organisms containing recombinant DNA to have undesirable or dangerous properties. At the 1975 Asilomar Conference on Recombinant DNA, these concerns were discussed and a voluntary moratorium on recombinant DNA research was initiated for experiments that were considered particularly risky. This moratorium was widely observed until the US National Institutes of Health developed and issued formal guidelines for rDNA work. Today, recombinant DNA molecules and recombinant proteins are usually not regarded as dangerous. However, concerns remain about some organisms that express recombinant DNA, particularly when they leave the laboratory and are introduced into the environment or food chain. These concerns are discussed in the articles on genetically modified organisms and genetically modified food controversies. Furthermore, there are concerns about the by-products in biopharmaceutical production, where recombinant DNA results in specific protein products. The major by-product, termed host cell protein, comes from the host expression system and poses a threat to the patient's health and the overall environment. See also Asilomar conference on recombinant DNA Genetic engineering Genetically modified organism Recombinant virus Vector DNA Biomolecular engineering Recombinant DNA technology Host cell protein T7 expression system References Further reading Judson, Horace Freeland. The Eighth Day of Creation: Makers of the Revolution in Biology. Touchstone Books; 2nd edition, Cold Spring Harbor Laboratory Press, 1996 paperback. Micklas, David. 2003. DNA Science: A First Course. Cold Spring Harbor Press. Rasmussen, Nicolas. Gene Jockeys: Life Science and the Rise of Biotech Enterprise. Johns Hopkins University Press (Baltimore), 2014. Rosenfeld, Israel. 2010. DNA: A Graphic Guide to the Molecule that Shook the World. Columbia University Press. Schultz, Mark and Zander Cannon. 2009. The Stuff of Life: A Graphic Guide to Genetics and DNA. Hill and Wang. Watson, James. 2004. DNA: The Secret of Life. Random House. External links Recombinant DNA fact sheet (from University of New Hampshire) Plasmids in Yeasts (Fact sheet from San Diego State University) Recombinant DNA research at UCSF and commercial application at Genentech Edited transcript of 1994 interview with Herbert W. Boyer, Living history project. Oral history.
Recombinant Protein Purification Principles and Methods Handbook Massachusetts Institute of Technology, Oral History Program, Oral History Collection on the Recombinant DNA Controversy, MC-0100. Massachusetts Institute of Technology, Department of Distinctive Collections, Cambridge, Massachusetts American inventions Biopharmaceuticals Genetics techniques Molecular genetics Molecular biology Synthetic biology 1972 in biotechnology
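To make the cutting-and-pasting step described above concrete, the following Python sketch digests a toy sequence at the palindromic EcoRI site (GAATTC, cut after the G, leaving AATT overhangs) and splices in a fragment with compatible ends. The sequences are invented toy examples; only the EcoRI recognition site and cut position are real, and ligation is modeled simply as string splicing.

# Toy illustration of restriction digestion and ligation (sequences invented).
SITE, CUT = "GAATTC", 1          # EcoRI recognition site; cut falls after the G

def digest(seq):
    """Split seq at every EcoRI site; the AATT sticky ends are implied."""
    frags, start, i = [], 0, seq.find(SITE)
    while i != -1:
        frags.append(seq[start:i + CUT])   # ...G | AATTC...
        start = i + CUT
        i = seq.find(SITE, i + 1)
    frags.append(seq[start:])
    return frags

vector = "TTGACCGAATTCGGTCAA"    # toy plasmid region with a single EcoRI site
insert = "AATTCAAACCCG"          # toy fragment whose ends match the overhangs

left, right = digest(vector)
recombinant = left + insert + right   # ligation modeled as string splicing
print(recombinant)                    # both junctions regenerate GAATTC

Because the insert's ends are compatible with the vector's overhangs, both junctions of the spliced product regenerate the GAATTC site, mirroring how a real ligation of EcoRI fragments restores cuttable sites at the joins.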
Recombinant DNA
[ "Chemistry", "Engineering", "Biology" ]
3,568
[ "Genetics techniques", "Pharmacology", "Biological engineering", "Synthetic biology", "Biotechnology products", "Genetic engineering", "Bioinformatics", "Molecular genetics", "Molecular biology", "Biochemistry", "Biopharmaceuticals" ]
1,357,593
https://en.wikipedia.org/wiki/Sensitive%20high-resolution%20ion%20microprobe
The sensitive high-resolution ion microprobe (also sensitive high mass-resolution ion microprobe or SHRIMP) is a large-diameter, double-focusing secondary ion mass spectrometer (SIMS) sector instrument that was produced by Australian Scientific Instruments in Canberra, Australia, and has now been taken over by the Chinese company Dunyi Technology Development Co. (DTDC) in Beijing. Similar to the IMS 1270-1280-1300 large-geometry ion microprobes produced by CAMECA (Gennevilliers, France) and like other SIMS instruments, the SHRIMP microprobe bombards a sample under vacuum with a beam of primary ions that sputters secondary ions that are focused, filtered, and measured according to their energy and mass. The SHRIMP is primarily used for geological and geochemical applications. It can measure the isotopic and elemental abundances in minerals at a 10 to 30 μm-diameter scale and with a depth resolution of 1–5 μm. Thus, the SIMS method is well-suited for the analysis of complex minerals, as often found in metamorphic terrains and some igneous rocks, and for relatively rapid analysis of statistically valid sets of detrital minerals from sedimentary rocks. The most common application of the instrument is in uranium-thorium-lead geochronology, although the SHRIMP can be used for some other isotope ratio measurements (e.g., δ7Li or δ11B) and trace element abundances. History and scientific impact The SHRIMP originated in 1973 with a proposal by Prof. Bill Compston of the Research School of Earth Sciences at the Australian National University to build an ion microprobe that exceeded the sensitivity and resolution of ion probes available at the time, in order to analyse individual mineral grains. Optic designer Steve Clement based the prototype instrument (now referred to as 'SHRIMP-I') on a design by Matsuda which minimised aberrations in transmitting ions through the various sectors. The instrument was built between 1975 and 1977, with testing and redesigning from 1978. The first successful geological applications occurred in 1980. The first major scientific impact was the discovery of Hadean (>4000 million year old) zircon grains at Mt. Narryer in Western Australia and then later at the nearby Jack Hills. These results and the SHRIMP analytical method itself were initially questioned, but subsequent conventional analyses partially confirmed them. SHRIMP-I also pioneered ion microprobe studies of titanium, hafnium and sulfur isotopic systems. Growing interest from commercial companies and other academic research groups, notably Prof. John de Laeter of Curtin University (Perth, Western Australia), led to the project in 1989 to build a commercial version of the instrument, the SHRIMP-II, in association with ANUTECH, the Australian National University's commercial arm. Refined ion optic designs in the mid-1990s prompted development and construction of the SHRIMP-RG (Reverse Geometry) with improved mass resolution. Further advances in design have also led to multiple ion collection systems (already introduced in the market by a French company years before), negative-ion stable isotope measurements and on-going work in developing a dedicated instrument for light stable isotopes. Fifteen SHRIMP instruments have now been installed around the world and SHRIMP results have been reported in more than 2000 peer-reviewed scientific papers.
SHRIMP is an important tool for understanding early Earth history, having analysed some of the oldest terrestrial material, including the Acasta Gneiss, further extended the ages obtained from zircons of the Jack Hills, and dated the oldest impact crater on the planet. Other significant milestones include the first U/Pb ages for lunar zircon and the dating of Martian apatite. More recent uses include the determination of Ordovician sea surface temperature, the timing of snowball Earth events and the development of stable isotope techniques. Design and operation Primary column In a typical U-Pb geochronology analytical mode, a beam of (O2)1− primary ions is produced from a high-purity oxygen gas discharge in the hollow Ni cathode of a duoplasmatron. The ions are extracted from the plasma and accelerated at 10 kV. The primary column uses Köhler illumination to produce a uniform ion density across the target spot. The spot diameter can vary from ~5 μm to over 30 μm as required. Typical ion beam density on the sample is ~10 pA/μm2, and an analysis of 15–20 minutes creates an ablation pit less than 1 μm deep. Sample chamber The primary beam is 45° incident to the plane of the sample surface, with secondary ions extracted at 90° and accelerated at 10 kV. Three quadrupole lenses focus the secondary ions onto a source slit; unlike other ion probe designs, this design aims to maximise transmission of ions rather than to preserve an ion image. A Schwarzschild objective lens provides reflected-light direct microscopic viewing of the sample during analysis. Electrostatic analyzer The secondary ions are filtered and focussed according to their kinetic energy by a 1272 mm radius 90° electrostatic sector. A mechanically operated slit provides fine-tuning of the energy spectrum transmitted into the magnetic sector, and an electrostatic quadrupole lens is used to reduce aberrations in transmitting the ions to the magnetic sector. Magnetic sector The electromagnet has a 1000 mm radius through 72.5° to focus the secondary ions according to their mass/charge ratio, following the principles of the Lorentz force. Essentially, the path of a less massive ion will have a greater curvature through the magnetic field than the path of a more massive ion. Thus, altering the current in the electromagnet focuses a particular mass species at the detector. Detectors The ions pass through a collector slit in the focal plane of the magnetic sector, and the collector assembly can be moved along an axis to optimise the focus of a given isotopic species. In typical U-Pb zircon analysis, a single secondary electron multiplier is used for ion counting. Vacuum system Turbomolecular pumps evacuate the entire beam path of the SHRIMP to maximise transmission and reduce contamination. The sample chamber also employs a cryopump to trap contaminants, especially water. Typical pressures inside the SHRIMP are between ~7 x 10−9 mbar in the detector and ~1 x 10−6 mbar in the primary column (with the oxygen duoplasmatron source). Mass resolution and sensitivity In normal operation, the SHRIMP achieves a mass resolution of 5000 with a sensitivity of >20 counts/sec/ppm/nA for lead from zircon. Applications Isotope dating For U-Th-Pb geochronology a beam of (O2)1− primary ions is accelerated and collimated towards the target, where it sputters "secondary" ions from the sample. These secondary ions are accelerated along the instrument, where the various isotopes of uranium, lead and thorium are measured successively, along with reference peaks for Zr2O+, ThO+ and UO+. 
Since the sputtering yield differs between ion species, and the relative sputtering yield increases or decreases with time depending on the ion species (due to increasing crater depth, charging effects and other factors), the measured relative isotopic abundances do not directly reflect the true relative isotopic abundances in the target. Corrections are determined by analysing both unknowns and reference material (matrix-matched material of known isotopic composition) and deriving an analytical-session-specific calibration factor. SHRIMP instruments around the world References External links Founding SHRIMP Lab at Australian National University Australian Scientific Instruments Geochronological dating methods Mass spectrometry
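The mass selection in the magnetic sector follows from elementary Lorentz-force kinematics: an ion of mass m and charge q accelerated through a potential V moves on a circular arc of radius r = sqrt(2mV/q)/B, so for the fixed 1000 mm radius the field B must be retuned to bring each mass station onto the detector. A minimal sketch, assuming singly charged species and the 10 kV acceleration quoted above (the species and masses are illustrative, not instrument specifications):

```python
import math

E = 1.602176634e-19      # elementary charge (C)
AMU = 1.66053906660e-27  # atomic mass unit (kg)

def field_for_mass(mass_amu, radius_m=1.0, volts=1e4, charge=E):
    """Magnetic field (T) that bends an ion of the given mass, accelerated
    through `volts`, onto a circular path of the given radius.
    From qV = m v^2 / 2 and r = m v / (q B):  B = sqrt(2 m V / q) / r."""
    m = mass_amu * AMU
    return math.sqrt(2.0 * m * volts / charge) / radius_m

# Illustrative singly charged peaks of a U-Pb run on a 1000 mm radius sector:
for species, mass in [("Zr2O+ (~196 u)", 196), ("206Pb+", 206), ("238U16O+ (~254 u)", 254)]:
    print(f"{species:18s} B = {field_for_mass(mass):.4f} T")
```

A lighter ion needs a weaker field to follow the same 1000 mm arc, which is why sweeping the electromagnet current steps the instrument from one isotope to the next.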
Sensitive high-resolution ion microprobe
[ "Physics", "Chemistry" ]
1,561
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
1,358,453
https://en.wikipedia.org/wiki/Astrophysical%20jet
An astrophysical jet is an astronomical phenomenon where outflows of ionised matter are emitted as extended beams along the axis of rotation. When the greatly accelerated matter in the beam approaches the speed of light, astrophysical jets become relativistic jets, as they show effects from special relativity. The formation and powering of astrophysical jets are highly complex phenomena that are associated with many types of high-energy astronomical sources. They likely arise from dynamic interactions within accretion disks, whose active processes are commonly connected with compact central objects such as black holes, neutron stars or pulsars. One explanation is that tangled magnetic fields are organised to aim two diametrically opposing beams away from the central source by angles only several degrees wide. Jets may also be influenced by a general relativity effect known as frame-dragging. Most of the largest and most active jets are created by supermassive black holes (SMBH) in the centre of active galaxies such as quasars and radio galaxies, or within galaxy clusters. Such jets can exceed millions of parsecs in length. Other astronomical objects that contain jets include cataclysmic variable stars, X-ray binaries and gamma-ray bursts (GRB). Jets on a much smaller scale (~parsecs) may be found in star-forming regions, including T Tauri stars and Herbig–Haro objects; these objects are partially formed by the interaction of jets with the interstellar medium. Bipolar outflows may also be associated with protostars, or with evolved post-AGB stars, planetary nebulae and bipolar nebulae. Relativistic jets Relativistic jets are beams of ionised matter accelerated close to the speed of light. Most have been observationally associated with the central black holes of some active galaxies, radio galaxies or quasars, and also with galactic stellar black holes, neutron stars or pulsars. Beam lengths may extend from several thousand to hundreds of thousands or millions of parsecs. Jet velocities approaching the speed of light show significant effects of the special theory of relativity; for example, relativistic beaming, which changes the apparent beam brightness. Massive central black holes in galaxies have the most powerful jets, but their structure and behaviour are similar to those of smaller galactic neutron stars and black holes. These smaller systems are often called microquasars and show a large range of velocities. The SS 433 jet, for example, has a mean velocity of 0.26c. Relativistic jet formation may also explain observed gamma-ray bursts, which have the most relativistic jets known, being ultrarelativistic. Mechanisms behind the composition of jets remain uncertain, though some studies favour models where jets are composed of an electrically neutral mixture of nuclei, electrons, and positrons, while others are consistent with jets composed of positron–electron plasma. Trace nuclei swept up in a relativistic positron–electron jet would be expected to have extremely high energy, as these heavier nuclei should attain velocity equal to the positron and electron velocity. Rotation as possible energy source Because of the enormous amount of energy needed to launch a relativistic jet, some jets are possibly powered by spinning black holes. However, the frequency of high-energy astrophysical sources with jets suggests combinations of different mechanisms indirectly identified with the energy within the associated accretion disk and X-ray emissions from the generating source. 
Two early theories have been used to explain how energy can be transferred from a black hole into an astrophysical jet: Blandford–Znajek process. This theory explains the extraction of energy from magnetic fields around an accretion disk, which are dragged and twisted by the spin of the black hole. Relativistic material is then feasibly launched by the tightening of the field lines. Penrose mechanism. Here energy is extracted from a rotating black hole by frame dragging, which was later theoretically proven by Reva Kay Williams to be able to extract relativistic particle energy and momentum, and subsequently shown to be a possible mechanism for jet formation. This effect relies on general relativistic gravitomagnetism. Relativistic jets from neutron stars Jets may also be observed from spinning neutron stars. An example is pulsar IGR J11014-6103, which has the largest jet so far observed in the Milky Way, and whose velocity is estimated at 80% of the speed of light (0.8c). X-ray observations have been obtained, but there is no detected radio signature or accretion disk. Initially, this pulsar was presumed to be rapidly spinning, but later measurements indicate the spin rate is only 15.9 Hz. Such a slow spin rate and the lack of accretion material suggest the jet is neither rotation nor accretion powered, though it appears aligned with the pulsar rotation axis and perpendicular to the pulsar's true motion. See also disk wind, a slower wide-angle outflow, often occurring together with a jet Accretion disk Bipolar outflow Blandford–Znajek process Herbig–Haro object Penrose process CGCG 049-033, an elliptical galaxy located 600 million light-years from Earth, known for having the longest galactic jet discovered Gamma-ray burst Solar jet References External links NASA – Ask an Astrophysicist: Black Hole Bipolar Jets SPACE.com – Twisted Physics: How Black Holes Spout Off Hubble Video Shows Shock Collision inside Black Hole Jet (Article) Space plasmas Black holes Jet, Astrophysical Concepts in stellar astronomy
Astrophysical jet
[ "Physics", "Astronomy" ]
1,147
[ "Space plasmas", "Black holes", "Physical phenomena", "Concepts in astrophysics", "Physical quantities", "Concepts in astronomy", "Unsolved problems in physics", "Astrophysics", "Density", "Concepts in stellar astronomy", "Stellar phenomena", "Astronomical objects" ]
1,358,654
https://en.wikipedia.org/wiki/Measurement%20tower
A measurement tower or measurement mast, also known as a meteorological tower or meteorological mast (met tower or met mast), is a free-standing tower or guyed mast that carries meteorological measuring instruments, such as thermometers and anemometers (instruments to measure wind speed). Measurement towers are an essential component of rocket launching sites, since exact wind conditions must be known before a rocket launch can proceed. Met masts are crucial in the development of wind farms, as precise knowledge of the wind speed is needed to predict how much energy will be produced and whether the turbines will survive on the site. Measurement towers are also used in other contexts, for instance near nuclear power stations, and by ASOS stations. Examples Meteorology Other measurement towers Aerial test facility Brück, Brück, Germany BREN Tower, Nevada Test Site, USA Wind farm development Before developers construct a wind farm, they first measure the wind resource on a prospective site by erecting temporary measurement towers. Typically these mount anemometers at a range of heights up to the hub height of the proposed wind turbines, and log the wind speed data at frequent intervals (e.g. every ten minutes) for at least one year and preferably two or more. The data allow the developer to determine whether the site is economically viable for a wind farm, and to choose wind turbines optimized for the local wind speed distribution. See also Automatic weather station#Mast Guyed mast Radio masts and towers Truss tower References Meteorological instrumentation and equipment Towers
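As a minimal sketch of how such logged records are used: the power available to a turbine scales with the cube of the wind speed, so a site's resource is often summarized as the mean wind power density ½ρ⟨v³⟩ over the logged 10-minute means. All values below are assumptions for illustration (sea-level air density, a synthetic Weibull-distributed year of records), not data from any real site:

```python
import numpy as np

RHO = 1.225  # air density at sea level, kg/m^3 (assumed)

def wind_power_density(speeds_m_s):
    """Mean wind power density (W/m^2) from a series of 10-minute mean wind
    speeds. Uses the mean of v**3, since power scales with v^3 and
    cubing the mean speed instead would underestimate the resource."""
    v = np.asarray(speeds_m_s, dtype=float)
    return 0.5 * RHO * np.mean(v ** 3)

# Hypothetical year of 10-minute records (365*24*6 = 52560 samples),
# Weibull-distributed with shape k=2 and mean ~7 m/s:
rng = np.random.default_rng(0)
speeds = rng.weibull(2.0, size=52560) * 7.9
print(f"mean speed:    {speeds.mean():.2f} m/s")
print(f"power density: {wind_power_density(speeds):.0f} W/m^2")
```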
Measurement tower
[ "Technology", "Engineering" ]
305
[ "Structural engineering", "Towers", "Meteorological instrumentation and equipment", "Measuring instruments" ]
1,358,686
https://en.wikipedia.org/wiki/Bomb%20tower
A bomb tower is a lightly constructed tower, often 100 to 700 feet (30 to 210 meters) high, built to hold a nuclear weapon for an aboveground nuclear test. The tower holds the bomb at a known height to allow investigation of its destructive effects (such as the effects of burst height and distance for a given explosive yield) and to allow the adjustment of measuring instruments, such as high-speed cameras. Normally, the bomb tower will disintegrate completely on detonation due to the enormous heat of the explosion. References Nuclear weapons testing Towers
Bomb tower
[ "Technology", "Engineering" ]
108
[ "Structural engineering", "Environmental impact of nuclear power", "Nuclear weapons testing", "Towers" ]
20,145,865
https://en.wikipedia.org/wiki/Decoherence-free%20subspaces
A decoherence-free subspace (DFS) is a subspace of a quantum system's Hilbert space that is invariant to non-unitary dynamics. Alternatively stated, a DFS is a small section of the system Hilbert space in which the system is decoupled from the environment, so that its evolution is completely unitary. DFSs can also be characterized as a special class of quantum error correcting codes. In this representation they are passive error-preventing codes, since information encoded in these subspaces may not require any active stabilization methods. These subspaces prevent destructive environmental interactions by isolating quantum information. As such, they are an important subject in quantum computing, where (coherent) control of quantum systems is the desired goal. Decoherence creates problems in this regard by causing loss of coherence between the quantum states of a system and therefore the decay of their interference terms, thus leading to loss of information from the (open) quantum system to the surrounding environment. Since quantum computers cannot be isolated from their environment (i.e. we cannot have a truly isolated quantum system in the real world) and information can be lost, the study of DFSs is important for the real-world implementation of quantum computers. Background Origins The study of DFSs began with a search for structured methods to avoid decoherence in the subject of quantum information processing (QIP). The methods involved attempts to identify particular states which have the potential of being unchanged by certain decohering processes (i.e. certain interactions with the environment). These studies started with observations made by G.M. Palma, K-A Suominen, and A.K. Ekert, who studied the consequences of pure dephasing on two qubits that have the same interaction with the environment. They found that two such qubits do not decohere. Originally the term "sub-decoherence" was used by Palma to describe this situation. Also noteworthy is independent work by Martin Plenio, Vlatko Vedral and Peter Knight, who constructed an error correcting code with codewords that are invariant under a particular unitary time evolution in spontaneous emission. Further development Shortly afterwards, L-M Duan and G-C Guo also studied this phenomenon and reached the same conclusions as Palma, Suominen, and Ekert. However, Duan and Guo applied their own terminology, using "coherence preserving states" to describe states that do not decohere with dephasing. Duan and Guo extended this idea of combining two qubits to preserve coherence against dephasing to both collective dephasing and dissipation, showing that decoherence is prevented in such situations. This was shown by assuming knowledge of the system-environment coupling strength. However, such models were limited since they dealt solely with the decoherence processes of dephasing and dissipation. To deal with other types of decoherence, the previous models presented by Palma, Suominen, and Ekert, and Duan and Guo were cast into a more general setting by P. Zanardi and M. Rasetti. They expanded the existing mathematical framework to include more general system-environment interactions, such as collective decoherence (the same decoherence process acting on all the states of a quantum system) and general Hamiltonians. Their analysis gave the first formal and general circumstances for the existence of decoherence-free (DF) states, which did not rely upon knowing the system-environment coupling strength. 
Zanardi and Rasetti called these DF states "error avoiding codes". Subsequently, Daniel A. Lidar proposed the title "decoherence-free subspace" for the space in which these DF states exist. Lidar studied the strength of DF states against perturbations and discovered that the coherence prevalent in DF states can be upset by evolution of the system Hamiltonian. This observation discerned another prerequisite for the possible use of DF states for quantum computation. A thoroughly general requirement for the existence of DF states was obtained by Lidar, D. Bacon, and K.B. Whaley expressed in terms of the Kraus operator-sum representation (OSR). Later, A. Shabani and Lidar generalized the DFS framework relaxing the requirement that the initial state needs to be a DF-state and modified some known conditions for DFS. Recent research A subsequent development was made in generalizing the DFS picture when E. Knill, R. Laflamme, and L. Viola introduced the concept of a "noiseless subsystem". Knill extended to higher-dimensional irreducible representations of the algebra generating the dynamical symmetry in the system-environment interaction. Earlier work on DFSs described DF states as singlets, which are one-dimensional irreducible representations. This work proved to be successful, as a result of this analysis was the lowering of the number of qubits required to build a DFS under collective decoherence from four to three. The generalization from subspaces to subsystems formed a foundation for combining most known decoherence prevention and nulling strategies. Conditions for the existence of decoherence-free subspaces Hamiltonian formulation Consider an N-dimensional quantum system S coupled to a bath B and described by the combined system-bath Hamiltonian as follows: where the interaction Hamiltonian is given in the usual way as and where act upon the system (bath) only, and is the system (bath) Hamiltonian, and is the identity operator acting on the system (bath). Under these conditions, the dynamical evolution within , where is the system Hilbert space, is completely unitary (all possible bath states) if and only if: In other words, if the system begins in (i.e. the system and bath are initially decoupled) and the system Hamiltonian leaves invariant, then is a DFS if and only if it satisfies (i). These states are degenerate eigenkets of and thus are distinguishable, hence preserving information in certain decohering processes. Any subspace of the system Hilbert space that satisfies the above conditions is a decoherence-free subspace. However, information can still "leak" out of this subspace if condition (iii) is not satisfied. Therefore, even if a DFS exists under the Hamiltonian conditions, there are still non-unitary actions that can act upon these subspaces and take states out of them into another subspace, which may or may not be a DFS, of the system Hilbert space. Operator-sum representation formulation Let be an N-dimensional DFS, where is the system's (the quantum system alone) Hilbert space. The Kraus operators when written in terms of the basis states that span are given as: where ( is the combined system-bath Hamiltonian), acts on , and is an arbitrary matrix that acts on (the orthogonal complement to ). Since operates on , then it will not create decoherence in ; however, it can (possibly) create decohering effects in . 
Consider the basis kets which span and, furthermore, they fulfill: is an arbitrary unitary operator and may or may not be time-dependent, but it is independent of the indexing variable . The 's are complex constants. Since spans , then any pure state can be written as a linear combination of these basis kets: This state will be decoherence-free; this can be seen by considering the action of on : Therefore, in terms of the density operator representation of , , the evolution of this state is: The above expression says that is a pure state and that its evolution is unitary, since is unitary. Therefore, any state in will not decohere since its evolution is governed by a unitary operator and so its dynamical evolution will be completely unitary. Thus is a decoherence-free subspace. The above argument can be generalized to an initial arbitrary mixed state as well. Semigroup formulation This formulation makes use of the semigroup approach. The Lindblad decohering term determines when the dynamics of a quantum system will be unitary; in particular, when , where is the density operator representation of the state of the system, the dynamics will be decoherence-free. Let span , where is the system's Hilbert space. Under the assumptions that: a necessary and sufficient condition for to be a DFS is : The above expression states that all basis states are degenerate eigenstates of the error generators As such, their respective coherence terms do not decohere. Thus states within will remain mutually distinguishable after a decohering process since their respective eigenvalues are degenerate and hence identifiable after action under the error generators. DFSs as a special class of information-preserving structures (IPS) and quantum error-correcting codes (QECCs) Information-preserving structures (IPS) DFSs can be thought of as "encoding" information through its set of states. To see this, consider a d-dimensional open quantum system that is prepared in the state - a non-negative (i.e. its eigenvalues are positive), trace-normalized (), density operator that belongs to the system's Hilbert–Schmidt space, the space of bounded operators on (). Suppose that this density operator(state) is selected from a set of states , a DFS of (the system's Hilbert space) and where . This set of states is called a code, because the states within this set encode particular kind of information; that is, the set S encodes information through its states. This information that is contained within must be able to be accessed; since the information is encoded in the states in , these states must be distinguishable to some process, say, that attempts to acquire the information. Therefore, for two states , the process is information preserving for these states if the states remain as distinguishable after the process as they were before it. Stated in a more general manner, a code (or DFS) is preserved by a process if and only if each pair of states is as distinguishable after is applied as they were before it was applied. A more practical description would be: is preserved by a process if and only if and This just says that is a 1:1 trace-distance-preserving map on . In this picture DFSs are sets of states (codes rather) whose mutual distinguishability is unaffected by a process . Quantum error-correcting codes (QECCs) Since DFSs can encode information through their sets of states, then they are secure against errors (decohering processes). 
In this way DFSs can be looked at as a special class of QECCs, where information is encoded into states which can be disturbed by an interaction with the environment but retrieved by some reversal process. Consider a code , which is a subspace of the system Hilbert space, with encoded information given by (i.e. the "codewords"). This code can be implemented to protect against decoherence and thus prevent loss of information in a small section of the system's Hilbert space. The errors are caused by interaction of the system with the environment (bath) and are represented by the Kraus operators. After the system has interacted with the bath, the information contained within must be able to be "decoded"; therefore, to retrieve this information a recovery operator is introduced. So a QECC is a subspace along with a set of recovery operators Let be a QECC for the error operators represented by the Kraus operators , with recovery operators Then is a DFS if and only if upon restriction to , then , where is the inverse of the system evolution operator. In this picture of reversal of quantum operations, DFSs are a special instance of the more general QECCs whereupon restriction to a given a code, the recovery operators become proportional to the inverse of the system evolution operator, hence allowing for unitary evolution of the system. Notice that the subtle difference between these two formulations exists in the two words preserving and correcting; in the former case, error-prevention is the method used whereas in the latter case it is error-correction. Thus the two formulations differ in that one is a passive method and the other is an active method. Example of a decoherence-free subspace Collective dephasing Consider a two-qubit Hilbert space, spanned by the basis qubits which undergo collective dephasing. A random phase will be created between these basis qubits; therefore, the qubits will transform in the following way: Under this transformation the basis states obtain the same phase factor . Thus in consideration of this, a state can be encoded with this information (i.e. the phase factor) and thus evolve unitarily under this dephasing process, by defining the following encoded qubits: Since these are basis qubits, then any state can be written as a linear combination of these states; therefore, This state will evolve under the dephasing process as: However, the overall phase for a quantum state is unobservable and, as such, is irrelevant in the description of the state. Therefore, remains invariant under this dephasing process and hence the basis set is a decoherence-free subspace of the 4-dimensional Hilbert space. Similarly, the subspaces are also DFSs. Alternative: decoherence-free subsystems Consider a quantum system with an N-dimensional system Hilbert space that has a general subsystem decomposition The subsystem is a decoherence-free subsystem with respect to a system-environment coupling if every pure state in remains unchanged with respect to this subsystem under the OSR evolution. This is true for any possible initial condition of the environment. To understand the difference between a decoherence-free subspace and a decoherence-free subsystem, consider encoding a single qubit of information into a two-qubit system. This two-qubit system has a 4-dimensional Hilbert space; one method of encoding a single qubit into this space is by encoding information into a subspace that is spanned by two orthogonal qubits of the 4-dimensional Hilbert space. 
Suppose information is encoded in the orthogonal state in the following way: This shows that information has been encoded into a subspace of the two-qubit Hilbert space. Another way of encoding the same information is to encode only one of the qubits of the two qubits. Suppose the first qubit is encoded, then the state of the second qubit is completely arbitrary since: This mapping is a one-to-many mapping from the one qubit encoding information to a two-qubit Hilbert space. Instead, if the mapping is to , then it is identical to a mapping from a qubit to a subspace of the two-qubit Hilbert space. See also Quantum decoherence Quantum measurement References Quantum measurement Quantum information science
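A minimal numerical sketch of the collective-dephasing example above, assuming the usual convention that dephasing maps |1⟩ to e^{iφ}|1⟩ on each qubit with a shared random phase φ: averaging over φ leaves any state encoded in span{|01⟩, |10⟩} pure, while a superposition such as (|00⟩ + |11⟩)/√2, which lies outside this DFS, loses purity.

```python
import numpy as np

rng = np.random.default_rng(1)

def dephase_avg(psi, samples=5000):
    """Average density matrix of |psi> under collective dephasing:
    |0> -> |0>, |1> -> e^{i phi}|1> on both qubits, with a common random phi.
    Basis ordering: |00>, |01>, |10>, |11>."""
    rho = np.zeros((4, 4), dtype=complex)
    for _ in range(samples):
        phi = rng.uniform(0, 2 * np.pi)
        U = np.diag([1, np.exp(1j * phi), np.exp(1j * phi), np.exp(2j * phi)])
        v = U @ psi
        rho += np.outer(v, v.conj())
    return rho / samples

# Encoded qubit a|01> + b|10> (inside the DFS) vs (|00> + |11>)/sqrt(2) (outside):
inside = np.array([0, 0.6, 0.8, 0], dtype=complex)
outside = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
for name, psi in [("DFS state", inside), ("non-DFS state", outside)]:
    purity = np.trace(dephase_avg(psi) @ dephase_avg(psi)).real
    print(f"{name}: purity after collective dephasing = {purity:.3f}")
# The DFS state keeps purity 1.0 (only a global phase was acquired);
# the non-DFS state decoheres toward purity 0.5.
```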
Decoherence-free subspaces
[ "Physics" ]
3,117
[ "Quantum measurement", "Quantum mechanics" ]
20,148,129
https://en.wikipedia.org/wiki/Eco-social%20market%20economy
The eco-social market economy (ESME), also known as the socio-ecological market economy (SEME) or social and ecological market economy, aims at balancing free market economics, striving for social fairness, and the sustainable use and protection of natural resources. Developed by the Austrian politician Josef Riegler during the 1980s, it expands on the original concept of the social market economy, an economic model first implemented under Konrad Adenauer, and is considered the economic format that is followed by the majority of European nations. Definition and aims The eco-social market economy is a holistic model based on a strong and innovative market economy. It requires that the protection of the environment and social fairness be vital criteria for all economic activity. The protection of the environment and of habitats for future generations is a central issue for eco-social market economies. Its supporters maintain that free markets alone are neither able nor motivated to protect the environment, hence government action is necessary. The creation of higher social and environmental standards, especially in developing countries, is seen as a vital step towards world peace in the future. The eco-social market economy aims at "higher chances for the brave, more solidarity and more responsibility for natural habitats". Proposed measures Frameworks and guidelines for fair competition must be implemented, not only in the EU but on a global level; creating such a political landscape of global connectivity and cooperation is seen as a foremost task of politics. Eco-social market economists support the implementation of the Millennium Development Goals and the Kyoto Protocol, and demand stronger cooperation between the United Nations, the World Trade Organization, the International Monetary Fund and the International Labour Organization to create this framework. On a national level, the model favours higher taxes on fossil-fuel energy sources (environmental taxes) while at the same time lowering income taxes. Public subsidies must, in this view, only be paid to promote sustainability. Environmental pollution and resource use must be included in the cost calculation of production processes and in product prices. A strong emphasis on education about environmental protection is seen as necessary to create market awareness. Global Marshall Plan initiative The idea of a Global Marshall Plan, first brought forward by Al Gore in the 1990s, is a main part of eco-social thinking. The idea of a Global Marshall Plan is based on two pillars: Innovative additional fundraising required for the actual realisation of the UN Millennium Development Goals on the basis of partnerships, co-responsibility and good governance. The achievement of a global eco-social market economy by means of implementing the same ecological and social standards in all institutions and agreements on a global scale. Proposed funding sources for these development measures are a levy on financial transactions, a kerosene tax, or special drawing rights with the IMF. The big challenge is finding an effective way of translating money into development without losing funds to corruption worldwide. Among the prominent supporters of the initiative are Muhammad Yunus, Hans-Dietrich Genscher, Ernst Ulrich von Weizsäcker and Jane Goodall. See also Carbon fee and dividend References External links European Eco-social Forum Global Marshall Plan Initiative Free market Environmental economics
Eco-social market economy
[ "Environmental_science" ]
624
[ "Environmental economics", "Environmental social science" ]
20,151,587
https://en.wikipedia.org/wiki/Stencil%20lithography
Stencil lithography is a novel method of fabricating nanometer-scale patterns using nanostencils, i.e. stencils (shadow masks) with nanometer-size apertures. It is a resist-less, simple, parallel nanolithography process, and it does not involve any heat or chemical treatment of the substrates (unlike resist-based techniques). History Stencil lithography was first reported in a scientific journal as a micro-structuring technique by S. Gray and P. K. Weimer in 1959. They used long stretched metallic wires as shadow masks during metal deposition. Various materials can be used as membranes, such as metals, Si, SixNy, and polymers. Today the stencil apertures can be scaled down to sub-micrometer size at full 4" wafer scale. This is called a nanostencil. Nano-scale stencil apertures have been fabricated using laser interference lithography (LIL), electron beam lithography, and focused ion beam lithography. Processes Several processes are available using stencil lithography: material deposition and etching, as well as implantation of ions. Different stencil requirements apply for the various processes, e.g. an extra etch-resistant layer on the backside of the stencil for etching (if the membrane material is sensitive to the etching process) or a conductive layer on the backside of the stencil for ion implantation. Deposition The main deposition method used with stencil lithography is physical vapor deposition. This includes thermal and electron beam physical vapor deposition, molecular beam epitaxy, sputtering, and pulsed laser deposition. The more directional the material flux is, the more accurately the pattern is transferred from the stencil to the substrate. Etching Reactive ion etching is based on ionized, accelerated particles that etch the substrate both chemically and physically. The stencil in this case is used as a hard mask, protecting the covered regions of the substrate, while allowing the substrate under the stencil apertures to be etched. Ion implantation Here the thickness of the membrane has to be greater than the penetration length of the ions in the membrane material. The ions will then implant only under the stencil apertures, into the substrate. Modes There are three main modes of operation of stencil lithography: static, quasi-dynamic and dynamic. While all the above described processes have been proven using the static mode (the stencil doesn't move relative to the substrate during material or ion processing), only ion implantation has been shown for the non-static modes (quasi-dynamic). Static stencil In the static mode, the stencil is aligned (if necessary) and fixed to a substrate. The stencil-substrate pair is placed in the evaporation/etching/ion implantation machine, and after the processing is done, the stencil is simply removed from the now patterned substrate. Quasi-dynamic stencil In the quasi-dynamic mode (or step-and-repeat), the stencil moves relative to the substrate in between depositions, without breaking the vacuum. Dynamic stencil In the dynamic mode, the stencil moves relative to the substrate during deposition, allowing the fabrication of patterns with variable height profiles by changing the stencil speed during a constant material deposition rate. For motion in one dimension, the deposited material has a height profile given by the convolution h(x) = R (T ∗ s)(x) = R ∫ T(x′) s(x − x′) dx′, where T(x) is the time the mask resides at longitudinal position x, and R is the constant deposition rate. s(x) represents the height profile that would be produced by a static immobile mask (inclusive of any blurring). 
Programmable-height nanostructures as small as 10 nm can be produced. Challenges Despite its versatility, there are still several challenges to be addressed by stencil lithography. During deposition through the stencil, material is deposited not only on the substrate through the apertures but also on the stencil backside, including around and inside the apertures. This reduces the effective aperture size by an amount proportional to the thickness of the deposited material, leading ultimately to aperture clogging. The accuracy of the pattern transfer from the stencil to the substrate depends on many parameters. The material diffusion on the substrate (as a function of temperature, material type and evaporation angle) and the geometrical setup of the evaporation are the main factors. Both lead to an enlargement of the initial pattern, called blurring. See also Lithography References Series in MICROSYSTEMS Vol. 20: Marc Antonius Friedrich van den Boogaart, "Stencil lithography: An ancient technique for advanced micro- and nanopatterning", 2006, VIII, 182 p. External links http://lmis1.epfl.ch/page-34708-en.html http://www.microlitho.com/ Lithography (microfabrication)
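A minimal sketch of the dwell-time convolution above, discretizing h(x) = R (T ∗ s)(x) with an assumed Gaussian-blurred aperture shape and a linearly ramped dwell time, which produces a wedge-like programmable height profile; all numbers are illustrative assumptions, not process parameters:

```python
import numpy as np

def height_profile(dwell_time, static_profile, rate, dx):
    """Discrete version of h(x) = R * integral T(x') s(x - x') dx':
    dwell_time[i] is the time spent per unit length at position i*dx,
    static_profile is the (blurred) aperture shape s, and rate is R."""
    return rate * np.convolve(dwell_time, static_profile, mode="same") * dx

dx = 1.0                                          # nm per grid point (assumed)
x = np.arange(-50, 51) * dx
s = np.exp(-x**2 / (2 * 5.0**2))                  # Gaussian-blurred aperture shape
T = np.where(np.abs(x) < 30, 1.0 + x / 60.0, 0.0) # ramped dwell time -> wedge
h = height_profile(T, s, rate=0.1, dx=dx)         # 0.1 nm/s deposition rate (assumed)
print(h[::10].round(2))                            # sampled wedge-shaped profile
```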
Stencil lithography
[ "Materials_science" ]
1,038
[ "Nanotechnology", "Microtechnology", "Lithography (microfabrication)" ]
20,155,643
https://en.wikipedia.org/wiki/Lee%E2%80%93Kesler%20method
The Lee–Kesler method allows the estimation of the saturated vapor pressure at a given temperature for all components for which the critical pressure Pc, the critical temperature Tc, and the acentric factor ω are known. Equations ln Pr = f(0) + ω·f(1) f(0) = 5.92714 − 6.09648/Tr − 1.28862·ln(Tr) + 0.169347·Tr^6 f(1) = 15.2518 − 15.6875/Tr − 13.4721·ln(Tr) + 0.43577·Tr^6 with Pr = P/Pc (reduced pressure) and Tr = T/Tc (reduced temperature). Typical errors The prediction error can be up to 10% for polar components and small pressures, and the calculated pressure is typically too low. For pressures above 1 bar, that is, above the normal boiling point, the typical errors are below 2%. Example calculation For benzene with Tc = 562.12 K Pc = 4898 kPa Tb = 353.15 K ω = 0.2120 the following calculation for T = Tb results: Tr = 353.15 / 562.12 = 0.628247 f(0) = −3.167428 f(1) = −3.429560 Pr = exp( f(0) + ω f(1) ) = 0.020354 P = Pr · Pc = 99.69 kPa The correct result would be P = 101.325 kPa, the normal (atmospheric) pressure. The deviation is −1.63 kPa or −1.61 %. It is important to use the same absolute units for T and Tc as well as for P and Pc. The unit system used (K or R for T) is irrelevant because of the usage of the reduced values Tr and Pr. See also Vapour pressure of water Antoine equation Tetens equation Arden Buck equation Goff–Gratch equation References Thermodynamic models
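A minimal sketch reproducing the benzene example with the correlation as written above (the function name and structure are illustrative, not a standard library API):

```python
import math

def lee_kesler_psat(T, Tc, Pc, omega):
    """Saturated vapor pressure via the Lee-Kesler correlation.
    T and Tc must share the same absolute units; P is returned
    in the units of Pc."""
    Tr = T / Tc
    f0 = 5.92714 - 6.09648 / Tr - 1.28862 * math.log(Tr) + 0.169347 * Tr**6
    f1 = 15.2518 - 15.6875 / Tr - 13.4721 * math.log(Tr) + 0.43577 * Tr**6
    return Pc * math.exp(f0 + omega * f1)

# Benzene at its normal boiling point (values from the article):
P = lee_kesler_psat(T=353.15, Tc=562.12, Pc=4898.0, omega=0.2120)
print(f"{P:.2f} kPa")  # ~99.7 kPa, vs. the true 101.325 kPa (-1.6%)
```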
Lee–Kesler method
[ "Physics", "Chemistry" ]
334
[ "Thermodynamic models", "Thermodynamics" ]
20,156,881
https://en.wikipedia.org/wiki/Bode%27s%20sensitivity%20integral
Bode's sensitivity integral, discovered by Hendrik Wade Bode, is a formula that quantifies some of the limitations in feedback control of linear time-invariant systems. Let L be the loop transfer function and S be the sensitivity function. In the diagram, P is a dynamical process that has a transfer function P(s). The controller, C, has the transfer function C(s). The controller attempts to cause the process output, y, to track the reference input, r. Disturbances, d, and measurement noise, n, may cause undesired deviations of the output. Loop gain is defined by L(s) = P(s)C(s). The following holds: ∫₀^∞ ln|S(jω)| dω = π Σk Re(pk) − (π/2)·lim(s→∞) s·L(s) where the pk are the poles of L in the right half plane (unstable poles). If L has at least two more poles than zeros, and has no poles in the right half plane (is stable), the equation simplifies to: ∫₀^∞ ln|S(jω)| dω = 0 This equality shows that if sensitivity to disturbance is suppressed at some frequency range, it is necessarily increased at some other range. This has been called the "waterbed effect." References Further reading Karl Johan Åström and Richard M. Murray. Feedback Systems: An Introduction for Scientists and Engineers. Chapter 11 - Frequency Domain Design. Princeton University Press, 2008. http://www.cds.caltech.edu/~murray/amwiki/Frequency_Domain_Design External links WaterbedITOOL - Interactive software tool to analyze, learn/teach the Waterbed effect in linear control systems. Gunter Stein's Bode Lecture on fundamental limitations on the achievable sensitivity function expressed by Bode's integral. Use of Bode's Integral Theorem (circa 1945) - NASA publication. See also Bode plot Sensitivity (control systems) Control theory
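A numerical sketch of the simplified form, using the assumed example loop L(s) = 1/(s+1)² (stable, with two more poles than zeros, and a stable closed loop): the integral of ln|S(jω)| evaluates to approximately zero even though ln|S| is negative at low frequency and positive at higher frequency, which is exactly the waterbed effect.

```python
import numpy as np

def log_sensitivity_integral(L, w):
    """Numerically approximate the Bode integral of ln|S(jw)|, with S = 1/(1+L)."""
    S = 1.0 / (1.0 + L(1j * w))
    return np.trapz(np.log(np.abs(S)), w)

# Example loop: L(s) = 1/(s+1)^2 -- no RHP poles, relative degree 2.
L = lambda s: 1.0 / (s + 1.0) ** 2
w = np.linspace(1e-6, 1e4, 2_000_000)  # dense frequency grid (rad/s)
print(f"integral of ln|S| ~ {log_sensitivity_integral(L, w):.4f}")  # ~ 0
```

Negative area below the 0 dB line at low frequency is balanced by positive area above it around and beyond the crossover, so pushing sensitivity down in one band pushes it up elsewhere.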
Bode's sensitivity integral
[ "Mathematics" ]
373
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
24,209,925
https://en.wikipedia.org/wiki/Mirtron
Mirtrons are a type of microRNA located in the introns of mRNA-encoding host genes. These short hairpin introns are formed via atypical miRNA biogenesis pathways: they arise from spliced-out introns and function in gene expression. Mirtrons were first identified in Drosophila melanogaster and Caenorhabditis elegans. To date, 14, 9, and 19 mirtrons have been identified in D. melanogaster, C. elegans and mammals, respectively. Mirtrons are alternative precursors for microRNA biogenesis. The short hairpin introns use splicing to bypass DROSHA cleavage, which is otherwise essential for the generation of canonical animal microRNAs. Like classical microRNAs (miRs), mirtrons regulate gene expression by mRNA destabilisation, inhibition of translation or target mRNA cleavage. More evidence is now emerging that supports the existence of mirtrons in plants. In plants, all miRNAs are derived from sequential DCL1 cleavages of the pri-miRNA to give the pre-miRNA (miRNA precursor), but mirtrons bypass the DCL1 cleavage and enter the miRNA maturation pathway as pre-miRNA. Mirtrons are distinct from canonical miRNA sequences, and can be distinguished with machine learning methods in data analysis. References RNA MicroRNA
Mirtron
[ "Chemistry" ]
319
[ "Biochemistry stubs", "Molecular and cellular biology stubs" ]
24,210,239
https://en.wikipedia.org/wiki/C16H12FN3O3
{{DISPLAYTITLE:C16H12FN3O3}} The molecular formula C16H12FN3O3 (molar mass: 313.283 g/mol, exact mass: 313.0863 u) may refer to: Flubendazole Flunitrazepam Molecular formulas
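The molar mass quoted in entries like this one is simply a weighted sum over the formula; a minimal sketch, assuming standard rounded atomic weights and simple Hill-notation formulas without parentheses:

```python
import re

# Standard atomic weights in g/mol, rounded (assumed values)
WEIGHTS = {"C": 12.011, "H": 1.008, "F": 18.998, "N": 14.007, "O": 15.999}

def molar_mass(formula):
    """Molar mass of a simple formula like 'C16H12FN3O3'."""
    total = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += WEIGHTS[element] * int(count or 1)
    return total

print(f"{molar_mass('C16H12FN3O3'):.2f} g/mol")  # ~313.29, matching 313.283 to rounding
```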
C16H12FN3O3
[ "Physics", "Chemistry" ]
71
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
28,760,028
https://en.wikipedia.org/wiki/P-adic%20exponential%20function
In mathematics, particularly p-adic analysis, the p-adic exponential function is a p-adic analogue of the usual exponential function on the complex numbers. As in the complex case, it has an inverse function, named the p-adic logarithm. Definition The usual exponential function on C is defined by the infinite series exp(z) = Σn≥0 z^n/n!. Entirely analogously, one defines the exponential function on Cp, the completion of the algebraic closure of Qp, by expp(z) = Σn≥0 z^n/n!. However, unlike exp, which converges on all of C, expp only converges on the disc |z|p < p^(−1/(p−1)). This is because p-adic series converge if and only if the summands tend to zero, and since the n! in the denominator of each summand tends to make them large p-adically, a small value of z is needed in the numerator. It follows from Legendre's formula that if |z|p < p^(−1/(p−1)) then z^n/n! tends to 0, p-adically. Although the p-adic exponential is sometimes denoted e^x, the number e itself has no p-adic analogue. This is because the power series expp(x) does not converge at x = 1. It is possible to choose a number e to be a p-th root of expp(p) for p ≠ 2, but there are multiple such roots and there is no canonical choice among them. p-adic logarithm function The power series logp(1 + x) = Σn≥1 (−1)^(n+1) x^n/n converges for x in Cp satisfying |x|p < 1 and so defines the p-adic logarithm function logp(z) for |z − 1|p < 1, satisfying the usual property logp(zw) = logpz + logpw. The function logp can be extended to all of Cp× (the set of nonzero elements of Cp) by imposing that it continues to satisfy this last property and setting logp(p) = 0. Specifically, every element w of Cp× can be written as w = p^r·ζ·z with r a rational number, ζ a root of unity, and |z − 1|p < 1, in which case logp(w) = logp(z). This function on Cp× is sometimes called the Iwasawa logarithm to emphasize the choice of logp(p) = 0. In fact, there is an extension of the logarithm from |z − 1|p < 1 to all of Cp× for each choice of logp(p) in Cp. Properties If z and w are both in the radius of convergence for expp, then their sum is too and we have the usual addition formula: expp(z + w) = expp(z)expp(w). Similarly if z and w are nonzero elements of Cp then logp(zw) = logpz + logpw. For z in the domain of expp, we have expp(logp(1+z)) = 1+z and logp(expp(z)) = z. The roots of the Iwasawa logarithm logp(z) are exactly the elements of Cp of the form p^r·ζ where r is a rational number and ζ is a root of unity. Note that there is no analogue in Cp of Euler's identity, e^(2πi) = 1. This is a corollary of Strassmann's theorem. Another major difference from the situation in C is that the domain of convergence of expp is much smaller than that of logp. A modified exponential function, the Artin–Hasse exponential, can be used instead; it converges on |z|p < 1. Notes References External links p-adic exponential and p-adic logarithm Exponentials p-adic numbers
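A small sketch of the convergence criterion via Legendre's formula: the valuation of the n-th term is v_p(z^n/n!) = n·v_p(z) − (n − s_p(n))/(p − 1), which tends to +∞ (so the term tends to 0 p-adically) exactly when v_p(z) > 1/(p − 1), i.e. when |z|p < p^(−1/(p−1)).

```python
def s_p(n, p):
    """Sum of the base-p digits of n."""
    total = 0
    while n:
        total += n % p
        n //= p
    return total

def term_valuation(n, v, p):
    """v_p(z^n / n!) for v_p(z) = v, using Legendre's formula
    v_p(n!) = (n - s_p(n)) / (p - 1)."""
    return n * v - (n - s_p(n)) / (p - 1)

p = 5
for v in (0, 1):  # v must exceed 1/(p-1) = 0.25 for the series to converge
    vals = [term_valuation(n, v, p) for n in (5, 25, 125, 625)]
    print(f"v_p(z) = {v}: term valuations {vals}")
# v=0: valuations -> -infinity (series diverges);
# v=1: valuations -> +infinity (terms -> 0, series converges)
```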
P-adic exponential function
[ "Mathematics" ]
778
[ "E (mathematical constant)", "P-adic numbers", "Exponentials", "Number theory" ]
28,763,449
https://en.wikipedia.org/wiki/C31H52N2O23
{{DISPLAYTITLE:C31H52N2O23}} The molecular formula C31H52N2O23 (molar mass: 820.74 g/mol) may refer to: Sialyl-LewisA Sialyl-LewisX Molecular formulas
C31H52N2O23
[ "Physics", "Chemistry" ]
60
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
28,764,397
https://en.wikipedia.org/wiki/Bjerrum%20defect
A Bjerrum defect is a crystallographic defect which is specific to ice, and which is partly responsible for the electrical properties of ice. It was first proposed by Niels Bjerrum in 1952 in order to explain the electrical polarization of ice in an electric field. A hydrogen bond normally has one proton, but a hydrogen bond with a Bjerrum defect will have either two protons (D defect, from "doppel" in German, meaning "double") or no proton (L defect, from "leer" in German, meaning "empty"). D-defects are more energetically favorable than L-defects. The unfavorable defect strain is resolved when a water molecule pivots about an oxygen atom to produce hydrogen bonds with single protons. Dislocations of ice Ih along a slip plane create pairs of Bjerrum defects, one D defect and one L defect. Nonpolar molecules such as methane can form clathrate hydrates with water, especially under high pressure. Although there is no hydrogen bonding of water molecules when methane is the guest molecule of the clathrate, guest-host hydrogen bonding often forms with guest molecules in clathrates of many larger organic molecules, such as pinacolone and tetrahydrofuran. In such cases the guest-host hydrogen bonds result in the formation of L-type Bjerrum defect in the clathrate lattice. Oxygen atoms (in alcohol or carbonyl functional groups) and nitrogen atoms (in amine functional groups) in the guest molecules lead to transient hydrogen bonds and misoriented water molecules in the hydrate lattice. See also Ice rules References Water ice Crystallographic defects Electrochemistry
Bjerrum defect
[ "Chemistry", "Materials_science", "Engineering" ]
350
[ "Materials science stubs", "Crystallographic defects", "Materials science", "Crystallography stubs", "Crystallography", "Electrochemistry", "Electrochemistry stubs", "Materials degradation", "Physical chemistry stubs" ]
28,765,994
https://en.wikipedia.org/wiki/Receptor%20tyrosine%20kinase-like%20orphan%20receptor
In the field of molecular biology, receptor tyrosine kinase-like orphan receptors (RORs) are a family of tyrosine kinase receptors that are important in regulating skeletal and neuronal development, cell migration and cell polarity. ROR proteins (ROR1 and ROR2 in humans) can modulate Wnt signaling by sequestering Wnt ligands. References Tyrosine kinase receptors
Receptor tyrosine kinase-like orphan receptor
[ "Chemistry" ]
85
[ "Tyrosine kinase receptors", "Signal transduction" ]
28,766,498
https://en.wikipedia.org/wiki/DOTA-TATE
DOTA-TATE (DOTATATE, DOTA-octreotate, oxodotreotide, DOTA-(Tyr3)-octreotate, and DOTA-0-Tyr3-Octreotate) is an eight amino acid long peptide, with a covalently bonded DOTA bifunctional chelator. DOTA-TATE can be reacted with the radionuclides gallium-68 (T1/2 = 68 min), lutetium-177 (T1/2 = 6.65 d) and copper-64 (T1/2 = 12.7 h) to form radiopharmaceuticals for positron emission tomography (PET) imaging or radionuclide therapy. 177Lu DOTA-TATE therapy is a form of peptide receptor radionuclide therapy (PRRT) which targets somatostatin receptors (SSR). In that application it is a form of targeted drug delivery. Chemistry and mechanism of action DOTA-TATE is a compound containing tyrosine3-octreotate, an SSR agonist, and the bifunctional chelator DOTA (tetraxetan). SSRs are found with high density in numerous malignancies, including CNS, breast, lung, and lymphatic malignancies. The role of SSR agonists (i.e. somatostatin and its analogs such as octreotide, somatuline and vapreotide) in neuroendocrine tumours (NETs) is well established, and massive SSR overexpression is present in several NETs. (Tyr3)-octreotate binds the transmembrane receptors of NETs with highest affinity for SSR2 and is actively transported into the cell via endocytosis, allowing trapping of the radioactivity and increasing the probability of the desired double-strand DNA breakage (for tumour control). Trapping improves the probability of this kind of effect due to the relatively short range of the beta particles emitted by 177Lu, which have a maximum range in tissue of <2 mm. Bystander effects include cellular damage by free radical formation. Clinical applications Gallium-68 DOTA-TATE 68Ga DOTA-TATE (gallium-68 dotatate, GaTate) is used to measure tumor SSR density and whole-body bio-distribution via PET imaging. 68Ga DOTA-TATE imaging has a much higher sensitivity and resolution compared to 111In octreotide gamma camera or SPECT scans, due to intrinsic modality differences. It is commonly used to confirm the presence of paragangliomas and pheochromocytomas. Copper-64 DOTA-TATE Copper (64Cu) oxodotreotide or copper Cu 64 dotatate, sold under the brand name Detectnet, is a radioactive diagnostic agent indicated for use with positron emission tomography (PET) for localization of somatostatin receptor positive neuroendocrine tumors (NETs) in adults. It was FDA approved in September 2020. These are the same indications as for the gallium DOTA-TATE scans, but Cu-64 has an advantage over Ga-68 in having a 12.7-hour half-life rather than the much shorter one-hour half-life of Ga-68, making it easier to transport from central production locations. Lutetium-177 DOTA-TATE The combination of the beta emitter 177Lu with DOTA-TATE can be used in the treatment of cancers expressing the relevant somatostatin receptors. The U.S. Food and Drug Administration (FDA) considers 177Lu-dotatate to be a first-in-class medication. Alternatives to 177Lu-DOTA-TATE include 90Y (T1/2 = 64.6 h) DOTA-TATE. The longer penetration range in the target tissues of the more energetic beta particles emitted by 90Y (high average beta energy of 0.9336 MeV) could make it more suitable for large tumors, while 177Lu would be preferred for smaller-volume tumors. See also Lutetium 64Cu-dotatate References Chelating agents Macrocycles Orphan drugs Radiopharmaceuticals DOTA (chelator) derivatives
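A sketch of why the half-life differences matter for distribution: with N/N0 = 2^(−t/T½), a hypothetical four-hour transport from a central production site (the transport time is an assumption for illustration; the half-lives are those quoted above) leaves very different fractions of the initial activity.

```python
def fraction_remaining(t_hours, half_life_hours):
    """Fraction of activity left after time t, from N/N0 = 2**(-t / T_half)."""
    return 2.0 ** (-t_hours / half_life_hours)

transport = 4.0  # hypothetical hours from production site to clinic
for isotope, t_half in [("Ga-68", 68 / 60), ("Cu-64", 12.7), ("Lu-177", 6.65 * 24)]:
    pct = 100 * fraction_remaining(transport, t_half)
    print(f"{isotope}: {pct:.1f}% of activity remaining")
# Ga-68 ~9%, Cu-64 ~80%, Lu-177 ~98%: the short-lived Ga-68 effectively
# requires on-site generation, while Cu-64 and Lu-177 can be shipped.
```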
DOTA-TATE
[ "Chemistry" ]
887
[ "Medicinal radiochemistry", "Organic compounds", "Macrocycles", "Radiopharmaceuticals", "Chelating agents", "Chemicals in medicine", "Process chemicals" ]
27,460,882
https://en.wikipedia.org/wiki/List%20of%20restriction%20enzyme%20cutting%20sites%3A%20Bd%E2%80%93Bp
This article contains a list of the most studied restriction enzymes whose names start with Bd to Bp inclusive. It contains approximately 100 enzymes. The following information is given: Whole list navigation Restriction enzymes Bd - Bp Notes Biotechnology Restriction enzyme cutting sites Restriction enzymes
List of restriction enzyme cutting sites: Bd–Bp
[ "Chemistry", "Biology" ]
52
[ "Genetics techniques", "Molecular-biology-related lists", "Biotechnology", "nan", "Molecular biology", "Restriction enzymes" ]
27,460,930
https://en.wikipedia.org/wiki/List%20of%20restriction%20enzyme%20cutting%20sites%3A%20Bsa%E2%80%93Bso
This article contains a list of the most studied restriction enzymes whose names start with Bsa to Bso inclusive. It contains approximately 90 enzymes. The following information is given: Whole list navigation Restriction enzymes Bsa - Bso Notes Biotechnology Restriction enzyme cutting sites Restriction enzymes
List of restriction enzyme cutting sites: Bsa–Bso
[ "Chemistry", "Biology" ]
54
[ "Genetics techniques", "Molecular-biology-related lists", "Biotechnology", "nan", "Molecular biology", "Restriction enzymes" ]
27,460,950
https://en.wikipedia.org/wiki/List%20of%20restriction%20enzyme%20cutting%20sites%3A%20C%E2%80%93D
This article contains a list of the most studied restriction enzymes whose names start with C to D inclusive. It contains approximately 80 enzymes. The following information is given: Whole list navigation Restriction enzymes C D Notes Biotechnology Restriction enzyme cutting sites Restriction enzymes
List of restriction enzyme cutting sites: C–D
[ "Chemistry", "Biology" ]
49
[ "Genetics techniques", "Molecular-biology-related lists", "Biotechnology", "nan", "Molecular biology", "Restriction enzymes" ]
27,460,957
https://en.wikipedia.org/wiki/List%20of%20restriction%20enzyme%20cutting%20sites%3A%20E%E2%80%93F
This article contains a list of the most studied restriction enzymes whose names start with E to F inclusive. It contains approximately 110 enzymes. The following information is given: Whole list navigation Restriction enzymes E F Notes Biotechnology Restriction enzyme cutting sites Restriction enzymes
List of restriction enzyme cutting sites: E–F
[ "Chemistry", "Biology" ]
49
[ "Genetics techniques", "Molecular-biology-related lists", "Biotechnology", "nan", "Molecular biology", "Restriction enzymes" ]