| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
16,837,442 | https://en.wikipedia.org/wiki/Chameleon%20coating | Chameleon coating, also known as nanocomposite tribological coating, is an adaptive coating that uses nanotechnology to adjust to environmental fluctuations so that conditions remain suitable for the object to which the coating has been applied.
Purpose
The purpose of chameleon coating is to provide optimal performance of a material in any environment. The idea is that when a sudden change in the environment occurs, the nano coating changes its chemical properties to better suit the new conditions and so avoid wear due to friction and abrasion. Chameleon coating is also intended to resolve the problems of hydraulic fluids, whose chemical properties change under different atmospheric pressures because liquids and gases expand and contract as pressure varies. The goal of chameleon coating is thus to provide the same lubrication in machines that fluids such as oil provide, but without the drawback of the coating or lubricant deteriorating under variable environments.
Development
Chameleon coatings were not nanoscale until recent developments in nanoengineering. Before the use of nano films (coatings), the films that provided the beneficial properties of the coating broke down more easily and more quickly under constant wear and tear. The use of nano thin films helped control dislocation formation in the film and reduced severalfold the rate at which the film deteriorated due to abrasion and wear. The term "chameleon coating" is an analogy to the chameleon itself, whose skin adapts to its environment as a defense mechanism to avoid predators and increase its chances of survival. Diamond-like carbon (DLC) is generally one of the nano films used to inhibit abrasion. Films of the pure metals silver (Ag) and gold (Au) can be used to counteract temperature changes: silver and gold can withstand high temperatures and remain soft, which is desirable for coating properties. Using a lattice matrix (a basket-weave template for the coating), nanoengineers are able to combine the properties of diamond-like carbon and the pure metals to make chameleon coatings adaptable to even more varied environments.
Applications
The most common application for chameleon coating is in aerospace technology, where the environment is always changing with altitude. On Earth the air is humid and the temperature varies only slightly in comparison with environments such as space, and the use of oil for reduced friction, wear and lubrication applies only to those ambient terrestrial conditions. Once a vehicle moves from the atmosphere into orbit, temperatures can range from −150 °C to nearly 200 °C, and many of these lubricants break down at accelerated rates and become useless. Space satellites operate under conditions in which liquid lubrication fails, as many liquid lubricants become prone to volatilization at extremely low pressures, which fall from nearly 100 kPa at launch to 10 nPa in orbit. With the help of chameleon coating, the life expectancy of satellites ranges from 15 to about 30 years.
Chameleon coatings are also often used on hypersonic and reusable launch vehicles that require lubrication in the ambient atmosphere, in vacuum (space), and during re-entry (high temperature). A typical multilayer coating may use molybdenum disulfide or diamond-like carbon for low friction at ambient conditions. A layer of PTFE (Teflon) may be used for vacuum service, along with a silver- or gold-containing layer for high-temperature lubricity.
References
Coatings
Thin films | Chameleon coating | [
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 746 | [
"Coatings",
"Materials science",
"Nanotechnology",
"Planes (geometry)",
"Thin films"
] |
3,285,926 | https://en.wikipedia.org/wiki/Metamagnetism | Metamagnetism is a sudden (often, dramatic) increase in the magnetization of a material with a small change in an externally applied magnetic field. The metamagnetic behavior may have quite different physical causes for different types of metamagnets. Some examples of physical mechanisms leading to metamagnetic behavior are:
Itinerant metamagnetism - Exchange splitting of the Fermi surface in a paramagnetic system of itinerant electrons causes an energetically favorable transition to bulk magnetization near the transition to a ferromagnet or other magnetically ordered state.
Antiferromagnetic transition - Field-induced spin flips in antiferromagnets cascade at a critical energy determined by the applied magnetic field.
Depending on the material and experimental conditions, metamagnetism may be associated with a first-order phase transition, a continuous phase transition at a critical point (classical or quantum), or crossovers beyond a critical point that do not involve a phase transition at all. These wildly different physical explanations sometimes lead to confusion as to what the term "metamagnetic" refers to in specific cases.
References
Magnetic ordering | Metamagnetism | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 236 | [
"Magnetic ordering",
"Condensed matter physics",
"Electric and magnetic fields in matter",
"Materials science"
] |
3,287,513 | https://en.wikipedia.org/wiki/BBC%20Research%20%26%20Development | BBC Research & Development is the technical research department of the BBC.
Function
It has responsibility for researching and developing advanced and emerging media technologies for the benefit of the corporation and the wider UK and European media industries, and is also the technical design authority for a number of major technical infrastructure transformation projects for the UK broadcasting industry.
Structure
BBC R&D is part of the wider BBC Design & Engineering, and is led by Jatin Aythora, Director, Research & Development. In 2011, the North Lab moved into MediaCityUK in Salford along with several other departments of the BBC, whilst the South Lab remained in White City in London.
History
In April 1930 the Development section of the BBC became the Research Department.
The department as it stands today was formed in 1993 from the merger of the BBC Designs Department and the BBC Research Department. From 2006 to 2008 it was known as Research and Innovation but has since reverted to its original name. BBC Research & Development has made major contributions to broadcast technology, carrying out original research in many areas, and developing items like the peak programme meter (PPM) which became the basis for many world standards.
Innovations
It has also been involved in many well-known consumer technologies such as teletext, DAB, NICAM and Freeview. It was at the forefront of the development of FM radio, stereo FM, and RDS. These innovations have led to Queen's Awards for Innovation in 1969, 1974, 1983, 1987, 1992, 1998, 2001 and 2011.
In the 1970s, its engineers designed the famous LS3/5A studio monitor for use in outside broadcasting units. Licensed to manufacturers, the loudspeaker sold 100,000 pairs in its 20+ years' life.
Closure of Kingswood Warren and move to London and Salford
In early 2010 the department had approximately 135 staff based at three locations: White City in London, Kingswood Warren in Kingswood, Surrey, and the R&D (North Lab) at the BBC's Manchester offices at New Broadcasting House, Oxford Road, Manchester. In early 2010 the Kingswood Warren site was vacated and the bulk of the department relocated to Centre House, in White City, London co-locating with the main campus of the BBC in London, whilst a significant number have moved to the new North Lab in MediaCityUK in Salford.
BBC R&D has more than 200 employees in their UK labs.
Future projects
BBC R&D engineers and researchers are currently active on approximately 50 projects, including 7 active national and international collaborative research efforts.
These include R&D projects built around BBC Redux—the proof of concept for the cross-platform, Flash video-based streaming version of the BBC iPlayer.
See also
A-weighting
Backstage.bbc.co.uk
CEEFAX
Dirac (codec)
Equal-loudness contour
Institut für Rundfunktechnik, German equivalent
ITU-R 468 noise weighting
NICAM
Peak programme meter
Sound-in-Syncs
VERA videotape format
References
External links
BBC R&D Reports list 1933 - 1996 (PDF, 965 kB)
Move
Location of South Lab
Video clips
History of the department and interviews with the staff
Talk by Matthew Postgate in Manchester in November 2009 at TEDx
Audio engineering
Digital Video Broadcasting
Engineering research institutes
Freeview (UK)
Organisations based in Salford
Organisations based in Surrey
Radio technology
Reigate and Banstead
Research institutes in England
Science and technology in Greater Manchester
Scientific organizations established in 1923
Sound production technology
Sound recording technology
Television technology
1923 establishments in the United Kingdom | BBC Research & Development | [
"Technology",
"Engineering"
] | 715 | [
"Information and communications technology",
"Telecommunications engineering",
"Television technology",
"Sound recording technology",
"Radio technology",
"Engineering research institutes",
"Electrical engineering",
"Recording devices",
"Audio engineering"
] |
3,287,565 | https://en.wikipedia.org/wiki/Contrast-enhanced%20ultrasound | Contrast-enhanced ultrasound (CEUS) is the application of ultrasound contrast medium to traditional medical sonography. Ultrasound contrast agents rely on the different ways in which sound waves are reflected from interfaces between substances. This may be the surface of a small air bubble or a more complex structure. Commercially available contrast media are gas-filled microbubbles that are administered intravenously to the systemic circulation. Microbubbles have a high degree of echogenicity (the ability of an object to reflect ultrasound waves). There is a great difference in echogenicity between the gas in the microbubbles and the soft tissue surroundings of the body. Thus, ultrasonic imaging using microbubble contrast agents enhances the ultrasound backscatter, (reflection) of the ultrasound waves, to produce a sonogram with increased contrast due to the high echogenicity difference. Contrast-enhanced ultrasound can be used to image blood perfusion in organs, measure blood flow rate in the heart and other organs, and for other applications.
Targeting ligands that bind to receptors characteristic of intravascular diseases can be conjugated to microbubbles, enabling the microbubble complex to accumulate selectively in areas of interest, such as diseased or abnormal tissues. This form of molecular imaging, known as targeted contrast-enhanced ultrasound, will only generate a strong ultrasound signal if targeted microbubbles bind in the area of interest. Targeted contrast-enhanced ultrasound may have many applications in both medical diagnostics and medical therapeutics. However, the targeted technique has not yet been approved by the FDA for clinical use in the United States.
Contrast-enhanced ultrasound is regarded as safe in adults, comparable to the safety of MRI contrast agents, and better than radiocontrast agents used in contrast CT scans. The more limited safety data in children suggests that such use is as safe as in the adult population.
Bubble echocardiogram
An echocardiogram is a study of the heart using ultrasound. A bubble echocardiogram is an extension of this that uses simple air bubbles as a contrast medium during this study and often has to be requested specifically.
Although colour Doppler can be used to detect abnormal flows between the chambers of the heart (e.g., persistent (patent) foramen ovale), it has a limited sensitivity. When specifically looking for a defect such as this, small air bubbles can be used as a contrast medium and injected intravenously, where they travel to the right side of the heart. The test would be positive for an abnormal communication if the bubbles are seen passing into the left side of the heart. (Normally, they would exit the heart through the pulmonary artery and be stopped by the lungs.) This form of bubble contrast medium is generated on an ad hoc basis by the testing clinician by agitating normal saline (e.g., by rapidly and repeatedly transferring the saline between two connected syringes) immediately prior to injection.
Microbubble contrast agents
General features
There are a variety of microbubble contrast agents. Microbubbles differ in their shell makeup, gas core makeup, and whether or not they are targeted.
Microbubble shell: selection of shell material determines how easily the microbubble is taken up by the immune system. A more hydrophilic material tends to be taken up more easily, which reduces the microbubble residence time in the circulation. This reduces the time available for contrast imaging. The shell material also affects microbubble mechanical elasticity. The more elastic the material, the more acoustic energy it can withstand before bursting. Most commonly, microbubble shells are composed of albumin, galactose, lipid, or polymers. Hydrophobic particles have also been applied to stabilize microbubble shells.
Microbubble gas core: The gas core is the primary part of the ultrasound contrast microbubble that determines its echogenicity. Gas bubbles that are subjected to ultrasound pulsate and scatter a characteristic signal. This signal manifests itself as a high-amplitude entity in a contrast-enhanced sonogram. Gas cores can be composed of air, or of heavier gases such as perfluorocarbon or nitrogen. Heavy gases are less water-soluble, so they are less likely to leak out of the microbubble and cause it to dissolve. As a result, microbubbles with heavy gas cores last longer in circulation. To increase harmonic pulsation behavior, liquid and solid cores have been added to the gas contents.
Regardless of the shell or gas core composition, microbubble size is fairly uniform. They lie within a range of 1–4 micrometres in diameter. That makes them smaller than red blood cells, which allows them to flow easily through the circulation as well as the microcirculation.
Specific agents
Perflutren lipid microspheres (brand names Definity, Luminity) are perfluorocarbon emulsions composed of octafluoropropane encapsulated in an outer lipid shell.
Octafluoropropane gas core with an albumin shell (Optison) is another perfluorocarbon emulsion; it is a Food and Drug Administration (FDA)-approved microbubble made by GE Healthcare.
Sulphur hexafluoride microbubbles (SonoVue, made by Bracco) are mainly used to characterize liver lesions that cannot be properly identified using conventional (B-mode) ultrasound. The agent remains visible in the blood for 3 to 8 minutes and is expired by the lungs.
Air within a lipid/galactose shell (formerly Levovist, an FDA-approved microbubble that was made by Schering).
Perflexane lipid microspheres (formerly Imagent or Imavist) were an injectable suspension developed by Alliance Pharmaceutical and approved by the FDA (in June 2002) for improving visualization of the left ventricular chamber of the heart and the delineation of the endocardial borders in patients with suboptimal echocardiograms. Besides its use to assess cardiac function and perfusion, it has also been used to enhance imaging of the prostate, liver, kidney and other organs.
Targeted microbubbles
Targeted microbubbles are under preclinical development. They retain the same general features as untargeted microbubbles, but they are outfitted with ligands that bind specific receptors expressed by cell types of interest, such as inflamed cells or cancer cells. Current microbubbles in development are composed of a lipid monolayer shell with a perfluorocarbon gas core. The lipid shell is also covered with a polyethylene glycol (PEG) layer. PEG prevents microbubble aggregation and makes the microbubble more non-reactive. It temporarily "hides" the microbubble from the immune system uptake, increasing the amount of circulation time, and hence, imaging time. In addition to the PEG layer, the shell is modified with molecules that allow for the attachment of ligands that bind certain receptors. These ligands are attached to the microbubbles using carbodiimide, maleimide, or biotin-streptavidin coupling. Biotin-streptavidin is the most popular coupling strategy because biotin's affinity for streptavidin is very strong and it is easy to label the ligands with biotin. Currently, these ligands are monoclonal antibodies produced from animal cell cultures that bind specifically to receptors and molecules expressed by the target cell type. Since the antibodies are not humanized, they will elicit an immune response when used in human therapy. Humanizing antibodies is an expensive and time-intensive process, so it would be ideal to find an alternative source of ligands, such as synthetically manufactured targeting peptides that perform the same function, but without the immune issues.
Types
There are two forms of contrast-enhanced ultrasound, untargeted (used in the clinic today) and targeted (under preclinical development). The two methods slightly differ from each other.
Untargeted CEUS
Untargeted microbubbles, such as the aforementioned SonoVue, Optison, or Levovist, are injected intravenously into the systemic circulation in a small bolus. The microbubbles will remain in the systemic circulation for a certain period of time. During that time, ultrasound waves are directed on the area of interest. When microbubbles in the blood flow past the imaging window, the microbubbles' compressible gas cores oscillate in response to the high frequency sonic energy field, as described in the ultrasound article. The microbubbles reflect a unique echo that stands in stark contrast to the surrounding tissue due to the orders of magnitude mismatch between microbubble and tissue echogenicity. The ultrasound system converts the strong echogenicity into a contrast-enhanced image of the area of interest. In this way, the bloodstream's echo is enhanced, thus allowing the clinician to distinguish blood from surrounding tissues.
Targeted CEUS
Targeted contrast-enhanced ultrasound works in a similar fashion, with a few alterations. Microbubbles targeted with ligands that bind certain molecular markers that are expressed by the area of imaging interest are still injected systemically in a small bolus. Microbubbles theoretically travel through the circulatory system, eventually finding their respective targets and binding specifically. Ultrasound waves can then be directed on the area of interest. If a sufficient number of microbubbles have bound in the area, their compressible gas cores oscillate in response to the high frequency sonic energy field, as described in the ultrasound article. The targeted microbubbles also reflect a unique echo that stands in stark contrast to the surrounding tissue due to the orders of magnitude mismatch between microbubble and tissue echogenicity. The ultrasound system converts the strong echogenicity into a contrast-enhanced image of the area of interest, revealing the location of the bound microbubbles. Detection of bound microbubbles may then show that the area of interest is expressing that particular molecular marker, which can be indicative of a certain disease state, or identify particular cells in the area of interest.
Applications
Untargeted contrast-enhanced ultrasound is currently applied in echocardiography and radiology. Targeted contrast-enhanced ultrasound is being developed for a variety of medical applications.
Untargeted CEUS
Untargeted microbubbles like Optison and Levovist are currently used in echocardiography. In addition, SonoVue ultrasound contrast agent is used in radiology for lesion characterization.
Organ Edge Delineation: microbubbles can enhance the contrast at the interface between the tissue and blood. A clearer picture of this interface gives the clinician a better picture of the structure of an organ. Tissue structure is crucial in echocardiograms, where a thinning, thickening, or irregularity in the heart wall indicates a serious heart condition that requires either monitoring or treatment.
Blood Volume and Perfusion: contrast-enhanced ultrasound holds the promise for (1) evaluating the degree of blood perfusion in an organ or area of interest and (2) evaluating the blood volume in an organ or area of interest. When used in conjunction with Doppler ultrasound, microbubbles can measure myocardial flow rate to diagnose valve problems. And the relative intensity of the microbubble echoes can also provide a quantitative estimate on blood volume.
Lesion Characterization: contrast-enhanced ultrasound plays a role in the differentiation between benign and malignant focal liver lesions. This differentiation relies on the observation or processing of the dynamic vascular pattern in a lesion with respect to its surrounding tissue parenchyma.
Targeted CEUS
Inflammation: Contrast agents may be designed to bind to certain proteins that become expressed in inflammatory diseases such as Crohn's disease, atherosclerosis, and even heart attacks. Cells of interest in such cases are endothelial cells of blood vessels, and leukocytes:
The inflamed blood vessels specifically express certain receptors, functioning as cell adhesion molecules, like VCAM-1, ICAM-1, E-selectin. If microbubbles are targeted with ligands that bind these molecules, they can be used in contrast echocardiography to detect the onset of inflammation. Early detection allows the design of better treatments. Attempts have been made to outfit microbubbles with monoclonal antibodies that bind P-selectin, ICAM-1, and VCAM-1, but the adhesion to the molecular target was poor and a large fraction of microbubbles that bound to the target rapidly detached, especially at high shear stresses of physiological relevance.
Leukocytes possess high adhesion efficiencies, partly due to a dual-ligand selectin-integrin cell arrest system. One ligand:receptor pair (PSGL-1:selectin) has a fast bond on-rate to slow the leukocyte and allows the second pair (integrin:immunoglobulin superfamily), which has a slower on-rate but slow off-rate to arrest the leukocyte, kinetically enhancing adhesion. Attempts have been made to make contrast agents bind to such ligands, with techniques such as dual-ligand targeting of distinct receptors to polymer microspheres, and biomimicry of the leukocyte's selectin-integrin cell arrest system, having shown an increased adhesion efficiency, but yet not efficient enough to allow clinical use of targeted contrast-enhanced ultrasound for inflammation.
Thrombosis and thrombolysis: Activated platelets are major components of blood clots (thrombi). Microbubbles can be conjugated to a recombinant single-chain variable fragment specific for activated glycoprotein IIb/IIIa (GPIIb/IIIa), which is the most abundant platelet surface receptor. Despite the high shear stress at the thrombus area, the GPIIb/IIIa-targeted microbubbles will specifically bind to activated platelets and allow real-time molecular imaging of thrombosis, such as in myocardial infarction, as well as monitoring success or failure of pharmacological thrombolysis.
Cancer: cancer cells also express a specific set of receptors, mainly receptors that encourage angiogenesis, or the growth of new blood vessels. If microbubbles are targeted with ligands that bind receptors like VEGF or activated glycoprotein IIb/IIIa, they can non-invasively and specifically identify areas of cancers.
Gene Delivery: Vector DNA can be conjugated to the microbubbles. Microbubbles can be targeted with ligands that bind to receptors expressed by the cell type of interest. When the targeted microbubble accumulates at the cell surface with its DNA payload, ultrasound can be used to burst the microbubble. The force associated with the bursting may temporarily permeabilize surrounding tissues and allow the DNA to more easily enter the cells. Targeted theranostic microbubbles (directed at VCAM-1) have been employed to deliver miR126 in a preclinical setting to stop the development of abdominal aortic aneurysm (AAA) in vivo.
Drug Delivery: drugs can be incorporated into the microbubble's lipid shell. The microbubble's large size relative to other drug delivery vehicles like liposomes may allow a greater amount of drug to be delivered per vehicle. By targeting the drug-loaded microbubble with ligands that bind to a specific cell type, the microbubble will not only deliver the drug specifically, but can also provide verification that the drug is delivered if the area is imaged using ultrasound. Ultrasound-guided drug delivery has been successfully applied in the treatment of pancreatic cancer.
Advantages
On top of the strengths mentioned in the medical sonography entry, contrast-enhanced ultrasound adds these additional advantages:
The body is about 73% water and therefore largely acoustically homogeneous: blood and surrounding tissues have similar echogenicities, so the degree of blood flow, perfusion, and the interface between tissue and blood are difficult to discern clearly with traditional ultrasound. Microbubble contrast agents provide the large echogenicity difference needed to make these features visible.
Ultrasound imaging allows real-time evaluation of blood flow.
Destruction of microbubbles by ultrasound in the image plane allows absolute quantification of tissue perfusion.
Ultrasonic molecular imaging is safer than molecular imaging modalities such as radionuclide imaging because it does not involve radiation.
Alternative molecular imaging modalities, such as MRI, PET, and SPECT are very costly. Ultrasound, on the other hand, is very cost-efficient and widely available.
Since microbubbles can generate such strong signals, a lower intravenous dosage is needed: micrograms of microbubbles suffice, compared with milligrams for other molecular imaging modalities such as MRI contrast agents.
Targeting strategies for microbubbles are versatile and modular. Targeting a new area only entails conjugating a new ligand.
Active targeting can be increased (enhanced microbubbles adhesion) by Acoustic radiation force using a clinical ultrasound imaging system in 2D-mode and 3D-mode.
Disadvantages
In addition to the weaknesses mentioned in the medical sonography entry, contrast-enhanced ultrasound has the following disadvantages:
Microbubbles don't last very long in circulation. They have low circulation residence times because they either get taken up by immune system cells or get taken up by the liver or spleen even when they are coated with PEG.
Ultrasound produces more heat as the frequency increases, so the ultrasonic frequency must be carefully monitored.
Microbubbles burst at low ultrasound frequencies and at high mechanical indices (MI), which is the measure of the negative acoustic pressure of the ultrasound imaging system. Increasing MI increases image quality, but there are tradeoffs with microbubble destruction. Microbubble destruction could cause local microvasculature ruptures and hemolysis.
Targeting ligands can be immunogenic, since current targeting ligands used in preclinical experiments are derived from animal culture.
Low targeted microbubble adhesion efficiency, which means a small fraction of injected microbubbles bind to the area of interest. This is one of the main reasons that targeted contrast-enhanced ultrasound remains in the preclinical development stages.
See also
Doppler effect
Medical imaging
Medical ultrasound
Ultrasound Localization Microscopy
References
External links
Optison Information from GE Healthcare
Levovist Data Sheet from New Zealand Medicines and Medical Devices Safety Authority
Biological engineering
Cardiology
Medical equipment
Medical ultrasonography | Contrast-enhanced ultrasound | [
"Engineering",
"Biology"
] | 3,851 | [
"Biological engineering",
"Medical equipment",
"Medical technology"
] |
3,289,152 | https://en.wikipedia.org/wiki/Castigliano%27s%20method | Castigliano's method, named after Carlo Alberto Castigliano, is a method for determining the displacements of a linear-elastic system based on the partial derivatives of the energy. He is known for his two theorems. The basic concept may be easy to understand by recalling that a change in energy is equal to the causing force times the resulting displacement. Therefore, the causing force is equal to the change in energy divided by the resulting displacement. Alternatively, the resulting displacement is equal to the change in energy divided by the causing force. Partial derivatives are needed to relate causing forces and resulting displacements to the change in energy.
Examples
For a thin, straight cantilever beam of length $L$ with a load $P$ at the free end, the displacement $\delta$ at the end can be found by Castigliano's second theorem:

$$\delta = \frac{\partial U}{\partial P} = \frac{\partial}{\partial P} \int_0^L \frac{M(x)^2}{2EI}\,dx = \int_0^L \frac{M(x)}{EI}\,\frac{\partial M(x)}{\partial P}\,dx$$

where $E$ is Young's modulus, $I$ is the second moment of area of the cross-section, and $M(x) = -Px$ is the expression for the internal moment at a point at distance $x$ from the free end. The integral evaluates to:

$$\delta = \int_0^L \frac{(-Px)(-x)}{EI}\,dx = \frac{PL^3}{3EI}$$

The result is the standard formula given for cantilever beams under end loads.
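As a rough numerical check of the result above (a minimal sketch, not taken from the article), one can approximate the strain energy U(P) = ∫ M²/(2EI) dx for the end-loaded cantilever and differentiate it with respect to P by finite differences; the values of E, I, L and P below are arbitrary illustrative choices.

```python
import numpy as np

def strain_energy(P, L=2.0, E=200e9, I=1e-6, n=2000):
    """Bending strain energy U = ∫ M(x)^2 / (2 E I) dx for an end-loaded cantilever,
    with the moment measured from the free end: M(x) = -P x."""
    x = np.linspace(0.0, L, n)
    integrand = (P * x) ** 2 / (2.0 * E * I)
    dx = x[1] - x[0]
    # simple trapezoidal rule
    return float(np.sum((integrand[:-1] + integrand[1:]) * dx / 2.0))

P = 1000.0                      # end load in newtons (illustrative)
L, E, I = 2.0, 200e9, 1e-6      # illustrative beam properties
h = 1e-3                        # step for the central-difference derivative dU/dP

# Castigliano's second theorem: deflection under P equals ∂U/∂P
delta_castigliano = (strain_energy(P + h, L, E, I) - strain_energy(P - h, L, E, I)) / (2 * h)
delta_closed_form = P * L**3 / (3 * E * I)

print(delta_castigliano, delta_closed_form)   # the two values should agree closely
```

Both printed numbers should agree to several significant figures, which is the content of the second theorem for this simple load case.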
Castigliano's theorems apply only if the strain energy is finite. This is true if $2(m - s) > n$, where $m$ is the order of the energy (the highest derivative appearing in the energy), $s$ is the index of the Dirac delta describing the point action (a single force has $s = 0$) and $n$ is the dimension of the space. To second-order equations ($m = 1$) belong two Dirac deltas, the force ($s = 0$) and the dislocation ($s = 1$), and to fourth-order equations ($m = 2$) belong four Dirac deltas: force, moment, bend and dislocation ($s = 0, 1, 2, 3$).
Example: if a plate ($n = 2$, $m = 1$) is loaded with a single force ($s = 0$), the inequality is not valid ($2 \not> 2$), nor is it in three dimensions ($2 \not> 3$). Nor does it apply to a membrane (Laplace, $m = 1$) or a Reissner-Mindlin plate ($m = 1$). In general Castigliano's theorems do not apply to two- and three-dimensional problems. The exception is the Kirchhoff plate ($m = 2$, $n = 2$), since $2(2 - 0) = 4 > 2$. But a moment ($s = 1$) causes the energy of a Kirchhoff plate to overflow, because $2(2 - 1) = 2 \not> 2$. In one-dimensional problems the strain energy is finite if $s \le m - 1$.
Menabrea's theorem is subject to the same restriction. It requires that $2(m - s) > n$ holds, where $s$ is now the order of the support reaction (single force $s = 0$, moment $s = 1$). Except for a Kirchhoff plate with $s = 0$ (a single force as support reaction), it is generally not valid in two- and three-dimensional problems, because the presence of point supports results in infinitely large energy.
External links
Carlo Alberto Castigliano
Castigliano's method: some examples
References
Beam theory
Eponymous theorems of physics
Structural analysis | Castigliano's method | [
"Physics",
"Engineering"
] | 534 | [
"Structural engineering",
"Equations of physics",
"Structural analysis",
"Eponymous theorems of physics",
"Mechanical engineering",
"Aerospace engineering",
"Physics theorems"
] |
3,289,793 | https://en.wikipedia.org/wiki/Paper%20composite%20panels | Paper composite panels are a phenolic resin/cellulose composite material made from partially recycled paper and phenolic resin. Multiple layers of paper are soaked in phenolic resin, then molded and baked into net shape in a heated form or press. Originally distributed as a commercial kitchen surface in the 1950s, it has recently been adapted for use in skateboard parks as well as various other applications, such as residential counters, cabinetry, fiberglass cores, guitar fingerboards, signage, exterior wall cladding, and a variety of architectural applications.
Composition
Several manufacturers in North America each use a different composition of materials to form the final product. One composition is cellulose fiber and phenolic resin (a type of polymer), which are combined and baked to give a smooth, hard surface. The natural fibers are derived from plant, animal and mineral sources; however, most natural fibers are predominantly cellulosic.
Cellulose derived from tree pulp is turned into large rolls of paper.
The paper is then soaked in phenolic resin and goes up to a heating chamber to be dried out before being rolled back up.
Hundreds of these sheets are then laid on top of one another, and the stack is compacted by compression molding.
Because of the resin's thermoset properties the resulting cooled material is hard.
Applications
It was used in Boeing 747 production for air tables, hydroforming dies, vacuum chuck faces, work holders, and proofing materials. Architecturally, it is used for countertops. It has also been used as whaleboard in fiberglass boat building. Other commercial uses include cutting boards, prep tables, pizza peels, and the dashboard of a pickup-truck prototype vehicle.
Since the last quarter of the 20th century, phenolic resin and cellulose based compound materials have been used as an alternative to ebony and rosewood to make stringed instrument fingerboards. From 2012 to 2018 guitarmaker Gibson used Richlite, a paper composite material, for the fingerboard on several top-of-the-line production guitars.
Properties
Although it is formed by laminating many layers of paper together, the finished material appears as a solid and is not considered to be a sandwich panel. Resistance to high temperatures of up to 350 °F (about 177 °C) has been claimed.
See also
Micarta, a group of similar materials, especially if made with paper and epoxy resin.
Ebonol, a paper and phenolic resin compound sometimes used for fingerboards in stringed instruments.
References
External links
Richlite Company home page
Eco-Clad home page
Materials | Paper composite panels | [
"Physics"
] | 527 | [
"Materials",
"Matter"
] |
3,291,252 | https://en.wikipedia.org/wiki/Induced%20radioactivity | Induced radioactivity, also called artificial radioactivity or man-made radioactivity, is the process of using radiation to make a previously stable material radioactive. The husband-and-wife team of Irène Joliot-Curie and Frédéric Joliot-Curie discovered induced radioactivity in 1934, and they shared the 1935 Nobel Prize in Chemistry for this discovery.
Irène Curie began her research with her parents, Marie Curie and Pierre Curie, studying the natural radioactivity found in radioactive isotopes. Irene branched off from the Curies to study turning stable isotopes into radioactive isotopes by bombarding the stable material with alpha particles (denoted α). The Joliot-Curies showed that when lighter elements, such as boron and aluminium, were bombarded with α-particles, the lighter elements continued to emit radiation even after the α−source was removed. They showed that this radiation consisted of particles carrying one unit positive charge with mass equal to that of an electron, now known as a positron.
Neutron activation is the main form of induced radioactivity. It occurs when an atomic nucleus captures one or more free neutrons. This new, heavier isotope may be either stable or unstable (radioactive), depending on the chemical element involved.
Because neutrons disintegrate within minutes outside of an atomic nucleus, free neutrons can be obtained only from nuclear decay, nuclear reaction, and high-energy interaction, such as cosmic radiation or particle accelerator emissions. Neutrons that have been slowed through a neutron moderator (thermal neutrons) are more likely to be captured by nuclei than fast neutrons.
A less common form of induced radioactivity results from removing a neutron by photodisintegration. In this reaction, a high energy photon (a gamma ray) strikes a nucleus with an energy greater than the binding energy of the nucleus, which releases a neutron. This reaction has a minimum cutoff of 2 MeV (for deuterium) and around 10 MeV for most heavy nuclei. Many radionuclides do not produce gamma rays with energy high enough to induce this reaction.
The isotopes used in food irradiation (cobalt-60, caesium-137) both have energy peaks below this cutoff and thus cannot induce radioactivity in the food.
The conditions inside certain types of nuclear reactors with high neutron flux can induce radioactivity. The components in those reactors may become highly radioactive from the radiation to which they are exposed. Induced radioactivity increases the amount of nuclear waste that must eventually be disposed, but it is not referred to as radioactive contamination unless it is uncontrolled.
Further research originally done by Irene and Frederic Joliot-Curie has led to modern techniques to treat various types of cancers.
Ștefania Mărăcineanu's work
After World War I, with support from Constantin Kirițescu, Ștefania Mărăcineanu obtained a fellowship that allowed her to travel to Paris to further her studies. In 1919 she took a course on radioactivity at the Sorbonne with Marie Curie. Afterwards, she pursued research with Curie at the Radium Institute until 1926. She received her Ph.D. At the institute, Mărăcineanu researched the half-life of polonium and devised methods of measuring alpha decay. This work led her to believe that radioactive isotopes could be formed from atoms as a result of exposure to polonium's alpha rays, an observation which would lead to the Joliot-Curies' 1935 Nobel Prize.
In 1935, Frédéric and Irène Joliot-Curie (Irène being the daughter of Pierre and Marie Curie) won the Nobel Prize in Chemistry for the discovery of artificial radioactivity, although all data show that Mărăcineanu was the first to achieve it. Indeed, Ștefania Mărăcineanu expressed her dismay that Irène Joliot-Curie had used a large part of her observations regarding artificial radioactivity without mentioning it. Mărăcineanu publicly claimed that she discovered artificial radioactivity during her years of research in Paris, as evidenced by her doctoral dissertation, presented more than 10 years earlier. "Mărăcineanu wrote to Lise Meitner in 1936, expressing her disappointment that Irène Joliot-Curie, without her knowledge, used much of her work, especially that related to artificial radioactivity, in her work," is mentioned in the book A devotion to their science: Pioneer women of radioactivity.
See also
Neutron activation
Radioactive decay
Radioactivity
Slow neutron
Radiocarbon dating
Notes
External links
PhysLink.com – Ask the Experts "Gamma ray food irradiation"
Conference (Dec. 1935) for the Nobel prize of F. & I. Joliot-Curie (induced radioactivity), online and analyzed on BibNum [click 'à télécharger' for English version].
Radioactive waste
Radiation effects | Induced radioactivity | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 1,010 | [
"Physical phenomena",
"Radioactive waste",
"Materials science",
"Radiation",
"Hazardous waste",
"Condensed matter physics",
"Environmental impact of nuclear power",
"Radiation effects",
"Radioactivity"
] |
3,291,372 | https://en.wikipedia.org/wiki/Loading%20dose | In pharmacokinetics, a loading dose is an initial higher dose of a drug that may be given at the beginning of a course of treatment before dropping down to a lower maintenance dose.
A loading dose is most useful for drugs that are eliminated from the body relatively slowly, i.e. have a long systemic half-life. Such drugs need only a low maintenance dose in order to keep the amount of the drug in the body at the appropriate therapeutic level, but this also means that, without an initial higher dose, it would take a long time for the amount of the drug in the body to reach that level.
Drugs which may be started with an initial loading dose include digoxin, teicoplanin, voriconazole, procainamide and fulvestrant.
A loading dose can also be described as one dose or a series of doses given at the onset of therapy with the aim of achieving the target concentration rapidly.
Worked example
For an example, one might consider the hypothetical drug foosporin. Suppose it has a long lifetime in the body, and only ten percent of it is cleared from the blood each day by the liver and kidneys. Suppose also that the drug works best when the total amount in the body is exactly one gram. So, the maintenance dose of foosporin is 100 milligrams (100 mg) per day, just enough to offset the amount cleared.
Suppose a patient just started taking 100 mg of foosporin every day.
On the first day, they'd have 100 mg in their system; their body would clear 10 mg, leaving 90 mg.
On the second day, the patient would have 190 mg in total; their body would clear 19 mg, leaving 171 mg.
On the third day, they'd be up to 271 mg total; their body would clear 27 mg, leaving 244 mg.
As one can see, it would take many days for the total amount of drug within the body to come close to 1 gram (1000 mg) and achieve its full therapeutic effect.
For a drug such as this, a doctor might prescribe a loading dose of one gram to be taken on the first day. That immediately gets the drug's concentration in the body up to the therapeutically-useful level.
First day: 1000 mg; the body clears 100 mg, leaving 900 mg.
On the second day, the patient takes 100 mg, bringing the level back to 1000 mg; the body clears 100 mg overnight, still leaving 900 mg, and so forth.
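The day-by-day arithmetic above can be captured in a few lines of code. The sketch below is hypothetical, using the same 10% daily clearance and the example drug, and compares the plain maintenance regimen with one that starts with a 1000 mg loading dose.

```python
def simulate(days, daily_dose, loading_dose=None, clearance_fraction=0.10):
    """Return the amount of drug (mg) in the body at the end of each day.
    Each day: take the dose, then clear a fixed fraction of what is on board."""
    amount = 0.0
    history = []
    for day in range(1, days + 1):
        dose = loading_dose if (day == 1 and loading_dose is not None) else daily_dose
        amount += dose
        amount -= clearance_fraction * amount
        history.append(round(amount, 1))
    return history

print(simulate(5, 100))                      # [90.0, 171.0, 243.9, ...] slow climb toward the target
print(simulate(5, 100, loading_dose=1000))   # [900.0, 900.0, 900.0, ...] near-therapeutic level from day one
```

The first three values of the plain regimen reproduce the 90, 171 and 244 mg figures in the example above.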
Calculating the loading dose
Four variables are used to calculate the loading dose:
Cp = desired peak concentration of drug
Vd = volume of distribution of drug in body
F = bioavailability
S = fraction of drug salt form which is active drug
The required loading dose may then be calculated as

$$\text{Loading dose} = \frac{C_p \times V_d}{F \times S}$$
For an intravenously administered drug, the bioavailability F will equal 1, since the drug is directly introduced to the bloodstream. If the patient requires an oral dose, bioavailability will be less than 1 (depending upon absorption, first pass metabolism etc.), requiring a larger loading dose.
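A minimal sketch of the calculation, assuming the arrangement of the four variables given in the formula above; the numbers in the example call are hypothetical, chosen only to show the effect of bioavailability.

```python
def loading_dose(cp, vd, f=1.0, s=1.0):
    """Loading dose = (Cp * Vd) / (F * S).

    cp : desired peak plasma concentration (e.g. mg/L)
    vd : volume of distribution (L)
    f  : bioavailability (1.0 for intravenous administration, < 1.0 for oral)
    s  : fraction of the salt form that is active drug
    """
    return (cp * vd) / (f * s)

# Hypothetical example: target 10 mg/L, Vd of 40 L, given orally with 50% bioavailability
print(loading_dose(cp=10, vd=40, f=0.5))   # 800 mg, i.e. twice the 400 mg needed intravenously
```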
References
Pharmacokinetics | Loading dose | [
"Chemistry"
] | 660 | [
"Pharmacology",
"Pharmacokinetics"
] |
3,292,213 | https://en.wikipedia.org/wiki/Hazard%20analysis | A hazard analysis is one of many methods that may be used to assess risk. At its core, the process entails describing a system object (such as a person or machine) that intends to conduct some activity. During the performance of that activity, an adverse event (referred to as a “factor”) may be encountered that could cause or contribute to an occurrence (mishap, incident, accident). Finally, that occurrence will result in some outcome that may be measured in terms of the degree of loss or harm. This outcome may be measured on a continuous scale, such as an amount of monetary loss, or the outcomes may be categorized into various levels of severity.
A Simple Hazard Analysis
The first step in hazard analysis is to identify the hazards. If an automobile is an object performing an activity such as driving over a bridge, and that bridge may become icy, then an icy bridge might be identified as a hazard. If this hazard is encountered, it could cause or contribute to the occurrence of an automobile accident, and the outcome of that occurrence could range in severity from a minor fender-bender to a fatal accident.
Managing Risk through Hazard Analysis
A hazard analysis may be used to inform decisions regarding the mitigation of risk. For instance, the probability of encountering an icy bridge may be reduced by adding salt such that the ice will melt. Or, risk mitigation strategies may target the occurrence. For instance, putting tire chains on a vehicle does nothing to change the probability of a bridge becoming icy, but if an icy bridge is encountered, it does improve traction, reducing the chance of a sliding into another vehicle. Finally, risk may be managed by influencing the severity of outcomes. For instance, seatbelts and airbags do nothing to prevent bridges from becoming icy, nor do they prevent accidents caused by that ice. However, in the event of an accident, these devices lower the probability of the accident resulting in fatal or serious injuries.
Software Hazard Analysis
IEEE STD-1228-1994 Software Safety Plans prescribes industry best practices for conducting software safety hazard analyses to help ensure safety requirements and attributes are defined and specified for inclusion in software that commands, controls or monitors critical functions. When software is involved in a system, the development and design assurance of that software is often governed by DO-178C. The severity of consequence identified by the hazard analysis establishes the criticality level of the software. Software criticality levels range from A to E, corresponding to severities from Catastrophic to No Safety Effect. Higher levels of rigor are required for Level A and B software, and the corresponding functional tasks and work products in the system safety domain are used as objective evidence of meeting safety criteria and requirements.
In 2009 a leading edge commercial standard was promulgated based on decades of proven system safety processes in DoD and NASA. ANSI/GEIA-STD-0010-2009 (Standard Best Practices for System Safety Program Development and Execution) is a demilitarized commercial best practice that uses proven holistic, comprehensive and tailored approaches for hazard prevention, elimination and control. It is centered around the hazard analysis and functional based safety process.
Severity category examples
When used as part of an aviation hazard analysis, "Severity" describes the outcome (the degree of loss or harm) that results from an occurrence (an aircraft accident or incident). When categorized, severity categories must be mutually exclusive such that every occurrence has one, and only one, severity category associated with it. The definitions must also be collectively exhaustive such that all occurrences fall into one of the categories. In the US, the FAA includes five severity categories as part of its safety risk management policy.
Likelihood category examples
When used as part of an aviation hazard analysis, a "Likelihood" is a specific probability. It is the joint probability of a hazard occurring, that hazard causing or contributing to an aircraft accident or incident, and the resulting degree of loss or harm falling within one of the defined severity categories. Thus, if there are five severity categories, each hazard will have five likelihoods. In the US, the FAA provides a continuous probability scale for measuring likelihood, but also includes seven likelihood categories as part of its safety risk management policy.
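To illustrate how severity and likelihood categories are combined in practice, here is a small, purely illustrative sketch of a risk-matrix lookup; the category names and acceptability thresholds are invented for the example and are not the FAA's official definitions.

```python
# Illustrative severity and likelihood scales (not the FAA's official tables)
SEVERITY = ["catastrophic", "hazardous", "major", "minor", "no safety effect"]
LIKELIHOOD = ["frequent", "probable", "remote", "extremely remote", "extremely improbable"]

def risk_level(severity, likelihood):
    """Map a (severity, likelihood) pair to a coarse risk level.
    Lower index means worse severity / higher likelihood, so a low combined score means higher risk."""
    score = SEVERITY.index(severity) + LIKELIHOOD.index(likelihood)
    if score <= 2:
        return "high - unacceptable without mitigation"
    elif score <= 5:
        return "medium - acceptable with mitigation and monitoring"
    return "low - acceptable"

print(risk_level("catastrophic", "remote"))       # high - unacceptable without mitigation
print(risk_level("minor", "extremely remote"))    # low - acceptable
```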
See also
Layers of protection analysis (LOPA) – Technique for evaluating the hazards, risks and layers of protection of a system
DO-178C (Software Considerations in Airborne Systems and Equipment Certification)
DO-254 (similar to DO-178C, but for hardware)
ARP4754 (system development process)
ARP4761 (system safety assessment process)
Further reading
Notes
References
External links
CFR, Title 29-Labor, Part 1910--Occupational Safety and Health Standards, § 1910.119 U.S. OSHA regulations regarding "Process safety management of highly hazardous chemicals" (especially Appendix C).
FAA Order 8040.4 establishes FAA safety risk management policy.
The FAA publishes a System Safety Handbook that provides a good overview of the system safety process used by the agency.
IEEE 1584-2002 Standard which provides guidelines for doing arc flash hazard assessment.
Avionics
Process safety
Safety engineering
Software quality
Occupational safety and health
Reliability engineering | Hazard analysis | [
"Chemistry",
"Technology",
"Engineering"
] | 1,029 | [
"Systems engineering",
"Reliability engineering",
"Safety engineering",
"Avionics",
"Hazard analysis",
"Process safety",
"Aircraft instruments",
"Chemical process engineering"
] |
1,128,719 | https://en.wikipedia.org/wiki/Temperature%20coefficient | A temperature coefficient describes the relative change of a physical property that is associated with a given change in temperature. For a property R that changes when the temperature changes by dT, the temperature coefficient α is defined by the following equation:

$$\frac{dR}{R} = \alpha\,dT$$

Here α has the dimension of an inverse temperature and can be expressed e.g. in 1/K or K−1.
If the temperature coefficient itself does not vary too much with temperature and $\alpha \Delta T \ll 1$, a linear approximation will be useful in estimating the value R of a property at a temperature T, given its value R0 at a reference temperature T0:

$$R(T) = R_0 (1 + \alpha \Delta T)$$

where ΔT is the difference between T and T0.

For strongly temperature-dependent α, this approximation is only useful for small temperature differences ΔT.
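As a short illustration of the linear approximation, the sketch below evaluates R(T) = R0(1 + αΔT) for a copper-like conductor; the value α ≈ 0.0039 K⁻¹ is a typical handbook figure assumed here for the example, not one given in this article.

```python
def resistance_linear(r0, alpha, t, t0=20.0):
    """Linear temperature-coefficient approximation R(T) = R0 * (1 + alpha * (T - T0))."""
    return r0 * (1.0 + alpha * (t - t0))

alpha_cu = 0.0039   # per kelvin, typical handbook value for copper (assumed here)
r0 = 100.0          # ohms at the 20 °C reference temperature

for t in (20, 60, 100):
    print(t, round(resistance_linear(r0, alpha_cu, t), 2))   # 100.0, 115.6, 131.2 ohms
```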
Temperature coefficients are specified for various applications, including electric and magnetic properties of materials as well as reactivity. The temperature coefficient of most of the reactions lies between 2 and 3.
Negative temperature coefficient
Most ceramics exhibit negative temperature dependence of resistance behaviour. This effect is governed by an Arrhenius equation over a wide range of temperatures:

$$R = A\,e^{B/T}$$

where R is resistance, A and B are constants, and T is absolute temperature (K).
The constant B is related to the energies required to form and move the charge carriers responsible for electrical conduction; hence, as the value of B increases, the material becomes more insulating. Practical and commercial NTC resistors aim to combine modest resistance with a value of B that provides good sensitivity to temperature. Such is the importance of the B constant value that it is possible to characterize NTC thermistors using the B parameter equation:

$$R = R_0\,e^{B\left(\frac{1}{T} - \frac{1}{T_0}\right)}$$

where $R_0$ is the resistance at temperature $T_0$.
Many materials that produce acceptable values of $B$ are alloyed materials or materials that possess a variable negative temperature coefficient (NTC), which occurs when a physical property (such as thermal conductivity or electrical resistivity) of a material lowers with increasing temperature, typically in a defined temperature range. For most of these materials, electrical resistivity will decrease with increasing temperature.
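In practice the B parameter equation is usually inverted to turn a measured thermistor resistance into a temperature. The sketch below assumes typical datasheet values (R0 = 10 kΩ at 25 °C, B = 3950 K), which are illustrative rather than taken from this article.

```python
import math

def ntc_temperature(r, r0=10_000.0, t0=298.15, b=3950.0):
    """Invert the B-parameter equation R = R0 * exp(B * (1/T - 1/T0)) for T (in kelvin)."""
    inv_t = 1.0 / t0 + math.log(r / r0) / b
    return 1.0 / inv_t

# A resistance reading of about 4.4 kΩ corresponds to roughly 45 °C for these parameters
print(ntc_temperature(4400.0) - 273.15)
```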
Materials with a negative temperature coefficient have been used in floor heating since 1971. The negative temperature coefficient avoids excessive local heating beneath carpets, bean bag chairs, mattresses, etc., which can damage wooden floors, and may infrequently cause fires.
Reversible temperature coefficient
Residual magnetic flux density or B changes with temperature and it is one of the important characteristics of magnet performance. Some applications, such as inertial gyroscopes and traveling-wave tubes (TWTs), need to have constant field over a wide temperature range. The reversible temperature coefficient (RTC) of B is defined as:

$$\mathrm{RTC} = \frac{|\Delta B|}{|B|\,\Delta T} \times 100\%$$
To address these requirements, temperature compensated magnets were developed in the late 1970s. For conventional SmCo magnets, B decreases as temperature increases. Conversely, for GdCo magnets, B increases as temperature increases within certain temperature ranges. By combining samarium and gadolinium in the alloy, the temperature coefficient can be reduced to nearly zero.
Electrical resistance
The temperature dependence of electrical resistance and thus of electronic devices (wires, resistors) has to be taken into account when constructing devices and circuits. The temperature dependence of conductors is to a great degree linear and can be described by the approximation below:

$$\rho(T) = \rho_0\left[1 + \alpha_0\,(T - T_0)\right]$$

where $\alpha_0$ just corresponds to the specific resistance temperature coefficient at a specified reference value (normally T = 0 °C) and $\rho_0$ is the resistivity at that reference temperature.
That of a semiconductor is however exponential:

$$R = a\,S\,e^{b/T}$$

where $S$ is defined as the cross-sectional area and $a$ and $b$ are coefficients determining the shape of the function and the value of resistivity at a given temperature.

For both, $\alpha$ is referred to as the temperature coefficient of resistance (TCR).
This property is used in devices such as thermistors.
Positive temperature coefficient of resistance
A positive temperature coefficient (PTC) refers to materials that experience an increase in electrical resistance when their temperature is raised. Materials which have useful engineering applications usually show a relatively rapid increase with temperature, i.e. a higher coefficient. The higher the coefficient, the greater an increase in electrical resistance for a given temperature increase. A PTC material can be designed to reach a maximum temperature for a given input voltage, since at some point any further increase in temperature would be met with greater electrical resistance. Unlike linear resistance heating or NTC materials, PTC materials are inherently self-limiting. On the other hand, NTC material may also be inherently self-limiting if constant current power source is used.
Some materials even have exponentially increasing temperature coefficient. Example of such a material is PTC rubber.
Negative temperature coefficient of resistance
A negative temperature coefficient (NTC) refers to materials that experience a decrease in electrical resistance when their temperature is raised. Materials which have useful engineering applications usually show a relatively rapid decrease with temperature, i.e. a lower coefficient. The lower the coefficient, the greater a decrease in electrical resistance for a given temperature increase. NTC materials are used to create inrush current limiters (because they present higher initial resistance until the current limiter reaches quiescent temperature), temperature sensors and thermistors.
Negative temperature coefficient of resistance of a semiconductor
An increase in the temperature of a semiconducting material results in an increase in charge-carrier concentration. This results in a higher number of charge carriers available for recombination, increasing the conductivity of the semiconductor. The increasing conductivity causes the resistivity of the semiconductor material to decrease with the rise in temperature, resulting in a negative temperature coefficient of resistance.
Temperature coefficient of elasticity
The elastic modulus of elastic materials varies with temperature, typically decreasing with higher temperature.
Temperature coefficient of reactivity
In nuclear engineering, the temperature coefficient of reactivity is a measure of the change in reactivity (resulting in a change in power), brought about by a change in temperature of the reactor components or the reactor coolant. This may be defined as

$$\alpha_T = \frac{\partial \rho}{\partial T}$$

where $\rho$ is reactivity and $T$ is temperature. The relationship shows that $\alpha_T$ is the value of the partial differential of reactivity with respect to temperature and is referred to as the "temperature coefficient of reactivity". As a result, the temperature feedback provided by $\alpha_T$ has an intuitive application to passive nuclear safety. A negative $\alpha_T$ is broadly cited as important for reactor safety, but wide temperature variations across real reactors (as opposed to a theoretical homogeneous reactor) limit the usability of a single metric as a marker of reactor safety.
In water moderated nuclear reactors, the bulk of reactivity changes with respect to temperature are brought about by changes in the temperature of the water. However each element of the core has a specific temperature coefficient of reactivity (e.g. the fuel or cladding). The mechanisms which drive fuel temperature coefficients of reactivity are different from water temperature coefficients. While water expands as temperature increases, causing longer neutron travel times during moderation, fuel material will not expand appreciably. Changes in reactivity in fuel due to temperature stem from a phenomenon known as doppler broadening, where resonance absorption of fast neutrons in fuel filler material prevents those neutrons from thermalizing (slowing down).
Mathematical derivation of temperature coefficient approximation
In its more general form, the temperature coefficient differential law is:

$$\frac{dR}{dT} = \alpha\,R$$

where $\alpha(T)$ is defined as above and is independent of $R$.

Integrating the temperature coefficient differential law gives:

$$R(T) = R_0\,e^{\int_{T_0}^{T} \alpha(T')\,dT'}$$

Applying the Taylor series approximation at the first order, in the proximity of $T_0$, leads to:

$$R(T) \approx R_0\left[1 + \alpha(T_0)\,(T - T_0)\right]$$
Units
The thermal coefficient of electrical circuit parts is sometimes specified as ppm/°C, or ppm/K. This specifies the fraction (expressed in parts per million) that its electrical characteristics will deviate when taken to a temperature above or below the operating temperature.
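A quick illustration of what a ppm/°C rating implies in practice, using hypothetical component values:

```python
def drift(nominal, tempco_ppm_per_c, delta_t):
    """Absolute change in a component value for a given temperature excursion,
    given a temperature coefficient stated in ppm/°C."""
    return nominal * tempco_ppm_per_c * 1e-6 * delta_t

# A 10 kΩ resistor rated 25 ppm/°C, taken 40 °C above its reference temperature:
print(drift(10_000.0, 25, 40))   # 10.0 -> the resistance may shift by about 10 ohms
```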
See also
Microbolometer (used to measure TCRs)
References
Bibliography
Electric and magnetic fields in matter
Electrical engineering
Nuclear physics | Temperature coefficient | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,552 | [
"Materials science",
"Electric and magnetic fields in matter",
"Condensed matter physics",
"Nuclear physics",
"Electrical engineering"
] |
1,129,026 | https://en.wikipedia.org/wiki/Hyperon | In particle physics, a hyperon is any baryon containing one or more strange quarks, but no charm, bottom, or top quarks. This form of matter may exist in a stable form within the core of some neutron stars. Hyperons are sometimes generically represented by the symbol Y.
History and research
The first research into hyperons happened in the 1950s and spurred physicists on to the creation of an organized classification of particles.
The term was coined by French physicist Louis Leprince-Ringuet in 1953, and announced for the first time at the cosmic ray conference at Bagnères de Bigorre in July of that year, agreed upon by Leprince-Ringuet, Bruno Rossi, C.F. Powell, William B. Fretter and Bernard Peters.
Today, research in this area is carried out on data taken at many facilities around the world, including CERN, Fermilab, SLAC, JLAB, Brookhaven National Laboratory, KEK, GSI and others. Physics topics include searches for CP violation, measurements of spin, studies of excited states (commonly referred to as spectroscopy), and hunts for exotic forms such as pentaquarks and dibaryons.
Properties and behavior
Being baryons, all hyperons are fermions. That is, they have half-integer spin and obey Fermi–Dirac statistics. Hyperons all interact via the strong nuclear force, making them types of hadron. They are composed of three light quarks, at least one of which is a strange quark, which makes them strange baryons.
Excited hyperon resonances and ground-state hyperons with a '*' included in their notation decay via the strong interaction. For the Ω⁻ as well as the lighter hyperons this decay mode is not possible given the particle masses and the conservation of flavor and isospin necessary in strong interactions. Instead, these decay weakly with non-conserved parity. An exception to this is the Σ⁰, which decays electromagnetically into the Λ on account of carrying the same flavor quantum numbers. The type of interaction through which these decays occur determines the average lifetime, which is why weakly decaying hyperons are significantly longer-lived than those that decay through strong or electromagnetic interactions.
List
Notes:
Since strangeness is conserved by the strong interactions, some ground-state hyperons cannot decay strongly. However, they do participate in strong interactions.
The Λ⁰ may also decay on rare occurrences via these processes:
Λ⁰ → p + e⁻ + ν̄e
Λ⁰ → p + μ⁻ + ν̄μ
The Ξ⁰ and Ξ⁻ are also known as "cascade" hyperons, since they go through a two-step cascading decay into a nucleon.
The Ω⁻ has a baryon number of +1 and hypercharge of −2, giving it strangeness of −3. It takes multiple flavor-changing weak decays for it to decay into a proton or neutron. Murray Gell-Mann's and Yuval Ne'eman's SU(3) model (sometimes called the Eightfold Way) predicted this hyperon's existence, mass and that it will only undergo weak decay processes. Experimental evidence for its existence was discovered in 1964 at Brookhaven National Laboratory. Further examples of its formation and observation using particle accelerators confirmed the SU(3) model.
See also
Delta baryon
Hypernucleus
Strangelet
List of baryons
List of particles
Physics portal
Timeline of particle discoveries
References
Baryons
Exotic matter
Strange quark | Hyperon | [
"Physics"
] | 705 | [
"Matter",
"Exotic matter"
] |
1,129,074 | https://en.wikipedia.org/wiki/Grand%20canonical%20ensemble | In statistical mechanics, the grand canonical ensemble (also known as the macrocanonical ensemble) is the statistical ensemble that is used to represent the possible states of a mechanical system of particles that are in thermodynamic equilibrium (thermal and chemical) with a reservoir. The system is said to be open in the sense that the system can exchange energy and particles with a reservoir, so that various possible states of the system can differ in both their total energy and total number of particles. The system's volume, shape, and other external coordinates are kept the same in all possible states of the system.
The thermodynamic variables of the grand canonical ensemble are the chemical potential (symbol: µ) and absolute temperature (symbol: T). The ensemble is also dependent on mechanical variables such as volume (symbol: V), which influence the nature of the system's internal states. This ensemble is therefore sometimes called the µVT ensemble, as each of these three quantities are constants of the ensemble.
Basics
In simple terms, the grand canonical ensemble assigns a probability P to each distinct microstate given by the following exponential:

P = e^((Ω + µN − E)/(kB T))

where N is the number of particles in the microstate and E is the total energy of the microstate; kB is the Boltzmann constant.
The number Ω is known as the grand potential and is constant for the ensemble. However, the probabilities and Ω will vary if different µ, V, T are selected. The grand potential Ω serves two roles: to provide a normalization factor for the probability distribution (the probabilities, over the complete set of microstates, must add up to one); and, many important ensemble averages can be directly calculated from the function Ω(µ, V, T).
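A minimal numerical sketch of these two roles of Ω for a made-up list of microstates; the energies, particle numbers, temperature and chemical potential below are arbitrary illustrations, not a model of any particular system.

```python
import math

kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # temperature, K (illustrative)
mu = -1.0e-21       # chemical potential, J (illustrative)

# Hypothetical microstates: (particle number N, total energy E in joules)
microstates = [(0, 0.0), (1, 2.0e-21), (1, 3.0e-21), (2, 5.5e-21)]

# Normalization fixes Omega: exp(-Omega/(kB*T)) = sum of exp((mu*N - E)/(kB*T))
Z = sum(math.exp((mu * N - E) / (kB * T)) for N, E in microstates)
Omega = -kB * T * math.log(Z)

probs = [math.exp((Omega + mu * N - E) / (kB * T)) for N, E in microstates]
print("sum of probabilities:", sum(probs))                    # 1 by construction
print("average particle number:",
      sum(p * N for p, (N, _) in zip(probs, microstates)))    # an ensemble average from Omega
```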
In the case where more than one kind of particle is allowed to vary in number, the probability expression generalizes to

P = e^((Ω + µ1N1 + µ2N2 + … + µsNs − E)/(kB T))

where µ1 is the chemical potential for the first kind of particles, N1 is the number of that kind of particle in the microstate, µ2 is the chemical potential for the second kind of particles and so on (s is the number of distinct kinds of particles). However, these particle numbers should be defined carefully (see the note on particle number conservation below).
The distribution of the grand canonical ensemble is called generalized Boltzmann distribution by some authors.
Grand ensembles are apt for use when describing systems such as the electrons in a conductor, or the photons in a cavity, where the shape is fixed but the energy and number of particles can easily fluctuate due to contact with a reservoir (e.g., an electrical ground or a dark surface, in these cases). The grand canonical ensemble provides a natural setting for an exact derivation of the Fermi–Dirac statistics or Bose–Einstein statistics for a system of non-interacting quantum particles (see examples below).
Note on formulation
An alternative formulation for the same concept writes the probability as P = (1/Z) e^((µN − E)/(kB T)), using the grand partition function Z = e^(−Ω/(kB T)) rather than the grand potential. The equations in this article (in terms of grand potential) may be restated in terms of the grand partition function by simple mathematical manipulations.
Applicability
The grand canonical ensemble is the ensemble that describes the possible states of an isolated system that is in thermal and chemical equilibrium with a reservoir (the derivation proceeds along lines analogous to the heat bath derivation of the normal canonical ensemble, and can be found in Reif). The grand canonical ensemble applies to systems of any size, small or large; it is only necessary to assume that the reservoir with which it is in contact is much larger (i.e., to take the macroscopic limit).
The condition that the system is isolated is necessary in order to ensure it has well-defined thermodynamic quantities and evolution. In practice, however, it is desirable to apply the grand canonical ensemble to describe systems that are in direct contact with the reservoir, since it is that contact that ensures the equilibrium. The use of the grand canonical ensemble in these cases is usually justified either 1) by assuming that the contact is weak, or 2) by incorporating a part of the reservoir connection into the system under analysis, so that the connection's influence on the region of interest is correctly modeled. Alternatively, theoretical approaches can be used to model the influence of the connection, yielding an open statistical ensemble.
Another case in which the grand canonical ensemble appears is when considering a system that is large and thermodynamic (a system that is "in equilibrium with itself"). Even if the exact conditions of the system do not actually allow for variations in energy or particle number, the grand canonical ensemble can be used to simplify calculations of some thermodynamic properties. The reason for this is that various thermodynamic ensembles (microcanonical, canonical) become equivalent in some aspects to the grand canonical ensemble, once the system is very large. Of course, for small systems, the different ensembles are no longer equivalent even in the mean. As a result, the grand canonical ensemble can be highly inaccurate when applied to small systems of fixed particle number, such as atomic nuclei.
Properties
Grand potential, ensemble averages, and exact differentials
The partial derivatives of the function Ω(µ, V, T) give important grand canonical ensemble average quantities: the average number of particles ⟨N⟩ = −∂Ω/∂µ, the average pressure ⟨P⟩ = −∂Ω/∂V, the Gibbs entropy S = −∂Ω/∂T, and the average energy ⟨E⟩ = Ω + µ⟨N⟩ + TS.
Exact differential: From the above expressions, it can be seen that the function Ω has the exact differential
dΩ = −S dT − ⟨N⟩ dµ − ⟨P⟩ dV.
First law of thermodynamics: Substituting the above relationship for ⟨E⟩ into the exact differential of Ω, an equation similar to the first law of thermodynamics is found, except with average signs on some of the quantities:
d⟨E⟩ = T dS + µ d⟨N⟩ − ⟨P⟩ dV.
Thermodynamic fluctuations: The variances in energy and particle number are
⟨E²⟩ − ⟨E⟩² = kB T² ∂⟨E⟩/∂T + kB T µ ∂⟨E⟩/∂µ,
⟨N²⟩ − ⟨N⟩² = kB T ∂⟨N⟩/∂µ.
Correlations in fluctuations: The covariance of particle number and energy is
⟨NE⟩ − ⟨N⟩⟨E⟩ = kB T ∂⟨E⟩/∂µ,
with analogous expressions for each species when several kinds of particles are present. (A numerical check of the fluctuation relation for particle number is sketched below.)
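The toy check referenced above, done by finite differences on a small, made-up set of microstates (all parameter values are arbitrary; kB is set to 1 for simplicity):

```python
import math

kB, T = 1.0, 1.0                                              # units with kB = 1 (illustrative)
states = [(0, 0.0), (1, 0.5), (1, 1.2), (2, 1.5), (2, 2.5)]   # hypothetical (N, E) pairs

def omega(mu):
    """Grand potential from the normalization condition sum(P) = 1."""
    return -kB * T * math.log(sum(math.exp((mu * N - E) / (kB * T))
                                  for N, E in states))

def averages(mu):
    """Return (<N>, Var(N)) computed directly from the microstate probabilities."""
    O = omega(mu)
    probs = [math.exp((O + mu * N - E) / (kB * T)) for N, E in states]
    mean = sum(p * N for p, (N, _) in zip(probs, states))
    var = sum(p * (N - mean) ** 2 for p, (N, _) in zip(probs, states))
    return mean, var

mu, h = 0.3, 1e-6
mean, var = averages(mu)
print(mean, -(omega(mu + h) - omega(mu - h)) / (2 * h))       # <N> vs -dOmega/dmu
print(var, kB * T * (averages(mu + h)[0] - averages(mu - h)[0]) / (2 * h))  # Var(N) vs kB*T*d<N>/dmu
```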
Example ensembles
The usefulness of the grand canonical ensemble is illustrated in the examples below. In each case the grand potential is calculated on the basis of the relationship

Ω = −kB T ln( Σ_microstates e^((µN − E)/(kB T)) )

which is required for the microstates' probabilities to add up to 1.
Statistics of noninteracting particles
Bosons and fermions (quantum)
In the special case of a quantum system of many non-interacting particles, the thermodynamics are simple to compute.
Since the particles are non-interacting, one can compute a series of single-particle stationary states, each of which represent a separable part that can be included into the total quantum state of the system.
For now let us refer to these single-particle stationary states as orbitals (to avoid confusing these "states" with the total many-body state), with the provision that each possible internal particle property (spin or polarization) counts as a separate orbital.
Each orbital may be occupied by a particle (or particles), or may be empty.
Since the particles are non-interacting, we may take the viewpoint that each orbital forms a separate thermodynamic system.
Thus each orbital is a grand canonical ensemble unto itself, one so simple that its statistics can be immediately derived here. Focusing on just one orbital labelled i, the total energy for a microstate of N particles in this orbital will be Nεi, where εi is the characteristic energy level of that orbital. The grand potential for the orbital is given by one of two forms, depending on whether the orbital is bosonic or fermionic:

for fermions: Ωi = −kB T ln(1 + e^((µ − εi)/(kB T)))
for bosons: Ωi = kB T ln(1 − e^((µ − εi)/(kB T)))   (which requires µ < εi)
In each case the value ⟨Ni⟩ = −∂Ωi/∂µ gives the thermodynamic average number of particles on the orbital: the Fermi–Dirac distribution ⟨Ni⟩ = 1/(e^((εi − µ)/(kB T)) + 1) for fermions, and the Bose–Einstein distribution ⟨Ni⟩ = 1/(e^((εi − µ)/(kB T)) − 1) for bosons.
Considering again the entire system, the total grand potential is found by adding up the Ωi for all orbitals.
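A small sketch of the two occupation formulas derived this way; the orbital energies, chemical potential and temperature are arbitrary illustrations (the Bose–Einstein form is only evaluated where µ < ε):

```python
import math

def fermi_dirac(eps, mu, kT):
    """Average occupation of a fermionic orbital with energy eps."""
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

def bose_einstein(eps, mu, kT):
    """Average occupation of a bosonic orbital with energy eps (requires mu < eps)."""
    return 1.0 / (math.exp((eps - mu) / kT) - 1.0)

kT = 0.025   # roughly room temperature expressed in eV (illustrative)
mu = 0.0     # chemical potential in eV (illustrative)
for eps in (-0.05, 0.0, 0.05, 0.10):
    fd = fermi_dirac(eps, mu, kT)
    be = bose_einstein(eps, mu, kT) if eps > mu else float("nan")
    print(f"eps = {eps:+.2f} eV   Fermi-Dirac = {fd:.3f}   Bose-Einstein = {be:.3f}")
```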
Indistinguishable classical particles
In classical mechanics it is also possible to consider indistinguishable particles (in fact, indistinguishability is a prerequisite for defining a chemical potential in a consistent manner; all particles of a given kind must be interchangeable). We can consider a region of the single-particle phase space with approximately uniform energy to be an "orbital" labelled i.
Two complications arise since this orbital actually encompasses many (infinite) distinct states. Briefly:
An overcounting correction of 1/N! is needed since the many-particle phase space contains N! copies of the same actual state (formed by the permutation of the particles' different exact states).
The chosen width of the orbital is arbitrary, thus there is a further proportionality factor λ that is independent of N.
Due to the overcounting correction, the summation over particle number takes the form of an exponential power series,

e^(−Ωi/(kB T)) = Σ_N (1/N!) (λ e^((µ − εi)/(kB T)))^N = exp(λ e^((µ − εi)/(kB T))),

so that Ωi = −kB T λ e^((µ − εi)/(kB T)) and the average occupation is ⟨Ni⟩ = λ e^((µ − εi)/(kB T)), the value corresponding to Maxwell–Boltzmann statistics.
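A quick numerical confirmation that the series sums to an exponential and that the resulting average occupation reduces to the Maxwell–Boltzmann value; the parameters are arbitrary, and λ stands for the arbitrary proportionality factor mentioned above.

```python
import math

kT = 1.0             # temperature in arbitrary units (illustrative)
mu, eps = -0.5, 0.3  # chemical potential and orbital energy (illustrative)
lam = 2.0            # arbitrary proportionality factor from the orbital width

x = lam * math.exp((mu - eps) / kT)
# Truncated exponential power series sum_N x^N / N! converges to e^x
series = sum(x ** N / math.factorial(N) for N in range(60))
print(series, math.exp(x))

# The same Poisson-weighted sum gives an average occupation equal to x itself,
# i.e. the Maxwell-Boltzmann occupation.
avg_N = sum(N * x ** N / math.factorial(N) for N in range(60)) / series
print(avg_N, x)
```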
Ionization of an isolated atom
The grand canonical ensemble can be used to predict whether an atom prefers to be in a neutral state or ionized state.
An atom is able to exist in ionized states with more or fewer electrons compared to neutral. As shown below, ionized states may be thermodynamically preferred depending on the environment.
Consider a simplified model where the atom can be in a neutral state or in one of two ionized states (a detailed calculation also includes the degeneracy factors of the states):
charge neutral state, with N0 electrons and energy E0,
an oxidized state (N0 − 1 electrons) with energy E0 + EI + qφ,
a reduced state (N0 + 1 electrons) with energy E0 − EA − qφ.
Here EI and EA are the atom's ionization energy and electron affinity, respectively; φ is the local electrostatic potential in the vacuum nearby the atom, and −q is the electron charge.
The grand potential in this case is thus determined by

Ω = −kB T ln( e^((µN0 − E0)/(kB T)) + e^((µ(N0 − 1) − E0 − EI − qφ)/(kB T)) + e^((µ(N0 + 1) − E0 + EA + qφ)/(kB T)) )

The quantity −qφ − µ is critical in this case, for determining the balance between the various states. This value is determined by the environment around the atom.
If one of these atoms is placed into a vacuum box, then −qφ − µ = W, the work function of the box lining material. Comparing the tables of work function for various solid materials with the tables of electron affinity and ionization energy for atomic species, it is clear that many combinations would result in a neutral atom, however some specific combinations would result in the atom preferring an ionized state: e.g., a halogen atom in a ytterbium box, or a cesium atom in a tungsten box. At room temperature this situation is not stable since the atom tends to adsorb to the exposed lining of the box instead of floating freely. At high temperatures, however, the atoms are evaporated from the surface in ionic form; this spontaneous surface ionization effect has been used as a cesium ion source.
At room temperature, this example finds application in semiconductors, where the ionization of a dopant atom is well described by this ensemble. In the semiconductor, the conduction band edge plays the role of the vacuum energy level (replacing −qφ), and µ is known as the Fermi level. Of course, the ionization energy and electron affinity of the dopant atom are strongly modified relative to their vacuum values. A typical donor dopant in silicon, phosphorus, has EI = 45 meV;
the value of −qφ − µ in intrinsic silicon is initially about 600 meV, guaranteeing the ionization of the dopant.
The value of −qφ − µ depends strongly on electrostatics, however, so under some circumstances it is possible to de-ionize the dopant.
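A minimal sketch of the comparison described in this example, written in terms of the quantity W = −qφ − µ discussed above: for each charge state it evaluates E − µN relative to the neutral atom and picks the minimum, which is the state preferred at low temperature. The energy values are rough, illustrative numbers (in eV), not tabulated data.

```python
# Compare E - mu*N of the three charge states of the simplified model above,
# measured relative to the neutral atom and expressed through W = -q*phi - mu.
# All numbers are illustrative assumptions, not measured values.

def preferred_state(E_I, E_A, W):
    """Return the thermodynamically preferred state and the relative E - mu*N values (eV)."""
    grand_energy = {
        "neutral":  0.0,
        "oxidized": E_I - W,   # one electron moved from the atom to the reservoir
        "reduced":  W - E_A,   # one electron moved from the reservoir to the atom
    }
    return min(grand_energy, key=grand_energy.get), grand_energy

# Cesium-like atom (low ionization energy) in a tungsten-like box (large W):
print(preferred_state(E_I=3.9, E_A=0.5, W=4.5))    # expected: oxidized (positive ion)
# Halogen-like atom (high electron affinity) in a ytterbium-like box (small W):
print(preferred_state(E_I=12.0, E_A=3.6, W=2.6))   # expected: reduced (negative ion)
```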
Meaning of chemical potential, generalized "particle number"
In order for a particle number to have an associated chemical potential, it must be conserved during the internal dynamics of the system, and only able to change when the system exchanges particles with an external reservoir.
If the particles can be created out of energy during the dynamics of the system, then an associated µN term must not appear in the probability expression for the grand canonical ensemble. In effect, this is the same as requiring that µ = 0 for that kind of particle. Such is the case for photons in a black cavity, whose number regularly changes due to absorption and emission on the cavity walls. (On the other hand, photons in a highly reflective cavity can be conserved and caused to have a nonzero µ.)
In some cases the number of particles is not conserved and the Ni represent more abstract conserved quantities:
Chemical reactions: Chemical reactions can convert one type of molecule to another; if reactions occur then the Ni must be defined such that they do not change during the chemical reaction.
High energy particle physics: Ordinary particles can be spawned out of pure energy, if a corresponding antiparticle is created. If this sort of process is allowed, then neither the number of particles nor the number of antiparticles is conserved. Instead, the number of particles minus the number of antiparticles is conserved. As particle energies increase, there are more possibilities to convert between particle types, and so there are fewer numbers that are truly conserved. At the very highest energies, the only conserved numbers are electric charge, weak isospin, and baryon–lepton number difference.
On the other hand, in some cases a single kind of particle may have multiple conserved numbers:
Closed compartments: In a system composed of multiple compartments that share energy but do not share particles, it is possible to set the chemical potentials separately for each compartment. For example, a capacitor is composed of two isolated conductors and is charged by applying a difference in electron chemical potential.
Slow equilibration: In some quasi-equilibrium situations it is possible to have two distinct populations of the same kind of particle in the same location, which are each equilibrated internally but not with each other. Though not strictly in equilibrium, it may be useful to name quasi-equilibrium chemical potentials which can differ among the different populations. Examples: (semiconductor physics) distinct quasi-Fermi levels (electron chemical potentials) in the conduction band and valence band; (spintronics) distinct spin-up and spin-down chemical potentials; (cryogenics) distinct parahydrogen and orthohydrogen chemical potentials.
Precise expressions for the ensemble
The precise mathematical expression for statistical ensembles has a distinct form depending on the type of mechanics under consideration (quantum or classical), as the notion of a "microstate" is considerably different. In quantum mechanics, the grand canonical ensemble affords a simple description since diagonalization provides a set of distinct microstates of a system, each with well-defined energy and particle number. The classical mechanical case is more complex as it involves not stationary states but instead an integral over canonical phase space.
Quantum mechanical
A statistical ensemble in quantum mechanics is represented by a density matrix, denoted by ρ̂. The grand canonical ensemble is the density matrix

ρ̂ = exp( (Ω + µ1N̂1 + µ2N̂2 + … − Ĥ) / (kB T) )

where Ĥ is the system's total energy operator (Hamiltonian), N̂1 is the system's total particle number operator for particles of type 1, N̂2 is the total particle number operator for particles of type 2, and so on; exp is the matrix exponential operator. The grand potential Ω is determined by the probability normalization condition that the density matrix has a trace of one, Tr ρ̂ = 1:

e^(−Ω/(kB T)) = Tr exp( (µ1N̂1 + µ2N̂2 + … − Ĥ) / (kB T) )
Note that for the grand ensemble, the basis states of the operators Ĥ, N̂1, etc. are all states with multiple particles in Fock space, and the density matrix is defined on the same basis. Since the energy and particle numbers are all separately conserved, these operators are mutually commuting.
The grand canonical ensemble can alternatively be written in a simple form using bra–ket notation, since it is possible (given the mutually commuting nature of the energy and particle number operators) to find a complete basis of simultaneous eigenstates |ψi⟩, indexed by i, where Ĥ|ψi⟩ = Ei|ψi⟩, N̂1|ψi⟩ = N1,i|ψi⟩, and so on. Given such an eigenbasis, the grand canonical ensemble is simply

ρ̂ = Σ_i e^((Ω + µ1N1,i + µ2N2,i + … − Ei)/(kB T)) |ψi⟩⟨ψi|

where the sum is over the complete set of states with state i having Ei total energy, N1,i particles of type 1, N2,i particles of type 2, and so on.
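A toy numerical version of these operator expressions for a single mode with Fock states |0⟩, |1⟩, |2⟩; the energy, chemical potential and temperature are arbitrary illustrations, and scipy's matrix exponential plays the role of exp above.

```python
import numpy as np
from scipy.linalg import expm

kT, mu, eps = 1.0, 0.2, 0.7            # illustrative parameters
H = np.diag([0.0, eps, 2 * eps])       # Hamiltonian for a toy single-mode Fock space
Nop = np.diag([0.0, 1.0, 2.0])         # particle-number operator on the same basis

# Trace-one condition fixes Omega: exp(-Omega/kT) = Tr exp((mu*N - H)/kT)
Omega = -kT * np.log(np.trace(expm((mu * Nop - H) / kT)))
rho = expm((Omega * np.eye(3) + mu * Nop - H) / kT)

print(np.trace(rho))          # -> 1.0 by construction
print(np.trace(rho @ Nop))    # ensemble-average particle number
```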
Classical mechanical
In classical mechanics, a grand ensemble is instead represented by a joint probability density function defined over multiple phase spaces of varying dimensions, ρ(N1, … Ns, p1, … pn, q1, … qn), where the p1, … pn and q1, … qn are the canonical coordinates (generalized momenta and generalized coordinates) of the system's internal degrees of freedom. The expression for the grand canonical ensemble is somewhat more delicate than the canonical ensemble since:
The number of particles and thus the number of coordinates varies between the different phase spaces, and,
it is vital to consider whether permuting similar particles counts as a distinct state or not.
In a system of particles, the number of degrees of freedom n depends on the number of particles N in a way that depends on the physical situation. For example, in a three-dimensional gas of monoatoms n = 3N; however in molecular gases there will also be rotational and vibrational degrees of freedom.
The probability density function for the grand canonical ensemble is:

ρ = (1/(h^n C)) e^((Ω + µ1N1 + µ2N2 + … − E)/(kB T))

where
E is the energy of the system, a function of the phase (N1, … Ns, p1, … pn, q1, … qn),
h is an arbitrary but predetermined constant with the units of energy × time, setting the extent of one microstate and providing correct dimensions to ρ,
C is an overcounting correction factor (see below), a function of N1, … Ns.
Again, the value of Ω is determined by demanding that ρ is a normalized probability density function:

e^(−Ω/(kB T)) = Σ over N1, …, Ns of ∫ (1/(h^n C)) e^((µ1N1 + µ2N2 + … − E)/(kB T)) dp1 … dqn
This integral is taken over the entire available phase space for the given numbers of particles.
Overcounting correction
A well-known problem in the statistical mechanics of fluids (gases, liquids, plasmas) is how to treat particles that are similar or identical in nature: should they be regarded as distinguishable or not? In the system's equation of motion each particle is forever tracked as a distinguishable entity, and yet there are also valid states of the system where the positions of each particle have simply been swapped: these states are represented at different places in phase space, yet would seem to be equivalent.
If the permutations of similar particles are regarded to count as distinct states, then the factor above is simply . From this point of view, ensembles include every permuted state as a separate microstate. Although appearing benign at first, this leads to a problem of severely non-extensive entropy in the canonical ensemble, known today as the Gibbs paradox. In the grand canonical ensemble a further logical inconsistency occurs: the number of distinguishable permutations depends not only on how many particles are in the system, but also on how many particles are in the reservoir (since the system may exchange particles with a reservoir). In this case the entropy and chemical potential are non-extensive but also badly defined, depending on a parameter (reservoir size) that should be irrelevant.
To solve these issues it is necessary that the exchange of two similar particles (within the system, or between the system and reservoir) must not be regarded as giving a distinct state of the system. In order to incorporate this fact, integrals are still carried over full phase space but the result is divided by

C = N1! N2! … Ns!

which is the number of different permutations possible. The division by C neatly corrects the overcounting that occurs in the integral over all phase space.
It is of course possible to include distinguishable types of particles in the grand canonical ensemble—each distinguishable type is tracked by a separate particle counter Ni and chemical potential µi. As a result, the only consistent way to include "fully distinguishable" particles in the grand canonical ensemble is to consider every possible distinguishable type of those particles, and to track each and every possible type with a separate particle counter and separate chemical potential.
See also
Microcanonical ensemble
Canonical ensemble
Notes
References
Statistical ensembles | Grand canonical ensemble | [
"Physics"
] | 3,839 | [
"Statistical ensembles",
"Statistical mechanics"
] |
1,129,272 | https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28speed%29 | To help compare different orders of magnitude, the following list describes various speed levels between approximately 2.2×10⁻¹⁸ m/s and 3.0×10⁸ m/s (the speed of light). Values in bold are exact.
List of orders of magnitude for speed
See also
Typical projectile speeds - also showing the corresponding kinetic energy per unit mass
Neutron temperature
References
Units of velocity
Physical quantities
Speed | Orders of magnitude (speed) | [
"Physics",
"Mathematics"
] | 76 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Units of velocity",
"Orders of magnitude",
"Physical properties",
"Units of measurement"
] |
1,130,097 | https://en.wikipedia.org/wiki/Inflationary%20epoch |
In physical cosmology, the inflationary epoch was the period in the evolution of the early universe when, according to inflation theory, the universe underwent an extremely rapid exponential expansion. This rapid expansion increased the linear dimensions of the early universe by a factor of at least 10²⁶ (and possibly a much larger factor), and so increased its volume by a factor of at least 10⁷⁸. Expansion by a factor of 10²⁶ is equivalent to expanding an object 1 nanometer (10⁻⁹ m, about half the width of a molecule of DNA) in length to one approximately 10.6 light years (about 62 trillion miles) long.
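A quick arithmetic check of the numbers quoted above (the metre-per-light-year and metre-per-mile conversion factors are the only inputs):

```python
# Stretch 1 nanometre by a factor of 10^26 and express the result.
metres = 1e-9 * 1e26          # expanded length in metres
light_year_m = 9.4607e15      # metres per light year
mile_m = 1609.344             # metres per mile

print(metres / light_year_m)      # ~10.6 light years
print(metres / mile_m / 1e12)     # ~62 trillion miles
```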
Description
Vacuum state is a configuration of quantum fields representing a local minimum (but not necessarily a global minimum) of energy.
Inflationary models propose that at approximately 10⁻³⁶ seconds after the Big Bang, the vacuum state of the Universe was different from the one seen at the present time: the inflationary vacuum had a much higher energy density.
According to general relativity, any vacuum state with non-zero energy density generates a repulsive force that leads to an expansion of space. In inflationary models, the early high-energy vacuum state causes a very rapid expansion. This expansion explains various properties of the current universe that are difficult to account for without such an inflationary epoch.
Most inflationary models propose a scalar field called the inflaton field, with properties necessary for having (at least) two vacuum states.
It is not known exactly when the inflationary epoch ended, but it is thought to have been between 10⁻³³ and 10⁻³² seconds after the Big Bang. The rapid expansion of space meant that any potential elementary particles (or other "unwanted" artifacts, such as topological defects) remaining from the time before inflation were now distributed very thinly across the universe.
When the inflaton field reconfigured itself into the low-energy vacuum state we currently observe, the huge difference in potential energy was released in the form of a dense, hot mixture of quarks, anti-quarks and gluons as it entered the electroweak epoch.
See also
Notes
References
External links
Inflation for Beginners by John Gribbin
NASA Universe 101 What is the Inflation Theory?
Physical cosmology
Big Bang
Inflation (cosmology) | Inflationary epoch | [
"Physics",
"Astronomy"
] | 463 | [
"Cosmogony",
"Astronomical sub-disciplines",
"Big Bang",
"Theoretical physics",
"Astrophysics",
"Physical cosmology"
] |
1,130,902 | https://en.wikipedia.org/wiki/Sphingomyelin | Sphingomyelin (SPH) is a type of sphingolipid found in animal cell membranes, especially in the membranous myelin sheath that surrounds some nerve cell axons. It usually consists of a ceramide with a phosphocholine head group (or, less commonly, a phosphoethanolamine head group); therefore, sphingomyelins can also be classified as sphingophospholipids. In humans, SPH represents ~85% of all sphingolipids, and typically makes up 10–20 mol % of plasma membrane lipids.
Sphingomyelin was first isolated by German chemist Johann L. W. Thudichum in the 1880s. The structure of sphingomyelin was first reported in 1927 as N-acyl-sphingosine-1-phosphorylcholine. Sphingomyelin content in mammals ranges from 2 to 15% in most tissues, with higher concentrations found in nerve tissues, red blood cells, and the ocular lenses. Sphingomyelin has significant structural and functional roles in the cell. It is a plasma membrane component and participates in many signaling pathways. The metabolism of sphingomyelin creates many products that play significant roles in the cell.
Physical characteristics
Composition
Sphingomyelin consists of a phosphocholine head group, a sphingosine, and a fatty acid. It is one of the few membrane phospholipids not synthesized from glycerol. The sphingosine and fatty acid can collectively be categorized as a ceramide. This composition allows sphingomyelin to play significant roles in signaling pathways: the degradation and synthesis of sphingomyelin produce important second messengers for signal transduction.
Sphingomyelin obtained from natural sources, such as eggs or bovine brain, contains fatty acids of various chain length. Sphingomyelin with set chain length, such as palmitoylsphingomyelin with a saturated 16 acyl chain, is available commercially.
Properties
Ideally, sphingomyelin molecules are shaped like a cylinder; however, many sphingomyelin molecules have a significant chain mismatch (the lengths of the two hydrophobic chains are significantly different). The hydrophobic chains of sphingomyelin tend to be much more saturated than those of other phospholipids. The main phase transition temperature of sphingomyelins is also higher than that of similar phospholipids, near 37 °C. This can introduce lateral heterogeneity in the membrane, generating domains in the membrane bilayer.
Sphingomyelin undergoes significant interactions with cholesterol. Cholesterol has the ability to eliminate the liquid-to-solid phase transition in phospholipids. Because the transition temperature of sphingomyelin lies within physiological temperature ranges, cholesterol can play a significant role in the phase of sphingomyelin. Sphingomyelins are also more prone to intermolecular hydrogen bonding than other phospholipids.
Location
Sphingomyelin is synthesized at the endoplasmic reticulum (ER), where it can be found in low amounts, and at the trans Golgi. It is enriched at the plasma membrane with a greater concentration on the outer than the inner leaflet. The Golgi complex represents an intermediate between the ER and plasma membrane, with slightly higher concentrations towards the trans side.
Metabolism
Synthesis
The synthesis of sphingomyelin involves the enzymatic transfer of a phosphocholine from phosphatidylcholine to a ceramide. The first committed step of sphingomyelin synthesis involves the condensation of L-serine and palmitoyl-CoA. This reaction is catalyzed by serine palmitoyltransferase. The product of this reaction is reduced, yielding dihydrosphingosine. The dihydrosphingosine undergoes N-acylation followed by desaturation to yield a ceramide. Each one of these reactions occurs at the cytosolic surface of the endoplasmic reticulum. The ceramide is transported to the Golgi apparatus where it can be converted to sphingomyelin. Sphingomyelin synthase is responsible for the production of sphingomyelin from ceramide. Diacylglycerol is produced as a byproduct when the phosphocholine is transferred.
Degradation
Sphingomyelin breakdown is responsible for initiating many universal signaling pathways. It is hydrolyzed by sphingomyelinases (sphingomyelin specific type-C phospholipases). The phosphocholine head group is released into the aqueous environment while the ceramide diffuses through the membrane.
Function
Membranes
The membranous myelin sheath that surrounds and electrically insulates many nerve cell axons is particularly rich in sphingomyelin, suggesting its role as an insulator of nerve fibers. The plasma membrane of other cells is also abundant in sphingomyelin, though it is largely to be found in the exoplasmic leaflet of the cell membrane. There is, however, some evidence that there may also be a sphingomyelin pool in the inner leaflet of the membrane. Moreover, neutral sphingomyelinase-2 – an enzyme that breaks down sphingomyelin into ceramide – has been found to localise exclusively to the inner leaflet, further suggesting that there may be sphingomyelin present there.
Signal transduction
The function of sphingomyelin remained unclear until it was found to have a role in signal transduction. It has been discovered that sphingomyelin plays a significant role in cell signaling pathways. The synthesis of sphingomyelin at the plasma membrane by sphingomyelin synthase 2 produces diacylglycerol, which is a lipid-soluble second messenger that can pass along a signal cascade. In addition, the degradation of sphingomyelin can produce ceramide which is involved in the apoptotic signaling pathway.
Apoptosis
Sphingomyelin has been found to have a role in cell apoptosis by hydrolyzing into ceramide. Studies in the late 1990s had found that ceramide was produced in a variety of conditions leading to apoptosis. It was then hypothesized that sphingomyelin hydrolysis and ceramide signaling were essential in the decision of whether a cell dies. In the early 2000s new studies emerged that defined a new role for sphingomyelin hydrolysis in apoptosis, determining not only when a cell dies but how. After more experimentation it has been shown that if sphingomyelin hydrolysis happens at a sufficiently early point in the pathway the production of ceramide may influence either the rate and form of cell death or work to release blocks on downstream events.
Lipid rafts
Sphingomyelin, as well as other sphingolipids, are associated with lipid microdomains in the plasma membrane known as lipid rafts. Lipid rafts are characterized by the lipid molecules being in the lipid ordered phase, offering more structure and rigidity compared to the rest of the plasma membrane. In the rafts, the acyl chains have low chain motion but the molecules have high lateral mobility. This order is in part due to the higher transition temperature of sphingolipids as well as the interactions of these lipids with cholesterol. Cholesterol is a relatively small, nonpolar molecule that can fill the space between the sphingolipids that is a result of the large acyl chains. Lipid rafts are thought to be involved in many cell processes, such as membrane sorting and trafficking, signal transduction, and cell polarization. Excessive sphingomyelin in lipid rafts may lead to insulin resistance.
Due to the specific types of lipids in these microdomains, lipid rafts can accumulate certain types of proteins associated with them, thereby increasing the special functions they possess. Lipid rafts have been speculated to be involved in the cascade of cell apoptosis.
Abnormalities and associated diseases
Sphingomyelin can accumulate in a rare hereditary disease called Niemann–Pick disease, types A and B. It is a genetically-inherited disease caused by a deficiency in the lysosomal enzyme acid sphingomyelinase, which causes the accumulation of sphingomyelin in spleen, liver, lungs, bone marrow, and brain, causing irreversible neurological damage. Of the two types involving sphingomyelinase, type A occurs in infants. It is characterized by jaundice, an enlarged liver, and profound brain damage. Children with this type rarely live beyond 18 months. Type B involves an enlarged liver and spleen, which usually occurs in the pre-teen years. The brain is not affected. Most patients present with <1% normal levels of the enzyme in comparison to normal levels. A hemolytic protein, lysenin, may be a valuable probe for sphingomyelin detection in cells of Niemann-Pick A patients.
As a result of the autoimmune disease multiple sclerosis (MS), the myelin sheath of neuronal cells in the brain and spinal cord is degraded, resulting in loss of signal transduction capability. MS patients exhibit upregulation of certain cytokines in the cerebrospinal fluid, particularly tumor necrosis factor alpha. This activates sphingomyelinase, an enzyme that catalyzes the hydrolysis of sphingomyelin to ceramide; sphingomyelinase activity has been observed in conjunction with cellular apoptosis.
An excess of sphingomyelin in the red blood cell membrane (as in abetalipoproteinemia) causes excess lipid accumulation in the outer leaflet of the red blood cell plasma membrane. This results in abnormally shaped red cells called acanthocytes.
Additional images
References
External links
Phospholipids
Membrane biology | Sphingomyelin | [
"Chemistry"
] | 2,076 | [
"Phospholipids",
"Molecular biology",
"Membrane biology",
"Signal transduction"
] |
1,130,916 | https://en.wikipedia.org/wiki/Sphingosine | Sphingosine (2-amino-4-trans-octadecene-1,3-diol) is an 18-carbon amino alcohol with an unsaturated hydrocarbon chain, which forms a primary part of sphingolipids, a class of cell membrane lipids that include sphingomyelin, an important phospholipid.
Functions
Sphingosine can be phosphorylated in vivo via two kinases, sphingosine kinase type 1 and sphingosine kinase type 2. This leads to the formation of sphingosine-1-phosphate, a potent signaling lipid.
Sphingolipid metabolites, such as ceramides, sphingosine and sphingosine-1-phosphate, are lipid signaling molecules involved in diverse cellular processes.
Biosynthesis
Sphingosine is synthesized from palmitoyl-CoA and serine in a condensation reaction that yields dehydrosphingosine.
Dehydrosphingosine is then reduced by NADPH to dihydrosphingosine (sphinganine), acylated to dihydroceramide, and finally oxidized by FAD to ceramide. Sphingosine itself is formed solely via degradation of sphingolipids in the lysosome.
Gallery
See also
Dimethylsphingosine
Fingolimod
Literature
article
External links
Biomolecules
Diols
Amines
Alkene derivatives | Sphingosine | [
"Chemistry",
"Biology"
] | 309 | [
"Natural products",
"Functional groups",
"Organic compounds",
"Biomolecules",
"Structural biology",
"Biochemistry",
"Amines",
"Bases (chemistry)",
"Molecular biology"
] |
1,131,151 | https://en.wikipedia.org/wiki/Heliosphere | The heliosphere is the magnetosphere, astrosphere, and outermost atmospheric layer of the Sun. It takes the shape of a vast, tailed bubble-like region of space. In plasma physics terms, it is the cavity formed by the Sun in the surrounding interstellar medium. The "bubble" of the heliosphere is continuously "inflated" by plasma originating from the Sun, known as the solar wind. Outside the heliosphere, this solar plasma gives way to the interstellar plasma permeating the Milky Way. As part of the interplanetary magnetic field, the heliosphere shields the Solar System from significant amounts of cosmic ionizing radiation; uncharged gamma rays are, however, not affected. Its name was likely coined by Alexander J. Dessler, who is credited with the first use of the word in the scientific literature in 1967. The scientific study of the heliosphere is heliophysics, which includes space weather and space climate.
Flowing unimpeded through the Solar System for billions of kilometers, the solar wind extends far beyond even the region of Pluto until it encounters the "termination shock", where its motion slows abruptly due to the outside pressure of the interstellar medium. The "heliosheath" is a broad transitional region between the termination shock and the heliosphere's outermost edge, the "heliopause". The overall shape of the heliosphere resembles that of a comet: it is roughly spherical on one side, extending to around 100 astronomical units (AU), while the other side is tail-shaped, known as the "heliotail", trailing for several thousand AU.
Two Voyager program spacecraft explored the outer reaches of the heliosphere, passing through the termination shock and the heliosheath. Voyager 1 encountered the heliopause on 25 August 2012, when the spacecraft measured a forty-fold sudden increase in plasma density. Voyager 2 traversed the heliopause on 5 November 2018. Because the heliopause marks the boundary between matter originating from the Sun and matter originating from the rest of the galaxy, spacecraft that depart the heliosphere (such as the two Voyagers) are in interstellar space.
History
The heliosphere is thought to change significantly over the course of millions of years due to extrasolar effects such as closer supernovas or the traversing interstellar medium of different densities. Evidence suggests that up to three million years ago Earth was exposed to the interstellar medium due to it shrinking the heliosphere to the Inner Solar System, which possibly had impacted Earth's past climate and human evolution.
Structure
Despite its name, the heliosphere's shape is not a perfect sphere. Its shape is determined by three factors: the interstellar medium (ISM), the solar wind, and the overall motion of the Sun and heliosphere as it passes through the ISM. Because the solar wind and the ISM are both fluid, the heliosphere's shape and size are also fluid. Changes in the solar wind, however, more strongly alter the fluctuating position of the boundaries on short timescales (hours to a few years). The solar wind's pressure varies far more rapidly than the outside pressure of the ISM at any given location. In particular, the effect of the 11-year solar cycle, which sees a distinct maximum and minimum of solar wind activity, is thought to be significant.
On a broader scale, the motion of the heliosphere through the fluid medium of the ISM results in an overall comet-like shape. The solar wind plasma which is moving roughly "upstream" (in the same direction as the Sun's motion through the galaxy) is compressed into a nearly-spherical form, whereas the plasma moving "downstream" (opposite the Sun's motion) flows out for a much greater distance before giving way to the ISM, defining the long, trailing shape of the heliotail.
The limited data available and the unexplored nature of these structures have resulted in many theories as to their form. In 2020, Merav Opher led the team of researchers who determined that the shape of the heliosphere is a crescent that can be described as a deflated croissant.
Solar wind
The solar wind consists of particles (ionized atoms from the solar corona) and fields like the magnetic field that are produced from the Sun and stream out into space. Because the Sun rotates once approximately every 25 days, the heliospheric magnetic field transported by the solar wind gets wrapped into a spiral. The solar wind affects many other systems in the Solar System; for example, variations in the Sun's own magnetic field are carried outward by the solar wind, producing geomagnetic storms in the Earth's magnetosphere.
Heliospheric current sheet
The heliospheric current sheet is a ripple in the heliosphere created by the rotating magnetic field of the Sun. It marks the boundary between heliospheric magnetic field regions of opposite polarity. Extending throughout the heliosphere, the heliospheric current sheet could be considered the largest structure in the Solar System and is said to resemble a "ballerina's skirt".
Edge structure
The outer structure of the heliosphere is determined by the interactions between the solar wind and the winds of interstellar space. The solar wind streams away from the Sun in all directions at speeds of several hundred km/s in the Earth's vicinity. At some distance from the Sun, well beyond the orbit of Neptune, this supersonic wind slows down as it encounters the gases in the interstellar medium. This takes place in several stages:
The solar wind is traveling at supersonic speeds within the Solar System. At the termination shock, a standing shock wave, the solar wind falls below the speed of sound and becomes subsonic.
It was previously thought that once subsonic, the solar wind would be shaped by the ambient flow of the interstellar medium, forming a blunt nose on one side and comet-like heliotail behind, a region called the heliosheath. However, observations in 2009 showed that this model is incorrect. As of 2011, it is thought to be filled with a magnetic bubble "foam".
The outer surface of the heliosheath, where the heliosphere meets the interstellar medium, is called the heliopause. This is the edge of the entire heliosphere. Observations in 2009 led to changes to this model.
In theory, the heliopause causes turbulence in the interstellar medium as the Sun orbits the Galactic Center. Outside the heliopause would be a turbulent region caused by the pressure of the advancing heliopause against the interstellar medium. However, the velocity of the Sun relative to the interstellar medium is probably too low for a bow shock.
Termination shock
The termination shock is the point in the heliosphere where the solar wind slows down to subsonic speed (relative to the Sun) because of interactions with the local interstellar medium. This causes compression, heating, and a change in the magnetic field. In the Solar System, the termination shock is believed to be 75 to 90 astronomical units from the Sun. In 2004, Voyager 1 crossed the Sun's termination shock, followed by Voyager 2 in 2007.
The shock arises because solar wind particles are emitted from the Sun at about 400 km/s, while the speed of sound (in the interstellar medium) is about 100 km/s. The exact speed depends on the density, which fluctuates considerably. The interstellar medium, although very low in density, nonetheless has a relatively constant pressure associated with it; the pressure from the solar wind decreases with the square of the distance from the Sun. As one moves far enough away from the Sun, the pressure of the solar wind drops to where it can no longer maintain supersonic flow against the pressure of the interstellar medium, at which point the solar wind slows to below its speed of sound, causing a shock wave. Further from the Sun, the termination shock is followed by heliopause, where the two pressures become equal and solar wind particles are stopped by the interstellar medium.
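An order-of-magnitude sketch of this pressure-balance argument, using the 1/r² falloff of the solar-wind ram pressure; the wind density, wind speed and effective interstellar pressure below are rough, assumed values, so the result should only be read as an order-of-magnitude estimate, consistent with the 75–90 AU range quoted above.

```python
import math

m_p = 1.67e-27    # proton mass, kg
n_1au = 5.0e6     # solar-wind proton density at 1 AU, per m^3 (assumed)
v_sw = 4.0e5      # solar-wind speed, m/s (assumed ~400 km/s)
p_ism = 1.6e-13   # effective interstellar pressure, Pa (assumed)

p_1au = n_1au * m_p * v_sw ** 2        # ram pressure of the wind at 1 AU
# Ram pressure falls off as 1/r^2 (r in AU), so balance occurs where
# p_1au / r^2 = p_ism:
r_balance = math.sqrt(p_1au / p_ism)
print(f"ram pressure at 1 AU: {p_1au:.2e} Pa")
print(f"estimated balance distance: {r_balance:.0f} AU")
```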
Other termination shocks can be seen in terrestrial systems; perhaps the easiest may be seen by simply running a water tap into a sink creating a hydraulic jump. Upon hitting the floor of the sink, the flowing water spreads out at a speed that is higher than the local wave speed, forming a disk of shallow, rapidly diverging flow (analogous to the tenuous, supersonic solar wind). Around the periphery of the disk, a shock front or wall of water forms; outside the shock front, the water moves slower than the local wave speed (analogous to the subsonic interstellar medium).
Evidence presented at a meeting of the American Geophysical Union in May 2005 by Ed Stone suggests that the Voyager 1 spacecraft passed the termination shock in December 2004, when it was about 94 AU from the Sun, by virtue of the change in magnetic readings taken from the craft. In contrast, Voyager 2 began detecting returning particles when it was only 76 AU from the Sun, in May 2006. This implies that the heliosphere may be irregularly shaped, bulging outwards in the Sun's northern hemisphere and pushed inward in the south.
Heliosheath
The heliosheath is the region of the heliosphere beyond the termination shock. Here the wind is slowed, compressed, and made turbulent by its interaction with the interstellar medium. At its closest point, the inner edge of the heliosheath lies approximately 80 to 100 AU from the Sun. A proposed model hypothesizes that the heliosheath is shaped like the coma of a comet, and trails several times that distance in the direction opposite to the Sun's path through space. At its windward side, its thickness is estimated to be between 10 and 100 AU. Voyager project scientists have determined that the heliosheath is not "smooth" – it is rather a "foamy zone" filled with magnetic bubbles, each about 1 AU wide. These magnetic bubbles are created by the impact of the solar wind and the interstellar medium. Voyager 1 and Voyager 2 began detecting evidence of the bubbles in 2007 and 2008, respectively. The probably sausage-shaped bubbles are formed by magnetic reconnection between oppositely oriented sectors of the solar magnetic field as the solar wind slows down. They probably represent self-contained structures that have detached from the interplanetary magnetic field.
At a distance of about 113 AU, Voyager 1 detected a 'stagnation region' within the heliosheath. In this region, the solar wind slowed to zero, the magnetic field intensity doubled and high-energy electrons from the galaxy increased 100-fold. At about 122 AU, the spacecraft entered a new region that Voyager project scientists called the "magnetic highway", an area still under the influence of the Sun but with some dramatic differences.
Heliopause
The heliopause is the theoretical boundary where the Sun's solar wind is stopped by the interstellar medium; where the solar wind's strength is no longer great enough to push back the stellar winds of the surrounding stars. This is the boundary where the interstellar medium and solar wind pressures balance. The crossing of the heliopause should be signaled by a sharp drop in the temperature of solar wind-charged particles, a change in the direction of the magnetic field, and an increase in the number of galactic cosmic rays.
In May 2012, Voyager 1 detected a rapid increase in such cosmic rays (a 9% increase in a month, following a more gradual increase of 25% from January 2009 to January 2012), suggesting it was approaching the heliopause. Between late August and early September 2012, Voyager 1 witnessed a sharp drop in protons from the Sun, from 25 particles per second in late August, to about 2 particles per second by early October. In September 2013, NASA announced that Voyager 1 had crossed the heliopause as of 25 August 2012. This was at a distance of about 121 AU from the Sun. Contrary to predictions, data from Voyager 1 indicates the magnetic field of the galaxy is aligned with the solar magnetic field.
On November 5, 2018, the Voyager 2 mission detected a sudden decrease in the flux of low-energy ions. At the same time, the level of cosmic rays increased. This demonstrated that the spacecraft crossed the heliopause at a distance of about 119 AU from the Sun. Unlike Voyager 1, the Voyager 2 spacecraft did not detect interstellar flux tubes while crossing the heliosheath.
NASA also collected data from the heliopause remotely during the suborbital SHIELDS mission in 2021.
Heliotail
The heliotail is the several thousand astronomical units long tail of the heliosphere, and thus the Solar System's tail. It can be compared to the tail of a comet (however, a comet's tail does not stretch behind it as it moves; it is always pointing away from the Sun). The tail is a region where the Sun's solar wind slows down and ultimately escapes the heliosphere, slowly evaporating because of charge exchange.
The shape of the heliotail (newly found by NASA's Interstellar Boundary Explorer – IBEX), is that of a four-leaf clover. The particles in the tail do not shine, therefore it cannot be seen with conventional optical instruments. IBEX made the first observations of the heliotail by measuring the energy of "energetic neutral atoms", neutral particles created by collisions in the Solar System's boundary zone.
The tail has been shown to contain fast and slow particles; the slow particles are on the side and the fast particles are encompassed in the center. The shape of the tail can be linked to the Sun sending out fast solar winds near its poles and slow solar winds near its equator more recently. The clover-shaped tail moves further away from the Sun, which makes the charged particles begin to morph into a new orientation.
Cassini and IBEX data challenged the "heliotail" theory in 2009. In July 2013, IBEX results revealed a 4-lobed tail on the Solar System's heliosphere.
Outside structures
The heliopause is the final known boundary between the heliosphere and the interstellar space that is filled with material, especially plasma, not from the Earth's own star, the Sun, but from other stars. Even so, just outside the heliosphere (i.e. the "solar bubble") there is a transitional region, as detected by Voyager 1. Just as some interstellar pressure was detected as early as 2004, some of the Sun's material seeps into the interstellar medium. The heliosphere is thought to reside in the Local Interstellar Cloud inside the Local Bubble, which is a region in the Orion Arm of the Milky Way Galaxy.
Outside the heliosphere, there is a forty-fold increase in plasma density. There is also a radical reduction in the detection of certain types of particles from the Sun, and a large increase in galactic cosmic rays.
The flow of the interstellar medium (ISM) into the heliosphere has been measured by at least 11 different spacecraft as of 2013. By 2013, it was suspected that the direction of the flow had changed over time. The flow, coming from Earth's perspective from the constellation Scorpius, has probably changed direction by several degrees since the 1970s.
Hydrogen wall
Predicted to be a region of hot hydrogen, a structure called the "hydrogen wall" may be between the bow shock and the heliopause. The wall is composed of interstellar material interacting with the edge of the heliosphere. One paper released in 2013 studied the concept of a bow wave and hydrogen wall.
Another hypothesis suggests that the heliopause could be smaller on the side of the Solar System facing the Sun's orbital motion through the galaxy. It may also vary depending on the current velocity of the solar wind and the local density of the interstellar medium. It is known to lie far outside the orbit of Neptune. The mission of the Voyager 1 and 2 spacecraft is to find and study the termination shock, heliosheath, and heliopause. Meanwhile, the IBEX mission is attempting to image the heliopause from Earth orbit within two years of its 2008 launch. Initial results (October 2009) from IBEX suggest that previous assumptions are insufficiently cognizant of the true complexities of the heliopause.
In August 2018, long-term studies about the hydrogen wall by the New Horizons spacecraft confirmed results first detected in 1992 by the two Voyager spacecraft. Although the hydrogen is detected by extra ultraviolet light (which may come from another source), the detection by New Horizons corroborates the earlier detections by Voyager at a much higher level of sensitivity.
Bow shock
It was long hypothesized that the Sun produces a "shock wave" in its travels within the interstellar medium. It would occur if the interstellar medium is moving supersonically "toward" the Sun, since its solar wind moves "away" from the Sun supersonically. When the interstellar wind hits the heliosphere it slows and creates a region of turbulence. A bow shock was thought to possibly occur at about 230 AU, but in 2012 it was determined it probably does not exist. This conclusion resulted from new measurements: The velocity of the LISM (local interstellar medium) relative to the Sun's was previously measured to be 26.3 km/s by Ulysses, whereas IBEX measured it at 23.2 km/s.
This phenomenon has been observed outside the Solar System, around stars other than the Sun, by NASA's now retired orbital GALEX telescope. The red giant star Mira in the constellation Cetus has been shown to have both a debris tail of ejecta from the star and a distinct shock in the direction of its movement through space (at over 130 kilometers per second).
Observational methods
Detection by spacecraft
The precise distance to and shape of the heliopause are still uncertain. Interplanetary/interstellar spacecraft such as Pioneer 10, Pioneer 11 and New Horizons are traveling outward through the Solar System and will eventually pass through the heliopause. Contact to Pioneer 10 and 11 has been lost.
Cassini results
Rather than a comet-like shape, the heliosphere appears to be bubble-shaped according to data from Cassini's Ion and Neutral Camera (MIMI/INCA). Rather than being dominated by the collisions between the solar wind and the interstellar medium, the INCA (ENA) maps suggest that the interaction is controlled more by particle pressure and magnetic field energy density.
IBEX results
Initial data from Interstellar Boundary Explorer (IBEX), launched in October 2008, revealed a previously unpredicted "very narrow ribbon that is two to three times brighter than anything else in the sky." Initial interpretations suggest that "the interstellar environment has far more influence on structuring the heliosphere than anyone previously believed"
"No one knows what is creating the ENA (energetic neutral atoms) ribbon, ..."
"The IBEX results are truly remarkable! What we are seeing in these maps does not match with any of the previous theoretical models of this region. It will be exciting for scientists to review these (ENA) maps and revise the way we understand our heliosphere and how it interacts with the galaxy." In October 2010, significant changes were detected in the ribbon after 6 months, based on the second set of IBEX observations. IBEX data did not support the existence of a bow shock, but there might be a 'bow wave' according to one study.
Locally
Examples of missions that have or continue to collect data related to the heliosphere include:
Solar Anomalous and Magnetospheric Particle Explorer
Solar and Heliospheric Observatory
Solar Dynamics Observatory
STEREO
Ulysses spacecraft
Parker Solar Probe
During a total eclipse the high-temperature corona can be more readily observed from Earth solar observatories. During the Apollo program the Solar wind was measured on the Moon via the Solar Wind Composition Experiment. Some examples of Earth surface based Solar observatories include the McMath–Pierce solar telescope or the newer GREGOR Solar Telescope, and the refurbished Big Bear Solar Observatory.
Exploration history
The heliosphere is the area under the influence of the Sun; the two major components to determining its edge are the heliospheric magnetic field and the solar wind from the Sun. Three major sections from the beginning of the heliosphere to its edge are the termination shock, the heliosheath, and the heliopause. Five spacecraft have returned much of the data about its furthest reaches, including Pioneer 10 (1972–1997; data to 67 AU), Pioneer 11 (1973–1995; 44 AU), Voyager 1 and Voyager 2 (launched 1977, ongoing), and New Horizons (launched 2006). A type of particle called an energetic neutral atom (ENA) has also been observed to have been produced from its edges.
Except for regions near obstacles such as planets or comets, the heliosphere is dominated by material emanating from the Sun, although cosmic rays, fast-moving neutral atoms, and cosmic dust can penetrate the heliosphere from the outside. Originating at the extremely hot surface of the corona, solar wind particles reach escape velocity, streaming outwards at 300 to 800 km/s (671 thousand to 1.79 million mph or 1 to 2.9 million km/h). As the solar wind begins to interact with the interstellar medium, its velocity slows and eventually comes to a stop. The point where the solar wind becomes slower than the speed of sound is called the termination shock; the solar wind continues to slow as it passes through the heliosheath, leading to a boundary called the heliopause, where the interstellar medium and solar wind pressures balance. The termination shock was traversed by Voyager 1 in 2004, and by Voyager 2 in 2007.
It was thought that beyond the heliopause there was a bow shock, but data from Interstellar Boundary Explorer suggested the velocity of the Sun through the interstellar medium is too low for it to form. It may be a more gentle "bow wave".
Voyager data led to a new theory that the heliosheath has "magnetic bubbles" and a stagnation zone. Also, there were reports of a "stagnation region" within the heliosheath, starting around , detected by Voyager 1 in 2010. There, the solar wind velocity drops to zero, the magnetic field intensity doubles, and high-energy electrons from the galaxy increase 100-fold.
Starting in May 2012 at , Voyager 1 detected a sudden increase in cosmic rays, an apparent sign of approach to the heliopause. In the summer of 2013, NASA announced that Voyager 1 had reached interstellar space as of 25 August 2012.
In December 2012, NASA announced that in late August 2012, Voyager 1, at about from the Sun, entered a new region they called the "magnetic highway", an area still under the influence of the Sun but with some dramatic differences.
Pioneer 10 was launched in March 1972, and within 10 hours passed by the Moon; over the next 35 years or so the mission was the first heading outward, making many first discoveries about the nature of the heliosphere as well as Jupiter's impact on it. Pioneer 10 was the first spacecraft to detect sodium and aluminum ions in the solar wind, as well as helium in the inner Solar System. In November 1973, Pioneer 10 encountered Jupiter's enormous (compared to Earth's) magnetosphere and passed in and out of it 17 times, charting its interaction with the solar wind. Pioneer 10 returned scientific data until March 1997, including data on the solar wind out to about 67 AU. It was also contacted in 2003, when it was at a distance of 7.6 billion miles from Earth (82 AU), but no instrument data about the solar wind were returned then.
Voyager 1 surpassed the radial distance from the Sun of Pioneer 10, at 69.4 AU, on 17 February 1998, because it was traveling faster, gaining about 1.02 AU per year. On July 18, 2023, Voyager 2 overtook Pioneer 10 as the second most distant human-made object from the Sun. Pioneer 11, launched a year after Pioneer 10, returned similar data to Pioneer 10 out to 44.7 AU in 1995, when that mission was concluded. Pioneer 11 carried a similar instrument suite to Pioneer 10 but also had a flux-gate magnetometer. The Pioneer and Voyager spacecraft were on different trajectories and thus recorded data on the heliosphere in different overall directions away from the Sun. Data obtained from the Pioneer and Voyager spacecraft helped corroborate the detection of a hydrogen wall.
Voyagers 1 and 2 were launched in 1977 and operated continuously at least into the late 2010s, encountering various aspects of the heliosphere past the orbit of Pluto. In 2012, Voyager 1 is thought to have passed through the heliopause, and Voyager 2 did the same in 2018.
The twin Voyagers are the only man-made objects to have entered interstellar space. However, while they have left the heliosphere, they have not yet left the boundary of the Solar System, which is considered to be the outer edge of the Oort Cloud. Upon passing the heliopause, Voyager 2's Plasma Science Experiment (PLS) observed a sharp decline in the speed of solar wind particles on 5 November 2018, and there has been no sign of the solar wind since. The three other instruments on board measuring cosmic rays, low-energy charged particles, and magnetic fields also recorded the transition. The observations complement data from NASA's IBEX mission. NASA is also preparing an additional mission, the Interstellar Mapping and Acceleration Probe (IMAP), due to launch in 2025, to capitalize on the Voyager observations.
Timeline of exploration and detection
1904: Astronomers using the Potsdam Great Refractor with a spectrograph find evidence of the interstellar medium while observing the binary star Mintaka in Orion.
January 1959: Luna 1 becomes the first spacecraft to observe the solar wind.
1962: Mariner 2 detects the solar wind.
1972–1973: Pioneer 10 becomes the first spacecraft to explore the heliosphere past Mars, flying by Jupiter on 4 December 1973 and continuing to return solar wind data out to a distance of 67 AU.
February 1992: After flying by Jupiter, the Ulysses spacecraft becomes the first to explore the mid and high latitudes of the heliosphere.
1992: Pioneer and Voyager probes detected Ly-α radiation resonantly scattered by heliospheric hydrogen.
2004: Voyager 1 becomes the first spacecraft to reach the termination shock.
2005: SOHO observations of the solar wind show that the shape of the heliosphere is not axisymmetrical, but distorted, very likely under the effect of the local galactic magnetic field.
2009: IBEX project scientists discover and map a ribbon-shaped region of intense energetic neutral atom emission. These neutral atoms are thought to be originating from the heliopause.
October 2009: data suggest the heliosphere may be bubble-shaped rather than comet-shaped.
October 2010: significant changes were detected in the ribbon after six months, based on the second set of IBEX observations.
May 2012: IBEX data implies there is probably not a bow "shock".
June 2012: At 119 AU, Voyager 1 detected an increase in cosmic rays.
25 August 2012: Voyager 1 crosses the heliopause, becoming the first human-made object to depart the heliosphere.
August 2018: long-term studies of the hydrogen wall by the New Horizons spacecraft confirmed results first detected in 1992 by the two Voyager spacecraft.
5 November 2018: Voyager 2 crosses the heliopause, departing the heliosphere.
See also
Coronal mass ejection
Fermi glow
References
Sources
Further reading
External links
Voyager Interstellar Mission Objectives
NASA GALEX (Galaxy evolution Explorer) homepage at Caltech
A Big Surprise from the Edge of the Solar System (NASA 06.09.11)
Articles containing video clips
Local Interstellar Cloud
Space plasmas
Sphere
Trans-Neptunian region
Voyager program | Heliosphere | [
"Physics",
"Astronomy"
] | 5,790 | [
"Space plasmas",
"Trans-Neptunian region",
"Solar System",
"Astrophysics"
] |
1,131,243 | https://en.wikipedia.org/wiki/Dedekind%20zeta%20function | In mathematics, the Dedekind zeta function of an algebraic number field K, generally denoted ζK(s), is a generalization of the Riemann zeta function (which is obtained in the case where K is the field of rational numbers Q). It can be defined as a Dirichlet series, it has an Euler product expansion, it satisfies a functional equation, it has an analytic continuation to a meromorphic function on the complex plane C with only a simple pole at s = 1, and its values encode arithmetic data of K. The extended Riemann hypothesis states that if ζK(s) = 0 and 0 < Re(s) < 1, then Re(s) = 1/2.
The Dedekind zeta function is named for Richard Dedekind who introduced it in his supplement to Peter Gustav Lejeune Dirichlet's Vorlesungen über Zahlentheorie.
Definition and basic properties
Let K be an algebraic number field. Its Dedekind zeta function is first defined for complex numbers s with real part Re(s) > 1 by the Dirichlet series
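In the standard notation (a textbook form of the display formula, since exact typesetting varies between sources), the series reads
\zeta_K(s) = \sum_{0 \neq I \subseteq \mathcal{O}_K} \frac{1}{N_{K/\mathbf{Q}}(I)^{s}}, \qquad \operatorname{Re}(s) > 1,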
where I ranges through the non-zero ideals of the ring of integers OK of K and NK/Q(I) denotes the absolute norm of I (which is equal to the index [OK : I] of I in OK or, equivalently, the cardinality of the quotient ring OK / I). This sum converges absolutely for all complex numbers s with real part Re(s) > 1. In the case K = Q, this definition reduces to that of the Riemann zeta function.
Euler product
The Dedekind zeta function of K has an Euler product, which is a product over all the non-zero prime ideals of OK.
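A standard form of this product (again a textbook reconstruction, with P assumed to run over the non-zero prime ideals of OK) is
\zeta_K(s) = \prod_{P} \frac{1}{1 - N_{K/\mathbf{Q}}(P)^{-s}}, \qquad \operatorname{Re}(s) > 1.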
This is the expression in analytic terms of the uniqueness of the prime factorization of ideals in OK. For Re(s) > 1, ζK(s) is non-zero.
Analytic continuation and functional equation
Erich Hecke first proved that ζK(s) has an analytic continuation to a meromorphic function that is analytic at all points of the complex plane except for one simple pole at s = 1. The residue at that pole is given by the analytic class number formula and is made up of important arithmetic data involving invariants of the unit group and class group of K.
The Dedekind zeta function satisfies a functional equation relating its values at s and 1 − s. Specifically, let ΔK denote the discriminant of K, let r1 (resp. r2) denote the number of real places (resp. complex places) of K, and let
and
where Γ(s) is the gamma function. Then, the functions
satisfy the functional equation
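In a commonly used normalization (conventions differ slightly between sources, so the following should be read as the standard textbook form rather than necessarily the exact expressions intended above), the gamma factors and the completed zeta function are
\Gamma_{\mathbf{R}}(s) = \pi^{-s/2}\,\Gamma(s/2), \qquad \Gamma_{\mathbf{C}}(s) = 2\,(2\pi)^{-s}\,\Gamma(s),
\Lambda_K(s) = \left|\Delta_K\right|^{s/2}\,\Gamma_{\mathbf{R}}(s)^{r_1}\,\Gamma_{\mathbf{C}}(s)^{r_2}\,\zeta_K(s),
and the functional equation reads
\Lambda_K(s) = \Lambda_K(1 - s).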
Special values
Analogously to the Riemann zeta function, the values of the Dedekind zeta function at integers encode (at least conjecturally) important arithmetic data of the field K. For example, the analytic class number formula relates the residue at s = 1 to the class number h(K) of K, the regulator R(K) of K, the number w(K) of roots of unity in K, the absolute discriminant of K, and the number of real and complex places of K. Another example is at s = 0 where it has a zero whose order r is equal to the rank of the unit group of OK and the leading term is given by
It follows from the functional equation that r = r1 + r2 − 1.
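In the standard statement of this result (textbook normalization assumed), the leading term at s = 0 is
\lim_{s \to 0} \frac{\zeta_K(s)}{s^{\,r_1 + r_2 - 1}} = -\frac{h(K)\,R(K)}{w(K)} .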
Combining the functional equation and the fact that Γ(s) is infinite at all integers less than or equal to zero yields that ζK(s) vanishes at all negative even integers. It even vanishes at all negative odd integers unless K is totally real (i.e. r2 = 0; e.g. Q or a real quadratic field). In the totally real case, Carl Ludwig Siegel showed that ζK(s) is a non-zero rational number at negative odd integers. Stephen Lichtenbaum conjectured specific values for these rational numbers in terms of the algebraic K-theory of K.
Relations to other L-functions
For the case in which K is an abelian extension of Q, its Dedekind zeta function can be written as a product of Dirichlet L-functions. For example, when K is a quadratic field this shows that the ratio ζK(s)/ζ(s) (where ζ(s) is the Riemann zeta function) is the L-function L(s, χ), where χ is a Jacobi symbol used as a Dirichlet character. That the zeta function of a quadratic field is a product of the Riemann zeta function and a certain Dirichlet L-function is an analytic formulation of the quadratic reciprocity law of Gauss.
In general, if K is a Galois extension of Q with Galois group G, its Dedekind zeta function is the Artin L-function of the regular representation of G and hence has a factorization in terms of Artin L-functions of irreducible Artin representations of G.
The relation with Artin L-functions shows that if L/K is a Galois extension then ζL(s)/ζK(s) is holomorphic (ζK(s) "divides" ζL(s)); for general extensions the result would follow from the Artin conjecture for L-functions.
Additionally, ζK(s) is the Hasse–Weil zeta function of Spec OK and the motivic L-function of the motive coming from the cohomology of Spec K.
Arithmetically equivalent fields
Two fields are called arithmetically equivalent if they have the same Dedekind zeta function. Gassmann triples have been used to give examples of pairs of non-isomorphic fields that are arithmetically equivalent. In particular some of these pairs have different class numbers, so the Dedekind zeta function of a number field does not determine its class number.
It has been shown that two number fields K and L are arithmetically equivalent if and only if all but finitely many prime numbers p have the same inertia degrees in the two fields, i.e., if are the prime ideals in K lying over p, then the tuples need to be the same for K and for L for almost all p.
Notes
References
Section 10.5.1 of
Zeta and L-functions
Algebraic number theory | Dedekind zeta function | [
"Mathematics"
] | 1,262 | [
"Algebraic number theory",
"Number theory"
] |
1,131,357 | https://en.wikipedia.org/wiki/Preclinical%20development | In drug development, preclinical development (also termed preclinical studies or nonclinical studies) is a stage of research that begins before clinical trials (testing in humans) and during which important feasibility, iterative testing and drug safety data are collected, typically in laboratory animals.
The main goals of preclinical studies are to determine a starting, safe dose for first-in-human study and assess potential toxicity of the product, which typically include new medical devices, prescription drugs, and diagnostics.
Companies use stylized statistics to illustrate the risks in preclinical research, such as that on average, only one in every 5,000 compounds that enters drug discovery to the stage of preclinical development becomes an approved drug.
Types of preclinical research
Each class of product may undergo different types of preclinical research. For instance, drugs may undergo pharmacodynamics (what the drug does to the body) (PD), pharmacokinetics (what the body does to the drug) (PK), ADME, and toxicology testing. This data allows researchers to allometrically estimate a safe starting dose of the drug for clinical trials in humans. Medical devices that do not have drug attached will not undergo these additional tests and may go directly to good laboratory practices (GLP) testing for safety of the device and its components. Some medical devices will also undergo biocompatibility testing which helps to show whether a component of the device or all components are sustainable in a living model. Most preclinical studies must adhere to GLPs in ICH Guidelines to be acceptable for submission to regulatory agencies such as the Food & Drug Administration in the United States.
Typically, both in vitro and in vivo tests will be performed. Studies of drug toxicity examine which organs are targeted by the drug, as well as whether there are any long-term carcinogenic effects or toxic effects that cause illness.
Animal testing
The information collected from these studies is vital so that safe human testing can begin. Typically, in drug development studies animal testing involves two species. The most commonly used models are murine and canine, although primate and porcine are also used.
Choice of species
The choice of species is based on which will give the best correlation to human trials. Differences in the gut, enzyme activity, circulatory system, or other considerations make certain models more appropriate based on the dosage form, site of activity, or noxious metabolites. For example, canines may not be good models for solid oral dosage forms because the characteristic carnivore intestine is underdeveloped compared to the omnivore's, and gastric emptying rates are increased. Also, rodents can not act as models for antibiotic drugs because the resulting alteration to their intestinal flora causes significant adverse effects. Depending on a drug's functional groups, it may be metabolized in similar or different ways between species, which will affect both efficacy and toxicology.
Medical device studies also use this basic premise. Most studies are performed in larger species such as dogs, pigs and sheep which allow for testing in a similar sized model as that of a human. In addition, some species are used for similarity in specific organs or organ system physiology (swine for dermatological and coronary stent studies; goats for mammary implant studies; dogs for gastric and cancer studies; etc.).
Importantly, the regulatory guidelines of FDA, EMA, and other similar international and regional authorities usually require safety testing in at least two mammalian species, including one non-rodent species, prior to human trials authorization.
Ethical issues
Animal testing in the research-based pharmaceutical industry has been reduced in recent years both for ethical and cost reasons. However, most research will still involve animal based testing for the need of similarity in anatomy and physiology that is required for diverse product development.
No observable effect levels
Based on preclinical trials, no-observed-adverse-effect levels (NOAELs) on drugs are established, which are used to determine initial phase 1 clinical trial dosage levels on a mass API per mass patient basis. Generally a 1/100 uncertainty factor or "safety margin" is included to account for interspecies (1/10) and inter-individual (1/10) differences.
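As a rough illustration of how such a safety margin is applied (a minimal sketch only; the NOAEL value and the plain factor-of-100 division are illustrative assumptions, and real first-in-human dose selection also involves body-surface-area scaling and regulatory guidance):
# Illustrative calculation of a conservative starting dose from a NOAEL.
# All numbers are hypothetical examples, not recommendations.

noael_mg_per_kg = 50.0          # hypothetical NOAEL from the most sensitive animal species
interspecies_factor = 10.0      # uncertainty factor for animal-to-human differences
interindividual_factor = 10.0   # uncertainty factor for human-to-human variability

safety_margin = interspecies_factor * interindividual_factor   # combined 1/100 factor
starting_dose_mg_per_kg = noael_mg_per_kg / safety_margin

print(f"Starting dose: {starting_dose_mg_per_kg:.2f} mg/kg")   # 0.50 mg/kg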
See also
Drug development
Preclinical imaging
Phases of clinical research
References
Drug development
Drug discovery | Preclinical development | [
"Chemistry",
"Biology"
] | 909 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
1,131,674 | https://en.wikipedia.org/wiki/Muzzle%20energy | Muzzle energy is the kinetic energy of a bullet as it is expelled from the muzzle of a firearm. Without consideration of factors such as aerodynamics and gravity for the sake of comparison, muzzle energy is used as a rough indication of the destructive potential of a given firearm or cartridge. The heavier the bullet and especially the faster it moves, the higher its muzzle energy and the more damage it will do.
Kinetic energy
The general formula for the kinetic energy is Ek = ½mv², where v is the velocity of the bullet and m is the mass of the bullet.
Although both mass and velocity contribute to the muzzle energy, the muzzle energy is proportional to the mass and to the square of the velocity. The velocity of the bullet is therefore the more important determinant of muzzle energy. For a constant velocity, if the mass is doubled, the energy is doubled; however, for a constant mass, if the velocity is doubled, the muzzle energy increases four times. In the SI system the above Ek will be in joules if the mass, m, is in kilograms, and the speed, v, is in metres per second.
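A short numerical sketch of these proportionalities (the bullet mass and velocity below are arbitrary example values, not data from the article):
# Muzzle energy Ek = 1/2 * m * v**2, in joules when m is in kg and v in m/s.
def muzzle_energy(mass_kg, velocity_m_s):
    return 0.5 * mass_kg * velocity_m_s ** 2

m = 0.008    # 8 g bullet (example value)
v = 400.0    # 400 m/s muzzle velocity (example value)

print(muzzle_energy(m, v))          # 640 J
print(muzzle_energy(2 * m, v))      # doubling the mass doubles the energy: 1280 J
print(muzzle_energy(m, 2 * v))      # doubling the velocity quadruples it: 2560 J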
Typical muzzle energies of common firearms and cartridges
Muzzle energy is dependent upon the factors previously listed, and velocity is highly variable depending upon the length of the barrel a projectile is fired from. Also, the muzzle energy is only an upper limit for how much energy is transmitted to the target, and the effects of a ballistic trauma depend on several other factors as well. There is wide variation in commercial ammunition. A bullet fired from a .357 Magnum handgun can achieve a muzzle energy of . A bullet fired from the same gun might only achieve of muzzle energy, depending upon the manufacturer of the cartridge. Some .45 Colt +P ammunition can produce of muzzle energy.
Legal requirements on muzzle energy
Many parts of the world use muzzle energy to classify guns into categories that require different categories of licence. In general guns that have the potential to be more dangerous have tighter controls, while those of minimal energy, such as small air pistols or air rifles, require little more than user registration, or in some countries have no restrictions at all. Overview of gun laws by nation indicates the various approaches taken. Firearms regulation in the United Kingdom is a complicated example, but is demarked by muzzle energy as well as barrel length and ammunition diameter.
Some jurisdictions also stipulate minimum muzzle energies for safe hunting. For example, in Denmark rifle ammunition used for hunting the largest types of game there such as red deer must have a kinetic energy E100 (i.e.: at range) of at least and a bullet mass of at least or alternatively an E100 of at least and a bullet mass of at least . Namibia specifies three levels of minimum muzzle energy for hunting depending on the size of the game, for game such as springbok, for game such as hartebeest, and for Big Five game, together with a minimum caliber of .
In Germany, airsoft guns with a muzzle energy of no more than are exempt from the gun law, while air guns with a muzzle energy of no more than may be acquired without a firearms license.
Mainland China uses a distinct concept of "muzzle ratio kinetic energy" (), which is the quotient (ratio) of the muzzle energy divided by the bore cross-sectional area, to distinguish genuine guns from "imitation" replicas such as toy guns. The Ministry of Public Security unilaterally introduced the concept in 2008 leading up to the Beijing Olympic Games, dictating that anything over 1.8 J/cm2 is defined as a real firearm. This caused many existing toy gun products on the Chinese market (particularly airsoft) to become illegal overnight, as almost all airsoft guns shooting a standard pellet have a muzzle velocity over , which translates to more than of muzzle energy, or 2.0536 J/cm2 of "ratio energy". For comparison, a standard baseball changeup thrown at has 1.951 J/cm2 of "ratio energy", which also exceeds the 1.8 J/cm2 threshold for a real firearm, while a fastball can reach over 3.5 J/cm2, or nearly double the level of a real firearm. The subsequent crackdowns by local law enforcement led to many seizures, arrests and prosecutions of individual owners for "trafficking and possession of illegal weapons" over the years, for weapons that were previously permitted.
See also
Free recoil
Muzzle velocity
Power factor (shooting sports)
Resources
Edward F. Obert, Thermodynamics, McGraw-Hill Book Co., 1948.
Mc Graw-Hill encyclopedia of Science and Technology, volume ebe-eye and ice-lev, 9th Edition, Mc Graw-Hill, 2002.
References
Ammunition
Ballistics | Muzzle energy | [
"Physics"
] | 953 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |
1,131,721 | https://en.wikipedia.org/wiki/Neutron%20flux | The neutron flux is a scalar quantity used in nuclear physics and nuclear reactor physics. It is the total distance travelled by all free neutrons per unit time and volume. Equivalently, it can be defined as the number of neutrons travelling through a small sphere of radius in a time interval, divided by a maximal cross section of the sphere (the great disk area, ) and by the duration of the time interval. The dimension of neutron flux is and the usual unit is cm−2s−1 (reciprocal square centimetre times reciprocal second).
The neutron fluence is defined as the neutron flux integrated over a certain time period. So its dimension is [length]−2 and its usual unit is cm−2 (reciprocal square centimetre). An older term used instead of cm−2 was "n.v.t." (neutrons, velocity, time).
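Equivalently, in the notation usually used for these quantities (a standard identity, with n taken to be the neutron number density and v the mean neutron speed), the flux and the fluence can be written
\Phi = n\,v, \qquad \text{fluence} = \int_{t_0}^{t_1} \Phi(t)\,dt .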
Natural neutron flux
Neutron flux in asymptotic giant branch stars and in supernovae is responsible for most of the natural nucleosynthesis producing elements heavier than iron. In stars there is a relatively low neutron flux on the order of 10^5 to 10^11 cm−2 s−1, resulting in nucleosynthesis by the s-process (slow neutron-capture process). By contrast, after a core-collapse supernova, there is an extremely high neutron flux, on the order of 10^32 cm−2 s−1, resulting in nucleosynthesis by the r-process (rapid neutron-capture process).
Earth atmospheric neutron flux, apparently from thunderstorms, can reach levels of 3·10^−2 to 9·10^+1 cm−2 s−1. However, recent results (considered invalid by the original investigators) obtained with unshielded scintillation neutron detectors show a decrease in the neutron flux during thunderstorms. Recent research appears to support lightning generating 10^13–10^15 neutrons per discharge via photonuclear processes.
Artificial neutron flux
Artificial neutron flux refers to neutron flux which is man-made, either as byproducts from weapons or nuclear energy production or for a specific application such as from a research reactor or by spallation. A flow of neutrons is often used to initiate the fission of unstable large nuclei. The additional neutron(s) may cause the nucleus to become unstable, causing it to decay (split) to form more stable products. This effect is essential in fission reactors and nuclear weapons.
Within a nuclear fission reactor, the neutron flux is the primary quantity measured to control the reaction inside. The flux shape is the term applied to the density or relative strength of the flux as it moves around the reactor. Typically the strongest neutron flux occurs in the middle of the reactor core, becoming lower toward the edges. The higher the neutron flux the greater the chance of a nuclear reaction occurring as there are more neutrons going through an area per unit time.
Reactor vessel wall neutron fluence
A reactor vessel of a typical nuclear power plant (PWR) accumulates in 40 years (32 full reactor years) of operation a neutron fluence of approximately 6.5×10^19 cm−2 (E > 1 MeV). Neutron flux causes reactor vessels to suffer from neutron embrittlement, and it is a major problem in thermonuclear fusion devices such as ITER and other magnetic-confinement D-T reactors, where fast (originally 14.06 MeV) neutrons damage equipment, resulting in short equipment lifetimes, huge costs, and large volumes of radioactive waste.
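A quick consistency check of the quoted figure (treating the 32 full reactor years as continuous full-power operation; this is an illustrative back-of-the-envelope estimate, not a value from the source):
# Average fast-neutron flux implied by a 6.5e19 cm^-2 fluence over 32 full-power years.
fluence_cm2 = 6.5e19                      # quoted vessel fluence (E > 1 MeV)
seconds_per_year = 365.25 * 24 * 3600
full_power_seconds = 32 * seconds_per_year

average_flux = fluence_cm2 / full_power_seconds
print(f"{average_flux:.2e} n cm^-2 s^-1")  # roughly 6e10 n cm^-2 s^-1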
See also
Neutron radiation
Neutron transport
References
Flux
Physical quantities | Neutron flux | [
"Physics",
"Mathematics"
] | 718 | [
"Physical phenomena",
"Quantity",
"Physical quantities",
"Physical properties"
] |
18,026,663 | https://en.wikipedia.org/wiki/Non-contact%20wafer%20testing | Non contact wafer testing is an alternative to mechanical probing of ICs during the wafer testing step in semiconductor device fabrication.
Traditional (contact) wafer testing
Probing ICs while they are still on the wafer normally requires that contact be made between the automatic test equipment (ATE) and IC. This contact is usually made with some form of mechanical probe. A set of mechanical probes will often be arranged together on a probe card, which is attached to the wafer prober. The wafer is lifted by the wafer prober until metal pads on one or more ICs on the wafer make physical contact with the probes. A certain amount of over-travel is required after the first probe makes contact with the wafer, for two reasons:
to guarantee that all probes have made contact (to account for non-planarity of the wafer)
to break through the thin oxidized layer (if the metal pad is Aluminum) on the pad
There are numerous types of mechanical probes available commercially: their shape can be in the form of a cantilever, spring, or membrane, and they can be bent into shape, stamped, or made by microelectromechanical systems processing.
Using mechanical probes has certain drawbacks:
mechanical probing can damage the circuits under the probe pad on the IC
repeated probing can damage the probe pad on the IC, making further probing of that IC impossible
the probe card may be damaged from repeated contact, or become contaminated with debris created by contact with the wafer
the probe will act as a circuit and affect the results of the test. For this reason, the tests performed at wafer sort cannot always be identical and as extensive as those performed at the final device test after packaging is complete
since the probe pads are typically on the perimeter of the IC, the IC can soon become pad-limited. Shrinking pad sizes makes design and manufacturing of smaller and more accurate probes a challenge
Non-contact (wireless) wafer testing
Alternatives to mechanical probing of ICs have been explored by various groups (Slupsky, Moore, Scanimetrics, Kuroda). These methods use tiny RF antennae (similar to RFID tags, but on a much smaller scale) to replace both the mechanical probes and the metal probe pads. If the antennae on the probe card and IC are properly aligned, then a transmitter on the probe card can send data wirelessly to the receiver on the IC via RF communication.
This method has several advantages:
no damage is done to circuits, pads, nor probe cards
no debris is created
probe pads are no longer required, on the periphery of the IC
wireless probe points can be placed anywhere on the IC, not just on the periphery
repeated probing is possible without damaging the probe points
faster data rates are possible than with mechanical probes
the wafer prober does not have to exert any force on the probe location (in traditional probing this can be a significant amount of force, when hundreds or thousands of probes are used)
See also
Wafer testing
References
Semiconductor device fabrication
Cleanroom technology | Non-contact wafer testing | [
"Chemistry",
"Materials_science"
] | 624 | [
"Semiconductor device fabrication",
"Microtechnology",
"Cleanroom technology"
] |
18,035,176 | https://en.wikipedia.org/wiki/TOPO%20cloning | Topoisomerase-based cloning (TOPO cloning) is a molecular biology technique in which DNA fragments are cloned into specific vectors without the requirement for DNA ligases. Taq polymerase has a nontemplate-dependent terminal transferase activity that adds a single deoxyadenosine (A) to the 3'-end of the PCR products. This characteristic is exploited in "sticky end" TOPO TA cloning. For "blunt end" TOPO cloning, the recipient vector does not have overhangs and blunt-ended DNA fragments can be cloned.
Principle
The technique utilizes the inherent biological activity of DNA topoisomerase I. The biological role of topoisomerase is to cleave and rejoin supercoiled DNA ends to facilitate replication. Vaccinia virus topoisomerase I specifically recognizes the DNA sequence 5´-(C/T)CCTT-3'. During replication, the enzyme digests DNA specifically at this sequence, unwinds the DNA and re-ligates it at the 3' phosphate group of the thymidine base.
The vectors in commercially available TOPO kits carry the topoisomerase site embedded in a beta-galactosidase cassette, allowing blue-white screening. Vector ends that self-assemble without an insert produce blue colonies, which do not need to be picked and sequenced when searching for potential positive clones.
"Sticky end" TOPO TA cloning
TOPO vectors are designed in such a way that they carry this specific sequence 5´-(C/T)CCTT-3' at the two linear ends. The linear vector DNA already has the topoisomerase enzyme covalently attached to both of its strands' free 3' ends. This is then mixed with PCR products. When the free 5' ends of the PCR product strands attach to the topoisomerase 3' end of each vector strand, the strands are covalently linked by the already bound topoisomerase. This reaction proceeds efficiently when this solution is incubated at room temperature with the required salt. Different types of vectors are used for cloning fragments amplified by either Taq or Pfu polymerase as Taq polymerase (unlike Pfu) leaves an extra "A" nucleotide at the 3'end during amplification.
The TA TOPO cloning technique relies on the ability of adenine (A) and thymine (T) (complementary base pairs) on different DNA fragments to hybridize and, in the presence of ligase or topoisomerase, become ligated together.
The insert is created by PCR using Taq DNA polymerase, a polymerase that lacks 3' to 5' proofreading activity and with a high probability adds a single, 3'-adenine overhang to each end of the PCR product. It is best if the PCR primers have guanines at the 5' end as this maximizes probability of Taq DNA polymerase adding the terminal adenosine overhang. Thermostable polymerases containing extensive 3´ to 5´ exonuclease activity should not be used as they do not leave the 3´ adenine-overhangs.
The target vector is linearized and cut with a blunt-end restriction enzyme. This vector is then tailed with dideoxythymidine triphosphate (ddTTP) using terminal transferase. It is important to use ddTTP to ensure the addition of only one T residue. This tailing leaves the vector with a single 3'-overhanging thymine residue on each blunt end.
"Blunt end" TOPO cloning
Polymerases (such as Phusion) or restriction enzymes that produce blunt ends can also be used for TOPO cloning. Rather than relying on sticky ends, the blunt end TOPO vector has blunt ends where the topoisomerase molecules are bound. Commercial kits, such as the Zero Blunt® Cloning Kit from Invitrogen, are available.
References
Cloning
Molecular biology techniques | TOPO cloning | [
"Chemistry",
"Engineering",
"Biology"
] | 838 | [
"Cloning",
"Molecular biology techniques",
"Genetic engineering",
"Molecular biology"
] |
2,401,526 | https://en.wikipedia.org/wiki/Molecular%20Borromean%20rings | In chemistry, molecular Borromean rings are an example of a mechanically-interlocked molecular architecture in which three macrocycles are interlocked in such a way that breaking any macrocycle allows the others to dissociate. They are the smallest examples of Borromean rings. The synthesis of molecular Borromean rings was reported in 2004 by the group of J. Fraser Stoddart. The so-called Borromeate is made up of three interpenetrated macrocycles formed through templated self assembly as complexes of zinc.
The synthesis of the macrocyclic systems involves the self-assembly of two organic building blocks: 2,6-diformylpyridine (an aromatic compound with two aldehyde groups positioned ortho to the nitrogen atom of the pyridine ring) and a symmetric diamine containing a meta-substituted 2,2'-bipyridine group. Zinc acetate is added as the template for the reaction, resulting in one zinc cation in each of the six pentacoordinate complexation sites. Trifluoroacetic acid (TFA) is added to catalyse the imine bond-forming reactions. The preparation of the tri-ring Borromeate involves a total of 18 precursor molecules and is only possible because the building blocks self-assemble through 12 aromatic pi-pi interactions and 30 zinc to nitrogen dative bonds. Because of these interactions, the Borromeate is thermodynamically the most stable reaction product out of potentially many others. As a consequence of all the reactions taking place being equilibria, the Borromeate is the predominant reaction product.
Reduction with sodium borohydride in ethanol affords the neutral Borromeand. With the zinc removed, the three macrocycles are no longer chemically bonded but remain "mechanically entangled in such a way that if only one of the rings is removed the other two can part company." The Borromeand is thus a true Borromean system, as cleavage of just one imine bond (to an amine and an acetal) in this structure breaks the mechanical bond between the three constituent macrocycles, releasing the other two individual rings. A Borromeand differs from a [3]catenane in that none of its three macrocycles is concatenated with another; if one bond in a [3]catenane is broken and a cycle removed, a [2]catenane can remain.
Organic synthesis of this seemingly complex compound is in reality fairly simple; for this reason, the Stoddart group has suggested it as a gram-scale laboratory activity for undergraduate organic chemistry courses.
See also
Molecular knot
Topology (chemistry)
Dynamic covalent chemistry
References
External links
Borromean chemistry overview website
Supramolecular chemistry
Borromean rings | Molecular Borromean rings | [
"Chemistry",
"Materials_science",
"Mathematics"
] | 600 | [
"Molecular topology",
"Topology",
"nan",
"Nanotechnology",
"Supramolecular chemistry"
] |
2,401,965 | https://en.wikipedia.org/wiki/Reconfigurable%20optical%20add-drop%20multiplexer | In optical communication, a reconfigurable optical add-drop multiplexer (ROADM) is a form of optical add-drop multiplexer that adds the ability to remotely switch traffic from a wavelength-division multiplexing (WDM) system at the wavelength layer. This is achieved through the use of a wavelength selective switching module. This allows individual or multiple wavelengths carrying data channels to be added and/or dropped from a transport fiber without the need to convert the signals on all of the WDM channels to electronic signals and back again to optical signals.
The main advantages of the ROADM are:
The planning of entire bandwidth assignment need not be carried out during initial deployment of a system. The configuration can be done as and when required without affecting traffic already passing the ROADM.
ROADM allows for remote configuration and reconfiguration.
In a ROADM, it is not clear beforehand where a signal may eventually be routed, so the power of these signals must be balanced; ROADMs allow for automatic power balancing.
ROADM functionality originally appeared in long-haul dense wavelength division multiplexing (DWDM) equipment, but by 2005, it began to appear in metro optical systems because of the need to build out major metropolitan networks in order to deal with the traffic driven by the increasing demand for packet-based services.
The switching or reconfiguration functions of a ROADM can be achieved using a variety of switching technologies including microelectromechanical systems (MEMS), liquid crystal, thermo optic and beam-steering switches in planar waveguide circuits, and tunable optical filter technology. MEMS and liquid crystal technologies are the most widely used. ROADMs were first introduced in 2002 with the introduction of DWDM. ROADMs can be directionless, colorless, contentionless, and gridless. Directionless means that any wavelength can be dropped from any fiber, and any wavelength or signal can be sent to any port in the ROADM. Colorless implies every port in the ROADM can handle or accept any wavelength or color of light. Contentionless means several identical wavelengths or signals can be dropped from several ports at the same time. Gridless means that the ROADM can handle frequencies or signals that aren't precisely 50 GHz apart from each other. This is relevant because 50 GHz spacing has been traditionally used in fiber optic communications.
See also
Optical mesh network
References
Light Reading's Heavy Reading - "ROADMs and the Future of Metro Optical Networks", May 2005
South-East Europe Fibre Infrastructure for Research and Education - Dark Fibre Lighting Technologies
Telecommunications equipment
Networking hardware | Reconfigurable optical add-drop multiplexer | [
"Engineering"
] | 533 | [
"Computer networks engineering",
"Networking hardware"
] |
2,402,393 | https://en.wikipedia.org/wiki/Clearance%20%28pharmacology%29 | In pharmacology, clearance () is a pharmacokinetic parameter representing the efficiency of drug elimination. This is the rate of elimination of a substance divided by its concentration. The parameter also indicates the theoretical volume of plasma from which a substance would be completely removed per unit time. Usually, clearance is measured in L/h or mL/min. Excretion, on the other hand, is a measurement of the amount of a substance removed from the body per unit time (e.g., mg/min, μg/min, etc.). While clearance and excretion of a substance are related, they are not the same thing. The concept of clearance was described by Thomas Addis, a graduate of the University of Edinburgh Medical School.
Substances in the body can be cleared by various organs, including the kidneys, liver, lungs, etc. Thus, total body clearance is equal to the sum clearance of the substance by each organ (e.g., renal clearance + hepatic clearance + pulmonary clearance = total body clearance). For many drugs, however, clearance is solely a function of renal excretion. In these cases, clearance is almost synonymous with renal clearance or renal plasma clearance. Each substance has a specific clearance that depends on how the substance is handled by the nephron. Clearance is a function of 1) glomerular filtration, 2) secretion from the peritubular capillaries to the nephron, and 3) reabsorption from the nephron back to the peritubular capillaries. Clearance is variable in zero-order kinetics because a constant amount of the drug is eliminated per unit time, but it is constant in first-order kinetics, because the amount of drug eliminated per unit time changes with the concentration of drug in the blood.
Clearance can refer to the volume of plasma from which the substance is removed (i.e., cleared) per unit time or, in some cases, inter-compartmental clearances can be discussed when referring to redistribution between body compartments such as plasma, muscle, and fat.
Definition
The clearance of a substance is the volume of plasma that contains the same amount of the substance as has been removed from the plasma per unit time.
When referring to the function of the kidney, clearance is considered to be the amount of liquid filtered out of the blood that gets processed by the kidneys or the amount of blood cleaned per time because it has the units of a volumetric flow rate [ volume per unit time ]. However, it does not refer to a real value; "the kidney does not completely remove a substance from the total renal plasma flow." From a mass transfer perspective and physiologically, volumetric blood flow (to the dialysis machine and/or kidney) is only one of several factors that determine blood concentration and removal of a substance from the body. Other factors include the mass transfer coefficient, dialysate flow and dialysate recirculation flow for hemodialysis, and the glomerular filtration rate and the tubular reabsorption rate, for the kidney. A physiologic interpretation of clearance (at steady-state) is that clearance is a ratio of the mass generation and blood (or plasma) concentration.
Its definition follows from the differential equation that describes exponential decay and is used to model kidney function and hemodialysis machine function: V·dC/dt = −K·C + ṁ
Where:
ṁ is the mass generation rate of the substance - assumed to be a constant, i.e. not a function of time (equal to zero for exogenous (foreign) substances/drugs) [mmol/min] or [mol/s]
t is dialysis time or time since injection of the substance/drug [min] or [s]
V is the volume of distribution or total body water [L] or [m3]
K is the clearance [mL/min] or [m3/s]
C is the concentration [mmol/L] or [mol/m3] (in the United States often [mg/mL])
From the above definitions it follows that dC/dt is the first derivative of concentration with respect to time, i.e. the change in concentration with time.
It is derived from a mass balance.
Clearance of a substance is sometimes expressed as the inverse of the time constant that describes its removal rate from the body divided by its volume of distribution (or total body water).
In steady-state, it is defined as the mass generation rate of a substance (which equals the mass removal rate) divided by its concentration in the blood.
Clearance, half-life and volume of distribution
There is an important relationship between clearance, elimination half-life and distribution volume.
The elimination rate constant kel of a drug is equivalent to total clearance divided by the distribution volume, kel = Cltot/Vd
(note the usage of Cl rather than the K used above, to avoid confusion). But kel is also equivalent to ln(2) divided by the elimination half-life t1/2, i.e. kel = ln(2)/t1/2. Thus, t1/2 = ln(2)·Vd/Cltot. This means, for example, that an increase in total clearance results in a decrease in the elimination half-life, provided the distribution volume is constant.
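A small numerical illustration of this relationship (the clearance and distribution-volume values below are arbitrary examples, not reference values):
import math

def elimination_half_life(clearance_L_per_h, volume_of_distribution_L):
    # t_1/2 = ln(2) * Vd / Cl, from k_el = Cl / Vd = ln(2) / t_1/2
    return math.log(2) * volume_of_distribution_L / clearance_L_per_h

print(elimination_half_life(5.0, 40.0))    # ~5.5 h
print(elimination_half_life(10.0, 40.0))   # doubling clearance halves the half-life: ~2.8 h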
Effect of plasma protein binding
For substances that exhibit substantial plasma protein binding, clearance is generally dependent on the total concentration (free + protein-bound) and not the free concentration.
Most plasma substances have primarily their free concentrations regulated, which thus remain the same, so extensive protein binding increases the total plasma concentration (free + protein-bound). This decreases clearance compared to what would have been the case if the substance did not bind to protein. The mass removal rate, however, is the same, because it depends only on the concentration of free substance and is independent of plasma protein binding. This holds even though plasma proteins increase in concentration in the distal renal glomerulus as plasma is filtered into Bowman's capsule: the relative increases in the concentrations of substance-protein complex and unoccupied protein are equal, so there is no net binding or dissociation of substances from plasma proteins. The plasma concentration of free substance therefore remains constant throughout the glomerulus, as it would also have been without any plasma protein binding.
In other sites than the kidneys, however, where clearance is made by membrane transport proteins rather than filtration, extensive plasma protein binding may increase clearance by keeping concentration of free substance fairly constant throughout the capillary bed, inhibiting a decrease in clearance caused by decreased concentration of free substance through the capillary.
Derivation of equation
Equation 1 is derived from a mass balance:
where:
is a period of time
is the change in mass of the toxin in the body during that period
is the toxin intake rate
is the toxin removal rate
is the toxin generation rate
In words, the above equation states:
The change in the mass of a toxin within the body () during some time is equal to the toxin intake plus the toxin generation minus the toxin removal.
Since
and
Equation A1 can be rewritten as:
If one lumps the intake and generation terms together and divides by Δt, the result is a difference equation:
If one applies the limit Δt → 0, one obtains a differential equation:
Using the product rule this can be rewritten as:
If one assumes that the volume change is not significant, i.e. dV/dt ≈ 0, the result is Equation 1:
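Written out in symbols, the chain of steps above can be reconstructed as follows (the symbol choices for the mass terms, and the shorthand ṁ for the combined intake and generation rates, are assumptions made for readability; the labels A1 and 1 follow the references in the text):
\Delta m_{\mathrm{body}} = \left(\dot m_{\mathrm{in}} + \dot m_{\mathrm{gen}} - \dot m_{\mathrm{out}}\right)\Delta t \qquad \text{(A1)}
m_{\mathrm{body}} = C\,V, \qquad \dot m_{\mathrm{out}} = K\,C, \qquad \dot m \equiv \dot m_{\mathrm{in}} + \dot m_{\mathrm{gen}}
\frac{\Delta (C\,V)}{\Delta t} = \dot m - K\,C \;\longrightarrow\; \frac{d(C\,V)}{dt} = \dot m - K\,C \quad (\Delta t \to 0)
V\,\frac{dC}{dt} + C\,\frac{dV}{dt} = \dot m - K\,C \;\longrightarrow\; V\,\frac{dC}{dt} = \dot m - K\,C \quad (dV/dt \approx 0) \qquad \text{(1)}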
Solution to the differential equation
The general solution of the above differential equation (1) is: C = ṁ/K + (Co − ṁ/K)·e^(−K·t/V)
Where:
Co is the concentration at the beginning of dialysis or the initial concentration of the substance/drug (after it has distributed) [mmol/L] or [mol/m3]
e is the base of the natural logarithm
Steady-state solution
The solution to the above differential equation (9) at time infinity (steady state) is: C∞ = ṁ/K
The above equation (10a) can be rewritten as: K = ṁ/C∞
The above equation () makes clear the relationship between mass removal and clearance. It states that (with a constant mass generation) the concentration and clearance vary inversely with one another. If applied to creatinine (i.e. creatinine clearance), it follows from the equation that if the serum creatinine doubles the clearance halves and that if the serum creatinine quadruples the clearance is quartered.
Measurement of renal clearance
Renal clearance can be measured with a timed collection of urine and an analysis of its composition with the aid of the following equation (which follows directly from the derivation above): K = (CU × Q) / CB
Where:
K is the clearance [mL/min]
CU is the urine concentration [mmol/L] (in the USA often [mg/mL])
Q is the urine flow (volume/time) [mL/min] (often [mL/24 h])
CB is the plasma concentration [mmol/L] (in the USA often [mg/mL])
When the substance "C" is creatinine, an endogenous chemical that is excreted only by filtration, the clearance is an approximation of the glomerular filtration rate. Inulin clearance is less commonly used to precisely determine glomerular filtration rate.
Note: the above equation is valid only for the steady-state condition. If the substance being cleared is not at a constant plasma concentration (i.e., not at steady state), K must be obtained from the (full) solution of the differential equation.
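A worked example of this calculation (all of the urine and plasma values below are invented for illustration; they are not reference values):
# Renal clearance from a timed urine collection: K = (C_U * Q) / C_B.
def renal_clearance(urine_conc, urine_flow_ml_min, plasma_conc):
    # urine_conc and plasma_conc must be in the same units (e.g. mg/mL or mmol/L)
    return urine_conc * urine_flow_ml_min / plasma_conc

# Hypothetical creatinine example: urine 1.25 mg/mL, urine flow 1.0 mL/min, plasma 0.01 mg/mL.
print(renal_clearance(1.25, 1.0, 0.01))   # 125 mL/min, i.e. an estimate of the GFR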
See also
References
Further reading
Nephrology
Pharmacokinetic metrics
Temporal rates
de:Clearance (Medizin) | Clearance (pharmacology) | [
"Physics"
] | 1,943 | [
"Temporal quantities",
"Temporal rates",
"Physical quantities"
] |
2,403,085 | https://en.wikipedia.org/wiki/Lanczos%20tensor | The Lanczos tensor or Lanczos potential is a rank 3 tensor in general relativity that generates the Weyl tensor. It was first introduced by Cornelius Lanczos in 1949. The theoretical importance of the Lanczos tensor is that it serves as the gauge field for the gravitational field in the same way that, by analogy, the electromagnetic four-potential generates the electromagnetic field.
Definition
The Lanczos tensor can be defined in a few different ways. The most common modern definition is through the Weyl–Lanczos equations, which demonstrate the generation of the Weyl tensor from the Lanczos tensor. These equations, presented below, were given by Takeno in 1964. The way that Lanczos introduced the tensor originally was as a Lagrange multiplier on constraint terms studied in the variational approach to general relativity. Under any definition, the Lanczos tensor H exhibits the following symmetries:
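In the index notation most commonly used for the Lanczos potential (standard conventions assumed), these are usually stated as antisymmetry in the first pair of indices and vanishing of the totally antisymmetric part:
H_{abc} = -H_{bac}, \qquad H_{[abc]} = 0 .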
The Lanczos tensor always exists in four dimensions but does not generalize to higher dimensions. This highlights the specialness of four dimensions. Note further that the full Riemann tensor cannot in general be derived from derivatives of the Lanczos potential alone. The Einstein field equations must provide the Ricci tensor to complete the components of the Ricci decomposition.
The Curtright field has gauge-transformation dynamics similar to those of the Lanczos tensor. However, the Curtright field exists in arbitrary dimensions greater than four.
Weyl–Lanczos equations
The Weyl–Lanczos equations express the Weyl tensor entirely as derivatives of the Lanczos tensor:
where is the Weyl tensor, the semicolon denotes the covariant derivative, and the subscripted parentheses indicate symmetrization. Although the above equations can be used to define the Lanczos tensor, they also show that it is not unique but rather has gauge freedom under an affine group. If is an arbitrary vector field, then the Weyl–Lanczos equations are invariant under the gauge transformation
where the subscripted brackets indicate antisymmetrization. An often convenient choice is the Lanczos algebraic gauge, which sets the trace of the Lanczos tensor to zero. The gauge can be further restricted through the Lanczos differential gauge, which sets its divergence to zero. These gauge choices reduce the Weyl–Lanczos equations to the simpler form
Wave equation
The Lanczos potential tensor satisfies a wave equation
where □ is the d'Alembert operator and the term built from covariant derivatives of the Ricci tensor
is known as the Cotton tensor. Since the Cotton tensor depends only on covariant derivatives of the Ricci tensor, it can perhaps be interpreted as a kind of matter current. The additional self-coupling terms have no direct electromagnetic equivalent. These self-coupling terms, however, do not affect the vacuum solutions, where the Ricci tensor vanishes and the curvature is described entirely by the Weyl tensor. Thus in vacuum, the Einstein field equations are equivalent to the homogeneous wave equation in perfect analogy to the vacuum wave equation of the electromagnetic four-potential. This shows a formal similarity between gravitational waves and electromagnetic waves, with the Lanczos tensor well-suited for studying gravitational waves.
In the weak field approximation, where the metric is close to flat, a convenient form for the Lanczos tensor in the Lanczos gauge is
Example
The most basic nontrivial case for expressing the Lanczos tensor is, of course, for the Schwarzschild metric. The simplest, explicit component representation in natural units for the Lanczos tensor in this case is
with all other components vanishing up to symmetries. This form, however, is not in the Lanczos gauge. The nonvanishing terms of the Lanczos tensor in the Lanczos gauge are
It is further possible to show, even in this simple case, that the Lanczos tensor cannot in general be reduced to a linear combination of the spin coefficients of the Newman–Penrose formalism, which attests to the Lanczos tensor's fundamental nature. Similar calculations have been used to construct arbitrary Petrov type D solutions.
See also
Bach tensor
Ricci calculus
Schouten tensor
tetradic Palatini action
Self-dual Palatini action
References
External links
Peter O'Donnell, Introduction To 2-Spinors In General Relativity. World Scientific, 2003.
Gauge theories
Differential geometry
Tensors
Tensors in general relativity
1949 introductions | Lanczos tensor | [
"Physics",
"Engineering"
] | 873 | [
"Tensors in general relativity",
"Tensors",
"Tensor physical quantities",
"Physical quantities"
] |
2,403,235 | https://en.wikipedia.org/wiki/Strained%20silicon | Strained silicon is a layer of silicon in which the silicon atoms are stretched beyond their normal interatomic distance. This can be accomplished by putting the layer of silicon over a substrate of silicon–germanium (). As the atoms in the silicon layer align with the atoms of the underlying silicon germanium layer (which are arranged a little further apart, with respect to those of a bulk silicon crystal), the links between the silicon atoms become stretched, thereby leading to strained silicon. Moving these silicon atoms further apart reduces the atomic forces that interfere with the movement of electrons through the transistors and thus improved mobility, resulting in better chip performance and lower energy consumption. These electrons can move 70% faster allowing strained silicon transistors to switch 35% faster.
More recent advances include deposition of strained silicon using metalorganic vapor-phase epitaxy (MOVPE) with metalorganics as starting sources, e.g. silicon sources (silane and dichlorosilane) and germanium sources (germane, germanium tetrachloride, and isobutylgermane).
More recent methods of inducing strain include doping the source and drain with lattice mismatched atoms such as germanium and carbon. Germanium doping of up to 20% in the P-channel MOSFET source and drain causes uniaxial compressive strain in the channel, increasing hole mobility. Carbon doping as low as 0.25% in the N-channel MOSFET source and drain causes uniaxial tensile strain in the channel, increasing electron mobility. Covering the NMOS transistor with a highly stressed silicon nitride layer is another way to create uniaxial tensile strain. As opposed to wafer-level methods of inducing strain on the channel layer prior to MOSFET fabrication, the aforementioned methods use strain induced during the MOSFET fabrication itself to alter the carrier mobility in the transistor channel.
History
The idea of using germanium to strain silicon for the purpose of improving field-effect transistors appears to go back at least as far as 1991.
In 2000, an MIT report investigated theoretical and experimental hole mobility in SiGe heterostructure-based PMOS devices.
In 2003, IBM was reported to be among primary proponents of the technology.
In the early 2000s, Intel featured strained silicon technology in its 90 nm x86 Pentium microprocessor series. In 2005, Intel was sued by the AmberWave company for alleged patent infringement related to strained silicon technology.
See also
Strain engineering
Hall effect
Piezo effect
References
Further reading
Development of New Germanium Precursors for SiGe Epitaxy; Presentation at 210th ECS Meeting (SiGe Symposium), Cancun, Mexico, October 29, 2006.
Germanium
Silicon, Strained
Semiconductor material types
Silicon | Strained silicon | [
"Chemistry"
] | 579 | [
"Semiconductor material types",
"Semiconductor materials",
"Group IV semiconductors"
] |
2,403,542 | https://en.wikipedia.org/wiki/Intense%20Pulsed%20Neutron%20Source | Intense Pulsed Neutron Source (IPNS) was a scientific user facility at Argonne National Laboratory for neutron scattering research. The IPNS was the world's first pulsed neutron source open to external users and started operations in 1981. The facility ceased operation in January, 2008 after the omnibus spending bill forced Basic Energy Sciences (BES) to cease IPNS operations.
References
Argonne National Laboratory
Neutron facilities | Intense Pulsed Neutron Source | [
"Physics",
"Materials_science"
] | 83 | [
"Materials science stubs",
"Condensed matter stubs",
"Condensed matter physics"
] |
4,463,475 | https://en.wikipedia.org/wiki/Gas%20kinetics | Gas kinetics is a science in the branch of fluid dynamics, concerned with the study of motion of gases and its effects on physical systems. Based on the principles of fluid mechanics and thermodynamics, gas dynamics arises from the studies of gas flows in transonic and supersonic flights. To distinguish itself from other sciences in fluid dynamics, the studies in gas dynamics are often defined with gases flowing around or within physical objects at speeds comparable to or exceeding the speed of sound and causing a significant change in temperature and pressure. Some examples of these studies include but are not limited to: choked flows in nozzles and valves, shock waves around jets, aerodynamic heating on atmospheric reentry vehicles and flows of gas fuel within a jet engine. At the molecular level, gas dynamics is a study of the kinetic theory of gases, often leading to the study of gas diffusion, statistical mechanics, chemical thermodynamics and non-equilibrium thermodynamics. Gas dynamics is synonymous with aerodynamics when the gas field is air and the subject of study is flight. It is highly relevant in the design of aircraft and spacecraft and their respective propulsion systems.
History
Progress in gas dynamics coincides with the developments of transonic and supersonic flight. As aircraft began to travel faster, the density of air began to change, considerably increasing the air resistance as the air speed approached the speed of sound. The phenomenon was later identified in wind tunnel experiments as an effect caused by the formation of shock waves around the aircraft. Major advances in describing this behavior were made during and after World War II, and the new understanding of compressible and high-speed flows became the theory of gas dynamics.
As the picture of gases as small particles in Brownian motion became widely accepted, and as numerous quantitative studies verified that the macroscopic properties of gases, such as temperature, pressure and density, are the results of collisions of moving particles, the kinetic theory of gases became an increasingly integrated part of gas dynamics. Modern books and classes on gas dynamics often begin with an introduction to kinetic theory. The advent of molecular modeling in computer simulation has further made kinetic theory a highly relevant subject in today's research on gas dynamics.
Introductory terminology
Compressibility
Mach number
Diffusion
Gas dynamics considers the average distance a gas molecule travels between collisions (the mean free path) without ignoring the structure in which the molecules are contained. The field requires a great amount of knowledge and practical use of the ideas of the kinetic theory of gases, and it links the kinetic theory of gases with solid-state physics through the study of how gases react with surfaces.
Definition of a fluid
Fluids are substances that cannot permanently resist shear stress; under an applied stress they deform continuously, whereas a solid deforms by a finite amount and tends to remain at equilibrium under a great deal of stress. Fluids include both liquids and gases because the intermolecular forces within a fluid are much weaker than those within a solid. For a liquid, the density changes only by a small percentage as pressure is increased. For a gas, the density changes greatly with the applied pressure, in accordance with the equation of state for gases (p = ρRT). In the study of the flow of liquids, the term used to describe this small change in density is incompressible flow. In the study of the flow of gases, flow with rapid density changes caused by pressure changes is called compressible flow.
Real gases
Real gases are characterized by their compressibility factor (z) in the equation PV = znRT. When the pressure P is plotted as a function of the volume V for a series of fixed temperatures T, the isotherms approach the hyperbolic relationship exhibited by ideal gases as the temperature becomes very high. A critical point is reached where the slope of an isotherm equals zero; at this point the fluid can change state between a liquid and a vapor. The properties of ideal gases include viscosity, thermal conductivity, and diffusion.
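As a worked illustration of the relation PV = znRT, the following sketch (with hypothetical measurement values) computes the compressibility factor z, which equals 1 for an ideal gas and deviates from 1 for a real gas:

```python
# Minimal sketch: compressibility factor z = pV / (nRT); z = 1 for an ideal gas,
# while deviations from 1 quantify real-gas behaviour.

R = 8.314  # universal gas constant, J/(mol*K)

def compressibility_factor(p, v, n, t):
    """Return z for pressure p [Pa], volume v [m^3], amount n [mol], temperature t [K]."""
    return p * v / (n * R * t)

# Hypothetical measurement: 1 mol of gas occupying 4.5e-4 m^3 at 5 MPa and 300 K.
z = compressibility_factor(p=5.0e6, v=4.5e-4, n=1.0, t=300.0)
print(f"z = {z:.3f}")  # z < 1 here, indicating attractive intermolecular forces dominate
```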
Viscosity
The viscosity of gases results from the transfer of momentum between molecules as they pass each other from layer to layer. As molecules in adjacent layers pass one another, the momentum of the faster-moving molecule speeds up the slower-moving molecule, and the momentum of the slower-moving molecule slows down the faster-moving one. The molecules continue to interact until this frictional drag causes their velocities to equalize.
Thermal conductivity
The thermal conductivity of a gas can be analyzed in a similar way to its viscosity, except that the bulk gas is stationary and only the temperature varies from place to place. Thermal conductivity describes the amount of heat transported across a specific area in a specific time. Heat always flows in the direction opposite to the temperature gradient.
Diffusion
Diffusion of gases is analyzed for gases that are macroscopically stationary and at uniform total pressure and temperature. Diffusion is the change in the concentrations of the two gases that occurs as each moves down its concentration gradient; it is the transport of mass over a period of time.
Shock waves
The shock wave may be described as a compression front in a supersonic flow field, and the flow process across the front results in an abrupt change in fluid properties. The thickness of the shock wave is comparable to the mean free path of the gas molecules in the flow field. In other words, shock is a thin region where large gradients in temperature, pressure and velocity occur, and where the transport phenomena of momentum and energy are important. The normal shock wave is a compression front normal to the flow direction. However, in a wide variety of physical situations, a compression wave inclined at an angle to the flow occurs. Such a wave is called an oblique shock. Indeed, all naturally occurring shocks in external flows are oblique.
Stationary normal shock waves
A stationary normal shock wave is one whose front is perpendicular (normal) to the flow direction and which does not move relative to the observer. For example, when a piston moves at a constant rate inside a tube, sound waves travel down the tube; as the piston continues to move, the waves coalesce and compress the gas inside the tube. The calculations associated with normal shock waves vary with the geometry of the tube in which they are contained: features such as converging-diverging nozzles and tubes of changing cross-sectional area affect quantities such as volume, pressure, and Mach number.
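As an illustration of such calculations, the following sketch evaluates the standard textbook normal-shock relations for a calorically perfect gas (these relations are not taken from this article; the upstream Mach number and ratio of specific heats are example inputs):

```python
from math import sqrt

def normal_shock(m1, gamma=1.4):
    """Standard normal-shock relations for a calorically perfect gas.
    For an upstream Mach number m1 > 1, return the downstream Mach number and
    the static pressure, density and temperature ratios across the shock."""
    m2 = sqrt((1 + 0.5 * (gamma - 1) * m1 ** 2) / (gamma * m1 ** 2 - 0.5 * (gamma - 1)))
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (m1 ** 2 - 1)
    rho_ratio = (gamma + 1) * m1 ** 2 / ((gamma - 1) * m1 ** 2 + 2)
    t_ratio = p_ratio / rho_ratio
    return m2, p_ratio, rho_ratio, t_ratio

# Example: a Mach 2 flow of air (gamma = 1.4) passing through a stationary normal shock
m2, p21, rho21, t21 = normal_shock(2.0)
print(f"M2 = {m2:.3f}, p2/p1 = {p21:.2f}, rho2/rho1 = {rho21:.2f}, T2/T1 = {t21:.2f}")
# M2 is about 0.577: the flow is subsonic, hotter and at higher pressure downstream of the shock
```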
Moving normal shock waves
Unlike stationary normal shock waves, moving normal shock waves are more commonly encountered in physical situations. For example, a blunt object entering the atmosphere drives a shock through a gas that is otherwise at rest. The fundamental problem of moving normal shock waves is therefore the motion of a normal shock through motionless gas. Whether a shock is classified as moving or stationary depends on the frame of reference of the observer: viewed from the ground, the shock ahead of an object entering the atmosphere is a moving shock, but viewed from the object itself, which effectively rides on top of the shock, it appears stationary. The relations between the speeds and property ratios of moving and stationary shock waves can be calculated through the corresponding formulas.
Friction and compressible flow
Frictional forces play a role in determining the properties of compressible flow in ducts. In calculations, friction is either included or neglected. If friction is included, the analysis of compressible flow becomes more complex than if it is neglected; if friction is neglected, certain restrictions must be placed on the analysis. Including friction also limits the range of conditions to which the results of the analysis can be applied. As mentioned before, the shape of the duct, such as varying cross-sections or nozzles, affects the calculations that combine friction and compressible flow.
See also
References
Specific
General
External links
Georgia Tech web page on gas dynamics topics
Fluid dynamics | Gas kinetics | [
"Chemistry",
"Engineering"
] | 1,660 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
4,464,817 | https://en.wikipedia.org/wiki/Neurodegenerative%20disease | A neurodegenerative disease is caused by the progressive loss of neurons, in the process known as neurodegeneration. Neuronal damage may also ultimately result in their death. Neurodegenerative diseases include amyotrophic lateral sclerosis, multiple sclerosis, Parkinson's disease, Alzheimer's disease, Huntington's disease, multiple system atrophy, tauopathies, and prion diseases. Neurodegeneration can be found in the brain at many different levels of neuronal circuitry, ranging from molecular to systemic. Because there is no known way to reverse the progressive degeneration of neurons, these diseases are considered to be incurable; however research has shown that the two major contributing factors to neurodegeneration are oxidative stress and inflammation. Biomedical research has revealed many similarities between these diseases at the subcellular level, including atypical protein assemblies (like proteinopathy) and induced cell death. These similarities suggest that therapeutic advances against one neurodegenerative disease might ameliorate other diseases as well.
Within neurodegenerative diseases, it is estimated that 55 million people worldwide had dementia in 2019, and that by 2050 this figure will increase to 139 million people.
Specific disorders
The consequences of neurodegeneration can vary widely depending on the specific region affected, ranging from issues related to movement to the development of dementia.
Alzheimer's disease
Alzheimer's disease (AD) is a chronic neurodegenerative disease that results in the loss of neurons and synapses in the cerebral cortex and certain subcortical structures, resulting in gross atrophy of the temporal lobe, parietal lobe, and parts of the frontal cortex and cingulate gyrus. It is the most common neurodegenerative disease. Even though billions of dollars have been spent on the search for a treatment for Alzheimer's disease, no effective treatments have been found; in clinical trials, attempts to develop stable and effective AD therapeutic strategies have a 99.5% failure rate. Reasons for this failure rate include inappropriate drug doses, invalid target and participant selection, and inadequate knowledge of the pathophysiology of AD. Diagnosis of Alzheimer's is currently subpar, and better methods need to be utilized for various aspects of clinical diagnosis; Alzheimer's has a 20% misdiagnosis rate.
AD pathology is primarily characterized by the presence of amyloid plaques and neurofibrillary tangles. Plaques are made up of small peptides, typically 39–43 amino acids in length, called amyloid beta (also written as A-beta or Aβ). Amyloid beta is a fragment from a larger protein called amyloid precursor protein (APP), a transmembrane protein that penetrates through the neuron's membrane. APP appears to play roles in normal neuron growth, survival and post-injury repair. APP is cleaved into smaller fragments by enzymes such as gamma secretase and beta secretase. One of these fragments gives rise to fibrils of amyloid beta which can self-assemble into the dense extracellular amyloid plaques.
Parkinson's disease
Parkinson's disease (PD) is the second most common neurodegenerative disorder. It typically manifests as bradykinesia, rigidity, resting tremor and postural instability. The crude prevalence rate of PD has been reported to range from 15 per 100,000 to 12,500 per 100,000, and the incidence of PD from 15 per 100,000 to 328 per 100,000, with the disease being less common in Asian countries.
PD is primarily characterized by death of dopaminergic neurons in the substantia nigra, a region of the midbrain. The cause of this selective cell death is unknown. Notably, alpha-synuclein-ubiquitin complexes and aggregates are observed to accumulate in Lewy bodies within affected neurons. It is thought that defects in protein transport machinery and regulation, such as RAB1, may play a role in this disease mechanism. Impaired axonal transport of alpha-synuclein may also lead to its accumulation in Lewy bodies. Experiments have revealed reduced transport rates of both wild-type and two familial Parkinson's disease-associated mutant alpha-synucleins through axons of cultured neurons. Membrane damage by alpha-synuclein could be another Parkinson's disease mechanism.
The main known risk factor is age. Mutations in genes such as α-synuclein (SNCA), leucine-rich repeat kinase 2 (LRRK2), glucocerebrosidase (GBA), and tau protein (MAPT) can also cause hereditary PD or increase PD risk. While PD is the second most common neurodegenerative disorder, problems with diagnosis still persist. An impaired sense of smell is a widespread symptom of Parkinson's disease; however, some neurologists question its efficacy as a diagnostic indicator, and this assessment method is a source of controversy among medical professionals. The gut microbiome might play a role in the diagnosis of PD, and research suggests various ways that could revolutionize the future of PD treatment.
Huntington's disease
Huntington's disease (HD) is a rare autosomal dominant neurodegenerative disorder caused by mutations in the huntingtin gene (HTT). HD is characterized by loss of medium spiny neurons and astrogliosis. The first brain region to be substantially affected is the striatum, followed by degeneration of the frontal and temporal cortices. The striatum's subthalamic nuclei send control signals to the globus pallidus, which initiates and modulates motion. The weaker signals from subthalamic nuclei thus cause reduced initiation and modulation of movement, resulting in the characteristic movements of the disorder, notably chorea. Huntington's disease typically presents later in life, even though the protein that causes the disease acts from the earliest stages of life in the people it affects. Along with being a neurodegenerative disorder, HD has links to problems with neurodevelopment.
HD is caused by polyglutamine tract expansion in the huntingtin gene, resulting in the mutant huntingtin. Aggregates of mutant huntingtin form as inclusion bodies in neurons, and may be directly toxic. Additionally, they may damage molecular motors and microtubules to interfere with normal axonal transport, leading to impaired transport of important cargoes such as BDNF. Huntington's disease currently has no effective treatments that would modify the disease.
Multiple sclerosis
Multiple sclerosis (MS) is a chronic debilitating demyelinating disease of the central nervous system, caused by an autoimmune attack resulting in the progressive loss of the myelin sheath on neuronal axons. The resultant decrease in the speed of signal transduction leads to a loss of functionality that includes both cognitive and motor impairment depending on the location of the lesion. The progression of MS occurs due to episodes of increasing inflammation, which is proposed to be due to the release of antigens such as myelin oligodendrocyte glycoprotein, myelin basic protein, and proteolipid protein, causing an autoimmune response. This sets off a cascade of signaling molecules that cause T cells, B cells, and macrophages to cross the blood-brain barrier and attack myelin on neuronal axons, leading to inflammation. Further release of antigens drives subsequent degeneration, causing increased inflammation. Multiple sclerosis presents itself as a spectrum based on the degree of inflammation; a majority of patients experience early relapsing and remitting episodes of neuronal deterioration, each followed by a period of recovery. Some of these individuals may transition to a more linear progression of the disease, while about 15% begin with a progressive course at the onset of multiple sclerosis. The inflammatory response contributes to the loss of grey matter, and as a result current literature devotes itself to combatting the auto-inflammatory aspect of the disease. While both EBV infection and the HLA-DRB1*15:01 allele have been proposed as causal links to the onset of MS – they may contribute to the degree of autoimmune attack and the resultant inflammation – they do not determine the onset of MS.
Amyotrophic lateral sclerosis
Amyotrophic lateral sclerosis (ALS), commonly referred to as Lou Gehrig's disease, is a rare neurodegenerative disorder characterized by the gradual loss of both upper motor neurons (UMNs) and lower motor neurons (LMNs). Although initial symptoms may vary, most patients develop skeletal muscle weakness that progresses to involve the entire body. The precise etiology of ALS remains unknown. In 1993, missense mutations in the gene encoding the antioxidant enzyme superoxide dismutase 1 (SOD1) were discovered in a subset of patients with familial ALS. More recently, TAR DNA-binding protein 43 (TDP-43) and Fused in Sarcoma (FUS) protein aggregates have been implicated in some cases of the disease, and a mutation in chromosome 9 (C9orf72) is thought to be the most common known cause of sporadic ALS. Early diagnosis of ALS is harder than with other neurodegenerative diseases as there are no highly effective means of determining its early onset. Currently, there is research being done regarding the diagnosis of ALS through upper motor neuron tests. The Penn Upper Motor Neuron Score (PUMNS) consists of 28 criteria with a score range of 0–32. A higher score indicates a higher level of burden present on the upper motor neurons. The PUMNS has proven quite effective in determining the burden that exists on upper motor neurons in affected patients.
Independent research provided in vitro evidence that the primary cellular sites where SOD1 mutations act are located on astrocytes. Astrocytes then cause the toxic effects on the motor neurons. The specific mechanism of toxicity still needs to be investigated, but the findings are significant because they implicate cells other than neuron cells in neurodegeneration.
Batten disease
Batten disease is a rare and fatal recessive neurodegenerative disorder that begins in childhood. Batten disease is the common name for a group of lysosomal storage disorders known as neuronal ceroid lipofuscinoses (NCLs) – each caused by a specific gene mutation, of which there are thirteen. Batten disease is quite rare, with a worldwide prevalence of about 1 in every 100,000 live births. In North America, NCL3 disease (juvenile NCL) typically manifests between the ages of 4 and 7. Batten disease is characterized by motor impairment, epilepsy, dementia, vision loss, and shortened lifespan. A loss of vision is a common first sign of Batten disease and is typically followed by cognitive and behavioral changes, seizures, and loss of the ability to walk. It is common for people to develop cardiac arrhythmias and difficulties eating food as the disease progresses. Batten disease diagnosis depends on a combination of many criteria: clinical signs and symptoms, evaluations of the eye, electroencephalograms (EEG), and brain magnetic resonance imaging (MRI) results. The diagnosis provided by these results is corroborated by genetic and biochemical testing. Until the past few years, no effective treatments were available to prevent the progression of the disease. In recent years, more models have been created to expedite the research process for methods to treat Batten disease.
Creutzfeldt–Jakob disease
Creutzfeldt–Jakob disease (CJD) is a prion disease that is characterized by rapidly progressive dementia. Misfolded proteins called prions aggregate in brain tissue leading to nerve cell death. Variant Creutzfeldt–Jakob disease (vCJD) is the infectious form that comes from the meat of a cow that was infected with bovine spongiform encephalopathy, also called mad cow disease.
Risk factors
Aging
The greatest risk factor for neurodegenerative diseases is aging. Mitochondrial DNA mutations as well as oxidative stress both contribute to aging. Many of these diseases are late-onset, meaning there is some factor that changes as a person ages for each disease. One constant factor is that in each disease, neurons gradually lose function as the disease progresses with age. It has been proposed that DNA damage accumulation provides the underlying causative link between aging and neurodegenerative disease. About 20–40% of healthy people between 60 and 78 years old experience discernable decrements in cognitive performance in several domains including working, spatial, and episodic memory, and processing speed.
Infections
A study using electronic health records indicates that 45 (with 22 of these being replicated with the UK Biobank) viral exposures can significantly elevate risks of neurodegenerative disease, including up to 15 years after infection.
Mechanisms
Genetics
Many neurodegenerative diseases are caused by genetic mutations, most of which are located in completely unrelated genes. In many of the different diseases, the mutated gene has a common feature: a repeat of the CAG nucleotide triplet. CAG codes for the amino acid glutamine. A repeat of CAG results in a polyglutamine (polyQ) tract. Diseases associated with such mutations are known as trinucleotide repeat disorders.
Polyglutamine repeats typically cause dominant pathogenesis. Extra glutamine residues can acquire toxic properties through a variety of mechanisms, including irregular protein folding and degradation pathways, altered subcellular localization, and abnormal interactions with other cellular proteins. PolyQ studies often use a variety of animal models because there is such a clearly defined trigger – repeat expansion. Extensive research has been done using nematode (C. elegans), fruit fly (Drosophila), mouse, and non-human primate models.
Nine inherited neurodegenerative diseases are caused by the expansion of the CAG trinucleotide and polyQ tract, including Huntington's disease and the spinocerebellar ataxias.
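As an illustration of the repeat expansions described above, the following sketch (with a made-up sequence fragment) finds the longest run of consecutive, in-frame CAG triplets in a DNA sequence, the quantity that is expanded in polyQ disorders such as Huntington's disease:

```python
def longest_cag_run(seq, codon="CAG"):
    """Return the longest run of consecutive, in-frame repeats of `codon` in `seq`."""
    seq = seq.upper()
    best = 0
    for frame in range(3):                      # examine all three reading frames
        run = 0
        for i in range(frame, len(seq) - 2, 3):
            if seq[i:i + 3] == codon:
                run += 1
                best = max(best, run)
            else:
                run = 0
    return best

# Hypothetical fragment containing an expanded repeat tract
fragment = "ATGGCG" + "CAG" * 42 + "CCGCCA"
print(longest_cag_run(fragment))  # 42 consecutive CAG codons, i.e. a polyQ tract of 42 glutamines
```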
Epigenetics
The presence of epigenetic modifications for certain genes has been demonstrated in this type of pathology. An example is the FKBP5 gene, which progressively increases its expression with age and has been related to Braak staging and increased tau pathology both in vitro and in mouse models of AD.
Protein misfolding
Several neurodegenerative diseases are classified as proteopathies as they are associated with the aggregation of misfolded proteins. Protein toxicity is one of the key mechanisms of many neurodegenerative diseases.
alpha-synuclein: can aggregate to form insoluble fibrils in pathological conditions characterized by Lewy bodies, such as Parkinson's disease, dementia with Lewy bodies, and multiple system atrophy. Alpha-synuclein is the primary structural component of Lewy body fibrils. In addition, an alpha-synuclein fragment, known as the non-Abeta component (NAC), is found in amyloid plaques in Alzheimer's disease.
tau: hyperphosphorylated tau protein is the main component of neurofibrillary tangles in Alzheimer's disease; tau fibrils are the main component of Pick bodies found in behavioral variant frontotemporal dementia.
amyloid beta: the major component of amyloid plaques in Alzheimer's disease.
prion: main component of prion diseases and transmissible spongiform encephalopathy.
Intracellular mechanisms
Protein degradation pathways
Parkinson's disease and Huntington's disease are both late-onset and associated with the accumulation of intracellular toxic proteins. Diseases caused by the aggregation of proteins are known as proteopathies, and they are primarily caused by aggregates in the following structures:
cytosol, e.g. Parkinson's and Huntington's
nucleus, e.g. Spinocerebellar ataxia type 1 (SCA1)
endoplasmic reticulum (ER), (as seen with neuroserpin mutations that cause familial encephalopathy with neuroserpin inclusion bodies)
extracellularly excreted proteins, amyloid-beta in Alzheimer's disease
There are two main avenues eukaryotic cells use to remove troublesome proteins or organelles:
ubiquitin–proteasome: protein ubiquitin along with enzymes is key for the degradation of many proteins that cause proteopathies including polyQ expansions and alpha-synucleins. Research indicates proteasome enzymes may not be able to correctly cleave these irregular proteins, which could possibly result in a more toxic species. This is the primary route cells use to degrade proteins.
Decreased proteasome activity is consistent with models in which intracellular protein aggregates form. It is still unknown whether or not these aggregates are a cause or a result of neurodegeneration.
autophagy–lysosome pathways: a form of programmed cell death (PCD), this becomes the favorable route when a protein is aggregate-prone meaning it is a poor proteasome substrate. This can be split into two forms of autophagy: macroautophagy and chaperone-mediated autophagy (CMA).
macroautophagy is involved in the nutrient recycling of macromolecules under conditions of starvation and in certain apoptotic pathways, and if absent, leads to the formation of ubiquitinated inclusions. Mice with neuronally confined macroautophagy-gene knockouts develop intraneuronal aggregates leading to neurodegeneration.
chaperone-mediated autophagy defects may also lead to neurodegeneration. Research has shown that mutant proteins bind to the CMA-pathway receptors on lysosomal membrane and in doing so block their own degradation as well as the degradation of other substrates.
Membrane damage
Damage to the membranes of organelles by monomeric or oligomeric proteins could also contribute to these diseases. Alpha-synuclein can damage membranes by inducing membrane curvature, and cause extensive tubulation and vesiculation when incubated with artificial phospholipid vesicles.
The tubes formed from these lipid vesicles consist of both micellar as well as bilayer tubes. Extensive induction of membrane curvature is deleterious to the cell and would eventually lead to cell death. Apart from tubular structures, alpha-synuclein can also form lipoprotein nanoparticles similar to apolipoproteins.
Mitochondrial dysfunction
The most common form of cell death in neurodegeneration is through the intrinsic mitochondrial apoptotic pathway. This pathway controls the activation of caspase-9 by regulating the release of cytochrome c from the mitochondrial intermembrane space. Reactive oxygen species (ROS) are normal byproducts of mitochondrial respiratory chain activity. ROS concentration is mediated by mitochondrial antioxidants such as manganese superoxide dismutase (SOD2) and glutathione peroxidase. Overproduction of ROS (oxidative stress) is a central feature of all neurodegenerative disorders. In addition to the generation of ROS, mitochondria are also involved with life-sustaining functions including calcium homeostasis, PCD, mitochondrial fission and fusion, lipid concentration of the mitochondrial membranes, and the mitochondrial permeability transition. Mitochondrial disease leading to neurodegeneration is likely, at least on some level, to involve all of these functions.
There is strong evidence that mitochondrial dysfunction and oxidative stress play a causal role in neurodegenerative disease pathogenesis, including in four of the more well known diseases Alzheimer's, Parkinson's, Huntington's, and amyotrophic lateral sclerosis.
Neurons are particularly vulnerable to oxidative damage due to their strong metabolic activity associated with high transcription levels, high oxygen consumption, and weak antioxidant defense.
DNA damage
The brain metabolizes as much as a fifth of consumed oxygen, and reactive oxygen species produced by oxidative metabolism are a major source of DNA damage in the brain. Damage to a cell's DNA is particularly harmful because DNA is the blueprint for protein production and unlike other molecules it cannot simply be replaced by re-synthesis. The vulnerability of post-mitotic neurons to DNA damage (such as oxidative lesions or certain types of DNA strand breaks), coupled with a gradual decline in the activities of repair mechanisms, could lead to accumulation of DNA damage with age and contribute to brain aging and neurodegeneration. DNA single-strand breaks are common and are associated with the neurodegenerative disease ataxia-oculomotor apraxia. Increased oxidative DNA damage in the brain is associated with Alzheimer's disease and Parkinson's disease. Defective DNA repair has been linked to neurodegenerative disorders such as Alzheimer's disease, amyotrophic lateral sclerosis, ataxia telangiectasia, Cockayne syndrome, Parkinson's disease and xeroderma pigmentosum.
Axonal transport
Axonal swelling, and axonal spheroids have been observed in many different neurodegenerative diseases. This suggests that defective axons are not only present in diseased neurons, but also that they may cause certain pathological insult due to accumulation of organelles. Axonal transport can be disrupted by a variety of mechanisms including damage to: kinesin and cytoplasmic dynein, microtubules, cargoes, and mitochondria. When axonal transport is severely disrupted a degenerative pathway known as Wallerian-like degeneration is often triggered.
Programmed cell death
Programmed cell death (PCD) is death of a cell in any form, mediated by an intracellular program. This process can be activated in neurodegenerative diseases including Parkinson's disease, amyotrophic lateral sclerosis, Alzheimer's disease and Huntington's disease. PCD observed in neurodegenerative diseases may be directly pathogenic; alternatively, PCD may occur in response to other injury or disease processes.
Apoptosis (type I)
Apoptosis is a form of programmed cell death in multicellular organisms. It is one of the main types of programmed cell death (PCD) and involves a series of biochemical events leading to a characteristic cell morphology and death.
Extrinsic apoptotic pathways: Occur when factors outside the cell activate cell surface death receptors (e.g., Fas) that result in the activation of caspases-8 or -10.
Intrinsic apoptotic pathways: Result from mitochondrial release of cytochrome c or endoplasmic reticulum malfunctions, each leading to the activation of caspase-9. The nucleus and Golgi apparatus are other organelles that have damage sensors, which can lead the cells down apoptotic pathways.
Caspases (cysteine-aspartic acid proteases) cleave at very specific amino acid residues. There are two types of caspases: initiators and effectors. Initiator caspases cleave inactive forms of effector caspases. This activates the effectors that in turn cleave other proteins resulting in apoptotic initiation.
Autophagic (type II)
Autophagy is a form of intracellular phagocytosis in which a cell actively consumes damaged organelles or misfolded proteins by encapsulating them into an autophagosome, which fuses with a lysosome to destroy the contents of the autophagosome. Because many neurodegenerative diseases show unusual protein aggregates, it is hypothesized that defects in autophagy could be a common mechanism of neurodegeneration.
Cytoplasmic (type III)
PCD can also occur via non-apoptotic processes, also known as Type III or cytoplasmic cell death. For example, type III PCD might be caused by trophotoxicity, or hyperactivation of trophic factor receptors. Cytotoxins that induce PCD can cause necrosis at low concentrations, or aponecrosis (combination of apoptosis and necrosis) at higher concentrations. It is still unclear exactly what combination of apoptosis, non-apoptosis, and necrosis causes different kinds of aponecrosis.
Transglutaminase
Transglutaminases are human enzymes ubiquitously present in the human body and in the brain in particular.
The main function of transglutaminases is to bind proteins and peptides intra- and intermolecularly by a type of covalent bond termed an isopeptide bond, in a reaction termed transamidation or crosslinking.
Transglutaminase binding of these proteins and peptides makes them clump together. The resulting structures become extremely resistant to chemical and mechanical disruption.
Most relevant human neurodegenerative diseases share the property of having abnormal structures made up of proteins and peptides.
Each of these neurodegenerative diseases has one (or several) specific main protein or peptide. In Alzheimer's disease, these are amyloid-beta and tau. In Parkinson's disease, it is alpha-synuclein. In Huntington's disease, it is huntingtin.
Transglutaminase substrates:
Amyloid-beta, tau, alpha-synuclein and huntingtin have been shown to be substrates of transglutaminases in vitro or in vivo; that is, they can be covalently bonded by transglutaminases to each other and potentially to any other transglutaminase substrate in the brain.
Transglutaminase augmented expression:
It has been proved that in these neurodegenerative diseases (Alzheimer's disease, Parkinson's disease, and Huntington's disease) the expression of the transglutaminase enzyme is increased.
Presence of isopeptide bonds in these structures:
The presence of isopeptide bonds (the result of the transglutaminase reaction) has been detected in the abnormal structures that are characteristic of these neurodegenerative diseases.
Co-localization:
Co-localization of transglutaminase mediated isopeptide bonds with these abnormal structures has been detected in the autopsy of brains of patients with these diseases.
Management
The process of neurodegeneration is not well understood, so the diseases that stem from it have, as yet, no cures.
Animal models in research
In the search for effective treatments (as opposed to palliative care), investigators employ animal models of disease to test potential therapeutic agents. Model organisms provide an inexpensive and relatively quick means to perform two main functions: target identification and target validation. Together, these help show the value of any specific therapeutic strategies and drugs when attempting to ameliorate disease severity. An example is the drug Dimebon by Medivation, Inc. In 2009 this drug was in phase III clinical trials for use in Alzheimer's disease, and also phase II clinical trials for use in Huntington's disease. In March 2010, the results of a clinical trial phase III were released; the investigational Alzheimer's disease drug Dimebon failed in the pivotal CONNECTION trial of patients with mild-to-moderate disease. With CONCERT, the remaining Pfizer and Medivation Phase III trial for Dimebon (latrepirdine) in Alzheimer's disease failed in 2012, effectively ending the development in this indication.
In another experiment using a rat model of Alzheimer's disease, it was demonstrated that systemic administration of hypothalamic proline-rich peptide (PRP)-1 offers neuroprotective effects and can prevent neurodegeneration in hippocampus amyloid-beta 25–35. This suggests that there could be therapeutic value to PRP-1.
Other avenues of investigation
Protein degradation offers therapeutic options both in preventing the synthesis of irregular proteins and in enhancing their degradation. There is also interest in upregulating autophagy to help clear protein aggregates implicated in neurodegeneration. Both of these options involve very complex pathways that are only beginning to be understood.
The goal of immunotherapy is to enhance aspects of the immune system. Both active and passive vaccinations have been proposed for Alzheimer's disease and other conditions; however, more research must be done to prove safety and efficacy in humans.
A current therapeutic target for the treatment of Alzheimer's disease is the protease β-secretase, which is involved in the amyloidogenic processing pathway that leads to the pathological accumulation of proteins in the brain. When the gene that encodes for amyloid precursor protein (APP) is spliced by α-secretase rather than β-secretase, the toxic protein β amyloid is not produced. Targeted inhibition of β-secretase can potentially prevent the neuronal death that is responsible for the symptoms of Alzheimer's disease.
Antonio Barbera, a former obstetrics and gynaecology physician, prescribes table tennis for patients suffering from serious neurological disorders.
See also
Amyloid
JUNQ and IPOD
Neurodegeneration with brain iron accumulation
Prevention of dementia
References
Clinical neuroscience
Neurological disorders
Senescence
Neurodegenerative disease | Neurodegenerative disease | [
"Chemistry",
"Biology"
] | 6,138 | [
"Senescence",
"Metabolism",
"Cellular processes"
] |
4,466,344 | https://en.wikipedia.org/wiki/Optical%20modulator | An optical modulator is a device which is used to modulate a beam of light. The beam may be carried over free space, or propagated through an optical waveguide (optical fibre). Depending on which parameter of the light beam is manipulated, modulators may be categorized into amplitude modulators, phase modulators, polarization modulators, etc. The easiest way to modulate the intensity of a light beam is to modulate the current driving the light source, e.g. a laser diode. This sort of modulation is called direct modulation, as opposed to the external modulation performed by a light modulator. For this reason light modulators are, e.g. in fiber-optic communications, called external light modulators.
With laser diodes where narrow linewidth is required, direct modulation is avoided due to a high bandwidth "chirping" effect when applying and removing the current to the laser.
Optical modulators are used with superconductors, which work properly only at low temperatures, generally just above absolute zero. Optical modulators convert information carried by an electric current in an electromagnet into light.
Classification of optical modulators
Depending on the property of the material that is used to modulate the light beam, modulators are divided into two groups: absorptive modulators and refractive modulators. In absorptive modulators the absorption coefficient of the material is changed; in refractive modulators the refractive index of the material is changed.
The absorption coefficient of the material in the modulator can be manipulated by the Franz–Keldysh effect, the Quantum-confined Stark effect, excitonic absorption, changes of Fermi level, or changes of free carrier concentration. Usually, if several such effects appear together, the modulator is called an electro-absorptive modulator.
Refractive modulators most often make use of an electro-optic effect. Some modulators utilize an acousto-optic effect or magneto-optic effect or take advantage of polarization changes in liquid crystals. The refractive modulators are named by the respective effect: i.e. electrooptic modulators, acousto-optic modulators etc. The effect of a refractive modulator of any of the types mentioned above is to change the phase of a light beam. The phase modulation can be converted into amplitude modulation using an interferometer or directional coupler.
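As an illustration of how phase modulation can be converted into amplitude modulation by an interferometer, the following sketch uses the idealized two-arm (Mach–Zehnder-type) transfer function I_out = I_in·cos²(Δφ/2); it describes the principle only, not any particular device:

```python
import math

def interferometer_output(i_in, delta_phi):
    """Ideal two-arm interferometer: output intensity for an induced phase difference delta_phi [rad]."""
    return i_in * math.cos(delta_phi / 2) ** 2

for dphi in (0.0, math.pi / 2, math.pi):
    print(f"delta_phi = {dphi:.2f} rad -> I_out/I_in = {interferometer_output(1.0, dphi):.2f}")
# 0 rad gives full transmission, pi rad gives extinction:
# modulating the phase in one arm therefore modulates the output amplitude.
```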
A separate class of modulators are spatial light modulators (SLMs). The role of an SLM is to modify the two-dimensional distribution of the amplitude and/or phase of an optical wave.
See also
Acousto-optic modulator
Electro-absorption modulator
Electro-optic modulator, exploiting the electro-optic effect
References
Modulator, optical
Nonlinear optics | Optical modulator | [
"Materials_science",
"Engineering"
] | 586 | [
"Glass engineering and science",
"Optical devices"
] |
4,466,508 | https://en.wikipedia.org/wiki/Thyroid%20function%20tests | Thyroid function tests (TFTs) is a collective term for blood tests used to check the function of the thyroid.
TFTs may be requested if a patient is thought to suffer from hyperthyroidism (overactive thyroid) or hypothyroidism (underactive thyroid), or to monitor the effectiveness of either thyroid-suppression or hormone replacement therapy. It is also requested routinely in conditions linked to thyroid disease, such as atrial fibrillation and anxiety disorder.
A TFT panel typically includes thyroid hormones such as thyroid-stimulating hormone (TSH, thyrotropin) and thyroxine (T4), and triiodothyronine (T3) depending on local laboratory policy.
Thyroid-stimulating hormone
Thyroid-stimulating hormone (TSH, thyrotropin) is generally increased in hypothyroidism and decreased in hyperthyroidism, making it the most important test for early detection of both of these conditions. The result of this assay is suggestive of the presence and cause of thyroid disease, since a measurement of elevated TSH generally indicates hypothyroidism, while a measurement of low TSH generally indicates hyperthyroidism. However, when TSH is measured by itself, it can yield misleading results, so additional thyroid function tests must be compared with the result of this test for accurate diagnosis.
TSH is produced in the pituitary gland. The production of TSH is controlled by thyrotropin-releasing hormone (TRH), which is produced in the hypothalamus. TSH levels may be suppressed by excess free T3 (fT3) or free T4 (fT4) in the blood.
History
First-generation TSH assays were done by radioimmunoassay and were introduced in 1965. There were variations and improvements upon TSH radioimmunoassay, but their use declined as a new immunometric assay technique became available in the middle of the 1980s. The new techniques were more accurate, leading to the second, third, and even fourth generations of TSH assay, with each generation possessing ten times greater functional sensitivity than the last. Third generation immunometric assay methods are typically automated. Fourth generation TSH immunometric assay has been developed for use in research.
Modern standard
Third generation TSH assay is the requirement for modern standards of care. TSH testing in the United States is typically carried out with automated platforms using advanced forms of immunometric assay. Nonetheless, there is no international standard for measurement of thyroid-stimulating hormone.
Interpretation
Accurate interpretation takes a variety of factors into account, such as the thyroid hormones i.e. thyroxine (T4) and triiodothyronine (T3), current medical status (such as pregnancy), certain medications like propylthiouracil, temporal effects including circadian rhythm and hysteresis, and other past medical history.
Thyroid hormones
Total thyroxine
Total thyroxine is rarely measured, having been largely superseded by free thyroxine tests. Total thyroxine (Total T4) is generally elevated in hyperthyroidism and decreased in hypothyroidism. It is usually slightly elevated in pregnancy secondary to increased levels of thyroid binding globulin (TBG).
Total T4 is measured to assess both the bound and unbound levels of T4. Total T4 is less useful in cases where protein abnormalities may be present, and it is less accurate because such a large proportion of T4 is protein-bound. Total T3 is measured in clinical practice because a smaller proportion of T3 is bound compared with T4.
Reference ranges depend on the method of analysis. Results should always be interpreted using the range from the laboratory that performed the test. Example values are:
Free thyroxine
Free thyroxine (fT4 or free T4) is generally elevated in hyperthyroidism and decreased in hypothyroidism.
Reference ranges depend on the method of analysis. Results should always be interpreted using the range from the laboratory that performed the test. Example values are:
Total triiodothyronine
Total triiodothyronine (Total T3) is rarely measured, having been largely superseded by free T3 tests. Total T3 is generally elevated in hyperthyroidism and decreased in hypothyroidism.
Reference ranges depend on the method of analysis. Results should always be interpreted using the range from the laboratory that performed the test. Example values are:
Free triiodothyronine
Free triiodothyronine (fT3 or free T3) is generally elevated in hyperthyroidism and decreased in hypothyroidism.
Reference ranges depend on the method of analysis. Results should always be interpreted using the range from the laboratory that performed the test. Example values are:
Carrier proteins
Thyroxine-binding globulin
An increased thyroxine-binding globulin results in an increased total thyroxine and total triiodothyronine without an actual increase in hormonal activity of thyroid hormones.
Reference ranges:
Thyroglobulin
Reference ranges:
Other binding hormones
Transthyretin (prealbumin)
Albumin
Protein binding function
Thyroid hormone uptake
Thyroid hormone uptake (Tuptake or T3 uptake) is a measure of the unbound thyroxine binding globulins in the blood, that is, the TBG that is unsaturated with thyroid hormone. Unsaturated TBG increases with decreased levels of thyroid hormones. It is not directly related to triiodothyronine, despite the name T3 uptake.
Reference ranges:
Other protein binding tests
Thyroid Hormone Binding Ratio (THBR)
Thyroxine-binding index (TBI)
Mixed parameters
Free thyroxine index
The Free Thyroxine Index (FTI or T7) is obtained by multiplying the total T4 by the T3 uptake. FTI is considered to be a more reliable indicator of thyroid status in the presence of abnormalities in plasma protein binding. This test is rarely used now that reliable free thyroxine and free triiodothyronine assays are routinely available.
FTI is elevated in hyperthyroidism and decreased in hypothyroidism.
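A minimal sketch of the FTI arithmetic described above; the input values are hypothetical, and conventions for expressing the T3 uptake (percentage versus ratio) vary between laboratories, so results must be interpreted against the performing laboratory's own reference range:

```python
def free_thyroxine_index(total_t4, t3_uptake_ratio):
    """FTI: total T4 multiplied by the T3 uptake, here expressed as a ratio
    (patient uptake divided by the reference uptake)."""
    return total_t4 * t3_uptake_ratio

# Hypothetical values: total T4 of 8.0 ug/dL and a T3 uptake ratio of 1.1
print(free_thyroxine_index(8.0, 1.1))  # 8.8
```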
Calculated and structure parameters
Derived structure parameters that describe constant properties of the overall feedback control system may add useful information for special purposes, e.g. in diagnosis of nonthyroidal illness syndrome or central hypothyroidism.
Secretory capacity (GT)
Thyroid's secretory capacity (GT, also referred to as SPINA-GT) is the maximum stimulated amount of thyroxine the thyroid can produce in one second. GT is elevated in hyperthyroidism and reduced in hypothyroidism.
GT is calculated with
GT = βT (DT + [TSH]) (1 + K41[TBG] + K42[TBPA]) [FT4] / (αT [TSH])
or
GT = βT (DT + [TSH]) [TT4] / (αT [TSH])
αT: Dilution factor for T4 (reciprocal of apparent volume of distribution, 0.1 l−1)
βT: Clearance exponent for T4 (1.1e-6 sec−1)
K41: Dissociation constant T4-TBG (2e10 L/mol)
K42: Dissociation constant T4-TBPA (2e8 L/mol)
DT: EC50 for TSH (2.75 mU/L)
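A minimal sketch of the GT calculation as reconstructed above, using the constants listed; the plasma protein concentrations (TBG and TBPA) and the patient values are assumed typical figures, and unit handling is simplified compared with the published SPINA software:

```python
ALPHA_T = 0.1      # dilution factor for T4 (1/L)
BETA_T = 1.1e-6    # clearance exponent for T4 (1/s)
K41 = 2e10         # dissociation constant T4-TBG (L/mol)
K42 = 2e8          # dissociation constant T4-TBPA (L/mol)
D_T = 2.75         # EC50 for TSH (mU/L)

def spina_gt(tsh, ft4, tbg=3e-7, tbpa=4.5e-6):
    """Secretory capacity GT in mol/s from TSH [mU/L] and free T4 [mol/L];
    TBG and TBPA are binding-protein concentrations in mol/L (assumed typical values)."""
    return BETA_T * (D_T + tsh) * (1 + K41 * tbg + K42 * tbpa) * ft4 / (ALPHA_T * tsh)

# Hypothetical euthyroid values: TSH 1.5 mU/L, free T4 15 pmol/L
gt = spina_gt(tsh=1.5, ft4=15e-12)
print(f"GT = {gt * 1e12:.1f} pmol/s")  # about 3.2 pmol/s
```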
Sum activity of peripheral deiodinases (GD)
The sum activity of peripheral deiodinases (GD, also referred to as SPINA-GD) is reduced in nonthyroidal illness with hypodeiodination.
GD is obtained with
GD = β31 (KM1 + [FT4]) (1 + K30[TBG]) [FT3] / (α31 [FT4])
or
GD = β31 (KM1 + [FT4]) [TT3] / (α31 [FT4])
α31: Dilution factor for T3 (reciprocal of apparent volume of distribution, 0.026 L−1)
β31: Clearance exponent for T3 (8e-6 sec−1)
KM1: Dissociation constant of type-1-deiodinase (5e-7 mol/L)
K30: Dissociation constant T3-TBG (2e9 L/mol)
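A corresponding sketch for GD, again with the constants listed above, an assumed typical TBG concentration, and hypothetical patient values:

```python
ALPHA_31 = 0.026   # dilution factor for T3 (1/L)
BETA_31 = 8e-6     # clearance exponent for T3 (1/s)
K_M1 = 5e-7        # dissociation constant of type-1 deiodinase (mol/L)
K30 = 2e9          # dissociation constant T3-TBG (L/mol)

def spina_gd(ft4, ft3, tbg=3e-7):
    """Sum deiodinase activity GD in mol/s from free T4 and free T3 [mol/L];
    TBG is the binding-protein concentration in mol/L (assumed typical value)."""
    return BETA_31 * (K_M1 + ft4) * (1 + K30 * tbg) * ft3 / (ALPHA_31 * ft4)

# Hypothetical euthyroid values: free T4 15 pmol/L, free T3 5 pmol/L
gd = spina_gd(ft4=15e-12, ft3=5e-12)
print(f"GD = {gd * 1e9:.0f} nmol/s")  # about 31 nmol/s
```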
TSH index
Jostel's TSH index (JTI or TSHI) helps to determine thyrotropic function of anterior pituitary on a quantitative level. It is reduced in thyrotropic insufficiency and in certain cases of non-thyroidal illness syndrome.
It is calculated with
TSHI = ln(TSH) + 0.1345 · FT4.
Additionally, a standardized form of TSH index may be calculated with
sTSHI = (TSHI − 2.7) / 0.676.
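A minimal sketch of the TSH index as reconstructed above (TSH in mU/L, FT4 in pmol/L; the patient values are hypothetical):

```python
from math import log

def tsh_index(tsh, ft4):
    """Jostel's TSH index from TSH [mU/L] and free T4 [pmol/L]."""
    return log(tsh) + 0.1345 * ft4

def standardized_tsh_index(tshi):
    """Standardized TSH index (zero mean, unit standard deviation in a reference population)."""
    return (tshi - 2.7) / 0.676

tshi = tsh_index(tsh=1.5, ft4=15.0)
print(f"TSHI = {tshi:.2f}, sTSHI = {standardized_tsh_index(tshi):.2f}")
```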
TTSI
The Thyrotroph Thyroid Hormone Sensitivity Index (TTSI, also referred to as Thyrotroph T4 Resistance Index or TT4RI) was developed to enable fast screening for resistance to thyroid hormone. Somewhat similar to the TSH Index it is calculated from equilibrium values for TSH and FT4, however with a different equation.
TFQI
The Thyroid Feedback Quantile-based Index (TFQI) is another parameter for thyrotropic pituitary function. It was defined to be more robust to distorted data than JTI and TTSI. It is calculated with
TFQI = Φ(FT4) − (1 − Φ(TSH))
from quantiles of FT4 and TSH concentration (as determined based on cumulative distribution functions, Φ). Per definition the TFQI has a mean of 0 and a standard deviation of 0.37 in a reference population. Higher values of TFQI are associated with obesity, metabolic syndrome, impaired renal function, diabetes, and diabetes-related mortality. TFQI results are also elevated in takotsubo syndrome, potentially reflecting type 2 allostatic load in the situation of psychosocial stress. Reductions have been observed in subjects with schizophrenia after initiation of therapy with oxcarbazepine, potentially reflecting declining allostatic load.
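A minimal sketch of the TFQI calculation as reconstructed above; the reference samples and patient values are made up, and in practice the cumulative distribution functions are estimated from a large reference population:

```python
def empirical_cdf(value, reference_sample):
    """Fraction of the reference sample at or below `value` (empirical cumulative distribution)."""
    return sum(x <= value for x in reference_sample) / len(reference_sample)

def tfqi(ft4, tsh, ft4_reference, tsh_reference):
    """TFQI = CDF(FT4) - (1 - CDF(TSH)), with both CDFs taken from a reference population."""
    return empirical_cdf(ft4, ft4_reference) - (1 - empirical_cdf(tsh, tsh_reference))

# Hypothetical reference samples (FT4 in pmol/L, TSH in mU/L) and one patient
ft4_ref = [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
tsh_ref = [0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 4.0]
print(f"TFQI = {tfqi(ft4=18, tsh=3.0, ft4_reference=ft4_ref, tsh_reference=tsh_ref):.2f}")
# 0.60: both FT4 and TSH lie in the upper quantiles, suggesting reduced central sensitivity
```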
Reconstructed set point
In healthy persons, the intra-individual variation of TSH and thyroid hormones is considerably smaller than the inter-individual variation. This results from a personal set point of thyroid homeostasis. In hypothyroidism, it is impossible to directly access the set point, but it can be reconstructed with methods of systems theory.
A computerised algorithm called Thyroid-SPOT, which is based on this mathematical theory, has been implemented in software applications. In patients undergoing thyroidectomy, it has been demonstrated that this algorithm can be used to reconstruct the personal set point with sufficient precision.
Effects of drugs
Drugs can profoundly affect thyroid function tests. Listed below is a selection of important effects.
↓: reduced serum concentration or structure parameter; ↑: increased serum concentration or structure parameter; ↔: no change; TSH: Thyroid-stimulating hormone; T3: Total triiodothyronine; T4: Total thyroxine; fT4: Free thyroxine; fT3: Free triiodothyronine; rT3: Reverse triiodothyronine
See also
Reference ranges for thyroid hormones
Long-acting thyroid stimulator (LATS)
References
Further reading
American Thyroid Association: Thyroid Function Tests. Posted on June 4, 2012, seen on January 9, 2013.
Thyroid function panel - Lab Tests Online
External links
SPINA Thyr: Open source software for calculating GT and GD
Interpretation of thyroid function tests by Dayan, Colin M. 2001. The Lancet, Vol. 357.
CDC laboratory procedure manuals
The Centers for Disease Control and Prevention has published the following laboratory procedure manuals for measuring thyroid-stimulating hormone:
Thyroid Stimulating Hormone (TSH) (University of Washington Medical Center). September 2011. Method: Access 2 (Beckman Coulter).
Thyroid Stimulating Hormone (TSH) (Collaborative Laboratory Services). September 2011. Method: Access 2 (Beckman Coulter).
Thyroid Stimulating Hormone (TSH). September 2009. Method: Access 2 (Beckman Coulter).
Lab 18 Thyroid Stimulating Hormone. 2001-2002. Method: Microparticle Enzyme Immunoassay.
Lab 18 TSH - Thyroid Stimulating Hormone. 1999-2000. Method: Microparticle Enzyme Immunoassay.
Chemical pathology
Blood tests
Endocrine procedures
Thyroidological methods
Endocrine function tests | Thyroid function tests | [
"Chemistry",
"Biology"
] | 2,472 | [
"Biochemistry",
"Blood tests",
"Chemical pathology",
"Endocrine function tests"
] |
4,467,221 | https://en.wikipedia.org/wiki/Delay%20spread | In telecommunications, the delay spread is a measure of the multipath richness of a communications channel.
In general, it can be interpreted as the difference between the time of arrival of the earliest significant multipath component (typically the line-of-sight component) and the time of arrival of the last multipath components.
The delay spread is mostly used in the characterization of wireless channels, but it also applies to any other multipath channel (e.g. multipath in optical fibers).
Delay spread can be quantified through different metrics, although the most common one is the root mean square (rms) delay spread. Denoting the power delay profile of the channel by Ac(τ), the mean delay of the channel is
τ̄ = ( ∫ τ Ac(τ) dτ ) / ( ∫ Ac(τ) dτ )
and the rms delay spread is given by
Sτ = √( ∫ (τ − τ̄)² Ac(τ) dτ / ∫ Ac(τ) dτ )
The formula above is also known as the root of the second central moment of the normalised delay power density spectrum.
The importance of delay spread lies in how it affects intersymbol interference (ISI). If the symbol duration is long enough compared to the delay spread (typically at least ten times as long), one can expect an essentially ISI-free channel. The frequency-domain counterpart is the notion of coherence bandwidth (CB), which is the bandwidth over which the channel can be assumed flat (i.e. a channel that passes all spectral components with approximately equal gain and linear phase). Coherence bandwidth is related to the inverse of the delay spread: the shorter the delay spread, the larger the coherence bandwidth.
Delay spread therefore has a significant impact on intersymbol interference.
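As an illustration, the following sketch computes the mean delay and rms delay spread of a discrete power delay profile, together with a rough coherence-bandwidth estimate; the profile values are made up and the factor of 5 in the coherence-bandwidth estimate is only a commonly used rule of thumb:

```python
import numpy as np

def delay_spread(delays_s, powers):
    """Mean delay and rms delay spread of a discrete power delay profile.
    delays_s: arrival times of the multipath components [s]; powers: their linear powers."""
    tau = np.asarray(delays_s, dtype=float)
    p = np.asarray(powers, dtype=float)
    mean_delay = np.sum(tau * p) / np.sum(p)
    rms = np.sqrt(np.sum((tau - mean_delay) ** 2 * p) / np.sum(p))
    return mean_delay, rms

# Hypothetical indoor profile: four paths at 0, 50, 120 and 300 ns with decaying power
tau_mean, tau_rms = delay_spread([0e-9, 50e-9, 120e-9, 300e-9], [1.0, 0.5, 0.2, 0.05])
print(f"mean delay = {tau_mean * 1e9:.1f} ns, rms delay spread = {tau_rms * 1e9:.1f} ns")
print(f"coherence bandwidth (rule of thumb, ~1/(5*S_tau)) = {1 / (5 * tau_rms) / 1e6:.1f} MHz")
```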
References
Further reading
Radio frequency propagation | Delay spread | [
"Physics"
] | 330 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves"
] |
4,468,576 | https://en.wikipedia.org/wiki/Cre%20recombinase | Cre recombinase is a tyrosine recombinase enzyme derived from the P1 bacteriophage. The enzyme uses a topoisomerase I-like mechanism to carry out site specific recombination events. The enzyme (38 kDa) is a member of the integrase family of site specific recombinases and it is known to catalyse the site specific recombination event between two DNA recognition sites (LoxP sites). This 34 base pair (bp) loxP recognition site consists of two 13 bp palindromic sequences which flank an 8 bp spacer region. The products of Cre-mediated recombination at loxP sites are dependent upon the location and relative orientation of the loxP sites. Two separate DNA species both containing loxP sites can undergo fusion as the result of Cre mediated recombination. DNA sequences found between two loxP sites are said to be "floxed". In this case the products of Cre mediated recombination depend upon the orientation of the loxP sites. DNA found between two loxP sites oriented in the same direction will be excised as a circular loop of DNA whilst intervening DNA between two loxP sites in opposing orientations will be inverted. The enzyme requires no additional cofactors (such as ATP) or accessory proteins for its function.
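As an illustration of the loxP structure described above, the following sketch checks that the 13 bp arms of the commonly cited wild-type loxP sequence form an inverted repeat around the 8 bp spacer, and locates loxP sites in a hypothetical construct (the sequences should be verified against a trusted reference before any practical use):

```python
LOXP = "ATAACTTCGTATAATGTATGCTATACGAAGTTAT"  # 13 bp arm + 8 bp spacer + 13 bp arm (34 bp total)

def reverse_complement(seq):
    complement = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(complement[base] for base in reversed(seq))

def find_loxp_sites(dna):
    """Start positions of exact loxP matches on the given strand."""
    dna = dna.upper()
    return [i for i in range(len(dna) - len(LOXP) + 1) if dna[i:i + len(LOXP)] == LOXP]

left_arm, spacer, right_arm = LOXP[:13], LOXP[13:21], LOXP[21:]
print(right_arm == reverse_complement(left_arm))   # True: the two arms form an inverted repeat
print(len(LOXP), spacer)                            # 34 bp total; the asymmetric spacer gives the site its orientation

# Hypothetical "floxed" construct: two directly repeated loxP sites flanking an insert
construct = "GGATCC" + LOXP + "ATGACCGAGTACAAG" + LOXP + "GAATTC"
print(find_loxp_sites(construct))                   # [6, 55]
```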
The enzyme plays important roles in the life cycle of the P1 bacteriophage, such as cyclization of the linear genome and resolution of dimeric chromosomes that form after DNA replication.
Cre recombinase is a widely used tool in the field of molecular biology. The enzyme's unique and specific recombination system is exploited to manipulate genes and chromosomes in a huge range of research, such as gene knock out or knock in studies. The enzyme's ability to operate efficiently in a wide range of cellular environments (including mammals, plants, bacteria, and yeast) enables the Cre-Lox recombination system to be used in a vast number of organisms, making it a particularly useful tool in scientific research.
Discovery
Studies carried out in 1981 by Sternberg and Hamilton demonstrated that the bacteriophage 'P1' had a unique site specific recombination system. EcoRI fragments of the P1 bacteriophage genome were generated and cloned into lambda vectors. A 6.5kb EcoRI fragment (Fragment 7) was found to permit efficient recombination events. The mechanism of these recombination events was known to be unique as they occurred in the absence of bacterial RecA and RecBCD proteins. The components of this recombination system were elucidated using deletion mutagenesis studies. These studies showed that a P1 gene product and a recombination site were both required for efficient recombination events to occur. The P1 gene product was named Cre (cyclization recombination) and the recombination site was named loxP (locus of crossing (x) over, P1). The Cre protein was purified in 1983 and was found to be a 35,000 Da protein. No high energy cofactors such as ATP or accessory proteins are required for the recombinase activity of the purified protein. Early studies also demonstrated that Cre binds to non specific DNA sequences whilst having a 20 fold higher affinity for loxP sequences and results of early DNA footprinting studies also suggested that Cre molecules bind loxP sites as dimers.
Structure
Cre recombinase consists of 343 amino acids that form two distinct domains. The amino terminal domain encompasses residues 20–129 and this domain contains 5 alpha helical segments linked by a series of short loops. Helices A & E are involved in the formation of the recombinase tetramer with the C terminus region of helix E known to form contacts with the C terminal domain of adjacent subunits. Helices B & D form direct contacts with the major groove of the loxP DNA. These two helices are thought to make three direct contacts to DNA bases at the loxP site. The carboxy terminal domain of the enzyme consists of amino acids 132–341 and it harbours the active site of the enzyme. The overall structure of this domain shares a great deal of structural resemblance to the catalytic domain of other enzymes of the same family such as λ Integrase and HP1 Integrase. This domain is predominantly helical in structure with 9 distinct helices (F−N). The terminal helix (N) protrudes from the main body of the carboxy domain and this helix is reputed to play a role in mediating interactions with other subunits. Crystal structures demonstrate that this terminal N helix buries its hydrophobic surface into an acceptor pocket of an adjacent Cre subunit.
The effect of the two-domain structure is to form a C-shaped clamp that grasps the DNA from opposite sides.
Active site
The active site of the Cre enzyme consists of the conserved catalytic triad residues Arg 173, His 289, Arg 292 as well as the conserved nucleophilic residues Tyr 324 and Trp 315. Unlike some recombinase enzymes such as Flp recombinase, Cre does not form a shared active site between separate subunits and all the residues that contribute to the active site are found on a single subunit. Consequently, when two Cre molecules bind at a single loxP site two active sites are present. Cre mediated recombination requires the formation of a synapse in which two Cre-LoxP complexes associate to form what is known as the synapse tetramer in which 4 distinct active sites are present.
Tyr 324 acts as a nucleophile to form a covalent 3’-phosphotyrosine linkage to the DNA substrate. The scissile phosphate (phosphate targeted for nucleophilic attack at the cleavage site) is coordinated by the side chains of the 3 amino acid residues of the catalytic triad (Arg 173, His 289 & Trp 315). The indole nitrogen of tryptophan 315 also forms a hydrogen bond to this scissile phosphate. (n.b A Histidine occupies this site in other tyrosine recombinase family members and performs the same function). This reaction cleaves the DNA and frees a 5’ hydroxyl group. This process occurs in the active site of two out of the four recombinase subunits present at the synapse tetramer. If the 5’ hydroxyl groups attack the 3’-phosphotyrosine linkage one pair of the DNA strands will exchange to form a Holliday junction intermediate.
Applications
Role in bacteriophage P1
Cre recombinase plays important roles in the life cycle of the P1 bacteriophage. Upon infection of a cell the Cre-loxP system is used to cause circularization of the P1 DNA. In addition, Cre is also used to resolve dimeric lysogenic P1 DNA that forms during replication, ensuring proper segregation of the plasmid when the host cell divides.
Use in research
Inducible Cre activation is achieved using CreER (estrogen receptor) variant, which is only activated after delivery of tamoxifen. This is done through the fusion of a mutated ligand binding domain of the estrogen receptor to the Cre recombinase, resulting in Cre becoming specifically activated by tamoxifen. In the absence of tamoxifen, CreER will result in the shuttling of the mutated recombinase into the cytoplasm. The protein will stay in this location in its inactivated state until tamoxifen is given. Once tamoxifen is introduced, it is metabolized into 4-hydroxytamoxifen, which then binds to the ER and results in the translocation of the CreER into the nucleus, where it is then able to cleave the lox sites. Importantly, sometimes fluorescent reporters can be activated in the absence of tamoxifen, due to leakage of a few Cre recombinase molecules into the nucleus which, in combination with very sensitive reporters, results in unintended cell labelling. CreER(T2) was developed to minimize tamoxifen-independent recombination and maximize tamoxifen-sensitivity.
Improvements
In recent years, Cre recombinase has been improved with conversion to preferred mammalian codons, the removal of reported cryptic splice sites, an altered stop codon, and reduced CpG content to reduce the risk of epigenetic silencing in mammals. A number of mutants with enhanced accuracy have also been identified.
See also
Cre-Lox recombination
FLP-FRT recombination
Cre/loxP-System
References
External links
Genetics techniques
Enzymes | Cre recombinase | [
"Engineering",
"Biology"
] | 1,882 | [
"Genetics techniques",
"Genetic engineering"
] |
14,024,869 | https://en.wikipedia.org/wiki/Photothermal%20spectroscopy | Photothermal spectroscopy is a group of high sensitivity spectroscopy techniques used to measure optical absorption and thermal characteristics of a sample. The basis of photothermal spectroscopy is the change in thermal state of the sample resulting from the absorption of radiation. Light absorbed and not lost by emission results in heating. The heat raises temperature thereby influencing the thermodynamic properties of the sample or of a suitable material adjacent to it. Measurement of the temperature, pressure, or density changes that occur due to optical absorption are ultimately the basis for the photothermal spectroscopic measurements.
As with photoacoustic spectroscopy, photothermal spectroscopy is an indirect method for measuring optical absorption, because it does not directly measure the light involved in the absorption. In another sense, however, photothermal (and photoacoustic) methods measure the absorption directly, rather than calculating it from the transmission as the more usual transmission spectroscopic techniques do. This is what gives the technique its high sensitivity: in transmission techniques the absorbance is obtained as the difference between the total light impinging on the sample and the transmitted (plus reflected, plus scattered) light, and when the absorption is small this suffers the usual accuracy problems of taking a small difference between large numbers. In photothermal spectroscopies, by contrast, the signal is essentially proportional to the absorption, and is zero when there is zero true absorption, even in the presence of reflection or scattering.
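To make the difference-of-large-numbers problem concrete, the short script below compares the two approaches for a hypothetical weakly absorbing sample; the numbers (an absorbance of 10^-5 and 0.1% photometric noise) are illustrative assumptions, not values taken from the literature.

```python
import math

A_true = 1e-5                      # assumed (very weak) decadic absorbance
T = 10 ** (-A_true)                # transmitted fraction, Beer-Lambert law
print(f"Transmitted fraction: {T:.8f}")   # ~0.99997697

# Transmission measurement: absorbance comes from the tiny difference
# between two nearly equal intensities (incident vs transmitted).
noise = 1e-3                       # assumed 0.1% relative photometric noise
T_measured = T * (1 + noise)
A_measured = -math.log10(T_measured)
print(f"Apparent absorbance with 0.1% noise: {A_measured:.2e}")
# The 0.1% intensity error completely swamps the 1e-5 absorbance.

# Photothermal measurement: the signal is (to first order) proportional
# to the absorbed power, so zero absorption gives zero signal and the
# same relative noise only perturbs the already small signal itself.
P0 = 1.0                           # incident power, arbitrary units
absorbed = P0 * (1 - T)            # ~2.3e-5 of the incident power
signal = absorbed * (1 + noise)    # 0.1% error on a small signal
print(f"Photothermal signal: {signal:.3e} (true {absorbed:.3e})")
```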
There are several methods and techniques used in photothermal spectroscopy. Each of these has a name indicating the specific physical effect measured.
Photothermal lens spectroscopy (PTS or TLS) measures the thermal blooming that occurs when a beam of light heats a transparent sample. It is typically applied for measuring minute quantities of substances in homogeneous gas and liquid solutions.
Photothermal deflection spectroscopy (PDS), also called the mirage effect, measures the bending of light due to optical absorption. This technique is particularly useful for measuring surface absorption and for profiling thermal properties in layered materials.
Photothermal diffraction, a type of four wave mixing, monitors the effect of transient diffraction gratings "written" into the sample with coherent lasers. It is a form of real-time holography.
Photothermal emission measures an increase in sample infrared radiance occurring as a consequence of absorption. Sample emission follows Stefan's law of thermal emission. This method is used to measure the thermal properties of solids and layered materials.
Photothermal single particle microscopy. This technique allows the detection of single absorbing nanoparticles via the creation of a spherically symmetric thermal lens for imaging and correlation spectroscopy.
Photothermal deflection spectroscopy
Photothermal deflection spectroscopy is a kind of spectroscopy that measures the change in refractive index due to heating of a medium by light. It works via a sort of "mirage effect" where a refractive index gradient exists adjacent to the test sample surface. A probe laser beam is refracted or bent in a manner proportional to the temperature gradient of the transparent medium near the surface. From this deflection, a measure of the absorbed excitation radiation can be determined. The technique is useful when studying optically thin samples, because it provides a sensitive measurement of whether absorption is occurring. It is of value in situations where "pass-through" or transmission spectroscopy cannot be used.
There are two main forms of PDS: collinear and transverse. Collinear PDS was introduced in a 1980 paper by A.C. Boccara, D. Fournier, et al. In collinear PDS, two beams pass through and intersect in a medium; the pump beam heats the material and the probe beam is deflected. This technique only works for transparent media. In transverse PDS, the pump beam comes in normal to the surface and heats it, while the probe beam passes parallel to the surface. In a variation on this, the probe beam may reflect off the surface and measure buckling due to heating. Transverse PDS can be done in nitrogen, but better performance is gained in a liquid cell: usually an inert, non-absorbing material such as a perfluorocarbon is used.
In both collinear and transverse PDS, the surface is heated using a periodically modulated light source, such as an optical beam passing through a mechanical chopper or regulated with a function generator. A lock-in amplifier is then used to measure deflections found at the modulation frequency. Another scheme uses a pulsed laser as the excitation source. In that case, a boxcar average can be used to measure the temporal deflection of the probe beam to the excitation radiation. The signal falls off exponentially as a function of frequency, so frequencies around 1-10 hertz are frequently used. A full theoretical analysis of the PDS system was published by Jackson, Amer, et al. in 1981. The same paper also discussed the use of PDS as a form of microscopy, called "Photothermal Deflection Microscopy", which can yield information about impurities and the surface topology of materials.
PDS analysis of thin films can also be performed using a patterned substrate that supports optical resonances, such as guided-mode resonance and whispering-gallery modes. The probe beam is coupled into a resonant mode and the coupling efficiency is highly sensitive to the incidence angle. Due to the photoheating effect, the coupling efficiency is changed and characterized to indicate the thin film absorption.
See also
Photothermal effect
Photothermal microspectroscopy
Photothermal optical microscopy
Urbach energy
References
J. A. Sell Photothermal Investigations of Solids and Fluids Academic Press, New York 1989
D. P. Almond and P. M. Patel Photothermal Science and Techniques Chapman and Hall, London 1996
S. E. Bialkowski Photothermal Spectroscopy Methods for Chemical Analysis John Wiley, New York 1996
External links
Quantities, terminology, and symbols in photothermal and related spectroscopies (IUPAC Recommendations 2004)
on-line version Chapter 1 of Stephen E. Bialkowski's Photothermal Spectroscopy Methods for Chemical Analysis John Wiley, New York 1996
Spectroscopy | Photothermal spectroscopy | [
"Physics",
"Chemistry"
] | 1,248 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
14,026,380 | https://en.wikipedia.org/wiki/S-matrix%20theory | S-matrix theory was a proposal for replacing local quantum field theory as the basic principle of elementary particle physics.
It avoided the notion of space and time by replacing it with abstract mathematical properties of the S-matrix. In S-matrix theory, the S-matrix relates the infinite past to the infinite future in one step, without being decomposable into intermediate steps corresponding to time-slices.
This program was very influential in the 1960s, because it was a plausible substitute for quantum field theory, which was plagued with the zero interaction phenomenon at strong coupling. Applied to the strong interaction, it led to the development of string theory.
S-matrix theory was largely abandoned by physicists in the 1970s, as quantum chromodynamics was recognized to solve the problems of strong interactions within the framework of field theory. But in the guise of string theory, S-matrix theory is still a popular approach to the problem of quantum gravity.
The S-matrix theory is related to the holographic principle and the AdS/CFT correspondence by a flat space limit. The analog of the S-matrix relations in AdS space is the boundary conformal theory.
The most lasting legacy of the theory is string theory. Other notable achievements are the Froissart bound, and the prediction of the pomeron.
History
S-matrix theory was proposed as a principle of particle interactions by Werner Heisenberg in 1943, following John Archibald Wheeler's 1937 introduction of the S-matrix.
It was developed heavily by Geoffrey Chew, Steven Frautschi, Stanley Mandelstam, Vladimir Gribov, and Tullio Regge. Some aspects of the theory were promoted by Lev Landau in the Soviet Union, and by Murray Gell-Mann in the United States.
Basic principles
The basic principles are:
Relativity: The S-matrix is a representation of the Poincaré group;
Unitarity: SS† = S†S = 1;
Analyticity: integral relations and singularity conditions.
The basic analyticity principles were also called analyticity of the first kind, and they were never fully enumerated, but they include
Crossing: The amplitudes for antiparticle scattering are the analytic continuation of particle scattering amplitudes.
Dispersion relations: the values of the S-matrix can be calculated from integrals, over internal energy variables, of the imaginary part of the same values (a schematic form is sketched after this list).
Causality conditions: the singularities of the S-matrix can only occur in ways that don't allow the future to influence the past (motivated by Kramers–Kronig relations)
Landau principle: Any singularity of the S-matrix corresponds to production thresholds of physical particles.
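As an illustration of the dispersion-relation idea referred to above, one schematic (unsubtracted) form for an amplitude f(s) is written out below in LaTeX; real applications generally need subtractions, bound-state poles, and left-hand cuts, so this is indicative only.

```latex
% Schematic dispersion relation: the amplitude is recovered from an
% integral over the imaginary part along the physical cut, in the
% spirit of the Kramers--Kronig relations.
f(s) \;=\; \frac{1}{\pi} \int_{s_0}^{\infty}
          \frac{\operatorname{Im} f(s')}{\,s' - s - i\epsilon\,}\; \mathrm{d}s'
```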
These principles were to replace the notion of microscopic causality in field theory, the idea that field operators exist at each spacetime point, and that spacelike separated operators commute with one another.
Bootstrap models
The basic principles were too general to apply directly, because they are satisfied automatically by any field theory. So to apply to the real world, additional principles were added.
The phenomenological way in which this was done was by taking experimental data and using the dispersion relations to compute new limits. This led to the discovery of some particles, and to successful parameterizations of the interactions of pions and nucleons.
This path was mostly abandoned because the resulting equations, devoid of any space-time interpretation, were very difficult to understand and solve.
Regge theory
The principle behind the Regge theory hypothesis (also called analyticity of the second kind or the bootstrap principle) is that all strongly interacting particles lie on Regge trajectories. This was considered the definitive sign that all the hadrons are composite particles, but within S-matrix theory, they are not thought of as being made up of elementary constituents.
The Regge theory hypothesis allowed for the construction of string theories, based on bootstrap principles. The additional assumption was the narrow resonance approximation, which started with stable particles on Regge trajectories, and added interaction loop by loop in a perturbation series.
String theory was given a Feynman path-integral interpretation a little while later. The path integral in this case is the analog of a sum over particle paths, not of a sum over field configurations. Feynman's original path integral formulation of field theory also had little need for local fields, since Feynman derived the propagators and interaction rules largely using Lorentz invariance and unitarity.
See also
Landau pole
Regge trajectory
Bootstrap model
Pomeron
Dual resonance model
History of string theory
Notes
References
Steven Frautschi, Regge Poles and S-matrix Theory, New York: W. A. Benjamin, Inc., 1963.
Particle physics | S-matrix theory | [
"Physics"
] | 959 | [
"Particle physics"
] |
14,032,539 | https://en.wikipedia.org/wiki/Recombinase-mediated%20cassette%20exchange | RMCE (recombinase-mediated cassette exchange) is a procedure in reverse genetics allowing the systematic, repeated modification of higher eukaryotic genomes by targeted integration, based on the features of site-specific recombination processes (SSRs). For RMCE, this is achieved by the clean exchange of a preexisting gene cassette for an analogous cassette carrying the "gene of interest" (GOI).
The genetic modification of mammalian cells is a standard procedure for the production of correctly modified proteins with pharmaceutical relevance. To be successful, the transfer and expression of the transgene has to be highly efficient and should have a largely predictable outcome. Current developments in the field of gene therapy are based on the same principles. Traditional procedures used for transfer of GOIs are not sufficiently reliable, mostly because the relevant epigenetic influences have not been sufficiently explored: transgenes integrate into chromosomes with low efficiency and at loci that provide only sub-optimal conditions for their expression. As a consequence the newly introduced information may not be realized (expressed), the gene(s) may be lost and/or re-insert and they may render the target cells in unstable state. It is exactly this point where RMCE enters the field. The procedure was introduced in 1994 and it uses the tools yeasts and bacteriophages have evolved for the efficient replication of important genetic information:
General principles
Most yeast strains contain circular, plasmid-like DNAs called "two-micron circles". The persistence of these entities is granted by a recombinase called "flippase" or "Flp". Four monomers of this enzyme associate with two identical short (48 bp) target sites, called FRT ("flip-recombinase targets"), resulting in their crossover. The outcome of such a process depends on the relative orientation of the participating FRTs leading to
the inversion of a sequence that is flanked by two identical but inversely oriented FRT sites
the deletion/resolution of a sequence that is flanked by two equally oriented identical FRTs
the inefficient reversion of the latter process, commonly called integration or "addition" of an extra piece of DNA carrying a single FRT site identical to the target site
This spectrum of options could be extended significantly by the generation of spacer mutants for extended 48 bp FRT sites (cross-hatched half-arrows in Figure 1). Each mutant Fn recombines with an identical mutant Fn with an efficiency equal to the wildtype sites (F x F). A cross-interaction (F x Fn) is strictly prevented by the particular design of these components. This sets the stage for the situation depicted in Figure 1A:
a target cassette (here a composite +/- selection marker) is flanked by an F- and an Fn site. After its introduction into the genome of a host cell the properties of many integration sites (genomic 'addresses') are characterized and appropriate clones are isolated
the GOI (gene-of-interest) is part of a circular 'exchange plasmid' and is flanked by a set of matching sites. This exchange plasmid can be introduced into the cell at large molecular excess and will thereby undergo the depicted exchange (RMCE-) reaction with the pre-selected genomic address (i.e. the F <+/-> Fn target)
this RMCE principle is a process that can be repeated with the same or a different exchange plasmid ("serial RMCE"). Note that RMCE places the GOI at the pre-determined locus, and that it does so without co-introducing the vector backbone sequences (dotted lines) that would otherwise trigger immunologic or epigenetic defense mechanisms.
First applied for the Tyr-recombinase Flp, this novel procedure is not only relevant to the rational construction of biotechnologically significant cell lines, but it also finds increasing use for the systematic generation of stem cells. Stem cells can be used to replace damaged tissue or to generate transgenic animals with largely pre-determined properties.
Dual RMCE
It has been previously established that coexpression of both Cre and Flp recombinases catalyzes the exchange of sequences flanked by single loxP and FRT sites integrated into the genome at a random location. However, these studies did not explore whether such an approach could be used to modify conditional mouse alleles carrying single or multiple loxP and FRT sites. dual RMCE (dRMCE; Osterwalder et al., 2010) was recently developed as a re-engineering tool applicable to the vast numbers of mouse conditional alleles that harbor wild-type loxP and FRT sites and therefore are not compatible with conventional RMCE. The general dRMCE strategy takes advantage of the fact that most conditional alleles encode a selection cassette flanked by FRT sites, in addition to loxP sites that flank functionally relevant exons ('floxed' exons). The FRT-flanked selection cassette is in general placed outside the loxP-flanked region, which renders these alleles directly compatible with dRMCE. Simultaneous expression of Cre and Flp recombinases induces cis recombination and formation of the deleted allele, which then serves as a 'docking site' at which to insert the replacement vector by trans recombination. The correctly replaced locus would encode the custom modification and a different drug-selection cassette flanked by single loxP and FRT sites. dRMCE therefore appears as a very efficient tool for targeted re-engineering of thousands of mouse alleles produced by the IKMC consortium.
Multiplexing RMCE
Multiplexing setups rely on the fact that each F-Fn pair (consisting of a wildtype FRT site and a mutant called "n") or each Fn-Fm pair (consisting of two mutants, "m" and "n") constitutes a unique "address" in the genome. A prerequisite is a difference in at least four of the eight spacer positions (see Figure 1B). If the difference is below this threshold, some cross-interaction between the mutants may occur, leading to a faulty deletion of the sequence between the heterospecific (Fm/Fn or F/Fn) sites.
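As an illustration of this "four-out-of-eight" rule, the snippet below checks the pairwise differences between candidate 8-bp spacer sequences. The sequences and names are made up for the example and are not the wild-type FRT spacer or any published Fn mutant.

```python
from itertools import combinations

# Hypothetical 8-bp spacer sequences (illustrative only; not real FRT spacers).
spacers = {
    "site_A": "ACGTACGT",
    "site_B": "ACGTTGCA",
    "site_C": "ACGAACGT",
}

def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

# Heterospecific pairs should differ at >= 4 of the 8 spacer positions
# to prevent cross-recombination between the two genomic 'addresses'.
for (n1, s1), (n2, s2) in combinations(spacers.items(), 2):
    d = hamming(s1, s2)
    verdict = "usable as distinct addresses" if d >= 4 else "risk of cross-interaction"
    print(f"{n1} vs {n2}: {d}/8 differences -> {verdict}")
```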
13 FRT-mutants have meanwhile become available, which permit the establishment of several unique genomic addresses (for instance F-Fn and Fm-Fo). These addresses will be recognized by donor plasmids that have been designed according to the same principles, permitting successive (but also synchronous) modifications at the predetermined loci. These modifications can be driven to completion in case the compatible donor plasmid(s) are provided at an excess (mass-action principles). Figure 2 illustrates one use of the multiplexing principle: the stepwise extension of a coding region in which a basic expression unit is provided with genomic insulators, enhancers, or other cis-acting elements.
A recent variation of the general concept is based on PhiC31 (an integrase of the Ser-class), which permits introduction of another RMCE target at a secondary site after the first RMCE-based modification has occurred. This is because each phiC31-catalyzed exchange destroys the attP and attB sites it has addressed, converting them to attR and attL product sites, respectively. While these changes permit the subsequent mounting of new (and most likely remote) targets, they do not enable addressing several RMCE targets in parallel, nor do they permit "serial RMCE", i.e. successive, stepwise modifications at a given genomic locus.
This is different for Flp-RMCE, for which the post-RMCE status of FRTs corresponds to their initial state. This property enables the intentional, repeated mobilization of a target cassette by the addition of a new donor plasmid with compatible architecture. These "multiplexing-RMCE" options open unlimited possibilities for serial and parallel specific modifications of pre-determined RMCE targets.
Applications
Generation of transgenic animals
Generation of transgenic knock-out/-in mice and their genetic modification by RMCE.
Tagging and cassette exchange in DG44 cells in suspension culture
Insertion of a target cassette in a mammalian host cell line (CHO DG44 in suspension culture) and exchange with an ER stress reporter construct via targeted integration (RMCE).
See also
Site-specific recombinase technology
Site-specific recombination
FLP-FRT recombination
Cre recombinase
Cre-Lox recombination
Genetic recombination
Homologous recombination
References
J. Bode, S. Götze, M. Klar, K. Maaß, K. Nehlsen, A. Oumard & S. Winkelmann (2004) BIOForum 34-36 Den Viren nachempfunden: Effiziente Modifikation von Säugerzellen.
External links
https://www.sciencedaily.com/releases/2011/11/111130115822.htm
Applied genetics
Genetics techniques
Molecular genetics | Recombinase-mediated cassette exchange | [
"Chemistry",
"Engineering",
"Biology"
] | 1,922 | [
"Genetics techniques",
"Molecular genetics",
"Genetic engineering",
"Molecular biology"
] |
14,035,044 | https://en.wikipedia.org/wiki/Electron%20cooling | Electron cooling is a method to shrink the emittance (size, divergence, and energy spread) of a charged particle beam without removing particles from the beam. Since the number of particles remains unchanged and the space coordinates and their derivatives (angles) are reduced, this means that the phase space occupied by the stored particles is compressed. It is equivalent to reducing the temperature of the beam. See also stochastic cooling.
The method was invented by Gersh Budker at INP, Novosibirsk, in 1966 for the purpose of increasing luminosity of hadron colliders. It was first tested in 1974 with 68 MeV protons at NAP-M storage ring at INP.
It is used at both operating ion colliders: the Relativistic Heavy Ion Collider and in the Low Energy Ion Ring at CERN.
Basically, electron cooling works as follows:
A beam of dense quasi-monoenergetic electrons is produced and merged with the ion beam to be cooled.
The velocity of the electrons is made equal to the average velocity of the ions.
The ions undergo Coulomb scattering in the electron “gas” and exchange momentum with the electrons. Thermodynamic equilibrium is reached when the two species have the same temperature, i.e. the same average kinetic energy, which requires that the much lighter electrons have much higher velocities (a rough estimate is sketched after this list). Thus, thermal energy is transferred from the ions to the electrons.
The electron beam is finally bent away from the ion beam.
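A rough order-of-magnitude estimate of the point about velocities, under the simplifying assumption that ions (here taken to be protons) and electrons equilibrate to the same temperature:

```python
import math

m_p = 1.6726e-27   # proton mass, kg
m_e = 9.1094e-31   # electron mass, kg

# At equal temperature the mean kinetic energies are equal:
#   (1/2) m_p <v_p^2> = (1/2) m_e <v_e^2>
# so the rms velocity ratio is sqrt(m_p / m_e).
ratio = math.sqrt(m_p / m_e)
print(f"v_e / v_p at equal temperature ~ {ratio:.0f}")   # ~43
```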
See also
Stochastic cooling
Particle beam cooling
References
The Fermilab Electron Cooling Project
Accelerator physics
Soviet inventions
Budker Institute of Nuclear Physics | Electron cooling | [
"Physics"
] | 329 | [
"Accelerator physics",
"Applied and interdisciplinary physics",
"Experimental physics"
] |
14,036,671 | https://en.wikipedia.org/wiki/C7H8O3 | The molecular formula C7H8O3 (molar mass: 140.14 g/mol, exact mass: 140.0473 u) may refer to:
Ethyl maltol
Methoxymethylfurfural (MMF or 5-methoxymethylfuran-2-carbaldehyde)
Molecular formulas | C7H8O3 | [
"Physics",
"Chemistry"
] | 88 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
10,000,631 | https://en.wikipedia.org/wiki/Frequency%20scanning%20interferometry | Frequency scanning interferometry (FSI) is an absolute distance measurement technique, for measuring the distance between a pair of points, along a line-of-sight.
The power of the FSI technique lies in its ability to make many such distance measurements, simultaneously.
For each distance to be measured, a measurement interferometer is built using optical components placed at each end of a line-of-sight.
The optical path of each measurement interferometer is compared to the optical path in a reference interferometer, by scanning the frequency of a laser (connected to all interferometers in the system) and counting fringe cycles produced in the return signals from each interferometer.
The length of each measurement interferometer is given in units of reference length by the ratio of measurement interferometer to reference interferometer fringes.
To give an example: a frequency scan might produce 100 fringe cycles in the measurement interferometer and 50 in the reference interferometer. The measurement interferometer is therefore twice the length of the reference interferometer, to first order (ignoring systematic errors; see below).
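A minimal sketch of that fringe-counting arithmetic, using the fringe counts from the example above and an assumed 1 m reference length (both purely illustrative):

```python
# Frequency scanning interferometry: to first order the measured length is
#   D_meas = D_ref * (N_meas / N_ref)
# where N_meas and N_ref are the fringe counts accumulated in the
# measurement and reference interferometers over the same laser scan.

D_ref = 1.0        # assumed reference interferometer length, metres
N_ref = 50.0       # fringe cycles counted in the reference interferometer
N_meas = 100.0     # fringe cycles counted in the measurement interferometer

D_meas = D_ref * (N_meas / N_ref)
print(f"Measured length: {D_meas:.3f} m")   # 2.000 m, twice the reference
```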
Reference interferometer precautions
A typical reference interferometer is held at a stable length in a controlled environment, to reduce the dominant systematic errors which arise from changes in optical path which occur during the laser frequency scan.
Uses
The great strength of the FSI technique is the ability to simultaneously compare any number of "measurement" interferometers to the same reference length. This has great benefit in a shape measurement system.
An FSI system is being used to monitor shape changes of the semiconductor tracker (SCT) on the ATLAS detector at CERN.
Precision
The measurement sensitivity depends on how rapidly the laser is tuned and how well systematic errors are controlled. Currently precisions of a few nm over a 6 m path are possible in evacuated interferometers. In a system built for the ATLAS experiment a target of 1 micrometre precision over distances of 1 m is expected to be easily achieved.
References
P A Coe et al. 2004 Meas. Sci. Technol. 15 2175-2187
Measurement | Frequency scanning interferometry | [
"Physics",
"Mathematics"
] | 432 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
10,014,557 | https://en.wikipedia.org/wiki/Magnesium%20trisilicate | Magnesium trisilicate is an inorganic compound that is used as a food additive. The additive is frequently used by fast food chains to absorb fatty acids and extract impurities formed while frying edible oils. It has good acid neutralizing properties, but the reaction appears too slow to serve as an effective non-prescription antacid.
Health effects
On March 12, 2007, Chinese health authorities halted the use of magnesium trisilicate at Shaanxi Province KFC franchises, suspecting it to be a possible carcinogen. As a response, China's Ministry of Health conducted tests at six outlets of KFC. The results showed chemicals in the cooking process at KFC restaurants in the country were not harmful. The Ministry of Health said tests showed that using the product to filter cooking oil had no apparent impact on health. Food scares regularly sweep the Chinese media.
References
Food additives
Silicates
Magnesium compounds | Magnesium trisilicate | [
"Chemistry"
] | 184 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
12,401,224 | https://en.wikipedia.org/wiki/Eilenberg%E2%80%93Maclane%20spectrum | In mathematics, specifically algebraic topology, there is a distinguished class of spectra called Eilenberg–Maclane spectra, one spectrum HA for each abelian group A. The construction also generalizes to commutative rings via their underlying abelian groups. These are an important class of spectra because they model ordinary integral cohomology and cohomology with coefficients in an abelian group. They also lift the homological structure of the derived category of abelian groups to the homotopy category of spectra, and they can be used to construct resolutions of spectra, called Adams resolutions, which are used in the construction of the Adams spectral sequence.
Definition
For a fixed abelian group A, let HA denote the collection of Eilenberg–MacLane spaces (HA)_n = K(A, n). The structure maps come from the loop-space property of Eilenberg–Maclane spaces: because there is a homotopy equivalence K(A, n) ≃ ΩK(A, n+1), the loop–suspension adjunction gives maps ΣK(A, n) → K(A, n+1), which are the desired structure maps making this collection a spectrum. This collection is called the Eilenberg–Maclane spectrum of A.
Properties
Using the Eilenberg–Maclane spectrum HA we can define the notion of cohomology of a spectrum X and the homology of a spectrum X. Using the functor [−, HA] we can define cohomology simply as H^k(X; A) = [X, Σ^k HA], the cohomology of X with coefficients in the abelian group A; for A = Z this is ordinary integral cohomology. Note that for a CW complex Y, the cohomology of the suspension spectrum Σ^∞Y recovers the cohomology of the original space Y. The dual notion of homology is defined as H_k(X; A) = π_k(HA ∧ X), which can be interpreted as a "dual" to the usual hom-tensor adjunction in spectra.
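For reference, the standard formulas implicit in the discussion above can be written out as follows; conventions for degrees and suspensions vary slightly between sources, so treat this as one common convention rather than the only one.

```latex
% Level spaces and structure maps of the Eilenberg--MacLane spectrum HA
(HA)_n = K(A, n), \qquad K(A, n) \simeq \Omega K(A, n+1)
  \;\Rightarrow\; \Sigma K(A, n) \longrightarrow K(A, n+1)

% Cohomology and homology of a spectrum X with coefficients in A
H^k(X; A) \cong [X, \Sigma^k HA], \qquad H_k(X; A) \cong \pi_k(HA \wedge X)

% For a CW complex Y, the suspension spectrum recovers ordinary (co)homology
H^k(\Sigma^{\infty} Y; A) \cong H^k(Y; A)
```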
Mod-p spectra and the Steenrod algebra
For the mod-p Eilenberg–Maclane spectrum HF_p there is an isomorphism H^*(HF_p; F_p) = [HF_p, Σ^* HF_p] ≅ A_p, where A_p is the mod-p Steenrod algebra.
Tools for computing Adams resolutions
One of the quintessential tools for computing stable homotopy groups is the Adams spectral sequence, whose construction employs Adams resolutions. These depend on the following properties of Eilenberg–Maclane spectra. We define a generalized Eilenberg–Maclane spectrum as a finite wedge of suspensions of Eilenberg–Maclane spectra, K = Σ^{n_1}HA_1 ∨ ⋯ ∨ Σ^{n_k}HA_k. Since [X, Σ^n HA] ≅ H^n(X; A) for a spectrum X, suspending shifts the degree of the cohomology classes represented. For the rest of the article, K denotes such a generalized Eilenberg–Maclane spectrum built from HA for some fixed abelian group A.
Equivalence of maps to K
Note that a homotopy class of maps f: X → K represents a finite collection of elements of H^*(X; A), one for each wedge summand of K. Conversely, any finite collection of elements of H^*(X; A) is represented by some homotopy class of maps X → K for a suitable generalized Eilenberg–Maclane spectrum K.
Constructing a surjection
For a locally finite collection of elements of H^*(X; A) generating it as an abelian group, the associated map X → K induces a surjection on cohomology, meaning that when these spectra are evaluated on a topological space there is always a surjection H^*(K) → H^*(X) of abelian groups.
Steenrod-module structure on cohomology of spectra
For a spectrum X, smashing with HF_p constructs a spectrum HF_p ∧ X which is homotopy equivalent to a generalized Eilenberg–Maclane spectrum with one wedge summand for each generator of H_*(X; F_p). In particular, it gives H^*(X; F_p) the structure of a module over the Steenrod algebra A_p for a prime p. This is because the equivalence stated before can be read as HF_p ∧ X ≃ ⋁_i Σ^{n_i} HF_p, and the map X → HF_p ∧ X induced by the unit of HF_p induces the A_p-module structure.
See also
Adams spectral sequence
Spectrum (topology)
Homotopy groups of spheres
References
External links
Complex cobordism and stable homotopy groups of spheres
The Adams Spectral Sequence
Algebraic topology
Homological algebra
Spectra (topology) | Eilenberg–Maclane spectrum | [
"Mathematics"
] | 737 | [
"Mathematical structures",
"Algebraic topology",
"Fields of abstract algebra",
"Topology",
"Category theory",
"Homological algebra"
] |
1,721,949 | https://en.wikipedia.org/wiki/Heterojunction%20bipolar%20transistor | A heterojunction bipolar transistor (HBT) is a type of bipolar junction transistor (BJT) that uses different semiconductor materials for the emitter and base regions, creating a heterojunction. The HBT improves on the BJT in that it can handle signals of very high frequencies, up to several hundred GHz. It is commonly used in modern ultrafast circuits, mostly radio frequency (RF) systems, and in applications requiring a high power efficiency, such as RF power amplifiers in cellular phones. The idea of employing a heterojunction is as old as the conventional BJT, dating back to a patent from 1951. Detailed theory of heterojunction bipolar transistor was developed by Herbert Kroemer in 1957.
Materials
The principal difference between the BJT and HBT is in the use of differing semiconductor materials for the emitter-base junction and the base-collector junction, creating a heterojunction. The effect is to limit the injection of holes from the base into the emitter region, since the potential barrier in the valence band is higher than in the conduction band. Unlike BJT technology, this allows a high doping density to be used in the base, reducing the base resistance while maintaining gain. The efficiency of the heterojunction is measured by the Kroemer factor. Kroemer was awarded a Nobel Prize in 2000 for his work in this field at the University of California, Santa Barbara.
Materials used for the substrate include silicon, gallium arsenide, and indium phosphide, while silicon / silicon-germanium alloys, aluminum gallium arsenide / gallium arsenide, and indium phosphide / indium gallium arsenide are used for the epitaxial layers. Wide-bandgap semiconductors such as gallium nitride and indium gallium nitride are especially promising.
In SiGe graded heterostructure transistors, the amount of germanium in the base is graded, making the bandgap narrower at the collector than at the emitter. That tapering of the bandgap leads to a field-assisted transport in the base, which speeds transport through the base and increases frequency response.
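A back-of-the-envelope illustration of the field-assisted transport produced by grading: an assumed bandgap narrowing across an assumed base width gives a quasi-electric drift field. The numbers below are representative guesses for a SiGe HBT, not data for any particular device.

```python
# Quasi-electric field in a graded-base HBT (illustrative numbers only):
# grading the Ge fraction narrows the bandgap by delta_Eg across the base,
# which acts on minority carriers like a built-in drift field
#   E ~ delta_Eg / (q * W_B)
# Expressing delta_Eg in eV cancels the elementary charge q, so E is
# simply delta_Eg[eV] / W_B in volts per metre.

delta_Eg_eV = 0.08     # assumed bandgap narrowing across the base, eV
W_B = 50e-9            # assumed base width, m

E = delta_Eg_eV / W_B  # V/m
print(f"Quasi-electric drift field ~ {E/1e5:.1f} kV/cm")   # ~16 kV/cm
```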
Fabrication
Due to the need to manufacture HBT devices with extremely high-doped thin base layers, molecular beam epitaxy is principally employed. In addition to base, emitter and collector layers, highly doped layers are deposited on either side of collector and emitter to facilitate an ohmic contact, which are placed on the contact layers after exposure by photolithography and etching. The contact layer underneath the collector, named subcollector, is an active part of the transistor.
Other techniques are used depending on the material system. IBM and others use ultra-high vacuum chemical vapor deposition (UHVCVD) for SiGe; other techniques used include MOVPE for III-V systems.
Normally the epitaxial layers are lattice matched (which restricts the choice of bandgap etc.). If they are near-lattice-matched the device is pseudomorphic, and if the layers are unmatched (often separated by a thin buffer layer) it is metamorphic.
Limits
A pseudomorphic heterojunction bipolar transistor developed at the University of Illinois at Urbana-Champaign, built from indium phosphide and indium gallium arsenide and designed with compositionally graded collector, base and emitter, was demonstrated to have a cutoff frequency of 710 GHz.
Besides being record breakers in terms of speed, HBTs made of InP/InGaAs are ideal for monolithic optoelectronic integrated circuits. A PIN-type photo detector is formed by the base-collector-subcollector layers. The bandgap of InGaAs works well for detecting 1550 nm-wavelength infrared laser signals used in optical communication systems. Biasing the HBT to obtain an active device, a photo transistor with high internal gain is obtained. Among other HBT applications are mixed signal circuits such as analog-to-digital and digital-to-analog converters.
See also
High-electron-mobility transistor (HEMT)
MESFET
References
External links
HBT Optoelectronic Circuits developed in the Technion (15Mb, 230p)
New Material Structure Produces World's Fastest Transistor 604 GHz Early 2005
Microwave technology
Terahertz technology
Transistor types | Heterojunction bipolar transistor | [
"Physics"
] | 936 | [
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Terahertz technology"
] |
1,722,070 | https://en.wikipedia.org/wiki/Iron%20cycle | The iron cycle (Fe) is the biogeochemical cycle of iron through the atmosphere, hydrosphere, biosphere and lithosphere. While Fe is highly abundant in the Earth's crust, it is less common in oxygenated surface waters. Iron is a key micronutrient in primary productivity, and a limiting nutrient in the Southern ocean, eastern equatorial Pacific, and the subarctic Pacific referred to as High-Nutrient, Low-Chlorophyll (HNLC) regions of the ocean.
Iron exists in a range of oxidation states from -2 to +7; however, on Earth it is predominantly in its +2 or +3 redox state and is a primary redox-active metal on Earth. The cycling of iron between its +2 and +3 oxidation states is referred to as the iron cycle. This process can be entirely abiotic or facilitated by microorganisms, especially iron-oxidizing bacteria. The abiotic processes include the rusting of iron-bearing metals, where Fe2+ is abiotically oxidized to Fe3+ in the presence of oxygen, and the reduction of Fe3+ to Fe2+ by iron-sulfide minerals. The biological cycling of Fe2+ is done by iron oxidizing and reducing microbes.
Iron is an essential micronutrient for almost every life form. It is a key component of hemoglobin, important to nitrogen fixation as part of the Nitrogenase enzyme family, and as part of the iron-sulfur core of ferredoxin it facilitates electron transport in chloroplasts, eukaryotic mitochondria, and bacteria. Due to the high reactivity of Fe2+ with oxygen and low solubility of Fe3+, iron is a limiting nutrient in most regions of the world.
Ancient earth
On the early Earth, when atmospheric oxygen levels were 0.001% of those present today, dissolved Fe2+ was thought to have been a lot more abundant in the oceans, and thus more bioavailable to microbial life. Iron sulfide may have provided the energy and surfaces for the first organisms. At this time, before the onset of oxygenic photosynthesis, primary production may have been dominated by photo-ferrotrophs, which would obtain energy from sunlight, and use the electrons from Fe2+ to fix carbon.
During the Great Oxidation Event, 2.3-2.5 billion years ago, dissolved iron was oxidized by oxygen produced by cyanobacteria to form iron oxides. The iron oxides were denser than water and fell to the ocean floor forming banded iron formations (BIF). Over time, rising oxygen levels removed increasing amounts of iron from the ocean. BIFs have been a key source of iron ore in modern times.
Terrestrial ecosystems
The iron cycle is an important component of the terrestrial ecosystems. The ferrous form of iron, Fe2+, is dominant in the Earth's mantle, core, or deep crust. The ferric form, Fe3+, is more stable in the presence of oxygen gas. Dust is a key component in the Earth's iron cycle. Chemical and biological weathering break down iron-bearing minerals, releasing the nutrient into the atmosphere. Changes in hydrological cycle and vegetative cover impact these patterns and have a large impact on global dust production, with dust deposition estimates ranging between 1000 and 2000 Tg/year. Aeolian dust is a critical part of the iron cycle by transporting iron particulates from the Earth's land via the atmosphere to the ocean.
Volcanic eruptions are also a key contributor to the terrestrial iron cycle, releasing iron-rich dust into the atmosphere in either a large burst or in smaller spurts over time. The atmospheric transport of iron-rich dust can impact the ocean concentrations.
Oceanic ecosystem
The ocean is a critical component of the Earth's climate system, and the iron cycle plays a key role in ocean primary productivity and marine ecosystem function. Iron limitation is known to limit the efficiency of the biological carbon pump. The largest supply of iron to the oceans is from rivers, where it is carried as suspended sediment particles. Coastal waters receive inputs of iron from rivers and anoxic sediments, while other major sources of iron to the ocean include glacial particulates, atmospheric dust transport, hydrothermal vents, and volcanic ash. Iron supply is an important factor affecting the growth of phytoplankton, the base of the marine food web. Offshore regions rely on atmospheric dust deposition and upwelling; in these regions, bacteria also compete with phytoplankton for uptake of iron. In HNLC regions, iron limits the productivity of phytoplankton.
Most commonly, iron was available as an inorganic source to phytoplankton; however, organic forms of iron can also be used by specific diatoms which use a process of surface reductase mechanism. Uptake of iron by phytoplankton leads to lowest iron concentrations in surface seawater. Remineralization occurs when the sinking phytoplankton are degraded by zooplankton and bacteria. Upwelling recycles iron and causes higher deep water iron concentrations. On average there is 0.07±0.04 nmol Fe kg−1 at the surface (<200 m) and 0.76±0.25 nmol Fe kg−1 at depth (>500 m). Therefore, upwelling zones contain more iron than other areas of the surface oceans. Soluble iron in ferrous form is bioavailable for utilization which commonly comes from aeolian resources.
Iron primarily is present in particulate phases as ferric iron, and the dissolved iron fraction is removed out of the water column by coagulation. For this reason, the dissolved iron pool turns over rapidly, in around 100 years.
Interactions with other elemental cycles
The iron cycle interacts significantly with the sulfur, nitrogen, and phosphorus cycles. Soluble Fe(II) can act as an electron donor, reducing oxidized organic and inorganic electron acceptors, including O2 and NO3−, and becoming oxidized to Fe(III). The oxidized form of iron can then be the electron acceptor for reduced sulfur, H2, and organic carbon compounds. This returns the iron to the reduced Fe(II) state, completing the cycle.
The transition of iron between Fe(II) and Fe(III) in aquatic systems interacts with the freshwater phosphorus cycle. With oxygen in the water, Fe(II) gets oxidized to Fe(III), either abiotically or by microbes via lithotrophic oxidation. Fe(III) can form iron hydroxides, which bind tightly to phosphorus, removing it from the bioavailable phosphorus pool, limiting primary productivity. In anoxic conditions, Fe(III) can be reduced, used by microbes to be the final electron acceptor from either organic carbon or H2. This releases the phosphorus back into the water for biological use.
The iron and sulfur cycle can interact at several points. Purple sulfur bacteria and green sulfur bacteria can use Fe(II) as an electron donor during anoxic photosynthesis. Sulfate reducing bacteria in anoxic environments can reduce sulfate to sulfide, which then binds to Fe(II) to create iron sulfide, a solid mineral that precipitates out of water and removes the iron and sulfur. The iron, phosphate, and sulfur cycles can all interact with each other. Sulfide can reduce Fe(III) from iron that is already bound to phosphate when there are no more metal ions available, which releases the phosphate and creates iron sulfide.
Iron plays an important role in the nitrogen cycle, aside from its role as part of the enzymes involved in nitrogen fixation. In anoxic conditions, Fe(II) can donate an electron that is accepted by NO3− which is oxidized to several different forms of nitrogen compounds, NO2−, N2O, N2, and NH4+, while Fe(II) is reduced to Fe(III).
Anthropogenic influences
Human impact on the iron cycle in the ocean is due to dust concentrations increasing since the beginning of the industrial era. Today, there is approximately double the amount of soluble iron in the oceans compared with pre-industrial times, owing to anthropogenic pollutants and soluble iron from combustion sources. Changes in human land-use activities and climate have augmented dust fluxes, which increases the amount of aeolian dust reaching open regions of the ocean. Other anthropogenic sources of iron are due to combustion. The highest iron combustion rates occur in East Asia, which contributes 20–100% of ocean iron depositions around the globe.
Humans have also altered the nitrogen cycle through fossil fuel combustion and large-scale agriculture. The combined increase in iron and nitrogen inputs raises marine nitrogen fixation in the subtropical North and South Pacific Ocean. In the subtropics, tropics and HNLC regions, increased inputs of iron may lead to increased CO2 uptake, impacting the global carbon cycle.
See also
Iron fertilization
Iron-oxidizing bacteria
References
Further reading
Biogeochemical cycle
Geological processes
Iron | Iron cycle | [
"Chemistry"
] | 1,919 | [
"Biogeochemical cycle",
"Biogeochemistry"
] |
1,722,334 | https://en.wikipedia.org/wiki/Solvent%20extraction%20and%20electrowinning | Solvent extraction and electrowinning (SX/EW) is a two-stage hydrometallurgical process that first extracts and upgrades copper ions from low-grade leach solutions into a solvent containing a chemical that selectively reacts with and binds the copper. The copper is then stripped from the solvent into a strong aqueous acid solution, from which pure copper is deposited onto cathodes by an electrolytic procedure (electrowinning).
SX/EW processing is best known for its use by the copper industry, where it accounts for 20% of worldwide production, but the technology is also successfully applied to a wide range of other metals including cobalt, nickel, zinc and uranium.
References
Metallurgical processes | Solvent extraction and electrowinning | [
"Chemistry",
"Materials_science"
] | 148 | [
"Metallurgical processes",
"Metallurgy"
] |
1,722,516 | https://en.wikipedia.org/wiki/Electrowinning | Electrowinning, also called electroextraction, is the electrodeposition of metals from their ores that have been put in solution via a process commonly referred to as leaching. Electrorefining uses a similar process to remove impurities from a metal. Both processes use electroplating on a large scale and are important techniques for the economical and straightforward purification of non-ferrous metals. The resulting metals are said to be electrowon.
In electrowinning, an electrical current is passed from an inert anode through a leach solution containing the dissolved metal ions so that the metal is recovered as it is reduced and deposited in an electroplating process onto the cathode. In electrorefining, the anode consists of the impure metal (e.g., copper) to be refined. The impure metallic anode is oxidized and the metal dissolves into solution. The metal ions migrate through the electrolyte towards the cathode where the pure metal is deposited. Insoluble solid impurities sedimenting below the anode often contain valuable rare elements such as gold, silver and selenium.
History
Electrowinning is the oldest industrial electrolytic process. The English chemist Humphry Davy obtained sodium metal in elemental form for the first time in 1807 by the electrolysis of molten sodium hydroxide.
Electrorefining of copper was first demonstrated experimentally by Maximilian, Duke of Leuchtenberg in 1847.
James Elkington patented the commercial process in 1865 and opened the first successful plant in Pembrey, Wales in 1870. The first commercial plant in the United States was the Balbach and Sons Refining and Smelting Company in Newark, New Jersey in 1883.
Applications
Nickel and copper are often obtained by electrowinning. These metals have some noble character, which enables their soluble cationic forms to be reduced to their pure metallic form at mild applied potentials applied between the cathode and the anode.
Process
Most metal ores contain metals of interest (e.g. gold, copper, nickel) in some oxidized states and thus the goal of most metallurgical operations is to chemically reduce them to their pure metallic form. The question is how to convert highly impure metal ores into purified bulk metals. A vast array of operations have been developed to accomplish those tasks, one of which is electrowinning. In an ideal case, ore is extracted into a solution which is then subjected to electrolysis. The metal is deposited on the cathode. In a practical sense, this idealized process is complicated by some or all of the following considerations: the metal content is low (a few percent is typical), other metals deposit competitively with the desired one, the ore is not easily or efficiently dissolved. For these reasons, electrowinning is usually only used on purified solutions of a desired metal, e.g. cyanide-extracts of gold ores.
Because metal deposition rates are related to available surface area, maintaining properly working cathodes is important. Two cathode types exist, flat-plate and reticulated cathodes, each with its own advantages and disadvantages. Flat-plate cathodes can be cleaned and reused, and plated metals recovered by either mechanically scraping the cathode (or, if the electrolyzed metal has a lower melting point than the cathode, heating the cathode to the electrolyzed metal's melting point causing the electrolyzed metal to liquify and separate from the cathode, which remains solid). Reticulated cathodes have a much higher deposition rate compared to flat-plate cathodes due to their greater surface area. However, reticulated cathodes are not reusable and must be sent off for recycling. Alternatively, starter cathodes of pre-refined metals can be used, which become an integral part of the finished metal ready for rolling or further processing.
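For a sense of scale, deposition in electrowinning follows Faraday's laws of electrolysis; the sketch below estimates the copper plated in one day at an assumed cell current. The current and duration are illustrative assumptions, not plant data.

```python
# Faraday's law of electrolysis: mass deposited m = (I * t * M) / (n * F)

F = 96485.0        # Faraday constant, C/mol
M_Cu = 63.55       # molar mass of copper, g/mol
n = 2              # electrons transferred per Cu2+ ion reduced at the cathode

I = 500.0          # assumed cell current, A (illustrative)
t = 24 * 3600.0    # one day, in seconds

mass_g = (I * t * M_Cu) / (n * F)
print(f"Copper deposited in 24 h at {I:.0f} A: {mass_g/1000:.1f} kg")   # ~14.2 kg
```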
See also
Electrochemical engineering
References
External links
High Throughput Electrorefining of Uranium in Pyro-reprocessing
Aluminum Electrowinning and Electrorefining with Ionic Liquids
How Hydrometallurgy and the SX/EW Process Made Copper the "Green" Metal
Copper Electrowinning
Chemical processes
Electrolysis
Metallurgical processes
Separation processes | Electrowinning | [
"Chemistry",
"Materials_science"
] | 889 | [
"Separation processes",
"Metallurgical processes",
"Metallurgy",
"Chemical processes",
"Electrochemistry",
"nan",
"Electrolysis",
"Chemical process engineering"
] |
1,722,616 | https://en.wikipedia.org/wiki/Physical%20object | In natural language and physical science, a physical object or material object (or simply an object or body) is a contiguous collection of matter, within a defined boundary (or surface), that exists in space and time. It is usually contrasted with abstract objects and mental objects.
Also in common usage, an object is not constrained to consist of the same collection of matter. Atoms or parts of an object may change over time. An object is usually meant to be defined by the simplest representation of the boundary consistent with the observations. However the laws of physics only apply directly to objects that consist of the same collection of matter.
In physics, an object is an identifiable collection of matter, which may be constrained by an identifiable boundary, and may move as a unit by translation or rotation, in 3-dimensional space.
Each object has a unique identity, independent of any other properties. Two objects may be identical, in all properties except position, but still remain distinguishable. In most cases the boundaries of two objects may not overlap at any point in time. The property of identity allows objects to be counted.
Examples of models of physical bodies include, but are not limited to, a particle or several interacting smaller bodies (particulate or otherwise). Discrete objects are in contrast to continuous media.
The common conception of physical objects includes that they have extension in the physical world, although there do exist theories of quantum physics and cosmology which arguably challenge this. In modern physics, "extension" is understood in terms of the spacetime: roughly speaking, it means that for a given moment of time the body has some location in the space (although not necessarily amounting to the abstraction of a point in space and time). A physical body as a whole is assumed to have such quantitative properties as mass, momentum, electric charge, other conserved quantities, and possibly other quantities.
An object with known composition and described in an adequate physical theory is an example of physical system.
In common usage
An object is known by the application of senses. The properties of an object are inferred by learning and reasoning based on the information perceived. Abstractly, an object is a construction of our mind consistent with the information provided by our senses, using Occam's razor.
In common usage an object is the material inside the boundary of an object, in three-dimensional space. The boundary of an object is a contiguous surface which may be used to determine what is inside, and what is outside an object. An object is a single piece of material, whose extent is determined by a description based on the properties of the material. An imaginary sphere of granite within a larger block of granite would not be considered an identifiable object, in common usage. A fossilized skull encased in a rock may be considered an object because it is possible to determine the extent of the skull based on the properties of the material.
For a rigid body, the boundary of an object may change over time by continuous translation and rotation. For a deformable body the boundary may also be continuously deformed over time in other ways.
An object has an identity. In general two objects with identical properties, other than position at an instance in time, may be distinguished as two objects and may not occupy the same space at the same time (excluding component objects). An object's identity may be tracked using the continuity of the change in its boundary over time. The identity of objects allows objects to be arranged in sets and counted.
The material in an object may change over time. For example, a rock may wear away or have pieces broken off it. The object will be regarded as the same object after the addition or removal of material, if the system may be more simply described with the continued existence of the object, than in any other way. The addition or removal of material may discontinuously change the boundary of the object. The continuation of the object's identity is then based on the description of the system by continued identity being simpler than without continued identity.
For example, a particular car might have all its wheels changed, and still be regarded as the same car.
The identity of an object may not split. If an object is broken into two pieces at most one of the pieces has the same identity. An object's identity may also be destroyed if the simplest description of the system at a point in time changes from identifying the object to not identifying it. Also an object's identity is created at the first point in time that the simplest model of the system consistent with perception identifies it.
An object may be composed of components. A component is an object completely within the boundary of a containing object.
A living thing may be an object, and is distinguished from non-living things by the designation of the latter as inanimate objects. Inanimate objects generally lack the capacity or desire to undertake actions, although humans in some cultures may tend to attribute such characteristics to non-living things.
In physics
Classical mechanics
In classical mechanics a physical body is a collection of matter having properties including mass, velocity, momentum and energy. The matter exists in a volume of three-dimensional space. This space is its extension.
Interactions between objects are partly described by orientation and external shape.
In continuum mechanics an object may be described as a collection of sub objects, down to an infinitesimal division, which interact with each other by forces that may be described internally by pressure and mechanical stress.
Quantum mechanics
In quantum mechanics an object is a particle or collection of particles. Until measured, a particle does not have a definite physical position; its position is instead described by a probability distribution for finding the particle at a particular location. There is a limit to the accuracy with which the position and velocity may be measured. A particle or collection of particles is described by a quantum state.
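The limit referred to is the Heisenberg uncertainty relation. As a standard textbook statement (included here as an illustration, not a claim drawn from this article's sources), the standard deviations of position and momentum of any quantum state satisfy σ_x σ_p ≥ ħ/2, so narrowing the probability distribution of a particle's position necessarily broadens the distribution of its momentum, and hence of its velocity.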
These ideas vary from the common usage understanding of what an object is.
String theory
In particle physics, there is a debate as to whether some elementary particles are not bodies, but are points without extension in physical space within spacetime, or are always extended in at least one dimension of space as in string theory or M theory.
In psychology
In some branches of psychology, depending on school of thought, a physical object has physical properties, as compared to mental objects. In (reductionistic) behaviorism, objects and their properties are the (only) meaningful objects of study. In modern behavioral psychotherapy the physical body is still only a means for goal-oriented behavior modification, whereas in Body Psychotherapy it is no longer only a means, as its felt sense is a goal in its own right. In cognitive psychology, physical bodies as they occur in biology are studied in order to understand the mind, which may not be a physical body, as in functionalist schools of thought.
In philosophy
A physical body is an enduring object that exists throughout a particular trajectory of space and orientation over a particular duration of time, and which is located in the world of physical space (i.e., as studied by physics). This contrasts with abstract objects such as mathematical objects which do not exist at any particular time or place.
Examples are a cloud, a human body, a banana, a billiard ball, a table, or a proton. This is contrasted with abstract objects such as mental objects, which exist in the mental world, and mathematical objects. Other examples that are not physical bodies are emotions, the concept of "justice", a feeling of hatred, or the number "3". In some philosophies, like the idealism of George Berkeley, a physical body is a mental object, but still has extension in the space of a visual field.
See also
Abstract object theory
Astronomical object
Deformable body
Free body
Human body
Non-physical entity
Physical model
Rigid body
Ship of Theseus, a thought experiment about an object's identity over time
References
External links
Concepts in metaphysics
Concepts in physics
Mechanics
Ontology | Physical object | [
"Physics",
"Engineering"
] | 1,591 | [
"Mechanics",
"Physical objects",
"nan",
"Mechanical engineering",
"Matter"
] |
1,722,939 | https://en.wikipedia.org/wiki/Continental%20shelf%20pump | In oceanic biogeochemistry, the continental shelf pump is proposed to operate in the shallow waters of the continental shelves, acting as a mechanism to transport carbon (as either dissolved or particulate material) from surface waters to the interior of the adjacent deep ocean.
Overview
Originally formulated by Tsunogai et al. (1999), the pump is believed to occur where the solubility and biological pumps interact with a local hydrography that feeds dense water from the shelf floor into sub-surface (at least subthermocline) waters in the neighbouring deep ocean. Tsunogai et al.'s (1999) original work focused on the East China Sea, and the observation that, averaged over the year, its surface waters represented a sink for carbon dioxide. This observation was combined with others of the distribution of dissolved carbonate and alkalinity and explained as follows:
the shallowness of the continental shelf restricts convection of cooling water
as a consequence, cooling is greater for continental shelf waters than for neighbouring open ocean waters
this leads to the production of relatively cool and dense water on the shelf
the cooler waters promote the solubility pump and lead to an increased storage of dissolved inorganic carbon
this extra carbon storage is augmented by the increased biological production characteristic of shelves
the dense, carbon-rich shelf waters sink to the shelf floor and enter the sub-surface layer of the open ocean via isopycnal mixing
Significance
Based on their measurements of the CO2 flux over the East China Sea (35 g C m−2 y−1), Tsunogai et al. (1999) estimated that the continental shelf pump could be responsible for an air-to-sea flux of approximately 1 Gt C y−1 over the world's shelf areas. Given that observational and modelling estimates suggest that the ocean is currently responsible for the uptake of approximately 2 Gt C y−1 of anthropogenic CO2 emissions, and that these estimates are poor for the shelf regions, the continental shelf pump may play an important role in the ocean's carbon cycle.
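As a rough consistency check on this scaling (the global shelf area used below, about 2.7 × 10^13 m², is an assumed round figure rather than a value taken from Tsunogai et al.): 35 g C m−2 y−1 × 2.7 × 10^13 m² ≈ 9 × 10^14 g C y−1, and since 1 Gt = 10^15 g this is approximately 1 Gt C y−1, matching the order of magnitude quoted above.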
One caveat to this calculation is that the original work was concerned with the hydrography of the East China Sea, where cooling plays the dominant role in the formation of dense shelf water, and that this mechanism may not apply in other regions. However, it has been suggested that other processes may drive the pump under different climatic conditions. For instance, in polar regions, the formation of sea-ice results in the extrusion of salt that may increase seawater density. Similarly, in tropical regions, evaporation may increase local salinity and seawater density.
The strong sink of CO2 at temperate latitudes reported by Tsunogai et al. (1999) was later confirmed in the Bay of Biscay, the Middle Atlantic Bight and the North Sea. On the other hand, the sub-tropical South Atlantic Bight has been reported to act as a source of CO2 to the atmosphere.
Recently, work has compiled and scaled available data on CO2 fluxes in coastal environments, and shown that globally marginal seas act as a significant CO2 sink (-1.6 mol C m−2 y−1; -0.45 Gt C y−1) in agreement with previous estimates. However, the global sink of CO2 in marginal seas could be almost fully compensated by the emission of CO2 (+11.1 mol C m−2 y−1; +0.40 Gt C y−1) from the ensemble of near-shore coastal ecosystems, mostly related to the emission of CO2 from estuaries (0.34 Gt C y−1).
An interesting application of this work has been examining the impact of sea-level rise across the last deglacial transition on the global carbon cycle. During the last glacial maximum, sea level was considerably lower than today. As sea level rose, the surface area of the shelf seas grew, and in consequence the strength of the shelf sea pump should have increased.
References
See also
Biological pump
Ocean acidification
Solubility pump
Aquatic ecology
Biological oceanography
Carbon
Chemical oceanography
Geochemistry
Biogeochemistry
Continental shelves
Oceanographical terminology | Continental shelf pump | [
"Chemistry",
"Biology",
"Environmental_science"
] | 836 | [
"Environmental chemistry",
"Chemical oceanography",
"Biogeochemistry",
"Ecosystems",
"nan",
"Aquatic ecology"
] |
1,723,127 | https://en.wikipedia.org/wiki/Caniapiscau%20Reservoir | The Caniapiscau Reservoir is a reservoir on the upper Caniapiscau River in the Côte-Nord administrative region of the Canadian province of Quebec. It is the largest body of water in Quebec and the second largest reservoir in Canada.
The Caniapiscau Reservoir, formed by two dams and forty-three dikes, is the largest reservoir in surface area of the James Bay Project. As headpond, it feeds the power plants of the La Grande complex in the winter and provides up to 35% of their production. Its total catchment area is about .
The reservoir was named after Lake Caniapiscau that was flooded during the formation of the reservoir. The name is an adaptation of the Cree or Innu toponym kâ-neyâpiskâw, which means "rocky point". Albert Peter Low had noted in 1895 that "a high rocky headland jutts into the lake." He probably referred to the northwest facing peninsula that gives the reservoir the shape of an arc as we currently know it.
The Caniapiscau Reservoir is accessible by bush plane and, since 1981, by a gravel road from James Bay (the Trans-Taiga Road). At the very end of this road, near the Duplanter spillway, is the former worksite of the Société d'énergie de la Baie-James, named Caniapiscau. There is no permanent human habitation at the reservoir, but it is used by outfitters for seasonal hunting and fishing expeditions and by some Cree for subsistence fishing and trapping. The area is remote, and very few services, such as fuel, are available nearby.
History
The natural lakes of the region were formed about nine thousand years ago as glaciers left Quebec after having scoured the Canadian Shield for ninety thousand years. The prototype of these lakes was an ice dam lake that drained southwards into the Gulf of Saint Lawrence at a time when areas further north (Nunavik) were still glaciated. As post-glacial rebound elevated the southern part of the Canadian Shield more rapidly than the north, the region began to drain northward into the Caniapiscau River, a tributary of the Koksoak River, and ultimately into Ungava Bay.
Prior to impoundment, Lake Caniapiscau covered about and was frequented by hunters and fur traders in the 19th century. In 1834, the Hudson's Bay Company opened an outpost there to link its facilities in the James Bay region with those of Ungava Bay, but closed the Kaniapiskau Post in 1870.
In 1976, Société d'énergie de la Baie James, a subsidiary of Hydro-Québec, began construction on the Caniapiscau Reservoir, designed to feed the hydro-electric generating stations of the James Bay Project. Filling the reservoir began on October 25, 1981, and over the next three years it flooded numerous lakes such as Lakes Caniapiscau, Delorme, Brisay, Tournon, and Vermouille. It now fills a depression in the highest part of the Laurentian Plateau of the Canadian Shield, covering , or about four times the size of the natural lakes prior to impoundment.
Since August 1985, the Caniapiscau River has been partially diverted westward into the Laforge River of the La Grande River watershed, which flows west to James Bay.
Many new islands were created as a result of the lake's impoundment, and in 1997 Quebec's Commission de toponymie published a map naming those islands for significant works of Québécois literature. The names of the islands attracted controversy not only because they predominantly used French-language works, but also because Cree and Inuit First Nations leaders claimed that the sites already had native names prior to becoming islands, which were ignored and overwritten.
Flora
The Caniapiscau Reservoir is in the zone of discontinuous permafrost. The area surrounding the reservoir is vegetated entirely with taiga, or boreal forest, characterized by widely spaced Black Spruce with a thick underlayer of yellow-grey lichen and interspersed with muskeg and bogs. In the more moist areas, some closed coniferous forest stands may appear. On the more exposed land, a forest-tundra transition zone occurs where the woodland is replaced by lichen dominated tundra.
See also
List of lakes of Quebec
Hydro-Québec
James Bay Project
List of Quebec rivers
References
Bibliography
External links
La Grande hydroelectric complex
HAYEUR, Gaëtan. 2001. Summary of Knowledge Acquired in Northern Environments from 1970 to 2000. Montreal: Hydro-Québec
World Lakes Database
Explo-Sylva
Air Saguenay Base on Lac Pau
Lakes of Nord-du-Québec
Lakes of Côte-Nord
Reservoirs in Quebec
James Bay Project
"Engineering"
] | 993 | [
"James Bay Project",
"Macro-engineering"
] |
1,723,512 | https://en.wikipedia.org/wiki/List%20of%20fusion%20experiments | Experiments directed toward developing fusion power are invariably done with dedicated machines which can be classified according to the principles they use to confine the plasma fuel and keep it hot.
The major division is between magnetic confinement and inertial confinement. In magnetic confinement, the tendency of the hot plasma to expand is counteracted by the Lorentz force between currents in the plasma and magnetic fields produced by external coils. The particle densities tend to be in the range of to and the linear dimensions in the range of . The particle and energy confinement times may range from under a millisecond to over a second, but the configuration itself is often maintained through input of particles, energy, and current for times that are hundreds or thousands of times longer. Some concepts are capable of maintaining a plasma indefinitely.
In contrast, with inertial confinement, there is nothing to counteract the expansion of the plasma. The confinement time is simply the time it takes the plasma pressure to overcome the inertia of the particles, hence the name. The densities tend to be in the range of to and the plasma radius in the range of 1 to 100 micrometers. These conditions are obtained by irradiating a millimeter-sized solid pellet with a nanosecond laser or ion pulse. The outer layer of the pellet is ablated, providing a reaction force that compresses the central 10% of the fuel by a factor of 10 or 20 to 10³ or 10⁴ times solid density. These microplasmas disperse in a time measured in nanoseconds. For a fusion power reactor, a repetition rate of several per second will be needed.
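A rough way to see why such different regimes can both be viable is the Lawson-type product of density and confinement time, n·τ. Using representative round numbers (illustrative assumptions, not measurements from any particular device): a magnetic-confinement plasma with n ≈ 10^20 m−3 held for τ ≈ 1 s gives n·τ ≈ 10^20 s·m−3, while an inertial-confinement plasma with n ≈ 10^31 m−3 confined for τ ≈ 10^−11 s gives the same n·τ ≈ 10^20 s·m−3. Both routes therefore aim at broadly the same figure of merit, of order 10^20 s·m−3 for deuterium–tritium fuel near its optimum temperature, but reach it by opposite trade-offs between density and confinement time.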
Magnetic confinement
Within the field of magnetic confinement experiments, there is a basic division between toroidal and open magnetic field topologies. Generally speaking, it is easier to contain a plasma in the direction perpendicular to the field than parallel to it. Parallel confinement can be solved either by bending the field lines back on themselves into circles or, more commonly, toroidal surfaces, or by constricting the bundle of field lines at both ends, which causes some of the particles to be reflected by the mirror effect. The toroidal geometries can be further subdivided according to whether the machine itself has a toroidal geometry, i.e., a solid core through the center of the plasma. The alternative is to dispense with a solid core and rely on currents in the plasma to produce the toroidal field.
Mirror machines have advantages in a simpler geometry and a better potential for direct conversion of particle energy to electricity. They generally require higher magnetic fields than toroidal machines, but the biggest problem has turned out to be confinement. For good confinement there must be more particles moving perpendicular to the field than there are moving parallel to the field. Such a non-Maxwellian velocity distribution is, however, very difficult to maintain and energetically costly.
The mirrors' advantage of simple machine geometry is maintained in machines which produce compact toroids, but there are potential disadvantages for stability in not having a central conductor and there is generally less possibility to control (and thereby optimize) the magnetic geometry. Compact toroid concepts are generally less well developed than those of toroidal machines. While this does not necessarily mean that they cannot work better than mainstream concepts, the uncertainty involved is much greater.
Somewhat in a class by itself is the Z-pinch, which has circular field lines. This was one of the first concepts tried, but it did not prove very successful. Furthermore, there was never a convincing concept for turning the pulsed machine requiring electrodes into a practical reactor.
The dense plasma focus is a controversial and "non-mainstream" device that relies on currents in the plasma to produce a toroid. It is a pulsed device that depends on a plasma that is not in equilibrium and has the potential for direct conversion of particle energy to electricity. Experiments are ongoing to test relatively new theories to determine if the device has a future.
Toroidal machine
Toroidal machines can be axially symmetric, like the tokamak and the reversed field pinch (RFP), or asymmetric, like the stellarator. The additional degree of freedom gained by giving up toroidal symmetry might ultimately be usable to produce better confinement, but the cost is complexity in the engineering, the theory, and the experimental diagnostics. Stellarators typically have a periodicity, e.g. a fivefold rotational symmetry. The RFP, despite some theoretical advantages such as a low magnetic field at the coils, has not proven very successful.
Tokamak
Stellarator
Magnetic mirror
Tabletop/Toytop, Lawrence Livermore National Laboratory, Livermore CA.
DCX/DCX-2, Oak Ridge National Laboratory
OGRA (Odin GRAm neitronov v sutki, one gram of neutrons per day), Akademgorodok, Russia. A 20-meter-long pipe
Baseball I/Baseball II Lawrence Livermore National Laboratory, Livermore CA.
2X/2XIII/2XIII-B, Lawrence Livermore National Laboratory, Livermore CA.
TMX, TMX-U Lawrence Livermore National Laboratory, Livermore CA.
MFTF Lawrence Livermore National Laboratory, Livermore CA.
Gas Dynamic Trap at Budker Institute of Nuclear Physics, Akademgorodok, Russia.
Toroidal Z-pinch
Perhapsatron (1953, USA)
ZETA (Zero Energy Thermonuclear Assembly) (1957, United Kingdom)
Reversed field pinch (RFP)
ETA-BETA II in Padua, Italy (1979–1989)
RFX (Reversed-Field eXperiment), Consorzio RFX, Padova, Italy
MST (Madison Symmetric Torus), University of Wisconsin–Madison, United States
T2R, Royal Institute of Technology, Stockholm, Sweden
TPE-RX, AIST, Tsukuba, Japan
KTX (Keda Torus eXperiment) in China (since 2015)
Spheromak
Sustained Spheromak Physics Experiment
Field-reversed configuration (FRC)
C-2 Tri Alpha Energy
C-2U Tri Alpha Energy
C-2W TAE Technologies
LSX University of Washington
IPA University of Washington
HF University of Washington
IPA- HF University of Washington
Other toroidal machines
TMP (Tor s Magnitnym Polem, torus with magnetic field): A porcelain torus with major radius , minor radius , toroidal field of and plasma current , predecessor to the first tokamak (1955, USSR)
Open field lines
Plasma pinch
Trisops – 2 facing theta-pinch guns
FF-2B, Lawrenceville Plasma Physics, United States
Levitated dipole
Levitated Dipole Experiment (LDX), MIT/Columbia University, United States
Inertial confinement
Laser-driven
Z-pinch
Z Pulsed Power Facility
ZEBRA device at the University of Nevada's Nevada Terawatt Facility
Saturn accelerator at Sandia National Laboratory
MAGPIE at Imperial College London
COBRA at Cornell University
PULSOTRON
Z-FFR (Z(-pinch)-Fission-Fusion Reactor), a nuclear fusion–fission hybrid machine to be built in Chengdu, China, by 2025 and intended to generate power as early as 2028
Inertial electrostatic confinement
Fusor
List of fusor examples
Polywell
Magnetized target fusion
FRX-L
FRCHX
General Fusion – under development
LINUS project
References
See also
List of nuclear reactors
Fusion power
Magnetic confinement fusion devices | List of fusion experiments | [
"Physics",
"Chemistry"
] | 1,511 | [
"Plasma physics",
"Fusion power",
"Particle traps",
"Magnetic confinement fusion devices",
"Nuclear fusion"
] |
1,723,667 | https://en.wikipedia.org/wiki/Oncolytic%20virus | An oncolytic virus is a virus that preferentially infects and kills cancer cells. As the infected cancer cells are destroyed by oncolysis, they release new infectious virus particles or virions to help destroy the remaining tumour. Oncolytic viruses are thought not only to cause direct destruction of the tumour cells, but also to stimulate host anti-tumour immune system responses. Oncolytic viruses also have the ability to affect the tumor micro-environment in multiple ways.
The potential of viruses as anti-cancer agents was first realised in the early twentieth century, although coordinated research efforts did not begin until the 1960s. A number of viruses including adenovirus, reovirus, measles, herpes simplex, Newcastle disease virus, and vaccinia have been clinically tested as oncolytic agents. Most current oncolytic viruses are engineered for tumour selectivity, although naturally occurring examples such as reovirus and senecavirus also exist and have entered clinical trials.
The first oncolytic virus to be approved by a national regulatory agency was genetically unmodified ECHO-7 strain enterovirus RIGVIR, which was approved in Latvia in 2004 for the treatment of skin melanoma; the approval was withdrawn in 2019. An oncolytic adenovirus, a genetically modified adenovirus named H101, was approved in China in 2005 for the treatment of head and neck cancer. In 2015, talimogene laherparepvec (OncoVex, T-VEC), an oncolytic herpes virus which is a modified herpes simplex virus, became the first oncolytic virus to be approved for use in the United States and the European Union, for the treatment of advanced inoperable melanoma.
On 16 December 2022, the Food and Drug Administration approved nadofaragene firadenovec-vncg (Adstiladrin, Ferring Pharmaceuticals) for adult patients with high-risk Bacillus Calmette-Guérin (BCG) unresponsive non-muscle invasive bladder cancer (NMIBC) with carcinoma in situ (CIS) with or without papillary tumors.
History
A connection between cancer regression and viruses has long been theorised, and case reports of regression noted in cervical cancer, Burkitt lymphoma, and Hodgkin lymphoma, after immunisation or infection with an unrelated virus appeared at the beginning of the 20th century. Efforts to treat cancer through immunisation or virotherapy (deliberate infection with a virus), began in the mid-20th century. As the technology to create a custom virus did not exist, all early efforts focused on finding natural oncolytic viruses. During the 1960s, promising research involved using poliovirus, adenovirus, Coxsackie virus, ECHO enterovirus RIGVIR, and others. The early complications were occasional cases of uncontrolled infection (resulting in significant morbidity and mortality); an immune response would also frequently develop. While not directly harmful to the patient, the response destroyed the virus thus preventing it from destroying the cancer. Early efforts also found that only certain cancers could be treated through virotherapy. Even when a response was seen, these responses were neither complete nor durable. The field of virotherapy was nearly abandoned for a time, as the technology required to modify viruses didn't exist whereas chemotherapy and radiotherapy technology enjoyed early success. However, now that these technologies have been thoroughly developed and cancer remains a major cause of mortality, there is still a need for novel cancer therapies, garnering this once-sidelined therapy renewed interest. In one case report published in 2024, a scientist Beata Halassy treated her own stage 3 breast cancer using an Edmonston-Zagreb measles vaccine strain (MeV) and then a vesicular stomatitis virus Indiana strain (VSV), both prepared in her own laboratory, in combination with trastuzumab. While the treatment was successful and self-experimentation has a long history in science, the decision to publish the case report attracted controversy due to the unapproved nature of the viral agents and treatment protocol used.
Herpes simplex virus
Herpes simplex virus (HSV) was one of the first viruses to be adapted to attack cancer cells selectively, because it was well understood, easy to manipulate and relatively harmless in its natural state (merely causing cold sores) so likely to pose fewer risks. The herpes simplex virus type 1 (HSV-1) mutant 1716 lacks both copies of the ICP34.5 gene, and as a result is no longer able to replicate in terminally differentiated and non-dividing cells but will infect and cause lysis very efficiently in cancer cells, and this has proved to be an effective tumour-targeting strategy. In a wide range of in vivo cancer models, the HSV1716 virus has induced tumour regression and increased survival times.
In 1996, the first approval was given in Europe for a clinical trial using the oncolytic virus HSV1716. From 1997 to 2003, strain HSV1716 was injected into tumours of patients with glioblastoma multiforme, a highly malignant brain tumour, with no evidence of toxicity or side effects, and some long-term survivors. Other safety trials have used HSV1716 to treat patients with melanoma and squamous-cell carcinoma of head and neck. Since then other studies have shown that the outer coating of HSV1716 variants can be targeted to specific types of cancer cells, and can be used to deliver a variety of additional genes into cancer cells, such as genes to split a harmless prodrug inside cancer cells to release toxic chemotherapy, or genes which command infected cancer cells to concentrate protein tagged with radioactive iodine, so that individual cancer cells are killed by micro-dose radiation as well as by virus-induced cell lysis.
Other oncolytic viruses based on HSV have also been developed and are in clinical trials. One that has been approved by the FDA for advanced melanoma is Amgen's talimogene laherparepvec.
Oncorine (H101)
The first oncolytic virus to be approved by a regulatory agency was a genetically modified adenovirus named H101 by Shanghai Sunway Biotech. It gained regulatory approval in 2005 from China's State Food and Drug Administration (SFDA) for the treatment of head and neck cancer. Sunway's H101 and the very similar Onyx-015 (dl1520) have been engineered to remove a viral defense mechanism that interacts with a normal human gene p53, which is very frequently dysregulated in cancer cells. Despite the promise of early in vivo lab work, these viruses do not specifically infect cancer cells, but they still kill cancer cells preferentially. While overall survival rates are not known, short-term response rates are approximately doubled for H101 plus chemotherapy when compared to chemotherapy alone. It appears to work best when injected directly into a tumour, and when any resulting fever is not suppressed. Systemic therapy (such as infusion through an intravenous line) is desirable for treating metastatic disease. It is now marketed under the brand name Oncorine.
Mechanisms of action
Immunotherapy
With advances in cancer immunotherapy such as immune checkpoint inhibitors, increased attention has been given to using oncolytic viruses to increase antitumor immunity. There are two main considerations of the interaction between oncolytic viruses and the immune system.
Immunity as an obstacle
A major obstacle to the success of oncolytic viruses is the patient immune system which naturally attempts to deactivate any virus. This can be a particular problem for intravenous injection, where the virus must first survive interactions with the blood complement and neutralising antibodies. It has been shown that immunosuppression by chemotherapy and inhibition of the complement system can enhance oncolytic virus therapy.
Pre-existing immunity can be partly avoided by using viruses that are not common human pathogens. However, this does not avoid subsequent antibody generation. Yet, some studies have shown that pre-immunity to oncolytic viruses doesn't cause a significant reduction in efficacy.
Alternatively, the viral vector can be coated with a polymer such as polyethylene glycol, shielding it from antibodies, but this also prevents viral coat proteins adhering to host cells.
Another way to help oncolytic viruses reach cancer growths after intravenous injection, is to hide them inside macrophages (a type of white blood cell). Macrophages automatically migrate to areas of tissue destruction, especially where oxygen levels are low, characteristic of cancer growths, and have been used successfully to deliver oncolytic viruses to prostate cancer in animals.
Immunity as an ally
Although it poses a hurdle by inactivating viruses, the patient's immune system can also act as an ally against tumors; infection attracts the attention of the immune system to the tumour and may help to generate useful and long-lasting antitumor immunity. One important mechanism is the release of substances by tumor lysis, such as tumor-associated antigens and danger associated-molecular patterns (DAMPs), which can elicit an antitumor immune response. This essentially produces a personalised cancer vaccine.
Many cases of spontaneous remission of cancer have been recorded. Though the cause is not fully understood, they are thought likely to be a result of a sudden immune response or infection. Efforts to induce this phenomenon have used cancer vaccines (derived from cancer cells or selected cancer antigens), or direct treatment with immune-stimulating factors on skin cancers. Some oncolytic viruses are very immunogenic and may by infection of the tumour, elicit an anti-tumor immune response, especially viruses delivering cytokines or other immune stimulating factors.
Viruses selectively infect tumor cells because of their defective anti-viral response. Imlygic, an attenuated herpes simplex virus, has been genetically engineered to replicate preferentially within tumor cells and to generate antigens that elicit an immune response.
Oncolytic behaviour of wild-type viruses
Vaccinia virus
Vaccinia virus (VACV) is arguably the most successful live biotherapeutic agent because of its critical role in the eradication of smallpox, one of the most deadly diseases in human history. Long before the smallpox eradication campaign was launched, VACV was exploited as a therapeutic agent for the treatment of cancer. In 1922, Levaditi and Nicolau reported that VACV was able to inhibit the growth of various tumors in mice and rats. This was the first demonstration of viral oncolysis in the laboratory. This virus was subsequently shown to selectively infect and destroy tumor cells with great potency, while sparing normal cells, both in cell cultures and in animal models. Since vaccinia virus has long been recognized as an ideal backbone for vaccines due to its potent antigen presentation capability, this combines well with its natural oncolytic activities as an oncolytic virus for cancer immunotherapy.
Vesicular stomatitis virus
Vesicular stomatitis virus (VSV) is a rhabdovirus, consisting of 5 genes encoded by a negative sense, single-stranded RNA genome. In nature, VSV infects insects as well as livestock, where it causes a relatively localized and non-fatal illness. The low pathogenicity of this virus is due in large part to its sensitivity to interferons, a class of proteins that are released into the tissues and bloodstream during infection. These molecules activate genetic anti-viral defence programs that protect cells from infection and prevent spread of the virus. However, in 2000, Stojdl, Lichty et al. demonstrated that defects in these pathways render cancer cells unresponsive to the protective effects of interferons and therefore highly sensitive to infection with VSV. Since VSV undergoes a rapid cytolytic replication cycle, infection leads to death of the malignant cell and roughly a 1000-fold amplification of virus within 24h. VSV is therefore highly suitable for therapeutic application, and several groups have gone on to show that systemically administered VSV can be delivered to a tumour site, where it replicates and induces disease regression, often leading to durable cures. Attenuation of the virus by engineering a deletion of Met-51 of the matrix protein ablates virtually all infection of normal tissues, while replication in tumour cells is unaffected.
Recent research has shown that this virus has the potential to cure brain tumours, thanks to its oncolytic properties.
Poliovirus
Poliovirus is a natural invasive neurotropic virus, making it the obvious choice for selective replication in tumours derived from neuronal cells. Poliovirus has a plus-strand RNA genome, the translation of which depends on a tissue-specific internal ribosome entry site (IRES) within the 5' untranslated region of the viral genome, which is active in cells of neuronal origin and allows translation of the viral genome without a 5' cap. Gromeier et al. (2000) replaced the normal poliovirus IRES with a rhinovirus IRES, altering tissue specificity. The resulting PV1(RIPO) virus was able to selectively destroy malignant glioma cells, while leaving normal neuronal cells untouched.
Reovirus
Reoviruses generally infect mammalian respiratory and bowel systems (the name deriving from an acronym, respiratory enteric orphan virus). Most people have been exposed to reovirus by adulthood; however, the infection does not typically produce symptoms. The reovirus' oncolytic potential was established after they were discovered to reproduce well in various cancer cell lines, lysing these cells.
Reolysin is a formulation of reovirus intended to treat various cancers currently undergoing clinical trials.
Senecavirus
Senecavirus, also known as Seneca Valley Virus, is a naturally occurring wild-type oncolytic picornavirus discovered in 2001 as a tissue culture contaminant at Genetic Therapy, Inc. The initial isolate, SVV-001, is being developed as an anti-cancer therapeutic by Neotropix, Inc. under the name NTX-010 for cancers with neuroendocrine features, including small cell lung cancer and a variety of pediatric solid tumours.
RIGVIR
RIGVIR is a drug that was approved by the State Agency of Medicines of the Republic of Latvia in 2004. It was also approved in Georgia and Armenia. It is a wild-type ECHO-7, a member of the echovirus group. The potential use of echovirus as an oncolytic virus to treat cancer was discovered by the Latvian scientist Aina Muceniece in the 1960s and 1970s. The data used to register the drug in Latvia were not sufficient to obtain approval for its use in the US, Europe, or Japan. As of 2017 there was no good evidence that RIGVIR is an effective cancer treatment. On 19 March 2019, the manufacturer of ECHO-7, SIA LATIMA, announced the drug's removal from sale in Latvia, citing financial and strategic reasons and insufficient profitability. However, several days later an investigative TV show revealed that the State Agency of Medicines had run laboratory tests on the vials and found that the amount of ECHO-7 virus was much smaller than claimed by the manufacturer. According to the agency's lab director, "It's like buying what you think is lemon juice, but finding that what you have is lemon-flavored water". In March 2019, the distribution of ECHO-7 in Latvia was stopped. Based on the request of some patients, medical institutions and physicians were allowed to continue its use despite the suspension of the registration certificate.
Semliki Forest virus
Semliki Forest virus (SFV) is a virus that naturally infects cells of the central nervous system and causes encephalitis. A genetically engineered form has been pre-clinically tested as an oncolytic virus against the severe brain tumour type glioblastoma. The SFV was genetically modified with microRNA target sequences so that it only replicated in brain tumour cells and not in normal brain cells. The modified virus reduced tumour growth and prolonged survival of mice with brain tumours. The modified virus was also found to efficiently kill human glioblastoma tumour cell lines.
Other
The maraba virus, first identified in Brazilian sandflies, is being tested clinically.
Coxsackievirus A21 is being developed by Viralytics under trade name Cavatak. Coxsackievirus A21 belongs to Enterovirus C species.
Influenza A is one of the earliest viruses anecdotally reported to induce cancer regression. This has prompted preclinical development of genetically engineered oncolytic influenza A viruses. Murine Respirovirus, which is frequently called Sendai virus in scientific literature, has shown some oncolytic properties that are described in the section Murine respirovirus as an oncolytic agent.
Engineering oncolytic viruses
Directed evolution
An innovative approach to drug development termed "directed evolution" involves the creation of new viral variants or serotypes specifically directed against tumour cells via rounds of directed selection using large populations of randomly generated recombinant precursor viruses. The increased biodiversity produced by the initial homologous recombination step provides a large random pool of viral candidates which can then be passed through a series of selection steps designed to lead towards a pre-specified outcome (e.g. higher tumour-specific activity) without requiring any previous knowledge of the resultant viral mechanisms that are responsible for that outcome. The pool of resultant oncolytic viruses can then be further screened in pre-clinical models to select an oncolytic virus with the desired therapeutic characteristics.
Directed evolution was applied on human adenovirus, one of many viruses that are being developed as oncolytic agents, to create a highly selective and yet potent oncolytic vaccine. As a result of this process, ColoAd1 (a novel chimeric member of the group B adenoviruses) was generated. This hybrid of adenovirus serotypes Ad11p and Ad3 shows much higher potency and tumour selectivity than the control viruses (including Ad5, Ad11p and Ad3) and was confirmed to generate approximately two logs more viral progeny on freshly isolated human colon tumour tissue than on matching normal tissue.
Attenuation
Attenuation involves deleting viral genes, or gene regions, to eliminate viral functions that are expendable in tumour cells, but not in normal cells, thus making the virus safer and more tumour-specific. Cancer cells and virus-infected cells have similar alterations in their cell signalling pathways, particularly those that govern progression through the cell cycle. A viral gene whose function is to alter a pathway is dispensable in cells where the pathway is defective, but not in cells where the pathway is active.
The enzymes thymidine kinase and ribonucleotide reductase in cells are responsible for DNA synthesis and are only expressed in cells which are actively replicating. These enzymes also exist in the genomes of certain viruses (e.g. HSV, vaccinia) and allow viral replication in quiescent (non-replicating) cells, so if they are inactivated by mutation the virus will only be able to replicate in proliferating cells, such as cancer cells.
Tumour targeting
There are two main approaches for generating tumour selectivity: transductional and non-transductional targeting.
Transductional targeting involves modifying the viral coat proteins to target tumour cells while reducing entry to non-tumour cells. This approach to tumour selectivity has mainly focused on adenoviruses and HSV-1, although it is entirely viable with other viruses.
Non-transductional targeting involves altering the genome of the virus so it can only replicate in cancer cells, most frequently as part of the attenuation of the virus.
Transcription targeting can also be used, where critical parts of the viral genome are placed under the control of a tumour-specific promoter. A suitable promoter should be active in the tumour but inactive in the majority of normal tissue, particularly the liver, which is the organ most exposed to blood-borne viruses. Many such promoters have been identified and studied for the treatment of a range of cancers.
Similarly, viral replication can be finely tuned with the use of artificial microRNA (miRNA) target sites or miRNA response elements (MREs). Differential expression of miRNAs between healthy tissues and tumors permits the engineering of oncolytic viruses that are detargeted from certain tissues of interest while still allowing replication in the tumor cells.
Double targeting with both transductional and non-transductional targeting methods is more effective than any one form of targeting alone.
Reporter genes
Both in the laboratory and in the clinic it is useful to have a simple means of identifying cells infected by the experimental virus. This can be done by equipping the virus with "reporter genes" not normally present in viral genomes, which encode easily identifiable protein markers. One example of such proteins is GFP (green fluorescent protein) which, when present in infected cells, will cause a fluorescent green light to be emitted when stimulated by blue light. An advantage of this method is that it can be used on live cells; in patients with superficial infected lesions, it enables rapid, non-invasive confirmation of viral infection. Another example of a visual marker useful in living cells is luciferase, an enzyme from the firefly which, in the presence of luciferin, emits light detectable by specialized cameras.
The E. coli enzymes beta-glucuronidase and beta-galactosidase can also be encoded by some viruses. These enzymes, in the presence of certain substrates, can produce intense colored compounds useful for visualizing infected cells and also for quantifying gene expression.
Modifications to improve oncolytic activity
Oncolytic viruses can be used against cancers in ways that are additional to lysis of infected cells.
Suicide genes
Viruses can be used as vectors for delivery of suicide genes, encoding enzymes that can metabolise a separately administered non-toxic pro-drug into a potent cytotoxin, which can diffuse to and kill neighbouring cells. One herpes simplex virus, encoding a thymidine kinase suicide gene, has progressed to phase III clinical trials. The herpes simplex virus thymidine kinase phosphorylates the pro-drug, ganciclovir, which is then incorporated into DNA, blocking DNA synthesis. The tumour selectivity of oncolytic viruses ensures that the suicide genes are only expressed in cancer cells, however a "bystander effect" on surrounding tumour cells has been described with several suicide gene systems.
Suppression of angiogenesis
Angiogenesis (blood vessel formation) is an essential part of the formation of large tumour masses. Angiogenesis can be inhibited by the expression of several genes, which can be delivered to cancer cells in viral vectors, resulting in suppression of angiogenesis, and oxygen starvation in the tumour. The infection of cells with viruses containing the genes for angiostatin and endostatin synthesis inhibited tumour growth in mice. Enhanced antitumour activities have been demonstrated in a recombinant vaccinia virus encoding anti-angiogenic therapeutic antibody and with an HSV1716 variant expressing an inhibitor of angiogenesis.
Radioiodine
Addition of the sodium-iodide symporter (NIS) gene to the viral genome causes infected tumour cells to express NIS and accumulate iodine. When combined with radioiodine therapy it allows local radiotherapy of the tumour, as used to treat thyroid cancer. The radioiodine can also be used to visualise viral replication within the body by the use of a gamma camera. This approach has been used successfully preclinically with adenovirus, measles virus and vaccinia virus.
Approved therapeutic agents
Talimogene laherparepvec (OncoVEX GM-CSF), aka T-vec, by Amgen, successfully completed phase III trials for advanced melanoma in March 2013. In October 2015, the US FDA approved T-VEC, with the brand name Imlygic, for the treatment of melanoma in patients with inoperable tumors, becoming the first oncolytic agent approved in the Western world. It is based on herpes simplex virus (HSV-1). It has also been tested in a Phase I trial for pancreatic cancer and a Phase III trial in head and neck cancer together with cisplatin chemotherapy and radiotherapy.
Teserpaturev (G47∆), aka Delytact, by Daiichi Sankyo is the first oncolytic virus therapy approved by Japan's Ministry of Health, Labour and Welfare (MHLW). Delytact is a genetically engineered oncolytic herpes simplex virus type 1 (HSV-1) approved for the treatment of malignant glioma in Japan.
Oncolytic viruses in conjunction with existing cancer therapies
It is in conjunction with conventional cancer therapies that oncolytic viruses have often shown the most promise, since combined therapies operate synergistically with no apparent negative effects.
Clinical trials
Onyx-015 (dl1520) underwent trials in conjunction with chemotherapy before it was abandoned in the early 2000s. The combined treatment gave a greater response than either treatment alone, but the results were not entirely conclusive. Vaccinia virus GL-ONC1 was studied in a trial combined with chemo- and radiotherapy as Standard of Care for patients newly diagnosed with head & neck cancer. Herpes simplex virus, adenovirus, reovirus and murine leukemia virus are also undergoing clinical trials as a part of combination therapies.
Pre-clinical research
Chen et al. (2001) used CV706, a prostate-specific adenovirus, in conjunction with radiotherapy on prostate cancer in mice. The combined treatment resulted in a synergistic increase in cell death, as well as a significant increase in viral burst size (the number of virus particles released from each cell lysis). No alteration in viral specificity was observed.
SEPREHVIR (HSV-1716) has also shown synergy in pre-clinical research when used in combination with several cancer chemotherapies.
The anti-angiogenesis drug bevacizumab (anti-VEGF antibody) has been shown to reduce the inflammatory response to oncolytic HSV and improve virotherapy in mice. A modified oncolytic vaccinia virus encoding a single-chain anti-VEGF antibody (mimicking bevacizumab) was shown to have significantly enhanced antitumor activities than parental virus in animal models.
In fiction
In science fiction, the concept of an oncolytic virus was first introduced to the public in Jack Williamson's novel Dragon's Island, published in 1951, although Williamson's imaginary virus was based on a bacteriophage rather than a mammalian virus. Dragon's Island is also known for being the source of the term "genetic engineering".
The plot of the Hollywood film I Am Legend is based on the premise that a worldwide epidemic was caused by a viral cure for cancer.
See also
Measles virus encoding the human thyroidal sodium iodide symporter (MV-NIS)
Oncolytic AAV
Oncovirus, virus that can cause cancer
References
Further reading | Oncolytic virus | [
"Biology"
] | 5,732 | [
"Viruses",
"Oncolytic virus"
] |
1,723,783 | https://en.wikipedia.org/wiki/Transformation%20theory%20%28quantum%20mechanics%29 | The term transformation theory refers to a procedure and a "picture" used by Paul Dirac in his early formulation of quantum theory, from around 1927.
This "transformation" idea refers to the changes a quantum state undergoes in the course of time, whereby its vector "moves" between "positions" or "orientations" in its Hilbert space. Time evolution, quantum transitions, and symmetry transformations in quantum mechanics may thus be viewed as the systematic theory of abstract, generalized rotations in this space of quantum state vectors.
Remaining in full use today, it would be regarded as a topic in the mathematics of Hilbert space, although, technically speaking, it is somewhat more general in scope. While the terminology is reminiscent of rotations of vectors in ordinary space, the Hilbert space of a quantum object is more general and holds its entire quantum state.
(The term further sometimes evokes the wave–particle duality, according to which a particle (a "small" physical object) may display either particle or wave aspects, depending on the observational situation. Or, indeed, a variety of intermediate aspects, as the situation demands.)
References
Foundational quantum physics | Transformation theory (quantum mechanics) | [
"Physics"
] | 233 | [
"Foundational quantum physics",
"Quantum mechanics",
"Quantum physics stubs"
] |
1,724,836 | https://en.wikipedia.org/wiki/Reliability%20engineering | Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability is defined as the probability that a product, system, or service will perform its intended function adequately for a specified period of time, or will operate in a defined environment without failure. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
The reliability function is theoretically defined as the probability of success. In practice, it is calculated using different techniques, and its value ranges between 0 and 1, where 0 indicates no probability of success while 1 indicates definite success. This probability is estimated from detailed (physics of failure) analysis, previous data sets, or through reliability testing and reliability modeling. Availability, testability, maintainability, and maintenance are often defined as a part of "reliability engineering" in reliability programs. Reliability often plays a key role in the cost-effectiveness of systems.
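In the simplest standard formulation (a generic textbook expression, not one tied to a particular source cited here), the reliability function is R(t) = P(T > t) = 1 − F(t), where T is the time to failure and F(t) is its cumulative distribution function. For a constant hazard (failure) rate λ this reduces to R(t) = e^(−λt), with a mean time to failure of 1/λ; for example, a hypothetical component with λ = 0.001 failures per hour has R(1000 h) = e^−1 ≈ 0.37, i.e. roughly a 37% chance of surviving 1000 hours without failure.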
Reliability engineering deals with the prediction, prevention, and management of high levels of "lifetime" engineering uncertainty and risks of failure. Although stochastic parameters define and affect reliability, reliability is not only achieved by mathematics and statistics. "Nearly all teaching and literature on the subject emphasize these aspects and ignore the reality that the ranges of uncertainty involved largely invalidate quantitative methods for prediction and measurement." For example, it is easy to represent "probability of failure" as a symbol or value in an equation, but it is almost impossible to predict its true magnitude in practice, which is massively multivariate, so having the equation for reliability does not begin to equal having an accurate predictive measurement of reliability.
Reliability engineering relates closely to Quality Engineering, safety engineering, and system safety, in that they use common methods for their analysis and may require input from each other. It can be said that a system must be reliably safe.
Reliability engineering focuses on the costs of failure caused by system downtime, cost of spares, repair equipment, personnel, and cost of warranty claims.
History
The word reliability can be traced back to 1816 and is first attested in the writings of the poet Samuel Taylor Coleridge. Before World War II the term was linked mostly to repeatability; a test (in any type of science) was considered "reliable" if the same results would be obtained repeatedly. In the 1920s, product improvement through the use of statistical process control was promoted by Dr. Walter A. Shewhart at Bell Labs, around the time that Waloddi Weibull was working on statistical models for fatigue. The development of reliability engineering was here on a parallel path with quality. The modern use of the word reliability was defined by the U.S. military in the 1940s, characterizing a product that would operate when expected and for a specified period.
In World War II, many reliability issues were due to the inherent unreliability of electronic equipment available at the time, and to fatigue issues. In 1945, M.A. Miner published a seminal paper titled "Cumulative Damage in Fatigue" in an ASME journal. A main application for reliability engineering in the military was for the vacuum tube as used in radar systems and other electronics, for which reliability proved to be very problematic and costly. The IEEE formed the Reliability Society in 1948. In 1950, the United States Department of Defense formed a group called the "Advisory Group on the Reliability of Electronic Equipment" (AGREE) to investigate reliability methods for military equipment. This group recommended three main ways of working:
Improve component reliability.
Establish quality and reliability requirements for suppliers.
Collect field data and find root causes of failures.
In the 1960s, more emphasis was given to reliability testing on component and system levels. The famous military standard MIL-STD-781 was created at that time. Around this period, the much-used predecessor to military handbook 217 was also published by RCA and was used for the prediction of failure rates of electronic components. The emphasis on component reliability and empirical research (e.g. Mil Std 217) alone slowly decreased. More pragmatic approaches, as used in the consumer industries, were being adopted. In the 1980s, televisions were increasingly made up of solid-state semiconductors. Automobiles rapidly increased their use of semiconductors with a variety of microcomputers under the hood and in the dash. Large air conditioning systems developed electronic controllers, as did microwave ovens and a variety of other appliances. Communications systems began to adopt electronics to replace older mechanical switching systems. Bellcore issued the first consumer prediction methodology for telecommunications, and SAE developed a similar document SAE870050 for automotive applications. The nature of predictions evolved during the decade, and it became apparent that die complexity wasn't the only factor that determined failure rates for integrated circuits (ICs).
Kam Wong published a paper questioning the bathtub curve (see also reliability-centered maintenance). During this decade, the failure rate of many components dropped by a factor of 10. Software became important to the reliability of systems. By the 1990s, the pace of IC development was picking up. Wider use of stand-alone microcomputers was common, and the PC market helped keep IC densities following Moore's law and doubling about every 18 months. Reliability engineering was now changing as it moved towards understanding the physics of failure. Failure rates for components kept dropping, but system-level issues became more prominent. Systems thinking became more and more important. For software, the CMM model (Capability Maturity Model) was developed, which gave a more qualitative approach to reliability. ISO 9000 added reliability measures as part of the design and development portion of certification. The expansion of the World Wide Web created new challenges of security and trust. The older problem of too little reliable information available had now been replaced by too much information of questionable value. Consumer reliability problems could now be discussed online in real time using data. New technologies such as micro-electromechanical systems (MEMS), handheld GPS, and hand-held devices that combine cell phones and computers all represent challenges to maintaining reliability. Product development time continued to shorten through this decade, and what had been done in three years was being done in 18 months. This meant that reliability tools and tasks had to be more closely tied to the development process itself. In many ways, reliability has become part of everyday life and consumer expectations.
Overview
Reliability is the probability of a product performing its intended function under specified operating conditions in a manner that meets or exceeds customer expectations.
Objective
The objectives of reliability engineering, in decreasing order of priority, are:
To apply engineering knowledge and specialist techniques to prevent or to reduce the likelihood or frequency of failures.
To identify and correct the causes of failures that do occur despite the efforts to prevent them.
To determine ways of coping with failures that do occur, if their causes have not been corrected.
To apply methods for estimating the likely reliability of new designs, and for analysing reliability data.
The reason for the priority emphasis is that prevention is by far the most effective way of working, in terms of minimizing costs and generating reliable products. The primary skills that are required, therefore, are the ability to understand and anticipate the possible causes of failures, and knowledge of how to prevent them. It is also necessary to know the methods that can be used for analyzing designs and data.
Scope and techniques
Reliability engineering for "complex systems" requires a different, more elaborate systems approach than for non-complex systems. Reliability engineering may in that case involve:
System availability and mission readiness analysis and related reliability and maintenance requirement allocation
Functional system failure analysis and derived requirements specification
Inherent (system) design reliability analysis and derived requirements specification for both hardware and software design
System diagnostics design
Fault tolerant systems (e.g. by redundancy)
Predictive and preventive maintenance (e.g. reliability-centered maintenance)
Human factors / human interaction / human errors
Manufacturing- and assembly-induced failures (effect on the detected "0-hour quality" and reliability)
Maintenance-induced failures
Transport-induced failures
Storage-induced failures
Use (load) studies, component stress analysis, and derived requirements specification
Software (systematic) failures
Failure / reliability testing (and derived requirements)
Field failure monitoring and corrective actions
Spare parts stocking (availability control)
Technical documentation, caution and warning analysis
Data and information acquisition/organisation (creation of a general reliability development hazard log and FRACAS system)
Chaos engineering
Effective reliability engineering requires understanding of the basics of failure mechanisms for which experience, broad engineering skills and good knowledge from many different special fields of engineering are required, for example:
Tribology
Stress (mechanics)
Fracture mechanics / fatigue
Thermal engineering
Fluid mechanics / shock-loading engineering
Electrical engineering
Chemical engineering (e.g. corrosion)
Materials science
Definitions
Reliability may be defined in the following ways:
The idea that an item is fit for a purpose
The capacity of a designed, produced, or maintained item to perform as required
The capacity of a population of designed, produced or maintained items to perform as required
The resistance to failure of an item
The probability of an item to perform a required function under stated conditions
The durability of an object
Basics of a reliability assessment
Many engineering techniques are used in reliability risk assessments, such as reliability block diagrams, hazard analysis, failure mode and effects analysis (FMEA), fault tree analysis (FTA), Reliability Centered Maintenance, (probabilistic) load and material stress and wear calculations, (probabilistic) fatigue and creep analysis, human error analysis, manufacturing defect analysis, reliability testing, etc. These analyses must be done properly and with much attention to detail to be effective. Because of the large number of reliability techniques, their expense, and the varying degrees of reliability required for different situations, most projects develop a reliability program plan to specify the reliability tasks (statement of work (SoW) requirements) that will be performed for that specific system.
Consistent with the creation of safety cases, for example per ARP4761, the goal of reliability assessments is to provide a robust set of qualitative and quantitative evidence that the use of a component or system will not be associated with unacceptable risk. The basic steps to take are to:
Thoroughly identify relevant unreliability "hazards", e.g. potential conditions, events, human errors, failure modes, interactions, failure mechanisms, and root causes, by specific analysis or tests.
Assess the associated system risk, by specific analysis or testing.
Propose mitigation, e.g. requirements, design changes, detection logic, maintenance, and training, by which the risks may be lowered and controlled at an acceptable level.
Determine the best mitigation and get agreement on final, acceptable risk levels, possibly based on cost/benefit analysis.
The risk here is the combination of probability and severity of the failure incident (scenario) occurring. The severity can be looked at from a system safety or a system availability point of view. Reliability for safety can be thought of as a very different focus from reliability for system availability. Availability and safety can exist in dynamic tension, as keeping a system too available can be unsafe. Forcing an engineering system into a safe state too quickly can cause false alarms that impede the availability of the system.
In a de minimis definition, the severity of failures includes the cost of spare parts, man-hours, logistics, damage (secondary failures), and downtime of machines which may cause production loss. A more complete definition of failure also can mean injury, dismemberment, and death of people within the system (witness mine accidents, industrial accidents, space shuttle failures) and the same to innocent bystanders (witness the citizenry of cities like Bhopal, Love Canal, Chernobyl, or Sendai, and other victims of the 2011 Tōhoku earthquake and tsunami)—in this case, reliability engineering becomes system safety. What is acceptable is determined by the managing authority or customers or the affected communities. Residual risk is the risk that is left over after all reliability activities have finished, and includes the unidentified risk—and is therefore not completely quantifiable.
Measures that add complexity to technical systems, such as improved design and materials, planned inspections, fool-proof design, and backup redundancy, decrease risk but increase cost. The risk can be decreased to ALARA (as low as reasonably achievable) or ALAPA (as low as practically achievable) levels.
Reliability and availability program plan
Implementing a reliability program is not simply a software purchase; it is not just a checklist of items that must be completed that ensure one has reliable products and processes. A reliability program is a complex learning and knowledge-based system unique to one's products and processes. It is supported by leadership, built on the skills that one develops within a team, integrated into business processes, and executed by following proven standard work practices.
A reliability program plan is used to document exactly what "best practices" (tasks, methods, tools, analysis, and tests) are required for a particular (sub)system, as well as clarify customer requirements for reliability assessment. For large-scale complex systems, the reliability program plan should be a separate document. Resource determination for manpower and budgets for testing and other tasks is critical for a successful program. In general, the amount of work required for an effective program for complex systems is large.
A reliability program plan is essential for achieving high levels of reliability, testability, maintainability, and the resulting system availability, and is developed early during system development and refined over the system's life cycle. It specifies not only what the reliability engineer does, but also the tasks performed by other stakeholders. An effective reliability program plan must be approved by top program management, which is responsible for the allocation of sufficient resources for its implementation.
A reliability program plan may also be used to evaluate and improve the availability of a system by the strategy of focusing on increasing testability & maintainability and not on reliability. Improving maintainability is generally easier than improving reliability. Maintainability estimates (repair rates) are also generally more accurate. However, because the uncertainties in the reliability estimates are in most cases very large, they are likely to dominate the availability calculation (prediction uncertainty problem), even when maintainability levels are very high. When reliability is not under control, more complicated issues may arise, like manpower (maintainers/customer service capability) shortages, spare part availability, logistic delays, lack of repair facilities, extensive retrofit and complex configuration management costs, and others. The problem of unreliability may be increased also due to the "domino effect" of maintenance-induced failures after repairs. Focusing only on maintainability is therefore not enough. If failures are prevented, none of the other issues are of any importance, and therefore reliability is generally regarded as the most important part of availability. Reliability needs to be evaluated and improved related to both availability and the total cost of ownership (TCO) due to the cost of spare parts, maintenance man-hours, transport costs, storage costs, part obsolescence risks, etc. But, as GM and Toyota have belatedly discovered, TCO also includes the downstream liability costs when reliability calculations have not sufficiently or accurately addressed customers' bodily risks. Often a trade-off is needed between the two. There might be a maximum ratio between availability and cost of ownership. The testability of a system should also be addressed in the plan, as this is the link between reliability and maintainability. The maintenance strategy can influence the reliability of a system (e.g., by preventive and/or predictive maintenance), although it can never bring it above the inherent reliability.
The reliability plan should clearly provide a strategy for availability control. Whether only availability or also cost of ownership is more important depends on the use of the system. For example, a system that is a critical link in a production system—e.g., a big oil platform—is normally allowed to have a very high cost of ownership if that cost translates to even a minor increase in availability, as the unavailability of the platform results in a massive loss of revenue which can easily exceed the high cost of ownership. A proper reliability plan should always address RAMT analysis in its total context. RAMT stands for reliability, availability, maintainability/maintenance, and testability in the context of the customer's needs.
Reliability requirements
For any system, one of the first tasks of reliability engineering is to adequately specify the reliability and maintainability requirements allocated from the overall availability needs and, more importantly, derived from proper design failure analysis or preliminary prototype test results. Clear requirements (able to be designed to) should constrain the designers from designing particular unreliable items/constructions/interfaces/systems. Setting only availability, reliability, testability, or maintainability targets (e.g., max. failure rates) is not appropriate. This is a broad misunderstanding about Reliability Requirements Engineering. Reliability requirements address the system itself, including test and assessment requirements, and associated tasks and documentation. Reliability requirements are included in the appropriate system or subsystem requirements specifications, test plans, and contract statements. The creation of proper lower-level requirements is critical.
The provision of only quantitative minimum targets (e.g., Mean Time Between Failure (MTBF) values or failure rates) is not sufficient for different reasons. One reason is that a full validation (related to correctness and verifiability in time) of a quantitative reliability allocation (requirement spec) on lower levels for complex systems can (often) not be made as a consequence of (1) the fact that the requirements are probabilistic, (2) the extremely high level of uncertainties involved for showing compliance with all these probabilistic requirements, and because (3) reliability is a function of time, and accurate estimates of a (probabilistic) reliability number per item are available only very late in the project, sometimes even after many years of in-service use. Compare this problem with the continuous (re-)balancing of, for example, lower-level-system mass requirements in the development of an aircraft, which is already often a big undertaking. Notice that in this case, masses differ only by a few percent, are not a function of time, and the data are non-probabilistic and already available in CAD models. In the case of reliability, the levels of unreliability (failure rates) may change with factors of decades (multiples of 10) as a result of very minor deviations in design, process, or anything else. The information is often not available without huge uncertainties within the development phase. This makes this allocation problem almost impossible to do in a useful, practical, valid manner that does not result in massive over- or under-specification. A pragmatic approach is therefore needed—for example: the use of general levels/classes of quantitative requirements depending only on severity of failure effects. Also, the validation of results is a far more subjective task than any other type of requirement. (Quantitative) reliability parameters—in terms of MTBF—are by far the most uncertain design parameters in any design.
Furthermore, reliability design requirements should drive a (system or part) design to incorporate features that prevent failures from occurring, or limit consequences from failure in the first place. Not only would this aid some predictions, it would also keep the engineering effort from being distracted into a kind of accounting work. A design requirement should be precise enough so that a designer can "design to" it and can also prove—through analysis or testing—that the requirement has been achieved, and, if possible, within a stated confidence. Any type of reliability requirement should be detailed and could be derived from failure analysis (Finite-Element Stress and Fatigue analysis, Reliability Hazard Analysis, FTA, FMEA, Human Factor Analysis, Functional Hazard Analysis, etc.) or any type of reliability testing. Also, requirements are needed for verification tests (e.g., required overload stresses) and test time needed. To derive these requirements in an effective manner, a systems engineering-based risk assessment and mitigation logic should be used. Robust hazard log systems must be created that contain detailed information on why and how systems could or have failed. Requirements are to be derived and tracked in this way. These practical design requirements shall drive the design and not be used only for verification purposes. These requirements (often design constraints) are in this way derived from failure analysis or preliminary tests. Understanding of this difference compared to only purely quantitative (logistic) requirement specification (e.g., Failure Rate / MTBF target) is paramount in the development of successful (complex) systems.
The maintainability requirements address the costs of repairs as well as repair time. Testability (not to be confused with test requirements) requirements provide the link between reliability and maintainability and should address detectability of failure modes (on a particular system level), isolation levels, and the creation of diagnostics (procedures).
As indicated above, reliability engineers should also address requirements for various reliability tasks and documentation during system development, testing, production, and operation. These requirements are generally specified in the contract statement of work and depend on how much leeway the customer wishes to provide to the contractor. Reliability tasks include various analyses, planning, and failure reporting. Task selection depends on the criticality of the system as well as cost. A safety-critical system may require a formal failure reporting and review process throughout development, whereas a non-critical system may rely on final test reports. The most common reliability program tasks are documented in reliability program standards, such as MIL-STD-785 and IEEE 1332. Failure reporting, analysis, and corrective action systems (FRACAS) are a common approach for product/process reliability monitoring.
Reliability culture / human errors / human factors
In practice, most failures can be traced back to some type of human error, for example in:
Management decisions (e.g. in budgeting, timing, and required tasks)
Systems Engineering: Use studies (load cases)
Systems Engineering: Requirement analysis / setting
Systems Engineering: Configuration control
Assumptions
Calculations / simulations / FEM analysis
Design
Design drawings
Testing (e.g. incorrect load settings or failure measurement)
Statistical analysis
Manufacturing
Quality control
Maintenance
Maintenance manuals
Training
Classifying and ordering of information
Feedback of field information (e.g. incorrect or too vague)
etc.
However, humans are also very good at detecting such failures, correcting them, and improvising when abnormal situations occur. Therefore, policies that completely rule out human actions in design and production processes to improve reliability may not be effective. Some tasks are better performed by humans and some are better performed by machines.
Furthermore, human errors in management, in the organization of data and information, or in the misuse or abuse of items may also contribute to unreliability. This is the core reason why high levels of reliability for complex systems can only be achieved by following a robust systems engineering process with proper planning and execution of the validation and verification tasks. This also includes the careful organization of data and information sharing and creating a "reliability culture", in the same way that having a "safety culture" is paramount in the development of safety-critical systems.
Reliability prediction and improvement
Reliability prediction combines:
creation of a proper reliability model (see further on this page)
estimation (and justification) of input parameters for this model (e.g. failure rates for a particular failure mode or event and the mean time to repair the system for a particular failure)
estimation of output reliability parameters at system or part level (i.e. system availability or frequency of a particular functional failure)
The emphasis on quantification and target setting (e.g. MTBF) might imply there is a limit to achievable reliability; however, there is no inherent limit, and development of higher reliability does not need to be more costly. Critics of this emphasis, such as Barnard, argue that prediction of reliability from historic data can be very misleading, with comparisons only valid for identical designs, products, manufacturing processes, and maintenance with identical operating loads and usage environments. Even minor changes in any of these could have major effects on reliability. Furthermore, the most unreliable and important items (i.e. the most interesting candidates for a reliability investigation) are the most likely to have been modified and re-engineered since the historical data was gathered, making the standard (re-active or pro-active) statistical methods and processes used in, for example, the medical or insurance industries less effective. Another surprising – but logical – argument is that to be able to accurately predict reliability by testing, the exact mechanisms of failure must be known, and therefore – in most cases – could be prevented. Trying to quantify and solve a complex reliability engineering problem in terms of MTBF or probability using an incorrect – for example, re-active – approach is referred to by Barnard as "Playing the Numbers Game" and is regarded as bad practice.
For existing systems, it is arguable that any attempt by a responsible program to correct the root cause of discovered failures may render the initial MTBF estimate invalid, as new assumptions (themselves subject to high error levels) of the effect of this correction must be made. Another practical issue is the general unavailability of detailed failure data, with those available often featuring inconsistent filtering of failure (feedback) data, and ignoring statistical errors (which are very high for rare events like reliability-related failures). Very clear guidelines must be present to count and compare failures related to different types of root causes (e.g. manufacturing-, maintenance-, transport-, system-induced or inherent design failures). Comparing different types of causes may lead to incorrect estimations and incorrect business decisions about the focus of improvement.
Performing a proper quantitative reliability prediction for systems may be difficult and very expensive if done by testing. At the individual part level, reliability results can often be obtained with comparatively high confidence, as testing of many sample parts might be possible using the available testing budget. Unfortunately, however, these tests may lack validity at a system level due to assumptions made at part-level testing. Several authors have emphasized the importance of initial part- or system-level testing until failure, and of learning from such failures to improve the system or part. The general conclusion is drawn that an accurate and absolute prediction – by either field-data comparison or testing – of reliability is in most cases not possible. An exception might be failures due to wear-out problems such as fatigue failures. In the introduction of MIL-STD-785 it is written that reliability prediction should be used with great caution, if not used solely for comparison in trade-off studies.
Design for reliability
Design for Reliability (DfR) is a process that encompasses tools and procedures to ensure that a product meets its reliability requirements, under its use environment, for the duration of its lifetime. DfR is implemented in the design stage of a product to proactively improve product reliability. DfR is often used as part of an overall Design for Excellence (DfX) strategy.
Statistics-based approach (i.e. MTBF)
Reliability design begins with the development of a (system) model. Reliability and availability models use block diagrams and Fault Tree Analysis to provide a graphical means of evaluating the relationships between different parts of the system. These models may incorporate predictions based on failure rates taken from historical data. While the (input data) predictions are often not accurate in an absolute sense, they are valuable to assess relative differences in design alternatives. Maintainability parameters, for example Mean time to repair (MTTR), can also be used as inputs for such models.
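For illustration (a standard steady-state approximation, with numbers chosen for illustration rather than taken from the source), the inherent availability of a single repairable item is often modeled as $A_{\mathrm{inh}} = \dfrac{\text{MTBF}}{\text{MTBF} + \text{MTTR}}$. An assumed MTBF of 1,000 hours combined with an MTTR of 10 hours gives $A \approx 0.990$; halving the MTTR to 5 hours raises it to about $0.995$, which is why maintainability parameters are valuable inputs to such models.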
The most important fundamental initiating causes and failure mechanisms are to be identified and analyzed with engineering tools. A diverse set of practical guidance as to performance and reliability should be provided to designers so that they can generate low-stressed designs and products that protect, or are protected against, damage and excessive wear. Proper validation of input loads (requirements) may be needed, in addition to verification for reliability "performance" by testing.
One of the most important design techniques is redundancy. This means that if one part of the system fails, there is an alternate success path, such as a backup system. The reason why this is the ultimate design choice is that high-confidence reliability evidence for new parts or systems is often not available, or is extremely expensive to obtain. By combining redundancy with a high level of failure monitoring and the avoidance of common cause failures, even a system with relatively poor single-channel (part) reliability can be made highly reliable at a system level (up to mission-critical reliability). No reliability testing is required for this. In conjunction with redundancy, the use of dissimilar designs or manufacturing processes (e.g. via different suppliers of similar parts) for single independent channels can provide less sensitivity to quality issues (e.g. early childhood failures at a single supplier), allowing very high levels of reliability to be achieved at all moments of the development cycle (from early life to long term). Redundancy can also be applied in systems engineering by double checking requirements, data, designs, calculations, software, and tests to overcome systematic failures.
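A minimal sketch (in Python, using an illustrative single-channel reliability rather than a value from the source) of how redundancy raises system-level reliability when channel failures are independent:

# Minimal sketch: series vs. parallel (redundant) combination of independent channels.
# The channel reliability below is illustrative only.

def series(reliabilities):
    """All channels must work: reliabilities multiply."""
    result = 1.0
    for r in reliabilities:
        result *= r
    return result

def parallel(reliabilities):
    """At least one channel must work: unreliabilities multiply."""
    q = 1.0
    for r in reliabilities:
        q *= (1.0 - r)
    return 1.0 - q

r_channel = 0.90  # assumed single-channel mission reliability
print(series([r_channel, r_channel]))    # 0.81  - both channels required
print(parallel([r_channel, r_channel]))  # 0.99  - either channel sufficient
print(parallel([r_channel] * 3))         # 0.999 - triple redundancy

The independence assumption is exactly what common cause failures violate, which is why the paragraph above pairs redundancy with failure monitoring and common-cause avoidance.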
Another effective way to deal with reliability issues is to perform analysis that predicts degradation, enabling the prevention of unscheduled downtime events / failures. RCM (Reliability Centered Maintenance) programs can be used for this.
Physics-of-failure-based approach
For electronic assemblies, there has been an increasing shift towards a different approach called physics of failure. This technique relies on understanding the physical static and dynamic failure mechanisms. It accounts for variation in load, strength, and stress that lead to failure with a high level of detail, made possible with the use of modern finite element method (FEM) software programs that can handle complex geometries and mechanisms such as creep, stress relaxation, fatigue, and probabilistic design (Monte Carlo Methods/DOE). The material or component can be re-designed to reduce the probability of failure and to make it more robust against such variations. Another common design technique is component derating: i.e. selecting components whose specifications significantly exceed the expected stress levels, such as using heavier gauge electrical wire than might normally be specified for the expected electric current.
Common tools and techniques
Many of the tasks, techniques, and analyses used in Reliability Engineering are specific to particular industries and applications, but can commonly include:
Physics of failure (PoF)
Built-in self-test (BIT or BIST) (testability analysis)
Failure mode and effects analysis (FMEA)
Reliability hazard analysis
Reliability block-diagram analysis
Dynamic reliability block-diagram analysis
Fault tree analysis
Root cause analysis
Statistical engineering, design of experiments – e.g. on simulations / FEM models or with testing
Sneak circuit analysis
Accelerated testing
Reliability growth analysis (re-active reliability)
Weibull analysis (for testing or mainly "re-active" reliability)
Hypertabastic survival models
Thermal analysis by finite element analysis (FEA) and / or measurement
Thermal induced, shock and vibration fatigue analysis by FEA and / or measurement
Electromagnetic analysis
Avoidance of single point of failure (SPOF)
Functional analysis and functional failure analysis (e.g., function FMEA, FHA or FFA)
Predictive and preventive maintenance: reliability centered maintenance (RCM) analysis
Testability analysis
Failure diagnostics analysis (normally also incorporated in FMEA)
Human error analysis
Operational hazard analysis
Preventative/Planned Maintenance Optimization (PMO)
Manual screening
Integrated logistics support
Results from these methods are presented during reviews of part or system design, and logistics. Reliability is just one requirement among many for a complex part or system. Engineering trade-off studies are used to determine the optimum balance between reliability requirements and other constraints.
The importance of language
Reliability engineers, whether using quantitative or qualitative methods to describe a failure or hazard, rely on language to pinpoint the risks and enable issues to be solved. The language used must help create an orderly description of the function/item/system and its complex surrounding as it relates to the failure of these functions/items/systems. Systems engineering is very much about finding the correct words to describe the problem (and related risks), so that they can be readily solved via engineering solutions. Jack Ring said that a systems engineer's job is to "language the project." (Ring et al. 2000) For part/system failures, reliability engineers should concentrate more on the "why and how", rather that predicting "when". Understanding "why" a failure has occurred (e.g. due to over-stressed components or manufacturing issues) is far more likely to lead to improvement in the designs and processes used than quantifying "when" a failure is likely to occur (e.g. via determining MTBF). To do this, first the reliability hazards relating to the part/system need to be classified and ordered (based on some form of qualitative and quantitative logic if possible) to allow for more efficient assessment and eventual improvement. This is partly done in pure language and proposition logic, but also based on experience with similar items. This can for example be seen in descriptions of events in fault tree analysis, FMEA analysis, and hazard (tracking) logs. In this sense language and proper grammar (part of qualitative analysis) plays an important role in reliability engineering, just like it does in safety engineering or in-general within systems engineering.
Correct use of language can also be key to identifying or reducing the risks of human error, which are often the root cause of many failures. This can include proper instructions in maintenance manuals, operation manuals, emergency procedures, and others to prevent systematic human errors that may result in system failures. These should be written by trained or experienced technical authors using so-called simplified English or Simplified Technical English, where words and structure are specifically chosen and created so as to reduce ambiguity or risk of confusion (e.g. an instruction to "replace the old part" could ambiguously refer to swapping a worn-out part with a non-worn-out part, or to replacing a part with one using a more recent and hopefully improved design).
Reliability modeling
Reliability modeling is the process of predicting or understanding the reliability of a component or system prior to its implementation. Two types of analysis that are often used to model a complete system's availability behavior, including effects from logistics issues like spare part provisioning, transport, and manpower, are fault tree analysis and reliability block diagrams. At a component level, the same types of analyses can be used together with others. The input for the models can come from many sources, including testing, prior operational experience, and field data, as well as data handbooks from similar or related industries. Regardless of source, all model input data must be used with great caution, as predictions are only valid in cases where the same product was used in the same context. As such, predictions are often only used to help compare alternatives.
For part level predictions, two separate fields of investigation are common:
The physics of failure approach uses an understanding of physical failure mechanisms involved, such as mechanical crack propagation or chemical corrosion degradation or failure;
The parts stress modelling approach is an empirical method for prediction based on counting the number and type of components of the system, and the stress they undergo during operation.
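As a rough sketch of the counting idea behind the parts stress / parts count approach (the part names, base failure rates, and adjustment factors below are invented for illustration, not taken from any handbook), the failure rate of a series system is estimated by summing adjusted part failure rates:

# Rough parts-count style estimate (illustrative values only).
# Each part contributes a base failure rate (failures per million hours)
# multiplied by a stress/environment adjustment factor.
parts = [
    # (name, base failure rate [per 1e6 h], adjustment factor)
    ("capacitor", 0.05, 2.0),
    ("resistor",  0.01, 1.5),
    ("connector", 0.10, 3.0),
    ("ic",        0.20, 1.2),
]

lambda_system = sum(base * factor for _, base, factor in parts)  # series assumption
mttf_hours = 1e6 / lambda_system

print(f"system failure rate: {lambda_system:.3f} per million hours")
print(f"approx. MTTF: {mttf_hours:,.0f} hours")

This mirrors the empirical counting approach described above; the physics-of-failure route instead replaces tabulated rates with mechanism-level models.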
Reliability theory
Reliability is defined as the probability that a device will perform its intended function during a specified period of time under stated conditions. Mathematically, this may be expressed as
$R(t) = \Pr\{T > t\} = \int_t^{\infty} f(x)\,dx$,
where $f(x)$ is the failure probability density function and $t$ is the length of the period of time (which is assumed to start from time zero).
There are a few key elements of this definition:
Reliability is predicated on "intended function:" Generally, this is taken to mean operation without failure. However, even if no individual part of the system fails, but the system as a whole does not do what was intended, then it is still charged against the system reliability. The system requirements specification is the criterion against which reliability is measured.
Reliability applies to a specified period of time. In practical terms, this means that a system has a specified chance that it will operate without failure before time $t$. Reliability engineering ensures that components and materials will meet the requirements during the specified time. Note that units other than time may sometimes be used (e.g. "a mission", "operation cycles").
Reliability is restricted to operation under stated (or explicitly defined) conditions. This constraint is necessary because it is impossible to design a system for unlimited conditions. A Mars rover will have different specified conditions than a family car. The operating environment must be addressed during design and testing. That same rover may be required to operate in varying conditions requiring additional scrutiny.
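As a worked illustration of the definition above (assuming, additionally, a constant failure rate $\lambda$, i.e. an exponential life distribution, which the general definition does not require):
$f(x) = \lambda e^{-\lambda x} \;\Rightarrow\; R(t) = \int_t^{\infty} \lambda e^{-\lambda x}\,dx = e^{-\lambda t}, \qquad \text{MTTF} = \int_0^{\infty} R(t)\,dt = \frac{1}{\lambda}.$
For instance, with an assumed $\lambda = 10^{-4}$ failures per hour, $R(1000\ \text{h}) = e^{-0.1} \approx 0.905$ and the MTTF is 10,000 hours.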
Two notable references on reliability theory and its mathematical and statistical foundations are Barlow, R. E. and Proschan, F. (1982) and Samaniego, F. J. (2007).
Quantitative system reliability parameters—theory
Quantitative requirements are specified using reliability parameters. The most common reliability parameter is the mean time to failure (MTTF), which can also be specified as the failure rate (this is expressed as a frequency or conditional probability density function (PDF)) or the number of failures during a given period. These parameters may be useful for higher system levels and systems that are operated frequently (e.g. vehicles, machinery, and electronic equipment). Reliability increases as the MTTF increases. The MTTF is usually specified in hours, but can also be used with other units of measurement, such as miles or cycles. Using MTTF values on lower system levels can be very misleading, especially if they do not specify the associated failure modes and mechanisms (the "F" in MTTF).
In other cases, reliability is specified as the probability of mission success. For example, reliability of a scheduled aircraft flight can be specified as a dimensionless probability or a percentage, as often used in system safety engineering.
A special case of mission success is the single-shot device or system. These are devices or systems that remain relatively dormant and only operate once. Examples include automobile airbags, thermal batteries and missiles. Single-shot reliability is specified as a probability of one-time success or is subsumed into a related parameter. Single-shot missile reliability may be specified as a requirement for the probability of a hit. For such systems, the probability of failure on demand (PFD) is the reliability measure – this is actually an "unavailability" number. The PFD is derived from failure rate (a frequency of occurrence) and mission time for non-repairable systems.
For repairable systems, it is obtained from failure rate, mean-time-to-repair (MTTR), and test interval. This measure may not be unique for a given system as this measure depends on the kind of demand. In addition to system level requirements, reliability requirements may be specified for critical subsystems. In most cases, reliability parameters are specified with appropriate statistical confidence intervals.
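For a low-demand protective function that is proof-tested periodically, a commonly used approximation (a sketch assuming a constant dangerous failure rate $\lambda$, a proof-test interval $T_I$, and restoration after each test) is $\text{PFD}_{\text{avg}} \approx \lambda\left(\tfrac{T_I}{2} + \text{MTTR}\right)$. For example, an assumed $\lambda = 10^{-6}$ per hour, a one-year (8,760 hour) test interval, and an 8-hour MTTR give $\text{PFD}_{\text{avg}} \approx 4.4 \times 10^{-3}$; these figures are illustrative only.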
Reliability testing
The purpose of reliability testing or reliability verification is to discover potential problems with the design as early as possible and, ultimately, provide confidence that the system meets its reliability requirements. The reliability of the product in all environments, such as expected use, transportation, or storage during the specified lifespan, should be considered. Reliability testing exposes the product to natural or artificial environmental conditions in order to evaluate its performance under the conditions of actual use, transportation, and storage, and to analyze the degree and mechanism of influence of environmental factors. Environmental test equipment is used to simulate high temperature, low temperature, high humidity, and temperature cycling, accelerating the product's response to its use environment and verifying whether it reaches the quality expected from R&D, design, and manufacturing.
Reliability verification is also called reliability testing; it refers to the use of modeling, statistics, and other methods to evaluate the reliability of a product based on its life span and expected performance. Most products on the market require reliability testing, for example automotive components, integrated circuits, heavy machinery used to mine natural resources, and aircraft automation software.
Reliability testing may be performed at several levels and there are different types of testing. Complex systems may be tested at component, circuit board, unit, assembly, subsystem and system levels.
(The test level nomenclature varies among applications.) For example, performing environmental stress screening tests at lower levels, such as piece parts or small assemblies, catches problems before they cause failures at higher levels. Testing proceeds during each level of integration through full-up system testing, developmental testing, and operational testing, thereby reducing program risk. However, testing does not mitigate unreliability risk.
With each test both statistical type I and type II errors could be made, depending on sample size, test time, assumptions and the needed discrimination ratio. There is risk of incorrectly rejecting a good design (type I error) and the risk of incorrectly accepting a bad design (type II error).
It is not always feasible to test all system requirements. Some systems are prohibitively expensive to test; some failure modes may take years to observe; some complex interactions result in a huge number of possible test cases; and some tests require the use of limited test ranges or other resources. In such cases, different approaches to testing can be used, such as (highly) accelerated life testing, design of experiments, and simulations.
The desired level of statistical confidence also plays a role in reliability testing. Statistical confidence is increased by increasing either the test time or the number of items tested. Reliability test plans are designed to achieve the specified reliability at the specified confidence level with the minimum number of test units and test time. Different test plans result in different levels of risk to the producer and consumer. The desired reliability, statistical confidence, and risk levels for each side influence the ultimate test plan. The customer and developer should agree in advance on how reliability requirements will be tested.
A key aspect of reliability testing is to define "failure". Although this may seem obvious, there are many situations where it is not clear whether a failure is really the fault of the system. Variations in test conditions, operator differences, weather and unexpected situations create differences between the customer and the system developer. One strategy to address this issue is to use a scoring conference process. A scoring conference includes representatives from the customer, the developer, the test organization, the reliability organization, and sometimes independent observers. The scoring conference process is defined in the statement of work. Each test case is considered by the group and "scored" as a success or failure. This scoring is the official result used by the reliability engineer.
As part of the requirements phase, the reliability engineer develops a test strategy with the customer. The test strategy makes trade-offs between the needs of the reliability organization, which wants as much data as possible, and constraints such as cost, schedule and available resources. Test plans and procedures are developed for each reliability test, and results are documented.
Reliability testing is common in the Photonics industry. Examples of reliability tests of lasers are life test and burn-in. These tests consist of the highly accelerated aging, under controlled conditions, of a group of lasers. The data collected from these life tests are used to predict laser life expectancy under the intended operating characteristics.
Reliability test requirements
The criteria to be tested depend on the product or process in question, but five components are most common:
Product life span
Intended function
Operating Condition
Probability of Performance
User expectations
The product life span can be split into four different periods for analysis. Useful life is the estimated economic life of the product, defined as the time it can be used before the cost of repair no longer justifies its continued use. Warranty life is the period within which the product should perform its function as specified. Design life is the life span considered during the design of the product, where the designer takes into account the lifetime of competitive products and customer desires and ensures that the product does not result in customer dissatisfaction.
Reliability test requirements can follow from any analysis for which the first estimate of failure probability, failure mode or effect needs to be justified. Evidence can be generated with some level of confidence by testing. With software-based systems, the probability is a mix of software and hardware-based failures. Testing reliability requirements is problematic for several reasons. A single test is in most cases insufficient to generate enough statistical data. Multiple tests or long-duration tests are usually very expensive. Some tests are simply impractical, and environmental conditions can be hard to predict over a system's life-cycle.
Reliability engineering is used to design a realistic and affordable test program that provides empirical evidence that the system meets its reliability requirements. Statistical confidence levels are used to address some of these concerns. A certain parameter is expressed along with a corresponding confidence level: for example, an MTBF of 1000 hours at 90% confidence level. From this specification, the reliability engineer can, for example, design a test with explicit criteria for the number of hours and number of failures until the requirement is met or failed. Different sorts of tests are possible.
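A minimal sketch of how such a test might be sized (assuming an exponential time-to-failure model and the standard chi-squared lower confidence bound on MTBF; the target and confidence values simply reuse the example above):

# Minimal sketch: required test hours to demonstrate an MTBF target at a given
# confidence level, assuming exponential (constant failure rate) behavior.
from scipy.stats import chi2

def required_test_hours(mtbf_target, confidence, allowed_failures):
    """Total unit-hours needed so that, if no more than `allowed_failures`
    occur, the lower confidence bound on MTBF meets the target."""
    dof = 2 * (allowed_failures + 1)
    return mtbf_target * chi2.ppf(confidence, dof) / 2.0

# Demonstrate MTBF >= 1000 h at 90% confidence:
print(required_test_hours(1000, 0.90, 0))  # ~2303 h of failure-free testing
print(required_test_hours(1000, 0.90, 1))  # ~3890 h allowing one failure

The producer's and consumer's risks mentioned above are traded off through the choice of confidence level and the number of failures the plan allows.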
The combination of required reliability level and required confidence level greatly affects the development cost and the risk to both the customer and producer. Care is needed to select the best combination of requirements—e.g. cost-effectiveness. Reliability testing may be performed at various levels, such as component, subsystem and system. Also, many factors must be addressed during testing and operation, such as extreme temperature and humidity, shock, vibration, or other environmental factors (like loss of signal, cooling or power; or other catastrophes such as fire, floods, excessive heat, physical or security violations or other myriad forms of damage or degradation). For systems that must last many years, accelerated life tests may be needed.
Testing method
A systematic approach to reliability testing is to first determine a reliability goal, and then perform tests that are linked to performance and that determine the reliability of the product. Reliability verification tests in modern industries should clearly indicate how they relate to the product's overall reliability performance and how individual tests affect warranty cost and customer satisfaction.
Accelerated testing
The purpose of accelerated life testing (ALT test) is to induce field failure in the laboratory at a much faster rate by providing a harsher, but nonetheless representative, environment. In such a test, the product is expected to fail in the lab just as it would have failed in the field—but in much less time.
The main objective of an accelerated test is either of the following:
To discover failure modes
To predict the normal field life from the high stress lab life
An accelerated testing program can be broken down into the following steps:
Define objective and scope of the test
Collect required information about the product
Identify the stress(es)
Determine level of stress(es)
Conduct the accelerated test and analyze the collected data.
Common ways to determine a life stress relationship are:
Arrhenius model
Eyring model
Inverse power law model
Temperature–humidity model
Temperature non-thermal model
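Of the models listed above, the Arrhenius model is commonly applied to temperature-driven failure mechanisms. A minimal sketch follows, with an activation energy and temperatures chosen purely for illustration:

# Minimal sketch: Arrhenius acceleration factor between a use temperature and a
# higher stress (test) temperature. Illustrative values only.
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """AF = exp[(Ea/k) * (1/T_use - 1/T_stress)], temperatures converted to kelvin."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

af = arrhenius_af(ea_ev=0.7, t_use_c=55, t_stress_c=125)
print(af)  # ~78 for these assumed values: one stress hour represents ~78 use hours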
Software reliability
Software reliability is a special aspect of reliability engineering. It focuses on foundations and techniques to make software more reliable, i.e., resilient to faults. System reliability, by definition, includes all parts of the system, including hardware, software, supporting infrastructure (including critical external interfaces), operators and procedures. Traditionally, reliability engineering focuses on critical hardware parts of the system. Since the widespread use of digital integrated circuit technology, software has become an increasingly critical part of most electronics and, hence, nearly all present day systems. Therefore, software reliability has gained prominence within the field of system reliability.
There are significant differences, however, in how software and hardware behave.
Most hardware unreliability is the result of a component or material failure that results in the system not performing its intended function. Repairing or replacing the hardware component restores the system to its original operating state.
However, software does not fail in the same sense that hardware fails. Instead, software unreliability is the result of unanticipated results of software operations. Even relatively small software programs can have astronomically large combinations of inputs and states that are infeasible to exhaustively test. Restoring software to its original state only works until the same combination of inputs and states results in the same unintended result. Software reliability engineering must take this into account.
Despite this difference in the source of failure between software and hardware, several software reliability models based on statistics have been proposed to quantify what we experience with software: the longer software is run, the higher the probability that it will eventually be used in an untested manner and exhibit a latent defect that results in a failure (Shooman 1987), (Musa 2005), (Denney 2005).
As with hardware, software reliability depends on good requirements, design and implementation. Software reliability engineering relies heavily on a disciplined software engineering process to anticipate and design against unintended consequences. There is more overlap between software quality engineering and software reliability engineering than between hardware quality and reliability. A good software development plan is a key aspect of the software reliability program. The software development plan describes the design and coding standards, peer reviews, unit tests, configuration management, software metrics and software models to be used during software development.
A common reliability metric is the number of software faults per line of code (FLOC), usually expressed as faults per thousand lines of code. This metric, along with software execution time, is key to most software reliability models and estimates. The theory is that the software reliability increases as the number of faults (or fault density) decreases. Establishing a direct connection between fault density and mean-time-between-failure is difficult, however, because of the way software faults are distributed in the code, their severity, and the probability of the combination of inputs necessary to encounter the fault. Nevertheless, fault density serves as a useful indicator for the reliability engineer. Other software metrics, such as complexity, are also used. This metric remains controversial, since changes in software development and verification practices can have dramatic impact on overall defect rates.
Software testing is an important aspect of software reliability. Even the best software development process results in some software faults that are nearly undetectable until tested. Software is tested at several levels, starting with individual units, through integration and full-up system testing. In all phases of testing, software faults are discovered, corrected, and re-tested. Reliability estimates are updated based on the fault density and other metrics. At a system level, mean-time-between-failure data can be collected and used to estimate reliability. Unlike hardware, performing exactly the same test on exactly the same software configuration does not provide increased statistical confidence. Instead, software reliability uses different metrics, such as code coverage.
The Software Engineering Institute's capability maturity model is a common means of assessing the overall software development process for reliability and quality purposes.
Structural reliability
Structural reliability or the reliability of structures is the application of reliability theory to the behavior of structures. It is used in both the design and maintenance of different types of structures including concrete and steel structures. In structural reliability studies both loads and resistances are modeled as probabilistic variables. Using this approach the probability of failure of a structure is calculated.
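A minimal sketch of this load–resistance formulation (a Monte Carlo estimate assuming independent, normally distributed load and resistance, with parameters invented for illustration):

# Minimal sketch: Monte Carlo estimate of structural failure probability P(R < L),
# with resistance R and load L modeled as independent normal variables.
# Means and standard deviations are illustrative only.
import random

def failure_probability(n=1_000_000, seed=0):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        resistance = rng.gauss(mu=300.0, sigma=30.0)  # e.g. member strength in kN
        load = rng.gauss(mu=200.0, sigma=25.0)        # e.g. applied load in kN
        if resistance < load:
            failures += 1
    return failures / n

print(failure_probability())  # on the order of 5e-3 for these assumed parameters

In practice, analytical reliability-index methods are often used instead of brute-force sampling, but the sketch shows the probability of failure that such analyses compute.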
Comparison to safety engineering
Reliability for safety and reliability for availability are often closely related. Lost availability of an engineering system can cost money. If a subway system is unavailable the subway operator will lose money for each hour the system is down. The subway operator will lose more money if safety is compromised. The definition of reliability is tied to a probability of not encountering a failure. A failure can cause loss of safety, loss of availability or both. It is undesirable to lose safety or availability in a critical system.
Reliability engineering is concerned with overall minimisation of failures that could lead to financial losses for the responsible entity, whereas safety engineering focuses on minimising a specific set of failure types that in general could lead to loss of life, injury or damage to equipment.
Reliability hazards could transform into incidents leading to a loss of revenue for the company or the customer, for example due to direct and indirect costs associated with: loss of production due to system unavailability; unexpected high or low demands for spares; repair costs; man-hours; re-designs or interruptions to normal production.
Safety engineering is often highly specific, relating only to certain tightly regulated industries, applications, or areas. It primarily focuses on system safety hazards that could lead to severe accidents including: loss of life; destruction of equipment; or environmental damage. As such, the related system functional reliability requirements are often extremely high. Although it deals with unwanted failures in the same sense as reliability engineering, it, however, has less of a focus on direct costs, and is not concerned with post-failure repair actions. Another difference is the level of impact of failures on society, leading to a tendency for strict control by governments or regulatory bodies (e.g. nuclear, aerospace, defense, rail and oil industries).
Fault tolerance
Safety can be increased using a 2oo2 cross-checked redundant system. Availability can be increased by using "1oo2" (1 out of 2) redundancy at a part or system level. If the two redundant elements disagree, the more permissive element will maximize availability. A 1oo2 system should never be relied on for safety. Fault-tolerant systems often rely on additional redundancy (e.g. 2oo3 voting logic) where multiple redundant elements must agree on a potentially unsafe action before it is performed. This increases both availability and safety at a system level. This is common practice in aerospace systems that need continued availability and do not have a fail-safe mode. For example, aircraft may use triple modular redundancy for flight computers and control surfaces (including occasionally different modes of operation e.g. electrical/mechanical/hydraulic) as these need to always be operational, because there are no "safe" default positions for control surfaces such as rudders or ailerons when the aircraft is flying.
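Assuming independent channels that each perform their function with probability $R$ (a sketch; real systems must also account for common cause failures and the voting or cross-check logic itself), the architectures above can be compared directly:
$R_{\text{1oo2}} = 1 - (1 - R)^2, \qquad R_{\text{2oo2}} = R^2, \qquad R_{\text{2oo3}} = 3R^2 - 2R^3.$
For $R = 0.9$ these give 0.99, 0.81, and 0.972 respectively: 1oo2 maximizes the chance the function remains available, 2oo2 cross-checking sacrifices availability in exchange for requiring agreement before acting, and 2oo3 voting recovers most of the availability while still requiring agreement, consistent with the paragraph above.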
Basic reliability and mission reliability
The above example of a 2oo3 fault tolerant system increases both mission reliability and safety. However, the "basic" reliability of the system will in this case still be lower than a non-redundant (1oo1) or 2oo2 system. Basic reliability engineering covers all failures, including those that might not result in system failure, but do result in additional cost due to: maintenance repair actions; logistics; spare parts etc. For example, replacement or repair of one faulty channel in a 2oo3 voting system (the system is still operating, although with one failed channel it has actually become a 2oo2 system) contributes to basic unreliability but not to mission unreliability. As an example, the failure of the tail-light of an aircraft will not prevent the plane from flying (and so is not considered a mission failure), but it does need to be remedied (with a related cost, and so does contribute to the basic unreliability levels).
Detectability and common cause failures
When using fault tolerant (redundant) systems or systems that are equipped with protection functions, detectability of failures and avoidance of common cause failures becomes paramount for safe functioning and/or mission reliability.
Reliability versus quality (Six Sigma)
Quality often focuses on manufacturing defects during the warranty phase. Reliability looks at the failure intensity over the whole life of a product or engineering system from commissioning to decommissioning. Six Sigma has its roots in statistical control in quality of manufacturing. Reliability engineering is a specialty part of systems engineering. The systems engineering process is a discovery process that is often unlike a manufacturing process. A manufacturing process is often focused on repetitive activities that achieve high quality outputs with minimum cost and time.
The everyday usage term "quality of a product" is loosely taken to mean its inherent degree of excellence. In industry, a more precise definition of quality as "conformance to requirements or specifications at the start of use" is used. Assuming the final product specification adequately captures the original requirements and customer/system needs, the quality level can be measured as the fraction of product units shipped that meet specifications. Manufactured goods quality often focuses on the number of warranty claims during the warranty period.
Quality is a snapshot at the start of life through the warranty period and is related to the control of lower-level product specifications. This includes time-zero defects i.e. where manufacturing mistakes escaped final Quality Control. In theory the quality level might be described by a single fraction of defective products. Reliability, as a part of systems engineering, acts as more of an ongoing assessment of failure rates over many years. Theoretically, all items will fail over an infinite period of time. Defects that appear over time are referred to as reliability fallout. To describe reliability fallout a probability model that describes the fraction fallout over time is needed. This is known as the life distribution model. Some of these reliability issues may be due to inherent design issues, which may exist even though the product conforms to specifications. Even items that are produced perfectly will fail over time due to one or more failure mechanisms (e.g. due to human error or mechanical, electrical, and chemical factors). These reliability issues can also be influenced by acceptable levels of variation during initial production.
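A widely used life distribution model for describing fallout over time is the Weibull distribution (a sketch; the choice of model and its parameters must be justified from data):
$F(t) = 1 - e^{-(t/\eta)^{\beta}},$
where $\eta$ is the characteristic life and $\beta$ the shape parameter: $\beta < 1$ corresponds to infant mortality (decreasing failure rate), $\beta = 1$ to a constant failure rate, and $\beta > 1$ to wear-out (increasing failure rate).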
Quality and reliability are, therefore, related to manufacturing. Reliability is more targeted towards clients who are focused on failures throughout the whole life of the product such as the military, airlines or railroads. Items that do not conform to product specification will generally do worse in terms of reliability (having a lower MTTF), but this does not always have to be the case. The full mathematical quantification (in statistical models) of this combined relation is in general very difficult or even practically impossible. In cases where manufacturing variances can be effectively reduced, six sigma tools have been shown to be useful to find optimal process solutions which can increase quality and reliability. Six Sigma may also help to design products that are more robust to manufacturing induced failures and infant mortality defects in engineering systems and manufactured product.
In contrast with Six Sigma, reliability engineering solutions are generally found by focusing on reliability testing and system design. Solutions are found in different ways, such as by simplifying a system to allow more of the mechanisms of failure involved to be understood; performing detailed calculations of material stress levels allowing suitable safety factors to be determined; finding possible abnormal system load conditions and using this to increase robustness of a design to manufacturing variance related failure mechanisms. Furthermore, reliability engineering uses system-level solutions, like designing redundant and fault-tolerant systems for situations with high availability needs (see Reliability engineering vs Safety engineering above).
Note: A "defect" in six-sigma/quality literature is not the same as a "failure" (Field failure | e.g. fractured item) in reliability. A six-sigma/quality defect refers generally to non-conformance with a requirement (e.g. basic functionality or a key dimension). Items can, however, fail over time, even if these requirements are all fulfilled. Quality is generally not concerned with asking the crucial question "are the requirements actually correct?", whereas reliability is.
Reliability operational assessment
Once systems or parts are being produced, reliability engineering attempts to monitor, assess, and correct deficiencies. Monitoring includes electronic and visual surveillance of critical parameters identified during the fault tree analysis design stage. Data collection is highly dependent on the nature of the system. Most large organizations have quality control groups that collect failure data on vehicles, equipment and machinery. Consumer product failures are often tracked by the number of returns. For systems in dormant storage or on standby, it is necessary to establish a formal surveillance program to inspect and test random samples. Any changes to the system, such as field upgrades or recall repairs, require additional reliability testing to ensure the reliability of the modification. Since it is not possible to anticipate all the failure modes of a given system, especially ones with a human element, failures will occur. The reliability program also includes a systematic root cause analysis that identifies the causal relationships involved in the failure such that effective corrective actions may be implemented. When possible, system failures and corrective actions are reported to the reliability engineering organization.
Some of the most common methods to apply to a reliability operational assessment are failure reporting, analysis, and corrective action systems (FRACAS). This systematic approach develops a reliability, safety, and logistics assessment based on failure/incident reporting, management, analysis, and corrective/preventive actions. Organizations today are adopting this method and utilizing commercial systems (such as Web-based FRACAS applications) that enable them to create a failure/incident data repository from which statistics can be derived to view accurate and genuine reliability, safety, and quality metrics.
It is extremely important for an organization to adopt a common FRACAS system for all end items. Also, it should allow test results to be captured in a practical way. Failure to adopt one easy-to-use (in terms of ease of data-entry for field engineers and repair shop engineers) and easy-to-maintain integrated system is likely to result in a failure of the FRACAS program itself.
Some of the common outputs from a FRACAS system include Field MTBF, MTTR, spares consumption, reliability growth, failure/incidents distribution by type, location, part no., serial no., and symptom.
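As an illustration of how field MTBF and MTTR might be derived from failure/incident records, here is a minimal sketch; the record layout and all figures are invented for the example and do not come from any particular FRACAS product.

```python
# Each record: (operating hours accumulated by the fleet in the period,
#               number of failures logged, total repair hours spent).
# The numbers below are invented for illustration only.
records = [
    (12_000, 3, 9.5),
    (15_500, 2, 4.0),
    (11_250, 4, 13.0),
]

total_op_hours = sum(r[0] for r in records)
total_failures = sum(r[1] for r in records)
total_repair_hours = sum(r[2] for r in records)

mtbf = total_op_hours / total_failures      # mean operating time between failures
mttr = total_repair_hours / total_failures  # mean time to repair

print(f"Field MTBF: {mtbf:.0f} h")
print(f"MTTR:       {mttr:.1f} h")
```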
The use of past data to predict the reliability of new comparable systems/items can be misleading as reliability is a function of the context of use and can be affected by small changes in design/manufacturing.
Reliability organizations
Systems of any significant complexity are developed by organizations of people, such as a commercial company or a government agency. The reliability engineering organization must be consistent with the company's organizational structure. For small, non-critical systems, reliability engineering may be informal. As complexity grows, the need arises for a formal reliability function. Because reliability is important to the customer, the customer may even specify certain aspects of the reliability organization.
There are several common types of reliability organizations. The project manager or chief engineer may employ one or more reliability engineers directly. In larger organizations, there is usually a product assurance or specialty engineering organization, which may include reliability, maintainability, quality, safety, human factors, logistics, etc. In such case, the reliability engineer reports to the product assurance manager or specialty engineering manager.
In some cases, a company may wish to establish an independent reliability organization. This is desirable to ensure that the system reliability, which is often expensive and time-consuming, is not unduly slighted due to budget and schedule pressures. In such cases, the reliability engineer works for the project day-to-day, but is actually employed and paid by a separate organization within the company.
Because reliability engineering is critical to early system design, it has become common for reliability engineers, however the organization is structured, to work as part of an integrated product team.
Education
Some universities offer graduate degrees in reliability engineering. Other reliability professionals typically have a physics degree from a university or college program. Many engineering programs offer reliability courses, and some universities have entire reliability engineering programs. A reliability engineer may be registered as a professional engineer by the state or province, but this is not required by most employers; not all reliability professionals are engineers. Reliability engineers are required in systems where public safety is at risk. There are many professional conferences and industry training programs available for reliability engineers. Several professional organizations exist for reliability engineers, including the American Society for Quality Reliability Division (ASQ-RD), the IEEE Reliability Society, the American Society for Quality (ASQ), and the Society of Reliability Engineers (SRE).
See also
References
Further reading
Barlow, R. E. and Proschan, F. (1981) Statistical Theory of Reliability and Life Testing, To Begin With Press, Silver Spring, MD.
Blanchard, Benjamin S. (1992), Logistics Engineering and Management (Fourth Ed.), Prentice-Hall, Inc., Englewood Cliffs, New Jersey.
Breitler, Alan L. and Sloan, C. (2005), Proceedings of the American Institute of Aeronautics and Astronautics (AIAA) Air Force T&E Days Conference, Nashville, TN, December, 2005: System Reliability Prediction: towards a General Approach Using a Neural Network.
Ebeling, Charles E., (1997), An Introduction to Reliability and Maintainability Engineering, McGraw-Hill Companies, Inc., Boston.
Denney, Richard (2005) Succeeding with Use Cases: Working Smart to Deliver Quality. Addison-Wesley Professional Publishing. Discusses the use of software reliability engineering in use case driven software development.
Gano, Dean L. (2007), "Apollo Root Cause Analysis" (Third Edition), Apollonian Publications, LLC., Richland, Washington
Holmes, Oliver Wendell Sr. The Deacon's Masterpiece
Horsburgh, Peter (2018), "5 Habits of an Extraordinary Reliability Engineer", Reliability Web
Kapur, K.C., and Lamberson, L.R., (1977), Reliability in Engineering Design, John Wiley & Sons, New York.
Kececioglu, Dimitri, (1991) "Reliability Engineering Handbook", Prentice-Hall, Englewood Cliffs, New Jersey
Trevor Kletz (1998) Process Plants: A Handbook for Inherently Safer Design CRC
Leemis, Lawrence, (1995) Reliability: Probabilistic Models and Statistical Methods, 1995, Prentice-Hall.
MacDiarmid, Preston; Morris, Seymour; et al., (1995), Reliability Toolkit: Commercial Practices Edition, Reliability Analysis Center and Rome Laboratory, Rome, New York.
Modarres, Mohammad; Kaminskiy, Mark; Krivtsov, Vasiliy (1999), Reliability Engineering and Risk Analysis: A Practical Guide, CRC Press.
Musa, John (2005) Software Reliability Engineering: More Reliable Software Faster and Cheaper, 2nd Edition, AuthorHouse.
Neubeck, Ken (2004) "Practical Reliability Analysis", Prentice Hall, New Jersey
Neufelder, Ann Marie, (1993), Ensuring Software Reliability, Marcel Dekker, Inc., New York.
O'Connor, Patrick D. T. (2002), Practical Reliability Engineering (Fourth Ed.), John Wiley & Sons, New York.
Samaniego, Francisco J. (2007) "System Signatures and their Applications in Engineering Reliability", Springer (International Series in Operations Research and Management Science), New York.
Shooman, Martin, (1987), Software Engineering: Design, Reliability, and Management, McGraw-Hill, New York.
Tobias, Trindade, (1995), Applied Reliability, Chapman & Hall/CRC.
Springer Series in Reliability Engineering
Nelson, Wayne B., (2004), Accelerated Testing—Statistical Models, Test Plans, and Data Analysis, John Wiley & Sons, New York.
Bagdonavicius, V., Nikulin, M., (2002), "Accelerated Life Models. Modeling and Statistical Analysis", Chapman & Hall/CRC, Boca Raton.
Todinov, M. (2016), "Reliability and Risk Models: Setting Reliability Requirements", Wiley, ISBN 978-1-118-87332-8.
US standards, specifications, and handbooks
Aerospace Report Number: TOR-2007(8583)-6889 Reliability Program Requirements for Space Systems, The Aerospace Corporation (10 July 2007)
DoD 3235.1-H (3rd Ed) Test and Evaluation of System Reliability, Availability, and Maintainability (A Primer), U.S. Department of Defense (March 1982).
NASA GSFC 431-REF-000370 Flight Assurance Procedure: Performing a Failure Mode and Effects Analysis, National Aeronautics and Space Administration Goddard Space Flight Center (10 August 1996).
IEEE 1332–1998 IEEE Standard Reliability Program for the Development and Production of Electronic Systems and Equipment, Institute of Electrical and Electronics Engineers (1998).
JPL D-5703 Reliability Analysis Handbook, National Aeronautics and Space Administration Jet Propulsion Laboratory (July 1990).
MIL-STD-785B Reliability Program for Systems and Equipment Development and Production, U.S. Department of Defense (15 September 1980). (*Obsolete, superseded by ANSI/GEIA-STD-0009-2008 titled Reliability Program Standard for Systems Design, Development, and Manufacturing, 13 Nov 2008)
MIL-HDBK-217F Reliability Prediction of Electronic Equipment, U.S. Department of Defense (2 December 1991).
MIL-HDBK-217F (Notice 1) Reliability Prediction of Electronic Equipment, U.S. Department of Defense (10 July 1992).
MIL-HDBK-217F (Notice 2) Reliability Prediction of Electronic Equipment, U.S. Department of Defense (28 February 1995).
MIL-STD-690D Failure Rate Sampling Plans and Procedures, U.S. Department of Defense (10 June 2005).
MIL-HDBK-338B Electronic Reliability Design Handbook, U.S. Department of Defense (1 October 1998).
MIL-HDBK-2173 Reliability-Centered Maintenance (RCM) Requirements for Naval Aircraft, Weapon Systems, and Support Equipment, U.S. Department of Defense (30 January 1998); (superseded by NAVAIR 00-25-403).
MIL-STD-1543B Reliability Program Requirements for Space and Launch Vehicles, U.S. Department of Defense (25 October 1988).
MIL-STD-1629A Procedures for Performing a Failure Mode Effects and Criticality Analysis, U.S. Department of Defense (24 November 1980).
MIL-HDBK-781A Reliability Test Methods, Plans, and Environments for Engineering Development, Qualification, and Production, U.S. Department of Defense (1 April 1996).
NSWC-06 (Part A & B) Handbook of Reliability Prediction Procedures for Mechanical Equipment, Naval Surface Warfare Center (10 January 2006).
SR-332 Reliability Prediction Procedure for Electronic Equipment, Telcordia Technologies (January 2011).
FD-ARPP-01 Automated Reliability Prediction Procedure, Telcordia Technologies (January 2011).
GR-357 Generic Requirements for Assuring the Reliability of Components Used in Telecommunications Equipment, Telcordia Technologies (March 2001).
http://standards.sae.org/ja1000/1_199903/ SAE JA1000/1 Reliability Program Standard Implementation Guide
UK standards
In the UK, there are more up to date standards maintained under the sponsorship of UK MOD as Defence Standards. The relevant Standards include:
DEF STAN 00-40 Reliability and Maintainability (R&M)
PART 1: Issue 5: Management Responsibilities and Requirements for Programmes and Plans
PART 4: (ARMP-4)Issue 2: Guidance for Writing NATO R&M Requirements Documents
PART 6: Issue 1: IN-SERVICE R & M
PART 7 (ARMP-7) Issue 1: NATO R&M Terminology Applicable to ARMP's
DEF STAN 00-42 RELIABILITY AND MAINTAINABILITY ASSURANCE GUIDES
PART 1: Issue 1: ONE-SHOT DEVICES/SYSTEMS
PART 2: Issue 1: SOFTWARE
PART 3: Issue 2: R&M CASE
PART 4: Issue 1: Testability
PART 5: Issue 1: IN-SERVICE RELIABILITY DEMONSTRATIONS
DEF STAN 00-43 RELIABILITY AND MAINTAINABILITY ASSURANCE ACTIVITY
PART 2: Issue 1: IN-SERVICE MAINTAINABILITY DEMONSTRATIONS
DEF STAN 00-44 RELIABILITY AND MAINTAINABILITY DATA COLLECTION AND CLASSIFICATION
PART 1: Issue 2: MAINTENANCE DATA & DEFECT REPORTING IN THE ROYAL NAVY, THE ARMY AND THE ROYAL AIR FORCE
PART 2: Issue 1: DATA CLASSIFICATION AND INCIDENT SENTENCING—GENERAL
PART 3: Issue 1: INCIDENT SENTENCING—SEA
PART 4: Issue 1: INCIDENT SENTENCING—LAND
DEF STAN 00-45 Issue 1: RELIABILITY CENTERED MAINTENANCE
DEF STAN 00-49 Issue 1: RELIABILITY AND MAINTAINABILITY MOD GUIDE TO TERMINOLOGY DEFINITIONS
These can be obtained from DSTAN. There are also many commercial standards, produced by many organisations including the SAE, MSG, ARP, and IEE.
French standards
FIDES. The FIDES methodology (UTE-C 80-811) is based on the physics of failures and supported by the analysis of test data, field returns and existing modelling.
UTE-C 80-810 or RDF2000. The RDF2000 methodology is based on the French telecom experience.
International standards
TC 56 Standards: Dependability
External links
John P. Rankin Collection, The University of Alabama in Huntsville Archives and Special Collections NASA reliability engineering research on sneak circuits.
Systems engineering
Design for X
Engineering failures
Software quality
Engineering statistics
Survival analysis
Materials science
Engineering disciplines
Applied probability | Reliability engineering | [
"Physics",
"Materials_science",
"Mathematics",
"Technology",
"Engineering"
] | 14,840 | [
"Systems engineering",
"Applied and interdisciplinary physics",
"Reliability analysis",
"Applied probability",
"Reliability engineering",
"Design for X",
"Applied mathematics",
"Technological failures",
"Materials science",
"Engineering failures",
"Civil engineering",
"nan",
"Engineering sta... |
1,725,432 | https://en.wikipedia.org/wiki/Formal%20charge | In chemistry, a formal charge (F.C. or ), in the covalent view of chemical bonding, is the hypothetical charge assigned to an atom in a molecule, assuming that electrons in all chemical bonds are shared equally between atoms, regardless of relative electronegativity. In simple terms, formal charge is the difference between the number of valence electrons of an atom in a neutral free state and the number assigned to that atom in a Lewis structure. When determining the best Lewis structure (or predominant resonance structure) for a molecule, the structure is chosen such that the formal charge on each of the atoms is as close to zero as possible.
The formal charge of any atom in a molecule can be calculated by the following equation:

FC = V − N − B/2

where V is the number of valence electrons of the neutral atom in isolation (in its ground state); N is the number of non-bonding valence electrons assigned to this atom in the Lewis structure of the molecule; and B is the total number of electrons shared in bonds with other atoms in the molecule. It can also be found visually as shown below.
Formal charge and oxidation state both assign a number to each individual atom within a compound; they are compared and contrasted in a section below.
Examples
Example: CO2 is a neutral molecule with 16 total valence electrons. There are different ways to draw the Lewis structure
Carbon single bonded to both oxygen atoms (carbon = +2, oxygens = −1 each, total formal charge = 0)
Carbon single bonded to one oxygen and double bonded to another (carbon = +1, double-bonded oxygen = 0, single-bonded oxygen = −1, total formal charge = 0)
Carbon double bonded to both oxygen atoms (carbon = 0, oxygens = 0, total formal charge = 0)
Even though all three structures give a total formal charge of zero, the final structure is the superior one because there are no charges in the molecule at all.
Pictorial method
The following is equivalent:
Draw a circle around the atom for which the formal charge is requested (as with carbon dioxide, below)
Count up the number of electrons in the atom's "circle." Since the circle cuts the covalent bond "in half," each covalent bond counts as one electron instead of two.
Subtract the number of electrons in the circle from the number of valence electrons of the neutral atom in isolation (in its ground state) to determine the formal charge.
The formal charges computed for the remaining atoms in this Lewis structure of carbon dioxide are shown below.
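The bookkeeping behind either method can be expressed in a few lines of code. The sketch below simply applies FC = V − N − B/2 to each atom of the double-bonded CO2 structure; the per-atom electron counts are read off the Lewis structure discussed above.

```python
def formal_charge(valence_electrons, nonbonding_electrons, bonding_electrons):
    """Formal charge = V - N - B/2 for one atom in a Lewis structure."""
    return valence_electrons - nonbonding_electrons - bonding_electrons // 2

# O=C=O: carbon shares 8 bonding electrons and keeps no lone pairs;
# each oxygen shares 4 bonding electrons and keeps 2 lone pairs (4 electrons).
atoms = {
    "C": (4, 0, 8),
    "O (left)": (6, 4, 4),
    "O (right)": (6, 4, 4),
}

for name, (v, n, b) in atoms.items():
    print(f"{name}: formal charge = {formal_charge(v, n, b):+d}")
# Every atom comes out at 0, matching the preferred Lewis structure.
```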
It is important to keep in mind that formal charges are just that – formal, in the sense that this system is a formalism. The formal charge system is just a method to keep track of all of the valence electrons that each atom brings with it when the molecule is formed.
Usage conventions
In organic chemistry convention, formal charges are an essential feature of a correctly rendered Lewis–Kekulé structure, and a structure omitting nonzero formal charges is considered incorrect, or at least, incomplete. Formal charges are drawn in close proximity to the atom bearing the charge. They may or may not be enclosed in a circle for clarity.
In contrast, this convention is not followed in inorganic chemistry. Many workers in organometallic and a majority of workers in coordination chemistry will omit formal charges, unless they are needed for emphasis, or they are needed to make a particular point. Instead a top-right corner ⌝ will be drawn following the covalently-bound, charged entity, in turn followed immediately by the overall charge.
The top-right corner ⌝ is sometimes replaced by square brackets enclosing the entire charged species, again with the total charge written in the upper right corner, just outside the brackets.
This difference in practice stems from the relatively straightforward assignment of bond order, valence electron count, and hence, formal charge for compounds only containing main-group elements (though oligomeric compounds like organolithium reagents and enolates tend to be depicted in an oversimplified and idealized manner), but transition metals have an unclear number of valence electrons so there is no unambiguous way to assign formal charges.
Formal charge compared to oxidation state
The formal charge is a tool for estimating the distribution of electric charge within a molecule. The concept of oxidation states constitutes a competing method to assess the distribution of electrons in molecules. If the formal charges and oxidation states of the atoms in carbon dioxide are compared, the following values are arrived at: the carbon atom has a formal charge of 0 but an oxidation state of +4, while each oxygen atom has a formal charge of 0 but an oxidation state of −2.
The reason for the difference between these values is that formal charges and oxidation states represent fundamentally different ways of looking at the distribution of electrons amongst the atoms in the molecule. With the formal charge, the electrons in each covalent bond are assumed to be split exactly evenly between the two atoms in the bond (hence the dividing by two in the method described above). The formal charge view of the CO2 molecule is essentially shown below:
The covalent (sharing) aspect of the bonding is overemphasized in the use of formal charges since in reality there is a higher electron density around the oxygen atoms due to their higher electronegativity compared to the carbon atom. This can be most effectively visualized in an electrostatic potential map.
With the oxidation state formalism, the electrons in the bonds are "awarded" to the atom with the greater electronegativity. The oxidation state view of the CO2 molecule is shown below:
Oxidation states overemphasize the ionic nature of the bonding; the difference in electronegativity between carbon and oxygen is insufficient to regard the bonds as being ionic in nature.
In reality, the distribution of electrons in the molecule lies somewhere between these two extremes. The inadequacy of the simple Lewis structure view of molecules led to the development of the more generally applicable and accurate valence bond theory of Slater, Pauling, et al., and later the molecular orbital theory developed by Mulliken and Hund.
See also
Oxidation state
Valence (chemistry)
Coordination number
References
Chemical bonding
Electric charge | Formal charge | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 1,224 | [
"Physical quantities",
"Electric charge",
"Quantity",
"Condensed matter physics",
"nan",
"Chemical bonding",
"Wikipedia categories named after physical quantities"
] |
1,725,518 | https://en.wikipedia.org/wiki/Relativistic%20Breit%E2%80%93Wigner%20distribution | The relativistic Breit–Wigner distribution (after the 1936 nuclear resonance formula of Gregory Breit and Eugene Wigner) is a continuous probability distribution with the following probability density function,

f(E) = k / ((E² − M²)² + M²Γ²)
where k is a constant of proportionality, equal to

k = 2√2 M Γ γ / (π √(M² + γ))

with

γ = √(M² (M² + Γ²)).
(This equation is written using natural units, ħ = c = 1.)
It is most often used to model resonances (unstable particles) in high-energy physics. In this case, E is the center-of-mass energy that produces the resonance, M is the mass of the resonance, and Γ is the resonance width (or decay width), related to its mean lifetime τ according to τ = 1/Γ. (With units included, the formula is τ = ħ/Γ.)
Usage
The probability of producing the resonance at a given energy E is proportional to f(E), so that a plot of the production rate of the unstable particle as a function of energy traces out the shape of the relativistic Breit–Wigner distribution. Note that for values of E off the maximum at M such that |E² − M²| = MΓ (hence |E − M| ≈ Γ/2 for M ≫ Γ), the distribution has attenuated to half its maximum value, which justifies the name for Γ, width at half-maximum.
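The half-maximum statement can be checked numerically. The snippet below evaluates the (unnormalized) relativistic Breit–Wigner shape for an illustrative resonance, here the Z boson with M ≈ 91.19 GeV and Γ ≈ 2.50 GeV, which are rounded values used only for the example.

```python
def bw_shape(E, M, Gamma):
    """Unnormalized relativistic Breit-Wigner shape: 1 / ((E²−M²)² + M²Γ²)."""
    return 1.0 / ((E**2 - M**2) ** 2 + (M * Gamma) ** 2)

M, Gamma = 91.19, 2.50   # rounded Z-boson mass and width in GeV (illustrative)

peak = bw_shape(M, M, Gamma)
# Energy where E² - M² = MΓ, i.e. roughly E ≈ M + Γ/2 for a narrow resonance.
E_half = (M**2 + M * Gamma) ** 0.5

print(bw_shape(E_half, M, Gamma) / peak)   # -> 0.5 exactly
print((E_half - M) / (Gamma / 2))          # -> close to 1 for a narrow resonance
```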
In the limit of vanishing width, Γ → 0, the particle becomes stable as the Lorentzian distribution sharpens infinitely to 2Mδ(E² − M²), where δ is the Dirac delta function (point impulse).
In general, Γ can also be a function of E; this dependence is typically only important when Γ is not small compared to M and the phase space-dependence of the width needs to be taken into account. (For example, in the decay of the rho meson into a pair of pions.) The factor of M² that multiplies Γ² should also be replaced with E² (or E⁴/M², etc.) when the resonance is wide.
The form of the relativistic Breit–Wigner distribution arises from the propagator of an unstable particle, which has a denominator of the form p² − M² + iMΓ. (Here, p² is the square of the four-momentum carried by that particle in the tree Feynman diagram involved.) The propagator in its rest frame then is proportional to the quantum-mechanical amplitude for the decay utilized to reconstruct that resonance,

1 / ((E² − M²) + iMΓ).
The resulting probability distribution is proportional to the absolute square of the amplitude, which yields the above relativistic Breit–Wigner distribution for the probability density function.
The form of this distribution is similar to the amplitude of the solution to the classical equation of motion for a driven harmonic oscillator damped and driven by a sinusoidal external force. It has the standard resonance form of the Lorentz, or Cauchy, distribution, but involves relativistic variables (here E²). The distribution is the solution of the differential equation for the amplitude squared w.r.t. the energy (frequency) in such a classical forced oscillator,
or rather
with
Gaussian broadening
In experiment, the incident beam that produces resonance always has some spread of energy around a central value. Usually, that is a Gaussian/normal distribution. The resulting resonance shape in this case is given by the convolution of the Breit–Wigner and the Gaussian distribution,
This function can be simplified by introducing new variables,
to obtain
where the relativistic line broadening function has the following definition,
is the relativistic counterpart of the similar line-broadening function for the Voigt profile used in spectroscopy (see also § 7.19 of ).
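A rough numerical sketch of such a convolution is given below; it smears the (unnormalized) Breit–Wigner shape with a Gaussian beam-energy spread on a discrete grid. The resonance and spread values are invented for illustration, and the grid-based convolution is only an approximation of the exact line-broadening function described above.

```python
import numpy as np

def bw_shape(E, M, Gamma):
    """Unnormalized relativistic Breit-Wigner shape."""
    return 1.0 / ((E**2 - M**2) ** 2 + (M * Gamma) ** 2)

# Illustrative resonance and beam-energy spread (values are made up).
M, Gamma, sigma = 91.19, 2.50, 1.5           # GeV
E = np.linspace(M - 20, M + 20, 4001)
dE = E[1] - E[0]

bw = bw_shape(E, M, Gamma)
bw /= bw.sum() * dE                          # normalize numerically

kernel = np.exp(-0.5 * ((E - M) / sigma) ** 2)
kernel /= kernel.sum() * dE                  # unit-area Gaussian, centred on M

observed = np.convolve(bw, kernel, mode="same") * dE   # smeared line shape
print(observed.max() / bw.max())             # broadening lowers the peak (< 1)
```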
References
Continuous distributions
Particle physics | Relativistic Breit–Wigner distribution | [
"Physics"
] | 690 | [
"Particle physics"
] |
1,725,990 | https://en.wikipedia.org/wiki/Ionophore | In chemistry, an ionophore () is a chemical species that reversibly binds ions. Many ionophores are lipid-soluble entities that transport ions across the cell membrane. Ionophores catalyze ion transport across hydrophobic membranes, such as liquid polymeric membranes (carrier-based ion selective electrodes) or lipid bilayers found in the living cells or synthetic vesicles (liposomes). Structurally, an ionophore contains a hydrophilic center and a hydrophobic portion that interacts with the membrane.
Some ionophores are synthesized by microorganisms to import ions into their cells. Synthetic ion carriers have also been prepared. Ionophores selective for cations and anions have found many applications in analysis. These compounds have also shown to have various biological effects and a synergistic effect when combined with the ion they bind.
Classification
Biological activities of metal ion-binding compounds can be changed in response to the increment of the metal concentration, and based on the latter compounds can be classified as "metal ionophores", "metal chelators" or "metal shuttles". If the biological effect is augmented by increasing the metal concentration, it is classified as a "metal ionophore". If the biological effect is decreased or reversed by increasing the metal concentration, it is classified as a "metal chelator". If the biological effect is not affected by increasing the metal concentration, and the compound-metal complex enters the cell, it is classified as a "metal shuttle". The term ionophore (from Greek ion carrier or ion bearer) was proposed by Berton Pressman in 1967 when he and his colleagues were investigating the antibiotic mechanisms of valinomycin and nigericin.
Many ionophores are produced naturally by a variety of microbes, fungi and plants, and act as a defense against competing or pathogenic species. Multiple synthetic membrane-spanning ionophores have also been synthesized.
The two broad classifications of ionophores synthesized by microorganisms are:
Carrier ionophores that bind to a particular ion and shield its charge from the surrounding environment. This makes it easier for the ion to pass through the hydrophobic interior of the lipid membrane. However, these ionophores become unable to transport ions under very low temperatures. An example of a carrier ionophore is valinomycin, a molecule that transports a single potassium cation. Carrier ionophores may be proteins or other molecules.
Channel formers that introduce a hydrophilic pore into the membrane, allowing ions to pass through without coming into contact with the membrane's hydrophobic interior. Channel forming ionophores are usually large proteins. This type of ionophores can maintain their ability to transfer ions at low temperatures, unlike carrier ionophores. Examples of channel-forming ionophores are gramicidin A and nystatin.
Ionophores that transport hydrogen ions (H+, i.e. protons) across the cell membrane are called protonophores. Iron ionophores and chelating agents are collectively called siderophores.
Synthetic ionophores
Many synthetic ionophores are based on crown ethers, cryptands, and calixarenes. Pyrazole-pyridine and bis-pyrazole derivatives have also been synthesized. These synthetic species are often macrocyclic. Some synthetic agents are not macrocyclic, e.g. carbonyl cyanide-p-trifluoromethoxyphenylhydrazone. Even simple organic compounds, such as phenols, exhibit ionophoric properties. The majority of synthetic receptors used in the carrier-based anion-selective electrodes employ transition elements or metalloids as anion carriers, although simple organic urea- and thiourea-based receptors are known.
Mechanism of action
Ionophores are chemical compounds that reversibly bind and transport ions through biological membranes in the absence of a protein pore. This can disrupt the membrane potential, and thus these substances could exhibit cytotoxic properties. Ionophores modify the permeability of biological membranes toward certain ions to which they show affinity and selectivity. Many ionophores are lipid-soluble and transport ions across hydrophobic membranes, such as lipid bilayers found in the living cells or synthetic vesicles (liposomes), or liquid polymeric membranes (carrier-based ion selective electrodes). Structurally, an ionophore contains a hydrophilic center and a hydrophobic portion that interacts with the membrane. Ions are bound to the hydrophilic center and form an ionophore-ion complex. The structure of the ionophore-ion complex has been verified by X-ray crystallography.
Chemistry
Several chemical factors affect the ionophore activity. The activity of an ionophore-metal complex depends on its geometric configuration and the coordinating sites and atoms which create coordination environment surrounding the metal center. This affects the selectivity and affinity towards a certain ion. Ionophores can be selective to a particular ion but may not be exclusive to it. Ionophores facilitate the transport of ions across biological membranes most commonly via passive transport, which is affected by lipophilicity of the ionophore molecule. The increase in lipophilicity of the ionophore-metal complex enhances its permeability through lipophilic membranes. The hydrophobicity and hydrophilicity of the complex also determines whether it will slow down or ease the transport of metal ions into cell compartments. The reduction potential of a metal complex influences its thermodynamic stability and affects its reactivity. The ability of an ionophore to transfer ions is also affected by the temperature.
Biological properties
Ionophores are widely used in cell physiology experiments and biotechnology as these compounds can effectively perturb gradients of ions across biological membranes and thus they can modulate or enhance the role of key ions in the cell. Many ionophores have shown antibacterial and antifungal activities. Some of them also act against insects, pests and parasites. Some ionophores have been introduced into medicinal products for dermatological and veterinary use. A large amount of research has been directed toward investigating novel antiviral, anti-inflammatory, anti-tumor, antioxidant and neuroprotective properties of different ionophores.
Chloroquine is an antimalarial and antiamebic drug. It is also used in the management of rheumatoid arthritis and lupus erythematosus. Pyrithione is used as an anti-dandruff agent in medicated shampoos for seborrheic dermatitis. It also serves as an anti-fouling agent in paints to cover and protect surfaces against mildew and algae. Clioquinol and PBT2 are 8-hydroxyquinoline derivatives. Clioquinol has antiprotozoal and topical antifungal properties, however its use as an antiprotozoal agent has been widely restricted because of neurotoxic concerns. Clioquinol and PBT2 are currently being studied for neurodegenerative diseases, such as Alzheimer's disease, Huntington's disease and Parkinson's disease. Gramicidin is used in throat lozenges and has been used to treat infected wounds. Epigallocatechin gallate is used in many dietary supplements and has shown slight cholesterol-lowering effects. Quercetin has a bitter flavor and is used as a food additive and in dietary supplements. Hinokitiol (β-thujaplicin) is used in commercial products for skin, hair and oral care, insect repellents and deodorants. It is also used as a food additive, shelf-life extending agent in food packaging, and wood preservative in timber treatment.
Polyene antimycotics, such as nystatin, natamycin and amphotericin B, are a subgroup of macrolides and are widely used antifungal and antileishmanial medications. These drugs act as ionophores by binding to ergosterol in the fungal cell membrane and making it leaky and permeable for K+ and Na+ ions, as a result contributing to fungal cell death.
Carboxylic ionophores, i.e. monensin, lasalocid, salinomycin, narasin, maduramicin, semduramycin and laidlomycin, are marketed globally and widely used as anticoccidial feed additives to prevent and treat coccidiosis in poultry. Some of these compounds have also been used as growth and production promoters in certain ruminants, such as cattle, and chickens, however this use has been mainly restricted because of safety issues.
Zinc ionophores have been shown to inhibit replication of various viruses in vitro, including coxsackievirus, equine arteritis virus, coronavirus, HCV, HSV, HCoV-229E, HIV, mengovirus, MERS-CoV, rhinovirus, SARS-CoV-1, Zika virus.
See also
Siderophore
Protonophore
Chelation
References
External links
Fluka ionophores for ion-selective electrodes
Medical Information database Reference.MD
Structures and Properties of Naturally Occurring Polyether Antibiotics, J. Rutkowski, B. Brzezinski; open access review article
Polyether ionophores—promising bioactive molecules for cancer therapy, A. Huczyński; open access review article
Ionophores
Membrane biology
Charge carriers
Ions | Ionophore | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,036 | [
"Physical phenomena",
"Charge carriers",
"Membrane biology",
"Electrical phenomena",
"Condensed matter physics",
"Molecular biology",
"Ions",
"Matter"
] |
16,839,536 | https://en.wikipedia.org/wiki/Rotating%20disk%20electrode | In analytical chemistry, a rotating disk electrode (RDE) is a working electrode used in three-electrode systems for hydrodynamic voltammetry. The electrode rotates during experiments, inducing a flux of analyte to the electrode. These working electrodes are used in electrochemical studies when investigating reaction mechanisms related to redox chemistry, among other chemical phenomena. The more complex rotating ring-disk electrode can be used as a rotating disk electrode if the ring is left inactive during the experiment.
Structure
The electrode includes a conductive disk embedded in an inert non-conductive polymer or resin that can be attached to an electric motor that has very fine control of the electrode's rotation rate. The disk, like any working electrode, is generally made of a noble metal or glassy carbon, however any conductive material can be used based on specific needs.
Function
The disk's rotation is usually described in terms of angular velocity. As the disk turns, some of the solution described as the hydrodynamic boundary layer is dragged by the spinning disk and the resulting centrifugal force flings the solution away from the center of the electrode. Solution flows up, perpendicular to the electrode, from the bulk to replace the boundary layer. The sum result is a laminar flow of solution towards and across the electrode. The rate of the solution flow can be controlled by the electrode's angular velocity and modeled mathematically. This flow can quickly achieve conditions in which the steady-state current is controlled by the solution flow rather than diffusion. This is a contrast to still and unstirred experiments such as cyclic voltammetry where the steady-state current is limited by the diffusion of species in solution.
By running linear sweep voltammetry and other experiments at various rotation rates, different electrochemical phenomena can be investigated, including multi-electron transfer, the kinetics of a slow electron transfer, adsorption/desorption steps, and electrochemical reaction mechanisms.
Differences in behavior from stationary electrodes
Potential sweep reversals as used in cyclic voltammetry are different for an RDE system, since the products of the potential sweep are continually swept away from the electrode. A reversal would produce a similar i-E curve, which would closely match the forward scan, except for capacitive charging current. An RDE cannot be used to observe the behavior of the electrode reaction products, since they are continually swept away from the electrode. However, the rotating ring-disk electrode is well suited to investigate this further reactivity. The peak current in a cyclic voltammogram for an RDE is a plateau-like region, governed by the Levich equation. The limiting current is typically much higher than the peak current of a stationary electrode, because the mass transport of reactants is actively stimulated by the rotating disk and not just governed by diffusion, as is the case for a stationary electrode. Any rotating disk electrode can, of course, also be used as a stationary electrode by using it with the rotator turned off.
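A rough sketch of how the Levich limiting current scales with rotation rate is shown below; it uses the commonly quoted form i_L = 0.620 n F A D^(2/3) ω^(1/2) ν^(−1/6) C in cm-based units, and the parameter values (a ferricyanide-like 1 mM aqueous solution) are illustrative assumptions rather than data from the source.

```python
import math

F = 96485.0  # Faraday constant, C/mol

def levich_current(n, area_cm2, D_cm2_s, omega_rad_s, nu_cm2_s, conc_mol_cm3):
    """Levich limiting current (A) for a rotating disk electrode."""
    return (0.620 * n * F * area_cm2 * D_cm2_s ** (2.0 / 3.0)
            * omega_rad_s ** 0.5 * nu_cm2_s ** (-1.0 / 6.0) * conc_mol_cm3)

# Illustrative values: 1-electron couple, 0.2 cm² disk, D = 7.6e-6 cm²/s,
# ν = 0.01 cm²/s (aqueous), 1 mM analyte (1e-6 mol/cm³).
for rpm in (400, 900, 1600, 2500):
    omega = 2 * math.pi * rpm / 60.0
    i_L = levich_current(1, 0.2, 7.6e-6, omega, 0.01, 1e-6)
    print(f"{rpm:>4} rpm: i_L = {i_L * 1e6:.1f} µA")
```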
See also
Liquid metal electrode
References
Electroanalytical chemistry devices
Electrodes
Rotation | Rotating disk electrode | [
"Physics",
"Chemistry"
] | 627 | [
"Physical phenomena",
"Electroanalytical chemistry",
"Electrodes",
"Classical mechanics",
"Rotation",
"Motion (physics)",
"Electroanalytical chemistry devices",
"Electrochemistry"
] |
16,839,650 | https://en.wikipedia.org/wiki/Rotating%20ring-disk%20electrode | In analytical chemistry, a rotating ring-disk electrode (RRDE) is a double working electrode used in hydrodynamic voltammetry, very similar to a rotating disk electrode (RDE). The electrode rotates during experiments inducing a flux of analyte to the electrode. This system used in electrochemical studies when investigating reaction mechanisms related to redox chemistry and other chemical phenomena.
Structure
The difference between a rotating ring-disk electrode and a rotating disk electrode is the addition of a second working electrode in the form of a ring around the central disk of the first working electrode. To operate such an electrode, it is necessary to use a potentiostat, such as a bipotentiostat, capable of controlling a four-electrode system. The two electrodes are separated by a non-conductive barrier and connected to the potentiostat through different leads. This rotating hydrodynamic electrode motif can be extended to rotating double-ring electrodes, rotating double-ring-disk electrodes, and even more esoteric constructions, as suited to the experiment.
Function
The RRDE takes advantage of the laminar flow created during rotation. As the system is rotated, the solution in contact with the electrode is driven to its side, similar to the situation of a rotating disk electrode. As the solution flows to the side, it crosses the ring electrode and flows back into the bulk solution. If the flow in the solution is laminar, the solution is brought in contact with the disk and with the ring quickly afterward, in a very controlled manner. The resulting currents depend on the potential, area, and spacing of the electrodes, as well as the rotation speed and the substrate.
This design makes a variety of experiments possible, for example a complex could be oxidized at the disk and then reduced back to the starting material at the ring. It is easy to predict what the ring/disk current ratio is if this process is entirely controlled by the flow of solution. If it is not controlled by the flow of the solution, the current will deviate. For example, if the first oxidation is followed by a chemical reaction, an EC mechanism, to form a product that cannot be reduced at the ring, then the magnitude of the ring current would be reduced. By varying the rate of rotation it is possible to determine the rate of the chemical reaction if it is in the proper kinetic regime.
Applications
The RRDE setup allows for many additional experiments well beyond the capacity of a RDE. For example, while one electrode conducts linear sweep voltammetry the other can be kept at a constant potential or also swept in a controlled manner. Step experiments with each electrode acting independently can be conducted. These as well as many other extremely elegant experiments are possible, including those tailored to the needs of a given system. Such experiments are useful in studying multi-electrons processes, the kinetics of a slow electron transfer, adsorption/desorption steps, and electrochemical reaction mechanisms.
The RRDE is an important tool for characterizing the fundamental properties of electrocatalysts used in fuel cells. For example, in a proton exchange membrane (PEM) fuel cell, dioxygen reduction at the cathode is often enhanced by an electrocatalyst comprising platinum nanoparticles. When oxygen is reduced using an electrocatalyst, an unwanted and harmful by-product, hydrogen peroxide, may be produced. Hydrogen peroxide can damage the internal components of a PEM fuel cell, so oxygen-reduction electrocatalysts are engineered in such a way as to limit the amount of peroxide formed. An RRDE "collection experiment" can be used to probe the peroxide generating tendencies of an electrocatalyst. In this experiment, the disk is coated with a thin layer bearing the electrocatalyst, and the disk electrode is poised at a potential which reduces the oxygen. Any products generated at the disk electrode are then swept past the ring electrode. The potential of the ring electrode is poised to detect any hydrogen peroxide that may have been generated at the disk.
Design considerations
In general, narrowing the gap between the disk outer diameter and the ring inner diameter allows probing of systems with faster kinetics. A narrow gap reduces the "transit time" necessary for an intermediate species generated at the disk to successfully reach the ring electrode and be detected. Using precision machining techniques, it is possible to make gaps between 0.1 and 0.5 millimeters, and narrower gaps have been created using microlithography techniques.
Another important parameter for an RRDE is the "collection efficiency". This parameter is a measure of the percentage of the material generated at the disk electrode which is detected at the ring electrode. For any given set of RRDE dimensions (disk OD, ring ID, and ring OD), the collection efficiency can be computed using formulae derived from fluid dynamics first principles. One useful aspect of the theoretical collection efficiency is that it is only a function of the RRDE dimensions. That is, it is independent of the rotation rate over a wide range of rotation rates.
It is desirable for an RRDE to have a large collection efficiency if only to assure that the current signal measured at the ring electrode is detectable. On the other hand, it also desirable for an RRDE to have a small transit time so that short-lived (unstable) intermediate products generated at the disk survive long enough to be detected at the ring. The choice of actual RRDE dimensions is often a trade-off between a large collection efficiency or a short transit time.
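In practice the collection efficiency is often measured rather than computed, by running a well-behaved reversible couple and comparing ring and disk currents, and then reused to interpret a collection experiment such as the peroxide measurement described above. The sketch below does both steps; the sign convention, the two-electron peroxide accounting and the current values follow commonly used ORR practice, but the numbers themselves are invented for illustration.

```python
def collection_efficiency(i_ring_cal, i_disk_cal):
    """Empirical collection efficiency N = -i_ring / i_disk,
    measured with a stable, reversible couple (e.g. ferri/ferrocyanide)."""
    return -i_ring_cal / i_disk_cal

def peroxide_fraction(i_disk, i_ring, N):
    """Commonly used ORR estimate of the H2O2 fraction from RRDE currents."""
    return 2.0 * (i_ring / N) / (abs(i_disk) + i_ring / N)

# Hypothetical calibration: 100 µA reduction at the disk gives 24 µA at the ring.
N = collection_efficiency(i_ring_cal=24e-6, i_disk_cal=-100e-6)

# Hypothetical oxygen-reduction measurement on a catalyst-coated disk.
x_h2o2 = peroxide_fraction(i_disk=-350e-6, i_ring=10e-6, N=N)

print(f"Collection efficiency N ≈ {N:.2f}")
print(f"Estimated H2O2 fraction ≈ {x_h2o2:.1%}")
```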
See also
Hydrodynamic technique
Liquid metal electrode
Rotating disk electrode
Voltammetry
Working electrode
References
Electroanalytical chemistry devices
Electrodes
Rotation | Rotating ring-disk electrode | [
"Physics",
"Chemistry"
] | 1,159 | [
"Physical phenomena",
"Electroanalytical chemistry",
"Electrodes",
"Classical mechanics",
"Rotation",
"Electrochemistry",
"Motion (physics)",
"Electroanalytical chemistry devices"
] |
16,844,676 | https://en.wikipedia.org/wiki/W%20%26%20T%20Avery | W & T Avery Ltd. (later GEC Avery) was a British manufacturer of weighing machines. The company was founded in the early 18th century and took the name W & T Avery in 1818. Having been taken over by GEC in 1979 the company was later renamed into GEC-Avery. The company became Avery Berkel in 1993 when GEC acquired the Dutch company Berkel. After the take over by Weigh-Tronix in 2000 the company was again renamed to be called Avery Weigh-Tronix with Avery Berkel continuing to operate as a brand. The company is based in Smethwick, West Midlands, United Kingdom.
History
The undocumented origin of the company goes back to 1730 when James Ford established the business in Digbeth. On the death of Joseph Balden, the then owner, in 1813, William and Thomas Avery took over his scalemaking business and in 1818 renamed it W & T Avery. The business rapidly expanded and in 1885 it owned three factories: the Atlas Works in West Bromwich, the Mill Lane Works in Birmingham and the Moat Lane Works in Digbeth. In 1891 the business became a limited company with a board of directors and in 1894 the shares were quoted on the London Stock Exchange. In 1895 the company bought the legendary Soho Foundry in Smethwick, a former steam engine factory owned by James Watt & Co. In 1897 the move was complete and the steam engine business was gradually converted to pure manufacture of weighing machines. The turn of the century was marked by managing director William Hipkins' determined efforts to broaden the renown of the Avery brand and transform the business into a specialist manufacturer of weighing machines. By 1914 the company occupied an area of 32,000 m² and had some 3000 employees.
In the inter-war period, the growth continued with the addition of specialised shops for cast parts, enamel paints and weighbridge assembly, and the product range diversified into counting machines, testing machines, automatic packing machines and petrol pumps. During the Second World War the company also produced various types of heavy guns. At that time the site underwent severe damage from parachute mines and incendiary bombs, some of the many that landed on the town of Smethwick.
From 1931 to 1973 the company occupied the 18th-century Middlesex Sessions House in Clerkenwell as its headquarters.
Changes in weighing machine technology after World War II led to the closure of the foundry, the introduction of load cells and electronic weighing with the simultaneous gradual disappearance of purely mechanical devices.
After almost a century of national and international expansion the company was taken over by GEC in 1979. Keith Hodgkinson, managing director at the time, completed the turn-around from mechanical to electronic weighing with a complete overhaul of the product range of retail scales and industrial platform scales. Avery Berkel started in 1993 when the British conglomerate General Electric Company combined its GEC Avery (formerly W & T Avery) business with the newly acquired Berkel company. The group continued as a subsidiary of GEC (and later Marconi plc) until March 2000, when the business was in turn acquired for £102.5 million by the American company Weigh-Tronix to form Avery Weigh-Tronix. Avery Berkel continued to operate as a brand of the newly created company, with GEC Avery being absorbed into the Avery Berkel brand, and Avery Berkel has continued as the commercial brand of Avery Weigh-Tronix.
In September 2007, Illinois Tool Works acquired Avery Berkel from Avery Weigh-Tronix. Illinois Tool Works acquired Avery Weigh-Tronix one year later, but kept the two companies separate.
In 2015, the Avery museum, which had existed for almost nine decades, was closed and the collection dispersed.
Avery Berkel is a brand and major manufacturer of commercial weighing machines owned by Illinois Tool Works.
Products
The Avery Berkel product range includes:
ValuMax scale
CodeChecker
Intelligent Shelf Edge Labels (iSEL)
Allergen labelling compliance
Self-Service AI
Linerless auto-cutter
Acquisitions
The company has made several large acquisitions over the years that have contributed to its size.
1895 James Watt & Co
1899 Parnall & Sons Ltd.
1920/1928 Southall and Smith Ltd.
1920 Saml. Denison & Son Ltd. (name changed to Avery-Denison Ltd. in 1970)
1925 Oertling Ltd.
1931 The Tan Sad Chair Co. (1931) Ltd.
1932 Avery-Hardoll Ltd.
1953/1976 Pump Maintenance Ltd.
1959 Geo Driver & Son Ltd.; merged in 1966 with Southall and Smith Ltd. to form Driver Southall Ltd.
1968 Stanton Redcroft Ltd.
1973 Telomex Ltd.
At some point, the company owned Haseley Manor, in Warwickshire.
See also
Roberval Balance
Weighbridge
Sir William Beilby Avery
Literature
Ernest Pendarves Leigh-Bennett, Weighing the World: an impression after two hundred years of the past history of an English house of business, and of its present activities and influence throughout the world of weighing, 1730–1930, Birmingham, 1930
Walter Keith Vernon Gale, Soho Foundry, Birmingham, 1948
Monopolies and Mergers Commission, The General Electric Company Limited and Averys Limited: a report on the proposed merger, London, 1979,
L H Broadbent, "The Avery Business (1730–1918)", W & T Avery, Birmingham, 1949
References
External links
www.averyweigh-tronix.com Corporate website
Avery Chronology of the Avery company
The Soho Foundry History of the company's main site
Chapter One of The General Electric Company Limited and Averys Limited: A Report on the Proposed Merger
Chapter Four of The General Electric Company Limited and Averys Limited: A Report on the Proposed Merger
Names on Weights A list of manufacturers whose names appears on British weights
Companies based in Smethwick
Industrial Revolution
Manufacturing companies of the United Kingdom
Weighing instruments | W & T Avery | [
"Physics",
"Technology",
"Engineering"
] | 1,190 | [
"Weighing instruments",
"Mass",
"Matter",
"Measuring instruments"
] |
16,846,511 | https://en.wikipedia.org/wiki/CMTM2 | CKLF-like MARVEL transmembrane domain-containing protein 2 (i.e. CMTM2), previously termed chemokine-like factor superfamily 2 ( i.e. CKLFSF2), is a member of the CKLF-like MARVEL transmembrane domain-containing family (CMTM) of proteins. In humans, it is encoded by the CMTM2 gene located in band 22 on the long (i.e. "q") arm of chromosome 16. CMTM2 protein is expressed in the bone marrow and various circulating blood cells. It is also highly expressed in testicular tissues: The CMTM2 gene and CMTM2 protein, it is suggested, may play an important role in testicular development.
Studies find that the levels of CMTM2 protein in hepatocellular carcinoma tissues of patients are lower than their levels in normal liver tissues. CMTM2 protein levels were also lower in the hepatocellular carcinoma tissues that had a more aggressive pathology and therefore a possible poorer prognosis. Finally, the forced overexpression of CMTM2 protein in cultured hepatocellular tumor cells inhibited their invasiveness and migration. These findings suggest that CMTM2 protein suppresses the development and/or progression of hepatocellular carcinoma and therefore that the CMTM2 gene acts as a tumor suppressor in this cancer. Patients with higher CMTM2 levels in their linitis plastica stomach cancer (i.e. a type of gastric cancer also termed diffuse-type gastric cancer or diffuse type adenocarcinoma of the stomach) tissues had better prognoses than patients with lower CMTM2 levels in their linitis plastica tissues. In addition, the CMTM2 gene has been found to be more highly expressed in the salivary gland adenoid cystic carcinoma tissues of patients who did not develop tumor recurrences or perineural invasion of their carcinomas compared to the expression of this gene in patients whose adenoid cystic carcinoma tissues went on to develop these complications. These findings suggest that the CMTM2 gene may act as a tumor suppressor not only in hepatocellular carcinoma but also in linitis plastica and salivary gland adenoid cystic carcinoma. Further studies are needed to confirm these findings and determine if CMTM2 protein can serve as a marker for the severity of these three cancers and/or as a therapeutic target for treating them.
References
Further reading
External links
Human proteins
Gene expression | CMTM2 | [
"Chemistry",
"Biology"
] | 528 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
16,846,849 | https://en.wikipedia.org/wiki/Partition%20function%20%28mathematics%29 | The partition function or configuration integral, as used in probability theory, information theory and dynamical systems, is a generalization of the definition of a partition function in statistical mechanics. It is a special case of a normalizing constant in probability theory, for the Boltzmann distribution. The partition function occurs in many problems of probability theory because, in situations where there is a natural symmetry, its associated probability measure, the Gibbs measure, has the Markov property. This means that the partition function occurs not only in physical systems with translation symmetry, but also in such varied settings as neural networks (the Hopfield network), and applications such as genomics, corpus linguistics and artificial intelligence, which employ Markov networks, and Markov logic networks. The Gibbs measure is also the unique measure that has the property of maximizing the entropy for a fixed expectation value of the energy; this underlies the appearance of the partition function in maximum entropy methods and the algorithms derived therefrom.
The partition function ties together many different concepts, and thus offers a general framework in which many different kinds of quantities may be calculated. In particular, it shows how to calculate expectation values and Green's functions, forming a bridge to Fredholm theory. It also provides a natural setting for the information geometry approach to information theory, where the Fisher information metric can be understood to be a correlation function derived from the partition function; it happens to define a Riemannian manifold.
When the setting for random variables is on complex projective space or projective Hilbert space, geometrized with the Fubini–Study metric, the theory of quantum mechanics and more generally quantum field theory results. In these theories, the partition function is heavily exploited in the path integral formulation, with great success, leading to many formulas nearly identical to those reviewed here. However, because the underlying measure space is complex-valued, as opposed to the real-valued simplex of probability theory, an extra factor of i appears in many formulas. Tracking this factor is troublesome, and is not done here. This article focuses primarily on classical probability theory, where the sum of probabilities total to one.
Definition
Given a set of random variables X_i taking on values x_i, and some sort of potential function or Hamiltonian H(x_1, x_2, …), the partition function is defined as

Z(β) = Σ_{x_i} exp(−β H(x_1, x_2, …))

The function H is understood to be a real-valued function on the space of states {x_1, x_2, …}, while β is a real-valued free parameter (conventionally, the inverse temperature). The sum over the x_i is understood to be a sum over all possible values that each of the random variables X_i may take. Thus, the sum is to be replaced by an integral when the X_i are continuous, rather than discrete. Thus, one writes

Z(β) = ∫ exp(−β H(x_1, x_2, …)) dx_1 dx_2 ⋯

for the case of continuously-varying random variables X_i.
When H is an observable, such as a finite-dimensional matrix or an infinite-dimensional Hilbert space operator or element of a C-star algebra, it is common to express the summation as a trace, so that

Z(β) = tr[exp(−β H)]

When H is infinite-dimensional, then, for the above notation to be valid, the argument exp(−β H) must be trace class, that is, of a form such that the summation exists and is bounded.
The number of variables need not be countable, in which case the sums are to be replaced by functional integrals. Although there are many notations for functional integrals, a common one would be

Z = ∫ Dφ exp(−β H[φ])
Such is the case for the partition function in quantum field theory.
A common, useful modification to the partition function is to introduce auxiliary functions. This allows, for example, the partition function to be used as a generating function for correlation functions. This is discussed in greater detail below.
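As a concrete illustration of the definition, the sketch below evaluates the partition function of a small chain of Ising-like spins by brute-force summation over all configurations; the Hamiltonian (nearest-neighbour coupling with free ends) and the parameter values are chosen just for the example.

```python
import itertools
import math

def hamiltonian(spins, J=1.0):
    """Nearest-neighbour coupling, free boundary: H = -J Σ s_i s_{i+1}."""
    return -J * sum(s1 * s2 for s1, s2 in zip(spins, spins[1:]))

def partition_function(n_spins, beta, J=1.0):
    """Z(β) = sum over all 2**n configurations of exp(-β H)."""
    return sum(
        math.exp(-beta * hamiltonian(spins, J))
        for spins in itertools.product((-1, +1), repeat=n_spins)
    )

beta = 0.7
Z = partition_function(6, beta)
print(f"Z(beta={beta}) = {Z:.4f}")
```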
The parameter β
The role or meaning of the parameter β can be understood in a variety of different ways. In classical thermodynamics, it is an inverse temperature. More generally, one would say that it is the variable that is conjugate to some (arbitrary) function H of the random variables X. The word conjugate here is used in the sense of conjugate generalized coordinates in Lagrangian mechanics, thus, properly β is a Lagrange multiplier. It is not uncommonly called the generalized force. All of these concepts have in common the idea that one value is meant to be kept fixed, as others, interconnected in some complicated way, are allowed to vary. In the current case, the value to be kept fixed is the expectation value of H, even as many different probability distributions can give rise to exactly this same (fixed) value.
For the general case, one considers a set of functions {H_k(x_1, …)} that each depend on the random variables X_i. These functions are chosen because one wants to hold their expectation values constant, for one reason or another. To constrain the expectation values in this way, one applies the method of Lagrange multipliers. In the general case, maximum entropy methods illustrate the manner in which this is done.
Some specific examples are in order. In basic thermodynamics problems, when using the canonical ensemble, the use of just one parameter reflects the fact that there is only one expectation value that must be held constant: the free energy (due to conservation of energy). For chemistry problems involving chemical reactions, the grand canonical ensemble provides the appropriate foundation, and there are two Lagrange multipliers. One is to hold the energy constant, and another, the fugacity, is to hold the particle count constant (as chemical reactions involve the recombination of a fixed number of atoms).
For the general case, one has

Z(β) = Σ_{x_i} exp(−Σ_k β_k H_k(x_i))

with β = (β_1, β_2, …) a point in a space.
For a collection of observables H_k, one would write

Z(β) = tr[exp(−Σ_k β_k H_k)]
As before, it is presumed that the argument of tr is trace class.
The corresponding Gibbs measure then provides a probability distribution such that the expectation value of each H_k is a fixed value. More precisely, one has

⟨H_k⟩ = −∂ log Z(β) / ∂β_k

with the angle brackets ⟨H_k⟩ denoting the expected value of H_k, and E[H_k] being a common alternative notation. A precise definition of this expectation value is given below.
Although the value of β is commonly taken to be real, it need not be, in general; this is discussed in the section Normalization below. The values of β can be understood to be the coordinates of points in a space; this space is in fact a manifold, as sketched below. The study of these spaces as manifolds constitutes the field of information geometry.
Symmetry
The potential function itself commonly takes the form of a sum:

H(x_1, x_2, …) = Σ_s V(s)

where the sum over s is a sum over some subset of the power set P(X) of the set X = {x_1, x_2, …}. For example, in statistical mechanics, such as the Ising model, the sum is over pairs of nearest neighbors. In probability theory, such as Markov networks, the sum might be over the cliques of a graph; so, for the Ising model and other lattice models, the maximal cliques are edges.
The fact that the potential function can be written as a sum usually reflects the fact that it is invariant under the action of a group symmetry, such as translational invariance. Such symmetries can be discrete or continuous; they materialize in the correlation functions for the random variables (discussed below). Thus a symmetry in the Hamiltonian becomes a symmetry of the correlation function (and vice versa).
This symmetry has a critically important interpretation in probability theory: it implies that the Gibbs measure has the Markov property; that is, it is independent of the random variables in a certain way, or, equivalently, the measure is identical on the equivalence classes of the symmetry. This leads to the widespread appearance of the partition function in problems with the Markov property, such as Hopfield networks.
As a measure
The value of the expression
can be interpreted as a likelihood that a specific configuration of values occurs in the system. Thus, given a specific configuration ,
is the probability of the configuration occurring in the system, which is now properly normalized so that , and such that the sum over all configurations totals to one. As such, the partition function can be understood to provide a measure (a probability measure) on the probability space; formally, it is called the Gibbs measure. It generalizes the narrower concepts of the grand canonical ensemble and canonical ensemble in statistical mechanics.
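In the discrete, single-parameter case this normalization is usually written as follows (standard notation assumed):

```latex
P(x) \;=\; \frac{1}{Z(\beta)}\, e^{-\beta H(x)},
\qquad
Z(\beta) \;=\; \sum_{x} e^{-\beta H(x)},
\qquad
\sum_{x} P(x) \;=\; 1 .
```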
There exists at least one configuration for which the probability is maximized; this configuration is conventionally called the ground state. If the configuration is unique, the ground state is said to be non-degenerate, and the system is said to be ergodic; otherwise the ground state is degenerate. The ground state may or may not commute with the generators of the symmetry; if it commutes, it is said to be an invariant measure. When it does not commute, the symmetry is said to be spontaneously broken.
Conditions under which a ground state exists and is unique are given by the Karush–Kuhn–Tucker conditions; these conditions are commonly used to justify the use of the Gibbs measure in maximum-entropy problems.
Normalization
The values taken by depend on the mathematical space over which the random field varies. Thus, real-valued random fields take values on a simplex: this is the geometrical way of saying that the sum of probabilities must total to one. For quantum mechanics, the random variables range over complex projective space (or complex-valued projective Hilbert space), where the random variables are interpreted as probability amplitudes. The emphasis here is on the word projective, as the amplitudes are still normalized to one. The normalization for the potential function is the Jacobian for the appropriate mathematical space: it is 1 for ordinary probabilities, and i for Hilbert space; thus, in quantum field theory, one sees a factor of i in the exponential, rather than a real −β. The partition function is very heavily exploited in the path integral formulation of quantum field theory, to great effect. The theory there is very nearly identical to that presented here, aside from this difference, and the fact that it is usually formulated on four-dimensional space-time, rather than in a general way.
Expectation values
The partition function is commonly used as a probability-generating function for expectation values of various functions of the random variables. So, for example, taking as an adjustable parameter, then the derivative of with respect to
gives the average (expectation value) of H. In physics, this would be called the average energy of the system.
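In the single-parameter case, the derivative identity being described is conventionally written as follows, with the sign convention matching the exponent e^(−βH) used in the sketches above (notation assumed):

```latex
\langle H \rangle \;=\; -\,\frac{\partial}{\partial \beta}\, \log Z(\beta)
```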
Given the definition of the probability measure above, the expectation value of any function f of the random variables X may now be written as expected: so, for discrete-valued X, one writes
The above notation makes sense for a finite number of discrete random variables. In more general settings, the summations should be replaced with integrals over a probability space.
Thus, for example, the entropy is given by
The Gibbs measure is the unique statistical distribution that maximizes the entropy for a fixed expectation value of the energy; this underlies its use in maximum entropy methods.
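For discrete random variables, the expectation value and the entropy described above take the familiar forms (notation assumed):

```latex
\langle f \rangle \;=\; \sum_{x} f(x)\, P(x)
\;=\; \frac{1}{Z(\beta)} \sum_{x} f(x)\, e^{-\beta H(x)},
\qquad
S \;=\; -\sum_{x} P(x) \log P(x) \;=\; \beta\, \langle H \rangle + \log Z(\beta).
```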
Information geometry
The points can be understood to form a space, and specifically, a manifold. Thus, it is reasonable to ask about the structure of this manifold; this is the task of information geometry.
Multiple derivatives with regard to the Lagrange multipliers give rise to a positive semi-definite covariance matrix
This matrix is positive semi-definite, and may be interpreted as a metric tensor, specifically, a Riemannian metric. Equipping the space of Lagrange multipliers with a metric in this way turns it into a Riemannian manifold. The study of such manifolds is referred to as information geometry; the metric above is the Fisher information metric. Here, serves as a coordinate on the manifold. It is interesting to compare the above definition to the simpler Fisher information, by which it is inspired.
That the above defines the Fisher information metric can be readily seen by explicitly substituting for the expectation value:
where we've written for and the summation is understood to be over all values of all random variables . For continuous-valued random variables, the summations are replaced by integrals, of course.
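Written out for the multi-parameter case, the metric being described is the covariance of the constrained functions (notation assumed):

```latex
g_{ij}(\beta)
\;=\; \frac{\partial^{2} \log Z(\beta)}{\partial \beta^{i}\, \partial \beta^{j}}
\;=\; \big\langle \big(H_i - \langle H_i \rangle\big)\big(H_j - \langle H_j \rangle\big) \big\rangle ,
```

which is manifestly symmetric and positive semi-definite, as stated above.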
Curiously, the Fisher information metric can also be understood as the flat-space Euclidean metric, after appropriate change of variables, as described in the main article on it. When the are complex-valued, the resulting metric is the Fubini–Study metric. When written in terms of mixed states, instead of pure states, it is known as the Bures metric.
Correlation functions
By introducing artificial auxiliary functions into the partition function, it can then be used to obtain the expectation value of the random variables. Thus, for example, by writing
one then has
as the expectation value of . In the path integral formulation of quantum field theory, these auxiliary functions are commonly referred to as source fields.
Multiple differentiations lead to the connected correlation functions of the random variables. Thus the correlation function between variables and is given by:
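A compact summary of the source-field construction, in the discrete case with one auxiliary variable J_k coupled to each random variable x_k (notation assumed):

```latex
Z[\beta, J] \;=\; \sum_{x} \exp\!\Big(-\beta H(x) + \sum_{k} J_k\, x_k\Big),
\qquad
\langle x_j \rangle \;=\; \frac{\partial \log Z}{\partial J_j}\bigg|_{J=0},
\qquad
\langle x_j x_k \rangle_{\mathrm{c}} \;=\; \frac{\partial^{2} \log Z}{\partial J_j\, \partial J_k}\bigg|_{J=0}.
```

The subscript c marks the connected correlation function, obtained from log Z rather than from Z itself.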
Gaussian integrals
For the case where H can be written as a quadratic form involving a differential operator, that is, as
then the partition function can be understood to be a sum or integral over Gaussians. The correlation function can be understood to be the Green's function for the differential operator (generally giving rise to Fredholm theory). In the quantum field theory setting, such functions are referred to as propagators; higher order correlators are called n-point functions; working with them defines the effective action of a theory.
When the random variables are anti-commuting Grassmann numbers, then the partition function can be expressed as a determinant of the operator D. This is done by writing it as a Berezin integral (also called Grassmann integral).
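The standard Gaussian-integral results being referred to can be summarized as follows, with normalization constants and factors of β suppressed (notation assumed):

```latex
H(x) \;=\; \tfrac{1}{2}\, x^{\mathsf{T}} D\, x
\quad\Longrightarrow\quad
Z \;\propto\; \big(\det D\big)^{-1/2},
\qquad
\langle x_j\, x_k \rangle \;\propto\; \big(D^{-1}\big)_{jk},
```

so that the two-point correlation is the Green's function (inverse) of D; for Grassmann-valued variables the analogous Berezin integral instead gives Z ∝ det D.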
General properties
Partition functions are used to discuss critical scaling and universality, and are subject to the renormalization group.
See also
Exponential family
Partition function (statistical mechanics)
Partition problem
Markov random field
References
Entropy and information | Partition function (mathematics) | [
"Physics",
"Mathematics"
] | 2,801 | [
"Physical quantities",
"Entropy and information",
"Entropy",
"Partition functions",
"Statistical mechanics",
"Dynamical systems"
] |
16,847,321 | https://en.wikipedia.org/wiki/ZNF423 | Zinc finger protein 423 is a protein that in humans is encoded by the ZNF423 gene.
The protein encoded by this gene is a nuclear protein that belongs to the family of Kruppel-like C2H2 zinc finger proteins. It functions as a DNA-binding transcription factor by using distinct zinc fingers in different signaling pathways. Thus, it is thought that this gene may have multiple roles in signal transduction during development. Mice lacking the homologous gene Zfp423 have defects in midline brain development, especially in the cerebellum, as well as defects in olfactory development, and adipogenesis. Patients with mutations in ZNF423 have been reported in Joubert Syndrome and nephronophthisis.
Interactions
ZNF423 has been shown to interact with EBF1, PARP1, Notch intracellular domain, retinoic acid receptor, and CEP290.
References
Further reading
External links
Transcription factors | ZNF423 | [
"Chemistry",
"Biology"
] | 201 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
16,848,024 | https://en.wikipedia.org/wiki/Panos%20%28operating%20system%29 | PANOS is a discontinued computer operating system developed by Acorn Computers in the 1980s and released in 1985, which ran on the 32016 Second Processor for the BBC Micro and the Acorn Cambridge Workstation. These systems had essentially the same architecture, based on a 32-bit NS32016 CPU; the ACW having a BBC Micro-based "I/O processor". Access to the I/O processor was through a NS32016 firmware kernel called Pandora.
Panos ran on the NS32016 and was a rudimentary single-user operating system, written in Modula-2. It provided a simple command-line interpreter, a text editor and access to DFS, ADFS or NFS file systems via the I/O processor. Targeted at the academic and scientific user community, it came bundled with compilers for the FORTRAN 77, C, Pascal and LISP programming languages.
Commands
The following list of commands is supported by the Panos command line interpreter.
.space
.Delete
.Help
.key
.NewCommand
.Obey
.pwd
.Quit
.Run
.Set
.swd
.wait
Access
Catalogue
Configure
Copy
Create
Delete
Echo
Logon
Rename
Set
Show
Star
References
Notes
Acorn Computers operating systems
Discontinued operating systems | Panos (operating system) | [
"Technology"
] | 263 | [
"Operating system stubs",
"Computing stubs"
] |
16,848,644 | https://en.wikipedia.org/wiki/TXN2 | Thioredoxin, mitochondrial also known as thioredoxin-2 is a protein that in humans is encoded by the TXN2 gene on chromosome 22. This nuclear gene encodes a mitochondrial member of the thioredoxin family, a group of small multifunctional redox-active proteins. The encoded protein may play important roles in the regulation of the mitochondrial membrane potential and in protection against oxidant-induced apoptosis.
Structure
As a thioredoxin, TXN2 is a 12-kDa protein characterized by the redox active site Trp-Cys-Gly-Pro-Cys. In its oxidized (inactive) form, the two cysteines form a disulfide bond. This bond is then reduced by thioredoxin reductase and NADPH to a dithiol, which serves as a disulfide reductase. In contrast to TXN1, TXN2 contains a putative N-terminal mitochondrial targeting sequence, responsible for its mitochondria localization, and lacks structural cysteines. Two mRNA transcripts of the TXN2 gene differ by ~330 bp in the length of the 3′-untranslated region, and both are believed to exist in vivo.
Function
This nuclear gene encodes a mitochondrial member of the thioredoxin family, a group of small multifunctional redox-active proteins. The encoded protein is ubiquitously expressed in all prokaryotic and eukaryotic organisms, but demonstrates especially high expression in tissues with heavy metabolic activity, including the stomach, testis, ovary, liver, heart, neurons, and adrenal gland. It may play important roles in the regulation of the mitochondrial membrane potential and in protection against oxidant-induced apoptosis. Specifically, the ability of TXN2 to reduce disulfide bonds enables the protein to regulate mitochondrial redox and, thus, the production of reactive oxygen species (ROS). By extension, downregulation of TXN2 can lead to increased ROS generation and cell death. The antiapoptotic function of TXN2 is attributed to its involvement in GSH-dependent mechanisms to scavenge ROS, or its interaction with, and thus regulation of, thiols in the mitochondrial permeability transition pore component adenine nucleotide translocator (ANT).
Overexpression of TXN2 was shown to have attenuated hypoxia-induced HIF-1alpha accumulation, which is in direct opposition of the cytosolic TXN1, which enhanced HIF-1alpha levels. Moreover, although both TXN2 and TXN1 are able to reduce insulin, TXN2 does not depend on the oxidative status of the protein for this activity, a quality which may contribute to their difference in function.
Clinical significance
It has been demonstrated that genetic polymorphisms in the TXN2 gene may be associated with the risk of spina bifida.
TXN2 is known to inhibit transforming growth factor (TGF)-β-stimulated ROS generation independent of Smad signaling. TGF-β is a pro-oncogenic cytokine that induces epithelial–mesenchymal transition (EMT), which is a crucial event in metastatic progression. In particular, TXN2 inhibits TGF-β-mediated induction of HMGA2, a central EMT mediator, and fibronectin, an EMT marker.
Interactions
TXN2 is shown to interact with ANT.
References
Further reading
Proteins
Genes | TXN2 | [
"Chemistry"
] | 750 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
16,849,002 | https://en.wikipedia.org/wiki/Fibroblast%20growth%20factor%2021 | Fibroblast growth factor 21 (FGF-21) is a protein that in mammals is encoded by the FGF21 gene. The protein encoded by this gene is a member of the fibroblast growth factor (FGF) family and specifically a member of the endocrine subfamily which includes FGF23 and FGF15/19. FGF21 is the primary endogenous agonist of the FGF21 receptor, which is composed of the co-receptors FGF receptor 1 and β-Klotho.
FGF family members possess broad mitogenic and cell survival activities and are involved in a variety of biological processes including embryonic development, cell growth, morphogenesis, tissue repair, tumor growth and invasion. FGFs act through a family of four FGF receptors. Binding is complicated and requires both interaction of the FGF molecule with an FGF receptor and binding to heparin through a heparin binding domain. Endocrine FGFs lack a heparin binding domain and thus can be released into the circulation.
FGF21 is a hepatokine – i.e., a hormone secreted by the liver – that regulates simple sugar intake and preferences for sweet foods via signaling through FGF21 receptors in the paraventricular nucleus of the hypothalamus and correlates with reduced dopamine neurotransmission within the nucleus accumbens.
A single-nucleotide polymorphism of the FGF21 gene – the FGF21 rs838133 variant (frequency 44.7%) – has been identified as a genetic mechanism responsible for the sweet tooth behavioral phenotype, a trait associated with cravings for sweets and high sugar consumption, in both humans and mice.
Regulation
FGF21 is specifically induced by mitochondrial 3-hydroxy-3-methylglutaryl-CoA synthase 2 (HMGCS2) activity. The oxidized form of ketone bodies (acetoacetate) in a cultured medium also induced FGF21, possibly via a sirtuin 1 (SIRT1)-dependent mechanism. HMGCS2 activity has also been shown to be increased by deacetylation of lysines 310, 447, and 473 via SIRT3 in the mitochondria.
While FGF21 is expressed in numerous tissues, including liver, brown adipose tissue, white adipose tissue (WAT) and pancreas, circulating levels of FGF21 are derived specifically from the liver in mice. In liver FGF21 expression is regulated by PPARα and levels rise substantially with both fasting and consumption of ketogenic diets.
Liver X receptor (LXR) represses FGF21 in humans via an LXR response element located from -37 to -22 bp on the human FGF21 promoter.
Function
FGF21 stimulates glucose uptake in adipocytes but not in other cell types. This effect is additive to the activity of insulin. FGF21 treatment of adipocytes is associated with phosphorylation of FRS2, a protein linking FGF receptors to the Ras/MAP kinase pathway. FGF21 injection in ob/ob mice results in an increase in Glut1 in adipose tissue. FGF21 also protects animals from diet-induced obesity when overexpressed in transgenic mice and lowers blood glucose and triglyceride levels when administered to diabetic rodents. Treatment of animals with FGF21 results in increased energy expenditure, fat utilization and lipid excretion.
β-Klotho () functions as a cofactor essential for FGF21 activity.
In cows plasma FGF21 was nearly undetectable in late pregnancy (LP), peaked at parturition, and then stabilized at lower, chronically elevated concentrations during early lactation (EL). Plasma FGF21 was similarly increased in the absence of parturition when an energy-deficit state was induced by feed restricting late-lactating dairy cows, implicating energy insufficiency as a cause of chronically elevated FGF21 in EL. The liver was the major source of plasma FGF21 in early lactation with little or no contribution by WAT, skeletal muscle, and mammary gland. Meaningful expression of the FGF21 coreceptor β-Klotho was restricted to liver and WAT in a survey of 15 tissues that included the mammary gland. Expression of β-Klotho and its subset of interacting FGF receptors was modestly affected by the transition from LP to EL in liver but not in WAT.
Clinical significance
Serum FGF-21 levels were significantly increased in patients with type 2 diabetes mellitus (T2DM) which may indicate a role in the pathogenesis of T2DM. Elevated levels also correlate with liver fat content in non-alcoholic fatty liver disease and positively correlate with BMI in humans suggesting obesity as a FGF21-resistant state.
A single-nucleotide polymorphism (SNP) of the FGF21 gene – the FGF21 rs838133 variant (frequency 44.7%) – has been identified as a genetic mechanism responsible for the sweet tooth behavioral phenotype, a trait associated with cravings for sweets and high sugar consumption, in both humans and mice.
Animal studies
Mice lacking FGF21 fail to fully induce PGC-1α expression in response to a prolonged fast and have impaired gluconeogenesis and ketogenesis.
FGF21 stimulates phosphorylation of fibroblast growth factor receptor substrate 2 and ERK1/2 in the liver. Acute FGF21 treatment induced hepatic expression of key regulators of gluconeogenesis, lipid metabolism, and ketogenesis including glucose-6-phosphatase, phosphoenol pyruvate carboxykinase, 3-hydroxybutyrate dehydrogenase type 1, and carnitine palmitoyltransferase 1α. In addition, injection of FGF21 was associated with decreased circulating insulin and free fatty acid levels. FGF21 treatment induced mRNA and protein expression of PGC-1α, but in mice PGC-1α expression was not necessary for the effect of FGF21 on glucose metabolism.
In mice FGF21 is strongly induced in liver by prolonged fasting via PPAR-alpha and in turn induces the transcriptional coactivator PGC-1α and stimulates hepatic gluconeogenesis, fatty acid oxidation, and ketogenesis. FGF21 also blocks somatic growth and sensitizes mice to a hibernation-like state of torpor, playing a key role in eliciting and coordinating the adaptive starvation response. FGF21 expression is also induced in white adipose tissue by PPAR-gamma, which may indicate it also regulates metabolism in the fed state. FGF21 is induced in both rodents and humans consuming a low protein diet. FGF21 expression is also induced by diets with reduced levels of the essential dietary amino acids methionine or threonine, or with reduced levels of branched-chain amino acids.
Activation of AMPK and SIRT1 by FGF21 in adipocytes enhanced mitochondrial oxidative capacity as demonstrated by increases in oxygen consumption, citrate synthase activity, and induction of key metabolic genes. The effects of FGF21 on mitochondrial function require serine/threonine kinase 11 (STK11/LKB1), which activates AMPK. Inhibition of AMPK, SIRT1, and PGC-1α activities attenuated the effects of FGF21 on oxygen consumption and gene expression, indicating that FGF21 regulates mitochondrial activity and enhances oxidative capacity through an LKB1-AMPK-SIRT1-PGC-1α-dependent mechanism in adipocytes, resulting in increased phosphorylation of AMPK, increased cellular NAD+ levels and activation of SIRT1 and deacetylation of SIRT1 targets PGC-1α and histone 3.
Acutely, the rise in FGF21 in response to alcohol consumption inhibits further drinking. Chronically, the rise in FGF21 expression in the liver may protect against liver damage.
References
Further reading
External links
Got a sweet tooth? Blame your liver Phys.org, 2017
Aging-related proteins
Anti-aging substances | Fibroblast growth factor 21 | [
"Chemistry",
"Biology"
] | 1,755 | [
"Senescence",
"Anti-aging substances",
"Aging-related proteins"
] |
19,060,231 | https://en.wikipedia.org/wiki/Success%20likelihood%20index%20method | Success Likelihood Index Method (SLIM) is a technique used in the field of Human reliability Assessment (HRA), for the purposes of evaluating the probability of a human error occurring throughout the completion of a specific task. From such analyses measures can then be taken to reduce the likelihood of errors occurring within a system and therefore lead to an improvement in the overall levels of safety. There exist three primary reasons for conducting an HRA; error identification, error quantification and error reduction. As there exist a number of techniques used for such purposes, they can be split into one of two classifications; first generation techniques and second generation techniques. First generation techniques work on the basis of the simple dichotomy of ‘fits/doesn’t fit’ in the matching of the error situation in context with related error identification and quantification and second generation techniques are more theory based in their assessment and quantification of errors. ‘HRA techniques have been utilised in a range of industries including healthcare, engineering, nuclear, transportation and business sector; each technique has varying uses within different disciplines.
SLIM is a decision-analytic approach to HRA which uses expert judgement to quantify Performance Shaping Factors (PSFs); factors concerning the individuals, environment or task, which have the potential to either positively or negatively affect performance e.g. available task time. Such factors are used to derive a Success Likelihood Index (SLI), a form of preference index, which is calibrated against existing data to derive a final Human Error Probability (HEP). The PSF's which require to be considered are chosen by experts and are namely those factors which are regarded as most significant in relation to the context in question.
The technique consists of two modules: MAUD (multi-attribute utility decomposition) which scales the relative success likelihood in performing a range of tasks, given the PSFs probable to affect human performance; and SARAH (Systematic Approach to the Reliability Assessment of Humans) which calibrates these success scores with tasks with known HEP values, to provide an overall figure.
Background
SLIM was developed by Embrey et al. [1] for use within the US nuclear industry. By use of this method, relative success likelihoods are established for a range of tasks, and then calibrated using a logarithmic transformation.
SLIM methodology
The SLIM methodology breaks down into ten steps of which steps 1-7 are involved in SLIM-MAUD and 8-10 are SLIM-SARAH.
Definition of situations and subsets
Upon selection of a relevant panel of experts who will carry out the assessment, these individuals are provided with as fully detailed a task description as possible with regards to the individual designated to perform each task and further factors which are likely to influence the success of each of these. An in-depth description is a critical aspect of the procedure in order to ensure that all members of the assessing group share a common understanding of the given task. This may be further advanced through a group discussion prior to the commencement of the panel session to establish consensus. Following this discussion, the tasks under consideration are then classified into a number of groupings depending upon the homogeneity of the PSFs that have an effect on each. Subsets are thus defined by those tasks which have in common specific PSFs and also by their weighting within a certain sub-group; this weighting is only an approximation at this stage of the process.
Elicitation of PSFs
Random sets of 3 tasks are presented to experts from which they are required to compare one against the other two and subsequently identify an aspect in which the highlighted task differs from the remaining two; this dissimilarity should be a characteristic which affects the probability of successful task completion. The experts are then asked to highlight the low and high end-points of the identified PSF i.e. the optimality of the PSF in the context of the given task. For example, the PSF may be Time Pressure and therefore the end points of the scale would perhaps be “High level of pressure” to “Low level of pressure”. Other possible PSFs may be stress levels, task complexity or degree of teamwork required. The aim of this stage is to identify those PSF's which are most prevalent in affecting the tasks as opposed to eliciting all the possible influencing factors.
Rating the tasks on the PSFs
The endpoints of each individual PSF, as identified by the expert, are then assigned the values 1 and 9 on a linear scale. Using this scale, the expert is required to assign to each task a rating, between the two end points, which accurately reflects, using their judgement, the conditions occurring in the task in question. It is optimal to consider each factor in turn so that the judgements made are independent from the influence of other factors which otherwise may affect opinion.
Ideal point elicitation and scaling calculations
The “ideal” rating for each PSF is then selected on the scale constructed. The ideal is the point at which the PSF least degrades performance – for instance both low and high time pressure may contribute to increasing the chance of failure. The MAUD software then rescales all other ratings made on the scale in terms of their distance from this ideal point, with the closest being assigned as a 1 and the furthest from this point as a 0. This is done for all PSF's until the experts are agreed that the list of PSF's is exhausted and that all the scale positions identified are correctly positioned.
Independence checks
Using the figures which represent the relative importance of each PSF and the task's rating on the relevant scale, these are multiplied to produce a Success Likelihood Index (SLI) figure for each task. To improve the validity of the process it is necessary to confirm that each of the scales in use is independent, to ensure no overlap or double counting in the overall calculation of the index.
To help carry out this validation task, MAUD software checks for correlations between the experts’ scoring on the different scales; if the scale ratings indicate a high correlation, the experts are consulted to reveal whether they agree in their meanings of the ratings on the two scales which are showing similarities. If this situation occurs, the experts are asked to define a new scale which will be a combination of the meaning of the two individually correlated scales. If the correlation is not significant then the scales are treated as independent; in this case, the concerned facilitator is required to make an informed decision as to whether or not the PSFs showing similarities are actually similar and should therefore ensure that a strong justification is explainable for the final decision.
Weighting procedure
This stage of the process concentrates on eliciting the emphasis required to be placed on each of the PSFs in terms of its influence on the success of a task. This is done by asking the experts to consider the likelihood of success for pairs of tasks while taking account of two previously identified PSFs. By noting where the experts' opinion changes, the weighting of the effect of each PSF on task success can thus be inferred. To enhance the accuracy of the outcome, this stage should be carried out in an iterative manner.
Calculation of the SLI
The Success Likelihood Index for each task is deduced using the following formula: SLIj = Σ Wi Rij, with the sum running over i = 1 to x.
Where
SLIj is the SLI for task j
Wi is the importance weight for the ith PSF
Rij is the scaled rating of task j on the ith PSF
x represents the number of PSFs considered.
These SLIs are estimates of the probability with which different types of error may occur.
Conversion of SLIs to probabilities
The SLIs previously calculated require to be transformed to HEPs as they are only relative measures of the likelihood of success of each of the considered tasks.
The relationship log(P) = a × SLI + b is assumed to exist between SLIs and HEPs. P is the probability of success and a and b are constants; a and b are calculated from the SLIs of two tasks where the HEP has already been established.
Uncertainty bound analysis
Uncertainty bounds can be estimated using expert judgement methods such as Absolute probability judgement (APJ).
Use of SLIM-SARAH for cost-effectiveness analyses
As SLIM evaluates HEPs as a function of the PSFs, considered to be the major drivers in human reliability, it is possible to perform sensitivity analysis by modifying the scores of the PSFs. By considering the PSFs which may be altered, the degree to which they can be changed and the importance of the PSFs, it is possible to conduct a cost-benefit analysis to determine how worthwhile suggested improvements may be, i.e. a what-if analysis of the optimal means by which the calculated HEPs can be reduced.
Worked example
The following example provides a good illustration of how the SLIM methodology is used in practice in the field of HRA.
Context
In this context an operator is responsible for the task of de-coupling a filling hose from a chemical road tanker. There exists the possibility that the operator may forget to close a valve located upstream of the filling hose, which is a crucial part of the procedure; if overlooked, this could result in adverse consequences, particularly for the operator in control. The primary human error of concern in this situation is 'failure to close V0204 prior to decoupling filling hose'. The decoupling operation required to be conducted is a fairly easy task to carry out and does not require to be completed in conjunction with any further tasks; therefore if failure occurs it will have a catastrophic impact as opposed to displaying effects in a gradual manner.
Required inputs
This technique also requires an 'expert panel' to carry out the HRA; the panel would be made up of, for example, two operators possessing approximately 10 years' experience of the system, a human factors analyst and a reliability analyst who has knowledge of the system and possesses a degree of operational experience.
The panel of experts is requested to determine a set of PSFs which are applicable to the task in question within the context of the wider system; of these, the experts are then required to propose those PSFs, of the identified, which are the most important in the circumstances of the scenario.
For this example, it is assumed that the panel put forth 5 main PSFs for consideration, which are believed to have the greatest effect on human performance of the task: training, procedures, feedback, perceived risk and time pressure.
Method
PSF rating
Considering the situation within the context of the task under assessment, the panel are asked to provide further possible human errors which may occur that have the potential of affecting performance e.g. mis-setting or ignoring an alarm. For each of these, the experts are required to establish the degree to which each is either optimal or sub-optimal for the task under assessment, working on a scale from 1 to 9, with the latter being the optimal rating. For the 3 human errors which have been identified, the ratings decided for each are provided below:
PSF weighting
Were each of the identified PSFs of equal importance, it would then be possible to obtain the summation of each row of ratings and conclude that the row with the lowest total rating (in this case, alarm mis-set) was the most probable error to occur. In this context, as is most often the case, the experts are in agreement that the PSFs given above are not of equal weighting. Perceived risk and feedback are deemed to be of greatest importance, twice as much as training and procedures, which in turn are considered one and a half times more important than the factor of time. The time factor is considered of minimal importance in this context as the task is routine and is therefore not limited by time.
The importance of each factor can be observed through the allocated weighting, as provided below. Note that they have been normalised to sum to unity.
Using the figures for the scaled weighting of the PSFs and the weighting of their importance, it is now possible to calculate the Success Likelihood Index (SLI) for the task under assessment.
From the results of the calculations, as the SLI for ‘alarm mis-set’ is the lowest, this suggests that this is the most probable error to occur throughout the completion of the task.
However these SLI figures are not yet in the form of probabilities; they are only indications as to the likelihood with which the various errors may occur. The SLIs determine the order in which the errors are most probable to occur; they do not delineate the absolute probabilities of those errors. To convert the SLIs to HEPs, the SLI figures require first to be standardised; this can be done using the following formulation.
Result
If the two tasks for which the HEPs are known are incorporated in the task set which is undergoing quantification then the equation parameters can be determined by using the method of simultaneous equations; using the result of this, the unknown HEP values can thus be quantified. In the example provided, were two additional tasks to be assessed, e.g. A and B, which had HEP values of 0.5 and 10⁻⁴ respectively and SLIs of 4.00 and 6.00 respectively, then the formulation would be:
The final HEP values would thus be determined as
V0204 = 0.0007
Alarm mis-set = 0.14
Alarm ignored = 0.0003
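The calibration described above can be sketched numerically. In the snippet below, only the two anchor tasks A and B (SLIs 4.00 and 6.00, HEPs 0.5 and 10⁻⁴) are taken from the worked example; the exact form of the calibration equation used in the original study is not reproduced above, so the log-linear form log10(HEP) = a·SLI + b, the use of NumPy, and the idea of applying the fitted function to each error's SLI are assumptions made purely for illustration.

```python
import numpy as np

# Calibration anchors from the worked example: tasks A and B with known HEPs.
# (SLI, HEP) pairs; the assumed relationship is log10(HEP) = a * SLI + b.
anchors = [(4.00, 0.5), (6.00, 1e-4)]

# Solve the two simultaneous equations a*SLI + b = log10(HEP) for a and b.
A = np.array([[sli, 1.0] for sli, _ in anchors])
y = np.array([np.log10(hep) for _, hep in anchors])
a, b = np.linalg.solve(A, y)

def sli_to_hep(sli: float) -> float:
    """Convert a Success Likelihood Index into a Human Error Probability."""
    return 10.0 ** (a * sli + b)

# Round-trip check against the two anchor tasks.
for sli, hep in anchors:
    print(f"SLI {sli:.2f} -> HEP {sli_to_hep(sli):.4g} (expected {hep})")
```

Applying sli_to_hep() to each error's SLI (the rating and weighting tables are not reproduced here) would then give its HEP, analogous to the final figures quoted above.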
References
[1] Embrey, D.E., Humphreys, P.C., Rosa, E.A., Kirwan, B. & Rea, K., SLIM-MAUD: An approach to assessing human error probabilities using structured expert judgement. NUREG/CR-3518. 1984, US Nuclear Regulatory Commission: Washington DC.
[2] Humphreys, P. (1995) Human Reliability Assessor's Guide. Human Factors in Reliability Group.
[3] Kirwan, B. (1994). A Practical Guide to Human Reliability Assessment. CPC Press.
[4] Corlett, E.N., & Wilson, J.R. (1995). Evaluation of Human Work: A Practical Ergonomics Methodology. Taylor & Francis.
Human reliability | Success likelihood index method | [
"Engineering"
] | 2,891 | [
"Human reliability",
"Reliability engineering"
] |
19,060,709 | https://en.wikipedia.org/wiki/Influence%20diagrams%20approach | Influence Diagrams Approach (IDA) is a technique used in the field of Human reliability Assessment (HRA), for the purposes of evaluating the probability of a human error occurring throughout the completion of a specific task. From such analyses measures can then be taken to reduce the likelihood of errors occurring within a system and therefore lead to an improvement in the overall levels of safety. There exist three primary reasons for conducting an HRA; error identification, error quantification and error reduction. As there exist a number of techniques used for such purposes, they can be split into one of two classifications; first generation techniques and second generation techniques. First generation techniques work on the basis of the simple dichotomy of ‘fits/doesn’t fit’ in the matching of the error situation in context with related error identification and quantification and second generation techniques are more theory based in their assessment and quantification of errors. ‘HRA techniques have been utilised in a range of industries including healthcare, engineering, nuclear, transportation and business sector; each technique has varying uses within different disciplines.
An influence diagram (ID) is essentially a graphical representation of the probabilistic interdependence between Performance Shaping Factors (PSFs), factors which have the potential to influence the success or failure of the performance of a task. The approach originates from the field of decision analysis and uses expert judgement in its formulations. It rests on the principle that human reliability results from the combination of factors such as organisational and individual factors, which in turn combine to provide an overall influence. There exists a chain of influences in which each successive level affects the next. The role of the ID is to depict these influences and the nature of the interrelationships in a more comprehensible format. In this way, the diagram may be used to represent the shared beliefs of a group of experts on the outcome of a particular action and the factors that may or may not influence that outcome. For each of the identified influences quantitative values are calculated, which are then used to derive final Human Error Probability (HEP) estimates.
Background
IDA is a decision analysis based framework which is developed through eliciting expert judgement through group workshops. Unlike other first generation HRA, IDA explicitly considers the inter-dependency of operator and organisational PSFs. The IDA approach was first outlined by Howard and Matheson, and then developed specifically for the nuclear industry by Embrey et al. [2].
IDA Methodology
The IDA methodology is conducted in a series of 10 steps as follows:
1. Describe all relevant conditioning events
Experts who have sufficient knowledge of the situation under evaluation form a group; in-depth knowledge is essential for the technique to be used to its optimal potential. The chosen individuals include a range of experts, typically those with first-hand experience in the operational context under consideration, such as plant supervisors, reliability assessors, human factor specialists and designers. The group collectively assesses and gradually develops a representation of the most significant influences which will affect the success of the situation. The resultant diagram is useful in that it identifies both immediate and underlying influences of the considered factors with regard to their effect on the situation under assessment and upon one another.
2. Refine the target event definition
The event which is the basis of the assessment requires to be defined as tightly as possible.
3. Balance of Evidence
The next stage is to select a middle-level event in the situation and, using each of the bottom-level influences, assess the weight of evidence, also known as the 'balance of evidence'; this represents expert analysis of the likelihood that a specific state of influence, or combination of the various influences, is present within the considered situation.
4. Assess the weight of evidence for this middle-level influence, which is conditional on bottom-level influences
5. Repeat 3 and 4 for the remaining middle-level and bottom-level influences
These three steps are conducted in the aim of determining the extent to which the influences exist in the process, alone and in different combinations, and their conditional effects.
6. Assess probabilities of target event conditional on middle-level influences
7. Calculate the unconditional probability of target event and unconditional weight of evidence of middle-level influences
For the various combinations of influences that have been considered, the experts identify direct estimates of the likelihood of either success or failure.
8. Compare these results to the holistic judgements of HEPs by the assessors. Revise if necessary to reduce discrepancies.
At this stage the probabilities derived from the use of the technique are compared to holistic estimates from the experts, which have been derived through an Absolute probability judgement (APJ) process. Discrepancies are discussed and resolved within the group as required.
9. Repeat above steps until assessors are finished refining their judgements
The above steps are iterated, in which all experts share opinions, highlight new aspects to the problem and revise the initially made assessments of the situation. The process is deemed complete when all participants reach a consensus that any misgivings about the discrepancies are resolved.
10. Perform sensitivity analyses
If individual experts remain to be unsure of the discrepancies about the assessments which have been made, then sensitivity analysis can be used to determine the extent to which individual influence assessments affect the target event HEP. Conducting a cost-benefit analysis is also possible at this stage of the process.
Example
The diagram below depicts an influence diagram which can be applied to any human reliability assessment [3].
This diagram was originally developed for use in the HRA of a scenario within the setting of a nuclear power plant. The diagram depicts the direct influences of each of the factors on the situation under consideration, as well as providing an indication as to the way in which some of the factors affect each other.
There are 7 first level influences on the outcome of the high level task, numbered 1 to 7. Each of these describes an aspect of the task under assessment, which requires to be judged as one of two states.
The design of the task is judged to be either good or bad
The meaningfulness of the procedures involved in the completion of the task are simply meaningful or not meaningful
Operators either possess a role in the task that is of primary importance or that is not considered as a primary role
For the purposes of completing the considered task, there may or may not be a formation of teams of individuals
the stress levels associated with the task can affect performance and render individuals either functional or not functional
the surrounding work ethic and environment in which the task takes place will provide either a good level of morale or a poor motivation level
competence of the individuals who are responsible for carrying out the task is either of a high level or a low level
Differing combinations of these first stage influences affect the state of those on the second level.
The quality of information, which can either be classed as good or bad, is dependent upon the meaningfulness of the procedures of the task and the task design.
The organisation, whether it is assessed as either requisite or not requisite, is determined by the role of operations functions in completing the task, the meaningfulness of the procedures and whether or not teams are formed to complete the task
The personal aspect of the task can be judged as either favourable for successful completion or unfavourable. The way in which this is assessed is dependent on competence level of the concerned individuals, stress levels present, morale/motivation levels of the individuals and whether or not teams are formed to complete the task.
By assessing the state of the second level influences, the quality of information, organisation and personal factors, the overall likelihood of either success or failure of the task can be calculated by means of conditional probability calculations.
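As a purely illustrative sketch of that final combination step, the snippet below marginalises hypothetical conditional probabilities over the eight possible states of the three second-level influences; every number in it is invented for the example, and the influences are treated as independent at this level, which is a simplification of the full IDA elicitation.

```python
from itertools import product

# Probability that each second-level influence is in its "positive" state,
# i.e. its weight of evidence built up from the bottom-level influences
# in earlier steps (hypothetical values).
p_positive = {
    "information_good": 0.8,
    "organisation_requisite": 0.7,
    "personal_favourable": 0.6,
}

# Probability of task success conditional on each combination of states,
# as elicited from the expert panel (hypothetical values).
p_success_given = {
    (True, True, True): 0.999,
    (True, True, False): 0.99,
    (True, False, True): 0.99,
    (True, False, False): 0.95,
    (False, True, True): 0.98,
    (False, True, False): 0.90,
    (False, False, True): 0.90,
    (False, False, False): 0.70,
}

keys = list(p_positive)
p_success = 0.0
for states in product([True, False], repeat=3):
    # Joint probability of this combination, assuming the three influences
    # are judged independently at this level (a simplification).
    p_combo = 1.0
    for key, state in zip(keys, states):
        p_combo *= p_positive[key] if state else 1.0 - p_positive[key]
    p_success += p_success_given[states] * p_combo

print(f"Unconditional probability of success: {p_success:.4f}")
print(f"Target-event (failure) probability:   {1.0 - p_success:.4f}")
```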
Advantages of IDA
Dependence between PSFs is explicitly acknowledged and modelled [3]
It can be used at any task “level”, i.e. it can be used in a strategic overview or in a very fine breakdown of a task element [3]
Data requirements are small and no calibration is necessary [3]
PSFs are precisely defined and their influence is explored in depth [3]
PSFs and other influences creating error-producing conditions are prioritised and, if desired, the less significant ones may be ignored
Sensitivity analysis is possible with use of this technique [3]
It is possible to generate high amounts of qualitative data through the group discussion process
Disadvantages of IDA
Building IDAs is highly resource-intensive in terms of organising and supporting an extensive group session involving a suitable range of experts [3]
Eliciting unbiased HEPs requires further research with regards to their accuracy and justification [3]
See also
Current reality tree (theory of constraints)
References
[2] Embrey, D.E. et al. (1985) Appendix D: A Socio-Technical Approach to Assessing Human Reliability (STAHR) in Pressurized Thermal Shock Evaluation of the Calvert Cliffs Unit 1, Nuclear Power Plant. Research Report on DOE Contract 105840R21400, Selby, D. (Ed.), Oak Ridge National Laboratory, Oak Ridge, TN.
[3] Humphreys, P. (1995). Human Reliability Assessor's Guide. Human Factors in Reliability Group.
[4] Ainsworth, L.K., & Kirwan, B. (1992). A Guide to Task Analysis. Taylor & Francis.
Human reliability | Influence diagrams approach | [
"Engineering"
] | 1,894 | [
"Human reliability",
"Reliability engineering"
] |
19,062,032 | https://en.wikipedia.org/wiki/Nuclear%20Receptor%20Signaling%20Atlas | The Nuclear Receptor Signaling Atlas (NURSA) was a United States National Institutes of Health-funded research consortium focused on nuclear receptors and nuclear receptor coregulators. Its co-principal investigators were Bert O'Malley and Neil McKenna of Baylor College of Medicine and Ron Evans of the Salk Institute. NURSA has now been retired and replaced by the Signaling Pathways Project (SPP).
References
External links
Biological databases | Nuclear Receptor Signaling Atlas | [
"Chemistry",
"Biology"
] | 83 | [
"Bioinformatics stubs",
"Biotechnology stubs",
"Biochemistry stubs",
"Bioinformatics",
"Biological databases"
] |
19,063,002 | https://en.wikipedia.org/wiki/TIA-607-B | Covering the grounding and bonding requirements for a building's electrical system and telecommunications cabling infrastructure, TIA-607-B is an American National Standard created by the Telecommunications Industry Association, which facilitates the design and installation of telecom grounding/bonding systems.
Grounding and bonding not only save lives by preventing electrical hazards, they also maintain a network's overall performance by ensuring that electromagnetic “noise” doesn't interfere with data transmission. Whether the telecommunications system is based on Shielded Twisted Pair (STP) or Unshielded Twisted Pair (UTP) cable, TIA-607-B requires that each and every metallic component making contact with a telecom cabling infrastructure be bonded, even if it is merely touching another metal component that is directly attached.
According to the standard, proper infrastructure bonding requires the following elements: a telecommunications main grounding busbar (TMGB), telecommunications grounding busbars (TGB), telecommunications bonding backbone (TBB), grounding equalizers (GE), and a bonding conductor for telecommunications (BCT). Among TIA-607-B's list of metallic components in need of bonding are racks, enclosures, ladders, surge protectors, cable trays, routers, switches and patch panels.
Following the bonding of telecom infrastructure components, the entire system must be bonded to the building's main ground, which is sometimes also referred to as a grounding electrode system.
References
TIA-607-B
Mil-Spec and Telecommunications Industry Standards Glossaries
Electrical standards | TIA-607-B | [
"Physics"
] | 316 | [
"Physical systems",
"Electrical standards",
"Electrical systems"
] |
19,063,921 | https://en.wikipedia.org/wiki/Sphere%20of%20influence%20%28black%20hole%29 | The sphere of influence is a region around a supermassive black hole in which the gravitational potential of the black hole dominates the gravitational potential of the host galaxy. The radius of the sphere of influence is called the "(gravitational) influence radius".
There are two definitions in common use for the radius of the sphere of influence. The first is given by
rh = G MBH / σ²,
where MBH is the mass of the black hole, σ is the stellar velocity dispersion of the host bulge, and G is the gravitational constant.
The second definition is the radius at which the enclosed mass in stars equals twice MBH, i.e. M⋆(r < rh) = 2 MBH.
Which definition is most appropriate depends on the physical question that is being addressed. The first definition takes into account the bulge's overall effect on the motion of a star, since is determined in part by stars that have moved far from the black hole. The second definition compares the force from the black hole to the local force from the stars.
It is a minimum requirement that the sphere of influence be well resolved in order that the mass of the black hole be determined dynamically.
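A back-of-the-envelope evaluation of the first definition is sketched below; the black-hole mass and velocity dispersion used are round, illustrative values rather than figures taken from the article, chosen only to be of the order appropriate for the Milky Way's nucleus.

```python
# Evaluate the first definition, r_h = G * M_BH / sigma^2, with illustrative inputs.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
PARSEC = 3.086e16      # metres per parsec

M_bh = 4.0e6 * M_SUN   # roughly the mass of the Milky Way's central black hole
sigma = 100e3          # assumed bulge velocity dispersion, m/s

r_h = G * M_bh / sigma**2
print(f"Influence radius: {r_h / PARSEC:.2f} pc")   # ~1.7 pc for these inputs
```

A somewhat lower assumed dispersion would give a figure closer to the few parsecs quoted below for the Milky Way.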
Rotational influence sphere
If the black hole is rotating, there is a second radius of influence associated with the rotation. This is the radius inside of which the Lense-Thirring torques from the black hole are larger than the Newtonian torques between stars. Inside the rotational influence sphere, stellar orbits precess at approximately the Lense-Thirring rate; while outside this sphere, orbits evolve predominantly in response to perturbations from stars on other orbits. Assuming that the Milky Way black hole is maximally rotating, its rotational influence radius is about 0.001 parsec, while its radius of gravitational influence is about 3 parsecs.
See also
Roche limit
References
Galaxies
Stellar astronomy
Supermassive black holes | Sphere of influence (black hole) | [
"Physics",
"Astronomy"
] | 368 | [
"Black holes",
"Galaxies",
"Unsolved problems in physics",
"Supermassive black holes",
"Astronomical objects",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
19,064,311 | https://en.wikipedia.org/wiki/AC/DC%20receiver%20design | An AC/DC receiver design is a style of power supply of vacuum tube radio or television receivers that eliminated the bulky and expensive mains transformer. A side-effect of the design was that the receiver could in principle operate from a DC supply as well as an AC supply. Consequently, they were known as "AC/DC receivers".
Applicability to early radio and television
In the early days of radio, mains electricity was supplied at different voltages in different places, and either direct current (DC) or alternating current (AC) was supplied. There are three ways of powering electronic equipment. AC-only equipment would rely on a transformer to provide the voltages for heater and plate circuits. AC/DC equipment would connect all the tube heaters in series to match the supply voltage; a rectifier would convert AC to the direct current required for operation. When connected to a DC supply, the rectifier stage of the power supply performed no active function. DC-only equipment would only run from a DC supply and included no rectifier stage. DC is almost never used in mains power distribution anymore.
Different radio set models were required for AC, DC mains, and battery operation. For example, a 1933 Murphy radio with essentially the same circuit had different models for AC supply, DC supply, and battery operation. The introduction of AC/DC circuitry allowed a single model to be used on either AC or DC mains as a selling point, and some such models added "Universal" to their name (such sets usually had user-settable voltage tapping arrangements to cater for the wide range of voltages).
The first ever AC/DC design of radio was the All American Five. The sole aim of the design was to eliminate the mains transformer. The lower cost of transformerless designs remained popular with manufacturers long after DC power distribution had disappeared. Several models were produced which dispensed with the power transformer, but had circuit features which only allowed operation from AC. Some early models were available in both AC-only and AC/DC versions, with the AC/DC versions sometimes slightly more expensive.
Television receivers were first commercially sold in England in 1936 for the new 'Television Service' broadcast by the British Broadcasting Corporation. All pre World War II sets used mains transformers and consequently were AC only. In 1948 Pye released the first television receiver, the B18T, to employ the AC/DC design to eliminate the mains transformer when operated off 240 V mains. While sufficient for radio, the voltage was not high enough to power some television circuits, so energy was recovered during the flyback period from the primary of the line output transformer to provide a boosted HT supply; this was not possible with a lower mains supply voltage—even 220 V was insufficient. Pye's marketing material did not mention the set's ability to operate from a DC supply, possibly because there were no DC supplies within the reception range of Alexandra Palace television station, then Britain's only operating transmitter. Other manufacturers adopted the design; they, and later also Pye, sold them as AC/DC sets; the technique was used for many decades.
Series tube heaters
Vacuum tube equipment used a number of tubes, each with a heater requiring a certain amount of electrical power. In AC/DC equipment, the heaters of all the tubes are connected in series. All the tubes are rated at the same current (typically 100, 150, 300, or 450 mA) but at different voltages, according to their heating power requirements. If necessary, resistance (which can be a ballast tube (barretter), a power resistor or a resistive mains lead are added so that, when the mains voltage is applied across the chain, the specified heating current flows. Some types of ballast resistors were built into an envelope like a tube that was easily replaceable. With mains voltages of around 220 V, the power dissipated by the additional resistance and the voltage drop across it could be quite high, and it was common to use a resistive power cable (mains cord) of defined resistance, running warm, rather than putting a hot resistor inside the case. If a resistive power cable was used, an inexperienced repairer might replace it with a standard cable, or use the wrong length, damaging the equipment and risking a fire.
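The arithmetic behind the series heater chain can be illustrated with a small calculation; the tube heater voltages, chain current and mains voltage below are hypothetical round figures, not taken from any specific receiver mentioned in this article.

```python
# Illustrative series heater-chain calculation with hypothetical values.
heater_voltages = [12.6, 12.6, 12.6, 35.0, 50.0]   # rated heater volts per tube
chain_current = 0.150                               # common heater current, amps
mains_voltage = 240.0                               # nominal supply, volts

chain_voltage = sum(heater_voltages)                # volts dropped across the tubes
excess = mains_voltage - chain_voltage              # must be dropped elsewhere
dropper_resistance = excess / chain_current         # ohms of added resistance
dropper_power = excess * chain_current              # watts dissipated in it

print(f"Heater chain: {chain_voltage:.1f} V at {chain_current * 1000:.0f} mA")
print(f"Dropper: {dropper_resistance:.0f} ohms dissipating {dropper_power:.1f} W")
```

The several watts wasted in the dropper is what motivated ballast tubes and resistive mains cords that ran warm.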
Transformer
AC/DC equipment did not require a transformer, and was consequently cheaper, lighter, and smaller than comparable AC equipment. This type of equipment continued to be produced long after AC became the universal standard due to its cost advantage over AC-only, and was only discontinued when vacuum tubes were replaced by low-voltage solid-state electronics.
A rectifier and a filter capacitor were connected directly to the mains. If the mains power was AC, the rectifier converted it to DC. If it was DC, the rectifier effectively acted as a conductor. When operating on DC, the voltage available was reduced by the voltage drop across the rectifier. Because an AC waveform has a voltage peak that is higher than the average value produced by the rectifier, the same set operating on the same root mean square AC supply voltage would have a higher effective voltage after the rectifier stage. In areas using 110–120 volt AC, a simple half-wave rectifier limited the maximum plate voltage that could be developed; this was adequate for relatively low-power audio equipment, but television receivers or higher-powered amplifiers required either a more complex voltage doubler rectifier or warranted the use of a power transformer with a conveniently high secondary voltage. Areas with 220–240 volt AC supplies could develop higher plate voltage with a simple rectifier. Transformerless power supplies were feasible for television receivers in 220–240 volt areas. Additionally, the use of a transformer allowed multiple independent power supplies from separate transformer windings for different stages.
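The point about peak versus RMS voltage can be made numerically; the sketch below ignores diode drops, ripple and loading, so the figures are upper bounds rather than realistic working voltages.

```python
import math

# Rough comparison of the DC available from a capacitor-input rectifier at the
# two common mains voltages; lightly loaded, the reservoir capacitor charges
# towards the peak of the AC waveform.
for v_rms in (120.0, 240.0):
    v_peak = v_rms * math.sqrt(2)        # peak of the sine wave
    v_doubler = 2 * v_peak               # ideal voltage-doubler output
    print(f"{v_rms:.0f} V RMS: simple rectifier ~{v_peak:.0f} V, doubler ~{v_doubler:.0f} V")
```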
In an AC/DC design there was no transformer to isolate the equipment from the mains. Much equipment was built on a metal chassis which was connected to one side of the mains. Because no power transformer was used, "hot chassis" construction was required: one of the mains power lines became the negative side of the power supply, connected to the chassis, and all metal parts in metallic contact with it, as common "ground". With AC power, the neutral, rather than live, line should be connected to the chassis; touching it, while highly undesirable, is usually relatively safe—the neutral conductor is normally at or near earth potential. But if used with a two-pin power plug (or an incorrectly wired three-pin one), any metal that the user could touch was an electrocution hazard, connected to mains live. Consequently equipment was made with no metal connected to the chassis exposed even in predictable abnormal situations, such as when a plastic knob came off a metal shaft, or small fingers poked through ventilation holes. Service personnel working on energized equipment had to use an isolation transformer for safety, or be mindful that the chassis could be live. AC-only vacuum tube equipment used a bulky, heavy, and expensive transformer, but the chassis was not connected to the supply conductors and could be earthed, making for safe operation.
Transformerless "hot chassis" televisions continued to be commonly manufactured long after transistorisation rendered live-chassis design obsolete in radios. By the 1990s, inclusion of audio-video input jacks required elimination of the floating ground as TVs needed to be interconnectable with VCRs, game consoles and video disc players. The widespread replacement of cathode ray tubes with liquid crystal displays after the turn of the millennium resulted in televisions using primarily low voltages, obtained from switching power supplies. The potentially-hazardous "floating chassis" was no more.
Regional variations
In the past, 110–120 V was not high enough for higher-power tube audio and television applications and was only suitable for low-power equipment such as radio receivers. Higher-powered 110–120 V audio or television equipment needed higher voltages, which were obtained using a step-up transformer based power supply, or sometimes an AC voltage doubler, and therefore operated off AC only.
Some AC/DC equipment was designed to be switchable between 110 V AC (possibly with a voltage doubler) and 220–240 V AC or DC. Television receivers were produced which could run off 240 V AC or DC. The voltage was not high enough to power some circuits, so energy was recovered during the flyback period from the primary of the line output transformer to provide a boosted HT (high tension) supply. In a typical vacuum tube colour TV set, the line output stage had to boost its own HT supply to between 900 and 1200 volts (depending on screen size and design). Transistor line output stages, although not requiring supply voltages above the rectified mains voltage, nevertheless still developed extra voltage over the normal supply rail to avoid complicating the power supply circuitry. A typical transistor stage would produce between 20 and 50 'extra' volts. Some details of the way in which the nominally 190 volts HT supply was boosted to nearly 500 volts in the 1951 Bush TV22 are described in a technical publication. AC/DC televisions were produced well into the color and semiconductor era (some sets were tube/semiconductor hybrids).
Transistor radios
With widespread adoption of solid-state design in the 1970s, voltage and power requirements for tabletop portable radio receivers dropped significantly. One common approach was to design a battery-powered radio (typically 6 volts DC from four dry cells) but include a small built-in step down transformer and rectifier to allow mains electricity (120 V or 240 V AC, depending on region) as an alternative to battery-powered operation.
See also
Notes and references
Electric current
Electrical engineering
History of radio technology | AC/DC receiver design | [
"Physics",
"Engineering"
] | 2,082 | [
"Electrical engineering",
"Electric current",
"Wikipedia categories named after physical quantities",
"Physical quantities"
] |
19,067,895 | https://en.wikipedia.org/wiki/Conventional%20pollutant | A conventional pollutant is a term used in the USA to describe a water pollutant that is amenable to treatment by a municipal sewage treatment plant. A basic list of conventional pollutants is defined in the U.S. Clean Water Act. The list has been amended in regulations issued by the Environmental Protection Agency:
biochemical oxygen demand (BOD)
fecal coliform bacteria
oil and grease
pH (exceeding regulatory limits)
total suspended solids (TSS).
The Secondary Treatment Regulation contains national discharge standards for BOD, pH and TSS, applicable to sewage treatment plants in the U.S.
See also
Secondary treatment
Water quality
Criteria pollutants, a similar list of pollutants of air
References
Environmental engineering
Water pollution
Water quality indicators | Conventional pollutant | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 155 | [
"Chemical engineering",
"Water pollution",
"Civil engineering",
"Water quality indicators",
"Environmental engineering"
] |
19,071,846 | https://en.wikipedia.org/wiki/European%20Fusion%20Development%20Agreement | EFDA (1999 — 2013) has been followed by EUROfusion, which is a consortium of national fusion research institutes located in the European Union and Switzerland.
The European Union has a strongly coordinated nuclear fusion research programme. At the European level, the so-called EURATOM Treaty is the international legal framework under which member states cooperate in the fields of nuclear fusion research.
The European Fusion Development Agreement (EFDA) is an agreement between European fusion research institutions and the European Commission (which represents Euratom) to strengthen their coordination and collaboration, and to participate in collective activities in the field of nuclear fusion research.
In Europe, fusion research takes place in a great number of research institutes and universities. In each member state of the European Fusion Programme at least one research organisation has a "Contract of Association" with the European Commission. All the fusion research organisations and institutions of a country are connected to the program through this (these) contracted organisation(s). After the name of the contract, the groups of fusion research organisations of the member states are called "Associations".
History
The European Fusion Development Agreement (EFDA) was created in 1999.
Until 2008 EFDA was responsible for the exploitation of the Joint European Torus, the coordination and support of fusion-related research & development activities carried out by the Associations and by European Industry and coordination of the European contribution to large scale international collaborations, such as the ITER-project.
2008 has brought a significant change to the structure of the European Fusion Programme. The change was triggered by the signature of the ITER agreement at the end of 2006. The ITER parties had agreed to provide contributions to ITER through legal entities referred to as "Domestic Agencies". Europe has fulfilled its obligation by launching the European Domestic Agency called "Fusion for Energy", also called F4E, in March 2007.
With the appearance of F4E, EFDA's role changed and it was reorganised. A revised European Fusion Development Agreement, which entered into force on 1 January 2008, focuses on research coordination with two main objectives: to prepare for the operation and exploitation of ITER and to further develop and consolidate the knowledge base needed for overall fusion development, in particular for DEMO, the first electricity-producing experimental fusion power plant, to be built after ITER.
Organisation
EFDA has two locations, which each house a so-called Close Support Unit (CSU), responsible for part of EFDA's activities. The EFDA-CSU Garching is located in Garching, near Munich (Germany), and is hosted by the German Max-Planck Institut für Plasmaphysik. EFDA-CSU Culham is hosted by the CCFE laboratory in Culham (UK), home of the Joint European Torus facilities.
A large number of scientists and engineers from the associated laboratories work together on different projects of EFDA. The main task of the Close Support Units is to ensure that these diverse activities are integrated in a coordinated European Fusion Programme.
The EFDA management consists of the EFDA Leader (Dr. Francesco Romanelli) and the EFDA-Associate Leader for JET (Dr. Francesco Romanelli).
Activities
In order to achieve its objectives EFDA conducts the following group of activities:
Collective use of JET, the world's largest fusion experiment
Reinforced coordination of fusion physics and technology research and development in EU laboratories.
Training and career development of researchers, promoting links to universities and carrying out support actions for the benefit of the fusion programme.
EU contributions to international collaborations outside F4E
EFDA coordinates a range of activities to be carried out by the Associations in seven key physics and technology areas. The implementation of these activities benefits from structures called Task Forces and Topical Groups. The European Task Forces on Plasma Wall Interaction (PWI) and on Integrated Tokamak Modelling (ITM) were set up in 2002 and 2003 respectively. To strengthen the co-ordination in other key areas, five Topical Groups were set up in 2008: on Fusion Materials Development, Diagnostics, Heating and Current Drive, Transport, and Plasma Stability and Control.
See also
Culham Centre for Fusion Energy
External links
European Fusion Development Agreement (EFDA)
Fusion laboratories
Further reading
EURATOM
The EU fusion programme
The international fusion experiment ITER
The European Domestic Agency 'Fusion for Energy'
Nuclear fusion
Nuclear power in Europe | European Fusion Development Agreement | [
"Physics",
"Chemistry"
] | 889 | [
"Nuclear fusion",
"Nuclear physics"
] |
406,880 | https://en.wikipedia.org/wiki/Odds%20ratio | An odds ratio (OR) is a statistic that quantifies the strength of the association between two events, A and B. The odds ratio is defined as the ratio of the odds of event A taking place in the presence of B, and the odds of A in the absence of B. Due to symmetry, odds ratio reciprocally calculates the ratio of the odds of B occurring in the presence of A, and the odds of B in the absence of A. Two events are independent if and only if the OR equals 1, i.e., the odds of one event are the same in either the presence or absence of the other event. If the OR is greater than 1, then A and B are associated (correlated) in the sense that, compared to the absence of B, the presence of B raises the odds of A, and symmetrically the presence of A raises the odds of B. Conversely, if the OR is less than 1, then A and B are negatively correlated, and the presence of one event reduces the odds of the other event occurring.
Note that the odds ratio is symmetric in the two events, and no causal direction is implied (correlation does not imply causation): an OR greater than 1 does not establish that B causes A, or that A causes B.
Two similar statistics that are often used to quantify associations are the relative risk (RR) and the absolute risk reduction (ARR). Often, the parameter of greatest interest is actually the RR, which is the ratio of the probabilities analogous to the odds used in the OR. However, available data frequently do not allow for the computation of the RR or the ARR, but do allow for the computation of the OR, as in case-control studies, as explained below. On the other hand, if one of the properties (A or B) is sufficiently rare (in epidemiology this is called the rare disease assumption), then the OR is approximately equal to the corresponding RR.
The OR plays an important role in the logistic model.
Definition and basic properties
Intuition from an example for laypeople
If we flip an unbiased coin, the probability of getting heads and the probability of getting tails are equal — both are 50%. Imagine we get a biased coin that makes it two times more likely to get heads. But what does "twice as likely" mean in terms of a probability? It cannot literally mean to double the original probability value, because doubling 50% would yield 100%. Rather, it is the odds that are doubling: from 1:1 odds, to 2:1 odds. The new probabilities would be 66⅔% for heads and 33⅓% for tails.
A motivating example, in the context of the rare disease assumption
Suppose a radiation leak in a village of 1,000 people increased the incidence of a rare disease. The total number of people exposed to the radiation was 400, out of which 20 developed the disease and 380 stayed healthy. The total number of people not exposed was 600, out of which 6 developed the disease and 594 stayed healthy. We can organize this in a contingency table:
The risk of developing the disease given exposure is 20/400 = 0.05 and of developing the disease given non-exposure is 6/600 = 0.01. One obvious way to compare the risks is to use the ratio of the two, the relative risk, which here is 0.05/0.01 = 5.
The odds ratio is different. The odds of getting the disease if exposed is 20/380 ≈ 0.053 and the odds if not exposed is 6/594 ≈ 0.010. The odds ratio is the ratio of the two, (20/380)/(6/594) ≈ 5.2.
As illustrated by this example, in a rare-disease case like this, the relative risk and the odds ratio are almost the same. By definition, rare disease implies that 380 ≈ 400 and 594 ≈ 600. Thus, the denominators in the relative risk and odds ratio are almost the same (400 versus 380, and 600 versus 594).
Relative risk is easier to understand than the odds ratio, but one reason to use the odds ratio is that usually, data on the entire population is not available and random sampling must be used. In the example above, if it were very costly to interview villagers and find out if they were exposed to the radiation, then the prevalence of radiation exposure would not be known, and neither would the totals of 400 exposed and 600 unexposed. One could take a random sample of fifty villagers, but quite possibly such a random sample would not include anybody with the disease, since only 2.6% of the population are diseased. Instead, one might use a case-control study in which all 26 diseased villagers are interviewed as well as a random sample of 26 who do not have the disease. The results might turn out as follows ("might", because this is a random sample):
The odds in this sample of getting the disease given that someone is exposed is 20/10 and the odds given that someone is not exposed is 6/16. The odds ratio is thus (20/10)/(6/16) ≈ 5.3, quite close to the odds ratio of about 5.2 calculated for the entire village. The relative risk, however, cannot be calculated, because it is the ratio of the risks of getting the disease, and we would need the population totals of 400 exposed and 600 unexposed to figure those out. Because the study selected for people with the disease, half the people in the sample have the disease and it is known that that is more than the population-wide prevalence.
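A minimal sketch of the cross-product calculation for the case-control table just described (20 exposed and 6 unexposed among the cases, 10 exposed and 16 unexposed among the controls):

```python
# 2x2 table from the case-control sample in the text:
#                 diseased   healthy
# exposed             20        10
# not exposed          6        16

def odds_ratio(n11, n10, n01, n00):
    """Cross-product ratio of a 2x2 table."""
    return (n11 * n00) / (n10 * n01)

print(odds_ratio(20, 10, 6, 16))   # 5.33..., close to the whole-village figure of about 5.2
```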
It is standard in the medical literature to calculate the odds ratio and then use the rare-disease assumption (which is usually reasonable) to claim that the relative risk is approximately equal to it. This not only allows for the use of case-control studies, but makes controlling for confounding variables such as weight or age using regression analysis easier and has the desirable properties discussed in other sections of this article of invariance and insensitivity to the type of sampling.
Definition in terms of group-wise odds
The odds ratio is the ratio of the odds of an event occurring in one group to the odds of it occurring in another group. The term is also used to refer to sample-based estimates of this ratio. These groups might be men and women, an experimental group and a control group, or any other dichotomous classification. If the probabilities of the event in each of the groups are p1 (first group) and p2 (second group), then the odds ratio is:
OR = (p1/q1) / (p2/q2) = (p1q2) / (p2q1), where qx = 1 − px. An odds ratio of 1 indicates that the condition or event under study is equally likely to occur in both groups. An odds ratio greater than 1 indicates that the condition or event is more likely to occur in the first group. And an odds ratio less than 1 indicates that the condition or event is less likely to occur in the first group. The odds ratio must be nonnegative if it is defined. It is undefined if p2q1 equals zero, i.e., if p2 equals zero or q1 equals zero.
Definition in terms of joint and conditional probabilities
The odds ratio can also be defined in terms of the joint probability distribution of two binary random variables. The joint distribution of binary random variables X and Y can be written
where p11, p10, p01 and p00 are non-negative "cell probabilities" that sum to one (pxy being the probability that X = x and Y = y). The odds for Y within the two subpopulations defined by X = 1 and X = 0 are defined in terms of the conditional probabilities of Y given X:
Thus the odds ratio is OR = (p11p00) / (p10p01).
The simple expression on the right, above, is easy to remember as the product of the probabilities of the "concordant cells" (X = Y) divided by the product of the probabilities of the "discordant cells" (X ≠ Y). However in some applications the labeling of categories as zero and one is arbitrary, so there is nothing special about concordant versus discordant values in these applications.
Symmetry
If we had calculated the odds ratio based on the conditional probabilities given Y,
we would have obtained the same result
Other measures of effect size for binary data such as the relative risk do not have this symmetry property.
Relation to statistical independence
If X and Y are independent, their joint probabilities can be expressed in terms of their marginal probabilities and , as follows
In this case, the odds ratio equals one, and conversely the odds ratio can only equal one if the joint probabilities can be factored in this way. Thus the odds ratio equals one if and only if X and Y are independent.
Recovering the cell probabilities from the odds ratio and marginal probabilities
The odds ratio is a function of the cell probabilities, and conversely, the cell probabilities can be recovered given knowledge of the odds ratio and the marginal probabilities and . If the odds ratio R differs from 1, then
where , and
In the case where , we have independence, so .
Once we have , the other three cell probabilities can easily be recovered from the marginal probabilities.
Example
Suppose that in a sample of 100 men, 90 drank wine in the previous week (so 10 did not), while in a sample of 80 women only 20 drank wine in the same period (so 60 did not). This forms the contingency table:
The odds ratio (OR) can be directly calculated from this table as OR = (90 × 60) / (10 × 20) = 27.
Alternatively, the odds of a man drinking wine are 90 to 10, or 9:1, while the odds of a woman drinking wine are only 20 to 60, or 1:3 = 0.33. The odds ratio is thus 9/0.33, or 27, showing that men are much more likely to drink wine than women. The detailed calculation is:
This example also shows how odds ratios are sometimes sensitive in stating relative positions: in this sample men are (90/100)/(20/80) = 3.6 times as likely to have drunk wine as women, but have 27 times the odds. The logarithm of the odds ratio, the difference of the logits of the probabilities, tempers this effect, and also makes the measure symmetric with respect to the ordering of groups. For example, using natural logarithms, an odds ratio of 27/1 maps to 3.296, and an odds ratio of 1/27 maps to −3.296.
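The wine example can be checked numerically, and the snippet below also shows the symmetry of the log odds ratio described above.

```python
import math

# Wine example: 90 of 100 men and 20 of 80 women drank wine in the previous week.
odds_men = 90 / 10          # 9
odds_women = 20 / 60        # 0.33...
or_ = odds_men / odds_women

print(or_)                     # 27.0
print(math.log(or_))           # ~3.296
print(math.log(1 / or_))       # ~-3.296, symmetric about zero
print((90 / 100) / (20 / 80))  # the corresponding ratio of probabilities: 3.6
```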
Statistical inference
Several approaches to statistical inference for odds ratios have been developed.
One approach to inference uses large sample approximations to the sampling distribution of the log odds ratio (the natural logarithm of the odds ratio). If we use the joint probability notation defined above, the population log odds ratio is
If we observe data in the form of a contingency table
then the probabilities in the joint distribution can be estimated as the cell counts divided by the total, nij / n, where n = n11 + n10 + n01 + n00 is the sum of all four cell counts. The sample log odds ratio is
L = log(n11n00 / n10n01).
The distribution of the log odds ratio is approximately normal, with mean equal to the population log odds ratio and standard deviation given by the standard error below.
The standard error for the log odds ratio is approximately
SE = sqrt(1/n11 + 1/n10 + 1/n01 + 1/n00).
This is an asymptotic approximation, and will not give a meaningful result if any of the cell counts are very small. If L is the sample log odds ratio, an approximate 95% confidence interval for the population log odds ratio is L ± 1.96·SE. This can be mapped to (exp(L − 1.96·SE), exp(L + 1.96·SE)) to obtain a 95% confidence interval for the odds ratio. If we wish to test the hypothesis that the population odds ratio equals one, the two-sided p-value is 2·P(Z < −|L|/SE), where P denotes a probability, and Z denotes a standard normal random variable.
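A minimal sketch of this large-sample procedure, applied for illustration to the case-control counts from the motivating example; the standard-error formula and the 1.96 multiplier are the approximations described above.

```python
import math

def log_or_ci(n11, n10, n01, n00, z=1.96):
    """Sample log odds ratio, its approximate standard error, and a CI for the OR."""
    L = math.log((n11 * n00) / (n10 * n01))
    se = math.sqrt(1 / n11 + 1 / n10 + 1 / n01 + 1 / n00)
    return L, se, (math.exp(L - z * se), math.exp(L + z * se))

L, se, ci = log_or_ci(20, 10, 6, 16)   # counts from the radiation-leak sample
print(f"log OR = {L:.3f}, SE = {se:.3f}, 95% CI for OR = ({ci[0]:.2f}, {ci[1]:.2f})")
```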
An alternative approach to inference for odds ratios looks at the distribution of the data conditionally on the marginal frequencies of X and Y. An advantage of this approach is that the sampling distribution of the odds ratio can be expressed exactly.
Role in logistic regression
Logistic regression is one way to generalize the odds ratio beyond two binary variables. Suppose we have a binary response variable Y and a binary predictor variable X, and in addition we have other predictor variables Z1, ..., Zp that may or may not be binary. If we use multiple logistic regression to regress Y on X, Z1, ..., Zp, then the estimated coefficient for X is related to a conditional odds ratio. Specifically, at the population level
the conditional odds ratio equals the exponential of the coefficient of X, so the exponential of the estimated coefficient is an estimate of this conditional odds ratio. The interpretation of this quantity is as an estimate of the odds ratio between Y and X when the values of Z1, ..., Zp are held fixed.
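A hedged sketch of this relationship, assuming the statsmodels package is available; the data are synthetic and every name here is an illustrative choice rather than anything from the text.

```python
import numpy as np
import statsmodels.api as sm

# Simulate a binary outcome y with binary exposure x and one extra covariate z;
# the true conditional log odds ratio for x is 0.8, so the OR is exp(0.8) ~ 2.2.
rng = np.random.default_rng(0)
n = 2000
x = rng.integers(0, 2, n)
z = rng.normal(size=n)
log_odds = -1.0 + 0.8 * x + 0.5 * z
y = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

X = sm.add_constant(np.column_stack([x, z]))
result = sm.Logit(y, X).fit(disp=0)
print(np.exp(result.params[1]))   # exponentiated coefficient: estimated conditional OR for x
```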
Insensitivity to the type of sampling
If the data form a "population sample", then the cell probabilities are interpreted as the frequencies of each of the four groups in the population as defined by their X and Y values. In many settings it is impractical to obtain a population sample, so a selected sample is used. For example, we may choose to sample units with X = 1 with a given probability f, regardless of their frequency in the population (which would necessitate sampling units with X = 0 with probability 1 − f). In this situation, our data would follow the following joint probabilities:
The odds ratio for this distribution does not depend on the value of f. This shows that the odds ratio (and consequently the log odds ratio) is invariant to non-random sampling based on one of the variables being studied. Note however that the standard error of the log odds ratio does depend on the value of f.
This fact is exploited in two important situations:
Suppose it is inconvenient or impractical to obtain a population sample, but it is practical to obtain a convenience sample of units with different X values, such that within the X = 1 and X = 0 subsamples the Y values are representative of the population (i.e. they follow the correct conditional probabilities).
Suppose the marginal distribution of one variable, say X, is very skewed. For example, if we are studying the relationship between high alcohol consumption and pancreatic cancer in the general population, the incidence of pancreatic cancer would be very low, so it would require a very large population sample to get a modest number of pancreatic cancer cases. However we could use data from hospitals to contact most or all of their pancreatic cancer patients, and then randomly sample an equal number of subjects without pancreatic cancer (this is called a "case-control study").
In both these settings, the odds ratio can be calculated from the selected sample, without biasing the results relative to what would have been obtained for a population sample.
Use in quantitative research
Due to the widespread use of logistic regression, the odds ratio is widely used in many fields of medical and social science research. The odds ratio is commonly used in survey research, in epidemiology, and to express the results of some clinical trials, such as in case-control studies. It is often abbreviated "OR" in reports. When data from multiple surveys is combined, it will often be expressed as "pooled OR".
Relation to relative risk
As explained in the "Motivating Example" section, the relative risk is usually better than the odds ratio for understanding the relation between risk and some variable such as radiation or a new drug. That section also explains that if the rare disease assumption holds, the odds ratio is a good approximation to relative risk and that it has some advantages over relative risk. When the rare disease assumption does not hold, the unadjusted odds ratio will be greater than the relative risk, but novel methods can easily use the same data to estimate the relative risk, risk differences, base probabilities, or other quantities.
If the absolute risk in the unexposed group is available, conversion between the two is calculated by:
RR = OR / (1 − RC + RC × OR)
where RC is the absolute risk of the unexposed group.
If the rare disease assumption does not apply, the odds ratio may be very different from the relative risk and should not be interpreted as a relative risk.
Consider the death rate of men and women passengers when a ship sank. Of 462 women, 154 died and 308 survived. Of 851 men, 709 died and 142 survived. Clearly a man on the ship was more likely to die than a woman, but how much more likely? Since over half the passengers died, the rare disease assumption is strongly violated.
To compute the odds ratio, note that for women the odds of dying were 1 to 2 (154/308). For men, the odds were 5 to 1 (709/142). The odds ratio is 9.99 (4.99/.5). Men had ten times the odds of dying as women.
For women, the probability of death was 33% (154/462). For men the probability was 83% (709/851). The relative risk of death is 2.5 (.83/.33). A man had 2.5 times a woman's probability of dying.
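The conversion formula quoted earlier can be checked against these figures:

```python
# RR = OR / (1 - Rc + Rc * OR), with Rc the risk in the reference (female) group.
deaths_f, total_f = 154, 462
deaths_m, total_m = 709, 851

rc = deaths_f / total_f
or_ = (deaths_m / (total_m - deaths_m)) / (deaths_f / (total_f - deaths_f))
rr = or_ / (1 - rc + rc * or_)

print(round(or_, 2), round(rr, 2))   # ~9.99 and ~2.5, matching the figures in the text
```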
Confusion and exaggeration
Odds ratios have often been confused with relative risk in medical literature. For non-statisticians, the odds ratio is a difficult concept to comprehend, and it gives a more impressive figure for the effect. However, most authors consider that the relative risk is readily understood. In one study, members of a national disease foundation were actually 3.5 times more likely than nonmembers to have heard of a common treatment for that disease – but the odds ratio was 24 and the paper stated that members were ‘more than 20-fold more likely to have heard of’ the treatment. A study of papers published in two journals reported that 26% of the articles that used an odds ratio interpreted it as a risk ratio.
This may reflect the simple process of uncomprehending authors choosing the most impressive-looking and publishable figure. But its use may in some cases be deliberately deceptive. It has been suggested that the odds ratio should only be presented as a measure of effect size when the risk ratio cannot be estimated directly, but with newly available methods it is always possible to estimate the risk ratio, which should generally be used instead.
While relative risks are potentially easier to interpret for a general audience, there are mathematical and conceptual advantages to using an odds ratio instead of a relative risk, particularly in regression models. For that reason, there is no consensus within the fields of epidemiology or biostatistics on whether relative risks or odds ratios should be preferred when both can be validly used, such as in clinical trials and cohort studies.
Invertibility and invariance
The odds ratio has another unique property of being directly mathematically invertible whether analyzing the OR as either disease survival or disease onset incidence – where the OR for survival is direct reciprocal of 1/OR for risk. This is known as the 'invariance of the odds ratio'. In contrast, the relative risk does not possess this mathematical invertible property when studying disease survival vs. onset incidence. This phenomenon of OR invertibility vs. RR non-invertibility is best illustrated with an example:
Suppose in a clinical trial, one has an adverse event risk of 4/100 in the drug group and 2/100 in the placebo group, yielding RR = 2 and OR = 2.04166 for drug-vs-placebo adverse risk. However, if the analysis were inverted and adverse events were instead analyzed as event-free survival, then the drug group would have a rate of 96/100 and the placebo group a rate of 98/100, yielding a drug-vs-placebo RR = 0.9796 for survival but an OR = 0.48979. As one can see, an RR of 0.9796 is clearly not the reciprocal of an RR of 2. In contrast, an OR of 0.48979 is indeed the direct reciprocal of an OR of 2.04166.
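The figures quoted in this example can be reproduced directly:

```python
# 4/100 adverse events on the drug, 2/100 on placebo, analysed both as risk and
# as event-free survival.

def rr_and_or(p1, p2):
    """Relative risk and odds ratio for probabilities p1 (drug) and p2 (placebo)."""
    return p1 / p2, (p1 / (1 - p1)) / (p2 / (1 - p2))

risk = rr_and_or(0.04, 0.02)   # (2.0, ~2.0417)
surv = rr_and_or(0.96, 0.98)   # (~0.9796, ~0.4898)

print(risk, surv)
print(1 / risk[1], 1 / surv[1])   # the two ORs are exact reciprocals; the RRs are not
```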
This is again what is called the 'invariance of the odds ratio', and why a RR for survival is not the same as a RR for risk, while the OR has this symmetrical property when analyzing either survival or adverse risk. The danger to clinical interpretation for the OR comes when the adverse event rate is not rare, thereby exaggerating differences when the OR rare-disease assumption is not met. On the other hand, when the disease is rare, using a RR for survival (e.g. the RR=0.9796 from above example) can clinically hide and conceal an important doubling of adverse risk associated with a drug or exposure.
Estimators of the odds ratio
Sample odds ratio
The sample odds ratio n11n00 / n10n01 is easy to calculate, and for moderate and large samples performs well as an estimator of the population odds ratio. When one or more of the cells in the contingency table can have a small value, the sample odds ratio can be biased and exhibit high variance.
Alternative estimators
A number of alternative estimators of the odds ratio have been proposed to address limitations of the sample odds ratio. One alternative estimator is the conditional maximum likelihood estimator, which conditions on the row and column margins when forming the likelihood to maximize (as in Fisher's exact test). Another alternative estimator is the Mantel–Haenszel estimator.
Numerical examples
The following four contingency tables contain observed cell counts, along with the corresponding sample odds ratio (OR) and sample log odds ratio (LOR):
The following joint probability distributions contain the population cell probabilities, along with the corresponding population odds ratio (OR) and population log odds ratio (LOR):
Related statistics
There are various other summary statistics for contingency tables that measure association between two events, such as Yule's Y and Yule's Q; these two are normalized so they are 0 for independent events, 1 for perfectly correlated, and −1 for perfectly negatively correlated. It has been argued that such measures of association must be functions of the odds ratio, also referred to in this context as the cross-ratio.
Odds Ratio for a Matched Case-Control Study
A case-control study involves selecting representative samples of cases and controls who do, and do not, have some disease, respectively. These samples are usually independent of each other. The prior prevalence of exposure to some risk factor is observed in subjects from both samples. This permits the estimation of the odds ratio for disease in exposed vs. unexposed people as noted above. Sometimes, however, it makes sense to match cases to controls on one or more confounding variables. In this case, the prior exposure of interest is determined for each case and her/his matched control. The data can be summarized in the following table.
Matched 2x2 Table
This table gives the exposure status of the matched pairs of subjects. There are pairs where both the case and her/his matched control were exposed, pairs where the case patient was exposed but the control subject was not, pairs where the control subject was exposed but the case patient was not, and pairs where neither subject was exposed. The exposure of matched case and control pairs is correlated due to the similar values of their shared confounding variables.
The following derivation is due to Breslow & Day. We consider each pair as belonging to a stratum with identical values of the confounding variables. Conditioned on belonging to the same stratum, the exposure status of cases and controls are independent of each other. For any case-control pair within the same stratum let
be the probability that a case patient is exposed,
be the probability that a control patient is exposed,
be the probability that a case patient is not exposed, and
be the probability that a control patient is not exposed.
Then the probability that a case is exposed and a control is not is , and the probability that a control is exposed and a case is not is . The within-stratum odds ratio for exposure in cases relative to controls is
We assume that this within-stratum odds ratio is constant across strata.
Now concordant pairs in which either both the case and the control are exposed, or neither are exposed tell us nothing about the odds of exposure in cases relative to the odds of exposure among controls. The probability that the case is exposed and the control is not given that the pair is discordant is
The distribution of given the number of discordant pairs is binomial ~ B and the maximum likelihood estimate of is
Multiplying both sides of this equation by and subtracting gives
and hence
.
Now is the maximum likelihood estimate of , and is a monotonic function of . It follows that is the conditional maximum likelihood estimate of given the number of discordant pairs. Rothman et al. give an alternate derivation of by showing that it is a special case of the Mantel-Haenszel estimate of the intra-strata odds ratio for stratified 2x2 tables. They also reference Breslow & Day as providing the derivation given here.
Under the null hypothesis that the odds ratio equals 1, this probability equals 1/2.
Hence, we can test the null hypothesis that the odds ratio equals 1 by testing the null hypothesis that this probability equals 1/2. This is done using McNemar's test.
There are a number of ways to calculate a confidence interval for . Let and denote the lower and upper bound of a confidence interval for , respectively. Since , the corresponding confidence interval for is
.
Matched 2x2 tables may also be analyzed using conditional logistic regression. This technique has the advantage of allowing users to regress case-control status against multiple risk factors from matched case-control data.
Example
McEvoy et al.
studied the use of cell phones by drivers as a risk factor for automobile crashes in a case-crossover study. All study subjects were involved in an automobile crash requiring hospital attendance. Each driver's cell phone use at the time of her/his crash was compared to her/his cell phone use in a control interval at the same time of day one week earlier. We would expect that a person's cell phone use at the time of the crash would be correlated with his/her use one week earlier. Comparing usage during the crash and control intervals adjusts for driver's characteristics and the time of day and day of the week. The data can be summarized in the following table.
There were 5 drivers who used their phones in both intervals, 27 who used them in the crash but not the control interval, 6 who used them in the control but not the crash interval, and 288 who did not use them in either interval. The odds ratio for crashing while using their phone relative to driving when not using their phone was
27/6 = 4.5.
Testing the null hypothesis that the odds ratio equals 1 is the same as testing the null hypothesis that a discordant pair is equally likely to have the phone use fall in the crash interval or in the control interval, given 27 out of 33 discordant pairs in which the driver was using her/his phone at the time of the crash. McNemar's statistic is (27 − 6)² / (27 + 6) = 13.4. This statistic has one degree of freedom and yields a P value of 0.0003. This allows us to reject the hypothesis that cell phone use has no effect on the risk of automobile crashes with a high level of statistical significance.
Using Wilson's method, a 95% confidence interval for the probability that a discordant pair has the phone use in the crash interval is (0.6561, 0.9139). Hence, a 95% confidence interval for the odds ratio is (0.6561/0.3439, 0.9139/0.0861) = (1.91, 10.61).
(McEvoy et al. analyzed their data using conditional logistic regression and obtained almost identical results to those given here. See the last row of Table 3 in their paper.)
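A standard-library sketch reproducing the numbers quoted for this example; the Wilson score interval written out here is one common construction, stated as an assumption since the source does not spell out its algebra.

```python
import math

# Discordant pairs: 27 with phone use only in the crash interval, 6 only in the control interval.
f, g = 27, 6
n = f + g

or_hat = f / g                                # conditional MLE of the odds ratio: 4.5
mcnemar = (f - g) ** 2 / n                    # McNemar's statistic, 1 degree of freedom
p_value = math.erfc(math.sqrt(mcnemar / 2))   # two-sided P value, ~0.0003

# Wilson 95% interval for p = f / n, then mapped to the odds ratio via p / (1 - p).
z, p = 1.96, f / n
centre = (p + z ** 2 / (2 * n)) / (1 + z ** 2 / n)
half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / (1 + z ** 2 / n)
lo, hi = centre - half, centre + half         # ~ (0.656, 0.914)

print(or_hat, round(mcnemar, 2), round(p_value, 4))
print(round(lo / (1 - lo), 2), round(hi / (1 - hi), 2))   # 95% CI for the OR, ~ (1.9, 10.6)
```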
See also
Cohen's h
Cross-ratio
Diagnostic odds ratio
Forest plot
Hazard ratio
Likelihood ratio
Rate ratio
References
Citations
Sources
External links
Odds Ratio Calculator – website
Odds Ratio Calculator with various tests – website
OpenEpi, a web-based program that calculates the odds ratio, both unmatched and pair-matched
Epidemiology
Medical statistics
Bayesian statistics
Summary statistics for contingency tables | Odds ratio | [
"Environmental_science"
] | 5,604 | [
"Epidemiology",
"Environmental social science"
] |
406,902 | https://en.wikipedia.org/wiki/Geometrized%20unit%20system | A geometrized unit system or geometrodynamic unit system is a system of natural units in which the base physical units are chosen so that the speed of light in vacuum, c, and the gravitational constant, G, are set equal to unity.
The geometrized unit system is not a completely defined system. Some systems are geometrized unit systems in the sense that they set these, in addition to other constants, to unity, for example Stoney units and Planck units.
This system is useful in physics, especially in the special and general theories of relativity. All physical quantities are identified with geometric quantities such as areas, lengths, dimensionless numbers, path curvatures, or sectional curvatures.
Many equations in relativistic physics appear simpler when expressed in geometric units, because all occurrences of G and of c drop out. For example, the Schwarzschild radius of a nonrotating uncharged black hole with mass m becomes simply r = 2m. For this reason, many books and papers on relativistic physics use geometric units. An alternative system of geometrized units is often used in particle physics and cosmology, in which 8πG = 1 instead. This introduces an additional factor of 8π into Newton's law of universal gravitation but simplifies the Einstein field equations, the Einstein–Hilbert action, the Friedmann equations and the Newtonian Poisson equation by removing the corresponding factor.
Definition
Geometrized units were defined in the book Gravitation by Charles W. Misner, Kip S. Thorne, and John Archibald Wheeler with the speed of light, c, the gravitational constant, G, and the Boltzmann constant, kB, all set to 1. Some authors refer to these units as geometrodynamic units.
In geometric units, every time interval is interpreted as the distance travelled by light during that given time interval. That is, one second is interpreted as one light-second, so time has the geometric units of length. This is dimensionally consistent with the notion that, according to the kinematical laws of special relativity, time and distance are on an equal footing.
Energy and momentum are interpreted as components of the four-momentum vector, and mass is the magnitude of this vector, so in geometric units these must all have the dimension of length. We can convert a mass expressed in kilograms to the equivalent mass expressed in metres by multiplying by the conversion factor G/c². For example, the Sun's mass of about 2.0 × 10^30 kg in SI units is equivalent to about 1.48 km. This is half the Schwarzschild radius of a one solar mass black hole. All other conversion factors can be worked out by combining these two.
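A quick numerical check of this conversion, using commonly quoted values for G, c, and the solar mass:

```python
# Convert a mass in kilograms to its geometrized length in metres via G / c**2.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
m_sun = 1.989e30   # kg

length = m_sun * G / c ** 2
print(length)        # ~1.48e3 m, i.e. about 1.48 km
print(2 * length)    # ~2.95e3 m, the Schwarzschild radius of a solar-mass black hole
```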
The small numerical size of the few conversion factors reflects the fact that relativistic effects are only noticeable when large masses or high speeds are considered.
Conversions
Listed below are all conversion factors that are useful to convert between all combinations of the SI base units, and if not possible, between them and their unique elements, because ampere is a dimensionless ratio of two lengths such as [C/s], and candela (1/683 [W/sr]) is a dimensionless ratio of two dimensionless ratios such as ratio of two volumes [kg⋅m2/s3] = [W] and ratio of two areas [m2/m2] = [sr], while mole is only a dimensionless Avogadro number of entities such as atoms or particles:
References
See Appendix F
External links
Conversion factors for energy equivalents
General relativity
Systems of units
Natural units | Geometrized unit system | [
"Physics",
"Mathematics"
] | 714 | [
"Systems of units",
"Quantity",
"General relativity",
"Theory of relativity",
"Units of measurement"
] |
407,480 | https://en.wikipedia.org/wiki/Grout | Grout is a dense substance that flows like a liquid yet hardens upon application, and it gets used to fill gaps or to function as reinforcement in existing structures. Grout is generally a mixture of water, cement, and sand, and it frequently gets employed in efforts such as pressure grouting, embedding rebar in masonry walls, connecting sections of precast concrete, filling voids, and sealing joints such as those between tiles. Common uses for grout in the household include filling in tiles of shower floors and kitchen tiles. It is often color tinted when it has to be kept visible and sometimes includes fine gravel when being used to fill large spaces (such as the cores of concrete blocks). Unlike other structural pastes such as plaster or joint compound, correctly mixed and applied grout forms a water-resistant seal.
Although both grout and its close relative, mortar, are applied as a thick suspension and harden over time, grout is distinguished by its low viscosity and lack of lime (added to mortar for pliability); grout is thin so it flows readily into gaps, while mortar is thick enough to support not only its own weight, but also that of masonry placed above it. Grout is also similar to concrete, but grout is distinguished by having only very fine aggregate (sand) and by generally containing a higher ratio of water to achieve the low desired viscosity.
The materials "caulk" and "grout" are often confused with each other. Both are used extensively in building maintenance, but caulk is usually a fluid silicone- or polyurethane-based substance, while grout is a mixture of many fine particles whose common household use depends on its cement base. In addition, caulk remains flexible after it dries, whereas grout sets hard. Grout-heavy projects are frequently undertaken to keep both dirt and moisture from getting under tiles.
Varieties
Grout varieties include tiling, flooring, resin, nonshrinking, structural, and thixotropic grouts. The use of enhancing admixtures increases the quality of cement-based materials and leads to greater uniformity of hardened properties.
Tiling grout is often used to fill the spaces between tiles or mosaics and to secure tile to its base. Although ungrouted mosaics do exist, most have grout between the tesserae. Tiling grout is also cement-based, and is produced in sanded and unsanded varieties, which affects the strength, size, and appearance of the grout. The sanded variety contains finely ground silica sand; unsanded is finer and produces a smoother final surface. They are often enhanced with polymers and/or latex.
Structural grout is often used in reinforced masonry to fill voids in masonry housing reinforcing steel, securing the steel in place, and bonding it to the masonry. Nonshrinking grout is used beneath metal bearing plates to ensure a consistent bearing surface between the plate and its substrate, which adds stability and allows for higher load transfers.
Portland cement is the most common cementing agent in grout. However, the utilization of thermoset polymer matrix grouts based on thermosets such as urethanes and epoxies are also popular.
Portland cement-based grouts include different varieties depending on the particle size of the ground clinker used to make the cement, with a standard size around 15 microns, microfine from 6–10 microns, and ultrafine below 5 microns. Finer particle sizes let the grout penetrate more deeply into a fissure. Because these grouts depend on the presence of sand for their basic strength, they are often somewhat gritty when finally cured and hardened.
From the different types of grout, a suitable one has to be chosen depending on the load. For example, a load up to 7.5 tons can be expected for a garage access [two-component pavement joint mortar (traffic load)], whereas a cobbled garden path is only designed for a pedestrian load [one-component pavement joint mortar (pedestrian load)]. Furthermore, various substructures determine whether the type of grout should be permanently permeable to water or waterproof, for example, by concrete subfloor.
Tools and treatments
Tools associated with groutwork include:
A grout saw or grout scraper is a manual tool for removal of old and discolored grout. The blade is usually composed of tungsten carbide.
A grout float is a trowel-like tool for smoothing the surface of a grout line, typically made of rubber or soft plastic
Grout sealer is a water-based or solvent-based sealant applied over dried grout that resists water, oil, and acid-based contaminants.
Grout cleaner is a basic cleaning solution that is applied on grout lines and removes the dirt and dust.
A die grinder is used for faster removal of old grout compared to a standard grout saw.
A pointing trowel is used for applying grout in flagstone and other stone works.
A multi-tool (power tools) is another option for removing tile grout between tiles when fitted with a specified diamond blade.
A grout clean-up bucket is a professional clean-up kit for faster grout washup. It consists of a specialised bucket on rollers with a sponge.
See also
Composite material
Caulk
Glue
Mortar in masonry
Mortar joint
Thinset
References
Building materials
Cement
Concrete
Masonry | Grout | [
"Physics",
"Engineering"
] | 1,173 | [
"Structural engineering",
"Matter",
"Building engineering",
"Architecture",
"Construction",
"Materials",
"Concrete",
"Masonry",
"Building materials"
] |
407,734 | https://en.wikipedia.org/wiki/Mercury%28II%29%20chloride | Mercury(II) chloride (or mercury bichloride, mercury dichloride), historically also known as sulema or corrosive sublimate, is the inorganic chemical compound of mercury and chlorine with the formula HgCl2, used as a laboratory reagent. It is a white crystalline solid and a molecular compound that is very toxic to humans. Once used as a treatment for syphilis, it is no longer used for medicinal purposes because of mercury toxicity and the availability of superior treatments.
Synthesis
Mercuric chloride is obtained by the action of chlorine on mercury or on mercury(I) chloride. It can also be produced by the addition of hydrochloric acid to a hot, concentrated solution of mercury(I) compounds such as the nitrate:
Hg2(NO3)2 + 4 HCl → 2 HgCl2 + 2 H2O + 2 NO2
Heating a mixture of solid mercury(II) sulfate and sodium chloride also affords volatile HgCl2, which can be separated by sublimation.
Properties
Mercuric chloride exists not as a salt composed of discrete ions, but rather is composed of linear triatomic molecules, hence its tendency to sublime. In the crystal, each mercury atom is bonded to two chloride ligands with Hg–Cl distance of 2.38 Å; six more chlorides are more distant at 3.38 Å.
Its solubility in water increases with temperature, from about 6% to 36%.
Applications
The main application of mercuric chloride is as a catalyst for the conversion of acetylene to vinyl chloride, the precursor to polyvinyl chloride:
C2H2 + HCl → CH2=CHCl
For this application, the mercuric chloride is supported on carbon in concentrations of about 5 weight percent. This technology has been eclipsed by the thermal cracking of 1,2-dichloroethane. Other significant applications of mercuric chloride include its use as a depolarizer in batteries and as a reagent in organic synthesis and analytical chemistry (see below).
It is being used in plant tissue culture for surface sterilisation of explants such as leaf or stem nodes.
As a chemical reagent
Mercuric chloride is occasionally used to form an amalgam with metals, such as aluminium. Upon treatment with an aqueous solution of mercuric chloride, aluminium strips quickly become covered by a thin layer of the amalgam. Normally, aluminium is protected by a thin layer of oxide, thus making it inert. Amalgamated aluminium exhibits a variety of reactions not observed for aluminium itself. For example, amalgamated aluminum reacts with water generating Al(OH)3 and hydrogen gas. Halocarbons react with amalgamated aluminium in the Barbier reaction. These alkylaluminium compounds are nucleophilic and can be used in a similar fashion to the Grignard reagent. Amalgamated aluminium is also used as a reducing agent in organic synthesis. Zinc is also commonly amalgamated using mercuric chloride.
Mercuric chloride is used to remove dithiane groups attached to a carbonyl in an umpolung reaction. This reaction exploits the high affinity of Hg2+ for anionic sulfur ligands.
Mercuric chloride may be used as a stabilising agent for chemicals and analytical samples. Care must be taken to ensure that detected mercuric chloride does not eclipse the signals of other components in the sample, such as is possible in gas chromatography.
History
Discovery of the mineral acids
Around 900, the authors of the Arabic writings attributed to Jabir ibn Hayyan (Latin: Geber) and the Persian physician and alchemist Abu Bakr al-Razi (Latin: Rhazes) were experimenting with sal ammoniac (ammonium chloride), which when it was distilled together with vitriol (hydrated sulfates of various metals) produced hydrogen chloride. It is possible that in one of his experiments, al-Razi stumbled upon a primitive method to produce hydrochloric acid. However, it appears that in most of these early experiments with chloride salts, the gaseous products were discarded, and hydrogen chloride may have been produced many times before it was discovered that it can be put to chemical use.
One of the first such uses of hydrogen chloride was in the synthesis of mercury(II) chloride (corrosive sublimate), whose production from the heating of mercury either with alum and ammonium chloride or with vitriol and sodium chloride was first described in the treatise known as "On Alums and Salts". This eleventh- or twelfth-century Arabic alchemical text is anonymous in most manuscripts, though some manuscripts attribute it to Hermes Trismegistus, and a few falsely attribute it to Abu Bakr al-Razi. It was translated into Hebrew and twice into Latin.
In the process described in this treatise, hydrochloric acid started to form, but it immediately reacted with the mercury to produce mercury(II) chloride. Thirteenth-century Latin alchemists, for whom "On Alums and Salts" was one of the main reference works, were fascinated by the chlorinating properties of mercury(II) chloride, and they eventually discovered that when the metals are eliminated from the process of heating vitriols, alums, and salts, strong mineral acids can be distilled directly.
Historical use in photography
Mercury(II) chloride was used as a photographic intensifier to produce positive pictures in the collodion process of the 1800s. When applied to a negative, the mercury(II) chloride whitens and thickens the image, thereby increasing the opacity of the shadows and creating the illusion of a positive image.
Historical use in preservation
For the preservation of anthropological and biological specimens during the late 19th and early 20th centuries, objects were dipped in or were painted with a "mercuric solution". This was done to prevent the specimens' destruction by moths, mites and mold. Objects in drawers were protected by scattering crystalline mercuric chloride over them. It finds minor use in tanning, and wood was preserved by kyanizing (soaking in mercuric chloride). Mercuric chloride was one of the three chemicals used for railroad tie wood treatment between 1830 and 1856 in Europe and the United States. Only a limited number of railroad ties were treated in the United States until concerns arose over lumber shortages in the 1890s. The process was generally abandoned because mercuric chloride was water-soluble and not effective for the long term, as well as being highly poisonous. Furthermore, alternative treatment processes, such as copper sulfate, zinc chloride, and ultimately creosote, were found to be less toxic. Limited kyanizing was used for some railroad ties in the 1890s and early 1900s.
Historic use in medicine
Mercuric chloride was a common over-the-counter disinfectant in the early twentieth century, recommended for everything from fighting measles germs to protecting fur coats and exterminating red ants.
A New York physician, Carlin Philips, wrote in 1913 that "it is one of our most popular and effective household antiseptics", but so corrosive and poisonous that it should only be available by prescription. A group of physicians in Chicago made the same demand later the same month. The product frequently caused accidental poisonings and was used as a suicide method.
It was used to disinfect wounds by Arab physicians in the Middle Ages. It continued to be used by Arab physicians into the twentieth century, until modern medicine deemed it unsafe for use.
Syphilis was frequently treated with mercuric chloride before the advent of antibiotics. It was inhaled, ingested, injected, and applied topically. Both mercuric-chloride treatment for syphilis and poisoning during the course of treatment were so common that the latter's symptoms were often confused with those of syphilis. This use of "salts of white mercury" is referred to in the English-language folk song "The Unfortunate Rake".
Yaws was treated with mercuric chloride (labeled as Corrosive Sublimate) before the advent of antibiotics. It was applied topically to alleviate ulcerative symptoms. Evidence of this is found in Jack London's book The Cruise of the Snark in the chapter entitled "The Amateur M.D."
Between 1901 and 1904 the US Marines Hospital Service quarantined and engaged in an extensive disinfection program of San Francisco's Chinatown, forcing the closure of over 14,000 rooms and eviction of thousands of Chinese whose dwellings were rendered toxic and uninhabitable from the disinfection program. Long-term mercury pollution is still a concern for construction workers in Chinatown to this day.
Historic use in crime and accidental poisonings
In 1613, whilst imprisoned in the Tower of London, Thomas Overbury was poisoned with an enema of mercury sublimate. The following trial saw the downfall of the murderers, Robert Carr and his wife, Frances.
In Volume V of Alexandre Dumas' Celebrated Crimes, he recounts the history of Antoine François Desrues, who killed noblewoman Madame de Lamotte with "corrosive sublimate."
In 1906 in New York, Richard Tilghman died after mistaking bichloride of mercury tablets for lithium citrate.
Actor Lon Chaney's estranged wife Cleva attempted suicide by swallowing mercuric chloride in 1913. Although the attempt failed, the toxic effects ruined her singing career.
In a highly publicized case in 1920, mercury bichloride was reported to have caused the death of 25-year-old American silent film star Olive Thomas. While vacationing in France, she accidentally (or perhaps intentionally) ingested the compound, which had been prescribed to her husband Jack Pickford in liquid topical form to treat his syphilis. Thomas died five days later.
Mercuric chloride was used by Madge Oberholtzer to commit suicide after she was kidnapped, raped and tortured by Ku Klux Klan leader D.C. Stephenson. She died from a combination of mercury poisoning and the staph infection that she suffered when Stephenson bit her during the assault.
Ana María Cires, a young wife of Uruguayan writer Horacio Quiroga, committed suicide by poison. After a violent fight with Quiroga, she ingested a fatal dose of "sublimado", or mercury chloride. She endured great agony for eight days before dying on December 14, 1915.
Ruth L. Truffant's death was called a suicide after she died from bichloride of mercury poisoning on 26 April 1914.
Toxicity
Mercury dichloride is a highly toxic compound, both acutely and as a cumulative poison. Its toxicity is due not just to its mercury content but also to its corrosive properties, which can cause serious internal damage, including ulcers to the stomach, mouth, and throat, and corrosive damage to the intestines. Mercuric chloride also tends to accumulate in the kidneys, causing severe corrosive damage which can lead to acute kidney failure. However, mercuric chloride, like all inorganic mercury salts, does not cross the blood–brain barrier as readily as organic mercury, although it is known to be a cumulative poison.
Common side effects of acute mercuric chloride poisoning include burning sensations in the mouth and throat, stomach pain, abdominal discomfort, lethargy, vomiting of blood, corrosive bronchitis, severe irritation to the gastrointestinal tract, and kidney failure. Chronic exposure can lead to symptoms more common with mercury poisoning, such as insomnia, delayed reflexes, excessive salivation, bleeding gums, fatigue, tremors, and dental problems.
Acute exposure to large amounts of mercuric chloride can cause death in as little as 24 hours, usually due to acute kidney failure or damage to the gastrointestinal tract. In other cases, victims of acute exposure have taken up to two weeks to die.
References
External links
Agency for toxic substances and disease registry. (2001, May 25). Toxicological profile for Mercury. Retrieved on April 17, 2005.
US National Institutes of Health. Retrieved 19 July 2022. Archived 20 July 2022.
Young, R.(2004, October 6). Toxicity summary for mercury. The risk assessment information system. Retrieved on April 17, 2005.
ATSDR - ToxFAQs: Mercury
ATSDR - Public Health Statement: Mercury
ATSDR - Medical Management Guidelines (MMGs) for Mercury (Hg)
ATSDR - Toxicological Profile: Mercury
National Pollutant Inventory - Mercury and compounds Fact Sheet
NIOSH Pocket Guide to Chemical Hazards
- includes excerpts from research reports.
Mercury(II) compounds
Chlorides
Metal halides
Alchemical substances
Photographic chemicals
Pulmonary agents | Mercury(II) chloride | [
"Chemistry"
] | 2,631 | [
"Chlorides",
"Inorganic compounds",
"Chemical weapons",
"Alchemical substances",
"Salts",
"Metal halides",
"Pulmonary agents"
] |
407,763 | https://en.wikipedia.org/wiki/Unit%20cube | A unit cube, more formally a cube of side 1, is a cube whose sides are 1 unit long. The volume of a 3-dimensional unit cube is 1 cubic unit, and its total surface area is 6 square units.
Unit hypercube
The term unit cube or unit hypercube is also used for hypercubes, or "cubes" in n-dimensional spaces, for values of n other than 3 and edge length 1.
Sometimes the term "unit cube" refers specifically to the set [0, 1]ⁿ of all n-tuples of numbers in the interval [0, 1].
The length of the longest diagonal of a unit hypercube of n dimensions is √n, the square root of n and the (Euclidean) length of the vector (1, 1, ..., 1) in n-dimensional space.
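As a quick numerical check of this formula, the following Python sketch (illustrative only; the function name is chosen here, not taken from any source) computes the longest diagonal as the Euclidean length of the all-ones vector and compares it with the square root of n.

```python
import math

def unit_hypercube_diagonal(n: int) -> float:
    """Longest diagonal of the n-dimensional unit hypercube: the Euclidean
    length of the all-ones vector (1, 1, ..., 1)."""
    return math.sqrt(sum(1.0 for _ in range(n)))

for n in (1, 2, 3, 4, 10):
    print(n, unit_hypercube_diagonal(n), math.sqrt(n))  # the two values agree
```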
See also
Doubling the cube
k-cell
Robbins constant, the average distance between two random points in a unit cube
Tychonoff cube, an infinite-dimensional analogue of the unit cube
Unit square
Unit sphere
References
External links
Euclidean solid geometry
1 (number)
Cubes | Unit cube | [
"Physics"
] | 232 | [
"Spacetime",
"Space",
"Euclidean solid geometry"
] |
407,814 | https://en.wikipedia.org/wiki/Hygiene%20hypothesis | In medicine, the hygiene hypothesis states that early childhood exposure to particular microorganisms (such as the gut flora and helminth parasites) protects against allergies by properly tuning the immune system. In particular, a lack of such exposure is thought to lead to poor immune tolerance. The time period for exposure begins before birth and ends at school age.
While early versions of the hypothesis referred to microorganism exposure in general, later versions apply to a specific set of microbes that have co-evolved with humans. The updates have been given various names, including the microbiome depletion hypothesis, the microflora hypothesis, and the "old friends" hypothesis. There is a significant amount of evidence supporting the idea that lack of exposure to these microbes is linked to allergies or other conditions, although it is still rejected by many scientists.
The term "hygiene hypothesis" has been described as a misnomer because people incorrectly interpret it as referring to their own cleanliness. Having worse personal hygiene, such as not washing hands before eating, only increases the risk of infection without affecting the risk of allergies or immune disorders. Hygiene is essential for protecting vulnerable populations such as the elderly from infections, preventing the spread of antibiotic resistance, and combating emerging infectious diseases such as Ebola. The hygiene hypothesis does not suggest that having more infections during childhood would be an overall benefit.
Overview
The idea of a link between parasite infection and immune disorders was first suggested in 1968 before the advent of large scale DNA sequencing techniques. The original formulation of the hygiene hypothesis dates from 1989, when David Strachan proposed that lower incidence of infection in early childhood could be an explanation for the rise in allergic diseases such as asthma and hay fever during the 20th century.
The hygiene hypothesis has also been expanded beyond allergies, and is also studied in the context of a broader range of conditions affected by the immune system, particularly inflammatory diseases. These include type 1 diabetes, multiple sclerosis, and also some types of depression and cancer. For example, the global distribution of multiple sclerosis is negatively correlated with that of the helminth Trichuris trichiura and its incidence is negatively correlated with Helicobacter pylori infection. Strachan's original hypothesis could not explain how various allergic conditions spiked or increased in prevalence at different times, such as why respiratory allergies began to increase much earlier than food allergies, which did not become more common until near the end of the 20th century.
In 2003, Graham Rook proposed the "old friends" hypothesis which has been described as a more rational explanation for the link between microbial exposure and inflammatory disorders. The hypothesis states that the vital microbial exposures are not colds, influenza, measles and other common childhood infections which have evolved relatively recently over the last 10,000 years, but rather the microbes already present during mammalian and human evolution, that could persist in small hunter-gatherer groups as microbiota, tolerated latent infections, or carrier states. He proposed that coevolution with these species has resulted in their gaining a role in immune system development.
Strachan's original formulation of the hygiene hypothesis also centred around the idea that smaller families provided insufficient microbial exposure partly because of less person-to-person spread of infections, but also because of "improved household amenities and higher standards of personal cleanliness". It seems likely that this was the reason he named it the "hygiene hypothesis". Although the "hygiene revolution" of the nineteenth and twentieth centuries may have been a major factor, it now seems more likely that, while public health measures such as sanitation, potable water and garbage collection were instrumental in reducing our exposure to cholera, typhoid and so on, they also deprived people of their exposure to the "old friends" that occupy the same environmental habitats.
The rise of autoimmune diseases and acute lymphoblastic leukemia in young people in the developed world was linked to the hygiene hypothesis. Autism may be associated with changes in the gut microbiome and early infections. The risk of chronic inflammatory diseases also depends on factors such as diet, pollution, physical activity, obesity, socio-economic factors, and stress. Genetic predisposition is also a factor.
History
Since allergies and other chronic inflammatory diseases are largely diseases of the last 100 years or so, the "hygiene revolution" of the last 200 years came under scrutiny as a possible cause. During the 1800s, radical improvements to sanitation and water quality occurred in Europe and North America. The introduction of toilets and sewer systems and the cleanup of city streets, and cleaner food were part of this program. This in turn led to a rapid decline in infectious diseases, particularly during the period 1900–1950, through reduced exposure to infectious agents.
Although the idea that exposure to certain infections may decrease the risk of allergy is not new, Strachan was one of the first to formally propose it, in an article published in the British Medical Journal in 1989. This article proposed to explain the observation that hay fever and eczema, both allergic diseases, were less common in children from larger families, which were presumably exposed to more infectious agents through their siblings, than in children from families with only one child. The increased occurrence of allergies had previously been thought to be a result of increasing pollution. The hypothesis was extensively investigated by immunologists and epidemiologists and has become an important theoretical framework for the study of chronic inflammatory disorders.
The "old friends hypothesis" proposed in 2003 may offer a better explanation for the link between microbial exposure and inflammatory diseases. This hypothesis argues that the vital exposures are not common cold and other recently evolved infections, which are no older than 10,000 years, but rather microbes already present in hunter-gatherer times when the human immune system was evolving. Conventional childhood infections are mostly "crowd infections" that kill or immunise and thus cannot persist in isolated hunter-gatherer groups. Crowd infections started to appear after the neolithic agricultural revolution, when human populations increased in size and proximity. The microbes that co-evolved with mammalian immune systems are much more ancient. According to this hypothesis, humans became so dependent on them that their immune systems can neither develop nor function properly without them.
Rook proposed that these microbes most likely include:
Ambient species that exist in the same environments as humans
Species that inhabit human skin, gut and respiratory tract, and that of the animals we live with
Organisms such as viruses and helminths (worms) that establish chronic infections or carrier states that humans can tolerate and so could co-evolve a specific immunoregulatory relationship with the immune system.
The modified hypothesis later expanded to include exposure to symbiotic bacteria and parasites.
"Evolution turns the inevitable into a necessity." This means that the majority of mammalian evolution took place in mud and rotting vegetation and more than 90 percent of human evolution took place in isolated hunter-gatherer communities and farming communities. Therefore, the human immune systems have evolved to anticipate certain types of microbial input, making the inevitable exposure into a necessity. The organisms that are implicated in the hygiene hypothesis are not proven to cause the disease prevalence, however there are sufficient data on lactobacilli, saprophytic environment mycobacteria, and helminths and their association. These bacteria and parasites have commonly been found in vegetation, mud, and water throughout evolution.
Multiple possible mechanisms have been proposed for how the 'Old Friends' microorganisms prevent autoimmune diseases and asthma. They include:
Reciprocal inhibition between immune responses directed against distinct antigens of the Old Friends microbes which elicit stronger immune responses than the weaker autoantigens and allergens of autoimmune disease and allergy respectively.
Competition for cytokines, MHC receptors and growth factors needed by the immune system to mount an immune response.
Immunoregulatory interactions with host TLRs.
The "microbial diversity" hypothesis, proposed by Paolo Matricardi and developed by von Hertzen, holds that diversity of microbes in the gut and other sites is a key factor for priming the immune system, rather than stable colonization with a particular species. Exposure to diverse organisms in early development builds a "database" that allows the immune system to identify harmful agents and normalize once the danger is eliminated.
For allergic disease, the most important times for exposure are: early in development; later during pregnancy; and the first few days or months of infancy. Exposure needs to be maintained over a significant period. This fits with evidence that delivery by Caesarean section may be associated with increased allergies, whilst breastfeeding can be protective.
Evolution of the adaptive immune system
Humans and the microbes they harbor have co-evolved for thousands of centuries; however, it is thought that the human species has gone through numerous phases in history characterized by different pathogen exposures. For instance, in very early human societies, limited interaction between members favored selection by a relatively small group of pathogens that had high transmission rates. It is considered that the human immune system is likely subjected to selective pressure from pathogens that are responsible for down-regulating certain alleles, and therefore phenotypes, in humans. The thalassemia genes shaped by the selection pressure exerted by Plasmodium species might be a model for this theory, but this has not been shown in vivo.
Recent comparative genomic studies have shown that immune response genes (protein coding and non-coding regulatory genes) have less evolutionary constraint, and are rather more frequently targeted by positive selection from pathogens that coevolve with the human subject. Of all the various types of pathogens known to cause disease in humans, helminths warrant special attention, because of their ability to modify the prevalence or severity of certain immune-related responses in human and mouse models. In fact recent research has shown that parasitic worms have served as a stronger selective pressure on select human genes encoding interleukins and interleukin receptors when compared to viral and bacterial pathogens. Helminths are thought to have been as old as the adaptive immune system, suggesting that they may have co-evolved, also implying that our immune system has been strongly focused on fighting off helminthic infections, insofar as to potentially interact with them early in infancy. The host-pathogen interaction is a very important relationship that serves to shape the immune system development early on in life.
Biological basis
The primary proposed mechanism of the hygiene hypothesis is an imbalance between the TH1 and TH2 subtypes of T helper cells. Insufficient activation of the TH1 arm, which stimulates the cell-mediated defenses of the immune system, would lead to an overactive TH2 arm, which stimulates antibody-mediated immunity and in turn leads to allergic disease.
However, this mechanism cannot account for the rise in incidence (similar to the rise of allergic diseases) of several TH1-mediated autoimmune diseases, including inflammatory bowel disease, multiple sclerosis and type 1 diabetes (Bach, Figure 1). The north–south gradient seen in the prevalence of multiple sclerosis has, however, been found to be inversely related to the global distribution of parasitic infection (Bach, Figure 2). Additionally, research has shown that MS patients infected with parasites displayed TH2-type immune responses, as opposed to the proinflammatory TH1 immune phenotype seen in non-infected multiple sclerosis patients (Fleming). Parasite infection has also been shown to improve inflammatory bowel disease and may act in a similar fashion as it does in multiple sclerosis (Lee).
Allergic conditions are caused by inappropriate immunological responses to harmless antigens driven by a TH2-mediated immune response, TH2 cells produce interleukin 4, interleukin 5, interleukin 6, interleukin 13 and predominantly stimulate immunoglobulin E production. Many bacteria and viruses elicit a TH1-mediated immune response, which down-regulates TH2 responses. TH1 immune responses are characterized by the secretion of pro-inflammatory cytokines such as interleukin 2, IFNγ, and TNFα. Factors that favor a predominantly TH1 phenotype include: older siblings, large family size, early day care attendance, infection (TB, measles, or hepatitis), rural living, or contact with animals. A TH2-dominated phenotype is associated with high antibiotic use, western lifestyle, urban environment, diet, and sensitivity to dust mites and cockroaches. TH1 and TH2 responses are reciprocally inhibitory, so when one is active, the other is suppressed.
An alternative explanation is that the developing immune system must receive stimuli (from infectious agents, symbiotic bacteria, or parasites) to adequately develop regulatory T cells. Without such stimuli it becomes more susceptible to autoimmune diseases and allergic diseases, because of insufficiently repressed TH1 and TH2 responses, respectively. For example, all chronic inflammatory disorders show evidence of failed immunoregulation. Secondly, helminths, non-pathogenic ambient pseudocommensal bacteria, and certain gut commensals and probiotics drive immunoregulation; they block or treat models of all chronic inflammatory conditions.
Evidence
There is a significant amount of evidence supporting the idea that microbial exposure is linked to allergies or other conditions, although scientific disagreement still exists. Since hygiene is difficult to define or measure directly, surrogate markers are used such as socioeconomic status, income, and diet.
Studies have shown that various immunological and autoimmune diseases are much less common in the developing world than the industrialized world and that immigrants to the industrialized world from the developing world increasingly develop immunological disorders in relation to the length of time since arrival in the industrialized world. This is true for asthma and other chronic inflammatory disorders. The increase in allergy rates is primarily attributed to diet and reduced microbiome diversity, although the mechanistic reasons are unclear.
The use of antibiotics in the first year of life has been linked to asthma and other allergic diseases, and increased asthma rates are also associated with birth by Caesarean section. However, at least one study suggests that personal hygienic practices may be unrelated to the incidence of asthma. Antibiotic usage reduces the diversity of gut microbiota. Although several studies have shown associations between antibiotic use and later development of asthma or allergy, other studies suggest that the effect is due to more frequent antibiotic use in asthmatic children. Trends in vaccine use may also be relevant, but epidemiological studies provide no consistent support for a detrimental effect of vaccination/immunization on atopy rates. In support of the old friends hypothesis, the intestinal microbiome was found to differ between allergic and non-allergic Estonian and Swedish children (although this finding was not replicated in a larger cohort), and the biodiversity of the intestinal flora in patients with Crohn's disease was diminished.
Limitations
The hygiene hypothesis does not apply to all populations. For example, in the case of inflammatory bowel disease, it is primarily relevant when a person's level of affluence increases, either due to changes in society or by moving to a more affluent country, but not when affluence remains constant at a high level.
The hygiene hypothesis has difficulty explaining why allergic diseases also occur in less affluent regions. Additionally, exposure to some microbial species actually increases future susceptibility to disease instead, as in the case of infection with rhinovirus (the main source of the common cold) which increases the risk of asthma.
Treatment
Current research suggests that manipulating the intestinal microbiota may be able to treat or prevent allergies and other immune-related conditions. Various approaches are under investigation. Probiotics (drinks or foods) have never been shown to reintroduce microbes to the gut. As yet, therapeutically relevant microbes have not been specifically identified. However, probiotic bacteria have been found to reduce allergic symptoms in some studies. Other approaches being researched include prebiotics, which promote the growth of gut flora, and synbiotics, the use of prebiotics and probiotics at the same time.
Should these therapies become accepted, public policy implications include providing green spaces in urban areas or even providing access to agricultural environments for children.
Helminthic therapy is the treatment of autoimmune diseases and immune disorders by means of deliberate infestation with a helminth larva or ova. Helminthic therapy emerged from the search for reasons why the incidence of immunological disorders and autoimmune diseases correlates with the level of industrial development. The exact relationship between helminths and allergies is unclear, in part because studies tend to use different definitions and outcomes, and because of the wide variety among both helminth species and the populations they infect. The infections induce a type 2 immune response, which likely evolved in mammals as a result of such infections; chronic helminth infection has been linked with a reduced sensitivity in peripheral T cells, and several studies have found deworming to lead to an increase in allergic sensitivity. However, in some cases helminths and other parasites are a cause of developing allergies instead. In addition, such infections are not themselves a treatment as they are a major disease burden and in fact they are one of the most important neglected diseases. The development of drugs that mimic the effects without causing disease is in progress.
Public health
The reduction of public confidence in hygiene has significant possible consequences for public health. Hygiene is essential for protecting vulnerable populations such as the elderly from infections, preventing the spread of antibiotic resistance, and for combating emerging infectious diseases such as SARS and Ebola.
The misunderstanding of the term "hygiene hypothesis" has resulted in unwarranted opposition to vaccination as well as other important public health measures. It has been suggested that public awareness of the initial form of the hygiene hypothesis has led to an increased disregard for hygiene in the home. The effective communication of science to the public has been hindered by the presentation of the hygiene hypothesis and other health-related information in the media.
Cleanliness
No evidence supports the idea that reducing modern practices of cleanliness and hygiene would have any impact on rates of chronic inflammatory and allergic disorders, but a significant amount of evidence indicates that reducing hygiene would increase the risks of infectious diseases. The phrase "targeted hygiene" has been used in order to recognize the importance of hygiene in avoiding pathogens.
If home and personal cleanliness contributes to reduced exposure to vital microbes, its role is likely to be small. The idea that homes can be made “sterile” through excessive cleanliness is implausible, and the evidence shows that after cleaning, microbes are quickly replaced by dust and air from outdoors, by shedding from the body and other living things, as well as from food. The key point may be that the microbial content of urban housing has altered, not because of home and personal hygiene habits, but because homes are themselves part of urban environments. Diet and lifestyle changes also affect the gut, skin and respiratory microbiota.
At the same time that concerns about allergies and other chronic inflammatory diseases have been increasing, so also have concerns about infectious disease. Infectious diseases continue to exert a heavy health toll. Preventing pandemics and reducing antibiotic resistance are global priorities, and hygiene is a cornerstone of containing these threats.
Infection risk management
The International Scientific Forum on Home Hygiene has developed a risk management approach to reducing home infection risks. This approach uses microbiological and epidemiological evidence to identify the key routes of infection transmission in the home. These data indicate that the critical routes involve the hands, hand and food contact surfaces and cleaning utensils. Clothing and household linens involve somewhat lower risks. Surfaces that contact the body, such as baths and hand basins, can act as infection vehicles, as can surfaces associated with toilets. Airborne transmission can be important for some pathogens. A key aspect of this approach is that it maximises protection against pathogens and infection, but is more relaxed about visible cleanliness in order to sustain normal exposure to other human, animal and environmental microbes.
See also
Antibacterial soap
Antifragility
Diseases of affluence
Germ theory of disease
Helminthic therapy
Hookworm
Human microbiome
Microbiomes of the built environment
Vaginal seeding
References
Further reading
Allergology
Epidemiology
Biological hypotheses
Immunology theories | Hygiene hypothesis | [
"Biology",
"Environmental_science"
] | 4,222 | [
"Biological hypotheses",
"Epidemiology",
"Environmental social science"
] |
407,847 | https://en.wikipedia.org/wiki/Patina | Patina ( or ) is a thin layer that variously forms on the surface of copper, brass, bronze, and similar metals and metal alloys (tarnish produced by oxidation or other chemical processes), or certain stones and wooden furniture (sheen produced by age, wear, and polishing), or any similar acquired change of a surface through age and exposure.
Additionally, the term is used to describe the aging of high-quality leather. The patinas on leather goods are unique to the type of leather, frequency of use, and exposure.
Patinas can provide a protective covering to materials that would otherwise be damaged by corrosion or weathering. They may also be aesthetically appealing.
Usage
On metal, patina is a coating of various chemical compounds such as oxides, carbonates, sulfides, or sulfates formed on the surface during exposure to atmospheric elements (oxygen, rain, acid rain, carbon dioxide, sulfur-bearing compounds). Patina also refers to accumulated changes in surface texture and color that result from normal use of an object such as a coin or a piece of furniture over time.
Archaeologists also use the term patina to refer to a corticated layer on flint tools and ancient stone monuments that develops over time due to a range of complex factors. This has led stone tool analysts in recent times to generally prefer the term cortification as a better term to describe the process than patination.
In geology and geomorphology, the term patina is used to refer to discolored film or thin outer layer produced either on or within the surface of a rock or other material by either the development of a weathering rind within the surface of a rock, the formation of desert varnish on the surface of a rock, or combination of both. It also refers to development as the result of weathering of a case-hardened layer, called cortex by geologists, within the surface of either a flint or chert nodule.
Etymology
The word patina comes from the Italian patina (shallow layer of deposit on a surface), derived from the Latin patĭna (pan, shallow dish). Figuratively, patina can refer to any fading, darkening, or other signs of age, which are felt to be natural or unavoidable (or both).
The chemical process by which a patina forms or is deliberately induced is called patination, and a work of art coated by a patina is said to be patinated.
Acquired patina
The green patina that forms naturally on copper and bronze, sometimes called verdigris, usually consists of varying mixtures of copper chlorides, sulfides, sulfates, and carbonates, depending upon environmental conditions such as sulfur-containing acid rain. In clean air rural environments, the patina is created by the slow chemical reaction of copper with carbon dioxide and water, producing a basic copper carbonate. In industrial and urban air environments containing sulfurous acid rain from coal-fired power plants or industrial processes, the final patina is primarily composed of sulphide or sulphate compounds.
A patina layer takes many years to develop under natural weathering. Buildings in damp coastal or marine environments will develop patina layers faster than ones in dry inland areas.
Façade cladding (copper cladding; copper wall cladding) with alloys of copper, like brass or bronze, will weather differently from "pure" copper cladding. Even a lasting gold colour is possible with copper-alloy cladding, for example Bristol Beacon in Bristol, or the Novotel at Paddington Central, London.
Antique and well-used firearms will often develop a layer of rust on the action, barrel, or other steel parts after the original finish has worn. On this subject, gunsmith Mark Novak says: "... This is what everybody calls patina, I call it a nice thick coat of rust..." The removal of such rust is often necessary during firearm conservation to prevent further decay of the firearm.
Applied patina
Artists and metalworkers often deliberately add patinas as a part of the original design and decoration of art and furniture, or to simulate antiquity in newly made objects. The process is often called distressing.
A wide range of chemicals, both household and commercial, can give a variety of patinas. They are often used by artists as surface embellishments either for color, texture, or both. Patination composition varies with the reacted elements and these will determine the color of the patina. For copper alloys, such as bronze, exposure to chlorides leads to green, while sulfur compounds (such as "liver of sulfur") tend to brown. The basic palette for patinas on copper alloys includes chemicals like ammonium sulfide (blue-black), liver of sulfur (brown-black), cupric nitrate (blue-green), and ferric nitrate (yellow-brown). For artworks, patination is often deliberately accelerated by applying chemicals with heat. Colors range from matte sandstone yellow to deep blues, greens, whites, reds, and various blacks. Some patina colors are achieved by the mixing of colors from the reaction with the metal surface with pigments added to the chemicals. Sometimes the surface is enhanced by waxing, oiling, or other types of lacquers or clear-coats. More simply, the French sculptor Auguste Rodin used to instruct assistants at his studio to urinate over bronzes stored in the outside yard. A patina can be produced on copper by the application of vinegar (acetic acid). This patina is water-soluble and will not last on the outside of a building like a "true" patina. It is usually used as pigment.
Patina is also found on slip rings and commutators. This type of patina is formed by corrosion, whatever elements the air might hold, residue from the wear of the carbon brushes, and moisture; thus, the patina needs special conditions to work as intended.
Patinas can also be found in woks or other metal baking dishes. The process of applying patinas to cookware is known as seasoning. The patina on a wok is a dark coating of oils that have been polymerized onto it to prevent food from sticking. Scrubbing or using soap on a wok or other dishware could damage the patina and possibly allow rust.
Knife collectors that own carbon steel blades sometimes force a patina onto the blade to help protect it and give it a more personalized look. This can be done using various chemicals and substances such as muriatic acid, apple cider vinegar, or mustard. It can also be done by sticking the blade into any acidic vegetable or fruit such as an orange or an apple.
Repatination
In the case of antiques, a range of views are held on the value of patination and its replacement if damaged, known as repatination.
Preserving a piece's look and character is important and removal or reduction may dramatically reduce its value. If patination has flaked off, repatination may be recommended. Appraiser Reyne Haines notes that a repatinated metal piece will be worth more than one with major imperfections in the patina, but less than a piece still with its original finish.
See also
Craquelure
Crazing
Wabi-sabi
References
Further reading
Angier, R.H. : Firearm Blueing and Browning, Onslow County 1936.
Fishlock, David : Metal Colouring, Teddington 1962.
LaNiece, Susan; Craddock, Paul : Metal Plating and Patination: Cultural, Technical and Historical Developments, Boston 1993.
Pergoli Campanelli, A. : The value of patina on the antiques market – Affinities and relationships between conservation theories and buyers' taste: News in Conservation, (31), 2012.
Sugimori, E. : Japanese patinas, Brunswick 2004.
External links
W. A. Franke, M. Mircea Plutarch report on the blue patina of bronze statues at Delphi: A scientific explanation
Patina on Bronze Sculpture From the Historical-Artistic Point of View
Antiques
Visual arts materials
Artistic techniques
Decorative arts
Furniture
Metallurgy
Metalworking
Sculpture terms
Weathering | Patina | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,679 | [
"Metallurgy",
"Materials science",
"nan"
] |
408,026 | https://en.wikipedia.org/wiki/Relativistic%20Doppler%20effect | The relativistic Doppler effect is the change in frequency, wavelength and amplitude of light, caused by the relative motion of the source and the observer (as in the classical Doppler effect, first proposed by Christian Doppler in 1842), when taking into account effects described by the special theory of relativity.
The relativistic Doppler effect is different from the non-relativistic Doppler effect as the equations include the time dilation effect of special relativity and do not involve the medium of propagation as a reference point. They describe the total difference in observed frequencies and possess the required Lorentz symmetry.
Astronomers know of three sources of redshift/blueshift: Doppler shifts; gravitational redshifts (due to light exiting a gravitational field); and cosmological expansion (where space itself stretches). This article concerns itself only with Doppler shifts.
Summary of major results
In the following table, it is assumed that $v > 0$ when the receiver and the source are moving away from each other, $v$ being the relative velocity, $c$ the speed of light, and $\beta = v/c$.
Derivation
Relativistic longitudinal Doppler effect
Relativistic Doppler shift for the longitudinal case, with source and receiver moving directly towards or away from each other, is often derived as if it were the classical phenomenon, but modified by the addition of a time dilation term. This is the approach employed in first-year physics or mechanics textbooks such as those by Feynman or Morin.
Following this approach towards deriving the relativistic longitudinal Doppler effect, assume the receiver and the source are moving away from each other with a relative speed $v$ as measured by an observer on the receiver or the source (The sign convention adopted here is that $v$ is negative if the receiver and the source are moving towards each other).
Consider the problem in the reference frame of the source.
Suppose one wavefront arrives at the receiver. The next wavefront is then at a distance $\lambda = c/f_s$ away from the receiver (where $\lambda$ is the wavelength, $f_s$ is the frequency of the waves that the source emits, and $c$ is the speed of light).
The wavefront moves with speed $c$, but at the same time the receiver moves away with speed $v$ during a time $t_r$, which is the period of light waves impinging on the receiver, as observed in the frame of the source. So, $c\,t_r = \lambda + v\,t_r$, which gives $t_r = \frac{\lambda}{c - v} = \frac{1}{(1-\beta)\,f_s}$, where $\beta = v/c$ is the speed of the receiver in terms of the speed of light. The corresponding frequency at which wavefronts impinge on the receiver in the source's frame is $1/t_r = (1-\beta)\,f_s$.
Thus far, the equations have been identical to those of the classical Doppler effect with a stationary source and a moving receiver.
However, due to relativistic effects, clocks on the receiver are time dilated relative to clocks at the source: the period between wavefront arrivals measured by the receiver is $t_r/\gamma$, where $\gamma = 1/\sqrt{1-\beta^2}$ is the Lorentz factor. In order to know which time is dilated, we recall that $t_r$ is the time in the frame in which the source is at rest. The receiver will measure the received frequency to be
$$f_r = \frac{\gamma}{t_r} = \gamma\,(1-\beta)\,f_s = \sqrt{\frac{1-\beta}{1+\beta}}\;f_s.$$
The ratio
$$\frac{f_s}{f_r} = \sqrt{\frac{1+\beta}{1-\beta}}$$
is called the Doppler factor of the source relative to the receiver. (This terminology is particularly prevalent in the subject of astrophysics: see relativistic beaming.)
The corresponding wavelengths are related by
$$\frac{\lambda_r}{\lambda_s} = \frac{f_s}{f_r} = \sqrt{\frac{1+\beta}{1-\beta}}.$$
Identical expressions for relativistic Doppler shift are obtained when performing the analysis in the reference frame of the receiver with a moving source. This matches up with the expectations of the principle of relativity, which dictates that the result can not depend on which object is considered to be the one at rest. In contrast, the classic nonrelativistic Doppler effect is dependent on whether it is the source or the receiver that is stationary with respect to the medium.
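The longitudinal result above lends itself to a short numerical illustration. The sketch below is a minimal Python example (the function names and the sample numbers are chosen here purely for illustration), using the sign convention that β = v/c is positive when source and receiver recede from each other.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def longitudinal_doppler_factor(beta: float) -> float:
    """Doppler factor f_s / f_r = sqrt((1 + beta) / (1 - beta));
    beta = v/c > 0 means source and receiver recede from each other."""
    return math.sqrt((1.0 + beta) / (1.0 - beta))

def received_frequency(f_source: float, beta: float) -> float:
    """Frequency measured by the receiver: f_r = sqrt((1 - beta)/(1 + beta)) * f_s."""
    return f_source / longitudinal_doppler_factor(beta)

# Example: 570 nm light from a source receding at half the speed of light
f_s = C / 570e-9
f_r = received_frequency(f_s, 0.5)
print(f"received wavelength ≈ {C / f_r * 1e9:.0f} nm")  # ≈ 987 nm, i.e. redshifted
```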
Transverse Doppler effect
Suppose that a source and a receiver are both approaching each other in uniform inertial motion along paths that do not collide. The transverse Doppler effect (TDE) may refer to (a) the nominal blueshift predicted by special relativity that occurs when the emitter and receiver are at their points of closest approach; or (b) the nominal redshift predicted by special relativity when the receiver sees the emitter as being at its closest approach. The transverse Doppler effect is one of the main novel predictions of the special theory of relativity.
Whether a scientific report describes TDE as being a redshift or blueshift depends on the particulars of the experimental arrangement being related. For example, Einstein's original description of the TDE in 1907 described an experimenter looking at the center (nearest point) of a beam of "canal rays" (a beam of positive ions that is created by certain types of gas-discharge tubes). According to special relativity, the moving ions' emitted frequency would be reduced by the Lorentz factor, so that the received frequency would be reduced (redshifted) by the same factor.
On the other hand, Kündig (1963) described an experiment where a Mössbauer absorber was spun in a rapid circular path around a central Mössbauer emitter. As explained below, this experimental arrangement resulted in Kündig's measurement of a blueshift.
Source and receiver are at their points of closest approach
In this scenario, the point of closest approach is frame-independent and represents the moment where there is no change in distance versus time. Figure 2 demonstrates that the ease of analyzing this scenario depends on the frame in which it is analyzed.
Fig. 2a. If we analyze the scenario in the frame of the receiver, we find that the analysis is more complicated than it should be. The apparent position of a celestial object is displaced from its true position (or geometric position) because of the object's motion during the time it takes its light to reach an observer. The source would be time-dilated relative to the receiver, but the redshift implied by this time dilation would be offset by a blueshift due to the longitudinal component of the relative motion between the receiver and the apparent position of the source.
Fig. 2b. It is much easier if, instead, we analyze the scenario from the frame of the source. An observer situated at the source knows, from the problem statement, that the receiver is at its closest point to him. That means that the receiver has no longitudinal component of motion to complicate the analysis. (i.e. dr/dt = 0 where r is the distance between receiver and source) Since the receiver's clocks are time-dilated relative to the source, the light that the receiver receives is blue-shifted by a factor of gamma. In other words, $f_r = \gamma\,f_s$.
Receiver sees the source as being at its closest point
This scenario is equivalent to the receiver looking at a direct right angle to the path of the source. The analysis of this scenario is best conducted from the frame of the receiver. Figure 3 shows the receiver being illuminated by light from when the source was closest to the receiver, even though the source has moved on. Because the source's clock is time dilated as measured in the frame of the receiver, and because there is no longitudinal component of its motion, the light from the source, emitted from this closest point, is redshifted with frequency $f_r = f_s/\gamma$.
In the literature, most reports of transverse Doppler shift analyze the effect in terms of the receiver pointed at direct right angles to the path of the source, thus seeing the source as being at its closest point and observing a redshift.
Point of null frequency shift
Given that, in the case where the inertially moving source and receiver are geometrically at their nearest approach to each other, the receiver observes a blueshift, whereas in the case where the receiver sees the source as being at its closest point, the receiver observes a redshift, there obviously must exist a point where blueshift changes to a redshift. In Fig. 2, the signal travels perpendicularly to the receiver path and is blueshifted. In Fig. 3, the signal travels perpendicularly to the source path and is redshifted.
As seen in Fig. 4, null frequency shift occurs for a pulse that travels the shortest distance from source to receiver. When viewed in the frame where source and receiver have the same speed, this pulse is emitted perpendicularly to the source's path and is received perpendicularly to the receiver's path. The pulse is emitted slightly before the point of closest approach, and it is received slightly after.
One object in circular motion around the other
Fig. 5 illustrates two variants of this scenario. Both variants can be analyzed using simple time dilation arguments. Figure 5a is essentially equivalent to the scenario described in Figure 2b, and the receiver observes light from the source as being blueshifted by a factor of $\gamma$. Figure 5b is essentially equivalent to the scenario described in Figure 3, and the light is redshifted.
The only seeming complication is that the orbiting objects are in accelerated motion. An accelerated particle does not have an inertial frame in which it is always at rest. However, an inertial frame can always be found which is momentarily comoving with the particle. This frame, the momentarily comoving reference frame (MCRF), enables application of special relativity to the analysis of accelerated particles. If an inertial observer looks at an accelerating clock, only the clock's instantaneous speed is important when computing time dilation.
The converse, however, is not true. The analysis of scenarios where both objects are in accelerated motion requires a somewhat more sophisticated analysis. Not understanding this point has led to confusion and misunderstanding.
Source and receiver both in circular motion around a common center
Suppose source and receiver are located on opposite ends of a spinning rotor, as illustrated in Fig. 6. Kinematic arguments (special relativity) and arguments based on noting that there is no difference in potential between source and receiver in the pseudogravitational field of the rotor (general relativity) both lead to the conclusion that there should be no Doppler shift between source and receiver.
In 1961, Champeney and Moon conducted a Mössbauer rotor experiment testing exactly this scenario, and found that the Mössbauer absorption process was unaffected by rotation. They concluded that their findings supported special relativity.
This conclusion generated some controversy. A certain persistent critic of relativity maintained that, although the experiment was consistent with general relativity, it refuted special relativity, his point being that since the emitter and absorber were in uniform relative motion, special relativity demanded that a Doppler shift be observed. The fallacy with this critic's argument was, as demonstrated in section Point of null frequency shift, that it is simply not true that a Doppler shift must always be observed between two frames in uniform relative motion. Furthermore, as demonstrated in section Source and receiver are at their points of closest approach, the difficulty of analyzing a relativistic scenario often depends on the choice of reference frame. Attempting to analyze the scenario in the frame of the receiver involves much tedious algebra. It is much easier, almost trivial, to establish the lack of Doppler shift between emitter and absorber in the laboratory frame.
As a matter of fact, however, Champeney and Moon's experiment said nothing either pro or con about special relativity. Because of the symmetry of the setup, it turns out that virtually any conceivable theory of the Doppler shift between frames in uniform inertial motion must yield a null result in this experiment.
Rather than being equidistant from the center, suppose the emitter and absorber were at differing distances from the rotor's center. For an emitter at radius $R'$ and the absorber at radius $R$ anywhere on the rotor, the ratio of the emitter frequency $f'$ to the absorber frequency $f$ is given by
$$\frac{f'}{f} = \sqrt{\frac{1 - R^2\omega^2/c^2}{1 - R'^2\omega^2/c^2}},$$
where $\omega$ is the angular velocity of the rotor. The emitter and absorber do not have to be 180° apart, but can be at any angle with respect to the center.
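Using the frequency ratio reconstructed above (emitter at radius R′, absorber at radius R, angular velocity ω), a brief Python sketch of the time-dilation-only calculation might look like the following; the function name and the sample radii and rotation rate are purely illustrative.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def rotor_frequency_ratio(r_emitter: float, r_absorber: float, omega: float) -> float:
    """Ratio f_emitter / f_absorber for an emitter and absorber mounted on the same
    rotor, computed from the time dilation of their tangential speeds r * omega."""
    gamma_emitter = 1.0 / math.sqrt(1.0 - (r_emitter * omega / C) ** 2)
    gamma_absorber = 1.0 / math.sqrt(1.0 - (r_absorber * omega / C) ** 2)
    return gamma_emitter / gamma_absorber

omega = 2 * math.pi * 500  # 500 revolutions per second
print(rotor_frequency_ratio(0.10, 0.10, omega))  # equal radii: exactly 1, no shift
print(rotor_frequency_ratio(0.05, 0.10, omega))  # emitter nearer the centre: slightly below 1
```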
Motion in an arbitrary direction
The analysis used in section Relativistic longitudinal Doppler effect can be extended in a straightforward fashion to calculate the Doppler shift for the case where the inertial motions of the source and receiver are at any specified angle.
Fig. 7 presents the scenario from the frame of the receiver, with the source moving at speed $v$ at an angle $\theta_r$ measured in the frame of the receiver. The radial component of the source's motion along the line of sight is equal to $v\cos\theta_r$.
The equation below can be interpreted as the classical Doppler shift for a stationary receiver and a moving source, modified by the Lorentz factor $\gamma$:
$$f_r = \frac{f_s}{\gamma\left(1 + \beta\cos\theta_r\right)}$$
In the case when $\theta_r = 90^\circ$ ($\cos\theta_r = 0$), one obtains the transverse Doppler effect:
$$f_r = \frac{f_s}{\gamma}$$
In his 1905 paper on special relativity, Einstein obtained a somewhat different looking equation for the Doppler shift. After changing the variable names in Einstein's equation to be consistent with those used here, his equation reads
$$f_r = \gamma\left(1 - \beta\cos\theta_s\right) f_s$$
The differences stem from the fact that Einstein evaluated the angle with respect to the source rest frame rather than the receiver rest frame. $\theta_s$ is not equal to $\theta_r$ because of the effect of relativistic aberration. The relativistic aberration equation is:
$$\cos\theta_r = \frac{\cos\theta_s - \beta}{1 - \beta\cos\theta_s}$$
Substituting the relativistic aberration equation into the receiver-frame formula recovers Einstein's form, demonstrating the consistency of these alternate equations for the Doppler shift.
Setting $\theta_r = 0$ in the receiver-frame formula, or $\theta_s = 0$ in Einstein's form, yields $f_r = \sqrt{\frac{1-\beta}{1+\beta}}\, f_s$, the expression for relativistic longitudinal Doppler shift.
A four-vector approach to deriving these results may be found in Landau and Lifshitz (2005).
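As a numerical cross-check of the receiver-frame formula, Einstein's source-frame formula, and the aberration relation given above, the following Python sketch (function names and sample values are illustrative, not taken from any source) confirms that the two forms agree once the angle is transformed between frames.

```python
import math

def doppler_receiver_frame(f_s: float, beta: float, theta_r: float) -> float:
    """Received frequency with the angle theta_r measured in the receiver frame;
    beta > 0 with theta_r = 0 corresponds to pure recession."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return f_s / (gamma * (1.0 + beta * math.cos(theta_r)))

def doppler_source_frame(f_s: float, beta: float, theta_s: float) -> float:
    """Einstein's 1905 form, with the angle theta_s measured in the source frame."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return gamma * (1.0 - beta * math.cos(theta_s)) * f_s

def aberrated_angle(beta: float, theta_s: float) -> float:
    """Relativistic aberration: receiver-frame angle corresponding to theta_s."""
    cos_r = (math.cos(theta_s) - beta) / (1.0 - beta * math.cos(theta_s))
    return math.acos(cos_r)

beta, theta_s, f_s = 0.6, math.radians(50), 1.0
theta_r = aberrated_angle(beta, theta_s)
print(doppler_source_frame(f_s, beta, theta_s))    # the two printed values coincide,
print(doppler_receiver_frame(f_s, beta, theta_r))  # illustrating the consistency noted above
```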
In electromagnetic waves both the electric and the magnetic field amplitudes $E$ and $B$ transform in the same manner as the frequency:
$$\frac{E_r}{E_s} = \frac{B_r}{B_s} = \frac{f_r}{f_s}.$$
Visualization
Fig. 8 helps us understand, in a rough qualitative sense, how the relativistic Doppler effect and relativistic aberration differ from the non-relativistic Doppler effect and non-relativistic aberration of light. Assume that the observer is uniformly surrounded in all directions by yellow stars emitting monochromatic light of 570 nm. The arrows in each diagram represent the observer's velocity vector relative to its surroundings, with a magnitude of 0.89 c.
In the relativistic case, the light ahead of the observer is blueshifted to a wavelength of 137 nm in the far ultraviolet, while light behind the observer is redshifted to 2400 nm in the short wavelength infrared. Because of the relativistic aberration of light, objects formerly at right angles to the observer appear shifted forwards by 63°.
In the non-relativistic case, the light ahead of the observer is blueshifted to a wavelength of 300 nm in the medium ultraviolet, while light behind the observer is redshifted to 5200 nm in the intermediate infrared. Because of the aberration of light, objects formerly at right angles to the observer appear shifted forwards by 42°.
In both cases, the monochromatic stars ahead of and behind the observer are Doppler-shifted towards invisible wavelengths. If, however, the observer had eyes that could see into the ultraviolet and infrared, he would see the stars ahead of him as brighter and more closely clustered together than the stars behind, but the stars would be far brighter and far more concentrated in the relativistic case.
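The wavelengths and aberration angles quoted above can be reproduced with a few lines of Python. The classical values here assume the simple moving-observer formulas λ/(1 ± β) and tan θ = 1/β, which may differ slightly in rounding from the figures quoted in the text.

```python
import math

beta = 0.89   # observer speed relative to the star field, in units of c
lam = 570.0   # emitted wavelength in nm

doppler = math.sqrt((1 + beta) / (1 - beta))  # relativistic Doppler factor
print(f"relativistic: ahead {lam / doppler:.0f} nm, behind {lam * doppler:.0f} nm")        # about 138 nm and 2363 nm
print(f"classical:    ahead {lam / (1 + beta):.0f} nm, behind {lam / (1 - beta):.0f} nm")  # about 302 nm and 5182 nm

# Forward shift of an object that is geometrically at a right angle to the motion
rel_shift = 90 - math.degrees(math.acos(beta))        # ≈ 63 degrees (relativistic aberration)
cls_shift = 90 - math.degrees(math.atan2(1.0, beta))  # ≈ 42 degrees (classical aberration)
print(f"aberration shift: relativistic {rel_shift:.0f}°, classical {cls_shift:.0f}°")
```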
Real stars are not monochromatic, but emit a range of wavelengths approximating a black body distribution. It is not necessarily true that stars ahead of the observer would show a bluer color. This is because the whole spectral energy distribution is shifted. At the same time that visible light is blueshifted into invisible ultraviolet wavelengths, infrared light is blueshifted into the visible range. Precisely what changes in the colors one sees depends on the physiology of the human eye and on the spectral characteristics of the light sources being observed.
Doppler effect on intensity
The Doppler effect (with arbitrary direction) also modifies the perceived source intensity: this can be expressed concisely by the fact that source strength divided by the cube of the frequency is a Lorentz invariant. This implies that the total radiant intensity (summing over all frequencies) is multiplied by the fourth power of the Doppler factor for frequency.
As a consequence, since Planck's law describes the black-body radiation as having a spectral intensity in frequency proportional to $\nu^3/(e^{h\nu/kT} - 1)$ (where $T$ is the source temperature and $\nu$ the frequency), we can draw the conclusion that a black body spectrum seen through a Doppler shift (with arbitrary direction) is still a black body spectrum with a temperature multiplied by the same Doppler factor as frequency.
This result provides one of the pieces of evidence that serves to distinguish the Big Bang theory from alternative theories proposed to explain the cosmological redshift.
Experimental verification
Since the transverse Doppler effect is one of the main novel predictions of the special theory of relativity, the detection and precise quantification of this effect has been an important goal of experiments attempting to validate special relativity.
Ives and Stilwell-type measurements
Einstein (1907) had initially suggested that the TDE might be measured by observing a beam of "canal rays" at right angles to the beam. Attempts to measure TDE following this scheme proved to be impractical, since the maximum speed of a particle beam available at the time was only a few thousandths of the speed of light.
Fig. 9 shows the results of attempting to measure the 4861 Angstrom line emitted by a beam of canal rays (a mixture of H₁⁺, H₂⁺, and H₃⁺ ions) as they recombine with electrons stripped from the dilute hydrogen gas used to fill the canal ray tube. Here, the predicted result of the TDE is a 4861.06 Angstrom line. On the left, longitudinal Doppler shift results in broadening the emission line to such an extent that the TDE cannot be observed. The middle figures illustrate that even if one narrows one's view to the exact center of the beam, very small deviations of the beam from an exact right angle introduce shifts comparable to the predicted effect.
Rather than attempt direct measurement of the TDE, Ives and Stilwell (1938) used a concave mirror that allowed them to simultaneously observe a nearly longitudinal direct beam (blue) and its reflected image (red). Spectroscopically, three lines would be observed: An undisplaced emission line, and blueshifted and redshifted lines. The average of the redshifted and blueshifted lines would be compared with the wavelength of the undisplaced emission line. The difference that Ives and Stilwell measured corresponded, within experimental limits, to the effect predicted by special relativity.
Various of the subsequent repetitions of the Ives and Stilwell experiment have adopted other strategies for measuring the mean of blueshifted and redshifted particle beam emissions. In some recent repetitions of the experiment, modern accelerator technology has been used to arrange for the observation of two counter-rotating particle beams. In other repetitions, the energies of gamma rays emitted by a rapidly moving particle beam have been measured at opposite angles relative to the direction of the particle beam. Since these experiments do not actually measure the wavelength of the particle beam at right angles to the beam, some authors have preferred to refer to the effect they are measuring as the "quadratic Doppler shift" rather than TDE.
Direct measurement of transverse Doppler effect
The advent of particle accelerator technology has made possible the production of particle beams of considerably higher energy than was available to Ives and Stilwell. This has enabled the design of tests of the transverse Doppler effect directly along the lines of how Einstein originally envisioned them, i.e. by directly viewing a particle beam at a 90° angle. For example, Hasselkamp et al. (1979) observed the Hα line emitted by hydrogen atoms moving at speeds ranging from 2.53×10⁸ cm/s to 9.28×10⁸ cm/s, finding the coefficient of the second order term in the relativistic approximation to be 0.52±0.03, in excellent agreement with the theoretical value of 1/2.
Other direct tests of the TDE on rotating platforms were made possible by the discovery of the Mössbauer effect, which enables the production of exceedingly narrow resonance lines for nuclear gamma ray emission and absorption. Mössbauer effect experiments have proven themselves easily capable of detecting TDE using emitter-absorber relative velocities on the order of 2×10⁴ cm/s. These experiments include ones performed by Hay et al. (1960), Champeney et al. (1965), and Kündig (1963).
Time dilation measurements
The transverse Doppler effect and the kinematic time dilation of special relativity are closely related. All validations of TDE represent validations of kinematic time dilation, and most validations of kinematic time dilation have also represented validations of TDE. An online resource, "What is the experimental basis of Special Relativity?" has documented, with brief commentary, many of the tests that, over the years, have been used to validate various aspects of special relativity. Kaivola et al. (1985) and McGowan et al. (1993) are examples of experiments classified in this resource as time dilation experiments. These two also represent tests of TDE. These experiments compared the frequency of two lasers, one locked to the frequency of a neon atom transition in a fast beam, the other locked to the same transition in thermal neon. The 1993 version of the experiment verified time dilation, and hence TDE, to an accuracy of 2.3×10⁻⁶.
Relativistic Doppler effect for sound and light
First-year physics textbooks almost invariably analyze Doppler shift for sound in terms of Newtonian kinematics, while analyzing Doppler shift for light and electromagnetic phenomena in terms of relativistic kinematics. This gives the false impression that acoustic phenomena require a different analysis than light and radio waves.
The traditional analysis of the Doppler effect for sound represents a low speed approximation to the exact, relativistic analysis. The fully relativistic analysis for sound is, in fact, equally applicable to both sound and electromagnetic phenomena.
Consider the spacetime diagram in Fig. 10. Worldlines for a tuning fork (the source) and a receiver are both illustrated on this diagram. The tuning fork and receiver start at O, at which point the tuning fork starts to vibrate, emitting waves and moving along the negative x-axis while the receiver starts to move along the positive x-axis. The tuning fork continues until it reaches A, at which point it stops emitting waves: a wavepacket has therefore been generated, and all the waves in the wavepacket are received by the receiver with the last wave reaching it at B. The proper time for the duration of the packet in the tuning fork's frame of reference is the length of OA while the proper time for the duration of the wavepacket in the receiver's frame of reference is the length of OB. If $N$ waves were emitted, then $f_s = N/OA$, while $f_r = N/OB$; the inverse slope of AB represents the speed of signal propagation (i.e. the speed of sound) to event B. We can therefore write:
$c_s$ (speed of sound)
$v_s$, $v_r$ (speeds of source and receiver)
$v_s$ and $v_r$ are assumed to be less than $c_s$, since otherwise their passage through the medium will set up shock waves, invalidating the calculation. Some routine algebra gives the ratio of frequencies:
$$\frac{f_r}{f_s} = \frac{OA}{OB} = \frac{c_s - v_r}{c_s + v_s}\,\sqrt{\frac{1 - v_s^2/c^2}{1 - v_r^2/c^2}}$$
If $v_s$ and $v_r$ are small compared with $c$, the above equation reduces to the classical Doppler formula for sound.
If the speed of signal propagation $c_s$ approaches $c$, it can be shown that the absolute speeds $v_s$ and $v_r$ of the source and receiver merge into a single relative speed independent of any reference to a fixed medium. Indeed, we obtain $f_r = \sqrt{\frac{1-\beta}{1+\beta}}\,f_s$, the formula for relativistic longitudinal Doppler shift.
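The limit described in the last paragraph is easy to check numerically. The Python sketch below uses illustrative function names; the collinear formula and sign convention follow the reconstruction above, with both speeds taken as positive recession speeds measured in the medium frame.

```python
import math

# Work in units where the speed of light c = 1.
def doppler_in_medium(f_s: float, v_s: float, v_r: float, c_s: float) -> float:
    """Received frequency for a source receding along -x at speed v_s and a receiver
    receding along +x at speed v_r, with the signal propagating at c_s in the medium."""
    classical = (c_s - v_r) / (c_s + v_s)
    time_dilation = math.sqrt(1.0 - v_s ** 2) / math.sqrt(1.0 - v_r ** 2)
    return f_s * classical * time_dilation

def relativistic_longitudinal(f_s: float, beta: float) -> float:
    """Relativistic longitudinal Doppler shift for relative recession speed beta."""
    return f_s * math.sqrt((1.0 - beta) / (1.0 + beta))

v_s, v_r = 0.20, 0.50
beta_rel = (v_s + v_r) / (1.0 + v_s * v_r)   # relativistic addition of the two speeds
for c_s in (0.9, 0.99, 0.999, 1.0):
    print(c_s, doppler_in_medium(1.0, v_s, v_r, c_s))
print("relativistic limit:", relativistic_longitudinal(1.0, beta_rel))
```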
Analysis of the spacetime diagram in Fig. 10 gave a general formula for source and receiver moving directly along their line of sight, i.e. in collinear motion.
Fig. 11 illustrates a scenario in two dimensions. The source moves with velocity $v_s$ (at the time of emission). It emits a signal which travels at velocity $c_s$ towards the receiver, which is traveling at velocity $v_r$ at the time of reception. The analysis is performed in a coordinate system in which the signal's speed is independent of direction.
The ratio between the proper frequencies for the source and receiver is
$$\frac{f_r}{f_s} = \frac{c_s - v_r\cos\theta_r}{c_s - v_s\cos\theta_s}\;\frac{\sqrt{1 - v_s^2/c^2}}{\sqrt{1 - v_r^2/c^2}},$$
where $\theta_s$ and $\theta_r$ are the angles between the respective velocities and the direction of signal propagation, at emission and at reception.
The leading ratio has the form of the classical Doppler effect, while the square root term represents the relativistic correction. If we consider the angles relative to the frame of the source, then $v_s = 0$ and the equation reduces to Einstein's 1905 formula for the Doppler effect. If we consider the angles relative to the frame of the receiver, then $v_r = 0$ and the equation reduces to the alternative form of the Doppler shift equation discussed previously.
See also
Doppler effect
Differential Doppler effect
Relativistic beaming
Redshift
Blueshift
Time dilation
Gravitational time dilation
Special relativity
Notes
Primary sources
References
Further reading
External links
Warp Special Relativity Simulator Computer program demonstrating the relativistic Doppler effect.
Doppler effect
Doppler effects | Relativistic Doppler effect | [
"Physics"
] | 5,150 | [
"Physical phenomena",
"Astrophysics",
"Special relativity",
"Theory of relativity",
"Doppler effects"
] |
408,108 | https://en.wikipedia.org/wiki/List%20of%20probability%20topics | This is a list of probability topics.
It overlaps with the (alphabetical) list of statistical topics. There are also the outline of probability and catalog of articles in probability theory. For distributions, see List of probability distributions. For journals, see list of probability journals. For contributors to the field, see list of mathematical probabilists and list of statisticians.
General aspects
Probability
Randomness, Pseudorandomness, Quasirandomness
Randomization, hardware random number generator
Random number generation
Random sequence
Uncertainty
Statistical dispersion
Observational error
Equiprobable
Equipossible
Average
Probability interpretations
Markovian
Statistical regularity
Central tendency
Bean machine
Relative frequency
Frequency probability
Maximum likelihood
Bayesian probability
Principle of indifference
Credal set
Cox's theorem
Principle of maximum entropy
Information entropy
Urn problems
Extractor
Free probability
Exotic probability
Schrödinger method
Empirical measure
Glivenko–Cantelli theorem
Zero–one law
Kolmogorov's zero–one law
Hewitt–Savage zero–one law
Law of truly large numbers
Littlewood's law
Infinite monkey theorem
Littlewood–Offord problem
Inclusion–exclusion principle
Impossible event
Information geometry
Talagrand's concentration inequality
Foundations of probability theory
Probability theory
Probability space
Sample space
Standard probability space
Random element
Random compact set
Dynkin system
Probability axioms
Normalizing constant
Event (probability theory)
Complementary event
Elementary event
Mutually exclusive
Boole's inequality
Probability density function
Cumulative distribution function
Law of total cumulance
Law of total expectation
Law of total probability
Law of total variance
Almost surely
Cox's theorem
Bayesianism
Prior probability
Posterior probability
Borel's paradox
Bertrand's paradox
Coherence (philosophical gambling strategy)
Dutch book
Algebra of random variables
Belief propagation
Transferable belief model
Dempster–Shafer theory
Possibility theory
Random variables
Discrete random variable
Probability mass function
Constant random variable
Expected value
Jensen's inequality
Variance
Standard deviation
Geometric standard deviation
Multivariate random variable
Joint probability distribution
Marginal distribution
Kirkwood approximation
Independent identically-distributed random variables
Independent and identically-distributed random variables
Statistical independence
Conditional independence
Pairwise independence
Covariance
Covariance matrix
De Finetti's theorem
Correlation
Uncorrelated
Correlation function
Canonical correlation
Convergence of random variables
Weak convergence of measures
Helly–Bray theorem
Slutsky's theorem
Skorokhod's representation theorem
Lévy's continuity theorem
Uniform integrability
Markov's inequality
Chebyshev's inequality = Chernoff bound
Chernoff's inequality
Bernstein inequalities (probability theory)
Hoeffding's inequality
Kolmogorov's inequality
Etemadi's inequality
Chung–Erdős inequality
Khintchine inequality
Paley–Zygmund inequality
Laws of large numbers
Asymptotic equipartition property
Typical set
Law of large numbers
Kolmogorov's two-series theorem
Random field
Conditional random field
Borel–Cantelli lemma
Wick product
Conditional probability
Conditioning (probability)
Conditional expectation
Conditional probability distribution
Regular conditional probability
Disintegration theorem
Bayes' theorem
de Finetti's theorem
Exchangeable random variables
Rule of succession
Conditional independence
Conditional event algebra
Goodman–Nguyen–van Fraassen algebra
Theory of probability distributions
Probability distribution
Probability distribution function
Probability density function
Probability mass function
Cumulative distribution function
Quantile
Moment (mathematics)
Moment about the mean
Standardized moment
Skewness
Kurtosis
Locality
Cumulant
Factorial moment
Expected value
Law of the unconscious statistician
Second moment method
Variance
Coefficient of variation
Variance-to-mean ratio
Covariance function
An inequality on location and scale parameters
Taylor expansions for the moments of functions of random variables
Moment problem
Hamburger moment problem
Carleman's condition
Hausdorff moment problem
Trigonometric moment problem
Stieltjes moment problem
Prior probability distribution
Total variation distance
Hellinger distance
Wasserstein metric
Lévy–Prokhorov metric
Lévy metric
Continuity correction
Heavy-tailed distribution
Truncated distribution
Infinite divisibility
Stability (probability)
Indecomposable distribution
Power law
Anderson's theorem
Probability bounds analysis
Probability box
Properties of probability distributions
Central limit theorem
Illustration of the central limit theorem
Concrete illustration of the central limit theorem
Berry–Esséen theorem
Berry–Esséen theorem
De Moivre–Laplace theorem
Lyapunov's central limit theorem
Misconceptions about the normal distribution
Martingale central limit theorem
Infinite divisibility (probability)
Method of moments (probability theory)
Stability (probability)
Stein's lemma
Characteristic function (probability theory)
Lévy continuity theorem
Darmois–Skitovich theorem
Edgeworth series
Helly–Bray theorem
Kac–Bernstein theorem
Location parameter
Maxwell's theorem
Moment-generating function
Factorial moment generating function
Negative probability
Probability-generating function
Vysochanskiï–Petunin inequality
Mutual information
Kullback–Leibler divergence
Le Cam's theorem
Large deviations theory
Contraction principle (large deviations theory)
Varadhan's lemma
Tilted large deviation principle
Rate function
Laplace principle (large deviations theory)
Exponentially equivalent measures
Cramér's theorem (second part)
Applied probability
Empirical findings
Benford's law
Pareto principle
Zipf's law
Boy or Girl paradox
Stochastic processes
Adapted process
Basic affine jump diffusion
Bernoulli process
Bernoulli scheme
Branching process
Point process
Chapman–Kolmogorov equation
Chinese restaurant process
Coupling (probability)
Ergodic theory
Maximal ergodic theorem
Ergodic (adjective)
Galton–Watson process
Gauss–Markov process
Gaussian process
Gaussian random field
Gaussian isoperimetric inequality
Large deviations of Gaussian random functions
Girsanov's theorem
Hawkes process
Increasing process
Itô's lemma
Jump diffusion
Law of the iterated logarithm
Lévy flight
Lévy process
Loop-erased random walk
Markov chain
Examples of Markov chains
Detailed balance
Markov property
Hidden Markov model
Maximum-entropy Markov model
Markov chain mixing time
Markov partition
Markov process
Continuous-time Markov process
Piecewise-deterministic Markov process
Martingale
Doob martingale
Optional stopping theorem
Martingale representation theorem
Azuma's inequality
Wald's equation
Poisson process
Poisson random measure
Population process
Process with independent increments
Progressively measurable process
Queueing theory
Erlang unit
Random walk
Random walk Monte Carlo
Renewal theory
Skorokhod's embedding theorem
Stationary process
Stochastic calculus
Itô calculus
Malliavin calculus
Stratonovich integral
Time series analysis
Autoregressive model
Moving average model
Autoregressive moving average model
Autoregressive integrated moving average model
Anomaly time series
Voter model
Wiener process
Brownian motion
Geometric Brownian motion
Donsker's theorem
Empirical process
Wiener equation
Wiener sausage
Geometric probability
Buffon's needle
Integral geometry
Hadwiger's theorem
Wendel's theorem
Gambling
Luck
Game of chance
Odds
Gambler's fallacy
Inverse gambler's fallacy
Parrondo's paradox
Pascal's wager
Gambler's ruin
Poker probability
Poker probability (Omaha)
Poker probability (Texas hold 'em)
Pot odds
Roulette
Martingale (betting system)
The man who broke the bank at Monte Carlo
Lottery
Lottery machine
Pachinko
Coherence (philosophical gambling strategy)
Coupon collector's problem
Coincidence
Birthday paradox
Birthday problem
Index of coincidence
Bible code
Spurious relationship
Monty Hall problem
Algorithmics
Probable prime
Probabilistic algorithm = Randomised algorithm
Monte Carlo method
Las Vegas algorithm
Probabilistic Turing machine
Stochastic programming
Probabilistically checkable proof
Box–Muller transform
Metropolis algorithm
Gibbs sampling
Inverse transform sampling method
Walk-on-spheres method
Financial mathematics
Risk
Value at risk
Market risk
Risk-neutral measure
Volatility
SWOT analysis (Marketing)
Kelly criterion
Genetics
Punnett square
Hardy–Weinberg principle
Ewens's sampling formula
Population genetics
Historical
History of probability
The Doctrine of Chances
Statistics-related lists
Lists of topics | List of probability topics | [
"Physics",
"Mathematics"
] | 1,592 | [
"Wikipedia categories named after physical quantities",
"Probability",
"Probability and statistics",
"Physical quantities"
] |
408,150 | https://en.wikipedia.org/wiki/Is%20the%20glass%20half%20empty%20or%20half%20full%3F | "Is the glass half empty or half full?", and other similar expressions such as the adjectives glass-half-full or glass-half-empty, are idioms which contrast an optimistic and pessimistic outlook on a specific situation or on the world at large. "Half full" means optimistic and "half empty" means pessimistic. The origins of this idea are unclear, but it dates at least to the early 20th century. Josiah Stamp is often given credit for introducing it in a 1935 speech, but although he did help to popularize it, a variant regarding a car's gas tank occurs in print with the optimism/pessimism connotations as early as 1929, and the glass-with-water version is mentioned simply as an intellectual paradox about the quantity of water (without reference to optimism/pessimism) as early as 1908.
See also
Cooperative principle
Cognitive bias in animals
Framing effects (psychology)
Framing (social sciences)
Less-is-better effect
List of cognitive biases
Silver lining (idiom)
References
Motivation
English-language idioms
Articles titled with a question
Optimism | Is the glass half empty or half full? | [
"Biology"
] | 229 | [
"Ethology",
"Behavior",
"Motivation",
"Human behavior"
] |
408,201 | https://en.wikipedia.org/wiki/Sieve | A sieve, fine mesh strainer, or sift, is a tool used for separating wanted elements from unwanted material or for controlling the particle size distribution of a sample, using a screen such as a woven mesh or net or perforated sheet material. The word sift derives from sieve.
In cooking, a sifter is used to separate and break up clumps in dry ingredients such as flour, as well as to aerate and combine them. A strainer (see Colander), meanwhile, is a form of sieve used to separate suspended solids from a liquid by filtration.
Industrial strainer
Some industrial strainers available are simplex basket strainers, duplex basket strainers, T-strainers and Y-strainers. Simple basket strainers are used to protect valuable or sensitive equipment in systems that are meant to be shut down temporarily. Some commonly used strainers are bell mouth strainers, foot valve strainers, basket strainers. Most processing industries (mainly pharmaceutical, coatings and liquid food industries) will opt for a self-cleaning strainer instead of a basket strainer or a simplex strainer due to limitations of simple filtration systems. The self-cleaning strainers or filters are more efficient and provide an automatic filtration solution.
Sieving
Sieving is a simple technique for separating particles of different sizes. A sieve such as used for sifting flour has very small holes. Coarse particles are separated or broken up by grinding against one another and the screen openings. Depending upon the types of particles to be separated, sieves with different types of holes are used. Sieves are also used to separate stones from sand. Sieving plays an important role in food industries where sieves (often vibrating) are used to prevent the contamination of the product by foreign bodies. The design of the industrial sieve is of primary importance here.
Triage sieving refers to grouping people according to their severity of injury.
Wooden sieves
The mesh in a wooden sieve might be made from wood or wicker. Use of wood to avoid contamination is important when the sieve is used for sampling. Henry Stephens, in his Book of the Farm, advised that the withes of a wooden riddle or sieve be made from fir or willow with American elm being best. The rims would be made of fir, oak or, especially, beech.
US standard test sieve series
A sieve analysis (or gradation test) is a practice or procedure used (commonly in civil engineering or sedimentology) to assess the particle size distribution (also called gradation) of a granular material. Sieve sizes are typically used in combinations of four to eight sieves.
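A minimal sketch of how such a gradation is tabulated is given below: it computes the cumulative percent passing from the mass retained on each sieve. The aperture sizes and masses are made-up example values, not data from any standard.

```python
def percent_passing(apertures_mm, retained_g):
    """Cumulative percent passing for a sieve stack listed coarsest first.
    `retained_g` has one entry per sieve plus a final entry for the pan."""
    total = sum(retained_g)
    passing, cumulative = {}, 0.0
    for aperture, mass in zip(apertures_mm, retained_g):  # pan excluded by zip
        cumulative += mass
        passing[aperture] = 100.0 * (1.0 - cumulative / total)
    return passing

# Illustrative four-sieve stack (mm) with retained masses (g), pan last.
print(percent_passing([4.75, 2.00, 0.425, 0.075], [50.0, 120.0, 200.0, 100.0, 30.0]))
```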
Other types
Chinois, or conical sieve used as a strainer, also sometimes used like a food mill
Cocktail strainer, a bar accessory
Colander, a (typically) bowl-shaped sieve used as a strainer in cooking
Flour sifter or bolter, used in flour production and baking
Graduated sieves, used to separate varying small sizes of material, often soil, rock or minerals
Mesh strainer, or just "strainer", usually consisting of a fine metal mesh screen on a metal frame
Laundry strainer, to drain boiling water from laundry removed from a wash copper, usually with a wooden frame to facilitate manual handling with hot contents
Riddle, used for soil
Spider, used in Chinese cooking
Tamis, also known as a drum sieve
Tea strainer, specifically intended for use when making tea
Zaru, or bamboo sieve, used in Japanese cooking
Other uses
"Sieve" is a common term used in trash-talk referring to a goaltender in ice hockey who lets in too many goals
"Leaks like a sieve" is an English language idiom to describe a container that has multiple leaks, or, by allegory, an organization whose confidential information is routinely disclosed to the public.
See also
References
External links
Cookware and bakeware
Material-handling equipment
Solid-solid separation | Sieve | [
"Chemistry"
] | 816 | [
"Solid-solid separation",
"Separation processes by phases"
] |
408,801 | https://en.wikipedia.org/wiki/Helioseismology | Helioseismology is the study of the structure and dynamics of the Sun through its oscillations. These are principally caused by sound waves that are continuously driven and damped by convection near the Sun's surface. It is similar to geoseismology, or asteroseismology, which are respectively the studies of the Earth or stars through their oscillations. While the Sun's oscillations were first detected in the early 1960s, it was only in the mid-1970s that it was realized that the oscillations propagated throughout the Sun and could allow scientists to study the Sun's deep interior. The term was coined by Douglas Gough in the 90s. The modern field is separated into global helioseismology, which studies the Sun's resonant modes directly, and local helioseismology, which studies the propagation of the component waves near the Sun's surface.
Helioseismology has contributed to a number of scientific breakthroughs. The most notable was to show that the anomaly in the predicted neutrino flux from the Sun could not be caused by flaws in stellar models and must instead be a problem of particle physics. The so-called solar neutrino problem was ultimately resolved by neutrino oscillations. The experimental discovery of neutrino oscillations was recognized by the 2015 Nobel Prize for Physics. Helioseismology also allowed accurate measurements of the quadrupole (and higher-order) moments of the Sun's gravitational potential, which are consistent with General Relativity. The first helioseismic calculations of the Sun's internal rotation profile showed a rough separation into a rigidly-rotating core and differentially-rotating envelope. The boundary layer is now known as the tachocline and is thought to be a key component for the solar dynamo. Although it roughly coincides with the base of the solar convection zone — also inferred through helioseismology — it is conceptually distinct, being a boundary layer in which there is a meridional flow connected with the convection zone and driven by the interplay between baroclinicity and Maxwell stresses.
Helioseismology benefits most from continuous monitoring of the Sun, which began first with uninterrupted observations from near the South Pole over the austral summer. In addition, observations over multiple solar cycles have allowed helioseismologists to study changes in the Sun's structure over decades. These studies are made possible by global telescope networks like the Global Oscillations Network Group (GONG) and the Birmingham Solar Oscillations Network (BiSON), which have been operating for several decades.
Types of solar oscillation
Solar oscillation modes are interpreted as resonant vibrations of a roughly spherically symmetric self-gravitating fluid in hydrostatic equilibrium. Each mode can then be represented approximately as the product of a function of radius and a spherical harmonic Y_l^m, and consequently can be characterized by the three quantum numbers (illustrated in the sketch after this list) which label:
the number of nodal shells in radius, known as the radial order ;
the total number of nodal circles on each spherical shell, known as the angular degree ; and
the number of those nodal circles that are longitudinal, known as the azimuthal order .
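A short numerical sketch of a single mode pattern follows; the degree and order are chosen arbitrarily, and scipy's sph_harm convention (azimuth as the first angle, colatitude as the second) is assumed.

```python
import numpy as np
from scipy.special import sph_harm

# One mode pattern: the real part of Y_l^m on a latitude-longitude grid.
# For this (l, m) there are l - |m| nodal circles of latitude and |m| nodal
# great circles of longitude, i.e. l nodal circles in total.
l, m = 6, 3
azimuth = np.linspace(0.0, 2.0 * np.pi, 181)   # scipy's "theta"
colatitude = np.linspace(0.0, np.pi, 91)       # scipy's "phi"
A, C = np.meshgrid(azimuth, colatitude)
pattern = sph_harm(m, l, A, C).real
print(pattern.shape, float(pattern.min()), float(pattern.max()))
```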
It can be shown that the oscillations are separated into two categories: interior oscillations and a special category of surface oscillations. More specifically, there are:
Pressure modes (p modes)
Pressure modes are in essence standing sound waves. The dominant restoring force is the pressure (rather than buoyancy), hence the name. All the solar oscillations that are used for inferences about the interior are p modes, with frequencies between about 1 and 5 millihertz and angular degrees ranging from zero (purely radial motion) up to order a thousand. Broadly speaking, their energy densities vary with radius in inverse proportion to the sound speed, so their resonant frequencies are determined predominantly by the outer regions of the Sun. Consequently it is difficult to infer from them the structure of the solar core.
Gravity modes (g modes)
Gravity modes are confined to convectively stable regions, either the radiative interior or the atmosphere. The restoring force is predominantly buoyancy, and thus indirectly gravity, from which they take their name. They are evanescent in the convection zone, and therefore interior modes have tiny amplitudes at the surface and are extremely difficult to detect and identify. It has long been recognized that measurement of even just a few g modes could substantially increase our knowledge of the deep interior of the Sun. However, no individual g mode has yet been unambiguously measured, although indirect detections have been both claimed and challenged. Additionally, there can be similar gravity modes confined to the convectively stable atmosphere.
Surface gravity modes (f modes)
Surface gravity waves are analogous to waves in deep water, having the property that the Lagrangian pressure perturbation is essentially zero. They are of high degree l, penetrating a characteristic distance of order R/l, where R is the solar radius. To good approximation, they obey the so-called deep-water-wave dispersion law ω² = gk, irrespective of the stratification of the Sun, where ω is the angular frequency, g is the surface gravity and k is the horizontal wavenumber, and they tend asymptotically to that relation as the degree increases.
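A back-of-the-envelope sketch of this dispersion law is given below, using round numbers for the solar surface gravity and radius and the estimate k ≈ √(l(l+1))/R for the horizontal wavenumber; all values are illustrative.

```python
import math

G_SURFACE = 274.0   # m/s^2, approximate solar surface gravity
R_SUN = 6.96e8      # m, approximate solar radius

def f_mode_frequency_mhz(l):
    """f-mode cyclic frequency in millihertz from omega**2 = g*k,
    using the horizontal wavenumber k ~ sqrt(l*(l+1))/R."""
    k = math.sqrt(l * (l + 1)) / R_SUN
    omega = math.sqrt(G_SURFACE * k)
    return 1e3 * omega / (2.0 * math.pi)

for degree in (200, 500, 1000):
    print(degree, round(f_mode_frequency_mhz(degree), 2), "mHz")
```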
What seismology can reveal
The oscillations that have been successfully utilized for seismology are essentially adiabatic. Their dynamics is therefore the action of pressure forces (plus putative Maxwell stresses) against matter with inertia density ρ, which itself depends upon the relation between pressure p and density ρ under adiabatic change, usually quantified via the (first) adiabatic exponent γ1. The equilibrium values of the variables p and ρ (together with the dynamically small angular velocity Ω and magnetic field B) are related by the constraint of hydrostatic support, which depends upon the total mass M and radius R of the Sun. Evidently, the oscillation frequencies depend only on the seismic variables p, ρ and γ1, or any independent set of functions of them. Consequently it is only about these variables that information can be derived directly. The square of the adiabatic sound speed, c² = γ1p/ρ, is one such commonly adopted function, because that is the quantity upon which acoustic propagation principally depends. Properties of other, non-seismic, quantities, such as the helium abundance or the main-sequence age, can be inferred only by supplementation with additional assumptions, which renders the outcome more uncertain.
Data analysis
Global helioseismology
The chief tool for analysing the raw seismic data is the Fourier transform. To good approximation, each mode is a damped harmonic oscillator, for which the power as a function of frequency is a Lorentzian. Spatially resolved data are usually projected onto desired spherical harmonics to obtain time series which are then Fourier transformed. Helioseismologists typically combine the resulting one-dimensional power spectra into a two-dimensional spectrum.
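As a toy illustration of that last step, the following builds a synthetic one-dimensional power spectrum of a few damped modes sitting on a noise background; the mode frequencies, widths and noise level are arbitrary.

```python
import numpy as np

def lorentzian(freq, f0, half_width, height):
    """Power profile of a damped, stochastically excited oscillator near resonance."""
    return height / (1.0 + ((freq - f0) / half_width) ** 2)

rng = np.random.default_rng(0)
freq = np.linspace(2.0, 4.0, 4000)                 # mHz
spectrum = rng.exponential(0.05, freq.size)        # background noise
for f0 in (2.5, 3.0, 3.5):                         # illustrative mode frequencies
    spectrum += lorentzian(freq, f0, 0.005, 1.0)

print(round(float(freq[np.argmax(spectrum)]), 3))  # frequency of the strongest peak
```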
The lower frequency range of the oscillations is dominated by the variations caused by granulation. This must first be filtered out before (or at the same time that) the modes are analysed. Granular flows at the solar surface are mostly horizontal, from the centres of the rising granules to the narrow downdrafts between them. Relative to the oscillations, granulation produces a stronger signal in intensity than line-of-sight velocity, so the latter is preferred for helioseismic observatories.
Local helioseismology
Local helioseismology—a term coined by Charles Lindsey, Doug Braun and Stuart Jefferies in 1993—employs several different analysis methods to make inferences from the observational data.
The Fourier–Hankel spectral method was originally used to search for wave absorption by sunspots.
Ring-diagram analysis, first introduced by Frank Hill, is used to infer the speed and direction of horizontal flows below the solar surface by observing the Doppler shifts of ambient acoustic waves from power spectra of solar oscillations computed over patches of the solar surface (typically 15° × 15°). Thus, ring-diagram analysis is a generalization of global helioseismology applied to local areas on the Sun (as opposed to half of the Sun). For example, the sound speed and adiabatic index can be compared within magnetically active and inactive (quiet Sun) regions.
Time-distance helioseismology aims to measure and interpret the travel times of solar waves between any two locations on the solar surface. Inhomogeneities near the ray path connecting the two locations perturb the travel time between those two points. An inverse problem must then be solved to infer the local structure and dynamics of the solar interior.
Helioseismic holography, introduced in detail by Charles Lindsey and Doug Braun for the purpose of far-side (magnetic) imaging, is a special case of phase-sensitive holography. The idea is to use the wavefield on the visible disk to learn about active regions on the far side of the Sun. The basic idea in helioseismic holography is that the wavefield, e.g., the line-of-sight Doppler velocity observed at the solar surface, can be used to make an estimate of the wavefield at any location in the solar interior at any instant in time. In this sense, holography is much like seismic migration, a technique in geophysics that has been in use since the 1940s. As another example, this technique has been used to give a seismic image of a solar flare.
In direct modelling, the idea is to estimate subsurface flows from direct inversion of the frequency-wavenumber correlations seen in the wavefield in the Fourier domain. Woodard demonstrated the ability of the technique to recover near-surface flows from the f modes.
Inversion
Introduction
The Sun's oscillation modes represent a discrete set of observations that are sensitive to its continuous structure. This allows scientists to formulate inverse problems for the Sun's interior structure and dynamics. Given a reference model of the Sun, the differences between its mode frequencies and those of the Sun, if small, are weighted averages of the differences between the Sun's structure and that of the reference model. The frequency differences can then be used to infer those structural differences. The weighting functions of these averages are known as kernels.
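A minimal sketch of such a linear inversion follows, with randomly generated stand-in kernels and a Tikhonov (ridge) penalty; the dimensions, noise level and regularization strength are all arbitrary choices, not taken from any real data set.

```python
import numpy as np

rng = np.random.default_rng(1)
n_modes, n_radii = 60, 120

K = rng.random((n_modes, n_radii)) / n_radii              # stand-in averaging kernels
x_true = 0.01 * np.sin(np.linspace(0.0, np.pi, n_radii))  # "structure difference"
d = K @ x_true + rng.normal(0.0, 1e-5, n_modes)           # "frequency differences" + noise

lam = 1e-4                                                 # regularization strength (tunable)
x_est = np.linalg.solve(K.T @ K + lam * np.eye(n_radii), K.T @ d)
print(float(np.max(np.abs(x_est - x_true))))               # worst-case recovery error
```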
Structure
The first inversions of the Sun's structure were made using Duvall's law and later using Duvall's law linearized about a reference solar model. These results were subsequently supplemented by analyses that linearize the full set of equations describing the stellar oscillations about a theoretical reference model and are now a standard way to invert frequency data. The inversions demonstrated differences in solar models that were greatly reduced by implementing gravitational settling: the gradual separation of heavier elements towards the solar centre (and lighter elements to the surface to replace them).
Rotation
If the Sun were perfectly spherical, the modes with different azimuthal orders m would have the same frequencies. Rotation, however, breaks this degeneracy, and the mode frequencies differ by rotational splittings that are weighted averages of the angular velocity through the Sun. Different modes are sensitive to different parts of the Sun and, given enough data, these differences can be used to infer the rotation rate throughout the Sun. For example, if the Sun were rotating uniformly throughout, all the p modes would be split by approximately the same amount. Actually, the angular velocity is not uniform, as can be seen at the surface, where the equator rotates faster than the poles. The Sun rotates slowly enough that a spherical, non-rotating model is close enough to reality for deriving the rotational kernels.
Helioseismology has shown that the Sun has a rotation profile with several features:
a rigidly-rotating radiative (i.e. non-convective) zone, though the rotation rate of the inner core is not well known;
a thin shear layer, known as the tachocline, which separates the rigidly-rotating interior and the differentially-rotating convective envelope;
a convective envelope in which the rotation rate varies both with depth and latitude; and
a final shear layer just beneath the surface, in which the rotation rate slows down towards the surface.
Relationship to other fields
Geoseismology
Helioseismology was born from analogy with geoseismology but several important differences remain. First, the Sun lacks a solid surface and therefore cannot support shear waves. From the data analysis perspective, global helioseismology differs from geoseismology by studying only normal modes. Local helioseismology is thus somewhat closer in spirit to geoseismology in the sense that it studies the complete wavefield.
Asteroseismology
Because the Sun is a star, helioseismology is closely related to the study of oscillations in other stars, known as asteroseismology. Helioseismology is most closely related to the study of stars whose oscillations are also driven and damped by their outer convection zones, known as solar-like oscillators, but the underlying theory is broadly the same for other classes of variable star.
The principal difference is that oscillations in distant stars cannot be resolved. Because the brighter and darker sectors of the spherical harmonic cancel out, this restricts asteroseismology almost entirely to the study of low-degree modes. This makes inversion much more difficult, but upper limits can still be achieved by making more restrictive assumptions.
History
Solar oscillations were first observed in the early 1960s as a quasi-periodic intensity and line-of-sight velocity variation with a period of about 5 minutes. Scientists gradually realized that the oscillations might be global modes of the Sun and predicted that the modes would form clear ridges in two-dimensional power spectra. The ridges were subsequently confirmed in observations of high-degree modes in the mid 1970s, and mode multiplets of different radial orders were distinguished in whole-disc observations. At a similar time, Jørgen Christensen-Dalsgaard and Douglas Gough suggested the potential of using individual mode frequencies to infer the interior structure of the Sun. They calibrated solar models against the low-degree data, finding two similarly good fits: one with a low helium abundance and a correspondingly low neutrino production rate, the other with a higher helium abundance and neutrino production rate. Earlier envelope calibrations against high-degree frequencies preferred the latter, but the results were not wholly convincing. It was not until Tom Duvall and Jack Harvey connected the two extreme data sets by measuring modes of intermediate degree to establish the quantum numbers associated with the earlier observations that the higher-helium-abundance model was established, thereby suggesting at that early stage that the resolution of the neutrino problem must lie in nuclear or particle physics.
New methods of inversion were developed in the 1980s, allowing researchers to infer the profiles of sound speed and, less accurately, density throughout most of the Sun, corroborating the conclusion that residual errors in the inference of the solar structure are not the cause of the neutrino problem. Towards the end of the decade, observations also began to show that the oscillation mode frequencies vary with the Sun's magnetic activity cycle.
To overcome the problem of not being able to observe the Sun at night, several groups had begun to assemble networks of telescopes (e.g. the Birmingham Solar Oscillations Network, or BiSON, and the Global Oscillation Network Group) from which the Sun would always be visible to at least one node. Long, uninterrupted observations brought the field to maturity, and the state of the field was summarized in a 1996 special issue of Science magazine. This coincided with the start of normal operations of the Solar and Heliospheric Observatory (SoHO), which began producing high-quality data for helioseismology.
The subsequent years saw the resolution of the solar neutrino problem, and the long seismic observations began to allow analysis of multiple solar activity cycles. The agreement between standard solar models and helioseismic inversions was disrupted by new measurements of the heavy element content of the solar photosphere based on detailed three-dimensional models. Though the results later shifted back towards the traditional values used in the 1990s, the new abundances significantly worsened the agreement between the models and helioseismic inversions. The cause of the discrepancy remains unsolved and is known as the solar abundance problem.
Space-based observations by SoHO have continued and SoHO was joined in 2010 by the Solar Dynamics Observatory (SDO), which has also been monitoring the Sun continuously since its operations began. In addition, ground-based networks (notably BiSON and GONG) continue to operate, providing nearly continuous data from the ground too.
See also
160-minute solar cycle
Coronal seismology
Differential rotation
Diskoseismology
Frequency separation
Magnetogravity wave
Moreton wave
Solar neutrino problem
Solar tower (astronomy)
Stellar rotation
References
External links
Non-technical description of helio- and asteroseismology retrieved November 2009
Scientists Issue Unprecedented Forecast of Next Sunspot Cycle National Science Foundation press release, March 6, 2006
European Helio- and Asteroseismology Network (HELAS)
Farside and Earthside images of the Sun
Living Reviews in Solar Physics
Helioseismology and Asteroseismology at MPS
Satellite instruments
VIRGO
SOI/MDI
SDO/HMI
TRACE
Ground-based instruments
BiSON
Mark-1
GONG
HiDHN
Further reading
Fields of seismology
Seismology
Stellar phenomena
Asteroseismology
Concepts in stellar astronomy | Helioseismology | [
"Physics"
] | 3,642 | [
"Physical phenomena",
"Concepts in astrophysics",
"Astrophysics",
"Asteroseismology",
"Concepts in stellar astronomy",
"Stellar phenomena"
] |
15,112,730 | https://en.wikipedia.org/wiki/Laughlin%20wavefunction | In condensed matter physics, the Laughlin wavefunction is an ansatz, proposed by Robert Laughlin for the ground state of a two-dimensional electron gas placed in a uniform background magnetic field in the presence of a uniform jellium background when the filling factor of the lowest Landau level is where is an odd positive integer. It was constructed to explain the observation of the fractional quantum Hall effect (FQHE), and predicted the existence of additional states as well as quasiparticle excitations with fractional electric charge , both of which were later experimentally observed. Laughlin received one third of the Nobel Prize in Physics in 1998 for this discovery.
Context and analytical expression
If we ignore the jellium and mutual Coulomb repulsion between the electrons as a zeroth order approximation, we have an infinitely degenerate lowest Landau level (LLL) and with a filling factor of 1/n, we'd expect that all of the electrons would lie in the LLL. Turning on the interactions, we can make the approximation that all of the electrons lie in the LLL. If is the single particle wavefunction of the LLL state with the lowest orbital angular momenta, then the Laughlin ansatz for the multiparticle wavefunction is
where the position of each particle in the x–y plane is denoted by a complex number z built from its x and y coordinates, with lengths measured in units of the magnetic length (which, in Gaussian units, is constructed from the reduced Planck constant, the electron charge and the magnetic field). Here ħ is the reduced Planck constant, e is the electron charge, N is the total number of particles, and B is the magnetic field, which is perpendicular to the xy plane. The subscripts on z identify the particle. In order for the wavefunction to describe fermions, n must be an odd integer. This forces the wavefunction to be antisymmetric under particle interchange. The angular momentum for this state is nN(N − 1)/2 in units of ħ.
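A small numerical sketch using the textbook Jastrow form of the ansatz (a product of pairwise factors (z_i − z_j) raised to the power n, times a Gaussian in the magnetic length) is given below; the normalization is omitted and all names and values are illustrative.

```python
import numpy as np

def laughlin_amplitude(z, n=3, magnetic_length=1.0):
    """Un-normalized Laughlin amplitude: prod_{i<j} (z_i - z_j)**n times
    exp(-sum |z_k|**2 / (4 * magnetic_length**2)).  z is a 1-D complex array."""
    z = np.asarray(z, dtype=complex)
    jastrow = 1.0 + 0.0j
    for i in range(len(z)):
        for j in range(i + 1, len(z)):
            jastrow *= (z[i] - z[j]) ** n
    gaussian = np.exp(-np.sum(np.abs(z) ** 2) / (4.0 * magnetic_length ** 2))
    return jastrow * gaussian

# The n-th order zero whenever two particles approach keeps them apart,
# which is how the state holds the Coulomb energy down.
print(abs(laughlin_amplitude([0 + 0j, 1 + 0j, 0.5 + 0.5j])))
print(abs(laughlin_amplitude([0 + 0j, 0.01 + 0j, 0.5 + 0.5j])))
```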
True ground state in FQHE at ν = 1/3
Consider the ansatz above: the resultant is a trial wavefunction; it is not exact, but qualitatively it reproduces many features of the exact solution and quantitatively it has very high overlaps with the exact ground state for small systems. Assuming Coulomb repulsion between any two electrons, that ground state can be determined using exact diagonalisation, and the overlaps have been calculated to be close to one. Moreover, with a short-range interaction (all Haldane pseudopotentials beyond the shortest-range one set to zero), the Laughlin wavefunction becomes the exact ground state.
Energy of interaction for two particles
The Laughlin wavefunction is the multiparticle wavefunction for quasiparticles. The expectation value of the interaction energy for a pair of quasiparticles is
where the screened potential is (see )
where is a confluent hypergeometric function and is a Bessel function of the first kind. Here, is the distance between the centers of two current loops, is the magnitude of the electron charge, is the quantum version of the Larmor radius, and is the thickness of the electron gas in the direction of the magnetic field. The angular momenta of the two individual current loops are and where . The inverse screening length is given by (Gaussian units)
where is the cyclotron frequency, and is the area of the electron gas in the xy plane.
The interaction energy evaluates to:
{|cellpadding="2" style="border:2px solid #ccccff"
|
|}
To obtain this result we have made the change of integration variables
and
and noted (see Common integrals in quantum field theory)
The interaction energy has minima for (Figure 1)
and
For these values of the ratio of angular momenta, the energy is plotted in Figure 2 as a function of .
References
See also
Landau level
Fractional quantum Hall effect
Coulomb potential between two current loops embedded in a magnetic field
Hall effect
Condensed matter physics
Quantum phases | Laughlin wavefunction | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 801 | [
"Quantum phases",
"Physical phenomena",
"Hall effect",
"Phases of matter",
"Quantum mechanics",
"Electric and magnetic fields in matter",
"Materials science",
"Electrical phenomena",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
15,114,520 | https://en.wikipedia.org/wiki/Six%20exponentials%20theorem | In mathematics, specifically transcendental number theory, the six exponentials theorem is a result that, given the right conditions on the exponents, guarantees the transcendence of at least one of a set of six exponentials.
Statement
If x1, x2, x3 are three complex numbers that are linearly independent over the rational numbers, and y1, y2 are two complex numbers that are also linearly independent over the rational numbers, then at least one of the following six numbers is transcendental: e^(x1y1), e^(x1y2), e^(x2y1), e^(x2y2), e^(x3y1), e^(x3y2).
The theorem can be stated in terms of logarithms by introducing the set L of logarithms of algebraic numbers, that is, the set of complex numbers λ for which e^λ is algebraic.
The theorem then says that if λij are elements of L for i = 1, 2 and j = 1, 2, 3, such that λ11, λ12, λ13 are linearly independent over the rational numbers, and λ11 and λ21 are also linearly independent over the rational numbers, then the 2×3 matrix with entries λij has rank 2.
History
A special case of the result where x1, x2, and x3 are logarithms of positive integers, y1 = 1, and y2 is real, was first mentioned in a paper by Leonidas Alaoglu and Paul Erdős from 1944 in which they try to prove that the ratio of consecutive colossally abundant numbers is always prime. They claimed that Carl Ludwig Siegel knew of a proof of this special case, but it is not recorded. Using the special case they manage to prove that the ratio of consecutive colossally abundant numbers is always either a prime or a semiprime.
The theorem was first explicitly stated and proved in its complete form independently by Serge Lang and Kanakanahalli Ramachandra in the 1960s.
Five exponentials theorem
A stronger, related result is the five exponentials theorem, which is as follows. Let x1, x2 and y1, y2 be two pairs of complex numbers, with each pair being linearly independent over the rational numbers, and let γ be a non-zero algebraic number. Then at least one of the following five numbers is transcendental: e^(x1y1), e^(x1y2), e^(x2y1), e^(x2y2), e^(γx2/x1).
This theorem implies the six exponentials theorem and in turn is implied by the as yet unproven four exponentials conjecture, which says that in fact one of the first four numbers on this list must be transcendental.
Sharp six exponentials theorem
Another related result that implies both the six exponentials theorem and the five exponentials theorem is the sharp six exponentials theorem. This theorem is as follows. Let x1, x2, and x3 be complex numbers that are linearly independent over the rational numbers, and let y1 and y2 be a pair of complex numbers that are linearly independent over the rational numbers, and suppose that βij are six algebraic numbers for 1 ≤ i ≤ 3 and 1 ≤ j ≤ 2 such that the following six numbers are algebraic: e^(x1y1 − β11), e^(x1y2 − β12), e^(x2y1 − β21), e^(x2y2 − β22), e^(x3y1 − β31), e^(x3y2 − β32).
Then xi yj = βij for 1 ≤ i ≤ 3 and 1 ≤ j ≤ 2. The six exponentials theorem then follows by setting βij = 0 for every i and j, while the five exponentials theorem follows by setting x3 = γ/x1 and using Baker's theorem to ensure that the xi are linearly independent.
There is a sharp version of the five exponentials theorem as well, although it is as yet unproven, so it is known as the sharp five exponentials conjecture. This conjecture implies both the sharp six exponentials theorem and the five exponentials theorem, and is stated as follows. Let x1, x2 and y1, y2 be two pairs of complex numbers, with each pair being linearly independent over the rational numbers, and let α, β11, β12, β21, β22, and γ be six algebraic numbers with γ ≠ 0 such that the following five numbers are algebraic: e^(x1y1 − β11), e^(x1y2 − β12), e^(x2y1 − β21), e^(x2y2 − β22), e^(γx2/x1 − α).
Then xi yj = βij for 1 ≤ i, j ≤ 2 and γx2 = αx1.
A consequence of this conjecture that isn't currently known would be the transcendence of e^(π²), by setting x1 = y1 = β11 = 1, x2 = y2 = iπ, and all the other values in the statement to be zero.
Strong six exponentials theorem
A further strengthening of the theorems and conjectures in this area are the strong versions. The strong six exponentials theorem is a result proved by Damien Roy that implies the sharp six exponentials theorem. This result concerns the vector space over the algebraic numbers generated by 1 and all logarithms of algebraic numbers, denoted here as L∗. So L∗ is the set of all complex numbers of the form β0 + β1 log α1 + ⋯ + βn log αn
for some n ≥ 0, where all the βi and αi are algebraic and every branch of the logarithm is considered. The strong six exponentials theorem then says that if x1, x2, and x3 are complex numbers that are linearly independent over the algebraic numbers, and if y1 and y2 are a pair of complex numbers that are also linearly independent over the algebraic numbers then at least one of the six numbers xi yj for 1 ≤ i ≤ 3 and 1 ≤ j ≤ 2 is not in L∗. This is stronger than the standard six exponentials theorem which says that one of these six numbers is not simply the logarithm of an algebraic number.
There is also a strong five exponentials conjecture formulated by Michel Waldschmidt. It would imply both the strong six exponentials theorem and the sharp five exponentials conjecture. This conjecture claims that if x1, x2 and y1, y2 are two pairs of complex numbers, with each pair being linearly independent over the algebraic numbers, then at least one of the following five numbers is not in L∗: x1y1, x1y2, x2y1, x2y2, x1/x2.
All the above conjectures and theorems are consequences of the unproven extension of Baker's theorem, that logarithms of algebraic numbers that are linearly independent over the rational numbers are automatically algebraically independent too. The diagram on the right shows the logical implications between all these results.
Generalization to commutative group varieties
The exponential function uniformizes the exponential map of the multiplicative group . Therefore, we can reformulate the six exponential theorem more abstractly as follows:
Let and take to be a non-zero complex-analytic group homomorphism. Define to be the set of complex numbers for which is an algebraic point of . If a minimal generating set of over has more than two elements then the image is an algebraic subgroup of .
(In order to derive the classical statement, set and note that is a subset of ).
In this way, the statement of the six exponentials theorem can be generalized to an arbitrary commutative group variety over the field of algebraic numbers. This generalized six exponential conjecture, however, seems out of scope at the current state of transcendental number theory.
For the special but interesting cases and , where are elliptic curves over the field of algebraic numbers, results towards the generalized six exponential conjecture were proven by Aleksander Momot. These results involve the exponential function and a Weierstrass function resp. two Weierstrass functions with algebraic invariants , instead of the two exponential functions in the classical statement.
Let and suppose is not isogenous to a curve over a real field and that is not an algebraic subgroup of . Then is generated over either by two elements , or three elements which are not all contained in a real line , where is a non-zero complex number. A similar result is shown for .
Notes
References
External links
Transcendental numbers
Exponentials
Theorems in number theory
Conjectures | Six exponentials theorem | [
"Mathematics"
] | 1,515 | [
"Unsolved problems in mathematics",
"Mathematical theorems",
"E (mathematical constant)",
"Conjectures",
"Theorems in number theory",
"Exponentials",
"Mathematical problems",
"Number theory"
] |
15,114,628 | https://en.wikipedia.org/wiki/Four%20exponentials%20conjecture | In mathematics, specifically the field of transcendental number theory, the four exponentials conjecture is a conjecture which, given the right conditions on the exponents, would guarantee the transcendence of at least one of four exponentials. The conjecture, along with two related, stronger conjectures, is at the top of a hierarchy of conjectures and theorems concerning the arithmetic nature of a certain number of values of the exponential function.
Statement
If x1, x2 and y1, y2 are two pairs of complex numbers, with each pair being linearly independent over the rational numbers, then at least one of the following four numbers is transcendental: e^(x1y1), e^(x1y2), e^(x2y1), e^(x2y2).
An alternative way of stating the conjecture in terms of logarithms is the following. For 1 ≤ i, j ≤ 2 let λij be complex numbers such that exp(λij) are all algebraic. Suppose λ11 and λ12 are linearly independent over the rational numbers, and λ11 and λ21 are also linearly independent over the rational numbers; then λ11λ22 ≠ λ12λ21.
An equivalent formulation in terms of linear algebra is the following. Let M be the 2×2 matrix with entries Mij = λij,
where exp(λij) is algebraic for 1 ≤ i, j ≤ 2. Suppose the two rows of M are linearly independent over the rational numbers, and the two columns of M are linearly independent over the rational numbers. Then the rank of M is 2.
While a 2×2 matrix having linearly independent rows and columns usually means it has rank 2, in this case we require linear independence over a smaller field so the rank isn't forced to be 2. For example, the matrix with first row (1, π) and second row (π, π²) has rows and columns that are linearly independent over the rational numbers, since π is irrational. But the rank of the matrix is 1, since the second row is π times the first. So in this case the conjecture would imply that at least one of e, e^π, and e^(π²) is transcendental (which in this case is already known since e is transcendental).
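The rank claim can be checked symbolically; a short SymPy sketch of the example just given:

```python
import sympy as sp

# Rows are (1, pi) and (pi, pi**2): the second row is pi times the first,
# hence rank 1, even though rows and columns are linearly independent over Q.
M = sp.Matrix([[1, sp.pi], [sp.pi, sp.pi**2]])
print(M.rank())   # 1
print(M.det())    # 0
```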
History
The conjecture was considered in the early 1940s by Atle Selberg who never formally stated the conjecture. A special case of the conjecture is mentioned in a 1944 paper of Leonidas Alaoglu and Paul Erdős who suggest that it had been considered by Carl Ludwig Siegel. An equivalent statement was first mentioned in print by Theodor Schneider who set it as the first of eight important, open problems in transcendental number theory in 1957.
The related six exponentials theorem was first explicitly mentioned in the 1960s by Serge Lang and Kanakanahalli Ramachandra, and both also explicitly conjecture the above result. Indeed, after proving the six exponentials theorem Lang mentions the difficulty in dropping the number of exponents from six to four — the proof used for six exponentials "just misses" when one tries to apply it to four.
Corollaries
Using Euler's identity this conjecture implies the transcendence of many numbers involving e and π. For example, taking x1 = 1, x2 = √2, y1 = iπ, and y2 = iπ√2, the conjecture—if true—implies that one of the following four numbers is transcendental: e^(iπ), e^(iπ√2), e^(iπ√2), e^(2iπ).
The first of these is just −1, and the fourth is 1, so the conjecture implies that e^(iπ√2) is transcendental (which is already known, by consequence of the Gelfond–Schneider theorem).
An open problem in number theory settled by the conjecture is the question of whether there exists a non-integer real number t such that both 2^t and 3^t are integers, or indeed such that a^t and b^t are both integers for some pair of integers a and b that are multiplicatively independent over the integers. Values of t such that 2^t is an integer are all of the form t = log2(m) for some integer m, while for 3^t to be an integer, t must be of the form t = log3(n) for some integer n. By setting x1 = 1, x2 = t, y1 = log(2), and y2 = log(3), the four exponentials conjecture implies that if t is irrational then one of the following four numbers is transcendental: 2, 3, 2^t, 3^t.
So if 2^t and 3^t are both integers then the conjecture implies that t must be a rational number. Since the only rational numbers t for which 2^t is also rational are the integers, this implies that there are no non-integer real numbers t such that both 2^t and 3^t are integers. It is this consequence, for any two primes (not just 2 and 3), that Alaoglu and Erdős desired in their paper as it would imply the conjecture that the quotient of two consecutive colossally abundant numbers is prime, extending Ramanujan's results on the quotients of consecutive superior highly composite numbers.
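A purely numerical illustration of this consequence (not a proof): taking t = log2(m) makes 2^t equal to the integer m exactly, and one can tabulate how far 3^t then falls from the nearest integer.

```python
import math

# For m a power of two, t = log2(m) is an integer and 3**t is trivially an integer.
# For other m (so that t is irrational), 3**t is never observed to land on an integer.
for m in range(2, 16):
    t = math.log2(m)        # forces 2**t to equal the integer m exactly
    val = 3.0 ** t
    print(m, round(val, 6), round(abs(val - round(val)), 6))
```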
Sharp four exponentials conjecture
The four exponentials conjecture reduces the pair and triplet of complex numbers in the hypotheses of the six exponentials theorem to two pairs. It is conjectured that this is also possible with the sharp six exponentials theorem, and this is the sharp four exponentials conjecture. Specifically, this conjecture claims that if x1, x2, and y1, y2 are two pairs of complex numbers with each pair being linearly independent over the rational numbers, and if βij are four algebraic numbers for 1 ≤ i, j ≤ 2 such that the following four numbers are algebraic: e^(x1y1 − β11), e^(x1y2 − β12), e^(x2y1 − β21), e^(x2y2 − β22),
then xi yj = βij for 1 ≤ i, j ≤ 2. So all four exponentials are in fact 1.
This conjecture implies both the sharp six exponentials theorem, which requires a third x value, and the as yet unproven sharp five exponentials conjecture that requires a further exponential to be algebraic in its hypotheses.
Strong four exponentials conjecture
The strongest result that has been conjectured in this circle of problems is the strong four exponentials conjecture. This result would imply both aforementioned conjectures concerning four exponentials as well as all the five and six exponentials conjectures and theorems, as illustrated to the right, and all the three exponentials conjectures detailed below. The statement of this conjecture deals with the vector space over the algebraic numbers generated by 1 and all logarithms of non-zero algebraic numbers, denoted here as L∗. So L∗ is the set of all complex numbers of the form β0 + β1 log α1 + ⋯ + βn log αn
for some n ≥ 0, where all the βi and αi are algebraic and every branch of the logarithm is considered. The statement of the strong four exponentials conjecture is then as follows. Let x1, x2, and y1, y2 be two pairs of complex numbers with each pair being linearly independent over the algebraic numbers, then at least one of the four numbers xi yj for 1 ≤ i, j ≤ 2 is not in L∗.
Three exponentials conjecture
The four exponentials conjecture rules out a special case of non-trivial, homogeneous, quadratic relations between logarithms of algebraic numbers. But a conjectural extension of Baker's theorem implies that there should be no non-trivial algebraic relations between logarithms of algebraic numbers at all, homogeneous or not. One case of non-homogeneous quadratic relations is covered by the still open three exponentials conjecture. In its logarithmic form it is the following conjecture. Let λ1, λ2, and λ3 be any three logarithms of algebraic numbers and γ be a non-zero algebraic number, and suppose that λ1λ2 = γλ3. Then λ1λ2 = γλ3 = 0.
The exponential form of this conjecture is the following. Let x1, x2, and y be non-zero complex numbers and let γ be a non-zero algebraic number. Then at least one of the following three numbers is transcendental: e^(x1y), e^(x2y), e^(γx1/x2).
There is also a sharp three exponentials conjecture which claims that if x1, x2, and y are non-zero complex numbers and α, β1, β2, and γ are algebraic numbers such that the following three numbers are algebraic: e^(x1y − β1), e^(x2y − β2), e^(γx1/x2 − α),
then either x2y = β2 or γx1 = αx2.
The strong three exponentials conjecture meanwhile states that if x1, x2, and y are non-zero complex numbers with x1y, x2y, and x1/x2 all transcendental, then at least one of the three numbers x1y, x2y, x1/x2 is not in L∗.
As with the other results in this family, the strong three exponentials conjecture implies the sharp three exponentials conjecture which implies the three exponentials conjecture. However, the strong and sharp three exponentials conjectures are implied by their four exponentials counterparts, bucking the usual trend. And the three exponentials conjecture is neither implied by nor implies the four exponentials conjecture.
The three exponentials conjecture, like the sharp five exponentials conjecture, would imply the transcendence of e^(π²) by letting (in the logarithmic version) λ1 = iπ, λ2 = −iπ, and γ = 1.
Bertrand's conjecture
Many of the theorems and results in transcendental number theory concerning the exponential function have analogues involving the modular function j. Writing q = e^(2πiτ) for the nome and j(τ) = J(q), Daniel Bertrand conjectured that if q1 and q2 are non-zero algebraic numbers in the complex unit disc that are multiplicatively independent, then J(q1) and J(q2) are algebraically independent over the rational numbers. Although not obviously related to the four exponentials conjecture, Bertrand's conjecture in fact implies a special case known as the weak four exponentials conjecture. This conjecture states that if x1 and x2 are two positive real algebraic numbers, neither of them equal to 1, then π² and the product (log x1)(log x2) are linearly independent over the rational numbers. This corresponds to the special case of the four exponentials conjecture whereby y1 = iπ, y2 = −iπ, and x1 and x2 are real. Perhaps surprisingly, though, it is also a corollary of Bertrand's conjecture, suggesting there may be an approach to the full four exponentials conjecture via the modular function j.
See also
Schanuel's conjecture
Notes
References
External links
Conjectures
Transcendental numbers
Exponentials
Unsolved problems in number theory | Four exponentials conjecture | [
"Mathematics"
] | 2,138 | [
"Unsolved problems in mathematics",
"Unsolved problems in number theory",
"E (mathematical constant)",
"Conjectures",
"Exponentials",
"Mathematical problems",
"Number theory"
] |
15,118,033 | https://en.wikipedia.org/wiki/Dangerous%20Substances%20and%20Explosive%20Atmospheres%20Regulations%202002 | DSEAR, the Dangerous Substances and Explosive Atmospheres Regulations 2002, is the United Kingdom's implementation of the European Union-wide ATEX directive.
The intention of the Regulations is to reduce the risk of a fatality or serious injury resulting from a "dangerous substance" igniting and potentially exploding. Examples of a "dangerous substance", as defined by DSEAR, include sawdust, ethanol vapours, and hydrogen gas. The regulation is enforceable by the HSE or local authorities.
From June 2015, DSEAR incorporated changes in the EU Chemical Agents Directive and now also covers gases under pressure and substances that are corrosive to metals.
See also
Health and safety regulations in the United Kingdom
Area classification
Electrical equipment in hazardous areas
Equipment and protective systems intended for use in potentially explosive atmospheres
Intrinsic safety
HSEQ
References
External links
DSEAR legislation
UK HSE DSEAR page
Chemical industry in the United Kingdom
Electrical safety in the United Kingdom
Explosion protection
Health and safety in the United Kingdom
Natural gas safety
Regulation of chemicals in the United Kingdom
Safety codes
Statutory instruments of the United Kingdom | Dangerous Substances and Explosive Atmospheres Regulations 2002 | [
"Chemistry",
"Engineering"
] | 218 | [
"Explosion protection",
"Natural gas safety",
"Combustion engineering",
"Natural gas technology",
"Explosions"
] |
211,041 | https://en.wikipedia.org/wiki/Palmer%20drought%20index | The Palmer drought index, sometimes called the Palmer drought severity index (PDSI), is a regional drought index commonly used for monitoring drought events and studying areal extent and severity of drought episodes. The index uses precipitation and temperature data to study moisture supply and demand using a simple water balance model. It was developed by meteorologist Wayne Palmer, who first published his method in the 1965 paper Meteorological Drought for the Office of Climatology of the U.S. Weather Bureau.
The Palmer Drought Index is based on a supply-and-demand model of soil moisture. Supply is comparatively straightforward to calculate, but demand is more complicated as it depends on many factors, not just temperature and the amount of moisture in the soil but also hard-to-calibrate factors including evapotranspiration and recharge rates. Palmer tried to overcome these difficulties by developing an algorithm that approximated them based on the most readily available data, precipitation and temperature.
The index has proven most effective in determining long-term drought, a matter of several months, but it is not as good with conditions over a matter of weeks. It uses a 0 as normal, and drought is shown in terms of negative numbers; for example, negative 2 is moderate drought, negative 3 is severe drought, and negative 4 is extreme drought. Palmer's algorithm also is used to describe wet spells, using corresponding positive numbers. Palmer also developed a formula for standardizing drought calculations for each individual location based on the variability of precipitation and temperature at that location. The Palmer index can therefore be applied to any site for which sufficient precipitation and temperature data is available.
Critics have argued that the utility of the Palmer index is weakened by the arbitrary nature of Palmer's algorithms, including the technique used for standardization and arbitrary designation of drought severity classes and internal temporal memory. The Palmer index's inability to account for snow and frozen ground also is cited as a weakness.
The Palmer index is widely used operationally, with Palmer maps published weekly by the United States Government's National Oceanic and Atmospheric Administration. It also has been used by climatologists to standardize global long-term drought analysis. Global Palmer data sets have been developed based on instrumental records beginning in the 19th century. In addition, dendrochronology has been used to generate estimated Palmer index values for North America for the past 2000 years, allowing analysis of long term drought trends. It has also been used as a means of explaining the Late Bronze Age collapse.
In the US, regional Palmer maps are featured on the cable channel Weatherscan.
PDSI classification
The PDSI is a standardized index that ranges from -10 to +10, with negative values indicating drought conditions and positive values indicating wet conditions.
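A minimal sketch of how these classes might be applied in software, assuming the drought thresholds quoted above and mirror-image thresholds for wet spells (the wet-class boundaries are an assumption chosen for illustration, not part of the source):

```python
def classify_pdsi(value: float) -> str:
    """Map a Palmer Drought Severity Index value to a descriptive class.

    Drought thresholds (-2, -3, -4) follow the text above; the wet-spell
    boundaries mirror them and are illustrative assumptions.
    """
    if value <= -4.0:
        return "extreme drought"
    elif value <= -3.0:
        return "severe drought"
    elif value <= -2.0:
        return "moderate drought"
    elif value < 2.0:
        return "near normal"
    elif value < 3.0:
        return "moderately wet"
    elif value < 4.0:
        return "very wet"
    else:
        return "extremely wet"


if __name__ == "__main__":
    for v in (-4.5, -2.3, 0.1, 3.2):
        print(v, "->", classify_pdsi(v))
```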
See also
Keetch–Byram drought index
Standardised Precipitation Evapotranspiration Index
United States Drought Monitor
Drought
References
External links
Drought Monitoring at NOAA's Climate Prediction Center Using the Palmer Drought Index
National Center for Climate and Atmospheric Research, Climate Analysis Section
UCL Dept. of Space and Climate Physics Global Drought Monitor Using the Palmer Drought Index
Droughts
Eponymous indices
Hydrology
Meteorological indices
Meteorological quantities
Climate change and agriculture | Palmer drought index | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering",
"Environmental_science"
] | 636 | [
"Hydrology",
"Physical quantities",
"Quantity",
"Meteorological quantities",
"Environmental engineering"
] |
211,923 | https://en.wikipedia.org/wiki/Liver%20function%20tests | Liver function tests (LFTs or LFs), also referred to as a hepatic panel or liver panel, are groups of blood tests that provide information about the state of a patient's liver. These tests include prothrombin time (PT/INR), activated partial thromboplastin time (aPTT), albumin, bilirubin (direct and indirect), and others. The liver transaminases aspartate transaminase (AST or SGOT) and alanine transaminase (ALT or SGPT) are useful biomarkers of liver injury in a patient with some degree of intact liver function.
Most liver diseases cause only mild symptoms initially, but these diseases must be detected early. Hepatic (liver) involvement in some diseases can be of crucial importance. This testing is performed on a patient's blood sample. Some tests are associated with functionality (e.g., albumin), some with cellular integrity (e.g., transaminase), and some with conditions linked to the biliary tract (gamma-glutamyl transferase and alkaline phosphatase). Because some of these tests do not measure function, it is more accurate to call these liver chemistries or liver tests rather than liver function tests.
Several biochemical tests are useful in the evaluation and management of patients with hepatic dysfunction. These tests can be used to detect the presence of liver disease. They can help distinguish among different types of liver disorders, gauge the extent of known liver damage, and monitor the response to treatment. Some or all of these measurements are also carried out (usually about twice a year for routine cases) on individuals taking certain medications, such as anticonvulsants, to ensure that these medications are not adversely impacting the person's liver.
Standard liver panel
Standard liver tests for assessing liver damage include alanine aminotransferase (ALT), aspartate aminotransferase (AST), and alkaline phosphatase (ALP). Bilirubin may be used to estimate the excretory function of the liver, and coagulation tests and albumin can be used to evaluate the metabolic activity of the liver.
Although example reference ranges are given, these will vary depending on method of analysis used at the administering laboratory, as well as age, gender, ethnicity, and potentially unrelated health factors. Individual results should always be interpreted using the reference range provided by the laboratory that performed the test.
Total bilirubin
Measurement of total bilirubin includes both unconjugated (indirect) and conjugated (direct) bilirubin. Unconjugated bilirubin is a breakdown product of heme (a part of hemoglobin in red blood cells). The liver is responsible for clearing the blood of unconjugated bilirubin by 'conjugating' it (modifying it to make it water-soluble) through an enzyme named UDP-glucuronyl-transferase. A total bilirubin level above 17 μmol/L indicates liver disease. When total bilirubin levels exceed 40 μmol/L, bilirubin deposition in the sclera, skin, and mucous membranes gives these areas a yellow colour, a condition called jaundice.
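As an illustration only, a small helper that applies the two thresholds mentioned above (17 and 40 μmol/L); real interpretation must always use the reporting laboratory's reference range:

```python
def interpret_total_bilirubin(umol_per_l: float) -> str:
    """Rough interpretation of a total bilirubin result in μmol/L.

    Thresholds (17 and 40 μmol/L) are taken from the text above; this is a
    teaching sketch, not a clinical decision rule.
    """
    if umol_per_l <= 17:
        return "within the range described above as normal"
    if umol_per_l <= 40:
        return "elevated; suggestive of liver disease"
    return "markedly elevated; visible jaundice expected"


print(interpret_total_bilirubin(12.0))
print(interpret_total_bilirubin(55.0))
```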
The increase in predominantly unconjugated bilirubin is due to overproduction, reduced hepatic uptake of the unconjugated bilirubin and reduced conjugation of bilirubin. Overproduction can be due to the reabsorption of a haematoma and ineffective erythropoiesis leading to increased red blood cell destruction. Gilbert's syndrome and Crigler–Najjar syndrome have defects in the UDP-glucuronyl-transferase enzyme, affecting bilirubin conjugation.
The degree of rise in conjugated bilirubin is directly proportional to the degree of hepatocyte injury. Viral hepatitis can also cause the rise in conjugated bilirubin. In parenchymal liver disease and incomplete extrahepatic obstruction, the rise in conjugated bilirubin is less than the complete common bile duct obstruction due to malignant causes. In Dubin–Johnson syndrome, a mutation in multiple drug-resistance protein 2 (MRP2) causes a rise in conjugated bilirubin.
In acute appendicitis, total bilirubin can rise from 20.52 μmol/L to 143 μmol/L. In pregnant women, the total bilirubin level is low in all three trimesters.
The measurement of bilirubin levels in newborns is done with a bilimeter or transcutaneous bilirubinometer instead of performing LFTs. When the total serum bilirubin rises above the 95th percentile for age during the first week of life in high-risk babies, it is known as hyperbilirubinemia of the newborn (neonatal jaundice) and requires light therapy to reduce the amount of bilirubin in the blood. Pathological jaundice in newborns should be suspected when the serum bilirubin rises by more than 5 mg/dL per day, when serum bilirubin exceeds the physiological range, when clinical jaundice persists for more than 2 weeks, or when conjugated bilirubin is present (dark urine staining the clothes). Haemolytic jaundice is the commonest cause of pathological jaundice. Babies with Rh hemolytic disease, ABO incompatibility with the mother, glucose-6-phosphate dehydrogenase (G-6-PD) deficiency, or minor blood group incompatibility are at increased risk of haemolytic jaundice.
Alanine transaminase (ALT)
Apart from being found in high concentrations in the liver, ALT is found in the kidneys, heart, and muscles. It catalyses the transamination reaction and exists only in a cytoplasmic form. Any kind of liver injury can cause a rise in ALT. A rise of up to 300 IU/L is not specific to the liver, as it can be due to damage of other organs such as the kidneys or muscles. When ALT rises to more than 500 IU/L, the cause is usually in the liver: hepatitis, ischemic liver injury, or toxins that cause liver damage. ALT levels rise more in hepatitis C than in hepatitis A and B. Persistent ALT elevation for more than 6 months is known as chronic hepatitis. Alcoholic liver disease, non-alcoholic fatty liver disease (NAFLD), fat accumulation in the liver in childhood obesity, and steatohepatitis (inflammation of fatty liver disease) are associated with a rise in ALT. A rise in ALT is also associated with reduced insulin response, reduced glucose tolerance, and increased free fatty acids and triglycerides. Bright liver syndrome (a bright liver on ultrasound suggestive of fatty liver) with raised ALT is suggestive of metabolic syndrome.
In pregnancy, ALT levels rise during the second trimester. In one study, measured ALT levels in pregnancy-related conditions were 103.5 IU/L in hyperemesis gravidarum, 115 IU/L in pre-eclampsia, and 149 IU/L in HELLP syndrome. ALT levels fall by more than 50% within three days after delivery. Another study shows that caffeine consumption can reduce the risk of ALT elevation in people who consume alcohol, are overweight, have impaired glucose metabolism, or have viral hepatitis.
Aspartate transaminase (AST)
AST exists as two isoenzymes, a mitochondrial form and a cytoplasmic form. It is found in highest concentration in the liver, followed by the heart, muscle, kidney, brain, pancreas, and lungs. This wide range of AST-containing organs makes it a relatively less specific indicator of liver damage than ALT. An increase of mitochondrial AST in the blood is highly suggestive of tissue necrosis in myocardial infarction and chronic liver disease. More than 80% of liver AST activity is contributed by the mitochondrial form of the isoenzyme, while the circulating AST in blood is contributed by the cytoplasmic form. AST is especially markedly raised in those with liver cirrhosis. AST can be released from a variety of other tissues, and if the elevation is less than two times the normal AST, no further workup needs to be performed if a patient is proceeding to surgery.
In certain pregnancy related conditions such as hyperemesis gravidarum, AST can reach as high as 73 IU/L, 66 IU/L in pre-eclampsia, and 81 IU/L in HELLP syndrome.
AST/ALT ratio
The AST/ALT ratio increases with liver functional impairment. The mean ratio is 1.45 in alcoholic liver disease and 1.33 in post-necrotic liver cirrhosis. The ratio is greater than 1.17 in viral cirrhosis, greater than 2.0 in alcoholic hepatitis, 0.9 in non-alcoholic hepatitis, and greater than 4.5 in Wilson disease or hyperthyroidism.
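A trivial sketch of the ratio calculation, with an example reproducing the 1.45 mean quoted above for alcoholic liver disease (the input activities are made up for illustration):

```python
def ast_alt_ratio(ast_iu_l: float, alt_iu_l: float) -> float:
    """Return the AST/ALT ratio from serum activities in IU/L."""
    if alt_iu_l <= 0:
        raise ValueError("ALT must be a positive activity in IU/L")
    return ast_iu_l / alt_iu_l


# Hypothetical activities chosen so the ratio matches the quoted 1.45 mean.
print(round(ast_alt_ratio(87.0, 60.0), 2))  # 1.45
```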
Alkaline phosphatase (ALP)
Alkaline phosphatase (ALP) is an enzyme in the cells lining the biliary ducts of the liver. It can also be found on the mucosal epithelium of the small intestine, the proximal convoluted tubule of the kidneys, bone, liver, and placenta. It plays an important role in lipid transport in the small intestine and in calcification of bone. About 50% of serum ALP activity is contributed by bone. Acute viral hepatitis usually shows normal or increased ALP. For example, hepatitis A has increased ALP due to cholestasis (impaired bile formation or bile flow obstruction) and may feature prolonged itching. Other causes include infiltrative liver diseases, granulomatous liver disease, abscess, amyloidosis of the liver, and peripheral arterial disease. Mild elevation of ALP can be seen in liver cirrhosis, hepatitis, and congestive cardiac failure. Transient hyperphosphatasaemia is a benign condition in infants that can return to normal levels within 4 months. In contrast, low levels of ALP are found in hypothyroidism, pernicious anemia, zinc deficiency, and hypophosphatasia.
ALP activity is significantly increased in the third trimester of pregnancy. This is due to increased synthesis from the placenta as well as increased synthesis in the liver induced by large amounts of estrogens. Levels in the third trimester can be as much as 2-fold greater than in non-pregnant women. As a result, ALP is not a reliable marker of hepatic function in pregnant women. In contrast to ALP, levels of ALT, AST, GGT, and lactate dehydrogenase are only slightly changed or largely unchanged during pregnancy. Bilirubin levels are significantly decreased in pregnancy.
In pregnancy conditions such as hyperemesis gravidarum, ALP levels can reach 215 IU/L; in pre-eclampsia, ALP can reach 14 IU/L; and in HELLP syndrome, ALP levels can reach 15 IU/L.
Gamma-glutamyltransferase (GGT)
GGT is a microsomal enzyme found in hepatocytes, biliary epithelial cells, renal tubules, the pancreas, and the intestines. It helps in glutathione metabolism by transporting peptides across the cell membrane. Much like ALP, GGT measurements are usually elevated if cholestasis is present. In acute viral hepatitis, GGT levels can peak in the second and third week of illness and remain elevated at 6 weeks of illness. GGT is also elevated in 30% of patients with hepatitis C. GGT can increase by 10 times in alcoholism, and by 2 to 3 times in 50% of patients with non-alcoholic liver disease. When GGT levels are elevated, triglyceride levels are also elevated. With insulin treatment, the GGT level can fall. Other causes of elevated GGT are diabetes mellitus, acute pancreatitis, myocardial infarction, anorexia nervosa, Guillain–Barré syndrome, hyperthyroidism, obesity, and myotonic dystrophy.
In pregnancy, GGT activity is reduced in the second and third trimesters. In hyperemesis gravidarum the GGT level can reach 45 IU/L, 17 IU/L in pre-eclampsia, and 35 IU/L in HELLP syndrome.
Albumin
Albumin is a protein made specifically by the liver, and can be measured cheaply and easily. It is the main constituent of total protein (the remaining constituents are primarily globulins). Albumin levels are decreased in chronic liver disease, such as cirrhosis. It is also decreased in nephrotic syndrome, where it is lost through the urine. The consequence of low albumin can be edema since the intravascular oncotic pressure becomes lower than the extravascular space. An alternative to albumin measurement is prealbumin, which is better at detecting acute changes (half-life of albumin and prealbumin is about 2 weeks and about 2 days, respectively).
Other tests
Other tests are requested alongside LFT to rule out specific causes.
5' Nucleotidase
5' Nucleotidase (5NT) is a glycoprotein found throughout the body in the cytoplasmic membrane, catalyzing the conversion of nucleoside-5'-phosphates to inorganic phosphate. Its level is raised in conditions such as obstructive jaundice, parenchymal liver disease, liver metastases, and bone disease.
Serum 5NT levels are higher during the second and third trimesters of pregnancy.
Ceruloplasmin
Ceruloplasmin is an acute-phase protein synthesized in the liver. It is the carrier of the copper ion. Its level is increased in infections, rheumatoid arthritis, pregnancy, non-Wilson liver disease, and obstructive jaundice. In Wilson disease, the ceruloplasmin level is depressed, which leads to copper accumulation in body tissues.
Alpha-fetoprotein
Alpha-fetoprotein (AFP) is significantly expressed in the foetal liver. However, the mechanism that leads to the suppression of AFP synthesis in adults is not fully known. Exposure of the liver to cancer-causing agents and arrest of liver maturation in childhood can lead to a rise in AFP. AFP can reach 400–500 μg/L in hepatocellular carcinoma. An AFP concentration of more than 400 μg/L is associated with greater tumour size, involvement of both lobes of the liver, portal vein invasion, and a lower median survival rate.
Coagulation test
The liver is responsible for the production of the vast majority of coagulation factors. In patients with liver disease, the international normalized ratio (INR) can be used as a marker of liver synthetic function, as it includes factor VII, which has the shortest half-life (2–6 hours) of all coagulation factors measured in the INR. An elevated INR in patients with liver disease, however, does not necessarily mean the patient has a tendency to bleed, as it only measures procoagulants and not anticoagulants. In liver disease the synthesis of both is decreased, and some patients are even found to be hypercoagulable (increased tendency to clot) despite an elevated INR. In liver patients, coagulation is better assessed by more modern tests such as thromboelastography (TEG) or thromboelastometry (ROTEM).
Prothrombin time (PT) and its derived measures of prothrombin ratio (PR) and INR are measures of the extrinsic pathway of coagulation. This test is also called "ProTime INR" and "INR PT". They are used to determine the clotting tendency of blood, in the measure of warfarin dosage, liver damage, and vitamin K status.
Serum glucose
The serum glucose test, abbreviated as "BG" or "Glu", measures the liver's ability to produce glucose (gluconeogenesis); it is usually the last function to be lost in the setting of fulminant liver failure.
Lactate dehydrogenase
Lactate dehydrogenase (LDH) is found in many body tissues, including the liver. Elevated levels of LDH may indicate liver damage. LDH isotype-1 (or cardiac) is used for estimating damage to cardiac tissue, although troponin and creatine kinase tests are preferred.
See also
Reference ranges for blood tests
Elevated transaminases
Liver disorders
Child–Pugh score
References
External links
Liver Function Tests at Lab Tests Online
Overview at Mayo Clinic
Abnormal Liver Function Tests
Overview of liver enzymes
Abnormal Liver Tests Curriculum at AASLD
Further workup of abnormal liver tests: "etiology panel"
Gastroenterology
Hepatology
Blood tests | Liver function tests | [
"Chemistry"
] | 3,680 | [
"Blood tests",
"Chemical pathology",
"Liver function tests"
] |
212,101 | https://en.wikipedia.org/wiki/Diffuse%20sky%20radiation | Diffuse sky radiation is solar radiation reaching the Earth's surface after having been scattered from the direct solar beam by molecules or particulates in the atmosphere. It is also called sky radiation, and it is the process that determines the colors of the sky. Approximately 23% of the direct incident solar radiation is removed from the direct beam by scattering in the atmosphere; of this amount, about two-thirds ultimately reaches the Earth as diffuse skylight.
The dominant radiative scattering processes in the atmosphere are Rayleigh scattering and Mie scattering; they are elastic, meaning that a photon of light can be deviated from its path without being absorbed and without changing wavelength.
Under an overcast sky, there is no direct sunlight, and all light results from diffused skylight radiation.
Analyses of the aftermath of the eruption of the Philippine volcano Mount Pinatubo (in June 1991) and other studies indicate that diffuse skylight, owing to its intrinsic structure and behavior, can illuminate under-canopy leaves, permitting more efficient total whole-plant photosynthesis than would otherwise be the case. This is in stark contrast to the effect of totally clear skies with direct sunlight, which casts shadows onto understory leaves and thereby limits plant photosynthesis to the top canopy layer (see below).
Color
Earth's atmosphere scatters short-wavelength light more efficiently than light of longer wavelengths. Because its wavelengths are shorter, blue light is more strongly scattered than longer-wavelength red or green light. Hence, when looking at the sky away from the direct incident sunlight, the human eye perceives the sky to be blue. The color perceived is similar to that of a monochromatic blue mixed with white light, that is, an unsaturated blue light. The explanation of the blue color by Lord Rayleigh in 1871 is a famous example of applying dimensional analysis to solving problems in physics.
Scattering and absorption are major causes of the attenuation of sunlight radiation by the atmosphere. Scattering varies as a function of the ratio of particle diameters (of particulates in the atmosphere) to the wavelength of the incident radiation. When this ratio is less than about one-tenth, Rayleigh scattering occurs. (In this case, the scattering coefficient varies inversely with the fourth power of the wavelength. At larger ratios scattering varies in a more complex fashion, as described for spherical particles by the Mie theory.) The laws of geometric optics begin to apply at higher ratios.
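The inverse-fourth-power wavelength dependence mentioned above can be illustrated with a short calculation; the representative wavelengths of 450 nm for blue and 650 nm for red light are assumptions chosen purely for illustration:

```python
# Relative Rayleigh scattering strength for two wavelengths, using the
# lambda^-4 dependence described above. The specific wavelengths chosen
# (450 nm for blue, 650 nm for red) are illustrative assumptions.
blue_nm = 450.0
red_nm = 650.0

relative_strength = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered about {relative_strength:.1f}x more strongly than red")
```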
Daily at any global venue experiencing sunrise or sunset, most of the solar beam of visible sunlight arrives nearly tangentially to Earth's surface. Here, the path of sunlight through the atmosphere is elongated such that much of the blue or green light is scattered away from the line of perceivable visible light. This phenomenon leaves the Sun's rays, and the clouds they illuminate, abundantly orange-to-red in colors, which one sees when looking at a sunset or sunrise.
For the example of the Sun at zenith, in broad daylight, the sky is blue due to Rayleigh scattering, which also involves the diatomic gases N2 and O2. Near sunset and especially during twilight, absorption by ozone (O3) significantly contributes to maintaining blue color in the evening sky.
Under an overcast sky
There is essentially no direct sunlight under an overcast sky, so all light is then diffuse sky radiation. The flux of light is not very wavelength-dependent because the cloud droplets are larger than the light's wavelength and scatter all colors approximately equally. The light passes through the translucent clouds in a manner similar to frosted glass. The intensity ranges (roughly) from a sizeable fraction of direct sunlight under relatively thin clouds down to a very small fraction of direct sunlight under the extreme of the thickest storm clouds.
As a part of total radiation
One of the equations for total solar radiation on a tilted surface is:
HT = Hb·Rb + Hd·Rd + (Hb + Hd)·Rr
where Hb is the beam radiation irradiance, Rb is the tilt factor for beam radiation, Hd is the diffuse radiation irradiance, Rd is the tilt factor for diffuse radiation and Rr is the tilt factor for reflected radiation.
Rb is given by:
Rb = [cos(Φ − β)·cos δ·cos h + sin(Φ − β)·sin δ] / [cos Φ·cos δ·cos h + sin Φ·sin δ]
where δ is the solar declination, Φ is the latitude, β is the tilt angle of the surface from the horizontal and h is the solar hour angle.
Rd is given by:
Rd = (1 + cos β) / 2
and Rr by:
Rr = ρ·(1 − cos β) / 2
where ρ is the reflectivity of the surface.
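A minimal sketch of these relations in code, assuming the isotropic-sky forms reconstructed above and an equator-facing tilted surface; all numbers in the example are illustrative, not measured data:

```python
from math import cos, sin, radians

def tilt_factors(decl_deg, lat_deg, tilt_deg, hour_angle_deg, albedo):
    """Tilt factors Rb, Rd, Rr for beam, diffuse and reflected radiation.

    Implements the relations sketched above for an equator-facing surface.
    """
    d, p, b, h = map(radians, (decl_deg, lat_deg, tilt_deg, hour_angle_deg))
    rb = (cos(p - b) * cos(d) * cos(h) + sin(p - b) * sin(d)) / \
         (cos(p) * cos(d) * cos(h) + sin(p) * sin(d))
    rd = (1 + cos(b)) / 2
    rr = albedo * (1 - cos(b)) / 2
    return rb, rd, rr

def total_irradiance(hb, hd, decl_deg, lat_deg, tilt_deg, hour_angle_deg, albedo=0.2):
    """Total irradiance on the tilted surface: Hb*Rb + Hd*Rd + (Hb + Hd)*Rr."""
    rb, rd, rr = tilt_factors(decl_deg, lat_deg, tilt_deg, hour_angle_deg, albedo)
    return hb * rb + hd * rd + (hb + hd) * rr

# Illustrative values only: 500 W/m^2 beam, 200 W/m^2 diffuse at solar noon.
print(round(total_irradiance(500, 200, decl_deg=10, lat_deg=45, tilt_deg=30,
                             hour_angle_deg=0), 1))
```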
Agriculture and the eruption of Mt. Pinatubo
The eruption of the Philippine volcano Mount Pinatubo in June 1991 ejected a large volume of magma and "17 million metric tons" (17 teragrams) of sulfur dioxide (SO2) into the air, introducing ten times as much total SO2 as the 1991 Kuwaiti fires, mostly during the explosive Plinian/Ultra-Plinian event of June 15, 1991, and creating a global stratospheric SO2 haze layer which persisted for years. This resulted in a drop in the global average temperature. Since volcanic ash falls out of the atmosphere rapidly, the negative agricultural effects of the eruption were largely immediate and localized to a relatively small area close to the eruption, caused by the resulting thick ash cover. Globally, however, despite a several-month 5% drop in overall solar irradiation and a reduction in direct sunlight of 30%, there was no negative impact on global agriculture. Surprisingly, a 3–4 year increase in global agricultural productivity and forestry growth was observed, except in boreal forest regions. The means of discovery was that, initially, a mysterious drop was observed in the rate at which carbon dioxide (CO2) was filling the atmosphere, which is charted in what is known as the "Keeling Curve". This led numerous scientists to assume that the reduction was due to the lowering of Earth's temperature and, with that, a slowdown in plant and soil respiration, indicating a deleterious impact on global agriculture from the volcanic haze layer. However, upon investigation, the reduction in the rate at which carbon dioxide filled the atmosphere did not match up with the hypothesis that plant respiration rates had declined. Instead, the advantageous anomaly was relatively firmly linked to an unprecedented increase in the growth (net primary production) of global plant life, resulting in an increase of the carbon sink effect of global photosynthesis. The mechanism that made the increase in plant growth possible was that the 30% reduction of direct sunlight can also be expressed as an increase, or "enhancement", in the amount of diffuse sunlight.
The diffused skylight effect
This diffuse skylight, owing to its intrinsic nature, can illuminate under-canopy leaves, permitting more efficient total whole-plant photosynthesis than would otherwise be the case, and also increases evaporative cooling from vegetated surfaces. In stark contrast, totally clear skies and the direct sunlight that results from them cast shadows onto understorey leaves, limiting plant photosynthesis to the top canopy layer. This increase in global agriculture from the volcanic haze layer also results naturally from other aerosols that are not emitted by volcanoes, such as "moderately thick smoke loading" pollution, as the same mechanism, the "aerosol direct radiative effect", is behind both.
See also
Atmospheric diffraction
Aerial perspective
Cyanometer
Daylight
Nighttime airglow
Rayleigh scattering
Rayleigh sky model
Sunshine duration
Sunset#Colors
Sunrise#Colors
Tyndall effect
References
Further reading
External links
Dr. C. V. Raman lecture: Why is the sky blue?
Why is the sky blue?
Blue Sky and Rayleigh Scattering
Atmospheric Optics (.pdf), Dr. Craig Bohren
Sun
Light
Visibility
Atmospheric optical phenomena | Diffuse sky radiation | [
"Physics",
"Mathematics"
] | 1,555 | [
"Physical phenomena",
"Earth phenomena",
"Visibility",
"Physical quantities",
"Spectrum (physical sciences)",
"Quantity",
"Electromagnetic spectrum",
"Optical phenomena",
"Waves",
"Light",
"Wikipedia categories named after physical quantities",
"Atmospheric optical phenomena"
] |
212,115 | https://en.wikipedia.org/wiki/Enumeration | An enumeration is a complete, ordered listing of all the items in a collection. The term is commonly used in mathematics and computer science to refer to a listing of all of the elements of a set. The precise requirements for an enumeration (for example, whether the set must be finite, or whether the list is allowed to contain repetitions) depend on the discipline of study and the context of a given problem.
Some sets can be enumerated by means of a natural ordering (such as 1, 2, 3, 4, ... for the set of positive integers), but in other cases it may be necessary to impose a (perhaps arbitrary) ordering. In some contexts, such as enumerative combinatorics, the term enumeration is used more in the sense of counting – with emphasis on determination of the number of elements that a set contains, rather than the production of an explicit listing of those elements.
Combinatorics
In combinatorics, enumeration means counting, i.e., determining the exact number of elements of finite sets, usually grouped into infinite families, such as the family of sets each consisting of all permutations of some finite set. There are flourishing subareas in many branches of mathematics concerned with enumerating in this sense objects of special kinds. For instance, in partition enumeration and graph enumeration the objective is to count partitions or graphs that meet certain conditions.
Set theory
In set theory, the notion of enumeration has a broader sense, and does not require the set being enumerated to be finite.
Listing
When an enumeration is used in an ordered list context, we impose some sort of ordering structure requirement on the index set. While we can make the requirements on the ordering quite lax in order to allow for great generality, the most natural and common prerequisite is that the index set be well-ordered. According to this characterization, an ordered enumeration is defined to be a surjection (an onto relationship) with a well-ordered domain. This definition is natural in the sense that a given well-ordering on the index set provides a unique way to list the next element given a partial enumeration.
Countable vs. uncountable
Unless otherwise specified, an enumeration is done by means of natural numbers. That is, an enumeration of a set S is a bijective function from the natural numbers, or from an initial segment of the natural numbers, onto S.
A set is countable if it can be enumerated, that is, if there exists an enumeration of it. Otherwise, it is uncountable. For example, the set of the real numbers is uncountable.
A set is finite if it can be enumerated by means of a proper initial segment of the natural numbers, in which case its cardinality is the length of that segment. The empty set is finite, as it can be enumerated by means of the empty initial segment of the natural numbers.
The term enumerable set is sometimes used for countable sets. However, it is also often used for computably enumerable sets, which are the countable sets for which an enumeration function can be computed with an algorithm.
To avoid distinguishing between finite and countably infinite sets, it is often useful to use another, equivalent definition: a set is countable if and only if there exists an injective function from it into the natural numbers.
Examples
The natural numbers are enumerable by the function f(x) = x. In this case f is simply the identity function.
The set of integers Z is enumerable by the function f that maps each even natural number x to x/2 and each odd natural number x to −(x + 1)/2.
This f is a bijection, since every natural number corresponds to exactly one integer. The first few values of this enumeration are f(0) = 0, f(1) = −1, f(2) = 1, f(3) = −2, f(4) = 2, and so on.
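A short sketch of this enumeration of the integers (the function name is arbitrary):

```python
def nat_to_int(x: int) -> int:
    """Enumerate the integers by the natural numbers: 0, -1, 1, -2, 2, ..."""
    return x // 2 if x % 2 == 0 else -(x + 1) // 2

print([nat_to_int(x) for x in range(7)])  # [0, -1, 1, -2, 2, -3, 3]
```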
All (non-empty) finite sets are enumerable. Let S be a finite set with n > 0 elements and let K = {1, 2, ..., n}. Select any element s in S and assign f(n) = s. Now set S' = S − {s} (where − denotes set difference). Select any element s' ∈ S' and assign f(n − 1) = s'. Continue this process until all elements of the set have been assigned a natural number. Then f : K → S is an enumeration of S.
The real numbers have no countable enumeration as proved by Cantor's diagonal argument and Cantor's first uncountability proof.
Properties
There exists an enumeration for a set (in this sense) if and only if the set is countable.
If a set is enumerable it will have an uncountable infinity of different enumerations, except in the degenerate cases of the empty set or (depending on the precise definition) sets with one element. However, if one requires enumerations to be injective and allows only a limited form of partiality such that if f(n) is defined then f(m) must be defined for all m < n, then a finite set of N elements has exactly N! enumerations.
An enumeration e of a set S with domain ℕ induces a well-order ≤ on that set, defined by s ≤ t if and only if min e⁻¹(s) ≤ min e⁻¹(t). Although the order may have little to do with the underlying set, it is useful when some order of the set is necessary.
Ordinals
In set theory, there is a more general notion of an enumeration than the characterization requiring the domain of the listing function to be an initial segment of the natural numbers, where the domain of the enumerating function can assume any ordinal. Under this definition, an enumeration of a set S is any surjection from an ordinal α onto S. The more restrictive version of enumeration mentioned before is the special case where α is a finite ordinal or the first limit ordinal ω. This more generalized version extends the aforementioned definition to encompass transfinite listings.
Under this definition, the first uncountable ordinal ω₁ can be enumerated by the identity function on ω₁, so that these two notions do not coincide. More generally, it is a theorem of ZF that any well-ordered set can be enumerated under this characterization, so that it coincides up to relabeling with the generalized listing enumeration. If one also assumes the Axiom of Choice, then all sets can be enumerated, so that it coincides up to relabeling with the most general form of enumerations.
Since set theorists work with infinite sets of arbitrarily large cardinalities, the default definition among this group of mathematicians of an enumeration of a set tends to be any arbitrary α-sequence exactly listing all of its elements. Indeed, in Jech's book, which is a common reference for set theorists, an enumeration is defined to be exactly this. Therefore, in order to avoid ambiguity, one may use the term finitely enumerable or denumerable to denote one of the corresponding types of distinguished countable enumerations.
Comparison of cardinalities
Formally, the most inclusive definition of an enumeration of a set S is any surjection from an arbitrary index set I onto S. In this broad context, every set S can be trivially enumerated by the identity function from S onto itself. If one does not assume the axiom of choice or one of its variants, S need not have any well-ordering. Even if one does assume the axiom of choice, S need not have any natural well-ordering.
This general definition therefore lends itself to a counting notion where we are interested in "how many" rather than "in what order." In practice, this broad meaning of enumeration is often used to compare the relative sizes or cardinalities of different sets. If one works in Zermelo–Fraenkel set theory without the axiom of choice, one may want to impose the additional restriction that an enumeration must also be injective (without repetition) since in this theory, the existence of a surjection from I onto S need not imply the existence of an injection from S into I.
Computability and complexity theory
In computability theory one often considers countable enumerations with the added requirement that the mapping from ℕ (the set of all natural numbers) to the enumerated set must be computable. The set being enumerated is then called recursively enumerable (or computably enumerable in more contemporary language), referring to the use of recursion theory in formalizations of what it means for the map to be computable.
In this sense, a subset of the natural numbers is computably enumerable if it is the range of a computable function. In this context, enumerable may be used to mean computably enumerable. However, these definitions characterize distinct classes since there are uncountably many subsets of the natural numbers that can be enumerated by an arbitrary function with domain ω and only countably many computable functions. A specific example of a set with an enumeration but not a computable enumeration is the complement of the halting set.
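As a sketch of this definition, the generator below lists the range of a total computable function f as f(0), f(1), f(2), ...; the example function is arbitrary, and (as noted above) the listing need not be in increasing order:

```python
def enumerate_range(f, limit=None):
    """Yield f(0), f(1), f(2), ... — the range of a (total) computable function.

    With limit=None this runs forever, mirroring an enumeration of an
    infinite computably enumerable set.
    """
    n = 0
    while limit is None or n < limit:
        yield f(n)
        n += 1

# Example: the perfect squares, a computably enumerable subset of the naturals.
print(list(enumerate_range(lambda n: n * n, limit=6)))  # [0, 1, 4, 9, 16, 25]
```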
Furthermore, this characterization illustrates a place where the ordering of the listing is important. There exists a computable enumeration of the halting set, but not one that lists the elements in an increasing ordering. If there were one, then the halting set would be decidable, which is provably false. In general, being recursively enumerable is a weaker condition than being a decidable set.
The notion of enumeration has also been studied from the point of view of computational complexity theory for various tasks in the context of enumeration algorithms.
See also
Ordinal number
Enumerative definition
Sequence
References
External links
Enumerative combinatorics
Mathematical logic
Ordering | Enumeration | [
"Mathematics"
] | 2,053 | [
"Mathematical logic",
"Enumerative combinatorics",
"Combinatorics"
] |
212,124 | https://en.wikipedia.org/wiki/Mie%20scattering | In electromagnetism, the Mie solution to Maxwell's equations (also known as the Lorenz–Mie solution, the Lorenz–Mie–Debye solution or Mie scattering) describes the scattering of an electromagnetic plane wave by a homogeneous sphere. The solution takes the form of an infinite series of spherical multipole partial waves. It is named after German physicist Gustav Mie.
The term Mie solution is also used for solutions of Maxwell's equations for scattering by stratified spheres or by infinite cylinders, or other geometries where one can write separate equations for the radial and angular dependence of solutions. The term Mie theory is sometimes used for this collection of solutions and methods; it does not refer to an independent physical theory or law. More broadly, the "Mie scattering" formulas are most useful in situations where the size of the scattering particles is comparable to the wavelength of the light, rather than much smaller or much larger.
Mie scattering (sometimes referred to as non-molecular scattering or aerosol particle scattering) takes place in the lower portion of the atmosphere, where many essentially spherical particles with diameters approximately equal to the wavelength of the incident ray may be present. Mie scattering theory has no upper size limitation, and converges to the limit of geometric optics for large particles.
Introduction
A modern formulation of the Mie solution to the scattering problem on a sphere can be found in many books, e.g., J. A. Stratton's Electromagnetic Theory. In this formulation, the incident plane wave, as well as the scattered field, is expanded into radiating vector spherical harmonics, while the internal field is expanded into regular vector spherical harmonics. By enforcing the boundary condition on the spherical surface, the expansion coefficients of the scattered field can be computed.
For particles much larger or much smaller than the wavelength of the scattered light there are simple and accurate approximations that suffice to describe the behavior of the system. But for objects whose size is within a few orders of magnitude of the wavelength, e.g., water droplets in the atmosphere, latex particles in paint, droplets in emulsions, including milk, and biological cells and cellular components, a more detailed approach is necessary.
The Mie solution is named after its developer, German physicist Gustav Mie. Danish physicist Ludvig Lorenz and others independently developed the theory of electromagnetic plane wave scattering by a dielectric sphere.
The formalism allows the calculation of the electric and magnetic fields inside and outside a spherical object and is generally used to calculate either how much light is scattered (the total optical cross section), or where it goes (the form factor). The notable features of these results are the Mie resonances, sizes that scatter particularly strongly or weakly. This is in contrast to Rayleigh scattering for small particles and Rayleigh–Gans–Debye scattering (after Lord Rayleigh, Richard Gans and Peter Debye) for large particles. The existence of resonances and other features of Mie scattering makes it a particularly useful formalism when using scattered light to measure particle size.
Approximations
Rayleigh approximation (scattering)
Rayleigh scattering describes the elastic scattering of light by spheres that are much smaller than the wavelength of light. The intensity I of the scattered radiation is given by
I = I0 · ((1 + cos²θ) / (2R²)) · (2π/λ)⁴ · ((n² − 1)/(n² + 2))² · (d/2)⁶
where I0 is the light intensity before the interaction with the particle, R is the distance between the particle and the observer, θ is the scattering angle, λ is the wavelength of light under consideration, n is the refractive index of the particle, and d is the diameter of the particle.
It can be seen from the above equation that Rayleigh scattering is strongly dependent upon the size of the particle and the wavelengths. The intensity of the Rayleigh scattered radiation increases rapidly as the ratio of particle size to wavelength increases. Furthermore, the intensity of Rayleigh scattered radiation is identical in the forward and reverse directions.
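A small sketch of the formula above, assuming consistent SI units; the particle and wavelength values in the example are arbitrary, and the final check confirms that forward and backward intensities coincide, as stated in the preceding paragraph:

```python
from math import cos, pi

def rayleigh_intensity(i0, theta, r, wavelength, n, d):
    """Scattered intensity from one small sphere via the Rayleigh formula above.

    All lengths must share one unit (e.g. metres); theta is in radians.
    """
    polar = (1 + cos(theta) ** 2) / (2 * r ** 2)
    size = (2 * pi / wavelength) ** 4 * ((n ** 2 - 1) / (n ** 2 + 2)) ** 2 * (d / 2) ** 6
    return i0 * polar * size

# Forward (theta = 0) and backward (theta = pi) scattering are equal.
forward = rayleigh_intensity(1.0, 0.0, 1.0, 450e-9, 1.33, 50e-9)
backward = rayleigh_intensity(1.0, pi, 1.0, 450e-9, 1.33, 50e-9)
print(abs(forward - backward) < 1e-30)  # True
```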
The Rayleigh scattering model breaks down when the particle size becomes larger than around 10% of the wavelength of the incident radiation. In the case of particles with dimensions greater than this, Mie's scattering model can be used to find the intensity of the scattered radiation. The intensity of Mie scattered radiation is given by the summation of an infinite series of terms rather than by a simple mathematical expression. It can be shown, however, that scattering in this range of particle sizes differs from Rayleigh scattering in several respects: it is roughly independent of wavelength and it is larger in the forward direction than in the reverse direction. The greater the particle size, the more of the light is scattered in the forward direction.
The blue colour of the sky results from Rayleigh scattering, as the size of the gas particles in the atmosphere is much smaller than the wavelength of visible light. Rayleigh scattering is much greater for blue light than for other colours due to its shorter wavelength. As sunlight passes through the atmosphere, its blue component is Rayleigh scattered strongly by atmospheric gases but the longer wavelength (e.g. red/yellow) components are not. The sunlight arriving directly from the Sun therefore appears to be slightly yellow, while the light scattered through rest of the sky appears blue. During sunrises and sunsets, the effect of Rayleigh scattering on the spectrum of the transmitted light is much greater due to the greater distance the light rays have to travel through the high-density air near the Earth's surface.
In contrast, the water droplets that make up clouds are of a comparable size to the wavelengths in visible light, and the scattering is described by Mie's model rather than that of Rayleigh. Here, all wavelengths of visible light are scattered approximately identically, and the clouds therefore appear to be white or grey.
Rayleigh–Gans approximation
The Rayleigh–Gans approximation is an approximate solution to light scattering when the relative refractive index of the particle is close to that of the environment, and its size is much smaller in comparison to the wavelength of light divided by |n − 1|, where n is the relative refractive index:
|n − 1| ≪ 1 and k·d·|n − 1| ≪ 1,
where k is the wavevector of the light (k = 2π/λ) and d refers to the linear dimension of the particle. The former condition is often referred to as optically soft, and the approximation holds for particles of arbitrary shape.
Anomalous diffraction approximation of van de Hulst
The anomalous diffraction approximation is valid for large (compared to wavelength) and optically soft spheres; soft in the context of optics implies that the refractive index of the particle (m) differs only slightly from the refractive index of the environment, and the particle subjects the wave to only a small phase shift. The extinction efficiency in this approximation is given by
Q = 2 − (4/p)·sin p + (4/p²)·(1 − cos p),
where Q is the efficiency factor of scattering, which is defined as the ratio of the scattering cross-section and geometrical cross-section πa2.
The term p = 4πa(n − 1)/λ has as its physical meaning the phase delay of the wave passing through the centre of the sphere, where a is the sphere radius, n is the ratio of refractive indices inside and outside of the sphere, and λ the wavelength of the light.
This set of equations was first described by van de Hulst in 1957.
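A minimal sketch of the van de Hulst expression, assuming the form Q(p) = 2 − (4/p)·sin p + (4/p²)·(1 − cos p) reconstructed above; the droplet size and refractive index in the example are illustrative:

```python
from math import sin, cos, pi

def extinction_efficiency_adt(radius, n_rel, wavelength):
    """Anomalous-diffraction extinction efficiency Q(p).

    Valid for large, optically soft spheres; p is the central phase delay
    defined above. Radius and wavelength must share the same unit.
    """
    p = 4 * pi * radius * (n_rel - 1) / wavelength
    return 2 - (4 / p) * sin(p) + (4 / p ** 2) * (1 - cos(p))

# Illustrative: a 2 micrometre water droplet (n about 1.33) in green light.
print(round(extinction_efficiency_adt(2e-6, 1.33, 550e-9), 3))
```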
Mathematics
The scattering by a spherical nanoparticle is solved exactly regardless of the particle size. We consider scattering by a plane wave propagating along the z-axis and polarized along the x-axis. The dielectric permittivity and magnetic permeability of the particle are ε1 and μ1, and those of the environment are ε and μ.
In order to solve the scattering problem, we first write the solutions of the vector Helmholtz equation in spherical coordinates, since the fields both inside and outside the particle must satisfy it. Helmholtz equation: ∇²E + k²E = 0 and ∇²H + k²H = 0.
In addition to the Helmholtz equation, the fields must satisfy the transversality conditions ∇·E = 0 and ∇·H = 0.
Vector spherical harmonics possess all the necessary properties, introduced as follows:
— magnetic harmonics (TE),
— electric harmonics (TM),
where
where P_n^m are the associated Legendre polynomials, and z_n is any of the spherical Bessel functions.
Next, we expand the incident plane wave in vector spherical harmonics:
Here the superscript (1) means that the radial parts of the functions are spherical Bessel functions of the first kind.
The expansion coefficients are obtained by taking integrals of the form
In this case, all coefficients with m ≠ 1 are zero, since the integral over the angle in the numerator is zero.
Then the following conditions are imposed:
Interface conditions on the boundary between the sphere and the environment (which allow us to relate the expansion coefficients of the incident, internal, and scattered fields)
The condition that the solution is bounded at the origin (therefore, spherical Bessel functions of the first kind are selected in the radial part of the generating functions for the internal field),
For a scattered field, the asymptotics at infinity corresponds to a diverging spherical wave (in connection with this, for the scattered field in the radial part of the generating functions spherical Hankel functions of the first kind are chosen).
Scattered fields are written in terms of a vector harmonic expansion as
Here the superscript (3) means that the radial parts of the functions are spherical Hankel functions of the first kind (those of the second kind would have superscript (4)).
Internal fields:
Here k is the wave vector outside the particle, k1 is the wave vector in the particle material, and n and n1 are the refractive indices of the medium and of the particle, respectively.
After applying the interface conditions, we obtain expressions for the coefficients:
where
with a being the radius of the sphere.
and represent the spherical functions of Bessel and Hankel of the first kind, respectively.
Scattering and extinction cross-sections
Values commonly calculated using Mie theory include the efficiency coefficients for extinction Qext, scattering Qsca, and absorption Qabs. These efficiency coefficients are ratios of the cross section of the respective process to the particle projected area, πa², where a is the particle radius.
According to the definition of extinction, σext = σsca + σabs and Qext = Qsca + Qabs.
The scattering and extinction coefficients can be represented as the infinite series:
Qsca = (2/x²) Σ (2n + 1)(|an|² + |bn|²) and Qext = (2/x²) Σ (2n + 1) Re(an + bn),
where the sums run over n = 1, 2, 3, … and x = ka is the size parameter.
The contributions in these sums, indexed by n, correspond to the orders of a multipole expansion, with n = 1 being the dipole term, n = 2 being the quadrupole term, and so forth.
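A sketch that evaluates these series for a finite number of supplied coefficients; computing the coefficients an and bn themselves requires spherical Bessel and Hankel functions and is not shown here, and the toy coefficients in the example are made up:

```python
def mie_efficiencies(a_coeffs, b_coeffs, x):
    """Scattering and extinction efficiencies from truncated Mie series.

    a_coeffs and b_coeffs hold the complex coefficients a_n, b_n for
    n = 1, 2, ..., N; x is the size parameter (k times the sphere radius).
    Only the series quoted above are evaluated here.
    """
    q_sca = (2.0 / x**2) * sum((2 * n + 1) * (abs(an) ** 2 + abs(bn) ** 2)
                               for n, (an, bn) in enumerate(zip(a_coeffs, b_coeffs), 1))
    q_ext = (2.0 / x**2) * sum((2 * n + 1) * (an + bn).real
                               for n, (an, bn) in enumerate(zip(a_coeffs, b_coeffs), 1))
    return q_sca, q_ext


# Toy coefficients for a dipole and a quadrupole term (not from a real particle):
print(mie_efficiencies([0.3 + 0.1j, 0.05 + 0.02j], [0.2 + 0.05j, 0.01 + 0.01j], x=1.5))
```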
Application to larger particles
If the size of the particle is equal to several wavelengths in the material, then the scattered fields have some features. Further, the form of the electric field is key, since the magnetic field is obtained from it by taking the curl.
All Mie coefficients depend on the frequency and have maxima when their denominator is close to zero (exact equality to zero is achieved for complex frequencies). In this case, it is possible that the contribution of one specific harmonic dominates the scattering. Then at large distances from the particle, the radiation pattern of the scattered field will be similar to the corresponding radiation pattern of the angular part of the vector spherical harmonics. The harmonics associated with a1 correspond to electric dipoles (if the contribution of this harmonic dominates in the expansion of the electric field, then the field is similar to the electric dipole field), those associated with b1 correspond to the field of a magnetic dipole, a2 and b2 to electric and magnetic quadrupoles, a3 and b3 to octupoles, and so on. The maxima of the scattering coefficients (as well as the accompanying change of their phase) are called multipole resonances, and their zeros can be called anapoles.
The dependence of the scattering cross-section on the wavelength and the contribution of specific resonances strongly depends on the particle material. For example, for a gold particle with a radius of 100 nm, the contribution of the electric dipole to scattering predominates in the optical range, while for a silicon particle there are pronounced magnetic dipole and quadrupole resonances. For metal particles, the peak visible in the scattering cross-section is also called localized plasmon resonance.
In the limit of small particles or long wavelengths, the electric dipole contribution dominates in the scattering cross-section.
Other directions of the incident plane wave
In the case of an x-polarized plane wave incident along the z-axis, the decompositions of all fields contained only harmonics with m = 1, but for an arbitrary incident wave this is not the case. For a rotated plane wave, the expansion coefficients can be obtained, for example, using the fact that during rotation, vector spherical harmonics are transformed through each other by Wigner D-matrices.
In this case, the scattered field will be decomposed by all possible harmonics:
Then the scattering cross section will be expressed in terms of the coefficients as follows:
Kerker effect
The Kerker effect is a phenomenon in scattering directionality which occurs when different multipole responses are present and not negligible.
In 1983, in the work of Kerker, Wang and Giles, the direction of scattering by particles with μ = ε was investigated. In particular, it was shown that for hypothetical particles with μ = ε, backward scattering is completely suppressed. This can be seen as an extension to a spherical surface of Giles' and Wild's results for reflection at a planar surface with equal refractive indices, where reflection and transmission are constant and independent of the angle of incidence.
In addition, scattering cross sections in the forward and backward directions are simply expressed in terms of Mie coefficients:
For certain combinations of coefficients, the expressions above can be minimized.
So, for example, when terms with n > 1 can be neglected (dipole approximation), a1 = b1 corresponds to the minimum in backscattering (the magnetic and electric dipoles are equal in magnitude and in phase; this is also called the first Kerker or zero-backward intensity condition), and a1 = −b1 corresponds to the minimum in forward scattering; this is also called the second Kerker condition (or near-zero forward intensity condition). From the optical theorem, it is shown that exactly zero forward scattering is not possible for a passive particle. For the exact solution of the problem, it is necessary to take into account the contributions of all multipoles. The sum of the electric and magnetic dipoles forms a Huygens source.
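A toy check of the two dipole-approximation conditions, assuming that backscattering scales with |a1 − b1|² and forward scattering with |a1 + b1|² when higher orders are neglected (this scaling is an assumption stated here for the sketch, not quoted from the text):

```python
def dipole_kerker_check(a1, b1, tol=1e-3):
    """Classify a dipole pair against the two Kerker conditions above.

    Assumes backscattering ~ |a1 - b1|^2 and forward scattering ~ |a1 + b1|^2
    in the dipole approximation (higher-order terms neglected).
    """
    backward = abs(a1 - b1) ** 2
    forward = abs(a1 + b1) ** 2
    if backward < tol * forward:
        return "near first Kerker condition: backscattering suppressed"
    if forward < tol * backward:
        return "near second Kerker condition: forward scattering minimised"
    return "neither Kerker condition satisfied"

print(dipole_kerker_check(0.3 + 0.1j, 0.3 + 0.1j))   # first condition
print(dipole_kerker_check(0.3 + 0.1j, -0.3 - 0.1j))  # second condition
```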
For dielectric particles, maximum forward scattering is observed at wavelengths longer than the wavelength of magnetic dipole resonance, and maximum backward scattering at shorter ones.
Later, other varieties of the effect were found: for example, the transverse Kerker effect, with nearly complete simultaneous suppression of both forward and backward scattered fields (side-scattering patterns); the optomechanical Kerker effect; the effect in acoustic scattering; and an analogue found in plants.
There is also a short video with an explanation of the effect.
Dyadic Green's function of a sphere
The dyadic Green's function is a solution to the equation
∇ × ∇ × Ĝ(r, r′) − k² Ĝ(r, r′) = Î δ(r − r′),
where Î is the identity matrix and k takes its value for the particle material inside the sphere and its value for the environment outside. Since all fields are vectorial, the Green's function is a 3 × 3 matrix and is called a dyadic. If a polarization is induced in the system, the fields are written as an integral of the Green's function against that polarization.
In the same way as the fields, the Green's function can be decomposed into vector spherical harmonics.
Dyadic Green's function of free space:
In the presence of a sphere, the Green's function is also decomposed into vector spherical harmonics. Its appearance depends on the environment in which the points and are located.
When both points are outside the sphere (r, r′ > a):
where the coefficients are:
When both points are inside the sphere (r, r′ < a):
Coefficients:
Source is inside the sphere and the observation point is outside (r′ < a, r > a):
coefficients:
Source is outside the sphere and the observation point is inside (r′ > a, r < a):
coefficients:
Computational codes
Mie solutions are implemented in a number of programs written in different computer languages such as Fortran, MATLAB, and Mathematica. These solutions approximate an infinite series, and provide as output the calculation of the scattering phase function, extinction, scattering, and absorption efficiencies, and other parameters such as asymmetry parameters or radiation torque. Current usage of the term "Mie solution" indicates a series approximation to a solution of Maxwell's equations. There are several known objects that allow such a solution: spheres, concentric spheres, infinite cylinders, clusters of spheres and clusters of cylinders. There are also known series solutions for scattering by ellipsoidal particles. A list of codes implementing these specialized solutions is provided in the following:
Codes for electromagnetic scattering by spheres – solutions for a single sphere, coated spheres, multilayer sphere, and cluster of spheres;
Codes for electromagnetic scattering by cylinders – solutions for a single cylinder, multilayer cylinders, and cluster of cylinders.
A generalization that allows a treatment of more generally shaped particles is the T-matrix method, which also relies on a series approximation to solutions of Maxwell's equations.
See also external links for other codes and calculators.
Applications
Mie theory is very important in meteorological optics, where diameter-to-wavelength ratios of the order of unity and larger are characteristic for many problems regarding haze and cloud scattering. A further application is in the characterization of particles by optical scattering measurements. The Mie solution is also important for understanding the appearance of common materials like milk, biological tissue and latex paint.
Atmospheric science
Mie scattering occurs when the diameters of atmospheric particulates are similar to or larger than the wavelengths of the light. Dust, pollen, smoke and microscopic water droplets that form clouds are common causes of Mie scattering. Mie scattering occurs mostly in the lower portions of the atmosphere, where larger particles are more abundant, and dominates in cloudy conditions.
Cancer detection and screening
Mie theory has been used to determine whether scattered light from tissue corresponds to healthy or cancerous cell nuclei using angle-resolved low-coherence interferometry.
Clinical laboratory analysis
Mie theory is a central principle in the application of nephelometric based assays, widely used in medicine to measure various plasma proteins. A wide array of plasma proteins can be detected and quantified by nephelometry.
Magnetic particles
A number of unusual electromagnetic scattering effects occur for magnetic spheres. When the relative permittivity equals the permeability, the back-scatter gain is zero. Also, the scattered radiation is polarized in the same sense as the incident radiation. In the small-particle (or long-wavelength) limit, conditions can occur for zero forward scatter, for complete polarization of scattered radiation in other directions, and for asymmetry of forward scatter to backscatter. The special case in the small-particle limit provides interesting special instances of complete polarization and forward-scatter-to-backscatter asymmetry.
Metamaterial
Mie theory has been used to design metamaterials. They usually consist of three-dimensional composites of metal or non-metallic inclusions periodically or randomly embedded in a low-permittivity matrix. In such a scheme, the negative constitutive parameters are designed to appear around the Mie resonances of the inclusions: the negative effective permittivity is designed around the resonance of the Mie electric dipole scattering coefficient, whereas the negative effective permeability is designed around the resonance of the Mie magnetic dipole scattering coefficient, and a doubly negative material (DNG) is designed around the overlap of the resonances of the Mie electric and magnetic dipole scattering coefficients. The particles usually have one of the following combinations:
one set of magnetodielectric particles with values of relative permittivity and permeability much greater than one and close to each other;
two different dielectric particles with equal permittivity but different size;
two different dielectric particles with equal size but different permittivity.
In theory, the particles analyzed by Mie theory are commonly spherical but, in practice, particles are usually fabricated as cubes or cylinders for ease of fabrication. To meet the criteria of homogenization, which may be stated in the form that the lattice constant is much smaller than the operating wavelength, the relative permittivity of the dielectric particles should be much greater than 1, e.g. to achieve negative effective permittivity (permeability).
Particle sizing
Mie theory is often applied in laser diffraction analysis to inspect the particle sizing effect. While early computers in the 1970s were only able to compute diffraction data with the simpler Fraunhofer approximation, Mie theory has been widely used since the 1990s and is officially recommended for particles below 50 micrometers in guideline ISO 13320:2009.
Mie theory has been used in the detection of oil concentration in polluted water.
Mie scattering is the primary method of sizing single sonoluminescing bubbles of air in water and is valid for cavities in materials, as well as particles in materials, as long as the surrounding material is essentially non-absorbing.
Parasitology
It has also been used to study the structure of Plasmodium falciparum, a particularly pathogenic form of malaria.
Extensions
In 1986, P. A. Bobbert and J. Vlieger extended the Mie model to calculate scattering by a sphere in a homogeneous medium placed on a flat surface: the Bobbert–Vlieger (BV) model. Like the Mie model, the extended model can be applied to spheres with a radius close to the wavelength of the incident light. The model has been implemented in C++ source code.
Recent developments are related to scattering by ellipsoid.
Contemporary studies trace back to the well-known research of Rayleigh.
See also
Codes for electromagnetic scattering by spheres
Computational electromagnetics
Light scattering by particles
List of atmospheric radiative transfer codes
Optical properties of water and ice
References
Further reading
External links
SCATTERLIB and scattport.org are collections of light scattering codes with implementations of Mie solutions in Fortran, C++, IDL, Pascal, Mathematica, and Mathcad
JMIE (2D C++ code to calculate the analytical fields around an infinite cylinder, developed by Jeffrey M. McMahon)
ScatLab. Mie scattering software for Windows.
STRATIFY MATLAB code of scattering from multilayered spheres in cases where the source is a point dipole and a plane wave. Description in arXiv:2006.06512
Scattnlay, an open-source C++ Mie solution package with Python and JavaScript wrappers. Provides far-field and near-field simulation results for multilayered spheres.
Online Mie scattering calculator provides simulation of scattering properties (including multipole decomposition) and near-field maps for bulk, core-shell, and multilayer spheres. Material parameters include all nk-data files from refractiveindex.info website. The source code is part of Scattnlay project.
Online Mie solution calculator is available, with documentation in German and English.
Online Mie scattering calculator produces beautiful graphs over a range of parameters.
phpMie Online Mie scattering calculator written on PHP.
Mie resonance mediated light diffusion and random lasing.
Mie solution for spherical particles.
PyMieScatt, a Mie solution package written in Python.
pyMieForAll, an open-source C++ Mie solution package with Python wrapper.
Radio frequency propagation
Scattering, absorption and radiative transfer (optics)
Visibility | Mie scattering | [
"Physics",
"Chemistry",
"Mathematics"
] | 4,727 | [
"Visibility",
"Physical phenomena",
" absorption and radiative transfer (optics)",
"Physical quantities",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Quantity",
"Electromagnetic spectrum",
"Waves",
"Scattering",
"Wikipedia categories named after physical quantities"
] |
212,141 | https://en.wikipedia.org/wiki/Power%20station | A power station, also referred to as a power plant and sometimes generating station or generating plant, is an industrial facility for the generation of electric power. Power stations are generally connected to an electrical grid.
Many power stations contain one or more generators, rotating machines that convert mechanical power into three-phase electric power. The relative motion between a magnetic field and a conductor creates an electric current.
The energy source harnessed to turn the generator varies widely. Most power stations in the world burn fossil fuels such as coal, oil, and natural gas to generate electricity. Low-carbon power sources include nuclear power, and use of renewables such as solar, wind, geothermal, and hydroelectric.
History
In early 1871 Belgian inventor Zénobe Gramme invented a generator powerful enough to produce power on a commercial scale for industry.
In 1878, a hydroelectric power station was designed and built by William, Lord Armstrong at Cragside, England. It used water from lakes on his estate to power Siemens dynamos. The electricity supplied power to lights and heating, produced hot water, and ran an elevator as well as labor-saving devices and farm buildings.
In January 1882 the world's first public coal-fired power station, the Edison Electric Light Station, was built in London, a project of Thomas Edison organized by Edward Johnson. A Babcock & Wilcox boiler powered a steam engine that drove a generator. This supplied electricity to premises in the area that could be reached through the culverts of the viaduct without digging up the road, which was the monopoly of the gas companies. The customers included the City Temple and the Old Bailey. Another important customer was the Telegraph Office of the General Post Office, but this could not be reached through the culverts. Johnson arranged for the supply cable to be run overhead, via Holborn Tavern and Newgate.
In September 1882 in New York, the Pearl Street Station was established by Edison to provide electric lighting in the lower Manhattan Island area. The station ran until destroyed by fire in 1890. The station used reciprocating steam engines to turn direct-current generators. Because of the DC distribution, the service area was small, limited by voltage drop in the feeders. In 1886 George Westinghouse began building an alternating current system that used a transformer to step up voltage for long-distance transmission and then stepped it back down for indoor lighting, a more efficient and less expensive system which is similar to modern systems. The war of the currents eventually resolved in favor of AC distribution and utilization, although some DC systems persisted to the end of the 20th century. DC systems with a service radius of a mile (kilometer) or so were necessarily smaller, less fuel-efficient, and more labor-intensive to operate than much larger central AC generating stations.
AC systems used a wide range of frequencies depending on the type of load; lighting load using higher frequencies, and traction systems and heavy motor load systems preferring lower frequencies. The economics of central station generation improved greatly when unified light and power systems, operating at a common frequency, were developed. The same generating plant that fed large industrial loads during the day, could feed commuter railway systems during rush hour and then serve lighting load in the evening, thus improving the system load factor and reducing the cost of electrical energy overall. Many exceptions existed: generating stations were dedicated to power or light by the choice of frequency, and rotating frequency changers and rotating converters were particularly common to feed electric railway systems from the general lighting and power network.
Throughout the first few decades of the 20th century central stations became larger, using higher steam pressures to provide greater efficiency, and relying on interconnections of multiple generating stations to improve reliability and cost. High-voltage AC transmission allowed hydroelectric power to be conveniently moved from distant waterfalls to city markets. The advent of the steam turbine in central station service, around 1906, allowed great expansion of generating capacity. Generators were no longer limited by the power transmission of belts or the relatively slow speed of reciprocating engines, and could grow to enormous sizes. For example, Sebastian Ziani de Ferranti planned what would have been the largest reciprocating steam engine ever built for a proposed new central station, but scrapped the plans when turbines became available in the necessary size. Building power systems out of central stations required combinations of engineering skill and financial acumen in equal measure. Pioneers of central station generation include George Westinghouse and Samuel Insull in the United States, Ferranti and Charles Hesterman Merz in UK, and many others.
Thermal power stations
In thermal power stations, mechanical power is produced by a heat engine that transforms thermal energy, often from combustion of a fuel, into rotational energy. Most thermal power stations produce steam, so they are sometimes called steam power stations. Not all thermal energy can be transformed into mechanical power, according to the second law of thermodynamics; therefore, there is always heat lost to the environment. If this loss is employed as useful heat, for industrial processes or district heating, the power plant is referred to as a cogeneration power plant or CHP (combined heat-and-power) plant. In countries where district heating is common, there are dedicated heat plants called heat-only boiler stations. An important class of power stations in the Middle East uses by-product heat for the desalination of water.
The efficiency of a thermal power cycle is limited by the maximum working fluid temperature produced. The efficiency is not directly a function of the fuel used. For the same steam conditions, coal-, nuclear- and gas power plants all have the same theoretical efficiency. Overall, if a system is on constantly (base load) it will be more efficient than one that is used intermittently (peak load). Steam turbines generally operate at higher efficiency when operated at full capacity.
Besides use of reject heat for process or district heating, one way to improve overall efficiency of a power plant is to combine two different thermodynamic cycles in a combined cycle plant. Most commonly, exhaust gases from a gas turbine are used to generate steam for a boiler and a steam turbine. The combination of a "top" cycle and a "bottom" cycle produces higher overall efficiency than either cycle can attain alone.
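As a rough illustration of the two points above, the sketch below computes the Carnot bound set by the working-fluid temperatures and the gain from stacking a "top" gas cycle on a "bottom" steam cycle. All temperatures and component efficiencies are assumed, illustrative values, and the simple combination formula assumes the bottoming cycle is driven entirely by the topping cycle's reject heat.

# Illustrative thermal-efficiency sketch (all numbers assumed).
t_hot = 838.0     # steam temperature, K (about 565 degC, assumed)
t_cold = 300.0    # condenser/ambient temperature, K (assumed)
carnot_limit = 1 - t_cold / t_hot          # upper bound for any heat engine between these temperatures

eta_gas = 0.38    # gas-turbine (top cycle) efficiency (assumed)
eta_steam = 0.35  # steam-turbine (bottom cycle) efficiency (assumed)
# The bottom cycle runs on the top cycle's reject heat, so the efficiencies do not simply add:
eta_combined = eta_gas + (1 - eta_gas) * eta_steam

print(f"Carnot limit ~ {carnot_limit:.0%}, combined cycle ~ {eta_combined:.0%}")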
In 2018, Inter RAO UES and State Grid planned to build an 8-GW thermal power plant, which would be the largest coal-fired power plant construction project in Russia.
Classification
By heat source
Fossil-fuel power stations may also use a steam turbine generator or in the case of natural gas-fired power plants may use a combustion turbine. A coal-fired power station produces heat by burning coal in a steam boiler. The steam drives a steam turbine and generator that then produces electricity. The waste products of combustion include ash, sulfur dioxide, nitrogen oxides, and carbon dioxide. Some of the gases can be removed from the waste stream to reduce pollution.
Nuclear power plants use the heat generated in a nuclear reactor's core (by the fission process) to create steam which then operates a steam turbine and generator. About 20 percent of electric generation in the US is produced by nuclear power plants.
Geothermal power plants use steam extracted from hot underground rocks. These rocks are heated by the decay of radioactive material in the Earth's core.
Biomass-fuelled power plants may be fuelled by waste from sugar cane, municipal solid waste, landfill methane, or other forms of biomass.
In integrated steel mills, blast furnace exhaust gas is a low-cost, although low-energy-density, fuel.
Waste heat from industrial processes is occasionally concentrated enough to use for power generation, usually in a steam boiler and turbine.
Solar thermal electric plants use sunlight to boil water and produce steam which turns the generator.
Hydrogen power plants can use green hydrogen from electrolysis to help balance supply and demand from Variable renewable energy sources.
By prime mover
A prime mover is a machine that converts energy of various forms into energy of motion.
Steam turbine plants use the dynamic pressure generated by expanding steam to turn the blades of a turbine. Almost all large non-hydro plants use this system. About 90 percent of all electric power produced in the world is through use of steam turbines.
Gas turbine plants use the dynamic pressure from flowing gases (air and combustion products) to directly operate the turbine. Natural-gas fuelled (and oil fueled) combustion turbine plants can start rapidly and so are used to supply "peak" energy during periods of high demand, though at higher cost than base-loaded plants. These may be comparatively small units, and sometimes completely unmanned, being remotely operated. This type was pioneered by the UK, Princetown being the world's first, commissioned in 1959.
Combined cycle plants have both a gas turbine fired by natural gas, and a steam boiler and steam turbine which use the hot exhaust gas from the gas turbine to produce electricity. This greatly increases the overall efficiency of the plant, and many new baseload power plants are combined cycle plants fired by natural gas.
Internal combustion reciprocating engines are used to provide power for isolated communities and are frequently used for small cogeneration plants. Hospitals, office buildings, industrial plants, and other critical facilities also use them to provide backup power in case of a power outage. These are usually fuelled by diesel oil, heavy oil, natural gas, and landfill gas.
Microturbines, Stirling engine and internal combustion reciprocating engines are low-cost solutions for using opportunity fuels, such as landfill gas, digester gas from water treatment plants and waste gas from oil production.
By duty
Power plants that can be dispatched (scheduled) to provide energy to a system include:
Base load power plants run nearly continually to provide that component of system load that does not vary during a day or week. Baseload plants can be highly optimized for low fuel cost, but may not start or stop quickly during changes in system load. Examples of base-load plants would include large modern coal-fired and nuclear generating stations, or hydro plants with a predictable supply of water.
Peaking power plants meet the daily peak load, which may only be for one or two hours each day. While their incremental operating cost is always higher than base load plants, they are required to ensure security of the system during load peaks. Peaking plants include simple cycle gas turbines and reciprocating internal combustion engines, which can be started up rapidly when system peaks are predicted. Hydroelectric plants may also be designed for peaking use.
Load following power plants can economically follow the variations in the daily and weekly load, at lower cost than peaking plants and with more flexibility than baseload plants.
Non-dispatchable plants include such sources as wind and solar energy; while their long-term contribution to system energy supply is predictable, on a short-term (daily or hourly) base their energy must be used as available since generation cannot be deferred. Contractual arrangements ("take or pay") with independent power producers or system interconnections to other networks may be effectively non-dispatchable.
Cooling towers
All thermal power plants produce waste heat energy as a byproduct of the useful electrical energy produced. The amount of waste heat energy equals or exceeds the amount of energy converted into useful electricity. Gas-fired power plants can achieve as much as 65% conversion efficiency, while coal and oil plants achieve around 30–49%. The waste heat produces a temperature rise in the atmosphere, which is small compared to that produced by greenhouse-gas emissions from the same power plant. Natural draft wet cooling towers at many nuclear power plants and large fossil-fuel-fired power plants use large hyperboloid chimney-like structures (as seen in the image at the right) that release the waste heat to the ambient atmosphere by the evaporation of water.
However, the mechanical induced-draft or forced-draft wet cooling towers in many large thermal power plants, nuclear power plants, fossil-fired power plants, petroleum refineries, petrochemical plants, geothermal, biomass and waste-to-energy plants use fans to provide air movement upward through down coming water and are not hyperboloid chimney-like structures. The induced or forced-draft cooling towers are typically rectangular, box-like structures filled with a material that enhances the mixing of the upflowing air and the down-flowing water.
In areas with restricted water use, a dry cooling tower or directly air-cooled radiators may be necessary, since the cost or environmental consequences of obtaining make-up water for evaporative cooling would be prohibitive. These coolers have lower efficiency and higher energy consumption to drive fans, compared to a typical wet, evaporative cooling tower.
Air-cooled condenser (ACC)
Power plants can use an air-cooled condenser, traditionally in areas with a limited or expensive water supply. Air-cooled condensers serve the same purpose as a cooling tower (heat dissipation) without using water. They consume additional auxiliary power and thus may have a higher carbon footprint compared to a traditional cooling tower.
Once-through cooling systems
Electric companies often prefer to use cooling water from the ocean or a lake, river, or cooling pond instead of a cooling tower. This single pass or once-through cooling system can save the cost of a cooling tower and may have lower energy costs for pumping cooling water through the plant's heat exchangers. However, the waste heat can cause thermal pollution as the water is discharged. Power plants using natural bodies of water for cooling are designed with mechanisms such as fish screens, to limit intake of organisms into the cooling machinery. These screens are only partially effective and as a result billions of fish and other aquatic organisms are killed by power plants each year. For example, the cooling system at the Indian Point Energy Center in New York kills over a billion fish eggs and larvae annually. A further environmental impact is that aquatic organisms which adapt to the warmer discharge water may be injured if the plant shuts down in cold weather.
Water consumption by power stations is a developing issue.
In recent years, recycled wastewater, or grey water, has been used in cooling towers. The Calpine Riverside and the Calpine Fox power stations in Wisconsin as well as the Calpine Mankato power station in Minnesota are among these facilities.
Power from renewable energy
Power stations can generate electrical energy from renewable energy sources.
Hydroelectric power station
In a hydroelectric power station water flows through turbines using hydropower to generate hydroelectricity. Power is captured from the gravitational force of water falling through penstocks to water turbines connected to generators. The amount of power available is a combination of height and water flow. A wide range of dams may be built to raise the water level and create a lake for storing water.
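The relation between head and flow can be made concrete with the standard hydropower formula P = ρgQHη. The following sketch uses assumed, illustrative figures for flow, head, and efficiency, not data from any particular plant.

# Illustrative hydropower calculation (figures assumed).
rho = 1000.0      # water density, kg/m^3
g = 9.81          # gravitational acceleration, m/s^2
Q = 500.0         # volumetric flow rate, m^3/s (assumed)
H = 100.0         # net head, m (assumed)
eta = 0.90        # combined turbine/generator efficiency (assumed)

P_watts = rho * g * Q * H * eta
print(f"{P_watts / 1e6:.0f} MW")   # about 441 MW for these assumed figures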
Hydropower is produced in 150 countries, with the Asia-Pacific region generating 32 percent of global hydropower in 2010. China is the largest hydroelectricity producer, with 721 terawatt-hours of production in 2010, representing around 17 percent of domestic electricity use.
Solar
Solar energy can be turned into electricity either directly in solar cells, or in a concentrating solar power plant by focusing the light to run a heat engine.
A solar photovoltaic power plant converts sunlight into direct current electricity using the photoelectric effect. Inverters change the direct current into alternating current for connection to the electrical grid. This type of plant does not use rotating machines for energy conversion.
Solar thermal power plants use either parabolic troughs or heliostats to direct sunlight onto a pipe containing a heat transfer fluid, such as oil. The heated oil is then used to boil water into steam, which turns a turbine that drives an electrical generator. The central tower type of solar thermal power plant uses hundreds or thousands of mirrors, depending on size, to direct sunlight onto a receiver on top of a tower. The heat is used to produce steam to turn turbines that drive electrical generators.
Wind
Wind turbines can be used to generate electricity in areas with strong, steady winds, sometimes offshore. Many different designs have been used in the past, but almost all modern turbines being produced today use a three-bladed, upwind design. Grid-connected wind turbines now being built are much larger than the units installed during the 1970s. They thus produce power more cheaply and reliably than earlier models. With larger turbines (on the order of one megawatt), the blades move more slowly than older, smaller, units, which makes them less visually distracting and safer for birds.
Marine
Marine energy or marine power (also sometimes referred to as ocean energy or ocean power) refers to the energy carried by ocean waves, tides, salinity, and ocean temperature differences. The movement of water in the world's oceans creates a vast store of kinetic energy, or energy in motion. This energy can be harnessed to generate electricity to power homes, transport and industries.
The term marine energy encompasses both wave power—power from surface waves, and tidal power—obtained from the kinetic energy of large bodies of moving water. Offshore wind power is not a form of marine energy, as wind power is derived from the wind, even if the wind turbines are placed over water.
The oceans have a tremendous amount of energy and are close to many if not most concentrated populations. Ocean energy has the potential of providing a substantial amount of new renewable energy around the world.
Osmosis
One method of harnessing salinity gradient energy is called pressure-retarded osmosis. In this method, seawater is pumped into a pressure chamber that is at a pressure lower than the difference between the pressures of saline water and fresh water. Freshwater is also pumped into the pressure chamber through a membrane, which increases both the volume and pressure of the chamber. As the pressure differences are compensated, a turbine is spun, creating energy. This method is being specifically studied by the Norwegian utility Statkraft, which has calculated that up to 25 TWh/yr would be available from this process in Norway. Statkraft has built the world's first prototype osmotic power plant on the Oslo fjord, which was opened on 24 November 2009. In January 2014, however, Statkraft announced that it would not continue this pilot.
Biomass
Biomass energy can be produced from combustion of waste green material to heat water into steam and drive a steam turbine. Bioenergy can also be processed through a range of temperatures and pressures in gasification, pyrolysis or torrefaction reactions. Depending on the desired end product, these reactions create more energy-dense products (syngas, wood pellets, biocoal) that can then be fed into an accompanying engine to produce electricity at a much lower emission rate when compared with open burning.
Storage power stations
It is possible to store energy and produce electrical power at a later time as in pumped-storage hydroelectricity, thermal energy storage, flywheel energy storage, battery storage power station and so on.
Pumped storage
Pumped-storage, the world's largest form of storage for excess electricity, is a reversible hydroelectric plant. Such plants are net consumers of energy but provide storage for any source of electricity, effectively smoothing peaks and troughs in electricity supply and demand. Pumped storage plants typically use "spare" electricity during off peak periods to pump water from a lower reservoir to an upper reservoir. Because the pumping takes place "off peak", electricity is less valuable than at peak times. This less valuable "spare" electricity comes from uncontrolled wind power and base load power plants such as coal, nuclear and geothermal, which still produce power at night even though demand is very low. During daytime peak demand, when electricity prices are high, the storage is used for peaking power, where water in the upper reservoir is allowed to flow back to a lower reservoir through a turbine and generator. Unlike coal power stations, which can take more than 12 hours to start up from cold, a hydroelectric generator can be brought into service in a few minutes, ideal to meet a peak load demand. Two substantial pumped storage schemes are in South Africa: the Palmiet Pumped Storage Scheme and another in the Drakensberg, the Ingula Pumped Storage Scheme.
Typical power output
The power generated by a power station is measured in multiples of the watt, typically megawatts (10⁶ watts) or gigawatts (10⁹ watts). Power stations vary greatly in capacity depending on the type of power plant and on historical, geographical and economic factors. The following examples offer a sense of the scale.
Many of the largest operational onshore wind farms are located in China. As of 2022, the Roscoe Wind Farm is the largest onshore wind farm in the world, producing 8000 MW of power, followed by the Zhang Jiakou (3000 MW). As of January 2022, the Hornsea Wind Farm in United Kingdom is the largest offshore wind farm in the world at 1218 MW, followed by Walney Wind Farm in United Kingdom at 1026 MW.
In 2021, the worldwide installed capacity of power plants increased by 347 GW. Solar and wind power plant capacities rose by 80% in one year. The largest photovoltaic (PV) power plants in the world are led by Bhadla Solar Park in India, rated at 2245 MW.
Solar thermal power stations in the U.S. have the following output:
Ivanpah Solar Power Facility is the largest in the country, with an output of 392 MW
Large coal-fired, nuclear, and hydroelectric power stations can generate hundreds of megawatts to multiple gigawatts. Some examples:
The Koeberg Nuclear Power Station in South Africa has a rated capacity of 1860 megawatts.
The coal-fired Ratcliffe-on-Soar Power Station in the UK has a rated capacity of 2 gigawatts.
The Aswan Dam hydro-electric plant in Egypt has a capacity of 2.1 gigawatts.
The Three Gorges Dam hydro-electric plant in China has a capacity of 22.5 gigawatts.
Gas turbine power plants can generate tens to hundreds of megawatts. Some examples:
The Indian Queens simple-cycle, or open cycle gas turbine (OCGT), peaking power station in Cornwall UK, with a single gas turbine is rated 140 megawatts.
The Medway Power Station, a combined-cycle gas turbine (CCGT) power station in Kent, UK, with two gas turbines and one steam turbine, is rated 700 megawatts.
The rated capacity of a power station is nearly the maximum electrical power that the power station can produce.
Some power plants are run at almost exactly their rated capacity all the time, as a non-load-following base load power plant, except at times of scheduled or unscheduled maintenance.
However, many power plants usually produce much less power than their rated capacity.
In some cases a power plant produces much less power than its rated capacity because it uses an intermittent energy source.
Operators try to pull maximum available power from such power plants, because their marginal cost is practically zero, but the available power varies widely—in particular, it may be zero during heavy storms at night.
In some cases operators deliberately produce less power for economic reasons.
The cost of fuel to run a load following power plant may be relatively high, and the cost of fuel to run a peaking power plant is even higher—they have relatively high marginal costs.
Operators keep power plants turned off ("operational reserve") or running at minimum fuel consumption ("spinning reserve") most of the time.
Operators feed more fuel into load following power plants only when the demand rises above what lower-cost plants (i.e., intermittent and base load plants) can produce, and then feed more fuel into peaking power plants only when the demand rises faster than the load following power plants can follow.
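A worked example may help here: the ratio of the energy a plant actually produces to the energy it would produce running continuously at its rated capacity is commonly called the capacity factor. The figures below are assumed, illustrative values, not taken from any plant described above.

# Illustrative capacity-factor calculation (figures assumed).
rated_capacity_mw = 1000.0        # nameplate rating (assumed)
energy_produced_mwh = 5_256_000   # metered output over one year (assumed)

hours_per_year = 8760
capacity_factor = energy_produced_mwh / (rated_capacity_mw * hours_per_year)
print(f"capacity factor = {capacity_factor:.0%}")   # 60% for these assumed figures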
Output metering
Not all of the generated power of a plant is necessarily delivered into a distribution system. Power plants typically also use some of the power themselves, in which case the generation output is classified into gross generation, and net generation.
Gross generation or gross electric output is the total amount of electricity generated by a power plant over a specific period of time. It is measured at the generating terminal and is expressed in kilowatt-hours (kW·h), megawatt-hours (MW·h), gigawatt-hours (GW·h) or, for the largest power plants, terawatt-hours (TW·h). It includes the electricity used in the plant auxiliaries and in the transformers.
Gross generation = net generation + usage within the plant (also known as in-house loads)
Net generation is the amount of electricity generated by a power plant that is transmitted and distributed for consumer use. Net generation is less than the total gross power generation as some power produced is consumed within the plant itself to power auxiliary equipment such as pumps, motors and pollution control devices. Thus
Net generation = gross generation − usage within the plant (in-house loads)
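A minimal worked example of the relationship above, with assumed, illustrative figures rather than data from any real plant:

# Illustrative gross/net generation calculation (figures assumed).
gross_generation_mwh = 1_000_000          # gross electric output over a year (assumed)
in_house_loads_mwh = 60_000               # plant auxiliary consumption (assumed)

net_generation_mwh = gross_generation_mwh - in_house_loads_mwh
auxiliary_fraction = in_house_loads_mwh / gross_generation_mwh

print(net_generation_mwh)                                        # 940000 MWh delivered to the grid
print(f"{auxiliary_fraction:.1%} of gross output used in-house")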
Operations
Operating staff at a power station have several duties. Operators are responsible for the safety of the work crews that frequently do repairs on the mechanical and electrical equipment. They maintain the equipment with periodic inspections and log temperatures, pressures and other important information at regular intervals. Operators are responsible for starting and stopping the generators depending on need. They are able to synchronize and adjust the voltage output of the added generation with the running electrical system, without upsetting the system. They must know the electrical and mechanical systems to troubleshoot problems in the facility and add to the reliability of the facility. Operators must be able to respond to an emergency and know the procedures in place to deal with it.
See also
Cogeneration
Cooling tower
Cost of electricity by source
District heating
Electricity generation
Environmental impact of electricity generation
Flue-gas stack
Fossil-fuel power station
Geothermal electricity
Gravitation water vortex power plant
Grid-tied electrical system mini-power plants
List of largest power stations in the world
List of power stations
List of thermal power station failures
Nuclear power plant
Plant efficiency
Public utility building
Unit commitment problem
Virtual power plant
References
External links
Identification System for Power Stations (KKS)
Database of carbon emissions of power plants worldwide (Carbon Monitoring For Action: CARMA)
Net vs Gross Output Measurement Archived from the original (pdf) on 21 October 2012
Measuring power generation Archived from the original (pdf) on 2 October 2012
Chemical process engineering
Infrastructure | Power station | [
"Chemistry",
"Engineering"
] | 5,345 | [
"Chemical process engineering",
"Chemical engineering",
"Construction",
"Infrastructure"
] |
212,147 | https://en.wikipedia.org/wiki/Scintillation%20counter | A scintillation counter is an instrument for detecting and measuring ionizing radiation by using the excitation effect of incident radiation on a scintillating material, and detecting the resultant light pulses.
It consists of a scintillator which generates photons in response to incident radiation, a sensitive photodetector (usually a photomultiplier tube (PMT), a charge-coupled device (CCD) camera, or a photodiode), which converts the light to an electrical signal and electronics to process this signal.
Scintillation counters are widely used in radiation protection, assay of radioactive materials and physics research because they can be made inexpensively yet with good quantum efficiency, and can measure both the intensity and the energy of incident radiation.
History
The first electronic scintillation counter was invented in 1944 by Sir Samuel Curran whilst he was working on the Manhattan Project at the University of California at Berkeley. There was a requirement to measure the radiation from small quantities of uranium, and his innovation was to use one of the newly available highly sensitive photomultiplier tubes made by the Radio Corporation of America to accurately count the flashes of light from a scintillator subjected to radiation.
This built upon the work of earlier researchers such as Antoine Henri Becquerel, who discovered radioactivity whilst working on the phosphorescence of uranium salts in 1896. Previously, scintillation events had to be laboriously detected by eye, using a spinthariscope (a simple microscope) to observe light flashes in the scintillator. The first commercial liquid scintillation counter was made by Lyle E. Packard and sold to Argonne Cancer Research Hospital at the University of Chicago in 1953. The production model was designed especially for tritium and carbon-14 which were used in metabolic studies in vivo and in vitro.
Operation
When an ionizing particle passes into the scintillator material, atoms are excited along a track. For charged particles the track is the path of the particle itself. For gamma rays (uncharged), their energy is converted to an energetic electron via either the photoelectric effect, Compton scattering or pair production.
The chemistry of atomic de-excitation in the scintillator produces a multitude of low-energy photons, typically near the blue end of the visible spectrum. The quantity is proportional to the energy deposited by the ionizing particle. These can be directed to the photocathode of a photomultiplier tube which emits at most one electron for each arriving photon due to the photoelectric effect. This group of primary electrons is electrostatically accelerated and focused by an electrical potential so that they strike the first dynode of the tube. The impact of a single electron on the dynode releases a number of secondary electrons which are in turn accelerated to strike the second dynode. Each subsequent dynode impact releases further electrons, and so there is a current amplifying effect at each dynode stage. Each stage is at a higher potential than the previous to provide the accelerating field.
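As a rough illustration of the multiplication described above, the following sketch propagates an assumed number of scintillation photons through an assumed photocathode quantum efficiency and dynode chain. All numbers are illustrative choices, not values taken from the article or any particular tube.

# Illustrative photomultiplier-cascade calculation (figures assumed).
n_photons = 10_000          # scintillation photons reaching the photocathode (assumed)
quantum_efficiency = 0.25   # fraction converted to photoelectrons (assumed)
n_dynodes = 10              # dynode stages (typical value, assumed)
secondary_ratio = 4         # secondary electrons per incident electron per stage (assumed)

photoelectrons = n_photons * quantum_efficiency
gain = secondary_ratio ** n_dynodes              # overall multiplication factor
anode_electrons = photoelectrons * gain
anode_charge = anode_electrons * 1.602e-19       # coulombs

print(f"gain = {gain:.2e}, anode charge = {anode_charge:.2e} C")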
The resultant output signal at the anode is a measurable pulse for each group of photons from an original ionizing event in the scintillator that arrived at the photocathode and carries information about the energy of the original incident radiation. When it is fed to a charge amplifier which integrates the energy information, an output pulse is obtained which is proportional to the energy of the particle exciting the scintillator.
The number of such pulses per unit time also gives information about the intensity of the radiation. In some applications individual pulses are not counted, but rather only the average current at the anode is used as a measure of radiation intensity.
The scintillator must be shielded from all ambient light so that external photons do not swamp the ionization events caused by incident radiation. To achieve this a thin opaque foil, such as aluminized mylar, is often used, though it must have a low enough mass to minimize undue attenuation of the incident radiation being measured.
The article on the photomultiplier tube carries a detailed description of the tube's operation.
Detection materials
The scintillator consists of a transparent crystal, usually a phosphor, plastic (usually containing anthracene) or organic liquid (see liquid scintillation counting) that fluoresces when struck by ionizing radiation.
Cesium iodide (CsI) in crystalline form is used as the scintillator for the detection of protons and alpha particles. Sodium iodide (NaI) containing a small amount of thallium is used as a scintillator for the detection of gamma rays, and zinc sulfide (ZnS) is widely used as a detector of alpha particles. Zinc sulfide is the material Rutherford used to perform his scattering experiment. Lithium iodide (LiI) is used in neutron detectors.
Detector efficiencies
Gamma
The quantum efficiency of a gamma-ray detector (per unit volume) depends upon the density of electrons in the detector, and certain scintillating materials, such as sodium iodide and bismuth germanate, achieve high electron densities as a result of the high atomic numbers of some of the elements of which they are composed. However, detectors based on semiconductors, notably hyperpure germanium, have better intrinsic energy resolution than scintillators, and are preferred where feasible for gamma-ray spectrometry.
Neutron
In the case of neutron detectors, high efficiency is gained through the use of scintillating materials rich in hydrogen that scatter neutrons efficiently. Liquid scintillation counters are an efficient and practical means of quantifying beta radiation.
Applications
Scintillation counters are used to measure radiation in a variety of applications including hand held radiation survey meters, personnel and environmental monitoring for radioactive contamination, medical imaging, radiometric assay, nuclear security and nuclear plant safety.
Several products have been introduced in the market utilising scintillation counters for detection of potentially dangerous gamma-emitting materials during transport. These include scintillation counters designed for freight terminals, border security, ports, weigh bridge applications, scrap metal yards and contamination monitoring of nuclear waste. There are variants of scintillation counters mounted on pick-up trucks and helicopters for rapid response in case of a security situation due to dirty bombs or radioactive waste. Hand-held units are also commonly used.
In the United Kingdom, the Health and Safety Executive, or HSE, has issued a user guidance note on selecting the correct radiation measurement instrument for the application concerned. This covers all radiation instrument technologies, and is a useful comparative guide to the use of scintillation detectors.
Radiation protection
Alpha and beta contamination
Radioactive contamination monitors, for area or personal surveys require a large detection area to ensure efficient and rapid coverage of monitored surfaces. For this a thin scintillator with a large area window and an integrated photomultiplier tube is ideally suited. They find wide application in the field of radioactive contamination monitoring of personnel and the environment. Detectors are designed to have one or two scintillation materials, depending on the application. "Single phosphor" detectors are used for either alpha or beta, and "Dual phosphor" detectors are used to detect both.
A scintillator such as zinc sulphide is used for alpha particle detection, whilst plastic scintillators are used for beta detection. The resultant scintillation energies can be discriminated so that alpha and beta counts can be measured separately with the same detector. This technique is used in both hand-held and fixed monitoring equipment, and such instruments are relatively inexpensive compared with the gas proportional detector.
Gamma
Scintillation materials are used for ambient gamma dose measurement, though a different construction is used to detect contamination, as no thin window is required.
As a spectrometer
Scintillators often convert a single photon of high-energy radiation into a large number of lower-energy photons, where the number of photons per megaelectronvolt of input energy is fairly constant. By measuring the intensity of the flash (the number of photons produced by the x-ray or gamma photon) it is therefore possible to discern the original photon's energy.
The spectrometer consists of a suitable scintillator crystal, a photomultiplier tube, and a circuit for measuring the height of the pulses produced by the photomultiplier. The pulses are counted and sorted by their height, producing an x–y plot of scintillator flash brightness versus number of flashes, which approximates the energy spectrum of the incident radiation, with some additional artifacts. Monochromatic gamma radiation produces a photopeak at its energy. The detector also shows response at lower energies, caused by Compton scattering, two smaller escape peaks at energies 0.511 and 1.022 MeV below the photopeak for the creation of electron–positron pairs when one or both annihilation photons escape, and a backscatter peak. Higher energies can be measured when two or more photons strike the detector almost simultaneously (pile-up, within the time resolution of the data acquisition chain), appearing as sum peaks with energies up to the value of two or more photopeaks added.
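As a minimal illustration of how sorted pulse heights become an energy scale, the sketch below performs a two-point linear calibration against two photopeaks. The channel numbers are invented for illustration; the calibration energies are the well-known Cs-137 (0.662 MeV) and Co-60 (1.332 MeV) gamma lines.

# Illustrative two-point energy calibration (channel numbers assumed).
e1, ch1 = 0.662, 331     # known energy (MeV) and measured peak channel (assumed)
e2, ch2 = 1.332, 667     # second calibration point (assumed)

slope = (e2 - e1) / (ch2 - ch1)          # MeV per channel
offset = e1 - slope * ch1

def channel_to_energy(ch):
    return slope * ch + offset

print(f"{channel_to_energy(500):.3f} MeV")   # energy assigned to a pulse in channel 500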
See also
Gamma spectroscopy
Geiger counter
Liquid scintillation counting
Lucas cell
Pandemonium effect
Photon counting
Scintigraphy
Total absorption spectroscopy
References
Particle detectors
Spectrometers
Ionising radiation detectors
Radiation protection | Scintillation counter | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 1,956 | [
"Spectrum (physical sciences)",
"Radioactive contamination",
"Particle detectors",
"Measuring instruments",
"Ionising radiation detectors",
"Spectrometers",
"Spectroscopy"
] |
212,250 | https://en.wikipedia.org/wiki/Homotopy | In topology, two continuous functions from one topological space to another are called homotopic (from "same, similar" and "place") if one can be "continuously deformed" into the other, such a deformation being called a homotopy (, ; , ) between the two functions. A notable use of homotopy is the definition of homotopy groups and cohomotopy groups, important invariants in algebraic topology.
In practice, there are technical difficulties in using homotopies with certain spaces. Algebraic topologists work with compactly generated spaces, CW complexes, or spectra.
Formal definition
Formally, a homotopy between two continuous functions f and g from a
topological space X to a topological space Y is defined to be a continuous function H : X × [0, 1] → Y from the product of the space X with the unit interval [0, 1] to Y such that H(x, 0) = f(x) and H(x, 1) = g(x) for all x in X.
If we think of the second parameter of H as time then H describes a continuous deformation of f into g: at time 0 we have the function f and at time 1 we have the function g. We can also think of the second parameter as a "slider control" that allows us to smoothly transition from f to g as the slider moves from 0 to 1, and vice versa.
An alternative notation is to say that a homotopy between two continuous functions f, g : X → Y is a family of continuous functions ht : X → Y for t in [0, 1] such that h0 = f and h1 = g, and the map (x, t) ↦ ht(x) is continuous from X × [0, 1] to Y. The two versions coincide by setting ht(x) = H(x, t). It is not sufficient to require each map ht to be continuous.
The animation that is looped above right provides an example of a homotopy between two embeddings, f and g, of the torus into R3. X is the torus, Y is R3, f is some continuous function from the torus to R3 that takes the torus to the embedded surface-of-a-doughnut shape with which the animation starts; g is some continuous function that takes the torus to the embedded surface-of-a-coffee-mug shape. The animation shows the image of ht(X) as a function of the parameter t, where t varies with time from 0 to 1 over each cycle of the animation loop. It pauses, then shows the image as t varies back from 1 to 0, pauses, and repeats this cycle.
Properties
Continuous functions f and g are said to be homotopic if and only if there is a homotopy H taking f to g as described above. Being homotopic is an equivalence relation on the set of all continuous functions from X to Y.
This homotopy relation is compatible with function composition in the following sense: if f1, g1 : X → Y are homotopic, and f2, g2 : Y → Z are homotopic, then their compositions f2 ∘ f1 and g2 ∘ g1 : X → Z are also homotopic.
Examples
If f, g : R → R2 are given by f(x) := (x, x^3) and g(x) := (x, e^x), then the map H : R × [0, 1] → R2 given by H(x, t) = (x, (1 − t)x^3 + te^x) is a homotopy between them.
More generally, if C is a convex subset of Euclidean space and f, g : [0, 1] → C are paths with the same endpoints, then there is a linear homotopy (or straight-line homotopy) given by H(x, t) = (1 − t)f(x) + tg(x); a short numerical sketch of this construction follows the examples below.
Let idDn be the identity function on the unit n-disk Dn, i.e. the set of all x in Rn with ||x|| ≤ 1. Let c0 be the constant function which sends every point to the origin. Then the following is a homotopy between them: H(x, t) = (1 − t)x.
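The straight-line homotopy above can be checked numerically. The following is a minimal illustrative sketch; the particular paths f and g are arbitrary choices, not taken from the article, and the code only confirms that H(·, 0) recovers f and H(·, 1) recovers g.

# Minimal numerical check of a straight-line homotopy between two plane paths.
import numpy as np

def f(s):  # path 1: straight segment from (0, 0) to (1, 0)
    return np.array([s, 0.0])

def g(s):  # path 2: an arc from (0, 0) to (1, 0)
    return np.array([s, s * (1 - s)])

def H(s, t):  # linear homotopy: H(s, 0) = f(s), H(s, 1) = g(s)
    return (1 - t) * f(s) + t * g(s)

s_grid = np.linspace(0, 1, 5)
assert all(np.allclose(H(s, 0), f(s)) and np.allclose(H(s, 1), g(s)) for s in s_grid)
print([H(0.5, t) for t in (0.0, 0.5, 1.0)])   # intermediate stages of the deformation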
Homotopy equivalence
Given two topological spaces X and Y, a homotopy equivalence between X and Y is a pair of continuous maps f : X → Y and g : Y → X, such that g ∘ f is homotopic to the identity map idX and f ∘ g is homotopic to idY. If such a pair exists, then X and Y are said to be homotopy equivalent, or of the same homotopy type. Intuitively, two spaces X and Y are homotopy equivalent if they can be transformed into one another by bending, shrinking and expanding operations. Spaces that are homotopy-equivalent to a point are called contractible.
Homotopy equivalence vs. homeomorphism
A homeomorphism is a special case of a homotopy equivalence, in which g ∘ f is equal to the identity map idX (not only homotopic to it), and f ∘ g is equal to idY. Therefore, if X and Y are homeomorphic then they are homotopy-equivalent, but the opposite is not true. Some examples:
A solid disk is homotopy-equivalent to a single point, since you can deform the disk along radial lines continuously to a single point. However, they are not homeomorphic, since there is no bijection between them (since one is an infinite set, while the other is finite).
The Möbius strip and an untwisted (closed) strip are homotopy equivalent, since you can deform both strips continuously to a circle. But they are not homeomorphic.
Examples
The first example of a homotopy equivalence is Rn with a point. The part that needs to be checked is the existence of a homotopy H : Rn × [0, 1] → Rn between idRn and p0, the projection of Rn onto the origin. This can be described as H(x, t) = (1 − t)x.
There is a homotopy equivalence between S1 (the 1-sphere) and R2 − {0}.
More generally, Rn − {0} is homotopy equivalent to the (n − 1)-sphere Sn−1.
Any fiber bundle with fibers homotopy equivalent to a point has homotopy equivalent total and base spaces. This generalizes the previous two examples, since the map Rn − {0} → Sn−1 sending x to x/||x|| is a fiber bundle whose fiber (0, ∞) is homotopy equivalent to a point.
Every vector bundle is a fiber bundle with a fiber homotopy equivalent to a point.
for any , by writing as the total space of the fiber bundle , then applying the homotopy equivalences above.
If a subcomplex A of a CW complex X is contractible, then the quotient space X/A is homotopy equivalent to X.
A deformation retraction is a homotopy equivalence.
Null-homotopy
A function f is said to be null-homotopic if it is homotopic to a constant function. (The homotopy from f to a constant function is then sometimes called a null-homotopy.) For example, a map f from the unit circle S1 to any space X is null-homotopic precisely when it can be continuously extended to a map from the unit disk D2 to X that agrees with f on the boundary.
It follows from these definitions that a space X is contractible if and only if the identity map from X to itself (which is always a homotopy equivalence) is null-homotopic.
Invariance
Homotopy equivalence is important because in algebraic topology many concepts are homotopy invariant, that is, they respect the relation of homotopy equivalence. For example, if X and Y are homotopy equivalent spaces, then:
X is path-connected if and only if Y is.
X is simply connected if and only if Y is.
The (singular) homology and cohomology groups of X and Y are isomorphic.
If X and Y are path-connected, then the fundamental groups of X and Y are isomorphic, and so are the higher homotopy groups. (Without the path-connectedness assumption, one has π1(X, x0) isomorphic to π1(Y, f(x0)), where f : X → Y is a homotopy equivalence and x0 is a point of X.)
An example of an algebraic invariant of topological spaces which is not homotopy-invariant is compactly supported homology (which is, roughly speaking, the homology of the compactification, and compactification is not homotopy-invariant).
Variants
Relative homotopy
In order to define the fundamental group, one needs the notion of homotopy relative to a subspace. These are homotopies which keep the elements of the subspace fixed. Formally: if f and g are continuous maps from X to Y and K is a subset of X, then we say that f and g are homotopic relative to K if there exists a homotopy H : X × [0, 1] → Y between f and g such that H(k, t) = f(k) = g(k) for all k in K and t in [0, 1]. Also, if g is a retraction from X to K and f is the identity map, this is known as a strong deformation retract of X to K.
When K is a point, the term pointed homotopy is used.
Isotopy
When two given continuous functions f and g from the topological space X to the topological space Y are embeddings, one can ask whether they can be connected 'through embeddings'. This gives rise to the concept of isotopy, which is a homotopy, H, in the notation used before, such that for each fixed t, H(x, t) gives an embedding.
A related, but different, concept is that of ambient isotopy.
Requiring that two embeddings be isotopic is a stronger requirement than that they be homotopic. For example, the map from the interval [−1, 1] into the real numbers defined by f(x) = −x is not isotopic to the identity g(x) = x. Any homotopy from f to the identity would have to exchange the endpoints, which would mean that they would have to 'pass through' each other. Moreover, f has changed the orientation of the interval and g has not, which is impossible under an isotopy. However, the maps are homotopic; one homotopy from f to the identity is H: [−1, 1] × [0, 1] → [−1, 1] given by H(x, y) = 2yx − x.
Two homeomorphisms (which are special cases of embeddings) of the unit ball which agree on the boundary can be shown to be isotopic using Alexander's trick. For this reason, the map of the unit disc in R2 defined by f(x, y) = (−x, −y) is isotopic to a 180-degree rotation around the origin, and so the identity map and f are isotopic because they can be connected by rotations.
In geometric topology—for example in knot theory—the idea of isotopy is used to construct equivalence relations. For example, when should two knots be considered the same? We take two knots, K1 and K2, in three-dimensional space. A knot is an embedding of a one-dimensional space, the "loop of string" (or the circle), into this space, and this embedding gives a homeomorphism between the circle and its image in the embedding space. The intuitive idea behind the notion of knot equivalence is that one can deform one embedding to another through a path of embeddings: a continuous function starting at t = 0 giving the K1 embedding, ending at t = 1 giving the K2 embedding, with all intermediate values corresponding to embeddings. This corresponds to the definition of isotopy. An ambient isotopy, studied in this context, is an isotopy of the larger space, considered in light of its action on the embedded submanifold. Knots K1 and K2 are considered equivalent when there is an ambient isotopy which moves K1 to K2. This is the appropriate definition in the topological category.
Similar language is used for the equivalent concept in contexts where one has a stronger notion of equivalence. For example, a path between two smooth embeddings is a smooth isotopy.
Timelike homotopy
On a Lorentzian manifold, certain curves are distinguished as timelike (representing something that only goes forwards, not backwards, in time, in every local frame). A timelike homotopy between two timelike curves is a homotopy such that the curve remains timelike during the continuous transformation from one curve to another. No closed timelike curve (CTC) on a Lorentzian manifold is timelike homotopic to a point (that is, null timelike homotopic); such a manifold is therefore said to be multiply connected by timelike curves. A manifold such as the 3-sphere can be simply connected (by any type of curve), and yet be timelike multiply connected.
Properties
Lifting and extension properties
If we have a homotopy H : X × [0, 1] → Y and a cover p : Y′ → Y, and we are given a map h′0 : X → Y′ such that h0 = p ∘ h′0 (h′0 is called a lift of h0), then we can lift all of H to a map H′ : X × [0, 1] → Y′ such that p ∘ H′ = H. The homotopy lifting property is used to characterize fibrations.
Another useful property involving homotopy is the homotopy extension property,
which characterizes the extension of a homotopy between two functions from a subset of some set to the set itself. It is useful when dealing with cofibrations.
Groups
Since the relation of two functions being homotopic relative to a subspace is an equivalence relation, we can look at the equivalence classes of maps between a fixed X and Y. If we fix X to be the n-cube [0, 1]n (the unit interval [0, 1] crossed with itself n times), and we take its boundary as a subspace, then the equivalence classes form a group, denoted πn(Y, y0), where y0 is a point in the image of the boundary subspace.
We can define the action of one equivalence class on another, and so we get a group. These groups are called the homotopy groups. In the case , it is also called the fundamental group.
Homotopy category
The idea of homotopy can be turned into a formal category of category theory. The homotopy category is the category whose objects are topological spaces, and whose morphisms are homotopy equivalence classes of continuous maps. Two topological spaces X and Y are isomorphic in this category if and only if they are homotopy-equivalent. Then a functor on the category of topological spaces is homotopy invariant if it can be expressed as a functor on the homotopy category.
For example, homology groups are a functorial homotopy invariant: this means that if f and g from X to Y are homotopic, then the group homomorphisms induced by f and g on the level of homology groups are the same: Hn(f) = Hn(g) : Hn(X) → Hn(Y) for all n. Likewise, if X and Y are in addition path connected, and the homotopy between f and g is pointed, then the group homomorphisms induced by f and g on the level of homotopy groups are also the same: πn(f) = πn(g) : πn(X) → πn(Y).
Applications
Based on the concept of the homotopy, computation methods for algebraic and differential equations have been developed. The methods for algebraic equations include the homotopy continuation method and the continuation method (see numerical continuation). The methods for differential equations include the homotopy analysis method.
Homotopy theory can be used as a foundation for homology theory: one can represent a cohomology functor on a space X by mappings of X into an appropriate fixed space, up to homotopy equivalence. For example, for any abelian group G, and any based CW-complex X, the set of based homotopy classes of based maps from X to the Eilenberg–MacLane space K(G, n) is in natural bijection with the n-th singular cohomology group of the space X. One says that the omega-spectrum of Eilenberg–MacLane spaces consists of representing spaces for singular cohomology with coefficients in G.
See also
Fiber-homotopy equivalence (relative version of a homotopy equivalence)
Homeotopy
Homotopy type theory
Mapping class group
Poincaré conjecture
Regular homotopy
References
Sources
Maps of manifolds
Theory of continuous functions | Homotopy | [
"Mathematics"
] | 3,200 | [
"Theory of continuous functions",
"Topology"
] |
212,490 | https://en.wikipedia.org/wiki/Subatomic%20particle | In physics, a subatomic particle is a particle smaller than an atom. According to the Standard Model of particle physics, a subatomic particle can be either a composite particle, which is composed of other particles (for example, a baryon, like a proton or a neutron, composed of three quarks; or a meson, composed of two quarks), or an elementary particle, which is not composed of other particles (for example, quarks; or electrons, muons, and tau particles, which are called leptons). Particle physics and nuclear physics study these particles and how they interact. Most force-carrying particles like photons or gluons are called bosons and, although they have quanta of energy, do not have rest mass or discrete diameters (other than pure energy wavelength) and are unlike the former particles that have rest mass and cannot overlap or combine which are called fermions. The W and Z bosons, however, are an exception to this rule and have relatively large rest masses at approximately 80 GeV and 90 GeV respectively.
Experiments show that light can behave like a stream of particles (called photons) as well as exhibiting wave-like properties. This led to the concept of wave–particle duality to reflect that quantum-scale objects behave both like particles and like waves; they are sometimes called wavicles to reflect this.
Another concept, the uncertainty principle, states that some of their properties taken together, such as their simultaneous position and momentum, cannot be measured exactly. The wave–particle duality has been shown to apply not only to photons but to more massive particles as well.
Interactions of particles in the framework of quantum field theory are understood as creation and annihilation of quanta of corresponding fundamental interactions. This blends particle physics with field theory.
Even among particle physicists, the exact definition of a particle has diverse descriptions. These professional attempts at the definition of a particle include:
A particle is a collapsed wave function
A particle is a quantum excitation of a field
A particle is an irreducible representation of the Poincaré group
A particle is an observed thing
Classification
By composition
Subatomic particles are either "elementary", i.e. not made of multiple other particles, or "composite" and made of more than one elementary particle bound together.
The elementary particles of the Standard Model are:
Six "flavors" of quarks: up, down, strange, charm, bottom, and top;
Six types of leptons: electron, electron neutrino, muon, muon neutrino, tau, tau neutrino;
Twelve gauge bosons (force carriers): the photon of electromagnetism, the three W and Z bosons of the weak force, and the eight gluons of the strong force;
The Higgs boson.
All of these have now been discovered through experiments, with the latest being the top quark (1995), tau neutrino (2000), and Higgs boson (2012).
Various extensions of the Standard Model predict the existence of an elementary graviton particle and many other elementary particles, but none have been discovered as of 2021.
Hadrons
The word hadron comes from Greek and was introduced in 1962 by Lev Okun. Nearly all composite particles contain multiple quarks (and/or antiquarks) bound together by gluons (with a few exceptions with no quarks, such as positronium and muonium). Those containing few (≤ 5) quarks (including antiquarks) are called hadrons. Due to a property known as color confinement, quarks are never found singly but always occur in hadrons containing multiple quarks. The hadrons are divided by number of quarks (including antiquarks) into the baryons containing an odd number of quarks (almost always 3), of which the proton and neutron (the two nucleons) are by far the best known; and the mesons containing an even number of quarks (almost always 2, one quark and one antiquark), of which the pions and kaons are the best known.
Except for the proton and neutron, all other hadrons are unstable and decay into other particles in microseconds or less. A proton is made of two up quarks and one down quark, while the neutron is made of two down quarks and one up quark. These commonly bind together into an atomic nucleus, e.g. a helium-4 nucleus is composed of two protons and two neutrons. Most hadrons do not live long enough to bind into nucleus-like composites; those that do (other than the proton and neutron) form exotic nuclei.
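To make the quark-counting rule above concrete, here is a small illustrative Python sketch (the quark contents shown are the standard ones; the helper function is invented for this example) that classifies a hadron as a baryon or a meson from its quark content.

```python
# Classify a hadron from its quark content: an odd number of quarks (usually 3)
# makes a baryon, an even number (usually a quark-antiquark pair) makes a meson.
def classify_hadron(quarks):
    return "baryon" if len(quarks) % 2 == 1 else "meson"

proton    = ["u", "u", "d"]    # two up quarks + one down quark
neutron   = ["u", "d", "d"]    # one up quark + two down quarks
pion_plus = ["u", "anti-d"]    # quark-antiquark pair

for name, content in [("proton", proton), ("neutron", neutron), ("pi+", pion_plus)]:
    print(name, "->", classify_hadron(content))
# proton -> baryon, neutron -> baryon, pi+ -> meson
```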
By statistics
Any subatomic particle, like any particle in three-dimensional space that obeys the laws of quantum mechanics, can be either a boson (with integer spin) or a fermion (with half-odd-integer spin).
In the Standard Model, all the elementary fermions have spin 1/2, and are divided into the quarks which carry color charge and therefore feel the strong interaction, and the leptons which do not. The elementary bosons comprise the gauge bosons (photon, W and Z, gluons) with spin 1, while the Higgs boson is the only elementary particle with spin zero.
The hypothetical graviton is required theoretically to have spin 2, but is not part of the Standard Model. Some extensions such as supersymmetry predict additional elementary particles with spin 3/2, but none have been discovered as of 2023.
Due to the laws for spin of composite particles, the baryons (3 quarks) have spin either 1/2 or 3/2 and are therefore fermions; the mesons (2 quarks) have integer spin of either 0 or 1 and are therefore bosons.
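A minimal sketch of that parity rule, under the assumption that each quark contributes spin 1/2: an odd number of spin-1/2 constituents can only combine to half-odd-integer total spin (a fermion), while an even number can only combine to integer spin (a boson).

```python
# Parity rule for composites built from spin-1/2 constituents (e.g. quarks):
# odd count -> half-odd-integer total spin (fermion), even count -> integer spin (boson).
def composite_statistics(n_constituents):
    return "fermion" if n_constituents % 2 == 1 else "boson"

print("baryon (3 quarks):", composite_statistics(3))   # fermion (spin 1/2 or 3/2)
print("meson  (2 quarks):", composite_statistics(2))   # boson  (spin 0 or 1)
```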
By mass
In special relativity, the energy of a particle at rest equals its mass times the speed of light squared, E = mc2. That is, mass can be expressed in terms of energy and vice versa. If a particle has a frame of reference in which it lies at rest, then it has a positive rest mass and is referred to as massive.
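A short sketch of the mass–energy relation E = mc² in both directions: it computes the rest energy of a proton in joules and MeV, and conversely expresses the roughly 80 GeV rest energy of the W boson in kilograms (constants are standard rounded values).

```python
C  = 2.997_924_58e8       # speed of light, m/s
EV = 1.602_176_634e-19    # joules per electronvolt

# Mass -> energy: rest energy of the proton.
m_proton = 1.672_621_9e-27                  # proton mass, kg
E_proton = m_proton * C**2                  # joules
print(f"proton rest energy: {E_proton:.3e} J = {E_proton / (1e6 * EV):.0f} MeV")  # ~938 MeV

# Energy -> mass: the ~80 GeV rest energy of the W boson expressed in kilograms.
E_W = 80e9 * EV                             # joules
m_W = E_W / C**2                            # kg
print(f"W boson mass: {m_W:.2e} kg")        # ~1.4e-25 kg
```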
All composite particles are massive. Baryons (meaning "heavy") tend to have greater mass than mesons (meaning "intermediate"), which in turn tend to be heavier than leptons (meaning "lightweight"), but the heaviest lepton (the tau particle) is heavier than the two lightest flavours of baryons (nucleons). It is also certain that any particle with an electric charge is massive.
When originally defined in the 1950s, the terms baryons, mesons and leptons referred to masses; however, after the quark model became accepted in the 1970s, it was recognised that baryons are composites of three quarks, mesons are composites of one quark and one antiquark, while leptons are elementary and are defined as the elementary fermions with no color charge.
All massless particles (particles whose invariant mass is zero) are elementary. These include the photon and gluon, although the latter cannot be isolated.
By decay
Most subatomic particles are not stable. All mesons, and all baryons except the proton, decay by either the strong or the weak force. Protons are not known to decay, although whether they are "truly" stable is unknown, since some important Grand Unified Theories (GUTs) require proton decay. The muon (μ) and the tau lepton (τ), as well as their antiparticles, decay by the weak force. Neutrinos (and antineutrinos) do not decay, although the related phenomenon of neutrino oscillation is thought to occur even in a vacuum. The electron and its antiparticle, the positron, are theoretically stable due to charge conservation, unless a lighter particle having an electric charge of magnitude e exists (which is considered unlikely).
Other properties
All observable subatomic particles have an electric charge that is an integer multiple of the elementary charge. The Standard Model's quarks have "non-integer" electric charges, namely multiples of 1/3 e (+2/3 e for up-type quarks and −1/3 e for down-type quarks), but quarks (and other combinations with non-integer electric charge) cannot be isolated due to color confinement. For baryons, mesons, and their antiparticles the constituent quarks' charges sum up to an integer multiple of e.
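To make the charge-summing statement concrete, a small illustrative sketch (the quark charges in units of e are the standard ones; the helper function is invented for this example) adds up the constituent charges of a few hadrons.

```python
from fractions import Fraction

# Quark electric charges in units of the elementary charge e.
CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3)}

def total_charge(quarks):
    # An antiquark carries the opposite charge of its quark.
    return sum(-CHARGE[q[5:]] if q.startswith("anti-") else CHARGE[q] for q in quarks)

print("proton (uud):  ", total_charge(["u", "u", "d"]))     # +1
print("neutron (udd): ", total_charge(["u", "d", "d"]))      # 0
print("pi+ (u anti-d):", total_charge(["u", "anti-d"]))      # +1
```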
Through the work of Albert Einstein, Satyendra Nath Bose, Louis de Broglie, and many others, current scientific theory holds that all particles also have a wave nature. This has been verified not only for elementary particles but also for compound particles like atoms and even molecules. In fact, according to traditional formulations of non-relativistic quantum mechanics, wave–particle duality applies to all objects, even macroscopic ones; although the wave properties of macroscopic objects cannot be detected due to their small wavelengths.
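A brief sketch of why wave behaviour is unobservable for macroscopic objects: the de Broglie wavelength λ = h / (m·v) is computed for a slow electron and for a thrown baseball (the masses and speeds are illustrative choices).

```python
H = 6.626_070_15e-34   # Planck constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    return H / (mass_kg * speed_m_s)

# Electron at ~1% of the speed of light: wavelength comparable to atomic spacings.
print(f"electron: {de_broglie_wavelength(9.109e-31, 3.0e6):.2e} m")   # ~2.4e-10 m

# 145 g baseball at 40 m/s: wavelength far below anything measurable.
print(f"baseball: {de_broglie_wavelength(0.145, 40.0):.2e} m")        # ~1.1e-34 m
```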
Interactions between particles have been scrutinized for many centuries, and a few simple laws underpin how particles behave in collisions and interactions. The most fundamental of these are the laws of conservation of energy and conservation of momentum, which let us make calculations of particle interactions on scales of magnitude that range from stars to quarks. These are the prerequisite basics of Newtonian mechanics, a series of statements and equations in Philosophiae Naturalis Principia Mathematica, originally published in 1687.
Dividing an atom
The negatively charged electron has a mass of about 1/1837 of that of a hydrogen atom. The remainder of the hydrogen atom's mass comes from the positively charged proton. The atomic number of an element is the number of protons in its nucleus. Neutrons are neutral particles having a mass slightly greater than that of the proton. Different isotopes of the same element contain the same number of protons but different numbers of neutrons. The mass number of an isotope is the total number of nucleons (neutrons and protons collectively).
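A tiny sketch of the bookkeeping in the paragraph above (the element data are standard; the helper function is invented for illustration): the mass number is simply protons plus neutrons, and isotopes of one element differ only in the neutron count.

```python
# Mass number A = protons (Z) + neutrons (N); isotopes share Z but differ in N.
def mass_number(protons, neutrons):
    return protons + neutrons

isotopes = [
    ("hydrogen-1", 1, 0),
    ("deuterium",  1, 1),   # same element as hydrogen-1 (Z = 1), one extra neutron
    ("helium-4",   2, 2),
]
for name, z, n in isotopes:
    print(f"{name}: Z={z}, N={n}, A={mass_number(z, n)}")
```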
Chemistry concerns itself with how electron sharing binds atoms into structures such as crystals and molecules. The subatomic particles considered important in the understanding of chemistry are the electron, the proton, and the neutron. Nuclear physics deals with how protons and neutrons arrange themselves in nuclei. The study of subatomic particles, atoms and molecules, and their structure and interactions, requires quantum mechanics. Analyzing processes that change the numbers and types of particles requires quantum field theory. The study of subatomic particles per se is called particle physics. The term high-energy physics is nearly synonymous to "particle physics" since creation of particles requires high energies: it occurs only as a result of cosmic rays, or in particle accelerators. Particle phenomenology systematizes the knowledge about subatomic particles obtained from these experiments.
History
The term "subatomic particle" is largely a retronym of the 1960s, used to distinguish a large number of baryons and mesons (which comprise hadrons) from particles that are now thought to be truly elementary. Before that hadrons were usually classified as "elementary" because their composition was unknown.
See also
Atom: Journey Across the Subatomic Cosmos (book)
Atom: An Odyssey from the Big Bang to Life on Earth...and Beyond (book)
CPT invariance
Dark matter
Hot spot effect in subatomic physics
List of fictional elements, materials, isotopes and atomic particles
List of particles
Poincaré symmetry
External links
University of California: Particle Data Group.
Subatomic particles
Quantum mechanics | Subatomic particle | [
"Physics"
] | 2,448 | [
"Theoretical physics",
"Quantum mechanics",
"Subatomic particles",
"Particle physics",
"Nuclear physics",
"Atoms",
"Matter"
] |
212,764 | https://en.wikipedia.org/wiki/Superluminous%20supernova | A super-luminous supernova (SLSN, plural super-luminous supernovae or SLSNe) is a type of stellar explosion with a luminosity 10 or more times higher than that of standard supernovae. Like ordinary supernovae, SLSNe seem to be produced by several mechanisms, which is readily revealed by their light curves and spectra. There are multiple models for what conditions may produce an SLSN, including core collapse in particularly massive stars, millisecond magnetars, interaction with circumstellar material (the CSM model), or pair-instability supernovae.
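As a rough illustration of what "a factor of 10 or more in luminosity" means on the astronomical magnitude scale, the sketch below converts a luminosity ratio into a difference in absolute magnitude via ΔM = −2.5 log₁₀(L/L_ref); a factor of 10 corresponds to 2.5 magnitudes.

```python
import math

# Difference in absolute magnitude for a given luminosity ratio:
# delta_M = -2.5 * log10(L / L_ref); brighter objects have smaller (more negative) magnitudes.
def magnitude_difference(luminosity_ratio):
    return -2.5 * math.log10(luminosity_ratio)

print(magnitude_difference(10))    # -2.5 -> an SLSN is at least ~2.5 mag brighter than a standard supernova
print(magnitude_difference(100))   # -5.0
```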
The first confirmed superluminous supernova connected to a gamma ray burst was not found until 2003, when GRB 030329 illuminated the Leo constellation. SN 2003dh represented the death of a star 25 times more massive than the Sun, with material being blasted out at over a tenth the speed of light.
Stars with sufficiently high masses are likely to produce superluminous supernovae.
Classification
Discoveries of many SLSNe in the 21st century showed that not only were they more luminous by an order of magnitude than most supernovae, their remnants were also unlikely to be powered by the typical radioactive decay that is responsible for the observed energies of conventional supernovae.
SLSNe events use a separate classification scheme to distinguish them from the conventional type Ia, type Ib/Ic, and type II supernovae, roughly distinguishing between the spectral signature of hydrogen-rich and hydrogen-poor events.
Hydrogen-rich SLSNe are classified as Type SLSN-II, with observed radiation passing through the changing opacity of a thick expanding hydrogen envelope. Most hydrogen-poor events are classified as Type SLSN-I, with their visible radiation produced from a large expanding envelope of material powered by an unknown mechanism. A third, less common group of SLSNe is also hydrogen-poor and abnormally luminous, but clearly powered by radioactivity from 56Ni.
An increasing number of discoveries show that some SLSNe do not fit cleanly into these three classes, so further sub-classes or unique events have been described. Many or all SLSN-I show spectra without hydrogen or helium but have light curves comparable to conventional type Ic supernovae, and are now classed as SLSN-Ic. PS1-10afx is an unusually red hydrogen-free SLSN with an extremely rapid rise to a near-record peak luminosity and an unusually rapid decline. PS1-11ap is similar to a type Ic SLSN but has an unusually slow rise and decline.
Astrophysical models
A wide variety of causes have been proposed to explain events that are an order of magnitude or more greater than standard supernovae. The collapsar and CSM (circumstellar material) models are generally accepted and a number of events are well-observed. Other models are still only tentatively accepted or remain entirely theoretical.
Collapsar model
The collapsar model describes a type of superluminous supernova that produces a gravitationally collapsed object, or black hole. The word "collapsar", short for "collapsed star", was formerly used to refer to the end product of stellar gravitational collapse, a stellar-mass black hole. The word is now sometimes used to refer to a specific model for the collapse of a fast-rotating star. When core collapse occurs in a star with a core at least around fifteen times the Sun's mass—though chemical composition and rotational rate are also significant—the explosion energy is insufficient to expel the outer layers of the star, and it will collapse into a black hole without producing a visible supernova outburst.
A star with a core mass slightly below this level will undergo a supernova explosion, but so much of the ejected mass falls back onto the core remnant that it still collapses into a black hole. If such a star is rotating slowly, then it will produce a faint supernova, but if the star is rotating quickly enough, then the fallback to the black hole will produce relativistic jets. The energy that these jets transfer into the ejected shell renders the visible outburst substantially more luminous than a standard supernova. The jets also beam high-energy particles and gamma rays directly outward and thereby produce X-ray or gamma-ray bursts; the jets can last for several seconds or longer and correspond to long-duration gamma-ray bursts, but they do not appear to explain short-duration gamma-ray bursts.
Stars with such massive cores have a considerably larger total mass, assuming the star has not undergone significant mass loss. Such a star will still have a hydrogen envelope and will explode as a Type II supernova. Faint Type II supernovae have been observed, but no definite candidates for a Type II SLSN have been identified (except type IIn, which are not thought to be jet supernovae). Only the very lowest-metallicity Population III stars will reach this stage of their lives with little mass loss. Other stars, including most of those visible to us, will have had most of their outer layers blown away by their high luminosity and become Wolf-Rayet stars. Some theories propose that these will produce either Type Ib or Type Ic supernovae of this kind, but no such event has so far been observed in nature. Many observed SLSNe are likely Type Ic. Those associated with gamma-ray bursts are almost always Type Ic, being very good candidates for having relativistic jets produced by fallback to a black hole. However, not all Type Ic SLSNe correspond to observed gamma-ray bursts, though a burst would only be visible if one of the jets were aimed towards us.
In recent years, much observational data on long-duration gamma-ray bursts have significantly increased our understanding of these events and made clear that the collapsar model produces explosions that differ only in detail from more or less ordinary supernovae and have energy ranges from approximately normal to around 100 times larger.
A good example of a collapsar SLSN is SN 1998bw, which was associated with the gamma-ray burst GRB 980425. It is classified as a type Ic supernova due to its distinctive spectral properties in the radio spectrum, indicating the presence of relativistic matter.
Circumstellar material model
Almost all observed SLSNe have had spectra similar to either a type Ic or type IIn supernova. The type Ic SLSNe are thought to be produced by jets from fallback to a black hole, but type IIn SLSNe have significantly different light curves and are not associated with gamma-ray bursts. Type IIn supernovae are all embedded in a dense nebula probably expelled from the progenitor star itself, and this circumstellar material (CSM) is thought to be the cause of the extra luminosity. When material expelled in an initial normal supernova explosion meets dense nebular material or dust close to the star, the shockwave converts kinetic energy efficiently into visible radiation. This effect is what makes these supernovae both extended in duration and extremely luminous, even though the initial explosive energy was the same as that of normal supernovae.
Although any supernova type could potentially produce Type IIn SLSNe, theoretical constraints on the sizes and densities of the surrounding CSM suggest that it will almost always have been expelled by the central progenitor star itself immediately prior to the observed supernova event. Such progenitor stars are likely to be hypergiants or luminous blue variables (LBVs) that appear to be undergoing substantial mass loss due to Eddington instability, for example SN 2005gl.
Pair-instability supernova
Another type of suspected SLSN is a pair-instability supernova, of which SN 2006gy may possibly be the first observed example. This supernova event was observed in a galaxy about 238 million light years (73 megaparsecs) from Earth.
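A quick consistency check of the quoted distance, using the standard conversion of roughly 3.26 light-years per parsec (the factor is the usual approximate value).

```python
LY_PER_PARSEC = 3.2616   # light-years per parsec (approximate)

distance_mpc = 73                             # quoted distance in megaparsecs
distance_mly = distance_mpc * LY_PER_PARSEC   # megaparsecs -> million light-years
print(f"{distance_mpc} Mpc ~ {distance_mly:.0f} million light-years")   # ~238 Mly
```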
The theoretical basis for pair-instability collapse has been known for many decades and was suggested as a dominant source of higher mass elements in the early universe as super-massive population III stars exploded. In a pair-instability supernova, the pair production effect causes a sudden pressure drop in the star's core, leading to a rapid partial collapse. Gravitational potential energy from the collapse causes runaway fusion of the core which entirely disrupts the star, leaving no remnant.
Models show that this phenomenon only happens in stars with extremely low metallicity and masses between about 130 and 260 times the Sun, making them extremely unlikely in the local universe. Although originally expected to produce SLSN explosions hundreds of times greater than a normal supernova, current models predict that they actually produce luminosities ranging from about the same as a normal core collapse supernova to perhaps 50 times brighter, although remaining bright for much longer.
Magnetar energy release
Models of the creation and subsequent spin-down of a magnetar yield much higher luminosities than regular supernova events and match the observed properties of at least some SLSNe. In cases where a pair-instability supernova is not a good fit for explaining an SLSN, a magnetar explanation is more plausible.
Other models
There are also models for SLSN explosions produced from binary systems, white dwarfs or neutron stars in unusual arrangements or undergoing mergers, and some of these are proposed to account for some observed gamma-ray bursts.
See also
SN 2018cow
External links
List of all superluminous supernovae at The Open Supernova Catalog.
Stellar phenomena
Hypergiants
Astronomical events
Wolf–Rayet stars
Stellar evolution | Superluminous supernova | [
"Physics",
"Chemistry",
"Astronomy"
] | 1,968 | [
"Supernovae",
"Physical phenomena",
"Astronomical events",
"Astrophysics",
"Stellar evolution",
"Explosions",
"Stellar phenomena"
] |