id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
22,844,401 | https://en.wikipedia.org/wiki/Homotopy%20analysis%20method | The homotopy analysis method (HAM) is a semi-analytical technique to solve nonlinear ordinary/partial differential equations. The homotopy analysis method employs the concept of the homotopy from topology to generate a convergent series solution for nonlinear systems. This is enabled by utilizing a homotopy-Maclaurin series to deal with the nonlinearities in the system.
The HAM was first devised in 1992 by Liao Shijun of Shanghai Jiaotong University in his PhD dissertation and further modified in 1997 to introduce a non-zero auxiliary parameter, referred to as the convergence-control parameter, c0, to construct a homotopy on a differential system in general form. The convergence-control parameter is a non-physical variable that provides a simple way to verify and enforce convergence of a solution series. The capability of the HAM to naturally show convergence of the series solution is unusual in analytical and semi-analytic approaches to nonlinear partial differential equations.
Characteristics
The HAM distinguishes itself from various other analytical methods in four important aspects. First, it is a series expansion method that is not directly dependent on small or large physical parameters. Thus, it is applicable for not only weakly but also strongly nonlinear problems, going beyond some of the inherent limitations of the standard perturbation methods. Second, the HAM is a unified method for the Lyapunov artificial small parameter method, the delta expansion method, the Adomian decomposition method, and the homotopy perturbation method. The greater generality of the method often allows for strong convergence of the solution over larger spatial and parameter domains. Third, the HAM gives excellent flexibility in the expression of the solution and how the solution is explicitly obtained. It provides great freedom to choose the basis functions of the desired solution and the corresponding auxiliary linear operator of the homotopy. Finally, unlike the other analytic approximation techniques, the HAM provides a simple way to ensure the convergence of the solution series.
The homotopy analysis method is also able to combine with other techniques employed in nonlinear differential equations such as spectral methods and Padé approximants. It may further be combined with computational methods, such as the boundary element method to allow the linear method to solve nonlinear systems. Different from the numerical technique of homotopy continuation, the homotopy analysis method is an analytic approximation method as opposed to a discrete computational method. Further, the HAM uses the homotopy parameter only on a theoretical level to demonstrate that a nonlinear system may be split into an infinite set of linear systems which are solved analytically, while the continuation methods require solving a discrete linear system as the homotopy parameter is varied to solve the nonlinear system.
Applications
In the last twenty years, the HAM has been applied to solve a growing number of nonlinear ordinary/partial differential equations in science, finance, and engineering.
For example, multiple steady-state resonant waves in deep and finite water depth were found with the wave resonance criterion of arbitrary number of traveling gravity waves; this agreed with Phillips' criterion for four waves with small amplitude. Further, a unified wave model applied with the HAM, admits not only the traditional smooth progressive periodic/solitary waves, but also the progressive solitary waves with peaked crest in finite water depth. This model shows peaked solitary waves are consistent solutions along with the known smooth ones. Additionally, the HAM has been applied to many other nonlinear problems such as nonlinear heat transfer, the limit cycle of nonlinear dynamic systems, the American put option, the exact Navier–Stokes equation, the option pricing under stochastic volatility, the electrohydrodynamic flows, the Poisson–Boltzmann equation for semiconductor devices, and others.
Brief mathematical description
Consider a general nonlinear differential equation
N[u(x)] = 0,
where N is a nonlinear operator. Let L denote an auxiliary linear operator, u0(x) an initial guess of u(x), and c0 a constant called the convergence-control parameter. Using the embedding parameter q ∈ [0,1] from homotopy theory, one may construct a family of equations
(1 − q) L[U(x; q) − u0(x)] = q c0 N[U(x; q)],
called the zeroth-order deformation equation, whose solution U(x; q) varies continuously with respect to the embedding parameter q ∈ [0,1]. When q = 0 this reduces to the linear equation L[U(x; 0) − u0(x)] = 0 with known initial guess U(x; 0) = u0(x), but when q = 1 it is equivalent to the original nonlinear equation N[u(x)] = 0, i.e. U(x; 1) = u(x). Therefore, as q increases from 0 to 1, the solution U(x; q) of the zeroth-order deformation equation varies (or deforms) from the chosen initial guess u0(x) to the solution u(x) of the considered nonlinear equation.
Expanding U(x; q) in a Taylor series about q = 0, we have the homotopy-Maclaurin series
U(x; q) = u0(x) + u1(x) q + u2(x) q^2 + ⋯, where um(x) = (1/m!) ∂^m U(x; q)/∂q^m evaluated at q = 0.
Assuming that the so-called convergence-control parameter c0 of the zeroth-order deformation equation is properly chosen so that the above series is convergent at q = 1, we have the homotopy-series solution
u(x) = u0(x) + u1(x) + u2(x) + ⋯.
From the zeroth-order deformation equation, one can directly derive the governing equation of um(x),
L[um(x) − χm um−1(x)] = c0 Rm(x),
called the mth-order deformation equation, where χ1 = 0 and χk = 1 for k > 1, and where the right-hand side Rm depends only upon the known results u0, u1, ..., um − 1 and can be obtained easily using computer algebra software. In this way, the original nonlinear equation is transformed into an infinite sequence of linear equations, without the assumption of any small or large physical parameters.
Since the HAM is based on a homotopy, one has great freedom to choose the initial guess u0(x), the auxiliary linear operator , and the convergence-control parameter c0 in the zeroth-order deformation equation. Thus, the HAM provides the mathematician freedom to choose the equation-type of the high-order deformation equation and the base functions of its solution. The optimal value of the convergence-control parameter c0 is determined by the minimum of the squared residual error of governing equations and/or boundary conditions after the general form has been solved for the chosen initial guess and linear operator. Thus, the convergence-control parameter c0 is a simple way to guarantee the convergence of the homotopy series solution and differentiates the HAM from other analytic approximation methods. The method overall gives a useful generalization of the concept of homotopy.
The HAM and computer algebra
The HAM is an analytic approximation method designed for the computer era with the goal of "computing with functions instead of numbers." In conjunction with a computer algebra system such as Mathematica or Maple, one can gain analytic approximations of a highly nonlinear problem to arbitrarily high order by means of the HAM in only a few seconds. Inspired by the recent successful applications of the HAM in different fields, a Mathematica package based on the HAM, called BVPh, has been made available online for solving nonlinear boundary-value problems. BVPh is a solver package for highly nonlinear ODEs with singularities, multiple solutions, and multipoint boundary conditions in either a finite or an infinite interval, and includes support for certain types of nonlinear PDEs. Another HAM-based Mathematica code, APOh, has been produced to solve for an explicit analytic approximation of the optimal exercise boundary of the American put option, which is also available online.
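As a rough illustration of the kind of computation such packages automate, the following Python/SymPy sketch solves the high-order deformation equations term by term for the toy problem u′(x) + u(x)² = 0 with u(0) = 1, whose exact solution is 1/(1 + x). The choice of problem, the auxiliary linear operator L[u] = u′, the initial guess u0(x) = 1 and the value c0 = −1 are all illustrative assumptions and are not taken from the BVPh or APOh packages.

```python
# Minimal HAM sketch (illustrative choices throughout): solve u' + u^2 = 0, u(0) = 1
# with L[u] = u', initial guess u0 = 1, and convergence-control parameter c0 = -1.
import sympy as sp

x, q = sp.symbols('x q')

def ham_terms(order, c0=-1):
    """Return the list [u0, u1, ..., u_order] of homotopy-series terms."""
    u = [sp.Integer(1)]                                    # u0(x), satisfies u(0) = 1
    for m in range(1, order + 1):
        U = sum(u[k] * q**k for k in range(m))             # truncated homotopy series
        # R_m is the coefficient of q^(m-1) in N[U] = U' + U^2
        R_m = sp.expand(sp.diff(U, x) + U**2).coeff(q, m - 1)
        chi = 0 if m == 1 else 1
        # m-th order deformation equation: L[u_m - chi*u_{m-1}] = c0*R_m, u_m(0) = 0
        u.append(sp.expand(chi * u[m - 1] + c0 * sp.integrate(R_m, (x, 0, x))))
    return u

print(sum(ham_terms(4)))   # x**4 - x**3 + x**2 - x + 1, the start of the series of 1/(1+x)
```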
Frequency response analysis for nonlinear oscillators
The HAM has recently been reported to be useful for obtaining analytical solutions for nonlinear frequency response equations. Such solutions are able to capture various nonlinear behaviors such as hardening-type, softening-type or mixed behaviors of the oscillator. These analytical equations are also useful in prediction of chaos in nonlinear systems.
References
External links
http://numericaltank.sjtu.edu.cn/BVPh.htm
http://numericaltank.sjtu.edu.cn/APO.htm
Asymptotic analysis
Partial differential equations
Homotopy theory | Homotopy analysis method | [
"Mathematics"
] | 1,641 | [
"Mathematical analysis",
"Asymptotic analysis"
] |
22,847,720 | https://en.wikipedia.org/wiki/Centro%20de%20Estudios%20Cient%C3%ADficos | Centro de Estudios Científicos (CECs; Center for Scientific Studies) is a private, non-profit corporation based in Valdivia, Chile, devoted to the development, promotion and diffusion of scientific research.
History
CECs research areas include biophysics, molecular physiology, theoretical physics, glaciology and climate change.
The centre was created in 1984 as Centro de Estudios Científicos de Santiago, with a grant of 150,000 dollars a year (for three years) from the Tinker Foundation of New York City.
In 2004-2005 glaciologists from CECs organized the Chilean South Pole Expedition in collaboration with the Chilean Navy and Instituto Antártico Chileno.
CECs was founded in Santiago, but since 2000 it has been housed in the modernized, German-style Hotel Schuster located by the Valdivia River. Claudio Bunster, a physicist and winner of Chile's National Prize for Exact Sciences, is the director of CECs.
In 2014, CECs discovered what appeared to be a subglacial lake in West Antarctica. After a year of investigation, they concluded that it is indeed a lake, which was named Lake CECs in honor of the institution. The conclusion was published in Geophysical Research Letters on May 22, 2015.
The authors of the discovery are Andrés Rivera, Jose Uribe, Rodrigo Zamora and Jonathan Oberreuter.
References
External links
Research institutes in Chile
Multidisciplinary research institutes
Valdivia
Research institutes established in 1984
Astrophysics research institutes
Earth science research institutes
Biochemistry research institutes
1984 establishments in Chile | Centro de Estudios Científicos | [
"Physics",
"Chemistry"
] | 317 | [
"Biochemistry research institutes",
"Astrophysics research institutes",
"Astrophysics",
"Biochemistry organizations"
] |
22,848,552 | https://en.wikipedia.org/wiki/Lunarcrete | Lunarcrete, also known as "mooncrete", an idea first proposed by Larry A. Beyer of the University of Pittsburgh in 1985, is a hypothetical construction aggregate, similar to concrete, formed from lunar regolith, that would reduce the construction costs of building on the Moon. AstroCrete is a more general concept also applicable for Mars.
Ingredients
Only comparatively small amounts of Moon rock have been transported to Earth, so in 1988 researchers at the University of North Dakota proposed simulating the construction of such a material by using lignite coal ash. Other researchers have used the subsequently developed lunar regolith simulant materials, such as JSC-1 (developed in 1994 and used by Toutanji et al.) and LHS-1 (developed and produced by Exolith Lab). Some small-scale testing with actual regolith has, however, been performed in laboratories.
The basic ingredients for lunarcrete would be the same as those for terrestrial concrete: aggregate, water, and cement. In the case of lunarcrete, the aggregate would be lunar regolith. The cement would be manufactured by beneficiating lunar rock that had a high calcium content. Water would either be supplied from off the Moon, or by combining oxygen with hydrogen produced from lunar soil.
Lin et al. used 40 g of the lunar regolith samples obtained by Apollo 16 to produce lunarcrete in 1986. The lunarcrete was cured by using steam on a dry aggregate/cement mixture. Lin proposed that the water for such steam could be produced by mixing hydrogen with lunar ilmenite at 800 °C, to produce titanium oxide, iron, and water. It was capable of withstanding compressive pressures of 75 MPa, and lost only 20% of that strength after repeated exposure to vacuum.
In 2008, Houssam Toutanji, of the University of Alabama in Huntsville, and Richard Grugel, of the Marshall Space Flight Center, used a lunar soil simulant to determine whether lunarcrete could be made without water, using sulfur (obtainable from lunar dust) as the binding agent. The process to create this sulfur concrete required heating the sulfur to 130–140 °C. After exposure to 50 cycles of temperature changes, from −27 °C to room temperature, the simulant lunarcrete was found to be capable of withstanding compressive pressures of 17 MPa, which Toutanji and Grugel believed could be raised to 20 MPa if the material were reinforced with silica (also obtainable from lunar dust).
Casting and production
There would need to be significant infrastructure in place before industrial scale production of lunarcrete could be possible.
The casting of lunarcrete would require a pressurized environment, because attempting to cast in a vacuum would simply result in the water sublimating, and the lunarcrete failing to harden. Two solutions to this problem have been proposed: premixing the aggregate and the cement and then using a steam injection process to add the water, or the use of a pressurized concrete fabrication plant that produces pre-cast concrete blocks.
Lunarcrete shares the same lack of tensile strength as terrestrial concrete. One suggested lunar equivalent tensioning material for creating pre-stressed concrete is lunar glass, also formed from regolith, much as fibreglass is already sometimes used as a terrestrial concrete reinforcement material. Another tensioning material, suggested by David Bennett, is Kevlar, imported from Earth (which would be cheaper, in terms of mass, to import from Earth than conventional steel).
Sulfur based "Waterless Concrete"
This proposal is based on the observation that water is likely to be a precious commodity on the Moon. Also sulfur gains strength in a very short time and doesn't need any period of cooling, unlike hydraulic cement. This would reduce the time that human astronauts would need to be exposed to the surface lunar environment.
Sulfur is present on the Moon in the form of the mineral troilite (FeS), which could be reduced to obtain elemental sulfur. Extraction of sulfur also does not require the ultra-high temperatures needed for extraction of cementitious components (e.g. anorthosites).
Sulfur concrete is an established construction material. Strictly speaking it isn't a concrete as there is little by way of chemical reaction. Instead the sulfur acts as a thermoplastic material binding with a non reactive substrate. Cement and water are not required. The concrete doesn't have to be cured, instead it is simply heated to above the melting point of sulfur, 140 °C, and after cooling it reaches high strength immediately.
The best mixture for tensile and compressive strength is 65% JSC-1 lunar regolith simulant and 35% sulfur, with an average compressive strength of 33.8 MPa and tensile strength of 3.7 MPa. Addition of 2% metal fiber increases the compressive strength to 43.0 MPa. Addition of silica also increases the strength of the concrete.
This sulfur concrete could be of especial value for dust minimization, for instance to create a launching pad for rockets leaving the Moon.
Issues with "Sulfur Concrete"
It provides less protection from cosmic radiation, so walls would need to be thicker than Portland-cement-based concrete walls (the water in concrete is an especially good absorber of cosmic radiation).
Sulfur melts at 115.2 °C, and lunar temperatures at low latitudes can reach 123 °C at midday. In addition, the temperature changes could change the volume of the sulfur concrete due to polymorphic transitions in the sulfur (see Allotropes of sulfur).
So unprotected sulfur concrete on the Moon, if directly exposed to the surface temperatures, would need to be limited to higher latitudes or shaded locations with maximum temperatures less than 96 °C and monthly variations not exceeding 114 °C.
The material would degrade through repeated temperature cycles, but the effects are likely to be less extreme on the Moon due to the slowness of the monthly temperature cycle. The outer few millimeters may be damaged through sputtering from impact of high energy particles from the solar wind and solar flares. This may however be easy to repair, by reheating or recoating the surface layers in order to sinter away cracks and heal the damage.
AstroCrete
AstroCrete is a concrete-like material proposed to be used on Moon or Mars made from regolith and human serum albumin (HSA), a protein from human blood. Scientists demonstrated that such material had compressive strengths as high as 25 MPa, while ordinary concrete had 20–32 MPa. By adding urea (byproduct in urine, sweat, and tears), the resultant material became substantially stronger than ordinary concrete, with 40 MPa of compressive strength.
As noted by the authors:
Researchers also experimented with synthetic spider silk and bovine serum albumin as regolith binders, noting that these materials could also be produced on Mars after advancements in biomanufacturing technology.
The idea behind AstroCrete is not new, as the authors acknowledge: "adhesives and binders of biological origin were widely utilized by humanity for millennia before the development of synthetic petroleum-derived adhesives. Tree resins, collagen from hooves, casein from cheese, and animal blood were all used as binders and additives for various applications".
Researchers calculated that a crew of 6 astronauts could produce over 500 kg of AstroCrete over the course of a two-year mission on the surface of Mars. Each astronaut "could produce enough additional habitat space to support another astronaut, potentially allowing the steady expansion of an early Martian colony".
In 2023, A. D. Roberts wrote a story on the testing of 'AstroCrete' as a building material on Mars, in order to overcome the challenge of obtaining bulk material for construction on the planet.
Use
David Bennett, of the British Cement Association, argues that lunarcrete has the following advantages as a construction material for lunar bases:
Lunarcrete production would require less energy than lunar production of steel, aluminium, or brick.
It is unaffected by temperature variations of +120 °C to −150 °C.
It will absorb gamma rays.
Material integrity is not affected by prolonged exposure to vacuum. Although free water will evaporate from the material, the water that is chemically bound as a result of the curing process will not.
He observes, however, that lunarcrete is not an airtight material, and to make it airtight would require the application of an epoxy coating to the interior of any lunarcrete structure.
Bennett suggests that hypothetical lunar buildings made of lunarcrete would most likely use a low-grade concrete block for interior compartments and rooms, and a high-grade dense silica particle cement-based concrete for exterior skins.
See also
Lunar resources
References
Further reading
— also:
External links
https://marspedia.org/Sintered_regolith
https://lunarpedia.org/w/Sintered_Regolith
Concrete
Building materials
Exploration of the Moon
Colonization of the Moon
Industry in space
Exploration of Mars
Colonization of Mars | Lunarcrete | [
"Physics",
"Astronomy",
"Engineering"
] | 1,888 | [
"Structural engineering",
"Industry in space",
"Outer space",
"Building engineering",
"Architecture",
"Construction",
"Materials",
"Concrete",
"Matter",
"Building materials"
] |
22,851,039 | https://en.wikipedia.org/wiki/Isbell%20duality | Isbell conjugacy (a.k.a. Isbell duality or Isbell adjunction) (named after John R. Isbell) is a fundamental construction of enriched category theory formally introduced by William Lawvere in 1986. That is a duality between covariant and contravariant representable presheaves associated with an objects of categories under the Yoneda embedding. In addition, Lawvere is states as follows; "Then the conjugacies are the first step toward expressing the duality between space and quantity fundamental to mathematics".
Definition
Yoneda embedding
The (covariant) Yoneda embedding is a covariant functor from a small category C into the category of presheaves on C, taking an object A to the contravariant representable functor Hom(−, A),
and the co-Yoneda embedding (a.k.a. the contravariant Yoneda embedding or the dual Yoneda embedding) is a contravariant functor from a small category C into the opposite of the category of co-presheaves on C, taking an object A to the covariant representable functor Hom(A, −).
Every presheaf X (a functor from C^op to Set) has an Isbell conjugate X*, a co-presheaf, given by X*(A) = Hom(X, Hom(−, A)).
In contrast, every co-presheaf Y (a functor from C to Set) has an Isbell conjugate Y*, a presheaf, given by Y*(A) = Hom(Y, Hom(A, −)).
Isbell duality
Isbell duality is the relationship between Yoneda embedding and co-Yoneda embedding;
Let V be a symmetric monoidal closed category, and let C be a small category enriched in V.
The Isbell duality is an adjunction O ⊣ Spec between the functor categories [C^op, V] and ([C, V])^op.
The functors of Isbell duality are given on objects by O(X)(A) = Hom(X, Hom(−, A)) and Spec(Y)(A) = Hom(Y, Hom(A, −)).
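For reference, one standard way of writing the two conjugation functors and the adjunction in the ordinary, Set-enriched case is sketched in LaTeX below; the symbols O, Spec, C and a are conventional choices and are not notation fixed by this article.

```latex
% Isbell conjugation for a small category C (Set-enriched case); a standard
% formulation with conventional names, not necessarily the article's own notation.
\[
  \mathcal{O}(X)(a) = [\mathbf{C}^{\mathrm{op}},\mathbf{Set}]\bigl(X,\ \mathbf{C}(-,a)\bigr),
  \qquad
  \operatorname{Spec}(Y)(a) = [\mathbf{C},\mathbf{Set}]\bigl(Y,\ \mathbf{C}(a,-)\bigr),
\]
\[
  \mathcal{O} \colon [\mathbf{C}^{\mathrm{op}},\mathbf{Set}]
  \rightleftarrows
  [\mathbf{C},\mathbf{Set}]^{\mathrm{op}} \colon \operatorname{Spec},
  \qquad
  \mathcal{O} \dashv \operatorname{Spec}.
\]
```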
See also
Kan extension
Limit (category theory)
Isbell completion
References
Bibliography
.
.
Footnote
External links
Category theory
Adjoint functors | Isbell duality | [
"Mathematics"
] | 382 | [
"Functions and mappings",
"Mathematical structures",
"Category theory stubs",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory"
] |
22,852,793 | https://en.wikipedia.org/wiki/Asset/liability%20modeling | Asset/liability modeling is the process used to manage the business and financial objectives of a financial institution or an individual through an assessment of the portfolio assets and liabilities in an integrated manner. The process is characterized by an ongoing review, modification and revision of asset and liability management strategies so that sensitivity to interest rate changes are confined within acceptable tolerance levels.
Different models use different elements based on specific needs and contexts. An individual or an organization may keep parts of the ALM process and outsource the modeling function or adapt the model according to the requirements and capabilities of relevant institutions such as banks, which often have their in-house modeling process. There is a vast array of models available today for practical asset and liability modeling and these have been the subject of several research studies.
Asset/liability modeling (pension)
In 2008, a financial crisis drove the 100 largest corporate pension plans to a record $300 billion loss of funded status. In the wake of those losses, many pension plan sponsors reexamined their pension plan asset allocation strategies, to consider risk exposures. A recent study indicates that many corporate defined benefit plans fail to address the full range of risks facing them, especially the ones related to liabilities. Too often, the study says, corporate pensions are distracted by concerns that have nothing to do with the long-term health of the fund. Asset/liability modeling is an approach to examining pension risks and allows the sponsor to set informed policies for funding, benefit design and asset allocation.
Asset/liability modeling goes beyond the traditional, asset-only analysis of the asset-allocation decision. Traditional asset-only models analyze risk and rewards in terms of investment performance. Asset/liability models take a comprehensive approach to analyze risk and rewards in terms of the overall pension plan impact. An actuary or investment consultant may look at expectations and downside risk measures on the present value of contributions, plan surplus, excess returns (asset return less liability return), asset returns and any number of other variables. The model may consider measures over 5-, 10- or 20-year horizons, as well as quarterly or annual value at risk measures.
Pension plans face a variety of liability risks, including price and wage inflation, interest rate, and longevity. While some of these risks materialize slowly over time, others – such as interest rate risk – are felt with each measurement period. Liabilities are the actuarial present value of future plan cash flows, discounted at current interest rates. Thus, asset/liability management strategies often include bonds and swaps or other derivatives to accomplish some degree of interest rate hedging (immunization, cash flow matching, duration matching, etc.). Such approaches are sometimes called “liability-driven investment” (LDI) strategies. In 2008, plans with such approaches strongly outperformed those with traditional “total return” seeking investment policies.
Asset/liability studies
Successful asset/liability studies:
Increase a plan sponsor’s understanding of the pension plan’s current situation and likely future trends
Highlight key asset and liability risks that should be considered
Help establish a cohesive risk-management framework
Analyze surplus return, standard deviation, funding status, contribution requirements and balance-sheet impacts
Consider customized risk measures based on the plan sponsor, plan design and time horizon
Help design an appropriate strategic investment strategy
Provide insight into current market dislocations and practical implications for the near term
Historically, most pension plan sponsors conducted comprehensive asset/liability studies every three to five years, or after a significant change in demographics, plan design, funding status, sponsor circumstances, or funding legislation. Recent trends suggest more frequent studies and/or a desire for regular tracking of key asset/liability risk metrics in between formal studies.
Additional challenges
In the United States, the Pension Protection Act of 2006 (PPA) introduced stricter standards on pension plans, requiring higher funding targets and larger contributions from plan sponsors. With growing deficits and PPA funding requirements looming large, there is an unprecedented need for asset/liability modeling and overall pension risk management.
Asset/liability modeling for individuals
Some financial advisors offer Monte Carlo simulation tools aimed at helping individuals plan for retirement. These tools are designed to model the individual’s likelihood of assets surpassing expenses (liabilities).
Proponents of Monte Carlo simulation contend that these tools are valuable because they offer simulation using randomly ordered returns based on a set of reasonable parameters. For example, the tool can model retirement cash flows 500 or 1,000 times, reflecting a range of possible outcomes.
Some critics of these tools claim that the consequences of failure are not laid out and argue that these tools are no better than typical retirement tools that use standard assumptions. Recent financial turmoil has fueled the claims of critics who believe that Monte Carlo simulation tools are inaccurate and overly optimistic.
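As a minimal sketch of the kind of calculation these tools perform, the following Python function simulates yearly portfolio returns and spending and reports the fraction of paths in which the assets are not exhausted. The function name, the normally distributed return model, and all numerical parameter values are illustrative assumptions rather than features of any particular advisor's product.

```python
import numpy as np

def retirement_success_rate(start_assets, annual_spend, years,
                            n_paths=1000, mean_return=0.06, vol=0.12, seed=0):
    """Fraction of simulated paths in which the assets outlast the spending.
    All model choices and default values here are placeholders for illustration."""
    rng = np.random.default_rng(seed)
    survived = 0
    for _ in range(n_paths):
        assets = start_assets
        for _ in range(years):
            assets = assets * (1.0 + rng.normal(mean_return, vol)) - annual_spend
            if assets <= 0.0:
                break
        survived += assets > 0.0
    return survived / n_paths

# e.g. retirement_success_rate(1_000_000, 50_000, 30) estimates the probability
# that a $1M portfolio supports $50k/year of spending for 30 years.
```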
See also
Net worth
High-net-worth individual
References
External links
Implementing Asset/Liability Management - A User’s Guide to ALM, LDI and Other Three-Letter Words, Society of Actuaries
Application of a Linear Regression Model to the Proactive Investment Strategy of a Pension Fund, Society of Actuaries
Beyond Rebalancing: Rethinking long-term asset allocation, J.P. Morgan [PAGE NOT FOUND]
MetLife U.S. Pension Behavior IndexSM, MetLife
Actuarial science
Pensions
Investment
Liability (financial accounting)
Retirement
Personal finance
Asset | Asset/liability modeling | [
"Mathematics"
] | 1,075 | [
"Applied mathematics",
"Actuarial science"
] |
31,499,475 | https://en.wikipedia.org/wiki/Heated%20humidified%20high-flow%20therapy | Heated humidified high-flow therapy, often simply called high flow therapy , is a type of respiratory support that delivers a flow of medical gas to a patient of up to 60 liters per minute and 100% oxygen through a large bore or high flow nasal cannula. Primarily studied in neonates, it has also been found effective in some adults to treat hypoxemia and work of breathing issues. The key components of it are a gas blender, heated humidifier, heated circuit, and cannula.
History
The development of heated humidified high flow started in 1999 with Vapotherm introducing the concept of high flow use with race horses.
High flow was approved by the U.S. Food and Drug Administration in the early 2000s and used as an alternative to positive airway pressure for treatment of apnea of prematurity in neonates. The term high flow is relative to the size of the patient, which is why the flow rate used in children is set by weight, as just a few liters per minute can meet the inspiratory demands of a neonate, unlike in adults. It has since become popular for use in adults with respiratory failure.
Mechanism
The traditional low flow system used for medical gas delivery is the nasal cannula, which is limited to the delivery of 1–6 L/min of oxygen, or up to 15 L/min in certain types. This is because, even with quiet breathing, the inspiratory flow rate at the nares of an adult usually exceeds 30 L/min. Therefore, the oxygen provided is diluted with room air during inspiration. Being a high flow system means that it meets or exceeds the flow demands of the patient.
Oxygenation
Since it is a high flow system, it is able to maintain the wearer's fraction of inspired oxygen (FiO2) at the set level, because the patient should not be entraining ambient air. However, this may not be the case in patients who are poorly compliant with the therapy and are actively breathing through their mouth.
Ventilation
The flow can wash out some of the dead space in the upper airway. This can reduce slightly the amount of carbon dioxide rebreathed.
There is a correlation between the flow rate and mean airway pressure, and in some subjects there has been an increase in lung volumes and a decrease in respiratory rate. However, positive end expiratory pressure has only been measured at less than 3 cmH2O, meaning it cannot provide anything close to what a closed ventilatory system could provide. In neonates, however, it has been found that with a good fit and the mouth closed, it can provide end expiratory pressure comparable to nasal continuous positive airway pressure.
Humidification
The higher the flow, the more important proper humidification and heating of the flow become in preventing tissue irritation and mucus drying. It has been found that long-term use of flows of 20–25 L/min can help reduce symptoms of chronic obstructive pulmonary disease. This is because heat and humidity help mucociliary clearance, which is why high-flow therapy is assumed to help with mucus clearance better than other, less humidified methods.
Medical use
High-flow therapy is useful in patients who are spontaneously breathing but are in some type of respiratory failure. Hypoxemic respiratory failure, and certain cases of hypercapnic respiratory failure stemming from exacerbations of asthma and chronic obstructive pulmonary disease, bronchiolitis, pneumonia, or congestive heart failure, are all possible situations where high-flow therapy may be indicated.
Newborn babies
High-flow therapy has shown to be useful in neonatal intensive care settings for premature infants with Infant respiratory distress syndrome, as it prevents many infants from needing more invasive ventilatory treatments.
Due to the decreased effort needed to breathe, the neonatal body is able to direct more of its metabolic effort elsewhere, which leads to fewer days on a mechanical ventilator, faster weight gain, and an overall shorter hospital stay.
High-flow therapy has been successfully implemented in infants and older children. The cannula improves the respiratory distress, the oxygen saturation, and the patient's comfort. Its mechanism of action is the application of mild positive airway pressure and lung volume recruitment.
Hypoxemic respiratory failure
In high-flow therapy, clinicians can deliver higher FiO2 than is possible with typical oxygen delivery therapy without the use of a non-rebreather mask or tracheal intubation. Some patients requiring respiratory support for bronchospasm benefit using air delivered by high-flow therapy without additional oxygen. Patients can speak during use of high-flow therapy. As this is a non-invasive therapy, it avoids the risk of ventilator-associated pneumonia.
Use of nasal high flow in acute hypoxemic respiratory failure does not affect mortality or length of stay either in hospital or in the intensive care unit. It can however reduce the need for tracheal intubation and escalation of oxygenation and respiratory support.
Hypercapnic respiratory failure
Stable patients with hypercapnia on high-flow therapy have been found to have their carbon dioxide levels decrease by amounts similar to those seen with noninvasive ventilation, but evidence for its efficacy is still limited, and current practice guidelines still recommend noninvasive ventilation for those with exacerbations of chronic obstructive pulmonary disease and acidosis.
Other uses
Heated humidified high-flow therapy has been used in spontaneously breathing patients during general anesthesia to facilitate surgery for airway obstruction.
High flow therapy is useful in the treatment of sleep apnea.
References
Medical equipment
Respiratory therapy
Mechanical ventilation | Heated humidified high-flow therapy | [
"Biology"
] | 1,153 | [
"Medical equipment",
"Medical technology"
] |
31,500,459 | https://en.wikipedia.org/wiki/Protein%20Circular%20Dichroism%20Data%20Bank | The Protein Circular Dichroism Data Bank (PCDDB) is a database of circular dichroism and synchrotron radiation.
See also
Circular dichroism
Synchrotron Radiation Circular Dichroism
References
External links
http://pcddb.cryst.bbk.ac.uk
Protein databases
Polarization (waves)
Protein structure
Electromagnetic radiation | Protein Circular Dichroism Data Bank | [
"Physics",
"Chemistry"
] | 81 | [
"Physical phenomena",
"Electromagnetic radiation",
"Astrophysics",
"Radiation",
"Structural biology",
"Protein structure",
"Polarization (waves)"
] |
31,501,135 | https://en.wikipedia.org/wiki/Spectral%20acceleration | Spectral acceleration (SA) is a unit measured in g (the acceleration due to Earth's gravity, equivalent to g-force) that describes the maximum acceleration in an earthquake on an object – specifically a damped, harmonic oscillator moving in one physical dimension. This can be measured at (or specified for) different oscillation frequencies and with different degrees of damping, although 5% damping is commonly applied. The SA at different frequencies may be plotted to form a response spectrum.
Spectral acceleration, with a value related to the natural frequency of vibration of the building, is used in earthquake engineering and gives a closer approximation to the motion of a building or other structure in an earthquake than the peak ground acceleration (PGA) value, although there is normally a correlation between short-period SA and PGA.
Some seismic hazard maps are also produced using spectral acceleration.
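The following Python sketch shows one simple way to obtain a spectral-acceleration value from a ground-acceleration record, by stepping a damped single-degree-of-freedom oscillator through time and reporting the pseudo-acceleration ω²·max|u|. The crude semi-implicit Euler time stepping, the function name and the default 5% damping are simplifying assumptions made for illustration, not a reference implementation.

```python
import numpy as np

def spectral_acceleration(ag, dt, period, damping=0.05):
    """Pseudo-spectral acceleration (same units as ag) of a damped single-degree-
    of-freedom oscillator with the given natural period, for ground-acceleration
    samples ag spaced dt seconds apart."""
    wn = 2.0 * np.pi / period            # natural circular frequency
    u = v = 0.0                          # relative displacement and velocity
    umax = 0.0
    for a in ag:
        # oscillator equation: u'' + 2*zeta*wn*u' + wn**2*u = -ag(t)
        acc = -a - 2.0 * damping * wn * v - wn ** 2 * u
        v += acc * dt                    # crude semi-implicit Euler step
        u += v * dt
        umax = max(umax, abs(u))
    return wn ** 2 * umax                # pseudo-acceleration

# e.g. spectral_acceleration(record, dt=0.01, period=1.0) for the 1-second SA of a
# record sampled at 100 Hz (the record and sampling rate here are hypothetical).
```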
See also
Seismic scale
External links
Spectral Acceleration Hazard Map of California – for 1 sec period
2005 National Building Code of Canada – Spectral Acceleration Hazard Maps for various periods
Revision of Time-Independent Probabilistic Seismic Hazard Maps for Alaska
What is a ground shaking hazard map? – Includes explanations of SA and PGA
References
Seismology
Earthquake engineering | Spectral acceleration | [
"Engineering"
] | 241 | [
"Structural engineering",
"Earthquake engineering",
"Civil engineering"
] |
31,501,543 | https://en.wikipedia.org/wiki/Coffman%E2%80%93Graham%20algorithm | The Coffman–Graham algorithm is an algorithm for arranging the elements of a partially ordered set into a sequence of levels. The algorithm chooses an arrangement such that an element that comes after another in the order is assigned to a lower level, and such that each level has a number of elements that does not exceed a fixed width bound . When , it uses the minimum possible number of distinct levels, and in general it uses at most times as many levels as necessary.
It is named after Edward G. Coffman, Jr. and Ronald Graham, who published it in 1972 for an application in job shop scheduling. In this application, the elements to be ordered are jobs, the bound W is the number of jobs that can be scheduled at any one time, and the partial order describes prerequisite relations between the jobs. The goal is to find a schedule that completes all jobs in minimum total time. Subsequently, the same algorithm has also been used in graph drawing, as a way of placing the vertices of a directed graph into layers of fixed widths so that most or all edges are directed consistently downwards.
For a partial ordering given by its transitive reduction (covering relation), the Coffman–Graham algorithm can be implemented in linear time using the partition refinement data structure as a subroutine. If the transitive reduction is not given, it takes polynomial time to construct it.
Problem statement and applications
In the version of the job shop scheduling problem solved by the Coffman–Graham algorithm, one is given a set of n jobs, together with a system of precedence constraints requiring that certain jobs be completed before other jobs begin. Each job is assumed to take unit time to complete. The scheduling task is to assign each of these jobs to time slots on a system of W identical processors, minimizing the makespan of the assignment (the time from the beginning of the first job until the completion of the final job). Abstractly, the precedence constraints define a partial order on the jobs, so the problem can be rephrased as one of assigning the elements of this partial order to levels (time slots) in such a way that each time slot has at most as many jobs as processors (at most W elements per level), respecting the precedence constraints. This application was the original motivation for Coffman and Graham to develop their algorithm.
In the layered graph drawing framework, the input is a directed graph, and a drawing of the graph is constructed in several stages:
A feedback arc set is chosen, and the edges of this set reversed, in order to convert the input into a directed acyclic graph with (if possible) few reversed edges.
The vertices of the graph are given integer y-coordinates in such a way that, for each edge, the starting vertex of the edge has a higher coordinate than the ending vertex, with at most W vertices sharing the same y-coordinate. In this way, all edges of the directed acyclic graph and most edges of the original graph will be oriented consistently downwards.
Dummy vertices are introduced within each edge so that the subdivided edges all connect pairs of vertices that are in adjacent levels of the drawing.
Within each group of vertices with the same y-coordinate, the vertices are permuted in order to minimize the number of crossings in the resulting drawing, and the vertices are assigned x-coordinates consistently with this permutation.
The vertices and edges of the graph are drawn with the coordinates assigned to them.
In this framework, the y-coordinate assignment again involves grouping elements of a partially ordered set (the vertices of the graph, with the reachability ordering on the vertex set) into layers (sets of vertices with the same y-coordinate), which is the problem solved by the Coffman–Graham algorithm. Although there exist alternatives to the Coffman–Graham algorithm for the layering step, these alternatives in general are either not able to incorporate a bound on the maximum width of a level or rely on complex integer programming procedures.
More abstractly, both of these problems can be formalized as a problem in which the input consists of a partially ordered set and an integer W. The desired output is an assignment of integer level numbers to the elements of the partially ordered set such that, if x < y is an ordered pair of related elements of the partial order, the number assigned to x is smaller than the number assigned to y, such that at most W elements are assigned the same number as each other, and minimizing the difference between the smallest and the largest assigned numbers.
The algorithm
The Coffman–Graham algorithm performs the following steps.
Represent the partial order by its transitive reduction or covering relation, a directed acyclic graph G that has an edge from x to y whenever x < y and there does not exist any third element z of the partial order for which x < z < y. In the graph drawing applications of the Coffman–Graham algorithm, the resulting directed acyclic graph may not be the same as the graph being drawn, and in the scheduling applications it may not have an edge for every precedence constraint of the input: in both cases, the transitive reduction removes redundant edges that are not necessary for defining the partial order.
Construct a topological ordering of G in which the vertices are ordered lexicographically by the set of positions of their incoming neighbors. To do so, add the vertices one at a time to the ordering, at each step choosing a vertex v to add such that the incoming neighbors of v are all already part of the ordering, and such that the most recently added incoming neighbor of v is earlier than the most recently added incoming neighbor of any other vertex that could be added in place of v. If two vertices have the same most recently added incoming neighbor, the algorithm breaks the tie in favor of the one whose second most recently added incoming neighbor is earlier, etc.
Assign the vertices of G to levels in the reverse of the topological ordering constructed in the previous step. For each vertex v, add v to a level that is at least one step higher than the highest level of any outgoing neighbor of v, that does not already have W elements assigned to it, and that is as low as possible subject to these two constraints.
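A compact (and deliberately non-linear-time) Python sketch of the three steps above is given below. The dictionary-based input format, the function and variable names, and the repeated scan used for the lexicographic tie-breaking are illustrative choices, not part of the published algorithm.

```python
def coffman_graham(successors, width):
    """Assign levels to the elements of a partial order, given the successor sets
    of its transitive reduction (a DAG), with at most `width` elements per level.
    Straightforward quadratic sketch, not the linear-time implementation
    discussed in the Analysis section."""
    nodes = set(successors) | {v for vs in successors.values() for v in vs}
    preds = {v: set() for v in nodes}
    for u, vs in successors.items():
        for v in vs:
            preds[v].add(u)

    # Step 2: topological order; among the ready vertices, pick the one whose
    # most recently ordered incoming neighbour is earliest (ties broken by the
    # next most recent, and so on), comparing position lists lexicographically.
    position, order = {}, []
    while len(order) < len(nodes):
        ready = [v for v in nodes
                 if v not in position and all(u in position for u in preds[v])]
        v = min(ready, key=lambda w: sorted((position[u] for u in preds[w]),
                                            reverse=True))
        position[v] = len(order)
        order.append(v)

    # Step 3: scan in reverse topological order, placing each vertex on the
    # lowest level that is above all of its outgoing neighbours and has room.
    level, filled = {}, {}
    for v in reversed(order):
        lv = 1 + max((level[w] for w in successors.get(v, ())), default=-1)
        while filled.get(lv, 0) >= width:
            lv += 1
        level[v] = lv
        filled[lv] = filled.get(lv, 0) + 1
    return level

# Example (hypothetical data): coffman_graham({'a': {'c'}, 'b': {'c', 'd'},
#                                              'c': {'e'}, 'd': {'e'}, 'e': set()}, 2)
```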
Analysis
Output quality
As originally proved, the algorithm computes an optimal assignment for W = 2; that is, for scheduling problems with unit length jobs on two processors, or for layered graph drawing problems with at most two vertices per layer. A closely related algorithm also finds the optimal solution for scheduling of jobs with varying lengths, allowing pre-emption of scheduled jobs, on two processors. For W ≥ 2, the Coffman–Graham algorithm uses a number of levels (or computes a schedule with a makespan) that is within a factor of 2 − 2/W of optimal. For instance, for W = 3, this means that it uses at most 4/3 times as many levels as is optimal. When the partial order of precedence constraints is an interval order, or belongs to several related classes of partial orders, the Coffman–Graham algorithm finds a solution with the minimum number of levels regardless of its width bound.
As well as finding schedules with small makespan, the Coffman–Graham algorithm (modified from the presentation here so that it topologically orders the reverse of the input graph and places the vertices as early as possible rather than as late as possible) minimizes the total flow time of two-processor schedules, the sum of the completion times of the individual jobs. A related algorithm can be used to minimize the total flow time for a version of the problem in which preemption of jobs is allowed.
Time complexity
The time complexity of the Coffman–Graham algorithm, on an n-element partial order, has been stated to be O(n²). However, this analysis omits the time for constructing the transitive reduction, which is not known to be possible within this bound. Sethi shows how to implement the topological ordering stage of the algorithm in linear time, based on the idea of partition refinement. Sethi also shows how to implement the level assignment stage of the algorithm efficiently by using a disjoint-set data structure. In particular, with a version of this structure published later, this stage also takes linear time.
References
Graph drawing
Processor scheduling algorithms
Optimal scheduling | Coffman–Graham algorithm | [
"Engineering"
] | 1,636 | [
"Optimal scheduling",
"Industrial engineering"
] |
31,502,236 | https://en.wikipedia.org/wiki/Duramold | Duramold is a composite material process developed by Virginius E. Clark. Birch or poplar plies are impregnated with phenolic resin and laminated together in a mold under heat (280 °F, 138 °C) and pressure for use as a lightweight structural material. Similar to plywood, Duramold and other lightweight composite materials like the similar Haskelite were considered critical during periods of material shortage in World War II, replacing scarce materials such as aluminum alloys and steel.
The material has some advantages over metal in strength, construction technique, and weight. A cylinder made of Duramold is 80% stronger than a cylinder made of aluminum. Over 17 varieties of Duramold were developed, using various combinations of types of wood in thin plies. The Duramold process has also been used to make radomes for aircraft, as well as missile bodies.
Virginius Clark developed Duramold for Fairchild Aircraft, working with George Meyercord of the Haskelite Corporation. Fairchild patented the process, designing and constructing the F-46 as the first aircraft made using the Duramold process, and forming the Duramold Corporation. Several aircraft used Duramold in parts of their structure, the largest manufactured with the process being the Hughes H-4 Hercules designed by Howard Hughes and Glenn Odekirk, which was built almost completely with Duramold including very large sections. For this use, Hughes Aircraft bought the rights to the use of Duramold on aircraft exceeding 20,000 lb; Fairchild and Meyercord otherwise retained the rights, but the material was found to be poorly adapted to heavy aircraft.
The Duramold and Haskelite process was first developed in 1937, followed by Gene Vidal's Weldwood and later the Timm Aircraft Company's Aeromold process, which differs in that it is baked at a low 100 °F (38 °C) during cutting and forming, and at 180 °F (82 °C) for fusing sections together after the resins are added. In the United Kingdom, the De Havilland Aircraft Company (founded by Geoffrey de Havilland, a cousin of Olivia de Havilland, the actress who dated Howard Hughes in 1938) used similar composite construction for aircraft, including the DH.88 Comet, DH.91 Albatross, the Mosquito, and Vampire.
See also
Tego film
Aerolite (adhesive)
Stitch and glue
References
Aircraft components
Aerospace materials
Plywood | Duramold | [
"Engineering"
] | 494 | [
"Aerospace materials",
"Aerospace engineering"
] |
31,505,928 | https://en.wikipedia.org/wiki/Green%20bridge%20%28filtration%20system%29 | Green bridges are an ecotechnological in-situ bio remediation system. Their different physical and biological filters work in combination to remove suspended and dissolved impurities of water. Green bridge filters help in reducing the suspended solids by filtration process, reducing Chemical Oxygen Demand (COD)/Biochemical Oxygen Demand (BOD) by aerobic degradation. Green Bridges also help in the restoration of ecological food chain.
Development
Natural streams, rivers and lakes have their own in-built purification system, which consists of natural slopes, stones that support biological growth, and a complex food web that helps in the purification process. In this food web, the waste of one organism is used by another as its own food. Nature has her own living machinery of detritivorous microbes and other living species to consume wastes. These principles have been harnessed in the treatment of polluted streams.
Green bridges are developed using fibrous material with stones. All the floatable and suspended solids are trapped in this biological bridge and the turbidity of flowing water is reduced. Green plants on the bridges increase the dissolved oxygen (DO) level in water, which in turn facilitates the growth of aerobic organisms, which degrade organic pollutants. Sandeep Joshi, director of SERI (Shrishti Eco-Research Institute), developed this technology and has received a patent for it.
Benefits
Capital expenditure is comparable with the annual operational cost of conventional bioremediation systems.
Can be developed and operated in combination with conventional systems to improve the performance of the latter.
Reduce the ecotoxicity of the man-made substances released into the water bodies and facilitate the eco-assimilation of those pollutants into the ecological cycles thus reducing the quantum of hazardous residues to zero which otherwise require costly secured landfill and incineration techniques.
Expected results
Solids control : 40–80% reduction
Pollution control : COD/BOD reduction – 40–90%
Fecal coliforms control : 50–100% reduction
Ecotoxicity : Nil
Dissolved oxygen : Increased by 150% – 1200%
Aquatic species :
Increase in plants/plankton – 200%
Increase in micro-invertebrates – 200%
Other than the changes in water quality mentioned above, a multifold change in the populations of avifauna and of terrestrial plants along the riverbanks has been noticed. There is an overall reduction in odour and mosquitoes and an improvement of river aesthetics, as well as an increase in the health status of aquatic life in the lentic-lotic system through a reduction in the ecotoxicity of pollutants.
References
External links
http://moef.nic.in/downloads/public-information/press-note-launch-of-bio-remediation-project-ludhiana.pdf
Badal gives go ahead to NEER project for Budha Nullha LivePunjab
- Express India
Green bridge tech helps restore nullah - Times Of India
India Together: Cost-effective technology stalled by Pune government - 31 May 2010
Aquatic ecology
Environmental engineering
Pollution control technologies
Sewerage
Water treatment
Sanitation
Sewerage infrastructure | Green bridge (filtration system) | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 613 | [
"Water treatment",
"Chemical engineering",
"Sewerage infrastructure",
"Pollution control technologies",
"Water pollution",
"Sewerage",
"Civil engineering",
"Ecosystems",
"Environmental engineering",
"Water technology",
"Aquatic ecology"
] |
31,508,894 | https://en.wikipedia.org/wiki/J-aggregate | A J-aggregate is a type of dye with an absorption band that shifts to a longer wavelength (bathochromic shift) of increasing sharpness (higher absorption coefficient) when it aggregates under the influence of a solvent or additive or concentration as a result of supramolecular self-organisation. The dye can be characterized further by a small Stokes shift with a narrow band. The J in J-aggregate refers to E.E. Jelley who discovered the phenomenon in 1936. The dye is also called a Scheibe aggregate after G. Scheibe who also independently published on this topic in 1937.
Scheibe and Jelley independently observed that in ethanol the dye PIC chloride has two broad absorption maxima at around 19,000 cm−1 and 20,500 cm−1 (526 and 488 nm respectively) and that in water a third sharp absorption maximum appears at 17,500 cm−1 (571 nm). The intensity of this band further increases on increasing concentration and on adding sodium chloride. In the oldest aggregation model for PIC chloride the individual molecules are stacked like a roll of coins forming a supramolecular polymer but the true nature of this aggregation phenomenon is still under investigation. Analysis is complicated because PIC chloride is not a planar molecule. The molecular axis can tilt in the stack creating a helix pattern. In other models the dye molecules orient themselves in a brickwork, ladder, or staircase fashion. In various experiments the J-band was found to split as a function of temperature, liquid crystal phases were found with concentrated solutions and CryoTEM revealed aggregate rods 350 nm long and 2.3 nm in diameter.
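The wavelengths quoted in parentheses above follow from the usual conversion λ [nm] = 10⁷ / ν̃ [cm⁻¹]; the short Python check below simply reproduces that arithmetic.

```python
# lambda [nm] = 1e7 / wavenumber [cm^-1], applied to the bands quoted above
for wavenumber in (19000, 20500, 17500):
    print(f"{wavenumber} cm^-1  ->  {1e7 / wavenumber:.0f} nm")
# prints approximately 526 nm, 488 nm and 571 nm respectively
```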
J-aggregate dyes are found among polymethine dyes in general, including cyanines, merocyanines, squaraines and perylene bisimides. Certain π-conjugated macrocycles, reported by Swager and co-workers at MIT, were also found to form J-aggregates and exhibited exceptionally high photoluminescence quantum yields. In 2020, a well-known cyanine dye (TDBC) was reported with enhanced photoluminescence quantum yield (> 50%) in solution at room temperature.
Molecular PIC aggregates exhibiting J-like properties have been shown to spontaneously template into sequence-specific DNA duplex strands. These DNA-based J-aggregates, known as J-bits, have been sought after as a bottom-up method of self-assembling PIC J-aggregates into large-scale multi-functional DNA scaffolds. Critically, J-bits have been observed to engage in energy transfer when in proximity to quantum dots as well as organic dyes such as Alexa Fluor dyes. Prototypical DNA energy-transfer arrays, which are based on the molecular photonic wire design, use FRET to transfer excitons step-wise down an energy gradient. Since the FRET efficiency between two fluorophores decays with the sixth power of their separation distance, the spatial extent of these systems is highly constrained. It is hypothesized that integrating J-bit relays between FRET nodes would allow some of this energy loss to be recouped. In theory, dense packing and rigid alignment of the PIC monomers enable superposition of the transition dipoles, allowing excitons to propagate through the length of the aggregate with low loss.
See also
H-aggregates, in which a hypsochromic shift is observed with low or no fluorescence.
References
Dyes
Supramolecular chemistry
Fluorescence
Materials
Absorption spectroscopy | J-aggregate | [
"Physics",
"Chemistry",
"Materials_science"
] | 728 | [
"Luminescence",
"Fluorescence",
"Spectrum (physical sciences)",
"Absorption spectroscopy",
"Materials",
"nan",
"Nanotechnology",
"Spectroscopy",
"Matter",
"Supramolecular chemistry"
] |
35,685,954 | https://en.wikipedia.org/wiki/Generalized%20distributive%20law | The generalized distributive law (GDL) is a generalization of the distributive property which gives rise to a general message passing algorithm. It is a synthesis of the work of many authors in the information theory, digital communications, signal processing, statistics, and artificial intelligence communities. The law and algorithm were introduced in a semi-tutorial by Srinivas M. Aji and Robert J. McEliece with the same title.
Introduction
"The distributive law in mathematics is the law relating the operations of multiplication and addition, stated symbolically as a(b + c) = ab + ac; that is, the monomial factor a is distributed, or separately applied, to each term of the binomial factor b + c, resulting in the product ab + ac" - Britannica
As can be observed from the definition, applying the distributive law to an arithmetic expression reduces the number of operations in it. In the previous example the total number of operations is reduced from three (two multiplications and one addition in ab + ac) to two (one multiplication and one addition in a(b + c)). Generalization of the distributive law leads to a large family of fast algorithms, including the FFT and the Viterbi algorithm.
This is explained in a more formal way in the example below:
where and are real-valued functions, and (say)
Here we are "marginalizing out" the independent variables (, , and ) to obtain the result. When calculating the computational complexity, we can see that for each pair of , there are terms due to the triplet which need to take part in the evaluation of , with each step having one addition and one multiplication. Therefore, the total number of computations needed is . Hence the asymptotic complexity of the above function is .
If we apply the distributive law to the RHS of the equation, we get the following:
This implies that can be described as a product where and
Now, when we are calculating the computational complexity, we can see that there are additions in and each and there are multiplications when we are using the product to evaluate . Therefore, the total number of computations needed is . Hence the asymptotic complexity of calculating reduces to from . This shows by example that applying the distributive law reduces the computational complexity, which is one of the good features of a "fast algorithm".
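As a concrete illustration of the saving, the following Python sketch marginalizes a product of two randomly generated local functions over two of their variables, once by brute force and once after applying the distributive law, and counts the arithmetic operations. The functions f and g, the alphabet size Q, and the operation counters are made-up stand-ins for illustration; they are not the specific example discussed above.

```python
import itertools, random

Q = 20                                    # alphabet size of each variable (made up)
f = {(x1, x2): random.random() for x1 in range(Q) for x2 in range(Q)}
g = {(x2, x3): random.random() for x2 in range(Q) for x3 in range(Q)}

# Goal: h(x1) = sum over x2, x3 of f(x1, x2) * g(x2, x3)

# Brute force: roughly Q**3 multiplications and additions.
ops, h_naive = 0, {}
for x1 in range(Q):
    total = 0.0
    for x2, x3 in itertools.product(range(Q), repeat=2):
        total += f[x1, x2] * g[x2, x3]
        ops += 2                          # one multiplication, one addition
    h_naive[x1] = total
print("brute-force operations:", ops)

# Distributive law: h(x1) = sum over x2 of f(x1, x2) * (sum over x3 of g(x2, x3)),
# which needs only on the order of Q**2 operations.
ops, gsum = 0, {}
for x2 in range(Q):
    gsum[x2] = 0.0
    for x3 in range(Q):
        gsum[x2] += g[x2, x3]
        ops += 1
h_fast = {}
for x1 in range(Q):
    total = 0.0
    for x2 in range(Q):
        total += f[x1, x2] * gsum[x2]
        ops += 2
    h_fast[x1] = total
print("factored operations:   ", ops)

assert all(abs(h_naive[x1] - h_fast[x1]) < 1e-9 for x1 in range(Q))
```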
History
Some of the problems that have been solved using the distributive law can be grouped as follows
1. Decoding algorithms
A GDL-like algorithm was used by Gallager for decoding low-density parity-check codes. Based on Gallager's work, Tanner introduced the Tanner graph and expressed Gallager's algorithm in message-passing form. The Tanner graph also helped explain the Viterbi algorithm.
Forney observed that Viterbi's maximum-likelihood decoding of convolutional codes also used algorithms of GDL-like generality.
2. Forward-backward algorithm
The forward–backward algorithm, used for tracking the states of a Markov chain, is another algorithm of GDL-like generality.
3. Artificial intelligence
The notion of junction trees has been used to solve many problems in AI. The concept of bucket elimination also builds on many of the same ideas.
The MPF problem
MPF, or "marginalize a product function", is a general computational problem which includes as special cases many classical problems, such as computation of the discrete Hadamard transform, maximum-likelihood decoding of a linear code over a memoryless channel, and matrix chain multiplication. The power of the GDL lies in the fact that it applies to situations in which additions and multiplications are generalized.
A commutative semiring is a good framework for explaining this behavior. It is defined over a set with two operations, "+" and "·", each of which makes the set a commutative monoid, and for which the distributive law holds.
Let be variables such that where is a finite set and . Here . If and , let
,
,
,
, and
Let where . Suppose a function is defined as , where is a commutative semiring. Also, are named the local domains and as the local kernels.
Now the global kernel is defined as :
Definition of MPF problem: For one or more indices , compute a table of the values of -marginalization of the global kernel , which is the function defined as
Here is the complement of with respect to and the is called the objective function, or the objective function at . It can be observed that the computation of the objective function in the obvious way needs operations. This is because there are additions and multiplications needed in the computation of the objective function. The GDL algorithm, which is explained in the next section, can reduce this computational complexity.
The following is an example of the MPF problem.
Let and be variables such that and . Here and . The given functions using these variables are and and we need to calculate and defined as:
Here local domains and local kernels are defined as follows:
where is the objective function and is the objective function.
Consider another example where and is a real valued function. Now, we shall consider the MPF problem where the commutative semiring is defined as the set of real numbers with ordinary addition and multiplication and the local domains and local kernels are defined as follows:
Now since the global kernel is defined as the product of the local kernels, it is
and the objective function at the local domain is
This is the Hadamard transform of the function . Hence we can see that the computation of the Hadamard transform is a special case of the MPF problem. More examples can be given to show that many classical problems, as mentioned above, are special cases of the MPF problem; details can be found in the references.
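As an illustration of the kind of savings the GDL yields for this example, the following Python sketch compares a naive Hadamard transform with the factored (fast Walsh–Hadamard) computation; the ±1 kernel used is the standard one and is assumed here rather than taken from the text.

# A minimal sketch: the naive Hadamard transform costs O(4^n) operations, while the
# factored, GDL-style computation (the fast Walsh-Hadamard transform) costs O(n * 2^n).
def hadamard_naive(f):
    out = []
    for y in range(len(f)):
        acc = 0.0
        for x in range(len(f)):
            sign = -1 if bin(x & y).count("1") % 2 else 1   # (-1)^(x . y)
            acc += sign * f[x]
        out.append(acc)
    return out

def hadamard_fast(f):
    f = list(f)
    h = 1
    while h < len(f):                 # one butterfly pass per variable
        for i in range(0, len(f), 2 * h):
            for j in range(i, i + h):
                a, b = f[j], f[j + h]
                f[j], f[j + h] = a + b, a - b
        h *= 2
    return f

f = [1.0, 0.0, 3.0, -2.0, 0.5, 4.0, 1.0, 2.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(hadamard_naive(f), hadamard_fast(f)))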
GDL: an algorithm for solving the MPF problem
If one can find a relationship among the elements of a given set , then one can solve the MPF problem based on the notion of belief propagation, which is a special use of the "message passing" technique. The required relationship is that the given set of local domains can be organised into a junction tree. In other words, we create a graph-theoretic tree with the elements of as the vertices of the tree, such that for any two vertices and , the intersection of the corresponding labels, viz , is a subset of the label on each vertex on the unique path from to .
For example,
Example 1: Consider the following nine local domains:
For the above given set of local domains, one can organize them into a junction tree as shown below:
Similarly, if another set like the following is given:
Example 2: Consider the following four local domains:
Then constructing the tree with only these local domains is not possible, since this set has no common domain that could be placed between any two of the given local domains. However, if we add the two dummy domains shown below, then organizing the updated set into a junction tree becomes possible and easy.
5.,
6.,
For this set of domains, the junction tree looks as shown below:
Generalized distributive law (GDL) algorithm
Input: A set of local domains.
Output: For the given set of local domains, the minimum possible number of operations required to solve the problem is computed.
So, if and are connected by an edge in the junction tree, then a message from to is a set/table of values given by a function: :. To begin with, all the messages, i.e. for all pairs of adjacent vertices and in the given tree, are defined to be identically 1, and when a particular message is updated, it follows the equation given below.
=
where means that is an adjacent vertex to in tree.
Similarly, each vertex has a state, which is defined as a table containing the values from the function . Just as the messages are initialized identically to 1, the state of is initially defined to be the local kernel ; whenever the state gets updated, it follows the following equation:
Basic working of the algorithm
For the given set of local domains as input, we first determine whether a junction tree can be created, either by using the set directly or by first adding dummy domains to the set. If constructing a junction tree is not possible, the algorithm reports that there is no way to reduce the number of steps needed to compute the given problem. Once we have a junction tree, the algorithm must schedule messages and compute states; by doing this we can determine where steps can be reduced, as discussed below.
Scheduling of the message passing and the state computation
There are two special cases we are going to talk about here, namely the single-vertex problem, in which the objective function is computed at only one vertex , and the all-vertices problem, where the goal is to compute the objective function at all vertices.
Let's begin with the single-vertex problem. The GDL starts by directing each edge towards the targeted vertex . Here messages are sent only in the direction of the targeted vertex, and each directed message is sent only once. The messages start from the leaf nodes (where the degree is 1) and travel up towards the target vertex : from the leaves to their parents, then from there to their parents, and so on, until they reach the target vertex . The target vertex computes its state only when it has received all messages from all its neighbors. Once we have the state, we have the answer, and the algorithm terminates.
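The following Python sketch illustrates single-vertex message passing over a small junction tree; the tree, local domains, and kernels are hypothetical stand-ins, since the article's Figure 1 is not reproduced here.

# A minimal single-vertex GDL sketch over the (+, *) semiring. The target vertex is
# vertex 2, whose state ends up holding the x2-marginal of the global kernel
# f(x1,x2) * g(x2,x3) * h(x2).
import itertools, random

q = 3                                                         # alphabet size (assumed)
domains = {0: ("x1", "x2"), 1: ("x2", "x3"), 2: ("x2",)}      # local domains
adj = {0: [2], 1: [2], 2: [0, 1]}                             # junction-tree edges
kernels = {v: {vals: random.random()
               for vals in itertools.product(range(q), repeat=len(domains[v]))}
           for v in domains}

def send(v, w, msgs):
    """Message v -> w: marginalize (kernel_v * incoming msgs) onto the shared variables."""
    out = {}
    for vals, k in kernels[v].items():
        assign = dict(zip(domains[v], vals))
        prod = k
        for u in adj[v]:
            if u != w:                                         # all neighbours except w
                key = frozenset((n, assign[n]) for n in domains[u] if n in assign)
                prod *= msgs[(u, v)][key]
        key = frozenset((n, assign[n]) for n in domains[w] if n in assign)
        out[key] = out.get(key, 0.0) + prod
    return out

msgs = {(0, 2): send(0, 2, {}), (1, 2): send(1, 2, {})}        # leaves fire first
state = {x2: kernels[2][(x2,)] * msgs[(0, 2)][frozenset({("x2", x2)})]
                              * msgs[(1, 2)][frozenset({("x2", x2)})]
         for x2 in range(q)}

# Check against direct marginalization of the global kernel.
direct = {x2: sum(kernels[0][(x1, x2)] * kernels[1][(x2, x3)] * kernels[2][(x2,)]
                  for x1 in range(q) for x3 in range(q)) for x2 in range(q)}
assert all(abs(state[x2] - direct[x2]) < 1e-9 for x2 in range(q))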
For Example, let us consider a junction tree constructed from the set of local domains given above i.e. the set from example 1, Now the Scheduling table for these domains is (where the target vertex is ).
Thus the complexity for Single Vertex GDL can be shown as
arithmetic operations
where (note: the above equation is explained later in the article)
is the label of .
is the degree of (i.e. number of vertices adjacent to v).
To solve the all-vertices problem, we can schedule the GDL in several ways. One is a fully parallel implementation in which, in each round, every state is updated and every message is computed and transmitted at the same time. In this type of implementation the states and messages stabilize after a number of rounds that is at most equal to the diameter of the tree. At this point all the states of the vertices will be equal to the desired objective function at each vertex.
Another way to schedule the GDL for this problem is a serial implementation, similar to the single-vertex problem, except that the algorithm does not stop until all the vertices of the required set have received all the messages from all their neighbors and have computed their states.
Thus the number of arithmetic operations this implementation requires is at most .
Constructing a junction tree
The key to constructing a junction tree lies in the local domain graph , which is a weighted complete graph with vertices, i.e. one for each local domain, with the weight of the edge defined by
.
If , then we say is contained in . Denote by the weight of a maximal-weight spanning tree of , which is defined by
where n is the number of elements in that set. For more clarity and details, please refer to the references.
Scheduling theorem
Let be a junction tree with vertex set and edge set . In this algorithm, the messages are sent in both directions on any edge, so we can regard the edge set E as a set of ordered pairs of vertices. For example, from Figure 1, can be defined as follows:
NOTE: The set above gives all the possible directions in which a message can travel in the tree.
The schedule for the GDL is defined as a finite sequence of subsets of the edge set, generally represented by
{}, where is the set of messages updated during the -th round of running the algorithm.
Having defined this notation, we can now state what the theorem says.
Given a schedule , the corresponding message trellis is a finite directed graph with vertex set , in which a typical element is denoted by for . After completion of the message passing, the state at vertex will be the objective function defined in
and iff there is a path from to
Computational complexity
Here we try to explain the complexity of solving the MPF problem in terms of the number of mathematical operations required for the calculation, i.e. we compare the number of operations required when calculating with the normal method (by "normal method" we mean methods that do not use message passing or junction trees; in short, methods that do not use the concepts of the GDL) and the number of operations when using the generalized distributive law.
Example: Consider the simplest case, where we need to compute an expression of the form ab + ac.
To evaluate this expression naively requires two multiplications and one addition. When the expression is rewritten using the distributive law as a(b + c), the number of operations is reduced to one addition and one multiplication.
Similar to the above explained example we will be expressing the equations in different forms to perform as few operation as possible by applying the GDL.
As explained in the previous sections, we solve the problem by using the concept of junction trees. The optimization obtained by the use of these trees is comparable to the optimization obtained by solving a semigroup problem on trees. For example, to find the minimum of a group of numbers, we can observe that if we have a tree with all the elements at the bottom, then we can compare the minimum of two items in parallel and write the resultant minimum to the parent. When this process is propagated up the tree, the minimum of the group of elements is found at the root.
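A minimal Python sketch of this tree reduction (the input values are arbitrary):

# Pairwise minima are computed level by level (each level could run in parallel),
# and the overall minimum surfaces at the root after O(log n) levels.
def tree_min(values):
    level = list(values)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(min(level[i], level[i + 1]))   # each pair -> parent
        if len(level) % 2:                            # odd element carries up a level
            nxt.append(level[-1])
        level = nxt
    return level[0]

assert tree_min([7, 3, 9, 1, 4, 8]) == 1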
The following is the complexity for solving the junction tree using message passing
We rewrite the formula used earlier in the following form. This is the equation for a message to be sent from vertex v to w:
----message equation
Similarly we rewrite the equation for calculating the state of vertex v as follows
We will first analyze the single-vertex problem, assuming the target vertex is , and hence we have one edge from to .
Suppose we have an edge ; we calculate the message using the message equation. Calculating it requires
additions and
multiplications.
(We represent the as .)
But there will be many possibilities for hence
possibilities for .
Thus the entire message will need
additions and
multiplications
The total number of arithmetic operations required to send a message towards along the edges of tree will be
additions and
multiplications.
Once all the messages have been transmitted, the algorithm terminates with the computation of the state at . The state computation requires more multiplications.
Thus number of calculations required to calculate the state is given as below
additions and
multiplications
Thus the grand total of the number of calculations is
----
where is an edge and its size is defined by
The formula above gives us the upper bound.
If we define the complexity of the edge as
Therefore, can be written as
We now calculate the edge complexity for the problem defined in Figure 1 as follows
The total complexity will be , which is considerably lower than for the direct method. (By "direct method" we mean methods that do not use message passing. The time taken by the direct method is equivalent to the time needed to calculate the message at each node plus the time needed to calculate the state of each node.)
Now we consider the all-vertices problem, where messages have to be sent in both directions and states must be computed at both vertices. This would take , but by precomputing we can reduce the number of multiplications to . Here is the degree of the vertex. For example, if there is a set with numbers, it is possible to compute all the d products of all but one of the numbers with at most multiplications rather than the obvious .
We do this by precomputing the quantities
and this takes multiplications. Then if denotes the product of all except for we have and so on will need another multiplications making the total
There is not much we can do when it comes to the construction of the junction tree, except that we may have many maximal-weight spanning trees and we should choose the spanning tree with the least ; sometimes this might mean adding a local domain to lower the junction tree complexity.
It may seem that the GDL is correct only when the local domains can be expressed as a junction tree. But even in cases where there are cycles, after a number of iterations the messages will be approximately equal to the objective function. Experiments on the Gallager–Tanner–Wiberg algorithm for low-density parity-check codes were supportive of this claim.
References
Information theory
Algorithms
Graphical models
Digital signal processing | Generalized distributive law | [
"Mathematics",
"Technology",
"Engineering"
] | 3,354 | [
"Telecommunications engineering",
"Applied mathematics",
"Algorithms",
"Mathematical logic",
"Computer science",
"Information theory"
] |
35,690,430 | https://en.wikipedia.org/wiki/Fuzzy%20extractor | Fuzzy extractors are a method that allows biometric data to be used as inputs to standard cryptographic techniques, to enhance computer security. "Fuzzy", in this context, refers to the fact that the fixed values required for cryptography will be extracted from values close to but not identical to the original key, without compromising the security required. One application is to encrypt and authenticate users records, using the biometric inputs of the user as a key.
Fuzzy extractors are a biometric tool that allows for user authentication, using a biometric template constructed from the user's biometric data as the key, by extracting a uniform and random string from an input , with a tolerance for noise. If the input changes to but is still close to , the same string will be re-constructed. To achieve this, during the initial computation of the process also outputs a helper string which will be stored to recover later and can be made public without compromising the security of . The security of the process is also ensured when an adversary modifies . Once the fixed string has been calculated, it can be used, for example, for key agreement between a user and a server based only on a biometric input.
History
One precursor to fuzzy extractors was the so-called "Fuzzy Commitment", as designed by Juels and Wattenberg. Here, the cryptographic key is decommitted using biometric data.
Later, Juels and Sudan came up with Fuzzy vault schemes. These are order invariant for the fuzzy commitment scheme and use a Reed–Solomon error correction code. The code word is inserted as the coefficients of a polynomial, and this polynomial is then evaluated with respect to various properties of the biometric data.
Both Fuzzy Commitment and Fuzzy Vaults were precursors to Fuzzy Extractors.
Motivation
In order for fuzzy extractors to generate strong keys from biometric and other noisy data, cryptography paradigms will be applied to this biometric data. These paradigms:
(1) Limit the number of assumptions about the content of the biometric data (this data comes from a variety of sources; so, in order to avoid exploitation by an adversary, it's best to assume the input is unpredictable).
(2) Apply usual cryptographic techniques to the input. (Fuzzy extractors convert biometric data into secret, uniformly random, and reliably reproducible random strings.)
These techniques can also have broader applications for other types of noisy inputs, such as approximate data from human memory, images used as passwords, and keys from quantum channels. Fuzzy extractors also have applications in the proof of impossibility of strong notions of privacy with regard to statistical databases.
Basic definitions
Predictability
Predictability indicates the probability that an adversary can guess a secret key. Mathematically speaking, the predictability of a random variable is .
For example, given a pair of random variable and , if the adversary knows of , then the predictability of will be . So, an adversary can predict with . We use the average over as it is not under adversary control, but since knowing makes the prediction of adversarial, we take the worst case over .
Min-entropy
Min-entropy indicates the worst-case entropy. Mathematically speaking, it is defined as .
A random variable with a min-entropy at least of is called a -source.
Statistical distance
Statistical distance is a measure of distinguishability. Mathematically speaking, it is expressed for two probability distributions and as = . In any system, if is replaced by , it will behave as the original system with a probability at least of .
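A small numeric Python sketch of the last two definitions (the distributions used are arbitrary examples):

import math

def min_entropy(p):
    """H_inf(X) = -log2(max_x Pr[X = x])."""
    return -math.log2(max(p.values()))

def statistical_distance(p, q):
    """SD(P, Q) = (1/2) * sum_x |P(x) - Q(x)|."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

biased  = {"00": 0.5, "01": 0.2, "10": 0.2, "11": 0.1}
uniform = {s: 0.25 for s in ("00", "01", "10", "11")}
print(min_entropy(biased))               # 1.0 bit: an adversary guesses "00" with prob 1/2
print(statistical_distance(biased, uniform))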
Definition 1 (strong extractor)
Setting as a strong randomness extractor. The randomized function Ext: , with randomness of length , is a strong extractor for all -sources on where is independent of .
The output of the extractor is a key generated from with the seed . It behaves independently of other parts of the system, with the probability of . Strong extractors can extract at most bits from an arbitrary -source.
Secure sketch
Secure sketch makes it possible to reconstruct noisy input; so that, if the input is and the sketch is , given and a value close to , can be recovered. But the sketch must not reveal information about , in order to keep it secure.
If is a metric space, a secure sketch recovers the point from any point close to , without disclosing itself.
Definition 2 (secure sketch)
An secure sketch is a pair of efficient randomized procedures (SS – Sketch; Rec – Recover) such that:
(1) The sketching procedure SS takes as input and returns a string .
The recovery procedure Rec takes as input the two elements and .
(2) Correctness: If then .
(3) Security: For any -source over , the min-entropy of , given , is high:
For any , if , then .
Fuzzy extractor
Fuzzy extractors do not recover the original input but generate a string (which is close to uniform) from and allow its subsequent reproduction (using helper string ) given any close to . Strong extractors are a special case of fuzzy extractors when = 0 and .
Definition 3 (fuzzy extractor)
An fuzzy extractor is a pair of efficient randomized procedures (Gen – Generate and Rep – Reproduce) such that:
(1) Gen, given , outputs an extracted string and a helper string .
(2) Correctness: If and , then .
(3) Security: For all m-sources over , the string is nearly uniform, even given . So, when , then .
So fuzzy extractors output almost uniformly random sequences of bits, which is a prerequisite for cryptographic applications (e.g. as secret keys). Since the output bits are slightly non-uniform, there is a risk of decreased security; but the distance from a uniform distribution is no more than , and as long as this distance is sufficiently small, the security remains adequate.
Secure sketches and fuzzy extractors
Secure sketches can be used to construct fuzzy extractors: for example, applying SS to to obtain , and strong extractor Ext, with randomness , to , to get . can be stored as helper string . can be reproduced by and . can recover and can reproduce .
The following lemma formalizes this.
Lemma 1 (fuzzy extractors from sketches)
Assume (SS,Rec) is an secure sketch and let Ext be an average-case strong extractor. Then the following (Gen, Rep) is an fuzzy extractor:
(1) Gen : set and output .
(2) Rep : recover and output .
Proof:
from the definition of secure sketch (Definition 2), ;
and since Ext is an average-case -strong extractor;
Corollary 1
If (SS, Rec) is an secure sketch and Ext is an strong extractor, then the above construction (Gen, Rep) is a fuzzy extractor.
The cited paper includes many generic combinatorial bounds on secure sketches and fuzzy extractors.
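The following Python sketch illustrates the Gen/Rep composition of Lemma 1 with deliberately simple stand-ins: a code-offset secure sketch built from a 3× repetition code (correcting one flipped bit per 3-bit block), and SHA-256 with a random seed playing the role of the extractor (a heuristic, not a provable strong extractor). None of these choices are the paper's concrete constructions.

import hashlib, os, random

def ss(w):                        # sketch: offset of w from a random repetition codeword
    code = [random.randrange(2) for _ in range(len(w) // 3)]
    codeword = [b for b in code for _ in range(3)]
    return [wi ^ ci for wi, ci in zip(w, codeword)]

def rec(w_noisy, s):              # recover w from a noisy reading and the sketch s
    shifted = [wi ^ si for wi, si in zip(w_noisy, s)]
    decoded = []
    for i in range(0, len(shifted), 3):
        bit = 1 if sum(shifted[i:i + 3]) >= 2 else 0        # majority decode each block
        decoded += [bit, bit, bit]
    return [di ^ si for di, si in zip(decoded, s)]

def ext(w, seed):                 # heuristic extractor: hash of seed || w
    return hashlib.sha256(seed + bytes(w)).hexdigest()

def gen(w):                       # Gen: output key R and helper string P = (s, seed)
    s, seed = ss(w), os.urandom(16)
    return ext(w, seed), (s, seed)

def rep(w_noisy, helper):         # Rep: recover w, then re-extract the same key
    s, seed = helper
    return ext(rec(w_noisy, s), seed)

w = [random.randrange(2) for _ in range(30)]
key, helper = gen(w)
w_noisy = list(w); w_noisy[4] ^= 1               # flip one bit within a 3-bit block
assert rep(w_noisy, helper) == key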
Basic constructions
Due to their error-tolerant properties, secure sketches can be treated, analyzed, and constructed like a general error-correcting code or for linear codes, where is the length of codewords, is the length of the message to be coded, is the distance between codewords, and is the alphabet. If is the universe of possible words then it may be possible to find an error correcting code such that there exists a unique codeword for every with a Hamming distance of . The first step in constructing a secure sketch is determining the type of errors that will likely occur and then choosing a distance to measure.
Hamming distance constructions
When there is no risk of data being deleted and only of its being corrupted, then the best measurement to use for error correction is the Hamming distance. There are two common constructions for correcting Hamming errors, depending on whether the code is linear or not. Both constructions start with an error-correcting code that has a distance of where is the number of tolerated errors.
Code-offset construction
When using a general code, assign a uniformly random codeword to each , then let which is the shift needed to change into . To fix errors in , subtract from , then correct the errors in the resulting incorrect codeword to get , and finally add to to get . This means . This construction can achieve the best possible tradeoff between error tolerance and entropy loss when and a Reed–Solomon code is used, resulting in an entropy loss of . The only way to improve upon this result would be to find a code better than Reed–Solomon.
Syndrome construction
When using a linear code, let the be the syndrome of . To correct , find a vector such that ; then .
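The following Python sketch illustrates the syndrome construction using the [7,4] Hamming code purely as an example; the article does not fix a particular linear code here.

# The sketch of w is its syndrome s = Hw. To recover w from a one-bit-corrupted reading
# w', compute the syndrome of the difference (here XOR), locate the error, and flip it.
H = [  # parity-check matrix of the [7,4] Hamming code; column j is the binary form of j+1
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(v):
    return tuple(sum(hij * vj for hij, vj in zip(row, v)) % 2 for row in H)

def ss(w):                      # secure sketch: the syndrome of w
    return syndrome(w)

def rec(w_noisy, s):            # correct up to one flipped bit
    diff = tuple(a ^ b for a, b in zip(syndrome(w_noisy), s))   # syndrome of the error
    if any(diff):
        pos = diff[0] * 4 + diff[1] * 2 + diff[2] - 1           # column index of the error
        w_noisy = list(w_noisy)
        w_noisy[pos] ^= 1
    return list(w_noisy)

w = [1, 0, 1, 1, 0, 0, 1]
s = ss(w)
w_noisy = list(w); w_noisy[5] ^= 1
assert rec(w_noisy, s) == w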
Set difference constructions
When working with a very large alphabet or very long strings resulting in a very large universe , it may be more efficient to treat and as sets and look at set differences to correct errors. To work with a large set it is useful to look at its characteristic vector , which is a binary vector of length that has a value of 1 when an element and , or 0 when . The best way to decrease the size of a secure sketch when is large is to make large, since the size is determined by . A good code on which to base this construction is a BCH code, where and , so that . It is useful that BCH codes can be decoded in sub-linear time.
Pin sketch construction
Let . To correct , first find , then find a set v where , and finally compute the symmetric difference, to get . While this is not the only construction that can be used to set the difference, it is the easiest one.
Edit distance constructions
When data can be corrupted or deleted, the best measurement to use is edit distance. To make a construction based on edit distance, the easiest way is to start with a construction for set difference or Hamming distance as an intermediate correction step, and then build the edit distance construction around that.
Other distance measure constructions
There are many other types of errors and distances that can be used to model other situations. Most of these other possible constructions are built upon simpler constructions, such as edit-distance constructions.
Improving error tolerance via relaxed notions of correctness
It can be shown that the error tolerance of a secure sketch can be improved by applying a probabilistic method to error correction with a high probability of success. This allows potential code words to exceed the Plotkin bound, which has a limit of error corrections, and to approach Shannon's bound, which allows for nearly corrections. To achieve this enhanced error correction, a less restrictive error distribution model must be used.
Random errors
For this most restrictive model, use a BSC to create a with a probability at each position in that the bit received is wrong. This model can show that entropy loss is limited to , where is the binary entropy function. If the min-entropy , then errors can be tolerated, for some constant .
Input-dependent errors
For this model, errors do not have a known distribution and can be from an adversary, the only constraints being and that a corrupted word depends only on the input and not on the secure sketch. It can be shown for this error model that there will never be more than errors, since this model can account for all complex noise processes, meaning that Shannon's bound can be reached; to do this a random permutation is prepended to the secure sketch that will reduce entropy loss.
Computationally bounded errors
This model differs from the input-dependent model by having errors that depend on both the input and the secure sketch, and an adversary is limited to polynomial-time algorithms for introducing errors. Since algorithms that can run in better-than-polynomial-time are not currently feasible in the real world, then a positive result using this error model would guarantee that any errors can be fixed. This is the least restrictive model, where the only known way to approach Shannon's bound is to use list-decodable codes, although this may not always be useful in practice, since returning a list, instead of a single code word, may not always be acceptable.
Privacy guarantees
In general, a secure system attempts to leak as little information as possible to an adversary. In the case of biometrics, if information about the biometric reading is leaked, the adversary may be able to learn personal information about a user. For example, an adversary might notice a certain pattern in the helper strings that implies the ethnicity of the user. We can consider this additional information a function . If an adversary were to learn a helper string, it must be ensured that they cannot infer any data about the person from whom the biometric reading was taken.
Correlation between helper string and biometric input
Ideally the helper string would reveal no information about the biometric input . This is only possible when every subsequent biometric reading is identical to the original . In this case, there is actually no need for the helper string; so, it is easy to generate a string that is in no way correlated to .
Since it is desirable to accept biometric input similar to , the helper string must be somehow correlated. The more different and are allowed to be, the more correlation there will be between and ; the more correlated they are, the more information reveals about . We can consider this information to be a function . The best possible solution is to make sure an adversary can't learn anything useful from the helper string.
Gen(W) as a probabilistic map
A probabilistic map hides the results of functions with a small amount of leakage . The leakage is the difference in probability two adversaries have of guessing some function, when one knows the probabilistic map and one does not. Formally:
If the function is a probabilistic map, then even if an adversary knows both the helper string and the secret string , they are only negligibly more likely to figure something out about the subject than if they knew nothing. The string is supposed to be kept secret; so, even if it is leaked (which should be very unlikely), the adversary can still figure out nothing useful about the subject, as long as is small. We can consider to be any correlation between the biometric input and some physical characteristic of the person. Setting in the above equation changes it to:
This means that if one adversary has and a second adversary knows nothing, their best guesses at are only apart.
Uniform fuzzy extractors
Uniform fuzzy extractors are a special case of fuzzy extractors, where the output of is negligibly different from strings picked from the uniform distribution, i.e. .
Uniform secure sketches
Since secure sketches imply fuzzy extractors, constructing a uniform secure sketch allows for the easy construction of a uniform fuzzy extractor. In a uniform secure sketch, the sketch procedure is a randomness extractor , where is the biometric input and is the random seed. Since randomness extractors output a string that appears to be from a uniform distribution, they hide all information about their input.
Applications
Extractor sketches can be used to construct -fuzzy perfectly one-way hash functions. When used as a hash function, the input is the object you want to hash. The that outputs is the hash value. If one wanted to verify that a is within of the original , they would verify that . Such fuzzy perfectly one-way hash functions are special hash functions that accept any input with at most errors, compared to traditional hash functions which only accept an input that matches the original exactly. Traditional cryptographic hash functions attempt to guarantee that it is computationally infeasible to find two different inputs that hash to the same value. Fuzzy perfectly one-way hash functions make an analogous claim: they make it computationally infeasible to find two inputs that are more than Hamming distance apart and hash to the same value.
Protection against active attacks
An active attack could be one where an adversary can modify the helper string . If an adversary is able to change to another string that is also acceptable to the reproduce function , it causes to output an incorrect secret string . Robust fuzzy extractors solve this problem by allowing the reproduce function to fail, if a modified helper string is provided as input.
Robust fuzzy extractors
One method of constructing robust fuzzy extractors is to use hash functions. This construction requires two hash functions and . The function produces the helper string by appending the output of a secure sketch to the hash of both the reading and secure sketch . It generates the secret string by applying the second hash function to and . Formally:
The reproduce function also makes use of the hash functions and . In addition to verifying that the biometric input is similar enough to the one recovered using the function, it also verifies that the hash in the second part of was actually derived from and . If both of those conditions are met, it returns , which is itself the second hash function applied to and . Formally:
Get and from
If and then else
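The following Python sketch illustrates the structure of this robust construction; SHA-256 with two domain-separation tags stands in for the hash functions, and a 3× repetition code again plays the secure sketch. Both choices are illustrative assumptions, not the article's concrete primitives.

import hashlib, random

def h(tag, *parts):                              # stand-in for the two hash functions
    m = hashlib.sha256(tag)
    for p in parts:
        m.update(bytes(p) if isinstance(p, list) else p)
    return m.digest()

def ss(w):                                       # toy code-offset sketch (3x repetition)
    code = [random.randrange(2) for _ in range(len(w) // 3)]
    return [wi ^ c for wi, c in zip(w, [b for b in code for _ in range(3)])]

def rec(w_noisy, s):
    shifted = [a ^ b for a, b in zip(w_noisy, s)]
    bits = [1 if sum(shifted[i:i+3]) >= 2 else 0 for i in range(0, len(shifted), 3)]
    return [b ^ si for b, si in zip([x for x in bits for _ in range(3)], s)]

def gen(w):
    s = ss(w)
    p = (s, h(b"H1", w, s))                      # helper string P = (s, H1(w, s))
    r = h(b"H2", w, s)                           # secret string R = H2(w, s)
    return r, p

def rep(w_noisy, p):
    s, tag = p
    w = rec(w_noisy, s)
    if h(b"H1", w, s) != tag:                    # detect a tampered helper string
        return None                              # "fail" symbol
    return h(b"H2", w, s)

w = [random.randrange(2) for _ in range(30)]
r, p = gen(w)
w_noisy = list(w); w_noisy[7] ^= 1
assert rep(w_noisy, p) == r
assert rep(w_noisy, (p[0], b"tampered tag")) is None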
If has been tampered with, it will be obvious, because will fail on output with very high probability. To cause the algorithm to accept a different , an adversary would have to find a such that . Since hash functions are believed to be one-way functions, it is computationally infeasible to find such a . Seeing would provide an adversary with no useful information: since, again, hash functions are one-way functions, it is computationally infeasible for an adversary to reverse the hash function and figure out . Part of is the secure sketch, but by definition the sketch reveals negligible information about its input. Similarly, seeing (even though an adversary should never see it) would provide no useful information, as the adversary would not be able to reverse the hash function and recover the biometric input.
References
Further reading
External links
Biometrics
Coding theory
Cryptographic algorithms | Fuzzy extractor | [
"Mathematics"
] | 3,645 | [
"Discrete mathematics",
"Coding theory"
] |
2,046,416 | https://en.wikipedia.org/wiki/Nuclear%20fuel | Nuclear fuel refers to any substance, typically fissile material, which is used by nuclear power stations or other nuclear devices to generate energy.
Oxide fuel
For fission reactors, the fuel (typically based on uranium) is usually based on the metal oxide; the oxides are used rather than the metals themselves because the oxide melting point is much higher than that of the metal and because it cannot burn, being already in the oxidized state.
Uranium dioxide
Uranium dioxide is a black semiconducting solid. It can be made by heating uranyl nitrate to form UO3.
This is then converted by heating with hydrogen to form UO2. It can be made from enriched uranium hexafluoride by reacting with ammonia to form a solid called ammonium diuranate, . This is then heated (calcined) to form UO3 and U3O8, which is then converted by heating with hydrogen or ammonia to form UO2. The UO2 is mixed with an organic binder and pressed into pellets. The pellets are then fired at a much higher temperature (in hydrogen or argon) to sinter the solid. The aim is to form a dense solid which has few pores.
The thermal conductivity of uranium dioxide is very low compared with that of zirconium metal, and it goes down as the temperature goes up. Corrosion of uranium dioxide in water is controlled by similar electrochemical processes to the galvanic corrosion of a metal surface.
While exposed to the neutron flux during normal operation in the core environment, a small percentage of the 238U in the fuel absorbs excess neutrons and is transmuted into 239U. 239U rapidly decays into 239Np, which in turn rapidly decays into 239Pu. The small percentage of 239Pu has a higher neutron cross section than 235U. As the 239Pu accumulates, the chain reaction shifts from pure 235U at initiation of the fuel use to a ratio of about 70% 235U and 30% 239Pu at the end of the 18 to 24 month fuel exposure period.
MOX
Mixed oxide, or MOX fuel, is a blend of plutonium and natural or depleted uranium which behaves similarly (though not identically) to the enriched uranium feed for which most nuclear reactors were designed. MOX fuel is an alternative to low enriched uranium (LEU) fuel used in the light water reactors which predominate nuclear power generation.
Some concern has been expressed that used MOX cores will introduce new disposal challenges, though MOX is a means to dispose of surplus plutonium by transmutation. Reprocessing of commercial nuclear fuel to make MOX was done in the Sellafield MOX Plant (England). As of 2015, MOX fuel is made in France at the Marcoule Nuclear Site, and to a lesser extent in Russia at the Mining and Chemical Combine, India and Japan. China plans to develop fast breeder reactors and reprocessing.
The Global Nuclear Energy Partnership was a U.S. proposal in the George W. Bush administration to form an international partnership to see spent nuclear fuel reprocessed in a way that renders the plutonium in it usable for nuclear fuel but not for nuclear weapons. Reprocessing of spent commercial-reactor nuclear fuel has not been permitted in the United States due to nonproliferation considerations. All other reprocessing nations have long had nuclear weapons from military-focused research reactor fuels, except for Japan. Normally, with the fuel being changed every three years or so, about half of the 239Pu is 'burned' in the reactor, providing about one third of the total energy. It behaves like 235U and its fission releases a similar amount of energy. The higher the burnup, the more plutonium is present in the spent fuel, but the lower the fraction of fissile plutonium. Typically about one percent of the used fuel discharged from a reactor is plutonium, and some two thirds of this is fissile (c. 50% 239Pu, 15% 241Pu).
Metal fuel
Metal fuels have the advantage of a much higher heat conductivity than oxide fuels but cannot survive equally high temperatures. Metal fuels have a long history of use, stretching from the Clementine reactor in 1946 to many test and research reactors. Metal fuels have the potential for the highest fissile atom density. Metal fuels are normally alloyed, but some metal fuels have been made with pure uranium metal. Uranium alloys that have been used include uranium aluminum, uranium zirconium, uranium silicon, uranium molybdenum, uranium zirconium hydride (UZrH), and uranium zirconium carbonitride. Any of the aforementioned fuels can be made with plutonium and other actinides as part of a closed nuclear fuel cycle. Metal fuels have been used in light-water reactors and liquid metal fast breeder reactors, such as Experimental Breeder Reactor II.
TRIGA fuel
TRIGA fuel is used in TRIGA (Training, Research, Isotopes, General Atomics) reactors. The TRIGA reactor uses UZrH fuel, which has a prompt negative fuel temperature coefficient of reactivity, meaning that as the temperature of the core increases, the reactivity decreases—so it is highly unlikely for a meltdown to occur. Most cores that use this fuel are "high leakage" cores where the excess leaked neutrons can be utilized for research. That is, they can be used as a neutron source. TRIGA fuel was originally designed to use highly enriched uranium, however in 1978 the U.S. Department of Energy launched its Reduced Enrichment for Research Test Reactors program, which promoted reactor conversion to low-enriched uranium fuel. There are 35 TRIGA reactors in the US and an additional 35 in other countries.
Actinide fuel
In a fast-neutron reactor, the minor actinides produced by neutron capture of uranium and plutonium can be used as fuel. Metal actinide fuel is typically an alloy of zirconium, uranium, plutonium, and minor actinides. It can be made inherently safe as thermal expansion of the metal alloy will increase neutron leakage.
Molten plutonium
Molten plutonium, alloyed with other metals to lower its melting point and encapsulated in tantalum, was tested in two experimental reactors, LAMPRE I and LAMPRE II, at Los Alamos National Laboratory in the 1960s. LAMPRE experienced three separate fuel failures during operation.
Non-oxide ceramic fuels
Ceramic fuels other than oxides have the advantage of high heat conductivities and melting points, but they are more prone to swelling than oxide fuels and are not understood as well.
Uranium nitride
Uranium nitride is often the fuel of choice for reactor designs that NASA produces. One advantage is that uranium nitride has a better thermal conductivity than UO2. Uranium nitride has a very high melting point. This fuel has the disadvantage that unless 15N was used (in place of the more common 14N), a large amount of 14C would be generated from the nitrogen by the (n,p) reaction.
As the nitrogen needed for such a fuel would be so expensive it is likely that the fuel would require pyroprocessing to enable recovery of the 15N. It is likely that if the fuel was processed and dissolved in nitric acid that the nitrogen enriched with 15N would be diluted with the common 14N. Fluoride volatility is a method of reprocessing that does not rely on nitric acid, but it has only been demonstrated in relatively small scale installations whereas the established PUREX process is used commercially for about a third of all spent nuclear fuel (the rest being largely subject to a "once through fuel cycle").
All nitrogen–fluoride compounds are volatile or gaseous at room temperature and could be fractionally distilled from the other gaseous products (including recovered uranium hexafluoride) to recover the initially used nitrogen. If the fuel could be processed in such a way as to ensure low contamination with non-radioactive carbon (not a common fission product and absent in nuclear reactors that don't use it as a moderator), then fluoride volatility could be used to separate the 14C produced, by producing carbon tetrafluoride. 14C is proposed for use in particularly long-lived low-power nuclear batteries called diamond batteries.
Uranium carbide
Much of what is known about uranium carbide is in the form of pin-type fuel elements for liquid metal fast reactors during their intense study in the 1960s and 1970s. Recently there has been a revived interest in uranium carbide in the form of plate fuel and most notably, micro fuel particles (such as tristructural-isotropic particles).
The high thermal conductivity and high melting point make uranium carbide an attractive fuel. In addition, because of the absence of oxygen in this fuel (during the course of irradiation, excess gas pressure can build from the formation of O2 or other gases), as well as the ability to complement a ceramic coating (a ceramic–ceramic interface has structural and chemical advantages), uranium carbide could be the ideal fuel candidate for certain Generation IV reactors such as the gas-cooled fast reactor. While the neutron cross section of carbon is low, over years of burnup the predominant 12C will undergo neutron capture to produce stable 13C as well as radioactive 14C. Unlike the 14C produced by using uranium nitride, the 14C will make up only a small isotopic impurity in the overall carbon content, and thus make the entirety of the carbon content unsuitable for non-nuclear uses, but the concentration will be too low for use in nuclear batteries without enrichment. Nuclear graphite discharged from reactors where it was used as a moderator presents the same issue.
Liquid fuels
Liquid fuels contain dissolved nuclear fuel and have been shown to offer numerous operational advantages compared to traditional solid fuel approaches. Liquid-fuel reactors offer significant safety advantages due to their inherently stable "self-adjusting" reactor dynamics. This provides two major benefits: virtually eliminating the possibility of a runaway reactor meltdown, and providing an automatic load-following capability which is well suited to electricity generation and high-temperature industrial heat applications.
In some liquid core designs, the fuel can be drained rapidly into a passively safe dump-tank. This advantage was conclusively demonstrated repeatedly as part of a weekly shutdown procedure during the highly successful Molten-Salt Reactor Experiment from 1965 to 1969.
A liquid core is able to release xenon gas, which normally acts as a neutron absorber (135Xe is the strongest known neutron poison and is produced both directly and as a decay product of 135I as a fission product) and causes structural occlusions in solid fuel elements (leading to the early replacement of solid fuel rods with over 98% of the nuclear fuel unburned, including many long-lived actinides). In contrast, molten-salt reactors are capable of retaining the fuel mixture for significantly extended periods, which increases fuel efficiency dramatically and incinerates the vast majority of their own waste as part of normal operation. A downside of letting the 135Xe escape, instead of allowing it to capture neutrons and convert to the essentially stable and chemically inert 136Xe, is that it will quickly decay to the highly chemically reactive, long-lived radioactive 135Cs, which behaves similarly to other alkali metals and can be taken up by organisms in their metabolism.
Molten salts
Molten salt fuels are mixtures of actinide salts (e.g. thorium/uranium fluoride/chloride) with other salts, used in liquid form above their typical melting points of several hundred degrees C. In some molten salt-fueled reactor designs, such as the liquid fluoride thorium reactor (LFTR), this fuel salt is also the coolant; in other designs, such as the stable salt reactor, the fuel salt is contained in fuel pins and the coolant is a separate, non-radioactive salt. There is a further category of molten salt-cooled reactors in which the fuel is not in molten salt form, but a molten salt is used for cooling.
Molten salt fuels were used in the LFTR known as the Molten Salt Reactor Experiment, as well as other liquid core reactor experiments. The liquid fuel for the molten salt reactor was a mixture of lithium, beryllium, thorium and uranium fluorides: LiF-BeF2-ThF4-UF4 (72-16-12-0.4 mol%). It had a peak operating temperature of 705 °C in the experiment, but could have operated at much higher temperatures since the boiling point of the molten salt was in excess of 1400 °C.
Aqueous solutions of uranyl salts
The aqueous homogeneous reactors (AHRs) use a solution of uranyl sulfate or other uranium salt in water. Historically, AHRs have all been small research reactors, not large power reactors.
Liquid metals or alloys
The dual fluid reactor (DFR) has a variant DFR/m which works with eutectic liquid metal alloys, e.g. U-Cr or U-Fe.
Common physical forms
Uranium dioxide (UO2) powder is compacted to cylindrical pellets and sintered at high temperatures to produce ceramic nuclear fuel pellets with a high density and well defined physical properties and chemical composition. A grinding process is used to achieve a uniform cylindrical geometry with narrow tolerances. Such fuel pellets are then stacked and filled into the metallic tubes. The metal used for the tubes depends on the design of the reactor. Stainless steel was used in the past, but most reactors now use a zirconium alloy which, in addition to being highly corrosion-resistant, has low neutron absorption. The tubes containing the fuel pellets are sealed: these tubes are called fuel rods. The finished fuel rods are grouped into fuel assemblies that are used to build up the core of a power reactor.
Cladding is the outer layer of the fuel rods, standing between the coolant and the nuclear fuel. It is made of a corrosion-resistant material with a low absorption cross section for thermal neutrons, usually Zircaloy or steel in modern constructions, or magnesium with a small amount of aluminium and other metals for the now-obsolete Magnox reactors. Cladding prevents radioactive fission fragments from escaping the fuel into the coolant and contaminating it. Besides preventing radioactive leaks, this also serves to keep the coolant as non-corrosive as feasible and to prevent reactions between chemically aggressive fission products and the coolant. An example is the highly reactive alkali metal caesium, which reacts strongly with water, producing hydrogen, and which is among the more common fission products.
Pressurized water reactor fuel
Pressurized water reactor (PWR) fuel consists of cylindrical rods put into bundles. A uranium oxide ceramic is formed into pellets and inserted into Zircaloy tubes that are bundled together. The Zircaloy tubes are about in diameter, and the fuel cladding gap is filled with helium gas to improve heat conduction from the fuel to the cladding. There are about 179–264 fuel rods per fuel bundle and about 121 to 193 fuel bundles are loaded into a reactor core. Generally, the fuel bundles consist of fuel rods bundled 14×14 to 17×17. PWR fuel bundles are about long. In PWR fuel bundles, control rods are inserted through the top directly into the fuel bundle. The fuel bundles usually are enriched several percent in 235U. The uranium oxide is dried before inserting into the tubes to try to eliminate moisture in the ceramic fuel that can lead to corrosion and hydrogen embrittlement. The Zircaloy tubes are pressurized with helium to try to minimize pellet-cladding interaction which can lead to fuel rod failure over long periods.
Boiling water reactor fuel
In boiling water reactors (BWR), the fuel is similar to PWR fuel except that the bundles are "canned". That is, there is a thin tube surrounding each bundle. This is primarily done to prevent local density variations from affecting neutronics and thermal hydraulics of the reactor core. In modern BWR fuel bundles, there are either 91, 92, or 96 fuel rods per assembly depending on the manufacturer. A range between 368 assemblies for the smallest and 800 assemblies for the largest BWR in the U.S. form the reactor core. Each BWR fuel rod is backfilled with helium to a pressure of about .
Canada deuterium uranium fuel
Canada deuterium uranium fuel (CANDU) fuel bundles are about long and in diameter. They consist of sintered (UO2) pellets in zirconium alloy tubes, welded to zirconium alloy end plates. Each bundle weighs roughly , and a typical core loading is on the order of 4500–6500 bundles, depending on the design. Modern types typically have 37 identical fuel pins radially arranged about the long axis of the bundle, but in the past several different configurations and numbers of pins have been used. The CANFLEX bundle has 43 fuel elements, with two element sizes. It is also about 10 cm (4 inches) in diameter, 0.5 m (20 in) long and weighs about 20 kg (44 lb), and replaces the 37-pin standard bundle. It has been designed specifically to increase fuel performance by utilizing two different pin diameters. Current CANDU designs do not need enriched uranium to achieve criticality (due to the lower neutron absorption in their heavy water moderator compared to light water); however, some newer concepts call for low enrichment to help reduce the size of the reactors. The Atucha nuclear power plant in Argentina, a similar design to the CANDU but built by the German company KWU, was originally designed for non-enriched fuel but has since switched to slightly enriched fuel with a 235U content about 0.1 percentage points higher than in natural uranium.
Less-common fuel forms
Various other nuclear fuel forms find use in specific applications, but lack the widespread use of those found in BWRs, PWRs, and CANDU power plants. Many of these fuel forms are only found in research reactors, or have military applications.
Magnox fuel
Magnox (magnesium non-oxidising) reactors are pressurised, carbon dioxide–cooled, graphite-moderated reactors using natural uranium (i.e. unenriched) as fuel and Magnox alloy as fuel cladding. Working pressure varies from for the steel pressure vessels, and the two reinforced concrete designs operated at . Magnox alloy consists mainly of magnesium with small amounts of aluminium and other metals—used in cladding unenriched uranium metal fuel with a non-oxidising covering to contain fission products. This material has the advantage of a low neutron capture cross-section, but has two major disadvantages:
It limits the maximum temperature, and hence the thermal efficiency, of the plant.
It reacts with water, preventing long-term storage of spent fuel under water - such as in a spent fuel pool.
Magnox fuel incorporated cooling fins to provide maximum heat transfer despite low operating temperatures, making it expensive to produce. While the use of uranium metal rather than oxide made nuclear reprocessing more straightforward and therefore cheaper, the need to reprocess fuel a short time after removal from the reactor meant that the fission product hazard was severe. Expensive remote handling facilities were required to address this issue.
Tristructural-isotropic fuel
Tristructural-isotropic (TRISO) fuel is a type of micro-particle fuel. A particle consists of a kernel of UOX fuel (sometimes UC or UCO), which has been coated with four layers of three isotropic materials deposited through fluidized chemical vapor deposition (FCVD). The four layers are a porous buffer layer made of carbon that absorbs fission product recoils, followed by a dense inner layer of protective pyrolytic carbon (PyC), followed by a ceramic layer of SiC to retain fission products at elevated temperatures and to give the TRISO particle more structural integrity, followed by a dense outer layer of PyC. TRISO particles are then encapsulated into cylindrical or spherical graphite pellets. TRISO fuel particles are designed not to crack due to the stresses from processes (such as differential thermal expansion or fission gas pressure) at temperatures up to 1600 °C, and therefore can contain the fuel in the worst of accident scenarios in a properly designed reactor. Two such reactor designs are the prismatic-block gas-cooled reactor (such as the GT-MHR) and the pebble-bed reactor (PBR). Both of these reactor designs are high temperature gas reactors (HTGRs). These are also the basic reactor designs of very-high-temperature reactors (VHTRs), one of the six classes of reactor designs in the Generation IV initiative that is attempting to reach even higher HTGR outlet temperatures.
TRISO fuel particles were originally developed in the United Kingdom as part of the Dragon reactor project. The inclusion of the SiC as diffusion barrier was first suggested by D. T. Livey. The first nuclear reactor to use TRISO fuels was the Dragon reactor and the first powerplant was the THTR-300. Currently, TRISO fuel compacts are being used in some experimental reactors, such as the HTR-10 in China and the high-temperature engineering test reactor in Japan. In the United States, spherical fuel elements utilizing a TRISO particle with a UO2 and UC solid solution kernel are being used in the Xe-100, and Kairos Power is developing a 140 MWE nuclear reactor that uses TRISO.
QUADRISO fuel
In QUADRISO particles a burnable neutron poison (europium oxide or erbium oxide or carbide) layer surrounds the fuel kernel of ordinary TRISO particles to better manage the excess of reactivity. If the core is equipped both with TRISO and QUADRISO fuels, at beginning of life neutrons do not reach the fuel of the QUADRISO particles because they are stopped by the burnable poison. During reactor operation, neutron irradiation of the poison causes it to "burn up" or progressively transmute to non-poison isotopes, depleting this poison effect and leaving progressively more neutrons available for sustaining the chain-reaction. This mechanism compensates for the accumulation of undesirable neutron poisons which are an unavoidable part of the fission products, as well as normal fissile fuel "burn up" or depletion. In the generalized QUADRISO fuel concept the poison can eventually be mixed with the fuel kernel or the outer pyrocarbon. The QUADRISO concept was conceived at Argonne National Laboratory.
RBMK fuel
RBMK reactor fuel was used in Soviet-designed and built RBMK-type reactors. This is a low-enriched uranium oxide fuel. The fuel elements in an RBMK are 3 m long each, and two of these sit back-to-back on each fuel channel, pressure tube. Reprocessed uranium from Russian VVER reactor spent fuel is used to fabricate RBMK fuel. Following the Chernobyl accident, the enrichment of fuel was changed from 2.0% to 2.4%, to compensate for control rod modifications and the introduction of additional absorbers.
CerMet fuel
CerMet fuel consists of ceramic fuel particles (usually uranium oxide) embedded in a metal matrix. It is hypothesized that this type of fuel is what is used in United States Navy reactors. This fuel has high heat transport characteristics and can withstand a large amount of expansion.
Plate-type fuel
Plate-type fuel has fallen out of favor over the years. Plate-type fuel is commonly composed of enriched uranium sandwiched between metal cladding. Plate-type fuel is used in several research reactors where a high neutron flux is desired, for uses such as material irradiation studies or isotope production, without the high temperatures seen in ceramic, cylindrical fuel. It is currently used in the Advanced Test Reactor (ATR) at Idaho National Laboratory, and the nuclear research reactor at the University of Massachusetts Lowell Radiation Laboratory.
Sodium-bonded fuel
Sodium-bonded fuel consists of fuel that has liquid sodium in the gap between the fuel slug (or pellet) and the cladding. This fuel type is often used for sodium-cooled liquid metal fast reactors. It has been used in EBR-I, EBR-II, and the FFTF. The fuel slug may be metallic or ceramic. The sodium bonding is used to reduce the temperature of the fuel.
Accident tolerant fuels
Accident tolerant fuels (ATF) are a series of new nuclear fuel concepts, researched in order to improve fuel performance under accident conditions, such as loss-of-coolant accident (LOCA) or reaction-initiated accidents (RIA). These concerns became more prominent after the Fukushima Daiichi nuclear disaster in Japan, in particular regarding light-water reactor (LWR) fuels performance under accident conditions.
Neutronics analyses were performed for the application of the new fuel-cladding material systems for various types of ATF materials.
The aim of the research is to develop nuclear fuels that can tolerate loss of active cooling for a considerably longer period than the existing fuel designs and prevent or delay the release of radionuclides during an accident. This research is focused on reconsidering the design of fuel pellets and cladding, as well as the interactions between the two.
Spent nuclear fuel
Used nuclear fuel is a complex mixture of the fission products, uranium, plutonium, and the transplutonium metals. In fuel which has been used at high temperature in power reactors it is common for the fuel to be heterogeneous; often the fuel will contain nanoparticles of platinum group metals such as palladium. Also the fuel may well have cracked, swollen, and been heated close to its melting point. Despite the fact that the used fuel can be cracked, it is very insoluble in water, and is able to retain the vast majority of the actinides and fission products within the uranium dioxide crystal lattice. The radiation hazard from spent nuclear fuel declines as its radioactive components decay, but remains high for many years. For example 10 years after removal from a reactor, the surface dose rate for a typical spent fuel assembly still exceeds 10,000 rem/hour, resulting in a fatal dose in just minutes.
Oxide fuel under accident conditions
Two main modes of release exist, the fission products can be vaporised or small particles of the fuel can be dispersed.
Fuel behavior and post-irradiation examination
Post-Irradiation Examination (PIE) is the study of used nuclear materials such as nuclear fuel. It has several purposes. It is known that by examination of used fuel that the failure modes which occur during normal use (and the manner in which the fuel will behave during an accident) can be studied. In addition information is gained which enables the users of fuel to assure themselves of its quality and it also assists in the development of new fuels. After major accidents the core (or what is left of it) is normally subject to PIE to find out what happened. One site where PIE is done is the ITU which is the EU centre for the study of highly radioactive materials.
Materials in a high-radiation environment (such as a reactor) can undergo unique behaviors such as swelling and non-thermal creep. If there are nuclear reactions within the material (such as what happens in the fuel), the stoichiometry will also change slowly over time. These behaviors can lead to new material properties, cracking, and fission gas release.
The thermal conductivity of uranium dioxide is low; it is affected by porosity and burn-up. The burn-up results in fission products being dissolved in the lattice (such as lanthanides), the precipitation of fission products such as palladium, the formation of fission gas bubbles due to fission products such as xenon and krypton and radiation damage of the lattice. The low thermal conductivity can lead to overheating of the center part of the pellets during use. The porosity results in a decrease in both the thermal conductivity of the fuel and the swelling which occurs during use.
According to the International Nuclear Safety Center the thermal conductivity of uranium dioxide can be predicted under different conditions by a series of equations.
The bulk density of the fuel can be related to its thermal conductivity through the porosity, p = 1 − ρ/ρtd, where ρ is the bulk density of the fuel and ρtd is the theoretical density of the uranium dioxide.
Then the thermal conductivity of the porous phase (Kf) is related to the conductivity of the perfect phase (Ko, no porosity) by the following equation. Note that s is a term for the shape factor of the holes.
Kf = Ko(1 − p)/(1 + (s − 1)p)
Rather than measuring the thermal conductivity using traditional methods such as Lees' disk, Forbes' method, or Searle's bar, it is common to use laser flash analysis, in which a small disc of fuel is placed in a furnace. After being heated to the required temperature, one side of the disc is illuminated with a laser pulse; the time required for the heat wave to flow through the disc, the density of the disc, and the thickness of the disc can then be used to calculate the thermal conductivity (a worked numerical sketch follows the relations below).
λ = ρCpα
λ thermal conductivity
ρ density
Cp heat capacity
α thermal diffusivity
If t1/2 is defined as the time required for the non-illuminated surface to experience half its final temperature rise, then:
α = 0.1388 L2/t1/2
L is the thickness of the disc
For details see K. Shinzato and T. Baba (2001).
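The relations above can be combined into a single short calculation. The following sketch is purely illustrative: every numerical input (sample density, heat capacity, disc thickness, half-rise time and the pore shape factor s) is an assumed placeholder value, not measured data.

```python
# Illustrative only: thermal conductivity of a porous UO2 disc from laser flash
# analysis, then corrected back to the fully dense (zero-porosity) value.
rho_td = 10.96e3   # theoretical density of UO2, kg/m^3
rho    = 10.40e3   # bulk density of the sample, kg/m^3 (assumed value)
cp     = 300.0     # specific heat capacity, J/(kg K) (assumed value)
L      = 1.0e-3    # disc thickness, m (assumed value)
t_half = 0.05      # time for the rear face to reach half its final rise, s (assumed)
s      = 1.5       # pore shape factor (assumed value)

alpha    = 0.1388 * L**2 / t_half   # thermal diffusivity, alpha = 0.1388 L^2 / t_1/2
k_porous = rho * cp * alpha         # lambda = rho * Cp * alpha
p        = 1.0 - rho / rho_td       # porosity from bulk and theoretical density
k_dense  = k_porous * (1.0 + (s - 1.0) * p) / (1.0 - p)  # invert Kf = Ko(1-p)/(1+(s-1)p)

print(f"p = {p:.3f}, alpha = {alpha:.3e} m^2/s")
print(f"conductivity: porous {k_porous:.2f} W/(m K), dense {k_dense:.2f} W/(m K)")
```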
Radioisotope decay fuels
Radioisotope battery
An atomic battery (also called a nuclear battery or radioisotope battery) is a device which uses energy from radioactive decay to generate electricity. These systems use radioisotopes that produce low energy beta particles or sometimes alpha particles of varying energies. Low energy beta particles are needed to prevent the production of high energy penetrating bremsstrahlung radiation that would require heavy shielding. Radioisotopes such as plutonium-238, curium-242, curium-244 and strontium-90 have been used. Tritium, nickel-63, promethium-147, and technetium-99 have been tested.
There are two main categories of atomic batteries: thermal and non-thermal. The non-thermal atomic batteries, which have many different designs, exploit charged alpha and beta particles. These designs include the direct charging generators, betavoltaics, the optoelectric nuclear battery, and the radioisotope piezoelectric generator. The thermal atomic batteries on the other hand, convert the heat from the radioactive decay to electricity. These designs include thermionic converter, thermophotovoltaic cells, alkali-metal thermal to electric converter, and the most common design, the radioisotope thermoelectric generator.
Radioisotope thermoelectric generator
A radioisotope thermoelectric generator (RTG) is a simple electrical generator which converts heat into electricity from a radioisotope using an array of thermocouples.
Plutonium-238 has become the most widely used fuel for RTGs, in the form of plutonium dioxide. It has a half-life of 87.7 years, reasonable energy density, and exceptionally low gamma and neutron radiation levels. Some Russian terrestrial RTGs have used strontium-90; this isotope has a shorter half-life and a much lower energy density, but is cheaper. Early RTGs, first built in 1958 by the U.S. Atomic Energy Commission, used polonium-210. This fuel provides a phenomenally high energy density (a single gram of polonium-210 generates 140 watts thermal) but has limited use because of its very short half-life and gamma production, and has been phased out of use for this application.
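The role of the half-life in these design trade-offs can be illustrated with a short, hedged sketch; the initial power below is an arbitrary example figure, not the specification of any actual generator.

```python
def thermal_power(p0_watts, half_life_years, t_years):
    """Thermal power remaining after t_years of simple exponential decay."""
    return p0_watts * 2.0 ** (-t_years / half_life_years)

p0 = 100.0            # assumed initial thermal output, W
t_half_pu238 = 87.7   # half-life of plutonium-238 in years, as quoted above

for t in (0, 10, 25, 50, 87.7):
    print(f"after {t:5.1f} years: {thermal_power(p0, t_half_pu238, t):6.1f} W(th)")
```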
Radioisotope heater unit (RHU)
A radioisotope heater unit (RHU) typically provides about 1 watt of heat each, derived from the decay of a few grams of plutonium-238. This heat is given off continuously for several decades.
Their function is to provide highly localised heating of sensitive equipment (such as electronics in outer space). The Cassini–Huygens orbiter to Saturn contains 82 of these units (in addition to its 3 main RTGs for power generation). The Huygens probe to Titan contains 35 devices.
Fusion fuels
Fusion fuels are fuels for use in hypothetical fusion power reactors. They include deuterium (2H) and tritium (3H) as well as helium-3 (3He). Many other elements can be fused together, but the larger electrical charge of their nuclei means that much higher temperatures are required. Only the fusion of the lightest elements is seriously considered as a future energy source. Fusion of the lightest atom, 1H hydrogen, as occurs in the Sun and other stars, has also not been considered practical on Earth. Although the energy density of fusion fuel is even higher than that of fission fuel, and fusion reactions sustained for a few minutes have been achieved, utilizing fusion fuel as a net energy source remains only a theoretical possibility.
First-generation fusion fuel
Deuterium and tritium are both considered first-generation fusion fuels; they are the easiest to fuse, because the electrical charge on their nuclei is the lowest of all elements. The three most commonly cited nuclear reactions that could be used to generate energy are:
2H + 3H → n (14.07 MeV) + 4He (3.52 MeV)
2H + 2H → n (2.45 MeV) + 3He (0.82 MeV)
2H + 2H → p (3.02 MeV) + 3H (1.01 MeV)
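Taking the D–T reaction above, the total energy released per fusion event is 14.07 MeV + 3.52 MeV = 17.59 MeV. The following rough sketch (which ignores burn-up fraction, neutron losses and practical conversion efficiency) turns this into an energy per kilogram of an equimolar deuterium–tritium mixture.

```python
# Rough energy density of D-T fuel from the 17.59 MeV released per reaction.
AVOGADRO = 6.022e23          # atoms per mole
MEV_TO_J = 1.602e-13         # joules per MeV

q_dt_mev      = 14.07 + 3.52      # energy per D + T -> n + 4He reaction, MeV
molar_mass_dt = 2.014 + 3.016     # grams of fuel consumed per mole of reactions

reactions_per_kg = AVOGADRO * 1000.0 / molar_mass_dt
energy_per_kg_j  = reactions_per_kg * q_dt_mev * MEV_TO_J

print(f"{energy_per_kg_j:.2e} J/kg (about {energy_per_kg_j / 3.6e12:.0f} GWh per kg)")
```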
Second-generation fusion fuel
Second-generation fuels require either higher confinement temperatures or longer confinement time than those required of first-generation fusion fuels, but generate fewer neutrons. Neutrons are an unwanted byproduct of fusion reactions in an energy generation context, because they are absorbed by the walls of a fusion chamber, making them radioactive. They cannot be confined by magnetic fields, because they are not electrically charged. This group consists of deuterium and helium-3. The products are all charged particles, but there may be significant side reactions leading to the production of neutrons.
2H + 3He → p (14.68 MeV) + 4He (3.67 MeV)
Third-generation fusion fuel
Third-generation fusion fuels produce only charged particles in the primary reactions, and side reactions are relatively unimportant. Since very few neutrons are produced, there would be little induced radioactivity in the walls of the fusion chamber. This is often seen as the end goal of fusion research. 3He has the highest Maxwellian reactivity of any third-generation fusion fuel. However, there are no significant natural sources of this substance on Earth.
3He + 3He → 2 p + 4He (12.86 MeV)
Another potential aneutronic fusion reaction is the proton-boron reaction:
p + 11B → 3 4He (8.7 MeV)
Under reasonable assumptions, side reactions will result in about 0.1% of the fusion power being carried by neutrons. At 123 keV, the optimum temperature for this reaction is nearly ten times higher than that for the pure hydrogen reactions, the energy confinement must be 500 times better than that required for the D-T reaction, and the power density will be 2500 times lower than for D-T.
See also
Fissile material
Global Nuclear Energy Partnership
Integrated Nuclear Fuel Cycle Information System
Lists of nuclear disasters and radioactive incidents
Nuclear fuel bank
Nuclear fuel cycle
Reprocessed uranium
Uranium market
Notes
References
External links
PWR fuel
Picture showing handling of a PWR bundle
BWR fuel
Physical description of LWR fuel
Links to BWR photos from the nuclear tourist webpage
CANDU fuel
CANDU Fuel pictures and FAQ
Basics on CANDU design
The Evolution of CANDU Fuel Cycles and their Potential Contribution to World Peace
CANDU Fuel and Reactor Specifics (Nuclear Tourist)
Candu Fuel Rods and Bundles
TRISO fuel
TRISO fuel description
Non-Destructive Examination of SiC Nuclear Fuel Shell using X-Ray Fluorescence Microtomography Technique
GT-MHR fuel compact process
Description of TRISO fuel for "pebbles"
LANL webpage showing various stages of TRISO fuel production
Method to calculate the temperature profile in TRISO fuel
QUADRISO fuel
Conceptual Design of QUADRISO Fuel
CERMET fuel
Thoria-based Cermet Nuclear Fuel: Sintered Microsphere Fabrication by Spray Drying
Plate type fuel
https://pubs.aip.org/aip/adv/article/9/7/075112/22584/Reactor-Monte-Carlo-RMC-model-validation-and
List of reactors at INL and picture of ATR core
ATR plate fuel
TRIGA fuel
Fusion fuel
Advanced fusion fuels presentation
Nuclear reprocessing
Nuclear technology
Nuclear chemistry
Actinides | Nuclear fuel | [
"Physics",
"Chemistry"
] | 7,622 | [
"Nuclear chemistry",
"Nuclear technology",
"nan",
"Nuclear physics"
] |
2,047,150 | https://en.wikipedia.org/wiki/Coincidence%20point | In mathematics, a coincidence point (or simply coincidence) of two functions is a point in their common domain having the same image.
Formally, given two functions f, g : X → Y, we say that a point x in X is a coincidence point of f and g if f(x) = g(x).
Coincidence theory (the study of coincidence points) is, in most settings, a generalization of fixed point theory, the study of points x with f(x) = x. Fixed point theory is the special case obtained from the above by letting X = Y and taking g to be the identity function.
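As a concrete illustration (with functions chosen purely as an example), the following sketch finds the coincidence points of f(x) = x² and g(x) = 2x, and the fixed points of f as the special case g(x) = x.

```python
import sympy as sp

x = sp.symbols('x')
f = x**2
g = 2 * x

# Coincidence points: solutions of f(x) = g(x)
print(sp.solve(sp.Eq(f, g), x))   # [0, 2]

# Fixed points of f: the special case g(x) = x
print(sp.solve(sp.Eq(f, x), x))   # [0, 1]
```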
Just as fixed point theory has its fixed-point theorems, there are theorems that guarantee the existence of coincidence points for pairs of functions. Notable among them, in the setting of manifolds, is the Lefschetz coincidence theorem, which is typically known only in its special case formulation for fixed points.
Coincidence points, like fixed points, are today studied using many tools from mathematical analysis and topology. An equaliser is a generalization of the coincidence set.
See also
Incidence (geometry)
Intersection (geometry)
References
Mathematical analysis
Topology
Fixed points (mathematics) | Coincidence point | [
"Physics",
"Mathematics"
] | 234 | [
"Mathematical analysis",
"Mathematical analysis stubs",
"Fixed points (mathematics)",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Dynamical systems"
] |
2,050,029 | https://en.wikipedia.org/wiki/ISOLDE | The ISOLDE (Isotope Separator On Line DEvice) Radioactive Ion Beam Facility is an on-line isotope separator facility located at the centre of the CERN accelerator complex on the Franco-Swiss border. Created in 1964, the ISOLDE facility started delivering radioactive ion beams (RIBs) to users in 1967. Originally located at the Synchro-Cyclotron (SC) accelerator (CERN's first ever particle accelerator), the facility has been upgraded several times, most notably in 1992 when the whole facility was moved to be connected to CERN's Proton Synchrotron Booster (PSB). ISOLDE is currently the longest-running facility in operation at CERN, with continuous developments of the facility and its experiments keeping ISOLDE at the forefront of science with RIBs. ISOLDE benefits a wide range of physics communities with applications covering nuclear, atomic, molecular and solid-state physics, but also biophysics and astrophysics, as well as high-precision experiments looking for physics beyond the Standard Model. The facility is operated by the ISOLDE Collaboration, comprising CERN and sixteen (mostly) European countries. As of 2019, close to 1,000 experimentalists around the world (including all continents) are coming to ISOLDE to perform typically 50 different experiments per year.
Radioactive nuclei are produced at ISOLDE by shooting a high-energy (1.4 GeV) beam of protons delivered by CERN's PSB accelerator onto a 20 cm thick target. Several target materials are used depending on the desired final isotopes that are requested by the experimentalists. The interaction of the proton beam with the target material produces radioactive species through spallation, fragmentation and fission reactions. They are subsequently extracted from the bulk of the target material through thermal diffusion processes by heating the target to about 2,000 °C.
The cocktail of produced isotopes is ultimately filtered using one of ISOLDE's two magnetic dipole mass separators to yield the desired isobar of interest. The time required for the extraction process to occur is dictated by the nature of the desired isotope and/or that of the target material and places a lower limit on the half-life of isotopes which can be produced by this method, and is typically of the order of a few milliseconds. For an additional separation, the Resonance Ionisation Laser Ion Source (RILIS) uses lasers to ionise a particular element, which separates the radioisotopes by their atomic number. Once extracted, the isotopes are directed either to one of several low-energy nuclear physics experiments or an isotope-harvesting area. A major upgrade of the REX post-accelerator to the HIE-ISOLDE (High Intensity and Energy Upgrade) superconducting linac completed construction in 2018, allowing for the re-acceleration of radioisotopes to higher energies than previously achievable.
Background
Most atomic nuclei contain protons and neutrons. The number of protons determines the chemical element the nucleus belongs to. Different isotopes of the same element have different numbers of neutrons in their nuclei, but contain the same number of protons. For example, isotopes of carbon include carbon-12, carbon-13, carbon-14, which contain 6, 7, 8 neutrons respectively, but all contain 6 protons. Each isotope of an element has a different nuclear energy state, and may have different stability.
A nuclide is a more general term than isotope, and refers to atoms that have any particular number of protons and neutrons. Stable nuclides are not radioactive and do not spontaneously undergo radioactive decay, so are more usually found in nature. Whereas unstable (i.e. radioactive) nuclides are not found in nature, unless there is a recent source of them, because they are shorter lived, and will spontaneously decay, in one or more steps, to more stable nuclides. For example, carbon-14 is unstable but is found in nature. Scientists use accelerators and nuclear reactors to produce radioactive nuclides. As a general trend, and among other factors, the neutron–proton ratio of a nuclide determines its stability. The value of this ratio for stable nuclides generally increases for larger nuclei with more protons and neutrons. Many unstable nuclides have neutron-proton ratios beyond the zone of stability. The time required to lose half of a quantity of a given nuclide through radioactive decays, the half-life, is a measure of how stable an isotope is.
Nuclides can be visually represented on a table (Segré chart or table of nuclides) where the proton number is plotted against the neutron number.
History
In 1950, two Danish physicists, Otto Kofoed-Hansen and Karl-Ove Nielsen, discovered a new technique for producing radioisotopes which enabled production of isotopes with shorter half-lives than earlier methods. The Copenhagen experiment they carried out included a simplified version of the same elements used in modern on-line experiments. Ten years later, in Vienna, at a symposium about separating radioisotopes, plans for an ‘on-line’ isotope separator were published. Using these plans, CERN's Nuclear Chemistry Group (NCG) built a prototype on-line mass separator coupled to a target and ion source, which was bombarded by a 600 MeV proton beam delivered by CERN's Synchro-Cyclotron. The test was a success and showed that the SC was an ideal machine for on-line rare isotope production. The plan for an electromagnetic isotope separator was developed during 1963–4 by European nuclear physicists and, in late 1964, their proposal was accepted by the CERN Director-General and the ISOLDE project began.
The "Finance Committee" for the project set up originally with five members, then extended to twelve to include two members per 'country' (including CERN). As the term "Finance Committee" had other connotations, it was decided 'until a better name was found' to call the project ISOLDE and the committee the ISOLDE Committee. In 1965, as the underground hall at CERN was being excavated, the isotope separator for ISOLDE was being constructed in Aarhus. In May 1966, the SC shut down for some major modifications. One of these modifications was the construction of a new tunnel to send proton beams to a future underground hall that would be dedicated to ISOLDE. Separator construction made good progress in 1966, along with the appointing of Arve Kjelberg as the first ISOLDE coordinator, and the underground hall was finished in 1967. On 16 October 1967, the first proton beams interacted with the target and the first experiments were successful in proving that the technique worked as expected. In 1969, the first paper was published with studies of various short-lived isotopes.
Shortly after the ISOLDE experimental program started, some major improvements for the SC were planned. In 1972 the SC shut down to upgrade its beam intensity by changing its radiofrequency system. The SC Improvement Program (SCIP) increased the primary proton beam intensity by a factor of about 100. To be able to handle this high intensity, the ISOLDE facility also needed some modifications so that the improved beam could be successfully extracted to ISOLDE. After the necessary modifications, the new ISOLDE facility, also known as ISOLDE 2, was launched in 1974. Its new target design combined with the increased beam intensity from the SC led to significant enhancements in the number of nuclides produced. However, after some time the external beam current from the SC started to be a limiting factor. The collaboration discussed the possibility of moving the facility to an accelerator that could reach higher current values but decided instead to build another separator of ultra-modern design for the facility. The new high-resolution separator, ISOLDE 3, was in full use by the end of the 1980s. In 1990 a new ion source, RILIS, was installed at the facility to selectively and efficiently produce radioactive beams.
The SC was decommissioned in 1990, after having been in operation for more than three decades. As a consequence, the collaboration decided to relocate the ISOLDE facility to the Proton Synchrotron, and place the targets in an external beam from its 1 GeV booster. The construction of the new ISOLDE experimental hall started about three months prior to the decommissioning of the SC. With the relocation also came several upgrades. The most notable being the installation of two new magnetic dipole mass separators. One general-purpose separator with one bending magnet and the other one is a high-resolution separator with two bending magnets. The latter one is a reconstructed version of the ISOLDE 3. The first experiment at the new facility, known as ISOLDE PSB, was performed on 26 June 1992. In May 1995, two industrial robots were installed in the facility to handle the targets and ion sources units without human intervention.
To diversify the scientific activities of the facility, a post-accelerator system called REX-ISOLDE (Radioactive beam EXperiments at ISOLDE) was approved in 1995 and inaugurated at the facility in 2001. With this new addition, nuclear reaction experiments which require a high-energy RIB could now be performed at ISOLDE. Additionally, REXTRAP operates as a Penning Trap for the REX-ISOLDE then transfers bunches of ions to REXEBIS, an Electron Beam Ion Source (EBIS), which traps the isotopes produced and further ionises them.
The facility building was extended in 2005 to allow more experiments to be set up. ISCOOL, an ion cooler and buncher, increasing the beam quality for experiments was installed at the facility in 2007. In 2006, the International Advisory Board decided that upgrading ISOLDE hall with a linear post-accelerator design based on superconducting quarter-wave resonators would allow for a full-energy availability, crucially without the reduction of beam quality. The HIE-ISOLDE project was approved in December 2009, and involves an upgrade of the energy range from 3 MeV per nucleon, to 5 MeV, and lastly to 10 MeV per nucleon. The design also incorporated an intensity upgrade to make best use of the delivered proton beams. The upgrade project was split into three different phases, to be completed over a number of years.
In late 2013 the construction of a new facility for medical research called CERN MEDICIS (MEDical Isotopes Collected from ISOLDE) started. Of the incident proton beams used at ISOLDE, only 10% are actually stopped in the targets and achieve their objective, while the remaining 90% are not used. The MEDICIS facility is designed to work with the remaining proton beams that have already passed a first target. The second target produces specific radioisotopes that are delivered to hospitals and research facilities and can be made injectable.
In 2013, during the Long Shutdown 1, three ISOLDE buildings were demolished. They were rebuilt as a single new building with a new control room, a data storage room, three laser laboratories, a biology and materials laboratory, and a room for visitors. Another building extension for the MEDICIS project and several others equipped with electrical, cooling and ventilation systems to be used for the HIE-ISOLDE project in the future were also built. In addition, the robots which were installed for the handling of radioactive targets have been replaced with more modern robots. In 2015, for the first time, a radioactive isotope beam could be accelerated to an energy level of 4.3 MeV per nucleon in the ISOLDE facility thanks to the HIE-ISOLDE upgrades. In late 2017, the CERN-MEDICIS facility produced its first radioisotopes and by the end of 2020 had provided nine external hospitals and research facilities with 41 batches of radioisotopes. Phase 2 of the facility's HIE-ISOLDE upgrade was completed in 2018, which allows ISOLDE to accelerate radioactive beams up to 10 MeV per nucleon.
Facility and concept
The ISOLDE facility contains the Class A laboratories, buildings for the HIE-ISOLDE and MEDICIS projects, and the control rooms located in building 508. Before ISOLDE, radioactive nuclides were transported from the production area to the laboratory for examination. At ISOLDE, all processes from the production to the measurements are connected and the radioactive material requires no extra transport. Due to this, ISOLDE is referred to as an on-line facility.
At the ISOLDE facility, the main proton beam for reactions comes from the PSB. The incoming proton beam has an energy of 1.4 GeV and its average intensity varies up to 2 μA. The beam enters the facility and is directed towards one of two mass separators: the General Purpose Separator (GPS) and the High Resolution Separator (HRS). The separators have independently run target-ion source systems, delivering 60 keV RIBs.
The targets used at ISOLDE allow for the quick production and extraction of radioactive nuclei. Some targets consist of molten metal kept at a high temperature (700 °C to 1400 °C), which results in long isotope release times. Heating the target to higher temperatures, typically above 2000 °C, makes for a faster release time. Using a target heavier than the desired isotope results in production via spallation or fragmentation.
The ion sources, used in combination with the targets at ISOLDE, produce an ion beam of (preferably) one chemical element. There are three types used: surface ion sources, plasma ion sources and laser ion sources. The surface ion sources consist of a metal tube with a high work function heated up to 2400 °C, so that the atom can be ionised. If an atom cannot be surface ionised, the plasma ion source is used. The plasma is produced by an ionised gas mixture and optimised using an additional magnetic field. The laser ion source used at ISOLDE is RILIS.
The GPS is made with a double focusing magnet with a bending radius of 1.5 m and a bending angle of 70°. The resolution of the GPS is approximately 800. The GPS sends beams to an electronic switchyard, allowing three mass separated beams to be simultaneously extracted. The second separator, the HRS, consists of two dipole magnets, with bending radii of 1 m and bending angles of 90° and 60°, and an elaborate ion-optical system. The overall resolution of the HRS has been measured as 7000, which enables it to be used for experiments requiring higher mass resolution values. The GPS switchyard and HRS are connected to a common central beam-line used to provide beam to the various experimental setups located in the ISOLDE facility.
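As a rough illustration of what these resolving powers mean, the sketch below estimates the mass resolving power, conventionally R = m/Δm, needed to separate two species of given mass; the masses used are assumed, rounded example values rather than evaluated data.

```python
def min_resolving_power(mass_1_u, mass_2_u):
    """Resolving power R = m / delta_m needed to separate two species."""
    mean_mass = 0.5 * (mass_1_u + mass_2_u)
    return mean_mass / abs(mass_1_u - mass_2_u)

# Neighbouring mass numbers near A = 100 are about 1 u apart:
print(min_resolving_power(100.0, 101.0))    # ~100, well within the GPS (R ~ 800)

# Two hypothetical A = 40 isobars about 0.010 u apart (illustrative numbers only):
print(min_resolving_power(39.960, 39.970))  # ~4000, requiring the HRS (R ~ 7000)
```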
ISCOOL
The ISOLDE COOLer (ISCOOL) is located downstream from the HRS, and extends up to the merging switchyard joining the two mass separator beams. ISCOOL is a general-purpose Radio Frequency Quadrupole Cooler and Buncher (RFQCB), with the purpose of cooling (improving the beam quality) and bunching the RIB from the HRS. Incoming ions collide with the neutral buffer gas, losing their energy, and then are radially confined. The beam is then extracted from ISCOOL.
RILIS
The magnetic mass separators are able to separate ions by mass number; however, they are unable to sort isobars of the same mass. If an experiment requires a higher degree of chemical purity, it will need the beam to have an additional separation, by proton number. RILIS provides this separation by using step-wise resonance photo-ionisation, involving precisely tuned laser wavelengths matched exactly to a specific element's successive electron transition energies. Ionisation will only occur for the desired element, and the other elements within the ion-source will remain unchanged. This process of laser ionisation takes place in a hot metal cavity to provide the spatial confinement needed for the atomic vapour to be illuminated. A high frequency laser system is needed to ionise the atom before it leaves the cavity. All in all, the ISOLDE facility provides 1300 isotopes from 75 elements in the periodic table.
CERN-MEDICIS
The project CERN-MEDICIS is running to supply radioactive isotopes for medical applications. The proton beams from the PSB preserve 90% of their intensities after hitting a standard target in the facility. The CERN-MEDICIS facility uses the remaining protons on a target that is placed behind the HRS target, in order to produce radioisotopes for medical purposes. The irradiated target is then carried to the MEDICIS building on an automated conveyor, where the isotopes of interest are separated and collected.
REX-ISOLDE
The post-accelerator REX-ISOLDE is a combination of different devices used to accelerate radioisotopes to boost their energy to 10 MeV per nucleon, increased from 3 MeV per nucleon due to HIE-ISOLDE upgrades. The incoming RIBs have enough energy to overcome the first potential threshold of the Penning trap, REXTRAP, but within the trap the ions lose energy through collisions with buffer gas atoms. This cools the ions and their movement is dampened by a combination of a radio-frequency (RF) excitation and a buffer gas. The ion bunches are extracted from REXTRAP and injected into REXEBIS.
REXEBIS uses a strong magnetic field to focus electrons from an electron gun in order to produce highly charged ions. The ions are confined radially and longitudinally, after which they will undergo stepwise ionisation through electron impact. A mass separator is required to separate the subsequent ions, due to the small intensity after being extracted from the EBIS.
The next stage of REX-ISOLDE consists of a normal conducting (room-temperature) linac, where the ions are accelerated by an RFQ. An interdigital H-type (IH) structure uses resonators to boost the beam energy up to its maximum value.
REX-ISOLDE was originally intended to accelerate light isotopes, but has passed this goal and provided post-accelerated beams of a wider mass range, from 6He up to 224Ra. The post-accelerator has delivered accelerated beams of more than 100 isotopes and 30 elements since its commissioning.
HIE-ISOLDE upgrades
Satisfying the ever-increasing needs for higher quality, intensity, and energy of the production beam is very important for facilities such as ISOLDE. As the latest response to these needs, the HIE-ISOLDE upgrade project is currently ongoing. Due to its phased planning, the upgrade project is being carried out with the least impact on the experiments continuing in the facility. The project includes an energy increase for REX-ISOLDE up to 10 MeV per nucleon as well as resonator and cooler upgrades, enhancement of the input beam from the PSB, and improvements to targets, ion sources, and mass separators. Following the completion of the phase two upgrade of HIE-ISOLDE in 2018, which included installing four high-beta cryomodules, the next and final phase will replace the REX structures after the IH structure (IHS) with two low-beta cryomodules. This will improve the beam quality and allow a continuously variable energy between 0.45 and 10 MeV per nucleon. As a state-of-the-art project, HIE-ISOLDE is expected to expand the research opportunities at the ISOLDE facility to the next level. When completed, the upgraded facility will be able to host advanced experiments in fields like nuclear physics and nuclear astrophysics.
Experimental setups
ISOLDE contains both temporary and fixed experimental setups. Temporary setups in the ISOLDE facility are there for shorter time periods, and generally focus on detecting specific decay modes of nuclei. The fixed experimental setups have a permanent position at the facility. They include:
COLLAPS
The COLinear LAser SPectroScopy (COLLAPS) experiment has been operating at ISOLDE since the late 1970s and is the oldest active experiment at the facility. COLLAPS studies ground and isomeric state properties of highly-unstable (exotic), short-lived nuclei, including measurements of their spins, electro-magnetic moments and charge radii. The experiment uses the technique of collinear spectroscopy using lasers to access necessary atomic transitions.
CRIS
The Collinear Resonance Ionization Spectroscopy (CRIS) experiment uses fast beam collinear laser spectroscopy alongside the technique of resonance ionization to produce results with a high resolution and efficiency. The experiment studies ground-state properties of exotic nuclei and produces isomeric beams used for decay studies.
EC-SLI
The Emission Channeling with Short-Lived Isotopes (EC-SLI) experiment uses the emission channelling method to study lattice locations of dopants and impurities in crystals and epitaxial thin films. This is done by introducing short-lived isotope probes into the crystal and measuring the electron intensity affected to determine whether they have been affected by the decay particles emitted.
IDS
The ISOLDE Decay Station (IDS) experiment is a setup that allows different experiment systems to be coupled to the station, using spectroscopy techniques such as fast timing or time-of-flight (ToF). The station, operational since 2014, is used to measure decay properties of a wide range of radioactive isotopes for a variety of applications. Results from the IDS have been useful for astrophysics, as they measured the probability of a particular decay seen in red giant stars.
ISS
The ISOLDE Solenoidal Spectrometer (ISS) experiment uses an ex-MRI magnet to direct RIBs at a light target. Conditions produced by this reaction replicate those present in astrophysical processes, and measuring the properties of the atomic nuclei will also provide a better understanding of nucleon-nucleon interactions in exotic nuclei. The experiment finished construction during Long Shutdown 2 and was commissioned in 2021.
ISOLTRAP
The ISOLTRAP experiment is a high-precision mass spectrometer that uses the ToF detection technique to measure mass. Since the start of its operation, ISOLTRAP has measured the mass of hundreds of short-lived radioactive nuclei, as well as confirming the existence of doubly magic isotopes. The setup was upgraded in 2011 to include a multi-reflection time-of-flight mass spectrometer (MR-ToF), allowing the detection of more exotic isotopes.
LUCRECIA
The LUCRECIA experiment is based on a Total Absorption gamma Spectrometer (TAS), which measures the gamma transitions in an unstable parent nucleus. From these measurements, nuclear structure is analysed and used to confirm theoretical models and make stellar predictions.
Miniball
The Miniball experiment is a gamma-ray spectroscopy setup consisting of a high-resolution germanium detector array. The experiment is used to analyse the decays of short-lived nuclei involved in Coulomb excitation and transfer reactions. Results from Miniball at ISOLDE that found evidence of pear-shaped heavy nuclei was named in the Institute of Physics (IoP) "top 10 breakthroughs in physics".
MIRACLS
The Multi Ion Reflection Apparatus for CoLlinear Spectroscopy (MIRACLS) experiment determines properties of exotic radioisotopes by measuring their hyperfine structure. MIRACLS performs laser spectroscopy on ion bunches trapped in an MR-ToF device, to increase the flight path of the ions. Currently, the experiment is being designed and constructed.
SEC
The Scattering Experiments Chamber (SEC) experiment facilitates diversified reaction experiments, and is complementary to the ISS and Miniball, as the SEC does not detect gamma radiation. The station is used to study low-lying resonances in light atomic nuclei through transfer reactions.
VITO
The Versatile Ion polarisation Technique Online (VITO) experiment is a beamline used to investigate the weak interaction and determine properties of short-lived unstable nuclei. The experiment uses the technique of optical pumping to produce laser-polarised RIBs allowing for versatile studies. There are three independent studies on the VITO beamline including a β-NMR spectroscopy station.
WISArD
The Weak Interaction Studies with 32Ar Decay (WISArD) experiment investigates the weak interaction to search for physics beyond the Standard Model (SM). The WISArD setup reuses some of the WITCH experiment's infrastructure, as well as its superconducting magnet. The experiment measures the angular correlation between particles emitted by a parent and daughter nucleus to calculate non-SM contributions.
Solid-state physics laboratory
Attached to ISOLDE in building 508, is CERN's solid-state physics laboratory. Solid state physics research (SSP) accounts for 10–15% of the yearly allocation of beam time and uses about 20–25% of the overall number of experiments running at ISOLDE. The laboratory uses the technique of Time Differential Perturbed Angular Correlation (TDPAC) to probe the large quantity of available radioactive elements provided by ISOLDE. This technique has also been used to measure ferromagnetic and ferroelectric properties of materials, as well as providing ion beams for other facilities within ISOLDE. Additional methods used for SSP are tracer diffusion, online-Mössbauer spectroscopy (57Mn) and photoluminescence with radioactive nuclei.
Beamline installations
The HIE-ISOLDE project introduced a network of High Energy Beam Transfer (HEBT) beamlines to the ISOLDE facility. The common section beamline, XT00, joins to three bending beamlines (XT01, XT02, XT03) leading to different experiment setups. The three identical beamlines are independent of each other, for example, if the first XT01 dipole magnet is off, the beam will continue to the XT02 and XT03. They all bend the beam by 90 degrees and focus it using two dipole magnets and a doublet-quadrupole. The XT01 beamline leads to Miniball, the XT02 beamline leads to the ISS, and the XT03 beamline leads to movable setups, such as the SEC scattering chamber.
Offline 2 was recently installed as a mass separator beamline at ISOLDE, with the purpose of satisfying the increased demands on the original offline facility, Offline 1. The facility includes the beamline enclosed in a Faraday cage as well as a laser laboratory and control station. The offline facility is designed for target test studies, and upgraded to include potential for the production and study of molecular ion beams.
Results and discoveries
Below is the list of some physics activities done at ISOLDE facility.
Extension of the table of nuclides by discovering new isotopes
The ISOLDE facility continuously develops the nuclear chart, and was the first to study structural evolution in long chains of noble gas, alkali elements and mercury isotopes.
High precision measurements of nuclear masses
The ISOLTRAP experimental setup is able to make high precision measurements of nuclear masses by using a series of Penning traps. The experiment has been able to measure isotopes with very short half-lives (<100 ms) with a precision of below 10−8. For his work on "key contributions to the masses..." of isotopes at ISOLTRAP, among other work, Heinz-Jürgen Kluge was a recipient of the Lise Meitner Prize in 2006.
Discovery of shape staggering in light Hg isotopes
Atomic nuclei are usually spherical, however gradual changes in nuclear shape can occur when the number of neutrons of a given element changes. Research published in 1971 showed that if single neutrons are added to or removed from the nuclei of mercury isotopes, the shape will change to a "rugby ball". Newer studies, from RILIS, show that this shape staggering also occurs with bismuth isotopes.
Contributions to island of inversion measurements and potential discovery of new magic numbers
The island of inversion is a region of the chart of nuclides in which isotopes have enhanced stability, compared to the surrounding unstable nuclei. The island is associated with the magic neutron numbers (N = 8, 14, 20, 28, 50, 82, 126), where this breakdown occurs. Various experiments at ISOLDE have determined properties of these island of inversion isotopes, including the first of their kind measurements performed with Miniball on magnesium-32, lying in the island of inversion at N = 20. Furthermore, the ISOLTRAP experiment provided results using calcium-52 to reveal a potential new magic number, 32, which was later disproven by the CRIS experiment.
Production of isomeric beams
A nuclear isomer is a metastable state of a nucleus, in which one or more nucleons occupy higher energy levels than in the ground state of the same nucleus. In the mid-2000s, REX-ISOLDE developed a technique to select and post-accelerate isomeric beams to use in nuclear-decay experiments, such as at Miniball.
Discovery of beta-delayed multi-particle emission
The first observation of beta-delayed two-neutron emission was made at ISOLDE in 1979, using the isotope lithium-11. Beta-delayed emission occurs for isotopes further away from the line of stability, and involves particle emission after beta decay. Newer studies have been proposed to investigate beta-delayed multi-particle emission of lithium-11 using the IDS.
Studies on nuclear resonance systems beyond the drip line and existence of halo structure
The nuclear drip line is the boundary beyond which adding nucleons to a nucleus will result in the immediate decay of a nucleon (nucleon has 'dripped' out of the nucleus). Accelerated RIBs from REX-ISOLDE are used in transfer reactions which allow for studies of nuclear resonance systems beyond the dripline.
Some light nuclei close to the drip line may have a neutron halo structure, due to the tunnelling of loosely bound neutrons outside the nucleus. This proof of the halo structure was made at ISOLDE from a series of experiments analysing the lithium-11 nucleus.
First observations of short-lived pear-shaped atomic nuclei
Research conducted using the Miniball experimental setup found evidence of pear-shaped heavy nuclei, in particular radon-220 and radium-224. These results were named in the Institute of Physics (IoP) "top 10 breakthroughs in physics" in 2013, and were featured on the cover of Nature in 2013. In 2020, due to the HIE-ISOLDE upgrade, radium-222 was also found to have a "stable pear shape". Laser spectroscopy has been performed on a short-lived radioactive molecule containing radium; further studies of such molecules could reveal physics beyond the Standard Model through time-reversal symmetry breaking.
Measurement of 229mTh transition energy
In 2023, ISOLDE made the first 1%-level measurement of the energy of the ultralow-energy thorium-229m nuclear isomer, detecting photons at an energy of about 8.3 eV. This was a key step in the construction of a future nuclear clock.
Improvements and future work
Below is a list of improvements needed for the ISOLDE facility, considering both medium and long-term goals. Some of these improvements have been proposed by the EPIC project.
Medium-term
Parallel RIBs operation
New beam dumps for the two target stations will give a proton beam at higher energy and double intensity
Phase 3 upgrade to the HIE-ISOLDE post-accelerator to increase energy beyond 10 MeV per nucleon
Upgrade of transfer line from the PSB
Long-term
Addition of a storage ring with the capabilities to store short-lived isotopes
A new HRS with a higher resolving power
New ISOLDE building
Installation of two extra target stations
See also
Eurisol
Total absorption spectroscopy
Facility for Rare Isotope Beams
Rare Isotope Science Project
External links
ISOLDE page within CERN website
A mini documentary series about ISOLDE by CERN (YouTube playlist)
Celebrating 50 years of physics at ISOLDE by CERN (YouTube video)
A poster about ISOLDE from ISOLDE website
A poster about HIE-ISOLDE and some other upgrades from ISOLDE website
Further reading
References
Particle physics facilities
CERN facilities
Mass spectrometry | ISOLDE | [
"Physics",
"Chemistry"
] | 6,569 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
2,050,151 | https://en.wikipedia.org/wiki/As%20low%20as%20reasonably%20practicable | As low as reasonably practicable (ALARP), or as low as reasonably achievable (ALARA), is a principle in the regulation and management of safety-critical and safety-involved systems. The principle is that the residual risk shall be reduced as far as reasonably practicable. In UK and NZ Health and safety law, it is equivalent to so far as is reasonably practicable (SFAIRP). In the US, ALARA is used in the regulation of radiation risks.
For a risk to be ALARP, it must be possible to demonstrate that the cost involved in reducing the risk further would be grossly disproportionate to the benefit gained. The ALARP principle arises from the fact that infinite time, effort and money could be spent attempting to reduce a risk to zero, not from the fact that halving the risk would require only finite time, effort and money. It should not be understood as simply a quantitative measure of benefit against detriment. It is more a best common practice of judgement of the balance of risk and societal benefit.
Factors
In this context, risk is the combination of the frequency (likelihood) and the consequence of a specified hazardous event.
Several factors are likely to be considered when deciding whether or not a risk has been reduced as far as reasonably practicable:
Health and safety guidelines and codes of practice
Manufacturer's specifications and recommendations
Industry practice
International standards and laws
Suggestions from advisory bodies
Comparison with similar hazardous events in other industries
Cost of further measures would be disproportionate to the risk reduction they would achieve
Another factor is often the cost of assessing the improvement gained in an attempted risk reduction. In extremely complex systems, this can be very high, and could be the limiting factor in practicability of risk reduction, although according to UK HSE guidance, cost alone should never be a justification for taking extra safety risks.
Determining that a risk has been reduced to ALARP involves an assessment of the risk to be avoided, of the sacrifice (in money, time and trouble) involved in taking measures to avoid that risk, and a comparison of the two. This is a cost–benefit analysis (CBA). A difficulty arising in CBAs is assigning a meaningful and agreed financial value to human life. A CBA exercise, in the context of ALARP, must have a means of assigning financial values to impacts to the environment, physical assets, production stoppage, company reputation, etc., which also presents significant challenges to the analyst.
Origin in UK law
The term ALARP arises from UK legislation, particularly the Health and Safety at Work etc. Act 1974, which requires "Provision and maintenance of plant and systems of work that are, so far as is reasonably practicable, safe and without risks to health". The phrase So Far As is Reasonably Practicable (SFARP) in this and similar clauses is interpreted as leading to a requirement that risks must be reduced to a level that is As Low As is Reasonably Practicable (ALARP).
The key question in determining whether a risk is ALARP is the definition of reasonably practicable. This term has been enshrined in the UK case law since the case of Edwards v. National Coal Board in 1949. The ruling was that the risk must be significant in relation to the sacrifice (in terms of money, time or trouble) required to avert it: risks must be averted unless there is a gross disproportion between the costs and benefits of doing so.
Including gross disproportion means that an ALARP judgement in the UK is not a simple cost benefit analysis, but is weighted to favour carrying out the safety improvement. However, there is no broad consensus on the precise factor that would be appropriate: the HSE recommends that the bias towards safety "has to be argued in the light of all the circumstances applying to the case and the precautionary approach that these circumstances warrant".
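The gross-disproportion test is sometimes illustrated as a weighted cost–benefit comparison, as in the minimal sketch below. Every figure and the function name are assumptions made for the example; they are not values endorsed by the HSE or any other regulator.

```python
def measure_is_reasonably_practicable(cost_of_measure, monetised_benefit, disproportion_factor):
    """A measure is required unless its cost is grossly disproportionate to the
    benefit, here modelled as cost exceeding factor * benefit."""
    return cost_of_measure <= disproportion_factor * monetised_benefit

cost    = 250_000.0   # assumed cost of the proposed safety measure
benefit = 80_000.0    # assumed monetised value of the risk reduction
factor  = 3.0         # assumed weighting in favour of safety

if measure_is_reasonably_practicable(cost, benefit, factor):
    print("Not grossly disproportionate: the measure should be implemented.")
else:
    print("Cost appears grossly disproportionate; further justification is needed.")
```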
Use outside the UK
The ALARP or ALARA principle is mandated by particular legislation in some countries outside the UK, including Australia, the Netherlands and Norway. Where the ALARP principle is used, it may not have the same implications as in the UK, as "reasonably practicable" may be interpreted according to the local culture, without introducing the concept of gross disproportionality.
The term ALARA, or "as low as reasonably achievable" is used interchangeably in the United States of America. It is used in the field of radiation protection. Its application in the regulation of radiation risk in some areas has been challenged.
Health Canada's Medical Devices Directorate is transitioning from the ALARP standard to AFAP ("As Far As Possible") in the regulation of risk of medical devices. The ALARP concept can be interpreted as placing financial considerations above the requirements of safety and performance of medical devices. Contradicting this approach, AFAP requires that all safety measures be addressed with the interests of the consumer and the effectiveness of the product in mind, rather than the capital gain of the corporation. Risks previously deemed 'negligible' may be ignored under the old standard but must be taken into account and included in risk analysis under the newer AFAP-based standard. Under AFAP standards there are two defined justifications for not implementing a risk-preventative measure. The first is that the additional risk control would not provide additional support for the system, such as an additional alarm when a previous alarm is functioning. The second is that a risk control does not have to be implemented if there is a more effective risk control that cannot be simultaneously executed due to constraints such as spatial boundaries. Under this new standard of risk mitigation, companies must demonstrate that they have considered and implemented all necessary means of addressing the risk of a product or developed system.
In Australia the Work Health & Safety Act 2011 introduced the term So Far As Is Reasonably Practicable (SFAIRP), based on the UK legislation. In some industry sectors the term SFAIRP has become the common usage and can be used interchangeably with ALARP, but some people believe that SFAIRP and ALARP are two different legal tests.
Legal challenge
A two-year legal battle in the European Court of Justice resulted in the SFAIRP principle being upheld on 18 January 2007.
The European Commission had claimed that the SFAIRP wording in the Health & Safety at Work Act did not fully implement the requirements of the Framework Directive. The Directive gives employers an absolute duty "to ensure the safety and health of workers in every aspect related to the work", whereas the Act qualifies the duty "So Far As is Reasonably Practicable". The court dismissed the action and ordered the commission to pay the UK's costs.
Had the case been upheld, it would have called into question the proportionate approach to safety risk management embodied in the ALARP principle.
Carrot diagrams
'Carrot diagrams' show high (normally unacceptable) risks at the upper/wider end and low (broadly acceptable) risks at the lower/narrower end, with a 'tolerable' or 'ALARP' region in between. They were originally developed by the Health and Safety Executive (HSE) to illustrate their framework for the Tolerability of Risk (TOR), which set out the HSE's approach to regulating safety risks. While the ALARP principle applies at all levels of risk under UK health and safety law, the TOR framework captures the concept that some risks are too great to be acceptable, whatever the benefit; while others are so low as to be insignificant. The HSE, as regulators, would not usually require further action to reduce these broadly acceptable risks unless reasonably practicable measures were available, although they would still take into account that duty holders must reduce risks wherever it is reasonably practicable to do so. Between the two extremes, risks can be tolerated in order to secure benefits, so long as they have been risk assessed and are kept ALARP.
Carrot diagrams are sometimes known as 'ALARP Triangles'. However, this can be misleading because they illustrate the Tolerability of Risk framework rather than the ALARP principle itself, and can be misinterpreted as meaning either that ALARP legally applies only in the tolerable region, or that risks in tolerable region are automatically ALARP.
See also
Value of life
Safety integrity level
References
External links
UK Health and Safety Executive ALARP Suite of Guidance
UK Health and Safety Executive principles for Cost Benefit Analysis (CBA) in support of ALARP decisions
UK Defence Standard 00-56 Safety Management Requirements
Safety
Process safety
Risk management | As low as reasonably practicable | [
"Chemistry",
"Engineering"
] | 1,763 | [
"Chemical process engineering",
"Safety engineering",
"Process safety"
] |
2,050,667 | https://en.wikipedia.org/wiki/Quantum%20tomography | Quantum tomography or quantum state tomography is the process by which a quantum state is reconstructed using measurements on an ensemble of identical quantum states. The source of these states may be any device or system which prepares quantum states either consistently into quantum pure states or otherwise into general mixed states. To be able to uniquely identify the state, the measurements must be tomographically complete. That is, the measured operators must form an operator basis on the Hilbert space of the system, providing all the information about the state. Such a set of observations is sometimes called a quorum. The term tomography was first used in the quantum physics literature in a 1993 paper introducing experimental optical homodyne tomography.
In quantum process tomography, on the other hand, known quantum states are used to probe a quantum process to find out how the process can be described. Similarly, quantum measurement tomography works to find out what measurement is being performed. In contrast, randomized benchmarking scalably obtains a figure of merit of the overlap between the error-prone physical quantum process and its ideal counterpart.
The general principle behind quantum state tomography is that by repeatedly performing many different measurements on quantum systems described by identical density matrices, frequency counts can be used to infer probabilities, and these probabilities are combined with Born's rule to determine a density matrix which fits the best with the observations.
This can be easily understood by making a classical analogy. Consider a harmonic oscillator (e.g. a pendulum). The position and momentum of the oscillator at any given point can be measured and therefore the motion can be completely described by the phase space. This is shown in figure 1. By performing this measurement for a large number of identical oscillators we get a probability distribution in the phase space (figure 2). This distribution can be normalized (the oscillator at a given time has to be somewhere) and the distribution must be non-negative. So we have retrieved a function which gives a description of the chance of finding the particle at a given point with a given momentum.
For quantum mechanical particles the same can be done. The only difference is that Heisenberg's uncertainty principle must not be violated, meaning that we cannot measure the particle's momentum and position at the same time. The particle's momentum and its position are called quadratures (see Optical phase space for more information) in quantum related states. Measuring one of the quadratures of a large number of identical quantum states gives us a probability density corresponding to that particular quadrature. This is called the marginal distribution, pr(X) (see figure 3). In the following text we will see that this probability density is needed to characterize the particle's quantum state, which is the whole point of quantum tomography.
What quantum state tomography is used for
Quantum tomography is applied on a source of systems, to determine the quantum state of the output of that source. Unlike a measurement on a single system, which determines the system's current state after the measurement (in general, the act of making a measurement alters the quantum state), quantum tomography works to determine the state(s) prior to the measurements.
Quantum tomography can be used for characterizing optical signals, including measuring the signal gain and loss of optical devices, as well as in quantum computing and quantum information theory to reliably determine the actual states of the qubits. One can imagine a situation in which a person Bob prepares many identical objects (particles or fields) in the same quantum states and then gives them to Alice to measure. Not confident with Bob's description of the state, Alice may wish to do quantum tomography to classify the state herself.
Methods of quantum state tomography
Linear inversion
Using Born's rule, one can derive the simplest form of quantum tomography. Generally, it is not known in advance whether the state is pure, and a state may be mixed. In this case, many different types of measurements will have to be performed, many times each. To fully reconstruct the density matrix for a mixed state in a finite-dimensional Hilbert space, the following technique may be used.
Born's rule states , where is a particular measurement outcome projector and is the density matrix of the system.
Given a histogram of observations for each measurement, one has an approximation $\hat{p}_i$ to $\operatorname{Pr}(E_i)$ for each $E_i$.
Given linear operators $A$ and $B$, define the inner product
$\langle\!\langle A | B \rangle\!\rangle := \operatorname{tr}(A^{\dagger} B),$
where $|B\rangle\!\rangle$ is the representation of the operator $B$ as a column vector and $\langle\!\langle A|$ a row vector, such that $\langle\!\langle A|B\rangle\!\rangle$ is the inner product in $\mathbb{C}^{d^2}$ of the two.
Define the matrix $A$ whose rows are the vectorized measurement projectors:
$A = \begin{pmatrix} \langle\!\langle E_1 | \\ \langle\!\langle E_2 | \\ \vdots \end{pmatrix}.$
Here Ei is some fixed list of individual measurements (with binary outcomes), and A does all the measurements at once.
Then applying this to $|\rho\rangle\!\rangle$ yields the probabilities:
$A\,|\rho\rangle\!\rangle = \begin{pmatrix} \operatorname{Pr}(E_1) \\ \operatorname{Pr}(E_2) \\ \vdots \end{pmatrix} = \vec{p}.$
Linear inversion corresponds to inverting this system using the observed relative frequencies to derive $|\rho\rangle\!\rangle$ (which is isomorphic to $\rho$).
This system is not going to be square in general, as for each measurement being made there will generally be multiple measurement outcome projectors $E_i$. For example, in a 2-D Hilbert space with 3 measurements, each measurement has 2 outcomes, each of which has a projector $E_i$, for 6 projectors, whereas the real dimension of the space of density matrices is $(2\cdot 2^2)/2 = 4$, leaving $A$ to be 6 × 4. To solve the system, multiply on the left by $A^T$:
$A^T \vec{p} = A^T A\,|\rho\rangle\!\rangle.$
Now solving for $|\rho\rangle\!\rangle$ yields the pseudoinverse:
$|\rho\rangle\!\rangle = \left(A^T A\right)^{-1} A^T \vec{p}.$
This works in general only if the measurement list $E_i$ is tomographically complete. Otherwise, the matrix $A^T A$ will not be invertible.
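The pseudoinverse step can be illustrated with a short numerical sketch. The following Python/NumPy code is a minimal example, not a production tomography routine; the "true" state and the choice of the six Pauli projectors as the measurement set are assumptions made only for the illustration, and exact Born-rule probabilities are used in place of finite-sample frequencies.

import numpy as np

# Pauli matrices and a "true" state to be reconstructed (assumed for the example)
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
rho_true = 0.5 * (I + 0.3 * X + 0.5 * Y - 0.4 * Z)   # a valid density matrix

# Six projectors E_i: +/- eigenprojectors of X, Y, Z (tomographically complete)
projectors = [0.5 * (I + s * P) for P in (X, Y, Z) for s in (+1, -1)]

# Born's rule p_i = tr(E_i rho); exact probabilities keep the sketch short
p = np.array([np.trace(E @ rho_true).real for E in projectors])

# Build A: each row is the vectorized (conjugated) projector, so that
# A @ vec(rho) = p reproduces Born's rule for every measurement at once
A = np.array([E.conj().reshape(-1) for E in projectors])

# Linear inversion via the pseudoinverse (A is 6 x 4 and not square)
rho_vec = np.linalg.pinv(A) @ p
rho_est = rho_vec.reshape(2, 2)

print(np.round(rho_est, 6))   # matches rho_true up to numerical error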
Continuous variables and quantum homodyne tomography
In infinite dimensional Hilbert spaces, e.g. in measurements of continuous variables such as position, the methodology is somewhat more complex. One notable example is in the tomography of light, known as optical homodyne tomography. Using balanced homodyne measurements, one can derive the Wigner function and a density matrix for the state of the light.
One approach involves measurements along different rotated directions in phase space. For each direction $\theta$, one can find a probability distribution $\operatorname{pr}(X, \theta)$ for the probability density of measurements in the $\theta$ direction of phase space yielding the value $X$. Using an inverse Radon transformation (the filtered back projection) on $\operatorname{pr}(X, \theta)$ leads to the Wigner function, $W(X, P)$, which can be converted by an inverse Fourier transform into the density matrix for the state in any basis. A similar technique is often used in medical tomography.
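As a rough illustration of the filtered back projection step, the sketch below assembles quadrature histograms taken at several phase angles into a sinogram and applies the inverse Radon transform to obtain a Wigner-function estimate. It assumes the scikit-image library; the identical Gaussian marginals stand in for homodyne data of a vacuum-like state and are chosen only for the example.

import numpy as np
from skimage.transform import iradon

# Quadrature grid and local-oscillator phases at which marginals were "measured"
x = np.linspace(-4, 4, 201)
thetas = np.linspace(0.0, 180.0, 90, endpoint=False)   # degrees, as iradon expects

# Assumed data: Gaussian marginals pr(x, theta), identical for every phase;
# real data would come from histograms of homodyne current differences
marginal = np.exp(-x**2) / np.sqrt(np.pi)
sinogram = np.tile(marginal[:, None], (1, thetas.size))   # shape (n_x, n_angles)

# Filtered back projection (inverse Radon transform) yields a Wigner-function
# estimate on the (X, P) grid, up to an overall normalization set by the grid spacing
wigner = iradon(sinogram, theta=thetas, filter_name="ramp", circle=False)

print(wigner.shape, wigner.max())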
Example: single-qubit state tomography
The density matrix of a single qubit can be expressed in terms of its Bloch vector $\vec{r} = (r_x, r_y, r_z)$ and the Pauli vector $\vec{\sigma}$:
$\rho = \tfrac{1}{2}\left(I + \vec{r}\cdot\vec{\sigma}\right).$
The single-qubit state tomography can be performed by means of single-qubit Pauli measurements:
First, create a list of three quantum circuits, with the first one measuring the qubit in the computational basis (Z-basis), the second one performing a Hadamard gate before measurement (which makes the measurement in the X-basis), and the third one performing the appropriate phase-shift gate (that is, $S^{\dagger}$) followed by a Hadamard gate before measurement (which makes the measurement in the Y-basis);
Then, run these circuits (typically thousands of times); the counts in the measurement results of the first circuit produce $\langle Z\rangle$, the second circuit $\langle X\rangle$, and the third circuit $\langle Y\rangle$;
Finally, if $\langle X\rangle^2 + \langle Y\rangle^2 + \langle Z\rangle^2 \le 1$, then a measured Bloch vector is produced as $\vec{r} = (\langle X\rangle, \langle Y\rangle, \langle Z\rangle)$, and the measured density matrix is $\rho = \tfrac{1}{2}(I + \vec{r}\cdot\vec{\sigma})$; if $\langle X\rangle^2 + \langle Y\rangle^2 + \langle Z\rangle^2 > 1$, it will be necessary to renormalize the measured Bloch vector to unit length before using it to calculate the measured density matrix.
This algorithm is the foundation for qubit tomography and is used in some quantum programming routines, like that of Qiskit.
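A minimal sketch of the post-processing in this procedure is shown below, in plain Python/NumPy rather than Qiskit; the raw counts are made-up numbers used only to illustrate the arithmetic. It converts the 0/1 counts of the three circuits into expectation values, renormalizes the Bloch vector if its length exceeds 1, and assembles the measured density matrix.

import numpy as np

# Hypothetical counts from the Z-, X- and Y-basis circuits (1000 shots each): (n0, n1)
counts = {"Z": (600, 400), "X": (820, 180), "Y": (480, 520)}

def expectation(n0, n1):
    # <P> = (n0 - n1) / (n0 + n1) for a +/-1-valued Pauli measurement
    return (n0 - n1) / (n0 + n1)

r = np.array([expectation(*counts[b]) for b in ("X", "Y", "Z")])

# Renormalize if statistical noise pushed the Bloch vector outside the unit sphere
if np.linalg.norm(r) > 1:
    r = r / np.linalg.norm(r)

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

rho = 0.5 * (I + r[0] * X + r[1] * Y + r[2] * Z)
print(np.round(rho, 4))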
Example: homodyne tomography
Electromagnetic field amplitudes (quadratures) can be measured with high efficiency using photodetectors together with temporal mode selectivity. Balanced homodyne tomography is a reliable technique of reconstructing quantum states in the optical domain. This technique combines the advantages of the high efficiencies of photodiodes in measuring the intensity or photon number of light, together with measuring the quantum features of light by a clever set-up called the homodyne tomography detector.
Quantum homodyne tomography is understood by the following example.
A laser is directed onto a 50-50% beamsplitter, splitting the laser beam into two beams. One is used as a local oscillator (LO) and the other is used to generate photons with a particular quantum state. The generation of quantum states can be realized, e.g., by directing the laser beam through a frequency doubling crystal and then onto a parametric down-conversion crystal. This crystal generates two photons in a certain quantum state. One of the photons is used as a trigger signal used to trigger (start) the readout event of the homodyne tomography detector. The other photon is directed into the homodyne tomography detector, in order to reconstruct its quantum state. Since the trigger and signal photons are entangled (this is explained by the spontaneous parametric down-conversion article), it is important to realize that the optical mode of the signal state is created nonlocally only when the trigger photon impinges on the photodetector (of the trigger event readout module) and is actually measured. Put more simply, only when the trigger photon is measured can the signal photon be measured by the homodyne detector.
Now consider the homodyne tomography detector as depicted in figure 4 (figure missing). The signal photon (this is the quantum state we want to reconstruct) interferes with the local oscillator when they are directed onto a 50-50% beamsplitter. Since the two beams originate from the same so-called master laser, they have the same fixed phase relation. The local oscillator must be intense compared to the signal so that it provides a precise phase reference. The local oscillator is so intense that we can treat it classically (a = α) and neglect the quantum fluctuations.
The signal field is spatially and temporally controlled by the local oscillator, which has a controlled shape. Where the local oscillator is zero, the signal is rejected. Therefore, we have temporal-spatial mode selectivity of the signal.
The beamsplitter redirects the two beams to two photodetectors. The photodetectors generate an electric current proportional to the photon number. The two detector currents are subtracted, and the resulting current is proportional to the electric field operator in the signal mode, depending on the relative optical phase of the signal and local oscillator.
Since the electric field amplitude of the local oscillator is much higher than that of the signal, the intensity or fluctuations in the signal field can be seen. The homodyne tomography system functions as an amplifier. The system can be seen as an interferometer with such a high-intensity reference beam (the local oscillator) that unbalancing the interference by a single photon in the signal is measurable. This amplification is well above the photodetectors' noise floor.
The measurement is reproduced a large number of times. Then the phase difference between the signal and local oscillator is changed in order to ‘scan’ a different angle in the phase space. This can be seen from figure 4. The measurement is repeated again a large number of times and a marginal distribution is retrieved from the current difference. The marginal distribution can be transformed into the density matrix and/or the Wigner function. Since the density matrix and the Wigner function give information about the quantum state of the photon, we have reconstructed the quantum state of the photon.
The advantage of this balanced detection method is that this arrangement is insensitive to fluctuations in the intensity of the laser.
The quantum computations for retrieving the quadrature component from the current difference are performed as follows.
The photon number operator for the beams striking the photodetectors after the beamsplitter is given by:
$\hat{n}_i = \hat{a}_i^{\dagger}\hat{a}_i,$
where $i$ is 1 and 2, for beam one and two respectively.
The mode operators of the fields emerging from the beamsplitter are given by:
$\hat{a}_{1,2} = \tfrac{1}{\sqrt{2}}\left(\alpha \pm \hat{a}_s\right).$
Here $\hat{a}_s$ denotes the annihilation operator of the signal and $\alpha$ the complex amplitude of the local oscillator.
The photon number difference is eventually proportional to the quadrature and given by:
$\hat{n}_- = \hat{n}_1 - \hat{n}_2 = \alpha^{*}\hat{a}_s + \alpha\,\hat{a}_s^{\dagger},$
Rewriting this with the relation $\alpha = |\alpha|\,e^{i\theta}$
results in the following relation:
$\hat{n}_- = |\alpha|\left(\hat{a}_s e^{-i\theta} + \hat{a}_s^{\dagger} e^{i\theta}\right) \propto |\alpha|\,\hat{X}_{\theta},$
where we see a clear relation between the photon number difference and the quadrature component $\hat{X}_{\theta}$. By keeping track of the sum current, one can recover information about the local oscillator's intensity $|\alpha|$; this is usually an unknown quantity, but it is important for calculating the quadrature component $\hat{X}_{\theta}$.
Problems with linear inversion
One of the primary problems with using linear inversion to solve for the density matrix is that in general the computed solution will not be a valid density matrix. For example, it could give negative probabilities or probabilities greater than 1 to certain measurement outcomes. This is particularly an issue when fewer measurements are made.
Another issue is that in infinite dimensional Hilbert spaces, an infinite number of measurement outcomes would be required. Making assumptions about the structure and using a finite measurement basis leads to artifacts in the phase space density.
Maximum likelihood estimation
Maximum likelihood estimation (also known as MLE or MaxLik) is a popular technique for dealing with the problems of linear inversion. By restricting the domain of density matrices to the proper space, and searching for the density matrix which maximizes the likelihood of giving the experimental results, it guarantees the state to be theoretically valid while giving a close fit to the data. The likelihood of a state is the probability that would be assigned to the observed results had the system been in that state.
Suppose the measurement outcomes $E_i$ have been observed with frequencies $f_i$. Then the likelihood associated with a state $\rho$ is
$\mathcal{L}(\rho) = \prod_i p_i^{\,f_i},$
where $p_i = \operatorname{Pr}(E_i \mid \rho)$ is the probability of outcome $E_i$ for the state $\rho$.
Finding the maximum of this function is non-trivial and generally involves iterative methods. The methods are an active topic of research.
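One commonly used iteration of this kind is the so-called RρR fixed-point update, in which $R(\rho) = \sum_i (f_i / p_i)\,E_i$ is applied from both sides and the result renormalized, which keeps the estimate a valid density matrix. The sketch below is a simplified, undiluted version for a single qubit with the six Pauli projectors; the observed frequencies are made-up numbers, and no claim is made that any particular software package implements exactly this loop.

import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
E = [0.5 * (I + s * P) for P in (X, Y, Z) for s in (+1, -1)]   # six projectors

# Hypothetical observed relative frequencies f_i for the six outcomes
# (each X/Y/Z pair sums to 1; divided by 3 so that all f_i sum to 1)
f = np.array([0.90, 0.10, 0.55, 0.45, 0.40, 0.60]) / 3.0

rho = np.eye(2) / 2                  # start from the maximally mixed state
for _ in range(500):
    p = np.array([np.trace(Ei @ rho).real for Ei in E])
    R = sum((fi / pi) * Ei for fi, pi, Ei in zip(f, p, E))
    rho = R @ rho @ R                # RrhoR update preserves positivity
    rho = rho / np.trace(rho).real   # renormalize to unit trace

print(np.round(rho, 4))              # remains a valid density matrix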
Problems with maximum likelihood estimation
Maximum likelihood estimation suffers from some less obvious problems than linear inversion. One problem is that it makes predictions about probabilities that cannot be justified by the data. This is seen most easily by looking at the problem of zero eigenvalues. The computed solution using MLE often contains eigenvalues which are 0, i.e. it is rank deficient. In these cases, the solution then lies on the boundary of the n-dimensional Bloch sphere. This can be seen as related to linear inversion giving states which lie outside the valid space (the Bloch sphere). MLE in these cases picks a nearby point that is valid, and the nearest points are generally on the boundary.
This is not a physical problem in itself; the real state might have zero eigenvalues. However, since no eigenvalue may be less than 0, an estimate of 0 for an eigenvalue implies that the estimator is certain the value is 0; otherwise it would have estimated some value greater than 0 with a small degree of uncertainty as the best estimate. This is where the problem arises, in that it is not logical to conclude with absolute certainty after a finite number of measurements that any eigenvalue (that is, the probability of a particular outcome) is 0. For example, if a coin is flipped 5 times and each time heads is observed, it does not mean there is 0 probability of getting tails, despite that being the most likely description of the coin.
Bayesian methods
Bayesian mean estimation (BME) is a relatively new approach which addresses the problems of maximum likelihood estimation. It focuses on finding optimal solutions which are also honest in that they include error bars in the estimate. The general idea is to start with a likelihood function and a function describing the experimenter's prior knowledge (which might be a constant function), then integrate over all density matrices using the product of the likelihood function and prior knowledge function as a weight.
Given a reasonable prior knowledge function, BME will yield a state strictly within the n-dimensional Bloch sphere. In the case of a coin flipped N times to get N heads described above, with a constant prior knowledge function, BME would assign $\tfrac{1}{N+2}$ as the probability for tails.
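For the coin this integral can be checked directly: with a uniform prior over the tails probability q and a likelihood (1 − q)^N for N observed heads, the posterior mean of q is 1/(N + 2), Laplace's rule of succession. The short sketch below verifies this numerically; it is only a classical one-parameter illustration of a Bayesian mean estimate, not a full density-matrix BME.

import numpy as np
from scipy.integrate import quad

N = 5   # number of coin flips, all observed to be heads

# Posterior mean of the tails probability q under a uniform prior:
# E[q | data] = Integral q * L(q) dq / Integral L(q) dq, with L(q) = (1 - q)**N
numerator, _ = quad(lambda q: q * (1 - q)**N, 0.0, 1.0)
denominator, _ = quad(lambda q: (1 - q)**N, 0.0, 1.0)

print(numerator / denominator, 1.0 / (N + 2))   # both evaluate to 1/(N+2)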
BME provides a high degree of accuracy in that it minimizes the operational divergences of the estimate from the actual state.
Methods for incomplete data
The number of measurements needed for a full quantum state tomography for a multi-particle system scales exponentially with the number of particles, which makes such a procedure impossible even for modest system sizes. Hence, several methods have been developed to realize quantum tomography with fewer measurements.
The concept of matrix completion and compressed sensing have been applied to reconstruct density matrices from an incomplete set of measurements (that is, a set of measurements which is not a quorum). In general, this is impossible, but under assumptions (for example, if the density matrix is a pure state, or a combination of just a few pure states) then the density matrix has fewer degrees of freedom, and it may be possible to reconstruct the state from the incomplete measurements.
Permutationally invariant quantum tomography is a procedure that has been developed mostly for states that are close to being permutationally symmetric, which is typical in present-day experiments. For two-state particles, the number of measurements needed scales only quadratically with the number of particles. Besides the modest measurement effort, the processing of the measured data can also be done efficiently: it is possible to carry out the fitting of a physical density matrix on the measured data even for large systems. Permutationally invariant quantum tomography has been combined with compressed sensing in a six-qubit photonic experiment.
Quantum measurement tomography
One can imagine a situation in which an apparatus performs some measurement on quantum systems, and determining what particular measurement it performs is desired. The strategy is to send in systems of various known states, and use these states to estimate the outcomes of the unknown measurement. Tomography techniques, also known as "quantum estimation", are increasingly important, including those for quantum measurement tomography and the very similar quantum state tomography. Since a measurement can always be characterized by a set of POVM elements, the goal is to reconstruct the characterizing POVM elements $E_i$. The simplest approach is linear inversion. As in quantum state tomography, use
$\operatorname{Pr}(i \mid \rho_j) = \operatorname{tr}(E_i\,\rho_j).$
Exploiting linearity as above, this can be inverted to solve for the $E_i$.
Not surprisingly, this suffers from the same pitfalls as in quantum state tomography: namely, non-physical results, in particular negative probabilities. Here the reconstructed $E_i$ will not be valid POVM elements, as they will not be positive. Bayesian methods as well as maximum likelihood estimation of the density matrix can be used to restrict the operators to valid physical results.
Quantum process tomography
Quantum process tomography (QPT) deals with identifying an unknown quantum dynamical process. The first approach, introduced in 1996 and sometimes known as standard quantum process tomography (SQPT) involves preparing an ensemble of quantum states and sending them through the process, then using quantum state tomography to identify the resultant states. Other techniques include ancilla-assisted process tomography (AAPT) and entanglement-assisted process tomography (EAPT) which require an extra copy of the system.
Each of the techniques listed above are known as indirect methods for characterization of quantum dynamics, since they require the use of quantum state tomography to reconstruct the process. In contrast, there are direct methods such as direct characterization of quantum dynamics (DCQD) which provide a full characterization of quantum systems without any state tomography.
The number of experimental configurations (state preparations and measurements) required for full quantum process tomography grows exponentially with the number of constituent particles of a system. Consequently, in general, QPT is an impossible task for large-scale quantum systems. However, under weak decoherence assumption, a quantum dynamical map can find a sparse representation. The method of compressed quantum process tomography (CQPT) uses the compressed sensing technique and applies the sparsity assumption to reconstruct a quantum dynamical map from an incomplete set of measurements or test state preparations.
Quantum dynamical maps
A quantum process, also known as a quantum dynamical map, $\mathcal{E}$, can be described by a completely positive map
$\mathcal{E}(\rho) = \sum_i A_i\,\rho\,A_i^{\dagger},$
where $\rho \in \mathcal{B}(\mathcal{H})$, the bounded operators on the Hilbert space $\mathcal{H}$, with operation elements $A_i$ satisfying $\sum_i A_i^{\dagger} A_i \le I$ so that $\operatorname{tr}\!\left(\mathcal{E}(\rho)\right) \le 1$.
Let $\{\tilde{A}_m\}$ be an orthogonal basis for $\mathcal{B}(\mathcal{H})$. Write the $A_i$ operators in this basis
$A_i = \sum_m a_{im}\,\tilde{A}_m.$
This leads to
$\mathcal{E}(\rho) = \sum_{m,n} \chi_{mn}\,\tilde{A}_m\,\rho\,\tilde{A}_n^{\dagger},$
where $\chi_{mn} = \sum_i a_{im}\,a_{in}^{*}$.
The goal is then to solve for $\chi$, which is a positive superoperator and completely characterizes $\mathcal{E}$ with respect to the $\tilde{A}$ basis.
Standard quantum process tomography
SQPT approaches this using $d^2$ linearly independent inputs $\rho_j$, where $d$ is the dimension of the Hilbert space $\mathcal{H}$. For each of these input states $\rho_j$, sending it through the process gives an output state $\mathcal{E}(\rho_j)$ which can be written as a linear combination of the $\rho_k$, i.e. $\mathcal{E}(\rho_j) = \sum_k \lambda_{jk}\,\rho_k$. By sending each $\rho_j$ through many times, quantum state tomography can be used to determine the coefficients $\lambda_{jk}$ experimentally.
Write
$\tilde{A}_m\,\rho_j\,\tilde{A}_n^{\dagger} = \sum_k \beta_{jk}^{mn}\,\rho_k,$
where $\beta$ is a matrix of coefficients.
Then
$\mathcal{E}(\rho_j) = \sum_k \sum_{mn} \chi_{mn}\,\beta_{jk}^{mn}\,\rho_k.$
Since the $\rho_k$ form a linearly independent basis,
$\lambda_{jk} = \sum_{mn} \chi_{mn}\,\beta_{jk}^{mn}.$
Inverting $\beta$ gives $\chi$:
$\chi_{mn} = \sum_{jk} \left(\beta^{-1}\right)_{jk}^{mn}\,\lambda_{jk}.$
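The first of these steps, expanding each output state in the chosen input basis to obtain the λ coefficients, can be done with a small linear solve. The sketch below is a single-qubit illustration in which the "unknown" process is taken, purely for the example, to be a bit-flip channel, and ideal state tomography of the outputs is assumed; it is not a full χ-matrix reconstruction.

import numpy as np

# Four linearly independent single-qubit input states rho_j (d^2 = 4)
zero = np.array([[1, 0], [0, 0]], dtype=complex)
one = np.array([[0, 0], [0, 1]], dtype=complex)
plus = np.full((2, 2), 0.5, dtype=complex)                   # |+><+|
plus_i = np.array([[0.5, -0.5j], [0.5j, 0.5]])               # |+i><+i|
inputs = [zero, one, plus, plus_i]

# "Unknown" process to be characterized: here, for illustration, a bit flip
X = np.array([[0, 1], [1, 0]], dtype=complex)
def process(rho):
    return X @ rho @ X

# In an experiment the outputs would come from quantum state tomography;
# here they are computed exactly
outputs = [process(r) for r in inputs]

# Solve E(rho_j) = sum_k lambda_jk * rho_k for the coefficient matrix lambda.
# Vectorize: columns of B are vec(rho_k), so B @ lambda[j, :] = vec(E(rho_j)).
B = np.column_stack([r.reshape(-1) for r in inputs])
lam = np.array([np.linalg.solve(B, out.reshape(-1)) for out in outputs])

print(np.round(lam, 3))   # row j holds the expansion coefficients of E(rho_j)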
See also
Quantum discord
Quantum process
References
Quantum mechanics
Tomography | Quantum tomography | [
"Physics"
] | 4,340 | [
"Theoretical physics",
"Quantum mechanics"
] |
2,050,671 | https://en.wikipedia.org/wiki/Managed%20retreat | Managed retreat involves the purposeful, coordinated movement of people and buildings away from risks. This may involve the movement of a person, infrastructure (e.g., building or road), or community. It can occur in response to a variety of hazards such as flood, wildfire, or drought. Politicians, insurers, and residents are increasingly paying attention to managed retreat from low-lying coastal areas because of the threat of sea level rise due to climate change. Trends in climate change predict substantial sea level rises worldwide, causing damage to human infrastructure through coastal erosion and putting communities at risk of severe coastal flooding.
The type of managed retreat proposed depends on the location and type of natural hazard, and on local policies and practices for managed retreat. In the United Kingdom, managed realignment through removal of flood defences is often a response to sea-level rise exacerbated by local subsidence. In the United States, managed retreat often occurs through voluntary acquisition and demolition or relocation of at-risk properties by government. In the Global South, relocation may occur through government programs. Some low-lying countries, facing inundation due to sea-level rise, are planning for the relocation of their populations, such as Kiribati planning for "Migration with Dignity".
Managed realignment
In the United Kingdom, the main reason for managed realignment is to improve coastal stability, essentially replacing artificial 'hard' coastal defences with natural 'soft' coastal landforms. According to University of Southampton researchers Matthew M. Linham and Robert J. Nicholls, "one of the biggest drawbacks of managed realignment is that the option requires land to be yielded to the sea." One of its benefits is that it can help protect land further inland by creating natural spaces that act as buffers to absorb water or dampen the force of waves.
Managed realignment has also been used to mitigate for loss of intertidal habitat. Although land reclamation has been an important factor for salt marsh loss in the UK in the past, the majority of current salt marsh loss in the UK is believed to be due to erosion. This erosion may involve coastal squeeze, where protective sea walls prevent the landward migration of salt marsh in response to sea level rise when sediment supply is limited. Salt marshes are protected under the EU Habitats Directive as well as providing habitat for a number of species protected by the Birds Directive (see Natura 2000). Following this guidance, the UK's biodiversity action plan aims to prevent net losses to the area of salt marsh present in 1992. It is, therefore, a legal requirement that all losses in marsh area must be compensated by replacement habitat with equivalent biological characteristics. This equates to the need to restore approximately 1.4 km2 of salt marsh habitat per year in the UK. One of the major reasons cited for the slow pace of current salt marsh restoration in the UK is the uncertainty associated with the practice (Foresight).
There are no agreed protocols on the monitoring of managed realignment sites and, consequently, very few of the sites are being monitored consistently and effectively. Due to the low levels of monitoring, there is little evidence on which to base future managed realignment projects. This has led to the results of managed realignment schemes being extremely unpredictable.
Relocation programs
Managed retreat in the form of relocation has been used in inland and coastal areas in response to severe flooding and hurricanes. In the United States, this often takes the form of "buyout" programs, in which government acquires and relocates or demolishes at-risk properties. In some cases, individual homes are purchased after disasters. In other cases, such as Odanah and Soldiers Grove, Wisconsin, or Valmeyer, Illinois, or Isle de Jean Charles, Louisiana the entire community has relocated.
Managed retreat can be very controversial. In Del Mar, California, residents brought a lawsuit to stop a managed retreat program over worries that reduced home values, higher insurance costs, and restricted home expansion have been effects of the policy. Some areas included in managed retreat are above sea level and are recommended based primarily on estimated engineering costs and on studies financed by the California Coastal Commission itself.
Despite the controversy, as the costs of climate change adaptation increase, more communities are beginning to consider managed retreat. One such community is Marina, California, adjacent to Monterey Bay. Marina's general acceptance of managed retreat became the subject of a Los Angeles Times feature article, published in 2020.
Realignment examples
In the UK, the first managed retreat site was an area at Northey Island in Essex, flooded in 1991. It was followed by larger sites at Tollesbury and Orplands (1995), Freiston Shore (2001), and Abbott's Hall Farm (2002) at Great Wigborough in the Blackwater Estuary, one of the largest managed retreat schemes in Europe, covering land on the north side of the estuary, as well as a number of others. The programme was started by the Essex Wildlife Trust (EWT), which owns Abbott's Hall Farm. The trust made five breaches in the original old sea wall to allow the held-back sea to flood through and create salt marshland. Over time the marshland reverted to its original state before cultivation, providing excellent bird habitat and breeding grounds.
Forced retreat under climate change
Since 2010, the New Zealand Coastal Policy Statement, a policy under the Resource Management Act of 1991, has required the government to conduct managed retreats.
As a result of two climate change related landslides in New Zealand in 2005, the Whakatane District Council began to plan for climate-related migration to the Matata township over the next decade. The vast majority of residents accepted the need to relocate and did so with council assistance and compensation but one resident has rejected both the process and the need to move and is now the neighbourhood's sole remaining occupant. NIWA coastal hazards expert Rob Bell says the issue of retreat is primarily socio-political rather than technocratic.
See also
(an involuntary, forced case)
(a hypothetical extreme case in science fiction)
References
External links
Board on Environmental Studies and Toxicology, 2001. Compensating for Wetland Losses Under the Clean Water Act
The UK Environment Agency’s Managed Realignment Electronic Platform
The Online Managed Realignment Guide (OMReG): A website designed to act as a 'collecting point' for information about coastal Managed Realignment and Regulated Tidal Exchange projects in the UK and Northern Europe
Managed Coastal Retreat: A Legal Handbook on Shifting Development Away from Vulnerable Areas
Animation explaining basic principle of Managed Realignment
Geomorphology
Coastal engineering
Ecology | Managed retreat | [
"Engineering",
"Biology"
] | 1,335 | [
"Coastal engineering",
"Civil engineering",
"Ecology"
] |
2,050,894 | https://en.wikipedia.org/wiki/Astringent | An astringent (sometimes called adstringent) is a chemical that shrinks or constricts body tissues. The word derives from the Latin adstringere, which means "to bind fast". Astringency, the dry, puckering or numbing mouthfeel caused by the tannins in unripe fruits, lets the fruit mature by deterring eating. Tannins, being a kind of polyphenol, bind salivary proteins and make them precipitate and aggregate, producing a rough, "sandpapery", or dry sensation in the mouth.
Smoking tobacco is also reported to have an astringent effect.
In a scientific study, astringency was still detectable by subjects who had local anesthesia applied to their taste nerves, but not when both these and the trigeminal nerves were disabled.
Uses
In medicine, astringents cause constriction or contraction of mucous membranes and exposed tissues and are often used internally to reduce discharge of blood serum and mucous secretions. This can happen with a sore throat, hemorrhages, diarrhea, and peptic ulcers. Externally applied astringents, which cause mild coagulation of skin proteins, dry, harden, and protect the skin. People with acne are often advised to use astringents if they have oily skin. Mild astringents relieve such minor skin irritations as those resulting from superficial cuts; allergies; insect bites; anal hemorrhoids; and fungal infections such as athlete's foot. Redness-reducing eye drops contain an astringent. Use of Goulard's Extract has been discontinued due to lead poisoning.
Examples
Some common astringents are alum, acacia, sage, yarrow, witch hazel, bayberry, distilled vinegar, very cold water, and rubbing alcohol. Astringent preparations include silver nitrate, potassium permanganate, zinc oxide, zinc sulfate, Burow's solution, tincture of benzoin, and such vegetable substances as tannic and gallic acids. Balaustines are the red rose-like flowers of the pomegranate, which are very bitter to the taste. In medicine, their dried form has been used as an astringent. Some metal salts and acids have also been used as astringents.
Calamine lotion, witch hazel, and yerba mansa, are astringents, as are the powdered leaves of the myrtle. Ripe fruits and fruit parts including blackthorn (sloe berries), Aronia chokeberry, chokecherry, bird cherry, rhubarb, quince, jabuticaba and persimmon fruits (especially when unripe), banana skins (or unripe bananas), cashew fruits and acorns are astringent. Citrus fruits, like lemons, are somewhat astringent. The tannins in some teas, coffee, and red grape wines like Cabernet Sauvignon and Merlot produce mild astringency. Astringency is used in classifications of white wine.
References
External links
Drugs
Gustation | Astringent | [
"Chemistry"
] | 671 | [
"Pharmacology",
"Chemicals in medicine",
"Drugs",
"Products of chemical industry"
] |
2,051,037 | https://en.wikipedia.org/wiki/Environmental%20biotechnology | Environmental biotechnology is biotechnology that is applied to and used to study the natural environment. Environmental biotechnology could also imply that one try to harness biological process for commercial uses and exploitation. The International Society for Environmental Biotechnology defines environmental biotechnology as "the development, use and regulation of biological systems for remediation of contaminated environments (land, air, water), and for environment-friendly processes (green manufacturing technologies and sustainable development)".
Environmental biotechnology can simply be described as "the optimal use of nature, in the form of plants, animals, bacteria, fungi and algae, to produce renewable energy, food and nutrients in a synergistic integrated cycle of profit making processes where the waste of each process becomes the feedstock for another process".
Significance for agriculture, food security, climate change mitigation and adaptation and the MDGs
The IAASTD has called for the advancement of small-scale agro-ecological farming systems and technology in order to achieve food security, climate change mitigation, climate change adaptation and the realisation of the Millennium Development Goals. Environmental biotechnology has been shown to play a significant role in agroecology in the form of zero waste agriculture and most significantly through the operation of over 15 million biogas digesters worldwide.
Significance towards industrial biotechnology
Consider, for example, the effluent of a starch plant that has mixed with a local water body such as a lake or pond. Large deposits of starch accumulate that, with a few exceptions, are not easily degraded by microorganisms. Microorganisms from the polluted site are screened for genomic changes that allow them to degrade or utilize the starch better than other microbes of the same genus. The modified genes are then identified. These genes are cloned into industrially significant microorganisms and used in economically important processes, for example in the pharmaceutical industry and in fermentation.
Similar situations arise in the case of marine oil spills that require cleanup: microbes isolated from oil-rich environments such as oil wells and oil transfer pipelines have been found to have the potential to degrade oil or use it as an energy source. They can thus serve as a remedy for oil spills.
Microbes isolated from pesticide-contaminated soils may be capable of utilizing pesticides as an energy source; when mixed with bio-fertilizers, they could serve as insurance against increased pesticide toxicity in agricultural settings.
On the other hand, these newly introduced microorganisms could create an imbalance in the environment concerned. The mutual harmony in which the organisms of that particular environment existed may be altered, so extreme care must be taken not to disturb the ecological relationships that already exist there; weighing both the benefits and the disadvantages paves the way for an improved practice of environmental biotechnology.
Applications and Implications
Humans have long been manipulating genetic material, through breeding and, more recently, through modern genetic modification, to optimize crop yield and other traits. There can also be unexpected, negative health and environmental outcomes. Environmental biotechnology is about the balance between the applications that provide these benefits and the implications of manipulating genetic material. Textbooks address both the applications and the implications. Environmental engineering texts addressing sewage treatment and biological principles are often now considered to be environmental biotechnology texts. These generally address the applications of biotechnologies, whereas the implications of these technologies are less often addressed, usually in books concerned with potential impacts and even catastrophic events.
See also
Agricultural biotechnology
Microbial ecology
Molecular Biotechnology
References
External links
International Society for Environmental Biotechnology
Biotechnology
Environmental science | Environmental biotechnology | [
"Biology",
"Environmental_science"
] | 721 | [
"nan",
"Biotechnology"
] |
21,400,224 | https://en.wikipedia.org/wiki/Rossby-gravity%20waves | Rossby-gravity waves are equatorially trapped waves (much like Kelvin waves), meaning that they rapidly decay as their distance increases away from the equator (so long as the Brunt–Vaisala frequency does not remain constant). These waves have the same trapping scale as Kelvin waves, more commonly known as the equatorial Rossby deformation radius. They always carry energy eastward, but their 'crests' and 'troughs' may propagate westward if their periods are long enough.
Derivation
The eastward speed of propagation of these waves can be derived for an inviscid slowly moving layer of fluid of uniform depth H. Because the Coriolis parameter (f = 2Ω sin(θ), where Ω is the angular velocity of the earth, 7.2921 × 10⁻⁵ rad/s, and θ is latitude) vanishes at 0 degrees latitude (equator), the "equatorial beta plane" approximation must be made. This approximation states that f is approximately equal to βy, where y is the distance from the equator and β is the variation of the Coriolis parameter with latitude, $\beta = \partial f/\partial y$. With the inclusion of this approximation, the primitive equations become (neglecting friction):
the continuity equation (accounting for the effects of horizontal convergence and divergence and written with geopotential height):
$\frac{\partial \phi}{\partial t} + gH\left(\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y}\right) = 0;$
the U-momentum equation (zonal wind component):
$\frac{\partial u}{\partial t} - \beta y\,v = -\frac{\partial \phi}{\partial x};$
the V-momentum equation (meridional wind component):
$\frac{\partial v}{\partial t} + \beta y\,u = -\frac{\partial \phi}{\partial y}.$
These three equations can be separated and solved using solutions in the form of zonally propagating waves, which are analogous to exponential solutions with a dependence on x and t and the inclusion of structure functions that vary in the y-direction:
$(u, v, \phi) = \left(\hat{u}(y),\,\hat{v}(y),\,\hat{\phi}(y)\right) e^{i(kx - \omega t)}.$
Once the frequency relation is formulated in terms of ω, the angular frequency, the problem can be solved with three distinct solutions. These three solutions correspond to the equatorially trapped gravity wave, the equatorially trapped Rossby wave and the mixed Rossby-gravity wave (which has some of the characteristics of the former two) . Equatorial gravity waves can be either westward- or eastward-propagating, and correspond to n=1 (same as for the equatorially trapped Rossby wave) on a dispersion relation diagram ("w-k" diagram). At n = 0 on a dispersion relation diagram, the mixed Rossby-gravity waves can be found where for large, positive zonal wave numbers (+k), the solution behaves like a gravity wave; but for large, negative zonal wave numbers (−k), the solution appears to be a Rossby wave (hence the term Rossby-gravity waves). As mentioned earlier, the group velocity (or energy packet/dispersion) is always directed toward the east with a maximum for short waves (gravity waves).
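In the standard shallow-water theory on the equatorial beta plane (Matsuno's treatment, quoted here as background since the article's own formula is not reproduced above), the frequency relation takes the form ω²/c² − k² − βk/ω = (2n+1)β/c, with c = √(gH). The sketch below rewrites this as a cubic in ω and solves it numerically for the n = 0 case; the parameter values (c = 50 m/s, β = 2.3 × 10⁻¹¹ m⁻¹ s⁻¹) are illustrative assumptions, not values taken from the text.

import numpy as np

beta = 2.3e-11   # variation of the Coriolis parameter df/dy, 1/(m s) (illustrative)
c = 50.0         # shallow-water wave speed sqrt(g*H), m/s (illustrative)
n = 0            # meridional mode number of the mixed Rossby-gravity wave

def omega_roots(k):
    # Frequency relation rewritten as the cubic
    #   w**3 - (c**2 * k**2 + (2n + 1) * beta * c) * w - c**2 * beta * k = 0
    coeffs = [1.0, 0.0, -(c**2 * k**2 + (2 * n + 1) * beta * c), -(c**2) * beta * k]
    return np.sort(np.roots(coeffs).real)   # all three roots are real here

for k in (-2e-6, -5e-7, 5e-7, 2e-6):        # zonal wavenumbers, 1/m
    # For n = 0 the root w = -c*k is unphysical (not equatorially trapped) and is
    # discarded; the remaining roots trace the n = 0 curve, which behaves like a
    # gravity wave for large positive k and like a Rossby wave for large negative k.
    physical = [w for w in omega_roots(k) if not np.isclose(w, -c * k)]
    print(f"k = {k:+.1e} 1/m -> omega roots = {physical} (1/s)")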
Vertically propagating Rossby-gravity waves
As previously stated, the mixed Rossby-gravity waves are equatorially trapped waves unless the buoyancy frequency remains constant, introducing an additional vertical wave number to complement the zonal wave number and angular frequency. If this Brunt–Vaisala frequency does not change, then these waves become vertically propagating solutions. On a typical "m,k" dispersion diagram, the group velocity (energy) would be directed at right angles to the n = 0 (mixed Rossby-gravity waves) and n = 1 (gravity or Rossby waves) curves and would increase in the direction of increasing angular frequency. Typical group velocities for each component are the following: 1 cm/s for gravity waves and 2 mm/s for planetary (Rossby) waves.
These vertically propagating mixed Rossby-gravity waves were first observed in the stratosphere as westward-propagating mixed waves by M. Yanai. They had the following characteristics: 4–5 days, horizontal wavenumbers of 4 (four waves circling the earth, corresponding to wavelengths of 10,000 km), vertical wavelengths of 4–8 km, and upward group velocity. Similarly, westward-propagating mixed waves were also found in the Atlantic Ocean by Weisberg et al. (1979) with periods of 31 days, horizontal wavelengths of 1200 km, vertical wavelengths of 1 km, and downward group velocity. Also, the vertically propagating gravity wave component was found in the stratosphere with periods of 35 hours, horizontal wavelengths of 2400 km, and vertical wavelengths of 5 km.
See also
Rossby wave
Equatorial Rossby wave
References
Physical oceanography
Gravity waves | Rossby-gravity waves | [
"Physics",
"Chemistry"
] | 918 | [
"Gravity waves",
"Applied and interdisciplinary physics",
"Physical oceanography",
"Fluid dynamics"
] |
21,402,632 | https://en.wikipedia.org/wiki/Electroencephalography | Electroencephalography (EEG)
is a method to record an electrogram of the spontaneous electrical activity of the brain. The biosignals detected by EEG have been shown to represent the postsynaptic potentials of pyramidal neurons in the neocortex and allocortex. It is typically non-invasive, with the EEG electrodes placed along the scalp (commonly called "scalp EEG") using the International 10–20 system, or variations of it. Electrocorticography, involving surgical placement of electrodes, is sometimes called "intracranial EEG". Clinical interpretation of EEG recordings is most often performed by visual inspection of the tracing or quantitative EEG analysis.
Voltage fluctuations measured by the EEG bioamplifier and electrodes allow the evaluation of normal brain activity. As the electrical activity monitored by EEG originates in neurons in the underlying brain tissue, the recordings made by the electrodes on the surface of the scalp vary in accordance with their orientation and distance to the source of the activity. Furthermore, the value recorded is distorted by intermediary tissues and bones, which act in a manner akin to resistors and capacitors in an electrical circuit. This means that not all neurons will contribute equally to an EEG signal, with an EEG predominately reflecting the activity of cortical neurons near the electrodes on the scalp. Deep structures within the brain further away from the electrodes will not contribute directly to an EEG; these include the base of the cortical gyrus, mesial walls of the major lobes, hippocampus, thalamus, and brain stem.
A healthy human EEG will show certain patterns of activity that correlate with how awake a person is. The range of frequencies one observes is between 1 and 30 Hz, and amplitudes vary between 20 and 100 μV. The observed frequencies are subdivided into various groups: alpha (8–13 Hz), beta (13–30 Hz), delta (0.5–4 Hz), and theta (4–7 Hz). Alpha waves are observed when a person is in a state of relaxed wakefulness and are mostly prominent over the parietal and occipital sites. During intense mental activity, beta waves are more prominent in frontal areas as well as other regions. If a relaxed person is told to open their eyes, one observes alpha activity decreasing and an increase in beta activity. Theta and delta waves are not generally seen in wakefulness; if they are, it is a sign of brain dysfunction.
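As an illustration of how these frequency bands are quantified in practice, the sketch below estimates the power spectral density of a single channel with Welch's method (SciPy) and integrates it over the delta, theta, alpha, and beta bands. The synthetic 10 Hz signal and the 256 Hz sampling rate are assumptions standing in for a real recording.

import numpy as np
from scipy.signal import welch

fs = 256                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)               # 10 s of data
# Synthetic "relaxed wakefulness" channel: dominant 10 Hz alpha plus noise, in microvolts
eeg = 40 * np.sin(2 * np.pi * 10 * t) + 10 * np.random.randn(t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)   # PSD in uV^2/Hz

bands = {"delta": (0.5, 4), "theta": (4, 7), "alpha": (8, 13), "beta": (13, 30)}
df = freqs[1] - freqs[0]
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = np.sum(psd[mask]) * df               # band power in uV^2
    print(f"{name:>5}: {power:8.1f} uV^2")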
EEG can detect abnormal electrical discharges such as sharp waves, spikes, or spike-and-wave complexes, as observable in people with epilepsy; thus, it is often used to inform medical diagnosis. EEG can detect the onset and spatio-temporal (location and time) evolution of seizures and the presence of status epilepticus. It is also used to help diagnose sleep disorders, depth of anesthesia, coma, encephalopathies, cerebral hypoxia after cardiac arrest, and brain death. EEG used to be a first-line method of diagnosis for tumors, stroke, and other focal brain disorders, but this use has decreased with the advent of high-resolution anatomical imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT). Despite its limited spatial resolution, EEG continues to be a valuable tool for research and diagnosis. It is one of the few mobile techniques available and offers millisecond-range temporal resolution, which is not possible with CT, PET, or MRI.
Derivatives of the EEG technique include evoked potentials (EP), which involves averaging the EEG activity time-locked to the presentation of a stimulus of some sort (visual, somatosensory, or auditory). Event-related potentials (ERPs) refer to averaged EEG responses that are time-locked to more complex processing of stimuli; this technique is used in cognitive science, cognitive psychology, and psychophysiological research.
Uses
Epilepsy
EEG is the gold standard diagnostic procedure to confirm epilepsy. The sensitivity of a routine EEG to detect interictal epileptiform discharges at epilepsy centers has been reported to be in the range of 29–55%. Given the low to moderate sensitivity, a routine EEG (typically with a duration of 20–30 minutes) can be normal in people that have epilepsy. When an EEG shows interictal epileptiform discharges (e.g. sharp waves, spikes, spike-and-wave, etc.) it is confirmatory of epilepsy in nearly all cases (high specificity), however up to 3.5% of the general population may have epileptiform abnormalities in an EEG without ever having had a seizure (low false positive rate) or with a very low risk of developing epilepsy in the future.
When a routine EEG is normal and there is a high suspicion or need to confirm epilepsy, it may be repeated or performed with a longer duration in the epilepsy monitoring unit (EMU) or at home with an ambulatory EEG. In addition, there are activating maneuvers such as photic stimulation, hyperventilation and sleep deprivation that can increase the diagnostic yield of the EEG.
Epilepsy Monitoring Unit (EMU)
At times, a routine EEG is not sufficient to establish the diagnosis or determine the best course of action in terms of treatment. In this case, attempts may be made to record an EEG while a seizure is occurring. This is known as an ictal recording, as opposed to an interictal recording, which refers to the EEG recording between seizures. To obtain an ictal recording, a prolonged EEG is typically performed accompanied by a time-synchronized video and audio recording. This can be done either as an outpatient (at home) or during a hospital admission, preferably to an Epilepsy Monitoring Unit (EMU) with nurses and other personnel trained in the care of patients with seizures. Outpatient ambulatory video EEGs typically last one to three days. An admission to an Epilepsy Monitoring Unit typically lasts several days but may last for a week or longer. While in the hospital, seizure medications are usually withdrawn to increase the odds that a seizure will occur during admission. For reasons of safety, medications are not withdrawn during an EEG outside of the hospital. Ambulatory video EEGs, therefore, have the advantage of convenience and are less expensive than a hospital admission, but they also have the disadvantage of a decreased probability of recording a clinical event.
Epilepsy monitoring is often considered when patients continue having events despite being on anti-seizure medications or if there is concern that the patient's events have an alternate diagnosis, e.g., psychogenic non-epileptic seizures, syncope (fainting), sub-cortical movement disorders, migraine variants, stroke, etc. In cases of epileptic seizures, continuous EEG monitoring helps to characterize seizures and localize/lateralize the region of the brain from which a seizure originates. This can help identify appropriate non-medication treatment options. In clinical use, EEG traces are visually analyzed by neurologists to look at various features. Increasingly, quantitative analysis of EEG is being used in conjunction with visual analysis. Quantitative analysis displays like power spectrum analysis, alpha-delta ratio, amplitude integrated EEG, and spike detection can help quickly identify segments of EEG that need close visual analysis or, in some cases, be used as surrogates for quick identification of seizures in long-term recordings.
Other brain disorders
An EEG might also be helpful for diagnosing or treating the following disorders:
Brain tumor
Brain damage from head injury
Brain dysfunction that can have a variety of causes (encephalopathy)
Inflammation of the brain (encephalitis)
Stroke
Sleep disorders
It can also:
distinguish epileptic seizures from other types of spells, such as psychogenic non-epileptic seizures, syncope (fainting), sub-cortical movement disorders and migraine variants
differentiate "organic" encephalopathy or delirium from primary psychiatric syndromes such as catatonia
serve as an adjunct test of brain death in comatose patients
prognosticate in comatose patients (in certain instances) or in newborns with brain injury from various causes around the time of birth
determine whether to wean anti-epileptic medications.
Intensive Care Unit (ICU)
EEG can also be used in intensive care units for brain function monitoring to monitor for non-convulsive seizures/non-convulsive status epilepticus, to monitor the effect of sedative/anesthesia in patients in medically induced coma (for treatment of refractory seizures or increased intracranial pressure), and to monitor for secondary brain damage in conditions such as subarachnoid hemorrhage (currently a research method).
In cases where significant brain injury is suspected, e.g., after cardiac arrest, EEG can provide some prognostic information.
If a patient with epilepsy is being considered for resective surgery to treat epilepsy, it is often necessary to localize the focus (source) of the epileptic brain activity with a resolution greater than what is provided by scalp EEG. In these cases, neurosurgeons typically implant strips and grids of electrodes or penetrating depth electrodes under the dura mater, through either a craniotomy or a burr hole. The recording of these signals is referred to as electrocorticography (ECoG), subdural EEG (sdEEG), intracranial EEG (icEEG), or stereotactic EEG (sEEG). The signal recorded from ECoG is on a different scale of activity than the brain activity recorded from scalp EEG. Low-voltage, high-frequency components that cannot be seen easily (or at all) in scalp EEG can be seen clearly in ECoG. Further, smaller electrodes (which cover a smaller parcel of brain surface) allow for better spatial resolution to narrow down the areas critical for seizure onset and propagation. Some clinical sites record data from penetrating microelectrodes.
Home ambulatory EEG
Sometimes it is more convenient or clinically necessary to perform ambulatory EEG recordings in the home of the person being tested. These studies typically have a duration of 24–72 hours.
Research use
EEG and the related study of ERPs are used extensively in neuroscience, cognitive science, cognitive psychology, neurolinguistics, and psychophysiological research, as well as to study human functions such as swallowing. EEG techniques used in research are not sufficiently standardised for clinical use, and many ERP studies fail to report all of the necessary processing steps for data collection and reduction, limiting the reproducibility and replicability of many studies. Based on a 2024 systematic literature review and meta-analysis commissioned by the Patient-Centered Outcomes Research Institute (PCORI), EEG scans cannot be used reliably to assist in making a clinical diagnosis of ADHD. However, EEG continues to be used in research on mental disabilities, such as auditory processing disorder (APD), ADD, and ADHD. EEGs have also been studied for their utility in detecting neurophysiological changes in the brain after concussion; however, at this time there are no advanced imaging techniques that can be used clinically to diagnose or monitor recovery from concussion.
Advantages
Several other methods to study brain function exist, including functional magnetic resonance imaging (fMRI), positron emission tomography (PET), magnetoencephalography (MEG), nuclear magnetic resonance spectroscopy (NMR or MRS), electrocorticography (ECoG), single-photon emission computed tomography (SPECT), near-infrared spectroscopy (NIRS), and event-related optical signal (EROS). Despite the relatively poor spatial sensitivity of EEG, the "one-dimensional signals from localised peripheral regions on the head make it attractive for its simplistic fidelity and has allowed high clinical and basic research throughput". Thus, EEG possesses some advantages over some of those other techniques:
Hardware costs are significantly lower than those of most other techniques
EEG is less constrained by the limited availability of technologists to provide immediate care in high-traffic hospitals.
EEG only requires a quiet room and briefcase-size equipment, whereas fMRI, SPECT, PET, MRS, or MEG require bulky and immobile equipment. For example, MEG requires equipment consisting of liquid helium-cooled detectors that can be used only in magnetically shielded rooms, altogether costing upwards of several million dollars; and fMRI requires the use of a 1-ton magnet in, again, a shielded room.
EEG can readily have a high temporal resolution (although sub-millisecond resolution generates less meaningful data), because the two to 32 data streams generated by that number of electrodes are easily stored and processed, whereas 3D spatial technologies provide thousands or millions of times as many input data streams, and are thus limited by hardware and software. EEG is commonly recorded at sampling rates between 250 and 2000 Hz in clinical and research settings.
EEG is relatively tolerant of subject movement, unlike most other neuroimaging techniques. There even exist methods for minimizing, and even eliminating movement artifacts in EEG data
EEG is silent, which allows for better study of the responses to auditory stimuli.
EEG does not aggravate claustrophobia, unlike fMRI, PET, MRS, SPECT, and sometimes MEG
EEG does not involve exposure to high-intensity (>1 Tesla) magnetic fields, as in some of the other techniques, especially MRI and MRS. These can cause a variety of undesirable issues with the data, and also prohibit use of these techniques with participants that have metal implants in their body, such as metal-containing pacemakers
EEG does not involve exposure to radioligands, unlike positron emission tomography.
ERP studies can be conducted with relatively simple paradigms, compared with, for example, block-design fMRI studies
Relatively non-invasive, in contrast to electrocorticography, which requires electrodes to be placed on the actual surface of the brain.
EEG also has some characteristics that compare favorably with behavioral testing:
EEG can detect covert processing (i.e., processing that does not require a response)
EEG can be used in subjects who are incapable of making a motor response
EEG is a method widely used in the study of sport performance, valued for its portability and lightweight design
Some ERP components can be detected even when the subject is not attending to the stimuli
Unlike other means of studying reaction time, ERPs can elucidate stages of processing (rather than just the result)
the simplicity of EEG readily provides for tracking of brain changes during different phases of life. EEG sleep analysis can indicate significant aspects of the timing of brain development, including evaluating adolescent brain maturation.
In EEG there is a better understanding of what signal is measured as compared to other research techniques, e.g. the BOLD response in MRI.
Disadvantages
Low spatial resolution on the scalp. fMRI, for example, can directly display areas of the brain that are active, while EEG requires intense interpretation just to hypothesize what areas are activated by a particular response.
Depending on the orientation and location of the dipole causing an EEG change, there may be a false localization due to the inverse problem.
EEG poorly measures neural activity that occurs below the upper layers of the brain (the cortex).
Unlike PET and MRS, EEG cannot identify specific locations in the brain at which various neurotransmitters, drugs, etc. can be found.
Often takes a long time to connect a subject to EEG, as it requires precise placement of dozens of electrodes around the head and the use of various gels, saline solutions, and/or pastes to maintain good conductivity, and a cap is used to keep them in place. While the length of time differs dependent on the specific EEG device used, as a general rule it takes considerably less time to prepare a subject for MEG, fMRI, MRS, and SPECT.
Signal-to-noise ratio is poor, so sophisticated data analysis and relatively large numbers of subjects are needed to extract useful information from EEG.
EEGs are not currently very compatible with individuals who have coarser and/or textured hair. Even protective styles can pose issues during testing. Researchers are currently trying to build better options for patients and technicians alike. Furthermore, researchers are starting to implement more culturally informed data collection practices to help reduce racial biases in EEG research.
With other neuroimaging techniques
Simultaneous EEG recordings and fMRI scans have been obtained successfully, though recording both at the same time effectively requires that several technical difficulties be overcome, such as the presence of ballistocardiographic artifact, MRI pulse artifact and the induction of electrical currents in EEG wires that move within the strong magnetic fields of the MRI. While challenging, these have been successfully overcome in a number of studies.
MRIs produce detailed images by generating strong magnetic fields that may exert potentially harmful displacement forces and torques. These fields can produce potentially harmful radio-frequency heating and create image artifacts that render images useless. Due to these potential risks, only certain medical devices can be used in an MR environment.
Similarly, simultaneous recordings with MEG and EEG have also been conducted, which has several advantages over using either technique alone:
EEG requires accurate information about certain aspects of the skull that can only be estimated, such as skull radius, and conductivities of various skull locations. MEG does not have this issue, and a simultaneous analysis allows this to be corrected for.
MEG and EEG both detect activity below the surface of the cortex very poorly, and like EEG, the level of error increases with the depth below the surface of the cortex one attempts to examine. However, the errors are very different between the techniques, and combining them thus allows for correction of some of this noise.
MEG has access to virtually no sources of brain activity below a few centimetres under the cortex. EEG, on the other hand, can receive signals from greater depth, albeit with a high degree of noise. Combining the two makes it easier to determine what in the EEG signal comes from the surface (since MEG is very accurate in examining signals from the surface of the brain), and what comes from deeper in the brain, thus allowing for analysis of deeper brain signals than either EEG or MEG on its own.
Recently, a combined EEG/MEG (EMEG) approach has been investigated for the purpose of source reconstruction in epilepsy diagnosis.
EEG has also been combined with positron emission tomography. This provides the advantage of allowing researchers to see what EEG signals are associated with different drug actions in the brain.
Recent studies using machine learning techniques, such as neural networks with statistical temporal features extracted from frontal lobe EEG brainwave data, have shown high levels of success in classifying mental states (Relaxed, Neutral, Concentrating), mental emotional states (Negative, Neutral, Positive) and thalamocortical dysrhythmia.
Mechanisms
The brain's electrical charge is maintained by billions of neurons. Neurons are electrically charged (or "polarized") by membrane transport proteins that pump ions across their membranes. Neurons are constantly exchanging ions with the extracellular milieu, for example to maintain resting potential and to propagate action potentials. Ions of similar charge repel each other, and when many ions are pushed out of many neurons at the same time, they can push their neighbours, who push their neighbours, and so on, in a wave. This process is known as volume conduction. When the wave of ions reaches the electrodes on the scalp, they can push or pull electrons on the metal in the electrodes. Since metal conducts the push and pull of electrons easily, the difference in push or pull voltages between any two electrodes can be measured by a voltmeter. Recording these voltages over time gives us the EEG.
The electric potential generated by an individual neuron is far too small to be picked up by EEG or MEG. EEG activity therefore always reflects the summation of the synchronous activity of thousands or millions of neurons that have similar spatial orientation. If the cells do not have similar spatial orientation, their ions do not line up and create waves to be detected. Pyramidal neurons of the cortex are thought to produce the most EEG signal because they are well-aligned and fire together. Because voltage field gradients fall off with the square of distance, activity from deep sources is more difficult to detect than currents near the skull.
Scalp EEG activity shows oscillations at a variety of frequencies. Several of these oscillations have characteristic frequency ranges, spatial distributions and are associated with different states of brain functioning (e.g., waking and the various sleep stages). These oscillations represent synchronized activity over a network of neurons. The neuronal networks underlying some of these oscillations are understood (e.g., the thalamocortical resonance underlying sleep spindles), while many others are not (e.g., the system that generates the posterior basic rhythm). Research that measures both EEG and neuron spiking finds the relationship between the two is complex, with a combination of EEG power in the gamma band and phase in the delta band relating most strongly to neuron spike activity.
Method
In conventional scalp EEG, the recording is obtained by placing electrodes on the scalp with a conductive gel or paste, usually after preparing the scalp area by light abrasion to reduce impedance due to dead skin cells. Many systems typically use electrodes, each of which is attached to an individual wire. Some systems use caps or nets into which electrodes are embedded; this is particularly common when high-density arrays of electrodes are needed.
Electrode locations and names are specified by the International 10–20 system for most clinical and research applications (except when high-density arrays are used). This system ensures that the naming of electrodes is consistent across laboratories. In most clinical applications, 19 recording electrodes (plus ground and system reference) are used. A smaller number of electrodes are typically used when recording EEG from neonates. Additional electrodes can be added to the standard set-up when a clinical or research application demands increased spatial resolution for a particular area of the brain. High-density arrays (typically via cap or net) can contain up to 256 electrodes more-or-less evenly spaced around the scalp.
Each electrode is connected to one input of a differential amplifier (one amplifier per pair of electrodes); a common system reference electrode is connected to the other input of each differential amplifier. These amplifiers amplify the voltage between the active electrode and the reference (typically 1,000–100,000 times, or 60–100 dB of voltage gain). In analog EEG, the signal is then filtered (next paragraph), and the EEG signal is output as the deflection of pens as paper passes underneath. Most EEG systems these days, however, are digital, and the amplified signal is digitized via an analog-to-digital converter, after being passed through an anti-aliasing filter. Analog-to-digital sampling typically occurs at 256–512 Hz in clinical scalp EEG; sampling rates of up to 20 kHz are used in some research applications.
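The decibel figures follow directly from the voltage ratios quoted above, since gain in decibels for a voltage ratio is

```latex
G_{\mathrm{dB}} = 20\log_{10}\frac{V_\mathrm{out}}{V_\mathrm{in}},
\qquad 20\log_{10}(1{,}000) = 60\ \mathrm{dB},
\qquad 20\log_{10}(100{,}000) = 100\ \mathrm{dB}.
```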
During the recording, a series of activation procedures may be used. These procedures may induce normal or abnormal EEG activity that might not otherwise be seen. These procedures include hyperventilation, photic stimulation (with a strobe light), eye closure, mental activity, sleep and sleep deprivation. During (inpatient) epilepsy monitoring, a patient's typical seizure medications may be withdrawn.
The digital EEG signal is stored electronically and can be filtered for display. Typical settings for the high-pass and low-pass filters are 0.5–1 Hz and 35–70 Hz respectively. The high-pass filter typically filters out slow artifact, such as electrogalvanic signals and movement artifact, whereas the low-pass filter filters out high-frequency artifacts, such as electromyographic signals. An additional notch filter is typically used to remove artifact caused by electrical power lines (60 Hz in the United States and 50 Hz in many other countries).
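A minimal sketch of this display filtering chain, assuming SciPy and a synthetic single-channel recording; the cut-offs and the 60 Hz notch are taken from the figures above.

```python
# Apply the high-pass, low-pass and notch filters described in the text.
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 256.0                                   # typical clinical sampling rate (Hz)
eeg = np.random.randn(int(10 * fs))          # stand-in for one digitized EEG channel

b_hp, a_hp = butter(4, 0.5, btype="highpass", fs=fs)   # remove slow drift and movement artifact
b_lp, a_lp = butter(4, 70.0, btype="lowpass", fs=fs)   # remove high-frequency (EMG) artifact
b_n, a_n = iirnotch(60.0, Q=30.0, fs=fs)               # remove 60 Hz power-line interference

filtered = filtfilt(b_hp, a_hp, eeg)
filtered = filtfilt(b_lp, a_lp, filtered)
filtered = filtfilt(b_n, a_n, filtered)
```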
The EEG signals can be captured with opensource hardware such as OpenBCI and the signal can be processed by freely available EEG software such as EEGLAB or the Neurophysiological Biomarker Toolbox.
As part of an evaluation for epilepsy surgery, it may be necessary to insert electrodes near the surface of the brain, under the surface of the dura mater. This is accomplished via burr hole or craniotomy. This is referred to variously as "electrocorticography (ECoG)", "intracranial EEG (I-EEG)" or "subdural EEG (SD-EEG)". Depth electrodes may also be placed into brain structures, such as the amygdala or hippocampus, which are common epileptic foci and may not be "seen" clearly by scalp EEG. The electrocorticographic signal is processed in the same manner as digital scalp EEG (above), with a couple of caveats. ECoG is typically recorded at higher sampling rates than scalp EEG because of the requirements of the Nyquist theorem – the subdural signal contains a higher predominance of high-frequency components. Also, many of the artifacts that affect scalp EEG do not impact ECoG, and therefore display filtering is often not needed.
A typical adult human EEG signal is about 10 μV to 100 μV in amplitude when measured from the scalp.
Since an EEG voltage signal represents a difference between the voltages at two electrodes, the display of the EEG for the reading encephalographer may be set up in one of several ways. The representation of the EEG channels is referred to as a montage.
Sequential montage: Each channel (i.e., waveform) represents the difference between two adjacent electrodes. The entire montage consists of a series of these channels. For example, the channel "Fp1-F3" represents the difference in voltage between the Fp1 electrode and the F3 electrode. The next channel in the montage, "F3-C3", represents the voltage difference between F3 and C3, and so on through the entire array of electrodes.
Referential montage: Each channel represents the difference between a certain electrode and a designated reference electrode. There is no standard position for this reference; it is, however, at a different position than the "recording" electrodes. Midline positions such as Cz, Oz or Pz are often used as the online reference because they do not amplify the signal in one hemisphere relative to the other. The other popular offline references are:
REST reference: an offline computational reference at a point at infinity where the potential is zero. REST (reference electrode standardization technique) uses the equivalent sources inside the brain, estimated from a set of scalp recordings, as a springboard to transform recordings made against any online or offline non-zero reference (average, linked ears, etc.) into recordings referenced to the standardized zero at infinity.
"linked ears": which is a physical or mathematical average of electrodes attached to both earlobes or mastoids.
Average reference montage: The outputs of all of the amplifiers are summed and averaged, and this averaged signal is used as the common reference for each channel.
Laplacian montage: Each channel represents the difference between an electrode and a weighted average of the surrounding electrodes.
When analog (paper) EEGs are used, the technologist switches between montages during the recording in order to highlight or better characterize certain features of the EEG. With digital EEG, all signals are typically digitized and stored in a particular (usually referential) montage; since any montage can be constructed mathematically from any other, the EEG can be viewed by the electroencephalographer in any display montage that is desired.
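Because any montage can be derived mathematically from a referential recording, re-referencing is a simple array operation. The sketch below, with made-up channel names and data, shows a bipolar (sequential) and an average-reference montage.

```python
# Re-derive montages from a referential recording (channels x samples).
import numpy as np

channels = ["Fp1", "F3", "C3", "P3", "O1"]             # small subset of 10-20 positions
data = np.random.randn(len(channels), 2560)            # synthetic data against a common reference

# Sequential (bipolar) montage: difference between adjacent electrodes, e.g. "Fp1-F3".
bipolar = {f"{a}-{b}": data[i] - data[i + 1]
           for i, (a, b) in enumerate(zip(channels[:-1], channels[1:]))}

# Average-reference montage: subtract the mean of all channels from each channel.
avg_ref = data - data.mean(axis=0, keepdims=True)
```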
The EEG is read by a clinical neurophysiologist or neurologist (depending on local custom and law regarding medical specialities), optimally one who has specific training in the interpretation of EEGs for clinical purposes. This is done by visual inspection of the waveforms, called graphoelements. The use of computer signal processing of the EEG – so-called quantitative electroencephalography – is somewhat controversial when used for clinical purposes (although there are many research uses).
Dry EEG electrodes
In the early 1990s Babak Taheri, at University of California, Davis demonstrated the first single and also multichannel dry active electrode arrays using micro-machining. The single channel dry EEG electrode construction and results were published in 1994. The arrayed electrode was also demonstrated to perform well compared to silver/silver chloride electrodes. The device consisted of four sites of sensors with integrated electronics to reduce noise by impedance matching. The advantages of such electrodes are: (1) no electrolyte used, (2) no skin preparation, (3) significantly reduced sensor size, and (4) compatibility with EEG monitoring systems. The active electrode array is an integrated system made of an array of capacitive sensors with local integrated circuitry housed in a package with batteries to power the circuitry. This level of integration was required to achieve the functional performance obtained by the electrode. The electrode was tested on an electrical test bench and on human subjects in four modalities of EEG activity, namely: (1) spontaneous EEG, (2) sensory event-related potentials, (3) brain stem potentials, and (4) cognitive event-related potentials. The performance of the dry electrode compared favorably with that of the standard wet electrodes in terms of skin preparation, no gel requirements (dry), and higher signal-to-noise ratio.
In 1999 researchers at Case Western Reserve University, in Cleveland, Ohio, led by Hunter Peckham, used a 64-electrode EEG skullcap to return limited hand movements to quadriplegic Jim Jatich. As Jatich concentrated on simple but opposite concepts like up and down, his beta-rhythm EEG output was analysed using software to identify patterns in the noise. A basic pattern was identified and used to control a switch: above-average activity was set to on, below-average to off. As well as enabling Jatich to control a computer cursor, the signals were also used to drive the nerve controllers embedded in his hands, restoring some movement.
In 2018, a functional dry electrode composed of a polydimethylsiloxane elastomer filled with conductive carbon nanofibers was reported. This research was conducted at the U.S. Army Research Laboratory. EEG technology often involves applying a gel to the scalp, which facilitates a strong signal-to-noise ratio; this results in more reproducible and reliable experimental results. Since patients dislike having their hair filled with gel, and the lengthy setup requires trained staff on hand, utilizing EEG outside the laboratory setting can be difficult. Additionally, it has been observed that the performance of wet electrode sensors degrades after a span of hours. Therefore, research has been directed toward developing dry and semi-dry EEG bioelectronic interfaces.
Dry electrode signals depend on mechanical contact, so it can be difficult to obtain a usable signal because of the impedance between the skin and the electrode. Some EEG systems attempt to circumvent this issue by applying a saline solution. Others have a semi-dry nature and release small amounts of gel upon contact with the scalp. Another solution uses spring-loaded pin setups, which may be uncomfortable and can also be dangerous if used in a situation where a patient could bump their head, since the pins could become lodged after an impact.
Currently, headsets are available incorporating dry electrodes with up to 30 channels. Such designs are able to compensate for some of the signal quality degradation related to high impedances by optimizing pre-amplification, shielding and supporting mechanics.
Limitations
EEG has several limitations. Most important is its poor spatial resolution. EEG is most sensitive to a particular set of post-synaptic potentials: those generated in superficial layers of the cortex, on the crests of gyri directly abutting the skull and radial to the skull. Dendrites which are deeper in the cortex, inside sulci, in midline or deep structures (such as the cingulate gyrus or hippocampus), or producing currents that are tangential to the skull, make far less contribution to the EEG signal.
EEG recordings do not directly capture axonal action potentials. An action potential can be accurately represented as a current quadrupole, meaning that the resulting field decreases more rapidly than the ones produced by the current dipole of post-synaptic potentials. In addition, since EEGs represent averages of thousands of neurons, a large population of cells in synchronous activity is necessary to cause a significant deflection on the recordings. Action potentials are very fast and, as a consequence, the chances of field summation are slim. However, neural backpropagation, as a typically longer dendritic current dipole, can be picked up by EEG electrodes and is a reliable indication of the occurrence of neural output.
Not only do EEGs capture dendritic currents almost exclusively as opposed to axonal currents, they also show a preference for activity on populations of parallel dendrites and transmitting current in the same direction at the same time. Pyramidal neurons of cortical layers II/III and V extend apical dendrites to layer I. Currents moving up or down these processes underlie most of the signals produced by electroencephalography.
EEG thus provides information with a large bias in favor of particular neuron types, locations and orientations. So it generally should not be used to make claims about global brain activity. The meninges, cerebrospinal fluid and skull "smear" the EEG signal, obscuring its intracranial source.
It is mathematically impossible to reconstruct a unique intracranial current source for a given EEG signal, as some currents produce potentials that cancel each other out. This is referred to as the inverse problem. However, much work has been done to produce remarkably good estimates of, at least, a localized electric dipole that represents the recorded currents.
EEG vis-à-vis fMRI, fNIRS, fUS and PET
EEG has several strong points as a tool for exploring brain activity. EEGs can detect changes over milliseconds, which is excellent considering an action potential takes approximately 0.5–130 milliseconds to propagate across a single neuron, depending on the type of neuron. Other methods of looking at brain activity, such as PET, fMRI or fUS have time resolution between seconds and minutes. EEG measures the brain's electrical activity directly, while other methods record changes in blood flow (e.g., SPECT, fMRI, fUS) or metabolic activity (e.g., PET, NIRS), which are indirect markers of brain electrical activity.
EEG can be used simultaneously with fMRI or fUS so that high-temporal-resolution data can be recorded at the same time as high-spatial-resolution data, however, since the data derived from each occurs over a different time course, the data sets do not necessarily represent exactly the same brain activity.
There are technical difficulties associated with combining EEG and fMRI including the need to remove the MRI gradient artifact present during MRI acquisition. Furthermore, currents can be induced in moving EEG electrode wires due to the magnetic field of the MRI.
EEG can be used simultaneously with NIRS or fUS without major technical difficulties. There is no influence of these modalities on each other and a combined measurement can give useful information about electrical activity as well as hemodynamics at medium spatial resolution.
EEG vis-à-vis MEG
EEG reflects correlated synaptic activity caused by post-synaptic potentials of cortical neurons. The ionic currents involved in the generation of fast action potentials may not contribute greatly to the averaged field potentials representing the EEG. More specifically, the scalp electrical potentials that produce EEG are generally thought to be caused by the extracellular ionic currents caused by dendritic electrical activity, whereas the fields producing magnetoencephalographic signals are associated with intracellular ionic currents.
Normal activity
The EEG is typically described in terms of (1) rhythmic activity and (2) transients. The rhythmic activity is divided into bands by frequency. To some degree, these frequency bands are a matter of nomenclature (i.e., any rhythmic activity between 8–12 Hz can be described as "alpha"), but these designations arose because rhythmic activity within a certain frequency range was noted to have a certain distribution over the scalp or a certain biological significance. Frequency bands are usually extracted using spectral methods (for instance Welch) as implemented for instance in freely available EEG software such as EEGLAB or the Neurophysiological Biomarker Toolbox.
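A minimal sketch of band-power extraction with Welch's method, assuming SciPy and a synthetic signal; the 8–12 Hz alpha limits follow the convention mentioned above.

```python
# Estimate alpha-band power from one channel using Welch's method.
import numpy as np
from scipy.signal import welch

fs = 256.0
x = np.random.randn(int(60 * fs))                  # one minute of a single EEG channel
freqs, psd = welch(x, fs=fs, nperseg=int(4 * fs))  # 4 s segments give 0.25 Hz resolution

band = (freqs >= 8) & (freqs <= 12)
alpha_power = psd[band].sum() * (freqs[1] - freqs[0])   # integrate the PSD over the alpha band
print("alpha band power:", alpha_power)
```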
Computational processing of the EEG is often named quantitative electroencephalography (qEEG).
Most of the cerebral signal observed in the scalp EEG falls in the range of 1–20 Hz (activity below or above this range is likely to be artifactual, under standard clinical recording techniques). Waveforms are subdivided into bandwidths known as alpha, beta, theta, and delta to signify the majority of the EEG used in clinical practice.
Comparison of EEG bands
The practice of using only whole numbers in the definitions comes from practical considerations in the days when only whole cycles could be counted on paper records. This leads to gaps in the definitions, as seen elsewhere on this page. The theoretical definitions have always been more carefully defined to include all frequencies. Unfortunately there is no agreement in standard reference works on what these ranges should be – values for the upper end of alpha and lower end of beta include 12, 13, 14 and 15. If the threshold is taken as 14 Hz, then the slowest beta wave has about the same duration as the longest spike (70 ms), which makes this the most useful value.
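The 14 Hz argument is just a period calculation:

```latex
T = \frac{1}{f} = \frac{1}{14\ \mathrm{Hz}} \approx 71\ \mathrm{ms},
```

which is roughly the 70 ms duration quoted for the longest spike.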
Wave patterns
Delta is the frequency range up to 4 Hz. It tends to be the highest in amplitude and the slowest of the waves. It is seen normally in adults in slow-wave sleep. It is also seen normally in babies. It may occur focally with subcortical lesions and in general distribution with diffuse lesions, metabolic encephalopathy, hydrocephalus or deep midline lesions. It is usually most prominent frontally in adults (e.g. FIRDA – frontal intermittent rhythmic delta) and posteriorly in children (e.g. OIRDA – occipital intermittent rhythmic delta).
Theta is the frequency range from 4 Hz to 7 Hz. Theta is seen normally in young children. It may be seen in drowsiness or arousal in older children and adults; it can also be seen in meditation. Excess theta for age represents abnormal activity. It can be seen as a focal disturbance in focal subcortical lesions; it can be seen in generalized distribution in diffuse disorder, metabolic encephalopathy, deep midline disorders or some instances of hydrocephalus. In contrast, this range has also been associated with reports of relaxed, meditative, and creative states.
Alpha is the frequency range from 8 Hz to 12 Hz. Hans Berger named the first rhythmic EEG activity he observed the "alpha wave". This was the "posterior basic rhythm" (also called the "posterior dominant rhythm" or the "posterior alpha rhythm"), seen in the posterior regions of the head on both sides, higher in amplitude on the dominant side. It emerges with closing of the eyes and with relaxation, and attenuates with eye opening or mental exertion. The posterior basic rhythm is actually slower than 8 Hz in young children (therefore technically in the theta range).
In addition to the posterior basic rhythm, there are other normal alpha rhythms such as the mu rhythm (alpha activity in the contralateral sensory and motor cortical areas) that emerges when the hands and arms are idle; and the "third rhythm" (alpha activity in the temporal or frontal lobes). Alpha can be abnormal; for example, an EEG that has diffuse alpha occurring in coma and is not responsive to external stimuli is referred to as "alpha coma".
Beta is the frequency range from 13 Hz to about 30 Hz. It is seen usually on both sides in symmetrical distribution and is most evident frontally. Beta activity is closely linked to motor behavior and is generally attenuated during active movements. Low-amplitude beta with multiple and varying frequencies is often associated with active, busy or anxious thinking and active concentration. Rhythmic beta with a dominant set of frequencies is associated with various pathologies, such as Dup15q syndrome, and drug effects, especially benzodiazepines. It may be absent or reduced in areas of cortical damage. It is the dominant rhythm in patients who are alert or anxious or who have their eyes open.
Gamma is the frequency range approximately 30–100 Hz. Gamma rhythms are thought to represent binding of different populations of neurons together into a network for the purpose of carrying out a certain cognitive or motor function.
Mu range is 8–13 Hz and partly overlaps with other frequencies. It reflects the synchronous firing of motor neurons in a resting state. Mu suppression is thought to reflect motor mirror neuron systems, because when an action is observed, the pattern extinguishes, possibly because the normal and mirror neuronal systems "go out of sync" and interfere with one another.
"Ultra-slow" or "near-DC" activity is recorded using DC amplifiers in some research contexts. It is not typically recorded in a clinical context because the signal at these frequencies is susceptible to a number of artifacts.
Some features of the EEG are transient rather than rhythmic. Spikes and sharp waves may represent seizure activity or interictal activity in individuals with epilepsy or a predisposition toward epilepsy. Other transient features are normal: vertex waves and sleep spindles are seen in normal sleep.
There are types of activity that are statistically uncommon, but not associated with dysfunction or disease. These are often referred to as "normal variants". The mu rhythm is an example of a normal variant.
The normal electroencephalogram (EEG) varies by age. The prenatal EEG and neonatal EEG is quite different from the adult EEG. Fetuses in the third trimester and newborns display two common brain activity patterns: "discontinuous" and "trace alternant." "Discontinuous" electrical activity refers to sharp bursts of electrical activity followed by low frequency waves. "Trace alternant" electrical activity describes sharp bursts followed by short high amplitude intervals and usually indicates quiet sleep in newborns. The EEG in childhood generally has slower frequency oscillations than the adult EEG.
The normal EEG also varies depending on state. The EEG is used along with other measurements (EOG, EMG) to define sleep stages in polysomnography. Stage I sleep (equivalent to drowsiness in some systems) appears on the EEG as drop-out of the posterior basic rhythm. There can be an increase in theta frequencies. Santamaria and Chiappa cataloged a number of the variety of patterns associated with drowsiness. Stage II sleep is characterized by sleep spindles – transient runs of rhythmic activity in the 12–14 Hz range (sometimes referred to as the "sigma" band) that have a frontal-central maximum. Most of the activity in Stage II is in the 3–6 Hz range. Stage III and IV sleep are defined by the presence of delta frequencies and are often referred to collectively as "slow-wave sleep". Stages I–IV comprise non-REM (or "NREM") sleep. The EEG in REM (rapid eye movement) sleep appears somewhat similar to the awake EEG.
EEG under general anesthesia depends on the type of anesthetic employed. With halogenated anesthetics, such as halothane or intravenous agents, such as propofol, a rapid (alpha or low beta), nonreactive EEG pattern is seen over most of the scalp, especially anteriorly; in some older terminology this was known as a WAR (widespread anterior rapid) pattern, contrasted with a WAIS (widespread slow) pattern associated with high doses of opiates. Anesthetic effects on EEG signals are beginning to be understood at the level of drug actions on different kinds of synapses and the circuits that allow synchronized neuronal activity.
Artifacts
EEG is an extremely useful technique for studying brain activity, but the measured signal is always contaminated by artifacts, which can impact the analysis of the data. An artifact is any measured signal that does not originate within the brain. Although multiple algorithms exist for the removal of artifacts, the problem of how to deal with them remains an open question. Artifacts can come from issues relating to the instrument, such as faulty electrodes, line noise or high electrode impedance, or from the physiology of the subject being recorded. The latter include eye blinks and movements, cardiac activity and muscle activity, and these types of artifacts are more complicated to remove. Artifacts may bias the visual interpretation of EEG data, as some may mimic cognitive activity that could affect diagnoses of problems such as Alzheimer's disease or sleep disorders. As such, the removal of artifacts in EEG data used for practical applications is of the utmost importance.
Artifact removal
It is important to be able to distinguish artifacts from genuine brain activity in order to prevent incorrect interpretations of EEG data. The general approaches for the removal of artifacts from the data are prevention, rejection and cancellation. The goal of any approach is to develop methodology capable of identifying and removing artifacts without affecting the quality of the EEG signal. As artifact sources are quite different, the majority of researchers focus on developing algorithms that will identify and remove a single type of noise in the signal. Simple filtering using a notch filter is commonly employed to reject components at the 50/60 Hz power-line frequency. However, such simple filters are not an appropriate choice for dealing with all artifacts, because for some the frequencies overlap with the EEG frequencies.
Regression algorithms have a moderate computation cost and are simple. They represented the most popular correction method up until the mid-1990s, when they were replaced by "blind source separation" methods. Regression algorithms work on the premise that the artifact is captured by one or more reference channels. Subtracting these reference channels from the other contaminated channels, in either the time or frequency domain, by estimating the impact of the reference channels on the other channels, corrects those channels for the artifact. Although the requirement for reference channels ultimately led to this class of algorithm being replaced, they still represent the benchmark against which modern algorithms are evaluated.
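A minimal sketch of this idea with synthetic data: estimate, by least squares, how strongly a reference (e.g. EOG) channel leaks into each EEG channel, then subtract that contribution. The mixing coefficients and array shapes are illustrative assumptions.

```python
# Regression-based artifact correction against a single reference channel.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 5000
eog = rng.normal(size=n_samples)                    # reference channel carrying the artifact
clean = rng.normal(size=(4, n_samples))             # four "true" EEG channels
leak = np.array([[0.8], [0.5], [0.2], [0.1]])       # how strongly the artifact leaks into each channel
eeg = clean + leak @ eog[None, :]                   # contaminated recording

coeffs = (eeg @ eog) / (eog @ eog)                  # least-squares leakage estimate per channel
corrected = eeg - coeffs[:, None] * eog[None, :]    # subtract the estimated artifact contribution
```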
Blind source separation (BSS) algorithms employed to remove artifacts include principal component analysis (PCA) and independent component analysis (ICA), and several algorithms in this class have been successful at tackling most physiological artifacts.
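A minimal ICA-based sketch, assuming scikit-learn and synthetic data. Which independent component corresponds to the artifact normally has to be identified by inspection or by correlation with a reference channel; zeroing component 0 here is purely an assumption.

```python
# Blind source separation with FastICA: remove one component judged artifactual.
import numpy as np
from sklearn.decomposition import FastICA

eeg = np.random.randn(8, 5000)                     # (channels x samples), stand-in data

ica = FastICA(n_components=8, random_state=0)
sources = ica.fit_transform(eeg.T)                 # (samples x components)
sources[:, 0] = 0.0                                # zero out the component judged to be artifact
cleaned = ica.inverse_transform(sources).T         # back to (channels x samples)
```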
Physiological artifacts
Ocular artifacts
Ocular artifacts affect the EEG signal significantly. Eye movements change the electric field surrounding the eyes, which distorts the electric field over the scalp and therefore the signal recorded there. A difference of opinion exists among researchers, with some arguing that ocular artifacts are, or may reasonably be described as, a single generator, whilst others argue it is important to understand the potentially complicated mechanisms. Three potential mechanisms have been proposed to explain the ocular artifact.
The first is corneal-retinal dipole movement, which holds that an electric dipole is formed between the cornea and retina, as the former is positively and the latter negatively charged. When the eye moves, so does this dipole, which affects the electric field over the scalp; this is the most standard view. The second mechanism is retinal dipole movement, which is similar to the first but differs in arguing that the potential difference, and hence the dipole, lies across the retina, with the cornea having little effect. The third mechanism is eyelid movement. It is known that there is a change in voltage around the eyes when the eyelid moves, even if the eyeball does not. It is thought that the eyelid can be described as a sliding potential source, and that the impact of blinking on the recorded EEG differs from that of eye movement.
Eyelid fluttering artifacts of a characteristic type were previously called Kappa rhythm (or Kappa waves). They are usually seen in the prefrontal leads, that is, just over the eyes, and are sometimes seen with mental activity. They are usually in the theta (4–7 Hz) or alpha (7–14 Hz) range. They were so named because they were believed to originate from the brain. Later study revealed they were generated by rapid fluttering of the eyelids, sometimes so minute that it was difficult to see. They are in fact noise in the EEG reading, and should not technically be called a rhythm or wave. Therefore, current usage in electroencephalography refers to the phenomenon as an eyelid fluttering artifact, rather than a Kappa rhythm (or wave).
The propagation of the ocular artifact is influenced by multiple factors, including the properties of the subject's skull, neuronal tissues and skin, but the signal may be approximated as being inversely proportional to the square of the distance from the eyes. The electrooculogram (EOG) consists of a series of electrodes measuring voltage changes close to the eye and is the most common tool for dealing with the eye movement artifact in the EEG signal.
Muscular artifacts
Another source of artifacts is muscle movement across the body. This class of artifact is usually recorded by all electrodes on the scalp and is due to myogenic (muscle-generated) activity. The origin of these artifacts has no single location, as they arise from functionally independent muscle groups, meaning the characteristics of the artifact are not constant. The observed patterns due to muscular artifacts change depending on subject sex, the particular muscle tissue, and its degree of contraction. The frequency range for muscular artifacts is wide and overlaps with every classic EEG rhythm. However, most of the power is concentrated in the lower range of the observed frequencies of 20 to 300 Hz, making the gamma band particularly susceptible to muscular artifacts. Some muscle artifacts may have activity with a frequency as low as 2 Hz, so the delta and theta bands may also be affected by muscle activity. Muscular artifacts may impact sleep studies, as unconscious bruxism (grinding of teeth) or snoring can seriously impact the quality of the recorded EEG. In addition, recordings made of epilepsy patients may be significantly impacted by the existence of muscular artifacts.
Cardiac artifacts
The potential due to cardiac activity introduces electrocardiograph (ECG) errors in the EEG. Artifacts arising due to cardiac activity may be removed with the help of an ECG reference signal.
Other physiological artifacts
Glossokinetic artifacts are caused by the potential difference between the base and the tip of the tongue. Minor tongue movements can contaminate the EEG, especially in parkinsonian and tremor disorders.
Environmental artifacts
In addition to artifacts generated by the body, many artifacts originate from outside the body. Movement by the patient, or even just settling of the electrodes, may cause electrode pops, spikes originating from a momentary change in the impedance of a given electrode. Poor grounding of the EEG electrodes can cause significant 50 or 60 Hz artifact, depending on the local power system's frequency. A third source of possible interference can be the presence of an IV drip; such devices can cause rhythmic, fast, low-voltage bursts, which may be confused for spikes.
Abnormal activity
Abnormal activity can broadly be separated into epileptiform and non-epileptiform activity. It can also be separated into focal or diffuse.
Focal epileptiform discharges represent fast, synchronous potentials in a large number of neurons in a somewhat discrete area of the brain. These can occur as interictal activity, between seizures, and represent an area of cortical irritability that may be predisposed to producing epileptic seizures. Interictal discharges are not wholly reliable for determining whether a patient has epilepsy nor where his/her seizure might originate. (See focal epilepsy.)
Generalized epileptiform discharges often have an anterior maximum, but these are seen synchronously throughout the entire brain. They are strongly suggestive of a generalized epilepsy.
Focal non-epileptiform abnormal activity may occur over areas of the brain where there is focal damage of the cortex or white matter. It often consists of an increase in slow frequency rhythms and/or a loss of normal higher frequency rhythms. It may also appear as focal or unilateral decrease in amplitude of the EEG signal.
Diffuse non-epileptiform abnormal activity may manifest as diffuse abnormally slow rhythms or bilateral slowing of normal rhythms, such as the PBR.
Intracortical encephalogram electrodes and subdural electrodes can be used in tandem to discriminate artifact from epileptiform activity and other severe neurological events.
More advanced measures of abnormal EEG signals have also recently received attention as possible biomarkers for different disorders such as Alzheimer's disease.
Remote communication
Systems for decoding imagined speech from EEG have applications such as in brain–computer interfaces.
EEG diagnostics
The Department of Defense (DoD), the Department of Veterans Affairs (VA), and the U.S. Army Research Laboratory (ARL) collaborated on EEG diagnostics in order to detect mild to moderate traumatic brain injury (mTBI) in combat soldiers. Between 2000 and 2012, 75 percent of brain injuries in U.S. military operations were classified as mTBI. In response, the DoD pursued new technologies capable of rapid, accurate, non-invasive, and field-capable detection of mTBI to address this injury.
Combat personnel often develop PTSD and mTBI together. Both conditions present with altered low-frequency brain-wave oscillations: PTSD patients show decreases in low-frequency oscillations, whereas mTBI is linked to increased low-frequency oscillations. Effective EEG diagnostics can help doctors accurately identify conditions and appropriately treat injuries in order to mitigate long-term effects.
Traditionally, clinical evaluation of EEGs involved visual inspection. Instead of a visual assessment of brain-wave oscillation topography, quantitative electroencephalography (qEEG), a computerized algorithmic methodology, analyzes a specific region of the brain and transforms the data into a meaningful "power spectrum" of the area. Accurately differentiating between mTBI and PTSD can significantly increase positive recovery outcomes for patients, especially since long-term changes in neural communication can persist after an initial mTBI incident.
Another common measurement made from EEG data is that of complexity measures such as Lempel-Ziv complexity, fractal dimension, and spectral flatness, which are associated with particular pathologies or pathology stages.
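Two of these measures are simple to compute. The sketch below gives one common formulation of spectral flatness (geometric over arithmetic mean of the power spectrum) and a simplified dictionary-parse estimate of Lempel-Ziv complexity on a median-binarized signal; exact definitions vary between papers, so these formulations are illustrative assumptions.

```python
# Illustrative complexity measures for a single EEG channel.
import numpy as np
from scipy.signal import welch

def spectral_flatness(x, fs):
    """Geometric mean / arithmetic mean of the power spectrum (1.0 for white noise)."""
    _, psd = welch(x, fs=fs)
    psd = psd[psd > 0]
    return np.exp(np.mean(np.log(psd))) / np.mean(psd)

def lempel_ziv_complexity(x):
    """Number of distinct phrases in a simple dictionary parse of the binarized signal."""
    med = np.median(x)
    s = "".join("1" if v > med else "0" for v in x)
    phrases, current = set(), ""
    for ch in s:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases)

x = np.random.randn(2048)                       # stand-in for one EEG epoch
print(spectral_flatness(x, fs=256.0), lempel_ziv_complexity(x))
```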
Economics
Inexpensive EEG devices exist for the low-cost research and consumer markets. Recently, a few companies have miniaturized medical grade EEG technology to create versions accessible to the general public. Some of these companies have built commercial EEG devices retailing for less than US$100.
In 2004 OpenEEG released its ModularEEG as open source hardware. Compatible open source software includes a game for balancing a ball.
In 2007 NeuroSky released the first affordable consumer based EEG along with the game NeuroBoy. This was also the first large scale EEG device to use dry sensor technology.
In 2008 OCZ Technology developed a device for use in video games relying primarily on electromyography.
In 2008 the Final Fantasy developer Square Enix announced that it was partnering with NeuroSky to create a game, Judecca.
In 2009 Mattel partnered with NeuroSky to release the Mindflex, a game that used an EEG to steer a ball through an obstacle course. It is by far the best-selling consumer-based EEG to date.
In 2009 Uncle Milton Industries partnered with NeuroSky to release the Star Wars Force Trainer, a game designed to create the illusion of possessing the Force.
In 2010, NeuroSky added a blink and electromyography function to the MindSet.
In 2011, NeuroSky released the MindWave, an EEG device designed for educational purposes and games. The MindWave won the Guinness Book of World Records award for "Heaviest machine moved using a brain control interface".
In 2012, a Japanese gadget project, neurowear, released Necomimi: a headset with motorized cat ears. The headset is a NeuroSky MindWave unit with two motors on the headband where a cat's ears might be. Slipcovers shaped like cat ears sit over the motors so that as the device registers emotional states the ears move to relate. For example, when relaxed, the ears fall to the sides and perk up when excited again.
In 2014, OpenBCI released an eponymous open source brain-computer interface after a successful Kickstarter campaign in 2013. The board, later renamed "Cyton", has 8 channels, expandable to 16 with the Daisy module. It supports EEG, EKG, and EMG. The Cyton board is based on the Texas Instruments ADS1299 IC and an Arduino or PIC microcontroller, and initially cost $399 before increasing in price to $999. It uses standard metal cup electrodes and conductive paste.
In 2015, Mind Solutions Inc released the smallest consumer BCI to date, the NeuroSync. This device functions as a dry sensor at a size no larger than a Bluetooth ear piece.
In 2015, the China-based company Macrotellect released BrainLink Pro and BrainLink Lite, consumer-grade EEG wearables providing 20 brain-fitness enhancement apps on the Apple and Android app stores.
In 2021, BioSerenity released the Neuronaute and Icecap, a single-use disposable EEG headset that allows recording with quality equivalent to traditional cup electrodes.
Future research
The EEG has been used for many purposes besides the conventional uses of clinical diagnosis and conventional cognitive neuroscience. An early use was during World War II by the U.S. Army Air Corps to screen out pilots in danger of having seizures; long-term EEG recordings in epilepsy patients are still used today for seizure prediction. Neurofeedback remains an important extension, and in its most advanced form is also attempted as the basis of brain computer interfaces. The EEG is also used quite extensively in the field of neuromarketing.
The EEG is altered by drugs that affect brain functions, the chemicals that are the basis for psychopharmacology. Berger's early experiments recorded the effects of drugs on EEG. The science of pharmaco-electroencephalography has developed methods to identify substances that systematically alter brain functions for therapeutic and recreational use.
Honda is attempting to develop a system to enable an operator to control its Asimo robot using EEG, a technology it eventually hopes to incorporate into its automobiles.
EEGs have been used as evidence in criminal trials in the Indian state of Maharashtra. Brain Electrical Oscillation Signature Profiling (BEOS), an EEG technique, was used in the trial of State of Maharashtra v. Sharma to show Sharma remembered using arsenic to poison her ex-fiancé, although the reliability and scientific basis of BEOS is disputed.
A lot of research is currently being carried out to make EEG devices smaller, more portable and easier to use. So-called "wearable EEG" is based upon creating low-power wireless collection electronics and "dry" electrodes which do not require a conductive gel. Wearable EEG aims to provide small devices which are present only on the head, such as ear-EEG, and which can record EEG for days, weeks, or months at a time. Such prolonged and easy-to-use monitoring could make a step change in the diagnosis of chronic conditions such as epilepsy, and greatly improve the end-user acceptance of BCI systems. Research is also being carried out on identifying specific solutions to increase the battery lifetime of wearable EEG devices through the use of data reduction approaches.
In research, currently EEG is often used in combination with machine learning. EEG data are pre-processed then passed on to machine learning algorithms. These algorithms are then trained to recognize different diseases like schizophrenia, epilepsy or dementia. Furthermore, they are increasingly used to study seizure detection. By using machine learning, the data can be analyzed automatically. In the long run this research is intended to build algorithms that support physicians in their clinical practice and to provide further insights into diseases. In this vein, complexity measures of EEG data are often calculated, such as Lempel-Ziv complexity, fractal dimension, and spectral flatness. It has been shown that combining or multiplying such measures can reveal previously hidden information in EEG data.
EEG signals from musical performers were used to create instant compositions and one CD by the Brainwave Music Project, run at the Computer Music Center at Columbia University by Brad Garton and Dave Soldier. Similarly, an hour-long recording of the brainwaves of Ann Druyan was included on the Voyager Golden Record, launched on the Voyager probes in 1977, in case any extraterrestrial intelligence could decode her thoughts, which included what it was like to fall in love.
History
In 1875, Richard Caton (1842–1926), a physician practicing in Liverpool, presented his findings about electrical phenomena of the exposed cerebral hemispheres of rabbits and monkeys in the British Medical Journal. In 1890, Polish physiologist Adolf Beck published an investigation of spontaneous electrical activity of the brain of rabbits and dogs that included rhythmic oscillations altered by light. Beck started experiments on the electrical brain activity of animals, placing electrodes directly on the surface of the brain to test for sensory stimulation. His observation of fluctuating brain activity led him to conclude that the brain produces waves of electrical activity.
In 1912, Ukrainian physiologist Vladimir Vladimirovich Pravdich-Neminsky published the first animal EEG and the evoked potential of a mammal (the dog). In 1914, Napoleon Cybulski and Jelenska-Macieszyna photographed EEG recordings of experimentally induced seizures.
German physiologist and psychiatrist Hans Berger (1873–1941) recorded the first human EEG in 1924. Expanding on work previously conducted on animals by Richard Caton and others, Berger also invented the electroencephalograph (giving the device its name), an invention described "as one of the most surprising, remarkable, and momentous developments in the history of clinical neurology". His discoveries were first confirmed by British scientists Edgar Douglas Adrian and B. H. C. Matthews in 1934 and developed by them.
In 1934, Fisher and Lowenbach first demonstrated epileptiform spikes. In 1935, Gibbs, Davis and Lennox described interictal spike waves and the three cycles/s pattern of clinical absence seizures, which began the field of clinical electroencephalography. Subsequently, in 1936 Gibbs and Jasper reported the interictal spike as the focal signature of epilepsy. The same year, the first EEG laboratory opened at Massachusetts General Hospital.
Franklin Offner (1911–1999), professor of biophysics at Northwestern University developed a prototype of the EEG that incorporated a piezoelectric inkwriter called a Crystograph (the whole device was typically known as the Offner Dynograph).
In 1947, the American EEG Society was founded and the first International EEG congress was held. In 1953 Aserinsky and Kleitman described REM sleep.
In the 1950s, William Grey Walter developed an adjunct to EEG called EEG topography, which allowed for the mapping of electrical activity across the surface of the brain. This enjoyed a brief period of popularity in the 1980s and seemed especially promising for psychiatry. It was never accepted by neurologists and remains primarily a research tool.
An electroencephalograph system manufactured by Beckman Instruments was used on at least one of the Project Gemini manned spaceflights (1965–1966) to monitor the brain waves of astronauts on the flight. It was one of many Beckman Instruments specialized for and used by NASA.
The first instance of the use of EEG to control a physical object, a robot, was in 1988. The robot would follow a line or stop depending on the alpha activity of the subject. If the subject relaxed and closed their eyes, thereby increasing alpha activity, the robot would move; opening the eyes, thus decreasing alpha activity, would cause the robot to stop on the trajectory.
See also
References
Further reading
External links
Diagnostic neurology
Electrophysiology
Neurophysiology
Neurotechnology
Electrodiagnosis
Brain–computer interface
Mathematics in medicine | Electroencephalography | [
"Mathematics"
] | 13,592 | [
"Applied mathematics",
"Mathematics in medicine"
] |
21,406,108 | https://en.wikipedia.org/wiki/Zirconocene%20dichloride | Zirconocene dichloride is an organozirconium compound composed of a zirconium central atom, with two cyclopentadienyl and two chloro ligands. It is a colourless diamagnetic solid that is somewhat stable in air.
Preparation and structure
Zirconocene dichloride may be prepared from zirconium(IV) chloride-tetrahydrofuran complex and sodium cyclopentadienide:
ZrCl4(THF)2 + 2 NaCp → Cp2ZrCl2 + 2 NaCl + 2 THF
The closely related compound Cp2ZrBr2 was first described by Birmingham and Wilkinson.
The compound is a bent metallocene: the Cp rings are not parallel, the average Cp(centroid)-M-Cp angle being 128°. The Cl-Zr-Cl angle of 97.1° is wider than in niobocene dichloride (85.6°) and molybdocene dichloride (82°). This trend helped to establish the orientation of the HOMO in this class of complex.
Reactions
Schwartz's reagent
Zirconocene dichloride reacts with lithium aluminium hydride to give Cp2ZrHCl, Schwartz's reagent:
(C5H5)2ZrCl2 + 1/4 LiAlH4 → (C5H5)2ZrHCl + 1/4 LiAlCl4
Since lithium aluminium hydride is a strong reductant, some over-reduction occurs to give the dihydrido complex, Cp2ZrH2; treatment of the product mixture with methylene chloride converts it to Schwartz's reagent.
Negishi reagent
Zirconocene dichloride can also be used to prepare the Negishi reagent, Cp2Zr(η2-butene), which can be used as a source of Cp2Zr in oxidative cyclisation reactions. The Negishi reagent is prepared by treating zirconocene dichloride with n-BuLi, leading to replacement of the two chloride ligands with butyl groups. The dibutyl compound subsequently undergoes beta-hydride elimination to give one η2-butene ligand, with the other butyl ligand promptly lost as butane via reductive elimination.
Carboalumination
Zirconocene dichloride catalyzes the carboalumination of alkynes by trimethylaluminium to give an (alkenyl)dimethylalane, a versatile intermediate for further cross-coupling reactions in the synthesis of stereodefined trisubstituted olefins. For example, α-farnesene can be prepared as a single stereoisomer by carboalumination of 1-buten-3-yne with trimethylaluminium, followed by palladium-catalyzed coupling of the resultant vinylaluminium reagent with geranyl chloride.
The use of trimethylaluminium for this reaction results in exclusive formation of the syn-addition product and, for terminal alkynes, the anti-Markovnikov addition with high selectivity (generally > 10:1). Unfortunately, the use of higher alkylaluminium reagents results in lowered yield, due to the formation of the hydroalumination product (via β-hydrogen elimination of the alkylzirconium intermediate) as side product, and only moderate regioselectivities. Thus, practical applications of the carboalumination reaction are generally confined to the case of methylalumination. Although this is a major limitation, the synthetic utility of this process remains significant, due to the frequent appearance of methyl-substituted alkenes in natural products.
Zr-walk
Zirconocene dichloride together with a reducing reagent can form the zirconocene hydride catalyst in situ, which allows positional isomerization (the so-called "Zr-walk") and ends with cleavage of allylic bonds. Not only have individual steps under stoichiometric conditions been described with the Schwartz and Negishi reagents, but catalytic applications in alkene hydroalumination, radical cyclisation, polybutadiene cleavage, and reductive removal of functional groups have also been reported.
References
Further reading
Organozirconium compounds
Metallocenes
Chloro complexes
Cyclopentadienyl complexes
Zirconium(IV) compounds | Zirconocene dichloride | [
"Chemistry"
] | 967 | [
"Organometallic chemistry",
"Cyclopentadienyl complexes"
] |
21,406,724 | https://en.wikipedia.org/wiki/U-Key | A U-Key is an implementation of the MIFARE RFID chip, encased in a plastic key-style housing. It is used as a prepayment system on vending machines and for some self-service diving air compressors in Switzerland; the keys are most likely made by Selecta.
References
Smart cards
Radio-frequency identification
Automatic identification and data capture | U-Key | [
"Technology",
"Engineering"
] | 78 | [
"Radio-frequency identification",
"Radio electronics",
"Data",
"Automatic identification and data capture"
] |
21,407,524 | https://en.wikipedia.org/wiki/Bark%20spud | The bark spud (also known as a peeling iron, peeler bar, peeling spud, or abbreviated to spud) is an implement which is used to remove bark from felled timber.
Construction
Most bark spuds have steel heads and wooden handles, typically hickory or ash. The head is curved, sometimes in one direction with a single cutting edge, and sometimes more dish shaped and sharpened on three sides.
Method of use
In use, the sharpened edge is slid between the bark and the wood, removing the bark from the tree in a number of strips. It is especially useful at removing bark without damaging the wood below the bark.
Similar tools
A coa de jima is a similar specialized tool for harvesting agaves. The drawknife also removes bark from felled trees.
References
Mechanical hand tools
Green woodworking tools
Forestry | Bark spud | [
"Physics"
] | 170 | [
"Mechanics",
"Mechanical hand tools"
] |
20,361,941 | https://en.wikipedia.org/wiki/Iopydol | Iopydol is a pharmaceutical drug used as a radiocontrast agent in X-ray imaging.
See also
Iodinated contrast
References
Iodoarenes
Vicinal diols
4-Pyridones
Radiocontrast agents | Iopydol | [
"Chemistry"
] | 50 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
20,361,954 | https://en.wikipedia.org/wiki/International%20Institute%20of%20Earthquake%20Engineering%20and%20Seismology | International Institute of Earthquake Engineering and Seismology (IIEES) (Persian: پژوهشگاه بینالمللی زلزلهشناسی و مهندسی زلزله) is an international earthquake engineering and seismology institute based in Iran. It was established as a result of the 24th UNESCO General Conference Resolution DR/250 under Iranian government approval in 1989. It was founded as an independent institute within the Iran's Ministry of Science, Research and Technology.
On its establishment, the IIEES drew up a seismic code in an attempt to improve the infrastructural response to earthquakes and seismic activity in the country. Its primary objective is to reduce the risk of seismic activity on buildings and roads and provide mitigation measures both in Iran and the region.
The institute, along with several universities and institutes in Iran, is responsible for research and education in this field, conducting research and providing education in seismotectonic studies, seismology and earthquake engineering. In addition, it conducts research and education in risk management and in developing effective earthquake response strategies.
The IIEES is composed of the following research centers: Seismology, Geotechnical Earthquake Engineering, Structural Earthquake Engineering, and Risk Management; the National Center for Earthquake Prediction; and a Graduate School and Public Education and Information Division.
See also
2003 Bam earthquake
Bahram Akasheh
Earthquake Engineering Research Institute
National Center for Research on Earthquake Engineering
References
External links
Official site (Persian)
Official site (English)
Earthquake and seismic risk mitigation
Earthquake engineering
Research institutes in Iran
Science and technology in Iran
Scientific organisations based in Iran | International Institute of Earthquake Engineering and Seismology | [
"Engineering"
] | 334 | [
"Structural engineering",
"Earthquake engineering",
"Earthquake and seismic risk mitigation",
"Civil engineering"
] |
20,361,979 | https://en.wikipedia.org/wiki/Adipiodone | Adipiodone (INN, or iodipamide; trade names Cholografin and Biligrafin) is a pharmaceutical drug used as a radiocontrast agent in X-ray imaging. It was introduced in the 1950s.
References
Radiocontrast agents
Iodobenzene derivatives
Benzoic acids
Anilides | Adipiodone | [
"Chemistry"
] | 70 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
20,364,782 | https://en.wikipedia.org/wiki/WEAP | WEAP (the Water Evaluation and Planning system) is a model-building tool for water resource planning and policy analysis that is distributed at no charge to non-profit, academic, and governmental organizations in developing countries.
WEAP can be used to create simulations of water demand, supply, runoff, evapotranspiration, water allocation, infiltration, crop irrigation requirements, instream flow requirements, ecosystem services, groundwater and surface storage, reservoir operations, pollution generation, treatment, discharge, and instream water quality. The simulations can be created under scenarios of varying policy, hydrology, climate, land use, technology, and socio-economic factors. WEAP links to the USGS MODFLOW groundwater flow model and the US EPA QUAL2K surface water quality model.
WEAP was created in 1988 and continues to be developed and supported by the U.S. Center of the Stockholm Environment Institute, a non-profit research institute based at Tufts University in Somerville, Massachusetts. It is used for climate change vulnerability studies and adaptation planning and has been applied by researchers and planners in thousands of organizations worldwide.
Establishing the ‘current accounts’, building scenarios, and evaluating those scenarios against criteria are the main steps in applying WEAP to simulation problems.
References
Sieber, J., WEAP History and Credits, WEAP Website, accessed August 14, 2020.
Matchett, E., et al., "A Framework for Modeling Anthropogenic Impacts on Waterbird Habitats: Addressing future uncertainty in conservation planning," USGS Report, pp. 1–40, , February 2015.
Sánchez-Torres Esqueda, G., et al., "Vulnerability of water resources to climate change scenarios. Impacts on the irrigation districts in the Guayalejo-Tamesí river basin, Tamaulipas, México," Atmósfera, 24 (2011), pp. 141–155, January 2011.
Purkey, D., et al., "Robust analysis of future climate change impacts on water for agriculture and other sectors: a case study in the Sacramento Valley," Climatic Change, (87) 2008, pp 109–122, , March 2008.
Purkey, D., et al., "Integrating a Climate Change Assessment Tool into Stakeholder-Driven Water Management Decision-Making Processes in California," Water Resources Management, 21 (2007), pp. 315–329, , January 2007.
Vogel, R., et al., "Relations Among Storage, Yield and Instream Flow," Water Resources Research, 43 (2007), W05403, , May 2007.
Yates, D., et al., "WEAP21: A Demand-, Priority-, and Preference-Driven Water Planning Model, Parts 1: Model Characteristics", Water International, 30(487-500), , December 2005.
Lévite, H., Sally, H., Cour, J., "Water demand management scenarios in a water-stressed basin in South Africa: application of the WEAP model," Physics and Chemistry of the Earth 28 (2003) pp. 779–786, , 2003.
External links
Stockholm Environment Institute-U.S. Center
Main SEI website
Tufts University
Integrated hydrologic modelling
Hydrology
Hydrology models
Environmental engineering
Physical geography
Scientific simulation software | WEAP | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 677 | [
"Hydrology",
"Biological models",
"Chemical engineering",
"Civil engineering",
"Hydrology models",
"Environmental engineering",
"Environmental modelling"
] |
20,370,216 | https://en.wikipedia.org/wiki/Hydrogen-cooled%20turbo%20generator | A hydrogen-cooled turbo generator is a turbo generator with gaseous hydrogen as a coolant. Hydrogen-cooled turbo generators are designed to provide a low-drag atmosphere and cooling for single-shaft and combined-cycle applications in combination with steam turbines. Because of the high thermal conductivity and other favorable properties of hydrogen gas, this is the most common type in its field today.
History
Based on the air-cooled turbo generator, gaseous hydrogen first went into service as the coolant in a hydrogen-cooled turbo generator in October 1937, at the Dayton Power & Light Co. in Dayton, Ohio.
Design
The use of gaseous hydrogen as a coolant is based on its properties, namely low density, high specific heat, and the highest thermal conductivity (at 0.168 W/(m·K)) of all gases; it is 7 to 10 times better at cooling than air. Another advantage of hydrogen is its easy detection by hydrogen sensors. A hydrogen-cooled generator can be significantly smaller, and therefore less expensive, than an air-cooled one. For stator cooling, water can be used.
Helium, with a thermal conductivity of 0.142 W/(m·K), was considered as a coolant as well; however, its high cost hinders its adoption despite its non-flammability.
Generally, three cooling approaches are used. For generators up to 60 MW, air cooling can be used. Between 60 and 450 MW hydrogen cooling is employed. For the highest power generators, up to 1800 MW, hydrogen and water cooling is used; the rotor is hydrogen-cooled, while the stator windings are made of hollow copper tubes cooled by water circulating through them.
The generators produce high voltage; the choice of voltage depends on the tradeoff between demands of electrical insulation and handling high electric current. For generators up to 40 MVA, the voltage is 6.3 kV; large generators with power above 1000 MW generate voltages up to 27 kV; voltages between 2.3 and 30 kV are used depending on the size of the generator. The generated power is sent to a nearby step-up transformer, where it is converted to the electric power transmission line voltage (typically between 115 and 1200 kV).
To control the centrifugal forces at high rotational speeds, the rotor diameter typically does not exceed 1.25 meters; the required large size of the coils is achieved by their length and so the generator is mounted horizontally. Two-pole machines typically operate at 3000 rpm for 50 Hz and 3600 rpm for 60 Hz systems, half of that for four-pole machines.
The turbogenerator also contains a smaller generator producing direct current excitation power for the rotor coil. Older generators used dynamos and slip rings for DC injection to the rotor, but the moving mechanical contacts were subject to wear. Modern generators have the excitation generator on the same shaft as the turbine and main generator; the diodes needed are located directly on the rotor. The excitation current on larger generators can reach 10 kA. The amount of excitation power ranges between 0.5 and 3% of the generator output power.
The rotor usually contains caps or cage made of nonmagnetic material; its role is to provide a low impedance path for eddy currents which occur when the three phases of the generator are unevenly loaded. In such cases, eddy currents are generated in the rotor, and the resulting Joule heating could in extreme cases destroy the generator.
Hydrogen gas is circulated in a closed loop to remove heat from the active parts then it is cooled by gas-to-water heat exchangers on the stator frame. The working pressure is up to 6 bar.
An on-line thermal conductivity detector (TCD) analyzer is used with three measuring ranges. The first range (80–100% H2) is to monitor the hydrogen purity during normal operation. The second (0–100% H2) and third (0–100% CO2) measuring ranges allow safe opening of the turbines for maintenance.
Hydrogen has very low viscosity, a favorable property for reducing drag losses in the rotor. These losses can be significant due to the rotor's high rotational speed. A reduction in the purity of the hydrogen coolant increases windage losses in the turbine due to the associated increase in viscosity and drag. A drop of only a few percent in hydrogen purity can increase windage losses by hundreds of kilowatts in a large generator. Windage losses also increase heat loss in the generator and increase the problem of dealing with the waste heat.
Operation
The absence of oxygen in the atmosphere within the enclosure significantly reduces damage to the winding insulation from corona discharges; these can be problematic because the generators typically operate at high voltage, often 20 kV.
Seal oil system
The bearings have to be leak-tight. A hermetic seal, usually a liquid seal, is employed; a turbine oil at pressure higher than the hydrogen inside is typically used. A metal, e.g. brass, ring is pressed by springs onto the generator shaft, the oil is forced under pressure between the ring and the shaft; part of the oil flows into the hydrogen side of the generator, another part to the air side. The oil entrains a small amount of air; as the oil is recirculated, some of the air is carried over into the generator. This causes a gradual air contamination buildup and requires maintaining hydrogen purity.
Scavenging systems are used for this purpose; gas (mixture of entrained air and hydrogen, released from the oil) is collected in the holding tank for the sealing oil, and released into the atmosphere; the hydrogen losses have to be replenished, either from gas cylinders or from on-site hydrogen generators. Degradation of bearings leads to higher oil leaks, which increases the amount of air transferred into the generator. Increased oil consumption can be detected by a flow meter for each bearing.
Drying
Presence of water in hydrogen has to be avoided, as it causes deterioration of hydrogen's cooling properties, corrosion of the generator parts, and arcing in the high voltage windings, and reduces the lifetime of the generator. A desiccant-based dryer is usually included in the gas circulation loop, typically with a moisture probe in the dryer's outlet, sometimes also in its inlet. Presence of moisture is also indirect evidence of air leaking into the generator compartment. Another option is optimizing the hydrogen scavenging, so the dew point is kept within the generator's specifications. The water is usually introduced into the generator atmosphere as an impurity in the turbine oil; another route is via leaks in water cooling systems.
Purging
The flammability limits (4–75% of hydrogen in air at normal temperature, wider at high temperatures), its autoignition temperature of 571 °C, its very low minimum ignition energy, and its tendency to form explosive mixtures with air require provisions to be made for maintaining the hydrogen content within the generator above the upper or below the lower flammability limit at all times, together with other hydrogen safety measures. When the generator is filled with hydrogen, overpressure has to be maintained, as inflow of air into the generator could cause a dangerous explosion in its confined space.
The generator enclosure is purged before opening it for maintenance, and before refilling the generator with hydrogen. During shutdown, hydrogen is purged by an inert gas, and then the inert gas is replaced by air; the opposite sequence is used before startup. Carbon dioxide or nitrogen can be used for this purpose, as they do not form combustible mixtures with hydrogen and are inexpensive. Gas purity sensors are used to indicate the end of the purging cycle, which shortens the startup and shutdown times and reduces consumption of the purging gas.
Carbon dioxide is favored because, due to the very high density difference, it easily displaces the hydrogen. The carbon dioxide is admitted to the bottom of the generator first, pushing the hydrogen out at the top. Then air is admitted to the top, pushing the carbon dioxide out at the bottom. Purging is best done with the generator stopped. If it is done during slow-speed unloaded rotation, the generator fans will mix the gases, greatly increasing the time required to achieve purity.
Make-up
Hydrogen is often produced on-site using a plant consisting of an array of electrolysis cells, compressors and storage vessels. This reduces the need for storing compressed hydrogen and allows storage in lower pressure tanks, with associated safety benefits and lower costs. Some gaseous hydrogen has to be kept for refilling the generator but it can be also generated on-site.
As technology evolves, materials not susceptible to hydrogen embrittlement are used in generator designs; not doing so can lead to equipment failure through hydrogen embrittlement.
See also
Timeline of hydrogen technologies
Precooled jet engine
References
External links
The turbogenerator – A continuous engineering challenge
Electric power
Hydrogen technologies
Turbo generators
de:Turbogenerator#Kühlung | Hydrogen-cooled turbo generator | [
"Physics",
"Engineering"
] | 1,855 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
28,926,782 | https://en.wikipedia.org/wiki/Project%20X%20%28accelerator%29 | Project-X is a proposed high-intensity proton accelerator complex which is to be built at the Fermi National Accelerator Laboratory. It is planned to produce beams of different energies up to 8 GeV for precision experiments involving kaons and muons. The complex can also be used to create a high-intensity neutrino beam for neutrino oscillation experiments such as NOνA and the Long Baseline Neutrino Experiment. Project-X will be based on superconducting RF components such as those developed for the International Linear Collider.
Immediate plans are for cost-effective upgrades in proton luminosity referred to as the Proton Improvement Plan-II. More future-thinking proposals see Project X as laying the groundwork for a possible Muon collider at the Fermilab site.
Background
Project X is a long range plan to bring accelerators at Fermilab campus to new frontiers. The plan for accelerators focuses on two of the three frontiers that are long-term plans of Fermilab. In the intensity frontier, the new high-intensity accelerators will support experiments that require intense particle beams to understand particles such as neutrinos, muons, kaons, and nuclei. In the energy frontier, the accelerators will support the detection of new particles and forces with potential future projects such as a multi-TeV Muon Collider.
Stages
The immediate plan of Project X is to focus on the intensity frontier. The project is broken down into 3 stages. Stage one includes upgrades to existing facilities to support immediate experiments. This stage has translated into work done in the Proton Improvement Plan. Stage two includes delivery of three concurrent beam levels: 2.9 MW at 3 GeV; 50–200 kW at 8 GeV and 2.3 MW at 60–120 GeV. Stage three is to build next generation accelerators as the front end to the energy frontier based on international collaboration in projects such as the Neutrino Factory and Muon Collider.
References
External links
Project X public website
Fermilab | Project X (accelerator) | [
"Physics"
] | 417 | [
"Particle physics stubs",
"Particle physics"
] |
28,928,091 | https://en.wikipedia.org/wiki/Complexity%20function | In computer science, the complexity function of a word or string (a finite or infinite sequence of symbols from some alphabet) is the function that counts the number of distinct factors (substrings of consecutive symbols) of that string. More generally, the complexity function of a formal language (a set of finite strings) counts the number of distinct words of given length.
Complexity function of a word
Let u be a (possibly infinite) sequence of symbols from an alphabet. Define the function
pu(n) of a positive integer n to be the number of different factors (consecutive substrings) of length n from the string u.
For a string u of length at least n over an alphabet of size k we clearly have 1 ≤ pu(n) ≤ k^n,
the bounds being achieved by the constant word and a disjunctive word, for example, the Champernowne word respectively. For infinite words u, we have pu(n) bounded if u is ultimately periodic (a finite, possibly empty, sequence followed by a finite cycle). Conversely, if pu(n) ≤ n for some n, then u is ultimately periodic.
An aperiodic sequence is one which is not ultimately periodic. An aperiodic sequence has strictly increasing complexity function (this is the Morse–Hedlund theorem), so p(n) is at least n+1.
A set S of finite binary words is balanced if for each n the subset Sn of words of length n has the property that the Hamming weight of the words in Sn takes at most two distinct values. A balanced sequence is one for which the set of factors is balanced. A balanced sequence has complexity function at most n+1.
A Sturmian word over a binary alphabet is one with complexity function n + 1. A sequence is Sturmian if and only if it is balanced and aperiodic. An example is the Fibonacci word. More generally, a Sturmian word over an alphabet of size k is one with complexity n+k−1. An Arnoux-Rauzy word over a ternary alphabet has complexity 2n + 1: an example is the Tribonacci word.
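As an illustration (not part of the original article), a minimal Python sketch that counts the distinct factors of a finite prefix; for small n relative to the prefix length this reproduces the complexity n + 1 of the infinite Fibonacci word:

def complexity(word, n):
    """Count the distinct factors (consecutive substrings) of length n in a word."""
    return len({word[i:i + n] for i in range(len(word) - n + 1)})

# Build a long prefix of the Fibonacci word by iterating the morphism 0 -> 01, 1 -> 0.
w = "0"
for _ in range(20):
    w = w.replace("0", "a").replace("1", "0").replace("a", "01")

for n in range(1, 6):
    print(n, complexity(w, n))  # prints n + 1 for each n: 2, 3, 4, 5, 6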
For recurrent words, those in which each factor appears infinitely often, the complexity function almost characterises the set of factors: if s is a recurrent word with the same complexity function as t, then s has the same set of factors as t or δt, where δ denotes the letter doubling morphism a → aa.
Complexity function of a language
Let L be a language over an alphabet and define the function pL(n) of a positive integer n to be the number of different words of length n in L. The complexity function of a word is thus the complexity function of the language consisting of the factors of that word.
The complexity function of a language is less constrained than that of a word. For example, it may be bounded but not eventually constant: the complexity function of the regular language takes values 3 and 4 on odd and even n≥2 respectively. There is an analogue of the Morse–Hedlund theorem: if the complexity of L satisfies pL(n) ≤ n for some n, then pL is bounded and there is a finite language F such that
A polynomial or sparse language is one for which the complexity function p(n) is bounded by a fixed power of n. A regular language which is not polynomial is exponential: there are infinitely many n for which p(n) is greater than k^n for some fixed k > 1.
Related concepts
The topological entropy of an infinite sequence u is defined by Htop(u) = lim_{n→∞} (log pu(n))/(n log k), where the logarithm is taken relative to k, the size of the alphabet.
The limit exists as the logarithm of the complexity function is subadditive. Every real number between 0 and 1 occurs as the topological entropy of some sequence, which may be taken to be uniformly recurrent or even uniquely ergodic.
For x a real number and b an integer ≥ 2, the complexity function of x in base b is the complexity function p(x,b,n) of the sequence of digits of x written in base b.
If x is an irrational number then p(x,b,n) ≥ n+1; if x is rational then p(x,b,n) ≤ C for some constant C depending on x and b. It is conjectured that for algebraic irrational x the complexity is b^n (which would follow if all such numbers were normal) but all that is known in this case is that p grows faster than any linear function of n.
The abelian complexity function pab(n) similarly counts the number of distinct factors of given length n, where now we identify factors that differ only by a permutation of the positions. Clearly pab(n) ≤ p(n). The abelian complexity of a Sturmian sequence satisfies pab(n) = 2.
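A companion sketch (again an illustration, not from the article) for the abelian complexity, counting length-n factors only up to permutation of positions:

from collections import Counter

def abelian_complexity(word, n):
    """Count length-n factors up to permutation of positions, i.e. distinct letter-count vectors."""
    vectors = {tuple(sorted(Counter(word[i:i + n]).items())) for i in range(len(word) - n + 1)}
    return len(vectors)

# On a long prefix w of the Fibonacci word (as built in the earlier sketch),
# abelian_complexity(w, n) returns 2 for every n >= 1, while complexity(w, n) returns n + 1.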
References
Theoretical computer science | Complexity function | [
"Mathematics"
] | 994 | [
"Theoretical computer science",
"Applied mathematics"
] |
28,933,065 | https://en.wikipedia.org/wiki/Geographic%20center%20of%20the%20United%20States | The geographic center of the United States is a point approximately north of Belle Fourche, South Dakota at . It has been regarded as such by the United States Coast and Geodetic Survey and the U.S. National Geodetic Survey (NGS) since the additions of Alaska and Hawaii to the United States in 1959.
Overview
This is distinct from the contiguous geographic center, which has not changed since the 1912 admissions of New Mexico and Arizona to the 48 contiguous United States, and falls near the town of Lebanon, Kansas. This served as the overall geographic center of the United States for 47 years, until the 1959 admissions of Alaska and Hawaii moved the geographic center of the overall United States approximately northwest by north.
While any measurement of the exact center of a land mass will always be imprecise due to changing shorelines and other factors, the NGS coordinates identify the center of the fifty states as an uninhabited parcel of private pastureland approximately east of the cornerpoint where the South Dakota–Wyoming–Montana borders meet. According to the NGS data sheet, the actual marker is "set in an irregular mass of concrete 36 inches below the surface of the ground."
For public commemoration, a nearby proxy marker is located in a park in Belle Fourche, where one will find a flag atop a small concrete slab bearing a United States Coast and Geodetic Survey Reference Marker.
Contiguous United States
The geographic center of the 48 contiguous or conterminous United States, determined in a 1918 survey, is located at , about northwest of the center of Lebanon, Kansas, approximately south of the Kansas–Nebraska border. The determination is accurate to about .
While any measurement of the exact center of a land mass will always be imprecise due to changing shorelines and other factors, the NGS coordinates are recognized in a historical marker in a small park at the intersection of AA Road and K-191. It is accessible by a turn-off from U.S. Route 281.
It is distinct from the geographic center of the 50 United States located at a point northeast of Belle Fourche, South Dakota, reflecting the 1959 additions of the states of Alaska and Hawaii.
In a technical glitch, a farmstead northeast of Potwin, Kansas, became the default geolocation of 600 million IP addresses (due to a lack of fine granularity) when the Massachusetts-based digital mapping company MaxMind changed the putative geographic center of the contiguous United States from to .
Marker
In order to protect the privacy of the private land owner where the point identified by the 1918 survey falls, a proxy marker was erected in 1940 about half a mile (800 m) away, at the 130/AA intersection ().
Its inscription reads:
The GEOGRAPHIC CENTER of the UNITED STATES
LAT. 39°50' LONG. −98°35'
NE 1/4 – SE 1/4 – S32 – T2S – R11W
Located by L.T. Hagadorn of Paulette & Wilson – Engineers and L.A. Beardslee – County Engineer. From data furnished by United States Coast and Geodetic Survey.
Sponsored by Lebanon Hub Club. Lebanon, Kansas. April 25, 1940
An American flag usually flies atop a pole placed on the monument. A covered picnic area and the U.S. Center Chapel, a small eight-pew chapel, are nearby.
Method of measurement
In 1918, the United States Coast and Geodetic Survey found this location by balancing a cardboard cutout shaped like the U.S. on a point. This method was accurate to within , but while the Geodetic Survey no longer endorses any location as the center of the U.S., the identification of Lebanon, Kansas, has remained.
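The balancing procedure locates the centroid of the outline. As an illustration only (the boundary coordinates below are hypothetical, not survey data), the discrete analogue uses the standard shoelace-based polygon centroid, ignoring map projection and Earth curvature:

def polygon_centroid(points):
    """Centroid of a simple polygon given as a list of (x, y) vertices (shoelace formula)."""
    a = cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)

# Illustrative (hypothetical) outline, not real survey data.
outline = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
print(polygon_centroid(outline))  # (2.0, 1.5) for this rectangle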
Cultural references
The geographic center of the contiguous United States is mentioned in Neil Gaiman's American Gods as a neutral ground where the modern and the old gods can meet despite the war between them.
In the 1969 Disney movie The Computer Wore Tennis Shoes, the final question of the college knowledge program is, "A small Midwest city is located exactly on an area designated as the 'geographic center of the United States.' For ten points and $100,000, can you tell us the name of that city?" The answer of Lebanon, Kansas is accepted as correct.
A 2021 Jeep Super Bowl commercial titled "The Middle", starring Bruce Springsteen, features the U.S. Center Chapel in Lebanon, Kansas.
Belle Fourche, South Dakota, is referenced as the geographic center of the U.S. in "A Serpent's Tooth: A Longmire Mystery Book 9" by Craig Johnson.
See also
Center of population
Geographic centers of the United States
Mean center of the United States population
Median center of the United States population
United States Coast and Geodetic Survey (USC&GS)
References
External links
In the Middle of Nowhere, a Nation’s Center, New York Times
Smith County Map, KDOT
Kansas Travel article
Center for Land Use Interpretation article about the origins and accuracy of the marker
Roadside America article
USGS information
The Center of the United States article about applying mathematical methods to geography
Geography of the United States
United States
Historic surveying landmarks in the United States
Geography of Smith County, Kansas
Geography of Butte County, South Dakota
1918 establishments in Kansas | Geographic center of the United States | [
"Physics",
"Mathematics"
] | 1,072 | [
"Point (geometry)",
"Geometric centers",
"Geographical centres",
"Symmetry"
] |
28,934,494 | https://en.wikipedia.org/wiki/Panorama9 | Panorama9 is a cloud-based service within enterprise Network management. The service is provided by the company of the same name and is a cloud-based remote monitoring and management RMM (Remote monitoring and management) service which consists of a hosted Dashboard displaying the status of all devices on an enterprise's network and also provides a set of reports on inventory on both hardware, software and users. The service operates by collating data transmitted from agents installed on each monitored device.
In November 2011 the service introduced an interactive network map displaying real-time information.
In May 2012 Zendesk and Panorama9 announced a partnership.
In 2014 an MSP (managed services) control panel was introduced for service providers, enabling MSPs to manage multiple clients from one dashboard.
In 2017 a mobile app was released.
The company
The company operating the service has the same name, Panorama9. It was founded in Copenhagen and later moved its headquarters to San Francisco, California, with some sales and development teams continuing to operate from Copenhagen.
See also
Software Asset Management
IT Asset Management
SNMP
List of mergers and acquisitions by Symantec
References
External links
American companies established in 2010
System administration
Network management
Port scanners
Network analyzers
2010 establishments in California | Panorama9 | [
"Technology",
"Engineering"
] | 242 | [
"Information systems",
"Computer networks engineering",
"Network management",
"System administration"
] |
28,936,135 | https://en.wikipedia.org/wiki/String%20group | In topology, a branch of mathematics, a string group is an infinite-dimensional group introduced by as a -connected cover of a spin group. A string manifold is a manifold with a lifting of its frame bundle to a string group bundle. This means that in addition to being able to define holonomy along paths, one can also define holonomies for surfaces going between strings. There is a short exact sequence of topological groupswhere is an Eilenberg–MacLane space and is a spin group. The string group is an entry in the Whitehead tower (dual to the notion of Postnikov tower) for the orthogonal group:It is obtained by killing the homotopy group for , in the same way that is obtained from by killing . The resulting manifold cannot be any finite-dimensional Lie group, since all finite-dimensional compact Lie groups have a non-vanishing . The fivebrane group follows, by killing .
More generally, the construction of the Postnikov tower via short exact sequences starting with Eilenberg–MacLane spaces can be applied to any Lie group G, giving the string group String(G).
Intuition for the string group
The relevance of the Eilenberg–MacLane space K(Z, 2) lies in the fact that there are the homotopy equivalences for the classifying space , and the fact . Notice that because the complex spin group is a group extension, the String group can be thought of as a "higher" complex spin group extension, in the sense of higher group theory, since the space K(Z, 2) is an example of a higher group. It can be thought of as the topological realization of the groupoid whose object is a single point and whose morphisms are the group U(1). Note that the homotopical degree of K(Z, 2) is 2, meaning its homotopy is concentrated in degree 2, because it comes from the homotopy fiber of the map from the Whitehead tower whose homotopy cokernel is K(Z, 3). This is because the homotopy fiber lowers the degree by 1.
Understanding the geometry
The geometry of String bundles requires the understanding of multiple constructions in homotopy theory, but they essentially boil down to understanding what K(Z, 2)-bundles are, and how these higher group extensions behave. Namely, K(Z, 2)-bundles on a space are represented geometrically as bundle gerbes, since any K(Z, 2)-bundle can be realized as the homotopy fiber of a map giving a homotopy square, where . Then, a string bundle must map to a spin bundle which is K(Z, 2)-equivariant, analogously to how spin bundles map equivariantly to the frame bundle.
Fivebrane group and higher groups
The fivebrane group can similarly be understood by killing the π7 homotopy group of the string group using the Whitehead tower. It can then be understood again using an exact sequence of higher groups giving a presentation of it in terms of an iterated extension, i.e. an extension of the string group by K(Z, 6). Note that the map on the right is from the Whitehead tower, and the map on the left is the homotopy fiber.
See also
Gerbe
N-group (category theory)
Elliptic cohomology
String bordism
References
External links
From Loop Groups to 2-groups - gives a characterization of String(n) as a 2-group
What is an elliptic object?
Group theory
Differential geometry
String theory
Homotopy theory | String group | [
"Astronomy",
"Mathematics"
] | 669 | [
"String theory",
"Fields of abstract algebra",
"Astronomical hypotheses",
"Group theory"
] |
28,937,040 | https://en.wikipedia.org/wiki/Smoluchowski%20factor | The Smoluchowski factor, also known as von Smoluchowski's f-factor is related to inter-particle interactions. It is named after Marian Smoluchowski.
References
See also
Flocculation
Smoluchowski coagulation equation
Einstein–Smoluchowski relation
Physical chemistry | Smoluchowski factor | [
"Physics",
"Chemistry"
] | 64 | [
"Physical chemistry",
"Applied and interdisciplinary physics",
"Physical chemistry stubs",
"nan"
] |
5,167,489 | https://en.wikipedia.org/wiki/Flexible%20manufacturing%20system | A flexible manufacturing system (FMS) is a manufacturing system in which there is some amount of flexibility that allows the system to react in case of changes, whether predicted or unpredicted.
This flexibility is generally considered to fall into two categories, which both contain numerous subcategories.
The first category is called routing flexibility, which covers the system's ability to be changed to produce new product types, and the ability to change the order of operations executed on a part.
The second category is called machine flexibility, which consists of the ability to use multiple machines to perform the same operation on a part, as well as the system's ability to absorb large-scale changes, such as in volume, capacity, or capability.
Most flexible manufacturing systems consist of three main systems:
The work machines, which are often automated CNC machines, are connected by
A material handling system to optimize parts flow and
The central control computer controls material movements and machine flow.
The main advantage of a flexible manufacturing system is its high flexibility in managing manufacturing resources like time and effort to manufacture a new product.
The best application of a flexible manufacturing system is found in the production of small sets of products like those from mass production.
Advantages
Reduced manufacturing cost
Lower cost per unit produced,
Greater labor productivity,
Greater machine efficiency,
Improved quality,
Increased system reliability,
Reduced parts inventories,
Adaptability to CAD/CAM operations.
Shorter lead times
Improved efficiency
Increase production rate
Disadvantages
Initial set-up cost is high,
Substantial pre-planning
Requirement of skilled labor
Complicated system
Maintenance is complicated
Flexibility
Flexibility in manufacturing means the ability to deal with slightly or greatly mixed parts, to allow variation in parts assembly and variations in process sequence, change the production volume and change the design of certain product being manufactured.
Industrial FMS communication
An industrial flexible manufacturing system consists of robots, computer-controlled machines, computer numerical controlled (CNC) machines, instrumentation devices, computers, sensors, and other stand-alone systems such as inspection machines. The use of robots in the production segment of manufacturing industries promises a variety of benefits ranging from high utilization to high volume of productivity. Each robotic cell or node will be located along a material handling system such as a conveyor or automatic guided vehicle. The production of each part or work-piece will require a different combination of manufacturing nodes. The movement of parts from one node to another is done through the material handling system. At the end of part processing, the finished parts will be routed to an automatic inspection node, and subsequently unloaded from the flexible manufacturing system.
The FMS data traffic consists of large files and short messages, mostly coming from nodes, devices, and instruments. The message size ranges from a few bytes to several hundred bytes. Executive software and other data, for example, are large files, while messages for machining data, instrument-to-instrument communications, status monitoring, and data reporting are transmitted in small sizes.
There is also some variation in response time. Large program files from a main computer usually take about 60 seconds to be downloaded into each instrument or node at the beginning of FMS operation. Messages for instrument data need to be sent periodically with a deterministic time delay. Other types of messages, used for emergency reporting, are quite short in size and must be transmitted and received with an almost instantaneous response.
The demand for a reliable FMS protocol that supports all the FMS data characteristics is now urgent. The existing IEEE standard protocols do not fully satisfy the real-time communication requirements in this environment. The delay of CSMA/CD is unbounded as the number of nodes increases, due to message collisions. Token bus has a deterministic message delay, but it does not support a prioritized access scheme, which is needed in FMS communications. Token Ring provides prioritized access and has a low message delay; however, its data transmission is unreliable. A single node failure, which may occur quite often in an FMS, causes transmission errors of messages passing through that node. In addition, the topology of Token Ring results in high wiring installation costs.
A design of FMS communication that supports real-time communication with bounded message delay and reacts promptly to any emergency signal is needed. Because machine failure and malfunction due to heat, dust, and electromagnetic interference are common, a prioritized mechanism and immediate transmission of emergency messages are needed so that a suitable recovery procedure can be applied. A modification of standard Token Bus to implement a prioritized access scheme was proposed to allow transmission of short and periodic messages with a low delay compared to that for long messages.
Further reading
Manufacturing Flexibility: a literature review. By A. de Toni and S. Tonchia. International Journal of Production Research, 1998, vol. 36, no. 6, 1587-617.
Computer Control of Manufacturing Systems. By Y. Koren. McGraw Hill, Inc. 1983, 287 pp,
Manufacturing Systems – Theory and Practice. By G. Chryssolouris. New York, NY: Springer Verlag, 2005. 2nd edition.
Design of Flexible Production Systems – Methodologies and Tools. By T. Tolio. Berlin: Springer, 2009.
See also
Agile management
Lean manufacturing
References
External links
FMS video 1
FMS video 2
Manufacturing | Flexible manufacturing system | [
"Engineering"
] | 1,065 | [
"Manufacturing",
"Mechanical engineering"
] |
5,168,545 | https://en.wikipedia.org/wiki/Langevin%20dynamics | In physics, Langevin dynamics is an approach to the mathematical modeling of the dynamics of molecular systems using the Langevin equation. It was originally developed by French physicist Paul Langevin. The approach is characterized by the use of simplified models while accounting for omitted degrees of freedom by the use of stochastic differential equations. Langevin dynamics simulations are a kind of Monte Carlo simulation.
Overview
A real world molecular system is unlikely to be present in vacuum. Jostling of solvent or air molecules causes friction, and the occasional high velocity collision will perturb the system. Langevin dynamics attempts to extend molecular dynamics to allow for these effects. Also, Langevin dynamics allows temperature to be controlled as with a thermostat, thus approximating the canonical ensemble.
Langevin dynamics mimics the viscous aspect of a solvent. It does not fully model an implicit solvent; specifically, the model does not account for the electrostatic screening and also not for the hydrophobic effect. For denser solvents, hydrodynamic interactions are not captured via Langevin dynamics.
For a system of particles with masses M, with coordinates X = X(t) that constitute a time-dependent random variable, the resulting Langevin equation is M Ẍ = −∇U(X) − γ M Ẋ + √(2 M γ kB T) R(t),
where U(X) is the particle interaction potential; ∇ is the gradient operator such that −∇U(X) is the force calculated from the particle interaction potentials; the dot is a time derivative such that Ẋ is the velocity and Ẍ is the acceleration; γ is the damping constant (units of reciprocal time), also known as the collision frequency; T is the temperature, kB is the Boltzmann constant; and R(t) is a delta-correlated stationary Gaussian process with zero mean, satisfying ⟨R(t)⟩ = 0 and ⟨R(t)·R(t′)⟩ = δ(t − t′).
Here, δ is the Dirac delta.
If the main objective is to control temperature, care should be exercised to use a small damping constant γ. As γ grows, the dynamics spans from the inertial all the way to the diffusive (Brownian) regime. The Langevin dynamics limit of non-inertia is commonly described as Brownian dynamics. Brownian dynamics can be considered as overdamped Langevin dynamics, i.e. Langevin dynamics where no average acceleration takes place.
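As an illustration only (not part of the article), a minimal Euler–Maruyama sketch of the one-particle Langevin equation for a harmonic potential U(x) = ½kx²; the parameter values are arbitrary and the integrator is the simplest possible choice, not a recommended production scheme:

import math
import random

def langevin_harmonic(m=1.0, k=1.0, gamma=0.5, kBT=1.0, dt=1e-3, steps=100_000):
    """Euler-Maruyama integration of m*x'' = -k*x - gamma*m*x' + sqrt(2*m*gamma*kBT)*R(t)."""
    x, v = 1.0, 0.0
    sigma = math.sqrt(2.0 * gamma * kBT / m * dt)  # std-dev of the random velocity kick per step
    xs = []
    for _ in range(steps):
        a = (-k * x) / m - gamma * v
        v += a * dt + sigma * random.gauss(0.0, 1.0)
        x += v * dt
        xs.append(x)
    return xs

xs = langevin_harmonic()
# By equipartition <x^2> should approach kBT/k = 1 (up to sampling error).
print(sum(x * x for x in xs[len(xs)//2:]) / (len(xs)//2))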
The Langevin equation can be reformulated as a Fokker–Planck equation that governs the probability distribution of the random variable X.
The Langevin equation can be generalized to rotational dynamics of molecules, Brownian particles, etc. A standard (according to NIST) way to do it is to leverage a quaternion-based description of the stochastic rotational motion.
See also
Hamiltonian mechanics
Statistical mechanics
Implicit solvation
Stochastic differential equations
Langevin equation
Klein–Kramers equation
References
External links
Langevin Dynamics (LD) Simulation
Classical mechanics
Statistical mechanics
Dynamical systems
Symplectic geometry | Langevin dynamics | [
"Physics",
"Mathematics"
] | 551 | [
"Statistical mechanics",
"Mechanics",
"Classical mechanics",
"Dynamical systems"
] |
5,168,909 | https://en.wikipedia.org/wiki/Chemogenomics | Chemogenomics, or chemical genomics, is the systematic screening of targeted chemical libraries of small molecules against individual drug target families (e.g., GPCRs, nuclear receptors, kinases, proteases, etc.) with the ultimate goal of identification of novel drugs and drug targets. Typically some members of a target library have been well characterized where both the function has been determined and compounds that modulate the function of those targets (ligands in the case of receptors, inhibitors of enzymes, or blockers of ion channels) have been identified. Other members of the target family may have unknown function with no known ligands and hence are classified as orphan receptors. By identifying screening hits that modulate the activity of the less well characterized members of the target family, the function of these novel targets can be elucidated. Furthermore, the hits for these targets can be used as a starting point for drug discovery. The completion of the human genome project has provided an abundance of potential targets for therapeutic intervention. Chemogenomics strives to study the intersection of all possible drugs on all of these potential targets.
A common method to construct a targeted chemical library is to include known ligands of at least one and preferably several members of the target family. Since a portion of ligands that were designed and synthesized to bind to one family member will also bind to additional family members, the compounds contained in a targeted chemical library should collectively bind to a high percentage of the target family.
Strategy
Chemogenomics integrates target and drug discovery by using active compounds, which function as ligands, as probes to characterize proteome functions. The interaction between a small compound and a protein induces a phenotype. Once the phenotype is characterized, a protein can be associated with a molecular event. Compared with genetics, chemogenomics techniques are able to modify the function of a protein rather than the gene. Also, chemogenomics is able to observe the interaction as well as its reversibility in real time. For example, the modification of a phenotype can be observed only after addition of a specific compound and can be interrupted after its withdrawal from the medium.
Currently, there are two experimental chemogenomic approaches: forward (classical) chemogenomics and reverse chemogenomics. Forward chemogenomics attempts to identify drug targets by searching for molecules which give a certain phenotype on cells or animals, while reverse chemogenomics aims to validate phenotypes by searching for molecules that interact specifically with a given protein. Both of these approaches require a suitable collection of compounds and an appropriate model system for screening the compounds and looking for the parallel identification of biological targets and biologically active compounds. The biologically active compounds that are discovered through forward or reverse chemogenomics approaches are known as modulators because they bind to and modulate specific molecular targets; thus they could be used as ‘targeted therapeutics’.
Forward chemogenomics
In forward chemogenomics, which is also known as classical chemogenomics, a particular phenotype is studied and small compounds interacting with this function are identified. The molecular basis of this desired phenotype is unknown. Once the modulators have been identified, they will be used as tools to look for the protein responsible for the phenotype. For example, a loss-of-function phenotype could be an arrest of tumor growth. Once compounds that lead to a target phenotype have been identified, identifying the gene and protein targets should be the next step. The main challenge of the forward chemogenomics strategy lies in designing phenotypic assays that lead immediately from screening to target identification.
Reverse chemogenomics
In reverse chemogenomics, small compounds that perturb the function of an enzyme in the context of an in vitro enzymatic test will be identified. Once the modulators have been identified, the phenotype induced by the molecule is analyzed in a test on cells or on whole organisms. This method will identify or confirm the role of the enzyme in the biological response. Reverse chemogenomics used to be virtually identical to the target-based approaches that have been applied in drug discovery and molecular pharmacology over the past decade. This strategy is now enhanced by parallel screening and by the ability to perform lead optimization on many targets that belong to one target family.
Applications
Determining mode of action
Chemogenomics has been used to identify mode of action (MOA) for traditional Chinese medicine (TCM) and Ayurveda. Compounds contained in traditional medicines are usually more soluble than synthetic compounds, have “privileged structures” (chemical structures that are more frequently found to bind in different living organisms), and have more comprehensively known safety and tolerance factors. This makes them especially attractive as a resource for lead structures when developing new molecular entities. Given databases containing the chemical structures of compounds used in alternative medicine along with their phenotypic effects, in silico analysis may assist in determining MOA, for example by predicting ligand targets relevant to known phenotypes for traditional medicines. In a case study for TCM, the therapeutic class of ‘toning and replenishing medicine’ was evaluated. Therapeutic actions (or phenotypes) for that class include anti-inflammatory, antioxidant, neuroprotective, hypoglycemic, immunomodulatory, antimetastatic, and hypotensive activity. Sodium-glucose transport proteins and PTP1B (an insulin signaling regulator) were identified as targets linked to the suggested hypoglycemic phenotype. The case study for Ayurveda involved anti-cancer formulations. In this case, the target prediction program enriched for targets directly connected to cancer progression, such as steroid-5-alpha-reductase, and synergistic targets, like the efflux pump P-gp. These target–phenotype links can help identify novel MOAs.
Beyond TCM and Ayurveda, chemogenomics can be applied early in drug discovery to determine a compound's mechanism of action and take advantage of genomic biomarkers of toxicity and efficacy for application to Phase I and II clinical trials.
Identifying new drug targets
Chemogenomics profiling can be used to identify totally new therapeutic targets, for example new antibacterial agents. The study capitalized on the availability of an existing ligand library for an enzyme called murD that is used in the peptidoglycan synthesis pathway. Relying on the chemogenomics similarity principle, the researchers mapped the murD ligand library to other members of the mur ligase family (murC, murE, murF, murA, and murG) to identify new targets for the known ligands. Ligands identified would be expected to be broad-spectrum Gram-negative inhibitors in experimental assays since peptidoglycan synthesis is exclusive to bacteria. Structural and molecular docking studies revealed candidate ligands for murC and murE ligases.
Identifying genes in biological pathway
Thirty years after the posttranslationally modified histidine derivative diphthamide was determined, chemogenomics was used to discover the enzyme responsible for the final step in its synthesis. Diphthamide is a posttranslationally modified histidine residue found on the translation elongation factor 2 (eEF-2). The first two steps of the biosynthesis pathway leading to diphthine have been known, but the enzyme responsible for the amidation of diphthine to diphthamide remained a mystery. The researchers capitalized on Saccharomyces cerevisiae cofitness data. Cofitness data is data representing the similarity of growth fitness under various conditions between any two different deletion strains. Under the assumption that strains lacking the diphthamide synthetase gene should have high cofitness with strains lacking other diphthamide biosynthesis genes, they identified ylr143w as the strain with the highest cofitness to all other strains lacking known diphthamide biosynthesis genes. Subsequent experimental assays confirmed that YLR143W was required for diphthamide synthesis and was the missing diphthamide synthetase.
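A schematic sketch of the cofitness-ranking idea (the gene labels other than ylr143w and all numerical scores below are made up for illustration):

# Hypothetical cofitness scores between deletion strains (values are illustrative only).
cofitness = {
    ("ylr143w", "dph1"): 0.92, ("ylr143w", "dph2"): 0.88, ("ylr143w", "dph5"): 0.90,
    ("gene_x", "dph1"): 0.35, ("gene_x", "dph2"): 0.40, ("gene_x", "dph5"): 0.31,
}
known_pathway = ["dph1", "dph2", "dph5"]
candidates = ["ylr143w", "gene_x"]

def mean_cofitness(candidate):
    return sum(cofitness[(candidate, g)] for g in known_pathway) / len(known_pathway)

ranked = sorted(candidates, key=mean_cofitness, reverse=True)
print(ranked[0])  # the candidate whose fitness profile best tracks the known pathway genes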
See also
Chemical biology
Chemical genetics
Drug discovery
High-throughput screening
Personalized medicine
Phenotypic screening
References
Further reading
External links
GLASS: A comprehensive database for experimentally-validated GPCR-ligand associations
Kubinyi's slides
Computational chemistry
Genomics
Drug discovery
Cheminformatics | Chemogenomics | [
"Chemistry",
"Biology"
] | 1,773 | [
"Life sciences industry",
"Drug discovery",
"Theoretical chemistry",
"Computational chemistry",
"nan",
"Cheminformatics",
"Medicinal chemistry"
] |
5,170,027 | https://en.wikipedia.org/wiki/18-electron%20rule | The 18-electron rule is a chemical rule of thumb used primarily for predicting and rationalizing formulas for stable transition metal complexes, especially organometallic compounds. The rule is based on the fact that the valence orbitals in the electron configuration of transition metals consist of five (n−1)d orbitals, one ns orbital, and three np orbitals, where n is the principal quantum number. These orbitals can collectively accommodate 18 electrons as either bonding or non-bonding electron pairs. This means that the combination of these nine atomic orbitals with ligand orbitals creates nine molecular orbitals that are either metal-ligand bonding or non-bonding. When a metal complex has 18 valence electrons, it is said to have achieved the same electron configuration as the noble gas in the period, lending stability to the complex. Transition metal complexes that deviate from the rule are often interesting or useful because they tend to be more reactive. The rule is not helpful for complexes of metals that are not transition metals. The rule was first proposed by American chemist Irving Langmuir in 1921.
Applicability
The rule usefully predicts the formulas for low-spin complexes of the Cr, Mn, Fe, and Co triads. Well-known examples include ferrocene, iron pentacarbonyl, chromium carbonyl, and nickel carbonyl.
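As an illustration (not from the article), a minimal sketch of neutral ("covalent") electron counting that reproduces the 18-electron totals for the examples just listed; the counting convention and ligand contributions used here are standard neutral-counting values, chosen for the sketch rather than prescribed by the article:

# Neutral (covalent) counting: metal group number + electrons donated by each ligand - overall charge.
GROUP_ELECTRONS = {"Cr": 6, "Mn": 7, "Fe": 8, "Co": 9, "Ni": 10}
LIGAND_ELECTRONS = {"CO": 2, "Cp": 5, "PR3": 2, "H": 1, "Cl": 1}

def electron_count(metal, ligands, charge=0):
    """Total valence electron count for a complex, neutral-counting convention."""
    return GROUP_ELECTRONS[metal] + sum(LIGAND_ELECTRONS[l] for l in ligands) - charge

print(electron_count("Fe", ["Cp", "Cp"]))   # ferrocene: 8 + 2*5 = 18
print(electron_count("Fe", ["CO"] * 5))     # iron pentacarbonyl: 8 + 10 = 18
print(electron_count("Cr", ["CO"] * 6))     # chromium hexacarbonyl: 6 + 12 = 18
print(electron_count("Ni", ["CO"] * 4))     # nickel tetracarbonyl: 10 + 8 = 18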
Ligands in a complex determine the applicability of the 18-electron rule. In general, complexes that obey the rule are composed at least partly of π-acceptor ligands (also known as π-acids). This kind of ligand exerts a very strong ligand field, which lowers the energies of the resultant molecular orbitals so that they are favorably occupied. Typical ligands include olefins, phosphines, and CO. Complexes of π-acids typically feature metal in a low-oxidation state. The relationship between oxidation state and the nature of the ligands is rationalized within the framework of π backbonding.
Consequences for reactivity
Compounds that obey the 18-electron rule are typically "exchange inert". Examples include [Co(NH3)6]Cl3, Mo(CO)6, and [Fe(CN)6]4−. In such cases, in general ligand exchange occurs via dissociative substitution mechanisms, wherein the rate of reaction is determined by the rate of dissociation of a ligand. On the other hand, 18-electron compounds can be highly reactive toward electrophiles such as protons, and such reactions are associative in mechanism, being acid-base reactions.
Complexes with fewer than 18 valence electrons tend to show enhanced reactivity. Thus, the 18-electron rule is often a recipe for non-reactivity in either a stoichiometric or a catalytic sense.
Duodectet rule
Computational findings suggest valence p-orbitals on the metal participate in metal-ligand bonding, albeit weakly. However, Weinhold and Landis within the context of natural bond orbitals do not count the metal p-orbitals in metal-ligand bonding, although these orbitals are still included as polarization functions. This results in a duodectet (12-electron) rule for five d-orbitals and one s-orbital only.
The current consensus in the general chemistry community is that unlike the singular octet rule for main group elements, transition metals do not strictly obey either the 12-electron or 18-electron rule, but that the rules describe the lower bound and upper bound of valence electron count respectively. Thus, while transition metal d-orbital and s-orbital bonding readily occur, the involvement of the higher energy and more spatially diffuse p-orbitals in bonding depends on the central atom and coordination environment.
Exceptions
π-donor or σ-donor ligands with small interactions with the metal orbitals lead to a weak ligand field which increases the energies of t2g orbitals. These molecular orbitals become non-bonding or weakly anti-bonding orbitals (small Δoct). Therefore, addition or removal of electron has little effect on complex stability. In this case, there is no restriction on the number of d-electrons and complexes with 12–22 electrons are possible. Small Δoct makes filling eg* possible (>18 e−) and π-donor ligands can make t2g antibonding (<18 e−). These types of ligand are located in the low-to-medium part of the spectrochemical series. For example: [TiF6]2− (Ti(IV), d0, 12 e−), [Co(NH3)6]3+ (Co(III), d6, 18 e−), [Cu(OH2)6]2+ (Cu(II), d9, 21 e−).
In terms of metal ions, Δoct increases down a group as well as with increasing oxidation number. Strong ligand fields lead to low-spin complexes which cause some exceptions to the 18-electron rule.
16-electron complexes
An important class of complexes that violate the 18e rule are the 16-electron complexes with metal d8 configurations. All high-spin d8 metal ions are octahedral (or tetrahedral), but the low-spin d8 metal ions are all square planar. Important examples of square-planar low-spin d8 metal ions are Rh(I), Ir(I), Ni(II), Pd(II), and Pt(II). Examples are especially prevalent for derivatives of the cobalt and nickel triads. Such compounds are typically square-planar. The most famous examples are Vaska's complex (IrCl(CO)(PPh3)2), [PtCl4]2−, and Zeise's salt [PtCl3(η2-C2H4)]−. In such complexes, the dz2 orbital is doubly occupied and nonbonding.
Many catalytic cycles operate via complexes that alternate between 18-electron and square-planar 16-electron configurations. Examples include Monsanto acetic acid synthesis, hydrogenations, hydroformylations, olefin isomerizations, and some alkene polymerizations.
Other violations can be classified according to the kinds of ligands on the metal center.
Bulky ligands
Bulky ligands can preclude the approach of the full complement of ligands that would allow the metal to achieve the 18 electron configuration.
Examples:
Ti(neopentyl)4 (8 e−)
Cp*2Ti(C2H4) (16 e−)
V(CO)6 (17 e−)
Cp*Cr(CO)3 (17 e−)
Pt(PtBu3)2 (14 e−)
Co(norbornyl)4 (13 e−)
[FeCp2]+ (17 e−)
Sometimes such complexes engage in agostic interactions with the hydrocarbon framework of the bulky ligand. For example:
W(CO)3[P(C6H11)3]2 has 16 e− but has a short bonding contact between one C–H bond and the W center.
Cp(PMe3)V(CHCMe3) (14 e−, diamagnetic) has a short V–H bond with the 'alkylidene-H', so the description of the compound is somewhere between Cp(PMe3)V(CHCMe3) and Cp(PMe3)V(H)(CCMe3).
High-spin complexes
High-spin metal complexes have singly occupied orbitals and may not have any empty orbitals into which ligands could donate electron density. In general, there are few or no π-acidic ligands in the complex. These singly occupied orbitals can combine with the singly occupied orbitals of radical ligands (e.g., oxygen), or addition of a strong field ligand can cause electron-pairing, thus creating a vacant orbital that it can donate into.
Examples:
CrCl3(THF)3 (15 e−)
[Mn(H2O)6]2+ (17 e−)
[Cu(H2O)6]2+ (21 e−, see comments below)
Complexes containing strongly π-donating ligands often violate the 18-electron rule. These ligands include fluoride (F−), oxide (O2−), nitride (N3−), alkoxides (RO−), and imides (RN2−). Examples:
[CrO4]2− (16 e−)
Mo(=NR)2Cl2 (12 e−)
In the latter case, there is substantial donation of the nitrogen lone pairs to the Mo (so the compound could also be described as a 16 e− compound). This can be seen from the short Mo–N bond length, and from the angle Mo–N–C(R), which is nearly 180°.
Counter-examples:
trans-WO2(Me2PCH2CH2PMe2)2 (18 e−)
Cp*ReO3 (18 e−)
In these cases, the M=O bonds are "pure" double bonds (i.e., no donation of the lone pairs of the oxygen to the metal), as reflected in the relatively long bond distances.
π-donating ligands
Ligands where the coordinating atoms bearing nonbonding lone pairs often stabilize unsaturated complexes. Metal amides and alkoxides often violate the 18e rule.
Combinations of effects
The above factors can sometimes combine. Examples include
Cp*VOCl2 (14 e−)
TiCl4 (8 e−)
Higher electron counts
Some complexes have more than 18 electrons. Examples:
Cobaltocene (19 e−)
Nickelocene (20 e−)
The hexaaquacopper(II) ion [Cu(H2O)6]2+ (21 e−)
TM(CO)8− (TM = Sc, Y) (20 e−)
Often, cases where complexes have more than 18 valence electrons are attributed to electrostatic forces – the metal attracts ligands to itself to try to counterbalance its positive charge, and the number of electrons it ends up with is unimportant. In the case of the metallocenes, the chelating nature of the cyclopentadienyl ligand stabilizes its bonding to the metal. Somewhat satisfying are the two following observations: cobaltocene is a strong electron donor, readily forming the 18-electron cobaltocenium cation; and nickelocene tends to react with substrates to give 18-electron complexes, e.g. CpNiCl(PR3) and free CpH.
In the case of nickelocene, the extra two electrons are in orbitals which are weakly metal-carbon antibonding; this is why it often participates in reactions where the M–C bonds are broken and the electron count of the metal changes to 18.
The 20-electron systems TM(CO)8− (TM = Sc, Y) have a cubic (Oh) equilibrium geometry and a singlet (1A1g) electronic ground state. There is one occupied valence MO with a2u symmetry, which is formed only by ligand orbitals without a contribution from the metal AOs. But the adducts TM(CO)8− (TM = Sc, Y) fulfill the 18-electron rule when one considers only those valence electrons which occupy metal–ligand bonding orbitals.
See also
References
Further reading
Chemical bonding
Inorganic chemistry
Rules of thumb | 18-electron rule | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,432 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
5,173,456 | https://en.wikipedia.org/wiki/Scalar%20field%20theory | In theoretical physics, scalar field theory can refer to a relativistically invariant classical or quantum theory of scalar fields. A scalar field is invariant under any Lorentz transformation.
The only fundamental scalar quantum field that has been observed in nature is the Higgs field. However, scalar quantum fields feature in the effective field theory descriptions of many physical phenomena. An example is the pion, which is actually a pseudoscalar.
Since they do not involve polarization complications, scalar fields are often the easiest to appreciate second quantization through. For this reason, scalar field theories are often used for purposes of introduction of novel concepts and techniques.
The signature of the metric employed below is (+, −, −, −).
Classical scalar field theory
A general reference for this section is Ramond, Pierre (2001-12-21). Field Theory: A Modern Primer (Second Edition). USA: Westview Press, Ch 1.
Linear (free) theory
The most basic scalar field theory is the linear theory. Through the Fourier decomposition of the fields, it represents the normal modes of an infinity of coupled oscillators where the continuum limit of the oscillator index i is now denoted by x. The action for the free relativistic scalar field theory is then S = ∫ d^{D−1}x dt [½ (∂t φ)² − ½ δ^ij ∂i φ ∂j φ − ½ m² φ²],
where the integrand is known as a Lagrangian density; d^{D−1}x is the measure over the spatial coordinates (dx·dy·dz for the three spatial coordinates when D = 4); δ^ij is the Kronecker delta function; and ∂ρ = ∂/∂x^ρ for the ρ-th coordinate x^ρ.
This is an example of a quadratic action, since each of the terms is quadratic in the field, φ. The term proportional to m² is sometimes known as a mass term, due to its subsequent interpretation, in the quantized version of this theory, in terms of particle mass.
The equation of motion for this theory is obtained by extremizing the action above. It takes the following form, linear in φ: ∂t²φ − ∇²φ + m²φ = 0,
where ∇2 is the Laplace operator. This is the Klein–Gordon equation, with the interpretation as a classical field equation, rather than as a quantum-mechanical wave equation.
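As an aside (not part of the article), a minimal 1+1-dimensional finite-difference sketch of this classical field equation, with a leapfrog-style update and illustrative grid parameters chosen to satisfy the stability condition dt < dx:

import math

# Solve phi_tt = phi_xx - m^2 phi on a periodic 1D grid (units with c = 1).
N, dx, dt, m = 200, 0.1, 0.05, 1.0
phi = [math.exp(-((i - N // 2) * dx) ** 2) for i in range(N)]   # Gaussian initial profile
phi_old = phi[:]                                                # zero initial time derivative

for _ in range(400):
    lap = [(phi[(i + 1) % N] - 2 * phi[i] + phi[(i - 1) % N]) / dx**2 for i in range(N)]
    phi_new = [2 * phi[i] - phi_old[i] + dt**2 * (lap[i] - m**2 * phi[i]) for i in range(N)]
    phi_old, phi = phi, phi_new

print(max(phi), min(phi))   # the initial bump disperses into oscillatory waves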
Nonlinear (interacting) theory
The most common generalization of the linear theory above is to add a scalar potential V(φ) to the Lagrangian, where typically, in addition to a mass term ½m²φ², the potential has higher order polynomial terms in φ. Such a theory is sometimes said to be interacting, because the Euler–Lagrange equation is now nonlinear, implying a self-interaction. The action for the most general such theory is S = ∫ d^{D−1}x dt [½ (∂t φ)² − ½ δ^ij ∂i φ ∂j φ − ½ m² φ² − Σ_{n≥3} (gn/n!) φ^n].
The n! factors in the expansion are introduced because they are useful in the Feynman diagram expansion of the quantum theory, as described below.
The corresponding Euler–Lagrange equation of motion is now
Dimensional analysis and scaling
Physical quantities in these scalar field theories may have dimensions of length, time or mass, or some combination of the three.
However, in a relativistic theory, any quantity , with dimensions of time, can be readily converted into a length, , by using the velocity of light, . Similarly, any length is equivalent to an inverse mass, , using the Planck constant, . In natural units, one thinks of a time as a length, or either time or length as an inverse mass.
In short, one can think of the dimensions of any physical quantity as defined in terms of just one independent dimension, rather than in terms of all three. This is most often termed the mass dimension of the quantity. Knowing the dimensions of each quantity, allows one to uniquely restore conventional dimensions from a natural units expression in terms of this mass dimension, by simply reinserting the requisite powers of and required for dimensional consistency.
One conceivable objection is that this theory is classical, and therefore it is not obvious how the Planck constant should be a part of the theory at all. If desired, one could indeed recast the theory without mass dimensions at all: However, this would be at the expense of slightly obscuring the connection with the quantum scalar field. Given that one has dimensions of mass, the Planck constant is thought of here as an essentially arbitrary fixed reference quantity of action (not necessarily connected to quantization), hence with dimensions appropriate to convert between mass and inverse length.
Scaling dimension
The classical scaling dimension, or mass dimension, , of describes the transformation of the field under a rescaling of coordinates:
The units of action are the same as the units of , and so the action itself has zero mass dimension. This fixes the scaling dimension of the field to be
Scale invariance
There is a specific sense in which some scalar field theories are scale-invariant. While the actions above are all constructed to have zero mass dimension, not all actions are invariant under the scaling transformation
The reason that not all actions are invariant is that one usually thinks of the parameters m and as fixed quantities, which are not rescaled under the transformation above. The condition for a scalar field theory to be scale invariant is then quite obvious: all of the parameters appearing in the action should be dimensionless quantities. In other words, a scale invariant theory is one without any fixed length scale (or equivalently, mass scale) in the theory.
For a scalar field theory with D spacetime dimensions, the only dimensionless coupling constant is that of the φn interaction with n = 2D/(D − 2). For example, in D = 4, only the φ4 coupling is classically dimensionless, and so the only classically scale-invariant scalar field theory in D = 4 is the massless φ4 theory.
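This power counting is simple enough to script. The sketch below (function names are illustrative) computes the classical mass dimension of the field, (D − 2)/2, the dimension of the coupling of a φn interaction, and the value of n for which that coupling is dimensionless.

from fractions import Fraction

def field_mass_dimension(D):
    """Classical mass dimension of a scalar field in D spacetime dimensions."""
    return Fraction(D - 2, 2)

def coupling_mass_dimension(n, D):
    """Mass dimension of the coupling multiplying a phi^n interaction term."""
    return D - n * field_mass_dimension(D)

def dimensionless_power(D):
    """The power n for which the phi^n coupling is classically dimensionless."""
    return Fraction(2 * D, D - 2)

for D in (3, 4, 6):
    print(D, field_mass_dimension(D), coupling_mass_dimension(4, D), dimensionless_power(D))
# In D = 4 the phi^4 coupling has dimension 0 and n = 4, matching the massless phi^4 theory;
# in D = 3 the dimensionless interaction is phi^6, and in D = 6 it is phi^3.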
Classical scale invariance, however, normally does not imply quantum scale invariance, because of the renormalization group involved – see the discussion of the beta function below.
Conformal invariance
A transformation
is said to be conformal if the transformation satisfies
for some function .
The conformal group contains as subgroups the isometries of the metric (the Poincaré group) and also the scaling transformations (or dilatations) considered above. In fact, the scale-invariant theories in the previous section are also conformally-invariant.
φ4 theory
Massive φ4 theory illustrates a number of interesting phenomena in scalar field theory.
The Lagrangian density is
Spontaneous symmetry breaking
This Lagrangian has a symmetry under the transformation φ → −φ.
This is an example of an internal symmetry, in contrast to a space-time symmetry.
If is positive, the potential
has a single minimum, at the origin. The solution φ=0 is clearly invariant under the symmetry.
Conversely, if is negative, then one can readily see that the potential
has two minima. This is known as a double well potential, and the lowest energy states (known as the vacua, in quantum field theoretical language) in such a theory are not invariant under the φ → −φ symmetry of the action (in fact it maps each of the two vacua into the other). In this case, the symmetry is said to be spontaneously broken.
Kink solutions
The φ4 theory with a negative m2 also has a kink solution, which is a canonical example of a soliton. Such a solution is of the form
where x is one of the spatial variables (φ is taken to be independent of t and the remaining spatial variables). The solution interpolates between the two different vacua of the double well potential. It is not possible to deform the kink into a constant solution without passing through a solution of infinite energy, and for this reason the kink is said to be stable. For D > 2 (i.e., theories with more than one spatial dimension), this solution is called a domain wall.
Another well-known example of a scalar field theory with kink solutions is the sine-Gordon theory.
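A concrete check of the φ4 kink: assuming the common normalization V(φ) = (λ/4)(φ² − v²)² (an illustrative choice that may differ from other conventions), a static kink profile is φ(x) = v·tanh(v·sqrt(λ/2)·x). The sympy sketch below verifies that this profile satisfies the static field equation φ'' = dV/dφ and interpolates between the two vacua ±v.

import sympy as sp

x = sp.symbols("x", real=True)
v, lam = sp.symbols("v lambda_", positive=True)
phi_sym = sp.symbols("phi")

V = sp.Rational(1, 4) * lam * (phi_sym**2 - v**2)**2      # assumed double-well potential
kink = v * sp.tanh(v * sp.sqrt(lam / 2) * x)              # candidate static kink profile

# Static equation of motion: phi'' = dV/dphi, evaluated on the kink profile
lhs = sp.diff(kink, x, 2)
rhs = sp.diff(V, phi_sym).subs(phi_sym, kink)
print(sp.simplify(lhs - rhs))                             # prints 0

# The profile interpolates between the two vacua of the double well
print(sp.limit(kink, x, -sp.oo), sp.limit(kink, x, sp.oo))  # -v and v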
Complex scalar field theory
In a complex scalar field theory, the scalar field takes values in the complex numbers, rather than the real numbers. The complex scalar field represents spin-0 particles and antiparticles with charge. The action considered normally takes the form
This has a U(1), equivalently O(2), symmetry, whose action on the space of fields rotates φ → exp(iα)φ, for some real phase angle α.
As for the real scalar field, spontaneous symmetry breaking is found if m2 is negative. This gives rise to Goldstone's Mexican hat potential which is a rotation of the double-well potential of a real scalar field through 2π radians about the V axis. The symmetry breaking takes place in one higher dimension, i.e., the choice of vacuum breaks a continuous U(1) symmetry instead of a discrete one. The two components of the scalar field are reconfigured as a massive mode and a massless Goldstone boson.
O(N) theory
One can express the complex scalar field theory in terms of two real fields, φ1 = Re φ and φ2 = Im φ, which transform in the vector representation of the U(1) = O(2) internal symmetry. Although such fields transform as a vector under the internal symmetry, they are still Lorentz scalars.
This can be generalised to a theory of N scalar fields transforming in the vector representation of the O(N) symmetry. The Lagrangian for an O(N)-invariant scalar field theory is typically of the form
using an appropriate O(N)-invariant inner product. The theory can also be expressed for complex vector fields, i.e. for , in which case the symmetry group is the Lie group SU(N).
Gauge-field couplings
When the scalar field theory is coupled in a gauge invariant way to the Yang–Mills action, one obtains the Ginzburg–Landau theory of superconductors. The topological solitons of that theory correspond to vortices in a superconductor; the minimum of the Mexican hat potential corresponds to the order parameter of the superconductor.
Quantum scalar field theory
A general reference for this section is Ramond, Pierre (2001-12-21). Field Theory: A Modern Primer (Second Edition). USA: Westview Press, Ch. 4.
In quantum field theory, the fields, and all observables constructed from them, are replaced by quantum operators on a Hilbert space. This Hilbert space is built on a vacuum state, and dynamics are governed by a quantum Hamiltonian, a positive-definite operator which annihilates the vacuum. A construction of a quantum scalar field theory is detailed in the canonical quantization article, which relies on canonical commutation relations among the fields. Essentially, the infinity of classical oscillators repackaged in the scalar field as its (decoupled) normal modes, above, are now quantized in the standard manner, so the respective quantum operator field describes an infinity of quantum harmonic oscillators acting on a respective Fock space.
In brief, the basic variables are the quantum field and its canonical momentum . Both these operator-valued fields are Hermitian. At spatial points , and at equal times, their canonical commutation relations are given by
while the free Hamiltonian is, similarly to above,
A spatial Fourier transform leads to momentum space fields
which resolve to annihilation and creation operators
where .
These operators satisfy the commutation relations
The state annihilated by all of the operators a is identified as the bare vacuum, and a particle with momentum is created by applying to the vacuum.
Applying all possible combinations of creation operators to the vacuum constructs the relevant Hilbert space: This construction is called Fock space. The vacuum is annihilated by the Hamiltonian
where the zero-point energy has been removed by Wick ordering. (See canonical quantization.)
Interactions can be included by adding an interaction Hamiltonian. For a φ4 theory, this corresponds to adding a Wick ordered term g:φ4:/4! to the Hamiltonian, and integrating over x. Scattering amplitudes may be calculated from this Hamiltonian in the interaction picture. These are constructed in perturbation theory by means of the Dyson series, which gives the time-ordered products, or n-particle Green's functions as described in the Dyson series article. The Green's functions may also be obtained from a generating function that is constructed as a solution to the Schwinger–Dyson equation.
Feynman path integral
The Feynman diagram expansion may be obtained also from the Feynman path integral formulation. The time ordered vacuum expectation values of polynomials in , known as the n-particle Green's functions, are constructed by integrating over all possible fields, normalized by the vacuum expectation value with no external fields,
All of these Green's functions may be obtained by expanding the exponential in J(x)φ(x) in the generating function
A Wick rotation may be applied to make time imaginary. Changing the signature to (++++) then turns the Feynman integral into a statistical mechanics partition function in Euclidean space,
Normally, this is applied to the scattering of particles with fixed momenta, in which case, a Fourier transform is useful, giving instead
where is the Dirac delta function.
The standard trick to evaluate this functional integral is to write it as a product of exponential factors, schematically,
The second two exponential factors can be expanded as power series, and the combinatorics of this expansion can be represented graphically through Feynman diagrams of the Quartic interaction.
The integral with g = 0 can be treated as a product of infinitely many elementary Gaussian integrals: the result may be expressed as a sum of Feynman diagrams, calculated using the following Feynman rules:
Each field (p) in the n-point Euclidean Green's function is represented by an external line (half-edge) in the graph, and associated with momentum p.
Each vertex is represented by a factor −g.
At a given order gk, all diagrams with n external lines and k vertices are constructed such that the momenta flowing into each vertex sum to zero. Each internal line is represented by a propagator 1/(q2 + m2), where q is the momentum flowing through that line.
Any unconstrained momenta are integrated over all values.
The result is divided by a symmetry factor, which is the number of ways the lines and vertices of the graph can be rearranged without changing its connectivity.
Do not include graphs containing "vacuum bubbles", connected subgraphs with no external lines.
The last rule takes into account the effect of dividing by the normalization Z[0]. The Minkowski-space Feynman rules are similar, except that each vertex is represented by −ig, while each internal line is represented by a propagator i/(q2 − m2 + iε), where the ε term represents the small Wick rotation needed to make the Minkowski-space Gaussian integral converge.
Renormalization
The integrals over unconstrained momenta, called "loop integrals", in the Feynman graphs typically diverge. This is normally handled by renormalization, which is a procedure of adding divergent counter-terms to the Lagrangian in such a way that the diagrams constructed from the original Lagrangian and counter-terms is finite. A renormalization scale must be introduced in the process, and the coupling constant and mass become dependent upon it.
The dependence of a coupling constant on the scale is encoded by a beta function, , defined by
This dependence on the energy scale is known as "the running of the coupling parameter", and theory of this systematic scale-dependence in quantum field theory is described by the renormalization group.
Beta-functions are usually computed in an approximation scheme, most commonly perturbation theory, where one assumes that the coupling constant is small. One can then make an expansion in powers of the coupling parameters and truncate the higher-order terms (also known as higher loop contributions, due to the number of loops in the corresponding Feynman graphs).
The β-function at one loop (the first perturbative contribution) for the φ4 theory is
The fact that the sign in front of the lowest-order term is positive suggests that the coupling constant increases with energy. If this behavior persisted at large couplings, this would indicate the presence of a Landau pole at finite energy, arising from quantum triviality. However, the question can only be answered non-perturbatively, since it involves strong coupling.
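To illustrate this growth numerically, the sketch below assumes the standard one-loop result β(λ) = 3λ²/(16π²) for the λφ4/4! interaction (the precise coefficient depends on normalization conventions, so treat it as an assumption), solves dλ/d ln μ = β(λ) in closed form, and locates the scale of the would-be Landau pole.

import math

BETA0 = 3.0 / (16.0 * math.pi**2)     # assumed one-loop coefficient for phi^4

def running_coupling(lam0, log_mu_ratio):
    """Closed-form solution of dlam/dln(mu) = BETA0*lam^2 with lam(mu0) = lam0."""
    return lam0 / (1.0 - BETA0 * lam0 * log_mu_ratio)

def landau_pole_ratio(lam0):
    """mu/mu0 at which the one-loop running coupling diverges."""
    return math.exp(1.0 / (BETA0 * lam0))

lam0 = 0.5                            # illustrative value of the coupling at mu0
for t in (0.0, 10.0, 50.0, 100.0):    # t = ln(mu/mu0)
    print(t, running_coupling(lam0, t))
print("one-loop Landau pole at mu/mu0 ~ %.2e" % landau_pole_ratio(lam0))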
A quantum field theory is said to be trivial when the renormalized coupling, computed through its beta function, goes to zero when the ultraviolet cutoff is removed. Consequently, the propagator becomes that of a free particle and the field is no longer interacting.
For a φ4 interaction, Michael Aizenman proved that the theory is indeed trivial for space-time dimension D ≥ 5.
For D = 4, the triviality has yet to be proven rigorously, but lattice computations have provided strong evidence for this. This fact is important as quantum triviality can be used to bound or even predict parameters such as the Higgs boson mass. This can also lead to a predictable Higgs mass in asymptotic safety scenarios.
See also
Renormalization
Quantum triviality
Landau pole
Scale invariance (CFT description)
Scalar electrodynamics
Notes
References
External links
The Conceptual Basis of Quantum Field Theory Click on the link for Chap. 3 to find an extensive, simplified introduction to scalars in relativistic quantum mechanics and quantum field theory.
Quantum field theory
Mathematical physics
Scalars | Scalar field theory | [
"Physics",
"Mathematics"
] | 3,569 | [
"Quantum field theory",
"Applied mathematics",
"Theoretical physics",
"Quantum mechanics",
"Mathematical physics"
] |
5,174,226 | https://en.wikipedia.org/wiki/Contact%20shot | A contact shot is a gunshot wound incurred while the muzzle of the firearm is in direct contact with the body at the moment of discharge. Contact shots are often the result of close-range gunfights, suicide, or execution.
Terminal effects
Wounds caused by contact shots are devastating, as the body absorbs the entire discharge of the cartridge, not just the projectile. In this case the injection of rapidly expanding propellant gases may cause significantly more damage than the bullet itself. Even a blank cartridge can very easily cause lethal wounds if fired in contact with the body, so powerheads, which are intended to fire at contact range, are still very effective when loaded with blanks, while being relatively safe if accidentally discharged from a distance.
Firearms such as muzzleloaders and shotguns often have additional materials in the shot, such as a patch or wadding. While they are generally too lightweight to penetrate at longer ranges, they will penetrate in a contact shot. Since these are often made of porous materials such as cloth and cardboard, there is a significantly elevated risk of infection from the wound.
Characteristics
In the field of forensic ballistics, the characteristics of a contact shot are often an important part of recreating a shooting. A contact shot produces a distinctive wound, with extensive tissue damage from the burning propellant. Unlike a shot from point-blank range, the powder burns will cover a very small area right around the entry wound; often there will be a distinct pattern, called tattooing. Star-shaped tattooing is often caused by the rifling in the gun barrel, and distinct patterns may also be made by flash suppressors or muzzle brakes. The shape of the tattooing may help identify the firearm used.
In many cases, the body's absorption of the muzzle blast will act as a silencer, trapping the propellant gases under the skin and muffling the sound of the shot.
See also
Captive bolt pistol, a device designed to stun livestock with contact shots
References
Chest Injury in Close-Range Shot by Muzzle Loader Gun: Report of Two Cases
Ballistics
Firearm terminology | Contact shot | [
"Physics"
] | 424 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |
5,175,578 | https://en.wikipedia.org/wiki/Piston%20motion%20equations | The reciprocating motion of a non-offset piston connected to a rotating crank through a connecting rod (as would be found in internal combustion engines) can be expressed by equations of motion. This article shows how these equations of motion can be derived using calculus as functions of angle (angle domain) and of time (time domain).
Crankshaft geometry
The geometry of the system consisting of the piston, rod and crank is represented as shown in the following diagram:
Definitions
From the geometry shown in the diagram above, the following variables are defined:
rod length (distance between piston pin and crank pin)
crank radius (distance between crank center and crank pin, i.e. half stroke)
crank angle (from cylinder bore centerline at TDC)
piston pin position (distance upward from crank center along cylinder bore centerline)
The following variables are also defined:
piston pin velocity (upward from crank center along cylinder bore centerline)
piston pin acceleration (upward from crank center along cylinder bore centerline)
crank angular velocity (in the same direction/sense as crank angle )
Angular velocity
The frequency (Hz) of the crankshaft's rotation is related to the engine's speed (revolutions per minute, RPM) as follows: frequency = RPM / 60.
So the angular velocity (radians/s) of the crankshaft is: ω = 2π × frequency = 2π × RPM / 60.
Triangle relation
As shown in the diagram, the crank pin, crank center and piston pin form triangle NOP.
By the cosine law it is seen that:
where and are constant and varies as changes.
Equations with respect to angular position (angle domain)
Angle domain equations are expressed as functions of angle.
Deriving angle domain equations
The angle domain equations of the piston's reciprocating motion are derived from the system's geometry equations as follows.
Position (geometry)
Position with respect to crank angle (from the triangle relation, completing the square, utilizing the Pythagorean identity, and rearranging):
Velocity
Velocity with respect to crank angle (take first derivative, using the chain rule):
Acceleration
Acceleration with respect to crank angle (take second derivative, using the chain rule and the quotient rule):
Non Simple Harmonic Motion
The angle domain equations above show that the motion of the piston (connected to rod and crank) is not simple harmonic motion, but is modified by the motion of the rod as it swings with the rotation of the crank. This is in contrast to the Scotch Yoke which directly produces simple harmonic motion.
Example graphs
Example graphs of the angle domain equations are shown below.
Equations with respect to time (time domain)
Time domain equations are expressed as functions of time.
Angular velocity derivatives
Angle is related to time by angular velocity as follows:
If angular velocity is constant, then:
and:
Deriving time domain equations
The time domain equations of the piston's reciprocating motion are derived from the angle domain equations as follows.
Position
Position with respect to time is simply:
Velocity
Velocity with respect to time (using the chain rule):
Acceleration
Acceleration with respect to time (using the chain rule and product rule, and the angular velocity derivatives):
Scaling for angular velocity
From the foregoing, it can be seen that the time domain equations are simply scaled forms of the angle domain equations: position is unscaled, velocity is scaled by ω, and acceleration is scaled by ω².
To convert the angle domain equations to time domain, first replace A with ωt, and then scale for angular velocity as follows: multiply velocity by ω, and multiply acceleration by ω².
Velocity maxima and minima
By definition, the velocity maxima and minima
occur at the acceleration zeros (crossings of the horizontal axis).
Crank angle not right-angled
The velocity maxima and minima (see the acceleration zero crossings in the graphs below) depend on rod length and half stroke and do not occur when the crank angle is right angled.
Crank-rod angle not right angled
The velocity maxima and minima do not necessarily occur when the crank makes a right angle with the rod. Counter-examples exist to disprove the statement "velocity maxima and minima only occur when the crank-rod angle is right angled".
Example
For rod length 6" and crank radius 2" (as shown in the example graph below), numerically solving the acceleration zero-crossings finds the velocity maxima/minima to be at crank angles of ±73.17615°. Then, using the triangle law of sines, it is found that the rod-vertical angle is 18.60647° and the crank-rod angle is 88.21738°. Clearly, in this example, the angle between the crank and the rod is not a right angle. Summing the angles of the triangle 88.21738° + 18.60647° + 73.17615° gives 180.00000°. A single counter-example is sufficient to disprove the statement "velocity maxima/minima occur when crank makes a right angle with rod".
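This example can be reproduced numerically. The sympy sketch below (variable names are chosen for illustration) uses the position relation x(A) = r·cos A + sqrt(l² − r²·sin²A) implied by the triangle relation above, differentiates it twice with respect to crank angle, and solves for the acceleration zero near the quoted extremum.

import sympy as sp

A = sp.symbols("A", real=True)       # crank angle in radians
l, r = 6.0, 2.0                      # rod length and crank radius (inches)

x = r * sp.cos(A) + sp.sqrt(l**2 - r**2 * sp.sin(A)**2)   # piston pin position
v = sp.diff(x, A)                    # velocity per unit crank angular velocity
a = sp.diff(x, A, 2)                 # acceleration per unit (angular velocity)^2

A_star = sp.nsolve(a, A, 1.3)        # acceleration zero near 1.3 rad
print(sp.N(A_star * 180 / sp.pi))    # ~73.176 degrees, matching the example above
print(sp.N(v.subs(A, A_star)))       # the corresponding velocity extremum (inches per radian)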
Example graphs of piston motion
Angle Domain Graphs
The graphs below show the angle domain equations for a constant rod length (6.0") and various values of half stroke (1.8", 2.0", 2.2").
Note in the graphs that L is rod length and R is half stroke .
Animation
Below is an animation of the piston motion equations with the same values of rod length and crank radius as in the graphs above.
Units of Convenience
Note that for the automotive/hotrod use-case the most convenient unit of length for the piston-rod-crank geometry (and the one used by enthusiasts) is the inch, with typical dimensions being 6" (inch) rod length and 2" (inch) crank radius. This article uses units of inch (") for position, velocity and acceleration, as shown in the graphs above.
See also
Connecting rod
Crankshaft
Equations of motion
Internal combustion engine
Newton's laws of motion
Piston
Reciprocating engine
Scotch yoke
Simple Harmonic Motion
Slider-crank linkage
References
External links
animated engines Animated Otto Engine
desmos Interactive Stroke vs Rod Piston Position and Derivatives
desmos Interactive Crank Animation
codecogs Piston Velocity and Acceleration
youtube Rotating SBC 350 Engine
youtube 3D Animation of V8 Engine
youtube Inside V8 Engine
Motion equations
Engine technology
Mechanical engineering
Equations | Piston motion equations | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 1,273 | [
"Applied and interdisciplinary physics",
"Engines",
"Mathematical objects",
"Equations",
"Engine technology",
"Mechanical engineering"
] |
25,792,575 | https://en.wikipedia.org/wiki/International%20Federation%20of%20Medical%20and%20Biological%20Engineering | The International Federation of Medical and Biological Engineering (IFMBE) was initially formed as International Federation for Medical Electronics and Biological Engineering during the 2nd International Conference of Medical and Biological Engineering, in the UNESCO Building, Paris, France in 1959. It is primarily a federation of national and transnational organizations. These organizations represent national interests in medical and biological engineering.
The objectives of the IFMBE are scientific, technological, literary, and educational. Within the field of medical, biological and clinical engineering IFMBE's aims are to encourage research and the application of knowledge, and to disseminate information and promote collaboration.
History of IFMBE
In 1959 a group of medical engineers, physicists and physicians met at the 2nd International Conference of Medical and Biological Engineering, in the UNESCO Building, Paris, France to create an organization entitled International Federation for Medical Electronics and Biological Engineering. At that time there were few national biomedical engineering societies and workers in the discipline joined as Associates of the Federation. Later, as national societies were formed, these societies became affiliates of the Federation.
In the mid-1960s, the name was shortened to International Federation for Medical and Biological Engineering. Its international conferences were held first on a yearly basis, then on a two-year basis and eventually on a three-year basis, to conform to the practice of most other international scientific bodies.
As the Federation grew, its constituency and objectives changed. During the first ten years of its existence, clinical engineering became a viable subdiscipline with an increasing number of members employed in the health care area. The IFMBE mandate was expanded to represent those engaged in research and development and in clinical engineering. The latter category now represents close to half of the total membership.
As of October 2010, IFMBE has an estimated 130,000 members in 61 affiliated organizations.
It is also associated with the International Organization for Medical Physics (IOMP) since 1976, and together the two bodies established the International Union for Physical and Engineering Sciences in Medicine (IUPESM).
Publications
Medical & Biological Engineering & Computing (ISSN 0140-0118)
The IFMBE publishes the journal Medical and Biological Engineering and Computing with Springer, which aims to cover all fields of Medical and Biological Engineering and Sciences.
IFMBE News
The IFMBE News Magazine, published electronically with Springer, documents developments in biomedical engineering.
IFMBE Book Series
The official IFMBE book series, Biomedical Engineering represents another service to the Biomedical Engineering Community.
IFMBE Proceedings
In co-operation with its World Congress and regional conferences, IFMBE also issues the IFMBE Proceedings Series published with Springer.
World Congress
The major International Conference of the Federation is now titled the World Congress. Meetings of the Federation are combined with those of the International Organization for Medical Physics (IOMP) and co-ordinated by the Conference Co-ordinating Committee of the International Union for Physical and Engineering Sciences in Medicine (IUPESM). The Congresses are scheduled on a three-year basis and aligned with the Federation's General Assembly meeting, at which elections are held.
Membership
Members of IFMBE include:
American Institute for Medical and Biological Engineering
Association for the Advancement of Medical Instrumentation
Canadian Medical and Biological Engineering Society
Hong Kong Institution of Engineers
IEEE Engineering in Medicine & Biology Society
Institute of Physics and Engineering in Medicine
John von Neumann Computer Society
References
Medical physics organizations
Biomedical engineering | International Federation of Medical and Biological Engineering | [
"Engineering",
"Biology"
] | 665 | [
"Biological engineering",
"Medical technology",
"Biomedical engineering"
] |
25,798,453 | https://en.wikipedia.org/wiki/Pharmacy%20automation | Pharmacy automation involves the mechanical processes of handling and distributing medications. Any pharmacy task may be involved, including counting small objects (e.g., tablets, capsules); measuring and mixing powders and liquids for compounding; tracking and updating customer information in databases (e.g., personally identifiable information (PII), medical history, drug interaction risk detection); and inventory management. This article focuses on the changes that have taken place in the local, or community pharmacy since the 1960s.
History
Dispensing medications in a community pharmacy before the 1970s was a time-consuming operation. The pharmacist dispensed prescriptions in tablet or capsule form with a simple tray and spatula. Many new medications were developed by pharmaceutical manufacturers at an ever-increasing pace, and medication prices were rising steeply. A typical community pharmacist was working longer hours and was often forced to hire staff to handle increased workloads, which resulted in less time to focus on safety issues. These additional factors led to the use of a machine to count medications.
The original electronic portable digital tablet counting technology was invented in Manchester, England between 1967 and 1970 by the brothers John and Frank Kirby.
I had the original idea of how the machine would work and it was my patent, but it was a joint effort getting it to work in a saleable form. It was 3 years of very hard work. I had originally studied heavy electrical engineering before changing over to Medical School and qualifying as a Medical Doctor in 1968. In fact I was Senior House (Casualty) Officer (A&E or ER) in 1970 at North Manchester General Hospital when I filed the patent. I must have been the only hospital doctor in Britain with an oscilloscope, a soldering iron and a drawing board in his room in the Doctors' Residence. The housekeepers were bemused by all the wires. Frank originally trained as a Banker but quit to take a job with a local electronics firm during the development. He died in 1987, a terrible loss. [Extract from personal communication received in March 2010 from John Kirby.]
Frank and John Kirby and their associate Rodney Lester were pioneers in pharmacy automation and small-object counting technology. In 1967, the Kirbys invented a portable digital tablet counter to count tablets and capsules. With Lester they formed a limited company. In 1970, their invention was patented and put into production in Oldham, England. The tablet counter aided the pharmacy industry with time-consuming manual counting of drug prescriptions.
A counting machine consistently counted medications accurately and quickly. This aspect of pharmacy automation was quickly adopted, and innovations emerged every decade to aid the pharmacy industry to deliver medications quickly, safely, and economically. Modern pharmacies have many new options to improve their workflow by using the new technology, and can choose intelligently from the many options available.
Chronology
On 1 January 1971 commercial production of the first portable digital tablet counters in the World began. John Kirby had filed U.K. Patent number GB1358378(A) on 8 September 1970 and U.S. patent number 3789194 on 9 August 1971. These early electronic counters were designed to help pharmacies replace the common (but often inaccurate) practice of counting medications by hand.
In 1975, the digital technology was exported to America. In early 1980 a dedicated research, development and production facility was built in Oldham, England at a cost of £500,000.
Between 1982 and 1983, two separate development facilities had been created. In America, overseen by Rodney Lester; and in England, overseen by the Kirby brothers. In 1987, Frank Kirby died. In 1989, John Kirby moved his UK facility to Devon, England.
A simple to operate machine had been developed to accurately and quickly count prescription medications. Technology improvements soon resulted in a more compact model. The price of such equipment in 1980 was around £1,300. This substantial investment in new technology was a major financial consideration, but the pharmacy community considered the use of a counting machine as a superior method compared to hand-counting medications. These early devices became known as tablet counter, capsule counter, pill counter, or drug counter.
The new counting technology replaced manual methods in many industries, such as vitamin and diet supplement manufacturing. Technicians needed a small, affordable device to count and bottle medications. In England and America, the 1980s and 1990s saw the development of high-speed machines for counting and bottle filling. Like their pharmacy-based counterparts, these industrial units were designed to be fast and simple to operate, yet remain small and cost effective.
In America, in the late 1990s/early 2000s a new type of tablet counter appeared. It was simple to use, compact, inexpensive, and had good counting accuracy. At the turn of the millennium, technical advances allowed the design of counters with a software verification system: an onboard computer displayed photo images of medications to assist the pharmacist or pharmacy technician in verifying that the correct medication was being dispensed, and a database stored all prescriptions that were counted on the device.
Between September 2005 and May 2007, an American company undertook major financial investment, and relocated. This move added extra space for a product research and development (R&D) facility. It allowed the opportunity to develop new advanced technology products that met the pharmacy's needs for simple, accurate, and cost-effective ways to dispense prescriptions safely.
Pictured here is an early American type of integrated counter and packaging device. This machine was a third generation step in the evolution of pharmacy automated devices. Later models held pre-counted containers of commonly-prescribed medications.
Global variations
In the EU member states legislation was introduced in 1998 which had a major effect on UK Pharmacy operations. It effectively prohibited the use of tablet counters for counting and dispensing bulk packaged tablets. Both usage and sales of the machines in the UK declined rapidly as a result of the introduction of blister packaging for medicines.
Current state of the industry
A tablet counter has become a standard in more than 30,000 sites in 35 countries (as of 2010) (including many non-pharmacy sites, such as manufacturing facilities that use a counting machine as a check for small items).
During the 1990s through 2012, numerous new pharmacy automation products came to market. During this timeframe, counting technologies, robotics, workflow management software, and interactive voice recognition (IVR) systems for retail (both chain and independent), outpatient, government, and closed-door pharmacies (mail order and central fill) were all introduced. Additionally, the concept of scalability - of migrating from an entry-level product to the next level of automation (e.g., counting technology to robotics) - was introduced and subsequently launched a new product line in 1997.
Pharmacists everywhere are making the switch to automation for its increased speed, greater accuracy, and better security. As the industry evolves and customer expectations grow, automation is becoming less of a luxury and more of a necessity. Especially for independent pharmacies, automation is now a means of keeping up with the competition of large chain pharmacies.
Technological changes and design improvements
Constant developments in technology make the dispensing of prescription medications safer, more accurate and more efficient.
In America, in 2008, "next-generation" counting and verification systems were introduced. Based on the counting technology employed in preceding models, later machines included the ability to help the pharmacy operate more effectively, being equipped with a new computer interface to a pharmacy management system along with workflow and inventory software. They also included "checks and balances" to ensure the technician and pharmacist were dispensing the correct medication for each patient. This is important to record correctly when dealing with controlled substances like narcotics. This was a step toward verifying 100% of the prescriptions dispensed by pharmacy staff.
In America, in 2009, further advanced counters were designed that included the ability to dispense hands-free – a feature that many operators had desired. This allowed pharmacies to automate their most commonly dispensed medications via calibrated cassettes. Thirty of a pharmacy's common medications would now be dispensed automatically. Another new model doubled that throughput via an enclosed robotic mechanism. Robotics had been employed in pharmacies since the mid-1990s, but later machines dispense and label filled patient vials in a comparatively tiny space (about nine square feet of floor space). These newer technologies allowed pharmacy staff to confidently dispense hundreds of prescriptions per day and still be able to manage the many functions of a busy community pharmacy. This would increase the number of patients that are able to be served each day.
Other pharmacy-dispensing concerns besides counting
The primary purpose of a tablet counter (also known as a pill counter or drug counter) is to accurately count prescription medications in tablet or capsule form to aid the requirement for patient medication safety, to increase efficiency and reduce costs for the typical pharmacy. Newer versions of this counting device include advanced software to continue to improve safety for the patient who is receiving the prescription, ensuring that the pharmacy staff dispense the right medication at correct dosage strength for the right patient. (see also medication safety). Today's pharmacy industry recognizes the need for heightened vigilance against medication errors across the entire spectrum. A wealth of research has been conducted regarding the prevalence of medication errors and the ability of technology to decrease or eliminate such errors. (See the March 2003 landmark study by Auburn University's Center for Pharmacy Operations and Designs). Prescription dispensing safety and accuracy in the pharmacy are an essential part of ensuring the right patient gets the right medication at the right dosage. A trend in pharmacy is to place a greater reliance on technology and pharmacy automation to minimize the chance of human error and speed up the process of dispensing. Pharmacy management generally sees technology as a solution to industry challenges like staffing shortages, prescription volume increases, long and hectic work hours and complicated insurance reimbursement procedures. Pharmacies employ advanced technologies that help to handle an ever-escalating number of prescriptions, while making dispensing safer and more precise.
Cross-contamination
Perhaps the most controversial debate surrounding the use of pharmacy automated tablet counters is the impact of cross-contamination. Automated tablet-counting machines (sometimes better known as "pill counters") are designed to sort, count, and dispense drugs at high speeds for quick counting transactions. When more than one drug is exposed to the same surface, leaving seemingly unnoticeable traces of residues, the issue of cross-contamination arises. While one tablet is unlikely to leave enough residues to cause harm to a future patient, the risk of contamination increases sevenfold as the machine processes thousands of varying pills throughout the course of a day. A typical pharmacy may on average process under 100 scripts per day, while other larger dispensaries can accommodate a few hundred scripts in that amount of time.
Thoroughly cleaning pharmacy automated tablet counters is recommended to prevent the chance of cross-contamination. This method is widely preached by manufacturers of these machines, but is not always easily followed. Performing an efficient cleaning of an automated tablet counter significantly increases the amount of time spent on counts by users. Many critics argue that these problems can easily be prevented by taking the proper precautions and following all cleaning procedures, but the increase in time spent makes it hard to justify such an investment. The National Institute for Occupational Safety and Health (NIOSH) considers a drug to be hazardous if it exhibits one or more of the following characteristics in humans or animals: carcinogenicity, teratogenicity or developmental toxicity, reproductive toxicity, organ toxicity at low doses, genotoxicity, or structure and toxicity profiles of new drugs that mimic existing hazardous drugs. Specialty pharmacies that stock and dispense medications on the NIOSH list of Hazardous Drugs must follow strict standards. Community pharmacies typically handle some Hazardous Drugs; therefore, using pharmacy automation for Hazardous Drugs generally follows this guideline: pharmacy staff use an exception tray and spatula to count any Hazardous Drug, and decontaminate the tray and spatula immediately following. Pharmacy robots should not store any Hazardous Drugs for chance of pill-grinding and dust-generation. All other medications dispensed in the pharmacy that are not Hazardous Drugs can be counted with pharmacy automation safely if the manufacturer's cleaning directions are followed.
Future development
Various companies are currently developing a range of remote tablet counters, verification systems and pharmacy automation components to improve the accuracy, safety, speed and efficiency of medication dispensing. These products are used in retail, mail order, hospital outpatient and specialty pharmacies, as well as in industrial settings such as manufacturing and component factories. These advanced systems will continue to provide accurate counting without the need for adjustment or calibration when counting in different production environments.
Pictured here is a modern (2010) remote controlled tablet hopper mechanism for use with bulk packaged individual tablets or capsules. In the UK these items are more suited to Hospital Pharmacies, where the issue of E.U. blister packaging regulations relating to medicine packaging does not apply. Also pictured is another version of an automated machine that does not allow unauthorised interference to the internal store of drugs. (A useful security feature in a large pharmacy with public access.)
Repackaging process and stability data
The transient or definitive displacement of the solid oral form from its original packaging atmosphere into a repackaging process, sometimes automated, is likely to play a primary role in the pharmaceutical controversy in some countries. However, the solid oral dose is to be repackaged in materials with defined quality. Considering these data, a review of the literature on the conditions for repackaged drug stability according to different international guidelines is presented by F. Lagrange.
See also
Automated dispensing cabinet
Medical technology
Medical robots
Remote dispensing
References
External links
Royal Pharmaceutical Society Victorian pharmacy history
Victorian Pharmacy—BBC television series 2010
Online search tool for pharmaceutical citations / references
Automation
Medical robotics
Pharmacies | Pharmacy automation | [
"Engineering",
"Biology"
] | 2,875 | [
"Control engineering",
"Medical robotics",
"Automation",
"Medical technology"
] |
3,819,635 | https://en.wikipedia.org/wiki/Peak%20inverse%20voltage | The peak inverse voltage is either the specified maximum voltage that a diode rectifier can block, or, alternatively, the maximum voltage that a rectifier needs to block in a given circuit. The peak inverse voltage increases with an increase in temperature and decreases with a decrease in temperature.
In semiconductor diodes
In semiconductor diodes, peak reverse voltage or peak inverse voltage is the maximum voltage that a diode can withstand in the reverse direction without breaking down or avalanching.
If this voltage is exceeded the diode may be destroyed. Diodes must have a peak inverse voltage rating that is higher than the maximum voltage that will be applied to them in a given application.
In rectifier applications
For rectifier applications, peak inverse voltage (PIV) or peak reverse voltage (PRV) is the maximum value of reverse voltage which occurs at the peak of the input cycle when the diode is reverse-biased. The portion of the sinusoidal waveform which repeats or duplicates itself is known as the cycle. The part of the cycle above the horizontal axis is called the positive half-cycle, or alternation; the part of the cycle below the horizontal axis is called the negative alternation. With reference to the amplitude of the cycle, the peak inverse voltage is specified as the maximum negative value of the sine-wave within a cycle's negative alternation.
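As a rough worked example under idealized assumptions (ideal diodes, resistive load, no filter capacitor, sinusoidal input; the values are illustrative): a half-wave rectifier fed from a 230 V RMS supply must use a diode rated above the input peak of √2 × 230 ≈ 325 V, while each diode in a full-wave center-tapped rectifier must block roughly twice the peak of one half-winding.

import math

def peak_from_rms(v_rms):
    """Peak of a sinusoid from its RMS value."""
    return math.sqrt(2) * v_rms

def required_piv(v_rms, topology):
    """Minimum PIV for ideal-diode rectifiers with a resistive load (no filter capacitor).

    For the center-tapped case, v_rms is the RMS voltage of one half-winding.
    """
    vm = peak_from_rms(v_rms)
    piv = {
        "half-wave": vm,                 # the single diode blocks one full negative peak
        "full-wave-center-tap": 2 * vm,  # each diode blocks both half-winding peaks in series
        "bridge": vm,                    # each reverse-biased diode blocks one peak
    }
    return piv[topology]

for topo in ("half-wave", "full-wave-center-tap", "bridge"):
    print(topo, round(required_piv(230, topo)), "V")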
References
Diodes
Electrical parameters | Peak inverse voltage | [
"Engineering"
] | 290 | [
"Electrical engineering",
"Electrical parameters"
] |
3,821,872 | https://en.wikipedia.org/wiki/Friendly%20number | In number theory, friendly numbers are two or more natural numbers with a common abundancy index, the ratio between the sum of divisors of a number and the number itself. Two numbers with the same "abundancy" form a friendly pair; n numbers with the same abundancy form a friendly n-tuple.
Being mutually friendly is an equivalence relation, and thus induces a partition of the positive naturals into clubs (equivalence classes) of mutually friendly numbers.
A number that is not part of any friendly pair is called solitary.
The abundancy index of n is the rational number σ(n) / n, in which σ denotes the sum of divisors function. A number n is a friendly number if there exists m ≠ n such that σ(m) / m = σ(n) / n. Abundancy is not the same as abundance, which is defined as σ(n) − 2n.
Abundancy may also be expressed as σ−1(n), where σk denotes the divisor function with σk(n) equal to the sum of the k-th powers of the divisors of n.
The numbers 1 through 5 are all solitary. The smallest friendly number is 6, forming for example, the friendly pair 6 and 28 with abundancy σ(6) / 6 = (1+2+3+6) / 6 = 2, the same as σ(28) / 28 = (1+2+4+7+14+28) / 28 = 2. The shared value 2 is an integer in this case but not in many other cases. Numbers with abundancy 2 are also known as perfect numbers. There are several unsolved problems related to the friendly numbers.
In spite of the similarity in name, there is no specific relationship between the friendly numbers and the amicable numbers or the sociable numbers, although the definitions of the latter two also involve the divisor function.
Examples
As another example, 30 and 140 form a friendly pair, because 30 and 140 have the same abundancy:
The numbers 2480, 6200 and 40640 are also members of this club, as they each have an abundancy equal to 12/5.
For an example of odd numbers being friendly, consider 135 and 819 (abundancy 16/9 (deficient)). There are also cases of even numbers being friendly to odd numbers, such as 42, 3472, 56896, ... and 544635 (abundancy of 16/7). The odd friend may be less than the even one, as in 84729645 and 155315394 (abundancy of 896/351), or in 6517665, 14705145 and 2746713837618 (abundancy of 64/27).
A square number can be friendly, for instance both 693479556 (the square of 26334) and 8640 have abundancy 127/36 (this example is credited to Dean Hickerson).
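These abundancy computations are easy to reproduce. The brute-force sketch below (not an efficient algorithm, just an illustration) evaluates σ(n)/n as an exact fraction and confirms the pairs and clubs quoted in this section.

from fractions import Fraction

def sigma(n):
    """Sum of the divisors of n, by trial division up to sqrt(n)."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def abundancy(n):
    """Abundancy index sigma(n)/n as an exact rational."""
    return Fraction(sigma(n), n)

print(abundancy(6), abundancy(28))                              # both 2 (perfect numbers)
print({abundancy(n) for n in (30, 140, 2480, 6200, 40640)})     # a single value, 12/5
print(abundancy(135), abundancy(819))                           # both 16/9
print(abundancy(693479556) == abundancy(8640))                  # True: both 127/36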
Status for small n
In the table below, blue numbers are proven friendly, red numbers are proven solitary, and numbers n such that n and σ(n) are coprime are left uncolored, though they are known to be solitary. Other numbers have unknown status and are yellow.
Solitary numbers
A number that belongs to a singleton club, because no other number is friendly with it, is a solitary number. All prime numbers are known to be solitary, as are powers of prime numbers. More generally, if the numbers n and σ(n) are coprime – meaning that the greatest common divisor of these numbers is 1, so that σ(n)/n is an irreducible fraction – then the number n is solitary. For a prime number p we have σ(p) = p + 1, which is coprime with p.
No general method is known for determining whether a number is friendly or solitary.
Is 10 a solitary number?
The smallest number whose classification is unknown is 10; it is conjectured to be solitary. If it is not, its smallest friend is at least . J. Ward proved that any positive integer other than 10 with abundancy index 9/5 must be a square with at least six distinct prime factors, the smallest being 5. Further, at least one of the prime factors must be congruent to 1 modulo 3 and appear with an exponent congruent to 2 modulo 6 in the prime power factorization of . In subsequent work, necessary upper bounds were proposed for the second, third and fourth smallest prime divisors of friends of 10: if is a friend of 10 and if are the second, third, fourth smallest prime divisors of respectively, then
where is the number of distinct prime divisors of and is the ceiling function. Further, they proposed reasonable upper bounds for all prime divisors of friends of 10 in , for any smallest r-th prime divisor of , we have
where
, ,
where is the i-th prime number and .
Small numbers with a relatively large smallest friend do exist: for instance, 24 is friendly, with its smallest friend 91,963,648.
Large clubs
It is an open problem whether there are infinitely large clubs of mutually friendly numbers. The perfect numbers form a club, and it is conjectured that there are infinitely many perfect numbers (at least as many as there are Mersenne primes), but no proof is known. There are clubs with more known members: in particular, those formed by multiply perfect numbers, which are numbers whose abundancy is an integer. Although some are known to be quite large, clubs of multiply perfect numbers (excluding the perfect numbers themselves) are conjectured to be finite.
Asymptotic density
Every pair a, b of friendly numbers gives rise to a positive proportion of all natural numbers being friendly (but in different clubs), by considering pairs na, nb for multipliers n with gcd(n, ab) = 1. For example, the "primitive" friendly pair 6 and 28 gives rise to friendly pairs 6n and 28n for all n that are congruent to 1, 5, 11, 13, 17, 19, 23, 25, 29, 31, 37, or 41 modulo 42.
This shows that the natural density of the friendly numbers (if it exists) is positive.
Anderson and Hickerson proposed that the density should in fact be 1 (or equivalently that the density of the solitary numbers should be 0). According to the MathWorld article on Solitary Number (see References section below), this conjecture has not been resolved, although Pomerance thought at one point he had disproved it.
Notes
References
Grime, James. A video about the number 10. Numberphile.
Divisor function
Integer sequences
Unsolved problems in number theory | Friendly number | [
"Mathematics"
] | 1,417 | [
"Sequences and series",
"Unsolved problems in mathematics",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Unsolved problems in number theory",
"Combinatorics",
"Mathematical problems",
"Numbers",
"Number theory"
] |
3,822,979 | https://en.wikipedia.org/wiki/Antibonding%20molecular%20orbital | In theoretical chemistry, an antibonding orbital is a type of molecular orbital that weakens the chemical bond between two atoms and helps to raise the energy of the molecule relative to the separated atoms. Such an orbital has one or more nodes in the bonding region between the nuclei. The density of the electrons in the orbital is concentrated outside the bonding region and acts to pull one nucleus away from the other and tends to cause mutual repulsion between the two atoms. This is in contrast to a bonding molecular orbital, which has a lower energy than that of the separate atoms, and is responsible for chemical bonds.
Diatomic molecules
Antibonding molecular orbitals (MOs) are normally higher in energy than bonding molecular orbitals. Bonding and antibonding orbitals form when atoms combine into molecules. If two hydrogen atoms are initially far apart, they have identical atomic orbitals. However, as the spacing between the two atoms becomes smaller, the electron wave functions begin to overlap. The Pauli exclusion principle prohibits any two electrons (e-) in a molecule from having the same set of quantum numbers. Therefore each original atomic orbital of the isolated atoms (for example, the ground state energy level, 1s) splits into two molecular orbitals belonging to the pair, one lower in energy than the original atomic level and one higher. The orbital which is in a lower energy state than the orbitals of the separate atoms is the bonding orbital, which is more stable and promotes the bonding of the two H atoms into H2. The higher-energy orbital is the antibonding orbital, which is less stable and opposes bonding if it is occupied. In a molecule such as H2, the two electrons normally occupy the lower-energy bonding orbital, so that the molecule is more stable than the separate H atoms.
A molecular orbital becomes antibonding when there is less electron density between the two nuclei than there would be if there were no bonding interaction at all. When a molecular orbital changes sign (from positive to negative) at a nodal plane between two atoms, it is said to be antibonding with respect to those atoms. Antibonding orbitals are often labelled with an asterisk (*) on molecular orbital diagrams.
In homonuclear diatomic molecules, σ* (sigma star) antibonding orbitals have no nodal planes passing through the two nuclei, like sigma bonds, and π* (pi star) orbitals have one nodal plane passing through the two nuclei, like pi bonds. The Pauli exclusion principle dictates that no two electrons in an interacting system may have the same quantum state. If the bonding orbitals are filled, then any additional electrons will occupy antibonding orbitals. This occurs in the He2 molecule, in which both the 1sσ and 1sσ* orbitals are filled. Since the antibonding orbital is more antibonding than the bonding orbital is bonding, the molecule has a higher energy than two separated helium atoms, and it is therefore unstable.
Polyatomic molecules
In molecules with several atoms, some orbitals may be delocalized over more than two atoms. A particular molecular orbital may be bonding with respect to some adjacent pairs of atoms and antibonding with respect to other pairs. If the bonding interactions outnumber the antibonding interactions, the MO is said to be bonding, whereas, if the antibonding interactions outnumber the bonding interactions, the molecular orbital is said to be antibonding.
For example, butadiene has pi orbitals which are delocalized over all four carbon atoms. There are two bonding pi orbitals which are occupied in the ground state: π1 is bonding between all carbons, while π2 is bonding between C1 and C2 and between C3 and C4, and antibonding between C2 and C3. There are also antibonding pi orbitals with two and three antibonding interactions as shown in the diagram; these are vacant in the ground state, but may be occupied in excited states.
Similarly benzene with six carbon atoms has three bonding pi orbitals and three antibonding pi orbitals. Since each carbon atom contributes one electron to the π-system of benzene, there are six pi electrons which fill the three lowest-energy pi molecular orbitals (the bonding pi orbitals).
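A simple Hückel calculation makes the bonding/antibonding classification of such delocalized π orbitals concrete. The sketch below (parameter choices and the node-counting heuristic are illustrative assumptions) diagonalizes the 4×4 Hückel matrix for butadiene; the two lowest orbitals, with zero and one sign change between adjacent carbons, are the occupied bonding π orbitals, while the two highest, with two and three sign changes, are the vacant antibonding ones.

import numpy as np

# Hueckel pi Hamiltonian for butadiene: Coulomb integral alpha on the diagonal,
# resonance integral beta between bonded neighbors (alpha = 0, beta = -1 here,
# so more negative eigenvalues correspond to more strongly bonding orbitals).
alpha, beta = 0.0, -1.0
H = np.array([[alpha, beta,  0.0,   0.0 ],
              [beta,  alpha, beta,  0.0 ],
              [0.0,   beta,  alpha, beta],
              [0.0,   0.0,   beta,  alpha]])

energies, orbitals = np.linalg.eigh(H)   # ascending order: pi1, pi2, pi3*, pi4*

for i, (e, c) in enumerate(zip(energies, orbitals.T), start=1):
    nodes = int(np.sum(np.diff(np.sign(c)) != 0))   # sign changes between adjacent carbons
    kind = "bonding" if nodes <= 1 else "antibonding"
    print(f"pi{i}: energy = {e:+.3f}, nodes between carbons = {nodes}, net {kind}")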
Antibonding orbitals are also important for explaining chemical reactions in terms of molecular orbital theory. Roald Hoffmann and Kenichi Fukui shared the 1981 Nobel Prize in Chemistry for their work and further development of qualitative molecular orbital explanations for chemical reactions.
See also
Bonding molecular orbital
Valence and conduction bands
Valence bond theory
Molecular orbital theory
Conjugated system
References
Further reading
Orchin, M. Jaffe, H.H. (1967) The Importance of Antibonding Orbitals. Houghton Mifflin. ISBN B0006BPT5O
Chemical bonding | Antibonding molecular orbital | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,001 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
3,823,804 | https://en.wikipedia.org/wiki/Ring%20flip | In organic chemistry, a ring flip (also known as a ring inversion or ring reversal) is the interconversion of cyclic conformers that have equivalent ring shapes (e.g., from a chair conformer to another chair conformer) that results in the exchange of nonequivalent substituent positions. The overall process generally takes place over several steps, involving coupled rotations about several of the molecule's single bonds, in conjunction with minor deformations of bond angles. Most commonly, the term is used to refer to the interconversion of the two chair conformers of cyclohexane derivatives, which is specifically referred to as a chair flip, although other cycloalkanes and inorganic rings undergo similar processes.
Chair flip
As stated above, a chair flip is a ring inversion specifically of cyclohexane (and its derivatives) from one chair conformer to another, often to reduce steric strain. The term "flip" is misleading, because the direction of each carbon remains the same; what changes is the orientation. A conformation is a unique structural arrangement of atoms, in particular one achieved through the rotation of single bonds. A conformer is a conformational isomer, the term being a blend of the two words.
Cyclohexane
There exist many different conformations for cyclohexane, such as chair, boat, and twist-boat, but the chair conformation is the most commonly observed state for cyclohexanes because it requires the least amount of energy. The chair conformation minimizes both angle strain and torsional strain by having all carbon–carbon–carbon bond angles at 110.9° and all hydrogens staggered from one another.
The molecular motions involved in a chair flip are detailed in the figure on the right: The half-chair conformation (D, 10.8 kcal/mol, C2 symmetry) is the energy maximum when proceeding from the chair conformer (A, 0 kcal/mol reference, D3d symmetry) to the higher energy twist-boat conformer (B, 5.5 kcal/mol, D2 symmetry). The boat conformation (C, 6.9 kcal/mol, C2v symmetry) is a local energy maximum for the interconversion of the two mirror image twist-boat conformers, the second of which is converted to the other chair conformation through another half-chair. At the end of the process, all axial positions have become equatorial and vice versa. The overall barrier of 10.8 kcal/mol corresponds to a rate constant of about 10^5 s^−1 at room temperature.
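That rate constant can be sanity-checked with the Eyring equation; the sketch below is a rough estimate assuming a transmission coefficient of 1 and a temperature of 298 K, neither of which is specified in the text.

```python
import math

# Rough check of the quoted ring-flip rate with the Eyring equation,
# k = (k_B*T/h) * exp(-dG/(R*T)), assuming a transmission coefficient of 1.
k_B = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J*s
R = 8.314             # gas constant, J/(mol*K)
T = 298.0             # assumed room temperature, K

dG = 10.8 * 4184.0    # barrier of 10.8 kcal/mol converted to J/mol

k = (k_B * T / h) * math.exp(-dG / (R * T))
print(f"estimated rate constant: {k:.2e} s^-1")   # on the order of 1e5 s^-1
```

The result comes out near 10^5 s^−1, consistent with the figure quoted above.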
Note that the twist-boat (D2) conformer and the half-chair (C2) transition state are in chiral point groups and are therefore chiral molecules. In the figure, the two depictions of B and two depictions of D are pairs of enantiomers.
As a consequence of the chair flip, the axially-substituted and equatorially-substituted conformers of a molecule like chlorocyclohexane cannot be isolated at room temperature. However, in some cases, the isolation of individual conformers of substituted cyclohexane derivatives has been achieved at low temperatures (–150 °C).
Axial and equatorial positions
As noted above, by transitioning from one chair conformer to another, all axial positions become equatorial and all equatorial positions become axial. Substituent groups in equatorial positions roughly follow along the equator of the cyclohexane ring and are perpendicular to the axis, while substituents in axial positions roughly follow the imaginary axis of the carbon ring and are perpendicular to the equator.
Diaxial interactions (or axial–axial interactions) are the steric strain between an axial substituent and another axial group, typically a hydrogen, on the same side of a chair-conformation ring. The interaction is labeled by the carbon numbers the groups come from; a 1,3-diaxial interaction occurs between the atoms connected to the first and third carbons. The more such interactions, the more strain on the molecule, and the conformations with the most strain are the least likely to be observed. An example is cyclopropane, which, because of its planar geometry, has six fully eclipsed carbon–hydrogen bonds, giving a strain of 116 kJ/mol (27.7 kcal/mol). Strain is also decreased when the carbon–carbon bond angles are close to, or at, the preferred bond angle of 109.5°, which is why a ring of six tetrahedral carbons typically has lower strain than most other rings.
Examples
Cyclohexane is a prototype for low-energy degenerate ring flipping. Two 1H NMR signals should be observed in principle, corresponding to axial and equatorial protons. However, due to the cyclohexane chair flip, only one signal is seen for a solution of cyclohexane at room temperature, as the axial and equatorial protons rapidly interconvert relative to the NMR time scale. The coalescence temperature at 60 MHz is ca. –60 °C. As a consequence of the chair flip, the axially-substituted and equatorially-substituted conformers of a molecule like chlorocyclohexane cannot be isolated at room temperature.
However, in some cases, the isolation of individual conformers of substituted cyclohexane derivatives has been achieved at low temperatures (–150 °C).
Most compounds with nonplanar rings engage in degenerate ring flipping. One well-studied example is titanocene pentasulfide, where the inversion barrier is high relative to cyclohexane's. Hexamethylcyclotrisiloxane on the other hand is subject to a very low barrier.
Bicycloalkanes are alkanes containing two rings that are connected to each other by sharing two carbon atoms. Orientation within bicycloalkanes is dependent on the cis or trans orientation of the hydrogen shared by the different rings instead of the methyl groups present in the rings.
Tetrodotoxin is one of the world's most potent toxins. It is made up of multiple six-membered rings set in chair conformations, with each ring but one containing an atom other than carbon.
See also
Cyclohexane conformation
Conformational isomerism
References
External links
Conformations of Alkanes & Cycloalkanes
Molecular geometry
Stereochemistry | Ring flip | [
"Physics",
"Chemistry"
] | 1,322 | [
"Molecules",
"Molecular geometry",
"Stereochemistry",
"Space",
"nan",
"Spacetime",
"Matter"
] |
3,823,863 | https://en.wikipedia.org/wiki/P-y%20method | In geotechnical civil engineering, the p–y method is a method of analyzing the ability of deep foundations to resist loads applied in the lateral direction. This method uses the finite difference method and p–y graphs to find a solution. P–y graphs are graphs which relate the force applied to soil to the lateral deflection of the soil. In essence, non-linear springs are attached to the foundation in place of the soil. The springs can be represented by the following equation:
p = k·y
where k is the non-linear spring stiffness defined by the p–y curve, y is the deflection of the spring, and p is the force applied to the spring.
The p–y curves vary depending on soil type.
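To illustrate how such a non-linear spring behaves, the sketch below evaluates a generic hyperbolic-tangent p–y curve and the secant stiffness it implies; the curve shape and the parameter values (p_ult, k_initial) are assumptions chosen only for illustration, not values taken from any particular p–y formulation or design code.

```python
import math

def p_y_spring(y, p_ult, k_initial):
    """Soil reaction p for lateral deflection y, using a generic
    hyperbolic-tangent p-y curve: nearly linear at small y, capped at p_ult."""
    return p_ult * math.tanh(k_initial * y / p_ult)

def secant_stiffness(y, p_ult, k_initial):
    """Secant stiffness k(y) = p/y, used as the non-linear spring constant."""
    return k_initial if y == 0.0 else p_y_spring(y, p_ult, k_initial) / y

# Example: the spring softens as deflection grows (illustrative SI units).
for y in (0.001, 0.01, 0.05):
    p = p_y_spring(y, p_ult=150e3, k_initial=20e6)
    k = secant_stiffness(y, p_ult=150e3, k_initial=20e6)
    print(f"y = {y:.3f} m -> p = {p / 1e3:6.1f} kN/m, k = {k / 1e6:5.2f} MN/m^2")
```

In an actual p–y analysis, a secant stiffness of this kind is updated iteratively at each node of the finite difference model until the pile deflections and soil reactions are consistent.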
The available geotechnical engineering software programs for the p–y method include FB-MultiPier by the Bridge Software Institute, DeepFND by Deep Excavation LLC, PileLAT by Innovative Geotechnics, LPile by Ensoft, and PyPile by Yong Technology.
References
Salgado, R. (2007). "The Engineering of Foundations." McGraw-Hill, in press.
Hasani, H., Golafshani, A., Estekanchi, H. Seismic performance evaluation of jacket-type offshore platforms using endurance time method considering soil-pile-superstructure interaction. Scientia Iranica, 2017; 24(4): 1843-1854. doi: 10.24200/sci.2017.4275 http://scientiairanica.sharif.edu/article_4275_f79d8b4fdd0cc8d159b91b1a3b968585.pdf
Soil mechanics | P-y method | [
"Physics",
"Engineering"
] | 355 | [
"Soil mechanics",
"Civil engineering",
"Applied and interdisciplinary physics",
"Civil engineering stubs"
] |
3,824,261 | https://en.wikipedia.org/wiki/Speed%20Dependent%20Damping%20Control | Speed Dependent Damping Control (also called SD²C) was an automatic damper system installed on late-1980s and early-1990s Cadillac automobiles. This system firmed up the suspension at 25 mph (40 km/h) and again at 60 mph (97 km/h). The firmest setting was also used when starting from a standstill until 5 mph (8 km/h).
Applications:
1989–1992 Cadillac Allanté
Computer Command Ride
The semi-active suspension system was updated as Computer Command Ride in 1991. This new system added acceleration, braking rate, and lateral acceleration to the existing vehicle-speed metric.
1991– Cadillac Fleetwood
1991– Cadillac Eldorado
1991– Cadillac Seville
1991– Cadillac De Ville (optional, standard for 1993)
1992– Oldsmobile Achieva SCX W41
References
Automotive suspension technologies
Automotive technology tradenames
Vehicle safety technologies
Auto parts
Mechanical power control
Shock absorbers
Cadillac | Speed Dependent Damping Control | [
"Physics"
] | 186 | [
"Mechanics",
"Mechanical power control"
] |
3,824,441 | https://en.wikipedia.org/wiki/Vitallium | Vitallium is an alloy of 65% cobalt, 30% chromium, 5% molybdenum, and other substances. The alloy is used in dentistry and artificial joints, because of its resistance to corrosion. It is also used for components of turbochargers because of its thermal resistance. Vitallium was developed by Albert W. Merrick for the Austenal Laboratories in 1932.
In 2016 Norman Sharp, a 91-year-old British man, was recognised as having the world's oldest hip replacement implants. The two Vitallium implants were implanted in November 1948 at the Royal National Orthopaedic Hospital, under the newly formed NHS. The 67-year-old implants had such an unusually long life, partly because they had not required the typical replacement of such implants, but also because of Sharp's young age of 23 when they were implanted, owing to a childhood case of septic arthritis.
For high-temperature use in engines, particularly turbochargers, the first alloy used was Haynes Stellite No. 21, similar to Vitallium. This was suggested by the British engineer, and denture wearer, S.D. Heron during World War II. Although the characteristics of the material made it an obvious candidate for turbocharger blades, it was thought impossible to cast it to the precision needed. Heron demonstrated that it could be by showing his Vitallium dentures.
References
External links
Articles on Vitallium from DentalArticles.com
NASA article mentioning Vitallium in turbochargers
Cobalt alloys
Chromium alloys
Biomaterials | Vitallium | [
"Physics",
"Chemistry",
"Biology"
] | 327 | [
"Biomaterials",
"Alloy stubs",
"Biotechnology stubs",
"Materials",
"Alloys",
"Medical technology stubs",
"Medical technology",
"Chromium alloys",
"Matter",
"Cobalt alloys"
] |
3,824,661 | https://en.wikipedia.org/wiki/High%20availability | High availability (HA) is a characteristic of a system that aims to ensure an agreed level of operational performance, usually uptime, for a higher than normal period.
There is now more dependence on these systems as a result of modernization. For instance, in order to carry out their regular daily tasks, hospitals and data centers need their systems to be highly available. Availability refers to the ability of the user community to obtain a service or good, or to access the system, whether to submit new work, update or alter existing work, or collect the results of previous work. If a user cannot access the system, it is – from the user's point of view – unavailable. Generally, the term downtime is used to refer to periods when a system is unavailable.
Resilience
High availability is a property of network resilience, the ability to "provide and maintain an acceptable level of service in the face of faults and challenges to normal operation." Threats and challenges for services can range from simple misconfiguration over large scale natural disasters to targeted attacks. As such, network resilience touches a very wide range of topics. In order to increase the resilience of a given communication network, the probable challenges and risks have to be identified and appropriate resilience metrics have to be defined for the service to be protected.
The importance of network resilience is continuously increasing, as communication networks are becoming a fundamental component in the operation of critical infrastructures. Consequently, recent efforts focus on interpreting and improving network and computing resilience with applications to critical infrastructures. As an example, one can consider as a resilience objective the provisioning of services over the network, instead of the services of the network itself. This may require coordinated response from both the network and from the services running on top of the network.
These services include:
supporting distributed processing
supporting network storage
maintaining service of communication services such as
video conferencing
instant messaging
online collaboration
access to applications and data as needed
Resilience and survivability are interchangeably used according to the specific context of a given study.
Principles
There are three principles of systems design in reliability engineering that can help achieve high availability.
Elimination of single points of failure. This means adding or building redundancy into the system so that failure of a component does not mean failure of the entire system.
Reliable crossover. In redundant systems, the crossover point itself tends to become a single point of failure. Reliable systems must provide for reliable crossover.
Detection of failures as they occur. If the two principles above are observed, then a user may never see a failure – but the maintenance activity must.
Scheduled and unscheduled downtime
A distinction can be made between scheduled and unscheduled downtime. Typically, scheduled downtime is a result of maintenance that is disruptive to system operation and usually cannot be avoided with a currently installed system design. Scheduled downtime events might include patches to system software that require a reboot or system configuration changes that only take effect upon a reboot. In general, scheduled downtime is usually the result of some logical, management-initiated event. Unscheduled downtime events typically arise from some physical event, such as a hardware or software failure or environmental anomaly. Examples of unscheduled downtime events include power outages, failed CPU or RAM components (or possibly other failed hardware components), an over-temperature related shutdown, logically or physically severed network connections, security breaches, or various application, middleware, and operating system failures.
If users can be warned away from scheduled downtimes, then the distinction is useful. But if the requirement is for true high availability, then downtime is downtime whether or not it is scheduled.
Many computing sites exclude scheduled downtime from availability calculations, assuming that it has little or no impact upon the computing user community. By doing this, they can claim to have phenomenally high availability, which might give the illusion of continuous availability. Systems that exhibit truly continuous availability are comparatively rare and higher priced, and most have carefully implemented specialty designs that eliminate any single point of failure and allow online hardware, network, operating system, middleware, and application upgrades, patches, and replacements. For certain systems, scheduled downtime does not matter, for example, system downtime at an office building after everybody has gone home for the night.
Percentage calculation
Availability is usually expressed as a percentage of uptime in a given year. The following table shows the downtime that will be allowed for a particular percentage of availability, presuming that the system is required to operate continuously. Service level agreements often refer to monthly downtime or availability in order to calculate service credits to match monthly billing cycles. The following table shows the translation from a given availability percentage to the corresponding amount of time a system would be unavailable.
The terms uptime and availability are often used interchangeably but do not always refer to the same thing. For example, a system can be "up" with its services not "available" in the case of a network outage. Or a system undergoing software maintenance can be "available" to be worked on by a system administrator, but its services do not appear "up" to the end user or customer. The subject of the terms is thus important here: whether the focus of a discussion is the server hardware, server OS, functional service, software service/process, or similar, it is only if there is a single, consistent subject of the discussion that the words uptime and availability can be used synonymously.
Five-by-five mnemonic
A simple mnemonic rule states that 5 nines allows approximately 5 minutes of downtime per year. Variants can be derived by multiplying or dividing by 10: 4 nines is 50 minutes and 3 nines is 500 minutes. In the opposite direction, 6 nines is 0.5 minutes (30 sec) and 7 nines is 3 seconds.
"Powers of 10" trick
Another memory trick to calculate the allowed downtime duration for an "n-nines" availability percentage is to use the formula 8.64 × 10^(4−n) seconds per day.
For example, 90% ("one nine") yields the exponent 4 − 1 = 3, and therefore the allowed downtime is 8.64 × 10^3 (8,640) seconds per day.
Also, 99.999% ("five nines") gives the exponent 4 − 5 = −1, and therefore the allowed downtime is 8.64 × 10^−1 (0.864) seconds per day.
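The same rule in code, assuming the 8.64 × 10^(4−n) seconds-per-day form given above and also reporting the equivalent downtime per (non-leap) year:

```python
def allowed_downtime(nines):
    """Allowed downtime for an 'n nines' availability level, using the
    'powers of 10' rule: 8.64 * 10**(4 - n) seconds per day."""
    per_day = 8.64 * 10 ** (4 - nines)
    per_year = per_day * 365
    return per_day, per_year

for n in range(1, 8):
    day, year = allowed_downtime(n)
    print(f"{n} nines: {day:10.3f} s/day  (~{year / 60:8.2f} min/year)")
```

For five nines this gives about 5.3 minutes per year, consistent with the five-by-five mnemonic above.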
"Nines"
Percentages of a particular order of magnitude are sometimes referred to by the number of nines or "class of nines" in the digits. For example, electricity that is delivered without interruptions (blackouts, brownouts or surges) 99.999% of the time would have 5 nines reliability, or class five. In particular, the term is used in connection with mainframes or enterprise computing, often as part of a service-level agreement.
Similarly, percentages ending in a 5 have conventional names, traditionally the number of nines, then "five", so 99.95% is "three nines five", abbreviated 3N5. This is casually referred to as "three and a half nines", but this is incorrect: a 5 is only a factor of 2, while a 9 is a factor of 10, so a 5 is 0.3 nines (see the formula below): 99.95% availability is 3.3 nines, not 3.5 nines. More simply, going from 99.9% availability to 99.95% availability is a factor of 2 (0.1% to 0.05% unavailability), but going from 99.95% to 99.99% availability is a factor of 5 (0.05% to 0.01% unavailability), over twice as much.
A formulation of the class of 9s c based on a system's unavailability x = 1 − availability would be
c = ⌊−log10(x)⌋ (cf. Floor and ceiling functions).
A similar measurement is sometimes used to describe the purity of substances.
In general, the number of nines is not often used by a network engineer when modeling and measuring availability because it is hard to apply in formulas. More often, unavailability expressed as a probability (like 0.00001), or a downtime per year, is quoted. Availability specified as a number of nines is often seen in marketing documents. The use of the "nines" has been called into question, since it does not appropriately reflect that the impact of unavailability varies with its time of occurrence. For large numbers of 9s, the "unavailability" index (measure of downtime rather than uptime) is easier to handle. For example, this is why an "unavailability" rather than availability metric is used in hard disk or data link bit error rates.
Sometimes the humorous term "nine fives" (55.5555555%) is used to contrast with "five nines" (99.999%), though this is not an actual goal, but rather a sarcastic reference to something totally failing to meet any reasonable target.
Measurement and interpretation
Availability measurement is subject to some degree of interpretation. A system that has been up for 365 days in a non-leap year might have been eclipsed by a network failure that lasted for 9 hours during a peak usage period; the user community will see the system as unavailable, whereas the system administrator will claim 100% uptime. However, given the true definition of availability, the system will be approximately 99.9% available, or three nines (8751 hours of available time out of 8760 hours per non-leap year). Also, systems experiencing performance problems are often deemed partially or entirely unavailable by users, even when the systems are continuing to function. Similarly, unavailability of select application functions might go unnoticed by administrators yet be devastating to users – a true availability measure is holistic.
Availability must be measured to be determined, ideally with comprehensive monitoring tools ("instrumentation") that are themselves highly available. If there is a lack of instrumentation, systems supporting high volume transaction processing throughout the day and night, such as credit card processing systems or telephone switches, are often inherently better monitored, at least by the users themselves, than systems which experience periodic lulls in demand.
An alternative metric is mean time between failures (MTBF).
Closely related concepts
Recovery time (or estimated time of repair (ETR), also known as recovery time objective (RTO)) is closely related to availability; it is the total time required for a planned outage or the time required to fully recover from an unplanned outage. Another metric is mean time to recovery (MTTR). Recovery time could be infinite with certain system designs and failures, i.e. full recovery is impossible. One such example is a fire or flood that destroys a data center and its systems when there is no secondary disaster recovery data center.
Another related concept is data availability, that is the degree to which databases and other information storage systems faithfully record and report system transactions. Information management often focuses separately on data availability, or Recovery Point Objective, in order to determine acceptable (or actual) data loss with various failure events. Some users can tolerate application service interruptions but cannot tolerate data loss.
A service level agreement ("SLA") formalizes an organization's availability objectives and requirements.
Military control systems
High availability is one of the primary requirements of the control systems in unmanned vehicles and autonomous maritime vessels. If the controlling system becomes unavailable, the Ground Combat Vehicle (GCV) or ASW Continuous Trail Unmanned Vessel (ACTUV) would be lost.
System design
On one hand, adding more components to an overall system design can undermine efforts to achieve high availability because complex systems inherently have more potential failure points and are more difficult to implement correctly. While some analysts would put forth the theory that the most highly available systems adhere to a simple architecture (a single, high-quality, multi-purpose physical system with comprehensive internal hardware redundancy), this architecture suffers from the requirement that the entire system must be brought down for patching and operating system upgrades. More advanced system designs allow for systems to be patched and upgraded without compromising service availability (see load balancing and failover). High availability requires less human intervention to restore operation in complex systems; the reason for this being that the most common cause for outages is human error.
High availability through redundancy
On the other hand, redundancy is used to create systems with high levels of availability (e.g. popular ecommerce websites). In this case it is required to have high levels of failure detectability and avoidance of common cause failures.
If redundant parts are used in parallel and have independent failure (e.g. by not being within the same data center), they can exponentially increase the availability and make the overall system highly available. If you have N parallel components, each having availability X, then you can use the following formula:
Availability of parallel components = 1 − (1 − X)^N
So, for example, if each of your components has only 50% availability, by using 10 of them in parallel you can achieve 99.9023% availability.
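The same formula in code, reproducing the ten-component example from the text:

```python
def parallel_availability(x, n):
    """Availability of n independent parallel components,
    each with individual availability x: 1 - (1 - x)**n."""
    return 1.0 - (1.0 - x) ** n

# Ten components that are each only 50% available:
print(f"{parallel_availability(0.5, 10):.4%}")   # 99.9023%
```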
Two kinds of redundancy are passive redundancy and active redundancy.
Passive redundancy is used to achieve high availability by including enough excess capacity in the design to accommodate a performance decline. The simplest example is a boat with two separate engines driving two separate propellers. The boat continues toward its destination despite failure of a single engine or propeller. A more complex example is multiple redundant power generation facilities within a large system involving electric power transmission. Malfunction of single components is not considered to be a failure unless the resulting performance decline exceeds the specification limits for the entire system.
Active redundancy is used in complex systems to achieve high availability with no performance decline. Multiple items of the same kind are incorporated into a design that includes a method to detect failure and automatically reconfigure the system to bypass failed items using a voting scheme. This is used with complex computing systems that are linked. Internet routing is derived from early work by Birman and Joseph in this area. Active redundancy may introduce more complex failure modes into a system, such as continuous system reconfiguration due to faulty voting logic.
Zero downtime system design means that modeling and simulation indicates mean time between failures significantly exceeds the period of time between planned maintenance, upgrade events, or system lifetime. Zero downtime involves massive redundancy, which is needed for some types of aircraft and for most kinds of communications satellites. Global Positioning System is an example of a zero downtime system.
Fault instrumentation can be used in systems with limited redundancy to achieve high availability. Maintenance actions occur during brief periods of downtime only after a fault indicator activates. Failure is only significant if this occurs during a mission critical period.
Modeling and simulation is used to evaluate the theoretical reliability for large systems. The outcome of this kind of model is used to evaluate different design options. A model of the entire system is created, and the model is stressed by removing components. Redundancy simulation involves the N-x criteria. N represents the total number of components in the system. x is the number of components used to stress the system. N-1 means the model is stressed by evaluating performance with all possible combinations where one component is faulted. N-2 means the model is stressed by evaluating performance with all possible combinations where two components are faulted simultaneously.
Reasons for unavailability
A survey among academic availability experts in 2010 ranked reasons for unavailability of enterprise IT systems. All reasons refer to not following best practice in each of the following areas (in order of importance):
Monitoring of the relevant components
Requirements and procurement
Operations
Avoidance of network failures
Avoidance of internal application failures
Avoidance of external services that fail
Physical environment
Network redundancy
Technical solution of backup
Process solution of backup
Physical location
Infrastructure redundancy
Storage architecture redundancy
A book on the factors themselves was published in 2003.
Costs of unavailability
In a 1998 report from IBM Global Services, unavailable systems were estimated to have cost American businesses $4.54 billion in 1996, due to lost productivity and revenues.
See also
Availability
Fault tolerance
High-availability cluster
Overall equipment effectiveness
Reliability, availability and serviceability
Responsiveness
Scalability
Ubiquitous computing
Notes
References
External links
Lecture Notes on Enterprise Computing University of Tübingen
Lecture notes on Embedded Systems Engineering by Prof. Phil Koopman
Uptime Calculator (SLA)
System administration
Quality control
Applied probability
Reliability engineering
Measurement
Computer networks engineering | High availability | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 3,380 | [
"Systems engineering",
"Physical quantities",
"Applied probability",
"Computer engineering",
"Reliability engineering",
"Applied mathematics",
"Quantity",
"Computer networks engineering",
"Measurement",
"Size",
"System administration",
"Information systems"
] |
3,824,919 | https://en.wikipedia.org/wiki/Vapor%20recovery | Vapor (or vapour) recovery is the process of collecting the vapors of gasoline and other fuels, so that they do not escape into the atmosphere. This is often done (and sometimes required by law) at filling stations, to reduce noxious and potentially explosive fumes and pollution.
The negative pressure created by a vacuum pump typically located in the fuel dispenser, combined with the pressure in the car's fuel tank caused by the inflow, is usually used to pull in the vapors. They are drawn in through holes in the side of the nozzle and travel along a return path through another hose.
In 1975 the Vapor Recovery Gasoline Nozzle was introduced as an improvement on the original gasoline nozzle delivery system.
The improved idea was the brainchild of Mark Maine of San Diego, California, where Mark was a gas station attendant at a corporate-owned and -operated Chevron U.S.A. service station. The story is that, after watching the tanker truck driver deliver gasoline to the station using two hoses (one to deliver the gasoline from the tanker, and the other to recover the escaping gasoline vapors back into the emptying tanker), Mark talked with the driver to understand why the two-hose system was used, and also why it was not implemented on the standard delivery nozzle, which allowed vapors to escape from the vehicle gas tank. After the tanker driver left, Mark drew an idea for a Vapor Recovery Gasoline Nozzle and submitted it to the Chevron station management as an employee suggestion.
Mark was included in the design and development of the original vapor recovery gasoline nozzle, which was manufactured and delivered by Huddleson. Mark was also promoted from the Chevron service station to an executive position based at the corporate office in La Habra, California. Mark was appointed as the Vapor Recovery Gasoline Nozzle executive for the two-year implementation program; his duties were to train and oversee the installation and maintenance of the system at 124 Chevron service stations within San Diego County.
Chevron USA lobbied California lawmakers, and the law was changed to require the new, improved Vapor Recovery Gasoline Nozzle delivery system statewide; similar requirements eventually followed across the USA.
In Australia, vapor recovery has become mandatory in major urban areas. There are two categories - VR1 and VR2. VR1 must be installed at fuel stations that pump less than 500,000 litres annually, VR2 must be installed for larger amounts, or as designated by various EPA bodies.
Other industries
Vapor recovery is also used in the chemical process industry to remove and recover vapors from storage tanks. The vapors are usually either environmentally hazardous, or valuable. The process consists of a closed venting system from the storage tank ullage space to a vapor recovery unit which will recover the vapors for return to the process or destroy them, usually by oxidation.
Vapor recovery towers are also used in the oil and gas industry to provide flash gas recovery at near atmospheric pressure without the chance of oxygen ingress at the top of the storage tanks. The ability to create the vapor flash inside the tower often reduces storage tank emissions to less than six tons per year, exempting the tank battery from Quad O reporting requirements.
The identifiable benefit of vapor recovery from an organizational standpoint is that it helps to make the industry more sustainable and creates a pipeline for pumping exhausts back into production.
See also
Automobile emissions control
Onboard refueling vapor recovery
References
External links
Quad O Regulations from EPA.gov (affects the Oil & Gas Industry)
EPA Gas STAR Program use of vapor recovery to capture methane in oil and gas industry
Use of Vapor Recovery Units and Towers from epa.gov
Gases
Gas technologies
Pollution control technologies | Vapor recovery | [
"Physics",
"Chemistry",
"Engineering"
] | 733 | [
"Matter",
"Phases of matter",
"Pollution control technologies",
"Environmental engineering",
"Statistical mechanics",
"Gases"
] |
30,441,390 | https://en.wikipedia.org/wiki/N%20%3D%202%20superconformal%20algebra | In mathematical physics, the 2D N = 2 superconformal algebra is an infinite-dimensional Lie superalgebra, related to supersymmetry, that occurs in string theory and two-dimensional conformal field theory. It has important applications in mirror symmetry. It was introduced by as a gauge algebra of the U(1) fermionic string.
Definition
There are two slightly different ways to describe the N = 2 superconformal algebra, called the N = 2 Ramond algebra and the N = 2 Neveu–Schwarz algebra, which are isomorphic (see below) but differ in the choice of standard basis.
The N = 2 superconformal algebra is the Lie superalgebra with basis of even elements c, Ln, Jn, for n an integer, and odd elements G, G, where (for the Ramond basis) or (for the Neveu–Schwarz basis) defined by the following relations:
c is in the center
If in these relations, this yields the
N = 2 Ramond algebra; while if are half-integers, it gives the N = 2 Neveu–Schwarz algebra. The operators generate a Lie subalgebra isomorphic to the Virasoro algebra. Together with the operators , they generate a Lie superalgebra isomorphic to the super Virasoro algebra,
giving the Ramond algebra if are integers and the Neveu–Schwarz algebra otherwise. When represented as operators on a complex inner product space, is taken to act as multiplication by a real scalar, denoted by the same letter and called the central charge, and the adjoint structure is as follows:
Properties
The N = 2 Ramond and Neveu–Schwarz algebras are isomorphic by the spectral shift isomorphism of : with inverse:
In the N = 2 Ramond algebra, the zero mode operators , , and the constants form a five-dimensional Lie superalgebra. They satisfy the same relations as the fundamental operators in Kähler geometry, with corresponding to the Laplacian, the degree operator, and the and operators.
Even integer powers of the spectral shift give automorphisms of the N = 2 superconformal algebras, called spectral shift automorphisms. Another automorphism , of period two, is given by In terms of Kähler operators, corresponds to conjugating the complex structure. Since , the automorphisms and generate a group of automorphisms of the N = 2 superconformal algebra isomorphic to the infinite dihedral group .
Twisted operators were introduced by and satisfy: so that these operators satisfy the Virasoro relation with central charge 0. The constant still appears in the relations for and the modified relations
Constructions
Free field construction
give a construction using two commuting real bosonic fields ,
and a complex fermionic field
is defined to the sum of the Virasoro operators naturally associated with each of the three systems
where normal ordering has been used for bosons and fermions.
The current operator is defined by the standard construction from fermions
and the two supersymmetric operators by
This yields an N = 2 Neveu–Schwarz algebra with c = 3.
SU(2) supersymmetric coset construction
gave a coset construction of the N = 2 superconformal algebras, generalizing the coset constructions of for the discrete series representations of the Virasoro and super Virasoro algebra. Given a representation of the affine Kac–Moody algebra of SU(2) at level with basis satisfying
the supersymmetric generators are defined by
This yields the N=2 superconformal algebra with
The algebra commutes with the bosonic operators
The space of physical states consists of eigenvectors of simultaneously annihilated by the 's for positive and the supercharge operator
(Neveu–Schwarz)
(Ramond)
The supercharge operator commutes with the action of the affine Weyl group and the physical states lie in a single orbit of this group, a fact which implies the Weyl-Kac character formula.
Kazama–Suzuki supersymmetric coset construction
generalized the SU(2) coset construction to any pair consisting of a simple compact Lie group and a closed subgroup of maximal rank, i.e. containing a maximal torus of , with the additional condition that the dimension of the centre of is non-zero. In this case the compact Hermitian symmetric space is a Kähler manifold, for example when . The physical states lie in a single orbit of the affine Weyl group, which again implies the Weyl–Kac character formula for the affine Kac–Moody algebra of .
See also
Virasoro algebra
Super Virasoro algebra
Coset construction
Type IIB string theory
Notes
References
String theory
Conformal field theory
Lie algebras
Representation theory
Supersymmetry | N = 2 superconformal algebra | [
"Physics",
"Astronomy",
"Mathematics"
] | 999 | [
"Astronomical hypotheses",
"Unsolved problems in physics",
"Fields of abstract algebra",
"Physics beyond the Standard Model",
"Representation theory",
"String theory",
"Supersymmetry",
"Symmetry"
] |
30,445,334 | https://en.wikipedia.org/wiki/Fixes%20that%20fail | Fixes that fail is a system archetype that in system dynamics is used to describe and analyze a situation, where a fix effective in the short-term creates side effects for the long-term behaviour of the system and may result in the need of even more fixes. This archetype may be also known as fixes that backfire or corrective actions that fail. It resembles the Shifting the burden archetype.
Description
In a "fixes that fail" scenario the encounter of a problem is faced by a corrective action or fix that seems to solve the issue. However, this action leads to some unforeseen consequences. They form then a feedback loop that either worsens the original problem or creates a related one.
In system dynamics this is described by a circles of causality (Fig. 1) as a system consisting of two feedback loops. One is the balancing feedback loop B1 of the corrective action, the second is the reinforcing feedback loop R2 of the unintended consequences. These influence the problem with a delay and therefore make it difficult to recognize the source of the new rise of the problem.
Representation of the long-term disadvantages of the scenario can be seen on Fig. 2. Although the symptoms go through a decrease when fixes are applied, the overall crisis threshold rises.
A representation with a stock and flow diagram of this archetype is on Fig. 3.
The fix influences the number of problems present in the system proportionally to the fix factor and the problems to be resolved. When activated by the action variable, the fix lowers the problems, thus creating a balancing loop. However, each fix also starts a delayed consequence which adds to the problems proportionally to the consequence factor and the fix applied. Combined, these create a growing number of problems to be dealt with.
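A minimal discrete-time sketch of that stock-and-flow structure is shown below; the fix factor, consequence factor, delay and initial number of problems are illustrative values, not taken from any published model. With a consequence factor above 1, the symptoms dip after each fix but the total problem stock grows over time, which is the signature of the archetype.

```python
FIX_FACTOR = 0.5          # fraction of outstanding problems removed by each fix
CONSEQUENCE_FACTOR = 1.2  # unintended problems created later per unit of fix
DELAY = 5                 # time steps before the consequences materialise
STEPS = 30

problems = 20.0
scheduled = [0.0] * (STEPS + DELAY)   # consequences waiting to arrive later

for t in range(STEPS):
    fix = FIX_FACTOR * problems                        # balancing loop: the fix removes problems
    problems -= fix
    scheduled[t + DELAY] += CONSEQUENCE_FACTOR * fix   # reinforcing loop, delayed
    problems += scheduled[t]                           # earlier consequences arrive now
    print(f"t={t:2d}  problems={problems:7.2f}")
```

Setting the consequence factor below 1 lets the system eventually recover, corresponding to a fix whose side effects are smaller than the problem it removes.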
Uses
As an archetype, it helps to gain insight into patterns that are present in the system and thereby also gives the tools to change these patterns. In the case of "Fixes that fail", the warning sign is a problem which reappears although fixes were applied. It is crucial to recognize that the fix only adds to the overall deteriorating state and does not solve the problem. To identify this pattern, it is necessary to consider a connection between the symptoms and the fixes we apply to solve them, which can be very difficult to do. In management this can be present as a "hero-scapegoat" cycle: the manager who applied the fix gets promoted for diminishing the problem, while a new manager must face the returning problem symptom and may be punished for failing to do his job. Then a new hero is found who temporarily solves the problem symptoms. The delay of the reinforcing loop makes it difficult to recognize the causal relation between the fix applied to the symptoms and the new problems arising. What seems to be a series of successes in the short term is then a series of steps towards failure in the long term.
Some typical ways of thinking associated with the pattern are:
"It always seemed to work before; why isn't it working now?"
"This is a simple problem and the solution is straightforward."
"We need to fix this problem now. We can deal with any consequences later."
They can serve as a warning that this archetype is present or will be.
If this pattern is recognized, then there are multiple possibilities how to react, depending on which leverage point is addressed:
Focus on the long-term and if a fix is inevitably needed, use it only to buy time to work on the long-term remedy.
Raise awareness of the unintended consequences of the fixes.
Focus on the underlying problem and not the symptoms.
Find either a fix without consequences or with limited long-term impact.
Find a way to measure the intended and also unintended consequences of the solutions by learning also from the past fixes.
Change the performance review time so that the long-term progress becomes visible.
Examples
A few common examples of the pattern. The situation describes always the starting point to which a fix is applied. This bears then the consequences which are confronted again with a new fix.
Maximizing ROR
Situation: A manufacturing company becomes successful with high-performance parts, and its CEO wants to maximize the ROR.
Fix: Refusal of investment in expensive, new production machines.
Consequences: The product quality drops and therefore the sales of it.
Cutting back maintenance
Situation: The company needs to save money.
Fix: Decrease the amount of maintenance.
Consequences: More breakdowns of the equipment, higher costs and cost-cutting pressure.
Quest for water
Situation: Farmers are confronted with water shortage.
Fix: Drilling new wells or making the old ones deeper.
Consequences: The water table drops.
Cash shortage
Situation: A person can't pay interest (for example on a credit card).
Fix: Take up a new loan to pay the interest (a new credit card).
Consequences: There is more interest to pay next time.
Tax revenue shortage
Situation: A government is not satisfied with its tax revenues.
Fix: Increase the cigarette tax to raise more taxes.
Consequences: Smuggling of cigarettes develops and reduces the number of taxed cigarettes sold in the country.
Situation: A farm struggles with a fungal infection on some plants.
Fix: Using more fungicides.
Consequences: The fungus develops fungicide resistance, and the quantity to apply becomes even higher.
See also
The Fifth Discipline
System Dynamics
Organizational learning
Limits to Growth
References
Risk management
Operations research
Complex systems theory | Fixes that fail | [
"Mathematics"
] | 1,120 | [
"Applied mathematics",
"Operations research"
] |
37,039,369 | https://en.wikipedia.org/wiki/Geologic%20overpressure | Geologic overpressure in stratigraphic layers is caused by the inability of connate pore fluids to escape as the surrounding mineral matrix compacts under the lithostatic pressure caused by overlying layers. Fluid escape may be impeded by sealing of the compacting rock by surrounding impermeable layers (such as evaporites, chalk and cemented sandstones). Alternatively, the rate of burial of the stratigraphic layer may be so great that the efflux of fluid is not sufficiently rapid to maintain hydrostatic pressure.
Common situations where overpressure may occur: in a buried river channel filled with coarse sand that is sealed on all sides by impermeable shales, or when there is an explosion within a confined space.
It is extremely important to be able to diagnose overpressured units when drilling through them, as the drilling mud weight (density) must be adjusted to compensate. If it is not, there is a risk that the pressure difference down-well will cause a dramatic decompression of the overpressured layer and result in a blowout at the well-head with possibly disastrous consequences.
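A rough numerical illustration of why the mud weight matters is sketched below; the depth, densities and pore pressure are assumed values for illustration only. The static pressure of the mud column, ρ·g·h, must at least balance the pore pressure of the formation being drilled.

```python
g = 9.81              # m/s^2
depth = 3000.0        # m, true vertical depth (assumed)
rho_water = 1000.0    # kg/m^3, normally pressured (hydrostatic) pore fluid
rho_mud = 1100.0      # kg/m^3, current drilling mud density (assumed)
pore_pressure = 45e6  # Pa, pore pressure of an overpressured zone (assumed)

hydrostatic = rho_water * g * depth   # normal pore pressure at this depth
mud_column = rho_mud * g * depth      # pressure exerted by the mud column

print(f"hydrostatic pore pressure: {hydrostatic / 1e6:5.1f} MPa")
print(f"mud column pressure:       {mud_column / 1e6:5.1f} MPa")
print(f"overpressured zone:        {pore_pressure / 1e6:5.1f} MPa")

# Minimum mud density needed to balance the overpressured zone:
rho_needed = pore_pressure / (g * depth)
print(f"minimum mud density:       {rho_needed:6.0f} kg/m^3")
```

With these assumed numbers the mud column falls well short of the overpressured zone, showing why the mud density has to be raised when an overpressured unit is diagnosed.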
Because overpressured sediments tend to exhibit better porosity than would be predicted from their depth, they often make attractive hydrocarbon reservoirs and are therefore of important economic interest.
References
Petroleum geology
Oil wells
Pressure | Geologic overpressure | [
"Physics",
"Chemistry"
] | 277 | [
"Scalar physical quantities",
"Mechanical quantities",
"Physical quantities",
"Petroleum technology",
"Pressure",
"Petroleum",
"Oil wells",
"Petroleum geology",
"Wikipedia categories named after physical quantities"
] |
37,039,918 | https://en.wikipedia.org/wiki/Group%20structure%20and%20the%20axiom%20of%20choice | In mathematics a group is a set together with a binary operation on the set called multiplication that obeys the group axioms. The axiom of choice is an axiom of ZFC set theory which in one form states that every set can be wellordered.
In ZF set theory, i.e. ZFC without the axiom of choice, the following statements are equivalent:
For every nonempty set there exists a binary operation such that is a group.
The axiom of choice is true.
A group structure implies the axiom of choice
In this section it is assumed that every set can be endowed with a group structure .
Let be a set. Let be the Hartogs number of . This is the least cardinal number such that there is no injection from into . It exists without the assumption of the axiom of choice. Assume here for technical simplicity of proof that has no ordinal. Let denote multiplication in the group .
For any there is an such that . Suppose not. Then there is an such that for all . But by elementary group theory, the are all different as α ranges over (i). Thus such a gives an injection from into . This is impossible since is a cardinal such that no injection into exists.
Now define a map of into endowed with the lexicographical wellordering by sending to the least such that . By the above reasoning the map exists and is unique since least elements of subsets of wellordered sets are unique. It is, by elementary group theory, injective.
Finally, define a wellordering on by if . It follows that every set can be wellordered and thus that the axiom of choice is true.
For the crucial property expressed in (i) above to hold, and hence the whole proof, it is sufficient for to be a cancellative magma, e.g. a quasigroup. The cancellation property is enough to ensure that the are all different.
The axiom of choice implies a group structure
Any nonempty finite set has a group structure as a cyclic group generated by any element. Under the assumption of the axiom of choice, every infinite set is equipotent with a unique cardinal number which equals an aleph. Using the axiom of choice, one can show that for any family of sets (A). Moreover, by Tarski's theorem on choice, another equivalent of the axiom of choice, for all finite (B).
Let be an infinite set and let denote the set of all finite subsets of . There is a natural multiplication on . For , let , where denotes the symmetric difference. This turns into a group with the empty set, , being the identity and every element being its own inverse; . The associative property, i.e. is verified using basic properties of union and set difference. Thus is a group with multiplication .
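The group axioms for this symmetric-difference multiplication can be checked mechanically on a small example; the sketch below uses the finite set X = {0, 1, 2}, so it only illustrates the algebraic structure, not the cardinality argument that follows.

```python
from itertools import combinations

X = {0, 1, 2}
F = [frozenset(c) for r in range(len(X) + 1) for c in combinations(sorted(X), r)]

op = lambda a, b: a ^ b        # symmetric difference as the group multiplication
e = frozenset()                # the empty set

assert all(op(a, b) in F for a in F for b in F)                    # closure
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a in F for b in F for c in F)                       # associativity
assert all(op(a, e) == a and op(e, a) == a for a in F)             # identity
assert all(op(a, a) == e for a in F)                               # each element is its own inverse
print("symmetric difference makes the finite subsets of X a group")
```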
Any set that can be put into bijection with a group becomes a group via the bijection. It will be shown that , and hence a one-to-one correspondence between and the group exists. For , let be the subset of consisting of all subsets of cardinality exactly . Then is the disjoint union of the . The number of subsets of of cardinality is at most because every subset with elements is an element of the -fold cartesian product of . So for all (C) by (B).
Putting these results together it is seen that by (A) and (C). Also, , since contains all singletons. Thus, and , so, by the Schröder–Bernstein theorem, . This means precisely that there is a bijection between and . Finally, for define . This turns into a group. Hence every set admits a group structure.
A ZF set with no group structure
There are models of ZF in which the axiom of choice fails. In such a model, there are sets that cannot be well-ordered (call these "non-wellorderable" sets). Let be any such set. Now consider the set . If were to have a group structure, then, by the construction in first section, can be well-ordered. This contradiction shows that there is no group structure on the set .
If a set is such that it cannot be endowed with a group structure, then it is necessarily non-wellorderable. Otherwise the construction in the second section does yield a group structure. However these properties are not equivalent. Namely, it is possible for sets which cannot be well-ordered to have a group structure.
For example, if is any set, then has a group structure, with symmetric difference as the group operation. Of course, if cannot be well-ordered, then neither can . One interesting example of sets which cannot carry a group structure is from sets with the following two properties:
is an infinite Dedekind-finite set. In other words, has no countably infinite subset.
If is partitioned into finite sets, then all but finitely many of them are singletons.
To see that the combination of these two cannot admit a group structure, note that given any permutation of such set must have only finite orbits, and almost all of them are necessarily singletons which implies that most elements are not moved by the permutation. Now consider the permutations given by , for which is not the neutral element, there are infinitely many such that , so at least one of them is not the neutral element either. Multiplying by gives that is in fact the identity element which is a contradiction.
The existence of such a set is consistent, for example given in Cohen's first model. Surprisingly, however, being an infinite Dedekind-finite set is not enough to rule out a group structure, as it is consistent that there are infinite Dedekind-finite sets with Dedekind-finite power sets.
Notes
References
Axiom of choice
Group theory | Group structure and the axiom of choice | [
"Mathematics"
] | 1,212 | [
"Mathematical axioms",
"Group theory",
"Fields of abstract algebra",
"Axiom of choice",
"Axioms of set theory"
] |
37,040,021 | https://en.wikipedia.org/wiki/Single%20customer%20view | A single customer view is an aggregated, consistent and holistic representation of the data held by an organisation about its customers that can be viewed in one place, such as a single page. The advantage to an organisation of attaining this unified view comes from the ability it gives to analyse past behaviour in order to better target and personalise future customer interactions. A single customer view is also considered especially relevant where organisations engage with customers through multichannel marketing, since customers expect those interactions to reflect a consistent understanding of their history and preferences. However, some commentators have challenged the idea that a single view of customers across an entire organisation is either natural or meaningful, proposing that the priority should instead be consistency between the multiple views that arise in different contexts.
Where representations of a customer are held in more than one data set, achieving a single customer view can be difficult: firstly because customer identity must be traceable between the records held in those systems, and secondly because anomalies or discrepancies in the customer data must be cleansed to ensure data quality. As such, the acquisition by an organisation of a single customer view is one potential outcome of successful master data management. Since 31 December 2010, maintaining a single customer view, and submitting it within 72 hours, has been mandatory for financial institutions in the United Kingdom due to new rules introduced by the Financial Services Compensation Scheme.
See also
Data warehouse
References
Identity management
Business intelligence terms
Data management
Data warehousing | Single customer view | [
"Technology"
] | 294 | [
"Data management",
"Data"
] |
37,040,772 | https://en.wikipedia.org/wiki/Bour%27s%20minimal%20surface | In mathematics, Bour's minimal surface is a two-dimensional minimal surface, embedded with self-crossings into three-dimensional Euclidean space. It is named after Edmond Bour, whose work on minimal surfaces won him the 1861 mathematics prize of the French Academy of Sciences.
Description
Bour's surface crosses itself on three coplanar rays, meeting at equal angles at the origin of the space. The rays partition the surface into six sheets, topologically equivalent to half-planes; three sheets lie in the halfspace above the plane of the rays, and three below. Four of the sheets are mutually tangent along each ray.
Equation
The points on the surface may be parameterized in polar coordinates by a pair of numbers . Each such pair corresponds to a point in three dimensions according to the parametric equations
The surface can also be expressed as the solution to a polynomial equation of order 16 in the Cartesian coordinates of the three-dimensional space.
Properties
The Weierstrass–Enneper parameterization, a method for turning certain pairs of functions over the complex numbers into minimal surfaces, produces this surface for the two functions . It was proved by Bour that surfaces in this family are developable onto a surface of revolution.
References
Minimal surfaces | Bour's minimal surface | [
"Chemistry"
] | 252 | [
"Foams",
"Minimal surfaces"
] |
37,041,835 | https://en.wikipedia.org/wiki/MERCI%20Retriever | The MERCI Retriever is a medical device designed to treat Ischemic Strokes. The name is an acronym for Mechanical Embolus Removal in Cerebral Ischemia. Designed by University of California, Los Angeles in 2001, MERCI was the first device approved in the U.S. to remove blood clots in patients who had acute brain ischemia.
Medical uses
It may result in benefit in certain people for as long as 8 hours after the ischemic event. In this carefully selected group it achieved 48% vessel re-canalization and lower mortality rates than the use of r-tPA in revascularized patients.
Mechanism
In an ischemic stroke, there is an obstruction within blood vessels that supply blood to the brain. The goal in treatment of such a stroke is to restore blood flow through these blood vessels. The MERCI retriever does so by allowing the removal of the blood clots causing the obstruction.
The retriever consists of a long thin wire with a helical coil formed at the distal end. A balloon catheter is snaked into the affected vessel from the femoral artery, and the balloon is inflated to prevent blood flow that could hinder the retrieval process. The retriever is then fed through the catheter, during which the distal coil is straightened to fit through the catheter tube. When the retriever emerges at the clot site, the coil reforms, wrapping around the clot and allowing the clot to be removed with the catheter.
History
The MERCI Retriever obtained U.S. FDA clearance in August 2004 for re-canalization of cerebral arteries in acute stroke.
FDA Process
Concentric Medical undertook a preliminary study of the MERCI retriever to assess its effectiveness. The manufacturers of the MERCI device filed a 510(k) premarket notification, and received an FDA premarket approval in 2004.
References
Medical devices
Management of stroke | MERCI Retriever | [
"Biology"
] | 391 | [
"Medical devices",
"Medical technology"
] |
37,041,987 | https://en.wikipedia.org/wiki/Bargmann%E2%80%93Wigner%20equations | In relativistic quantum mechanics and quantum field theory, the Bargmann–Wigner equations describe free particles with non-zero mass and arbitrary spin j, an integer for bosons (j = 1, 2, 3, ...) or half-integer for fermions (j = 1/2, 3/2, 5/2, ...). The solutions to the equations are wavefunctions, mathematically in the form of multi-component spinor fields.
They are named after Valentine Bargmann and Eugene Wigner.
History
Paul Dirac first published the Dirac equation in 1928, and later (1936) extended it to particles of any half-integer spin before Fierz and Pauli subsequently found the same equations in 1939, about a decade before Bargmann and Wigner. Eugene Wigner wrote a paper in 1937 about unitary representations of the inhomogeneous Lorentz group, or the Poincaré group. Wigner notes Ettore Majorana and Dirac used infinitesimal operators applied to functions. Wigner classifies representations as irreducible, factorial, and unitary.
In 1948 Valentine Bargmann and Wigner published the equations now named after them in a paper on a group theoretical discussion of relativistic wave equations.
Statement of the equations
For a free particle of spin without electric charge, the BW equations are a set of coupled linear partial differential equations, each with a similar mathematical form to the Dirac equation. The full set of equations are:
which follow the pattern;
for . (Some authors e.g. Loide and Saar use to remove factors of 2. Also the spin quantum number is usually denoted by in quantum mechanics, however in this context is more typical in the literature). The entire wavefunction has components
and is a rank-2j 4-component spinor field. Each index takes the values 1, 2, 3, or 4, so there are components of the entire spinor field , although a completely symmetric wavefunction reduces the number of independent components to . Further, are the gamma matrices, and
is the 4-momentum operator.
The operator constituting each equation, , is a matrix, because of the matrices, and the term scalar-multiplies the identity matrix (usually not written for simplicity). Explicitly, in the Dirac representation of the gamma matrices:
where is a vector of the Pauli matrices, E is the energy operator, is the 3-momentum operator, denotes the identity matrix, the zeros (in the second line) are actually blocks of zero matrices.
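As a quick consistency check of the matrices just described, the following sketch builds the Dirac-representation gamma matrices from the Pauli matrices and verifies their anticommutation relations numerically; the metric signature (+,−,−,−) is assumed.

```python
import numpy as np

# Numerical check that the Dirac-representation gamma matrices satisfy the
# Clifford algebra {gamma^mu, gamma^nu} = 2 eta^{mu nu} I; this only
# illustrates the block structure quoted above.
zero2 = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_x
         np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_z

gamma = [np.block([[I2, zero2], [zero2, -I2]])]                 # gamma^0
gamma += [np.block([[zero2, s], [-s, zero2]]) for s in sigma]   # gamma^1..3

eta = np.diag([1.0, -1.0, -1.0, -1.0])
for mu in range(4):
    for nu in range(4):
        anticomm = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anticomm, 2 * eta[mu, nu] * np.eye(4))
print("Clifford algebra relations verified")
```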
The above matrix operator contracts with one bispinor index of at a time (see matrix multiplication), so some properties of the Dirac equation also apply to the BW equations:
the equations are Lorentz covariant,
all components of the solutions also satisfy the Klein–Gordon equation, and hence fulfill the relativistic energy–momentum relation,
second quantization is still possible.
Unlike the Dirac equation, which can incorporate the electromagnetic field via minimal coupling, the B–W formalism comprises intrinsic contradictions and difficulties when the electromagnetic field interaction is incorporated. In other words, it is not possible to make the change , where is the electric charge of the particle and is the electromagnetic four-potential. An indirect approach to investigate electromagnetic influences of the particle is to derive the electromagnetic four-currents and multipole moments for the particle, rather than include the interactions in the wave equations themselves.
Lorentz group structure
The representation of the Lorentz group for the BW equations is
where each is an irreducible representation. This representation does not have definite spin unless equals 1/2 or 0. One may perform a Clebsch–Gordan decomposition to find the irreducible terms and hence the spin content. This redundancy necessitates that a particle of definite spin that transforms under the representation satisfies field equations.
The representations and can each separately represent particles of spin . A state or quantum field in such a representation would satisfy no field equation except the Klein–Gordon equation.
Formulation in curved spacetime
Following M. Kenmoku, in local Minkowski space, the gamma matrices satisfy the anticommutation relations:
where is the Minkowski metric. For the Latin indices here, . In curved spacetime they are similar:
where the spatial gamma matrices are contracted with the vierbein to obtain , and is the metric tensor. For the Greek indices; .
A covariant derivative for spinors is given by
with the connection given in terms of the spin connection by:
The covariant derivative transforms like :
With this setup, equation () becomes:
See also
Two-body Dirac equation
Generalizations of Pauli matrices
Wigner D-matrix
Weyl–Brauer matrices
Higher-dimensional gamma matrices
Joos–Weinberg equation, alternative equations which describe free particles of any spin
Higher-spin theory
Notes
References
Further reading
Books
Selected papers
External links
Relativistic wave equations:
Dirac matrices in higher dimensions, Wolfram Demonstrations Project
Learning about spin-1 fields, P. Cahill, K. Cahill, University of New Mexico
Field equations for massless bosons from a Dirac–Weinberg formalism, R.W. Davies, K.T.R. Davies, P. Zory, D.S. Nydick, American Journal of Physics
Quantum field theory I, Martin Mojžiš
The Bargmann–Wigner Equation: Field equation for arbitrary spin, FarzadQassemi, IPM School and Workshop on Cosmology, IPM, Tehran, Iran
Lorentz groups in relativistic quantum physics:
Representations of Lorentz Group, indiana.edu
Appendix C: Lorentz group and the Dirac algebra, mcgill.ca
The Lorentz Group, Relativistic Particles, and Quantum Mechanics, D. E. Soper, University of Oregon, 2011
Representations of Lorentz and Poincaré groups, J. Maciejko, Stanford University
Representations of the Symmetry Group of Spacetime, K. Drake, M. Feinberg, D. Guild, E. Turetsky, 2009
Quantum mechanics
Quantum field theory
Mathematical physics | Bargmann–Wigner equations | [
"Physics",
"Mathematics"
] | 1,242 | [
"Quantum field theory",
"Applied mathematics",
"Theoretical physics",
"Quantum mechanics",
"Mathematical physics"
] |
37,044,564 | https://en.wikipedia.org/wiki/Current%20limiting%20reactor | In electrical engineering, current limiting reactors can reduce short-circuit currents, which result from plant expansions and power source additions, to levels that can be adequately handled by existing distribution equipment.
They can also be used in high voltage electric power transmission grids for a similar purpose. In the control of electric motors, current limiting reactors can be used to restrict starting current or as part of a speed control system.
History
Current limiting reactors, once called current limiting reactance coils, were first presented in 1915.
The inventor of the current limiting reactance coil was Vern E. Alden, who filed the patent on November 20, 1917, with an issue date of September 11, 1923. The original assignee was Westinghouse Electric & Manufacturing Company.
Operation
A current limiting reactor is used when the prospective short-circuit current in a distribution or transmission system is calculated to exceed the interrupting rating of the associated switchgear. The inductive reactance is chosen to be low enough for an acceptable voltage drop during normal operation, but high enough to restrict a short circuit to the rating of the switchgear.
The amount of protection that a current limiting reactor offers depends upon the percentage increase in impedance that it provides for the system.
The main motive of using current limiting reactors is to reduce short-circuit currents so that circuit breakers with lower short circuit breaking capacity can be used. They can also be used to protect other system components from high current levels and to limit the inrush current when starting a large motor.
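As an illustration of the sizing trade-off described above, the sketch below estimates how much a series reactor reduces the prospective short-circuit current and what voltage drop it causes at load current. All numbers (system voltage, source reactance, reactor reactance, load current) are illustrative assumptions, not values from any particular installation, and resistance is neglected as is common for a first estimate.

```python
import math

# Per-phase quantities for a hypothetical medium-voltage feeder (all values assumed).
V_LL = 13_800.0          # line-to-line system voltage, volts
V_phase = V_LL / math.sqrt(3)
X_source = 0.50          # source (upstream) reactance per phase, ohms
X_reactor = 0.75         # added current-limiting reactor reactance per phase, ohms
I_load = 600.0           # normal load current, amperes

# Prospective short-circuit current without and with the reactor in the circuit.
I_sc_before = V_phase / X_source
I_sc_after = V_phase / (X_source + X_reactor)

# Voltage drop across the reactor at normal load current.
V_drop = I_load * X_reactor
drop_percent = 100.0 * V_drop / V_phase

print(f"Short-circuit current without reactor: {I_sc_before / 1000:.1f} kA")
print(f"Short-circuit current with reactor:    {I_sc_after / 1000:.1f} kA")
print(f"Reactor voltage drop at load current:  {drop_percent:.1f} % of phase voltage")
```

The reactance is chosen so that the calculated fault current falls within the switchgear rating while the voltage drop at normal load current stays acceptably small, which is the trade-off described above.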
Construction
It is desirable that the reactor does not go into magnetic saturation during a short circuit, so generally an air-core coil is used. At low and medium voltages, air-insulated coils are practical; for high transmission voltages, the coils may be immersed in transformer oil.
Installation of air-core coils requires consideration of the magnetic field produced by the coils, which may induce current in large nearby metal objects. This may result in objectionable temperature rise and waste of energy.
Line reactor
A line reactor is an inductor wired between a power source and a load. In addition to the current limiting function, the device serves to filter out spikes of current and may also reduce injection of harmonic currents into the power supply. The most common type is designed for three-phase electric power, in which three isolated inductors are each wired in series with one of the three line phases. Line reactors are generally installed in motor driven equipment to limit starting current, and may be used to protect variable-frequency drives and motors.
See also
Electrical ballast
References
Electrical engineering
Electric power transmission
American inventions
Over-current protection devices | Current limiting reactor | [
"Engineering"
] | 537 | [
"Electrical engineering"
] |
24,372,941 | https://en.wikipedia.org/wiki/Mitochondrial%20ferritin | Mitochondrial ferritin is a ferroxidase enzyme that in humans is encoded by the FTMT gene.
It is classified as a metal-binding protein which is located within the mitochondria. After the protein is taken up by the mitochondria it can be processed into a mature protein and assemble functional ferritin shells.
Structure
Its structure was determined at 1.70 Å through the use of X-ray diffraction and contains 182 residues. It is 67% helical. The Ramachandran plot shows that the structure of mitochondrial ferritin is mainly alpha helical with a low prevalence of beta sheets.
References
Further reading
EC 1.16.3
Storage proteins
Mitochondria | Mitochondrial ferritin | [
"Chemistry"
] | 148 | [
"Mitochondria",
"Metabolism"
] |
24,376,396 | https://en.wikipedia.org/wiki/Filigree%20concrete | The Filigree Wideslab method is a process for construction of concrete floor decks from two interconnected concrete placements, one precast in a factory, and the other done in the field. The method was developed during the late 1960s by Harry H. Wise as a more efficient and economic construction process than conventional cast-in-place technologies.
Description
The process begins by manufacturing thin precast concrete panels, typically 2.25" thick, with the deck's bottom reinforcement included. The panels are then shipped to a jobsite and erected on temporary shoring. Subsequently, the deck's top reinforcing steel is placed on top of the precast panels at the site, and concrete is poured over the entire assembly to achieve the final thickness of the deck.
This process effectively accelerates the construction of structures by eliminating the need for costly and time-consuming field forming, and the placing of bottom reinforcement. Polystyrene blocks are often incorporated into the panels during their manufacture in order to create voids, reducing both the quantity and cost of concrete added in the field, and the overall weight of the structure, which further reduces the costs of columns and foundations.
The soffits of the panels have a smooth uniform finish as a result of casting them in polished steel molds. This reduces the labor cost and time typically required to grind and patch the soffits of cast-in-place concrete decks to achieve an acceptable aesthetic finish.
The method of deck construction can be applied anywhere conventionally poured-in-place concrete is specified, such as flat plate, beam and slab, and wall-bearing structures.
See also
References
External links
Concrete Contractor
Concrete buildings and structures
Concrete | Filigree concrete | [
"Engineering"
] | 341 | [
"Structural engineering",
"Concrete"
] |
24,376,661 | https://en.wikipedia.org/wiki/Bray%E2%80%93Curtis%20dissimilarity | In ecology and biology, the Bray–Curtis dissimilarity is a statistic used to quantify the dissimilarity in species composition between two different sites, based on counts at each site. It is named after J. Roger Bray and John T. Curtis who first presented it in a paper in 1957.
The Bray–Curtis dissimilarity BC_{jk} between two sites j and k is

BC_{jk} = \frac{\sum_{i=1}^{p} \left| n_{ij} - n_{ik} \right|}{\sum_{i=1}^{p} \left( n_{ij} + n_{ik} \right)},

where n_{ij} is the number of specimens of species i at site j, n_{ik} is the number of specimens of species i at site k, and p is the total number of species in the samples.
In the alternative shorthand notation BC_{jk} = 1 - \frac{2 C_{jk}}{S_j + S_k}, where C_{jk} is the sum of the lesser counts of each species found at both sites, and S_j and S_k are the total numbers of specimens counted at sites j and k respectively. The index can be simplified to 1 - 2C/2 = 1 - C when the abundances at each site are expressed as proportions, though the two forms of the equation only produce matching results when the total number of specimens counted at both sites is the same. Further treatment can be found in Legendre & Legendre.
The Bray–Curtis dissimilarity is bounded between 0 and 1, where 0 means the two sites have the same composition (that is, they share all the species), and 1 means the two sites do not share any species. At sites where BC is intermediate (e.g. BC = 0.5) this index differs from other commonly used indices.
The Bray–Curtis dissimilarity is directly related to the quantitative Sørensen similarity index QS_{jk} between the same sites:

BC_{jk} = 1 - QS_{jk}.
The Bray–Curtis dissimilarity is often erroneously called a distance ("A well-defined distance function obeys the triangle inequality, but there are several justifiable measures of difference between samples which do not have this property: to distinguish these from true distances we often refer to them as dissimilarities"). It is not a distance since it does not satisfy triangle inequality, and should always be called a dissimilarity to avoid confusion.
Example
For a simple example, consider the data from two aquariums with 3 species in them, as shown in the table. The table shows the number of each species in each tank, as well as some statistics needed to compute the Bray-Curtis dissimilarity.
To calculate Bray–Curtis, let's first calculate C_{jk}, the sum of only the lesser counts for each species found in both sites. Goldfish are found on both sites; the lesser count is 6. Guppies are only on one site, so the lesser count is 0 and will not contribute to the sum. Rainbow fish, though, are in both, and the lesser count is 4.
So C_{jk} = 6 + 4 = 10.
(total number of specimens counted on site j) , and
(total number of specimens counted on site k) .
This leads to .
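The table of raw counts referred to above did not survive extraction, so the sketch below re-runs the aquarium example with assumed counts that reproduce the stated lesser counts (goldfish 6, guppies 0, rainbow fish 4); the per-tank totals are illustrative assumptions, not values from the original table.

```python
def bray_curtis(counts_j, counts_k):
    """Bray–Curtis dissimilarity between two sites given per-species counts."""
    assert len(counts_j) == len(counts_k)
    c_jk = sum(min(a, b) for a, b in zip(counts_j, counts_k))  # sum of lesser counts
    s_j, s_k = sum(counts_j), sum(counts_k)
    return 1.0 - 2.0 * c_jk / (s_j + s_k)

# Species order: goldfish, guppies, rainbow fish (counts assumed for illustration).
aquarium_j = [6, 7, 4]    # S_j = 17
aquarium_k = [10, 0, 6]   # S_k = 16

bc = bray_curtis(aquarium_j, aquarium_k)
print(f"C_jk = 10, S_j = 17, S_k = 16, BC = {bc:.3f}")  # BC = 1 - 20/33, about 0.394
```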
References
Further reading
Czekanowski J (1909) Zur Differentialdiagnose der Neandertalgruppe. Korrespbl dt Ges Anthrop 40: 44–47.
Ricotta C & Podani J (2017) On some properties of the Bray–Curtis dissimilarity and their ecological meaning. Ecological Complexity 31: 201–205.
Somerfield, PJ (2008) Identification of Bray–Curtis similarity index: comment on Yoshioka (2008). Mar Ecol Prog Ser 372: 303–306.
Yoshioka PM (2008) Misidentification of the Bray–Curtis similarity index. Mar Ecol Prog Ser 368: 309–310. http://doi.org/10.3354/meps07728
Measurement of biodiversity | Bray–Curtis dissimilarity | [
"Biology"
] | 744 | [
"Biodiversity",
"Measurement of biodiversity"
] |
24,376,770 | https://en.wikipedia.org/wiki/Molecular%20Membrane%20Biology | Molecular Membrane Biology is a peer-reviewed scientific journal that publishes review articles of biomembranes at the molecular level. It is published by Taylor & Francis. The editor-in-chief is Vincent Postis.
External links
Academic journals established in 1978
Molecular and cellular biology journals
Taylor & Francis academic journals
English-language journals
Annual journals | Molecular Membrane Biology | [
"Chemistry"
] | 69 | [
"Molecular and cellular biology journals",
"Molecular biology"
] |
24,376,808 | https://en.wikipedia.org/wiki/Rubblization | Rubblization is a construction and engineering technique that involves saving time and transportation costs by reducing existing concrete into rubble at its current location rather than hauling it to another location. Rubblization has two primary applications: creating a base for new roadways and decommissioning nuclear power plants.
Road construction
In road construction, a worn-out Portland cement concrete can be rubblized and then overlaid with a new surface, usually asphalt concrete. Specialized equipment breaks up the old roadway into small pieces to make a base for new pavement. This saves the expense of transporting the old pavement to a disposal site, and purchasing/transporting new base materials for the replacement paving. The result is a smoother pavement surface than would be obtained if a layer of asphalt were to be applied to the unbroken concrete surface. The technique has been used on roads since the late 1990s, and is also being used for concrete airport runways.
The rubblizing process provides many benefits versus other methods of road rehabilitation, such as crack and seat or removal and replacement of a concrete surface: rubblizing a concrete surface is 52% less expensive than removing and replacing concrete; rubblizing reduces road reconstruction time from days of lane closures to hours, providing large savings to contractors and reduced impact on the travelling public; and rubblization is an environmentally friendly "green" process.
Nuclear power plant decommissioning
Rubblization is used in decommissioning of nuclear power plants. As with other decommissioning techniques, all equipment from buildings is removed and the surfaces are decontaminated. The difference with rubblization is that above-grade structures, including the concrete containment building, are demolished into rubble and buried in the structure's foundation below ground. The site surface is then covered, regraded, and landscaped for unrestricted use. This saves the expense of removing and transporting the building pieces to a different site.
External links
References
Materials
Building engineering | Rubblization | [
"Physics",
"Engineering"
] | 398 | [
"Building engineering",
"Materials",
"Civil engineering",
"Matter",
"Architecture"
] |
24,377,618 | https://en.wikipedia.org/wiki/HD%20171238 | HD 171238 is a star with an orbiting exoplanet in the southern constellation of Sagittarius. It is located at a distance of 145 light years from the Sun based on parallax measurements, and is drifting further away with a radial velocity of 21 km/s. The star has an absolute magnitude of 5.15, but at the distance of this system it is too faint to be viewed with the naked eye, having an apparent visual magnitude of 8.61.
The spectrum of HD 171238 presents as an ordinary G-type main-sequence star with a stellar classification of G8 V. At an estimated age of around four billion years, it is spinning with a projected rotational velocity of 1.5 km/s. The metallicity of the star – the abundance of elements more massive than helium – is 48% higher than solar, based on the abundance of iron. There are indications of a significant level of magnetic activity in the chromosphere. The star has 99% of the mass of the Sun and 95% of the Sun's girth. It is radiating just 77% of the luminosity of the Sun from its photosphere at an effective temperature of 5,570 K.
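As a quick consistency check on the figures above, the quoted radius and effective temperature imply the quoted luminosity through the Stefan–Boltzmann relation L ∝ R²T⁴. The solar reference temperature used below is an assumption, and small rounding differences account for the result of roughly 0.78 versus the 77% quoted above.

```python
# Luminosity check from the quoted radius and effective temperature,
# using the Stefan–Boltzmann relation L/L_sun = (R/R_sun)^2 * (T/T_sun)^4.
T_SUN = 5772.0        # K, nominal solar effective temperature (assumed reference value)
R_star = 0.95         # stellar radius in solar radii (from the article)
T_star = 5570.0       # effective temperature in kelvin (from the article)

L_ratio = R_star**2 * (T_star / T_SUN)**4
print(f"L/L_sun = {L_ratio:.2f}")   # about 0.78, consistent with the quoted ~77%
```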
Planetary system
In August 2009, it was announced that this star has a super-jovian exoplanet. Using astrometry from Gaia, astronomers were able to deduce the true mass of HD 171238 b as ; higher than the minimum mass estimated from Doppler spectroscopy.
See also
HD 147018
HD 204313
List of extrasolar planets
References
G-type main-sequence stars
Planetary systems with one confirmed planet
Sagittarius (constellation)
Durchmusterung objects
171238
091085 | HD 171238 | [
"Astronomy"
] | 355 | [
"Sagittarius (constellation)",
"Constellations"
] |
24,378,029 | https://en.wikipedia.org/wiki/C16H13Cl2NO4 | {{DISPLAYTITLE:C16H13Cl2NO4}}
The molecular formula C16H13Cl2NO4 (molar mass: 354.185 g/mol, exact mass: 353.0222 u) may refer to:
Aceclofenac
Quinfamide
Molecular formulas | C16H13Cl2NO4 | [
"Physics",
"Chemistry"
] | 67 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,378,119 | https://en.wikipedia.org/wiki/C12H11N | {{DISPLAYTITLE:C12H11N}}
The molecular formula C12H11N (molar mass: 169.22 g/mol) may refer to:
Aminobiphenyls
2-Aminobiphenyl (2-APB)
3-Aminobiphenyl
4-Aminobiphenyl (4-APB)
Diphenylamine
Molecular formulas | C12H11N | [
"Physics",
"Chemistry"
] | 85 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,381,424 | https://en.wikipedia.org/wiki/Ferritic%20nitrocarburizing | Ferritic nitrocarburizing or FNC, also known by the proprietary names "Tenifer", "Tufftride", Melonite, and "Arcor", is a range of proprietary case hardening processes that diffuse nitrogen and carbon into ferrous metals at sub-critical temperatures during a salt bath. Other methods of ferritic nitrocarburizing include gaseous processes such as Nitrotec and ion (plasma) ones. The processing temperature ranges from to , but usually occurs at . Steel and other ferrous alloys remain in the ferritic phase region at this temperature. This allows for better control of the dimensional stability that would not be present in case hardening processes that occur when the alloy is transitioned into the austenitic phase. There are four main classes of ferritic nitrocarburizing: gaseous, salt bath, ion or plasma, and fluidized-bed.
The process improves three main surface integrity aspects: scuffing resistance, fatigue properties, and corrosion resistance. It has the advantage of inducing little shape distortion during the hardening process. This is because of the low processing temperature, which reduces thermal shocks and avoids phase transitions in steel.
History
The first ferritic nitrocarburizing methods were done at low temperatures, around , in a liquid salt bath. The first company to successfully commercialize the process was the Imperial Chemical Industries in Great Britain. ICI called their process "the cassel" due to the plant where it was developed or "Sulfinuz" treatment because it had sulfur in the salt bath. While the process was very successful with high-speed spindles and cutting tools, there were issues with cleaning the solution off because it was not very water-soluble.
Because of the cleaning issues, Lucas Industries began experimenting with gaseous forms of ferritic nitrocarburizing in the late 1950s. The company applied for a patent in 1961. It produced a similar surface finish as the Sulfinuz process, except for the formation of sulfides. The atmosphere consists of ammonia, hydrocarbon gases, and a small amount of other carbon-containing gases.
This innovation spurred the development of a more environmentally friendly salt bath process by the German company Degussa after acquiring ICI patents. Their process is widely known as the Tufftride or Tenifer process. Following this, the ion nitriding process was invented in the early 1980s. This process had faster cycle times, required less cleaning and preparation, formed deeper cases, and allowed for better control of the process.
Processes
Despite the naming, the process is a modified form of nitriding and not carburizing. The shared attribute of this class of this process is the introduction of nitrogen and carbon in the ferritic state of the material. The processes are divided into four main classes: gaseous, salt bath, ion or plasma, or fluidized-bed. The trade name and patented processes may vary slightly from the general description, but they are all a form of ferritic nitrocarburizing.
Salt bath ferritic nitrocarburizing
Salt bath ferritic nitrocarburizing is also known as liquid ferritic nitrocarburizing or liquid nitrocarburizing and is also known by the trademarked names "Tufftride" and "Tenifer".
The simplest form of this process is encompassed by the trademarked "Melonite" process, also known as "Meli 1". It is most commonly used on steels, sintered irons, and cast irons to lower friction and improve wear and corrosion resistance.
The process uses a salt bath of alkali cyanate. This is contained in a steel pot that has an aeration system. The cyanate thermally reacts with the surface of the workpiece to form an alkali carbonate. The bath is then treated to convert the carbonate back to a cyanate. The surface formed from the reaction has a compound layer and a diffusion layer. The compound layer, which consists of iron, nitrogen, and oxygen, is abrasion resistant and is stable at elevated temperatures. The diffusion layer contains nitrides and carbides. The surface hardness ranges from 800 to 1500 HV depending on the steel grade. This also inversely affects the depth of the case; i.e., a high carbon steel will form a hard, but shallow case.
A similar process is the trademarked "Nu-Tride" process, also known incorrectly as the "Kolene" process (which is the company's name), includes a preheat and an intermediate quench cycle. The intermediate quench is an oxidizing salt bath at . This quench is held for 5 to 20 minutes before the final quenching to room temperature. This is done to minimize distortion and to destroy any lingering cyanates or cyanides left on the workpiece.
Other trademarked processes are "Sursulf" and "Tenoplus". Sursulf has a sulfur compound in the salt bath to create surface sulfides creating porosity in the workpiece surface. This porosity is used to contain lubrication. Tenoplus is a two-stage high-temperature process. The first stage occurs at , while the second stage occurs at .
Gaseous ferritic nitrocarburizing
Gaseous ferritic nitrocarburizing is also known as controlled nitrocarburizing, soft nitriding, and vacuum nitrocarburizing or by the tradenames "UltraOx", "Nitrotec", "Nitemper", "Deganit", "Triniding", "Corr-I-Dur", "Nitroc", "Nitreg-C", "Nitrowear", and "Nitroneg". The process works to achieve the same result as the salt bath process, except gaseous mixtures are used to diffuse the nitrogen and carbon into the workpiece.
The parts are first cleaned, usually with a vapor degreasing process, and then nitrocarburized around , with a processing time that ranges from one to four hours. The actual gas mixtures are proprietary, but they usually contain ammonia and an endothermic gas.
In comparison to a standard nitriding process, ferritic nitrocarburizing or FNC in a vacuum furnace takes less time to achieve case depth requirements, mainly due to the addition of carbon, which allows faster diffusion.
Plasma-assisted ferritic nitrocarburizing
Plasma-assisted ferritic nitrocarburizing is also known as ion nitriding, plasma ion nitriding, or glow-discharge nitriding. The process works to achieve the same result as the salt bath and gaseous process, except the reactivity of the media is not due to the temperature but to the gas ionized state. In this technique intense electric fields are used to generate ionized molecules of the gas around the surface to diffuse the nitrogen and carbon into the workpiece. Such highly active gas with ionized molecules is called plasma, naming the technique. The gas used for plasma nitriding is usually pure nitrogen since no spontaneous decomposition is needed (as is the case of gaseous ferritic nitrocarburizing with ammonia). Due to the relatively low-temperature range ( to ) generally applied during plasma-assisted ferritic nitrocarburizing and gentle cooling in the furnace, the distortion of workpieces can be minimized. Stainless steel workpieces can be processed at moderate temperatures (like ) without the formation of chromium nitride precipitates and hence maintaining their corrosion resistance properties.
Post-oxidation black oxide
An additional step can be added to the nitrocarburizing process called post-oxidation. When properly performed, post-oxidation creates a layer of black oxide (Fe3O4), that greatly increases the corrosion resistance of the treated substrate while leaving an aesthetically attractive black color. Since the introduction of the Glock pistol in 1982, this type of nitrocarburizing with post-oxidation finish has become popular as a factory finish for military-style handguns.
This combination of nitrocarburizing and oxidizing is sometimes called "nitrox", but this word also has another meaning.
Uses
These processes are most commonly used on low-carbon, low-alloy steels. However, they are also used on medium and high-carbon steels. Typical applications include spindles, cams, gears, dies, hydraulic piston rods, and powdered metal components.
One of the initial applications of the hardening process for mass-produced automobile engines was by Kaiser-Jeep for the crankshaft in the 1962 Jeep Tornado engine. This was one of many innovations in the OHC six-cylinder engine. The crankshaft was strengthened by two hours of Tufftriding in a unique salt bath, a treatment that, according to Kaiser-Jeep, increased engine life by 50% and also made the journal surfaces hard enough to be compatible with heavy-duty tri-metal engine bearings.
Glock Ges.m.b.H., an Austrian firearms manufacturer, utilized the Tenifer process until 2010 to protect the barrels and slides of the pistols it manufactures. The finish on a Glock pistol is the third and final hardening process. The nitride bath produces a hard surface layer with a 64 Rockwell C hardness rating. The final matte, non-glare finish meets or exceeds stainless steel specifications, is 85% more corrosion resistant than a hard chrome finish, and is 99.9% salt-water corrosion resistant. After the Tenifer process, a black Parkerized finish is applied, and the slide is protected even if the finish were to wear off. In 2010, Glock switched to a gaseous ferritic nitrocarburizing process. Besides Glock, other firearms manufacturers, including Smith & Wesson and HS Produkt, also use ferritic nitrocarburizing for finishing parts like barrels and slides, but they call it a Melonite finish. Heckler & Koch use a nitrocarburizing process they refer to as Hostile Environment. Pistol manufacturer Caracal International, headquartered in the United Arab Emirates, uses ferritic nitrocarburizing for finishing parts such as barrels and slides with the plasma-based post-oxidation process (PlasOx). Grand Power, a Slovakian firearms producer, also uses a quench polish quench (QPQ) treatment to harden metal parts on its K100 pistols.
References
Bibliography
External links
Tufftride-/QPQ-process: technical information
: What is Tufftride?
Metal heat treatments | Ferritic nitrocarburizing | [
"Chemistry"
] | 2,231 | [
"Metallurgical processes",
"Metal heat treatments"
] |
24,383,048 | https://en.wikipedia.org/wiki/Cherenkov%20radiation | Cherenkov radiation () is electromagnetic radiation emitted when a charged particle (such as an electron) passes through a dielectric medium (such as distilled water) at a speed greater than the phase velocity (speed of propagation of a wavefront in a medium) of light in that medium. A classic example of Cherenkov radiation is the characteristic blue glow of an underwater nuclear reactor. Its cause is similar to the cause of a sonic boom, the sharp sound heard when faster-than-sound movement occurs. The phenomenon is named after Soviet physicist Pavel Cherenkov.
History
The radiation is named after the Soviet scientist Pavel Cherenkov, the 1958 Nobel Prize winner, who was the first to detect it experimentally under the supervision of Sergey Vavilov at the Lebedev Institute in 1934. Therefore, it is also known as Vavilov–Cherenkov radiation. Cherenkov saw a faint bluish light around a radioactive preparation in water during experiments. His doctorate thesis was on luminescence of uranium salt solutions that were excited by gamma rays instead of less energetic visible light, as done commonly. He discovered the anisotropy of the radiation and came to the conclusion that the bluish glow was not a fluorescent phenomenon.
A theory of this effect was later developed in 1937 within the framework of Einstein's special relativity theory by Cherenkov's colleagues Igor Tamm and Ilya Frank, who also shared the 1958 Nobel Prize.
Cherenkov radiation as conical wavefronts had been theoretically predicted by the English polymath Oliver Heaviside in papers published between 1888 and 1889 and by Arnold Sommerfeld in 1904, but both had been quickly dismissed following the relativity theory's restriction of superluminal particles until the 1970s. Marie Curie observed a pale blue light in a highly concentrated radium solution in 1910, but did not investigate its source. In 1926, the French radiotherapist Lucien Mallet described the luminous radiation of radium irradiating water having a continuous spectrum.
In 2019, a team of researchers from Dartmouth's and Dartmouth-Hitchcock's Norris Cotton Cancer Center discovered Cherenkov light being generated in the vitreous humor of patients undergoing radiotherapy. The light was observed using a camera imaging system called a CDose, which is specially designed to view light emissions from biological systems. For decades, patients had reported phenomena such as "flashes of bright or blue light" when receiving radiation treatments for brain cancer, but the effects had never been experimentally observed.
Physical origin
Basics
While the speed of light in vacuum is a universal constant (), the speed in a material may be significantly less, as it is perceived to be slowed by the medium. For example, in water it is only 0.75c. Matter can accelerate to a velocity higher than this (although still less than c, the speed of light in vacuum) during nuclear reactions and in particle accelerators. Cherenkov radiation results when a charged particle, most commonly an electron, travels through a dielectric (can be polarized electrically) medium with a speed greater than light's speed in that medium.
The effect can be intuitively described in the following way. From classical physics, it is known that accelerating charged particles emit EM waves and via Huygens' principle these waves will form spherical wavefronts which propagate with the phase velocity of that medium (i.e. the speed of light in that medium, given by v_{em} = c/n, for n the refractive index). When any charged particle passes through a medium, the particles of the medium will polarize around it in response. The charged particle excites the molecules in the polarizable medium and on returning to their ground state, the molecules re-emit the energy given to them to achieve excitation as photons. These photons form the spherical wavefronts which can be seen originating from the moving particle. If v_p < c/n, that is, the velocity of the charged particle is less than the speed of light in the medium, then the polarization field which forms around the moving particle is usually symmetric. The corresponding emitted wavefronts may be bunched up, but they do not coincide or cross, and there are therefore no interference effects to consider. In the reverse situation, i.e. v_p > c/n, the polarization field is asymmetric along the direction of motion of the particle, as the particles of the medium do not have enough time to recover to their "normal" randomized states. This results in overlapping waveforms (as in the animation) and constructive interference leads to an observed cone-like light signal at a characteristic angle: Cherenkov light.
A common analogy is the sonic boom of a supersonic aircraft. The sound waves generated by the aircraft travel at the speed of sound, which is slower than the aircraft, and cannot propagate forward from the aircraft, instead forming a conical shock front. In a similar way, a charged particle can generate a "shock wave" of visible light as it travels through an insulator.
The velocity that must be exceeded is the phase velocity of light rather than the group velocity of light. The phase velocity can be altered dramatically by using a periodic medium, and in that case one can even achieve Cherenkov radiation with no minimum particle velocity, a phenomenon known as the Smith–Purcell effect. In a more complex periodic medium, such as a photonic crystal, one can also obtain a variety of other anomalous Cherenkov effects, such as radiation in a backwards direction (see below) whereas ordinary Cherenkov radiation forms an acute angle with the particle velocity.
In their original work on the theoretical foundations of Cherenkov radiation, Tamm and Frank wrote, "This peculiar radiation can evidently not be explained by any common mechanism such as the interaction of the fast electron with individual atom or as radiative scattering of electrons on atomic nuclei. On the other hand, the phenomenon can be explained both qualitatively and quantitatively if one takes into account the fact that an electron moving in a medium does radiate light even if it is moving uniformly provided that its velocity is greater than the velocity of light in the medium."
Emission angle
In the figure on the geometry, the particle (red arrow) travels in a medium with speed v_p such that

\frac{c}{n} < v_p < c,

where c is the speed of light in vacuum, and n is the refractive index of the medium. If the medium is water, the condition is 0.75c < v_p < c, since n = 1.33 for water at 20 °C.
We define the ratio between the speed of the particle and the speed of light as

\beta = \frac{v_p}{c}.

The emitted light waves (denoted by blue arrows) travel at speed

v_{em} = \frac{c}{n}.

The left corner of the triangle represents the location of the superluminal particle at some initial moment (t = 0). The right corner of the triangle is the location of the particle at some later time t. In the given time t, the particle travels the distance

x_p = v_p t = \beta\, c\, t,

whereas the emitted electromagnetic waves are constricted to travel the distance

x_{em} = v_{em} t = \frac{c}{n}\, t.

So the emission angle results in

\cos\theta = \frac{1}{n\beta}.
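A short numerical sketch of the geometry just derived is given below for an electron in water; the kinetic energies chosen are illustrative assumptions.

```python
import math

# Constants (energies in MeV).
M_E = 0.511          # electron rest energy, MeV
N_WATER = 1.33       # refractive index of water at optical wavelengths

def cherenkov_angle_deg(kinetic_energy_mev, mass_mev, n):
    """Cherenkov emission angle in degrees, or None if the particle is below threshold."""
    gamma = 1.0 + kinetic_energy_mev / mass_mev
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    if beta * n <= 1.0:          # below threshold (beta <= 1/n): no Cherenkov light
        return None
    return math.degrees(math.acos(1.0 / (n * beta)))

# The threshold speed in water is beta = 1/n, about 0.75; for an electron this
# corresponds to a kinetic energy of roughly 0.26 MeV.
for T in (0.1, 0.5, 5.0):        # assumed kinetic energies in MeV
    angle = cherenkov_angle_deg(T, M_E, N_WATER)
    label = "no emission" if angle is None else f"theta = {angle:.1f} deg"
    print(f"T = {T:.1f} MeV -> {label}")
```

The angle approaches its maximum, arccos(1/n), as the particle speed approaches c, consistent with the formula above.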
Arbitrary emission angle
Cherenkov radiation can also radiate in an arbitrary direction using properly engineered one dimensional metamaterials. The latter is designed to introduce a gradient of phase retardation along the trajectory of the fast travelling particle (), reversing or steering Cherenkov emission at arbitrary angles given by the generalized relation:
Note that since this ratio is independent of time, one can take arbitrary times and achieve similar triangles. The angle stays the same, meaning that subsequent waves generated between the initial time and final time t will form similar triangles with coinciding right endpoints to the one shown.
Reverse Cherenkov effect
A reverse Cherenkov effect can be experienced using materials called negative-index metamaterials (materials with a subwavelength microstructure that gives them an effective "average" property very different from their constituent materials, in this case having negative permittivity and negative permeability). This means that, when a charged particle (usually electrons) passes through a medium at a speed greater than the phase velocity of light in that medium, that particle emits trailing radiation from its progress through the medium rather than in front of it (as is the case in normal materials with, both permittivity and permeability positive). One can also obtain such reverse-cone Cherenkov radiation in non-metamaterial periodic media where the periodic structure is on the same scale as the wavelength, so it cannot be treated as an effectively homogeneous metamaterial.
In vacuum
The Cherenkov effect can occur in vacuum. In a slow-wave structure, like in a traveling-wave tube (TWT), the phase velocity decreases and the velocity of charged particles can exceed the phase velocity while remaining lower than . In such a system, this effect can be derived from conservation of the energy and momentum where the momentum of a photon should be ( is phase constant) rather than the de Broglie relation . This type of radiation (VCR) is used to generate high-power microwaves.
Collective Cherenkov
Radiation with the same properties as typical Cherenkov radiation can be created by structures of electric current that travel faster than light. By manipulating density profiles in plasma acceleration setups, structures up to nanocoulombs of charge are created and may travel faster than the speed of light and emit optical shocks at the Cherenkov angle. Electrons are still subluminal; hence the electrons that compose the structure at a given time are different from the electrons in the structure at a later time.
Characteristics
The frequency spectrum of Cherenkov radiation by a particle is given by the Frank–Tamm formula:

\frac{d^2 E}{dx\, d\omega} = \frac{q^2}{4\pi}\, \mu(\omega)\, \omega \left(1 - \frac{c^2}{v^2 n^2(\omega)}\right).

The Frank–Tamm formula describes the amount of energy E emitted from Cherenkov radiation, per unit length traveled x and per frequency \omega. \mu(\omega) is the permeability and n(\omega) is the index of refraction of the material the charged particle moves through. q is the electric charge of the particle, v is the speed of the particle, and c is the speed of light in vacuum.
Unlike fluorescence or emission spectra that have characteristic spectral peaks, Cherenkov radiation is continuous. Around the visible spectrum, the relative intensity per unit frequency is approximately proportional to the frequency. That is, higher frequencies (shorter wavelengths) are more intense in Cherenkov radiation. This is why visible Cherenkov radiation is observed to be brilliant blue. In fact, most Cherenkov radiation is in the ultraviolet spectrum—it is only with sufficiently accelerated charges that it even becomes visible; the sensitivity of the human eye peaks at green, and is very low in the violet portion of the spectrum.
There is a cut-off frequency above which the equation can no longer be satisfied. The refractive index varies with frequency (and hence with wavelength) in such a way that the intensity cannot continue to increase at ever shorter wavelengths, even for very relativistic particles (where v/c is close to 1). At X-ray frequencies, the refractive index becomes less than 1 (note that in media, the phase velocity may exceed c without violating relativity) and hence no X-ray emission (or shorter wavelength emissions such as gamma rays) would be observed. However, X-rays can be generated at special frequencies just below the frequencies corresponding to core electronic transitions in a material, as the index of refraction is often greater than 1 just below a resonant frequency .
As in sonic booms and bow shocks, the angle of the shock cone is directly related to the velocity of the disruption. The Cherenkov angle is zero at the threshold velocity for the emission of Cherenkov radiation. The angle takes on a maximum as the particle speed approaches the speed of light. Hence, observed angles of incidence can be used to compute the direction and speed of a Cherenkov radiation-producing charge.
Cherenkov radiation can be generated in the eye by charged particles hitting the vitreous humour, giving the impression of flashes, as in cosmic ray visual phenomena and possibly some observations of criticality accidents.
Uses
Detection of labelled biomolecules
Cherenkov radiation is widely used to facilitate the detection of small amounts and low concentrations of biomolecules. Radioactive atoms such as phosphorus-32 are readily introduced into biomolecules by enzymatic and synthetic means and subsequently may be easily detected in small quantities for the purpose of elucidating biological pathways and in characterizing the interaction of biological molecules such as affinity constants and dissociation rates.
Medical imaging of radioisotopes and external beam radiotherapy
More recently, Cherenkov light has been used to image substances in the body. These discoveries have led to intense interest around the idea of using this light signal to quantify and/or detect radiation in the body, either from internal sources such as injected radiopharmaceuticals or from external beam radiotherapy in oncology. Radioisotopes such as the positron emitters 18F and 13N or beta emitters 32P or 90Y have measurable Cherenkov emission and isotopes 18F and 131I have been imaged in humans for diagnostic value demonstration.
External beam radiation therapy has been shown to induce a substantial amount of Cherenkov light in the tissue being treated, due to electron beams or photon beams with energy in the 6 MV to 18 MV ranges. The secondary electrons induced by these high energy x-rays result in the Cherenkov light emission, where the detected signal can be imaged at the entry and exit surfaces of the tissue. The Cherenkov light emitted from patient's tissue during radiation therapy is a very low light level signal but can be detected by specially designed cameras that synchronize their acquisition to the linear accelerator pulses. The ability to see this signal shows the shape of the radiation beam as it is incident upon the tissue in real time.
Nuclear reactors
Cherenkov radiation is used to detect high-energy charged particles. In open pool reactors, beta particles (high-energy electrons) are released as the fission products decay. The glow continues after the chain reaction stops, dimming as the shorter-lived products decay. Similarly, Cherenkov radiation can characterize the remaining radioactivity of spent fuel rods. This phenomenon is used to verify the presence of spent nuclear fuel in spent fuel pools for nuclear safeguards purposes.
Astrophysics experiments
When a high-energy (TeV) gamma photon or cosmic ray interacts with the Earth's atmosphere, it may produce an electron–positron pair with enormous velocities. The Cherenkov radiation emitted in the atmosphere by these charged particles is used to determine the direction and energy of the cosmic ray or gamma ray, which is used for example in the Imaging Atmospheric Cherenkov Technique (IACT), by experiments such as VERITAS, H.E.S.S., MAGIC. Cherenkov radiation emitted in tanks filled with water by those charged particles reaching earth is used for the same goal by the Extensive Air Shower experiment HAWC, the Pierre Auger Observatory and other projects. Similar methods are used in very large neutrino detectors, such as the Super-Kamiokande, the Sudbury Neutrino Observatory (SNO) and IceCube. Other projects operated in the past applying related techniques, such as STACEE, a former solar tower refurbished to work as a non-imaging Cherenkov observatory, which was located in New Mexico.
Astrophysics observatories using the Cherenkov technique to measure air showers are key to determining the properties of astronomical objects that emit very-high-energy gamma rays, such as supernova remnants and blazars.
Particle physics experiments
Cherenkov radiation is commonly used in experimental particle physics for particle identification. One could measure (or put limits on) the velocity of an electrically charged elementary particle by the properties of the Cherenkov light it emits in a certain medium. If the momentum of the particle is measured independently, one could compute the mass of the particle by its momentum and velocity (see four-momentum), and hence identify the particle.
The simplest type of particle identification device based on a Cherenkov radiation technique is the threshold counter, which answers whether the velocity of a charged particle is lower or higher than a certain value (v_0 = c/n, where c is the speed of light, and n is the refractive index of the medium) by looking at whether this particle emits Cherenkov light in a certain medium. Knowing the particle momentum, one can separate particles lighter than a certain threshold from those heavier than the threshold.
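The sketch below illustrates the threshold-counter idea just described; the radiator index and beam momentum are illustrative assumptions chosen so that, at that momentum, only the lightest particle is above threshold.

```python
import math

# Rest energies in GeV (PDG values, rounded).
MASS = {"pion": 0.1396, "kaon": 0.4937, "proton": 0.9383}

def emits_cherenkov(momentum_gev, mass_gev, n):
    """True if a particle of given momentum and mass exceeds the threshold beta > 1/n."""
    beta = momentum_gev / math.sqrt(momentum_gev**2 + mass_gev**2)
    return beta * n > 1.0

# Hypothetical threshold counter: aerogel radiator with n = 1.05, beam momentum 1.2 GeV/c.
n_radiator, p = 1.05, 1.2
for name, m in MASS.items():
    result = "light" if emits_cherenkov(p, m, n_radiator) else "no light"
    print(f"{name:7s}: {result}")
# At 1.2 GeV/c only the pion is above threshold, so a hit in the counter tags the track
# as a pion while kaons and protons pass dark.
```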
The most advanced type of a detector is the RICH, or ring-imaging Cherenkov detector, developed in the 1980s. In a RICH detector, a cone of Cherenkov light is produced when a high-speed charged particle traverses a suitable medium, often called radiator. This light cone is detected on a position sensitive planar photon detector, which allows reconstructing a ring or disc, whose radius is a measure for the Cherenkov emission angle. Both focusing and proximity-focusing detectors are in use. In a focusing RICH detector, the photons are collected by a spherical mirror and focused onto the photon detector placed at the focal plane. The result is a circle with a radius independent of the emission point along the particle track. This scheme is suitable for low refractive index radiators—i.e. gases—due to the larger radiator length needed to create enough photons. In the more compact proximity-focusing design, a thin radiator volume emits a cone of Cherenkov light which traverses a small distance—the proximity gap—and is detected on the photon detector plane. The image is a ring of light whose radius is defined by the Cherenkov emission angle and the proximity gap. The ring thickness is determined by the thickness of the radiator. An example of a proximity gap RICH detector is the High Momentum Particle Identification Detector (HMPID), a detector currently under construction for ALICE (A Large Ion Collider Experiment), one of the six experiments at the LHC (Large Hadron Collider) at CERN.
See also
Askaryan radiation, similar radiation produced by fast uncharged particles
Blue noise
Bremsstrahlung, radiation produced when charged particles are decelerated by other charged particles
Faster-than-light, about conjectural propagation of information or matter faster than the speed of light
Frank–Tamm formula, giving the spectrum of Cherenkov radiation
Light echo
List of light sources
Non-radiation condition
Radioluminescence
Tachyon
Transition radiation
Citations
Sources
External links
Physical phenomena
Particle physics
Special relativity
Experimental particle physics
Light sources | Cherenkov radiation | [
"Physics"
] | 3,779 | [
"Physical phenomena",
"Special relativity",
"Experimental particle physics",
"Experimental physics",
"Particle physics",
"Theory of relativity"
] |
34,511,184 | https://en.wikipedia.org/wiki/Current%20meter | A current meter is an oceanographic device for flow measurement by mechanical, tilt, acoustical or electrical means.
Different reference frames
In physics, one distinguishes different reference frames depending on where the observer is located; this is the basis for
the Lagrangian and Eulerian specification of the flow field in fluid dynamics: The observer can be either in the Moving frame (as for a Lagrangian drifter) or in a resting frame.
Lagrangian current meters measure the displacement of an oceanographic drifter, an unmoored buoy or a non-anchored ship's actual position to the position predicted by dead reckoning.
Eulerian current meters measure current passing a resting current meter.
Types
Mechanical
Mechanical current meters are mostly based on counting the rotations of a propeller and are thus rotor current meters. A mid-20th-century realization is the Ekman current meter which drops balls into a container to count the number of rotations. The Roberts radio current meter is a device mounted on a moored buoy and transmits its findings via radio to a servicing vessel. Savonius current meters rotate around a vertical axis in order to minimize error introduced by vertical motion.
Acoustic
There are two basic types of acoustic current meters: Doppler and Travel Time. Both methods use a ceramic transducer to emit a sound into the water.
Doppler instruments are more common. An instrument of this type is the Acoustic Doppler current profiler (ADCP), which measures the water current velocities over a depth range using the Doppler effect of sound waves scattered back from particles within the water column. The ADCPs use the traveling time of the sound to determine the position of the moving particles. Single-point devices use again the Doppler shift, but ignoring the traveling times. Such a single-point Doppler Current Sensor (DCS) has a typical velocity range of 0 to 300 cm/s. The devices are usually equipped with additional optional sensors.
Travel time instruments determine water velocity by at least two acoustic signals, one up stream and one down stream. By precisely measuring the time to travel from the emitter to the receiver, in both directions, the average water speed can be determined between the two points. By using multiple paths, the water velocity can be determined in three dimensions.
Travel time meters are generally more accurate than Doppler meters, but only record the velocity between the transducers. Doppler meters have the advantage that they can determine the water velocity at a considerable range, and in the case of an ADCP, at multiple ranges.
Electromagnetic induction
This novel approach is for instance employed in the Florida Strait where electromagnetic induction in submerged telephone cable is used to estimate the through-flow through the gateway and the complete setup can be seen as one huge current meter. The physics behind: Charged particles (the ions in seawater) are moving with the ocean currents in the magnetic field of the Earth which is perpendicular to the movement. Using Faraday's law of induction (the third of Maxwell's equations), it is possible to evaluate the variability of the averaged horizontal flow by measuring the induced electric currents. The method has a minor vertical weighting effect due to small conductivity changes at different depths.
Tilt
Tilt current meters operate under the drag-tilt principle and are designed to either float or sink depending on the type. A floating tilt current meter typically consists of a sub-surface buoyant housing that is anchored to the sea floor with a flexible line or tether. A sinking tilt current meter is similar, but the housing is designed such that the meter hangs from the attachment point. In either case, the housing tilts as a function of its shape, buoyancy (negative or positive) and the water velocity. Once the characteristics of a housing are known, the velocity can be determined by measuring the angle of the housing and the direction of tilt. The housing contains a data logger that records the orientation (angle from vertical and compass bearing) of the tilt current meter. Floating tilt current meters are typically deployed on the bottom with a lead or concrete anchor but may be deployed on lobster traps or other convenient anchors of opportunity. Sinking tilt current meters may be attached to an oceanographic mooring, floating dock or fish pen. Tilt current meters have the advantage over other methods of measuring current in that they are generally relatively low-cost instruments and their design and operation are relatively simple. The low cost of the instrument may allow researchers to use the meters in greater numbers (thereby increasing spatial density) and/or in locations where there is a risk of instrument loss.
Depth correction
Current meters are usually deployed within an oceanographic mooring consisting of an anchor weight on the ground, a mooring line with the instrument(s) connected to it and a floating device to keep the mooring line more or less vertical. Like a kite in the wind, the actual shape of the mooring line will not be completely straight, but following a so-called (half-)catenary.
Under the influence of water currents (and wind if the top buoy is above the sea surface) the shape of the mooring line can be determined and by this the actual depth of the instruments. If the currents are strong (above 0.1 m/s) and the mooring lines are long (more than 1 km), the instrument position may vary up to 50 m.
See also
Stream gauge
References
Physical oceanography
Oceanographic instrumentation
Ocean currents | Current meter | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 1,117 | [
"Ocean currents",
"Oceanographic instrumentation",
"Applied and interdisciplinary physics",
"Measuring instruments",
"Physical oceanography",
"Fluid dynamics"
] |
34,514,105 | https://en.wikipedia.org/wiki/Multiparty%20communication%20complexity | In theoretical computer science, multiparty communication complexity is the study of communication complexity in the setting where there are more than 2 players.
In the traditional two-party communication game, introduced by Yao in 1979, two players, P1 and P2, attempt to compute a Boolean function f(x_1, x_2).
Player P1 knows the value of x2, P2 knows the value of x1, but Pi does not know the value of xi, for i = 1, 2.
In other words, the players know the other's variables, but not their own. The minimum number of bits that must be communicated by the players to compute f is the communication complexity of f, denoted by κ(f).
The multiparty communication game, defined in 1983, is a powerful generalization of the 2–party case: Here the players know all the others' input, except their own. Because of this property, sometimes this model is called "numbers on the forehead" model, since if the players were seated around a round table, each wearing their own input on the forehead, then every player would see all the others' input, except their own.
The formal definition is as follows: k players, P_1, P_2, \ldots, P_k, intend to compute a Boolean function

f(x_1, x_2, \ldots, x_n) : \{0,1\}^n \to \{0,1\}.

On the set of variables x = (x_1, x_2, \ldots, x_n) there is a fixed partition A of k classes A_1, A_2, \ldots, A_k, and player P_i knows every variable, except those in A_i, for i = 1, \ldots, k. The players have unlimited computational power, and they communicate with the help of a blackboard, viewed by all players.
The aim is to compute f(x_1, x_2, \ldots, x_n), such that at the end of the computation, every player knows this value. The cost of the computation is the number of bits written onto the blackboard for the given input x and partition A = (A_1, \ldots, A_k). The cost of a multiparty protocol is the maximum number of bits communicated for any x from the set \{0,1\}^n and the given partition A. The k-party communication complexity, C^A(f), of a function f, with respect to partition A, is the minimum of the costs of those k-party protocols which compute f. The k-party symmetric communication complexity of f is defined as

C(f) = \max_{A} C^{A}(f),

where the maximum is taken over all k-partitions A of the set x = (x_1, x_2, \ldots, x_n).
Upper and lower bounds
For a general upper bound both for two and more players, let us suppose that A1 is one of the smallest classes of the partition A1, A2, ..., Ak. Then P1 can compute any Boolean function of S with |A1| + 1 bits of communication: P2 writes down the |A1| bits of A1 on the blackboard, P1 reads it, and computes and announces the value f(x). So, the following can be written:

C(f) \leq \left\lfloor \frac{n}{k} \right\rfloor + 1.
The Generalized Inner Product function (GIP) is defined as follows:
Let a_1, a_2, \ldots, a_k be n-bit vectors, and let A be the n times k matrix, with the vectors a_1, \ldots, a_k as columns. Then GIP(a_1, a_2, \ldots, a_k) is the number of the all-1 rows of matrix A, taken modulo 2. In other words, if the vectors a_i correspond to the characteristic vectors of k subsets of an n-element base-set, then GIP corresponds to the parity of the intersection of these k subsets.
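A minimal sketch of the GIP function as just defined; the sample inputs are arbitrary, and in the "number on the forehead" model each player would see every vector except its own.

```python
def gip(vectors):
    """Generalized Inner Product: parity of the number of positions where all k vectors are 1."""
    n = len(vectors[0])
    assert all(len(v) == n for v in vectors)
    all_one_rows = sum(1 for i in range(n) if all(v[i] == 1 for v in vectors))
    return all_one_rows % 2

# Three players (k = 3), n = 6 variables per player (example inputs are assumed).
a1 = [1, 0, 1, 1, 0, 1]
a2 = [1, 1, 1, 0, 0, 1]
a3 = [1, 0, 1, 1, 1, 1]
print(gip([a1, a2, a3]))  # positions 0, 2, 5 are all-1 -> 3 rows -> parity 1
```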
It was shown that
with a constant c > 0.
An upper bound on the multiparty communication complexity of GIP shows that
with a constant c > 0.
For a general Boolean function f, one can bound the multiparty communication complexity of f by using its L1 norm as follows:
Multiparty communication complexity and pseudorandom generators
A construction of a pseudorandom number generator was based on the BNS lower bound for the GIP function.
Applied mathematics
Information theory | Multiparty communication complexity | [
"Mathematics",
"Technology",
"Engineering"
] | 704 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
22,857,779 | https://en.wikipedia.org/wiki/Udwadia%E2%80%93Kalaba%20formulation | In classical mechanics, the Udwadia–Kalaba formulation is a method for deriving the equations of motion of a constrained mechanical system. The method was first described by Anatolii Fedorovich Vereshchagin for the particular case of robotic arms, and later generalized to all mechanical systems by Firdaus E. Udwadia and Robert E. Kalaba in 1992. The approach is based on Gauss's principle of least constraint. The Udwadia–Kalaba method applies to both holonomic constraints and nonholonomic constraints, as long as they are linear with respect to the accelerations. The method generalizes to constraint forces that do not obey D'Alembert's principle.
Background
The Udwadia–Kalaba equation was developed in 1992 and describes the motion of a constrained mechanical system that is subjected to equality constraints.
This differs from the Lagrangian formalism, which uses the Lagrange multipliers to describe the motion of constrained mechanical systems, and other similar approaches such as the Gibbs–Appell approach. The physical interpretation of the equation has applications in areas beyond theoretical physics, such as the control of highly nonlinear general dynamical systems.
The central problem of constrained motion
In the study of the dynamics of mechanical systems, the configuration of a given system S is, in general, completely described by n generalized coordinates so that its generalized coordinate n-vector is given by q = [q_1, q_2, ..., q_n]^T,
where T denotes matrix transpose. Using Newtonian or Lagrangian dynamics, the unconstrained equations of motion of the system S under study can be derived as a matrix equation (see matrix multiplication): M(q, t) q̈ = Q(q, q̇, t),
where the dots represent derivatives with respect to time: q̇ = dq/dt and q̈ = d²q/dt².
It is assumed that the initial conditions q(0) and q̇(0) are known. We call the system S unconstrained because q̇(0) may be arbitrarily assigned.
The n-vector Q denotes the total generalized force acted on the system by some external influence; it can be expressed as the sum of all the conservative forces as well as non-conservative forces.
The n-by-n matrix M is symmetric, and it can be positive definite (M > 0) or semi-positive definite (M ≥ 0). Typically, it is assumed that M is positive definite; however, it is not uncommon to derive the unconstrained equations of motion of the system S such that M is only semi-positive definite; i.e., the mass matrix may be singular (it has no inverse matrix).
Constraints
We now assume that the unconstrained system S is subjected to a set of m consistent equality constraints given by A(q, q̇, t) q̈ = b(q, q̇, t),
where A is a known m-by-n matrix of rank r and b is a known m-vector. We note that this set of constraint equations encompasses a very general variety of holonomic and non-holonomic equality constraints. For example, holonomic constraints of the form φ(q, t) = 0
can be differentiated twice with respect to time while non-holonomic constraints of the form ψ(q, q̇, t) = 0
can be differentiated once with respect to time to obtain the m-by-n matrix A and the m-vector b. In short, constraints may be specified that are
nonlinear functions of displacement and velocity,
explicitly dependent on time, and
functionally dependent.
As a consequence of subjecting these constraints to the unconstrained system S, an additional force is conceptualized to arise, namely, the force of constraint. Therefore, the constrained system Sc becomes M(q, t) q̈ = Q(q, q̇, t) + Qc(q, q̇, t),
where Qc—the constraint force—is the additional force needed to satisfy the imposed constraints. The central problem of constrained motion is now stated as follows:
given the unconstrained equations of motion of the system S,
given the generalized displacement q(t) and the generalized velocity q̇(t) of the constrained system Sc at time t, and
given the constraints in the form as stated above,
find the equations of motion for the constrained system—the acceleration—at time t, which is in accordance with the agreed upon principles of analytical dynamics.
Notation
Below, for positive definite M, the symbol M^(-1/2) denotes the inverse of its square root, defined as
M^(-1/2) = T^T Σ^(-1/2) T,
where T is the orthogonal matrix arising from the eigendecomposition of M (whose rows consist of suitably selected eigenvectors of M), and Σ^(-1/2) is the diagonal matrix whose diagonal elements are the inverse square roots of the eigenvalues corresponding to the eigenvectors in T.
Equation of motion
The solution to this central problem is given by the Udwadia–Kalaba equation. When the matrix M is positive definite, the equation of motion of the constrained system Sc, at each instant of time, is
M q̈ = Q + M^(1/2) (A M^(-1/2))^+ (b − A M^(-1) Q),
where the '+' symbol denotes the pseudoinverse of the matrix A M^(-1/2). The force of constraint is thus given explicitly as
Qc = M^(1/2) (A M^(-1/2))^+ (b − A M^(-1) Q),
and since the matrix M is positive definite the generalized acceleration of the constrained system Sc is determined explicitly by
q̈ = M^(-1) Q + M^(-1/2) (A M^(-1/2))^+ (b − A M^(-1) Q).
In the case that the matrix M is semi-positive definite (M ≥ 0), the above equation cannot be used directly because M may be singular. Furthermore, the generalized accelerations may not be unique unless the (n + m)-by-n matrix formed by stacking M on top of A, [M^T  A^T]^T,
has full rank (rank = n). But since the observed accelerations of mechanical systems in nature are always unique, this rank condition is a necessary and sufficient condition for obtaining the uniquely defined generalized accelerations of the constrained system Sc at each instant of time. Thus, when has full rank, the equations of motion of the constrained system Sc at each instant of time are uniquely determined by (1) creating the auxiliary unconstrained system
and by (2) applying the fundamental equation of constrained motion to this auxiliary unconstrained system so that the auxiliary constrained equations of motion are explicitly given by
Moreover, when the matrix has full rank, the matrix is always positive definite. This yields, explicitly, the generalized accelerations of the constrained system Sc as
This equation is valid when the matrix M is either positive definite or positive semi-definite. Additionally, the force of constraint that causes the constrained system Sc—a system that may have a singular mass matrix M—to satisfy the imposed constraints is explicitly given by
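As a numerical illustration of the positive-definite case, the sketch below evaluates the acceleration q̈ = M^(-1) Q + M^(-1/2) (A M^(-1/2))^+ (b − A M^(-1) Q) in Python/NumPy. The function name and the two-mass example constraint are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def constrained_acceleration(M, Q, A, b):
    """Udwadia-Kalaba acceleration for positive-definite M:
    qddot = a + M^(-1/2) (A M^(-1/2))^+ (b - A a), with a = M^(-1) Q."""
    a = np.linalg.solve(M, Q)                     # unconstrained acceleration
    w, V = np.linalg.eigh(M)                      # eigendecomposition of symmetric M
    M_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    correction = M_inv_sqrt @ np.linalg.pinv(A @ M_inv_sqrt) @ (b - A @ a)
    return a + correction

# Hypothetical example: two unit masses, gravity acting on the second coordinate,
# constrained so that the two accelerations remain equal (A qddot = 0).
M = np.eye(2)
Q = np.array([0.0, -9.81])
A = np.array([[1.0, -1.0]])
b = np.array([0.0])
print(constrained_acceleration(M, Q, A, b))  # both components are about -4.905
```

Because the Moore–Penrose pseudoinverse is used, the same routine also tolerates redundant (functionally dependent) constraint rows, in line with the generality of the constraint set described above.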
Non-ideal constraints
At any time during the motion we may consider perturbing the system by a virtual displacement δr consistent with the constraints of the system. The displacement is allowed to be either reversible or irreversible. If the displacement is irreversible, then it performs virtual work. We may write the virtual work of the displacement as W = C(q, q̇, t) · δr.
The vector C describes the non-ideality of the virtual work and may be related, for example, to friction or drag forces (such forces have velocity dependence). This is a generalized D'Alembert's principle, where the usual form of the principle has vanishing virtual work with C = 0.
The Udwadia–Kalaba equation is modified by an additional non-ideal constraint term to
Examples
Inverse Kepler problem
The method can solve the inverse Kepler problem of determining the force law that corresponds to the orbits that are conic sections. We take there to be no external forces (not even gravity) and instead constrain the particle motion to follow orbits of the form
where , is the eccentricity, and is the semi-latus rectum. Differentiating twice with respect to time and rearranging slightly gives a constraint
We assume the body has a simple, constant mass. We also assume that angular momentum about the focus is conserved as
with time derivative
We can combine these two constraints into the matrix equation
The constraint matrix has inverse
The force of constraint is therefore the expected, central inverse square law
Inclined plane with friction
Consider a small block of constant mass on an inclined plane at an angle above horizontal. The constraint that the block lie on the plane can be written as
After taking two time derivatives, we can put this into a standard constraint matrix equation form
The constraint matrix has pseudoinverse
We allow there to be sliding friction between the block and the inclined plane. We parameterize this force by a standard coefficient of friction multiplied by the normal force
Whereas the force of gravity is reversible, the force of friction is not. Therefore, the virtual work associated with a virtual displacement will depend on C. We may summarize the three forces (external, ideal constraint, and non-ideal constraint) as follows:
Combining the above, we find that the equations of motion are
This is like a constant downward acceleration due to gravity with a slight modification. If the block is moving up the inclined plane, then the friction increases the downward acceleration. If the block is moving down the inclined plane, then the friction reduces the downward acceleration.
References
Dynamics (mechanics)
Classical mechanics
Mathematical physics | Udwadia–Kalaba formulation | [
"Physics",
"Mathematics"
] | 1,719 | [
"Physical phenomena",
"Applied mathematics",
"Theoretical physics",
"Classical mechanics",
"Motion (physics)",
"Mechanics",
"Dynamics (mechanics)",
"Mathematical physics"
] |
22,858,502 | https://en.wikipedia.org/wiki/Opposition%20surge | The opposition surge (sometimes known as the opposition effect, opposition spike or Seeliger effect) is the brightening of a rough surface, or an object with many particles, when illuminated from directly behind the observer. The term is most widely used in astronomy, where generally it refers to the sudden noticeable increase in the brightness of a celestial body such as a planet, moon, or comet as its phase angle of observation approaches zero. It is so named because the reflected light from the Moon and Mars appear significantly brighter than predicted by simple Lambertian reflectance when at astronomical opposition. Two physical mechanisms have been proposed for this observational phenomenon: shadow hiding and coherent backscatter.
Overview
The phase angle is defined as the angle between the observer, the observed object and the source of light. In the case of the Solar System, the light source is the Sun, and the observer is generally on Earth. At zero phase angle, the Sun is directly behind the observer and the object is directly ahead, fully illuminated.
As the phase angle of an object lit by the Sun decreases, the object's luminous intensity increases. This is partly due to the increased area lit, but is also partly due to the intrinsic brightness (the luminance) of the part that is sunlit. This is affected by the illuminance of the surface, which is strongest directly under the Sun and goes to zero at the parts of the object that face at a right angle to the Sun. But the luminance is also affected by the angle at which light reflected from the object is observed. For this reason, moonlight at full moon is much more than twice as bright as at first or third quarter, even though the visible area illuminated is only twice as large.
Physical mechanisms
Shadow hiding
When the angle of reflection is close to the angle at which the light's rays hit the surface (that is, when the Sun and the object are close to opposition from the viewpoint of the observer), this intrinsic brightness is usually close to its maximum. At a phase angle of zero degrees, all shadows disappear and the object is fully illuminated. When phase angles approach zero, there is a sudden increase in apparent brightness, and this sudden increase is referred to as the opposition surge.
The effect is particularly pronounced on regolith surfaces of airless bodies in the Solar System. The usual major cause of the effect is that a surface's small pores and pits that would otherwise be in shadow at other incidence angles become lit up when the observer is almost in the same line as the source of illumination. The effect is usually only visible for a very small range of phase angles near zero. For bodies whose reflectance properties have been quantitatively studied, details of the opposition effect – its strength and angular extent – are described by two of the Hapke parameters. In the case of planetary rings (such as Saturn's), an opposition surge is due to the uncovering of shadows on the ring particles. This explanation was first proposed by Hugo von Seeliger in 1887.
Coherent backscatter
A theory for an additional effect that increases brightness during opposition is that of coherent backscatter. In the case of coherent backscatter, the reflected light is enhanced at narrow angles if the size of the scatterers in the surface of the body is comparable to the wavelength of light and the distance between scattering particles is greater than a wavelength. The increase in brightness is due to the reflected light combining coherently with the emitted light.
Coherent backscatter phenomena have also been observed with radar. In particular, recent observations of Titan at 2.2 cm with Cassini have shown that a strong coherent backscatter effect is required to explain the high albedos at radar wavelengths.
Water droplets
On Earth, water droplets can also create bright spots around the antisolar point in various situations. For more details, see Heiligenschein and Glory (optical phenomenon).
Throughout the Solar System
The existence of the opposition surge was described in 1956 by Tom Gehrels during his study of the reflected light from an asteroid. Gehrels' later studies showed that the same effect could be shown in the moon's brightness. He coined the term "opposition effect" for the phenomenon, but the more intuitive "opposition surge" is now more widely used.
Since Gehrels' early studies, an opposition surge has been noted for most airless solar system bodies. No such surge has been reported for bodies with significant atmospheres.
In the case of the Moon, B. J. Buratti et al. used observations from the Clementine spacecraft at very low phase angle to find that the moon's brightness increases by more than 40% between a phase angle of 4° and one of 0°. (Observation from Earth cannot be at a phase angle less than about half a degree without there being a lunar eclipse. A phase angle of 4° is achieved about eight hours before or after a lunar eclipse.) This increase is greater for the rougher-surfaced highland areas than for the relatively smooth maria. As for the principal mechanism of the phenomenon, measurements indicate that the opposition effect exhibits only a small wavelength dependence: the surge is 3-4% larger at 0.41 μm than at 1.00 μm. This result suggests that the principal cause of the lunar opposition surge is shadow-hiding rather than coherent backscatter.
See also
Albedo
Bidirectional reflectance function
Brocken spectre, the apparently enormous and magnified shadow of an observer cast upon the upper surfaces of clouds opposite the Sun
Gegenschein
Geometric albedo
References
External links
Hayabusa observes the opposition surge of Asteroid Itokawa
opposition effect, "Atmospheric optics" website. Includes a picture of the opposition surge on the moon
opposition effect mechanism, "Atmospheric optics" website. Diagrammatic representation of the opposition surge
"The-moon wikispaces" opposition surge page
Opposition surge on Saturn's B Ring as seen by Cassini–Huygens
Astronomical events
Lunar science
Optical phenomena
Observational astronomy
Radiometry
Scattering, absorption and radiative transfer (optics) | Opposition surge | [
"Physics",
"Chemistry",
"Astronomy",
"Engineering"
] | 1,232 | [
"Physical phenomena",
" absorption and radiative transfer (optics)",
"Telecommunications engineering",
"Astronomical events",
"Observational astronomy",
"Optical phenomena",
"Scattering",
"Astronomical sub-disciplines",
"Radiometry"
] |
22,860,615 | https://en.wikipedia.org/wiki/Plantibody | Passive immunization is a medical strategy long employed to provide temporary protection against pathogens. Early implementations involved recovering ostensibly cell-free plasma from the blood of human survivors or from non-human animals deliberately exposed to a specific pathogen or toxin. These approaches resulted in crude purifications of plasma-soluble proteins including antibodies.
Antibodies (also known as immunoglobulins) are complex proteins produced by vertebrates that recognize antigens (or molecular patterns) on pathogens and some dangerous compounds in order to alert the adaptive immune system that there are pathogens within the body.
A plantibody is an antibody that is produced by plants that have been genetically engineered with animal DNA encoding a specific human antibody known to neutralize a particular pathogen or toxin. The transgenic plants produce antibodies that are similar to their human counterparts, and following purification, plantibodies can be administered therapeutically to acutely ill patients or prophylactically to at-risk individuals (such as healthcare workers). The term plantibody and the concept are trademarked by the company Biolex.
Production
A plantibody is produced by insertion of genes encoding antibodies into a transgenic plant. The plantibodies are then modified by intrinsic plant mechanisms (N-glycosylation). Plantibodies are purified from plant tissues by mechanical disruption and denaturation/removal of intrinsic plant proteins by treatment at high temperature and low pH, as antibodies tend to be stable under these conditions. Antibodies can further be purified away from other acid- and temperature- stable proteins by capture on commercially produced protein A resins. Production of antibodies in whole transgenic plants, such as species in the genus Nicotiana, is cheaper and safer than in cultured animal cells.
Advantages
Transgenic plants offer an attractive method for large-scale production of antibodies for immunotherapy. Antibodies produced in plants have many advantages that are beneficial to humans, plants, and the economy as well. They can be purified cheaply and in large numbers. The many seeds of plants allow for ample storage, and they have no risk of transmitting diseases to humans because the antibodies are produced without the need of the antigen or infectious microorganisms. Plants could be engineered to produce antibodies which fight off their own plant diseases and pests, for example, nematodes, and eliminate the need for toxic pesticides.
Applications
Antibodies generated by plants are cheaper, easier to manage, and safer to use than those obtained from animals. The applications are increasing because recombinant DNA (rDNA) is very useful in creating proteins that remain identical to the originals when expressed in a plant. Recombinant DNA is artificial DNA that is created by combining two or more sequences that would not normally come together. In this way, DNA introduced into a plant becomes recombinant DNA and can be manipulated. The favorable properties of plants are likely to make the plant systems a useful alternative for small, medium and large scale production throughout the development of new antibody-based pharmaceuticals.
Medical
The main reason plants are being used to produce antibodies is for treatment of illnesses such as immune disorders, cancer, and inflammatory diseases, given the fact that the plantibodies also have no risk of spreading diseases to humans.
In the past 2 decades, research has shown that plant-derived antibodies have become easier to produce.
Commercial
Plantibodies are close to passing clinical trials and becoming approved commercially for several key reasons. Plants are more economical than most other ways of creating antibodies, and the technology for harvesting and maintaining them is already present. Plants also reduce the chance of coming in contact with pathogens, making their antibodies safer to use. Plantibodies can be made at an affordable cost and with easier manufacturing due to the availability and relatively easy manipulation of genetic information in crops such as potatoes, soybean, alfalfa, rice, wheat and tobacco.
Outlook
Commercial use is not yet legalized, but clinical trials are underway to implement the use of plantibodies for humans as injections. So far, companies have started conducting human tests of pharmaceutical products, creating plantibodies that include:
Hepatitis B vaccine
Antibody to fight cavity causing bacteria
Antibodies to prevent sexually transmitted diseases
Antibodies for non-Hodgkin's B-cell lymphoma
Vaccine against HIV virus
Anthrax vaccine (from tobacco)
Antibodies against Ebola virus
By being able to genetically alter plants to create specific antibodies, it is easier to produce antibodies that will fight diseases not only for plants but for humans as well. For that reason, plantibody applications will move more towards the medicinal field.
References
External links
https://www.news-medical.net/health/What-is-an-Antibody.aspx
Genetic engineering
Plant products
Therapeutic antibodies | Plantibody | [
"Chemistry",
"Engineering",
"Biology"
] | 968 | [
"Biological engineering",
"Natural products",
"Genetic engineering",
"Molecular biology",
"Plant products"
] |
22,861,361 | https://en.wikipedia.org/wiki/Lunar%20regolith%20simulant | A lunar regolith simulant is a terrestrial material synthesized in order to approximate the chemical, mechanical, engineering, mineralogical, or particle-size distribution properties of lunar regolith. Lunar regolith simulants are used by researchers who wish to research the materials handling, excavation, transportation, and uses of lunar regolith. Samples of actual lunar regolith are too scarce, and too small, for such research, and have been contaminated by exposure to Earth's atmosphere.
Early simulants
In the run-up to the Apollo program, crushed terrestrial rocks were first used to simulate the anticipated soils that astronauts would encounter on the lunar surface. In some cases the properties of these early simulants were substantially different from actual lunar soil, and the issues associated with the pervasive, fine-grained, sharp dust grains on the Moon came as a surprise.
Later simulants
After Apollo and particularly during the development of the Constellation program, there was a large proliferation of lunar simulants produced by different organizations and researchers. Many of these were given three-letter acronyms to distinguish them (e.g., MLS-1, JSC-1), and numbers to designate subsequent versions. These simulants were broadly divided into highlands or mare soils, and were usually produced by crushing and sieving analogous terrestrial rocks (anorthosite for highlands, basalt for mare). Returned Apollo and Luna samples were used as reference materials in order to target specific properties such as elemental chemistry or particle size distribution. Many of these simulants were criticized by prominent lunar scientist Larry Taylor for a lack of quality control and wasted money on features like nanophase iron that had no documented purpose.
JSC-1 and -1A
JSC-1 (Johnson Space Center Number One) was a lunar regolith simulant that was developed in 1994 by NASA and the Johnson Space Center. Its developers intended it to approximate the lunar soil of the maria. It was sourced from a basaltic ash with a high glass content.
In 2005, NASA contracted with Orbital Technologies Corporation (ORBITEC) for a second batch of simulant in three grades:
JSC-1AF, fine, 27 μm average size
JSC-1A, a reproduction of JSC 1, less than 1 mm size
JSC-1AC, coarse, a distribution of sizes < 5 mm
NASA received 14 metric tons of JSC-1A, and one ton each of AF and AC in 2006. Another 15 tons of JSC-1A and 100 kg of JSC-1F were produced by ORBITEC for commercial sale, but ORBITEC is no longer selling simulants and was acquired by the Sierra Nevada Corporation. An 8-ton sand box of commercial JSC-1A is available for daily rental from the NASA Solar System Exploration Research Virtual Institute (SSERVI).
JSC-1A can geopolymerize in alkaline solutions, resulting in a hard, rock-like material. Tests show that the maximum compressive and flexural strength of the 'lunar' geopolymer is comparable to that of conventional cements.
JSC-1 and JSC-1A are now no longer available outside of NASA centers.
NU-LHT and OB-1
Two lunar highlands simulants, the NU-LHT (lunar highlands type) series and OB-1 (olivine-bytownite) were developed and produced in anticipation of the Constellation activities. Both of these simulants are sourced mostly from rare anorthosite deposits on the Earth. For NU-LHT the anorthosite came from the Stillwater complex, and for OB-1 it came from the Shawmere Anorthosite in Ontario. Neither of these simulants were widely distributed.
Recent simulants
Most of the previously developed lunar simulants are no longer being produced or distributed outside of NASA. Multiple companies have tried to sell regolith simulants for profit, including Zybek Advanced Products, ORBITEC, and Deep Space Industries. None of these efforts have seen much success. NASA is unable to sell simulants, or distribute unlimited amounts for free; however, NASA can award set amounts of simulant to grant winners.
Several lunar simulants have been developed recently and are either being sold commercially or are available for rent inside large regolith bins. These include the OPRL2N Standard Representative Lunar Mare Simulant and Standard Representative Lunar Highland Simulant. Off Planet Research also produces customized simulants for specific locations on the Moon including lunar polar icy regolith simulants that include the volatiles identified in the LCROSS mission.
Other simulants include Lunar Highlands Simulant (LHS-1) and Lunar Mare Simulant (LMS-1) produced and distributed by the not-for-profit Exolith Lab run out of the University of Central Florida.
Indian Space Research Organisation has developed its own lunar highland soil simulant called LSS-ISAC1 for its Chandrayaan programme. The raw material for this simulant was sourced from Sithampoondi and Kunnamalai villages in Tamil Nadu.
In 2020, a team of independent researchers from Thailand also developed the Thailand Lunar Simulant - Batch 1 (TLS-1) using domestic sources - the first ever successful simulant production attempt in the country, based on the properties of the Apollo 11 sample; further applications in the fields of space and materials engineering were also developed using the produced simulant.
See also
Lunar soil
Martian regolith simulant
Moon rock
References
Further reading
Concrete
Materials science
Space colonization | Lunar regolith simulant | [
"Physics",
"Materials_science",
"Engineering"
] | 1,157 | [
"Structural engineering",
"Applied and interdisciplinary physics",
"Materials science",
"nan",
"Concrete"
] |
22,862,692 | https://en.wikipedia.org/wiki/Nuts%20and%20bolts%20%28general%20relativity%29 | In physics, in the theory of general relativity, spacetimes with at least a 1-parameter group of isometries can be classified according to the fixed point-sets of the action. Isolated fixed points are called nuts. The other possibility is that the fixed point set is a metric 2-sphere, called bolt. The number of nuts and bolts can also be related to topological invariants, such as the Euler characteristic. This classification is widely used in the analysis of gravitational instantons.
References
Gibbons, G. W.; Hawking, S. W., Classification of gravitational instanton symmetries. Comm. Math. Phys. 66 (1979), no. 3, 291–310.
General relativity
Quantum gravity
Mathematical physics | Nuts and bolts (general relativity) | [
"Physics",
"Mathematics"
] | 154 | [
"Applied mathematics",
"Theoretical physics",
"Unsolved problems in physics",
"General relativity",
"Quantum gravity",
"Relativity stubs",
"Theory of relativity",
"Mathematical physics",
"Physics beyond the Standard Model"
] |
22,862,812 | https://en.wikipedia.org/wiki/Red%20heat | The practice of using colours to determine the temperature of a piece of (usually) ferrous metal comes from blacksmithing. Long before thermometers were widely available, it was necessary to know what state the metal was in for heat treating it and the only way to do this was to heat it up to a colour which was known to be best for the work.
Chapman
According to Chapman's Workshop Technology, the colours which can be observed in steel are:
Stirling
In 1905, Stirling Consolidated Boiler Company published a slightly different set of values:
See also
Black-body radiation
Color temperature
Incandescence
Notes
References
Metallurgy
Temperature | Red heat | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 130 | [
"Scalar physical quantities",
"Temperature",
"Thermodynamic properties",
"Physical quantities",
"Metallurgy",
"SI base quantities",
"Intensive quantities",
"Materials science",
"Thermodynamics",
"nan",
"Wikipedia categories named after physical quantities"
] |
22,863,481 | https://en.wikipedia.org/wiki/Composite%20fermion | A composite fermion is the topological bound state of an electron and an even number of quantized vortices, sometimes visually pictured as the bound state of an electron and, attached, an even number of magnetic flux quanta. Composite fermions were originally envisioned in the context of the fractional quantum Hall effect, but subsequently took on a life of their own, exhibiting many other consequences and phenomena.
Vortices are an example of topological defect, and also occur in other situations. Quantized vortices are found in type II superconductors, called Abrikosov vortices. Classical vortices are relevant to the Berezenskii–Kosterlitz–Thouless transition in two-dimensional XY model.
Description
When electrons are confined to two dimensions, cooled to very low temperatures, and subjected to a strong magnetic field, their kinetic energy is quenched due to Landau level quantization. Their behavior under such conditions is governed by the Coulomb repulsion alone, and they produce a strongly correlated quantum liquid. Experiments have shown that electrons minimize their interaction by capturing quantized vortices to become composite fermions. The interaction between composite fermions themselves is often negligible to a good approximation, which makes them the physical quasiparticles of this quantum liquid.
The signature quality of composite fermions, which is responsible for the otherwise unexpected behavior of this system, is that they experience a much smaller magnetic field than electrons. The magnetic field seen by composite fermions is given by
B* = B − 2pρφ₀,
where B is the external magnetic field, 2p is the number of vortices bound to each composite fermion (also called the vorticity or the vortex charge of the composite fermion), ρ is the particle density in two dimensions, and φ₀ is called the "flux quantum" (which differs from the superconducting flux quantum by a factor of two). The effective magnetic field B* is a direct manifestation of the existence of composite fermions, and also embodies a fundamental distinction between electrons and composite fermions.
Sometimes it is said that electrons "swallow" 2p flux quanta each to transform into composite fermions, and the composite fermions then experience the residual magnetic field B*. More accurately, the vortices bound to electrons produce their own geometric phases which partly cancel the Aharonov–Bohm phase due to the external magnetic field to generate a net geometric phase that can be modeled as an Aharonov–Bohm phase in an effective magnetic field B*.
The behavior of composite fermions is similar to that of electrons in an effective magnetic field B*. Electrons form Landau levels in a magnetic field, and the number of filled Landau levels is called the filling factor, given by the expression ν = ρφ₀/B. Composite fermions form Landau-like levels in the effective magnetic field B*, which are called composite fermion Landau levels or Λ levels. One defines the filling factor for composite fermions as ν* = ρφ₀/|B*|. This gives the following relation between the electron and composite fermion filling factors:
ν = ν* / (2pν* ± 1).
The minus sign occurs when the effective magnetic field is antiparallel to the applied magnetic field, which happens when the geometric phase from the vortices overcompensates the Aharonov–Bohm phase.
Experimental manifestations
The central statement of composite fermion theory is that the strongly correlated electrons at a magnetic field (or filling factor ) turn into weakly interacting composite fermions at a magnetic field (or composite fermion filling factor ). This allows an effectively single-particle explanation of the otherwise complex many-body behavior, with the interaction between electrons manifesting as an effective kinetic energy of composite fermions. Here are some of the phenomena arising from composite fermions:
Fermi sea
The effective magnetic field for composite fermions vanishes for B = 2pρφ₀, where the filling factor for electrons is ν = 1/2p. Here, composite fermions make a Fermi sea. This Fermi sea has been observed at half filled Landau level in a number of experiments, which also measure the Fermi wave vector.
Cyclotron orbits
As the magnetic field is moved slightly away from , composite fermions execute semiclassical cyclotron orbits. These have been observed by coupling to surface acoustic waves, resonance peaks in antidot superlattice, and magnetic focusing. The radius of the cyclotron orbits is consistent with the effective magnetic field and is sometimes an order of magnitude or more larger than the radius of the cyclotron orbit of an electron at the externally applied magnetic field . Also, the observed direction of trajectory is opposite to that of electrons when is anti-parallel to .
Cyclotron resonance
In addition to the cyclotron orbits, cyclotron resonance of composite fermions has also been observed by photoluminescence.
Shubnikov de Haas oscillations
As the magnetic field is moved further away from , quantum oscillations are observed that are periodic in These are Shubnikov–de Haas oscillations of composite fermions. These oscillations arise from the quantization of the semiclassical cyclotron orbits of composite fermions into composite fermion Landau levels. From the analysis of the Shubnikov–de Haas experiments, one can deduce the effective mass and the quantum lifetime of composite fermions.
Integer quantum Hall effect
With further increase in the effective magnetic field, or decrease in temperature and disorder, composite fermions exhibit the integer quantum Hall effect. The integer fillings of composite fermions, ν* = n, correspond to the electron fillings ν = n/(2pn + 1).
Combined with the fillings ν = 1 − n/(2pn + 1),
which are obtained by attaching vortices to holes in the lowest Landau level, these constitute the prominently observed sequences of fractions. Examples are ν = 1/3, 2/5, 3/7, 4/9, ... and ν = 2/3, 3/5, 4/7, ...
The fractional quantum Hall effect of electrons is thus explained as the integer quantum Hall effect of composite fermions. It results in fractionally quantized Hall plateaus at R_xy = h/(νe²),
with ν given by the above quantized values. These sequences terminate at the composite fermion Fermi sea. Note that the fractions have odd denominators, which follows from the even vorticity of composite fermions.
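As an illustration, the short Python sketch below enumerates the commonly quoted fractions ν = n/(2pn ± 1); the function name and the cut-off n_max are our own choices.

```python
from fractions import Fraction

def jain_fractions(p=1, n_max=4):
    """Electron filling factors obtained from n filled composite-fermion levels,
    for composite fermions carrying 2p vortices (2p = 2 when p = 1)."""
    result = set()
    for n in range(1, n_max + 1):
        result.add(Fraction(n, 2 * p * n + 1))   # effective field parallel to B
        result.add(Fraction(n, 2 * p * n - 1))   # effective field antiparallel to B
    return sorted(result)

print(jain_fractions())
# [1/3, 2/5, 3/7, 4/9, 4/7, 3/5, 2/3, 1] -- all with odd denominators
```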
Fractional quantum Hall effect
The above sequences account for most, but not all, observed fractions. Other fractions have been observed, which arise from a weak residual interaction between composite fermions, and are thus more delicate. A number of these are understood as fractional quantum Hall effect of composite fermions. For example, the fractional quantum Hall effect of composite fermions at ν* = 4/3 produces the fraction 4/11, which does not belong to the primary sequences.
Superconductivity
An even denominator fraction, ν = 5/2, has been observed. Here the second Landau level is half full, but the state cannot be a Fermi sea of composite fermions, because the Fermi sea is gapless and does not show quantum Hall effect. This state is viewed as a "superconductor" of composite fermions, arising from a weak attractive interaction between composite fermions at this filling factor. The pairing of composite fermions opens a gap and produces a fractional quantum Hall effect.
Excitons
The neutral excitations of various fractional quantum Hall states are excitons of composite fermions, that is, particle hole pairs of composite fermions. The energy dispersion of these excitons has been measured by light scattering and phonon scattering.
Spin
At high magnetic fields the spin of composite fermions is frozen, but it is observable at relatively low magnetic fields. The fan diagram of the composite fermion Landau levels has been determined by transport, and shows both spin-up and spin-down composite fermion Landau levels. The fractional quantum Hall states as well as composite fermion Fermi sea are also partially spin polarized for relatively low magnetic fields.
Effective magnetic field
The effective magnetic field of composite fermions has been confirmed by the similarity of the fractional and the integer quantum Hall effects, observation of Fermi sea at half filled Landau level, and measurements of the cyclotron radius.
Mass
The mass of composite fermions has been determined from the measurements of: the effective cyclotron energy of composite fermions; the temperature dependence of Shubnikov–de Haas oscillations; energy of the cyclotron resonance; spin polarization of the Fermi sea; and quantum phase transitions between states with different spin polarizations. Its typical value in GaAs systems is on the order of the electron mass in vacuum. (It is unrelated to the electron band mass in GaAs, which is 0.07 of the electron mass in vacuum.)
Theoretical formulations
Much of the experimental phenomenology can be understood from the qualitative picture of composite fermions in an effective magnetic field. In addition, composite fermions also lead to a detailed and accurate microscopic theory of this quantum liquid. Two approaches have proved useful.
Trial wave functions
The following trial wave functions embody the composite fermion physics:
Ψ_ν = P_LLL ∏_{j<k} (z_j − z_k)^{2p} Φ_{ν*}.
Here Ψ_ν is the wave function of interacting electrons at filling factor ν; Φ_{ν*} is the wave function for weakly interacting electrons at ν*; N is the number of electrons or composite fermions; z_j = x_j + iy_j is the complex coordinate of the jth particle; and P_LLL is an operator that projects the wave function into the lowest Landau level. This provides an explicit mapping between the integer and the fractional quantum Hall effects. Multiplication by ∏_{j<k} (z_j − z_k)^{2p} attaches 2p vortices to each electron to convert it into a composite fermion. The right hand side is thus interpreted as describing composite fermions at filling factor ν*. The above mapping gives wave functions for both the ground and excited states of the fractional quantum Hall states in terms of the corresponding known wave functions for the integral quantum Hall states. The latter do not contain any adjustable parameters for ν* = n, so the FQHE wave functions do not contain any adjustable parameters at ν = n/(2pn ± 1).
Comparisons with exact results show that these wave functions are quantitatively accurate. They can be used to compute a number of measurable quantities, such as the excitation gaps and exciton dispersions, the phase diagram of composite fermions with spin, the composite fermion mass, etc. For ν* = 1 they reduce to the Laughlin wavefunction at fillings ν = 1/(2p + 1).
Chern–Simons field theory
Another formulation of the composite fermion physics is through a Chern–Simons field theory, wherein flux quanta are attached to electrons by a singular gauge transformation. At the mean field approximation the physics of free fermions in an effective field is recovered. Perturbation theory at the level of the random phase approximation captures many of the properties of composite fermions.
See also
Composite boson
Integer quantum Hall effect
Fractional quantum Hall effect
References
External links
Nobel Lecture: "The fractional quantum Hall effect" by H. L. Stormer
"Composite Fermions: New particles in the fractional quantum Hall effect", by H. Störmer and D. Tsui, Physics News in 1994, American Institute of Physics, 1995, p. 33.
Composite Fermion at the Pennsylvania State University
Composite Fermions - von Klitzing's department at the Max Planck Institute
"Composite fermions are real" at the Physics News Update, American Institute of Physics
"Half filled Landau level yields intriguing data and theory" in Physics Today
"The composite fermion: A quantum particle and its quantum fluids" in Physics Today
Hall effect
Quasiparticles
Quantum phases | Composite fermion | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,323 | [
"Quantum phases",
"Physical phenomena",
"Hall effect",
"Phases of matter",
"Quantum mechanics",
"Electric and magnetic fields in matter",
"Electrical phenomena",
"Subatomic particles",
"Condensed matter physics",
"Quasiparticles",
"Solid state engineering",
"Matter"
] |
22,863,648 | https://en.wikipedia.org/wiki/Band%20V | Band V (meaning Band 5) is the name of a radio frequency range within the ultra high frequency part of the electromagnetic spectrum. It is not to be confused with the V band in the extremely high frequency part of the spectrum.
Sources differ on the exact frequency range of UHF Band V. For example, the Broadcast engineer's reference book and the BBC define the range as 614 to 854 MHz. The IPTV India Forum define the range as 582 to 806 MHz and the DVB Worldwide website refers to the range as 585 to 806 MHz. Band V is primarily used for analogue and digital (DVB-T & ATSC) television broadcasting, as well as radio microphones and services intended for mobile devices such as DVB-H. With the close-down of analogue television services, most countries have auctioned off the frequencies from 694 MHz upwards to 4G cellular network providers.
Television
Australia
In Australia UHF channel allocations are 7 MHz wide. Band V includes channels 36 to 69, with base frequencies of 585.5 MHz to 816.5 MHz. More details are available on the television frequencies page.
New Zealand
In New Zealand UHF channel allocations are 8 MHz wide. Band V includes digital channels 36 to 49, with base frequencies of 594.0 MHz to 698.0 MHz. More details are available on the television frequencies page.
United Kingdom
In the UK, Band V allocations for television are 8 MHz wide, traditionally consisting of 30 channels from UHF 39 to 68 inclusive. There is also a channel 69. Semi-wideband aerials of the group E type cover this entire band. However, aerials of types group B and group C/D will cover the lower and upper halves of Band V respectively with higher gain than a group E.
The following table shows TV channel allocations in Band V in the UK.
Rows with a yellow background (channels 61–68 inclusive) indicate channels cleared for 4G mobile broadband services following an auction run by the UK spectrum regulator Ofcom in January 2013 and the subsequent award of spectrum (which also included channel 69) to the winning mobile operators on 1 March 2013.
Rows with an orange background (channels 49–60 inclusive) indicate channels that are due to be cleared so that from 2022 they can be used by mobile data services. The decision to reallocate these channels was published by Ofcom on 19 November 2014.
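For orientation, the Python sketch below converts UK UHF channel numbers to approximate frequency edges; it assumes the conventional 8 MHz raster in which channel 21 starts at 470 MHz, and the exact plan should be checked against Ofcom documentation.

```python
def uk_uhf_channel_edges(channel):
    """Approximate lower/upper edges (MHz) of a UK 8 MHz UHF TV channel,
    assuming channel 21 occupies 470-478 MHz."""
    lower = 470 + 8 * (channel - 21)
    return lower, lower + 8

for ch in (39, 48, 60, 68):
    lo, hi = uk_uhf_channel_edges(ch)
    print(f"channel {ch}: {lo}-{hi} MHz")
# channel 39: 614-622 MHz ... channel 68: 846-854 MHz
```

Under this assumption channel 39 starts at 614 MHz and channel 68 tops out at 854 MHz, matching the BBC definition of Band V quoted above, while channel 49 starts at 694 MHz, the boundary of the mobile-data clearance.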
United States
698–806 MHz: Was auctioned in March 2008; bidders got full use after the transition to digital TV was completed on June 12, 2009 (formerly UHF TV channels 52–69). T-Mobile USA, licensee of "block A" (channels 52 and 57), began using its frequency allotment in 2015, in media markets where TV stations on 51 either did not exist or relocated early.
614–698 MHz (TV channels 38–51) will be auctioned in March 2016.
References
Radio spectrum
Broadcast engineering | Band V | [
"Physics",
"Engineering"
] | 608 | [
"Broadcast engineering",
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Electronic engineering"
] |
6,793,014 | https://en.wikipedia.org/wiki/Rotation%20of%20axes%20in%20two%20dimensions | In mathematics, a rotation of axes in two dimensions is a mapping from an xy-Cartesian coordinate system to an x′y′-Cartesian coordinate system in which the origin is kept fixed and the x′ and y′ axes are obtained by rotating the x and y axes counterclockwise through an angle . A point P has coordinates (x, y) with respect to the original system and coordinates (x′, y′) with respect to the new system. In the new coordinate system, the point P will appear to have been rotated in the opposite direction, that is, clockwise through the angle . A rotation of axes in more than two dimensions is defined similarly. A rotation of axes is a linear map and a rigid transformation.
Motivation
Coordinate systems are essential for studying the equations of curves using the methods of analytic geometry. To use the method of coordinate geometry, the axes are placed at a convenient position with respect to the curve under consideration. For example, to study the equations of ellipses and hyperbolas, the foci are usually located on one of the axes and are situated symmetrically with respect to the origin. If the curve (hyperbola, parabola, ellipse, etc.) is not situated conveniently with respect to the axes, the coordinate system should be changed to place the curve at a convenient and familiar location and orientation. The process of making this change is called a transformation of coordinates.
The solutions to many problems can be simplified by rotating the coordinate axes to obtain new axes through the same origin.
Derivation
The equations defining the transformation in two dimensions, which rotates the xy axes counterclockwise through an angle into the x′y′ axes, are derived as follows.
In the xy system, let the point P have polar coordinates (r, α). Then, in the x′y′ system, P will have polar coordinates (r, α − θ).
Using trigonometric functions, we have
x = r cos α      (1)
y = r sin α      (2)
and using the standard trigonometric formulae for differences, we have
x′ = r cos(α − θ) = r cos α cos θ + r sin α sin θ      (3)
y′ = r sin(α − θ) = r sin α cos θ − r cos α sin θ      (4)
Substituting equations (1) and (2) into equations (3) and (4), we obtain
x′ = x cos θ + y sin θ      (5)
y′ = −x sin θ + y cos θ      (6)
Equations (5) and (6) can be represented in matrix form as
[x′, y′]^T = [[cos θ, sin θ], [−sin θ, cos θ]] [x, y]^T,
which is the standard matrix equation of a rotation of axes in two dimensions.
The inverse transformation is
x = x′ cos θ − y′ sin θ      (7)
y = x′ sin θ + y′ cos θ      (8)
or
[x, y]^T = [[cos θ, −sin θ], [sin θ, cos θ]] [x′, y′]^T.
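A small Python sketch of the forward transformation above (the function name is ours):

```python
import math

def rotate_axes(x, y, theta):
    """Coordinates (x', y') of the point (x, y) after the axes are rotated
    counterclockwise through the angle theta (in radians)."""
    x_new = x * math.cos(theta) + y * math.sin(theta)
    y_new = -x * math.sin(theta) + y * math.cos(theta)
    return x_new, y_new

# A point on the ray 30 degrees above the x axis lands on the new x' axis
# when the axes are rotated counterclockwise by 30 degrees.
print(rotate_axes(math.sqrt(3), 1.0, math.pi / 6))  # approximately (2.0, 0.0)
```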
Examples in two dimensions
Example 1
Find the coordinates of the point after the axes have been rotated through the angle θ_1 = π/6, or 30°.
Solution:
The axes have been rotated counterclockwise through an angle of and the new coordinates are . Note that the point appears to have been rotated clockwise through with respect to fixed axes so it now coincides with the (new) x′ axis.
Example 2
Find the coordinates of the point after the axes have been rotated clockwise 90°, that is, through the angle θ_2 = −π/2, or −90°.
Solution:
The axes have been rotated through an angle of , which is in the clockwise direction and the new coordinates are . Again, note that the point appears to have been rotated counterclockwise through with respect to fixed axes.
Rotation of conic sections
The most general equation of the second degree has the form
Ax² + Bxy + Cy² + Dx + Ey + F = 0      (9)
(where A, B and C are not all zero).
Through a change of coordinates (a rotation of axes and a translation of axes), equation (9) can be put into a standard form, which is usually easier to work with. It is always possible to rotate the coordinates at a specific angle so as to eliminate the x′y′ term. Substituting equations (7) and (8) into equation (9), we obtain
A′x′² + B′x′y′ + C′y′² + D′x′ + E′y′ + F′ = 0      (10)
where
A′ = A cos²θ + B sin θ cos θ + C sin²θ,
B′ = B cos 2θ − (A − C) sin 2θ,
C′ = A sin²θ − B sin θ cos θ + C cos²θ,
D′ = D cos θ + E sin θ,
E′ = −D sin θ + E cos θ,
F′ = F.
If θ is selected so that cot 2θ = (A − C)/B, we will have B′ = 0 and the x′y′ term in equation (10) will vanish.
When a problem arises with B, D and E all different from zero, they can be eliminated by performing in succession a rotation (eliminating B) and a translation (eliminating the D and E terms).
Identifying rotated conic sections
A non-degenerate conic section given by equation (9) can be identified by evaluating the discriminant B² − 4AC (a short numerical sketch of this test follows the list below). The conic section is:
an ellipse or a circle, if B² − 4AC < 0;
a parabola, if B² − 4AC = 0;
a hyperbola, if B² − 4AC > 0.
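A brief numerical sketch of this classification, together with the rotation angle from the cot 2θ condition above (the function names and the xy = 1 example are ours):

```python
import math

def classify_conic(A, B, C):
    """Classify a non-degenerate conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0
    by the sign of the discriminant B^2 - 4AC."""
    disc = B * B - 4 * A * C
    if disc < 0:
        return "ellipse or circle"
    if disc == 0:
        return "parabola"
    return "hyperbola"

def cross_term_rotation(A, B, C):
    """Axis-rotation angle theta that removes the xy term (cot 2theta = (A - C)/B)."""
    return 0.5 * math.atan2(B, A - C)

# xy = 1 (A = 0, B = 1, C = 0) is a hyperbola; rotating the axes by 45 degrees
# removes the cross term.
print(classify_conic(0, 1, 0))                      # hyperbola
print(math.degrees(cross_term_rotation(0, 1, 0)))   # 45.0
```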
Generalization to several dimensions
Suppose a rectangular xyz-coordinate system is rotated around its z axis counterclockwise (looking down the positive z axis) through an angle θ, that is, the positive x axis is rotated immediately into the positive y axis. The z coordinate of each point is unchanged and the x and y coordinates transform as above. The old coordinates (x, y, z) of a point Q are related to its new coordinates (x′, y′, z′) by
x′ = x cos θ + y sin θ,
y′ = −x sin θ + y cos θ,
z′ = z.
Generalizing to any finite number of dimensions, a rotation matrix is an orthogonal matrix that differs from the identity matrix in at most four elements. These four elements are of the form
a_ii = a_jj = cos θ
and
a_ij = −a_ji = sin θ
for some angle θ and some i ≠ j.
Example in several dimensions
Example 3
Find the coordinates of the point after the positive w axis has been rotated through the angle θ = π/12, or 15°, into the positive z axis.
Solution:
See also
Rotation
Rotation (mathematics)
Notes
References
Functions and mappings
Euclidean geometry
Linear algebra
Transformation (function)
Rotation | Rotation of axes in two dimensions | [
"Physics",
"Mathematics"
] | 1,019 | [
"Mathematical analysis",
"Physical phenomena",
"Functions and mappings",
"Transformation (function)",
"Mathematical objects",
"Classical mechanics",
"Rotation",
"Motion (physics)",
"Mathematical relations",
"Geometry",
"Linear algebra",
"Algebra"
] |
6,793,679 | https://en.wikipedia.org/wiki/Pointwise%20mutual%20information | In statistics, probability theory and information theory, pointwise mutual information (PMI), or point mutual information, is a measure of association. It compares the probability of two events occurring together to what this probability would be if the events were independent.
PMI (especially in its positive pointwise mutual information variant) has been described as "one of the most important concepts in NLP", where it "draws on the intuition that the best way to weigh the association between two words is to ask how much more the two words co-occur in [a] corpus than we would have expected them to appear by chance."
The concept was introduced in 1961 by Robert Fano under the name of "mutual information", but today that term is instead used for a related measure of dependence between random variables: The mutual information (MI) of two discrete random variables refers to the average PMI of all possible events.
Definition
The PMI of a pair of outcomes x and y belonging to discrete random variables X and Y quantifies the discrepancy between the probability of their coincidence given their joint distribution and their individual distributions, assuming independence. Mathematically:
pmi(x; y) ≡ log [ p(x, y) / (p(x) p(y)) ] = log [ p(x | y) / p(x) ] = log [ p(y | x) / p(y) ]
(with the latter two expressions being equal to the first by Bayes' theorem). The mutual information (MI) of the random variables X and Y is the expected value of the PMI (over all possible outcomes).
The measure is symmetric (pmi(x; y) = pmi(y; x)). It can take positive or negative values, but is zero if X and Y are independent. Note that even though PMI may be negative or positive, its expected outcome over all joint events (MI) is non-negative. PMI maximizes when X and Y are perfectly associated (i.e. p(x | y) = 1 or p(y | x) = 1), yielding the following bounds:
−∞ ≤ pmi(x; y) ≤ min( −log p(x), −log p(y) ).
Finally, pmi(x; y) will increase if p(x | y) is fixed but p(x) decreases.
Here is an example to illustrate:
Using this table we can marginalize to get the following additional table for the individual distributions:
With this example, we can compute four values for pmi(x; y). Using base-2 logarithms:
(For reference, the mutual information would then be 0.2141709.)
Similarities to mutual information
Pointwise Mutual Information has many of the same relationships as the mutual information. In particular,
pmi(x; y) = h(x) + h(y) − h(x, y),
where h(x) is the self-information, or −log₂ p(x).
Variants
Several variations of PMI have been proposed, in particular to address what has been described as its "two main limitations":
PMI can take both positive and negative values and has no fixed bounds, which makes it harder to interpret.
PMI has "a well-known tendency to give higher scores to low-frequency events", but in applications such as measuring word similarity, it is preferable to have "a higher score for pairs of words whose relatedness is supported by more evidence."
Positive PMI
The positive pointwise mutual information (PPMI) measure is defined by setting negative values of PMI to zero:
ppmi(x; y) ≡ max( pmi(x; y), 0 ).
This definition is motivated by the observation that "negative PMI values (which imply things are co-occurring less often than we would expect by chance) tend to be unreliable unless our corpora are enormous" and also by a concern that "it's not clear whether it's even possible to evaluate such scores of 'unrelatedness' with human judgment". It also avoids having to deal with −∞ values for events that never occur together (p(x, y) = 0), by setting PPMI for these to 0.
Normalized pointwise mutual information (npmi)
Pointwise mutual information can be normalized between [-1,+1] resulting in -1 (in the limit) for never occurring together, 0 for independence, and +1 for complete co-occurrence:
npmi(x; y) = pmi(x; y) / h(x, y),
where h(x, y) is the joint self-information, −log₂ p(x, y).
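The following Python sketch computes pmi, ppmi and npmi from given probabilities; the function names and the example numbers are illustrative only.

```python
import math

def pmi(p_xy, p_x, p_y):
    """Pointwise mutual information, log2 of p(x,y) / (p(x) p(y))."""
    return math.log2(p_xy / (p_x * p_y))

def ppmi(p_xy, p_x, p_y):
    """Positive PMI: negative values (and the p(x,y) = 0 case) are mapped to 0."""
    if p_xy == 0:
        return 0.0
    return max(pmi(p_xy, p_x, p_y), 0.0)

def npmi(p_xy, p_x, p_y):
    """Normalized PMI in [-1, 1]: pmi divided by the joint self-information."""
    return pmi(p_xy, p_x, p_y) / -math.log2(p_xy)

# Illustrative values: p(x) = 0.3, p(y) = 0.4, p(x, y) = 0.2 > 0.12, so the
# outcomes co-occur more often than independence would predict.
print(pmi(0.2, 0.3, 0.4), ppmi(0.2, 0.3, 0.4), npmi(0.2, 0.3, 0.4))
```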
PMIk family
The PMIk measure (for k=2, 3 etc.), which was introduced by Béatrice Daille around 1994, and as of 2011 was described as being "among the most widely used variants", is defined as
pmi^k(x; y) ≡ log [ p(x, y)^k / (p(x) p(y)) ].
In particular, pmi^1(x; y) = pmi(x; y). The additional factors of p(x, y) inside the logarithm are intended to correct the bias of PMI towards low-frequency events, by boosting the scores of frequent pairs. A 2011 case study demonstrated the success of PMI3 in correcting this bias on a corpus drawn from English Wikipedia. Taking x to be the word "football", its most strongly associated words y according to the PMI measure (i.e. those maximizing pmi(x; y)) were domain-specific ("midfielder", "cornerbacks", "goalkeepers") whereas the terms ranked most highly by PMI3 were much more general ("league", "clubs", "england").
Specific Correlation
Total correlation is an extension of mutual information to multi-variables. Analogously to the definition of total correlation, the extension of PMI to multi-variables is "specific correlation."
The SI of the outcomes x_1, ..., x_n of n random variables is expressed as the following:
SI(x_1, ..., x_n) = log [ p(x_1, ..., x_n) / (p(x_1) p(x_2) ⋯ p(x_n)) ].
Chain-rule
Like mutual information, point mutual information follows the chain rule, that is,
pmi(x; yz) = pmi(x; y) + pmi(x; z | y).
This is proven through application of Bayes' theorem:
pmi(x; y) + pmi(x; z | y) = log [ p(x, y) / (p(x) p(y)) ] + log [ p(x, z | y) / (p(x | y) p(z | y)) ] = log [ p(x, y, z) / (p(x) p(y, z)) ] = pmi(x; yz).
Applications
PMI could be used in various disciplines e.g. in information theory, linguistics or chemistry (in profiling and analysis of chemical compounds). In computational linguistics, PMI has been used for finding collocations and associations between words. For instance, counts of occurrences and co-occurrences of words in a text corpus can be used to approximate the probabilities p(x) and p(x, y) respectively. The following table shows counts of pairs of words getting the most and the least PMI scores in the first 50 million words in Wikipedia (dump of October 2015) filtering by 1,000 or more co-occurrences. The frequency of each count can be obtained by dividing its value by 50,000,952. (Note: natural log is used to calculate the PMI values in this example, instead of log base 2)
Good collocation pairs have high PMI because the probability of co-occurrence is only slightly lower than the probabilities of occurrence of each word. Conversely, a pair of words whose probabilities of occurrence are considerably higher than their probability of co-occurrence gets a small PMI score.
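A toy Python estimator in the same spirit (a simple sliding-window count, not the methodology behind the table above) shows how such scores can be computed from raw text.

```python
import math
from collections import Counter

def word_pair_pmi(tokens, window=2):
    """PMI scores for word pairs co-occurring within a small sliding window,
    estimated from raw counts; a toy estimator for illustration only."""
    word_counts = Counter(tokens)
    pair_counts = Counter()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1:i + 1 + window]:
            pair_counts[tuple(sorted((w, v)))] += 1
    n_words = len(tokens)
    n_pairs = sum(pair_counts.values())
    scores = {}
    for (w, v), c in pair_counts.items():
        p_xy = c / n_pairs
        p_x = word_counts[w] / n_words
        p_y = word_counts[v] / n_words
        scores[(w, v)] = math.log(p_xy / (p_x * p_y))   # natural log, as in the table
    return scores

toy_corpus = "the cat sat on the mat the cat ate the fish".split()
print(sorted(word_pair_pmi(toy_corpus).items(), key=lambda kv: -kv[1])[:3])
```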
References
External links
Demo at Rensselaer MSR Server (PMI values normalized to be between 0 and 1)
Information theory
Summary statistics for contingency tables
Entropy and information | Pointwise mutual information | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 1,281 | [
"Telecommunications engineering",
"Physical quantities",
"Applied mathematics",
"Entropy and information",
"Computer science",
"Entropy",
"Information theory",
"Dynamical systems"
] |
6,797,958 | https://en.wikipedia.org/wiki/Extracellular%20signal-regulated%20kinases | In molecular biology, extracellular signal-regulated kinases (ERKs) or classical MAP kinases are widely expressed protein kinase intracellular signalling molecules that are involved in functions including the regulation of meiosis, mitosis, and postmitotic functions in differentiated cells. Many different stimuli, including growth factors, cytokines, virus infection, ligands for heterotrimeric G protein-coupled receptors, transforming agents, and carcinogens, activate the ERK pathway.
The term, "extracellular signal-regulated kinases", is sometimes used as a synonym for mitogen-activated protein kinase (MAPK), but has more recently been adopted for a specific subset of the mammalian MAPK family.
In the MAPK/ERK pathway, Ras activates c-Raf, followed by mitogen-activated protein kinase kinase (abbreviated as MKK, MEK, or MAP2K) and then MAPK1/2 (below). Ras is typically activated by growth hormones through receptor tyrosine kinases and GRB2/SOS, but may also receive other signals. ERKs are known to activate many transcription factors, such as ELK1, and some downstream protein kinases.
Disruption of the ERK pathway is common in cancers, especially Ras, c-Raf, and receptors such as HER2.
Mitogen-activated protein kinase 1
Mitogen-activated protein kinase 1 (MAPK1) is also known as extracellular signal-regulated kinase 2 (ERK2). Two similar protein kinases with 85% sequence identity were originally called ERK1 and ERK2. They were found during a search for protein kinases that are rapidly phosphorylated after activation of cell surface tyrosine kinases such as the epidermal growth factor receptor. Phosphorylation of ERKs leads to the activation of their kinase activity.
The molecular events linking cell surface receptors to activation of ERKs are complex. It was found that Ras GTP-binding proteins are involved in the activation of ERKs. Another protein kinase, Raf-1, was shown to phosphorylate a "MAP kinase-kinase", thus qualifying as a "MAP kinase kinase kinase". The MAP kinase-kinase, which activates ERK, was named "MAPK/ERK kinase" (MEK).
Receptor-linked tyrosine kinases, Ras, Raf, MEK, and MAPK could be fitted into a signaling cascade linking an extracellular signal to MAPK activation. See: MAPK/ERK pathway.
Transgenic gene knockout mice lacking MAPK1 have major defects in early development. Conditional deletion of Mapk1 in B cells showed a role for MAPK1 in T-cell-dependent antibody production. A dominant gain-of-function mutant of Mapk1 in transgenic mice showed a role for MAPK1 in T-cell development. Conditional inactivation of Mapk1 in neural progenitor cells of the developing cortex led to a reduction of cortical thickness and reduced proliferation in neural progenitor cells.
Mitogen-activated protein kinase 3
Mitogen-activated protein kinase 3 (MAPK3) is also known as extracellular signal-regulated kinase 1 (ERK1). Transgenic gene knockout mice lacking MAPK3 are viable and it is thought that MAPK1 can fulfill some MAPK3 functions in most cells. The main exception is in T cells. Mice lacking MAPK3 have reduced T cell development past the CD4+ and CD8+ stage.
Clinical significance
Activation of the ERK1/2 pathway by aberrant RAS/RAF signalling, DNA damage, and oxidative stress leads to cellular senescence. Low doses of DNA damage resulting from cancer therapy cause ERK1/2 to induce senescence, whereas higher doses of DNA damage fail to activate ERK1/2, and thus induce cell death by apoptosis.
References
External links
The Extracellular Signal-Regulated Kinases
MAP Kinase Resource .
MAPK1
MAPK3 Info with links in the Cell Migration Gateway
Signal transduction
Mitogen-activated protein kinases
EC 2.7.11 | Extracellular signal-regulated kinases | [
"Chemistry",
"Biology"
] | 852 | [
"Biochemistry",
"Neurochemistry",
"Signal transduction"
] |
21,409,717 | https://en.wikipedia.org/wiki/Abelian%20sandpile%20model | The Abelian sandpile model (ASM) is the more popular name of the original Bak–Tang–Wiesenfeld model (BTW). The BTW model was the first discovered example of a dynamical system displaying self-organized criticality. It was introduced by Per Bak, Chao Tang and Kurt Wiesenfeld in a 1987 paper.
Three years later Deepak Dhar discovered that the BTW sandpile model follows abelian dynamics, and therefore referred to this model as the Abelian sandpile model.
The model is a cellular automaton. In its original formulation, each site on a finite grid has an associated value that corresponds to the slope of the pile. This slope builds up as "grains of sand" (or "chips") are randomly placed onto the pile, until the slope exceeds a specific threshold value, at which time that site collapses, transferring sand into the adjacent sites and increasing their slope. Bak, Tang, and Wiesenfeld considered the process of successive random placement of sand grains on the grid; each such placement of sand at a particular site may have no effect, or it may cause a cascading reaction that will affect many sites.
Dhar showed that the final stable sandpile configuration reached after an avalanche terminates is independent of the precise sequence of topplings followed during the avalanche. As a direct consequence, if two sand grains are added to a stable configuration in two different orders, e.g., first at site A and then at site B, or first at B and then at A, the final stable configuration turns out to be exactly the same. When a sand grain is added to a stable sandpile configuration, it triggers an avalanche which eventually stops, leaving another stable configuration. Dhar proposed that the addition of a sand grain can be looked upon as an operator: when it acts on one stable configuration, it produces another stable configuration. Dhar showed that all such addition operators form an abelian group, hence the name Abelian sandpile model.
The model has since been studied on the infinite lattice, on other (non-square) lattices, and on arbitrary graphs (including directed multigraphs). It is closely related to the dollar game, a variant of the chip-firing game introduced by Biggs.
Definition (rectangular grids)
The sandpile model is a cellular automaton originally defined on a rectangular grid (checkerboard) of the standard square lattice .
To each vertex (site, field) of the grid, we associate a value (grains of sand, slope, particles), with the resulting assignment referred to as the (initial) configuration of the sandpile.
The dynamics of the automaton at iteration are then defined as follows:
Choose a random vertex according to some probability distribution (usually uniform).
Add one grain of sand to this vertex while leaving the grain numbers of all other vertices unchanged, i.e. set and for all .
If all vertices are stable, i.e. for all , also the configuration is said to be stable. In this case, continue with the next iteration.
If at least one vertex is unstable, i.e. for some , the whole configuration is said to be unstable. In this case, choose any unstable vertex at random. Topple this vertex by reducing its grain number by four and by increasing the grain numbers of each of its (at most four) direct neighbors by one, i.e. set, and if . If a vertex at the boundary of the domain topples, this results in a net loss of grains (two grains at the corner of the grid, one grain otherwise).
Due to the redistribution of grains, the toppling of one vertex can render other vertices unstable. Thus, repeat the toppling procedure until all vertices of eventually become stable and continue with the next iteration.
The toppling of several vertices during one iteration is referred to as an avalanche. Every avalanche is guaranteed to eventually stop, i.e. after a finite number of topplings some stable configuration is reached such that the automaton is well defined. Moreover, although there will often be many possible choices for the order in which to topple vertices, the final stable configuration does not depend on the chosen order; this is one sense in which the sandpile is abelian. Similarly, the number of times each vertex topples during each iteration is also independent of the choice of toppling order.
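A minimal Python sketch of these rules on a small grid (the grid size, random seed and drop sites are illustrative choices; grains toppled across the boundary are simply discarded, which plays the role of the boundary loss described above). The final assertion checks the order-independence just mentioned:

import numpy as np

def stabilize(grid):
    """Topple every unstable site (>= 4 grains) until the configuration is stable.

    Grains sent across the boundary fall off the grid and are lost,
    which guarantees that every avalanche terminates.
    """
    grid = grid.copy()
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            return grid
        for i, j in unstable:
            grid[i, j] -= 4
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    grid[ni, nj] += 1

def drop(grid, site):
    """Add one grain at `site` and relax the resulting avalanche."""
    grid = grid.copy()
    grid[site] += 1
    return stabilize(grid)

rng = np.random.default_rng(0)
start = stabilize(rng.integers(0, 6, size=(8, 8)))

# Abelian property: dropping at A then B gives the same stable
# configuration as dropping at B then A.
a, b = (2, 3), (5, 5)
assert np.array_equal(drop(drop(start, a), b), drop(drop(start, b), a))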
Definition (undirected finite multigraphs)
To generalize the sandpile model from the rectangular grid of the standard square lattice to an arbitrary undirected finite multigraph , a special vertex called the sink is specified that is not allowed to topple. A configuration (state) of the model is then a function counting the non-negative number of grains on each non-sink vertex. A non-sink vertex with
is unstable; it can be toppled, which sends one of its grains to each of its (non-sink) neighbors:
for all , .
The cellular automaton then progresses as before, i.e. by adding, in each iteration, one particle to a randomly chosen non-sink vertex and toppling until all vertices are stable.
The definition of the sandpile model given above for finite rectangular grids of the standard square lattice can then be seen as a special case of this definition: consider the graph which is obtained from by adding an additional vertex, the sink, and by drawing additional edges from the sink to every boundary vertex of such that the degree of every non-sink vertex of is four. In this manner, also sandpile models on non-rectangular grids of the standard square lattice (or of any other lattice) can be defined: Intersect some bounded subset of with . Contract every edge of whose two endpoints are not in . The single remaining vertex outside of then constitutes the sink of the resulting sandpile graph.
Transient and recurrent configurations
In the dynamics of the sandpile automaton defined above, some stable configurations ( for all ) appear infinitely often, while others can only appear a finite number of times (if at all). The former are referred to as recurrent configurations, while the latter are referred to as transient configurations. The recurrent configurations thereby consist of all stable non-negative configurations which can be reached from any other stable configuration by repeatedly adding grains of sand to vertices and toppling. It is easy to see that the minimally stable configuration , where every vertex carries grains of sand, is reachable from any other stable configuration (add grains to every vertex). Thus, equivalently, the recurrent configurations are exactly those configurations which can be reached from the minimally stable configuration by only adding grains of sand and stabilizing.
Not every non-negative stable configuration is recurrent. For example, in every sandpile model on a graph consisting of at least two connected non-sink vertices, every stable configuration where both vertices carry zero grains of sand is non-recurrent. To prove this, first note that the addition of grains of sand can only increase the total number of grains carried by the two vertices together. To reach a configuration where both vertices carry zero particles from a configuration where this is not the case thus necessarily involves steps where at least one of the two vertices is toppled. Consider the last one of these steps. In this step, one of the two vertices has to topple last. Since toppling transfers a grain of sand to every neighboring vertex, this implies that the total number of grains carried by both vertices together cannot be lower than one, which concludes the proof.
Sandpile group
Given a configuration , for all , toppling unstable non-sink vertices on a finite connected graph until no unstable non-sink vertex remains leads to a unique stable configuration , which is called the stabilization of . Given two stable configurations and , we can define the operation , corresponding to the vertex-wise addition of grains followed by the stabilization of the resulting sandpile.
Given an arbitrary but fixed ordering of the non-sink vertices, multiple toppling operations, which can e.g. occur during the stabilization of an unstable configuration, can be efficiently encoded by using the graph Laplacian , where is the degree matrix and is the adjacency matrix of the graph.
Deleting the row and column of corresponding with the sink yields the reduced graph Laplacian . Then, when starting with a configuration and toppling each vertex a total of times yields the configuration , where is the contraction product. Furthermore, if corresponds to the number of times each vertex is toppled during the stabilization of a given configuration , then
In this case, is referred to as the toppling or odometer function (of the stabilization of ).
Under the operation , the set of recurrent configurations forms an abelian group isomorphic to the cokernel of the reduced graph Laplacian , i.e. to , whereby denotes the number of vertices (including the sink). More generally, the set of stable configurations (transient and recurrent) forms a commutative monoid under the operation . The minimal ideal of this monoid is then isomorphic to the group of recurrent configurations.
The group formed by the recurrent configurations, as well as the group to which the former is isomorphic, is most commonly referred to as the sandpile group. Other common names for the same group are critical group, Jacobian group or (less often) Picard group. Note, however, that some authors only denote the group formed by the recurrent configurations as the sandpile group, while reserving the name Jacobian group or critical group for the (isomorphic) group defined by (or for related isomorphic definitions). Finally, some authors use the name Picard group to refer to the direct product of the sandpile group and , which naturally appears in a cellular automaton closely related to the sandpile model, referred to as the chip firing or dollar game.
Given the isomorphisms stated above, the order of the sandpile group is the determinant of , which by the matrix tree theorem is the number of spanning trees of the graph.
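A short Python sketch of this determinant computation, building the reduced Laplacian for the grid-with-sink construction described earlier (the grid sizes are illustrative; edges to the sink simply do not appear in the reduced matrix, which is why every diagonal entry is 4):

import numpy as np
from itertools import product

def sandpile_group_order(rows, cols):
    """Order of the sandpile group of a rows x cols grid.

    Builds the reduced graph Laplacian of the grid-with-sink construction
    (every non-sink vertex topples 4 grains; boundary vertices send the
    remainder to the sink) and returns its determinant, which by the
    matrix tree theorem also counts spanning trees of the sandpile graph.
    """
    sites = list(product(range(rows), range(cols)))
    index = {s: k for k, s in enumerate(sites)}
    n = len(sites)
    laplacian = 4 * np.eye(n)
    for (i, j) in sites:
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if (ni, nj) in index:      # neighbours that are the sink are dropped
                laplacian[index[(i, j)], index[(ni, nj)]] -= 1
    return round(np.linalg.det(laplacian))

print(sandpile_group_order(2, 2))   # small example grids
print(sandpile_group_order(3, 3))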
Self-organized criticality
The original interest behind the model stemmed from the fact that in simulations on lattices, it is attracted to its critical state, at which point the correlation length of the system and the correlation time of the system go to infinity, without any fine tuning of a system parameter. This contrasts with earlier examples of critical phenomena, such as the phase transitions between solid and liquid, or liquid and gas, where the critical point can only be reached by precise tuning (e.g., of temperature). Hence, in the sandpile model we can say that the criticality is self-organized.
Once the sandpile model reaches its critical state there is no correlation between the system's response to a perturbation and the details of a perturbation. Generally this means that dropping another grain of sand onto the pile may cause nothing to happen, or it may cause the entire pile to collapse in a massive slide. The model also displays 1/ƒ noise, a feature common to many complex systems in nature.
This model only displays critical behaviour in two or more dimensions. The sandpile model can be expressed in 1D; however, instead of evolving to its critical state, the 1D sandpile model instead reaches a minimally stable state where every lattice site goes toward the critical slope.
For two dimensions, it has been hypothesized that the associated conformal field theory consists of symplectic fermions with a central charge c = −2.
Properties
Least action principle
The stabilization of chip configurations obeys a form of least action principle: each vertex topples no more than necessary in the course of the stabilization.
This can be formalized as follows. Call a sequence of topples legal if it only topples unstable vertices, and stabilizing if it results in a stable configuration. The standard way of stabilizing the sandpile is to find a maximal legal sequence; i.e., by toppling so long as it is possible. Such a sequence is obviously stabilizing, and the Abelian property of the sandpile is that all such sequences are equivalent up to permutation of the toppling order; that is, for any vertex , the number of times topples is the same in all legal stabilizing sequences. According to the least action principle, a minimal stabilizing sequence is also equivalent up to permutation of the toppling order to a legal (and still stabilizing) sequence. In particular, the configuration resulting from a minimal stabilizing sequence is the same as results from a maximal legal sequence.
More formally, if is a vector such that is the number of times the vertex topples during the stabilization (via the toppling of unstable vertices) of a chip configuration , and is an integral vector (not necessarily non-negative) such that is stable, then for all vertices .
Scaling limits
The animation shows the recurrent configuration corresponding to the identity of the sandpile group on different square grids of increasing sizes , whereby the configurations are rescaled to always have the same physical dimension. Visually, the identities on larger grids seem to become more and more detailed and to "converge to a continuous image". Mathematically, this suggests the existence of scaling-limits of the sandpile identity on square grids based on the notion of weak-* convergence (or some other, generalized notion of convergence). Indeed, existence of scaling limits of recurrent sandpile configurations has been proved by Wesley Pegden and Charles Smart. In further joint work with Lionel Levine, they use the scaling limit to explain the fractal structure of the sandpile on square grids. Another scaling limit, when the relaxations of a perturbation of the maximal stable state converge to a picture defined by tropical curves, is established in works of Nikita Kalinin and Mikhail Shkolnikov.
Turing completeness
Abelian sandpiles in three or more dimensions can be used to simulate a Turing machine and are therefore Turing complete.
Generalizations and related models
Sandpile models on infinite grids
There exist several generalizations of the sandpile model to infinite grids. A challenge in such generalizations is that, in general, it is no longer guaranteed that every avalanche will eventually stop. Several of the generalizations thus only consider the stabilization of configurations for which this can be guaranteed.
A rather popular model on the (infinite) square lattice with sites is defined as follows:
Begin with some nonnegative configuration of values which is finite, meaning
Any site with
is unstable and can topple (or fire), sending one of its chips to each of its four neighbors:
Since the initial configuration is finite, the process is guaranteed to terminate, with the grains scattering outward.
A popular special case of this model is given when the initial configuration is zero for all vertices except the origin. If the origin carries a huge number of grains of sand, the configuration after relaxation forms fractal patterns (see figure). When letting the initial number of grains at the origin go to infinity, the rescaled stabilized configurations were shown to converge to a unique limit.
Sandpile models on directed graphs
The sandpile model can be generalized to arbitrary directed multigraphs. The rules are that any vertex with
is unstable; toppling again sends chips to each of its neighbors, one along each outgoing edge:
and, for each :
where is the number of edges from to .
In this case the Laplacian matrix is not symmetric. If we specify a sink such that there is a path from every other vertex to , then the stabilization operation on finite graphs is well-defined and the sandpile group can be written
as before.
The order of the sandpile group is again the determinant of , which by the general version of the matrix tree theorem is the number of oriented spanning trees rooted at the sink.
The extended sandpile model
To better understand the structure of the sandpile group for different finite convex grids of the standard square lattice , Lang and Shkolnikov introduced the extended sandpile model in 2019. The extended sandpile model is defined nearly exactly the same as the usual sandpile model (i.e. the original Bak–Tang–Wiesenfeld model ), except that vertices at the boundary of the grid are now allowed to carry a non-negative real number of grains. In contrast, vertices in the interior of the grid are still only allowed to carry integer numbers of grains. The toppling rules remain unchanged, i.e. both interior and boundary vertices are assumed to become unstable and topple if the grain number reaches or exceeds four.
The recurrent configurations of the extended sandpile model also form an abelian group, referred to as the extended sandpile group, of which the usual sandpile group is a discrete subgroup. Unlike the usual sandpile group, however, the extended sandpile group is a continuous Lie group. Since it is generated by adding grains of sand only to the boundary of the grid, the extended sandpile group furthermore has the topology of a torus of dimension and a volume given by the order of the usual sandpile group.
Of specific interest is the question how the recurrent configurations dynamically change along the continuous geodesics of this torus passing through the identity. This question leads to the definition of the sandpile dynamics
(extended sandpile model)
respectively
(usual sandpile model)
induced by the integer-valued harmonic function at time , with the identity of the sandpile group and the floor function.
For low-order polynomial harmonic functions, the sandpile dynamics are characterized by the
smooth transformation and apparent conservation of the patches
constituting the sandpile identity. For example, the harmonic dynamics induced by resemble the "smooth stretching" of the identity along the main diagonals visualized in the animation. The configurations appearing in the dynamics induced by the same harmonic function on square grids of different sizes were furthermore conjectured to weak-* converge, meaning that there supposedly exist scaling limits for them. This proposes a natural renormalization for the extended and usual sandpile groups, meaning a mapping of recurrent configurations on a given grid to recurrent configurations on a sub-grid. Informally, this renormalization simply maps configurations appearing at a given time in the sandpile dynamics induced by some harmonic function on the larger grid to the corresponding configurations which appear at the same time in the sandpile dynamics induced by the restriction of to the respective sub-grid.
The divisible sandpile
A strongly related model is the so-called divisible sandpile model, introduced by Levine and Peres in 2008, in which, instead of a discrete number of particles at each site, there is a real number representing the amount of mass on the site. In case such mass is negative, one can understand it as a hole. A toppling occurs whenever a site has mass larger than 1; the site keeps mass 1 and distributes the excess evenly among its neighbors, with the consequence that if a site is full at some time, it remains full for all later times.
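A minimal Python sketch of this mass-splitting rule (the grid size, the initial mass placed at the centre, and the convergence tolerance are illustrative choices; mass pushed over the edge of the array is discarded, standing in for a surrounding sink):

import numpy as np

def divisible_stabilize(mass, tol=1e-9):
    """Relax a divisible sandpile: any site with mass > 1 keeps mass 1
    and splits the excess equally among its four neighbours."""
    mass = mass.copy()
    while True:
        excess = np.maximum(mass - 1.0, 0.0)
        if excess.max() < tol:
            return mass
        mass -= excess
        quarter = excess / 4.0
        mass[1:, :] += quarter[:-1, :]   # excess pushed down
        mass[:-1, :] += quarter[1:, :]   # excess pushed up
        mass[:, 1:] += quarter[:, :-1]   # excess pushed right
        mass[:, :-1] += quarter[:, 1:]   # excess pushed left

grid = np.zeros((21, 21))
grid[10, 10] = 40.0                      # a blob of mass at the centre
final = divisible_stabilize(grid)
print(final.round(2))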
Cultural references
The Bak–Tang–Wiesenfeld sandpile was mentioned on the Numb3rs episode "Rampage," as mathematician Charlie Eppes explains to his colleagues a solution to a criminal investigation.
The computer game Hexplode is based around the Abelian sandpile model on a finite hexagonal grid where instead of random grain placement, grains are placed by players.
References
Further reading
External links
Phelps, Christopher (2021-11-05). An interactive Python implementation of square-lattice models
Self-organization
Phase transitions
Dynamical systems
Critical phenomena
Nonlinear systems
Cellular automaton rules | Abelian sandpile model | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 4,009 | [
"Self-organization",
"Physical phenomena",
"Phase transitions",
"Critical phenomena",
"Phases of matter",
"Nonlinear systems",
"Mechanics",
"Condensed matter physics",
"Statistical mechanics",
"Matter",
"Dynamical systems"
] |
21,412,311 | https://en.wikipedia.org/wiki/Kullback%27s%20inequality | In information theory and statistics, Kullback's inequality is a lower bound on the Kullback–Leibler divergence expressed in terms of the large deviations rate function. If P and Q are probability distributions on the real line, such that P is absolutely continuous with respect to Q, i.e. P << Q, and whose first moments exist, then
where is the rate function, i.e. the convex conjugate of the cumulant-generating function, of , and is the first moment of
The Cramér–Rao bound is a corollary of this result.
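A small numerical sanity check of the inequality for two Gaussian distributions (the distributions and the optimisation bracket are illustrative choices; the rate function of Q is obtained by numerically taking the Legendre transform of its cumulant-generating function at the mean of P):

import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative choice: P = N(1, 0.5^2), Q = N(0, 1), so P << Q holds.
mu_p, sigma_p = 1.0, 0.5
mu_q, sigma_q = 0.0, 1.0

# Kullback-Leibler divergence between two Gaussians (closed form).
kl = (np.log(sigma_q / sigma_p)
      + (sigma_p**2 + (mu_p - mu_q)**2) / (2 * sigma_q**2) - 0.5)

# Cumulant-generating function of Q, and its convex conjugate (the rate
# function) evaluated at the first moment of P via a numerical supremum.
cgf_q = lambda t: mu_q * t + 0.5 * (sigma_q * t) ** 2
rate = -minimize_scalar(lambda t: -(t * mu_p - cgf_q(t)),
                        bounds=(-50, 50), method="bounded").fun

print(f"KL(P||Q) = {kl:.4f}  >=  rate function at mean of P = {rate:.4f}")
assert kl >= rate - 1e-8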
Proof
Let P and Q be probability distributions (measures) on the real line, whose first moments exist, and such that P << Q. Consider the natural exponential family of Q given by
for every measurable set A, where is the moment-generating function of Q. (Note that Q0 = Q.) Then
By Gibbs' inequality we have so that
Simplifying the right side, we have, for every real θ where
where is the first moment, or mean, of P, and is called the cumulant-generating function. Taking the supremum completes the process of convex conjugation and yields the rate function:
Corollary: the Cramér–Rao bound
Start with Kullback's inequality
Let Xθ be a family of probability distributions on the real line indexed by the real parameter θ, and satisfying certain regularity conditions. Then
where is the convex conjugate of the cumulant-generating function of and is the first moment of
Left side
The left side of this inequality can be simplified as follows:
which is half the Fisher information of the parameter θ.
Right side
The right side of the inequality can be developed as follows:
This supremum is attained at a value of t=τ where the first derivative of the cumulant-generating function is but we have so that
Moreover,
Putting both sides back together
We have:
which can be rearranged as:
See also
Kullback–Leibler divergence
Cramér–Rao bound
Fisher information
Large deviations theory
Convex conjugate
Rate function
Moment-generating function
Notes and references
Information theory
Statistical inequalities
Estimation theory | Kullback's inequality | [
"Mathematics",
"Technology",
"Engineering"
] | 451 | [
"Theorems in statistics",
"Telecommunications engineering",
"Applied mathematics",
"Statistical inequalities",
"Computer science",
"Information theory",
"Inequalities (mathematics)"
] |
21,420,666 | https://en.wikipedia.org/wiki/The%20Unscrambler | The Unscrambler X is a commercial software product for multivariate data analysis, used for the calibration of multivariate data, often analytical data such as near infrared and Raman spectra, and for the development of predictive models for use in real-time spectroscopic analysis of materials. The software was originally developed in 1986 by Harald Martens and later by CAMO Software.
Functionality
The Unscrambler X was an early adaptation of the use of partial least squares (PLS). Other techniques supported include principal component analysis (PCA), 3-way PLS, multivariate curve resolution, design of experiments, supervised classification, unsupervised classification, and cluster analysis.
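The Unscrambler itself is proprietary, but the kind of PLS calibration workflow described above can be sketched with generic open-source tools. The following illustration uses scikit-learn on synthetic "spectra"; all data, component counts and noise levels are made up and have no connection to the product:

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic spectra: 200 samples x 100 wavelengths, whose shape depends
# linearly on a hidden analyte concentration plus baseline noise.
concentration = rng.uniform(0, 10, size=200)
wavelengths = np.linspace(0, 1, 100)
peak = np.exp(-((wavelengths - 0.5) ** 2) / 0.01)          # analyte band
spectra = (concentration[:, None] * peak
           + rng.normal(scale=0.2, size=(200, 100)))

X_train, X_test, y_train, y_test = train_test_split(
    spectra, concentration, random_state=0)

# Calibrate a PLS regression model and check it on held-out spectra.
model = PLSRegression(n_components=3).fit(X_train, y_train)
print("R^2 on held-out spectra:", round(model.score(X_test, y_test), 3))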
The software is used in spectroscopy (IR, NIR, Raman, etc.), chromatography, and process applications in research and non-destructive quality control systems in pharmaceutical manufacturing, sensory analysis and the chemical industry.
References
Statistical software
Computational chemistry
Spectroscopy | The Unscrambler | [
"Physics",
"Chemistry",
"Mathematics"
] | 301 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Theoretical chemistry",
"Computational chemistry",
"Statistical software",
"Spectroscopy",
"Mathematical software"
] |
21,421,435 | https://en.wikipedia.org/wiki/Limiting%20oxygen%20concentration | The limiting oxygen concentration (LOC), also known as the minimum oxygen concentration (MOC), is defined as the limiting concentration of oxygen below which combustion is not possible, independent of the concentration of fuel. It is expressed in units of volume percent of oxygen. The LOC varies with pressure and temperature. It is also dependent on the type of inert (non-flammable) gas.
Limiting oxygen concentration for solid materials
The effect of increasing the concentration of inert gas can be understood by viewing the inert gas as thermal ballast that quenches the flame temperature to a level below which the flame cannot exist. Carbon dioxide is therefore more effective than nitrogen due to its higher molar heat capacity.
The concept has important practical use in fire safety engineering. For instance, to safely fill a new container or a pressure vessel with flammable gases, the atmosphere of normal air (containing 20.9 volume percent of oxygen) in the vessel would first be flushed (purged) with nitrogen or another non-flammable inert gas, thereby reducing the oxygen concentration inside the container. When the oxygen concentration is below the LOC, flammable gas can then be safely admitted to the vessel, because the possibility of internal explosion has been eliminated.
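A rough back-of-the-envelope sketch of the purging step described above, assuming a well-mixed sweep (dilution) purge with nitrogen; the assumed LOC, safety margin and starting oxygen concentration are illustrative numbers, not design guidance:

import math

def inert_purge_volumes(c_initial, c_target):
    """Vessel volumes of inert gas needed for a well-mixed sweep purge.

    Assumes perfect mixing, so the oxygen fraction decays as
    c(n) = c_initial * exp(-n) after n vessel volumes of inert gas.
    """
    return math.log(c_initial / c_target)

# Illustrative numbers: start from air (20.9 vol% O2) and purge to 60% of
# an assumed LOC of 10 vol%, leaving a margin below the limiting value.
loc = 10.0
target = 0.6 * loc
n = inert_purge_volumes(20.9, target)
print(f"{n:.2f} vessel volumes of nitrogen to reach {target:.1f} vol% O2")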
The limiting oxygen concentration is a necessary parameter when designing hypoxic air fire prevention systems.
See also
Flammability limit
Flammability diagram
Sources
Monographs
Chapter 23
(Presented at 9th International Symposium on Loss Prevention and Safety Promotion in the Process Industries, May 1998, Barcelona (Spain))
References
Combustion
Explosion protection
Fire
Safety | Limiting oxygen concentration | [
"Chemistry",
"Engineering"
] | 323 | [
"Explosion protection",
"Combustion engineering",
"Combustion",
"Explosions",
"Fire"
] |
915,449 | https://en.wikipedia.org/wiki/Counterexamples%20in%20Topology | Counterexamples in Topology (1970, 2nd ed. 1978) is a book on mathematics by topologists Lynn Steen and J. Arthur Seebach, Jr.
In the process of working on problems like the metrization problem, topologists (including Steen and Seebach) have defined a wide variety of topological properties. It is often useful in the study and understanding of abstractions such as topological spaces to determine that one property does not follow from another. One of the easiest ways of doing this is to find a counterexample which exhibits one property but not the other. In Counterexamples in Topology, Steen and Seebach, together with five students in an undergraduate research project at St. Olaf College, Minnesota in the summer of 1967, canvassed the field of topology for such counterexamples and compiled them in an attempt to simplify the literature.
For instance, an example of a first-countable space which is not second-countable is counterexample #3, the discrete topology on an uncountable set. This particular counterexample shows that second-countability does not follow from first-countability.
Several other "Counterexamples in ..." books and papers have followed, with similar motivations.
Reviews
In her review of the first edition, Mary Ellen Rudin wrote:
In other mathematical fields one restricts one's problem by requiring that the space be Hausdorff or paracompact or metric, and usually one doesn't really care which, so long as the restriction is strong enough to avoid this dense forest of counterexamples. A usable map of the forest is a fine thing...
In his submission to Mathematical Reviews C. Wayne Patty wrote:
...the book is extremely useful, and the general topology student will no doubt find it very valuable. In addition it is very well written.
When the second edition appeared in 1978 its review in Advances in Mathematics treated topology as territory to be explored:
Lebesgue once said that every mathematician should be something of a naturalist. This book, the updated journal of a continuing expedition to the never-never land of general topology, should appeal to the latent naturalist in every mathematician.
Notation
Several of the naming conventions in this book differ from more accepted modern conventions, particularly with respect to the separation axioms. The authors use the terms T3, T4, and T5 to refer to regular, normal, and completely normal. They also refer to completely Hausdorff as Urysohn. This was a result of the different historical development of metrization theory and general topology; see History of the separation axioms for more.
The long line in example 45 is what most topologists nowadays would call the 'closed long ray'.
List of mentioned counterexamples
Finite discrete topology
Countable discrete topology
Uncountable discrete topology
Indiscrete topology
Partition topology
Odd–even topology
Deleted integer topology
Finite particular point topology
Countable particular point topology
Uncountable particular point topology
Sierpiński space, see also particular point topology
Closed extension topology
Finite excluded point topology
Countable excluded point topology
Uncountable excluded point topology
Open extension topology
Either-or topology
Finite complement topology on a countable space
Finite complement topology on an uncountable space
Countable complement topology
Double pointed countable complement topology
Compact complement topology
Countable Fort space
Uncountable Fort space
Fortissimo space
Arens–Fort space
Modified Fort space
Euclidean topology
Cantor set
Rational numbers
Irrational numbers
Special subsets of the real line
Special subsets of the plane
One point compactification topology
One point compactification of the rationals
Hilbert space
Fréchet space
Hilbert cube
Order topology
Open ordinal space [0,Γ) where Γ<Ω
Closed ordinal space [0,Γ] where Γ<Ω
Open ordinal space [0,Ω)
Closed ordinal space [0,Ω]
Uncountable discrete ordinal space
Long line
Extended long line
An altered long line
Lexicographic order topology on the unit square
Right order topology
Right order topology on R
Right half-open interval topology
Nested interval topology
Overlapping interval topology
Interlocking interval topology
Hjalmar Ekdal topology, whose name was introduced in this book.
Prime ideal topology
Divisor topology
Evenly spaced integer topology
The p-adic topology on Z
Relatively prime integer topology
Prime integer topology
Double pointed reals
Countable complement extension topology
Smirnov's deleted sequence topology
Rational sequence topology
Indiscrete rational extension of R
Indiscrete irrational extension of R
Pointed rational extension of R
Pointed irrational extension of R
Discrete rational extension of R
Discrete irrational extension of R
Rational extension in the plane
Telophase topology
Double origin topology
Irrational slope topology
Deleted diameter topology
Deleted radius topology
Half-disk topology
Irregular lattice topology
Arens square
Simplified Arens square
Niemytzki's tangent disk topology
Metrizable tangent disk topology
Sorgenfrey's half-open square topology
Michael's product topology
Tychonoff plank
Deleted Tychonoff plank
Alexandroff plank
Dieudonné plank
Tychonoff corkscrew
Deleted Tychonoff corkscrew
Hewitt's condensed corkscrew
Thomas's plank
Thomas's corkscrew
Weak parallel line topology
Strong parallel line topology
Concentric circles
Appert space
Maximal compact topology
Minimal Hausdorff topology
Alexandroff square
Z^Z
Uncountable products of Z+
Baire product metric on Rω
I^I
[0,Ω)×I^I
Helly space
C[0,1]
Box product topology on Rω
Stone–Čech compactification
Stone–Čech compactification of the integers
Novak space
Strong ultrafilter topology
Single ultrafilter topology
Nested rectangles
Topologist's sine curve
Closed topologist's sine curve
Extended topologist's sine curve
Infinite broom
Closed infinite broom
Integer broom
Nested angles
Infinite cage
Bernstein's connected sets
Gustin's sequence space
Roy's lattice space
Roy's lattice subspace
Cantor's leaky tent
Cantor's teepee
Pseudo-arc
Miller's biconnected set
Wheel without its hub
Tangora's connected space
Bounded metrics
Sierpinski's metric space
Duncan's space
Cauchy completion
Hausdorff's metric topology
Post Office metric
Radial metric
Radial interval topology
Bing's discrete extension space
Michael's closed subspace
See also
References
Bibliography
Lynn Arthur Steen and J. Arthur Seebach, Jr., Counterexamples in Topology. Springer-Verlag, New York, 1978. Reprinted by Dover Publications, New York, 1995. (Dover edition).
External links
π-Base: An Interactive Encyclopedia of Topological Spaces
1978 non-fiction books
Topology
General topology
Mathematics books | Counterexamples in Topology | [
"Mathematics"
] | 1,366 | [
"General topology",
"Topology"
] |
916,340 | https://en.wikipedia.org/wiki/Type%20I%20keratin | Type I keratins (or Type I cytokeratins) are cytokeratins that constitute the Type I intermediate filaments (IFs) of the intracytoplasmatic cytoskeleton, which is present in all mammalian epithelial cells. Most of the type I keratins consist of acidic, low molecular weight proteins which in vivo are arranged in pairs of heterotypic Type I and Type II keratin chains, coexpressed during differentiation of simple and stratified epithelial tissues.
Type I keratins are encoded on chromosome 17q and encompasses: K9, K10, K11, K12, K13, K14, K15, K16, K17, K18, K19 and K20. Their molecular weight ranges from 40 kDa (K19) to 64 kDa (K9).
See also
Type II keratin
External links
Proteopedia page on keratins
Keratins | Type I keratin | [
"Chemistry"
] | 205 | [
"Biochemistry stubs",
"Protein stubs"
] |
916,374 | https://en.wikipedia.org/wiki/Keratin%2019 | Keratin, type I cytoskeletal 19 (Keratin-19), also known as cytokeratin-19 (CK-19), is a 40 kDa protein that in humans is encoded by the KRT19 gene. Keratin-19 is a type I keratin.
Function
Keratin-19 is a member of the keratin family. The keratins are intermediate filament proteins responsible for the structural integrity of epithelial cells and are subdivided into cytokeratins and hair keratins.
Keratin-19 is a type I keratin. The type I cytokeratins consist of acidic proteins which are arranged in pairs of heterotypic keratin chains. Unlike its related family members, this smallest known acidic cytokeratin is not paired with a basic cytokeratin in epithelial cells. It is specifically found in the embryonic periderm, the transiently superficial layer that envelops the developing epidermis. The type I cytokeratins are clustered in a region of chromosome 17 (q12-q21).
Use as biomarker
CYFRA 21-1, a soluble fragment of KRT19, is a tumor marker of various types of cancer, including lung, breast, stomach, pancreas, ovary. KRT19 is commonly expressed in carcinomas of these organs and CYFRA 21-1 is produced when KRT19 is cleaved during cell apoptosis.
Due to its high sensitivity, KRT19 is the most used marker for the RT-PCR-mediated detection of tumor cells disseminated in lymph nodes, peripheral blood, and bone marrow of breast cancer patients. Depending on the assays, KRT19 has been shown to be both a specific and a non-specific marker.
False positivity in CYFRA 21-1 / KRT19 RT-PCR studies include:
illegitimate transcription (expression of small amounts of KRT19 mRNA by tissues in which it has no real physiological role)
haematological disorders (KRT19 induction in peripheral blood cells by cytokines and growth factors, which circulate at higher concentrations in inflammatory conditions and neutropenia)
the presence of pseudogenes (two KRT19 pseudogenes, KRT19a and KRT19b, have been identified, which have significant sequence homology to KRT19 mRNA. Subsequently, attempts to detect the expression of the authentic KRT19 may result in the detection of either or both of these pseudogenes)
sample contamination (introduction of contaminating epithelial cells during peripheral blood sampling for subsequent RT-PCR analysis).
trauma and stress (such as shear stress, heat shock, toxins, infection, aging and oxidative stress such as from smoking), which increase KRT19 expression and cell apoptosis
weight loss and muscle wasting/apoptosis (KRT19 is expressed in muscle)
Moreover, CK-19 is widely applied as a post-operative diagnostic marker of papillary thyroid carcinoma.
Keratin-19 is often used together with keratin 8 and keratin 18 to differentiate cells of epithelial origin from hematopoietic cells in tests that enumerate circulating tumor cells in blood.
Interactions
Keratin-19 has been shown to interact with Pinin.
References
Further reading
Keratins
Tumor markers | Keratin 19 | [
"Chemistry",
"Biology"
] | 712 | [
"Chemical pathology",
"Tumor markers",
"Biomarkers"
] |
916,394 | https://en.wikipedia.org/wiki/Primary%20constraint | In Hamiltonian mechanics, a primary constraint is a relation between the coordinates and momenta that holds without using the equations of motion. A secondary constraint is one that is not primary; in other words, it holds when the equations of motion are satisfied, but need not hold if they are not satisfied. The secondary constraints arise from the condition that the primary constraints should be preserved in time. A few authors use more refined terminology, where the non-primary constraints are divided into secondary, tertiary, quaternary, etc. constraints. The secondary constraints arise directly from the condition that the primary constraints are preserved by time, the tertiary constraints arise from the condition that the secondary ones are also preserved by time, and so on. Primary and secondary constraints were introduced by Anderson and Bergmann and developed by Dirac.
The terminology of primary and secondary constraints is confusingly similar to that of first- and second-class constraints. These divisions are independent: both first- and second-class constraints can be either primary or secondary, so this gives altogether four different classes of constraints.
References
2001 reprint by Dover.
Footnotes
Further reading
Hamiltonian mechanics | Primary constraint | [
"Physics",
"Mathematics"
] | 226 | [
"Hamiltonian mechanics",
"Theoretical physics",
"Classical mechanics",
"Dynamical systems"
] |
916,587 | https://en.wikipedia.org/wiki/Polyvinyl%20butyral | Polyvinyl butyral (or PVB) is a resin mostly used for applications that require strong binding, optical clarity, adhesion to many surfaces, toughness and flexibility. It is prepared from polyvinyl alcohol by reaction with butyraldehyde. The major application is laminated safety glass for automobile windshields. Trade names for PVB-films include KB PVB, GUTMANN PVB, Saflex, GlasNovations, Butacite, WINLITE, S-Lec, Trosifol and EVERLAM. PVB is also available as 3D printer filament that is stronger and more heat resistant than polylactic acid (PLA).
Applications
Automotive and architectural
Laminated glass, commonly used in the automotive and architectural fields, comprises a protective interlayer, usually polyvinyl butyral, bonded between two panels of glass. The bonding process takes place under heat and pressure. When laminated under these conditions, the PVB interlayer becomes optically clear and binds the two panes of glass together. Once sealed together, the glass "sandwich" (i.e., laminate) behaves as a single unit and looks like normal glass. The polymer interlayer of PVB is tough and ductile, so brittle cracks will not pass from one side of the laminate to the other.
Colors
PVB interlayer can be manufactured in colored sheets, such as for the blue or green "shade band" at the top edge of many automobile windshields. PVB interlayers can also be manufactured in different colors for architectural laminated glass.
Solar modules
PVB has gained acceptance among manufacturers of photovoltaic thin film solar modules. The photovoltaic circuit is formed on a sheet of glass using thin film deposition and patterning techniques. PVB and a second sheet of glass (called back glass) are then placed directly on the circuit. The lamination of this sandwich encapsulates the circuit, protecting it from the environment. Current is extracted from the module at a sealed terminal box that is attached to the circuit through a hole in the back glass. Another common laminant used in the solar industry is ethylene-vinyl acetate (EVA).
Non-film applications
PVB resins (provided by the manufacturer in powdered or granulated form) are also utilized in a range of applications including technical ceramic (temporary) binders, inks, dye transfer ribbon inks, paints & coatings (including wash primers), binders for reflective sheet and binders for magnetic media. PVB resin is particularly useful at bonding to metals, ceramics and other inorganics.
Properties of PVB-laminated glass
Annealed glass, heat-strengthened, or tempered glass can be used to produce laminated glass. While laminated glass will crack if struck with sufficient force, the resulting glass fragments tend to adhere to the interlayer rather than falling free and potentially causing injury.
In practice, the interlayer provides three beneficial properties to laminated glass panes: first, the interlayer functions to distribute impact forces across a greater area of the glass panes, thus increasing the impact resistance of the glass; second, the interlayer functions to bind the resulting shards if the glass is ultimately broken; third, the viscoelastic interlayer undergoes plastic deformation during impact and under static loads after impact, absorbing energy and reducing penetration by the impacting object as well as reducing the energy of the impact that is transmitted to the impacting object, e.g. a passenger in a car crash. Thus, the benefits of laminated glass include safety and security. Laminated glass also has decorative applications. The interlayer can be colored or patterned.
History
PVB was invented in 1927 by the Canadian chemists Howard W. Matheson and Frederick W. Skirrow. PVB has been the dominant interlayer material since the late 1930s. It is manufactured and marketed by a number of companies worldwide, including:
Saflex made by Eastman in Kingsport, Tennessee
S-Lec films and powdered resins made by Sekisui in Kyoto, Japan, Winchester, Kentucky, Geleen & Roermond, The Netherlands and Cuernavaca, Mexico
Kuraray Europe GmbH manufactures Trosifol and Mowital / Pioloform PVB products in Frankfurt, Germany
Chang Chung Petrochemicals Co. Ltd of Taiwan manufactures WINLITE brand PVB products
EVERLAM in Hamm-Uentrop, Germany markets its eponymous Everlam brand
The market for laminated glass products is mature. With only minor modifications, the PVB interlayer sold today is essentially identical to the PVB sold 30 years ago. Since its introduction in 1938, the worldwide market for PVB interlayer has been dominated by a handful of large chemical companies. As a result, inventive efforts have tended toward methods of making the interlayer itself cheaper to manufacture, or making the interlayer easier to handle and less prone to material defects during the process of fabricating laminated glass.
Other interlayer materials
Other types of interlayer materials are in use, including polyurethanes such as Duraflex-brand thermoplastic polyurethane film, manufactured by Bayer MaterialScience, Leverkusen, Germany.
See also
Glass
Polyvinyl chloride
References
Further reading
Study of PVB from several manufacturers that establishes the possibility of using recycled PVB from laminated glass.
(PDF; 75 kB)
Vinyl polymers
Synthetic resins
Transparent materials
Car windows | Polyvinyl butyral | [
"Physics",
"Chemistry"
] | 1,119 | [
"Physical phenomena",
"Synthetic resins",
"Synthetic materials",
"Optical phenomena",
"Materials",
"Transparent materials",
"Matter"
] |
916,941 | https://en.wikipedia.org/wiki/Bulk%20movement | In cell biology, bulk flow is the process by which proteins with a sorting signal travel to and from different cellular compartments. In other words, bulk transport is a type of transport that moves large amounts of substances, such as lipid droplets and solid food particles, across the plasma membrane using energy. Special processes are involved in the transport of such large quantities of materials, which include endocytosis and exocytosis.
It is thought that cargo travels through the Golgi cisternae (from cis- to trans- Golgi) via bulk flow.
See also
Protein targeting
Vesicle (biology)
COPI
COPII
Mass flow
References
1. Rothman J.E. and Weiland F.T. Protein sorting by transport vesicles. Science 272. 227-234. 1996.
Protein targeting | Bulk movement | [
"Biology"
] | 169 | [
"Protein targeting",
"Cellular processes"
] |
916,971 | https://en.wikipedia.org/wiki/Nuclear%20marine%20propulsion | Nuclear marine propulsion is propulsion of a ship or submarine with heat provided by a nuclear reactor. The power plant heats water to produce steam for a turbine used to turn the ship's propeller through a gearbox or through an electric generator and motor. Nuclear propulsion is used primarily within naval warships such as nuclear submarines and supercarriers. A small number of experimental civil nuclear ships have been built.
Compared to oil- or coal-fuelled ships, nuclear propulsion offers the advantage of very long intervals of operation before refueling. All the fuel is contained within the nuclear reactor, so no cargo or supplies space is taken up by fuel, nor is space taken up by exhaust stacks or combustion air intakes. The low fuel cost is offset by high operating costs and investment in infrastructure, however, so nearly all nuclear-powered vessels are military.
Power plants
Basic operation of naval ship or submarine
Most naval nuclear reactors are of the pressurized water type, with the exception of a few attempts at using liquid sodium-cooled reactors. A primary water circuit transfers heat generated from nuclear fission in the fuel to a steam generator; this water is kept under pressure so it does not boil. This circuit operates at a temperature of around . Any radioactive contamination in the primary water is confined. Water is circulated by pumps; at lower power levels, reactors designed for submarines may rely on natural circulation of the water to reduce noise generated by the pumps.
The hot water from the reactor heats a separate water circuit in the steam generator. That water is converted to steam and passes through steam driers on its way to the steam turbine. Spent steam at low pressure runs through a condenser cooled by seawater and returns to liquid form. The water is pumped back to the steam generator and continues the cycle. Any water lost in the process can be made up by desalinated sea water added to the steam generator feed water.
In the turbine, the steam expands and reduces its pressure as it imparts energy to the rotating blades of the turbine. There may be many stages of rotating blades and fixed guide vanes. The output shaft of the turbine may be connected to a gearbox to reduce rotation speed, then a shaft connects to the vessel's propellers. In another form of drive system, the turbine turns an electrical generator, and the electric power produced is fed to one or more drive motors for the vessel's propellers. The Russian, U.S. and British navies rely on direct steam turbine propulsion, while French and Chinese ships use the turbine to generate electricity for propulsion (turbo-electric transmission).
Some nuclear submarines have a single reactor, but Russian submarines have two, and so had USS Triton. Most American aircraft carriers are powered by two reactors, but USS Enterprise had eight. The majority of marine reactors are of the pressurized water type, although the U.S. and Soviet navies have designed warships powered with liquid metal cooled reactors.
Differences from land power plants
Marine-type reactors differ from land-based commercial electric power reactors in several respects.
While land-based reactors in nuclear power plants produce up to around 1600 megawatts of net electrical power (the nameplate capacity of the EPR), a typical marine propulsion reactor produces no more than a few hundred megawatts. Some small modular reactors (SMR) are similar to marine propulsion reactors in capacity and some design considerations and thus nuclear marine propulsion (whether civilian or military) is sometimes proposed as an additional market niche for SMRs. Unlike for land-based applications where hundreds of hectares can be occupied by installations like Bruce Nuclear Generating Station, at sea tight space limits dictate that a marine reactor must be physically small, so it must generate higher power per unit of space. This means its components are subject to greater stresses than those of a land-based reactor. Its mechanical systems must operate flawlessly under the adverse conditions encountered at sea, including vibration and the pitching and rolling of a ship operating in rough seas. Reactor shutdown mechanisms cannot rely on gravity to drop control rods into place as in a land-based reactor that always remains upright. Salt water corrosion is an additional problem that complicates maintenance.
As the core of a seagoing reactor is much smaller than a power reactor, the probability of a neutron intersecting with a fissionable nucleus before it escapes into the shielding is much lower. As such, the fuel is typically more highly enriched (i.e., contains a higher concentration of 235U vs. 238U) than that used in a land-based nuclear power plant, which increases the probability of fission to the level where a sustained reaction can occur. Some marine reactors run on relatively low-enriched uranium, which requires more frequent refueling. Others run on highly enriched uranium, varying from 20% 235U, to the over 96% 235U found in U.S. submarines, in which the resulting smaller core is quieter in operation (a big advantage to a submarine). Using more-highly enriched fuel also increases the reactor's power density and extends the usable life of the nuclear fuel load, but is more expensive and a greater risk to nuclear proliferation than less-highly enriched fuel.
A marine nuclear propulsion plant must be designed to be highly reliable and self-sufficient, requiring minimal maintenance and repairs, which might have to be undertaken many thousands of miles from its home port. One of the technical difficulties in designing fuel elements for a seagoing nuclear reactor is the creation of fuel elements that will withstand a large amount of radiation damage. Fuel elements may crack over time and gas bubbles may form. The fuel used in marine reactors is a metal-zirconium alloy rather than the ceramic UO2 (uranium dioxide) often used in land-based reactors. Marine reactors are designed for long core life, enabled by the relatively high enrichment of the uranium and by incorporating a "burnable poison" in the fuel elements, which is slowly depleted as the fuel elements age and become less reactive. The gradual dissipation of the "nuclear poison" increases the reactivity of the core to compensate for the lessening reactivity of the aging fuel elements, thereby extending the usable life of the fuel. The compact reactor pressure vessel is provided with an internal neutron shield, which reduces the damage to the steel from constant neutron bombardment.
Decommissioning
Decommissioning nuclear-powered submarines has become a major task for U.S. and Russian navies. After defuelling, U.S. practice is to cut the reactor section from the vessel for disposal in shallow land burial as low-level waste (see the ship-submarine recycling program). In Russia, whole vessels, or sealed reactor sections, typically remain stored afloat, although a new facility near Sayda Bay is to provide storage in a concrete-floored facility on land for some submarines in the far north.
Future designs
Russia built a floating nuclear power plant for its far eastern territories. The design has two 35 MWe units based on the KLT-40 reactor used in icebreakers (with refueling every four years). Some Russian naval vessels have been used to supply electricity for domestic and industrial use in remote far eastern and Siberian towns.
In 2010, Lloyd's Register was investigating the possibility of civilian nuclear marine propulsion and rewriting draft rules (see text under Merchant Ships).
Civil liability
Insurance of nuclear vessels is not like the insurance of conventional ships. The consequences of an accident could span national boundaries, and the magnitude of possible damage is beyond the capacity of private insurers. A special international agreement, the Brussels Convention on the Liability of Operators of Nuclear Ships, developed in 1962, would have made signatory national governments liable for accidents caused by nuclear vessels under their flag but was never ratified owing to disagreement on the inclusion of warships under the convention. Nuclear reactors under United States jurisdiction are insured by the provisions of the Price–Anderson Act.
Military nuclear ships
By 1990, there were more nuclear reactors powering ships (mostly military) than there were generating electric power in commercial power plants worldwide.
Under the direction of U.S. Navy Captain (later Admiral) Hyman G. Rickover, the design, development and production of nuclear marine propulsion plants started in the United States in the 1940s. The first prototype naval reactor was constructed and tested at the Naval Reactor Facility at the National Reactor Testing Station in Idaho (now called the Idaho National Laboratory) in 1953.
Submarines
The first nuclear submarine, USS Nautilus (SSN-571), put to sea in 1955 (SS was a traditional hull classification symbol for U.S. submarines, while SSN denoted the first "nuclear" submarine).
The Soviet Union also developed nuclear submarines. The first types developed were the Project 627, NATO-designated with two water-cooled reactors, the first of which, K-3 Leninsky Komsomol, was underway under nuclear power in 1958.
Nuclear power revolutionized the submarine, finally making it a true "underwater" vessel, rather than a "submersible" craft, which could only stay underwater for limited periods. It gave the submarine the ability to operate submerged at high speeds, comparable to those of surface vessels, for unlimited periods, dependent only on the endurance of its crew. To demonstrate this was the first vessel to execute a submerged circumnavigation of the Earth (Operation Sandblast), doing so in 1960.
Nautilus, with a pressurized water reactor (PWR), led to the parallel development of other submarines, like a unique liquid metal cooled (sodium) reactor in USS Seawolf, or two reactors in Triton, and then the Skate-class submarines, powered by single reactors, and a cruiser, USS Long Beach, in 1961, powered by two reactors.
By 1962, the United States Navy had 26 operational nuclear submarines and another 30 under construction. Nuclear power had revolutionized the Navy. The United States shared its technology with the United Kingdom, while French, Soviet, Indian and Chinese development proceeded separately.
After the Skate-class vessels, U.S. submarines were powered by a series of standardized, single-reactor designs built by Westinghouse and General Electric. Rolls-Royce plc built similar units for Royal Navy submarines, eventually developing a modified version of their own, the PWR2.
The largest nuclear submarines ever built are the 26,500 tonne Russian Typhoon class. The smallest nuclear warships to date are the 2,700 tonne French Rubis-class attack submarines. The U.S. Navy operated an unarmed nuclear submarine, the NR-1 Deep Submergence Craft, between 1969 and 2008, which was not a combat vessel but was the smallest nuclear-powered submarine at 400 tons.
Aircraft carriers
The United States and France have built nuclear aircraft carriers.
French Navy
The sole French nuclear aircraft carrier is Charles de Gaulle, commissioned in 2001 (a successor is planned).
The carrier is equipped with catapults and arresting gear, displaces about 42,000 tonnes, and is the flagship of the French Navy (Marine Nationale). She carries a complement of Dassault Rafale M and E‑2C Hawkeye aircraft, EC725 Caracal and AS532 Cougar helicopters for combat search and rescue, as well as modern electronics and Aster missiles.
United States Navy
The United States Navy operates 11 carriers, all nuclear-powered:
USS Enterprise: in service 1961–2012, powered by eight reactor units; it remains the only aircraft carrier to have housed more than two nuclear reactors, with each A2W reactor taking the place of one of the conventional boilers in earlier designs.
Nimitz class: ten 101,000-ton, 1,092 ft long fleet carriers, the first of which was commissioned in 1975. Each Nimitz-class carrier is powered by two nuclear reactors providing steam to four steam turbines.
Gerald R. Ford class: 110,000-ton, 1,106 ft long fleet carriers. The lead ship of the class, USS Gerald R. Ford, came into service in 2017, with another nine planned.
Destroyers and cruisers
Russian Navy
The Kirov class, Soviet designation 'Project 1144 Orlan' (sea eagle), is a class of nuclear-powered guided-missile cruisers of the Soviet Navy and Russian Navy, the largest and heaviest surface combatant warships (i.e. not an aircraft carrier or amphibious assault ship) in operation in the world. Among modern warships, they are second in size only to large aircraft carriers, and of similar size to World War II era battleships. The Soviet classification of the ship-type is "heavy nuclear-powered guided missile cruiser". The ships are often referred to as battlecruisers by Western defence commentators due to their size and general appearance.
United States Navy
The United States Navy at one time had nuclear-powered cruisers as part of its fleet. The first such ship was USS Long Beach (CGN-9). Commissioned in 1961, she was the world's first nuclear-powered surface combatant. She was followed a year later by USS Bainbridge (DLGN-25). While Long Beach was designed and built as a cruiser, Bainbridge began life as a frigate, though at that time the Navy was using the hull code "DLGN" for "destroyer leader, guided missile, nuclear".
The last nuclear-powered cruisers the Americans produced were the four ships of the Virginia class: USS Virginia was commissioned in 1976, followed by USS Texas in 1977, USS Mississippi in 1978 and finally USS Arkansas in 1980. Ultimately, these ships proved too costly to maintain and all four were retired between 1993 and 1999.
Other military ships
Communication and command ships
SSV-33 Ural (ССВ-33 Урал; NATO reporting name: Kapusta, Russian for "cabbage") was a command and control naval ship operated by the Soviet Navy. SSV-33's hull was derived from that of the nuclear-powered Kirov-class battlecruisers, and she retained their nuclear marine propulsion. SSV-33 served in electronic intelligence, missile tracking, space tracking, and communications relay roles. Due to high operating costs, SSV-33 was laid up.
SSV-33 carried only light defensive weapons. These were two AK-176 76 mm guns, four AK-630 30 mm guns, and four quadruple Igla missile mounts.
Nuclear-powered UUV
The Poseidon (NATO reporting name Kanyon), previously known by the Russian codename Status-6, is a nuclear-powered and nuclear-armed unmanned underwater vehicle under development by Rubin Design Bureau, capable of delivering both conventional and nuclear payloads. According to Russian state TV, it is able to deliver a thermonuclear cobalt bomb of up to 200 megatonnes (four times as powerful as the most powerful device ever detonated, the Tsar Bomba, and twice its maximum theoretical yield) against an enemy's naval ports and coastal cities.
Civilian nuclear ships
The following are ships that are or were in commercial or civilian use and have nuclear marine propulsion.
Merchant ships
Nuclear-powered civil merchant ships have not developed beyond a few experimental vessels. The U.S.-built NS Savannah, completed in 1962, was primarily a demonstration of civil nuclear power and was too small and expensive to operate economically as a merchant ship. The design was too much of a compromise, being neither an efficient freighter nor a viable passenger liner. The German-built Otto Hahn, completed in 1968 as a cargo ship and research facility, sailed on 126 voyages over 10 years without any technical problems, but proved too expensive to operate and was converted to diesel. The Japanese Mutsu, completed in 1972, was dogged by technical and political problems: its reactor had significant radiation leakage and fishermen protested against the vessel's operation. All three of these ships used low-enriched uranium. Sevmorput, a Soviet and later Russian LASH carrier with icebreaking capability, has operated successfully on the Northern Sea Route since it was commissioned in 1988 and remains the only nuclear-powered merchant ship in service.
Civilian nuclear ships suffer from the costs of specialized infrastructure. The Savannah was expensive to operate since it was the only vessel using its specialized nuclear shore staff and servicing facility. A larger fleet could share fixed costs among more operating vessels, reducing operating costs.
Despite this, there is still interest in nuclear propulsion. In November 2010, British Maritime Technology and Lloyd's Register embarked upon a two-year study with U.S.-based Hyperion Power Generation (now Gen4 Energy) and the Greek ship operator Enterprises Shipping and Trading SA to investigate practical maritime applications for small modular reactors. The research was intended to produce a concept tanker design based on a 70 MWt reactor such as Hyperion's. In response to its members' interest in nuclear propulsion, Lloyd's Register has also rewritten its rules for nuclear ships, which concern the integration of a reactor certified by a land-based regulator with the rest of the ship. The rule-making process assumes that, in contrast to current marine industry practice, where the designer/builder typically demonstrates compliance with regulatory requirements, nuclear regulators will in future expect the operator of the nuclear plant to demonstrate safety in operation, in addition to safety through design and construction. Nuclear ships are currently the responsibility of their own countries, and none are involved in international trade. As a result of this work, in 2014 two papers on commercial nuclear marine propulsion were published by Lloyd's Register and the other members of the consortium. These publications review past and recent work on marine nuclear propulsion and describe a preliminary concept design study for a Suezmax tanker based on a conventional hull form, with alternative arrangements for accommodating a 70 MWt nuclear propulsion plant delivering up to 23.5 MW shaft power at maximum continuous rating (average: 9.75 MW). The Gen4 Energy power module was considered: a small fast-neutron reactor using lead–bismuth eutectic cooling, able to operate for ten full-power years before refueling and intended to last the vessel's 25-year operational life. The papers conclude that the concept is feasible, but that further maturity of nuclear technology and the development and harmonisation of the regulatory framework would be necessary before the concept becomes viable.
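As a rough back-of-the-envelope check, not stated in the publications themselves, the quoted figures imply an overall thermal-to-shaft conversion efficiency at maximum continuous rating of roughly one third:

$$\eta \approx \frac{23.5\ \text{MW (shaft)}}{70\ \text{MW}_\text{t}} \approx 0.34$$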
Nuclear propulsion has been proposed again as part of efforts to decarbonize marine shipping, which accounts for 3–4% of global greenhouse gas emissions.
Merchant cargo ships
USNS American Explorer; United States tanker, converted to conventional power while under construction
Mutsu, Japan (1970–1992); never carried commercial cargo, rebuilt as the diesel-powered RV Mirai in 1996
Otto Hahn, Germany (1968–1979); re-powered with a diesel engine in 1979
NS Savannah, United States (1962–1972)
Sevmorput, Russia (1988–present), ice-strengthened nuclear-powered lighter aboard ship (LASH) carrier
In December 2023, the Jiangnan Shipyard, under the China State Shipbuilding Corporation, unveiled the design of a 24,000 TEU-class container ship, known as the KUN-24AP, at Marintec China 2023, a maritime industry exhibition held in Shanghai. The container ship is reported to be powered by a thorium-based molten salt reactor, which would make it the first thorium-powered container ship and, if completed, the largest nuclear-powered container ship in the world.
Icebreakers
Nuclear propulsion has proven both technically and economically feasible for nuclear-powered icebreakers in the Soviet, and later Russian, Arctic. Nuclear-fuelled ships operate for years without refueling, and the vessels have powerful engines, well-suited to the task of icebreaking.
The Soviet icebreaker Lenin, commissioned in 1959, was the world's first nuclear-powered surface vessel and remained in service for 30 years (new reactors were fitted in 1970). It led to a series of larger icebreakers, the six-vessel, 23,500-ton Arktika class, launched beginning in 1975. These vessels have two reactors and are used in deep Arctic waters. NS Arktika was the first surface vessel to reach the North Pole.
For use in shallow waters such as estuaries and rivers, shallow-draft, Taymyr-class icebreakers were built in Finland and then fitted with their single-reactor nuclear propulsion system in Russia. They were built to conform to international safety standards for nuclear vessels.
All nuclear-powered icebreakers have been commissioned by the Soviet Union or Russia.
Lenin (1959–1989; museum ship)
Arktika (1975–2008; decommissioned)
Sibir (1977–1992; scrapped)
Rossiya (1985–2013; decommissioned)
Taymyr (1989–present)
Sovetskiy Soyuz (1989–2014; decommissioned)
Vaygach (1990–present)
Yamal (1992–present)
50 Let Pobedy (2007–present)
Arktika (2020–present)
Sibir (2021–present)
Ural (2022–present)
Yakutia (2024–present)
See also
Air-independent propulsion
Aircraft Nuclear Propulsion
Knolls Atomic Power Laboratory
List of United States Naval reactors
Naval Reactors
Nuclear navy
Nuclear-powered aircraft
Nuclear Power School
Soviet naval reactors
United States naval reactors
United States Navy Nuclear Propulsion
Notes
Citations
References
AFP, 11 November 1998; in "Nuclear Submarines Provide Electricity for Siberian Town," FBIS-SOV-98-315, 11 November 1998.
ITAR-TASS, 11 November 1998; in "Russian Nuclear Subs Supply Electricity to Town in Far East," FBIS-SOV-98-316, 12 November 1998.
Harold Wilson's plan BBC News story
External links
The World Nuclear Association
Naval Nuclear Power Training Command
Marine propulsion
Marine propulsion | Nuclear marine propulsion | [
"Engineering"
] | 4,319 | [
"Marine propulsion",
"Marine engineering"
] |
917,006 | https://en.wikipedia.org/wiki/Bijective%20proof | In combinatorics, bijective proof is a proof technique for proving that two sets have equally many elements, or that the sets in two combinatorial classes have equal size, by finding a bijective function that maps one set one-to-one onto the other. This technique can be useful as a way of finding a formula for the number of elements of certain sets, by corresponding them with other sets that are easier to count. Additionally, the nature of the bijection itself often provides powerful insights into each or both of the sets.
Basic examples
Proving the symmetry of the binomial coefficients
The symmetry of the binomial coefficients states that

$$\binom{n}{k} = \binom{n}{n-k}.$$

This means that there are exactly as many combinations of k things in a set of size n as there are combinations of n − k things in a set of size n.
The key idea of the bijective proof may be understood from a simple example: selecting k children to be rewarded with ice cream cones, out of a group of n children, has exactly the same effect as choosing instead the n − k children to be denied ice cream cones.
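As an illustrative aside (not part of the original article), the following minimal Python sketch makes this complement bijection explicit for a small case; the variable names and the choice n = 5, k = 2 are arbitrary.

```python
from itertools import combinations

n, k = 5, 2
children = set(range(1, n + 1))

# The bijection: map each k-subset (the children who get ice cream)
# to its complement (the n - k children who are denied ice cream).
def complement(chosen):
    return frozenset(children - set(chosen))

k_subsets = [frozenset(c) for c in combinations(children, k)]
images = {complement(s) for s in k_subsets}

# The map is injective (distinct subsets have distinct complements) and
# hits every (n - k)-subset, so the two binomial coefficients agree.
assert len(images) == len(k_subsets)
assert images == {frozenset(c) for c in combinations(children, n - k)}
print(len(k_subsets))  # 10 = C(5, 2) = C(5, 3)
```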
Other examples
Problems that admit bijective proofs are not limited to binomial coefficient identities. As the complexity of the problem increases, a bijective proof can become very sophisticated. This technique is particularly useful in areas of discrete mathematics such as combinatorics, graph theory, and number theory.
The most classical examples of bijective proofs in combinatorics include:
Prüfer sequence, giving a proof of Cayley's formula for the number of labeled trees (a minimal encoding sketch follows this list).
Robinson-Schensted algorithm, giving a proof of Burnside's formula for the symmetric group.
Conjugation of Young diagrams, giving a proof of a classical result on the number of certain integer partitions.
Bijective proofs of the pentagonal number theorem.
Bijective proofs of the formula for the Catalan numbers.
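As a hedged illustration (not from the original article), here is a minimal Python sketch of the Prüfer encoding mentioned above: it maps a labeled tree on n vertices to a sequence of length n − 2 over the vertex labels, the bijection underlying Cayley's formula n^(n−2). The function name and the example tree are my own.

```python
def prufer_encode(edges, n):
    """Encode a labeled tree on vertices 1..n (given as an edge list)
    as its Prufer sequence of length n - 2."""
    adj = {v: set() for v in range(1, n + 1)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seq = []
    for _ in range(n - 2):
        # Remove the leaf with the smallest label; record its neighbour.
        leaf = min(v for v in adj if len(adj[v]) == 1)
        neighbour = next(iter(adj[leaf]))
        seq.append(neighbour)
        adj[neighbour].remove(leaf)
        del adj[leaf]
    return seq

# Example: the path 1 - 3 - 2 - 4 encodes to [3, 2].
print(prufer_encode([(1, 3), (3, 2), (2, 4)], 4))  # [3, 2]
```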
See also
Binomial theorem
Schröder–Bernstein theorem
Double counting (proof technique)
Combinatorial principles
Combinatorial proof
Categorification
References
Further reading
Loehr, Nicholas A. (2011). Bijective Combinatorics. CRC Press.
External links
"Division by three" – by Doyle and Conway.
"A direct bijective proof of the hook-length formula" – by Novelli, Pak and Stoyanovsky.
"Bijective census and random generation of Eulerian planar maps with prescribed vertex degrees" – by Gilles Schaeffer.
"Kathy O'Hara's Constructive Proof of the Unimodality of the Gaussian Polynomials" – by Doron Zeilberger.
"Partition Bijections, a Survey" – by Igor Pak.
Garsia-Milne Involution Principle – from MathWorld.
Enumerative combinatorics
Articles containing proofs
Mathematical proofs | Bijective proof | [
"Mathematics"
] | 577 | [
"Articles containing proofs",
"Enumerative combinatorics",
"nan",
"Combinatorics"
] |