id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
33,844,583 | https://en.wikipedia.org/wiki/Fenzy | Fenzy is a firm that designs and manufactures scuba diving and industrial breathing equipment. It was founded in or before 1920 in France and was eventually acquired by Honeywell.
In 1961 the company's founder and owner, Maurice Fenzy, invented a divers' adjustable buoyancy life jacket (ABLJ) (European terminology) or buoyancy compensator (BC) (North American terminology) that became so well known that the company name has become synonymous with the item, although Fenzy also manufactured rebreathers and other items.
Some industrial breathing sets whose model names contain "Fenzy" are made by Honeywell.
See also
References
External links
Fenzy rebreathers:
Patent for an industrial rebreather designed in 1920
Fenzy Escape
Fenzy ORM 55
Fenzy 55
Fenzy 67 J
Fenzy PO68
Underwater diving engineering
Rebreather makers
Underwater diving equipment manufacturers | Fenzy | [
"Engineering"
] | 185 | [
"Underwater diving engineering",
"Marine engineering"
] |
33,846,186 | https://en.wikipedia.org/wiki/Square%20root%20of%20a%202%20by%202%20matrix | A square root of a 2×2 matrix M is another 2×2 matrix R such that M = R², where R² stands for the matrix product of R with itself. In general, there can be zero, two, four, or even an infinitude of square-root matrices. In many cases, such a matrix R can be obtained by an explicit formula.
Square roots that are not the all-zeros matrix come in pairs: if R is a square root of M, then −R is also a square root of M, since (−R)(−R) = (−1)(−1)(RR) = R² = M. A 2×2 matrix with two distinct nonzero eigenvalues has four square roots. A positive-definite matrix has precisely one positive-definite square root.
A general formula
The following is a general formula that applies to almost any 2 × 2 matrix. Let the given matrix be M = [[A, B], [C, D]] (first row A, B; second row C, D),
where A, B, C, and D may be real or complex numbers. Furthermore, let τ = A + D be the trace of M, and δ = AD − BC be its determinant. Let s be such that s² = δ, and t be such that t² = τ + 2s. That is, s = ±√δ and t = ±√(τ + 2s).
Then, if t ≠ 0, a square root of M is R = (M + sI)/t = [[(A + s)/t, B/t], [C/t, (D + s)/t]], where I denotes the 2×2 identity matrix.
Indeed, the square of R is R² = (M + sI)²/t² = (M² + 2sM + δI)/t²; since the Cayley–Hamilton theorem gives M² = τM − δI, this reduces to R² = (τ + 2s)M/t² = M.
Note that R may have complex entries even if M is a real matrix; this will be the case, in particular, if the determinant δ is negative.
The general case of this formula is when δ is nonzero, and τ² ≠ 4δ, in which case s is nonzero, and t is nonzero for each choice of sign of s. Then the formula above will provide four distinct square roots R, one for each choice of signs for s and t.
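A minimal numerical sketch of this general formula follows (a hedged illustration using NumPy; the helper name sqrtm_2x2 and the test matrix are illustrative choices, not from the article):

```python
import numpy as np

def sqrtm_2x2(M, sign_s=+1, sign_t=+1):
    """One square root of a 2x2 matrix via the trace/determinant formula above.

    R = (M + s*I) / t, where s**2 == det(M) and t**2 == trace(M) + 2*s.
    Different sign choices for s and t give (up to) four distinct roots.
    """
    M = np.asarray(M, dtype=complex)
    tau = np.trace(M)                     # tau = A + D
    delta = np.linalg.det(M)              # delta = A*D - B*C
    s = sign_s * np.sqrt(delta)
    t = sign_t * np.sqrt(tau + 2 * s)
    if np.isclose(t, 0):
        raise ValueError("t = 0 for this sign choice; the formula does not apply")
    return (M + s * np.eye(2)) / t

# The four square roots of an example matrix with distinct nonzero eigenvalues:
M = [[33, 24], [48, 57]]
for ss in (+1, -1):
    for st in (+1, -1):
        R = sqrtm_2x2(M, ss, st)
        assert np.allclose(R @ R, M)      # each R indeed satisfies R @ R == M
```

For this example δ = 729 and τ = 90, so all four sign combinations give t ≠ 0 and hence four distinct square roots.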
Special cases of the formula
If the determinant δ is zero, but the trace τ is nonzero, the general formula above will give only two distinct solutions, corresponding to the two signs of t. Namely, R = ±M/t,
where t is any square root of the trace τ.
The formula also gives only two distinct solutions if δ is nonzero, and τ² = 4δ (the case of duplicate eigenvalues), in which case one of the choices for s will make the denominator t be zero. In that case, the two roots are R = ±(M + sI)/t,
where s is the square root of δ that makes τ + 2s nonzero, and t is any square root of τ + 2s.
The formula above fails completely if δ and τ are both zero; that is, if D = −A, and A² = −BC, so that both the trace and the determinant of the matrix are zero. In this case, if M is the null matrix (with A = B = C = D = 0), then the null matrix is also a square root of M, as is any matrix
where b and c are arbitrary real or complex values. Otherwise M has no square root.
Formulas for special matrices
Idempotent matrix
If M is an idempotent matrix, meaning that MM = M, then if it is not the identity matrix, its determinant is zero, and its trace equals its rank, which (excluding the zero matrix) is 1. Then the above formula has s = 0 and τ = 1, giving M and −M as two square roots of M.
Exponential matrix
If the matrix M can be expressed as a real multiple of the exponential of some matrix A, say M = r·exp(A) with r a positive real number, then two of its square roots are ±√r·exp(A/2). In this case the square root is real.
Diagonal matrix
If M is diagonal (that is, B = C = 0), one can use the simplified formula R = [[a, 0], [0, d]],
where a = ±√A, and d = ±√D. This, for the various sign choices, gives four, two, or one distinct matrices, if none of, only one of, or both A and D are zero, respectively.
Identity matrix
Because it has duplicate eigenvalues, the 2×2 identity matrix has infinitely many symmetric rational square roots, given by R = [[a, b], [b, −a]],
where a and b are any complex numbers such that a² + b² = 1.
Matrix with one off-diagonal zero
If B is zero, but A and D are not both zero, one can use R = [[a, 0], [C/(a + d), d]], where a = ±√A and d = ±√D, with the signs chosen so that a + d ≠ 0.
This formula will provide two solutions if A = D or A = 0 or D = 0, and four otherwise. A similar formula can be used when C is zero, but A and D are not both zero.
Real matrices with real square roots
The algebra M(2, R) of 2×2 real matrices has three types of planar subalgebras (isomorphic to the ordinary complex numbers, the dual numbers, and the split-complex numbers). Each subalgebra admits the exponential map. If a matrix p in such a subalgebra can be written as p = exp(q), then ±exp(q/2) are square roots of p. The condition that the matrix is the image under exp limits it to half the plane of dual numbers, and to a quarter of the plane of split-complex numbers, but does not constrain the ordinary complex planes since the exponential mapping covers them. In the split-complex case there are two more square roots of p, since each quadrant contains one.
References
Matrices | Square root of a 2 by 2 matrix | [
"Mathematics"
] | 1,047 | [
"Matrices (mathematics)",
"Mathematical objects"
] |
33,848,674 | https://en.wikipedia.org/wiki/Chain%20walking | In polymer chemistry, chain walking (CW), also called chain running or chain migration, is a mechanism that operates during some alkene polymerization reactions. CW can also be considered a specific case of intramolecular chain transfer (analogous to radical ethene polymerization). This reaction gives rise to branched and hyperbranched/dendritic hydrocarbon polymers. The process is also characterized by accurate control of polymer architecture and topology. The extent of CW, reflected in the number and positions of the branches formed on the polymer, is controlled by the choice of catalyst. The potential applications of polymers formed by this reaction are diverse, ranging from drug delivery to phase-transfer agents, nanomaterials, and catalysis.
Catalysts
Catalysts that promote chain walking were discovered in the 1980s–1990s. Nickel(II) and palladium(II) complexes of α-diimine ligands were known to efficiently catalyze the polymerization of alkenes. They are also referred to as Brookhart's catalysts after they were used to make high-molar-mass polyolefins for the first time at the University of North Carolina at Chapel Hill in 1995. Currently, nickel and palladium complexes bearing α-diimine ligands, such as the two examples shown, are the most thoroughly described chain walking catalysts in the scientific literature. Ligand design influences not only the extent of CW but also the regio- and stereoselectivity and the susceptibility of the catalyst to chain-breaking reactions, mainly β-H elimination, which in turn determine the achievable molar mass and the possibility of living polymerization behaviour. Thus, stereoblock copolymers could be made by combining living and stereospecific CW polymerization catalysts. Continued research effort led to the design of other ligands that give CW polymerization catalysts upon complexation to late transition metals; examples are β-diimine, α-keto-β-diimine, amine-imine and, most recently, diamine ligands. As the vast majority of CW polymerization catalysts are based on late-transition-metal complexes, which generally have lower oxophilicity, these complexes have also been shown to copolymerise olefins with polar monomers such as acrylates, alkyl vinyl ketones, ω-alken-1-ols and ω-alken-1-carboxylic acids, which was the main initial motivation for developing this class of catalysts. These random copolymers can further be used to construct sophisticated amphiphilic graft copolymers with a hydrophobic polyolefin core and a shell of hydrophilic arms, in some cases made of stimuli-responsive polymers.
Mechanism
CW occurs after the polymer chain has grown somewhat on the metal catalyst. The precursor is a 16 e− complex with the general formula [ML2(C2H4)(chain)]+. The ethylene ligand (the monomer) dissociates to produce a highly unsaturated 14 e− cation. This cation is stabilized by an agostic interaction. β-Hydride elimination then occurs to give a hydride-alkene complex. Subsequent reinsertion of the M–H into the C=C bond, but in the opposite sense, gives a metal-alkyl complex.
This process, a step in the chain walk, moves the metal from the end of a chain to a secondary carbon center. At this stage, two options are available: (1) chain walking can continue or (2) a molecule of ethylene can bind to re-form the 16 e− complex. At this second resting state, the ethylene molecule can insert to grow the polymer or dissociate, inducing further chain walking. If many branches can form, a hyperbranched topology results. Therefore, homopolymerization of ethene alone can give a branched polymer, whereas the same mechanism leads to chain straightening in α-olefin polymerization. Varying the extent of CW by changing temperature or monomer concentration, or by switching catalysts, can be used to produce block copolymers with amorphous and semi-crystalline blocks or with blocks of different topology.
References
Polymer chemistry
Organometallic chemistry | Chain walking | [
"Chemistry",
"Materials_science",
"Engineering"
] | 873 | [
"Organometallic chemistry",
"Materials science",
"Polymer chemistry"
] |
33,850,965 | https://en.wikipedia.org/wiki/Streeter%E2%80%93Phelps%20equation | The Streeter–Phelps equation is used in the study of water pollution as a water quality modelling tool. The model describes how dissolved oxygen (DO) decreases in a river or stream along a certain distance by degradation of biochemical oxygen demand (BOD). The equation was derived by H. W. Streeter, a sanitary engineer, and Earle B. Phelps, a consultant for the U.S. Public Health Service, in 1925, based on field data from the Ohio River. The equation is also known as the DO sag equation.
Streeter–Phelps equation
The Streeter–Phelps equation determines the relation between the dissolved oxygen concentration and the biological oxygen demand over time and is a solution to the linear first order differential equation dD/dt = k_d·L_t − k_r·D.
This differential equation states that the total change in oxygen deficit (D) is equal to the difference between the two rates of deoxygenation and reaeration at any time.
The Streeter–Phelps equation, assuming a plug-flow stream at steady state, is then D = (k_d·L_a / (k_r − k_d))·(e^(−k_d·t) − e^(−k_r·t)) + D_a·e^(−k_r·t),
where
D is the saturation deficit, which can be derived from the dissolved oxygen concentration at saturation minus the actual dissolved oxygen concentration (D = DO_sat − DO). D has the dimensions of a concentration, e.g. mg/L.
k_d is the deoxygenation rate, usually in d⁻¹.
k_r is the reaeration rate, usually in d⁻¹.
L_a is the initial oxygen demand of organic matter in the water, also called the ultimate BOD (BOD at time t = infinity). The unit of L_a is mg/L.
L_t is the oxygen demand remaining at time t, in mg/L.
D_a is the initial oxygen deficit, in mg/L.
t is the elapsed time, usually in days.
k_d typically lies within the range 0.05–0.5 d⁻¹ and k_r typically lies within the range 0.4–1.5 d⁻¹.
The Streeter–Phelps equation is also known as the DO sag equation. This is due to the shape of the graph of the DO over time.
Critical oxygen deficit
On the DO sag curve a minimum concentration occurs at some point along the stream. If the Streeter–Phelps equation is differentiated with respect to time and set equal to zero, the time at which the minimum DO occurs (the critical time) is expressed by t_crit = (1/(k_r − k_d))·ln[(k_r/k_d)·(1 − D_a·(k_r − k_d)/(k_d·L_a))].
To find the value of the critical oxygen deficit, D_crit, the Streeter–Phelps equation is combined with the equation above for the critical time, t_crit, giving D_crit = (k_d/k_r)·L_a·e^(−k_d·t_crit). Then the minimum dissolved oxygen concentration is DO_min = DO_sat − D_crit.
Mathematically it is possible to get a negative value of the minimum dissolved oxygen concentration, even though it is not possible to have a negative amount of DO in reality.
The distance traveled in a river from a given point source pollution or waste discharge downstream to the critical point (which is the minimum DO) is found by x_crit = v·t_crit,
where v is the flow velocity of the stream. This formula is a good approximation as long as the flow can be regarded as a plug flow (turbulent).
Estimation of reaeration rate
Several estimations of the reaeration rate exist, which generally follow the equation
where
is a constant.
is the flow velocity [m/s].
is the depth [m].
is a constant.
is a constant.
The constants depend on the system to which the equation is applied, i.e. the flow velocity and the size of the stream or river. Different values are available in the literature.
The software "International Hydrological Programme" applies the following equation derived on the basis of values used in published literature
where
.
is the average flow velocity [m/s].
is the average depth of flow in the river [m].
Temperature correction
Both the deoxygenation rate, k_d, and the reaeration rate, k_r, can be temperature corrected, following the general formula k_T = k_20·θ^(T − 20),
where
k_20 is the rate at 20 degrees Celsius.
θ is a constant, which differs for the two rates.
T is the actual temperature in the stream in degC.
Normally θ has the value 1.048 for k_d and 1.024 for k_r.
An increasing temperature has the most impact on the deoxygenation rate, and results in an increased critical deficit (D_crit), while the critical time t_crit decreases. Furthermore, the saturation concentration decreases with increasing temperature, which leads to a decrease in the DO concentration.
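A quick worked example of this correction (the 20 °C rate values are illustrative assumptions, not from the article):

```python
# Rates at 20 degC are assumed example values; the theta values are those quoted above.
k_d20, k_r20 = 0.35, 0.9            # deoxygenation and reaeration rates at 20 degC [1/day]
T = 25.0                            # stream temperature [degC]
k_dT = k_d20 * 1.048 ** (T - 20)    # temperature-corrected deoxygenation rate, ~0.44 1/day
k_rT = k_r20 * 1.024 ** (T - 20)    # temperature-corrected reaeration rate,   ~1.01 1/day
```

The deoxygenation rate thus rises faster with temperature than the reaeration rate, which is why the critical deficit grows in warmer water.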
Mixing of rivers
When two streams or rivers merge or water is discharged to a stream it is possible to determine the BOD and DO after mixing assuming steady state conditions and instantaneous mixing. The two streams are considered as dilutions of each other, thus the initial BOD and DO will be L_0 = (Q_r·L_r + Q_m·L_m) / (Q_r + Q_m)
and DO_0 = (Q_r·DO_r + Q_m·DO_m) / (Q_r + Q_m),
where
L_0 is the initial concentration of BOD in the river downstream of the mixing point, also called BOD(0). The unit of L_0 is mg/L.
L_r is the background BOD concentration in the river, in mg/L.
L_m is the BOD concentration in the merging river, in mg/L.
DO_0 is the initial concentration of dissolved oxygen in the river downstream of the conjoining point, in mg/L.
DO_r is the background concentration of dissolved oxygen in the river, in mg/L.
DO_m is the background concentration of dissolved oxygen in the merging river, in mg/L.
Q_r is the flow in the river upstream from the mixing point.
Q_m is the flow in the merging river upstream from the mixing point.
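A minimal sketch of this flow-weighted (dilution) calculation, with all numbers chosen purely for illustration:

```python
def mix(c_river, q_river, c_merge, q_merge):
    """Flow-weighted average concentration just downstream of the mixing point."""
    return (c_river * q_river + c_merge * q_merge) / (q_river + q_merge)

# Illustrative numbers (assumed, not from the article): a small, BOD-rich discharge
# merging into a larger, well-oxygenated river.
L_0  = mix(c_river=2.0, q_river=8.0, c_merge=30.0, q_merge=0.5)   # initial BOD ~3.6 mg/L
DO_0 = mix(c_river=8.5, q_river=8.0, c_merge=2.0,  q_merge=0.5)   # initial DO  ~8.1 mg/L
```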
Numerical approach
Nowadays it is possible to solve the classical Streeter–Phelps equation numerically by use of computers. The differential equations are solved by integration.
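A minimal sketch of such a numerical integration, assuming the classical coupled first-order equations dL/dt = −k_d·L and dD/dt = k_d·L − k_r·D used above (all parameter values are illustrative assumptions):

```python
import numpy as np

# Illustrative parameter values (assumed for the sketch, not taken from the article)
k_d, k_r = 0.3, 0.7          # deoxygenation and reaeration rates [1/day]
L_a, D_a = 10.0, 1.0         # initial ultimate BOD and initial oxygen deficit [mg/L]

dt = 0.01                                # time step [day]
t = np.arange(0.0, 20.0 + dt, dt)
L = np.empty_like(t)                     # BOD remaining
D = np.empty_like(t)                     # oxygen deficit
L[0], D[0] = L_a, D_a
for i in range(1, t.size):               # explicit Euler integration
    L[i] = L[i-1] - dt * k_d * L[i-1]
    D[i] = D[i-1] + dt * (k_d * L[i-1] - k_r * D[i-1])

i_c = int(D.argmax())                    # index of the critical (maximum-deficit) point
print(f"critical time ~ {t[i_c]:.2f} d, critical deficit ~ {D[i_c]:.2f} mg/L")
```

The maximum of the computed deficit reproduces the critical point discussed earlier; a smaller time step or a higher-order integrator improves the accuracy.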
History
In 1925, a study on the phenomena of oxidation and reaeration in the Ohio River in the US was published by the sanitary engineer, Harold Warner Streeter and the consultant, Earle Bernard Phelps (1876–1953). The study was based on data obtained from May 1914 to April 1915 by the United States Public Health Service under supervision of Surg. W.H. Frost.
More complex versions of the Streeter–Phelps model were introduced during the 1960s, where computers made it possible to include further contributions to the oxygen development in streams. At the head of this development were O'Connor (1960) and Thomann (1963). O'Connor added the contributions from photosynthesis, respiration and sediment oxygen demand (SOD). Thomann expanded the Streeter–Phelps model to allow for multi segment systems.
Applications and limitations
The simple Streeter–Phelps model is based on the assumptions that a single BOD input is distributed evenly over the cross section of a stream or river and that it moves as plug flow with no mixing in the river. Furthermore, only one DO sink (carbonaceous BOD) and one DO source (reaeration) are considered in the classical Streeter–Phelps model. These simplifications will give rise to errors in the model. For example, the model does not account for BOD removal by sedimentation, conversion of suspended BOD to a dissolved state, the oxygen demand of the sediment, or the effects of photosynthesis and respiration on the oxygen balance.
Expanded model
In addition to the oxidation of organic matter and the reaeration process, there are many other processes in a stream which affect the DO. In order to make a more accurate model it is possible to include these factors using an expanded model.
The expanded model is a modification of the traditional model and includes internal sources (reaeration and photosynthesis) and sinks (BOD, background BOD, SOD and respiration) of DO.
It is not always necessary to include all of these parameters. Instead relevant sources and sinks can be summed to yield the overall solution for the particular model.
Parameters in the expanded model can be either measured in the field or estimated theoretically.
Background BOD
Background BOD or benthic oxygen demand is the diffuse source of BOD represented by the decay of organic matter that has already settled on the bottom. This will give rise to a constant diffuse input thus the change in BOD over time will be
where
is the rate for oxygen consumption by BOD, usually in .
is the BOD from organic matter in the water .
is the background BOD input .
Sedimentation of BOD
Sedimented BOD does not directly consume oxygen and this should therefore be taken into account. This is done by introducing a rate of BOD removal combined with a rate of oxygen consumption by BOD, giving a total rate for removal of BOD
where
is the rate of oxygen consumption by BOD, usually in .
is the rate of settling of BOD, usually in .
The change in BOD over time is described as
where is the BOD from organic matter in the water .
is typically in the range of 0.5-5 .
Sediment oxygen demand
Oxygen can be consumed by organisms in the sediment. This process is referred to as sediment oxygen demand (SOD). Measurement of SOD can be undertaken by measuring the change of oxygen in a box on the sediment (benthic respirometer).
The change in oxygen deficit due to consumption by sediment is described as
where
is the depth of the river [m]
is the SOD
D is the saturation deficit .
is the reaeration rate [].
The SOD is typically in the range of 0.1–1 for a natural river with low pollution and 5–10 for a river with moderate to heavy pollution.
Nitrification
Ammonium is oxidized to nitrate under aerobic conditions
NH4+ + 2O2 → NO3− + H2O + 2H+
Ammonium oxidation can be treated as part of BOD, so that BOD = CBOD + NBOD, where CBOD is the carbonaceous biochemical oxygen demand and NBOD is nitrogenous BOD. Usually CBOD is much higher than the ammonium concentration and thus NBOD often does not need to be considered. The change in oxygen deficit due to oxidation of ammonium is described as
where
D is the saturation deficit.
is the nitrification rate .
is ammonium-nitrogen concentration.
The range of is typically 0.05-0.5 .
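The reaction given above fixes the oxygen cost of nitrification, which is what makes NBOD easy to estimate from an ammonium measurement (a small sketch; the 1.5 mg/L value is an assumed example, not from the article):

```python
# Oxygen demand of nitrification, from the stoichiometry NH4+ + 2 O2 -> NO3- + H2O + 2 H+
M_N, M_O2 = 14.0, 32.0            # molar masses of nitrogen and O2 [g/mol]
o2_per_g_N = 2 * M_O2 / M_N       # ~4.57 g O2 consumed per g of ammonium-nitrogen
NBOD = o2_per_g_N * 1.5           # e.g. 1.5 mg/L NH4+-N (assumed) -> ~6.9 mg/L of NBOD
```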
Photosynthesis and respiration
Photosynthesis and respiration are performed by algae and by macrophytes. Respiration is also performed by bacteria and animals. Assuming steady state (net daily average) the change in deficit will be
where
is the respiration .
is the photosynthesis .
Note that BOD only includes respiration by microorganisms, e.g. algae and bacteria, and not by macrophytes and animals.
Due to the variation of light over time, the variation of the photosynthetic oxygen can be described by a periodical function over time, where time is after sunrise and before sunset
where
is the photosynthesis at a given time .
is the daily maximum of the photosynthesis .
is the fraction of day with sunlight, usually day.
is the time at which sun rises .
The range of the daily average value of primary production is typically 0.5-10 .
See also
Water pollution
Water quality modelling
Biochemical oxygen demand
Oxygenation (environmental)
Oxygen saturation
Oxygen depletion
Hypoxia (environmental)
Deoxygenation
Water aeration
Photosynthesis
Nitrification
Fick's laws of diffusion
Ohio River
United States Public Health Service
References
External links
O'Connor D. J., 1960, Oxygen Balance of an Estuary, Journal of the Sanitary Engineering Division, ASCE, Vol. 86, No. SA3, Proc. Paper 2472, May, 1960
Thomann R. V.,1963, Mathematical model for dissolved oxygen, Journal of the Sanitary Engineering Division, American Society of Civil Engineers, Volume 89, No. SA5
Environmental engineering
Water and the environment
Water pollution
Mathematics articles needing expert attention | Streeter–Phelps equation | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,293 | [
"Chemical engineering",
"Water pollution",
"Civil engineering",
"Environmental engineering"
] |
28,446,647 | https://en.wikipedia.org/wiki/Metallurgical%20failure%20analysis | Metallurgical failure analysis is the process of determining the mechanism that has caused a metal component to fail. It can identify the cause of failure, providing insight into the root cause and potential solutions to prevent similar failures in the future, as well as culpability, which is important in legal cases. Resolving the source of metallurgical failures can be of financial interest to companies. The annual cost of corrosion (a common cause of metallurgical failures) in the United States was estimated by NACE International in 2012 to be $450 billion, a 67% increase compared to estimates for 2001. These failures can be analyzed to determine their root cause, which, if corrected, would reduce the cost of failures to companies.
Failure can be broadly divided into functional failure and expected performance failure. Functional failure occurs when a component or process fails and its parent system stops functioning entirely. This category includes the common idea of a component fracturing rapidly. Expected performance failures are when a component causes the system to perform below a certain performance criterion, such as life expectancy, operating limits, or shape and color. Some performance criteria are documented by the supplier, such as the maximum load allowed on a tractor, while others are implied or expected by the customer, such as fuel consumption (miles per gallon for automobiles).
Often a combination of environmental conditions and stress will cause failure. Metal components are designed to withstand the environment and stresses that they will be subjected to. The design of a metal component involves not only a specific elemental composition but also specific manufacturing processes such as heat treatments, machining processes, etc. The huge array of different metals that results all have unique physical properties. Specific properties are designed into metal components to make them more robust to various environmental conditions. These differences in physical properties give rise to unique failure modes. A metallurgical failure analysis takes into account as much of this information as possible during analysis. The ultimate goal of failure analysis is to determine the root cause and provide a solution to any underlying problems to prevent future failures.
Failure investigation
The first step in failure analysis is investigating the failure to collect information. The sequence of steps for information gathering in a failure investigation are:
Collection of information about the circumstances surrounding the failure and selection of specimens
Preliminary examination of the failed part (visual examination) and comparison with parts that have not failed
Macroscopic examination and analysis and photographic documentation of specimens (fracture surfaces, secondary cracks, and other surface phenomena)
Microscopic examination and analysis of specimens (fracture surfaces)
Selection and preparation of metallographic sections
Microscopic examination and analysis of prepared metallographic specimens
Nondestructive testing
Destructive/mechanical testing
Determination of failure mechanism
Chemical analysis (bulk, local, surface corrosion products, deposits or coatings)
Identify all possible root causes
Testing most likely possible root causes under simulated service conditions
Analysis of all the evidence, formulation of conclusions, and writing the report including recommendations
Techniques used
Various techniques are used in the investigative process of metallurgical failure analysis.
Macroscopic examination: camera, stereoscope
Microscopic examination: light microscopy, electron microscopy, x-ray microscopy, metallographic etching
Mechanical testing: hardness testing, tensile testing, Charpy impact testing
Chemical testing: microprobe analysis, energy dispersive spectroscopy
Non-destructive testing: Non-destructive testing is a test method that allows certain physical properties of metal to be examined without taking the samples completely out of service. NDT is generally used to detect failures in components before the component fails catastrophically.
Destructive testing: Destructive testing involves removing a metal component from service and sectioning the component for analysis. Destructive testing gives the failure analyst the ability to conduct the analysis in a laboratory setting and perform tests on the material that will ultimately destroy the component.
Metallurgical failure modes
There is no standardized list of metallurgical failure modes and different metallurgists might use a different name for the same failure mode. The failure mode terms listed below are those accepted by ASTM, ASM, and/or NACE as distinct metallurgical failure mechanisms.
Caused by corrosion and stress
Stress corrosion cracking
Stress corrosion (NACE term)
Corrosion fatigue
Caustic cracking (ASTM term)
Caustic embrittlement (ASM term)
Sulfide stress cracking (ASM, NACE term)
Stress-accelerated Corrosion (NACE term)
Hydrogen stress cracking (ASM term)
Hydrogen-assisted stress corrosion cracking (ASM term)
Caused by stress
Fatigue (ASTM, ASM term)
Mechanical overload
Creep
Rupture
Cracking (NACE term)
Embrittlement
Caused by corrosion
Erosion corrosion
Pitting corrosion
Oxygen pitting
Hydrogen embrittlement
Hydrogen-induced cracking (ASM term)
Corrosion embrittlement (ASM term)
Hydrogen disintegration (NACE term)
Hydrogen-assisted cracking (ASM term)
Hydrogen blistering
Corrosion
Potential root causes
Potential root causes of metallurgical failures are vast, spanning the lifecycle of a component from design to manufacturing to usage. The most common reasons for failures can be classified into the following categories:
Service or operation conditions
Failures due to service or operation conditions includes using a component outside of its intended conditions, such as an impact force or a high load. It can also include failures due to unexpected conditions in usage, such as an unexpected contact point that causes wear and abrasion or an unexpected humidity level or chemical presence that causes corrosion. These factors result in the component failing at an earlier time than expected.
Improper maintenance
Improper maintenance would cause potential sources of fracture to go untreated and lead to premature failure of a component in the future. The reason for improper maintenance could be either intentional, such as skipping a yearly maintenance to avoid the cost, or unintentional, such as using the wrong engine oil.
Improper testing or inspection
Testing and/or inspection are typically included in component manufacturing lines to verify the product meets some set of standards to ensure the desired performance in the field. Improper testing or inspection would circumvent these quality checks and could allow a part with a defect that would normally disqualify the component from field use to be sold to a customer, potentially leading to a failure.
Fabrication or manufacturing errors
Manufacturing or fabrication errors occur during the processing of the material or component. For metal parts, casting defects such as cold shuts, hot tears or slag inclusions are common. Errors can also involve surface treatment problems or incorrect processing parameters, such as improper ramming of a sand mold or the wrong temperature during hardening.
Design errors
Design errors arise when the desired use case, such as the stress state in service or potential corrosive agents in the service environment, was not properly accounted for, leading to an ineffective design. Design errors often include dimensioning and materials selection, but they can also involve the complete design.
Use of computational methods for failure analysis
Computational methods have been increasing in popularity as a way to test possible root causes, because they do not require sacrificing a component to prove a root cause. Common cases where computational methods are used are failures due to erosion, failures of components under complex stress states, and predictive analyses. Computational fluid dynamics is used to determine the flow pattern and shear stresses on a component that has failed due to erosive wear. Finite element analysis is used to model components under complex stress states. Finite element analysis as well as phase-field models can be used to predict crack propagation and failure, and these predictions are then used to prevent failure by influencing component design.
See also
Forensic engineering
Corrosion engineering
Failure analysis
Fracture
Fracture mechanics
References
Mechanical engineering
Materials science
Metallurgy | Metallurgical failure analysis | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,525 | [
"Applied and interdisciplinary physics",
"Metallurgy",
"Materials science",
"nan",
"Mechanical engineering"
] |
32,246,658 | https://en.wikipedia.org/wiki/Physics%20of%20Plasmas | Physics of Plasmas is a peer-reviewed monthly scientific journal on plasma physics, published since 1994 by the American Institute of Physics in cooperation with the American Physical Society's Division of Plasma Physics.
Until 1988, the journal topic was covered by Physics of Fluids. From 1989 until 1993, Physics of Fluids was split into Physics of Fluids A covering fluid dynamics and Physics of Fluids B dedicated to plasma physics. In 1994, Physics of Plasmas was split off as a separate journal.
External links
Monthly journals
English-language journals
American Institute of Physics academic journals
Academic journals established in 1994
Plasma science journals | Physics of Plasmas | [
"Physics"
] | 120 | [
"Plasma science journals",
"Plasma physics stubs",
"Plasma physics"
] |
32,249,901 | https://en.wikipedia.org/wiki/Nanotechnology%20%28journal%29 | Nanotechnology is a peer-reviewed scientific journal published by IOP Publishing. It covers research in all areas of nanotechnology. The editor-in-chief is Ray LaPierre (McMaster University, Canada).
Abstracting, indexing, and impact factor
According to the Journal Citation Reports, the journal has a 2023 impact factor of 2.9.
It is indexed in the following bibliographic databases:
Chemical Abstracts
Compendex
Inspec
Web of Science
PubMed
Scopus
Astrophysics Data System
Aerospace & High Technology
EMBASE
Environmental Science and Pollution Management
International Nuclear Information System
References
External links
Nanotechnology journals
IOP Publishing academic journals
Weekly journals
Academic journals established in 1990
English-language journals | Nanotechnology (journal) | [
"Materials_science"
] | 147 | [
"Materials science stubs",
"Materials science journals",
"Materials science journal stubs",
"Nanotechnology journals",
"Nanotechnology stubs",
"Nanotechnology"
] |
32,249,963 | https://en.wikipedia.org/wiki/Nano%20Research | Nano Research is a peer-reviewed scientific journal co-published by Tsinghua Press and Springer Science+Business Media. It covers research in all areas of nanotechnology. It was established in 2008 and is published monthly. The current editors-in-chief are Yadong Li and Shoushan Fan.
Abstracting and indexing
The journal is indexed and abstracted in the following bibliographic databases:
According to the Journal Citation Reports, the journal has a 2023 impact factor of 9.5. According to Scopus, it has a CiteScore of 14.3, ranking 17/434 in the category "Condensed Matter Physics".
References
External links
Springer Science+Business Media academic journals
Monthly journals
Academic journals established in 2008
Nanotechnology journals
English-language journals | Nano Research | [
"Materials_science"
] | 159 | [
"Materials science stubs",
"Materials science journals",
"Materials science journal stubs",
"Nanotechnology journals",
"Nanotechnology stubs",
"Nanotechnology"
] |
32,250,099 | https://en.wikipedia.org/wiki/Nanoscale%20Research%20Letters | Nanoscale Research Letters is a peer-reviewed open access scientific journal covering research in all areas of nanotechnology and published by Springer Science+Business Media.
External links
Springer Science+Business Media academic journals
Monthly journals
Academic journals established in 2006
Nanotechnology journals
English-language journals
Open access journals | Nanoscale Research Letters | [
"Materials_science"
] | 61 | [
"Materials science stubs",
"Nanotechnology journals",
"Materials science journals",
"Materials science journal stubs",
"Nanotechnology stubs",
"Nanotechnology"
] |
32,253,448 | https://en.wikipedia.org/wiki/Telluric%20iron | Telluric iron, also called native iron, is iron that originated on Earth, and is found in a metallic form rather than as an ore. Telluric iron is extremely rare, with only one known major deposit in the world, located in Greenland.
Introduction
With the exception of its molten core, nearly all elemental iron on Earth is found as iron ores. Metallic iron is thought to have been transformed into iron oxides during the Great Oxidation Event, which began roughly 2 billion years ago, although other theories exist. Until the late 1800s, iron as a native metal was only a matter of speculation outside of isolated Greenland; the only metallic iron then known on Earth was found as meteorites, which were deposited onto the Earth from outer space.
Telluric iron is named after the Latin word Tellus, meaning "Earth" (the planet, as opposed to terra meaning "earth": the land, ground or soil), combined with the suffix -ic meaning "of" or "born from", differentiating it from meteorites. Telluric iron resembles meteoric iron in that it contains both a significant amount of nickel and Widmanstätten structures. However, telluric iron typically contains only around 3% nickel, which is too low for meteorites, of which none have been found with less than 5%. There are two types of telluric iron. Both type 1 and type 2 contain comparable amounts of nickel and other impurities; the main difference between the two is the carbon content, which greatly affects the hardness, workability, and melting point of the metal.
Material properties
Telluric iron is metallic iron that formed within the Earth's mantle and crust. Although minor deposits of telluric iron have been found around the world, the west shores of Greenland hold the only known major deposits. However, these deposits may vary drastically in shape and composition, even in the same region, as well as drastic variations between different regions such as Uivfaq, Asuk, Blaafjeld, and Mellemfjord. The common factor is that all Greenlandic deposits tend to be found in dikes (lava-filled fractures in the bedrock) or extrusions where molten rock was able to flow out onto the surface. Another commonality is that all deposits are found in association with graphite-rich feldspar, likely contributing to the high carbon-content and low oxide presence in the metal, although it is unknown if the metal managed to escape being oxidized with the rest of Earth's iron, or if it began as beds of ore and coal that subducted and then were naturally smelted in the lava due to the reducing environment provided by the carbon-rich, graphitic feldspar.
Telluric iron in Greenland is unique, in that it can be found in nearly all phases of iron-carbon alloys, and with drastically varying crystalline structures. In some rock it is found mixed with basalt as very small grains with sharp corners and irregular shapes, whereas in others the small, grain-sized droplets in the molten magma were able to coalesce into larger, pea-sized droplets that crystallized with a mostly spherical or oblong shape. Still in others the dike or extrusion may be made almost entirely out of very high-carbon cast-iron, which could more easily coalesce within the magma and flow into cracks due to its lower viscosity and melting point. This cast iron is often crusted with or contains inclusions of basalt, as it extruded out of the ground as very large, globular masses within the lava, out of which large boulders formed due to natural erosion of the surrounding basalt.
Telluric iron is largely divided into two groups, depending on the carbon content. Type 1 is a cast-iron typically containing over 2.0% carbon, while type 2 ranges somewhere between wrought iron and a eutectoid steel. Both types tend to handle weathering in the elements very well, but tend to decompose and crumble very quickly in the dry, controlled atmosphere of a museum, although type 2 is far more prone to this kind of damage.
Type 1
Type 1 telluric iron contains a significant amount of carbon. Type 1 is a white nickel cast-iron, containing 1.7 to 4% carbon and 0.05 to 4% nickel, which is very hard and brittle and does not respond well to cold working. The structure of type 1 consists mainly of pearlite and cementite or cohenite, with inclusions of troilite and silicate. The individual ferrite grains are typically about a millimeter in size. Although the composition of the grains may vary, even within the same grain, they are mostly composed of fairly pure nickel-ferrite. The ferrite grains are connected with cementite laminations; typically 5–25 micrometers thick; forming the pearlite.
Type 1 is found as massive extrusions or very large boulders, typically ranging from a few tons to tens of tons. The metal could not be cold worked by the ancient Inuit (the local inhabitants of Greenland), and it proves extremely difficult to machine even with modern tools; machining of type 1 is perhaps best accomplished with a carborundum wheel and water cooling. However, type 1 may have been used by the Inuit as hammer and anvil stones.
When sawed in half, boulders of type 1 tend to have a thick shell of cast-iron on the outside that can barely be broken with pneumatic jackhammers, but inside a much more brittle construction of iron grains in an almost powdery form, sintered together to form a porous, sponge-iron type of material that pulverizes at the strike of a hammer.
Type 2
Type 2 telluric iron also contains around 0.05 to 4% nickel, but typically less than 0.7% carbon. Type 2 is a malleable nickel-iron which responds well to cold working. The carbon and nickel content have a great effect on the final hardness of the cold-worked piece.
Type 2 is found as small grains mixed within basalt rock. The grains are usually 1–5 millimeters in diameter. The grains are usually found individually, separated by the basalt, although they are sometimes sintered together to form larger aggregates. The larger pieces also contain small amounts of cohenite, ilmenite, pearlite, and troilite. Type 2 was used by the Inuit to make items such as knives and ulus. The basalt was usually crushed in order to release the pea-sized grains, which were then hammered into discs about the size of coins. The metal is very soft and can be hammered into very thin plates. These flat discs were usually inserted into long slits carved into bone handles, in rows so that they slightly overlapped each other, forming an edge that resembled a combination of a knife and a saw (an inverted scalloped edge).
History
Aside from a very small deposit of telluric iron in Kassel, Germany, which has now been depleted, and a few other minor deposits from around the world, the only known major deposits exist in and around the area of Disko Bay, in Greenland. The material was found in the volcanic plains of basalt rock, and was used by the local Inuit to make cutting edges for tools like knives and ulus. The Inuit were the only people to make practical use of telluric iron.
In 1870, Adolf Erik Nordenskiöld discovered large boulders of iron near the Disko Bay area of Greenland. Knowing that the Inuit had made tools from the Cape York meteorite, mainly due to Sir John Ross' discovery that the natives of Greenland used iron knives, Nordenskiöld landed at Fortune Bay on Disko Island to search for the material. The Inuit had told Ross that they got the iron from high on a mountain, at a site where two large boulders lay. One was very hard and could not be broken, but the other was chipped into smaller pieces from which balls of iron were extracted and hammered into flat discs for the knives. Nordenskiöld searched unsuccessfully for the site, until being led by some of the local Inuit to a place called Uivfaq, where large masses of metallic iron were strewn about the area. He assumed that the metal was of meteoric origin, since both contain significant amounts of nickel and both had Widmanstätten patterns. Most scientists at the time believed that no un-oxidized telluric iron existed, and few questioned Nordenskiöld's finding.
Gustav Nauckhoff made an expedition to Greenland in 1871. Armed with dynamite and lifting equipment, his expedition collected three large samples of telluric iron, also believing them to be meteoric, per Nordenskiöld's examination, and brought them back to Europe for further study. These samples can be found currently in Sweden, Finland, and Denmark. A 25-ton block now rests outside of the Riksmuseum in Stockholm, a 6.6 ton block outside the Geological Museum in Copenhagen, and a 3-ton block can be found in the Museum of Natural History in Kumpula, Helsinki.
Accompanying Nauckhoff in 1871 was K. J. V. Steenstrup. Due to circumstances like the shape of the boulders, which often had sharp corners or jagged edges that are not characteristic of meteorites (which ablate considerably during atmospheric entry), or the fact that many had areas that were encrusted with basalt, Steenstrup disagreed with Nordenskiöld about the origin of the boulders, and set out on an expedition of his own in 1878. In 1879, Steenstrup first identified the type 2 iron, showing that it also contained Widmanstätten structures. Steenstrup later reported what he found:
In the autumn of 1879, I made a discovery in connection with this matter, for in an old grave at Ekaluit ... I found 9 pieces of basalt containing round balls and irregular pieces of metallic iron. These pieces were lying together with bone knives, similar to those brought home by Ross, as well as with the usual stone tools ... whereas the 9 pieces of basalt with the iron balls were evidently the material for the bone knives. This iron is soft and keeps well in the air, from which reason it is fit for use in the manner described by Ross. The rock in which the iron appears is a typical, large-grained felspar-basalt. The discovery has a double significance, firstly, because it is the first time we have seen the material out of which the Esquimaux made artificial knives, and secondly, because it showed that they have used telluric iron for that purpose.
After the discovery in the grave at Ekaluit, Steenstrup found many large outcrops of basalt containing the type 2 iron. Since the type 2 grains are embedded within volcanic basalt that matches the underlying bedrock, Steenstrup was able to show that the iron was from terrestrial, or telluric, sources. In his report, Steenstrup added,
This peculiar layer of basalt is filled from top to bottom with iron-grains of all sizes from a fraction of a millimeter to a length of 18 mm with a breadth of 14 mm, which is the greatest I have found. ... When polished, this iron shows beautiful Widmannstätten figures. ... Metallic nickel-iron with Widmannstätten figures has now been proved to be also a telluric mineral, and the presence of nickel together with a certain crystalline structure are consequently not sufficient to give the character of meteorites to loose iron blocks.
Steenstrup's findings were later confirmed by meteorite expert J. Lawrence Smith in 1879, and then by Joh Lorenzen in 1882. The extremely rare telluric iron found in western Greenland has been under study ever since.
Occurrence
In addition to the Disko Island deposit native iron has been reported from Fortune Bay, Mellemfjord, Asuk, and other locations along Greenland's west coast. Other locations include:
Ben Breck, Scotland in granite with magnetite
in County Antrim, Northern Ireland
occurs in basalt at Bühl, near Ahnatal-Weimar, Hesse, and associated with nodules of pyrite within limestone at Muhlhausen, Thuringia, Germany
near Rivne, Volhynia, Ukraine
in trachyte at Auvergne, France
in Russia at Grushersk in the Don district southern Urals associated with pyrite; in the Huntukungskii (Khungtukun) massif, Krasnoyarsk Kray; and on the Tolbachik fissure volcano on the Kamchatka Peninsula
in the Hatrurim Formation, Negev, Israel
In the United States occurrences have been reported from coal beds near Cameron, Clinton County, Missouri; and from carboniferous shale near New Brunswick, Somerset County, New Jersey
In Ontario it has been reported from Cameron Township, Nipissing District, and on St. Joseph Island in Lake Huron.
Native nickel-iron alloys with Ni3Fe to Ni2Fe occur as placer deposits derived from ultramafic rocks. Awaruite was described in 1885 from New Zealand.
References
Metals
Ferrous alloys
Native element minerals
Cubic minerals
Minerals in space group 229 | Telluric iron | [
"Chemistry"
] | 2,735 | [
"Metals",
"Ferrous alloys",
"Alloys"
] |
32,253,961 | https://en.wikipedia.org/wiki/Photon%20upconversion | Photon upconversion (UC) is a process in which the sequential absorption of two or more photons leads to the emission of light at shorter wavelength than the excitation wavelength. It is an anti-Stokes type emission. An example is the conversion of infrared light to visible light. Upconversion can take place in both organic and inorganic materials, through a number of different mechanisms. Organic molecules that can achieve photon upconversion through triplet-triplet annihilation are typically polycyclic aromatic hydrocarbons (PAHs). Inorganic materials capable of photon upconversion often contain ions of d-block or f-block elements. Examples of these ions are Ln3+, Ti2+, Ni2+, Mo3+, Re4+, Os4+, and so on.
Physical mechanisms
There are three basic mechanisms for photon upconversion in inorganic materials and at least two distinct mechanisms in organic materials. In inorganic materials photon upconversion occurs through energy transfer upconversion (ETU), excited-state absorption (ESA) and photon avalanche (PA). Such processes can be observed in materials with very different sizes and structures, including optical fibers, bulk crystals or nanoparticles, as long as they contain any of the active ions mentioned above. Organic molecules can upconvert photons through sensitized triplet-triplet annihilation (sTTA) and energy pooling.
Upconversion should be distinguished from two-photon absorption and second-harmonic generation. These two physical processes have a similar outcome to photon upconversion (emission of photons of shorter wavelength than the excitation) but the mechanism behind is different. An early proposal (a solid-state IR quantum counter) was made by Nicolaas Bloembergen in 1959 and the process was first observed by François Auzel in 1966.
A thermal upconversion mechanism is also possible. This mechanism is based on the absorption of photons with low energies in the upconverter, which heats up and re-emits photons with higher energies. To improve this process, the density of optical states of the upconverter can be carefully engineered to provide frequency- and angularly-selective emission characteristics. For example, a planar thermal upconverting platform can have a front surface that absorbs low-energy photons incident within a narrow angular range, and a back surface that efficiently emits only high-energy photons. These surface properties can be realized through designs of photonic crystals, and theories and experiments have been demonstrated on thermophotovoltaics and passive radiative cooling. Under the best criteria, the energy conversion efficiency from solar radiation to electricity obtained by introducing an upconverter can reach 73% using the AM1.5D spectrum, and 76% when considering the sun as a black-body source at 6,000 K, for a single-junction cell.
Sensitized triplet-triplet annihilation
Sensitized triplet-triplet annihilation (sTTA) based photon upconversion is a bimolecular process that through a number of energy transfer steps, efficiently combines two low frequency photons into one photon of higher frequency. TTA systems consist of one absorbing species, the sensitizer, and one emitting species, the emitter (or annihilator). Emitters are typically polyaromatic chromophores with large singlet-triplet energy splitting, such as anthracene and its derivatives.
The first step in sensitized triplet-triplet annihilation is absorption of a low energy photon by the sensitizer. The sensitizer then populates its first triplet excited state (3Sen*) after intersystem crossing (ISC). The excitation energy on the sensitizer then transfers through a Dexter type triplet energy transfer (TET) to a ground state emitter, generating a triplet excited emitter (3Em*). Two triplet excited emitters then interact in a second energy transfer process, known as triplet-triplet annihilation (TTA). Upon TTA the triplet energies are fused leaving one emitter in its excited singlet state (1Em*) and the other emitter in its ground state. From the singlet excited state the emitter returns to the ground state through the emission of a photon. In this way two low energy photons are converted into one photon of higher energy. The principle relies on long lived triplet states to temporarily store the photon energy. Since molecular oxygen effectively quenches triplet states it is important that samples are thoroughly degassed or encapsulated to function efficiently.
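The sequence of steps just described can be summarized schematically (a compact restatement of the paragraph above, using the singlet/triplet notation already introduced; hν_low and hν_high denote the absorbed and emitted photons):

```latex
\begin{align*}
\mathrm{Sen} &\xrightarrow{\;h\nu_{\text{low}}\;} {}^{1}\mathrm{Sen}^{*} \xrightarrow{\;\text{ISC}\;} {}^{3}\mathrm{Sen}^{*} \\
{}^{3}\mathrm{Sen}^{*} + \mathrm{Em} &\xrightarrow{\;\text{TET}\;} \mathrm{Sen} + {}^{3}\mathrm{Em}^{*} \\
{}^{3}\mathrm{Em}^{*} + {}^{3}\mathrm{Em}^{*} &\xrightarrow{\;\text{TTA}\;} {}^{1}\mathrm{Em}^{*} + \mathrm{Em} \\
{}^{1}\mathrm{Em}^{*} &\longrightarrow \mathrm{Em} + h\nu_{\text{high}}
\end{align*}
```

Two absorbed low-energy photons (one per sensitizer) are thus pooled into a single higher-energy emitted photon.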
Photon upconversion through sensitized triplet-triplet annihilation has the advantage of being efficient even at low excitation intensities making it potentially useful for converting sun light to enhance solar cell efficiencies.
Upconverting nanoparticles
Although photon upconversion was first studied in bulk crystals and optical fibers, it became better known with the development of nanomaterials. This happened due to the many ways in which nanostructures with photon upconversion properties can be applied. This new class of materials may broadly be referred to as upconverting nanoparticles or UCNPs.
Lanthanide-doped nanoparticles
Lanthanide-doped nanoparticles emerged in the late 1990s owing to the increasing focus on nanotechnology. Although their optical transitions essentially resemble those in bulk materials, the nanostructure amenable to surface modifications results in improved or new characteristics. Besides, the small size of the particles allow their use as alternatives to molecular fluorophores for biological applications. Their unique optical properties, such as large Stokes shift and the lack of blinking, have enabled them to rival conventional luminescent probes in challenging tasks including single-molecule tracking and deep tissue imaging. In the case of bioimaging, as lanthanide-doped nanoparticles can be excited with near-infrared light, they can reduce autofluorescence of biological samples and thus improve the contrast of the image.
Lanthanide-doped nanoparticles are nanocrystals of a transparent material (more often the fluorides NaYF4, NaGdF4, LiYF4, YF3, CaF2 or oxides such as Gd2O3) doped with lanthanide ions. The most common lanthanide ions used in photon upconversion are the pairs erbium-ytterbium (Er3+,Yb3+) or thulium-ytterbium (Tm3+, Yb3+). In such combinations ytterbium ions are added as antennas, to absorb light at around 980 nm and transfer it to the upconverter ion. If this ion is erbium, then a characteristic green and red emission is observed, while when the upconverter ion is thulium, the emission includes near-ultraviolet, blue and red light.
Despite the promising aspects of these nanomaterials, one urgent task that confronts materials chemists lies in the synthesis of nanoparticles with tunable emission, which is essential for applications in multiplexed imaging and sensing. The development of a reproducible, high-yield synthetic route that allows controlled growth of rare earth halide nanoparticles has enabled the development and commercialization of upconversion nanoparticles in many different bioapplications. The recent progress in this direction includes the synthesis of structured nanocrystals, such as particles with a core/shell structure, allowing upconversion through interfacial energy transfer (IET).
Semiconductor nanoparticles
Semiconductor nanoparticles or quantum dots have often been demonstrated to emit light of shorter wavelength than the excitation following a two-photon absorption mechanism, not photon upconversion. However, recently the use of semiconductor nanoparticles, such as CdSe, PbS and PbSe, as sensitizers combined with molecular emitters has been demonstrated as a new strategy for photon upconversion through triplet-triplet annihilation. They have been used to upconvert 980 nm infrared light to 600 nm visible light; green light to blue light; and blue light to ultraviolet. This technique benefits from a very high upconverting capability. In particular, these materials can be used to capture the infrared region of sunlight, converting it to electricity and enhancing the efficiency of photovoltaic solar cells.
Upconversion nanocapsules for differential cancer bioimaging in vivo
Early diagnosis of tumor malignancy is crucial for timely cancer treatment aimed at imparting desired clinical outcomes. The traditional fluorescence-based imaging is unfortunately faced with challenges such as low tissue penetration and background autofluorescence. Upconversion (UC)-based bioimaging can overcome these limitations as their excitation occurs at lower frequencies and the emission at higher frequencies. Kwon et al. developed multifunctional silica-based nanocapsules, synthesized to encapsulate two distinct triplet-triplet annihilation UC chromophore pairs. Each nanocapsule emits different colors, blue or green, following a red light excitation. These nanocapsules were further conjugated with either antibodies or peptides to selectively target breast or colon cancer cells, respectively. Both in vitro and in vivo experimental results demonstrated cancer-specific and differential-color imaging from single wavelength excitation as well as far greater accumulation at targeted tumor sites than that due to the enhanced permeability and retention effect. This approach can be used to host a variety of chromophore pairs for various tumor-specific, color-coding scenarios and can be employed for diagnosis of a wide range of cancer types within the heterogeneous tumor microenvironment.
See also
Spontaneous parametric down-conversion
References
Light
Quantum optics | Photon upconversion | [
"Physics"
] | 2,060 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Quantum optics",
"Electromagnetic spectrum",
"Quantum mechanics",
"Waves",
"Light"
] |
31,241,150 | https://en.wikipedia.org/wiki/Pulser%20pump | A pulser pump is a gas lift device that uses gravity to pump water to a higher elevation. It has no moving parts.
Operation
A pulser pump makes use of water that flows through pipes and an air chamber from an upper reservoir to a lower reservoir. The intake is a trompe, which uses water flow to pump air to a separation chamber; air trapped in the chamber then drives an airlift pump. The top of the pipe that connects the upper reservoir to the air chamber is positioned just below the water surface. As the water drops down the pipe, air is sucked down with it. The air forms a "bubble" near the roof of the air chamber. A narrow riser pipe extends from the air chamber up to the higher elevation to which the water will be pumped.
Initially the water level will be near the roof of the air chamber. As air accumulates, pressure builds, which will push water up into the riser pipe. At some point the "air bubble" will extend below the bottom of the riser pipe, which will allow some of the air to escape through the riser, pushing the water that is already in the pipe up with it. As the air escapes, the water level in the air chamber will rise again. The alternating pressure build up and escape causes a pulsing effect, hence the name: pulser pump.
The maximum air pressure that can accumulate depends on the height of the water column between the air chamber and the lower reservoir. The deeper the air chamber is positioned, the higher the elevation to which the water can be pumped. The depth of the air chamber position is limited by the depth to which the flowing water can pull the air from the surface of the upper reservoir down to the chamber. This depth partially depends on the speed of the water, which in turn depends on the difference in height between the upper and lower reservoir.
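A small illustrative calculation of the relationship just described (a sketch only; the 4 m head is an assumed value, and a real pump loses part of this head to friction and incomplete gas separation):

```python
RHO_WATER = 1000.0   # density of water [kg/m^3]
G = 9.81             # gravitational acceleration [m/s^2]

head = 4.0           # assumed height of water column between air chamber and lower reservoir [m]
p_max = RHO_WATER * G * head   # upper bound on the gauge pressure in the air chamber [Pa]
print(f"maximum chamber pressure ~ {p_max / 1000:.1f} kPa")
# This pressure, in turn, limits how far up the riser the trapped air can push water.
```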
History
Brian White, stonemason by profession, claims to have invented the pulser pump in 1987. He put the idea in the public domain.
However, Charles H. Taylor invented the hydraulic air compressor before 1910 while living in Montreal. The working principle of the hydraulic air compressor and the pulser pump is exactly the same, but the purpose of the compressor is to generate compressed air; expelling the water up to 30 meters high serves only to prevent potentially damaging over-pressure. The primary purpose of the pulser pump, by contrast, is to use the air pressure to expel the water to a higher elevation.
See also
Airlift pump
Hydraulic ram
References
External links
The Pulser Pump
How the pulser pump works
Appropedia: Pulser pump
World's simplest water pump (video)
Pumps | Pulser pump | [
"Physics",
"Chemistry"
] | 535 | [
"Physical systems",
"Hydraulics",
"Turbomachinery",
"Pumps"
] |
31,243,023 | https://en.wikipedia.org/wiki/Flapless%20Air%20Vehicle%20Integrated%20Industrial%20Research | Flapless Air Vehicle Integrated Industrial Research is a research project at Cranfield University with collaboration from nine other universities and BAE Systems. Funding totaling $9.85 million USD comes from BAE Systems and Engineering and Physical Sciences Research Council.
The project's goal is to create aircraft without ailerons, as ailerons, in addition to being heavy and requiring extensive maintenance, make it difficult for stealth aircraft to hide from radar. The project has created an unmanned aircraft named Demon which, although it still has ailerons, also uses fluidic controls to change direction in flight. The fluidic controls, in contrast to ailerons, do not move metal parts, but instead use pressurized air to change the direction of airflow over the wing surface. The plane does not need ailerons at all, and has flown successfully without them, but they were included as a backup in case the fluidic controls failed. The project aims to eventually implement the control system on a larger aircraft.
References
Aircraft configurations
BAE Systems research and development
Cranfield University
Engineering and Physical Sciences Research Council
Science and technology in Bedfordshire | Flapless Air Vehicle Integrated Industrial Research | [
"Engineering"
] | 224 | [
"Aircraft configurations",
"Aerospace engineering"
] |
31,244,492 | https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Tur%C3%A1n%20conjecture%20on%20additive%20bases | The Erdős–Turán conjecture is an old unsolved problem in additive number theory (not to be confused with Erdős conjecture on arithmetic progressions) posed by Paul Erdős and Pál Turán in 1941.
It concerns additive bases, subsets of natural numbers with the property that every natural number can be represented as the sum of a bounded number of elements from the basis. Roughly, it states that the number of representations of this type cannot also be bounded.
Background and formulation
The question concerns subsets of the natural numbers $\mathbb{N}$, called additive bases. A subset $B \subseteq \mathbb{N}$ is called an (asymptotic) additive basis of finite order if there is some positive integer $k$ such that every sufficiently large natural number can be written as the sum of at most $k$ elements of $B$. For example, the natural numbers are themselves an additive basis of order 1, since every natural number is trivially a sum of at most one natural number. Lagrange's four-square theorem says that the set of positive square numbers is an additive basis of order 4. Another highly non-trivial and celebrated result along these lines is Vinogradov's theorem.
One is naturally inclined to ask whether these results are optimal. It turns out that Lagrange's four-square theorem cannot be improved, as there are infinitely many positive integers which are not the sum of three squares. This is because no positive integer which is the sum of three squares can leave a remainder of 7 when divided by 8. However, one should perhaps expect that a set which is about as sparse as the squares (meaning that in a given interval , roughly of the integers in lie in ) which does not have this obvious deficit should have the property that every sufficiently large positive integer is the sum of three elements from . This follows from the following probabilistic model: suppose that is a positive integer, and are 'randomly' selected from . Then the probability of a given element from being chosen is roughly . One can then estimate the expected value, which in this case will be quite large. Thus, we 'expect' that there are many representations of as a sum of three elements from , unless there is some arithmetic obstruction (which means that is somehow quite different than a 'typical' set of the same density), like with the squares. Therefore, one should expect that the squares are quite inefficient at representing positive integers as the sum of four elements, since there should already be lots of representations as sums of three elements for those positive integers that passed the arithmetic obstruction. Examining Vinogradov's theorem quickly reveals that the primes are also very inefficient at representing positive integers as the sum of four primes, for instance.
This begets the question: suppose that , unlike the squares or the prime numbers, is very efficient at representing positive integers as a sum of elements of . How efficient can it be? The best possibility is that we can find a positive integer and a set such that every positive integer is the sum of at most elements of in exactly one way. Failing that, perhaps we can find a such that every positive integer is the sum of at most elements of in at least one way and at most ways, where is a function of .
This is basically the question that Paul Erdős and Pál Turán asked in 1941. Indeed, they conjectured a negative answer to this question, namely that if $B$ is an additive basis of order $k$ of the natural numbers, then it cannot represent positive integers as a sum of at most $k$ elements too efficiently; the number of representations of $n$, as a function of $n$, must tend to infinity.
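To make the object of the conjecture concrete, here is a small Python sketch (ours, purely illustrative; the basis and the cutoff are arbitrary) that counts ordered representations of each integer as a sum of two elements of a given set, which is the quantity the conjecture asserts cannot stay bounded for an additive basis of order 2.
from itertools import product

def representation_counts(basis, limit, order=2):
    # r_{B,order}(n) for 0 <= n <= limit: number of ordered order-tuples of
    # elements of `basis` (restricted to [0, limit]) that sum to n.
    b = [x for x in basis if 0 <= x <= limit]
    counts = [0] * (limit + 1)
    for tup in product(b, repeat=order):
        s = sum(tup)
        if s <= limit:
            counts[s] += 1
    return counts

# The squares are NOT an additive basis of order 2: some integers (3, 6, 7, ...)
# have no representation at all as a sum of two squares.
squares = [k * k for k in range(11)]
r = representation_counts(squares, 100)
print([n for n, c in enumerate(r) if c == 0][:10])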
History
The conjecture was made jointly by Paul Erdős and Pál Turán in 1941. In the original paper, they write
"(2) If for , then ",
where $\limsup$ denotes the limit superior. Here $f(n)$ is the number of ways one can write the natural number $n$ as the sum of two (not necessarily distinct) elements of $B$. If $f(n)$ is always positive for sufficiently large $n$, then $B$ is called an additive basis (of order 2). This problem has attracted significant attention but remains unsolved.
In 1964, Erdős published a multiplicative version of this conjecture.
Progress
While the conjecture remains unsolved, there have been some advances on the problem. First, we express the problem in modern language. For a given subset $B \subseteq \mathbb{N}$, we define its representation function $r_B(n) = \#\{(a_1, a_2) \in B^2 : a_1 + a_2 = n\}$. Then the conjecture states that if $r_B(n) > 0$ for all $n$ sufficiently large, then $\limsup_{n \to \infty} r_B(n) = \infty$.
More generally, for any $h \geq 2$ and subset $B \subseteq \mathbb{N}$, we can define the representation function as $r_{B,h}(n) = \#\{(a_1, \ldots, a_h) \in B^h : a_1 + \cdots + a_h = n\}$. We say that $B$ is an additive basis of order $h$ if $r_{B,h}(n) > 0$ for all $n$ sufficiently large. One can see from an elementary argument that if $B$ is an additive basis of order $h$, then every sufficiently large $n$ has at least one representation, so counting $h$-tuples gives $n \ll |B \cap [1, n]|^{h}$.
So we obtain the lower bound $|B \cap [1, n]| \gg n^{1/h}$.
The original conjecture arose as Erdős and Turán sought a partial answer to Sidon's problem (see: Sidon sequence). Later, Erdős set out to answer the following question posed by Sidon: how close to the lower bound can an additive basis of order $h$ get? This question was answered in the case $h = 2$ by Erdős in 1956. Erdős proved that there exists an additive basis $B$ of order 2 and constants $c_1, c_2 > 0$ such that $c_1 \log n \leq r_{B,2}(n) \leq c_2 \log n$ for all $n$ sufficiently large. In particular, this implies that there exists an additive basis whose representation function is of order $\log n$, which is essentially best possible. This motivated Erdős to make the following conjecture:
If is an additive basis of order , then
In 1986, Eduard Wirsing proved that a large class of additive bases, including the prime numbers, contains a subset that is an additive basis but significantly thinner than the original. In 1990, Erdős and Prasad V. Tetali extended Erdős's 1956 result to bases of arbitrary order. In 2000, V. Vu proved that thin subbases exist in the Waring bases using the Hardy–Littlewood circle method and his polynomial concentration results. In 2006, Borwein, Choi, and Chu proved that for all additive bases , eventually exceeds 7.
References
Additive number theory
Conjectures
Unsolved problems in number theory | Erdős–Turán conjecture on additive bases | [
"Mathematics"
] | 1,214 | [
"Unsolved problems in mathematics",
"Unsolved problems in number theory",
"Conjectures",
"Mathematical problems",
"Number theory"
] |
31,244,986 | https://en.wikipedia.org/wiki/Plancherel%20measure | In mathematics, Plancherel measure is a measure defined on the set of irreducible unitary representations of a locally compact group , that describes how the regular representation breaks up into irreducible unitary representations. In some cases the term Plancherel measure is applied specifically in the context of the group being the finite symmetric group – see below. It is named after the Swiss mathematician Michel Plancherel for his work in representation theory.
Definition for finite groups
Let $G$ be a finite group; we denote the set of its irreducible representations by $\widehat{G}$. The corresponding Plancherel measure over the set $\widehat{G}$ is defined by $\mu(\pi) = \frac{(\dim \pi)^2}{|G|},$
where $\pi \in \widehat{G}$, and $\dim \pi$ denotes the dimension of the irreducible representation $\pi$.
Definition on the symmetric group
An important special case is the case of the finite symmetric group $S_n$, where $n$ is a positive integer. For this group, the set of irreducible representations is in natural bijection with the set of integer partitions of $n$. For an irreducible representation associated with an integer partition $\lambda$, its dimension is known to be equal to $f^\lambda$, the number of standard Young tableaux of shape $\lambda$, so in this case Plancherel measure is often thought of as a measure on the set of integer partitions of given order n, given by $\mu(\lambda) = \frac{(f^\lambda)^2}{n!}.$
The fact that those probabilities sum up to 1 follows from the combinatorial identity $\sum_{\lambda \vdash n} (f^\lambda)^2 = n!,$
which corresponds to the bijective nature of the Robinson–Schensted correspondence.
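As a numerical illustration (our own sketch using the standard hook length formula, not part of the article), the following Python code computes f^λ for every partition of n and verifies that the Plancherel probabilities (f^λ)²/n! sum to 1:
from math import factorial

def partitions(n, max_part=None):
    """Yield the integer partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def num_standard_tableaux(lam):
    """f^lambda via the hook length formula: n! divided by the product of hook lengths."""
    n = sum(lam)
    conj = [sum(1 for part in lam if part > i) for i in range(lam[0])] if lam else []
    hooks = 1
    for i, row in enumerate(lam):
        for j in range(row):
            hooks *= (row - j) + (conj[j] - i) - 1  # arm + leg + 1
    return factorial(n) // hooks

n = 6
probs = [(lam, num_standard_tableaux(lam) ** 2 / factorial(n)) for lam in partitions(n)]
print(sum(p for _, p in probs))          # 1.0: the Plancherel measure is a probability measure
print(max(probs, key=lambda t: t[1]))    # most likely shape under the Plancherel measure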
Application
Plancherel measure appears naturally in combinatorial and probabilistic problems, especially in the study of the longest increasing subsequence of a random permutation. As a result of its importance in that area, in many current research papers the term Plancherel measure almost exclusively refers to the case of the symmetric group $S_n$.
Connection to longest increasing subsequence
Let $L_n(\sigma)$ denote the length of a longest increasing subsequence of a random permutation $\sigma$ in $S_n$ chosen according to the uniform distribution. Let $\lambda(\sigma)$ denote the shape of the corresponding Young tableaux related to $\sigma$ by the Robinson–Schensted correspondence. Then the following identity holds: $L_n(\sigma) = \lambda_1(\sigma),$
where $\lambda_1(\sigma)$ denotes the length of the first row of $\lambda(\sigma)$. Furthermore, from the fact that the Robinson–Schensted correspondence is bijective it follows that the distribution of $\lambda(\sigma)$ is exactly the Plancherel measure on $S_n$. So, to understand the behavior of $L_n(\sigma)$, it is natural to look at $\lambda_1(\lambda)$ with $\lambda$ chosen according to the Plancherel measure in $S_n$, since these two random variables have the same probability distribution.
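This connection is easy to experiment with numerically. The sketch below (illustrative, not from the article) samples uniform random permutations and computes the longest-increasing-subsequence length by patience sorting; by the identity above this equals the first-row length of the Plancherel-random shape.
import random
from bisect import bisect_left

def lis_length(perm):
    """Length of a longest increasing subsequence (patience sorting, O(n log n)).
    By Robinson-Schensted this equals the first-row length of the RSK shape."""
    piles = []
    for x in perm:
        i = bisect_left(piles, x)
        if i == len(piles):
            piles.append(x)
        else:
            piles[i] = x
    return len(piles)

n, trials = 1000, 200
samples = []
for _ in range(trials):
    perm = list(range(n))
    random.shuffle(perm)
    samples.append(lis_length(perm))

# Vershik-Kerov / Logan-Shepp: L_n / sqrt(n) -> 2, so the average lies near
# (somewhat below) 2*sqrt(1000) ~ 63.2.
print(sum(samples) / trials)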
Poissonized Plancherel measure
Plancherel measure is defined on $S_n$ for each integer $n$. In various studies of the asymptotic behavior of $L_n$ as $n \to \infty$, it has proved useful to extend the measure to a measure, called the Poissonized Plancherel measure, on the set of all integer partitions. For any $\theta > 0$, the Poissonized Plancherel measure with parameter $\theta$ on the set of all partitions is defined by $P_\theta(\lambda) = e^{-\theta} \frac{\theta^{|\lambda|} (f^\lambda)^2}{(|\lambda|!)^2},$
for all partitions $\lambda$.
Plancherel growth process
The Plancherel growth process is a random sequence of Young diagrams $\lambda^{(1)} \subset \lambda^{(2)} \subset \cdots$ such that each $\lambda^{(n)}$ is a random Young diagram of order $n$ whose probability distribution is the nth Plancherel measure, and each successive $\lambda^{(n)}$ is obtained from its predecessor by the addition of a single box, according to the transition probability
$p(\mu, \lambda) = \mathbb{P}\big(\lambda^{(n)} = \lambda \mid \lambda^{(n-1)} = \mu\big) = \frac{f^\lambda}{n \, f^\mu}$ for any given Young diagrams $\mu$ and $\lambda$ of sizes n − 1 and n, respectively.
So, the Plancherel growth process can be viewed as a natural coupling of the different Plancherel measures of all the symmetric groups, or alternatively as a random walk on Young's lattice. It is not difficult to show that the probability distribution of $\lambda^{(n)}$ in this walk coincides with the Plancherel measure on $S_n$.
Compact groups
The Plancherel measure for compact groups is similar to that for finite groups, except that the measure need not be finite. The unitary dual is a discrete set of finite-dimensional representations, and the Plancherel measure of an irreducible finite-dimensional representation is proportional to its dimension.
Abelian groups
The unitary dual of a locally compact abelian group is another locally compact abelian group, and the Plancherel measure is proportional to the Haar measure of the dual group.
Semisimple Lie groups
The Plancherel measure for semisimple Lie groups was found by Harish-Chandra. The support is the set of tempered representations, and in particular not all unitary representations need occur in the support.
References
Representation theory | Plancherel measure | [
"Mathematics"
] | 861 | [
"Representation theory",
"Fields of abstract algebra"
] |
31,250,262 | https://en.wikipedia.org/wiki/Frictional%20contact%20mechanics | Contact mechanics is the study of the deformation of solids that touch each other at one or more points. This can be divided into compressive and adhesive forces in the direction perpendicular to the interface, and frictional forces in the tangential direction. Frictional contact mechanics is the study of the deformation of bodies in the presence of frictional effects, whereas frictionless contact mechanics assumes the absence of such effects.
Frictional contact mechanics is concerned with a large range of different scales.
At the macroscopic scale, it is applied for the investigation of the motion of contacting bodies (see Contact dynamics). For instance the bouncing of a rubber ball on a surface depends on the frictional interaction at the contact interface. Here the total force versus indentation and lateral displacement are of main concern.
At the intermediate scale, one is interested in the local stresses, strains and deformations of the contacting bodies in and near the contact area. For instance to derive or validate contact models at the macroscopic scale, or to investigate wear and damage of the contacting bodies' surfaces. Application areas of this scale are tire-pavement interaction, railway wheel-rail interaction, roller bearing analysis, etc.
Finally, at the microscopic and nano-scales, contact mechanics is used to increase our understanding of tribological systems (e.g., investigate the origin of friction) and for the engineering of advanced devices like atomic force microscopes and MEMS devices.
This page is mainly concerned with the second scale: getting basic insight in the stresses and deformations in and near the contact patch, without paying too much attention to the detailed mechanisms by which they come about.
History
Several famous scientists, engineers and mathematicians contributed to our understanding of friction.
They include Leonardo da Vinci, Guillaume Amontons, John Theophilus Desaguliers, Leonhard Euler, and Charles-Augustin de Coulomb. Later, Nikolai Pavlovich Petrov, Osborne Reynolds and Richard Stribeck supplemented this understanding with theories of lubrication.
Deformation of solid materials was investigated in the 17th and 18th centuries by Robert Hooke, Joseph Louis Lagrange, and in the 19th and 20th centuries by d’Alembert and Timoshenko. With respect to contact mechanics the classical contribution by Heinrich Hertz stands out. Further the fundamental solutions by Boussinesq and Cerruti are of primary importance for the investigation of frictional contact problems in the (linearly) elastic regime.
Classical results for a true frictional contact problem concern the papers by F.W. Carter (1926) and H. Fromm (1927). They independently presented the creep versus creep force relation for a cylinder on a plane or for two cylinders in steady rolling contact using Coulomb’s dry friction law (see below). These are applied to railway locomotive traction, and for understanding the hunting oscillation of railway vehicles. With respect to sliding, the classical solutions are due to C. Cattaneo (1938) and R.D. Mindlin (1949), who considered the tangential shifting of a sphere on a plane (see below).
In the 1950s, interest in the rolling contact of railway wheels grew. In 1958, Kenneth L. Johnson presented an approximate approach for the 3D frictional problem with Hertzian geometry, with either lateral or spin creepage. Among others he found that spin creepage, which is symmetric about the center of the contact patch, leads to a net lateral force in rolling conditions. This is due to the fore-aft differences in the distribution of tractions in the contact patch.
In 1967, Joost Jacques Kalker published his milestone PhD thesis on the linear theory for rolling contact. This theory is exact for the situation of an infinite friction coefficient in which case the slip area vanishes, and is approximative for non-vanishing creepages. It does assume Coulomb's friction law, which more or less requires (scrupulously) clean surfaces. This theory is for massive bodies such as the railway wheel-rail contact. With respect to road-tire interaction, an important contribution concerns the so-called magic tire formula by Hans Pacejka.
In the 1970s, many numerical models were devised. Particularly variational approaches, such as those relying on Duvaut and Lion’s existence and uniqueness theories. Over time, these grew into finite element approaches for contact problems with general material models and geometries, and into half-space based approaches for so-called smooth-edged contact problems for linearly elastic materials. Models of the first category were presented by Laursen and by Wriggers. An example of the latter category is Kalker’s CONTACT model.
A drawback of the well-founded variational approaches is their large computation times. Therefore, many different approximate approaches were devised as well. Several well-known approximate theories for the rolling contact problem are Kalker’s FASTSIM approach, the Shen-Hedrick-Elkins formula, and Polach’s approach.
More information on the history of the wheel/rail contact problem is provided in Knothe's paper. Further Johnson collected in his book a tremendous amount of information on contact mechanics and related subjects. With respect to rolling contact mechanics an overview of various theories is presented by Kalker as well. Finally the proceedings of a CISM course are of interest, which provide an introduction to more advanced aspects of rolling contact theory.
Problem formulation
Central in the analysis of frictional contact problems is the understanding that the stresses at the surface of each body are spatially varying. Consequently, the strains and deformations of the bodies are varying with position too. And the motion of particles of the contacting bodies can be different at different locations: in part of the contact patch particles of the opposing bodies may adhere (stick) to each other, whereas in other parts of the contact patch relative movement occurs. This local relative sliding is called micro-slip.
This subdivision of the contact area into stick (adhesion) and slip areas manifests itself, among other things, in fretting wear. Note that wear occurs only where power is dissipated, which requires stress and local relative displacement (slip) between the two surfaces.
The size and shape of the contact patch itself and of its adhesion and slip areas are generally unknown in advance. If these were known, then the elastic fields in the two bodies could be solved independently from each other and the problem would not be a contact problem anymore.
Three different components can be distinguished in a contact problem.
First of all, there is the deformation of the separate bodies in reaction to loads applied on their surfaces. This is the subject of general continuum mechanics. It depends largely on the geometry of the bodies and on their (constitutive) material behavior (e.g. elastic vs. plastic response, homogeneous vs. layered structure etc.).
Secondly, there is the overall motion of the bodies relative to each other. For instance the bodies can be at rest (statics) or approaching each other quickly (impact), and can be shifted (sliding) or rotated (rolling) over each other. These overall motions are generally studied in classical mechanics, see for instance multibody dynamics.
Finally there are the processes at the contact interface: compression and adhesion in the direction perpendicular to the interface, and friction and micro-slip in the tangential directions.
The last aspect is the primary concern of contact mechanics. It is described in terms of so-called contact conditions.
For the direction perpendicular to the interface, the normal contact problem, adhesion effects are usually small (at larger spatial scales) and the following conditions are typically employed:
The gap between the two surfaces must be zero (contact) or strictly positive (separation, );
The normal stress acting on each body is zero (separation) or compressive ( in contact).
Mathematically: . Here are functions that vary with the position along the bodies' surfaces.
In the tangential directions the following conditions are often used:
The local (tangential) shear stress (assuming the normal direction parallel to the -axis) cannot exceed a certain position-dependent maximum, the so-called traction bound ;
Where the magnitude of tangential traction falls below the traction bound , the opposing surfaces adhere together and micro-slip vanishes, ;
Micro-slip occurs where the tangential tractions are at the traction bound; the direction of the tangential traction is then opposite to the direction of micro-slip .
The precise form of the traction bound is the so-called local friction law. For this Coulomb's (global) friction law is often applied locally: , with the friction coefficient. More detailed formulae are also possible, for instance with depending on temperature , local sliding velocity , etc.
Solutions for static cases
Rope on a bollard, the capstan equation
Consider a rope on which equal forces are exerted on both sides. By this the rope is stretched a bit and an internal tension is induced at every position along the rope. The rope is wrapped around a fixed item such as a bollard; it is bent and makes contact with the item's surface over a certain contact angle. Normal pressure arises between the rope and the bollard, but no friction occurs yet. Next, the force on one side of the bollard is increased to a higher value. This causes frictional shear stresses in the contact area. In the final situation the bollard exerts a friction force on the rope such that a static situation occurs.
The tension distribution in the rope in this final situation is described by the capstan equation, with solution:
The tension increases from the lower value on the slack side to the higher value on the high side. When viewed from the high side, the tension drops exponentially until it reaches the lower load at some transition angle; from there on it is constant at this value. The transition point is determined by the ratio of the two loads and the friction coefficient. Here the tensions are in newtons and the angles in radians.
The tension in the rope in the final situation is increased with respect to the initial state. Therefore, the rope is elongated a bit. This means that not all surface particles of the rope can have held their initial position on the bollard surface. During the loading process, the rope slipped a little bit along the bollard surface in the slip area . This slip is precisely large enough to get to the elongation that occurs in the final state. Note that there is no slipping going on in the final state; the term slip area refers to the slippage that occurred during the loading process. Note further that the location of the slip area depends on the initial state and the loading process. If the initial tension is and the tension is reduced to at the slack side, then the slip area occurs at the slack side of the contact area. For initial tensions between and , there can be slip areas on both sides with a stick area in between.
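The capstan relation is easy to evaluate numerically. The following Python sketch (with illustrative parameter values, not the ones used in the text) reproduces the behaviour described above: an exponential drop in tension from the high side and a constant-tension region on the slack side. It also prints the largest load that this wrap angle and friction coefficient could hold statically.
import math

def rope_tension(phi, t_low, t_high, mu, wrap_angle):
    # Tension at contact angle phi (0 = slack side, wrap_angle = high side) in the
    # final static state: exponential drop from the high side, clamped at t_low.
    return max(t_low, t_high * math.exp(-mu * (wrap_angle - phi)))

# Illustrative values: 100 N on the slack side, 200 N on the high side,
# friction coefficient 0.3, half a turn (pi radians) of wrap.
t_low, t_high, mu, wrap = 100.0, 200.0, 0.3, math.pi
for k in range(5):
    phi = wrap * k / 4
    print(round(phi, 2), "rad ->", round(rope_tension(phi, t_low, t_high, mu, wrap), 1), "N")

# Largest high-side load that could still be held statically with this wrap:
print(round(t_low * math.exp(mu * wrap), 1), "N")   # ~256.6 N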
Generalization for a rope lying on an arbitrary orthotropic surface
If a rope is lying in equilibrium under tangential forces on a rough orthotropic surface, then the following three conditions are all satisfied:
This generalization was obtained by A. Konyukhov.
Sphere on a plane, the (3D) Cattaneo problem
Consider a sphere that is pressed onto a plane (half space) and then shifted over the plane's surface. If the sphere and plane are idealised as rigid bodies, then contact would occur in just a single point, and the sphere would not move until the tangential force that is applied reaches the maximum friction force. Then it starts sliding over the surface until the applied force is reduced again.
In reality, with elastic effects taken into consideration, the situation is much different. If an elastic sphere is pressed onto an elastic plane of the same material then both bodies deform, a circular contact area comes into being, and a (Hertzian) normal pressure distribution arises. The center of the sphere is moved down by a distance called the approach, which is equivalent to the maximum penetration of the undeformed surfaces. For a sphere of radius and elastic constants this Hertzian solution reads:
Now consider that a tangential force is applied that is lower than the Coulomb friction bound . The center of the sphere will then be moved sideways by a small distance that is called the shift. A static equilibrium is obtained in which elastic deformations occur as well as frictional shear stresses in the contact interface. In this case, if the tangential force is reduced then the elastic deformations and shear stresses reduce as well. The sphere largely shifts back to its original position, except for frictional losses that arise due to local slip in the contact patch.
This contact problem was solved approximately by Cattaneo using an analytical approach. The stress distribution in the equilibrium state consists of two parts:
In the central, sticking region , the surface particles of the plane displace over to the right whereas the surface particles of the sphere displace over to the left. Even though the sphere as a whole moves over relative to the plane, these surface particles did not move relative to each other. In the outer annulus , the surface particles did move relative to each other. Their local shift is obtained as
This shift is precisely as large such that a static equilibrium is obtained with shear stresses at the traction bound in this so-called slip area.
So, during the tangential loading of the sphere, partial sliding occurs. The contact area is thus divided into a slip area where the surfaces move relative to each other and a stick area where they do not. In the equilibrium state no more sliding is going on.
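The quantities involved can be evaluated with the classical closed-form expressions. The Python sketch below uses the textbook Hertz and Cattaneo–Mindlin formulas with illustrative input values (none of which come from the text): it returns the contact radius, the approach, and the radius of the central stick zone for a sphere pressed onto a flat of the same material and then loaded tangentially below the friction bound.
def effective_modulus(E1, nu1, E2, nu2):
    """Contact modulus E* from the two bodies' Young's moduli and Poisson ratios."""
    return 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)

def hertz_sphere_on_flat(F, R, E_star):
    """Hertz solution: contact radius a and approach delta for normal load F."""
    a = (3 * F * R / (4 * E_star)) ** (1 / 3)
    delta = a**2 / R
    return a, delta

def cattaneo_stick_radius(a, Q, F, mu):
    """Cattaneo-Mindlin: radius c of the central stick zone under tangential force Q < mu*F;
    the annulus c < r < a slips."""
    return a * (1 - Q / (mu * F)) ** (1 / 3)

# Illustrative values: steel sphere (R = 10 mm) on a steel flat, 100 N normal load.
E_star = effective_modulus(210e9, 0.3, 210e9, 0.3)
a, delta = hertz_sphere_on_flat(100.0, 0.01, E_star)
c = cattaneo_stick_radius(a, Q=20.0, F=100.0, mu=0.3)
print(a, delta, c)   # contact radius, approach and stick radius, in metres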
Solutions for dynamic sliding problems
The solution of a contact problem consists of the state at the interface (where the contact is, division of the contact area into stick and slip zones, and the normal and shear stress distributions) plus the elastic field in the bodies' interiors. This solution depends on the history of the contact. This can be seen by extension of the Cattaneo problem described above.
In the Cattaneo problem, the sphere is first pressed onto the plane and then shifted tangentially. This yields partial slip as described above.
If the sphere is first shifted tangentially and then pressed onto the plane, then there is no tangential displacement difference between the opposing surfaces and consequently there is no tangential stress in the contact interface.
If the approach in normal direction and tangential shift are increased simultaneously ("oblique compression") then a situation can be achieved with tangential stress but without local slip.
This demonstrates that the state in the contact interface is not only dependent on the relative positions of the two bodies, but also on their motion history. Another example of this occurs if the sphere is shifted back to its original position. Initially there was no tangential stress in the contact interface. After the initial shift micro-slip has occurred. This micro-slip is not entirely undone by shifting back. So in the final situation tangential stresses remain in the interface, in what looks like an identical configuration as the original one.
The influence of friction on dynamic contacts (impacts) has been considered in detail in the literature.
Solution of rolling contact problems
Rolling contact problems are dynamic problems in which the contacting bodies are continuously moving with respect to each other. A difference to dynamic sliding contact problems is that there is more variety in the state of different surface particles. Whereas the contact patch in a sliding problem continuously consists of more or less the same particles, in a rolling contact problem particles enter and leave the contact patch incessantly. Moreover, in a sliding problem the surface particles in the contact patch are all subjected to more or less the same tangential shift everywhere, whereas in a rolling problem the surface particles are stressed in rather different ways. They are free of stress when entering the contact patch, then stick to a particle of the opposing surface, are strained by the overall motion difference between the two bodies, until the local traction bound is exceeded and local slip sets in. This process is in different stages for different parts of the contact area.
If the overall motion of the bodies is constant, then an overall steady state may be attained. Here the state of each surface particle is varying in time, but the overall distribution can be constant. This is formalised by using a coordinate system that is moving along with the contact patch.
Cylinder rolling on a plane, the (2D) Carter-Fromm solution
Consider a cylinder that is rolling over a plane (half-space) under steady conditions, with a time-independent longitudinal creepage . (Relatively) far away from the ends of the cylinders a situation of plane strain occurs and the problem is 2-dimensional.
If the cylinder and plane consist of the same materials then the normal contact problem is unaffected by the shear stress. The contact area is a strip , and the pressure is described by the (2D) Hertz solution.
The distribution of the shear stress is described by the Carter-Fromm solution. It consists of an adhesion area at the leading edge of the contact area and a slip area at the trailing edge. The length of the adhesion area is denoted . Further the adhesion coordinate is introduced by . In case of a positive force (negative creepage ) it is:
The size of the adhesion area depends on the creepage, the wheel radius and the friction coefficient.
For larger creepages such that full sliding occurs.
Half-space based approaches
When considering contact problems at the intermediate spatial scales, the small-scale material inhomogeneities and surface roughness are ignored. The bodies are considered as consisting of smooth surfaces and homogeneous materials. A continuum approach is taken where the stresses, strains and displacements are described by (piecewise) continuous functions.
The half-space approach is an elegant solution strategy for so-called "smooth-edged" or "concentrated" contact problems.
If a massive elastic body is loaded on a small section of its surface, then the elastic stresses attenuate proportionally to $1/r^2$ and the elastic displacements by $1/r$ when one moves away from this surface area.
If a body has no sharp corners in or near the contact region, then its response to a surface load may be approximated well by the response of an elastic half-space (e.g. all points with ).
The elastic half-space problem is solved analytically, see the Boussinesq-Cerruti solution.
Due to the linearity of this approach, multiple partial solutions may be super-imposed.
Using the fundamental solution for the half-space, the full 3D contact problem is reduced to a 2D problem for the bodies' bounding surfaces.
A further simplification occurs if the two bodies are “geometrically and elastically alike”. In general, stress inside a body in one direction induces displacements in perpendicular directions too. Consequently, there is an interaction between the normal stress and tangential displacements in the contact problem, and an interaction between the tangential stress and normal displacements. But if the normal stress in the contact interface induces the same tangential displacements in both contacting bodies, then there is no relative tangential displacement of the two surfaces. In that case, the normal and tangential contact problems are decoupled. If this is the case then the two bodies are called quasi-identical. This happens for instance if the bodies are mirror-symmetric with respect to the contact plane and have the same elastic constants.
Classical solutions based on the half-space approach are:
Hertz solved the contact problem in the absence of friction, for a simple geometry (curved surfaces with constant radii of curvature).
Carter considered the rolling contact between a cylinder and a plane, as described above. A complete analytical solution is provided for the tangential traction.
Cattaneo considered the compression and shifting of two spheres, as described above. Note that this analytical solution is approximate. In reality small tangential tractions occur which are ignored.
See also
References
External links
Biography of Prof.dr.ir. J.J. Kalker (Delft University of Technology).
Kalker's Hertzian/non-Hertzian CONTACT software.
Mechanical engineering
Solid mechanics | Frictional contact mechanics | [
"Physics",
"Engineering"
] | 4,129 | [
"Applied and interdisciplinary physics",
"Solid mechanics",
"Mechanics",
"Mechanical engineering"
] |
31,251,910 | https://en.wikipedia.org/wiki/DART%20radiative%20transfer%20model | DART (Discrete anisotropic radiative transfer) is a 3D radiative transfer model, designed for scientific research, in particular remote sensing. Developed at CESBIO since 1992, DART model was patented in 2003. It is freeware for scientific activities.
General Description
DART model simulates, simultaneously in several wavelengths of the optical domain (e.g., visible and thermal infrared), the radiative budget and remotely sensed images of any Earth scene (natural / urban with /without relief), for any sun direction, any atmosphere, any view direction and any sensor FTM. It was designed to be precise, easy to use and adapted for operational use. For that, it simulates:
Terrestrial landscape.
The atmosphere (optional simulation).
The space or airborne radiometric sensor (optional simulation).
It simulates any landscape as a 3D matrix of cells that contain turbid material and triangles. Turbid material is used for simulating vegetation (e.g., tree crowns, grass, agricultural crops,...) and the atmosphere. Triangles are used for simulating translucent and opaque surfaces that make up topography, urban elements and 3D vegetation. DART can use structural and spectral data bases (atmosphere, vegetation, soil,...). It includes a LIDAR simulation mode.
General Information On Radiative Transfer
The approaches used to simulate radiative transfer differ on two levels: the mathematical method of resolution and the mode of representation of the propagation medium. These two levels are in general dependent. Radiative transfer models are often divided into two categories associated with the two principal modes of representation of the landscape: homogeneous or heterogeneous. For the models known as homogeneous (Idso and de Wit, 1970; Ross, 1981; Verhoef, 1984; Myneni et al., 1989), the landscape is represented by a horizontally uniform distribution of absorbing and scattering elements (leaves, branches, etc.). On the other hand, for the models known as heterogeneous, the landscape is represented by a non-uniform spatial distribution of unspecified landscape elements (North, 1996; Govaerts, 1998).
Simulation of the "Earth – Atmosphere" scene
DART simulates radiative transfer in the "Earth-Atmosphere" system, for any wavelength in the optical domain (shortwaves : visible, thermal infrared,...). Its approach combines the ray tracing and the discrete ordinate methods. It works with natural and urban landscapes (forests with different types of trees, buildings, rivers,...), with topography and atmosphere above and within the landscape. It simulates light propagation from solar irradiance (Top of Atmosphere) and/or thermal emission within the scene.
Context
The study of the functioning of Continental surfaces requires the understanding of the various energetic and physiologic mechanisms that influence these surfaces. For example, the radiation absorbed in the visible spectral domain is the major energy source for vegetation photosynthesis. Moreover, energy and mass fluxes at the "Earth – Atmosphere" interface affect surface functioning, and consequently climatology.
In this context, Earth observation from space (i.e., space remote sensing) is an indispensable tool, due to its unique potential to provide synoptic and continuous surveys of the Earth, at different time and space scales.
The difficulty in studying continental surfaces arises from the complexity of the energetic and physiological processes involved and also from the different time and space scales concerned. It also arises from the complexity of satellite remote sensing measurements and from their links to the quantities that characterize Earth functioning. These remarks underline the need for models, because only models can couple and gather all the concerned processes within a single scheme.
Major references
Modelling radiative transfer in heterogeneous 3-D vegetation canopies, 1996, Gastellu-Etchegorry JP, Demarez V, Pinel V, Zagolski F, Remote sensing of Environment, 58:131–156.
Radiative transfer model for simulating high-resolution satellite images, Gascon F., 2001, Gastellu-Etchegorry J.P. et Lefèvre M.J., IEEE, 39(9), 1922–1926.
The radiation transfer model intercomparison (RAMI) exercise, 2001, Pinty B., Gascon F., Gastellu-Etchegorry et al., Journal of Geophysical Research, Vol. 106, No. D11, June 16, 2001.
Building a Forward-Mode 3-D Reflectance model for topographic normalization of high-resolution (1-5m) imagery: Validation phase in a forested environment, 2012, Couturier, S., Gastellu-Etchegorry J.P., Martin E., Patiño, P., IEEE, Vol. 51, Number 7, 3910–3921.
Retrieval of spruce leaf chlorophyll content from airborne image data using continuum removal and radiative transfer, 2013, Malenovský Z., Homolová L., Zurita-Milla R., Lukeš P., Kapland V., Hanuš J., Gastellu-Etchegorry J.P., Schaepman M., Remote sensing of Environment. 131:85–102.
A new approach of direction discretization and oversampling for 3D anisotropic radiative transfer modeling, 2013, Yin T., Gastellu-Etchegorry J.P., Lauret N., Grau E., Rubio J., Remote Sensing of Environment. 135, pp 213–223
A canopy radiative transfer scheme with explicit FAPAR for the interactive vegetation model ISBA-A-gs: impact on carbon fluxes, 2013, Carrer D., Roujean J.L., Lafont S., Calvet J.C., Boone A., Decharme B., Delire C., Gastellu-Etchegorry J.P., Journal of Geophysical Research – Biogeosciences, Vol. 118: 1–16
Investigating the Utility of Wavelet Transforms for Inverting a 3-D Radiative Transfer Model Using Hyperspectral Data to Retrieve Forest LAI, 2013, Banskota A., Wynne R., Thomas V., Serbin S., Kayastha N., Gastellu-Etchegorry J.P., Townsend P., Remote Sensing, 5: 2639–2659
Directional viewing effects on satellite Land Surface Temperature products over sparse vegetation canopies – A multi-sensor analysis, 2013, Guillevic P.C., Bork-Unkelbach A., Göttsche F.M., Hulley G., Gastellu-Etchegorry J.P., Olesen F.S and Privette J.L., IEEE Geoscience and Remote sensing, 10, 1464–1468.
Radiative transfer modeling in the "Earth – Atmosphere" system with DART model, 2013, Grau E. and Gastellu-Etchegrry, Remote Sensing of Environment, 139, 149–170
The 4th radiation transfer model intercomparison (RAMI-IV): Proficiency testing of canopy reflectance models with ISO-13528, 2013, Widlowski J-L, B Pinty, M Lopatka, C Atzberger, D Buzica, M Chelle, M Disney, J-P Gastellu-Etchegorry, M Gerboles, N Gobron, E Grau, H Huang, A Kallel, H Kobayashi, P E Lewis, W Qin, M Schlerf, J Stuckens, D Xie, Journal of Geophysical Research 01/2013 1–22, doi:10.1002/jgrd.50497
3D Modeling of Imaging Spectrometer Data: data: 3D forest modeling based on LiDAR and in situ data, 2014, Schneider F.D. Leiterer R., Morsdorf F., Gastellu-Etchegorry J.P., Lauret N., Pfeifer N., Schaepman M.E., Remote Sensing of Environment, 152: 235–250.
Discrete anisotropic radiative transfer (DART 5) for modeling airborne and satellite spectroradiometer and LIDAR acquisitions of natural and urban landscapes, 2015, Gastellu-Etchegorry J.P., Yin T., Lauret N., 2015, Remote Sensing, 7, 1667–1701: doi: 10.3390/rs70201667.
A LUT-Based Inversion of DART Model to Estimate Forest LAI from Hyperspectral Data, 2015, Banskota A., Serbin S. P., Wynne R. H., Thomas V.A., Falkowski M.J., Kayastha N., Gastellu-Etchegorry J.P., Townsend P.A., IEEE Geoscience and Remote sensing, JSTARS-2014-00702.R1, in press.
Simulating images of passive sensors with finite field of view by coupling 3-D radiative transfer model and sensor perspective projection, 2015, Yin T., Lauret N. and Gastellu-Etchegorry J.P., Remote Sensing of Environment, accepted.
External links
Official website on CESBIO laboratory
RAdiation transfer Model Intercomparison (RAMI)
Assistance forum of the DART project
Remote sensing
Radiometry
Earth sciences
Atmospheric radiation
Scientific models | DART radiative transfer model | [
"Engineering"
] | 2,025 | [
"Telecommunications engineering",
"Radiometry"
] |
31,252,208 | https://en.wikipedia.org/wiki/PmiRKB | PmiRKB is a database of plant miRNAs.
See also
MiRTarBase
MESAdb
microRNA
References
External links
http://bis.zju.edu.cn/pmirkb/
Biological databases
RNA
MicroRNA | PmiRKB | [
"Biology"
] | 50 | [
"Bioinformatics",
"Biological databases"
] |
31,252,651 | https://en.wikipedia.org/wiki/Tolman%20electronic%20parameter | The Tolman electronic parameter (TEP) is a measure of the electron donating or withdrawing ability of a ligand. It is determined by measuring the frequency of the A1 C-O vibrational mode (ν(CO)) of a (pseudo)-C3v symmetric complex, [LNi(CO)3] by infrared spectroscopy, where L is the ligand of interest. [LNi(CO)3] was chosen as the model compound because such complexes are readily prepared from tetracarbonylnickel(0). The shift in ν(CO) is used to infer the electronic properties of a ligand, which can aid in understanding its behavior in other complexes. The analysis was introduced by Chadwick A. Tolman.
The A1 carbonyl band is rarely obscured by other bands in the analyte's infrared spectrum. Carbonyl is a small ligand so steric factors do not complicate the analysis. Upon coordination of CO to a metal, ν(CO) typically decreases from 2143 cm−1 of free CO. This shift can be explained by π backbonding: the metal forms a π bond with the carbonyl ligand by donating electrons through its d orbitals into the empty π* anti-bonding orbitals on CO. This interaction strengthens the metal-carbon bond but also weakens the carbon-oxygen bond, resulting in a lower vibrational frequency. If other ligands increase the density of π electrons on the metal, the C-O bond is weakened and ν(CO) decreases further; conversely, if other ligands compete with CO for π backbonding, ν(CO) increases.
Other ligand electronic parameters
Several other scales have been proposed for the ranking of the donor properties of ligands. The HEP scale ranks ligands on the basis of the 13C NMR shift of a reference ligand. Lever's electronic parameter ranking is related to the Ru(II/III) couple. Another scale evaluated ligands on the basis of the redox couples of [Cr(CO)5L]0/+.
In a treatment akin to the TEP analysis, the donor properties of N-heterocyclic carbene (NHC) ligands have been ranked according to IR data recorded on cis-[RhCl(NHC)(CO)2] complexes.
See also
Metal carbonyl
Tolman cone angle
References
Further reading
Organometallic chemistry
Inorganic chemistry
Infrared spectroscopy | Tolman electronic parameter | [
"Physics",
"Chemistry"
] | 499 | [
"Spectrum (physical sciences)",
"Infrared spectroscopy",
"nan",
"Organometallic chemistry",
"Spectroscopy"
] |
36,445,755 | https://en.wikipedia.org/wiki/Heptafluoride | Heptafluoride typically refers to compounds with the formula RnMxF7y− or RnMxF7y+, where n, x, and y are independent variables and R any substituent.
Binary heptafluorides
The only binary heptafluorides are iodine heptafluoride (IF7), rhenium heptafluoride (ReF7), and gold heptafluoride (AuF7). Only IF7 and ReF7 are true heptafluorides, however, as AuF7 is actually a coordination complex of gold pentafluoride (AuF5) and molecular fluorine; therefore, the correct chemical formula of gold heptafluoride is actually AuF5·F2.
Heptafluoride anions
A commercially important heptafluoride anion is the heptafluorotantalate anion, TaF72−. It is an intermediate in the purification of tantalum. Many dimeric and oligomeric heptafluorides have been observed or proposed. One example is B2F7−.
In the area of organofluorine chemistry, many heptafluorides are known. A prominent example is heptafluorobutyric acid. This species and its conjugate base heptafluorobutyrate (C3F7CO2−) are precursors to surfactants.
Complex heptafluorides
Many compounds that are not discrete ions or molecules also are heptafluorides.
References
Fluorides | Heptafluoride | [
"Chemistry"
] | 335 | [
"Fluorides",
"Salts"
] |
36,447,398 | https://en.wikipedia.org/wiki/Endothall | Endothall (3,6-endoxohexahydrophthalic acid) is used as an herbicide for terrestrial and aquatic plants. It is used as an aquatic herbicide for submerged aquatic plants and algae in lakes, ponds and irrigation canals. It is used, as a desiccant on potatoes, hops, cotton, clover and alfalfa. It is used as a biocide to control mollusks and algae in cooling towers.
Endothall is a selective contact herbicide that has been used to manage submerged aquatic vegetation for over 50 years. The herbicide damages the cells of susceptible plants at the point of contact but does not affect areas untouched by the herbicide, like roots or tubers (underground storage structures).
The chemical formula for endothall is C8H10O5. Its Chemical Abstracts Service (CAS) name is 7-oxabicyclo[2.2.1]heptane-2,3-dicarboxylic acid. It is an organic acid but is used as the dipotassium salt or the mono-N, N-dimethylalkylamine salt.
It is considered safe in drinking water by the EPA up to a maximum contaminant level of 0.1 mg/L (100 ppb). Some people who drink water contaminated above this level for many years experience stomach or intestine problems.
Endothall is chemically related to cantharidin. Both compounds are protein phosphatase 2A inhibitors.
Toxicity
There is limited data on human toxicity, although cases of lethal deliberate self-poisoning have been reported with as little as a mouthful of endothall concentrate. Features of toxicity include corrosive injury to the oropharynx and gastrointestinal tract, metabolic acidosis, coagulopathy, and multi-organ failure.
See also
Protein phosphatase
References
http://www.apms.org/japm/vol08a/v8p50.pdf
http://dnr.wi.gov/lakes/plants/factsheets/EndothallFactsheet.pdf
Herbicides
Dicarboxylic acids
Ethers
Oxygen heterocycles
Phosphatase inhibitors | Endothall | [
"Chemistry",
"Biology"
] | 465 | [
"Herbicides",
"Functional groups",
"Organic compounds",
"Ethers",
"Biocides"
] |
36,447,476 | https://en.wikipedia.org/wiki/Plasma%20sheet | In the magnetosphere, the plasma sheet is a sheet-like region of denser (0.3-0.5 ions/cm3 versus 0.01-0.02 in the lobes) hot plasma and lower magnetic field located on the magnetotail and near the equatorial plane, between the magnetosphere's north and south lobes.
The origin of the plasma sheet is still a subject of discussion in magnetospheric physics, but it is thought that the region plays an important role in the transport of plasma around the Earth from the magnetotail towards the Sun. The plasma sheet is closely related to the convective motion of plasma in the magnetotail occurring as a result of magnetic field reconnection.
References
Geomagnetism
Planetary science
Space plasmas | Plasma sheet | [
"Physics",
"Materials_science",
"Astronomy"
] | 158 | [
"Space plasmas",
"Materials science stubs",
"Plasma physics",
"Astronomy stubs",
"Astrophysics",
"Astrophysics stubs",
"Plasma physics stubs",
"Planetary science",
"Electromagnetism stubs",
"Astronomical sub-disciplines"
] |
36,448,428 | https://en.wikipedia.org/wiki/Date%20windowing | Date windowing is a method by which dates with two-digit years are converted to and from dates with four-digit years. The year at which the century changes is called the pivot year of the date window. Date windowing was one of several techniques used to resolve the year 2000 problem in legacy computer systems.
Reasoning
For organizations and institutions with data that is only decades old, a "date windowing" solution was considered easier and more economical than the massive conversions and testing required when converting two-digit years into four-digit years.
Windowing methods
There are three primary methods used to determine the date window:
Fixed pivot year: simplest to code, works for most business dates.
Sliding pivot year: determined by subtracting some constant from the current year, typically used for birth dates.
Closest date: Three different interpretations (last century, this century, and next century) are compared to the current date, and the closest date is chosen from the three.
FOCUS
Information Builders' FOCUS "Century Aware" implementation allowed the user to apply field-specific and file-specific settings.
This flexibility gave the best of all three major mechanisms: a school could have the file RecentDonors set a field named BirthDate to use
DEFCENT=19 YRTHRESH=31, covering those born 1931–2030.
Those born in 2031 are not likely to be donating before 2049, by which time those born in 1931 would be 118 years old and unlikely to still be donors.
DEFCENT and YRTHRESH for a file containing present students and recent graduates would use different values.
Examples
Below is a typical example of COBOL code that establishes a fixed date window, used to figure the century for ordinary business dates.
IF RECEIPT-DATE-YEAR >= 60
MOVE 19 TO RECEIPT-DATE-CENTURY
ELSE
MOVE 20 TO RECEIPT-DATE-CENTURY
END-IF.
The above code establishes a fixed date window of 1960 through 2059. It assumes that none of the receipt dates are before 1960, and should work until January 1, 2060.
Some systems have environment variables that set the fixed pivot year for the system. Any year after the pivot year will belong to this century (the 21st century), and any year before or equal to the pivot year will belong to last century (the 20th century).
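For comparison with the fixed-window COBOL example above, the sketch below (in Python; all function names and default values are illustrative) implements the three windowing methods listed earlier: a fixed pivot year, a sliding pivot derived from the current year, and the closest-date interpretation.
import datetime

def fixed_pivot(yy, pivot=60):
    # Fixed window: two-digit years >= pivot are 19xx, earlier years are 20xx.
    return 1900 + yy if yy >= pivot else 2000 + yy

def sliding_pivot(yy, back_years=100, today=None):
    # Sliding window covering [current - back_years, current - back_years + 99],
    # as typically used for birth dates.
    current = (today or datetime.date.today()).year
    window_start = current - back_years
    year = (window_start // 100) * 100 + yy
    if year < window_start:
        year += 100
    return year

def closest_date(yy, today=None):
    # Choose whichever of last/this/next century puts the year closest to today.
    current = (today or datetime.date.today()).year
    century = (current // 100) * 100
    candidates = [century - 100 + yy, century + yy, century + 100 + yy]
    return min(candidates, key=lambda y: abs(y - current))

today = datetime.date(2024, 1, 1)
print(fixed_pivot(59), fixed_pivot(60))    # 2059 1960 (matches the COBOL window above)
print(sliding_pivot(30, today=today))      # 1930 with a 100-year look-back
print(closest_date(85, today=today))       # 1985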
Some products, such as Microsoft Excel 95, used a window of years 1920–2019, which had the potential to encounter a recurrence of the windowing bug only 20 years after the year 2000 problem had been addressed.
The IBM i operating system uses a window of 1940-2039 for date formats with a two-digit year. In the 7.5 release of the operating system, an option was added to use a window of 1970-2069 instead.
See also
Serial number arithmetic, a form of windowing for sequential counters
References
Units of time | Date windowing | [
"Physics",
"Mathematics"
] | 579 | [
"Physical quantities",
"Time",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
37,886,697 | https://en.wikipedia.org/wiki/Incremental%20launch | Incremental launch is a method in civil engineering of building a complete bridge deck from one abutment of the bridge only, manufacturing the superstructure of the bridge by sections to the other side. In current applications, the method is highly mechanised and uses pre-stressed concrete.
History
The first bridge to have been incrementally launched appears to have been the Waldshut–Koblenz Rhine Bridge, a wrought iron lattice truss railway bridge, completed in 1859. The second incrementally launched bridge was the Rhine Bridge, a railway bridge that spanned the Upper Rhine between Kehl, Germany and Strasbourg, France, completed in 1861 and subsequently destroyed and rebuilt on several occasions.
The first incrementally launched concrete bridge was a box girder bridge over the Caroní River, completed in 1964. The second incrementally launched concrete bridge was over the Inn River at Kufstein in Austria, completed in 1965. The structural engineers for both bridges were Professor Dr Fritz Leonhardt and his partner, Willi Baur.
The usual method of building concrete bridges is the segmental method, one span at a time.
Method
The bridges are mostly of the box girder design and work with straight or constant-curve alignments, with a constant radius. Box girder sections of the bridge deck are fabricated at one end of the bridge under factory conditions. Each section is manufactured in around one week.
The first section of the launch, the launching nose, is not made of concrete, but is a stiffened steel plate girder and is around 60% of the length of a bridge span, and reduces the cantilever moment. The sections of bridge deck slide over sliding bearings, which are concrete blocks covered with stainless steel and reinforced elastomeric pads.
Notable examples
Katima Mulilo Bridge, 2004
Redcliffe Bridge, 1986
Woronora River Bridge, 2001, the largest incrementally-launched bridge when built
Millau Viaduct, 2004, an example of launching a curved road deck
References
External links
Bridges by structural type
Civil engineering
Hydraulics | Incremental launch | [
"Physics",
"Chemistry",
"Engineering"
] | 417 | [
"Physical systems",
"Construction",
"Hydraulics",
"Civil engineering",
"Fluid dynamics"
] |
37,887,483 | https://en.wikipedia.org/wiki/Phoslock | Phoslock is the commercial name for a bentonite clay in which the sodium and/or calcium ions are exchanged for lanthanum. The lanthanum contained within Phoslock reacts with phosphate to form an inert mineral known as rhabdophane (LaPO4.\mathit{n}H2O). Phoslock is used in lake restoration projects to remove excess phosphorus from aquatic systems, thereby improving water quality and inducing biological recovery in impaired freshwater systems.
It was developed in Australia by the CSIRO in the late 1990s by Dr Grant Douglas (US Patent 6350383) as a way of utilising the ability of lanthanum to bind phosphate in freshwater natural aquatic systems. The first large-scale trial took place in January 2000 in the Canning River, Western Australia.
During its development, patenting and commercialisation by CSIRO and subsequent commercial production, Phoslock has been the subject of academic research and has been used globally in lake restoration projects. The largest number of whole-lake applications and the most comprehensive pre- and post-application monitoring have taken place in Europe, primarily Germany (where it is sold under the tradename Bentophos), the Netherlands and the UK.
Some studies indicate that lanthanum released by application of this clay could increase concentrations of this rare-earth element in water and soils, leading to bioaccumulation in animal tissues; concerns remain and precautions are advised, as complete and independent information is currently lacking.
See also
Eutrophication
Harmful algal bloom
References
Ecology
Environmental engineering
Water technology | Phoslock | [
"Chemistry",
"Engineering",
"Biology"
] | 325 | [
"Chemical engineering",
"Ecology",
"Civil engineering",
"Environmental engineering",
"Water technology"
] |
29,969,297 | https://en.wikipedia.org/wiki/List%20of%20radioactive%20nuclides%20by%20half-life | This is a list of radioactive nuclides (sometimes also called isotopes), ordered by half-life from shortest to longest, in seconds, minutes, hours, days and years. Current methods make it difficult to measure half-lives between approximately 10−19 and 10−10 seconds.
10−24 seconds (yoctoseconds)
Twenty-three yoctoseconds is the time needed to traverse a 7-femtometre distance at the speed of light—around the diameter of a large atomic nucleus.
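This follows from the light-travel-time relation (a quick check of the arithmetic): t = d / c ≈ (7 × 10^-15 m) / (3.0 × 10^8 m/s) ≈ 2.3 × 10^-23 s, i.e. about 23 yoctoseconds.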
10−21 seconds (zeptoseconds)
10−18 seconds (attoseconds)
10−12 seconds (picoseconds)
10−9 seconds (nanoseconds)
10−6 seconds (microseconds)
10−3 seconds (milliseconds)
10⁰ seconds
10³ seconds (kiloseconds)
10⁶ seconds (megaseconds)
10⁹ seconds (gigaseconds)
10¹² seconds (teraseconds)
10¹⁵ seconds (petaseconds)
10¹⁸ seconds (exaseconds)
10²¹ seconds (zettaseconds)
10²⁴ seconds (yottaseconds)
10²⁷ seconds (ronnaseconds)
10³⁰ seconds (quettaseconds)
The half-life of tellurium-128 is over 160 trillion times greater than the age of the universe, which is about 4.4 × 10¹⁷ seconds.
See also
List of elements by stability of isotopes
List of nuclides
Orders of magnitude (time)
Lists of isotopes, by element
Notes
External links
Radioactive isotope table "lists ALL radioactive nuclei with a half-life greater than 1000 years", incorporated in the list above.
The NUBASE2020 evaluation of nuclear physics properties F.G. Kondev et al. 2021 Chinese Phys. C 45 030001. The PDF of this article lists the half-lives of all known radioactive nuclides.
List
Radioactive
Radioactivity
Tables of nuclides | List of radioactive nuclides by half-life | [
"Physics",
"Chemistry"
] | 407 | [
"Isotopes",
"Tables of nuclides",
"Nuclear physics",
"Lists of chemical elements",
"Radioactivity"
] |
29,970,497 | https://en.wikipedia.org/wiki/Optical%20lift | Optical lift is an optical analogue of aerodynamic lift, in which a cambered refractive object with differently shaped top and bottom surfaces experiences a stable transverse lift force when placed in a uniform stream of light.
Discovery
The ability of light to apply pressure to objects is known as radiation pressure, which was first postulated in 1619 and proven in 1900. This is the principle behind the solar sail, which uses light radiation pressure to move through space. A 2010 study by physicist Grover Swartzlander and colleagues of the Rochester Institute of Technology in Rochester, New York showed that light is also capable of creating the more complex force of "lift", the force generated by airfoils that makes an airplane rise upwards as it travels forward. This study was published in December 2010 in the journal Nature Photonics. Swartzlander predicted, observed and experimentally verified at the micrometer scale that when a beam of laser light is applied to a semi-cylindrical refractive rod, the rod automatically torques to a stable angle of attack and then exhibits uniform motion.
The experiment began as computer models which suggested that when light is incident on a tiny object shaped like a wing, a stable lift force is applied to the particle. The researchers then carried out physical experiments in the laboratory, creating tiny, transparent, micrometer-sized rods that were flat on one side and rounded on the other, rather like airplane wings. They immersed the lightfoils in water and illuminated them with 130 mW of infrared laser light from underneath the chamber. Radiation pressure pushes the particles along the direction of propagation; this is called the scattering force. The notable result, however, was that the particles were also forced to the side, in a direction perpendicular to the propagating light. This transverse force on the particles is the lift force. The researchers discovered not only that the rods experienced stable lift, but that, depending on refractive index, a rod could have up to two stable angles of attack that it rotated to when exposed to the laser light. Symmetrical spheres tested did not exhibit the same lift effect.
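As a rough order-of-magnitude check (not a figure reported in the study), the total radiation force available from a 130 mW beam that is fully absorbed is

\[ F \approx \frac{P}{c} = \frac{0.13\ \text{W}}{3\times 10^{8}\ \text{m/s}} \approx 4\times 10^{-10}\ \text{N}, \]

i.e. a few tenths of a nanonewton. The transverse lift on a micrometer-sized lightfoil is some fraction of this, which is why such forces are only significant for very small objects.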
In optical lift, created by a "lightfoil", the lift is created within the transparent object as light shines through it and is refracted by its inner surfaces. In the lightfoil rods, a greater proportion of the light leaves in a direction perpendicular to the beam, so that side experiences a larger radiation pressure and hence lift.
Potential uses
The 2010 discovery of stable optical lift is considered by some physicists to be "most surprising". Unlike optical tweezers, an intensity gradient is not required to achieve a transverse force; many rods may therefore be lifted simultaneously in a single quasi-uniform beam of light. Swartzlander and his team propose using optical lift to power micromachines, transport microscopic particles in a liquid, or to help with the self-alignment and steering of solar sails, a form of spacecraft propulsion for interstellar space travel. Solar sails are generally designed to harness light to "push" a spacecraft, whereas Swartzlander's team designed their lightfoil to generate lift in a perpendicular direction; this is where the idea of steering a future solar-sail spacecraft may be applied.
Swartzlander said the next step would be to test lightfoils in air and experiment with a variety of materials with different refractive properties, and with incoherent light.
See also
Aerodynamic lift
IKAROS – (Interplanetary Kite-craft Accelerated by Radiation of the Sun)
Laser propulsion
Optical force
Solar sail
References
External links
Video: Optical lifting demonstrated for the first time
Aircraft wing design
Force
Spacecraft propulsion
Spacecraft components
Aerospace engineering | Optical lift | [
"Physics",
"Mathematics",
"Engineering"
] | 728 | [
"Force",
"Physical quantities",
"Quantity",
"Mass",
"Classical mechanics",
"Aerospace engineering",
"Wikipedia categories named after physical quantities",
"Matter"
] |
29,972,322 | https://en.wikipedia.org/wiki/Ultrasonic%20impact%20treatment | Ultrasonic impact treatment (UIT) is a metallurgical processing technique, similar to work hardening, in which ultrasonic energy is applied to a metal object. This technique is part of the High Frequency Mechanical Impact (HFMI) processes. Other acronyms are also equivalent: Ultrasonic Needle Peening (UNP), Ultrasonic Peening (UP). Ultrasonic impact treatment can result in controlled residual compressive stress, grain refinement and grain size reduction. Low and high cycle fatigue are enhanced and have been documented to provide increases up to ten times greater than non-UIT specimens.
Theory
In UIT, ultrasonic waves are produced by an electro-mechanical ultrasonic transducer and applied to a workpiece. An acoustically tuned resonator bar is caused to vibrate by energizing it with a magnetostrictive or piezoelectric ultrasonic transducer. The energy generated from these high-frequency impulses is imparted to the treated surface through the contact of specially designed steel pins. These transfer pins are free to move axially between the resonant body and the treated surface.
When the tool, made up of the ultrasonic transducer, pins and other components, comes into contact with the work piece it acoustically couples with the work piece, creating harmonic resonance. This harmonic resonance is performed at a carefully calibrated frequency, to which metals respond very favorably, resulting in compressive residual stress, stress relief and grain structure improvements.
Depending on the desired effects of treatment, a combination of different frequencies and displacement amplitudes is applied. Depending on the tool and the original equipment manufacturer, these frequencies range between 15 and 55 kHz, with varying displacement amplitudes of the resonant body.
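To give a feel for the motion involved, the peak velocity and acceleration of a sinusoidally vibrating resonant body can be estimated from its frequency $f$ and displacement amplitude $A$; the amplitude used below is an assumed illustrative value, since no figure is stated here.

\[ v_{\text{peak}} = 2\pi f A, \qquad a_{\text{peak}} = (2\pi f)^{2} A. \]

For example, taking $f = 20\ \text{kHz}$ and an assumed $A = 30\ \mu\text{m}$ gives $v_{\text{peak}} \approx 3.8\ \text{m/s}$ and $a_{\text{peak}} \approx 4.7\times 10^{5}\ \text{m/s}^{2}$, several tens of thousands of g, which is why the freely moving pins can deliver such intense impacts.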
Application
UIT is highly controllable. By incorporating a programmable logic controller (PLC) or a digital ultrasonic generator, the frequency and amplitude of UIT are easily set and maintained, thus removing a significant portion of operator dependency. UIT can also be mechanically controlled, providing repeatability of results from one application to the next. Examples of mechanical control employed with UIT include:
CNC milling machines
Lathes
Robotic control
Weld tractors
With these types of controlled applications, the surface finish of the work piece is highly controllable.
For many applications, UIT is most effectively employed by hand. The high portability of the UIT system enables travel to austere locations and hard to reach places. The flexibility that is facilitated by variations in the tool configuration (such as angle-peening-head) ensures that access to very tight locations is possible.
UIT's effectiveness has been illustrated on the following metals, among others:
Aluminium (including sensitized Aluminium)
Bronze
Cobalt alloys
Nickel alloys
Steels
Carbon steel
Stainless steel
High-strength low-alloy steel
Manganese steel
Titanium
History
UIT was originally developed in 1972 and has since been perfected by a team of Russian scientists under the leadership of Dr. Efim Statnikov. Originally developed and utilized to enhance the fatigue and corrosion attributes of ship and submarine structures, UIT has been utilized in aerospace, mining, offshore drilling, shipbuilding, infrastructure, automotive, energy production and other industries.
Different industrial solutions exist nowadays and are commercialized by a limited number of Original Equipment Manufacturers worldwide.
Practical applications
UIT enables life extension of steel bridges. This technique has been employed in numerous US states as well as other nations. The result is a greatly reduced cost of infrastructure. UIT has been certified for this use by AASHTO.
The use of UIT on draglines and other heavy equipment in the mining industry has resulted in increased production and has decreased downtime and maintenance costs.
UIT is employed on drive shafts and crank shafts in a number of industries. Results show that UIT increases shaft life by over a factor of 3.
The US Navy uses UIT to address cracked areas in certain aluminum decks. Without UIT, crack repairs resulted in almost immediate re-cracking. With UIT, repairs have shown to last over eight months without cracks.
See also
High frequency impact treatment
Corrosion fatigue
Stress corrosion cracking
Welding
References
Further reading
Haagensen, P.J., Weld Improvement Methods – Applications and Implementations in Design Codes, invited paper at the Conference on Fatigue of Welded Structures, Senlis, Paris, France, 12–14 June 1996.
Prokopenko, G.I., T.A. Lyatun, Study of Surface Hardening Conditions by Means of Ultrasound, in: Physics and Chemistry of Material Processing, No. 3, p 91, 1977.
Blaha, F., B.Langenecker.“Dehnung von Zink-Kristallen unter Ultraschalleinwirkung”, Zeitschrift die Naturwissenschaften, 20, 556, 1955.
Konovalov, E.G., V.M. Drozdov, M.D. Tyavlovski, Dynamic Strength of Metals (in Russian), Nauka i Tekhnika, Minsk, 1969.
Kazantsev, V.F., Basic Physics of Ultrasonic Action on Solid Body Processing (in Russian). Doctoral thesis, AKIN, Moscow, 1980, pp. 12–44.
Statnikov, E.S., Development and Study of Ultrasonic Specific-purpose Devices, Thesis, Academician N.N. Andreyev Acoustic Institute, Academy of Sciences of the USSR, 1982.
Severdenko, V.P., E.G. Konovalov, E.Sh. Statnikov et al., Study of Mechanical Properties of New Materials under Ultrasonic Oscillations, Report # 21-971, FTI Acad. Nauk of BSSR, Minsk (1966).
Statnikov, E.Sh., Activation of Deformation Process under Ultrasonic Effect,. Scientific and Technical Conference “XXX Lomonosov Readings”, Sevmashvtuz, Severodvinsk (2001).
IIW PUBLICATIONS:
Increasing the Fatigue Strength of Welded Joints in Cyclic Compression. 47th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-1569-94, Peking, 1994. Y. Kudryavtzev, V.I. Tryufyakov, P.P. Mikheev, E. S. Statnikov.
Improvement of Fatigue Strength of Welded Joint (in High Strength Steels and Aluminium Alloys) by Means of Ultrasonic Hammer Peening. 48th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-1594-95, Stockholm, 1995. J.J. Janosch, H. Koneczny, S. Debiez, E. S. Statnikov, V.I. Tryufyakov, P.P. Mikheev.
Ultrasonic Impact Treatment of Welded Joints. 48th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-1609-95, Stockholm, 1995. V.I. Trufyakov, P.P. Mikheev, Yu. Kudryavtzev, E. S. Statnikov.
Specification for Weld Toe Improvement by Means of Ultrasonic Impact Treatment. 49th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-1617-96, Budapest, 1996. E. S. Statnikov, V.I. Trufyakov, P.P. Mikheev Yu. Kudryavtzev.
Ultrasonic Impact Treatment (UIT) of Welded Joints. 49th Annual Assembly of the International Institute of Welding, Budapest, 1996., E. S. Statnikov.
Applications of Operational Ultrasonic Impact Treatment (UIT) Technology in Production of Welded Joints. 50th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-1667-97, San-Francisco, 1997. E.S. Statnikov.
Comparison of Efficiency and Processibility of Post-Weld Deformation Methods for Increase in Fatigue Strength of Welded Joints. 50th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-1668-97, San-Francisco, 1997. E. S. Statnikov.
The Efficiency of Ultrasonic Impact Treatment (UIT) for Improving the Fatigue Strength of Welded Joints. 51stAnnual Assembly of the International Institute of Welding. IIW Doc. XIII-1745-98, Hamburg, 1998. V.I. Troufyakov, E.S. Statnikov, P.P. Mikheev, A.Z. Kuzmenko.
Introductory Fatigue Tests on Welded Joints in High Strength Steel and Aluminium Treated by Various Improvement Methods Including Ultrasonic Impact Treatment (UIT). 51st Annual Assembly of the International Institute of Welding. IIW Doc. XIII-1748-98, Hamburg, 1998. P.J. Haagensen, E.S. Statnikov, L. Lopez-Martinez.
Repair of Fatigue Cracks. Working Group 5. 51st Annual Assembly of the International Institute of Welding. IIW Doc. XIII-WG5-18-98, Hamburg, 1998. E.S. Statnikov, L. Kelner, J. Baker, H. Croft, V.I. Dvoretsky, V.O. Muktepavel.
Guide for Application of Ultrasonic Impact Treatment Improving Fatigue Life of Welded Structures. 52nd Annual Assembly of the International Institute of Welding. IIW Doc. XIII-1757-99, Lisbon, 1999. E.S. Statnikov.
Comparison of Ultrasonic Impact Treatment (UIT) and other Fatigue Life Improvement Methods. 53rd Annual Assembly of the International Institute of Welding. IIW-Doc. XIII-1817-00, Florence, 2000. E.S. Statnikov, V.O. Muktepavel, A. Blomqvist.
Repair of Fatigue Welded Structures Repair Case Study. Working Group 5. 54th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-WG5-1873-01, Slovenia, 2001. E.S. Statnikov, L. Tehini.
Fatigue Strength Improvement of Bridge Girders by Ultrasonic Impact Treatment (UIT). 55th Annual Assembly of the International Institute of Welding. IIW Doc.XIII-1916-02, Copenhagen, 2002. J.W. Fisher, E.S. Statnikov, L. Tehini.
Comparison of the Improvement in Corrosion Fatigue Strength of Weld Repaired Marine Cu 3-grade Bronze Propellers by Ultrasonic Impact Treatment (UIT) or Heat Treatment. 56th Annual Assembly of the International Institute of Welding. IIW. Doc. XIII-1964-03, Bucharest, 2003. E.S. Statnikov, V.O. Muktepavel, V.N. Vityazev, V.I. Trufyakov, V.S. Kovalchuk, P. Haagensen.
The influence of ultrasonic impact treatment on fatigue behaviour of welded joints in high strength steel. 56th Annual Assembly of the International Institute of Welding, IIW-Doc. XIII-1976-03, Bucharest, 2003. André Galtier, E.S. Statnikov.
Fatigue strength of a longitudinal attachment improved by ultrasonic impact treatment. 56th Annual Assembly of the International Institute of Welding. IIW. Doc.XIII-1990-03, Bucharest, 2003. Veli-Matti Lihavainen, Gary Marquis, E.S. Statnikov.
Physics and Mechanism of Ultrasonic Impact Treatment. 57th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-2004-04, Osaka, 2004, E. S. Statnikov.
Comparison of the Efficiency of 27, 36 and 44 kHz UIT Tools. 57th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-2005, Osaka, 2004. E.S. Statnikov, V.N. Vityazev, O.V. Korolkov.
Improvement in Quality and Reliability of Structures by Means of UIT Esonix. 58th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-2049-05, Prague, 2005. E. S. Statnikov.
Ultrasonic Impact Treatment versus Ultrasonic Peening. 58th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-2050-05. Prague, 2005. E. S. Statnikov.
Physics and Mechanism of Ultrasonic Impact. 59th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-2096-06, Quebec, 2006. E .S. Statnikov, O.V. Korolkov, V.N.Vityazev.
On the Assessment of Ultrasonic Impact Treatment Effect on Fatigue (Discussion of some experimental results). 59th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-2097-06, Quebec, 2006. E. S. Statnikov, V.Y. Korostel.
Development of Esonix Ultrasonic Impact Treatment Techniques. 59th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-2098-06, Quebec, 2006. E.S. Statnikov, V.Y. Korostel, N.Vekshin, G. Marquis.
Fatigue Strength Improvement of Thin Stainless Steel Specimens by UIT. 59th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-2104-06, Quebec, 2006. L Huhtala, V-M Lihavainen, G Marquis, E. S. Statnikov, V.Y. Korostel, S.J. Maddox.
On the Use of Ultrasound to Accelerate Fatigue Testing. 59th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-2106-06, Quebec, 2006. E.S. Statnikov, V.Y. Korostel.
UIT Application for Angular Distortion Compensation in Welded T-joints. 59th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-2107-06, Quebec, 2006. E.S. Statnikov, V.Y. Korostel, W. Fricke.
On Identify in UIT Preparation for Comparative Testing and Field Application. 60th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-2180-07, Dubrovnik, 2007. E.S. Statnikov, V.Y. Korostel, A.D. Manelik.
The use of ultrasound to accelerate fatigue testing during assessment of the UIT effectiveness. 60th Annual Assembly of the International Institute of Welding. IIW Doc. XIII-2182-07, Dubrovnik, 2007. E.S. Statnikov, V.Y. Korostel.
UIT application for angular distortion compensation in welded T-joints. 60th Annual Assembly of the International Institute of Welding. IIW Doc. X-1603-07, Dubrovnik, 2007. E.S. Statnikov, Wolfgang Fricke.
Inventing Ultrasonic Impact Technology and its Industry Impact. 63rd Annual Assembly of the International Institute of Welding, IIW Doc. XIII-2320-10, Istanbul, 2010. L. Kelner, D. Sharman.
External links
https://www.appliedultrasonics.com/
http://www.sonats-et.com/page_23-needle-peening.html
Corrosion prevention
Metallurgical processes
Metalworking
Welding | Ultrasonic impact treatment | [
"Chemistry",
"Materials_science",
"Engineering"
] | 3,176 | [
"Corrosion prevention",
"Welding",
"Metallurgical processes",
"Metallurgy",
"Corrosion",
"Mechanical engineering"
] |
29,974,094 | https://en.wikipedia.org/wiki/Spherical%20sector | In geometry, a spherical sector, also known as a spherical cone, is a portion of a sphere or of a ball defined by a conical boundary with apex at the center of the sphere. It can be described as the union of a spherical cap and the cone formed by the center of the sphere and the base of the cap. It is the three-dimensional analogue of the sector of a circle.
Volume
If the radius of the sphere is denoted by $r$ and the height of the cap by $h$, the volume of the spherical sector is
\[ V = \frac{2\pi r^{2} h}{3}. \]
This may also be written as
\[ V = \frac{2\pi r^{3}}{3}\,(1 - \cos\varphi), \]
where $\varphi$ is half the cone aperture angle, i.e., $\varphi$ is the angle between the rim of the cap and the axis direction to the middle of the cap as seen from the sphere center. The limiting case is for $\varphi$ approaching 180 degrees, which then describes a complete sphere.
The height, $h$, is given by
\[ h = r\,(1 - \cos\varphi). \]
The volume of the sector is related to the area of the cap by:
\[ V = \frac{rA}{3}. \]
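As an illustrative check (not part of the original text), consider a sector with $r = 1$ and $\varphi = 60^{\circ}$. Then $h = r(1 - \cos 60^{\circ}) = 0.5$, so $V = \tfrac{2}{3}\pi r^{2} h = \pi/3 \approx 1.047$, while the cap area is $A = 2\pi r h = \pi$, consistent with $V = rA/3$.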
Area
The curved surface area of the spherical sector (on the surface of the sphere, excluding the cone surface) is
\[ A = 2\pi r h. \]
It is also
\[ A = \Omega\, r^{2}, \]
where $\Omega$ is the solid angle of the spherical sector in steradians, the SI unit of solid angle. One steradian is defined as the solid angle subtended by a cap area of $A = r^{2}$.
Derivation
The volume can be calculated by integrating the differential volume element
\[ dV = \rho^{2} \sin\phi \; d\rho \, d\phi \, d\theta \]
over the volume of the spherical sector,
\[ V = \int_{0}^{2\pi}\!\!\int_{0}^{\varphi}\!\!\int_{0}^{r} \rho^{2} \sin\phi \; d\rho \, d\phi \, d\theta
  = \int_{0}^{2\pi} d\theta \int_{0}^{\varphi} \sin\phi \; d\phi \int_{0}^{r} \rho^{2} \, d\rho
  = \frac{2\pi r^{3}}{3}\,(1 - \cos\varphi), \]
where the integrals have been separated, because the integrand can be separated into a product of functions each with one dummy variable.
The area can be similarly calculated by integrating the differential spherical area element
\[ dA = r^{2} \sin\phi \; d\phi \, d\theta \]
over the spherical sector, giving
\[ A = \int_{0}^{2\pi} d\theta \int_{0}^{\varphi} r^{2} \sin\phi \; d\phi = 2\pi r^{2}\,(1 - \cos\varphi), \]
where $\phi$ is inclination (or elevation) and $\theta$ is azimuth. Notice that $r$ is a constant. Again, the integrals can be separated.
See also
Circular sector — the analogous 2D figure.
Spherical cap
Spherical segment
Spherical wedge
References
Spherical geometry | Spherical sector | [
"Mathematics"
] | 370 | [
"Geometry",
"Geometry stubs"
] |
29,985,074 | https://en.wikipedia.org/wiki/Dynamic%20strain%20aging | Dynamic strain aging (DSA) for materials science is an instability in plastic flow of materials, associated with interaction between moving dislocations and diffusing solutes. Although sometimes dynamic strain aging is used interchangeably with the Portevin–Le Chatelier effect (or serrated yielding), dynamic strain aging refers specifically to the microscopic mechanism that induces the Portevin–Le Chatelier effect. This strengthening mechanism is related to solid-solution strengthening and has been observed in a variety of fcc and bcc substitutional and interstitial alloys, metalloids like silicon, and ordered intermetallics within specific ranges of temperature and strain rate.
Description of mechanism
In materials, the motion of dislocations is a discontinuous process. When dislocations meet obstacles during plastic deformation (such as particles or forest dislocations), they are temporarily arrested for a certain time. During this time, solutes (such as interstitial particles or substitutional impurities) diffuse around the pinned dislocations, further strengthening the obstacles' hold on the dislocations. Eventually these dislocations will overcome the obstacles with sufficient stress and will quickly move to the next obstacle where they are stopped and the process can repeat. This process's most well-known macroscopic manifestations are Lüders bands and the Portevin–Le Chatelier effect. However, the mechanism is known to affect materials without these physical observations.
Model for substitutional solute DSA
In metal alloys with substitutional solute elements, such as aluminum-magnesium alloys, dynamic strain aging leads to negative strain rate sensitivity, which causes instability in plastic flow. The diffusion of solute elements around a dislocation can be modeled based on the energy required to move a solute atom across the slip plane of the dislocation. An edge dislocation produces a stress field which is compressive above the slip plane and tensile below. In Al-Mg alloys, the Mg atom is larger than an Al atom and has lower energy on the tension side of the dislocation slip plane; therefore, Mg atoms in the vicinity of an edge dislocation are driven to diffuse across the slip plane. The resulting region of lower solute concentration above the slip plane weakens the material in the region near the pinned dislocation, such that when the dislocation becomes mobile again, the stress required to move it is temporarily reduced. This effect can manifest as serrations in the stress-strain curve (Portevin-Le Chatelier effect).
Because solute diffusion is thermally activated, increases in temperature can increase the rate and range of diffusion around a dislocation core. This can result in more severe stress drops, typically marked by a transition from Type A to Type C serrations.
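The thermally activated diffusion referred to above is commonly described by an Arrhenius relation; this is a general expression rather than one specific to the alloys discussed here:

\[ D = D_{0}\, \exp\!\left(-\frac{Q}{RT}\right), \]

where $D$ is the solute diffusion coefficient, $D_{0}$ a pre-exponential factor, $Q$ the activation energy for diffusion, $R$ the gas constant and $T$ the absolute temperature. Raising the temperature therefore increases $D$ exponentially, extending the range over which solutes can rearrange around a temporarily arrested dislocation.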
Material property effects
Although serrations in the stress–strain curve caused by the Portevin–Le Chatelier effect are the most visible effect of dynamic strain aging, other effects may be present when serrations are not seen. Often when serrated flow is not seen, dynamic strain aging is marked by a lower strain rate sensitivity, which becomes negative in the Portevin–Le Chatelier regime. Dynamic strain aging also causes a plateau in the strength, a peak in flow stress, a peak in work hardening, a peak in the Hall–Petch constant, and minimum variation of ductility with temperature. Since dynamic strain aging is a hardening phenomenon, it increases the strength of the material.
Effect of alloying elements on DSA
Two categories can be distinguished by the interaction pathway.
The first class of elements, such as carbon (C) and nitrogen (N), contribute to DSA directly by diffusing quickly enough through the lattice to the dislocations and locking them. The effect is determined by the element's solubility, diffusion coefficient, and the interaction energy between the element and dislocations, i.e. the severity of dislocation locking.
Types of DSA Serrations
At least five types can be identified according to the appearance of the serrations on the stress–strain curve.
Type A
Arising from the repeated nucleation of shear bands and the continuous propagation of Lüders bands, this type consists of periodic locking serrations with an abrupt increase in flow stress followed by a drop of stress below the general level of the stress–strain curve. It is usually seen in the low-temperature (high strain rate) part of the DSA regime.
Type B
Type B serrations result from the nucleation of narrow shear bands, which propagate discontinuously or do not propagate at all because of adjacent nucleation sites, and thus oscillate about the general level of the flow curve. They occur at higher temperatures or lower strain rates than type A, and may also develop from type A at higher strains.
Type C
Caused by dislocation unlocking, the stress drops of type C fall below the general level of the flow curve. They occur at even higher temperatures and lower strain rates compared with types A and B.
Type D
When there is no work hardening, a plateau on the stress–strain curve is seen; this type is therefore also named the staircase type. It can form a mixed mode with type B.
Type E
Occurring at higher strains after type A, type E is not easy to recognize.
Material specific example of dynamic strain aging
Dynamic strain aging has been shown to be linked to these specific material problems:
Decrease the fracture resistance of Al–Li alloys.
Decrease low cycle fatigue life of austenitic stainless steels and super-alloys under test conditions which are similar to the service conditions in liquid-metal-cooled fast breeder reactors in which the material is used.
Reduce fracture toughness by 30–40% and shorten the air fatigue life of RPC steels, and may worsen the cracking resistance of steels in aggressive environments. The susceptibility of RPC steels to environmentally assisted cracking in high-temperature water coincides with DSA behavior.
PLC-specific problems such as blue brittleness in steel, loss of ductility, and poor surface finishes in formed aluminum-magnesium alloys.
See also
Portevin–Le Chatelier effect
Lüders band
References
Materials science | Dynamic strain aging | [
"Physics",
"Materials_science",
"Engineering"
] | 1,267 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
29,985,187 | https://en.wikipedia.org/wiki/Sceptre%20%28fusion%20reactor%29 | Sceptre was a series of early fusion power devices based on the Z-pinch concept of plasma confinement, built in the UK starting in 1956. They were the ultimate versions of a series of devices tracing their history to the original pinch machines, built at Imperial College London by Cousins and Ware in 1947. When the UK's fusion work was classified in 1950, Ware's team was moved to the Associated Electrical Industries (AEI) labs at Aldermaston. The team worked on the problems associated with using metal tubes with high voltages, in support of the efforts at Harwell. When Harwell's ZETA machine apparently produced fusion, AEI quickly built a smaller machine, Sceptre, to test their results. Sceptre also produced neutrons, apparently confirming the ZETA experiment. It was later found that the neutrons were spurious, and UK work on Z-pinch ended in the early 1960s.
History
Background
Fusion research in the UK started on a shoestring budget at Imperial College in 1946. When George Paget Thomson failed to gain funding from John Cockcroft's Atomic Energy Research Establishment (AERE), he turned over the project to two students, Stanley (Stan) W. Cousins and Alan Alfred Ware (1924-2010). They started working on the concept in January 1947, using a glass tube and old radar parts. Their small experimental device was able to generate brief flashes of light, but the nature of the light remained a mystery as they could not come up with a method of measuring its temperature.
Little interest was shown in the work, although it was noticed by Jim Tuck, who was interested in all things related to fusion. He met fellow fusion-fascinated Peter Thonemann, and the two developed a similar small machine of their own at Oxford University's Clarendon Laboratory. Tuck left for the University of Chicago before the device was built. After moving to Los Alamos, Tuck introduced the pinch concept there, and eventually built the Perhapsatron along the same lines.
In early 1950 Klaus Fuchs admitted to turning UK and US atomic secrets over to the USSR. As fusion devices would generate copious amounts of neutrons, which could be used to enrich nuclear fuel for atomic bombs, the UK immediately classified all their fusion work. The research was considered important enough to continue, but it was difficult to maintain secrecy in a university setting. The decision was made to move both teams to secure locations. The Imperial team under Ware was set up at the new Associated Electrical Industries (AEI) labs at Aldermaston in November, while the Oxford team under Thonemann was moved to UKAEA Harwell.
By 1951 there were numerous pinch devices in operation; Cousins and Ware had built several follow-on machines, Tuck built his Perhapsatron, and another team at Los Alamos built a linear machine known as Columbus. It was later learned that Fuchs had passed information about the early UK work to the Soviets, and they had started a pinch program as well.
By 1952 it was clear to everyone that something was wrong in the machines. As current was applied, the plasma would first pinch down as expected, but would then develop a series of "kinks", evolving into a sinusoidal shape. When the outer portions hit the walls of the container, a small amount of the material would spall off into the plasma, cooling it and ruining the reaction. This so-called "kink instability" appeared to be a fundamental problem.
Practical work
At Aldermaston, the Imperial team was put under the direction of Thomas Allibone. Compared to the team at Harwell, the Aldermaston team decided to focus on faster pinch systems. Their power supply consisted of a large bank of capacitors with a total capacity of 66,000 Joules (when fully expanded) switched by spark gaps that could dump the stored power into the system at high speeds. Harwell's devices used slower rising pinch currents, and had to be larger to reach the same conditions.
One early suggestion to solve the kink instability was to use highly conductive metal tubes for the vacuum chamber instead of glass. As the plasma approached the walls of the tube, the moving current would induce a magnetic field in the metal. This field would, due to Lenz's law, oppose the motion of the plasma toward it, hopefully slowing or stopping its approach to the sides of the container. Tuck referred to this concept as "giving the plasma a backbone".
Allibone, originally from Metropolitan-Vickers, had worked on metal-walled X-ray tubes that used small inserts of porcelain to insulate them electrically. He suggested trying the same thing for the fusion experiments, potentially leading to higher temperatures than the glass tubes could handle. They started with an all-porcelain tube of 20 cm major axis, and were able to induce 30 kA of current into the plasma before it broke up. Following this they built an aluminum version, which was split into two parts with mica inserts between them. This version suffered arcing between the two halves.
Convinced that the metal tube was the way ahead, the team then started a long series of experiments with different materials and construction techniques to solve the arcing problem. By 1955 they had developed one with 64 segments that showed promise, and using a 60 kJ capacitor bank they were able to induce 80 kA discharges. Although the tube was an improvement, it also suffered from the same kink instabilities, and work on this approach was abandoned.
To better characterize the problem, the team started construction of a larger aluminum torus with a 12-inch bore and 45 inch diameter, and inserted two straight sections to stretch it into a racetrack shape. The straight sections, known as the "pepper pot", had a series of holes drilled in them, angled so they all pointed to a single focal point some distance from the apparatus. A camera placed at the focal point was able to image the entire plasma column, greatly improving their understanding of the instability process.
Studying the issue, Shafranov, Taylor and Rosenbluth all developed the idea of adding a second magnetic field to the system, a steady-state toroidal field generated by magnets circling the vacuum tube. The second field would force the electrons and deuterons in the plasma to orbit the lines of force, reducing the effects of small imperfections in the field generated by the pinch itself. This sparked off considerable interest in both the US and UK. Thomson, armed with the possibility of a workable device and obvious interest in the US, won approval for a very large machine, ZETA.
Sceptre
At Aldermaston, using the same information, Ware's team calculated that with the 60 kJ available in the existing capacitor bank, they would reach the required conditions in a copper-covered quartz tube 2 inches in bore and 10 inches in diameter, or an all-copper version 2 inches in bore and 18 inches across. Work on both started in parallel, as Sceptre I and II.
However, before either was completed, the ZETA team at Harwell had already achieved stable plasmas in August 1957. The Aldermaston team raced to complete their larger photographic system. Electrical arcing and shorting between the tube segments became a problem, but the team had already learned that "dry firing" the apparatus hundreds of times would reduce this effect. After addressing the arcing, further experiments demonstrated temperatures around 1 million degrees. The system worked as expected, producing clear images of the kink instabilities using high-speed photography and argon gas so as to produce a bright image.
The team then removed the straight sections, added stabilization magnets, and re-christened the machine Sceptre III. In December they started experimental runs like those on ZETA. By measuring the spectral lines of oxygen, they calculated interior temperatures of 2 to 3.5 million degrees. Photographs through a slit in the side showed the plasma column remaining stable for 300 to 400 microseconds, a dramatic improvement on previous efforts. Working backward, the team calculated that the plasma had an electrical resistivity around 100 times that of copper, and was able to carry 200 kA of current for 500 microseconds in total. When the current was over 70 kA, neutrons were observed in roughly the same numbers as ZETA.
As in the case of ZETA, it was soon learned that the neutrons were being produced by a spurious source, and the temperatures were due to turbulence in the plasma, not the average temperature.
Sceptre IV
As the ZETA debacle played out in 1958, it was hoped that the solutions to the problems seen in ZETA and Sceptre IIIA would be simple: a better tube, higher vacuum, and denser plasma. As the Sceptre machine was much less expensive and the high-power capacitor bank already existed, the decision was made to test these concepts with a new device, Sceptre IV.
However, none of these techniques helped. Sceptre IV proved to have the same performance problems as the earlier machines. Sceptre IV proved to be the last major "classic" pinch device built in the UK.
Notes
References
George Thomson, "Thermonuclear Fusion: The Task and the Triumph", New Scientist, 30 January 1958, pp. 11–13
Thomas Edward Allibone, "Controlling the Discharge", New Scientist, 30 January 1958, pp. 17–19
Robin Herman, "Fusion: the search for endless energy", Cambridge University Press, 1990
Peter Thonemann, "Controlled Thermonuclear Research in the United Kingdom", 2nd Geneva Conference on Peaceful Uses of Atomic Energy, Session P/78
(Review) Allibone, Chick, Thomson and Ware, "Review of Controlled Thermonuclear Research at A.E.I. Research Laboratory, 2nd Geneva Conference on Peaceful Uses of Atomic Energy, Session P/78
Aldermaston
Magnetic confinement fusion devices
Nuclear research institutes in the United Kingdom
Nuclear technology in the United Kingdom
Research institutes in Berkshire | Sceptre (fusion reactor) | [
"Chemistry"
] | 2,055 | [
"Particle traps",
"Magnetic confinement fusion devices"
] |
26,630,167 | https://en.wikipedia.org/wiki/Bethe%E2%80%93Slater%20curve | The Bethe–Slater curve is a heuristic explanation for why certain metals are ferromagnetic and others are antiferromagnetic. It assumes a Heisenberg model of magnetism, and explains the differences in exchange energy of transition metals as due to the ratio of the interatomic distance a to the radius r of the 3d electron shell. When the magnetically important 3d electrons of adjacent atoms are relatively close to each other, the exchange interaction, , is negative, but when they are further away, the exchange interaction becomes positive, before slowly dropping off.
The idea of relating exchange energy to inter-atomic distance was first proposed by John C. Slater in 1930, and illustrated as a curve on a graph in a review by Sommerfeld and Bethe in 1933.
For a pair of atoms, the exchange interaction $w_{ij}$ (responsible for the energy E) is calculated as:
\[ w_{ij} = -2\, J_{\mathrm{ex}}\, \mathbf{S}_i \cdot \mathbf{S}_j, \]
where $J_{\mathrm{ex}}$ = exchange integral; $\mathbf{S}$ = electron spins; i and j = indices of the two atoms.
The Slater curve does produce realistic results, predicting iron, cobalt and nickel to be the elements with ferromagnetic ordering. The curve is of practical use as a simple way of estimating $J_{\mathrm{ex}}$ based on the average atomic separation. However, more recent evaluations with realistic calculations of the exchange interactions show significantly more complex physics when treating the interactions of different atomic orbitals in an atom separately, rather than as a single unit.
External links
Open Quantum Materials Database
References
Magnetic exchange interactions
Ferromagnetism | Bethe–Slater curve | [
"Chemistry",
"Materials_science"
] | 305 | [
"Magnetic ordering",
"Ferromagnetism"
] |
26,637,162 | https://en.wikipedia.org/wiki/Sarizotan | Sarizotan (EMD-128,130) is a selective 5-HT1A receptor agonist and D2 receptor antagonist, which has antipsychotic effects, and has also shown efficacy in reducing dyskinesias resulting from long-term anti-Parkinsonian treatment with levodopa.
In June 2006, the developer Merck KGaA announced that the development of sarizotan was discontinued, after two sarizotan Phase III studies (PADDY I, PADDY II) failed to meet the primary efficacy endpoint and neither the Phase II findings nor the results from preclinical studies could be confirmed. No statistically significant difference of the primary target variable between sarizotan and placebo could be demonstrated.
See also
Osemozotan
Piclozotan
Robalzotan
References
5-HT1A agonists
5-HT7 agonists
Abandoned drugs
Amines
D2 antagonists
Pyridines
Secondary amines
4-Fluorophenyl compounds
Chromanes | Sarizotan | [
"Chemistry"
] | 214 | [
"Drug safety",
"Functional groups",
"Amines",
"Bases (chemistry)",
"Abandoned drugs"
] |
26,637,337 | https://en.wikipedia.org/wiki/Time%20resolved%20crystallography | Time resolved crystallography utilizes X-ray crystallography imaging to visualize reactions in four dimensions (x, y, z and time). This enables the studies of dynamical changes that occur in for example enzymes during their catalysis. The time dimension is incorporated by triggering the reaction of interest in the crystal prior to X-ray exposure, and then collecting the diffraction patterns at different time delays. In order to study these dynamical properties of macromolecules three criteria must be met;
The macromolecule must be biologically active in the crystalline state
It must be possible to trigger the reaction in the crystal
The intermediate of interest must be detectable, i.e. it must be present at a reasonable concentration in the crystal (preferably over 25%).
This has led to the development of several techniques that can be divided into two groups, the pump-probe method and diffusion-trapping methods.
Pump-probe
In the pump-probe method the reaction is first triggered (pump) by photolysis (most often laser light) and then a diffraction pattern is collected by an X-ray pulse (probe) at a specific time delay. This makes it possible to obtain many images at different time delays after reaction triggering, and thereby build up a chronological series of images describing the events during the reaction.
To obtain a reasonable signal to noise ratio this pump-probe cycle has to be performed many times for each spatial rotation of the crystal, and many times for the same time delay. Therefore, the reaction that one wishes to study with pump-probe must be able to relax back to its original conformation after triggering, enabling many measurements on the same sample.
The time resolution of the observed phenomena is dictated by the time width of the probing pulse (full width at half maximum). All processes that happen on a faster time scale than that are going to be averaged out by the convolution of the probe pulse intensity in time with the intensity of the actual x-ray reflectivity of the sample.
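The smearing described above can be written as a convolution; the notation below is introduced here for illustration and is not taken from the source text. If $g(t)$ is the normalized intensity profile of the probe pulse and $R(t)$ the instantaneous diffracted intensity of the sample, the signal observed at nominal delay $\Delta t$ is

\[ S_{\mathrm{obs}}(\Delta t) = \int g(t)\, R(\Delta t - t)\, dt, \]

so any feature of $R$ that evolves faster than the pulse width (FWHM) of $g$ is averaged out.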
Diffusion-trapping
Diffusion-trapping methods utilize diffusion techniques to get the substrates into the crystal, and thereafter different trapping techniques are applied to make the intermediate of interest accumulate in the crystal prior to collection of the diffraction pattern. These trapping methods could involve changes in pH, use of an inhibitor, or lowering the temperature in order to slow down the turnover rate or even stop the reaction completely at a specific step. Starting the reaction and then flash-freezing it, thereby quenching it at a specific time step, is also a possible method. One drawback of diffusion-trapping methods is that they can only be used to study intermediates that can be trapped, thereby limiting the time resolution one can obtain compared to the pump-probe method.
See also
Keith Moffat
References
Crystallography
Articles containing video clips | Time resolved crystallography | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 578 | [
"Crystallography",
"Condensed matter physics",
"Materials science"
] |
26,637,474 | https://en.wikipedia.org/wiki/Fluorescent%20microthermography | Fluorescent microthermography (FMT) is a microscopy technique for infrared imaging of temperature distribution in small scale; the achievable spatial resolution is half micrometer and temperature resolution of 0.005 K. Time-dependent measurements are possible, as the fluorescence lifetime is only about 200 microseconds.
A thin film of a phosphor, europium thenoyl-trifluoroacetonate, is applied to the surface (e.g. an integrated circuit die) and illuminated by ultraviolet light at 340–380 nm, stimulating fluorescence mainly at the 612 nm line. The quantum efficiency of the fluorescence decreases exponentially with temperature; differences in emitted light intensity can therefore be used to assess differences in surface temperature, with hot areas appearing darker.
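A minimal sketch of how such intensity images might be converted to temperature maps is shown below. It assumes a simple exponential calibration of the form $I(T) \propto \exp(-T/T_q)$, consistent with the exponential decrease described above; the calibration constant, function name and example numbers are illustrative assumptions, not values from the source.

```python
import numpy as np

# Assumed calibration constant: kelvin of temperature rise per e-fold drop in
# fluorescence intensity. A real value must be measured for the specific
# phosphor film and optical setup.
T_Q = 20.0

def temperature_rise(intensity, reference_intensity, t_q=T_Q):
    """Estimate local temperature rise from the ratio of fluorescence images
    (device powered vs. unpowered), assuming I(T) ~ exp(-T / t_q).
    Darker (lower-intensity) pixels correspond to hotter regions."""
    ratio = np.asarray(intensity, dtype=float) / np.asarray(reference_intensity, dtype=float)
    return -t_q * np.log(ratio)

# Example: a pixel whose fluorescence drops to 90% of its cold reference
# maps to a rise of roughly 2.1 K with the assumed calibration.
print(temperature_rise(0.9, 1.0))
```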
References
Infrared imaging
Microscopy
Thermometers
Measurement | Fluorescent microthermography | [
"Physics",
"Chemistry",
"Mathematics",
"Technology",
"Engineering"
] | 172 | [
"Physical quantities",
"Quantity",
"Measurement",
"Size",
"Measuring instruments",
"Thermometers",
"Microscopy"
] |
53,567,230 | https://en.wikipedia.org/wiki/Nisinic%20acid | Nisinic acid is a very long chain polyunsaturated omega-3 fatty acid, similar to docosahexaenoic acid (DHA). The lipid name is 24:6 (n-3) and the chemical name is all-cis-6,9,12,15,18,21-tetracosahexaenoic acid. It is not well studied, but polyunsaturated fatty acids even longer than DHA, nisinic acid included, may hold scientific promise.
References
Fatty acids | Nisinic acid | [
"Chemistry"
] | 114 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
53,567,687 | https://en.wikipedia.org/wiki/Lake%20Tuz%20Natural%20Gas%20Storage | Lake Tuz Natural Gas Storage () is an underground natural gas storage facility under construction in Aksaray Province, central Turkey. It was developed artificially in a salt formation.
The storage facility is situated near Sultanhanı town in Aksaray Province south of Lake Tuz at a depth of . It was established by creating salt caverns. The twelve man-made salt caverns each with a volume of can hold of natural gas. Daily delivery from the storage can be up to when needed.
The geological structure of the area is suitable for large underground natural gas storage facilities. The salt formation covers an area of . It is long and around thick. It has a salt dome structure. To create the caverns in the salt formation, fresh water was brought by pipeline from Hirfanlı Dam at distance. Using the process of solution mining, water was injected through a borehole into the salt formation, and the saline water, which is formed after dissolution of salt in water, was pumped back to the surface leaving a void in the formation. The salt water was transported to far away Lake Tuz by pipeline. Gas pressure inside the storage is nearly .
The contract for the project was signed between the Turkish BOTAŞ and the Chinese Tianchen Engineering Company (TCC) in 2012, and the construction started in 2013. For the financing, World Bank provided a loan in the amount of US$325 million in 2006. Another loan in the amount of US$400 million was secured in 2014. The first phase of the storage construction completed in February 2017. It is expected that storage will be fully in service in 2021. When completed, the facility will be capable of storing 10% of the natural gas consumed. The planned goal for total capacity is set to 20% of the consumed natural gas. The cost of the development was US$700 million.
The storage ensures supply safety in accordance with hourly, daily and seasonal needs. It eliminates supply demand disparity. Stored natural gas will be withdrawn in times of extreme cold weather or when the water level in the dams is reduced in drought. It also helps price stability.
The storage is currently being extended as a next project step.
See also
Northern Marmara and Değirmenköy (Silivri) Depleted Gas Reservoir,
Marmara Ereğlisi LNG Storage Facility,
Egegaz Aliağa LNG Storage Facility.
Botaş Dörtyol LNG Storage Facility
References
Natural gas storage
Energy infrastructure in Turkey
Natural gas in Turkey
Buildings and structures under construction in Turkey
Buildings and structures in Aksaray Province
Botaş | Lake Tuz Natural Gas Storage | [
"Chemistry"
] | 524 | [
"Natural gas storage",
"Natural gas technology"
] |
53,567,922 | https://en.wikipedia.org/wiki/Industrial%20enzymes | Industrial enzymes are enzymes that are commercially used in a variety of industries such as pharmaceuticals, chemical production, biofuels, food and beverage, and consumer products. Due to advancements in recent years, biocatalysis through isolated enzymes is considered more economical than use of whole cells. Enzymes may be used as a unit operation within a process to generate a desired product, or may be the product of interest. Industrial biological catalysis through enzymes has experienced rapid growth in recent years due to their ability to operate at mild conditions, and exceptional chiral and positional specificity, things that traditional chemical processes lack. Isolated enzymes are typically used in hydrolytic and isomerization reactions. Whole cells are typically used when a reaction requires a co-factor. Although co-factors may be generated in vitro, it is typically more cost-effective to use metabolically active cells.
Enzymes as a unit of operation
Immobilization
Despite their excellent catalytic capabilities, enzymes and their properties must be improved prior to industrial implementation in many cases. Some aspects of enzymes that must be improved prior to implementation are stability, activity, inhibition by reaction products, and selectivity towards non-natural substrates. This may be accomplished through immobilization of enzymes on a solid material, such as a porous support. Immobilization of enzymes greatly simplifies the recovery process, enhances process control, and reduces operational costs. Many immobilization techniques exist, such as adsorption, covalent binding, affinity, and entrapment. Ideal immobilization processes should not use highly toxic reagents in the immobilization technique to ensure stability of the enzymes. After immobilization is complete, the enzymes are introduced into a reaction vessel for biocatalysis.
Adsorption
Enzyme adsorption onto carriers functions based on chemical and physical phenomena such as van der Waals forces, ionic interactions, and hydrogen bonding. These forces are weak, and as a result, do not affect the structure of the enzyme. A wide variety of enzyme carriers may be used. Selection of a carrier is dependent upon the surface area, particle size, pore structure, and type of functional group.
Covalent binding
Many binding chemistries may be used to adhere an enzyme to a surface with varying degrees of success. The most successful covalent binding techniques include binding via glutaraldehyde to amino groups and via N-hydroxysuccinimide esters. These immobilization techniques occur at ambient temperatures in mild conditions, which have limited potential to modify the structure and function of the enzyme.
Affinity
Immobilization using affinity relies on the specificity of an enzyme to couple an affinity ligand to an enzyme to form a covalently bound enzyme-ligand complex. The complex is introduced into a support matrix for which the ligand has high binding affinity, and the enzyme is immobilized through ligand-support interactions.
Entrapment
Immobilization using entrapment relies on trapping enzymes within gels or fibers, using non-covalent interactions. Characteristics that define a successful entrapping material include high surface area, uniform pore distribution, tunable pore size, and high adsorption capacity.
Recovery
Enzymes typically constitute a significant operational cost for industrial processes, and in many cases, must be recovered and reused to ensure economic feasibility of a process. Although some biocatalytic processes operate using organic solvents, the majority of processes occur in aqueous environments, improving the ease of separation. Most biocatalytic processes occur in batch, differentiating them from conventional chemical processes. As a result, typical bioprocesses employ a separation technique after bioconversion. In this case, product accumulation may cause inhibition of enzyme activity. Ongoing research is performed to develop in situ separation techniques, where product is removed from the batch during the conversion process. Enzyme separation may be accomplished through solid-liquid extraction techniques such as centrifugation or filtration, and the product-containing solution is fed downstream for product separation.
Enzymes as a desired product
To industrialize an enzyme, the following upstream and downstream enzyme production processes are considered:
Upstream
Upstream processes are those that contribute to the generation of the enzyme.
Selection of a suitable enzyme
An enzyme must be selected based upon the desired reaction. The selected enzyme defines the required operational properties, such as pH, temperature, activity, and substrate affinity.
Identification and selection of a suitable source for the selected enzyme
The choice of a source of enzymes is an important step in the production of enzymes. It is common to examine the role of enzymes in nature and how they relate to the desired industrial process. Enzymes are most commonly sourced through bacteria, fungi, and yeast. Once the source of the enzyme is selected, genetic modifications may be performed to increase the expression of the gene responsible for producing the enzyme.
Process development
Process development is typically performed after genetic modification of the source organism, and involves the modification of the culture medium and growth conditions. In many cases, process development aims to reduce mRNA hydrolysis and proteolysis.
Large scale production
Scaling up of enzyme production requires optimization of the fermentation process. Most enzymes are produced under aerobic conditions, and as a result require constant oxygen input, which impacts fermenter design. Due to variations in the distribution of dissolved oxygen, as well as temperature, pH, and nutrients, the transport phenomena associated with these parameters must be considered. The highest possible productivity is achieved at the maximum transport capacity of the fermenter.
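One commonly used way to quantify the oxygen transport capacity mentioned above is the oxygen transfer rate (OTR); this is a general bioprocess-engineering expression rather than one stated in the source:

\[ \mathrm{OTR} = k_L a \,\left(C^{*}_{\mathrm{O_2}} - C_{\mathrm{O_2}}\right), \]

where $k_L a$ is the volumetric mass-transfer coefficient of the fermenter, $C^{*}_{\mathrm{O_2}}$ the dissolved-oxygen concentration in equilibrium with the gas phase, and $C_{\mathrm{O_2}}$ the actual dissolved-oxygen concentration. Aeration and agitation are typically chosen so that the OTR at least matches the oxygen uptake rate of the culture.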
Downstream
Downstream processes are those that contribute to separation or purification of enzymes.
Removal of insoluble materials and recovery of enzymes from the source
The procedures for enzyme recovery depend on the source organism, and whether enzymes are intracellular or extracellular. Typically, intracellular enzymes require cell lysis and separation of complex biochemical mixtures. Extracellular enzymes are released into the culture medium, and are much simpler to separate. Enzymes must maintain their native conformation to ensure their catalytic capability. Since enzymes are very sensitive to pH, temperature, and ionic strength of the medium, mild isolation conditions must be used.
Concentration and primary purification of enzymes
Depending on the intended use of the enzyme, different levels of purity are required. For example, enzymes used for diagnostic purposes must be separated to a higher purity than bulk industrial enzymes to prevent catalytic activity that provides erroneous results. Enzymes used for therapeutic purposes typically require the most rigorous separation. Most commonly, a combination of chromatography steps is employed for separation.
The purified enzymes are either sold in pure form and sold to other industries, or added to consumer goods.
See also
Industrial ecology
Industrial fermentation
Industrial microbiology
References | Industrial enzymes | [
"Biology"
] | 1,356 | [
"Industrial enzymes"
] |
53,572,373 | https://en.wikipedia.org/wiki/Locus%20of%20enterocyte%20effacement-encoded%20regulator | The locus of enterocyte effacement-encoded regulator (Ler) is a regulatory protein that controls bacterial pathogenicity of enteropathogenic Escherichia coli (EPEC) and enterohemorrhagic Escherichia coli (EHEC). More specifically, Ler regulates the locus of enterocyte effacement (LEE) pathogenicity island genes, which are responsible for creating intestinal attachment and effacing lesions and subsequent diarrhea: LEE1, LEE2, and LEE3. LEE1, 2, and 3 carry the information necessary for a type III secretion system. The transcript encoding the Ler protein is the open reading frame 1 on the LEE1 operon.
The mechanism of Ler regulation involves competition with histone-like nucleoid structuring protein (H-NS), a negative regulator of the LEE pathogenicity island. Ler is regulated by many factors such as plasmid encoded regulator (Per), integration host factor, Fis, BipA, a positive regulatory loop involving GrlA, and quorum sensing mediated by luxS.
Mechanism
Ler positively regulates the LEE genes by competition with its homolog, H-NS. H-NS silences LEE genes via rigid filament structures bound to the DNA that Ler disrupts and replaces through unknown mechanisms. Though little is known of the mechanism of Ler regulation, Ler interacts with DNA in specific ways. Ler binds DNA non-cooperatively, bends DNA in low concentrations, stiffens it in high concentration, and forms toroidal nucleoprotein complexes along DNA in vivo.
Regulation
The regulation of Ler and its transcript, ler, is complex and many-fold. The plasmid encoded regulator (per) directly activates the region of the LEE1 operon which encodes Ler. Integration host factor is also a direct activator of ler and binds upstream of its promoter.
Jeannette Barba and her colleagues at the National Autonomous University of Mexico elucidated a positive regulatory loop between Ler, ler, GrlA, and grlRA. GrlA is also a LEE encoded regulator of the LEE pathogenicity island. They found that GrlA activates ler, and that Ler activates grlRA indicating a loop of activation wherein a protein product activates a transcript whose protein product activates the transcript of the original protein. Ler activates grlRA only if H-NS is present, this is not the case for GrlA activation of ler.
Quorum sensing plays a role in Ler regulation. LuxS is an important protein involved in quorum sensing, particularly in the synthesis of autoinducer molecules. Quorum-sensing E. coli regulator A (QseA) is found in LuxS systems and activates transcription of ler. Fis, a nucleoid associated protein essential for EPEC's ability to form attaching and effacing lesions, partly acts through activation of Ler expression. BipA, a ribosomal binding GTPase and prolific regulator of EPEC virulence, transcriptionally regulates Ler from an upstream position where it also regulates other genes.
The Ler protein also represses its own transcript on the LEE1 operon through DNA looping which prevents RNA polymerase from completing transcription.
References
Locus
Gene expression | Locus of enterocyte effacement-encoded regulator | [
"Chemistry",
"Biology"
] | 698 | [
"Gene expression",
"Model organisms",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Escherichia coli"
] |
53,572,933 | https://en.wikipedia.org/wiki/Boron%20nitride%20aerogel | Boron nitride aerogel is an aerogel made of highly porous boron nitride (BN). It typically consists of a mixture of deformed boron nitride nanotubes and nanosheets. It can have a density as low as 0.6 mg/cm³ and a specific surface area as high as 1050 m²/g, and therefore has potential applications as an absorbent, catalyst support and gas storage medium. BN aerogels are highly hydrophobic and can absorb up to 160 times their mass in oil. They are resistant to oxidation in air at temperatures up to 1200 °C, and hence can be reused after the absorbed oil is burned out by flame. BN aerogels can be prepared by template-assisted chemical vapor deposition at a temperature of ~900 °C using borazine as the feed gas. Alternatively, it can be produced by ball milling h-BN powder, ultrasonically dispersing it in water, and freeze-drying the dispersion.
References
Nanomaterials
Boron compounds
Aerogels
Boron–nitrogen compounds | Boron nitride aerogel | [
"Chemistry",
"Materials_science"
] | 223 | [
"Nanotechnology",
"Foams",
"Aerogels",
"Nanomaterials"
] |
53,573,516 | https://en.wikipedia.org/wiki/FEATool%20Multiphysics | FEATool Multiphysics ("Finite Element Analysis Toolbox for Multiphysics") is a physics, finite element analysis (FEA), and partial differential equation (PDE) simulation toolbox. FEATool Multiphysics features the ability to model fully coupled heat transfer, fluid dynamics, chemical engineering, structural mechanics, fluid-structure interaction (FSI), electromagnetics, as well as user-defined and custom PDE problems in 1D, 2D (axisymmetry), or 3D, all within a graphical user interface (GUI) or optionally as script files. FEATool has been employed and used in academic research, teaching, and industrial engineering simulation contexts.
Distinguishing features
FEATool Multiphysics is a fully integrated physics and PDE simulation environment where the modeling process is subdivided into six steps; preprocessing (CAD and geometry modeling), mesh and grid generation, physics and PDE specification, boundary condition specification, solution, and postprocessing and visualization.
OpenFOAM and SU2 CFD & multi-solver interfaces
FEATool has introduced a multi-simulation/solver feature whereby integrated interfaces (UI) to popular open-source solvers are available. This enables several solvers to be used from a single unified GUI and CLI without requiring detailed knowledge of the syntax or peculiarities of each solver.
The CFD solver interfaces allow fluid dynamics problems to be solved with the finite volume CFD solvers OpenFOAM and SU2. Using the SU2 and OpenFOAM GUI interfaces automatically converts fluid dynamics models to compatible mesh, boundary, and control dictionary files, runs the simulations, and afterwards imports and interpolates the resulting solutions back into the toolbox. In this way more advanced, larger, and parallel CFD models, for example including turbulence, can be simulated without leaving the FEATool interface.
FEniCS multiphysics solver interface
Similar to the OpenFOAM and SU2 solver interfaces, FEATool also features a fully integrated interface to the FEniCS general FEM and multiphysics solver. Using the FEATool-FEniCS interface, as both codes feature PDE definition languages, multiphysics problems can automatically be translated and converted to FEniCS Python definition files, after which the FEniCS solver is called, and the resulting solution re-imported.
Fully scriptable CLI interface
GUI operation is recorded as equivalent function calls, and therefore in addition to binary formats, FEATool simulation models can also be saved and exported as fully scriptable and editable MATLAB compatible m-script files. The short MATLAB script below illustrates how a complete flow around a cylinder computational fluid dynamics (CFD) benchmark problem can be defined and solved with the FEATool m-script functions (including geometry, grid generation, problem definition, solving, and postprocessing all in a few lines of code). Specifically, custom partial differential equations (PDE) and expressions can simply be entered and evaluated as string expressions as-is, without need for further compilation or writing custom functions.
% Geometry and mesh generation.
fea.sdim = { 'x' 'y' };
fea.geom.objects = { gobj_rectangle( 0, 2.2, 0, 0.41, 'R1' ), ...
gobj_circle( [0.2 0.2], 0.05, 'C1' ) };
fea = geom_apply_formula( fea, 'R1-C1' );
fea.grid = gridgen( fea, 'hmax', 0.02 );
% Problem definition (incompressible Navier-Stokes equations multiphysics mode).
fea = addphys( fea, @navierstokes );
% Prescribe fluid viscosity (density is default 1).
fea.phys.ns.eqn.coef{2,end} = { 0.001 };
% Boundary conditions (Non-specified boundaries are
% per default prescribed no-slip zero velocity walls).
% Inflow (bc type 2) at boundary 4.
fea.phys.ns.bdr.sel(4) = 2;
% Outflow (bc type 3, zero pressure) at boundary 2.
fea.phys.ns.bdr.sel(2) = 3;
% Parabolic inflow profile x-velocity expression.
fea.phys.ns.bdr.coef{2,end}{1,4} = '4*0.3*y*(0.41-y)/0.41^2';
% Check, parse, and solve problem.
fea = parsephys( fea );
fea = parseprob( fea );
fea.sol.u = solvestat( fea );
% Alternatively solve with OpenFOAM or SU2
% fea.sol.u = openfoam( fea );
% fea.sol.u = su2( fea );
% Postprocessing and visualization.
postplot( fea, 'surfexpr', 'sqrt(u^2+v^2)', ...
'arrowexpr', {'u' 'v'} )
p_cyl_front = evalexpr( 'p', [0.15; 0.2], fea );
p_cyl_back = evalexpr( 'p', [0.25; 0.2], fea );
delta_p_computed = p_cyl_front - p_cyl_back
delta_p_reference = 0.117520
External mesh generator interfaces
Similar to the external solver interfaces, FEATool features built-in support for the Gmsh and Triangle mesh generators. If requested instead of the built-in mesh generation algorithm, FEATool will convert and export appropriate Gridgen2D, Gmsh, or Triangle input data files, call the mesh generators through external system calls, and re-import the resulting grids into FEATool.
Other distinguishing features
Stand-alone operation (without MATLAB) or can be used as a MATLAB toolbox.
Fully cross platform MATLAB interoperability including other toolboxes.
Extensive FEM basis function library (linear and high order conforming P1-P5, non-conforming, bubble, and vector FEM discretizations).
Support for structured and un-structured line interval, triangles, quadrilaterals, tetrahedral, and hexahedral mesh elements.
28 pre-defined equations and multiphysics modes in 1D, 2D Cartesian and cylindrical coordinates, as well as full 3D.
Support for custom user defined PDE equations.
Mesh and geometry import, export, and conversion between OpenFOAM, SU2, Dolfin/FEniCS XML, GiD, Gmsh, GMV, Triangle (PSLG), and plain ASCII grid formats.
Online postprocessing and image export with ParaView Glance and Plotly, and social sharing of results.
See also
Multiphysics
Computer-aided engineering (CAE)
Continuum mechanics
Finite element method (FEM)
References
External links
FEATool Multiphysics website
Computational fluid dynamics
Computer-aided engineering software
Continuum mechanics
Finite element software
Finite element software for Linux
Scientific simulation software
Physics software | FEATool Multiphysics | [
"Physics",
"Chemistry"
] | 1,534 | [
"Continuum mechanics",
"Computational fluid dynamics",
"Classical mechanics",
"Computational physics",
"Fluid dynamics",
"Physics software"
] |
53,576,321 | https://en.wikipedia.org/wiki/Single-cell%20transcriptomics | Single-cell transcriptomics examines the gene expression level of individual cells in a given population by simultaneously measuring the RNA concentration (conventionally only messenger RNA (mRNA)) of hundreds to thousands of genes. Single-cell transcriptomics makes it possible to unravel heterogeneous cell populations, reconstruct cellular developmental pathways, and model transcriptional dynamics — all previously masked in bulk RNA sequencing.
Background
The development of high-throughput RNA sequencing (RNA-seq) and microarrays has made gene expression analysis routine. RNA analysis was previously limited to tracing individual transcripts by Northern blots or quantitative PCR. Higher throughput and speed allow researchers to routinely characterize the expression profiles of populations of thousands of cells. Data from bulk assays have led to the identification of genes that are differentially expressed in distinct cell populations and to biomarker discovery.
These studies are limited as they provide measurements for whole tissues and, as a result, show an average expression profile for all the constituent cells. This has several drawbacks. Firstly, different cell types within the same tissue can have distinct roles in multicellular organisms, and often form subpopulations with unique transcriptional profiles; correlations in the gene expression of these subpopulations are often missed when the subpopulations are not identified. Secondly, bulk assays fail to recognize whether a change in the expression profile is due to a change in regulation or in composition, for example if one cell type comes to dominate the population. Lastly, when the goal is to study cellular progression through differentiation, average expression profiles can only order cells by time rather than by developmental stage, and consequently cannot show trends in gene expression levels specific to particular stages.
Recent advances in biotechnology allow the measurement of gene expression in hundreds to thousands of individual cells simultaneously. While these breakthroughs in transcriptomics technologies have enabled the generation of single-cell transcriptomic data, they also presented new computational and analytical challenges. Bioinformaticians can use techniques from bulk RNA-seq for single-cell data. Still, many new computational approaches have had to be designed for this data type to facilitate a complete and detailed study of single-cell expression profiles.
Experimental steps
There is so far no standardized technique to generate single-cell data: all methods must include single-cell isolation from the population, lysate formation, reverse transcription and amplification, and quantification of expression levels. Common techniques for measuring expression are quantitative PCR or RNA-seq.
Isolating single cells
There are several methods available to isolate and amplify cells for single-cell analysis. Low throughput techniques are able to isolate hundreds of cells, are slow, and enable selection. These methods include:
Micropipetting
Cytoplasmic aspiration
Laser capture microdissection.
High-throughput methods are able to quickly isolate hundreds to tens of thousands of cells. Common techniques include:
Fluorescence activated cell sorting (FACS)
Microfluidic devices
Combining FACS with scRNA-seq has produced optimized protocols such as SORT-seq. Moreover, combining microfluidic devices with scRNA-seq has been optimized in 10x Genomics protocols.
Quantitative PCR (qPCR)
To measure the expression level of each transcript, qPCR can be applied. Gene-specific primers are used to amplify the corresponding gene as in regular PCR, and as a result data are usually obtained only for sample sizes of fewer than 100 genes. The inclusion of housekeeping genes, whose expression should be constant under the experimental conditions, is used for normalisation. The most commonly used housekeeping genes include GAPDH and α-actin, although the reliability of normalisation through this process is questionable, as there is evidence that their expression levels can vary significantly. Fluorescent dyes are used as reporter molecules to detect the PCR product and monitor the progress of the amplification: the increase in fluorescence intensity is proportional to the amplicon concentration. A plot of fluorescence vs. cycle number is made, and a threshold fluorescence level is used to find the cycle at which the plot reaches this value. The cycle number at this point is known as the threshold cycle (Ct) and is measured for each gene.
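The short Python sketch below illustrates how a threshold cycle can be extracted from an amplification curve and normalised against a housekeeping gene. The sigmoidal curves, the threshold of 0.2, and the use of GAPDH as the reference are assumptions made purely for the example, and the 2^(-ΔCt) step assumes roughly 100% PCR efficiency.

import numpy as np

def threshold_cycle(fluorescence, threshold):
    # Interpolated cycle number at which fluorescence first crosses the threshold.
    cycle_axis = np.arange(1, len(fluorescence) + 1)
    above = np.flatnonzero(fluorescence >= threshold)
    if above.size == 0:
        return None                      # never amplified above the threshold
    i = above[0]
    if i == 0:
        return float(cycle_axis[0])
    f0, f1 = fluorescence[i - 1], fluorescence[i]
    return float(cycle_axis[i - 1] + (threshold - f0) / (f1 - f0))

# Hypothetical background-subtracted amplification curves over 40 cycles for a
# target gene and a housekeeping gene (e.g. GAPDH).
cycles = np.arange(1, 41)
target_curve = 1.0 / (1.0 + np.exp(-(cycles - 24.0)))
housekeeping_curve = 1.0 / (1.0 + np.exp(-(cycles - 18.0)))

threshold = 0.2
ct_target = threshold_cycle(target_curve, threshold)
ct_housekeeping = threshold_cycle(housekeeping_curve, threshold)

# Normalisation: a larger delta-Ct means lower expression of the target
# relative to the housekeeping gene.
delta_ct = ct_target - ct_housekeeping
relative_expression = 2.0 ** (-delta_ct)
print(ct_target, ct_housekeeping, relative_expression)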
Single-cell RNA-seq
The single-cell RNA-seq technique converts a population of RNAs to a library of cDNA fragments. These fragments are sequenced by high-throughput next generation sequencing techniques and the reads are mapped back to the reference genome, providing a count of the number of reads associated with each gene.
Normalisation of RNA-seq data accounts for cell to cell variation in the efficiencies of the cDNA library formation and sequencing. One method relies on the use of extrinsic RNA spike-ins (RNA sequences of known sequence and quantity) that are added in equal quantities to each cell lysate and used to normalise read count by the number of reads mapped to spike-in mRNA.
Another control uses unique molecular identifiers (UMIs), short DNA sequences (6–10 nt) that are added to each cDNA before amplification and act as a barcode for each cDNA molecule. Normalisation is achieved by using the count of unique UMIs associated with each gene to account for differences in amplification efficiency.
Spike-ins, UMIs and other approaches have been combined for more accurate normalisation.
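As a minimal illustration of UMI-based counting, the Python sketch below collapses PCR duplicates by counting each (cell, gene, UMI) combination only once; the read tuples are invented for the example, and real pipelines additionally correct for sequencing errors within the UMIs.

from collections import defaultdict

# Hypothetical mapped reads as (cell barcode, gene, UMI) tuples.
reads = [
    ("cell1", "GeneA", "ACGTTG"), ("cell1", "GeneA", "ACGTTG"),  # PCR duplicate
    ("cell1", "GeneA", "TTGACA"), ("cell1", "GeneB", "GGCATA"),
    ("cell2", "GeneA", "CATGGA"), ("cell2", "GeneB", "CATGGA"),
]

# Collapse duplicates: each unique UMI per (cell, gene) is counted once, so the
# expression estimate reflects original molecules rather than amplification depth.
umis = defaultdict(set)
for cell, gene, umi in reads:
    umis[(cell, gene)].add(umi)

counts = {key: len(umi_set) for key, umi_set in umis.items()}
print(counts)  # e.g. {('cell1', 'GeneA'): 2, ('cell1', 'GeneB'): 1, ...}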
Considerations
A problem associated with single-cell data occurs in the form of zero-inflated gene expression distributions, known as technical dropouts, which are common because the low-abundance mRNAs of weakly expressed genes are not captured in the reverse transcription process. The percentage of mRNA molecules in the cell lysate that are detected is often only 10–20%.
When using RNA spike-ins for normalisation the assumption is made that the amplification and sequencing efficiencies for the endogenous and spike-in RNA are the same. Evidence suggests that this is not the case given fundamental differences in size and features, such as the lack of a polyadenylated tail in spike-ins and therefore shorter length. Additionally, normalisation using UMIs assumes the cDNA library is sequenced to saturation, which is not always the case.
Data analysis
Analyses of single-cell data assume that the input is a matrix of normalised gene expression counts, generated by the approaches outlined above, and can provide insights that are not obtainable from bulk measurements.
Three main types of insight are provided:
Identification and characterization of cell types and their spatial organisation in time
Inference of gene regulatory networks and their strength across individual cells
Classification of the stochastic component of transcription
The techniques outlined have been designed to help visualise and explore patterns in the data in order to facilitate the revelation of these three features.
Clustering
Clustering allows for the formation of subgroups in the cell population. Cells can be clustered by their transcriptomic profile in order to analyse the sub-population structure and identify rare cell types or cell subtypes. Alternatively, genes can be clustered by their expression states in order to identify covarying genes. A combination of both clustering approaches, known as biclustering, has been used to simultaneously cluster by genes and cells to find genes that behave similarly within cell clusters.
Clustering methods applied can be K-means clustering, forming disjoint groups, or hierarchical clustering, forming nested partitions.
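The sketch below shows both approaches on a simulated cells-by-genes matrix using scikit-learn and SciPy; the two Gaussian "subpopulations" and the choice of two clusters are assumptions made for the example.

import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical normalised expression matrix: rows are cells, columns are genes,
# built from two simulated subpopulations with shifted mean expression.
rng = np.random.default_rng(0)
expression = np.vstack([
    rng.normal(0.0, 1.0, size=(50, 200)),
    rng.normal(2.0, 1.0, size=(50, 200)),
])

# K-means produces disjoint groups of cells.
kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(expression)

# Hierarchical (Ward) clustering produces nested partitions; cut into two groups here.
tree = linkage(expression, method="ward")
hier_labels = fcluster(tree, t=2, criterion="maxclust")

print(kmeans_labels[:5], hier_labels[:5])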
Biclustering
Biclustering provides several advantages by improving the resolution of clustering. Genes that are only informative to a subset of cells and are hence only expressed there can be identified through biclustering. Moreover, similarly behaving genes that differentiate one cell cluster from another can be identified using this method.
Dimensionality reduction
Dimensionality reduction algorithms such as principal component analysis (PCA) and t-SNE can be used to simplify data for visualisation and pattern detection by transforming cells from a high-dimensional to a lower-dimensional space. The result is a plot with each cell as a point in 2-D or 3-D space. Dimensionality reduction is frequently used before clustering, as cells in high dimensions can wrongly appear to be close to each other because distance metrics behave non-intuitively.
Principal component analysis
The most frequently used technique is PCA, which identifies the directions of largest variance (the principal components) and transforms the data so that the first principal component has the largest possible variance, and successive principal components in turn each have the highest variance possible while remaining orthogonal to the preceding components. The contribution each gene makes to each component (its loading) is used to infer which genes contribute the most to the variance in the population and are involved in differentiating subpopulations.
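A minimal sketch of this workflow with scikit-learn is shown below; the random expression matrix and the choice of two components are assumptions for illustration only.

import numpy as np
from sklearn.decomposition import PCA

# Hypothetical log-normalised cells-by-genes expression matrix.
rng = np.random.default_rng(1)
expression = rng.normal(size=(100, 500))

pca = PCA(n_components=2)
coordinates = pca.fit_transform(expression)   # each cell becomes a point in 2-D

# Fraction of total variance captured by each principal component.
print(pca.explained_variance_ratio_)

# Loadings: per-gene contributions to the first component, used to pick the
# genes that drive most of the variance between subpopulations.
top_genes_pc1 = np.argsort(np.abs(pca.components_[0]))[::-1][:10]
print(top_genes_pc1)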
Differential expression
Detecting differences in gene expression level between two populations is used for both single-cell and bulk transcriptomic data. Specialised methods have been designed for single-cell data that consider single-cell features such as technical dropouts and the shape of the distribution, e.g. bimodal vs. unimodal.
Gene ontology enrichment
Gene Ontology terms describe gene functions and the relationships between those functions, grouped into three classes:
Molecular function
Cellular component
Biological process
Gene Ontology (GO) term enrichment is a technique used to identify which GO terms are over-represented or under-represented in a given set of genes. In single-cell analysis, the input list of genes of interest can be selected based on differentially expressed genes or on groups of genes generated from biclustering. The number of genes annotated to a GO term in the input list is compared against the number of genes annotated to that term in the background set of all genes in the genome to determine statistical significance.
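The over-representation test for a single GO term can be written as a hypergeometric test, as in the SciPy sketch below; the counts are invented, and in practice a multiple-testing correction (for example Benjamini–Hochberg) is applied across all tested terms.

from scipy.stats import hypergeom

# Hypothetical counts for one GO term.
background_genes = 20000        # all annotated genes in the genome (background set)
annotated_in_background = 300   # background genes annotated to this GO term
input_genes = 150               # genes of interest, e.g. differentially expressed genes
annotated_in_input = 12         # input genes annotated to this GO term

# One-sided over-representation p-value: probability of drawing this many or more
# annotated genes in a random sample of the same size as the input list.
p_value = hypergeom.sf(annotated_in_input - 1,
                       background_genes,
                       annotated_in_background,
                       input_genes)
print(p_value)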
Pseudotemporal ordering
Pseudo-temporal ordering (or trajectory inference) is a technique that aims to infer gene expression dynamics from snapshot single-cell data. The method tries to order the cells in such a way that similar cells are closely positioned to each other. This trajectory of cells can be linear, but can also bifurcate or follow more complex graph structures. The trajectory, therefore, enables the inference of gene expression dynamics and the ordering of cells by their progression through differentiation or response to external stimuli.
The method relies on the assumptions that the cells follow the same path through the process of interest and that their transcriptional state correlates to their progression. The algorithm can be applied to both mixed populations and temporal samples.
More than 50 methods for pseudo-temporal ordering have been developed, and each has its own requirements for prior information (such as starting cells or time course data), detectable topologies, and methodology. An example algorithm is the Monocle algorithm that carries out dimensionality reduction of the data, builds a minimal spanning tree using the transformed data, orders cells in pseudo-time by following the longest connected path of the tree and consequently labels cells by type. Another example is the diffusion pseudotime (DPT) algorithm, which uses a diffusion map and diffusion process. Another class of methods such as MARGARET employ graph partitioning for capturing complex trajectory topologies such as disconnected and multifurcating trajectories.
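The Python sketch below mimics the minimum-spanning-tree idea on simulated data: cells are reduced with PCA, connected by a minimum spanning tree, and ordered by their distance from one end of the longest path through the tree. The simulated "differentiation" signal and the use of graph distance as pseudotime are simplifying assumptions, not a reimplementation of Monocle.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path
from scipy.spatial.distance import pdist, squareform
from sklearn.decomposition import PCA

# Hypothetical cells-by-genes matrix with a gradual expression change along a
# simulated differentiation process.
rng = np.random.default_rng(2)
progression = np.linspace(0.0, 1.0, 80)
signal = progression[:, None] * rng.normal(1.0, 0.1, size=(1, 100))
expression = signal + rng.normal(0.0, 0.05, size=(80, 100))

# Reduce dimensionality, then connect the cells with a minimum spanning tree.
coords = PCA(n_components=2).fit_transform(expression)
distances = squareform(pdist(coords))
mst = minimum_spanning_tree(distances)

# Pseudotime: find the pair of cells with the longest path through the tree and
# order every cell by its tree distance from one end of that path.
path_lengths = shortest_path(mst, directed=False)
start, end = np.unravel_index(np.argmax(path_lengths), path_lengths.shape)
pseudotime_order = np.argsort(path_lengths[start])
print(float(path_lengths[start, end]), pseudotime_order[:10])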
Network inference
Gene regulatory network inference is a technique that aims to construct a network, shown as a graph, in which the nodes represent the genes and edges indicate co-regulatory interactions. The method relies on the assumption that a strong statistical relationship between the expression of genes is an indication of a potential functional relationship. The most commonly used method to measure the strength of a statistical relationship is correlation. However, correlation fails to identify non-linear relationships and mutual information is used as an alternative. Gene clusters linked in a network signify genes that undergo coordinated changes in expression.
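A minimal correlation-based sketch is given below; the simulated matrix, the injected co-regulated gene pair, and the 0.6 cut-off are arbitrary choices for illustration, and mutual information or other association scores would be substituted in the same place.

import numpy as np

# Hypothetical cells-by-genes expression matrix with one co-regulated gene pair.
rng = np.random.default_rng(3)
expression = rng.normal(size=(200, 6))
expression[:, 1] = 0.8 * expression[:, 0] + rng.normal(0.0, 0.3, size=200)

# Correlation between every pair of genes (columns).
corr = np.corrcoef(expression, rowvar=False)

# Keep only strong statistical relationships as candidate regulatory edges.
threshold = 0.6
edges = [(i, j, round(float(corr[i, j]), 2))
         for i in range(corr.shape[0])
         for j in range(i + 1, corr.shape[1])
         if abs(corr[i, j]) >= threshold]
print(edges)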
Integration
The presence or strength of technical effects and the types of cells observed often differ in single-cell transcriptomics datasets generated using different experimental protocols and under different conditions. This difference results in strong batch effects that may bias the findings of statistical methods applied across batches, particularly in the presence of confounding.
As a result of the aforementioned properties of single-cell transcriptomic data, batch correction methods developed for bulk sequencing data were observed to perform poorly. Consequently, researchers developed statistical methods to correct for batch effects that are robust to the properties of single-cell transcriptomic data to integrate data from different sources or experimental batches. Laleh Haghverdi performed foundational work in formulating the use of mutual nearest neighbors between each batch to define batch correction vectors. With these vectors, datasets that each include at least one shared cell type can be merged. An orthogonal approach involves the projection of each dataset onto a shared low-dimensional space using canonical correlation analysis. Mutual nearest neighbors and canonical correlation analysis have also been combined to define integration "anchors" comprising reference cells in one dataset, to which query cells in another dataset are normalized. Another class of methods (e.g., scDREAMER) uses deep generative models such as variational autoencoders for learning batch-invariant latent cellular representations which can be used for downstream tasks such as cell type clustering, denoising of single-cell gene expression vectors and trajectory inference.
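The sketch below illustrates the core mutual-nearest-neighbours idea on simulated data: pairs of cells that are among each other's nearest neighbours across two batches are found, and their average displacement is used as a single global correction vector. This is a deliberate simplification of the published method, which computes locally weighted correction vectors, and the batch sizes, dimensionality, and constant shift are invented for the example.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def mutual_nearest_neighbors(batch_a, batch_b, k=5):
    # Return (i, j) pairs where cell i of batch_a and cell j of batch_b are
    # each among the other's k nearest neighbours.
    nn_b = NearestNeighbors(n_neighbors=k).fit(batch_b)
    nn_a = NearestNeighbors(n_neighbors=k).fit(batch_a)
    a_to_b = nn_b.kneighbors(batch_a, return_distance=False)
    b_to_a = nn_a.kneighbors(batch_b, return_distance=False)
    pairs = []
    for i, neighbours in enumerate(a_to_b):
        for j in neighbours:
            if i in b_to_a[j]:
                pairs.append((i, int(j)))
    return pairs

# Two hypothetical batches sharing one cell type, offset by a constant batch effect.
rng = np.random.default_rng(4)
batch_a = rng.normal(0.0, 1.0, size=(100, 20))
batch_b = rng.normal(0.0, 1.0, size=(120, 20)) + 0.5

pairs = mutual_nearest_neighbors(batch_a, batch_b)

# Average displacement across MNN pairs as a crude batch-correction vector.
correction = np.mean([batch_b[j] - batch_a[i] for i, j in pairs], axis=0)
batch_b_corrected = batch_b - correction
print(len(pairs), correction[:3])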
See also
RNA-Seq
Single-cell analysis
Single-cell sequencing
Transcriptome
Transcriptomics
References
External links
Dissecting Tumor Heterogeneity with Single-Cell Transcriptomics
The ultimate single-cell RNA sequencing guide by single-cell RNA sequencing service provider Single Cell Discoveries.
DNA sequencing
Molecular biology techniques
Biotechnology | Single-cell transcriptomics | [
"Chemistry",
"Biology"
] | 2,754 | [
"Biotechnology",
"Molecular biology techniques",
"DNA sequencing",
"nan",
"Molecular biology"
] |
40,654,416 | https://en.wikipedia.org/wiki/Pluramycin%20A | Pluramycin A is an antibiotic/anticancer compound that inhibits nucleic acid biosynthesis. The pluramycin family of natural products are an important group of complex C-aryl glycoside antibiotics that possess the tetracyclic 4H-anthra[1,2-b]pyran-4,7,12-trione moiety A–D as an aromatic core. The D-ring is adorned with two deoxyaminosugars that are appended by C-aryl glycosidic linkages. The E-ring sugar is angolosamine, a carbohydrate that is also found in the antibiotic angolamycin. The F-ring sugar is the N,N-dimethyl derivative of vancosamine, which is the sugar found in the glycopeptide antibiotic vancomycin.
These compounds exhibit in vitro antitumor activity by DNA alkylation, where the two proximal amino sugars, D-angolosamine and N,N-dimethyl-L-vancosamine, play a key role in sequence recognition upon intercalation of the tetracyclic chromophore.
References
Antibiotics
Angucyclines
Heterocyclic compounds with 4 rings
Epoxides
Acetate esters
Dimethylamino compounds
Anthrapyrans | Pluramycin A | [
"Chemistry",
"Biology"
] | 295 | [
"Antibiotics",
"Biocides",
"Biotechnology products"
] |
40,654,938 | https://en.wikipedia.org/wiki/Retinal%20homeobox%20protein%20Rx | Retinal homeobox protein Rx also known as retina and anterior neural fold homeobox is a protein that in humans is encoded by the RAX gene. The RAX gene is located on chromosome 18 in humans, mice, and rats.
Function
This gene encodes a homeobox-containing transcription factor that functions in eye development. The gene is expressed early in the eye primordia, and is required for retinal cell fate determination and also regulates stem cell proliferation.
Towards the end of late gastrulation a single eye field has formed and splits into bilateral fields via action by the signaling molecule, sonic hedgehog (Shh) secreted from the forebrain. Rax and Six-3 (also a transcription factor) maintain the forebrain's ability to secrete Shh by inhibiting activity of the signaling molecule Wnt.
Rax (Retina and Anterior Neural Fold Homeobox) is a gene in the OAR (Otx, Arx, & Rax) subgroup of the paired-like homeodomain family of transcription factors. Discovered in 1997, the Rax gene is known to contribute to the development of the retina, hypothalamus, pineal gland and pituitary gland.
Clinical significance
Mutations in this gene have been reported in patients with defects in ocular development, including microphthalmia, anophthalmia, and coloboma.
Mutations to the Rax gene cause malformation of the retinal field, including anophthalmia and microphthalmia.
Individuals who have a mutation in the RAX gene fail to develop ocular structures, referred to as anophthalmia. RAX mutant individuals can also have microphthalmia, in which one or both eyes are smaller than normal.
Animal studies
Rax genes are conserved among vertebrates. RAX knockout mice have no eyes and abnormal forebrain formation. In the frog Xenopus tropicalis, Rax mutants are eyeless; the future retinal tissue instead has diencephalon and telencephalon features. Due to a genome duplication at the base of the teleost fish lineage, these fishes contain three Rax genes: Rx1, Rx2, and Rx3. Zebrafish and medaka mutants in Rx3 are eyeless.
References
Transcription factors | Retinal homeobox protein Rx | [
"Chemistry",
"Biology"
] | 490 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
40,656,163 | https://en.wikipedia.org/wiki/Trailer%20connectors%20in%20military%20organizations | A number of standards specific to military organizations exist for trailer connectors, the electrical connectors between vehicles and the trailers they tow that provide a means of control for the trailers. These can be found on surplus equipment sold for civilian use.
NATO
NATO uses a 12-pin connector according to STANAG 4007. However, there are often deviations from the standard depending on the country in which it is applied, which means that the pin assignments below may not be accurate in every case.
The following supplementary information exists for the connector:
Some documentation indicates that terminals A, C and H must be interconnected; this conflicts with the definition of the Blackout and Convoy modes in STANAG 4007, where these pins have different purposes. To clarify:
Pin A is for activation of so-called Blackout Mode. It turns off all the lights except convoy lighting if it is active. The lighting inside the vehicle shall also be extinguished if it is not specifically shaded.
Pin C is for Convoy Lamps, which is the special convoy lighting, corresponding to the tail lights, to be used while driving in the dark.
Pin H is for rear fog light and the rear fog light shall also be disabled when pin A is active.
Swedish Armed Forces
This is physically the same connector as the NATO connector, but with completely different wiring. Mixing it up with the STANAG 4007 wiring therefore risks short circuits and blown fuses.
The following supplementary information exists for the connector:
See also
Trailer connector
Trailer connectors in Australia
Trailer connectors in Europe
ISO standards for trailer connectors
Trailer connectors in North America
References
Symbol Guide
Military vehicles
Automotive electrics
Trailers
DC power connectors | Trailer connectors in military organizations | [
"Engineering"
] | 331 | [
"Electrical engineering",
"Automotive electrics"
] |
40,659,364 | https://en.wikipedia.org/wiki/Arctic%20sea%20ice%20decline | Sea ice in the Arctic region has declined in recent decades in area and volume due to climate change. It has been melting more in summer than it refreezes in winter. Global warming, caused by greenhouse gas forcing, is responsible for the decline in Arctic sea ice. The decline of sea ice in the Arctic has been accelerating during the early twenty-first century, with a decline rate of 4.7% per decade (it has declined over 50% since the first satellite records). Summertime sea ice will likely cease to exist sometime during the 21st century.
The region is at its warmest in at least 4,000 years. Furthermore, the Arctic-wide melt season has lengthened at a rate of five days per decade (from 1979 to 2013), dominated by a later autumn freeze-up. The IPCC Sixth Assessment Report (2021) stated that Arctic sea ice area will likely drop below 1 million km2 in at least some Septembers before 2050. In September 2020, the US National Snow and Ice Data Center reported that the Arctic sea ice in 2020 had melted to an extent of 3.74 million km2, its second-smallest extent since records began in 1979. Earth lost 28 trillion tonnes of ice between 1994 and 2017, with Arctic sea ice accounting for 7.6 trillion tonnes of this loss. The rate of ice loss has risen by 57% since the 1990s.
Sea ice loss is one of the main drivers of Arctic amplification, the phenomenon that the Arctic warms faster than the rest of the world under climate change. It is plausible that sea ice decline also makes the jet stream weaker, which would cause more persistent and extreme weather in mid-latitudes. Shipping is more often possible in the Arctic now, and will likely increase further. Both the disappearance of sea ice and the resulting possibility of more human activity in the Arctic Ocean pose a risk to local wildlife such as polar bears.
One important aspect in understanding sea ice decline is the Arctic dipole anomaly. This phenomenon appears to have slowed down the overall loss of sea ice between 2007 and 2021, but such a trend will probably not continue.
Definitions
The Arctic Ocean is the mass of water positioned approximately above latitude 65° N. Arctic Sea Ice refers to the area of the Arctic Ocean covered by ice. The Arctic sea ice minimum is the day in a given year when Arctic sea ice reaches its smallest extent, occurring at the end of the summer melting season, normally during September. Arctic Sea ice maximum is the day of a year when Arctic sea ice reaches its largest extent near the end of the Arctic cold season, normally during March. Typical data visualizations for Arctic sea ice include average monthly measurements or graphs for the annual minimum or maximum extent, as shown in the adjacent images.
Sea ice extent is defined as the area with at least 15% of sea ice cover; it is more often used as a metric than simple total sea ice area. This metric is used to address uncertainty in distinguishing open sea water from melted water on top of solid ice, which satellite detection methods have difficulty differentiating. This is primarily an issue in summer months.
Observations
A 2007 study found the decline to be "faster than forecasted" by model simulations.
A 2011 study suggested that the discrepancy could be reconciled by internal variability enhancing the greenhouse-gas-forced sea ice decline over the last few decades. A 2012 study, with a newer set of simulations, also projected rates of retreat that were somewhat less than those actually observed.
Satellite era
Observation with satellites shows that Arctic sea ice area, extent, and volume have been in decline for a few decades. The amount of multi-year sea ice in the Arctic has declined considerably in recent decades. In 1988, ice that was at least 4 years old accounted for 26% of the Arctic's sea ice. By 2013, ice that age was only 7% of all Arctic sea ice.
Scientists measured sixteen-foot (five-meter) wave heights during a storm in the Beaufort Sea between mid-August and late October 2012. This is a new phenomenon for the region, since a permanent sea ice cover normally prevents wave formation. Wave action breaks up sea ice, and thus could become a feedback mechanism driving sea ice decline.
For January 2016, the satellite-based data showed the lowest overall Arctic sea ice extent of any January since records began in 1979. Bob Henson from Wunderground noted:
January 2016's remarkable phase transition of the Arctic oscillation was driven by rapid tropospheric warming in the Arctic, a pattern that appears to have increased, surpassing the so-called stratospheric sudden warming. The previous record for the lowest extent of the Arctic Ocean covered by ice, set in 2012, was 1.31 million square miles (3.387 million square kilometers). This replaced the earlier record set on September 18, 2007, of 1.61 million square miles (4.16 million square kilometers). The minimum extent on 18 September 2019 was 1.60 million square miles (4.153 million square kilometers).
A 2018 study of the thickness of sea ice found a decrease of 66% or 2.0 m over the last six decades and a shift from permanent ice to largely seasonal ice cover.
Earlier data
The overall trend indicated in the passive microwave record from 1978 through mid-1995 shows that the extent of Arctic sea ice was decreasing by 2.7% per decade. Subsequent work with the satellite passive-microwave data indicates that from late October 1978 through the end of 1996 the extent of Arctic sea ice decreased by 2.9% per decade. Sea ice extent for the Northern Hemisphere showed a decrease of 3.8% ± 0.3% per decade from November 1978 to December 2012.
Future ice loss
An "ice-free" Arctic Ocean, sometimes referred to as a "blue ocean event" (BOE), is often defined as "having less than 1 million square kilometers of sea ice", because it is very difficult to melt the thick ice around the Canadian Arctic Archipelago. The IPCC AR5 defines "nearly ice-free conditions" as a sea ice extent of less than 106 km2 for at least five consecutive years.
Estimating the exact year when the Arctic Ocean will become "ice-free" is very difficult, due to the large role of interannual variability in sea ice trends. In Overland and Wang (2013), the authors investigated three different ways of predicting future sea ice levels. They noted that the average of all models used in 2013 was decades behind the observations, and only the subset of models with the most aggressive ice loss was able to match the observations. However, the authors cautioned that there is no guarantee those models would continue to match the observations, and hence that their estimate of ice-free conditions first appearing in the 2040s may still be flawed. Thus, they advocated for the use of expert judgement in addition to models to help predict ice-free Arctic events, but they noted that expert judgement could also be done in two different ways: directly extrapolating ice loss trends (which would suggest an ice-free Arctic in 2020) or assuming a slower decline trend punctuated by occasional "big melt" seasons (such as those of 2007 and 2012), which pushes back the date to 2028 or further into the 2030s, depending on the starting assumptions about the timing and the extent of the next "big melt". Consequently, there has been a recent history of competing projections from climate models and from individual experts.
Climate models
A 2006 paper examined projections from the Community Climate System Model and predicted "near ice-free September conditions by 2040".
A 2009 paper from Muyin Wang and James E. Overland applied observational constraints to the projections from six CMIP3 climate models and estimated nearly ice-free Arctic Ocean around September 2037, with a chance it could happen as early as 2028. In 2012, this pair of researchers repeated the exercise with CMIP5 models and found that under the highest-emission scenario in CMIP5, Representative Concentration Pathway 8.5, ice-free September first occurs between 14 and 36 years after the baseline year of 2007, with the median of 28 years (i.e. around 2035).
In 2009, a study using 18 CMIP3 climate models found that they project ice-free Arctic a little before 2100 under a scenario of medium future greenhouse gas emissions. In 2012, a different team used CMIP5 models and their moderate emission scenario, RCP 4.5 (which represents somewhat lower emissions than the scenario in CMIP3), and found that while their mean estimate avoids ice-free Arctic before the end of the century, ice-free conditions in 2045 were within one standard deviation of the mean.
In 2013, a study compared projections from the best-performing subset of CMIP5 models with the output from all 30 models after it was constrained by the historical ice conditions, and found good agreement between these approaches. Altogether, it projected ice-free September between 2054 and 2058 under RCP 8.5, while under RCP 4.5, Arctic ice gets very close to the ice-free threshold in 2060s, but does not cross it by the end of the century, and stays at an extent of 1.7 million km2.
In 2014, IPCC Fifth Assessment Report indicated a risk of ice-free summer around 2050 under the scenario of highest possible emissions.
The Third U.S. National Climate Assessment (NCA), released May 6, 2014, reported that the Arctic Ocean is expected to be ice free in summer before mid-century. Models that best match historical trends project a nearly ice-free Arctic in the summer by the 2030s.
In 2021, the IPCC Sixth Assessment Report assessed that there is "high confidence" that the Arctic Ocean will likely become practically ice-free in September before the year 2050 under all SSP scenarios.
A paper published in 2021 shows that the CMIP6 models which perform the best at simulating Arctic sea ice trends project the first ice-free conditions around 2035 under SSP5-8.5, which is the scenario of continually accelerating greenhouse gas emissions.
By weighting multiple CMIP6 projections, the first year of an ice-free Arctic is likely to occur during 2040–2072 under the SSP3-7.0 scenario.
Impacts on the physical environment
Global climate change
Arctic sea ice maintains the cool temperature of the polar regions and has an important albedo effect on the climate. Its bright, shiny surface reflects sunlight during the Arctic summer; the dark ocean surface exposed by the melting ice absorbs more sunlight and becomes warmer, which increases the total ocean heat content and helps to drive further sea ice loss during the melting season, as well as potentially delaying its recovery during the polar night. Arctic ice decline between 1979 and 2011 is estimated to have been responsible for as much radiative forcing as a quarter of the emissions over the same period, which is equivalent to around 10% of the cumulative increase since the start of the Industrial Revolution. When compared to the other greenhouse gases, it has had the same impact as the cumulative increase in nitrous oxide, and nearly half of the cumulative increase in methane concentrations.
The effect of Arctic sea ice decline on global warming will intensify in the future as more and more ice is lost. This feedback has been accounted for by all CMIP5 and CMIP6 models, and it is included in all warming projections they make, such as the estimated warming by 2100 under each Representative Concentration Pathway and Shared Socioeconomic Pathway. They are also capable of resolving the second-order effects of sea ice loss, such as the effect on lapse rate feedback, the changes in water vapor concentrations and regional cloud feedbacks.
Ice-free summer vs. ice-free winter
In 2021, the IPCC Sixth Assessment Report said with high confidence that there is no hysteresis and no tipping point in the loss of Arctic summer sea ice. This can be explained by the increased influence of stabilizing feedback compared to the ice albedo feedback. Specifically, thinner sea ice leads to increased heat loss in the winter, creating a negative feedback loop. This counteracts the positive ice albedo feedback. As such, sea ice would recover even from a true ice-free summer during the winter, and if the next Arctic summer is less warm, it may avoid another ice-free episode until another similarly warm year down the line. However, higher levels of global warming would delay the recovery from ice-free episodes and make them occur more often and earlier in the summer. A 2018 paper estimated that an ice-free September would occur once in every 40 years under a global warming of 1.5 degrees Celsius, but once in every 8 years under 2 degrees and once in every 1.5 years under 3 degrees.
Very high levels of global warming could eventually prevent Arctic sea ice from reforming during the Arctic winter. This is known as an ice-free winter, and it ultimately amounts to a total of loss of Arctic ice throughout the year. A 2022 assessment found that unlike an ice-free summer, it may represent an irreversible tipping point. It estimated that it is most likely to occur at around 6.3 degrees Celsius, though it could potentially occur as early as 4.5 °C or as late as 8.7 °C. Relative to today's climate, an ice-free winter would add 0.6 degrees, with a regional warming between 0.6 and 1.2 degrees.
Amplified Arctic warming
Arctic amplification and its acceleration is strongly tied to declining Arctic sea ice: modelling studies show that strong Arctic amplification only occurs during the months when significant sea ice loss occurs, and that it largely disappears when the simulated ice cover is held fixed. Conversely, the high stability of ice cover in Antarctica, where the thickness of the East Antarctic ice sheet allows it to rise nearly above the sea level, means that this continent has not experienced any net warming over the past seven decades: ice loss in the Antarctic and its contribution to sea level rise is instead driven entirely by the warming of the Southern Ocean, which had absorbed 35–43% of the total heat taken up by all oceans between 1970 and 2017.
Impacts on extreme weather
Barents Sea ice
The Barents Sea is the fastest-warming part of the Arctic, and some assessments now treat Barents Sea ice as a separate tipping point from the rest of the Arctic sea ice, suggesting that it could permanently disappear once global warming exceeds 1.5 °C. This rapid warming also makes it easier to detect potential connections between the state of the sea ice and weather conditions elsewhere than in any other area. The first study proposing a connection between sea ice decline in the Barents Sea and the neighbouring Kara Sea and more intense winters in Europe was published in 2010, and there has been extensive research into this subject since then. For instance, a 2019 paper holds BKS ice decline responsible for 44% of the 1995–2014 central Eurasian cooling trend, far more than indicated by the models, while another study from that year suggests that the decline in BKS ice reduces snow cover in northern Eurasia but increases it in central Europe. There are also potential links to summer precipitation: a connection has been proposed between the reduced BKS ice extent in November–December and greater June rainfall over South China. One paper even identified a connection between Kara Sea ice extent and the ice cover of Lake Qinghai on the Tibetan Plateau.
However, BKS ice research is often subject to the same uncertainty as the broader research into Arctic amplification/whole-Arctic sea ice loss and the jet stream, and is often challenged by the same data. Nevertheless, the most recent research still finds connections which are statistically robust, yet non-linear in nature: two separate studies published in 2021 indicate that while autumn BKS ice loss results in cooler Eurasian winters, ice loss during winter makes Eurasian winters warmer: as BKS ice loss accelerates, the risk of more severe Eurasian winter extremes diminishes while heatwave risk in the spring and summer is magnified.
Other possible impacts on weather
In 2019, it was proposed that the reduced sea ice around Greenland in autumn affects snow cover during the Eurasian winter, and this intensifies Korean summer monsoon, and indirectly affects the Indian summer monsoon.
2021 research suggested that autumn ice loss in the East Siberian Sea, Chukchi Sea and Beaufort Sea can affect spring Eurasian temperature. Autumn sea ice decline of one standard deviation in that region would reduce mean spring temperature over central Russia by nearly 0.8 °C, while increasing the probability of cold anomalies by nearly a third.
Atmospheric chemistry
A 2015 study concluded that Arctic sea ice decline accelerates methane emissions from the Arctic tundra, with the emissions for 2005-2010 being around 1.7 million tonnes higher than they would have been with the sea ice at 1981–1990 levels. One of the researchers noted, "The expectation is that with further sea ice decline, temperatures in the Arctic will continue to rise, and so will methane emissions from northern wetlands."
Cracks in Arctic sea ice expose the seawater to the air, causing mercury in the air to be absorbed into the water. This absorption leads to more mercury, a toxin, entering the food chain where it can negatively affect fish and the animals and people who consume them. Mercury is part of Earth's atmosphere due to natural causes (see mercury cycle) and due to human emissions.
Shipping
Economic implications of ice-free summers and the decline in Arctic ice volumes include a greater number of journeys along Arctic Ocean shipping lanes during the year. This number has grown from 0 in 1979 to 400–500 along the Bering Strait and more than 40 along the Northern Sea Route in 2013. Traffic through the Arctic Ocean is likely to increase further. An early study by James Hansen and colleagues suggested in 1981 that a warming of 5 to 10 °C, which they expected as the range of Arctic temperature change corresponding to doubled concentrations, could open the Northwest Passage. A 2016 study concludes that Arctic warming and sea ice decline will lead to "remarkable shifts in trade flows between Asia and Europe, diversion of trade within Europe, heavy shipping traffic in the Arctic and a substantial drop in Suez traffic. Projected shifts in trade also imply substantial pressure on an already threatened Arctic ecosystem."
In August 2017, the first ship traversed the Northern Sea Route without the use of ice-breakers. Also in 2017, the Finnish icebreaker MSV Nordica set a record for the earliest crossing of the Northwest Passage. According to the New York Times, this forebodes more shipping through the Arctic, as the sea ice melts and makes shipping easier. A 2016 report by the Copenhagen Business School found that large-scale trans-Arctic shipping will become economically viable by 2040.
Impacts on wildlife
The decline of Arctic sea ice will provide humans with access to previously remote coastal zones. This access is likely to have undesirable effects on terrestrial ecosystems and to put marine species at risk.
Sea ice decline has been linked to boreal forest decline in North America and is assumed to culminate with an intensifying wildfire regime in this region. The annual net primary production of the Eastern Bering Sea was enhanced by 40–50% through phytoplankton blooms during warm years of early sea ice retreat.
Polar bears are turning to alternative food sources because Arctic sea ice melts earlier and freezes later each year. As a result, they have less time to hunt their historically preferred prey of seal pups, and must spend more time on land hunting other animals. The resulting diet is less nutritious, which leads to reduced body size and reproduction and thus points to a population decline in polar bears.
The Arctic refuge is a main denning habitat for polar bears, and the melting Arctic sea ice is contributing to the species' decline. There are only about 900 bears in the Arctic refuge national conservation area.
As Arctic ice decays, microorganisms produce substances with various effects on melting and stability. Certain types of bacteria in rotten ice pores produce polymer-like substances, which may influence the physical properties of the ice. A team from the University of Washington studying this phenomenon hypothesizes that the polymers may have a stabilizing effect on the ice. However, other scientists have found that algae and other microorganisms help create a substance, cryoconite, or produce other pigments that increase rotting and promote the growth of the microorganisms.
See also
Abrupt climate change
Arctic sea ice ecology and history
Measurement of sea ice
Polar vortex
Sea ice thickness
Vanishing Point (2012 film)
Soil carbon feedback
References
External links
NASA Earth Observatory | Arctic Sea Ice
Piecing together the Arctic's sea ice history back to 1850
Maps
NSIDC | Arctic Sea Ice News
Global Cryosphere Watch
Daily AMSR2 Sea Ice Maps
Environment of the Arctic
Forms of water
Hydrology
Sea ice
Articles containing video clips
Climate change and the environment | Arctic sea ice decline | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 4,249 | [
"Physical phenomena",
"Earth phenomena",
"Sea ice",
"Hydrology",
"Phases of matter",
"Forms of water",
"Environmental engineering",
"Matter"
] |
40,660,830 | https://en.wikipedia.org/wiki/Bond%20softening | Bond softening is an effect of reducing the strength of a chemical bond by strong laser fields. To make this effect significant, the strength of the electric field in the laser light has to be comparable with the electric field the bonding electron "feels" from the nuclei of the molecule. Such fields are typically in the range of 1–10 V/Å, which corresponds to laser intensities of 10¹³–10¹⁵ W/cm². Nowadays, these intensities are routinely achievable from table-top Ti:Sapphire lasers.
Theory
Theoretical description of bond softening can be traced back to early work on dissociation of diatomic molecules in intense laser fields. While the quantitative description of this process requires quantum mechanics, it can be understood qualitatively using quite simple models.
Low-intensity description
Consider the simplest diatomic molecule, the H2+ ion. The ground state of this molecule is bonding and the first excited state is antibonding. This means that when we plot the potential energy of the molecule (i.e. the average electrostatic energy of the two protons and the electron plus the kinetic energy of the latter) as the function of proton-proton separation, the ground state has a minimum but the excited state is repulsive (see Fig. 1a). Normally, the molecule is in the ground state, in one of the lowest vibrational levels (marked by horizontal lines).
In the presence of light, the molecule may absorb a photon (violet arrow), provided its frequency matches the energy difference between the ground and the excited states. The excited state is unstable and the molecule dissociates within femtoseconds into hydrogen atom and a proton releasing kinetic energy (red arrow). This is the usual description of photon absorption, which works well at low intensity. At high intensity, however, the interaction of the light with the molecule is so strong that the potential energy curves become distorted. To take this distortion into account requires "dressing" the molecule in photons.
Dressing in photons at high intensity
At high laser intensity absorptions and stimulated emissions of photons are so frequent that the molecule cannot be regarded as a system separate from the laser field; the molecule is "dressed" in photons forming a single system. However, the number of photons in this system varies when photons are absorbed and emitted. Therefore, to plot the energy diagram of the dressed molecule, we need to repeat the energy curves at each number of photons. The number of photons is very large but only a few curve repetitions need to be considered in this very tall ladder, as shown in Fig. 1b.
In the dressed model, photon absorption (and emission) is no longer represented by vertical transitions. As the energy must be conserved, photon absorption occurs at the curve crossings. For example, if the molecule is in the ground electronic state with 10¹⁵ photons present, it can jump to the repulsive state absorbing a photon at the curve crossing (violet circle) and dissociate to the 10¹⁵ − 1 photon limit (red arrow). This "curve jumping" is in fact continuous and can be explained in terms of avoided crossings.
Energy curve distortion
When a strong laser field perturbs the molecule, its energy levels are no longer the same as in the absence of the field. To calculate the new energy levels, the perturbation must be included as off-diagonal elements of the Hamiltonian, which has to be diagonalised. In consequence, the crossings turn into anticrossings, and the higher the laser intensity, the larger the gap of the anticrossing, as shown in Fig. 2. The molecule can dissociate along the lower branch of the anticrossings as indicated by the red arrows.
The top arrow represents one photon absorption, which is a continuous process. In the region of the anticrossing the molecule is in a superposition of the ground and the excited states, continuously exchanging energy with the laser field. As the internuclear separation increases, the molecule absorbs energy and the electronic wavefunction evolves to the antibonding state on the femtosecond timescale. The H2+ ion dissociates to the 1ω limit.
The bottom arrow represents a process initiated at the 3-photon gap. As the system passes through this gap, the 1-photon gap is wide open and the system slides along the top branch of the 1-photon anticrossing. The molecule dissociates to the 2ω limit via absorption of 3 photons followed by re-emission of 1 photon. (One-step even-photon absorptions and emissions are forbidden by the symmetry of the system.)
The anticrossing curves are adiabatic, i.e. they are accurate only for infinitely slow transitions. When the dissociation is fast and the gap is small, a diabatic transition may occur where the system ends up on the other branch of the anticrossing. The probability of such a transition is described by the Landau–Zener formula. When applied to the dissociation through the 3-photon gap, the formula gives a small probability of the H2+ molecular ion ending up in the 3ω dissociation limit without emitting any photons.
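In its textbook form, the Landau–Zener probability of such a diabatic transition can be written as below, where V_12 is the coupling between the two states (half the anticrossing gap) and E_1 − E_2 is the energy separation of the unperturbed crossing curves; the notation here is generic and not taken from a specific reference in this article.

P_{\mathrm{diabatic}} = \exp\left( - \frac{2\pi \, |V_{12}|^{2}}{\hbar \left| \frac{d}{dt}\left( E_{1} - E_{2} \right) \right|} \right)

The exponent is small, and the diabatic transition therefore likely, when the gap is narrow or the crossing is traversed quickly, consistent with the qualitative discussion above.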
Experimental confirmation
The "bond softening" phrase was coined by Phil Bucksbaum in 1990 at the time of its experimental observation. A Nd:YAG laser was used to generate intense pulses of about 80 ps duration at the second harmonic of 532 nm. In a vacuum chamber, the pulses were focused on molecular hydrogen under low pressure (about 10−6 mbar) inducing ionization and dissociation. The kinetic energy of protons was measured in a time-of-flight (TOF) spectrometer. The proton TOF spectra revealed three peaks of kinetic energy spaced by a half of the photon energy. As the neutral H atom was taking the other half of the photon energy, this was an unambiguous confirmation of the bond softening process leading to the 1ω, 2ω and 3ω dissociation limits. Such a process which absorbs more than the minimum number of photons is known as above-threshold dissociation.
A comprehensive review puts the mechanism of bond softening in a broader research context. Anticrossings of diatomic energy curves have many similarities to the conical intersections of energy surfaces in polyatomic molecules.
References
Molecular physics
Quantum chemistry
Photochemistry | Bond softening | [
"Physics",
"Chemistry"
] | 1,320 | [
"Quantum chemistry",
"Molecular physics",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
40,661,214 | https://en.wikipedia.org/wiki/Ekkehard%20Bautz | Ekkehard Karl Friedrich Bautz is a molecular biologist and chair of the Institute of Molecular Genetics at the University of Heidelberg.
Biography
He was born September 24, 1933, in Konstanz, Germany. After studying chemistry at Freiburg University and the University of Zürich, at the age of 26, he emigrated to the United States and later became a U.S. citizen. In 1961, he obtained a doctorate in molecular biology from the University of Wisconsin. He did postdoctoral work at the University of Illinois with a fellowship awarded by the Damon Runyon Memorial Fund for Cancer Research and in 1962 became an assistant professor at the Institute of Microbiology at Rutgers. In 1964, he participated in the Evolving Genes and Proteins Symposium, a landmark event in the history of molecular evolution research. In 1966, he was promoted to associate professor at the same institution. In 1970, he was appointed full professor there, but chose to return to Germany in the same year to become chair of the Institute of Molecular Genetics at the University of Heidelberg.
Bautz's most important discovery is that of sigma factor, the first known transcription factor.
Scientific work
He developed methods for the isolation of messenger RNA and continued research on transcription. Later, he focused on novel selection methods, in particular phage display and the generation of recombinant antibodies.
In 1981, he founded the Center for Molecular Biology (ZMBH) in Heidelberg, where he served as chair of microbiology and acting director from 1983 to 1985.
Professional activities
Bautz was on the editorial board of the Journal of Virology from 1966 to 1970, and of Molecular and General Genetics from 1971 to 2000. He was chairman of the German Genetics Society from 1979 to 1981, and a board member of the German Cancer Research Centre from 1978 to 1983. In 1994, he was appointed a board member of the Zentralkommission für Biologische Sicherheit (ZKBS, engl.: Central Commission for Biological Safety), advising the German government on the biological safety of genetically engineered organisms. He retired from the commission in 2000.
In 1983, he founded Progen GmbH, a biotech startup, with Werner Franke and two other scientists from Heidelberg. He is also a cofounder of Peptide Specialty Laboratories (PSL) and acted as general manager of Multimetrix GmbH from 2002 to 2007.
Awards and honours
Research Career Development Award, U.S. Public Health Service (1966-1970)
American Chemical Society Award in Enzyme Chemistry (Pfizer Award, 1972)
Ferdinand Springer Lectureship of the Federation of European Biochemical Societies (1973)
Heidelberg Academy of Sciences, elected member (1977)
Waksman Medal of Rutgers University (USA) (1999)
University Medal, Heidelberg University (2000)
References
Year of birth missing (living people)
Living people
German molecular biologists
Academic staff of Heidelberg University
University of Zurich alumni
University of Freiburg alumni
Genetic engineering
Transcription factors
Rutgers University faculty
Molecular geneticists | Ekkehard Bautz | [
"Chemistry",
"Engineering",
"Biology"
] | 596 | [
"Biological engineering",
"Molecular geneticists",
"Gene expression",
"Genetic engineering",
"Signal transduction",
"Molecular genetics",
"Induced stem cells",
"Molecular biology",
"Transcription factors"
] |
43,534,533 | https://en.wikipedia.org/wiki/Enrico%20Dalgas | Enrico Mylius Dalgas K.1 D.M. F.M.I (16 July 1828 – 16 April 1894) was a Danish engineer who pioneered the soil melioration of Jutland.
Early life and family
Dalgas was born on 16 July 1828 in Naples, where his father Jean Antoine was the Danish consul. He was baptised Heinrich D. Dalgas. His mother Johanne Tomine née Stibolt was the daughter of a Danish naval officer from a distinguished family. Jean Antoine was descended from Huguenots who settled in Switzerland after the edict of Fontainebleau, whence his grandfather Antoine settled in Denmark. His brother, Carlo Dalgas, was a noted animal artist. When he was 7, his father died and his mother brought him back to Denmark.
Career
Dalgas joined the Danish Army as an officer of the Engineer Corps (today the Ingeniørregimentet). He mostly served as a highway engineer, reaching the rank of Lieutenant Colonel before he moved to the newly established civilian body of the Highway Authority. He also served as a pioneer officer during the First and Second Schleswig Wars, during which two of his brothers died.
Soil melioration and related work
Dalgas applied the knowledge and skills he gained as a military engineer toward the reclamation and afforestation of western Jutland. After Denmark's defeat in the Schleswig wars and the ensuing loss of territory, the removal of hardpan was considered a national priority. Working on roads across Jutland gave him an intimate knowledge of the different soils present in Denmark, and the organisational skill he gained as a military officer and civil servant helped him manage large planting and forestry projects. As he performed assessments of road damage, he got to know locals and learn about their concerns, gaining knowledge as well as many allies who helped him in his endeavours. Dalgas was among the leading forces behind the planting of heaths, for while they had been cultivated before, their widespread planting was an innovation that required much more organization.
In 1867, Dalgas founded the Danish Heath Society (Hedeselskabet), with jurist Georg Morville and prominent Jutish landowner Ferdinand Mourier-Petersen. It later added a branch in northern Germany, the Haide-Cultur-Verein. Dalgas wrote a number of books on land improvement, forestry, and the natural history of forests and heaths, as well as many pamphlets, and articles published in academic and popular periodicals on a wide range of topics, including agricultural science, mycology, and entomology. Dalgas's work was later used by German soil scientist Carl Emeis (among the founders of the Haide-Cultur-Verein) as the basis for his theory on hardpan and heaths, the first work to explain the role of humic acids in soil.
Personal life
In 1855 Dalgas married Marie Magdalene Christiane Købke, daughter of Lieutenant Colonel Niels Christian Købke. They had three sons: one a forester and director of the Danish Heath Society, another the director of the Royal Porcelain Manufactory, and a third a philosopher. They also had at least one daughter, Ellen Margareth, who emigrated to Brazil and was the mother of engineer and naturalist Johan Dalgas Frisch. Dalgas died in Aarhus on 16 April 1894.
Honours
Enrico Dalgas was made a Knight of the Order of the Dannebrog in 1864, received the Cross of Honour of the Order of the Dannebrog in 1867, received the Medal of Merit in Gold in 1875, and was made Commander 1st Class of the Dannebrog in 1887. Dalgas Avenue in Aarhus, Dalgasgade in Herning and Dalgas Boulevard in Copenhagen are named after him, and a number of Danish cities have erected statues of him.
Bibliography
Oversigt over hederne i Jylland, 1866
Geografiske billeder I og II, 1867–68
Vejledning til træplantning, 1871
Den dybe reolpløjning, 1872
En hederejse i Hannover, 1873
Anvisning til anlæg af småplantninger, 1875–83
Hedemoser og Kærjorder, 1876
Om engvandring, 1877
Om plantning i Jylland, 1877
Hederne og deres Kultivering, 1878
References
1828 births
1894 deaths
Soil scientists
Foresters
Kingdom of the Two Sicilies people
Danish naturalists
Danish military engineers
19th-century Danish military officers
Danish conservationists
Danish civil servants
Environmental engineers
Recipients of the Cross of Honour of the Order of the Dannebrog
Commanders First Class of the Order of the Dannebrog
Recipients of the Medal of Merit (Denmark)
Danish agronomists
Burials at Nordre Cemetery
Danish civil engineers
19th-century Danish engineers
19th-century Danish military personnel | Enrico Dalgas | [
"Chemistry",
"Engineering"
] | 989 | [
"Environmental engineers",
"Environmental engineering"
] |
43,539,068 | https://en.wikipedia.org/wiki/Colletotrichum%20somersetense | Colletotrichum somersetense is a morphologically cryptic species described by J.A Crouch in 2014. This species belongs to Colletotrichum caudatum sensu lato and is a pathogen on warm-season grasses (Sorghastrum nutans). Presence of a unique filiform appendage at the apex of the conidium is the distinctive morphological character.
References
Phyllachorales
Fungi described in 2014
Fungus species | Colletotrichum somersetense | [
"Biology"
] | 94 | [
"Fungi",
"Fungus species"
] |
25,206,411 | https://en.wikipedia.org/wiki/Biothermia | Biothermia is the process of heating living tissue using non-ionizing radiation. Sources can include magnetic (inductive), electromagnetic (radiowaves), or conductive (organic materials).
See also
Bioelectrogenesis
Bioheat transfer
Electroreception
References
Biomedical engineering
Heat transfer | Biothermia | [
"Physics",
"Chemistry",
"Engineering",
"Biology"
] | 63 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Biological engineering",
"Biomedical engineering",
"Thermodynamics",
"Medical technology"
] |
25,208,100 | https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20August%2012%2C%202026 | A total solar eclipse will occur at the Moon's descending node of orbit on Wednesday, August 12, 2026, with a magnitude of 1.0386. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A total solar eclipse occurs when the Moon's apparent diameter is larger than the Sun's, blocking all direct sunlight, turning day into darkness. Totality occurs in a narrow path across Earth's surface, with the partial solar eclipse visible over a surrounding region thousands of kilometres wide. Occurring about 2.2 days after perigee (on August 10, 2026, at 12:15 UTC), the Moon's apparent diameter will be larger.
The total eclipse will pass over the Arctic, Greenland, Iceland, the Atlantic Ocean, northern Spain and the extreme northeast of Portugal. The points of greatest duration and greatest eclipse will be just off the western coast of Iceland, at 65°10.3' N, 25°12.3' W, where totality will last 2m 18.21s. A partial eclipse will cover more than 90% of the Sun in Ireland, Great Britain, Portugal, France, Italy, the Balkans and North Africa, and to a lesser extent in most of Europe, West Africa and northern North America.
The total eclipse will pass over northern Spain from the Atlantic coast to the Mediterranean coast as well as the Balearic Islands. The total eclipse will be visible from the cities of A Coruña, Valencia, Zaragoza, Palma and Bilbao, but both Madrid and Barcelona will be just outside the path of totality.
The last total eclipse in continental Europe occurred on March 29, 2006, and the last in the continental part of the European Union occurred on August 11, 1999. This will be the first total solar eclipse visible in Iceland since June 30, 1954, an eclipse that also belonged to Solar Saros series 126 (descending node), and the only one to occur there in the 21st century, as the next one visible over Iceland will not come until 2196. The last total solar eclipse in Spain happened on August 30, 1905, and followed a similar path across the country. The next total eclipse visible in Spain will happen less than a year later, on August 2, 2027.
Circumstances
The eclipse path proceeds from North Siberia throughout the Arctic Region, Iceland, eastern Atlantic to Spain and the Mediterranean.
Solar eclipse and the aurora borealis
In northern Russia, where totality will begin at sunrise, the aurora borealis could also be visible up to the beginning of nautical twilight, depending on the intensity of the auroral activity on that date. If an extremely intense geomagnetic storm takes place at the same time, there might be a chance of seeing the aurora together with the eclipsed Sun. In the east of the Taymyr Peninsula (in the north-east of Krasnoyarsk Krai) the maximum of the total phase will occur on August 13 at 0:00 local time, during the midnight sun.
Solar eclipse below the horizon
Due to the considerable eclipse gamma (more than 0.8), observers where the totally eclipsed Sun is just below the horizon will have the chance to observe the lunar shadow in the high atmosphere, as well as shortened civil twilight and extended nautical twilight. The darkening of the twilight sky could improve the chances of observing the inner Zodiacal light.
Bright planets and stars visible during totality
Far northern Russia will be treated to a dawn eclipse. Mercury and Jupiter will be very low above the rising eclipsed Sun, but Mercury will be showing most of its sunlit side and Jupiter will have its usual brightness. Mars and Saturn will be more advantageously placed in the northeast and southeast respectively. Of the bright asterisms, the Big Dipper will be very high in the north-northwest and the Summer Triangle will be high in the southwest. Aldebaran, Arcturus, Capella and Pollux are other first-magnitude stars which may be seen, although they will be low.
In Iceland the eclipse will be a mid-afternoon event occurring about 4 hours before sunset; it will start in Reykjavik at around 2:04 PM, with the total eclipse occurring at 3:15 PM. Mars may be a challenge to find, because it will be low in the west. Mercury and Jupiter will be well positioned west of the Sun and Venus will be many degrees to its east. Of 1st-magnitude stars from west to east, Capella and Pollux will be at decent elevations west of the Sun; Regulus, Spica (due south), Arcturus, Vega and Deneb are candidates for easy sighting to the Sun's east. Procyon will be about to set, while Altair will be low on the opposite side.
In Spain the eclipse will occur about 1 hour before sunset. Mercury and Jupiter, west of the eclipsed Sun, will therefore be very low below it. Venus will be brilliant well up in the southwest, with Spica to its east. Arcturus will be high in the south, and the Summer Triangle will be well up in the east. Lower in the south, Antares will be minutes away from transit.
Eclipse details
Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the moon's penumbra or umbra attains the specific parameter, and the second table describes various other parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.
Related eclipses
Eclipses in 2026
An annular solar eclipse on February 17.
A total lunar eclipse on March 3.
A total solar eclipse on August 12.
A partial lunar eclipse on August 28.
Metonic
Preceded by: Solar eclipse of October 25, 2022
Followed by: Solar eclipse of June 1, 2030
Tzolkinex
Preceded by: Solar eclipse of July 2, 2019
Followed by: Solar eclipse of September 23, 2033
Half-Saros
Preceded by: Lunar eclipse of August 7, 2017
Followed by: Lunar eclipse of August 19, 2035
Tritos
Preceded by: Solar eclipse of September 13, 2015
Followed by: Solar eclipse of July 13, 2037
Solar Saros 126
Preceded by: Solar eclipse of August 1, 2008
Followed by: Solar eclipse of August 23, 2044
Inex
Preceded by: Solar eclipse of September 2, 1997
Followed by: Solar eclipse of July 24, 2055
Triad
Preceded by: Solar eclipse of October 12, 1939
Followed by: Solar eclipse of June 13, 2113
Solar eclipses of 2026–2029
Saros 126
Metonic series
Tritos series
Inex series
References
External links
http://eclipse.gsfc.nasa.gov/SEplot/SEplot2001/SE2026Aug12T.GIF
https://eclipse.gsfc.nasa.gov/SEpath/SEpath2001/SE2026Aug12Tpath.html
https://www.timeanddate.com/eclipse/solar/2026-august-12
https://nationaleclipse.com/maps/main/2026-total-solar-eclipse-maps.html
2026 in science
"Astronomy"
] | 1,562 | [
"Future astronomical events",
"Future solar eclipses"
] |
25,208,535 | https://en.wikipedia.org/wiki/Instrumentation%20in%20petrochemical%20industries | Instrumentation is used to monitor and control the process plant in the oil, gas and petrochemical industries. Instrumentation ensures that the plant operates within defined parameters to produce materials of consistent quality and within the required specifications. It also ensures that the plant is operated safely and acts to correct out of tolerance operation and to automatically shut down the plant to prevent hazardous conditions from occurring. Instrumentation comprises sensor elements, signal transmitters, controllers, indicators and alarms, actuated valves, logic circuits and operator interfaces.
An outline of key instrumentation is shown on Process Flow Diagrams (PFD) which indicate the principal equipment and the flow of fluids in the plant. Piping and Instrumentation Diagrams (P&ID) provide details of all the equipment (vessels, pumps, etc), piping and instrumentation on the plant in a symbolic and diagrammatic form.
The elements of instrumentation
Instrumentation includes sensing devices to measure process parameters such as pressure, temperature, liquid level, flow, velocity, composition, density, weight; and mechanical and electrical parameters such as vibration, position, power, current and voltage.
The measured value of a parameter is displayed and recorded locally and/or in a control room. If the measured variable exceeds pre-defined limits an alarm warns the operating personnel of a potential problem. Automatic executive action is taken by the instrumentation to close or open shutdown valves and dampers, or to trip (stop) pumps and compressors, to move the plant to a safe condition.
Correct operation of the petrochemical process plant is achieved through the action of control loops. These automatically maintain and control the pressure, temperature, liquid level and flowrate of fluid in vessels and piping. Control loops compare the measured value of a parameter on the plant, eg. pressure, with a pre-determined set point. A difference between the measured variable and the set point generates a signal which modulates the position of a control valve (the final element) to maintain the measured variable at the set point.
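As a rough illustration of the control-loop action described above, the following sketch implements a generic proportional–integral (PI) controller that compares a measured variable with its set point and returns a control-valve position. The class name, tuning values and the direct-acting sign convention are illustrative assumptions, not taken from any particular plant or standard; in a real plant this function is performed by dedicated controllers or a distributed control system rather than application code.

```python
class PIController:
    """Minimal proportional-integral controller producing a valve position in percent open."""

    def __init__(self, set_point, kp, ki, out_min=0.0, out_max=100.0):
        self.set_point = set_point   # desired value of the measured variable
        self.kp = kp                 # proportional gain
        self.ki = ki                 # integral gain (per second)
        self.integral = 0.0
        self.out_min = out_min       # valve fully closed
        self.out_max = out_max       # valve fully open

    def update(self, measured_value, dt):
        # Direct acting: a rising measurement drives the valve opening up.
        error = measured_value - self.set_point
        self.integral += error * dt
        output = self.kp * error + self.ki * self.integral
        return min(self.out_max, max(self.out_min, output))

# Example: a pressure controller (PIC) opening the forward gas valve as the
# vessel pressure rises above a 10 bar set point (values are illustrative only).
pic = PIController(set_point=10.0, kp=20.0, ki=2.0)
valve_percent_open = pic.update(measured_value=10.4, dt=1.0)
```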
Valves are actuated by an electric motor, hydraulic fluid or air. For air-operated control valves, electrical signals from the control system are converted to an air pressure for the valve actuator in a current/pneumatic (I/P) converter. Upon loss of pneumatic or hydraulic pressure, valves may fail to an open (FO) or to a closed (FC) position.
Some instrumentation is self actuating. For example, pressure regulators maintain a constant pre-set pressure, and rupture discs and pressure safety valves open at pre-set pressures.
Instrumentation includes facilities for operating personnel to intervene in the plant either locally or from a control room. Personnel can open or close valves, change set points, start and stop pumps or compressors, and over-ride shutdown functions (in specific controlled circumstances such as during start-up).
Temperature instrumentation
Oil, gas and petrochemical processes are undertaken at specific temperatures.
Measurement of temperature of fluids in the petrochemical industry is undertaken by temperature elements (TE). These can be Thermocouples or Platinum Resistance Temperature Detectors (RTDs). The latter are used for their good temperature response. Local temperature indicators (TI) are located on the inlet and outlet streams of heat exchangers to monitor the performance of the exchanger.
In industrial applications gaseous or liquid fluids may be heated or cooled. This duty is undertaken in a heat exchanger, whereby the fluid is heated or cooled by heat transfer with a second fluid such as water, glycol, hot oil or another process fluid (the heating or cooling medium). Temperature control is used to maintain the desired temperature of the first fluid. A temperature sensor transmitter (TT) is located in the first fluid at its outlet from the heat exchanger. This measured temperature is fed to the temperature controller (TIC) where it is compared to the desired set point temperature. The output of the controller, which is related to the difference between the measured variable and the set point, is fed to a control valve (TCV) in the second fluid to adjust the flow of the heating or cooling medium. In the case of a fluid being cooled, if the temperature of the fluid rises the temperature controller acts to open the TCV increasing the flow of the cooling medium which increases the heat transfer and reduces the temperature of the first fluid. Conversely if the temperature falls the controller acts to close the TCV which reduces the heat transfer increasing the temperature of the first fluid. In the case of heating medium with the falling temperature of the first fluid the controller would act to open the TCV to increase the flow of heating medium thereby raising the temperature of the first fluid. The controller (TIC) may also generate high (TAH) and low temperature (TAL) alarms to warn operating personnel of a potential problem.
Fin fan coolers use air to cool gases and liquids. The temperature of fluid is controlled (TIC) by opening or closing dampers on the cooler or adjusting the speed of the fan or the pitch angle of the fan blades thereby increasing or decreasing the flow of air.
Temperature monitoring and control instrumentation is used in fired heaters and furnaces to adjust the fuel flow valve (FCV) to maintain a desired thermal output. Waste heat recovery units (WHRU) are used to extract heat from the flow of hot exhaust gases from a gas turbine to heat a fluid (heating medium). Instrumentation includes controllers to maintain a desired temperature of the heating medium by closing or opening dampers in the exhaust gas flow.
Low temperature alarms (TSL) are used where cold fluids could be routed to pipework which is not suitable for cold service. Instrumentation may include an initial alarm (TAL) and then a shutdown action (TSLL) to close a shutdown valve (XV).
Temperature sensors (TE) are used to indicate that plant flares have been unintentionally extinguished (BAL), perhaps due to insufficient flowrate of gases to maintain a flame.
Pressure instrumentation
Oil, gas and petrochemical processes are undertaken at specific operating pressures.
Pressure is measured by pressure sensors (PE) which send pressure (PT) signals to pressure controllers (PIC). Pressure vessels and tanks are fitted with local pressure indicators (PI).
In the petrochemical industry pressure is controlled by maintaining a constant pressure in the upper gas space of a vessel. A pressure controller (PIC) adjusts the setting on a pressure control valve (PCV) that feeds gas forward to the next stage of the process. A rising pressure in the vessel results in the PCV opening to feed more gas forward. If the pressure continues to rise some controllers then act to open a second PCV that feeds excess gas to the flare system. The pressure transmitter is configured to provide warning alarms (PAL and PAH) if the pressure exceeds set high and low limits. If these limits are exceeded (PALL and PAHH) an automatic shutdown of the system is initiated which includes closure of the inlet valves of the vessel. The pressure sensor (PT) that initiates a shutdown is a separate instrument loop from the PT associated with the pressure control loop to mitigate common mode failures and to ensure greater reliability of the shutdown function.
The operation of hydrocyclones is controlled by pressure instrumentation that maintains fixed differential pressures between the inlet and the oil and water outlets.
Turbo-expanders are controlled by maintaining the inlet pressure (PIC) at a constant value by controlling the angle of the expander inlet vanes. A split range pressure controller may also modulate a Joule-Thomson valve across the turbo-expander.
Pressure in blanketed tanks is maintained by self-actuating pressure control valves (PCVs). As liquid is withdrawn from the tank the pressure in the gas space falls, and the blanket gas supply valve opens to maintain the pressure. As the tank fills with liquid the pressure rises and a vent gas valve opens to vent gas to atmosphere or a vent system.
Rupture (bursting) discs (PSE) and pressure relief or pressure safety valves (PSV) are important pressure control devices. Both are self-actuating and are designed to open at a preset pressure to provide an essential safety function on the petrochemical plant.
Flow instrumentation
The throughput of a petrochemical plant is measured and controlled by flow instrumentation.
Flow measuring devices (FE) include vortex, positive displacement (PD), differential pressure (DP), Coriolis, ultrasonic, and rotameter types.
The flow through compressors, see schematic, is controlled by measuring the flow (FT) through the machine at the suction and controlling the speed (SC) of the prime mover (electric motor or gas turbine) that is driving the compressor. Anti-surge control ensures a minimum flow of fluid through the compressor. The flow (FT) at the discharge and measurements of the suction and discharge pressures (PT) and temperatures (TT) of the fluid flowing through the compressor are measured. The anti-surge controller (FIC) modulates a control valve (FCV) which recycles cooled gas from downstream of the compressor after-cooler back to the suction of the compressor. Low flow alarms (FAL) provide a warning indication to operating personnel.
Large process pumps are provided with minimum flow protection. This comprises measurement of flow (FE) at the pump discharge, this measurement is an input to a flow controller (FIC) whose set point is the minimum flow required through the pump(see diagram). As the flow reduces to the minimum flow value the controller acts to open a flow control valve (FCV) to recycle fluid from the discharge back to the suction of the pump.
Flow metering (FIQ) is required where custody transfer of fluids takes place, such as an outgoing pipeline or at a tanker loading station. Accurate measurement of the flow is essential and parameters such as liquid density are measured.
Flare and vent systems are purged to prevent air ingress and the formation of potentially explosive mixtures. The flowrate of purge gas is set by rotameter (FIC) or fixed orifice plate (FO). A low flow alarm (FAL) warns operating personnel that the purge flow has reduced significantly.
Pipelines are monitored by measuring the flowrate of fluid at each end, a discrepancy (FDA) may indicate a leak in the pipeline.
Level instrumentation
The level measurement of liquids in pressure vessels and tanks in the petrochemical industry is undertaken by differential pressure level meters, radar, magnetostrictive, nucleonic, magnetic float and pneumatic bubbler instruments.
Level instrumentation determines the height of liquids by measuring the position of a gas/liquid or liquid/liquid interface within the vessel or tank. Such interfaces include oil/gas, oil/water, condensate/water, glycol/condensate, etc. Local indication (LI) includes sight glasses which show the liquid level directly through a vertical glass tube attached to the vessel/tank.
Phase interfaces are maintained at a constant level by level transmitters (LT) transmitting a signal to a level controller (LIC) which compares the measured value with the desired set point. The difference is sent as a signal to a level control valve (LCV) on the liquid outlet from the vessel. As the level rises the controller acts to open the valve to draw off liquid to reduce the level. Similarly as the levels fall the controller acts to close the LCV to reduce outflow of fluid.
Some vessels store liquid until it is pumped out. The controller (LIC) acts to start and stop the pump within a specified band. For example, it may start the pump when the level rises to 0.6 m and stop the pump when the level falls to 0.4 m.
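The start/stop band described above is a simple on/off (hysteresis) control. The sketch below mirrors the 0.4 m / 0.6 m example from the text; the function name and the scan-based structure are illustrative assumptions rather than a representation of any specific control system.

```python
def pump_should_run(level_m, currently_running, start_level=0.6, stop_level=0.4):
    """On/off level control with a deadband so the pump does not cycle rapidly."""
    if level_m >= start_level:
        return True           # level has risen to the start point: run the pump
    if level_m <= stop_level:
        return False          # level has fallen to the stop point: stop the pump
    return currently_running  # inside the band: keep the current state

# Example scan sequence (levels in metres, illustrative only).
running = False
for level in (0.35, 0.50, 0.62, 0.55, 0.41, 0.38):
    running = pump_should_run(level, running)
```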
High and low level alarms (LAH and LAL) warn operating personnel that levels are outside predefined limits. Further deviation (LAHH and LALL) initiates a shutdown, either closing emergency shutdown valves (ESDV) on the inlet to the vessel or on the liquid outlet lines. As with high and low pressure instrumentation, the shutdown function comprises an independent measurement loop to prevent a common mode failure. Loss of liquid level in the vessel may lead to gas blowby, where high pressure gas flows to the downstream vessel through the liquid outlet line; the structural integrity of the downstream vessel can be compromised. In addition, high liquid level in the vessel may lead to carryover of liquid into the gas outlet, which may damage downstream equipment such as gas compressors.
High liquid level in a flare drum can lead to undesirable carryover of liquid to the flare. A high-high liquid level (LSHH) in the flare drum initiates a plant shutdown.
One of the problems with a significant number of technologies is that they are installed through a nozzle and are exposed to products. This can create several problems, especially when retrofitting new equipment to vessels that have already been stress relieved, as it may not be possible to fit the instrument at the location required. Also, as the measuring element is exposed to the contents within the vessel, it may either attack or coat the instrument causing it to fail in service. One of the most reliable methods for measuring level is using a nuclear gauge, as it is installed outside the vessel and doesn't normally require a nozzle for bulk level measurement. The measuring element is installed outside the process and can be maintained in normal operation without taking a shutdown. Shutdown is only required for an accurate calibration.
Analyser instrumentation
A wide range of analysis instruments are used in the oil, gas and petrochemical industries.
Chromatography – to measure the quality of product or reactants
Density (oil) – for custody metering of liquids
Dewpoint (water dewpoint and hydrocarbon dewpoint) to check the efficiency of dehydration or dewpoint control plant
Electrical conductivity – to measure the effectiveness of potable water reverse osmosis plant
Oil-in-water – prior to discharge of water into the environment
pH of reactants and products
Sulphur content – to check the efficiency of gas sweetening plant
Most instruments function continuously and provide a log of data and trends. Some analyser instruments are configured to alarm (AAH) if a measurement reaches a critical level.
Other instrumentation
Major pumps and compressors are provided with vibration sensors (VT) to give operating personnel a warning (VA) of potential mechanical problems with the machine.
Rupture discs (PSE) and pressure safety valves (PSV) are self-actuated and provide no immediate indication that they have ruptured or lifted. Instrumentation such as pressure alarms (PXA) or movement alarms (PZA) may be fitted to indicate that they have operated.
Corrosion coupons and corrosion probes provide a local indication of corrosion rates of fluids flowing in piping.
Pipeline pig launchers and receivers are provided with a pig signaller (XA) to indicate that a pig has been launched or has arrived.
Packaged items of equipment (compressors, diesel engines, electricity generators, etc) are fitted with local vendor supplied instrumentation. When equipment malfunctions a multivariable signal (UA) is sent to the control room.
The fire and gas detection system comprises local sensors to detect the presence of gas, smoke or fire. These initiate alarms in the control room. Simultaneous detection of multiple sensors initiates action to start firewater pumps and close fire dampers in enclosed spaces.
The petrochemical plant may have several levels of shutdown. A unit shutdown (USD) entails shutdown of one limited unit with the rest of the plant remaining in operation. A production shutdown (PSD) entails shutdown of the entire process plant. An emergency shutdown (ESD) entails complete shutdown of the plant.
Older plant may have local control loops which operate pneumatic (3–15 psi) final element actuators. Sensors may also transmit electrical signals (4–20 mA). Conversion between pneumatic and electrical signals is undertaken by P/I and I/P converters. Control of modern plant is based on distributed control systems using fieldbus digital protocols.
See also
Control engineering
Instrument and control engineering
Functional safety
Safety integrity level
Oil refinery
Piping and instrumentation diagram
Process flow diagram
Process manufacturing
References
Petroleum products
Applied and interdisciplinary physics
Process engineering
Measuring instruments
Control engineering
Industrial automation | Instrumentation in petrochemical industries | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 3,340 | [
"Process engineering",
"Applied and interdisciplinary physics",
"Petroleum products",
"Measuring instruments",
"Industrial engineering",
"Automation",
"Petroleum",
"Control engineering",
"Mechanical engineering by discipline",
"Industrial automation"
] |
25,209,688 | https://en.wikipedia.org/wiki/Genetically%20modified%20mammal | Genetically modified mammals are mammals that have been genetically engineered. They are an important category of genetically modified organisms. The majority of research involving genetically modified mammals involves mice with attempts to produce knockout animals in other mammalian species limited by the inability to derive and stably culture embryonic stem cells.
Usage
The majority of genetically modified mammals are used in research to investigate changes in phenotype when specific genes are altered. This can be used to discover the function of an unknown gene, any genetic interactions that occur, or where the gene is expressed. Genetic modification can also produce mammals that are susceptible to certain compounds or stresses for testing in biomedical research. Some genetically modified mammals are used as models of human diseases, so that potential treatments and cures can first be tested on them. Other mammals have been engineered with the aim of increasing their usefulness to medicine and industry. These possibilities range from pigs expressing human antigens, aimed at increasing the success of xenotransplantation, to lactating mammals expressing useful proteins in their milk.
Mice
Genetically modified mice are often used to study cellular and tissue-specific responses to disease (cf knockout mouse). This is possible since mice can be created with the same mutations that occur in human genetic disorders, the production of the human disease in these mice then allows treatments to be tested.
The oncomouse is a type of genetically modified laboratory mouse, developed by Philip Leder and Timothy A. Stewart of Harvard University, that carries a specific gene called an activated oncogene.
Metabolic supermice are the creation of a team of American scientists led by Richard Hanson, professor of biochemistry at Case Western Reserve University at Cleveland, Ohio. The aim of the research was to gain a greater understanding of the PEPCK-C enzyme, which is present mainly in the liver and kidneys.
Rats
A knockout rat is a rat with a single gene disruption used for academic and pharmaceutical research.
Goats
BioSteel is a trademark name for a high-strength based fiber material made of the recombinant spider silk-like protein extracted from the milk of transgenic goats, made by Nexia Biotechnologies. Prior to its bankruptcy, the company successfully generated distinct lines of goats that produced recombinant versions of either the MaSpI or MaSpII dragline silk proteins, respectively, in their milk.
Pigs
The enviropig is the trademark for a genetically modified line of Yorkshire pigs with the capability to digest plant phosphorus more efficiently than ordinary unmodified pigs that was developed at the University of Guelph. Enviropigs produce the enzyme phytase in the salivary glands that is secreted in the saliva.
In 2006 scientists from National Taiwan University's Department of Animal Science and Technology managed to breed three green-glowing pigs using green fluorescent protein. Fluorescent pigs can be used to study human organ transplants, regeneration of ocular photoreceptor cells, regeneration of neuronal cells in the brain, regenerative medicine via stem cells, tissue engineering, and other areas of disease research.
In 2015, researchers at the Beijing Genomics Institute used transcription activator-like effector nucleases to create a miniature version of the Bama breed of pigs, and offered them for sale to consumers.
In 2017 scientists at the Roslin Institute of the University of Edinburgh, in collaboration with Genus, reported they had bred pigs with a modified CD163 gene. These pigs were completely resistant to Porcine Reproductive and Respiratory Syndrome, a disease that causes major losses in the world-wide pig industry.
Cattle
In 1991, Herman the Bull was the first genetically modified or transgenic bovine in the world. The announcement of Herman's creation generated considerable controversy.
In 2016 Jayne Raper and her team announced the first trypanotolerant transgenic cow in the world. This team, spanning the International Livestock Research Institute, Scotland's Rural College, the Roslin Institute's Centre for Tropical Livestock Genetics and Health, and the City University of New York, announced that a Kenyan Boran bull had been born and had already successfully sired two offspring. Tumaini – named for the Swahili word for "hope" – had been given a trypanolytic factor from a baboon via CRISPR/Cas9.
Dogs
Ruppy (short for Ruby Puppy), a cloned beagle created in 2009, was the world's first genetically modified dog. Ruppy and four other beagles produced a fluorescent protein that glowed red upon excitation with ultraviolet light. It was hoped that this procedure could be used to investigate the effect of the hormone estrogen on fertility.
A team in China reported in 2015 that they had genetically engineered beagles to have twice the normal muscle mass, inserting a natural myostatin gene mutation taken from whippets.
Primates
In 2009 scientists in Japan announced that they had successfully transferred a gene into a primate species (marmosets) and produced a stable line of breeding transgenic primates for the first time. It was hoped that this would aid research into human diseases that cannot be studied in mice, for example Huntington's disease, strokes, Alzheimer's disease and schizophrenia.
Cats
In 2011 a Japanese-American Team created genetically modified green-fluorescent cats in order to study HIV/AIDS and other diseases as Feline immunodeficiency virus (FIV) is related to HIV.
References
Genetically modified organisms | Genetically modified mammal | [
"Engineering",
"Biology"
] | 1,088 | [
"Genetic engineering",
"Genetically modified organisms"
] |
25,211,094 | https://en.wikipedia.org/wiki/Knockout%20moss | A knockout moss is a kind of genetically modified moss. One or more of the moss's specific genes are deleted or inactivated ("knocked out"), for example by gene targeting or other methods. After the deletion of a gene, the knockout moss has lost the trait encoded by this gene. Thus, the function of this gene can be inferred. This scientific approach is called reverse genetics because the scientist wants to understand the function of a specific gene. In classical genetics, the scientist starts with a phenotype of interest and searches for the gene that causes this phenotype. Knockout mosses are relevant for basic research in biology as well as in biotechnology.
Scientific background
The targeted deletion or alteration of genes relies on the integration of a DNA strand at a specific and predictable position into the genome of the host cell. This DNA strand must be engineered in such a way that both ends are identical to this specific gene locus. This is a prerequisite for being efficiently integrated via homologous recombination (HR). This is similar to the process used for creating knockout mice.
So far, this method of gene targeting in land plants has been carried out in the mosses Physcomitrella patens and Ceratodon purpureus, since in these non-seed plant species the efficiency of HR is several orders of magnitude higher than in seed plants.
Knockout mosses are stored at and distributed by a specialized biobank, the International Moss Stock Center.
Method
For altering moss genes in a targeted way, the DNA-construct needs to be incubated together with moss protoplasts and with polyethylene glycol (PEG). Because mosses are haploid organisms, the regenerating moss filaments (protonemata) can be directly assayed for gene targeting within six weeks when utilizing PCR methods.
Examples
Chloroplast division
The first scientific publication in which knockout moss was used to identify the function of a hitherto-unknown gene appeared in 1998, and was authored by Ralf Reski and coworkers. They deleted the ftsZ-gene and thus functionally identified the first gene pivotal for the division of an organelle in any eukaryote.
Protein modifications
Physcomitrella plants were engineered with multiple knockouts to prevent the plant-specific glycosylation of proteins, an important post-translational modification. These knockout mosses are used to produce complex biopharmaceuticals in the field of molecular farming.
Mutant collection
In cooperation with the chemical company BASF, Ralf Reski and coworkers established a collection of knockout mosses to use for gene identification.
References
Molecular biology
Genetic engineering
Biological engineering
Genetically modified organisms
Molecular biology techniques
Plant genetics
Molecular genetics
Biotechnology | Knockout moss | [
"Chemistry",
"Engineering",
"Biology"
] | 570 | [
"Biological engineering",
"Genetically modified organisms",
"Plants",
"Plant genetics",
"Genetic engineering",
"Biotechnology",
"Molecular genetics",
"Molecular biology techniques",
"nan",
"Molecular biology",
"Biochemistry"
] |
25,211,625 | https://en.wikipedia.org/wiki/Electric%20dipole%20transition | An electric dipole transition is the dominant effect of an interaction of an electron in an atom with the electromagnetic field.
Following reference, consider an electron in an atom with quantum Hamiltonian , interacting with a plane electromagnetic wave
Write the Hamiltonian of the electron in this electromagnetic field as
Treating this system by means of time-dependent perturbation theory, one finds that the most likely transitions of the electron from one state to the other occur due to the summand of defined as
where and are the charge and mass of a bare electron. Electric dipole transitions are the transitions between energy levels in the system with the Hamiltonian .
Between certain electron states the electric dipole transition rate may be zero due to one or more selection rules, particularly the angular momentum selection rule. In such a case, the transition is termed electric dipole forbidden, and the transitions between such levels must be approximated by higher-order transitions.
The next order summand in is defined as
and describes magnetic dipole transitions.
Even smaller contributions to transition rates are given by higher electric and magnetic multipole transitions.
Semi-classical approach
One way of modelling and understanding the effect of light (mainly its electric field) on an atom is to look at a simpler model consisting of three energy levels. In this model, we have simplified our atom to a transition between a state of angular momentum 0 and a state of angular momentum 1. This could be, for example, the transition in hydrogen between the 1s (ground) state and the 2p state.
In order to understand the effect of the electric field on this simplified atom, we choose the electric field linearly polarized with its polarization axis parallel to the axis of the transition; we call this axis the transition axis. This assumption involves no real loss of generality: if we were to choose another axis, we could find another state, a linear combination of the previous states, which would be parallel to the electric field, bringing us back to the assumption of a linearly polarized electric field parallel to the transition axis.
With this in mind, we can limit ourselves to just the transition from to . We are going to use an electric field which can be written as where is the transition axis, is the angular frequency of the light incoming into the atom (think of it as a laser being shone into the atom), is the light phase which can depend on the position, and is the amplitude of the laser light.
Now, the main question we want to answer is: what is the average force felt by the atom under this kind of light? The quantity of interest represents the average force felt by the atom; the brackets represent a quantum average over the internal states of the atom, and the bar represents a classical temporal average. The potential entering this average is the one due to the electric dipole of the atom.
This potential can be further be written as where is the dipole transition operator.
The reason we use a two-state model is that it allows us to write the dipole transition operator explicitly as and thus we get the
.
Then
.
Now, the semi-classical approach means that we write the dipole moment as the polarizability of the atom times the electric field:
And as such and thus , and as such we have .
Before progressing with the math and finding a more explicit expression for the proportionality constant, there is an important aspect we need to discuss: we have found that the light-induced potential felt by an atom follows the square of the time-averaged electric field. This is important in cold-atom experimental physics, where physicists use this fact to determine what potential is applied to the atoms from the known intensity of the laser light, since the intensity of light is itself proportional to the square of the time-averaged electric field.
Now, let's look at how to get the expression of the polarizability .
We will use the density matrix formalism, and the optical Bloch equations for this.
The main idea here is that the non-diagonal density matrix elements can be written as and ; and
Here is where the optical Bloch equations will come in handy, they give us an equation to understand the dynamics of the density matrix.
Indeed, we have:
which accounts for the reversible normal quantum evolution of the density matrix.
and another term that describes the spontaneous emissions of the atom:
Where is our semi-classical hamiltonian. It is written as . And . represents the linewidth of the transition, and thus you can see as the half-life of the given transition.
Let us introduce the Rabi frequency :
Then we can write the optical Bloch equations for and :
For this part we take the equation of the evolution of the and take the matrix elements. We get:
We can get the equation for by taking its complex conjugate.
We can then repeat the process for all 4 matrix elements, but in our study we will apply a small field approximation, so that the electric field is small enough that we can uncouple the 4 equations. This approximation is written mathematically using the Rabi frequency as:
, with .
Then we can neglect , and set . Indeed, the idea behind this is that if the atom doesn't see any light, then to a first degree approximation in , the atom will be in the ground state and not in the excited state forcing us to set , .
We can then rewrite the evolution equation to:
This is an ordinary first-order differential equation with an inhomogeneous term in cosines. It can easily be solved by using Euler's formula for the cosine.
We get the following solution:
Furthermore, if we say that the detuning is much bigger than , then of course the sum of both is also much bigger than and we can rewrite the previous equation as:
and
And coming back to our average dipole moment:
with
Then it is clear that , and the polarizability becomes .
Finally, we can write the potential felt by the atom in due to the electric dipole interaction as:
The essential point worth discussing here is, as said previously, that the light intensity of the laser produces a proportional local potential which the atoms "feel" in that region. Furthermore, we can now tell the sign of this potential: it follows the sign of the detuning. This implies that the potential is attractive if we have a red-detuned laser, and repulsive if we have a blue-detuned laser.
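As a hedged numerical illustration of this sign argument, the sketch below uses the standard far-detuned two-level result U = ħΩ²/(4δ) with the convention δ = ω_laser − ω_atom; this textbook form and the numbers are assumptions chosen for illustration and need not match the conventions used above.

```python
import numpy as np

HBAR = 1.054571817e-34  # J*s

def dipole_potential(rabi_frequency_rad_s, detuning_rad_s):
    """Far-detuned, rotating-wave light shift of a two-level atom (in joules)."""
    return HBAR * rabi_frequency_rad_s**2 / (4.0 * detuning_rad_s)

omega_rabi = 2 * np.pi * 10e6   # 10 MHz Rabi frequency (illustrative)
delta      = 2 * np.pi * 1e9    # 1 GHz detuning magnitude (illustrative)

u_red  = dipole_potential(omega_rabi, -delta)  # red detuning:  U < 0, attractive
u_blue = dipole_potential(omega_rabi, +delta)  # blue detuning: U > 0, repulsive
```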
See also
Dipole
Electric dipole moment
Electromagnetism
Magnetic dipole transition
References
Quantum mechanics | Electric dipole transition | [
"Physics"
] | 1,384 | [
"Theoretical physics",
"Quantum mechanics"
] |
25,212,268 | https://en.wikipedia.org/wiki/Additive%20K-theory | In mathematics, additive K-theory means some version of algebraic K-theory in which, according to Spencer Bloch, the general linear group GL has everywhere been replaced by its Lie algebra gl. It is not, therefore, one theory but a way of creating additive or infinitesimal analogues of multiplicative theories.
Formulation
Following Boris Feigin and Boris Tsygan, let A be an algebra over a field of characteristic zero and let gl(A) be the algebra of infinite matrices over A with only finitely many nonzero entries. Then the Lie algebra homology
H•(gl(A))
has a natural structure of a Hopf algebra. The space of its primitive elements of degree i is denoted by K_i^+(A) and called the i-th additive K-functor of A.
The additive K-functors are related to cyclic homology groups by the isomorphism
K_i^+(A) ≅ HC_{i−1}(A).
References
K-theory | Additive K-theory | [
"Mathematics"
] | 170 | [
"Topology stubs",
"Topology"
] |
28,461,249 | https://en.wikipedia.org/wiki/Scuderi%20cycle | A Scuderi cycle is a thermodynamic cycle that is constructed out of the following series of thermodynamic processes:
A-B and C-D (TOP and BOTTOM of the loop): a pair of quasi-parallel adiabatic processes
D-A (LEFT side of the loop): a positively sloped, increasing pressure, increasing volume process
B-C (RIGHT side of the loop): an isochoric process
The adiabatic processes are impermeable to heat: heat flows rapidly into the loop through the left expanding process, resulting in increasing pressure while volume is increasing; some of it flows back out through the right depressurizing process; the remaining heat does the work.
See also
Scuderi engine
References
Thermodynamics
Thermodynamic cycles | Scuderi cycle | [
"Physics",
"Chemistry",
"Mathematics"
] | 169 | [
"Thermodynamics",
"Dynamical systems"
] |
28,462,588 | https://en.wikipedia.org/wiki/Keyboard%20controller%20%28computing%29 | In computing, a keyboard controller is a device that interfaces a keyboard to a computer. Its main function is to inform the computer when a key is pressed or released. When data from the keyboard arrives, the controller raises an interrupt (a keyboard interrupt) to allow the CPU to handle the input.
If a keyboard is a separate peripheral system unit (such as in most modern desktop computers), the keyboard controller is not directly attached to the keys, but receives scancodes from a microcontroller embedded in the keyboard via some kind of serial interface. In this case, the controller usually also controls the keyboard's LEDs by sending data back to keyboard through the wire.
The IBM PC AT used an Intel 8042 chip to interface to the keyboard. This chip had two additional functions: it controlled access to the Intel 80286 CPU's A20 line as a workaround for a chip bug, and it could initiate a software CPU reset, which allowed the CPU to return from protected mode to real mode, since the 286 could not leave protected mode without being reset. The reset path mattered because the BIOS and the services provided by real-mode operating systems such as MS-DOS could only be called by programs running in real mode. Plenty of software came to expect these behaviors, so keyboard controllers continued to control the A20 line and to perform software CPU resets even after the need for them was obviated by the Intel 80386's ability to switch from protected mode to real mode without a CPU reset. The keyboard controller also handles PS/2 mouse input if a PS/2 mouse port is present. Today the keyboard controller is either a unit inside a Super I/O device or is absent altogether, with its keyboard and mouse functions handled by a USB controller; its role in controlling the A20 line was integrated first into the chipset's northbridge and later into the CPU's built-in integrated memory controller.
See also
Keyboard buffer
AT keyboard
KVM extender
Embedded controller: The Intel 8042 and other keyboard controllers used in computers based on the IBM PC/AT design can be considered embedded controllers.
References
External links
KBD43W13 Keyboard and PS/2 Mouse Controller
Computer keyboards
Microcontrollers
Motherboard | Keyboard controller (computing) | [
"Technology"
] | 482 | [
"Computing stubs",
"Computer hardware stubs"
] |
33,861,265 | https://en.wikipedia.org/wiki/Carboxypeptidase%20A6 | Carboxypeptidase A6 (CPA6) is a metallocarboxypeptidase enzyme that in humans is encoded by the CPA6 gene. It is highly expressed in the adult mouse olfactory bulb and is broadly expressed in the embryonic brain and other tissues.
The protein encoded by this gene belongs to the family of carboxypeptidases, which catalyze the release of C-terminal amino acids and have functions ranging from digestion of food to selective biosynthesis of neuroendocrine peptides. Polymorphic variants and a reciprocal translocation t(6;8)(q26;q13) involving this gene have been associated with Duane retraction syndrome.
CPA6 processes several neuropeptides, including [Met]- and [Leu]-enkephalin, angiotensin I, and neurotensin in vitro. Whereas CPA6 is capable of converting the enkephalins and neurotensin into inactive forms, it can convert the inactive angiotensin I into the active angiotensin II. CPA6 may have additional roles in processing peptides and proteins in vivo, but the nature of these substrates and the effects of these cleavages are currently unknown.
See also
Carboxypeptidase A inhibitor
Carboxypeptidase
References
Further reading
External links
Proteins
Enzymes
Metabolism | Carboxypeptidase A6 | [
"Chemistry",
"Biology"
] | 292 | [
"Biomolecules by chemical classification",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Proteins",
"Metabolism"
] |
33,864,050 | https://en.wikipedia.org/wiki/Peter%20Marler | Peter Robert Marler ForMemRS (February 24, 1928 – July 5, 2014) was a British-born American ethologist and zoosemiotician known for his research on animal sign communication and the science of bird song. A 1964 Guggenheim Fellow, he was emeritus professor of neurobiology, physiology and ethology at the University of California, Davis.
Education
Born in Slough, England, Marler graduated from University College London with a BSc in 1948, and a Ph.D. in botany in 1952.
In 1954, he graduated from the University of Cambridge with a second Ph.D. in zoology.
Career
From 1954 to 1956, he worked as a research assistant to William Homan Thorpe and Robert Hinde at Jesus College, Cambridge. In 1957, he became a professor at the University of California, Berkeley. In 1966, he became a professor at Rockefeller University, in 1969 became director of the Institute for Research in Animal Behavior, a collaboration between the New York Zoological Society (now the Wildlife Conservation Society) and Rockefeller University and in 1972 became director of the Field Research Center for Ethology and Ecology.
In 1989, Marler became a professor at the University of California, Davis. He retired in 1994, but took over the management of the local Center for Animal Behavior from 1996 to 2000. He died on July 5, 2014, of pneumonia while his family was evacuated from his Winters home because of the nearby Monticello wildfire.
Research
Marler was an internationally recognized researcher in the field of bird song. Through his work with songbirds, he helped gain fundamental insights into the acquisition of song. He also studied the development of communication skills in several primate species: chimpanzees and gorillas, along with Jane Goodall and Hugo van Lawick, and the southern green monkey, in collaboration with Tom Struhsaker, Dorothy Cheney and Robert Seyfarth. Peter Marler developed the first properly semiotic approach to animal communication. His work greatly informed our understanding of memory, learning, and the importance of auditory and social experience. His work group included many well-known ornithologist and behavioral scientists, including Masakazu Konishi, Fernando Nottebohm, Susan Peters, Don Kroodsma, Christopher Clark, Bill Searcy, Steve Nowicki, Ken Yasukawa, and John Wingfield.
Awards and honours
Marler was elected to the American Academy of Arts and Sciences in 1970, the United States National Academy of Sciences in 1971, and the American Philosophical Society in 1983. He was elected a Foreign Member of the Royal Society (ForMemRS) in 2008.
Selected publications
Palleroni, A., M. Hauser & P. Marler (2005). "Do responses of galliform birds vary adaptively with predator size?" Animal Cognition. (8): 200–210.
Partan, S.R.; P. Marler (2005) "Issues in the classification of multimodal communication signals". American Naturalist. (166): 231–245.
Palleroni, A., C.T. Miller, M. Hauser, & P. Marler (2005). "Prey plumage adaptation against falcon attack". Nature. (434): 973–974.
Nelson, D.A. & P. Marler (2005). "Do bird nestmates learn the same songs?" Animal Behaviour. (69): 1007–1010.
Marler, P. (2005). "Ethology and the origins of behavioral endocrinology". Hormones and Behavior. (47): 493–502.
Marler, P. (2004). "Science and birdsong: The good old days". In: Nature's Music: The Science of Birdsong, P. Marler & H. Slabbekoorn (eds.). Elsevier Academic Press, San Diego, CA, pp. 1–38.
Marler, P. (2000). "Origins of music and speech: insights from animals". In: The Origins of Music, N. Wallin, B. Merker, and S. Brown (eds.). Cambridge: The MIT Press, 31–48.
Marler P. (1999). "How much does a human environment humanize a chimp". American Anthropologist. (101): 432–436.
Marler P. and DF Sherry (1999). "The nature and nurture of developmental plasticity". Proceedings of the 22nd International Ornithological Congress. Durban South Africa: University of Natal Press.
Marler, P. (1978). Affective and symbolic meaning: Some zoosemiotic speculations. In Thomas A. Sebeok (ed.): Sight, Sound and Sense. Bloomington: Indiana University Press, 113–123.
References
External links
Neurotree entry
1928 births
Ethologists
Semioticians
Foreign members of the Royal Society
University of California, Davis faculty
Alumni of University College London
Alumni of the University of Cambridge
Rockefeller University faculty
2014 deaths
People from Slough
People educated at Upton Court Grammar School
Wildlife Conservation Society people
British ornithologists
Members of the American Philosophical Society | Peter Marler | [
"Biology"
] | 1,078 | [
"Ethology",
"Behavior",
"Ethologists"
] |
33,865,207 | https://en.wikipedia.org/wiki/Geoff%20Jameson | Geoffrey Brind Jameson is a structural chemist and biologist at Massey University in Palmerston North, New Zealand. Jameson completed a PhD at the University of Canterbury in 1977. He is the director of the Centre for Structural Biology, and a crystallographer, using X-ray crystallography and NMR spectroscopy to determine the structure of materials.
Jameson was the 2011 recipient of the Marsden Medal from the New Zealand Association of Scientists.
References
University of Canterbury alumni
Georgetown University faculty
Academic staff of Massey University
New Zealand scientists
Crystallographers
Living people
Year of birth missing (living people)
New Zealand biochemists | Geoff Jameson | [
"Chemistry",
"Materials_science"
] | 123 | [
"Biochemistry stubs",
"Crystallography",
"Crystallographers",
"Biochemists",
"Biochemist stubs"
] |
33,865,389 | https://en.wikipedia.org/wiki/Effective%20dose%20%28radiation%29 | Effective dose is a dose quantity in the International Commission on Radiological Protection (ICRP) system of radiological protection.
It is the tissue-weighted sum of the equivalent doses in all specified tissues and organs of the human body and represents the stochastic health risk to the whole body, which is the probability of cancer induction and genetic effects, of low levels of ionizing radiation. It takes into account the type of radiation and the nature of each organ or tissue being irradiated, and enables summation of organ doses due to varying levels and types of radiation, both internal and external, to produce an overall calculated effective dose.
The SI unit for effective dose is the sievert (Sv) which represents a 5.5% chance of developing cancer. The effective dose is not intended as a measure of deterministic health effects, which is the severity of acute tissue damage that is certain to happen, that is measured by the quantity absorbed dose.
The concept of effective dose was developed by Wolfgang Jacobi and published in 1975, and was so convincing that the ICRP incorporated it into their 1977 general recommendations (publication 26) as "effective dose equivalent". The name "effective dose" replaced the name "effective dose equivalent" in 1991. Since 1977 it has been the central quantity for dose limitation in the ICRP international system of radiological protection.
Uses
According to the ICRP, the main uses of effective dose are the prospective dose assessment for planning and optimisation in radiological protection, and demonstration of compliance with dose limits for regulatory purposes. The effective dose is thus a central dose quantity for regulatory purposes.
The ICRP also says that effective dose has made a significant contribution to radiological protection as it has enabled doses to be summed from whole and partial body exposure from external radiation of various types and from intakes of radionuclides.
Usage for external dose
The calculation of effective dose is required for partial or non-uniform irradiation of the human body because equivalent dose does not consider the tissue irradiated, but only the radiation type. Various body tissues react to ionising radiation in different ways, so the ICRP has assigned sensitivity factors to specified tissues and organs so that the effect of partial irradiation can be calculated if the irradiated regions are known. A radiation field irradiating only a portion of the body will carry lower risk than if the same field irradiated the whole body. To take this into account, the effective doses to the component parts of the body which have been irradiated are calculated and summed. This becomes the effective dose for the whole body, dose quantity E. It is a "protection" dose quantity which can be calculated, but cannot be measured in practice.
An effective dose will carry the same effective risk to the whole body regardless of where it was applied, and it will carry the same effective risk as the same amount of equivalent dose applied uniformly to the whole body.
Usage for internal dose
Effective dose can be calculated for committed dose which is the internal dose resulting from inhaling, ingesting, or injecting radioactive materials.
The dose quantity used is:
Committed effective dose, E(τ), is the sum of the products of the committed organ or tissue equivalent doses H_T(τ) and the appropriate tissue weighting factors w_T, where τ is the integration time in years following the intake. The commitment period is taken to be 50 years for adults, and to age 70 years for children.
Calculation of effective dose
Ionizing radiation deposits energy in the matter being irradiated. The quantity used to express this is the absorbed dose, a physical dose quantity that is dependent on the level of incident radiation and the absorption properties of the irradiated object. Absorbed dose is a physical quantity, and is not a satisfactory indicator of biological effect, so to allow consideration of the stochastic radiological risk, the dose quantities equivalent dose and effective dose were devised by the International Commission on Radiation Units and Measurements (ICRU) and the ICRP to calculate the biological effect of an absorbed dose.
To obtain an effective dose, the calculated absorbed organ dose D_T is first corrected for the radiation type using the factor w_R to give a weighted average of the equivalent dose quantity H_T received in irradiated body tissues, and the result is further corrected for the tissues or organs being irradiated using the factor w_T, to produce the effective dose quantity E.
The sum of effective doses to all organs and tissues of the body represents the effective dose for the whole body. If only part of the body is irradiated, then only those regions are used to calculate the effective dose. The tissue weighting factors summate to 1.0, so that if an entire body is radiated with uniformly penetrating external radiation, the effective dose for the entire body is equal to the equivalent dose for the entire body.
Use of tissue weighting factor
The ICRP tissue weighting factors are given in the accompanying table, and the equations used to calculate E from either absorbed dose or equivalent dose are also given.
Some tissues like bone marrow are particularly sensitive to radiation, so they are given a weighting factor that is disproportionately large relative to the fraction of body mass they represent. Other tissues like the hard bone surface are particularly insensitive to radiation and are assigned a disproportionally low weighting factor.
Calculating E from the equivalent dose:

E = Σ_T w_T · H_T

Calculating E from the absorbed dose:

E = Σ_T w_T Σ_R w_R · D_T,R,   with   D_T,R = (1/m_T) ∫_T D_R(x) ρ(x) dV

Where
E is the effective dose to the entire organism
H_T is the equivalent dose absorbed by tissue T
w_T is the tissue weighting factor defined by regulation
w_R is the radiation weighting factor defined by regulation
D_T,R is the mass-averaged absorbed dose in tissue T by radiation type R
D_R(x) is the absorbed dose from radiation type R as a function of location x
ρ(x) is the density as a function of location x
V is volume
m_T is the mass of tissue or organ T
T is the tissue or organ of interest
The ICRP tissue weighting factors are chosen to represent the fraction of health risk, or biological effect, which is attributable to the specific tissue named. These weighting factors have been revised twice, as shown in the chart above.
The United States Nuclear Regulatory Commission still uses the ICRP's 1977 tissue weighting factors in their regulations, despite the ICRP's later revised recommendations.
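As a rough computational illustration of the weighted sum above, the following Python sketch adds up organ contributions. It is a minimal sketch, not part of any ICRP publication: the weighting factors shown are a small illustrative subset of the ICRP Publication 103 values, the radiation weights cover only a few radiation types, and the function and variable names are hypothetical.

```python
# Minimal sketch: effective dose E = sum_T w_T * sum_R w_R * D_{T,R}
# Tissue weights below are an illustrative subset of ICRP 103 values;
# consult the full regulatory table for real assessments.

TISSUE_WEIGHTS = {        # w_T (dimensionless)
    "lung": 0.12,
    "stomach": 0.12,
    "colon": 0.12,
    "red_bone_marrow": 0.12,
    "liver": 0.04,
    "thyroid": 0.04,
    "skin": 0.01,
}

RADIATION_WEIGHTS = {     # w_R (dimensionless)
    "photon": 1.0,
    "electron": 1.0,
    "alpha": 20.0,
}

def effective_dose(absorbed_doses):
    """absorbed_doses: {tissue: {radiation_type: D_T_R in gray}} -> E in sievert."""
    total = 0.0
    for tissue, by_radiation in absorbed_doses.items():
        w_t = TISSUE_WEIGHTS[tissue]
        # Equivalent dose H_T = sum_R w_R * D_{T,R}
        h_t = sum(RADIATION_WEIGHTS[r] * d for r, d in by_radiation.items())
        total += w_t * h_t
    return total

# Example: a partial-body photon exposure of lung and stomach
doses = {"lung": {"photon": 0.002}, "stomach": {"photon": 0.001}}
print(f"Effective dose: {effective_dose(doses):.6f} Sv")  # 0.000360 Sv
```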
By medical imaging type
Health effects
Ionizing radiation is generally harmful and potentially lethal to living things but can have health benefits in radiation therapy for the treatment of cancer and thyrotoxicosis. Its most common impact is the induction of cancer with a latent period of years or decades after exposure. High doses can cause visually dramatic radiation burns, and/or rapid fatality through acute radiation syndrome. Controlled doses are used for medical imaging and radiotherapy.
Regulatory nomenclature
UK regulations
The UK Ionising Radiations Regulations 1999 defines its usage of the term effective dose; "Any reference to an effective dose means the sum of the effective dose to the whole body from external radiation and the committed effective dose from internal radiation."
US effective dose equivalent
The US Nuclear Regulatory Commission has retained in the US regulation system the older term effective dose equivalent to refer to a similar quantity to the ICRP effective dose. The NRC's total effective dose equivalent (TEDE) is the sum of external effective dose with internal committed dose; in other words all sources of dose.
In the US, cumulative equivalent dose due to external whole-body exposure is normally reported to nuclear energy workers in regular dosimetry reports as:
deep-dose equivalent, (DDE) which is properly a whole-body equivalent dose
shallow dose equivalent, (SDE) which is actually the effective dose to the skin
History
The concept of effective dose was introduced in 1975 by Wolfgang Jacobi (1928–2015) in his publication "The concept of an effective dose: a proposal for the combination of organ doses". It was quickly included in 1977 as “effective dose equivalent” into Publication 26 by the ICRP. In 1991, ICRP publication 60 shortened the name to "effective dose." This quantity is sometimes incorrectly referred to as the "dose equivalent" because of the earlier name, and that misnomer in turn causes confusion with equivalent dose. The tissue weighting factors were revised in 1990 and 2007 due to new data.
Future use of Effective Dose
At the ICRP 3rd International Symposium on the System of Radiological Protection in October 2015, ICRP Task Group 79 reported on the "Use of Effective Dose as a Risk-related Radiological Protection Quantity".
This included a proposal to discontinue the use of equivalent dose as a separate protection quantity. This would avoid confusion between equivalent dose, effective dose and dose equivalent, and would allow absorbed dose in Gy to be used as a more appropriate quantity for limiting deterministic effects to the eye lens, skin, hands and feet.
It was also proposed that effective dose could be used as a rough indicator of possible risk from medical examinations. These proposals will need to go through the following stages:
Discussion within ICRP Committees
Revision of report by Task Group
Reconsideration by Committees and Main Commission
Public Consultation
See also
Radioactivity
Collective dose
Total effective dose equivalent
Deep-dose equivalent
Dose area product
Cumulative dose
Committed dose equivalent
Committed effective dose equivalent
References
External links
an account of chronological differences between USA and ICRP dosimetry systems
Radiology
Medical physics
Medical imaging
Physical quantities
Radiobiology
Radiation health effects
Radiation protection
pt:Dose efetiva | Effective dose (radiation) | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Biology"
] | 1,862 | [
"Physical phenomena",
"Radiation health effects",
"Applied and interdisciplinary physics",
"Physical quantities",
"Radiobiology",
"Quantity",
"Medical physics",
"Radiation effects",
"Physical properties",
"Radioactivity"
] |
48,390,643 | https://en.wikipedia.org/wiki/Financial%20security%20system | A financial security system finances unknown future obligations. Such a system involves an arrangement between a provider, who agrees to pay the future obligations, often in return for payments from a person or institution who wish to avoid undesirable economic consequences of uncertain future obligations. Financial security systems include insurance products as well as retirement plans and warranties.
References
Actuarial science | Financial security system | [
"Mathematics"
] | 74 | [
"Applied mathematics",
"Actuarial science"
] |
48,393,600 | https://en.wikipedia.org/wiki/Foturan | Foturan (notation of the manufacturer: FOTURAN) is a photosensitive glass by SCHOTT Corporation developed in 1984. It is a technical glass-ceramic which can be structured without photoresist when it is exposed to shortwave radiation such as ultraviolet light and subsequently etched.
In February 2016, Schott announced the introduction of Foturan II at Photonics West. Foturan II is characterized by higher homogeneity of the photosensitivity which allows finer microstructures.
Composition and Properties
Foturan is a lithium aluminosilicate glass system doped with small amounts of silver oxides and cerium oxides.
Processing
Foturan can be structured via UV exposure, tempering, and etching: crystal nuclei form and grow in Foturan when it is exposed to UV light and subsequently heat treated. The crystallized areas react much faster to hydrofluoric acid than the surrounding vitreous material, resulting in very fine microstructures, tight tolerances and high aspect ratios.
Exposure
If Foturan is exposed to light in the ultraviolet range with a wavelength of 320 nm (optionally through a photomask, using contact lithography or proximity lithography to expose certain patterns), a chemical reaction is started in the exposed areas: the Ce3+ present is converted into Ce4+, releasing an electron.
Tempering
During the nucleation tempering (~500 °C), the silver ion Ag+ is reduced to Ag0 by scavenging the electron released from Ce3+.
This activates the agglomeration of atomic silver into nanometer-scale silver clusters.
During the subsequent crystallization tempering (~560–600 °C), lithium metasilicate (Li2SiO3 glass-ceramic) forms on the silver cluster nuclei in the exposed areas. The unexposed glass, otherwise amorphous, remains unchanged.
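In summary, the exposure and tempering steps described above can be written as the following reaction sequence (an illustrative summary of the steps named in this section, not a balanced mechanism):

Ce3+ + hν (UV, ~320 nm) → Ce4+ + e−  (exposure)
Ag+ + e− → Ag0  (nucleation tempering, ~500 °C)
n Ag0 → (Ag0)n clusters, which seed Li2SiO3 crystallization  (crystallization tempering, ~560–600 °C)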
Etching
After tempering, the crystallized areas can be etched with hydrofluoric acid 20 times faster than the unexposed, still amorphous glass. Thus, structures with an aspect ratio of ca. 10:1 can be created.
Ceramization (Optional)
After etching, a ceramization of the entire substrate after a 2nd UV-exposure and thermal treatment is possible. The crystalline phase in this stage is lithium disilicate Li2Si2O5.
Product characteristics
Small structure size: Structure sizes of ~ 25 μm are possible
High aspect ratio: Etching ratios of > 20:1 make aspect ratios of > 10:1 and wall angles of ~1–2° possible
High optical transmission in the visible and non-visible spectrum: More than 90% transmission (substrate thickness 1 mm) between 350 nm and 2,700 nm
High temperature resistance: Tg > 450 °C
Pore-free: Suitable for biotech / microfluidics application
Low self fluorescence
Hydrolytic resistance (acc. to DIN ISO 719): HGB 4
Acid resistance (acc. to DIN 12116): S 1
Alkali resistance (acc. to DIN ISO 695): A 2
Foturan in the scientific community
Foturan is a widely known material in the materials science community. As of October 30, 2015, Google Scholar showed more than 1,000 results for Foturan in scholarly literature across an array of publishing formats and disciplines.
Many of those deal with topics such as
Micromachining Foturan
3D / laser direct writing in Foturan
Using Foturan for optical waveguides
Using Foturan for volume gratings
Processing Foturan via excimer / femtosecond laser
Applications
Foturan is mainly used for microstructure applications, where small and complex structures have to be created out of a solid and robust base material. Overall there are five main areas for which Foturan is used:
Microfluidics / Biotech (such as lab-on-a-chip or organ-on-a-chip components, micro mixer, micro reactor, printheads, titer plates, chip electrophoresis)
Semiconductor (such as FED spacers, packaging elements or interposers for IC components, CMOS or memory modules)
Sensors (such as flow- or temperature sensors, gyroscopes or accelerometers)
RF / MEMS (such as substrates or packaging elements for antennas, capacitors, filter, duplexers, switches or oscillators)
Telecom (such as optical alignment chips, optical waveguides or optical interconnects)
By thermal diffusion bonding it is possible to bond multiple Foturan layers on top of each other to create complex 3-dimensional microstructures.
References
External links
Glass types
Glass-ceramics
Glass trademarks and brands
Transparent materials
German brands | Foturan | [
"Physics"
] | 1,013 | [
"Physical phenomena",
"Optical phenomena",
"Materials",
"Transparent materials",
"Matter"
] |
48,395,084 | https://en.wikipedia.org/wiki/GLD-2 | GLD-2 (which stands for Germ Line Development 2) is a cytoplasmic poly(A) polymerase (cytoPAPs) which adds successive AMP monomers to the 3’ end of specific RNAs, forming a poly(A) tail, which is a process known as polyadenylation.
For RNA specificity, GLD-2 associates with an RNA-binding protein, typically a GLD-3, to form a heterodimer that acts as a cytoplasmic PAP. This protein has an enzymatic function and belongs to a family (DNA polymerase type-B-like family) which includes several similar enzymes such as GLD-1, GLD-3 and GLD-4.
This family of cytoplasmic PAPs has been described in several different species including Homo sapiens, Caenorhabditis elegans, Xenopus, Mus musculus and Drosophila.
Moreover, as it is a cytoplasmic PAP, it differs from nuclear PAPs in some respects. While nuclear PAPs contain a catalytic domain and an RNA-binding domain, GLD-2 family members have only a catalytic domain.
Localization
GLD-2 is a common and abundant, but still relatively poorly characterized, protein that has been found in each of the five kingdoms. In the animal kingdom, it has been detected in particular in Homo sapiens, Drosophila, Xenopus and Mus musculus. Its presence has also been noted in Arabidopsis thaliana (plant kingdom), Escherichia coli (monera) and Candida albicans (fungi).
In human beings it is mostly expressed in the brain, particularly in the cerebellum, hippocampus and medulla. Other source tissues include fibroblasts, HeLa cells, MCF-7 cells, melanoma cell lines and the thymus. Inside those cells it can be located in the nucleus and the mitochondrion, since its main function is related to polyadenylation and these organelles are the only ones where DNA can be found. GLD-2 is also present in soluble form in the cytosol, although the reason why it is there is still unclear.
In Escherichia coli, this enzymatic protein can be found in the cell membrane and in the cytosol, whereas in Drosophila melanogaster it predominates in the nucleus and cytoplasm of brain cells, and in oocyte, ovary and testis cells. Finally, in Arabidopsis thaliana it is located in the nucleus of flower, root, stem and leaf cells.
Related functions
GLD-2 primarily stabilizes mRNAs that are translationally repressed and strongly promotes bulk polyadenylation. Surprisingly, these functions seem to have little impact on driving efficient translation of target mRNAs, even though GLD-2 is an efficient poly(A) polymerase with robust polyadenylation activity. This activity is stimulated by its interaction with a putative RNA-binding protein, GLD-3. Some studies propose that GLD-3 stimulates GLD-2 by recruiting it to the RNA. If so, then bringing GLD-2 to the RNA by other means should also stimulate its activity.
Molecular function
ATP binding
GLD-2, as a poly(A) polymerase (PAP), acts by incorporating AMP residues derived from ATP at the 3' end of mRNAs in a template-independent manner.
Enzymatic activity: Polynucleotide adenylyltransferase activity
This protein has catalytic activity; in other words, it has the ability to increase the rate of chemical reactions that would otherwise proceed slowly. It is known to catalyse the following reaction, which requires Mg(2+) as a cofactor:
ATP + RNA(n) ⇄ diphosphate + RNA(n+1)
Depending on the surroundings the optimal pH varies from 8 in the cytoplasm to 8.3 in the nucleus.
Biological process
Hematopoietic progenitor cell differentiation
The GLD-2 protein, together with 136 other proteins, is involved in the molecular process of hematopoietic progenitor cell differentiation in the human proteome. This is the process by which a precursor cell type acquires the specialized features of a hematopoietic progenitor cell, a class of cell types that includes myeloid progenitor cells and lymphoid progenitor cells.
mRNA processing by RNA polyadenylation
The polyadenylation activity of GLD-2, as previously mentioned, is stimulated by physical interaction with an RNA-binding protein, GLD-3. To test whether GLD-3 might stimulate GLD-2 by recruiting it to RNA, some studies tethered C. elegans GLD-2 to mRNAs in Xenopus oocytes by joining it to the MS2 coat protein, which recruits it to an RNA. Tethered GLD-2 adds poly(A) and stimulates translation of the mRNA, demonstrating that recruitment is sufficient to stimulate polyadenylation activity. These results support a model of a PAP heterodimer in which GLD-2 contains the active site and GLD-3 provides RNA-binding specificity.
Furthermore, GLD-2 activity is also important to maintain or up-regulate the abundance of many mRNAs, as the cytoplasmic polyadenylation has an essential role in activating maternal mRNA translation during early development. In vertebrates, the reaction requires CPEB, an RNA-binding protein and the poly(A) polymerase GLD-2.
The Xenopus enzyme, which exists in two closely related forms, polyadenylates RNAs to which it is tethered and enhances their translation. Likewise, it interacts with cytoplasmic polyadenylation factors, including Cleavage and polyadenylation specificity factor and CPEB, and with target mRNAs. These findings confirm and extend a recent report that a GLD-2 enzyme is the long-sought PAP responsible for cytoplasmic polyadenylation in oocytes.
In addition, the formation of long-term memory is believed to rely on translational control of localized mRNAs. In mammals, dendritic mRNAs are kept in a repressed state and are activated upon repetitive stimulation. Several regulatory proteins required for translational control in early development are thought to be needed for memory formation, suggesting similar molecular mechanisms. An experiment using Drosophila detected the enzyme responsible for poly(A) elongation in the brain and demonstrated that its activity is required specifically for long-term memory. These findings provide strong evidence that cytoplasmic polyadenylation is critical for memory formation, and that GLD2 is the responsible enzyme.
Medical implications
It has also been discovered that GLD2 has medical uses.
For example, the enzyme is overexpressed in patients who suffer from cancer, which is why it can be used as a prognostic factor for early appearance in breast cancer patients. Moreover, PAP activity is used to measure the effect of anticancer drugs such as etoposide and cordycepin in two carcinoma cell lines: HeLa, a human epithelioid cervix carcinoma, and MCF-7, a human breast cancer line.
However, in spite of its utility, it may also be involved in several common diseases such as leukemia, liver cirrhosis, brain injuries, hepatitis and, in some cases, infertility in male patients.
References
Further reading
Gene expression
RNA | GLD-2 | [
"Chemistry",
"Biology"
] | 1,633 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
48,396,387 | https://en.wikipedia.org/wiki/The%20Ocean%20Cleanup | The Ocean Cleanup is a nonprofit environmental engineering organization based in the Netherlands that develops and deploys technology to extract plastic pollution from the oceans and to capture it in rivers before it can reach the ocean. Their initial focus was on the Pacific Ocean and its garbage patch, and extended to rivers in countries including Indonesia, Guatemala, and the United States.
The Ocean Cleanup was founded in 2013 by Boyan Slat, a Dutch inventor who serves as its CEO. It develops both ocean- and river-based catch systems. Its ocean system consists of a funnel-shaped floating barrier which is towed by two ships. The ocean system is deployed in oceanic gyres to collect marine debris. The project aims to launch 10 or more such systems, which it predicts could remove 50% of the debris in the Great Pacific Garbage Patch within five years of deployment.
The river system consists of a variety of floating barriers and extraction systems which are anchored within rivers or at rivermouths. The Ocean Cleanup also publishes scientific papers, and estimates that "1% of worlds rivers (~1000 rivers) are responsible for 80% of the pollution in the world's seas". They aim to deploy their river systems in these 1000 rivers.
As of August 2023, the organization has removed more than 15.8 million kilograms of trash from rivers and the Great Pacific Garbage Patch.
History
Slat proposed the cleanup project and supporting system in 2012. In October, he outlined the project in a TED-talk. The initial design consisted of long, floating barriers fixed to the seabed, attached to a central platform shaped like a manta ray for stability. The barriers would direct the floating plastic to the central platform, which would remove the plastic from the water. Slat did not specify the dimensions of this system in the talk.
2014-2017: Initial prototypes
In 2014, the design replaced the central platform with a tower detached from the floating barriers. This platform would collect the plastic using a conveyor belt. The floating barrier was proposed to be long. They conducted and published a feasibility study.
In 2015, this design won the London Design Museum Design of the Year, and the INDEX: Award. Later that year, scale model tests were conducted in wave pools at Deltares and MARIN, testing the dynamics and load of the barrier in ocean conditions, and gathering data for computational modeling.
A segment went through a test in the North Sea in the summer of 2016. The test indicated that conventional oil containment booms would not stand up over time, and they changed the floater material to a hard-walled HDPE pipe.
In May 2017, significant changes to the conceptual design were made:
Dimensions were reduced, with the idea of using a fleet of 60 such systems.
Seabed anchors were replaced with sea anchors, to drift with the currents, allowing the plastic to "catch up" with the cleanup system, and letting the system drift to locations with the highest concentration of debris. The lines to the anchor would keep the system in a U-shape.
An automatic system for collecting plastic was replaced with a system for concentrating the plastic before removal by support vessels.
System 001
Tests in 2018 led to sea anchors being removed, and the opening of the U turned to face the direction of travel, by creating more drag in the middle with a deeper underwater screen.
On 9 September 2018, System 001 (nicknamed Wilson in reference to the floating volleyball in the 2000 film Cast Away) deployed from San Francisco. The ship Maersk Launcher towed the system to a position 240 nautical miles off the coast, where it was put through a series of sea trials. It consisted of a long barrier with a wide skirt hanging beneath it. It was made from HDPE, and consists of 50x12 m sections joined. It was unmanned and incorporated solar-powered monitoring and navigation systems, including GPS, cameras, lanterns and AIS. The barrier and the screen were produced by an Austrian supplier.
In October 2018 it was towed to the Great Pacific Garbage Patch for real-world duty. System 001 encountered difficulties retaining the plastic collected. The system collected debris, but soon lost it because the barrier did not retain a consistent speed through the water. In December, mechanical stress caused an 18-meter section to detach, and the rig was moved to Hawaii for inspection and repair. During the two months of operation, it had captured 2 metric tons of plastic.
In June 2019, after four months of root cause analyses and redesign, System 001/B was deployed, with a water-borne parachute to slow the system, and an extended cork line to hold the screen in place. This successfully captured smaller plastic, reduced the barrier size by two-thirds, and was easier to adjust offshore. However, System 001/B still did not adequately capture and retain debris.
Interceptors 001-010
In October 2019, The Ocean Cleanup unveiled a barrier for river cleanup, The Interceptor, to intercept river plastic and prevent it from reaching the ocean. Two systems were deployed in Jakarta (Indonesia) and Klang (Malaysia).
In January 2020, flooding broke the barrier of Interceptor 001 in Jakarta. It was replaced with a newer model with a stronger screen, simpler design, and an adjustable better-defined weak link. A third Interceptor was deployed in Santo Domingo, in the Dominican Republic. In December, The Ocean Cleanup announced they would start large-scale production of the Interceptor series.
In July 2022, an Interceptor Original was deployed near the mouth of Ballona Creek in southwestern Los Angeles County, California. This was the first Interceptor Original installed in the United States, and the second of its kind to be deployed globally.
In May 2022, the Ocean Cleanup trialed a new Interceptor called Trashfence on the Rio Las Vacas, a tributary of the Rio Motagua, in Guatemala. It was anchored to the riverbed, and the anchors washed out. In April 2023, they returned with a pair of new Interceptors at a point on the river with slower current, anchored to the riverbank. This was successful, and the site soon became their most prolific; in its first year it removed 10,000,000 kg of trash from the river.
System 002 and 03
In July 2021, a new design called System 002, also known as "Jenny", was deployed in the Great Pacific Garbage Patch for testing. System 002 was actively towed by two ships, as opposed to System 001, which passively drifted. In October, the organization announced that the system had gathered of trash. That same month, the project announced plans for System 03, which would span up to . By December, the project announced it had removed more than 150 tonnes of plastic from the Great Pacific Garbage Patch and announced it would transition to the new, longer System 03 the following year.
In May 2023, the project deployed its System 03 barrier, 2,250 meters long. The system included a retention zone where material is held before it is removed from the water, with the nets' mesh size there being increased from 10 to 15 mm. This is to allow marine life such as fish and turtles to escape, and to allow smaller creatures such as blue buttons and violet snails to pass through.
System 03 has about 5x the capacity of System 002, which is why they dropped a 0 from the naming scheme: [O]ur modeling suggests it may be possible to clean the entire GPGP with as few as 10 systems. That’s why we knocked off one of the zeroes from ‘002’ when we named ‘03’ – we no longer need a three-figure amount of systems to clean all five ocean garbage patches around the world.
In June 2024, the project claimed that it had removed 15 million kilograms (33 million lb) of marine trash from the Great Pacific Garbage Patch and from key polluting rivers around the world since 2019.
Design
Ocean system
The latest 03 design uses a towed floating structure. The structure acts as a containment boom. A permeable screen underneath the float catches subsurface debris and funnels it into the Retention Zone which serves as a debris catch. The Retention Zone is monitored by underwater cameras. If an animal is spotted in the Retention Zone, the Marine Animal Safety Hatch (MASH) is activated, blocking off any further entrance into the Retention Zone while opening an exit hatch and giving the animal a clear route out.
Crewed boats tow the approximately 2.2 km U-shaped barrier through the water at 1.5 knots. The ship is steered to areas with higher waste densities. In July 2022, the floating system reached the milestone of 100,000 kg of plastic removed from the Great Pacific Garbage Patch.
River system
The Interceptor is a solar-powered, automated system designed to capture and extract waste. Along with an optimized water flow path, a barrier guides rubbish towards the opening of the Interceptor and onto the conveyor belt, which delivers waste to the shuttle. The shuttle deposits the waste equally into one of six bins according to sensors. When the bins are almost full, local operators are informed with an automated message. They then empty them and send the waste to local waste management facilities. The Interceptor project is similar to a smaller-scale local project called Mr. Trash Wheel developed in 2008 for Maryland's Baltimore harbor.
In 2021, The Ocean Cleanup began expanding their Interceptor systems to be able to tackle a wider range of rivers. The Interceptor Barricade developed for Rio Las Vacas in 2023 was the first model designed for very high-throughput rivers that may carry 10,000 kg of trash a day.
System deployments
Research
Oceanic expeditions
In August 2015, The Ocean Cleanup conducted the Mega Expedition, in which a fleet of approximately 30 vessels, with lead ship R/V Ocean Starr, crossed the Great Pacific Garbage Patch and mapped an area of 3.5 million square kilometers. The expedition collected data on the size, concentration and total mass of the plastic in the patch. According to the organization, this expedition collected more data on oceanic plastic pollution than the last 40 years combined.
In September and October 2016, The Ocean Cleanup launched the Aerial Expedition, in which a C-130 Hercules aircraft conducted the first ever series of aerial surveys to map the Great Pacific Garbage Patch. The goal was specifically to quantify the amount of large debris, including ghost nets in the patch. Slat stated that the crew saw more debris than expected.
The project released an app called The Ocean Cleanup Survey App, which enables others to survey the ocean for plastic, and report their observations.
Scientific findings
In February 2015, the research team published a study in Biogeosciences about the vertical distribution of plastic, based on samples collected in the North Atlantic Gyre. They found that plastic concentration decreases exponentially with depth, with the highest concentration at the surface, and approaching zero just a few meters deeper. A follow-up paper was published in Scientific Reports in October 2016.
In June 2017, researchers published a paper in Nature Communications, with a model of the river plastic input into the ocean. Their model estimates that between 1.15 and 2.41 million metric tons of plastic enter the world's oceans every year, with 86% of the input stemming from rivers in Asia. In December 2017, they published a paper in Environmental Science & Technology about pollutants in oceanic plastic they had sampled.
In March 2018, they published a paper in Scientific Reports, summarizing the combined findings from the two expeditions. They estimated that the Patch contains 1.8 trillion pieces of floating plastic, with a total mass of 79,000 metric tons. Microplastics (< 0.5 cm) make up 94% of the pieces, accounting for 8% of the mass. The study suggests that the amount of plastic in the patch increased exponentially since 1970.
In September 2019, they published a paper in Scientific Reports studying why emissions into the ocean are higher than the estimates of debris accumulated at the surface layer of the ocean. They argue that debris circulation dynamics offer an explanation for this missing plastic and suggest that there is a significant amount of time between initial emissions and accumulation offshore. The study also indicated that current microplastics are mostly a result of the degradations of plastic produced in the 1990s or before. A follow-up study in May 2020 showed that part of the plastic at the surface of the Great Pacific Garbage Patch is breaking down into microplastics and sinking to the deep sea. Most debris is still found at the surface, with 90% in the first 5 meters.
In October 2019, other researchers estimated that most ocean plastic pollution comes from cargo ships, with a majority from Chinese cargo ships alone. A spokesperson from The Ocean Cleanup said: "Everyone talks about saving the oceans by stopping using plastic bags, straws and single use packaging. That's important, but when we head out on the ocean, that's not necessarily what we find."
Funding
The Ocean Cleanup raised over US$2 million with the help of a crowdfunding campaign in 2014.
As of 2019, it was mainly funded by donations and in-kind sponsors, including Maersk, Salesforce.com chief executive Marc Benioff, Peter Thiel, Julius Baer Foundation, The Coca-Cola Company and Royal DSM.
In 2019, it received a 10 million AUD award from the Macquarie Group Foundation as part of its 50th anniversary celebration.
In October 2020, they unveiled a product made from plastic certified from the Great Pacific Garbage Patch, The Ocean Cleanup sunglasses, to help fund the continuation of the cleanup. They made 21,000 sunglasses, sold at €200 apiece. They worked with DNV GL to develop a certification for plastic from water sources and the sunglasses were certified to originate from the GPGP. The sunglasses were designed by Yves Béhar and manufactured by Safilo. They sold out in early 2022.
In October 2021, they were part of the #TeamSeas fundraising campaign led by YouTube stars Mark Rober and MrBeast, and received roughly half of the $30 million raised.
In 2022, Kia signed a seven-year deal to become a global partner of The Ocean Cleanup through funding and in-kind contributions. The partnership will fund the construction of a new Interceptor and will allow for recycled plastics to be used in the manufacturing process of Kia.
In early 2023 the organization received its largest private donation to date of $25 million from Joe Gebbia, co-founder of Airbnb.
Efficacy issues and possible negative impacts
Criticisms and doubts about method, feasibility, efficiency and return on investment have been raised in the scientific community. Miriam Goldstein, director of ocean policy at the Center for American Progress, stated in 2019 that compared to the ocean system, devices closer to shore are easier to maintain, and would likely recover more plastic per dollar spent overall.
The team has expressed their own concerns that the devices could imperil sea life, including neustons, communities of pleustons, Portuguese man-of-war, sea snails, and sail jellyfish that live near the ocean surface, and have monitored for such impacts. A modelling study concluded that it is currently impossible to determine how damaging at-sea plastic removal strategies (such as those of The Ocean Cleanup) would be for marine life, with impacts potentially ranging from mild to severe.
It is understood that this approach cannot solve the whole problem. Plastic in the oceans is spread far beyond the gyres; experts estimate that less than 5% of all the plastic pollution which enters the oceans makes its way into any of the garbage patches. Much of the plastic that does make it to gyres is not floating at the surface, though recent research confirms it is mostly within the first meters. Plastic in rivers can be more easily trapped at the source but that accounts for only a part of all plastic in the oceans.
In 2022, the organization collected 923,000 kg of ocean and river plastic at an expense of €45.603 million, a cost of €49.4/kg.
In 2023 the efficiency improved significantly due to the upscaling of the river systems. A total of 8,531 metric tons of ocean and river plastic was collected at an expense of €44.507 million, a cost of €5.22/kg.
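A quick check of the cost-per-kilogram figures quoted above; this is a minimal sketch that simply divides the yearly expenses by the mass collected, taking the figures in this section as exact inputs.

```python
# Verify the EUR/kg figures cited above (inputs taken from this section).
collected_kg = {2022: 923_000, 2023: 8_531_000}       # plastic removed, kg
expenses_eur = {2022: 45_603_000, 2023: 44_507_000}   # annual expenses, EUR

for year in collected_kg:
    cost_per_kg = expenses_eur[year] / collected_kg[year]
    print(year, round(cost_per_kg, 2), "EUR/kg")
# 2022 -> 49.41 EUR/kg, 2023 -> 5.22 EUR/kg
```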
Recognition
The project and its founder have been recognized in many fora.
2014 Champion of the Earth – The United Nations Environment Programme.
One of the 20 Most Promising Young Entrepreneurs Worldwide – Intel EYE50.
2015 London Design Museum Design of the Year.
2015 INDEX: Award.
2015 Fast Company Innovation By Design Award in the category Social Good.
2015 100 Global Thinkers — Foreign Policy.
2016 Katerva award.
2017 Norwegian Shipowners' Association's Thor Heyerdahl award.
2019 Macquarie 50th Anniversary Award.
See also
References
Further reading
External links
Ocean pollution
Plastics and the environment
Litter
Organisations based in South Holland
Conservation and environmental foundations
Environmental organizations established in 2013 | The Ocean Cleanup | [
"Chemistry",
"Environmental_science"
] | 3,424 | [
"Ocean pollution",
"Water pollution"
] |
49,628,551 | https://en.wikipedia.org/wiki/Poly%28adp-ribose%29%20polymerase%20family%20member%2014 | Poly(ADP-ribose) polymerase family member 14 is a protein that, in humans, is encoded by the PARP14 gene.
Function
Poly(ADP-ribosyl)ation is an immediate DNA damage-dependent post-translational modification of histones and other nuclear proteins that contributes to the survival of injured proliferating cells. PARP14 belongs to the superfamily of enzymes that perform this modification (Ame et al., 2004 [PubMed 15273990]).
References
Further reading
Cellular processes
Molecular genetics
Mutation
Senescence | Poly(adp-ribose) polymerase family member 14 | [
"Chemistry",
"Biology"
] | 116 | [
"DNA repair",
"Senescence",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Metabolism"
] |
49,630,668 | https://en.wikipedia.org/wiki/Fine%20electronic%20structure | In solid state physics and physical chemistry, the fine electronic structure of a solid are the features of the electronic bands induced by intrinsic interactions between charge carriers. Valence and conduction bands split slightly compared to the difference between the various bands. Some mechanisms that allow it are angular momentum couplings, spin-orbit coupling, lattice distortions (Jahn–Teller effect), and other interactions described by crystal field theory.
The name comes from the fine structure of atoms, where energy levels suffer from a similar effect from the non-relativistic calculation due to effects like spin–orbit interaction, zitterbewegung, and corrections to the kinetic energy.
See also
Fine structure constant
Rashba effect
Dresselhaus effect
References
Atomic physics
Solid-state chemistry
Condensed matter physics | Fine electronic structure | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 157 | [
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Atomic physics",
" molecular",
"Condensed matter physics",
"Atomic",
"nan",
"Solid-state chemistry",
"Matter",
" and optical physics"
] |
49,632,739 | https://en.wikipedia.org/wiki/Mitochondrial%20amidoxime%20reducing%20component%201 | Mitochondrial amidoxime-reducing component 1 (also known as MOCO sulphurase C-terminal domain containing 1, MOSC1 or MARC1) is a mammalian molybdenum-containing enzyme. It is located in the outer mitochondrial membrane and consists of a N-terminal mitochondrial signal domain facing the inter-membrane space, a transmembrane domain, and a C-terminal catalytic domain facing the cytosol. In humans it is encoded by the MOSC1 gene.
MOCO stands for molybdenum cofactor.
MOSC1 has been reported to reduce amidoximes to amidines.
Genetic variation in MARC1 has been reported to be associated with lower blood cholesterol levels, lower blood liver enzyme levels, reduced liver fat and protection from cirrhosis, suggesting that MARC1 deficiency may protect against liver disease. A genome-wide association study involving subjects from the UK Biobank further established an association with alcohol-related liver disease.
See also
MOCOS
References
Proteins | Mitochondrial amidoxime reducing component 1 | [
"Chemistry"
] | 206 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
50,733,020 | https://en.wikipedia.org/wiki/Estradiol%20furoate | Estradiol furoate (EF), or estradiol 17β-furoate, sold under the brand name Di-Folliculine, is an estrogen medication and estrogen ester which is no longer marketed. It is the C17β furoate ester of estradiol. Estradiol benzoate has also been marketed under the brand name Di-Folliculine, and should not be confused with estradiol furoate.
The duration of action of the related estradiol ester estradiol 3-furoate by intramuscular injection was studied in women in 1952. Its duration in oil solution was found to be similar to that of estradiol benzoate in oil solution and shorter than that of estradiol dipropionate in oil solution.
See also
List of estrogen esters § Estradiol esters
References
Abandoned drugs
Estradiol esters
Furoate esters
Synthetic estrogens
2-Furyl compounds | Estradiol furoate | [
"Chemistry"
] | 210 | [
"Drug safety",
"Abandoned drugs"
] |
50,739,450 | https://en.wikipedia.org/wiki/Diamond%20and%20Related%20Materials | Diamond and Related Materials is a peer-reviewed scientific journal in materials science covering research on all forms of diamond and other related materials, including diamond-like carbons, carbon nanotubes, graphene, and boron and carbon nitrides. The journal is published by Elsevier and the editor-in-chief is Ken Haenen (University of Hasselt).
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2022 impact factor of 4.1.
References
External links
English-language journals
Materials science journals
Elsevier academic journals
Academic journals established in 1991 | Diamond and Related Materials | [
"Materials_science",
"Engineering"
] | 133 | [
"Materials science journals",
"Materials science"
] |
50,740,226 | https://en.wikipedia.org/wiki/Cubane-type%20cluster | A cubane-type cluster is an arrangement of atoms in a molecular structure that forms a cube. In the idealized case, the eight vertices are symmetry equivalent and the species has Oh symmetry. Such a structure is illustrated by the hydrocarbon cubane. With chemical formula , cubane has carbon atoms at the corners of a cube and covalent bonds forming the edges. Most cubanes have more complicated structures, usually with nonequivalent vertices. They may be simple covalent compounds or macromolecular or supramolecular cluster compounds.
Examples
Other compounds having different elements in the corners, various atoms or groups bonded to the corners are all part of this class of structures.
Inorganic cubane-type clusters include selenium tetrachloride, tellurium tetrachloride, and sodium silox.
Cubane clusters are common throughout bioinorganic chemistry. Ferredoxins containing [Fe4S4] iron–sulfur clusters are pervasive in nature. The four iron atoms and four sulfur atoms form an alternating arrangement at the corners. The whole cluster is typically anchored by coordination of the iron atoms, usually with cysteine residues. In this way, each Fe center achieves tetrahedral coordination geometry. Some [Fe4S4] clusters arise via dimerization of square-shaped [Fe2S2] precursors. Many synthetic analogues are known including heterometallic derivatives.
Several alkyllithium compounds exist as clusters in solution, typically tetramers, with the formula [RLi]4. Examples include methyllithium and tert-butyllithium. The individual RLi molecules are not observed. The four lithium atoms and the carbon from each alkyl group bonded to them occupy alternating vertices of the cube, with the additional atoms of the alkyl groups projecting off their respective corners.
Octaazacubane is a hypothetical allotrope of nitrogen with formula N8; the nitrogen atoms are the corners of the cube. Like the carbon-based cubane compounds, octaazacubane is predicted to be highly unstable due to angle strain at the corners, and it also does not enjoy the kinetic stability seen for its organic analogues.
References
Molecular geometry | Cubane-type cluster | [
"Physics",
"Chemistry"
] | 461 | [
"Molecular geometry",
"Molecules",
"Stereochemistry",
"Matter"
] |
39,346,603 | https://en.wikipedia.org/wiki/Xeno%20nucleic%20acid | Xenonucleic acids (XNA) are synthetic nucleic acid analogues that are made up of non-natural components such as alternative nucleosides, sugars, or backbones.
Xenonucleic acids have different properties to endogenous nucleic acids. This means they can be used in different applications, such as therapeutics, probes, or functional molecules. For example, peptide nucleic acids, where the backbone is made up of repeating aminoethylglycine units, are extremely stable and resistant to degradation by nucleases because they are not recognised.
The same nucleobases can be used to store genetic information and interact with DNA, RNA, or other XNA bases, but the different backbone gives the structure different properties. This may mean it cannot be processed by naturally occurring cellular processes. For example, natural DNA polymerases cannot read and duplicate this information, thus the genetic information stored in XNA is invisible to DNA-based organisms.
At least six types of synthetic sugars have been shown to form nucleic acid backbones that can store and retrieve genetic information. Research is now being done to create synthetic polymerases to transform XNA. The study of the production and application of XNA molecules has created the field of xenobiology.
Background
The structure of DNA was discovered in 1953. Around the early 2000s, researchers created a number of exotic DNA-like structures, XNA. These are synthetic polymers that can carry the same information as DNA, but with different molecular constituents. The "X" in XNA stands for "xeno-", meaning strange or alien, indicating the difference in the molecular structure as compared to DNA or RNA.
Not much was done with XNA until the development of special polymerase enzyme, capable of copying XNA from a DNA template as well as copying XNA back into DNA. Pinheiro et al. (2012), for example, has demonstrated such an XNA-capable polymerase that works on sequences of around 100 base pairs in length. More recently, synthetic biologists Philipp Holliger and Alexander Taylor succeeded in creating XNAzymes, the XNA equivalent of a ribozyme, enzymes made of RNA. This demonstrates that XNAs not only store hereditary information, but can also serve as enzymes, raising the possibility that life elsewhere could have begun with something other than RNA or DNA.
Structure
Endogenous nucleic acids (DNA and RNA) are polymers of nucleotides. A nucleotide is made up of three chemical components: a phosphate, a five-carbon sugar group (this can be either a deoxyribose sugar—which gives us the "D" in DNA—or a ribose sugar—the "R" in RNA), and one of five standard bases (adenine, guanine, cytosine, thymine or uracil). Xenonucleic acids can substitute any of these components with a non-natural alternative. These substitutions make XNAs functionally and structurally analogous to DNA and RNA despite being unnatural and artificial.
XNA exhibits a variety of structural chemical changes relative to its natural counterparts. Most work has focused on different chemical structures in place of the ribose, including:
1,5-Anhydrohexitol nucleic acid (HNA)
Cyclohexene nucleic acid (CeNA)
Threose nucleic acid (TNA)
Glycol nucleic acid (GNA)
Locked nucleic acid
Peptide nucleic acid (PNA)
Fluoroarabino nucleic acid (FANA)
HNA could potentially be used as a drug that can recognize and bind to specified sequences. Scientists have been able to isolate HNAs that may bind sequences targeting HIV. Research has also shown that CeNAs with stereochemistry similar to the D-form of DNA can form stable duplexes with themselves and with RNA. CeNAs were shown to be less stable when they form duplexes with DNA.
Synthesis
XNA monomers are prepared by chemical synthesis and can be formed into XNA polymers using chemical synthesis or biosynthetic techniques.
Monomer synthesis
Appropriately protected monomers are required for chemical synthesis of XNA polymers; XNA nucleotides, or triphosphates, are required for enzymatic polymerisation.
Typically, for sugar-based XNAs, the xeno nucleoside is made by chemically synthesising the five-carbon sugar analog first and then attaching the nucleobase. To chemically synthesize an XNA oligomer by polymerization of xeno nucleosides, the hydroxyl group corresponding to the 5'-OH of the five-carbon sugar is activated by adding an activating group (such as MMTr, monomethoxytrityl); the activated xeno nucleosides can then be coupled in a chemically defined order. One typical example is CeNA, where the repeating xeno nucleoside units, 2′-cyclohexenyl nucleosides, are chemically synthesized by attaching the protected base to the protected cyclohexenyl precursor.
XNAs with a chemical structure similar to DNA can be synthesized by engineered polymerases. HNA, CeNA, LNA/BNA, ANA/FANA, and TNA are suitable for this process, while Spiegelmers (consisting of L-nucleic acids) can also be synthesized with the aid of engineered polymerases.
Polymer synthesis
Solid-phase synthesis is an important technique for synthesis of short XNA sequences. This enables synthesis of defined sequences.
Alternatively, XNAs can be assembled enzymatically. Xeno nucleotides are analogs of natural nucleotides, with a phosphate group attached to the corresponding hydroxyl group; xeno nucleosides can be chemically treated to attach this phosphate group. Because of the similarity between xeno nucleotides and natural nucleotides, xeno nucleotides can serve as building blocks for engineered polymerases to synthesize XNA.
Biosynthesis of XNAs usually requires templates, as in DNA replication, and this process requires the xeno nucleotide to be structurally similar to a natural nucleotide. XNA can be bio-synthesized on DNA templates, where the information in the DNA template directs the XNA synthesis. XNA can also be bio-synthesized on XNA templates under some conditions, where the XNA behaves like DNA. The synthesis of DNA molecules from XNA templates is also important. Specially engineered polymerases and some reverse transcriptases are used in DNA-to-XNA, XNA-to-XNA, and XNA-to-DNA synthesis.
1,5-Anhydrohexitol Nucleic Acid (HNA) bio-synthesis: HNA polymerases (such as TgoT_6G12, an engineered archaeal polymerase from Thermococcus gorgonarius) have been developed to synthesize HNA polymers.
Cyclohexene Nucleic Acid (CeNA) and 2-F-CeNA bio-synthesis: Vent (exo−) DNA polymerase from the B-family polymerases, Taq DNA polymerase from the A-family polymerases, and HIV reverse transcriptase from the reverse transcriptase family have been developed to facilitate the synthesis of CeNA, enabling its use in synthetic genetics.
Locked Nucleic Acid (LNA) / Bridged Nucleic Acid (BNA) bio-synthesis: These nucleic acids are synthesized through the engineering of polymerases that can accommodate their unique structural features, which include modifications that lock the nucleic acid structure. KOD DNA polymerases, family B DNA polymerases derived from Thermococcus kodakarensis KOD1, are effective LNA decoders and encoders.
Threofuranosyl Nucleic Acid (TNA) bio-synthesis: TNA has been synthesized using mutants of archaeal DNA polymerases, such as Kod-RI, Tgo and Therminator DNA polymerases (9°N, A485L).
Arabino-Nucleic Acid (ANA)/2′-Fluoro-Arabinonucleic Acid (FANA) bio-synthesis: ANA/FANA is synthesized using engineered polymerases that can handle its specific backbone chemistry.
Spiegelmers bio-synthesis: Spiegelmers are created by selecting RNA or DNA aptamers against enantiomeric target molecules, followed by the chemical synthesis of their non-natural L-RNA or L-DNA isomers. This process involves preparing mirror-image targets through chemical synthesis, which can be challenging.
Implications
The study of XNA is not intended to give scientists a better understanding of biological evolution as it has occurred historically, but rather to explore ways in which humans might control and reprogram the genetic makeup of biological organisms in future. XNA has shown significant potential in solving the current issue of genetic pollution in genetically modified organisms. While DNA is incredibly efficient in its ability to store genetic information and lend complex biological diversity, its four-letter genetic alphabet is relatively limited. Using a genetic code of six XNAs rather than the four naturally occurring DNA nucleotide bases yields greater opportunities for genetic modification and expansion of chemical functionality.
The development of various hypotheses and theories about XNAs have altered a key factor in the current understanding of nucleic acids: heredity and evolution are not limited to DNA and RNA as once thought, but are processes that have developed from polymers capable of storing information. Investigations into XNAs will allow researchers to assess whether DNA and RNA are the most efficient and desirable building blocks of life, or if these two molecules emerged randomly after evolving from a larger class of chemical ancestors.
Applications
One theory of XNA utilization is its incorporation into medicine as a disease-fighting agent. Some enzymes and antibodies that are currently administered for various disease treatments are broken down too quickly in the stomach or bloodstream. Because XNAs are foreign molecules and humans are not believed to have evolved enzymes that break them down, XNAs may be able to serve as a more durable counterpart to the DNA- and RNA-based treatment methodologies that are currently in use.
Experiments with XNA have already allowed for the replacement and enlargement of this genetic alphabet, and XNAs have shown complementarity with DNA and RNA nucleotides, suggesting potential for its transcription and recombination. One experiment conducted at the University of Florida led to the production of an XNA aptamer by the AEGIS-SELEX (artificially expanded genetic information system - systematic evolution of ligands by exponential enrichment) method, followed by successful binding to a line of breast cancer cells. Furthermore, experiments in the model bacterium E. coli have demonstrated the ability for XNA to serve as a biological template for DNA in vivo.
In moving forward with genetic research on XNAs, various questions must come into consideration regarding biosafety, biosecurity, ethics, and governance/regulation. One of the key questions here is whether XNA in an in vivo setting would intermix with DNA and RNA in its natural environment, thereby rendering scientists unable to control or predict its implications in genetic mutation.
XNA also has potential applications as a catalyst, much as RNA has the ability to act as an enzyme. Researchers have shown that XNA is able to cleave and ligate DNA, RNA and other XNA sequences, with the highest activity observed for XNA-catalyzed reactions on XNA substrates. This research may help determine whether the roles of DNA and RNA in life emerged through natural selection or were simply a coincidental occurrence.
XNA may be employed as molecular clamps in quantitative real-time polymerase chain reactions (qPCR) by hybridizing with target DNA sequences. In a study published in PLOS ONE, an XNA-mediated molecular clamping assay detected mutant cell-free DNA (cfDNA) from precancerous colorectal cancer (CRC) lesions and colorectal cancer. XNA may also act as highly specific molecular probes for detection of nucleic acid target sequence.
References
Helices
Nucleic acids | Xeno nucleic acid | [
"Chemistry"
] | 2,542 | [
"Biomolecules by chemical classification",
"Nucleic acids"
] |
39,350,099 | https://en.wikipedia.org/wiki/APOPT | APOPT (for Advanced Process OPTimizer) is a software package for solving large-scale optimization problems of any of these forms:
Linear programming (LP)
Quadratic programming (QP)
Quadratically constrained quadratic program (QCQP)
Nonlinear programming (NLP)
Mixed integer programming (MIP)
Mixed integer linear programming (MILP)
Mixed integer nonlinear programming (MINLP)
Applications of APOPT include chemical reactors,
friction stir welding, prevention of hydrate formation in deep-sea pipelines, computational biology, solid oxide fuel cells, and flight controls for Unmanned Aerial Vehicles (UAVs).
Benchmark Testing
Standard benchmarks such as CUTEr and SBML curated models are used to test the performance of APOPT relative to solvers BPOPT, IPOPT, SNOPT, and MINOS. A combination of APOPT (Active Set SQP) and BPOPT (Interior Point Method) performed the best on 494 benchmark problems for solution speed and total fraction of problems solved.
See also
APOPT is supported in AMPL, APMonitor, Gekko, Julia, MATLAB, Pyomo, and Python.
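As an illustration of the Python route, the following minimal sketch (not taken from the APOPT documentation) sets up a small mixed-integer nonlinear program in the Gekko modelling language and selects APOPT as the solver; the variable bounds, constraint and objective are arbitrary illustrative choices, and the assumption that solver index 1 maps to APOPT reflects current Gekko releases.
from gekko import GEKKO
m = GEKKO(remote=False)                        # solve locally
x = m.Var(value=1.0, lb=0, ub=10)              # continuous variable
y = m.Var(value=2, lb=0, ub=10, integer=True)  # integer variable (makes this a MINLP)
m.Equation(x**2 + y**2 >= 4)                   # nonlinear constraint
m.Minimize(x + 2*y)                            # objective (older Gekko versions use m.Obj)
m.options.SOLVER = 1                           # 1 selects APOPT, which handles integer variables
m.solve(disp=False)
print('x =', x.value[0], 'y =', y.value[0])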
References
External links
Web interface to solve optimization problems with the APOPT solver
Download APOPT for AMPL, MATLAB, Julia, Python, or APMonitor
Numerical software
Mathematical optimization software | APOPT | [
"Mathematics"
] | 284 | [
"Numerical software",
"Mathematical software"
] |
39,352,563 | https://en.wikipedia.org/wiki/Singlet%20fission | Singlet fission is a spin-allowed process, unique to molecular photophysics, whereby one singlet excited state is converted into two triplet states. The phenomenon has been observed in molecular crystals, aggregates, disordered thin films, and covalently-linked dimers, where the chromophores are oriented such that the electronic coupling between singlet and the double triplet states is large. Being spin allowed, the process can occur very rapidly (on a picosecond or femtosecond timescale) and out-compete radiative decay (that generally occurs on a nanosecond timescale) thereby producing two triplets with very high efficiency. The process is distinct from intersystem crossing, in that singlet fission does not involve a spin flip, but is mediated by two triplets coupled into an overall singlet. It has been proposed that singlet fission in organic photovoltaic devices could improve the photoconversion efficiencies.
History
The process of singlet fission was first introduced to describe the photophysics of anthracene in 1965. Early studies on the effect of the magnetic field on the fluorescence of crystalline tetracene solidified understanding of singlet fission in polyacenes.
Acenes, pentacene and tetracene in particular, are prominent candidates for singlet fission. The energy of their triplet state is smaller than or equal to half of the singlet (S1) state energy, thus satisfying the requirement S1 ≥ 2T1. Singlet fission in functionalized pentacene compounds has been observed experimentally. Intramolecular singlet fission in covalently linked pentacene and tetracene dimers has also been reported.
The detailed mechanism of the process is unknown. In particular, the role of charge-transfer states in the singlet fission process is still debated. Typically, the mechanisms for singlet fission are classified into (a) direct coupling between the molecules and (b) step-wise one-electron processes involving the charge-transfer states. Intermolecular interactions and the relative orientation of the molecules within the aggregates are known to critically affect singlet fission efficiencies.
The limited number and structural similarity of known chromophores is believed to be the major obstacle to advancing the field for practical applications. It has been proposed that computational modeling of the diradical character of molecules may serve as a guiding principle for the discovery of new classes of singlet fission chromophores. Computations have identified carbenes as possible building blocks for engineering singlet fission molecules.
Mechanisms
Singlet fission (SF) involves the conversion of a singlet excited state (S1) into two triplet states (T1). The process can be described by a two-step kinetic model:
1. Formation of a correlated triplet pair state 1(T1T1) from the singlet excited state:
S1 + S0 → 1(T1T1)
2. Separation of the triplet pair into two individual triplet states:
1(T1T1) → T1 + T1
The rate of singlet fission, denoted as kSF, can be expressed using Fermi's Golden Rule:
kSF = (2π/ℏ) |⟨1(T1T1) | Hel | S1⟩|² d
where Hel is the electronic coupling Hamiltonian, and d represents the density of states. This equation shows that electronic coupling and state density determine the efficiency of singlet fission.
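To make the two-step kinetic picture above concrete, the short sketch below (illustrative only, not taken from the literature) integrates a simplified rate-equation model in Python: the singlet population converts into a correlated triplet pair, which then separates into two free triplets, in competition with radiative singlet decay. All rate constants are arbitrary placeholders rather than measured values.
import numpy as np
from scipy.integrate import solve_ivp
k_fis = 1.0 / 1.0     # singlet fission rate, 1/ps (placeholder)
k_sep = 1.0 / 10.0    # triplet-pair separation rate, 1/ps (placeholder)
k_rad = 1.0 / 1000.0  # radiative singlet decay, 1/ps (placeholder, much slower)
def rates(t, y):
    S1, TT, T1 = y
    dS1 = -(k_fis + k_rad) * S1       # S1 lost to fission and to emission
    dTT = k_fis * S1 - k_sep * TT     # correlated pair 1(T1T1)
    dT1 = 2.0 * k_sep * TT            # each pair releases two free triplets
    return [dS1, dTT, dT1]
sol = solve_ivp(rates, (0.0, 100.0), [1.0, 0.0, 0.0], dense_output=True)
S1, TT, T1 = sol.sol(np.linspace(0.0, 100.0, 5))
print('triplet yield per absorbed photon:', T1[-1])   # approaches 2 when fission dominates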
Implications
Efficient singlet fission requires materials where the energy of the singlet state E (S1) is at least twice the energy of the triplet state E (T1):
E (S1) ≥ 2 × E (T1)
The energetic requirements for singlet fission can be met by acenes (e.g., tetracene, pentacene), perylene derivatives, and diketopyrrolopyrroles (DPPs). Crystal morphology, molecular packing, and minimizing defects influence performance. For instance, single-crystal tetracene displays coherent quantum beats from spin-state interactions, whereas polycrystalline films exhibit less coherence due to defects. Single-crystal tetracene has slower singlet decay times (200–300 ps) compared to polycrystalline films (70–90 ps). In polycrystalline films, excitons can diffuse to defect-rich regions, creating “hotspots” that enhance singlet fission, with excimer-like emissions reflecting the influence of structural defects on SF rates. When materials do not meet the energetic requirements for singlet fission, other relaxation pathways, such as fluorescence, non-radiative decay, or intersystem crossing to a single triplet state, dominate instead, leading to lower efficiency in photovoltaic applications.
Role of spectroscopy
Ultrafast and time-resolved spectroscopic techniques, including transient absorption and time-resolved fluorescence spectroscopy, allow determination of the rates of singlet exciton decay and the formation of triplet states. Transient absorption techniques capture the rapid conversion of singlet excitons into triplet pairs, highlighting the efficiency of singlet fission in various material morphologies. Using time-resolved fluorescence spectroscopy, one can observe coherent quantum beats resulting from spin-state interactions in triplet pairs.
Possible applications
Singlet fission has the potential to enhance solar cell efficiency beyond the Shockley–Queisser limit, especially for organic photovoltaics. Applications extend to other fields, including light-emitting devices.
References
Quantum mechanics
Spectroscopy | Singlet fission | [
"Physics",
"Chemistry"
] | 1,165 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Theoretical physics",
"Quantum mechanics",
"Spectroscopy"
] |
31,257,005 | https://en.wikipedia.org/wiki/Methyl-accepting%20chemotaxis%20proteins | The methyl-accepting chemotaxis proteins (MCP, also aspartate receptor) are a family of transmembrane receptors that mediate chemotactic response in certain enteric bacteria, such as Salmonella enterica enterica and Escherichia coli. These methyl-accepting chemotaxis receptors are one of the first components in the sensory excitation and adaptation responses in bacteria, which act to alter swimming behaviour upon detection of specific chemicals. Use of the MCP allows bacteria to detect concentrations of molecules in the extracellular matrix so that the bacteria may smooth swim or tumble accordingly. If the bacterium detects rising levels of attractants (nutrients) or declining levels of repellents (toxins), the bacterium will continue swimming forward, or smooth swimming. If the bacterium detects declining levels of attractants or rising levels of repellents, the bacterium will tumble and re-orient itself in a new direction. In this manner, a bacterium may swim towards nutrients and away from toxins
Evolution
There are many different types of bacterial 60 kDa transmembrane receptors, which share similar topology and signalling mechanisms. They possess three domains: a periplasmic ligand-binding domain, two transmembrane segments, and a cytoplasmic domain. The structure of the ligand-binding domain comprises a closed or partly opened, four-helical bundle with a left-handed twist. The difference in the sequence of the ligand-binding domain between receptors reflects the different ligand specificities. Binding of the ligand causes a conformational change that is transmitted across the membrane to the cytoplasmic activation domain.
Environmental diversity gives rise to diversity in bacterial signalling receptors, and consequently there are many genes encoding MCPs. For example, there are four well-characterised MCPs found in Escherichia coli: Tar (taxis towards aspartate and maltose, away from nickel and cobalt), Tsr (taxis towards serine, away from leucine, indole and weak acids), Trg (taxis towards galactose and ribose) and Tap (taxis towards dipeptides).
Structure
MCPs share similar structure and signalling mechanism. MCPs form dimers. Three dimers of MCP spontaneously form trimers. Trimers are complexed by CheA and CheW into hexagonal lattices. MCPs either bind ligands directly or interact with ligand-binding proteins, transducing the signal to downstream signalling proteins in the cytoplasm. Most MCPs contain: (a) an N-terminal signal peptide that is a transmembrane alpha-helix in the mature protein; (b) a poorly-conserved periplasmic receptor (ligand-binding) domain; (c) a transmembrane alpha-helix; (d) generally one or more HAMP domains and (e) a highly conserved C-terminal cytoplasmic domain that interacts with downstream signalling components. The C-terminal domain contains the methylated glutamate residues.
MCPs undergo two covalent modifications: deamidation and reversible methylation at a number of glutamate residues. Attractants increase the level of methylation, while repellents decrease it. The methyl groups are added by the methyl-transferase CheR and are removed by the methylesterase CheB.
Function
Binding a ligand causes a conformational change in the MCP receptor which translates down the hairpin structure and inhibits its sensor kinase. At the tip of the hairpin are two proteins that associate to the MCP: CheW and CheA. CheA acts as the sensor kinase. CheA has kinase activity and autophosphorylates itself on a histidyl residue when activated by the MCP. CheW is believed to be a transducer of the signal from the MCP to CheA. Activated CheA transfers its phosphoryl group to CheY, a response regulator. Phosphorylated CheY phosphorylates the basal body FliM which is connected to the flagellum. Phosphorylation of the basal body acts as a flagellar switch and changes the direction of rotation of the flagellum. This change in direction allows for alternation between smooth swimming and tumbling which biases the bacterial random walk towards attractant.
References
Bacterial proteins
Transmembrane receptors
Protein families
Protein domains | Methyl-accepting chemotaxis proteins | [
"Chemistry",
"Biology"
] | 901 | [
"Transmembrane receptors",
"Protein classification",
"Signal transduction",
"Protein domains",
"Protein families"
] |
31,257,075 | https://en.wikipedia.org/wiki/Thurston%20boundary | In mathematics, the Thurston boundary of Teichmüller space of a surface is obtained as the boundary of its closure in the projective space of functionals on simple closed curves on the surface. The Thurston boundary can be interpreted as the space of projective measured foliations on the surface.
The Thurston boundary of the Teichmüller space of a closed surface of genus is homeomorphic to a sphere of dimension . The action of the mapping class group on the Teichmüller space extends continuously over the union with the boundary.
Measured foliations on surfaces
Let be a closed surface. A measured foliation on is a foliation on which may admit isolated singularities, together with a transverse measure , i.e. a function which to each arc transverse to the foliation associates a positive real number . The foliation and the measure must be compatible in the sense that the measure is invariant if the arc is deformed with endpoints staying in the same leaf.
Let be the space of isotopy classes of closed simple curves on . A measured foliation can be used to define a function as follows: if is any curve let
where the supremum is taken over all collections of disjoint arcs which are transverse to (in particular if is a closed leaf of ). Then if the intersection number is defined by:
.
Two measured foliations are said to be equivalent if they define the same function on (there is a topological criterion for this equivalence via Whitehead moves). The space of projective measured foliations is the image of the set of measured foliations in the projective space via the embedding . If the genus of is at least 2, the space is homeomorphic to the -dimensional sphere (in the case of the torus it is the 2-sphere; there are no measured foliations on the sphere).
Compactification of Teichmüller space
Embedding in the space of functionals
Let be a closed surface. Recall that a point in the Teichmüller space is a pair where is a hyperbolic surface (a Riemannian manifold with sectional curvatures all equal to ) and a homeomorphism, up to a natural equivalence relation. The Teichmüller space can be realised as a space of functionals on the set of isotopy classes of simple closed curves on as follows. If and then is defined to be the length of the unique closed geodesic on in the isotopy class . The map is an embedding of into , which can be used to give the Teichmüller space a topology (the right-hand side being given the product topology).
In fact, the map to the projective space is still an embedding: let denote the image of there. Since this space is compact, the closure is compact: it is called the Thurston compactification of the Teichmüller space.
The Thurston boundary
The boundary is equal to the subset of . The proof also implies that the Thurston compactification is homeomorphic to the -dimensional closed ball.
Applications
Pseudo-Anosov diffeomorphisms
A diffeomorphism is called pseudo-Anosov if there exists two transverse measured foliations, such that under its action the underlying foliations are preserved, and the measures are multiplied by a factor respectively for some (called the stretch factor). Using his compactification Thurston proved the following characterisation of pseudo-Anosov mapping classes (i.e. mapping classes which contain a pseudo-Anosov element), which was in essence known to Nielsen and is usually called the Nielsen-Thurston classification. A mapping class is pseudo-Anosov if and only if:
it is not reducible (i.e. there is no and such that );
it is not of finite order (i.e. there is no such that is the isotopy class of the identity).
The proof relies on the Brouwer fixed point theorem applied to the action of on the Thurston compactification . If the fixed point is in the interior then the class is of finite order; if it is on the boundary and the underlying foliation has a closed leaf then it is reducible; in the remaining case it is possible to show that there is another fixed point corresponding to a transverse measured foliation, and to deduce the pseudo-Anosov property.
Applications to the mapping class group
The action of the mapping class group of the surface on the Teichmüller space extends continuously to the Thurston compactification. This provides a powerful tool to study the structure of this group; for example it is used in the proof of the Tits alternative for the mapping class group. It can also be used to prove various results about the subgroup structure of the mapping class group.
Applications to 3–manifolds
The compactification of Teichmüller space by adding measured foliations is essential in the definition of the ending laminations of a hyperbolic 3-manifold.
Actions on real trees
A point in Teichmüller space can alternatively be seen as a faithful representation of the fundamental group into the isometry group of the hyperbolic plane , up to conjugation. Such an isometric action gives rise (via the choice of a principal ultrafilter) to an action on the asymptotic cone of , which is a real tree. Two such actions are equivariantly isometric if and only if they come from the same point in Teichmüller space. The space of such actions (endowed with a natural topology) is compact, and hence we get another compactification of Teichmüller space. A theorem of R. Skora states that this compactification is equivariantly homeomorphic to the Thurston compactification.
Notes
References
Geometric topology
Geometric group theory | Thurston boundary | [
"Physics",
"Mathematics"
] | 1,208 | [
"Geometric group theory",
"Group actions",
"Geometric topology",
"Topology",
"Symmetry"
] |
31,259,060 | https://en.wikipedia.org/wiki/Radiation%20material%20science | Radiation materials science is a subfield of materials science which studies the interaction of radiation with matter: a broad subject covering many forms of irradiation and of matter.
Main aim of radiation material science
Some of the most profound effects of irradiation on materials occur in the core of nuclear power reactors where atoms comprising the structural components are displaced numerous times over the course of their engineering lifetimes. The consequences of radiation to core components include changes in shape and volume by tens of percent, increases in hardness by factors of five or more, severe reduction in ductility and increased embrittlement, and susceptibility to environmentally induced cracking. For these structures to fulfill their purpose, a firm understanding of the effect of radiation on materials is required in order to account for irradiation effects in design, to mitigate its effect by changing operating conditions, or to serve as a guide for creating new, more radiation-tolerant materials that can better serve their purpose.
Radiation
The types of radiation that can alter structural materials are neutron radiation, ion beams, electrons (beta particles), and gamma rays. All of these forms of radiation have the capability to displace atoms from their lattice sites, which is the fundamental process that drives the changes in structural metals. The inclusion of ions among the irradiating particles provides a tie-in to other fields and disciplines such as the use of accelerators for the transmutation of nuclear waste, or in the creation of new materials by ion implantation, ion beam mixing, plasma-assisted ion implantation, and ion beam-assisted deposition.
The effect of irradiation on materials is rooted in the initial event in which an energetic projectile strikes a target. While the event is made up of several steps or processes, the primary result is the displacement of an atom from its lattice site. Irradiation displaces an atom from its site, leaving a vacant site behind (a vacancy), and the displaced atom eventually comes to rest in a location between lattice sites, becoming an interstitial atom. The vacancy-interstitial pair is central to radiation effects in crystalline solids and is known as a Frenkel pair. The presence of Frenkel pairs and the other consequences of irradiation damage determine the physical effects of irradiation and, when stress is applied, its mechanical effects, through phenomena such as swelling, growth, phase transformation and segregation. In addition to the atomic displacement, an energetic charged particle moving in a lattice also gives energy to electrons in the system via the electronic stopping power. For high-energy particles this energy transfer can also produce damage in non-metallic materials, such as ion tracks and fission tracks in minerals.
Radiation damage
The radiation damage event is defined as the transfer of energy from an incident projectile to the solid and the resulting distribution of target atoms after completion of the event. This event is composed of several distinct processes:
The interaction of an energetic incident particle with a lattice atom
The transfer of kinetic energy to the lattice atom giving birth to a primary knock-on atom
The displacement of the atom from its lattice site
The passage of the displaced atom through the lattice and the accompanying creation of additional knock-on atoms
The production of a displacement cascade (collection of point defects created by the primary knock-on atom)
The termination of the primary knock-on atom as an interstitial
The result of a radiation damage event is, if the energy given to a lattice atom is above the threshold displacement energy, the creation of a collection of point defects (vacancies and interstitials) and clusters of these defects in the crystal lattice.
The essence of the quantification of radiation damage in solids is the number of displacements per unit volume per unit time:
R_d = N ∫ dE_i φ(E_i) ∫ dT σ(E_i, T) ν(T), with E_i running from E_min to E_max and T from T_min to T_max,
where N is the atom number density, E_max and E_min are the maximum and minimum energies of the incoming particle, φ(E_i) is the energy-dependent particle flux, T_max and T_min are the maximum and minimum energies transferred in a collision of a particle of energy E_i and a lattice atom, σ(E_i, T) is the cross section for the collision of a particle of energy E_i that results in a transfer of energy T to the struck atom, and ν(T) is the number of displacements per primary knock-on atom.
The two key variables in this equation are σ(E_i, T) and ν(T). The term σ(E_i, T) describes the transfer of energy from the incoming particle to the first atom it encounters in the target, the primary knock-on atom; the second quantity, ν(T), is the total number of displacements that the primary knock-on atom goes on to make in the solid. Taken together, they describe the total number of displacements caused by an incoming particle of energy E_i, and the above equation accounts for the energy distribution of the incoming particles. The result is the total number of displacements in the target from a flux of particles with a known energy distribution.
In radiation materials science the displacement damage in the alloy, expressed in displacements per atom (dpa), is a better representation of the effect of irradiation on material properties than the fluence (neutron fluence).
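As a rough numerical illustration of the dpa measure, the sketch below (not taken from the references) estimates the dose for a monoenergetic fast-neutron flux using the simple Kinchin–Pease form ν(T) = T/(2E_d) and the average elastic energy transfer; every numerical value is an illustrative placeholder rather than data for a specific reactor.
import math
phi     = 1.0e18     # neutron flux, n m^-2 s^-1 (placeholder)
sigma_s = 3.0e-28    # elastic scattering cross section, m^2 (about 3 barn, placeholder)
E_n     = 1.0e6      # neutron energy, eV (1 MeV)
A       = 56         # mass number of the target atom (iron-like)
E_d     = 40.0       # threshold displacement energy, eV (typical for steels)
years   = 10.0
t       = years * 3.1536e7                 # exposure time in seconds
gamma  = 4.0 * A / (1.0 + A)**2            # maximum fractional energy transfer
T_mean = 0.5 * gamma * E_n                 # mean PKA energy for isotropic elastic scattering
nu     = T_mean / (2.0 * E_d)              # Kinchin-Pease displacements per primary knock-on atom
dpa    = phi * sigma_s * nu * t            # displacement rate per atom multiplied by time
print(f"mean PKA energy   ~ {T_mean:,.0f} eV")
print(f"displacements/PKA ~ {nu:,.0f}")
print(f"dose after {years:.0f} years ~ {dpa:.1f} dpa")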
See also Wigner effect.
Radiation-resistant materials
To generate materials that meet the increasing demands of nuclear reactors to operate with higher efficiency or for longer lifetimes, materials must be designed with radiation resistance in mind. In particular, Generation IV nuclear reactors operate at higher temperatures and pressures compared to modern pressurized water reactors, which account for the vast majority of Western reactors. This leads to increased vulnerability to normal mechanical failure in terms of creep resistance as well as radiation damaging events such as neutron-induced swelling and radiation-induced segregation of phases. By accounting for radiation damage, reactor materials would be able to withstand longer operating lifetimes. This allows reactors to be decommissioned after longer periods of time, improving return on investment without compromising safety. This is of particular interest in developing the commercial viability of advanced and theoretical nuclear reactors, and this goal can be accomplished through engineering resistance to these displacement events.
Grain boundary engineering
Face-centered cubic metals such as austenitic steels and Ni-based alloys can benefit greatly from grain boundary engineering. Grain boundary engineering attempts to generate a higher proportion of special grain boundaries, characterized by favorable orientations between grains. By increasing the population of low-energy boundaries without increasing grain size, the fracture mechanics of these face-centered cubic metals can be changed to improve mechanical properties at a similar displacements-per-atom value compared with non-grain-boundary-engineered alloys. This method of treatment in particular yields better resistance to stress corrosion cracking and oxidation.
Materials selection
By using advanced methods of material selection, materials can be judged on criteria such as neutron-absorption cross section. Selecting materials with minimal neutron absorption can greatly reduce the number of displacements per atom that occur over a reactor material's lifetime. This slows the radiation embrittlement process by preventing mobility of atoms in the first place, proactively selecting materials that do not interact with the nuclear radiation as frequently. This can have a large impact on total damage, for example when comparing the zirconium alloys used in modern advanced reactors to stainless-steel reactor cores, whose absorption cross sections can differ by an order of magnitude.
Short range order (SRO) self-organization
For nickel-chromium and iron-chromium alloys, short range order can be designed on the nano-scale (<5 nm) to absorb the interstitials and vacancies generated by primary knock-on atom events. This yields materials that mitigate the swelling that normally occurs at high displacements-per-atom doses and keep the overall volume change under roughly ten percent. This occurs through generation of a metastable phase that is in constant, dynamic equilibrium with the surrounding material. This metastable phase is characterized by an enthalpy of mixing that is effectively zero with respect to the main lattice. This allows the phase transformation to absorb and disperse the point defects that typically accumulate in more rigid lattices. This extends the life of the alloy by making vacancy and interstitial accumulation less successful, as constant neutron bombardment in the form of displacement cascades transforms the SRO phase while the SRO re-forms in the bulk solid solution.
Resources
Fundamentals of Radiation Material Science: Metals and Alloys, 2nd Ed, Gary S. Was, SpringerNature, New York 2017
R. S. Averback and T. Diaz de la Rubia (1998). "Displacement damage in irradiated metals and semiconductors". In H. Ehrenfest and F. Spaepen. Solid State Physics 51. Academic Press. pp. 281–402.
R. Smith, ed. (1997). Atomic & ion collisions in solids and at surfaces: theory, simulation and applications. Cambridge University Press. .
References
External links
Radiation
Building engineering | Radiation material science | [
"Physics",
"Materials_science",
"Engineering"
] | 1,780 | [
"Applied and interdisciplinary physics",
"Building engineering",
"Materials science",
"Civil engineering",
"nan",
"Architecture"
] |
31,261,584 | https://en.wikipedia.org/wiki/Circular%20law | In probability theory, more specifically the study of random matrices, the circular law concerns the distribution of eigenvalues of an random matrix with independent and identically distributed entries in the limit
.
It asserts that for any sequence of random matrices whose entries are independent and identically distributed random variables, all with mean zero and variance equal to , the limiting spectral distribution is the uniform distribution over the unit disc.
Ginibre ensembles
The complex Ginibre ensemble is defined as for , with all their entries sampled IID from the standard normal distribution .
The real Ginibre ensemble is defined as .
Eigenvalues
The eigenvalues of are distributed according to
Global law
Let be a sequence sampled from the complex Ginibre ensemble. Let denote the eigenvalues of . Define the empirical spectral measure of as
Then, almost surely (i.e. with probability one), the sequence of measures converges in distribution to the uniform measure on the unit disk.
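The global law is easy to check numerically. The following short sketch (illustrative only, not part of the original article) samples a complex Ginibre-type matrix with entries of variance 1/n and verifies that its eigenvalues fill the unit disc roughly uniformly.
import numpy as np
rng = np.random.default_rng(0)
n = 2000
# i.i.d. complex Gaussian entries with mean 0 and variance 1/n after scaling
X = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2 * n)
radii = np.abs(np.linalg.eigvals(X))
print('largest |eigenvalue|      :', radii.max())          # close to 1
print('fraction inside radius 0.5:', np.mean(radii < 0.5)) # close to 0.25, since P(|z| < r) = r^2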
Edge statistics
Let be sampled from the real or complex ensemble, and let be the absolute value of its maximal eigenvalue:
We have the following theorem for the edge statistics:
This theorem refines the circular law of the Ginibre ensemble. In words, the circular law says that the spectrum of the matrix almost surely falls uniformly on the unit disc, and the edge statistics theorem states that the radius of the almost-unit-disc is about , and fluctuates on a scale of , according to the Gumbel law.
History
For random matrices with Gaussian distribution of entries (the Ginibre ensembles), the circular law was established in the 1960s by Jean Ginibre. In the 1980s, Vyacheslav Girko introduced an approach which allowed the circular law to be established for more general distributions. Further progress was made by Zhidong Bai, who established the circular law under certain smoothness assumptions on the distribution.
The assumptions were further relaxed in the works of Terence Tao and Van H. Vu, Guangming Pan and Wang Zhou, and Friedrich Götze and Alexander Tikhomirov. Finally, in 2010 Tao and Vu proved the circular law under the minimal assumptions stated above.
The circular law result was extended in 1985 by Girko to an elliptical law for ensembles of matrices with a fixed amount of correlation between the entries above and below the diagonal. The elliptic and circular laws were further generalized by Aceituno, Rogers and Schomerus to the hypotrochoid law which includes higher order correlations.
See also
Wigner semicircle distribution
References
Random matrices | Circular law | [
"Physics",
"Mathematics"
] | 519 | [
"Random matrices",
"Matrices (mathematics)",
"Statistical mechanics",
"Mathematical objects"
] |
31,262,483 | https://en.wikipedia.org/wiki/Induction%20regulator | An induction regulator is an alternating current electrical machine, somewhat similar to an induction motor, which can provide a continuously variable output voltage. The induction regulator was an early device used to control the voltage of electric networks. Since the 1930s it has been replaced in distribution network applications by the tap transformer. Its usage is now mostly confined to electrical laboratories, electrochemical processes and arc welding. With minor variations, its setup can be used as a phase-shifting power transformer.
Construction
A single-phase induction regulator has a (primary) excitation winding, connected to the supply voltage, wound on a magnetic core which can be rotated. The stationary secondary winding is connected in series with the circuit to be regulated. As the excitation winding is rotated through 180 degrees, the voltage induced in the series winding changes from adding to the supply voltage to opposing it. By selection of the ratios of the number of turns on the excitation and series windings, the range of voltage can be adjusted, say, plus or minus 20% of the supply voltage, for example.
The three phase induction regulator can be regarded as a wound induction motor. The rotor is not allowed to turn freely and it can be mechanically shifted by means of a worm gear. The rest of the regulator's construction follows that of a wound rotor induction motor with a slotted three-phase stator and a wound three-phase rotor. Since the rotor is not allowed to turn more than 180 degrees, mechanically, the rotor leads can be connected by flexible cables to the exterior circuit. If the stator winding is a two-pole winding, moving the rotor through 180 degrees physically will change the phase of the induced voltage by 180 degrees. A four-pole winding only requires 90 degrees of physical movement to produce 180 degrees of phase shift.
Since a torque is produced by the interaction of the magnetic fields, the movable element is held by a mechanism such as a worm gear. The rotor may be rotated by a hand wheel attached to the machine, or an electric motor can be used to remotely or automatically adjust the rotor position.
Depending on the application, the ratio of number of turns on the rotor and the stator can vary.
Working
Since the single phase regulator only changes the flux linking the excitation and series windings, it does not introduce a phase shift between the supply voltage and the load voltage. However, the varying position of the movable element in the three-phase regulator does create a phase shift. This may be a concern if the load circuit may be connected to more than one supply, since circulating currents will flow owing to the phase shift.
If the rotor terminals are connected to a three-phase electric power network, a rotating magnetic field will be driven into the magnetic core.
The resulting flux will produce an emf on the windings of the stator, with the particularity that if rotor and stator are physically shifted by an angle α, then the electrical phase shift between the two windings is α as well. Considering just the fundamental harmonic, and ignoring the shifting, the following equation holds:
Where ξ is the winding factor, a constant related to the construction of the windings.
If the stator winding is connected to the primary phase, the total voltage seen from the neutral (N) will be the sum of the voltages at both windings rotor and stator. Translating this to electric phasors, both phasors are connected. There is an angular shifting of α between them. Since α can be freely chosen between [0, π], both phasors can be added or subtracted, so all the values in between are attainable. The primary and secondary are not isolated. Also, the ratio of the magnitudes of voltages between rotor and stator is constant; the resultant voltage varies owing to the angular shifting of the series winding induced voltage.
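The phasor addition described above can be illustrated with a short numerical sketch (not from the article). The winding voltages below are arbitrary placeholder values chosen to give roughly ±20% regulation; the output magnitude is simply the modulus of the sum of the two phasors as the shift α is varied.
import numpy as np
V_stator = 230.0   # supply-side winding voltage, V (placeholder)
V_rotor  = 46.0    # series regulating winding voltage, V (placeholder, about 20%)
for alpha_deg in (0, 45, 90, 135, 180):
    alpha = np.radians(alpha_deg)
    V_out = abs(V_stator + V_rotor * np.exp(1j * alpha))   # phasor sum of the two winding voltages
    print(f'alpha = {alpha_deg:3d} deg -> output ~ {V_out:6.1f} V')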
Advantages
The output voltage can be continuously regulated within the nominal range. This is a clear advantage over tap transformers, where the output voltage takes discrete values. The voltage can also be easily regulated under working conditions.
Drawbacks
In comparison to tap transformers, induction regulators are expensive, have lower efficiency and high open-circuit currents (due to the air gap), and are limited in voltage to less than 20 kV.
Applications
An induction regulator for power networks is usually designed for a nominal voltage of 14 kV and ±(10–15)% regulation, but this use has declined. Nowadays, its main uses are in electrical laboratories and arc welding.
See also
Variable frequency transformer
Bibliography
Electric transformers
Energy conversion
Electric motors | Induction regulator | [
"Technology",
"Engineering"
] | 922 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
31,264,994 | https://en.wikipedia.org/wiki/Laponite | Laponite is a synthetic smectite clay that was invented in 1962 by clay scientist Barbara Neumann. Laponite is usually produced as powder. It is a nanomaterial, made up of very small disk-shaped crystals, that is used in multiple industrial applications. Laponite was first marketed by the company Laporte plc, and is currently produced by BYK Additives & Instruments. Laponite is not an approved mineral species, since it is not naturally occurring and it is not produced by geological processes.
Development of laponite
In the first formulation of laponite created by Neumann in 1962, the synthetic clay was determined to be a fluorohectorite, and was produced in the form of discs that were 1 nanometer thick, and with a diameter of 60 to 80 nanometres. This went into mass production in 1964. The mineral structure of the clay gives laponite its particular physical characteristics. It has a structure similar to the smectite group of clay minerals, with a 2:1 layered crystal structure in which two tetrahedral silica sheets lie either side of an octahedral sheet containing magnesium ions.
In 1966, Neumann patented a second formulation of laponite, called 'Laponite RD'. This form was free from fluorine, and has subsequently become the most widely used form of laponite. This form of laponite has an empirical formula of .
In later years, Neumann also created other variants of laponite including a lithium-free magnesium silicate clay, a form of synthetic stevensite, and an iron silicate clay, which was a synthetic form of nontronite.
References
Clay
Smectite group | Laponite | [
"Chemistry",
"Materials_science"
] | 348 | [
"Nanotechnology",
"Synthetic materials",
"Nanomaterials",
"Synthetic minerals"
] |
31,267,110 | https://en.wikipedia.org/wiki/AQUAL | AQUAL is a theory of gravity based on Modified Newtonian Dynamics (MOND), but using a Lagrangian. It was developed by Jacob Bekenstein and Mordehai Milgrom in their 1984 paper, "Does the missing mass problem signal the breakdown of Newtonian gravity?". "AQUAL" stands for "AQUAdratic Lagrangian", stemming from the fact that, in contrast to Newtonian gravity, the proposed Lagrangian is non-quadratic in the potential gradient .
The gravitational force law obtained from MOND,
has a serious defect: it violates Newton's third law of motion, and therefore fails to conserve momentum and energy. To see this, consider two objects with ; then we have:
but the third law gives so we would get
even though and would therefore be constant, contrary to the MOND assumption that it is linear for small arguments.
This problem can be rectified by deriving the force law from a Lagrangian, at the cost of possibly modifying the general form of the force law. Conservation laws can then be derived from the Lagrangian by the usual means.
The AQUAL Lagrangian is:
this leads to a modified Poisson equation:
where the predicted acceleration is These equations reduce to the MOND equations in the spherically symmetric case, although they differ somewhat in the disc case needed for modelling spiral or lenticular galaxies. However, the difference is only 10–15%, so does not seriously impact the results.
According to Sanders and McGaugh, one problem with AQUAL (or any scalar–tensor theory in which the scalar field enters as a conformal factor multiplying Einstein's metric) is AQUAL's failure to predict the amount of gravitational lensing actually observed in rich clusters of galaxies.
References
https://iopscience.iop.org/article/10.3847/1538-4357/ace101 reports that the low-acceleration behavior of wide binary stars does not match Newtonian or Einsteinian gravity but does match AQUAL, and gives a numerical value for the difference.
Astrophysics
Theories of gravity
Lagrangian mechanics | AQUAL | [
"Physics",
"Astronomy",
"Mathematics"
] | 455 | [
"Theoretical physics",
"Lagrangian mechanics",
"Classical mechanics",
"Astrophysics",
"Theories of gravity",
"Astronomical sub-disciplines",
"Dynamical systems"
] |
42,081,371 | https://en.wikipedia.org/wiki/Stable%20%E2%88%9E-category | In category theory, a branch of mathematics, a stable ∞-category is an ∞-category such that
(i) It has a zero object.
(ii) Every morphism in it admits a fiber and cofiber.
(iii) A triangle in it is a fiber sequence if and only if it is a cofiber sequence.
The homotopy category of a stable ∞-category is triangulated. A stable ∞-category admits finite limits and colimits.
Examples: the derived category of an abelian category and the ∞-category of spectra are both stable.
A stabilization of an ∞-category C having finite limits and base point is a functor from the stable ∞-category S to C. It preserves limit. The objects in the image have the structure of infinite loop spaces; whence, the notion is a generalization of the corresponding notion (stabilization (topology)) in classical algebraic topology.
By definition, the t-structure of a stable ∞-category is the t-structure of its homotopy category. Let C be a stable ∞-category with a t-structure. Then every filtered object in C gives rise to a spectral sequence , which, under some conditions, converges to . By the Dold–Kan correspondence, this generalizes the construction of the spectral sequence associated to a filtered chain complex of abelian groups.
Notes
References
Higher category theory | Stable ∞-category | [
"Mathematics"
] | 285 | [
"Higher category theory",
"Mathematical structures",
"Category theory",
"Category theory stubs"
] |
42,083,967 | https://en.wikipedia.org/wiki/K-theory%20of%20a%20category | In algebraic K-theory, the K-theory of a category C (usually equipped with some kind of additional data) is a sequence of abelian groups Ki(C) associated to it. If C is an abelian category, there is no need for extra data, but in general it only makes sense to speak of K-theory after specifying on C a structure of an exact category, or of a Waldhausen category, or of a dg-category, or possibly some other variants. Thus, there are several constructions of those groups, corresponding to various kinds of structures put on C. Traditionally, the K-theory of C is defined to be the result of a suitable construction, but in some contexts there are more conceptual definitions. For instance, the K-theory is a 'universal additive invariant' of dg-categories and small stable ∞-categories.
The motivation for this notion comes from algebraic K-theory of rings. For a ring R Daniel Quillen in introduced two equivalent ways to find the higher K-theory. The plus construction expresses Ki(R) in terms of R directly, but it's hard to prove properties of the result, including basic ones like functoriality. The other way is to consider the exact category of projective modules over R and to set Ki(R) to be the K-theory of that category, defined using the Q-construction. This approach proved to be more useful, and could be applied to other exact categories as well. Later Friedhelm Waldhausen in extended the notion of K-theory even further, to very different kinds of categories, including the category of topological spaces.
K-theory of Waldhausen categories
In algebra, the S-construction is a construction in algebraic K-theory that produces a model that can be used to define higher K-groups. It is due to Friedhelm Waldhausen and concerns a category with cofibrations and weak equivalences; such a category is called a Waldhausen category and generalizes Quillen's exact category. A cofibration can be thought of as analogous to a monomorphism, and a category with cofibrations is one in which, roughly speaking, monomorphisms are stable under pushouts. According to Waldhausen, the "S" was chosen to stand for Graeme B. Segal.
Unlike the Q-construction, which produces a topological space, the S-construction produces a simplicial set.
Details
The arrow category of a category C is a category whose objects are morphisms in C and whose morphisms are squares in C. Let a finite ordered set be viewed as a category in the usual way.
Let C be a category with cofibrations and let be a category whose objects are functors such that, for , , is a cofibration, and is the pushout of and . The category defined in this manner is itself a category with cofibrations. One can therefore iterate the construction, forming the sequence. This sequence is a spectrum called the K-theory spectrum of C.
The additivity theorem
Most basic properties of algebraic K-theory of categories are consequences of the following important theorem. There are versions of it in all available settings. Here's a statement for Waldhausen categories. Notably, it's used to show that the sequence of spaces obtained by the iterated S-construction is an Ω-spectrum.
Let C be a Waldhausen category. The category of extensions has as objects the sequences in C, where the first map is a cofibration, and is a quotient map, i.e. a pushout of the first one along the zero map A → 0. This category has a natural Waldhausen structure, and the forgetful functor from to C × C respects it. The additivity theorem says that the induced map on K-theory spaces is a homotopy equivalence.
For dg-categories the statement is similar. Let C be a small pretriangulated dg-category with a semiorthogonal decomposition . Then the map of K-theory spectra K(C) → K(C1) ⊕ K(C2) is a homotopy equivalence. In fact, K-theory is a universal functor satisfying this additivity property and Morita invariance.
Category of finite sets
Consider the category of pointed finite sets. This category has an object for every natural number k, and the morphisms in this category are the functions which preserve the zero element. A theorem of Barratt, Priddy and Quillen says that the algebraic K-theory of this category is a sphere spectrum.
Miscellaneous
More generally in abstract category theory, the K-theory of a category is a type of decategorification in which a set is created from an equivalence class of objects in a stable (∞,1)-category, where the elements of the set inherit an Abelian group structure from the exact sequences in the category.
Group completion method
The Grothendieck group construction is a functor from the category of rings to the category of abelian groups. The higher K-theory should then be a functor from the category of rings but to the category of higher objects such as simplicial abelian groups.
Topological Hochschild homology
Waldhausen introduced the idea of a trace map from the algebraic K-theory of a ring to its Hochschild homology; by way of this map, information can be obtained about the K-theory from the Hochschild homology. Bökstedt factorized this trace map, leading to the idea of a functor known as the topological Hochschild homology of the ring's Eilenberg–MacLane spectrum.
K-theory of a simplicial ring
If R is a constant simplicial ring, then this is the same thing as K-theory of a ring.
See also
Volodin space
Cotriple homology
Notes
References
Further reading
For the recent ∞-category approach, see
Category theory
K-theory | K-theory of a category | [
"Mathematics"
] | 1,253 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory"
] |
42,085,062 | https://en.wikipedia.org/wiki/Fungal%20isolate | Fungal isolates have been researched for decades. Because fungi often exist in thin mycelial monolayers, with no protective shell, immune system, and limited mobility, they have developed the ability to synthesize a variety of unusual compounds for survival. Researchers have discovered fungal isolates with anticancer, antimicrobial, immunomodulatory, and other bio-active properties. The first statins, β-Lactam antibiotics, as well as a few important antifungals, were discovered in fungi.
Chemotherapeutic isolates
BMS manufactures paclitaxel using Penicillium and plant cell fermentation. Fungi can synthesize podophyllotoxin and camptothecin, precursors to etoposide, teniposide, topotecan, and irinotecan.
Lentinan, PSK, and PSP, are registered anticancer immunologic adjuvants. Irofulven and acylfulvene are anticancer derivatives of illudin S. Clavaric acid is a reversible farnesyltransferase inhibitor. Inonotus obliquus creates betulinic acid precursor betulin. Flammulina velutipes creates asparaginase. Plinabulin is a fungal isolate derivative currently being researched for anticancer applications.
Cholesterol inhibitors
The statins lovastatin, mevastatin, and simvastatin precursor monacolin J, are fungal isolates. Additional fungal isolates that inhibit cholesterol are zaragozic acids, eritadenine, and nicotinamide riboside.
Immunosuppressants
Ciclosporin, mycophenolic acid, mizoribine, FR901483, and gliotoxin, are immunosuppressant fungal isolates.
Antimicrobials
Penicillin, cephalosporins, fusafungine, usnic acid, fusidic acid, fumagillin, brefeldin A, verrucarin A, alamethicin, are antibiotic fungal isolates. Antibiotics retapamulin, tiamulin, and valnemulin are derivatives of the fungal isolate pleuromutilin. Griseofulvin, echinocandins, strobilurin, azoxystrobin, caspofungin, micafungin, are fungal isolates with antifungal activity.
Psychotropic isolates
The headache medications cafergot, dihydroergotamine, methysergide, methylergometrine, the dementia medications hydergine, nicergoline, the Parkinson's disease medications lisuride, bromocriptine, cabergoline, and pergolide were all derived from Claviceps isolates. Polyozellus multiplex synthesizes prolyl endopeptidase inhibitors polyozellin, thelephoric acid, and kynapcins. Boletus badius synthesizes L-theanine.
Other isolates
Researchers have discovered other interesting fungal isolates like the antihyperglycemic compounds ternatin, aspergillusol A, sclerotiorin, and antimalarial compounds codinaeopsin, efrapeptins, and antiamoebin. The fungal isolate ergothioneine is actively absorbed and concentrated by the human body via SLC22A4. Other notable fungal isolates include vitamin D1, vitamin D2, and vitamin D4.
See also
References
External links
Bioprospecting for Microbial Endophytes and Their Natural Products – 2003
Pharmaceutical isolates
Fungi and humans | Fungal isolate | [
"Chemistry",
"Biology"
] | 788 | [
"Pharmacology",
"Fungi",
"Fungi and humans",
"Pharmaceutical isolates",
"Humans and other species"
] |
42,085,962 | https://en.wikipedia.org/wiki/EuroFOT | euroFOT, European Field Operational Test, was a project of gathering naturalistic data to assess the impact from the usage of intelligent transportation systems called "intelligent vehicle systems" or "active safety systems" to evaluate their effect on transport safety and fuel efficiency. Led by Ford in partnership with 28 partners, including European vehicle manufacturers, the project involved test on 1000 vehicles during a one-year period. The project included 8 sites in Central Europe. During the active years 12 deliverables and a final report were released. More than 100 TB of data were gathered of use for future analysis and research.
The intelligent vehicle systems included tools to automatically adjust vehicle speed using headway sensor data, to alert the driver if a sensor detects an object with high probability of collision, to alert the driver when the car is not centered on its lane, and tools monitoring fuel usage. The study showed a decrease of safety risk up to 42% due to timely alert of the driver or an automatic adjustment of speed, and that over 90% of accidents involve driver behaviour as a contributing factor. The data included assessment of risk in different positions on the road relative to intersections and visibility.
References
Transport in Europe
Transport safety
Transportation engineering | EuroFOT | [
"Physics",
"Engineering"
] | 242 | [
"Transport safety",
"Industrial engineering",
"Physical systems",
"Transport",
"Transportation engineering",
"Civil engineering"
] |
42,087,349 | https://en.wikipedia.org/wiki/IVBSS | IVBSS (Integrated Vehicle-Based Safety Systems program) is a study led by University of Michigan Transportation Research Institute to test integrated crash avoidance system from 2007–2011. 16 passenger cars and 10 trucks participated in the study. The system warned against front crash risks, lateral crash risks, risks involved while moving between lanes and curve risks while turning. Driver behavior was recorded with and without the system.
American power management company Eaton Corporation provided radar-based technology and worked on its integration with the system.
History
From November 2005 to April 2008, the first phase of the study was conducted, developing systems specification, design, and construction of prototype vehicles. In April 2008, the program was approved for field tests, which took place from February to December 2009. Con-way Freight sponsored and participated in the heavy-truck part of the field tests.
IVBSS won the Best of ITS Awards US national competition in 2008.
Test results indicated that drivers were ready to start using the system, with 72% indicating that they would like such systems in their personal vehicles; drivers found its blind-spot component particularly relevant.
References
External links
Transport safety
Vehicle safety technologies | IVBSS | [
"Physics"
] | 228 | [
"Physical systems",
"Transport",
"Transport safety"
] |
25,213,349 | https://en.wikipedia.org/wiki/Steel%20fibre-reinforced%20shotcrete | Steel fibre-reinforced shotcrete (SFRS) is shotcrete (spray concrete) with steel fibres added. It has higher tensile strength than unreinforced shotcrete and is quicker to apply than weldmesh reinforcement. It has often been used for tunnels.
Advantages
The primary advantages of fibre-reinforced shotcrete are:
Adding steel fibers to the concrete improves its crack resistance (ductility). Traditional rebars are generally used to improve the tensile strength of the concrete in a particular direction, whereas steel fibers provide multidirectional reinforcement. This is one of the reasons why steel fiber-reinforced concrete in shotcrete form has successfully replaced weldmesh in lining tunnels.
Less labour is required.
Less construction time is required.
Applications and types
SFRS has various types, which are applicable to differing situations. Primary uses are:
Tunnels – uses short steel fibers
Industrial floorings – uses long steel fibers
See also
Fibre reinforced concrete
References
Building materials
Concrete
Fibre-reinforced cementitious materials | Steel fibre-reinforced shotcrete | [
"Physics",
"Engineering"
] | 218 | [
"Structural engineering",
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Concrete",
"Matter",
"Architecture"
] |
25,215,397 | https://en.wikipedia.org/wiki/Transcription%20factory | Transcription factories, in genetics, are the discrete sites where transcription occurs in the cell nucleus, and are an example of a biomolecular condensate. They were first discovered in 1993 and have been found to have structures analogous to replication factories, the discrete sites where replication occurs. The factories contain an RNA polymerase (active or inactive) and the necessary transcription factors (activators and repressors) for transcription. Transcription factories containing RNA polymerase II are the most studied, but factories can exist for RNA polymerases I and III, with the nucleolus seen as the prototype for transcription factories. It is possible to view them under both light and electron microscopy. The discovery of transcription factories has challenged the original view of how RNA polymerase interacts with the DNA polymer, and it is thought that the presence of factories has important effects on gene regulation and nuclear structure.
Discovery
The term ‘transcription factory’ was first used in 1993 by Jackson and his colleagues, who noticed that transcription occurred at discrete sites in the nucleus. This contradicted the original view that transcription was distributed evenly throughout the nucleus.
Structure
The structure of a transcription factory appears to be determined by cell type, the transcriptional activity of the cell and the technique used to visualise the structure. The generalised view of a transcription factory features between 4 and 30 RNA polymerase molecules, and it is thought that the more transcriptionally active a cell is, the more polymerases will be present in a factory in order to meet the demands of transcription. The core of the factory is porous and protein rich, with the hyperphosphorylated, elongating forms of the polymerases on the perimeter. The types of protein present include ribonucleoproteins, co-activators, transcription factors, RNA helicase, and splicing and processing enzymes. A factory contains only one type of RNA polymerase, and the diameter of the factory varies depending on the RNA polymerase featured; RNA polymerase I factories are roughly 500 nm in width, whereas RNA polymerase II and III factories are an order of magnitude smaller, at about 50 nm. It has been shown experimentally that the transcription factory is immobilised on a structure, and it is postulated that this immobilisation is due to tethering to the nuclear matrix, because the factory has been shown to be tied to a structure that is unaffected by restriction enzymes. Proteins thought to be involved in the tethering include spectrin, actin and lamins.
Function
The structure of a transcriptional factory directly relates to its function. Transcription is made more efficient because of the clustered nature of the transcription factory. All the necessary proteins: RNA polymerase, transcription factors and other co-regulators are present in the transcription factory that allows for faster RNA polymerisation when the DNA template reaches the factory, it also allows for a number of genes to be transcribed at the same time.
Genomic location
The number of transcription factories per nucleus appears to be determined by cell type, species and the type of measurement. Cultured mouse embryonic fibroblasts have been found to have roughly 1500 factories by immunofluorescence detection of RNAP II; however, cells taken from different tissues of the same mouse group had between 100 and 300 factories. Measurements of the number of transcription factories in HeLa cells give varied results: for example, the traditional fluorescence microscopy approach found 300–500 factories, but using both confocal and electron microscopy roughly 2100 were detected.
Factory specialisation
In addition to the specialisation factories have for the type of RNA polymerase they contain, there is a further level of specialisation: some factories only transcribe a certain set of related genes. This further strengthens the concept that the main function of a transcription factory is transcriptional efficiency.
Assembly and maintenance
There is much debate as to whether transcription factories assemble because of the transcriptional demands of the genome or whether they are stable structures that are conserved over time. Experimentally, they appear to remain fixed over a short period of time; newly made mRNAs were pulse-labelled over 15 minutes and no new transcription factories appeared. This is also supported by inhibition experiments, in which heat shock was used to turn off transcription, resulting in no change in the number of polymerases detected. Upon further analysis of western blot data it was suggested that there was in fact a slight decrease in transcription factories over time. Therefore, it could be claimed that polymerase molecules are released gradually from the factory when there is a lack of transcription, which would eventually lead to the complete loss of the transcription factory.
There are also several pieces of evidence that support the idea of transcription factories assembling de novo in response to transcriptional demands. GFP polymerase fluorescence experiments have shown that the induction of transcription in Drosophila polytene nuclei leads to the formation of a factory, which contradicts the notion of a stable and secure structure.
Mechanism
It was previously thought that it was the relatively small RNA polymerase that moves along the comparatively larger DNA template during transcription. However, increasing evidence supports the notion that due to the tethering of a transcription factory to the nuclear matrix, it is in fact the large DNA template that is moved to accommodate RNA polymerisation. In vitro studies for example have shown that RNA polymerases attached to a surface are capable of both rotating the DNA template and threading it through the polymerase to start transcription; which indicates the capabilities of RNA polymerase to be a molecular motor. Chromosome Conformation Capture (3C) also supports the idea of the DNA template diffusing towards a stationary RNA polymerase.
Doubts remain about this mechanism of transcription. Firstly, it is unknown how a stationary polymerase is capable of transcribing genes on the (+)-strand and (-)-strand at the same genomic locus at the same time. In addition, there is a lack of conclusive evidence on how the polymerase remains immobilised (how it is tethered) and what structure it is tethered to.
Effect on genomic and nuclear structure
There are several consequences the formation of a transcription factory has on nuclear and genomic structures. It has been proposed that the factories are responsible for nuclear organisation; they have been suggested to promote chromatin loop formation by two potential mechanisms:
The first mechanism suggests that loops form because 2 genes on the same chromosome require the same transcription machinery that would be found in a specific transcription factory. This requirement will attract the gene loci to the factory thus creating a loop.
Transcription factories are also suggested to be responsible for gene clustering, because related genes would require the same transcriptional machinery, and if a factory satisfies these needs the genes would be attracted to it.
While the clustering of genes can be beneficial for transcriptional efficiency, there could be negative consequences. Gene translocation events occur when genes are in close proximity to one another, which will happen more often when a transcription factory is present. Gene translocation events, like point mutations, are generally detrimental to the organism and could therefore lead to disease. However, recent research has suggested that there is no correlation between inter-gene interactions and translocation frequencies.
See also
References
Enzymes
Gene expression
Proteins | Transcription factory | [
"Chemistry",
"Biology"
] | 1,505 | [
"Biomolecules by chemical classification",
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Proteins"
] |
36,454,601 | https://en.wikipedia.org/wiki/Multifocal%20plane%20microscopy | Multifocal plane microscopy (MUM), also known as multiplane microscopy or multifocus microscopy, is a form of light microscopy that allows the tracking of the 3D dynamics in live cells at high temporal and spatial resolution by simultaneously imaging different focal planes within the specimen. In this methodology, the light collected from the sample by an infinity-corrected objective lens is split into two paths. In each path the split light is focused onto a detector which is placed at a specific calibrated distance from the tube lens. In this way, each detector images a distinct plane within the sample. The first developed MUM setup was capable of imaging two distinct planes within the sample. However, the setup can be modified to image more than two planes by further splitting the light in each light path and focusing it onto detectors placed at specific calibrated distances. It has later been improved for imaging up to four distinct planes. To image a greater number of focal planes, simpler techniques based on image splitting optics have been developed. One example is by using a customized image splitting prism, which is capable of capturing up to 8 focal planes using only two cameras. Better yet, standard off-the-shelf partial beamsplitters can be used to construct a so-called z-splitter prism that allows simultaneous imaging of 9 individual focal planes using a single camera. Another technique called multifocus microscopy (MFM) uses diffractive Fourier optics to image up to 25 focal planes.
Introduction
Fluorescence microscopy of live cells represents a major tool in the study of trafficking events. The conventional microscope design is well adapted to image fast cellular dynamics in two dimensions, i.e., in the plane of focus. However, cells are three-dimensional objects and intracellular trafficking pathways are typically not constrained to one focal plane. If the dynamics are not constrained to one focal plane, the conventional single plane microscopy technology is inadequate for detailed studies of fast intracellular dynamics in three dimensions. Classical approaches based on changing the focal plane are often not effective in such situations since the focusing devices are relatively slow in comparison to many of the intracellular dynamics. In addition, the focal plane may frequently be at the wrong place at the wrong time, thereby missing important aspects of the dynamic events.
Implementation
MUM can be implemented in any standard light microscope. An example implementation in a Zeiss microscope is as follows. A Zeiss dual-video adaptor is first attached to the side port of a Zeiss Axiovert 200 microscope. Two Zeiss dual-video adaptors are then concatenated by attaching each of them to the output ports of the first Zeiss video adaptor. To each of the concatenated video adaptor, a high resolution CCD camera is attached by using C-mount/spacer rings and a custom-machined camera coupling adaptor. The spacing between the output port of the video adaptor and the camera is different for each camera, which results in the cameras imaging distinct focal planes.
It is worth mentioning that there are many ways to implement MUM. The mentioned implementation offers several advantages such as flexibility, ease of installation and maintenance, and adjustability for different configurations. Additionally, for a number of applications it is important to be able to acquire images in different colors at different exposure times. For example, to visualize exocytosis in TIRFM, very fast acquisition is necessary. However, to image a fluorescently labeled stationary organelle in the cell, low excitation is necessary to avoid photobleaching and as a result the acquisition has to be relatively slow. In this regard, the above implementation offers great flexibility, since different cameras can be used to acquire images in different channels.
3D super-resolution imaging and single molecule tracking using MUM
Modern microscopy techniques have generated significant interest in studying cellular processes at the single molecule level. Single molecule experiments overcome averaging effects and therefore provide information that is not accessible using conventional bulk studies. However, the 3D localization and tracking of single molecules poses several challenges. In addition to whether or not images of the single molecule can be captured while it undergoes potentially highly complex 3D dynamics, the question arises whether or not the 3D location of the single molecule can be determined and how accurately this can be done.
A major obstacle to high accuracy 3D location estimation is the poor depth discrimination of a standard microscope. Even with a high numerical aperture objective, the image of a point source in a conventional microscope does not change appreciably if the point source is moved several hundred nanometers from its focus position. This makes it extraordinarily difficult to determine the axial, i.e., z position, of the point source with a conventional microscope.
More generally, quantitative single molecule microscopy for 3D samples poses the identical problem whether the application is localization/tracking or super-resolution microscopy such as PALM, STORM, FPALM, dSTORM for 3D applications, i.e. the determination of the location of a single molecule in three dimensions. MUM offers several advantages. In MUM, images of the point source are simultaneously acquired at different focus levels. These images give additional information that can be used to constrain the z position of the point source. This constraining information largely overcomes the depth discrimination problem near the focus.
The 3D localization measure provides a quantitative measure of how accurately the location of the point source can be determined. A small numerical value of the 3D localization measure implies very high accuracy in determining the location, while a large numerical value of the 3D localization measure implies very poor accuracy in determining the location. For a conventional microscope when the point source is close to the plane of focus, e.g., z0 <= 250 nm, the 3D localization measure predicts very poor accuracy in estimating the z position. Thus, in a conventional microscope, it is problematic to carry out 3D tracking when the point source is close to the plane of focus.
On the other hand, for a two plane MUM setup the 3D localization measure predicts consistently better accuracy than a conventional microscope for a range of z-values, especially when the point source is close to the plane of focus. An immediate implication of this result is that the z-location of the point source can be determined with relatively the same level of accuracy for a range of z-values, which is favorable for 3D single particle tracking.
Dual objective multifocal plane microscopy (dMUM)
In single particle imaging applications, the number of photons detected from the fluorescent label plays a crucial role in the quantitative analysis of the acquired data. Currently, particle tracking experiments are typically carried out on either an inverted or an upright microscope, in which a single objective lens illuminates the sample and also collects the fluorescence signal from it. Note that although fluorescence emission from the sample occurs in all directions (i.e., above and below the sample), the use of a single objective lens in these microscope configurations results in collecting light from only one side of the sample. Even if a high numerical aperture objective lens is used, not all photons emitted at one side of the sample can be collected due to the finite collection angle of the objective lens. Thus even under the best imaging conditions conventional microscopes collect only a fraction of the photons emitted from the sample.
To address this problem, a microscope configuration can be used that uses two opposing objective lenses, where one of the objectives is in an inverted position and the other objective is in an upright position. This configuration is called dual objective multifocal plane microscopy (dMUM).
References
External links
Ward Ober Lab web page at UT Southwestern Medical Center.
FandPLimitTool Home page
MUMDesignTool Home page
Microscopy
Fluorescence techniques
Cell imaging
Laboratory equipment | Multifocal plane microscopy | [
"Chemistry",
"Biology"
] | 1,565 | [
"Cell imaging",
"Fluorescence techniques",
"Microscopy"
] |
36,463,384 | https://en.wikipedia.org/wiki/Hovenring | The Hovenring is a suspended cycle path roundabout in the province of North Brabant in the Netherlands. It is situated between the localities of Eindhoven, Veldhoven, and Meerhoven, which accounts for its name, and is the first of its kind in the world.
History
The Hovenring was first conceived of in 2008, when increased traffic between Eindhoven and Veldhoven was starting to overwhelm the capacity of the roundabout on the crossing of the roads known as Heerbaan in Veldhoven and the Meerenakkerweg (Heistraat).
In order to improve the flow of traffic and improve safety, it was decided to completely separate motorized and bicycle traffic. In addition, it was decided to transform the roundabout for cars into a regular crossing of streets, to improve the flow of traffic. This left a decision to be made about what to do about the bicycle traffic. The city council of Eindhoven decided that they wanted to develop an eye-catching project, in keeping with ambitions of the Brainport top technology region (a knowledge economy-driven cooperative of the municipalities in the Eindhoven metropolitan area).
The design for the Hovenring was made by the ipv Delft design agency. The name was chosen through a competition held among the population of Eindhoven and Veldhoven. Literally the name means "ring of the Hovens", referring to Eindhoven, Veldhoven and Meerhoven (the residential area of Eindhoven where the Hovenring is). In addition, the name refers to the suspended ring of the Hovenring, as well as to the ring and needle (the central pylon) of lights that are formed by the lights that adorn the construction. With the addition of the lights, the name also refers to Eindhoven's unofficial designation of "city of lights".
The construction started on 11 February 2011. The new crossing was opened on 30 December 2011. About a week later, the crossing was again closed for all traffic, because the suspension cables were found to vibrate in a manner that was considered harmful. The Hovenring was finally opened to the public on 29 June 2012.
Construction
Design
The Hovenring is officially a roundabout, but in fact it is a circular cable-stayed bridge, with its 72 m diameter deck suspended from a single tall central pylon by 24 cables. The entire construction is made of steel.
Vibration issues
The suspension bridge had to be closed almost immediately after delivery due to unexpected vibrations in the cables caused by the wind. An investigation of the problem was undertaken during the next several weeks by professors of civil and mechanical engineering from the Eindhoven University of Technology, the Delft University of Technology and professor Alberto Zasso of the Politecnico di Milano.
It was finally determined that the problem was vortices of wind forming in the lee of the cables, causing far heavier vibrations than expected during design. A solution was found by applying additional dampers on the cables. Unfortunately this caused an extra delay of a month in the opening of the bridge, since the contractor initially mounted the dampers incorrectly.
Comparison
An important predecessor to the Dutch design is the cycle overpass roundabout of Tjensvollkrysset in Stavanger, Norway. Opened in 2010, it bears remarkable resemblance to the Hovenring, sharing for instance its 72 m diameter.
The construction is of concrete rather than steel, and the support is more conventional. Considering the Eindhoven ring was designed from 2008 onwards, the designs may very well have been conceived independently of each other.
A comparison must also be made with the 2011 circular pedestrian overpass of Lujiazui in the Pudong district of Shanghai and with a similar overpass from 2012 in Rzeszów, Poland.
Gallery
See also
Nescio Bridge, an international award-winning suspension bridge for cyclists and pedestrians in East Amsterdam.
References
Civil engineering
Cable-stayed bridges in the Netherlands
Cyclist bridges in the Netherlands
Steel bridges in the Netherlands
Bridges in North Brabant
Towers in North Brabant
Buildings and structures in Eindhoven
Transport in Eindhoven | Hovenring | [
"Engineering"
] | 856 | [
"Construction",
"Civil engineering"
] |
36,464,530 | https://en.wikipedia.org/wiki/Symmetry-protected%20topological%20order | Symmetry-protected topological (SPT) order is a kind of order in zero-temperature quantum-mechanical states of matter that have a symmetry and a finite energy gap.
To derive the results in a most-invariant way, renormalization group methods are used (leading to equivalence classes corresponding to certain fixed points). The SPT order has the following defining properties:
(a) distinct SPT states with a given symmetry cannot be smoothly deformed into each other without a phase transition, if the deformation preserves the symmetry. (b) however, they all can be smoothly deformed into the same trivial product state without a phase transition, if the symmetry is broken during the deformation.
The above definition works for both bosonic systems and fermionic systems, which leads to the notions of bosonic SPT order and fermionic SPT order.
Using the notion of quantum entanglement, we can say that SPT states are short-range entangled states with a symmetry (by contrast: for long-range entanglement see topological order, which is not related to the famous EPR paradox). Since short-range entangled states have only trivial topological orders, we may also refer to the SPT order as Symmetry Protected "Trivial" order.
Characteristic properties
The boundary effective theory of a non-trivial SPT state always has a pure gauge anomaly or a mixed gauge-gravity anomaly for the symmetry group. As a result, the boundary of an SPT state is either gapless or degenerate, regardless of how we cut the sample to form the boundary. A gapped non-degenerate boundary is impossible for a non-trivial SPT state. If the boundary is a gapped degenerate state, the degeneracy may be caused by spontaneous symmetry breaking and/or (intrinsic) topological order.
Monodromy defects in non-trivial 2+1D SPT states carry non-trivial statistics and fractional quantum numbers of the symmetry group. Monodromy defects are created by twisting the boundary condition along a cut by a symmetry transformation. The ends of such a cut are the monodromy defects. For example, 2+1D bosonic Zn SPT states are classified by a Zn integer m. One can show that n identical elementary monodromy defects in a Zn SPT state labeled by m will carry a total Zn quantum number 2m, which is not a multiple of n.
2+1D bosonic U(1) SPT states have a Hall conductance that is quantized as an even integer. 2+1D bosonic SO(3) SPT states have a quantized spin Hall conductance.
Relation between SPT order and (intrinsic) topological order
SPT states are short-range entangled while topologically ordered states are long-range entangled.
Both intrinsic topological order, and also SPT order, can sometimes have protected gapless boundary excitations. The difference is subtle: the gapless boundary excitations in intrinsic topological order can be robust against any local perturbations, while the gapless boundary excitations in SPT order are robust only against local perturbations that do not break the symmetry. So the gapless boundary excitations in intrinsic topological order are topologically protected, while the gapless boundary excitations in SPT order are symmetry protected.
We also know that an intrinsic topological order has emergent fractional charge, emergent fractional statistics, and emergent gauge theory. In contrast, a SPT order has no emergent fractional charge/fractional statistics for finite-energy excitations, nor emergent gauge theory (due to its short-range entanglement). Note that the monodromy defects discussed above are not finite-energy excitations in the spectrum of the Hamiltonian, but defects created by modifying the Hamiltonian.
Examples
The first example of SPT order is the Haldane phase of odd-integer spin chain. It is a SPT phase protected by SO(3) spin rotation symmetry. Note that Haldane phases of even-integer-spin chain do not have SPT order.
A more well known example of SPT order is the topological insulator of non-interacting fermions, a SPT phase protected by U(1) and time reversal symmetry.
On the other hand, fractional quantum Hall states are not SPT states. They are states with (intrinsic) topological order and long-range entanglements.
Group cohomology theory for SPT phases
Using the notion of quantum entanglement, one obtains the following general picture of gapped
phases at zero temperature. All gapped zero-temperature phases can be divided into two classes: long-range entangled phases (ie phases with intrinsic topological order) and short-range entangled phases (ie phases with no intrinsic topological order). All short-range entangled phases can be further divided into three classes: symmetry-breaking phases, SPT phases, and their mix (symmetry breaking order and SPT order can appear together).
It is well known that symmetry-breaking orders are described by group theory. For bosonic SPT phases with pure gauge anomalous boundary, it was shown that they are classified by group cohomology theory: those (d+1)D SPT states with symmetry G are labeled by the elements in group cohomology class
H^{d+1}[G, U(1)].
For other (d+1)D SPT states
with mixed gauge-gravity anomalous boundary, they can be described by , where is the Abelian group formed by (d+1)D topologically ordered phases that have no non-trivial topological excitations (referred to as iTO phases).
From the above results, many new quantum states of matter are predicted, including bosonic topological insulators (the SPT states protected by U(1) and time-reversal symmetry) and bosonic topological superconductors (the SPT states protected by time-reversal symmetry), as well as many other new SPT states protected by other symmetries.
A list of bosonic SPT states from group cohomology ( = time-reversal-symmetry group)
The phases before "+" come from . The phases after "+" come from .
Just like group theory can give us 230 crystal structures in 3+1D, group cohomology theory can give us various SPT phases in any dimensions with any on-site symmetry groups.
On the other hand, the fermionic SPT orders are described by group super-cohomology theory. So the group (super-)cohomology theory allows us to construct many
SPT orders even for interacting systems, which include interacting topological insulator/superconductor.
A complete classification of 1D gapped quantum phases (with interactions)
Using the notions of quantum entanglement and SPT order, one can obtain
a complete classification of all 1D gapped quantum phases.
First, it is shown that there is no (intrinsic) topological order in 1D (ie all 1D gapped states
are short-range entangled).
Thus, if the Hamiltonians have no symmetry, all their 1D gapped quantum states belong to one phase—the phase of trivial product states.
On the other hand, if the Hamiltonians do have a symmetry, their 1D gapped quantum states
are either symmetry-breaking phases, SPT phases, and their mix.
Such an understanding allows one to classify all 1D gapped quantum phases: All 1D gapped phases are classified by
the following three mathematical objects: (G_H, G_Ψ, H^2(G_Ψ, U(1))), where G_H is the symmetry group of the Hamiltonian, G_Ψ the symmetry group of the ground states, and H^2(G_Ψ, U(1)) the second group cohomology class of G_Ψ. (Note that H^2(G_Ψ, U(1)) classifies the projective representations of G_Ψ.) If there is no symmetry breaking (i.e. G_Ψ = G_H), the 1D gapped phases are classified by the projective representations of the symmetry group G_H.
See also
AKLT Model
Topological insulator
Periodic table of topological invariants
Quantum spin Hall effect
Topological order
References
Quantum phases
Condensed matter physics
Mathematical physics
Symmetry
Topology
Emergence | Symmetry-protected topological order | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,645 | [
"Quantum phases",
"Matter",
"Applied mathematics",
"Theoretical physics",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Topology",
"Space",
"Condensed matter physics",
"Geometry",
"Spacetime",
"Mathematical physics",
"Symmetry"
] |
32,262,980 | https://en.wikipedia.org/wiki/Andr%C3%A9%E2%80%93Oort%20conjecture | In mathematics, the André–Oort conjecture is a problem in Diophantine geometry, a branch of number theory, that can be seen as a non-abelian analogue of the Manin–Mumford conjecture, which is now a theorem (proven in several different ways).
The conjecture concerns itself with a characterization of the Zariski closure of sets of special points in Shimura varieties.
A special case of the conjecture was stated by Yves André in 1989 and a more general statement (albeit with a restriction on the type of the Shimura variety) was conjectured by Frans Oort in 1995. The modern version is a natural generalization of these two conjectures.
Statement
The conjecture in its modern form is as follows. Each irreducible component of the Zariski closure of a set of special points in a Shimura variety is a special subvariety.
André's first version of the conjecture was just for one dimensional irreducible components, while Oort proposed that it should be true for irreducible components of arbitrary dimension in the moduli space of principally polarised Abelian varieties of dimension g.
It seems that André was motivated by applications to transcendence theory while Oort by the analogy with the Manin-Mumford
conjecture.
Results
Various results have been established towards the full conjecture by Ben Moonen, Yves André, Andrei Yafaev, Bas Edixhoven, Laurent Clozel, Bruno Klingler and Emmanuel Ullmo, among others. Some of these results were conditional upon the generalized Riemann hypothesis (GRH) being true.
In fact, the proof of the full conjecture under GRH was published by Bruno Klingler, Emmanuel Ullmo and Andrei Yafaev in 2014 in the Annals of Mathematics.
In 2006, Umberto Zannier and Jonathan Pila used techniques from o-minimal geometry and transcendental number theory to develop an approach to the Manin-Mumford-André-Oort type of problems.
In 2009, Jonathan Pila proved the André-Oort conjecture unconditionally for arbitrary products of modular curves, a result which earned him the 2011 Clay Research Award.
Bruno Klingler, Emmanuel Ullmo and Andrei Yafaev proved, in 2014, the functional transcendence result needed for the general Pila-Zannier approach and Emmanuel Ullmo has deduced from it a technical result needed for the induction step in the strategy. The remaining technical ingredient was the problem of bounding below the Galois degrees of special points.
For the case of the Siegel modular variety, this bound was deduced by Jacob Tsimerman in 2015 from the averaged Colmez conjecture and the Masser-Wustholtz isogeny estimates. The averaged Colmez conjecture was proved by Xinyi Yuan and Shou-Wu Zhang and independently by Andreatta, Goren, Howard and Madapusi-Pera.
In 2019-2020, Gal Biniyamini, Harry Schmidt and Andrei Yafaev, building on previous work and ideas of Harry Schmidt on torsion points in tori and abelian varieties and Gal Biniyamini's point counting results, have formulated a conjecture on bounds of heights of special points and deduced from its validity the bounds for the Galois degrees of special points needed for the proof of the full André-Oort conjecture.
In September 2021, Jonathan Pila, Ananth Shankar, and Jacob Tsimerman claimed in a paper (featuring an appendix written by Hélène Esnault and Michael Groechenig) a proof of the Biniyamini-Schmidt-Yafaev height conjecture, thus completing the proof of the André-Oort conjecture using the Pila-Zannier strategy.
Coleman–Oort conjecture
A related conjecture that has two forms, equivalent if the André–Oort conjecture is assumed, is the Coleman–Oort conjecture. Robert Coleman conjectured that for sufficiently large g, there are only finitely many smooth projective curves C of genus g, such that the Jacobian variety J(C) is an abelian variety of CM-type. Oort then conjectured that the Torelli locus – of the moduli space of abelian varieties of dimension g – has for sufficiently large g no special subvariety of dimension > 0 that intersects the image of the Torelli mapping in a dense open subset.
Generalizations
Manin-Mumford and André–Oort conjectures can be generalized in many directions, for example by relaxing the
properties of points being 'special' (and considering the so-called 'unlikely locus' instead) or looking at more general ambient varieties: abelian or semi-abelian schemes, mixed Shimura varieties, etc. These
generalizations are colloquially known as the Zilber–Pink conjectures because problems of this type were proposed by Richard Pink and Boris Zilber.
Most of these questions are open and are a subject of current active research.
See also
Zilber–Pink conjecture
References
Further reading
Diophantine geometry
Conjectures | André–Oort conjecture | [
"Mathematics"
] | 1,029 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Conjectures"
] |
32,266,248 | https://en.wikipedia.org/wiki/Magneto | A magneto is an electrical generator that uses permanent magnets to produce periodic pulses of alternating current. Unlike a dynamo, a magneto does not contain a commutator to produce direct current. It is categorized as a form of alternator, although it is usually considered distinct from most other alternators, which use field coils rather than permanent magnets.
Hand-cranked magneto generators were used to provide ringing current in telephone systems. Magnetos were also adapted to produce pulses of high voltage in the ignition systems of some gasoline-powered internal combustion engines to provide power to the spark plugs. Use of such ignition magnetos for ignition is now limited mainly to engines without a low-voltage electrical system, such as lawnmowers and chainsaws, and to aircraft engines, in which keeping the ignition independent of the rest of the electrical system ensures that the engine continues running in the event of alternator or battery failure. For redundancy, virtually all piston engine aircraft are fitted with two magneto systems, each supplying power to one of two spark plugs in each cylinder.
Magnetos were used for specialized isolated power systems such as arc lamp systems or lighthouses, for which their simplicity was an advantage. They have never been widely applied for the purposes of bulk electricity generation, for the same purposes or to the same extent as either dynamos or alternators. Only in a few specialised cases have they been used for power generation.
History
Production of electric current from a moving magnetic field was demonstrated by Faraday in 1831. The first machines to produce electric current from magnetism used permanent magnets; the dynamo machine, which used an electromagnet to produce the magnetic field, was developed later. The machine built by Hippolyte Pixii in 1832 used a rotating permanent magnet to induce alternating voltage in two fixed coils.
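As an idealised illustration of the principle, Faraday's law gives the EMF of a coil of N turns as e(t) = -N dΦ/dt; for a magnet spinning at angular speed ω with peak flux Φmax linking each turn, this is a sinusoid of amplitude N·Φmax·ω. The short Python sketch below is only a toy model of this relationship; the turn count, flux and speed are made-up illustrative values, not figures for any machine described in this article.

```python
import numpy as np

# Made-up illustrative values for an idealised two-pole magneto
N_turns = 200                  # turns in the output coil
phi_max = 2e-4                 # peak flux linking each turn (weber)
rpm = 3000                     # shaft speed (revolutions per minute)
omega = 2 * np.pi * rpm / 60   # angular speed (rad/s)

t = np.linspace(0.0, 0.05, 5001)                       # 50 ms of output
flux = phi_max * np.cos(omega * t)                     # flux per turn as the magnet rotates
emf = N_turns * phi_max * omega * np.sin(omega * t)    # e = -N dPhi/dt

peak_emf = N_turns * phi_max * omega                   # amplitude of the alternating output (volts)
```

With more pole pairs, as in the multi-pole machines described below, the electrical frequency is the rotation rate multiplied by the number of pole pairs.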
Electroplating
The first electrical machine used for an industrial process was a magneto, the Woolrich Electrical Generator. In 1842 John Stephen Woolrich was granted UK patent 9431 for the use of an electrical generator in electroplating, rather than batteries. A machine was built in 1844 and licensed to the use of the Elkington Works in Birmingham. Such electroplating expanded to become an important aspect of the Birmingham toy industry, the manufacture of buttons, buckles and similar small metal items.
The surviving machine has an applied field from four horseshoe magnets with axial fields. The rotor has ten axial bobbins. Electroplating requires DC and so the usual AC magneto is unworkable. Woolrich's machine, unusually, has a commutator to rectify its output to DC.
Arc lighting
Most early dynamos were bipolar and so their output varied cyclically as the armature rotated past the two poles.
To achieve an adequate output power, magneto generators used many more poles; usually sixteen, from eight horseshoe magnets arranged in a ring. As the flux available was limited by the magnet metallurgy, the only option was to increase the field by using more magnets. As this still gave inadequate power, extra rotor disks were stacked axially along the axle. This had the advantage that each rotor disk could at least share the flux of two expensive magnets. The machine illustrated here uses eight disks and nine rows of magnets: 72 magnets in all.
The rotors first used were wound as sixteen axial bobbins, one per pole. Compared to the bipolar dynamo, this did have the advantage of more poles giving a smoother output per rotation, which was an advantage when driving arc lamps. Magnetos thus established a small niche for themselves as lighting generators.
The Belgian electrical engineer Floris Nollet (1794–1853) became particularly known for this type of arc lighting generator and founded the British-French company Société de l'Alliance to manufacture them.
The French engineer Auguste de Méritens (1834–1898) developed magnetos further for this purpose. His innovation was to replace the rotor coils previously wound on individual bobbins, with a 'ring wound' armature. These windings were placed on a segmented iron core, similar to a Gramme ring, so as to form a single continuous hoop. This gave a more even output current, which was still more advantageous for arc lamps.
Lighthouses
De Méritens is best remembered today for his production of magneto generators specifically for lighthouses. These were favoured for their simplicity and reliability, in particular their avoidance of commutators. In the sea air of a lighthouse, the commutator that had been used previously with dynamo generators was a continual source of trouble. The lighthouse keepers of the time, usually semi-retired sailors, were not mechanically or electrically skilled enough to maintain these more complex machines.
The de Méritens magneto generator illustrated shows the 'ring wound' armature. As there is now only a single rotor disk, each horseshoe magnet comprises a stack of individual magnets, but acts through a pair of pole pieces.
Self-exciting dynamos
Dynamos and alternators require a source of power to drive their field coils. This could not be supplied by their own generator's output, without some process of 'bootstrapping'.
Henry Wilde, an electrical engineer from Manchester, England, developed a combination of magneto and electro-magnet generator, where the magneto was used only to supply the field to the larger alternator. These are illustrated in Rankin Kennedy's work Electrical Installations. Kennedy himself developed a simpler version of this, intended for lighting use on ships, where a dynamo and magneto were assembled on the same shaft. Kennedy's innovation here was to avoid the need for brushgear at all. The current generated in the magneto is transmitted by wires attached to the rotating shaft to the dynamo's rotating field coil. The output of the dynamo is then taken from the stator coils. This is 'inside-out' compared to the conventional dynamo, but avoids the need for brushgear.
The invention of the self-exciting field by Varley, Siemens & Wheatstone removed the need for a magneto exciter. A small residual field in the iron armature of the field coils acted as a weak permanent magnet, and thus a magneto. The shunt wiring of the generator feeds some of its output current back into the field coils, which in turn increases output. Because of this, the field 'builds up' regeneratively, though this may take 20–30 seconds to do so fully.
Use of magnetos here is now obsolete, though separate exciters are still used for high power generating sets, as they permit easier control of output power. These are particularly common with the transmissions of diesel-electric locomotives.
Power generation
Magnetos have advantages of simplicity and reliability, but are limited in size owing to the magnetic flux available from their permanent magnets. The fixed excitation of a magneto made it difficult to control its terminal voltage or reactive power production when operating on a synchronized grid. This restricted their use for high-power applications. Power generation magnetos were limited to narrow fields, such as powering arc lamps or lighthouses, where their particular features of output stability or simple reliability were most valued.
Wind turbines
Small wind turbines, particularly self-build designs, are widely adopting magneto alternators for generation. The generators use rotating neodymium rare-earth magnets with a three-phase stator and a bridge rectifier to produce direct current (DC). This current either directly pumps water, is stored in batteries, or drives a mains inverter that can supply the commercial electricity grid. A typical design is an axial-flux generator recycled from a car brake disk and hub bearing. A MacPherson strut provides the azimuth bearing to bring the turbine into the wind. The brake disk, with its attached rare-earth magnets, rotates to form the armature. A plywood disk carrying multiple axial coils is placed alongside this, with an iron armature ring behind it.
In large sizes, from the 100 kW to MW range, the machines developed for modern wind turbines are termed permanent magnet synchronous generators.
Bicycles
One popular and common use of magnetos of today is for powering lights and USB powered devices on bicycles. Most commonly, a small magneto, termed a bottle dynamo, rubs against the tire of the bicycle and generates power as the wheel turns. More expensive and less common but more efficient is the hub dynamo that rotates neodymium magnets around a copper coil in a claw pole cage inside the hub of a wheel. Commonly referred to as dynamos, both devices are in fact magnetos, producing alternating current as opposed to the direct current produced by a true dynamo.
Medical application
The magneto also had a medical application for treatment of mental illness in the beginnings of electromedicine. In 1850, Duchenne de Boulogne, a French doctor, developed and manufactured a magneto with a variable outer voltage and frequency, through varying revolutions by hand or varying the inductance of the two coils, for clinical experiments in neurology.
Ignition magnetos
Magnetos adapted to produce impulses of high voltage for spark plugs are used in the ignition systems of spark-ignition piston engines. Magnetos are used in piston aircraft engines for their reliability and simplicity, often in pairs. Motor sport vehicles such as motorcycles and snowmobiles may use magnetos because they are lighter in weight than an ignition system relying on a battery. Small internal combustion engines used for lawn mowers, chain saws, portable pumps and similar applications use magnetos for economy and weight reduction. Magnetos are not used in highway motor vehicles that have a cranking battery, which may need more ignition timing control than a magneto system can provide, though sophisticated solid state controllers are becoming more common.
Telephone
Manual telephones for local battery station service in magneto exchanges were equipped with a hand-cranked magneto generator to produce an alternating voltage to alert the central office operator, or to ring the bells of other telephones on the same (party) line.
Future possibilities
The development of modern rare-earth magnets makes the simple magneto alternator a more practical proposition as a power generator, as these permit a greatly increased field strength. As the magnets are compact and of light weight, they generally form the rotor, so the output windings can be placed on the stator, avoiding the need for brushgear.
Guided missiles
By the late 1980s, developments in magnetic materials such as samarium–cobalt, an early rare-earth type, let permanent magnet alternators be used in applications that require an extremely robust generator. In guided missiles, such generators can replace a flux switching alternator. These must operate at high speeds, directly coupled to a turbine. Both types share the advantage of the output coils being part of the stator, thus avoiding the need for brushgear.
See also
Electromagnetism
Faraday's law of induction
Notes
References
Electrical generators | Magneto | [
"Physics",
"Technology"
] | 2,253 | [
"Physical systems",
"Electrical generators",
"Machines"
] |
32,267,080 | https://en.wikipedia.org/wiki/Fraunhofer%20diffraction%20equation | In optics, the Fraunhofer diffraction equation is used to model the diffraction of waves when the diffraction pattern is viewed at a long distance from the diffracting object, and also when it is viewed at the focal plane of an imaging lens.
The equation was named in honour of Joseph von Fraunhofer although he was not actually involved in the development of the theory.
This article gives the equation in various mathematical forms, and provides detailed calculations of the Fraunhofer diffraction pattern for several different forms of diffracting apertures, especially for a normally incident monochromatic plane wave. A qualitative discussion of Fraunhofer diffraction can be found elsewhere.
Definition
When a beam of light is partly blocked by an obstacle, some of the light is scattered around the object, and light and dark bands are often seen at the edge of the shadow – this effect is known as diffraction. The Kirchhoff diffraction equation provides an expression, derived from the wave equation, which describes the wave diffracted by an aperture; analytical solutions to this equation are not available for most configurations.
The Fraunhofer diffraction equation is an approximation which can be applied when the diffracted wave is observed in the far field, and also when a lens is used to focus the diffracted light; in many instances, a simple analytical solution is available to the Fraunhofer equation – several of these are derived below.
In Cartesian coordinates
If the aperture is in plane, with the origin in the aperture and is illuminated by a monochromatic wave, of wavelength λ, wavenumber with complex amplitude , and the diffracted wave is observed in the unprimed -plane along the positive -axis, where are the direction cosines of the point with respect to the origin. The complex amplitude of the diffracted wave is given by the Fraunhofer diffraction equation as:
It can be seen from this equation that the form of the diffraction pattern depends only on the direction of viewing, so the diffraction pattern changes in size but not in form with change of viewing distance.
Explicitly, the Fraunhofer diffraction equation is
where .
It can be seen that the integral in the above equations is the Fourier transform of the aperture function evaluated at frequencies.
Thus, we can also write the equation in terms of a Fourier transform as:
where is the Fourier transform of . The Fourier transform formulation can be very useful in solving diffraction problems.
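Since the far-field amplitude is the Fourier transform of the aperture function, it can be evaluated numerically with a discrete FFT. The following Python sketch (not part of the original article) does this for a rectangular aperture; the wavelength, grid spacing and aperture dimensions are arbitrary illustrative values.

```python
import numpy as np

wavelength = 633e-9   # illustrative wavelength (m)
N = 1024              # samples per side of the aperture grid
dx = 1e-6             # grid spacing in the aperture plane (m)

# Aperture function A(x', y'): 1 inside a 100 um x 400 um rectangle, 0 outside
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
aperture = ((np.abs(X) < 50e-6) & (np.abs(Y) < 200e-6)).astype(float)

# Far-field (Fraunhofer) amplitude is the 2D Fourier transform of the aperture
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))

# fftfreq returns spatial frequencies in cycles per metre; the corresponding
# direction cosines of the diffracted light are l = lambda * fx, m = lambda * fy
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=dx))
l = wavelength * freqs
m = wavelength * freqs

intensity = np.abs(field) ** 2
intensity /= intensity.max()   # normalise the central maximum to 1
```

Because the result depends only on the direction cosines, the computed pattern has the property noted above: its form does not change with viewing distance, only its scale on the observation plane.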
Another form is:
where represent the observation point and a point in the aperture respectively, and represent the wave vectors of the disturbance at the aperture and of the diffracted waves respectively, and represents the magnitude of the disturbance at the aperture.
In polar coordinates
When the diffracting aperture has circular symmetry, it is useful to use polar rather than Cartesian coordinates.
A point in the aperture has coordinates giving:
and
The complex amplitude at is given by , and the area converts to ρ′ dρ′ dω′, giving
Using the integral representation of the Bessel function:
we have
where the integration over gives since the equation is circularly symmetric, i.e. there is no dependence on .
In this case, we have equal to the Fourier–Bessel or Hankel transform of the aperture function,
Example
Here are given examples of Fraunhofer diffraction with a normally incident monochromatic plane wave.
In each case, the diffracting object is located in the plane, and the complex amplitude of the incident plane wave is given by
where
a is the magnitude of the wave disturbance,
λ is the wavelength,
c is the velocity of light,
t is the time,
k = 2π/λ is the wave number
and the phase is zero at time t = 0.
The time dependent factor is omitted throughout the calculations, as it remains constant, and is averaged out when the intensity is calculated. The intensity at is proportional to the amplitude times its complex conjugate
These derivations can be found in most standard optics books, in slightly different forms using varying notations. A reference is given for each of the systems modelled here. The Fourier transforms used can be found here.
Narrow rectangular slit
The aperture is a slit of width which is located along the -axis,
Solution by integration
Assuming the centre of the slit is located at , the first equation above, for all values of , is:
Using Euler's formula, this can be simplified to:
where . The sinc function is sometimes defined as and this may cause confusion when looking at derivations in different texts.
This can also be written as:
where is the angle between z-axis and the line joining to the origin and when .
Fourier transform solution
The slit can be represented by the rect function as:
The Fourier transform of this function is given by
where is the Fourier transform frequency, and the function is here defined as
The Fourier transform frequency here is , giving
Note that the function is here defined as to maintain consistency.
Intensity
The intensity is proportional to the square of the amplitude, and is therefore
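In the usual textbook form, this single-slit intensity is proportional to sinc²(β) with β = πW sinθ/λ. A minimal numerical sketch follows; the slit width and wavelength are illustrative values chosen here, not quantities taken from the article.

```python
import numpy as np

wavelength = 500e-9     # illustrative wavelength (m)
W = 10e-6               # illustrative slit width (m)

theta = np.linspace(-0.2, 0.2, 2001)            # viewing angle (rad)
beta = np.pi * W * np.sin(theta) / wavelength

# np.sinc(x) is sin(pi*x)/(pi*x), so pass beta/pi to obtain sin(beta)/beta
intensity = np.sinc(beta / np.pi) ** 2

# The first minima lie where W*sin(theta) = +/- lambda
first_minimum = np.arcsin(wavelength / W)       # about 0.05 rad for these values
```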
Apertures
Rectangular aperture
When a rectangular slit of width W and height H is illuminated normally (the slit illuminated at the normal angle) by a monochromatic plane wave of wavelength , the complex amplitude can be found using similar analyses to those in the previous section, applied over two independent orthogonal dimensions as:
The intensity is given by
where the and axes define the transverse directions on the plane of observation or the image plane (described in the above figure), and is the distance between the slit center and the point of observation on the image plane.
In practice, all slits are of finite size and so produce diffraction in both transverse directions, along the (width W defined) and (height H defined) axes. If the height H of the slit is much greater than its width W, then the spacing of the vertical (along the height) diffraction fringes is much less than the spacing of the horizontal (along the width) fringes. If H is large enough that the vertical fringe spacing becomes very small, the vertical fringes are so hard to observe that a person looking at the diffracted intensity pattern on the plane of observation or image plane recognizes only the horizontal fringes, which are narrow in the vertical direction. This is why a slit that is much longer than it is wide, or a slit array such as a diffraction grating, is typically analyzed only in the dimension along the width. If the illuminating beam does not illuminate the whole height of the slit, then the spacing of the vertical fringes is determined by the dimension of the laser beam along the slit height. Close examination of the two-slit pattern below shows that there are very fine vertical diffraction fringes above and below the main spots, as well as the more obvious horizontal fringes.
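Because the rectangular aperture separates into two independent one-dimensional problems, the far-field intensity is the product of two sinc² factors, one set by the width W and one by the height H. The sketch below uses illustrative values (chosen here, not from the article) and shows the aspect-ratio effect just described: the fringes governed by the much larger H are far more closely spaced than those governed by W.

```python
import numpy as np

wavelength = 500e-9     # illustrative wavelength (m)
W, H = 10e-6, 200e-6    # illustrative slit width and height (m), with H >> W

# Direction cosines along the two transverse axes of the observation plane
lx = np.linspace(-0.2, 0.2, 801)
my = np.linspace(-0.2, 0.2, 801)
LX, MY = np.meshgrid(lx, my)

# Separable product of two single-slit (sinc^2) factors
intensity = (np.sinc(W * LX / wavelength) ** 2) * (np.sinc(H * MY / wavelength) ** 2)

# Fringe spacing along each axis scales as lambda / (aperture dimension),
# so the fringes set by H are H/W = 20 times more closely spaced than those set by W
spacing_ratio = H / W
```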
Circular aperture
The aperture has diameter . The complex amplitude in the observation plane is given by
Solution by integration
Using the recurrence relationship
to give
If we substitute
and the limits of the integration become 0 and , we get
Putting , we get
Solution using Fourier–Bessel transform
We can write the aperture function as a step function
The Fourier–Bessel transform for this function is given by the relationship
where is the transform frequency which is equal to and .
Thus, we get
Intensity
The intensity is given by:
Form of the diffraction pattern
This known as the Airy diffraction pattern
The diffracted pattern is symmetric about the normal axis.
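The Airy pattern intensity is proportional to [2 J1(x)/x]² with x = (π d/λ) sin θ for an aperture of diameter d. A small sketch using SciPy's first-order Bessel function follows; the diameter and wavelength are illustrative values, and the symbol d is chosen here rather than taken from the article.

```python
import numpy as np
from scipy.special import j1

wavelength = 500e-9     # illustrative wavelength (m)
d = 100e-6              # illustrative aperture diameter (m)

theta = np.linspace(1e-9, 0.03, 2000)       # start just above zero to avoid 0/0
x = np.pi * d * np.sin(theta) / wavelength

intensity = (2 * j1(x) / x) ** 2            # normalised so the value as theta -> 0 is 1

# The first dark ring of the Airy pattern sits at sin(theta) ~ 1.22 * lambda / d
first_dark_ring = np.arcsin(1.22 * wavelength / d)
```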
Aperture with a Gaussian profile
An aperture with a Gaussian profile, for example, a photographic slide whose transmission has a Gaussian variation, so that the amplitude at a point in the aperture located at a distance r′ from the origin is given by
giving
Solution using Fourier–Bessel transform
The Fourier–Bessel or Hankel transform is defined as
where is the Bessel function of the first kind of order with .
The Hankel transform is
giving
and
Intensity
The intensity is given by:
This function is plotted on the right, and it can be seen that, unlike the diffraction patterns produced by rectangular or circular apertures, it has no secondary rings. This can be used in a process called apodization - the aperture is covered by a filter whose transmission varies as a Gaussian function, giving a diffraction pattern with no secondary rings.
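The absence of secondary rings can be checked numerically: the Fourier transform of a Gaussian aperture is itself Gaussian, so a central cut through the computed far field decays monotonically. The grid size and 1/e radius below are illustrative values chosen for this sketch.

```python
import numpy as np

N, dx = 2048, 0.5e-6                 # grid size and sample spacing (m)
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)

sigma = 20e-6                        # illustrative 1/e radius of the transmission profile
aperture = np.exp(-(X**2 + Y**2) / sigma**2)

far_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
intensity = np.abs(far_field) ** 2
intensity /= intensity.max()

# A cut through the centre decays monotonically: no secondary rings,
# unlike the patterns from hard-edged rectangular or circular apertures
profile = intensity[N // 2]
```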
Slits
Two slits
The pattern which occurs when light diffracted from two slits overlaps is of considerable interest in physics, firstly for its importance in establishing the wave theory of light through Young's interference experiment, and secondly because of its role as a thought experiment in double-slit experiment in quantum mechanics.
Narrow slits
(Figure: two-slit interference using a red laser.)
Assume we have two long slits illuminated by a plane wave of wavelength . The slits are in the plane, parallel to the axis, separated by a distance and are symmetrical about the origin. The width of the slits is small compared with the wavelength.
Solution by integration
The incident light is diffracted by the slits into uniform spherical waves. The waves travelling in a given direction from the two slits have differing phases. The phase of the waves from the upper and lower slits relative to the origin is given by and
The complex amplitude of the summed waves is given by:
Solution using Fourier transform
The aperture can be represented by the function:
where is the delta function.
We have
and
giving
This is the same expression as that derived above by integration.
Intensity
This gives the intensity of the combined waves as:
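For two identical narrow slits separated by a distance S, the standard result is an intensity proportional to cos²(πS sinθ/λ). The sketch below evaluates it; S and the wavelength are illustrative values chosen here, not figures from the article.

```python
import numpy as np

wavelength = 633e-9    # illustrative wavelength (m)
S = 50e-6              # illustrative centre-to-centre slit separation (m)

theta = np.linspace(-0.05, 0.05, 4001)
half_phase = np.pi * S * np.sin(theta) / wavelength   # half the phase difference between the slits

intensity = np.cos(half_phase) ** 2                   # normalised two-slit fringe pattern

# Bright fringes occur where S*sin(theta) = m*lambda for integer m
fringe_angles = [np.arcsin(m * wavelength / S) for m in range(4)]
```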
Slits of finite width
The width of the slits, is finite.
Solution by integration
The diffracted pattern is given by:
Solution using Fourier transform
The aperture function is given by:
The Fourier transform of this function is given by
where is the Fourier transform frequency, and the function is here defined as
and
We have
or
This is the same expression as was derived by integration.
Intensity
The intensity is given by:
It can be seen that the form of the intensity pattern is the product of the individual slit diffraction pattern, and the interference pattern which would be obtained with slits of negligible width. This is illustrated in the image at the right which shows single slit diffraction by a laser beam, and also the diffraction/interference pattern given by two identical slits.
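This product structure is easy to reproduce numerically: the single-slit sinc² envelope multiplies the cos² two-slit interference term. The slit width, separation and wavelength below are illustrative values chosen for this sketch.

```python
import numpy as np

wavelength = 633e-9   # illustrative wavelength (m)
W = 10e-6             # illustrative slit width (m)
S = 50e-6             # illustrative slit separation (m)

theta = np.linspace(-0.1, 0.1, 4001)
s = np.sin(theta)

envelope = np.sinc(W * s / wavelength) ** 2                 # single-slit diffraction envelope
interference = np.cos(np.pi * S * s / wavelength) ** 2      # two-slit interference term

intensity = envelope * interference    # fine fringes under a broad single-slit envelope
```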
Gratings
A grating is defined in Principles of Optics by Born and Wolf as "any arrangement which imposes on an incident wave a periodic variation of amplitude or phase, or both".
Narrow slit grating
A simple grating consists of a screen with slits whose width is significantly less than the wavelength of the incident light with slit separation of .
Solution by integration
The complex amplitude of the diffracted wave at an angle is given by:
since this is the sum of a geometric series.
Solution using Fourier transform
The aperture is given by
The Fourier transform of this function is:
Intensity
Figure: Detail of the main maximum in 20- and 50-slit narrow grating diffraction patterns.
The intensity is given by:
This function has a series of maxima and minima. There are regularly spaced "principal maxima", and a number of much smaller maxima in between the principal maxima. The principal maxima occur when
and the main diffracted beams therefore occur at angles:
This is the grating equation for normally incident light.
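A small worked example of the grating equation, assuming the usual form d sin θ = m λ for slit separation d at normal incidence; the numerical values are assumptions.

```python
import numpy as np

wavelength = 500e-9
d = 2e-6                 # assumed slit separation (a 500 lines/mm grating)

# Grating equation at normal incidence: d sin(theta) = m * lambda
for m in range(5):
    s = m * wavelength / d
    if abs(s) <= 1:                                   # only propagating orders
        print(f"order m = {m}: theta = {np.degrees(np.arcsin(s)):6.2f} degrees")
```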
The number of small intermediate maxima between adjacent principal maxima is equal to the number of slits minus two, and their size and shape are also determined by the number of slits.
The form of the pattern for 50 slits is shown in the first figure.
The detailed structure of the patterns for 20-slit and 50-slit gratings is illustrated in the second diagram.
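The behaviour described here can be reproduced with the standard N-slit grating function, in which the intensity varies as [sin(N β)/sin(β)]² with β = π d sin θ / λ; the symbols and the numbers below are assumptions.

```python
import numpy as np

wavelength = 500e-9
d = 2e-6                 # assumed slit separation
N = 20                   # number of slits

theta = np.linspace(-0.3, 0.3, 200001)
beta = np.pi * d * np.sin(theta) / wavelength

# N-slit grating function [sin(N beta) / sin(beta)]^2, normalised to its peak N^2.
with np.errstate(divide="ignore", invalid="ignore"):
    grating = (np.sin(N * beta) / np.sin(beta)) ** 2
grating = np.where(np.isfinite(grating), grating, N ** 2)  # limiting value at beta = m pi

intensity = grating / N ** 2
# Principal maxima (intensity 1) occur where d sin(theta) = m * lambda;
# between neighbouring principal maxima the function has much smaller local maxima.
```

Changing N from 20 to 50 in this sketch narrows the principal maxima and increases the number of small maxima between them, as in the detailed diagram.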
Finite width slit grating
The grating now has N slits of finite width and uniform spacing.
Solution using integration
The amplitude is given by:
Solution using Fourier transform
The aperture function can be written as:
Using the convolution theorem, which says that the Fourier transform of the convolution of two functions is the product of their individual Fourier transforms, we can write the aperture function as
The amplitude is then given by the Fourier transform of this expression as:
Intensity
The intensity is given by:
The diagram shows the diffraction pattern for a grating with 20 slits, where the width of the slits is 1/5th of the slit separation. The size of the main diffracted peaks is modulated with the diffraction pattern of the individual slits.
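A sketch of this modulation, assuming the finite-width pattern is the narrow-slit grating function multiplied by the single-slit sinc² envelope; the slit width is set to one fifth of the separation to match the diagram, and all symbols and values are assumptions.

```python
import numpy as np

wavelength = 500e-9
d = 2e-6                 # assumed slit separation
W = d / 5                # slit width one fifth of the separation, as in the diagram
N = 20                   # 20 slits, as in the diagram

theta = np.linspace(-0.3, 0.3, 200001)
s = np.sin(theta)
beta = np.pi * d * s / wavelength

with np.errstate(divide="ignore", invalid="ignore"):
    grating = (np.sin(N * beta) / np.sin(beta)) ** 2
grating = np.where(np.isfinite(grating), grating, N ** 2)

envelope = np.sinc(W * s / wavelength) ** 2   # single-slit diffraction envelope

# Main diffracted peaks modulated by the single-slit pattern.
intensity = envelope * grating / N ** 2
```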
Other gratings
The Fourier transform method above can be used to find the form of the diffraction for any periodic structure where the Fourier transform of the structure is known. Goodman uses this method to derive expressions for the diffraction pattern obtained with sinusoidal amplitude and phase modulation gratings. These are of particular interest in holography.
Extensions
Non-normal illumination
If the aperture is illuminated by a monochromatic plane wave incident in a direction , the first version of the Fraunhofer equation above becomes:
The equations used to model each of the systems above are altered only by changes in the multiplying constants, so the diffracted light patterns have the same form, except that they are now centred on the direction of the incident plane wave.
The grating equation becomes
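A small check of the oblique-incidence case, assuming the grating equation takes the form d (sin θ_m − sin θ_0) = m λ for incidence angle θ_0 (the sign convention, symbols, and numbers are assumptions); the zero order emerges along the incident direction, consistent with the pattern being centred on it.

```python
import numpy as np

wavelength = 500e-9
d = 2e-6
theta0 = np.radians(10.0)     # assumed angle of incidence

# Assumed oblique-incidence grating equation: d (sin(theta_m) - sin(theta0)) = m * lambda
for m in range(-2, 3):
    s = np.sin(theta0) + m * wavelength / d
    if abs(s) <= 1:
        print(f"order m = {m:+d}: theta = {np.degrees(np.arcsin(s)):6.2f} degrees")
```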
Non-monochromatic illumination
In all of the above examples of Fraunhofer diffraction, the angular size of the diffraction structure scales with the wavelength of the illuminating light: increasing the wavelength enlarges the pattern, and reducing the wavelength shrinks it. If the light is not monochromatic, i.e. it consists of a range of different wavelengths, each wavelength is diffracted into a pattern of a slightly different size from those of its neighbours. If the spread of wavelengths is significantly smaller than the mean wavelength, the individual patterns vary very little in size, and so the basic diffraction pattern still appears, with slightly reduced contrast. As the spread of wavelengths is increased, the number of "fringes" which can be observed is reduced.
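The loss of fringe visibility with increasing wavelength spread can be sketched by incoherently summing two-slit patterns over a band of wavelengths; the slit separation, the band, and the cos² fringe form are assumptions used only for illustration.

```python
import numpy as np

S = 0.25e-3                                        # assumed slit separation
wavelengths = np.linspace(480e-9, 520e-9, 81)      # assumed 40 nm band around 500 nm

theta = np.linspace(0, 0.02, 4001)
s = np.sin(theta)

# Each wavelength forms its own fringe pattern; the observed intensity is the
# incoherent sum over the band.
intensity = np.zeros_like(theta)
for lam in wavelengths:
    intensity += np.cos(np.pi * S * s / lam) ** 2
intensity /= len(wavelengths)

# Near theta = 0 the patterns for all wavelengths coincide and the fringes are
# sharp; at larger angles they drift out of step and the modulation washes out,
# so fewer fringes remain visible.
```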
See also
Kirchhoff's diffraction formula
Fresnel diffraction
Huygens principle
Airy disc
Fourier optics
References
Diffraction
Fourier analysis | Fraunhofer diffraction equation | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,892 | [
"Crystallography",
"Diffraction",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
32,269,489 | https://en.wikipedia.org/wiki/AGi32 | AGi32 is a simulation tool used for designing lighting projects and calculating the amount of light that will be delivered based on user-set parameters. The resulting calculations are commonly referred to as lighting layouts or point-by-points. AGi32 can calculate the amount of light that will be delivered in any kind of design, interior or exterior, and incorporate surrounding objects, obstructions, and varying shapes like vaulted ceilings or rooms in non-linear shapes. It aids lighting designers, engineers, and electrical contractors in the evaluation of lighting designs for projects before they are built.
Features
In addition to calculating the amount of light in a space, AGi32 can create photo-realistic renderings of how an area will look once light fixtures are installed. It can compute the amount of energy used as well as glare metrics, and it can accommodate a variety of situational design needs, such as custom aiming and numerous rooms and objects in the same project; it also includes utilities for estimating fixture spacing. The software is designed to determine the amount of light reaching a designated surface or work plane for any type of application. Obtrusive light (exterior) may be calculated and compared against several US and international standards for code compliance. Roadway lighting grids may be laid out per North American (IES) specifications or per several different global lighting standards.
AGi32 can import AutoCAD files up to 2018 DWG or DXF format, and also export files into DWG or DXF format (up to the 2018 version of AutoCAD).
Industry Testing
AGi32 has been independently tested against the International Commission on Illumination (CIE) benchmark CIE 171:2006, Test Cases to Assess the Accuracy of Lighting Computer Programs; results have been published for AGi32 version 1.94. (CIE)
References
External links
Lighting Analysts
AGi32 support solutions
Simulation software
Light | AGi32 | [
"Physics"
] | 382 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Waves",
"Light"
] |
32,270,403 | https://en.wikipedia.org/wiki/Chromo%20shadow%20domain | In molecular biology, the chromo shadow domain is a protein domain which is distantly related to the chromodomain. It is always found in association with a chromodomain. Proteins containing a chromo shadow domain include Drosophila and human heterochromatin protein Su(var)205 (HP1); and mammalian modifier 1 and modifier 2.
Chromo shadow domains self-aggregate, bringing together the nucleosomes to which their proteins are bound and thus condensing the chromatin region with which they are associated. Condensed chromatin cannot be transcribed, because transcription factors and enzymes are unable to access the DNA sequence in this form. Hence chromo shadow domain-containing proteins repress gene transcription.
References
External links
Protein domains | Chromo shadow domain | [
"Biology"
] | 166 | [
"Protein domains",
"Protein classification"
] |
32,270,557 | https://en.wikipedia.org/wiki/NADH%20dehydrogenase%20%28ubiquinone%29%201%20alpha%20subcomplex%20subunit%207 | In molecular biology, the NADH dehydrogenase (ubiquinone) 1 alpha subcomplex subunit 7 family of proteins (also known as NADH-ubiquinone oxidoreductase subunit B14.5a or Complex I-B14.5a) form a part of NADH dehydrogenase (complex I). In mammals, it is encoded by the NDUFA7 gene.
References
Protein families | NADH dehydrogenase (ubiquinone) 1 alpha subcomplex subunit 7 | [
"Biology"
] | 93 | [
"Protein families",
"Protein classification"
] |
32,271,371 | https://en.wikipedia.org/wiki/Cyclin-dependent%20kinase%20regulatory%20subunit%20family | In molecular biology, the cyclin-dependent kinase regulatory subunit family is a family of proteins consisting of the regulatory subunits of cyclin-dependent protein kinases.
In eukaryotes, cyclin-dependent protein kinases interact with cyclins to regulate cell cycle progression, and are required for the G1 and G2 stages of cell division. The proteins bind to a regulatory subunit, cyclin-dependent kinase regulatory subunit (CKS), which is essential for their function. This regulatory subunit is a small protein of 79 to 150 residues. In yeast (gene CKS1) and in fission yeast (gene suc1) a single isoform is known, while mammals have two highly related isoforms. The regulatory subunits exist as hexamers, formed by the symmetrical assembly of 3 interlocked homodimers, creating an unusual 12-stranded beta-barrel structure. Through the barrel centre runs a tunnel 12 Å in diameter, lined by 6 exposed helix pairs. Six kinase units can be modelled to bind the hexameric structure, which may thus act as a hub for cyclin-dependent protein kinase multimerisation.
This family includes the CKS1B and CKS2 genes in mammals.
References
External links
Cell cycle regulators
Protein domains | Cyclin-dependent kinase regulatory subunit family | [
"Chemistry",
"Biology"
] | 262 | [
"Protein domains",
"Cell cycle regulators",
"Protein classification",
"Signal transduction"
] |