| id int64 (39–79M) | url string (len 32–168) | text string (len 7–145k) | source string (len 2–105) | categories list (len 1–6) | token_count int64 (3–32.2k) | subcategories list (len 0–27) |
|---|---|---|---|---|---|---|
2,023,440 | https://en.wikipedia.org/wiki/FR-2 | FR-2 (Flame Resistant 2) is a NEMA designation for synthetic resin bonded paper, a composite material made of paper impregnated with a plasticized phenol formaldehyde resin, used in the manufacture of printed circuit boards. Its main properties are similar to those of NEMA grade XXXP (MIL-P-3115) material, and it can be substituted for the latter in many applications.
Applications
FR-2 sheet with copper foil lamination on one or both sides is widely used to build low-end consumer electronics equipment. While its electrical and mechanical properties are inferior to those of epoxy-bonded fiberglass, FR-4, it is significantly cheaper. It is not suitable for devices installed in vehicles, as continuous vibration can make cracks propagate, causing hairline fractures in copper circuit traces. Without copper foil lamination, FR-2 is sometimes used for simple structural shapes and electrical insulation.
Properties
Fabrication
FR-2 can be machined by drilling, sawing, milling and hot punching. Cold punching and shearing are not recommended, as they leave a ragged edge and tend to cause cracking. Tools made of high-speed steel can be used, although tungsten carbide tooling is preferred for high volume production.
Adequate ventilation or respiratory protection is mandatory during high-speed machining, as the material gives off toxic vapors.
Trade names and synonyms
Carta
Haefelyt
Lamitex
Paxolin, Paxoline
Pertinax, taken over by Lamitec and Dr. Dietrich Müller GmbH in 2014
Getinax (in the former USSR)
Phenolic paper
Preßzell
Repelit
Synthetic resin bonded paper (SRBP)
Turbonit
Veroboard
Wahnerit
See also
Formica (plastic)
Micarta
References
Further reading
Composite materials
Printed circuit board manufacturing
Synthetic paper | FR-2 | [
"Physics",
"Chemistry",
"Engineering"
] | 378 | [
"Synthetic materials",
"Composite materials",
"Printed circuit board manufacturing",
"Materials",
"Electronic engineering",
"Electrical engineering",
"Synthetic paper",
"Matter"
] |
2,024,237 | https://en.wikipedia.org/wiki/Sulfide%20stress%20cracking | Sulfide stress cracking (SSC) is a form of hydrogen embrittlement, which is a cathodic cracking mechanism. It should not be confused with stress corrosion cracking, which is an anodic cracking mechanism. Susceptible alloys, especially steels, react with hydrogen sulfide (H2S), forming metal sulfides (MeS) and atomic hydrogen (H•) as corrosion byproducts. Atomic hydrogen either combines to form H2 at the metal surface or diffuses into the metal matrix. Since sulfur is a hydrogen recombination poison, the amount of atomic hydrogen which recombines to form H2 on the surface is greatly reduced, thereby increasing the amount of diffusion of atomic hydrogen into the metal matrix. This aspect is what makes wet H2S environments so severe.
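Schematically, using the generic metal Me from the description above (a sketch of the overall surface reaction, not a quotation from the original article):

$$\mathrm{Me} + \mathrm{H_2S} \longrightarrow \mathrm{MeS} + 2\,\mathrm{H}^{\bullet}, \qquad 2\,\mathrm{H}^{\bullet} \longrightarrow \mathrm{H_2}\ \text{(recombination, suppressed by sulfur)}.$$

With recombination poisoned, a larger fraction of the atomic hydrogen diffuses into the metal instead of leaving the surface as H2.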
Since SSC is a form of hydrogen embrittlement, susceptibility to cracking is greatest at or slightly below ambient temperature.
Sulfide stress cracking has special importance in the gas and oil industry, as the materials being processed there (natural gas and crude oil) often contain considerable amounts of hydrogen sulfide. Equipment that comes in contact with H2S environments can be rated for sour service with adherence to NACE MR0175/ISO 15156 for oil and gas production environments or NACE MR0103/ISO 17945 for oil and gas refining environments.
"High Temperature Hydrogen Attack" (HTHA) does not rely on atomic hydrogen. At high temperature and high hydrogen partial pressure, hydrogen can diffuse into carbon steel alloys. In susceptible alloys, hydrogen combines with carbon within the alloy and forms methane. The methane molecules create a pressure buildup in the metal lattice voids, which leads to embrittlement and even cracking of the metal.
See also
Corrosion engineering
Crevice corrosion
Pitting corrosion
Sulfidation
References
Corrosion
Materials degradation | Sulfide stress cracking | [
"Chemistry",
"Materials_science",
"Engineering"
] | 380 | [
"Metallurgy",
"Materials science",
"Corrosion",
"Electrochemistry",
"Electrochemistry stubs",
"Materials degradation",
"Physical chemistry stubs",
"Chemical process stubs"
] |
2,024,795 | https://en.wikipedia.org/wiki/Mathematical%20formulation%20of%20the%20Standard%20Model | This article describes the mathematics of the Standard Model of particle physics, a gauge quantum field theory containing the internal symmetries of the unitary product group SU(3) × SU(2) × U(1). The theory is commonly viewed as describing the fundamental set of particles – the leptons, quarks, gauge bosons and the Higgs boson.
The Standard Model is renormalizable and mathematically self-consistent; however, despite having huge and continued successes in providing experimental predictions, it does leave some unexplained phenomena. In particular, although the physics of special relativity is incorporated, general relativity is not, and the Standard Model will fail at energies or distances where the graviton is expected to emerge. Therefore, in a modern field theory context, it is seen as an effective field theory.
Quantum field theory
The standard model is a quantum field theory, meaning its fundamental objects are quantum fields, which are defined at all points in spacetime. QFT treats particles as excited states (also called quanta) of their underlying quantum fields, which are more fundamental than the particles. These fields are
the fermion fields, ψ, which account for "matter particles";
the electroweak boson fields W1, W2, W3, and B;
the gluon field, G; and
the Higgs field, φ.
That these are quantum rather than classical fields has the mathematical consequence that they are operator-valued. In particular, values of the fields generally do not commute. As operators, they act upon a quantum state (ket vector).
Alternative presentations of the fields
As is common in quantum theory, there is more than one way to look at things. At first the basic fields given above may not seem to correspond well with the "fundamental particles" in the chart above, but there are several alternative presentations that, in particular contexts, may be more appropriate than those that are given above.
Fermions
Rather than having one fermion field , it can be split up into separate components for each type of particle. This mirrors the historical evolution of quantum field theory, since the electron component (describing the electron and its antiparticle the positron) is then the original field of quantum electrodynamics, which was later accompanied by and fields for the muon and tauon respectively (and their antiparticles). Electroweak theory added , and for the corresponding neutrinos. The quarks add still further components. In order to be four-spinors like the electron and other lepton components, there must be one quark component for every combination of flavor and color, bringing the total to 24 (3 for charged leptons, 3 for neutrinos, and 2·3·3 = 18 for quarks). Each of these is a four component bispinor, for a total of 96 complex-valued components for the fermion field.
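As a quick bookkeeping check of the component count given above (a sketch; the grouping into charged leptons, neutrinos, and quarks follows the paragraph, and the factor of 4 is the number of bispinor components):

```python
# Count the independent components of the Standard Model fermion field,
# with all three generations included, as in the paragraph above.

charged_leptons = 3                      # e, mu, tau
neutrinos = 3                            # nu_e, nu_mu, nu_tau
quark_flavors = 2 * 3                    # up- and down-type, three generations
quark_colors = 3
quarks = quark_flavors * quark_colors    # 18

four_spinor_fields = charged_leptons + neutrinos + quarks   # 24
bispinor_size = 4                        # complex components per Dirac bispinor

total_complex_components = four_spinor_fields * bispinor_size
print(four_spinor_fields, total_complex_components)          # 24 96
```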
An important definition is the barred fermion field ψ̄, which is defined as ψ̄ = ψ†γ0, where ψ† denotes the Hermitian adjoint of ψ and γ0 is the zeroth gamma matrix. If ψ is thought of as an n × 1 matrix then ψ̄ should be thought of as a 1 × n matrix.
A chiral theory
An independent decomposition of ψ is that into chirality components:
$$\psi_{\mathrm{L}} = \frac{1 - \gamma^5}{2}\,\psi, \qquad \psi_{\mathrm{R}} = \frac{1 + \gamma^5}{2}\,\psi,$$
where γ5 is the fifth gamma matrix. This is very important in the Standard Model because left and right chirality components are treated differently by the gauge interactions.
In particular, under weak isospin SU(2) transformations the left-handed particles are weak-isospin doublets, whereas the right-handed are singlets – i.e. the weak isospin of ψR is zero. Put more simply, the weak interaction could rotate e.g. a left-handed electron into a left-handed neutrino (with emission of a W−), but could not do so with the same right-handed particles. As an aside, the right-handed neutrino originally did not exist in the standard model – but the discovery of neutrino oscillation implies that neutrinos must have mass, and since chirality can change during the propagation of a massive particle, right-handed neutrinos must exist in reality. This does not however change the (experimentally proven) chiral nature of the weak interaction.
Furthermore, the U(1) weak hypercharge interaction acts differently on ψL and ψR (because they have different weak hypercharges).
Mass and interaction eigenstates
A distinction can thus be made between, for example, the mass and interaction eigenstates of the neutrino. The former is the state that propagates in free space, whereas the latter is the different state that participates in interactions. Which is the "fundamental" particle? For the neutrino, it is conventional to define the "flavor" (νe, νμ, or ντ) by the interaction eigenstate, whereas for the quarks we define the flavor (up, down, etc.) by the mass state. We can switch between these states using the CKM matrix for the quarks, or the PMNS matrix for the neutrinos (the charged leptons on the other hand are eigenstates of both mass and flavor).
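A toy numerical illustration of switching between mass and interaction (flavor) eigenstates (a sketch only; the 2×2 rotation used here is a stand-in for the full 3×3 PMNS/CKM matrices, and the angle is a hypothetical value, not a measured one):

```python
import numpy as np

theta = 0.58  # hypothetical mixing angle in radians, for illustration only

# Unitary mixing matrix relating flavor states to mass states
U = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

mass_states = np.eye(2)            # |nu_1>, |nu_2> as basis vectors
flavor_states = U @ mass_states    # |nu_e>, |nu_mu> as superpositions

print(flavor_states)                      # each flavor state mixes both mass states
print(np.allclose(U @ U.T, np.eye(2)))    # True: the change of basis is unitary
```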
As an aside, if a complex phase term exists within either of these matrices, it will give rise to direct CP violation, which could explain the dominance of matter over antimatter in our current universe. This has been proven for the CKM matrix, and is expected for the PMNS matrix.
Positive and negative energies
Finally, the quantum fields are sometimes decomposed into "positive" and "negative" energy parts: ψ = ψ+ + ψ−. This is not so common when a quantum field theory has been set up, but often features prominently in the process of quantizing a field theory.
Bosons
Due to the Higgs mechanism, the electroweak boson fields W1, W2, W3, and B "mix" to create the states that are physically observable. To retain gauge invariance, the underlying fields must be massless, but the observable states can gain masses in the process. These states are (the explicit combinations are sketched after this list):
The massive neutral (Z) boson:
The massless neutral boson:
The massive charged W bosons:
where θW is the Weinberg angle.
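The explicit combinations were not preserved in the lines above; in a common sign convention (quoted here as a standard reconstruction, not verbatim from the original) they read

$$Z_\mu = \cos\theta_W\, W^3_\mu - \sin\theta_W\, B_\mu, \qquad A_\mu = \sin\theta_W\, W^3_\mu + \cos\theta_W\, B_\mu, \qquad W^{\pm}_\mu = \frac{1}{\sqrt{2}}\left(W^1_\mu \mp i\, W^2_\mu\right).$$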
The A field is the photon, which corresponds classically to the well-known electromagnetic four-potential – i.e. the electric and magnetic fields. The Z field actually contributes in every process the photon does, but due to its large mass, the contribution is usually negligible.
Perturbative QFT and the interaction picture
Much of the qualitative description of the standard model in terms of "particles" and "forces" comes from the perturbative quantum field theory view of the model. In this, the Lagrangian is decomposed as L = L0 + LI into separate free-field and interaction Lagrangians. The free fields describe particles in isolation, whereas processes involving several particles arise through interactions. The idea is that the state vector should only change when particles interact, meaning a free particle is one whose quantum state is constant. This corresponds to the interaction picture in quantum mechanics.
In the more common Schrödinger picture, even the states of free particles change over time: typically the phase changes at a rate that depends on their energy. In the alternative Heisenberg picture, state vectors are kept constant, at the price of having the operators (in particular the observables) be time-dependent. The interaction picture constitutes an intermediate between the two, where some time dependence is placed in the operators (the quantum fields) and some in the state vector. In QFT, the former is called the free field part of the model, and the latter is called the interaction part. The free field model can be solved exactly, and then the solutions to the full model can be expressed as perturbations of the free field solutions, for example using the Dyson series.
It should be observed that the decomposition into free fields and interactions is in principle arbitrary. For example, renormalization in QED modifies the mass of the free field electron to match that of a physical electron (with an electromagnetic field), and will in doing so add a term to the free field Lagrangian which must be cancelled by a counterterm in the interaction Lagrangian, that then shows up as a two-line vertex in the Feynman diagrams. This is also how the Higgs field is thought to give particles mass: the part of the interaction term that corresponds to the nonzero vacuum expectation value of the Higgs field is moved from the interaction to the free field Lagrangian, where it looks just like a mass term having nothing to do with the Higgs field.
Free fields
Under the usual free/interaction decomposition, which is suitable for low energies, the free fields obey the following equations (written out explicitly in the sketch after this list):
The fermion field ψ satisfies the Dirac equation for each type of fermion.
The photon field A satisfies the wave equation.
The Higgs field satisfies the Klein–Gordon equation.
The weak interaction fields satisfy the Proca equation.
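Written out explicitly (a sketch under standard conventions; the mass symbols and the Feynman-gauge choice for the photon are assumptions, since the explicit forms are not preserved in the list above):

$$(i\gamma^\mu \partial_\mu - m_f)\,\psi_f = 0 \ \ \text{(Dirac)}, \qquad \partial_\mu \partial^\mu A^\nu = 0 \ \ \text{(wave equation, Feynman gauge)},$$
$$(\partial_\mu \partial^\mu + m_H^2)\,\phi = 0 \ \ \text{(Klein–Gordon)}, \qquad \partial_\mu(\partial^\mu W^\nu - \partial^\nu W^\mu) + m_W^2\, W^\nu = 0 \ \ \text{(Proca)}.$$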
These equations can be solved exactly. One usually does so by first considering solutions that are periodic with some period L along each spatial axis; taking the limit L → ∞ afterwards lifts this periodicity restriction.
In the periodic case, the solution for a field (any of the above) can be expressed as a Fourier series of the form
where:
is a normalization factor; for the fermion field it is , where is the volume of the fundamental cell considered; for the photon field it is .
The sum over is over all momenta consistent with the period , i.e., over all vectors where are integers.
The sum over covers other degrees of freedom specific for the field, such as polarization or spin; it usually comes out as a sum from to or from to .
is the relativistic energy for a momentum quantum of the field, when the rest mass is .
and are annihilation and creation operators respectively for "a-particles" and "b-particles" respectively of momentum ; "b-particles" are the antiparticles of "a-particles". Different fields have different "a-" and "b-particles". For some fields, and are the same.
and are non-operators that carry the vector or spinor aspects of the field (where relevant).
is the four-momentum for a quantum with momentum . denotes an inner product of four-vectors.
In the limit , the sum would turn into an integral with help from the hidden inside . The numeric value of also depends on the normalization chosen for and .
Technically, the daggered operator is the Hermitian adjoint of the corresponding operator in the inner product space of ket vectors. The identification of creation and annihilation operators comes from comparing conserved quantities for a state before and after one of these has acted upon it. A creation operator can, for example, be seen to add one particle, because it adds one to the eigenvalue of the a-particle number operator, and the momentum of that particle ought to be the mode momentum, since the eigenvalue of the vector-valued momentum operator increases by that much. For these derivations, one starts out with expressions for the operators in terms of the quantum fields. That the operators with daggers are creation operators and those without are annihilation operators is a convention, imposed by the sign of the commutation relations postulated for them.
An important step in preparation for calculating in perturbative quantum field theory is to separate the "operator" factors and above from their corresponding vector or spinor factors and . The vertices of Feynman graphs come from the way that and from different factors in the interaction Lagrangian fit together, whereas the edges come from the way that the s and s must be moved around in order to put terms in the Dyson series on normal form.
Interaction terms and the path integral approach
The Lagrangian can also be derived without using creation and annihilation operators (the "canonical" formalism) by using a path integral formulation, pioneered by Feynman building on the earlier work of Dirac. Feynman diagrams are pictorial representations of interaction terms. A quick derivation is presented in the article on Feynman diagrams.
Lagrangian formalism
We can now give some more detail about the aforementioned free and interaction terms appearing in the Standard Model Lagrangian density. Any such term must be both gauge and reference-frame invariant, otherwise the laws of physics would depend on an arbitrary choice of gauge or on the frame of an observer. Therefore, the global Poincaré symmetry, consisting of translational symmetry, rotational symmetry and the inertial reference frame invariance central to the theory of special relativity must apply. The local SU(3) × SU(2) × U(1) gauge symmetry is the internal symmetry. The three factors of the gauge symmetry together give rise to the three fundamental interactions, after some appropriate relations have been defined, as we shall see.
Kinetic terms
A free particle can be represented by a mass term, and a kinetic term that relates to the "motion" of the fields.
Fermion fields
The kinetic term for a Dirac fermion is $i\bar{\psi}\gamma^\mu\partial_\mu\psi$,
where the notations are carried from earlier in the article. can represent any, or all, Dirac fermions in the standard model. Generally, as below, this term is included within the couplings (creating an overall "dynamical" term).
Gauge fields
For the spin-1 fields, first define the field strength tensor
for a given gauge field (here we use ), with gauge coupling constant . The quantity is the structure constant of the particular gauge group, defined by the commutator
where ta are the generators of the group. In an abelian (commutative) group (such as the U(1) we use here) the structure constants vanish, since the generators all commute with each other. Of course, this is not the case in general – the standard model includes the non-Abelian SU(2) and SU(3) groups (such groups lead to what is called a Yang–Mills gauge theory).
We need to introduce three gauge fields corresponding to each of the subgroups of SU(3) × SU(2) × U(1).
The gluon field tensor will be denoted by , where the index labels elements of the representation of color SU(3). The strong coupling constant is conventionally labelled (or simply where there is no ambiguity). The observations leading to the discovery of this part of the Standard Model are discussed in the article in quantum chromodynamics.
The notation will be used for the gauge field tensor of where runs over the generators of this group. The coupling can be denoted or again simply . The gauge field will be denoted by .
The gauge field tensor for the of weak hypercharge will be denoted by , the coupling by , and the gauge field by .
The kinetic term can now be written as
where the traces are over the SU(2) and SU(3) indices hidden in W and G respectively. The two-index objects are the field strengths derived from the W and G vector fields. There are also two extra hidden parameters: the theta angles for SU(2) and SU(3).
Coupling terms
The next step is to "couple" the gauge fields to the fermions, allowing for interactions.
Electroweak sector
The electroweak sector interacts with the symmetry group U(1) × SU(2)L, where the subscript L indicates coupling only to left-handed fermions.
where B is the U(1) gauge field; YW is the weak hypercharge (the generator of the U(1) group); W is the three-component SU(2) gauge field; and the components of τ are the Pauli matrices (infinitesimal generators of the SU(2) group) whose eigenvalues give the weak isospin. Note that we have to redefine a new U(1) symmetry of weak hypercharge, different from QED, in order to achieve the unification with the weak force. The electric charge Q, the third component of weak isospin T3 (also called Tz or I3) and the weak hypercharge YW are related by
Q = T3 + ½ YW (or, by the alternative convention, Q = T3 + YW). The first convention, used in this article, is equivalent to the earlier Gell-Mann–Nishijima formula. It makes the hypercharge twice the average charge of a given isomultiplet.
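A small numerical check of this relation for the first-generation leptons (a sketch; the hypercharge assignments below are the standard ones, stated here as assumptions since they are not listed in the text):

```python
# Q = T3 + Y/2, checked for the left-handed lepton doublet and the
# right-handed electron singlet (standard hypercharge assignments assumed).
particles = {
    # name: (T3, Y)
    "nu_e (left-handed)": (+0.5, -1),
    "e    (left-handed)": (-0.5, -1),
    "e    (right-handed)": (0.0, -2),
}

for name, (t3, y) in particles.items():
    q = t3 + y / 2
    print(name, "Q =", q)   # 0 for the neutrino, -1 for both electron states
```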
One may then define the conserved current for weak isospin as
and for weak hypercharge as
where is the electric current and the third weak isospin current. As explained above, these currents mix to create the physically observed bosons, which also leads to testable relations between the coupling constants.
To explain this in a simpler way, we can see the effect of the electroweak interaction by picking out terms from the Lagrangian. We see that the SU(2) symmetry acts on each (left-handed) fermion doublet contained in , for example
where the particles are understood to be left-handed, and where
This is an interaction corresponding to a "rotation in weak isospin space" or, in other words, a transformation between eL and νeL via emission of a W boson. The U(1) symmetry, on the other hand, is similar to electromagnetism, but acts on all "weak hypercharged" fermions (both left- and right-handed) via the neutral Z0, as well as on the charged fermions via the photon.
Quantum chromodynamics sector
The quantum chromodynamics (QCD) sector defines the interactions between quarks and gluons, with SU(3) symmetry, generated by Ta. Since leptons do not interact with gluons, they are not affected by this sector. The Dirac Lagrangian of the quarks coupled to the gluon fields is given by
where and are the Dirac spinors associated with up and down-type quarks, and other notations are continued from the previous section.
Mass terms and the Higgs mechanism
Mass terms
The mass term arising from the Dirac Lagrangian (for any fermion ψ) is −mψ̄ψ, which is not invariant under the electroweak symmetry. This can be seen by writing ψ in terms of left- and right-handed components (skipping the actual calculation):
i.e. contributions from ψ̄LψL and ψ̄RψR terms do not appear. We see that the mass-generating interaction is achieved by constant flipping of particle chirality. The spin-half particles have no right/left chirality pair with the same representations and equal and opposite weak hypercharges, so assuming these gauge charges are conserved in the vacuum, none of the spin-half particles could ever swap chirality, and must remain massless. Additionally, we know experimentally that the W and Z bosons are massive, but a boson mass term contains a combination of the gauge fields that clearly depends on the choice of gauge. Therefore, none of the standard model fermions or bosons can "begin" with mass, but must acquire it by some other mechanism.
Higgs mechanism
The solution to both these problems comes from the Higgs mechanism, which involves scalar fields (the number of which depend on the exact form of Higgs mechanism) which (to give the briefest possible description) are "absorbed" by the massive bosons as degrees of freedom, and which couple to the fermions via Yukawa coupling to create what looks like mass terms.
In the Standard Model, the Higgs field is a complex scalar doublet of the group SU(2)L:
where the superscripts + and 0 indicate the electric charge Q of the components. The weak hypercharge YW of both components is 1.
The Higgs part of the Lagrangian is
where λ > 0 and μ2 > 0, so that the mechanism of spontaneous symmetry breaking can be used. There is a parameter here, at first hidden within the shape of the potential, that is very important. In a unitarity gauge one can set φ+ = 0 and make φ0 real. Then the non-vanishing vacuum expectation value v of the Higgs field appears. v has units of mass, and it is the only parameter in the Standard Model that is not dimensionless. It is also much smaller than the Planck scale and about twice the Higgs mass, setting the scale for the mass of all other particles in the Standard Model. This is the only real fine-tuning to a small nonzero value in the Standard Model. Quadratic terms in the W and B fields arise, which give masses to the W and Z bosons:
The mass of the Higgs boson itself is given by
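The mass expressions themselves are not preserved above; in the convention where v is the vacuum expectation value and g, g' the SU(2) and U(1) couplings, the standard results (quoted here as a hedged reconstruction, not verbatim from the original) are

$$m_W = \tfrac{1}{2}\, g\, v, \qquad m_Z = \tfrac{1}{2}\sqrt{g^2 + g'^2}\; v, \qquad m_H = \sqrt{2\lambda}\; v = \sqrt{2}\,\mu .$$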
Yukawa interaction
The Yukawa interaction terms are
where Yu, Yd, and Ye are matrices of Yukawa couplings, with the ij term giving the coupling of the generations i and j, and h.c. means the Hermitian conjugate of the preceding terms. The fields QL and LL are the left-handed quark and lepton doublets. Likewise, uR, dR, and eR are the right-handed up-type quark, down-type quark, and charged-lepton singlets. Finally, φ is the Higgs doublet and φ̃ = iτ2φ* is its charge conjugate.
Neutrino masses
As previously mentioned, evidence shows neutrinos must have mass. But within the standard model, the right-handed neutrino does not exist, so even with a Yukawa coupling neutrinos remain massless. An obvious solution is to simply add a right-handed neutrino νR, which requires the addition of a new Dirac mass term in the Yukawa sector:
This field however must be a sterile neutrino, since being right-handed it experimentally belongs to an isospin singlet (T3 = 0) and also has charge Q = 0, implying YW = 0 (see above), i.e. it does not even participate in the weak interaction. The experimental evidence for sterile neutrinos is currently inconclusive.
Another possibility to consider is that the neutrino satisfies the Majorana equation, which at first seems possible due to its zero electric charge. In this case a new Majorana mass term is added to the Yukawa sector:
where the superscript C denotes a charge-conjugated (i.e. anti-) particle, and the terms are consistently all left (or all right) chirality (note that a left-chirality projection of an antiparticle is a right-handed field; care must be taken here due to different notations sometimes used). Here we are essentially flipping between left-handed neutrinos and right-handed anti-neutrinos (it is furthermore possible but not necessary that neutrinos are their own antiparticle, so these particles are the same). However, for left-chirality neutrinos, this term changes weak hypercharge by 2 units – not possible with the standard Higgs interaction, requiring the Higgs field to be extended to include an extra triplet with weak hypercharge = 2 – whereas for right-chirality neutrinos, no Higgs extensions are necessary. For both left and right chirality cases, Majorana terms violate lepton number, but possibly at a level beyond the current sensitivity of experiments to detect such violations.
It is possible to include both Dirac and Majorana mass terms in the same theory, which (in contrast to the Dirac-mass-only approach) can provide a “natural” explanation for the smallness of the observed neutrino masses, by linking the right-handed neutrinos to yet-unknown physics around the GUT scale (see seesaw mechanism).
Since in any case new fields must be postulated to explain the experimental results, neutrinos are an obvious gateway to searching physics beyond the Standard Model.
Detailed information
This section provides more detail on some aspects, and some reference material. Explicit Lagrangian terms are also provided here.
Field content in detail
The Standard Model has the following fields. These describe one generation of leptons and quarks, and there are three generations, so there are three copies of each fermionic field. By CPT symmetry, there is a set of fermions and antifermions with opposite parity and charges. If a left-handed fermion spans some representation its antiparticle (right-handed antifermion) spans the dual representation (note that for SU(2) the dual representation is equivalent to the fundamental one, because SU(2) is pseudo-real). The column "representation" indicates under which representations of the gauge groups that each field transforms, in the order (SU(3), SU(2), U(1)) and for the U(1) group, the value of the weak hypercharge is listed. There are twice as many left-handed lepton field components as right-handed lepton field components in each generation, but an equal number of left-handed quark and right-handed quark field components.
Fermion content
This table is based in part on data gathered by the Particle Data Group.
Free parameters
Upon writing the most general Lagrangian with massless neutrinos, one finds that the dynamics depend on 19 parameters, whose numerical values are established by experiment. Straightforward extensions of the Standard Model with massive neutrinos need 7 more parameters (3 masses and 4 PMNS matrix parameters) for a total of 26 parameters. The neutrino parameter values are still uncertain. The 19 certain parameters are summarized here.
The choice of free parameters is somewhat arbitrary. In the table above, gauge couplings are listed as free parameters, therefore with this choice the Weinberg angle is not a free parameter – it is defined as tan θW = g'/g. Likewise, the fine-structure constant of QED is α = e2/4π, with e = g sin θW. Instead of fermion masses, dimensionless Yukawa couplings can be chosen as free parameters. For example, the electron mass depends on the Yukawa coupling of the electron to the Higgs field. Instead of the Higgs mass, the Higgs self-coupling strength λ, which is approximately 0.129, can be chosen as a free parameter. Instead of the Higgs vacuum expectation value, the parameter μ directly from the Higgs self-interaction term can be chosen; its value is μ = mH/√2, or approximately 88 GeV.
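A short numerical sketch of these relations (the input values v ≈ 246 GeV and mH ≈ 125 GeV are the standard measured values, stated here as assumptions rather than taken from the text):

```python
import math

# Standard electroweak inputs (assumed values, in GeV)
v = 246.22      # Higgs vacuum expectation value
m_H = 125.25    # Higgs boson mass

# Higgs self-coupling and the mu parameter, using m_H^2 = 2*lambda*v^2 = 2*mu^2
lam = m_H**2 / (2 * v**2)
mu = m_H / math.sqrt(2)

print(f"lambda ~ {lam:.3f}")      # ~0.129, matching the value quoted above
print(f"mu     ~ {mu:.1f} GeV")   # ~88.6 GeV
```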
The value of the vacuum energy (or more precisely, the renormalization scale used to calculate this energy) may also be treated as an additional free parameter. The renormalization scale may be identified with the Planck scale or fine-tuned to match the observed cosmological constant. However, both options are problematic.
Additional symmetries of the Standard Model
From the theoretical point of view, the Standard Model exhibits four additional global symmetries, not postulated at the outset of its construction, collectively denoted accidental symmetries, which are continuous U(1) global symmetries. The transformations leaving the Lagrangian invariant are:
The first transformation rule is shorthand meaning that all quark fields for all generations must be rotated by an identical phase simultaneously. The fields and are the 2nd (muon) and 3rd (tau) generation analogs of and fields.
By Noether's theorem, each symmetry above has an associated conservation law: the conservation of baryon number, electron number, muon number, and tau number. Each quark is assigned a baryon number of 1/3, while each antiquark is assigned a baryon number of −1/3. Conservation of baryon number implies that the number of quarks minus the number of antiquarks is a constant. Within experimental limits, no violation of this conservation law has been found.
Similarly, each electron and its associated neutrino is assigned an electron number of +1, while the anti-electron and the associated anti-neutrino carry a −1 electron number. Similarly, the muons and their neutrinos are assigned a muon number of +1 and the tau leptons are assigned a tau lepton number of +1. The Standard Model predicts that each of these three numbers should be conserved separately in a manner similar to the way baryon number is conserved. These numbers are collectively known as lepton family numbers (LF). (This result depends on the assumption made in Standard Model that neutrinos are massless. Experimentally, neutrino oscillations imply that individual electron, muon and tau numbers are not conserved.)
In addition to the accidental (but exact) symmetries described above, the Standard Model exhibits several approximate symmetries. These are the "SU(2) custodial symmetry" and the "SU(2) or SU(3) quark flavor symmetry".
U(1) symmetry
For the leptons, the gauge group can be written . The two factors can be combined into , where is the lepton number. Gauging of the lepton number is ruled out by experiment, leaving only the possible gauge group . A similar argument in the quark sector also gives the same result for the electroweak theory.
Charged and neutral current couplings and Fermi theory
The charged currents are
These charged currents are precisely those that entered the Fermi theory of beta decay. The action contains the charged current piece
For energies much less than the mass of the W boson, the effective theory becomes the current–current contact interaction of the Fermi theory.
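The matching between the two descriptions, in the usual convention (a hedged reconstruction; the explicit expression is not preserved above), is

$$\frac{G_F}{\sqrt{2}} = \frac{g^2}{8\, m_W^2},$$

so the four-fermion contact coupling of the Fermi theory is fixed by the SU(2) gauge coupling and the W mass.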
Gauge invariance now requires that the W3 component of the gauge field also be coupled to a current that lies in the triplet of SU(2). However, this mixes with the B field, and another current in that sector is needed. These currents must be uncharged in order to conserve charge. So neutral currents are also required,
The neutral current piece in the Lagrangian is then
Physics beyond the Standard Model
See also
Overview of Standard Model of particle physics
Fundamental interaction
Noncommutative standard model
Open questions: CP violation, Neutrino masses, Quark matter
Physics beyond the Standard Model
Strong interactions
Flavor
Quantum chromodynamics
Quark model
Weak interactions
Electroweak interaction
Fermi's interaction
Weinberg angle
Symmetry in quantum mechanics
Quantum Field Theory in a Nutshell by A. Zee
References and external links
An Introduction to Quantum Field Theory, by M.E. Peskin and D.V. Schroeder (HarperCollins, 1995).
Gauge Theory of Elementary Particle Physics, by T.P. Cheng and L.F. Li (Oxford University Press, 1982).
Standard Model Lagrangian with explicit Higgs terms (T.D. Gutierrez, ca. 1999) (PDF, PostScript, and LaTeX version)
The Quantum Theory of Fields (vol. 2), by S. Weinberg (Cambridge University Press, 1996).
Quantum Field Theory in a Nutshell (Second Edition), by A. Zee (Princeton University Press, 2010).
An Introduction to Particle Physics and the Standard Model, by R. Mann (CRC Press, 2010).
Physics From Symmetry, by J. Schwichtenberg (Springer, 2015). Especially page 86.
Standard Model
Electroweak theory | Mathematical formulation of the Standard Model | [
"Physics"
] | 6,259 | [
"Standard Model",
"Physical phenomena",
"Electroweak theory",
"Fundamental interactions",
"Particle physics"
] |
2,025,632 | https://en.wikipedia.org/wiki/Superalloy | A superalloy, or high-performance alloy, is an alloy with the ability to operate at a high fraction of its melting point. Key characteristics of a superalloy include mechanical strength, thermal creep deformation resistance, surface stability, and corrosion and oxidation resistance.
The crystal structure is typically face-centered cubic (FCC) austenitic. Examples of such alloys are Hastelloy, Inconel, Waspaloy, Rene alloys, Incoloy, MP98T, TMS alloys, and CMSX single crystal alloys.
Superalloy development relies on chemical and process innovations. Superalloys develop high temperature strength through solid solution strengthening and precipitation strengthening from secondary phase precipitates such as gamma prime and carbides. Oxidation or corrosion resistance is provided by elements such as aluminium and chromium. Superalloys are often cast as a single crystal in order to eliminate grain boundaries, which decrease creep resistance (even though they may provide strength at low temperatures).
The primary application for such alloys is in aerospace and marine turbine engines. Creep is typically the lifetime-limiting factor in gas turbine blades.
Superalloys have made much of very-high-temperature engineering technology possible.
Chemical development
Because these alloys are intended for high temperature applications their creep and oxidation resistance are of primary importance. Nickel (Ni)-based superalloys are the material of choice for these applications because of their unique γ' precipitates. The properties of these superalloys can be tailored to a certain extent through the addition of various other elements, common or exotic, including not only metals, but also metalloids and nonmetals; chromium, iron, cobalt, molybdenum, tungsten, tantalum, aluminium, titanium, zirconium, niobium, rhenium, yttrium, vanadium, carbon, boron or hafnium are some examples of the alloying additions used. Each addition serves a particular purpose in optimizing properties.
Creep resistance is dependent, in part, on slowing the speed of dislocation motion within a crystal structure. In modern Ni-based superalloys, the γ'-Ni3(Al,Ti) phase acts as a barrier to dislocation motion. For this reason, this γ' intermetallic phase, when present in high volume fractions, increases the strength of these alloys due to its ordered nature and high coherency with the γ matrix. The chemical additions of aluminum and titanium promote the creation of the γ' phase. The γ' phase size can be precisely controlled by careful precipitation strengthening heat treatments. Many superalloys are produced using a two-phase heat treatment that creates a dispersion of cuboidal γ' particles known as the primary phase, with a fine dispersion between these known as secondary γ'. In order to improve the oxidation resistance of these alloys, Al, Cr, B, and Y are added. The Al and Cr form oxide layers that passivate the surface and protect the superalloy from further oxidation, while B and Y are used to improve the adhesion of this oxide scale to the substrate. Cr, Fe, Co, Mo and Re all preferentially partition to the γ matrix, while Al, Ti, Nb, Ta, and V preferentially partition to the γ' precipitates, and solid solution strengthen the matrix and precipitates respectively. In addition to solid solution strengthening, if grain boundaries are present, certain elements are chosen for grain boundary strengthening. B and Zr tend to segregate to the grain boundaries, which reduces the grain boundary energy and results in better grain boundary cohesion and ductility. Another form of grain boundary strengthening is achieved through the addition of C and a carbide former, such as Cr, Mo, W, Nb, Ta, Ti, or Hf, which drives precipitation of carbides at grain boundaries and thereby reduces grain boundary sliding.
Phase formation
Adding elements is usually helpful because of solid solution strengthening, but can result in unwanted precipitation. Precipitates can be classified as geometrically close-packed (GCP), topologically close-packed (TCP), or carbides. GCP phases usually benefit mechanical properties, but TCP phases are often deleterious. Because TCP phases are not truly close packed, they have few slip systems and are brittle. Also they "scavenge" elements from GCP phases. Many elements that are good for forming γ' or have great solid solution strengthening may precipitate TCPs. The proper balance promotes GCPs while avoiding TCPs.
TCP phase formation areas are weak because they:
have inherently poor mechanical properties
are incoherent with the γ matrix
are surrounded by a "depletion zone" where there is no γ'
usually form sharp plate or needle-like morphologies which nucleate cracks
The main GCP phase is γ'. Almost all superalloys are Ni-based because of this phase. γ' is an ordered L12 phase (pronounced L-one-two), which means it has one atomic species on the faces of the unit cell and another on the corners. Ni-based superalloys usually present Ni on the faces and Ti or Al on the corners.
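A minimal sketch of the L12 (Ni3Al-type) unit cell described above, listing fractional atomic positions (a simple data-structure illustration; the lattice parameter value is an assumption included only for completeness):

```python
# L1_2 (Cu3Au prototype) unit cell for gamma-prime Ni3Al:
# Al (or Ti) on the cube corners, Ni on the face centers.
a = 0.357  # approximate lattice parameter in nm (illustrative value)

l12_cell = {
    "Al": [(0.0, 0.0, 0.0)],          # corner site (1 atom per cell)
    "Ni": [(0.5, 0.5, 0.0),           # face-center sites (3 atoms per cell)
           (0.5, 0.0, 0.5),
           (0.0, 0.5, 0.5)],
}

# Stoichiometry check: 3 Ni : 1 Al per cell -> Ni3Al
counts = {element: len(sites) for element, sites in l12_cell.items()}
print(counts)  # {'Al': 1, 'Ni': 3}
```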
Another "good" GCP phase is γ''. It is also coherent with γ, but it dissolves at high temperatures.
Families of superalloys
Ni-based
History
The United States became interested in gas turbine development around 1905. From 1910 to 1915, austenitic (γ-phase) stainless steels were developed to survive high temperatures in gas turbines. By 1929, 80Ni-20Cr alloy was the norm, with small additions of Ti and Al. Although early metallurgists did not know it yet, they were forming small γ' precipitates in Ni-based superalloys. These alloys quickly surpassed Fe- and Co-based superalloys, which were strengthened by carbides and solid solution strengthening.
Although Cr was great for protecting the alloys from oxidation and corrosion up to 700 °C, metallurgists began decreasing Cr in favor of Al, which had oxidation resistance at much higher temperatures. The lack of Cr caused issues with hot corrosion, so coatings needed to be developed.
Around 1950, vacuum melting became commercialized, which allowed metallurgists to create higher purity alloys with more precise composition.
In the 60s and 70s, metallurgists changed focus from alloy chemistry to alloy processing. Directional solidification was developed to allow columnar or even single-crystal turbine blades. Oxide dispersion strengthening could obtain very fine grains and superplasticity.
Phases
Gamma (γ): This phase composes the matrix of Ni-based superalloy. It is a solid solution fcc austenitic phase of the alloying elements. The alloying elements most found in commercial Ni-based alloys are, C, Cr, Mo, W, Nb, Fe, Ti, Al, V, and Ta. During the formation of these materials, as they cool from the melt, carbides precipitate, and at even lower temperatures γ' phase precipitates.
Gamma prime (γ'): This phase constitutes the precipitate used to strengthen the alloy. It is an intermetallic phase based on Ni3(Ti,Al) which has an ordered FCC L12 structure. The γ' phase is coherent with the matrix of the superalloy, having a lattice parameter that varies by around 0.5%. Ni3(Ti,Al) are ordered systems with Ni atoms at the cube faces and either Al or Ti atoms at the cube corners. As particles of γ' precipitates aggregate, they decrease their energy states by aligning along the <100> directions forming cuboidal structures. This phase has a window of instability between 600 °C and 850 °C, inside of which γ' will transform into the HCP η phase. For applications at temperatures below 650 °C, the γ" phase can be utilized for strengthening.
Gamma double prime (γ"): This phase typically is Ni3Nb or Ni3V and is used to strengthen Ni-based superalloys at lower temperatures (<650 °C) relative to γ'. The crystal structure of γ" is body-centered tetragonal (BCT), and the phase precipitates as 60 nm by 10 nm discs with the (001) planes in γ" parallel to the {001} family in γ. These anisotropic discs form as a result of lattice mismatch between the BCT precipitate and the FCC matrix. This lattice mismatch leads to high coherency strains which, together with order hardening, are the primary strengthening mechanisms. The γ" phase is unstable above approximately 650 °C.
Carbide phases: Carbide formation is usually deleterious, although in Ni-based superalloys carbides are used to stabilize the structure of the material against deformation at high temperatures. Carbides form at the grain boundaries, inhibiting grain boundary motion.
Topologically close-packed (TCP) phases: The term "TCP phase" refers to any member of a family of phases (including the σ phase, the χ phase, the μ phase, and the Laves phase), which are not atomically close-packed but possess some close-packed planes with HCP stacking. TCP phases tend to be highly brittle and deplete the γ matrix of strengthening, solid solution refractory elements (including Cr, Co, W, and Mo). These phases form as a result of kinetics after long periods of time (thousands of hours) at high temperatures (>750 °C).
Co-based
Co-based superalloys depend on carbide precipitation and solid solution strengthening for mechanical properties. While these strengthening mechanisms are inferior to gamma prime (γ') precipitation strengthening, cobalt has a higher melting point than nickel and has superior hot corrosion resistance and thermal fatigue. As a result, carbide-strengthened Co-based superalloys are used in lower stress, higher temperature applications such as stationary vanes in gas turbines.
Co's γ/γ' microstructure was rediscovered and published in 2006 by Sato et al. That γ' phase was Co3(Al, W). Mo, Ti, Nb, V, and Ta partition to the γ' phase, while Fe, Mn, and Cr partition to the matrix γ.
The next family of Co-based superalloys was discovered in 2015 by Makineni et al. This family has a similar γ/γ' microstructure, but is W-free and has a γ' phase of Co3(Al,Mo,Nb). Since W is heavy, its elimination makes Co-based alloys increasingly viable in turbines for aircraft, where low density is especially valued.
The most recently discovered family of superalloys was computationally predicted by Nyshadham et al. in 2017, and demonstrated by Reyes Tirado et al. in 2018. This γ' phase is W free and has the composition Co3(Nb,V) and Co3(Ta,V).
Phases
Gamma (γ): This is the matrix phase. While Co-based superalloys are less-used commercially, alloying elements include C, Cr, W, Ni, Ti, Al, Ir, and Ta. As in stainless steels, Chromium is used (occasionally up to 20 wt.%) to improve resistance to oxidation and corrosion via the formation of a Cr2O3 passive layer, which is critical for use in gas turbines, but also provides solid-solution strengthening due to the mismatch in the atomic radii of Co and Cr, and precipitation hardening due to the formation of MC-type carbides.
Gamma Prime (γ'): Constitutes the precipitate used to strengthen the alloy. It is usually close-packed with an L12 structure of Co3Ti or FCC Co3Ta, though both W and Al integrate into these cuboidal precipitates. Ta, Nb, and Ti integrate into the γ' phase and stabilize it at high temperatures.
Carbide Phases: Carbides strengthen the alloy through precipitation hardening but decrease low-temperature ductility.
Topologically Close-Packed (TCP) phases may appear in some Co-based superalloys, but embrittle the alloy and are thus undesirable.
Fe-based
Steel superalloys are of interest because some present creep and oxidation resistance similar to Ni-based superalloys, at far less cost.
Gamma (γ): Fe-based alloys feature a matrix phase of austenite iron (FCC). Alloying elements include: Al, B, C, Co, Cr, Mo, Ni, Nb, Si, Ti, W, and Y. Al (added for its oxidation benefits) must be kept at low weight fractions (wt.%) because Al stabilizes a ferritic (BCC) primary phase matrix, which is undesirable, as its high-temperature strength is inferior to that of an austenitic (FCC) primary phase matrix.
Gamma-prime (γ'): This phase is introduced as precipitates to strengthen the alloy. γ'-Ni3Al precipitates can be introduced with the proper balance of Al, Ni, Nb, and Ti additions.
Microstructure
The two major types of austenitic stainless steels are characterized by the oxide layer that forms on the steel surface: either chromia-forming or alumina-forming. Cr-forming stainless steel is the most common type. However, Cr-forming steels do not exhibit high creep resistance at high temperatures, especially in environments with water vapor. Exposure to water vapor at high temperatures can increase internal oxidation in Cr-forming alloys and rapid formation of volatile Cr (oxy)hydroxides, both of which can reduce durability and lifetime.
Al-forming austenitic stainless steels feature a single-phase matrix of austenite iron (FCC) with an Al oxide at the surface of the steel. Alumina is more thermodynamically stable in oxygen than chromia. More commonly, however, precipitate phases are introduced to increase strength and creep resistance. In Al-forming steels, NiAl precipitates are introduced to act as Al reservoirs to maintain the protective alumina layer. In addition, Nb and Cr additions help form and stabilize the alumina layer by increasing precipitate volume fractions of NiAl.
At least 5 grades of alumina-forming austenitic (AFA) alloys, with different operating temperatures at oxidation in air + 10% water vapor have been realized:
AFA Grade: (50-60)Fe-(20-25)Ni-(14-15)Cr-(2.5-3.5)Al-(1-3)Nb wt.% base
750-800 °C operating temperatures at oxidation in air + 10% water vapor
Low Nickel AFA Grade: 63Fe-12Ni-14Cr-2.5Al-0.6Nb-5Mn-3Cu wt.% base
650 °C operating temperatures at oxidation in air + 10% water vapor
High Performance AFA Grade: (45-55)Fe-(25-30)Ni-(14-15)Cr-(3.5-4.5)Al-(1-3)Nb-(0.02-0.1)Hf/Y wt.% base
850-900 °C operating temperatures at oxidation in air + 10% water vapor
Cast AFA Grade: (35-50)Fe-(25-35)Ni-14Cr-(3.5-4)Al-1Nb wt.% base
750-1100 °C operating temperatures at oxidation in air + 10% water vapor, depending upon Ni wt.%
AFA superalloy (40-50)Fe-(30-35)Ni-(14-19)Cr-(2.5-3.5)Al-3Nb
750-850 °C operating temperatures at oxidation in air + 10% water vapor
Operating temperatures with oxidation in air and no water vapor are expected to be higher. In addition, an AFA superalloy grade exhibits creep strength approaching that of nickel alloy UNS N06617.
Microstructure
In the pure Ni3Al phase, Al atoms are placed at the vertices of the cubic cell and form sublattice A. Ni atoms are located at the centers of the faces and form sublattice B. The phase is not strictly stoichiometric. An excess of vacancies in one of the sublattices may exist, which leads to deviations from stoichiometry. Sublattices A and B of the γ' phase can dissolve a considerable proportion of other elements. The alloying elements are dissolved in the γ phase. The γ' phase hardens the alloy through the yield strength anomaly. Dislocations dissociate in the γ' phase, leading to the formation of an anti-phase boundary.
To give an example, consider a dislocation with a Burgers vector of (a/2)⟨110⟩ traveling along a slip plane, initially in the γ phase, where it is a perfect dislocation in that FCC structure. Since the γ' phase is primitive cubic instead of FCC, due to the substitution of aluminum onto the vertices of the unit cell, the perfect Burgers vector along that direction in γ' is twice that of γ. For the dislocation to enter the γ' phase, it will have to create a high-energy anti-phase boundary, which will need another such dislocation along the plane to restore order (as the sum of the two dislocations would have the perfect Burgers vector).
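In symbols (a standard reconstruction, assuming the conventional FCC/L12 notation rather than the exact expressions of the original text), the perfect matrix dislocation and the superdislocation pair in γ' satisfy

$$\mathbf{b}_{\gamma} = \tfrac{a}{2}\langle 110\rangle, \qquad \mathbf{b}_{\gamma'} = a\langle 110\rangle = \tfrac{a}{2}\langle 110\rangle + \mathrm{APB} + \tfrac{a}{2}\langle 110\rangle,$$

so a single matrix dislocation entering the precipitate leaves an anti-phase boundary behind, and only a coupled pair can restore the ordered structure.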
It is thus rather energy prohibitive for the dislocation to enter the γ' phase unless there are two of them in close proximity along the same plane. However, the Peach-Koehler force between identical dislocations along the same plane is repulsive, which makes this a less favorable configuration. One possible mechanism involved one of the dislocations being pinned against the γ' phase while the other dislocation in the γ phase cross-slips into close proximity of the pinned dislocation from another plane, allowing the pair of dislocations to push into the γ' phase.
Furthermore, the (a/2)⟨110⟩ family of dislocations is likely to decompose into partial dislocations in this alloy due to its low stacking-fault energy, for example into dislocations with Burgers vectors of the (a/6)⟨112⟩ family (Shockley partial dislocations). The stacking faults between these partial dislocations can provide another obstacle to the movement of other dislocations, further contributing to the strength of the material. There are also more slip systems that can be involved beyond the primary slip plane and slip direction.
At elevated temperature, the free energy associated with the anti-phase boundary (APB) is considerably reduced if it lies on a particular plane, which by coincidence is not a permitted slip plane. One set of partial dislocations bounding the APB cross-slips so that the APB lies on the low-energy plane, and, since this low-energy plane is not a permitted slip plane, the dissociated dislocation is effectively locked. By this mechanism, the yield strength of γ' phase Ni3Al increases with temperature up to about 1000 °C.
Initial material selection for blade applications in gas turbine engines included alloys like the Nimonic series alloys in the 1940s. The early Nimonic series incorporated γ' Ni3(Al,Ti) precipitates in a γ matrix, as well as various metal-carbon carbides (e.g. Cr23C6) at the grain boundaries for additional grain boundary strength. Turbine blade components were forged until vacuum induction casting technologies were introduced in the 1950s. This process significantly improved cleanliness, reduced defects, and increased the strength and temperature capability.
Modern superalloys were developed in the 1980s. First generation superalloys incorporated increased Al, Ti, Ta, and Nb content in order to increase the γ' volume fraction. Examples include: PWA1480, René N4 and SRR99. Additionally, the volume fraction of the γ' precipitates increased to about 50–70% with the advent of monocrystal solidification techniques that enable grain boundaries to be entirely eliminated. Because the material contains no grain boundaries, carbides are unnecessary as grain boundary strengtheners and were thus eliminated.
Second- and third-generation superalloys introduce about 3 and 6 weight percent rhenium, respectively, for increased temperature capability. Re is a slow diffuser and typically partitions to the γ matrix, decreasing the rate of diffusion (and thereby high-temperature creep), improving high-temperature performance, and increasing service temperatures by 30 °C and 60 °C in second- and third-generation superalloys, respectively. Re promotes the formation of rafts of the γ' phase (as opposed to cuboidal precipitates). The presence of rafts can decrease creep rate in the power-law regime (controlled by dislocation climb), but can also potentially increase the creep rate if the dominant mechanism is particle shearing. Re tends to promote the formation of brittle TCP phases, which has led to the strategy of reducing Co, W, Mo, and particularly Cr. Later generations of Ni-based superalloys significantly reduced Cr content for this reason; however, with the reduction in Cr comes a reduction in oxidation resistance. Advanced coating techniques offset the loss of oxidation resistance accompanying the decreased Cr contents. Examples of second generation superalloys include PWA1484, CMSX-4 and René N5.
Third generation alloys include CMSX-10, and René N6. Fourth, fifth, and sixth generation superalloys incorporate ruthenium additions, making them more expensive than prior Re-containing alloys. The effect of Ru on the promotion of TCP phases is not well-determined. Early reports claimed that Ru decreased the supersaturation of Re in the matrix and thereby diminished the susceptibility to TCP phase formation. Later studies noted an opposite effect. Chen, et al., found that in two alloys differing significantly only in Ru content (USTB-F3 and USTB-F6) that the addition of Ru increased both the partitioning ratio as well as supersaturation in the γ matrix of Cr and Re, and thereby promoted the formation of TCP phases.
The current trend is to avoid very expensive and very heavy elements. An example is Eglin steel, a budget material with compromised temperature range and chemical resistance. It does not contain rhenium or ruthenium and its nickel content is limited. To reduce fabrication costs, it was chemically designed to melt in a ladle (though with improved properties in a vacuum crucible). Conventional welding and casting are possible before heat treatment. The original purpose was to produce high-performance, inexpensive bomb casings, but the material has proven widely applicable to structural applications, including armor.
Single-crystal superalloys
Single-crystal superalloys (SX or SC superalloys) are formed as a single crystal using a modified version of the directional solidification technique, leaving no grain boundaries. The mechanical properties of most other alloys depend on the presence of grain boundaries, but at high temperatures grain boundaries participate in creep, so their strengthening role must be taken over by other mechanisms. In many such alloys, islands of an ordered intermetallic phase sit in a matrix of disordered phase, all with the same crystal lattice. This approximates the dislocation-pinning behavior of grain boundaries, without introducing any amorphous solid into the structure.
Single crystal (SX) superalloys have wide application in the high-pressure turbine section of aero- and industrial gas turbine engines due to the unique combination of properties and performance. Since introduction of single crystal casting technology, SX alloy development has focused on increased temperature capability, and major improvements in alloy performance are associated with rhenium (Re) and ruthenium (Ru).
The creep deformation behavior of single-crystal superalloys is strongly temperature-, stress-, orientation- and alloy-dependent. For a single-crystal superalloy, three modes of creep deformation occur under regimes of different temperature and stress: rafting, tertiary, and primary. At low temperature (~750 °C), SX alloys exhibit mostly primary creep behavior. Matan et al. concluded that the extent of primary creep deformation depends strongly on the angle between the tensile axis and the <001>/<011> symmetry boundary. At temperatures above 850 °C, tertiary creep dominates and promotes strain softening behavior. When temperature exceeds 1000 °C, the rafting effect is prevalent, where cubic particles transform into flat shapes under tensile stress. The rafts form perpendicular to the tensile axis, since γ phase is transported out of the vertical channels and into the horizontal ones. Reed et al. studied uniaxial creep deformation of <001>-oriented CMSX-4 single-crystal superalloy at 1105 °C and 100 MPa. They reported that rafting is beneficial to creep life since it delays evolution of creep strain. In addition, rafting occurs quickly and suppresses the accumulation of creep strain until a critical strain is reached.
Oxidation
For superalloys operating at high temperatures and exposed to corrosive environments, oxidation behavior is a concern. Oxidation involves chemical reactions of the alloying elements with oxygen to form new oxide phases, generally at the alloy surface. If unmitigated, oxidation can degrade the alloy over time in a variety of ways, including:
sequential surface oxidation, cracking, and spalling, eroding the alloy over time
surface embrittlement through the introduction of oxide phases, promoting crack formation and fatigue failure
depletion of key alloying elements, affecting mechanical properties and possibly compromising performance
Selective oxidation is the primary strategy used to limit these deleterious processes. The ratio of alloying elements promotes formation of a specific oxide phase that then acts as a barrier to further oxidation. Most commonly, aluminum and chromium are used in this role, because they form relatively thin and continuous oxide layers of alumina (Al2O3) and chromia (Cr2O3), respectively. These oxides offer low oxygen diffusivities, effectively halting further oxidation beneath the layer. In the ideal case, oxidation proceeds through two stages. First, transient oxidation involves the conversion of various elements to their oxides, especially the majority elements (e.g. nickel or cobalt). Transient oxidation proceeds until the selective oxidation of the sacrificial element forms a complete barrier layer.
The protective effect of selective oxidation can be undermined. The continuity of the oxide layer can be compromised by mechanical disruption due to stress or may be disrupted as a result of oxidation kinetics (e.g. if oxygen diffuses too quickly). If the layer is not continuous, its effectiveness as a diffusion barrier to oxygen is compromised. The stability of the oxide layer is strongly influenced by the presence of other minority elements. For example, the addition of boron, silicon, and yttrium to superalloys promotes oxide layer adhesion, reducing spalling and maintaining continuity.
Oxidation is the most basic form of chemical degradation superalloys may experience. More complex corrosion processes are common when operating environments include salts and sulfur compounds, or under chemical conditions that change dramatically over time. These issues are also often addressed through comparable coatings.
Creep
One of the main strengths of superalloys is their superior creep resistance compared with most conventional alloys. To start, 𝛾’-strengthened superalloys benefit from requiring dislocations to move in pairs, because cutting the ordered 𝛾’ phase creates a high-energy antiphase boundary (APB). This high APB energy means that a second dislocation must follow the first to restore the ordered structure and remove the APB created by the first. This significantly reduces the mobility of dislocations in the material, which inhibits dislocation-mediated creep. These dislocation pairs (also called superdislocations) have been described as being either weakly or strongly coupled, with the spacing between the dislocations relative to the particle diameter being the determining factor. A weakly coupled pair has a relatively large spacing between the dislocations compared to the particle diameter, while a strongly coupled pair has a spacing comparable to the particle diameter. In practice, this is governed by the size of the 𝛾’ particles: weak coupling occurs when the particle size is relatively small, while strong coupling occurs when the particle size is relatively large (such as when a superalloy has been aged for too long). Weakly coupled dislocations exhibit pinning and bowing of the dislocation line on the 𝛾’ particles. Strongly coupled dislocation behavior depends greatly on the dislocation line lengths, and the resistance benefits they offer disappear once the particle size becomes large enough.
Additionally, superalloys exhibit comparatively superior high-temperature creep resistance due to thermally activated cross-slip of dislocations. When one of the dislocations in the pair cross-slips onto another plane, the dislocations become pinned, since they can no longer move as a pair. This pinning further reduces the ability of dislocations to contribute to creep, improving the creep resistance of the material.
Increasing the lattice misfit between 𝛾 and 𝛾' has also been shown to be beneficial for creep resistance, primarily because a large lattice misfit between the two phases presents a higher barrier to dislocation motion than a small one.
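The 𝛾/𝛾' lattice misfit is conventionally defined as δ = 2(aγ' − aγ)/(aγ' + aγ). The short sketch below evaluates this definition; the lattice parameters are assumed, illustrative values, not data for any specific alloy.

```python
def lattice_misfit(a_gamma_prime: float, a_gamma: float) -> float:
    """Unconstrained gamma/gamma-prime lattice misfit: delta = 2(a' - a) / (a' + a)."""
    return 2.0 * (a_gamma_prime - a_gamma) / (a_gamma_prime + a_gamma)

# Illustrative lattice parameters in angstroms (assumed example values, not measurements)
a_gamma = 3.570        # gamma matrix (fcc Ni solid solution)
a_gamma_prime = 3.585  # gamma-prime precipitate (Ni3Al-type)

delta = lattice_misfit(a_gamma_prime, a_gamma)
print(f"misfit delta = {delta:+.4f} ({delta * 100:+.2f} %)")
# A larger |delta| raises the coherency strain at the gamma/gamma' interface and
# hence the barrier to dislocation motion discussed above.
```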
For Ni-based single-crystal superalloys, upwards of ten different alloying additions can be used to improve creep resistance and overall mechanical properties. Alloying elements include Cr, Co, Al, Mo, W, Ti, Ta, Re, and Ru. Elements such as Co, Re, and Ru have been described as improving creep resistance by reducing the stacking-fault energy and thereby facilitating the formation of stacking faults; the increased number of stacking faults inhibits dislocation motion. Other elements (Al, Ti, Ta) partition favorably into the 𝛾’ phase and improve its nucleation.
Diffusion is also a method of creep, and there are a few ways to limit diffusional creep. One primary way that superalloys can limit diffusional creep is by manipulating grain structure to reduce grain boundaries which tend to be pathways for easy diffusion. Typically this is done by manufacturing the superalloys as single crystals oriented parallel to the direction of the applied force.
Processing
Superalloys were originally iron-based and cold wrought prior to the 1940s when investment casting of cobalt base alloys significantly raised operating temperatures. The 1950s development of vacuum melting allowed for fine control of the chemical composition of superalloys and reduction in contamination and in turn led to a revolution in processing techniques such as directional solidification of alloys and single crystal superalloys.
Processing methods vary widely depending on the required properties of each item.
Casting and forging
Casting and forging are traditional metallurgical processing techniques that can be used to generate both polycrystalline and monocrystalline products. Polycrystalline casts offer higher fracture resistance, while monocrystalline casts offer higher creep resistance.
Jet turbine engines employ both crystalline component types to take advantage of their individual strengths. The disks of the high-pressure turbine, which are near the central hub of the engine, are polycrystalline. The turbine blades, which extend radially into the engine housing, experience a much greater centripetal force and therefore need creep resistance; they are typically monocrystalline, or polycrystalline with a preferred crystal orientation.
Investment casting
Investment casting is a metallurgical processing technique in which a wax form is fabricated and used as a template for a ceramic mold. A ceramic mold is poured around the wax form and solidifies, the wax form is melted out of the ceramic mold, and molten metal is poured into the void left by the wax. This leads to a metal form in the same shape as the original wax form. Investment casting leads to a polycrystalline final product, as nucleation and growth of crystal grains occurs at numerous locations throughout the solid matrix. Generally, the polycrystalline product has no preferred grain orientation.
Directional solidification
Directional solidification uses a thermal gradient to promote nucleation of metal grains on a low temperature surface, as well as to promote their growth along the temperature gradient. This leads to grains elongated along the temperature gradient, and significantly greater creep resistance parallel to the long grain direction. In polycrystalline turbine blades, directional solidification is used to orient the grains parallel to the centripetal force. It is also known as dendritic solidification.
Single crystal growth
Single crystal growth starts with a seed crystal that is used to template growth of a larger crystal. The overall process is lengthy, and machining is necessary after the single crystal is grown.
Powder metallurgy
Powder metallurgy is a class of modern processing techniques in which metals are first powdered and then formed into the desired shape by heating below the melting point. This is in contrast to casting, which occurs with molten metal. Superalloy manufacturing often employs powder metallurgy because of its material efficiency (typically much less waste metal must be machined away from the final product) and its ability to facilitate mechanical alloying. Mechanical alloying is a process by which reinforcing particles are incorporated into the superalloy matrix material by repeated fracture and welding.
Sintering and hot isostatic pressing
Sintering and hot isostatic pressing are processing techniques used to densify materials from a loosely packed "green body" into a solid object with physically merged grains. Sintering occurs below the melting point, and causes adjacent particles to merge at their boundaries, creating a strong bond between them. In hot isostatic pressing, a sintered material is placed in a pressure vessel and compressed from all directions (isostatically) in an inert atmosphere to affect densification.
Additive manufacturing
Selective laser melting (also known as powder bed fusion) is an additive manufacturing procedure used to create intricately detailed forms from a CAD file. A shape is designed and then converted into slices. These slices are sent to a laser writer to print the final product. In brief, a bed of metal powder is prepared, and a slice is formed in the powder bed by a high energy laser sintering the particles together. The powder bed moves downwards, and a new batch of metal powder is rolled over the top. This layer is then sintered with the laser, and the process is repeated until all slices have been processed. Additive manufacturing can leave pores behind. Many products undergo a heat treatment or hot isostatic pressing procedure to densify the product and reduce porosity.
Coatings
In modern gas turbines, the turbine entry temperature (~1750 K) exceeds the incipient melting temperature of superalloys (~1600 K); operation under these conditions is possible only with the help of surface engineering, namely internal cooling and protective coatings.
Types
The three types of coatings are: diffusion coatings, overlay coatings, and thermal barrier coatings. Diffusion coatings, mainly composed of aluminide or platinum-aluminide, are the most common. MCrAlX-based overlay coatings (M = Ni or Co, X = Y, Hf, Si) enhance resistance to corrosion and oxidation. Compared to diffusion coatings, overlay coatings are more expensive but less dependent on substrate composition, since they must be applied by air or vacuum plasma spraying (APS/VPS) or electron beam physical vapour deposition (EB-PVD). Thermal barrier coatings provide by far the best enhancement in working temperature and coating life. It is estimated that a modern TBC of thickness 300 μm, if used in conjunction with a hollow component and cooling air, has the potential to lower metal surface temperatures by a few hundred degrees.
Thermal barrier coatings
Thermal barrier coatings (TBCs) are used extensively in gas turbine engines to increase component life and engine performance. A coating of about 1-200 μm can reduce the temperature at the superalloy surface by up to 200 K. TBCs are a system of coatings consisting of a bond coat, a thermally grown oxide (TGO), and a thermally insulating ceramic top coat. In most applications, the bond coat is either a MCrAlY (where M=Ni or NiCo) or a Pt modified aluminide coating. A dense bond coat is required to provide protection of the superalloy substrate from oxidation and hot corrosion attack and to form an adherent, slow-growing surface TGO. The TGO is formed by oxidation of the aluminum that is contained in the bond coat. The current (first generation) thermal insulation layer is composed of 7wt % yttria-stabilized zirconia (7YSZ) with a typical thickness of 100–300 μm. Yttria-stabilized zirconia is used due to its low thermal conductivity (2.6W/mK for fully dense material), relatively high coefficient of thermal expansion, and high temperature stability. The electron beam-directed vapor deposition (EB-DVD) process used to apply the TBC to turbine airfoils produces a columnar microstructure with multiple porosity levels. Inter-column porosity is critical to providing strain tolerance (via a low in-plane modulus), as it would otherwise spall on thermal cycling due to thermal expansion mismatch with the superalloy substrate. This porosity reduces the thermal coating's conductivity.
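For a rough sense of the insulation a TBC provides, a one-dimensional steady-state conduction estimate ΔT = q·t/k can be used. The heat flux and effective conductivity below are assumed, order-of-magnitude values chosen only for illustration; they are not measured properties of any particular coating system.

```python
# One-dimensional steady-state estimate of the temperature drop across a TBC:
# delta_T = q * t / k (Fourier's law applied to a planar layer).

q = 1.0e6    # heat flux through the airfoil wall, W/m^2 (assumed, illustrative)
t = 250e-6   # coating thickness, m (within the 100-300 um range quoted above)
k = 1.5      # effective conductivity of porous 7YSZ, W/(m K) (assumed; lower than the
             # ~2.6 W/(m K) fully dense value because of the columnar porosity)

delta_T = q * t / k
print(f"estimated temperature drop across the coating: {delta_T:.0f} K")
# ~170 K with these inputs, consistent in magnitude with the 'up to 200 K' figure above.
```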
Bond coat
The bond coat adheres the thermal barrier to the substrate. Additionally, the bond coat provides oxidation protection and functions as a diffusion barrier against the motion of substrate atoms towards the environment. The five major types of bond coats are: the aluminides, the platinum-aluminides, MCrAlY, cobalt-cermets, and nickel-chromium. For aluminide bond coatings, the coating's final composition and structure depend on the substrate composition. Aluminides lack ductility below 750 °C, and exhibit limited thermomechanical fatigue strength. Pt-aluminides are similar to the aluminide bond coats except for a layer of Pt (5–10 μm) deposited on the blade. The Pt aids in oxide adhesion and contributes to hot corrosion resistance, increasing blade lifespan. MCrAlY does not strongly interact with the substrate. Normally applied by plasma spraying, MCrAlY coatings form secondary aluminum oxides. This means that the coatings form an outer chromia layer and a secondary alumina layer underneath. These oxide formations occur at high temperatures in the range of those that superalloys usually encounter. The chromia provides oxidation and hot-corrosion resistance. The alumina controls oxidation mechanisms by limiting oxide growth by self-passivating. The yttrium enhances oxide adherence to the substrate, and limits the growth of grain boundaries (which can lead to coat flaking). Addition of rhenium and tantalum increases oxidation resistance. Cobalt-cermet-based coatings consisting of materials such as tungsten carbide/cobalt can be used due to excellent resistance to abrasion, corrosion, erosion, and heat. These cermet coatings perform well in situations where temperature and oxidation damage are significant concerns, such as boilers. One of cobalt cermet's unique advantages is minimal loss of coating mass over time, due to the strength of carbides. Overall, cermet coatings are useful in situations where mechanical demands are equal to chemical demands. Nickel-chromium coatings are used most frequently in boilers fed by fossil fuels, electric furnaces, and waste incineration furnaces, where the danger of oxidizing agents and corrosive compounds in the vapor must be addressed. The specific method of spray-coating depends on the coating composition. Nickel-chromium coatings that also contain iron or aluminum provide better corrosion resistance when they are sprayed and laser glazed, while pure nickel-chromium coatings perform better when thermally sprayed exclusively.
Process methods
Several kinds of coating process are available: pack cementation process, gas phase coating (both are a type of chemical vapor deposition (CVD)), thermal spraying, and physical vapor deposition. In most cases, after the coating process, near-surface regions of parts are enriched with aluminium in a matrix of the nickel aluminide.
Pack cementation
Pack cementation is a widely used CVD technique that consists of immersing the components to be coated in a mixture of metal powder and ammonium halide activators and sealing them in a retort. The entire apparatus is placed inside a furnace and heated in a protective atmosphere to a lower-than-normal temperature that still allows diffusion, because the chemical reaction of the halide salts causes a eutectic bond between the two metals. The surface alloy that is formed by thermally driven diffusion of the migrating ions has a metallurgical bond to the substrate and an intermetallic layer found in the gamma layer of the surface alloys.
The traditional pack, used at temperatures below 750 °C, consists of four components:
Substrate or parts
Ferrous and non-ferrous powdered alloy: (Ti and/or Al, Si and/or Zn, B and/ or Cr)
Halide salt activator: Ammonium halide salts
Relatively inert filler powder (Al2O3, SiO2, or SiC)
This process includes:
Aluminizing
Chromizing
Siliconizing
Sherardizing
Boronizing
Titaniumizing
Pack cementation has reemerged when combined with other chemical processes to lower the temperatures of metal combinations and give intermetallic properties to different alloy combinations for surface treatments.
Thermal spraying
Thermal spraying involves heating a feedstock of precursor material and spraying it on a surface. Specific techniques depend on desired particle size, coat thickness, spray speed, desired area, etc. Thermal spraying relies on adhesion to the surface. As a result, the surface of the superalloy must be cleaned and prepared, and usually polished, before application.
Plasma spraying
Plasma spraying offers versatility of usable coatings, and high-temperature performance. Plasma spraying can accommodate a wide range of materials, versus other techniques. As long as the difference between melting and decomposition temperatures is greater than 300 K, plasma spraying is viable.
Gas phase
Gas phase coating is carried out at higher temperatures, about 1080 °C. The coating material is usually loaded onto trays without physical contact with the parts to be coated. The coating mixture contains active coating material and activator, but usually not thermal ballast. As in the pack cementation process, gaseous aluminium chloride (or fluoride) is transferred to the surface of the part. However, in this case the diffusion is outwards. This kind of coating also requires diffusion heat treatment.
Failure mechanisms
Failure of thermal barrier coating usually manifests as delamination, which arises from the temperature gradient during thermal cycling between ambient temperature and working conditions coupled with the difference in thermal expansion coefficient of substrate and coating. It is rare for the coating to fail completely – some pieces remain intact, and significant scatter is observed in the time to failure if testing is repeated under identical conditions. Various degradation mechanisms affect thermal barrier coating, and some or all of these must operate before failure finally occurs:
Oxidation at the interface of thermal barrier coating and underlying bond coat;
Depletion of aluminum in bond coat due to oxidation and diffusion with substrate;
Thermal stresses from mismatch in thermal expansion coefficient and growth stress due to the formation of thermally grown oxide layer;
Imperfections near thermally grown oxide layer;
Various other complicating factors during engine operation.
Additionally, TBC life is sensitive to the combination of materials (substrate, bond coat, ceramic) and processes (EB-PVD, plasma spraying) used.
Applications
Turbines
Nickel-based superalloys are used in load-bearing structures at the highest homologous temperature of any common alloy system (T/Tm = 0.9, i.e. 90% of their melting point). Among the most demanding applications for a structural material are those in the hot sections of turbine engines (e.g. the turbine blade). Superalloys comprise over 50% of the weight of advanced aircraft engines. The widespread use of superalloys in turbine engines, coupled with the fact that the thermodynamic efficiency of turbine engines increases with turbine inlet temperature, has provided part of the motivation for increasing the maximum-use temperature of superalloys. From 1990 to 2020, turbine airfoil temperature capability increased on average by about 2.2 °C per year. Two major factors have made this increase possible:
Processing techniques that improved alloy cleanliness (thus improving reliability) and/or enabled the production of tailored microstructures such as directionally solidified or single-crystal material.
Alloy development resulting in higher temperature materials primarily through the additions of refractory elements such as Re, W, Ta, and Mo.
About 60% of the temperature increase is related to advanced cooling, while 40% has resulted from material improvements. State-of-the-art turbine blade surface temperatures approach 1,150 °C. The most severe stress and temperature combinations correspond to an average bulk metal temperature approaching 1,000 °C.
Although Ni-based superalloys retain significant strength to 980 °C, they tend to be susceptible to environmental attack because of the presence of reactive alloying elements. Surface attack includes oxidation, hot corrosion, and thermal fatigue.
Energy production
High temperature materials are valuable for energy conversion and energy production applications. Maximum energy conversion efficiency is desired in such applications, in accord with the Carnot cycle. Because Carnot efficiency increases with the hot-reservoir temperature (for a fixed cold reservoir), higher operating temperatures mean higher energy conversion efficiency; a numerical illustration follows the list below. Operating temperatures are limited by the capability of today's superalloys, constraining applications to around 1000 °C–1400 °C. Energy applications include:
Solar thermal power plants (stainless steel rods containing heated water)
Steam turbines (turbine blades and boiler housing)
Heat exchangers for nuclear reactor systems
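As a minimal illustration of the Carnot argument above, the sketch below evaluates the limit η = 1 − T_cold/T_hot over the superalloy-limited temperature range; the cold-reservoir temperature is an assumed ambient value.

```python
# Carnot limit eta = 1 - T_cold / T_hot over the superalloy-limited range quoted above.
T_cold = 300.0  # K, assumed heat-rejection (ambient) temperature

for T_hot_C in (1000, 1200, 1400):   # hot-side temperature in degrees Celsius
    T_hot = T_hot_C + 273.15         # convert to kelvin
    eta = 1.0 - T_cold / T_hot
    print(f"T_hot = {T_hot_C:4d} C  ->  Carnot limit = {eta:.1%}")
# Raising the hot-side limit from 1000 C to 1400 C lifts the ceiling from about 76% to
# about 82%, which is the thermodynamic motivation for higher-temperature alloys.
```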
Alumina-forming stainless steel is weldable and has potential for use in automotive applications, such as for high temperature exhaust piping and in heat capture and reuse.
Research
Radiolysis
Sandia National Laboratories is studying radiolysis as a route to superalloys. The approach uses radiation-driven nanoparticle synthesis to create alloys and superalloys, and it holds promise as a universal method of nanoparticle formation. By developing an understanding of the basic materials science involved, it might be possible to expand research into other aspects of superalloys. Radiolysis produces polycrystalline alloys, which suffer from an unacceptable level of creep.
Austenitic steel
Stainless steel alloys remain a research target because of lower production costs, as well as the need for an austenitic stainless steel with high-temperature corrosion resistance in environments with water vapor. Research focuses on increasing high-temperature tensile strength, toughness, and creep resistance to compete with Ni-based superalloys.
Oak Ridge National Laboratory is researching austenitic alloys, achieving similar creep and corrosion resistance at 800 °C to that of other austenitic alloys, including Ni-based superalloys.
AFA superalloys
Development of AFA superalloys with a 35 wt.% Ni base has shown potential for use at operating temperatures of up to 1,100 °C.
Multi-principal-element superalloy (MPES)
Researchers at Sandia Labs, Ames National Laboratory, and Iowa State University reported a 3D-printed superalloy composed of 42% aluminum, 25% titanium, 13% niobium, 8% zirconium, 8% molybdenum, and 4% tantalum. Most alloys are made chiefly of one primary element combined with small amounts of other elements; in contrast, MPESs contain substantial amounts of three or more elements.
Such alloys promise improvements for high-temperature applications in strength-to-weight ratio, fracture toughness, corrosion and radiation resistance, wear resistance, and other properties. The researchers reported a hardness-to-density ratio of 1.8–2.6 GPa·cm3/g, which surpasses all known alloys, including intermetallic compounds, titanium aluminides, refractory MPEAs, and conventional Ni-based superalloys. This represents roughly a 300% improvement over Inconel 718 (0.55 GPa·cm3/g, based on a measured peak hardness of 4.5 GPa and a density of 8.2 g/cm3).
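A quick arithmetic check of the figures quoted above (a sketch only; the inputs are simply the numbers stated in the text):

```python
# Re-derive the hardness-to-density comparison from the figures quoted above.
inconel_718_hardness = 4.5   # GPa, stated peak hardness
inconel_718_density = 8.2    # g/cm^3, stated density

baseline = inconel_718_hardness / inconel_718_density
print(f"Inconel 718 baseline: {baseline:.2f} GPa*cm^3/g")    # ~0.55, as quoted

for mpes_value in (1.8, 2.6):                                # reported MPES range
    print(f"MPES at {mpes_value} GPa*cm^3/g is {mpes_value / baseline:.1f}x the baseline")
# The reported range works out to roughly 3x-5x the Inconel 718 figure.
```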
The material is stable at 800 °C, hotter than the 570+ °C found in typical coal-based power plants.
The researchers acknowledged that the 3D printing process produces microscopic cracks when forming large parts, and that the feedstock includes metals that limit applicability in cost-sensitive applications.
See also
Oxide dispersion strengthened alloy
Titanium aluminide
References
Bibliography
External links
Extensive bibliography and links.
Metallurgy
Aerospace materials | Superalloy | [
"Chemistry",
"Materials_science",
"Engineering"
] | 10,160 | [
"Aerospace materials",
"Metallurgy",
"Materials science",
"Superalloys",
"Alloys",
"nan",
"Aerospace engineering"
] |
2,025,734 | https://en.wikipedia.org/wiki/Entropic%20force | In physics, an entropic force acting in a system is an emergent phenomenon resulting from the entire system's statistical tendency to increase its entropy, rather than from a particular underlying force on the atomic scale.
Mathematical formulation
In the canonical ensemble, the entropic force $\mathbf{F}$ associated to a macrostate partition $\{\mathbf{X}\}$ is given by

$$\mathbf{F}(\mathbf{X}_0) = T \, \nabla_{\mathbf{X}} S(\mathbf{X}) \big|_{\mathbf{X}_0},$$

where $T$ is the temperature, $S(\mathbf{X})$ is the entropy associated to the macrostate $\mathbf{X}$, and $\mathbf{X}_0$ is the present macrostate.
Examples
Pressure of an ideal gas
The internal energy of an ideal gas depends only on its temperature, and not on the volume of its containing box, so it is not an energy effect that tends to increase the volume of the box as gas pressure does. This implies that the pressure of an ideal gas has an entropic origin.
What is the origin of such an entropic force? The most general answer is that the effect of thermal fluctuations tends to bring a thermodynamic system toward a macroscopic state that corresponds to a maximum in the number of microscopic states (or micro-states) that are compatible with this macroscopic state. In other words, thermal fluctuations tend to bring a system toward its macroscopic state of maximum entropy.
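As a worked example of the expression above, the pressure of a monatomic ideal gas follows from the volume dependence of its entropy alone (a sketch; only the V-dependent term of the entropy is kept):

```latex
% Only the volume-dependent part of the ideal-gas entropy is needed:
\begin{aligned}
  S(E,V,N) &= N k_{\mathrm{B}} \ln V + \text{(terms independent of } V\text{)} \\[4pt]
  P &= T \left( \frac{\partial S}{\partial V} \right)_{E,N} = \frac{N k_{\mathrm{B}} T}{V}
\end{aligned}
% The ideal-gas law is recovered with no energetic interaction, only entropy.
```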
Brownian motion
The entropic approach to Brownian movement was initially proposed by R. M. Neumann. Neumann derived the entropic force for a particle undergoing three-dimensional Brownian motion using the Boltzmann equation, denoting this force as a diffusional driving force or radial force. In the paper, three example systems are shown to exhibit such a force:
electrostatic system of molten salt,
surface tension and,
elasticity of rubber.
Polymers
A standard example of an entropic force is the elasticity of a freely jointed polymer molecule. For an ideal chain, maximizing its entropy means reducing the distance between its two free ends. Consequently, a force that tends to collapse the chain is exerted by the ideal chain between its two free ends. This entropic force is proportional to the distance between the two ends. The entropic force by a freely jointed chain has a clear mechanical origin and can be computed using constrained Lagrangian dynamics. With regards to biological polymers, there appears to be an intricate link between the entropic force and function. For example, disordered polypeptide segments in the context of the folded regions of the same polypeptide chain have been shown to generate an entropic force that has functional implications.
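For a Gaussian (freely jointed) chain at small extension, the entropic restoring force is F = 3k_BT·r/(N·b²), where N is the number of segments of length b and r is the end-to-end distance. The sketch below evaluates this with assumed, purely illustrative parameters (not data for any particular polymer):

```python
k_B = 1.380649e-23  # J/K, Boltzmann constant

def ideal_chain_force(r: float, n_segments: int, b: float, temperature: float) -> float:
    """Entropic restoring force of a Gaussian chain at small extension: F = 3 k_B T r / (N b^2)."""
    return 3.0 * k_B * temperature * r / (n_segments * b ** 2)

# Assumed, illustrative parameters:
N = 1000      # number of Kuhn segments
b = 1.0e-9    # Kuhn length, m
r = 100e-9    # end-to-end extension, m
T = 300.0     # K

force = ideal_chain_force(r, N, b, T)
print(f"entropic restoring force ~ {force * 1e12:.1f} pN")
# The force is proportional to temperature, the signature of an entropic spring.
```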
Hydrophobic force
Another example of an entropic force is the hydrophobic force. At room temperature, it partly originates from the loss of entropy by the 3D network of water molecules when they interact with molecules of dissolved substance. Each water molecule is capable of
donating two hydrogen bonds through the two protons,
accepting two more hydrogen bonds through the two sp3-hybridized lone pairs.
Therefore, water molecules can form an extended three-dimensional network. Introduction of a non-hydrogen-bonding surface disrupts this network. The water molecules rearrange themselves around the surface, so as to minimize the number of disrupted hydrogen bonds. This is in contrast to hydrogen fluoride (which can accept 3 but donate only 1) or ammonia (which can donate 3 but accept only 1), which mainly form linear chains.
If the introduced surface had an ionic or polar nature, there would be water molecules standing upright on 1 (along the axis of an orbital for ionic bond) or 2 (along a resultant polarity axis) of the four sp3 orbitals. These orientations allow easy movement, i.e. degrees of freedom, and thus lowers entropy minimally. But a non-hydrogen-bonding surface with a moderate curvature forces the water molecule to sit tight on the surface, spreading 3 hydrogen bonds tangential to the surface, which then become locked in a clathrate-like basket shape. Water molecules involved in this clathrate-like basket around the non-hydrogen-bonding surface are constrained in their orientation. Thus, any event that would minimize such a surface is entropically favored. For example, when two such hydrophobic particles come very close, the clathrate-like baskets surrounding them merge. This releases some of the water molecules into the bulk of the water, leading to an increase in entropy.
Another related and counter-intuitive example of entropic force is protein folding, which is a spontaneous process and where hydrophobic effect also plays a role. Structures of water-soluble proteins typically have a core in which hydrophobic side chains are buried from water, which stabilizes the folded state. Charged and polar side chains are situated on the solvent-exposed surface where they interact with surrounding water molecules. Minimizing the number of hydrophobic side chains exposed to water is the principal driving force behind the folding process, although formation of hydrogen bonds within the protein also stabilizes protein structure.
Colloids
Entropic forces are important and widespread in the physics of colloids, where they are responsible for the depletion force and the ordering of hard particles, such as the crystallization of hard spheres, the isotropic–nematic transition in liquid crystal phases of hard rods, and the ordering of hard polyhedra. Because of this, entropic forces can be an important driver of self-assembly.
Entropic forces arise in colloidal systems due to the osmotic pressure that comes from particle crowding. This was first discovered in, and is most intuitive for, colloid-polymer mixtures described by the Asakura–Oosawa model. In this model, polymers are approximated as finite-sized spheres that can penetrate one another, but cannot penetrate the colloidal particles. The inability of the polymers to penetrate the colloids leads to a region around the colloids in which the polymer density is reduced. If the regions of reduced polymer density around two colloids overlap with one another, by means of the colloids approaching one another, the polymers in the system gain an additional free volume that is equal to the volume of the intersection of the reduced density regions. The additional free volume causes an increase in the entropy of the polymers, and drives them to form locally dense-packed aggregates. A similar effect occurs in sufficiently dense colloidal systems without polymers, where osmotic pressure also drives the local dense packing of colloids into a diverse array of structures that can be rationally designed by modifying the shape of the particles. These effects are for anisotropic particles referred to as directional entropic forces.
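A minimal numerical sketch of the Asakura–Oosawa picture described above: when the depletion zones of radius R + r_p around two colloids overlap, the polymers gain free volume equal to the lens-shaped intersection, and the attraction is roughly the ideal-solution osmotic pressure times that volume. All numbers below are assumed, illustrative values.

```python
from math import pi

k_B = 1.380649e-23  # J/K

def sphere_overlap_volume(a: float, d: float) -> float:
    """Intersection volume of two equal spheres of radius a whose centres are a distance d apart."""
    if d >= 2.0 * a:
        return 0.0
    return pi * (4.0 * a + d) * (2.0 * a - d) ** 2 / 12.0

# Assumed, illustrative values: colloid radius R, polymer radius r_p, polymer number density n_p.
R, r_p = 500e-9, 50e-9   # m
n_p = 1e21               # polymers per m^3
T = 300.0                # K

a = R + r_p              # radius of each depletion zone
d = 2.0 * R              # colloid surfaces in contact -> zone centres are 2R apart
dV = sphere_overlap_volume(a, d)
U = -n_p * k_B * T * dV  # Asakura-Oosawa estimate: -(osmotic pressure) x (overlap volume)
print(f"overlap volume at contact: {dV:.2e} m^3")
print(f"depletion attraction at contact: {U / (k_B * T):.1f} k_BT")
# An attraction of several k_BT is enough to drive the aggregation described above.
```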
Cytoskeleton
Contractile forces in biological cells are typically driven by molecular motors associated with the cytoskeleton. However, a growing body of evidence shows that contractile forces may also be of entropic origin. The foundational example is the action of the microtubule crosslinker Ase1, which localizes to microtubule overlaps in the mitotic spindle. Molecules of Ase1 are confined to the microtubule overlap, where they are free to diffuse one-dimensionally. Analogously to an ideal gas in a container, molecules of Ase1 generate pressure on the overlap ends. This pressure drives the overlap expansion, which results in the contractile sliding of the microtubules. An analogous example was found in the actin cytoskeleton: here, the actin-bundling protein anillin drives actin contractility in cytokinetic rings.
Controversial examples
Some forces that are generally regarded as conventional forces have been argued to be actually entropic in nature. These theories remain controversial and are the subject of ongoing work. Matt Visser, professor of mathematics at Victoria University of Wellington, NZ in "Conservative Entropic Forces" criticizes selected approaches but generally concludes:
Gravity
In 2009, Erik Verlinde argued that gravity can be explained as an entropic force. He claimed (similarly to Jacobson's earlier result) that gravity is a consequence of the "information associated with the positions of material bodies". This model combines the thermodynamic approach to gravity with Gerard 't Hooft's holographic principle. It implies that gravity is not a fundamental interaction, but an emergent phenomenon.
Other forces
In the wake of the discussion started by Verlinde, entropic explanations for other fundamental forces have been suggested, including Coulomb's law. The same approach has been argued to explain dark matter, dark energy, and the Pioneer effect.
Links to adaptive behavior
It was argued that causal entropic forces lead to spontaneous emergence of tool use and social cooperation. Causal entropic forces by definition maximize entropy production between the present and future time horizon, rather than just greedily maximizing instantaneous entropy production like typical entropic forces.
A formal simultaneous connection between the mathematical structure of the discovered laws of nature, intelligence and the entropy-like measures of complexity was previously noted in 2000 by Andrei Soklakov in the context of Occam's razor principle.
See also
Colloids
Nanomechanics
Thermodynamics
Abraham–Lorentz force
Entropic gravity
Entropy
Introduction to entropy
Entropic elasticity of an ideal chain
Hawking radiation
Data clustering
Depletion force
Maximal entropy random walk
References
Materials science
Thermodynamic entropy
Soft matter | Entropic force | [
"Physics",
"Materials_science",
"Engineering"
] | 1,889 | [
"Applied and interdisciplinary physics",
"Physical quantities",
"Soft matter",
"Materials science",
"Thermodynamic entropy",
"Entropy",
"Condensed matter physics",
"nan",
"Statistical mechanics"
] |
22,784,974 | https://en.wikipedia.org/wiki/List%20of%20fusion%20power%20technologies | The following is a list of fusion power technologies that have been practically attempted:
Pioneers
Beam-target (Oliphant, 1934)
Convergent shock-waves (Huemul, Argentina)
Magneto-electrostatic toroid trap (ATOLL, Artsimovich)
Tokamak (T-1 to 10, Kurchatov Institute, JET, ITER is under construction, and many more)
Toroidal z-pinch (ZETA)
Magnetic
Accelerated FRC (TCS-U)
Bumpy torus (ELMO, EBT, ORNL)
Galatea (Tornado)
Magnetic suspension (Levitron)
High beta tokamak (HBT-EP)
Levitated dipole
Odd-parity RMF
Reversed field pinch (MST, RFX-Mod Italy)
Tokamak (Spherical tokamak (MAST, NSTX), Spheromak (Dynomak, SSPX Lawrence Livermore))
Stellarator (Wendelstein 7-X)
Non-neutral plasma (Columbia Non-neutral Torus)
Compact (NCSX Princeton [cancelled])
Tandem mirror (Gamma-10 Japan)
Inertial
Laser Inertial (NIF) - direct drive
Inertial confinement fusion - indirect drive
Inertial confinement fusion - Fast Ignition
Heavy ion fusion (HIF, HIFAR Lawrence Berkeley)
MAGLIF: Combination pinch and laser ICF
Z-pinch
Z-pinch
Pulsed z-pinch (Saturn, Sandia)
High density Z-pinch (MAGPIE Imperial College)
Inverse Z-pinch
Shear flow stabilized (Zap Energy)
Inertial electrostatic confinement
Fusor (Fusor, Farnsworth)
IEC (Fusor, Hirsch-Meeks)
IEC with Periodically Oscillating Plasma Sphere (POPS, LANL)
IEC with plasma electrode (PoF, Sanns)
IEC with beam/spherical capacitor (STAR, Sesselmann)
Polywell (Fusor and magnetic mirror hybrid)
IEC with Penning trap (Penning Fusion Experiment - PFX, LANL)
F1 (electrostatic and magnetic cusp hybrid - Fusion One)
Other, hybrids
CT Accel (CTIX, UC Davis)
Magneto-kinetic (PHDX, Plasma Dynamic Lab)
Magnetized target (AFRL, LANL)
Magneto-inertial (OMEGA laser, LLE, Rochester)
Levitated dipole [superconducting] (LDX, MIT, PSGC)
Maryland Centrifugal (MCX)
Sheared magnetofluid/Bernoulli confinement (MBX, Uni Texas)
Penning fusion (PFX, LANL)
Plasma jets (HyperV, Chantilly)
Magnetized target fusion with mechanical compression (General Fusion, Burnaby)
Field-reversed colliding beams (Tri-Alpha)
Muon-catalyzed fusion (Berkeley, Alvarez)
Dense Plasma Focus (Focus fusion, Lawrenceville Plasma Physics, Lerner)
Rotating lithium wall (RWE, Maryland)
References
See also
Fusion One Corporation
Magnetized target fusion technology description by General Fusion
Inertial Confinement Fusion at NIF - How ICF Works
Focus Fusion
EMC2 Fusion Development Corporation | List of fusion power technologies | [
"Physics",
"Chemistry"
] | 649 | [
"Nuclear fusion",
"Fusion power",
"Plasma physics"
] |
35,607,283 | https://en.wikipedia.org/wiki/Superfluidity | Superfluidity is the characteristic property of a fluid with zero viscosity which therefore flows without any loss of kinetic energy. When stirred, a superfluid forms vortices that continue to rotate indefinitely. Superfluidity occurs in two isotopes of helium (helium-3 and helium-4) when they are liquefied by cooling to cryogenic temperatures. It is also a property of various other exotic states of matter theorized to exist in astrophysics, high-energy physics, and theories of quantum gravity. The theory of superfluidity was developed by Soviet theoretical physicists Lev Landau and Isaak Khalatnikov.
Superfluidity often co-occurs with Bose–Einstein condensation, but neither phenomenon is directly related to the other; not all Bose–Einstein condensates can be regarded as superfluids, and not all superfluids are Bose–Einstein condensates. Superfluids have some potential practical uses, such as dissolving substances in a quantum solvent.
Superfluidity of liquid helium
Superfluidity was discovered in helium-4 by Pyotr Kapitsa and, independently, by John F. Allen and Don Misener in 1937. Heike Kamerlingh Onnes possibly observed the superfluid phase transition on August 2, 1911, the same day that he observed superconductivity in mercury. It has since been described through phenomenology and microscopic theories.
In liquid helium-4, the superfluidity occurs at far higher temperatures than it does in helium-3. Each atom of helium-4 is a boson particle, by virtue of its integer spin. A helium-3 atom is a fermion particle; it can form bosons only by pairing with another particle like itself, which occurs at much lower temperatures. The discovery of superfluidity in helium-3 was the basis for the award of the 1996 Nobel Prize in Physics. This process is similar to the electron pairing in superconductivity.
Cold atomic gases
Superfluidity in an ultracold fermionic gas was experimentally proven by Wolfgang Ketterle and his team, who observed quantum vortices in lithium-6 at a temperature of 50 nK at MIT in April 2005. Such vortices had previously been observed in an ultracold bosonic gas using rubidium-87 in 2000, and more recently in two-dimensional gases. As early as 1999, Lene Hau created such a condensate using sodium atoms for the purpose of slowing light, and later stopping it completely. Her team subsequently used this system of compressed light to generate the superfluid analogue of shock waves and tornadoes.
Superfluids in astrophysics
The idea that superfluidity exists inside neutron stars was first proposed by Arkady Migdal. By analogy with electrons inside superconductors forming Cooper pairs because of electron–lattice interaction, it is expected that nucleons in a neutron star at sufficiently high density and low temperature can also form Cooper pairs because of the long-range attractive nuclear force and lead to superfluidity and superconductivity.
In high-energy physics and quantum gravity
Superfluid vacuum theory (SVT) is an approach in theoretical physics and quantum mechanics where the physical vacuum is viewed as superfluid.
The ultimate goal of the approach is to develop scientific models that unify quantum mechanics (describing three of the four known fundamental interactions) with gravity. This makes SVT a candidate for the theory of quantum gravity and an extension of the Standard Model.
It is hoped that development of such a theory would unify into a single consistent model of all fundamental interactions,
and to describe all known interactions and elementary particles as different manifestations of the same entity, superfluid vacuum.
On the macro scale, an analogous phenomenon has been suggested to occur in the murmurations of starlings: the rapidity of change in flight patterns mimics the phase change leading to superfluidity in some liquid states.
Light behaves like a superfluid in various situations, such as Poisson's spot. Like the liquid helium shown above, light will travel along the surface of an obstacle before continuing along its trajectory. Since light is not affected by local gravity, its "level" becomes its own trajectory and velocity. Another example is how a beam of light travels through the hole of an aperture and along its backside before diffraction.
See also
Boojum (superfluidity)
Condensed matter physics
Macroscopic quantum phenomena
Quantum hydrodynamics
Slow light
Supersolid
References
Further reading
Schmidt, A. (2015) Introduction to Superfluidity, Springer ISBN 978-3-319-0794
Svistunov, B. V., Babaev E. S., Prokof'ev N. V. Superfluid States of Matter
External links
Video: Demonstration of superfluid helium (Alfred Leitner, 1963, 38 min.)
Superfluidity seen in a 2d fermi gas recent 2021 observation relevant for Cuprate superconductors
Superfluidity
Physical phenomena
Fluid dynamics
Liquid helium
Phases of matter
Lev Landau | Superfluidity | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,044 | [
"Physical phenomena",
"Phase transitions",
"Chemical engineering",
"Phases of matter",
"Superfluidity",
"Condensed matter physics",
"Exotic matter",
"Piping",
"Matter",
"Fluid dynamics"
] |
35,610,132 | https://en.wikipedia.org/wiki/Cyclopentadienyliron%20dicarbonyl%20dimer | Cyclopentadienyliron dicarbonyl dimer is an organometallic compound with the formula [(η5-C5H5)Fe(CO)2]2, often abbreviated to Cp2Fe2(CO)4, [CpFe(CO)2]2 or even Fp2, with the colloquial name "fip dimer". It is a dark reddish-purple crystalline solid, which is readily soluble in moderately polar organic solvents such as chloroform and pyridine, but less soluble in carbon tetrachloride and carbon disulfide. Cp2Fe2(CO)4 is insoluble in but stable toward water. Cp2Fe2(CO)4 is reasonably stable to storage under air and serves as a convenient starting material for accessing other Fp (CpFe(CO)2) derivatives (described below).
Structure
In solution, Cp2Fe2(CO)4 can be considered a dimeric half-sandwich complex. It exists in three isomeric forms: cis, trans, and an unbridged, open form. These isomeric forms are distinguished by the position of the ligands. The cis and trans isomers differ in the relative position of C5H5 (Cp) ligands. The cis and trans isomers have the formulation [(η5-C5H5)Fe(CO)(μ-CO)]2, that is, two CO ligands are terminal whereas the other two CO ligands bridge between the iron atoms. The cis and trans isomers interconvert via the open isomer, which has no bridging ligands between iron atoms. Instead, it is formulated as (η5-C5H5)(OC)2Fe−Fe(CO)2(η5-C5H5) — the metals are held together by an iron–iron bond. At equilibrium, the cis and trans isomers are predominant.
In addition, the terminal and bridging carbonyls are known to undergo exchange: the trans isomer can undergo bridging–terminal CO ligand exchange through the open isomer, or through a twisting motion without going through the open form. In contrast, the bridging and terminal CO ligands of the cis isomer can only exchange via the open isomer.
In solution, the cis, trans, and open isomers interconvert rapidly at room temperature, making the molecular structure fluxional. The fluxional process for cyclopentadienyliron dicarbonyl dimer is faster than the NMR time scale, so that only an averaged, single Cp signal is observed in the 1H NMR spectrum at 25 °C. Likewise, the 13C NMR spectrum exhibits one sharp CO signal above −10 °C, while the Cp signal sharpens to one peak above 60 °C. NMR studies indicate that the cis isomer is slightly more abundant than the trans isomer at room temperature, while the amount of the open form is small. The fluxional process is not fast enough to produce averaging in the IR spectrum. Thus, three absorptions are seen for each isomer. The bridging CO ligands appear at around 1780 cm−1 whereas the terminal CO ligands are observed at around 1980 cm−1. The averaged structure of these isomers of Cp2Fe2(CO)4 results in a dipole moment of 3.1 D in benzene.
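A rough way to quantify "faster than the NMR time scale": for exchange between two equally populated sites, the signals coalesce when the exchange rate reaches k ≈ πΔν/√2. The chemical-shift separation used below is an assumed, purely illustrative value, not a measured one for this compound.

```python
from math import pi, sqrt

def coalescence_rate(delta_nu_hz: float) -> float:
    """Exchange rate (s^-1) at coalescence for two equally populated sites: k = pi * dnu / sqrt(2)."""
    return pi * delta_nu_hz / sqrt(2.0)

delta_nu = 50.0  # Hz, assumed separation of the cis/trans Cp resonances (illustrative only)
k_min = coalescence_rate(delta_nu)
print(f"exchange must exceed ~{k_min:.0f} s^-1 for the Cp signals to average")
# A single sharp Cp resonance at 25 degC therefore implies an isomer-interconversion
# rate well above this threshold.
```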
The solid-state molecular structure of both cis and trans isomers have been analyzed by X-ray and neutron diffraction. The Fe–Fe separation and the Fe–C bond lengths are the same in the Fe2C2 rhomboids, an exactly planar Fe2C2 four-membered ring in the trans isomer versus a folded rhomboid in cis with an angle of 164°, and significant distortions in the Cp ring of the trans isomer reflecting different Cp orbital populations. Although older textbooks show the two iron atoms bonded to each other, theoretical analyses indicate the absence of a direct Fe–Fe bond. This view is consistent with computations and X-ray crystallographic data that indicate a lack of significant electron density between the iron atoms. However, Labinger offers a dissenting view, based primarily on chemical reactivity and spectroscopic data, arguing that electron density is not necessarily the best indication of the presence of a chemical bond. Moreover, without an Fe–Fe bond, the bridging carbonyls must be formally treated as an μ-X2 ligand and μ-L ligand in order for the iron centers to satisfy the 18-electron rule. This formalism is argued to give misleading implications with respect to the chemical and spectroscopic behavior of the carbonyl groups.
Synthesis
Cp2Fe2(CO)4 was first prepared in 1955 at Harvard by Geoffrey Wilkinson using the same method employed today: the reaction of iron pentacarbonyl and dicyclopentadiene.
2 Fe(CO)5 + C10H12 → (η5-C5H5)2Fe2(CO)4 + 6 CO + H2
In this preparation, dicyclopentadiene cracks to give cyclopentadiene, which reacts with Fe(CO)5 with loss of CO. Thereafter, the pathways for the photochemical and thermal routes differ subtly but both entail formation of a hydride intermediate. The method is used in the teaching laboratory.
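A small stoichiometry sketch for the preparation above (two equivalents of Fe(CO)5 per equivalent of product); the 10 g input quantity is arbitrary and chosen only to illustrate the calculation.

```python
# Theoretical yield of Cp2Fe2(CO)4 from Fe(CO)5, using the 2:1 stoichiometry above.
ATOMIC_MASS = {"H": 1.008, "C": 12.011, "O": 15.999, "Fe": 55.845}  # g/mol

def molar_mass(formula: dict) -> float:
    """Sum of atomic masses for a composition given as {element: count}."""
    return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

M_FeCO5 = molar_mass({"Fe": 1, "C": 5, "O": 5})             # Fe(CO)5
M_Fp2 = molar_mass({"Fe": 2, "C": 14, "H": 10, "O": 4})     # (C5H5)2Fe2(CO)4

mass_in = 10.0                          # g of Fe(CO)5 (arbitrary example amount)
mol_FeCO5 = mass_in / M_FeCO5
mass_Fp2 = (mol_FeCO5 / 2.0) * M_Fp2    # two Fe(CO)5 give one Fp2
print(f"Fe(CO)5: {M_FeCO5:.1f} g/mol, Fp2: {M_Fp2:.1f} g/mol")
print(f"theoretical yield from {mass_in:.0f} g Fe(CO)5: {mass_Fp2:.2f} g Fp2")
```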
Reactions
Although of no major commercial value, Fp2 is a workhorse in organometallic chemistry because it is inexpensive and FpX derivatives are rugged (X = halide, organyl).
"Fp−" (FpNa and FpK)
Reductive cleavage of [CpFe(CO)2]2 (formally an iron(I) complex) produces alkali metal derivatives formally derived from the cyclopentadienyliron dicarbonyl anion, [CpFe(CO)2]− or called Fp− (formally iron(0)), which are assumed to exist as a tight ion pair. A typical reductant is sodium metal or sodium amalgam; NaK alloy, potassium graphite (KC8), and alkali metal trialkylborohydrides have been used. [CpFe(CO)2]Na is a widely studied reagent since it is readily alkylated, acylated, or metalated by treatment with an appropriate electrophile. It is an excellent SN2 nucleophile, being one to two orders of magnitude more nucleophilic than thiophenolate, PhS– when reacted with primary and secondary alkyl bromides.
[CpFe(CO)2]2 + 2 Na → 2 CpFe(CO)2Na
[CpFe(CO)2]2 + 2 KBH(C2H5)3 → 2 CpFe(CO)2K + H2 + 2 B(C2H5)3
Treatment of NaFp with an alkyl halide (RX, X = Br, I) produces FeR(η5-C5H5)(CO)2
CpFe(CO)2K + CH3I → CpFe(CO)2CH3 + KI
Fp2 can also be cleaved with alkali metals and by electrochemical reduction.
FpX (X = Cl, Br, I)
Halogens oxidatively cleave [CpFe(CO)2]2 to give the Fe(II) species FpX (X = Cl, Br, I):
[CpFe(CO)2]2 + X2 → 2 CpFe(CO)2X
One example is cyclopentadienyliron dicarbonyl iodide.
Fp(η2-alkene)+, Fp(η2-alkyne)+ and other "Fp+"
In the presence of halide anion acceptors such as aluminium bromide or silver tetrafluoroborate, FpX compounds (X = halide) react with alkenes, alkynes, or neutral labile ligands (such as ethers and nitriles) to afford Fp+ complexes. In another approach, salts of [Fp(isobutene)]+ are readily obtained by reaction of NaFp with methallyl chloride followed by protonolysis. This complex is a convenient and general precursor to other cationic Fp–alkene and Fp–alkyne complexes. The exchange process is facilitated by the loss of gaseous and bulky isobutene. Generally, less substituted alkenes bind more strongly and can displace more hindered alkene ligands. Alkene and alkyne complexes can also be prepared by heating a cationic ether or aqua complex, for example , with the alkene or alkyne. complexes can also be prepared by treatment of FpMe with HBF4·Et2O in CH2Cl2 at −78 °C, followed by addition of L.
Alkene–Fp complexes can also be prepared from Fp anion indirectly. Thus, hydride abstraction from Fp–alkyl compounds using triphenylmethyl hexafluorophosphate affords [Fp(α-alkene)]+ complexes.
FpNa + RCH2CH2I → FpCH2CH2R + NaI
FpCH2CH2R + Ph3CPF6 → [Fp(CH2=CHR)]PF6 + Ph3CH
Reaction of NaFp with an epoxide followed by acid-promoted dehydration also affords alkene complexes. Fp(alkene)+ are stable with respect to bromination, hydrogenation, and acetoxymercuration, but the alkene is easily released with sodium iodide in acetone or by warming with acetonitrile.
The alkene ligand in these cations is activated toward attack by nucleophiles, opening the way to a number of carbon–carbon bond-forming reactions. Nucleophilic additions usually occur at the more substituted carbon. This regiochemistry is attributed to the greater positive charge density at this position. The regiocontrol is often modest. The addition of the nucleophile is completely stereoselective, occurring anti to the Fp group. Analogous Fp(alkyne)+ complexes are also reported to undergo nucleophilic addition reactions by various carbon, nitrogen, and oxygen nucleophiles.
Fp(alkene)+ and Fp(alkyne)+ π-complexes are also quite acidic at the allylic and propargylic positions, respectively, and can be quantitatively deprotonated with amine bases like Et3N to give neutral Fp–allyl and Fp–allenyl σ-complexes (eqn 1, shown for alkene complex).
Fp–allyl and Fp–allenyl react with cationic electrophiles E (such as Me3O+, carbocations, oxocarbenium ions) to generate allylic and propargylic functionalization products, respectively (eqn 2, shown for allyliron). The related complex [Cp*Fe(CO)2(thf)]+[BF4]− (Cp* = C5Me5) has been shown to catalyze propargylic, allylic, and allenic C−H functionalization by combining the deprotonation and electrophilic functionalization processes described above with facile exchange of the unsaturated hydrocarbon bound to the cationic iron center.
η2-Allenyl complexes of Fp+ and substituted cyclopentadienyliron dicarbonyl cations have also been characterized, with X-ray crystallographic analysis showing substantial bending at the central allenic carbon (bond angle < 150°).
Fp-based cyclopropanation reagents
Fp-based reagents have been developed for cyclopropanations. The key reagent is prepared from FpNa with a thioether and methyl iodide, and has a good shelf-life, in contrast to typical Simmons-Smith intermediates and diazoalkanes.
FpNa + ClCH2SCH3 → FpCH2SCH3 + NaCl
FpCH2SCH3 + CH3I + NaBF4 → [FpCH2S(CH3)2]BF4 + NaI
Use of [FpCH2S(CH3)2]BF4 does not require specialized conditions.
[FpCH2S(CH3)2]BF4 + (Ph)2C=CH2 → 1,1-diphenylcyclopropane + …
Iron(III) chloride is added to destroy any byproduct.
Precursors to the iron carbene cation [Fp=CH2]+, such as FpCH2OMe (which is converted to the carbene upon protonation), have also been used as cyclopropanation reagents.
Photochemical reaction
Fp2 exhibits photochemistry. For example, upon UV irradiation at 350 nm, it is reduced by the 1-benzyl-1,4-dihydronicotinamide dimer, also known as (BNA)2.
References
Organoiron compounds
Carbonyl complexes
Cyclopentadienyl complexes
Dimers (chemistry)
Half sandwich compounds | Cyclopentadienyliron dicarbonyl dimer | [
"Chemistry",
"Materials_science"
] | 2,823 | [
"Half sandwich compounds",
"Cyclopentadienyl complexes",
"Dimers (chemistry)",
"Polymer chemistry",
"Organometallic chemistry"
] |
35,610,473 | https://en.wikipedia.org/wiki/Video-based%20reflection | Video-based reflection is a reflective practice technique in which video recordings, rather than one's own memory, is used as a basis for reflection and professional growth. Video-based reflection is used with moderations in various professional fields, e.g. in the field of education and pedagogy. Several workshop formats can be described as based on video-based reflection, e.g. "Videobased Reflection on Team Interaction" (The ViRTI Method; Due & Lange 2015) within the conversation analytic school of research, or the concept of Marte Meo, a method of educational counseling.
In the education field
Individual teachers can practice video-based reflection, but it is more commonly done when teachers are a part of video clubs. Research about the effect that video club participation can have on teacher practice has been done by Katherine A. Linsenmeier, Miriam Gamoran Sherin, and Elizabeth van Es.
Individual video-based reflection
Research indicates that teachers can benefit from video-based reflection, even when done apart from a group. Individuals using this mode of reflection tend to write longer and have more specific reflections than when writing a reflection using memory alone. This practice has also been shown to help pre-service and novice teachers focus their attention on their students' learning rather than on their own experiences.
Video clubs
Video clubs are formed by groups of teachers who want to investigate a specific aspect of their teaching practice. These clubs often meet on a monthly basis to view and discuss a video clip of a member's teaching. Teachers participating in video clubs focused on student learning report feeling better able to notice and respond to student learning during instruction.
References
Personal development
Education theory
Learning theory (education) | Video-based reflection | [
"Biology"
] | 345 | [
"Personal development",
"Behavior",
"Human behavior"
] |
35,611,343 | https://en.wikipedia.org/wiki/Covalent%20bond%20classification%20method | The covalent bond classification (CBC) method, also referred to as LXZ notation, is a way of describing covalent compounds such as organometallic complexes in a way that is not prone to limitations resulting from the definition of oxidation state. Instead of simply assigning a charge (oxidation state) to an atom in the molecule, the covalent bond classification method analyzes the nature of the ligands surrounding the atom of interest. According to this method, the interactions that allow for coordination of the ligand can be classified according to whether it donates two, one, or zero electrons. These three classes of ligands are respectively given the symbols L, X, and Z. The method was published by Malcolm L. H. Green in 1995.
Types of ligands
X-type ligands are those that donate one electron to the metal and accept one electron from the metal when using the neutral ligand method of electron counting, or donate two electrons to the metal when using the donor pair method of electron counting. Regardless of whether they are considered neutral or anionic, these ligands yield normal covalent bonds. A few examples of this type of ligand are H, halogens (Cl, Br, F, etc.), OH, CN, CH3, and NO (bent).
L-type ligands are neutral ligands that donate two electrons to the metal center, regardless of the electron counting method being used. These electrons can come from lone pairs, pi, or sigma donors. The bonds formed between these ligands and the metal are dative covalent bonds, which are also known as coordinate bonds. Examples of this type of ligand include CO, PR3, NH3, H2O, carbenes (=CRR'), and alkenes.
Z-type ligands are those that accept two electrons from the metal center, as opposed to the donation occurring with the other two types of ligands. However, these ligands also form dative covalent bonds like the L-type. This type of ligand is not usually used because in certain situations it can be written in terms of L and X. For example, if a Z ligand is accompanied by an L type, it can be written as X2. Examples of these ligands are Lewis acids, such as BR3.
Uses of the notation
When given a metal complex and the trends for the ligand types, the complex can be written in the simplified form [MLlXxZz]Q±. The subscripts represent the numbers of each ligand type present in that complex, M is the metal center, and Q is the overall charge on the complex. For example, cis-PtCl2(NH3)2 is classified as ML2X2 and Fe(CO)5 as ML5.
Also from this general form, the values for electron count, oxidation state, coordination number, number of d-electrons, valence number and the ligand bond number can be calculated.
Electron Count = m + 2L + X − Q
where m is the group number of the metal (the number of valence electrons of the neutral metal atom).
Oxidation State (OS) = X + Q
Coordination Number (CN) = L + X + Z
Number of d-electrons (dn) = m − OS
= m − X − Q
Valence Number (VN) = X + 2Z + Q
Ligand Bond Number (LBN) = L + X + Z
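A short sketch of the bookkeeping above, applied to ferrocene, Fe(η5-C5H5)2. In the CBC scheme each η5-Cp ligand is classified as L2X, so ferrocene is ML4X2 with no overall charge; the function below simply evaluates the relations listed above.

```python
def cbc_descriptors(m: int, L: int, X: int, Z: int = 0, Q: int = 0) -> dict:
    """Evaluate the CBC quantities for a complex [MLlXxZz]^Q, where m is the metal's group number."""
    oxidation_state = X + Q
    return {
        "electron count": m + 2 * L + X - Q,
        "oxidation state": oxidation_state,
        "coordination number": L + X + Z,
        "d-electron count": m - oxidation_state,
        "valence number": X + 2 * Z + Q,
        "ligand bond number": L + X + Z,
    }

# Ferrocene, Fe(eta5-C5H5)2: each Cp ring counts as L2X, so L = 4, X = 2, Z = 0, Q = 0; Fe is group 8.
for name, value in cbc_descriptors(m=8, L=4, X=2).items():
    print(f"{name:>20}: {value}")
# Gives an 18-electron, Fe(II), d6 complex with a ligand bond number of 6, as expected.
```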
Other uses
This template for writing a metal complex also allows for a better comparison of molecules with different charges. This can happen when the assignment is reduced to its “equivalent neutral class". The equivalent neutral class is the classification of the complex if the charge was localized on the ligand as opposed to the metal center. In other words, the equivalent neutral class is the representation of the complex as though there were no charge.
References
Chemical bonding | Covalent bond classification method | [
"Physics",
"Chemistry",
"Materials_science"
] | 718 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
35,611,432 | https://en.wikipedia.org/wiki/Quantum%20Bayesianism | In physics and the philosophy of physics, quantum Bayesianism is a collection of related approaches to the interpretation of quantum mechanics, the most prominent of which is QBism (pronounced "cubism"). QBism is an interpretation that takes an agent's actions and experiences as the central concerns of the theory. QBism deals with common questions in the interpretation of quantum theory about the nature of wavefunction superposition, quantum measurement, and entanglement. According to QBism, many, but not all, aspects of the quantum formalism are subjective in nature. For example, in this interpretation, a quantum state is not an element of reality—instead, it represents the degrees of belief an agent has about the possible outcomes of measurements. For this reason, some philosophers of science have deemed QBism a form of anti-realism. The originators of the interpretation disagree with this characterization, proposing instead that the theory more properly aligns with a kind of realism they call "participatory realism", wherein reality consists of more than can be captured by any putative third-person account of it.
This interpretation is distinguished by its use of a subjective Bayesian account of probabilities to understand the quantum mechanical Born rule as a normative addition to good decision-making. Rooted in the prior work of Carlton Caves, Christopher Fuchs, and Rüdiger Schack during the early 2000s, QBism itself is primarily associated with Fuchs and Schack and has more recently been adopted by David Mermin. QBism draws from the fields of quantum information and Bayesian probability and aims to eliminate the interpretational conundrums that have beset quantum theory. The QBist interpretation is historically derivative of the views of the various physicists that are often grouped together as "the" Copenhagen interpretation, but is itself distinct from them. Theodor Hänsch has characterized QBism as sharpening those older views and making them more consistent.
More generally, any work that uses a Bayesian or personalist (a.k.a. "subjective") treatment of the probabilities that appear in quantum theory is also sometimes called quantum Bayesian. QBism, in particular, has been referred to as "the radical Bayesian interpretation".
In addition to presenting an interpretation of the existing mathematical structure of quantum theory, some QBists have advocated a research program of reconstructing quantum theory from basic physical principles whose QBist character is manifest. The ultimate goal of this research is to identify what aspects of the ontology of the physical world make quantum theory a good tool for agents to use. However, the QBist interpretation itself does not depend on any particular reconstruction.
History and development
E. T. Jaynes, a promoter of the use of Bayesian probability in statistical physics, once suggested that quantum theory is "[a] peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature—all scrambled up by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble". QBism developed out of efforts to separate these parts using the tools of quantum information theory and personalist Bayesian probability theory.
There are many interpretations of probability theory. Broadly speaking, these interpretations fall into one of three categories: those which assert that a probability is an objective property of reality (the propensity school), those who assert that probability is an objective property of the measuring process (frequentists), and those which assert that a probability is a cognitive construct which an agent may use to quantify their ignorance or degree of belief in a proposition (Bayesians). QBism begins by asserting that all probabilities, even those appearing in quantum theory, are most properly viewed as members of the latter category. Specifically, QBism adopts a personalist Bayesian interpretation along the lines of Italian mathematician Bruno de Finetti and English philosopher Frank Ramsey.
According to QBists, the advantages of adopting this view of probability are twofold. First, for QBists the role of quantum states, such as the wavefunctions of particles, is to efficiently encode probabilities; so quantum states are ultimately degrees of belief themselves. (If one considers any single measurement that is a minimal, informationally complete positive operator-valued measure (POVM), this is especially clear: A quantum state is mathematically equivalent to a single probability distribution, the distribution over the possible outcomes of that measurement.) Regarding quantum states as degrees of belief implies that the event of a quantum state changing when a measurement occurs—the "collapse of the wave function"—is simply the agent updating her beliefs in response to a new experience. Second, it suggests that quantum mechanics can be thought of as a local theory, because the Einstein–Podolsky–Rosen (EPR) criterion of reality can be rejected. The EPR criterion states: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." Arguments that quantum mechanics should be considered a nonlocal theory depend upon this principle, but to a QBist, it is invalid, because a personalist Bayesian considers all probabilities, even those equal to unity, to be degrees of belief. Therefore, while many interpretations of quantum theory conclude that quantum mechanics is a nonlocal theory, QBists do not.
Christopher Fuchs introduced the term "QBism" and outlined the interpretation in more or less its present form in 2010, carrying further and demanding consistency of ideas broached earlier, notably in publications from 2002. Several subsequent works have expanded and elaborated upon these foundations, notably a Reviews of Modern Physics article by Fuchs and Schack; an American Journal of Physics article by Fuchs, Mermin, and Schack; and Enrico Fermi Summer School lecture notes by Fuchs and Stacey.
Prior to the 2010 article, the term "quantum Bayesianism" was used to describe the developments which have since led to QBism in its present form. However, as noted above, QBism subscribes to a particular kind of Bayesianism which does not suit everyone who might apply Bayesian reasoning to quantum theory (see, for example, below). Consequently, Fuchs chose to call the interpretation "QBism", pronounced "cubism", preserving the Bayesian spirit via the CamelCase in the first two letters, but distancing it from Bayesianism more broadly. As this neologism is a homophone of Cubism the art movement, it has motivated conceptual comparisons between the two, and media coverage of QBism has been illustrated with art by Picasso and Gris. However, QBism itself was not influenced or motivated by Cubism and has no lineage to a potential connection between Cubist art and Bohr's views on quantum theory.
Core positions
According to QBism, quantum theory is a tool which an agent may use to help manage their expectations, more like probability theory than a conventional physical theory. Quantum theory, QBism claims, is fundamentally a guide for decision making which has been shaped by some aspects of physical reality. Chief among the tenets of QBism are the following:
All probabilities, including those equal to zero or one, are valuations that an agent ascribes to their degrees of belief in possible outcomes. As they define and update probabilities, quantum states (density operators), channels (completely positive trace-preserving maps), and measurements (positive operator-valued measures) are also the personal judgements of an agent.
The Born rule is normative, not descriptive. It is a relation to which an agent should strive to adhere in their probability and quantum-state assignments.
Quantum measurement outcomes are personal experiences for the agent gambling on them. Different agents may confer and agree upon the consequences of a measurement, but the outcome is the experience each of them individually has.
A measurement apparatus is conceptually an extension of the agent. It should be considered analogous to a sense organ or prosthetic limb—simultaneously a tool and a part of the individual.
Reception and criticism
Reactions to the QBist interpretation have ranged from enthusiastic to strongly negative. Some who have criticized QBism claim that it fails to meet the goal of resolving paradoxes in quantum theory. Bacciagaluppi argues that QBism's treatment of measurement outcomes does not ultimately resolve the issue of nonlocality, and Jaeger finds QBism's supposition that the interpretation of probability is key for the resolution to be unnatural and unconvincing. Norsen has accused QBism of solipsism, and Wallace identifies QBism as an instance of instrumentalism; QBists have argued insistently that these characterizations are misunderstandings, and that QBism is neither solipsist nor instrumentalist. A critical article by Nauenberg in the American Journal of Physics prompted a reply by Fuchs, Mermin, and Schack.
Some assert that there may be inconsistencies; for example, Stairs argues that when a probability assignment equals one, it cannot be a degree of belief as QBists say. Further, while also raising concerns about the treatment of probability-one assignments, Timpson suggests that QBism may result in a reduction of explanatory power as compared to other interpretations. Fuchs and Schack replied to these concerns in a later article. Mermin advocated QBism in a 2012 Physics Today article, which prompted considerable discussion. Several further critiques of QBism which arose in response to Mermin's article, and Mermin's replies to these comments, may be found in the Physics Today readers' forum. Section 2 of the Stanford Encyclopedia of Philosophy entry on QBism also contains a summary of objections to the interpretation, and some replies. Others are opposed to QBism on more general philosophical grounds; for example, Mohrhoff criticizes QBism from the standpoint of Kantian philosophy.
Certain authors find QBism internally self-consistent, but do not subscribe to the interpretation. For example, Marchildon finds QBism well-defined in a way that, to him, many-worlds interpretations are not, but he ultimately prefers a Bohmian interpretation. Similarly, Schlosshauer and Claringbold state that QBism is a consistent interpretation of quantum mechanics, but do not offer a verdict on whether it should be preferred. In addition, some agree with most, but perhaps not all, of the core tenets of QBism; Barnum's position, as well as Appleby's, are examples.
Popularized or semi-popularized media coverage of QBism has appeared in New Scientist, Scientific American, Nature, Science News, the FQXi Community, the Frankfurter Allgemeine Zeitung, Quanta Magazine, Aeon, Discover, Nautilus Quarterly, and Big Think. In 2018, two popular-science books about the interpretation of quantum mechanics, Ball's Beyond Weird and Ananthaswamy's Through Two Doors at Once, devoted sections to QBism. Furthermore, Harvard University Press published a popularized treatment of the subject, QBism: The Future of Quantum Physics, in 2016.
The philosophy literature has also discussed QBism from the viewpoints of structural realism and of phenomenology. Ballentine argues that "the initial assumption of QBism is not valid" because the inferential probability of Bayesian theory used by QBism is not applicable to quantum mechanics.
Relation to other interpretations
Copenhagen interpretations
The views of many physicists (Bohr, Heisenberg, Rosenfeld, von Weizsäcker, Peres, etc.) are often grouped together as the "Copenhagen interpretation" of quantum mechanics. Several authors have deprecated this terminology, claiming that it is historically misleading and obscures differences between physicists that are as important as their similarities. QBism shares many characteristics in common with the ideas often labeled as "the Copenhagen interpretation", but the differences are important; to conflate them or to regard QBism as a minor modification of the points of view of Bohr or Heisenberg, for instance, would be a substantial misrepresentation.
QBism takes probabilities to be personal judgments of the individual agent who is using quantum mechanics. This contrasts with older Copenhagen-type views, which hold that probabilities are given by quantum states that are in turn fixed by objective facts about preparation procedures. QBism considers a measurement to be any action that an agent takes to elicit a response from the world and the outcome of that measurement to be the experience the world's response induces back on that agent. As a consequence, communication between agents is the only means by which different agents can attempt to compare their internal experiences. Most variants of the Copenhagen interpretation, however, hold that the outcomes of experiments are agent-independent pieces of reality for anyone to access. QBism claims that these points on which it differs from previous Copenhagen-type interpretations resolve the obscurities that many critics have found in the latter, by changing the role that quantum theory plays (even though QBism does not yet provide a specific underlying ontology). Specifically, QBism posits that quantum theory is a normative tool which an agent may use to better navigate reality, rather than a set of mechanics governing it.
Other epistemic interpretations
Approaches to quantum theory, like QBism, which treat quantum states as expressions of information, knowledge, belief, or expectation are called "epistemic" interpretations. These approaches differ from each other in what they consider quantum states to be information or expectations "about", as well as in the technical features of the mathematics they employ. Furthermore, not all authors who advocate views of this type propose an answer to the question of what the information represented in quantum states concerns. In the words of the paper that introduced the Spekkens Toy Model:
if a quantum state is a state of knowledge, and it is not knowledge of local and noncontextual hidden variables, then what is it knowledge about? We do not at present have a good answer to this question. We shall therefore remain completely agnostic about the nature of the reality to which the knowledge represented by quantum states pertains. This is not to say that the question is not important. Rather, we see the epistemic approach as an unfinished project, and this question as the central obstacle to its completion. Nonetheless, we argue that even in the absence of an answer to this question, a case can be made for the epistemic view. The key is that one can hope to identify phenomena that are characteristic of states of incomplete knowledge regardless of what this knowledge is about.
Leifer and Spekkens propose a way of treating quantum probabilities as Bayesian probabilities, thereby considering quantum states as epistemic, which they state is "closely aligned in its philosophical starting point" with QBism. However, they remain deliberately agnostic about what physical properties or entities quantum states are information (or beliefs) about, as opposed to QBism, which offers an answer to that question. Another approach, advocated by Bub and Pitowsky, argues that quantum states are information about propositions within event spaces that form non-Boolean lattices. On occasion, the proposals of Bub and Pitowsky are also called "quantum Bayesianism".
Zeilinger and Brukner have also proposed an interpretation of quantum mechanics in which "information" is a fundamental concept, and in which quantum states are epistemic quantities. Unlike QBism, the Brukner–Zeilinger interpretation treats some probabilities as objectively fixed. In the Brukner–Zeilinger interpretation, a quantum state represents the information that a hypothetical observer in possession of all possible data would have. Put another way, a quantum state belongs in their interpretation to an optimally informed agent, whereas in QBism, any agent can formulate a state to encode her own expectations. Despite this difference, in Cabello's classification, the proposals of Zeilinger and Brukner are also designated as "participatory realism", as QBism and the Copenhagen-type interpretations are.
Bayesian, or epistemic, interpretations of quantum probabilities were proposed in the early 1990s by Baez and Youssef.
Von Neumann's views
R. F. Streater argued that "[t]he first quantum Bayesian was von Neumann", basing that claim on von Neumann's textbook The Mathematical Foundations of Quantum Mechanics. Blake Stacey disagrees, arguing that the views expressed in that book on the nature of quantum states and the interpretation of probability are not compatible with QBism, or indeed, with any position that might be called quantum Bayesianism.
Relational quantum mechanics
Comparisons have also been made between QBism and the relational quantum mechanics (RQM) espoused by Carlo Rovelli and others. In both QBism and RQM, quantum states are not intrinsic properties of physical systems. Both QBism and RQM deny the existence of an absolute, universal wavefunction. Furthermore, both QBism and RQM insist that quantum mechanics is a fundamentally local theory. In addition, Rovelli, like several QBist authors, advocates reconstructing quantum theory from physical principles in order to bring clarity to the subject of quantum foundations. (The QBist approaches to doing so are different from Rovelli's, and are described below.) One important distinction between the two interpretations is their philosophy of probability: RQM does not adopt the Ramsey–de Finetti school of personalist Bayesianism. Moreover, RQM does not insist that a measurement outcome is necessarily an agent's experience.
Other uses of Bayesian probability in quantum physics
QBism should be distinguished from other applications of Bayesian inference in quantum physics, and from quantum analogues of Bayesian inference. For example, some in the field of computer science have introduced a kind of quantum Bayesian network, which they argue could have applications in "medical diagnosis, monitoring of processes, and genetics". Bayesian inference has also been applied in quantum theory for updating probability densities over quantum states, and MaxEnt methods have been used in similar ways. Bayesian methods for quantum state and process tomography are an active area of research.
Technical developments and reconstructing quantum theory
Conceptual concerns about the interpretation of quantum mechanics and the meaning of probability have motivated technical work. A quantum version of the de Finetti theorem, introduced by Caves, Fuchs, and Schack (independently reproving a result found using different means by Størmer) to provide a Bayesian understanding of the idea of an "unknown quantum state", has found application elsewhere, in topics like quantum key distribution and entanglement detection.
Adherents of several interpretations of quantum mechanics, QBism included, have been motivated to reconstruct quantum theory. The goal of these research efforts has been to identify a new set of axioms or postulates from which the mathematical structure of quantum theory can be derived, in the hope that with such a reformulation, the features of nature which made quantum theory the way it is might be more easily identified. Although the core tenets of QBism do not demand such a reconstruction, some QBists—Fuchs, in particular—have argued that the task should be pursued.
One topic prominent in the reconstruction effort is the set of mathematical structures known as symmetric, informationally-complete, positive operator-valued measures (SIC-POVMs). QBist foundational research stimulated interest in these structures, which now have applications in quantum theory outside of foundational studies and in pure mathematics.
The most extensively explored QBist reformulation of quantum theory involves the use of SIC-POVMs to rewrite quantum states (either pure or mixed) as a set of probabilities defined over the outcomes of a "Bureau of Standards" measurement. That is, if one expresses a density matrix as a probability distribution over the outcomes of a SIC-POVM experiment, one can reproduce all the statistical predictions implied by the density matrix from the SIC-POVM probabilities instead. The Born rule then takes the role of relating one valid probability distribution to another, rather than of deriving probabilities from something apparently more fundamental. Fuchs, Schack, and others have taken to calling this restatement of the Born rule the urgleichung, from the German for "primal equation" (see Ur- prefix), because of the central role it plays in their reconstruction of quantum theory.
The following discussion presumes some familiarity with the mathematics of quantum information theory, and in particular, the modeling of measurement procedures by POVMs. Consider a quantum system to which is associated a d-dimensional Hilbert space. If a set of d² rank-1 projectors Π̂i, i = 1, ..., d², satisfying
Tr(Π̂iΠ̂j) = (dδij + 1)/(d + 1)
exists, then one may form a SIC-POVM {Ĥi = (1/d)Π̂i}. An arbitrary quantum state ρ̂ may be written as a linear combination of the SIC projectors
ρ̂ = ∑i [(d + 1)P(Hi) − 1/d] Π̂i,
where P(Hi) = (1/d)Tr(ρ̂Π̂i) is the Born rule probability for obtaining SIC measurement outcome Hi implied by the state assignment ρ̂. We follow the convention that operators have hats while experiences (that is, measurement outcomes) do not. Now consider an arbitrary quantum measurement, denoted by the POVM {D̂j}. The urgleichung is the expression obtained from forming the Born rule probabilities, Q(Dj), for the outcomes of this quantum measurement:
Q(Dj) = ∑i [(d + 1)P(Hi) − 1/d] P(Dj | Hi),
where Q(Dj) = Tr(ρ̂D̂j) is the Born rule probability for obtaining outcome Dj implied by the state assignment ρ̂. The term P(Dj | Hi) may be understood to be a conditional probability in a cascaded measurement scenario: Imagine that an agent plans to perform two measurements, first a SIC measurement and then the {D̂j} measurement. After obtaining an outcome Hi from the SIC measurement, the agent will update her state assignment to a new quantum state before performing the second measurement. If she uses the Lüders rule for state update and obtains outcome Hi from the SIC measurement, then the updated state is Π̂i. Thus the probability for obtaining outcome Dj for the second measurement conditioned on obtaining outcome Hi for the SIC measurement is P(Dj | Hi) = Tr(D̂jΠ̂i).
Note that the urgleichung is structurally very similar to the law of total probability, which is the expression
P(Dj) = ∑i P(Hi) P(Dj | Hi).
They functionally differ only by a dimension-dependent affine transformation of the SIC probability vector. As QBism says that quantum theory is an empirically-motivated normative addition to probability theory, Fuchs and others find the appearance of a structure in quantum theory analogous to one in probability theory to be an indication that a reformulation featuring the urgleichung prominently may help to reveal the properties of nature which made quantum theory so successful.
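As an illustrative sanity check (not drawn from the QBist literature itself), the following NumPy sketch builds the standard tetrahedral SIC-POVM for a qubit (d = 2) and verifies numerically that the urgleichung above reproduces the ordinary Born rule probabilities for an arbitrary state and measurement; all variable names are assumptions of the sketch.

import numpy as np

d = 2  # qubit example

# Tetrahedral Bloch vectors give the d^2 = 4 SIC projectors Pi_i = (I + n.sigma)/2.
directions = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
I2 = np.eye(2)
Pi = [(I2 + sum(c * s for c, s in zip(n, sigma))) / 2 for n in directions]
H = [P / d for P in Pi]                          # SIC-POVM elements H_i = Pi_i / d

# An arbitrary pure state and an arbitrary two-outcome measurement {D_0, D_1}.
psi = np.array([0.6, 0.8j])
rho = np.outer(psi, psi.conj())
D0 = np.array([[1.0, 0.0], [0.0, 0.0]])
D1 = I2 - D0

p = np.array([np.trace(rho @ h).real for h in H])                       # P(H_i)
r = np.array([[np.trace(Dj @ P).real for P in Pi] for Dj in (D0, D1)])  # P(D_j | H_i)

q_urgleichung = r @ ((d + 1) * p - 1 / d)        # right-hand side of the urgleichung
q_born = np.array([np.trace(rho @ Dj).real for Dj in (D0, D1)])
print(np.allclose(q_urgleichung, q_born))        # True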
The urgleichung does not replace the law of total probability. Rather, the urgleichung and the law of total probability apply in different scenarios because Q(Dj) and P(Dj) refer to different situations. P(Dj) is the probability that an agent assigns for obtaining outcome Dj on her second of two planned measurements, that is, for obtaining outcome Dj after first making the SIC measurement and obtaining one of the Hi outcomes. Q(Dj), on the other hand, is the probability an agent assigns for obtaining outcome Dj when she does not plan to first make the SIC measurement. The law of total probability is a consequence of coherence within the operational context of performing the two measurements as described. The urgleichung, in contrast, is a relation between different contexts which finds its justification in the predictive success of quantum physics.
The SIC representation of quantum states also provides a reformulation of quantum dynamics. Consider a quantum state ρ̂ with SIC representation P(Hi). The time evolution of this state is found by applying a unitary operator Û to form the new state Ûρ̂Û†, which has the SIC representation
PU(Hi) = (1/d)Tr(Ûρ̂Û†Π̂i) = (1/d)Tr(ρ̂(Û†Π̂iÛ)).
The second equality is written in the Heisenberg picture of quantum dynamics, with respect to which the time evolution of a quantum system is captured by the probabilities associated with a rotated SIC measurement {(1/d)Û†Π̂iÛ} of the original quantum state ρ̂. Then the Schrödinger equation is completely captured in the urgleichung for this measurement:
PU(Hj) = ∑i [(d + 1)P(Hi) − 1/d] P(DjU | Hi),
where P(DjU | Hi) = (1/d)Tr((Û†Π̂jÛ)Π̂i) is the probability for the rotated SIC outcome given outcome Hi. In these terms, the Schrödinger equation is an instance of the Born rule applied to the passing of time; an agent uses it to relate how she will gamble on informationally complete measurements potentially performed at different times.
Those QBists who find this approach promising are pursuing a complete reconstruction of quantum theory featuring the urgleichung as the key postulate. (The urgleichung has also been discussed in the context of category theory.) Comparisons between this approach and others not associated with QBism (or indeed with any particular interpretation) can be found in a book chapter by Fuchs and Stacey and an article by Appleby et al. As of 2017, alternative QBist reconstruction efforts are in the beginning stages.
See also
Bayes factor
Bayesian inference
Credible intervals
Degree of belief
Doxastic logic
Philosophy of science
Quantum logic
Quantum probability
Statistical inference
References
External links
Exotic Probability Theories and Quantum Mechanics: References
Notes on a Paulian Idea: Foundational, Historical, Anecdotal and Forward-Looking Thoughts on the Quantum – Cerro Grande Fire Series, Volume 1
My Struggles with the Block Universe – Cerro Grande Fire Series, Volume 2
Why the multiverse is all about you – The Philosopher's Zone interview with Fuchs
A Private View of Quantum Reality – Quanta Magazine interview with Fuchs
Rüdiger Schack on QBism in The Conversation
Participatory Realism – 2017 conference at the Stellenbosch Institute for Advanced Study
Being Bayesian in a Quantum World – 2005 conference at the University of Konstanz
Interpretations of quantum mechanics
Bayesian statistics | Quantum Bayesianism | [
"Physics"
] | 5,198 | [
"Interpretations of quantum mechanics",
"Quantum mechanics"
] |
32,924,598 | https://en.wikipedia.org/wiki/Continuous%20testing | Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate. Continuous testing was originally proposed as a way of reducing waiting time for feedback to developers by introducing development environment-triggered tests as well as more traditional developer/tester-triggered tests.
For Continuous testing, the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.
Adoption drivers
In the 2010s, software has become a key business differentiator. As a result, organizations now expect software development teams to deliver more, and more innovative, software within shorter delivery cycles. To meet these demands, teams have turned to lean approaches, such as Agile, DevOps, and Continuous Delivery, to try to speed up the systems development life cycle (SDLC). After accelerating other aspects of the delivery pipeline, teams typically find that their testing process is preventing them from achieving the expected benefits of their SDLC acceleration initiative. Testing and the overall quality process remain problematic for several key reasons.
Traditional testing processes are too slow. Iteration length has changed from months to weeks or days with the rising popularity of Agile, DevOps, and Continuous Delivery. Traditional methods of testing, which rely heavily on manual testing and automated GUI tests that require frequent updating, cannot keep pace. At this point, organizations tend to recognize the need to extend their test automation efforts.
Even after more automation is added to the existing test process, managers still lack adequate insight into the level of risk associated with an application at any given point in time. Understanding these risks is critical for making the rapid go/no go decisions involved in Continuous Delivery processes. If tests are developed without an understanding of what the business considers to be an acceptable level of risk, it is possible to have a release candidate that passes all the available tests, but which the business leaders would not consider to be ready for release. For the test results to accurately indicate whether each release candidate meets business expectations, the approach to designing tests must be based on the business's tolerance for risks related to security, performance, reliability, and compliance. In addition to having unit tests that check code at a very granular bottom-up level, there is a need for a broader suite of tests to provide a top-down assessment of the release candidate's business risk.
Even if testing is automated and tests effectively measure the level of business risk, teams without a coordinated end-to-end quality process tend to have trouble satisfying the business expectations within today's compressed delivery cycles. Trying to remove risks at the end of each iteration has been shown to be significantly slower and more resource-intensive than building quality into the product through defect prevention strategies such as development testing.
Organizations adopt Continuous Testing because they recognize that these problems are preventing them from delivering quality software at the desired speed. They recognize the growing importance of software as well as the rising cost of software failure, and they are no longer willing to make a tradeoff between time, scope, and quality.
Goals and benefits
The goal of continuous testing is to provide fast and continuous feedback regarding the level of business risk in the latest build or release candidate. This information can then be used to determine if the software is ready to progress through the delivery pipeline at any given time.
Since testing begins early and is executed continuously, application risks are exposed soon after they are introduced. Development teams can then prevent those problems from progressing to the next stage of the SDLC. This reduces the time and effort that need to be spent finding and fixing defects. As a result, it is possible to increase the speed and frequency at which quality software (software that meets expectations for an acceptable level of risk) is delivered, as well as decrease technical debt.
Moreover, when software quality efforts and testing are aligned with business expectations, test execution produces a prioritized list of actionable tasks (rather than a potentially overwhelming number of findings that require manual review). This helps teams focus their efforts on the quality tasks that will have the greatest impact, based on their organization's goals and priorities.
Additionally, when teams are continuously executing a broad set of continuous tests throughout the SDLC, they amass metrics regarding the quality of the process as well as the state of the software. The resulting metrics can be used to re-examine and optimize the process itself, including the effectiveness of those tests. This information can be used to establish a feedback loop that helps teams incrementally improve the process. Frequent measurement, tight feedback loops, and continuous improvement are key principles of DevOps.
Scope of testing
Continuous testing includes the validation of both functional requirements and non-functional requirements.
For testing functional requirements (functional testing), Continuous Testing often involves unit tests, API testing, integration testing, and system testing. For testing non-functional requirements (non-functional testing - to determine if the application meets expectations around performance, security, compliance, etc.), it involves practices such as static code analysis, security testing, performance testing, etc. Tests should be designed to provide the earliest possible detection (or prevention) of the risks that are most critical for the business or organization that is releasing the software.
Teams often find that in order to ensure that the test suite can run continuously and effectively assess the level of risk, it's necessary to shift focus from GUI testing to API testing because 1) APIs (the "transaction layer") are considered the most stable interface to the system under test, and 2) GUI tests require considerable rework to keep pace with the frequent changes typical of accelerated release processes; tests at the API layer are less brittle and easier to maintain.
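A minimal pytest sketch of such an API-layer check, assuming a hypothetical staging endpoint and a custom business_risk marker (neither is a standard, real artifact); the point is that the test exercises the transaction layer directly rather than driving a GUI:

import pytest
import requests

BASE_URL = "https://staging.example.com/api"    # hypothetical service under test

@pytest.mark.business_risk("order processing")  # custom marker, registered in pytest.ini
def test_create_order_is_confirmed():
    """Exercises the transaction layer directly instead of driving a GUI."""
    response = requests.post(f"{BASE_URL}/orders",
                             json={"sku": "ABC-123", "quantity": 2},
                             timeout=5)
    assert response.status_code == 201
    assert response.json()["status"] == "confirmed"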
Tests are executed during or alongside continuous integration—at least daily. For teams practicing continuous delivery, tests are commonly executed many times a day, every time the application is checked into the version control system.
Ideally, all tests are executed across all non-production test environments. To ensure accuracy and consistency, testing should be performed in the most complete, production-like environment possible. Strategies for increasing test environment stability include virtualization software (for dependencies your organization can control and image) service virtualization (for dependencies beyond your scope of control or unsuitable for imaging), and test data management.
Common practices
Testing should be a collaboration of Development, QA, and Operations—aligned with the priorities of the line of business—within a coordinated, end-to-end quality process.
Tests should be logically-componentized, incremental, and repeatable; results must be deterministic and meaningful.
All tests need to be run at some point in the build pipeline, but not all tests need to be run all the time, since some tests are more resource-expensive (integration tests) than others (unit tests); a minimal sketch of this tiering appears after this list.
Eliminate test data and environment constraints so that tests can run constantly and consistently in production-like environments.
To minimize false positives, minimize test maintenance, and more effectively validate use cases across modern systems with multitier architectures, teams should emphasize API testing over GUI testing.
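A minimal pytest sketch of the test-tiering idea mentioned above (the markers, the toy function and the scheduling policy in the comments are illustrative assumptions, not prescriptions):

import pytest

def apply_discount(price, rate):
    """Toy domain function standing in for real application code."""
    return price * (1 - rate)

@pytest.mark.unit
def test_discount_calculation():
    # Cheap, in-memory check: safe to run on every commit.
    assert apply_discount(100.0, 0.1) == 90.0

@pytest.mark.integration
def test_expensive_end_to_end_placeholder():
    # Resource-expensive checks carry their own marker so the pipeline can
    # schedule them separately, e.g. `pytest -m "not integration"` on each
    # commit and `pytest -m integration` nightly.
    pass

Selecting tests by marker keeps the fast feedback loop on every commit while the whole suite still runs regularly.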
Challenges/roadblocks
Since modern applications are highly distributed, test suites that exercise them typically require access to dependencies that are not readily available for testing (e.g., third-party services, mainframes that are available for testing only in limited capacity or at inconvenient times, etc.) Moreover, with the growing adoption of Agile and parallel development processes, it is common for end-to-end functional tests to require access to dependencies that are still evolving or not yet implemented. This problem can be addressed by using service virtualization to simulate the application under test's (AUT's) interactions with the missing or unavailable dependencies. It can also be used to ensure that data, performance, and behavior is consistent across the various test runs.
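As a lightweight stand-in for full service-virtualization tooling, a test can simulate an unavailable third-party dependency in process; the sketch below uses Python's unittest.mock, with a hypothetical rate-service URL and response shape:

from unittest import mock

import requests

def shipping_quote(order_total):
    """Application code that depends on a third-party rate service."""
    response = requests.get("https://rates.example.com/quote",  # hypothetical dependency
                            params={"total": order_total}, timeout=5)
    return response.json()["price"]

def test_shipping_quote_against_simulated_dependency():
    # The simulated service always answers consistently, so the test can run
    # constantly even when the real dependency is unavailable or still evolving.
    fake_response = mock.Mock(status_code=200)
    fake_response.json.return_value = {"price": 12.50}
    with mock.patch("requests.get", return_value=fake_response):
        assert shipping_quote(100.0) == 12.50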
One reason teams avoid continuous testing is that their infrastructure is not scalable enough to continuously execute the test suite. This problem can be addressed by focusing the tests on the business's priorities, splitting the test base, and parallelizing the testing with application release automation tools.
Continuous testing vs automated testing
The goal of Continuous Testing is to apply "extreme automation" to stable, production-like test environments. Automation is essential for Continuous Testing. But automated testing is not the same as Continuous Testing.
Automated testing involves automated, CI-driven execution of whatever set of tests the team has accumulated. Moving from automated testing to continuous testing involves executing a set of tests that is specifically designed to assess the business risks associated with a release candidate, and to regularly execute these tests in the context of stable, production-like test environments. Some differences between automated and continuous testing:
With automated testing, a test failure may indicate anything from a critical issue to a violation of a trivial naming standard. With continuous testing, a test failure always indicates a critical business risk.
With continuous testing, a test failure is addressed via a clear workflow for prioritizing defects vs. business risks and addressing the most critical ones first.
With continuous testing, each time a risk is identified, there is a process for exposing all similar defects that might already have been introduced, as well as preventing this same problem from recurring in the future.
Predecessors
Since the 1990s, Continuous test-driven development has been used to provide programmers rapid feedback on whether the code they added a) functioned properly and b) unintentionally changed or broke existing functionality. This testing, which was a key component of Extreme Programming, involves automatically executing unit tests (and sometimes acceptance tests or smoke tests) as part of the automated build, often many times a day. These tests are written prior to implementation; passing tests indicate that implementation is successful.
See also
Continuous delivery
Continuous integration
DevOps
Release management
Service virtualization
Software testing
Test automation
Further reading
References
Software testing | Continuous testing | [
"Engineering"
] | 1,994 | [
"Software engineering",
"Software testing"
] |
32,924,961 | https://en.wikipedia.org/wiki/Neotropical%20fish | The freshwater fish of tropical South and Central America, represent one of the most diverse and extreme aquatic ecosystems on Earth, with more than 5,600 species, representing about 10% all living vertebrate species. The exceptional diversity of species, adaptations, and life histories observed in the Neotropical ichthyofauna has been the focus of numerous books and scientific papers, especially the wonderfully complex aquatic ecosystems of the Amazon Basin and adjacent river basins (e.g., Goulding and Smith, 1996; Araujo-Lima and Goulding, 1997; Barthem and Goulding, 1997; Barthem, 2003; Goulding et al., 2003). Many of the advances in Neotropical ichthyology have been summarized in three edited volumes: Malabarba et al. (1998); Reis et al. (2003); Albert and Reis (2011).
Habitat
The Neotropical ichthyofauna extends throughout the continental waters of Central and South America, from south of the Mesa Central in southern Mexico (~ 16° N) to the La Plata estuary in northern Argentina (~ 34° S). The fishes of this region are largely restricted to the humid tropical portions of the Neotropical realm as circumscribed by Sclater (1858) and Wallace (1876), being excluded from the arid Pacific slopes of Peru and northern Chile, and the boreal regions of the Southern Cone in Chile and Argentina (Arratia, 1997; Dyer, 2000). The vast Neotropical ichthyofaunal region extends over more than 17 million square km of moist tropical lowland forests, seasonally flooded wetlands and savannahs, and also several arid peripheral regions (e.g., Northwest Venezuela; Northeast Brazil; Chaco of Paraguay, Argentina and Bolivia). At the core of this system lies Amazonia, the greatest interconnected freshwater fluvial system on the planet. This system includes the drainages of the Amazon Basin itself, and of two large adjacent regions, the Orinoco Basin and the Guiana Shield. The Amazon River is by any measure the largest in the world, discharging about 16% of the world's flowing freshwater into the Atlantic (Goulding et al., 2003b).
References
External links
Information About All Fishes
Fish by habitat
Aquatic ecology
Ichthyology | Neotropical fish | [
"Biology"
] | 473 | [
"Aquatic ecology",
"Ecosystems"
] |
32,927,433 | https://en.wikipedia.org/wiki/Neutron%20stimulated%20emission%20computed%20tomography | Neutron stimulated emission computed tomography (NSECT) uses induced gamma emission through neutron inelastic scattering to generate images of the spatial distribution of elements in a sample.
Clinical Applications
NSECT has been shown to be effective in detecting liver iron overload disorders and breast cancer. Due to its sensitivity in measuring elemental concentrations, NSECT is currently being developed for cancer staging, among other medical applications.
NSECT mechanism
A given atomic nucleus, defined by its proton and neutron numbers, is a quantized system with a set of characteristic higher energy levels that it can occupy as a nuclear isomer. When the nucleus in its ground state is struck by a fast neutron with kinetic energy greater than that of its first excited state, it can undergo an isomeric transition to one of its excited states by receiving the necessary energy from the fast neutron through inelastic scatter. Promptly (on the order of picoseconds, on average) after excitation, the excited nuclear isomer de-excites (either directly or through a series of cascades) to the ground state, emitting a characteristic gamma ray for each decay transition with energy equal to the difference in the energy levels involved (see induced gamma emission). After irradiating the sample with neutrons, the measured number of emitted gamma rays of energy characteristic to the nucleus of interest is directly proportional to the number of such nuclei along the incident neutron beam trajectory. After repeating the measurement for neutron beam incidence at positions around the sample, an image of the distribution of the nuclei in the sample can be reconstructed as done in tomography.
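A toy forward model of that proportionality, assuming an idealized, noise- and attenuation-free measurement (NumPy/SciPy; the array sizes and the element map are arbitrary): the characteristic-gamma count for each beam position is treated as the line integral of the element's concentration along the beam.

import numpy as np
from scipy import ndimage

# Toy 2-D map of one element's concentration in a sample slice.
concentration = np.zeros((64, 64))
concentration[20:40, 25:45] = 1.0          # region of elevated concentration

def sinogram(image, angles_deg):
    """Ideal counts: one projection (sum along the beam direction) per angle."""
    rows = []
    for theta in angles_deg:
        rotated = ndimage.rotate(image, theta, reshape=False, order=1)
        rows.append(rotated.sum(axis=0))   # line integrals for parallel beams
    return np.array(rows)

angles = np.linspace(0.0, 180.0, 60, endpoint=False)
counts = sinogram(concentration, angles)   # shape (60, 64), noise-free "data"
# A real system would reconstruct the concentration map from such projections
# with standard tomographic methods (e.g. filtered back projection), after
# correcting for neutron flux, attenuation and detector efficiency.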
References
Further reading
NSECT at Ravin Advanced Imaging Laboratories, Duke University
Floyd CE, Bender JE, Sharma AC, Kapadia A, Xia J, and Harrawood B, Tourassi GD, Lo JY, Crowell A, and Howell C. "Introduction to neutron stimulated emission computed tomography," Physics in Medicine and Biology. 51:3375. 2006.
Sharma AC, Harrawood BP, Bender JE, Tourassi GD, and Kapadia AJ. "Neutron stimulated emission computed tomography: a Monte Carlo simulation approach,"Physics in Medicine and Biology. 52:6117. 2007.
Floyd CE, Kapadia, AJ, et al. "Neutron-stimulated emission computed tomography of a multi-element phantom," Physics in Medicine and Biology. 53:2313. 2008.
Medical imaging
Neutron scattering | Neutron stimulated emission computed tomography | [
"Chemistry"
] | 497 | [
"Scattering",
"Neutron scattering"
] |
32,928,344 | https://en.wikipedia.org/wiki/BBX%20%28gene%29 | HMG box transcription factor BBX also known as bobby sox homolog or HMG box-containing protein 2 is a protein that in humans is encoded by the BBX gene.
References
External links
Further reading
Transcription factors
Genes mutated in mice | BBX (gene) | [
"Chemistry",
"Biology"
] | 50 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
32,933,846 | https://en.wikipedia.org/wiki/Bloch%20group | In mathematics, the Bloch group is a cohomology group of the Bloch–Suslin complex, named after Spencer Bloch and Andrei Suslin. It is closely related to polylogarithm, hyperbolic geometry and algebraic K-theory.
Bloch–Wigner function
The dilogarithm function is the function defined by the power series
Li2(z) = ∑k≥1 zk/k², for |z| ≤ 1.
It can be extended by analytic continuation via the integral representation
Li2(z) = −∫0z log(1 − t)/t dt,
where the path of integration avoids the cut from 1 to +∞.
The Bloch–Wigner function D2 is related to the dilogarithm function by
D2(z) = Im(Li2(z)) + arg(1 − z) log|z|, if z ∈ ℂ ∖ [1, ∞).
This function enjoys several remarkable properties, e.g.
D2 is real analytic on ℂ ∖ {0, 1},
D2(z) = −D2(1/z) = −D2(1 − z),
D2(x) − D2(y) + D2(y/x) − D2((1 − 1/x)/(1 − 1/y)) + D2((1 − x)/(1 − y)) = 0.
The last equation is a variant of Abel's functional equation for the dilogarithm.
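A small numerical illustration of these identities, assuming Python with mpmath for the dilogarithm (the helper name and the sample points are ours):

import cmath
import math
import mpmath

def D2(z):
    """Bloch-Wigner function D2(z) = Im(Li2(z)) + arg(1 - z) * log|z|."""
    z = complex(z)
    li2 = complex(mpmath.polylog(2, z))       # dilogarithm Li_2(z)
    return li2.imag + cmath.phase(1 - z) * math.log(abs(z))

z = 0.3 + 0.7j
print(abs(D2(z) + D2(1 / z)))      # ~0: D2(z) = -D2(1/z)
print(abs(D2(z) + D2(1 - z)))      # ~0: D2(z) = -D2(1 - z)

# Five-term (Abel) relation, in the form used for the generators below:
x, y = 0.3 + 0.7j, -0.4 + 0.2j
five_term = (D2(x) - D2(y) + D2(y / x)
             - D2((1 - 1 / x) / (1 - 1 / y)) + D2((1 - x) / (1 - y)))
print(abs(five_term))              # ~0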
Definition
Let K be a field and define Z(K) as the free abelian group generated by symbols [x], x ∈ K ∖ {0, 1}. Abel's functional equation implies that D2 vanishes on the subgroup D(K) of Z(K) generated by the elements
[x] − [y] + [y/x] − [(1 − 1/x)/(1 − 1/y)] + [(1 − x)/(1 − y)], for x, y ∈ K ∖ {0, 1}, x ≠ y.
Denote by A(K) the quotient of Z(K) by the subgroup D(K). The Bloch–Suslin complex is defined as the following cochain complex, concentrated in degrees one and two:
B•(K): A(K) →d ∧²(K*), where d[x] = x ∧ (1 − x);
then the Bloch group was defined by Bloch as the first cohomology of this complex,
B(K) = H¹(B•(K)) = ker d.
The Bloch–Suslin complex can be extended to be an exact sequence
0 → B(K) → A(K) →d ∧²(K*) → K2(K) → 0.
This assertion is due to the Matsumoto theorem on K2 for fields.
Relations between K3 and the Bloch group
If c denotes the element and the field is infinite, Suslin proved the element c does not depend on the choice of x, and
where GM(K) is the subgroup of GL(K) consisting of monomial matrices, and BGM(K)+ is Quillen's plus-construction. Moreover, let K3M denote Milnor's K-group; then there exists an exact sequence
0 → Tor(K*, K*)~ → K3(K)ind → B(K) → 0
where K3(K)ind = coker(K3M(K) → K3(K)) and Tor(K*, K*)~ is the unique nontrivial extension of Tor(K*, K*) by means of Z/2.
Relations to hyperbolic geometry in three-dimensions
The Bloch–Wigner function D2(z), which is defined on ℂ ∖ {0, 1}, has the following meaning: Let H3 be 3-dimensional hyperbolic space and ℂ × (0, ∞) its half-space model. One can regard elements of ℂ ∪ {∞} as points at infinity on H3. A tetrahedron, all of whose vertices are at infinity, is called an ideal tetrahedron. We denote such a tetrahedron by (p0, p1, p2, p3) and its (signed) volume by ⟨p0, p1, p2, p3⟩, where p0, ..., p3 ∈ ℂ ∪ {∞} are the vertices. Then, under the appropriate metric and up to constants, we can obtain its cross-ratio:
z = (p0 − p2)(p1 − p3) / ((p0 − p3)(p1 − p2)).
In particular, ⟨p0, p1, p2, p3⟩ = D2(z). Due to the five-term relation of D2, the signed volumes of the five ideal tetrahedra forming the boundary of a non-degenerate ideal 4-simplex (p0, p1, p2, p3, p4) sum to 0.
In addition, given a hyperbolic 3-manifold X, one can decompose it as a union
X = Δ(z1) ∪ ... ∪ Δ(zn),
where the Δ(zj) are ideal tetrahedra, all of whose vertices are at infinity on ∂H3. Here the zj are certain complex numbers with Im zj > 0. Each ideal tetrahedron is isometric to one with its vertices at 0, 1, z, ∞ for some z with Im z > 0; here z is the cross-ratio of the vertices of the tetrahedron. Thus the volume of the tetrahedron depends on only one single parameter z, and one has vol(Δ(z)) = D2(z), where D2 is the Bloch–Wigner dilogarithm. For a general hyperbolic 3-manifold one obtains
vol(X) = D2(z1) + ... + D2(zn)
by gluing them. The Mostow rigidity theorem guarantees only a single value of the volume, with Im zj > 0 for all j.
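For instance, the figure-eight knot complement is a standard example: it glues together two regular ideal tetrahedra, each with parameter z = exp(iπ/3). A quick numerical check of its volume, assuming Python with mpmath (the helper mirrors the definition given earlier):

import cmath
import math
import mpmath

def D2(z):
    """Bloch-Wigner dilogarithm D2(z) = Im(Li2(z)) + arg(1 - z) * log|z|."""
    z = complex(z)
    return complex(mpmath.polylog(2, z)).imag + cmath.phase(1 - z) * math.log(abs(z))

# Two regular ideal tetrahedra, each with cross-ratio parameter exp(i*pi/3):
z = cmath.exp(1j * math.pi / 3)
print(2 * D2(z))   # ~2.02988..., the hyperbolic volume of the figure-eight knot complement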
Generalizations
By substituting the dilogarithm with the trilogarithm or even higher polylogarithms, the notion of the Bloch group was extended by Goncharov and Zagier. It is widely conjectured that those generalized Bloch groups Bn should be related to algebraic K-theory or motivic cohomology. There are also generalizations of the Bloch group in other directions, for example, the extended Bloch group defined by Neumann.
References
(this 1826 manuscript was only published posthumously.)
Algebraic topology | Bloch group | [
"Mathematics"
] | 826 | [
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
32,933,985 | https://en.wikipedia.org/wiki/Cobot | A cobot, or collaborative robot, also known as a companion robot, is a robot intended for direct human-robot interaction within a shared space, or where humans and robots are in close proximity. Cobot applications contrast with traditional industrial robot applications in which robots are isolated from human contact or the humans are protected by robotic tech vests. Cobot safety may rely on lightweight construction materials, rounded edges, and inherent limitation of speed and force, or on sensors and software that ensure safe behavior.
Uses
The International Federation of Robotics (IFR), a global industry association of robot manufacturers and national robot associations, recognizes two main groups of robots: industrial robots used in automation and service robots for domestic and professional use. Service robots could be considered to be cobots as they are intended to work alongside humans. Industrial robots have traditionally worked separately from humans behind fences or other protective barriers, but cobots remove that separation.
Cobots can have many uses, from information robots in public spaces (an example of service robots), logistics robots that transport materials within a building, to industrial robots that help automate unergonomic tasks such as helping people moving heavy parts, or machine feeding or assembly operations.
The IFR defines four levels of collaboration between industrial robots and human workers:
Coexistence: Human and robot work alongside each other without a fence, but with no shared workspace.
Sequential Collaboration: Human and robot are active in shared workspace but their motions are sequential; they do not work on a part at the same time.
Cooperation: Robot and human work on the same part at the same time, with both in motion.
Responsive Collaboration: The robot responds in real-time to movement of the human worker.
In most industrial applications of cobots today, the cobot and human worker share the same space but complete tasks independently or sequentially (Co-existence or Sequential Collaboration.) Co-operation or Responsive Collaboration are presently less common.
History
Cobots were invented in 1996 by J. Edward Colgate and Michael Peshkin, professors at Northwestern University. Their United States patent entitled, "Cobots" describes "an apparatus and method for direct physical interaction between a person and a general purpose manipulator controlled by a computer."
The invention resulted from a 1994 General Motors initiative led by Prasad Akella of the GM Robotics Center and a 1995 General Motors Foundation research grant intended to find a way to make robots or robot-like equipment safe enough to team with people.
The first cobots assured human safety by having no internal source of motive power. Instead, motive power was provided by the human worker.
The cobot's function was to allow computer control of motion, by redirecting or steering a payload, in a cooperative way with the human worker.
Later, cobots provided limited amounts of motive power as well. General Motors and an industry working group used the term Intelligent Assist Device (IAD) as an alternative to cobot, which was viewed as too closely associated with the company Cobotics. At the time, the market demand for Intelligent Assist Devices and the safety standard "T15.1 Intelligent Assist Devices - Personnel Safety Requirements" was to improve industrial material handling and automotive assembly operations.
Cobot companies
Cobotics, a company founded in 1997 by Colgate and Peshkin, produced several cobot models used in automobile final assembly. These cobots were of IFR type Responsive Collaboration using what is now called "Hand Guided Control". The company was acquired in 2003 by Stanley Assembly Technologies.
KUKA released its first cobot, LBR 3, in 2004. This computer-controlled lightweight robot was the result of a long collaboration with the German Aerospace Center institute. KUKA further refined the technology, releasing the KUKA LBR 4 in 2008 and the KUKA LBR iiwa in 2013.
Universal Robots released its first cobot, the UR5, in 2008. This cobot could safely operate alongside employees, eliminating the need for safety caging or fencing. The new robot helped launch the era of flexible, user-friendly and cost-efficient collaborative robots. In 2012, Universal Robots released the UR10 cobot, and in 2015 they released the smaller, lower payload UR3.
Rethink Robotics released an industrial cobot, Baxter, in 2012 and smaller, faster collaborative robot Sawyer in 2015, designed for high precision tasks.
From 2009 to 2013, four CoBot robots, which were designed, built, and programmed by the CORAL research group at Carnegie Mellon University, logged more than 130 kilometers of autonomous in-building errand travel.
FANUC released its first collaborative robot in 2015 - the FANUC CR-35iA (renamed the CR-35iB in 2022) with a heavy 35 kg payload. Since that time FANUC has released a smaller line of collaborative robots including the FANUC CR-4iA, CR-7iA and the CR-7/L long arm version, and also a full line of standard cobots including the CRX-5iA, CRX-10iA, CRX-10iA/L, CRX-20iA, CRX-20iA/L, CRX-25iA, and CRX-30iA. FANUC also released the world's first explosion-proof rated cobot, used in painting applications and other hazardous environments, such as loading munitions or working in areas that require explosion-proof rated equipment.
ABB released YuMi in 2015, the first collaborative dual arm robot. In February 2021 they released GoFa, which had a payload of 5 kg.
Dobot Robotics released its CRA series, a new generation of collaborative robots, in 2023. Dobot's SafeSkin technology, launched in 2019, enables safe human-robot collaboration in real-world applications.
As of 2019, Universal Robots was the market leader followed by Techman Robot Inc. Techman Robot Inc. is a cobot manufacturer founded by Quanta in 2016. It is based in Taoyuan's Hwa Ya Technology Park.
In 2020, the market for industrial cobots had an annual growth rate of 50 percent.
In 2022, Collaborative Robotics (co.bot) was founded by Brad Porter, former VP and Distinguished Engineer, Robotics at Amazon.
In 2023, Collaborative Robotics raised a $30M Series A to begin fielding and manufacturing their novel cobot.
In 2023, Gautam Siwach and Cheryl Li showcased applications of natural language processing for improving communication between humans and collaborative robots (UR3e).
Major Cobot Manufacturers:
FANUC
ABB
Doosan Robotics
Universal Robots
Standards and guidelines
RIA BSR/T15.1, a draft safety standard for Intelligent Assist Devices, was published by the Robotic Industries Association, an industry working group in March 2002.
The robot safety standard ANSI/RIA R15.06 was first published in 1986, after four years of development. It was updated with newer editions in 1992 and 1999. In 2011, ANSI/RIA R15.06 was updated again and is now a national adoption of the combined ISO 10218-1 and ISO 10218-2 safety standards. The ISO standards are based on ANSI/RIA R15.06-1999. A companion document was developed by ISO TC299 WG3 and published as an ISO Technical Specification, ISO/TS 15066:2016. This Technical Specification covers collaborative robotics: requirements for robots and the integrated applications. ISO 10218-1 contains the requirements for robots, including those with optional capabilities to enable collaborative applications. ISO 10218-2:2011 and ISO/TS 15066 contain the safety requirements for both collaborative and non-collaborative robot applications. Technically, the "collaborative" robot application includes the robot, the end-effector (mounted to the robot arm or manipulator to perform tasks, which can include manipulating or handling objects) and the workpiece (if an object is handled).
The safety of a collaborative robot application is the central issue, since there is no official term "cobot" within robot standardization. "Cobot" is considered to be a sales or marketing term, because whether a robot is "collaborative" is determined by the application. For example, a robot wielding a cutting tool or a sharp workpiece would be hazardous to people. However, the same robot sorting foam chips would likely be safe. Consequently, the risk assessment accomplished by the robot integrator addresses the intended application (use). ISO 10218 Parts 1 and 2 rely on risk assessment (according to ISO 12100). In Europe, the Machinery Directive is applicable; however, the robot by itself is a partial machine. The robot system (robot with end-effector) and the robot application are considered complete machines.
See also
Air-Cobot, a collaborative mobile robot to inspect aircraft
Automated guided vehicle
References
External links
Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), Safe co-operation between human beings and robots
CORAL project's CoBot web page at the Carnegie Mellon School of Computer Science
Cobot screwdrivers | Cobot | [
"Technology",
"Engineering"
] | 1,836 | [
"Social robots",
"Industrial robots",
"Computing and society"
] |
32,934,654 | https://en.wikipedia.org/wiki/Liquid%20Robotics | Liquid Robotics is an American marine robotics corporation that designs, manufactures and sells the Wave Glider, a wave and solar powered unmanned surface vehicle (USV). The Wave Glider harvests energy from ocean waves for propulsion. With this energy source, Wave Gliders can spend many months at a time at sea, collecting and transmitting ocean data.
The vehicles host sensor payloads such as: atmospheric and oceanographic sensors applicable to ocean and climate science, seismic sensors for earthquake and tsunami detection, and video cameras and acoustic sensors for security and marine environment protection purposes.
The company was founded in 2007 in Sunnyvale, California. In December 2016, the company was acquired by The Boeing Company and is a wholly owned subsidiary, part of Boeing’s Defense, Space and Security organization. In February 2022 the company relocated to Herndon, Virginia.
History
The Wave Glider was originally invented to record the singing of humpback whales and transmit the songs back to shore. In 2003, Joe Rizzi, Chairman, Jupiter Research Foundation, set out with the goal to design a system that could hold its position at sea—even if it wasn’t anchored in place—and operate 24/7 without harming the environment or the whales.
After a few years of experimenting, he enlisted the Hine family to help develop an unmoored, station-keeping data buoy. Roger Hine, a mechanical engineer and robotics expert from Stanford University, spent a year on the project experimenting with different designs and energy sources. In 2005, he invented the Wave Glider and in January 2007, Roger Hine and Joe Rizzi co-founded Liquid Robotics.
In January 2009, endurance testing began when a Wave Glider completed a nine-day circumnavigation of Hawaii's Big Island. Later that year a pair of Wave Gliders travelled from Hawai’i to San Diego, an 82-day trip that covered more than 2,500 miles. Since then Wave Gliders have travelled over 1.4 million nautical miles over the course of over 32,000 vehicle-days at sea.
In September 2014, Liquid Robotics entered a partnership with The Boeing Company for the purpose of advancing maritime surveillance. Over the next three years, the two companies worked on integrating unmanned systems to provide a seafloor-to-space communications capability for anti-submarine warfare. During the Unmanned Warrior 2016 exercise hosted by the British Royal Navy, Boeing and Liquid Robotics demonstrated for the first time a network of persistent USVs that detected, reported and tracked a live diesel submarine. On December 6, 2016, Boeing acquired Liquid Robotics.
Wave Glider
Architecture
The Wave Glider is composed of two parts: the "float", roughly the size and shape of a large surfboard, travels on the surface of the ocean; the "sub" or wing rack hangs below on an umbilical tether 13–26 feet (4–8 meters) long and is equipped with a rudder for steering and a thruster for additional thrust during extreme conditions (doldrums or high currents). The Wave Glider leverages the difference in motion between the ocean surface and the calmer water below to create forward propulsion. No fuel is required for operation which enables it to stay at sea for long durations.
Next Generation Wave Glider
On September 7, 2017, Liquid Robotics announced the Next Generation Wave Glider with advancements to the platform's operational range and performance for missions in high sea states (sea state 6 and greater) and high latitudes. Changes include advancements for expanded sensor payloads and increased energy and storage capacity required for long-duration maritime surveillance, environmental monitoring and observation missions.
Solar panels recharge batteries which supply the power for the onboard sensor payloads, communications, computing, and enables a thruster propulsion system that provides additional navigational thrust for challenging ocean conditions (doldrums through high seas).
The vehicle can be programmed for autonomous operation, or it can be piloted remotely. Communication is provided via satellite, BGAN, cellular or Wi-Fi links for piloting and data transmission.
Software
The Wave Glider software is built on open standards and composed of two parts:
Regulus, the on-board operating environment built on Linux and Java and used for on-board command and control of all Wave Glider functions including sensors.
WGMS is a web-based console for mission management that supports mission planning, piloting and data management.
Wave Glider markets and missions
Wave Gliders are used for defense, maritime surveillance, commercial, oil and gas, and science and research applications. Examples include:
Commercial/Oil and Gas – atmospheric, seismic, and environmental monitoring
Defense - Anti-submarine warfare and Intelligence, Surveillance and Reconnaissance
Maritime Surveillance – surface vessel detection for coastal and border security
Scientific research – weather monitoring, climate change, deep-sea seismic detection, ocean acidification, environmental monitoring, bio-geophysical research and fish/ecosystems monitoring
Since 2007, Wave Gliders have been deployed in many areas of the global ocean, from the Arctic to the Southern Ocean. They've been used to track great white sharks by Dr. Barbara Block of Hopkins Marine Station, to patrol marine protected areas (MPAs) for the United Kingdom's Foreign & Commonwealth Office against illegal fishing, and to assess the health of the Great Barrier Reef and its ecosystems. Additionally, they've collected and transmitted data through extreme storms and detected a live diesel submarine during the Unmanned Warrior exercise conducted in October 2016.
Guinness World Record
In 2013 Liquid Robotics was awarded the Guinness World Record for the "longest journey by an autonomous, unmanned surface vehicle on the planet". The Wave Glider, named Benjamin Franklin, travelled farther than any other unmanned autonomous surface vehicle – over 7,939 nautical miles (14,703 km) on an autonomous journey of just over one year. The Wave Glider's route traveled across the Pacific Ocean from San Francisco, CA to Bundaberg, Queensland, Australia, arriving on 14 February 2013.
The Digital Ocean
The Digital Ocean is an initiative originated by Liquid Robotics to collaboratively establish the data collection and communications infrastructure needed to support the Internet of Things for the ocean. The vision for the Digital Ocean is a networked ocean connecting billions of sensors, manned and unmanned systems, and satellites above. The goal of the project is to address issues facing the ocean as noted in the UN’s Sustainable Development Goal #14 and to conserve and sustainably use the oceans, seas and marine resources.
Strategic Advisory Board
Liquid Robotics established the Strategic Advisory Board in September 2011. The distinguished board of advisors includes:
Robert S. Gelbard, Chairman, Washington Global Partners, LLC, former Foreign Service Officer, U.S. Department of State, Ambassador to Indonesia and Bolivia
Walter L. Sharp, General, U.S. Army (Ret.)
Dr. Eric Terrill, Director, Coastal Observing Research and Development Center (CORDC), Scripps Institution of Oceanography, University of California, San Diego (UCSD)
John J. Young Jr., Principal, JY Strategies, LLC, former U.S. Undersecretary of Defense for Acquisition, Technology and Logistics
Sir George Zambellas, Admiral (ret.), former First Sea Lord of the British Royal Navy and Chief of the Naval Staff
References
External links
Liquid Robotics homepage
Jupiter Research Foundation homepage
SBS home page
Oceanographic instrumentation
Companies based in Sunnyvale, California
Unmanned surface vehicles of the United States
Autonomous underwater vehicles
Unmanned surface vehicle manufacturers | Liquid Robotics | [
"Technology",
"Engineering"
] | 1,504 | [
"Oceanographic instrumentation",
"Measuring instruments"
] |
902,820 | https://en.wikipedia.org/wiki/Emissivity | The emissivity of the surface of a material is its effectiveness in emitting energy as thermal radiation. Thermal radiation is electromagnetic radiation that most commonly includes both visible radiation (light) and infrared radiation, which is not visible to human eyes. A portion of the thermal radiation from very hot objects (see photograph) is easily visible to the eye.
The emissivity of a surface depends on its chemical composition and geometrical structure. Quantitatively, it is the ratio of the thermal radiation from a surface to the radiation from an ideal black surface at the same temperature as given by the Stefan–Boltzmann law. (A comparison with Planck's law is used if one is concerned with particular wavelengths of thermal radiation.) The ratio varies from 0 to 1.
The surface of a perfect black body (with an emissivity of 1) emits thermal radiation at the rate of approximately 448 watts per square metre (W/m2) at a room temperature of 25 °C (298 K; 77 °F).
Objects have emissivities less than 1.0, and emit radiation at correspondingly lower rates.
However, wavelength- and subwavelength-scale particles, metamaterials, and other nanostructures may have an emissivity greater than 1.
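As a quick numerical check of the figures above, the following sketch (plain Python; the grey-body emissivity values are illustrative, with 0.02 for polished silver taken from the discussion of absorptance below) evaluates the Stefan–Boltzmann relation M = εσT^4:

# Radiant exitance of a grey body: M = emissivity * sigma * T**4
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(temperature_k, emissivity=1.0):
    """Total thermal power emitted per unit area (W/m^2) by a surface at temperature_k."""
    return emissivity * SIGMA * temperature_k ** 4

T_ROOM = 298.15  # 25 degrees Celsius in kelvin
print(radiant_exitance(T_ROOM))        # ~448 W/m^2 for an ideal black body (emissivity 1)
print(radiant_exitance(T_ROOM, 0.95))  # ~426 W/m^2 for a typical non-metallic surface
print(radiant_exitance(T_ROOM, 0.02))  # ~9 W/m^2 for polished silver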
Practical applications
Emissivities are important in a variety of contexts:
Insulated windows: Warm surfaces are usually cooled directly by air, but they also cool themselves by emitting thermal radiation. This second cooling mechanism is important for simple glass windows, which have emissivities close to the maximum possible value of 1.0. "Low-E windows" with transparent low-emissivity coatings emit less thermal radiation than ordinary windows. In winter, these coatings can halve the rate at which a window loses heat compared to an uncoated glass window.
Solar heat collectors: Similarly, solar heat collectors lose heat by emitting thermal radiation. Advanced solar collectors incorporate selective surfaces that have very low emissivities. These collectors waste very little of the solar energy through emission of thermal radiation.
Thermal shielding: For the protection of structures from high surface temperatures, such as reusable spacecraft or hypersonic aircraft, high-emissivity coatings (HECs), with emissivity values near 0.9, are applied on the surface of insulating ceramics. This facilitates radiative cooling and protection of the underlying structure and is an alternative to ablative coatings, used in single-use reentry capsules.
Passive daytime radiative cooling: Daytime passive radiative coolers use the extremely cold temperature of outer space (~2.7 K) to emit heat and lower ambient temperatures while requiring zero energy input. These surfaces minimize the absorption of solar radiation to lessen heat gain in order to maximize the emission of LWIR thermal radiation. It has been proposed as a solution to global warming.
Planetary temperatures: The planets are solar thermal collectors on a large scale. The temperature of a planet's surface is determined by the balance between the heat absorbed by the planet from sunlight, heat emitted from its core, and thermal radiation emitted back into space. Emissivity of a planet is determined by the nature of its surface and atmosphere.
Temperature measurements: Pyrometers and infrared cameras are instruments used to measure the temperature of an object by using its thermal radiation; no actual contact with the object is needed. The calibration of these instruments involves the emissivity of the surface that's being measured.
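As a small worked example of the calibration point just mentioned, the sketch below (illustrative Python, not any instrument's actual firmware; the 500 K reading and the emissivity of 0.6 are made-up numbers, and reflected ambient radiation is neglected) corrects a total-radiation pyrometer reading that assumed a black-body target:

def true_temperature(apparent_temp_k, emissivity):
    """Correct a total-radiation pyrometer reading that assumed emissivity = 1.

    The instrument infers apparent_temp_k from the received flux sigma*T_apparent**4,
    but the target actually emits emissivity*sigma*T_true**4, so
    T_true = T_apparent / emissivity**0.25 (reflected ambient radiation neglected).
    """
    return apparent_temp_k / emissivity ** 0.25

print(true_temperature(500.0, 0.6))  # ~568 K: the surface is hotter than it appears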
Mathematical definitions
In its most general form, emissivity can be specified for a particular wavelength, direction, and polarization.
However, the most commonly used form of emissivity is the hemispherical total emissivity, which considers emissions as totaled over all wavelengths, directions, and polarizations, given a particular temperature.
Some specific forms of emissivity are detailed below.
Hemispherical emissivity
Hemispherical emissivity of a surface, denoted ε, is defined as
ε = Me / Me°,
where
Me is the radiant exitance of that surface;
Me° is the radiant exitance of a black body at the same temperature as that surface.
Spectral hemispherical emissivity
Spectral hemispherical emissivity in frequency and spectral hemispherical emissivity in wavelength of a surface, denoted εν and ελ, respectively, are defined as
εν = Me,ν / Me,ν°   and   ελ = Me,λ / Me,λ°,
where
Me,ν is the spectral radiant exitance in frequency of that surface;
Me,ν° is the spectral radiant exitance in frequency of a black body at the same temperature as that surface;
Me,λ is the spectral radiant exitance in wavelength of that surface;
Me,λ° is the spectral radiant exitance in wavelength of a black body at the same temperature as that surface.
Directional emissivity
Directional emissivity of a surface, denoted εΩ, is defined as
εΩ = Le,Ω / Le,Ω°,
where
Le,Ω is the radiance of that surface;
Le,Ω° is the radiance of a black body at the same temperature as that surface.
Spectral directional emissivity
Spectral directional emissivity in frequency and spectral directional emissivity in wavelength of a surface, denoted εν,Ω and ελ,Ω, respectively, are defined as
εν,Ω = Le,Ω,ν / Le,Ω,ν°   and   ελ,Ω = Le,Ω,λ / Le,Ω,λ°,
where
Le,Ω,ν is the spectral radiance in frequency of that surface;
Le,Ω,ν° is the spectral radiance in frequency of a black body at the same temperature as that surface;
Le,Ω,λ is the spectral radiance in wavelength of that surface;
Le,Ω,λ° is the spectral radiance in wavelength of a black body at the same temperature as that surface.
Hemispherical emissivity can also be expressed as a weighted average of the directional spectral emissivities as described in textbooks on "radiative heat transfer".
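The weighted average just mentioned can be made concrete with a short numerical sketch (Python with NumPy; the step-function spectral emissivity is a hypothetical selective surface, not measured data): the total hemispherical emissivity at temperature T is the spectral emissivity averaged over wavelength, weighted by the black-body spectral exitance from Planck's law.

import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_exitance(wavelength_m, temperature_k):
    """Black-body spectral radiant exitance (Planck's law), W per m^2 per metre of wavelength."""
    x = np.minimum(H * C / (wavelength_m * KB * temperature_k), 700.0)  # clip to avoid overflow
    return 2 * np.pi * H * C**2 / wavelength_m**5 / np.expm1(x)

def total_emissivity(spectral_emissivity, temperature_k,
                     lambda_min=0.2e-6, lambda_max=200e-6, n=20000):
    """Planck-weighted average of a spectral (wavelength) emissivity at temperature_k."""
    lam = np.linspace(lambda_min, lambda_max, n)
    weight = planck_exitance(lam, temperature_k)
    return float(np.sum(spectral_emissivity(lam) * weight) / np.sum(weight))

# Hypothetical selective surface: emissivity 0.9 below 3 um, 0.1 above (illustrative values)
selective = lambda lam: np.where(lam < 3e-6, 0.9, 0.1)
print(total_emissivity(selective, 300.0))   # close to 0.1: little thermal emission at room temperature
print(total_emissivity(selective, 5800.0))  # close to 0.9: strong response at solar-spectrum wavelengths

By Kirchhoff's law (discussed under Absorptance below), the value weighted at a solar temperature of about 5800 K also approximates the surface's absorptance for solar radiation, which is why such a selective surface can collect solar heat while losing little energy by thermal emission.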
Emissivities of common surfaces
Emissivities ε can be measured using simple devices such as Leslie's cube in conjunction with a thermal radiation detector such as a thermopile or a bolometer. The apparatus compares the thermal radiation from a surface to be tested with the thermal radiation from a nearly ideal, black sample. The detectors are essentially black absorbers with very sensitive thermometers that record the detector's temperature rise when exposed to thermal radiation. For measuring room temperature emissivities, the detectors must absorb thermal radiation completely at infrared wavelengths near 10×10^−6 metre (10 μm). Visible light has a wavelength range of about 0.4–0.7×10^−6 metre from violet to deep red.
Emissivity measurements for many surfaces are compiled in many handbooks and texts. Some of these are listed in the following table.
Notes:
These emissivities are the total hemispherical emissivities from the surfaces.
The values of the emissivities apply to materials that are optically thick. This means that the absorptivity at the wavelengths typical of thermal radiation doesn't depend on the thickness of the material. Very thin materials emit less thermal radiation than thicker materials.
Most emissivities in the chart above were recorded at room temperature.
Closely related properties
Absorptance
There is a fundamental relationship (Gustav Kirchhoff's 1859 law of thermal radiation) that equates the emissivity of a surface with its absorption of incident radiation (the "absorptivity" of a surface). Kirchhoff's law is rigorously applicable with regard to the spectral directional definitions of emissivity and absorptivity. The relationship explains why emissivities cannot exceed 1, since the largest absorptivity—corresponding to complete absorption of all incident light by a truly black object—is also 1. Mirror-like, metallic surfaces that reflect light will thus have low emissivities, since the reflected light isn't absorbed. A polished silver surface has an emissivity of about 0.02 near room temperature. Black soot absorbs thermal radiation very well; it has an emissivity as large as 0.97, and hence soot is a fair approximation to an ideal black body.
With the exception of bare, polished metals, the appearance of a surface to the eye is not a good guide to emissivities near room temperature. For example, white paint absorbs very little visible light. However, at an infrared wavelength of 10×10−6 metre, paint absorbs light very well, and has a high emissivity. Similarly, pure water absorbs very little visible light, but water is nonetheless a strong infrared absorber and has a correspondingly high emissivity.
Emittance
Emittance (or emissive power) is the total amount of thermal energy emitted per unit area per unit time for all possible wavelengths. Emissivity of a body at a given temperature is the ratio of the total emissive power of a body to the total emissive power of a perfectly black body at that temperature. Following Planck's law, the total energy radiated increases with temperature while the peak of the emission spectrum shifts to shorter wavelengths. The energy emitted at shorter wavelengths increases more rapidly with temperature. For example, an ideal blackbody in thermal equilibrium at , will emit 97% of its energy at wavelengths below .
The term emissivity is generally used to describe a simple, homogeneous surface such as silver. Similar terms, emittance and thermal emittance, are used to describe thermal radiation measurements on complex surfaces such as insulation products.
Measurement of emittance
Emittance of a surface can be measured directly or indirectly from the emitted energy from that surface. In the direct radiometric method, the emitted energy from the sample is measured directly using a spectroscope such as Fourier transform infrared spectroscopy (FTIR). In the indirect calorimetric method, the emitted energy from the sample is measured indirectly using a calorimeter. In addition to these two commonly applied methods, there is also an inexpensive emission measurement technique based on the principle of two-color pyrometry.
Emissivities of planet Earth
The emissivity of a planet or other astronomical body is determined by the composition and structure of its outer skin. In this context, the "skin" of a planet generally includes both its semi-transparent atmosphere and its non-gaseous surface. The resulting radiative emissions to space typically function as the primary cooling mechanism for these otherwise isolated bodies. The balance between all other incoming plus internal sources of energy versus the outgoing flow regulates planetary temperatures.
For Earth, equilibrium skin temperatures range near the freezing point of water, 260±50 K (-13±50 °C, 8±90 °F). The most energetic emissions are thus within a band spanning about 4-50 μm as governed by Planck's law. Emissivities for the atmosphere and surface components are often quantified separately, and validated against satellite- and terrestrial-based observations as well as laboratory measurements. These emissivities serve as parametrizations within some simpler meteorological and climatological models.
Surface
Earth's surface emissivities (εs) have been inferred with satellite-based instruments by directly observing surface thermal emissions at nadir through a less obstructed atmospheric window spanning 8-13 μm. Values range about εs=0.65-0.99, with lowest values typically limited to the most barren desert areas. Emissivities of most surface regions are above 0.9 due to the dominant influence of water; including oceans, land vegetation, and snow/ice. Globally averaged estimates for the hemispheric emissivity of Earth's surface are in the vicinity of εs=0.95.
Atmosphere
Water also dominates the planet's atmospheric emissivity and absorptivity in the form of water vapor. Clouds, carbon dioxide, and other components make substantial additional contributions, especially where there are gaps in the water vapor absorption spectrum. Nitrogen (N2) and oxygen (O2) - the primary atmospheric components - interact less significantly with thermal radiation in the infrared band. Direct measurement of Earth's atmospheric emissivities (εa) is more challenging than for land surfaces due in part to the atmosphere's multi-layered and more dynamic structure.
Upper and lower limits have been measured and calculated for εa in accordance with extreme yet realistic local conditions. At the upper limit, dense low cloud structures (consisting of liquid/ice aerosols and saturated water vapor) close the infrared transmission windows, yielding near to black body conditions with εa≈1. At a lower limit, clear sky (cloud-free) conditions promote the largest opening of transmission windows. The more uniform concentration of long-lived trace greenhouse gases in combination with water vapor pressures of 0.25-20 mbar then yield minimum values in the range of εa=0.55-0.8 (with ε=0.35-0.75 for a simulated water-vapor-only atmosphere). Carbon dioxide () and other greenhouse gases contribute about ε=0.2 to εa when atmospheric humidity is low. Researchers have also evaluated the contribution of differing cloud types to atmospheric absorptivity and emissivity.
These days, the detailed processes and complex properties of radiation transport through the atmosphere are evaluated by general circulation models using radiation transport codes and databases such as MODTRAN/HITRAN. Emission, absorption, and scattering are thereby simulated through both space and time.
For many practical applications it may not be possible, economical or necessary to know all emissivity values locally. "Effective" or "bulk" values for an atmosphere or an entire planet may be used. These can be based upon remote observations (from the ground or outer space) or defined according to the simplifications utilized by a particular model. For example, an effective global value of εa≈0.78 has been estimated from application of an idealized single-layer-atmosphere energy-balance model to Earth.
Effective emissivity due to atmosphere
The IPCC reports an outgoing thermal radiation flux (OLR) of 239 (237–242) W m^−2 and a surface thermal radiation flux (SLR) of 398 (395–400) W m^−2, where the parenthesized amounts indicate the 5-95% confidence intervals as of 2015. These values indicate that the atmosphere (with clouds included) reduces Earth's overall emissivity, relative to its surface emissions, by a factor of 239/398 ≈ 0.60. In other words, emissions to space are given by OLR = εeff σ Tse^4, where εeff is the effective emissivity of Earth as viewed from space, σ is the Stefan–Boltzmann constant, and Tse is the effective temperature of the surface.
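A back-of-envelope check of these numbers (plain Python, using only the flux values quoted above and the Stefan–Boltzmann constant, and treating the surface as a black body for the temperature estimate):

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

OLR = 239.0  # outgoing thermal radiation flux at the top of the atmosphere, W m^-2
SLR = 398.0  # surface thermal radiation flux, W m^-2

effective_emissivity = OLR / SLR                 # ~0.60, as stated above
effective_surface_temp = (SLR / SIGMA) ** 0.25   # ~289 K (about 16 degrees C)
print(effective_emissivity, effective_surface_temp)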
History
The concepts of emissivity and absorptivity, as properties of matter and radiation, appeared in the late-eighteenth through mid-nineteenth century writings of Pierre Prévost, John Leslie, Balfour Stewart and others. In 1860, Gustav Kirchhoff published a mathematical description of their relationship under conditions of thermal equilibrium (i.e. Kirchhoff's law of thermal radiation). By 1884 the emissive power of a perfect blackbody was inferred by Josef Stefan using John Tyndall's experimental measurements, and derived by Ludwig Boltzmann from fundamental statistical principles. Emissivity, defined as a further proportionality factor to the Stefan-Boltzmann law, was thus implied and utilized in subsequent evaluations of the radiative behavior of grey bodies. For example, Svante Arrhenius applied the recent theoretical developments to his 1896 investigation of Earth's surface temperatures as calculated from the planet's radiative equilibrium with all of space. By 1900 Max Planck empirically derived a generalized law of blackbody radiation, thus clarifying the emissivity and absorptivity concepts at individual wavelengths.
Other radiometric coefficients
See also
Albedo
Black-body radiation
Passive daytime radiative cooling
Radiant barrier
Reflectance
Sakuma–Hattori equation
Stefan–Boltzmann law
View factor
Wien's displacement law
References
Further reading
An open community-focused website & directory with resources related to spectral emissivity and emittance. On this site, the focus is on available data, references and links to resources related to spectral emissivity as it is measured & used in thermal radiation thermometry and thermography (thermal imaging).
Resources, Tools and Basic Information for Engineering and Design of Technical Applications. This site offers an extensive list of other material not covered above.
Physical quantities
Radiometry
Heat transfer | Emissivity | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 3,352 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Telecommunications engineering",
"Physical quantities",
"Quantity",
"Thermodynamics",
"Physical properties",
"Radiometry"
] |
903,117 | https://en.wikipedia.org/wiki/Karyorrhexis | Karyorrhexis (from Greek κάρυον karyon 'kernel, seed, nucleus' and ῥῆξις rhexis 'bursting') is the destructive fragmentation of the nucleus of a dying cell whereby its chromatin is distributed irregularly throughout the cytoplasm. It is usually preceded by pyknosis and can occur as a result of either programmed cell death (apoptosis), cellular senescence, or necrosis.
In apoptosis, the cleavage of DNA is done by Ca2+- and Mg2+-dependent endonucleases.
Overview
During apoptosis, a cell goes through a series of steps as it eventually breaks down into apoptotic bodies, which undergo phagocytosis. In the context of karyorrhexis, these steps are, in chronological order, pyknosis (the irreversible condensation of chromatin), karyorrhexis (fragmentation of the nucleus and condensed DNA) and karyolysis (dissolution of the chromatin due to endonucleases).
Karyorrhexis involves the breakdown of the nuclear envelope and the fragmentation of condensed chromatin due to endonucleases. In cases of apoptosis, karyorrhexis ensures that nuclear fragments are quickly removed by phagocytes. In necrosis, however, this step fails to progress in an orderly manner, leaving behind fragmented cellular debris, further contributing to tissue damage and inflammation.
Process of Nuclear Envelope Dissolution During Karyorrhexis
In the intrinsic pathway of apoptosis, environmental factors such as oxidative stress signal pro-apoptotic members of the Bcl-2 protein family to eventually break the outer membrane of the mitochondria. This causes cytochrome c to leak into the cytoplasm, which causes a cascade of events that eventually leads to the activation of several caspases. One of these caspases, caspase-6, is known to cleave nuclear lamina proteins such as lamin A/C, which hold the nuclear envelope together, thereby aiding in the dissolution of the nuclear envelope.
Process of Condensed Chromatin Fragmentation During Karyorrhexis
In the process of karyorrhexis through apoptosis, DNA is fragmented in an orderly manner by endonucleases such as caspase-activated DNase and discrete nucleosomal units are formed. This is because the DNA has already been condensed during pyknosis, meaning it has been wrapped around histones in an organized manner, with around 180 base pairs per histone. The fragmented chromatin observed during karyorrhexis is made when activated endonucleases cleave the DNA in between the histones, resulting in orderly, discrete nucleosomal units. These short DNA fragments left by the endonucleases can be identified on an agar gel during electrophoresis due to their unique “laddered” appearance, allowing researchers to better identify cell death through apoptosis.
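The contrast between the apoptotic "ladder" and the necrotic smear can be illustrated with a toy simulation (Python with NumPy; the fragment counts, length ranges and the 180 bp repeat are illustrative parameters, not experimental values): cleavage restricted to the linker DNA between nucleosomes produces fragment lengths at multiples of roughly 180 bp, whereas uncontrolled cleavage produces a continuum of lengths.

import numpy as np

rng = np.random.default_rng(0)
NUCLEOSOME_BP = 180  # approximate DNA length per nucleosomal repeat
N_FRAGMENTS = 10000

# Apoptosis: endonucleases cut in the linker DNA between nucleosomes, so each
# fragment spans a whole number of nucleosomes (~180 bp each).
apoptotic = rng.integers(1, 11, N_FRAGMENTS) * NUCLEOSOME_BP

# Necrosis: cleavage is effectively random, so fragment lengths form a continuum.
necrotic = rng.integers(100, 1801, N_FRAGMENTS)

# On a gel, apoptotic DNA collapses onto a few discrete band sizes (a "ladder"),
# while necrotic DNA spreads over a continuous range of sizes (a "smear").
print(sorted(set(apoptotic)))  # [180, 360, 540, ..., 1800]: ten discrete bands
print(len(set(necrotic)))      # roughly 1700 distinct lengths: a smear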
Nucleus Degradation in Other Forms of Cell Death
Karyorrhexis is associated with a controlled breakdown of the nuclear envelope, typically by caspases that destroy lamins during apoptosis. However, for other forms of cell death that are less controlled than apoptosis, such as necrosis (unprogrammed cell death), the degradation of the nucleus is caused by other factors. Unlike apoptotic cells, necrotic cells are characterized by having a ruptured plasma membrane, no association with the activation of caspases, and typically invoking an inflammatory response. Because necrosis is a caspase-independent process, the nucleus may stay intact during early stages of cell death before being ripped open due to osmotic stress and other factors associated with having a hole in the plasma membrane. A specialized form of necrosis, called necroptosis, has a slightly more controlled degradation of the nucleus. This process is dependent on calpain, which is a protease that also degrades lamins, destabilizing the structure of the nucleus. However, similar to necrosis, this process also involves a ruptured plasma membrane, which contributes to the uncontrolled degradation of the nuclear envelope.
Unlike karyorrhexis in apoptosis which produces apoptotic bodies to be digested through phagocytosis, karyorrhexis in necroptosis leads to the expulsion of cell contents into extracellular space to be digested through pinocytosis.
Triggers and Mechanisms
The process of apoptosis, and thereby nucleus degradation through karyorrhexis, is invoked by various physiological and pathological stimuli. DNA damage, oxidative stress, hypoxia, and infections can initiate signaling cascades leading to nuclear degradation through the intrinsic pathway of apoptosis. The intrinsic pathway can also be induced through ethanol, which activates apoptosis-related proteins such as BAX and caspases. Additionally, if the death receptors on a cell’s surface are activated, such as CD95, the activation of caspases and nuclear envelope degradation can be triggered as well. In all of these processes, caspases such as caspase-3 play a key role by cleaving nuclear lamins and promoting chromatin fragmentation. In necrosis, uncontrolled calcium influx and activation of proteases such as calpains accelerate the process, highlighting the contrasting regulatory mechanisms between necrotic and apoptotic karyorrhexis.
The level of DNA damage determines whether a cell undergoes apoptosis or cellular senescence. Cellular senescence refers to the cessation of the cell cycle and thus cell division, which can be observed after a fixed number (approximately 50) of doublings in primary cells. One cause of cellular senescence is DNA damage through the shortening of telomeres. This causes a DNA damage response (DDR), which, if prolonged over a long period of time, activates the ATR and ATM damage kinases. These kinases activate two more kinases, the Chk1 and Chk2 kinases, which can alter the cell in a few different ways. One of these ways is by activating a transcription factor known as p53. If the level of DNA damage is mild, p53 will opt to activate CIP, which inhibits CDKs, arresting the cell cycle. However, if the level of DNA damage is severe enough, p53 can trigger apoptotic pathways which lead to the dissolution of the nuclear envelope through karyorrhexis.
Pathological Implications
Karyorrhexis is a prominent feature in conditions related to cell death, such as ischemia and neurodegenerative disorders. It has been observed during myocardial infarction and brain stroke, indicating its contribution to cell death in acute stress responses. Moreover, disorders such as placental vascular malperfusion have highlighted the role of karyorrhexis in fetal demise, particularly when it disrupts normal tissue homeostasis.
In cancer, apoptotic karyorrhexis plays a dual role. While it facilitates controlled cell death, aiding in tumor suppression, resistance to apoptosis in cancer cells results in evasion of this pathway, promoting malignancy. Therapeutic interventions targeting apoptotic pathways attempt to restore this phase of nuclear degradation to induce tumor regression.
See also
Karyolysis
References
Cellular processes
Cellular senescence
Programmed cell death | Karyorrhexis | [
"Chemistry",
"Biology"
] | 1,564 | [
"Signal transduction",
"Senescence",
"Cellular senescence",
"Cellular processes",
"Programmed cell death"
] |
903,171 | https://en.wikipedia.org/wiki/Pyknosis | Pyknosis, or karyopyknosis, is the irreversible condensation of chromatin in the nucleus of a cell undergoing necrosis or apoptosis. It is followed by karyorrhexis, or fragmentation of the nucleus.
Pyknosis (from Ancient Greek meaning "thick, closed or condensed") is also observed in the maturation of erythrocytes (a red blood cell) and the neutrophil (a type of white blood cell). The maturing metarubricyte (a stage in RBC maturation) will condense its nucleus before expelling it to become a reticulocyte. The maturing neutrophil will condense its nucleus into several connected lobes that stay in the cell until the end of its cell life.
Pyknotic nuclei are often found in the zona reticularis of the adrenal gland. They are also found in the keratinocytes of the outermost layer in parakeratinised epithelium.
Overview of Pyknosis
Pyknosis, or the irreversible nuclear condensation (a nuclear morphology) in a cell (generally old vertebrate leukocyte cells) is the result of a cell undergoing either apoptosis or necrosis. There are two types of pyknosis: nucleolytic pyknosis and anucleolytic pyknosis. Nucleolytic pyknosis occurs during apoptosis (a form of controlled/programmed cell death), while anucleolytic pyknosis occurs during necrosis. Necrosis is a form of unregulated cell death due to toxins, infections, and other acute stressors. These stressors cause swelling/shape modification of cellular organelles leading to the eventual loss of stability and integrity of the cell membrane.
In simpler terms, pyknosis is the process of nuclear shrinkage that may occur during both necrosis and apoptosis. Pyknosis is also characterized by eventual fragmentation (karyorrhexis) of the dense nuclear chromatin, resulting in dark, round, and dense nuclear fragments. Karyorrhexis is the fragmentation of a pyknotic cell’s nucleus and the cleavage and condensing of chromatin.
Pyknosis provides a distinction between apoptosis and necrosis
Apoptosis is characterized by nuclear condensation, shrinking of the cell, and blebbing of the nuclear and cell membrane, while necrosis is characterized by nuclear condensation, swelling of the cell, and breaks in the cell membrane. Both necrosis and apoptosis are regulated by a few of the same proteins: caspase-activated DNase (CAD), endonuclease G and DNase I. Pyknosis occurs in both an apoptotic and a necrotic cell. Pyknosis in an apoptotic cell is identified by nuclear condensation, chromatin fragmentation, and the formation of a few large clumps which are enveloped by apoptotic extracellular vesicles, which are to be released when the cell dies. Pyknosis in a necrotic cell is identified by nuclear condensation and fragmentation into small clumps that will be dissolved later in the process of the necrotic cell’s death. Consequently, pyknosis can be distinguished into two types, nucleolytic pyknosis (apoptotic cells) and anucleolytic pyknosis (necrotic cells).
The types of pyknosis
Nucleolytic pyknosis
Nucleolytic pyknosis, which can also be referred to as apoptotic pyknosis, involves three main events. These are disrupting the nuclear membrane, the condensing of the chromatin, and lastly, nuclear cleavage/fragmentation. Throughout these events the cell shrinks in size and the cell membrane undergoes blebbing, which is the forming of membrane bulges across the exterior-facing surface of the cell membrane. During the first event (the disruption of the nuclear membrane), several enzymes are used to cleave the proteins found in the nuclear membrane. These enzymes, caspase-3 and caspase-6, both target and cleave nuclear membrane proteins, including NUP153, LAP2, and B1 (proteins that are used for membrane structure and molecular transport). This cleavage, in turn, results in a disruption of the interior of the membrane, which is an initiating factor for chromatin condensation (the second event of nucleolytic pyknosis). This is due to the fact that caspase-3 cleaves Acinus, which has DNA/RNA binding domains and ATPase activity to initiate the condensation of chromatin.
Anucleolytic pyknosis
Anucleolytic pyknosis, which can also be referred to as necrotic pyknosis, involves the swelling of the cell, followed by the separation of the nuclear membrane from chromatin, the eventual collapse of both the nuclear membrane and chromatin, and finally the cell membrane ruptures (the cell dies). One protein that plays a significant role in necrotic pyknosis is the barrier-to-autointegration factor (BAF). The function of BAF is to facilitate the tethering of chromatin to the membrane of the nucleus, however in the case of necrosis, when BAF is phosphorylated, it will initiate the dissociation between the nuclear membrane and the condensed chromatin. As a result, the nuclear membrane will collapse onto the condensed chromatin. Thus, the phosphorylation of BAF is a critical marker of necrotic pyknosis.
Significance of pyknosis
Pyknosis is a stage in the apoptotic or necrotic cell death pathways. It is an important stage that involves fragmentation and condensation of damaged DNA/chromatin. Without it, the apoptotic or necrotic cell death pathways would be interrupted. This disruption, in turn, may prompt the improper destruction or removal of a cell with damaged elements as well as other related issues. These issues include cell accumulation and uncontrolled cell growth, which results in the formation of cancerous and abnormal tissue masses known as tumors. Therefore, being able to observe or identify when a cell is pyknotic (which may indicate that the cell is undergoing apoptosis or necrosis) and if it then successfully undergoes apoptosis or necrosis, may be crucial in determining if the cell will undergo uncontrolled cell growth and contribute to the formation of a tumor.
Techniques for detecting or observing pyknosis in cells
Various techniques are used to detect/observe pyknosis. These techniques also help to differentiate between apoptotic or necrotic cells. The techniques are identified and described as follows:
Cellular staining
When stains and dyes are applied to locate pyknotic cells in a tissue sample, the cell becomes easily identifiable. The stains/dyes target the nuclear and blebbed fragments of a pyknotic cell, making them dark (light contrast) and more readily seen when the sample is placed under a light microscope. Fluorescence microscopy and flow cytometry also use staining (fluorescent stains) to target the DNA/nuclear fragments of cells. The fluorescent staining creates a contrast between normal cell DNA and pyknotic cell DNA, because pyknotic cell nuclear material is condensed.
Gel electrophoresis
Gel electrophoresis is a standard technique that is frequently used to visualize DNA fragmentation (forming a ladder-like image on the gel), which is a characteristic of apoptosis and is associated with nuclear condensation (which characterizes pyknosis). Therefore, when referring to apoptosis, this technique is known as DNA laddering. Gel electrophoresis is also used to visualize the random DNA fragmentation of necrosis, which forms a smear on the gel.
Assays to detect DNA fragmentation or condensation
Various assays of DNA fragmentation or condensation include the APO single-stranded DNA (ssDNA) assay which detects damaged DNA of cells undergoing apoptosis or necrosis, TUNEL assay which is used to locally find DNA strand breaks (DSBs), and ISEL.
ISEL (in-situ labeling technique) is a labeling/tagging technique of apoptotic or necrotic cells. ISEL specifically targets unfragmented DNA that has condensed into a nucleosome structure.
The APO ssDNA assay detects apoptotic cells by using an antibody that specifically binds to ssDNA, which accumulates during apoptosis as a result of DNA fragmentation. Therefore, the presence of ssDNA is an indicator of DNA damage in the apoptotic cell. For the assay process, cells are fixed (with e.g., formamide), and these cells then undergo incubation (at a predetermined temperature), which subjects the DNA to thermal denaturation and exposes the ssDNA. Next, the cells are incubated with an ssDNA-specific antibody along with a fluorescently labeled secondary antibody. The fluorescence then provides a measure of apoptosis, which can be quantified using flow cytometry.
The TUNEL assay, otherwise known as the terminal deoxynucleotidyl transferase dUTP nick-end labeling assay, is a technique that measures DNA damage and breakage during apoptosis. During apoptosis, DNA fragmentation exposes numerous 3’OH ends, that are labeled with modified deoxy-uridine triphosphate (dUTP) by the TUNEL reaction. Then, this modified dUTP can be identified with specific fluorescent antibodies which can identify modified nucleotides or by using tagged nucleotides themselves. Flow cytometry can then be used to quantify fluorescence intensity, and thus provide a measure of apoptosis.
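As a minimal illustration of how such per-cell fluorescence is turned into a measure of apoptosis, the sketch below (Python with NumPy; the intensity distributions and the gating threshold are invented for illustration, not taken from any real dataset) applies a simple threshold gate and reports the apoptotic fraction:

import numpy as np

rng = np.random.default_rng(1)

# Simulated per-cell TUNEL fluorescence intensities (arbitrary units, made up):
# most cells are TUNEL-negative, a minority are brightly TUNEL-positive.
negative_cells = rng.normal(loc=100, scale=20, size=9000)
positive_cells = rng.normal(loc=1000, scale=200, size=1000)
intensities = np.concatenate([negative_cells, positive_cells])

THRESHOLD = 400  # illustrative gate separating the two populations
apoptotic_fraction = np.mean(intensities > THRESHOLD)
print(f"apoptotic fraction: {apoptotic_fraction:.1%}")  # close to 10%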
Detection methods of caspase activity
As mentioned above, caspase proteins, which are protease enzymes, promote DNA condensation and fragmentation via the caspase (or proteolytic) cascade. These caspase proteins include, for example, caspase 9, caspase 6, caspase 7, and caspase 3. The caspase cascade directly activates caspase-activated DNase (CAD) which initiates DNA fragmentation into smaller pieces resulting in chromatin condensation. The biochemical techniques used to detect caspase activity include ELISA and fluorometric and colorimetric assays.
See also
Apoptosis
Necrosis
Karyolysis
Karyorrhexis
References
Cellular processes
Programmed cell death
Cellular senescence | Pyknosis | [
"Chemistry",
"Biology"
] | 2,263 | [
"Signal transduction",
"Cellular senescence",
"Senescence",
"Cellular processes",
"Programmed cell death"
] |
903,303 | https://en.wikipedia.org/wiki/Diazotroph | Diazotrophs are bacteria and archaea that fix atmospheric nitrogen (N2) in the atmosphere into bioavailable forms such as ammonia.
A diazotroph is a microorganism that is able to grow without external sources of fixed nitrogen. Examples of organisms that do this are rhizobia, Frankia and Azospirillum. All diazotrophs contain iron-molybdenum or iron-vanadium nitrogenase systems. Two of the most studied systems are those of Klebsiella pneumoniae and Azotobacter vinelandii. These systems are studied because of their genetic tractability and their fast growth.
Etymology
The word diazotroph is derived from the words diazo ("di" = two + "azo" = nitrogen) meaning "dinitrogen (N2)" and troph meaning "pertaining to food or nourishment", in summary dinitrogen utilizing. The word azote means nitrogen in French and was named by French chemist and biologist Antoine Lavoisier, who saw it as the part of air which cannot sustain life.
Types
Diazotrophs are scattered across Bacteria taxonomic groups (as well as a couple of Archaea). Even within a species that can fix nitrogen there may be strains that do not. Fixation is shut off when other sources of nitrogen are available, and, for many species, when oxygen is at high partial pressure. Bacteria have different ways of dealing with the debilitating effects of oxygen on nitrogenases, listed below.
Free-living diazotrophs
Anaerobes—these are obligate anaerobes that cannot tolerate oxygen even if they are not fixing nitrogen. They live in habitats low in oxygen, such as soils and decaying vegetable matter. Clostridium is an example. Sulphate-reducing bacteria are important in ocean sediments (e.g. Desulfovibrio), and some Archean methanogens, like Methanococcus, fix nitrogen in muds, animal intestines and anoxic soils.
Facultative anaerobes—these species can grow either with or without oxygen, but they only fix nitrogen anaerobically. Often, they respire oxygen as rapidly as it is supplied, keeping the amount of free oxygen low. Examples include Klebsiella pneumoniae, Paenibacillus polymyxa, Bacillus macerans, and Escherichia intermedia.
Aerobes—these species require oxygen to grow, yet their nitrogenase is still debilitated if exposed to oxygen. Azotobacter vinelandii is the most studied of these organisms. It uses very high respiration rates, and protective compounds, to prevent oxygen damage. Many other species also reduce the oxygen levels in this way, but with lower respiration rates and lower oxygen tolerance.
Oxygenic photosynthetic bacteria (cyanobacteria) generate oxygen as a by-product of photosynthesis, yet some are able to fix nitrogen as well. These are colonial bacteria that have specialized cells (heterocysts) that lack the oxygen generating steps of photosynthesis. Examples are Anabaena cylindrica and Nostoc commune. Other cyanobacteria lack heterocysts and can fix nitrogen only in low light and oxygen levels (e.g. Plectonema). Some cyanobacteria, including the highly abundant marine taxa Prochlorococcus and Synechococcus do not fix nitrogen, whilst other marine cyanobacteria, such as Trichodesmium and Cyanothece, are major contributors to oceanic nitrogen fixation.
Anoxygenic photosynthetic bacteria do not generate oxygen during photosynthesis, having only a single photosystem which cannot split water. Nitrogenase is expressed under nitrogen limitation. Normally, the expression is regulated via negative feedback from the produced ammonium ion but in the absence of N2, the product is not formed, and the by-product H2 continues unabated [Biohydrogen]. Example species: Rhodobacter sphaeroides, Rhodopseudomonas palustris, Rhodobacter capsulatus.
Symbiotic diazotrophs
Rhizobia—these are the species that associate with legumes, plants of the family Fabaceae. Oxygen is bound to leghemoglobin in the root nodules that house the bacterial symbionts, and supplied at a rate that will not harm the nitrogenase.
Frankias—'actinorhizal' nitrogen fixers. The bacteria also infect the roots, leading to the formation of nodules. Actinorhizal nodules consist of several lobes, each lobe having a structure similar to a lateral root. Frankia is able to colonize the cortical tissue of nodules, where it fixes nitrogen. Actinorhizal plants and Frankias also produce haemoglobins. Their role is less well established than for rhizobia. Although at first it appeared that they inhabit sets of unrelated plants (alders, Australian pines, California lilac, bog myrtle, bitterbrush, Dryas), revisions to the phylogeny of angiosperms show a close relatedness of these species and the legumes. This suggests that these symbioses reflect the reuse of an ancestral capability rather than multiple independent origins. In other words, an ancient gene (from before the angiosperms and gymnosperms diverged) that is unused in most species was reawakened and reused in these species.
Cyanobacteria—there are also symbiotic cyanobacteria. Some associate with fungi as lichens, with liverworts, with a fern, and with a cycad. These do not form nodules (indeed most of the plants do not have roots). Heterocysts exclude the oxygen, as discussed above. The fern association is important agriculturally: the water fern Azolla harbouring Anabaena is an important green manure for rice culture.
Association with animals—although diazotrophs have been found in many animal guts, there is usually sufficient ammonia present to suppress nitrogen fixation. Termites on a low nitrogen diet allow for some fixation, but the contribution to the termite's nitrogen supply is negligible. Shipworms may be the only species that derive significant benefit from their gut symbionts.
Cultivation
Under laboratory conditions, extra nitrogen sources are not needed to grow free-living diazotrophs. Carbon sources (such as sucrose or glucose) and a small amount of inorganic salt are added to the medium. Free-living diazotrophs can directly use atmospheric nitrogen (N2). However, while cultivating several symbiotic diazotrophs, such as rhizobia, it is necessary to add nitrogen, because rhizobia and other symbiotic nitrogen-fixing bacteria cannot use molecular nitrogen (N2) in free-living form and only fix nitrogen during symbiosis with a host plant.
Application
Biofertilizer
Diazotroph fertilizer is a kind of biofertilizer that uses nitrogen-fixing microorganisms to convert molecular nitrogen (N2) into ammonia, the form of nitrogen available for crops to use. These nitrogen nutrients can then be used in protein synthesis by the plants. This whole process of nitrogen fixation by diazotrophs is called biological nitrogen fixation. The biochemical reaction takes place at ordinary temperature and pressure, so it does not require the extreme conditions and specific catalysts of industrial fertilizer production. Producing available nitrogen in this way can therefore be cheap, clean and efficient, and nitrogen-fixing bacterial fertilizer is an ideal and promising biofertilizer.
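For reference, the overall reaction catalysed by the nitrogenase system in this process is commonly written as:

N2 + 8 H+ + 8 e− + 16 ATP → 2 NH3 + H2 + 16 ADP + 16 Pi

The large ATP requirement is one reason diazotrophs need an ample supply of carbon and energy, which is also why carbon sources such as sucrose or glucose are added to the medium when they are cultivated (see the section on cultivation above).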
Since ancient times, people have grown leguminous crops to make the soil more fertile. The reason is that the roots of leguminous crops are symbiotic with rhizobia (a kind of diazotroph). These rhizobia can be considered a natural biofertilizer that provides available nitrogen in the soil. After the leguminous crops are harvested, other crops (not necessarily leguminous) grown in the same soil can use the nitrogen that remains and grow better.
Diazotroph biofertilizers used today include Rhizobium, Azotobacter, Azospirillum and blue-green algae (cyanobacteria). These fertilizers are widely used and have entered industrial production. The nitrogen-fixation biofertilizers on the market can be divided into liquid and solid fertilizers. Most are produced by liquid fermentation; after fermentation, the liquid culture can be packaged directly as a liquid fertilizer, or it can be adsorbed onto sterilized peat and other carrier adsorbents to form a solid microbial fertilizer. These nitrogen-fixing fertilizers can help increase the yields of cotton, rice, wheat, peanuts, rape, corn, sorghum, potatoes, tobacco, sugarcane and various vegetables.
Importance
In most ecosystems, the contribution of symbiotic associations greatly exceeds that of free-living species, with the exception of cyanobacteria.
Biologically available nitrogen such as ammonia is the primary limiting factor for life on Earth. Diazotrophs play an important role in the Earth's nitrogen cycle. In terrestrial ecosystems, diazotrophs fix N2 from the atmosphere and provide available nitrogen to primary producers; the nitrogen is then transferred to higher trophic levels and to human beings. The formation and storage of nitrogen are all influenced by this transformation process. In addition, the available nitrogen fixed by diazotrophs is environmentally sustainable and can reduce the use of synthetic fertilizer, which makes it an important topic in agricultural research.
In marine ecosystems, prokaryotic phytoplankton (such as cyanobacteria) are the main nitrogen fixers, and the nitrogen is then consumed by higher trophic levels. The fixed N released from these organisms is a component of ecosystem N inputs, and it is also important for the coupled C cycle. A greater oceanic inventory of fixed N may increase primary production and the export of organic C to the deep ocean.
References
External links
Marine Nitrogen Fixation - The Basics (USC Capone Lab)
Azotobacter
Rhizobia
Frankia & Actinorhizal Plants
Nitrogen cycle
Environmental microbiology | Diazotroph | [
"Chemistry",
"Environmental_science"
] | 2,237 | [
"Environmental microbiology",
"Nitrogen cycle",
"Metabolism"
] |
903,686 | https://en.wikipedia.org/wiki/Teleparallelism | Teleparallelism (also called teleparallel gravity), was an attempt by Albert Einstein to base a unified theory of electromagnetism and gravity on the mathematical structure of distant parallelism, also referred to as absolute or teleparallelism. In this theory, a spacetime is characterized by a curvature-free linear connection in conjunction with a metric tensor field, both defined in terms of a dynamical tetrad field.
Teleparallel spacetimes
The crucial new idea, for Einstein, was the introduction of a tetrad field, i.e., a set {X_1, X_2, X_3, X_4} of four vector fields defined on all of M such that for every p ∈ M the set {X_1(p), X_2(p), X_3(p), X_4(p)} is a basis of T_pM, where T_pM denotes the fiber over p of the tangent vector bundle TM. Hence, the four-dimensional spacetime manifold M must be a parallelizable manifold. The tetrad field was introduced to allow the distant comparison of the direction of tangent vectors at different points of the manifold, hence the name distant parallelism. His attempt failed because there was no Schwarzschild solution in his simplified field equation.
In fact, one can define the connection of the parallelization (also called the Weitzenböck connection) to be the linear connection ∇ on M such that
∇_v (f^i X_i) = (v f^i) X_i(p),
where v ∈ T_pM and the f^i are (global) functions on M; thus f^i X_i is a global vector field on M. In other words, the coefficients of the Weitzenböck connection with respect to {X_i} are all identically zero, implicitly defined by:
∇_{X_i} X_j = 0,
hence
W^k_{ij} := θ^k(∇_{X_i} X_j) = 0
for the connection coefficients (also called Weitzenböck coefficients) in this global basis. Here {θ^k} is the dual global basis (or coframe) defined by θ^i(X_j) = δ^i_j.
This is what usually happens in R^n, in any affine space or Lie group (for example the 'curved' sphere S^3, which is nevertheless 'Weitzenböck flat' as a manifold).
Using the transformation law of a connection, or equivalently the properties, we have the following result.
Proposition. In a natural basis, associated with local coordinates (x^μ), i.e., in the holonomic frame {∂_μ}, the (local) connection coefficients of the Weitzenböck connection are given by:
W^β_{μν} = X_i^β ∂_ν h^i_μ,
where X_i = X_i^μ ∂_μ and h^i = h^i_μ dx^μ, for i = 1, 2, 3, 4, are the local expressions of a global object, that is, the given tetrad and its dual coframe.
The Weitzenböck connection has vanishing curvature, but – in general – non-vanishing torsion.
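As a concrete illustration, here is a minimal SymPy sketch. It uses the index convention written in the proposition above (conventions for the order of the lower indices differ between references) and a simple illustrative tetrad, namely the polar-coordinate orthonormal coframe of the flat Euclidean plane; the curvature of the Weitzenböck connection vanishes identically by construction, but its torsion does not:

import sympy as sp

# Coordinates (r, theta) on the flat plane; illustrative tetrad (coframe):
#   h^1 = dr,  h^2 = r dtheta   (an orthonormal coframe for the flat metric)
r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
n = 2

h = sp.Matrix([[1, 0],
               [0, r]])   # h[i, mu]: components h^i_mu of the coframe
X = h.inv().T              # X[i, mu]: components X_i^mu of the dual frame

# Weitzenboeck connection coefficients  W^beta_{mu nu} = X_i^beta * d_nu h^i_mu
W = [[[sp.simplify(sum(X[i, b] * sp.diff(h[i, m], coords[nu]) for i in range(n)))
       for nu in range(n)] for m in range(n)] for b in range(n)]

# Torsion (one common sign convention): T^beta_{mu nu} = W^beta_{nu mu} - W^beta_{mu nu}
T = [[[sp.simplify(W[b][nu][m] - W[b][m][nu])
       for nu in range(n)] for m in range(n)] for b in range(n)]

print(W[1][1][0])  # W^theta_{theta r} = 1/r, the only non-vanishing coefficient
print(T[1][0][1])  # T^theta_{r theta} = 1/r, so the torsion does not vanish

The non-zero torsion here simply reflects that this particular frame field "rotates" from point to point; choosing the constant Cartesian coframe dx, dy instead gives vanishing Weitzenböck coefficients and hence vanishing torsion.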
Given the frame field {X_i}, one can also define a metric by conceiving of the frame field as an orthonormal vector field. One would then obtain a pseudo-Riemannian metric tensor field g of signature (3,1) by
g(X_i, X_j) = η_{ij},
where η_{ij} is the Minkowski metric of signature (3,1).
The corresponding underlying spacetime is called, in this case, a Weitzenböck spacetime.
These 'parallel vector fields' give rise to the metric tensor as a byproduct.
New teleparallel gravity theory
New teleparallel gravity theory (or new general relativity) is a theory of gravitation on Weitzenböck spacetime, and attributes gravitation to the torsion tensor formed of the parallel vector fields.
In the new teleparallel gravity theory the fundamental assumptions are as follows:
In 1961 Christian Møller revived Einstein's idea, and Pellegrini and Plebanski found a Lagrangian formulation for absolute parallelism.
Møller tetrad theory of gravitation
In 1961, Møller showed that a tetrad description of gravitational fields allows a more rational treatment of the energy-momentum complex than in a theory based on the metric tensor alone. The advantage of using tetrads as gravitational variables was connected with the fact that this allowed one to construct expressions for the energy-momentum complex which had more satisfactory transformation properties than in a purely metric formulation. In 2015, it was shown that the total energy of matter and gravitation is proportional to the Ricci scalar of three-space up to the linear order of perturbation.
New translation teleparallel gauge theory of gravity
Independently in 1967, Hayashi and Nakano revived Einstein's idea, and Pellegrini and Plebanski started to formulate the gauge theory of the spacetime translation group. Hayashi pointed out the connection between the gauge theory of the spacetime translation group and absolute parallelism. The first fiber bundle formulation was provided by Cho. This model was later studied by Schweizer et al., Nitsch and Hehl, Meyer; more recent advances can be found in Aldrovandi and Pereira, Gronwald, Itin, Maluf and da Rocha Neto, Münch, Obukhov and Pereira, and Schucking and Surowitz.
Nowadays, teleparallelism is studied purely as a theory of gravity without trying to unify it with electromagnetism. In this theory, the gravitational field turns out to be fully represented by the translational gauge potential $B^a{}_\mu$, as it should be for a gauge theory of the translation group.
If this choice is made, then there is no longer any Lorentz gauge symmetry because the internal Minkowski space fiber—over each point of the spacetime manifold—belongs to a fiber bundle with the Abelian group $\mathbb{R}^4$ as structure group. However, a translational gauge symmetry may be introduced thus: instead of seeing tetrads as fundamental, we introduce a fundamental translational gauge symmetry (which acts upon the internal Minkowski space fibers affinely so that this fiber is once again made local) with a connection $B^a{}_\mu$ and a "coordinate field" $x^a$ taking on values in the Minkowski space fiber.
More precisely, let $\pi : \mathcal{M} \to M$ be the Minkowski fiber bundle over the spacetime manifold $M$. For each point $p \in M$, the fiber $\mathcal{M}_p$ is an affine space. In a fiber chart $(x^\mu, x^a)$, coordinates are usually denoted by $x^\mu$, $\mu = 0, 1, 2, 3$, and $x^a$, $a = 0, 1, 2, 3$, where $x^\mu$ are coordinates on the spacetime manifold $M$, and $x^a$ are coordinates in the fiber $\mathcal{M}_p$.
Using the abstract index notation, let $a, b, \ldots$ refer to $\mathcal{M}_p$ and $\mu, \nu, \ldots$ refer to the tangent bundle $TM$. In any particular gauge, the value of $x^a$ at the point $p$ is given by the section $x^a = \xi^a(p)$.
The covariant derivative
$$D_\mu \xi^a \equiv \partial_\mu \xi^a + B^a{}_\mu$$
is defined with respect to the connection form $B = B^a{}_\mu \, \mathrm{d}x^\mu$, a 1-form assuming values in the Lie algebra of the translational Abelian group $\mathbb{R}^4$. Here, $\mathrm{d}$ is the exterior derivative of the $a$th component of $\xi$, which is a scalar field (so this isn't a pure abstract index notation). Under a gauge transformation by the translation field $\alpha^a$,
$$\xi^a \to \xi^a + \alpha^a$$
and
$$B^a{}_\mu \to B^a{}_\mu - \partial_\mu \alpha^a,$$
and so the covariant derivative of $\xi^a$ is gauge invariant. This is identified with the translational (co-)tetrad
$$h^a{}_\mu = \partial_\mu \xi^a + B^a{}_\mu,$$
which is a one-form taking on values in the Lie algebra of the translational Abelian group $\mathbb{R}^4$, whence it is gauge invariant. But what does this mean? $\xi^a$ is a local section of the (pure translational) affine internal bundle $\mathcal{M} \to M$, another important structure in addition to the translational gauge field $B^a{}_\mu$. Geometrically, this field determines the origin of the affine spaces; it is known as Cartan's radius vector. In the gauge-theoretic framework, the one-form
$$h^a = h^a{}_\mu \, \mathrm{d}x^\mu = \left(\partial_\mu \xi^a + B^a{}_\mu\right) \mathrm{d}x^\mu$$
arises as the nonlinear translational gauge field, with $\xi^a$ interpreted as the Goldstone field describing the spontaneous breaking of the translational symmetry.
A crude analogy: Think of as the computer screen and the internal displacement as the position of the mouse pointer. Think of a curved mousepad as spacetime and the position of the mouse as the position. Keeping the orientation of the mouse fixed, if we move the mouse about the curved mousepad, the position of the mouse pointer (internal displacement) also changes and this change is path dependent; i.e., it does not depend only upon the initial and final position of the mouse. The change in the internal displacement as we move the mouse about a closed path on the mousepad is the torsion.
Another crude analogy: Think of a crystal with line defects (edge dislocations and screw dislocations but not disclinations). The parallel transport of a point of the internal space along a path is given by counting the number of (up/down, forward/backward and left/right) crystal bonds traversed. The Burgers vector corresponds to the torsion. Disclinations correspond to curvature, which is why they are neglected.
The torsion—that is, the translational field strength of Teleparallel Gravity (or the translational "curvature")—
$$T^a{}_{\mu\nu} \equiv \left(\mathrm{d}h^a\right)_{\mu\nu} = \partial_\mu h^a{}_\nu - \partial_\nu h^a{}_\mu$$
is gauge invariant.
We can always choose the gauge where $x^a$ is zero everywhere, although $\mathcal{M}_p$ is an affine space and also a fiber; thus the origin must be defined on a point-by-point basis, which can be done arbitrarily. This leads us back to the theory where the tetrad is fundamental.
Teleparallelism refers to any theory of gravitation based upon this framework. There is a particular choice of the action that makes it exactly equivalent to general relativity, but there are also other choices of the action which are not equivalent to general relativity. In some of these theories, there is no equivalence between inertial and gravitational masses.
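For orientation (the article does not write the action down, and sign and normalization conventions vary between authors), the Lagrangian usually quoted for the teleparallel equivalent of general relativity is built from the three quadratic torsion invariants:
$$\mathcal{L}_{\mathrm{TEGR}} = \frac{h}{2\kappa}\left(\frac{1}{4}\,T^{\rho}{}_{\mu\nu}\,T_{\rho}{}^{\mu\nu} + \frac{1}{2}\,T^{\rho}{}_{\mu\nu}\,T^{\nu\mu}{}_{\rho} - T^{\rho}{}_{\mu\rho}\,T^{\nu\mu}{}_{\nu}\right),$$
where $h = \det(h^a{}_\mu)$ and $\kappa = 8\pi G/c^4$. With exactly this combination of coefficients the field equations reproduce Einstein's equations; other coefficient choices yield the inequivalent teleparallel theories mentioned above.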
Unlike in general relativity, gravity is due not to the curvature of spacetime but to the torsion thereof.
Non-gravitational contexts
There exists a close analogy between the geometry of spacetime and the structure of defects in a crystal. Dislocations are represented by torsion, disclinations by curvature. These defects are not independent of each other. A dislocation is equivalent to a disclination–antidisclination pair, and a disclination is equivalent to a string of dislocations. This is the basic reason why Einstein's theory, based purely on curvature, can be rewritten as a teleparallel theory based only on torsion. There exist, moreover, infinitely many ways of rewriting Einstein's theory, depending on how much of the curvature one wants to reexpress in terms of torsion, the teleparallel theory being merely one specific version of these.
A further application of teleparallelism occurs in quantum field theory, namely, two-dimensional non-linear sigma models with target space on simple geometric manifolds, whose renormalization behavior is controlled by a Ricci flow, which includes torsion. This torsion modifies the Ricci tensor and hence leads to an infrared fixed point for the coupling, on account of teleparallelism ("geometrostasis").
See also
Classical theories of gravitation
Gauge gravitation theory
Kaluza-Klein theory
Geometrodynamics
References
Further reading
External links
Selected Papers on Teleparallelism, translated and edited by D. H. Delphenich
Teleparallel Structures and Gravity Theories by Luca Bombelli
History of physics
Theories of gravity | Teleparallelism | [
"Physics"
] | 2,157 | [
"Theoretical physics",
"Theories of gravity"
] |
904,056 | https://en.wikipedia.org/wiki/Electrical%20impedance%20tomography | Electrical impedance tomography (EIT) is a noninvasive type of medical imaging in which the electrical conductivity, permittivity, and impedance of a part of the body is inferred from surface electrode measurements and used to form a tomographic image of that part. Electrical conductivity varies considerably among various types of biological tissues or due to the movement of fluids and gases within tissues. The majority of EIT systems apply small alternating currents at a single frequency, however, some EIT systems use multiple frequencies to better differentiate between normal and suspected abnormal tissue within the same organ.
Typically, conducting surface electrodes are attached to the skin around the body part being examined. Small alternating currents are applied to some or all of the electrodes, and the resulting voltages are recorded from the other electrodes. This process is then repeated for numerous different electrode configurations and finally results in a two-dimensional tomogram according to the image reconstruction algorithms used.
Since free ion content determines tissue and fluid conductivity, muscle and blood will conduct the applied currents better than fat, bone or lung tissue. This property can be used to reconstruct static images by morphological or absolute EIT (a-EIT). However, in contrast to linear x-rays used in Computed Tomography, electric currents travel three-dimensionally along all the paths simultaneously, weighted by their conductivity (thus primarily along the paths of least resistivity, but not exclusively). This means that a part of the electric current leaves the transverse plane and results in an impedance transfer. This and other factors are the reason why image reconstruction in absolute EIT is so hard, since there is usually more than just one solution for image reconstruction of a three-dimensional area projected onto a two-dimensional plane.
Mathematically, the problem of recovering conductivity from surface measurements of current and potential is a non-linear inverse problem and is severely ill-posed. The mathematical formulation of the problem is due to Alberto Calderón, and in the mathematical literature of inverse problems it is often referred to as "Calderón's inverse problem" or the "Calderón problem". There is extensive mathematical research on the problem of uniqueness of solution and numerical algorithms for this problem.
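As a minimal mathematical sketch (the notation below is generic, since the article does not fix any symbols): at low frequencies the potential $u$ inside the body $\Omega$ is modelled by the conductivity equation, and the ideal data are the Dirichlet-to-Neumann (voltage-to-current) map on the boundary,
$$\nabla\cdot\big(\sigma(x)\,\nabla u(x)\big)=0 \ \ \text{in } \Omega, \qquad u\big|_{\partial\Omega}=f, \qquad \Lambda_\sigma f = \sigma\,\frac{\partial u}{\partial\nu}\bigg|_{\partial\Omega}.$$
Calderón's question is whether, and how stably, the conductivity $\sigma$ can be recovered from $\Lambda_\sigma$; the severe ill-posedness mentioned above reflects the fact that large changes of $\sigma$ deep inside $\Omega$ may produce only very small changes in the boundary data.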
Compared to the tissue conductivities of most other soft tissues within the human thorax, lung tissue conductivity is approximately five-fold lower, resulting in high absolute contrast. This characteristic may partially explain the amount of research conducted in EIT lung imaging. Furthermore, lung conductivity fluctuates intensely during the breath cycle which accounts for the immense interest of the research community to use EIT as a bedside method to visualize inhomogeneity of lung ventilation in mechanically ventilated patients. EIT measurements between two or more physiological states, e.g. between inspiration and expiration, are therefore referred to as time difference EIT (td-EIT).
Time difference EIT (td-EIT) has one major advantage over absolute EIT (a-EIT): inaccuracies resulting from interindividual anatomy, insufficient skin contact of surface electrodes or impedance transfer can be dismissed because most artifacts will eliminate themselves due to simple image subtraction in td-EIT. This is most likely the reason why, as of today, the greatest progress of EIT research has been achieved with time difference EIT.
Further EIT applications proposed include detection/location of cancer in skin, breast, or cervix, localization of epileptic foci, and imaging of brain activity, as well as a diagnostic tool for impaired gastric emptying. Attempts to detect or localize tissue pathology within normal tissue usually rely on multifrequency EIT (MF-EIT), also termed Electrical Impedance Spectroscopy (EIS), and are based on differences in conductance patterns at varying frequencies.
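Multifrequency measurements are commonly summarized by fitting a simple dispersion model to the measured impedance; the Cole–Cole form is one standard choice (the article does not commit to a specific model, so this is purely illustrative):
$$Z(\omega) = R_\infty + \frac{R_0 - R_\infty}{1 + (\,j\omega\tau\,)^{\alpha}},$$
where $R_0$ and $R_\infty$ are the low- and high-frequency resistance limits, $\tau$ is a characteristic time constant and $0<\alpha\le 1$ a dispersion parameter. Differences in such fitted parameters between normal and suspected abnormal tissue are what MF-EIT/EIS approaches attempt to exploit.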
History
The invention of EIT as a medical imaging technique is usually attributed to John G. Webster and a publication in 1978, although the first practical realization of a medical EIT system was detailed in 1984 in the work of David C. Barber and Brian H. Brown. Together, Brown and Barber published the first electrical impedance tomogram in 1983, visualizing the cross section of a human forearm by absolute EIT. Even though there has been substantial progress in the meantime, most a-EIT applications are still considered experimental. However, two commercial f-EIT devices for monitoring lung function in intensive care patients have been introduced recently.
A technique similar to EIT is used in geophysics and industrial process monitoring – electrical resistivity tomography. In analogy to EIT, surface electrodes are placed on the earth, within bore holes, or within a vessel or pipe in order to locate resistivity anomalies or monitor mixtures of conductive fluids. Setup and reconstruction techniques are comparable to EIT. In geophysics, the idea dates from the 1930s. Electrical resistivity tomography has also been proposed for mapping the electrical properties of substrates and thin films for electronic applications.
Theory
Electrical conductivity and permittivity vary among biological tissue types and depend on their free ion content. Further factors affecting conductivity include temperature and other physiological factors, e.g. the respiratory cycle between in- and expiration when lung tissue becomes more conductive due to lower content of insulating air within its alveoli.
After surface electrodes have been positioned around the body part of interest – using adhesive electrodes, an electrode belt or a conductive electrode vest – alternating currents of typically a few milliamperes at a frequency of 10–100 kHz are applied across two or more drive electrodes. The remaining electrodes are used to measure the resulting voltage. The procedure is then repeated for numerous "stimulation patterns", e.g. successive pairs of adjacent electrodes, until an entire circle has been completed and image reconstruction can be carried out and displayed by a digital workstation that incorporates complex mathematical algorithms and a priori data.
The current itself is applied using current sources, either a single current source switched between electrodes using a multiplexer or a system of voltage-to-current converters, one for each electrode, each controlled by a digital-to-analog converter. The measurements again may be taken either by a single voltage measurement circuit multiplexed over the electrodes or a separate circuit for each electrode. Earlier EIT systems still used an analog demodulation circuit to convert the alternating voltage to a direct current level before running it through an analog-to-digital converter. Newer systems convert the alternating signal directly before performing digital demodulation. Depending on indication, some EIT systems are capable of working at multiple frequencies and measuring both magnitude and phase of the voltage. Voltages measured are passed on to a computer to perform image reconstruction and display. The choice of current (or voltage) patterns affects the signal-to-noise ratio significantly. With devices capable of feeding currents from all electrodes simultaneously (such as ACT3) it is possible to adaptively determine optimal current patterns.
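As a concrete illustration of what a set of "stimulation patterns" looks like, the sketch below enumerates the drive and measurement pairs of the widely used adjacent (neighbouring-pair) protocol on a ring of electrodes. It is a generic sketch; the function name and the convention of skipping voltage pairs that contain a drive electrode are choices made for this example, not a description of any particular commercial system.

```python
def adjacent_protocol(n_electrodes=16):
    """Enumerate (drive, measure) electrode pairs for the adjacent protocol.

    Current is injected between each pair of neighbouring electrodes in turn;
    for every injection, differential voltages are read from all neighbouring
    electrode pairs that do not include a drive electrode (a common, though
    not universal, convention).
    """
    patterns = []
    for k in range(n_electrodes):
        drive = (k, (k + 1) % n_electrodes)        # current in at k, out at k+1
        for m in range(n_electrodes):
            sense = (m, (m + 1) % n_electrodes)    # differential voltage pair
            if set(sense) & set(drive):            # skip pairs touching a drive electrode
                continue
            patterns.append((drive, sense))
    return patterns


if __name__ == "__main__":
    # 16 injections x 13 usable voltage pairs = 208 raw readings per image frame
    print(len(adjacent_protocol(16)))
```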
If images are to be displayed in real time, a typical approach is the application of some form of regularized inverse of a linearization of the forward problem, or a fast version of a direct reconstruction method such as the D-bar method. Most practical systems used in the medical environment generate a 'difference image': differences in voltage between two time points are left-multiplied by the regularized inverse to calculate approximate difference images of conductivity and permittivity. Another approach is to construct a finite element model of the body and adjust the conductivities (for example using a variant of the Levenberg–Marquardt method) to fit the measured data. This is more challenging as it requires an accurate body shape and the exact position of the electrodes.
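To show the algebra behind such a one-step "difference image" reconstruction, the sketch below implements the simplest Tikhonov-regularized linearized inverse in plain NumPy. The Jacobian here is random placeholder data standing in for the sensitivity matrix a real forward model (e.g. a finite element model such as EIDORS provides) would supply; all names and dimensions are illustrative assumptions, not any particular system's interface.

```python
import numpy as np

rng = np.random.default_rng(0)

n_meas, n_pix = 208, 576                  # e.g. 16-electrode adjacent protocol, 24x24 pixel grid
J = rng.standard_normal((n_meas, n_pix))  # placeholder sensitivity (Jacobian) matrix

# Difference data between two time points, e.g. end-expiration vs. end-inspiration
true_change = 0.1 * rng.standard_normal(n_pix)
dv = J @ true_change + 1e-3 * rng.standard_normal(n_meas)  # synthetic voltage change + noise

# One-step Tikhonov-regularized linearized reconstruction:
#   delta_sigma = (J^T J + lambda^2 I)^(-1) J^T dv
lam = 1e-1
delta_sigma = np.linalg.solve(J.T @ J + lam**2 * np.eye(n_pix), J.T @ dv)

print(delta_sigma.shape)                  # one conductivity-change value per image pixel
```

In practice the identity regularizer is often replaced by a NOSER-style diagonal prior or a spatial smoothness penalty, and the regularized inverse is precomputed once so that each new frame costs only a matrix–vector product.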
Much of the fundamental work underpinning electrical impedance tomography was done at Rensselaer Polytechnic Institute starting in the 1980s. See also the work published in 1992 from the Glenfield Hospital Project (reference missing).
Absolute EIT approaches are targeted at digital reconstruction of static images, i.e. two-dimensional representations of the anatomy within the body part of interest. As mentioned above and unlike linear x-rays in Computed Tomography, electric currents travel three-dimensionally along the path of least resistivity, which results in partial loss of the electric current applied (impedance transfer, e.g. due to blood flow through the transverse plane). This is one of the reasons why image reconstruction in absolute EIT is so complex, since there is usually more than just one solution for image reconstruction of a three-dimensional area projected onto a two-dimensional plane. Another difficulty is that given the number of electrodes and the measurement precision at each electrode, only objects bigger than a given size can be distinguished. This explains the necessity of highly sophisticated mathematical algorithms that will address the inverse problem and its ill-posedness.
Further difficulties in absolute EIT arise from inter- and intra-individual differences of electrode conductivity with associated image distortion and artifacts. It is also important to bear in mind that the body part of interest is rarely precisely circular in cross-section and that inter-individual anatomy varies, e.g. thorax shape, affecting individual electrode spacing. A priori data accounting for age-, height- and gender-typical anatomy can reduce sensitivity to artifacts and image distortion. Improving the signal-to-noise ratio, e.g. by using active surface electrodes, further reduces imaging errors. Some of the latest EIT systems with active electrodes monitor electrode performance through an extra channel and are able to compensate for insufficient skin contact by removing the affected electrodes from the measurements. Another potential solution to the problem of electrode–skin contact is the contactless EIT technique, which uses voltage excitation and capacitive coupling instead of direct contact with the skin. Capacitively coupled electrodes are more comfortable for the patient, but maintaining a constant and equal coupling capacitance for all electrodes is challenging in real measurements.
Time difference EIT bypasses most of these issues by recording measurements in the same individual between two or more physiological states associated with linear conductivity changes. One of the best examples for this approach is lung tissue during breathing due to linear conductivity changes between inspiration and expiration which are caused by varying contents of insulating air during each breath cycle. This permits digital subtraction of recorded measurements obtained during the breath cycle and results in functional images of lung ventilation. One major advantage is that relative changes of conductivity remain comparable between measurements even if one of the recording electrodes is less conductive than the others, thereby reducing most artifacts and image distortions. However, incorporating a priori data sets or meshes in difference EIT is still useful in order to project images onto the most likely organ morphology, which depends on weight, height, gender, and other individual factors.
The open source project EIDORS provides a suite of programs (written in Matlab / GNU Octave) for data reconstruction and display under the GNU GPL license. The direct nonlinear D-bar method for nonlinear EIT reconstruction is also available as Matlab code.
The Open Innovation EIT Research Initiative is aimed at advancing the development of electrical impedance tomography (EIT) in general and at ultimately accelerating its clinical adoption. A plug-and-play EIT hardware and software package was available through Swisstom until 2018.
Properties
In contrast to most other tomographic imaging techniques, EIT does not apply any kind of ionizing radiation. Currents typically applied in EIT are relatively small and certainly below the threshold at which they would cause significant nerve stimulation. The frequency of the alternating current is sufficiently high not to give rise to electrolytic effects in the body and the Ohmic power dissipated is sufficiently small and diffused over the body to be easily handled by the body's thermoregulatory system. These properties qualify EIT to be continuously applied in humans, e.g. during mechanical ventilation in an intensive care unit (ICU).
Because the equipment needed in order to perform EIT is much smaller and less costly than in conventional tomography, EIT qualifies for continuous real time visualization of lung ventilation right at the bedside.
EIT's major disadvantage versus conventional tomography is its lower maximum spatial resolution (approximately 15% of electrode array diameter in EIT compared to 1 mm in CT and MRI). However, resolution can be improved using 32 instead of 16 electrodes. Image quality can be further improved by constructing an EIT system with active surface electrodes, which significantly reduce signal loss, artifacts, and interferences associated with cables as well as cable length and handling.
In contrast to spatial resolution, temporal resolution of EIT (0.1 milliseconds) is much higher than in CT or MRI (0.1 seconds).
Applications
Lung (a-EIT, td-EIT)
EIT is particularly useful for monitoring lung function because lung tissue resistivity is five times higher than that of most other soft tissues in the thorax. This results in high absolute contrast of the lungs. In addition, lung resistivity increases and decreases several-fold between inspiration and expiration, which explains why monitoring ventilation is currently the most promising clinical application of EIT, since mechanical ventilation frequently results in ventilator-associated lung injury (VALI). The feasibility of EIT for lung imaging was first demonstrated at Rensselaer Polytechnic Institute in 1990 using the NOSER algorithm. Time difference EIT can resolve the changes in the distribution of lung volumes between dependent and non-dependent lung regions and assist in adjusting ventilator settings to provide lung protective ventilation to patients during critical illness or anesthesia.
Most EIT studies have focused on monitoring regional lung function using the information determined by time difference EIT (td-EIT). However absolute EIT (a-EIT) also has the potential to become a clinically useful tool for lung imaging, as this approach would allow one to directly distinguish between lung conditions which result from regions with lower resistivity (e.g. hemothorax, pleural effusion, atelectasis, lung edema) and those with higher resistivity (e.g. pneumothorax, emphysema).
(Figure: EIT study of a 10-day-old baby breathing normally, with 16 adhesive electrodes applied to the chest.)
Image reconstruction from absolute impedance measurements requires consideration of the exact dimensions and shape of a body as well as the precise electrode location since simplified assumptions would lead to major reconstruction artifacts. While initial studies assessing aspects of absolute EIT have been published, this area of research has not yet reached the level of maturity which would make it suitable for clinical use.
In contrast, time difference EIT determines relative impedance changes that may be caused by either ventilation or changes of end-expiratory lung volume. These relative changes are referred to a baseline level, which is typically defined by the intra-thoracic impedance distribution at the end of expiration.
Time difference EIT images can be generated continuously and right at the bedside. These attributes make regional lung function monitoring particularly useful whenever there is a need to improve oxygenation or CO2 elimination and when therapy changes are intended to achieve a more homogenous gas distribution in mechanically ventilated patients. EIT lung imaging can resolve the changes in the regional distribution of lung volumes between e.g. dependent and non-dependent lung regions as ventilator parameters are changed. Thus, EIT measurements may be used to guide specific ventilator settings to maintain lung protective ventilation for each patient.
Besides the applicability of EIT in the ICU, first studies with spontaneously breathing patients reveal further promising applications. The high temporal resolution of EIT allows regional assessment of common dynamic parameters used in pulmonary function testing (e.g. forced expiratory volume in 1 second). Additionally, specially developed image fusion methods overlaying functional EIT-data with morphological patient data (e.g. CT or MRI images) may be used to get a comprehensive insight into the pathophysiology of the lungs, which might be useful for patients with obstructive lung diseases (e.g. COPD, CF).
After many years of lung EIT research with provisional EIT equipment or series models manufactured in very small numbers, three commercial systems for lung EIT have entered the medical technology market: Timpel Medical's ENLIGHT 2100, Dräger's PulmoVista® 500 and Sentec's LuMon EIT. These models are currently being installed in intensive care units and are already used as aids in decision-making processes related to the treatment of patients with acute respiratory distress syndrome (ARDS).
The increasing availability of commercial EIT systems in ICUs will show whether the promising body of evidence obtained from animal models applies to humans as well (EIT-guided lung recruitment, selection of optimum PEEP levels, pneumothorax detection, prevention of ventilator-associated lung injury (VALI), etc.). This would be highly desirable, given that recent studies suggest that 15% of mechanically ventilated patients in the ICU will develop acute lung injury (ALI) with attendant progressive lung collapse, which is associated with a reportedly high mortality of 39%. Recently, the first prospective animal trial on EIT-guided mechanical ventilation and outcome demonstrated significant benefits in regard to respiratory mechanics, gas exchange, and histological signs of ventilator-associated lung injury.
In addition to visual information (e.g. regional distribution of tidal volume), EIT measurements provide raw data sets that can be used to calculate other helpful information (e.g. changes of intrathoracic gas volume during critical illness) – however, such parameters still require careful evaluation and validation.
Another interesting aspect of thoracic EIT is its ability to record and filter pulsatile signals of perfusion. Although promising studies have been published on this topic, this technology is still in its infancy. A breakthrough would allow simultaneous visualization of both regional blood flow and regional ventilation – enabling clinicians to locate and respond to physiological shunts caused by regional mismatches between lung ventilation and perfusion, with associated hypoxemia.
Breast (MF-EIT)
EIT is being investigated in the field of breast imaging as an alternative/complementary technique to mammography and magnetic resonance imaging (MRI) for breast cancer detection. The low specificity of mammography and of MRI results in a relatively high rate of false-positive screenings, with high distress for patients and cost for healthcare systems. Development of alternative imaging techniques for this indication would be desirable due to the shortcomings of the existing methods: ionizing radiation in mammography, and the risk of inducing nephrogenic systemic fibrosis (NSF) in patients with decreased renal function by administering gadolinium, the contrast agent used in breast MRI.
Literature shows that the electrical properties differ between normal and malignant breast tissues, setting the stage for cancer detection through determination of electrical properties.
An early commercial development of non-tomographic electrical impedance imaging was the T-Scan device which was reported to improve sensitivity and specificity when used as an adjunct to screening mammography. A report to the United States Food and Drug Administration (FDA) describes a study involving 504 subjects where the sensitivity of mammography was 82%, 62% for the T-Scan alone, and 88% for the two combined. The specificity was 39% for mammography, 47% for the T-Scan alone, and 51% for the two combined.
Several research groups across the world are actively developing the technique. A frequency sweep seems to be an effective technique for detecting breast cancer using EIT.
United States Patent US 8,200,309 B2 combines electrical impedance scanning with magnetic resonance low frequency current density imaging in a clinically acceptable configuration not requiring the use of gadolinium chelate enhancement in magnetic resonance mammography.
Cervix (MF-EIT)
In addition to his pioneering role in the development of the first EIT systems in Sheffield, Professor Brian H. Brown is currently active in the research and development of an electrical impedance spectroscope based on MF-EIT. According to a study published by Brown in 2000, MF-EIT is able to predict cervical intraepithelial neoplasia (CIN) grades 2 and 3 according to Pap smear with a sensitivity and specificity of 92% each. Whether cervical MF-EIT is going to be introduced as an adjunct or an alternative to the Pap smear has yet to be decided. Brown is academic founder of Zilico Limited, which distributes the spectroscope (ZedScan I). The device received EC certification from its Notified Body in 2013 and is currently being introduced into a number of clinics in the UK and healthcare systems across the globe.
Brain (a-EIT, td-EIT, mf-EIT)
EIT has been suggested as a basis for brain imaging to enable detection and monitoring of cerebral ischemia, haemorrhage, and other morphological pathologies associated with impedance changes due to neuronal cell swelling, i.e. cerebral hypoxemia and hypoglycemia.
While EIT's maximum spatial resolution of approximately 15% of the electrode array diameter is significantly lower than that of cerebral CT or MRI (about one millimeter), its temporal resolution is much higher (0.1 milliseconds compared to 0.1 seconds). This also makes EIT interesting for monitoring normal brain function and neuronal activity in intensive care units, or in the preoperative setting for localization of epileptic foci by telemetric recordings.
Holder was able to demonstrate in 1992 that changes of intracerebral impedance can be detected noninvasively through the cranium by surface electrode measurements. Animal models of experimental stroke or seizure showed increases of impedance of up to 100% and 10%, respectively.
More recent EIT systems offer the option to apply alternating currents from non-adjacent drive electrodes. So far, cerebral EIT has not yet reached the maturity to be adopted in clinical routine, yet clinical studies are currently being performed on stroke and epilepsy.
In this application EIT relies on applying low-frequency currents across the skull, below about 100 Hz, because during neuronal rest currents at these frequencies remain in the extracellular space and are therefore unable to enter the intracellular space within neurons. However, when a neuron generates an action potential or is about to be depolarized, the resistance of its membrane, which prevents this, falls roughly eighty-fold. Whenever this happens in a large number of neurons, resistivity changes of about 0.06–1.7% result. These changes in resistivity provide a means of detecting coherent neuronal activity across large numbers of neurons, and thus allow tomographic imaging of neural brain activity.
Unfortunately while such changes are detectable "they are just too small to support reliable production of images." The prospects of using this technique for this indication will depend upon improved signal processing or recording.
A study reported in June 2011 that Functional Electrical Impedance Tomography by Evoke Response (fEITER) has been used to image changes in brain activity after injection of an anaesthetic. One of the benefits of the technique is that the equipment required is small enough and easy enough to transport so that it can be used for monitoring depth of anesthesia in operating theatres.
Perfusion (td-EIT)
Due to its relatively high conductivity, blood may be used for functional imaging of perfusion in tissues and organs characterized by lower conductivities, e.g. to visualize regional lung perfusion. The background of this approach is that pulsatile tissue impedance changes according to differences in the filling of blood vessels between systole and diastole, particularly when saline is injected as a contrast agent.
Sports medicine / home care (a-EIT, td-EIT)
Electrical impedance measurements may also be used to calculate abstract parameters, i.e. nonvisual information. Recent advances in EIT technology, together with the smaller number of electrodes required for recording global instead of regional parameters in healthy individuals, can be used for non-invasive determination of e.g. VO2 or arterial blood pressure in sports medicine or home care.
Commercial systems
a-EIT and td-EIT
Even though medical EIT systems had not been used broadly until recently, several medical equipment manufacturers have been supplying commercial versions of lung imaging systems developed by university research groups. The first such system is produced by Maltron International, who distribute the Sheffield Mark 3.5 system with 16 electrodes. Similar systems are the Goe MF II system developed by the University of Göttingen, Germany and distributed through CareFusion (16 electrodes), as well as the Enlight 1800 developed at the University of São Paulo School of Medicine and the Polytechnic Institute of the University of São Paulo, Brazil, which is distributed by Timpel SA (Adult Belt Reusable - 32 electrodes; Pediatric Belt Reusable - 24 electrodes; Neonatal Belt Disposable - 16 electrodes). Timpel Medical has since released its second-generation ENLIGHT 2100, currently the only FDA-cleared electrical impedance tomography device commercially available in the United States.
These systems typically comply with medical safety legislation and have been primarily employed by clinical research groups in hospitals, most of them in critical care.
The first EIT device for lung function monitoring designed for everyday clinical use in the critical care environment has been made available by Dräger Medical in 2011 – the PulmoVista® 500 (16-electrode system). Another commercial EIT system designed for monitoring lung function in the ICU setting is based on 32 active electrodes and was first presented at 2013's annual ESICM congress – the LuMon EIT. Sentec's LuMon EIT was released to the market at 2014's International Symposium on Intensive Care and Emergency Medicine (ISICEM).
Timpel Medical
New strategies in artificial ventilation were developed through a research project led by Marcelo Amato MD, PhD, a University of São Paulo pulmonologist, between 2002 and 2008. These new ventilation strategies created a need for tools allowing real-time visualization of ventilation and the individualization of treatment at the bedside. With this objective in mind, Timpel was created in 2004. In the same year, Dr Amato and his team published the article "Imbalances in Regional Lung Ventilation: A Validation Study on Electrical Impedance Tomography" in the American Journal of Respiratory and Critical Care Medicine, the journal of the American Thoracic Society (the "Blue Journal").
Amato's research team published more than 30 articles about EIT from 2004 to 2023, and this research has contributed to many of the EIT tools available today. Reflecting the broad interest in EIT and the value the technology brings to the bedside, researchers around the world had contributed more than 250 peer-reviewed publications by 2022.
Timpel's name is derived from the technology (electrical impedance tomography) written in reverse: El – electrical; Imp – impedance; T – tomography.
Timpel's stated aim is to make EIT a valuable adjunctive tool for lung-protective strategies, contributing to next-generation methods of treating critically ill patients at the bedside.
With Timpel's ENLIGHT electrical impedance tomography device, each patient's care can be individualized based on their lung disease. ENLIGHT gives clinicians visibility of the ventilation profile in real time at the bedside, without the added risk of transporting the patient.
MF-EIT
Multifrequency-EIT (MF-EIT) or electrical impedance spectroscopy (EIS) systems are typically designed to detect or locate abnormal tissue, e.g. precancerous lesions or cancer.
Impedance Medical Technologies manufacture systems based on designs by the Research Institute of Radioengineering and Electronics of the Russian Academy of Science in Moscow, that are aimed especially at breast cancer detection.
Texas-based Mirabel Medical Systems, Inc. develops a similar solution for non-invasive detection of breast cancer and offers the T-Scan 2000ED. Zilico Limited distributes an electrical impedance spectroscope named ZedScan I as a medical device intended to aid the location/diagnosis of cervical intraepithelial neoplasia. The device received EC certification in 2013.
V5R
The v5r is a high performance device, based upon a voltage-voltage measurement technique, designed to improve process control. The high frame rate of the v5r (over 650 frames per second) means that it can be used to monitor rapidly evolving processes or dynamic flow conditions. The data it provides can be used to determine the flow profile of complex multiphase processes; allowing engineers to discriminate between laminar flow, plug flow and other important flow conditions for deeper understanding and improved process control.
When used for concentration measurements, the ability to measure full impedance across a wide range of phase ratios means the v5r is able to deliver considerable accuracy across a wider conductivity range compared to other devices.
See also
Electrical capacitance tomography
Three-dimensional electrical capacitance tomography
Respiratory monitoring
EIDORS a reconstruction toolbox for EIT
Industrial Tomography Systems
References
Electrodiagnosis
Impedance measurements
Inverse problems
Medical imaging | Electrical impedance tomography | [
"Physics",
"Mathematics"
] | 6,072 | [
"Physical quantities",
"Applied mathematics",
"Inverse problems",
"Impedance measurements",
"Electrical resistance and conductance"
] |
905,140 | https://en.wikipedia.org/wiki/Riabouchinsky%20solid | In fluid mechanics a Riabouchinsky solid is a technique used for approximating boundary layer separation from a bluff body using potential flow. It is named after Dimitri Pavlovitch Riabouchinsky.
Riabouchinsky solids are typically used for analysing the behaviour of bodies moving through otherwise quiescent fluid (examples would include moving cars or buildings in a wind field).
Typically the streamline that touches the edge of the body is modelled as having no transverse pressure gradient and thus may be styled as a free surface after separation.
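One way to state this assumption explicitly (in standard free-streamline notation, which the article itself does not introduce): the pressure on the separated streamline is taken equal to the constant wake (cavity) pressure $p_c$, so Bernoulli's equation for steady potential flow forces the speed there to be constant as well,
$$p_c + \tfrac{1}{2}\rho\,q^2 = p_\infty + \tfrac{1}{2}\rho\,U_\infty^2 \quad\Longrightarrow\quad q = U_\infty\,\sqrt{1+\frac{p_\infty - p_c}{\tfrac{1}{2}\rho\,U_\infty^2}} = \text{constant},$$
where $q$ is the flow speed on the free streamline, $U_\infty$ and $p_\infty$ are the free-stream speed and pressure, and $\rho$ is the fluid density. In the Riabouchinsky construction the wake region is closed downstream by an image of the body, which keeps the cavity finite.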
The use of Riabouchinsky solids renders d'Alembert's paradox void; the technique typically gives reasonable estimates for the drag offered by bluff bodies moving through inviscid fluids.
References
Fluid dynamics
Russian inventions | Riabouchinsky solid | [
"Chemistry",
"Engineering"
] | 162 | [
"Piping",
"Chemical engineering",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
905,278 | https://en.wikipedia.org/wiki/Nanotechnology%20education | Nanotechnology education involves a multidisciplinary natural science education with courses such as physics, chemistry, mathematics, and molecular biology. It is being offered by many universities around the world. The first program involving nanotechnology was offered by the University of Toronto's Engineering Science program, where nanotechnology could be taken as an option.
Here is a partial list of universities offering nanotechnology education, and the degrees offered (Bachelor of Science, Master of Science, or PhD in Nanotechnology).
Africa
Egypt
Nile University - master's
The American University in Cairo - master's
Zewail City of Science and Technology - B.Sc
Cairo University - Faculty of Engineering - Masters of Science
Asia
Hong Kong
Hong Kong University of Science and Technology - MPhil, PhD
India
VIT University, Vellore, Center for Nanotechnology Research, M.Tech. in Nanotechnology
Srinivas Institute of Technology, Mangaluru, Karnataka [Affiliated to VTU, Belagavi, Approved by AICTE] - B.E. Nano Technology
Desh Bhagat School of Engineering, Mandi Gobindgarh, Punjab Desh Bhagat University - B.Tech & M.Tech with Nanotechnology
Tezpur Central University, Napam, Tezpur (M.Sc & Ph.D in nanoscience and technology)
Indian Institute of Science, Bangalore
IITs - B.Tech & M.Tech with Nanotechnology
Delhi Technological University (formerly DCE), Delhi - M.Tech
NITs
Central University of Jharkhand - Integrated M.Tech Nanotechnology
Jamia Millia Islamia (a central university by an act of parliament), New Delhi - M.Tech (Nanotechnology) & Ph.D
Sikkim Manipal Institute of Technology - M.Tech (Material Science and Nanotechnology, 2-year regular programme), Ph.D, Equipments: AFM, STM, HWCVD, TCVD
Amity University, Noida, Uttar Pradesh B.Tech, M.Sc, M.Sc + M.Tech, M.Tech
Jawaharlal Nehru Technological University, Kukatpally, Hyderabad, Telangana - M.Sc (Nanoscience & Technology) M.Tech (Nanotechnology) & Ph.D (Nano Science & Technology)
Sam Higginbottom Institute of Agriculture, Technology and Sciences - Postgraduate diploma in nanotechnology.
Jawaharlal Nehru Technological University College of Engineering Sultanpur, B.Tech in (Mechanical and NanoTechnology) Engineering.
University of Madras, National Centre for Nanoscience and Nanotechnology, Chennai - M.Tech, M.Sc and PhD in Nanoscience and Nanotechnology
University of Petroleum and Energy Studies, DEHRADUN-Uttarakhand, B.Tech - Material Science specialization in Nanotechnology
Nanobeach, Delhi - Advanced Nanotechnology Programs
SASTRA University, Thanjavur Tamil Nadu -M.Tech integrated in medical nanotechnology
Nano Science and Technology Consortium, Delhi - Nanotechnology Programs
Vels University, Chennai - M.Sc Nanoscience
Bhagwant University, Ajmer, Rajasthan - B.Tech - Nanotechnology Engineering
Maharaja Sayajirao University of Baroda, M.Sc Materials Science (Nanotechnology)
National Institute of Technology Calicut - M.Tech and Ph.D
SRM Institute of Science and Technology, Kattankulathur - B.Tech, M.S. (coursework, research), M.Tech and Ph.D
Noorul Islam College of Engineering, Kumarakovil - M.Tech Nanotechnology
Karunya University, Coimbatore - master's and Ph.D
Anna University Chennai
Andhra University, Visakhapatnam - M.Sc., M.Tech
Sri Venkateswara University, Tirupathi - M.Sc, M.Tech
Bharathiar University, Coimbatore - M.Sc Nanoscience and Technology (Based on Physics or Chemistry or Biotechnology), M.Phil and Ph.D
Osmania University, Hyderabad - M.Sc., M.Tech
Anna University Tiruchirappalli, Tamil Nadu - M.Tech (Nanoscience and Technology)
Centre For Converging Technologies, University of Rajasthan, Jaipur - M.Tech (Nanotechnology And Nanomaterials)
KSR College of Technology, Tiruchengode - M.Tech NanoScience and Technology
Mepco-Schlenk Engineering College, Sivakasi - M.Tech Nanoscience and Technology
Sarah Tucker college for Women (Affiliated with MS University, Tirunelveli) B.Sc Nanoscience
Karunya University, Coimbatore-114 - Integrated M.Sc Nanoscience & Nanotechnology and M.Tech with Nanotechnology
Acharya Nagarjuna University, Guntur - Integrated M.Sc Nanotechnology
Indian Institute of Nano Science & TechnologyBangalore
Sri Guru Granth Sahib World University Fatehgarh Sahib, Punjab
Anna University, Coimbatore
Mount Carmel College, Autonomous, Bangalore - M.Sc in Nanoscience and Technology (2-year course)
Important: AICTE New Delhi has added B.Tech & M.Tech Nanotechnology courses to the list of approved courses in the academic year 2011–2012
North Maharashtra University, Jalgaon - M.Tech in Nanoscience and Nanotechnology
Department of Environmental Microbiology, Babasaheb Bhimrao Ambedkar Central University, Lucknow - M.Sc in Nanoscience and Nanotechnology
Amity Institute of Nanotechnology, Amity University, Noida [bachelor's and master's in Nanotechnology]
School of Nanoscience and Technology, Shivaji University, Kolhapur-416004, Maharashtra State, India (B.Sc-M.Sc 5-year integrated Course)
Department of Nanotechnology offers two year M.Sc. course in Nanotechnology, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad-431004, Maharashtra State, India.
Iran
Iran university of science & technology - master's
Sharif University of Technology - master's, Ph.D
Tarbiat Modares University - master's, Ph.D
University of Tehran - master's, Ph.D
Amirkabir University of Technology - master's
University of Isfahan - master's
Shiraz University - master's
University of Sistan and Baluchestan - master's
University of Kurdistan - master's
Islamic Azad University of Marvdasht - master's
Israel
Bar Ilan University Institute of Nanotechnology & Advanced Materials (BINA)- with research centers for materials, medicine, energy, magnetism, cleantech and photonics. M.Sc, Ph.D, youth programs.
The Hebrew University of Jerusalem Center for Nanoscience and Nanotechnology - with units for nanocharecerization and nanofabrication. M.Sc, Ph.D
Technion Russell Berrie Nanotechnology Institute (RBNI)- Over 110 faculty members from 14 departments. M.Sc, Ph.D
Tel Aviv University Center for Nanoscience and Nanotechnology Interdisciplinary graduate program, marked by a large participation of students from the industry. M.Sc
The Weizmann Institute of Science - has a research group in the Department of Science Teaching that build programs for introduction high school teachers and students to Nanotechnology.
Japan
Tohoku University - bachelor's, master's, Ph.D
Nagoya University - bachelor's, master's, Ph.D
Kyushu University - master's, Ph.D
Keio University - master's
University of Tokyo - master's, Ph.D
Tokyo Institute of Technology - master's, Ph.D
Kyoto University - master's, Ph.D
Waseda University - Ph.D
Osaka University - master's, Ph.D
University of Tsukuba -master's, Ph.D
University of Electro-Communications - master's, Ph.D on Micro-Electronic
Kazakhstan
Al-Farabi Kazakh National University - master's, Ph.D
Malaysia
University Putra Malaysia - M.Sc and Ph.D programs in Nanomaterials and Nanotechnology UPM-Nano
Malaysia Multimedia University - bachelor's degree in electronic engineering majoring in Nanotechnology (Nano-Engineering)
Malaysia University of Science & Technology - B.Sc in Nanoscience & Nanoengineering with Business Management
Pakistan
University of the Punjab, Lahore, Centre of Excellence in Solid State Physics, M.S./Ph.D Program in Nanotechnology
Preston Institute Of Nanoscience And Technology (PINSAT), Islamabad, B.S. Nanoscience and Technology
University of Engineering & Technology, Lahore, Introductory Short Courses
Quaid-e-Azam University, Islamabad, master's degree research projects
Pakistan Institute of Engineering and Applied Sciences (PIEAS), Islamabad, National Centre for Nanotechnology
COMSATS Institute of Information Technology (CIIT), Islamabad, Center for Micro and Nano Devices
National University of Science & Technology (NUST), Islamabad, M.S. and Ph.D Nanoscience & Engineering
National Institute of Bio Genetic Engineering (NIGBE), Faisalabad, Research Projects
Ghulam Ishaq Khan Institute of Engineering & Technology (GIKI), TOPI, KPK, master's/Ph.D degree program
Baha-ud-din Zakaria University, Multan
Government College University (GCU), Lahore
University of Sind, Karachi
Peshawar University, Peshawar
International Islamic University - Bachelor & Master of Science in Nanotechnology
Singapore
National University of Singapore - B.Eng in Engineering Science with Nanoscience & Nanotechnology options, master's and PhD in Nanoscience and Nanotechnology Specialization
Sri Lanka
Sri Lanka Institute of Nanotechnology (SLINTEC) - Ph.D & M.Phil
Thailand
Chulalongkorn University - bachelor's degree in engineering (Nano-Engineering)
Mahidol University - Center of Nanoscience and Nanotechnology - Master Program
Kasetsart University - Center of Nanotechnology, Kasetsart University Research and Development Institute
Center of Excellence in Nanotechnology at AIT - Center of Excellence in Nanotechnology - master's and Ph.D programs
College of Nanotechnology at KMITL - bachelor's degree in engineering (Nanomaterials), M.Sc and Ph.D programa in Nanoscience and Nanotechnology
Turkey
UNAM-Ulusal Nanoteknoloji Araştırma Merkezi, Bilkent University - master's, Ph.D (Materials Science and Nanotechnology)
Hacettepe University - master's, Ph.D (Nanotechnology and Nanomedicine)
TOBB University of Economics and Technology - B.S. Materials Science and Nanotechnology Engineering, master's, Ph.D
Istanbul Technical University - master's, Ph.D (Nanoscience and Nanoengineering)
Middle East Technical University - master's, Ph.D
Anadolu University - master's
Atatürk University - master's, Ph.D (Nanoscience and Nanoengineering)
Europe
A list of the master's programs is kept by the UK-based Institute of Nanotechnology in their Nano, Enabling, and Advanced Technologies (NEAT) Post-graduate Course Directory.
Joint Programmes
Chalmers.se, Frontiers Joint Curriculum - master's
EMM-NANO.org, Erasmus Mundus Master Nanoscience and Nanotechnology - master's
Master-Nanotech.com, International Master in Nanotechnology - international master's
Belgium
Katholieke Universiteit Leuven - master's in Nanotechnology and Nanoscience
University of Antwerp - M.Sc in Nanophysics
Czech Republic
Technical University of Liberec - bachelor's, master's, Ph.D
Palacký University, Olomouc - bachelor's, master's
Technical University of Ostrava - bachelor's, master's
Technical University of Brno - bachelor's, master's
Cyprus
Near East University - bachelor's
Denmark
University of Aalborg - bachelor's, master's, Ph.D
University of Aarhus - bachelor's, master's, Ph.D
University of Copenhagen - bachelor's, master's, Ph.D
Technical University of Denmark - bachelor's, master's, Ph.D
University of Southern Denmark - bachelor's, master's, Ph.D
France
Université Lille Nord de France & École Centrale de Lille, CARNOT Institut d'électronique de microélectronique et de nanotechnologie (Lille) - master's in microelectronics, nanotechnologies and telecom, doctorate (Ph.D in microelectronics, nanotechnologies, acoustics and telecommunications)
University of Grenoble & Grenoble Institute of Technology, CARNOT CEA-Leti: Laboratoire d'électronique des technologies de l'information (LETI) • Minatec (Grenoble) - master's, doctorate
University of Bordeaux, CARNOT Materials and systems Institute of Bordeaux (MIB) (Bordeaux) - master's, doctorate
Université de Bourgogne, CARNOT Institut FEMTO-ST (Besançon) - Nanotechnologies et Nanobiosciences - master's, doctorate
École Nationale Supérieure des Mines de Saint-Étienne, Centre de micro-électronique de Provence (Gardanne) - master's, doctorate
Paris-Sud 11 University, Institut d'électronique fondamentale (Orsay) - master's, doctorate
Paris-Pierre and Marie Curie University, Institut des nano-sciences (Paris) - master's, doctorate
University of Toulouse, Institut de nano-technologies (Toulouse) - master's, doctorate
University of Technology of Troyes - Nanotechnology (and Optics) - master's, doctorate
University of Lyon & École Centrale de Lyon, Université Claude Bernard Lyon 1 - a two-year nanotechnology M.Sc program
Germany
Kaiserslautern University of Technology - master's, certificate short term courses (Distance Learning)
Bielefeld University - master's
Karlsruhe Institute of Technology, graduate degrees
University of Duisburg-Essen - bachelor's, master's
University of Erlangen–Nuremberg - bachelor's, master's
University of Hamburg - bachelor's, master's
University of Hannover - bachelor's
University of Kassel - bachelor's
Ludwig-Maximilians-Universität München - Ph.D
Munich University of Applied Sciences - master's
Saarland University - bachelor's
University of Ulm - master's
University of Würzburg - bachelor's
Greece
National Technical University of Athens - master's in Micro-systems and Nano-devices
Ireland
Trinity College, Dublin - bachelor's
Dublin Institute of Technology - bachelor's
Italy
IUSS Pavia - master's
Mediterranea University of Reggio Calabria - master's
Perugia University - master's
Polytechnic University of Turin - master's
Polytechnic University of Milan - bachelor's, master's
Sapienza University of Rome - master's
University of Padua - master's
University of Salento - master's
University of Trieste - Ph.D
University of Venice - bachelor's, master's
University of Verona - master's
Netherlands
Radboud University Nijmegen - master's, Ph.D
Leiden University - master's
Delft University of Technology - master's, Ph.D
University of Groningen - master's, Ph.D, including the Top Master Program in Nanoscience
University of Twente - master's
Norway
Vestfold University College - bachelor's, master's, Ph.D
Norwegian University of Science and Technology - master's
University of Bergen - bachelor's and master's
University of Oslo - bachelor's and master's
Poland
Gdańsk University of Technology - bachelor's, master's
Jagiellonian University - bachelor's, master's, Ph.D
University of Warsaw - bachelor's and master's in Nanostructure Engineering (http://nano.fuw.edu.pl/ - only in Polish)
Russia
Mendeleev Russian University of Chemistry and Technology - bachelor's
Moscow State University - bachelor's, master's
Moscow Institute of Physics and Technology (MIPT)
National Research University of Electronic Technology (MIET) - bachelor's, master's
Peoples' Friendship University of Russia (PFUR) - master's in engineering & technology: "Nanotechnology and Microsystem Technology"
National University of Science and Technology MISIS - bachelor's, master's
Samara State Aerospace University - bachelor's, master's
Tomsk State University of Control Systems and Radioelectronics (TUSUR)
Ural Federal University (UrFU) - bachelor's (master's) of Engineering & Technology: "Nanotechnology and Microsystem Technology", "Electronics and Nanoelectronics" (profiles: "Physical Electronics", "Functional Materials of micro-, opto-and nanoelectronics")
Spain
DFA.ua.es, Master en Nanociencia y Nanotecnologia Molecular - master's
Universitat Autònoma de Barcelona, bachelor's in nanoscience and nanotechnology
Universitat Autònoma de Barcelona, master's in nanotechnology
Rovira i Virgili at Tarragona, master's in nanoscience and nanotechnology
Sweden
KTH Royal Institute of Technology - master's
Linköping University - master's
Lund University - bachelor's, master's
Chalmers University of Technology - bachelor's, master's
Switzerland
Eidgenössische Technische Hochschule Zürich, Zurich - master's, Ph.D
University of Basel - bachelor's, master's, Ph.D
United Kingdom
Bangor University - master's
University of Birmingham - Ph.D
University of Cambridge - master's, Ph.D
Cranfield University - master's, Ph.D (Certificate/Degree Programs)
Heriot-Watt University - bachelor's, master's
Lancaster University - master's
Imperial College London - master's
University College London - master's
University of Leeds - bachelor's, master's
University of Liverpool - master's
University of Manchester- Ph.D
University of Nottingham - master's
University of Oxford - Postgraduate Certificate
University of Sheffield - master's, Ph.D
University of Surrey - master's
University of Sussex - bachelor's
University of Swansea- B.Eng, M.Eng, M.Sc, M.Res, M.Phil and Ph.D
University of Ulster - master's
University of York- bachelor's, master's
North America
Canada
University of Alberta - B.Sc in Engineering Physics with Nanoengineering option
University of Toronto - B.A.Sc in Engineering Science with Nanoengineering option
University of Waterloo - B.A.Sc in Nanotechnology Engineering
Waterloo Institute for Nanotechnology -B.Sc, B.A.Sc, master's, Ph.D, Post Doctorate
McMaster University - B.Sc in Engineering Physics with Nanotechnology option
University of British Columbia - B.A.Sc in Electrical Engineering with Nanotechnology & Microsystems option
Carleton University - B.Sc in Chemistry with Concentration in Nanotechnology
University of Calgary - B.Sc Minor in Nanoscience, B.Sc Concentration in Nanoscience
University of Guelph - B.Sc in Nanoscience
Northern Alberta Institute of Technology - Technical Diploma in Nanotechnology Systems
México
Universidad tecnológica gral. Mariano Escobedo (UTE) - bachelor's in Nanotechnology
Instituto Nacional de Astrofisica, Optica y Electronica (INAOE) - M.Sc and Ph.D
Centro de Investigación en Materiales Avanzados (CIMAV) - M.Sc and PhD in Nanotechnology
Instituto Potosino de Investigación Científica y Tecnológica (IPICyT) - M.Sc and PhD in Nanotechnology
Centro de Investigación en Química Aplicada (CIQA) - M.Sc and PhD in Nanotechnology
Centro de Investigación y de Estudios Avanzados (CINVESTAV) - Ph.D in Nanoscience and Nanotechnology
Universidad Politécnica del Valle de México (UPVM) - bachelor's in Nanotechnology Engineering
Universidad de las Américas Puebla (UDLAP) - bachelor's (Nanotechnology and Molecular Engineering). This undergraduate program was the first one in Mexico and Latin America, specializing professionals in the field; it started in August 2006. An account of its historical development has recently been published.
Instituto Tecnológico y de Estudios Superiores de Occidente (ITESO)- bachelor's
Instituto Tecnológico Superior de Poza Rica (ITSPR)- bachelor's
Universidad de La Ciénega de Michoacán de Ocampo (UCMO)- bachelor's in Nanotechnology Engineering
Instituto Tecnologico de Tijuana (ITT) - bachelor's (Nanotechnology Engineering)
Universidad Autónoma de Querétaro (UAQ) - bachelor's in Nanotechnology Engineering
Universidad Autónoma de Baja California (UABC) - bachelor's in Nanotechnology Engineering
Universidad Veracruzana (UV) - Master of Science in Micro and Nanosystems
Instituto Mexicano del Petróleo (IMP) - M.Sc & Ph.D in Materials and Nanostructures
Universidad Nacional Autónoma de México (UNAM) at Mexico City, University City (UNAM) - M.Sc & Ph.D in Materials approach to nanoscience and nanotechnology
Universidad Nacional Autónoma de México (UNAM) at Ensenada, Baja California (UNAM) - Bachelor in Nanotechnology Engineering
Other Universities and Institutes in Mexico
United States
Arizona State University - Professional Science master's (PSM) in Nanoscience
Boston University - Concentration in Nanotechnology, Minor in Nanotechnology Engineering.
Chippewa Valley Technical College - associate degree
Cinano.com, International Association of Nano and California Institute of Nano, (CNCP) Certified Nano and Clean technology Professional-Nanotechnology Experience for Engineers
College of Nanoscale Science and Engineering - B.S., M.S., Ph.D in Nanoscale Science, Nanoscale Engineering
Dakota County Technical College - associate degree
Danville Community College - A.A.S. in Nanotechnology
Forsyth Technical Community College - Associate of Science
George Mason University (Virginia) - Graduate certificate
Hudson Valley Community College - associate degree, Electrical Technology: Semiconductor Manufacturing Technology
Ivy Tech Community College of Indiana - Associate of Science in Nanotechnology.
Johns Hopkins University - M.S. in Materials Science and Engineering with Nanotechnology Option
Louisiana Tech University - B.S. Nanosystems Engineering, M.S. Molecular Sciences & Nanotechnology, Ph.D (Micro/Nanotechnology and Micro/Nanoelectronics Emphasis)
North Dakota State College of Science - associate degree
Northern Illinois University - Certificate in Nanotechnology
Oklahoma State University–Okmulgee Institute of Technology - Associate of Technology
Penn State University - Minor in Nanotechnology, M.S. Nanotechnology
Portland State University - undergraduate/graduate course in support of a Ph.D program in Applied Physics
Radiological Technologies University - M.S. in Nanomedicine and dual MS in Nanomedicine and Medical Physics
Rice University - Public Outreach, K to 12 Summer Programs, Undergraduate and Graduate Programs/Degrees, Integrated Physics & Chemistry - Nanotechnology Experience for Teachers Program, Research Experience for Undergraduates Program
Richland College - associate degree
Rochester Institute of Technology, B.S., M.S. Microelectronic Engineering, Ph.D Microsystems Engineering
Stevens Institute of Technology - Five departments in engineering and science offer Master of Science, Master of Engineering, and Doctor of Philosophy degrees with Nanotechnology concentration
University at Albany, The State University of New York - B.S. Nanoscale Science, B.S. Nanoscale Engineering, master's and Ph.D
University of Arkansas, Fayetteville - M.S./Ph.D; several departments in science and engineering conduct research in nanotechnology
University of California, San Diego - B.S. NanoEngineering, M.S. NanoEngineering, Ph.D NanoEngineering
University of Central Florida - B.S. Nanoscience and Nanotechnology track in Liberal Studies
University of Maryland, College Park - Minor in Nanoscale Science and Technology NanoCenter.umd.edu
University of Nevada, Reno - Minor in Nanotechnology
University of North Carolina at Charlotte - Ph.D
University of North Carolina at Greensboro and NC A&T State University Joint School of Nanoscience & Nanoengineering - M.S. and PhD in Nanoscience and Nanoengineering
University of Oklahoma Bachelor of Science in Engineering Physics - Nanotechnology
University of Pennsylvania - Master of Science in Engineering (M.S.E.), Undergraduate Minor, Graduate Certificate in Nanotechnology.
University of Pittsburgh - Bachelor in Engineering Science - Nanoengineering
University of Tulsa - B.S. with a specialization in nanotechnology
University of Utah - Nanomedicine and Nanobiosensors (nano.utah.edu)
University of Washington- Nanoscience and Molecular Engineering option under Materials Science and Engineering, Ph.D in Nanotechnology
University of Wisconsin - Platteville - Minor in Microsystems & Nanotechnology
University of Wisconsin - Stout - B.S. in Nanotechnology and Nanoscience
Virginia Commonwealth University - Ph.D in Nanoscience and Nanotechnology
Virginia Tech B.S. in Nanoscience
Wayne State University - Nanoengineering Certificate Program
Oceania
Australia
New South Wales
University of New South Wales - bachelor's, Ph.D
University of Sydney - Bachelor of Science majoring in Nanoscience and Technology
University of Technology, Sydney - bachelor's
University of Western Sydney - bachelor's
University of Wollongong - bachelor's
Queensland
University of Queensland - bachelor's
South Australia
Flinders University - bachelor's, master's
Victoria
La Trobe University, Melbourne - Ph.D, master's in Nanotechnology (Graduate Entry), master's/bachelor's (double degree), bachelor's (double degree)
RMIT University - bachelor's
The University of Melbourne - master's
St Helena Secondary College Melbourne - High School education
Western Australia
Curtin University - bachelor's
University of Western Australia - bachelor's
Murdoch University - bachelor's
New Zealand
Massey University, New Zealand - Bachelor of Science (Nanoscience)
Massey University, New Zealand - Bachelor of Engineering (Nanotechnology)
South America
Brazil
Universidade Federal do Rio Grande do Sul, UFRGS - bachelor's
Federal University of Rio de Janeiro - bachelor's, master's, Ph.D
Universidade Federal do ABC - master's, Ph.D
Centro Universitário Franciscano - UNIFRA - master's
Pontifícia Universidade Católica do Rio de Janeiro - bachelor's, master's, Ph.D
Nanotechnology in schools
In recent years, there has been growing interest in introducing nanoscience and nanotechnology in grade schools, especially at the high school level. In the United States, although very few high schools officially offer a two-semester course in nanotechnology, “nano” concepts are introduced and taught during traditional science classes using a number of educational resources and hands-on activities developed by dedicated non-profit organizations, such as:
The National Science Teacher Association, which has published a number of textbooks for nanotechnology in K-12 education, including a teacher's guide and an activity manual for hands-on experiences.
Nano-Link, a notable program of the Dakota County Technical College, which has developed a variety of nanotech-related hands-on activities supported by toolkits to teach concepts in nanotechnology through direct lab experience.
Omni Nano, which is developing comprehensive educational resources specifically designed to support a two-semester high school course, both online and in classrooms. Omni Nano also discusses issues in nanotechnology education on its dedicated blog.
Nano4Me, which offers a substantial collection of resources for K-12 education, although its program is intended for higher education. Its K-12 resources include introductory-level modules and activities, interactive multimedia, and a collection of experiments and hands-on activities.
Nanoscale Informal Science Education Network (NISE), which has a website of educational products designed to engage the public in nano science, engineering, and technology. NISE also organizes Nano Days, a nationwide festival of educational programs about nanoscale science and engineering and its potential impact on the future.
In Egypt, in2nano is a high school outreach program aiming to increase scientific literacy and prepare students for the sweeping changes of nanotechnology.
Nanotechnology education outside of school
Nanoscale Informal Science Education Network (NISE) has a website of educational products designed to engage the public in nano science, engineering, and technology. The NISE Network also organizes Nano Days, a nationwide festival of educational programs about nanoscale science and engineering and its potential impact on the future.
References
External links
www.nisenet.org
Nanotechnology
Science education
Lists of universities and colleges | Nanotechnology education | [
"Materials_science",
"Engineering"
] | 6,236 | [
"Nanotechnology",
"Materials science"
] |
905,803 | https://en.wikipedia.org/wiki/Transparent%20ceramics | Many ceramic materials, both glassy and crystalline, have found use as optically transparent materials in various forms, from bulk solid-state components to high-surface-area forms such as thin films, coatings, and fibers. Such devices have found widespread use in the electro-optical field, including: optical fibers for guided lightwave transmission, optical switches, laser amplifiers and lenses, hosts for solid-state lasers, optical window materials for gas lasers, and infrared (IR) heat-seeking devices for missile guidance systems and IR night vision. In commercial and popular usage, the terms transparent ceramic or ceramic glass often refer to varieties of strengthened glass, such as the cover glass used for smartphone screens (e.g., the iPhone).
While single-crystalline ceramics may be largely defect-free (particularly within the spatial scale of the incident light wave), optical transparency in polycrystalline materials is limited by the amount of light that is scattered by their microstructural features. The amount of light scattering therefore depends on the wavelength of the incident radiation, or light.
For example, since visible light has a wavelength scale on the order of hundreds of nanometers, scattering centers will have dimensions on a similar spatial scale. Most ceramic materials, such as alumina and its compounds, are formed from fine powders, yielding a fine-grained polycrystalline microstructure that is filled with scattering centers comparable in size to the wavelength of visible light. Thus, they are generally opaque rather than transparent. Recent nanoscale technology, however, has made possible the production of polycrystalline transparent ceramics such as alumina (Al2O3), yttrium aluminium garnet (YAG), and neodymium-doped YAG (Nd:YAG).
Introduction
Transparent ceramics have recently attracted a high degree of interest. Basic applications include lasers and cutting tools, transparent armor windows, night vision devices (NVD), and nose cones for heat-seeking missiles. Currently available infrared (IR) transparent materials typically exhibit a trade-off between optical performance and mechanical strength. For example, sapphire (crystalline alumina) is very strong, but lacks full transparency throughout the 3–5 micrometer mid-IR range. Yttria is fully transparent from 3–5 micrometers, but lacks sufficient strength, hardness, and thermal shock resistance for high-performance aerospace applications. Not surprisingly, a combination of these two materials in the form of yttrium aluminium garnet (YAG) has proven to be one of the top performers in the field.
In 1961, General Electric began selling transparent alumina Lucalox bulbs. In 1966, GE announced a ceramic "transparent as glass", called Yttralox. In 2004, Anatoly Rosenflanz and colleagues at 3M used a "flame-spray" technique to alloy aluminium oxide (alumina) with rare-earth metal oxides in order to produce high-strength glass-ceramics with good optical properties. The method avoids many of the problems encountered in conventional glass forming and may be extensible to other oxides. The goal of producing transparent polycrystalline ceramics has since been readily accomplished and amply demonstrated in laboratories and research facilities worldwide using the emerging chemical processing methods encompassed by sol–gel chemistry and nanotechnology.
Many ceramic materials, both glassy and crystalline, have found use as hosts for solid-state lasers and as optical window materials for gas lasers. The first working laser was made by Theodore H. Maiman in 1960 at Hughes Research Laboratories in Malibu, ahead of rival research teams led by Charles H. Townes at Columbia University, Arthur Schawlow at Bell Labs, and Gordon Gould at TRG (Technical Research Group). Maiman used a solid-state, light-pumped synthetic ruby to produce red laser light at a wavelength of 694 nanometers (nm). Synthetic ruby lasers are still in use. Both sapphires and rubies are corundum, a crystalline form of aluminium oxide (Al2O3).
Crystals
Ruby lasers consist of single-crystal sapphire alumina (Al2O3) rods doped with a small concentration of chromium Cr, typically in the range of 0.05%. The end faces are highly polished with a planar and parallel configuration. Neodymium-doped YAG (Nd:YAG) has proven to be one of the best solid-state laser materials. Its indisputable dominance in a broad variety of laser applications is determined by a combination of high emission cross section with long spontaneous emission lifetime, high damage threshold, mechanical strength, thermal conductivity, and low thermal beam distortion. The fact that the Czochralski crystal growth of Nd:YAG is a matured, highly reproducible and relatively simple technological procedure adds significantly to the value of the material.
Nd:YAG lasers are used in manufacturing for engraving, etching, or marking a variety of metals and plastics. They are extensively used in manufacturing for cutting and welding steel and various alloys. For automotive applications (cutting and welding steel) the power levels are typically 1–5 kW.
In addition, Nd:YAG lasers are used in ophthalmology to correct posterior capsular opacification, a condition that may occur after cataract surgery, and for peripheral iridotomy in patients with acute angle-closure glaucoma, where it has superseded surgical iridectomy. Frequency-doubled Nd:YAG lasers (wavelength 532 nm) are used for pan-retinal photocoagulation in patients with diabetic retinopathy. In oncology, Nd:YAG lasers can be used to remove skin cancers.
These lasers are also used extensively in the field of cosmetic medicine for laser hair removal and the treatment of minor vascular defects such as spider veins on the face and legs. They have recently been used to treat dissecting cellulitis, a rare skin disease usually occurring on the scalp. Using hysteroscopy in the field of gynecology, the Nd:YAG laser has been used for removal of uterine septa within the uterus.
In dentistry, Nd:YAG lasers are used for soft tissue surgeries in the oral cavity.
Glasses
Glasses (non-crystalline ceramics) are also widely used as host materials for lasers. Relative to crystalline lasers, they offer improved flexibility in size and shape and may be readily manufactured as large, homogeneous, isotropic solids with excellent optical properties. The indices of refraction of glass laser hosts may be varied between approximately 1.5 and 2.0, and both the temperature coefficient of n and the strain-optical coefficient may be tailored by altering the chemical composition. Glasses have lower thermal conductivities than alumina or YAG, however, which imposes limitations on their use in continuous and high repetition-rate applications.
The principal differences between the behavior of glass and crystalline ceramic laser host materials are associated with the greater variation in the local environment of lasing ions in amorphous solids. This leads to a broadening of the fluorescent levels in glasses. For example, the width of the Nd3+ emission in YAG is ~ 10 angstroms as compared to ~ 300 angstroms in typical oxide glasses. The broadened fluorescent lines in glasses make it more difficult to obtain continuous wave laser operation (CW), relative to the same lasing ions in crystalline solid laser hosts.
Several glasses are used in transparent armor, such as normal plate glass (soda-lime-silica), borosilicate glass, and fused silica. Plate glass has been the most common glass used due to its low cost, but greater requirements for optical properties and ballistic performance have necessitated the development of new materials. Chemical or thermal treatments can increase the strength of glasses, and the controlled crystallization of certain glass compositions can produce optical-quality glass-ceramics. Alstom Grid Ltd. currently produces a lithium disilicate based glass-ceramic known as TransArm for use in transparent armor systems. It has all the workability of an amorphous glass, but upon recrystallization it demonstrates properties similar to a crystalline ceramic. Vycor is a 96% fused silica glass, which is crystal clear, lightweight and high strength. One advantage of these types of materials is that they can be produced in large sheets and other curved shapes.
Nanomaterials
It has been shown fairly recently that laser elements (amplifiers, switches, ion hosts, etc.) made from fine-grained ceramic nanomaterials—produced by the low temperature sintering of high purity nanoparticles and powders—can be produced at a relatively low cost. These components are free of internal stress or intrinsic birefringence, and allow relatively large doping levels or optimized custom-designed doping profiles. This highlights the use of ceramic nanomaterials as being particularly important for high-energy laser elements and applications.
Primary scattering centers in polycrystalline nanomaterials—made from the sintering of high purity nanoparticles and powders—include microstructural defects such as residual porosity and grain boundaries (see Transparent materials). Thus, opacity partly results from the incoherent scattering of light at internal surfaces and interfaces. In addition to porosity, most of the interfaces or internal surfaces in ceramic nanomaterials are in the form of grain boundaries which separate nanoscale regions of crystalline order. Moreover, when the size of the scattering center (or grain boundary) is reduced well below the size of the wavelength of the light being scattered, the light scattering no longer occurs to any significant extent.
In the processing of high performance ceramic nanomaterials with superior opto-mechanical properties under adverse conditions, the size of the crystalline grains is determined largely by the size of the crystalline particles present in the raw material during the synthesis or formation of the object. Thus a reduction of the original particle size well below the wavelength of visible light (~ 0.5 μm or 500 nm) eliminates much of the light scattering, resulting in a translucent or even transparent material.
Furthermore, results indicate that microscopic pores in sintered ceramic nanomaterials, mainly trapped at the junctions of microcrystalline grains, cause light to scatter and prevent true transparency. It has been observed that the total volume fraction of these nanoscale pores (both intergranular and intragranular porosity) must be less than 1% for high-quality optical transmission, i.e. the density has to be 99.99% of the theoretical crystalline density.
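As a rough guide to how these porosity and grain-size requirements translate into numbers, the relations below may help (a simplified sketch: the first line restates the porosity–density bookkeeping, where a relative density of 99.99% corresponds to a pore fraction of about 0.01%, well below the 1% upper bound mentioned above; the second line is the classical Rayleigh scaling, which assumes scattering centers much smaller than the wavelength):

\[ \rho_{\mathrm{rel}} = 1 - f_{\mathrm{pore}}, \qquad \rho_{\mathrm{rel}} = 99.99\% \;\Leftrightarrow\; f_{\mathrm{pore}} \approx 0.01\% \]
\[ \sigma_{\mathrm{scat}} \;\propto\; \frac{d^{6}}{\lambda^{4}} \qquad (d \ll \lambda) \]

The steep d^6 dependence is why reducing pore and grain dimensions well below the wavelength of interest suppresses scattering so effectively.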
Lasers
Nd:YAG
For example, a 1.46 kW Nd:YAG laser has been demonstrated by Konoshima Chemical Co. in Japan. In addition, Livermore researchers realized that these fine-grained ceramic nanomaterials might greatly benefit high-powered lasers used in the National Ignition Facility (NIF) Programs Directorate. In particular, a Livermore research team began to acquire advanced transparent nanomaterials from Konoshima to determine if they could meet the optical requirements needed for Livermore's Solid-State Heat Capacity Laser (SSHCL). Livermore researchers have also been testing applications of these materials for applications such as advanced drivers for laser-driven fusion power plants.
Assisted by several workers from the NIF, the Livermore team has produced 15 mm diameter samples of transparent Nd:YAG from nanoscale particles and powders, and determined the most important parameters affecting their quality. In these objects, the team largely followed the Japanese production and processing methodologies, and used an in house furnace to vacuum sinter the nanopowders. All specimens were then sent out for hot isostatic pressing (HIP). Finally, the components were returned to Livermore for coating and testing, with results indicating exceptional optical quality and properties.
One Japanese/East Indian consortium has focused specifically on the spectroscopic and stimulated emission characteristics of Nd3+ in transparent YAG nanomaterials for laser applications. Their materials were synthesized using vacuum sintering techniques. The spectroscopic studies suggest overall improvement in absorption and emission and reduction in scattering loss. Scanning electron microscope and transmission electron microscope observations revealed an excellent optical quality with low pore volume and narrow grain boundary width. Fluorescence and Raman measurements reveal that the Nd3+ doped YAG nanomaterial is comparable in quality to its single-crystal counterpart in both its radiative and non-radiative properties. Individual Stark levels are obtained from the absorption and fluorescence spectra and are analyzed in order to identify the stimulated emission channels possible in the material. Laser performance studies favor the use of high dopant concentration in the design of an efficient microchip laser. With 4 at% dopant, the group obtained a slope efficiency of 40%. High-power laser experiments yield an optical-to-optical conversion efficiency of 30% for Nd (0.6 at%) YAG nanomaterial as compared to 34% for an Nd (0.6 at%) YAG single crystal. Optical gain measurements conducted in these materials also show values comparable to single crystal, supporting the contention that these materials could be suitable substitutes to single crystals in solid-state laser applications.
Yttria, Y2O3
The initial work in developing transparent yttrium oxide nanomaterials was carried out by General Electric in the 1960s.
In 1966, a transparent ceramic, Yttralox, was invented by Dr. Richard C. Anderson at the General Electric Research Laboratory, with further work at GE's Metallurgy and Ceramics Laboratory by Drs. Paul J. Jorgensen, Joseph H. Rosolowski, and Douglas St. Pierre. Yttralox is "transparent as glass", has a melting point twice as high, and transmits frequencies in the near infrared band as well as visible light.
Further development of yttrium ceramic nanomaterials was carried out by General Electric in the 1970s in Schenectady and Cleveland, motivated by lighting and ceramic laser applications. Yttralox, transparent yttrium oxide (Y2O3) containing ~10% thorium oxide (ThO2), was fabricated by Greskovich and Woods. The additive served to control grain growth during densification, so that porosity remained on grain boundaries rather than being trapped inside grains, where it would be quite difficult to eliminate during the final stages of sintering. Typically, as polycrystalline ceramics densify during heat treatment, grains grow in size while the remaining porosity decreases both in volume fraction and in size. Optically transparent ceramics must be virtually pore-free.
GE's transparent Yttralox was followed by GTE's lanthana-doped yttria with a similar level of additive. Both of these materials required extended firing times at temperatures above 2000 °C. La2O3-doped Y2O3 is of interest for infrared (IR) applications because it is one of the longest-wavelength-transmitting oxides. It is refractory, with a melting point of 2430 °C, and has a moderate coefficient of thermal expansion. Its thermal shock and erosion resistance are considered intermediate among the oxides, but outstanding compared to non-oxide IR-transmitting materials. A major consideration is the low emissivity of yttria, which limits background radiation upon heating. It is also known that the phonon edge gradually moves to shorter wavelengths as a material is heated.
In addition, yttria itself (Y2O3) has been clearly identified as a prospective solid-state laser material. In particular, ytterbium-doped yttria lasers allow efficient operation in both continuous-wave (cw) and pulsed regimes. At high excitation concentrations (on the order of 1%) and with poor cooling, quenching of the emission at the laser frequency and avalanche broadband emission take place.
Future
The Livermore team is also exploring new ways to chemically synthesize the initial nanopowders. Drawing on expertise developed in CMS over the past 5 years, the team is synthesizing nanopowders based on sol–gel processing and then sintering them to obtain the solid-state laser components. Another technique being tested utilizes a combustion process to generate the powders by burning an organic solid containing yttrium, aluminum, and neodymium. The resulting smoke, which consists of spherical nanoparticles, is then collected.
The Livermore team is also exploring new forming techniques (e.g. extrusion molding) which have the capacity to create more diverse, and possibly more complicated, shapes. These include shells and tubes for improved coupling to the pump light and for more efficient heat transfer. In addition, different materials can be co-extruded and then sintered into a monolithic transparent solid. An amplifier slab can be formed so that part of the structure acts in guided lightwave transmission in order to focus pump light from laser diodes into regions with a high concentration of dopant ions near the slab center.
In general, nanomaterials promise to greatly expand the availability of low-cost, high-end laser components in much larger sizes than would be possible with traditional single-crystalline ceramics. Many classes of laser designs could benefit from nanomaterial-based laser structures such as amplifiers with built-in edge claddings. Nanomaterials could also provide more robust and compact designs for high-peak-power, fusion-class lasers for stockpile stewardship, as well as high-average-power lasers for global theater ICBM missile defense systems (e.g. the Strategic Defense Initiative (SDI), or more recently the Missile Defense Agency).
Night vision
A night vision device (NVD) is an optical instrument that allows images to be produced in levels of light approaching total darkness. They are most often used by the military and law enforcement agencies, but are available to civilian users. Night vision devices were first used in World War II,
and came into wide use during the Vietnam War. The technology has evolved greatly since their introduction, leading to several "generations" of night vision equipment with increasing performance and decreasing price. The United States Air Force is experimenting with Panoramic Night Vision Goggles (PNVGs), which double the user's field of view to approximately 95 degrees by using four 16 mm image intensifier tubes rather than the more standard two 18 mm tubes.
Thermal images are visual displays of the amount of infrared (IR) energy emitted, transmitted, and reflected by an object. Because there are multiple sources of infrared energy, it is difficult to obtain an accurate temperature of an object using this method. A thermal imaging camera applies algorithms to interpret the data and build an image. Although the image gives the viewer an approximation of the temperature at which the object is operating, the camera uses multiple sources of data based on the areas surrounding the object to determine that value, rather than detecting the temperature directly.
Night vision infrared devices image in the near-infrared, just beyond the visual spectrum, and can see emitted or reflected near-infrared in complete visual darkness. All objects above the absolute zero temperature (0 K) emit infrared radiation. Hence, an excellent way to measure thermal variations is to use an infrared vision device, usually a focal plane array (FPA) infrared camera capable of detecting radiation in the mid (3 to 5 μm) and long (7 to 14 μm) wave infrared bands, denoted as MWIR and LWIR, corresponding to two of the high transmittance infrared windows. Abnormal temperature profiles at the surface of an object are an indication of a potential problem.
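Wien's displacement law gives a back-of-the-envelope picture of why the MWIR and LWIR windows are the bands of interest (the temperatures below are only illustrative assumptions, not values from the text):

\[ \lambda_{\max} = \frac{b}{T}, \qquad b \approx 2898\ \mu\mathrm{m\,K} \]
\[ T \approx 300\ \mathrm{K} \Rightarrow \lambda_{\max} \approx 9.7\ \mu\mathrm{m}\ \text{(LWIR)}, \qquad T \approx 700\ \mathrm{K} \Rightarrow \lambda_{\max} \approx 4.1\ \mu\mathrm{m}\ \text{(MWIR)} \]

Objects near ambient temperature therefore radiate most strongly in the long-wave window, while hot engine and exhaust components peak in the mid-wave window.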
Infrared thermography, thermal imaging, and thermal video, are examples of infrared imaging science. Thermal imaging cameras detect radiation in the infrared range of the electromagnetic spectrum (roughly 900–14,000 nanometers or 0.9–14 μm) and produce images of that radiation, called thermograms.
Since infrared radiation is emitted by all objects near room temperature, according to the black body radiation law, thermography makes it possible to see one's environment with or without visible illumination. The amount of radiation emitted by an object increases with temperature. Therefore, thermography allows one to see variations in temperature. When viewed through a thermal imaging camera, warm objects stand out well against cooler backgrounds; humans and other warm-blooded animals become easily visible against the environment, day or night. As a result, thermography is particularly useful to the military and to security services.
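The statement that emitted radiation increases with temperature is quantified by the Stefan–Boltzmann law (an idealized grey-body sketch; the 310 K and 290 K figures are only assumed example temperatures for a warm body against a cooler background):

\[ P = \varepsilon\,\sigma\,A\,T^{4}, \qquad \sigma \approx 5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}} \]
\[ \left(\frac{310\ \mathrm{K}}{290\ \mathrm{K}}\right)^{4} \approx 1.3 \]

A body only about 20 K warmer than its surroundings thus radiates roughly 30% more power per unit area, which is ample contrast for a thermal camera.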
Thermography
In thermographic imaging, infrared radiation with wavelengths between 8 and 13 micrometers strikes the detector material, heating it and thus changing its electrical resistance. This resistance change is measured and processed into temperatures, which can be used to create an image. Unlike other types of infrared detecting equipment, microbolometers utilizing a transparent ceramic detector do not require cooling. Thus, a microbolometer is essentially an uncooled thermal sensor.
The material used in the detector must demonstrate large changes in resistance as a result of minute changes in temperature. As the material is heated by the incoming infrared radiation, its resistance decreases. This is related to the material's temperature coefficient of resistance (TCR), specifically its negative temperature coefficient. Industry currently manufactures microbolometers that contain materials with TCRs near −2%.
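The quoted TCR of roughly −2% per kelvin relates a small temperature rise in the detector element to the fractional change in its resistance (a minimal sketch; the 5 mK temperature change is only an assumed example):

\[ \frac{\Delta R}{R} \approx \alpha\,\Delta T, \qquad \alpha \approx -0.02\ \mathrm{K^{-1}} \]
\[ \Delta T = 5\ \mathrm{mK} \;\Rightarrow\; \frac{\Delta R}{R} \approx -1\times10^{-4} = -0.01\% \]

The readout circuitry must therefore resolve resistance changes at roughly the 10^-4 level, which illustrates why a large-magnitude TCR is the key figure of merit for the detector material.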
VO2 and V2O5
The most commonly used ceramic material in IR radiation microbolometers is vanadium oxide. The various crystalline forms of vanadium oxide include both VO2 and V2O5. Deposition at high temperatures and post-annealing allow the production of thin films of these crystalline compounds with superior properties, which may be easily integrated into the fabrication process. VO2 has low resistance but undergoes a metal-insulator phase change near 67 °C and also has a lower TCR value. On the other hand, V2O5 exhibits high resistance and also high TCR.
Other IR transparent ceramic materials that have been investigated include doped forms of CuO, MnO and SiO.
Missiles
Many ceramic nanomaterials of interest for transparent armor solutions are also used for electromagnetic (EM) windows. These applications include radomes, IR domes, sensor protection, and multi-spectral windows. Optical properties of the materials used for these applications are critical, as the transmission window and related cut-offs (UV – IR) control the spectral bandwidth over which the window is operational. Not only must these materials possess abrasion resistance and strength properties common of most armor applications, but due to the extreme temperatures associated with the environment of military aircraft and missiles, they must also possess excellent thermal stability.
Thermal radiation is electromagnetic radiation emitted from the surface of an object which is due to the object's temperature. Infrared homing refers to a passive missile guidance system which uses the emission from a target of electromagnetic radiation in the infrared part of the spectrum to track it. Missiles that use infrared seeking are often referred to as "heat-seekers", since infrared is just below the visible spectrum of light in frequency and is radiated strongly by hot bodies. Many objects such as people, vehicle engines and aircraft generate and retain heat, and as such, are especially visible in the infrared wavelengths of light compared to objects in the background.
Sapphire
The current material of choice for high-speed infrared-guided missile domes is single-crystal sapphire. The optical transmission of sapphire does not extend to cover the entire mid-infrared range (3–5 μm), but starts to drop off at wavelengths greater than approximately 4.5 μm at room temperature. While the strength of sapphire is better than that of other available mid-range infrared dome materials at room temperature, it weakens above ~600 °C.
Limitations to larger area sapphires are often business related, in that larger induction furnaces and costly tooling dies are necessary in order to exceed current fabrication limits. However, as an industry, sapphire producers have remained competitive in the face of coating-hardened glass and new ceramic nanomaterials, and still managed to offer high performance and an expanded market.
Yttria, Y2O3
Alternative materials, such as yttrium oxide, offer better optical performance, but inferior mechanical durability. Future high-speed infrared-guided missiles will require new domes that are substantially more durable than those in use today, while still retaining maximum transparency across a wide wavelength range. A long-standing trade-off exists between optical bandpass and mechanical durability within the current collection of single-phase infrared transmitting materials, forcing missile designers to compromise on system performance. Optical nanocomposites may present the opportunity to engineer new materials that overcome this traditional compromise.
The first full scale missile domes of transparent yttria manufactured from nanoscale ceramic powders were developed in the 1980s under Navy funding. Raytheon perfected and characterized its undoped polycrystalline yttria, while lanthana-doped yttria was similarly developed by GTE Labs. The two versions had comparable IR transmittance, fracture toughness, and thermal expansion, while the undoped version exhibited twice the value of thermal conductivity.
Renewed interest in yttria windows and domes has prompted efforts to enhance mechanical properties by using nanoscale materials with submicrometer or nanosized grains. In one study, three vendors were selected to provide nanoscale powders for testing and evaluation, and they were compared to a conventional (5 μm) yttria powder previously used to prepare transparent yttria. While all of the nanopowders evaluated had impurity levels that were too high to allow processing to full transparency, two of them were processed to theoretical density and moderate transparency. Samples were sintered to a closed-pore state at temperatures as low as 1400 °C.
After the relatively short sintering period, the component is placed in a hot isostatic press (HIP) and processed for 3–10 hours at ~30 kpsi (~200 MPa) at a temperature similar to that of the initial sintering. The applied isostatic pressure provides additional driving force for densification by substantially increasing the atomic diffusion coefficients, which promotes additional viscous flow at or near grain boundaries and intergranular pores. Using this method, transparent yttria nanomaterials were produced at lower temperatures, shorter total firing times, and without extra additives which tend to reduce the thermal conductivity.
Recently, a newer method has been developed by Mouzon, which relies on the methods of glass-encapsulation, combined with vacuum sintering at 1600 °C followed by hot isostatic pressing (HIP) at 1500 °C of a highly agglomerated commercial powder. The use of evacuated glass capsules to perform HIP treatment allowed samples that showed open porosity after vacuum sintering to be sintered to transparency. The sintering response of the investigated powder was studied by careful microstructural observations using scanning electron microscopy and optical microscopy both in reflection and transmission. The key to this method is to keep porosity intergranular during pre-sintering, so that it can be removed subsequently by HIP treatment. It was found that agglomerates of closely packed particles are helpful to reach that purpose, since they densify fully and leave only intergranular porosity.
Composites
Prior to the work done at Raytheon, optical properties in nanocomposite ceramic materials had received little attention. Their studies clearly demonstrated near theoretical transmission in nanocomposite optical ceramics for the first time. The yttria/magnesia binary system is an ideal model system for nanocomposite formation. There is limited solid solubility in either one of the constituent phases, permitting a wide range of compositions to be investigated and compared to each other. According to the phase diagram, bi-phase mixtures are stable for all temperatures below ~ 2100 °C. In addition, neither yttria nor magnesia shows any absorption in the 3 – 5 μm mid-range IR portion of the EM spectrum.
In optical nanocomposites, two or more interpenetrating phases are mixed in a sub-micrometer grain sized, fully dense body. Infrared light scattering can be minimized (or even eliminated) in the material as long as the grain size of the individual phases is significantly smaller than infrared wavelengths. Experimental data suggests that limiting the grain size of the nanocomposite to approximately 1/15th of the wavelength of light is sufficient to limit scattering.
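Applied to the mid-infrared band discussed here, the ~1/15 rule of thumb translates directly into the grain sizes quoted in the next paragraph (simple arithmetic, no additional assumptions):

\[ d \lesssim \frac{\lambda}{15}: \qquad \lambda = 3\ \mu\mathrm{m} \Rightarrow d \lesssim 200\ \mathrm{nm}; \qquad \lambda = 5\ \mu\mathrm{m} \Rightarrow d \lesssim 330\ \mathrm{nm} \]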
Nanocomposites of yttria and magnesia have been produced with a grain size of approximately 200 nm. These materials have yielded good transmission in the 3–5 μm range and strengths higher than that for single-phase individual constituents. Enhancement of mechanical properties in nanocomposite ceramic materials has been extensively studied. Significant increases in strength (2–5 times), toughness (1–4 times), and creep resistance have been observed in systems including SiC/Al2O3, SiC/Si3N4, SiC/MgO, and Al2O3/ZrO2.
The strengthening mechanisms observed vary depending on the material system, and there does not appear to be any general consensus regarding strengthening mechanisms, even within a given system. In the SiC/Al2O3 system, for example, it is widely known and accepted that the addition of SiC particles to the Al2O3 matrix results in a change of failure mechanism from intergranular (between grains) to intragranular (within grains) fracture. The explanations for improved strength include:
A simple reduction of processing flaw concentration during nanocomposite fabrication.
Reduction of the critical flaw size in the material, resulting in increased strength as predicted by the Hall–Petch relation.
Crack deflection at nanophase particles due to residual thermal stresses introduced upon cooling from processing temperatures.
Microcracking along stress-induced dislocations in the matrix material.
Armor
There is an increasing need in the military sector for high-strength, robust materials which have the capability to transmit light around the visible (0.4–0.7 micrometers) and mid-infrared (1–5 micrometers) regions of the spectrum. These materials are needed for applications requiring transparent armor. Transparent armor is a material or system of materials designed to be optically transparent, yet protect from fragmentation or ballistic impacts. The primary requirement for a transparent armor system is to not only defeat the designated threat but also provide a multi-hit capability with minimized distortion of surrounding areas. Transparent armor windows must also be compatible with night vision equipment. New materials that are thinner, lightweight, and offer better ballistic performance are being sought.
Existing transparent armor systems typically have many layers, separated by polymer (e.g. polycarbonate) interlayers. The polymer interlayer is used to mitigate the stresses from thermal expansion mismatches, as well as to stop crack propagation from ceramic to polymer. The polycarbonate is also currently used in applications such as visors, face shields and laser protection goggles. The search for lighter materials has also led to investigations into other polymeric materials such as transparent nylons, polyurethane, and acrylics. The optical properties and durability of transparent plastics limit their use in armor applications. Investigations carried out in the 1970s had shown promise for the use of polyurethane as armor material, but the optical properties were not adequate for transparent armor applications.
Several glasses are utilized in transparent armor, such as normal plate glass (soda-lime-silica), borosilicate glasses, and fused silica. Plate glass has been the most common glass used due to its low cost, but greater requirements for the optical properties and ballistic performance have generated the need for new materials. Chemical or thermal treatments can increase the strength of glasses, and the controlled crystallization of certain glass systems can produce transparent glass-ceramics. Alstom Grid Research & Technology (Stafford, UK), produced a lithium disilicate based glass-ceramic known as TransArm, for use in transparent armor systems with continuous production yielding vehicle windscreen sized pieces (and larger). The inherent advantages of glasses and glass-ceramics include having lower cost than most other ceramic materials, the ability to be produced in curved shapes, and the ability to be formed into large sheets.
Transparent crystalline ceramics are used to defeat advanced threats. Three major transparent candidates currently exist: aluminum oxynitride (AlON), magnesium aluminate spinel (spinel), and single crystal aluminum oxide (sapphire).
Aluminium oxynitride spinel
Aluminium oxynitride spinel (Al23O27N5), abbreviated as AlON, is one of the leading candidates for transparent armor. It is produced by the Surmet Corporation under the trademark ALON. The incorporation of nitrogen into aluminium oxide stabilizes a crystalline spinel phase which, due to its cubic crystal structure, is optically isotropic and can be produced as a transparent ceramic nanomaterial. Thus, fine-grained polycrystalline nanomaterials can be produced and formed into complex geometries using conventional ceramic forming techniques such as hot isostatic pressing and slip casting.
The Surmet Corporation has acquired Raytheon's ALON business and is currently building a market for this technology in the areas of transparent armor, sensor windows, reconnaissance windows and IR optics such as lenses and domes, and as an alternative to quartz and sapphire in the semiconductor market. AlON-based transparent armor has been successfully tested against multi-hit threats, including .30 cal AP M2 and .50 cal AP M2 rounds. The high hardness of AlON provides a scratch resistance which exceeds even the most durable coatings for glass scanner windows, such as those used in supermarkets. Surmet has successfully produced a 15"x18" curved AlON window and is currently attempting to scale up the technology and reduce the cost. In addition, the U.S. Army and U.S. Air Force are both pursuing development of next-generation applications.
Spinel
Magnesium aluminate spinel (MgAl2O4) is a transparent ceramic with a cubic crystal structure and excellent optical transmission from 0.2 to 5.5 micrometers in its polycrystalline form. Optical-quality transparent spinel has been produced by sinter/HIP, hot pressing, and hot press/HIP operations, and it has been shown that the use of a hot isostatic press can improve its optical and physical properties.
Spinel offers some processing advantages over AlON, such as the fact that spinel powder is available from commercial manufacturers while AlON powders are proprietary to Raytheon. It is also capable of being processed at much lower temperatures than AlON and has been shown to possess superior optical properties within the infrared (IR) region. The improved optical characteristics make spinel attractive in sensor applications where effective communication is impacted by the protective missile dome's absorption characteristics.
Spinel shows promise for many applications, but is currently not available in bulk form from any manufacturer, although efforts to commercialize spinel are underway. The spinel products business is being pursued by two key U.S. manufacturers: "Technology Assessment and Transfer" and the "Surmet Corporation".
An extensive NRL review of the literature has indicated clearly that attempts to make high-quality spinel have failed to date because the densification dynamics of spinel are poorly understood. They have conducted extensive research into the dynamics involved during the densification of spinel. Their research has shown that LiF, although necessary, also has extremely adverse effects during the final stages of densification. Additionally, its distribution in the precursor spinel powders is of critical importance.
Traditional bulk mixing processes used to mix the LiF sintering aid into a powder leave a fairly inhomogeneous distribution of LiF that must be homogenized by extended heat treatment at elevated temperatures. The homogenizing temperature for LiF/spinel occurs at the temperature of fast reaction between the LiF and the Al2O3. In order to avoid this detrimental reaction, they have developed a new process that uniformly coats the spinel particles with the sintering aid. This allows them to reduce the amount of LiF necessary for densification and to rapidly heat through the temperature of maximum reactivity. These developments have allowed NRL to fabricate MgAl2O4 spinel to high transparency with extremely high reproducibility, which should enable military as well as commercial use of spinel.
Sapphire
Single-crystal aluminum oxide (sapphire – Al2O3) is a transparent ceramic. Sapphire's crystal structure is rhombohedral and thus its properties are anisotropic, varying with crystallographic orientation. Transparent alumina is currently one of the most mature transparent ceramics from a production and application perspective, and is available from several manufacturers. But the cost is high due to the processing temperature involved, as well as machining costs to cut parts out of single crystal boules. It also has a very high mechanical strength – but that is dependent on the surface finish.
The high level of maturity of sapphire from a production and application standpoint can be attributed to two areas of business: electromagnetic spectrum windows for missiles and domes, and electronic/semiconductor industries and applications.
There are current programs to scale up sapphire grown by the heat exchanger method or edge-defined film-fed growth (EFG) processes. Its maturity stems from its use as windows and in the semiconductor industry. Crystal Systems Inc., which uses single-crystal growth techniques, is currently scaling up its sapphire boules to larger diameters. Another producer, the Saint-Gobain Group, produces transparent sapphire using an edge-defined growth technique. Sapphire grown by this technique is optically inferior to material grown via single-crystal techniques, but is much less expensive, and retains much of the hardness, transmission, and scratch-resistant characteristics. Saint-Gobain is currently capable of producing 0.43" thick (as-grown) sapphire in 12" × 18.5" sheets, as well as thick, single-curved sheets. The U.S. Army Research Laboratory is currently investigating use of this material in a laminate design for transparent armor systems. The Saint-Gobain Group has commercialized the capability to meet flight requirements on the F-35 Joint Strike Fighter and F-22 Raptor next-generation fighter aircraft.
Composites
Future high-speed infrared-guided missiles will require new dome materials that are substantially more durable than those in use today, while retaining maximum transparency across the entire operational spectrum or bandwidth. A long-standing compromise exists between optical bandpass and mechanical durability within the current group of single-phase (crystalline or glassy) IR transmitting ceramic materials, forcing missile designers to accept substandard overall system performance. Optical nanocomposites may provide the opportunity to engineer new materials that may overcome these traditional limitations.
For example, transparent ceramic armor consisting of a lightweight composite has been formed by utilizing a face plate of transparent alumina Al2O3 (or magnesia MgO) with a back-up plate of transparent plastic. The two plates (bonded together with a transparent adhesive) afford complete ballistic protection against 0.30 AP M2 projectiles at 0° obliquity with a muzzle velocity of per second.
Another transparent composite armor, consisting of two or more layers of transparent ceramic material, provided complete protection against small-arms projectiles up to and including caliber .50 AP M2.
Nanocomposites of yttria and magnesia have been produced with an average grain size of ~200 nm. These materials have exhibited near theoretical transmission in the 3 – 5 μm IR band. Additionally, such composites have yielded higher strengths than those observed for single phase solid-state components. Despite a lack of agreement regarding mechanism of failure, it is widely accepted that nanocomposite ceramic materials can and do offer improved mechanical properties over those of single phase materials or nanomaterials of uniform chemical composition.
Nanocomposite ceramic materials also offer interesting mechanical properties not achievable in other materials, such as superplastic flow and metal-like machinability. It is anticipated that further development will result in high strength, high transparency nanomaterials which are suitable for application as next generation armor.
See also
Ceramic engineering
Lumicera
Nanomaterials
Optical fiber
Transparent materials
References
Further reading
Ceramic Processing Before Firing, Onoda, G.Y., Jr. and Hench, L.L. Eds., (Wiley & Sons, New York, 1979)
External links
Laser Advances
How Night Vision Works
"apetz Light-Scattering Model
Fraunhofer Institute IKTS
Rosenflanz technique
Oxides
Transparent materials | Transparent ceramics | [
"Physics",
"Chemistry"
] | 8,365 | [
"Physical phenomena",
"Oxides",
"Salts",
"Optical phenomena",
"Materials",
"Transparent materials",
"Matter"
] |
906,193 | https://en.wikipedia.org/wiki/Native%20chemical%20ligation | Native Chemical Ligation (NCL) is an important extension of the chemical ligation concept for constructing a larger polypeptide chain by the covalent condensation of two or more unprotected peptide segments. Native chemical ligation is the most effective method for synthesizing native or modified proteins of typical size (i.e., proteins of < ~300 amino acids).
Reaction
In native chemical ligation, the ionized thiol group of an N-terminal cysteine residue of an unprotected peptide attacks the C-terminal thioester of a second unprotected peptide, in an aqueous buffer at pH 7.0 and room temperature. This transthioesterification step is reversible in the presence of an aryl thiol catalyst, rendering the reaction both chemoselective and regioselective, and leads to formation of a thioester-linked intermediate. The intermediate rapidly and spontaneously rearranges by an intramolecular S,N-acyl shift that results in the formation of a native amide ('peptide') bond at the ligation site (scheme 1).
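The overall transformation (scheme 1) can be summarized in simplified form as follows, where Pep1 and Pep2 denote the two peptide segments and R the thioester leaving group (a schematic sketch only; side chains other than the N-terminal cysteine are omitted):

\[ \mathrm{Pep_1\text{-}CO\text{-}SR} + \mathrm{HS\text{-}CH_2\text{-}CH(NH_2)\text{-}CO\text{-}Pep_2} \;\rightleftharpoons\; \mathrm{Pep_1\text{-}CO\text{-}S\text{-}CH_2\text{-}CH(NH_2)\text{-}CO\text{-}Pep_2} \quad \text{(transthioesterification, reversible)} \]
\[ \mathrm{Pep_1\text{-}CO\text{-}S\text{-}CH_2\text{-}CH(NH_2)\text{-}CO\text{-}Pep_2} \;\longrightarrow\; \mathrm{Pep_1\text{-}CO\text{-}NH\text{-}CH(CH_2SH)\text{-}CO\text{-}Pep_2} \quad (\text{intramolecular } S{\rightarrow}N \text{ acyl shift, irreversible}) \]

The product of the second step contains a native amide bond at the ligation site, with the cysteine side-chain thiol regenerated.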
Remarks:
Thiol additives:
The initial transthioesterification step of the native chemical ligation reaction is catalyzed by thiol additives. The most effective and commonly used thiol catalyst is 4-mercaptophenylacetic acid (MPAA).
Regioselectivity:
The key feature of native chemical ligation of unprotected peptides is the reversibility of the first step, the thiol(ate)–thioester exchange reaction. Native chemical ligation is exquisitely regioselective because this thiol(ate)–thioester exchange step is freely reversible in the presence of an added arylthiol catalyst. The high yields of final ligation product obtained, even in the presence of internal Cys residues in either or both segments, are the result of the irreversibility of the second (S-to-N acyl shift) amide-forming step under the reaction conditions used.
Chemoselectivity of NCL:
No side-products are formed from reaction with the other functional groups present in either peptide segment (e.g. Asp, Glu side chain carboxylic acids; Lys epsilon amino group; Tyr phenolic hydroxyl; Ser, Thr hydroxyls, etc.).
Historical context
In 1992, Stephen Kent and Martina Schnölzer at The Scripps Research Institute developed the "Chemical Ligation" concept, the first practical method to covalently condense unprotected peptide segments; the key feature of chemical ligation is formation of an unnatural bond at the ligation site. Just two years later in 1994, Philip Dawson, Tom Muir and Stephen Kent reported "Native Chemical Ligation", an extension of the chemical ligation concept to the formation of a native amide ('peptide') bond after initial nucleophilic condensation formed a thioester-linked condensation product designed to spontaneously rearrange to the native amide bond at the ligation site.
Theodor Wieland and coworkers had reported the S-to-N acyl shift as early as 1953, when the reaction of a valine thioester and the amino acid cysteine in aqueous buffer was shown to yield the dipeptide valine-cysteine. The reaction proceeded through the intermediacy of a thioester containing the sulfur of the cysteine residue. However, Wieland's work did not lead to the development of the native chemical ligation reaction. Rather, the study of amino acid thioester reactions led Wieland and others to develop the 'active ester' method for the synthesis of protected peptide segments by conventional chemical methods carried out in organic solvents.
Features
Native chemical ligation forms the basis of modern chemical protein synthesis, and has been used to prepare numerous proteins and enzymes by total chemical synthesis. The payoff of the native chemical ligation method is that coupling long peptides by this technique is typically near-quantitative, providing synthetic access to large peptides and proteins that are otherwise impossible to make because of their large size, their post-translational modifications, or the presence of non-coded amino acids or other chemical building blocks.
Native chemical ligation is inherently 'Green' in its atom economy and its use of benign solvents. It involves the reaction of an unprotected peptide thioester with a second, unprotected peptide that has an N-terminal cysteine residue. It is carried out in aqueous solution at neutral pH, usually in 6 M guanidine.hydrochloride, in the presence of an arylthiol catalyst and typically gives near-quantitative yields of the desired ligation product.
Peptide-thioesters can be directly prepared by Boc chemistry SPPS; however, thioester-containing peptides are not stable to treatment with a nucleophilic base, thus preventing direct synthesis of peptide thioesters by Fmoc chemistry SPPS. Fmoc chemistry solid phase peptide synthesis techniques for generating peptide-thioesters are based on the synthesis of peptide hydrazides that are converted to peptide thioesters post-synthetically.
Polypeptide C-terminal thioesters can also be produced in situ, using so-called N,S-acyl shift systems. Bis(2-sulfanylethyl)amido group, also called SEA group, belongs to this family. Polypeptide C-terminal bis(2-sulfanylethyl)amides (SEA peptide segments) react with Cys peptide to give a native peptide bond as in NCL. This reaction, which is called SEA Native Peptide Ligation, is a useful variant of native chemical ligation.
When making peptide segments that contain an N-terminal cysteine residue, exposure to ketones should be avoided, since these may cap the N-terminal cysteine. Protecting groups that release aldehydes or ketones should not be used. For the same reason, the use of acetone should be avoided, particularly in washing glassware used for lyophilization.
A feature of the native chemical ligation technique is that the product polypeptide chain contains cysteine at the site of ligation. The cysteine at the ligation site can be desulfurized to alanine, thus extending the range of possible ligation sites to include alanine residues. Other beta-thiol containing amino acids can be used for native chemical ligation, followed by desulfurization. Alternatively, thiol-containing ligation auxiliaries can be used that mimic an N-terminal cysteine for the ligation reaction, but which can be removed after synthesis. The use of thiol-containing auxiliaries may not be as effective as ligation at a Cys residue. Native chemical ligation can also be performed with an N-terminal selenocysteine residue.
Polypeptide C-terminal thioesters produced by recombinant DNA techniques can be reacted with an N-terminal Cys containing polypeptide by the same native ligation chemistry to provide very large semi-synthetic proteins. Native chemical ligation of this kind using a recombinant polypeptide segment is known as Expressed Protein Ligation. Similarly, a recombinant protein containing an N-terminal Cys can be reacted with a synthetic polypeptide thioester. Thus, native chemical ligation can be used to introduce chemically synthesized segments into recombinant proteins, regardless of size.
See also
Intein
KAHA Ligation
Peptide synthesis
Protein synthesis
SEA Native Peptide Ligation
References
Further reading
Peptides | Native chemical ligation | [
"Chemistry"
] | 1,618 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
6,736,828 | https://en.wikipedia.org/wiki/Diaphragm%20valve | Diaphragm valves (or membrane valves) consists of a valve body with two or more ports, a flexible diaphragm, and a "weir or saddle" or seat upon which the diaphragm closes the valve. The valve body may be constructed from plastic, metal or other materials depending on the intended use.
Categories
There are two main categories of diaphragm valves: one type seals over a "weir" (saddle) and the other (sometimes called a "full bore or straight-through" valve) seals over a seat. In general, straight-through diaphragm valves are used in on-off applications and weir-type diaphragm valves are used for control or throttling applications. While diaphragm valves usually come in two-port forms (2/2-way diaphragm valve), they can also come with three ports (3/2-way diaphragm valves also called T-valves) and more (so called block-valves). When more than three ports are included, they generally require more than one diaphragm seat; however, special dual actuators can handle more ports with one membrane.
Diaphragm valves can be manual or automated. Automated diaphragm valves may use pneumatic, hydraulic or electric actuators along with accessories such as solenoid valves, limit switches and positioners.
In addition to the well known, two way shut off or throttling diaphragm valve, other types include: Three way zero deadleg valve, sterile access port, block and bleed, valbow and tank bottom valve.
Valve body
Many diaphragm valve body dimensions follow the Manufacturers Standardization Society standard MSS SP-88. However, most non-diaphragm valves used in industrial applications are built to the ANSI/ASME B16.10 standard. The different standards make it difficult to use diaphragm valves as an alternative to most other industrial valves. Some manufacturers offer diaphragm valves that conform to ANSI B16.10 standards, thereby making these diaphragm valves interchangeable with most solid wedge, double disc, and resilient wedge gate valves as well as short pattern plug and ball valves.
Actuators
Diaphragm valves can be controlled by various types of actuators e.g. manual, pneumatic, hydraulic, electric etc. The most common diaphragm valves use pneumatic actuators; in this type of valve, air pressure is applied through a pilot valve into the actuator which in turn raises the diaphragm and opens the valve. This type of valve is one of the more common valves used in operations where valve speed is a necessity.
Hydraulic diaphragm valves also exist for higher pressure and lower speed operations. Many diaphragm valves are also controlled manually.
Body materials
Brass
Steel type:
Cast Iron
Ductile iron
Carbon Steel
Stainless Steel
Alloy 20
Plastic type:
ABS (Acrylonitrile butadiene styrene)
PVC-U (Polyvinyl chloride, unplasticized) also known as PVCu or uPVC
PVC-C (Polyvinyl chloride, post chlorinated) also known as PVCc or cPVC
PP (Polypropylene)
PE (Polyethylene) also known as LDPE, MDPE and HDPE (see note)
PVDF (Polyvinylidene fluoride)
PTFE
PFA
Body lining materials
Depending on temperature, pressure and chemical resistance, one of the following is used:
Unlined type
Rubber lined type:
NR/Hard Rubber/Ebonite,
BR/Soft rubber
EPDM
BUNA-N
Neoprene
Fluorine plastic lined type
FEP
PFA
PO
PP
Tefzel
KYNAR
XYLON
HALAR
Glass Lined (Green Glass or Blue Glass)
Diaphragm materials
Unlined or Rubber Lined Type:
NR/Natural Rubber
NBR/Nitrile/Buna-N
EPDM
FKM/Viton
BUNA-N
SI/Silicone rubber
Leather
Fluorine Plastic Type:
FEP, with EPDM backing
PTFE, with EPDM backing
PFA, with EPDM backing
Applications
Diaphragm Valves are ideally suited for:
Corrosive applications, where the body and diaphragm materials can be chosen for chemical compatibility (e.g. acids, bases, etc.)
Abrasive applications, where the body lining can be designed to withstand abrasion and the diaphragm can be easily replaced once worn out
Liquids with entrained solids, since the diaphragm can seal around any entrained solids and provide a positive seal
Slurries, since the diaphragm can seal around entrained solids and provide a positive seal
Markets
Diaphragm valves have many applications in the following markets:
Water and Wastewater
Power
Pulp and Paper
Chemical
Cement
Mining and Minerals
Pharmaceutical and Bioprocessing
See also
Ball valve
Butterfly valve
Control valve
Gate valve
Globe valve
Needle valve
Pinch valve
References
External links
History of the diaphragm valve, internet archive
Internal Structure of the diaphragm valve, internet archive
Pneumatic Actuators for diaphragm valves
Plumbing valves
Valves | Diaphragm valve | [
"Physics",
"Chemistry"
] | 1,077 | [
"Physical systems",
"Valves",
"Hydraulics",
"Piping"
] |
2,814,939 | https://en.wikipedia.org/wiki/Solid%20oxygen | Solid oxygen forms at normal atmospheric pressure at a temperature below 54.36 K (−218.79 °C, −361.82 °F). Solid oxygen O2, like liquid oxygen, is a clear substance with a light sky-blue color caused by absorption in the red part of the visible light spectrum.
Oxygen molecules have attracted attention because of the relationship between the molecular magnetization and crystal structures, electronic structures, and superconductivity. Oxygen is the only simple diatomic molecule (and one of the few molecules in general) to carry a magnetic moment. This makes solid oxygen particularly interesting, as it is considered a "spin-controlled" crystal that displays antiferromagnetic magnetic order in the low temperature phases. The magnetic properties of oxygen have been studied extensively. At very high pressures, solid oxygen changes from an insulating to a metallic state; and at very low temperatures, it even transforms to a superconducting state. Structural investigations of solid oxygen began in the 1920s and, at present, six distinct crystallographic phases are established unambiguously.
The molar volume of solid oxygen ranges from 21 cm3/mol in the α-phase to 23.5 cm3/mol in the γ-phase.
Phases
Six different phases of solid oxygen are known to exist:
α-phase: light blue forms at 1 atm, below 23.8 K, monoclinic crystal structure, space group C2/m (no. 12).
β-phase: faint blue to pink forms at 1 atm, below 43.8 K, rhombohedral crystal structure, space group R3̄m (no. 166). At room temperature and high pressure begins transformation to tetraoxygen.
γ-phase: faint blue forms at 1 atm, below 54.36 K, cubic crystal structure, space group Pm3̄n (no. 223).
δ-phase: orange forms at room temperature at a pressure of 9 GPa
ε-phase: dark-red to black forms at room temperature at pressures greater than 10 GPa
ζ-phase: metallic forms at pressures greater than 96 GPa
It has been found that oxygen is solidified into a state called the β-phase at room temperature by applying pressure, and with further increasing pressure, the β-phase undergoes phase transitions to the δ-phase at 9 GPa and the ε-phase at 10 GPa; and, due to the increase in molecular interactions, the color of the β-phase changes to pink, orange, then red (the stable octaoxygen phase), and the red color further darkens to black with increasing pressure. It was found that a metallic ζ-phase appears at 96 GPa when ε-phase oxygen is further compressed.
Red oxygen
As the pressure of oxygen at room temperature is increased through about 10 GPa, it undergoes a dramatic phase transition. Its volume decreases significantly and it changes color from sky-blue to deep red. However, this is a different allotrope of oxygen, octaoxygen (O8), not merely a different crystalline phase of O2.
Metallic oxygen
A ζ-phase appears at 96 GPa when ε-phase oxygen is further compressed. This phase was discovered in 1990 by pressurizing oxygen to 132 GPa. The metallic ζ-phase exhibits superconductivity at pressures over 100 GPa and temperatures below 0.6 K.
References
Crystals in space group 12
Crystals in space group 166
Crystals in space group 221
Oxygen
Cryogenics
Ice | Solid oxygen | [
"Physics"
] | 705 | [
"Applied and interdisciplinary physics",
"Cryogenics"
] |
2,815,048 | https://en.wikipedia.org/wiki/Computational%20model | A computational model uses computer programs to simulate and study complex systems using an algorithmic or mechanistic approach and is widely used in a diverse range of fields spanning from physics, engineering, chemistry and biology to economics, psychology, cognitive science and computer science.
The system under study is often a complex nonlinear system for which simple, intuitive analytical solutions are not readily available. Rather than deriving a mathematical analytical solution to the problem, experimentation with the model is done by adjusting the parameters of the system in the computer, and studying the differences in the outcome of the experiments. Operational theories of the model can then be derived or deduced from these computational experiments.
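As an illustration of such a computational experiment, the following short Python sketch adjusts a single model parameter and compares the resulting long-run behaviour. The logistic map is used here purely as a stand-in for a simple nonlinear system, and the parameter values are arbitrary illustrative choices, not taken from the article.

```python
def simulate(r, x0=0.5, steps=200):
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    x = x0
    trajectory = [x]
    for _ in range(steps - 1):
        x = r * x * (1 - x)
        trajectory.append(x)
    return trajectory

# "Computational experiment": vary the parameter r and study the outcome.
# r = 2.5 settles to a fixed point, r = 3.2 oscillates, r = 3.9 is chaotic.
for r in (2.5, 3.2, 3.9):
    tail = [round(v, 3) for v in simulate(r)[-4:]]
    print(f"r = {r}: last values {tail}")
```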
Examples of common computational models are weather forecasting models, earth simulator models, flight simulator models, molecular protein folding models, Computational Engineering Models (CEM), and neural network models.
See also
Computational Engineering
Computational cognition
Reversible computing
Agent-based model
Artificial neural network
Computational linguistics
Data-driven model
Decision field theory
Dynamical systems model of cognition
Membrane computing
Ontology (information science)
Programming language theory
Microscale and macroscale models
References
Models of computation
Mathematical modeling | Computational model | [
"Mathematics"
] | 225 | [
"Applied mathematics",
"Mathematical modeling"
] |
2,815,477 | https://en.wikipedia.org/wiki/Prostate%20biopsy | Prostate biopsy is a procedure in which small hollow needle-core samples are removed from a man's prostate gland to be examined for the presence of prostate cancer. It is typically performed when the result from a PSA blood test is high. It may also be considered advisable after a digital rectal exam (DRE) finds possible abnormality. PSA screening is controversial as PSA may become elevated due to non-cancerous conditions such as benign prostatic hyperplasia (BPH), by infection, or by manipulation of the prostate during surgery or catheterization. Additionally many prostate cancers detected by screening develop so slowly that they would not cause problems during a man's lifetime, making the complications due to treatment unnecessary.
The most frequent side effect of the procedure is blood in the urine (31%). Other side effects may include infection (0.9%) and death (0.2%).
Ultrasound-guided prostate biopsy
The procedure may be performed transrectally, through the urethra or through the perineum. The most common approach is transrectal, and historically this was done with tactile finger guidance. The most common method of prostate biopsy was transrectal ultrasound-guided prostate (TRUS) biopsy.
Extended biopsy schemes take 12 to 14 cores from the prostate gland through a thin needle in a systematic fashion from different regions of the prostate.
A biopsy procedure with a higher rate of cancer detection is template prostate mapping (TPM) or transperineal template-guided mapping biopsy (TTMB), whereby typically 50 to 60 samples are taken of the prostate through the outer skin between the rectum and scrotum, to thoroughly sample and map the entire prostate, through a template with holes every 5 mm, usually under a general or spinal anaesthetic.
Antibiotics are usually prescribed to minimize the risk of infection. A healthcare provider may also prescribe an enema to be taken in the morning of the procedure. During the transrectal procedure, an ultrasound probe is inserted into the rectum to assist in guiding the biopsy needles. Following this, a local anesthetic, such as lidocaine, is administered into the tissue surrounding the prostate. Subsequently, a spring-loaded biopsy needle is inserted into the prostate, resulting in a clicking sound. When the local anesthetic is effective, any discomfort experienced is minimal.
MRI-guided targeted biopsy
Since the mid-1980s, TRUS biopsy has been used to diagnose prostate cancer in essentially a blind fashion because prostate cancer cannot be seen on ultrasound due to poor soft tissue resolution. However, multi-parametric magnetic resonance imaging (mpMRI) has since about 2005 been used to better identify and characterize prostate cancer. A study correlating MRI and surgical pathology specimens demonstrated a sensitivity of 59% and specificity of 84% in identifying cancer when T2-weighted, dynamic contrast enhanced, and diffusion-weighted imaging were used together. Many prostate cancers missed by conventional biopsy are detectable by MRI-guided targeted biopsy. In fact, a side-by-side comparison of TRUS versus MRI-guided targeted biopsy that was conducted as a prospective, investigator-blinded study demonstrated that MRI-guided biopsy improved detection of significant prostate cancer by 17.7%, and decreased the diagnosis of insignificant or low-risk disease by 89.4%.
Two methods of MRI-guided, or "targeted" prostate biopsy, are available: (1) direct "in-bore" biopsy within the MRI tube, and (2) fusion biopsy using a device that fuses stored MRI with real-time ultrasound (MRI-US). Visual or cognitive MRI-US fusion have been described.
When MRI is used alone to guide prostate biopsy, it is done by an interventional radiologist. Correlation between biopsy and final pathology is improved between MRI-guided biopsy as compared to TRUS.
In the fusion MRI-US prostate biopsy, a prostate MRI is performed before biopsy and then, at the time of biopsy, the MRI images are fused to the ultrasound images to guide the urologist to the suspicious targets. Fusion MRI-US biopsies can be achieved in an office setting with a variety of devices.
MRI-guided prostate biopsy appears to be superior to standard TRUS-biopsy in prostate cancer detection. Several groups in the U.S., and Europe, have demonstrated that targeted biopsies obtained with fusion imaging are more likely to reveal cancer than blind systematic biopsies. In 2015, AdMeTech Foundation, American College of Radiology and European Society of Eurogenital Radiology developed Prostate Imaging Reporting and Data System (PI-RADS v2) for global standardization of image acquisition and interpretation, which similarly to BI-RADS standardization of breast imaging, is expected to improve patient selection for biopsies and precisely-targeted tissue sampling. PI-RADS v2 created standards for optimal mpMRI image reporting and graded the level of suspicion based on the score of one to five, with the goal to improve early detection (and exclusion) of clinically significant (or aggressive) prostate cancer. The higher suspicion on mpMRI and the higher PI-RADS v2 score, the greater the likelihood of aggressive prostate cancer on targeted biopsy. Considerable experience and training is required by the reader of prostate mpMRI studies.
Up to 2013, indications for targeted biopsy have included mainly patients for whom traditional TRUS biopsies have been negative despite concern for rising PSA, as well as for patients enrolling in a program of active surveillance who may benefit from a confirmatory biopsy and/or the added confidence of more accurate non-invasive monitoring. Increasingly, men undergoing initial biopsy are requesting targeted biopsy, and thus, the use of pre-biopsy MRI is growing rapidly.
Clinical trials of mpMRI and PI-RADS v2, including NIH-funded studies are underway to further clarify the benefits of targeted prostate biopsy.
Side effects
Side effects of a TRUS or TPM biopsy include:
rectal pain or discomfort (very common)
burning when passing urine (very common)
bruising (very common with TPM only)
bloody urine for 2–3 days (very common)
bloody semen for ~3 months (30% with TRUS; ~100% with TPM)
poor erections for ~8 weeks (30% with TRUS; ~100% with TPM)
infection of skin or urine (1–8%)
infection of skin or urine requiring hospitalisation and intravenous antibiotics (1–4%)
difficulty urinating (1% with TRUS; >5% with TPM)
Gleason score
The tissue samples are examined under a microscope to determine whether cancer cells are present, and to evaluate the microscopic features (or Gleason score) of any cancer found. Gleason score, PSA, and digital rectal examination together determine clinical risk, which then dictates treatment options.
Tumor markers
Tissue samples can be stained for the presence of PSA and other tumor markers in order to determine the origin of malignant cells that have metastasized.
References
Biopsy
Histopathology
Male genital surgery
Prostatic procedures
Prostate cancer | Prostate biopsy | [
"Chemistry"
] | 1,498 | [
"Histopathology",
"Microscopy"
] |
2,816,523 | https://en.wikipedia.org/wiki/Canadian%20Nuclear%20Safety%20Commission | The Canadian Nuclear Safety Commission (CNSC; ) is the federal regulator of nuclear power and materials in Canada.
Mandate and history
The Canadian Nuclear Safety Commission was established under the 1997 Nuclear Safety and Control Act with a mandate to regulate nuclear energy, nuclear substances, and relevant equipment in order to reduce and manage the safety, environmental, and national security risks, and to keep Canada in compliance with international legal obligations, such as the Treaty on the Non-Proliferation of Nuclear Weapons. It replaced the former Atomic Energy Control Board (AECB; French: Commission de contrôle de l'énergie atomique), which was founded in 1946.
The CNSC is an agency of the Government of Canada which reports to the Parliament of Canada through the Minister of Natural Resources.
In 2008, Linda Keen, the president and chief executive officer of the CNSC, was fired following a shortage of medical radioisotopes in Canada as a result of the extended routine shutdown of the NRU nuclear reactor at the Chalk River Laboratories.
Rumina Velshi joined the organisation in 2011 and became its President and CEO in 2018. In 2020 she also took on an international role, becoming Chairperson of the IAEA's Commission on Safety Standards. She was appointed to serve for four years.
Programs
The Participant Funding Program allows the public, Indigenous groups, and other stakeholders to request funding from the CNSC to participate in its regulatory processes.
In 2014, the CNSC launched the Independent Environmental Monitoring Program. The program verifies that the public and environment around licensed nuclear facilities are safe, helping to confirm their regulatory position and decision-making.
See also
Anti-nuclear movement in Canada
Canadian National Calibration Reference Centre
International Nuclear Regulators' Association
Nuclear industry in Canada
References
External links
2000 establishments in Canada
Federal departments and agencies of Canada
Government agencies established in 2000
Energy regulatory authorities of Canada
Nuclear regulatory organizations
Natural Resources Canada
Nuclear power in Canada | Canadian Nuclear Safety Commission | [
"Engineering"
] | 379 | [
"Nuclear regulatory organizations",
"Nuclear organizations"
] |
2,816,674 | https://en.wikipedia.org/wiki/Component-based%20software%20engineering | Component-based software engineering (CBSE), also called component-based development (CBD), is a style of software engineering that aims to construct a software system from components that are loosely-coupled and reusable. This emphasizes the separation of concerns among components.
To find the right level of component granularity, software architects have to continuously iterate their component designs with developers. Architects need to take into account user requirements, responsibilities and architectural characteristics.
Considerations
For large-scale systems developed by large teams, a disciplined culture and process is required to achieve the benefits of CBSE. Third-party components are often utilized in large systems.
The system can be designed visually with the Unified Modeling Language (UML). Each component is shown as a rectangle, and an interface is shown as a lollipop to indicate a provided interface and as a socket to indicate consumption of an interface.
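As an illustration of loose coupling through provided and consumed interfaces, the following Python sketch shows one component that provides a storage interface and another that consumes it through dependency injection, wired together only at composition time. The component and interface names are hypothetical examples, not taken from any standard.

```python
from typing import Protocol

class DataStore(Protocol):              # the provided interface ("lollipop")
    def save(self, key: str, value: str) -> None: ...
    def load(self, key: str) -> str: ...

class InMemoryStore:                    # a component that provides DataStore
    def __init__(self) -> None:
        self._data: dict[str, str] = {}
    def save(self, key: str, value: str) -> None:
        self._data[key] = value
    def load(self, key: str) -> str:
        return self._data[key]

class GreetingService:                  # a component that consumes DataStore ("socket")
    def __init__(self, store: DataStore) -> None:
        self._store = store             # dependency is injected, not hard-wired
    def greet(self, name: str) -> str:
        self._store.save("last_greeted", name)
        return f"Hello, {name}!"

if __name__ == "__main__":
    # Composition: any component satisfying DataStore can be swapped in here.
    service = GreetingService(InMemoryStore())
    print(service.greet("Ada"))
```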
Component-based usability testing is for components that interact with the end user.
References
Object-oriented programming
Software architecture
Software engineering | Component-based software engineering | [
"Technology",
"Engineering"
] | 208 | [
"Systems engineering",
"Computer engineering",
"Software engineering",
"Component-based software engineering",
"Information technology",
"Components"
] |
2,816,874 | https://en.wikipedia.org/wiki/Agrophysics | Agrophysics is a branch of science bordering on agronomy and physics,
whose object of study is the agroecosystem: the biological objects, biotope and biocoenosis affected by human activity, studied and described using the methods of the physical sciences. Using the achievements of the exact sciences to solve major problems in agriculture, agrophysics involves the study of materials and processes occurring in the production and processing of agricultural crops, with particular emphasis on the condition of the environment and the quality of farming materials and food production.
Agrophysics is closely related to biophysics, but is restricted to the physics of the plants, animals, soil and an atmosphere involved in agricultural activities and biodiversity. It is different from biophysics in having the necessity of taking into account the specific features of biotope and biocoenosis, which involves the knowledge of nutritional science and agroecology, agricultural technology, biotechnology, genetics etc.
The needs of agriculture, concerning the study of the complex local soil and plant-atmosphere systems, lay at the root of the emergence of a new branch – agrophysics – which deals with them by the methods of experimental physics.
The scope of the branch, which started from soil science (soil physics) and was originally limited to the study of relations within the soil environment, expanded over time to cover the properties of agricultural crops and produce as foods and raw post-harvest materials, and the issues of quality, safety and labeling, considered distinct from the field of nutrition for application in food science.
Research centres focused on the development of the agrophysical sciences include the Institute of Agrophysics, Polish Academy of Sciences in Lublin, and the Agrophysical Research Institute, Russian Academy of Sciences in St. Petersburg.
See also
Agriculture science
Agroecology
Genomics
Metagenomics
Metabolomics
Physics (Aristotle)
Proteomics
Soil plant atmosphere continuum
Research institutes and societies
Agrophysical Research Institute in St. Petersburg, Russia
Bohdan Dobrzański Institute of Agrophysics in Lublin, Poland
The Indian Society of AgroPhysics
Scholarly journals
Acta Agrophysica
Journal of Agricultural Physics
Polish Journal of Soil Science
References
Encyclopedia of Agrophysics in series: Encyclopedia of Earth Sciences Series edts. Jan Glinski, Jozef Horabik, Jerzy Lipiec, 2011, Publisher: Springer,
Encyclopedia of Soil Science, edts. Ward Chesworth, 2008, Uniw. of Guelph Canada, Publ. Springer,
АГРОФИЗИКА - AGROPHYSICS by Е. В. Шеин (J.W. Chein), В. М. Гончаров (W.M. Gontcharow), Ростов-на-Дону (Rostov-on-Don), Феникс (Phoenix), 2006, - 399 c., - Рекомендовано УМО по классическому университетскому образованию в качестве учебника для студентов высших учебных заведений, обучающихся по специальности и направлению высшего профессионального образования "Почвоведение"
Scientific Dictionary of Agrophysics: polish-English, polsko-angielski by R. Dębicki, J. Gliński, J. Horabik, R. T. Walczak - Lublin 2004,
Physical Methods in Agriculture. Approach to Precision and Quality, edts. J. Blahovec and M. Kutilek, Kluwer Academic Publishers, New York 2002, .
Soil Physical Condition and Plant Roots by J. Gliński, J. Lipiec, 1990, CRC Press, Inc., Boca Raton, USA,
Soil Aeration and its Role for Plants by J. Gliński, W. Stępniewski, 1985, Publisher: CRC Press, Inc., Boca Raton, USA,
Fundamentals of Agrophysics (Osnovy agrofiziki) by A. F. Ioffe, I. B. Revut, Petr Basilevich Vershinin, 1966, English : Publisher: Jerusalem, Israel Program for Scientific Translations; (available from the U.S. Dept. of Commerce, Clearinghouse for Federal Scientific and Technical Information, Va.)
Fundamentals of Agrophysics by P. V, etc. Vershinin, 1959, Publisher: IPST,
External links
Agrophysical Research Institute of the Russian Academy of Agricultural Sciences
Bohdan Dobrzański Institute of Agrophysics, Polish Academy of Sciences in Lublin
Free Association of PMA Labs, Czech University of Agriculture, Prague
International Agrophysics
International Agrophysics - quarterly journal focused on applications of physics in environmental and agricultural sciences
Polish Society of Agrophysics
Sustainable Agriculture: Definitions and Terms
Agronomy
Applied and interdisciplinary physics | Agrophysics | [
"Physics"
] | 1,166 | [
"Applied and interdisciplinary physics"
] |
2,817,088 | https://en.wikipedia.org/wiki/Glucose%20transporter | Glucose transporters are a wide group of membrane proteins that facilitate the transport of glucose across the plasma membrane, a process known as facilitated diffusion. Because glucose is a vital source of energy for all life, these transporters are present in all phyla. The GLUT or SLC2A family is a protein family found in most mammalian cells. Fourteen GLUTs are encoded by the human genome. GLUT is a type of uniporter transporter protein.
Synthesis of free glucose
Most non-autotrophic cells are unable to produce free glucose because they lack expression of glucose-6-phosphatase and, thus, are involved only in glucose uptake and catabolism. Free glucose is usually produced only in hepatocytes; in fasting conditions, other tissues such as the intestines, muscles, brain, and kidneys are able to produce glucose following activation of gluconeogenesis.
Glucose transport in yeast
In Saccharomyces cerevisiae glucose transport takes place through facilitated diffusion. The transport proteins are mainly from the Hxt family, but many other transporters have been identified.
Glucose transport in mammals
GLUTs are integral membrane proteins that contain 12 membrane-spanning helices with both the amino and carboxyl termini exposed on the cytoplasmic side of the plasma membrane. GLUT proteins transport glucose and related hexoses according to a model of alternate conformation, which predicts that the transporter exposes a single substrate binding site toward either the outside or the inside of the cell. Binding of glucose to one site provokes a conformational change associated with transport, and releases glucose to the other side of the membrane. The inner and outer glucose-binding sites are, it seems, located in transmembrane segments 9, 10, 11; also, the DLS motif located in the seventh transmembrane segment could be involved in the selection and affinity of transported substrate.
Types
Each glucose transporter isoform plays a specific role in glucose metabolism determined by its pattern of tissue expression, substrate specificity, transport kinetics, and regulated expression in different physiological conditions. To date, 14 members of the GLUT/SLC2 family have been identified. On the basis of sequence similarities, the GLUT family has been divided into three subclasses.
Class I
Class I comprises the well-characterized glucose transporters GLUT1-GLUT4.
Classes II/III
Class II comprises:
GLUT5 (), a fructose transporter in enterocytes
GLUT7 (), found in the small and large intestine, transporting glucose out of the endoplasmic reticulum
GLUT9 (), recently has been found to transport uric acid
GLUT11 ()
Class III comprises:
GLUT6 (),
GLUT8 (),
GLUT10 (),
GLUT12 (), and
GLUT13, also H+/myo-inositol transporter HMIT (), primarily expressed in brain.
Most members of classes II and III have been identified recently in homology searches of EST databases and the sequence information provided by the various genome projects.
The function of these new glucose transporter isoforms is still not clearly defined at present. Several of them (GLUT6, GLUT8) contain motifs that help retain them intracellularly and therefore prevent glucose transport. Whether mechanisms exist to promote cell-surface translocation of these transporters is not yet known, but it has clearly been established that insulin does not promote GLUT6 and GLUT8 cell-surface translocation.
Discovery of sodium-glucose cotransport
In August 1960, in Prague, Robert K. Crane presented for the first time his discovery of the sodium-glucose cotransport as the mechanism for intestinal glucose absorption. Crane's discovery of cotransport was the first ever proposal of flux coupling in biology. Crane in 1961 was the first to formulate the cotransport concept to explain active transport. Specifically, he proposed that the accumulation of glucose in the intestinal epithelium across the brush border membrane was [is] coupled to downhill Na+ transport across the brush border. This hypothesis was rapidly tested, refined, and extended [to] encompass the active transport of a diverse range of molecules and ions into virtually every cell type.
See also
Cotransport
Cotransporter
GLUT1 deficiency syndrome
GLUT2 deficiency syndrome
References
External links
Carbohydrate metabolism
Transport proteins
Integral membrane proteins
Solute carrier family | Glucose transporter | [
"Chemistry"
] | 949 | [
"Carbohydrate metabolism",
"Carbohydrate chemistry",
"Metabolism"
] |
2,817,855 | https://en.wikipedia.org/wiki/Larmor%20formula | In electrodynamics, the Larmor formula is used to calculate the total power radiated by a nonrelativistic point charge as it accelerates. It was first derived by J. J. Larmor in 1897, in the context of the wave theory of light.
When any charged particle (such as an electron, a proton, or an ion) accelerates, energy is radiated in the form of electromagnetic waves. For a particle whose velocity is small relative to the speed of light (i.e., nonrelativistic), the total power that the particle radiates (when considered as a point charge) can be calculated by the Larmor formula:
P = 2q²a²/(3c³) (Gaussian units) or, equivalently, P = q²a²/(6πε₀c³) (SI units), where a is the proper acceleration, q is the charge, and c is the speed of light. A relativistic generalization is given by the Liénard–Wiechert potentials.
In either unit system, the power radiated by a single electron can be expressed in terms of the classical electron radius rₑ and electron mass mₑ as P = 2mₑrₑa²/(3c).
One implication is that an electron orbiting around a nucleus, as in the Bohr model, should lose energy, fall to the nucleus and the atom should collapse. This puzzle was not solved until quantum theory was introduced.
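As a numerical illustration of the formula and of the classical-collapse implication above, the short Python sketch below evaluates the SI-unit Larmor power for an electron given the acceleration it would have on a classical circular orbit at the Bohr radius, together with the standard classical estimate of the orbital decay time. The scenario and the specific constant values are illustrative assumptions, not taken from the article.

```python
import math

# Physical constants (SI, CODATA values)
E_CHARGE = 1.602176634e-19      # elementary charge, C
EPS0     = 8.8541878128e-12     # vacuum permittivity, F/m
C        = 299792458.0          # speed of light, m/s
M_E      = 9.1093837015e-31     # electron mass, kg
R_E      = 2.8179403262e-15     # classical electron radius, m
A0       = 5.29177210903e-11    # Bohr radius, m

def larmor_power(q: float, a: float) -> float:
    """Total radiated power P = q^2 a^2 / (6 pi eps0 c^3) for a nonrelativistic charge."""
    return q**2 * a**2 / (6.0 * math.pi * EPS0 * C**3)

# Acceleration of a classical electron circling a proton at the Bohr radius
# (Coulomb force divided by the electron mass).
accel = E_CHARGE**2 / (4.0 * math.pi * EPS0 * A0**2 * M_E)
print(f"a      = {accel:.3e} m/s^2")            # ~9e22 m/s^2
print(f"P      = {larmor_power(E_CHARGE, accel):.3e} W")  # ~5e-8 W

# Standard classical estimate of the time for such an orbit to spiral into the nucleus.
t_fall = A0**3 / (4.0 * R_E**2 * C)
print(f"t_fall ~ {t_fall:.2e} s")                # on the order of 1e-11 s
```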
Derivation
To calculate the power radiated
by a point charge at a position
, with a velocity,
we integrate the Poynting vector over the surface of a sphere of radius R, to get:
The electric and magnetic fields are given
by the Liénard-Wiechert field equations,
The radius vector, , is the distance from the charged particle's position at
the retarded time to the point of observation of the electromagnetic
fields at the present time, is the charge's velocity divided by , is the charge's acceleration divided by , and .
The variables, , , ,
and are all evaluated at the
retarded time, .
We make a Lorentz transformation to the rest frame of the point charge where , and
Here, is the rest frame acceleration parallel to , and
is the rest frame acceleration perpendicular to .
We integrate the rest frame Poynting vector over the surface of a sphere of radius R', to get.
We take the limit In this limit, , and so the electric field is given by
with all variables evaluated at the present time.
Then, the surface integral for the radiated power reduces to
The radiated power can be put back in terms of the original acceleration in the moving frame, to give
The variables in this equation are in the original moving frame, but the rate of energy emission on the left hand side of the equation is still given in terms of the rest frame variables.
However, the right-hand side will be shown below to be a Lorentz invariant, so radiated power can be Lorentz transformed to the moving frame, finally giving
This result (in two forms) is the same as Liénard's relativistic extension
of Larmor's formula, and is given here with all variables at the present time.
Its nonrelativistic reduction reduces to Larmor's original formula.
For high energies, it appears that the power radiated for acceleration parallel to the velocity is a factor γ² larger than that for perpendicular acceleration. However, writing the Liénard formula in terms of the velocity gives a misleading implication. In terms of momentum instead of velocity, the Liénard formula becomes
This shows that the power emitted for dp/dt perpendicular to the velocity is larger by a factor of γ² than the power for dp/dt parallel to the velocity.
This results in radiation damping being negligible for linear accelerators, but a limiting factor for circular accelerators.
Covariant form
The radiated power
is actually a Lorentz scalar, given in covariant form as
To show this, we reduce the four-vector scalar product to vector notation. We start with
The time derivatives are.
When these derivatives are used,
we get
With this expression for the scalar product, the manifestly invariant form for the power agrees with the vector form above, demonstrating that the radiated power is a Lorentz scalar
Angular distribution
The angular distribution of radiated power is given by a general formula, applicable whether or not the particle is relativistic. In CGS units, this formula is
dP/dΩ = (q²/(4πc)) |n̂ × ((n̂ − β) × β̇)|² / (1 − n̂·β)⁵
where n̂ is a unit vector pointing from the particle towards the observer. In the case of linear motion (velocity parallel to acceleration), this simplifies to
dP/dΩ = (q²a²/(4πc³)) sin²θ / (1 − β cos θ)⁵
where θ is the angle between the observer and the particle's motion.
Radiation reaction
The radiation from a charged particle carries energy and momentum. In order to satisfy energy and momentum conservation, the charged particle must experience a recoil at the time of emission. The radiation must exert an additional force on the charged particle. This force is known as Abraham-Lorentz force while its non-relativistic limit is known as the Lorentz self-force and relativistic forms are known as Lorentz-Dirac force or Abraham-Lorentz-Dirac force. The radiation reaction phenomenon is one of the key problems and consequences of the Larmor formula. According to classical electrodynamics, a charged particle produces electromagnetic radiation as it accelerates. The particle loses momentum and energy to the radiation, which carries them away from it. The radiation reaction force, in turn, acts back on the charged particle as a result of this emission.
The dynamics of charged particles are significantly impacted by the existence of this force. In particular, it causes a change in their motion that may be accounted for by the Larmor formula, a factor in the Lorentz-Dirac equation.
According to the Lorentz-Dirac equation, a charged particle's velocity will be influenced by a "self-force" resulting from its own radiation. Such non-physical behavior as runaway solutions, when the particle's velocity or energy become infinite in a finite amount of time, might result from this self-force.
A resolution to the paradoxes resulting from the introduction of a self-force due to the emission of electromagnetic radiation is that no self-force is produced. The acceleration of a charged particle produces electromagnetic radiation, whose outgoing energy reduces the energy of the charged particle. This results in 'radiation reaction' that manifests not as a self-force, but simply as a reduced acceleration of the particle.
Atomic physics
The development of quantum physics, notably the Bohr model of the atom, was able to explain the gap between the classical prediction and the observed stability of atoms. The Bohr model proposed that electrons could only inhabit distinct energy levels, and that transitions between these levels account for the observed spectral lines of atoms. The wave-like properties of electrons and the idea of energy quantization were used to explain the stability of these electron orbits.
The Larmor formula can only be used for non-relativistic particles, which limits its usefulness. The Liénard-Wiechert potential is a more comprehensive formula that must be employed for particles travelling at relativistic speeds. In certain situations, more intricate calculations including numerical techniques or perturbation theory could be necessary to precisely compute the radiation the charged particle emits.
See also
Atomic theory
Cyclotron radiation
Electromagnetic wave equation
Maxwell's equations in curved spacetime
Radiation reaction
Wave equation
Wheeler–Feynman absorber theory
References
Antennas (radio)
Atomic physics
Electrodynamics
Electromagnetic radiation
Electromagnetism
Eponymous equations of physics | Larmor formula | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,503 | [
"Physical phenomena",
"Electromagnetism",
"Electrodynamics",
"Equations of physics",
"Dynamical systems",
"Electromagnetic radiation",
"Eponymous equations of physics",
"Quantum mechanics",
"Radiation",
"Atomic physics",
"Fundamental interactions",
" molecular",
"Atomic",
" and optical phy... |
2,818,161 | https://en.wikipedia.org/wiki/Hydraulic%20action | Hydraulic action, most generally, is the ability of moving water (flowing or waves) to dislodge and transport rock particles. This includes a number of specific erosional processes, including abrasion and facilitated erosion, such as static erosion where water leaches salts and floats off organic material from unconsolidated sediments; it is distinct from chemical erosion, more often called chemical weathering.
It is a mechanical process, in which the moving water current flows against the banks and bed of a river, thereby removing rock particles.
A primary example of hydraulic action is a wave striking a cliff face which compresses the air in cracks of the rocks. This exerts pressure on the surrounding rock which can progressively crack, break, splinter and detach rock particles. This is followed by the decompression of the air as the wave retreats which can occur suddenly with explosive force which additionally weakens the rock. Cracks are gradually widened so each wave compresses more air, increasing the explosive force of its release. Thus, the effect intensifies in a 'positive feedback' system. Over time, as the cracks grow, they sometimes form a sea cave. The broken pieces that fall off produce two additional types of erosion, abrasion (sandpapering) and attrition. In abrasion (also called corrasion), the newly formed chunks are thrown against the rock face. Attrition is a similar effect caused by eroded particles after they fall to the sea bed where they are subjected to further wave action. In coastal areas wave hydraulic action is often the most important form of erosion.
Similarly, where hydraulic action is strong enough to loosen sediment along a stream bed and its banks; this will take rocks and particles from the banks and bed of the stream and add this to the stream's load. This process is the result of friction between the moving water and the static stream bed and banks. This friction increases with the speed of the water and once loosened the smaller particles are held in suspension by the force of the flowing water, these suspended particles can scour the sides and bottom of the stream. The scouring action produces distinctive markings on streams beds such as ripple marks, fluting, and crescent marks. The larger particles and even large rocks are scooted (dragged) along the bottom in a process known as traction which causes attrition, and are often "bounced" along in a process known as saltation where the force of the water temporarily lifts the rock particle which then crashes back into the bed dislodging other particles.
Hydraulic action also occurs as a stream tumbles over a waterfall to crash onto the rocks below. It usually leads to the formation of a plunge pool below the waterfall due in part to corrasion from the stream's load, but more to a scouring action as vortices form in the water as it escapes downstream. Hydraulic action can also cause the breakdown of river banks since there are water bubbles which enter the banks and collapse them when they expand.
Notes
See also
Freezing and thawing erosion
Hydrology
Erosion
Geological processes | Hydraulic action | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 616 | [
"Hydrology",
"Environmental engineering"
] |
2,819,556 | https://en.wikipedia.org/wiki/Ehrenfest%20paradox | The Ehrenfest paradox concerns the rotation of a "rigid" disc in the theory of relativity.
In its original 1909 formulation as presented by Paul Ehrenfest in relation to the concept of Born rigidity within special relativity, it discusses an ideally rigid cylinder that is made to rotate about its axis of symmetry. The radius R as seen in the laboratory frame is always perpendicular to its motion and should therefore be equal to its value R0 when stationary. However, the circumference (2πR) should appear Lorentz-contracted to a smaller value than at rest, by the usual factor γ. This leads to the contradiction that R = R0 and R < R0.
The paradox has been deepened further by Albert Einstein, who showed that since measuring rods aligned along the periphery and moving with it should appear contracted, more would fit around the circumference, which would thus measure greater than 2πR. This indicates that geometry is non-Euclidean for rotating observers, and was important for Einstein's development of general relativity.
Any rigid object made from real material that is rotating with a transverse velocity close to that material's speed of sound must exceed the point of rupture due to centrifugal force, because centrifugal pressure can not exceed the shear modulus of material.
c_s = √(G/ρ), where c_s is the speed of sound, ρ is density and G is shear modulus. Therefore, when considering relativistic speeds, it is only a thought experiment. Neutron-degenerate matter may allow velocities close to the speed of light, since the speed of a neutron-star oscillation is relativistic (though these bodies cannot strictly be said to be "rigid").
Essence of the paradox
Imagine a disk of radius R rotating with constant angular velocity ω.
The reference frame is fixed to the stationary center of the disk. Then the magnitude of the relative velocity of any point in the circumference of the disk is ωR. So the circumference will undergo Lorentz contraction by a factor of √(1 − (ωR)²/c²).
However, since the radius is perpendicular to the direction of motion, it will not undergo any contraction. So
circumference/radius = 2πR√(1 − (ωR)²/c²)/R = 2π√(1 − (ωR)²/c²)
This is paradoxical, since in accordance with Euclidean geometry, it should be exactly equal to 2π.
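The following short Python sketch evaluates this ratio numerically for a disk of illustrative dimensions, showing how it falls below the Euclidean value 2π as the rotation rate grows. The disk radius and rim speeds used are arbitrary example values, not from the article.

```python
import math

C = 299792458.0  # speed of light, m/s

def circumference_to_radius_ratio(omega: float, radius: float) -> float:
    """Ratio of the Lorentz-contracted circumference to the uncontracted radius."""
    v = omega * radius                       # rim speed
    return 2.0 * math.pi * math.sqrt(1.0 - (v / C) ** 2)   # < 2*pi for omega > 0

# Illustrative numbers: a 1 m disk spun so the rim moves at 10% and 90% of c.
radius = 1.0
for frac in (0.1, 0.9):
    omega = frac * C / radius
    ratio = circumference_to_radius_ratio(omega, radius)
    print(f"rim at {frac:.0%} of c: ratio = {ratio:.4f} (Euclidean value {2*math.pi:.4f})")
```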
Ehrenfest's argument
Ehrenfest considered an ideal Born-rigid cylinder that is made to rotate. Assuming that the cylinder does not expand or contract, its radius stays the same. But measuring rods laid out along the circumference should be Lorentz-contracted to a smaller value than at rest, by the usual factor γ. This leads to the paradox that the rigid measuring rods would have to separate from one another due to Lorentz contraction; the discrepancy noted by Ehrenfest seems to suggest that a rotated Born rigid disk should shatter.
Thus Ehrenfest argued by reductio ad absurdum that Born rigidity is not generally compatible with special relativity. According to special relativity an object cannot be spun up from a non-rotating state while maintaining Born rigidity, but once it has achieved a constant nonzero angular velocity it does maintain Born rigidity without violating special relativity, and then (as Einstein later showed) a disk-riding observer will measure a circumference of 2πR/√(1 − v²/c²).
Einstein and general relativity
The rotating disc and its connection with rigidity was also an important thought experiment for Albert Einstein in developing general relativity. He referred to it in several publications in 1912, 1916, 1917, 1922 and drew the insight from it, that the geometry of the disc becomes non-Euclidean for a co-rotating observer. Einstein wrote (1922, pp. 66ff): Imagine a circle drawn about the origin in the x'y' plane of K' and a diameter of this circle. Imagine, further, that we have given a large number of rigid rods, all equal to each other. We suppose these laid in series along the periphery and the diameter of the circle, at rest relatively to K'. If U is the number of these rods along the periphery, D the number along the diameter, then, if K' does not rotate relatively to K, we shall have U/D = π. But if K' rotates we get a different result. Suppose that at a definite time t of K we determine the ends of all the rods. With respect to K all the rods upon the periphery experience the Lorentz contraction, but the rods upon the diameter do not experience this contraction (along their lengths!). It therefore follows that U/D > π. It therefore follows that the laws of configuration of rigid bodies with respect to K' do not agree with the laws of configuration of rigid bodies that are in accordance with Euclidean geometry. If, further, we place two similar clocks (rotating with K'), one upon the periphery, and the other at the centre of the circle, then, judged from K, the clock on the periphery will go slower than the clock at the centre. The same thing must take place, judged from K' if we define time with respect to K' in a not wholly unnatural way, that is, in such a way that the laws with respect to K' depend explicitly upon the time. Space and time, therefore, cannot be defined with respect to K' as they were in the special theory of relativity with respect to inertial systems. But, according to the principle of equivalence, K' is also to be considered as a system at rest, with respect to which there is a gravitational field (field of centrifugal force, and force of Coriolis). We therefore arrive at the result: the gravitational field influences and even determines the metrical laws of the space-time continuum. If the laws of configuration of ideal rigid bodies are to be expressed geometrically, then in the presence of a gravitational field the geometry is not Euclidean.
Brief history
Citations to the papers mentioned below (and many which are not) can be found in a paper by Øyvind Grøn which is available on-line.
1909: Max Born introduces a notion of rigid motion in special relativity.
1909: After studying Born's notion of rigidity, Paul Ehrenfest demonstrated by means of a paradox about a cylinder that goes from rest to rotation, that most motions of extended bodies cannot be Born rigid.
1910: Gustav Herglotz and Fritz Noether independently elaborated on Born's model and showed (Herglotz–Noether theorem) that Born rigidity only allows three degrees of freedom for bodies in motion. For instance, it's possible that a rigid body is executing uniform rotation, yet accelerated rotation is impossible. So a Born rigid body cannot be brought from a state of rest into rotation, confirming Ehrenfest's result.
1910: Max Planck calls attention to the fact that one should not confuse the problem of the contraction of a disc due to spinning it up, with that of what disk-riding observers will measure as compared to stationary observers. He suggests that resolving the first problem will require introducing some material model and employing the theory of elasticity.
1910: Theodor Kaluza points out that there is nothing inherently paradoxical about the static and disk-riding observers obtaining different results for the circumference. This does however imply, Kaluza argues, that "the geometry of the rotating disk" is non-euclidean. He asserts without proof that this geometry is in fact essentially just the geometry of the hyperbolic plane.
1911: Vladimir Varićak argued that the paradox only occurs in the Lorentz standpoint, where rigid bodies contract, but not if the contraction is "caused by the manner of our clock-regulation and length-measurement". Einstein published a rebuttal, denying that his viewpoint was different from Lorentz's.
1911: Max von Laue shows, that an accelerated body has an infinite number of degrees of freedom, thus no rigid bodies can exist in special relativity.
1916: While writing up his new general theory of relativity, Albert Einstein notices that disk-riding observers measure a longer circumference, L′ = 2πr/√(1 − v²/c²). That is, because rulers moving parallel to their length axis appear shorter as measured by static observers, the disk-riding observers can fit more rulers of a given length around the circumference than stationary observers could.
1922: A. S. Eddington, in The Mathematical Theory of Relativity (p. 113), calculates a contraction of the radius of the rotating disc (compared to stationary scales) of one quarter of the 'Lorentz contraction' factor applied to the circumference.
1935: Paul Langevin essentially introduces a moving frame (or frame field in modern language) corresponding to the family of disk-riding observers, now called Langevin observers. (See the figure.) He also shows that distances measured by nearby Langevin observers correspond to a certain Riemannian metric, now called the Langevin-Landau-Lifschitz metric.
1937: Jan Weyssenhoff (now perhaps best known for his work on Cartan connections with zero curvature and nonzero torsion) notices that the Langevin observers are not hypersurface orthogonal. Therefore, the Langevin-Landau-Lifschitz metric is defined, not on some hyperslice of Minkowski spacetime, but on the quotient space obtained by replacing each world line with a point. This gives a three-dimensional smooth manifold which becomes a Riemannian manifold when we add the metric structure.
1946: Nathan Rosen shows that inertial observers instantaneously comoving with Langevin observers also measure small distances given by Langevin-Landau-Lifschitz metric.
1946: E. L. Hill analyzes relativistic stresses in a material in which (roughly speaking) the speed of sound equals the speed of light and shows these just cancel the radial expansion due to centrifugal force (in any physically realistic material, the relativistic effects lessen but do not cancel the radial expansion). Hill explains errors in earlier analyses by Arthur Eddington and others.
1952: C. Møller attempts to study null geodesics from the point of view of rotating observers (but incorrectly tries to use slices rather than the appropriate quotient space)
1968: V. Cantoni provides a straightforward, purely kinematical explanation of the paradox by showing that "one of the assumptions implicitly contained in the statement of Ehrenfest's paradox is not correct, the assumption being that the geometry of Minkowski space-time allows the passage of the disk from rest to rotation in such a fashion that both the length of the radius and the length of the periphery, measured with respect to the comoving frame of reference, remain unchanged"
1975: Øyvind Grøn writes a classic review paper about solutions of the "paradox".
1977: Grünbaum and Janis introduce a notion of physically realizable "non-rigidity" which can be applied to the spin-up of an initially non-rotating disk (this notion is not physically realistic for real materials from which one might make a disk, but it is useful for thought experiments).
1981: Grøn notices that Hooke's law is not consistent with Lorentz transformations and introduces a relativistic generalization.
1997: T. A. Weber explicitly introduces the frame field associated with Langevin observers.
2000: Hrvoje Nikolić points out that the paradox disappears when (in accordance with general theory of relativity) each piece of the rotating disk is treated separately, as living in its own local non-inertial frame.
2002: Rizzi and Ruggiero (and Bel) explicitly introduce the quotient manifold mentioned above.
2024: Jitendra Kumar analyzes the paradox for a ring and points out that the resolution depends on how the ring is brought from rest to rotational motion, whether by keeping the rest length of the periphery constant (in which case the periphery tears) or by keeping periphery's length in the inertial frame constant (in which case the periphery physically stretches, increasing its rest length).
Resolution of the paradox
Grøn states that the resolution of the paradox stems from the impossibility of synchronizing clocks in a rotating reference frame. If observers on the rotating circumference try to synchronise their clocks around the circumference to establish disc time, there is a time difference between the two end points where they meet.
The modern resolution can be briefly summarized as follows:
Small distances measured by disk-riding observers are described by the Langevin-Landau-Lifschitz metric, which is indeed well approximated (for small angular velocity) by the geometry of the hyperbolic plane, just as Kaluza had claimed.
For physically reasonable materials, during the spin-up phase a real disk expands radially due to centrifugal forces; relativistic corrections partially counteract (but do not cancel) this Newtonian effect. After a steady-state rotation is achieved and the disk has been allowed to relax, the geometry "in the small" is approximately given by the Langevin–Landau–Lifschitz metric.
See also
Born coordinates, for a coordinate chart adapted to observers riding on a rigidly rotating disk
Length contraction
Relativistic disk
Some other "paradoxes" in special relativity
Bell's spaceship paradox
Ladder paradox
Physical paradox
Supplee's paradox
Twin paradox
Notes
Citations
Works cited
A few papers of historical interest
A few classic "modern" references
See Section 84 and the problem at the end of Section 89.
Some experimental work and subsequent discussion
Selected recent sources
Studies general non-inertial motion of a point particle and treats rotating disk as a collection of such non-inertial particles. See also the eprint version.
Studies a coordinate chart constructed using radar distance "in the large" from a single Langevin observer. See also the eprint version.
They give a precise definition of the "space of the disk" (non-Euclidean), and solve the paradox without extraneous dynamic considerations. See also the eprint version.
This book contains a comprehensive historical survey by Øyvind Grøn, on which the "brief history" in this article is based, and some other papers on the Ehrenfest paradox and related controversies. Hundreds of additional references may be found in this book, particularly the paper by Grøn.
Considers two ways by which a ring is brought from rest to rotational motion and resolves the paradox for those two cases. See also the eprint version.
External links
The Rigid Rotating Disk in Relativity, by Michael Weiss (1995), from the sci.physics FAQ.
Einstein's Carousel (section 3.5.4), by B. Crowell
Relativistic paradoxes
Theory of relativity | Ehrenfest paradox | [
"Physics"
] | 3,010 | [
"Theory of relativity"
] |
2,819,718 | https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28frequency%29 | The following list illustrates various frequencies, measured in hertz, according to decade in the order of their magnitudes, with the negative decades illustrated by events and positive decades by acoustic or electromagnetic uses.
See also
Hertz
Orders of magnitude (rotational speed)
References
Frequency
Temporal rates | Orders of magnitude (frequency) | [
"Physics",
"Mathematics"
] | 56 | [
"Temporal quantities",
"Physical quantities",
"Quantity",
"Temporal rates",
"Orders of magnitude",
"Units of measurement"
] |
2,820,493 | https://en.wikipedia.org/wiki/Flexibility%20method | In structural engineering, the flexibility method, also called the method of consistent deformations, is the traditional method for computing member forces and displacements in structural systems. Its modern version formulated in terms of the members' flexibility matrices also has the name the matrix force method due to its use of member forces as the primary unknowns.
Member flexibility
Flexibility is the inverse of stiffness. For example, consider a spring that has Q and q as, respectively, its force and deformation:
The spring stiffness relation is Q = k q where k is the spring stiffness.
Its flexibility relation is q = f Q, where f is the spring flexibility.
Hence, f = 1/k.
A typical member flexibility relation has the following general form:
q^m = f^m Q^m + q^om
where
m = member number m.
q^m = vector of member's characteristic deformations.
f^m = member flexibility matrix which characterises the member's susceptibility to deform under forces.
Q^m = vector of member's independent characteristic forces, which are unknown internal forces. These independent forces give rise to all member-end forces by member equilibrium.
q^om = vector of member's characteristic deformations caused by external effects (such as known forces and temperature changes) applied to the isolated, disconnected member (i.e. with Q^m = 0).
For a system composed of many members interconnected at points called nodes, the members' flexibility relations can be put together into a single matrix equation, dropping the superscript m:
q = f Q + q^o
where M is the total number of members' characteristic deformations or forces in the system.
Unlike the matrix stiffness method, where the members' stiffness relations can be readily integrated via nodal equilibrium and compatibility conditions, the present flexibility form of equation () poses serious difficulty. With member forces as the primary unknowns, the number of nodal equilibrium equations is insufficient for solution, in general—unless the system is statically determinate.
Nodal equilibrium equations
To resolve this difficulty, first we make use of the nodal equilibrium equations in order to reduce the number of independent unknown member forces. The nodal equilibrium equation for the system has the form:
where
: Vector of nodal forces at all N degrees of freedom of the system.
: The resulting nodal equilibrium matrix
: The vector of forces arising from loading on the members.
In the case of determinate systems, matrix b is square and the solution for Q can be found immediately from () provided that the system is stable.
The primary system
For statically indeterminate systems, M > N, and hence, we can augment () with I = M−N equations of the form:
The vector X is the so-called vector of redundant forces and I is the degree of statical indeterminacy of the system. We usually choose j, k, …, , and such that is a support reaction or an internal member-end force. With suitable choices of redundant forces, the equation system () augmented by () can now be solved to obtain:
Substitution into () gives:
Equations () and () are the solution for the primary system which is the original system that has been rendered statically determinate by cuts that expose the redundant forces . Equation () effectively reduces the set of unknown forces to .
Compatibility equation and solution
Next, we need to set up compatibility equations in order to find . The compatibility equations restore the required continuity at the cut sections by setting the relative displacements at the redundants X to zero. That is, using the unit dummy force method:
where
Equation () can be solved for X, and the member forces are next found from () while the nodal displacements can be found by
where
is the system flexibility matrix.
Supports' movements taking place at the redundants can be included in the right-hand-side of equation (), while supports' movements at other places must be included in and as well.
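As a concrete illustration of the procedure, the following Python sketch (an illustrative example only; the spring stiffnesses and the load are assumed values) applies the force method to a system that is statically indeterminate to the first degree—two springs in series between rigid walls with a force applied at the shared node—and cross-checks the resulting displacement against the stiffness method.

```python
# Force (flexibility) method for a system that is statically indeterminate
# to the first degree: two springs in series between rigid walls, with an
# axial force P applied at the middle node.  Values are illustration data.
import numpy as np

k1, k2 = 200.0, 50.0            # spring stiffnesses [N/mm] (assumed)
f1, f2 = 1.0 / k1, 1.0 / k2     # member flexibilities
P = 10.0                        # applied nodal force [N] (assumed)

# Primary system: cut spring 2 and treat its force X as the redundant.
# Member forces in terms of the redundant: Q1 = P + X, Q2 = X.
# Compatibility: the relative displacement at the cut must vanish,
#   f1*(P + X) + f2*X = 0   ->   X = -f1*P / (f1 + f2)
X = -f1 * P / (f1 + f2)
Q1, Q2 = P + X, X

# Nodal displacement recovered from the member flexibility relation q = f*Q.
u = f1 * Q1

print(f"member forces: Q1 = {Q1:.3f} N, Q2 = {Q2:.3f} N")
print(f"node displacement: {u:.5f} mm")

# Cross-check against the stiffness (displacement) method: u = P/(k1 + k2).
assert np.isclose(u, P / (k1 + k2))
```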
Advantages and disadvantages
While the choice of redundant forces in () appears to be arbitrary and troublesome for automatic computation, this objection can be overcome by proceeding from () directly to () using a modified Gauss–Jordan elimination process. This is a robust procedure that automatically selects a good set of redundant forces to ensure numerical stability.
It is apparent from the above process that the matrix stiffness method is easier to comprehend and to implement for automatic computation. It is also easier to extend for advanced applications such as non-linear analysis, stability, vibrations, etc. For these reasons, the matrix stiffness method is the method of choice for use in general purpose structural analysis software packages. On the other hand, for linear systems with a low degree of statical indeterminacy, the flexibility method has the advantage of being computationally less intensive. This advantage, however, is a moot point as personal computers are widely available and more powerful. The main redeeming factor in learning this method nowadays is its educational value in imparting the concepts of equilibrium and compatibility in addition to its historical value. In contrast, the procedure of the direct stiffness method is so mechanical that it risks being used without much understanding of the structural behaviors.
The above arguments were valid up to the late 1990s. However, recent advances in numerical computing have led to a comeback of the force method, especially in the case of nonlinear systems. New frameworks have been developed that allow "exact" formulations irrespective of the type or nature of the system nonlinearities. The main advantages of the flexibility method are that the result error is independent of the discretization of the model and that it is indeed a very fast method. For instance, the elastic-plastic solution of a continuous beam using the force method requires only 4 beam elements whereas a commercial "stiffness based" FEM code requires 500 elements in order to give results with the same accuracy. To conclude, in cases where the solution of the problem requires repeated evaluations of the force field, as in structural optimization or system identification, the efficiency of the flexibility method is indisputable.
See also
Finite element method in structural mechanics
Structural analysis
Stiffness method
References
External links
Consistent Deformations - Force Method
Structural analysis
Finite element method | Flexibility method | [
"Engineering"
] | 1,249 | [
"Structural engineering",
"Structural analysis",
"Mechanical engineering",
"Aerospace engineering"
] |
2,820,509 | https://en.wikipedia.org/wiki/Electrocoagulation | Electrocoagulation (EC) is a technique used for wastewater treatment, wash water treatment, industrially processed water, and medical treatment. Electrocoagulation has become a rapidly growing area of wastewater treatment due to its ability to remove contaminants that are generally more difficult to remove by filtration or chemical treatment systems, such as emulsified oil, total petroleum hydrocarbons, refractory organics, suspended solids, and heavy metals. There are many brands of electrocoagulation devices available, and they can range in complexity from a simple anode and cathode to much more complex devices with control over electrode potentials, passivation, anode consumption, cell REDOX potentials as well as the introduction of ultrasonic sound, ultraviolet light and a range of gases and reactants to achieve so-called Advanced Oxidation Processes for refractory or recalcitrant organic substances.
Water and Wastewater Treatment
With the latest technologies, reduction of electricity requirements, and miniaturization of the needed power supplies, EC systems have now become affordable for water treatment plants and industrial processes worldwide.
Background
Electrocoagulation ("electro", meaning to apply an electrical charge to water, and "coagulation", meaning the process of changing the particle surface charge, allowing suspended matter to form an agglomeration) is an advanced and economical water treatment technology. It effectively removes suspended solids to sub-micrometre levels, breaks emulsions such as oil and grease or latex, and oxidizes and eradicates heavy metals from water without the use of filters or the addition of separation chemicals.
A wide range of wastewater treatment techniques are known, which includes biological processes for nitrification, denitrification and phosphorus removal, as well as a range of physico-chemical processes that require chemical addition. The commonly used physico-chemical treatment processes are filtration, air stripping, ion exchange, chemical precipitation, chemical oxidation, carbon adsorption, ultrafiltration (UF), reverse osmosis (RO), electrodialysis, volatilization, and gas stripping.
Benefits
Mechanical filtration addresses only two issues in wash rack wash water: suspended solids larger than 30 μm, and free oil and grease. Emulsified oil and grease cause damage to the media filters, resulting in high maintenance costs. Electrocoagulation does not use particle size to provide physical separation.
Chemical treatment addresses suspended solids, oil and grease, and some heavy metals, but may require the addition of various flocculants and coagulants as well as pH adjustments for proper treatment. This technology requires the addition of chemicals, which can make treatment expensive, messy, hazardous and labor-intensive. This process also requires the addition of compressed air for flotation of coagulated contaminants. Generally, filtration is used only as a post-treatment polishing step.
Technology
Treatment of wastewater and wash water by EC has been practiced for most of the 20th century with increasing popularity. In the last decade, this technology has been increasingly used in the United States, South America and Europe for treatment of industrial wastewater containing metals. It has also been noted that in North America EC has been used primarily to treat wastewater from pulp and paper industries, mining and metal-processing industries. A large one-thousand gallon per minute cooling tower application in El Paso, Texas illustrates electrocoagulation's growing recognition and acceptance in the industrial community. In addition, EC has been applied to treat water containing foodstuff waste, oil wastes, dyes, output from public transit and marinas, wash water, ink, suspended particles, chemical and mechanical polishing waste, organic matter from landfill leachates, defluorination of water, synthetic detergent effluents, and solutions containing heavy metals. Electrocoagulation is not typically used for domestic wastewater treatment.
Coagulation process
Coagulation is one of the most important physico-chemical processes used in water treatment. Ions (heavy metals) and colloids (organic and inorganic) are mostly held in solution by electrical charges. The addition of ions with opposite charges destabilizes the colloids, allowing them to coagulate. Coagulation can be achieved by a chemical coagulant or by electrical methods. Alum [Al2(SO4)3·18H2O] is one such chemical coagulant, which has long been widely used for wastewater treatment.
The mechanism of coagulation has been the subject of continual review. It is generally accepted that coagulation is brought about primarily by the reduction of the net surface charge to a point where the colloidal particles, previously stabilized by electrostatic repulsion, can approach closely enough for van der Waals forces to hold them together and allow aggregation. The reduction of the surface charge is a consequence of the decrease of the repulsive potential of the electrical double layer by the presence of an electrolyte having opposite charge. In the EC process, the coagulant is generated in situ by electrolytic oxidation of an appropriate anode material. In this process, charged ionic species—metals or otherwise—are removed from wastewater by allowing it to react with an ion having an opposite charge, or with floc of metallic hydroxides generated within the effluent.
Electrocoagulation offers an alternative to the use of metal salts or polymers and polyelectrolyte addition for breaking stable emulsions and suspensions. The technology removes metals, colloidal solids and particles, and soluble inorganic pollutants from aqueous media by introducing highly charged polymeric metal hydroxide species. These species neutralize the electrostatic charges on suspended solids and oil droplets to facilitate agglomeration or coagulation and resultant separation from the aqueous phase. The treatment prompts the precipitation of certain metals and salts:
Chemical coagulation has been used for decades to destabilize suspensions and to effect precipitation of soluble metals species, as well as other inorganic species from aqueous streams, thereby permitting their removal through sedimentation or filtration. Alum, lime and/or polymers have been the chemical coagulants used. These processes, however, tend to generate large volumes of sludge with high bound water content that can be slow to filter and difficult to dewater. These treatment processes also tend to increase the total dissolved solids (TDS) content of the effluent, making it unacceptable for reuse within industrial applications.
Although the electrocoagulation mechanism resembles chemical coagulation in that the cationic species are responsible for the neutralization of surface charges, the characteristics of the electrocoagulated floc differ dramatically from those generated by chemical coagulation. An electrocoagulated floc tends to contain less bound water, is more shear resistant and is more readily filterable.
Description
In its simplest form, an electrocoagulation reactor is made up of an electrolytic cell with one anode and one cathode. When connected to an external power source, the anode material will electrochemically corrode due to oxidation, while the cathode will be subjected to passivation.
An EC system essentially consists of pairs of conductive metal plates in parallel, which act as monopolar electrodes. It furthermore requires a direct current power source, a resistance box to regulate the current density and a multimeter to read the current values. The conductive metal plates are commonly known as "sacrificial electrodes." The sacrificial anode lowers the dissolution potential of the anode and minimizes the passivation of the cathode. The sacrificial anodes and cathodes can be of the same or of different materials.
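The amount of coagulant released by the sacrificial anode can be estimated from Faraday's law of electrolysis, m = I·t·M/(z·F). The following Python sketch is a rough, illustrative calculation only; the current, treatment time, batch volume and electrode materials are assumed values, and real cells may deviate from ideal faradaic behaviour.

```python
# Rough estimate of coagulant dose from a sacrificial anode using
# Faraday's law of electrolysis: m = I * t * M / (z * F).
# All operating values below are assumptions chosen only for illustration.

F = 96485.0           # Faraday constant [C/mol]

# (molar mass [g/mol], electrons transferred per ion) for common anodes
materials = {
    "aluminium (Al -> Al3+)": (26.98, 3),
    "iron (Fe -> Fe2+)":      (55.85, 2),
}

I = 5.0               # applied current [A]         (assumed)
t = 30 * 60.0         # treatment time [s] = 30 min (assumed)
V = 20.0              # treated batch volume [L]    (assumed)

for name, (M, z) in materials.items():
    m = I * t * M / (z * F)          # dissolved metal [g]
    dose = 1000.0 * m / V            # resulting dose [mg/L]
    print(f"{name}: {m:.2f} g released, ~{dose:.0f} mg/L in {V:.0f} L")
```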
The arrangement of monopolar electrodes with cells in series is electrically similar to a single cell with many electrodes and interconnections. In series cell arrangement, a higher potential difference is required for a given current to flow because the cells connected in series have higher resistance. The same current would, however, flow through all the electrodes. In contrast, in parallel or bipolar arrangement the electric current is divided between all the electrodes in relation to the resistance of the individual cells, and each face on the electrode has a different polarity.
During electrolysis, the positive side undergoes anodic reactions, while on the negative side, cathodic reactions are encountered. Consumable metal plates, such as iron or aluminum, are usually used as sacrificial electrodes to continuously produce ions in the water. The released ions neutralize the charges of the particles and thereby initiate coagulation. The released ions remove undesirable contaminants either by chemical reaction and precipitation, or by causing the colloidal materials to coalesce, which can then be removed by flotation. In addition, as water containing colloidal particulates, oils, or other contaminants move through the applied electric field, there may be ionization, electrolysis, hydrolysis, and free-radical formation which can alter the physical and chemical properties of water and contaminants. As a result, the reactive and excited state causes contaminants to be released from the water and destroyed or made less soluble.
It is important to note that electrocoagulation technology cannot remove infinitely soluble matter. Therefore, ions with molecular weights smaller than Ca+2 or Mg+2 cannot be dissociated from the aqueous medium.
Reactions within the electrocoagulation reactor
Within the electrocoagulation reactor, several distinct electrochemical reactions are produced independently. These are:
Seeding, resulting from the anode reduction of metal ions that become new centers for larger, stable, insoluble complexes that precipitate as complex metal ions.
Emulsion Breaking, resulting from the oxygen and hydrogen ions that bond into the water receptor sites of emulsified oil molecules creating a water-insoluble complex separating water from oil, driller's mud, dyes, inks, fatty acids, etc.
Halogen Complexing, as the metal ions bind themselves to chlorines in a chlorinated hydrocarbon molecule resulting in a large insoluble complex separating water from pesticides, herbicides, chlorinated PCBs, etc.
Bleaching by the oxygen ions produced in the reaction chamber oxidizes dyes, cyanides, bacteria, viruses, biohazards, etc. Electron flooding of the electrodes forces ions to form and carry charge into the water, eliminating the polar effect of the water complex and allowing colloidal materials to precipitate, while the current-controlled ion transport between the electrodes creates an osmotic pressure that typically ruptures bacteria, cysts, and viruses.
Oxidation and reduction reactions are forced to their natural end point within the reaction tank, which speeds up the process that occurs naturally in wet chemistry, where concentration gradients and solubility products (Ksp) are the chief determinants of whether reactions reach stoichiometric completion.
Electrocoagulation-induced pH swings toward neutral.
Optimizing reactions
Careful selection of the reaction tank material is essential along with control of the current, flow rate and pH. Electrodes can be made of iron, aluminum, titanium, graphite or other materials, depending upon the wastewater to be treated and the contaminants to be removed. Temperature and pressure appear to have only a minor effect on the process.
In the EC process the water-contaminant mixture separates into a floating layer, a mineral-rich flocculated sediment, and clear water. The floating layer is generally removed by means of an overflow weir or similar removal method. The aggregated flocculent mass settles either in the reaction vessel or in subsequent settling tanks due to gravitational force.
Following removal to a sludge collection tank, it is typically dewatered to a semi-dry cake using a mechanical screw press. The clear, treated (supernatant) water is typically then pumped to a buffer tank for later disposal and/or reuse in the plant's designated process.
Advantages
EC requires simple equipment and is easy to operate, with sufficient operational latitude to handle most problems encountered during operation.
Wastewater treated by EC gives palatable, clear, colorless and odorless water.
Sludge formed by EC tends to be readily settleable and easy to de-water, compared to conventional alum or ferric hydroxide sludges, because it consists mainly of metallic oxides/hydroxides with no residual charge.
Flocs formed by EC are similar to chemical floc, except that EC floc tends to be much larger, contains less bound water, is acid-resistant and more stable, and therefore, can be separated faster by filtration.
EC can produce effluent with less TDS content as compared with chemical treatments, particularly if the metal ions can be precipitated as either hydroxides or carbonates (such as magnesium and calcium). EC generally has little if any impact on sodium and potassium ions in solution.
The EC process has the advantage of removing the smallest colloidal particles, because the applied electric field neutralises any residual charge, thereby facilitating the coagulation.
The EC process generally avoids excessive use of chemicals and so there is reduced requirement to neutralize excess chemicals and less possibility of secondary pollution caused by chemical substances added at high concentration as when chemical coagulation of wastewater is used.
The gas bubbles produced during electrolysis can conveniently carry the pollutant components to the top of the solution, where they can be more easily concentrated, collected and removed by a motorised skimmer.
The electrolytic processes in the EC cell are controlled electrically and with no moving parts, thus requiring less maintenance.
Dosing incoming waste water with sodium hypochlorite assists reduction of biochemical oxygen demand (BOD) and consequent chemical oxygen demand (COD) although this should be avoided for wastewater containing high levels of organic compounds or dissolved ammonia (NH4+) due to formation of trihalogenated methanes (THMs) or other chlorinated organics. Sodium hypochlorite can be generated electrolytically in an E cell using platinum and similar inert electrodes or by using external electrochlorinators.
Due to the excellent EC removal of suspended solids and the simplicity of the EC operation, tests conducted for the U.S. Office of Naval Research concluded that the most promising application of EC in a membrane system was found to be as pretreatment to a multi-membrane system of UF/RO or microfiltration/reverse osmosis (MF/RO). In this function the EC provides protection of the low-pressure membrane that is more general than that provided by chemical coagulation and more effective. EC is very effective at removing a number of membrane fouling species (such as silica, alkaline earth metal hydroxides and transition group metals) as well as removing many species that chemical coagulation alone cannot remove. (see Refractory Organics)
Medical treatment
A fine wire probe or other delivery mechanism is used to transmit radio waves to tissues near the probe. Molecules in the tissue are caused to vibrate, leading to a rapid increase in temperature, causing coagulation of the proteins in the tissue and effectively killing the tissue. At higher-powered applications, full desiccation of tissue is possible.
See also
List of wastewater treatment technologies
Industrial wastewater treatment
Coagulation (water treatment)
References
Environmental engineering | Electrocoagulation | [
"Chemistry",
"Engineering"
] | 3,186 | [
"Chemical engineering",
"Civil engineering",
"Environmental engineering"
] |
3,783,853 | https://en.wikipedia.org/wiki/Bond%20graph | A bond graph is a graphical representation of a physical dynamic system. It allows the conversion of the system into a state-space representation. It is similar to a block diagram or signal-flow graph, with the major difference that the arcs in bond graphs represent bi-directional exchange of physical energy, while those in block diagrams and signal-flow graphs represent uni-directional flow of information. Bond graphs are multi-energy domain (e.g. mechanical, electrical, hydraulic, etc.) and domain neutral. This means a bond graph can incorporate multiple domains seamlessly.
The bond graph is composed of the "bonds" which link together "single-port", "double-port" and "multi-port" elements (see below for details). Each bond represents the instantaneous flow of energy () or power. The flow in each bond is denoted by a pair of variables called power variables, akin to conjugate variables, whose product is the instantaneous power of the bond. The power variables are broken into two parts: flow and effort. For example, for the bond of an electrical system, the flow is the current, while the effort is the voltage. By multiplying current and voltage in this example you can get the instantaneous power of the bond.
A bond has two other features described briefly here, and discussed in more detail below. One is the "half-arrow" sign convention. This defines the assumed direction of positive energy flow. As with electrical circuit diagrams and free-body diagrams, the choice of positive direction is arbitrary, with the caveat that the analyst must be consistent throughout with the chosen definition. The other feature is the "causality". This is a vertical bar placed on only one end of the bond. It is not arbitrary. As described below, there are rules for assigning the proper causality to a given port, and rules for the precedence among ports. Causality explains the mathematical relationship between effort and flow. The positions of the causalities show which of the power variables are dependent and which are independent.
If the dynamics of the physical system to be modeled operate on widely varying time scales, fast continuous-time behaviors can be modeled as instantaneous phenomena by using a hybrid bond graph. Bond graphs were invented by Henry Paynter.
Systems for bond graph
Many systems can be expressed in terms used in bond graph. These terms are expressed in the table below.
Conventions for the table below:
is the active power;
is a matrix object;
is a vector object;
is the Hermitian conjugate of ; it is the complex conjugate of the transpose of . If is a scalar, then the Hermitian conjugate is the same as the complex conjugate;
is the Euler notation for differentiation, where:
Vergent-factor:
Other systems:
Thermodynamic power system (flow is entropy-rate and effort is temperature)
Electrochemical power system (flow is chemical activity and effort is chemical potential)
Thermochemical power system (flow is mass-rate and effort is mass specific enthalpy)
Macroeconomics currency-rate system (displacement is commodity and effort is price per commodity)
Microeconomics currency-rate system (displacement is population and effort is GDP per capita)
Tetrahedron of state
The tetrahedron of state is a tetrahedron that graphically shows the conversion between effort and flow. The adjacent image shows the tetrahedron in its generalized form. The tetrahedron can be modified depending on the energy domain.
Using the tetrahedron of state, one can find a mathematical relationship between any variables on the tetrahedron. This is done by following the arrows around the diagram and multiplying any constants along the way. For example, if you wanted to find the relationship between generalized flow and generalized displacement, you would start at the flow f and then integrate it to get the displacement q. More examples of equations can be seen below.
Relationship between generalized displacement and generalized flow.
Relationship between generalized flow and generalized effort.
Relationship between generalized flow and generalized momentum.
Relationship between generalized momentum and generalized effort.
Relationship between generalized flow and generalized effort, involving the constant C.
All of the mathematical relationships remain the same when switching energy domains, only the symbols change. This can be seen with the following examples.
Relationship between displacement and velocity.
Relationship between current and voltage, this is also known as Ohm's law.
Relationship between force and displacement, also known as Hooke's law. The negative sign is dropped in this equation because the sign is factored into the way the arrow is pointing in the bond graph.
For power systems, the formula for the frequency of resonance is ω0 = 1/√(IC), with generalized inertia I and compliance C.
For power density systems, the formula for the velocity of the resonance wave is v = 1/√(I′C′), where I′ and C′ are the inertia and compliance per unit length.
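As a purely illustrative check of these relations, the following Python sketch evaluates the resonance formula for an electrical and a mechanical parameter set (the numerical values are arbitrary) and numerically verifies that displacement is the time integral of flow.

```python
# Numerical illustration of two tetrahedron-of-state relations, using
# arbitrary parameter values for two energy domains.
import numpy as np

def resonance_omega(I, C):
    """Resonant angular frequency of a generalized inertia-compliance pair."""
    return 1.0 / np.sqrt(I * C)

# Electrical domain: I -> inductance [H], C -> capacitance [F]
print(f"LC circuit        : w0 = {resonance_omega(10e-3, 100e-6):7.1f} rad/s")
# Mechanical domain: I -> mass [kg], C -> compliance = 1/stiffness [m/N]
print(f"mass-spring system: w0 = {resonance_omega(2.0, 1.0 / 800.0):7.1f} rad/s")

# Displacement as the time integral of flow, q(t) = integral of f dt,
# checked numerically with a trapezoidal sum for f(t) = cos(w*t).
w = 5.0
t = np.linspace(0.0, 2.0, 20001)
f = np.cos(w * t)
q_numeric = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))
q_exact = np.sin(w * t[-1]) / w
print(f"q(2) numeric = {q_numeric:.6f}, analytic = {q_exact:.6f}")
```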
Components
If an engine is connected to a wheel through a shaft, the power is being transmitted in the rotational mechanical domain, meaning the effort and the flow are torque (τ) and angular velocity (ω) respectively. A word bond graph is a first step towards a bond graph, in which words define the components. As a word bond graph, this system would look like:
A half-arrow is used to provide a sign convention, so if the engine is doing work when τ and ω are positive, then the diagram would be drawn:
This system can also be represented in a more general method. This involves changing from using the words, to symbols representing the same items. These symbols are based on the generalized form, as explained above. As the engine is applying a torque to the wheel, it will be represented as a source of effort for the system. The wheel can be represented by an impedance on the system. Further, the torque and angular velocity symbols are dropped and replaced with the generalized symbols for effort and flow. While not necessary in the example, it is common to number the bonds to keep track of them in equations. The simplified diagram can be seen below.
Given that effort is always above the flow on the bond, it is also possible to drop the effort and flow symbols altogether, without losing any relevant information. However, the bond number should not be dropped. The example can be seen below.
The bond number will be important later when converting from the bond graph to state-space equations.
Association of elements
Series association
Suppose that an element has the following behavior:
where is a generic function (it can even differentiate/integrate its input) and is the element's constant. Then, suppose that in a 1-junction you have many of this type of element. Then the total voltage across the junction is:
Parallel association
Suppose that an element has the following behavior:
where is a generic function (it can even differentiate/integrate its input) and is the element's constant. Then, suppose that in a 0-junction you have many of this type of element. Then it is valid:
Single-port elements
Single-port elements are elements in a bond graph that can have only one port.
Sources and sinks
Sources are elements that represent the input for a system. They will either input effort or flow into a system. They are denoted by a capital "S" with either a lower case "e" or "f" for effort or flow respectively. Sources will always have the arrow pointing away from the element. Examples of sources include: motors (source of effort, torque), voltage sources (source of effort), and current sources (source of flow).
where J indicates a junction.
Sinks are elements that represent the output for a system. They are represented the same way as sources, but have the arrow pointing into the element instead of away from it.
Inertia
Inertia elements are denoted by a capital "I", and always have power flowing into them. Inertia elements are elements that store energy. Most commonly these are a mass for mechanical systems, and inductors for electrical systems.
Resistance
Resistance elements are denoted by a capital "R", and always have power flowing into them. Resistance elements are elements that dissipate energy. Most commonly these are a damper, for mechanical systems, and resistors for electrical systems.
Compliance
Compliance elements are denoted by a capital "C", and always have power flowing into them. Compliance elements are elements that store potential energy. Most commonly these are springs for mechanical systems, and capacitors for electrical systems.
Two-port elements
These elements have two ports. They are used to change the power between or within a system. When converting from one to the other, no power is lost during the transfer. The elements have a constant that will be given with it. The constant is called a transformer constant or gyrator constant depending on which element is being used. These constants will commonly be displayed as a ratio below the element.
Transformer
A transformer applies a relationship between flow in flow out, and effort in effort out. Examples include an ideal electrical transformer or a lever.
Denoted
where the r denotes the modulus of the transformer. This means
and
Gyrator
A gyrator applies a relationship between flow in effort out, and effort in flow out. An example of a gyrator is a DC motor, which converts voltage (electrical effort) into angular velocity (angular mechanical flow).
meaning that and
Multi-port elements
Junctions, unlike the other elements can have any number of ports either in or out. Junctions split power across their ports. There are two distinct junctions, the 0-junction and the 1-junction which differ only in how effort and flow are carried across. The same junction in series can be combined, but different junctions in series cannot.
0-junctions
0-junctions behave such that all effort values (and its time integral/derivative) are equal across the bonds, but the sum of the flow values in equals the sum of the flow values out, or equivalently, all flows sum to zero. In an electrical circuit, the 0-junction is a node and represents a voltage shared by all components at that node. In a mechanical circuit, the 0-junction is a joint among components, and represents a force shared by all components connected to it.
An example is shown below.
Resulting equations:
1-junctions
1-junctions behave opposite to 0-junctions. 1-junctions behave such that all flow values (and their time integrals/derivatives) are equal across the bonds, but the sum of the effort values in equals the sum of the effort values out, or equivalently, all efforts sum to zero. In an electrical circuit, the 1-junction represents a series connection among components. In a mechanical circuit, the 1-junction represents a velocity shared by all components connected to it.
An example is shown below.
Resulting equations:
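In general, the resulting equations at any junction take the same form: one chain of equal variables and one signed sum equal to zero. As a purely illustrative check (the numbers are made up), the following Python sketch verifies these balance conditions for three-bond 0- and 1-junctions.

```python
# Balance conditions at bond-graph junctions, checked for a three-bond
# example with made-up numbers.  Signs follow the half-arrow convention:
# +1 for power directed into the junction, -1 for power directed out.
import numpy as np

def check_zero_junction(efforts, flows, signs):
    """0-junction: all efforts equal, signed flows sum to zero."""
    efforts, flows, signs = map(np.asarray, (efforts, flows, signs))
    return np.allclose(efforts, efforts[0]) and np.isclose(np.dot(signs, flows), 0.0)

def check_one_junction(efforts, flows, signs):
    """1-junction: all flows equal, signed efforts sum to zero."""
    efforts, flows, signs = map(np.asarray, (efforts, flows, signs))
    return np.allclose(flows, flows[0]) and np.isclose(np.dot(signs, efforts), 0.0)

# 0-junction example: a common effort of 12 (e.g. volts); the flow entering
# on bond 1 splits over bonds 2 and 3.
print(check_zero_junction(efforts=[12.0, 12.0, 12.0],
                          flows=[3.0, 1.0, 2.0],
                          signs=[+1, -1, -1]))      # True

# 1-junction example: a common flow of 2 (e.g. amperes); the effort on
# bond 1 is taken up by bonds 2 and 3.
print(check_one_junction(efforts=[10.0, 6.0, 4.0],
                         flows=[2.0, 2.0, 2.0],
                         signs=[+1, -1, -1]))       # True
```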
Causality
Bond graphs have a notion of causality, indicating which side of a bond determines the instantaneous effort and which determines the instantaneous flow. In formulating the dynamic equations that describe the system, causality defines, for each modeling element, which variable is dependent and which is independent. By propagating the causation graphically from one modeling element to the other, analysis of large-scale models becomes easier. Completing causal assignment in a bond graph model will allow the detection of modeling situation where an algebraic loop exists; that is the situation when a variable is defined recursively as a function of itself.
As an example of causality, consider a capacitor in series with a battery. It is not physically possible to charge a capacitor instantly, so anything connected in parallel with a capacitor will necessarily have the same voltage (effort variable) as that across the capacitor. Similarly, an inductor cannot change flux instantly and so any component in series with an inductor will necessarily have the same flow as the inductor. Because capacitors and inductors are passive devices, they cannot maintain their respective voltage and flow indefinitely—the components to which they are attached will affect their respective voltage and flow, but only indirectly by affecting their current and voltage respectively.
Note: Causality is a symmetric relationship. When one side "causes" effort, the other side "causes" flow.
In bond graph notation, a causal stroke may be added to one end of the power bond to indicate that this side is defining the flow. Consequently, the side opposite from the causal stroke controls the effort.
Sources of flow () define flow, so they host the causal stroke:
Sources of effort () define effort, so the other end hosts the causal stroke:
Consider a constant-torque motor driving a wheel, i.e. a source of effort (). That would be drawn as follows:
Symmetrically, the side with the causal stroke (in this case the wheel) defines the flow for the bond.
Causality results in compatibility constraints. Clearly only one end of a power bond can define the effort, and so only one end of the bond (the other end) can have a causal stroke. In addition, the two passive components with time-dependent behavior, I and C, can only have one sort of causation: an I component determines flow; a C component defines effort. So from a junction, the preferred causal orientation is as follows:
The reason that this is the preferred method for these elements can be further analyzed if you consider the equations they would give shown by the tetrahedron of state.
The resulting equations involve the integral of the independent power variable. This is preferred over the result of having the causality the other way, which results in derivative. The equations can be seen below.
It is possible for a bond graph to have a causal bar on one of these elements in the non-preferred manner. In such a case a "causal conflict" is said to have occurred at that bond. The results of a causal conflict are only seen when writing the state-space equations for the graph. It is explained in more details in that section.
A resistor has no time-dependent behavior: apply a voltage and get a flow instantly, or apply a flow and get a voltage instantly, thus a resistor can be at either end of a causal bond:
Transformers are passive, neither dissipating nor storing energy, so causality passes through them:
A gyrator transforms flow to effort and effort to flow, so if flow is caused on one side, effort is caused on the other side and vice versa:
Junctions
In a 0-junction, efforts are equal; in a 1-junction, flows are equal. Thus, with causal bonds, only one bond can cause the effort in a 0-junction and only one can cause the flow in a 1-junction. Thus, if the causality of one bond of a junction is known, the causality of the others is also known. That one bond is called the 'strong bond'.
In a nutshell, 0-junctions must have a single causal bar, 1-junctions must have all but one causal bars.
Determining causality
In order to determine the causality of a bond graph certain steps must be followed. Those steps are:
Draw Source Causal Bars
Draw Preferred causality for C and I bonds
Draw causal bars for 0 and 1 junctions, transformers and gyrators
Draw R bond causal bars
If a causal conflict occurs, change C or I bond to differentiation
A walk-through of the steps is shown below.
The first step is to draw causality for the sources, over which there is only one. This results in the graph below.
The next step is to draw the preferred causality for the C bonds.
Next apply the causality for the 0 and 1 junctions, transformers, and gyrators.
However, there is an issue with the 0-junction on the left. The 0-junction has two causal bars at the junction, but the 0-junction wants one and only one at the junction. This was caused by placing one of the C bonds in its preferred causality. The only way to fix this is to flip that causal bar. This results in a causal conflict; the corrected version of the graph is below, with the star representing the causal conflict.
Converting from other systems
One of the main advantages of using bond graphs is that once the bond graph has been obtained, the original energy domain no longer matters. Below are some of the steps to apply when converting from an energy domain to a bond graph.
Electromagnetic
The steps for solving an Electromagnetic problem as a bond graph are as follows:
Place an 0-junction at each node
Insert Sources, R, I, C, TR, and GY bonds with 1 junctions
Ground (both sides if a transformer or gyrator is present)
Assign power flow direction
Simplify
These steps are shown more clearly in the examples below.
Linear mechanical
The steps for solving a Linear Mechanical problem as a bond graph are as follows:
Place 1-junctions for each distinct velocity (usually at a mass)
Insert R and C bonds at their own 0-junctions between the 1 junctions where they act
Insert Sources and I bonds on the 1 junctions where they act
Assign power flow direction
Simplify
These steps are shown more clearly in the examples below.
Simplifying
The simplifying step is the same regardless if the system was electromagnetic or linear mechanical. The steps are:
Remove Bond of zero power (due to ground or zero velocity)
Remove 0 and 1 junctions with less than three bonds
Simplify parallel power
Combine 0 junctions in series
Combine 1 junctions in series
These steps are shown more clearly in the examples below.
Parallel power
Parallel power is when power runs in parallel in a bond graph. An example of parallel power is shown below.
Parallel power can be simplified by recalling the relationship between effort and flow for 0- and 1-junctions. To solve parallel power, first write down all of the equations for the junctions. For the example provided, the equations can be seen below. (Note the bond number that each effort/flow variable represents.)
By manipulating these equations you can arrange them such that you can find an equivalent set of 0 and 1-junctions to describe the parallel power.
For example, because and you can replace the variables in the equation resulting in and since , we now know that . This relationship of two effort variables equaling can be explained by an 0-junction. Manipulating other equations you can find that which describes the relationship of a 1-junction. Once you have determined the relationships that you need you can redraw the parallel power section with the new junctions. The result for the example show is seen below.
Examples
Simple electrical system
A simple electrical circuit consisting of a voltage source, resistor, and capacitor in series.
The first step is to draw 0-junctions at all of the nodes:
The next step is to add all of the elements acting at their own 1-junction:
The next step is to pick a ground. The ground is simply an 0-junction that is going to be assumed to have no voltage. For this case, the ground will be chosen to be the lower left 0-junction, that is underlined above. The next step is to draw all of the arrows for the bond graph. The arrows on junctions should point towards ground (following a similar path to current). For resistance, inertance, and compliance elements, the arrows always point towards the elements. The result of drawing the arrows can be seen below, with the 0-junction marked with a star as the ground.
Now that we have the Bond graph, we can start the process of simplifying it. The first step is to remove all the ground nodes. Both of the bottom 0-junctions can be removed, because they are both grounded. The result is shown below.
Next, the junctions with less than three bonds can be removed. This is because flow and effort pass through these junctions without being modified, so they can be removed to allow us to draw less. The result can be seen below.
The final step is to apply causality to the bond graph. Applying causality was explained above. The final bond graph is shown below.
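For this example the bond graph reduces to a single first-order state equation for the charge q on the capacitor, dq/dt = (V − q/C)/R. The following Python sketch (component values are assumed for illustration) integrates it and compares the result with the analytic charging curve q(t) = CV(1 − e^(−t/RC)).

```python
# First-order state equation of the series source-R-C bond graph,
#   dq/dt = (V - q/C) / R,
# integrated numerically and compared with the analytic charging curve.
# Component values are arbitrary illustration data.
import numpy as np
from scipy.integrate import solve_ivp

V, R, C = 5.0, 1e3, 10e-6        # source [V], resistance [ohm], capacitance [F]

sol = solve_ivp(lambda t, q: [(V - q[0] / C) / R],
                t_span=(0.0, 0.05), y0=[0.0], max_step=1e-4)

q_num = sol.y[0, -1]
q_exact = C * V * (1.0 - np.exp(-sol.t[-1] / (R * C)))
print(f"charge after {sol.t[-1]*1e3:.0f} ms: {q_num*1e6:.3f} uC "
      f"(analytic {q_exact*1e6:.3f} uC)")
```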
Advanced electrical system
A more advanced electrical system with a current source, resistors, capacitors, and a transformer
Following the steps with this circuit will result in the bond graph below, before it is simplified. The nodes marked with the star denote the ground.
Simplifying the bond graph will result in the image below.
Lastly, applying causality will result in the bond graph below. The bond marked with a star denotes a causal conflict.
Simple linear mechanical
A simple linear mechanical system, consisting of a mass on a spring that is attached to a wall. The mass has some force being applied to it. An image of the system is shown below.
For a mechanical system, the first step is to place a 1-junction at each distinct velocity, in this case there are two distinct velocities, the mass and the wall. It is usually helpful to label the 1-junctions for reference. The result is below.
The next step is to draw the R and C bonds at their own 0-junctions between the 1-junctions where they act. For this example there is only one of these bonds, the C bond for the spring. It acts between the 1-junction representing the mass and the 1-junction representing the wall. The result is below.
Next you want to add the sources and I bonds on the 1-junction where they act. There is one source, the source of effort (force) and one I bond, the mass of the mass both of which act on the 1-junction of the mass. The result is shown below.
Next power flow is to be assigned. Like the electrical examples, power should flow towards ground, in this case the 1-junction of the wall. Exceptions to this are R, C, or I bond, which always point towards the element. The resulting bond graph is below.
Now that the bond graph has been generated, it can be simplified. Because the wall is grounded (has zero velocity), you can remove that junction. As such the 0-junction the C bond is on, can also be removed because it will then have less than three bonds. The simplified bond graph can be seen below.
The last step is to apply causality, the final bond graph can be seen below.
Advanced linear mechanical
A more advanced linear mechanical system can be seen below.
Just like the above example, the first step is to make 1-junctions at each of the distinct velocities. In this example there are three distinct velocities: Mass 1, Mass 2, and the wall. Then you connect all of the bonds and assign power flow. The bond graph can be seen below.
Next you start the process of simplifying the bond graph, by removing the 1-junction of the wall, and removing junctions with less than three bonds. The bond graph can be seen below.
There is parallel power in the bond graph. Solving parallel power was explained above. The result of solving it can be seen below.
Lastly, apply causality, the final bond graph can be seen below.
State equations
Once a bond graph is complete, it can be utilized to generate the state-space representation equations of the system. State-space representation is especially powerful as it allows a complex multi-order differential system to be solved as a system of first-order equations instead. The general form of the state equation is
ẋ = Ax + Bu
where x is a column matrix of the state variables, or the unknowns of the system, ẋ is the time derivative of the state variables, u is a column matrix of the inputs of the system, and A and B are matrices of constants based on the system. The state variables of a system are the q and p values for each C and I bond without a causal conflict. Each I bond gets a p while each C bond gets a q.
For example, if you have the following bond graph
you would have the following , , and matrices:
The A and B matrices are solved by determining the relationship of the state variables and their respective elements, as was described in the tetrahedron of state. The first step to solve the state equations is to list all of the governing equations for the bond graph. The table below shows the relationship between bonds and their governing equations.
"♦" denotes preferred causality.
For the example provided,
the governing equations are the following.
These equations can be manipulated to yield the state equations. For this example, you are trying to find equations that relate and in terms of , , and .
To start you should recall from the tetrahedron of state that starting with equation 2, you can rearrange it so that . can be substituted for equation 4, while in equation 4, can be replaced by due to equation 3, which can then be replaced by equation 5. can likewise be replaced using equation 7, in which can be replaced with which can then be replaced with equation 10. Following these substituted yields the first state equation which is shown below.
The second state equation can likewise be solved, by recalling that . The second state equation is shown below.
Both equations can further be rearranged into matrix form. The result of which is below.
At this point the equations can be treated as any other state-space representation problem.
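As an illustration of this final step, the following Python sketch assembles A and B for a hypothetical forced mass–spring–damper modeled as a single 1-junction with Se, I, C and R bonds (all numerical values are assumptions chosen only for illustration) and integrates the state equations with a standard ODE solver.

```python
# State-space model x' = A x + B u assembled from the bond graph of a
# forced mass-spring-damper (Se, I, C and R bonds on a single 1-junction).
# States: p (momentum, I bond) and q (displacement, C bond).
# All numerical values are assumptions chosen only for illustration.
import numpy as np
from scipy.integrate import solve_ivp

m, b, C = 1.0, 0.5, 0.01      # mass [kg], damping [N s/m], compliance [m/N]
F = 2.0                       # constant applied force (the input u) [N]

A = np.array([[-b / m, -1.0 / C],
              [ 1.0 / m,  0.0  ]])
B = np.array([1.0, 0.0])

def rhs(t, x):
    # dp/dt = F - (b/m) p - q/C ;  dq/dt = p/m
    return A @ x + B * F

sol = solve_ivp(rhs, (0.0, 40.0), [0.0, 0.0], max_step=0.01)
p, q = sol.y

# The displacement should settle near the static deflection q = C * F.
print(f"final displacement: {q[-1]:.4f} m (static value {C * F:.4f} m)")
```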
International conferences on bond graph modeling (ECMS and ICBGM)
A bibliography on bond graph modeling may be extracted from the following conferences :
ECMS-2013 27th European Conference on Modelling and Simulation, May 27–30, 2013, Ålesund, Norway
ECMS-2008 22nd European Conference on Modelling and Simulation, June 3–6, 2008 Nicosia, Cyprus
ICBGM-2007: 8th International Conference on Bond Graph Modeling And Simulation, January 15–17, 2007, San Diego, California, U.S.A.
ECMS-2006 20TH European Conference on Modelling and Simulation, May 28–31, 2006, Bonn, Germany
IMAACA-2005 International Mediterranean Modeling Multiconference
ICBGM-2005 International Conference on Bond Graph Modeling and Simulation, January 23–27, 2005, New Orleans, Louisiana, U.S.A. – Papers
ICBGM-2003 International Conference on Bond Graph Modeling and Simulation (ICBGM'2003) January 19–23, 2003, Orlando, Florida, USA – Papers
14TH European Simulation symposium October 23–26, 2002 Dresden, Germany
ESS'2001 13th European Simulation symposium, Marseilles, France October 18–20, 2001
ICBGM-2001 International Conference on Bond Graph Modeling and Simulation (ICBGM 2001), Phoenix, Arizona U.S.A.
European Simulation Multi-conference 23-26 May, 2000, Gent, Belgium
11th European Simulation symposium, October 26–28, 1999 Castle, Friedrich-Alexander University, Erlangen-Nuremberg, Germany
ICBGM-1999 International Conference on Bond Graph Modeling and Simulation January 17–20, 1999 San Francisco, California
ESS-97 9TH European Simulation Symposium and Exhibition Simulation in Industry, Passau, Germany, October 19–22, 1997
ICBGM-1997 3rd International Conference on Bond Graph Modeling And Simulation, January 12–15, 1997, Sheraton-Crescent Hotel, Phoenix, Arizona
11th European Simulation Multiconference Istanbul, Turkey, June 1–4, 1997
ESM-1996 10th annual European Simulation Multiconference Budapest, Hungary, June 2–6, 1996
ICBGM-1995 Int. Conf. on Bond Graph Modeling and Simulation (ICBGM’95), January 15–18, 1995, Las Vegas, Nevada.
See also
20-sim simulation software based on the bond graph theory
AMESim simulation software based on the bond graph theory
Hybrid bond graph
Coenergy
References
Further reading
http://www.site.uottawa.ca/~rhabash/ESSModelFluid.pdf Explains modeling the bond graph in the fluid domain
http://www.dartmouth.edu/~sullivan/22files/Fluid_sys_anal_w_chart.pdf Explains modeling the bond graph in the fluid domain
External links
Simscape Official MATLAB/Simulink add-on library for graphical bond graph programming
BG V.2.1 Freeware MATLAB/Simulink add-on library for graphical bond graph programming
Scientific visualization
Diagrams
Application-specific graphs
Electrical engineering
Mechanical engineering
Modeling languages | Bond graph | [
"Physics",
"Engineering"
] | 5,719 | [
"Electrical engineering",
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
1,388,228 | https://en.wikipedia.org/wiki/Electromagnetic%20forming | Electromagnetic forming (EM forming or magneforming) is a type of high-velocity, cold forming process for electrically conductive metals, most commonly copper and aluminium. The workpiece is reshaped by high-intensity pulsed magnetic fields that induce a current in the workpiece and a corresponding repulsive magnetic field, rapidly repelling portions of the workpiece. The workpiece can be reshaped without any contact from a tool, although in some instances the piece may be pressed against a die or former. The technique is sometimes called high-velocity forming or electromagnetic pulse technology.
Explanation
A special coil is placed near the metallic workpiece, replacing the pusher in traditional forming. When the system releases its intense magnetic pulse, the coil generates a magnetic field which in turn accelerates the workpiece to very high velocity, driving it onto the die.
The magnetic pulse and the extreme deformation speed transforms the metal into a visco-plastic state – increasing formability without affecting the native strength of the material. See the magnetic pulse forming illustration for a visualization.
A rapidly changing magnetic field induces a circulating electric current within a nearby conductor through electromagnetic induction. The induced current creates a corresponding magnetic field around the conductor (see Pinch (plasma physics)). Because of Lenz's Law, the magnetic fields created within the conductor and work coil strongly repel each other.
In practice the metal workpiece to be fabricated is placed in proximity to a heavily constructed coil of wire (called the work coil). A huge pulse of current is forced through the work coil by rapidly discharging a high-voltage capacitor bank using an ignitron or a spark gap as a switch. This creates a rapidly oscillating, ultra strong electromagnetic field around the work coil.
The high work coil current (typically tens or hundreds of thousands of amperes) creates ultra strong magnetic forces that easily overcome the yield strength of the metal work piece, causing permanent deformation. The metal forming process occurs extremely quickly (typically tens of microseconds) and, because of the large forces, portions of the workpiece undergo high acceleration reaching velocities of up to 300 m/s.
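The scale of these forces can be illustrated with the magnetic pressure P = B²/(2μ0) exerted on the workpiece surface. The short Python sketch below is an order-of-magnitude estimate only; the flux densities and the quoted yield-strength range are assumed illustrative figures, not design data.

```python
# Order-of-magnitude estimate of the magnetic pressure on the workpiece,
# P = B^2 / (2 * mu0), compared with typical yield strengths.  The field
# values and the yield-strength range are assumptions for illustration only.
import numpy as np

mu0 = 4.0e-7 * np.pi                 # vacuum permeability [H/m]

for B in (10.0, 25.0, 50.0):         # peak flux density [T] (assumed)
    P = B**2 / (2.0 * mu0)           # magnetic pressure [Pa]
    print(f"B = {B:4.0f} T  ->  P = {P/1e6:7.1f} MPa")

# For comparison, annealed aluminium alloys yield at very roughly
# 30-150 MPa (assumed range), so fields of tens of tesla comfortably
# exceed the yield strength, which is what permits contact-free forming.
```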
Applications
The forming process is most often used to shrink or expand cylindrical tubing, but it can also form sheet metal by repelling the work piece onto a shaped die at a high velocity. High-quality joints can be formed, either by electromagnetic pulse crimping with a mechanical interlock or by electromagnetic pulse welding with a true metallurgical weld. Since the forming operation involves high acceleration and deceleration, mass of the work piece plays a critical role during the forming process. The process works best with good electrical conductors such as copper or aluminum, but it can be adapted to work with poorer conductors such as steel.
Comparison with mechanical forming
Electromagnetic forming has a number of advantages and disadvantages compared to conventional mechanical forming techniques.
Some of the advantages are:
Improved formability (the amount of stretch available without tearing)
Wrinkling can be greatly suppressed
Forming can be combined with joining and assembling with dissimilar components including glass, plastic, composites and other metals.
Close tolerances are possible as springback can be significantly reduced.
Single-sided dies are sufficient, which can reduce tooling costs
Lubricants are reduced or are unnecessary, so forming can be used in clean-room conditions
Mechanical contact with the workpiece is not required; this avoids surface contamination and tooling marks. As a result, a surface finish can be applied to the workpiece before forming.
The principal disadvantages are:
Non-conductive materials cannot be formed directly, but can be formed using a conductive drive plate
The high voltages and currents involved require careful safety considerations
References
External links
Electromagnetic radiation
Metal forming
Pulsed power | Electromagnetic forming | [
"Physics"
] | 767 | [
"Physical phenomena",
"Physical quantities",
"Electromagnetic radiation",
"Power (physics)",
"Radiation",
"Pulsed power"
] |
1,389,284 | https://en.wikipedia.org/wiki/HX-63 | The HX-63 was an advanced rotor machine designed by Crypto AG founder Boris Hagelin. Development of the device started in 1952 and lasted a decade. The machine had nine rotors, each with 41 contacts. There were 26 keyboard inputs and outputs, leaving 15 wires to "loop back" through the rotors via a different path. Moreover, each rotor wire could be selected from one of two paths. The movement of the rotors was irregular and controlled by switches. There were two plugboards with the machine; one to scramble the input, and one for the loop-back wires. The machine also used a technique called reinjection (also called reentry), which increased its security exponentially. The machine could be set up in around 10^600 different configurations.
William Friedman, the first chief cryptologist of the U.S. National Security Agency (NSA), was alarmed when he read Hagelin's patent on the machine. Friedman realized that the machine was more secure than the NSA's KL-7 and unbreakable. Friedman and Hagelin were good friends from World War II, and Friedman called on Hagelin to terminate the program, which Crypto AG did.
Only twelve of these machines were manufactured, and it was adopted by only one department of the French Government (about 1960).
See also
KL-7
References
External links
Jerry Proc's pages — photographs and a brief description
Notice of a past eBay auction of an HX-63
John Savard's discussion on the machine
Further reading
Cipher A. Deavours and Louis Kruh, "Machine Cryptography and Modern Cryptanalysis", Artech House, 1985, p199.
Rotor machines | HX-63 | [
"Physics",
"Technology"
] | 354 | [
"Physical systems",
"Machines",
"Rotor machines"
] |
1,389,316 | https://en.wikipedia.org/wiki/Gas%20in%20a%20box | In quantum mechanics, the results of the quantum particle in a box can be used to look at the equilibrium situation for a quantum ideal gas in a box which is a box containing a large number of molecules which do not interact with each other except for instantaneous thermalizing collisions. This simple model can be used to describe the classical ideal gas as well as the various quantum ideal gases such as the ideal massive Fermi gas, the ideal massive Bose gas as well as black body radiation (photon gas) which may be treated as a massless Bose gas, in which thermalization is usually assumed to be facilitated by the interaction of the photons with an equilibrated mass.
Using the results from either Maxwell–Boltzmann statistics, Bose–Einstein statistics or Fermi–Dirac statistics, and considering the limit of a very large box, the Thomas–Fermi approximation (named after Enrico Fermi and Llewellyn Thomas) is used to express the degeneracy of the energy states as a differential, and summations over states as integrals. This enables thermodynamic properties of the gas to be calculated with the use of the partition function or the grand partition function. These results will be applied to both massive and massless particles. More complete calculations will be left to separate articles, but some simple examples will be given in this article.
Thomas–Fermi approximation for the degeneracy of states
For both massive and massless particles in a box, the states of a particle are
enumerated by a set of quantum numbers [nx, ny, nz]. The magnitude of the momentum is given by
p = (h/2L) √(nx² + ny² + nz²)
where h is the Planck constant and L is the length of a side of the box. Each possible state of a particle can be thought of as a point on a 3-dimensional grid of positive integers. The distance from the origin to any point will be
n = √(nx² + ny² + nz²)
Suppose each set of quantum numbers specifies f states, where f is the number of internal degrees of freedom of the particle that can be altered by collision. For example, a spin-1/2 particle would have f = 2, one for each spin state. For large values of n, the number of states with magnitude of momentum less than or equal to p from the above equation is approximately
g = f (1/8)(4/3)πn³ = (πf/6)n³
which is just f times the volume of a sphere of radius n divided by eight since only the octant with positive ni is considered. Using a continuum approximation, the number of states with magnitude of momentum between p and p + dp is therefore
dg = (πf/2)n² dn = (4πfV/h³) p² dp
where V = L³ is the volume of the box. Notice that in using this continuum approximation, also known as the Thomas−Fermi approximation, the ability to characterize the low-energy states is lost, including the ground state where ni = 1. For most cases this will not be a problem, but when considering Bose–Einstein condensation, in which a large portion of the gas is in or near the ground state, the ability to deal with low energy states becomes important.
Without using any approximation, the number of particles with energy εi is given by
where gi is the degeneracy of state i and β = 1/(kBT), with the Boltzmann constant kB, temperature T, and chemical potential μ. (See Maxwell–Boltzmann statistics, Bose–Einstein statistics, and Fermi–Dirac statistics.)
Using the Thomas−Fermi approximation, the number of particles dNE with energy between E and E + dE is:
where dgE is the number of states with energy between E and E + dE.
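The accuracy of this continuum approximation can be illustrated numerically by comparing the exact count of quantum states (lattice points) inside the octant of radius nmax with the estimate (πf/6)nmax³. The following Python sketch does this for f = 1; it is illustrative only.

```python
# Exact number of particle-in-a-box states with sqrt(nx^2+ny^2+nz^2) <= n_max,
# compared with the Thomas-Fermi (continuum) estimate (pi*f/6) * n_max^3.
# Illustrative only; f = 1 (a particle with no internal degrees of freedom).
import numpy as np

f = 1
for n_max in (10, 30, 100):
    n = np.arange(1, n_max + 1)
    nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
    exact = np.count_nonzero(nx**2 + ny**2 + nz**2 <= n_max**2)
    approx = f * np.pi / 6.0 * n_max**3
    print(f"n_max = {n_max:3d}: exact = {exact:7d}, "
          f"continuum = {approx:9.0f}, ratio = {exact / approx:.3f}")
```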
Energy distribution
Using the results derived from the previous sections of this article, some distributions for the gas in a box can now be determined. For a system of particles, the distribution for a variable A is defined through an expression for the fraction of particles that have values for A between A and A + dA, which involves
the number of particles which have values for A between A and A + dA,
the number of states which have values for A between A and A + dA,
the probability that a state which has the value A is occupied by a particle, and
the total number of particles.
It follows that:
For a momentum distribution Pp, the fraction of particles with magnitude of momentum between p and p + dp is:
and for an energy distribution PE, the fraction of particles with energy between E and E + dE is:
For a particle in a box (and for a free particle as well), the relationship between energy and momentum is different for massive and massless particles. For massive particles, E = p²/(2m), while for massless particles, E = pc, where m is the mass of the particle and c is the speed of light.
Using these relationships,
For massive particles, where Λ = h/√(2πmkBT) is the thermal wavelength of the gas. This is an important quantity, since when Λ is on the order of the inter-particle distance (V/N)^(1/3), quantum effects begin to dominate and the gas can no longer be considered to be a Maxwell–Boltzmann gas.
For massless particles where is now the thermal wavelength for massless particles.
Specific examples
The following sections give an example of results for some specific cases.
Massive Maxwell–Boltzmann particles
For this case:
Integrating the energy distribution function and solving for N gives
Substituting into the original energy distribution function gives
which are the same results obtained classically for the Maxwell–Boltzmann distribution. Further results can be found in the classical section of the article on the ideal gas.
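As a quick numerical check of this classical limit, the sketch below (illustrative only, with kT chosen arbitrarily) integrates the Maxwell–Boltzmann energy distribution P(E) = (2/√π)(kT)^(−3/2) √E e^(−E/kT) to confirm that it is normalized and that the mean energy is (3/2)kT.

```python
import numpy as np
from scipy.integrate import quad

kT = 0.025  # eV, roughly room temperature (illustrative assumption)

def p_energy(E):
    """Maxwell-Boltzmann energy distribution:
    P(E) = 2/sqrt(pi) * (kT)^(-3/2) * sqrt(E) * exp(-E/kT)."""
    return 2.0 / np.sqrt(np.pi) * kT**-1.5 * np.sqrt(E) * np.exp(-E / kT)

norm, _ = quad(p_energy, 0, np.inf)                     # should be 1
mean, _ = quad(lambda E: E * p_energy(E), 0, np.inf)    # mean energy
print(norm)        # ~1.0
print(mean / kT)   # ~1.5, i.e. <E> = (3/2) kT
```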
Massive Bose–Einstein particles
For this case:
where z = exp(μ/(kBT)).
Integrating the energy distribution function and solving for N gives the particle number
where Lis(z) is the polylogarithm function. The polylogarithm term must always be positive and real, which means its value will go from 0 to ζ(3/2) as z goes from 0 to 1. As the temperature drops towards zero, the thermal wavelength Λ will become larger and larger, until finally Λ reaches a critical value Λc at which z = 1 and N = (V/Λc³) ζ(3/2),
where ζ denotes the Riemann zeta function. The temperature at which Λ = Λc is the critical temperature. For temperatures below this critical temperature, the above equation for the particle number has no solution. The critical temperature is the temperature at which a Bose–Einstein condensate begins to form. The problem is, as mentioned above, that the ground state has been ignored in the continuum approximation. It turns out, however, that the above equation for particle number expresses the number of bosons in excited states rather well, and thus:
where the added term is the number of particles in the ground state. The ground state energy has been ignored. This equation will hold down to zero temperature. Further results can be found in the article on the ideal Bose gas.
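The critical temperature can be evaluated numerically. The sketch below assumes the standard result kBTc = (2πħ²/m)(n/ζ(3/2))^(2/3) for the uniform ideal Bose gas; the atomic species and number density are illustrative assumptions, not values from the article.

```python
import numpy as np
from scipy.constants import hbar, k, u
from scipy.special import zeta

# Illustrative: a dilute gas of Rb-87 atoms (assumed parameters)
m = 86.909 * u          # atomic mass in kg
n = 1e20                # number density in m^-3

# Critical temperature of the uniform ideal Bose gas:
# k_B T_c = (2*pi*hbar^2 / m) * (n / zeta(3/2))**(2/3)
Tc = (2 * np.pi * hbar**2 / (m * k)) * (n / zeta(1.5, 1))**(2 / 3)
print(f"T_c ≈ {Tc * 1e9:.0f} nK")   # a few hundred nanokelvin for these numbers
```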
Massless Bose–Einstein particles (e.g. black body radiation)
For the case of massless particles, the massless energy distribution function must be used. It is convenient to convert this function to a frequency distribution function:
where is the thermal wavelength for massless particles. The spectral energy density (energy per unit volume per unit frequency) is then
Other thermodynamic parameters may be derived analogously to the case for massive particles. For example, integrating the frequency distribution function and solving for N gives the number of particles:
The most common massless Bose gas is a photon gas in a black body. Taking the "box" to be a black body cavity, the photons are continually being absorbed and re-emitted by the walls. When this is the case, the number of photons is not conserved. In the derivation of Bose–Einstein statistics, when the restraint on the number of particles is removed, this is effectively the same as setting the chemical potential (μ) to zero. Furthermore, since photons have two spin states, the value of f is 2. The spectral energy density is then
which is just the spectral energy density for Planck's law of black body radiation. Note that the Wien distribution is recovered if this procedure is carried out for massless Maxwell–Boltzmann particles, which approximates a Planck's distribution for high temperatures or low densities.
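The following sketch evaluates this spectral energy density numerically and locates its peak, which can be compared against the Wien displacement law for frequency, ν_max ≈ 2.821 kT/h. The temperature is an illustrative assumption.

```python
import numpy as np
from scipy.constants import h, c, k

def planck_u(nu, T):
    """Spectral energy density of black-body radiation:
    u(nu) = 8*pi*h*nu^3/c^3 / (exp(h*nu/(k*T)) - 1)."""
    return 8 * np.pi * h * nu**3 / c**3 / np.expm1(h * nu / (k * T))

T = 5800.0                                   # roughly the solar surface temperature
nu = np.linspace(1e12, 3e15, 200000)         # Hz
u = planck_u(nu, T)
nu_peak = nu[np.argmax(u)]
print(nu_peak, 2.821 * k * T / h)            # both ≈ 3.4e14 Hz (Wien displacement law)
```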
In certain situations, the reactions involving photons will result in the conservation of the number of photons (e.g. light-emitting diodes, "white" cavities). In these cases, the photon distribution function will involve a non-zero chemical potential. (Hermann 2005)
Another massless Bose gas is given by the Debye model for heat capacity. This model considers a gas of phonons in a box and differs from the development for photons in that the speed of the phonons is less than light speed, and there is a maximum allowed wavelength for each axis of the box. This means that the integration over phase space cannot be carried out to infinity, and instead of results being expressed in polylogarithms, they are expressed in the related Debye functions.
Massive Fermi–Dirac particles (e.g. electrons in a metal)
For this case:
Integrating the energy distribution function gives
where again, Lis(z) is the polylogarithm function and Λ is the thermal de Broglie wavelength. Further results can be found in the article on the ideal Fermi gas. Applications of the Fermi gas are found in the free electron model, the theory of white dwarfs and in degenerate matter in general.
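As an illustration of the degenerate limit relevant to the free electron model, the sketch below evaluates the zero-temperature Fermi energy EF = (ħ²/2m)(3π²n)^(2/3) of the ideal electron gas, using a literature value for the conduction-electron density of copper as an assumed input.

```python
import numpy as np
from scipy.constants import hbar, m_e, eV

n = 8.47e28   # conduction-electron density of copper in m^-3 (assumed literature value)

# Zero-temperature Fermi energy of the ideal Fermi gas:
# E_F = hbar^2/(2*m) * (3*pi^2*n)^(2/3)
E_F = hbar**2 / (2 * m_e) * (3 * np.pi**2 * n)**(2 / 3)
print(E_F / eV)   # ≈ 7 eV, the textbook value for copper
```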
See also
Gas in a harmonic trap
References
Vu-Quoc, L., Configuration integral (statistical mechanics), 2008. this wiki site is down; see this article in the web archive on 2012 April 28.
Statistical mechanics | Gas in a box | [
"Physics"
] | 1,860 | [
"Statistical mechanics"
] |
1,389,320 | https://en.wikipedia.org/wiki/Gas%20in%20a%20harmonic%20trap | The results of the quantum harmonic oscillator can be used to look at the equilibrium situation for a quantum ideal gas in a harmonic trap, which is a harmonic potential containing a large number of particles that do not interact with each other except for instantaneous thermalizing collisions. This situation is of great practical importance since many experimental studies of Bose gases are conducted in such harmonic traps.
Using the results from either Maxwell–Boltzmann statistics, Bose–Einstein statistics or Fermi–Dirac statistics we use the Thomas–Fermi approximation (gas in a box) and go to the limit of a very large trap, and express the degeneracy of the energy states () as a differential, and summations over states as integrals. We will then be in a position to calculate the thermodynamic properties of the gas using the partition function or the grand partition function. Only the case of massive particles will be considered, although the results can be extended to massless particles as well, much as was done in the case of the ideal gas in a box. More complete calculations will be left to separate articles, but some simple examples will be given in this article.
Thomas–Fermi approximation for the degeneracy of states
For massive particles in a harmonic well, the states of the particle are enumerated by a set of quantum numbers . The energy of a particular state is given by:
Suppose each set of quantum numbers specifies f states, where f is the number of internal degrees of freedom of the particle that can be altered by collision. For example, a spin-1/2 particle would have f = 2, one for each spin state. We can think of each possible state of a particle as a point on a 3-dimensional grid of positive integers. The Thomas–Fermi approximation assumes that the quantum numbers are so large that they may be considered to be a continuum. For large values of E, we can estimate the number of states with energy less than or equal to E from the above equation as:
which is just f times the volume of the tetrahedron formed by the plane described by the energy equation and the bounding planes of the positive octant. The number of states with energy between E and E + dE is therefore:
Notice that in using this continuum approximation, we have lost the ability to characterize the low-energy states, including the ground state where . For most cases this will not be a problem, but when considering Bose–Einstein
condensation, in which a large portion of the gas is in or near the ground state, we will need to recover the ability to deal with low energy states.
Without using the continuum approximation, the number of particles with energy E is given by NE = gE/Φ,
where gE is the degeneracy of the level and Φ is given by:
Φ = exp(β(E − μ)) for particles obeying Maxwell–Boltzmann statistics
Φ = exp(β(E − μ)) − 1 for particles obeying Bose–Einstein statistics
Φ = exp(β(E − μ)) + 1 for particles obeying Fermi–Dirac statistics
with β = 1/(kBT), with kB being the Boltzmann constant, T being the temperature, and μ being the chemical potential. Using the continuum approximation, the number of particles with energy between E and E + dE is now written:
Energy distribution function
We are now in a position to determine some distribution functions for the "gas in a harmonic trap." The distribution function for any variable A is PA(A), and is equal to the fraction of particles which have values of A between A and A + dA:
It follows that:
Using these relationships we obtain the energy distribution function:
Specific examples
The following sections give an example of results for some specific cases.
Massive Maxwell–Boltzmann particles
For this case:
Integrating the energy distribution function and solving for gives:
Substituting into the original energy distribution function gives:
Massive Bose–Einstein particles
For this case:
where z is defined as z = exp(μ/(kBT)).
Integrating the energy distribution function and solving for gives:
where Lis(z) is the polylogarithm function. The polylogarithm term must always be positive and real, which means its value will go from 0 to ζ(3) as z goes from 0 to 1. As the temperature goes to zero, z will become larger and larger, until finally it reaches its critical value of 1.
The temperature at which this occurs is the critical temperature at which a Bose–Einstein condensate begins to form. The problem is, as mentioned above, the ground state has been ignored in the continuum approximation. It turns out that the above expression expresses the number of bosons in excited states rather well, and so we may write:
where the added term is the number of particles in the ground state. (The ground state energy has been ignored.) This equation will hold down to zero temperature. Further results can be found in the article on the ideal Bose gas.
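For a rough numerical feel, the sketch below assumes the standard ideal-gas result for an isotropic 3D harmonic trap, kBTc = ħω (N/ζ(3))^(1/3), together with the condensate fraction N0/N = 1 − (T/Tc)³; the trap frequency and atom number are illustrative assumptions, not values from the article.

```python
import numpy as np
from scipy.constants import hbar, k
from scipy.special import zeta

# Illustrative trap parameters (assumptions)
omega = 2 * np.pi * 100.0   # isotropic trap angular frequency for a 100 Hz trap
N = 1e6                     # number of trapped bosons

# Critical temperature of the ideal Bose gas in a 3D harmonic trap:
# k_B T_c = hbar * omega * (N / zeta(3))**(1/3)
Tc = hbar * omega * (N / zeta(3.0, 1))**(1 / 3) / k
print(f"T_c ≈ {Tc * 1e9:.0f} nK")

# Below T_c the condensate fraction follows N0/N = 1 - (T/Tc)**3
for T_over_Tc in (0.25, 0.5, 0.75):
    print(T_over_Tc, 1 - T_over_Tc**3)
```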
Massive Fermi–Dirac particles (e.g. electrons in a metal)
For this case:
Integrating the energy distribution function gives:
where again, Lis(z) is the polylogarithm function. Further results can be found in the article on the ideal Fermi gas.
References
Huang, Kerson, "Statistical Mechanics", John Wiley and Sons, New York, 1967
A. Isihara, "Statistical Physics", Academic Press, New York, 1971
L. D. Landau and E. M. Lifshitz, "Statistical Physics, 3rd Edition Part 1", Butterworth-Heinemann, Oxford, 1996
C. J. Pethick and H. Smith, "Bose–Einstein Condensation in Dilute Gases", Cambridge University Press, Cambridge, 2004
Statistical mechanics | Gas in a harmonic trap | [
"Physics"
] | 1,090 | [
"Statistical mechanics"
] |
1,390,283 | https://en.wikipedia.org/wiki/Diffuse%20interstellar%20bands | Diffuse interstellar bands (DIBs) are absorption features seen in the spectra of astronomical objects in the Milky Way and other galaxies. They are caused by the absorption of light by the interstellar medium. Circa 500 bands have now been seen, in ultraviolet, visible and infrared wavelengths.
The origin of most DIBs remains unknown, with common suggestions being polycyclic aromatic hydrocarbons and other large carbon-bearing molecules. Only one DIB carrier has been identified: ionised buckminsterfullerene (C60+), which is responsible for several DIBs in the near-infrared. The carriers of most DIBs remain unidentified.
Discovery and history
Much astronomical work relies on the study of spectra - the light from astronomical objects dispersed using a prism or, more usually, a diffraction grating. A typical stellar spectrum will consist of a continuum, containing absorption lines, each of which is attributed to a particular atomic energy level transition in the atmosphere of the star.
The appearances of all astronomical objects are affected by extinction, the absorption and scattering of photons by the interstellar medium. Relevant to DIBs is interstellar absorption, which predominantly affects the whole spectrum in a continuous way, rather than causing absorption lines. In 1922, though, astronomer Mary Lea Heger first observed a number of line-like absorption features which seemed to be interstellar in origin.
Their interstellar nature was shown by the fact that the strength of the observed absorption was roughly proportional to the extinction, and that in objects with widely differing radial velocities the absorption bands were not affected by Doppler shifting, implying that the absorption was not occurring in or around the object concerned. The name diffuse interstellar band, or DIB for short, was coined to reflect the fact that the absorption features are much broader than the normal absorption lines seen in stellar spectra.
The first DIBs observed were those at wavelengths 578.0 and 579.7 nanometers (visible light corresponds to a wavelength range of 400 - 700 nanometers). Other strong DIBs are seen at 628.4, 661.4 and 443.0 nm. The 443.0 nm DIB is particularly broad at about 1.2 nm across - typical intrinsic stellar absorption features are 0.1 nm or less across.
Later spectroscopic studies at higher spectral resolution and sensitivity revealed more and more DIBs; a catalogue of them in 1975 contained 25 known DIBs, and a decade later the number known had more than doubled. The first detection-limited survey was published by Peter Jenniskens and Xavier Desert in 1994 (see Figure above), which led to the first conference on The Diffuse Interstellar Bands at the University of Colorado in Boulder on May 16–19, 1994. Today circa 500 have been detected.
In recent years, very high resolution spectrographs on the world's most powerful telescopes have been used to observe and analyse DIBs. Spectral resolutions of 0.005 nm are now routine using instruments at observatories such as the European Southern Observatory at Cerro Paranal, Chile, and the Anglo-Australian Observatory in Australia, and at these high resolutions, many DIBs are found to contain considerable sub-structure.
The nature of the carriers
The great problem with DIBs, apparent from the earliest observations, was that their central wavelengths did not correspond with any known spectral lines of any ion or molecule, and so the material which was responsible for the absorption could not be identified. A large number of theories were advanced as the number of known DIBs grew, and determining the nature of the absorbing material (the 'carrier') became a crucial problem in astrophysics.
One important observational result is that the strengths of most DIBs are not strongly correlated with each other. This means that there must be many carriers, rather than one carrier responsible for all DIBs. Also significant is that the strength of DIBs is broadly correlated with the interstellar extinction. Extinction is caused by interstellar dust; however, DIBs are not likely to be caused by dust grains.
The existence of sub-structure in DIBs supports the idea that they are caused by molecules. Substructure results from band heads in the rotational band contour and from isotope substitution. In a molecule containing, say, three carbon atoms, some of the carbon will be in the form of the carbon-13 isotope, so that while most molecules will contain three carbon-12 atoms, some will contain two 12C atoms and one 13C atom, far fewer will contain one 12C and two 13C, and a very small fraction will contain three 13C atoms. Each of these forms of the molecule will create an absorption line at a slightly different rest wavelength.
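The relative abundance of these isotopologues follows a simple binomial law. The sketch below (an illustration only; the 13C abundance used is the approximate terrestrial value) evaluates it for a three-carbon molecule.

```python
from math import comb

p13 = 0.011     # natural abundance of carbon-13 (approximate, assumed)
n_carbons = 3

# Probability that a 3-carbon molecule contains exactly k carbon-13 atoms
for k in range(n_carbons + 1):
    prob = comb(n_carbons, k) * p13**k * (1 - p13)**(n_carbons - k)
    print(f"{k} x 13C: {prob:.6f}")
```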
The most likely candidate molecules for producing DIBs are thought to be large carbon-bearing molecules, which are common in the interstellar medium. Polycyclic aromatic hydrocarbons, long carbon-chain molecules such as polyynes, and fullerenes are all potentially important. These types of molecule experience rapid and efficient deactivation when excited by a photon, which both broadens the spectral lines and makes them stable enough to exist in the interstellar medium.
Identification of C60+ as a carrier
To date, the only molecule confirmed to be a DIB carrier is the buckminsterfullerene ion, C60+. Soon after Harry Kroto discovered fullerenes in the 1980s, he proposed that they could be DIB carriers. Kroto pointed out that the ionised form C60+ was more likely to survive in the diffuse interstellar medium. However, the lack of a reliable laboratory spectrum of gas-phase C60+ made this proposal difficult to test.
In the early 1990s, laboratory spectra of C60+ were obtained by embedding the molecule in solid ices, which showed strong bands in the near-infrared. In 1994, Bernard Foing and Pascale Ehrenfreund detected new DIBs with wavelengths close to those in the laboratory spectra, and argued that the difference was due to an offset between the gas-phase and solid-phase wavelengths. However, this conclusion was disputed by other researchers, such as Peter Jenniskens, on multiple spectroscopic and observational grounds.
A laboratory gas-phase spectrum of C60+ was obtained in 2015 by a group led by John Maier. Their results matched the band wavelengths that had been observed by Foing and Ehrenfreund in 1994. Three weaker bands of C60+ were found in interstellar spectra soon afterwards, resolving one of the earlier objections raised by Jenniskens. New objections were raised by other researchers, but by 2019 the C60+ bands and their assignment had been confirmed by multiple groups of astronomers and laboratory chemists.
See also
List of interstellar and circumstellar molecules
References
External links
Entry in the Encyclopedia of Astrobiology, Astronomy, and Spaceflight
Diffuse Interstellar Band Catalog
Astrochemistry
Interstellar media
Astronomical spectroscopy
Unsolved problems in astronomy | Diffuse interstellar bands | [
"Physics",
"Chemistry",
"Astronomy"
] | 1,423 | [
"Unsolved problems in astronomy",
"Interstellar media",
"Spectrum (physical sciences)",
"Outer space",
"Concepts in astronomy",
"Astrophysics",
"Astrochemistry",
"Astronomical controversies",
"Astronomical spectroscopy",
"nan",
"Spectroscopy",
"Astronomical sub-disciplines"
] |
1,390,873 | https://en.wikipedia.org/wiki/Reflection%20symmetry | In mathematics, reflection symmetry, line symmetry, mirror symmetry, or mirror-image symmetry is symmetry with respect to a reflection. That is, a figure which does not change upon undergoing a reflection has reflectional symmetry.
In 2-dimensional space, there is a line/axis of symmetry; in 3-dimensional space, there is a plane of symmetry. An object or figure which is indistinguishable from its transformed image is called mirror symmetric. In other words, a line of symmetry splits the shape into two identical halves.
Symmetric function
In formal terms, a mathematical object is symmetric with respect to a given operation such as reflection, rotation, or translation, if, when applied to the object, this operation preserves some property of the object. The set of operations that preserve a given property of the object form a group. Two objects are symmetric to each other with respect to a given group of operations if one is obtained from the other by some of the operations (and vice versa).
The symmetric function of a two-dimensional figure is a line such that, for each perpendicular constructed, if the perpendicular intersects the figure at a distance 'd' from the axis along the perpendicular, then there exists another intersection of the shape and the perpendicular at the same distance 'd' from the axis, in the opposite direction along the perpendicular.
Another way to think about the symmetric function is that if the shape were to be folded in half over the axis, the two halves would be identical: the two halves are each other's mirror images. Thus, a square has four axes of symmetry because there are four different ways to fold it and have the edges all match. A circle has infinitely many axes of symmetry, while a cone and sphere have infinitely many planes of symmetry.
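The "fold over the axis" picture translates directly into a computational test: reflect every point of a figure across the candidate axis and check that the reflected set coincides with the original. The sketch below (a minimal illustration; the function name and tolerance are arbitrary choices) does this for a finite set of 2D points.

```python
import numpy as np

def is_mirror_symmetric(points, axis_point, axis_dir, tol=1e-9):
    """Check whether a finite 2D point set is symmetric under reflection
    across the line through axis_point with direction axis_dir."""
    points = np.asarray(points, dtype=float)
    p0 = np.asarray(axis_point, dtype=float)
    d = np.asarray(axis_dir, dtype=float)
    d = d / np.linalg.norm(d)

    # Reflect each point: keep the component along the axis, flip the perpendicular one.
    rel = points - p0
    along = rel @ d
    reflected = p0 + 2 * np.outer(along, d) - rel

    # Symmetric if every reflected point coincides with some original point.
    return all(np.min(np.linalg.norm(points - r, axis=1)) < tol for r in reflected)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(is_mirror_symmetric(square, (0.5, 0), (0, 1)))   # True: vertical axis through the centre
print(is_mirror_symmetric(square, (0, 0), (0, 1)))     # False: axis along one edge
```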
Symmetric geometrical shapes
Triangles with reflection symmetry are isosceles. Quadrilaterals with reflection symmetry are kites, (concave) deltoids, rhombi, and isosceles trapezoids. All even-sided polygons have two simple reflective forms, one with lines of reflections through vertices, and one through edges. For an arbitrary shape, the axiality of the shape measures how close it is to being bilaterally symmetric. It equals 1 for shapes with reflection symmetry, and between two-thirds and 1 for any convex shape.
In 3D, the cube has 9 planes of reflective symmetry.
Advanced types of reflection symmetry
For more general types of reflection there are correspondingly more general types of reflection symmetry. For example:
with respect to a non-isometric affine involution (an oblique reflection in a line, plane, etc.)
with respect to circle inversion.
In nature
Animals that are bilaterally symmetric have reflection symmetry around the sagittal plane, which divides the body vertically into left and right halves, with one of each sense organ and limb pair on either side. Most animals are bilaterally symmetric, likely because this supports forward movement and streamlining.
In architecture
Mirror symmetry is often used in architecture, as in the facade of Santa Maria Novella, Florence. It is also found in the design of ancient structures such as Stonehenge. Symmetry was a core element in some styles of architecture, such as Palladianism.
See also
Patterns in nature
Point reflection symmetry
Coxeter group theory about Reflection groups in Euclidean space
Rotational symmetry (different type of symmetry)
Chirality
References
Bibliography
General
Advanced
Elementary geometry
Euclidean symmetries | Reflection symmetry | [
"Physics",
"Mathematics"
] | 717 | [
"Functions and mappings",
"Euclidean symmetries",
"Mathematical objects",
"Elementary mathematics",
"Elementary geometry",
"Mathematical relations",
"Symmetry"
] |
1,390,921 | https://en.wikipedia.org/wiki/Single%20bond | In chemistry, a single bond is a chemical bond between two atoms involving two valence electrons. That is, the atoms share one pair of electrons where the bond forms. Therefore, a single bond is a type of covalent bond. When shared, each of the two electrons involved is no longer in the sole possession of the orbital in which it originated. Rather, both of the two electrons spend time in either of the orbitals which overlap in the bonding process. As a Lewis structure, a single bond is denoted as AːA or A-A, for which A represents an element. In the first rendition, each dot represents a shared electron, and in the second rendition, the bar represents both of the electrons shared in the single bond.
A covalent bond can also be a double bond or a triple bond. A single bond is weaker than either a double bond or a triple bond. This difference in strength can be explained by examining the component bonds of which each of these types of covalent bonds consists (Moore, Stanitski, and Jurs 393).
Usually, a single bond is a sigma bond. An exception is the bond in diboron, which is a pi bond. In contrast, the double bond consists of one sigma bond and one pi bond, and a triple bond consists of one sigma bond and two pi bonds (Moore, Stanitski, and Jurs 396). The number of component bonds is what determines the strength disparity. It stands to reason that the single bond is the weakest of the three because it consists of only a sigma bond, and the double bond or triple bond consist not only of this type of component bond but also at least one additional bond.
The single bond has the capacity for rotation, a property not possessed by the double bond or the triple bond. The structure of pi bonds does not allow for rotation (at least not at 298 K), so the double bond and the triple bond which contain pi bonds are held due to this property. The sigma bond is not so restrictive, and the single bond is able to rotate using the sigma bond as the axis of rotation (Moore, Stanitski, and Jurs 396-397).
Another property comparison can be made in bond length. Single bonds are the longest of the three types of covalent bonds as interatomic attraction is greater in the two other types, double and triple. The increase in component bonds is the reason for this attraction increase as more electrons are shared between the bonded atoms (Moore, Stanitski, and Jurs 343).
Single bonds are often seen in diatomic molecules. Examples of this use of single bonds include H2, F2, and HCl.
Single bonds are also seen in molecules made up of more than two atoms. Examples of this use of single bonds include:
Both bonds in H2O
All 4 bonds in CH4
Single bonding even appears in molecules as complex as hydrocarbons larger than methane. The type of covalent bonding in hydrocarbons is extremely important in the nomenclature of these molecules. Hydrocarbons containing only single bonds are referred to as alkanes (Moore, Stanitski, and Jurs 334). The names of specific molecules which belong to this group end with the suffix -ane. Examples include ethane, 2-methylbutane, and cyclopentane (Moore, Stanitski, and Jurs 335).
See also
Bond order
References
Chemical bonding | Single bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 710 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
1,391,009 | https://en.wikipedia.org/wiki/Plasma%20stealth | Plasma stealth is a proposed process to use ionized gas (plasma) to reduce the radar cross-section (RCS) of an aircraft. Interactions between electromagnetic radiation and ionized gas have been extensively studied for many purposes, including concealing aircraft from radar as stealth technology. Various methods might plausibly be able to form a layer or cloud of plasma around a vehicle to deflect or absorb radar, from simpler electrostatic or radio frequency discharges to more complex laser discharges. It is theoretically possible to reduce RCS in this way, but it may be very difficult to do so in practice. Some Russian missiles e.g. the 3M22 Zircon (SS-N-33) and Kh-47M2 Kinzhal missiles have been reported to make use of plasma stealth.
First claims
In 1956, Arnold Eldredge, of General Electric, filed a patent application for an "Object Camouflage Method and Apparatus," which proposed using a particle accelerator in an aircraft to create a cloud of ionization that would "...refract or absorb incident radar beams." It is unclear who funded this work or whether it was prototyped and tested. U.S. Patent 3,127,608 was granted in 1964.
During Project OXCART, the operation of the Lockheed A-12 reconnaissance aircraft, the CIA funded an attempt to reduce the RCS of the A-12's inlet cones. Known as Project KEMPSTER, this used an electron beam generator to create a cloud of ionization in front of each inlet. The system was flight tested but was never deployed on operational A-12s or SR-71s. The A-12 also had the capability to use a cesium-based fuel additive called "A-50" to ionize the exhaust gases, thus blocking radar waves from reflecting off the aft quadrant and engine exhaust pipes. Cesium was used because it was easily ionized by the hot exhaust gases. Radar physicist Ed Lovick Jr. claimed this additive saved the A-12 program.
In 1992, Hughes Research Laboratory conducted a research project to study electromagnetic wave propagation in unmagnetized plasma. A series of high voltage spark gaps were used to generate UV radiation, which creates plasma via photoionization in a waveguide. Plasma filled missile radomes were tested in an anechoic chamber for attenuation of reflection. At about the same time, R. J. Vidmar studied the use of atmospheric pressure plasma as electromagnetic reflectors and absorbers. Other investigators also studied the case of a non-uniform magnetized plasma slab.
Despite the apparent technical difficulty of designing a plasma stealth device for combat aircraft, there are claims that a system was offered for export by Russia in 1999. In January 1999, the Russian ITAR-TASS news agency published an interview with Doctor Anatoliy Koroteyev, the director of the Keldysh Research Center (FKA Scientific Research Institute for Thermal Processes), who talked about the plasma stealth device developed by his organization. The claim was particularly interesting in light of the solid scientific reputation of Dr. Koroteyev and the Institute for Thermal Processes, which is one of the top scientific research organizations in the world in the field of fundamental physics.
The Journal of Electronic Defense reported that "plasma-cloud-generation technology for stealth applications" developed in Russia reduces an aircraft's RCS by a factor of 100 (20 dB). According to this June 2002 article, the Russian plasma stealth device has been tested aboard a Sukhoi Su-27IB fighter-bomber. The Journal also reported that similar research into applications of plasma for RCS reduction is being carried out by Accurate Automation Corporation (Chattanooga, Tennessee) and Old Dominion University (Norfolk, Virginia) in the U.S.; and by Dassault Aviation (Saint-Cloud, France) and Thales (Paris, France).
Plasma and its properties
A plasma is a quasineutral (total electrical charge is close to zero) mix of ions (atoms which have been ionized, and therefore possess a net positive charge), electrons, and neutral particles (un-ionized atoms or molecules). Most plasmas are only partially ionized; in fact, the ionization degree in common plasma devices such as fluorescent lamps is fairly low (less than 1%). Almost all the matter in the universe is very low density plasma: solids, liquids and gases are uncommon away from planetary bodies. Plasmas have many technological applications, from fluorescent lighting to plasma processing for semiconductor manufacture.
Plasmas can interact strongly with electromagnetic radiation: this is why plasmas might plausibly be used to modify an object's radar signature. Interaction between plasma and electromagnetic radiation is strongly dependent on the physical properties and parameters of the plasma, most notably the electron temperature and plasma density.
The characteristic electron plasma frequency, the frequency with which electrons oscillate (plasma oscillation), is ωpe = √(ne e²/(ε0 me)), where ne is the electron density, e the elementary charge, me the electron mass, and ε0 the vacuum permittivity.
Plasmas can have a wide range of values in both temperature and density; plasma temperatures range from close to absolute zero to well beyond 10^9 kelvins (for comparison, tungsten melts at 3700 kelvins), and a plasma may contain less than one particle per cubic metre. Electron temperature is usually expressed in electronvolts (eV), and 1 eV is equivalent to 11,604 K. Typical plasma temperatures and densities in fluorescent light tubes and semiconductor manufacturing processes are around several eV and 10^9–10^12 per cm^3. For a wide range of parameters and frequencies, plasma is electrically conductive, and its response to low-frequency electromagnetic waves is similar to that of a metal: a plasma simply reflects incident low-frequency radiation. Low-frequency here means lower than the characteristic electron plasma frequency. The use of plasmas to control the reflected electromagnetic radiation from an object (plasma stealth) is feasible at suitable frequencies where the conductivity of the plasma allows it to interact strongly with the incoming radio wave, and the wave can either be absorbed and converted into thermal energy, or reflected, or transmitted depending on the relationship between the radio wave frequency and the characteristic plasma frequency. If the frequency of the radio wave is lower than the plasma frequency, it is reflected; if it is higher, it is transmitted. If the two are equal, resonance occurs. There is also another mechanism by which reflection can be reduced. If the electromagnetic wave passes through the plasma and is reflected by the metal, and the reflected wave and incoming wave are roughly equal in power, then they may form two phasors. When these two phasors are of opposite phase they can cancel each other out. In order to obtain substantial attenuation of the radar signal, the plasma slab needs adequate thickness and density.
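For orientation, the electron plasma frequency and the electron density needed to reflect a given radar frequency can be evaluated with the standard cold-plasma formula fp = (1/2π)√(ne e²/(ε0 me)); the densities and radar frequency below are illustrative assumptions.

```python
import numpy as np
from scipy.constants import e, m_e, epsilon_0

def plasma_frequency_hz(n_e):
    """Electron plasma frequency f_p = (1/(2*pi)) * sqrt(n_e * e^2 / (epsilon_0 * m_e))."""
    return np.sqrt(n_e * e**2 / (epsilon_0 * m_e)) / (2 * np.pi)

def critical_density(f_radar_hz):
    """Electron density at which the plasma frequency equals a given radar frequency."""
    return (2 * np.pi * f_radar_hz)**2 * epsilon_0 * m_e / e**2

print(plasma_frequency_hz(1e18) / 1e9, "GHz")             # ~9 GHz for n_e = 1e18 m^-3
print(critical_density(10e9), "m^-3 for a 10 GHz radar")  # ~1.2e18 m^-3
```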
Plasmas support a wide range of waves, but for unmagnetised plasmas, the most relevant are the Langmuir waves, corresponding to a dynamic compression of the electrons. For magnetised plasmas, many different wave modes can be excited which might interact with radiation at radar frequencies.
Absorption of EM radiation
When electromagnetic waves, such as radar signals, propagate into a conductive plasma, ions and electrons are displaced as a result of the time varying electric and magnetic fields. The wave field gives energy to the particles. The particles generally return some fraction of the energy they have gained to the wave, but some energy may be permanently absorbed as heat by processes like scattering or resonant acceleration, or transferred into other wave types by mode conversion or nonlinear effects. A plasma can, at least in principle, absorb all the energy in an incoming wave, and this is the key to plasma stealth. However, plasma stealth implies a substantial reduction of an aircraft's RCS, making it more difficult (but not necessarily impossible) to detect. The mere fact of detection of an aircraft by a radar does not guarantee an accurate targeting solution needed to intercept the aircraft or to engage it with missiles. A reduction in RCS also results in a proportional reduction in detection range, allowing an aircraft to get closer to the radar before being detected.
The central issue here is the frequency of the incoming signal. A plasma will simply reflect radio waves below a certain frequency (the characteristic electron plasma frequency). This is the basic principle of short wave radios and long-range communications, because low-frequency radio signals bounce between the Earth and the ionosphere and may therefore travel long distances. Early-warning over-the-horizon radars utilize such low-frequency radio waves (typically lower than 50 MHz). Most military airborne and air defense radars, however, operate in the VHF, UHF, and microwave bands, which have frequencies higher than the characteristic plasma frequency of the ionosphere; therefore microwaves can penetrate the ionosphere, and communication between the ground and communication satellites is possible.
Plasma surrounding an aircraft might be able to absorb incoming radiation, and therefore reduces signal reflection from the metal parts of the aircraft: the aircraft would then be effectively invisible to radar at long range due to weak signals received. A plasma might also be used to modify the reflected waves to confuse the opponent's radar system: for example, frequency-shifting the reflected radiation would frustrate Doppler filtering and might make the reflected radiation more difficult to distinguish from noise.
Control of plasma properties like density and temperature is important for a functioning plasma stealth device, and it may be necessary to dynamically adjust the plasma density, temperature, or combinations, or the magnetic field, in order to effectively defeat different types of radar systems. The great advantage Plasma Stealth possesses over traditional radio frequency stealth techniques like low-observability geometry and use of radar-absorbent materials is that plasma is tunable and wideband. When faced with frequency hopping radar, it is possible, at least in principle, to change the plasma temperature and density to deal with the situation. The greatest challenge is to generate a large area or volume of plasma with good energy efficiency.
Plasma stealth technology also faces various technical problems. For example, the plasma itself emits EM radiation, although it is usually weak and noise-like in spectrum. Also, it takes some time for plasma to be re-absorbed by the atmosphere, and a trail of ionized air would be created behind the moving aircraft, but at present there is no method to detect this kind of plasma trail at long distance. Thirdly, plasmas (like glow discharges or fluorescent lights) tend to emit a visible glow: this is not compatible with the overall low-observability concept. Last but not least, it is extremely difficult to produce a radar-absorbent plasma around an entire aircraft traveling at high speed; the electrical power needed is tremendous. However, a substantial reduction of an aircraft's RCS may still be achieved by generating radar-absorbent plasma around the most reflective surfaces of the aircraft, such as the turbojet engine fan blades, engine air intakes, vertical stabilizers, and the airborne radar antenna.
There have been several computational studies on plasma-based radar cross section reduction technique using three-dimensional finite-difference time-domain simulations. Chung studied the radar cross change of a metal cone when it is covered with plasma, a phenomenon that occurs during reentry into the atmosphere. Chung simulated the radar cross section of a generic satellite, and also the radar cross section when it is covered with artificially generated plasma cones.
Theoretical work with Sputnik
Due to the obvious military applications of the subject, there are few readily available experimental studies of plasma's effect on the radar cross section (RCS) of aircraft, but plasma interaction with microwaves is a well explored area of general plasma physics. Standard plasma physics reference texts are a good starting point and usually spend some time discussing wave propagation in plasmas.
One of the most interesting articles related to the effect of plasma on the RCS of aircraft was published in 1963 by the IEEE. The article is entitled "Radar cross sections of dielectric or plasma coated conducting spheres and circular cylinders" (IEEE Transactions on Antennas and Propagation, September 1963, pp. 558–569). Six years earlier, in 1957, the Soviets had launched the first artificial satellite. While trying to track Sputnik it was noticed that its electromagnetic scattering properties were different from what was expected for a conductive sphere. This was due to the satellite's traveling inside of a plasma shell: the ionosphere.
The Sputnik's simple shape serves as an ideal illustration of plasma's effect on the RCS of an aircraft. Naturally, an aircraft would have a far more elaborate shape and be made of a greater variety of materials, but the basic effect should remain the same. In the case of the Sputnik flying through the ionosphere at high velocity and surrounded by a naturally occurring plasma shell, there are two separate radar reflections: the first from the conductive surface of the satellite, and the second from the dielectric plasma shell.
The authors of the paper found that a dielectric (plasma) shell may either decrease or increase the echo area of the object. If either one of the two reflections is considerably greater, then the weaker reflection will not contribute much to the overall effect. The authors also stated that the EM signal that penetrates the plasma shell and reflects off the object's surface will drop in intensity while traveling through plasma, as was explained in the prior section.
The most interesting effect is observed when the two reflections are of the same order of magnitude. In this situation the two components (the two reflections) will be added as phasors and the resulting field will determine the overall RCS. When these two components are out of phase relative to each other, cancellation occurs. This means that under such circumstances the RCS becomes null and the object is completely invisible to the radar.
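The cancellation condition is easy to quantify with two complex phasors of comparable magnitude; the amplitudes and phase offsets below are illustrative, not values from the 1963 study.

```python
import numpy as np

# Two radar returns of comparable magnitude modelled as complex phasors:
# one from the conducting surface, one from the surrounding plasma/dielectric shell.
A_surface = 1.0
A_shell = 0.9

for phase_deg in (0, 90, 180):
    total = A_surface + A_shell * np.exp(1j * np.radians(phase_deg))
    # Echo power: 3.61 in phase, ~1.81 at 90 degrees, 0.01 when out of phase
    print(phase_deg, abs(total)**2)
```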
It is immediately apparent that performing similar numeric approximations for the complex shape of an aircraft would be difficult. This would require a large body of experimental data for the specific airframe, properties of plasma, aerodynamic aspects, incident radiation, etc. In contrast, the original computations discussed in this paper were done by a handful of people on an IBM 704 computer made in 1956, and at the time, this was a novel subject with very little research background. So much has changed in science and engineering since 1963, that differences between a metal sphere and a modern combat jet pale in comparison.
A simple application of plasma stealth is the use of plasma as an antenna: metal antenna masts often have large radar cross sections, but a hollow glass tube filled with low pressure plasma can also be used as an antenna, and is entirely transparent to radar when not in use.
See also
Stealth technology
Active camouflage
Multi-spectral camouflage
Cloaking device
Penetration aid
References
Radar
stealth
Stealth technology | Plasma stealth | [
"Physics"
] | 2,999 | [
"Plasma technology and applications",
"Plasma physics"
] |
1,391,752 | https://en.wikipedia.org/wiki/Overall%20length | The overall length (OAL) of an ammunition cartridge is a measurement from the base of the brass shell casing to the tip of the bullet, seated into the brass casing. Cartridge overall length, or "COL", is important to safe functioning of reloads in firearms.
Handloaded cartridges and commercially available cartridges for firearms are normally created with a maximum length standardized by the Sporting Arms and Ammunition Manufacturers' Institute (SAAMI). A cartridge's overall length may be shorter than the maximum standard, equal to the standard, or sometimes even longer.
The maximum overall length is dictated by the need to fit into a box magazine of standard manufacture. For example, the .223 Remington cartridge, when loaded for use in the AR-15 rifle (or the military's M16 rifle), has to fit into the removable box magazine for that rifle. This dictates that the cartridge's maximum overall length be no greater than 2.260". However, for competition purposes during off-hand and slow fire prone match stages, the .223 Remington is loaded one cartridge at a time into the rifle's receiver. This allows for the cartridge to be longer than the standardized 2.260" SAAMI maximum overall length. These cartridges can be safely loaded to a length that has the ogive portion of the bullet just touching the rifle's lands. Many competitive shooters will make these cartridges 0.005" less than the truly maximum allowable overall length, for the sake of safety.
It is desirable for these single-loaded cartridges to have as little bullet jump as possible before the bullet's ogive begins to be engraved by the rifle's lands. This minimized bullet jump increases the accuracy of the rifle, all else being equal. This practice of long-loading a cartridge must be adjusted for each individual rifle, since there are variations from rifle to rifle as to how far down the barrel the rifling begins.
References
Ballistics | Overall length | [
"Physics"
] | 399 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |
25,717,385 | https://en.wikipedia.org/wiki/Constrained-layer%20damping | Constrained-layer damping is a mechanical engineering technique to suppress vibrations. Typically a viscoelastic or other damping material, is sandwiched between two sheets of stiff materials that lack sufficient damping by themselves. The result is that any vibration generated on either side of the constraining materials (the two stiffer materials on the sides) is suppressed by the viscoelastic material, by turning it into heat. The damping is associated with the shear deformation of the viscoelastic material.
References
Mechanical engineering | Constrained-layer damping | [
"Physics",
"Engineering"
] | 104 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
25,723,035 | https://en.wikipedia.org/wiki/Yang%E2%80%93Mills%E2%80%93Higgs%20equations | In mathematics, the Yang–Mills–Higgs equations are a set of non-linear partial differential equations for a Yang–Mills field, given by a connection, and a Higgs field, given by a section of a vector bundle (specifically, the adjoint bundle). These equations are
with a boundary condition
where
A is a connection on a vector bundle,
D is the exterior covariant derivative,
F is the curvature of that connection,
Φ is a section of that vector bundle,
∗ is the Hodge star, and
[·,·] is the natural, graded bracket.
These equations are named after Chen Ning Yang, Robert Mills, and Peter Higgs. They are very closely related to the Ginzburg–Landau equations, when these are expressed in a general geometric setting.
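In the notation defined above, a commonly quoted form of the equations and boundary condition (a sketch of the standard presentation; sign and Hodge-star conventions vary between sources) is:

```latex
% Yang-Mills-Higgs equations (standard form; conventions vary between sources)
\begin{aligned}
  D_A {\ast} F_A + [\,\Phi,\ {\ast} D_A \Phi\,] &= 0, \\
  D_A {\ast} D_A \Phi &= 0,
\end{aligned}
\qquad \text{with the boundary condition} \qquad
\lVert \Phi \rVert \to 1 \ \text{as} \ \lvert x \rvert \to \infty .
```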
M.V. Goganov and L.V. Kapitanskii have shown that the Cauchy problem for hyperbolic Yang–Mills–Higgs equations in Hamiltonian gauge on 4-dimensional Minkowski space has a unique global solution with no restrictions at spatial infinity. Furthermore, the solution has the finite propagation speed property.
Lagrangian
The equations arise as the equations of motion of the Lagrangian density
where is an invariant symmetric bilinear form on the adjoint bundle. This is sometimes written as due to the fact that such a form can arise from the trace on under some representation; in particular here we are concerned with the adjoint representation, and the trace on this representation is the Killing form.
For the particular form of the Yang–Mills–Higgs equations given above, the potential is vanishing. Another common choice is , corresponding to a massive Higgs field.
This theory is a particular case of scalar chromodynamics where the Higgs field is valued in the adjoint representation as opposed to a general representation.
See also
Yang–Mills equations
Stable Yang–Mills–Higgs pair
Scalar chromodynamics
References
M.V. Goganov and L.V. Kapitansii, "Global solvability of the initial problem for Yang-Mills-Higgs equations", Zapiski LOMI 147,18–48, (1985); J. Sov. Math, 37, 802–822 (1987).
Partial differential equations
Quantum field theory | Yang–Mills–Higgs equations | [
"Physics"
] | 480 | [
"Quantum field theory",
"Quantum mechanics"
] |
30,329,560 | https://en.wikipedia.org/wiki/Berry%20connection%20and%20curvature | In physics, Berry connection and Berry curvature are related concepts which can be viewed, respectively, as a local gauge potential and gauge field associated with the Berry phase or geometric phase. The concept was first introduced by S. Pancharatnam as geometric phase and later elaborately explained and popularized by Michael Berry in a paper published in 1984 emphasizing how geometric phases provide a powerful unifying concept in several branches of classical and quantum physics.
Berry phase and cyclic adiabatic evolution
In quantum mechanics, the Berry phase arises in a cyclic adiabatic evolution. The quantum adiabatic theorem applies to a system whose Hamiltonian depends on a (vector) parameter R that varies with time t. If the nth eigenvalue remains non-degenerate everywhere along the path and the variation with time t is sufficiently slow, then a system initially in the normalized eigenstate will remain in an instantaneous eigenstate of the Hamiltonian, up to a phase, throughout the process. Regarding the phase, the state at time t can be written as
where the second exponential term is the "dynamic phase factor." The first exponential term is the geometric term, with being the Berry phase. From the requirement that the state satisfies the time-dependent Schrödinger equation, it can be shown that
indicating that the Berry phase only depends on the path in the parameter space, not on the rate at which the path is traversed.
In the case of a cyclic evolution around a closed path such that , the closed-path Berry phase is
An example of physical systems where an electron moves along a closed path is cyclotron motion (details are given in the page of Berry phase). Berry phase must be considered to obtain the correct quantization condition.
Gauge transformation
A gauge transformation can be performed
to a new set of states that differ from the original ones only by an R-dependent phase factor. This modifies the open-path Berry phase. For a closed path, continuity requires that the added phase change be an integer multiple of 2π, and it follows that the closed-path Berry phase is invariant, modulo 2π, under an arbitrary gauge transformation.
Berry connection
The closed-path Berry phase defined above can be expressed as
where
is a vector-valued function known as the Berry connection (or Berry potential). The Berry connection is gauge-dependent, transforming as
Hence the local Berry connection can never be physically observable. However, its integral along a closed path, the Berry phase γ, is gauge-invariant up to an integer multiple of 2π. Thus, exp(iγ) is absolutely gauge-invariant, and may be related to physical observables.
Berry curvature
The Berry curvature is an anti-symmetric second-rank tensor derived from the Berry connection via
In a three-dimensional parameter space the Berry curvature can be written in the pseudovector form
The tensor and pseudovector forms of the Berry curvature are related to each other through the Levi-Civita antisymmetric tensor as . In contrast to the Berry connection, which is physical only after integrating around a closed path, the Berry curvature is a gauge-invariant local manifestation of the geometric properties of the wavefunctions in the parameter space, and has proven to be an essential physical ingredient for understanding a variety of electronic properties.
For a closed path that forms the boundary of a surface , the closed-path Berry phase can be rewritten using Stokes' theorem as
If the surface is a closed manifold, the boundary term vanishes, but the indeterminacy of the boundary term modulo manifests itself in the Chern theorem, which states that the integral of the Berry curvature over a closed manifold is quantized in units of . This number is the so-called Chern number, and is essential for understanding various quantization effects.
Finally, by using for , the Berry curvature can also be written as a summation over all the other eigenstates in the form
Note that the curvature of the nth energy level is contributed by all the other energy levels. That is, the Berry curvature can be viewed as the result of the residual interaction of those projected-out eigenstates. This gives a local conservation law for the Berry curvature if we sum over all possible energy levels for each value of R. This equation also offers the advantage that no differentiation on the eigenstates is involved, and thus it can be computed under any gauge choice.
Example: Spinor in a magnetic field
The Hamiltonian of a spin-1/2 particle in a magnetic field can be written as
where denote the Pauli matrices, is the magnetic moment, and B is the magnetic field. In three dimensions, the eigenstates have energies and their eigenvectors are
Now consider the state. Its Berry connection can be computed as
, and the Berry curvature is
If we choose a new gauge by multiplying by (or any other phase , ), the Berry connections are
and , while the Berry curvature remains the same. This is consistent with the conclusion that the Berry connection is gauge-dependent while the Berry curvature is not.
The Berry curvature per solid angle is given by . In this case, the Berry phase corresponding to any given path on the unit sphere in magnetic-field space is just half the solid angle subtended by the path.
The integral of the Berry curvature over the whole sphere is therefore exactly , so that the Chern number is unity, consistent with the Chern theorem.
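The half-solid-angle result can be verified numerically with the discretized (Wilson-loop) form of the Berry phase, γ = −Im ln Π_j ⟨ψ_j|ψ_{j+1}⟩, evaluated around a loop of constant polar angle. The sketch below is an illustration only: it uses the spin-aligned eigenstate, arbitrary loop parameters, and the result is defined modulo 2π.

```python
import numpy as np

def spin_up_state(theta, phi):
    """Eigenstate of n.sigma with eigenvalue +1 (spin aligned with the field direction n)."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def berry_phase_loop(theta, n_steps=2000):
    """Discrete Berry phase for a loop at constant polar angle theta:
    gamma = -Im ln prod_j <psi_j | psi_{j+1}>, with the loop closed in phi."""
    phis = np.linspace(0.0, 2 * np.pi, n_steps + 1)   # phi_0 ... phi_N = phi_0 + 2*pi
    states = [spin_up_state(theta, p) for p in phis]
    product = 1.0 + 0.0j
    for j in range(n_steps):
        product *= np.vdot(states[j], states[j + 1])
    return -np.angle(product)   # defined modulo 2*pi

theta = np.pi / 3
print(berry_phase_loop(theta))          # numerical result
# Analytic: minus half the solid angle 2*pi*(1 - cos(theta)); the overall sign
# depends on which eigenstate and phase convention is used.
print(-np.pi * (1 - np.cos(theta)))
```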
Applications in crystals
The Berry phase plays an important role in modern investigations of electronic properties in crystalline solids and in the theory of the quantum Hall effect.
The periodicity of the crystalline potential allows the application of the Bloch theorem, which states that the Hamiltonian eigenstates take the form
where n is a band index, k is a wavevector in the reciprocal space (Brillouin zone), and unk is a periodic function of position. Due to translational symmetry, the momentum operator can be replaced with p + ħk by the Peierls substitution, and the wavevector k plays the role of the parameter R. Thus, one can define Berry phases, connections, and curvatures in the reciprocal space. For example, in an
N-band system, the Berry connection of the nth band in reciprocal space is
In this system, the Berry curvature of the nth band receives contributions from all the other N − 1 bands for each value of k. In a 2D crystal, the Berry curvature only has the component out of the plane and behaves as a pseudoscalar. This is because only in-plane translational symmetry exists; translational symmetry is broken along the z direction for a 2D crystal.
Because the Bloch theorem also implies that the reciprocal space itself is closed, with the Brillouin zone having the topology of a 3-torus in three dimensions, the requirements of integrating over a closed loop or manifold can easily be satisfied. In this way, such properties as the electric polarization, orbital magnetization, anomalous Hall conductivity, and orbital magnetoelectric coupling can be expressed in terms of Berry phases, connections, and curvatures.
References
External links
The quantum phase, five years after. by M. Berry.
Berry Phases and Curvatures in Electronic Structure Theory A talk by D. Vanderbilt.
Berry-ology, Orbital Magnetolectric Effects, and Topological Insulators - A talk by D. Vanderbilt.
Classical mechanics
Quantum phases | Berry connection and curvature | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,482 | [
"Quantum phases",
"Phases of matter",
"Classical mechanics",
"Quantum mechanics",
"Mechanics",
"Condensed matter physics",
"Matter"
] |
30,335,048 | https://en.wikipedia.org/wiki/Bunch%E2%80%93Davies%20vacuum | In quantum field theory in curved spacetime, there is a whole class of quantum states over a background de Sitter space which are invariant under all the isometries: the alpha-vacua. Among them there is a particular one whose associated Green functions verify a condition (Hadamard condition) consisting to behave on the light-cone as in flat space. This state is usually called the Bunch–Davies vacuum or Euclidean vacuum,
actually was first obtained by N.A. Chernikov and E. A. Tagirov, in 1968 and later by C. Schomblond and P. Spindel, in 1976, in the framework of a general discussion about invariant Green functions on de Sitter space.
The Bunch–Davies vacuum can also be described as being generated by an infinite time trace from the condition that the scale of quantum fluctuations is much smaller than the Hubble scale. The state possesses no quanta at the asymptotic past infinity.
The Bunch–Davies state is the zero-particle state as seen by a geodesic observer, that is, an observer who is in free fall in the expanding state. The state explains the origin of cosmological perturbation fluctuations in inflationary models.
See also
Quantum field theory in curved spacetime
Unruh effect
Hawking radiation
Inflation (cosmology)
References
Further reading
Quantum field theory
Theory of relativity | Bunch–Davies vacuum | [
"Physics"
] | 281 | [
"Quantum field theory",
"Quantum mechanics",
"Relativity stubs",
"Theory of relativity"
] |
30,337,848 | https://en.wikipedia.org/wiki/Harmonic%20pitch%20class%20profiles | Harmonic pitch class profiles (HPCP) is a group of features that a computer program extracts from an audio signal, based on a pitch class profile—a descriptor proposed in the context of a chord recognition system. HPCP are an enhanced pitch distribution feature that are sequences of feature vectors that, to a certain extent, describe tonality, measuring the relative intensity of each of the 12 pitch classes of the equal-tempered scale within an analysis frame. Often, the twelve pitch spelling attributes are also referred to as chroma and the HPCP features are closely related to what is called chroma features or chromagrams.
By processing musical signals, software can identify HPCP features and use them to estimate the key of a piece, to measure similarity between two musical pieces (cover version identification), to perform content-based audio retrieval (audio matching),
to extract the musical structure (audio structure analysis),
and to classify music in terms of composer, genre or mood. The process is related to time-frequency analysis. In general, chroma features are robust to noise (e.g., ambient noise or percussive sounds), independent of timbre and instrumentation and independent of loudness and dynamics.
HPCPs are tuning independent and consider the presence of harmonic frequencies, so that the reference frequency can be different from the standard A 440 Hz. The result of HPCP computation is a 12, 24, or 36-bin octave-independent histogram depending on the desired resolution, representing the relative intensity of each 1, 1/2, or 1/3 of the 12 semitones of the equal tempered scale.
General HPCP feature extraction procedure
The block diagram of the procedure is shown in Fig. 1 and is further detailed in the references.
The general HPCP feature extraction procedure is summarized as follows; a minimal code sketch is given after the list:
Input musical signal.
Do spectral analysis to obtain the frequency components of the music signal.
Use Fourier transform to convert the signal into a spectrogram. (The Fourier transform is a type of time-frequency analysis.)
Do frequency filtering. A frequency range of between 100 and 5000 Hz is used.
Do peak detection. Only the local maximum values of the spectrum are considered.
Do reference frequency computation procedure. Estimate the deviation with respect to 440 Hz.
Do pitch class mapping with respect to the estimated reference frequency. This is a procedure for determining the pitch class value from frequency values. A weighting scheme with a cosine function is used. It considers the presence of harmonic frequencies (harmonic summation procedure), taking into account a total of 8 harmonics for each frequency. To map the values with a resolution of one-third of a semitone, the size of the pitch class distribution vectors must be equal to 36.
Normalize the feature frame by frame, dividing by the maximum value, to eliminate dependency on global loudness. This results in an HPCP sequence like the one shown in Fig. 2.
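The sketch below is a deliberately simplified, hypothetical illustration of these steps in Python (NumPy only): it computes a frame-wise spectrum, keeps spectral peaks in the 100–5000 Hz band, maps them to pitch classes relative to an assumed reference frequency, and normalizes each frame. The frame length, hop size, peak picking and the omission of the cosine weighting and harmonic summation are simplifying assumptions, so it is not a faithful HPCP implementation.

```python
import numpy as np

def hpcp_like_chroma(signal, sr, frame=4096, hop=2048, f_ref=440.0, n_bins=12):
    """Toy HPCP-style chroma: spectral peaks in the 100-5000 Hz band are mapped
    to pitch classes relative to f_ref and each frame is normalized to its maximum."""
    frames = []
    for start in range(0, len(signal) - frame, hop):
        windowed = signal[start:start + frame] * np.hanning(frame)
        spectrum = np.abs(np.fft.rfft(windowed))
        freqs = np.fft.rfftfreq(frame, d=1.0 / sr)
        chroma = np.zeros(n_bins)
        for k in range(1, len(spectrum) - 1):
            f = freqs[k]
            is_peak = spectrum[k] > spectrum[k - 1] and spectrum[k] > spectrum[k + 1]
            if is_peak and 100.0 <= f <= 5000.0:
                # pitch class index relative to the reference frequency
                pc = int(round(n_bins * np.log2(f / f_ref))) % n_bins
                chroma[pc] += spectrum[k] ** 2
        if chroma.max() > 0:
            chroma /= chroma.max()   # remove dependence on global loudness
        frames.append(chroma)
    return np.array(frames)
```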
System of measuring similarity between two songs
After the HPCP feature has been extracted, the pitch content of the signal in each time section is known. The HPCP feature has been used to compute similarity between two songs in many research papers. A system for measuring similarity between two songs is shown in Fig. 3. First, time-frequency analysis is needed to extract the HPCP features. The HPCP features of the two songs are then referred to a global HPCP, so that there is a common standard for comparison. The next step is to use the two features to construct a binary similarity matrix. The Smith–Waterman algorithm is used to construct a local alignment matrix H in the dynamic programming local alignment. Finally, after post-processing, the distance between the two songs can be computed.
See also
Time-frequency analysis
Time-frequency analysis for music signal
Pitch (music)
Musical theory
References
External links
HPCP - Harmonic Pitch Class Profile plugin available for download http://mtg.upf.edu/technologies/hpcp
Chroma Toolbox Free MATLAB implementations of various chroma types of pitch-based and chroma-based audio features
Time–frequency analysis
Music information retrieval | Harmonic pitch class profiles | [
"Physics"
] | 825 | [
"Frequency-domain analysis",
"Spectrum (physical sciences)",
"Time–frequency analysis"
] |
5,123,821 | https://en.wikipedia.org/wiki/Day-night%20average%20sound%20level | The day-night average sound level (Ldn or DNL) is the average noise level over a 24-hour period. The noise level measurements between the hours of 22:00 and 07:00 are artificially increased by 10 dB before averaging. This noise is weighted to take into account the decrease in community background noise of 10 dB during this period. There is a similar metric called day-evening-night average sound level (Lden or DENL) commonly used in other countries, or community noise exposure level (CNEL) used in California legislation; that is, the DNL with the addition of an evening period from 19:00 to 22:00 when noise level measurements are boosted 5 dB (or 4.77 dB in the case of CNEL) to account for the approximate decrease in background community noise during this period.
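As a worked illustration, the following minimal Python sketch applies the standard energy-average formula for Ldn to 24 hourly Leq values, adding the 10 dB night-time penalty for the hours 22:00–07:00; the hourly values used in the example are invented.

```python
import math

def day_night_level(hourly_leq):
    """hourly_leq: 24 A-weighted Leq values in dB, index 0 = 00:00-01:00.
    Hours 22:00-07:00 (indices 22, 23 and 0-6) receive a +10 dB penalty."""
    assert len(hourly_leq) == 24
    total = 0.0
    for hour, level in enumerate(hourly_leq):
        penalty = 10.0 if (hour >= 22 or hour < 7) else 0.0
        total += 10 ** ((level + penalty) / 10.0)
    return 10.0 * math.log10(total / 24.0)

# Example: a constant 60 dB daytime with 50 dB nights gives Ldn = 60.0 dB
print(round(day_night_level([50] * 7 + [60] * 15 + [50] * 2), 1))
```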
In the US, the Federal Aviation Administration has established this measure as a community noise exposure metric to aid airport noise analyses under Federal Aviation Regulation Part 150. The FAA says that a maximum day-night average sound level of higher than 65 dB is incompatible with residential communities. Communities in affected areas may be eligible for mitigation such as soundproofing.
See also
Aircraft noise
Effective perceived noise in decibels rating of aircraft
Noise pollution
Noise measurement
Day–evening–night noise level, the EU equivalent
References
United States administrative law
Federal Aviation Administration
Aviation law
Noise pollution
Sound measurements | Day-night average sound level | [
"Physics",
"Mathematics"
] | 284 | [
"Quantity",
"Sound measurements",
"Physical quantities"
] |
5,124,085 | https://en.wikipedia.org/wiki/Drill%20floor | The drill floor is the heart of any drilling rig. This is the area where the drill string begins its trip into the earth. It is traditionally where joints of pipe are assembled, as well as the BHA (bottom hole assembly), drilling bit, and various other tools. This is the primary work location for roughnecks and the driller. The drill floor is located directly under the derrick.
The floor is a relatively small work area in which the rig crew conducts operations, usually adding or removing drillpipe to, or from the drillstring. The rig floor is the most dangerous location on the rig because heavy iron is moved around there. Drill string connections are made or broken on the drill floor, and the driller's console for controlling the major components of the rig are located there. Attached to the rig floor is a small metal room, the doghouse, where the rig crew can meet, take breaks, and take refuge from the elements during idle times.
External links
Schlumberger Oilfield Glossary
The History of the Oil Industry
"Black Gold" Popular Mechanics, January 1930 - large photo article on oil drilling in the 1920s and 1930s
"World's Deepest Well" Popular Science, August 1938, article on the late 1930s technology of drilling oil wells
Oilfield terminology
Oil platforms
Mining equipment | Drill floor | [
"Chemistry",
"Engineering"
] | 266 | [
"Oil platforms",
"Structural engineering",
"Mining equipment",
"Petroleum stubs",
"Petroleum technology",
"Petroleum",
"Natural gas technology"
] |
5,124,994 | https://en.wikipedia.org/wiki/Spherical%20model | The spherical model is a model of ferromagnetism similar to the Ising model, which was solved in 1952 by T. H. Berlin and M. Kac. It has the remarkable property that for linear dimension d greater than four, the critical exponents that govern the behaviour of the system near the critical point are independent of d and the geometry of the system. It is one of the few models of ferromagnetism that can be solved exactly in the presence of an external field.
Formulation
The model describes a set of particles on a lattice containing N sites. Each site j of the lattice contains a spin σj which interacts only with its nearest neighbours and with an external field H. It differs from the Ising model in that the σj are no longer restricted to ±1, but can take all real values, subject to the constraint that ∑j σj^2 = N,
which in a homogeneous system ensures that the average of the square of any spin is one, as in the usual Ising model.
The partition function generalizes from that of the Ising model to
where δ is the Dirac delta function, the ⟨jl⟩ are the edges of the lattice, and K = J/(kT) and h = H/(kT), where T is the temperature of the system, k is the Boltzmann constant and J the coupling constant of the nearest-neighbour interactions.
Berlin and Kac saw this as an approximation to the usual Ising model, arguing that the σ-summation in the Ising model can be viewed as a sum over all corners of an N-dimensional hypercube in σ-space. The sum then becomes an integration over the surface of a hypersphere passing through all such corners.
It was rigorously proved by Kac and C. J. Thompson that the spherical model is a limiting case of the N-vector model.
Equation of state
Solving the partition function and using a calculation of the free energy yields an equation describing the magnetization M of the system
for the function g defined as
The internal energy per site is given by
an exact relation relating internal energy and magnetization.
Critical behaviour
For d ≤ 2 the critical temperature occurs at absolute zero, resulting in no phase transition for the spherical model. For d greater than 2, the spherical model exhibits the typical ferromagnetic behaviour, with a finite Curie temperature where ferromagnetism ceases. The critical behaviour of the spherical model was derived in the completely general circumstances that the dimension d may be a real non-integer dimension.
The critical exponents in the zero-field case, which dictate the behaviour of the system close to the critical temperature, were derived exactly; they are independent of the dimension d when it is greater than four, the dimension being able to take any real value.
References
R. J. Baxter, Exactly solved models in statistical mechanics, London, Academic Press, 1982; 2007 Dover reprint, with a new chapter "Subsequent Developments"
Further reading
Lattice models
Exactly solvable models | Spherical model | [
"Physics",
"Materials_science"
] | 573 | [
"Statistical mechanics",
"Condensed matter physics",
"Lattice models",
"Computational physics"
] |
5,126,046 | https://en.wikipedia.org/wiki/Hereditarily%20countable%20set | In set theory, a set is called hereditarily countable if it is a countable set of hereditarily countable sets.
Results
The inductive definition above is well-founded and can be expressed in the language of first-order set theory.
Equivalent properties
A set is hereditarily countable if and only if it is countable, and every element of its transitive closure is countable.
See also
Hereditarily finite set
Constructible universe
References
Set theory
Large cardinals | Hereditarily countable set | [
"Mathematics"
] | 105 | [
"Set theory",
"Mathematical logic",
"Mathematical objects",
"Infinity",
"Large cardinals"
] |
5,126,167 | https://en.wikipedia.org/wiki/Berry%20mechanism | The Berry mechanism, or Berry pseudorotation mechanism, is a type of vibration causing molecules of certain geometries to isomerize by exchanging the two axial ligands (see the figure) for two of the equatorial ones. It is the most widely accepted mechanism for pseudorotation and most commonly occurs in trigonal bipyramidal molecules such as PF5, though it can also occur in molecules with a square pyramidal geometry. The Berry mechanism is named after R. Stephen Berry, who first described this mechanism in 1960.
Berry mechanism in trigonal bipyramidal structure
The process of pseudorotation occurs when the two axial ligands close like a pair of scissors, pushing their way in between two of the equatorial groups, which scissor out to accommodate them. Both the axial and the equatorial pairs move at the same rate, the angle between the axial ligands closing as the angle between the equatorial ligands opens. This forms a square-based pyramid where the base is the four interchanging ligands and the tip is the pivot ligand, which has not moved. The two originally equatorial ligands then open out until they are 180 degrees apart, becoming axial groups perpendicular to where the axial groups were before the pseudorotation. This requires about 3.6 kcal/mol in PF5.
This rapid exchange of axial and equatorial ligands renders complexes with this geometry unresolvable (unlike carbon atoms with four distinct substituents), except at low temperatures or when one or more of the ligands is bi- or poly-dentate.
Berry mechanism in square pyramidal structure
The Berry mechanism in square pyramidal molecules (such as IF5) is somewhat like the inverse of the mechanism in bipyramidal molecules. Starting at the "transition phase" of bipyramidal pseudorotation, one pair of fluorines scissors back and forth with a third fluorine, causing the molecule to vibrate. Unlike with pseudorotation in bipyramidal molecules, the atoms and ligands which are not actively vibrating in the "scissor" motion are still participating in the process of pseudorotation; they make general adjustments based on the movement of the actively vibrating atoms and ligands. However, this process requires a significant amount of energy, about 26.7 kcal/mol.
See also
Pseudorotation
Bailar twist
Bartell mechanism
Ray–Dutt twist
Fluxional molecule
References
Molecular geometry
Chemical kinetics | Berry mechanism | [
"Physics",
"Chemistry"
] | 503 | [
"Chemical reaction engineering",
"Molecular geometry",
"Molecules",
"Stereochemistry",
"Chemical kinetics",
"Matter"
] |
5,126,231 | https://en.wikipedia.org/wiki/Pseudorotation | In chemistry, a pseudorotation is a set of intramolecular movements of attached groups (i.e., ligands) on a highly symmetric molecule, leading to a molecule indistinguishable from the initial one. The International Union of Pure and Applied Chemistry (IUPAC) defines a pseudorotation as a "stereoisomerization resulting in a structure that appears to have been produced by rotation of the entire initial molecule", the result of which is a "product" that is "superposable on the initial one, unless different positions are distinguished by substitution, including isotopic substitution."
Well-known examples are the intramolecular isomerization of trigonal bipyramidal compounds by the Berry pseudorotation mechanism, and the out-of-plane motions of carbon atoms exhibited by cyclopentane, leading to the interconversions it experiences between its many possible conformers (envelope, twist). Note, no angular momentum is generated by this motion. In these and related examples, a small displacement of the atomic positions leads to a loss of symmetry until the symmetric product re-forms, where these displacements are typically along low-energy pathways. The Berry mechanism refers to the facile interconversion of axial and equatorial ligands in such compounds, e.g. D3h-symmetric trigonal bipyramidal species. Finally, in a formal sense, the term pseudorotation is intended to refer exclusively to dynamics in symmetrical molecules, though mechanisms of the same type are invoked for lower symmetry molecules as well.
See also
Bailar twist
Ray–Dutt twist
Bartell mechanism
Fluxional molecule
References
Chemical bonding
Stereochemistry | Pseudorotation | [
"Physics",
"Chemistry",
"Materials_science"
] | 350 | [
"Stereochemistry",
"Space",
"Condensed matter physics",
"Stereochemistry stubs",
"nan",
"Spacetime",
"Chemical bonding"
] |
5,129,473 | https://en.wikipedia.org/wiki/Core%20electron | Core electrons are the electrons in an atom that are not valence electrons and do not participate directly in chemical bonding. The nucleus and the core electrons of an atom form the atomic core. Core electrons are tightly bound to the nucleus. Therefore, unlike valence electrons, core electrons play a secondary role in chemical bonding and reactions by screening the positive charge of the atomic nucleus from the valence electrons.
The number of valence electrons of an element can be determined by the periodic table group of the element (see valence electron):
For main-group elements, the number of valence electrons ranges from 1 to 8 (ns and np orbitals).
For transition metals, the number of valence electrons ranges from 3 to 12 (ns and (n−1)d orbitals).
For lanthanides and actinides, the number of valence electrons ranges from 3 to 16 (ns, (n−2)f and (n−1)d orbitals).
All other non-valence electrons for an atom of that element are considered core electrons.
Orbital theory
A more complex explanation of the difference between core and valence electrons can be described with atomic orbital theory.
In atoms with a single electron the energy of an orbital is determined exclusively by the principal quantum number n. The n = 1 orbital has the lowest possible energy in the atom. For large n, the energy increases so much that the electron can easily escape from the atom. In single-electron atoms, all energy levels with the same principal quantum number are degenerate, and have the same energy.
In atoms with more than one electron, the energy of an electron depends not only on the properties of the orbital it resides in, but also on its interactions with the other electrons in other orbitals. This requires consideration of the ℓ quantum number. Higher values of ℓ are associated with higher values of energy; for instance, the 2p state is higher than the 2s state. When ℓ = 2, the increase in energy of the orbital becomes large enough to push the energy of the orbital above the energy of the s-orbital in the next higher shell; when ℓ = 3 the energy is pushed into the shell two steps higher. The filling of the 3d orbitals does not occur until the 4s orbitals have been filled.
The increase in energy for subshells of increasing angular momentum in larger atoms is due to electron–electron interaction effects, and it is specifically related to the ability of low angular momentum electrons to penetrate more effectively toward the nucleus, where they are subject to less screening from the charge of intervening electrons. Thus, in atoms of higher atomic number, the ℓ of electrons becomes more and more of a determining factor in their energy, and the principal quantum numbers n of electrons become less and less important in their energy placement. The energy sequence of the first 35 subshells (e.g., 1s, 2s, 2p, 3s, etc.) can be laid out in a table in which each cell represents a subshell, with n and ℓ given by its row and column indices respectively, and the number in the cell gives the subshell's position in the energy sequence.
Atomic core
The atomic core refers to the central part of the atom excluding the valence electrons. The atomic core has a positive electric charge called the core charge and is the effective nuclear charge experienced by an outer shell electron. In other words, core charge is an expression of the attractive force experienced by the valence electrons to the core of an atom which takes into account the shielding effect of core electrons. Core charge can be calculated by taking the number of protons in the nucleus minus the number of core electrons, also called inner shell electrons, and is always a positive value in neutral atoms.
The mass of the core is almost equal to the mass of the atom. The atomic core can be considered spherically symmetric with sufficient accuracy. The core radius is at least three times smaller than the radius of the corresponding atom (if we calculate the radii by the same methods). For heavy atoms, the core radius grows slightly with increasing number of electrons. The radius of the core of the heaviest naturally occurring element - uranium - is comparable to the radius of a lithium atom, although the latter has only three electrons.
Chemical methods cannot separate the electrons of the core from the atom. When ionized by flame or ultraviolet radiation, atomic cores, as a rule, also remain intact.
Core charge is a convenient way of explaining trends in the periodic table. Since the core charge increases as you move across a row of the periodic table, the outer-shell electrons are pulled more and more strongly towards the nucleus and the atomic radius decreases. This can be used to explain a number of periodic trends such as atomic radius, first ionization energy (IE), electronegativity, and oxidizing power.
Core charge can also be calculated as 'atomic number' minus 'all electrons except those in the outer shell'. For example, chlorine (element 17), with electron configuration 1s2 2s2 2p6 3s2 3p5, has 17 protons and 10 inner shell electrons (2 in the first shell, and 8 in the second) so:
Core charge = 17 − 10 = +7
A core charge is the net charge of a nucleus, considering the completed shells of electrons to act as a 'shield.' As a core charge increases, the valence electrons are more strongly attracted to the nucleus, and the atomic radius decreases across the period.
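As a small illustration of the arithmetic, the hypothetical helper below computes the core charge of a neutral main-group atom by subtracting the electron count of the preceding noble gas; transition metals, lanthanides and actinides would need a different counting rule.

```python
# Core charge = atomic number minus the number of inner-shell (core) electrons.
# For main-group elements the core electrons are those of the preceding noble gas.
NOBLE_GAS_ELECTRONS = [0, 2, 10, 18, 36, 54, 86]

def core_charge(atomic_number):
    """Approximate core charge of a neutral main-group atom (illustrative helper)."""
    core_electrons = max(n for n in NOBLE_GAS_ELECTRONS if n < atomic_number)
    return atomic_number - core_electrons

print(core_charge(17))   # chlorine: 17 - 10 = +7, matching the example above
```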
Relativistic effects
For elements with high atomic number Z, relativistic effects can be observed for core electrons. The velocities of core s electrons reach relativistic momentum which leads to contraction of 6s orbitals relative to 5d orbitals. Physical properties affected by these relativistic effects include lowered melting temperature of mercury and the observed golden colour of gold and caesium due to narrowing of energy gap. Gold appears yellow because it absorbs blue light more than it absorbs other visible wavelengths of light and so reflects back yellow-toned light.
Electron transition
A core electron can be removed from its core-level upon absorption of electromagnetic radiation. This will either excite the electron to an empty valence shell or cause it to be emitted as a photoelectron due to the photoelectric effect. The resulting atom will have an empty space in the core electron shell, often referred to as a core-hole. It is in a metastable state and will decay within 10−15 s, releasing the excess energy via X-ray fluorescence (as a characteristic X-ray) or by the Auger effect. Detection of the energy emitted by a valence electron falling into a lower-energy orbital provides useful information on the electronic and local lattice structures of a material. Although most of the time this energy is released in the form of a photon, the energy can also be transferred to another electron, which is ejected from the atom. This second ejected electron is called an Auger electron and this process of electronic transition with indirect radiation emission is known as the Auger effect.
Every atom except hydrogen has core-level electrons with well-defined binding energies. It is therefore possible to select an element to probe by tuning the X-ray energy to the appropriate absorption edge. The spectra of the radiation emitted can be used to determine the elemental composition of a material.
See also
Atomic orbital
Auger effect
Lanthanide contraction
Relativistic quantum chemistry
Shielding effect
Surface core level shift
Valence electron
References
Atomic physics
Atomic, molecular, and optical physics
Quantum chemistry | Core electron | [
"Physics",
"Chemistry"
] | 1,563 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
27,585,573 | https://en.wikipedia.org/wiki/Uncertainty%20exponent | In mathematics, the uncertainty exponent is a method of measuring the fractal dimension of a basin boundary. In a chaotic scattering system, the
invariant set of the system is usually not directly accessible because it is non-attracting and typically of measure zero. Therefore, the only way to infer the presence of members
and to measure the properties of the invariant set is through the basins of attraction. Note that in a scattering system, basins of attraction are not limit cycles and therefore do not constitute members of the invariant set.
Suppose we start with a random trajectory and perturb it by a small amount, ε, in a random direction. If the new trajectory ends up in a different basin from the old one, then it is called epsilon uncertain. If we take a large number of such trajectories, then the fraction of them that are epsilon uncertain is the uncertainty fraction, f(ε), and we expect it to scale with ε as:
f(ε) ~ ε^γ
Thus the uncertainty exponent, γ, is defined as follows:
γ = lim_{ε→0} [ln f(ε) / ln ε]
The uncertainty exponent can be shown to approximate the box-counting dimension D0 of the basin boundary as follows:
D0 = N − γ
where N is the embedding dimension. Please refer to the article on chaotic mixing for an example of numerical computation of the uncertainty dimension compared with that of a box-counting dimension.
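A minimal numerical sketch of this procedure is given below; basin_of is a hypothetical user-supplied function that labels the basin reached from a given initial condition, and the sampling box, perturbation sizes and trial count are arbitrary assumptions. The slope of log f(ε) against log ε estimates the uncertainty exponent.

```python
import numpy as np

def uncertainty_exponent(basin_of, sample_box, epsilons, n_trials=2000, seed=None):
    """Estimate the uncertainty exponent from a basin-labelling function.
    basin_of(x) -> hashable basin label; sample_box = (low, high) coordinate arrays."""
    rng = np.random.default_rng(seed)
    low, high = map(np.asarray, sample_box)
    fractions = []
    for eps in epsilons:
        uncertain = 0
        for _ in range(n_trials):
            x = rng.uniform(low, high)
            d = rng.normal(size=x.shape)
            d *= eps / np.linalg.norm(d)              # random direction of length eps
            if basin_of(x) != basin_of(x + d):
                uncertain += 1
        fractions.append(max(uncertain, 1) / n_trials)  # avoid log(0) for tiny eps
    # slope of log f(eps) versus log eps approximates the uncertainty exponent
    slope, _ = np.polyfit(np.log(epsilons), np.log(fractions), 1)
    return slope
```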
References
C. Grebogi, S. W. McDonald, E. Ott and J. A. Yorke, Final state sensitivity: An obstruction to predictability, Phys. Letters 99A: 415-418 (1983).
Chaos theory
Fractals | Uncertainty exponent | [
"Mathematics"
] | 310 | [
"Mathematical analysis",
"Functions and mappings",
"Mathematical analysis stubs",
"Mathematical objects",
"Fractals",
"Mathematical relations"
] |
27,585,945 | https://en.wikipedia.org/wiki/Trajectory%20%28fluid%20mechanics%29 | In fluid mechanics, meteorology (weather) and oceanography, a trajectory traces the motion of a single point, often called a parcel, in the flow.
Trajectories are useful for tracking atmospheric contaminants, such as smoke plumes, and as constituents to Lagrangian simulations, such as contour advection or semi-Lagrangian schemes.
Suppose we have a time-varying flow field, v(x, t). The motion of a fluid parcel, or trajectory, is given by the following system of ordinary differential equations:
dx/dt = v(x, t)
While the equation looks simple, there are at least three concerns when attempting to solve it numerically. The first is the integration scheme. This is typically a Runge-Kutta, although others can be useful as well, such as a leapfrog. The second is the method of determining the velocity vector, v, at a given position, x, and time, t. Normally, it is not known at all positions and times, therefore some method of interpolation is required. If the velocities are gridded in space and time, then bilinear, trilinear or higher-dimensional linear interpolation is appropriate. Bicubic, tricubic, etc., interpolation is used as well, but is probably not worth the extra computational overhead.
Velocity fields can be determined by measurement, e.g. from weather balloons, from numerical models or especially from a combination of the two, e.g. assimilation models.
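The following sketch shows one possible implementation of these two ingredients in Python: multilinear interpolation of a gridded two-dimensional velocity field (via SciPy's RegularGridInterpolator) and a classical fourth-order Runge-Kutta integrator for dx/dt = v(x, t). Grid layout, units and metric corrections are left out, and the function names are illustrative only.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def make_velocity_sampler(xs, ys, ts, u, v):
    """u and v are gridded velocity components with shape (len(ts), len(ys), len(xs)).
    Returns a function velocity(pos, t) using multilinear interpolation."""
    iu = RegularGridInterpolator((ts, ys, xs), u)
    iv = RegularGridInterpolator((ts, ys, xs), v)
    def velocity(pos, t):
        point = [[t, pos[1], pos[0]]]
        return np.array([iu(point)[0], iv(point)[0]])
    return velocity

def rk4_trajectory(velocity, x0, t0, t1, n_steps):
    """Integrate dx/dt = v(x, t) with the classical fourth-order Runge-Kutta scheme."""
    h = (t1 - t0) / n_steps
    x, t = np.asarray(x0, dtype=float), t0
    path = [x.copy()]
    for _ in range(n_steps):
        k1 = velocity(x, t)
        k2 = velocity(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = velocity(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = velocity(x + h * k3, t + h)
        x = x + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += h
        path.append(x)
    return np.array(path)
```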
The final concern is metric corrections. These are necessary for geophysical fluid flows on a spherical Earth. The differential equations for tracing a two-dimensional, atmospheric trajectory in longitude-latitude coordinates are as follows:
dφ/dt = u / (r cos θ)
dθ/dt = v / r
where φ and θ are, respectively, the longitude and latitude in radians, r is the radius of the Earth, u is the zonal wind and v is the meridional wind.
One problem with this formulation is the polar singularity: notice how the denominator in the first equation goes to zero when the latitude is 90 degrees—plus or minus. One means of overcoming this is to use a locally Cartesian coordinate system close to the poles. Another is to perform the integration on a pair of Azimuthal equidistant projections—one for the N. Hemisphere and one for the S. Hemisphere.
Trajectories can be validated by balloons in the atmosphere and buoys in the ocean.
External links
ctraj: A trajectory integrator written in C++.
References
Fluid dynamics
Continuum mechanics
Meteorological concepts
Numerical analysis
Numerical climate and weather models | Trajectory (fluid mechanics) | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 524 | [
"Continuum mechanics",
"Chemical engineering",
"Computational mathematics",
"Classical mechanics",
"Mathematical relations",
"Numerical analysis",
"Piping",
"Approximations",
"Fluid dynamics"
] |
27,586,152 | https://en.wikipedia.org/wiki/Isoline%20retrieval | Isoline retrieval is a remote sensing inverse method that retrieves one or more isolines of a trace atmospheric constituent or variable. When used to validate another contour, it is the most accurate method possible for the task. When used to retrieve a whole field, it is a general, nonlinear inverse method and a robust estimator.
For validating advected contours
Rationale
Suppose we have, as in contour advection, inferred knowledge of a
single contour or isoline of an atmospheric constituent, q
and we wish to validate this against satellite remote-sensing data.
Since satellite instruments cannot measure the constituent directly,
we need to perform some sort of inversion.
In order to validate the contour, it is not necessary to know,
at any given point, the exact value of the constituent. We only need to
know whether it falls inside or outside, that is, is it greater
than or less than the value of the contour, q0.
This is a classification problem. Let c be the discretized variable, e.g. c = 1 where q < q0 and c = 2 where q ≥ q0. This will be related to the satellite measurement vector, y, by some conditional probability, P(c | y), which we approximate by collecting samples, called training data, of both the measurement vector and the state variable, q.
By generating classification results over the region of interest
and using any contouring algorithm to separate the
two classes, the isoline will have been "retrieved."
The accuracy of a retrieval will be given by integrating
the conditional probability over the area of interest, A:
where c is the retrieved class at each position.
We can maximize this quantity by maximizing the value of the integrand
at each point:
Since this is the definition of maximum likelihood,
a classification algorithm based on maximum likelihood
is the most accurate method possible of validating an advected contour.
A good method for performing maximum likelihood classification
from a set of training data is variable kernel density estimation.
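The sketch below illustrates the classification idea with scikit-learn; it uses a fixed-bandwidth kernel density estimate per class as a simple stand-in for variable kernel density estimation, so it is only an approximation of the method described. The training arrays, the threshold q0 and the bandwidth are assumptions supplied by the user.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def train_isoline_classifier(Y_train, q_train, q0, bandwidth=0.5):
    """Fit one density per class (q < q0 vs. q >= q0) in measurement space.
    Fixed-bandwidth KDE is used here as a simple stand-in for variable KDE."""
    labels = (q_train >= q0).astype(int)
    kdes, priors = [], []
    for c in (0, 1):
        kdes.append(KernelDensity(bandwidth=bandwidth).fit(Y_train[labels == c]))
        priors.append(np.mean(labels == c))

    def classify(Y):
        # log p(y | c) + log P(c) for each class, then normalize to posteriors
        log_post = np.stack(
            [k.score_samples(Y) + np.log(p) for k, p in zip(kdes, priors)], axis=1)
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        conf = 2.0 * post.max(axis=1) - 1.0   # rescaled: 0 = chance, 1 = certain (two classes)
        return post.argmax(axis=1), conf

    return classify
```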
Training data
There are two methods of generating the training data.
The most obvious is empirically, by simply matching measurements of
the variable, q, with collocated
measurements from the satellite instrument. In this case,
no knowledge of the actual physics that produce the measurement
is required and the retrieval algorithm is purely statistical.
The second is with a forward model:
where x is the state vector and
q = xk is a single component.
An advantage of this method is that state vectors need not
reflect actual atmospheric configurations, they need only
take on a state that could reasonably occur in the real atmosphere.
There are also none of the errors inherent in
most collocation procedures,
e.g. because of offset errors in the locations of the paired samples
and differences in the footprint sizes of the two instruments.
Since retrievals will be biased towards more common states,
however, the statistics ought to reflect those in the real world.
Error characterization
The conditional probabilities provide
excellent error characterization, therefore the classification
algorithm ought to return them.
We define the confidence rating, C, by rescaling the conditional probability:
C = (nc P − 1) / (nc − 1)
where P is the conditional probability of the retrieved class and nc is the number of classes (in this case, two).
If C is zero, then the classification is little better than
chance, while if it is one, then it should be perfect.
To transform the confidence rating to a statistical tolerance,
the following line integral can be applied to an isoline retrieval
for which the true isoline is known:
where s is the path, l is the length of the isoline
and C is the retrieved confidence as a function
of position.
While it appears that the integral must be evaluated separately
for each value of the confidence rating, C, in fact it may be
done for all values of C by sorting the confidence ratings of the
results.
The function relates the threshold value of the confidence rating
for which the tolerance is applicable.
That is, it defines a region that contains a fraction of the true
isoline equal to the tolerance.
Example: water vapour from AMSU
The Advanced Microwave Sounding Unit (AMSU) series of satellite instruments
are designed to detect temperature and water vapour. They have a high
horizontal resolution (as little as 15 km) and because they are
mounted on more than one satellite, full global coverage can be
obtained in less than one day.
Training data was generated using the second method from
European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-40
data fed to a fast radiative transfer model called
RTTOV.
This function has been generated from
simulated retrievals and is shown in the figure to the right.
This is then used to set the 90 percent tolerance in the figure
below by shading all the confidence ratings less than 0.8.
Thus we expect the true isoline to fall within the shading
90 percent of the time.
For continuum retrievals
Isoline retrieval is also useful for retrieving a continuum variable
and constitutes a general, nonlinear inverse method.
It has the advantage over both a neural network, as well as iterative
methods such as optimal estimation that invert the forward model
directly, in that there is no possibility of getting stuck in a
local minimum.
There are a number of methods of reconstituting the continuum variable
from the discretized one. Once a sufficient number of contours
have been retrieved, it is straightforward to interpolate between
them. Conditional probabilities make a good proxy for
the continuum value.
Consider the transformation from a continuum to a discrete variable:
Suppose that is given by a Gaussian:
where is the expectation value and
is the standard deviation, then the conditional probability is related to the
continuum variable, q, by the error function:
The figure shows conditional probability versus specific humidity for the example
retrieval discussed above.
As a robust estimator
The location of q0 is found by setting the conditional probabilities
of the two classes to be equal:
In other words, equal amounts of the "zeroth order moment" lie on either side
of q0. This type of formulation is characteristic of a robust estimator.
References
External links
Software for isoline retrieval
Remote sensing
Inverse problems | Isoline retrieval | [
"Mathematics"
] | 1,226 | [
"Inverse problems",
"Applied mathematics"
] |
20,288,037 | https://en.wikipedia.org/wiki/Diagnostic%20electron%20microscopy | The transmission electron microscope (TEM) is used as an important diagnostic tool to screen human tissues at high magnification and at high resolution (the ultrastructural level), often in conjunction with other methods, particularly light microscopy and immunofluorescence techniques. The TEM was first used extensively for this purpose in the 1980s, especially for identifying the markers of cell differentiation to identify tumours, and in renal disease. Immunolabelling techniques are now generally used instead of the TEM for tumour diagnosis but the technique retains a critical role in the diagnosis of renal disease and a range of other conditions. One example is Primary ciliary dyskinesia (PCD), a rare ciliopathy which affects the action of cilia. TEM images of ciliary axonemes are examined using TEM and abnormalities of structure can provide a positive diagnosis in some cases.
Specifically, TEM should be used for diagnostic purposes when it: (1) provides useful (complementary) information in the context of a carefully considered differential diagnosis; (2) provides an ‘improved’ diagnosis that results in better treatment strategies and; (3) is time & cost effective in respect to alternative techniques. For diagnostic purposes solid tissues are prepared for TEM in the same way as other biological tissues, they are fixed in glutaraldehyde and osmium tetroxide then dehydrated and embedded in epoxy resin. The epoxy resin block is trimmed and the target tissue is selected using a light microscope by viewing semithin sections stained with toluidine blue. The block is then retrimmed and the specific area for observation is ultrathin sectioned, preferably using a diamond knife. The ultrathin sections are collected on 3mm copper (mesh) grids and stained with uranyl acetate and lead citrate to make the contents of the tissue electron dense (and thus visible in the electron microscope).
References
External links
The Association of Clinical Electron Microscopists (UK)
Microscopy | Diagnostic electron microscopy | [
"Chemistry"
] | 423 | [
"Microscopy"
] |
20,295,286 | https://en.wikipedia.org/wiki/Gyrovector%20space | A gyrovector space is a mathematical concept proposed by Abraham A. Ungar for studying hyperbolic geometry in analogy to the way vector spaces are used in Euclidean geometry. Ungar introduced the concept of gyrovectors that have addition based on gyrogroups instead of vectors which have addition based on groups. Ungar developed his concept as a tool for the formulation of special relativity as an alternative to the use of Lorentz transformations to represent compositions of velocities (also called boosts – "boosts" are aspects of relative velocities, and should not be conflated with "translations"). This is achieved by introducing "gyro operators"; two 3d velocity vectors are used to construct an operator, which acts on another 3d velocity.
Name
Gyrogroups are weakly associative group-like structures. Ungar proposed the term gyrogroup for what he called a gyrocommutative-gyrogroup, with the term gyrogroup being reserved for the non-gyrocommutative case, in analogy with groups vs. abelian groups. Gyrogroups are a type of Bol loop. Gyrocommutative gyrogroups are equivalent to K-loops although defined differently. The terms Bruck loop and dyadic symset are also in use.
Mathematics of gyrovector spaces
Gyrogroups
Axioms
A gyrogroup (G, ⊕) consists of an underlying set G and a binary operation ⊕ satisfying the following axioms:
In G there is at least one element 0 called a left identity with 0 ⊕ a = a for all a in G.
For each a in G there is an element ⊖a in G called a left inverse of a with (⊖a) ⊕ a = 0.
For any a, b, c in G there exists a unique element gyr[a,b]c in G such that the binary operation obeys the left gyroassociative law: a ⊕ (b ⊕ c) = (a ⊕ b) ⊕ gyr[a,b]c
The map gyr[a,b]: G → G given by c ↦ gyr[a,b]c is an automorphism of the magma (G, ⊕) – that is, gyr[a,b] is a member of Aut(G, ⊕) and the automorphism gyr[a,b] of G is called the gyroautomorphism of G generated by a, b in G. The operation gyr: G × G → Aut(G, ⊕) is called the gyrator of G.
The gyroautomorphism gyr[a,b] has the left loop property gyr[a,b] = gyr[a ⊕ b, b]
The first pair of axioms are like the group axioms. The last pair present the gyrator axioms and the middle axiom links the two pairs.
Since a gyrogroup has inverses and an identity it qualifies as a quasigroup and a loop.
Gyrogroups are a generalization of groups. Every group is an example of a gyrogroup with gyr[a,b] defined as the identity map for all a and b in G.
An example of a finite gyrogroup is given in the references.
Identities
Some identities which hold in any gyrogroup (G, ) are:
gyr[a,b]c = ⊖(a ⊕ b) ⊕ (a ⊕ (b ⊕ c)) (gyration)
a ⊕ (b ⊕ c) = (a ⊕ b) ⊕ gyr[a,b]c (left associativity)
(a ⊕ b) ⊕ c = a ⊕ (b ⊕ gyr[b,a]c) (right associativity)
Furthermore, one may prove the Gyration inversion law, which is the motivation for the definition of gyrocommutativity below:
(gyration inversion law)
Some additional theorems satisfied by the Gyration group of any gyrogroup include:
(identity gyrations)
(gyroautomorphism inversion law)
(gyration even property)
(right loop property)
(left loop property)
More identities given on page 50 of . One particularly useful consequence of the above identities is that Gyrogroups satisfy the left Bol property
Gyrocommutativity
A gyrogroup (G, ⊕) is gyrocommutative if its binary operation obeys the gyrocommutative law: a ⊕ b = gyr[a,b](b ⊕ a). For relativistic velocity addition, this formula showing the role of rotation relating a + b and b + a was published in 1914 by Ludwik Silberstein.
Coaddition
In every gyrogroup, a second operation can be defined called coaddition: a ⊞ b = a ⊕ gyr[a, ⊖b]b for all a, b ∈ G. Coaddition is commutative if the gyrogroup addition is gyrocommutative.
Beltrami–Klein disc/ball model and Einstein addition
Relativistic velocities can be considered as points in the Beltrami–Klein model of hyperbolic geometry and so vector addition in the Beltrami–Klein model can be given by the velocity addition formula. In order for the formula to generalize to vector addition in hyperbolic space of dimensions greater than 3, the formula must be written in a form that avoids use of the cross product in favour of the dot product.
In the general case, the Einstein velocity addition of two velocities u and v is given in coordinate-independent form as:
u ⊕ v = (1 / (1 + (u · v)/c^2)) { u + (1/γu) v + (1/c^2)(γu / (1 + γu)) (u · v) u }
where γu is the gamma factor given by the equation γu = 1 / √(1 − |u|^2/c^2).
Using coordinates this becomes:
where .
Einstein velocity addition is commutative and associative only when u and v are parallel. In fact
and
where "gyr" is the mathematical abstraction of Thomas precession into an operator called Thomas gyration and given by
for all w. Thomas precession has an interpretation in hyperbolic geometry as the negative hyperbolic triangle defect.
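As a numerical illustration, the sketch below implements the standard coordinate-independent formula for Einstein addition (as quoted above) and shows that the operation is non-commutative for non-parallel velocities while reproducing the familiar collinear result; working in units with c = 1 is an arbitrary choice.

```python
import numpy as np

C = 1.0   # speed of light in the chosen units

def einstein_add(u, v):
    """Einstein addition u (+) v of two 3-velocities with |u|, |v| < C."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    gamma_u = 1.0 / np.sqrt(1.0 - np.dot(u, u) / C**2)
    factor = 1.0 / (1.0 + np.dot(u, v) / C**2)
    return factor * (u + v / gamma_u
                     + (gamma_u / (C**2 * (1.0 + gamma_u))) * np.dot(u, v) * u)

u = np.array([0.5, 0.0, 0.0])
v = np.array([0.0, 0.5, 0.0])
print(einstein_add(u, v), einstein_add(v, u))   # differ: the addition is not commutative
print(einstein_add(u, u))                       # collinear case: [0.8, 0, 0]
```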
Lorentz transformation composition
If the 3 × 3 matrix form of the rotation applied to 3-coordinates is given by gyr[u,v], then the 4 × 4 matrix rotation applied to 4-coordinates is given by:
Gyr[u,v] = diag(1, gyr[u,v]), i.e. the block-diagonal 4 × 4 matrix with 1 acting on the time coordinate and the 3 × 3 rotation gyr[u,v] acting on the spatial coordinates.
The composition of two Lorentz boosts B(u) and B(v) of velocities u and v is given by:
B(u)B(v) = B(u ⊕ v)Gyr[u,v] = Gyr[u,v]B(v ⊕ u)
The fact that either B(u ⊕ v) or B(v ⊕ u) can be used, depending on whether the rotation is written before or after the boost, explains the velocity composition paradox.
The composition of two Lorentz transformations L(u,U) and L(v,V) which include rotations U and V is given by:
In the above, a boost can be represented as a 4 × 4 matrix. The boost matrix B(v) means the boost B that uses the components of v, i.e. v1, v2, v3 in the entries of the matrix, or rather the components of v/c in the representation that is used in the section Lorentz transformation#Matrix forms. The matrix entries depend on the components of the 3-velocity v, and that's what the notation B(v) means. It could be argued that the entries depend on the components of the 4-velocity because 3 of the entries of the 4-velocity are the same as the entries of the 3-velocity, but the usefulness of parameterizing the boost by 3-velocity is that the resultant boost you get from the composition of two boosts uses the components of the 3-velocity composition u ⊕ v in the 4 × 4 matrix B(u ⊕ v). But the resultant boost also needs to be multiplied by a rotation matrix because boost composition (i.e. the multiplication of two 4 × 4 matrices) results not in a pure boost but a boost and a rotation, i.e. a 4 × 4 matrix that corresponds to the rotation Gyr[u,v] to get B(u)B(v) = B(u ⊕ v)Gyr[u,v] = Gyr[u,v]B(v ⊕ u).
Einstein gyrovector spaces
Let s be any positive constant, let (V,+,.) be any real inner product space and let Vs={v ∈ V :|v|<s}. An Einstein gyrovector space (Vs, ⊕, ⊗) is an Einstein gyrogroup (Vs, ⊕) with scalar multiplication given by r ⊗ v = s tanh(r tanh−1(|v|/s))v/|v| where r is any real number, v ∈ Vs, v ≠ 0 and r ⊗ 0 = 0 with the notation v ⊗ r = r ⊗ v.
Einstein scalar multiplication does not distribute over Einstein addition except when the gyrovectors are colinear (monodistributivity), but it has other properties of vector spaces: For any positive integer n and for all real numbers r,r1,r2 and v ∈ Vs:
Poincaré disc/ball model and Möbius addition
The Möbius transformation of the open unit disc in the complex plane is given by the polar decomposition z ↦ e^{iθ}(a + z)/(1 + āz),
which can be written as e^{iθ}(a ⊕ z), which defines the Möbius addition a ⊕ z = (a + z)/(1 + āz).
To generalize this to higher dimensions the complex numbers are considered as vectors in the plane , and Möbius addition is rewritten in vector form as:
This gives the vector addition of points in the Poincaré ball model of hyperbolic geometry where radius s=1 for the complex unit disc now becomes any s>0.
Möbius gyrovector spaces
Let s be any positive constant, let (V,+,.) be any real inner product space and let Vs={v ∈ V :|v|<s}. A Möbius gyrovector space (Vs, ⊕, ⊗) is a Möbius gyrogroup (Vs, ⊕) with scalar multiplication given by r ⊗ v = s tanh(r tanh−1(|v|/s))v/|v| where r is any real number, v ∈ Vs, v ≠ 0 and r ⊗ 0 = 0 with the notation v ⊗ r = r ⊗ v.
Möbius scalar multiplication coincides with Einstein scalar multiplication (see section above) and this stems from Möbius addition and Einstein addition coinciding for vectors that are parallel.
Proper velocity space model and proper velocity addition
A proper velocity space model of hyperbolic geometry is given by proper velocities with vector addition given by the proper velocity addition formula:
where is the beta factor given by .
This formula provides a model that uses a whole space compared to other models of hyperbolic geometry which use discs or half-planes.
A proper velocity gyrovector space is a real inner product space V, with the proper velocity gyrogroup addition ⊕ and with scalar multiplication defined by r ⊗ v = s sinh(r sinh−1(|v|/s))v/|v| where r is any real number, v ∈ V, v ≠ 0 and r ⊗ 0 = 0 with the notation v ⊗ r = r ⊗ v.
Isomorphisms
A gyrovector space isomorphism preserves gyrogroup addition and scalar multiplication and the inner product.
The three gyrovector spaces Möbius, Einstein and Proper Velocity are isomorphic.
If M, E and U are Möbius, Einstein and Proper Velocity gyrovector spaces respectively with elements vm, ve and vu then the isomorphisms are given by:
From this table the relation between and is given by the equations:
This is related to the connection between Möbius transformations and Lorentz transformations.
Gyrotrigonometry
Gyrotrigonometry is the use of gyroconcepts to study hyperbolic triangles.
Hyperbolic trigonometry as usually studied uses the hyperbolic functions cosh, sinh etc., and this contrasts with spherical trigonometry which uses the Euclidean trigonometric functions cos, sin, but with spherical triangle identities instead of ordinary plane triangle identities. Gyrotrigonometry takes the approach of using the ordinary trigonometric functions but in conjunction with gyrotriangle identities.
Triangle centers
The study of triangle centers traditionally is concerned with Euclidean geometry, but triangle centers can also be studied in hyperbolic geometry. Using gyrotrigonometry, expressions for trigonometric barycentric coordinates can be calculated that have the same form for both Euclidean and hyperbolic geometry. In order for the expressions to coincide, the expressions must not encapsulate the specification of the angle sum being 180 degrees.
Gyroparallelogram addition
Using gyrotrigonometry, a gyrovector addition can be found which operates according to the gyroparallelogram law. This is the coaddition to the gyrogroup operation. Gyroparallelogram addition is commutative.
The gyroparallelogram law is similar to the parallelogram law in that a gyroparallelogram is a hyperbolic quadrilateral the two gyrodiagonals of which intersect at their gyromidpoints, just as a parallelogram is a Euclidean quadrilateral the two diagonals of which intersect at their midpoints.
Bloch vectors
Bloch vectors which belong to the open unit ball of the Euclidean 3-space, can be studied with Einstein addition or Möbius addition.
Book reviews
A review of one of the earlier gyrovector books says the following:
"Over the years, there have been a handful of attempts to promote the non-Euclidean style for use in problem solving in relativity and electrodynamics, the failure of which to attract any substantial following, compounded by the absence of any positive results must give pause to anyone considering a similar undertaking. Until recently, no one was in a position to offer an improvement on the tools available since 1912. In his new book, Ungar furnishes the crucial missing element from the panoply of the non-Euclidean style: an elegant nonassociative algebraic formalism that fully exploits the structure of Einstein’s law of velocity composition."
Notes and references
Domenico Giulini, Algebraic and geometric structures of Special Relativity, A Chapter in "Special Relativity: Will it Survive the Next 100 Years?", edited by Claus Lämmerzahl, Jürgen Ehlers, Springer, 2006.
Further reading
Maks A. Akivis And Vladislav V. Goldberg (2006), Local Algebras Of A Differential Quasigroup, Bulletin of the AMS, Volume 43, Number 2
Oğuzhan Demirel, Emine Soytürk (2008), The Hyperbolic Carnot Theorem In The Poincare Disc Model Of Hyperbolic Geometry, Novi Sad J. Math. Vol. 38, No. 2, 2008, 33–39
M Ferreira (2008), Spherical continuous wavelet transforms arising from sections of the Lorentz group, Applied and Computational Harmonic Analysis, Elsevier
T Foguel (2000), Comment. Math. Univ. Carolinae, Groups, transversals, and loops
Yaakov Friedman (1994), "Bounded symmetric domains and the JB*-triple structure in physics", Jordan Algebras: Proceedings of the Conference Held in Oberwolfach, Germany, August 9–15, 1992, By Wilhelm Kaup, Kevin McCrimmon, Holger P. Petersson, Published by Walter de Gruyter, ,
Florian Girelli, Etera R. Livine (2004), Special Relativity as a non commutative geometry: Lessons for Deformed Special Relativity, Phys. Rev. D 81, 085041 (2010)
Sejong Kim, Jimmie Lawson (2011), Smooth Bruck Loops, Symmetric Spaces, And Nonassociative Vector Spaces, Demonstratio Mathematica, Vol. XLIV, No 4
Peter Levay (2003), Mixed State Geometric Phase From Thomas Rotations
Azniv Kasparian, Abraham A. Ungar, (2004) Lie Gyrovector Spaces, J. Geom. Symm. Phys
R Olah-Gal, J Sandor (2009), On Trigonometric Proofs of the Steiner–Lehmus Theorem, Forum Geometricorum, 2009 – forumgeom.fau.edu
Gonzalo E. Reyes (2003), On the law of motion in Special Relativity
Krzysztof Rozga (2000), Pacific Journal of Mathematics, Vol. 193, No. 1,On Central Extensions Of Gyrocommutative Gyrogroups
L.V. Sabinin (1995), "On the gyrogroups of Hungar" , RUSS MATH SURV, 1995, 50 (5), 1095–1096.
L.V. Sabinin, L.L. Sabinina, Larissa Sbitneva (1998), Aequationes Mathematicae, On the notion of gyrogroup
L.V. Sabinin, Larissa Sbitneva, I.P. Shestakov (2006), "Non-associative Algebra and Its Applications", CRC Press,,
F. Smarandache, C. Barbu (2010), The Hyperbolic Menelaus Theorem in The Poincaré Disc Model of Hyperbolic Geometry
Roman Ulrich Sexl, Helmuth Kurt Urbantke, (2001), "Relativity, Groups, Particles: Special Relativity and Relativistic Symmetry in Field and Particle Physics", pages 141–142, Springer, ,
External links
Einstein's Special Relativity: The Hyperbolic Geometric Viewpoint
Euclidean geometry
Hyperbolic geometry
Non-associative algebra
Special relativity
Quantum mechanics | Gyrovector space | [
"Physics",
"Mathematics"
] | 3,623 | [
"Non-associative algebra",
"Mathematical structures",
"Theoretical physics",
"Quantum mechanics",
"Special relativity",
"Algebraic structures",
"Theory of relativity"
] |
31,443,350 | https://en.wikipedia.org/wiki/Optical%20depth%20%28astrophysics%29 | Optical depth in astrophysics refers to a specific level of transparency. Optical depth and actual depth, τ and z respectively, can vary widely depending on the absorptivity of the astrophysical environment. Indeed, the function τ(z) is able to show the relationship between these two quantities and can lead to a greater understanding of the structure inside a star.
Optical depth is a measure of the extinction coefficient or absorptivity up to a specific 'depth' of a star's makeup.
The assumption here is that either the extinction coefficient or the column number density is known. These can generally be calculated from other equations if a fair amount of information is known about the chemical makeup of the star. From the definition, it is also clear that large optical depths correspond to higher rate of obscuration. Optical depth can therefore be thought of as the opacity of a medium.
The extinction coefficient can be calculated using the transfer equation. In most astrophysical problems, this is exceptionally difficult to solve since solving the corresponding equations requires the incident radiation as well as the radiation leaving the star. These values are usually theoretical.
In some cases the Beer–Lambert law can be useful in finding .
where n is the refractive index, and λ is the wavelength of the incident light before being absorbed or scattered. The Beer–Lambert law is only appropriate when the absorption occurs at a specific wavelength, λ. For a gray atmosphere, for instance, it is most appropriate to use the Eddington approximation.
Therefore, it is simply a constant that depends on the physical distance from the outside of a star. To find τ at a particular depth z, the above equation may be used, with integration from the surface down to z.
The Eddington approximation and the depth of the photosphere
Since it is difficult to define where the interior of a star ends and the photosphere begins, astrophysicists usually rely on the Eddington approximation to derive the formal definition of the photosphere.
Devised by Sir Arthur Eddington, the approximation takes into account the fact that the extinction produces a "gray" absorption in the atmosphere of a star, that is, it is independent of any specific wavelength and absorbs along the entire electromagnetic spectrum. In that case,
T^4 = (3/4) Teff^4 (τ + 2/3)
where Teff is the effective temperature and τ is the optical depth at the depth in question.
This illustrates not only that the observable temperature and actual temperature at a certain physical depth of a star vary, but that the optical depth plays a crucial role in understanding the stellar structure. It also serves to demonstrate that the depth of the photosphere of a star is highly dependent upon the absorptivity of its environment. The photosphere extends down to a point where τ is about 2/3, which corresponds to a state where a photon would experience, in general, less than one scattering before leaving the star.
The above equation can be rewritten in an equivalent form, which is useful, for example, when one of the two quantities is known but the other is not.
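A minimal numerical sketch of the grey-atmosphere relation quoted above: it evaluates T(τ) for a given effective temperature and confirms that T equals Teff at τ = 2/3. The solar effective temperature used in the example is merely illustrative.

```python
def eddington_temperature(tau, t_eff):
    """Grey-atmosphere Eddington approximation: T(tau)^4 = (3/4) T_eff^4 (tau + 2/3)."""
    return (0.75 * t_eff**4 * (tau + 2.0 / 3.0)) ** 0.25

# At tau = 2/3 the local temperature equals the effective temperature,
# which is one way of defining the base of the photosphere.
print(eddington_temperature(2.0 / 3.0, 5772.0))   # ~5772 K for a Sun-like star
```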
References
Astrophysics
Scattering, absorption and radiative transfer (optics) | Optical depth (astrophysics) | [
"Physics",
"Chemistry",
"Astronomy"
] | 591 | [
"Astronomical sub-disciplines",
"Scattering",
" absorption and radiative transfer (optics)",
"Astrophysics"
] |
35,618,022 | https://en.wikipedia.org/wiki/One-way%20travel | One-way travel or one way is travel paid for with a fare purchased for a trip on an aircraft, a train, a bus, or some other mode of travel, without a return trip. One-way tickets may be purchased for a variety of reasons, such as if one is planning to permanently relocate to the destination, is uncertain of one's return plans, has alternate arrangements for the return, or is planning to return but has no need to pay the fare in advance. For some modes of travel, often for buses, trams or metros, return tickets may not be available at all.
For air trips, normal return tickets are valid for 12 months (365 days), so a passenger who wants to stay at the destination for longer than that is advised by airlines and travel agents to buy a one-way ticket.
Depending on the provider, buying two one-way tickets may or may not be more expensive than buying a round trip ticket. At times, buying two one-way tickets may actually be less expensive, especially if the two tickets are on different airlines.
The hijackers involved in the September 11 attacks purchased one-way tickets, and in aftermath of the attacks, purchasers of one-way airline tickets were in some cases subject to a higher risk of additional security screening.
See also
Repositioning cruise
References
Public transport tickets | One-way travel | [
"Physics"
] | 289 | [
"Physical systems",
"Transport",
"Transport stubs"
] |
35,620,010 | https://en.wikipedia.org/wiki/Vector%20radiative%20transfer | In spectroscopy and radiometry, vector radiative transfer (VRT) is a method of modelling the propagation of polarized electromagnetic radiation in low density media. In contrast to scalar radiative transfer (RT), which models only the first Stokes component, the intensity, VRT models all four components through vector methods.
For a single frequency, ν, the VRT equation for a scattering medium can be written as follows:
dI/ds = −K I + a B + ∫ Z(n, n′) I(n′) dn′
where s is the path, n is the propagation vector, K is the extinction matrix, a is the absorption vector, B is the Planck function and Z is the scattering phase matrix.
All the coefficient matrices, K, a and Z, will vary depending on the density of absorbers/scatterers present and must be calculated from their density-independent quantities; for instance, the attenuation coefficient vector is calculated as the mass absorption coefficient vector times the density of the absorber.
Moreover, it is typical for media to have multiple species causing extinction, absorption and scattering, thus these coefficient matrices must be summed up over all the different species.
Extinction is caused both by simple absorption and by scattering out of the line-of-sight; therefore we calculate the extinction matrix from the combination of the absorption vector and the scattering phase matrix:
where I is the identity matrix.
The four-component radiation vector, I = (I, Q, U, V), where I, Q, U and V are the first through fourth Stokes parameters, respectively, fully describes the polarization state of the electromagnetic radiation.
It is this vector-nature that considerably complicates the equation.
Absorption will be different for each of the four components; moreover, whenever the radiation is scattered, there can be a complex transfer between the different Stokes components—see polarization mixing—thus the scattering phase matrix has 4 × 4 = 16 components. It is, in fact, a rank-two tensor.
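As a simplified illustration, if the scattering source integral is neglected and K, a and B are constant along the path, the VRT equation reduces to a linear system dI/ds = −K I + a B whose solution involves a matrix exponential. The sketch below implements only that special case; it is not a full vector radiative transfer solver, and the numbers in the example are invented.

```python
import numpy as np
from scipy.linalg import expm

def homogeneous_layer(I0, K, a, B, s):
    """Stokes vector after a path length s through a homogeneous, non-scattering
    layer: dI/ds = -K I + a B with constant K (4x4), a (4,) and Planck term B."""
    I0, K, a = np.asarray(I0, float), np.asarray(K, float), np.asarray(a, float)
    attenuation = expm(-K * s)
    source = (np.eye(4) - attenuation) @ np.linalg.solve(K, a * B)
    return attenuation @ I0 + source

# Unpolarized, absorption-only sanity check: K = alpha * identity, a = (alpha, 0, 0, 0)
alpha, B, s = 0.1, 200.0, 5.0
K = alpha * np.eye(4)
a = np.array([alpha, 0.0, 0.0, 0.0])
print(homogeneous_layer([0.0, 0.0, 0.0, 0.0], K, a, B, s))  # I -> B * (1 - exp(-alpha*s))
```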
References
Radiometry
Spectroscopy
Electromagnetic radiation | Vector radiative transfer | [
"Physics",
"Chemistry",
"Engineering"
] | 386 | [
"Physical phenomena",
"Telecommunications engineering",
"Molecular physics",
"Spectrum (physical sciences)",
"Electromagnetic radiation",
"Instrumental analysis",
"Radiation",
"Spectroscopy",
"Radiometry"
] |
35,623,859 | https://en.wikipedia.org/wiki/Chapman%E2%80%93Kolmogorov%20equation | In mathematics, specifically in the theory of Markovian stochastic processes in probability theory, the Chapman–Kolmogorov equation (CKE) is an identity relating the joint probability distributions of different sets of coordinates on a stochastic process. The equation was derived independently by both the British mathematician Sydney Chapman and the Russian mathematician Andrey Kolmogorov. The CKE is prominently used in recent Variational Bayesian methods.
Mathematical description
Suppose that { fi } is an indexed collection of random variables, that is, a stochastic process. Let
p_{i_1, …, i_n}(f_1, …, f_n)
be the joint probability density function of the values of the random variables f1 to fn. Then, the Chapman–Kolmogorov equation is
p_{i_1, …, i_{n−1}}(f_1, …, f_{n−1}) = ∫ p_{i_1, …, i_n}(f_1, …, f_n) df_n,
i.e. a straightforward marginalization over the nuisance variable.
(Note that nothing yet has been assumed about the temporal (or any other) ordering of the random variables—the above equation applies equally to the marginalization of any of them.)
In terms of Markov kernels
If we consider the Markov kernels induced by the transitions of a Markov process, the Chapman-Kolmogorov equation can be seen as giving a way of composing the kernel, generalizing the way stochastic matrices compose. Given a measurable space (S, Σ) and a Markov kernel κ : S × Σ → [0, 1], the two-step transition kernel is given by
κ²(x, A) = ∫_S κ(y, A) κ(x, dy)
for all x ∈ S and all A ∈ Σ.
One can interpret this as a sum, over all intermediate states, of pairs of independent probabilistic transitions.
More generally, given measurable spaces (X, Σ_X), (Y, Σ_Y) and (Z, Σ_Z), and Markov kernels κ from X to Y and λ from Y to Z, we get a composite kernel λ ∘ κ from X to Z by
(λ ∘ κ)(x, A) = ∫_Y λ(y, A) κ(x, dy)
for all x ∈ X and all A ∈ Σ_Z.
Because of this, Markov kernels, like stochastic matrices, form a category.
Application to time-dilated Markov chains
When the stochastic process under consideration is Markovian, the Chapman–Kolmogorov equation is equivalent to an identity on transition densities. In the Markov chain setting, one assumes that i1 < ... < in. Then, because of the Markov property,
p_{i_1, …, i_n}(f_1, …, f_n) = p_{i_1}(f_1) p(f_2 | f_1) ⋯ p(f_n | f_{n−1}),
where each conditional probability p(f_k | f_{k−1}) is the transition probability between successive times. So, the Chapman–Kolmogorov equation takes the form
p(f_3 | f_1) = ∫ p(f_3 | f_2) p(f_2 | f_1) df_2.
Informally, this says that the probability of going from state 1 to state 3 can be found from the probabilities of going from 1 to an intermediate state 2 and then from 2 to 3, by adding up over all the possible intermediate states 2.
When the probability distribution on the state space of a Markov chain is discrete and the Markov chain is homogeneous, the Chapman–Kolmogorov equations can be expressed in terms of (possibly infinite-dimensional) matrix multiplication, thus: P(t + s) = P(t) P(s),
where P(t) is the transition matrix of jump t, i.e., P(t) is the matrix such that entry (i,j) contains the probability of the chain moving from state i to state j in t steps.
As a corollary, it follows that to calculate the transition matrix of jump t, it is sufficient to raise the transition matrix of jump one to the power of t, that is P(t) = P(1)^t.
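A minimal numerical check of this relation (the three-state chain below is an illustrative example, not taken from the article):

```python
import numpy as np

# One-step transition matrix of a 3-state homogeneous Markov chain (rows sum to 1).
P1 = np.array([[0.5, 0.3, 0.2],
               [0.1, 0.6, 0.3],
               [0.2, 0.2, 0.6]])

# Chapman-Kolmogorov in matrix form: P(t + s) = P(t) P(s),
# so the two-step matrix is the one-step matrix squared.
P2 = P1 @ P1
assert np.allclose(P2, np.linalg.matrix_power(P1, 2))

# Entry (i, j) of P2 sums over every intermediate state k:
# P2[i, j] = sum_k P1[i, k] * P1[k, j].
print(P2)
```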
The differential form of the Chapman–Kolmogorov equation is known as a master equation.
See also
Fokker–Planck equation (also known as Kolmogorov forward equation)
Kolmogorov backward equation
Examples of Markov chains
Category of Markov kernels
Citations
Further reading
External links
Equations
Markov processes
Stochastic calculus | Chapman–Kolmogorov equation | [
"Mathematics"
] | 686 | [
"Mathematical objects",
"Equations"
] |
24,313,526 | https://en.wikipedia.org/wiki/Hirox | Hirox (ハイロックス) is a lens company in Tokyo, Japan, that created the first digital microscope in 1985. The company is now known as Hirox Co Ltd. Hirox's main business is digital microscopes, but it still makes lenses for a variety of products, including rangefinders.
Hirox's newest digital microscope systems are currently the RH-2000 and the RH-8800. The RH-2000 connects to a desktop computer by USB 3.0 and USB 2.0. The RH-8800 system is a standalone system with the computer built-in. Both are capable of 3D rotation, high dynamic range, 2D and 3D measurement, 2D and 3D tiling, as well as automated particle counting.
History
Hirox was founded in Tokyo, Japan, in 1978 as a lens and optical system manufacturer. In 1980 the company started to design and sell TV lenses for people with poor eyesight, and to supply products to the Swedish government. It introduced the first digital microscope in 1985, followed by a hand-held video microscope system in 1986, supplied to the Japanese police force. The Hirox Digital Microscope System started distribution in the USA in 1986. The 3-D rotational microscope was introduced in 1992. From 2000 offices and associated companies were set up in Osaka, the USA, China, Nagoya, Korea, Europe, and Asia, with distribution agreements with LECO (USA), Leeds Precision Instruments, and Olympus Corporation.
By 2014 distribution agreements with LECO (USA), Leeds Precision Instruments, and Olympus Corporation had ended.
In 2018, a distribution agreement started with Nikon Metrology.
One of the first major demonstrations of the Hirox technology was the high-resolution digitization of Girl with a Pearl Earring starting in 2018, resulting in a panoramic image of over 1 billion pixels, believed to be the first panoramic image of that size.
Digital microscope magnification
The Hirox Digital Microscope System supports magnifications of up to 7000×. A primary difference between an optical and a digital microscope is the magnification. With an optical microscope the magnification is the lens magnification multiplied by the eyepiece magnification. The magnification for a digital microscope is defined as the ratio of the size of image on the monitor to the subject size. The Hirox Digital Microscope System has a 15" monitor.
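A minimal numerical sketch of that definition (the values below are illustrative, not Hirox specifications):

```python
# Digital-microscope magnification = image size on the monitor / subject size.
image_on_monitor_mm = 300.0   # size of the feature as displayed on the monitor (illustrative)
subject_size_mm = 0.05        # actual size of the feature (50 micrometres, illustrative)
print(image_on_monitor_mm / subject_size_mm)   # 6000.0, i.e. 6000x
```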
Optical and digital microscopes
Since the digital microscope has the image projected directly on to the CCD camera, it is possible to have higher quality recorded images than with an optical microscope. With the optical microscope, the lenses are designed for the optics of the eye. Attaching a CCD camera to an optical microscope will result in an image that has compromises due to the eyepiece.
2D measurement
The Hirox Digital Microscope System can measure distances on-screen. Calibration is needed at each magnification.
3D measurement
3D measurement is achieved with a digital microscope by image stacking. Using a step motor, the system takes images from the lowest focal plane in the field of view to the highest focal plane. It then reconstructs these images, based on contrast, into a 3D model that gives a 3D color image of the sample. Measurements can be made from this 3D model, but their accuracy depends on the step motor and the depth of field of the lens. The step motor is necessary to get accurate height information, and accuracy is higher with a shallower depth of field. The most accurate 3D measurement from a step motor for a digital microscope is 1 micrometre.
The 3D measurement abilities include, but are not limited to, height, length, angle, radius, volume and area. Also, the 3D model can be shown as a texture model, wireframe, or rainbow graph. This data can be exported to be viewed on a PC or in programs such as MATLAB.
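The following is a hedged sketch of the contrast-based image-stacking idea described above, not Hirox's actual algorithm; the local-variance focus measure and all parameter names are illustrative assumptions:

```python
import numpy as np

def depth_from_focus(stack, z_positions, window=5):
    """Estimate a height map from a focal image stack.

    stack: array of shape (n_slices, H, W) holding grayscale images taken at
    increasing focal heights z_positions (length n_slices). Each pixel is
    assigned the height of the slice where its local contrast (variance) peaks.
    """
    n, H, W = stack.shape
    pad = window // 2
    focus = np.empty((n, H, W), dtype=float)
    for k in range(n):
        img = stack[k].astype(float)
        padded = np.pad(img, pad, mode="reflect")
        s1 = np.zeros((H, W))
        s2 = np.zeros((H, W))
        for dy in range(window):          # brute-force local sums over the window
            for dx in range(window):
                patch = padded[dy:dy + H, dx:dx + W]
                s1 += patch
                s2 += patch ** 2
        mean = s1 / window ** 2
        focus[k] = s2 / window ** 2 - mean ** 2   # local variance as a focus measure
    best = focus.argmax(axis=0)                    # sharpest slice index per pixel
    height_map = np.asarray(z_positions)[best]     # height where contrast peaks
    all_in_focus = np.take_along_axis(stack, best[None], axis=0)[0]
    return height_map, all_in_focus

# Example call with a synthetic 8-slice stack of 64x64 images:
stack = np.random.rand(8, 64, 64)
heights, sharp = depth_from_focus(stack, z_positions=np.linspace(0.0, 7.0, 8))
```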
2D and 3D tiling
2D and 3D tiling, also known as stitching or creating a panoramic image, can be done with the more advanced digital microscope systems. In 2D tiling, images are automatically tiled together seamlessly in real time by moving the XY stage. 3D tiling combines the XY-stage movement of 2D tiling with the Z-axis movement of 3D measurement to create a 3D panorama.
See also
Microscope
Digital microscope
High dynamic range
Optical microscope
References
Hirox Europe
Hirox Asia
Hirox Korea
McCrone Group
Seika
Global Spec
Hirox Japanese Wiki
Microscopes
Electronics companies of Japan
Japanese brands
Lens manufacturers
Optics manufacturing companies
Electronics companies established in 1978
Manufacturing companies based in Tokyo | Hirox | [
"Chemistry",
"Technology",
"Engineering"
] | 942 | [
"Microscopes",
"Measuring instruments",
"Microscopy"
] |
24,314,125 | https://en.wikipedia.org/wiki/Romidepsin | Romidepsin, sold under the brand name Istodax, is an anticancer agent used in cutaneous T-cell lymphoma (CTCL) and other peripheral T-cell lymphomas (PTCLs). Romidepsin is a natural product obtained from the bacterium Chromobacterium violaceum, and works by blocking enzymes known as histone deacetylases, thus inducing apoptosis. It is sometimes referred to as depsipeptide, after the class of molecules to which it belongs. Romidepsin is branded and owned by Gloucester Pharmaceuticals, a part of Celgene.
History
Romidepsin was first reported in the scientific literature in 1994, by a team of researchers from Fujisawa Pharmaceutical Company (now Astellas Pharma) in Tsukuba, Japan, who isolated it in a culture of Chromobacterium violaceum from a soil sample obtained in Yamagata Prefecture. It was found to have little to no antibacterial activity, but was potently cytotoxic against several human cancer cell lines, with no effect on normal cells; studies on mice later found it to have antitumor activity in vivo as well.
The first total synthesis of romidepsin was accomplished by Harvard researchers and published in 1996. Its mechanism of action was elucidated in 1998, when researchers from Fujisawa and the University of Tokyo found it to be a histone deacetylase inhibitor with effects similar to those of trichostatin A.
Clinical trials
Phase I studies of romidepsin, initially codenamed FK228 and FR901228, began in 1997. Phase II and phase III trials were conducted for a variety of indications. The most significant results were found in the treatment of cutaneous T-cell lymphoma (CTCL) and other peripheral T-cell lymphomas (PTCLs).
In 2004, romidepsin received Fast Track designation from the FDA for the treatment of cutaneous T-cell lymphoma, and orphan drug status from the FDA and the European Medicines Agency for the same indication.
The FDA approved romidepsin for CTCL in November 2009 and approved romidepsin for other peripheral T-cell lymphomas (PTCLs) in June 2011.
A randomised, phase III trial of romidepsin + CHOP chemotherapy vs CHOP chemotherapy for patients with peripheral T cell lymphoma returned negative results, having no significant impact on progression free survival or overall survival.
Pre-clinical HIV study
In 2014, PLOS Pathogens published a study involving romidepsin in a trial designed to reactivate latent HIV virus in order to deplete the HIV reservoir. Latently infected T-cells were exposed in vitro and ex vivo to romidepsin, leading to an increase in detectable levels of cell-associated HIV RNA. The trial also compared the effect of romidepsin to that of another histone deacetylase inhibitor, vorinostat.
Autism study in animal model
A study in an animal model showed that a brief treatment with low doses of romidepsin could reverse social deficits in a mouse model of autism.
Pharmacodynamics
In a Phase II trial of romidepsin involving patients with CTCL or PTCL, there was evidence of increased histone acetylation in peripheral blood mononuclear cells (PBMCs) extending 4–48 hours. Expression of the ABCB1 gene, a marker of romidepsin-induced gene expression, was also increased in both PBMCs and tumor biopsy samples. Increased gene expression following increased histone acetylation is an expected effect of an HDAC inhibitor. Increased hemoglobin F (another surrogate marker for gene-expression changes resulting from HDAC inhibition) was also detected in blood after romidepsin administration, and persistent histone acetylation was inversely associated with drug clearance and directly associated with patient response to therapy.
Dosage and administration
The approved dosage of romidepsin in both CTCL and PTCL is a four-hour i.v. administration of 14 mg/m2 on days 1, 8, and 15 of a 28-day treatment cycle. This cycle should be repeated as long as the patient continues to benefit and tolerate the therapy. A dose reduction to 10 mg/m2 is possible in some patients who experience high-grade toxicities.
Pharmacokinetics
In trials involving patients with advanced cancers, romidepsin exhibited linear pharmacokinetics across doses ranging from 1.0 to 24.9 mg/m2 when administered intravenously over four hours. Age, race, sex, mild-to-severe renal impairment, and mild-to-moderate hepatic impairment had no effect on romidepsin pharmacokinetics. No accumulation of plasma concentration was observed after repeated dosing.
Mechanism of action
Romidepsin acts as a prodrug with the disulfide bond undergoing reduction within the cell to release a zinc-binding thiol. The thiol binds to a zinc atom in the binding pocket of Zn-dependent histone deacetylase to block its activity. Thus it is an HDAC inhibitor. Many HDAC inhibitors are potential treatments for cancer through the ability to epigenetically restore normal expression of tumor suppressor genes, which may result in cell cycle arrest, differentiation, and apoptosis.
Adverse effects
The use of romidepsin is uniformly associated with adverse effects. In clinical trials, the most common were nausea and vomiting, fatigue, infection, loss of appetite, and blood disorders (including anemia, thrombocytopenia, and leukopenia). It has also been associated with infections, and with metabolic disturbances (such as abnormal electrolyte levels), skin reactions, altered taste perception, and changes in cardiac electrical conduction.
References
Antineoplastic drugs
Drugs developed by Bristol Myers Squibb
Orphan drugs
Prodrugs
Histone deacetylase inhibitors
Depsipeptides
Astellas Pharma | Romidepsin | [
"Chemistry"
] | 1,277 | [
"Chemicals in medicine",
"Prodrugs"
] |
24,315,207 | https://en.wikipedia.org/wiki/Ground%20glass%20hepatocyte | In liver pathology, a ground glass hepatocyte, abbreviated GGH, is a liver parenchymal cell with a flat hazy and uniformly dull appearing cytoplasm on light microscopy. The cytoplasm's granular homogeneous eosinophilic staining is caused by the presence of HBsAg.
The appearance is classically associated with abundant hepatitis B antigen in the endoplasmic reticulum, but may also be drug-induced. In the context of hepatitis B, GGHs are only seen in chronic infections, i.e. they are not seen in acute hepatitis B.
GGHs were first described by Hadziyannis et al.
Types
Several different types of GGHs are recognized:
Type I - morphologically consist of GGHs that are scattered singly and have weak Pre-S2 positive immunostaining.
Type II - morphologically consist of GGHs that are in clusters and have Pre-S2 negative immunostaining.
There is some evidence to suggest that type II GGHs predispose to hepatocellular carcinoma.
See also
Drug reaction
Mallory body
Viral hepatitis
Additional images
References
External links
Image of ground glass hepatocyte - med.ohio-state.edu.
Chronic hepatitis - pathconsultddx.com.
Histopathology | Ground glass hepatocyte | [
"Chemistry"
] | 280 | [
"Histopathology",
"Microscopy"
] |
24,318,403 | https://en.wikipedia.org/wiki/Two-electron%20atom | In atomic physics, a two-electron atom or helium-like ion is a quantum mechanical system consisting of one nucleus with a charge of Ze and just two electrons. This is the first case of many-electron systems where the Pauli exclusion principle plays a central role.
It is an example of a three-body problem.
The first few two-electron atoms are the hydrogen anion H− (Z = 1), the neutral helium atom He (Z = 2), and the helium-like ions Li+ (Z = 3), Be2+ (Z = 4), and so on.
Schrödinger equation
The Schrödinger equation for any two-electron system, such as the neutral helium atom (He, Z = 2), the negative hydrogen ion (H−, Z = 1), or the positive lithium ion (Li+, Z = 3), is given below; for a more rigorous mathematical derivation of the Schrödinger equation, see the references.
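A standard form of this equation (stated here for reference; the notation may differ from the original) is

E\,\psi(\mathbf{r}_1,\mathbf{r}_2) = \left[-\hbar^{2}\!\left(\frac{\nabla_{1}^{2}+\nabla_{2}^{2}}{2\mu}+\frac{\nabla_{1}\!\cdot\!\nabla_{2}}{M}\right)-\frac{e^{2}}{4\pi\varepsilon_{0}}\left(\frac{Z}{r_{1}}+\frac{Z}{r_{2}}-\frac{1}{r_{12}}\right)\right]\psi(\mathbf{r}_1,\mathbf{r}_2)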
where r1 is the position of one electron (r1 = |r1| is its magnitude), r2 is the position of the other electron (r2 = |r2| is the magnitude), and r12 = |r12| is the magnitude of the separation between them, given by r12 = |r1 − r2|,
μ = mM/(m + M) is the two-body reduced mass of an electron (mass m) with respect to the nucleus of mass M,
and Z is the atomic number for the element (not a quantum number).
The cross-term of two Laplacians
is known as the mass polarization term, which arises due to the motion of atomic nuclei. The wavefunction is a function of the two electron's positions:
There is no closed form solution for this equation.
Spectrum
The optical spectrum of the two electron atom has two systems of lines. A para system of single lines, and an ortho system of triplets (closely spaced group of three lines). The energy levels in the atom for the single lines are indicated by 1S0 1P1 1D2 1F3 etc., and for the triplets, some energy levels are split: 3S1 3P2 3P1 3P0 3D3 3D2 3D1 3F4 3F3 3F2. Alkaline earths and mercury also have spectra with similar features, due to the two outer valence electrons.
See also
Hydrogen-like atom
Hydrogen molecular ion
Helium atom
Lithium atom
References
Atoms
Quantum models | Two-electron atom | [
"Physics"
] | 452 | [
"Quantum models",
"Atoms",
"Quantum mechanics",
"Matter"
] |
24,318,538 | https://en.wikipedia.org/wiki/C21H24O10 | {{DISPLAYTITLE:C21H24O10}}
The molecular formula C21H24O10 (molar mass: 436.41 g/mol, exact mass: 436.136947 u) may refer to:
Nothofagin, a C-linked phloretin glucoside
Phlorizin, an O-linked phloretin glucoside
Molecular formulas | C21H24O10 | [
"Physics",
"Chemistry"
] | 88 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,319,449 | https://en.wikipedia.org/wiki/C23H27NO9 | {{DISPLAYTITLE:C23H27NO9}}
The molecular formula C23H27NO9 (molar mass: 461.46 g/mol) may refer to:
Morphine-3-glucuronide
morphine-6-glucuronide
Molecular formulas | C23H27NO9 | [
"Physics",
"Chemistry"
] | 65 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,319,673 | https://en.wikipedia.org/wiki/C11H16N4O4 | {{DISPLAYTITLE:C11H16N4O4}}
The molecular formula C11H16N4O4 (molar mass: 268.27 g/mol, exact mass: 268.1172 u) may refer to:
Acetylcarnosine (NAC)
Dexrazoxane
Pentostatin
Molecular formulas | C11H16N4O4 | [
"Physics",
"Chemistry"
] | 75 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,319,775 | https://en.wikipedia.org/wiki/C9H11FN2O5 | {{DISPLAYTITLE:C9H11FN2O5}}
The molecular formula C9H11FN2O5 may refer to:
Doxifluridine, a second generation nucleoside analog
Floxuridine, an antimetabolite oncology drug
Molecular formulas | C9H11FN2O5 | [
"Physics",
"Chemistry"
] | 65 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,320,320 | https://en.wikipedia.org/wiki/C6H11NO3 | {{DISPLAYTITLE: C6H11NO3}}
The molecular formula C6H11NO3 (molar mass: 145.16 g/mol, exact mass: 145.0739 u) may refer to:
N-Acetyl-γ-aminobutyric acid
Allysine
Methyl aminolevulinate (MAL)
Molecular formulas | C6H11NO3 | [
"Physics",
"Chemistry"
] | 77 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,320,384 | https://en.wikipedia.org/wiki/Bat%20bridge | A bat bridge is a structure of varying construction crossing a new or altered road to aid the navigation of bats following the destruction of a hedgerow, and to cause the bats to cross the roadway at a sufficient height to avoid traffic. Bats are thought to follow the lines of hedgerows and woods, and removing these may confuse the bats.
The theory is that these "bridges" will be seen by the bats' sonar as linear features sufficiently similar to the old hedgerows as to provide an adequate substitute. The English Highways Agency is performing a study of those on the Dobwalls bypass to determine if this assumption is justified.
Usage
France
The first bridge to be installed in France is on the A65 motorway between junctions for Roquefort and Caloy in the Landes department.
Two additional bat bridges were completed in November 2012 near Balbigny, on the A89 motorway.
Germany
Two metal bridges were built in 2013 to protect the Mouse-eared Bat at Biberach an der Riss, Baden-Wuerttemberg. The structures cost £375,000 (400,000 €).
United Kingdom
Bat bridges have been implemented in the United Kingdom by various agencies, including the Highways Agency, with support of the Bat Conservation Trust.
At A38 Dobwalls Bypass, the bridges are more elaborate and sophisticated than the earlier Welsh structures, which consisted of cables strung from poles.
Criticism
The overall cost of bat bridges was criticised by Lord Marlesford in the House of Lords in 2011, for being funded "at a time when we're having to cut a lot of public spending".
A team from the University of Leeds examined the effectiveness of bat bridges, gantries and underpasses. They found that one underpass, placed on a commuting route, was used by 96% of bats, but few bats used the other underpasses and gantries, preferring routes which put them in the path of traffic.
See also
Amphibian and reptile tunnel
Squirrel bridge
Wildlife crossing
References
External links
Roads, Bat Bridges and Gantries: A position statement from Bat Conservation Trust
Bats and humans
Bat conservation
Conservation projects
Bridges | Bat bridge | [
"Engineering"
] | 435 | [
"Structural engineering",
"Bridges"
] |
24,321,121 | https://en.wikipedia.org/wiki/Geopiety | Geopiety is "the belief and worship of powers behind nature or the human environment". It was coined by the American geographer John Kirtland Wright for geographical piety.
The term "geopiety" comes from a combination of the Greek root geo, for earth, and the Latin root "pietas". As Wright explained when coining the term, geopiety is meant to refer to "emotional piety aroused by awareness of terrestrial diversity of the kind of which geography is also a form of awareness".
One example of geopiety can be found in the works of the American preacher Jonathan Edwards.
See also
Religion and geography
References
Bibliography
Human geography | Geopiety | [
"Environmental_science"
] | 136 | [
"Environmental social science stubs",
"Environmental social science",
"Human geography"
] |
24,321,593 | https://en.wikipedia.org/wiki/RTI-352 | RTI-352 is a phenyltropane that is used as a radiolabeling ligand for the DAT.
RTI-352 is a geometric isomer of RTI-55 (β-CIT).
Based on X-ray crystallography, this compound is in a tautomeric equilibrium residing mostly on the side of the boat-shaped conformer.
References
Tropanes
RTI compounds
4-Iodophenyl compounds
Methyl esters | RTI-352 | [
"Chemistry"
] | 98 | [] |
24,322,531 | https://en.wikipedia.org/wiki/MIS%20capacitor | A MIS capacitor is a capacitor formed from a layer of metal, a layer of insulating material and a layer of semiconductor material. It gets its name from the initials of the metal-insulator-semiconductor (MIS) structure. As with the MOS field-effect transistor structure, for historical reasons, this layer is also often referred to as a MOS capacitor, but this specifically refers to an oxide insulator material.
The maximum capacitance, CMIS(max), is calculated analogously to the parallel-plate capacitor, CMIS(max) = εr ε0 A / d, where:
εr is the insulator's relative permittivity
ε0 is the permittivity of the vacuum
A is the area
d is the insulator thickness
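A minimal numerical sketch of this plate-capacitor formula (the geometry below is an illustrative assumption; εr ≈ 3.9 is the usual value for a SiO2 insulator):

```python
EPS_0 = 8.854e-12   # vacuum permittivity, F/m

def mis_capacitance(eps_r, area_m2, thickness_m):
    """Maximum (accumulation) MIS capacitance, treated as a parallel-plate capacitor."""
    return eps_r * EPS_0 * area_m2 / thickness_m

# Example: SiO2 insulator (eps_r ~ 3.9), 100 um x 100 um gate, 10 nm oxide thickness.
c = mis_capacitance(3.9, 100e-6 * 100e-6, 10e-9)
print(f"C_MIS(max) = {c * 1e12:.1f} pF")   # roughly 35 pF
```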
The production method depends on the materials used; it is even possible to use polymers as the insulator or the semiconductor layer (or both). Consider, as an example, an inorganic MOS capacitor based on silicon and silicon dioxide: on the semiconductor substrate, a thin layer of oxide (silicon dioxide) is applied (for example by thermal oxidation or chemical vapour deposition) and then coated with a metal.
This structure, and thus a capacitor of this type, is present in every MIS field-effect transistor, such as the MOSFET. The steady reduction of structure sizes in microelectronics requires ever thinner insulation layers (to keep the same capacitance over a smaller area). However, when the oxide thickness falls below about 5 nm, parasitic leakage currents arise due to the tunneling effect. For this reason, the use of so-called high-κ dielectrics as the insulator material is being investigated.
In MOSFET research and development, MIS capacitors are extensively used as a relatively simple test bench, e.g. to examine the fabrication process and the properties of novel insulator materials, to measure leakage currents and charge-to-breakdown, to obtain the trap density, and to verify different models of carrier transport. Furthermore, the capacitors are often included in tutorial courses, particularly to discuss their charge states (inversion, depletion, accumulation), which also occur in the more complex transistor systems.
References
Capacitors | MIS capacitor | [
"Physics"
] | 466 | [
"Capacitance",
"Capacitors",
"Physical quantities"
] |
32,944,014 | https://en.wikipedia.org/wiki/Glutamine%20amidotransferase | In molecular biology, glutamine amidotransferases (GATase) are enzymes which catalyse the removal of the ammonia group from a glutamine molecule and its subsequent transfer to a specific substrate, thus creating a new carbon-nitrogen group on the substrate. This activity is found in a range of biosynthetic enzymes, including glutamine amidotransferase, anthranilate synthase component II, p-aminobenzoate, and glutamine-dependent carbamoyl-transferase (CPSase). Glutamine amidotransferase (GATase) domains can occur either as single polypeptides, as in glutamine amidotransferases, or as domains in a much larger multifunctional synthase protein, such as CPSase. On the basis of sequence similarities two classes of GATase domains have been identified: class-I (also known as trpG-type) and class-II (also known as purF-type). Class-I GATase domains are defined by a conserved catalytic triad consisting of cysteine, histidine and glutamate. Class-I GATase domains have been found in the following enzymes: the second component of anthranilate synthase and 4-amino-4-deoxychorismate (ADC) synthase; CTP synthase; GMP synthase; glutamine-dependent carbamoyl-phosphate synthase; phosphoribosylformylglycinamidine synthase II; and the histidine amidotransferase hisH.
References
Protein domains | Glutamine amidotransferase | [
"Biology"
] | 352 | [
"Protein domains",
"Protein classification"
] |
32,945,359 | https://en.wikipedia.org/wiki/Wong%E2%80%93Sandler%20mixing%20rule | The Wong–Sandler mixing rule is a thermodynamic mixing rule used for vapor–liquid equilibrium and liquid-liquid equilibrium calculations.
Summary
The first boundary condition is
which constrains the sum of a and b. The second equation is
with the notable limit as (and ) of
The mixing rules become
The cross term still must be specified by a combining rule, either
or
See also
Vapor–liquid equilibrium
Equation of state
References
Engineering thermodynamics
Thermodynamics
Equilibrium chemistry | Wong–Sandler mixing rule | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 102 | [
"Thermodynamics stubs",
"Engineering thermodynamics",
"Equilibrium chemistry",
"Thermodynamics",
"Mechanical engineering",
"Physical chemistry stubs",
"Dynamical systems"
] |
34,427,543 | https://en.wikipedia.org/wiki/Polarization%20mixing | In optics, polarization mixing refers to changes in the relative strengths of the Stokes parameters caused by reflection or scattering—see vector radiative transfer—or by changes in the radial orientation of the detector.
Example: A sloping, specular surface
The definitions of the four Stokes components, in a fixed basis, are:
where Ev and Eh are the electric field components in the vertical and horizontal directions respectively. The definitions of the coordinate bases are arbitrary and depend on the orientation of the instrument. In the case of the Fresnel equations, the bases are defined in terms of the surface, with the horizontal being parallel to the surface and the vertical in a plane perpendicular to the surface.
When the bases are rotated by 45 degrees around the viewing axis, the definition of the third Stokes component becomes equivalent to that of the second, that is, the difference in field intensity between the horizontal and vertical polarizations. Thus, if the instrument is rotated out of plane from the surface upon which it is looking, this will give rise to a signal. The geometry involves three angles: the instrument viewing angle with respect to nadir, the viewing angle with respect to the surface normal, and the angle between the polarisation axes defined by the instrument and those defined by the Fresnel equations, i.e., by the surface.
Ideally, in a polarimetric radiometer, especially a satellite mounted one, the polarisation axes are aligned with the Earth's surface, therefore we define the instrument viewing direction using the following vector:
We define the slope of the surface in terms of the normal vector, , which can be calculated in a number of ways. Using angular slope and azimuth, it becomes:
where is the slope and is the azimuth relative to the instrument view. The effective viewing angle can be
calculated via a dot product between the two vectors:
from which we compute the reflection coefficients, while the angle of the polarisation plane can be calculated with cross products:
where ŷ is the unit vector defining the y-axis.
The angle, , defines the rotation of the polarization axes between those defined for the Fresnel equations versus those of the detector. It can be used to correct for polarization mixing caused by a rotated detector, or to predict what the detector "sees", especially in the third Stokes component. See Stokes parameters#Relation to the polarization ellipse.
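A hedged geometric sketch of the two quantities described above (a dot product for the effective viewing angle, a cross product for the rotation of the polarization plane); the coordinate conventions and function names are my own assumptions, not the exact formulas from the text:

```python
import numpy as np

def view_vector(theta):
    """Instrument line of sight, theta radians off nadir, lying in the x-z plane
    (z points up, so the instrument looks downward)."""
    return np.array([np.sin(theta), 0.0, -np.cos(theta)])

def surface_normal(slope, azimuth):
    """Unit normal of a surface tilted by `slope` radians toward `azimuth`
    (azimuth measured in the horizontal plane relative to the viewing direction)."""
    return np.array([np.sin(slope) * np.cos(azimuth),
                     np.sin(slope) * np.sin(azimuth),
                     np.cos(slope)])

def effective_view_angle(v, n):
    """Viewing angle with respect to the surface normal (a dot product)."""
    return np.arccos(np.clip(np.dot(-v, n), -1.0, 1.0))

def polarization_rotation(v, n):
    """Angle between the instrument's horizontal-polarization axis (the y-axis,
    since v lies in the x-z plane) and the Fresnel horizontal axis, which is
    perpendicular to the plane of incidence spanned by v and n (a cross product)."""
    y_hat = np.array([0.0, 1.0, 0.0])
    h_fresnel = np.cross(n, v)
    h_fresnel = h_fresnel / np.linalg.norm(h_fresnel)
    return np.arccos(np.clip(abs(np.dot(h_fresnel, y_hat)), 0.0, 1.0))

# Example: 45 degree nominal pointing angle, surface sloping 10 degrees at 60 degrees azimuth.
v = view_vector(np.deg2rad(45.0))
n = surface_normal(np.deg2rad(10.0), np.deg2rad(60.0))
print(np.rad2deg(effective_view_angle(v, n)), np.rad2deg(polarization_rotation(v, n)))
```

When the surface tilts only within the viewing plane (azimuth zero), the rotation angle is zero, as expected: the polarization bases stay aligned and no mixing occurs.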
Application: Aircraft radiometry data
The Pol-Ice 2007 campaign included measurements over sea ice and open water from a fully polarimetric, aeroplane-mounted, L-band (1.4 GHz) radiometer. Since the radiometer was fixed to the aircraft, changes in aircraft attitude are equivalent to changes in surface slope. Moreover, emissivity over calm water and to a lesser extent, sea ice, can be effectively modelled using the Fresnel equations. Thus this is an excellent source of data with which to test the ideas discussed in the previous section. In particular, the campaign included both circular and zig-zagging overflights which will produce strong mixing in the Stokes parameters.
Correcting or removing bad data
To test the calibration of the EMIRAD II radiometer used in the Pol-Ice campaign, measurements over open water were compared with model results based on the Fresnel equations. The first plot, which compares the measured data with the model, shows that the vertically polarized channel is too high; more importantly in this context, there are smeared points scattered between the otherwise relatively clean curves of measured vertical and horizontal brightness temperature as a function of viewing angle. These are the result of polarization mixing caused by changes in the attitude of the aircraft, particularly the roll angle. Since there are plenty of data points, rather than correcting the bad data, the authors simply exclude points for which the rotation angle of the polarization plane is too large. The result is shown at far right.
Predicting U
Many of the radiance measurements over sea ice included large signals in the third Stoke component, U. It turns out that these can be predicted to fairly high accuracy simply from the aircraft attitude. We use the following model for emissivity in U:
where eh and ev are the emissivities calculated via the Fresnel or similar equations and eU is the emissivity in U—that is, , where T is physical temperature—for the rotated polarization axes. The plot below shows the dependence on surface-slope and azimuth angle for a refractive index of 2 (a common value for sea ice) and a nominal instrument pointing-angle of 45 degrees. Using the same model, we can simulate the U-component of the Stokes vector for the radiometer.
See also
Polarization scrambling
Stokes parameters
References
Polarization (waves)
Radiometry | Polarization mixing | [
"Physics",
"Engineering"
] | 963 | [
"Telecommunications engineering",
"Polarization (waves)",
"Astrophysics",
"Radiometry"
] |
34,429,007 | https://en.wikipedia.org/wiki/Edmond%20Perrier | Jean Octave Edmond Perrier (9 May 1844 – 31 July 1921) was a French zoologist born in Tulle. He is known for his studies of invertebrates (annelids and echinoderms). He was the brother of zoologist Rémy Perrier (1861–1936).
Career
On advice from Louis Pasteur, he studied sciences at the École normale supérieure, where he took classes in zoology from Henri de Lacaze-Duthiers (1821–1901). Afterwards he was a schoolteacher for three years at the college in Agen. In 1869 he obtained his doctorate in natural sciences, later replacing Lacaze-Duthiers (1872).
In 1876 he attained the chair of Natural History (mollusks, worms and zoophytes) at the National Museum of Natural History, and in 1879 became chairman of the Société zoologique de France. In the early 1880s he participated in a series of sea expeditions, during which he investigated marine life within the benthic zone, subsequently gaining international recognition as a specialist in marine fauna.
In 1892 he became a member of the Académie des sciences, and even though he was not a doctor of medicine, he became a member of the Académie nationale de médecine (1898). From 1900 to 1919 he was director of the museum of natural history, where during the same time period (1903), he succeeded Henri Filhol (1843–1902) as chair of comparative anatomy.
Perrier was deeply interested in the evolutionary theories of Charles Darwin and Jean-Baptiste Lamarck. In 1909 he was a speaker at the inauguration of Lamarck's monument at the National Museum of Natural History. He believed Lamarck to be the true founder of the theory of evolution.
Selected writings
Etudes sur l'organisation des Lombriciens terrestres, 1874 – Studies on the organization of terrestrial earthworms.
La Philosophie zoologique avant Darwin, 1884 – Zoological philosophy prior to Darwin
Les Coralliaires et les îles Madréporiques..., 1887 – Corals and the madreporic islands.
Lamarck et le transformisme actuel, 1893 – Lamarck and present-day transformism.
Expéditions scientifiques du Travailleur et du Talisman pendant les années 1880, 1881, 1882, 1883 (1902) – Scientific expeditions of the Travailleur and the Talisman.
La Femme dans la nature, dans les moeurs, dans la légende, dans la société, 1910 – Woman in nature, in customs, in legend, in society.
La Terre avant l'Histoire. Les Origines de la Vie et de l'Homme, 1920 – The Earth before history: the origins of life and man.
Founder of the Friends of the Natural History Museum
Perrier was the main founder of the Friends of the Natural History Museum society in Paris, with Léon Bourgeois as its first president, in office from 1907 to 1922.
References
Further reading
France savante list of publications
The Philosophy of Zoology Before Darwin (biographical information)
Parts of this article's biography are based on a translation of an equivalent article at the French Wikipedia.
External links
French zoologists
1844 births
1921 deaths
Lamarckism
Lycée Condorcet alumni
Members of the Ligue de la patrie française
People from Tulle
École Normale Supérieure alumni
Members of the French Academy of Sciences
Commanders of the Legion of Honour
Commanders of the Order of Agricultural Merit
National Museum of Natural History (France) people | Edmond Perrier | [
"Biology"
] | 676 | [
"Non-Darwinian evolution",
"Biology theories",
"Obsolete biology theories",
"Lamarckism"
] |
34,431,232 | https://en.wikipedia.org/wiki/International%20Cannabinoid%20Research%20Society | The International Cannabinoid Research Society (ICRS) is a professional society for scientific research in all fields of the cannabinoids, based in North Carolina, US. ICRS is one of the very few global non-profit medical societies or associations related to cannabis and cannabinoids.
History
The ICRS was formally incorporated as a scientific research society in 1992. Prior to that, early ICRS Symposia were organized by various researchers in the field since 1970. Membership in the Society has risen from 50 original members in its first year to 650+ members from all over the world.
Work
The International Cannabinoid Research Society is a:
Non-political, non-religious organization dedicated to scientific research in all fields of the cannabinoids, ranging from biochemical, chemical and physiological studies of the endogenous cannabinoid system to studies of the abuse potential of recreational Cannabis.
In addition to acting as a source for impartial information on Cannabis and the cannabinoids, the main role of the ICRS is to provide an open forum for researchers to meet and discuss their research.
Mission
The mission of the ICRS is to:
Foster cannabinoid research;
promote the exchange of scientific information and perspectives about Cannabis, the cannabinoids and endocannabinoids through the organization of scientific meetings;
serve as a source of reliable information regarding the chemistry, pharmacology, therapeutic uses, toxicology and the behavioral, psychological, and social effects of Cannabis and its constituents, of synthetic and endogenous compounds that interact with cannabinoid receptors, and of any compounds that target other components of the endocannabinoid system.
Journal
Since 2019, the ICRS has partnered with Mary Ann Liebert, Inc. Publishers to produce the academic journal Cannabis and Cannabinoid Research.
ICRS awards the "Raphael Mechoulam Award in Cannabinoid Research" annually.
Annual Symposia
The Society holds an annual meeting, not available to the media, which generally alternates between North America and Europe.
Notes
References
External links
Official Website
ICRS Annual Symposium On The Cannabinoids
1992 in cannabis
Cannabis research
501(c)(3) organizations
Neuroscience organizations
Pharmacological societies
Toxicology organizations
International learned societies | International Cannabinoid Research Society | [
"Chemistry",
"Environmental_science"
] | 466 | [
"Pharmacology",
"Pharmacological societies",
"Toxicology",
"Toxicology organizations"
] |
22,792,377 | https://en.wikipedia.org/wiki/Mole%20map%20%28chemistry%29 | In chemistry, a mole map is a graphical representation of an algorithm that compares molar mass, number of particles per mole, and factors from balanced equations or other formulae. They are often used in undergraduate-level chemistry courses as a tool to teach the basics of stoichiometry and unit conversion.
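As a minimal sketch of the conversions a mole map organizes (the 9.0 g of water is an illustrative example):

```python
AVOGADRO = 6.022e23        # particles per mole
MOLAR_MASS_H2O = 18.02     # g/mol

grams = 9.0
moles = grams / MOLAR_MASS_H2O            # mass  -> moles    (divide by molar mass)
molecules = moles * AVOGADRO              # moles -> particles (multiply by Avogadro's number)
moles_o2_needed = moles / 2               # moles -> moles of another species, via the
                                          # 2:1 ratio in the balanced equation 2 H2 + O2 -> 2 H2O
print(moles, molecules, moles_o2_needed)  # ~0.50 mol, ~3.0e23 molecules, ~0.25 mol O2
```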
Stoichiometry
References
Chemistry education | Mole map (chemistry) | [
"Chemistry",
"Mathematics"
] | 72 | [
"Stoichiometry",
"Chemical reaction engineering",
"Quantity",
"Chemical quantities",
"nan"
] |
22,792,600 | https://en.wikipedia.org/wiki/Energid%20Technologies | Energid Technologies is an engineering firm providing robotics, machine vision, and remote-control software; its core product is called Actin. Its headquarters are in Bedford, Massachusetts, and it has a regional presence in Bedford, Massachusetts; New York, New York; Pittsburgh, Pennsylvania; Tucson, Arizona; Austin, Texas; and Chicago, Illinois. Energid also has an international presence in Bangalore, India. Energid Technologies develops tools for robotic applications in the industrial, agriculture, transportation, defense, and medical industries. Energid's Actin and Selectin products provide advanced robotics technology in the form of extendable software toolkits. Actin is in release 5.5 and provides control and tasking for complex multi-robot systems. Energid has applied its software to control robots for seafloor oil exploration, nuclear reactor inspection, and citrus harvesting.
In May 2019, Energid was named to the RBR50 2019, an annual list of the top 50 robotics companies by Robotics Business Review.
History
Energid Technologies was founded in 2001 by Neil Tardella and James English. It is a Florida corporation headquartered in Bedford, Massachusetts.
References
External links
Official Website
Robotics companies of the United States
Medical robotics
Companies based in Massachusetts
Technology companies established in 2001 | Energid Technologies | [
"Biology"
] | 266 | [
"Medical robotics",
"Medical technology"
] |
22,795,102 | https://en.wikipedia.org/wiki/Rees%20factor%20semigroup | In mathematics, in semigroup theory, a Rees factor semigroup (also called Rees quotient semigroup or just Rees factor), named after David Rees, is a certain semigroup constructed using a semigroup and an ideal of the semigroup.
Let S be a semigroup and I be an ideal of S. Using S and I one can construct a new semigroup by collapsing I into a single element while the elements of S outside of I retain their identity. The new semigroup obtained in this way is called the Rees factor semigroup of S modulo I and is denoted by S/I.
The concept of Rees factor semigroup was introduced by David Rees in 1940.
Formal definition
A subset I of a semigroup S is called an ideal of S if both SI and IS are subsets of I (where SI = { sa : s ∈ S, a ∈ I }, and similarly for IS). Let I be an ideal of a semigroup S. The relation ρ in S defined by
x ρ y ⇔ either x = y or both x and y are in I
is an equivalence relation in S. The equivalence classes under ρ are the singleton sets { x } with x not in I and the set I. Since I is an ideal of S, the relation ρ is a congruence on S. The quotient semigroup S/ρ is, by definition, the Rees factor semigroup of S modulo I. For notational convenience the semigroup S/ρ is also denoted as S/I. The Rees factor semigroup has underlying set (S ∖ I) ∪ { 0 }, where 0 is a new element standing for the collapsed ideal I, and the product (here denoted by ∗) is defined by
s ∗ t = st if s, t and st all lie in S ∖ I, and s ∗ t = 0 otherwise.
The congruence ρ on S as defined above is called the Rees congruence on S modulo I.
Example
Consider the semigroup S = { a, b, c, d, e } with the binary operation defined by the following Cayley table:
Let I = { a, d } which is a subset of S. Since
SI = { aa, ba, ca, da, ea, ad, bd, cd, dd, ed } = { a, d } ⊆ I
IS = { aa, da, ab, db, ac, dc, ad, dd, ae, de } = { a, d } ⊆ I
the set I is an ideal of S. The Rees factor semigroup of S modulo I is the set S/I = { b, c, e, I } with the binary operation defined by the following Cayley table:
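As a generic sketch of the construction itself (the small semigroup used below is an illustrative example of my own, not the { a, b, c, d, e } semigroup above):

```python
from itertools import product

def rees_quotient(elements, op, ideal, zero):
    """Construct the Rees factor semigroup S/I: collapse the ideal I to the single
    absorbing element `zero`; elements outside I keep their identity, and any
    product that lands in I is sent to `zero`.

    elements: list of elements of S
    op: dict mapping (x, y) -> x*y  (the Cayley table of S)
    ideal: a set I assumed to be an ideal of S (SI and IS contained in I)
    """
    keep = [x for x in elements if x not in ideal]
    new_elements = keep + [zero]

    def collapse(x):
        return zero if x in ideal else x

    new_op = {}
    for a, b in product(new_elements, repeat=2):
        if a == zero or b == zero:
            new_op[(a, b)] = zero          # zero is absorbing because I is an ideal
        else:
            new_op[(a, b)] = collapse(op[(a, b)])
    return new_elements, new_op

# Illustration: S = {0, 1, 2, 3} under multiplication mod 4; I = {0, 2} is an ideal.
S = [0, 1, 2, 3]
op = {(a, b): (a * b) % 4 for a in S for b in S}
elems, table = rees_quotient(S, op, ideal={0, 2}, zero="I")
print(elems)             # [1, 3, 'I']
print(table[(3, 3)])     # 1, since 3*3 = 9 = 1 (mod 4) lies outside the ideal
print(table[(1, "I")])   # 'I'
```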
Ideal extension
A semigroup S is called an ideal extension of a semigroup A by a semigroup B if A is an ideal of S and the Rees factor semigroup S/A is isomorphic to B.
Some of the cases that have been studied extensively include: ideal extensions of completely simple semigroups, of a group by a completely 0-simple semigroup, of a commutative semigroup with cancellation by a group with added zero. In general, the problem of describing all ideal extensions of a semigroup is still open.
References
Semigroup theory | Rees factor semigroup | [
"Mathematics"
] | 581 | [
"Semigroup theory",
"Fields of abstract algebra",
"Mathematical structures",
"Algebraic structures"
] |
22,798,149 | https://en.wikipedia.org/wiki/Translational%20Centre%20for%20Regenerative%20Medicine | The Translational Centre for Regenerative Medicine (TRM) was a central scientific institution of the University of Leipzig. It focussed on the development of diagnostic and therapeutic concepts in the field of regenerative medicine and their implementation in a clinical setting.
The TRM Leipzig was established in October 2006 with funds from the Federal Ministry of Education and Research, the Free State of Saxony and the University of Leipzig. It was part of the Life Sciences Network Leipzig and one of the initiators of the Regenerative Medicine Initiative Germany (RMIG).
There is no information available for the TRM after 2020.
Translation
The TRM Leipzig aims to accelerate the translation of laboratory research in therapeutics and diagnostics into the clinic. The centre introduces an organisational process to assure the effective implementation of therapy-oriented gateway research. The foundation of this concept consists of an award system in which three main gates are conceived. Passing the gates precedes the conceptual, preclinical, and clinical working phases each new diagnostic or therapeutic concept has to go through. This enables the TRM Leipzig to ensure an effective translation in conjunction with a comprehensive support and coordination of research projects.
Research
Professor Frank Emmrich led the institute from 2006 to 2015. The scientific work of TRM Leipzig is guided by two boards. The Executive Board provides the strategic direction of research at the TRM. Expertise and scientific support is given by the Internal Advisory Board which includes experts of science, senior scientists, researchers and entrepreneurs. TRM Leipzig promotes application-oriented and interdisciplinary research projects in four areas:
Tissue Engineering und Materials Science (TEMAT)
Cell Therapies for Repair and Replacement (CELLT)
Regulatory Molecules and Delivery Systems (REMOD)
Imaging, Modelling and Monitoring of Regeneration (IMONIT)
The research areas of the TRM Leipzig are supported by three core units:
Quality Management Core Unit (QMCU)
Translational Surgery Core Unit (TSCU)
Computational Microscopy Core Unit (CMCU)
Awards
The TRM Leipzig selects and funds investigator-initiated translational awards that supports young researchers pursuing their own therapy-oriented concepts and extending their innovation potential. Awards can be requested by individual researchers, scientific groups or tandem research teams consisting of clinicians and researchers. Applications for the awards can be submitted anytime.
References
External links
Medical and health organisations based in Saxony
Medical research institutes in Germany
Translational medicine
2006 establishments in Germany | Translational Centre for Regenerative Medicine | [
"Biology"
] | 485 | [
"Translational medicine"
] |
6,740,565 | https://en.wikipedia.org/wiki/Helly%27s%20selection%20theorem | In mathematics, Helly's selection theorem (also called the Helly selection principle) states that a uniformly bounded sequence of monotone real functions admits a convergent subsequence.
In other words, it is a sequential compactness theorem for the space of uniformly bounded monotone functions.
It is named for the Austrian mathematician Eduard Helly.
A more general version of the theorem asserts compactness of the space BVloc of functions locally of bounded total variation that are uniformly bounded at a point.
The theorem has applications throughout mathematical analysis. In probability theory, the result implies compactness of a tight family of measures.
Statement of the theorem
Let (fn)n ∈ N be a sequence of increasing functions mapping a real interval I into the real line R,
and suppose that it is uniformly bounded: there are a,b ∈ R such that a ≤ fn ≤ b for every n ∈ N.
Then the sequence (fn)n ∈ N admits a pointwise convergent subsequence.
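A toy numerical illustration of the statement (an example of my own, not taken from the article): unit step functions with an oscillating jump point are increasing and uniformly bounded; the full sequence does not converge pointwise, but a subsequence does, as the theorem guarantees.

```python
import numpy as np

# f_n(x) = 1 for x >= a_n, 0 otherwise: each f_n is increasing and 0 <= f_n <= 1.
jumps = np.array([0.25 if n % 2 == 0 else 0.75 for n in range(40)])

def f(n, x):
    return np.where(x >= jumps[n], 1.0, 0.0)

x_grid = np.linspace(0.0, 1.0, 11)
full = [f(n, x_grid) for n in range(40)]
even = full[::2]

assert not np.array_equal(full[0], full[1])            # the full sequence oscillates (no pointwise limit)
assert all(np.array_equal(even[0], g) for g in even)   # the even subsequence is constant, hence convergent
print(even[0])
```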
Proof
Step 1. An increasing function f on an interval I has at most countably many points of discontinuity.
Let A = { x ∈ I : f is discontinuous at x }, i.e. the set of discontinuities; then, since f is increasing, any x in A satisfies f(x−) ≤ f(x+), where f(x−) and f(x+) denote the left and right limits of f at x; hence, by discontinuity, f(x−) < f(x+). Since the set of rational numbers is dense in R, the interval (f(x−), f(x+)) ∩ Q is non-empty. Thus the axiom of choice indicates that there is a mapping s from A to Q with s(x) ∈ (f(x−), f(x+)) for every x in A.
It is sufficient to show that s is injective, which implies that A has cardinality no larger than that of Q, which is countable. Suppose x1, x2 ∈ A, x1 < x2; then f(x1+) ≤ f(x2−), so by the construction of s we have s(x1) < f(x1+) ≤ f(x2−) < s(x2). Thus s is injective.
Step 2. Inductive Construction of a subsequence converging at discontinuities and rationals.
Let An be the set of discontinuities of fn and let A = (⋃n An) ∪ (Q ∩ I); then A is countable, and it can be denoted as { an : n ∈ N }.
By the uniform boundedness of (fn)n ∈ N and the Bolzano–Weierstrass theorem, there is a subsequence (f(1)n)n ∈ N such that (f(1)n(a1))n ∈ N converges. Suppose (f(k)n)n ∈ N has been chosen such that (f(k)n(ai))n ∈ N converges for i = 1, ..., k; then by uniform boundedness there is a subsequence (f(k+1)n)n ∈ N of (f(k)n)n ∈ N such that (f(k+1)n(ak+1))n ∈ N converges, and thus (f(k+1)n(ai))n ∈ N converges for i = 1, ..., k+1.
Let gk = f(k)k (the diagonal sequence); then gk is a subsequence of fn that converges pointwise on A.
Step 3. gk converges in I except possibly in an at most countable set.
For each k let hk(x) = sup{ gk(a) : a ∈ A, a ≤ x }; then hk(a) = gk(a) for a ∈ A and hk is increasing. Let h(x) = lim supk hk(x); then h is increasing, since suprema and limits of increasing functions are increasing, and h(a) = limk gk(a) for a ∈ A by Step 2. By Step 1, h has at most countably many discontinuities.
We will show that gk converges at all points of continuity of h. Let x be a point of continuity of h and let q, r ∈ A with q < x < r; then gk(q) ≤ gk(x) ≤ gk(r), hence
h(q) = limk gk(q) ≤ lim infk gk(x) ≤ lim supk gk(x) ≤ limk gk(r) = h(r).
Thus h(q) ≤ lim infk gk(x) ≤ lim supk gk(x) ≤ h(r) for all such q and r. Since h is continuous at x, by taking the limits q ↑ x and r ↓ x along A, we have lim infk gk(x) = lim supk gk(x) = h(x); thus gk(x) converges.
Step 4. Choosing a subsequence of gk that converges pointwise in I
This can be done with a diagonal process similar to Step 2.
With the above steps we have constructed a subsequence of (fn)n ∈ N that converges pointwise in I.
Generalisation to BVloc
Let U be an open subset of the real line and let fn : U → R, n ∈ N, be a sequence of functions. Suppose that
(fn) has uniformly bounded total variation on any W that is compactly embedded in U. That is, for all sets W ⊆ U with compact closure W̄ ⊆ U,
where the derivative is taken in the sense of tempered distributions.
Then, there exists a subsequence fnk, k ∈ N, of fn and a function f : U → R, locally of bounded variation, such that
fnk converges to f pointwise almost everywhere;
and fnk converges to f locally in L1 (see locally integrable function), i.e., for all W compactly embedded in U,
and, for W compactly embedded in U,
Further generalizations
There are many generalizations and refinements of Helly's theorem. The following theorem, for BV functions taking values in Banach spaces, is due to Barbu and Precupanu:
Let X be a reflexive, separable Hilbert space and let E be a closed, convex subset of X. Let Δ : X → [0, +∞) be positive-definite and homogeneous of degree one. Suppose that zn is a uniformly bounded sequence in BV([0, T]; X) with zn(t) ∈ E for all n ∈ N and t ∈ [0, T]. Then there exists a subsequence znk and functions δ, z ∈ BV([0, T]; X) such that
for all t ∈ [0, T],
and, for all t ∈ [0, T],
and, for all 0 ≤ s < t ≤ T,
See also
Bounded variation
Fraňková-Helly selection theorem
Total variation
References
Compactness theorems
Theorems in analysis | Helly's selection theorem | [
"Mathematics"
] | 1,242 | [
"Compactness theorems",
"Theorems in mathematical analysis",
"Mathematical analysis",
"Theorems in topology",
"Mathematical problems",
"Mathematical theorems"
] |
6,745,332 | https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Polymer%20Research | The Max Planck Institute for Polymer Research () is a scientific center in the field of polymer science located in Mainz, Germany. The institute was founded in 1983 by Erhard W. Fischer and Gerhard Wegner. Belonging to the Chemistry, Physics and Technology Section, it is one of over 80 institutes in the Max Planck Society (Max-Planck-Gesellschaft).
Research
Using a basic research approach, its scientists strive to design and characterize innovative applications in the fields of electronics, energy technology, medicine and nanomaterials. The institute specializes in new approaches to synthesis, supramolecular architectures, developing new methods, functional materials and components, structure and dynamics and surfaces and interfaces.
Organization
The beginning of 2014 saw a total of 511 people working at the institute, of whom 134 were supported by third-party funding and 79 were privately sponsored. The workforce was made up of 123 scientists, 150 doctoral and diploma students, 41 visiting scientists and 164 technical, administrative and auxiliary staff, altogether from approximately 40 different countries.
Departments
The MPIP consists of six departments each managed by a director:
Molecular Electronics, Paul Blom
Molecular Spectroscopy, Mischa Bonn
Physics of Interfaces, Hans-Jürgen Butt
Polymer Theory, Kurt Kremer
Physical Chemistry of Polymers, Katharina Landfester
Synthesis of Macromolecules, Tanja Weil
Emeriti and former directors
Emeriti
Hans-Wolfgang Spiess, Director, Polymer Spectroscopy (1985-2012)
Gerhard Wegner, Director, Solid State Chemistry (1983-2008)
Former Directors
Erhard W. Fischer, Director, Polymer Physics (1983-1997)
Wolfgang Knoll, Director, Material Science (1993-2008)
Graduate programs
The International Max Planck Research School for Polymer Materials is a graduate program offering a doctorate degree. The school is run in cooperation with the Johannes Gutenberg University of Mainz.
The Max Planck Graduate Center (MPGC) is a virtual department across the MPIP, the Max Planck Institute for Chemistry, and four faculties of the Johannes Gutenberg University of Mainz, created for interdisciplinary projects. It offers a PhD program in these research topics to candidates from all over the world.
References
External links
Portrait of the Max Planck Institute for Polymer Research at the Homepage of the Max Planck Society
Homepage of the Max Planck Graduate Center (MPGC)
Polymer Research
Chemical research institutes
Materials science institutes | Max Planck Institute for Polymer Research | [
"Chemistry",
"Materials_science"
] | 477 | [
"Materials science organizations",
"Chemical research institutes",
"Chemistry organization stubs",
"Materials science institutes"
] |
6,746,775 | https://en.wikipedia.org/wiki/Compact%20embedding | In mathematics, the notion of being compactly embedded expresses the idea that one set or space is "well contained" inside another. There are versions of this concept appropriate to general topology and functional analysis.
Definition (topological spaces)
Let (X, T) be a topological space, and let V and W be subsets of X. We say that V is compactly embedded in W, and write V ⊂⊂ W, if
V ⊆ Cl(V) ⊆ Int(W), where Cl(V) denotes the closure of V, and Int(W) denotes the interior of W; and
Cl(V) is compact.
Definition (normed spaces)
Let X and Y be two normed vector spaces with norms ||•||X and ||•||Y respectively, and suppose that X ⊆ Y. We say that X is compactly embedded in Y, and write X ⊂⊂ Y or X ⋐ Y, if
X is continuously embedded in Y; i.e., there is a constant C such that ||x||Y ≤ C||x||X for all x in X; and
The embedding of X into Y is a compact operator: any bounded set in X is totally bounded in Y, i.e. every sequence in such a bounded set has a subsequence that is Cauchy in the norm ||•||Y.
If Y is a Banach space, an equivalent definition is that the embedding operator (the identity) i : X → Y is a compact operator.
When applied to functional analysis, this version of compact embedding is usually used with Banach spaces of functions. Several of the Sobolev embedding theorems are compact embedding theorems. When an embedding is not compact, it may possess a related, but weaker, property of cocompactness.
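A standard illustration, added here for context (it is the Rellich–Kondrachov theorem, not a statement taken from this article): for a bounded Lipschitz domain \Omega \subset \mathbb{R}^n, the Sobolev space H^{1}(\Omega) is compactly embedded in L^{2}(\Omega),

H^{1}(\Omega) \subset\subset L^{2}(\Omega),

so every bounded sequence in H^{1}(\Omega) has a subsequence converging in the L^{2} norm. On an unbounded domain such as \mathbb{R}^n itself, the same embedding is continuous but not compact: translates of a fixed bump function are bounded in H^{1} yet admit no L^{2}-convergent subsequence.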
References
.
.
.
Compactness (mathematics)
Functional analysis
General topology | Compact embedding | [
"Mathematics"
] | 401 | [
"General topology",
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Topology",
"Mathematical relations"
] |
6,746,813 | https://en.wikipedia.org/wiki/Spinhenge%40Home | Spinhenge@home was a volunteer computing project on the BOINC platform, which performs extensive numerical simulations concerning the physical characteristics of magnetic molecules. It is a project of the Bielefeld University of Applied Sciences, Department of Electrical Engineering and Computer Science, in cooperation with the University of Osnabrück and Ames Laboratory.
The project began beta testing on September 1, 2006 and used the Metropolis Monte Carlo algorithm to calculate and simulate spin dynamics in nanoscale molecular magnets.
On September 28, 2011, a hiatus was announced while the project team reviewed results and upgraded hardware. As of July 10, 2022 the hiatus continues and it is likely that the project has been closed down permanently.
See also
Spintronics
BOINC
List of volunteer computing projects
References
External links
Project Website
More Information about Spinhenge@Home
Project Statistics at BOINCStats
Science in society
Free science software
Volunteer computing projects | Spinhenge@Home | [
"Physics"
] | 184 | [
"Quantum mechanics",
"Quantum physics stubs"
] |
6,747,488 | https://en.wikipedia.org/wiki/Binding%20constant | The binding constant, or affinity constant/association constant, is a special case of the equilibrium constant K, and is the inverse of the dissociation constant. It is associated with the binding and unbinding reaction of receptor (R) and ligand (L) molecules, which is formalized as:
R + L ⇌ RL
The reaction is characterized by the on-rate constant kon and the off-rate constant koff, which have units of M−1 s−1 and s−1, respectively. In equilibrium, the forward binding transition R + L → RL should be balanced by the backward unbinding transition RL → R + L. That is,
kon [R] [L] = koff [RL],
where [R], [L] and [RL] represent the concentration of unbound free receptors, the concentration of unbound free ligand and the concentration of receptor-ligand complexes. The binding constant Ka is defined by
Ka = kon / koff = [RL] / ([R] [L]).
An often considered quantity is the dissociation constant Kd ≡ 1/Ka, which has the unit of concentration, despite the fact that, strictly speaking, all association constants are unitless values. The inclusion of units arises from the simplification that such constants are calculated solely from concentrations, which is not the case. Once chemical activity is factored into the correct form of the equation, a dimensionless value is obtained. For the binding of receptor and ligand molecules in solution, the molar Gibbs free energy ΔG, or the binding affinity, is related to the dissociation constant Kd via
ΔG = RT ln(Kd / c°),
in which R is the ideal gas constant, T is the temperature and c° = 1 mol/L is the standard reference concentration.
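A minimal numerical sketch of these relations (the rate constants below are illustrative assumptions, not values from the text):

```python
import math

R = 8.314          # J/(mol*K), ideal gas constant
T = 298.15         # K
C0 = 1.0           # standard reference concentration, mol/L

k_on = 1.0e5       # M^-1 s^-1  (illustrative)
k_off = 1.0e-2     # s^-1       (illustrative)

K_d = k_off / k_on                  # dissociation constant, M
K_a = 1.0 / K_d                     # association (binding) constant, M^-1
dG = R * T * math.log(K_d / C0)     # molar Gibbs free energy of binding, J/mol

print(f"Kd = {K_d:.1e} M, Ka = {K_a:.1e} M^-1, dG = {dG / 1000:.1f} kJ/mol")
# Kd = 1.0e-07 M gives dG of roughly -40 kJ/mol (favourable binding)
```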
See also
Binding coefficient
References
Equilibrium chemistry | Binding constant | [
"Chemistry"
] | 334 | [
"Equilibrium chemistry",
"Analytical chemistry stubs"
] |