| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
28,205,882 | https://en.wikipedia.org/wiki/Electrical%20resistivity%20measurement%20of%20concrete | Concrete electrical resistivity can be obtained by applying a current into the concrete and measuring the response voltage. There are different methods for measuring concrete resistivity.
Laboratory methods
Two electrodes
Concrete electrical resistance can be measured by applying a current using two electrodes attached to the ends of a uniform cross-section specimen. Electrical resistivity is obtained from the equation:
ρ = R A / ℓ
where
R is the electrical resistance of the specimen, the ratio of voltage to current (measured in ohms, Ω)
ℓ is the length of the piece of material (measured in metres, m)
A is the cross-sectional area of the specimen (measured in square metres, m²).
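A minimal numerical sketch of this relation follows; the specimen dimensions and measured resistance are hypothetical values chosen only to illustrate the arithmetic, not data from the article.

```python
def resistivity_two_electrode(resistance_ohm: float, area_m2: float, length_m: float) -> float:
    """Resistivity (ohm-metres) of a uniform specimen: rho = R * A / L."""
    return resistance_ohm * area_m2 / length_m

# Hypothetical specimen: 0.1 m x 0.1 m cross-section, 0.2 m long, 1200 ohms end to end.
rho = resistivity_two_electrode(resistance_ohm=1200.0, area_m2=0.1 * 0.1, length_m=0.2)
print(f"resistivity = {rho:.1f} ohm*m")  # 60.0 ohm*m
```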
This method suffers from the disadvantage that contact resistance can significantly add to the measured resistance causing inaccuracy. Conductive gels are used to improve the contact of the electrodes with the sample.
Four electrodes
The problem of contact resistance can be overcome by using four electrodes. The two end electrodes are used to inject current as before, but the voltage is measured between the two inner electrodes. The effective length of the sample being measured is the distance between the two inner electrodes. Modern voltage meters draw very little current so there is no significant current through the voltage electrodes and hence no voltage drop across the contact resistances.
Transformer method
In this method a transformer is used to measure resistivity without any direct contact with the specimen. The transformer consists of a primary coil which energises the circuit with an AC voltage and a secondary which is formed by a toroid of the concrete sample. The current in the sample is detected by a current coil wound around a section of the toroid (a current transformer). This method is good for measuring the setting properties of concrete, its hydration and strength. Wet concrete has a low resistivity, which progressively increases as the cement sets.
On-site methods
Four probes
On-site electrical resistivity of concrete is commonly measured using four probes in a Wenner array. The reason for using four probes is the same as in the laboratory method - to overcome contact errors. In this method four equally spaced probes are applied to the specimen in a line. The two outer probes induce the current to the specimen and the two inner electrodes measure the resulting potential drop. The probes are all applied to the same surface of the specimen and the method is consequently suitable for measuring the resistivity of bulk concrete in situ.
The resistivity is given by:
ρ = 2πaV / I
where
V is the voltage measured between the inner two probes (measured in volts, V)
I is the current injected through the two outer probes (measured in amperes, A)
a is the equal spacing between adjacent probes (measured in metres, m).
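A short sketch of the same calculation for the Wenner array follows; the probe spacing, injected current and measured voltage are hypothetical readings used only to show how the formula is applied.

```python
import math

def wenner_resistivity(voltage_v: float, current_a: float, spacing_m: float) -> float:
    """Apparent resistivity (ohm-metres) from a Wenner array: rho = 2 * pi * a * V / I."""
    return 2.0 * math.pi * spacing_m * voltage_v / current_a

# Hypothetical on-site reading: 50 mm probe spacing, 10 mA injected, 0.5 V measured.
rho = wenner_resistivity(voltage_v=0.5, current_a=0.010, spacing_m=0.050)
print(f"apparent resistivity = {rho:.1f} ohm*m")  # about 15.7 ohm*m
```

The four-probe logic is the same as in the laboratory method: the voltage electrodes draw negligible current, so contact resistance does not bias the reading.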
Rebar
The presence of rebars disturbs electrical resistivity measurement as they conduct current much better than the surrounding concrete. This is particularly the case when the concrete cover depth is less than 30 mm. In order to minimize the effect, placing the electrodes above a rebar is usually avoided, or if unavoidable, then they are placed perpendicular to the rebar.
However, measurement of the resistance between a rebar and a single probe at the concrete surface is sometimes done in conjunction with electrochemical measurements. Resistivity strongly affects corrosion rates and electrochemical measurements require an electrical connection to the rebar. It is convenient to make a resistance measurement with the same connection.
The resistivity is given by:
ρ = 2DR
where
R is the measured resistance,
D is the diameter of the surface probe.
Relation to corrosion
Corrosion is an electro-chemical process. The rate of flow of the ions between the anode and cathode areas, and therefore the rate at which corrosion can occur, is affected by the resistivity of the concrete. To measure the electrical resistivity of the concrete a current is applied to the two outer probes and the potential difference is measured between the two inner probes.
Empirical tests have arrived at the following threshold values which can be used to determine the likelihood of corrosion.
These values have to be used cautiously, as there is strong evidence that chloride diffusion and surface electrical resistivity are dependent on other factors such as mix composition and age. The electrical resistivity of the concrete cover layer decreases due to:
Increasing concrete water content
Increasing concrete porosity
Increasing temperature
Increasing chloride content
Decreasing carbonation depth
When the electrical resistivity of the concrete is low, the rate of corrosion increases.
When the electrical resistivity is high, e.g. in case of dry and carbonated concrete, the rate of corrosion decreases.
See also
Concrete degradation
Cover meter
Impedance spectroscopy
Induced polarization (IP)
References
Further reading
AASHTO Designation: T 358-151, Surface Resistivity Indication of Concrete's Ability to Resist Chloride Ion Penetration
Nondestructive testing
Concrete
Impedance measurements | Electrical resistivity measurement of concrete | [
"Physics",
"Materials_science",
"Engineering"
] | 975 | [
"Structural engineering",
"Physical quantities",
"Nondestructive testing",
"Materials testing",
"Concrete",
"Impedance measurements",
"Electrical resistance and conductance"
] |
28,206,366 | https://en.wikipedia.org/wiki/Signalizer%20functor | In mathematics, in the area of abstract algebra, a signalizer functor is a mapping from a potential finite subgroup to the centralizers of the nontrivial elements of an abelian group. The signalizer functor theorem provides the conditions under which the source of such a functor is in fact a subgroup.
The signalizer functor was first defined by Daniel Gorenstein. George Glauberman proved the Solvable Signalizer Functor Theorem for solvable groups and Patrick McBride proved it for general groups. Results concerning signalizer functors play a major role in the classification of finite simple groups.
Definition
Let A be a non-cyclic elementary abelian p-subgroup of the finite group G. An A-signalizer functor on G (or simply a signalizer functor when A and G are clear) is a mapping θ from the set of nonidentity elements of A to the set of A-invariant p′-subgroups of G satisfying the following properties:
For every nonidentity element a ∈ A, the group θ(a) is contained in the centralizer C_G(a).
For every pair of nonidentity elements a, b ∈ A, we have θ(a) ∩ C_G(b) ⊆ θ(b).
The second condition above is called the balance condition. If the subgroups θ(a) are all solvable, then the signalizer functor θ itself is said to be solvable.
Solvable signalizer functor theorem
Certain additional, relatively mild, assumptions allow one to prove that the subgroup W of G generated by the subgroups θ(a) is in fact a p′-subgroup.
The Solvable Signalizer Functor Theorem proved by Glauberman states that this will be the case if θ is solvable and A has at least three generators. The theorem also states that under these assumptions, W itself will be solvable.
Several weaker versions of the theorem were proven before Glauberman's proof was published. Gorenstein proved it under the stronger assumption that A had rank at least 5. David Goldschmidt proved it under the assumption that A had rank at least 4 or was a 2-group of rank at least 3. Helmut Bender gave a simple proof for 2-groups using the ZJ theorem, and Paul Flavell gave a proof in a similar spirit for all primes. Glauberman gave the definitive result for solvable signalizer functors. Using the classification of finite simple groups, McBride showed that W is a p′-group without the assumption that θ is solvable.
Completeness
The terminology of completeness is often used in discussions of signalizer functors. Let θ be an A-signalizer functor on G as above, and consider the set И of all A-invariant p′-subgroups H of G satisfying the following condition:
C_H(a) ⊆ θ(a) for all nonidentity a ∈ A.
For example, the subgroups θ(a) belong to И as a result of the balance condition of θ.
The signalizer functor θ is said to be complete if И has a unique maximal element when ordered by containment. In this case, the unique maximal element can be shown to coincide with W above, and is called the completion of θ. If θ is complete, and its completion turns out to be solvable, then θ is said to be solvably complete.
Thus, the Solvable Signalizer Functor Theorem can be rephrased by saying that if A has at least three generators, then every solvable A-signalizer functor on G is solvably complete.
Examples of signalizer functors
The easiest way to obtain a signalizer functor is to start with an A-invariant p′-subgroup M of G and define θ(a) = C_M(a) for all nonidentity a ∈ A. However, it is generally more practical to begin with θ and use it to construct the A-invariant p′-group.
The simplest signalizer functor used in practice is θ(a) = O_{p′}(C_G(a)).
As defined above, θ(a) is indeed an A-invariant p′-subgroup of G, because A is abelian. However, some additional assumptions are needed to show that this θ satisfies the balance condition. One sufficient criterion is that for each nonidentity a ∈ A the group C_G(a) is solvable (or p-solvable or even p-constrained).
Verifying the balance condition for this θ under this assumption can be done using Thompson's P×Q-lemma.
Coprime action
To obtain a better understanding of signalizer functors, it is essential to know the following general fact about finite groups:
Let A be an abelian non-cyclic group acting on the finite group X. Assume that the orders of A and X are relatively prime.
Then X = ⟨C_X(a) : a ∈ A, a ≠ 1⟩.
This fact can be proven using the Schur–Zassenhaus theorem to show that for each prime q dividing the order of X, the group X has an A-invariant Sylow q-subgroup. This reduces to the case where X is a q-group. Then an argument by induction on the order of X reduces the statement further to the case where X is elementary abelian with A acting irreducibly. This forces the group A/C_A(X) to be cyclic, and the result follows.
This fact is used in both the proof and applications of the Solvable Signalizer Functor Theorem.
For example, one useful result is that it implies that if θ is complete, then its completion is the group W defined above.
Normal completion
Another result that follows from the fact above is that the completion of a signalizer functor is often normal in G:
Let θ be a complete A-signalizer functor on G, and let W be its completion.
Let B be a noncyclic subgroup of A. Then the coprime action fact together with the balance condition imply that W = ⟨θ(b) : b ∈ B, b ≠ 1⟩.
To see this, observe that because W is B-invariant, W = ⟨C_W(b) : b ∈ B, b ≠ 1⟩ ⊆ ⟨θ(b) : b ∈ B, b ≠ 1⟩ ⊆ W.
The equality above uses the coprime action fact, and the containment uses the balance condition. Now, it is often the case that θ satisfies an "equivariance" condition, namely that θ(a)^g = θ(a^g) for each g ∈ G and nonidentity a ∈ A, where the superscript denotes conjugation by g. For example, the mapping a ↦ O_{p′}(C_G(a)), the example of a signalizer functor given above, satisfies this condition.
If θ satisfies equivariance, then the normalizer of B will normalize W. It follows that if G is generated by the normalizers of the noncyclic subgroups of A, then the completion of θ (i.e., W) is normal in G.
References
Signalizer functor | Signalizer functor | [
"Mathematics"
] | 1,193 | [
"Mathematical structures",
"Algebraic structures",
"Finite groups"
] |
35,171,726 | https://en.wikipedia.org/wiki/Field%20effect%20%28semiconductor%29 | In physics, the field effect refers to the modulation of the electrical conductivity of a material by the application of an external electric field.
In a metal, the electron density that responds to applied fields is so large that an external electric field can penetrate only a very short distance into the material. However, in a semiconductor the lower density of electrons (and possibly holes) that can respond to an applied field is sufficiently small that the field can penetrate quite far into the material. This field penetration alters the conductivity of the semiconductor near its surface, and is called the field effect. The field effect underlies the operation of the Schottky diode and of field-effect transistors, notably the MOSFET, the JFET and the MESFET.
Surface conductance and band bending
The change in surface conductance occurs because the applied field alters the energy levels available to electrons to considerable depths from the surface, and that in turn changes the occupancy of the energy levels in the surface region. A typical treatment of such effects is based upon a band-bending diagram showing the positions in energy of the band edges as a function of depth into the material.
An example band-bending diagram is shown in the figure. For convenience, energy is expressed in eV and voltage is expressed in volts, avoiding the need for a factor q for the elementary charge. In the figure, a two-layer structure is shown, consisting of an insulator as left-hand layer and a semiconductor as right-hand layer. An example of such a structure is the MOS capacitor, a two-terminal structure made up of a metal gate contact, a semiconductor body (such as silicon) with a body contact, and an intervening insulating layer (such as silicon dioxide, hence the designation O). The left panels show the lowest energy level of the conduction band and the highest energy level of the valence band. These levels are "bent" by the application of a positive voltage V. By convention, the energy of electrons is shown, so a positive voltage penetrating the surface lowers the conduction edge. A dashed line depicts the occupancy situation: below this Fermi level the states are more likely to be occupied. As the conduction band moves closer to the Fermi level, more electrons occupy the conduction band near the insulator.
Bulk region
The example in the figure shows the Fermi level in the bulk material beyond the range of the applied field as lying close to the valence band edge. This position for the occupancy level is arranged by introducing impurities into the semiconductor. In this case the impurities are so-called acceptors, which soak up electrons from the valence band, becoming negatively charged, immobile ions embedded in the semiconductor material. The removed electrons are drawn from the valence band levels, leaving vacancies or holes in the valence band. Charge neutrality prevails in the field-free region because a negative acceptor ion creates a positive deficiency in the host material: a hole is the absence of an electron and behaves like a positive charge. Where no field is present, neutrality is achieved because the negative acceptor ions exactly balance the positive holes.
Surface region
Next the band bending is described. A positive charge is placed on the left face of the insulator (for example using a metal "gate" electrode). In the insulator there are no charges so the electric field is constant, leading to a linear change of voltage in this material. As a result, the insulator conduction and valence bands are straight lines in the figure, separated by the large insulator energy gap.
In the semiconductor at the smaller voltage shown in the top panel, the positive charge placed on the left face of the insulator lowers the energy of the valence band edge. Consequently, these states are fully occupied out to a so-called depletion depth where the bulk occupancy reestablishes itself because the field cannot penetrate further. Because the valence band levels near the surface are fully occupied due to the lowering of these levels, only the immobile negative acceptor-ion charges are present near the surface, which becomes an electrically insulating region without holes (the depletion layer). Thus, field penetration is arrested when the exposed negative acceptor ion charge balances the positive charge placed on the insulator surface: the depletion layer adjusts its depth enough to make the net negative acceptor ion charge balance the positive charge on the gate.
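A rough charge-balance estimate illustrates how the depletion depth adjusts to the gate charge. This is a sketch under the full-depletion approximation with a uniform acceptor density; the doping level and gate charge per unit area below are hypothetical example values, not figures from the article.

```python
# Depletion approximation: the positive gate charge per unit area is balanced by
# exposed acceptor ions, Q_gate = q * N_A * W, so the depletion width is
# W = Q_gate / (q * N_A).

ELEMENTARY_CHARGE = 1.602e-19   # C
N_A = 1e23                      # acceptor density, m^-3 (= 1e17 cm^-3), assumed
q_gate = 1.6e-3                 # positive gate charge per unit area, C/m^2, assumed

depletion_width_m = q_gate / (ELEMENTARY_CHARGE * N_A)
print(f"depletion width ~ {depletion_width_m * 1e9:.0f} nm")  # ~100 nm
```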
Inversion
The conduction band edge also is lowered, increasing electron occupancy of these states, but at low voltages this increase is not significant. At larger applied voltages, however, as in the bottom panel, the conduction band edge is lowered sufficiently to cause significant population of these levels in a narrow surface layer, called an inversion layer because the electrons are opposite in polarity to the holes originally populating the semiconductor. This onset of electron charge in the inversion layer becomes very significant at an applied threshold voltage, and once the applied voltage exceeds this value charge neutrality is achieved almost entirely by addition of electrons to the inversion layer rather than by an increase in acceptor ion charge by expansion of the depletion layer. Further field penetration into the semiconductor is arrested at this point, as the electron density increases exponentially with band-bending beyond the threshold voltage, effectively pinning the depletion layer depth at its value at threshold voltages.
References
Semiconductors
Semiconductor technology
Semiconductor structures
Electronic band structures
Physical phenomena
MOSFETs | Field effect (semiconductor) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,130 | [
"Electron",
"Physical phenomena",
"Matter",
"Physical quantities",
"Microtechnology",
"Semiconductors",
"Electronic band structures",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Semiconductor technology",
"Solid state engineering",
"Electrical resistance and conductan... |
35,173,380 | https://en.wikipedia.org/wiki/Quasi-geostrophic%20equations | While geostrophic motion refers to the wind that would result from an exact balance between the Coriolis force and horizontal pressure-gradient forces, quasi-geostrophic (QG) motion refers to flows where the Coriolis force and pressure gradient forces are almost in balance, but with inertia also having an effect.
Origin
Atmospheric and oceanographic flows take place over horizontal length scales which are very large compared to their vertical length scale, and so they can be described using the shallow water equations. The Rossby number is a dimensionless number which characterises the strength of inertia compared to the strength of the Coriolis force. The quasi-geostrophic equations are approximations to the shallow water equations in the limit of small Rossby number, so that inertial forces are an order of magnitude smaller than the Coriolis and pressure forces. If the Rossby number is equal to zero then we recover geostrophic flow.
The quasi-geostrophic equations were first formulated by Jule Charney.
Derivation of the single-layer QG equations
In Cartesian coordinates, the components of the geostrophic wind are
(1a)
(1b)
where Φ is the geopotential.
The geostrophic vorticity
can therefore be expressed in terms of the geopotential as
(2)
Equation (2) can be used to find the geostrophic vorticity from a known field Φ. Alternatively, it can also be used to determine Φ from a known distribution of the vorticity by inverting the Laplacian operator.
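For reference, with Φ the geopotential and f₀ the constant reference Coriolis parameter, the geostrophic wind components and geostrophic vorticity are conventionally written as below; this is the standard textbook form and is assumed here to correspond to equations (1a), (1b) and (2) above.

```latex
u_g = -\frac{1}{f_0}\frac{\partial \Phi}{\partial y}, \qquad
v_g = \frac{1}{f_0}\frac{\partial \Phi}{\partial x}, \qquad
\zeta_g = \frac{\partial v_g}{\partial x} - \frac{\partial u_g}{\partial y}
        = \frac{1}{f_0}\nabla^2 \Phi .
```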
The quasi-geostrophic vorticity equation can be obtained from the and components of the quasi-geostrophic momentum equation which can then be derived from the horizontal momentum equation
(3)
The material derivative in (3) is defined by
(4)
where is the pressure change following the motion.
The horizontal velocity can be separated into a geostrophic and an ageostrophic part
(5)
Two important assumptions of the quasi-geostrophic approximation are
1. , or, more precisely .
2. the beta-plane approximation with
The second assumption justifies letting the Coriolis parameter have a constant value in the geostrophic approximation and approximating its variation in the Coriolis force term by . However, because the acceleration following the motion, which is given in (1) as the difference between the Coriolis force and the pressure gradient force, depends on the departure of the actual wind from the geostrophic wind, it is not permissible to simply replace the velocity by its geostrophic velocity in the Coriolis term. The acceleration in (3) can then be rewritten as
(6)
The approximate horizontal momentum equation thus has the form
(7)
Expressing equation (7) in terms of its components,
(8a)
(8b)
Taking , and noting that geostrophic wind is nondivergent (i.e., ), the vorticity equation is
(9)
Because depends only on (i.e., ) and that the divergence of the ageostrophic wind can be written in terms of based on the continuity equation
equation (9) can therefore be written as
(10)
The same identity using the geopotential
Defining the geopotential tendency and noting that partial differentiation may be reversed, equation (10) can be rewritten in terms of as
(11)
The right-hand side of equation (11) depends on variables and . An analogous equation dependent on these two variables can be derived from the thermodynamic energy equation
(12)
where and is the potential temperature corresponding to the basic state temperature. In the midtroposphere, ≈ .
Multiplying (12) by and differentiating with respect to and using the definition of yields
(13)
If for simplicity were set to 0, eliminating in equations (11) and (13) yields
(14)
Equation (14) is often referred to as the geopotential tendency equation. It relates the local geopotential tendency (term A) to the vorticity advection distribution (term B) and thickness advection (term C).
The same identity using the quasi-geostrophic potential vorticity
Using the chain rule of differentiation, term C can be written as
(15)
But based on the thermal wind relation,
.
In other words, is perpendicular to and the second term in equation (15) disappears.
The first term can be combined with term B in equation (14) which, upon division by can be expressed in the form of a conservation equation
(16)
where is the quasi-geostrophic potential vorticity defined by
(17)
The three terms of equation (17) are, from left to right, the geostrophic relative vorticity, the planetary vorticity and the stretching vorticity.
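For orientation, a commonly used pressure-coordinate form of the quasi-geostrophic potential vorticity (following standard texts, with σ the static stability parameter) is given below; it is assumed to correspond to equation (17), with the three terms in the same order as described above.

```latex
q = \frac{1}{f_0}\nabla^2 \Phi \;+\; f \;+\;
    \frac{\partial}{\partial p}\!\left(\frac{f_0}{\sigma}\,\frac{\partial \Phi}{\partial p}\right) .
```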
Implications
As an air parcel moves about in the atmosphere, its relative, planetary and stretching vorticities may change but equation (17) shows that the sum of the three must be conserved following the geostrophic motion.
Equation (17) can be used to find from a known field . Alternatively, it can also be used to predict the evolution of the geopotential field given an initial distribution of and suitable boundary conditions by using an inversion process.
More importantly, the quasi-geostrophic system reduces the five-variable primitive equations to a one-equation system where all variables such as , and can be obtained from or height .
Also, because and are both defined in terms of , the vorticity equation can be used to diagnose vertical motion provided that the fields of both and are known.
References
Fluid mechanics
Synoptic meteorology and weather | Quasi-geostrophic equations | [
"Engineering"
] | 1,172 | [
"Civil engineering",
"Fluid mechanics"
] |
35,173,610 | https://en.wikipedia.org/wiki/Torsion%20conjecture | In algebraic geometry and number theory, the torsion conjecture or uniform boundedness conjecture for torsion points for abelian varieties states that the order of the torsion group of an abelian variety over a number field can be bounded in terms of the dimension of the variety and the number field. A stronger version of the conjecture is that the torsion is bounded in terms of the dimension of the variety and the degree of the number field. The torsion conjecture has been completely resolved in the case of elliptic curves.
Elliptic curves
From 1906 to 1911, Beppo Levi published a series of papers investigating the possible finite orders of points on elliptic curves over the rationals. He showed that there are infinitely many elliptic curves over the rationals with the following torsion groups:
Cn with 1 ≤ n ≤ 10, where Cn denotes the cyclic group of order n;
C12;
C2n × C2 with 1 ≤ n ≤ 4, where × denotes the direct sum.
At the 1908 International Mathematical Congress in Rome, Levi conjectured that this is a complete list of torsion groups for elliptic curves over the rationals. The torsion conjecture for elliptic curves over the rationals was later reformulated independently by others, and became commonly known as Ogg's conjecture, after Andrew Ogg.
Andrew Ogg drew the connection between the torsion conjecture for elliptic curves over the rationals and the theory of classical modular curves. In the early 1970s, the work of Gérard Ligozat, Daniel Kubert, Barry Mazur, and John Tate showed that several small values of n do not occur as orders of torsion points on elliptic curves over the rationals. Barry Mazur proved the full torsion conjecture for elliptic curves over the rationals. His techniques were subsequently generalized to give uniform boundedness first for quadratic fields and then for number fields of degree at most 8. Finally, Loïc Merel proved the conjecture for elliptic curves over any number field: for a number field of degree d and an elliptic curve defined over it, there is a bound on the order of the torsion group depending only on d. Furthermore, a point of prime order has its order bounded in terms of d alone.
An effective bound for the size of the torsion group in terms of the degree of the number field was given by Parent, who proved that the order of a point of prime power order is bounded explicitly in terms of the degree.
Combining this with the structure result behind the Mordell–Weil theorem, namely that the torsion subgroup is a product of two cyclic groups, yields a coarse but effective bound on the order of the whole torsion group.
In private notes from 1994, Joseph Oesterlé gave a slightly better bound for points of prime order, which turns out to be useful for computations over fields of small degree, but alone is not enough to yield an effective bound in general. A published version of Oesterlé's result was later provided.
For number fields of small degree more refined results are known. A complete list of possible torsion groups has been given for elliptic curves over the rationals (see above) and for quadratic and cubic number fields. In degree 1 and 2 all groups that arise occur infinitely often. The same holds for cubic fields except for the group C21, which occurs only in a single elliptic curve over a particular cubic field. For quartic and quintic number fields the torsion groups that arise infinitely often have been determined. The following table gives, for each small degree, the set of all prime numbers that actually arise as the order of a torsion point.
The next table gives the set of all prime numbers that arise infinitely often as the order of a torsion point.
Barry Mazur gave a survey talk on the torsion conjecture on the occasion of the establishment of the Ogg Professorship at the Institute for Advanced Study in October 2022.
See also
Bombieri–Lang conjecture
Uniform boundedness conjecture for preperiodic points
Uniform boundedness conjecture for rational points
References
Bibliography
Abelian varieties
Conjectures
Diophantine geometry
Theorems in number theory
Theorems in algebraic geometry | Torsion conjecture | [
"Mathematics"
] | 791 | [
"Theorems in algebraic geometry",
"Unsolved problems in mathematics",
"Mathematical theorems",
"Conjectures",
"Theorems in number theory",
"Theorems in geometry",
"Mathematical problems",
"Number theory"
] |
35,174,008 | https://en.wikipedia.org/wiki/Hypnogram | A hypnogram is a form of polysomnography; it is a graph that represents the stages of sleep as a function of time. It was developed as an easy way to present the recordings of the brain wave activity from an electroencephalogram (EEG) during a period of sleep. It allows the different stages of sleep: rapid eye movement sleep (REM) and non-rapid eye movement sleep (NREM) to be identified during the sleep cycle. NREM sleep can be further classified into NREM stage 1, 2 and 3. The previously considered 4th stage of NREM sleep has been included within stage 3; this stage is also called slow wave sleep (SWS) and is the deepest stage of sleep.
Method
Hypnograms are usually obtained by visually scoring the recordings from electroencephalogram (EEGs), electrooculography (EOGs) and electromyography (EMGs).
The output from these three sources is recorded simultaneously on a graph by a monitor or computer as a hypnogram. Certain frequencies displayed by EEGs, EOGs and EMGs are characteristic and determine what stage of sleep or wake the subject is in. There is a protocol defined by the American Academy of Sleep Medicine (AASM) for sleep scoring, whereby the sleep or wake state is recorded in 30-second epochs. Prior to this the Rechtschaffen and Kales (RK) rules were used to classify sleep stages.
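Since scoring assigns one stage label per 30-second epoch, a hypnogram can be represented as a simple sequence of labels. The sketch below uses a short, invented label sequence (not real data) to show how time in each stage and stage percentages can be tallied from such a record.

```python
from collections import Counter

EPOCH_SECONDS = 30  # AASM scoring assigns one stage label per 30-second epoch

# Invented label sequence standing in for a scored recording (not real data).
hypnogram = ["W", "N1", "N2", "N2", "N3", "N3", "N3", "N2", "REM", "REM", "N2", "W"]

counts = Counter(hypnogram)
total = len(hypnogram)
for stage in ["W", "N1", "N2", "N3", "REM"]:
    n = counts.get(stage, 0)
    print(f"{stage:>3}: {n * EPOCH_SECONDS / 60:4.1f} min  ({100 * n / total:5.1f} %)")
```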
Output
Normal sleep
Cycles of REM and non-REM stages make up sleep. A normal healthy adult requires 7–9 hours of sleep per night. The number of hours of sleep is variable, however the proportion of sleep spent in a particular stage remains mostly consistent; healthy adults normally spend 20–25% of their sleep in REM sleep. During rest following a sleep-deprived state, there is a period of rebound sleep which has longer and deeper episodes of SWS to make up for the lack of sleep.
On a hypnogram, a sleep cycle is usually around 90 minutes and there are four to six cycles of REM/NREM stages that occur during a major period of sleep. Most SWS occurs in the first one or two cycles; this is the deepest period of sleep. The second half of the sleeping period contains most REM sleep and little or no SWS and may contain brief periods of wakefulness which can be recorded but are not usually perceived. The stage that occurs before waking is normally REM sleep.
Hypnograms for healthy persons vary slightly according to age, emotional state, and environmental factors.
Disrupted sleep
Sleep architecture can be evaluated using hypnograms, demonstrating irregular sleeping patterns associated with sleep disorders. Disruptions or irregularities to the normal sleep cycle or sleep stage transitions can be detected; for example a hypnogram can show that in obstructive sleep apnea (OSA) the stability of transition between REM and NREM stages is disrupted.
The effects of certain medications on sleep architecture can be visualised on a hypnogram. For example, the anticonvulsant Phenytoin (PHT) can be seen to disrupt sleep by increasing the duration of NREM stage 1 and decreasing the duration of SWS; whereas the drug Gabapentin is seen to revive sleep by increasing the duration of SWS.
Analysis
The main use of a hypnogram is as a qualitative method to visualise the time period of each stage of sleep, as well as the number of transitions between stages. Hypnograms are rarely used to provide quantitative data, however it has been suggested that statistical evaluation can be carried out using multistate survival analysis and log-linear models to provide numerical significance.
Limitations
The restriction of measuring sleep in short 30-second epochs limits the ability to record events shorter than 30 seconds; hence, the macrostructure of sleep can be evaluated while the microstructure cannot. The sleep process appears smoothed out in hypnogram results, unlike the way it occurs naturally. Also, some specific features of sleep such as sleep spindles and K-complexes may not be captured in the hypnogram; this is particularly true for automated sleep scoring.
The method of obtaining the data used in a hypnogram is restricted to the input from an EEG, EOG or EMG. The interval of recording may include features from several stages, in which case it is recorded as the stage whose features occupy the recording for the longest duration. For this reason, the stage of sleep may be misrepresented on the hypnogram.
Research directions
Suggestions to improve the automated output of hypnograms to provide more reliable and accurate results include increasing the measures of sleep, for example by additionally measuring sleep with an electrocardiogram (ECG). Another advancement involves combining hypnograms with color density spectral arrays to improve the quality of sleep analysis.
References
External links
American Academy of Sleep Medicine
National Institutes of Health
Sleep medicine | Hypnogram | [
"Biology"
] | 1,031 | [
"Behavior",
"Sleep",
"Sleep medicine"
] |
35,175,454 | https://en.wikipedia.org/wiki/Bogomolny%20equations | In mathematics, and especially gauge theory, the Bogomolny equation for magnetic monopoles is the equation
where is the curvature of a connection on a principal -bundle over a 3-manifold , is a section of the corresponding adjoint bundle, is the exterior covariant derivative induced by on the adjoint bundle, and is the Hodge star operator on . These equations are named after E. B. Bogomolny and were studied extensively by Michael Atiyah and Nigel Hitchin.
The equations are a dimensional reduction of the self-dual Yang–Mills equations from four dimensions to three dimensions, and correspond to global minima of the appropriate action. If M is closed, there are only trivial (i.e. flat) solutions.
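The dimensional reduction can be sketched as follows; this is the standard observation, assuming all fields are independent of the fourth coordinate and writing the fourth component of the connection as the Higgs field, A₄ = Φ.

```latex
% Self-duality F = \star_4 F in four dimensions, with \partial_4 = 0 and A_4 = \Phi,
% becomes the three-dimensional Bogomolny equation:
F_{ij} = \epsilon_{ijk}\, D_k \Phi \quad (i,j,k = 1,2,3)
\qquad\Longleftrightarrow\qquad
F_A = \star\, d_A \Phi .
```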
See also
Monopole moduli space
Ginzburg–Landau theory
Seiberg–Witten theory
Bogomol'nyi–Prasad–Sommerfield bound
References
External links
Bogomolny equation on nLab
Differential geometry
Magnetic monopoles | Bogomolny equations | [
"Physics",
"Astronomy",
"Mathematics"
] | 205 | [
"Astronomical hypotheses",
"Applied mathematics",
"Unsolved problems in physics",
"Applied mathematics stubs",
"Magnetic monopoles"
] |
35,176,246 | https://en.wikipedia.org/wiki/Landauer%20formula | In mesoscopic physics, the Landauer formula—named after Rolf Landauer, who first suggested its prototype in 1957—is a formula relating the electrical resistance of a quantum conductor to the scattering properties of the conductor. It is the equivalent of Ohm's law for mesoscopic circuits with spatial dimensions in the order of or smaller than the phase coherence length of charge carriers (electrons and holes). In metals, the phase coherence length is of the order of the micrometre for temperatures less than .
Description
In the simplest case where the system only has two terminals, and the scattering matrix of the conductor does not depend on energy, the formula reads
G = G₀ Σₙ Tₙ, where G is the electrical conductance, G₀ is the conductance quantum, Tₙ are the transmission eigenvalues of the channels, and the sum runs over all transport channels in the conductor. This formula is very simple and physically sensible: the conductance of a nanoscale conductor is given by the sum of all the transmission possibilities that an electron has when propagating with an energy equal to the chemical potential.
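A minimal numerical sketch of the two-terminal formula follows, assuming spin degeneracy so that the conductance quantum is G₀ = 2e²/h; the transmission eigenvalues used are hypothetical.

```python
# Two-terminal Landauer conductance: G = G0 * sum(T_n), with G0 = 2 e^2 / h
# (spin-degenerate conductance quantum). Transmission eigenvalues are hypothetical.

E_CHARGE = 1.602176634e-19   # C
H_PLANCK = 6.62607015e-34    # J*s
G0 = 2 * E_CHARGE**2 / H_PLANCK   # ~7.75e-5 S

transmissions = [1.0, 0.9, 0.35]  # hypothetical channel transmissions
G = G0 * sum(transmissions)
print(f"G0 = {G0:.3e} S, G = {G:.3e} S, R = {1.0 / G:.0f} ohm")
```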
Multiple terminals
A generalization of the Landauer formula for multiple terminals is the Landauer–Büttiker formula, proposed by . If terminal has voltage (that is, its chemical potential is and differs from terminal chemical potential), and is the sum of transmission probabilities from terminal to terminal (note that may or may not equal depending on the presence of a magnetic field), the net current leaving terminal is
In the case of a system with two terminals, the contact resistivity symmetry yields
and the generalized formula can be rewritten as
which leads us to
which implies that the scattering matrix of a system with two terminals is always symmetrical, even with the presence of a magnetic field. The reversal of the magnetic field will only change the propagation direction of the edge states, without affecting the transmission probability.
Example
As an example, in a three contact system, the net current leaving the contact 1 can be written as
This counts the carriers leaving contact 1 at potential V1, from which we subtract the carriers coming from contacts 2 and 3, at potentials V2 and V3 respectively, going into contact 1.
In the absence of an applied magnetic field, the generalized equation would be the result of applying Kirchhoff's law to a system of conductance . However, in the presence of a magnetic field, the time reversal symmetry would be broken and therefore, .
In the presence of more than two terminals in the system, the two-terminal symmetry is broken. In the example given earlier, . This is due to the fact that the terminals "recycle" the incoming electrons, for which the phase coherence is lost when another electron is emitted towards terminal 1. However, since the carriers are moving through edge states, one can see that even with the presence of a third terminal. This is due to the fact that under magnetic field inversion, the edge states simply change their propagation orientation. This is especially true if terminal 3 is taken as a perfect potential probe.
See also
Ballistic conduction
Meir–Wingreen formula
Shot noise
Near-field radiative heat transfer
References
Mesoscopic physics
Quantum mechanics
Nanoelectronics | Landauer formula | [
"Physics",
"Materials_science"
] | 645 | [
"Theoretical physics",
"Quantum mechanics",
"Condensed matter physics",
"Nanoelectronics",
"Nanotechnology",
"Mesoscopic physics"
] |
35,182,609 | https://en.wikipedia.org/wiki/Measurements%20of%20neutrino%20speed | Measurements of neutrino speed have been conducted as tests of special relativity and for the determination of the mass of neutrinos. Astronomical searches investigate whether light and neutrinos emitted simultaneously from a distant source are arriving simultaneously on Earth. Terrestrial searches include time of flight measurements using synchronized clocks, and direct comparison of neutrino speed with the speed of other particles.
Since it is established that neutrinos possess mass, the speed of neutrinos of kinetic energies ranging from MeV to GeV should be slightly lower than the speed of light in accordance with special relativity. Existing measurements provided upper limits for deviations from light speed of approximately 10⁻⁹, or a few parts per billion. Within the margin of error this is consistent with no deviation at all.
Overview
It was assumed for a long time in the framework of the standard model of particle physics that neutrinos are massless. Thus, they should travel at exactly the speed of light, according to special relativity. However, since the discovery of neutrino oscillations, it is assumed that they possess some small amount of mass. Thus, they should travel slightly slower than light, otherwise their relativistic energy would become infinitely large. This energy is given by the formula:
E = mc² / √(1 − v²/c²),
with v being the neutrino speed and c the speed of light. The neutrino mass m is currently estimated as being below 2 eV/c², and is possibly even lower than 0.2 eV/c². According to the latter mass value and the formula for relativistic energy, relative speed differences between light and neutrinos are smaller at high energies, and should arise as indicated in the figure on the right.
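For a relativistic neutrino the energy formula gives 1 − v/c ≈ (mc²/E)²/2 when E ≫ mc². The sketch below evaluates this for a few energies, assuming a neutrino mass of 0.2 eV/c² (the lower of the mass values quoted above); it only illustrates how tiny the expected deviation is.

```python
# For E >> m*c^2, the energy formula gives 1 - v/c ~ (m c^2 / E)^2 / 2.
M_NU_EV = 0.2  # assumed neutrino mass in eV/c^2

for label, energy_ev in [("10 MeV", 10e6), ("17 GeV", 17e9), ("100 GeV", 100e9)]:
    deviation = 0.5 * (M_NU_EV / energy_ev) ** 2
    print(f"E = {label:>7}: 1 - v/c ~ {deviation:.1e}")
```

Even at 10 MeV the predicted deviation is of order 10⁻¹⁶, far below the sensitivity of any time-of-flight measurement described below.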
Time-of-flight measurements conducted so far investigated neutrinos of energy above 10 MeV. However, velocity differences predicted by relativity at such high energies cannot be determined with the current precision of time measurement. The reason why such measurements are still conducted is connected with the theoretical possibility that significantly larger deviations from light speed might arise under certain circumstances. For instance, it was postulated that neutrinos might be some sort of superluminal particles called tachyons, even though others criticized this proposal. While hypothetical tachyons are thought to be compatible with Lorentz invariance, superluminal neutrinos have also been studied in Lorentz invariance violating frameworks as motivated by speculative variants of quantum gravity, such as the Standard-Model Extension according to which Lorentz-violating neutrino oscillations can arise. Besides time-of-flight measurements, those models also allow for indirect determinations of neutrino speed and other modern searches for Lorentz violation. All of those experiments confirmed Lorentz invariance and special relativity.
Fermilab (1970s)
In the 1970s, Fermilab conducted a series of terrestrial measurements in which the speed of muons was compared with that of neutrinos and antineutrinos of energies between 30 and 200 GeV. The Fermilab narrow band neutrino beam was generated as follows: 400 GeV protons hit the target, producing secondary beams consisting of pions and kaons. These then decayed in an evacuated decay tube of 235 meter length. The remaining hadrons were stopped by a secondary dump, so that only neutrinos and some energetic muons could penetrate the 500 meter long earth and steel shield and reach the particle detector.
Since the protons are transferred in bunches of one nanosecond duration at an interval of 18.73 ns, the speed of muons and neutrinos could be determined. A speed difference would lead to an elongation of the neutrino bunches and to a displacement of the whole neutrino time spectrum. At first, the speeds of muons and neutrinos were compared.
Later, also antineutrinos were observed.
The upper limit for deviations from light speed was:
.
This was in agreement with the speed of light within the measurement accuracy (95% confidence level), and also no energy dependence of neutrino speeds could be found at this accuracy.
Supernova 1987A
The most precise agreement with the speed of light was determined in 1987 by the observation of electron antineutrinos of energies between 7.5 and 35 MeV originating from the Supernova 1987A at a distance of 157000 ± 16000 light years. The upper limit for deviations from light speed was:
|v − c|/c < 2×10⁻⁹,
thus more than 0.999999998 times the speed of light. This value was obtained by comparing the arrival times of light and neutrinos. The difference of approximately three hours was explained by the fact that the almost noninteracting neutrinos could pass through the supernova unhindered while light required a longer time.
MINOS (2007)
The first terrestrial measurement of the absolute transit time was conducted by MINOS (2007) at Fermilab. In order to generate neutrinos (the so-called NuMI beam) they used the Fermilab Main Injector, by which 120 GeV protons were directed to a graphite target in 5 to 6 batches per spill. The emerging mesons decayed in a 675 meter long decay tunnel into muon neutrinos (93%) and muon antineutrinos (6%). The travel time was determined by comparing the arrival times at the MINOS near and far detectors, 734 km apart. The clocks of both stations were synchronized by GPS, and long optical fibers were used for signal transmission.
They measured an early neutrino arrival of approximately 126 ns. Thus the relative speed difference was (v − c)/c = (5.1 ± 2.9)×10⁻⁵ (68% confidence limit). This corresponds to 1.000051 ± 0.000029 times the speed of light, thus apparently faster than light. The major source of error was uncertainty in the fiber optic delays. The statistical significance of this result was less than 1.8σ, thus it was not significant, since 5σ is required to be accepted as a scientific discovery.
At 99% confidence level it was given
−2.4×10⁻⁵ < (v − c)/c < 12.6×10⁻⁵,
a neutrino speed larger than 0.999976c and lower than 1.000126c. Thus the result is also compatible with subluminal speeds.
OPERA (2011, 2012)
Anomaly
In the OPERA experiment, 17-GeV neutrinos were used, split in proton extractions of 10.5 μs length generated at CERN, which hit a target at a distance of 743 km. Pions and kaons were then produced, which partially decayed into muons and muon neutrinos (CERN Neutrinos to Gran Sasso, CNGS). The neutrinos traveled further to the Laboratori Nazionali del Gran Sasso (LNGS) 730 km away, where the OPERA detector is located. GPS was used to synchronize the clocks and to determine the exact distance. In addition, optical fibers were used for signal transmission at LNGS. The temporal distribution of the proton extractions was statistically compared with approximately 16000 neutrino events. OPERA measured an early neutrino arrival of approximately 60 nanoseconds, as compared to the expected arrival at the speed of light, thus indicating a neutrino speed faster than that of light. Contrary to the MINOS result, the deviation was 6σ and thus apparently significant.
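The reported anomaly can be translated into a relative speed difference from the early-arrival time and the baseline, since (v − c)/c ≈ δt/(d/c) for δt ≪ d/c. The sketch below uses the ~730 km baseline and the ~60 ns early arrival quoted above; it reproduces the order of magnitude of the claim, not the collaboration's detailed analysis.

```python
# Relative speed difference from an early-arrival time over a known baseline:
# (v - c)/c = (t_c - t_nu)/t_nu, with t_c = d / c and t_nu = t_c - dt_early.
C_LIGHT = 299_792_458.0   # m/s
baseline_m = 730e3        # CERN -> Gran Sasso, ~730 km (from the text)
dt_early_s = 60e-9        # ~60 ns early arrival initially reported

t_c = baseline_m / C_LIGHT
rel_diff = dt_early_s / (t_c - dt_early_s)
print(f"light travel time ~ {t_c * 1e3:.3f} ms, (v - c)/c ~ {rel_diff:.2e}")  # ~2.5e-5
```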
To exclude possible statistical errors, CERN produced bunched proton beams between October and November 2011. The proton extractions were split into short bunches of 3 ns at intervals of 524 ns, so that every neutrino event could be directly connected to a proton bunch. The measurement of twenty neutrino events again gave an early arrival of about 62 ns, in agreement with the previous result. They updated their analysis and increased the significance up to 6.2σ.
In February and March 2012, it was shown that there were two mistakes in the experimental equipment: an erroneous cable connection at a computer card, which made the neutrinos appear faster than expected, and an oscillator out of its specification, which made the neutrinos appear slower than expected. Then the times of arrival of cosmic high-energy muons at OPERA and the co-located LVD detector between 2007 and 2008, 2008–2011, and 2011–2012 were compared. It was found that between 2008 and 2011 the cable connection error caused a deviation of approximately 73 ns, and the oscillator error caused about 15 ns in the opposite direction.
This and the measurement of neutrino velocities consistent with the speed of light by the ICARUS collaboration (see ICARUS (2012)), indicated that the neutrinos were probably not faster than light.
End result
Finally, in July 2012 the OPERA collaboration published a new analysis of their data from 2009 to 2011, which included the instrumental effects stated above, and obtained bounds for arrival time differences (compared to the speed of light):
nanoseconds,
and bounds for speed differences:
.
Also the corresponding new analysis for the bunched beam of October and November 2011 agreed with this result:
nanoseconds
Although at the extremes of error these results still allow for superluminal neutrino velocities, they are predominantly consistent with the speed of light, and the bound for the speed difference is more precise by one order of magnitude than previous terrestrial time-of-flight measurements.
LNGS (2012)
Continuing the OPERA and ICARUS measurements, the LNGS experiments Borexino, LVD, OPERA and ICARUS conducted new tests between 10 and 24 May 2012, after CERN provided another bunched beam rerun. All measurements were consistent with the speed of light. The 17-GeV muon neutrino beam consisted of 4 batches per extraction separated by ~300 ns, and the batches consisted of 16 bunches separated by ~100 ns, with a bunch width of ~2 ns.
Borexino
The Borexino collaboration analyzed both the bunched beam rerun of Oct.–Nov. 2011 and the second rerun of May 2012.
For the 2011 data, they evaluated 36 neutrino events and obtained an upper limit for time of flight differences:
nanoseconds.
For the May 2012 measurements, they improved their equipment by installing a new analogue small–jitter triggering system and a geodetic GPS receiver coupled to a Rb clock. They also conducted an independent high precision geodesy measurement together with LVD and ICARUS. 62 neutrino events could be used for the final analysis, giving a more precise upper limit for time of flight differences
nanoseconds,
corresponding to
(90% C.L.).
LVD
The LVD collaboration first analyzed the beam rerun of Oct.–Nov. 2011. They evaluated 32 neutrino events and obtained an upper limit for time of flight differences:
nanoseconds.
In the May 2012 measurements, they used the new LNGS timing facility by the Borexino collaboration, and the geodetic data obtained by LVD, Borexino, and ICARUS (see above). They also updated their scintillation counters and the trigger. 48 neutrino events (at energies above 50 MeV, average neutrino energy 17 GeV) were used for the May analysis, improving the upper limit for time of flight differences
nanoseconds,
corresponding to
(99% C.L.).
ICARUS
After publishing the analysis of the beam rerun of Oct.–Nov. 2011 (see above), the ICARUS collaboration also provided an analysis of the May rerun. They substantially improved their own internal timing system and between CERN-LNGS, used the geodetic LNGS measurement together with Borexino and LVD, and employed Borexino's timing facility. 25 neutrino events have been evaluated for the final analysis, yielding an upper limit for time of flight differences:
nanoseconds,
corresponding to
.
Neutrino velocities exceeding the speed of light by more than (95% C.L.) are excluded.
OPERA
After the correction of the initial results, OPERA published their May 2012 measurements as well.
An additional, independent timing system and four different methods of analysis were used for the evaluation of the neutrino events. They provided an upper limit for time of flight differences between light and muon neutrinos (48 to 59 neutrino events depending on the method of analysis):
nanoseconds,
and between light and anti-muon neutrinos (3 neutrino events):
nanoseconds,
consistent with the speed of light in the range of
(90% C. L.).
MINOS (2012)
Old timing system
The MINOS collaboration further elaborated on their speed measurements of 2007. They examined the data collected over seven years, improved the GPS timing system and the understanding of the delays of electronic components, and also used upgraded timing equipment. The neutrinos span a 10 μs spill containing 5-6 batches. The analyses have been conducted in two ways. First, as in the 2007 measurement, the data at the far detector was statistically determined by the data of the near detector ("Full Spill Approach"):
nanoseconds,
Second, the data connected with the batches themselves have been used ("Wrapped Spill Approach"):
nanoseconds,
This is consistent with neutrinos traveling at the speed of light, and substantially improves their preliminary 2007 results.
New timing system
In order to further improve the precision, a new timing system was developed. In particular, a "Resistive Wall Current Monitor" (RWCM) measuring the time distribution of the proton beam, caesium atomic clocks, dual frequency GPS receivers, and auxiliary detectors to measure detector latencies were installed. For the analysis, the neutrino events could be connected with a specific 10 μs proton spill, from which a likelihood analysis was generated, and then the likelihoods of different events were combined. The result:
nanoseconds,
and
.
This was confirmed in the final publication in 2015.
Indirect determinations
Lorentz-violating frameworks such as the Standard-Model Extension including Lorentz-violating neutrino oscillations also allow for indirect determinations of deviations between light speed and neutrino speed by measuring their energy and the decay rates of other particles over large distances. By this method, much more stringent bounds can be obtained, such as by Stecker et al.:
.
For more such indirect bounds on superluminal neutrinos, see .
References
Related belletristic
"60.7 nanoseconds", by Gianfranco D'Anna (): a novel inspired by the superluminal neutrino claim, recounting an incredible story of ambition and bad luck in detail.
External links
INFN resource list with many papers on experiments and history: SuperLuminal Neutrino
Physics experiments
Special relativity | Measurements of neutrino speed | [
"Physics"
] | 3,124 | [
"Special relativity",
"Experimental physics",
"Physics experiments",
"Theory of relativity"
] |
35,183,128 | https://en.wikipedia.org/wiki/Quasioptics | Quasioptics concerns the propagation of electromagnetic radiation where the wavelength is comparable to the size of the optical components (e.g. lenses, mirrors, and apertures) and hence diffraction effects may become significant. It commonly describes the propagation of Gaussian beams where the beam width is comparable to the wavelength. This is in contrast to geometrical optics, where the wavelength is small compared to the relevant length scales. Quasioptics is so named because it represents an intermediate regime between conventional optics and electronics, and is often relevant to the description of signals in the far-infrared or terahertz region of the electromagnetic spectrum. It represents a simplified version of the more rigorous treatment of physical optics. Quasi-optical systems may also operate at lower frequencies such as millimeter wave, microwave, and even lower.
See also
Optoelectronics
References
Optics
Terahertz technology | Quasioptics | [
"Physics",
"Chemistry"
] | 181 | [
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Optics",
"Electromagnetic spectrum",
" molecular",
"Atomic",
"Terahertz technology",
" and optical physics"
] |
35,183,882 | https://en.wikipedia.org/wiki/Projective%20bundle | In mathematics, a projective bundle is a fiber bundle whose fibers are projective spaces.
By definition, a scheme X over a Noetherian scheme S is a Pn-bundle if it is locally a projective n-space; i.e., S admits an open cover over which X restricts to a product with projective n-space, and the transition automorphisms are linear. Over a regular scheme S such as a smooth variety, every projective bundle is of the form P(E) for some vector bundle (locally free sheaf) E.
The projective bundle of a vector bundle
Every vector bundle over a variety X gives a projective bundle by taking the projective spaces of the fibers, but not all projective bundles arise in this way: there is an obstruction in the cohomology group H2(X,O*). To see why, recall that a projective bundle comes equipped with transition functions on double intersections of a suitable open cover. On triple overlaps, any lift of these transition functions satisfies the cocycle condition up to an invertible function. The collection of these functions forms a 2-cocycle which vanishes in H2(X,O*) only if the projective bundle is the projectivization of a vector bundle. In particular, if X is a compact Riemann surface then H2(X,O*)=0, and so this obstruction vanishes.
The projective bundle of a vector bundle E is the same thing as the Grassmann bundle of 1-planes in E.
The projective bundle P(E) of a vector bundle E is characterized by the universal property that says:
Given a morphism f: T → X, to factorize f through the projection map is to specify a line subbundle of f*E.
For example, taking f to be p, one gets the line subbundle O(-1) of p*E, called the tautological line bundle on P(E). Moreover, this O(-1) is a universal bundle in the sense that when a line bundle L gives a factorization f = p ∘ g, L is the pullback of O(-1) along g. See also Cone#O(1) for a more explicit construction of O(-1).
On P(E), there is a natural exact sequence (called the tautological exact sequence):
where Q is called the tautological quotient-bundle.
Let E ⊂ F be vector bundles (locally free sheaves of finite rank) on X and G = F/E. Let q: P(F) → X be the projection. Then the natural map is a global section of the sheaf hom . Moreover, this natural map vanishes at a point exactly when the point is a line in E; in other words, the zero-locus of this section is P(E).
A particularly useful instance of this construction is when F is the direct sum E ⊕ 1 of E and the trivial line bundle (i.e., the structure sheaf). Then P(E) is a hyperplane in P(E ⊕ 1), called the hyperplane at infinity, and the complement of P(E) can be identified with E. In this way, P(E ⊕ 1) is referred to as the projective completion (or "compactification") of E.
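As a basic worked example of the projective completion, take X to be a point and E the trivial bundle of rank n, so that E is just affine n-space; the homogeneous coordinates below are introduced only for illustration.

```latex
\mathbf{P}(E \oplus 1) = \mathbf{P}^{n}, \qquad
\mathbf{P}(E) = \{x_{n+1} = 0\} \cong \mathbf{P}^{n-1}, \qquad
\mathbf{P}(E \oplus 1) \setminus \mathbf{P}(E) \cong \mathbb{A}^{n} = E .
```

In other words, ordinary projective space is the projective completion of affine space, with P(E) playing the role of the hyperplane at infinity.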
The projective bundle P(E) is stable under twisting E by a line bundle; precisely, given a line bundle L, there is a natural isomorphism g: P(E) ≅ P(E ⊗ L)
such that the tautological line bundles correspond under g up to a twist by the pullback of L. (In fact, one gets g by the universal property applied to the line bundle on the right.)
Examples
Many non-trivial examples of projective bundles can be found using fibrations over the projective line, such as Lefschetz fibrations. For example, an elliptic K3 surface is a K3 surface with a fibration such that the generic fibers are elliptic curves. Because every elliptic curve is a genus 1 curve with a distinguished point, there exists a global section of the fibration. Because of this global section, there exists a model of the surface giving a morphism to the projective bundle, defined by the Weierstrass equation, in which the variables are the local coordinates of the bundle and the coefficients are sections of sheaves on the base. Note this equation is well-defined because each term in the Weierstrass equation has the same total degree (meaning the degree of the coefficient plus the degree of the monomial).
Cohomology ring and Chow group
Let X be a complex smooth projective variety and E a complex vector bundle of rank r on it. Let p: P(E) → X be the projective bundle of E. Then the cohomology ring H*(P(E)) is an algebra over H*(X) through the pullback p*. The first Chern class ζ = c1(O(1)) generates H*(P(E)) with the relation
ζ^r + c1(E)ζ^(r−1) + ... + cr(E) = 0
where ci(E) is the i-th Chern class of E. One interesting feature of this description is that one can define Chern classes as the coefficients in the relation; this is the approach taken by Grothendieck.
Over fields other than the complex numbers, the same description remains true with the Chow ring in place of the cohomology ring (still assuming X is smooth). In particular, for Chow groups, there is the direct sum decomposition
Ak(P(E)) = A(k−r+1)(X) ⊕ A(k−r+2)(X) ⊕ ... ⊕ Ak(X).
As it turned out, this decomposition remains valid even if X is neither smooth nor projective. In contrast, Ak(E) = A(k−r)(X) via the Gysin homomorphism, morally because the fibers of E, being vector spaces, are contractible.
See also
Proj construction
cone (algebraic geometry)
ruled surface (an example of a projective bundle)
Severi–Brauer variety
Hirzebruch surface
References
Algebraic topology
Algebraic geometry | Projective bundle | [
"Mathematics"
] | 1,182 | [
"Fields of abstract algebra",
"Topology",
"Algebraic topology",
"Algebraic geometry"
] |
35,184,684 | https://en.wikipedia.org/wiki/Mycoplasma%20haemomuris | Mycoplasma haemomuris, formerly known as Haemobartonella muris and Bartonella muris, is a Gram-negative bacillus. It is known to cause anemia in rats and mice.
References
Further reading
haemomuris | Mycoplasma haemomuris | [
"Biology"
] | 59 | [
"Bacteria stubs",
"Bacteria"
] |
35,185,688 | https://en.wikipedia.org/wiki/Phage%20major%20coat%20protein | In molecular biology, a phage major coat protein is an alpha-helical protein that forms a viral envelope of filamentous bacteriophages. These bacteriophages are flexible rods, about one to two micrometres long and six nm in diameter, with a helical shell of protein subunits surrounding a DNA core. The approximately 50-residue subunit of the major coat protein is largely alpha-helix, and the axis of the alpha-helix makes a small angle with the axis of the virion. The protein shell can be considered in three sections: the outer surface, occupied by the N-terminal region of the subunit and rich in acidic residues that give the virion a low isoelectric point; the interior of the shell (including a 19-residue stretch of apolar side-chains) where protein subunits interact, mainly with each other; and the inner surface (occupied by the C-terminal region of the subunit), rich in positively charged residues that interact with the DNA core.
References
Transmembrane proteins
Protein families | Phage major coat protein | [
"Biology"
] | 220 | [
"Protein families",
"Protein classification"
] |
43,316,441 | https://en.wikipedia.org/wiki/Illustris%20project | The Illustris project is an ongoing series of astrophysical simulations run by an international collaboration of scientists. The aim is to study the processes of galaxy formation and evolution in the universe with a comprehensive physical model. Early results were described in a number of publications following widespread press coverage. The project publicly released all data produced by the simulations in April, 2015. Key developers of the Illustris simulation have been Volker Springel (Max-Planck-Institut für Astrophysik) and Mark Vogelsberger (Massachusetts Institute of Technology). The Illustris simulation framework and galaxy formation model has been used for a wide range of spin-off projects, starting with Auriga and IllustrisTNG (both 2017) followed by Thesan (2021), MillenniumTNG (2022) and TNG-Cluster (2023).
Illustris simulation
Overview
The original Illustris project was carried out by Mark Vogelsberger and collaborators as the first large-scale galaxy formation application of Volker Springel's novel Arepo code.
The Illustris project included large-scale cosmological simulations of the evolution of the universe, spanning initial conditions of the Big Bang, to the present day, 13.8 billion years later. Modeling, based on the most precise data and calculations currently available, are compared to actual findings of the observable universe in order to better understand the nature of the universe, including galaxy formation, dark matter and dark energy.
The simulation included many physical processes which are thought to be critical for galaxy formation. These include the formation of stars and the subsequent "feedback" due to supernova explosions, as well as the formation of super-massive black holes, their consumption of nearby gas, and their multiple modes of energetic feedback.
Images, videos, and other data visualizations for public distribution are available at official media page.
Computational aspects
The main Illustris simulation was run on the Curie supercomputer at CEA (France) and the SuperMUC supercomputer at the Leibniz Computing Centre (Germany). A total of 19 million CPU hours was required, using 8,192 CPU cores. The peak memory usage was approximately 25 TB of RAM. A total of 136 snapshots were saved over the course of the simulation, totaling over 230 TB cumulative data volume.
A code called "Arepo" was used to run the Illustris simulations. It was written by Volker Springel, the same author as the GADGET code. The name is derived from the Sator Square. This code solves the coupled equations of gravity and hydrodynamics using a discretization of space based on a moving Voronoi tessellation. It is optimized for running on large, distributed memory supercomputers using an MPI approach.
Public data release
In April, 2015 (eleven months after the first papers were published) the project team publicly released all data products from all simulations. All original data files can be directly downloaded through the data release webpage. This includes group catalogs of individual halos and subhalos, merger trees tracking these objects through time, full snapshot particle data at 135 distinct time points, and various supplementary data catalogs. In addition to direct data download, a web-based API allows for many common search and data extraction tasks to be completed without needing access to the full data sets.
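The web-based API mentioned above can be scripted. The following is a minimal sketch in Python using the requests library; the base URL, the "api-key" header name, and the example endpoint path and query parameters are assumptions based on the public data-release documentation and should be checked against the current documentation before use.

```python
import requests

BASE_URL = "http://www.illustris-project.org/api/"   # assumed base URL of the public data release
HEADERS = {"api-key": "YOUR_API_KEY"}                # placeholder key; obtained by registering on the site

def get_json(path, params=None):
    """Fetch a JSON resource from the (assumed) Illustris web API."""
    response = requests.get(BASE_URL + path, params=params, headers=HEADERS)
    response.raise_for_status()
    return response.json()

# Example (hypothetical endpoint layout): list a few massive subhalos in snapshot 135 of Illustris-1.
subhalos = get_json("Illustris-1/snapshots/135/subhalos/",
                    params={"limit": 5, "order_by": "-mass_stars"})
for entry in subhalos.get("results", []):
    print(entry.get("id"), entry.get("url"))
```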
German postage stamp
In December 2018, the Illustris simulation was recognized by Deutsche Post through a special series stamp.
Illustris Spin-Off Projects
The Illustris simulation framework has been used by a wide range of spin-off projects that focus on specific scientific questions.
IllustrisTNG:
The IllustrisTNG project, "the next generation" follow-up to the original Illustris simulation, was first presented in July 2017 by a team of scientists from Germany and the U.S. led by Prof. Volker Springel. First, a new physical model was developed, which among other features included magnetohydrodynamics. The project planned three simulations, which used different volumes at different resolutions. The intermediate simulation (TNG100) was equivalent to the original Illustris simulation. Unlike Illustris, it was run on the Hazel Hen machine at the High Performance Computing Center, Stuttgart in Germany. Up to 25,000 computer cores were employed. In December 2018 the simulation data from IllustrisTNG was released publicly. The data service includes a JupyterLab interface.
Auriga:
The Auriga project consists of high-resolution zoom simulations of Milky Way-like dark matter halos to understand the formation of our Milky Way galaxy.
Thesan:
The Thesan project is a radiative-transfer version of IllustrisTNG to explore the epoch of reionization.
MillenniumTNG:
The MillenniumTNG employs the IllustrisTNG galaxy formation model in a larger cosmological volume to explore the massive end of the halo mass function for detailed cosmological probe forecasts.
TNG-Cluster:
A suite of high-resolution zoom-in simulations of galaxy clusters.
Gallery
See also
Computational fluid dynamics
Large-scale structure of the universe
List of cosmological computation software
Millennium Run
N-body simulation
UniverseMachine
References
External links
Press release - Center for Astrophysics Harvard & Smithsonian (7 May 2014).
- Illustris-Project (6 May 2014).
- NASA (14 July 2014)
- article containing a comparison table of different simulation projects
Astrophysics
Cosmological simulation
Physical cosmology | Illustris project | [
"Physics",
"Astronomy"
] | 1,125 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Astrophysics",
"Computational physics",
"Cosmological simulation",
"Physical cosmology"
] |
43,318,505 | https://en.wikipedia.org/wiki/Gemtech | Gemtech (stylized as GEMTECH) is an American manufacturer of silencers (suppressors) for pistols, rifles, submachine guns, and personal defense weapons (PDWs). The company also produces ammunition and various accessories.
Gemtech was founded in 1993 and is headquartered in Eagle, Idaho. GSL Technology of Jackson, Michigan designed and manufactured Gemtech Suppressors from 1994 to 2016.
Suppressors
Gemtech offers a variety of different silencers.
Rimfire suppressors
Outback: The Outback was a "thread-on" suppressor for handguns and rifles chambered in .22 lr.
Quantum-200: The Quantum-200 was a .22 lr suppressor designed and sold in the 1990s.
Vortex-2: The Vortex-2 was a .22 lr muzzle suppressor designed for handguns or rifles.
LDES-2: The LDES-2 was a .22 lr handgun suppressor that is no longer in production.
Oasis: The Oasis was a .22 lr integrally suppressed aluminum upper receiver for the Ruger MK II and Ruger MK III automatic pistols; it is no longer in production.
Centerfire handgun suppressors
GM-45: The GM-45 is a suppressor for pistols chambered in .45 ACP, 9mm Luger, 10mm Auto and .40 S&W.
GM-9: The GM-9 is for 9mm and .300 Blackout (subsonic loads) firearms. It is rated for full-automatic fire.
Tundra: The Tundra was a 9mm suppressor and it is designed to be fired "dry."
Blackside-45: The Blackside-45 was a suppressor designed for handguns chambered in .45 ACP and .40 S&W.
SFN-57: The SFN-57 was designed for use with the FN Five-seven automatic pistol chambered in SFN-57 5.7×28mm. It may also be utilized on firearms chambered in .17 HMR, .22 lr, and .22 WMR.
Vortex-9: The Vortex-9 is a discontinued 9mm handgun suppressor.
Submachine gun and PDW suppressors
RAPTOR-II: The RAPTOR-II is a suppressor for 9mm submachine guns such as the Uzi and MP5.
RAPTOR-40: The RAPTOR-40 is a suppressor designed for submachine guns chambered in .40S&W and 10mm Auto, such as the UMP-40 or MP-5/10.
VIPER: The VIPER is a suppressor designed for the MAC line of submachine guns (e.g., MAC-10, MAC-11) and will work with firearms chambered in .380 ACP, 9mm, and .45 ACP. The VIPER is smaller, lighter, and more efficient than original MAC suppressors.
MOSSAD-II: The MOSSAD-II is a suppressor designed for the Uzi family of submachine guns.
MK-9K: The MK-9K is a 9mm suppressor designed for use with open-bolt submachine guns.
SAR57: The SAR57 is a 5.7mm suppressor designed for use with the SAR57. It is not recommended for use with the FN 5-7 pistol or P90 PDW.
Centerfire rifle suppressors
GMT-300BLK: The GMT-300BLK is a suppressor for .300 Blackout rifles and carbines. It may be utilized with both super and subsonic ammunition.
GMT-300WM: The GMT-300WM is for rifles chambered in .300 Winchester Magnum.
GMT-556LE: The GMT-556LE is a 5.56mm rifle or carbine suppressor designed for law enforcement use.
GMT-556QM: The GMT-556QM is a 5.56mm automatic rifle or carbine suppressor designed for military use.
STORMFRONT: The STORMFRONT was a suppressor for .50 BMG rifles.
TREK: The TREK is a 5.56mm "thread-on" suppressor for carbines and rifles.
SANDSTORM: The SANDSTORM was a titanium 7.62×51mm NATO / .308 Winchester suppressor.
QUICKSAND: The QUICKSAND was a light-weight, quick-detach version of the SANDSTORM.
HVT-QM: The HVT-QM was a stainless steel, .30-caliber suppressor that uses Gemtech's Quickmount system.
Ammunition
In 2011, Gemtech developed their own line of subsonic .22 Long Rifle ammunition optimized for use with sound suppressors. Kel Whelan, working with Brett Olin of CCI Ammunition came up with a round utilizing a unique 42 grain bullet and travelling at 1050 feet per second.
Two years later, the company began producing .300 Blackout ammunition in both supersonic and subsonic loads.
American Suppressor Association
Gemtech was instrumental in forming the American Suppressor Association (ASA), a nonprofit trade association "to further the pursuit of education, public relations, legislation, hunting applications and military applications for the silencer industry".
Purchase by Smith & Wesson
In July 2017, it was announced that Gemtech was purchased by firearm manufacturer, Smith & Wesson.
See also
Title II weapons
References
External links
Firearm components
Noise control | Gemtech | [
"Technology"
] | 1,129 | [
"Firearm components",
"Components"
] |
49,327,485 | https://en.wikipedia.org/wiki/Low-cycle%20fatigue | Low cycle fatigue (LCF) has two fundamental characteristics: plastic deformation in each cycle; and low cycle phenomenon, in which the materials have finite endurance for this type of load. The term cycle refers to repeated applications of stress that lead to eventual fatigue and failure; low-cycle pertains to a long period between applications.
Study in fatigue has been focusing on mainly two fields: size design in aeronautics and energy production using advanced calculation methods. The LCF result allows us to study the behavior of the material in greater depth to better understand the complex mechanical and metallurgical phenomena (crack propagation, work softening, strain concentration, work hardening, etc.).
History
Common factors that have been attributed to low-cycle fatigue (LCF) are high stress levels and a low number of cycles to failure. Many studies have been carried out, particularly in the last 50 years, on metals and on the relationship between temperature, stress, and number of cycles to failure. Tests are used to plot an S-N curve, and it has been shown that the number of cycles to failure decreased with increasing temperature. However, extensive testing would have been too costly, so researchers have mainly resorted to finite element analysis with computer software.
Through many experiments, it has been found that characteristics of a material can change as a result of LCF. Fracture ductility tends to decrease, with the magnitude depending on the presence of small cracks to begin with. To perform these tests, an electro-hydraulic servo-controlled testing machine was generally used, as it is capable of not changing the stress amplitude. It was also discovered that performing low-cycle fatigue tests on specimens with holes already drilled in them were more susceptible to crack propagation, and hence a greater decrease in fracture ductility. This was true despite the small hole sizes, ranging from 40 to 200 μm.
Characteristics
When a component is subject to low cycle fatigue, it is repeatedly plastically deformed. For example, if a part were to be loaded in tension until it was permanently deformed (plastically deformed), that would be considered one quarter cycle of low cycle fatigue, or LCF. In order to complete a full cycle the part would need to be deformed back into its original shape. The number of LCF cycles that a part can withstand before failing is much lower than that of regular fatigue.
This condition of high cyclic strain is often the result of extreme operating conditions, such as high changes in temperature. Thermal stresses originating from an expansion or contraction of materials can exacerbate the loading conditions on a part and LCF characteristics can come into play.
Mechanics
A commonly used equation that describes the behavior of low-cycle fatigue is the Coffin-Manson relation (published by L. F. Coffin in 1954 and S. S. Manson in 1953). With E denoting the elastic modulus, it reads
Δεt/2 = Δεp/2 + Δεe/2 = εf′(2N)^c + (σf′/E)(2N)^b
where,
Δεt /2 is the total strain amplitude;
Δεp /2 is the plastic strain amplitude;
Δεe /2 is the elastic strain amplitude;
2N is the number of reversals to failure (N cycles);
εf' is an empirical constant known as the fatigue ductility coefficient defined by the strain intercept at 2N =1;
c is an empirical constant known as the fatigue ductility exponent, commonly ranging from -0.5 to -0.7. Small c results in long fatigue life.
σf′ is a constant known as the fatigue strength coefficient
b is an empirical constant known as the fatigue strength exponent.
The first term on the right-hand side describes the plastic component of the strain amplitude, and the second term the elastic component.
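As a worked numerical illustration of this relation, the following Python sketch evaluates the total strain amplitude over a range of reversals; the material parameters are hypothetical, round-number values chosen only for illustration, not data for any particular alloy.

```python
# Hypothetical material parameters (illustrative only, not measured values)
E = 200e3          # elastic modulus, MPa
sigma_f = 900.0    # fatigue strength coefficient sigma_f', MPa
b = -0.09          # fatigue strength exponent
eps_f = 0.6        # fatigue ductility coefficient eps_f'
c = -0.60          # fatigue ductility exponent

def total_strain_amplitude(reversals_2N):
    """Coffin-Manson total strain amplitude: plastic term plus elastic (Basquin) term."""
    plastic = eps_f * reversals_2N ** c
    elastic = (sigma_f / E) * reversals_2N ** b
    return plastic + elastic

for two_N in (10, 100, 1_000, 10_000):
    print(f"2N = {two_N:>6}:  delta_eps_t/2 = {total_strain_amplitude(two_N):.4f}")
```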
Morrow Approximation
In the Coffin-Manson relation above, the constants b and c can be determined by the following equations:
Notable failures
One noteworthy event in which the failure was a result of LCF was the 1994 Northridge earthquake. Many buildings and bridges collapsed, and as a result over 9,000 people were injured. Researchers at the University of Southern California analyzed the main areas of a ten-story building that were subjected to low-cycle fatigue. Unfortunately, there was limited experimental data available to directly construct a S-N curve for low-cycle fatigue, so most of the analysis consisted of plotting the high-cycle fatigue behavior on a S-N curve and extending the line for that graph to create the portion of the low-cycle fatigue curve using the Palmgren-Miner method. Ultimately, this data was used to more accurately predict and analyze similar types of damage that the ten-story steel building in Northridge faced.
Another more recent event was the 2010 Chile earthquake, in which several researchers from the University of Chile made reports of multiple reinforced concrete structures damaged throughout the country by the seismic event. Many structural elements such as beams, walls and columns failed due to fatigue, exposing the steel reinforcements used in the design with clear signs of longitudinal buckling. This event caused Chilean seismic design standards to be updated based on observations on damaged structures caused by the earthquake.
References
Materials degradation
Mechanical failure | Low-cycle fatigue | [
"Materials_science",
"Engineering"
] | 1,008 | [
"Materials degradation",
"Mechanical failure",
"Materials science",
"Mechanical engineering"
] |
49,331,144 | https://en.wikipedia.org/wiki/Pressure-driven%20flow | Pressure-driven flow is a method to displace liquids in a capillary or microfluidic channel with pressure. The pressure is typically generated pneumatically by compressed air or other gases (nitrogen, carbon dioxide, etc) or by electrical and magnetical fields or gravitation.
Physical fundamentals
It is known from thermodynamics that conjugate quantities scale in different ways. Two classes can be distinguished: intensive quantities, such as temperature T, pressure P and chemical potential μ, and extensive quantities, such as entropy S, volume V and amount of substance N. Extensive quantities scale with system size, whereas intensive quantities do not. Pressure, for example, is defined as a (differential) quotient of two extensive variables, p = −dE/dV (energy E and volume V), and is therefore scale-independent, because the same scaling factors appear in both the numerator and the denominator and cancel. In microsystems the problem arises that the extremely small volumes are difficult to control. The reason is the predominance of surface effects such as surface charges, van der Waals forces and entropic effects (e.g. dewetting due to rough surfaces: the restriction in degrees of freedom of molecules penetrating such a surface is entropically more expensive than staying in the bulk). Furthermore, the microsystem has to be controlled from a macroscopic, human scale.
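The pressure-flow relationship in such channels can be made concrete with the Hagen-Poiseuille law for laminar flow in a circular capillary; this standard relation is not stated in the text above, and the channel dimensions and fluid properties below are hypothetical values chosen for illustration.

```python
import math

def hagen_poiseuille_flow(delta_p, radius, length, viscosity):
    """Volumetric flow rate Q = pi * r^4 * dP / (8 * mu * L) for laminar flow in a round capillary."""
    return math.pi * radius ** 4 * delta_p / (8.0 * viscosity * length)

# Hypothetical microchannel: 50 micrometre radius, 2 cm long, water-like viscosity, 10 kPa pressure drop
q = hagen_poiseuille_flow(delta_p=10e3, radius=50e-6, length=0.02, viscosity=1.0e-3)
print(f"Volumetric flow rate: {q:.3e} m^3/s (= {q * 1e9:.2f} microlitres per second)")
```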
References
Fluid dynamics | Pressure-driven flow | [
"Chemistry",
"Engineering"
] | 294 | [
"Piping",
"Chemical engineering",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
45,061,245 | https://en.wikipedia.org/wiki/Flow%20Cytometry%20Standard | Flow Cytometry Standard (FCS) is a data file standard for the reading and writing of data from flow cytometry experiments. The FCS specification has traditionally been developed and maintained by the International Society for Advancement of Cytometry (ISAC). FCS used to be the only widely adopted file format in flow cytometry. Recently, additional standard file formats have been developed by ISAC.
File Format
The FCS file format describes a file that is a combination of textual data followed by binary data. The order of the file layout is as follows:
HEADER segment
TEXT segment
DATA segment
Optional ANALYSIS segment
CRC value
Optional OTHER segments
The HEADER segment is an ASCII text string that begins by identifying the version of the FCS standard used, followed by three pairs of byte offsets that designate the positions of the TEXT, DATA, and ANALYSIS segments. An example header segment is given below
FCS3.0 58 4380 4381 5586 0 0
Because the field width of the header segment byte positions is constrained by 8 characters, the maximum position it is capable of storing is 99,999,999. Anything beyond that is encoded as a 0 for both the start and end position, and the corresponding TEXT segment keyword is used instead.
The text segment is an ASCII text string that is divided into a series of key-value pairs that are delimited by some chosen character, e.g. '|'. The first character immediately following the header segment is the delimiter. An example of a header and text segment is given below
FCS3.0 58 4380 4381 5586 0 0|$BEGINANALYSIS|0|$BEGINDATA|4381|$BEGINSTEXT|0|$BTIM|08:24:37.64|$BYTEORD|1,2,3,4|$CELLS|RBC|...|
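The two segments described above can be read with a few lines of Python. The sketch below assumes a well-formed single-dataset file whose offsets fit in the HEADER fields, ignores escaped delimiters inside values, and uses illustrative function names that are not part of any standard library.

```python
def parse_fcs_header_and_text(path):
    """Read the HEADER byte offsets and the primary TEXT key/value pairs of an FCS file."""
    with open(path, "rb") as fh:
        header = fh.read(58).decode("ascii")
        version = header[0:6].strip()                  # e.g. "FCS3.0"
        offsets = header[10:58].split()                # TEXT, DATA, ANALYSIS begin/end offsets
        text_begin, text_end = int(offsets[0]), int(offsets[1])

        fh.seek(text_begin)
        text = fh.read(text_end - text_begin + 1).decode("ascii", errors="replace")

    delimiter = text[0]                                # first character after the header is the delimiter
    tokens = text.strip(delimiter).split(delimiter)
    keywords = dict(zip(tokens[0::2], tokens[1::2]))   # e.g. "$BYTEORD": "1,2,3,4"
    return version, keywords
```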
To be a valid FCS file, the text segment must contain all required keywords, which describe the DATA segment format and encoding. For FCS version 3.1, the required FCS primary TEXT segment keywords are as follows:
The DATA segment of the FCS file follows the TEXT segment and is laid out event-wise (row-wise) according to the order of the parameters (a.k.a. channels) $P1N, $P2N, ..., $PnN. An event is either an actual biological cell or some other particle that was large enough to trigger the data acquisition device of the flow cytometer instrument. Data segments hold the following layout:
Data Segment
[Event1][Event2][Event3]...[Event$TOT]
Each event is laid out according to the number of bytes described by $PnB for each parameter. These bytes are to be interpreted according to the combination specified by $BYTEORD and $DATATYPE.
Event
[$P1B][$P2B][$P3B]...[$PnB]
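Once the keyword dictionary is available, the DATA segment can be decoded with NumPy. The sketch below handles only the common case of uniform-width data (all $PnB equal and $DATATYPE of 'F', 'D' or 'I'); mixed widths, bit-masked integers and ASCII data would need more care, and the data_begin/data_end offsets are taken from the HEADER or from $BEGINDATA/$ENDDATA.

```python
import numpy as np

def read_fcs_events(path, keywords, data_begin, data_end):
    """Decode the DATA segment into an (events x parameters) array for uniform-width data."""
    n_par = int(keywords["$PAR"])
    n_tot = int(keywords["$TOT"])
    bits = int(keywords["$P1B"])                              # assumes all $PnB values are equal
    endian = "<" if keywords["$BYTEORD"].startswith("1") else ">"
    kind = {"F": "f", "D": "f", "I": "u"}[keywords["$DATATYPE"]]
    dtype = np.dtype(f"{endian}{kind}{bits // 8}")

    with open(path, "rb") as fh:
        fh.seek(data_begin)
        raw = fh.read(data_end - data_begin + 1)

    events = np.frombuffer(raw, dtype=dtype, count=n_par * n_tot)
    return events.reshape(n_tot, n_par)                       # one row per event, one column per parameter
```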
Data structure
Flow cytometry data is typically saved for analysis in the form of an array, with fluorescence and scatter channels represented in columns, and individual "events" (most of which are cells) forming the rows.
The number of events acquired from each sample usually ranges between the low thousands and the low millions.
History
The first version of a Flow Cytometry Standard (FCS) was developed in 1984. Since then, FCS became the standard file format supported by all flow cytometry software and hardware vendors. FCS is a binary file format with three main segments: a text segment containing meta data in keyword/value pairs structures, a data segment usually containing a matrix of detected expression values (so called list mode format), and a rarely used analysis segment.
Over the years, updates were incorporated to adapt to technological advancements in both flow cytometry and computing technologies.
In 1990, FCS 2.0 was introduced. Features introduced in FCS 2.0 included the option of multiple data sets within a data file, the use of different byte orders accommodating hardware variations on different computing platforms, and basic compensation and scaling information. FCS 2.0 was followed by FCS 3.0 in 1997, which introduced the possibility of storing data sets larger than 100MB.
The latest version, FCS 3.1, was introduced in 2010. It retains the basic FCS file structure and most features of previous versions of the standard. Changes included in FCS 3.1 address potential ambiguities in the previous versions and provide a more robust standard. They include simplified support for international characters and improved support for storing compensation. The major additions are support for preferred display scale, a standardized way of capturing the sample volume, information about the origins of the data file, and support for plate and well identification in high throughput, plate based experiments.
See also
Flow cytometry
Flow cytometry bioinformatics
References
Flow cytometry
Bioinformatics | Flow Cytometry Standard | [
"Chemistry",
"Engineering",
"Biology"
] | 1,041 | [
"Bioinformatics",
"Biological engineering",
"Flow cytometry"
] |
21,986,738 | https://en.wikipedia.org/wiki/Type-1.5%20superconductor | Type-1.5 superconductors are multicomponent superconductors characterized by two or more coherence lengths, at least one of which is shorter than the magnetic field penetration length , and at least one of which is longer. This is in contrast to single-component superconductors, where there is only one coherence length and the superconductor is necessarily either type 1 () or type 2 () (often a coherence length is defined with extra factor, with such a definition the corresponding inequalities are and ). When placed in magnetic field, type-1.5 superconductors should form quantum vortices: magnetic-flux-carrying excitations. They allow magnetic field to pass through superconductors due to a vortex-like circulation of superconducting particles (electronic pairs). In type-1.5 superconductors these vortices have long-range attractive, short-range repulsive interaction. As a consequence a type-1.5 superconductor in a magnetic field can form a phase separation into domains with expelled magnetic field and clusters of quantum vortices which are bound together by attractive intervortex forces. The domains of the Meissner state retain the two-component superconductivity, while in the vortex clusters one of the superconducting components is suppressed. Thus such materials should allow coexistence of various properties of type-I and type-II superconductors.
Description
Type-I superconductors completely expel external magnetic fields if the strength of the applied field is sufficiently low. Also the supercurrent can flow only on the surface of such a superconductor but not in its interior. This state is called the Meissner state. However at elevated magnetic field, when the magnetic field energy becomes comparable with the superconducting condensation energy, the superconductivity is destroyed by the formation of macroscopically large inclusions of non-superconducting phase.
Type-II superconductors, besides the Meissner state, possess another state: a sufficiently strong applied magnetic field can produce currents in the interior of superconductor due to formation of quantum vortices. The vortices also carry magnetic flux through the interior of the superconductor. These quantum vortices repel each other and thus tend to form uniform vortex lattices or liquids. Formally, vortex solutions exist also in models of type-I superconductivity, but the interaction between vortices is purely attractive, so a system of many vortices is unstable against a collapse onto a state of a single giant normal domain with supercurrent flowing on its surface. More importantly, the vortices in type-I superconductor are energetically unfavorable. To produce them would require the application of a magnetic field stronger than what a superconducting condensate can sustain. Thus a type-I superconductor goes to non-superconducting states rather than forming vortices. In the usual Ginzburg–Landau theory, only the quantum vortices with purely repulsive interaction are energetically cheap enough to be induced by applied magnetic field.
It was proposed that the type-I/type-II dichotomy could be broken in a multi-component superconductors, which possess multiple coherence lengths.
Examples of multi-component superconductivity are the multi-band superconductors magnesium diboride and oxypnictides, and exotic superconductors with nontrivial Cooper pairing. There, one can distinguish two or more superconducting components associated, for example, with electrons belonging to different bands of the band structure. A different example of two-component systems is provided by the projected superconducting states
of liquid metallic hydrogen or deuterium where mixtures of superconducting electrons and superconducting protons or deuterons were theoretically predicted.
It was also pointed out that systems which have phase transitions between different superconducting states should rather generically fall into the type-1.5 state near that transition, due to the divergence of one of the coherence lengths.
In mixtures of independently conserved condensates
For multicomponent superconductors with so-called U(1)×U(1) symmetry, the Ginzburg-Landau model is a sum of two single-component Ginzburg-Landau models coupled by the vector potential, where the two fields are the superconducting condensates. If the condensates are coupled only electromagnetically, i.e. only through the shared vector potential, the model has three length scales: the London penetration length and the two coherence lengths.
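A schematic form of such a two-component free-energy density, written in common (assumed) conventions that vary in normalization between papers, is:

```latex
F = \sum_{i=1,2} \left[ \frac{1}{2}\left| \left(\nabla + i e \mathbf{A}\right) \psi_i \right|^2
      + \alpha_i |\psi_i|^2 + \frac{\beta_i}{2} |\psi_i|^4 \right]
    + \frac{1}{2} \left( \nabla \times \mathbf{A} \right)^2
```

where ψ1 and ψ2 are the two superconducting condensates and A is the shared vector potential.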
The vortex excitations in that case have cores in both components which are co-centered because of the electromagnetic coupling mediated by the shared vector potential. The necessary but not sufficient condition for the occurrence of the type-1.5 regime is ξ1 < √2 λ < ξ2. An additional condition of thermodynamic stability is satisfied for a range of parameters. These vortices have a nonmonotonic interaction: they attract each other at large distances and repel each other at short distances.
It was shown that there is a range of parameters where these vortices are energetically favorable enough to be excitable by an external field, attractive interaction notwithstanding. This results in the formation of a special superconducting phase in low magnetic fields dubbed "Semi-Meissner" state. The vortices, whose density is controlled by applied magnetic flux density, do not form a regular structure. Instead, they should have a tendency to form vortex "droplets" because of the long-range attractive interaction caused by condensate density suppression in the area around the vortex. Such vortex clusters should coexist with the areas of vortex-less two-component Meissner domains. Inside such vortex cluster the component with larger coherence length is suppressed: so that component has appreciable current only at the boundary of the cluster.
In multiband systems
In a two-band superconductor the electrons in different bands are not independently conserved, so the definition of the two superconducting components is different. A two-band superconductor is described by a similar Ginzburg-Landau model in which the two fields are again superconducting condensates; in multiband superconductors these condensates are quite generically coupled directly, not only through the vector potential. The three length scales of the problem are again the London penetration length and two coherence lengths. However, in this case the coherence lengths are associated with "mixed" combinations of the density fields.
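As an assumption based on the standard two-band Ginzburg-Landau literature, the simplest direct interband term added to the free energy above is a Josephson-like coupling:

```latex
F_{\mathrm{interband}} = -\eta \left( \psi_1^{*}\psi_2 + \psi_2^{*}\psi_1 \right)
```

where η is the interband Josephson coupling constant; symmetry also allows mixed-gradient and density-density couplings.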
Microscopic models
A microscopic theory of type-1.5 superconductivity has been reported.
Current experimental research
In 2009, experimental results were reported
claiming that magnesium diboride may fall into this new class of superconductivity. The term type-1.5 superconductor was coined for this state. Further experimental data backing this conclusion was reported subsequently. More recent theoretical works show that type-1.5 behavior may be a more general phenomenon, because it does not require a material with two truly superconducting bands, but can also arise from even a very small interband proximity effect and is robust in the presence of various interband couplings such as interband Josephson coupling.
In 2014, an experimental study suggested that Sr2RuO4 is a type-1.5 superconductor.
Non-technical explanation
Type-I and type-II superconductors feature dramatically different charge flow patterns. Type-I superconductors have two state-defining properties: The lack of electric resistance and the fact that they do not allow an external magnetic field to pass through them. When a magnetic field is applied to these materials, superconducting electrons produce a strong current on the surface, which in turn produces a magnetic field in the opposite direction to cancel the interior magnetic field, similar to how typical conductors cancel interior electric fields with surface charge distributions. An externally applied magnetic field of sufficiently low strength is cancelled in the interior of a type-I superconductor by the field produced by the surface current. In type-II superconducting materials, however, a complicated flow of superconducting electrons can form deep in the interior of the material. In a type-II material, magnetic fields can penetrate into the interior, carried inside by vortices that form an Abrikosov vortex lattice.
In type-1.5 superconductors, there are at least two superconducting components. In such materials, the external magnetic field can produce clusters of tightly packed vortex droplets because in such materials vortices should attract each other at large distances and repel at short length scales. Since the attraction originates in vortex core's overlaps in one of the superconducting components, this component will be depleted in the vortex cluster. Thus a vortex cluster will represent two competing types of superflow. One component will form vortices bunched together while the second component will produce supercurrent flowing on the surface of vortex clusters in a way similar to how electrons flow on the exterior of type-I superconductors. These vortex clusters are separated by "voids," with no vortices, no currents and no magnetic field.
Animations
Movies from numerical simulations of the Semi-Meissner state, where Meissner domains coexist with clusters in which vortex droplets form in one superconducting component and macroscopic normal domains in the other.
References
External links
Animations from numerical calculations of vortex cluster formation are available online.
Superconductivity | Type-1.5 superconductor | [
"Physics",
"Materials_science",
"Engineering"
] | 1,985 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
26,342,395 | https://en.wikipedia.org/wiki/Ferrouranium | Ferrouranium, also called ferro-uranium, is a ferroalloy, an alloy of iron and uranium, after World War II usually depleted uranium.
Composition and properties
The alloy contains about 35–50% uranium and 1.5–4.0% carbon. At least two intermetallic compounds of iron and uranium have been identified: U6Fe and UFe2. Small amounts of uranium can drastically lower the melting point of iron and vice versa. UFe2 reportedly melts at 1230 °C and U6Fe at 805 °C; a mixture of these two can have a melting point as low as 725 °C, and a mixture of iron and UFe2 can have a melting point of 1055 °C. As ferrouranium readily dissolves in mineral acids, its chemical analysis is not problematic.
Use
The first uses of ferrouranium date back to 1897, when the French government attempted to use it for guns. Ferrouranium is used as a deoxidizer (more powerful than ferrovanadium), for denitrogenizing steel, for forming carbides, and as an alloying element. In ferrous alloys, uranium increases the elastic limit and the tensile strength. In high speed steels, it has been used to increase toughness and strength in amounts between 0.05 and 5%. Uranium-alloyed steels can be used at very low temperatures; nickel-uranium alloys are resistant to even very aggressive chemicals, including aqua regia.
Economics
The alloys did not prove to be commercially successful in the long run. However, during World War I and afterwards, uranium-doped steels were used for tools; large amounts of ferrouranium were produced between 1914 and 1916.
References
Ferroalloys
Deoxidizers
Uranium compounds
Iron compounds | Ferrouranium | [
"Chemistry",
"Materials_science"
] | 363 | [
"Deoxidizers",
"Metallurgy"
] |
26,342,640 | https://en.wikipedia.org/wiki/Apical%20constriction | In morphogenesis, apical constriction is the process in which contraction of the apical side of a cell causes the cell to take on a wedged shape. Generally, this shape change is coordinated across many cells of an epithelial layer, generating forces that can bend or fold the cell sheet.
Morphogenetic role
Apical constriction plays a central role in important morphogenetic events in both invertebrates and vertebrates. It is typically the first step in any invagination process and is also important in folding tissues at specified hingepoints.
During gastrulation in both invertebrates and vertebrates, apical constriction of a ring of cells leads to blastopore formation. These cells are known as bottle cells, for their eventual shape. Because all of the cells constrict on the apical side, the epithelial sheet bends convexly on the basal side.
In vertebrates, apical constriction plays a role in a range of other morphogenetic processes such as neurulation, placode formation, and primitive streak formation.
Mechanism
Apical constriction occurs primarily through the contraction of cytoskeletal elements. The specific mechanism depends on the species, the cell type, and the morphogenetic movement. Model organisms that have been studied include the frog Xenopus, and the fly Drosophila.
Xenopus
During Xenopus gastrulation, bottle cells are located in the dorsal marginal zone and apically constrict inwards to initiate involution of the blastopore. In these cells, apical constriction occurs when actomyosin contractility folds the cell membrane to reduce the apical surface area. Endocytosis of the membrane at the apical side further reduces surface area. Active trafficking of these endocytosed vesicles along microtubule tracks is also believed to be important, since the depolymerization (but not stabilization) of microtubules reduces the extent of apical constriction.
Although apical constriction is always observed, it is not necessary for gastrulation, indicating that there are other morphogenetic forces working in parallel. Researchers have shown that the removal of bottle cells does not inhibit gastrulation, but simply makes it less efficient. Bottle cell removal does, however, result in deformed embryos.
Neural tube cells in Xenopus apically constrict during the initial invagination as well as during hingepoint folding. Here, the mechanism depends upon the protein Shroom3, which is sufficient to drive apical constriction. Because Shroom3 is an actin-binding protein and accumulates on the apical side, the most likely mechanism is that Shroom3 aggregates the actin meshwork, generating a squeezing force. Ectopic Shroom3 has been shown to be sufficient to induce apical constriction, but only in cells with apico-basal polarity.
Drosophila
The molecular picture of apical constriction is most complete for Drosophila. During Drosophila gastrulation, apical constriction of midline cells initiates invagination to create the ventral furrow. Like in Xenopus, actomyosin contractility plays a major role in constricting the apical side of the cell. The constricting cells have an actin meshwork directly beneath the apical membrane as well as circumferential actin belts lining the adherens junctions between cells. Pulsed contractions of the actin meshwork are believed to be primarily responsible for reducing the apical surface area.
In Drosophila, researchers have also pinpointed the molecules responsible for coordinating apical constriction in time. Protein folded gastrulation (Fog), a secreted protein and Concertina, a G alpha protein, are members of the same pathway that ensure that apical constriction is initiated in the right cells at the right time. The transmembrane protein T48 is part of a redundant pathway that is also needed for coordination of apical constriction. Both pathways must be disrupted in order to completely block ventral furrow formation. Both pathways also regulate the localization of RhoGEF2, a member of the Rho family GTPases, which are known to regulate actin dynamics.
References
External links
http://worms.zoology.wisc.edu/urchins/SUgast_primary4.html
http://www.sdbonline.org/fly/newgene/foldgs1.htm
Cell biology | Apical constriction | [
"Biology"
] | 950 | [
"Cell biology"
] |
26,343,111 | https://en.wikipedia.org/wiki/Sympathetic%20detonation | A sympathetic detonation (SD, or SYDET), also called flash over or secondary/secondaries (explosion), is a detonation, usually unintended, of an explosive charge by a nearby explosion.
Definition
A sympathetic detonation is caused by a shock wave, or impact of primary or secondary blast fragments.
The initiating explosive is called the donor explosive, the initiated one is known as the receptor explosive. In case of a chain detonation, a receptor explosive can become a donor one.
The shock sensitivity, also called gap sensitivity, which influences the susceptibility to sympathetic detonations, can be measured by gap tests.
If detonators with primary explosives are used, the shock wave of the initiating blast may set off the detonator and the attached charge. However even relatively insensitive explosives can be set off if their shock sensitivity is sufficient. Depending on the location, the shock wave can be transported by air, ground, or water. The process is probabilistic, a radius with 50% probability of sympathetic detonation often being used for quantifying the distances involved.
Sympathetic detonation presents problems in storage and transport of explosives and ordnance. Sufficient spacing between adjacent stacks of explosive materials has to be maintained. In case of an accidental detonation of one charge, other ones in the same container or dump can be detonated as well, but the explosion should not spread to other storage units. Special containers attenuating the shock wave can be used to prevent the sympathetic detonations; epoxy-bonded pumice liners were successfully tested. Blow-off panels may be used in structures, e.g. tank ammunition compartments, to channel the explosion overpressure in a desired direction to prevent a catastrophic failure.
Other factors causing unintended detonations are e.g. flame spread, heat radiation, and impact of fragmentation.
A related term is cooking off, setting off an explosive by subjecting it to sustained heat of e.g. a fire or a hot gun barrel. A cooked-off explosive may cause sympathetic detonation of adjacent explosives.
Military
Sympathetic detonations may occur in munitions stored in e.g. vehicles, ships (called a Magazine Explosion), gun mounts, or ammunition depot, by a sufficiently close explosion of a projectile or a bomb. Such detonations after receiving a hit have caused many catastrophic losses of vehicles.
To prevent sympathetic detonations, minimal distances (specific for a given type of the mine) have to be maintained between mines when laying a minefield.
Spallation of materials after an impact on the opposite side may create fragments capable of causing sympathetic detonations of stored explosives on the opposite side of an armour plate or a concrete wall. Transfer of the shock wave through the wall or armour may also be possible cause of a sympathetic detonation.
Class 1.1 solid rocket fuels are susceptible to sympathetic detonation. Conversely, class 1.3 fuels can be ignited by a nearby fire or explosion, but are generally not susceptible to sympathetic detonation. Class 1.1 fuels, however, tend to have slightly higher specific impulses, and therefore are used in those military applications where weight and/or size is at a premium, e.g. on ballistic and cruise missile submarines.
Sympathetic detonation can be used for the destruction of unexploded ordnance, improvised explosive devices, land mines, or naval mines by an adjacent bulk charge.
Special insensitive explosives, such as TATB, are used in certain military applications to avoid sympathetic detonations.
Examples
During the Attack of Pearl Harbor, the USS Arizona was struck with an armor-piercing bomb which penetrated the upper deck and stopped inside the forward magazine. The bomb triggered an explosion which was powerful enough to cut the Arizona in half and is considered a sympathetic detonation as there was an apparent delay between the detonation of the bomb and the contents of the forward magazine.
Sympathetic detonation killed 320 sailors and injured 390 others in the Port Chicago Disaster of July 17, 1944 at the Port Chicago Naval Magazine in Port Chicago, California.
During the 1967 USS Forrestal fire, eight old Composition B based iron bombs cooked off. The last one caused a sympathetic detonation of a ninth bomb, a more modern and less cookoff-susceptible Composition H6 based one.
The Russian submarine Kursk explosion was probably caused by a sympathetic explosion of several torpedo warheads. A single dummy torpedo VA-111 Shkval exploded; 135 seconds later a number of warheads simultaneously exploded and sank the submarine.
Multiple incidents have been recorded in the more recent GWoT where airstrikes have set off explosives or ammunition caches in insurgent positions.
Civilian
In rock blasting, sympathetic detonations occur when the blastholes are sufficiently close to each other, usually 24in or less, and especially in rocks that poorly attenuate the shock energy. Ground water in open channels facilitates sympathetic detonation as well. Blasthole spacing of 36in or more is suggested. However, in some ditch blasting cases sympathetic detonations are exploited purposefully. Nitroglycerine-based explosives are especially susceptible. Picric acid is sensitive as well. Water gel explosives, slurry explosives, and emulsion explosives tend to be insensitive to sympathetic detonations. For most industrial explosives, the maximum distances for possible sympathetic detonations are between 2–8 times of the charge diameter. Uncontrolled sympathetic detonations may cause excessive ground vibrations and/or flying rocks.
The spread of shock waves can be hindered by placing relief holes – drilled holes without explosive charges – between the blastholes.
The opposite phenomenon is dynamic desensitization. Some explosives, e.g. ANFO, show reduced sensitivity under pressure. A transient pressure wave from a nearby detonation may compress the explosive sufficiently to make its initiation fail. This can be prevented by introducing sufficient delays into the firing sequence.
A sympathetic detonation during mine blasting may influence the seismic signature of the blast, by boosting the P-wave amplitude without significantly amplifying the surface wave.
See also
Cooking off
References
Explosions
Explosives engineering | Sympathetic detonation | [
"Chemistry",
"Engineering"
] | 1,266 | [
"Explosives engineering",
"Explosions"
] |
26,344,581 | https://en.wikipedia.org/wiki/Iron%20group | In chemistry and physics, the iron group refers to elements that are in some way related to iron; mostly in period (row) 4 of the periodic table. The term has different meanings in different contexts.
In chemistry, the term is largely obsolete, but it often means iron, cobalt, and nickel, also called the iron triad; or, sometimes, other elements that resemble iron in some chemical aspects.
In astrophysics and nuclear physics, the term is still quite common, and it typically means those three plus chromium and manganese—five elements that are exceptionally abundant, both on Earth and elsewhere in the universe, compared to their neighbors in the periodic table. Titanium and vanadium are also produced in Type Ia supernovae.
General chemistry
In chemistry, "iron group" used to refer to iron and the next two elements in the periodic table, namely cobalt and nickel. These three comprised the "iron triad". They are the top elements of groups 8, 9, and 10 of the periodic table; or the top row of "group VIII" in the old (pre-1990) IUPAC system, or of "group VIIIB" in the CAS system. These three metals (and the three of the platinum group, immediately below them) were set aside from the other elements because they have obvious similarities in their chemistry, but are not obviously related to any of the other groups. The iron group and its alloys exhibit ferromagnetism.
The similarities in chemistry were noted as one of Döbereiner's triads and by Adolph Strecker in 1859. Indeed, Newlands' "octaves" (1865) were harshly criticized for separating iron from cobalt and nickel. Mendeleev stressed that groups of "chemically analogous elements" could have similar atomic weights as well as atomic weights which increase by equal increments, both in his original 1869 paper and his 1889 Faraday Lecture.
Analytical chemistry
In the traditional methods of qualitative inorganic analysis, the iron group consists of those cations which
have soluble chlorides;
are not precipitated as sulfides by hydrogen sulfide in acidic conditions; and
are precipitated as hydroxides at around pH 10 (or less) in the presence of ammonia.
The main cations in the iron group are iron itself (Fe2+ and Fe3+), aluminium (Al3+) and chromium (Cr3+). If manganese is present in the sample, a small amount of hydrated manganese dioxide is often precipitated with the iron group hydroxides. Less common cations which are precipitated with the iron group include beryllium, titanium, zirconium, vanadium, uranium, thorium and cerium.
Astrophysics
The iron group in astrophysics is the group of elements from chromium to nickel, which are substantially more abundant in the universe than those that come after them – or immediately before them – in order of atomic number. The study of the abundances of iron group elements relative to other elements in stars and supernovae allows the refinement of models of stellar evolution.
The explanation for this relative abundance can be found in the process of nucleosynthesis in certain stars, specifically those of about 8–11 Solar masses. At the end of their lives, once other fuels have been exhausted, such stars can enter a brief phase of "silicon burning". This involves the sequential addition of helium nuclei (an "alpha process") to the heavier elements present in the star, starting from 28Si:
28Si + 4He → 32S
32S + 4He → 36Ar
36Ar + 4He → 40Ca
40Ca + 4He → 44Ti
44Ti + 4He → 48Cr
48Cr + 4He → 52Fe
52Fe + 4He → 56Ni
All of these nuclear reactions are exothermic: the energy that is released partially offsets the gravitational contraction of the star. However, the series ends at 56Ni, as the next reaction in the series,
56Ni + 4He → 60Zn,
is endothermic. With no further source of energy to support itself, the core of the star collapses on itself while the outer regions are blown off in a Type II supernova.
Nickel-56 is unstable with respect to beta decay, and the final stable product of silicon burning is 56Fe.
56Ni → 56Co + β+   (t1/2 = 6.075(10) d)
56Co → 56Fe + β+   (t1/2 = 77.233(27) d)
It is often incorrectly stated that iron-56 is exceptionally common because it is the most stable of all the nuclides. This is not quite true: 62Ni and 58Fe have slightly higher binding energies per nucleon – that is, they are slightly more stable as nuclides. However, there are no rapid nucleosynthetic routes to these nuclides.
In fact, there are several stable nuclides of elements from chromium to nickel around the top of the stability curve, accounting for their relative abundance in the universe. The nuclides which are not on the direct alpha-process pathway are formed by the s-process, the capture of slow neutrons within the star.
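The binding-energy comparison above can be illustrated numerically with the semi-empirical mass formula; the coefficients below are approximate textbook values (an assumption, since fitted values vary between references), so the output only roughly reproduces the measured binding energies per nucleon of these nuclides.

```python
def binding_energy_per_nucleon(A, Z):
    """Semi-empirical mass formula with approximate textbook coefficients (MeV)."""
    a_vol, a_surf, a_coul, a_asym, a_pair = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:        # even-even nuclei gain pairing energy
        delta = a_pair / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:      # odd-odd nuclei lose it
        delta = -a_pair / A ** 0.5
    else:
        delta = 0.0
    binding = (a_vol * A
               - a_surf * A ** (2 / 3)
               - a_coul * Z * (Z - 1) / A ** (1 / 3)
               - a_asym * (A - 2 * Z) ** 2 / A
               + delta)
    return binding / A

for name, A, Z in [("56Fe", 56, 26), ("58Fe", 58, 26), ("62Ni", 62, 28), ("56Ni", 56, 28)]:
    print(f"{name}: {binding_energy_per_nucleon(A, Z):.3f} MeV per nucleon")
```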
See also
S-process
Silicon burning process
Abundance of the chemical elements
Notes and references
Notes
References
Sets of chemical elements
Nucleosynthesis | Iron group | [
"Physics",
"Chemistry"
] | 1,290 | [
"Nuclear fission",
"Astrophysics",
"Nucleosynthesis",
"Nuclear physics",
"Nuclear fusion"
] |
26,346,355 | https://en.wikipedia.org/wiki/Steam%20stripping | Steam stripping is a process used in petroleum refineries and petrochemical plants to remove volatile contaminants, such as hydrocarbons and other volatile organic compounds (VOCs), from wastewater. It typically consists of passing a stream of superheated steam through the wastewater.
This method is effective when the volatile compounds have lower boiling points than water or have limited solubility in water.
References
Petroleum engineering
Petrochemical industry
Industrial emissions control | Steam stripping | [
"Chemistry",
"Engineering"
] | 93 | [
"Industrial emissions control",
"Petroleum engineering",
"Energy engineering",
"Environmental engineering",
"Chemical process engineering",
"Petrochemical industry"
] |
36,196,484 | https://en.wikipedia.org/wiki/C24H23NO | {{DISPLAYTITLE:C24H23NO}}
The molecular formula C24H23NO (molar mass: 341.44 g/mol, exact mass: 341.1780 u) may refer to:
JWH-018, also known as 1-pentyl-3-(1-naphthoyl)indole or AM-678
JWH-148
Molecular formulas | C24H23NO | [
"Physics",
"Chemistry"
] | 88 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
36,196,505 | https://en.wikipedia.org/wiki/C23H21NO | {{DISPLAYTITLE:C23H21NO}}
The molecular formula C23H21NO (molar mass: 327.42 g/mol, exact mass: 327.1623 u) may refer to:
JWH-015
JWH-073
JWH-120
Molecular formulas | C23H21NO | [
"Physics",
"Chemistry"
] | 67 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
36,203,874 | https://en.wikipedia.org/wiki/Unit%20of%20work | A unit of work is a behavioral pattern in software development. Martin Fowler has defined it as everything one does during a business transaction which can affect the database. When the unit of work is finished, it will provide everything that needs to be done to change the database as a result of the work.
A unit of work encapsulates one or more repositories and a list of actions to be performed which are necessary for the successful implementation of a self-contained and consistent data change. A unit of work is also responsible for handling concurrency issues, and can be used for transactions and stability patterns.
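As an illustration of the pattern described above, here is a minimal, framework-free sketch in Python; the class and method names are invented for the example and do not correspond to any particular ORM, and the "database" is simply an in-memory list so the example stays self-contained.

```python
class UnitOfWork:
    """Collects changes made during a business transaction and commits them together."""

    def __init__(self, database):
        self.database = database          # stand-in for a real connection or session
        self.new_objects = []
        self.dirty_objects = []
        self.removed_objects = []

    def register_new(self, obj):
        self.new_objects.append(obj)

    def register_dirty(self, obj):
        if obj not in self.dirty_objects:
            self.dirty_objects.append(obj)

    def register_removed(self, obj):
        self.removed_objects.append(obj)

    def commit(self):
        """Apply every recorded change as one logical transaction."""
        for obj in self.new_objects:
            self.database.append(obj)
        for obj in self.removed_objects:
            self.database.remove(obj)
        # dirty objects are already referenced by the in-memory "database",
        # so nothing further is needed in this toy example
        self.new_objects, self.dirty_objects, self.removed_objects = [], [], []

# Usage: record several changes, then persist them in a single commit.
db = []
uow = UnitOfWork(db)
uow.register_new({"id": 1, "name": "alpha"})
uow.register_new({"id": 2, "name": "beta"})
uow.commit()
print(db)
```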
See also
ACID (atomicity, consistency, isolation, durability), a set of properties of database transactions
Database transaction, a unit of work within a database management system
Equi-join, a type of join where only equal signs are used in the join predicate
Lossless join decomposition, decomposition of a relation such that a natural join of the resulting relations yields back the original relation
References
Software engineering | Unit of work | [
"Technology",
"Engineering"
] | 210 | [
"Software engineering",
"Systems engineering",
"Information technology",
"Computer engineering"
] |
36,204,226 | https://en.wikipedia.org/wiki/Amplitude%20domain%20reflectometry | Many soil moisture measuring instruments are based on the principle of Amplitude Domain Reflectometry (ADR). This method measures the electrical impedance. Electromagnetic waves traveling along Transmission Lines (TL) enter in soil medium whose impedance is different from TL a part of the energy is reflected back to transmitter. The reflected wave interferes with incident wave and produces a standing wave along the TL, this changes the amplitude of wave along the TL. The impedance can be measured from difference in amplitude. The impedance has two components: electrical conductivity and dielectric constant. The effect of conductivity can be minimized by selecting an appropriate frequency.
References
Robinson, D. A., C. S. Campbell, J. W. Hopmans, B. K. Hornbuckle, S. B. Jones, R. Knight, F. Ogden, J. Selker and O. Wendroth (2008), Soil moisture measurement for ecological and hydrological watershed-scale observatories: A review, Vadose Zone Journal, 7(1), 358–389.
Soil science
Impedance measurements | Amplitude domain reflectometry | [
"Physics"
] | 234 | [
"Impedance measurements",
"Physical quantities",
"Electrical resistance and conductance"
] |
36,205,212 | https://en.wikipedia.org/wiki/Markov%E2%80%93Kakutani%20fixed-point%20theorem | In mathematics, the Markov–Kakutani fixed-point theorem, named after Andrey Markov and Shizuo Kakutani, states that a commuting family of continuous affine self-mappings of a compact convex subset in a locally convex topological vector space has a common fixed point. This theorem is a key tool in one of the quickest proofs of amenability of abelian groups.
Statement
Let E be a locally convex topological vector space, with a compact convex subset C.
Let S be a family of continuous mappings of C to itself which commute and are affine, meaning that T(λx + (1 − λ)y) = λT(x) + (1 − λ)T(y) for every T in S, all x, y in C and all λ in [0, 1]. Then the mappings in S share a fixed point.
Proof for a single affine self-mapping
Let T be a continuous affine self-mapping of C.
For x in C define a net in C by
A_n(x) = (1/(n + 1)) (x + T(x) + T²(x) + ... + Tⁿ(x)).
Since C is compact and convex, each A_n(x) lies in C, and there is a convergent subnet:
A_{n_j}(x) → y in C.
To prove that y is a fixed point, it suffices to show that φ(T(y)) = φ(y) for every φ in the dual of E. (The dual separates points by the Hahn-Banach theorem; this is where the assumption of local convexity is used.)
Since C is compact, φ is bounded on C by a positive constant M. On the other hand, because T is affine,
T(A_n(x)) − A_n(x) = (1/(n + 1)) (T^{n+1}(x) − x), so that |φ(T(A_n(x))) − φ(A_n(x))| ≤ 2M/(n + 1).
Taking n = n_j and passing to the limit as j goes to infinity, it follows that
φ(T(y)) − φ(y) = 0.
Hence T(y) = y, since the functionals φ separate the points of E.
Proof of theorem
The set C(T) of fixed points of a single affine mapping T is a non-empty compact convex set by the result for a single mapping. The other mappings in the family S commute with T and so leave C(T) invariant. Applying the result for a single mapping successively, it follows that any finite subset of S has a non-empty fixed point set, given as the intersection of the compact convex sets C(T) as T ranges over the subset. From the compactness of C it follows that the set
C(S) = ∩_{T in S} C(T)
is non-empty (and compact and convex).
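As a numerical illustration of the single-map argument above (not part of the original proof), the following sketch forms Cesàro averages of iterates of a continuous affine self-map of the unit square in R² and checks that the limit is approximately fixed; the particular matrix A and offset b are arbitrary example values chosen so that the map sends the square into itself.

```python
import numpy as np

# An affine self-map T(x) = A x + b of the unit square (arbitrary example
# values chosen so that T maps [0,1]^2 into itself).
A = np.array([[0.3, 0.2],
              [0.1, 0.4]])
b = np.array([0.2, 0.1])

def T(x):
    return A @ x + b

def cesaro_average(x0, N):
    # x(N) = (1/(N+1)) * sum_{n=0..N} T^n(x0), as in the proof sketch above.
    total = np.zeros_like(x0, dtype=float)
    x = np.array(x0, dtype=float)
    for _ in range(N + 1):
        total += x
        x = T(x)
    return total / (N + 1)

y = cesaro_average(np.array([0.9, 0.05]), N=20000)
print("approximate fixed point:", y)
print("residual |T(y) - y| =", np.linalg.norm(T(y) - y))
```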
Citations
References
Theorems in functional analysis
Topological vector spaces
Fixed-point theorems | Markov–Kakutani fixed-point theorem | [
"Mathematics"
] | 377 | [
"Theorems in mathematical analysis",
"Vector spaces",
"Fixed-point theorems",
"Topological vector spaces",
"Space (mathematics)",
"Theorems in topology",
"Theorems in functional analysis"
] |
29,594,218 | https://en.wikipedia.org/wiki/Transgenic%20Research | Transgenic Research, international in scope, is a bimonthly peer-reviewed scientific journal published by Springer. The co-editors-in-chief are Johannes Buyel and Simon Lillico.
Scope
Transgenic Research focuses on transgenic and genome edited higher organisms. Manuscripts emphasizing biotechnological applications are strongly encouraged. Intellectual property, ethical issues, societal impact and regulatory aspects also fall within the scope of the journal. Transgenic Research aims to bridge the gap between fundamental and applied science in molecular biology and biotechnology for the plant and animal academic and associated industry communities.
The journal is associated with the International Society for Transgenic Technologies (ISTT).
Transgenic Research publishes
Research
Should describe novel research involving the production, characterization and application of genetically altered animals or plants. Reports of transient results may be considered if they have a clear focus or application in permanently modified multicellular organisms.
Reviews
Should critically summarize the current state-of-the-art of the subject in a dispassionate way. Authors are requested to contact a board member before submission. Reviews should not be descriptive; rather they should present the most up-to-date information on the subject in a dispassionate and critical way. Perspective Reviews which can address new or controversial aspects are encouraged.
Comment
Similar to reviews, this article type should refer to one or several recently published articles or topics currently under debate in the respective scientific community. The editorial board should be contacted before submission as described for reviews.
Brief Report
Should be short reports describing substantial developments in experiments involving transgenic or genome-edited multi-cellular organisms that are highly relevant for the research community and require a fast dissemination.
Methodology
Should describe in detail the development of new methods to generate, analyze or select transgenic or genome-edited multicellular organisms in a way that is advantageous compared to the current state of the art, including explicit benchmarking against existing gold-standards where appropriate.
Protocol
Should provide an in-depth, step-by-step description of relevant methods that allow successful reproduction in other laboratories without the need for additional details or information. Providing a brief description of typical results as well as a precise trouble-shooting guide is expected.
Abstracting and indexing
This journal is listed in the following databases:
Thomson Reuters databases:
Biochemistry and Biophysics Citation Index
BIOSIS – Biological Abstracts
Biotechnology Citation Index
Current Contents/ Life Sciences
Journal Citation Reports/Science Edition
Science Citation Index
SciSearch
CABI Direct
CAB Abstracts
CAB International
Global Health
EBSCO
Environment Index
Elsevier Biobase
Current Awareness in Biological Sciences (CABS)
EMBASE
EMBiology
Chemical Abstracts Service (CAS) – CASSI
CSA/Proquest
Derwent Biotechnology Resource
Gale
Google Scholar
IBIDS
OCLC
PASCAL
PubMed/MEDLINE
Scopus
Summon by Serial Solutions
VINITI Database RAS
References
External links
International Society for Transgenic Technologies
Molecular and cellular biology journals
Springer Science+Business Media academic journals
Academic journals established in 1991
English-language journals
Bimonthly journals
Biotechnology journals | Transgenic Research | [
"Chemistry",
"Biology"
] | 604 | [
"Biotechnology literature",
"Molecular and cellular biology journals",
"Biotechnology journals",
"Molecular biology"
] |
33,574,047 | https://en.wikipedia.org/wiki/ANSI/TIA-568 | ANSI/TIA-568 is a technical standard for commercial building cabling for telecommunications products and services. The title of the standard is Commercial Building Telecommunications Cabling Standard and is published by the Telecommunications Industry Association (TIA), a body accredited by the American National Standards Institute (ANSI).
The current revision of the standard is ANSI/TIA-568-E, published in 2020, which replaced ANSI/TIA-568-D of 2015, revision C of 2009, revision B of 2001, revision A of 1995, and the initial issue of 1991, all of which are now obsolete.
Perhaps the best-known features of ANSI/TIA-568 are the pin and pair assignments for eight-conductor 100-ohm balanced twisted pair cabling. These assignments are named T568A and T568B.
History
ANSI/TIA-568 was developed through the efforts of more than 60 contributing organizations including manufacturers, end-users, and consultants. Work on the standard began with the Electronic Industries Alliance (EIA), to define standards for telecommunications cabling systems. EIA agreed to develop a set of standards, and formed the TR-42 committee, with nine subcommittees to perform the work. The work continues to be maintained by TR-42 within the TIA. EIA no longer exists, hence EIA has been removed from the name.
The first version of the standard, EIA/TIA-568, was released in 1991. The standard was updated to revision A in 1995. The demands placed upon commercial wiring systems increased dramatically over this period due to the adoption of personal computers and data communication networks and advances in those technologies. The development of high-performance twisted pair cabling and the popularization of fiber optic cables also drove significant change in the standards. These changes were first released in a revision C in 2009 which has subsequently been replaced by revision D (named ANSI/TIA-568-D).
Goals
ANSI/TIA-568 defines structured cabling system standards for commercial buildings, and between buildings in campus environments. The bulk of the standards define cabling types, distances, connectors, cable system architectures, cable termination standards and performance characteristics, cable installation requirements and methods of testing installed cable. The main standard, ANSI/TIA-568.0-D defines general requirements, while ANSI/TIA-568-C.2 focuses on components of balanced twisted-pair cable systems. ANSI/TIA-568.3-D addresses components of fiber optic cable systems, and ANSI/TIA-568-C.4, addressed coaxial cabling components.
The intent of these standards is to provide recommended practices for the design and installation of cabling systems that will support a wide variety of existing and future services. Developers hope the standards will provide a lifespan for commercial cabling systems in excess of ten years. This effort has been largely successful, as evidenced by the definition of Category 5 cabling in 1991, a cabling standard that (mostly) satisfied cabling requirements for 1000BASE-T, released in 1999. Thus, the standardization process can reasonably be said to have provided at least a nine-year lifespan for premises cabling, and arguably a longer one.
All these documents accompany related standards that define commercial pathways and spaces (TIA-569-C-1, February 2013), residential cabling (ANSI/TIA-570-C, August 2012), administration standards (ANSI/TIA-606-B, December 2015), grounding and bonding (TIA-607-C, November 2015), and outside plant cabling (TIA-758-B, April 2012).
Cable categories
The standard defines categories of shielded and unshielded twisted pair cable systems, with different levels of performance in signal bandwidth, insertion loss, and cross-talk. Generally increasing category numbers correspond with a cable system suitable for higher rates of data transmission. Category 3 cable was suitable for telephone circuits and data rates up to 16 million bits per second. Category 5 cable, with more restrictions on attenuation and cross talk, has a bandwidth of 100 MHz. The 1995 edition of the standard defined Categories 3, 4, and 5. Categories 1 and 2 were excluded from the standard since these categories were only used for voice circuits, not for data. The current revision includes Category 5e (100 MHz), 6 (250 MHz), 6A (500 MHz), and 8 (2,000 MHz). Categories 7 and 7A were not officially recognized by TIA and were generally only used outside the United States. Category 8 was published with ANSI/TIA‑568‑C.2‑1 (June 2016) to meet the performance specification intended by Category 7.
Structured cable system topologies
ANSI/TIA-568-D defines a hierarchical cable system architecture, in which a main cross-connect (MCC) is connected via a star topology across backbone cabling to intermediate cross-connects (ICCs) and horizontal cross-connects (HCCs). Telecommunications design traditions utilized a similar topology. Many people refer to cross-connects by their telecommunications names: distribution frames (with the various hierarchies called main distribution frames (MDFs), intermediate distribution frames (IDFs) and wiring closets). Backbone cabling is also used to interconnect entrance facilities (such as telco demarcation points) to the main cross-connect.
Horizontal cross-connects provide a point for the consolidation of all horizontal cabling, which extends in a star topology to individual work areas such as cubicles and offices. Under TIA/EIA-568-B, maximum allowable horizontal cable distance is 90 meters of installed twisted-pair cabling, with 100 meters of maximum total length including patch cords. No patch cord should be longer than 5 meters. Optional consolidation points are allowable in horizontal cables, often appropriate for open-plan office layouts where consolidation points or media converters may connect cables to several desks or via partitions.
At the work area, equipment is connected by patch cords to horizontal cabling terminated at jack points.
TIA/EIA-568 also defines characteristics and cabling requirements for entrance facilities, equipment rooms and telecommunications rooms.
T568A and T568B termination
Perhaps the best-known and most discussed feature of ANSI/TIA-568 is the definition of the pin-to-pair assignments, or pinout, between the pins in a connector (a plug or a socket) and the wires in a cable. Pinouts are critical because cables do not function if the pinouts at their two ends are not correctly matched.
The standard specifies how to connect eight-conductor 100-ohm balanced twisted-pair cabling, such as Category 5 cable, to 8P8C modular connectors (often referred to as RJ45 connectors). The standard defines two alternative pinouts: T568A and T568B.
ANSI/TIA-568 recommends the T568A pinout for horizontal cables. This pinout is compatible with the 1-pair and 2-pair Universal Service Order Codes (USOC) pinouts. The U.S. Government requires it in federal contracts. The standard also allows, only in certain circumstances, the T568B pinout "if necessary to accommodate certain 8-pin cabling systems", i.e. when, and only when, adding to an existing installation that used the T568B wiring pattern before it was defined, being those that pre-dated ANSI/TIA-568 and used the previous AT&T 258A (Systimax) standard. In the 1990s, when the original TIA/EIA-568 was published, the most widely installed wiring pattern in UTP cabling infrastructure was that of AT&T 258A (Systimax), hence the inclusion of the same wiring pattern (as T568B) as a secondary option for use in such installations. Many organizations still use T568B out of inertia.
The colors of the wire pairs in the cable, in order, are blue (for pair 1), orange, green, and brown (for pair 4). Each pair consists of one conductor of solid color and a second conductor, which is white with a stripe of the other color. The difference between the T568A and T568B pinouts is that pairs 2 and 3 (orange and green) are exchanged.
Wiring
See modular connector for numbering of the pins.
Both T568A and T568B configurations wire the pins "straight through," i.e., pins 1 through 8 on one end are connected to pins 1 through 8 on the other end. Also, the same sets of pins are paired in both configurations: pins 1 and 2 form a pair, as do 3 and 6, 4 and 5, and 7 and 8. One can use cables wired according to either configuration in the same installation without significant problems if the connections are the same on both ends.
A cable terminated according to T568A on one end and T568B on the other is a crossover cable when used with the earlier twisted-pair Ethernet standards that use only two of the pairs because the pairs used happen to be pairs 2 and 3, the same pairs on which T568A and T568B differ. Crossover cables are occasionally needed for 10BASE-T and 100BASE-TX Ethernet.
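A small sketch of the two pin-to-colour orders and a straight-through/crossover check follows; the colour strings are simply the wire colours described above, and the helper function name is illustrative.

```python
# Pin 1..8 wire colours for the two pinouts ("white/X" is the striped conductor).
T568A = ["white/green", "green", "white/orange", "blue",
         "white/blue", "orange", "white/brown", "brown"]
T568B = ["white/orange", "orange", "white/green", "blue",
         "white/blue", "green", "white/brown", "brown"]

def cable_type(end1, end2):
    # Same pinout on both ends -> straight-through; A on one end and B on the
    # other exchanges pairs 2 and 3 (pins 1, 2 <-> 3, 6) -> crossover.
    return "straight-through" if end1 == end2 else "crossover"

# Pins on which the two pinouts differ (should be pins 1, 2, 3 and 6).
diff = [pin for pin, (a, b) in enumerate(zip(T568A, T568B), start=1) if a != b]
print("pins that differ between T568A and T568B:", diff)
print("A-to-B cable is a", cable_type(T568A, T568B))
print("A-to-A cable is a", cable_type(T568A, T568A))
```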
Swapping two wires between different pairs causes crosstalk, defeating one of the purposes of twisting wires in pairs.
Use for T1 connectivity
In Digital Signal 1 (T1) service, pairs 1 and 3 (T568A) are used, and the USOC-8 jack is wired according to the RJ-48C specification. The termination jack is often wired according to the RJ-48X specification, providing a transmit-to-receive loopback when the plug is withdrawn.
Vendor cables are often wired with tip and ring reversed—i.e., pins 1 and 2 or 4 and 5 reversed. This does not affect the quality of the T1 signal, which is fully differential and uses the alternate mark inversion (AMI) signaling scheme.
Backward compatibility
Conventional plain old telephone service with up to four lines can use six-position (6P) and eight-position (8P) plugs and jacks, with line 1 on the center pins, line 2 straddling the center pair, and subsequent pairs proceeding outward; this pattern is often called USOC. One-, two-, and three-line service can use six-position jacks (respectively RJ11, RJ14, and RJ25), and four-line service eight-position jacks (RJ61).
Because pair 1 is on the center pins (4 and 5) of the 8P8C connector in USOC and both T568A and T568B, a telephone will connect to line 1 of both T568A and T568B as well as all of the above registered jacks, but if a second line (pins 3 and 6) is used, it connects to line 2 (pair 2) of USOC and T568A jacks, but to pair 3 of T568B jacks. This makes T568B potentially confusing in telephone applications.
Because of different wire pairings of the outer pins, USOC plugs cannot connect to pair 3 or 4 of T568A, or pair 2 or 4 of T568B, without splitting pairs. This means that either the lines do not connect at all, or the connection likely suffers unacceptable levels of hum, crosstalk, and noise.
Optical fiber
To maintain polarity for duplex connectors, the cabling shall be installed with alternating Position A at one end and Position B at the other.
Theory
The original idea in wiring modular connectors, as seen in the Bell System registered jacks, was that the first pair would go in the center positions, the next pair on the next-innermost ones, and so on. Also, signal shielding would be optimized by alternating the live and earthy pins of each pair. The TIA-568 terminations diverge from this concept by placing a pair on pins 1 and 2 and one on 7 and 8 because, on the eight-position connector, the original arrangement of conductors would separate the outer pairs substantially, impairing balanced line performance too much to meet the electrical requirements of high-speed LAN protocols.
Standards
ANSI/TIA-568.0 Generic Telecommunications Cabling for Customer Premises
ANSI/TIA-568.1 Commercial Building Telecommunications Infrastructure Standard
ANSI/TIA-568.2 Balanced Twisted-Pair Telecommunications Cabling and Components Standard
ANSI/TIA-568.3 Optical Fiber Cabling And Components Standard
ANSI/TIA-568.4 Broadband Coaxial Cabling and Components Standard
ANSI/TIA-568.5 Balanced Single Twisted-pair Telecommunications Cabling and Components Standard
See also
Ethernet over twisted pair
ISO/IEC 11801, similar international standard for network cables
References
Sources
TR-42.7 Copper Cabling Systems – February 2021
External links
CAT 5 / 5e / 6 / 6A / 7 Cable - RJ-45 Connector, ProAV.de
"UTP Cable Termination Standards 568A Vs 568B [sic]" (2006)
Standard Informant - Your Guide to Network Cabling and Data Center Standards
EIA standards
Ethernet
Networking standards
Signal cables
Telecommunications standards | ANSI/TIA-568 | [
"Technology",
"Engineering"
] | 2,788 | [
"Networking standards",
"Computer standards",
"Computer networks engineering"
] |
41,820,283 | https://en.wikipedia.org/wiki/Belt%20manlift | A belt manlift or manlift is a device for moving passengers between floors of a building. It is a simple belt with steps or platforms and handholds rather than an elevator with cars. Its design is similar to that of a paternoster lift. The belt is a loop that moves in a single direction, so one can go up or down by using the opposite sides of the loop. The belt moves continuously, so one can simply get on when a step passes and step off when passing any desired floor without having to call and wait for a car to arrive.
Although not technically a paternoster, it has many of the same design features and hazards associated with its use. There are several companies still making belt manlifts. They are used in grain elevators and parking garages where space is limited. In Canada, manlifts were retrofitted in the early 1990s with safety features after fatal accidents. Safety concerns have led to a decline in their use.
In popular culture
The opening sequence of the 1978 film The Driver features Ryan O'Neal ascending through a parking garage on a manlift. The film Our Man Flint (1966) features an operational manlift within the volcanic island complex, shot at the LADWP Scattergood Generating Station. Hitman Bruce Willis dispatches a bookie from a manlift while ascending through a parking garage in the opening scene of Lucky Number Slevin.
In the parking garage scene of Ferris Bueller’s Day Off, a yellow operational manlift can be seen in the background just as the three teenagers pull in, before they get out of the Ferrari to pass the car keys to the valet.
See also
Man engine
References
External links
Endless Belt Manlifts
Elevators
Vertical transport devices | Belt manlift | [
"Physics",
"Technology",
"Engineering"
] | 354 | [
"Transport systems",
"Building engineering",
"Transport stubs",
"Physical systems",
"Transport",
"Vertical transport devices",
"Elevators"
] |
41,824,003 | https://en.wikipedia.org/wiki/MIMO-OFDM | Multiple-input, multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) is the dominant air interface for 4G and 5G broadband wireless communications. It combines multiple-input, multiple-output (MIMO) technology, which multiplies capacity by transmitting different signals over multiple antennas, and orthogonal frequency-division multiplexing (OFDM), which divides a radio channel into a large number of closely spaced subchannels to provide more reliable communications at high speeds. Research conducted during the mid-1990s showed that while MIMO can be used with other popular air interfaces such as time-division multiple access (TDMA) and code-division multiple access (CDMA), the combination of MIMO and OFDM is most practical at higher data rates.
MIMO-OFDM is the foundation for most advanced wireless local area network (wireless LAN) and mobile broadband network standards because it achieves the greatest spectral efficiency and, therefore, delivers the highest capacity and data throughput. Greg Raleigh invented MIMO in 1996 when he showed that different data streams could be transmitted at the same time on the same frequency by taking advantage of the fact that signals transmitted through space bounce off objects (such as the ground) and take multiple paths to the receiver. That is, by using multiple antennas and precoding the data, different data streams could be sent over different paths. Raleigh suggested and later proved that the processing required by MIMO at higher speeds would be most manageable using OFDM modulation, because OFDM converts a high-speed data channel into a number of parallel lower-speed channels.
Operation
In modern usage, the term "MIMO" indicates more than just the presence of multiple transmit antennas (multiple input) and multiple receive antennas (multiple output). While multiple transmit antennas can be used for beamforming, and multiple receive antennas can be used for diversity, the word "MIMO" refers to the simultaneous transmission of multiple signals (spatial multiplexing) to multiply spectral efficiency (capacity).
Traditionally, radio engineers treated natural multipath propagation as an impairment to be mitigated. MIMO is the first radio technology that treats multipath propagation as a phenomenon to be exploited. MIMO multiplies the capacity of a radio link by transmitting multiple signals over multiple, co-located antennas. This is accomplished without the need for additional power or bandwidth. Space–time codes are employed to ensure that the signals transmitted over the different antennas are orthogonal to each other, making it easier for the receiver to distinguish one from another. Even when there is line of sight access between two stations, dual antenna polarization may be used to ensure that there is more than one robust path.
OFDM enables reliable broadband communications by distributing user data across a number of closely spaced, narrowband subchannels. This arrangement makes it possible to eliminate the biggest obstacle to reliable broadband communications, intersymbol interference (ISI). ISI occurs when the overlap between consecutive symbols is large compared to the symbols’ duration. Normally, high data rates require shorter duration symbols, increasing the risk of ISI. By dividing a high-rate data stream into numerous low-rate data streams, OFDM enables longer duration symbols. A cyclic prefix (CP) may be inserted to create a (time) guard interval that prevents ISI entirely. If the guard interval is longer than the delay spread (the difference in delays experienced by symbols transmitted over the channel), then there will be no overlap between adjacent symbols and consequently no intersymbol interference. Though the CP slightly reduces spectral capacity by consuming a small percentage of the available bandwidth, the elimination of ISI makes it an exceedingly worthwhile tradeoff.
A key advantage of OFDM is that fast Fourier transforms (FFTs) may be used to simplify implementation. Fourier transforms convert signals back and forth between the time domain and frequency domain. Consequently, Fourier transforms can exploit the fact that any complex waveform may be decomposed into a series of simple sinusoids. In signal processing applications, discrete Fourier transforms (DFTs) are used to operate on real-time signal samples. DFTs may be applied to composite OFDM signals, avoiding the need for the banks of oscillators and demodulators associated with individual subcarriers. Fast Fourier transforms are numerical algorithms used by computers to perform DFT calculations.
FFTs also enable OFDM to make efficient use of bandwidth. The subchannels must be spaced apart in frequency just enough to ensure that their time-domain waveforms are orthogonal to each other. In practice, this means that the subchannels are allowed to partially overlap in frequency.
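The following is a minimal, illustrative sketch (not a standard-compliant transmitter) of how an OFDM symbol can be built with an inverse FFT and a cyclic prefix, passed through a short multipath channel, and recovered with an FFT plus per-subcarrier equalization; the subcarrier count, CP length, and channel taps are assumed example values, and NumPy is required.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub = 64          # number of subcarriers (assumed)
cp_len = 16         # cyclic-prefix length, chosen longer than the channel delay spread

# Random QPSK symbols, one per subcarrier (frequency domain).
bits = rng.integers(0, 2, size=(n_sub, 2))
tx_freq = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# OFDM modulation: IFFT to the time domain, then prepend the cyclic prefix.
tx_time = np.fft.ifft(tx_freq)
tx_sym = np.concatenate([tx_time[-cp_len:], tx_time])

# Simple 3-tap multipath channel (delay spread shorter than the CP).
h = np.array([1.0, 0.4, 0.2j])
rx_sym = np.convolve(tx_sym, h)[: len(tx_sym)]

# OFDM demodulation: drop the CP, FFT, then equalize each subcarrier by
# dividing by the channel frequency response (one complex tap per subcarrier).
rx_time = rx_sym[cp_len:cp_len + n_sub]
rx_freq = np.fft.fft(rx_time)
H = np.fft.fft(h, n_sub)
rx_eq = rx_freq / H

print("max symbol error after equalization:", np.max(np.abs(rx_eq - tx_freq)))
```

Because the channel is shorter than the cyclic prefix, the linear convolution acts as a circular convolution on the useful part of the symbol, which is why a single complex division per subcarrier recovers the transmitted constellation points.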
MIMO-OFDM is a particularly powerful combination because MIMO does not attempt to mitigate multipath propagation and OFDM avoids the need for signal equalization. MIMO-OFDM can achieve very high spectral efficiency even when the transmitter does not possess channel state information (CSI). When the transmitter does possess CSI (which can be obtained through the use of training sequences), it is possible to approach the theoretical channel capacity. CSI may be used, for example, to allocate different size signal constellations to the individual subcarriers, making optimal use of the communications channel at any given moment of time.
More recent MIMO-OFDM developments include multi-user MIMO (MU-MIMO), higher order MIMO implementations (greater number of spatial streams), and research concerning massive MIMO and cooperative MIMO (CO-MIMO) for inclusion in coming 5G standards.
MU-MIMO is part of the IEEE 802.11ac standard, the first Wi-Fi standard to offer speeds in the gigabit per second range. MU-MIMO enables an access point (AP) to transmit to up to four client devices simultaneously. This eliminates contention delays, but requires frequent channel measurements to properly direct the signals. Each user may employ up to four of the available eight spatial streams. For example, an AP with eight antennas can talk to two client devices with four antennas, providing four spatial streams to each. Alternatively, the same AP can talk to four client devices with two antennas each, providing two spatial streams to each.
Multi-user MIMO beamforming even benefits single spatial stream devices. Prior to MU-MIMO beamforming, an access point communicating with multiple client devices could only transmit to one at a time. With MU-MIMO beamforming, the access point can transmit to up to four single stream devices at the same time on the same channel.
The 802.11ac standard also supports speeds up to 6.93 Gbit/s using eight spatial streams in single-user mode. The maximum data rate assumes use of the optional 160 MHz channel in the 5 GHz band and 256 QAM (quadrature amplitude modulation). Chipsets supporting six spatial streams have been introduced and chipsets supporting eight spatial streams are under development.
Massive MIMO consists of a large number of base station antennas operating in a MU-MIMO environment. While LTE networks already support handsets using two spatial streams, and handset antenna designs capable of supporting four spatial streams have been tested, massive MIMO can deliver significant capacity gains even to single spatial stream handsets. Again, MU-MIMO beamforming is used to enable the base station to transmit independent data streams to multiple handsets on the same channel at the same time. However, one question still to be answered by research is: When is it best to add antennas to the base station and when is it best to add small cells?
Another focus of research for 5G wireless is CO-MIMO. In CO-MIMO, clusters of base stations work together to boost performance. This can be done using macro diversity for improved reception of signals from handsets or multi-cell multiplexing to achieve higher downlink data rates. However, CO-MIMO requires high-speed communication between the cooperating base stations.
History
Gregory Raleigh was first to advocate the use of MIMO in combination with OFDM. In a theoretical paper, he proved that with the proper type of MIMO system—multiple, co-located antennas transmitting and receiving multiple information streams using multidimensional coding and encoding—multipath propagation could be exploited to multiply the capacity of a wireless link. Up to that time, radio engineers tried to make real-world channels behave like ideal channels by mitigating the effects of multipath propagation. However, mitigation strategies have never been fully successful. In order to exploit multipath propagation, it was necessary to identify modulation and coding techniques that perform robustly over time-varying, dispersive, multipath channels. Raleigh published additional research on MIMO-OFDM under time-varying conditions, MIMO-OFDM channel estimation, MIMO-OFDM synchronization techniques, and the performance of the first experimental MIMO-OFDM system.
Raleigh solidified the case for OFDM by analyzing the performance of MIMO with three leading modulation techniques in his PhD dissertation: quadrature amplitude modulation (QAM), direct sequence spread spectrum (DSSS), and discrete multi-tone (DMT). QAM is representative of narrowband schemes such as TDMA that use equalization to combat ISI. DSSS uses rake receivers to compensate for multipath and is used by CDMA systems. DMT uses interleaving and coding to eliminate ISI and is representative of OFDM systems. The analysis was performed by deriving the MIMO channel matrix models for the three modulation schemes, quantifying the computational complexity and assessing the channel estimation and synchronization challenges for each. The models showed that for a MIMO system using QAM with an equalizer or DSSS with a rake receiver, computational complexity grows quadratically as data rate is increased. In contrast, when MIMO is used with DMT, computational complexity grows log-linearly (i.e., n log n) as data rate is increased.
Raleigh subsequently founded Clarity Wireless in 1996 and Airgo Networks in 2001 to commercialize the technology. Clarity developed specifications in the Broadband Wireless Internet Forum (BWIF) that led to the IEEE 802.16 (commercialized as WiMAX) and LTE standards, both of which support MIMO. Airgo designed and shipped the first MIMO-OFDM chipsets for what became the IEEE 802.11n standard. MIMO-OFDM is also used in the 802.11ac standard and is expected to play a major role in 802.11ax and fifth generation (5G) mobile phone systems.
Several early papers on multi-user MIMO were authored by Ross Murch et al. at Hong Kong University of Science and Technology. MU-MIMO was included in the 802.11ac standard (developed starting in 2011 and approved in 2014). MU-MIMO capacity appears for the first time in what have become known as "Wave 2" products. Qualcomm announced chipsets supporting MU-MIMO in April 2014.
Broadcom introduced the first 802.11ac chipsets supporting six spatial streams for data rates up to 3.2 Gbit/s in April 2014. Quantenna says it is developing chipsets to support eight spatial streams for data rates up to 10 Gbit/s.
Massive MIMO, Cooperative MIMO (CO-MIMO), and HetNets (heterogeneous networks) are currently the focus of research concerning 5G wireless. The development of 5G standards is expected to begin in 2016. Prominent researchers to date include Jakob Hoydis (of Alcatel-Lucent), Robert W. Heath (at the University of Texas at Austin), Helmut Bölcskei (at ETH Zurich), and David Gesbert (at EURECOM).
Trials of 5G technology have been conducted by Samsung. Japanese operator NTT DoCoMo plans to trial 5G technology in collaboration with Alcatel-Lucent, Ericsson, Fujitsu, NEC, Nokia, and Samsung.
References
IEEE 802
Information theory
Mobile telecommunications standards
Radio resource management | MIMO-OFDM | [
"Mathematics",
"Technology",
"Engineering"
] | 2,470 | [
"Telecommunications engineering",
"Applied mathematics",
"Mobile telecommunications standards",
"Mobile telecommunications",
"Computer science",
"Information theory"
] |
41,827,467 | https://en.wikipedia.org/wiki/Foodomics | Foodomics was defined in 2009 as "a discipline that studies the Food and Nutrition domains through the application and integration of advanced -omics technologies to improve consumer's well-being, health, and knowledge". Foodomics requires the combination of food chemistry, biological sciences, and data analysis.
The study of foodomics came under the spotlight after it was introduced at the first international conference in 2009 in Cesena, Italy. Many experts in the fields of omics and nutrition were invited to this event in order to find new approaches and possibilities in the area of food science and technology. However, research and development in foodomics today are still limited due to the high-throughput analysis required. The American Chemical Society journal Analytical Chemistry dedicated its cover to foodomics in December 2012.
Foodomics involves four main areas of omics:
Genomics, which involves investigation of the genome and its patterns;
Transcriptomics, which explores sets of genes and identifies differences among various conditions, organisms, and circumstances, using several techniques including microarray analysis;
Proteomics, which studies the proteins that are the products of genes, covering how a protein functions in a particular place, its structure, its interactions with other proteins, etc.;
Metabolomics, which covers the chemical diversity in cells and how it affects cell behavior.
Advantages of foodomics
Foodomics greatly helps scientists in the areas of food science and nutrition to gain better access to data, which is used to analyze the effects of food on human health, among other applications. It is believed to be another step towards a better understanding of the development and application of technology and food. Moreover, the study of foodomics leads to other omics sub-disciplines, including nutrigenomics, which is the integration of the study of nutrition, genes and omics.
Colon cancer
A foodomics approach has been used to analyze and establish the links between several substances present in rosemary and their ability to act against colon cancer cells. There are thousands of chemical compounds in rosemary, but the ones that are able to help fight such disease are carnosic acid (CA) and carnosol (CS), which can be obtained by extracting rosemary via supercritical fluid extraction (SFE). They have the potential to fight against and reduce the proliferation of human HT-29 colon cancer cells.
The experiment, done by administering rosemary extracts to mice and collecting RNA and metabolites from each control and treated individual, indicated that there is a correlation between the compounds used and the percentage of recovery from the cancer. This information would, however, not be achievable without the help of foodomics knowledge, as it was used to process data, analyze statistics, and identify biomarkers. Foodomics, coupled with transcriptomic data, shows that carnosic acid leads to the accumulation of an antioxidant, glutathione (GSH). The chemical can be broken down to cysteinylglycine, a naturally occurring dipeptide and an intermediate in the gamma-glutamyl cycle. Moreover, the result from an integration of foodomics, transcriptomics and metabolomics reveals that compounds that provoke colon cancer cells, such as N-acetylputrescine, N-acetylcadaverine, 5'MTA and γ-aminobutyric acid, can also be lowered by CA treatment.
Thus, foodomics plays an important role in explaining the relationship between a deadly disease, like colon cancer, and natural compounds existing in rosemary. The data obtained are useful in reaching another approach for tackling the proliferation of cancer cells.
Processed meat
Aside from measuring the concentration of protein in meat, calculating bioavailability is another way of determining the total amount of a component and its quality. The calculation is done as food molecules are digested in various steps. Since human digestion is very complicated, a wide range of analytical techniques are used to obtain the data, including a foodomics protocol and an in vitro static simulation of digestion.
The procedure is divided into 3 stages as the samples are collected from oral, gastric and duodenal digestion in order to study protein digestibility closely and thoroughly. A meat based food, Bresaola, is evaluated because beef muscles are still intact, which can be used to indicate nutritional value.
The consequences of the oral step can be observed at the beginning of gastric digestion, the first stage. As there is no enzymatic proteolytic activity at this stage, the H-NMR spectrum, used to determine structure, is still constant because no change is occurring. However, when pepsin takes action, TD-NMR, a technique used for measuring mobile water populations in the presence of macromolecular solutes, reveals that progressive unbundling of meat fibers assists pepsin in digesting the sample. TD-NMR data show that the bolus structure changes considerably during the first part of digestion and water molecules consequently leave the spaces inside the myofibrils and fiber bundles. This results in a low level of water being detected in the duodenal stage. As digestion progresses, protein molecules become smaller and their molecular weight decreases; in other words, there is an increase in the total spectral area.
See also
Genomics
Nutrigenomics
Proteomics
List of omics topics in biology
References
Food science
Analytical chemistry
Metabolism
Omics | Foodomics | [
"Chemistry",
"Biology"
] | 1,080 | [
"Bioinformatics",
"Omics",
"Cellular processes",
"nan",
"Biochemistry",
"Metabolism"
] |
24,946,632 | https://en.wikipedia.org/wiki/C20H30O5 | The molecular formula C20H30O5 (molar mass: 350.449 g/mol, exact mass: 350.2093 u) may refer to:
Andrographolide
Prostaglandin E3 (PGE3)
Molecular formulas | C20H30O5 | [
"Physics",
"Chemistry"
] | 71 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,947,530 | https://en.wikipedia.org/wiki/Sagitta%20%28geometry%29 | In geometry, the sagitta (sometimes abbreviated as sag) of a circular arc is the distance from the midpoint of the arc to the midpoint of its chord. It is used extensively in architecture when calculating the arc necessary to span a certain height and distance and also in optics where it is used to find the depth of a spherical mirror or lens. The name comes directly from Latin sagitta, meaning an "arrow".
Formulas
In the following equations, s denotes the sagitta (the depth or height of the arc), r equals the radius of the circle, and l the length of the chord spanning the base of the arc. As r − s and l/2 are two sides of a right triangle with r as the hypotenuse, the Pythagorean theorem gives us
r² = (r − s)² + (l/2)².
This may be rearranged to give any of the other three:
s = r − √(r² − l²/4),   l = 2√(2rs − s²),   r = s/2 + l²/(8s).
The sagitta may also be calculated from the versine function: for an arc that spans an angle Δ = 2θ, the sagitta is s = r·versin(θ) = r(1 − cos θ), and it coincides with the versine for unit circles.
Approximation
When the sagitta is small in comparison to the radius, it may be approximated by the formula s ≈ l²/(8r).
Alternatively, if the sagitta is small and the sagitta, radius, and chord length are known, they may be used to estimate the arc length by the formula
a ≈ l + 8s²/(3l),
where a is the length of the arc; this formula was known to the Chinese mathematician Shen Kuo, and a more accurate formula also involving the sagitta was developed two centuries later by Guo Shoujing.
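A short worked example of the relations above follows; the radius and chord length are arbitrarily chosen values, and the function names are illustrative.

```python
import math

def sagitta(r, l):
    # Exact sagitta of a circular arc with radius r and chord length l.
    return r - math.sqrt(r * r - (l / 2) ** 2)

def radius_from_sagitta(s, l):
    # Inverse relation: recover the radius from sagitta and chord length.
    return s / 2 + l ** 2 / (8 * s)

r, l = 10.0, 4.0                      # assumed example values
s = sagitta(r, l)
print(f"sagitta            s = {s:.6f}")
print(f"recovered radius   r = {radius_from_sagitta(s, l):.6f}")
print(f"shallow-arc approx s ~ {l**2 / (8 * r):.6f}")
```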
Applications
Architects, engineers, and contractors use these equations to create "flattened" arcs that are used in curved walls, arched ceilings, bridges, and numerous other applications.
The sagitta also has uses in physics where it is used, along with chord length, to calculate the radius of curvature of an accelerated particle. This is used especially in bubble chamber experiments where it is used to determine the momenta of decay particles. Likewise historically the sagitta is also utilised as a parameter in the calculation of moving bodies in a centripetal system. This method is utilised in Newton's Principia.
See also
Circular segment
Versine
Jyā, koti-jyā and utkrama-jyā
References
External links
Calculating the Sagitta of an Arc
Architectural terminology
Geometric measurement | Sagitta (geometry) | [
"Physics",
"Mathematics",
"Engineering"
] | 469 | [
"Geometric measurement",
"Physical quantities",
"Quantity",
"Geometry",
"Architectural terminology",
"Architecture"
] |
24,947,576 | https://en.wikipedia.org/wiki/Sagitta%20%28optics%29 | In optics and especially telescope making, sagitta or sag is a measure of the glass removed to yield an optical curve. It is approximated by the formula
z ≈ r²/(2R),
where R is the radius of curvature of the optical surface. The sag z is the displacement along the optic axis of the surface from the vertex, at distance r from the axis.
For a spherical surface the exact expression is z = R − √(R² − r²), of which the formula above is the small-r approximation.
Aspheric surfaces
Optical surfaces with non-spherical profiles, such as the surfaces of aspheric lenses, are typically designed such that their sag is described by the equation
z(r) = r² / ( R (1 + √(1 − (1 + K) r²/R²)) ) + α₁r² + α₂r⁴ + α₃r⁶ + ⋯
Here, K is the conic constant as measured at the vertex (where r = 0). The coefficients αᵢ describe the deviation of the surface from the axially symmetric quadric surface specified by R and K.
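A brief sketch evaluating the sag equation above follows; the radius of curvature, conic constant, and the decision to compare against a sphere and the r²/(2R) approximation are arbitrary example choices.

```python
import math

def sag(r, R, K, alphas=()):
    # Conic (base) term of the aspheric sag equation plus optional polynomial
    # terms alpha_1*r^2 + alpha_2*r^4 + ...; alphas may be empty for a pure conic.
    conic = r ** 2 / (R * (1 + math.sqrt(1 - (1 + K) * r ** 2 / R ** 2)))
    poly = sum(a * r ** (2 * (i + 1)) for i, a in enumerate(alphas))
    return conic + poly

R = 100.0            # radius of curvature (assumed example value)
K = -1.0             # conic constant: -1 gives a paraboloid
for r in (0.0, 5.0, 10.0, 20.0):
    sphere = R - math.sqrt(R ** 2 - r ** 2)          # exact spherical sag (K = 0)
    print(f"r = {r:5.1f}  parabolic sag = {sag(r, R, K):8.4f}  "
          f"spherical sag = {sphere:8.4f}  approx r^2/(2R) = {r**2/(2*R):8.4f}")
```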
See also
Versine
Chord
References
Optics | Sagitta (optics) | [
"Physics",
"Chemistry"
] | 166 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |
24,948,426 | https://en.wikipedia.org/wiki/Krogh%20length | The Krogh length, L, is the distance to which nutrients diffuse between capillaries, based on cellular consumption of the nutrients.
It can be described as:
L = √(D·c / R),
where D is the diffusion constant of the solute in the substrate, c is the concentration in the channel, and R is the consumption by the cells. Units are in terms of length.
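A simple numerical evaluation of the definition above follows; the diffusion constant, concentration, and consumption rate are assumed order-of-magnitude values for oxygen in metabolically active tissue, not measured data.

```python
import math

def krogh_length(D, c, R):
    # D: diffusion constant [m^2/s], c: channel concentration [mol/m^3],
    # R: volumetric consumption rate by the cells [mol/(m^3*s)].
    return math.sqrt(D * c / R)

# Assumed order-of-magnitude values for oxygen in tissue.
D = 2e-9        # m^2/s
c = 0.2         # mol/m^3
R = 1e-2        # mol/(m^3*s)
print(f"Krogh length ~ {krogh_length(D, c, R) * 1e6:.0f} micrometres")
```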
See also
August Krogh
Biomedical engineering
Capillaries
Diffusion
Biot number
Peclet number
References
Cardiovascular physiology
Biomedical engineering
Fluid mechanics | Krogh length | [
"Engineering",
"Biology"
] | 96 | [
"Biological engineering",
"Bioengineering stubs",
"Biomedical engineering",
"Biotechnology stubs",
"Civil engineering",
"Medical technology stubs",
"Fluid mechanics",
"Medical technology"
] |
24,950,329 | https://en.wikipedia.org/wiki/C21H20N4O3 | The molecular formula C21H20N4O3 (molar mass: 376.41 g/mol, exact mass: 376.1535 u) may refer to:
Entinostat (SNDX-275)
Picotamide
Molecular formulas | C21H20N4O3 | [
"Physics",
"Chemistry"
] | 73 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,950,345 | https://en.wikipedia.org/wiki/Microscale%20thermophoresis | Microscale thermophoresis (MST) is a technology for the biophysical analysis of interactions between biomolecules. Microscale thermophoresis is based on the detection of a temperature-induced change in fluorescence of a target as a function of the concentration of a non-fluorescent ligand. The observed change in fluorescence is based on two distinct effects. On the one hand it is based on a temperature related intensity change (TRIC) of the fluorescent probe, which can be affected by binding events. On the other hand, it is based on thermophoresis, the directed movement of particles in a microscopic temperature gradient. Any change of the chemical microenvironment of the fluorescent probe, as well as changes in the hydration shell of biomolecules result in a relative change of the fluorescence detected when a temperature gradient is applied and can be used to determine binding affinities. MST allows measurement of interactions directly in solution without the need of immobilization to a surface (immobilization-free technology).
Applications
Affinity
between any kind of biomolecules including proteins, DNA, RNA, peptides, small molecules, fragments and ions
for interactions with high molecular weight complexes, large molecule assemblies, even with liposomes, vesicles, nanodiscs, nanoparticles and viruses
in any buffer, including serum and cell lysate
in competition experiments (for example with substrate and inhibitors)
Stoichiometry
Thermodynamic parameters
MST has been used to estimate the enthalpic and entropic contributions to biomolecular interactions.
Additional information
Sample property (homogeneity, aggregation, stability)
Multiple binding sites, cooperativity
Technology
MST is based on the quantifiable detection of a fluorescence change in a sample when a temperature change is applied. The fluorescence of a target molecule can be extrinsic or intrinsic (aromatic amino acids) and is altered in temperature gradients due to two distinct effects. On the one hand there is the temperature-related intensity change (TRIC), which describes the intrinsic property of fluorophores to change their fluorescence intensity as a function of temperature. The extent of the change in fluorescence intensity is affected by the chemical environment of the fluorescent probe, which can be altered in binding events due to conformational changes or proximity of ligands. On the other hand, MST is also based on the directed movement of molecules along temperature gradients, an effect termed thermophoresis. A spatial temperature difference ΔT leads to a change in molecule concentration in the region of elevated temperature, quantified by the Soret coefficient S_T: c_hot/c_cold = exp(−S_T ΔT). Both TRIC and thermophoresis contribute to the recorded signal in MST measurements in the following way: ∂/∂T(cF) = c·∂F/∂T + F·∂c/∂T. The first term in this equation, c·∂F/∂T, describes TRIC as a change in fluorescence intensity (F) as a function of temperature (T), whereas the second term, F·∂c/∂T, describes thermophoresis as the change in particle concentration (c) as a function of temperature. Thermophoresis depends on the interface between molecule and solvent. Under constant buffer conditions, thermophoresis probes the size, charge and solvation entropy of the molecules. The thermophoresis of a fluorescently labeled molecule A typically differs significantly from the thermophoresis of a molecule-target complex AT due to size, charge and solvation entropy differences. This difference in the molecule's thermophoresis is used to quantify the binding in titration experiments under constant buffer conditions.
The thermophoretic movement of the fluorescently labelled molecule is measured by monitoring the fluorescence distribution F inside a capillary. The microscopic temperature gradient is generated by an IR-laser, which is focused into the capillary and is strongly absorbed by water. The temperature of the aqueous solution in the laser spot is raised by ΔT = 1-10 K. Before the IR-laser is switched on, a homogeneous fluorescence distribution F_cold is observed inside the capillary. When the IR-laser is switched on, two effects occur on the same time-scale, contributing to the new fluorescence distribution F_hot. The thermal relaxation induces a binding-dependent drop in the fluorescence of the dye due to its local environment-dependent response to the temperature jump (TRIC). At the same time molecules typically move from the locally heated region to the outer cold regions. The local concentration of molecules decreases in the heated region until it reaches a steady-state distribution.
While the mass diffusion D dictates the kinetics of depletion, S_T determines the steady-state concentration ratio c_hot/c_cold = exp(−S_T ΔT) ≈ 1 − S_T ΔT under a temperature increase ΔT. The normalized fluorescence F_norm = F_hot/F_cold measures mainly this concentration ratio, in addition to TRIC ∂F/∂T. In the linear approximation we find: F_norm = 1 + (∂F/∂T − S_T) ΔT. Due to the linearity of the fluorescence intensity and the thermophoretic depletion, the normalized fluorescence from the unbound molecule F_norm(A) and the bound complex F_norm(AT) superpose linearly. By denoting x the fraction of molecules bound to targets, the changing fluorescence signal during the titration of target T is given by: F_norm = (1 − x)·F_norm(A) + x·F_norm(AT).
Quantitative binding parameters are obtained by using a serial dilution of the binding substrate. By plotting F_norm against the logarithm of the different concentrations of the dilution series, a sigmoidal binding curve is obtained. This binding curve can directly be fitted with the nonlinear solution of the law of mass action, with the dissociation constant K_D as result.
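An illustrative sketch (not vendor software) of fitting the quadratic solution of the law of mass action to normalized fluorescence values from a ligand dilution series follows; the target concentration, dilution series, noise level, and the simulated data are all made-up assumptions, and NumPy and SciPy are required.

```python
import numpy as np
from scipy.optimize import curve_fit

A_TOT = 50e-9  # labelled target concentration [M] (assumed, held constant)

def fnorm_model(L_tot, kd, f_unbound, f_bound):
    # Fraction bound from the quadratic solution of the law of mass action,
    # followed by linear superposition of unbound and bound signals.
    term = A_TOT + L_tot + kd
    x = (term - np.sqrt(term ** 2 - 4 * A_TOT * L_tot)) / (2 * A_TOT)
    return f_unbound + x * (f_bound - f_unbound)

# Synthetic 16-point serial dilution of the ligand and simulated F_norm data.
ligand = 10e-6 / 2 ** np.arange(16)          # 10 uM down to ~0.3 nM
rng = np.random.default_rng(1)
data = fnorm_model(ligand, 250e-9, 900.0, 850.0) + rng.normal(0, 1.0, ligand.size)

popt, _ = curve_fit(fnorm_model, ligand, data, p0=(1e-7, 900.0, 850.0))
print(f"fitted Kd = {popt[0] * 1e9:.0f} nM")
```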
References
Biochemistry methods
Protein methods
Biophysics
Molecular biology
Laboratory techniques | Microscale thermophoresis | [
"Physics",
"Chemistry",
"Biology"
] | 1,290 | [
"Biochemistry methods",
"Applied and interdisciplinary physics",
"Protein methods",
"Protein biochemistry",
"Biophysics",
"nan",
"Molecular biology",
"Biochemistry"
] |
24,950,940 | https://en.wikipedia.org/wiki/Brentuximab%20vedotin | Brentuximab vedotin, sold under the brand name Adcetris, is an antibody-drug conjugate medication used to treat relapsed or refractory Hodgkin lymphoma (HL) and systemic anaplastic large cell lymphoma (ALCL), a type of T cell non-Hodgkin lymphoma. It selectively targets tumor cells expressing the CD30 antigen, a defining marker of Hodgkin lymphoma and ALCL. The drug is being jointly marketed by Millennium Pharmaceuticals outside the US and by Seagen in the US.
Medical uses
In the United States, brentuximab vedotin is indicated for the treatment of Hodgkin lymphoma, systemic anaplastic large cell lymphoma, primary cutaneous anaplastic large cell lymphoma, and CD30-expressing mycosis fungoides.
In the European Union, brentuximab vedotin is indicated for the treatment of Hodgkin lymphoma, systemic anaplastic large cell lymphoma, and cutaneous T cell lymphoma.
Design
Brentuximab vedotin consists of the chimeric monoclonal antibody brentuximab (cAC10, which targets the cell-membrane protein CD30) linked with maleimide attachment groups, cathepsin-cleavable linkers (valine-citrulline), and para-aminobenzylcarbamate spacers to three to five units of the antimitotic agent monomethyl auristatin E (MMAE, reflected by the 'vedotin' in the drug's name). The peptide-based linker bonds the antibody to the cytotoxic compound in a stable manner so the drug is not easily released from the antibody under physiologic conditions to help prevent toxicity to healthy cells and ensure dosage efficiency. The peptide antibody-drug bond facilitates rapid and efficient drug cleavage inside target tumor cell. The antibody cAC10 part of the drug binds to CD30 which often occurs on diseased cells but rarely on normal tissues. The antibody portion of the drug attaches to CD30 on the surface of malignant cells, delivering MMAE which is responsible for the anti-tumour activity. Once bound, brentuximab vedotin is internalised by endocytosis and thus selectively taken up by targeted cells. The vesicle containing the drug is fused with lysosomes and lysosomal cysteine proteases, particularly cathepsin B, start to break down valine-citrulline linker and MMAE is no longer bound to the antibody and is released directly into the tumor environment.
Serious adverse events
Brentuximab vedotin was studied as monotherapy in 160 patients in two phase II trials. Across both trials, the most common adverse reactions (≥20%), regardless of causality, were chemotherapy-induced peripheral neuropathy (a progressive, enduring and often irreversible tingling numbness, intense pain, and hypersensitivity to cold, beginning in the hands and feet and sometimes involving the arms and legs), neutropenia (an immune system impairment), fatigue, nausea, anemia, upper respiratory tract infection, diarrhea, fever, rash, thrombocytopenia, cough and vomiting.
Black box warning
In January 2012, the FDA announced that because brentuximab vedotin had been linked with two cases of progressive multifocal leukoencephalopathy, they were requiring the addition of a black box warning to the drug label regarding this potential risk.
Society and culture
Legal status
In August 2011, the US Food and Drug Administration (FDA) granted accelerated approval to the biologics license application (BLA) submitted by Seattle Genetics for the use of brentuximab vedotin in the treatment of relapsed HL and ALCL.
In October 2012, the European Medicines Agency (EMA) gave it conditional marketing authorization for relapsed or refractory HL and ALCL.
In November 2017, the FDA approved brentuximab vedotin as a treatment for patients with cutaneous T-cell lymphoma (CTCL) who have received prior systemic therapy. This approval is for patients with primary cutaneous anaplastic large cell lymphoma (pcALCL) and CD30-expressing mycosis fungoides (MF).
In March 2018, the FDA approved brentuximab vedotin to treat adults with previously untreated stage III or IV classical Hodgkin lymphoma (cHL) in combination with chemotherapy.
In November 2018, the FDA expanded the approved use of brentuximab vedotin in combination with chemotherapy for adults with certain types of peripheral T-cell lymphoma (PTCL). This is the first FDA approval for treatment of newly diagnosed PTCL.
In November 2022, the FDA approved brentuximab vedotin in combination with doxorubicin, vincristine, etoposide, prednisone, and cyclophosphamide for people aged two years of age and older with previously untreated high risk classical Hodgkin lymphoma. This is the first pediatric approval for brentuximab vedotin.
Economics
The Australian Pharmaceutical Benefits Advisory Committee (PBAC) considered a March 2014 application by the manufacturer for inclusion of brentuximab vedotin under a Pharmaceutical Benefits Scheme Section 100 (Efficient Funding of Chemotherapy) arrangement. While this application was accepted, the committee noted that on the basis of inadequate cost-benefit, the medicine would not be made available more generally for the first-line treatment of relapsed or refractory systemic anaplastic large cell lymphoma (sALCL).
Brand names
Brentuximab vedotin is marketed as Adcetris.
Research
Clinical trials
In a 2010 clinical trial, 34% of patients with refractory Hodgkin lymphoma achieved complete remission and another 40% had partial remission. Tumor reductions were achieved in 94% of patients. In ALCL, 87% of patients had tumors shrink at least 50% and 97% of patients had some tumor shrinkage.
Reports in 2013, showed interim results from a Phase II, open-label, single-arm study designed to evaluate the antitumor activity of brentuximab vedotin in relapsed or refractory CD30-positive NHL, including B-cell neoplasms. These results demonstrated that single-agent brentuximab vedotin induced a 42% objective response rate and manageable safety profile among advanced diffuse large B-cell lymphoma patients.
A phase III trial funded by Millennium Pharmaceuticals compared ABVD (a combination of the chemotherapy drugs doxorubicin, bleomycin, vinblastine, and dacarbazine) versus A+AVD (a combination of brentuximab vedotin plus AVD, or doxorubicin, vinblastine, and dacarbazine) for treatment of classical Hodgkin lymphoma and found substituting brentuximab vedotin for bleomycin has both improved efficacy and lowered toxicity. A previously completed phase I study demonstrated that a greater number of patients experienced pulmonary toxicity with brentuximab vedotin-ABVD than with ABVD alone. Pulmonary fibrosis is a classical adverse effect of bleomycin; however, the incidence of pulmonary fibrosis in the brentuximab vedotin-ABVD arm was higher than the expected historical rate with ABVD alone. Overall, 24 out of 25 patients treated with brentuximab vedotin and AVD achieved complete remission.
Brentuximab vedotin is also being investigated as a substitute for vincristine (another mitotic inhibitor, which prevents tubulin polymerization) in patients being treated with CHOP (a combination of cyclophosphamide, hydroxydaunorubicin, vincristine, prednisone or prednisolone) for non-Hodgkin lymphoma.
A phase III clinical trial comparing the two combination therapies (CHOP and CHP-brentuximab vedotin) was completed in October 2020, with results published in 2021.
The ECHELON-1 phase 3 trial compared brentuximab vedotin with bleomycin, both in combination with adriamycin, vinblastine and dacarbazine (AVD) chemotherapy, as a first-line treatment for advanced classical Hodgkin lymphoma. The outcome of the trial resulted in a positive recommendation by the Committee for Medicinal Products for Human Use (CHMP) for its use as part of a combination treatment in adults with previously untreated CD30+ stage 3 Hodgkin lymphoma.
References
Monoclonal antibodies
Antibody-drug conjugates
Drugs developed by Takeda Pharmaceutical Company
Orphan drugs | Brentuximab vedotin | [
"Biology"
] | 1,901 | [
"Antibody-drug conjugates"
] |
24,952,147 | https://en.wikipedia.org/wiki/Lieb%E2%80%93Liniger%20model | In physics, the Lieb–Liniger model describes a gas of particles moving in one dimension and satisfying Bose–Einstein statistics. More specifically, it describes a one-dimensional Bose gas with Dirac delta interactions. It is named after Elliott H. Lieb and Werner Liniger, who introduced the model in 1963. The model was developed to compare and test Nikolay Bogolyubov's theory of a weakly interacting Bose gas.
Definition
Given N bosons moving in one dimension on the x-axis, on an interval of length L with periodic boundary conditions, a state of the N-body system must be described by a many-body wave function ψ(x_1, x_2, …, x_N). The Hamiltonian of this model is
H = -\sum_{j=1}^{N} \frac{\partial^2}{\partial x_j^2} + 2c \sum_{i<j} \delta(x_i - x_j)
where δ is the Dirac delta function. The constant c denotes the strength of the interaction: c > 0 represents a repulsive interaction and c < 0 an attractive interaction. The hard-core limit c → ∞ is known as the Tonks–Girardeau gas.
For a collection of bosons, the wave function ψ is unchanged under the exchange of any two particle coordinates (permutation symmetry), i.e., ψ(…, x_i, …, x_j, …) = ψ(…, x_j, …, x_i, …) for all i ≠ j, and ψ satisfies periodic boundary conditions in every coordinate.
The delta function in the Hamiltonian gives rise to a boundary condition when two coordinates, say x_1 and x_2, are equal; this condition is that as x_2 → x_1^+, the derivative satisfies
\left(\frac{\partial}{\partial x_2} - \frac{\partial}{\partial x_1}\right)\psi\Big|_{x_2 = x_1^+} = c\,\psi\Big|_{x_2 = x_1}.
Solution
The time-independent Schrödinger equation Hψ = Eψ is solved by explicit construction of ψ. Since ψ is symmetric, it is completely determined by its values in the simplex R, defined by the condition that 0 ≤ x_1 ≤ x_2 ≤ … ≤ x_N ≤ L.
The solution can be written in the form of a Bethe ansatz as
\psi(x_1, \dots, x_N) = \sum_{P} a(P) \exp\Big( i \sum_{j=1}^{N} k_{P(j)} x_j \Big),
with distinct wave vectors k_1 < k_2 < … < k_N, where the sum is over all N! permutations P of the integers 1, 2, …, N, and P maps 1, 2, …, N to P(1), P(2), …, P(N). The coefficients a(P), as well as the k_j's, are determined by the condition Hψ = Eψ, and this leads to a total energy
E = \sum_{j=1}^{N} k_j^2,
with the amplitudes given by
These equations determine a(P) in terms of the k_j's. These lead to N equations:
where the I_j are integers when N is odd and half-odd integers when N is even. For the ground state, the I_j are consecutive and symmetric about zero.
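For orientation, the coupled equations for the k_j in the repulsive case (c > 0) are commonly quoted in the following logarithmic form; sign and quantum-number conventions vary between references, so this should be read as the standard textbook statement rather than a unique normalization:
L k_j = 2\pi I_j - 2 \sum_{l=1}^{N} \arctan\!\left( \frac{k_j - k_l}{c} \right), \qquad j = 1, \dots, N.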
Thermodynamic limit
References
Statistical mechanics | Lieb–Liniger model | [
"Physics"
] | 408 | [
"Statistical mechanics"
] |
24,953,216 | https://en.wikipedia.org/wiki/Hydrogen%20turboexpander-generator | A hydrogen turboexpander-generator or generator-loaded expander for hydrogen gas is an axial flow turbine or radial expander for energy recovery through which high-pressure hydrogen gas is expanded to produce work used to drive an electrical generator. It replaces the control valve or regulator that would otherwise drop the pressure to the level required by the low-pressure network. A turboexpander generator can help recover energy losses and offset electrical requirements and emissions.
Description
Per stage, pressures of up to 200 bar are handled, with up to 15,000 kW of power and a maximum expansion ratio of 14. The generator-loaded expander for hydrogen gas is fitted with an automatic thrust balance, a dry gas seal, and a programmable logic controller with remote monitoring and diagnostics.
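To give a sense of the order of magnitude of the recoverable energy, the ideal-gas isentropic expansion work per unit mass is c_p·T_in·(1 − (p_out/p_in)^((γ−1)/γ)), multiplied by the mass flow and an isentropic efficiency. The sketch below is a rough estimate under ideal-gas assumptions; the mass flow, temperatures, pressures, and efficiency are illustrative values, not data for any particular machine, and real-gas behaviour and inlet preheating are ignored.

```python
# Rough estimate of power recovered by a hydrogen turboexpander-generator.
# All numbers below (mass flow, temperature, pressures, efficiency) are
# illustrative assumptions for an ideal-gas sketch, not vendor data.

CP_H2 = 14300.0     # J/(kg*K), approximate specific heat of hydrogen near room temperature
GAMMA_H2 = 1.41     # approximate heat capacity ratio of hydrogen

def isentropic_expander_power(m_dot, T_in, p_in, p_out, eta_isentropic):
    """Shaft power (W) from an ideal-gas isentropic expansion with a given efficiency."""
    # Ideal outlet temperature for an isentropic expansion
    T_out_ideal = T_in * (p_out / p_in) ** ((GAMMA_H2 - 1.0) / GAMMA_H2)
    w_ideal = CP_H2 * (T_in - T_out_ideal)     # specific work, J/kg
    return eta_isentropic * m_dot * w_ideal    # W

if __name__ == "__main__":
    # Assumed example: 0.5 kg/s of hydrogen let down from 60 bar to 10 bar at 300 K
    power = isentropic_expander_power(m_dot=0.5, T_in=300.0,
                                      p_in=60e5, p_out=10e5,
                                      eta_isentropic=0.75)
    print(f"Estimated recoverable shaft power: {power / 1e3:.0f} kW")
```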
Application
Hydrogen turboexpander-generators are used for hydrogen pipeline transport in combination with hydrogen compressors and for energy recovery in underground hydrogen storage. A variation is the compressor-loaded turboexpander, which is used in the liquefaction of gases such as liquid hydrogen.
See also
Compressed hydrogen
Letdown station
Hydrogen infrastructure
Turboexpander
References
External links
A preliminary inventory of the potential for electricity generation-2005
Hydrogen technologies
Mechanical engineering
Energy recovery
Turbo generators | Hydrogen turboexpander-generator | [
"Physics",
"Engineering"
] | 237 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
40,400,006 | https://en.wikipedia.org/wiki/Air%20bearing | Air bearings (also known as aerostatic or aerodynamic bearings) are bearings that use a thin film of pressurized gas to provide a low-friction load-bearing interface between surfaces. The two surfaces do not touch, which avoids the traditional bearing-related problems of friction, wear, particulates, and lubricant handling, and offers distinct advantages in precision positioning, such as the absence of backlash and static friction, as well as in high-speed applications. Spacecraft simulators now most often use air bearings, and 3-D printers are now used to make air-bearing-based attitude simulators for CubeSat satellites.
A differentiation is made between aerodynamic bearings, which establish the air cushion through the relative motion between static and moving parts, and aerostatic bearings, in which the pressure is supplied externally.
Gas bearings are mainly used in precision machine tools (measuring and processing machines) and high-speed machines (spindles, small-scale turbomachinery, precision gyroscopes).
Gas bearing types
Gas-lubricated bearings are classified in two groups, depending on the source of pressurization of the gas film providing the load-carrying capacity:
Aerostatic bearings: the gas is externally pressurized (using a compressor or a pressure tank) and injected into the clearance of the bearing. Consequently, aerostatic bearings can sustain a load even in the absence of relative motion, but they require an external gas compression system, which adds cost in terms of complexity and energy.
Aerodynamic bearings: the gas is pressurized by the relative velocity between the static and moving surfaces in the bearing. Such bearings are self-acting and do not require an external input of compressed gas. However, mechanical contact occurs at zero speed, requiring a particular tribological consideration to avoid premature wear.
Hybrid bearings combining the two families also exist. In such cases, a bearing is typically fed with externally-compressed gas at low speed and then relies partially or entirely on the self-pressurizing effect at higher speeds.
Among these two technological categories, gas bearings are classified depending on the kind of linkage they realize:
Linear-motion bearings: Support a translation along 1 or 2 directions between two planes
Journal bearings: Support a rotation between two parts
Thrust bearings: Block the axial displacement of a rotating part, usually used in combination with journal bearings
The main air bearing types fall under the following categories:
Aerostatic bearings
Pressurized gas acts as a lubricant in the gap between the bearing's moving parts. The gas cushion carries the load without any contact between the moving parts. Normally, the compressed gas is supplied by a compressor. A key goal of supplying the gas pressure in the gap is that the stiffness and damping of the gas cushion reach the highest possible level. In addition, gas consumption and the uniformity of the gas supply into the gap are crucial for the behavior of aerostatic bearings.
Delivery of gas to the gap
Supplying gas to the interface between moving elements of an aerostatic bearing can be achieved in a few different methods:
Porous Surface
Partial porous surface
Discrete orifice feeding
Slot feeding
Groove feeding
There is no single best approach to feeding the film. All methods have their advantages and disadvantages specific to each application.
Dead volume
Dead volumes refer in particular to chambers and canals that exist in conventional aerostatic bearings in order to distribute the gas and increase the compressed pressure within the gap. The cavities inside porous (sintered) gas bearings are also attributed to dead volume.
Conventional aerostatic bearings
With conventional single-nozzle aerostatic bearings, the compressed air flows through a few relatively large nozzles (diameter 0.1–0.5 mm) into the bearing gap. The gas consumption thus allows only limited flexibility, so the bearing's features (force, moments, bearing surface, bearing gap height, damping) can be adjusted only within narrow bounds. In order to achieve a reasonably uniform gas pressure with only a few nozzles, aerostatic bearing manufacturers resort to constructive measures which introduce dead volumes (air volume that is not compressed as the gap decreases and is therefore weak). In effect, this dead volume is very harmful to the gas bearing's dynamics and causes self-excited vibrations.
Single-nozzle aerostatic bearings
The pre-pressurized chamber consists of a chamber around the centralized nozzle. Usually, this chamber covers between 3% and 20% of the bearing's surface. Even with a chamber depth of 1/100 mm, the dead volume is very high. In the worst cases, these air bearings have a concave bearing surface instead of a chamber. Disadvantages of these air bearings include very poor tilt stiffness.
Gas bearings with channels and chambers
Typically, conventional aerostatic bearings are implemented with chambers and canals. This design assumes that, with a limited number of nozzles, the dead volume can be decreased while still distributing the gas uniformly within the gap. Most constructive ideas refer to special canal structures. Since the late 1980s, aerostatic bearings with micro-canal structures without chambers have been manufactured. However, this technique also has to manage problems with dead volume. With an increasing gap height, the micro canals' load capacity and stiffness decrease. As in the case of high-speed linear drives or high-frequency spindles, this may cause serious disadvantages.
Laser drilled Micro-nozzle aerostatic bearings
Laser-drilled micro nozzle aerostatic bearings make use of computerized manufacturing and design techniques to optimize performance and efficiency. This technology allows manufacturers more flexibility in manufacturing. In turn this allows a larger design envelope in which to optimize their designs for a given application. In many cases engineers can create air bearings that approach the theoretical limit of performance.
Rather than a few large nozzles, aerostatic bearings with many micro nozzles avoid dynamically disadvantageous dead volumes. Dead volumes refer to all cavities in which gas cannot be compressed during a decrease of the gap; they act as weak gas volumes and stimulate vibration. Examples of the benefits are: linear drives with accelerations of more than 1,000 m/s² (100 g), or impact drives with even more than 100,000 m/s² (10,000 g), due to high damping in combination with dynamic stiffness; sub-nanometer movements due to the lowest noise-induced errors; and seal-free transmission of gas or vacuum for rotary and linear drives via the gap due to guided air supply.
Micro-nozzle aerostatic bearings achieve an effective, nearly perfect pressure distribution within the gap with a large number of micro nozzles. Their typical diameter is between 0.02 mm and 0.06 mm. The narrowest cross-section of these nozzles lies exactly at the bearing's surface. Thereby the technology avoids a dead volume on the supporting air bearing's surface and within the area of the air supplying nozzles.
The micro nozzles are automatically drilled with a laser beam that provides top-quality and repeatability. The physical behaviors of the air bearings prove to have a low variation for large as well as for small production volumes. In contrast to conventional bearings, with this technique the air bearings require no manual or costly manufacturing.
The advantages of the micro-nozzle air bearing technology include:
efficient use of the air cushion (close to the physical limit) through a uniform pressure within the whole gap;
perfect combination of static and dynamic properties;
highest-possible flexibility of the air bearing properties: with a particular gap height, it is possible to optimize the air bearing such that it has, for example, a maximum load, stiffness, tilt stiffness, damping, or a minimum air consumption (respectively also in combination with others);
highest precision of all air bearings, proven in multiple applications, e.g. in measurement technology, owing to the smallest movements (<< 2 nanometres) from the physically lowest-possible self-excited vibrations;
considerably higher tilt stiffness than conventional air bearings, in which the air within the gap flows away through canals from the loaded to the unloaded areas;
vibration-free within the entire operating range even with high air pressure supply (actually even much more than 10 bar are possible);
highest reliability due to the large number of nozzles: clogging of nozzles by particles is out of the question (no failure in operation) because their diameters are much larger than the gap height;
possibility to adjust bearing properties for deformation and tolerances of the bearing and opposite surface;
proven usability for many bearing materials and coatings.
Some of these advantages, such as the high flexibility, the excellent static and dynamic properties in combination, and a low noise excitation, prove to be unique among all other aerostatic bearings.
Various designs
Standard air bearings are offered with various mountings to link them in a system:
Bearings for flexible connection with ball-pins. This design for standard air bearings is usually supplied on the market.
Bearings with a high-stiff joint instead of a conventional ball-pin. Using this version the stiffness of the complete system is significantly higher.
Bearings with integrated piston for preload of statically determined guidances.
In addition, there are also rectangular bearings with a fixed mounting (joint-less) for guideways requiring the highest stiffness, accuracy, or dynamics.
Furthermore, there are also air bearings with integrated vacuum or magnetic preload, air bearings for high temperatures above 400 °C, as well as ones manufactured from alternative materials.
Advantages and disadvantages of gas-lubricated bearings
Advantages
Wearless operation, durability. Air bearings operate contact-free and so without abrasion. The only friction results from airflow between the bearing surfaces. Thus, the durability of air bearings is unlimited if they are designed and calculated correctly. Roller bearings and friction bearings have a high degree of friction when used at high speed or acceleration, causing a positive feedback loop where high abrasion decreases precision, which in turn causes greater wear, leading to their eventual failure.
Guiding, repeatability, and position accuracy. In chip production and in back-end positioning, a repeatability of 1–2 μm must be reached with the wire bonder; at the die bonder, 5 μm must be achieved. At such precision, roller bearings reach their physical limit unless acceleration is reduced. At the front end (lithography), air bearings are already established.
Cost advantage and repeatability. In series production, gas bearings can have a cost advantage over roller bearings: the production of a roller-guided high-frequency spindle is – according to one manufacturer – about 20% more expensive than that of an air-guided spindle.
Environmental purity. Because they do not require the use of oil for their lubrication and are frictionless, gas bearings are suited for applications requiring a low contamination of the working fluid. This is a critical aspect to the pharmaceutical industry, nuclear fuel processing, semi-conductor manufacturing and energy conversion cycles.
Disadvantages
Self-excited vibration. In journal bearings, self-excited vibration can appear past a given speed, because of the cross-coupled stiffness and low damping of gas lubrication. This vibration can lead to an instability and threaten the gas bearing operation. Precise dynamic computations are required to ensure a safe operation within the desired speed range. This kind of instability is known as "half-speed whirl" and affects particularly aerodynamic bearings.
Tight manufacturing tolerances. In order to carry sufficient load and avoid the instability mentioned above, tight tolerances are required in the clearance between bearing surfaces. Typical clearances ranging from 5 μm to 50 μm are required for both aerodynamic and aerostatic bearings. Consequently, air bearings are expensive to manufacture.
Clean environment. Because of their small clearance, gas-lubricated bearings are sensitive to the presence of particulates and dust in the environment (in the case of aerodynamic bearings) and externally-pressurized gas (aerostatic bearings).
Theoretical modeling
Gas-lubricated bearings are usually modeled using the Reynolds equation to describe the evolution of pressure in the thin film domain. Unlike liquid-lubricated bearings, the gas lubricant has to be considered as compressible, leading to a non-linear differential equation to be solved.
Numerical methods such as Finite difference method or Finite element method are common for the discretization and the resolution of the equation, accounting for the boundary conditions associated to each bearing geometry (linear-motion, journal and thrust bearings). In most cases, the gas film can be considered as isothermal and respecting the ideal gas law, leading to a simplification of the Reynolds equation.
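As an illustration of the kind of computation involved, the sketch below solves the steady, isothermal, one-dimensional compressible Reynolds equation d/dx(p h³ dp/dx) = 6 μ U d(p h)/dx for a linearly tapered slider with ambient pressure at both ends, using a simple finite-difference fixed-point iteration. The geometry, gas properties, and relaxation settings are illustrative assumptions; a production analysis would normally use a two-dimensional formulation and a more robust (e.g. Newton) solver.

```python
import numpy as np

# Illustrative slider-bearing parameters (assumed values, not from any datasheet)
L = 0.02                  # bearing length, m
h1, h2 = 20e-6, 10e-6     # inlet/outlet film thickness, m (linear taper)
U = 10.0                  # sliding speed, m/s
mu = 1.8e-5               # dynamic viscosity of air, Pa*s
p_a = 1.0e5               # ambient pressure, Pa

n = 201
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
h = h1 + (h2 - h1) * x / L            # film thickness profile
p = np.full(n, p_a)                   # initial guess: ambient pressure everywhere

# Fixed-point (Gauss-Seidel) iteration with under-relaxation on the nonlinear equation
#   d/dx (p h^3 dp/dx) = 6 mu U d(p h)/dx,  with p(0) = p(L) = p_a
for _ in range(20000):
    p_old = p.copy()
    for i in range(1, n - 1):
        h_e, h_w = 0.5 * (h[i] + h[i + 1]), 0.5 * (h[i] + h[i - 1])
        p_e, p_w = 0.5 * (p[i] + p[i + 1]), 0.5 * (p[i] + p[i - 1])
        a_e = p_e * h_e**3 / dx           # east-face diffusion coefficient (lagged p)
        a_w = p_w * h_w**3 / dx           # west-face diffusion coefficient (lagged p)
        rhs = 6.0 * mu * U * (p_e * h_e - p_w * h_w)   # wedge (Couette) term
        p_new = (a_e * p[i + 1] + a_w * p[i - 1] - rhs) / (a_e + a_w)
        p[i] += 0.3 * (p_new - p[i])      # under-relaxation for stability
    if np.max(np.abs(p - p_old)) < 1e-3:
        break

gauge = p - p_a
load_per_width = float(np.sum(0.5 * (gauge[1:] + gauge[:-1]) * dx))  # trapezoidal rule
print(f"Peak gauge pressure: {gauge.max() / 1e3:.1f} kPa")
print(f"Load capacity per unit width: {load_per_width:.1f} N/m")
```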
Examples
Automotive technology
Semiconductor technology
Linear drives
Medical technology
Grease- and oil-free drives for respirators, stick-slip-free movements of scanners, and high rotary speeds of large rotors have all been achieved with air bearings.
Production technology
Primarily, stick-slip-free movements and/or the smallest forces are required. Air bearing technology is well suited to grease- and oil-free, highly dynamic movements with short strokes.
Space technology
Footnotes
References
Machines
Lubrication
Bearings (mechanical)
Aerodynamics | Air bearing | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 2,663 | [
"Machines",
"Aerodynamics",
"Physical systems",
"Mechanical engineering",
"Aerospace engineering",
"Fluid dynamics"
] |
40,406,762 | https://en.wikipedia.org/wiki/Combustion%20and%20Flame | Combustion and Flame is a monthly peer-reviewed scientific journal published by Elsevier on behalf of the Combustion Institute. It covers fundamental research on combustion science. The editors-in-chief are Fokion Egolfopoulos (University of Southern California) and Thierry Poinsot (Centre National de la Recherche Scientifique).
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.185, ranking it 9th out of 60 in the category of Thermodynamics.
See also
References
External links
Elsevier academic journals
Chemistry journals
Physics journals
Engineering journals
Academic journals established in 1957
English-language journals
Combustion
Monthly journals | Combustion and Flame | [
"Chemistry"
] | 147 | [
"Combustion"
] |
40,407,198 | https://en.wikipedia.org/wiki/Combustion%2C%20Explosion%2C%20and%20Shock%20Waves | Combustion, Explosion, and Shock Waves (Russian: Fizika Goreniya i Vzryva, Физика горения и взрыва) is the English-language translated version of the Russian peer-reviewed scientific journal, Fizika Goreniya i Vzryva. It covers the combustion of gases and materials, detonation processes, dispersal and transformation of substances, and shock-wave propagation. The editor-in-chief is Anatoly A. Vasil'ev.
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2020 impact factor of 0.946.
References
External links
Springer Science+Business Media academic journals
Physical chemistry journals
English-language journals
Russian-language journals
Magazines published in Novosibirsk
Academic journals established in 1965
Nauka academic journals
Bimonthly journals | Combustion, Explosion, and Shock Waves | [
"Chemistry"
] | 189 | [
"Physical chemistry journals",
"Physical chemistry stubs"
] |
47,959,004 | https://en.wikipedia.org/wiki/DFM%20analysis%20for%20stereolithography | In design for additive manufacturing (DFAM), there are both broad themes (which apply to many additive manufacturing processes) and optimizations specific to a particular AM process. Described here is DFM analysis for stereolithography, in which design for manufacturability (DFM) considerations are applied in designing a part (or assembly) to be manufactured by the stereolithography (SLA) process. In SLA, parts are built from a photocurable liquid resin that cures when exposed to a laser beam that scans across the surface of the resin (photopolymerization). Resins containing acrylate, epoxy, and urethane are typically used. Complex parts and assemblies can be directly made in one go, to a greater extent than in earlier forms of manufacturing such as casting, forming, metal fabrication, and machining. Realization of such a seamless process requires the designer to take in considerations of manufacturability of the part (or assembly) by the process. In any product design process, DFM considerations are important to reduce iterations, time and material wastage.
Challenges in stereolithography
Material
Excessive setup-specific material cost and lack of support for third-party resins are a major challenge with the SLA process. The choice of material (a design decision) is restricted to the supported resins; hence, the mechanical properties are also fixed, and dimensions are scaled up selectively to deal with expected stresses. Post-curing is done by further treatment with UV light and heat. Although advantageous to mechanical properties, the additional polymerization and cross-linking can result in shrinkage, warping and residual thermal stresses. Hence, the part shall be designed in its 'green' stage, i.e. the pre-treatment stage.
Setup and process
SLA process is an additive manufacturing process. Hence, design considerations such as orientation, process latitude, support structures etc. have to be considered.
Orientation affects the support structures, manufacturing time, part quality and part cost. Complex structures may fail to manufacture properly when the orientation is not feasible, resulting in undesirable stresses. This is where the DFM guidelines can be applied. Design feasibility for stereolithography can be validated analytically as well as on the basis of simulation and/or guidelines.
Rule-based DFM considerations
Rule-based considerations in DFM refer to certain criteria that the part has to meet in order to avoid failures during manufacturing. Given the layer-by-layer manufacturing technique the process follows, there isn't any constraint on the overall complexity that the part may have. But some rules have been developed through experience by the printer developer/academia which must be followed to ensure that the individual features that make up the part are within certain 'limits of feasibility'.
Printer constraints
Constraints/limitations in SLA manufacturing come from the printer's accuracy, layer thickness, speed of curing, speed of printing, etc. Various printer constraints are to be considered during design, such as the following (a minimal rule-check sketch is given after the list):
Minimum wall thickness (supported and unsupported): Wall thickness in geometries is limited by the resin resolution. Supported walls have ends connected to other walls; below a thickness limit, such walls may warp during peeling. Unsupported walls are even more liable to detachment, hence a higher limit applies in that case.
Overhang (maximum unsupported length and minimum unsupported angle): Overhangs are geometric features that are not supported inherently by the part and must be supported by support structures. There is a maximum unsupported length when such structures are not provided, in order to limit bending under self-weight. Angles that are too shallow result in a longer unsupported (projected) length; hence there is a minimum limit on the angle.
Maximum Bridge Span: To avoid sagging of beam-like structures that are supported only at the ends, the maximum span length of such structures shall be limited. Whenever this is not possible, width should be increased for compensation.
Minimum vertical pillar diameter: This ensures the pillar is not so slender that the feature becomes wavy.
Minimum dimensions of grooves and embossed detail: Grooves are imprinted (recessed) features and embossed details are shallow raised features on the part surface. Features printed with dimensions smaller than the limits are unrecognizable.
Minimum Clearance between geometries: This is to ensure the parts don't fuse.
Minimum hole diameter and radius of curvatures: Small curvatures that aren't realizable by print dimensions may close up or smooth out/fuse.
Minimum internal volumes and nominal diameters: Internal volumes that are too small may fill up with resin.
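The sketch below illustrates how such printer constraints can be checked automatically against a list of measured features. The feature representation, the limit values in LIMITS, and the helper check_features are illustrative placeholders rather than the specification of any real printer; actual DFM tools extract features and limits from the CAD model and the machine data.

```python
# Illustrative rule-based DFM check for SLA printer constraints.
# The limit values below are placeholders, not specifications of a real printer.
LIMITS = {
    "min_supported_wall_mm": 0.4,
    "min_unsupported_wall_mm": 0.6,
    "max_unsupported_overhang_mm": 1.0,
    "min_overhang_angle_deg": 19.0,
    "max_bridge_span_mm": 5.0,
    "min_pillar_diameter_mm": 0.5,
    "min_emboss_depth_mm": 0.1,
    "min_clearance_mm": 0.5,
    "min_hole_diameter_mm": 0.5,
}

def check_features(features):
    """Return human-readable violations of the rule-based limits.

    features is a list of (feature_type, measured_value_in_mm_or_deg) pairs;
    a real tool would extract these from the CAD geometry.
    """
    rules = {
        "supported_wall":   lambda v: v >= LIMITS["min_supported_wall_mm"],
        "unsupported_wall": lambda v: v >= LIMITS["min_unsupported_wall_mm"],
        "overhang_length":  lambda v: v <= LIMITS["max_unsupported_overhang_mm"],
        "overhang_angle":   lambda v: v >= LIMITS["min_overhang_angle_deg"],
        "bridge_span":      lambda v: v <= LIMITS["max_bridge_span_mm"],
        "pillar_diameter":  lambda v: v >= LIMITS["min_pillar_diameter_mm"],
        "emboss_depth":     lambda v: v >= LIMITS["min_emboss_depth_mm"],
        "clearance":        lambda v: v >= LIMITS["min_clearance_mm"],
        "hole_diameter":    lambda v: v >= LIMITS["min_hole_diameter_mm"],
    }
    violations = []
    for kind, value in features:
        rule = rules.get(kind)
        if rule is not None and not rule(value):
            violations.append(f"{kind} = {value} violates the rule-based limit")
    return violations

if __name__ == "__main__":
    part = [("unsupported_wall", 0.45), ("overhang_angle", 25.0), ("hole_diameter", 0.3)]
    for v in check_features(part):
        print(v)
```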
Support structures
A point needs support if:
It is the end point of an unsupported edge
The length of the overhang is more than a critical value
It is at the geometric center of an unsupported plane
While printing, support structures act as part of the design; hence, their limitations and advantages are kept in mind while designing. Major considerations include:
Support shallow-angle geometry: Shallow angles may result in improper resin curing (structural strength issues) unless supports are provided uniformly. Generally, beyond a certain angle (usually around 45 degrees), the surface doesn't require support.
Overhang base: Increase section thickness at base to avoid tearing. Avoid sharp transitions at overhang base.
Air pocket relief: Without supports, printing parts with a flat surface and holes in the geometry may create air bubbles. As the part prints, these air pockets can cause voids in the model. The support structures, in this case, create pathways through which the air bubbles can escape.
Structure compatibility: Consider Supports compatibility for internal volume surface.
Feature Orientation: Orient to ensure overhangs are well supported.
Part deposition orientation
Part orientation is a very crucial decision in DFM analysis for the SLA process. The build time, surface quality, and volume/number of support structures depend on it. In many cases, it is also possible to address manufacturability issues just by reorienting the part; for example, an overhanging geometry with a shallow angle may be oriented to ensure steep angles. Major considerations include the following (a simple orientation-scoring sketch is given after the list):
Surface finish improvement: Orient the part in such a way that features on critical surfaces are eliminated. From an algorithmic point of view, a free-form surface is decomposed into a combination of plane surfaces and a weight is calculated/assigned to each; the total of the weights is minimized for the best overall surface finish.
Build time reduction: A rough estimate of the build time is obtained by slicing: the build time is proportional to the sum of the surface areas of the slices (which can be approximated by the height of the part).
Support structure optimization: Supported area varies as per orientation. In some orientations, it is possible to reduce support area.
Easy peel-off: Reorienting the part so that the projected area of the layers varies gradually makes it easier to peel off the cured layer during printing. Orientation also helps in the removal of the support structures at later stages.
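The sketch below shows one way the orientation criteria above can be scored for a triangle mesh: the part height along the build direction serves as a build-time proxy and the area of steeply downward-facing triangles as a support-need proxy. The mesh representation, the 45-degree threshold, and the equal weighting of the two terms are illustrative assumptions, not a prescribed method.

```python
import numpy as np

def orientation_score(vertices, triangles, rotation, overhang_deg=45.0):
    """Score one candidate orientation of a triangle mesh (lower is better).

    vertices : (n, 3) array of points; triangles : (m, 3) array of vertex indices
    with outward-consistent winding assumed; rotation : (3, 3) rotation matrix.
    The build direction is +z.
    """
    v = vertices @ rotation.T
    height = v[:, 2].max() - v[:, 2].min()            # proxy for build time

    support_area = 0.0
    cos_limit = -np.cos(np.radians(overhang_deg))     # downward-facing beyond the limit
    for a, b, c in triangles:
        n_vec = np.cross(v[b] - v[a], v[c] - v[a])    # face normal scaled by 2*area
        area = 0.5 * np.linalg.norm(n_vec)
        if area > 0 and (n_vec[2] / (2.0 * area)) < cos_limit:
            support_area += area                      # proxy for support need
    # Equal weighting of the two proxies is an arbitrary illustrative choice.
    return height + support_area

if __name__ == "__main__":
    # A unit tetrahedron as a stand-in for a real part
    verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
    tris = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
    identity = np.eye(3)
    flip = np.diag([1.0, -1.0, -1.0])                 # rotate 180 degrees about x
    print("upright:", orientation_score(verts, tris, identity))
    print("flipped:", orientation_score(verts, tris, flip))
```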
Plan-based DFM considerations
Plan-based considerations in DFM refer to criteria that arise from the process plan. These are to be met in order to avoid failures during manufacturing of a part that may satisfy the rule-based criteria but still has manufacturing difficulties due to the sequence in which features are produced.
Geometric tailoring
Geometric tailoring bridges the mismatch of material properties and process differences described above. Both functionality and manufacturability issues are addressed. Functionality issues are addressed through 'tailoring' of the dimensions of the part to compensate for anomalies in stress and deflection behavior. Manufacturability issues are tackled through identification of difficult-to-manufacture geometric attributes (an approach used in most DFM handbooks) or through simulations of manufacturing processes. For RP-produced parts (as in SLA), the problem formulations are called material-process geometric tailoring (MPGT)/RP.
First, the designer specifies information such as: Parametric CAD model of the part; constraints and goals on functional, geometry, cost and time characteristics; analysis models for these constraints and goals; target values of goals; and preferences for the goals.
The DFM problem is then formulated as the designer fills in the MPGT template with this information and sends it to the manufacturer, who fills in the remaining manufacturing-relevant information. With the completed formulation, the manufacturer is now able to solve the DFM problem, performing geometric tailoring of the part design. Hence, the MPGT serves as the digital interface between the designer and the manufacturer.
Various Process Planning (PP) strategies have been developed for geometric tailoring in SLA process.
DFM frameworks
The constraints imposed by the manufacturing process are mapped onto the design. This helps in identification of DFM problems while exploring process plans by acting as a retrieval method. Various DFM frameworks are developed in literature. These frameworks help in various decision making steps such as:
Product-process fit: Ensuring that manufacturing issues are considered during the design stage gives insight into whether the SLA process is the right choice. Rapid prototyping can be done in various ways; the usual concerns are process cost and availability. Through this DFM framework, the designer can make the design changes necessary to ease the component's manufacturability in the SLA process. The framework hence ensures that the product is suitable for the manufacturing plan.
Feature recognition: This is done through integrated process planning tasks in commercial CAD/CAM software. This may include simulations of the manufacturing process to get an idea of the possible difficulties in a virtual manufacturing environment. Such integrated tools are in developmental stage.
Functionality considerations: In some cases, assemblies are printed directly instead of printing parts separately and assembling them. In such cases, phenomena such as the flow of the resin may affect the functionality drastically, which may not be addressed through rule-based analysis alone. In fact, rule-based analysis only ensures the bounds of the design; the dimensions of the final part must also be checked for manufacturability through plan-based considerations. Considerable research has been going on in this area over the past decade, and DFM frameworks are being developed and put into packages.
See also
Rapid prototyping
References
External links
DFM framework for design for additive manufacturing problems
Geometric Tailoring for Rapid Prototyping and Rapid Tooling
Dfm2U Live
3D printing
Design for X
Industrial design | DFM analysis for stereolithography | [
"Engineering"
] | 2,085 | [
"Industrial design",
"Design engineering",
"Design",
"Design for X"
] |
47,962,902 | https://en.wikipedia.org/wiki/Stress%20wave%20tomography | Acoustic or stress wave tomography is a non-destructive measurement method for the visualization of the structural integrity of a solid object. It is being used to test the preservation of wood or concrete, for example. The term acoustic tomography refers to the perceptible sounds that are caused by the mechanical impulses used for measuring. The term stress wave tomography describes the measurement method more accurately.
Features
The method is based on multiple measurements of the time of flight of stress waves between sensors which are connected to a two- or three-dimensional sampling grid. In the acoustic stress wave tomography of trees (see also: tree diagnosis), concussion sensors are attached in one or several planes around a trunk or a branch and their positions are measured. Impulses are induced through strokes of a hammer and the arrival times at the sensors are recorded.
The propagation speed of impulses in solid objects correlates with the density and the elastic modulus of the material (see also: speed of sound). Internal damage, like rot or cracks, slows down the impulses or forms barriers that render transition of impulses more difficult. This leads to longer propagation times and gets interpreted as reduced speed. Apparent velocity of sound is calculated by dividing the smallest distance between sensors by the time of flight between them.
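The apparent-velocity computation described above can be sketched as follows: each sensor pair's straight-line distance is divided by its measured time of flight. The sensor layout and travel times in the example are fabricated, illustrative values rather than real measurements, and a real tomograph applies an inversion/imaging step on top of this matrix.

```python
import numpy as np

def apparent_velocity_matrix(positions, times_of_flight):
    """Apparent velocities (m/s) for every sensor pair.

    positions: (n, 2) sensor coordinates in metres (one measurement plane)
    times_of_flight: (n, n) measured travel times in seconds (zero or NaN on the diagonal)
    """
    n = len(positions)
    v = np.full((n, n), np.nan)
    for i in range(n):
        for j in range(n):
            if i != j and times_of_flight[i, j] > 0:
                distance = np.linalg.norm(positions[i] - positions[j])
                v[i, j] = distance / times_of_flight[i, j]   # straight-ray assumption
    return v

if __name__ == "__main__":
    # Eight sensors evenly spaced around a 0.4 m diameter trunk (illustrative layout)
    angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    pos = 0.2 * np.column_stack([np.cos(angles), np.sin(angles)])
    # Fabricated travel times corresponding to a uniform 1500 m/s sound speed
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    tof = dist / 1500.0
    v = apparent_velocity_matrix(pos, tof)
    print(np.nanmin(v), np.nanmax(v))   # both ~1500 m/s for this synthetic case
```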
Special mathematical algorithms turn the matrix of velocities into a color or greyscale image (tomogram) which enables an assessment of the extent of damage. The precision of the method is limited by the number of sensors used. Image resolution is inferior to X-ray computed tomography due to the longer wavelength of the signals, but avoids issues with high energy radiation.
Devices of this kind are the Arbotom, the PiCUS acoustic tomograph and the Arborsonic 3D.
Literature
Tomikawa, Y., Iwase, Y., Arita, K., and Yamada, H., 1986. Nondestructive Inspection of a Wooden Pole Using Ultrasonic Computed Tomography. IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, 33 (4), 354–358.
Turpening, R.M., Zhu, Z., Matarese, J.R., and Lewis, C.E., 1999. Acoustic tree and wooden member imaging apparatus. Patent WO 99/44050 (1999.02.27)
Rinn, F. (1999): Vorrichtung zur Materialuntersuchung. / Device for investigation materials. International Patent PCT/DE00/01467 (1999.05.11).
Rust S.; Göcke, L. (2000): A new tomographic device for the non-destructive testing of standing trees. In: Proceedings of the 12th International Symposium on Nondestructive Testing of Wood. University of Western Hungary, Sopron, 13–15 September 2000, 233–238.
Rust, S. (2001): Baumdiagnose ohne Bohren. AFZ – Der Wald 56: 924–925.
Rinn, F. (2003): Technische Grundlagen der Impuls-Tomographie, Baumzeitung (8): 29–31.
Rabe, C., Ferner, D., Fink, S., Schwarze, F. (2004): Detection of decay in trees with stress waves and interpretation of acoustic tomograms. Arborcultural Journal 28 (1/2): 3–19
Haaben, C., Sander, C., Hapla, F., 2006: Untersuchung der Stammqualität verschiedener Laubholzarten mittels Schallimpuls-Tomographie. Holztechnologie 47 (6): 2–5
Solid mechanics
Forestry
Tomography
Trees
Imaging
Sound measurements
Materials testing | Stress wave tomography | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 805 | [
"Solid mechanics",
"Sound measurements",
"Physical quantities",
"Quantity",
"Materials science",
"Materials testing",
"Mechanics"
] |
47,971,685 | https://en.wikipedia.org/wiki/Extended%20evolutionary%20synthesis | The Extended Evolutionary Synthesis (EES) consists of a set of theoretical concepts argued to be more comprehensive than the earlier modern synthesis of evolutionary biology that took place between 1918 and 1942. The extended evolutionary synthesis was called for in the 1950s by C. H. Waddington, argued for on the basis of punctuated equilibrium by Stephen Jay Gould and Niles Eldredge in the 1980s, and was reconceptualized in 2007 by Massimo Pigliucci and Gerd B. Müller.
The extended evolutionary synthesis revisits the relative importance of different factors at play, examining several assumptions of the earlier synthesis, and augmenting it with additional causative factors. It includes multilevel selection, transgenerational epigenetic inheritance, niche construction, evolvability, and several concepts from evolutionary developmental biology.
Not all biologists have agreed on the need for, or the scope of, an extended synthesis. Many have collaborated on another synthesis in evolutionary developmental biology, which concentrates on developmental molecular genetics and evolution to understand how natural selection operated on developmental processes and deep homologies between organisms at the level of highly conserved genes.
The preceding "modern synthesis"
The modern synthesis was the widely accepted early-20th-century synthesis reconciling Charles Darwin's theory of evolution by natural selection and Gregor Mendel's theory of genetics in a joint mathematical framework. It established evolution as biology's central paradigm. The 19th-century ideas of natural selection by Darwin and Mendelian genetics were united by researchers who included Ronald Fisher, J. B. S. Haldane and Sewall Wright, the three founders of population genetics, between 1918 and 1932. Julian Huxley introduced the phrase "modern synthesis" in his 1942 book, Evolution: The Modern Synthesis.
Early history
During the 1950s, English biologist C. H. Waddington called for an extended synthesis based on his research on epigenetics and genetic assimilation.
In 1978, Michael J. D. White wrote about an extension of the modern synthesis based on new research from speciation. In the 1980s, entomologist Ryuichi Matsuda coined the term "pan-environmentalism" as an extended evolutionary synthesis which he saw as a fusion of Darwinism with neo-Lamarckism. He held that heterochrony is a main mechanism for evolutionary change and that novelty in evolution can be generated by genetic assimilation. An extended synthesis was also proposed by the Austrian zoologist Rupert Riedl, with the study of evolvability.
Gordon Rattray Taylor in his 1983 book The Great Evolution Mystery called for an extended synthesis, noting that the modern synthesis is only a subsection of a more comprehensive explanation for biological evolution still to be formulated. In 1985, biologist Robert G. B. Reid authored Evolutionary Theory: The Unfinished Synthesis, which argued that the modern synthesis with its emphasis on natural selection is an incomplete picture of evolution, and emergent evolution can explain the origin of genetic variation.
In 1988, ethologist John Endler wrote about developing a newer synthesis, discussing processes of evolution that he felt had been neglected.
In 2000, Robert L. Carroll called for an "expanded evolutionary synthesis" due to new research from molecular developmental biology, systematics, geology and the fossil record.
Punctuated equilibrium
In the 1980s, the American palaeontologists Stephen Jay Gould and Niles Eldredge argued for an extended synthesis based on their idea of punctuated equilibrium, the role of species selection shaping large scale evolutionary patterns and natural selection working on multiple levels extending from genes to species.
Contributions from evolutionary developmental biology
Some researchers in the field of evolutionary developmental biology proposed another synthesis. They argue that the modern and extended syntheses should mostly center on genes and suggest an integration of embryology with molecular genetics and evolution, aiming to understand how natural selection operates on gene regulation and deep homologies between organisms at the level of highly conserved genes, transcription factors and signalling pathways. By contrast, a different strand of evo-devo following an organismal approach contributes to the extended synthesis by emphasizing (amongst others) developmental bias (both through facilitation and constraint), evolvability, and inherency of form as primary factors in the evolution of complex structures and phenotypic novelties.
Recent history
The idea of an extended synthesis was relaunched in 2007 by Massimo Pigliucci, and Gerd B. Müller, with a book in 2010 titled Evolution: The Extended Synthesis, which has served as a launching point for work on the extended synthesis. This includes:
The role of prior configurations, genomic structures, and other traits in the organism in generating evolutionary variations.
How increasing dimensionality of fitness landscapes affects our view of speciation.
The role of multilevel selection in the major evolutionary transitions.
New types of inheritance, including cultural and epigenetic inheritance.
The way that organismal development and developmental plasticity channel evolutionary pathways and generate phenotypic novelty
How organisms modify the environments they belong to through niche construction.
Other processes such as evolvability, phenotypic plasticity, reticulate evolution, horizontal gene transfer, and symbiogenesis are said by proponents to have been excluded from or missed by the modern synthesis. The goal of Pigliucci's and Müller's extended synthesis is to take evolution beyond the gene-centered approach of population genetics to consider more organism- and ecology-centered approaches. Many of these causes are currently considered secondary in evolutionary causation, and proponents of the extended synthesis want them to be considered first-class evolutionary causes.
Michael R. Rose and Todd Oakley have called for a postmodern synthesis; they commented that "it is now abundantly clear that living things often attain a degree of genomic complexity far beyond simple models like the "gene library" genome of the Modern Synthesis". Biologist Eugene Koonin has suggested that the gradualism of the modern synthesis is unsustainable, as gene duplication, horizontal gene transfer and endosymbiosis play a pivotal role in evolution. Koonin commented that "the new developments in evolutionary biology by no account should be viewed as refutation of Darwin. On the contrary, they are widening the trails that Darwin blazed 150 years ago and reveal the extraordinary fertility of his thinking."
Arlin Stoltzfus and colleagues advocate mutational and developmental bias in the introduction of variation as an important source of orientation or direction in evolutionary change. They argue that bias in the introduction of variation was not formally recognized throughout the 20th century, due to the influence of neo-Darwinism on thinking about causation.
Organism-centered evolution
The early biologists of the organicist movement have influenced the modern extended evolutionary synthesis. Recent research has called for expanding the population genetic framework of evolutionary biology by a more organism-centered perspective. This has been described as "organism-centered evolution" which looks beyond the genome to the ways that individual organisms are participants in their own evolution. Philip Ball has written a research review on organism-centered evolution.
Rui Diogo has proposed a revision of evolutionary theory, which he has termed ONCE: Organic Nonoptimal Constrained Evolution. According to ONCE, evolution is mainly driven by the behavioural choices and persistence of organisms themselves, whilst natural selection plays a secondary role. ONCE cites examples of reciprocal causation between organism and the environment, Baldwin effect, organic selection, developmental bias and niche construction.
Predictions
The extended synthesis is characterized by its additional set of predictions that differ from the standard modern synthesis theory:
Change in phenotype can precede change in genotype
Changes in phenotype are predominantly positive, rather than neutral (see: neutral theory of molecular evolution)
Changes in phenotype are induced in many organisms, rather than one organism
Revolutionary change in phenotype can occur through mutation, facilitated variation or threshold events
Repeated evolution in isolated populations can be by convergent evolution or developmental bias
Adaptation can be caused by natural selection, environmental induction, non-genetic inheritance, learning and cultural transmission (see: Baldwin effect, meme, transgenerational epigenetic inheritance, ecological inheritance, non-Mendelian inheritance)
Rapid evolution can result from simultaneous induction, natural selection and developmental dynamics
Biodiversity can be affected by features of developmental systems such as differences in evolvability
Heritable variation is directed towards variants that are adaptive and integrated with phenotype
Niche construction is biased towards environmental changes that suit the constructor's phenotype, or that of its descendants, and enhance their fitness
Kin selection
Multilevel selection
Self-organization
Symbiogenesis
Testing
From 2016 to 2019, there was an organized project entitled "Putting The Extended Evolutionary Synthesis To The Test", supported by a 7.5 million USD grant from the John Templeton Foundation, supplemented with further money from participating institutions including Clark University, Indiana University, Lund University, Stanford University, University of Southampton and University of St Andrews.
Publications from the project include over 200 papers, a special issue, and an anthology on Evolutionary Causation. In 2019 a final report of the 2016–2019 consortium was published, Putting the Extended Evolutionary Synthesis to the Test.
The project was headed by Kevin N. Laland at the University of St Andrews and Tobias Uller at Lund University. According to Laland what the extended synthesis "really boils down to is recognition that, in addition to selection, drift, mutation and other established evolutionary processes, other factors, particularly developmental influences, shape the evolutionary process in important ways."
Status
Biologists disagree on the need for an extended synthesis. Opponents contend that the modern synthesis is able to fully account for the newer observations, whereas others criticize the extended synthesis for not being radical enough. Proponents think that the conceptions of evolution at the core of the modern synthesis are too narrow and that even when the modern synthesis allows for the ideas in the extended synthesis, using the modern synthesis affects the way that biologists think about evolution. For example, Denis Noble says that using terms and categories of the modern synthesis distorts the picture of biology that modern experimentation has discovered. Proponents therefore claim that the extended synthesis is necessary to help expand the conceptions and framework of how evolution is considered throughout the biological disciplines. In 2022, the John Templeton Foundation published a review of recent literature.
References
Further reading
Defence of the extended synthesis
Gilbert, Scott F. (2000). "A New Evolutionary Synthesis". In Developmental Biology, 6th edition. Sinauer.
Lange, Axel (2023) Extending the Evolutionary Synthesis. Darwin's Legacy Redesigned. CRC Press. DOI https://doi.org/10.1201/9781003341413.
Lodé, Thierry (2013). Manifeste pour une écologie évolutive, Darwin et après. Eds Odile Jacob, Paris.
Messerly, J.G. (1992). Piaget's conception of evolution: Beyond Darwin and Lamarck. Lanham, MD: Rowman & Littlefield. .
Postdarwinism: "The New Synthesis". A review of Ecological Developmental Biology: Integrating Epigenetics, Medicine, and Evolution, by Scott F. Gilbert and David Epel (Sinauer, 2009).
"Post-modern synthesis?" A review of Developmental Plasticity and Evolution by Mary Jane West-Eberhard (Oxford University Press, 2003).
Criticism of the extended synthesis
Dickens, Thomas; Rahman, Qazi. (2012). "The extended evolutionary synthesis and the role of soft inheritance in evolution". Proceedings of the Royal Society: B biological sciences, 279 (1740). pp. 2913–2921.
External links
Extended Evolutionary Synthesis
Should Evolutionary Theory Evolve?, By Bob Grant, January 1, 2010 The Scientist.
Evolution
Biology theories
History of biology | Extended evolutionary synthesis | [
"Biology"
] | 2,381 | [
"Biology theories"
] |
35,187,509 | https://en.wikipedia.org/wiki/Motor%20constants | The motor size constant (K_M) and motor velocity constant (K_V, alternatively called the back EMF constant) are values used to describe characteristics of electrical motors.
Motor constant
K_M is the motor constant (sometimes, motor size constant). In SI units, the motor constant is expressed in newton metres per square root watt (N·m/√W):
K_M = \frac{\tau}{\sqrt{P}}
where
τ is the motor torque (SI unit: newton–metre)
P is the resistive power loss (SI unit: watt)
The motor constant is winding independent (as long as the same conductive material is used for the wires); e.g., winding a motor with 6 turns of 2 parallel wires instead of 12 turns of a single wire will double the velocity constant, K_V, but K_M remains unchanged. K_M can be used for selecting the size of a motor to use in an application; K_V can be used for selecting the winding to use in the motor.
Since the torque τ is the current I multiplied by K_T, K_M becomes
K_M = \frac{K_T I}{\sqrt{I^2 R}} = \frac{K_T}{\sqrt{R}}
where
I is the current (SI unit: ampere)
R is the resistance (SI unit: ohm)
K_T is the motor torque constant (SI unit: newton–metre per ampere, N·m/A), see below
If two motors with the same K_M and torque work in tandem, with rigidly connected shafts, the K_V of the system is still the same, assuming a parallel electrical connection. The K_M of the combined system increases by a factor of √2, because both the torque and the losses double. Alternatively, the system could run at the same torque as before, with torque and current split equally across the two motors, which halves the resistive losses.
Units
The motor constant may be provided in one of several units.
Motor velocity constant, back EMF constant
K_V is the motor velocity, or motor speed, constant (not to be confused with kV, the symbol for kilovolt), measured in revolutions per minute (RPM) per volt or radians per volt second (rad/V·s):
K_V = \frac{\omega_\text{no-load}}{V_\text{peak}}
The K_V rating of a brushless motor is the ratio of the motor's unloaded rotational speed (measured in RPM) to the peak (not RMS) voltage on the wires connected to the coils (the back EMF). For example, an unloaded motor of K_V = 5,700 rpm/V supplied with 11.1 V will run at a nominal speed of 63,270 rpm (= 5,700 rpm/V × 11.1 V).
The motor may not reach this theoretical speed because there are non-linear mechanical losses. On the other hand, if the motor is driven as a generator, the no-load voltage between terminals is perfectly proportional to the RPM and true to the K_V of the motor/generator.
The terms K_e and K_b are also used, as are the terms back EMF constant and the generic electrical constant. In contrast to K_V, the value K_e is often expressed in the SI units volt–seconds per radian (V⋅s/rad), and is thus an inverse measure of K_V. Sometimes it is expressed in the non-SI units volts per kilorevolution per minute (V/krpm).
The field flux may also be integrated into the formula:
E_b = K\,\phi\,\omega
where E_b is the back EMF, K is the constant, φ is the flux, and ω is the angular velocity.
By Lenz's law, a running motor generates a back-EMF proportional to the speed. Once the motor's rotational velocity is such that the back-EMF is equal to the battery voltage (also called DC line voltage), the motor reaches its limit speed.
Motor torque constant
K_T is the torque produced divided by the armature current. It can be calculated from the motor velocity constant K_V; with K_V expressed in rad/(V·s), K_T = 1/K_V, so that
\tau = K_T \, I_a
where I_a is the armature current of the machine (SI unit: ampere). K_T is primarily used to calculate the armature current for a given torque demand:
I_a = \frac{\tau}{K_T}
The SI units for the torque constant are newton meters per ampere (N·m/A). Since 1 N·m = 1 J, and 1 A = 1 C/s, then 1 N·m/A = 1 J·s/C = 1 V·s (same units as back EMF constant).
The relationship between K_T and K_V is not intuitive, to the point that many people simply assert that torque and K_V are not related at all. An analogy with a hypothetical linear motor can help to convince that it is true. Suppose that a linear motor has a K_V of 2 (m/s)/V, that is, the linear actuator generates one volt of back-EMF when moved (or driven) at a rate of 2 m/s. Conversely, v = 2V (v is the speed of the linear motor, V is the voltage).
The useful power of this linear motor is P = V·I, P being the power, V the useful voltage (applied voltage minus back-EMF voltage), and I the current. But, since power is also equal to force multiplied by speed, the force of the linear motor is F = P/v = V·I/(2V), or F = I/2. The inverse relationship between force per unit current and the K_V of a linear motor has been demonstrated.
To translate this model to a rotating motor, one can simply attribute an arbitrary diameter to the motor armature e.g. 2 m and assume for simplicity that all force is applied at the outer perimeter of the rotor, giving 1 m of leverage.
Now, supposing that K_V (angular speed per unit voltage) of the motor is 3600 rpm/V, it can be translated to "linear" by multiplying by 2π m (the perimeter of the rotor) and dividing by 60, since angular speed is per minute. This gives a linear K_V of about 377 (m/s)/V.
Now, if this motor is fed with a current of 2 A and assuming that the back-EMF is exactly 2 V, it is rotating at 7200 rpm and the mechanical power is 4 W, and the force on the rotor is 4 W / 754 m/s, or 0.0053 N. The torque on the shaft is 0.0053 N⋅m at 2 A because of the assumed radius of the rotor (exactly 1 m). Assuming a different radius would change the linear K_V, but it would not change the final torque result. To check the result, remember that P = τ·ω.
So, a motor with K_V = 3600 rpm/V will generate 0.00265 N⋅m of torque per ampere of current, regardless of its size or other characteristics. This is exactly the value estimated by the formula stated earlier.
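A short numerical check of these relationships is sketched below: it converts a K_V given in rpm/V to rad/(V·s), takes K_T as its reciprocal, and reproduces the 3600 rpm/V example above. The winding resistance used in the K_M line is an illustrative assumption.

```python
import math

def kv_rpm_per_volt_to_rad_per_volt_sec(kv_rpm):
    """Convert a velocity constant from rpm/V to rad/(V*s)."""
    return kv_rpm * 2.0 * math.pi / 60.0

def torque_constant_from_kv(kv_rpm):
    """K_T in N*m/A, as the reciprocal of K_V expressed in rad/(V*s)."""
    return 1.0 / kv_rpm_per_volt_to_rad_per_volt_sec(kv_rpm)

def motor_size_constant(k_t, winding_resistance):
    """K_M in N*m/sqrt(W), i.e. torque per square root of resistive power loss."""
    return k_t / math.sqrt(winding_resistance)

if __name__ == "__main__":
    kv = 3600.0                        # rpm/V, as in the worked example above
    k_t = torque_constant_from_kv(kv)
    print(f"K_T = {k_t:.5f} N*m/A")    # ~0.00265 N*m/A
    print(f"Torque at 2 A: {2.0 * k_t:.4f} N*m")
    # Illustrative winding resistance of 0.05 ohm (assumed value)
    print(f"K_M = {motor_size_constant(k_t, 0.05):.4f} N*m/sqrt(W)")
```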
References
External links
Electric motors | Motor constants | [
"Technology",
"Engineering"
] | 1,270 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
35,188,921 | https://en.wikipedia.org/wiki/Bellman%20pseudospectral%20method | The Bellman pseudospectral method is a pseudospectral method for optimal control based on Bellman's principle of optimality. It is part of the larger theory of pseudospectral optimal control, a term coined by Ross. The method is named after Richard E. Bellman. It was introduced by Ross et al. first as a means to solve multiscale optimal control problems, and later expanded to obtain suboptimal solutions for general optimal control problems.
Theoretical foundations
The multiscale version of the Bellman pseudospectral method is based on the spectral convergence property of the Ross–Fahroo pseudospectral methods. That is, because the Ross–Fahroo pseudospectral method converges at an exponentially fast rate, pointwise convergence to a solution is obtained at a very low number of nodes even when the solution has high-frequency components. This aliasing phenomenon in optimal control was first discovered by Ross et al. Rather than use signal processing techniques to anti-alias the solution, Ross et al. proposed that Bellman's principle of optimality can be applied to the converged solution to extract information between the nodes. Because the Gauss–Lobatto nodes cluster at the boundary points, Ross et al. suggested that if the node density around the initial conditions satisfies the Nyquist–Shannon sampling theorem, then the complete solution can be recovered by solving the optimal control problem in a recursive fashion over piecewise segments known as Bellman segments.
In an expanded version of the method, Ross et al. proposed that the method could also be used to generate feasible solutions that are not necessarily optimal. In this version, one can apply the Bellman pseudospectral method at an even lower number of nodes, even with the knowledge that the solution may not have converged to the optimal one. In this situation, one obtains a feasible solution.
A remarkable feature of the Bellman pseudospectral method is that it automatically determines several measures of suboptimality based on the original pseudospectral cost and the cost generated by the sum of the Bellman segments.
Computational efficiency
One of the computational advantages of the Bellman pseudospectral method is that it allows one to escape Gaussian rules in the distribution of node points. That is, in a standard pseudospectral method, the distribution of node points is Gaussian (typically Gauss–Lobatto for finite horizon and Gauss–Radau for infinite horizon). The Gaussian points are sparse in the middle of the interval (the middle is defined in a shifted sense for infinite-horizon problems) and dense at the boundaries. The second-order accumulation of points near the boundaries has the effect of wasting nodes. The Bellman pseudospectral method takes advantage of the node accumulation at the initial point to anti-alias the solution and discards the remainder of the nodes. Thus the final distribution of nodes is non-Gaussian and dense while the computational method retains a sparse structure.
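The boundary clustering of Gauss–Lobatto points can be seen numerically. The sketch below computes Legendre–Gauss–Lobatto nodes (the endpoints ±1 plus the roots of the derivative of the Nth Legendre polynomial) and compares the node spacing at the boundary with the spacing near the centre; it only illustrates the node distribution discussed above and is not an implementation of the Bellman method itself.

```python
import numpy as np
from numpy.polynomial import legendre

def lgl_nodes(n):
    """Legendre-Gauss-Lobatto nodes on [-1, 1] (n + 1 points)."""
    # Coefficients of the Legendre polynomial P_n in the Legendre basis
    c = np.zeros(n + 1)
    c[n] = 1.0
    # Interior LGL nodes are the roots of the derivative P_n'(x)
    interior = legendre.legroots(legendre.legder(c))
    return np.concatenate(([-1.0], np.sort(interior.real), [1.0]))

if __name__ == "__main__":
    nodes = lgl_nodes(32)
    spacing = np.diff(nodes)
    print(f"smallest spacing (at the boundary): {spacing[0]:.5f}")
    print(f"largest spacing (near the centre):  {spacing.max():.5f}")
    # The boundary spacing shrinks roughly as O(1/N^2) while the central spacing
    # shrinks as O(1/N), which is the second-order accumulation referred to above.
```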
Applications
The Bellman pseudospectral method was first applied by Ross et al. to solve the challenging problem of very low thrust trajectory optimization. It has been successfully applied to solve a practical problem of generating very high accuracy solutions to a trans-Earth-injection problem of bringing a space capsule from a lunar orbit to a pin-pointed Earth-interface condition for successful reentry.
The Bellman pseudospectral method is most commonly used as an additional check on the optimality of a pseudospectral solution generated by the Ross–Fahroo pseudospectral methods. That is, in addition to the use of Pontryagin's minimum principle in conjunction with the solutions obtained by the Ross–Fahroo pseudospectral methods, the Bellman pseudospectral method is used as a primal-only test on the optimality of the computed solution.
See also
Legendre pseudospectral method
Chebyshev pseudospectral method
Pseudospectral knotting method
References
Optimal control
Numerical analysis
Control theory | Bellman pseudospectral method | [
"Mathematics"
] | 828 | [
"Applied mathematics",
"Control theory",
"Computational mathematics",
"Mathematical relations",
"Numerical analysis",
"Approximations",
"Dynamical systems"
] |
35,189,720 | https://en.wikipedia.org/wiki/Outer%20membrane%20polysaccharide%20transporter | The extracellular polysaccharide colanic acid is produced by species of the family Enterobacteriaceae. In Escherichia coli strain K12 the colanic acid cluster comprises 19 genes. The wzx gene encodes a protein with multiple transmembrane segments that may function in export of the colanic acid repeat unit from the cytoplasm into the periplasm in a process analogous to O-unit export. The colanic acid gene clusters may be involved in the export of polysaccharide from the cell.
References
Outer membrane proteins
Protein families | Outer membrane polysaccharide transporter | [
"Chemistry",
"Biology"
] | 122 | [
"Biotechnology stubs",
"Protein classification",
"Biochemistry stubs",
"Biochemistry",
"Protein families"
] |
35,191,618 | https://en.wikipedia.org/wiki/Balconet | Balconet or balconette is an architectural term to describe a false balcony, or railing at the outer plane of a window-opening reaching to the floor, and having, when the window is open, the appearance of a balcony. They are common in France, Portugal, Spain, and Italy. They are often referred to as Juliet balconies after the scene from Shakespeare's play Romeo and Juliet. The wall-opening appearing alongside a balconette is referred to as French window.
A prominent example of a balconette is on the Palazzo Labia in Venice.
Balconette brassieres
The term has also been applied to a style of brassiere featuring low-cut cups and wide set straps that give the appearance of a square neckline. The name "balconette" came from men in the balcony of a theatre looking down upon women. A balconette bra could not be seen from above.
Materials
Balconets, or Juliet balconies, can be made from various materials. While they were traditionally often made of stone, modern advances have provided more options for creating aesthetically pleasing balconets. Newer Juliet balconies can range from glass panels to stainless steel, providing a more modern look to a building.
See also
French window
References
External links
Architectural elements | Balconet | [
"Technology",
"Engineering"
] | 267 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
35,194,042 | https://en.wikipedia.org/wiki/Raynaud%27s%20isogeny%20theorem | In mathematics, Raynaud's isogeny theorem, proved by , relates the Faltings heights of two isogeneous elliptic curves.
References
Elliptic curves
Theorems in algebraic geometry | Raynaud's isogeny theorem | [
"Mathematics"
] | 38 | [
"Theorems in algebraic geometry",
"Number theory stubs",
"Number theory",
"Theorems in geometry"
] |
35,194,136 | https://en.wikipedia.org/wiki/Tate%27s%20isogeny%20theorem | In mathematics, Tate's isogeny theorem, proved by , states that two abelian varieties over a finite field are isogeneous if and only if their Tate modules are isomorphic (as Galois representations).
References
Abelian varieties
Theorems in algebraic geometry | Tate's isogeny theorem | [
"Mathematics"
] | 55 | [
"Theorems in algebraic geometry",
"Theorems in geometry"
] |
35,195,419 | https://en.wikipedia.org/wiki/Kronecker%27s%20congruence | In mathematics, Kronecker's congruence, introduced by Kronecker, states that
where p is a prime and Φp(x,y) is the modular polynomial of order p, given by
for j the elliptic modular function and τ running through classes of imaginary quadratic integers of discriminant n.
References
Modular arithmetic
Theorems in number theory | Kronecker's congruence | [
"Mathematics"
] | 76 | [
"Number theory stubs",
"Theorems in number theory",
"Arithmetic",
"Mathematical problems",
"Mathematical theorems",
"Modular arithmetic",
"Number theory"
] |
35,195,665 | https://en.wikipedia.org/wiki/Hurwitz%20class%20number | In mathematics, the Hurwitz class number H(N), introduced by Adolf Hurwitz, is a modification of the class number of positive definite binary quadratic forms of discriminant –N, where forms are weighted by 2/g for g the order of their automorphism group, and where H(0) = –1/12.
Zagier showed that the Hurwitz class numbers are the coefficients of a mock modular form of weight 3/2.
References
Number theory | Hurwitz class number | [
"Mathematics"
] | 99 | [
"Discrete mathematics",
"Number theory"
] |
39,109,817 | https://en.wikipedia.org/wiki/Kjartansson%20constant%20Q%20model | The Kjartansson constant Q model uses mathematical Q models to explain how the earth responds to seismic waves and is widely used in seismic geophysical applications. Because these models satisfies the Krämers–Krönig relations they should be preferable to the Kolsky model in seismic inverse Q filtering. Kjartanssons model is a simplification of the first of Azimi Q models (1968).
Kjartansson constant Q model
Kjartansson's model is a simplification of the first of Azimi's Q models. Azimi proposed his first model together with Strick (1967); it has attenuation proportional to |w|^(1 − γ) and is:
The phase velocity is written:
If the phase velocity goes to infinity in the first term on the right, we simply have:
This is Kjartansson's constant Q model.
Computations
Studying the attenuation coefficient and phase velocity, and comparing them with Kolsky's Q model, the results are plotted in Fig. 1. The data for the models are taken from Ursin and Toverud; a sketch reproducing the curves from these data is given below the parameter values.
Data for the Kolsky model (blue):
cr = 2000 m/s, Qr = 100, wr = 2100
Data for Kjartansson constant Q model (green):
a1 = 2.5 × 10 −6, γ = 0.0031
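The model formulas themselves did not survive in the text above, so the sketch below assumes the standard forms as given in the comparison by Ursin and Toverud: for the Kolsky model, α(w) = w/(2·cr·Qr) and c(w) = cr·[1 + ln(w/wr)/(π·Qr)]; for the Kjartansson constant Q model (the limit of Azimi's first model), α(w) = a1·|w|^(1−γ) and 1/c(w) = a1·cot(γπ/2)·|w|^(−γ). These expressions, and the use of wr exactly as printed, are assumptions rather than statements from the article.

```python
# Minimal sketch comparing the Kolsky and Kjartansson constant-Q models using the
# parameter values listed above. The model equations are assumed standard forms,
# since the article's own formulas were lost in extraction.
import numpy as np

# Kolsky model parameters (as listed above)
c_r, Q_r, w_r = 2000.0, 100.0, 2100.0   # w_r taken as printed; it may be garbled

# Kjartansson / Azimi first model parameters (as listed above)
a1, gamma = 2.5e-6, 0.0031

w = np.logspace(1, 4, 200)              # angular frequencies [rad/s]

# Kolsky model: nearly constant Q
alpha_kolsky = w / (2.0 * c_r * Q_r)
c_kolsky = c_r * (1.0 + np.log(w / w_r) / (np.pi * Q_r))

# Kjartansson constant-Q model (phase velocity from the c_inf -> infinity limit)
alpha_kjart = a1 * w ** (1.0 - gamma)
c_kjart = np.tan(gamma * np.pi / 2.0) / a1 * w ** gamma

print("Q implied by gamma:", 1.0 / np.tan(np.pi * gamma))   # ~100, consistent with Q_r
for wi, ck, cj in zip(w[::50], c_kolsky[::50], c_kjart[::50]):
    print(f"w={wi:9.1f}  c_Kolsky={ck:7.1f} m/s  c_Kjartansson={cj:7.1f} m/s")
```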
Notes
References
Seismology measurement
Geophysics | Kjartansson constant Q model | [
"Physics"
] | 291 | [
"Applied and interdisciplinary physics",
"Geophysics"
] |
39,112,255 | https://en.wikipedia.org/wiki/Critical%20plane%20analysis | Critical plane analysis refers to the analysis of stresses or strains as they are experienced by a particular plane in a material, as well as the identification of which plane is likely to experience the most extreme damage. Critical plane analysis is widely used in engineering to account for the effects of cyclic, multiaxial load histories on the fatigue life of materials and structures. When a structure is under cyclic multiaxial loading, it is necessary to use multiaxial fatigue criteria that account for the multiaxial loading. If the cyclic multiaxial loading is nonproportional it is mandatory to use a proper multiaxial fatigue criteria. The multiaxial criteria based on the Critical Plane Method are the most effective criteria.
For the plane stress case, the orientation of the plane may be specified by an angle in the plane, and the stresses and strains acting on this plane may be computed via Mohr's circle. For the general 3D case, the orientation may be specified via a unit normal vector of the plane, and the associated stresses and strains may be computed via a tensor coordinate transformation law.
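As a small illustration of the plane stress case described above, the sketch below rotates a (σx, σy, τxy) load history onto a set of candidate planes and selects the plane with the largest shear stress amplitude. The load history and the "damage" measure (plain shear amplitude) are illustrative assumptions only, not any specific published multiaxial criterion.

```python
# Minimal sketch of a critical plane scan for a plane stress load history.
# The damage measure (shear stress amplitude per plane) is purely illustrative;
# practical criteria combine shear and normal terms with material constants.
import numpy as np

def plane_stresses(sx, sy, txy, theta):
    """Normal and shear stress on a plane whose normal is rotated by theta."""
    c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
    s_n = 0.5 * (sx + sy) + 0.5 * (sx - sy) * c2 + txy * s2
    tau = -0.5 * (sx - sy) * s2 + txy * c2
    return s_n, tau

# Example non-proportional history: two out-of-phase sinusoidal channels [MPa]
t = np.linspace(0.0, 1.0, 200)
sx = 200.0 * np.sin(2 * np.pi * t)
sy = np.zeros_like(t)
txy = 100.0 * np.sin(2 * np.pi * t + np.pi / 2)

best_theta, best_amp = None, -1.0
for theta in np.radians(np.arange(0.0, 180.0, 1.0)):   # candidate planes
    _, tau = plane_stresses(sx, sy, txy, theta)
    amp = 0.5 * (tau.max() - tau.min())                  # shear amplitude on this plane
    if amp > best_amp:
        best_theta, best_amp = theta, amp

print(f"critical plane at {np.degrees(best_theta):.0f} deg, "
      f"shear amplitude {best_amp:.1f} MPa")
```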
The chief advantage of critical plane analysis over earlier approaches like Sines rule, or like correlation against maximum principal stress or strain energy density, is the ability to account for damage on specific material planes. This means that cases involving multiple out-of-phase load inputs, or crack closure can be treated with high accuracy. Additionally, critical plane analysis offers the flexibility to adapt to a wide range of materials. Critical plane models for both metals and polymers are widely used.
History
Modern procedures for critical plane analysis trace back to research published in 1973 in which M. W. Brown and K. J. Miller observed that fatigue life under multiaxial conditions is governed by the experience of the plane receiving the most damage, and that both tension and shear loads on the critical plane must be considered.
References
External links
Book on Multiaxial Fatigue (by Darrell Socie and Gary Marquis)
Class notes on Multiaxial Fatigue (by Ali Fatemi)
Multiaxial Fatigue Theory (by MSC.Fatigue' Help)
Metal FE-Based Fatigue Analysis software: nCode DesignLife (by HBM)
Metal FE-Based Fatigue Analysis software: fe-safe (by Dassault Systemes SIMULIA)
Metal FE-Based Fatigue Analysis software: FEMFAT (by Magna Powertrain)
Metal FE-Based Fatigue Analysis software: MSC.Fatigue (by MSC Software)
Metal FE-Based Fatigue Analysis software: LMS Virtual.Lab Durability (by LMS)
Metal FE-Based Fatigue Analysis software: NX Durability (by Siemens)
Metal FE-Based Fatigue Analysis software: winLIFE (by Steinbeis-Transferzentrum)
Metal FE-Based Fatigue Analysis software: fatiga (by Fatec Engineering)
Rubber Fatigue Analysis software: Endurica (by Endurica LLC)
Materials degradation
Fracture mechanics
Mechanical failure modes
Solid mechanics
Mechanical failure | Critical plane analysis | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 590 | [
"Structural engineering",
"Solid mechanics",
"Mechanical failure modes",
"Fracture mechanics",
"Technological failures",
"Materials science",
"Mechanics",
"Mechanical engineering",
"Materials degradation",
"Mechanical failure"
] |
39,113,753 | https://en.wikipedia.org/wiki/Standard%20step%20method | The standard step method (STM) is a computational technique utilized to estimate one-dimensional surface water profiles in open channels with gradually varied flow under steady state conditions. It uses a combination of the energy, momentum, and continuity equations to determine water depth with a given a friction slope , channel slope , channel geometry, and also a given flow rate. In practice, this technique is widely used through the computer program HEC-RAS, developed by the US Army Corps of Engineers Hydrologic Engineering Center (HEC).
Open channel flow fundamentals
The energy equation used for open channel flow computations is a simplification of the Bernoulli Equation (See Bernoulli Principle), which takes into account pressure head, elevation head, and velocity head. (Note, energy and head are synonymous in Fluid Dynamics. See Pressure head for more details.) In open channels, it is assumed that changes in atmospheric pressure are negligible, therefore the “pressure head” term used in Bernoulli’s Equation is eliminated. The resulting energy equation is shown below:
Equation 1
For a given flow rate and channel geometry, there is a relationship between flow depth and total energy. This is illustrated below in the plot of energy vs. flow depth, widely known as an E-y diagram. In this plot, the depth where the minimum energy occurs is known as the critical depth. Consequently, this depth corresponds to a Froude Number of 1. Depths greater than critical depth are considered “subcritical” and have a Froude Number less than 1, while depths less than critical depth are considered supercritical and have Froude Numbers greater than 1.
Equation 2
Under steady state flow conditions (e.g. no flood wave), open channel flow can be subdivided into three types of flow: uniform flow, gradually varying flow, and rapidly varying flow. Uniform flow describes a situation where flow depth does not change with distance along the channel. This can only occur in a smooth channel that does not experience any changes in flow, channel geometry, roughness or channel slope. During uniform flow, the flow depth is known as normal depth (yn). This depth is analogous to the terminal velocity of an object in free fall, where gravity and frictional forces are in balance (Moglen, 2013). Typically, this depth is calculated using the Manning formula. Gradually varied flow occurs when the change in flow depth per change in flow distance is very small. In this case, hydrostatic relationships developed for uniform flow still apply. Examples of this include the backwater behind an in-stream structure (e.g. dam, sluice gate, weir, etc.), when there is a constriction in the channel, and when there is a minor change in channel slope. Rapidly varied flow occurs when the change in flow depth per change in flow distance is significant. In this case, hydrostatic relationships are not appropriate for analytical solutions, and continuity of momentum must be employed. Examples of this include large changes in slope like a spillway, abrupt constriction/expansion of flow, or a hydraulic jump.
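As a small numerical illustration of the energy–depth relationship and critical depth described above, the sketch below assumes a rectangular channel of width b, for which the specific energy is E = y + Q²/(2g·b²·y²) and the critical depth is yc = (q²/g)^(1/3) with q = Q/b. The channel width and discharge are made-up example values.

```python
# Minimal sketch: specific energy curve and critical depth for a rectangular channel.
# Channel width and discharge are assumed example values.
import numpy as np

g = 9.81          # gravitational acceleration [m/s^2]
b = 5.0           # channel width [m] (assumed)
Q = 20.0          # discharge [m^3/s] (assumed)
q = Q / b         # unit discharge [m^2/s]

def specific_energy(y):
    """E = y + V^2/(2g) for a rectangular channel of width b."""
    v = Q / (b * y)
    return y + v**2 / (2.0 * g)

y_c = (q**2 / g) ** (1.0 / 3.0)          # critical depth (Froude number = 1)
y = np.linspace(0.2, 3.0, 500)
E = specific_energy(y)

print(f"critical depth  yc = {y_c:.3f} m")
print(f"E at yc            = {specific_energy(y_c):.3f} m")
print(f"minimum of E(y) at y = {y[np.argmin(E)]:.3f} m")   # matches yc
froude = (Q / (b * y_c)) / np.sqrt(g * y_c)
print(f"Froude number at yc = {froude:.3f}")                # ~1
```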
Water surface profiles (gradually varied flow)
Typically, the STM is used to develop “surface water profiles,” or longitudinal representations of channel depth, for channels experiencing gradually varied flow. These transitions can be classified based on reach condition (mild or steep), and also the type of transition being made. Mild reaches occur where normal depth is subcritical (yn > yc) while steep reaches occur where normal depth is supercritical (yn<yc). The transitions are classified by zone. (See figure 3.)
Figure 3. This figure illustrates the different classes of surface water profiles experienced in steep and mild reaches during gradually varied flow conditions. Note: The Steep Reach column should be labeled "Steep Reach (yn<yc)".
The above surface water profiles are based on the governing equation for gradually varied flow (seen below)
Equation 3
This equation (and associated surface water profiles) is based on the following assumptions:
The slope is relatively small
Channel cross-section is known at stations of interest
There is a hydrostatic pressure distribution
Standard step method calculation
The STM numerically solves equation 3 through an iterative process. This can be done using the bisection or Newton-Raphson Method, and is essentially solving for total head at a specified location using equations 4 and 5 by varying depth at the specified location.
Equation 4
Equation 5
In order to use this technique, it is important to note that you must have some understanding of the system you are modeling. For each gradually varied flow transition, you must know both boundary conditions and you must also calculate the length of that transition. (e.g. For an M1 Profile, you must find the rise at the downstream boundary condition, the normal depth at the upstream boundary condition, and also the length of the transition.) To find the length of the gradually varied flow transitions, iterate the "step length", instead of height, at the boundary condition height until equations 4 and 5 agree. (e.g. For an M1 Profile, position 1 would be the downstream condition and you would solve for position two where the height is equal to normal depth.)
Newton–Raphson numerical method
Computer programs like Excel contain iteration or goal-seek functions that can automatically calculate the actual depth instead of requiring manual iteration.
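The sketch below illustrates one standard step between two stations in a prismatic rectangular channel, using Manning's equation for the friction slope and a simple Newton iteration (with a numerical derivative) on the energy-balance residual. The channel data are made-up example values, not those of the worked sluice-gate problem below, and the rectangular geometry and averaged friction slope are modelling assumptions.

```python
# Minimal sketch of a single standard step: given the depth y1 at a downstream
# station, solve for the depth y2 a distance dx upstream so that the energy
# equation with an averaged Manning friction slope is satisfied.
# Rectangular channel; all numbers are assumed example values.
g = 9.81
b, n, S0, Q = 5.0, 0.02, 0.001, 20.0       # width, Manning n, bed slope, discharge
dx = 50.0                                   # step length [m]

def velocity(y):       return Q / (b * y)
def hyd_radius(y):     return (b * y) / (b + 2.0 * y)
def friction_slope(y): return (n * velocity(y) / hyd_radius(y) ** (2.0 / 3.0)) ** 2
def spec_energy(y):    return y + velocity(y) ** 2 / (2.0 * g)

def residual(y2, y1):
    """Total head balance H2 = H1 + hf with z2 - z1 = S0*dx, i.e.
       E2 - E1 - (Sf_avg - S0)*dx = 0."""
    sf_avg = 0.5 * (friction_slope(y1) + friction_slope(y2))
    return spec_energy(y2) - spec_energy(y1) - (sf_avg - S0) * dx

def step_upstream(y1, y_guess):
    """Newton iteration with a numerical derivative of the residual."""
    y2 = y_guess
    for _ in range(50):
        f = residual(y2, y1)
        dfdy = (residual(y2 + 1e-6, y1) - f) / 1e-6
        y_new = y2 - f / dfdy
        if abs(y_new - y2) < 1e-8:
            return y_new
        y2 = y_new
    return y2

y1 = 3.0                                    # known backwater depth at station 1 [m]
y2 = step_upstream(y1, y_guess=y1)
print(f"depth one step upstream: {y2:.4f} m")   # slightly less than y1 on an M1 profile
```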
Conceptual surface water profiles (sluice gate)
Figure 4 illustrates the different surface water profiles associated with a sluice gate on a mild reach (top) and a steep reach (bottom). Note, the sluice gate induces a choke in the system, causing a “backwater” profile just upstream of the gate. In the mild reach, the hydraulic jump occurs downstream of the gate, but in the steep reach, the hydraulic jump occurs upstream of the gate. It is important to note that the gradually varied flow equations and associated numerical methods (including the standard step method) cannot accurately model the dynamics of a hydraulic jump. See the Hydraulic jumps in rectangular channels page for more information. Below, an example problem will use conceptual models to build a surface water profile using the STM.
Example problem
Solution
Using Figure 3 and knowledge of the upstream and downstream conditions and the depth values on either side of the gate, a general estimate of the profiles upstream and downstream of the gate can be generated. Upstream, the water surface must rise from a normal depth of 0.97 m to 9.21 m at the gate. The only way to do this on a mild reach is to follow an M1 profile. The same logic applies downstream to determine that the water surface follows an M3 profile from the gate until the depth reaches the conjugate depth of the normal depth at which point a hydraulic jump forms to raise the water surface to the normal depth.
Step 4: Use the Newton Raphson Method to solve the M1 and M3 surface water profiles. The upstream and downstream portions must be modeled separately with an initial depth of 9.21 m for the upstream portion, and 0.15 m for the downstream portion. The downstream depth should only be modeled until it reaches the conjugate depth of the normal depth, at which point a hydraulic jump will form. The solution presented explains how to solve the problem in a spreadsheet, showing the calculations column by column. Within Excel, the goal seek function can be used to set column 15 to 0 by changing the depth estimate in column 2 instead of iterating manually.
Table 1: Spreadsheet of Newton Raphson Method of downstream water surface elevation calculations
Step 5: Combine the results from the different profiles and display.
Normal depth was achieved at approximately 2,200 meters upstream of the gate.
Step 6: Solve the problem in the HEC-RAS Modeling Environment:
It is beyond the scope of this Wikipedia Page to explain the intricacies of operating HEC-RAS. For those interested in learning more, the HEC-RAS user’s manual is an excellent learning tool and the program is free to the public.
The first two figures below are the upstream and downstream water surface profiles modeled by HEC-RAS. There is also a table provided comparing the differences between the profiles estimated by the two different methods at different stations to show consistency between the two methods. While the two different methods modeled similar water surface shapes, the standard step method predicted that the flow would take a greater distance to reach normal depth upstream and downstream of the gate. This stretching is caused by the errors associated with assuming average gradients between two stations of interest during our calculations. Smaller dx values would reduce this error and produce more accurate surface profiles.
The HEC-RAS model calculated that the water backs up to a height of 9.21 meters at the upstream side of the sluice gate, which is the same as the manually calculated value. Normal depth was achieved at approximately 1,700 meters upstream of the gate.
HEC-RAS modeled the hydraulic jump to occur 18 meters downstream of the sluice gate.
References
Fluid mechanics | Standard step method | [
"Engineering"
] | 1,839 | [
"Civil engineering",
"Fluid mechanics"
] |
39,114,218 | https://en.wikipedia.org/wiki/Hagen%E2%80%93Rubens%20relation | In optics, the Hagen–Rubens relation (or Hagen–Rubens formula) is a relation between the coefficient of reflection and the conductivity for materials that are good conductors. The relation states that for solids where the contribution of the dielectric constant to the index of refraction is negligible, the reflection coefficient can be written as (in SI Units):
where is the frequency of observation, is the conductivity, and is the vacuum permittivity. For metals, this relation holds for frequencies (much) smaller than the Drude relaxation rate, and in this case the otherwise frequency-dependent conductivity can be assumed frequency-independent and equal to the dc conductivity.
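The formula itself did not survive extraction above, so the sketch below assumes the standard SI form R ≈ 1 − √(8·ε₀·ω/σ) and uses an assumed, copper-like dc conductivity purely as an illustrative input.

```python
# Minimal sketch of the Hagen-Rubens reflectivity, assuming the standard SI form
# R = 1 - sqrt(8 * eps0 * omega / sigma). The conductivity value is an assumed,
# copper-like dc conductivity used only for illustration.
import math

eps0 = 8.854e-12          # vacuum permittivity [F/m]
sigma = 5.9e7             # dc conductivity [S/m] (assumed, roughly copper)

def hagen_rubens_reflectivity(freq_hz):
    omega = 2.0 * math.pi * freq_hz
    return 1.0 - math.sqrt(8.0 * eps0 * omega / sigma)

for f in (1e11, 1e12, 1e13):   # far- to mid-infrared frequencies
    print(f"f = {f:.0e} Hz  ->  R = {hagen_rubens_reflectivity(f):.5f}")
```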
The relation is named after German physicists Ernst Bessel Hagen and Heinrich Rubens who discovered it in 1903.
References
Scattering, absorption and radiative transfer (optics)
Infrared spectroscopy
Electric and magnetic fields in matter | Hagen–Rubens relation | [
"Physics",
"Chemistry",
"Materials_science",
"Astronomy",
"Engineering"
] | 183 | [
"Spectroscopy stubs",
" absorption and radiative transfer (optics)",
"Spectrum (physical sciences)",
"Scattering stubs",
"Electric and magnetic fields in matter",
"Astronomy stubs",
"Materials science",
"Infrared spectroscopy",
"Scattering",
"Condensed matter physics",
"Molecular physics stubs",... |
46,454,146 | https://en.wikipedia.org/wiki/Modane%20Underground%20Laboratory | The Modane Underground Laboratory (LSM) (; also known as the Fréjus Underground Laboratory) is a subterranean particle physics laboratory located within the Fréjus Road Tunnel near Modane, France. It is jointly operated by the French National Center for Scientific Research and the Atomic Energy and Alternative Energies Commission in partnership with the University of Savoie.
The laboratory sits almost exactly in the middle of the road tunnel, which links Modane to Bardonecchia, Italy, below Fréjus Peak. The rock overburden corresponds to a large meter water equivalent depth, and it is the deepest laboratory in the European Union.
The LSM was built between 1981 and 1982 to host the "Fréjus" iron tracking calorimeter proton decay experiment. Today the site houses the Neutrino Ettore Majorana Observatory (NEMO) search for neutrinoless double beta decay, the EDELWEISS dark matter detector, and other particle detectors.
References
External links
Laboratoire Souterrain de Modane
Particle physics facilities
Science and technology in Europe
Underground laboratories
Laboratories in France
French UMR | Modane Underground Laboratory | [
"Physics"
] | 221 | [
"Particle physics stubs",
"Particle physics"
] |
46,455,334 | https://en.wikipedia.org/wiki/Trochoidal%20wave | In fluid dynamics, a trochoidal wave or Gerstner wave is an exact solution of the Euler equations for periodic surface gravity waves. It describes a progressive wave of permanent form on the surface of an incompressible fluid of infinite depth. The free surface of this wave solution is an inverted (upside-down) trochoid – with sharper crests and flat troughs. This wave solution was discovered by Gerstner in 1802, and rediscovered independently by Rankine in 1863.
The flow field associated with the trochoidal wave is not irrotational: it has vorticity. The vorticity is of such a specific strength and vertical distribution that the trajectories of the fluid parcels are closed circles. This is in contrast with the usual experimental observation of Stokes drift associated with the wave motion. Also the phase speed is independent of the trochoidal wave's amplitude, unlike other nonlinear wave-theories (like those of the Stokes wave and cnoidal wave) and observations. For these reasons – as well as for the fact that solutions for finite fluid depth are lacking – trochoidal waves are of limited use for engineering applications.
In computer graphics, the rendering of realistic-looking ocean waves can be done by use of so-called Gerstner waves. This is a multi-component and multi-directional extension of the traditional Gerstner wave, often using fast Fourier transforms to make (real-time) animation feasible.
Description of classical trochoidal wave
Using a Lagrangian specification of the flow field, the motion of fluid parcels is – for a periodic wave on the surface of a fluid layer of infinite depth:
where and are the positions of the fluid parcels in the plane at time , with the horizontal coordinate and the vertical coordinate (positive upward, in the direction opposing gravity). The Lagrangian coordinates label the fluid parcels, with the centres of the circular orbits – around which the corresponding fluid parcel moves with constant speed Further is the wavenumber (and the wavelength), while is the phase speed with which the wave propagates in the -direction. The phase speed satisfies the dispersion relation:
which is independent of the wave nonlinearity (i.e. does not depend on the wave height ), and this phase speed is the same as for Airy's linear waves in deep water.
The free surface is a line of constant pressure, and is found to correspond with a line , where is a (nonpositive) constant. For the highest waves occur, with a cusp-shaped crest. Note that the highest (irrotational) Stokes wave has a crest angle of 120°, instead of the 0° for the rotational trochoidal wave.
The wave height of the trochoidal wave is The wave is periodic in the -direction, with wavelength and also periodic in time with period
The vorticity under the trochoidal wave is:
varying with Lagrangian elevation and diminishing rapidly with depth below the free surface.
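Because the symbols in the parametric equations above were lost, the sketch below assumes the usual deep-water Gerstner parameterisation X = a + (e^{kb}/k)·sin k(a + ct), Y = b − (e^{kb}/k)·cos k(a + ct) with c = √(g/k); it evaluates the trochoidal free surface and checks that a parcel traces a closed circle, as described above. The wavelength and steepness are arbitrary example values.

```python
# Minimal sketch of the classical (deep-water) trochoidal / Gerstner wave,
# assuming the usual Lagrangian parameterisation of the fluid-parcel positions:
#   X = a + (exp(k*b)/k) * sin(k*(a + c*t)),  Y = b - (exp(k*b)/k) * cos(k*(a + c*t))
# with phase speed c = sqrt(g/k), independent of amplitude.
import numpy as np

g = 9.81
wavelength = 10.0
k = 2.0 * np.pi / wavelength
c = np.sqrt(g / k)                       # deep-water phase speed

def parcel_position(a, b, t):
    r = np.exp(k * b) / k                # orbit radius, decaying with depth (b <= 0)
    phase = k * (a + c * t)
    return a + r * np.sin(phase), b - r * np.cos(phase)

# Free surface at t = 0: the line of Lagrangian labels b = b0
# (b0 = 0 gives the limiting cusped wave; b0 < 0 gives a smooth trochoid).
b0 = -0.1 / k
a = np.linspace(0.0, 2.0 * wavelength, 400)
X, Y = parcel_position(a, b0, 0.0)
print("wave height H =", Y.max() - Y.min())      # equals (2/k)*exp(k*b0)
print("expected      ", 2.0 / k * np.exp(k * b0))

# A single parcel traces a closed circle (no Stokes drift in this solution)
t = np.linspace(0.0, 2.0 * np.pi / (k * c), 100)
Xp, Yp = parcel_position(0.0, b0, t)
print("orbit radius  ", 0.5 * (Xp.max() - Xp.min()))
```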
In computer graphics
A multi-component and multi-directional extension of the Lagrangian description of the free-surface motion – as used in Gerstner's trochoidal wave – is used in computer graphics for the simulation of ocean waves. For the classical Gerstner wave the fluid motion exactly satisfies the nonlinear, incompressible and inviscid flow equations below the free surface. However, the extended Gerstner waves do in general not satisfy these flow equations exactly (although they satisfy them approximately, i.e. for the linearised Lagrangian description by potential flow). This description of the ocean can be programmed very efficiently by use of the fast Fourier transform (FFT). Moreover, the resulting ocean waves from this process look realistic, as a result of the nonlinear deformation of the free surface (due to the Lagrangian specification of the motion): sharper crests and flatter troughs.
The mathematical description of the free-surface in these Gerstner waves can be as follows: the horizontal coordinates are denoted as and , and the vertical coordinate is . The mean level of the free surface is at and the positive -direction is upward, opposing the Earth's gravity of strength The free surface is described parametrically as a function of the parameters and as well as of time The parameters are connected to the mean-surface points around which the fluid parcels at the wavy surface orbit. The free surface is specified through and with:
where is the hyperbolic tangent function, is the number of wave components considered, is the amplitude of component and its phase. Further is its wavenumber and its angular frequency. The latter two, and can not be chosen independently but are related through the dispersion relation:
with the mean water depth. In deep water () the hyperbolic tangent goes to one: The components and of the horizontal wavenumber vector determine the wave propagation direction of component
The choice of the various parameters and for and a certain mean depth determines the form of the ocean surface. A clever choice is needed in order to exploit the possibility of fast computation by means of the FFT. See e.g. for a description how to do this. Most often, the wavenumbers are chosen on a regular grid in -space. Thereafter, the amplitudes and phases are chosen randomly in accord with the variance-density spectrum of a certain desired sea state. Finally, by FFT, the ocean surface can be constructed in such a way that it is periodic both in space and time, enabling tiling – creating periodicity in time by slightly shifting the frequencies such that for
In rendering, also the normal vector to the surface is often needed. These can be computed using the cross product () as:
The unit normal vector then is with the norm of
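A short sketch of the multi-component description above follows: a direct sum over a handful of components with the finite-depth dispersion relation ω² = g·k·tanh(k·h) and a surface normal taken from the cross product of the two parametric tangents (here estimated by finite differences to keep the sketch compact). A real graphics implementation would evaluate the same sums on a regular wavenumber grid with an FFT; all wave parameters and sign conventions below are illustrative assumptions.

```python
# Minimal sketch of a multi-component Gerstner surface as used in graphics.
# Parameters are arbitrary example values; an FFT-based implementation would
# evaluate the same trigonometric sums on a regular grid of wavenumbers.
import numpy as np

g, depth = 9.81, 50.0
rng = np.random.default_rng(0)
n_waves = 8
k_mag = rng.uniform(0.05, 0.5, n_waves)              # wavenumber magnitudes
theta = rng.uniform(0.0, 2.0 * np.pi, n_waves)       # propagation directions
kx, ky = k_mag * np.cos(theta), k_mag * np.sin(theta)
amp = 0.2 / k_mag * rng.uniform(0.2, 1.0, n_waves)   # amplitudes (low steepness)
phase = rng.uniform(0.0, 2.0 * np.pi, n_waves)
omega = np.sqrt(g * k_mag * np.tanh(k_mag * depth))  # dispersion relation

def surface_point(a1, a2, t):
    """Displaced surface position for Lagrangian labels (a1, a2) at time t."""
    arg = kx * a1 + ky * a2 - omega * t + phase
    x = a1 - np.sum(amp * (kx / k_mag) * np.sin(arg))
    y = a2 - np.sum(amp * (ky / k_mag) * np.sin(arg))
    z = np.sum(amp * np.cos(arg))
    return np.array([x, y, z])

def surface_normal(a1, a2, t, h=1e-3):
    """Unit normal from the cross product of the two parametric tangents."""
    du = (surface_point(a1 + h, a2, t) - surface_point(a1 - h, a2, t)) / (2 * h)
    dv = (surface_point(a1, a2 + h, t) - surface_point(a1, a2 - h, t)) / (2 * h)
    n = np.cross(du, dv)
    return n / np.linalg.norm(n)

print("surface point :", np.round(surface_point(3.0, 4.0, t=1.0), 3))
print("unit normal   :", np.round(surface_normal(3.0, 4.0, t=1.0), 3))
```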
Notes
References
. Reprinted in: Annalen der Physik 32(8), pp. 412–445, 1809.
Originally published in 1879, the 6th extended edition appeared first in 1932.
1802 introductions
1802 in science
Water waves
Wave mechanics
Physical oceanography
3D computer graphics
Articles containing video clips
Oceanographical terminology | Trochoidal wave | [
"Physics",
"Chemistry"
] | 1,264 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Water waves",
"Classical mechanics",
"Waves",
"Wave mechanics",
"Physical oceanography",
"Fluid dynamics"
] |
43,325,520 | https://en.wikipedia.org/wiki/Thread%20%28network%20protocol%29 | Thread is an IPv6-based, low-power mesh networking technology for Internet of things (IoT) products. The Thread protocol specification is available at no cost; however, this requires agreement and continued adherence to an end-user license agreement (EULA), which states "Membership in Thread Group is necessary to implement, practice, and ship Thread technology and Thread Group specifications."
Often used as a transport for Matter (the combination being known as Matter over Thread), the protocol has seen increased use for connecting low-power and battery-operated smart-home devices.
Organization
In July 2014, the Thread Group alliance was formed as an industry group to develop, maintain and drive adoption of Thread as an industry networking standard for IoT applications. Thread Group provides certification for components and products to ensure adherence to the spec. Initial members were ARM Holdings, Big Ass Solutions, NXP Semiconductors/Freescale, Google-subsidiary Nest Labs, OSRAM, Samsung, Silicon Labs, Somfy, Tyco International, Qualcomm, and the Yale lock company. In August 2018, Apple joined the group, and released its first Thread product, the HomePod Mini, in late 2020.
Characteristics
Thread uses 6LoWPAN, which, in turn, uses the IEEE 802.15.4 wireless protocol with mesh communication (in the 2.4 GHz spectrum), as do Zigbee and other systems. However, Thread is IP-addressable, with cloud access and AES encryption. A BSD-licensed open-source implementation of Thread called OpenThread is available from and managed by Google.
The OpenThread network simulator, a part of the OpenThread implementation, simulates Thread networks using OpenThread POSIX instances. The simulator utilises discrete-event simulation and allows for visualisation of communications through a web interface.
Use cases
In 2019, the Connected Home over IP (CHIP) project, subsequently renamed to Matter, led by the Zigbee Alliance, now the Connectivity Standards Alliance (CSA), Google, Amazon, and Apple, announced a broad collaboration to create a royalty-free standard and open-source code base to promote interoperability in home connectivity, leveraging Thread, Wi-Fi, and Bluetooth Low Energy.
List of mobile phones with Thread
See also
Home automation
Wi-Fi Direct
Wi-Fi EasyMesh
DASH7
KNX
LonWorks
BACnet
References
External links
OpenThread
Home automation
Building automation
Personal area networks
Mesh networking
IEEE 802
IPv6
Internet properties established in 2014 | Thread (network protocol) | [
"Technology",
"Engineering"
] | 524 | [
"Home automation",
"Building engineering",
"Wireless networking",
"Wireless sensor network",
"Automation",
"Building automation",
"Mesh networking"
] |
43,326,086 | https://en.wikipedia.org/wiki/Journal%20of%20Non-Equilibrium%20Thermodynamics | The Journal of Non-Equilibrium Thermodynamics is a quarterly peer-reviewed scientific journal covering the field of non-equilibrium thermodynamics. It was established in 1976 by Jurgen Keller and its current editor-in-chief is Karl-Heinz Hoffmann (Chemnitz University of Technology).
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2021 impact factor of 4.290.
References
External links
De Gruyter academic journals
Quarterly journals
English-language journals
Academic journals established in 1976
Engineering journals
Physical chemistry journals
Thermodynamics | Journal of Non-Equilibrium Thermodynamics | [
"Physics",
"Chemistry",
"Mathematics"
] | 127 | [
"Thermodynamics stubs",
"Thermodynamics",
"Physical chemistry journals",
"Physical chemistry stubs",
"Dynamical systems"
] |
43,326,885 | https://en.wikipedia.org/wiki/Honeywell%20Aerospace%2C%20Cambridge | COM DEV International was a satellite technology, space sciences, and telecommunications company based in Cambridge, Ontario, Canada. The company had branches and offices in Ottawa, the United States, the United Kingdom, China and India.
COM DEV developed and manufactured specialized satellite systems, including microwave systems, switches, optical systems, specialized satellite antennas, as well as components for the aviation and aerospace industry. COM DEV also produced custom equipment designs for commercial, military and civilian purposes, as well as providing contract research for the space sciences.
History
COM DEV International was founded in 1974 and specialized in microwave technology for the aviation and aerospace industry. The company would go on to become a leader in space satellite componentry and hardware, specializing in telecommunication systems; a global designer and builder of telecommunication components and systems for space satellites; as well as one of Canada's largest sources of spacecraft instrumentation.
In 2001, its space products division opened an approximate $7-million Surface Acoustic Wave (SAW) development and manufacturing laboratory in its Cambridge facility.
In 2005, it purchased the EMS Technologies Space Science optical division in Ottawa, formerly CAL Corporation, from MacDonald, Dettwiler and Associates for $5 million.
In 2007, it purchased a Passive Microwave division in El Segundo, California, for $8.75 million. In 2010, it purchased Ottawa-based space instrument supplier Routes AstroEngineering for $1.7 million. Later that year, it established a subsidiary called exactEarth offering global ship tracking data services. In 2015, it purchased MESL Microwave of Edinburgh, Scotland. Also that year, it entered the waveguide market with the purchase of Pacific Wave Systems (PWS) of Garden Grove, California.
On November 15, 2015, Honeywell announced that it would acquire COM DEV, which would become part of Honeywell's Defense and Space business. On February 4, 2016, Honeywell announced that it had completed the acquisition, and COM DEV has since been renamed Honeywell Cambridge.
Products
Since the 1990s, the company has manufactured components for satellites including:
Telemetry communication and control modules
Multiplexer (MUX) switching networks and filters
Crossovers for microwave
Modulators, regulators
Surface acoustic wave filters
Assemblies for airline telecommunications
Special satellite antennas
Projects
The company has developed and built satellites assemblies or components for over 900 satellite missions, including:
Sapphire (satellite) Optical Imaging Payload
Swarm (spacecraft) Canadian Electric Field Instrument
ExactView 1
Terra (Satellite) MOPITT Instrument
CASSIOPE e-POP Radio Receiver Instrument
Dextre Force Moment Sensors
Upper Atmosphere Research Satellite WINDII Instrument
Far Ultraviolet Spectroscopic Explorer Fine Error Sensor
SCISAT-1 MAESTRO instrument and CALTRAC Startracker
Odin (Satellite) Odin-OSIRIS Instrument
Herschel Space Observatory HIFI Local oscillator Source Unit
Jason-1 CALTRAC Startrackers
Genesis (spacecraft) CALTRAC Startrackers
Formosat-2 CALTRAC Startrackers
Nozomi (spacecraft) Thermal Plasma Analyser
Akebono (satellite) Suprathermal ion Mass Spectrometer
Freja (satellite) Cold Gas Analyser and Auroral Imager
Interbol Ultraviolet Auroral Imager
Viking (satellite) Ultraviolet Imager
CloudSat
Meteosat
Upcoming missions include:
Maritime Monitoring and Messaging Micro-Satellite (M3MSat)
James Webb Space Telescope (JWST) Fine Guidance Sensor and Near Infrared Imaging and Slitless Spectrograph
Past projects have also included an Automatic Identification System (AIS) validation nanosatellite launched on an Antrix PSLV-C9 vehicle from the Satish Dhawan Space Centre in Sriharikota, India in April 2008. The AIS experimental spacecraft was built under contract by the University of Toronto Institute for Aerospace Studies (UTIAS) Space Flight Laboratory (SFL), which was also responsible for its operation.
Research and development
COM DEV provided research and development work in aeronautics and space technology. Many modules of the company are used in many well-known space probes and satellites. COM DEV was known for cooperating with major space agencies, including NASA, the European Space Agency (ESA), JAXA, Indian Space Research Organisation and the Canadian Space Agency (CSA).
See also
Boeing Canada
Bombardier Aerospace
Canadian Space Agency
CMC Electronics
Héroux-Devtek
MacDonald, Dettwiler and Associates
Spar Aerospace
Viking Air
References
Citations
Bibliography
Com Dev - Corporate
Com Dev - Financial Results 2012, accessed on 12 November 2013 (PDF, 2.3 MB)
Com Dev Fine Guidance Sensor for the James Webb Space Telescope, accessed on November 12, 2013
External links
Com DEV - Corporate Website (English)
Satellite Industry Association
Honeywell
Aerospace engineering
Aerospace companies of Canada
Spacecraft component manufacturers
Telecommunications equipment vendors
Manufacturing companies of Canada
Companies based in Cambridge, Ontario
Manufacturing companies based in Ontario
Technology companies established in 1974
Space industry companies of Canada
1974 establishments in Ontario
Canadian companies established in 1974 | Honeywell Aerospace, Cambridge | [
"Engineering"
] | 989 | [
"Aerospace engineering"
] |
43,333,666 | https://en.wikipedia.org/wiki/Materials%20Processing%20Institute | The Materials Processing Institute is a research centre serving organisations that work in advanced materials, low-carbon energy and the circular economy. The Institute is based in Tees Valley in the northeast of England.
Background
The British Iron and Steel Research Association (BISRA) was formed in 1944 with headquarters in London, originally at 11 Park Lane and later at 24 Buckingham Gate. Satellite laboratories were formed much later to support the larger UK steel producing centres of Sheffield, Swansea, Teesside, and Battersea.
Following the second nationalisation of the UK Steel industry in 1967, BISRA became the R&D function of the newly formed British Steel Corporation.
Teesside Laboratories, later rebranded as Teesside Technology Centre, survived several rounds of R&D restructuring in which laboratories in Battersea and Swansea were ultimately closed.
In 2001, following the merger of British Steel with Koninklijke Hoogovens in 1999 (forming Corus Group PLC) Teesside Technology Centre, Welsh Laboratories and Swinden Technology Centre were to close to form a centralised UK Technology Centre. This was to mirror the single Dutch R&D site in IJmuiden. This centralisation attempt was backtracked but resulted in the closure of Welsh Laboratories and significant losses in numbers, knowledge and experience from both remaining centres.
In 2007, Tata Steel secured 100% of the Corus Group PLC shares, taking the company off the financial market.
In 2014, Tata Steel, already in the process of restructuring the UK arm of the European business, decided to also restructure UK R&D. Teesside Technology Centre and Swinden Technology Centre were to close with a new R&D Division being formed at Warwick Manufacturing Group. To avoid closure of the Teesside Laboratories and loss of the remaining technical process expertise in Coal & Coke, Oxygen Steelmaking, Continuous Casting and Long Products Rolling, members of the sites management team (Chris McDonald, Gareth Fletcher and Dr. Richard Curry) and the Tata Steel Process R&D Director (Dr. Simon Pike) were permitted to spin the site out as an independent research institute.
The Materials Processing Institute was launched December 2014.
Present day
The Materials Processing Institute works with steel organisations from across the UK, including Tata Steel, Liberty House Group and British Steel, while welcoming delegations from global, industry partners such as voestalpine AG, thyssenkrupp, ArcelorMittal and Sidenor.
The SME Technology Centre collaborates with NEPIC and Teesside University to support SMEs operating in Tees Valley and the wider North East region, through the Innovate Tees Valley and Tees Valley Business Start Up programmes, which are part funded by the European Regional Development Fund (ERDF).
In 2016, the Institute launched a commercial steel-making operation from its Normanton Steel plant, which has the capabilities to produce high carbon, high chrome steels.
In 2017, Liberty House Group won regional funding to develop a process to manufacture powder feedstock for additive manufacturing processes. Liberty chose to house the plant within the Materials Processing Institute.
The Materials Processing Institute is a member of UK Steel and was ratified, in October 2017, as an affiliated member of the World Steel Association, the international trade body for the iron and steel industry.
In 2018, intellectual property firm Marks & Clerk joined several other small firms in renting office space at the Institute's site. The move and new office was officially launched by Redcar MP Anna Turley.
Services
The Materials Processing Institute consults in the fields of sourcing and blending, blast furnace processes, continuous caster tuning and physical modelling (water-based fluid scale modelling) of caster moulds. They have ventured into contemporary techniques such as computational fluid dynamics and process instrumentation but cutbacks in staff and investment have limited development in these areas.
The site hosts a c. 1960s manually controlled electric arc furnace and a vacuum ladle arc furnace, plus a mothballed single-mould billet caster with a mould section donated from the 1970s Stocksbridge vertical caster in the early 2000s.
The 7 t arc plant is often used to melt and sand-cast waste turnings from local industries at low cost and with quick turnaround. The plant has limited gas extraction facilities and can only process material with a low volatile content and low levels of poisonous metals.
The Materials Processing Institute houses Liberty Speciality Steel's Additive Manufacturing Steelmaking Powder Metallurgy Development Plant.
Specialist services
SME Technology Centre: The Materials Processing Institute supports businesses throughout the North East of England through its SME Technology Centre. The Centre provides technical support, facilities support and business support services to companies from various sectors. There are also a number of SMEs operating from the Institute’s campus.
The Doctoral Academy: The Academy has formed relationships with industrial companies, including SMEs, universities and Centres for Doctoral Training (CDT).
References
1945 establishments in the United Kingdom
Buildings and structures in Redcar and Cleveland
Companies based in Middlesbrough
Metallurgical industry of the United Kingdom
Research institutes in North Yorkshire
Scientific organizations established in 1945
Technology companies of the United Kingdom | Materials Processing Institute | [
"Chemistry"
] | 1,030 | [
"Metallurgical industry of the United Kingdom",
"Metallurgical industry by country"
] |
31,997,273 | https://en.wikipedia.org/wiki/FPG%20IleRS%20zinc%20finger | The FPG IleRS zinc finger domain represents a zinc finger domain found at the C-terminal in both DNA glycosylase/AP lyase enzymes and in isoleucyl tRNA synthetase. In these two types of enzymes, the C-terminal domain forms a zinc finger.
DNA glycosylase/AP lyase enzymes are involved in base excision repair of DNA damaged by oxidation or by mutagenic agents. These enzymes have both DNA glycosylase activity (EC) and AP lyase activity (EC). Examples include formamidopyrimidine-DNA glycosylases (Fpg; MutM) and endonuclease VIII (Nei). Formamidopyrimidine-DNA glycosylase (Fpg, MutM) is a trifunctional DNA base excision repair enzyme that removes a wide range of oxidation-damaged bases (N-glycosylase activity; EC) and cleaves both the 3'- and 5'-phosphodiester bonds of the resulting apurinic/apyrimidinic site (AP lyase activity; EC). Fpg has a preference for oxidized purines, excising oxidized purine bases such as 7,8-dihydro-8-oxoguanine (8-oxoG). Its AP (apurinic/apyrimidinic) lyase activity introduces nicks in the DNA strand, cleaving the DNA backbone by beta-delta elimination to generate a single-strand break at the site of the removed base with both 3'- and 5'-phosphates. Fpg is a monomer composed of 2 domains connected by a flexible hinge. The two DNA-binding motifs (a zinc finger and the helix-two-turns-helix motifs) suggest that the oxidized base is flipped out from double-stranded DNA in the binding mode and excised by a catalytic mechanism similar to that of bifunctional base excision repair enzymes. Fpg binds one ion of zinc at the C terminus, which contains four conserved and essential cysteines. Endonuclease VIII (Nei) has the same enzyme activities as Fpg above, but with a preference for oxidized pyrimidines, such as thymine glycol, 5,6-dihydrouracil and 5,6-dihydrothymine.
An Fpg-type zinc finger is also found at the C terminus of isoleucyl tRNA synthetase (EC). This enzyme catalyses the attachment of isoleucine to tRNA(Ile). As IleRS can inadvertently accommodate and process structurally similar amino acids such as valine, to avoid such errors it has two additional distinct tRNA(Ile)-dependent editing activities. One activity is designated as 'pre-transfer' editing and involves the hydrolysis of activated Val-AMP. The other activity is designated 'post-transfer' editing and involves deacylation of mischarged Val-tRNA(Ile).
References
Protein domains | FPG IleRS zinc finger | [
"Biology"
] | 657 | [
"Protein domains",
"Protein classification"
] |
31,997,352 | https://en.wikipedia.org/wiki/ZapA%20family | In molecular biology, the ZapA protein family is a group of related proteins that includes the cell division protein ZapA. The structure of ZapA has a core structure consisting of two layers alpha/beta, and has a long C-terminal helix that forms dimeric parallel and tetrameric antiparallel coiled coils. ZapA interacts with FtsZ, where FtsZ is part of a mid-cell cytokinetic structure termed the Z-ring that recruits a hierarchy of fission related proteins early in the bacterial cell cycle. ZapA drives the polymerisation and filament bundling of FtsZ, thereby contributing to the spatio-temporal tuning of the Z-ring.
References
Protein families | ZapA family | [
"Biology"
] | 145 | [
"Protein families",
"Protein classification"
] |
31,997,774 | https://en.wikipedia.org/wiki/Bariloche%20Atomic%20Centre | The Bariloche Atomic Centre () is one of the research and development centres of the Argentine National Atomic Energy Commission. As its name implies, it is located in the city of San Carlos de Bariloche. Bariloche Atomic Centre is responsible for research in physics and nuclear engineering. It also hosts the Balseiro Institute, a collaboration between National University of Cuyo and the National Atomic Energy Commission. The Bariloche Atomic Centre opened in 1955 with its first director, José Antonio Balseiro. The RA-6 reactor started operations in 1982.
Activity
The centre is devoted to basic and applied physics research as well as Nuclear and Mechanical Engineering.
Basic research is focused on deepening understanding of nuclear energy. Applied sciences have provided support for both state- and privately owned companies. The main areas of research include: materials, neutrons, thermodynamics and theoretical physics.
Nuclear Engineering at the Centre is aimed at further developing Argentina's atomic technology. Most of the research is done taking advantage of RA-6, a 1 MW experimental reactor. Experiments done with the RA-6 include irradiation and radioactive activation of several materials.
Some research groups also focus on refining reactor calculations and performance measurements and designing mechanical devices for those tasks.
Various companies have sprung out of the Bariloche Atomic Centre, such as INVAP and ALTEC.
References
Buildings and structures in Río Negro Province
Research institutes in Argentina
Nuclear technology in Argentina
Nuclear research institutes
Nuclear power in Argentina
Bariloche
1955 establishments in Argentina | Bariloche Atomic Centre | [
"Physics",
"Engineering"
] | 305 | [
"Nuclear research institutes",
"Nuclear and atomic physics stubs",
"Nuclear organizations",
"Nuclear physics"
] |
31,998,107 | https://en.wikipedia.org/wiki/Kameleon%20FireEx%20KFX | Kameleon FireEx KFX, often only referred to as KFX, is a commercial Computational Fluid Dynamics (CFD) program with main focus on gas dispersion and fire simulation.
KFX uses the k-epsilon model for turbulence modelling, the Eddy Dissipation Concept (EDC) for combustion modelling, and a radiation model based on the Discrete Transfer Method (DTM) by Lockwood and Shah.
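For reference, the eddy-viscosity relation at the heart of the k-epsilon closure mentioned above is summarised below with the textbook (Launder–Spalding) constants; whether KFX uses exactly these values is not stated here, so treat them as standard defaults rather than KFX-specific settings.

```latex
% Standard k-epsilon eddy-viscosity closure with the usual Launder-Spalding
% constants (assumed defaults, not confirmed KFX settings).
\[
  \nu_t = C_\mu \frac{k^2}{\varepsilon}, \qquad
  C_\mu = 0.09,\; C_{1\varepsilon} = 1.44,\; C_{2\varepsilon} = 1.92,\;
  \sigma_k = 1.0,\; \sigma_\varepsilon = 1.3 .
\]
```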
External links
Standard k-epsilon model on CFD-wiki
F.C. Lockwood and N.G. Shah, "A new radiation solution method for incorporation in general combustion prediction procedures", 18th Symposium (International) on Combustion, The Combustion Institute, Pittsburgh, PA, pp. 1405–1414 (1981), .
Computational fluid dynamics
Simulation software | Kameleon FireEx KFX | [
"Physics",
"Chemistry"
] | 165 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Fluid dynamics stubs",
"Computational physics"
] |
31,999,888 | https://en.wikipedia.org/wiki/Coorbit%20theory | In mathematics, coorbit theory was developed by Hans Georg Feichtinger and Karlheinz Gröchenig around 1990. It provides theory for atomic decomposition of a range of Banach spaces of distributions. Among others the well established wavelet transform and the short-time Fourier transform are covered by the theory.
The starting point is a square integrable representation of a locally compact group on a Hilbert space , with which one can define a transform of a function with respect to by . Many important transforms are special cases of the transform, e.g. the short-time Fourier transform and the wavelet transform for the Heisenberg group and the affine group respectively. Representation theory yields the reproducing formula . By discretization of this continuous convolution integral it can be shown that by sufficiently dense sampling in phase space the corresponding functions will span a frame for the Hilbert space.
An important aspect of the theory is the derivation of atomic decompositions for Banach spaces. One of the key steps is to define the voice transform for distributions in a natural way. For a given Banach space , the corresponding coorbit space is defined as the set of all distributions such that . The reproducing formula is true also in this case and therefore it is possible to obtain atomic decompositions for coorbit spaces.
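Since the symbols in the description above were lost, the following is a hedged sketch of the basic objects in the notation commonly used for coorbit theory: π is the square integrable representation of the group G on the Hilbert space H, g ∈ H is a fixed admissible (analyzing) vector, and Y is the chosen Banach space on the group.

```latex
% Sketch of the voice transform, reproducing formula, and coorbit space;
% the symbols and the normalisation constant C_g are assumptions chosen here.
\[
  V_g f(x) \;=\; \langle f, \pi(x) g \rangle , \qquad x \in G,
\]
\[
  f \;=\; \frac{1}{C_g} \int_G V_g f(x)\, \pi(x) g \; d\mu(x)
  \quad \text{(reproducing formula, up to normalisation),}
\]
\[
  \mathrm{Co}(Y) \;=\; \{\, f : V_g f \in Y \,\}, \qquad
  \|f\|_{\mathrm{Co}(Y)} \;=\; \|V_g f\|_{Y} .
\]
```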
References
Hilbert spaces | Coorbit theory | [
"Physics"
] | 274 | [
"Hilbert spaces",
"Quantum mechanics"
] |
32,000,072 | https://en.wikipedia.org/wiki/Methylene%20green | Methylene green is a heterocyclic aromatic chemical compound similar to methylene blue. It is used as a dye. It functions as a visible light-activated photocatalyst in organic synthesis.
References
External links
methylene green (at stainsfile)
Histology
Thiazine dyes
Phenothiazines
Chlorides | Methylene green | [
"Chemistry"
] | 70 | [
"Chlorides",
"Inorganic compounds",
"Salts",
"Histology",
"Microscopy"
] |
32,000,325 | https://en.wikipedia.org/wiki/Peregrine%20soliton | The Peregrine soliton (or Peregrine breather) is an analytic solution of the nonlinear Schrödinger equation. This solution was proposed in 1983 by Howell Peregrine, researcher at the mathematics department of the University of Bristol.
Main properties
Contrary to the usual fundamental soliton that can maintain its profile unchanged during propagation, the Peregrine soliton presents a double spatio-temporal localization. Therefore, starting from a weak oscillation on a continuous background, the Peregrine soliton develops undergoing a progressive increase of its amplitude and a narrowing of its temporal duration. At the point of maximum compression, the amplitude is three times the level of the continuous background (and if one considers the intensity as it is relevant in optics, there is a factor 9 between the peak intensity and the surrounding background). After this point of maximal compression, the wave's amplitude decreases and its width increases.
These features of the Peregrine soliton are fully consistent with the quantitative criteria usually used in order to qualify a wave as a rogue wave. Therefore, the Peregrine soliton is an attractive hypothesis to explain the formation of those waves which have a high amplitude and may appear from nowhere and disappear without a trace.
Mathematical expression
In the spatio-temporal domain
The Peregrine soliton is a solution of the one-dimensional nonlinear Schrödinger equation that can be written in normalized units as follows :
with the spatial coordinate and the temporal coordinate. being the envelope of a surface wave in deep water. The dispersion is anomalous and the nonlinearity is self-focusing (note that similar results could be obtained for a normally dispersive medium combined with a defocusing nonlinearity).
The Peregrine analytical expression is:
so that the temporal and spatial maxima are obtained for and .
In the spectral domain
It is also possible to mathematically express the Peregrine soliton according to the spatial frequency :
with being the Dirac delta function.
This corresponds to a modulus (with the constant continuous background here omitted) :
One can notice that for any given time , the modulus of the spectrum exhibits a typical triangular shape when plotted on a logarithmic scale. The broadest spectrum is obtained for , which corresponds to the maximum of compression of the spatio-temporal nonlinear structure.
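A short numerical check of the properties described above can be made with the commonly quoted normalized form of the solution, ψ(ξ, τ) = [1 − 4(1 + 2iξ)/(1 + 4ξ² + 4τ²)]·e^{iξ}; since the expression and the exact normalization of the equation were lost from the text above, treat this as one standard convention rather than the article's own form.

```python
# Minimal sketch of the commonly quoted normalized Peregrine solution
#   psi(xi, tau) = [1 - 4(1 + 2i*xi) / (1 + 4*xi**2 + 4*tau**2)] * exp(i*xi),
# with xi the evolution variable and tau the transverse variable (an assumed
# convention). It checks the 3x amplitude (9x intensity) at the centre and the
# unit background far away.
import numpy as np

def peregrine(xi, tau):
    return (1.0 - 4.0 * (1.0 + 2.0j * xi) / (1.0 + 4.0 * xi**2 + 4.0 * tau**2)) \
           * np.exp(1.0j * xi)

print("amplitude at the centre :", abs(peregrine(0.0, 0.0)))        # 3 x background
print("intensity enhancement   :", abs(peregrine(0.0, 0.0)) ** 2)   # factor 9
print("far from the centre     :", abs(peregrine(0.0, 50.0)))       # ~1 (background)
```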
Different interpretations of the Peregrine soliton
As a rational soliton
The Peregrine soliton is a first-order rational soliton.
As an Akhmediev breather
The Peregrine soliton can also be seen as the limiting case of the space-periodic Akhmediev breather when the period tends to infinity.
As a Kuznetsov-Ma soliton
The Peregrine soliton can also be seen as the limiting case of the time-periodic Kuznetsov-Ma breather when the period tends to infinity.
Experimental demonstration
Mathematical predictions by H. Peregrine had initially been established in the domain of hydrodynamics. This is, however, very different from the domain in which the Peregrine soliton was first experimentally generated and characterized.
Generation in optics
In 2010, more than 25 years after the initial work of Peregrine, researchers took advantage of the analogy that can be drawn between hydrodynamics and optics in order to generate Peregrine solitons in optical fibers. In fact, the evolution of light in fiber optics and the evolution of surface waves in deep water are both modelled by the nonlinear Schrödinger equation (note however that spatial and temporal variables have to be switched). Such an analogy has been exploited in the past in order to generate optical solitons in optical fibers.
More precisely, the nonlinear Schrödinger equation can be written in the context of optical fibers under the following dimensional form :
with being the second order dispersion (supposed to be anomalous, i.e. ) and being the nonlinear Kerr coefficient. and are the propagation distance and the temporal coordinate respectively.
In this context, the Peregrine soliton has the following dimensional expression:
.
is a nonlinear length defined as with being the power of the continuous background. is a duration defined as .
By using exclusively standard optical communication components, it has been shown that even with an approximate initial condition (in the case of this work, an initial sinusoidal beating), a profile very close to the ideal Peregrine soliton can be generated. However, the non-ideal input condition leads to substructures that appear after the point of maximum compression. Those substructures also have a profile close to a Peregrine soliton, which can be analytically explained using a Darboux transformation.
The typical triangular spectral shape has also been experimentally confirmed.
Generation in hydrodynamics
These results in optics were confirmed in 2011 in hydrodynamics with experiments carried out in a 15-m long water wave tank. In 2013, complementary experiments using a scale model of a chemical tanker ship discussed the potentially devastating effects on the ship.
Generation in other fields of physics
Other experiments carried out in the physics of plasmas have also highlighted the emergence of Peregrine solitons in other fields ruled by the nonlinear Schrödinger equation.
See also
Nonlinear Schrödinger equation
Breather
Rogue wave
Optical rogue waves
Notes and references
Solitons
Fluid dynamics
Waves
Nonlinear optics
Water waves | Peregrine soliton | [
"Physics",
"Chemistry",
"Engineering"
] | 1,121 | [
"Physical phenomena",
"Water waves",
"Chemical engineering",
"Waves",
"Motion (physics)",
"Piping",
"Fluid dynamics"
] |
32,005,243 | https://en.wikipedia.org/wiki/Polymeric%20surface | Polymeric materials have widespread application due to their versatile characteristics, cost-effectiveness, and highly tailored production. The science of polymer synthesis allows for excellent control over the properties of a bulk polymer sample. However, surface interactions of polymer substrates are an essential area of study in biotechnology, nanotechnology, and in all forms of coating applications. In these cases, the surface characteristics of the polymer and material, and the resulting forces between them largely determine its utility and reliability. In biomedical applications for example, the bodily response to foreign material, and thus biocompatibility, is governed by surface interactions. In addition, surface science is integral part of the formulation, manufacturing, and application of coatings.
Chemical methods
A polymeric material can be functionalized by the addition of small moieties, oligomers, and even other polymers (grafting copolymers) onto the surface or interface.
Grafting copolymers
Grafting, in the context of polymer chemistry, refers to the addition of polymer chains onto a surface. In the so-called 'grafting onto' mechanism, a polymer chain adsorbs onto a surface out of solution. In the more extensive 'grafting from' mechanism, a polymer chain is initiated and propagated at the surface. Because pre-polymerized chains used in the 'grafting onto' method have a thermodynamically favored conformation in solution (an equilibrium hydrodynamic volume), their adsorption density is self-limiting. The radius of gyration of the polymer therefore is the limiting factor in the number of polymer chains that can reach the surface and adhere. The 'grafting from' technique circumvents this phenomenon and allows for greater grafting densities.
The processes of grafting "onto", "from", and "through" are all different ways to alter the chemical reactivity of the surface they attach to. Grafting onto allows a preformed polymer, generally in a "mushroom regime", to adhere to the surface of either a droplet or bead in solution. Due to the larger volume of the coiled polymer and the steric hindrance this causes, the grafting density is lower for 'grafting onto' than for 'grafting from'. The surface of the bead is wetted by the polymer, and interactions with the solution cause the polymer to become more flexible. The 'extended conformation' of a polymer grafted, or polymerized, from the surface of the bead requires that the monomer be present in the solution and therefore lyophilic. This results in a polymer that has favorable interactions with the solution, allowing the polymer to form more linearly. Grafting from therefore gives a higher grafting density, since there is more access to chain ends.
Peptide synthesis can provide one example of a 'grafting from' synthetic process. In this process, an amino acid chain is grown by a series of condensation reactions from a polymer bead surface. This grafting technique allows for excellent control over the peptide composition, as the bonded chain can be washed without desorption from the polymer.
Polymeric coatings are another area of applied grafting techniques. In the formulation of water-borne paint, latex particles are often surface modified to control particle dispersion and thus coating characteristics such as viscosity, film formation, and environmental stability (UV exposure and temperature variations).
Oxidation
Plasma processing, corona treatment, and flame treatment can all be classified as surface oxidation mechanisms. These methods all involve cleavage of polymer chains in the material and the incorporation of carbonyl, and hydroxyl functional groups. The incorporation of oxygen into the surface creates a higher surface energy allowing the substrate to be coated.
Methodology
Oxidizing polymeric surfaces
Corona treatment
Corona treatment is a surface modification method using a low temperature corona discharge to increase the surface energy of a material, often polymers and natural fibers. Most commonly, a thin polymer sheet is rolled through an array of high-voltage electrodes, using the plasma created to functionalize the surface. The limited penetration depth of such treatment provides vastly improved adhesion while preserving bulk mechanical properties.
Commercially, corona treatment has been used widely to improve dye adhesion before printing text and images on plastic packaging materials. The hazardous nature of the ozone remaining after corona treatment requires careful filtration and ventilation during processing, restricting its implementation to applications with strict catalytic filtering systems. This limitation prevents widespread use within open-line manufacturing processes.
Several factors influence the efficiency of the flame treatment, such as the air-to-gas ratio, thermal output, surface distance, and oxidation zone dwell time. When the process was first conceived, corona treatment immediately followed film extrusion, but the development of careful transportation techniques now allows treatment at an optimized location. Conversely, in-line corona treatments have been implemented in full-scale production lines such as those in the newspaper industry. These in-line solutions were developed to counteract the decrease in wetting characteristics caused by excessive solvent use.
Atmosphere- and pressure-dependent plasma processing
Plasma processing provides interfacial energies and injected monomer fragments larger than comparable processes. However, limited fluxes prevent high process rates. In addition, plasmas are thermodynamically unfavorable and therefore plasma-processed surfaces lack uniformity, consistency, and permanence. These obstacles with plasma processing preclude it from being a competitive surface modification method within industry.
The process begins with production of plasma via ionization either by deposition on monomer mixtures or gaseous carrier ions. The power required to produce the necessary plasma flux can be derived from the active volume mass/energy balance:
where
is the active volume
is the ionization rate
is the neutral density
is the electron density
is the ion loss by diffusion, convection, attachment, and recombination
Dissipation is generally initiated via direct current (DC), radio frequency (RF), or microwave power. Gas ionization efficiency can decrease the power efficiency more than tenfold depending on the carrier plasma and substrate.
Flamed plasma processing
Flame treatment is a controlled, rapid, cost-effective method of increasing surface energy and wettability of polyolefins and metallic components. This high-temperature plasma treatment uses ionized gaseous oxygen via jet flames across a surface to add polar functional groups while melting the surface molecules, locking them into place upon cooling.
Thermoplastic polyethylene and polypropylene treated with brief oxygen plasma exposure have seen contact angles as low as 22°, and the resulting surface modification can last years with proper packaging. Flame plasma treatment has become increasingly popular with intravascular devices such as balloon catheters due to the precision and cost-effectiveness demanded in the medical industry.
Grafting techniques
Grafting copolymers to a surface can be envisioned as fixing polymeric chains to a structurally different polymer substrate with the intention of changing surface functionality while preserving bulk mechanical properties. The nature and degree of surface functionalization is determined by both the choice of copolymer and the type and extent of grafting.
Photografting
The modification of inert surfaces of polyolefins, polyesters, and polyamides by grafting functional vinyl monomers has been used to increase hydrophobicity, dye absorption, and polymer adhesion. This photografting method is generally used during continuous filament or thin film processing. On a bulk commercial scale, the grafting technique is referred to as photoinitiated lamination, where desired surfaces are joined by grafting a polymeric adhesion network between the two films. The low adhesion and absorption of polyolefins, polyesters, and polyamides are improved by UV-irradiation of an initiator and monomer transferred through the vapor phase to the substrate. Functionalization of porous surfaces has seen great success with high-temperature photografting techniques.
In microfluidic chips, functionalizing channels allows directed flow to preserve lamellar behavior between and within junctions. The adverse turbulent flow in microfluidic applications can compound component failure modes due to the increased level of channel interdependency and network complexity. In addition, the imprinted design of microfluidic channels can be reproduced for photografting the corresponding channels with a high degree of accuracy.
Surface analytical techniques
Surface energy measurement
In industrial corona and plasma processes, cost-efficient and rapid analytical methods are required for confirming adequate surface functionality on a given substrate. Measuring the surface energy is an indirect method for confirming the presence of surface functional groups without the need for microscopy or spectroscopy, often expensive and demanding tools. Contact angle measurement (goniometry) can be used to find the surface energy of the treated and non-treated surface. Young's relation can be used to find surface energy assuming the simplification of experimental conditions to a three phase equilibrium (i.e. liquid drop applied to flat rigid solid surface in a controlled atmosphere), yielding
where
denotes the surface energy of the solid–liquid, liquid–gas, or solid–gas interface
is the measured contact angle
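The standard form of Young's relation consistent with these definitions (a well-known relation, written out here for completeness) is

\gamma_{SG} = \gamma_{SL} + \gamma_{LG} \cos\theta ,

so the measured contact angle links the solid–gas, solid–liquid, and liquid–gas interfacial energies.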
A series of solutions with known surface tension (e.g., Dyne solutions) can be used to estimate the surface energy of the polymer substrate qualitatively by observing the wettability of each. These methods are applicable to macroscopic surface oxidation, as in industrial processing.
Infrared spectroscopy
In the case of oxidizing treatments, spectra taken from treated surfaces will indicate the presence of functionalities in carbonyl and hydroxyl regions according to the Infrared spectroscopy correlation table.
XPS and EDS
X-ray photoelectron spectroscopy (XPS) and Energy-dispersive X-ray spectroscopy (EDS/EDX) are composition characterization techniques that use x-ray excitation of electrons to discrete energy levels to quantify chemical composition. These techniques provide characterization at surface depths of 1–10 nanometers, approximately the range of oxidation in plasma and corona treatments. In addition, these processes offer the benefit of characterizing microscopic variations in surface composition.
In the context of plasma processed polymer surfaces, oxidized surfaces will obviously show a greater oxygen content. Elemental analysis allows for quantitative data to be obtained and used in the analysis of process efficiency.
Atomic force microscopy
Atomic force microscopy (AFM), a type of scanning force microscopy, was developed for mapping three-dimensional topographical variations in atomic surfaces with high resolution (on the order of fractions of nanometers). AFM was developed to overcome the material conduction limitations of electron transmission and scanning microscopy methods (SEM & STM). Invented by Binnig, Quate, and Gerber in 1985, atomic force microscopy uses laser beam deflection to measure the variations in atomic surfaces. The method does not rely on the variation in electron conduction through the material, as the scanning tunneling microscope (STM) does, and therefore allows microscopy on nearly all materials, including polymers.
The application of AFM on polymeric surfaces is especially favorable because polymers' general lack of crystallinity leads to large variations in surface topography. Surface functionalization techniques such as grafting, corona treatment, and plasma processing increase the surface roughness greatly (compared to the unprocessed substrate surface) and are therefore accurately measured by AFM.
Applications
Biomaterials
Biomaterial surfaces are often modified using light-activated mechanisms (such as photografting) to functionalize the surface without compromising bulk mechanical properties.
The modification of surfaces to keep polymers biologically inert has found wide uses in biomedical applications such as cardiovascular stents and in many skeletal prostheses. Functionalizing polymer surfaces can inhibit protein adsorption, which may otherwise initiate cellular interrogation upon the implant, a predominant failure mode of medical prostheses.
Narrow biocompatibility requirements within the medical industry have over the past ten years driven surface modification techniques to reach an unprecedented level of accuracy.
Coatings
In water-borne coatings, an aqueous polymer dispersion creates a film on the substrate once the solvent has evaporated. Surface functionalization of the polymer particles is a key component of a coating formulation allowing control over such properties as dispersion, film formation temperature, and the coating rheology. Dispersing aids often involve steric or electrostatic repulsion of the polymer particles, providing colloidal stability. The dispersing aids adsorb (as in a grafting onto scheme) onto latex particles giving them functionality. The association of other additives, such as thickeners, with adsorbed polymer material gives rise to complex rheological behavior and excellent control over a coating's flow properties.
See also
Surface modification
Surface engineering
Tribology
References
Polymer chemistry | Polymeric surface | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,596 | [
"Materials science",
"Polymer chemistry"
] |
32,005,612 | https://en.wikipedia.org/wiki/Polyelectrolyte%20adsorption | Adsorption of polyelectrolytes on solid substrates is a surface phenomenon where long-chained polymer molecules with charged groups (dubbed polyelectrolytes) bind to a surface that is charged in the opposite polarity. On the molecular level, the polymers do not actually bond to the surface, but tend to "stick" to the surface via intermolecular forces and the charges created by the dissociation of various side groups of the polymer. Because the polymer molecules are so long, they have a large amount of surface area with which to contact the surface and thus do not desorb as small molecules are likely to do. This means that adsorbed layers of polyelectrolytes form a very durable coating. Due to this important characteristic of polyelectrolyte layers they are used extensively in industry as flocculants, for solubilization, as supersorbers, antistatic agents, as oil recovery aids, as gelling aids in nutrition, additives in concrete, or for blood compatibility enhancement to name a few.
Kinetics of layer formation
Models for the adsorption behavior of polyelectrolytes in solution to a solid surface are extremely situational. Vastly different behaviors are exhibited based on varying polyelectrolyte character and concentration, ionic strength of the solution, solid surface character, and pH, among several other factors. These complex models are specialized by application for certain parameters in order to create accurate models.
Theoretical kinetics
However, the general character of the process can be reasonably well modeled with a polyelectrolyte in solution, and an oppositely charged surface where no covalent interaction between the surface and chain occurs. This model for the adsorbed amount of polyelectrolyte at a charged surface is derived from DLVO theory, which models the interaction of charged particles in solution, and mean field theory, which simplifies systems for analysis.
Using a modified Poisson-Boltzmann equation and mean field equation, the concentration profile near a charged surface is solved numerically. The solution of these equations yields a simple relation for the adsorbed amount, Γ, based on electrolyte charge fraction, ρ, and bulk salt concentration.
where is the reduced surface potential:
and is the Bjerrum length:
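These two quantities have the following conventional definitions (standard forms, not specific to this particular model; ψ0 denotes the surface potential, e the elementary charge, εr the relative permittivity of the solvent, and kBT the thermal energy):

y_0 = \frac{e\psi_0}{k_B T}, \qquad \ell_B = \frac{e^2}{4\pi \varepsilon_0 \varepsilon_r k_B T} .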
Layer-by-layer adsorption
Since charge plays a key role in polyelectrolyte adsorption, the initial rates of polyelectrolyte adsorption to charged surfaces are often rapid, limited only by the rate of mass-transport (diffusion) to the surface. This high rate then quickly drops off as charge accumulation at the surface occurs, and attractive forces are no longer drawing more polyelectrolyte chains to the surface. This drop in adsorption rates can be countered by exploiting the tendency for charge overcompensation to occur. In the case of a negatively charged solid surface, cationic polyelectrolyte chains are adsorbed to the oppositely charged surface. Their large size and high charge densities tend to overcompensate the original negative surface charge, resulting in a net positive charge due to the cationic polyelectrolytes. This solid surface, with its cationic polyelectrolyte film and consequent positive surface charge, can then be exposed to an anionic polyelectrolyte solution, where the process begins again, creating another film with an oppositely charged surface. This process can then be repeated to create several bilayers on the solid surface.
Effects of contents and quality of the solution
The effectiveness of polyelectrolyte adsorption is greatly affected by the contents of the solution and by the quality of the solvent in which the polyelectrolytes are dissolved. The primary mechanisms by which the solvent affects the adsorption characteristics of the surface-polymer interface are the dielectric effect of the solvent, the steric attraction or repulsion facilitated by the chemical nature of or species in the solvent, and its temperature. Repulsive steric forces are based on entropy and are caused by the reduced configuration entropy of the polymer chains. It is difficult to model precisely the interaction that any particular polyelectrolyte solution will exhibit because the steric forces are dependent upon the combination of the chemical makeup of both the polymer and the solvent as well as any ionic species present in the solution.
Solvent choice
The interactions between a polyelectrolyte and the solvent it is placed in have a large effect on the conformation of the polymer both in solution and upon deposition onto the substrate. Due to their unique nature, polyelectrolytes have many options for solvents that traditional polymers such as polyethylene, styrene, and others, would not be soluble in. An excellent example of this is water. While water is a high-polarity solvent, it will still dissolve many polyelectrolytes. The conformation of a polyelectrolyte in solution is determined by a balance of the (usually unfavorable) interactions between the solvent and the polymer, and the electrostatic repulsion between the individual repeat units of the polymer. It has been suggested that a polyelectrolyte chain will form an elongated cylindrical globule in order to optimize its energy. Some models go further and postulate that the most efficient configuration is a series of cylindrical globules linking much larger diameter spherical globules in a "necklace" configuration.
Good solvent
In a good solvent, the electrostatic forces between the repeat units of the polymer and the solvent are favorable. While not entirely intuitive, this causes the polymer to assume a more tightly packed conformation. This is due to the screening the solvent molecules perform between the charged repeat units of the polyelectrolyte, decreasing the electrostatic repulsion the polymer chain experiences. Since the polymer backbone does not repel itself as strongly as it would in a poor solvent, the polymer chain acts more similarly to an uncharged polymer, assuming a compact conformation.
Poor solvent
In a poor solvent, the solvent molecules interact poorly or unfavorably with the charged portions of the polyelectrolyte. The inability of the solvent to effectively screen the charges between repeat units causes the polymer to assume a looser conformation due to electrostatic repulsion of its repeat units. These interactions allow for the polymer to be more uniformly deposited onto the substrate.
Salt concentration
When an ionic compound is dissolved in the solvent, the ions act to screen the charges on the polyelectrolyte chains. The ionic concentration of the solution will determine the layer formation characteristics of the polyelectrolyte as well as the conformation the polymer assumes in solution.
High salt
High salt concentrations cause conditions similar to the interactions experienced by a polymer in a favorable solvent. Polyelectrolytes, while charged, are still mainly non-polar with carbon backbones. While the charges on the polymer backbone exert an electrostatic force that drives the polymer into a more open and loose conformation, if the surrounding solution has a high concentration of salt, then the charge repulsion will be screened. Once this charge is screened the polyelectrolyte will act as any other non-polar polymer would in a high ionic strength solution and begin to minimize interactions with the solvent. This leads to a much more clumped and dense polymer deposited onto the surface.
Low Salt
In a low ionic strength solution, the charges present on the repeat units of the polymer are the dominant force controlling conformation. Since there is very little charge present to screen the repulsive interactions between the repeat units, the polymer assumes a very spread out, loose conformation. This conformation allows for more uniform layering on the substrate, which is helpful in preventing surface defects and non-uniform surface properties.
Industrial uses of polyelectrolyte layers
Polyelectrolytes can be applied to multiple types of surfaces due to the variety of ionic polymers available. They can be applied to solid surfaces in multi-layer form to fulfill a variety of design objectives, they can be used to surround solid particles to enhance the stability of a colloidal system, and they can even be assembled to form an independent structure that can be used to ferry drugs throughout the human body.
Polymer coatings
Polyelectrolyte multi-layers are a promising area of research in the polymer coating industry because they can be applied in a spray-on fashion at low cost in a water-based solvent. Although the polymers are held to the surface only by electrostatic forces, the multi-layer coatings adhere aggressively under liquid shear. The disadvantage to this coating technology is that the layers have the consistency of a gel and thus are weak against abrasion.
Stainless steel corrosion resistance
Polyelectrolytes have been used by scientists to coat stainless steel using the layer-by-layer application method in order to inhibit corrosion. The exact mechanism by which corrosion is restricted is unknown because polyelectrolyte multi-layers are water-logged and of a gel-like consistency. One theory is that the layers form a barrier impenetrable to small ions that facilitate corrosion of the steel. Additionally, the water molecules within the multi-layer film are held in a restricted state by the ionic groups of the polyelectrolytes. This decreases the chemical activity of the water at the surface of the steel.
Implant enhancement
Many biomedical devices that come into contact with bodily fluids are susceptible to adverse foreign body response, or rejection and thus, failure of the device. The main mechanism of infection is the formation of a biofilm, which is a matrix of sessile bacteria consisting of around 15% bacterial cells by mass and 85% hydrophobic exopolysaccharide fibers. One way to eliminate this risk is to apply localized treatment to the area in the vicinity of the implant. This can be done by applying a drug-impregnated polyelectrolyte multi-layer to the medical device prior to implantation. The goal with this technology is to create a combination of polyelectrolyte multi-layers where one multi-layer prevents the formation of a biofilm and another releases a small-molecule drug through diffusion. This would be more effective than the current technique of releasing a high dose of drugs into the body and counting on some of it to navigate to the afflicted area. The base layer for an effective coating for an implant is DMLPEI/PAA, or linear N, N-dodecyl,methyl-poly(ethyleneimine) / poly (acrylic acid).
Colloid stability
Another of the major applications of polyelectrolyte adsorption is the stabilization (or destabilization) of solid colloidal suspensions, or sols. Particles in solution tend to have attractive forces similar to van der Waals forces, modeled by Hamaker theory. These forces tend to cause colloidal particles to aggregate or flocculate. The Hamaker attractive effect is balanced by one or both of two repulsive effects of colloids in solution. The first is electrostatic stabilization, in which like charges of the particles repel one another. This effect is due to the zeta potential that exists due to a particle's surface charge in solution. The second is steric stabilization, due to steric effects. Drawing particles together with adsorbed polymer chains greatly decreases the conformational entropy of the polymer chains at the surface, which is thermodynamically unfavorable, making flocculation and coagulation more difficult.
The adsorption of polyelectrolytes can be used to stabilize suspensions, such as in the case of dyes and paints. It can also be used to destabilize suspensions by adsorbing oppositely charged chains to the particle surface, neutralizing the zeta-potential and causing flocculation or coagulation of contaminants. This is used heavily in waste-water treatment to force suspensions of contaminants to flocculate, allowing them to be filtered. There are a variety of industrial flocculants that are either cationic or anionic in nature for targeting particular species.
Encapsulation of liquid cores
An application of the additional stability a polyelectrolyte multi-layer will grant a colloid is the creation of a solid coating for a liquid core. While polyelectrolyte layers are generally adsorbed onto solid substrates, they may also be adsorbed to liquid substrates such as oil in water emulsions or colloids. This process has much potential, but is rife with difficulty. Since colloids are generally stabilized by surfactants, and often ionic surfactants, the adsorption of a multi-layer that is similarly charged to the surfactant causes problems due to the electrostatic repulsion between the polyelectrolyte and the surfactant. This can be circumvented by using non-ionic surfactants; however, the solubility of these non-ionic surfactants in water is greatly decreased compared to ionic surfactants.
These cores, once created, can be used for things such as drug delivery and microreactors. For drug delivery, the polyelectrolyte shell would break down after a certain amount of time, releasing the drug and helping it travel through the digestive tract, which is one of the biggest barriers for the effectiveness of drug delivery.
References
Materials science | Polyelectrolyte adsorption | [
"Physics",
"Materials_science",
"Engineering"
] | 2,772 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
32,006,111 | https://en.wikipedia.org/wiki/Protein%20adsorption%20in%20the%20food%20industry | Protein adsorption refers to the adhesion of proteins to solid surfaces. This phenomenon is an important issue in the food processing industry, particularly in milk processing and wine and beer making. Excessive adsorption, or protein fouling, can lead to health and sanitation issues, as the adsorbed protein is very difficult to clean and can harbor bacteria, as is the case in biofilms. Product quality can be adversely affected if the adsorbed material interferes with processing steps, like pasteurization. However, in some cases protein adsorption is used to improve food quality, as is the case in fining of wines.
Protein adsorption
Protein adsorption and protein fouling can cause major problems in the food industry (particularly the dairy industry) when proteins from food adsorb to processing surfaces, such as stainless steel or plastic (e.g. polypropylene). Protein fouling is the gathering of protein aggregates on a surface. This is most common in heating processes that create a temperature gradient between the equipment and the bulk substance being heated. In protein-fouled heating equipment, adsorbed proteins can create an insulating layer between the heater and the bulk material, reducing heating efficiency. This leads to inefficient sterilization and pasteurization. Also, proteins stuck to the heater may cause a burned taste or color in the bulk material. Additionally, in processes that employ filtration, protein aggregates that gather on the surface of the filter can block the flow of the bulk material and greatly reduce filter efficiency.
Examples of adsorption
Beer stone
Beerstone is a buildup that forms when oxalate, proteins, and calcium or magnesium salts from the grains and water in the beer brewing process precipitate and form scale on kegs, barrels and tap lines. The minerals adsorb to the surface of the container first, driven by charge attractions. Proteins are often coordinated to these minerals in the solution and can bind with them to the surface. In other cases proteins also adsorb to the minerals on the surface, making deposits difficult to remove, as well as providing a surface that can easily harbor microorganisms. If built-up beer stone inside tap lines flakes off, it can negatively affect the quality of the finished product by making beer hazy and contributing "off" flavors. It is also harmful from a nutritional standpoint: oxalates can decrease absorption of calcium in the body, in addition to increasing risk of kidney stone formation.
Wine making
Grape and wine proteins tend to aggregate and form hazes and sediment in finished wines, especially white wines. Haze-causing proteins can persist in wine due to low settling velocities or charge repulsion on individual particles. Fining agents, such as bentonite clays, are used to clarify wine by removing these proteins. Also, proteinaceous agents such as albumin, casein, or gelatin are used in wine clarification to remove tannins or other phenols.
Biofilms
A biofilm is a community of microorganisms adsorbed to a surface. Microorganisms in biofilms are enclosed in a polymeric matrix consisting of exopolysaccharides, extracellular DNA and proteins. Seconds after a surface (usually metal) is placed in a solution, inorganic and organic molecules adsorb onto the surface. These molecules are attracted mainly by Coulombic forces (see above section), and can adhere very strongly to the surface. This first layer is called the conditioning layer, and is necessary for the microorganisms to bind to the surface. These microorganisms then attach reversibly by Van der Waals forces, followed by irreversible adhesion through self-produced attachment structures such as pili or flagella. Biofilms form on solid substrates such as stainless steel. A biofilm's enclosing polymeric matrix offers protection to its microbes, increasing their resistance to detergents and cleaning agents. Biofilms on food processing surfaces can be a biological hazard to food safety. Increased chemical resistance in biofilms can lead to a persistent contamination condition.
Dairy industry
Thermal treatment of milk by indirect heating (e.g. pasteurization) to reduce microbial load and increase shelf life is generally performed by a plate heat exchanger. Heat exchanger surfaces can become fouled by adsorbed milk protein deposits. Fouling is initiated by formation of a protein monolayer at room temperature, followed by heat induced aggregation and deposition of whey protein and calcium phosphate deposits. Adsorbed proteins decrease efficiency of heat transfer and potentially affect product quality by preventing adequate heating of milk.
Mechanisms for protein adsorption
The common trend in all examples of protein adsorption in the food industry is that of adsorption to minerals adsorbed to the surface first. This phenomenon has been studied but it is not well understood. Spectroscopy of proteins adsorbed onto clay-like minerals show variations in the C=O and N-H bond stretches, meaning that these bonds are involved in the protein binding.
Coulombic
In some cases proteins are attracted to surfaces by an excessive surface charge. When a surface in a fluid has a net charge, ions in the fluid will adsorb to the surface. Proteins also have charged surfaces due to charged amino acid residues on the surface of the protein. The surface and the protein are then attracted by Coulombic forces.
The attraction a protein feels from a charged surface depends exponentially on the surface's charge, as described by the following formula:
Where
is the potential felt by the protein
is the actual potential of the surface
is the distance from the protein to the surface, and
is the Debye length.
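A minimal sketch of the screened-potential expression these definitions describe, in its standard Debye–Hückel form (the symbols here are illustrative):

\psi(x) = \psi_0 \, e^{-x/\lambda_D} ,

with ψ the potential felt by the protein at a distance x from the surface, ψ0 the surface potential, and λD the Debye length.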
The potential at a protein's surface is determined by the number of charged amino acids it has and its isoelectric point, pI.
Thermodynamic
Protein adsorption can also occur as a direct result of heating a mixture. Protein adsorption in milk processing is often used as a model for this type of adsorption in other situations. Milk is composed mainly of water, with less than 20% of suspended solids or dissolved proteins. Proteins make up only 3.6% of milk in total, and only 26% of the components that are not water. These proteins are all responsible for fouling that occurs during pasteurization.
As milk is heated during pasteurization many of the proteins in the milk are denatured. Pasteurization temperatures can reach 161 °F (71.7 °C). This temperature is high enough to denature whey proteins such as β-lactoglobulin, lowering the nutritional value of the milk and causing fouling. Milk is heated to these high temperatures for only a short time (15–20 seconds) to reduce the amount of denaturation. However, fouling from denatured proteins is still a significant problem.
Denaturation exposes hydrophobic amino acid residues in the protein, which had been previously protected by the protein. The exposed hydrophobic amino acids decrease the entropy of the water surrounding them, making it favorable for surface adsorption. Some of the β-lactoglobulin (β-lg) will adsorb directly onto the surface of a heat exchanger or container. Other denatured β-lg molecules adsorb to casein micelles, which are also present in the milk. As more and more β-lg proteins bind to the casein micelle it forms an aggregate, which will then diffuse to the heat exchanger and/or surface of the container.
Biochemical
While the aggregates can explain much of the protein fouling found in milk processing, this does not account for it all. A third type of fouling has been discovered that is explained by the chemical interactions of the denatured β-lg proteins.
β-lg contains 5 cysteine residues, four of which are covalently bonded in pairs, forming S-S bonds. When β-lg is denatured, the fifth cysteine residue is exposed to the water. This residue then bonds to other β-lg proteins, including those already adsorbed to the surface. This produces a strong interaction between the denatured proteins and the surface of the container.
Isotherms
Isotherms are used to quantify the amount of adsorbed protein on a surface at a constant temperature, depending on the concentration of protein above the surface. Researchers have used a Langmuir-type isotherm model to describe experimental values for protein adsorption.
In this equation
is the amount of adsorbed protein
is the surface area per molecule
is the partial molar volume of protein
is the negative of the Gibbs Free Energy of adsorption per unit area and
is the equilibrium protein concentration.
This equation has been applied to a laboratory setting of protein adsorption at temperatures higher than 50 °C from a model solution of protein and water. It is especially useful for modeling protein fouling in milk processing.
Removal of adsorbed proteins
Adsorbed proteins are among the most difficult food soils to remove from food contact surfaces. In particular, heat-denatured proteins (such as those found in dairy industry applications) adhere tightly to surfaces and require strong alkaline cleaners for removal. It is important that cleaning methods are capable of removing both visible and non-visible protein soils. Nutrients for bacterial growth must be removed as well as biofilms that may have built up on the food contact surface. Proteins are water-insoluble, slightly soluble in acidic solutions and soluble in alkaline solutions, which limits the type of cleaner that can be used to remove protein from the surface. Generally speaking, highly alkaline cleaners with peptizing and wetting agents are most effective in protein removal on food contact surfaces. Cleaning temperature is also a concern for effective protein removal. As temperature increases, the activity of the cleaning compound increases, making soil removal easier. However, at higher temperatures (> 55 °C) proteins denature and cleaning efficacy is reduced.
Alkaline cleaners
Alkaline cleaners are classified as compounds with pH 7-14. Proteins are most effectively removed from surfaces by cleaners with a pH of 11 or higher. An example of a strong alkaline cleaning agent is sodium hydroxide, also called caustic soda. Although sodium hydroxide (NaOH) can cause corrosion on food contact surfaces such as stainless steel, it is the preferred cleaning agent for protein removal due to its efficacy in dissolving proteins and dispersing/emulsifying food soils. Silicates are often added to these cleaners to reduce corrosion on metal surfaces. The mechanism of alkaline cleaning action in proteins follows a three-step process:
Gel formation: Upon contact with the alkaline solution, the protein soil swells and forms a removable gel.
Protein removal: The protein gel is removed through mass transfer, while the cleaning agent continues to diffuse through the soil, increasing gel formation.
Decay stage: The protein gel has been eroded to the point where it is a thin deposit. Removal at this stage is governed by shear stress forces and mass transfer of the gel.
Hypochlorite is often added to alkaline cleaners to peptize proteins. Chlorinated cleansers work by oxidizing sulfide crosslinks in proteins. Cleaning speed and efficiency is improved due to increased diffusion of the cleaner into the soil matrix, now composed of smaller, more soluble proteins.
Enzyme cleaners
Enzyme-based cleaners are especially useful for biofilm removal. Bacteria are somewhat difficult to remove with traditional alkaline or acid cleaners. Enzyme cleaners are more effective on biofilms since they work as proteases by breaking down proteins at bacterial attachment sites. They work at maximum efficiency at high pH and at temperatures below 60 °C. Enzyme cleaners are an increasingly attractive alternative to traditional chemical cleaners because of biodegradability and other environmental factors, such as reduced wastewater generation and energy savings from using cold water. However, they are typically more expensive than alkaline or acid cleaners.
References
Biochemistry
Food industry
Proteins | Protein adsorption in the food industry | [
"Chemistry",
"Biology"
] | 2,495 | [
"Biomolecules by chemical classification",
"nan",
"Molecular biology",
"Biochemistry",
"Proteins"
] |
41,830,137 | https://en.wikipedia.org/wiki/Bubble%20oxygenator | A bubble oxygenator is an early implementation of the oxygenator used for cardiopulmonary bypass. It has since been supplanted by the membrane oxygenator as a result of advances in material science. Some continue to promote it as a low-cost alternative allowing greater self-sufficiency.
History
Open-heart surgery developed rapidly beginning in the 1950s, and many methods were developed for oxygenating blood outside the body. A bubble oxygenator was introduced in 1950 by Clark, Gollan, and Gupta. The method faced initial skepticism but in 1956 the University of Minnesota's DeWall-Lillehei bubble oxygenator was demonstrated to be relatively simple, inexpensive, and easy to operate.
The device faced competition from membrane oxygenators, which arrived within the same decade and were found to provide better oxygenation for periods over eight hours, and other advantages beyond six hours. However, most open-heart operations were substantially shorter, and by 1976 the bubble oxygenator was predominant.
In the 1980s, microporous membrane oxygenators were developed, and replaced bubble oxygenators in most applications.
References
Medical equipment | Bubble oxygenator | [
"Biology"
] | 226 | [
"Medical equipment",
"Medical technology"
] |
37,647,575 | https://en.wikipedia.org/wiki/Temperature%E2%80%93salinity%20diagram | In oceanography, temperature-salinity diagrams, sometimes called T-S diagrams, are used to identify water masses. In a T-S diagram, rather than plotting each water property as a separate "profile," with pressure or depth as the vertical coordinate, potential temperature (on the vertical axis) is plotted versus salinity (on the horizontal axis).
Temperature and salinity combine to determine the potential density of seawater; contours of constant potential density are often shown in T-S diagrams. Each contour is known as an isopycnal, or a region of constant density. These isopycnals appear curved because of the nonlinearity of the equation of state of seawater. The thermal expansion coefficient, αT, and the haline contraction coefficient, βS, vary with temperature and salinity because both properties affect the potential density of seawater.
As long as it remains isolated from the surface, where heat or fresh water can be gained or lost, and in the absence of mixing with other water masses, a water parcel's potential temperature and salinity are conserved. Deep water masses thus retain their T-S characteristics for long periods of time, and can be identified readily on a T-S plot. Deep water masses are formed in different locations, and therefore have differing characteristic properties, like ranges of temperature and salinity values. When T-S plots are created by compiling data collected from various locations, it is possible to group the data points based on where the water mass was formed. This gives an idea of how the properties of different water masses compare to each other, which can give an idea of the way that thermohaline circulation works. In general, the depth of the water increases as you move to the bottom right corner of the graph (high salinity, low temperature), but there is some variation.
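To illustrate how isopycnals are drawn on a T-S diagram, the hedged Python sketch below uses a simplified linear equation of state with constant expansion and contraction coefficients (illustrative values only; with a linear equation of state the contours come out straight, and the curvature described above appears only when a full nonlinear seawater equation of state such as TEOS-10 is used):

import numpy as np
import matplotlib.pyplot as plt

# Simplified linear equation of state: rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0)).
# Reference values and coefficients are order-of-magnitude illustrations, not measured constants.
rho0, T0, S0 = 1027.0, 10.0, 35.0   # kg/m^3, deg C, salinity
alpha, beta = 1.7e-4, 7.6e-4        # thermal expansion (1/K) and haline contraction (1/salinity unit)

S = np.linspace(33.0, 37.0, 200)    # salinity on the horizontal axis
T = np.linspace(0.0, 25.0, 200)     # potential temperature on the vertical axis
SS, TT = np.meshgrid(S, T)
sigma = rho0 * (1.0 - alpha * (TT - T0) + beta * (SS - S0)) - 1000.0  # density anomaly

cs = plt.contour(SS, TT, sigma, levels=10, colors="gray")  # each contour line is an isopycnal
plt.clabel(cs, fmt="%.1f")
plt.xlabel("Salinity")
plt.ylabel("Potential temperature (deg C)")
plt.title("T-S diagram with isopycnals (linear equation of state)")
plt.show()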
References
Water masses
Temperature | Temperature–salinity diagram | [
"Physics",
"Chemistry"
] | 391 | [
"Scalar physical quantities",
"Temperature",
"Thermodynamic properties",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Chemical oceanography",
"Water masses",
"Thermodynamics",
"Geochemistry stubs",
"Wikipedia categories named after physical quantities"
] |
37,648,306 | https://en.wikipedia.org/wiki/Cooperative%20luminescence%20and%20cooperative%20absorption | Cooperative luminescence is the radiative process in which two excited ions simultaneously make downward transition to emit one photon with the sum of their excitation energies. The inverse process is cooperative absorption, in which a photon can be absorbed by a coupled pair of two ions, making them excited simultaneously.
References
Radiation | Cooperative luminescence and cooperative absorption | [
"Physics",
"Chemistry"
] | 66 | [
"Transport phenomena",
"Physical phenomena",
"Waves",
"Radiation",
"Nuclear and atomic physics stubs",
"Nuclear physics"
] |
28,210,693 | https://en.wikipedia.org/wiki/Fastboot | Fastboot is a communication protocol used primarily with Android devices. It is implemented in a command-line interface tool of the same name and as a mode of the bootloader of Android devices. The tool is included with the Android SDK package and used primarily to modify the flash filesystem via a USB connection from a host computer. It requires that the device be started in Fastboot mode. If the mode is enabled, it will accept a specific set of commands, sent through USB bulk transfers. Fastboot on some devices allows unlocking the bootloader, and subsequently, enables installing custom recovery image and custom ROM on the device. Fastboot does not require USB debugging to be enabled on the device. To use fastboot, a specific combination of keys must be held during boot.
Not all Android devices have fastboot enabled, and Android device manufacturers are allowed to choose if they want to implement fastboot or some other protocol.
Keys pressed
The keys that have to be pressed for fastboot differ for various vendors.
HTC, Google Pixel, and Xiaomi: Power and volume down
Zebra and symbol devices: Right scan/action button
Sony: Power and volume up
Google Nexus: Power, volume up and volume down
On Samsung devices (excluding the Nexus S and Galaxy Nexus), power, volume down and home have to be pressed to enter ODIN mode. ODIN is a proprietary protocol and tool that serves as an alternative to fastboot; a partial third-party alternative to the tool exists.
Commands
Some of the most commonly used fastboot commands include the following (a brief usage sketch follows the list):
flash rewrites a partition with a binary image stored on the host computer.
flashing unlock/oem unlock *** unlocks an OEM locked bootloader for flashing custom/unsigned ROMs. The *** is a device specific unlock key.
flashing lock/oem lock *** locks an OEM unlocked bootloader.
erase erases a specific partition.
reboot reboots the device into either the main operating system, the system recovery partition or back into its boot loader.
devices displays a list of all devices (with the serial number) connected to the host computer.
format formats a specific partition; the file system of the partition must be recognized by the device.
oem device-info checks the bootloader state.
getvar all displays all information about device (IMEI, bootloader version, battery state etc.).
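A minimal sketch of driving a few of the commands listed above from Python (assuming the fastboot binary from the Android SDK platform-tools is on the PATH and a device is connected in fastboot mode; the image filename is hypothetical):

import subprocess

def fastboot(*args):
    # Run one fastboot subcommand and return its combined output as text.
    result = subprocess.run(["fastboot", *args], capture_output=True, text=True, check=True)
    return result.stdout + result.stderr   # fastboot prints much of its status to stderr

print(fastboot("devices"))                  # list connected devices (serial numbers)
print(fastboot("getvar", "all"))            # dump bootloader variables
# fastboot("flash", "recovery", "recovery.img")   # rewrite a partition from a local image (destructive)
# fastboot("reboot")                              # reboot back into the main operating system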
Implementations
The fastboot protocol has been implemented in the Android bootloader called ABOOT, the Little Kernel fork of Qualcomm, TianoCore EDK II, and Das U-Boot.
See also
Bootloader unlocking
Android recovery mode
Thor (protocol)
DFU (Device Firmware Upgrade mechanism)
References
External links
Flashing Devices - Android.com
Fastboot protocol specification
Reverse Engineering Android's Aboot
Android (operating system)
Communications protocols
Android (operating system) development software
Booting | Fastboot | [
"Technology"
] | 591 | [
"Computer standards",
"Communications protocols"
] |
28,211,833 | https://en.wikipedia.org/wiki/Exact%20solutions%20of%20classical%20central-force%20problems | In the classical central-force problem of classical mechanics, some potential energy functions produce motions or orbits that can be expressed in terms of well-known functions, such as the trigonometric functions and elliptic functions. This article describes these functions and the corresponding solutions for the orbits.
General problem
In the Binet formulation the orbit is described in terms of the reciprocal radius. The resulting Binet equation can be solved numerically for nearly any central force. However, only a handful of forces result in formulae for the orbit in terms of known functions. The solution for the angular coordinate can be expressed as an integral over the reciprocal radius.
A central-force problem is said to be "integrable" if this integration can be solved in terms of known functions.
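For reference, the conventional textbook forms of the Binet equation and of the quadrature it leads to are (the notation is chosen here for illustration: u = 1/r, θ the angular coordinate, m the mass, L the angular momentum, E the energy, F the central force and V its potential; signs and integration constants depend on convention):

\frac{d^2 u}{d\theta^2} + u = -\frac{m}{L^2 u^2} F\!\left(\frac{1}{u}\right),
\qquad
\theta = \int \frac{du}{\sqrt{\dfrac{2m\left(E - V(1/u)\right)}{L^2} - u^2}} .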
If the force is a power law, i.e., if , then can be expressed in terms of circular functions and/or elliptic functions if equals 1, -2, -3 (circular functions) and -7, -5, -4, 0, 3, 5, -3/2, -5/2, -1/3, -5/3 and -7/3 (elliptic functions).
If the force is the sum of an inverse quadratic law and a linear term, i.e., if , the problem also is solved explicitly in terms of Weierstrass elliptic functions.
References
Bibliography
Classical mechanics | Exact solutions of classical central-force problems | [
"Physics"
] | 263 | [
"Mechanics",
"Classical mechanics"
] |
28,217,722 | https://en.wikipedia.org/wiki/Distance%20correlation | In statistics and in probability theory, distance correlation or distance covariance is a measure of dependence between two paired random vectors of arbitrary, not necessarily equal, dimension. The population distance correlation coefficient is zero if and only if the random vectors are independent. Thus, distance correlation measures both linear and nonlinear association between two random variables or random vectors. This is in contrast to Pearson's correlation, which can only detect linear association between two random variables.
Distance correlation can be used to perform a statistical test of dependence with a permutation test. One first computes the distance correlation (involving the re-centering of Euclidean distance matrices) between two random vectors, and then compares this value to the distance correlations of many shuffles of the data.
Background
The classical measure of dependence, the Pearson correlation coefficient, is mainly sensitive to a linear relationship between two variables. Distance correlation was introduced in 2005 by Gábor J. Székely in several lectures to address this deficiency of Pearson's correlation, namely that it can easily be zero for dependent variables. Correlation = 0 (uncorrelatedness) does not imply independence while distance correlation = 0 does imply independence. The first results on distance correlation were published in 2007 and 2009. It was proved that distance covariance is the same as the Brownian covariance. These measures are examples of energy distances.
The distance correlation is derived from a number of other quantities that are used in its specification, specifically: distance variance, distance standard deviation, and distance covariance. These quantities take the same roles as the ordinary moments with corresponding names in the specification of the Pearson product-moment correlation coefficient.
Definitions
Distance covariance
Let us start with the definition of the sample distance covariance. Let (Xk, Yk), k = 1, 2, ..., n be a statistical sample from a pair of real valued or vector valued random variables (X, Y). First, compute the n by n distance matrices (aj, k) and (bj, k) containing all pairwise distances
where ||⋅|| denotes the Euclidean norm. Then take all doubly centered distances
where the subtracted terms are the j-th row mean, the k-th column mean, and the grand mean of the distance matrix of the sample, respectively. The notation is similar for the b values. (In the matrices of centered distances (Aj, k) and (Bj,k) all rows and all columns sum to zero.) The squared sample distance covariance (a scalar) is simply the arithmetic average of the products Aj, k Bj, k:
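In standard notation (restated here for completeness), this is

\operatorname{dCov}^2_n(X,Y) = \frac{1}{n^2} \sum_{j=1}^{n} \sum_{k=1}^{n} A_{j,k}\, B_{j,k},

with a_{j,k} = \|X_j - X_k\|, \; b_{j,k} = \|Y_j - Y_k\|, \; A_{j,k} = a_{j,k} - \bar a_{j\cdot} - \bar a_{\cdot k} + \bar a_{\cdot\cdot}, and B_{j,k} defined analogously from the b_{j,k}.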
The statistic Tn = n dCov2n(X, Y) determines a consistent multivariate test of independence of random vectors in arbitrary dimensions. For an implementation see dcov.test function in the energy package for R.
The population value of distance covariance can be defined along the same lines. Let X be a random variable that takes values in a p-dimensional Euclidean space with probability distribution and let Y be a random variable that takes values in a q-dimensional Euclidean space with probability distribution , and suppose that X and Y have finite expectations. Write
Finally, define the population value of squared distance covariance of X and Y as
One can show that this is equivalent to the following definition:
where E denotes expected value, and the primed and double-primed random variables denote independent and identically distributed (iid) copies of X and Y. Distance covariance can be expressed in terms of the classical Pearson's covariance,
cov, as follows:
This identity shows that the distance covariance is not the same as the covariance of the distances themselves, which can be zero even if X and Y are not independent.
Alternatively, the distance covariance can be defined as the weighted L2 norm of the distance between the joint characteristic function of the random variables and the product of their marginal characteristic functions:
where the three characteristic functions are the joint characteristic function of (X, Y) and the marginal characteristic functions of X and Y, respectively; p and q denote the Euclidean dimensions of X and Y, and thus of s and t; and cp, cq are constants. The weight function is chosen to produce a scale-equivariant and rotation-invariant measure that does not go to zero for dependent variables. One interpretation of the characteristic function definition is that the variables eisX and eitY are cyclic representations of X and Y with different periods given by s and t, and the expression in the numerator of the characteristic function definition of distance covariance is simply the classical covariance of eisX and eitY. The characteristic function definition clearly shows that
dCov2(X, Y) = 0 if and only if X and Y are independent.
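The weighted L2-norm definition described above has the following standard form (restated here for reference; f denotes a characteristic function and cp, cq are the constants mentioned in the text):

\operatorname{dCov}^2(X,Y) = \frac{1}{c_p c_q} \int_{\mathbb{R}^{p+q}} \frac{\left| f_{X,Y}(s,t) - f_X(s)\, f_Y(t) \right|^2}{|s|_p^{1+p}\, |t|_q^{1+q}} \, dt \, ds .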
Distance variance and distance standard deviation
The distance variance is a special case of distance covariance when the two variables are identical. The population value of distance variance is the square root of
where the variables appearing in the expectations are independent and identically distributed copies of the random variable, and E denotes the expected value.
The sample distance variance is the square root of
which is a relative of Corrado Gini's mean difference introduced in 1912 (but Gini did not work with centered distances).
The distance standard deviation is the square root of the distance variance.
Distance correlation
The distance correlation of two random variables is obtained by dividing their distance covariance by the product of their distance standard deviations. The distance correlation is the square root of
and the sample distance correlation is defined by substituting the sample distance covariance and distance variances for the population coefficients above.
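In standard notation the population quantity just described is

\operatorname{dCor}^2(X,Y) = \frac{\operatorname{dCov}^2(X,Y)}{\sqrt{\operatorname{dVar}^2(X)\, \operatorname{dVar}^2(Y)}}

whenever the denominator is positive, and zero otherwise (a conventional definition, restated here for completeness).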
For easy computation of sample distance correlation see the dcor function in the energy package for R.
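A hedged Python sketch of the same computation, using only NumPy and implementing the biased sample formulas given above (an illustrative implementation, not the energy package itself; shown for one-dimensional samples):

import numpy as np

def dist_corr(x, y):
    # Sample distance correlation of two 1-D samples (biased V-statistic version).
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    a = np.abs(x - x.T)                      # pairwise distance matrices
    b = np.abs(y - y.T)
    # Double-center each matrix: subtract row means and column means, add back the grand mean.
    A = a - a.mean(axis=0) - a.mean(axis=1, keepdims=True) + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1, keepdims=True) + b.mean()
    dcov2 = (A * B).mean()                   # squared sample distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    denom = np.sqrt(dvar_x * dvar_y)
    return np.sqrt(dcov2 / denom) if denom > 0 else 0.0

# Nonlinear dependence that Pearson correlation misses but distance correlation detects:
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
print(dist_corr(x, x ** 2))                  # clearly positive
print(np.corrcoef(x, x ** 2)[0, 1])          # near zero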
Properties
Distance correlation
Distance covariance
This last property is the most important effect of working with centered distances.
The statistic is a biased estimator of . Under independence of X and Y
An unbiased estimator of is given by Székely and Rizzo.
Distance variance
Equality holds in (iv) if and only if one of the random variables or is a constant.
Generalization
Distance covariance can be generalized to include powers of Euclidean distance. Define
Then for every , and are independent if and only if . It is important to note that this characterization does not hold for exponent ; in this case for bivariate , is a deterministic function of the Pearson correlation. If and are powers of the corresponding distances, , then sample distance covariance can be defined as the nonnegative number for which
One can extend to metric-space-valued random variables and : If has law in a metric space with metric , then define , , and (provided is finite, i.e., has finite first moment), . Then if has law (in a possibly different metric space with finite first moment), define
This is non-negative for all such iff both metric spaces have negative type. Here, a metric space has negative type if is isometric to a subset of a Hilbert space. If both metric spaces have strong negative type, then iff are independent.
Alternative definition of distance covariance
The original distance covariance has been defined as the square root of the quantity above, rather than as the squared coefficient itself. It has the property that it is the energy distance between the joint distribution of the pair and the product of its marginals. Under this definition, however, the distance variance, rather than the distance standard deviation, is measured in the same units as the distances.
Alternately, one could define distance covariance to be the square of the energy distance:
In this case, the distance standard deviation of is measured in the same units as distance, and there exists an unbiased estimator for the population distance covariance.
Under these alternate definitions, the distance correlation is also defined as the square, rather than the square root.
Alternative formulation: Brownian covariance
Brownian covariance is motivated by generalization of the notion of covariance to stochastic processes. The square of the covariance of random variables X and Y can be written in the following form:
where E denotes the expected value and the prime denotes independent and identically distributed copies. We need the following generalization of this formula. If U(s), V(t) are arbitrary random processes defined for all real s and t then define the U-centered version of X by
whenever the subtracted conditional expected value exists and denote by YV the V-centered version of Y. The (U,V) covariance of (X,Y) is defined as the nonnegative number whose square is
whenever the right-hand side is nonnegative and finite. The most important example is when U and V are two-sided independent Brownian motions /Wiener processes with expectation zero and covariance (for nonnegative s, t only). (This is twice the covariance of the standard Wiener process; here the factor 2 simplifies the computations.) In this case the (U,V) covariance is called Brownian covariance and is denoted by
There is a surprising coincidence: The Brownian covariance is the same as the distance covariance:
and thus Brownian correlation is the same as distance correlation.
On the other hand, if we replace the Brownian motion with the deterministic identity function id then Covid(X,Y) is simply the absolute value of the classical Pearson covariance,
Related metrics
Other correlational metrics, including kernel-based correlational metrics (such as the Hilbert-Schmidt Independence Criterion or HSIC) can also detect linear and nonlinear interactions. Both distance correlation and kernel-based metrics can be used in methods such as canonical correlation analysis and independent component analysis to yield stronger statistical power.
See also
RV coefficient
For a related third-order statistic, see Distance skewness.
Notes
References
External links
E-statistics (energy statistics)
Statistical distance
Theory of probability distributions
Covariance and correlation | Distance correlation | [
"Physics"
] | 2,051 | [
"Physical quantities",
"Statistical distance",
"Distance"
] |
28,219,167 | https://en.wikipedia.org/wiki/Kneser%E2%80%93Tits%20conjecture | In mathematics, the Kneser–Tits problem, introduced by based on a suggestion by Martin Kneser, asks whether the Whitehead group W(G,K) of a semisimple simply connected isotropic algebraic group G over a field K is trivial. ["Généralisant le problème de Tannaka-Artin, M.Kneser a posé la question suivante que j’ai imprudemment transformé en conjecture." - J. Tits 1978.] The Whitehead group is the quotient of the rational points of G by the normal subgroup generated by K-subgroups isomorphic to the additive group.
Fields for which the Whitehead group vanishes
A special case of the Kneser–Tits problem asks for which fields the Whitehead group of a semisimple almost simple simply connected isotropic algebraic group is always trivial.
It has been shown that this Whitehead group is trivial for local fields K, and examples have been given of fields for which it is not always trivial. For global fields, the combined work of several authors shows that this Whitehead group is always trivial.
References
External links
Algebraic groups
Conjectures | Kneser–Tits conjecture | [
"Mathematics"
] | 230 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Conjectures"
] |
28,219,940 | https://en.wikipedia.org/wiki/List%20of%20electronic%20color%20code%20mnemonics | Mnemonics are used to help memorize the electronic color codes for resistors. Mnemonics describing specific and relatable scenarios are more memorable than abstract phrases.
Resistor color code
The first letter of each word in a mnemonic matches a color, in order of increasing digit value. The electronic color codes, in order, are listed below (a short decoding sketch follows the list):
0 = Black
1 = Brown
2 = Red
3 = Orange
4 = Yellow
5 = Green
6 = Blue
7 = Violet
8 = Gray
9 = White
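The same 0–9 ordering can be applied directly to read a resistor value. The sketch below is only an illustration of the standard four-band convention (two significant digits plus a power-of-ten multiplier); the helper name and the example bands are arbitrary.

```python
COLORS = ["black", "brown", "red", "orange", "yellow",
          "green", "blue", "violet", "gray", "white"]
DIGIT = {color: value for value, color in enumerate(COLORS)}

def resistance_ohms(band1, band2, multiplier_band):
    """First two bands are significant digits, the third a power-of-ten multiplier."""
    return (10 * DIGIT[band1] + DIGIT[band2]) * 10 ** DIGIT[multiplier_band]

print(resistance_ohms("yellow", "violet", "red"))   # 4, 7, x100  ->  4700 ohms (4.7 kOhm)
```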
Easy to remember
A mnemonic which includes color name(s) generally reduces the chances of confusing black and brown. Some mnemonics that are easy to remember:
Big Boys Race Our Young Girls But Violet Generally Wins.
Better Be Right Or Your Great Big Venture Goes West.
Beetle Bailey Runs Over Your General Before Very Good Witnesses.
Beach Bums Rarely Offer You Gatorade But Very Good Water.
Buster Brown Races Our Young Girls But Violet Generally Wins.
Better Be Right Or Your Great Big Vacation Goes Wrong.
Better Be Right Or Your Great Big Values Go Wrong.
Better Be Right Or Your Great Big Plan Goes Wrong. (with P = Purple for Violet)
Back-Breaking Rascals Often Yield Grudgingly But Virtuous Gentlemen Will Give Shelter Nobly. (with tolerance bands Gold, Silver or None)
Better Be Right Or Your Great Big Plan Goes Wrong - Go Start Now!
Black Beetles Running Over Your Garden Bring Very Grey Weather.
Bad Booze Rots Our Young Guts But Vodka Goes Well – get some now.
Bad Boys Run Over Yellow Gardenias Behind Victory Garden Walls.
Bat Brained Resistor Order You Gotta Be Very Good With.
Betty Brown Runs Over Your Garden But Violet Gingerly Walks.
Big Beautiful Roses Occupy Your Garden But Violets Grow Wild.
Big Brown Rabbits Often Yield Great Big Vocal Groans When Gingerly Slapped Needlessly.
Black Bananas Really Offend Your Girlfriend But Violets Get Welcomed.
Black Birds Run Over Your Gay Barely Visible Grey Worms.
Badly Burnt Resistors On Your Ground Bus Void General Warranty.
Billy Brown Ran Out Yelling Get Back Violets Getting Wet.
Better Be Right Or You're Gonna Be Violently Gouged With Golden Spaghetti.
Bright Boys Rave Over Young Girls But Veto Getting Wed.
Black Bears Raid Our Yellow Green Bins Violently Grabbing Whatever Goodies Smell Nice.
Bad Bears Raid Our Yummy Grub But Veto Grey Waffles.
By Being Revolutionary, Our Young Girls Become Very Great Women.
Bachelor Boys Rush Our Young Girls But Veronica Goes Wild for Gold or Silver Necklaces.
Canada
A mnemonic that is taught in classrooms in Canada:
Black Bears Roam Our Yukon Grounds But Vanish in Gray Winter for Gold Silver Necklaces
India
A mnemonic that is commonly taught in classrooms in India:
B B ROY of Great Britain had a Very Good Wife who wore Gold and Silver Necklace.
Bill Brown Realized Only Yesterday Good Boys Value Good Work.
UK
Mnemonics commonly taught in UK engineering courses include:
Bye Bye Rosie Off You Go to Birmingham Via Great Western.
Bye Bye Rosie Off You Go to Bristol Via Great Western.
Bye Bye Rosie Off You Go to Become a Very Good Wife.
Bill Brown Realised Only Yesterday Good Boys Value Good Work.
Dutch
This mnemonic is commonly taught in the Netherlands:
Zij Bracht Rozen Op Gerrits Graf Bij Vies Grijs Weer (she brought roses onto Gerrit's grave in dirty grey weather)
Vacuum tube era
Popular in the days of vacuum-tube radios:
Better Buy Resistors Or Your Grid Bias Voltages Go West. ("go west" means die)
Offensive/outdated
The following historical mnemonics are generally considered offensive/outdated and should not be used in current electronics training:
Bad boys rape our young girls but Violet gives willingly. (Get Some Now (refers to the tolerance bands Gold, Silver or None))
Bad boys run our young girls behind victory garden walls.
Batman blows Robin on yon Gotham bridge; Vows Gordon's next.
Batman blows Robin on yon Gotham bridge; Very good Wayne! Get Superman Next!
Big boys rape our young girls but Violet goes willingly.
Black boys rape our young girls but Violet goes willingly.
Black boys rape our young girls because virgins go wild.
Black boys rape our young girls behind victory garden walls.
Black boys ride our young girls but virgins go without.
Black boy raped our young girl, bam, virginity gone west.
BaBy ROY of Great Britain is Very Gay With Gold & Silver Necklace.
Casual use in an engineering class has been cited as evidence of the sexism faced by women in scientific fields. Latanya Arvette Sweeney, associate professor of computer science at Carnegie Mellon, mentions yet another as one reason why she felt alienated and eventually dropped out of MIT in the 1980s to form her own software company. In 2011, a teacher in the UK was reprimanded by the General Teaching Council for alluding to an offensive mnemonic and partial use of another.
References
Technology-related lists
Mnemonics
Resistive components
Electronic color code mnemonics | List of electronic color code mnemonics | [
"Physics"
] | 1,014 | [
"Resistive components",
"Physical quantities",
"Electrical resistance and conductance"
] |
29,607,414 | https://en.wikipedia.org/wiki/Lactifluus%20corrugis | Lactifluus corrugis (formerly Lactarius corrugis), commonly known as the corrugated-cap milky, is an edible species of fungus in the family Russulaceae.
Taxonomy
The species was first described by American mycologist Charles Horton Peck in 1880.
Description
The brownish-red cap is wide, and is usually dusted by a light bloom (turning dark when touched). The gills are light yellow and leak white latex, which stains brown. The stem is long and . The spore print is white.
It resembles Lactifluus volemus, the latex of which also stains brown. Additionally, L. hygrophoroides has a pinkish-orange cap.
Habitat and distribution
The mushroom can be found under oak trees in eastern North America between July and September.
Uses
L. corrugis is considered a choice edible mushroom.
See also
List of Lactifluus species
References
corrugis
Edible fungi
Fungi described in 1880
Fungi of North America
Taxa named by Charles Horton Peck
Fungus species | Lactifluus corrugis | [
"Biology"
] | 219 | [
"Fungi",
"Fungus species"
] |
49,341,161 | https://en.wikipedia.org/wiki/Leray%20projection | The Leray projection, named after Jean Leray, is a linear operator used in the theory of partial differential equations, specifically in the fields of fluid dynamics. Informally, it can be seen as the projection on the divergence-free vector fields. It is used in particular to eliminate both the pressure term and the divergence-free term in the Stokes equations and Navier–Stokes equations.
Definition
By pseudo-differential approach
For vector fields (in any dimension ), the Leray projection is defined by
This definition must be understood in the sense of pseudo-differential operators: its matrix valued Fourier multiplier is given by
Here, is the Kronecker delta. Formally, it means that for all , one has
where is the Schwartz space. We use here the Einstein notation for the summation.
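For periodic fields, the Fourier-multiplier definition above translates directly into an FFT-based implementation. The following is a minimal two-dimensional sketch (assuming a square periodic grid and using NumPy only); it checks that a pure gradient field is annihilated by the projection.

```python
import numpy as np

def leray_project(u, v):
    """Leray projection of a periodic 2-D vector field (u, v):
    in Fourier space, subtract k (k . u_hat) / |k|^2; the k = 0 mode is left unchanged."""
    n = u.shape[0]
    k = np.fft.fftfreq(n) * n                       # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                  # avoid division by zero at k = 0
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div = kx * uh + ky * vh                         # k . u_hat (zero at k = 0)
    uh -= kx * div / k2
    vh -= ky * div / k2
    return np.fft.ifft2(uh).real, np.fft.ifft2(vh).real

# A pure gradient field should be mapped (numerically) to zero.
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
gu, gv = np.cos(X) * np.sin(Y), np.sin(X) * np.cos(Y)   # gradient of sin(x) sin(y)
pu, pv = leray_project(gu, gv)
print(np.max(np.abs(pu)), np.max(np.abs(pv)))           # both close to machine precision
```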
By Helmholtz–Leray decomposition
One can show that a given vector field can be decomposed as
Unlike the usual Helmholtz decomposition, the Helmholtz–Leray decomposition of is unique (up to an additive constant for ). Then we can define as
The Leray projector is defined similarly on function spaces other than the Schwartz space, and on different domains with different boundary conditions. The four properties listed below will continue to hold in those cases.
Properties
The Leray projection has the following properties:
The Leray projection is a projection: for all .
The Leray projection is a divergence-free operator: for all .
The Leray projection is simply the identity for the divergence-free vector fields: for all such that .
The Leray projection vanishes for the vector fields coming from a potential: for all .
Application to Navier–Stokes equations
The incompressible Navier–Stokes equations are the partial differential equations given by
where is the velocity of the fluid, the pressure, the viscosity and the external volumetric force.
By applying the Leray projection to the first equation, we may rewrite the Navier-Stokes equations as an abstract differential equation on an infinite dimensional phase space, such as , the space of continuous functions from to where and is the space of square-integrable functions on the physical domain :
where we have defined the Stokes operator and the bilinear form by
The pressure and the divergence free condition are "projected away". In general, we assume for simplicity that is divergence free, so that ; this can always be done, by adding the term to the pressure.
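As an illustration of how the pressure and the divergence constraint are "projected away" in practice, the following sketch performs one explicit Euler step of the projected equation on a periodic grid. It reuses the leray_project helper from the sketch above, uses simple centred differences for the derivatives, and is only a schematic of the idea, not a production Navier–Stokes solver; the viscosity, time step and forcing are left to the caller.

```python
import numpy as np

def projected_euler_step(u, v, fu, fv, nu, dt, dx):
    """One explicit Euler step of  du/dt = P[ nu * Lap(u) - (u . grad) u + f ]
    on a periodic grid, with P the Leray projection (leray_project defined above)."""
    def ddx(a, axis):                       # centred periodic first derivative
        return (np.roll(a, -1, axis) - np.roll(a, 1, axis)) / (2 * dx)
    def lap(a):                             # five-point periodic Laplacian
        return (np.roll(a, -1, 0) + np.roll(a, 1, 0) +
                np.roll(a, -1, 1) + np.roll(a, 1, 1) - 4 * a) / dx**2
    ru = nu * lap(u) - (u * ddx(u, 0) + v * ddx(u, 1)) + fu
    rv = nu * lap(v) - (u * ddx(v, 0) + v * ddx(v, 1)) + fv
    ru, rv = leray_project(ru, rv)          # pressure / compressible part removed
    return u + dt * ru, v + dt * rv
```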
References
Differential equations
Fluid dynamics | Leray projection | [
"Chemistry",
"Mathematics",
"Engineering"
] | 507 | [
"Chemical engineering",
"Mathematical objects",
"Differential equations",
"Equations",
"Piping",
"Fluid dynamics"
] |
49,341,684 | https://en.wikipedia.org/wiki/Generalised%20beam%20theory | In structural engineering and mechanical engineering, generalised beam theory (GBT) is a one-dimensional theory used to mathematically model how beams bend and twist under various loads. It is a generalization of classical Euler–Bernoulli beam theory that approximates a beam as an assembly of thin-walled plates that are constrained to deform as a linear combination of specified deformation modes.
History
Its origin is due to Richard Schardt (1966). Since then many other authors have extended the initial (first-order elastic) GBT formulations developed by Schardt and his co-workers. Many extensions and applications of GBT have been developed by Camotim (Instituto Superior Técnico, University of Lisbon, Portugal) and collaborators, since the beginning of the 21st century.
Description
The theory can be applied without restrictions to any prismatic thin-walled structural member with a straight or curved longitudinal axis (any loading, any cross-section geometry, any boundary conditions). GBT is in some ways analogous to the finite strip method and can be a more computationally efficient method than modeling a beam with a full 2D or 3D finite element method to predict the member structural behavior.
GBT has been widely recognized as an efficient approach to analyzing thin-walled members and structural systems. The efficiency arises mostly from its modal nature – the displacement field is expressed as a linear combination of cross-section deformation modes whose amplitudes vary continuously along the member length (x axis) - see Figures 2-3. Due to GBT assumptions inherent to a thin-walled member, only 3 non-null stress components are considered in the formulations (see Fig. 1).
Membrane displacement field (i.e., in the cross-section mid-surface):
The GBT modal nature makes it possible to (i) acquire in-depth knowledge of the mechanics of thin-walled member behaviour and (ii) judiciously exclude, from subsequent similar GBT analyses, those deformation modes found to play no (or negligible) role in the particular behaviour under scrutiny. Eliminating such modes reduces the number of degrees of freedom involved in a GBT analysis and increases its computational efficiency. GBT has thus proven valuable both for the insight it provides into the structural behaviour under analysis and for its computational efficiency.
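A minimal sketch of the modal ansatz described above: a displacement component at cross-section coordinate s and axial position x is written as a sum of assumed deformation modes, each multiplied by an amplitude function of x. The modes and amplitude functions below are invented placeholders, not the actual GBT modes of any real cross-section.

```python
import numpy as np

def gbt_displacement(x, s, modes, amplitudes):
    """GBT-style ansatz: u(x, s) = sum_k a_k(x) * phi_k(s)."""
    return sum(a(x) * phi(s) for phi, a in zip(modes, amplitudes))

modes = [lambda s: 1.0,                  # axial-extension-like mode
         lambda s: s,                    # bending-like mode, linear over the section
         lambda s: np.sin(np.pi * s)]    # local plate-type distortion
amps  = [lambda x: 1e-3 * x,
         lambda x: 1e-2 * np.sin(np.pi * x / 2.0),
         lambda x: 2e-3 * x * (2.0 - x)]

print(gbt_displacement(x=1.0, s=0.5, modes=modes, amplitudes=amps))
```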
References
Beam theory
Civil engineering
Structural engineering
Mechanical engineering
Aerospace engineering | Generalised beam theory | [
"Physics",
"Engineering"
] | 481 | [
"Structural engineering",
"Applied and interdisciplinary physics",
"Construction",
"Civil engineering",
"Mechanical engineering",
"Aerospace engineering"
] |
49,342,572 | https://en.wikipedia.org/wiki/Group%20actions%20in%20computational%20anatomy | Group actions are central to Riemannian geometry and to the definition of orbits (control theory).
The orbits of computational anatomy consist of anatomical shapes and medical images; the anatomical shapes are submanifolds of differential geometry consisting of points, curves, surfaces and subvolumes.
This generalized the ideas of the more familiar orbits of linear algebra which are linear vector spaces. Medical images are scalar and tensor images from medical imaging. The group actions are used to define models of human shape which accommodate variation. These orbits are deformable templates as originally formulated more abstractly in pattern theory.
The orbit model of computational anatomy
The central model of human anatomy in computational anatomy is a group action, a classic formulation from differential geometry. The orbit is called the space of shapes and forms. The space of shapes is denoted , with the group with law of composition ; the action of the group on shapes is denoted , where the action of the group is defined to satisfy
The orbit of the template becomes the space of all shapes, .
Several group actions in computational anatomy
The central group in CA, defined on volumes in , is the diffeomorphism group , whose elements are mappings with 3 components , with composition of functions as the group law and with inverse .
Submanifolds: organs, subcortical structures, charts, and immersions
For sub-manifolds , parametrized by a chart or immersion , the diffeomorphic action is the flow of the position.
Scalar images such as MRI, CT, PET
Most popular are scalar images, , with the action on the right via the inverse.
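A minimal sketch of this action on a discretised scalar image: the deformed image at each grid point is the original image evaluated at the preimage of that point under the diffeomorphism. The example uses scipy.ndimage.map_coordinates for the interpolation and a plain translation as a trivial stand-in for a diffeomorphism; names and numbers are illustrative only.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def act_on_image(image, phi_inv):
    """(phi . I)(x) = I(phi^{-1}(x)); phi_inv holds, for every output pixel,
    its preimage coordinates as an array of shape (2, H, W)."""
    return map_coordinates(image, phi_inv, order=1, mode="nearest")

H = W = 64
I = np.zeros((H, W)); I[24:40, 24:40] = 1.0              # a small square
ys, xs = np.mgrid[0:H, 0:W].astype(float)
phi_inv = np.stack([ys - 3.0, xs - 5.0])                  # inverse of "shift by (+3, +5)"
J = act_on_image(I, phi_inv)                              # the square appears shifted
```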
Oriented tangents on curves, eigenvectors of tensor matrices
Many different imaging modalities are being used with various actions. For images such that is a three-dimensional vector then
Tensor matrices
Cao et al. examined actions for mapping MRI images measured via diffusion tensor imaging and represented via their principal eigenvector.
For tensor fields, a positively oriented orthonormal basis of , termed frames, with vector cross product denoted , then
The Frénet frame of three orthonormal vectors, deforms as a tangent, deforms like a normal to the plane generated by , and . H is uniquely constrained by the basis being positive and orthonormal.
For non-negative symmetric matrices, an action would become .
For mapping MRI DTI images (tensors), the eigenvalues are preserved: the diffeomorphism rotates the eigenvectors and leaves the eigenvalues unchanged. Given eigenelements , the action becomes
Orientation Distribution Function and High Angular Resolution HARDI
Orientation distribution function (ODF) characterizes the angular profile of the diffusion probability density function of water molecules and can be reconstructed from High Angular Resolution Diffusion Imaging (HARDI). The ODF is a probability density function defined on a unit sphere, . In the field of information geometry, the space of ODF forms a Riemannian manifold with the Fisher-Rao metric. For the purpose of LDDMM ODF mapping, the square-root representation is chosen because it is one of the most efficient representations found to date as the various Riemannian operations, such as geodesics, exponential maps, and logarithm maps, are available in closed form. In the following, denote square-root ODF () as , where is non-negative to ensure uniqueness and .
Denote diffeomorphic transformation as . Group action of diffeomorphism on , , needs to guarantee the non-negativity and . Based on the derivation in, this group action is defined as
where is the Jacobian of .
References
Computational anatomy
Geometry
Fluid mechanics
Theory of probability distributions
Neural engineering
Biomedical engineering | Group actions in computational anatomy | [
"Physics",
"Mathematics",
"Engineering",
"Biology"
] | 751 | [
"Biological engineering",
"Group actions",
"Biomedical engineering",
"Civil engineering",
"Geometry",
"Fluid mechanics",
"Symmetry",
"Medical technology"
] |
45,079,558 | https://en.wikipedia.org/wiki/Eosinophil%20peroxidase | Eosinophil peroxidase is an enzyme found within the eosinophil granulocytes, innate immune cells of humans and mammals. This oxidoreductase protein is encoded by the gene EPX, expressed within these myeloid cells. EPO shares many similarities with its orthologous peroxidases, myeloperoxidase (MPO), lactoperoxidase (LPO), and thyroid peroxidase (TPO). The protein is concentrated in secretory granules within eosinophils. Eosinophil peroxidase is a heme peroxidase, its activities including the oxidation of halide ions to bacteriocidal reactive oxygen species, the cationic disruption of bacterial cell walls, and the post-translational modification of protein amino acid residues.
The major function of eosinophil peroxidase is to catalyze the formation of hypohalous acids from hydrogen peroxide and halide ions in solution. For example:
H2O2 + Br− → HOBr + H2O
Hypohalous acids formed from halides or pseudohalides are potent oxidizing agents. However, the role of eosinophil peroxidase seems to be to generate hypohalous acids largely from bromide and iodide rather than chloride, since the former are favored greatly over the latter. The enzyme myeloperoxidase is responsible for formation of most of the hypochlorous acid in the body, and eosinophil peroxidase is responsible for reactions involving bromide and iodide.
Gene
The open reading frame of human eosinophil peroxidase was found to have a length of 2,106 base pairs (bp). This comprises a 381-bp prosequence, a 333-bp sequence encoding the light chain and a 1,392-bp sequence encoding the heavy chain. In addition to these there is a 452-bp untranslated region at the 3' end containing the AATAAA polyadenylation signal.
The promoter sequence for human eosinophil peroxidase is an unusually strong promoter. All the major regulatory elements are located within 100 bp upstream of the gene.
The profile of EPX expression has been characterized and is available online via BioGPS. This dataset indicates that both in humans and mice, EPX is only expressed in the bone marrow. At this level, it is more than 30 times the average level of expression over all tissues in the body.
Protein
Molecular weight: 57 kDa (heavy chain), 11 kDa (light chain) (predicted); 52 kDa, 15 kDa (observed)
Isoelectric point pI = 10.31 (predicted); 7.62 (observed)
Electronic absorption maximum at 413 nm (Soret band)
Binds 1 equivalent of calcium
Glycosylated at four asparagine residues: 315, 351, 443, and 695
One active site per monomer.
The polypeptide chain is processed proteolytically into a heavy and a light chain during maturation. However, the two chains are still intimately connected not least of all by the covalently linked heme cofactor. The protein is produced on ribosomes embedded on the surface of the endoplasmic reticulum, since it must be ultimately localized to the granules.
The precursor protein goes through the following processing steps before becoming active:
ER signal sequence cleavage
propeptide cleavage
modification of heme cofactor
covalent linkage of heme cofactor.
Unlike MPO, heme in EPO is not linked via methionine. This affects the catalytic characteristics (see Active site).
Secondary structure
Eosinophil peroxidase is a predominately α-helical heme-containing enzyme. The core of the catalytic domain surrounding the active site consists of six α-helices, five from the heavy polypeptide chain and one from the light. The fold of the enzyme is known as the heme peroxidase fold, conserved among all members of this gene family. However, not all members possess peroxidase activity.
The calcium ion binding site has typical pentagonal bipyramidal geometry. It is bound within a loop of eight residues of the heavy chain. Ligands are provided by serine and threonine hydroxyl; backbone carbonyl; and carboxylic acid groups, one of which comes from the light polypeptide chain. The calcium site serves not only as a scaffold for protein folding, but also for proper association of the two chains. In fact, when the calcium ion is removed, the protein precipitates out of solution.
Tertiary structure
The protein contains only a single modular domain. In this respect it is primarily a metabolic enzyme or terminal effector; it has little role in cellular signalling pathways. The overall structure of the four mammalian heme peroxidases (MPO, LPO, EPO and TPO) is almost identical. However, MPO is unique in existing as a catalytic dimer bridged by a disulphide bond. One of the first aspects known of eosinophil peroxidase was that it was highly cationic, as indicated by its high isoelectric point (see Protein). Eosinophil peroxidase has not been characterized by X-ray crystallography. However, a direct correspondence between the absorption spectra of EPO, TPO and LPO as well as high sequence similarity allows us to compare the properties of the three. Myeloperoxidase's characteristics are somewhat different, owing to its multimerization state as well as its alternative heme linkage. Further, a homology model has been created for EPO based on the X-ray diffraction structure.
The fold is highly conserved and seems to be optimized for catalytic function. However, differences exist which unsurprisingly account for differences in substrate specificity among peroxidases. This furcation is commonplace in the study of protein evolution. Structural features which are highly necessary for function are subjected to strong conservation pressure, while regions distant from the active site undergo genetic drift. This can lead to the specialization or differentiation of function arising from modification of an enzymatic core moiety. For example, the closely related thyroid peroxidase catalyzes a specific oxidation reaction in the biosynthesis of a hormone, while other heme peroxidases fulfill roles in immune defense and redox signalling.
Quaternary structure
Human EPO is known to exist as a soluble monomer.
Active site
The active site of eosinophil peroxidase contains a single iron atom in tetradentate complexation with a protoporphyrin IX cofactor. It is notable in that this prosthetic group is linked covalently to the polypeptide via ester bonds. Asp232 and Glu380 of EPO are covalently linked through their terminal oxygen atoms to the modified side chains of the protoporphyrin. For comparison, in myeloperoxidase, there is a third attachment point, Met243 forming a sulphonium ion bridge with the pendant vinyl group on heme. This feature is absent in EPO and the corresponding residue is threonine.
The fifth ligand of iron is a conserved histidine residue, hydrogen bonded directly to an asparagine residue. These two critical residues ensure that iron has an appropriate Fe(III)/Fe(II) reduction potential for catalysis. The sixth ligands of iron are said to be located on the distal side of the heme group. These include a short water network comprising five molecules; stabilized by hydrogen bonding with histidine, glutamine, and arginine residues. The distal face is used for substrate binding and catalysis.
The crystal structures of MPO have been solved both in native states and with inhibitors bound and are deposited in the Protein Data Bank under the accession numbers 1CXP, 1D5L, 1D2V, and 1D7W.
Mechanism
The basic mechanism of heme peroxidases consists in using hydrogen peroxide to produce an activated form of the heme cofactor, in which iron takes the oxidation state +4. The activated oxygen may then be transferred to a substrate in order to convert it into a reactive oxygen species.
There are three distinct cycles which EPO can undergo. The first is the halogenation cycle:
[Fe(III)...Por] + H2O2 → [Fe(IV)=O...Por•+] + H2O
where Por denotes the heme cofactor, and • denotes a chemical radical. This activated state of heme is called compound I. In this state oxygen could be described as an oxyferryl species. It's thought that the pi-cation porphyrin radical undergoes reactivity at the methine bridges connecting the four rings. Compound I reduction in the presence of halides X− proceeds as follows:
[Fe(IV)=O...Por•+] + X− → [Fe(III)...Por] + HOX
Thus, compound I is reduced back to the enzyme's resting state, and halide ions bound in the distal cavity are oxidized to potent oxidizing agents.
However, there is a second cycle wherein compound I can proceed via two one-electron reduction steps to oxidize arbitrary substrates to their radical forms. This process operates on the majority of non-halide substrates. The first step is identical followed by:
[Fe(IV)=O...Por•+] + RH → [Fe(IV)=O...Por] + R• + H+
[Fe(IV)=O...Por] + RH → [Fe(III)...Por] + R• + H2O
The physiological implications of this second mechanism are important. Eosinophil peroxidase has been demonstrated to oxidize tyrosine residues on proteins, which has also been implicated in reactive oxygen signalling cascades.
The third and less relevant mechanism is the catalase activity of peroxidases. This mechanism appears to operate only in the absence of one-electron donors.
[Fe(IV)=O...Por•+] + H2O2 → [Fe(III)...Por] + O2 + H2O
Substrates
Eosinophil peroxidase catalyzes the haloperoxidase reaction. EPO can take chloride, bromide and iodide as substrates, as well as the pseudohalide thiocyanate (SCN−). However, the enzyme prefers bromide over chloride, iodide over bromide and thiocyanate over iodide, with regard to reaction velocities. In fact, only myeloperoxidase can oxidize chloride with any considerable rate. The rate of iodide catalysis is five orders of magnitude greater than the rate of chloride catalysis, for comparison. The mutant of MPO wherein heme-linked Met243 was mutated nonconservatively showed a lack of chlorination ability, implicating this residue or its peculiar functional group in substrate specificity.
Inhibitors
Cyanide binds very tightly to mammalian heme peroxidases. Tight binding directly to heme iron converts the protein to a low-spin species. Binding of cyanide requires the deprotonated form of a group with pKa of 4.0-4.3. This appears to be the distal histidine residue. The structure of the ternary complex of MPO, cyanide and bromide is thought to be a good model for the compound I-halide complex due to its similar geometry (cf. 1D7W).
The nitrite ion also binds tightly, forming low-spin heme.
Mutants
One of the first well-characterized mutants of EPX was a G→A transition resulting in a nonconservative mutation at the protein level.
Cytology
Large multicellular organisms engage multiple systems as defensive efforts against infecting bacteria or invading parasites. One strategy, which falls under the domain of cellular immunity, depends on the action of enzymes which catalyze the peroxidase reaction. Eosinophil peroxidase can be found in the primary (azurophilic) granules of human and mammalian leukocytes. Peroxidase localization in leukocytes has been studied throughout the 20th century using staining agents such as benzidine hydrochloride. Before the introduction of specific immunoreactive staining, such chemical indicators of enzymatic activity were commonplace.
Following the advent of the electron microscope, the ultrastructure of many cell types was vigorously investigated. Subsequently, eosinophil peroxidase was found to be localized to primary and secondary granules of the eosinophil.
Eosinophils form part of the myelocytic lineage, one of two major classes of bone-marrow-derived cell types (along with the lymphocytes) which circulate in the blood and lymph and play critical roles in immune responses. Eosinophil peroxidase is secreted by eosinophil cells into the tissue at the site of infection. Activation of cells in the face of an infection leads to the release of granule contents and externalization of protein and chemical agents from the cell.
Having diverged from myeloperoxidase and lactoperoxidase, these three enzymes now perform distinct but not non-overlapping roles; lactoperoxidase helps maintain the sterility of mammalian milk; myeloperoxidase and eosinophil peroxidase inhabit granules and play roles in host defense—an example of how the concept of a single chemical function can be harnessed in myriad ways in nature.
Deficiency and disease
Specific deficiency of eosinophil peroxidase without concomitant deficiency of myeloperoxidase is rare. In a clinical setting, deficiencies of leukocyte enzymes are conveniently studied by optical flow cytometry. Specific deficiencies of myeloperoxidase were known since the 1970s. Myeloperoxidase deficiency resulted in an absence of peroxidase staining in neutrophils but not eosinophils. Early studies on myeloperoxidase deficiency revealed that the most common disease variants were missense mutations, including that of the heme-linked methionine residue. This deficiency was often not inherited as a simple autosomal recessive trait but rather as a compound heterozygous mutation. It is thought that patients with myeloperoxidase deficiency have an increased incidence of malignant tumours. However, they do not have a significantly increased rate of infection, owing to redundancy in peroxidase-mediated immune mechanisms.
See also
Eosinophil
Major basic protein
Secretory pathway
Peroxiredoxin
Catalase
Reactive oxygen species
Antimicrobial peptides
Notes
References
External links
Eosinophil peroxidase on InterPro
Human myeloperoxidase on SCOP (Structural Classification of Proteins)
Source: The J. C. Segen Dictionary of Modern Medicine database.
Proteins
EC 1.11.1 | Eosinophil peroxidase | [
"Chemistry"
] | 3,268 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
45,079,606 | https://en.wikipedia.org/wiki/Nickel%20superoxide%20dismutase | Nickel superoxide dismutase (Ni-SOD) is a metalloenzyme that, like the other superoxide dismutases, protects cells from oxidative damage by catalyzing the disproportionation of the cytotoxic superoxide radical (O2−) to hydrogen peroxide and molecular oxygen. Superoxide is a reactive oxygen species that is produced in large amounts during photosynthesis and aerobic cellular respiration. The equation for the disproportionation of superoxide is shown below:
2 O2− + 2 H+ → O2 + H2O2
Ni-SOD was first isolated in 1996 from Streptomyces bacteria and is primarily found in prokaryotic organisms. It has since been observed in cyanobacteria and a number of other aquatic microbes.
Ni-SOD is homohexameric, meaning that it has six identical subunits. Each subunit has a single nickel-containing active site. The disproportionation mechanism involves a reduction-oxidation cycle in which a single electron transfer is catalyzed by the Ni2+/Ni3+ redox couple. Ni-SOD catalyzes the reaction at a rate close to the diffusion limit.
Structure
Ni-SOD is a globular protein and is shaped like a hollow sphere. It is homohexameric, meaning that it is made up of six identical subunits. Each subunit is a bundle of four right-handed α-helixes and has a molecular mass of 13.4 kDa (117 amino acids). The subunits align to give Ni-SOD a three-fold axis of symmetry. There are six nickel cofactors in total (one for each subunit). The subunits also have a hydrophobic core, which helps drive protein folding. The core is made up of 17 aliphatic amino acids.
Nickel binding hook
All of the amino acids involved in catalysis and nickel binding are located within the first six residues from the N-terminus of each subunit. This region has a curved and disordered shape in the absence of nickel, which gives it its nickname, “the nickel binding hook”. After nickel binds, this motif takes on a highly ordered structure and forms the enzyme's active site. The nickel binding hook is composed of the conserved sequence H2N-His-Cys-X-X-Pro-Cys-Gly-X-Tyr (where X could be any amino acid, i.e. the position isn't conserved). Proline-5 creates a sharp turn, giving this region a hook shape. His-1, Cys-2, Cys-6 and the N-terminus make up the ligand set for the nickel cofactors. After nickel binds the ordered structure of the nickel binding hook is stabilized by forming hydrogen bonds with amino acids at the interface of two different subunits.
Active site
The six active sites are located in the nickel binding hook of each subunit.
Ni-SOD is the only superoxide dismutase with ligands other than histidine, aspartate or water. The amino acid residues that define the coordination sphere of Ni are cysteine-2, cysteine-6 and histidine-1. The equatorial ligands include the thiolates of cysteine-2 and cysteine-6, as well as a deprotonated backbone amide nitrogen and the N-terminal amine. This is one of the few examples of a backbone amide group acting as a metal ligand in a protein.
The thiolate sulfur centers are susceptible to oxidative damage.
Coordination geometry of the nickel cofactor
In its oxidized (Ni(III)) state, the coordination geometry of nickel is square pyramidal. The binding of histidine-1 as an axial ligand in the reduced enzyme is uncertain. If histidine is not a ligand in the reduced enzyme, the nickel(II) cofactor would be square planar. However, the His-1 may remain in place throughout the redox cycle, meaning that the nickel cofactor would always have a square-pyramidal geometry. His-1 is held in place over the nickel cofactor in a tight hydrogen bonding network with a glutamic acid residue and an arginine residue.
Mechanism
The catalytic mechanism is analogous to the highly efficient "ping-pong" mechanism of copper-zinc superoxide dismutase, in which superoxide (O2−) alternately reduces and oxidizes the nickel cofactor. Two single-electron transfer steps are involved:
Ni3+ + O2− → Ni2+ + O2
Ni2+ + O2− + 2 H+ → Ni3+ + H2O2
Several aspects of the mechanism remain unclear. For example, both the H+ source and the transfer mechanism are still uncertain. H+ is most likely carried into the active site by the substrate, meaning that superoxide enters the enzyme in its protonated form (HO2). The disproportionation is most likely catalyzed in the second coordination sphere, but the mechanism of electron transfer is still debated. It is possible that a quantum tunnelling event is involved. Nickel superoxide dismutase is an extremely efficient enzyme, indicating that the redox mechanism is very fast. This means that large structural rearrangements or dramatic changes to the coordination sphere are unlikely to be involved in the catalytic mechanism.
Occurrence
Nickel superoxide dismutase is primarily found in bacteria. The only known example of a eukaryote expressing a nickel containing superoxide dismutase is in the cytoplasm of a number of green algae species. Ni-SOD was first isolated from Streptomyces bacteria, which are mostly found in soil. Streptomyces Ni-SOD has been the most heavily studied nickel containing SOD to date. These enzymes are now known to exist in a number of other prokaryotes, including cyanobacteria and several Actinomycetes species. Some of the Actinomycetes species that express nickel containing superoxide dismutatses are Micromonospora rosia, Microtetraspora glauca and Kitasatospora griseola. Ni-SOD hasn't been found in any archaea.
Regulation
Nickel is the primary regulatory factor in the expression of Ni-SOD. Increased nickel concentration in the cytosol increases the expression of sodN, the gene that encodes Ni-SOD in Streptomyces. In the absence of nickel sodN isn't transcribed, indicating that nickel positively regulates Ni-SOD expression. The folding of the enzyme is also dependent on the presence of nickel in the cytosol. As mentioned above, the nickel binding hook is disordered when nickel isn't present.
Nickel also acts as a negative regulator, repressing the transcription of other superoxide dismutases. In particular, expression of iron superoxide dismutase (Fe-SOD) is repressed by nickel in Streptomyces coelicolor. A quintessential example of this negative regulation is Nur, the nickel-binding repressor. When nickel is present, Nur binds to the promoter of sodF, stopping the production of iron superoxide dismutase.
Post-translational modification is also required to produce the active enzyme. In order to expose the nickel-binding hook, a leader sequence must be enzymatically cleaved off the N-terminus.
References
Metalloproteins
Nickel compounds | Nickel superoxide dismutase | [
"Chemistry"
] | 1,520 | [
"Metalloproteins",
"Bioinorganic chemistry"
] |
45,084,195 | https://en.wikipedia.org/wiki/Phil%20Muntz | Eric Phillip Muntz (May 18, 1934 – August 1, 2017) was a prominent American scientist and a former Canadian football player who played for the Calgary Stampeders in 1956 and Toronto Argonauts from 1957 to 1960. He previously played at the University of Toronto, where he received a B.S. degree in Aeronautical Engineering in 1956 and a PhD in 1961 specializing in Aerophysics.
From 1969, Muntz was a professor at the University of Southern California who made important contributions to the development of the electron beam fluorescence technique as well as its applications for high-speed flow measurements. He was an inventor on over 25 patents.
In 1993, Muntz was elected a member of US National Academy of Engineering with the citation "For technical and academic leadership in rarified-gas dynamics and non-equilibrium flow phenomena".
In the late 1990s and early 2000s, Muntz introduced and developed the concept of the Knudsen compressor, a multi-stage vacuum pump with no moving parts or fluids. He died on August 1, 2017.
References
1934 births
2017 deaths
Aerospace engineers
Calgary Stampeders players
Canadian football running backs
Members of the United States National Academy of Engineering
Players of Canadian football from Ontario
Sportspeople from Hamilton, Ontario
Toronto Varsity Blues football players
Toronto Argonauts players
University of Southern California faculty | Phil Muntz | [
"Engineering"
] | 263 | [
"Aerospace engineers",
"Aerospace engineering"
] |
22,003,136 | https://en.wikipedia.org/wiki/Quantum%20triviality | In a quantum field theory, charge screening can restrict the value of the observable "renormalized" charge of a classical theory. If the only resulting value of the renormalized charge is zero, the theory is said to be "trivial" or noninteracting. Thus, surprisingly, a classical theory that appears to describe interacting particles can, when realized as a quantum field theory, become a "trivial" theory of noninteracting free particles. This phenomenon is referred to as quantum triviality. Strong evidence supports the idea that a field theory involving only a scalar Higgs boson is trivial in four spacetime dimensions, but the situation for realistic models including other particles in addition to the Higgs boson is not known in general. Nevertheless, because the Higgs boson plays a central role in the Standard Model of particle physics, the question of triviality in Higgs models is of great importance.
This Higgs triviality is similar to the Landau pole problem in quantum electrodynamics, where this quantum theory may be inconsistent at very high momentum scales unless the renormalized charge is set to zero, i.e., unless the field theory has no interactions. The Landau pole question is generally considered to be of minor academic interest for quantum electrodynamics because of the inaccessibly large momentum scale at which the inconsistency appears. This is not however the case in theories that involve the elementary scalar Higgs boson, as the momentum scale at which a "trivial" theory exhibits inconsistencies may be accessible to present experimental efforts such as at the Large Hadron Collider (LHC) at CERN. In these Higgs theories, the interactions of the Higgs particle with itself are posited to generate the masses of the W and Z bosons, as well as lepton masses like those of the electron and muon. If realistic models of particle physics such as the Standard Model suffer from triviality issues, the idea of an elementary scalar Higgs particle may have to be modified or abandoned.
The situation becomes more complex in theories that involve other particles however. In fact, the addition of other particles can turn a trivial theory into a nontrivial one, at the cost of introducing constraints. Depending on the details of the theory, the Higgs mass can be bounded or even calculable. These quantum triviality constraints are in sharp contrast to the picture one derives at the classical level, where the Higgs mass is a free parameter. Quantum triviality can also lead to a calculable Higgs mass in asymptotic safety scenarios.
Triviality and the renormalization group
Modern considerations of triviality are usually formulated in terms of the real-space renormalization group, largely developed by Kenneth Wilson and others. Investigations of triviality are usually performed in the context of lattice gauge theory. A deeper understanding of the physical meaning and generalization of the renormalization process, which goes beyond the dilatation group of conventional renormalizable theories, came from condensed matter physics. Leo P. Kadanoff's paper in 1966 proposed the "block-spin" renormalization group. The blocking idea is a way to define the components of the theory at large distances as aggregates of components at shorter distances.
This approach covered the conceptual point and was given full computational substance in Wilson's extensive important contributions. The power of Wilson's ideas was demonstrated by a constructive iterative renormalization solution of a long-standing problem, the Kondo problem, in 1974, as well as the preceding seminal developments of his new method in the theory of second-order phase transitions and critical phenomena in 1971. He was awarded the Nobel prize for these decisive contributions in 1982.
In more technical terms, let us assume that we have a theory described by a certain function of the state variables and a certain set of coupling constants . This function may be a partition function, an action, a Hamiltonian, etc. It must contain the whole description of the physics of the system.
Now we consider a certain blocking transformation of the state variables ; the number of blocked variables must be lower than the number of original ones. Now let us try to rewrite the function only in terms of the blocked variables. If this is achievable by a certain change in the parameters, , then the theory is said to be renormalizable. The most important information in the RG flow is its fixed points. The possible macroscopic states of the system, at a large scale, are given by this set of fixed points. If these fixed points correspond to a free field theory, the theory is said to be trivial. Numerous fixed points appear in the study of lattice Higgs theories, but the nature of the quantum field theories associated with these remains an open question.
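As a concrete, textbook illustration of a blocking transformation and its flow toward a trivial (free) fixed point, the sketch below iterates the exact decimation recursion of the one-dimensional Ising model, tanh K' = tanh²K; this simple model is chosen only to make the idea of an RG flow explicit and is not one of the lattice Higgs theories discussed here.

```python
import math

def decimate(K, steps=8):
    """Trace out every other spin of a 1-D Ising chain; the nearest-neighbour
    coupling K = J/kT obeys the exact recursion tanh(K') = tanh(K)**2."""
    flow = [K]
    for _ in range(steps):
        K = math.atanh(math.tanh(K) ** 2)
        flow.append(K)
    return flow

print(decimate(1.5))   # any finite coupling flows towards K* = 0, a free (trivial) theory
print(decimate(0.3))   # weaker couplings reach the trivial fixed point even faster
```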
Historical background
The first evidence of possible triviality of quantum field theories was obtained by Landau, Abrikosov, and Khalatnikov by finding the following relation of the observable charge with the "bare" charge ,
where is the mass of the particle, and is the momentum cut-off. If is finite, then tends to zero in the limit of infinite cut-off .
In fact, the proper interpretation of Eq.1 consists in its inversion, so that (related to the length scale ) is chosen to give a correct value of ,
The growth of with invalidates Eqs. () and () in the region (since they were obtained for ) and the existence of the "Landau pole" in Eq.2 has no physical meaning.
The actual behavior of the charge as a function of the momentum scale is determined by the full Gell-Mann–Low equation
which gives Eqs.(),() if it is integrated under conditions for and for , when only the term with is retained in the right hand side.
The general behavior of relies on the appearance of the function . According to the classification by Bogoliubov and Shirkov, there are three qualitatively different situations:
The latter case corresponds to quantum triviality in the full theory (beyond its perturbative context), as can be seen by a reductio ad absurdum. Indeed, if is finite, the theory is internally inconsistent. The only way to avoid this is for to tend to infinity, which is possible only for .
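A toy numerical illustration of the logic discussed above, using a generic one-loop running coupling dg/d ln μ = β₂ g²; the coefficient and scales are arbitrary choices and are not meant to reproduce the article's equations. The first part shows the growth of the coupling up to a "Landau pole"; the second shows that demanding a finite bare coupling at an ever larger cut-off drives the renormalized coupling to zero.

```python
import numpy as np

beta2, g0 = 0.1, 0.5            # arbitrary one-loop coefficient and coupling at mu0

def g_running(mu_over_mu0):
    """Solution of dg/dln(mu) = beta2 * g**2 with g(mu0) = g0."""
    return g0 / (1.0 - beta2 * g0 * np.log(mu_over_mu0))

print(g_running(10.0), g_running(1e6))   # the coupling grows with the scale ...
print(np.exp(1.0 / (beta2 * g0)))        # ... and diverges at this "Landau pole" scale

# Holding the bare coupling fixed at the cut-off and sending the cut-off to infinity
# forces the renormalised coupling at any fixed scale to zero ("triviality").
g_bare = 1.0
for cutoff in (1e2, 1e6, 1e12, 1e24):
    print(cutoff, g_bare / (1.0 + beta2 * g_bare * np.log(cutoff)))
```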
Conclusions
As a result, the question of whether the Standard Model of particle physics is nontrivial remains a serious unresolved question. Theoretical proofs of triviality of the pure scalar field theory exist, but the situation for the full standard model is unknown. The implied constraints on the standard model have been discussed.
See also
Hierarchy problem
References
Renormalization group
Quantum mechanics
Mathematical physics
Physical phenomena | Quantum triviality | [
"Physics",
"Mathematics"
] | 1,376 | [
"Quantum field theory",
"Physical phenomena",
"Applied mathematics",
"Theoretical physics",
"Critical phenomena",
"Quantum mechanics",
"Renormalization group",
"Statistical mechanics",
"Mathematical physics"
] |
22,003,959 | https://en.wikipedia.org/wiki/Albatross%20expedition | The Albatross expedition (Albatrossexpeditionen) was a Swedish oceanographic expedition that between July 4, 1947, and October 3, 1948, sailed around the world during 15 months covering 45 000 nautical miles. The expedition is considered the second largest Swedish research expedition after the Vega expedition. The expedition was very successful, received international attention, and is considered one of the important steps in the history of oceanography.
The Albatross
The expedition was carried out on board the newly built training ship Albatross. The 70 meter long and 11 meter wide vessel was a combined motor and sailing vessel. The Boström line (Broströmskoncernen) had just built the student ship to train prospective ship's officers and this vessel with associated crew was lent to the expedition.
Since the Boström line lent the ship at almost no cost, the expedition could be financed and carried out with only private donations. The leader of the expedition was Swedish physicist and oceanographer Hans Pettersson.
The main task of the expedition was to take sediment cores up to 20 m long from the ocean floor. This was done using a newly developed corer, known as the piston sampler, developed by Börje Kullenberg. Until then the longest cores that could be taken were 2 m.
The expedition also carried out the first seismic reflection measurements of the sediment thickness, using sink bombs. The results of the sediment studies were ground-breaking since they revealed that the sediment thickness increased away from the mid-oceanic ridges, along with the sediment accumulation time. This was one of several pieces of evidence that eventually led to the acceptance of the theory of plate tectonics.
Apart from sediments, the expedition also studied biology. The first deep-sea trawling, at 7 600–7 900 m depth, revealed that those depths were not the lifeless zone they had previously been assumed to be.
Notes
Other sources
Hans Pettersson (1950) Med Albatross över havsdjupen (Stockholm: Bonnier)
Eric Olausson (1996) The Swedish Deep-Sea Expedition with the "Albatross" 1947-1948 (Novum, Grafiska AB)
Oceanography
Science and technology in Sweden
Oceanographic expeditions
Expeditions from Sweden | Albatross expedition | [
"Physics",
"Environmental_science"
] | 453 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
22,007,111 | https://en.wikipedia.org/wiki/Ashby%20technique | The Ashby technique is a method for determining the volume and life span of red blood cells in humans, first published by Dr. Winifred Ashby in 1919. The technique involves injection of compatible donor red blood cells of a different blood group into a recipient, followed by blood testing periodically afterwards. Differential agglutination of the red cells is then used to determine the number of remaining donor cells, allowing the survival rate to be determined. It does not involve radioisotope technology, and was the first technique to successfully establish the correct red blood cell life span. In particular, Type O blood is first transfused into Type A or B subjects. In subsequent blood samples, the patient's own A and B blood cells are removed by agglutination with either anti-A or anti-B serum. The number of remaining nonagglutinated Type O cells as a function of time defines the survival rate of blood cells. This technique was used extensively during World War II and shortly after but has more recently been replaced by techniques that label one's own blood, due to the dangers of using donor blood.
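As an illustration of the last step of the technique, the sketch below fits a straight line to the fraction of non-agglutinated (donor) cells remaining at each sampling time and reads off the point at which the fit reaches zero; for donor cells of uniformly distributed ages, a roughly linear decline is expected. The data points are invented for illustration only.

```python
import numpy as np

days     = np.array([ 10,  30,  50,  70,  90, 110])          # time after transfusion
fraction = np.array([0.92, 0.76, 0.60, 0.41, 0.26, 0.09])    # donor cells still circulating

slope, intercept = np.polyfit(days, fraction, 1)   # least-squares straight line
lifespan_days = -intercept / slope                 # day at which the fit reaches zero
print(round(lifespan_days))                        # roughly 120 days for these numbers
```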
References
Further reading
Hematology
Blood tests | Ashby technique | [
"Chemistry"
] | 236 | [
"Blood tests",
"Chemical pathology"
] |
36,216,833 | https://en.wikipedia.org/wiki/MRI%20RF%20shielding | RF shielding for MRI rooms is necessary to prevent noise of radio frequency from entering into the MRI scanner and distorting the image. The three main types of shielding used for MRIs are copper, steel, and aluminum. Copper is generally considered the best shielding for MRI rooms.
RF shielding should not be confused with magnetic shielding, which is used to prevent the magnetic field of the MRI magnet from interfering with pacemakers and other equipment outside of the MRI room.
After the MRI room has been completely shielded, all utility services such as electrical for lights, air conditioning, fire sprinklers and other penetrations into the room must be routed through specialized filters provided by the RF shielding vendor.
References
Electromagnetism | MRI RF shielding | [
"Physics",
"Chemistry",
"Materials_science"
] | 153 | [
"Electromagnetism",
"Physical phenomena",
"Nuclear magnetic resonance",
"Materials science stubs",
"Nuclear chemistry stubs",
"Fundamental interactions",
"Nuclear magnetic resonance stubs",
"Electromagnetism stubs"
] |
36,218,848 | https://en.wikipedia.org/wiki/MARTINI | Martini is a coarse-grained (CG) force field developed by Marrink and coworkers at the University of Groningen, initially developed in 2004 for molecular dynamics simulation of lipids, later (2007) extended to various other molecules. The force field applies a mapping of four heavy atoms to one CG interaction site and is parametrized with the aim of reproducing thermodynamic properties.
In 2021, a new version of the force field has been published, dubbed Martini 3.
Overview
For the Martini force field, 4 bead categories have been defined: Q (charged), P (polar), N (nonpolar), and C (apolar). These bead categories are in turn split into 4 or 5 different levels, giving a total of 20 bead types. For the interactions between the beads, 10 different interaction levels are defined (O-IX). The beads can be used at normal size (4:1 mapping), S-size (small, 3:1 mapping) or T-size (tiny, 2:1 mapping). The S-particles are mainly used in ring structures, whereas the T-particles are currently used in nucleic acids only. Bonded interactions (bonds, angles, dihedrals, and impropers) are derived from atomistic simulations of crystal structures.
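A minimal sketch of what a Martini-style 4:1 mapping amounts to in practice: groups of four heavy atoms are each replaced by a single coarse-grained bead, placed here at the group's centre of mass. The coordinates, masses and grouping are invented for illustration; real Martini topologies define the groupings per molecule type.

```python
import numpy as np

def map_to_beads(positions, masses, groups):
    """positions: (N, 3) heavy-atom coordinates; groups: one index list per CG bead."""
    beads = []
    for idx in groups:
        m = masses[idx]
        beads.append((m[:, None] * positions[idx]).sum(axis=0) / m.sum())  # centre of mass
    return np.array(beads)

pos    = np.random.default_rng(1).random((8, 3))       # 8 heavy atoms (toy coordinates)
mass   = np.array([12.0, 12.0, 16.0, 14.0, 12.0, 12.0, 12.0, 16.0])
groups = [[0, 1, 2, 3], [4, 5, 6, 7]]                   # two beads, 4:1 mapping
print(map_to_beads(pos, mass, groups))                  # positions of the two CG beads
```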
Use
The Martini force field has become one of the most used coarse grained force fields in the field of molecular dynamics simulations for biomolecules. The original 2004 and 2007 papers have been cited 1850 and 3400 times, respectively. The force field has been implemented in three major simulation codes: GROningen MAchine for Chemical Simulations (GROMACS), GROningen MOlecular Simulation (GROMOS), and Nanoscale Molecular Dynamics (NAMD). Notable successes are simulations of the clustering behavior of syntaxin-1A, the simulations of the opening of mechanosensitive channels (MscL) and the simulation of the domain partitioning of membrane peptides.
Parameter sets
Lipids
The initial papers contained parameters for water, simple alkanes, organic solvents, surfactants, a wide range of lipids and cholesterol. They semiquantitatively reproduce the phase behavior of bilayers with other bilayer properties, and more complex bilayer behavior.
Proteins
Compatible parameters for proteins were introduced by Monticelli et al. Secondary structure elements, like alpha helices and beta sheets (β-sheets), are constrained. Martini proteins are often simulated in combination with an elastic network, such as Elnedyn, to maintain the overall structure. However, the use of the elastic network restricts the use of the Martini force field for the study of large conformational changes (e.g. folding). The GōMartini approach introduced by Poma et al. removes this limitation.
Carbohydrates
Compatible parameters were released in 2009.
Nucleic acids
Compatible parameters were released for DNA in 2015 and RNA in 2017.
Other
Parameters for different other molecules, including carbon nanoparticles, ionic liquids, and a number of polymers, are available from the Martini website.
See also
GROMACS
VOTCA
Comparison of software for molecular mechanics modeling
Comparison of force field implementations
References
External links
Force fields (chemistry) | MARTINI | [
"Chemistry"
] | 669 | [
"Molecular dynamics",
"Computational chemistry",
"Force fields (chemistry)"
] |
33,581,389 | https://en.wikipedia.org/wiki/CLs%20method%20%28particle%20physics%29 | In particle physics, CLs represents a statistical method for setting upper limits (also called exclusion limits) on model parameters, a particular form of interval estimation used for parameters that can take only non-negative values. Although CLs are said to refer to Confidence Levels, "The method's name is ... misleading, as the CLs exclusion region is not a confidence interval." It was first introduced by physicists working at the LEP experiment at CERN and has since been used by many high energy physics experiments. It is a frequentist method in the sense that the properties of the limit are defined by means of error probabilities, however it differs from standard confidence intervals in that the stated confidence level of the interval is not equal to its coverage probability. The reason for this deviation is that standard upper limits based on a most powerful test necessarily produce empty intervals with some fixed probability when the parameter value is zero, and this property is considered undesirable by most physicists and statisticians.
Upper limits derived with the CLs method always contain the zero value of the parameter and hence the coverage probability at this point is always 100%. The definition of CLs does not follow from any precise theoretical framework of statistical inference and is therefore described sometimes as ad hoc. It has however close resemblance to concepts of statistical evidence
proposed by the statistician Allan Birnbaum.
Definition
Let X be a random sample from a probability distribution with a real non-negative parameter . A CLs upper limit for the parameter θ, with confidence level , is a statistic (i.e., observable random variable) which has the property:
The inequality is used in the definition to account for cases where the distribution of X is discrete and an equality can not be achieved precisely. If the distribution of X is continuous then this should be replaced by an equality. Note that the definition implies that the coverage probability is always larger than .
An equivalent definition can be made by considering a hypothesis test of the null hypothesis against the alternative . Then the numerator in (), when evaluated at , corresponds to the type-I error probability () of the test (i.e., is rejected when ) and the denominator to the power (). The criterion for rejecting thus requires that the ratio be smaller than . This can be interpreted intuitively as saying that is excluded because it is less likely to observe such an extreme outcome as X when is true than it is when the alternative is true.
The calculation of the upper limit is usually done by constructing a test statistic and finding the value of for which
where is the observed outcome of the experiment.
Usage in high energy physics
Upper limits based on the CLs method were used in numerous publications of experimental results obtained at particle accelerator experiments such as LEP, the Tevatron and the LHC, most notable in the searches for new particles.
Origin
The original motivation for CLs was based on a conditional probability calculation suggested by physicist G. Zech for an event counting experiment. Suppose an experiment consists of measuring n events coming from signal and background processes, both described by Poisson distributions with respective rates s and b, namely n ~ Poiss(s + b). b is assumed to be known and s is the parameter to be estimated by the experiment. The standard procedure for setting an upper limit on s given an experimental outcome n_obs consists of excluding values of s for which P(n ≤ n_obs | s + b) ≤ α, which guarantees at least 1 − α coverage. Consider, for example, a case where b = 3 and n_obs = 0 events are observed; then one finds that even s = 0 satisfies the exclusion condition (since P(n ≤ 0 | b) = e^{−3} ≈ 0.05), so that all possible values s ≥ 0 are excluded at 95% confidence level. Such a result is difficult to interpret because the experiment cannot essentially distinguish very small values of s from the background-only hypothesis, and thus declaring that such small values are excluded (in favor of the background-only hypothesis) seems inappropriate. To overcome this difficulty Zech suggested conditioning the probability that n ≤ n_obs on the observation that n_b ≤ n_obs, where n_b is the (unmeasurable) number of background events. The reasoning behind this is that when n_b is small the procedure is more likely to produce an error (i.e., an interval that does not cover the true value) than when n_b is large, and the distribution of n_b itself is independent of s. That is, not the overall error probability should be reported but the conditional probability given the knowledge one has on the number of background events in the sample. This conditional probability is

P(n \le n_{obs} \mid n_b \le n_{obs}) = \frac{P(n \le n_{obs})}{P(n_b \le n_{obs})} = \frac{P(n \le n_{obs} \mid s+b)}{P(n \le n_{obs} \mid b)},

which corresponds to the above definition of CLs. The first equality just uses the definition of conditional probability, and the second equality comes from the fact that n ≤ n_obs implies n_b ≤ n_obs, and the number of background events is by definition independent of the signal strength.
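As a rough numerical illustration of this counting-experiment argument (not part of the original discussion; the values b = 3 and n_obs = 0 follow Zech's example above, while the scan range and step size are arbitrary choices), the naive exclusion probability and the CLs ratio could be compared as follows:

```python
from scipy.stats import poisson

def p_naive(s, b, n_obs):
    """Naive exclusion probability P(n <= n_obs) for n ~ Poisson(s + b)."""
    return poisson.cdf(n_obs, s + b)

def cls(s, b, n_obs):
    """CLs ratio: P(n <= n_obs | s + b) / P(n <= n_obs | b)."""
    return p_naive(s, b, n_obs) / poisson.cdf(n_obs, b)

def cls_upper_limit(b, n_obs, alpha=0.05, s_max=50.0, step=1e-3):
    """Smallest s with CLs <= alpha, found by a simple scan."""
    s = 0.0
    while s < s_max:
        if cls(s, b, n_obs) <= alpha:
            return s
        s += step
    return float("inf")

b, n_obs = 3.0, 0
print(p_naive(0.0, b, n_obs))     # ~0.0498 < 0.05: the naive limit excludes even s = 0
print(cls(0.0, b, n_obs))         # 1.0: the CLs criterion never excludes s = 0
print(cls_upper_limit(b, n_obs))  # ~3.0 (= ln 20): the 95% CLs upper limit on s
```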
Generalization of the conditional argument
Zech's conditional argument can be formally extended to the general case. Suppose that q(X) is a test statistic from which the confidence interval is derived, and let

p_\theta = P(q(X) \ge q_{obs} \mid \theta),

where q_obs is the outcome observed by the experiment. Then p_θ can be regarded as an unmeasurable (since θ is unknown) random variable, whose distribution is uniform between 0 and 1 independent of θ. If the test is unbiased then the observed outcome q_obs implies
from which, similarly to conditioning on n_b in the previous case, one obtains the CLs ratio p_θ / p_0.
Relation to foundational principles
The arguments given above can be viewed as following the spirit of the conditionality principle of statistical inference, although they express a more generalized notion of conditionality which does not require the existence of an ancillary statistic. The conditionality principle, however, already in its original more restricted version, formally implies the likelihood principle, a result famously shown by Birnbaum. CLs does not obey the likelihood principle, and thus such considerations may only be used to suggest plausibility, but not theoretical completeness from the foundational point of view. (The same, however, can be said of any frequentist method if the conditionality principle is regarded as necessary).
Birnbaum himself suggested in his 1962 paper that the CLs ratio α/(1 − β) should be used as a measure of the strength of statistical evidence provided by significance tests, rather than α alone. This followed from a simple application of the likelihood principle: if the outcome of an experiment is to be reported only in the form of an "accept"/"reject" decision, then the overall procedure is equivalent to an experiment that has only two possible outcomes, with probabilities α and 1 − α under H_0, and 1 − β and β under H_1. The likelihood ratio associated with the outcome "reject H_0" is therefore (1 − β)/α and hence should determine the evidential interpretation of this result. (Since, for a test of two simple hypotheses, the likelihood ratio is a compact representation of the likelihood function). On the other hand, if the likelihood principle is to be followed consistently, then the likelihood ratio of the original outcome X should be used and not that of the final "accept"/"reject" decision, making the basis of such an interpretation questionable. Birnbaum later described this as having "at most heuristic, but not substantial, value for evidential interpretation".
A more direct approach leading to a similar conclusion can be found in Birnbaum's formulation of the Confidence principle, which, unlike the more common version, refers to error probabilities of both kinds. This is stated as follows:
"A concept of statistical evidence is not plausible unless it finds 'strong evidence for as against ' with small probability when is true, and with much larger probability when is true."
Such a definition of confidence can naturally seem to be satisfied by the definition of CLs. It remains true that both this and the more common (as associated with the Neyman–Pearson theory) versions of the confidence principle are incompatible with the likelihood principle, and therefore no frequentist method can be regarded as a truly complete solution to the problems raised by considering conditional properties of confidence intervals.
Calculation in the large sample limit
If certain regularity conditions are met, then a general likelihood function will become a Gaussian function in the large sample limit. In such a case the CLs upper limit at confidence level 1 − α (derived from the uniformly most powerful test) is given by

\theta_{up} = \hat\theta + \sigma\,\Phi^{-1}\!\left(1 - \alpha\,\Phi(\hat\theta/\sigma)\right),

where Φ is the standard normal cumulative distribution, θ̂ is the maximum likelihood estimator of θ and σ is its standard deviation; the latter might be estimated from the inverse of the Fisher information matrix or by using the "Asimov" data set. This result happens to be equivalent to a Bayesian credible interval if a uniform prior for θ is used.
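A minimal numerical sketch of this large-sample formula (assuming the expression above; the values of θ̂, σ and α below are purely illustrative) could look like:

```python
from scipy.stats import norm

def cls_upper_limit_asymptotic(theta_hat, sigma, alpha=0.05):
    """Gaussian-limit CLs upper limit:
    theta_up = theta_hat + sigma * Phi^{-1}(1 - alpha * Phi(theta_hat / sigma))."""
    return theta_hat + sigma * norm.ppf(1.0 - alpha * norm.cdf(theta_hat / sigma))

# A measurement compatible with zero signal gives the familiar ~1.96 sigma limit,
# while a downward fluctuation (theta_hat < 0) still yields a positive limit.
print(cls_upper_limit_asymptotic(theta_hat=0.0, sigma=1.0))    # ~1.96
print(cls_upper_limit_asymptotic(theta_hat=-1.0, sigma=1.0))   # ~1.41, not negative
```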
References
Further reading
External links
The Particle Data Group (PDG) review of statistical methods
Statistical intervals
Experimental particle physics | CLs method (particle physics) | [
"Physics"
] | 1,680 | [
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
33,584,042 | https://en.wikipedia.org/wiki/Novikov%E2%80%93Veselov%20equation | In mathematics, the Novikov–Veselov equation (or Veselov–Novikov equation) is a natural (2+1)-dimensional analogue of the Korteweg–de Vries (KdV) equation. Unlike another (2+1)-dimensional analogue of KdV, the Kadomtsev–Petviashvili equation, it is integrable via the inverse scattering transform for the 2-dimensional stationary Schrödinger equation. Similarly, the Korteweg–de Vries equation is integrable via the inverse scattering transform for the 1-dimensional Schrödinger equation. The equation is named after S.P. Novikov and A.P. Veselov, who published it in 1984.
Definition
The Novikov–Veselov equation is most commonly written as

\partial_t v = 4\,\mathrm{Re}\left(4\partial_z^3 v + \partial_z(vw) - E\,\partial_z w\right), \qquad \partial_{\bar z} w = -3\,\partial_z v,

where v = v(x, y, t), w = w(x, y, t) and the following standard notation of complex analysis is used: Re is the real part, \partial_z = \tfrac{1}{2}(\partial_x - i\partial_y) and \partial_{\bar z} = \tfrac{1}{2}(\partial_x + i\partial_y).
The function v is generally considered to be real-valued. The function w is an auxiliary function, defined via v up to a holomorphic summand; E is a real parameter corresponding to the energy level of the related 2-dimensional Schrödinger equation

L\psi = E\psi, \qquad L = -\Delta + v(x, y, t).
Relation to other nonlinear integrable equations
When the functions v and w in the Novikov–Veselov equation depend only on one spatial variable, e.g. v = v(x, t), w = w(x, t), then the equation is reduced to the classical Korteweg–de Vries equation. If in the Novikov–Veselov equation the energy parameter is taken to the limit E → ±∞, then the equation reduces to another (2+1)-dimensional analogue of the KdV equation, the Kadomtsev–Petviashvili equation (to KP-I and KP-II, respectively).
History
The inverse scattering transform method for solving nonlinear partial differential equations (PDEs) begins with the discovery of C.S. Gardner, J.M. Greene, M.D. Kruskal and R.M. Miura, who demonstrated that the Korteweg–de Vries equation can be integrated via the inverse scattering problem for the 1-dimensional stationary Schrödinger equation. The algebraic nature of this discovery was revealed by Lax, who showed that the Korteweg–de Vries equation can be written in the following operator form (the so-called Lax pair):

\frac{\partial L}{\partial t} = [A, L],

where L = -\partial_x^2 + u is the 1-dimensional Schrödinger operator, A is an auxiliary third-order differential operator, and [A, L] = AL - LA is a commutator. This equation is a compatibility condition for the equations

L\psi = \lambda\psi, \qquad \frac{\partial\psi}{\partial t} = A\psi

for all values of the spectral parameter λ.
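As a concrete check of this compatibility statement in the one-dimensional KdV case (the specific operators L = −∂_x² + u and A = −4∂_x³ + 6u∂_x + 3u_x are a standard choice assumed here, since they are not written out above), one can verify symbolically that the commutator [A, L] reduces to a pure multiplication operator, so that the Lax equation ∂_t L = [A, L] is exactly the KdV equation u_t = 6uu_x − u_xxx:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)   # KdV potential u(x) at a fixed time
f = sp.Function('f')(x)   # arbitrary test function

D = lambda g, n=1: sp.diff(g, x, n)

# Lax operators acting on a function g:
L = lambda g: -D(g, 2) + u * g                              # L = -d^2/dx^2 + u
A = lambda g: -4 * D(g, 3) + 6 * u * D(g) + 3 * D(u) * g    # A = -4 d^3/dx^3 + 6 u d/dx + 3 u_x

# Commutator [A, L] applied to the test function f.
comm = sp.expand(A(L(f)) - L(A(f)))

# All derivatives of f cancel: [A, L] acts as multiplication by 6*u*u_x - u_xxx,
# so dL/dt = [A, L] reproduces u_t = 6*u*u_x - u_xxx.
kdv_rhs = 6 * u * D(u) - D(u, 3)
print(sp.simplify(comm - kdv_rhs * f))   # -> 0
```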
Afterwards, a Lax-pair representation of this form was found for many other physically interesting nonlinear equations, like the Kadomtsev–Petviashvili equation, sine-Gordon equation, nonlinear Schrödinger equation and others. This led to an extensive development of the theory of the inverse scattering transform for integrating nonlinear partial differential equations.
When trying to generalize the Lax representation to two dimensions, one obtains that it holds only for trivial cases (the operators involved have constant coefficients, or one of them is a differential operator of order not larger than 1 with respect to one of the variables). However, S.V. Manakov showed that in the two-dimensional case it is more correct to consider the following representation (further called the Manakov L-A-B triple):

\frac{\partial L}{\partial t} = [A, L] + BL,

or, equivalently, to search for the condition of compatibility of the equations

L\psi = E\psi, \qquad \frac{\partial\psi}{\partial t} = A\psi

at one fixed value of the parameter E.
A representation of this form for the 2-dimensional Schrödinger operator was found by S.P. Novikov and A.P. Veselov in 1984. The authors also constructed a hierarchy of evolution equations integrable via the inverse scattering transform for the 2-dimensional Schrödinger equation at fixed energy. This set of evolution equations (which is sometimes called the hierarchy of the Novikov–Veselov equations) contains, in particular, the Novikov–Veselov equation itself.
Physical applications
The dispersionless version of the Novikov–Veselov equation was derived in a model of nonlinear geometrical optics.
Behavior of solutions
The behavior of solutions to the Novikov–Veselov equation depends essentially on the regularity of the scattering data for this solution. If the scattering data are regular, then the solution vanishes uniformly with time. If the scattering data have singularities, then the solution may develop solitons. For example, the scattering data of the Grinevich–Zakharov soliton solutions of the Novikov–Veselov equation have singular points.
Solitons are traditionally a key object of study in the theory of nonlinear integrable equations. The solitons of the Novikov–Veselov equation at positive energy are transparent potentials, similarly to the one-dimensional case (in which solitons are reflectionless potentials). However, unlike the one-dimensional case where there exist well-known exponentially decaying solitons, the Novikov–Veselov equation (at least at non-zero energy) does not possess exponentially localized solitons.
References
(English translation: Russian Math. Surveys 31 (1976), no. 5, 245–246.)
External links
The inverse scattering method for the Novikov–Veselov equation
Partial differential equations
Exactly solvable models
Integrable systems
Solitons | Novikov–Veselov equation | [
"Physics"
] | 1,049 | [
"Integrable systems",
"Theoretical physics"
] |
26,356,935 | https://en.wikipedia.org/wiki/Energy%20operator | In quantum mechanics, energy is defined in terms of the energy operator, acting on the wave function of the system as a consequence of time translation symmetry.
Definition
It is given by:

\hat{E} = i\hbar\frac{\partial}{\partial t}.
It acts on the wave function Ψ (the probability amplitude for different configurations of the system).
Application
The energy operator corresponds to the full energy of a system. The Schrödinger equation describes the space- and time-dependence of the slowly changing (non-relativistic) wave function of a quantum system. The solution of the Schrödinger equation for a bound system is discrete (a set of permitted states, each characterized by an energy level), which results in the concept of quanta.
Schrödinger equation
Using the energy operator in the Schrödinger equation:

i\hbar\frac{\partial}{\partial t}\Psi(\mathbf{r}, t) = \hat{H}\Psi(\mathbf{r}, t)

one obtains:

\hat{E}\Psi(\mathbf{r}, t) = \hat{H}\Psi(\mathbf{r}, t),

where i is the imaginary unit, ħ is the reduced Planck constant, and \hat{H} is the Hamiltonian operator expressed as:

\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r}, t).

From this equation the equality \langle\hat{E}\rangle = \langle\hat{H}\rangle can be made, where \langle\hat{E}\rangle is the expectation value of energy.
Properties
It can be shown that the expectation value of energy will always be greater than or equal to the minimum potential of the system.
Consider computing the expectation value of kinetic energy:

\langle\hat{T}\rangle = -\frac{\hbar^2}{2m}\int \psi^*\,\frac{\partial^2\psi}{\partial x^2}\,dx = \frac{\hbar^2}{2m}\int \left|\frac{\partial\psi}{\partial x}\right|^2 dx \ge 0,

where the second equality follows from integration by parts, the boundary term vanishing for a normalizable wavefunction. Hence the expectation value of kinetic energy is always non-negative. This result can be used with the linearity condition to calculate the expectation value of the total energy, which is given for a normalized wavefunction as:

\langle\hat{E}\rangle = \langle\hat{T}\rangle + \langle\hat{V}\rangle \ge 0 + V_{\min} = V_{\min},

which completes the proof. Similarly, the same condition can be generalized to any number of dimensions.
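The bound can also be checked numerically. In the sketch below (an illustration only: the Gaussian trial wavefunction, the harmonic potential and the grid are arbitrary choices, with ħ = m = 1), the kinetic-energy expectation is computed from |ψ′|² and the total energy compared with the minimum of the potential:

```python
import numpy as np

# Units with hbar = m = 1; arbitrary (real) Gaussian trial wavefunction in a harmonic well.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

psi = np.exp(-(x - 1.0) ** 2)                   # unnormalised trial wavefunction
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalise on the grid

V = 0.5 * x ** 2                                # harmonic potential, min V = 0

dpsi = np.gradient(psi, dx)
T = 0.5 * np.sum(np.abs(dpsi) ** 2) * dx        # <T> = (1/2) * integral |psi'|^2 dx >= 0
V_mean = np.sum(V * np.abs(psi) ** 2) * dx      # <V>
E = T + V_mean

print(T > 0)             # True: the kinetic-energy expectation is non-negative
print(E >= V.min())      # True: <E> is bounded below by the minimum of the potential
```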
Constant energy
Working from the definition, a partial solution for a wavefunction of a particle with a constant energy can be constructed. If the wavefunction is assumed to be separable, then the time dependence can be stated as e^{-iEt/\hbar}, where E is the constant energy. In full,

\Psi(\mathbf{r}, t) = \psi(\mathbf{r})\,e^{-iEt/\hbar},

where \psi(\mathbf{r}) is the partial solution of the wavefunction dependent on position. Applying the energy operator, we have

\hat{E}\Psi(\mathbf{r}, t) = i\hbar\frac{\partial}{\partial t}\left[\psi(\mathbf{r})\,e^{-iEt/\hbar}\right] = E\,\psi(\mathbf{r})\,e^{-iEt/\hbar} = E\,\Psi(\mathbf{r}, t).

This is also known as the stationary state, and can be used to analyse the time-independent Schrödinger equation:

\hat{H}\,\psi(\mathbf{r}) = E\,\psi(\mathbf{r}),

where E is an eigenvalue of energy.
Klein–Gordon equation
The relativistic mass–energy relation:

E^2 = (pc)^2 + (mc^2)^2,

where again E = total energy, p = total 3-momentum of the particle, m = invariant mass, and c = speed of light, can similarly yield the Klein–Gordon equation:

\hat{E}^2\Psi = c^2\hat{p}^2\Psi + (mc^2)^2\Psi,

where \hat{p} is the momentum operator. That is:

-\hbar^2\frac{\partial^2\Psi}{\partial t^2} = -c^2\hbar^2\nabla^2\Psi + (mc^2)^2\Psi.
Derivation
The energy operator is easily derived by using the free particle wave function (plane wave solution to Schrödinger's equation). Starting in one dimension, the wave function is

\Psi = e^{i(kx - \omega t)}.

The time derivative of Ψ is

\frac{\partial\Psi}{\partial t} = -i\omega\,e^{i(kx - \omega t)} = -i\omega\Psi.

By the De Broglie relation:

E = \hbar\omega,

we have

\frac{\partial\Psi}{\partial t} = -i\frac{E}{\hbar}\Psi.

Re-arranging the equation leads to

E\Psi = i\hbar\frac{\partial\Psi}{\partial t},

where the energy factor E is a scalar value, the energy the particle has and the value that is measured. The partial derivative is a linear operator, so this expression is the operator for energy:

\hat{E} = i\hbar\frac{\partial}{\partial t}.

It can be concluded that the scalar E is the eigenvalue of the operator, while \hat{E} is the operator. Summarizing these results:

\hat{E}\Psi = i\hbar\frac{\partial}{\partial t}\Psi = E\Psi.
For a 3-d plane wave

\Psi = e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)},

the derivation is exactly identical, as no change is made to the term including time and therefore the time derivative. Since the operator is linear, these results are valid for any linear combination of plane waves, and so they can act on any wave function without affecting the properties of the wave function or operators. Hence this must be true for any wave function. It turns out to work even in relativistic quantum mechanics, such as the Klein–Gordon equation above.
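A symbolic version of the same derivation (a sketch only; the De Broglie relation E = ħω is taken as given, and the symbols below are defined ad hoc for the example) confirms that iħ ∂/∂t acting on a plane wave simply multiplies it by ħω, in one and in three dimensions:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t', real=True)
hbar, omega = sp.symbols('hbar omega', positive=True)
kx, ky, kz = sp.symbols('k_x k_y k_z', real=True)

# One-dimensional plane wave Psi = exp(i(kx - omega*t))
Psi1 = sp.exp(sp.I * (kx * x - omega * t))
print(sp.simplify(sp.I * hbar * sp.diff(Psi1, t) / Psi1))   # -> hbar*omega, i.e. E

# Three-dimensional plane wave: only the time dependence matters for the energy operator
Psi3 = sp.exp(sp.I * (kx * x + ky * y + kz * z - omega * t))
print(sp.simplify(sp.I * hbar * sp.diff(Psi3, t) / Psi3))   # -> hbar*omega again
```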
See also
Time translation symmetry
Planck constant
Schrödinger equation
Momentum operator
Hamiltonian (quantum mechanics)
Conservation of energy
Complex number
Stationary state
References
Energy
Partial differential equations
Quantum mechanics | Energy operator | [
"Physics"
] | 735 | [
"Physical quantities",
"Quantum mechanics",
"Energy (physics)",
"Quantum operators",
"Energy"
] |